Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-02-12 Thread Emilien Macchi
On Feb 12, 2016 11:06 PM, "Spencer Krum"  wrote:
>
> The module would also be welcome under the voxpupuli[0] namespace on
> github. We currently have a puppet-corosync[1] module, and there is some
> overlap there, but a pure pacemaker module would be a welcome addition.
>
> I'm not sure which I would prefer, just that VP is an option. For
> greater openstack integration, gerrit is the way to go. For greater
> participation from the wider puppet community, github is the way to go.
> Voxpupuli provides testing and releasing infrastructure.

The thing is, we might want to gate it on tripleo since it's the first
consumer right now. Though I agree VP would be a good place too, to attract
more puppet users.

Dilemma!
Maybe we could start using VP, with good testing, and see how it works.

We can iterate later if needed. Thoughts?

>
> [0] https://voxpupuli.org/
> [1] https://github.com/voxpupuli/puppet-corosync
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>
> On Fri, Feb 12, 2016, at 09:44 AM, Emilien Macchi wrote:
> > Please look and vote:
> > https://review.openstack.org/279698
> >
> >
> > Thanks for your feedback!
> >
> > On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
> > > I like the idea of moving it to use the OpenStack infrastructure.
> > >
> > > On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec wrote:
> > >
> > > On 02/09/2016 08:05 AM, Emilien Macchi wrote:
> > > > Hi,
> > > >
> > > > TripleO is currently using puppet-pacemaker [1], which is a module
> > > > hosted & managed on GitHub.
> > > > The module was created and is mainly maintained by Red Hat. It tends to
> > > > break TripleO quite often since we don't have any gate.
> > > >
> > > > I propose to move the module to OpenStack so we'll benefit from
> > > > OpenStack Infra (Gerrit, releases, gating, etc.). Another idea would be
> > > > to gate the module with TripleO HA jobs.
> > > >
> > > > The question is, under which umbrella should we put the module?
> > > > Puppet? TripleO?
> > > >
> > > > Or no umbrella, like puppet-ceph. <-- I like this idea
> > >
> > >
> > > I think the module not being under an umbrella makes sense.
> > >
> > >
> > > >
> > > > Any feedback is welcome,
> > > >
> > > > [1] https://github.com/redhat-openstack/puppet-pacemaker
> > >
> > > Seems like a module that would be useful outside of TripleO, so it
> > > doesn't seem like it should live under that.  Other than that I don't
> > > have enough knowledge of the organization of the puppet modules to
> > > comment.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Juan Antonio Osorio R.
> > > e-mail: jaosor...@gmail.com 
> > >
> > >
> > >
> > >
> > >
> >
> > --
> > Emilien Macchi
> >
> >
>


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-02-12 Thread Spencer Krum
The module would also be welcome under the voxpupuli[0] namespace on
github. We currently have a puppet-corosync[1] module, and there is some
overlap there, but a pure pacemaker module would be a welcome addition.

I'm not sure which I would prefer, just that VP is an option. For
greater openstack integration, gerrit is the way to go. For greater
participation from the wider puppet community, github is the way to go.
Voxpupuli provides testing and releasing infrastructure.


[0] https://voxpupuli.org/
[1] https://github.com/voxpupuli/puppet-corosync

-- 
  Spencer Krum
  n...@spencerkrum.com

On Fri, Feb 12, 2016, at 09:44 AM, Emilien Macchi wrote:
> Please look and vote:
> https://review.openstack.org/279698
> 
> 
> Thanks for your feedback!
> 
> On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
> > I like the idea of moving it to use the OpenStack infrastructure.
> > 
> > On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec wrote:
> > 
> > On 02/09/2016 08:05 AM, Emilien Macchi wrote:
> > > Hi,
> > >
> > > TripleO is currently using puppet-pacemaker [1], which is a module
> > > hosted & managed on GitHub.
> > > The module was created and is mainly maintained by Red Hat. It tends to
> > > break TripleO quite often since we don't have any gate.
> > >
> > > I propose to move the module to OpenStack so we'll benefit from
> > > OpenStack Infra (Gerrit, releases, gating, etc.). Another idea would be
> > > to gate the module with TripleO HA jobs.
> > >
> > > The question is, under which umbrella should we put the module?
> > > Puppet? TripleO?
> > >
> > > Or no umbrella, like puppet-ceph. <-- I like this idea
> > 
> > 
> > I think the module not being under an umbrella makes sense.
> >  
> > 
> > >
> > > Any feedback is welcome,
> > >
> > > [1] https://github.com/redhat-openstack/puppet-pacemaker
> > 
> > Seems like a module that would be useful outside of TripleO, so it
> > doesn't seem like it should live under that.  Other than that I don't
> > have enough knowledge of the organization of the puppet modules to
> > comment.
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > -- 
> > Juan Antonio Osorio R.
> > e-mail: jaosor...@gmail.com 
> > 
> > 
> > 
> > 
> 
> -- 
> Emilien Macchi
> 



Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Adam Young

On 02/12/2016 09:04 PM, Fox, Kevin M wrote:

The problem I've run into, though, is that "project" is very well defined in a lot of users'
minds, and it's not defined the same way OpenStack typically uses it. A lot of sites use
project in a way that more closely maps to a keystone domain. Though that gets even 
muddier with keystone subprojects and domains all kind of merging together. Some other 
folks define projects closer to keystone groups. A single "project" may have 
permissions on multiple openstack projects.

Tenant as a term was much easier for me to teach users, since they don't have a predefined notion
of what it is, and they get the notion that, like a multitenant building, it gives them their own space
in the greater building. I.e., the "Foo" project has access to these 3 OpenStack
"tenants".


There are arguments for both, but we are not going to switch back to 
tenant.  That would only continue the confusion.  We've been working 
toward project for 4+ years now.




Thanks,
Kevin

From: Adam Young [ayo...@redhat.com]
Sent: Friday, February 12, 2016 5:40 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] tenant vs. project

On 02/12/2016 08:28 PM, Monty Taylor wrote:

On 02/12/2016 06:40 PM, John Griffith wrote:


On Fri, Feb 12, 2016 at 5:01 AM, Sean Dague wrote:

 Ok... this is going to be one of those threads, but I wanted to try to
 get resolution here.

 OpenStack is wildly inconsistent in its use of tenant vs. project. As
 someone that wasn't here at the beginning, I'm not even sure which one
 we are supposed to be transitioning from -> to.

 At a minimum I'd like to make all of devstack use 1 term, which is the
 term we're trying to get to. That will help move the needle.

 However, again, I'm not sure which one that is supposed to be (comments
 in various places show movement in both directions). So people with
 deeper knowledge here, can you speak up as to which is the deprecated
 term and which is the term moving forward.

  -Sean

 --
 Sean Dague
 http://dague.net


​I honestly don't have any real feeling about one over the other; BUT I
applaud the fact that somebody was brave enough to raise the question
again.

Sounds like Project is where we're supposed to be, so if we can get it
in Keystone we can all go work on updating it once and for all?​

Tis all good in keystone. If you're using keystoneauth and keystone v3
everything will work magically. However, there are still some steps
locally for things like config files and whatnot.



Thank you all.

The tenant vs project thing has been an annoyance for almost the entire
time I've been on OpenStack.  If we can standardize on project moving
forward, it will make things better.


On a terminology thing:  when talking about Nova, Glance, etc instead of
using projects, I use the term services.  It makes it easier to distinguish.

Tenant never quite made sense to me. A tenant is the person that
occupies an apartment or building, but not the building itself.


Also, the term multi-tenancy implies a degree of isolation between users
that we never quite established between Keystone projects.









Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Adam Young

On 02/12/2016 06:17 AM, Eoghan Glynn wrote:



Hello all,

tl;dr
=

I have long thought that the OpenStack Summits have become too
commercial and provide little value to the software engineers
contributing to OpenStack.

I propose the following:

1) Separate the design summits from the conferences
2) Hold only a single OpenStack conference per year
3) Return the design summit to being a low-key, low-cost working event

I think you would hurt developer attendance.  I think the unified design 
summit sneaks under the radar of many companies that will send people to 
the conference but might not send them to a design-only summit.


I know a lot of people at smaller companies especially have to do double 
duty. I'm at a larger company and I have to do double duty, booth and 
design.  Sometimes my talks get accepted, too.


I think the combined summit works.  I would not want to have to travel 
any more than I do now.


I think the idea of more developer-specific socializing would be great.  
Downtime is also a good thing, and having the socializing in venues that 
don't involve shouting and going hoarse would be a plus in my book.



TBH, after a day of summit, I am often ready to just disappear for a 
while, or go out with a small group of friends.  I tend to avoid the 
large parties.


That said, the Saxophone is coming to Austin, and I plan on trying to 
get an informal jam session together with anyone that has an 
instrument...and we'll see if we can find a piano.




details
===

The design summits originally started out as working events. Developers
got together in smallish rooms, arranged chairs in a fishbowl, and got
to work planning and designing.

With the OpenStack Summit growing more and more marketing- and
sales-focused, the contributors attending the design summit are often
unfocused. The precious little time that developers have to actually
work on the next release planning is often interrupted or cut short by
the large numbers of "suits" and salespeople at the conference event,
many of whom are peddling a product or pushing a corporate agenda.

Many contributors submit talks to speak at the conference part of an
OpenStack Summit because their company says it's the only way they will
pay for them to attend the design summit. This is, IMHO, a terrible
thing. The design summit is a *working* event. Companies that contribute
to OpenStack projects should send their engineers to working events
because that is where work is done, not so that their engineer can go
give a talk about some vendor's agenda-item or newfangled product.

Part of the reason that companies only send engineers who are giving a
talk at the conference side is that the cost of attending the OpenStack
Summit has become ludicrously expensive. Why have the events become so
expensive? I can think of a few reasons:

a) They are held every six months. I know of no other community or open
source project that holds *conference-type* events every six months.

b) They are held in extremely expensive hotels and conference centers
because the number of attendees is so big.

c) Because the conferences have become sales and marketing-focused
events, companies shell out hundreds of thousands of dollars for schwag,
for rented event people, for food and beverage sponsorships, for keynote
slots, for lavish and often ridiculous parties, and more. This cost
means less money to send engineers to the design summit to do actual work.

I would love to see the OpenStack contributor community take back the
design summit to its original format and purpose and decouple it from
the OpenStack Summit's conference portion.

I believe the design summits should be organized by the OpenStack
contributor community, not the OpenStack Foundation and its marketing
and event planning staff. This will allow lower-cost venues to be chosen
that meet the needs only of the small group of active contributors, not
of huge masses of conference attendees. This will allow contributor
companies to send *more* engineers to *more* design summits, which is
something that really needs to happen if we are to grow our active
contributor pool.

Once this decoupling occurs, I think that the OpenStack Summit should be
renamed to the OpenStack Conference and Expo to better fit its purpose
and focus. This Conference and Expo event really should be held once a
year, in my opinion, and continue to be run by the OpenStack Foundation.

I, for one, would welcome events that have no conference check-in area,
no evening parties with 2000 people, no keynote and
powerpoint-as-a-service sessions, and no getting pulled into sales meetings.

OK, there, I said it.

Thoughts? Criticism? Support? Suggestions welcome.

Largely agree with the need to re-imagine summit, and perhaps cleaving
off the design summit is the best way forward on that.

But in any case, just a few counter-points to consider:

  * nostalgia for the days of yore will only get us so far, as *some* of
the friction in the curr

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Hongbin Lu
Egor,

Thanks for sharing your insights. I gave it more thought. Maybe the goal can 
be achieved without implementing a shared COE. We could move all the master 
nodes out of user tenants, containerize them, and consolidate them into a set 
of VMs/Physical servers.

I think we could separate the discussion into two:

1.   Should Magnum introduce a new bay type, in which master nodes are 
managed by Magnum (not users themselves)? Like what GCE [1] or ECS [2] does.

2.   How to consolidate the control services that originally run on the master 
nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing 
COEs). That means users will continue to provision existing self-managed COE 
(k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's a good idea; it looks like you propose that Magnum enter the
"schedulers war" (personally I'm tired of these Mesos vs Kub vs Swarm debates).
If your concern is just utilization, you can always run the control plane on
"agent/slave" nodes. The main reason why operators (at least in our case) keep them
separate is that they need different attention (e.g. I almost don't care
why/when an "agent/slave" node died, but always double-check that a master node was
repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when
developers want to run just a Docker container without installing anything locally
(e.g. docker-machine). But in most cases it's just examples from the internet or
their own experiments ):

But we definitely should discuss it during midcycle next week.

---
Egor


From: Hongbin Lu
To: OpenStack Development Mailing List (not for usage questions)
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on container 
resource [1] reminded me of the use case Kris mentioned below. I am going to 
propose a preliminary idea to address the use case. Of course, we could 
continue the discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only minion/worker/slave 
nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, 
floating ips, etc.)
Details: Traditional COE (k8s/swarm/mesos) consists of master nodes and worker 
nodes. In these COEs, control services (i.e. scheduler) run on master nodes, 
and containers run on worker nodes. If we can port the COE control services to 
the Magnum control plane and share them with all tenants, we eliminate the need for 
master nodes, thus improving resource utilization. In the new COE, users 
create/manage containers through Magnum API endpoints. Magnum is responsible for 
spinning up tenant VMs, scheduling containers to the VMs, and managing the life-cycle of 
those containers. Unlike other COEs, containers created by this COE are 
considered OpenStack-managed resources. That means they will be tracked in the 
Magnum DB, and accessible by other OpenStack services (i.e. Horizon, Heat, 
etc.).
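
(As a rough illustration of the duplication this targets -- the figures below are
assumptions loosely based on the numbers Kris quotes later in this message, not
measurements:)

    # Back-of-envelope sketch of the per-tenant IaaS resources that a shared,
    # Magnum-hosted control plane would avoid. All numbers are illustrative.
    clusters = 1000            # ~1k bays if every containerized project converted
    masters_per_cluster = 2    # minimum dedicated nodes for an HA control plane
    vips_per_cluster = 1       # lbaas VIP in front of the masters
    fips_per_cluster = 1       # floating IP for external API access

    print("master VMs saved:", clusters * masters_per_cluster)
    print("lbaas VIPs saved:", clusters * vips_per_cluster)
    print("floating IPs saved:", clusters * fips_per_cluster)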

How do you feel about this proposal? Let’s discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying Magnum as an answer for how we do containers 
company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past experience 
tells me this won't be practical/scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  25

Re: [openstack-dev] [docs] [api] Why WADL when you can Swagger

2016-02-12 Thread Joshua Hesketh
Hey Anne,

Thanks for all your work. This sounds really exciting.

The explanation in this email was largely what I was missing during my
earlier reviews of the spec. I'll take another look at the spec shortly but
it might be helpful to have some of this rationale added.

Cheers,
Josh

On Sat, Feb 13, 2016 at 12:43 PM, michael mccune  wrote:

> On 02/12/2016 04:45 PM, Anne Gentle wrote:
> 
>
>> What's new?
>> 
>> This week, with every build of the api-site, we are now running
>> fairy-slipper to migrate from WADL to Swagger for API reference
>> information. Those migrated Swagger files are then copied to
>> developer.openstack.org/draft/swagger
>> .
>>
>> We know that not all files migrate smoothly. We'd love to get teams
>> looking at these migrated files. Thank you to those of you already
>> submitting fixes!
>>
>> In addition, the infra team is reviewing a spec now so that we can serve
>> API reference information from the developer.openstack.org site:
>> https://review.openstack.org/#/c/276484/
>>
>>
> Anne, this is a great update!
>
> love to see the progress on swagger integration.
>
>
> 
>
>> Last but not least, if you want to learn more about Swagger in the
>> upstream contributors track at the Summit, vote for this session:
>>
>> https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7723
>>
>
> sweet, +1
>
> regards,
> mike
>
>
>


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Fox, Kevin M
The problem I've run into, though, is that "project" is very well defined in a lot of 
users' minds, and it's not defined the same way OpenStack typically uses it. A 
lot of sites use project in a way that more closely maps to a keystone domain. 
Though that gets even muddier with keystone subprojects and domains all kind of 
merging together. Some other folks define projects closer to keystone groups. A 
single "project" may have permissions on multiple openstack projects.

Tenant as a term was much easier for me to teach users, since they don't have a 
predefined notion of what it is, and they get the notion that, like a multitenant 
building, it gives them their own space in the greater building. I.e., the "Foo" 
project has access to these 3 OpenStack "tenants".

Thanks,
Kevin

From: Adam Young [ayo...@redhat.com]
Sent: Friday, February 12, 2016 5:40 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] tenant vs. project

On 02/12/2016 08:28 PM, Monty Taylor wrote:
> On 02/12/2016 06:40 PM, John Griffith wrote:
>>
>>
>> On Fri, Feb 12, 2016 at 5:01 AM, Sean Dague wrote:
>>
>> Ok... this is going to be one of those threads, but I wanted to
>> try to
>> get resolution here.
>>
>> OpenStack is wildly inconsistent in its use of tenant vs.
>> project. As
>> someone that wasn't here at the beginning, I'm not even sure
>> which one
>> we are supposed to be transitioning from -> to.
>>
>> At a minimum I'd like to make all of devstack use 1 term, which
>> is the
>> term we're trying to get to. That will help move the needle.
>>
>> However, again, I'm not sure which one that is supposed to be
>> (comments
>> in various places show movement in both directions). So people with
>> deeper knowledge here, can you speak up as to which is the
>> deprecated
>> term and which is the term moving forward.
>>
>>  -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>> ​I honestly don't have any real feeling about one over the other; BUT I
>> applaud the fact that somebody was brave enough to raise the question
>> again.
>>
>> Sounds like Project is where we're supposed to be, so if we can get it
>> in Keystone we can all go work on updating it once and for all?​
>
> Tis all good in keystone. If you're using keystoneauth and keystone v3
> everything will work magically. However, there are still some steps
> locally for things like config files and whatnot.
>
>
Thank you all.

The tenant vs project thing has been an annoyance for almost the entire
time I've been on OpenStack.  If we can standardize on project moving
forward, it will make things better.


On a terminology thing:  when talking about Nova, Glance, etc instead of
using projects, I use the term services.  It makes it easier to distinguish.

Tenant never quite made sense to me. A tenant is the person that
occupies an apartment or building, but not the building itself.


Also, the term multi-tenancy implies a degree of isolation between users
that we never quite established between Keystone projects.






Re: [openstack-dev] [docs] [api] Why WADL when you can Swagger

2016-02-12 Thread michael mccune

On 02/12/2016 04:45 PM, Anne Gentle wrote:


What's new?

This week, with every build of the api-site, we are now running
fairy-slipper to migrate from WADL to Swagger for API reference
information. Those migrated Swagger files are then copied to
developer.openstack.org/draft/swagger.

We know that not all files migrate smoothly. We'd love to get teams
looking at these migrated files. Thank you to those of you already
submitting fixes!

In addition, the infra team is reviewing a spec now so that we can serve
API reference information from the developer.openstack.org site:
https://review.openstack.org/#/c/276484/



Anne, this is a great update!

love to see the progress on swagger integration.




Last but not least, if you want to learn more about Swagger in the
upstream contributors track at the Summit, vote for this session:
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7723


sweet, +1

regards,
mike




Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Adam Young

On 02/12/2016 08:28 PM, Monty Taylor wrote:

On 02/12/2016 06:40 PM, John Griffith wrote:



On Fri, Feb 12, 2016 at 5:01 AM, Sean Dague wrote:

Ok... this is going to be one of those threads, but I wanted to try to
get resolution here.

OpenStack is wildly inconsistent in its use of tenant vs. project. As
someone that wasn't here at the beginning, I'm not even sure which one
we are supposed to be transitioning from -> to.

At a minimum I'd like to make all of devstack use 1 term, which is the
term we're trying to get to. That will help move the needle.

However, again, I'm not sure which one that is supposed to be (comments
in various places show movement in both directions). So people with
deeper knowledge here, can you speak up as to which is the deprecated
term and which is the term moving forward.

 -Sean

--
Sean Dague
http://dague.net


​I honestly don't have any real feeling about one over the other; BUT I
applaud the fact that somebody was brave enough to raise the question 
again.


Sounds like Project is where we're supposed to be, so if we can get it
in Keystone we can all go work on updating it once and for all?​


Tis all good in keystone. If you're using keystoneauth and keystone v3 
everything will work magically. However, there are still some steps 
locally for things like config files and whatnot.




Thank you all.

The tenant vs project thing has been an annoyance for almost the entire 
time I've been on OpenStack.  If we can standardize on project moving 
forward, it will make things better.



On a terminology thing:  when talking about Nova, Glance, etc instead of 
using projects, I use the term services.  It makes it easier to distinguish.


Tenant never quite made sense to me. A tenant is the person that 
occupies an apartment or building, but not the building itself.



Also, the term multi-tenancy implies a degree of isolation between users 
that we never quite established between Keystone projects.







Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Monty Taylor

On 02/12/2016 06:40 PM, John Griffith wrote:



On Fri, Feb 12, 2016 at 5:01 AM, Sean Dague wrote:

Ok... this is going to be one of those threads, but I wanted to try to
get resolution here.

OpenStack is wildly inconsistent in its use of tenant vs. project. As
someone that wasn't here at the beginning, I'm not even sure which one
we are supposed to be transitioning from -> to.

At a minimum I'd like to make all of devstack use 1 term, which is the
term we're trying to get to. That will help move the needle.

However, again, I'm not sure which one that is supposed to be (comments
in various places show movement in both directions). So people with
deeper knowledge here, can you speak up as to which is the deprecated
term and which is the term moving forward.

 -Sean

--
Sean Dague
http://dague.net


​I honestly don't have any real feeling about one over the other; BUT I
applaud the fact that somebody was brave enough to raise the question again.

Sounds like Project is where we're supposed to be, so if we can get it
in Keystone we can all go work on updating it once and for all?​


Tis all good in keystone. If you're using keystoneauth and keystone v3 
everything will work magically. However, there are still some steps 
locally for things like config files and whatnot.





Re: [openstack-dev] [docs] [api] Why WADL when you can Swagger

2016-02-12 Thread Monty Taylor

On 02/12/2016 03:45 PM, Anne Gentle wrote:

Hi all,
I wanted to give an update on the efforts to provide improved
application developer information on developer.openstack.org. Wrangling this much valuable
information and gathering it in a way that helps people is no simple
matter. So. We move forward one step at a time.


It's the only way we can move around here. :)


What's new?

This week, with every build of the api-site, we are now running
fairy-slipper to migrate from WADL to Swagger for API reference
information. Those migrated Swagger files are then copied to
developer.openstack.org/draft/swagger.

We know that not all files migrate smoothly. We'd love to get teams
looking at these migrated files. Thank you to those of you already
submitting fixes!

In addition, the infra team is reviewing a spec now so that we can serve
API reference information from the developer.openstack.org site:
https://review.openstack.org/#/c/276484/

Why are we doing all this?
--
The overall vision is still intact in the original specifications
[1][2], however we need to do a lot of web design and front end work to
get where we want to be.

What can I do?

Check out these Swagger files.
http://developer.openstack.org/draft/swagger/blockstorage-v1-swagger.json
http://developer.openstack.org/draft/swagger/blockstorage-v2-swagger.json
http://developer.openstack.org/draft/swagger/clustering-v1-swagger.json
http://developer.openstack.org/draft/swagger/compute-v2.1-swagger.json
http://developer.openstack.org/draft/swagger/data-processing-v1.1-swagger.json
http://developer.openstack.org/draft/swagger/database-v1-swagger.json
http://developer.openstack.org/draft/swagger/identity-admin-v2-swagger.json
http://developer.openstack.org/draft/swagger/identity-extensions-v2-swagger.json
http://developer.openstack.org/draft/swagger/identity-extensions-v3-swagger.json
http://developer.openstack.org/draft/swagger/identity-v2-swagger.json
http://developer.openstack.org/draft/swagger/identity-v3-swagger.json
http://developer.openstack.org/draft/swagger/image-v1-swagger.json
http://developer.openstack.org/draft/swagger/networking-extensions-v2-swagger.json
http://developer.openstack.org/draft/swagger/networking-v2-swagger.json
http://developer.openstack.org/draft/swagger/objectstorage-v1-swagger.json
http://developer.openstack.org/draft/swagger/orchestration-v1-swagger.json
http://developer.openstack.org/draft/swagger/share-v1-swagger.json
http://developer.openstack.org/draft/swagger/telemetry-v2-swagger.json

If you see a problem in the original WADL, log it here:
https://bugs.launchpad.net/openstack-api-site. If you see a problem with
the migration tool, log it here:
https://bugs.launchpad.net/openstack-doc-tools.
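
(For anyone who wants to spot-check a migrated file programmatically, here is a
small sketch in Python; it only uses the draft URLs listed above, and "info" and
"paths" are standard Swagger 2.0 keys:)

    import json
    import urllib.request

    # One of the draft Swagger files produced by the fairy-slipper migration.
    url = ("http://developer.openstack.org/draft/swagger/"
           "compute-v2.1-swagger.json")

    with urllib.request.urlopen(url) as resp:
        spec = json.load(resp)

    # Print the API title/version and a few resource paths so obvious migration
    # problems (missing operations, empty paths) stand out quickly.
    print(spec["info"]["title"], spec["info"].get("version"))
    for path in sorted(spec["paths"])[:10]:
        print(path, sorted(spec["paths"][path]))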

When will the work be completed?


I had hoped to have more to show by this point, but I await the infra
team's review of the server spec above, and we continue to work on the
bugs in the migration and output. Once the server spec work is complete,
we can release the draft site.


K. I'll go review ...


What if I have more questions?
--
You can always hop onto #openstack-doc or #openstack-sdks to ask me or
another API working group member for guidance.

Last but not least, if you want to learn more about Swagger in the
upstream contributors track at the Summit, vote for this session:
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7723


This is all super awesome Anne. Thanks for all the great work!




Re: [openstack-dev] [neutron] Postgres support in (fwaas) tests

2016-02-12 Thread Monty Taylor

On 02/12/2016 11:03 AM, Sean M. Collins wrote:

I know historically there were postgres jobs that tested things, but I
think the community moved away from having postgres at the gate?



Historically postgres jobs have been started by someone who is 
interested, then that person disappears and nobody else maintains it.


I mean no disrespect to postgres, it's an awesome database ... but 
interest and attention paid to maintain it has always lagged WAY below 
what you'd otherwise expect.




Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread John Griffith
On Fri, Feb 12, 2016 at 5:01 AM, Sean Dague  wrote:

> Ok... this is going to be one of those threads, but I wanted to try to
> get resolution here.
>
> OpenStack is wildly inconsistent in its use of tenant vs. project. As
> someone that wasn't here at the beginning, I'm not even sure which one
> we are supposed to be transitioning from -> to.
>
> At a minimum I'd like to make all of devstack use 1 term, which is the
> term we're trying to get to. That will help move the needle.
>
> However, again, I'm not sure which one that is supposed to be (comments
> in various places show movement in both directions). So people with
> deeper knowledge here, can you speak up as to which is the deprecated
> term and which is the term moving forward.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
​I honestly don't have any real feeling about one over the other; BUT I
applaud the fact that somebody was brave enough to raise the question again.

Sounds like Project is where we're supposed to be, so if we can get it in
Keystone we can all go work on updating it once and for all?​


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-12 Thread John Griffith
On Thu, Feb 11, 2016 at 10:31 AM, Walter A. Boring IV  wrote:

> There seems to be a few discussions going on here wrt to detaches.   One
> is what to do on the Nova side with calling os-brick's disconnect_volume,
> and also when to or not to call Cinder's terminate_connection and detach.
>
> My original post was simply to discuss a mechanism to try and figure out
> the first problem.  When should nova call brick to remove
> the local volume, prior to calling Cinder to do something.
> ​
>


> Nova needs to know if it's safe to call disconnect_volume or not. Cinder
> already tracks each attachment, and it can return the connection_info for
> each attachment with a call to initialize_connection.   If 2 of those
> connection_info dicts are the same, it's a shared volume/target.  Don't
> call disconnect_volume if there are any more of those left.
>
> On the Cinder side of things, if terminate_connection, detach is called,
> the volume manager can find the list of attachments for a volume, and
> compare that to the attachments on a host.  The problem is, Cinder doesn't
> track the host along with the instance_uuid in the attachments table.  I
> plan on allowing that as an API change after microversions lands, so we
> know how many times a volume is attached/used on a particular host.  The
> driver can decide what to do with it at terminate_connection, detach time.
>This helps account for
> the differences in each of the Cinder backends, which we will never get
> all aligned to the same model.  Each array/backend handles attachments
> different and only the driver knows if it's safe to remove the target or
> not, depending on how many attachments/usages it has
> on the host itself.   This is the same thing as a reference counter, which
> we don't need, because we have the count in the attachments table, once we
> allow setting the host and the instance_uuid at the same time.
>
Not trying to drag this out or be difficult, I promise.  But, this seems
like it is in fact the same problem, and I'm not exactly following; if you
store the info on the compute side during the attach phase, why would you
need/want to then create a split brain scenario and have Cinder do any sort
of tracking on the detach side of things?

Like the earlier posts said, just don't call terminate_connection if you
don't want to really terminate the connection?  I'm sorry, I'm just not
following the logic of why Cinder should track this and interfere with
things?  It's supposed to be providing a service to consumers and "do what
it's told" even if it's told to do the wrong thing.
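
(For what it's worth, a rough sketch of the comparison Walt describes above --
purely illustrative, not the actual Nova/os-brick code; the connection_info keys
shown are just common iSCSI examples:)

    def safe_to_disconnect(detaching_info, other_attachment_infos):
        """Return True if no other attachment on this host shares the target.

        If two connection_info dicts resolve to the same target, they share a
        host device, so os-brick's disconnect_volume must not be called while
        any of the other attachments remain.
        """
        def target_key(info):
            data = info.get('data', {})
            # Which keys matter differs per driver; these are iSCSI examples.
            return (info.get('driver_volume_type'),
                    data.get('target_portal'),
                    data.get('target_iqn'),
                    data.get('target_lun'))

        return all(target_key(info) != target_key(detaching_info)
                   for info in other_attachment_infos)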
 ​


> Walt
>
> On Tue, Feb 09, 2016 at 11:49:33AM -0800, Walter A. Boring IV wrote:
>>
>>> Hey folks,
>>> One of the challenges we have faced with the ability to attach a
>>> single
>>> volume to multiple instances, is how to correctly detach that volume.
>>> The
>>> issue is a bit complex, but I'll try and explain the problem, and then
>>> describe one approach to solving one part of the detach puzzle.
>>>
>>> Problem:
>>>When a volume is attached to multiple instances on the same host.
>>> There
>>> are 2 scenarios here.
>>>
>>>1) Some Cinder drivers export a new target for every attachment on a
>>> compute host.  This means that you will get a new unique volume path on a
>>> host, which is then handed off to the VM instance.
>>>
>>>2) Other Cinder drivers export a single target for all instances on a
>>> compute host.  This means that every instance on a single host, will
>>> reuse
>>> the same host volume path.
>>>
>>
>> This problem isn't actually new. It is a problem we already have in Nova
>> even with single attachments per volume.  eg, with NFS and SMBFS there
>> is a single mount setup on the host, which can serve up multiple volumes.
>> We have to avoid unmounting that until no VM is using any volume provided
>> by that mount point. Except we pretend the problem doesn't exist and just
>> try to unmount every single time a VM stops, and rely on the kernel
>> failing umout() with EBUSY.  Except this has a race condition if one VM
>> is stopping right as another VM is starting
>>
>> There is a patch up to try to solve this for SMBFS:
>>
>> https://review.openstack.org/#/c/187619/
>>
>> but I don't really much like it, because it only solves it for one
>> driver.
>>
>> I think we need a general solution that solves the problem for all
>> cases, including multi-attach.
>>
>> AFAICT, the only real answer here is to have nova record more info
>> about volume attachments, so it can reliably decide when it is safe
>> to release a connection on the host.
>>
>>
>> Proposed solution:
>>>Nova needs to determine if the volume that's being detached is a
>>> shared or
>>> non shared volume.  Here is one way to determine that.
>>>
>>>Every Cinder volume has a list of it's attachments.  In those
>>> attachments
>>> it contains the instance_uuid that the volume is attached to.  I presume
>>> Nova can find which of the volume attachments are o

Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-12 Thread Carl Baldwin
On Fri, Feb 12, 2016 at 5:01 AM, Ihar Hrachyshka  wrote:
>>> It is only internal implementation changes.
>>>
>>> That's not entirely true, is it? There are config variables to change and
>>> it opens up the possibility of a scenario that the operator may not care
>>> about.
>>>
>>
>> If we were to remove the non-pluggable version altogether, then the
>> default for ipam_driver would switch from None to internal. Therefore, there
>> would be no config file changes needed.
>>
>> I think this is correct.
>> Assuming the migration path to Neutron will include the data
>> transformation from built-in to pluggable IPAM, do we just remove the old
>> code and models?
>> On the other hand do you think it might make sense to give operators a
>> chance to rollback - perhaps just in case some nasty bug pops up?
>
>
> They can always revert to a previous release. And if we enable the new
> implementation start of Newton, we’ll have enough time to fix bugs that will
> pop up in gate.

So, to do this, we have to consider two classes of current users.
Since the pluggable implementation has been available, I think that we
have to assume someone might be using it.  Someone could easily have
turned it on in a green-field deployment.  If we push the offline
migration in to Mitaka as per my last email then we'll likely get a
few more of these but it doesn't really matter, the point is that I
think we need to assume that they exist.

1) Users of the old baked-in implementation
  - Their current data is stored in the old tables.

2) User of the new pluggable implementation
 - Their current data is stored in the new tables.

So, how does an unconditional migration work?  We can't just copy the
old tables to the new tables because we might clobber data for the
users in #2.  I've already heard that conditional migrations are a
pain and shouldn't be considered.  This seems like a problem.

I had an idea that I wanted to share but I'm warning you, it sounds a
little crazy even to me.  But, maybe it could work.  Read through it
for entertainment purposes if nothing else.

Instead of migrating data from the old tables to the new.  What if we
migrated the old tables in place in a patch set that removed all of
the old code?  The table structure is nearly identical, right?  The
differences, I imagine, could be easily handled by an alembic
migration.  Correct me if I'm wrong.
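
(To make that concrete, the in-place step might be a tiny Alembic migration along
these lines -- purely a sketch; the table and column names are hypothetical and
the actual schema differences would drive what goes here:)

    # Hypothetical Alembic sketch: align the old built-in IPAM tables with the
    # schema the pluggable driver expects. Names below are illustrative only.
    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # Example: add a column the pluggable schema has but the old one lacks.
        op.add_column(
            'ipallocationpools',
            sa.Column('ipam_subnet_id', sa.String(length=36), nullable=True))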

Now, we still have a difference between users in groups #1 and #2
above.  To keep them separate, we would call the new built-in
pluggable driver "built-in", "neutron", or whatever.  The name isn't
important except that it can't be "internal".

1) Users who were migrated to the new baked-in implementation.
  - Their current data is still in the old tables but they have been
migrated to look just like the new tables.
  - They have still not set "ipam_driver" in their config so they get
the new default of "built-in".

2) Early adopters of built-in pluggable ipam
  - Their current data is still in the new tables
  - They have their config set to "internal" already

So, now we have to deal with two identical pluggable implementations:
one called "built-in" and the other called "internal" but otherwise
they're identical in every way.  So, to handle this, could we
parameterize the plugin so that they share exactly the same code while
"internal" is deprecated?  Just the table names would be
parameterized.

We have to eventually reconcile users in group #1 with #2.  But, now
that the tables are identical we could provide an offline migration
that might be as simple as deleting the "old" tables and renaming the
"new" tables.  Now, only users in group #2 are required to perform an
offline migration.

Carl



Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-12 Thread Carl Baldwin
On Thu, Feb 11, 2016 at 10:04 AM, Armando M.  wrote:
> I believe we have more recovery options out a potentially fatal situation.
> In fact the offline script can provide a dry-run option that can just
> validate that the migration will succeed before it is even actually
> performed; I think that the size and the amount of tables involved in the
> data migration justifies this course of action rather than the other. Think
> about what Sean said, bugs are always lurking in the dark and as much as we
> can strive for correctness, things might go bad. This is not a routine
> migration and some operators may not be in a rush to embrace pluggable IPAM,
> hence I don't think we are in the position to make the decision on their
> behalf and go through the usual fast-path deprecation process.

I had a long discussion with Armando about this.  I was pretty
stubborn but there was one point that came up that got through and got
me thinking.  Basically, it is that having the ability to migrate to
pluggable IPAM in Mitaka could be a key component to giving users a
path to migrate to a 3rd party pluggable implementation.  Without it,
3rd parties will have to support two kinds of migration:  one for each
of the drivers currently available.

The only way to get something into Mitaka is to do an offline
migration.  I agree that we shouldn't do the full automatic switch
this late in the cycle.

So, is this worth getting in to Mitaka to help this use case?  If it
is a significantly important component of migrating to 3rd party IPAM
then maybe the answer should be yes.  If it is just to help get people
to the pluggable internal implementation in Mitaka then I'd say no
because it doesn't provide any user-visible advantage and it
doesn't yet have the gate testing miles on it that the old
implementation has.

Carl



Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Jay Pipes

On 02/12/2016 02:18 PM, Doug Wiegley wrote:

I thought one of the original notions behind this was to make ‘nova
boot’ as simple as in the nova-net case, which would imply not
needing to use —nic at all. I personally like the idea of the flag
being for the no network case.


This would be my preference as well, even though it's technically a 
backwards-incompatible API change.


The idea behind get-me-a-network was specifically to remove the current 
required complexity of the nova boot command with regards to networking 
options and allow a return to the nova-net model where an admin could 
auto-create a bunch of unassigned networks and the first time a user 
booted an instance and did not specify any network configuration (the 
default, sane behaviour in nova-net), one of those unassigned networks 
would be grabbed for the troject, I mean prenant, sorry.


So yeah, the "opt-in to having no networking at all with a 
--no-networking or --no-nics option" would be my preference.
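
(Sketching the semantics under discussion, with the default Jay prefers here --
illustrative pseudo-logic only, not the actual Nova code or a final API:)

    def resolve_network_request(requested):
        """Decide what to do with the network portion of a boot request."""
        if requested == 'none':
            # Explicit opt-in to a NIC-less instance.
            return []
        if requested is None or requested == 'auto':
            # get-me-a-network: have Neutron supply a default network for the
            # project, mirroring the old nova-net behaviour.
            return ['auto-allocated-network']
        # Otherwise the caller passed an explicit list of nics/networks.
        return requested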


Best,
-jay



Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Kevin Benton
>plus I think a VM with no network is kinda nonsensical,

On the contrary, that's when I find them the most useful!
On Feb 12, 2016 14:33, "Doug Wiegley"  wrote:

>
> > On Feb 12, 2016, at 3:17 PM, Andrew Laski  wrote:
> >
> >
> >
> > On Fri, Feb 12, 2016, at 04:53 PM, Doug Wiegley wrote:
> >>
> >>> On Feb 12, 2016, at 2:15 PM, Andrew Laski  wrote:
> >>>
> >>>
> >>>
> >>> On Fri, Feb 12, 2016, at 04:03 PM, Doug Wiegley wrote:
>  Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically
>  gives it one nic with an IP, right?
> 
>  So how is this changing that behavior? Making it harder to use, for
> the
>  sake of preserving a really unusual corner case (no net with neutron),
>  seems a much worse user experience here.
> >>>
> >>> I was just going off of the behavior Matt described, that it was
> >>> possible to boot with no network by leaving that out of the request. If
> >>> the behavior already differs based on what networking backend is in use
> >>> then we've put users in a bad spot, and IMO furthers my case for having
> >>> explicit parameters in the request. I'm really not seeing how "--nic
> >>> auto|none" is any harder than leaving all network related parameters
> off
> >>> of the boot request, and it avoids potential confusion as to what will
> >>> happen.
> >>
> >> It hurts discoverability, and “expectedness”. If I’m new to openstack,
> >> having it default boot unusable just means the first time I use ’nova
> >> boot’, I’ll end up with a useless VM. People don’t read docs first, it
> >> should “just work” as far as that’s sane. And OpenStack has a LOT of
> >> these little annoyances for the sake of strict correctness while
> >> optimizing for an unusual or rare case.
> >>
> >> The original stated goal of this simpler neutron api was to get back to
> >> the simpler nova boot. I’d like to see that happen.
> >
> > I'm not suggesting that the default boot be unusable. I'm saying that
> > just like it's required to pass in a flavor and image/volume to boot an
> > instance why not require the user to specify something about the
> > network. And that network request can be as simple as specifying "auto"
> > or "none". That seems to meet all of the requirements without the
> > confusion of needing to guess what the default behavior will be when
> > it's left off because it can apparently mean different things based on
> > which backed is in use. For users that don't read the docs they'll get
> > an immediate failure response indicating that they need to specify
> > something about the network versus the current and proposed state where
> > it's not clear what will happen unless they've read the docs on
> > microversions and understand which network service is being used.
>
> I understand what you’re saying, and we’re almost down to style here. We
> have the previous nova-net ‘nova boot’ behavior, plus I think a VM with no
> network is kinda nonsensical, so adding that hoop for the default case
> doesn’t make sense to me. I’m not sure we’ll convince each other here, and
> that’s ok, I’ve said my peace.  :-)
>
> Thanks,
> doug
>
> >
> >>
> >> Thanks,
> >> doug
> >>
> >>>
> 
>  Thanks,
>  doug
> 
> 
> > On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
> >
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> >
> > On 02/12/2016 02:41 PM, Andrew Laski wrote:
> >
> >>> I think if the point of the experience for this API is to be
> >>> working out
>  of the box. So I very much like the idea of a defaults change
>  here to the thing we want to be easy. And an explicit option to
>  disable it if you want to do something more complicated.
> >
> >> I think this creates a potential for confusing behavior for users.
> >> They're happily booting instances with no network on 2.1, as silly
> >> as that might be, and then decide "ooh I'd like to use fancy new
> >> flag foo which is available in 2.35". So they add the flag to their
> >> request and set the version to 2.35 and suddenly multiple things
> >> change about their boot process because they missed that 2.24(or
> >> whatever) changed a default behavior. If this fits within the scope
> >> of microversions then users will need to expect that, but it's
> >> something that would be likely to trip me up as a user of the API.
> >
> > I agree - that's always been the trade-off with microversions. You
> > never get surprises, but you can't move to a new feature in 2.X
> > without also having to get everything that was also introduced in
> > 2.X-1 and before. The benefit, of course, is that the user will have
> > changed something explicitly before getting the new behavior, and at
> > least has a chance of figuring it out.
> >
> > - --
> >
> > - -- Ed Leafe

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Andrew Laski


On Fri, Feb 12, 2016, at 05:29 PM, Doug Wiegley wrote:
> 
> > On Feb 12, 2016, at 3:17 PM, Andrew Laski  wrote:
> > 
> > 
> > 
> > On Fri, Feb 12, 2016, at 04:53 PM, Doug Wiegley wrote:
> >> 
> >>> On Feb 12, 2016, at 2:15 PM, Andrew Laski  wrote:
> >>> 
> >>> 
> >>> 
> >>> On Fri, Feb 12, 2016, at 04:03 PM, Doug Wiegley wrote:
>  Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically
>  gives it one nic with an IP, right?
>  
>  So how is this changing that behavior? Making it harder to use, for the
>  sake of preserving a really unusual corner case (no net with neutron),
>  seems a much worse user experience here.
> >>> 
> >>> I was just going off of the behavior Matt described, that it was
> >>> possible to boot with no network by leaving that out of the request. If
> >>> the behavior already differs based on what networking backend is in use
> >>> then we've put users in a bad spot, and IMO furthers my case for having
> >>> explicit parameters in the request. I'm really not seeing how "--nic
> >>> auto|none" is any harder than leaving all network related parameters off
> >>> of the boot request, and it avoids potential confusion as to what will
> >>> happen.
> >> 
> >> It hurts discoverability, and “expectedness”. If I’m new to openstack,
> >> having it default boot unusable just means the first time I use ’nova
> >> boot’, I’ll end up with a useless VM. People don’t read docs first, it
> >> should “just work” as far as that’s sane. And OpenStack has a LOT of
> >> these little annoyances for the sake of strict correctness while
> >> optimizing for an unusual or rare case.
> >> 
> >> The original stated goal of this simpler neutron api was to get back to
> >> the simpler nova boot. I’d like to see that happen.
> > 
> > I'm not suggesting that the default boot be unusable. I'm saying that
> > just like it's required to pass in a flavor and image/volume to boot an
> > instance why not require the user to specify something about the
> > network. And that network request can be as simple as specifying "auto"
> > or "none". That seems to meet all of the requirements without the
> > confusion of needing to guess what the default behavior will be when
> > it's left off because it can apparently mean different things based on
> > which backend is in use. For users that don't read the docs they'll get
> > an immediate failure response indicating that they need to specify
> > something about the network versus the current and proposed state where
> > it's not clear what will happen unless they've read the docs on
> > microversions and understand which network service is being used.
> 
> I understand what you’re saying, and we’re almost down to style here. We
> have the previous nova-net ‘nova boot’ behavior, plus I think a VM without
> a network is kinda nonsensical, so adding that hoop for the default case
> doesn’t make sense to me. I’m not sure we’ll convince each other here,
> and that’s ok, I’ve said my piece.  :-)

Me too :)

Thanks for the discussion.


> 
> Thanks,
> doug
> 
> > 
> >> 
> >> Thanks,
> >> doug
> >> 
> >>> 
>  
>  Thanks,
>  doug
>  
>  
> > On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
> > 
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> > 
> > On 02/12/2016 02:41 PM, Andrew Laski wrote:
> > 
> >>> I think if the point of the experience for this API is to be
> >>> working out
>  of the box. So I very much like the idea of a defaults change
>  here to the thing we want to be easy. And an explicit option to
>  disable it if you want to do something more complicated.
> > 
> >> I think this creates a potential for confusing behavior for users. 
> >> They're happily booting instances with no network on 2.1, as silly
> >> as that might be, and then decide "ooh I'd like to use fancy new
> >> flag foo which is available in 2.35". So they add the flag to their
> >> request and set the version to 2.35 and suddenly multiple things
> >> change about their boot process because they missed that 2.24(or
> >> whatever) changed a default behavior. If this fits within the scope
> >> of microversions then users will need to expect that, but it's
> >> something that would be likely to trip me up as a user of the API.
> > 
> > I agree - that's always been the trade-off with microversions. You
> > never get surprises, but you can't move to a new feature in 2.X
> > without also having to get everything that was also introduced in
> > 2.X-1 and before. The benefit, of course, is that the user will have
> > changed something explicitly before getting the new behavior, and at
> > least has a chance of figuring it out.
> > 
> > - -- 
> > 
> > - -- Ed Leafe

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Doug Wiegley

> On Feb 12, 2016, at 3:17 PM, Andrew Laski  wrote:
> 
> 
> 
> On Fri, Feb 12, 2016, at 04:53 PM, Doug Wiegley wrote:
>> 
>>> On Feb 12, 2016, at 2:15 PM, Andrew Laski  wrote:
>>> 
>>> 
>>> 
>>> On Fri, Feb 12, 2016, at 04:03 PM, Doug Wiegley wrote:
 Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically
 gives it one nic with an IP, right?
 
 So how is this changing that behavior? Making it harder to use, for the
 sake of preserving a really unusual corner case (no net with neutron),
 seems a much worse user experience here.
>>> 
>>> I was just going off of the behavior Matt described, that it was
>>> possible to boot with no network by leaving that out of the request. If
>>> the behavior already differs based on what networking backend is in use
>>> then we've put users in a bad spot, and IMO furthers my case for having
>>> explicit parameters in the request. I'm really not seeing how "--nic
>>> auto|none" is any harder than leaving all network related parameters off
>>> of the boot request, and it avoids potential confusion as to what will
>>> happen.
>> 
>> It hurts discoverability, and “expectedness”. If I’m new to openstack,
>> having it default boot unusable just means the first time I use ’nova
>> boot’, I’ll end up with a useless VM. People don’t read docs first, it
>> should “just work” as far as that’s sane. And OpenStack has a LOT of
>> these little annoyances for the sake of strict correctness while
>> optimizing for an unusual or rare case.
>> 
>> The original stated goal of this simpler neutron api was to get back to
>> the simpler nova boot. I’d like to see that happen.
> 
> I'm not suggesting that the default boot be unusable. I'm saying that
> just like it's required to pass in a flavor and image/volume to boot an
> instance why not require the user to specify something about the
> network. And that network request can be as simple as specifying "auto"
> or "none". That seems to meet all of the requirements without the
> confusion of needing to guess what the default behavior will be when
> it's left off because it can apparently mean different things based on
> which backend is in use. For users that don't read the docs they'll get
> an immediate failure response indicating that they need to specify
> something about the network versus the current and proposed state where
> it's not clear what will happen unless they've read the docs on
> microversions and understand which network service is being used.

I understand what you’re saying, and we’re almost down to style here. We have 
the previous nova-net ‘nova boot’ behavior, plus I think a VM without a network is 
kinda nonsensical, so adding that hoop for the default case doesn’t make sense 
to me. I’m not sure we’ll convince each other here, and that’s ok, I’ve said my 
piece.  :-)

Thanks,
doug

> 
>> 
>> Thanks,
>> doug
>> 
>>> 
 
 Thanks,
 doug
 
 
> On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
> 
> On 02/12/2016 02:41 PM, Andrew Laski wrote:
> 
>>> I think if the point of the experience for this API is to be
>>> working out
 of the box. So I very much like the idea of a defaults change
 here to the thing we want to be easy. And an explicit option to
 disable it if you want to do something more complicated.
> 
>> I think this creates a potential for confusing behavior for users. 
>> They're happily booting instances with no network on 2.1, as silly
>> as that might be, and then decide "ooh I'd like to use fancy new
>> flag foo which is available in 2.35". So they add the flag to their
>> request and set the version to 2.35 and suddenly multiple things
>> change about their boot process because they missed that 2.24(or
>> whatever) changed a default behavior. If this fits within the scope
>> of microversions then users will need to expect that, but it's
>> something that would be likely to trip me up as a user of the API.
> 
> I agree - that's always been the trade-off with microversions. You
> never get surprises, but you can't move to a new feature in 2.X
> without also having to get everything that was also introduced in
> 2.X-1 and before. The benefit, of course, is that the user will have
> changed something explicitly before getting the new behavior, and at
> least has a chance of figuring it out.
> 
> - -- 
> 
> - -- Ed Leafe

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Andrew Laski


On Fri, Feb 12, 2016, at 04:53 PM, Doug Wiegley wrote:
> 
> > On Feb 12, 2016, at 2:15 PM, Andrew Laski  wrote:
> > 
> > 
> > 
> > On Fri, Feb 12, 2016, at 04:03 PM, Doug Wiegley wrote:
> >> Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically
> >> gives it one nic with an IP, right?
> >> 
> >> So how is this changing that behavior? Making it harder to use, for the
> >> sake of preserving a really unusual corner case (no net with neutron),
> >> seems a much worse user experience here.
> > 
> > I was just going off of the behavior Matt described, that it was
> > possible to boot with no network by leaving that out of the request. If
> > the behavior already differs based on what networking backend is in use
> > then we've put users in a bad spot, and IMO furthers my case for having
> > explicit parameters in the request. I'm really not seeing how "--nic
> > auto|none" is any harder than leaving all network related parameters off
> > of the boot request, and it avoids potential confusion as to what will
> > happen.
> 
> It hurts discoverability, and “expectedness”. If I’m new to openstack,
> having it default boot unusable just means the first time I use ’nova
> boot’, I’ll end up with a useless VM. People don’t read docs first, it
> should “just work” as far as that’s sane. And OpenStack has a LOT of
> these little annoyances for the sake of strict correctness while
> optimizing for an unusual or rare case.
> 
> The original stated goal of this simpler neutron api was to get back to
> the simpler nova boot. I’d like to see that happen.

I'm not suggesting that the default boot be unusable. I'm saying that
just like it's required to pass in a flavor and image/volume to boot an
instance why not require the user to specify something about the
network. And that network request can be as simple as specifying "auto"
or "none". That seems to meet all of the requirements without the
confusion of needing to guess what the default behavior will be when
it's left off because it can apparently mean different things based on
which backend is in use. For users that don't read the docs they'll get
an immediate failure response indicating that they need to specify
something about the network versus the current and proposed state where
it's not clear what will happen unless they've read the docs on
microversions and understand which network service is being used.
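
As a purely illustrative sketch (the request format here is an assumption, not
a settled API), the kind of explicit request I mean might look roughly like
this:

    # hypothetical boot request body with an explicit network choice;
    # the "networks" key and its "auto"/"none" values are assumptions
    import json

    boot_request = {
        "server": {
            "name": "test-vm",
            "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
            "flavorRef": "1",
            # the caller must say what they want: "auto" (get me a network),
            # "none" (really no network), or an explicit list of nics
            "networks": "auto",
        }
    }
    print(json.dumps(boot_request, indent=2))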

> 
> Thanks,
> doug
> 
> > 
> >> 
> >> Thanks,
> >> doug
> >> 
> >> 
> >>> On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
> >>> 
> >>> -BEGIN PGP SIGNED MESSAGE-
> >>> Hash: SHA512
> >>> 
> >>> On 02/12/2016 02:41 PM, Andrew Laski wrote:
> >>> 
> > I think if the point of the experience for this API is to be
> > working out
> >> of the box. So I very much like the idea of a defaults change
> >> here to the thing we want to be easy. And an explicit option to
> >> disable it if you want to do something more complicated.
> >>> 
>  I think this creates a potential for confusing behavior for users. 
>  They're happily booting instances with no network on 2.1, as silly
>  as that might be, and then decide "ooh I'd like to use fancy new
>  flag foo which is available in 2.35". So they add the flag to their
>  request and set the version to 2.35 and suddenly multiple things
>  change about their boot process because they missed that 2.24(or
>  whatever) changed a default behavior. If this fits within the scope
>  of microversions then users will need to expect that, but it's
>  something that would be likely to trip me up as a user of the API.
> >>> 
> >>> I agree - that's always been the trade-off with microversions. You
> >>> never get surprises, but you can't move to a new feature in 2.X
> >>> without also having to get everything that was also introduced in
> >>> 2.X-1 and before. The benefit, of course, is that the user will have
> >>> changed something explicitly before getting the new behavior, and at
> >>> least has a chance of figuring it out.
> >>> 
> >>> - -- 
> >>> 
> >>> - -- Ed Leafe

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Doug Wiegley

> On Feb 12, 2016, at 2:15 PM, Andrew Laski  wrote:
> 
> 
> 
> On Fri, Feb 12, 2016, at 04:03 PM, Doug Wiegley wrote:
>> Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically
>> gives it one nic with an IP, right?
>> 
>> So how is this changing that behavior? Making it harder to use, for the
>> sake of preserving a really unusual corner case (no net with neutron),
>> seems a much worse user experience here.
> 
> I was just going off of the behavior Matt described, that it was
> possible to boot with no network by leaving that out of the request. If
> the behavior already differs based on what networking backend is in use
> then we've put users in a bad spot, and IMO furthers my case for having
> explicit parameters in the request. I'm really not seeing how "--nic
> auto|none" is any harder than leaving all network related parameters off
> of the boot request, and it avoids potential confusion as to what will
> happen.

It hurts discoverability, and “expectedness”. If I’m new to openstack, having 
it default boot unusable just means the first time I use ’nova boot’, I’ll end 
up with a useless VM. People don’t read docs first, it should “just work” as 
far as that’s sane. And OpenStack has a LOT of these little annoyances for the 
sake of strict correctness while optimizing for an unusual or rare case.

The original stated goal of this simpler neutron api was to get back to the 
simpler nova boot. I’d like to see that happen.

Thanks,
doug

> 
>> 
>> Thanks,
>> doug
>> 
>> 
>>> On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
>>> 
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA512
>>> 
>>> On 02/12/2016 02:41 PM, Andrew Laski wrote:
>>> 
> I think if the point of the experience for this API is to be
> working out
>> of the box. So I very much like the idea of a defaults change
>> here to the thing we want to be easy. And an explicit option to
>> disable it if you want to do something more complicated.
>>> 
 I think this creates a potential for confusing behavior for users. 
 They're happily booting instances with no network on 2.1, as silly
 as that might be, and then decide "ooh I'd like to use fancy new
 flag foo which is available in 2.35". So they add the flag to their
 request and set the version to 2.35 and suddenly multiple things
 change about their boot process because they missed that 2.24(or
 whatever) changed a default behavior. If this fits within the scope
 of microversions then users will need to expect that, but it's
 something that would be likely to trip me up as a user of the API.
>>> 
>>> I agree - that's always been the trade-off with microversions. You
>>> never get surprises, but you can't move to a new feature in 2.X
>>> without also having to get everything that was also introduced in
>>> 2.X-1 and before. The benefit, of course, is that the user will have
>>> changed something explicitly before getting the new behavior, and at
>>> least has a chance of figuring it out.
>>> 
>>> - -- 
>>> 
>>> - -- Ed Leafe
>>> -BEGIN PGP SIGNATURE-
>>> Version: GnuPG v2
>>> Comment: GPGTools - https://gpgtools.org
>>> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>>> 
>>> iQIcBAEBCgAGBQJWvkXeAAoJEKMgtcocwZqLdEwP/R36392zeHL55LR19ewoSg8/
>>> U9MJEmZo2RiWaJBqWlRsBF5DSNzi7oNzhot8bOcY+aO7XQAs2kfG1QF9YMj/YfTw
>>> iqsCtKNfdJZR1lNtq7u/TodkkFLP7gO8Q36efOYvAMIIlZlOoMAyvLWRxDGTGN+t
>>> ahgnw2z6oQDpARb6Yx7NFjogjccTENdkuDNyLy/hmuUpvLyvhUDQC1EouVNHPglA
>>> Sb8tQYSsdKDHrggs8b3XuJZjJXYvn0M4Knnw3i/0DAVoZamVfsnHJ1EWRfOh7hq3
>>> +C+MJfzfyz5K46ikvjpuSGPPZ8rPPLR1gaih/W2fmXdvKG7NSK3sIUcgJZ7lm4rh
>>> VpVDCRWi9rlPuJa4JIKlZ8h6NNMHwiEq8Ea+dP7lHnx0qp8EIxkPDBdU6sCmeUGM
>>> tqBeHjUU7f8/fbZkOdorn1NAEZfXcRew3/BeFFxrmu6X8Z2XHHXMBBtlmehEoDHO
>>> 6/BzZH3I/5VPcvFZQfsYYivBj3vtmB8cVLbUNpD3xBLyJKVFEwfmGkkYQTlL0KDx
>>> B+bqNDw2pK72/qN39rjmdZY/cZ4vGfBGu2CzJxbX+Zn2E8Mgg5rAuARG0OCNg9ll
>>> uuVBy37vbPNrNV9UZSkSjmRma/l8kl1IzBbszH0ENbH/ov3ngKB0xWiLc1pBZKC9
>>> GcPgzIoclwLrVooRqOSf
>>> =Dqga
>>> -END PGP SIGNATURE-
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

[openstack-dev] [docs] [api] Why WADL when you can Swagger

2016-02-12 Thread Anne Gentle
Hi all,
I wanted to give an update on the efforts to provide improved application
developer information on developer.openstack.org. Wrangling this much
valuable information and gathering it in a way that helps people is no
simple matter. So. We move forward one step at a time.

What's new?

This week, with every build of the api-site, we are now running
fairy-slipper to migrate from WADL to Swagger for API reference
information. Those migrated Swagger files are then copied to
developer.openstack.org/draft/swagger.

We know that not all files migrate smoothly. We'd love to get teams looking
at these migrated files. Thank you to those of you already submitting
fixes!

In addition, the infra team is reviewing a spec now so that we can serve
API reference information from the developer.openstack.org site:
https://review.openstack.org/#/c/276484/

Why are we doing all this?
--
The overall vision is still intact in the original specifications [1][2],
however we need to do a lot of web design and front end work to get where
we want to be.

What can I do?

Check out these Swagger files.
http://developer.openstack.org/draft/swagger/blockstorage-v1-swagger.json
http://developer.openstack.org/draft/swagger/blockstorage-v2-swagger.json
http://developer.openstack.org/draft/swagger/clustering-v1-swagger.json
http://developer.openstack.org/draft/swagger/compute-v2.1-swagger.json
http://developer.openstack.org/draft/swagger/data-processing-v1.1-swagger.json
http://developer.openstack.org/draft/swagger/database-v1-swagger.json
http://developer.openstack.org/draft/swagger/identity-admin-v2-swagger.json
http://developer.openstack.org/draft/swagger/identity-extensions-v2-swagger.json
http://developer.openstack.org/draft/swagger/identity-extensions-v3-swagger.json
http://developer.openstack.org/draft/swagger/identity-v2-swagger.json
http://developer.openstack.org/draft/swagger/identity-v3-swagger.json
http://developer.openstack.org/draft/swagger/image-v1-swagger.json
http://developer.openstack.org/draft/swagger/networking-extensions-v2-swagger.json
http://developer.openstack.org/draft/swagger/networking-v2-swagger.json
http://developer.openstack.org/draft/swagger/objectstorage-v1-swagger.json
http://developer.openstack.org/draft/swagger/orchestration-v1-swagger.json
http://developer.openstack.org/draft/swagger/share-v1-swagger.json
http://developer.openstack.org/draft/swagger/telemetry-v2-swagger.json
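
For example, a quick way to sanity-check one of the migrated files is to pull
it down and list the operations it describes (a rough sketch; the exact
contents depend on how each WADL converted):

    import requests

    url = "http://developer.openstack.org/draft/swagger/compute-v2.1-swagger.json"
    spec = requests.get(url).json()
    for path, methods in sorted(spec.get("paths", {}).items()):
        for method in methods:
            print(method.upper(), path)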

If you see a problem in the original WADL, log it here:
https://bugs.launchpad.net/openstack-api-site. If you see a problem with
the migration tool, log it here:
https://bugs.launchpad.net/openstack-doc-tools.

When will the work be completed?


I had hoped to have more to show by this point, but I await the infra
team's review of the server spec above, and we continue to work on the bugs
in the migration and output. Once the server spec work is complete, we can
release the draft site.

What if I have more questions?
--
You can always hop onto #openstack-doc or #openstack-sdks to ask me or
another API working group member for guidance.

Last but not least, if you want to learn more about Swagger in the upstream
contributors track at the Summit, vote for this session:
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7723

Thanks,
Anne

--
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com

1.
http://specs.openstack.org/openstack/docs-specs/specs/mitaka/app-guides-mitaka-vision.html
2.
http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Any projects using sqlalchemy-utils?

2016-02-12 Thread Joshua Harlow
I'm fine with discarding it (since said support is only for postgres 
anyway afaik), if it's causing issues for (packaging) folks and nobody 
else in openstack really uses it either.


Let's see what folks think,

On another note (if not many projects are using it); I think it would be 
nice to try to use it more in other projects, since afaik mysql and 
postgres are both getting/now have native json support and afaik 
sqlalchemy-utils is the current way to support said native types. And a 
lot of openstack stores json blobs (metadata...) in text fields (when 
they really could start to move toward the native json field types) so 
its usage would seem appropriate,
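
(Rough sketch of what that usage looks like; the model and column names are
made up:)

    # declaring a JSON column via sqlalchemy-utils; on postgres this maps to
    # a native json column, on other backends it falls back to serialized text
    from sqlalchemy import Column, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy_utils import JSONType

    Base = declarative_base()

    class Record(Base):
        __tablename__ = 'records'
        id = Column(Integer, primary_key=True)
        meta = Column(JSONType)  # e.g. {"tags": ["a", "b"], "size": 3}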


But if it's too much of a pain to package for not much adoption, then 
that's ok too.


-Josh

On 02/12/2016 12:57 PM, Corey Bryant wrote:

Are any projects using sqlalchemy-utils?

taskflow started using it recently, however it's only needed for a
single type in taskflow (JSONType).  I'm wondering if it's worth the
effort of maintaining it and its dependencies in Ubuntu main or if
perhaps we can just revert this bit to define the JSONType internally.

--
Regards,
Corey


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-12 Thread Andrew Woodward
On Thu, Feb 11, 2016 at 1:03 AM Aleksandr Didenko 
wrote:

> Hi,
>
>
> > So what is open? The composition layer.
>
> We can have different composition layers for every release and it's
> already implemented in releases - separate puppet modules/manifests dir for
> every release.
>

This requires the loss of all of the features in the newer version of fuel
since it relies on the older version of the serialized data from nailgun.
In addition, we currently don't allow for new clusters to be deployed this
way.


>
> > Currently, we just abandon support for previous versions in the
> composition layer and leave them to only be monuments in the
> stable/ series branches for maintenance. If we instead started
> making changes (forwards or backwards that) change the calls based on the
> openstack version [5] then we would be able to change the calls based on
> then needs of that release, and the puppet-openstack modules we are working
> with.
>
> So we'll have tons of conditionals in composition layer, right? Even if
> some puppet-openstack class have just one new parameter in new release,
> then we'll have to write a conditional and duplicate class declaration. Or
> write complex parameters hash definitions/merges and use
> create_resources(). The more releases we want to support the more
> complicated composition layer will become. That won't make contribution to
> fuel-library easier and even can greatly reduce development speed. Also are
> we going to add new features to stable releases using this workflow with
> single composition layer?
>
> Yes, we need conditionals in the composition layer, we already need these
to not jam the gate when we switch between stable and master, we might as
well maintain them properly so that we can start running multiple versions

Yes, this is, in part,  about taking advantage of new fuel features on
stable openstack releases, we are almost always behind and the previous
release(s) supported this already.

If it's only supported in the newer version, then we would have a similar
problem with enabling the feature anyways as our current process results in
us developing on stable openstack with the newer fuel until late in the
cycle, when we switch packaging over.

>
> > Testing master while keeping stable. Given the ability to conditional
> what source of openstack bits, which versions of manifests we can start
> testing both master and keep health on stable. This would help accelerate
> both fuel development and deploying and testing development versions of
> openstack
>
> I'm sorry, but I don't see how we can accelerate things by making
> composition layer more and more complicated. If we're going to run CI and
> swarm for all of the supported releases on the ISO, that would rather
> decrease speed of development and testing drastically. Also aren't we
> "testing both master and keep health on stable" right now by running tests
> for master and stable versions of Fuel?
>
> No, this is about deploying stable and master from the same version of
Fuel, with the new features from fuel. As we develop new features in fuel
we frequently run into problems simply because the openstack version we are
deploying is broken, this would allow for gating on stable and edge testing
master until it can become the new stable.

>
> > Deploying stable and upgrading later. Again given the ability to deploy
> multiple OpenStack versions within the same Fuel version, teams focused on
> upgrades can take advantage of the latest enhancements in fuel to work the
> upgrade process more easily, as an added benefit this would eventually lead
> to better support for end user upgrades too.
>
> Using the same composition layers is not required for this. Also how it
> differs from using the current upgrade procedure? When you have, for
> instance, 7.0 release and then upgrade to 8.0, so basically result is the
> same - you have two releases in Fuel, 2 directories with manifests, 2 repos
> with packages.
>

> > Deploying older versions, in the odd case that we need to take advantage
> of older OpenStack releases like in the case of Kilo with a newer version
> of Fuel we can easily maintain that version too as we can keep the older
> cases around in the composition layer with out adding much burden on the
> other components.
>
> Using the same composition layers is not required for this, "we can keep
> the older cases around" in the composition layer of previous version.
>

Again, we lose compatibility with the data from nailgun by simply pulling
in the old composition layer, we lose all new features that the
composition layer exposes

>
> Also, how many releases we're going to support? All of them starting from
> Kilo? What about ISO size? What about CI, infra (required HW), acceptance
> testing, etc impact?
>

On average, two openstack releases would be supported: the version that this
fuel is being developed for, and the prior stable openstack release. There
is an abnormal exception for Kilo. For 9 I would propose Kilo 

Re: [openstack-dev] [all] Any projects using sqlalchemy-utils?

2016-02-12 Thread Haïkel
2016-02-12 21:57 GMT+01:00 Corey Bryant :
> Are any projects using sqlalchemy-utils?
>
> taskflow started using it recently, however it's only needed for a single
> type in taskflow (JSONType).  I'm wondering if it's worth the effort of
> maintaining it and its dependencies in Ubuntu main or if perhaps we can
> just revert this bit to define the JSONType internally.
>
> --
> Regards,
> Corey

gnocchi has been using it for a while.

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Andrew Laski


On Fri, Feb 12, 2016, at 04:03 PM, Doug Wiegley wrote:
> Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically
> gives it one nic with an IP, right?
> 
> So how is this changing that behavior? Making it harder to use, for the
> sake of preserving a really unusual corner case (no net with neutron),
> seems a much worse user experience here.

I was just going off of the behavior Matt described, that it was
possible to boot with no network by leaving that out of the request. If
the behavior already differs based on what networking backend is in use
then we've put users in a bad spot, and IMO furthers my case for having
explicit parameters in the request. I'm really not seeing how "--nic
auto|none" is any harder than leaving all network related parameters off
of the boot request, and it avoids potential confusion as to what will
happen.

> 
> Thanks,
> doug
> 
> 
> > On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
> > 
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> > 
> > On 02/12/2016 02:41 PM, Andrew Laski wrote:
> > 
> >>> I think if the point of the experience for this API is to be
> >>> working out
>  of the box. So I very much like the idea of a defaults change
>  here to the thing we want to be easy. And an explicit option to
>  disable it if you want to do something more complicated.
> > 
> >> I think this creates a potential for confusing behavior for users. 
> >> They're happily booting instances with no network on 2.1, as silly
> >> as that might be, and then decide "ooh I'd like to use fancy new
> >> flag foo which is available in 2.35". So they add the flag to their
> >> request and set the version to 2.35 and suddenly multiple things
> >> change about their boot process because they missed that 2.24(or
> >> whatever) changed a default behavior. If this fits within the scope
> >> of microversions then users will need to expect that, but it's
> >> something that would be likely to trip me up as a user of the API.
> > 
> > I agree - that's always been the trade-off with microversions. You
> > never get surprises, but you can't move to a new feature in 2.X
> > without also having to get everything that was also introduced in
> > 2.X-1 and before. The benefit, of course, is that the user will have
> > changed something explicitly before getting the new behavior, and at
> > least has a chance of figuring it out.
> > 
> > - -- 
> > 
> > - -- Ed Leafe
> > -BEGIN PGP SIGNATURE-
> > Version: GnuPG v2
> > Comment: GPGTools - https://gpgtools.org
> > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> > 
> > iQIcBAEBCgAGBQJWvkXeAAoJEKMgtcocwZqLdEwP/R36392zeHL55LR19ewoSg8/
> > U9MJEmZo2RiWaJBqWlRsBF5DSNzi7oNzhot8bOcY+aO7XQAs2kfG1QF9YMj/YfTw
> > iqsCtKNfdJZR1lNtq7u/TodkkFLP7gO8Q36efOYvAMIIlZlOoMAyvLWRxDGTGN+t
> > ahgnw2z6oQDpARb6Yx7NFjogjccTENdkuDNyLy/hmuUpvLyvhUDQC1EouVNHPglA
> > Sb8tQYSsdKDHrggs8b3XuJZjJXYvn0M4Knnw3i/0DAVoZamVfsnHJ1EWRfOh7hq3
> > +C+MJfzfyz5K46ikvjpuSGPPZ8rPPLR1gaih/W2fmXdvKG7NSK3sIUcgJZ7lm4rh
> > VpVDCRWi9rlPuJa4JIKlZ8h6NNMHwiEq8Ea+dP7lHnx0qp8EIxkPDBdU6sCmeUGM
> > tqBeHjUU7f8/fbZkOdorn1NAEZfXcRew3/BeFFxrmu6X8Z2XHHXMBBtlmehEoDHO
> > 6/BzZH3I/5VPcvFZQfsYYivBj3vtmB8cVLbUNpD3xBLyJKVFEwfmGkkYQTlL0KDx
> > B+bqNDw2pK72/qN39rjmdZY/cZ4vGfBGu2CzJxbX+Zn2E8Mgg5rAuARG0OCNg9ll
> > uuVBy37vbPNrNV9UZSkSjmRma/l8kl1IzBbszH0ENbH/ov3ngKB0xWiLc1pBZKC9
> > GcPgzIoclwLrVooRqOSf
> > =Dqga
> > -END PGP SIGNATURE-
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Doug Wiegley
Correct me if I’m wrong, but with nova-net, ‘nova boot’ automatically gives it 
one nic with an IP, right?

So how is this changing that behavior? Making it harder to use, for the sake of 
preserving a really unusual corner case (no net with neutron), seems a much 
worse user experience here.

Thanks,
doug


> On Feb 12, 2016, at 1:51 PM, Ed Leafe  wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
> 
> On 02/12/2016 02:41 PM, Andrew Laski wrote:
> 
>>> I think if the point of the experience for this API is to be
>>> working out
 of the box. So I very much like the idea of a defaults change
 here to the thing we want to be easy. And an explicit option to
 disable it if you want to do something more complicated.
> 
>> I think this creates a potential for confusing behavior for users. 
>> They're happily booting instances with no network on 2.1, as silly
>> as that might be, and then decide "ooh I'd like to use fancy new
>> flag foo which is available in 2.35". So they add the flag to their
>> request and set the version to 2.35 and suddenly multiple things
>> change about their boot process because they missed that 2.24(or
>> whatever) changed a default behavior. If this fits within the scope
>> of microversions then users will need to expect that, but it's
>> something that would be likely to trip me up as a user of the API.
> 
> I agree - that's always been the trade-off with microversions. You
> never get surprises, but you can't move to a new feature in 2.X
> without also having to get everything that was also introduced in
> 2.X-1 and before. The benefit, of course, is that the user will have
> changed something explicitly before getting the new behavior, and at
> least has a chance of figuring it out.
> 
> - -- 
> 
> - -- Ed Leafe
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
> Comment: GPGTools - https://gpgtools.org
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> 
> iQIcBAEBCgAGBQJWvkXeAAoJEKMgtcocwZqLdEwP/R36392zeHL55LR19ewoSg8/
> U9MJEmZo2RiWaJBqWlRsBF5DSNzi7oNzhot8bOcY+aO7XQAs2kfG1QF9YMj/YfTw
> iqsCtKNfdJZR1lNtq7u/TodkkFLP7gO8Q36efOYvAMIIlZlOoMAyvLWRxDGTGN+t
> ahgnw2z6oQDpARb6Yx7NFjogjccTENdkuDNyLy/hmuUpvLyvhUDQC1EouVNHPglA
> Sb8tQYSsdKDHrggs8b3XuJZjJXYvn0M4Knnw3i/0DAVoZamVfsnHJ1EWRfOh7hq3
> +C+MJfzfyz5K46ikvjpuSGPPZ8rPPLR1gaih/W2fmXdvKG7NSK3sIUcgJZ7lm4rh
> VpVDCRWi9rlPuJa4JIKlZ8h6NNMHwiEq8Ea+dP7lHnx0qp8EIxkPDBdU6sCmeUGM
> tqBeHjUU7f8/fbZkOdorn1NAEZfXcRew3/BeFFxrmu6X8Z2XHHXMBBtlmehEoDHO
> 6/BzZH3I/5VPcvFZQfsYYivBj3vtmB8cVLbUNpD3xBLyJKVFEwfmGkkYQTlL0KDx
> B+bqNDw2pK72/qN39rjmdZY/cZ4vGfBGu2CzJxbX+Zn2E8Mgg5rAuARG0OCNg9ll
> uuVBy37vbPNrNV9UZSkSjmRma/l8kl1IzBbszH0ENbH/ov3ngKB0xWiLc1pBZKC9
> GcPgzIoclwLrVooRqOSf
> =Dqga
> -END PGP SIGNATURE-
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Nate Johnston
A new contributor's point of view...

My first summit was Tokyo, so I have no comparison between how things are
and the way they used to be.  But by the end of Friday, I and the other
people I spoke to - both first-time participants and veterans - had the
same sentiment: we wished that the conference was restructured so that the
Design Summit was reordered or separated from the OpenStack Conference and
Expo section.

I felt this way because the Design Summit was placed at the rump end of the
Summit timeframe.  The days of the summit are such an intense firehose
directed at your brain that one of the pieces of advice given to new
attendees is to make sure to give yourself downtime so your brain can
process what you have seen.

Perhaps if the Design Summit was put first, when everyone was fresh, that
might show better results than having it afterwards, when cognitive fatigue
has set in?

Thanks,

--N.

On Sun, Feb 7, 2016 at 3:07 PM, Jay Pipes  wrote:

> Hello all,
>
> tl;dr
> =
>
> I have long thought that the OpenStack Summits have become too commercial
> and provide little value to the software engineers contributing to
> OpenStack.
>
> I propose the following:
>
> 1) Separate the design summits from the conferences
> 2) Hold only a single OpenStack conference per year
> 3) Return the design summit to being a low-key, low-cost working event
>
> details
> ===
>
> The design summits originally started out as working events. Developers
> got together in smallish rooms, arranged chairs in a fishbowl, and got to
> work planning and designing.
>
> With the OpenStack Summit growing more and more marketing- and
> sales-focused, the contributors attending the design summit are often
> unfocused. The precious little time that developers have to actually work
> on the next release planning is often interrupted or cut short by the large
> numbers of "suits" and salespeople at the conference event, many of which
> are peddling a product or pushing a corporate agenda.
>
> Many contributors submit talks to speak at the conference part of an
> OpenStack Summit because their company says it's the only way they will pay
> for them to attend the design summit. This is, IMHO, a terrible thing. The
> design summit is a *working* event. Companies that contribute to OpenStack
> projects should send their engineers to working events because that is
> where work is done, not so that their engineer can go give a talk about
> some vendor's agenda-item or newfangled product.
>
> Part of the reason that companies only send engineers who are giving a
> talk at the conference side is that the cost of attending the OpenStack
> Summit has become ludicrously expensive. Why have the events become so
> expensive? I can think of a few reasons:
>
> a) They are held every six months. I know of no other community or open
> source project that holds *conference-type* events every six months.
>
> b) They are held in extremely expensive hotels and conference centers
> because the number of attendees is so big.
>
> c) Because the conferences have become sales and marketing-focused events,
> companies shell out hundreds of thousands of dollars for schwag, for rented
> event people, for food and beverage sponsorships, for keynote slots, for
> lavish and often ridiculous parties, and more. This cost means less money
> to send engineers to the design summit to do actual work.
>
> I would love to see the OpenStack contributor community take back the
> design summit to its original format and purpose and decouple it from the
> OpenStack Summit's conference portion.
>
> I believe the design summits should be organized by the OpenStack
> contributor community, not the OpenStack Foundation and its marketing and
> event planning staff. This will allow lower-cost venues to be chosen that
> meet the needs only of the small group of active contributors, not of huge
> masses of conference attendees. This will allow contributor companies to
> send *more* engineers to *more* design summits, which is something that
> really needs to happen if we are to grow our active contributor pool.
>
> Once this decoupling occurs, I think that the OpenStack Summit should be
> renamed to the OpenStack Conference and Expo to better fit its purpose and
> focus. This Conference and Expo event really should be held once a year, in
> my opinion, and continue to be run by the OpenStack Foundation.
>
> I, for one, would welcome events that have no conference check-in area, no
> evening parties with 2000 people, no keynote and powerpoint-as-a-service
> sessions, and no getting pulled into sales meetings.
>
> OK, there, I said it.
>
> Thoughts? Criticism? Support? Suggestions welcome.
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

[openstack-dev] [all] Any projects using sqlalchemy-utils?

2016-02-12 Thread Corey Bryant
Are any projects using sqlalchemy-utils?

taskflow started using it recently, however it's only needed for a single
type in taskflow (JSONType).  I'm wondering if it's worth the effort of
maintaining it and its dependencies in Ubuntu main or if perhaps we can
just revert this bit to define the JSONType internally.
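
(For reference, "defining it internally" could be as small as something like
the following sketch. This is not actual taskflow code, just an illustration
of the idea:)

    # minimal JSON-in-a-text-column type, roughly what sqlalchemy-utils'
    # JSONType does on backends without native JSON support
    import json
    from sqlalchemy.types import TypeDecorator, Text

    class JSONType(TypeDecorator):
        impl = Text

        def process_bind_param(self, value, dialect):
            return json.dumps(value) if value is not None else None

        def process_result_value(self, value, dialect):
            return json.loads(value) if value is not None else None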

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-12 Thread Andrew Laski
Starting a new thread to continue a thought that came up in
http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
The Nova API microversion framework allows for backwards compatible and
backwards incompatible changes but there is no way to programmatically
distinguish the two. This means that as a user of the API I need to
understand every change between the version I'm using now and a new
version I would like to move to in case an intermediate version changes
default behaviors or removes something I'm currently using.

I would suggest that a more user friendly approach would be to
distinguish the two types of changes. Perhaps something like 2.x.y where
x is bumped for a backwards incompatible change and y is still
monotonically increasing regardless of bumps to x. So if the current
version is 2.2.7 a new backwards compatible change would bump to 2.2.8
or a new backwards incompatible change would bump to 2.3.8. As a user
this would allow me to fairly freely bump the version I'm consuming
until x changes at which point I need to take more care in moving to a
new version.
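
(A sketch of what that would let a client do; the scheme itself is only a
proposal at this point:)

    # under the proposed 2.x.y scheme a client could auto-advance as long as
    # the backwards-incompatible component (x) has not changed
    def parse(version):
        major, x, y = (int(part) for part in version.split('.'))
        return major, x, y

    def safe_to_auto_upgrade(current, candidate):
        return parse(current)[:2] == parse(candidate)[:2]

    print(safe_to_auto_upgrade("2.2.7", "2.2.8"))  # True: only y changed
    print(safe_to_auto_upgrade("2.2.7", "2.3.8"))  # False: x changed, incompatible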

Just wanted to throw the idea out to get some feedback. Or perhaps this
was already discussed and dismissed when microversions were added and I
just missed it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 02/12/2016 02:41 PM, Andrew Laski wrote:

>> I think if the point of the experience for this API is to be
>> working out
>>> of the box. So I very much like the idea of a defaults change
>>> here to the thing we want to be easy. And an explicit option to
>>> disable it if you want to do something more complicated.

> I think this creates a potential for confusing behavior for users. 
> They're happily booting instances with no network on 2.1, as silly
> as that might be, and then decide "ooh I'd like to use fancy new
> flag foo which is available in 2.35". So they add the flag to their
> request and set the version to 2.35 and suddenly multiple things
> change about their boot process because they missed that 2.24(or
> whatever) changed a default behavior. If this fits within the scope
> of microversions then users will need to expect that, but it's
> something that would be likely to trip me up as a user of the API.

I agree - that's always been the trade-off with microversions. You
never get surprises, but you can't move to a new feature in 2.X
without also having to get everything that was also introduced in
2.X-1 and before. The benefit, of course, is that the user will have
changed something explicitly before getting the new behavior, and at
least has a chance of figuring it out.
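
(Concretely, the explicit change is the client raising the microversion it
asks for on each request. A rough sketch, with the endpoint and token as
placeholders:)

    import requests

    headers = {
        "X-Auth-Token": "<token>",               # placeholder
        "X-OpenStack-Nova-API-Version": "2.35",  # explicit opt-in; omit it
                                                 # and you keep 2.1 behavior
    }
    resp = requests.get("http://controller:8774/v2.1/servers", headers=headers)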

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: GPGTools - https://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJWvkXeAAoJEKMgtcocwZqLdEwP/R36392zeHL55LR19ewoSg8/
U9MJEmZo2RiWaJBqWlRsBF5DSNzi7oNzhot8bOcY+aO7XQAs2kfG1QF9YMj/YfTw
iqsCtKNfdJZR1lNtq7u/TodkkFLP7gO8Q36efOYvAMIIlZlOoMAyvLWRxDGTGN+t
ahgnw2z6oQDpARb6Yx7NFjogjccTENdkuDNyLy/hmuUpvLyvhUDQC1EouVNHPglA
Sb8tQYSsdKDHrggs8b3XuJZjJXYvn0M4Knnw3i/0DAVoZamVfsnHJ1EWRfOh7hq3
+C+MJfzfyz5K46ikvjpuSGPPZ8rPPLR1gaih/W2fmXdvKG7NSK3sIUcgJZ7lm4rh
VpVDCRWi9rlPuJa4JIKlZ8h6NNMHwiEq8Ea+dP7lHnx0qp8EIxkPDBdU6sCmeUGM
tqBeHjUU7f8/fbZkOdorn1NAEZfXcRew3/BeFFxrmu6X8Z2XHHXMBBtlmehEoDHO
6/BzZH3I/5VPcvFZQfsYYivBj3vtmB8cVLbUNpD3xBLyJKVFEwfmGkkYQTlL0KDx
B+bqNDw2pK72/qN39rjmdZY/cZ4vGfBGu2CzJxbX+Zn2E8Mgg5rAuARG0OCNg9ll
uuVBy37vbPNrNV9UZSkSjmRma/l8kl1IzBbszH0ENbH/ov3ngKB0xWiLc1pBZKC9
GcPgzIoclwLrVooRqOSf
=Dqga
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Andrew Laski


On Fri, Feb 12, 2016, at 03:11 PM, Sean Dague wrote:
> On 02/12/2016 02:19 PM, Andrew Laski wrote:
> > 
> > 
> > On Fri, Feb 12, 2016, at 01:45 PM, John Garbutt wrote:
> >> On 12 February 2016 at 18:17, Andrew Laski  wrote:
> >>>
> >>>
> >>> On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
>  Forgive me for thinking out loud, but I'm trying to sort out how nova
>  would use a microversion in the nova API for the get-me-a-network
>  feature recently added to neutron [1] and planned to be leveraged in
>  nova (there isn't a spec yet for nova, I'm trying to sort this out for a
>  draft).
> 
>  Originally I was thinking that a network is required for nova boot, so
>  we'd simply check for a microversion and allow not specifying a network,
>  easy peasy.
> 
>  Turns out you can boot an instance in nova (with neutron as the network
>  backend) without a network. All you get is a measly debug log message in
>  the compute logs [2]. That's kind of useless though and seems silly.
> 
>  I haven't tested this out yet to confirm, but I suspect that if you
>  create a nova instance w/o a network, you can later try to attach a
>  network using the os-attach-interfaces API as long as you either provide
>  a network ID *or* there is a public shared network or the tenant has a
>  network at that point (nova looks those up if a specific network ID
>  isn't provided).
> 
>  The high-level plan for get-me-a-network in nova was simply going to be
>  if the user tries to boot an instance and doesn't provide a network, and
>  there isn't a tenant network or public shared network to default to,
>  then nova would call neutron's new auto-allocated-topology API to get a
>  network. This, however, is a behavior change.
> 
>  So I guess the question now is how do we handle that behavior change in
>  the nova API?
> 
>  We could add an auto-create-net boolean to the boot server request which
>  would only be available in a microversion, then we could check that
>  boolean in the compute API when we're doing network validation.
> 
> >>>
> >>> I think a flag like this is the right approach. If it's currently valid
> >>> to boot an instance without a network than there needs to be something
> >>> to distinguish a request that wants a network created vs. a request that
> >>> doesn't want a network.
> >>>
> >>> This is still hugely useful if all that's required from a user is to
> >>> indicate that they would like a network, they still don't need to
> >>> understand/provide details of the network.
> >>
> >> I was thinking a sort of opposite. Would this work?
> >>
> >> We add a new micro-version that does this:
> >> * nothing specified: do the best we can to get a port created
> >> (get-me-a-network, etc,), or fail if not possible
> >> * --no-nics option (or similar) that says "please don't give me any nics"
> >>
> >> This means folks that don't want a network, reliably have a way to do
> >> that. For everyone else, we do the same thing when using either
> >> neutron or nova-network VLAN manager.
> > 
> > I think this pushes our microversions meaning a bit further than
> > intended. I don't think the API should flip behaviors simply based on a
> > microversion.
> > 
> > What about requiring nic information with the microversion? Make users
> > indicate explicitly if they want a network or not and avoid changing a
> > default behavior.
> 
> I think changing default behavior like that is totally within bounds,
> and was part of the original design point of microversions (and why you
> have to opt into them). So people that don't want a network and go past
> that boundary know to start saying "hands-off".
> 
> I think if the point of the experience for this API is to be working out
> of the box. So I very much like the idea of a defaults change here to
> the thing we want to be easy. And an explicit option to disable it if
> you want to do something more complicated.

I think this creates a potential for confusing behavior for users.
They're happily booting instances with no network on 2.1, as silly as
that might be, and then decide "ooh I'd like to use fancy new flag foo
which is available in 2.35". So they add the flag to their request and
set the version to 2.35 and suddenly multiple things change about their
boot process because they missed that 2.24(or whatever) changed a
default behavior. If this fits within the scope of microversions then
users will need to expect that, but it's something that would be likely
to trip me up as a user of the API.


> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__

Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-12 Thread Ildikó Váncsa
Hi Walt,

Thanks for describing the bigger picture.

In my opinion, once we have microversion support available in Cinder, that 
will give us a bit of freedom and also the possibility to handle these 
difficulties.

Regarding terminate_connection, we will have issues with live_migration as it is 
today. We need to figure out what information would be best to feed back to 
Cinder from Nova, and what API we would need once we are able to introduce it in 
a safe way. I still see benefit in storing the connection_info for the 
attachments.

Also I think the multiattach support should be disabled for the problematic 
drivers like lvm, until we have a solution for proper detach on the whole 
call chain.

Best Regards,
Ildikó

> -Original Message-
> From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
> Sent: February 11, 2016 18:31
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to 
> call os-brick's connector.disconnect_volume
> 
> There seem to be a few discussions going on here wrt detaches.   One
> is what to do on the Nova side with calling os-brick's disconnect_volume, and 
> also when to or not to call Cinder's
> terminate_connection and detach.
> 
> My original post was simply to discuss a mechanism to try and figure out the 
> first problem.  When should nova call brick to remove the
> local volume, prior to calling Cinder to do something.
> 
> Nova needs to know if it's safe to call disconnect_volume or not. Cinder 
> already tracks each attachment, and it can return the
> connection_info
> for each attachment with a call to initialize_connection.   If 2 of
> those connection_info dicts are the same, it's a shared volume/target.
> Don't call disconnect_volume if there are any more of those left.
> 
> On the Cinder side of things, if terminate_connection, detach is called, the 
> volume manager can find the list of attachments for a
> volume, and compare that to the attachments on a host.  The problem is, 
> Cinder doesn't track the host along with the instance_uuid in
> the attachments table.  I plan on allowing that as an API change after 
> microversions lands, so we know how many times a volume is
> attached/used on a particular host.  The driver can decide what to do with it 
> at
> terminate_connection, detach time. This helps account for
> the differences in each of the Cinder backends, which we will never get all 
> aligned to the same model.  Each array/backend handles
> attachments different and only the driver knows if it's safe to remove the 
> target or not, depending on how many attachments/usages
> it has
> on the host itself.   This is the same thing as a reference counter,
> which we don't need, because we have the count in the attachments table, once 
> we allow setting the host and the instance_uuid at
> the same time.
> 
> Walt
> > On Tue, Feb 09, 2016 at 11:49:33AM -0800, Walter A. Boring IV wrote:
> >> Hey folks,
> >> One of the challenges we have faced with the ability to attach a
> >> single volume to multiple instances, is how to correctly detach that
> >> volume.  The issue is a bit complex, but I'll try and explain the
> >> problem, and then describe one approach to solving one part of the detach 
> >> puzzle.
> >>
> >> Problem:
> >>When a volume is attached to multiple instances on the same host.
> >> There are 2 scenarios here.
> >>
> >>1) Some Cinder drivers export a new target for every attachment on
> >> a compute host.  This means that you will get a new unique volume
> >> path on a host, which is then handed off to the VM instance.
> >>
> >>2) Other Cinder drivers export a single target for all instances
> >> on a compute host.  This means that every instance on a single host,
> >> will reuse the same host volume path.
> >
> > This problem isn't actually new. It is a problem we already have in
> > Nova even with single attachments per volume.  eg, with NFS and SMBFS
> > there is a single mount setup on the host, which can serve up multiple 
> > volumes.
> > We have to avoid unmounting that until no VM is using any volume
> > provided by that mount point. Except we pretend the problem doesn't
> > exist and just try to unmount every single time a VM stops, and rely
> > on the kernel failing umount() with EBUSY.  Except this has a race
> > condition if one VM is stopping right as another VM is starting
> >
> > There is a patch up to try to solve this for SMBFS:
> >
> > https://review.openstack.org/#/c/187619/
> >
> > but I don't really much like it, because it only solves it for one
> > driver.
> >
> > I think we need a general solution that solves the problem for all
> > cases, including multi-attach.
> >
> > AFAICT, the only real answer here is to have nova record more info
> > about volume attachments, so it can reliably decide when it is safe to
> > release a connection on the host.
> >
> >
> >> Proposed solution:
> >>Nova needs to d
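
As a rough illustration of the bookkeeping discussed above, here is a small
sketch of the two checks being proposed: the Nova-side "is this target
shared?" test built from the connection_info returned by
initialize_connection, and the Cinder-side per-host attachment count. The
function and field names are hypothetical, not actual Nova or Cinder code.

    def safe_to_disconnect(this_connection_info, other_connection_infos):
        """Nova side: only call os-brick's disconnect_volume if no other
        attachment on this host was given the same connection_info,
        i.e. the target is not shared."""
        return not any(info == this_connection_info
                       for info in other_connection_infos)

    def attachments_on_host(attachments, host):
        """Cinder side: once the attachments table records the host as
        well as the instance_uuid, the driver can count how many times a
        volume is still in use on a given host before deciding to remove
        the target in terminate_connection."""
        return sum(1 for a in attachments if a.get('host') == host)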

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Sean Dague
On 02/12/2016 02:19 PM, Andrew Laski wrote:
> 
> 
> On Fri, Feb 12, 2016, at 01:45 PM, John Garbutt wrote:
>> On 12 February 2016 at 18:17, Andrew Laski  wrote:
>>>
>>>
>>> On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
 Forgive me for thinking out loud, but I'm trying to sort out how nova
 would use a microversion in the nova API for the get-me-a-network
 feature recently added to neutron [1] and planned to be leveraged in
 nova (there isn't a spec yet for nova, I'm trying to sort this out for a
 draft).

 Originally I was thinking that a network is required for nova boot, so
 we'd simply check for a microversion and allow not specifying a network,
 easy peasy.

 Turns out you can boot an instance in nova (with neutron as the network
 backend) without a network. All you get is a measly debug log message in
 the compute logs [2]. That's kind of useless though and seems silly.

 I haven't tested this out yet to confirm, but I suspect that if you
 create a nova instance w/o a network, you can later try to attach a
 network using the os-attach-interfaces API as long as you either provide
 a network ID *or* there is a public shared network or the tenant has a
 network at that point (nova looks those up if a specific network ID
 isn't provided).

 The high-level plan for get-me-a-network in nova was simply going to be
 if the user tries to boot an instance and doesn't provide a network, and
 there isn't a tenant network or public shared network to default to,
 then nova would call neutron's new auto-allocated-topology API to get a
 network. This, however, is a behavior change.

 So I guess the question now is how do we handle that behavior change in
 the nova API?

 We could add an auto-create-net boolean to the boot server request which
 would only be available in a microversion, then we could check that
 boolean in the compute API when we're doing network validation.

>>>
>>> I think a flag like this is the right approach. If it's currently valid
>>> to boot an instance without a network then there needs to be something
>>> to distinguish a request that wants a network created vs. a request that
>>> doesn't want a network.
>>>
>>> This is still hugely useful if all that's required from a user is to
>>> indicate that they would like a network, they still don't need to
>>> understand/provide details of the network.
>>
>> I was thinking a sort of opposite. Would this work?
>>
>> We add a new micro-version that does this:
>> * nothing specified: do the best we can to get a port created
>> (get-me-a-network, etc,), or fail if not possible
>> * --no-nics option (or similar) that says "please don't give me any nics"
>>
>> This means folks that don't want a network, reliably have a way to do
>> that. For everyone else, we do the same thing when using either
>> neutron or nova-network VLAN manager.
> 
> I think this pushes our microversions meaning a bit further than
> intended. I don't think the API should flip behaviors simply based on a
> microversion.
> 
> What about requiring nic information with the microversion? Make users
> indicate explicitly if they want a network or not and avoid changing a
> default behavior.

I think changing default behavior like that is totally within bounds,
and was part of the original design point of microversions (and why you
have to opt into them). So people that don't want a network and go past
that boundary know to start saying "hands-off".

I think the point of the experience for this API is to work out of the
box. So I very much like the idea of a defaults change here to the thing
we want to be easy, and an explicit option to disable it if you want to
do something more complicated.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Armando M.
On 12 February 2016 at 11:08, Matt Riedemann 
wrote:

>
>
> On 2/12/2016 12:44 PM, Armando M. wrote:
>
>>
>>
>> On 12 February 2016 at 09:15, Matt Riedemann wrote:
>>
>> Forgive me for thinking out loud, but I'm trying to sort out how
>> nova would use a microversion in the nova API for the
>> get-me-a-network feature recently added to neutron [1] and planned
>> to be leveraged in nova (there isn't a spec yet for nova, I'm trying
>> to sort this out for a draft).
>>
>> Originally I was thinking that a network is required for nova boot,
>> so we'd simply check for a microversion and allow not specifying a
>> network, easy peasy.
>>
>> Turns out you can boot an instance in nova (with neutron as the
>> network backend) without a network. All you get is a measly debug
>> log message in the compute logs [2]. That's kind of useless though
>> and seems silly.
>>
>>
>> Incidentally, I was checking this out with Horizon, and the dashboard
>> instance boot workflow doesn't let you proceed without specifying a
>> network (irrespective of the network backend). So if the user has no
>> networks, he/she is stuck and has to flip to the CLI. Nice,
>> uh?
>>
>>
>> I haven't tested this out yet to confirm, but I suspect that if you
>> create a nova instance w/o a network, you can later try to attach a
>> network using the os-attach-interfaces API as long as you either
>> provide a network ID *or* there is a public shared network or the
>> tenant has a network at that point (nova looks those up if a
>> specific network ID isn't provided).
>>
>>
>> Just to make sure we're on the same page: if you're referring to 'public
>> shared network' as the devstack's provisioned network called 'public',
>> that's technically not shared and it represents your floating IP pool. A
>> user can explicitly boot VM's on it, but that's not to be confused with
>> a 'shared' provider network.
>>
>
> I was referring to this code:
>
>
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L217-L223
>
>
Ok I am with you: the public in the comment is somewhat misleading because
there's nothing 'public' about a shared network and as a matter of fact
RBAC [1] allows for networks to be shared only to a subset of tenants.

[1]
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html


>
>> That said, I tried the workflow of booting a vm without networks and
>> trying to attach an interface without specifying anything and I got a
>> 500 [1]. Error aside, I think it would be erroneous to expect the
>> attach command to accept no networks (and still pick one), when the boot
>> command doesn't.
>>
>> [1] http://paste.openstack.org/show/486856/
>>
>
> Cool, yeah, I was totally expecting an IndexError because of the code here:
>
>
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L610
>
>
>
>>
>> The high-level plan for get-me-a-network in nova was simply going to
>> be if the user tries to boot an instance and doesn't provide a
>> network, and there isn't a tenant network or public shared network
>>
>> to default to, then nova would call neutron's new
>> auto-allocated-topology API to get a network. This, however, is a
>> behavior change.
>>
>>
>> I assume that for you 'public shared network' it's not the public
>> network as available in DevStack, because I don't believe that
>> user can boot VM's on that network automatically.
>>
>>
>> So I guess the question now is how do we handle that behavior change
>> in the nova API?
>>
>> We could add an auto-create-net boolean to the boot server request
>> which would only be available in a microversion, then we could check
>> that boolean in the compute API when we're doing network validation.
>>
>> Today if you don't specify a network and don't have a network
>> available, then the validation in the API is basically just quota
>> checking that you can get at least one port in your tenant [3]. With
>> a flag on a microversion, we could also validate some other things
>> about auto-creating a network (if we know that's going to be the
>> case once we hit the compute).
>>
>> Anyway, this is mostly me getting thoughts out of my head before the
>> weekend so I don't forget it and am looking for other ideas here or
>> things I might be missing.
>>
>>
>> John and I just finished talking about this a bit more and I think the
>> thought process led us to this conclusion:
>>
>>  From Horizon, we could provide a 'get-me-a-network' button on the
>> Networks wizard for the boot workflow. If the user doesn't see any
>> Networks available he/she can hit the button, see the network being
>> pre-populated and choose to proceed, instead of going back to the
>> Network panel and do th

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread James Bottomley
On Fri, 2016-02-12 at 13:26 -0500, Eoghan Glynn wrote:
> 
> > > > > [...]
> > > > >   * much of the problem with the lavish parties is IMO
> > > > > related to
> > > > > the
> > > > > *exclusivity* of certain shindigs, as opposed to devs
> > > > > socializing at
> > > > > summit being inappropriate per se. In that vein, I think
> > > > > the
> > > > > cores
> > > > > party sends the wrong message and has run its course,
> > > > > while
> > > > > the TC
> > > > > dinner ... well, maybe Austin is the time to show some
> > > > > leadership
> > > > > on that? ;)
> > > > 
> > > > Well, Tokyo was the time to show some leadership on that --
> > > > there
> > > > was no "TC dinner" there :)
> > > 
> > > Excellent, that is/was indeed a positive step :)
> > > 
> > > For the cores party, much as I enjoyed the First Nation cuisine
> > > in
> > > Vancouver or the performance art in Tokyo, IMO it's probably time
> > > to
> > > draw a line under that excess also, as it too projects a notion
> > > of
> > > exclusivity that runs counter to building a community.
> > 
> > Are you sure you're concentrating on the right problem? 
> >  Communities
> > are naturally striated in terms of leadership.  In principle,
> > there's
> > nothing wrong with "exclusive" events that appear to be rewarding
> > the
> > higher striations, especially if it acts as an incentive to people
> > to
> > move up.  It's only actually "elitist" if you reward the top and
> > there's no real way to move up there from the bottom.  You also
> > want to
> > be careful about being pejorative; after all the same principle
> > would
> > apply to the Board Dinner as well.
> > 
> > I think the correct question to ask would be "does the cash spent
> > on
> > the TC party provide a return on investment either as an incentive
> > to
> > become a TC or to facilitate communications among TC members?". 
> >  If
> > you answer no to that, then eliminate it.
> 
> Well the cash spent on those two events is not my concern at all, as
> both are privately sponsored by an OpenStack vendor as opposed to 
> being paid for by the Foundation (IIUC). So in that sense, it's not 
> like the events are consuming "community funds" for which I'm 
> demanding an RoI. Vendor's marketing dollars, so the return is their
> own concern.
> 
> Neither am I against partying devs in general, seems like a useful
> ice-breaker at summit, just like at most other tech conferences.
> 
> My objection, FWIW, is simply around the "Upstairs, Downstairs" feel
> to such events (or if you're not old enough to have watched the BBC
> in the 1970s, maybe Downton Abbey would be more familiar).

Well, I'm old enough to remember it, yes.  One of the ironies the
series was pointing out was that the social striations usually got
mirrored more strongly downstairs than upstairs (Hudson was a more
jealous guardian of Mr Bellamy's social status than the latter was).

> Honestly I don't know of any communication between two cores at a +2
> party that couldn't have just as easily happened surrounded by other
> contributors. Nor, I hope, does anyone put in the substantial 
> reviewing effort required to become a core in order to score a few 
> free beers and see some local entertainment. Similarly for the TC, 
> one would hope that dinner doesn't figure in the system incentives 
> that drives folks to throw their hat into the ring.

Heh, you'd be surprised.

I don't object to the proposal, just the implication that there's
something wrong with parties for specific groups: we did abandon the
speaker party at Plumbers because the separation didn't seem to be
useful and concentrated instead on doing a great party for everyone.

> In any case, I've derailed the main thrust of the discussion here,
> which I believe could be summed up by:
>
>   "let's dial down the glitz a notch, and get back to basics"
> 
> That sentiment I support in general, but I'd just be more selective
> as to which social events should be first in line to be culled in
> order to create a better atmosphere at summit.
> 
> And I'd be far more concerned about getting the choice of location,
> cadence, attendees, and format right, than in questions of who drinks
> with whom.

OK, so here's a proposal, why not reinvent the Cores party as a Meet
the Cores Party instead (open to all design summit attendees)?  Just
make sure it's advertised in a way that could only possibly appeal to
design summit attendees (so the suits don't want to go), use the same
budget (which will necessitate a dial down) and it becomes an inclusive
event that serves a useful purpose.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Guz Egor
Hongbin,
I am not sure that it's a good idea; it looks like you propose that Magnum enter
the "schedulers war" (personally I am tired of these debates: Mesos vs Kubernetes
vs Swarm). If your concern is just utilization, you can always run the control
plane on "agent/slave" nodes. The main reason why operators (at least in our case)
keep them separate is that they need different attention (e.g. I almost don't care
why/when an "agent/slave" node died, but I always double check that a master node
was repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when
developers want to run just a docker container without installing anything locally
(e.g. docker-machine). But in most cases it's just examples from the internet or
their own experiments.

But we definitely should discuss it during the midcycle next week.

--- Egor
From: Hongbin Lu
To: OpenStack Development Mailing List (not for usage questions)
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on container
resource [1] reminded me of the use case Kris mentioned below. I am going to
propose a preliminary idea to address the use case. Of course, we could
continue the discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only
minion/worker/slave nodes (no master nodes).

Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips,
floating ips, etc.)

Details: Traditional COEs (k8s/swarm/mesos) consist of master nodes and
worker nodes. In these COEs, control services (i.e. the scheduler) run on
master nodes, and containers run on worker nodes. If we can port the COE
control services to the Magnum control plane and share them with all tenants,
we eliminate the need for master nodes, thus improving resource utilization.
In the new COE, users create/manage containers through Magnum API endpoints.
Magnum is responsible for spinning up tenant VMs, scheduling containers to
the VMs, and managing the life-cycle of those containers. Unlike other COEs,
containers created by this COE are considered OpenStack-managed resources.
That means they will be tracked in the Magnum DB, and accessible by other
OpenStack services (i.e. Horizon, Heat, etc.).

What do you feel about this proposal? Let's discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin
  From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers company
wide at Godaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past experience
tells me this won't be practical/scale; however, from experience I also know
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of the
projects are currently doing some form of containers on their own, with more
joining every day.  If all of these projects were to convert over to the current
magnum configuration we would suddenly be attempting to support/configure ~1k
magnum clusters.  Considering that everyone will want it HA, we are looking at a
minimum of 2 kube nodes per cluster + lbaas vips + floating ips.  From a capacity
standpoint this is an excessive amount of duplicated infrastructure to spin up in
projects where people may be running 10–20 containers per project.  From an
operator support perspective this is a special level of hell that I do not want
to get into.  Even if I am off by 75%, 250 still sucks.

From my point of view an ideal use case for companies like ours (yahoo/godaddy)
would be to be able to support hierarchical projects in magnum.  That way we
could create a project for each department, and then the subteams of those
departments can have their own projects.  We create a bay per department.
Sub-projects, if they want to, can support creation of their own bays (but
support of the kube cluster would then fall to that team).  When a sub-project
spins up a pod on a bay, minions get created inside that team's sub-projects and
the containers in that pod run on the capacity that was spun up under that
project; the minions for each pod would be in a scaling group and as such
grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable, number of
kube clusters, give people who can't/don't want to fall in line with the provided
resource a way to make their own, and still offer a "good enough for a single
company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-t

Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

2016-02-12 Thread Ian Cordasco
-Original Message-
From: Victor Stinner 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 12, 2016 at 10:43:42
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

> Le 12/02/2016 15:12, Ian Cordasco a écrit :
> > Just to interject here, RFC 2616 is the one you're talking about. That
> > encoding requirement was dropped when HTTP/1.1 was updated in the
> > 7230-7235 RFCs. (...)
> > Where VCHAR is any visible US ASCII character. So while UTF-8 is still
> > a bad idea for the header value (and in fact, http.client on Python 3
> > will auto-encode headers to Latin 1) Latin 1 is no longer the
> > requirement.
> >
> > For those interested, you can read up on headers in HTTP/1.1 here:
> > https://tools.ietf.org/html/rfc7230#section-3.2
>  
> Oh thanks, it looks like my HTTP skills are rusty :-)
>  
> For Swift, it's maybe better to always try to use UTF-8, but fallback to
> Latin1 if an HTTP header cannot be decoded from UTF-8. Swift has many clients
> implemented in various programming languages, I'm not sure that all
> clients use UTF-8.

I also don't mean to imply that people actually follow the RFCs. ;-)
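
For what it's worth, the UTF-8-first-with-Latin-1-fallback idea Victor
describes could look roughly like the sketch below (just an illustration,
not Swift code; Latin-1 can decode any byte sequence, so the fallback never
fails):

    def decode_header_value(raw):
        """Decode an HTTP header value from bytes, preferring UTF-8 and
        falling back to Latin-1 if the bytes are not valid UTF-8."""
        try:
            return raw.decode('utf-8')
        except UnicodeDecodeError:
            return raw.decode('latin-1')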

> By the way, https://review.openstack.org/#/c/237027/ was merged, cool.

Woot!

> I fixed the third patch mentioned in my previous email to support
> arbitrary byte strings for hash prefix and suffix in the configuration file:
> https://review.openstack.org/#/c/236998/
>  
> I also updated my HTTP parser patch for Python 3. With these two
> patches, test_utils now pass on Python 3.
> https://review.openstack.org/#/c/237042/
>  
> For me, it's a nice milestone to have test_utils working on Python 3. It
> allows us to port more interesting stuff and start the real porting work ;-)

I've said it before, but allow me to say it again - Thank you, Victor, for your 
tireless efforts to get OpenStack onto Python 3.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Andrew Laski


On Fri, Feb 12, 2016, at 01:45 PM, John Garbutt wrote:
> On 12 February 2016 at 18:17, Andrew Laski  wrote:
> >
> >
> > On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
> >> Forgive me for thinking out loud, but I'm trying to sort out how nova
> >> would use a microversion in the nova API for the get-me-a-network
> >> feature recently added to neutron [1] and planned to be leveraged in
> >> nova (there isn't a spec yet for nova, I'm trying to sort this out for a
> >> draft).
> >>
> >> Originally I was thinking that a network is required for nova boot, so
> >> we'd simply check for a microversion and allow not specifying a network,
> >> easy peasy.
> >>
> >> Turns out you can boot an instance in nova (with neutron as the network
> >> backend) without a network. All you get is a measly debug log message in
> >> the compute logs [2]. That's kind of useless though and seems silly.
> >>
> >> I haven't tested this out yet to confirm, but I suspect that if you
> >> create a nova instance w/o a network, you can later try to attach a
> >> network using the os-attach-interfaces API as long as you either provide
> >> a network ID *or* there is a public shared network or the tenant has a
> >> network at that point (nova looks those up if a specific network ID
> >> isn't provided).
> >>
> >> The high-level plan for get-me-a-network in nova was simply going to be
> >> if the user tries to boot an instance and doesn't provide a network, and
> >> there isn't a tenant network or public shared network to default to,
> >> then nova would call neutron's new auto-allocated-topology API to get a
> >> network. This, however, is a behavior change.
> >>
> >> So I guess the question now is how do we handle that behavior change in
> >> the nova API?
> >>
> >> We could add an auto-create-net boolean to the boot server request which
> >> would only be available in a microversion, then we could check that
> >> boolean in the compute API when we're doing network validation.
> >>
> >
> > I think a flag like this is the right approach. If it's currently valid
> > to boot an instance without a network then there needs to be something
> > to distinguish a request that wants a network created vs. a request that
> > doesn't want a network.
> >
> > This is still hugely useful if all that's required from a user is to
> > indicate that they would like a network, they still don't need to
> > understand/provide details of the network.
> 
> I was thinking a sort of opposite. Would this work?
> 
> We add a new micro-version that does this:
> * nothing specified: do the best we can to get a port created
> (get-me-a-network, etc,), or fail if not possible
> * --no-nics option (or similar) that says "please don't give me any nics"
> 
> This means folks that don't want a network, reliably have a way to do
> that. For everyone else, we do the same thing when using either
> neutron or nova-network VLAN manager.

I think this pushes our microversions meaning a bit further than
intended. I don't think the API should flip behaviors simply based on a
microversion.

What about requiring nic information with the microversion? Make users
indicate explicitly if they want a network or not and avoid changing a
default behavior.


> 
> Thanks,
> johnthetubaguy
> 
> PS
> I think we should focus on the horizon experience, CLI experience, and
> API experience separately, for a moment, to make sure each of those
> cases actually works out OK.
> 
> >> Today if you don't specify a network and don't have a network available,
> >> then the validation in the API is basically just quota checking that you
> >> can get at least one port in your tenant [3]. With a flag on a
> >> microversion, we could also validate some other things about
> >> auto-creating a network (if we know that's going to be the case once we
> >> hit the compute).
> >>
> >> Anyway, this is mostly me getting thoughts out of my head before the
> >> weekend so I don't forget it and am looking for other ideas here or
> >> things I might be missing.
> >>
> >> [1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
> >> [2]
> >> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
> >> [3]
> >> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
> >>
> >> --
> >>
> >> Thanks,
> >>
> >> Matt Riedemann
> >>
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/c

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Doug Wiegley

> On Feb 12, 2016, at 12:03 PM, Matt Riedemann  
> wrote:
> 
> 
> 
> On 2/12/2016 12:45 PM, John Garbutt wrote:
>> On 12 February 2016 at 18:17, Andrew Laski  wrote:
>>> 
>>> 
>>> On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
 Forgive me for thinking out loud, but I'm trying to sort out how nova
 would use a microversion in the nova API for the get-me-a-network
 feature recently added to neutron [1] and planned to be leveraged in
 nova (there isn't a spec yet for nova, I'm trying to sort this out for a
 draft).
 
 Originally I was thinking that a network is required for nova boot, so
 we'd simply check for a microversion and allow not specifying a network,
 easy peasy.
 
 Turns out you can boot an instance in nova (with neutron as the network
 backend) without a network. All you get is a measly debug log message in
 the compute logs [2]. That's kind of useless though and seems silly.
 
 I haven't tested this out yet to confirm, but I suspect that if you
 create a nova instance w/o a network, you can later try to attach a
 network using the os-attach-interfaces API as long as you either provide
 a network ID *or* there is a public shared network or the tenant has a
 network at that point (nova looks those up if a specific network ID
 isn't provided).
 
 The high-level plan for get-me-a-network in nova was simply going to be
 if the user tries to boot an instance and doesn't provide a network, and
 there isn't a tenant network or public shared network to default to,
 then nova would call neutron's new auto-allocated-topology API to get a
 network. This, however, is a behavior change.
 
 So I guess the question now is how do we handle that behavior change in
 the nova API?
 
 We could add an auto-create-net boolean to the boot server request which
 would only be available in a microversion, then we could check that
 boolean in the compute API when we're doing network validation.
 
>>> 
>>> I think a flag like this is the right approach. If it's currently valid
>>> to boot an instance without a network then there needs to be something
>>> to distinguish a request that wants a network created vs. a request that
>>> doesn't want a network.
>>> 
>>> This is still hugely useful if all that's required from a user is to
>>> indicate that they would like a network, they still don't need to
>>> understand/provide details of the network.
>> 
>> I was thinking a sort of opposite. Would this work?
>> 
>> We add a new micro-version that does this:
>> * nothing specified: do the best we can to get a port created
>> (get-me-a-network, etc,), or fail if not possible
>> * --no-nics option (or similar) that says "please don't give me any nics"
>> 
>> This means folks that don't want a network, reliably have a way to do
>> that. For everyone else, we do the same thing when using either
>> neutron or nova-network VLAN manager.
>> 
>> Thanks,
>> johnthetubaguy
>> 
>> PS
>> I think we should focus on the horizon experience, CLI experience, and
>> API experience separately, for a moment, to make sure each of those
>> cases actually works out OK.
>> 
 Today if you don't specify a network and don't have a network available,
 then the validation in the API is basically just quota checking that you
 can get at least one port in your tenant [3]. With a flag on a
 microversion, we could also validate some other things about
 auto-creating a network (if we know that's going to be the case once we
 hit the compute).
 
 Anyway, this is mostly me getting thoughts out of my head before the
 weekend so I don't forget it and am looking for other ideas here or
 things I might be missing.
 
 [1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
 [2]
 https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
 [3]
 https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ.

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Matt Riedemann



On 2/12/2016 12:44 PM, Armando M. wrote:



On 12 February 2016 at 09:15, Matt Riedemann <mrie...@linux.vnet.ibm.com> wrote:

Forgive me for thinking out loud, but I'm trying to sort out how
nova would use a microversion in the nova API for the
get-me-a-network feature recently added to neutron [1] and planned
to be leveraged in nova (there isn't a spec yet for nova, I'm trying
to sort this out for a draft).

Originally I was thinking that a network is required for nova boot,
so we'd simply check for a microversion and allow not specifying a
network, easy peasy.

Turns out you can boot an instance in nova (with neutron as the
network backend) without a network. All you get is a measly debug
log message in the compute logs [2]. That's kind of useless though
and seems silly.


Incidentally, I was checking this out with Horizon, and the dashboard
instance boot workflow doesn't let you proceed without specifying a
network (irrespective of the network backend). So if the user has no
networks, he/she is stuck and has to flip to the CLI. Nice,
uh?


I haven't tested this out yet to confirm, but I suspect that if you
create a nova instance w/o a network, you can later try to attach a
network using the os-attach-interfaces API as long as you either
provide a network ID *or* there is a public shared network or the
tenant has a network at that point (nova looks those up if a
specific network ID isn't provided).


Just to make sure we're on the same page: if you're referring to 'public
shared network' as the devstack's provisioned network called 'public',
that's technically not shared and it represents your floating IP pool. A
user can explicitly boot VM's on it, but that's not to be confused with
a 'shared' provider network.


I was referring to this code:

https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L217-L223



That said, I tried the workflow of booting a vm without networks and
trying to attach an interface without specifying anything and I got a
500 [1]. Error aside, I think it would be erroneous to expect the
attach command to accept no networks (and still pick one), when the boot
command doesn't.

[1] http://paste.openstack.org/show/486856/


Cool, yeah, I was totally expecting an IndexError because of the code here:

https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L610




The high-level plan for get-me-a-network in nova was simply going to
be if the user tries to boot an instance and doesn't provide a
network, and there isn't a tenant network or public shared network

to default to, then nova would call neutron's new
auto-allocated-topology API to get a network. This, however, is a
behavior change.


I assume that for you 'public shared network' it's not the public
network as available in DevStack, because I don't believe that
user can boot VM's on that network automatically.


So I guess the question now is how do we handle that behavior change
in the nova API?

We could add an auto-create-net boolean to the boot server request
which would only be available in a microversion, then we could check
that boolean in the compute API when we're doing network validation.

Today if you don't specify a network and don't have a network
available, then the validation in the API is basically just quota
checking that you can get at least one port in your tenant [3]. With
a flag on a microversion, we could also validate some other things
about auto-creating a network (if we know that's going to be the
case once we hit the compute).

Anyway, this is mostly me getting thoughts out of my head before the
weekend so I don't forget it and am looking for other ideas here or
things I might be missing.


John and I just finished talking about this a bit more and I think the
thought process led us to this conclusion:

 From Horizon, we could provide a 'get-me-a-network' button on the
Networks wizard for the boot workflow. If the user doesn't see any
Networks available he/she can hit the button, see the network being
pre-populated and choose to proceed, instead of going back to the
Network panel and do the entire workflow.

As for Nova, we could introduce a new micro-version that changes the
behavior of nova boot without networks. In this case, if the tenant has
access to no networks, one will be created for him/her and the VM will
boot off of it.

On the other end, if the user does want a VM without nics, he/she should
be explicit about this and specify 'no-nic' boolean, e.g.

   nova boot <server-name> --flavor <flavor> --image <image> --no-nics

John and I think this would be preferable because the output of the
command becomes more predictable: the user doesn't end up having VM's
connected to NICs accidentally if some net-create sneaks underneath.


Yeah, 

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Matt Riedemann



On 2/12/2016 12:45 PM, John Garbutt wrote:

On 12 February 2016 at 18:17, Andrew Laski  wrote:



On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:

Forgive me for thinking out loud, but I'm trying to sort out how nova
would use a microversion in the nova API for the get-me-a-network
feature recently added to neutron [1] and planned to be leveraged in
nova (there isn't a spec yet for nova, I'm trying to sort this out for a
draft).

Originally I was thinking that a network is required for nova boot, so
we'd simply check for a microversion and allow not specifying a network,
easy peasy.

Turns out you can boot an instance in nova (with neutron as the network
backend) without a network. All you get is a measly debug log message in
the compute logs [2]. That's kind of useless though and seems silly.

I haven't tested this out yet to confirm, but I suspect that if you
create a nova instance w/o a network, you can later try to attach a
network using the os-attach-interfaces API as long as you either provide
a network ID *or* there is a public shared network or the tenant has a
network at that point (nova looks those up if a specific network ID
isn't provided).

The high-level plan for get-me-a-network in nova was simply going to be
if the user tries to boot an instance and doesn't provide a network, and
there isn't a tenant network or public shared network to default to,
then nova would call neutron's new auto-allocated-topology API to get a
network. This, however, is a behavior change.

So I guess the question now is how do we handle that behavior change in
the nova API?

We could add an auto-create-net boolean to the boot server request which
would only be available in a microversion, then we could check that
boolean in the compute API when we're doing network validation.



I think a flag like this is the right approach. If it's currently valid
to boot an instance without a network then there needs to be something
to distinguish a request that wants a network created vs. a request that
doesn't want a network.

This is still hugely useful if all that's required from a user is to
indicate that they would like a network, they still don't need to
understand/provide details of the network.


I was thinking a sort of opposite. Would this work?

We add a new micro-version that does this:
* nothing specified: do the best we can to get a port created
(get-me-a-network, etc,), or fail if not possible
* --no-nics option (or similar) that says "please don't give me any nics"

This means folks that don't want a network, reliably have a way to do
that. For everyone else, we do the same thing when using either
neutron or nova-network VLAN manager.

Thanks,
johnthetubaguy

PS
I think we should focus on the horizon experience, CLI experience, and
API experience separately, for a moment, to make sure each of those
cases actually works out OK.


Today if you don't specify a network and don't have a network available,
then the validation in the API is basically just quota checking that you
can get at least one port in your tenant [3]. With a flag on a
microversion, we could also validate some other things about
auto-creating a network (if we know that's going to be the case once we
hit the compute).

Anyway, this is mostly me getting thoughts out of my head before the
weekend so I don't forget it and am looking for other ideas here or
things I might be missing.

[1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
[2]
https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
[3]
https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I think we're basically looking at the same use case. And as Kevin noted 
I was thinking an option on the --nic part of the request, like 
auto-allocate.
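
To make that concrete, the boot-time decision being debated might look
roughly like the sketch below. This is purely illustrative; the flag name,
the helper names and the neutron calls are hypothetical placeholders, not an
agreed design.

    def pick_networks(requested_networks, auto_allocate, neutron):
        # Explicit opt-out: the caller asked for an instance with no NICs.
        if requested_networks == 'none':
            return []
        # The caller specified nics/networks: keep today's behavior.
        if requested_networks:
            return requested_networks
        # Nothing specified: prefer an existing usable network, otherwise
        # (with the new microversion) ask neutron to auto-allocate one.
        usable = neutron.list_usable_networks()             # hypothetical helper
        if usable:
            return [usable[0]]
        if auto_allocate:
            return [neutron.get_auto_allocated_topology()]  # hypothetical helper
        raise ValueError("no network requested and none available")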


My thinking was that with the microversion, auto-allocate defaults to True and
you have to opt in to the old behavior (specify auto-allocate=False) of nova
being OK with not providing any network for the instance. The flag that
goes down through the compute AP

[openstack-dev] [swift] What's going on in Swift?

2016-02-12 Thread John Dickinson
Recently I polled many active members of the Swift dev community. I
asked 4 questions:

  1. What are you working on in Swift or the Swift ecosystem?
  2. Of the things you know about that people are working on, what's
 the most important?
  3. What are you excited about in Swift?
  4. What are you worried about in Swift?

Swift has a fantastic community. Right now, we’re at a global max of
active contributors[1]. Our contributor community is growing, and
we’ve got a large number of people who are consistently active. Our
upcoming hackathon in Bristol looks to be one of the largest we’ve
had, by number of registered attendees.

Not only is the size of our contributor community growing, but we see
more and more Swift clusters being deployed. One survey response that
came up several times is being excited about seeing 10+PB clusters in
production. Swift is growing in the number of deployments, the size of
those deployments, and the level of contributor participation. It’s
all very very exciting to see and be a part of.

No question, there is a lot to do in Swift and a lot going on.
Interestingly, regardless of what an individual was working on at the
moment, there were several things that came to the surface as commonly
mentioned "important things". These most-mentioned topics are:

  * container sharding
  * fast-POST
  * encryption.

All three of these topics are long, ongoing things. Interestingly, all
add end-user benefit without actually requiring any changes from the
end user. These aren’t features that are adding new (externally
visible) functionality; they are making the current system more
capable for the way users already want to use Swift today.

There is a *lot* of concern in the community about supporting the
interaction of different features. For example, understanding what
happens when you overwrite a versioned *LO via a COPY using an
encrypted EC policy (and with the different config options possible
for each of those) is hard to reason about. This makes it hard to
support, hard to test, hard to document, hard to review new code, and
hard for users to understand. Of course this is somewhat balanced by
the praise Swift gets for being flexible and adding features that
enable complex enterprise deployments. Unfortunately, the cost of
these complex interactions make it difficult to support the codebase,
add new things, and review patches.

Speaking of patch reviews...

There's also quite a bit of community concern about slow reviews. Some
of this is related to the concern around complex feature interaction.
Some of this is the ratio of cores to stuff-going-on. This issue is
exacerbated in swiftclient, where there is an obvious gap in the
contributor community.

On the positive side, there's a huge amount of praise for the
community itself. People find the community welcoming and pleasant to
work with. Even those who work in more remote time zones find it
enjoyable (paraphrased: "Before I worked on Swift, I didn't expect we
would work well together because the community is so spread out across
the world. But it's absolutely awesome."). There's one comment from a
relative new contributor about it being hard to get involved (learning
the code and community via IRC). Having an easy onramp to community
participation is very important for the health of the project.

So what are the plans moving forward? I’ve got some ideas, but
improving Swift is something we all can work together on. If you’ve got
an idea, let’s try it out!

Here are some thoughts I have about how to improve:

Goals we need to keep in mind:

  * Reduce complexity by reducing modes of operation (eg fast-POST only)
  * Remove extraneous features, modes of operation, and deprecated things
  * Add "magic" in the right places to make complex things simple and hard
things easy
  * Respond to patch submitters quickly
  * Remember that new community members are always joining, and we must
provide them with an easy way to get involved technically and socially.

Specific action item ideas:

  * Re-introduce the automatic config pipeline generator.
This reduces errors around one of the most confusing parts of Swift.
Our feature interaction matrix is exacerbated by the amount of
middleware that is used/recommended by default. This makes it hard
to add new middleware, and if we, the active contributors, are confused
by it, then deployers have no hope. An automatic pipeline generator
removes confusion and puts the magic in just one place.
  * Describe and test complex feature interaction (especially things
that aren't an atomic operation). Document the feature matrix.
  * Update "getting started" docs and install guides on a recurring
schedule. Challenge active contributors to rebuild their SAIOs from
scratch (without an automation script) once a month.



[1] http://d.not.mn/active_contribs.png




signature.asc
Description: OpenPGP digital signature
___

Re: [openstack-dev] [nova] Update on scheduler and resource tracker progress

2016-02-12 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-02-11 12:24:04 -0800:
> Hello all,
> 
> Performance working group, please pay attention to Chapter 2 in the 
> details section.
> 



> Chapter 2 - Addressing performance and scale
> 
> 
> One of the significant performance problems with the Nova scheduler is 
> the fact that for every call to the select_destinations() RPC API method 
> -- which itself is called at least once every time a launch or migration 
> request is made -- the scheduler grabs all records for all compute nodes 
> in the deployment. Once retrieving all these compute node records, the 
> scheduler runs each through a set of filters to determine which compute 
> nodes have the required capacity to service the instance's requested 
> resources. Having the scheduler continually retrieve every compute node 
> record on each request to select_destinations() is extremely 
> inefficient. The greater the number of compute nodes, the bigger the 
> performance and scale problem this becomes.
> 
> On a loaded cloud deployment -- say there are 1000 compute nodes and 900 
> of them are fully loaded with active virtual machines -- the scheduler 
> is still going to retrieve all 1000 compute node records on every 
> request to select_destinations() and process each one of those records 
> through all scheduler filters. Clearly, if we could filter the amount of 
> compute node records that are returned by removing those nodes that do 
> not have available capacity, we could dramatically reduce the amount of 
> work that each call to select_destinations() would need to perform.
> 
> The resource-providers-scheduler blueprint attempts to address the above 
> problem by replacing a number of the scheduler filters that currently 
> run *after* the database has returned all compute node records with 
> instead a series of WHERE clauses and join conditions on the database 
> query. The idea here is to winnow the number of returned compute node 
> results as much as possible. The fewer records the scheduler must 
> post-process, the faster the performance of each individual call to 
> select_destinations().
> 

This is great, and I think it is the way to go. However, I'm not sure how
dramatic the overall benefit will be, since it also shifts some load from
reads to writes. With 1000 active compute nodes updating their status,
each index added will be 1000 more index writes per update period. Still
a net win, but I'm always cautious about shifting things to more writes
on the database server. That said, I do think it will be a win and should
be done.
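
For illustration, the kind of capacity filtering pushed down into the query
might look roughly like this (simplified column names, not the actual
resource-providers schema):

    import sqlalchemy as sa

    def capacity_filtered_nodes(conn, req_vcpus, req_ram_mb, req_disk_gb):
        """Return only compute nodes with enough free vCPU/RAM/disk,
        instead of fetching every node and filtering in Python."""
        query = sa.text("""
            SELECT id FROM compute_nodes
             WHERE (vcpus - vcpus_used) >= :vcpus
               AND (memory_mb - memory_mb_used) >= :ram
               AND (local_gb - local_gb_used) >= :disk
        """)
        return conn.execute(query, vcpus=req_vcpus, ram=req_ram_mb,
                            disk=req_disk_gb).fetchall()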

> The second major scale problem with the current Nova scheduler design 
> has to do with the fact that the scheduler does *not* actually claim 
> resources on a provider. Instead, the scheduler selects a destination 
> host to place the instance on and the Nova conductor then sends a 
> message to that target host which attempts to spawn the instance on its 
> hypervisor. If the spawn succeeds, the target compute host updates the 
> Nova database and decrements its count of available resources. These 
> steps (from nova-scheduler to nova-conductor to nova-compute to 
> database) all take some not insignificant amount of time. During this 
> time window, a different scheduler process may pick the exact same 
> target host for a like-sized launch request. If there is only room on 
> the target host for one of those size requests [5], one of those spawn 
> requests will fail and trigger a retry operation. This retry operation 
> will attempt to repeat the scheduler placement decisions (by calling 
> select_destinations()).
> 
> This retry operation is relatively expensive and needlessly so: if the 
> scheduler claimed the resources on the target host before sending its 
> pick back to the scheduler, then the chances of producing a retry will 
> be almost eliminated [6]. The resource-providers-scheduler blueprint 
> attempts to remedy this second scaling design problem by having the 
> scheduler write records to the allocations table before sending the 
> selected target host back to the Nova conductor.
> 

*This*, to me, is the thing that makes the scheduler dramatically more
scalable. The ability to run as many schedulers as I expect to need to
respond to user requests in a reasonable amount of time, is the key to
victory here.

However, I wonder how you will avoid serialization or getting into
a much tighter retry race for the claiming operations. There's talk
in the spec of inserting allocations in a table atomically. However,
with multiple schedulers, you'll still have the problem where one will
claim and the others will need to know that they cannot. We can talk
about nuts and bolts, but there's really only two ways this can work:
exclusive locking, or compare and swap retry loops.
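
For concreteness, a compare-and-swap claim along those lines might look
roughly like the sketch below (hypothetical generation column and table
layout, not the actual proposal):

    import sqlalchemy as sa

    def claim_vcpus(conn, node_id, req_vcpus, max_retries=5):
        """Try to claim vCPUs on a node; retry if another scheduler
        updated the row (generation changed) between our read and write."""
        for _ in range(max_retries):
            node = conn.execute(sa.text(
                "SELECT generation, vcpus, vcpus_used "
                "FROM compute_nodes WHERE id = :id"), id=node_id).fetchone()
            if node.vcpus - node.vcpus_used < req_vcpus:
                return False  # not enough capacity, pick another node
            result = conn.execute(sa.text(
                "UPDATE compute_nodes "
                "SET vcpus_used = vcpus_used + :req, "
                "    generation = generation + 1 "
                "WHERE id = :id AND generation = :gen"),
                req=req_vcpus, id=node_id, gen=node.generation)
            if result.rowcount == 1:
                return True  # claim succeeded
            # Someone else won the race; re-read and try again.
        return False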

I think the right way to go is probably the retries, so we can make
use of some of the advantages of Galera. But I think it will need some
colli

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Armando M.
On 12 February 2016 at 09:15, Matt Riedemann 
wrote:

> Forgive me for thinking out loud, but I'm trying to sort out how nova
> would use a microversion in the nova API for the get-me-a-network feature
> recently added to neutron [1] and planned to be leveraged in nova (there
> isn't a spec yet for nova, I'm trying to sort this out for a draft).
>
> Originally I was thinking that a network is required for nova boot, so
> we'd simply check for a microversion and allow not specifying a network,
> easy peasy.
>
> Turns out you can boot an instance in nova (with neutron as the network
> backend) without a network. All you get is a measly debug log message in
> the compute logs [2]. That's kind of useless though and seems silly.
>

Incidentally, I was checking this out with Horizon, and the dashboard
instance boot workflow doesn't let you proceed without specifying a network
(irrespective of the network backend). So if the user has no networks,
he/she is stuck and has to flip to the CLI. Nice, uh?


>
> I haven't tested this out yet to confirm, but I suspect that if you create
> a nova instance w/o a network, you can later try to attach a network using
> the os-attach-interfaces API as long as you either provide a network ID
> *or* there is a public shared network or the tenant has a network at that
> point (nova looks those up if a specific network ID isn't provided).
>

Just to make sure we're on the same page: if you're referring to 'public
shared network' as the devstack's provisioned network called 'public',
that's technically not shared and it represents your floating IP pool. A
user can explicitly boot VM's on it, but that's not to be confused with a
'shared' provider network.

That said, I tried the workflow of booting a vm without networks and trying
to attach an interface without specifying anything and I got a 500 [1].
Error aside, I think it would be erroneous to expect the attach
command to accept no networks (and still pick one), when the boot command
doesn't.

[1] http://paste.openstack.org/show/486856/


> The high-level plan for get-me-a-network in nova was simply going to be if
> the user tries to boot an instance and doesn't provide a network, and there
> isn't a tenant network or public shared network

to default to, then nova would call neutron's new auto-allocated-topology
> API to get a network. This, however, is a behavior change.
>

I assume that for you 'public shared network' it's not the public network
as available in DevStack, because I don't believe that user can
boot VM's on that network automatically.


> So I guess the question now is how do we handle that behavior change in
> the nova API?
>
> We could add an auto-create-net boolean to the boot server request which
> would only be available in a microversion, then we could check that boolean
> in the compute API when we're doing network validation.
>
> Today if you don't specify a network and don't have a network available,
> then the validation in the API is basically just quota checking that you
> can get at least one port in your tenant [3]. With a flag on a
> microversion, we could also validate some other things about auto-creating
> a network (if we know that's going to be the case once we hit the compute).
>
> Anyway, this is mostly me getting thoughts out of my head before the
> weekend so I don't forget it and am looking for other ideas here or things
> I might be missing.
>

John and I just finished talking about this a bit more and I think the
thought process led us to this conclusion:

From Horizon, we could provide a 'get-me-a-network' button on the Networks
wizard for the boot workflow. If the user doesn't see any Networks
available he/she can hit the button, see the network being pre-populated
and choose to proceed, instead of going back to the Network panel and do
the entire workflow.

As for Nova, we could introduce a new micro-version that changes the
behavior of nova boot without networks. In this case, if the tenant has
access to no networks, one will be created for him/her and the VM will boot
off of it.

On the other end, if the user does want a VM without nics, he/she should be
explicit about this and specify 'no-nic' boolean, e.g.

  nova boot <server-name> --flavor <flavor> --image <image> --no-nics

John and I think this would be preferable because the output of the command
becomes more predictable: the user doesn't end up having VM's connected to
NICs accidentally if some net-create sneaks underneath.

Anyhow, food for thought.

Thanks for starting this thread.

Cheers,
Armando


> [1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
> [2]
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
> [3]
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> _

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Kevin Benton
Perhaps another option for '--nic'? nova boot --nic auto-allocate

On Fri, Feb 12, 2016 at 10:17 AM, Andrew Laski  wrote:

>
>
> On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
> > Forgive me for thinking out loud, but I'm trying to sort out how nova
> > would use a microversion in the nova API for the get-me-a-network
> > feature recently added to neutron [1] and planned to be leveraged in
> > nova (there isn't a spec yet for nova, I'm trying to sort this out for a
> > draft).
> >
> > Originally I was thinking that a network is required for nova boot, so
> > we'd simply check for a microversion and allow not specifying a network,
> > easy peasy.
> >
> > Turns out you can boot an instance in nova (with neutron as the network
> > backend) without a network. All you get is a measly debug log message in
> > the compute logs [2]. That's kind of useless though and seems silly.
> >
> > I haven't tested this out yet to confirm, but I suspect that if you
> > create a nova instance w/o a network, you can later try to attach a
> > network using the os-attach-interfaces API as long as you either provide
> > a network ID *or* there is a public shared network or the tenant has a
> > network at that point (nova looks those up if a specific network ID
> > isn't provided).
> >
> > The high-level plan for get-me-a-network in nova was simply going to be
> > if the user tries to boot an instance and doesn't provide a network, and
> > there isn't a tenant network or public shared network to default to,
> > then nova would call neutron's new auto-allocated-topology API to get a
> > network. This, however, is a behavior change.
> >
> > So I guess the question now is how do we handle that behavior change in
> > the nova API?
> >
> > We could add an auto-create-net boolean to the boot server request which
> > would only be available in a microversion, then we could check that
> > boolean in the compute API when we're doing network validation.
> >
>
> I think a flag like this is the right approach. If it's currently valid
> to boot an instance without a network then there needs to be something
> to distinguish a request that wants a network created vs. a request that
> doesn't want a network.
>
> This is still hugely useful if all that's required from a user is to
> indicate that they would like a network, they still don't need to
> understand/provide details of the network.
>
>
>
> > Today if you don't specify a network and don't have a network available,
> > then the validation in the API is basically just quota checking that you
> > can get at least one port in your tenant [3]. With a flag on a
> > microversion, we could also validate some other things about
> > auto-creating a network (if we know that's going to be the case once we
> > hit the compute).
> >
> > Anyway, this is mostly me getting thoughts out of my head before the
> > weekend so I don't forget it and am looking for other ideas here or
> > things I might be missing.
> >
> > [1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
> > [2]
> >
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
> > [3]
> >
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread John Garbutt
On 12 February 2016 at 18:17, Andrew Laski  wrote:
>
>
> On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
>> Forgive me for thinking out loud, but I'm trying to sort out how nova
>> would use a microversion in the nova API for the get-me-a-network
>> feature recently added to neutron [1] and planned to be leveraged in
>> nova (there isn't a spec yet for nova, I'm trying to sort this out for a
>> draft).
>>
>> Originally I was thinking that a network is required for nova boot, so
>> we'd simply check for a microversion and allow not specifying a network,
>> easy peasy.
>>
>> Turns out you can boot an instance in nova (with neutron as the network
>> backend) without a network. All you get is a measly debug log message in
>> the compute logs [2]. That's kind of useless though and seems silly.
>>
>> I haven't tested this out yet to confirm, but I suspect that if you
>> create a nova instance w/o a network, you can later try to attach a
>> network using the os-attach-interfaces API as long as you either provide
>> a network ID *or* there is a public shared network or the tenant has a
>> network at that point (nova looks those up if a specific network ID
>> isn't provided).
>>
>> The high-level plan for get-me-a-network in nova was simply going to be
>> if the user tries to boot an instance and doesn't provide a network, and
>> there isn't a tenant network or public shared network to default to,
>> then nova would call neutron's new auto-allocated-topology API to get a
>> network. This, however, is a behavior change.
>>
>> So I guess the question now is how do we handle that behavior change in
>> the nova API?
>>
>> We could add an auto-create-net boolean to the boot server request which
>> would only be available in a microversion, then we could check that
>> boolean in the compute API when we're doing network validation.
>>
>
> I think a flag like this is the right approach. If it's currently valid
> to boot an instance without a network then there needs to be something
> to distinguish a request that wants a network created vs. a request that
> doesn't want a network.
>
> This is still hugely useful if all that's required from a user is to
> indicate that they would like a network, they still don't need to
> understand/provide details of the network.

I was thinking a sort of opposite. Would this work?

We add a new micro-version that does this:
* nothing specified: do the best we can to get a port created
(get-me-a-network, etc,), or fail if not possible
* --no-nics option (or similar) that says "please don't give me any nics"

This means folks that don't want a network, reliably have a way to do
that. For everyone else, we do the same thing when using either
neutron or nova-network VLAN manager.

Thanks,
johnthetubaguy

PS
I think we should focus on the horizon experience, CLI experience, and
API experience separately, for a moment, to make sure each of those
cases actually works out OK.

>> Today if you don't specify a network and don't have a network available,
>> then the validation in the API is basically just quota checking that you
>> can get at least one port in your tenant [3]. With a flag on a
>> microversion, we could also validate some other things about
>> auto-creating a network (if we know that's going to be the case once we
>> hit the compute).
>>
>> Anyway, this is mostly me getting thoughts out of my head before the
>> weekend so I don't forget it and am looking for other ideas here or
>> things I might be missing.
>>
>> [1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
>> [2]
>> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
>> [3]
>> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> > > > [...]
> > > >   * much of the problem with the lavish parties is IMO related to
> > > > the
> > > > *exclusivity* of certain shindigs, as opposed to devs
> > > > socializing at
> > > > summit being inappropriate per se. In that vein, I think the
> > > > cores
> > > > party sends the wrong message and has run its course, while
> > > > the TC
> > > > dinner ... well, maybe Austin is the time to show some
> > > > leadership
> > > > on that? ;)
> > > 
> > > Well, Tokyo was the time to show some leadership on that -- there
> > > was no "TC dinner" there :)
> > 
> > Excellent, that is/was indeed a positive step :)
> > 
> > For the cores party, much as I enjoyed the First Nation cuisine in
> > Vancouver or the performance art in Tokyo, IMO it's probably time to
> > draw a line under that excess also, as it too projects a notion of
> > exclusivity that runs counter to building a community.
> 
> Are you sure you're concentrating on the right problem?  Communities
> are naturally striated in terms of leadership.  In principle, there's
> nothing wrong with "exclusive" events that appear to be rewarding the
> higher striations, especially if it acts as an incentive to people to
> move up.  It's only actually "elitist" if you reward the top and
> there's no real way to move up there from the bottom.  You also want to
> be careful about being pejorative; after all the same principle would
> apply to the Board Dinner as well.
> 
> I think the correct question to ask would be "does the cash spent on
> the TC party provide a return on investment either as an incentive to
> become a TC or to facililtate communications among TC members?".  If
> you answer no to that, then eliminate it.

Well the cash spent on those two events is not my concern at all, as
both are privately sponsored by an OpenStack vendor as opposed to being
paid for by the Foundation (IIUC). So in that sense, it's not like the
events are consuming "community funds" for which I'm demanding an RoI.
Vendor's marketing dollars, so the return is their own concern.

Neither am I against partying devs in general, seems like a useful
ice-breaker at summit, just like at most other tech conferences.

My objection, FWIW, is simply around the "Upstairs, Downstairs" feel
to such events (or if you're not old enough to have watched the BBC
in the 1970s, maybe Downton Abbey would be more familiar).

Honestly I don't know of any communication between two cores at a +2
party that couldn't have just as easily happened surrounded by other
contributors. Nor, I hope, does anyone put in the substantial reviewing
effort required to become a core in order to score a few free beers and
see some local entertainment. Similarly for the TC, one would hope that
dinner doesn't figure in the system of incentives that drives folks to
throw their hat into the ring. 

In any case, I've derailed the main thrust of the discussion here,
which I believe could be summed up by:

  "let's dial down the glitz a notch, and get back to basics"

That sentiment I support in general, but I'd just be more selective
as to which social events should be first in line to be culled in
order to create a better atmosphere at summit.

And I'd be far more concerned about getting the choice of location,
cadence, attendees, and format right, than in questions of who drinks
with whom.

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Andrew Laski


On Fri, Feb 12, 2016, at 12:15 PM, Matt Riedemann wrote:
> Forgive me for thinking out loud, but I'm trying to sort out how nova 
> would use a microversion in the nova API for the get-me-a-network 
> feature recently added to neutron [1] and planned to be leveraged in 
> nova (there isn't a spec yet for nova, I'm trying to sort this out for a 
> draft).
> 
> Originally I was thinking that a network is required for nova boot, so 
> we'd simply check for a microversion and allow not specifying a network, 
> easy peasy.
> 
> Turns out you can boot an instance in nova (with neutron as the network 
> backend) without a network. All you get is a measly debug log message in 
> the compute logs [2]. That's kind of useless though and seems silly.
> 
> I haven't tested this out yet to confirm, but I suspect that if you 
> create a nova instance w/o a network, you can later try to attach a 
> network using the os-attach-interfaces API as long as you either provide 
> a network ID *or* there is a public shared network or the tenant has a 
> network at that point (nova looks those up if a specific network ID 
> isn't provided).
> 
> The high-level plan for get-me-a-network in nova was simply going to be 
> if the user tries to boot an instance and doesn't provide a network, and 
> there isn't a tenant network or public shared network to default to, 
> then nova would call neutron's new auto-allocated-topology API to get a 
> network. This, however, is a behavior change.
> 
> So I guess the question now is how do we handle that behavior change in 
> the nova API?
> 
> We could add an auto-create-net boolean to the boot server request which 
> would only be available in a microversion, then we could check that 
> boolean in the compute API when we're doing network validation.
> 

I think a flag like this is the right approach. If it's currently valid
to boot an instance without a network then there needs to be something
to distinguish a request that wants a network created vs. a request that
doesn't want a network.

This is still hugely useful if all that's required from a user is to
indicate that they would like a network, they still don't need to
understand/provide details of the network.



> Today if you don't specify a network and don't have a network available, 
> then the validation in the API is basically just quota checking that you 
> can get at least one port in your tenant [3]. With a flag on a 
> microversion, we could also validate some other things about 
> auto-creating a network (if we know that's going to be the case once we 
> hit the compute).
> 
> Anyway, this is mostly me getting thoughts out of my head before the 
> weekend so I don't forget it and am looking for other ideas here or 
> things I might be missing.
> 
> [1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
> [2] 
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
> [3] 
> https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-02-12 Thread Emilien Macchi
Please look and vote:
https://review.openstack.org/279698


Thanks for your feedback!

On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
> I like the idea of moving it to use the OpenStack infrastructure.
> 
> On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec  > wrote:
> 
> On 02/09/2016 08:05 AM, Emilien Macchi wrote:
> > Hi,
> >
> > TripleO is currently using puppet-pacemaker [1] which is a module
> hosted
> > & managed by Github.
> > The module was created and mainly maintained by Redhat. It tends to
> > break TripleO quite often since we don't have any gate.
> >
> > I propose to move the module to OpenStack so we'll use OpenStack Infra
> > benefits (Gerrit, Releases, Gating, etc). Another idea would be to
> gate
> > the module with TripleO HA jobs.
> >
> > The question is, under which umbrella put the module? Puppet ?
> TripleO ?
> >
> > Or no umbrella, like puppet-ceph. <-- I like this idea
> 
> 
> I think the module not being under an umbrella makes sense.
>  
> 
> >
> > Any feedback is welcome,
> >
> > [1] https://github.com/redhat-openstack/puppet-pacemaker
> 
> Seems like a module that would be useful outside of TripleO, so it
> doesn't seem like it should live under that.  Other than that I don't
> have enough knowledge of the organization of the puppet modules to
> comment.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread James Bottomley
On Fri, 2016-02-12 at 07:45 -0500, Eoghan Glynn wrote:
> 
> > > [...]
> > >   * much of the problem with the lavish parties is IMO related to
> > > the
> > > *exclusivity* of certain shindigs, as opposed to devs
> > > socializing at
> > > summit being inappropriate per se. In that vein, I think the
> > > cores
> > > party sends the wrong message and has run its course, while
> > > the TC
> > > dinner ... well, maybe Austin is the time to show some
> > > leadership
> > > on that? ;)
> > 
> > Well, Tokyo was the time to show some leadership on that -- there 
> > was no "TC dinner" there :)
> 
> Excellent, that is/was indeed a positive step :)
> 
> For the cores party, much as I enjoyed the First Nation cuisine in 
> Vancouver or the performance art in Tokyo, IMO it's probably time to 
> draw a line under that excess also, as it too projects a notion of 
> exclusivity that runs counter to building a community.

Are you sure you're concentrating on the right problem?  Communities
are naturally striated in terms of leadership.  In principle, there's
nothing wrong with "exclusive" events that appear to be rewarding the
higher striations, especially if it acts as an incentive to people to
move up.  It's only actually "elitist" if you reward the top and
there's no real way to move up there from the bottom.  You also want to
be careful about being pejorative; after all the same principle would
apply to the Board Dinner as well.

I think the correct question to ask would be "does the cash spent on
the TC party provide a return on investment either as an incentive to
become a TC or to facilitate communications among TC members?".  If
you answer no to that, then eliminate it.

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] publish and update Gerrit dashboard link automatically

2016-02-12 Thread Rossella Sblendido



On 02/12/2016 12:25 PM, Rossella Sblendido wrote:

Hi all,

it's hard sometimes for reviewers to filter reviews that are high
priority. In Neutron, in this mail thread [1], we had the idea to create a
script for that. The script is now available in the Neutron repository [2].
The script queries Launchpad and creates a file that can be used by
gerrit-dash-creator to display a dashboard listing patches that fix
critical/high bugs or that implement approved blueprints or feature
requests. This is how it looks today [3].
For it to be really useful the dashboard link needs to be updated once a
day at least. Here I need your help. I'd like to publish the URL in a
public place and update it every day in an automated way. How can I do
that?

thanks,

Rossella

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079816.html

[2]
https://github.com/openstack/neutron/blob/master/tools/milestone-review-dash.py

[3] https://goo.gl/FSKTj9


This last link is wrong, this is the right one [1] sorry.

[1] https://goo.gl/Hb3vKu
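In case it helps the discussion, the kind of thing I had in mind is a small
daily job along these lines (a sketch only -- the paths and the exact CLIs of
the two tools are assumptions to be adjusted):

    # Daily refresh sketch: regenerate the dashboard definition, turn it
    # into a Gerrit URL, and publish a redirect page. Paths and tool CLIs
    # are assumptions; adjust to what the scripts actually accept.
    import subprocess

    NEUTRON_DIR = '/opt/stack/neutron'
    DASH_FILE = '/tmp/milestone.dash'
    PUBLISH_TO = '/var/www/html/neutron-priority-dashboard.html'

    # 1. Query Launchpad and write the dashboard definition
    #    (assuming the script prints it on stdout).
    with open(DASH_FILE, 'w') as out:
        subprocess.check_call(['python', 'tools/milestone-review-dash.py'],
                              cwd=NEUTRON_DIR, stdout=out)

    # 2. Build the dashboard URL with gerrit-dash-creator
    #    (assuming it prints the URL on stdout).
    url = subprocess.check_output(['gerrit-dash-creator', DASH_FILE]).strip()

    # 3. Publish a simple redirect page at a stable, public location.
    with open(PUBLISH_TO, 'w') as page:
        page.write('<meta http-equiv="refresh" content="0; url=%s">\n'
                   % url.decode())

Run once a day from cron, a stable short link could then always point at the
fresh dashboard.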



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-12 Thread Matt Riedemann
Forgive me for thinking out loud, but I'm trying to sort out how nova 
would use a microversion in the nova API for the get-me-a-network 
feature recently added to neutron [1] and planned to be leveraged in 
nova (there isn't a spec yet for nova, I'm trying to sort this out for a 
draft).


Originally I was thinking that a network is required for nova boot, so 
we'd simply check for a microversion and allow not specifying a network, 
easy peasy.


Turns out you can boot an instance in nova (with neutron as the network 
backend) without a network. All you get is a measly debug log message in 
the compute logs [2]. That's kind of useless though and seems silly.


I haven't tested this out yet to confirm, but I suspect that if you 
create a nova instance w/o a network, you can latter try to attach a 
network using the os-attach-interfaces API as long as you either provide 
a network ID *or* there is a public shared network or the tenant has a 
network at that point (nova looks those up if a specific network ID 
isn't provided).


The high-level plan for get-me-a-network in nova was simply going to be 
if the user tries to boot an instance and doesn't provide a network, and 
there isn't a tenant network or public shared network to default to, 
then nova would call neutron's new auto-allocated-topology API to get a 
network. This, however, is a behavior change.


So I guess the question now is how do we handle that behavior change in 
the nova API?


We could add an auto-create-net boolean to the boot server request which 
would only be available in a microversion, then we could check that 
boolean in the compute API when we're doing network validation.


Today if you don't specify a network and don't have a network available, 
then the validation in the API is basically just quota checking that you 
can get at least one port in your tenant [3]. With a flag on a 
microversion, we could also validate some other things about 
auto-creating a network (if we know that's going to be the case once we 
hit the compute).


Anyway, this is mostly me getting thoughts out of my head before the 
weekend so I don't forget it and am looking for other ideas here or 
things I might be missing.


[1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
[2] 
https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595
[3] 
https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Ed Leafe

On 02/12/2016 07:30 AM, Dean Troyer wrote:

> IIRC Nova started with project, until the marriage with Rax, when
> many things changed, although the project -> tenant change may have
> never been completed.  Keystone v3 started the movement back to
> project. OpenStackClient made the commitment from the beginning to
> present exactly one term to the user in all cases, and we chose
> project.

The team from NASA had used 'project', as it made sense to them for
their uses. When they got together with the Rackspace team, who were
using 'tenant' in their existing code, there was much discussion over
which made the most sense. And since RAX pretty much footed the entire
bill for those early days, guess which prevailed. :)

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Postgres support in (fwaas) tests

2016-02-12 Thread Sean M. Collins
I know historically there were postgres jobs that tested things, but I
think the community moved away from having postgres at the gate?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

2016-02-12 Thread Victor Stinner

Le 12/02/2016 15:12, Ian Cordasco a écrit :

Just to interject here, RFC 2616 is the one you're talking about. That
encoding requirement was dropped when HTTP/1.1 was updated in the
7230-7235 RFCs. (...)
Where VCHAR is any visible US ASCII character. So while UTF-8 is still
a bad idea for the header value (and in fact, http.client on Python 3
will auto-encode headers to Latin 1) Latin 1 is no longer the
requirement.

For those interested, you can read up on headers in HTTP/1.1 here:
https://tools.ietf.org/html/rfc7230#section-3.2


Oh thanks, it looks like my HTTP skills are rusty :-)

For Swift, it's maybe better to always try UTF-8 first, but fall back to 
Latin-1 if an HTTP header cannot be decoded from UTF-8. Swift has many clients 
implemented in various programming languages, and I'm not sure that all 
clients use UTF-8.
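Concretely, I'm thinking of a tiny helper like this (just a sketch, not the
actual patch):

    # Decode an HTTP header value: try UTF-8 first, and fall back to
    # Latin-1 so clients that send arbitrary bytes keep working.
    def decode_header_value(raw):
        if isinstance(raw, bytes):
            try:
                return raw.decode('utf-8')
            except UnicodeDecodeError:
                return raw.decode('latin-1')
        return raw

    decode_header_value(b'caf\xc3\xa9')   # u'café' (valid UTF-8)
    decode_header_value(b'caf\xe9')       # u'café' via the Latin-1 fallback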


By the way, https://review.openstack.org/#/c/237027/ was merged, cool.

I fixed the third patch mentioned in my previous email to support 
arbitrary byte strings for hash prefix and suffix in the configuration file:

https://review.openstack.org/#/c/236998/

I also updated my HTTP parser patch for Python 3. With these two 
patches, test_utils now pass on Python 3.

https://review.openstack.org/#/c/237042/

For me, it's a nice milestone to have test_utils working on Python 3. It 
allows us to port more interesting stuff and start the real porting work ;-)


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Ronald Bradford
I am taking on the task in the Oslo Logging/Context projects to ensure that
project replaces the deprecated use of tenant.

This includes the end user/operator view of the logging configuration options
[1], replacing tenant with project, and the RequestContext used across most
projects, which has an inconsistent mixture of terms.

[1] http://docs.openstack.org/developer/oslo.log/opts.html



Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn:  http://www.linkedin.com/in/ronaldbradford
Twitter:@RonaldBradford 
Skype: RonaldBradford
GTalk: Ronald.Bradford
IRC: rbradfor


On Fri, Feb 12, 2016 at 10:34 AM, Monty Taylor  wrote:

> On 02/12/2016 07:36 AM, Jay Pipes wrote:
>
>> On 02/12/2016 07:01 AM, Sean Dague wrote:
>>
>>> Ok... this is going to be one of those threads, but I wanted to try to
>>> get resolution here.
>>>
>>> OpenStack is wildly inconsistent in its use of tenant vs. project. As
>>> someone that wasn't here at the beginning, I'm not even sure which one
>>> we are supposed to be transitioning from -> to.
>>>
>>> At a minimum I'd like to make all of devstack use 1 term, which is the
>>> term we're trying to get to. That will help move the needle.
>>>
>>> However, again, I'm not sure which one that is supposed to be (comments
>>> in various places show movement in both directions). So people with
>>> deeper knowledge here, can you speak up as to which is the deprecated
>>> term and which is the term moving forward.
>>>
>>
>> "Project" is the term that should be used.
>>
>
> Yes.
>
> FWIW - The ansible modules in ansible 2.0 (as well as the shade library)
> use the term "project" regardless of whether the cloud in question is
> running keystone v2 or v3. It's the least I could do to try to nudge
> consumption in the right direction.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Update on live migration priority

2016-02-12 Thread Murray, Paul (HP Cloud)
This time with a tag in case anyone is filtering...

From: Murray, Paul (HP Cloud)
Sent: 12 February 2016 16:16
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Update on live migration priority

The objective for the live migration priority is to improve the stability of 
migrations based on operator experience. The high level approach is to do the 
following:

1.   Improve CI

2.   Improve documentation

3.   Improve manageability of migrations

4.   Fix bugs

In this cycle we targeted a few immediately implementable features that would 
help, specifically giving operators commands to allow them to manage migrations 
(inspect progress, force completion, and cancel) and improve security 
(split-networks and remove ssh-based resize/migration; aka storage pools).

Most of these are on track to be completed in this cycle with the exception of 
storage pools work which is being deferred. Further details follow.

Expand CI coverage - in progress

There is a job in the experimental queue called: 
gate-tempest-dsvm-multinode-live-migration. This will become the job that 
performs live migration tests; any live migration tests in other jobs will be 
removed. At present the job has been configured to cover different storage 
configurations including cinder, NFS, ceph. Tests are now being added to the 
job. Patches are currently up for live migration of instances with swap and 
instances with ephemeral disks.

Please trigger the experimental queue if your patches touch migrations in some 
way so we can check the stability of the jobs. Once stable and with sufficient 
tests we will promote the job from the experimental queue so that it always 
runs.

See: https://review.openstack.org/#/q/topic:lm_test

Improve API docs - done

Some changes were made to the API guide for moving servers, including better 
descriptions for the server actions migrate, live migrate, shelve, resize and 
evacuate ( 
http://developer.openstack.org/api-guide/compute/server_concepts.html#server-actions
 ) and a section that describes reasons for moving VMs with common use cases 
outlined ( 
http://developer.openstack.org/api-guide/compute/server_concepts.html#moving-servers
 )

Block live migration with attached volumes - done

The selective block device migration API in libvirt 1.2.17 is used to allow 
block migration when volumes are attached. A follow on patch to allow readonly 
drives to be copied in block migration has not been completed. This patch is 
required to allow ISO 9660 format config drives to be migrated. Without it only 
vfat config drives can be migrated. There is still some thought going into that 
- see: https://review.openstack.org/#/c/234659

Force complete - requires python-novaclient change

Force-complete forces a live migration to complete  by pausing the VM and 
restarting it when it has completed migration. This is intended as a brute 
force way to make a VM complete its migration when it is taking too long. In 
the future auto-converge and post-copy will be looked at. These became 
available in qemu 2.5.

Force complete is done in nova but still requires a change to python-novaclient 
to implement the CLI.
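In the meantime the action can be driven directly against the REST API; a
rough sketch follows (the microversion and body shape reflect the in-review
patches as I understand them, so treat them as assumptions):

    # Rough sketch of calling force-complete directly over REST; the
    # microversion and request body shape are assumptions based on the
    # in-review patches.
    import requests

    compute_url = 'http://controller:8774/v2.1'
    token = '<auth token>'
    server_id = '<server uuid>'
    migration_id = '<migration id>'

    requests.post(
        '%s/servers/%s/migrations/%s/action' % (compute_url, server_id,
                                                migration_id),
        json={'force_complete': None},
        headers={'X-Auth-Token': token,
                 'X-OpenStack-Nova-API-Version': '2.22'})  # check merged version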

Cancel - in progress

Cancel stops a live migration, leaving it on the source host with the migration 
status left as "cancelled". This is in progress and follows the pattern of 
force-complete. Unfortunately this needs to be bundled up into one patch to 
avoid multiple API bumps.

Patches for review:
https://review.openstack.org/#/q/status:open+topic:bp/abort-live-migration

Progress reporting - in progress (no pun intended)

Progress reporting introduces migrations as a sub-resource of servers and adds 
progress data to the migration record. There was some debate at the mid cycle 
and on the mailing list about how to record this transient data. It is a waste 
to keep writing it to the database, but as it is generated at the compute 
manager and examined at the API, it was felt that writing it to the database is 
necessary to fit the existing architecture. The conclusion was that writing to 
the database every 5 seconds would not cause a significant overhead. 
Alternatives could be pursued later if necessary. For discussion see this ML 
thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/085662.html 
and the IRC meeting transcript here: 
http://eavesdrop.openstack.org/meetings/nova_live_migration/2016/nova_live_migration.2016-02-09-14.01.log.html

Patches for review:
https://review.openstack.org/#/q/status:open+topic:bp/live-migration-progress-report

Split networking - done

Split networking adds a configuration parameter to specify 
live_migration_inbound_addr as the ip address or host name to be used as the 
target for migration traffic. This allows migration traffic to be isolated on a 
separate network from other management traffic, providing an opportunity to 
isolate service levels for the two networks and improve security by moving 
unencrypted migr

[openstack-dev] Update on live migration priority

2016-02-12 Thread Murray, Paul (HP Cloud)
The objective for the live migration priority is to improve the stability of 
migrations based on operator experience. The high level approach is to do the 
following:

1.   Improve CI

2.   Improve documentation

3.   Improve manageability of migrations

4.   Fix bugs

In this cycle we targeted a few immediately implementable features that would 
help, specifically giving operators commands to allow them to manage migrations 
(inspect progress, force completion, and cancel) and improve security 
(split-networks and remove ssh-based resize/migration; aka storage pools).

Most of these are on track to be completed in this cycle with the exception of 
storage pools work which is being deferred. Further details follow.

Expand CI coverage - in progress

There is a job in the experimental queue called: 
gate-tempest-dsvm-multinode-live-migration. This will become the job that 
performs live migration tests; any live migration tests in other jobs will be 
removed. At present the job has been configured to cover different storage 
configurations including cinder, NFS, ceph. Tests are now being added to the 
job. Patches are currently up for live migration of instances with swap and 
instances with ephemeral disks.

Please trigger the experimental queue if your patches touch migrations in some 
way so we can check the stability of the jobs. Once stable and with sufficient 
tests we will promote the job from the experimental queue so that it always 
runs.

See: https://review.openstack.org/#/q/topic:lm_test

Improve API docs - done

Some changes were made to the API guide for moving servers, including better 
descriptions for the server actions migrate, live migrate, shelve, resize and 
evacuate ( 
http://developer.openstack.org/api-guide/compute/server_concepts.html#server-actions
 ) and a section that describes reasons for moving VMs with common use cases 
outlined ( 
http://developer.openstack.org/api-guide/compute/server_concepts.html#moving-servers
 )

Block live migration with attached volumes - done

The selective block device migration API in libvirt 1.2.17 is used to allow 
block migration when volumes are attached. A follow on patch to allow readonly 
drives to be copied in block migration has not been completed. This patch is 
required to allow ISO 9660 format config drives to be migrated. Without it only 
vfat config drives can be migrated. There is still some thought going into that 
- see: https://review.openstack.org/#/c/234659

Force complete - requires python-novaclient change

Force-complete forces a live migration to complete  by pausing the VM and 
restarting it when it has completed migration. This is intended as a brute 
force way to make a VM complete its migration when it is taking too long. In 
the future auto-converge and post-copy will be looked at. These became 
available in qemu 2.5.

Force complete is done in nova but still requires a change to python-novaclient 
to implement the CLI.

Cancel - in progress

Cancel stops a live migration, leaving it on the source host with the migration 
status left as "cancelled". This is in progress and follows the pattern of 
force-complete. Unfortunately this needs to be bundled up into one patch to 
avoid multiple API bumps.

Patches for review:
https://review.openstack.org/#/q/status:open+topic:bp/abort-live-migration

Progress reporting - in progress (no pun intended)

Progress reporting introduces migrations as a sub-resource of servers and adds 
progress data to the migration record. There was some debate at the mid cycle 
and on the mailing list about how to record this transient data. It is a waste 
to keep writing it to the database, but as it is generated at the compute 
manager and examined at the API, it was felt that writing it to the database is 
necessary to fit the existing architecture. The conclusion was that writing to 
the database every 5 seconds would not cause a significant overhead. 
Alternatives could be pursued later if necessary. For discussion see this ML 
thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/085662.html 
and the IRC meeting transcript here: 
http://eavesdrop.openstack.org/meetings/nova_live_migration/2016/nova_live_migration.2016-02-09-14.01.log.html

Patches for review:
https://review.openstack.org/#/q/status:open+topic:bp/live-migration-progress-report

Split networking - done

Split networking adds a configuration parameter to specify 
live_migration_inbound_addr as the ip address or host name to be used as the 
target for migration traffic. This allows migration traffic to be isolated on a 
separate network from other management traffic, providing an opportunity to 
isolate service levels for the two networks and improve security by moving 
unencrypted migration traffic to an isolated network.

Resize/cold migrate using storage pools - deferred

The objective here was to change the libvirt implementation of migrate and 
resize to use libvirt storage pools instead

Re: [openstack-dev] [neutron] Postgres support in (fwaas) tests

2016-02-12 Thread Eichberger, German
All,

The FWaaS gate hook had implicit support for the postgres database, which was 
failing in the gate since postgres wasn’t available. Madhu proposed patch [1] 
to remove postgres support. This leads to the wider question of whether we want to 
support/test postgres with the Neutron gates. I haven’t seen postgres in other 
gate hooks, nor do my deployments rely on it, so I am in favor of removing/not 
supporting it - but I wanted to check with others before making that 
determination.

Thanks,
German


[1] https://review.openstack.org/#/c/279339/3
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Update on scheduler and resource tracker progress

2016-02-12 Thread Ryan Rossiter

> On Feb 11, 2016, at 2:24 PM, Jay Pipes  wrote:
> 
> Hello all,
> 
> Performance working group, please pay attention to Chapter 2 in the details 
> section.
> 
> tl;dr
> -
> 
> At the Nova mid-cycle, we finalized decisions on a way forward in redesigning 
> the way that resources are tracked in Nova. This work is a major undertaking 
> and has implications for splitting out the scheduler from Nova, for the 
> ability of the placement engine to scale, and for removing long-standing 
> reporting and race condition bugs that have plagued Nova for years.
> 
> The following blueprint specifications outline the effort, which we are 
> calling the "resource providers framework":
> 
> * resource-classes (bp MERGED, code MERGED)
> * pci-generate-stats (bp MERGED, code IN REVIEW)
> * resource-providers (bp MERGED, code IN REVIEW)
> * generic-resource-pools (bp IN REVIEW, code TODO)
> * compute-node-inventory (bp IN REVIEW, code TODO)
> * resource-providers-allocations (bp IN REVIEW, code TODO)
> * resource-providers-scheduler (bp IN REVIEW, code TODO)
> 
> The group working on this code and doing the reviews are hopeful that the 
> generic-resource-pools work can be completed in Mitaka, and we also are going 
> to aim to get the compute-node-inventory work done in Mitaka, though that 
> will be more of a stretch.
> 
> The remainder of the resource providers framework blueprints will be targeted 
> to Newton. The resource-providers-scheduler blueprint is the final blueprint 
> required before the scheduler can be fully separated from Nova.
> 
> details
> ---
> 
> Chapter 1 - How the blueprints fit together
> ===
> 
> A request to launch an instance in Nova involves requests for two different 
> things: *resources* and *capabilities*. Resources are the quantitative part 
> of the request spec. Capabilities are the qualitative part of the request.
> 
> The *resource providers framework* is a set of 7 blueprints that reorganize 
> the way that Nova handles the quantitative side of the equation. These 7 
> blueprints are described below.
> 
> Compute nodes are a type of *resource provider*, since they allow instances 
> to *consume* some portion of its *inventory* of various types of resources. 
> We call these types of resources *"resource classes"*.
> 
> resource-classes bp: https://review.openstack.org/256297
> 
> The resource-providers blueprint introduces a new set of tables for storing 
> capacity and usage amounts of all resources in the system:
> 
> resource-providers bp: https://review.openstack.org/225546
> 
> While all compute nodes are resource providers [1], not all resource 
> providers are compute nodes. *Generic resource pools* are resource providers 
> that have an inventory of a *single resource class* and that provide that 
> resource class to consumers that are placed on multiple compute nodes.
> 
> The canonical example of a generic resource pool is a shared storage system. 
> Currently, a Nova compute node doesn't really know whether the storage 
> location it uses for storing disk images is a shared drive/cluster (ala NFS 
> or RBD) or if the storage location is a local disk drive [2]. The 
> generic-resource-pools blueprint covers the addition of these generic 
> resource pools, their relation to host aggregates, and the RESTful API [3] 
> added to control this external resource pool information.
> 
> generic-resource-pools bp: https://review.openstack.org/253187
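(To make the pool idea concrete, an inventory record for such a shared-storage
pool might look roughly like the sketch below -- the field names are
illustrative assumptions, not the merged schema.)

    # Illustrative only: what an inventory row for a shared NFS pool that
    # provides DISK_GB to an aggregate of compute nodes might contain.
    # Field names are assumptions, not the merged schema.
    nfs_pool_inventory = {
        'resource_provider_id': 42,      # the generic resource pool
        'resource_class': 'DISK_GB',     # a single resource class per pool
        'total': 102400,                 # GB exported by the filer
        'reserved': 1024,                # head-room kept back from placement
        'allocation_ratio': 1.0,
    }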
> 
> Within the Nova database schemas [4], capacity and inventory information is 
> stored in a variety of tables, columns and formats. vCPU, RAM and DISK 
> capacity information is stored in integer fields, PCI capacity information is 
> stored in the pci_devices table, NUMA inventory is stored combined together 
> with usage information in a JSON blob, etc. The compute-node-inventory 
> blueprint migrates all of the disparate capacity information from 
> compute_nodes into the new inventory table.
> 
> compute-node-inventory bp: https://review.openstack.org/260048
> 
> For the PCI resource classes, Nova currently has an entirely different 
> resource tracker (in /nova/pci/*) that stores an aggregate view of the PCI 
> resources (grouped by product, vendor, and numa node) in the 
> compute_nodes.pci_stats field. This information is entirely redundant 
> information since all fine-grained PCI resource information is stored in the 
> pci_devices table. This storage of summary information presents a sync 
> problem. The pci-generate-stats blueprint describes the effort to remove this 
> storage of summary device pool information and instead generate this summary 
> information on the fly for the scheduler. This work is a pre-requisite to 
> having all resource classes managed in a unified manner in Nova:
> 
> pci-generate-stats bp: https://review.openstack.org/240852
> 
> In the same way that capacity fields are scattered among different tables, 
> columns and formats, so too are the fields 

[openstack-dev] Baremetal Deploy Ramdisk functional testing

2016-02-12 Thread Maksym Lobur
Hi All,

In bareon [1] we have a test framework to test a deploy ramdisk with bareon inside 
(baremetal deployments). This is functional testing: we do a full 
partitioning/image_deployment in a VM, then reboot to see if the tenant image 
deployed properly. The blueprint is at [2], a test example is at [3]. We were going 
to put the framework in a separate repo, while keeping the functional tests in the 
bareon tree.

Does someone else have a need to test some kind of deploy ramdisk? Maybe you 
already have existing tools for this? Or would you be interested in reusing our code? 
The current pull request is to create a bareon-func-test repo [4]. But if that makes 
sense, we could do something like ramdisk-func-test, i.e. try to generalize the 
framework to test other ramdisks/agents.

[1] https://wiki.openstack.org/wiki/Bareon 

[2] https://blueprints.launchpad.net/bareon/+spec/bareon-functional-testing 

[3] http://pastebin.com/mL39QJS6 
[4] https://review.openstack.org/#/c/279120/ 



Regards,
Max Lobur,
OpenStack Developer, Mirantis, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Monty Taylor

On 02/12/2016 07:36 AM, Jay Pipes wrote:

On 02/12/2016 07:01 AM, Sean Dague wrote:

Ok... this is going to be one of those threads, but I wanted to try to
get resolution here.

OpenStack is wildly inconsistent in its use of tenant vs. project. As
someone that wasn't here at the beginning, I'm not even sure which one
we are supposed to be transitioning from -> to.

At a minimum I'd like to make all of devstack use 1 term, which is the
term we're trying to get to. That will help move the needle.

However, again, I'm not sure which one that is supposed to be (comments
in various places show movement in both directions). So people with
deeper knowledge here, can you speak up as to which is the deprecated
term and which is the term moving forward.


"Project" is the term that should be used.


Yes.

FWIW - The ansible modules in ansible 2.0 (as well as the shade library) 
use the term "project" regardless of whether the cloud in question is 
running keystone v2 or v3. It's the least I could do to try to nudge 
consumption in the right direction.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance]Glance v2 api support in Nova

2016-02-12 Thread Ian Cordasco
On Fri, Feb 12, 2016 at 9:24 AM, Mikhail Fedosin  wrote:
> Hello!
>
> In late December I wrote several messages about glance v2 support in Nova
> and Nova's xen plugin. Many things have been done after that and now I'm
> happy to announce that there we have a set of commits that makes Nova fully
> v2 compatible (xen plugin works too)!
>
> Here's the link to the top commit https://review.openstack.org/#/c/259097/
> Here's the link to approved spec for Mitaka
> https://github.com/openstack/nova-specs/blob/master/specs/mitaka/approved/use-glance-v2-api.rst
>
> I think it'll be a big step for OpenStack, because api v2 is much more
> stable and RESTful than v1.  We would very much like to deprecate v1 at some
> point. v2 is 'Current' since Juno, and after that there we've had a lot of
> attempts to adopt it in Nova, and every time it was postponed to next
> release cycle.
>
> Unfortunately, it may not happen this time - this work was marked as
> 'non-priority' when the related patches had been done. I think it's a big
> omission, because this work is essential for all OpenStack, and it will be a
> shame if we won't be able to land it in Mitaka.
> As far as I know, Feature Freeze will be announced on March, 3rd, and we
> still have enough time and people to test it before. All patches are split
> into small commits (100 LOC max), so they should be relatively easy to
> review.
>
> I wonder if Nova community members may change their decision and unblock
> these patches? Thanks in advance!
>
> Best regards,
> Mikhail Fedosin

As a fellow Glance core reviewer, I just want to thank you for your
effort on this (and the people who started this work a few cycles ago,
including Flavio and Fei Long Wang... assuming you're carrying on
their work).

If this won't make it into Mitaka, hopefully it can be reviewed and
merged early on in Newton by the Nova team.

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Glance]Glance v2 api support in Nova

2016-02-12 Thread Mikhail Fedosin
Hello!

In late December I wrote several messages about glance v2 support in Nova
and Nova's xen plugin. Many things have been done after that and now I'm
happy to announce that we have a set of commits that makes Nova fully
v2 compatible (xen plugin works too)!

Here's the link to the top commit https://review.openstack.org/#/c/259097/
Here's the link to approved spec for Mitaka
https://github.com/openstack/nova-specs/blob/master/specs/mitaka/approved/use-glance-v2-api.rst

I think it'll be a big step for OpenStack, because api v2 is much more
stable and RESTful than v1.  We would very much like to deprecate v1 at
some point. v2 is 'Current' since Juno, and after that there we've had a
lot of attempts to adopt it in Nova, and every time it was postponed to
next release cycle.

Unfortunately, it may not happen this time - this work was marked as
'non-priority' when the related patches had been done. I think it's a big
omission, because this work is essential for all OpenStack, and it will be
a shame if we won't be able to land it in Mitaka.
As far as I know, Feature Freeze will be announced on March, 3rd, and we
still have enough time and people to test it before. All patches are split
into small commits (100 LOC max), so they should be relatively easy to
review.

I wonder if Nova community members may change their decision and unblock
these patches? Thanks in advance!

Best regards,
Mikhail Fedosin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] create openstack/service-registry

2016-02-12 Thread Ryan Brown

On 02/12/2016 07:48 AM, Everett Toews wrote:

Hi All,

I need to air a concern over the create openstack/service-registry
[1] patch which aims to create an OpenStack service type registry
project to act as a dedicated registry location for service types in
OpenStack under the auspices of the API Working Group.

My concern is that it gets the API Working Group partially into the
big tent governance game. Personally I have zero interest in big tent
governance. I want better APIs, not to become embroiled in
governance. That said, I do fully recognize that by their nature APIs
in general (not just OpenStack) play a large role in governance.

The purpose of this email is not to dissuade the API WG from taking
on this responsibility. In fact, now that we've got a lot of
experience authoring guidelines and shepherding them through the
guideline process, it's time the WG evolved. My purpose is to simply
make sure we go into this with eyes wide open and understand the
consequences of doing so.

Thanks, Everett

[1] https://review.openstack.org/#/c/278612/


You're not wrong - it does involve us a little more in governance, but I 
think the value there (better namespacing, etc.) is something we can agree 
the API WG both does and should care about.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] pid=host

2016-02-12 Thread Steven Dake (stdake)
Daniel,

Unfortunately I am in a remodel process atm and unable to test.

It sounds like with qemu without KVM (as in emulated virtualization) the
processes get killed.  With KVM they don't get killed.

I'll add this to the bug tracker, but for our typical use case of "Deploy
OpenStack on bare metal" it seems like Kolla has always worked as is since
Docker 1.6.  What threw this bug into panic mode was the use of
virtualized environments, where docker does kill everything.
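For anyone who wants to verify what their environment actually does, a quick
check is to compare the cgroups of the qemu process against the container's
init process; a rough sketch (the process name and the container name are
assumptions, adjust as needed):

    # Quick diagnostic: does the qemu process share cgroups with the
    # nova_libvirt container? If not, docker should not kill it on rm -f.
    # Sketch only; process and container names are assumptions.
    import subprocess

    def cgroups(pid):
        with open('/proc/%s/cgroup' % pid) as f:
            return set(f.read().splitlines())

    qemu_pid = subprocess.check_output(
        ['pgrep', '-f', 'qemu-system']).split()[0].decode()
    container_pid = subprocess.check_output(
        ['docker', 'inspect', '--format', '{{.State.Pid}}',
         'nova_libvirt']).strip().decode()

    print('shared cgroups:', cgroups(qemu_pid) & cgroups(container_pid))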

Regards
-steve


On 2/9/16, 9:17 AM, "Daniel J Walsh"  wrote:

>Yes I am hearing from different people that this is working fine.  But
>keeping track of this issue in the bugzilla is best.
>I doubt that there was a change in the way docker handles killing
>processes that run in a container when PID=host.
>The way this is coded is to kill all processes in the pid1 cgroup.
>libvirt is supposed to start the VM in a different cgroup,
>which should mean that docker would lose track of the VM process, and
>not be able to kill it.  If this is a new failure, we
>need to establish whether libvirt and the VM end up in the same cgroup
>or a different one.  And whether or not it is actually
>broken.
>
>On 02/08/2016 10:53 PM, Steven Dake (stdake) wrote:
>> Michal,
>>
>> You listed steps to reproduce but it wasn't clear if docker 1.10 kills
>>vms
>> or keeps them alive from your description.  From our discussion today,
>>it
>> sounded as if docker 1.10, docker 1.9, and docker 1.8.2 have different
>> behaviors on this front.  Could you expand?
>>
>> Dan had required to keep the discussion in the bugzilla for tracking
>> purposes.  Would you mind creating a bugzilla account and adding your
>>data
>> to the bug?
>>
>> Regards
>> -steve
>>
>>
>> On 2/8/16, 12:15 PM, "Michał Jastrzębski"  wrote:
>>
>>> Hey,
>>>
>>> So quick steps to reproduce this:
>>>
>>> 0. install docker 1.10
>>> 1. Deploy kolla
>>> 2. Run VM
>>> 3. On compute host - ps aux | grep qemu, should show your vm process
>>> 4. docker rm -f nova_libvirt
>>> 5. ps aux | grep qemu should still show running vm
>>> 6. re-deploy nova_libvirt
>>> 7. docker exec -it nova_libvirt virsh list - should show running vm
>>>
>>> Cheers,
>>> Michal
>>>
>>> On 8 February 2016 at 07:32, Steven Dake (stdake) 
>>> wrote:
 Hey folks,

 I know we have been through some changes with how pid=host works.  I'd
 like
 to get to the bottom of this, so we can either add the features we
need
 to
 docker, or say "all is good".

 Here is the last quote from this bugzilla where Red Hat in general is
 interested in the same behavior as the Kolla team has.  They have many
 people embedded in the Docker and Kubernetes communities, so it may
make
 sense to let them do the work there :)

 https://bugzilla.redhat.com/show_bug.cgi?id=1302807

 Mrunal Patel 2016-02-08 06:10:15 EST

 docker tracks the pids in a container using cgroups and hence all
 processes
 are killed even though we use pid=host. I believe we had probably
 prompted
 them to add this behavior in the first place.


 This statement appears at odds with what was tested on IRC a few days
 back
 with docker 1.10.  It is possible docker 1.10 had a regression here,
in
 which case if they fix it, we will be back to a dead VM during libvirt
 upgrade which we don't want.

 Can folks that tested this weigh in on the testing that was done on
that
 bugzilla with distro type, docker version, docker-py version, and
 results.
 Unfortunately you will have to create a Red Hat bugzilla account, but
 if you
 don't wish to do that, please send the information on list after
 reviewing
 the bugzilla and I'll submit it on your behalf.

 The outcomes I would be happy with is:

 * docker will never change the semantics of host=pid mode for killing
 child
 processes

 * Or alternatively docker will add a feature such as
host=pidnochildkill
 which Red Hat can spearhead

 Thoughts and comments welcome.

 Regards

 -steve








 
 __________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>> 
>>>
>>>__
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Brant Knudson
On Fri, Feb 12, 2016 at 7:42 AM, Sean Dague  wrote:

> On 02/12/2016 08:30 AM, Dean Troyer wrote:
> > On Fri, Feb 12, 2016 at 6:01 AM, Sean Dague  > > wrote:
> >
> > OpenStack is wildly inconsistent in it's use of tenant vs. project.
> As
> > someone that wasn't here at the beginning, I'm not even sure which
> one
> > we are supposed to be transitioning from -> to.
> >
> > At a minimum I'd like to make all of devstack use 1 term, which is
> the
> > term we're trying to get to. That will help move the needle.
> >
> > However, again, I'm not sure which one that is supposed to be
> (comments
> > in various places show movement in both directions). So people with
> > deeper knowledge here, can you speak up as to which is the deprecated
> > term and which is the term moving forward.
> >
> >
> > IIRC Nova started with project, until the marriage with Rax, when many
> > things changed, although the project -> tenant change may have never
> > been completed.  Keystone v3 started the movement back to project.
> > OpenStackClient made the commitment from the beginning to present
> > exactly one term to the user in all cases, and we chose project.
> >
> > I've thought about making that change in DevStack many times, and would
> > love to see it happen.  Somehow it never gets to the top of the queue.
> > And now in a plugin world, it'll be a bit harder to maintain
> compatibility.
>
> Hmmm... one issue with that:
>
> keystone only supports tenant_id in their service catalog for
> replacement, not project_id - https://review.openstack.org/#/c/279523/.
>
> Keystone folks, any idea if that's going to be adjusted?
>
> -Sean
>
>
Proposed the change to keystone here:
https://review.openstack.org/#/c/279576/

As an excuse for not supporting this earlier, the replacement of
$(tenant_id)s, $(user_id)s, etc., in the service catalog is considered a
legacy feature so this has been neglected.
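
For anyone who hasn't looked at the templated catalog, the substitution being
discussed is roughly the following (an illustration of the idea only, not
keystone's actual code):

  # Illustration only: expanding $(tenant_id)s-style markers in a templated
  # catalog endpoint. The URL and id below are made up.
  import re

  def expand(template, **values):
      return re.sub(r'\$\((\w+)\)s', lambda m: values[m.group(1)], template)

  template = 'http://compute.example.com:8774/v2/$(tenant_id)s'
  print(expand(template, tenant_id='8ae0b1a9702d46f8b0c3a6d9e1f4c7aa'))

Supporting the new terminology would essentially mean also accepting
$(project_id)s in such templates, which is presumably what the keystone
change linked above adds.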

- Brant

--
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> > For the cores party, much as I enjoyed the First Nation cuisine in
> > Vancouver
> > or the performance art in Tokyo, IMO it's probably time to draw a line
> > under
> > that excess also, as it too projects a notion of exclusivity that runs
> > counter
> > to building a community.
> 
> ... first nation cuisine? you know that's not really what Canadians
> eat? /me sips on maple syrup and chows on some beavertail.

LOL, I was thinking of the roasted chunks of bison on a stick ... though
now that you mention it, I recall a faint whiff of maple syrup ;)

Cheers,
Eoghan 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Updating a volume attachment

2016-02-12 Thread Andrey Pavlov
Hello Shoham,

We've tried to write and implement a similar spec [1],
and someone has tried to implement it [2].
You can see the comments there.

[1] - https://review.openstack.org/#/c/234269/
[2] - https://review.openstack.org/#/c/259518/

On Thu, Feb 11, 2016 at 8:20 PM, Shoham Peller
 wrote:
> Thank you Andrea for your reply.
>
> I know this spec and it is indeed a viable solution.
> However, I think allowing users to update the attachment detail, rather than
> detach and re-attach a volume for every change is more robust and more
> convenient.
>
> Also, IMHO it's a better user-experience if users can use a single API call
> instead of detach API call, poll for the detachment, re-attach the volume,
> and poll again for the attachment if they want to powerup the VM.
> The bdm DB updating, can happen from nova-api, without IRC'ing a compute
> node, and thus return only when the request has been completed fully.
>
> Don't you agree it's needed, even when detaching a boot volume is possible?
>
> Shoham
>
> On Thu, Feb 11, 2016 at 7:04 PM, Andrea Rosa  wrote:
>>
>> Hi
>>
>> On 11/02/16 16:51, Shoham Peller wrote:
>> > if the volume we want to update is the boot
>> > volume, detaching to update the bdm, is not even possible.
>>
>> You might be interested in the approved spec [1] we have for Mitaka
>> (ref. detach boot volume).
>> Unfortunately the spec was not part of the high-priority list and it
>> didn't get implemented but we are going to propose it again for Newton.
>>
>> Regards
>> --
>> Andrea Rosa
>>
>> [1]
>>
>> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/detach-boot-volume.html
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind regards,
Andrey Pavlov.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread gordon chung


On 12/02/16 07:45 AM, Eoghan Glynn wrote:
>
>>> [...]
>>>* much of the problem with the lavish parties is IMO related to the
>>>  *exclusivity* of certain shindigs, as opposed to devs socializing at
>>>  summit being inappropriate per se. In that vein, I think the cores
>>>  party sends the wrong message and has run its course, while the TC
>>>  dinner ... well, maybe Austin is the time to show some leadership
>>>  on that? ;)
>> Well, Tokyo was the time to show some leadership on that -- there was no
>> "TC dinner" there :)
> Excellent, that is/was indeed a positive step :)
>
> For the cores party, much as I enjoyed the First Nation cuisine in Vancouver
> or the performance art in Tokyo, IMO it's probably time to draw a line under
> that excess also, as it too projects a notion of exclusivity that runs counter
> to building a community.

... first nation cuisine? you know that's not really what Canadians
eat? /me sips on maple syrup and chows on some beavertail.

i went to the HK core party, it was nice. that said, i agree that the 
core party is not needed -- our egos are big enough already.

(apologies to those who never have been to core party)

cheers,

-- 
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

2016-02-12 Thread Ian Cordasco
On Fri, Feb 12, 2016 at 5:13 AM, Victor Stinner  wrote:
> Change 237027: For the encoding of HTTP headers, it looks like Swift doesn't
> respect the HTTP RFCs. HTTP requires headers to be encoded to Latin-1, but
> Swift (server or client, sorry I don't know) encodes headers as UTF-8.
> Something should be done too, but it will require deep analysis, preparing a
> transition period, etc. This problem is complex and cannot be fixed right
> now.

Just to interject here, RFC 2616 is the one you're talking about. That
encoding requirement was dropped when HTTP/1.1 was updated in the
7230-7235 RFCs. Now a field value is defined as

field-value = *( field-content / obs-fold )
field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]
field-vchar = VCHAR / obs-text
obs-fold = CRLF 1*( SP / HTAB )
obs-text = %x80-FF

Where VCHAR is any visible US ASCII character. So while UTF-8 is still
a bad idea for the header value (and in fact, http.client on Python 3
will auto-encode headers to Latin 1) Latin 1 is no longer the
requirement.

For those interested, you can read up on headers in HTTP/1.1 here:
https://tools.ietf.org/html/rfc7230#section-3.2
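
A quick Python 3 illustration of why this matters in practice (this is just my
sketch, not Swift code): a str header value gets encoded to latin-1 by
http.client, so a value that only makes sense as UTF-8 either raises or goes
out on the wire as different bytes than the sender intended:

  value = 'sw\u00edft'                # U+00ED fits in latin-1
  print(value.encode('latin-1'))      # b'sw\xedft'  <- what http.client sends
  print(value.encode('utf-8'))        # b'sw\xc3\xadft'  <- what a UTF-8 peer expects

  value = 'sw\u0130ft'                # U+0130 does not fit in latin-1
  try:
      value.encode('latin-1')
  except UnicodeEncodeError as exc:
      print('header value rejected:', exc)

So even though the RFCs no longer mandate latin-1, sticking to ASCII (or
explicitly pre-encoding anything else) stays on the safe side of the
field-value grammar above.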

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Assaf Muller
On Fri, Feb 12, 2016 at 8:43 AM, Ihar Hrachyshka  wrote:
> Eoghan Glynn  wrote:
>
>>
>>
 [...]
   * much of the problem with the lavish parties is IMO related to the
 *exclusivity* of certain shindigs, as opposed to devs socializing at
 summit being inappropriate per se. In that vein, I think the cores
 party sends the wrong message and has run its course, while the TC
 dinner ... well, maybe Austin is the time to show some leadership
 on that? ;)
>>>
>>>
>>> Well, Tokyo was the time to show some leadership on that -- there was no
>>> "TC dinner" there :)
>>
>>
>> Excellent, that is/was indeed a positive step :)
>>
>> For the cores party, much as I enjoyed the First Nation cuisine in
>> Vancouver
>> or the performance art in Tokyo, IMO it's probably time to draw a line
>> under
>> that excess also, as it too projects a notion of exclusivity that runs
>> counter
>> to building a community.
>
>
> A lot of people I care about ignore the core reviewer party for those exact
> reasons: because it’s too elitist and divisive. I agree with them, and I
> ignore the party. I suggest everyone does the same.

I 'boycott' (Kind of a strong word since nobody cares in the first
place) the core party for the same reasons.

>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Ihar Hrachyshka

Eoghan Glynn  wrote:





[...]
  * much of the problem with the lavish parties is IMO related to the
*exclusivity* of certain shindigs, as opposed to devs socializing at
summit being inappropriate per se. In that vein, I think the cores
party sends the wrong message and has run its course, while the TC
dinner ... well, maybe Austin is the time to show some leadership
on that? ;)


Well, Tokyo was the time to show some leadership on that -- there was no
"TC dinner" there :)


Excellent, that is/was indeed a positive step :)

For the cores party, much as I enjoyed the First Nation cuisine in  
Vancouver
or the performance art in Tokyo, IMO it's probably time to draw a line  
under
that excess also, as it too projects a notion of exclusivity that runs  
counter

to building a community.


A lot of people I care about ignore the core reviewer party for those exact  
reasons: because it’s too elitist and divisive. I agree with them, and I  
ignore the party. I suggest everyone does the same.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Sean Dague
On 02/12/2016 08:30 AM, Dean Troyer wrote:
> On Fri, Feb 12, 2016 at 6:01 AM, Sean Dague  > wrote:
> 
> OpenStack is wildly inconsistent in it's use of tenant vs. project. As
> someone that wasn't here at the beginning, I'm not even sure which one
> we are supposed to be transitioning from -> to.
> 
> At a minimum I'd like to make all of devstack use 1 term, which is the
> term we're trying to get to. That will help move the needle.
> 
> However, again, I'm not sure which one that is supposed to be (comments
> in various places show movement in both directions). So people with
> deeper knowledge here, can you speak up as to which is the deprecated
> term and which is the term moving forward.
> 
> 
> IIRC Nova started with project, until the marriage with Rax, when many
> things changed, although the project -> tenant change may have never
> been completed.  Keystone v3 started the movement back to project. 
> OpenStackClient made the commitment from the beginning to present
> exactly one term to the user in all cases, and we chose project.
> 
> I've thought about making that change in DevStack many times, and would
> love to see it happen.  Somehow it never gets to the top of the queue. 
> And now in a plugin world, it'll be a bit harder to maintain compatibility.

Hmmm... one issue with that:

keystone only supports tenant_id in their service catalog for
replacement, not project_id - https://review.openstack.org/#/c/279523/.

Keystone folks, any idea if that's going to be adjusted?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Jay Pipes

On 02/12/2016 07:01 AM, Sean Dague wrote:

Ok... this is going to be one of those threads, but I wanted to try to
get resolution here.

OpenStack is wildly inconsistent in it's use of tenant vs. project. As
someone that wasn't here at the beginning, I'm not even sure which one
we are supposed to be transitioning from -> to.

At a minimum I'd like to make all of devstack use 1 term, which is the
term we're trying to get to. That will help move the needle.

However, again, I'm not sure which one that is supposed to be (comments
in various places show movement in both directions). So people with
deeper knowledge here, can you speak up as to which is the deprecated
term and which is the term moving forward.


"Project" is the term that should be used.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Dean Troyer
On Fri, Feb 12, 2016 at 6:01 AM, Sean Dague  wrote:

> OpenStack is wildly inconsistent in it's use of tenant vs. project. As
> someone that wasn't here at the beginning, I'm not even sure which one
> we are supposed to be transitioning from -> to.
>
> At a minimum I'd like to make all of devstack use 1 term, which is the
> term we're trying to get to. That will help move the needle.
>
> However, again, I'm not sure which one that is supposed to be (comments
> in various places show movement in both directions). So people with
> deeper knowledge here, can you speak up as to which is the deprecated
> term and which is the term moving forward.
>

IIRC Nova started with project, until the marriage with Rax, when many
things changed, although the project -> tenant change may have never been
completed.  Keystone v3 started the movement back to project.
OpenStackClient made the commitment from the beginning to present exactly
one term to the user in all cases, and we chose project.

I've thought about making that change in DevStack many times, and would
love to see it happen.  Somehow it never gets to the top of the queue.  And
now in a plugin world, it'll be a bit harder to maintain compatibility.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Victor Stinner

Le 12/02/2016 13:01, Sean Dague a écrit :

OpenStack is wildly inconsistent in it's use of tenant vs. project.


Yeah, it's time to find a 3rd term to avoid confusion! ;-)

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] gate issues

2016-02-12 Thread Corey O'Brien
Hey all,

We've made some progress with the gates this past week. There are still
some issues, but I want to point out that I've also seen a lot of real
errors get a recheck comment recently. It slows the gate down and wastes
infra quota to recheck things that are going to fail again. Can I suggest
that we all make sure to get back in the habit of looking at failures and
noting down a reason for the recheck? This will also help track what issues
still remain to be fixed with the gates.

Thanks,
Corey

On Mon, Feb 8, 2016 at 12:10 PM Hongbin Lu  wrote:

> Hi Team,
>
>
>
> In order to resolve issue #3, it looks like we have to significantly
> reduce the memory consumption of the gate tests. Details can be found in
> this patch https://review.openstack.org/#/c/276958/ . For core team, a
> fast review and approval of that patch would be greatly appreciated, since
> it is hard to work with a gate that takes several hours to complete. Thanks.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Corey O'Brien [mailto:coreypobr...@gmail.com]
> *Sent:* February-05-16 12:04 AM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* [openstack-dev] [Magnum] gate issues
>
>
>
> So as we're all aware, the gate is a mess right now. I wanted to sum up
> some of the issues so we can figure out solutions.
>
>
>
> 1. The functional-api job sometimes fails because bays timeout building
> after 1 hour. The logs look something like this:
>
> magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays
> [3733.626171s] ... FAILED
>
> I can reproduce this hang on my devstack with etcdctl 2.0.10 as described
> in this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but
> apparently either my fix with using 2.2.5 (
> https://review.openstack.org/#/c/275994/) is incomplete or there is
> another intermittent problem because it happened again even with that fix: (
> http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html
> )
>
>
>
> 2. The k8s job has some sort of intermittent hang as well that causes a
> similar symptom as with swarm.
> https://bugs.launchpad.net/magnum/+bug/1541964
>
>
>
> 3. When the functional-api job runs, it frequently destroys the VM causing
> the jenkins slave agent to die. Example:
> http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
> 
>
> When this happens, zuul re-queues a new build from the start on a new VM.
> This can happen many times in a row before the job completes.
>
> I chatted with openstack-infra about this and after taking a look at one
> of the VMs, it looks like memory over consumption leading to thrashing was
> a possible culprit. The sshd daemon was also dead but the console showed
> things like "INFO: task kswapd0:77 blocked for more than 120 seconds". A
> cursory glance and following some of the jobs seems to indicate that this
> doesn't happen on RAX VMs which have swap devices unlike the OVH VMs as
> well.
>
>
>
> 4. In general, even when things work, the gate is really slow. The
> sequential master-then-node build process in combination with underpowered
> VMs makes bay builds take 25-30 minutes when they do succeed. Since we're
> already close to tipping over a VM, we run functional tests with
> concurrency=1, so 2 bay builds means almost the entire allotted devstack
> testing time (generally 75 minutes of actual test time available it seems).
>
>
>
> Corey
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] create openstack/service-registry

2016-02-12 Thread Sean Dague
On 02/12/2016 07:48 AM, Everett Toews wrote:
> Hi All,
> 
> I need to air a concern over the create openstack/service-registry [1] patch 
> which aims to create an OpenStack service type registry project to act as a 
> dedicated registry location for service types in OpenStack under the auspices 
> of the API Working Group.
> 
> My concern is that it gets the API Working Group partially into the big tent 
> governance game. Personally I have zero interest in big tent governance. I 
> want better APIs, not to become embroiled in governance. That said, I do 
> fully recognize that by their nature APIs in general (not just OpenStack) 
> play a large role in governance.
> 
> The purpose of this email is not to dissuade the API WG from taking on this 
> responsibility. In fact, now that we've got a lot of experience authoring 
> guidelines and shepherding them through the guideline process, it's time the 
> WG evolved. My purpose is to simply make sure we go into this with eyes wide 
> open and understand the consequences of doing so. 
> 
> Thanks,
> Everett
> 
> [1] https://review.openstack.org/#/c/278612/

I guess I'm unclear on the concerns, or possibly on the role of the API WG.

The API WG is currently building some more generic recommendations.
There has been a feeling expressed that the API WG should take a more active
role in shaping the APIs in OpenStack, which is clearly going to mean
caring at least a little about governance.

My assumption here is that nothing can register a service endpoint
unless it's already an official project in OpenStack. Which means most
of that is back on the TC. However someone still needs to be sorting
things out here as the service types thing is kind of a mess today.

If you don't think that's the API WG, that's fine (it is why this went
to the mailing list last week), we can create a different subgroup for
this. The core team is a dedicated group so it doesn't need to overlap
with API WG.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Heat Convergence in Mitaka/Newton status and plans

2016-02-12 Thread Sergey Kraynev
Hi all,

I want to share the results of the last Heat meeting, where we discussed the
status of "convergence".
The "convergence" feature still has some issues that prevent enabling the
convergence engine by default. Some of them were found on TripleO during
manual deployments.
There is also a sizeable number of related patches [1].

The decision for Mitaka is to keep the "convergence" feature in experimental
status, ready for deep, thorough testing.
So if somebody wants to try/test convergence, now is the best moment for it.
However, it's not recommended for production clouds.

In the Newton release the Heat team plans to enable this feature on most of
the gate jobs (e.g. the TripleO HA job).
After a month of running jobs with "convergence" enabled and positive results
(a low or zero error rate), the feature will be enabled by default.


[1]
https://review.openstack.org/#/q/message:Convergence++project:openstack/heat+status:open+branch:master

-- 
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] create openstack/service-registry

2016-02-12 Thread Jay Pipes

On 02/12/2016 07:48 AM, Everett Toews wrote:

Hi All,

I need to air a concern over the create openstack/service-registry
[1] patch which aims to create an OpenStack service type registry
project to act as a dedicated registry location for service types in
OpenStack under the auspices of the API Working Group.

My concern is that it gets the API Working Group partially into the
big tent governance game. Personally I have zero interest in big tent
governance. I want better APIs, not to become embroiled in
governance. That said, I do fully recognize that by their nature APIs
in general (not just OpenStack) play a large role in governance.

The purpose of this email is not to dissuade the API WG from taking
on this responsibility. In fact, now that we've got a lot of
experience authoring guidelines and shepherding them through the
guideline process, it's time the WG evolved. My purpose is to simply
make sure we go into this with eyes wide open and understand the
consequences of doing so.


I hear your concern, Everett. Is there an alternative you would 
recommend? Are you saying that the API WG should only advise and the TC 
members should be the ones with the approval rights on that repo?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] create openstack/service-registry

2016-02-12 Thread Everett Toews
Hi All,

I need to air a concern over the create openstack/service-registry [1] patch 
which aims to create an OpenStack service type registry project to act as a 
dedicated registry location for service types in OpenStack under the auspices 
of the API Working Group.

My concern is that it gets the API Working Group partially into the big tent 
governance game. Personally I have zero interest in big tent governance. I want 
better APIs, not to become embroiled in governance. That said, I do fully 
recognize that by their nature APIs in general (not just OpenStack) play a 
large role in governance.

The purpose of this email is not to dissuade the API WG from taking on this 
responsibility. In fact, now that we've got a lot of experience authoring 
guidelines and shepherding them through the guideline process, it's time the WG 
evolved. My purpose is to simply make sure we go into this with eyes wide open 
and understand the consequences of doing so. 

Thanks,
Everett

[1] https://review.openstack.org/#/c/278612/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> > [...]
> >   * much of the problem with the lavish parties is IMO related to the
> > *exclusivity* of certain shindigs, as opposed to devs socializing at
> > summit being inappropriate per se. In that vein, I think the cores
> > party sends the wrong message and has run its course, while the TC
> > dinner ... well, maybe Austin is the time to show some leadership
> > on that? ;)
> 
> Well, Tokyo was the time to show some leadership on that -- there was no
> "TC dinner" there :)

Excellent, that is/was indeed a positive step :)

For the cores party, much as I enjoyed the First Nation cuisine in Vancouver
or the performance art in Tokyo, IMO it's probably time to draw a line under
that excess also, as it too projects a notion of exclusivity that runs counter
to building a community.

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-12 Thread Ilya Kutukov
Excuse me, I meant multi-release package. We already have release directives
in the plugin metadata.yaml that define the releases compatible with the plugin.
As far as I understand, a "multi-release package" supposes the ability to define
custom configuration/code for each of these releases.


On Fri, Feb 12, 2016 at 3:19 PM, Evgeniy L  wrote:

> >> We have package format <=4.0 where all files have fixed names and
> locations. This is the defaults.
>
> The thing is for 5.0 there should be no default, those fields from now on
> should be specified explicitly.
>
> >> Igor want to provide multi-package
>
> I'm not familiar with this idea, could you please clarify what
> multi-package is?
>
> Thanks,
>
> On Fri, Feb 12, 2016 at 2:57 PM, Ilya Kutukov 
> wrote:
>
>>
>>
>> On Fri, Feb 12, 2016 at 2:03 PM, Evgeniy L  wrote:
>>
>>> Ilya,
>>>
>>> What do you mean by "default"? From the data format I see that we don't
>>> "override defaults" we specify the data for specific release, the way it
>>> was always done for deployment scripts and repositories.
>>>
>>>
>> We have package format <=4.0 where all files have fixed names and
>> locations. This is the defaults.
>>
>> 1. The maintenance team wants ability to specify folder instead plugin
>> configuration files so there should be ability to change this paths to
>> define a folder or other non-standard location. Yes, plugin developer could
>> have whatever source structure and then translate it to the file structure
>> required for the FPB with scripts or build system, but ability to split
>> e.g. tasks files looks useful for me.
>>
>> 2. Igor want to provide multi-package, so, according to spec, this custom
>> release-specific paths to other package files could be specified in release
>> records.
>>
>> I don't see any reasons to complicate format even more and to have some
>>> things which are related to the deployment specified in the root and some
>>> in specific release.
>>>
>>> There is consistent mechanism to specify such kind of things, lets just
>>> use it.
>>>
>>> Thanks,
>>>
>>> On Fri, Feb 12, 2016 at 1:24 PM, Ilya Kutukov 
>>> wrote:
>>>


 On Fri, Feb 12, 2016 at 11:47 AM, Evgeniy L  wrote:

> Ilya,
>
> >> My opinion that i've seen no example of multiple software of
> plugins versions shipped in one package or other form of bundle. Its not a
> common practice.
>
> With plugins we extend Fuel capabilities, which supports multiple
> operating system releases, so it's absolutely natural to extend multiple
> releases at the same time.
>
>
 I just warning against idea when to merge content of several plugin
 distributions in one bundle. But it's ok for plugin code to support
 multiple releases one or another way.



> >> Anyway we need to provide ability to override paths in manifest
> (metadata.yaml).
>
> Could you please provide more information on that? I'm not sure if I
> understand your solution.
>
>
 https://review.openstack.org/#/c/271417/5/specs/9.0/plugins-v5.rst
 L150 and further
 We are overriding default path with special per-release path attributes.
 The question is to use per-release way described in spec or don't
 bother and specify this overrides only in metadata.yaml root.


> Also I'm not sure what we are arguing about, if plugin developer (or
> certification process of some company) requires to have plugin per 
> release,
> it's *very easy* and straight forward to do it even now *without any*
> changes.
>
> If plugin developer wants to deliver plugin for CentOS, Ubuntu, RH
> etc, let them do it, and again when we get full support
> of multi-version environments it's going to be even more crucial for UX to
> have a single plugin with multi-release support.
>
>
 Thanks,
>
> On Thu, Feb 11, 2016 at 9:35 PM, Ilya Kutukov 
> wrote:
>
>> My opinion that i've seen no example of multiple software of plugins
>> versions shipped in one package or other form of bundle. Its not a common
>> practice.
>>
>> Anyway we need to provide ability to override paths in manifest
>> (metadata.yaml).
>>
>> So the plugin developers could use this approaches to provide
>> multiple versions support:
>>
>> * tasks logic (do the plugin developers have access to current
>> release info?)
>> * hooks in pre-build process. Its not a big deal to preprocess source
>> folder to build different packages with scripts that adding or removing
>> some files or replacing some paths.
>> * and, perhaps, logic anchors with YACL or other DSL in tasks
>> dependancies if this functionality will be added this in theory could 
>> allow
>> to use or not to use some graph parts depending on release.
>>
>> I think its already better than nothing and more flexible than any
>> standardised approach.
>>
>> On Thu, 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Thierry Carrez

Eoghan Glynn wrote:

[...]
  * much of the problem with the lavish parties is IMO related to the
*exclusivity* of certain shindigs, as opposed to devs socializing at
summit being inappropriate per se. In that vein, I think the cores
party sends the wrong message and has run its course, while the TC
dinner ... well, maybe Austin is the time to show some leadership
on that? ;)


Well, Tokyo was the time to show some leadership on that -- there was no 
"TC dinner" there :)


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-12 Thread Evgeniy L
>> We have package format <=4.0 where all files have fixed names and
locations. This is the defaults.

The thing is, for 5.0 there should be no default; those fields should from now
on be specified explicitly.

>> Igor want to provide multi-package

I'm not familiar with this idea, could you please clarify what
multi-package is?

Thanks,

On Fri, Feb 12, 2016 at 2:57 PM, Ilya Kutukov  wrote:

>
>
> On Fri, Feb 12, 2016 at 2:03 PM, Evgeniy L  wrote:
>
>> Ilya,
>>
>> What do you mean by "default"? From the data format I see that we don't
>> "override defaults" we specify the data for specific release, the way it
>> was always done for deployment scripts and repositories.
>>
>>
> We have package format <=4.0 where all files have fixed names and
> locations. This is the defaults.
>
> 1. The maintenance team wants ability to specify folder instead plugin
> configuration files so there should be ability to change this paths to
> define a folder or other non-standard location. Yes, plugin developer could
> have whatever source structure and then translate it to the file structure
> required for the FPB with scripts or build system, but ability to split
> e.g. tasks files looks useful for me.
>
> 2. Igor want to provide multi-package, so, according to spec, this custom
> release-specific paths to other package files could be specified in release
> records.
>
> I don't see any reasons to complicate format even more and to have some
>> things which are related to the deployment specified in the root and some
>> in specific release.
>>
>> There is consistent mechanism to specify such kind of things, lets just
>> use it.
>>
>> Thanks,
>>
>> On Fri, Feb 12, 2016 at 1:24 PM, Ilya Kutukov 
>> wrote:
>>
>>>
>>>
>>> On Fri, Feb 12, 2016 at 11:47 AM, Evgeniy L  wrote:
>>>
 Ilya,

 >> My opinion that i've seen no example of multiple software of plugins
 versions shipped in one package or other form of bundle. Its not a common
 practice.

 With plugins we extend Fuel capabilities, which supports multiple
 operating system releases, so it's absolutely natural to extend multiple
 releases at the same time.


>>> I just warning against idea when to merge content of several plugin
>>> distributions in one bundle. But it's ok for plugin code to support
>>> multiple releases one or another way.
>>>
>>>
>>>
 >> Anyway we need to provide ability to override paths in manifest
 (metadata.yaml).

 Could you please provide more information on that? I'm not sure if I
 understand your solution.


>>> https://review.openstack.org/#/c/271417/5/specs/9.0/plugins-v5.rst L150
>>> and further
>>> We are overriding default path with special per-release path attributes.
>>> The question is to use per-release way described in spec or don't bother
>>> and specify this overrides only in metadata.yaml root.
>>>
>>>
 Also I'm not sure what we are arguing about, if plugin developer (or
 certification process of some company) requires to have plugin per release,
 it's *very easy* and straight forward to do it even now *without any*
 changes.

 If plugin developer wants to deliver plugin for CentOS, Ubuntu, RH etc,
 let them do it, and again when we get full support
 of multi-version environments it's going to be even more crucial for UX to
 have a single plugin with multi-release support.


>>> Thanks,

 On Thu, Feb 11, 2016 at 9:35 PM, Ilya Kutukov 
 wrote:

> My opinion that i've seen no example of multiple software of plugins
> versions shipped in one package or other form of bundle. Its not a common
> practice.
>
> Anyway we need to provide ability to override paths in manifest
> (metadata.yaml).
>
> So the plugin developers could use this approaches to provide multiple
> versions support:
>
> * tasks logic (do the plugin developers have access to current release
> info?)
> * hooks in pre-build process. Its not a big deal to preprocess source
> folder to build different packages with scripts that adding or removing
> some files or replacing some paths.
> * and, perhaps, logic anchors with YACL or other DSL in tasks
> dependancies if this functionality will be added this in theory could 
> allow
> to use or not to use some graph parts depending on release.
>
> I think its already better than nothing and more flexible than any
> standardised approach.
>
> On Thu, Feb 11, 2016 at 6:31 PM, Simon Pasquier <
> spasqu...@mirantis.com> wrote:
>
>> Hi,
>>
>> On Thu, Feb 11, 2016 at 11:46 AM, Igor Kalnitsky <
>> ikalnit...@mirantis.com> wrote:
>>
>>> Hey folks,
>>>
>>> The original idea is to provide a way to build plugin that are
>>> compatible with few releases. It makes sense to me, cause it looks
>>> awful if you need to maintain different branches for different Fuel
>>> releases an

Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Davanum Srinivas
keystone v3 says "project":
http://developer.openstack.org/api-ref-identity-v3.html#projects-v3

That validates Ihar's observation from Neutron

-- Dims

On Fri, Feb 12, 2016 at 7:10 AM, Ihar Hrachyshka  wrote:
> Sean Dague  wrote:
>
>> Ok... this is going to be one of those threads, but I wanted to try to
>> get resolution here.
>>
>> OpenStack is wildly inconsistent in it's use of tenant vs. project. As
>> someone that wasn't here at the beginning, I'm not even sure which one
>> we are supposed to be transitioning from -> to.
>>
>> At a minimum I'd like to make all of devstack use 1 term, which is the
>> term we're trying to get to. That will help move the needle.
>>
>> However, again, I'm not sure which one that is supposed to be (comments
>> in various places show movement in both directions). So people with
>> deeper knowledge here, can you speak up as to which is the deprecated
>> term and which is the term moving forward.
>
>
> Tenant is deprecated, and project is the new term to use.
>
> Why am I confident? Neutron is currently looking into doing the naming
> transition.
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] URL of Horizon is hard to find on the dashboard

2016-02-12 Thread Vitaly Kramskikh
We use HTTPS url for the title and HTTP url for the small link with (HTTP)
text near the title:




2016-02-12 17:09 GMT+07:00 Igor Kalnitsky :

> Vitaly,
>
> > Then we'll end up with 2 buttons (for HTTP and HTTPS links) for Horizon
> and every plugin link
>
> Why? And how do you handle it with link now?
>
> On Fri, Feb 12, 2016 at 11:15 AM, Vitaly Kramskikh
>  wrote:
> > Igor,
> >
> > Then we'll end up with 2 buttons (for HTTP and HTTPS links) for Horizon
> and
> > every plugin link, which would look quite ugly. We had all these options
> in
> > mind when we designed this change and decided to go with the current
> look.
> >
> > Seriously guys, I don't understand your concerns. After dismissing the
> > deployment result message, Horizon link is in the top block on the
> dashboard
> > - it's very hard to get lost.
> >
> > 2016-02-11 20:10 GMT+07:00 Igor Kalnitsky :
> >>
> >> Vitaly,
> >>
> >> What about adding some button with "Go" or "Visit" text? Somewhere on
> >> the right size of line? It'd be easy to understand what to click to
> >> visit the dashboard.
> >>
> >> - Igor
> >>
> >> On Thu, Feb 11, 2016 at 1:38 PM, Vitaly Kramskikh
> >>  wrote:
> >> > Roman,
> >> >
> >> > For with enabled SSL it still can be quite long as it contains FQDN.
> And
> >> > we
> >> > also need to change plugin link representation accordingly, which I
> >> > don't
> >> > fine acceptable. I think you just got used to the old interface where
> >> > the
> >> > link to Horizon was a part of deployment task result message. We've
> >> > merged
> >> > small style update to underline Horizon/plugin links, I think it would
> >> > be
> >> > enough to solve the issue.
> >> >
> >> > 2016-02-09 20:31 GMT+07:00 Roman Prykhodchenko :
> >> >>
> >> >> Cannot we use display the same link we use in the title?
> >> >>
> >> >> 9 лют. 2016 р. о 14:14 Vitaly Kramskikh 
> >> >> написав(ла):
> >> >>
> >> >> Hi, Roman,
> >> >>
> >> >> I think the only solution here is to underline the title so it would
> >> >> look
> >> >> like a link. I don't think it's a good idea to show full URL because:
> >> >>
> >> >> If SSL is enabled, there will be 2 links - HTTP and HTTPS.
> >> >> Plugins can provide their own links for their dashboards, and they
> >> >> would
> >> >> be shown using exactly the same representation which is used for
> >> >> Horizon.
> >> >> These links could be quite long.
> >> >>
> >> >>
> >> >> 2016-02-09 20:04 GMT+07:00 Roman Prykhodchenko :
> >> >>>
> >> >>> Whoops! I forgot to attach the link. Sorry!
> >> >>>
> >> >>> 1. http://i.imgur.com/8GaUtDq.png
> >> >>>
> >> >>> > 9 лют. 2016 р. о 13:48 Roman Prykhodchenko 
> >> >>> > написав(ла):
> >> >>> >
> >> >>> > Hi fuelers!
> >> >>> >
> >> >>> > I’m not sure, if it’s my personal problem or the UX can be
> improved
> >> >>> > a
> >> >>> > little, but I’ve literary spend more than 5 minutes trying to
> figure
> >> >>> > out how
> >> >>> > to find a URL of Horizon. I’ve made a screenshot [1] and I suggest
> >> >>> > to add a
> >> >>> > full a link with the full URL in its test after "The OpenStack
> >> >>> > dashboard
> >> >>> > Horizon is now available". That would make things much more
> usable.
> >> >>> >
> >> >>> >
> >> >>> > - romcheg
> >> >>> >
> >> >>> >
> >> >>> >
> >> >>> >
> __
> >> >>> > OpenStack Development Mailing List (not for usage questions)
> >> >>> > Unsubscribe:
> >> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>>
> >> >>>
> >> >>>
> >> >>>
> >> >>>
> __
> >> >>> OpenStack Development Mailing List (not for usage questions)
> >> >>> Unsubscribe:
> >> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>>
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Vitaly Kramskikh,
> >> >> Fuel UI Tech Lead,
> >> >> Mirantis, Inc.
> >> >>
> >> >>
> __
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe:
> >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> __
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe:
> >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > Vitaly Kramskikh,
> >> > Fuel UI Tech Lead,
> >> > Mirantis, Inc.
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usa

Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Gyorgy Szombathelyi
Hi,

In keystone v2 it was called tenants; in v3 it is projects. It is funny to
see that components have switched to keystonemiddleware to configure
[keystone_authtoken] with v3 settings, but other places still use the v2
terminology (and even the v2 client).
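
For anyone untangling the two vocabularies, the split shows up directly in the
auth plugins; here's a sketch from memory (argument names should be checked
against keystoneauth1 itself):

  # Sketch: the same credentials expressed against v2 ("tenant") and v3 ("project").
  from keystoneauth1.identity import v2, v3

  v2_auth = v2.Password(auth_url='http://keystone:5000/v2.0',
                        username='demo', password='secret',
                        tenant_name='demo')             # v2 terminology

  v3_auth = v3.Password(auth_url='http://keystone:5000/v3',
                        username='demo', password='secret',
                        user_domain_name='Default',
                        project_name='demo',            # v3 terminology
                        project_domain_name='Default')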

Br,
György

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 2016 február 12, péntek 13:01
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [all] tenant vs. project
> 
> Ok... this is going to be one of those threads, but I wanted to try to get
> resolution here.
> 
> OpenStack is wildly inconsistent in it's use of tenant vs. project. As someone
> that wasn't here at the beginning, I'm not even sure which one we are
> supposed to be transitioning from -> to.
> 
> At a minimum I'd like to make all of devstack use 1 term, which is the term
> we're trying to get to. That will help move the needle.
> 
> However, again, I'm not sure which one that is supposed to be (comments in
> various places show movement in both directions). So people with deeper
> knowledge here, can you speak up as to which is the deprecated term and
> which is the term moving forward.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-12 Thread Ihar Hrachyshka

Sean Dague  wrote:


Ok... this is going to be one of those threads, but I wanted to try to
get resolution here.

OpenStack is wildly inconsistent in it's use of tenant vs. project. As
someone that wasn't here at the beginning, I'm not even sure which one
we are supposed to be transitioning from -> to.

At a minimum I'd like to make all of devstack use 1 term, which is the
term we're trying to get to. That will help move the needle.

However, again, I'm not sure which one that is supposed to be (comments
in various places show movement in both directions). So people with
deeper knowledge here, can you speak up as to which is the deprecated
term and which is the term moving forward.


Tenant is deprecated, and project is the new term to use.

Why am I confident? Neutron is currently looking into doing the naming  
transition.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] tenant vs. project

2016-02-12 Thread Sean Dague
Ok... this is going to be one of those threads, but I wanted to try to
get resolution here.

OpenStack is wildly inconsistent in its use of tenant vs. project. As
someone that wasn't here at the beginning, I'm not even sure which one
we are supposed to be transitioning from -> to.

At a minimum I'd like to make all of devstack use 1 term, which is the
term we're trying to get to. That will help move the needle.

However, again, I'm not sure which one that is supposed to be (comments
in various places show movement in both directions). So people with
deeper knowledge here, can you speak up as to which is the deprecated
term and which is the term moving forward.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-12 Thread Ihar Hrachyshka

Salvatore Orlando  wrote:

On 11 February 2016 at 20:17, John Belamaric   
wrote:



On Feb 11, 2016, at 12:04 PM, Armando M.  wrote:



On 11 February 2016 at 07:01, John Belamaric   
wrote:




It is only internal implementation changes.

That's not entirely true, is it? There are config variables to change  
and it opens up the possibility of a scenario that the operator may not  
care about.




If we were to remove the non-pluggable version altogether, then the  
default for ipam_driver would switch from None to internal. Therefore,  
there would be no config file changes needed.


I think this is correct.
Assuming the migration path to Neutron will include the data  
transformation from built-in to pluggable IPAM, do we just remove the old  
code and models?
On the other hand do you think it might make sense to give operators a  
chance to rollback - perhaps just in case some nasty bug pops up?


They can always revert to a previous release. And if we enable the new
implementation at the start of Newton, we’ll have enough time to fix bugs that
will pop up in the gate.


What's the team level of confidence in the robustness of the reference  
IPAM driver?


Salvatore




John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Multi release packages

2016-02-12 Thread Ilya Kutukov
On Fri, Feb 12, 2016 at 2:03 PM, Evgeniy L  wrote:

> Ilya,
>
> What do you mean by "default"? From the data format I see that we don't
> "override defaults" we specify the data for specific release, the way it
> was always done for deployment scripts and repositories.
>
>
We have package format <=4.0, where all files have fixed names and
locations. These are the defaults.

1. The maintenance team wants the ability to specify a folder instead of
individual plugin configuration files, so there should be a way to change
these paths to point to a folder or some other non-standard location. Yes, a
plugin developer could use whatever source structure they like and then
translate it into the file structure required by the FPB with scripts or a
build system, but the ability to split e.g. tasks files looks useful to me.

2. Igor wants to provide a multi-package, so, according to the spec, these
custom release-specific paths to other package files could be specified in
the release records.
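
To make the two options being debated a bit more concrete, here is a purely
illustrative sketch (expressed as Python dicts rather than metadata.yaml, and
with field names I made up for the example - they are NOT the ones defined in
the plugins-v5 spec under review):

  # Option A: release-specific paths live inside each release record.
  per_release = {
      'name': 'example_plugin',
      'releases': [
          {'os': 'ubuntu', 'version': 'mitaka-9.0',
           'deployment_tasks_path': 'mitaka/deployment_tasks.yaml'},
          {'os': 'ubuntu', 'version': 'liberty-8.0',
           'deployment_tasks_path': 'liberty/deployment_tasks.yaml'},
      ],
  }

  # Option B: a single override in the metadata root, shared by all releases.
  root_override = {
      'name': 'example_plugin',
      'deployment_tasks_path': 'deployment_tasks.yaml',
      'releases': [{'os': 'ubuntu', 'version': 'mitaka-9.0'},
                   {'os': 'ubuntu', 'version': 'liberty-8.0'}],
  }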

> I don't see any reasons to complicate format even more and to have some
> things which are related to the deployment specified in the root and some
> in specific release.
>
> There is consistent mechanism to specify such kind of things, lets just
> use it.
>
> Thanks,
>
> On Fri, Feb 12, 2016 at 1:24 PM, Ilya Kutukov 
> wrote:
>
>>
>>
>> On Fri, Feb 12, 2016 at 11:47 AM, Evgeniy L  wrote:
>>
>>> Ilya,
>>>
>>> >> My opinion that i've seen no example of multiple software of plugins
>>> versions shipped in one package or other form of bundle. Its not a common
>>> practice.
>>>
>>> With plugins we extend Fuel capabilities, which supports multiple
>>> operating system releases, so it's absolutely natural to extend multiple
>>> releases at the same time.
>>>
>>>
>> I just warning against idea when to merge content of several plugin
>> distributions in one bundle. But it's ok for plugin code to support
>> multiple releases one or another way.
>>
>>
>>
>>> >> Anyway we need to provide ability to override paths in manifest
>>> (metadata.yaml).
>>>
>>> Could you please provide more information on that? I'm not sure if I
>>> understand your solution.
>>>
>>>
>> https://review.openstack.org/#/c/271417/5/specs/9.0/plugins-v5.rst L150
>> and further.
>> We are overriding the default paths with special per-release path attributes.
>> The question is whether to use the per-release approach described in the spec,
>> or not to bother and specify these overrides only in the metadata.yaml root.
>>
>>
>>> Also I'm not sure what we are arguing about: if a plugin developer (or
>>> some company's certification process) requires a plugin per release,
>>> it's *very easy* and straightforward to do it even now *without any*
>>> changes.
>>>
>>> If a plugin developer wants to deliver a plugin for CentOS, Ubuntu, RH, etc.,
>>> let them do it, and again, when we get full support
>>> of multi-version environments, it's going to be even more crucial for UX to
>>> have a single plugin with multi-release support.
>>>
>>>
>> Thanks,
>>>
>>> On Thu, Feb 11, 2016 at 9:35 PM, Ilya Kutukov 
>>> wrote:
>>>
 My opinion is that I've seen no example of multiple software or plugin
 versions shipped in one package or other form of bundle. It's not a common
 practice.

 Anyway we need to provide the ability to override paths in the manifest
 (metadata.yaml).

 So plugin developers could use these approaches to provide multi-version
 support:

 * task logic (do plugin developers have access to the current release
 info?)
 * hooks in the pre-build process (see the sketch below). It's not a big deal
 to preprocess the source folder to build different packages with scripts
 that add or remove some files or replace some paths.
 * and, perhaps, logic anchors with YAQL or another DSL in task
 dependencies; if this functionality is added, it could in theory allow
 using or skipping some graph parts depending on the release.
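
 A rough, hypothetical sketch of the second option (this is not part of FPB;
 the directory names common/, tasks-mitaka/ and build/ are made up):

     # Pre-build hook sketch: assemble a per-release source tree before
     # calling the plugin builder. All paths here are illustrative.
     import os
     import shutil
     import sys

     def prepare(release):
         build = os.path.join('build', release)
         if os.path.exists(build):
             shutil.rmtree(build)
         shutil.copytree('common', build)               # shared plugin sources
         tasks = os.path.join('tasks-%s' % release, 'tasks.yaml')
         if os.path.exists(tasks):                      # release-specific tasks
             shutil.copy(tasks, os.path.join(build, 'tasks.yaml'))

     if __name__ == '__main__':
         prepare(sys.argv[1])                           # e.g. "mitaka"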

 I think it's already better than nothing and more flexible than any
 standardised approach.

 On Thu, Feb 11, 2016 at 6:31 PM, Simon Pasquier >>> > wrote:

> Hi,
>
> On Thu, Feb 11, 2016 at 11:46 AM, Igor Kalnitsky <
> ikalnit...@mirantis.com> wrote:
>
>> Hey folks,
>>
>> The original idea is to provide a way to build plugins that are
>> compatible with several releases. It makes sense to me, because it looks
>> awful if you need to maintain different branches for different Fuel
>> releases when there's no difference in the sources. In that case, each
>> bugfix to the deployment scripts requires:
>>
>> * backporting the bugfix to the other branches (N backports)
>> * building new packages for the supported releases (N builds)
>> * releasing new packages (N releases)
>>
>> It's somewhat... annoying.
>>
>
> A big +1 on Igor's remark. I've already expressed it in another thread
> but it should be expected that plugin developers want to support 2
> consecutive versions of Fuel for a given version of their plugin.
> That being s

[openstack-dev] Mitaka bug smash in Nuremberg (Germany)

2016-02-12 Thread Alberto Planas Dominguez
Hi,

We are preparing a bug smash session for Mitaka in Nuremberg (Germany).
This event is planned from March 7 to March 9, and is going to be
hosted by SUSE (basically that means tons of cookies).

You can subscribe yourself, and find more information here:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka-Nuremberg

Have fun!!
Alberto Planas

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham
Norton, HRB 21284 (AG Nürnberg)
Maxfeldstraße 5, 90409 Nürnberg, Germany



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-12 Thread Salvatore Orlando
On 11 February 2016 at 20:17, John Belamaric 
wrote:

>
> On Feb 11, 2016, at 12:04 PM, Armando M.  wrote:
>
>
>
> On 11 February 2016 at 07:01, John Belamaric 
> wrote:
>
>>
>>
>>
>> It is only internal implementation changes.
>>
>
> That's not entirely true, is it? There are config variables to change and
> it opens up the possibility of a scenario that the operator may not care
> about.
>
>
>
> If we were to remove the non-pluggable version altogether, then the
> default for ipam_driver would switch from None to internal. Therefore,
> there would be no config file changes needed.
>

I think this is correct.
Assuming the migration path to Neutron will include the data transformation
from built-in to pluggable IPAM, do we just remove the old code and models?
On the other hand, do you think it might make sense to give operators a
chance to roll back - perhaps just in case some nasty bug pops up?
What's the team level of confidence in the robustness of the reference IPAM
driver?

Salvatore



>
>
> John
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][neutron] publish and update Gerrit dashboard link automatically

2016-02-12 Thread Rossella Sblendido

Hi all,

it's sometimes hard for reviewers to filter out the reviews that are high
priority. In Neutron, in this mail thread [1], we had the idea to create a
script for that. The script is now available in the Neutron repository [2].
The script queries Launchpad and creates a file that can be used by
gerrit-dash-creator to display a dashboard listing patches that fix
critical/high bugs or that implement approved blueprints or feature
requests. This is how it looks today [3].
For it to be really useful, the dashboard link needs to be updated at least
once a day. Here I need your help: I'd like to publish the URL in a
public place and update it every day in an automated way. How can I do that?


thanks,

Rossella

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079816.html
[2] 
https://github.com/openstack/neutron/blob/master/tools/milestone-review-dash.py

[3] https://goo.gl/FSKTj9

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> Hello all,
> 
> tl;dr
> =
> 
> I have long thought that the OpenStack Summits have become too
> commercial and provide little value to the software engineers
> contributing to OpenStack.
> 
> I propose the following:
> 
> 1) Separate the design summits from the conferences
> 2) Hold only a single OpenStack conference per year
> 3) Return the design summit to being a low-key, low-cost working event
> 
> details
> ===
> 
> The design summits originally started out as working events. Developers
> got together in smallish rooms, arranged chairs in a fishbowl, and got
> to work planning and designing.
> 
> With the OpenStack Summit growing more and more marketing- and
> sales-focused, the contributors attending the design summit are often
> unfocused. The precious little time that developers have to actually
> work on the next release planning is often interrupted or cut short by
> the large numbers of "suits" and salespeople at the conference event,
> many of whom are peddling a product or pushing a corporate agenda.
> 
> Many contributors submit talks to speak at the conference part of an
> OpenStack Summit because their company says it's the only way they will
> pay for them to attend the design summit. This is, IMHO, a terrible
> thing. The design summit is a *working* event. Companies that contribute
> to OpenStack projects should send their engineers to working events
> because that is where work is done, not so that their engineer can go
> give a talk about some vendor's agenda-item or newfangled product.
> 
> Part of the reason that companies only send engineers who are giving a
> talk at the conference side is that the cost of attending the OpenStack
> Summit has become ludicrously expensive. Why have the events become so
> expensive? I can think of a few reasons:
> 
> a) They are held every six months. I know of no other community or open
> source project that holds *conference-type* events every six months.
> 
> b) They are held in extremely expensive hotels and conference centers
> because the number of attendees is so big.
> 
> c) Because the conferences have become sales and marketing-focused
> events, companies shell out hundreds of thousands of dollars for schwag,
> for rented event people, for food and beverage sponsorships, for keynote
> slots, for lavish and often ridiculous parties, and more. This cost
> means less money to send engineers to the design summit to do actual work.
> 
> I would love to see the OpenStack contributor community take back the
> design summit to its original format and purpose and decouple it from
> the OpenStack Summit's conference portion.
> 
> I believe the design summits should be organized by the OpenStack
> contributor community, not the OpenStack Foundation and its marketing
> and event planning staff. This will allow lower-cost venues to be chosen
> that meet the needs only of the small group of active contributors, not
> of huge masses of conference attendees. This will allow contributor
> companies to send *more* engineers to *more* design summits, which is
> something that really needs to happen if we are to grow our active
> contributor pool.
> 
> Once this decoupling occurs, I think that the OpenStack Summit should be
> renamed to the OpenStack Conference and Expo to better fit its purpose
> and focus. This Conference and Expo event really should be held once a
> year, in my opinion, and continue to be run by the OpenStack Foundation.
> 
> I, for one, would welcome events that have no conference check-in area,
> no evening parties with 2000 people, no keynote and
> powerpoint-as-a-service sessions, and no getting pulled into sales meetings.
> 
> OK, there, I said it.
> 
> Thoughts? Criticism? Support? Suggestions welcome.

Largely agree with the need to re-imagine summit, and perhaps cleaving
off the design summit is the best way forward on that.

But in any case, just a few counter-points to consider:

 * nostalgia for the days of yore will only get us so far, as *some* of
   the friction in the current design summit is due to its scale (read:
   success/popularity) as opposed to a wandering band of suits ruining
   everything. A decoupled design summit will still be a large event
   and will never recreate the intimate atmosphere of say the Bexar
   summit.

 * much of the problem with the lavish parties is IMO related to the
   *exclusivity* of certain shindigs, as opposed to devs socializing at 
   summit being inappropriate per se. In that vein, I think the cores
   party sends the wrong message and has run its course, while the TC
   dinner ... well, maybe Austin is the time to show some leadership
   on that? ;)

 * cost-wise we need to be careful also about quantifying the real cost
   deltas between a typical midcycle location (often hard to get to,
   with a limited choice of hotels) and a major city with direct routes
   and competition between airlines keeping airfares under control.
   Agreed let's scale down the glitz, but let'

Re: [openstack-dev] [swift] Plan to add Python 3 support to Swift

2016-02-12 Thread Victor Stinner

Hi,

On 08/02/2016 14:42, Victor Stinner wrote:
> https://review.openstack.org/#/c/237019/

Nice, this one was merged!



https://review.openstack.org/#/c/237027/
https://review.openstack.org/#/c/236998/


These two changes look to be blocked by encoding issues. Tim Burke,
Samuel Merritt and Christian Schwede have concerns about the exact 
behaviour on Python 3.


Let me explain how I'm working. I spend between 10 minutes and one day
porting a single unit test, and then I submit a patch (or several patches if
the change is big). I prefer to work incrementally: first fix the unit test,
and then enhance the code if needed.


Right now, the unit tests don't pass; there are still serious bytes vs.
Unicode issues. I would prefer to reach a first milestone where the unit
tests pass, and then discuss how to fix the complex encoding issues.


Change 236998: for the hmac patch, supporting an arbitrary hash prefix or
suffix requires implementing a new feature. Swift parses its
configuration files using ConfigParser, which doesn't support
arbitrary bytes. I understand the use case and I agree that something
should be done, but I suggest doing that later.
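
To illustrate the limitation (a toy example, not Swift code; the section and
option names follow swift.conf, everything else is made up):

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read_string(u"[swift-hash]\nswift_hash_path_suffix = changeme\n")

    suffix = cfg.get('swift-hash', 'swift_hash_path_suffix')  # str on Python 3
    key = suffix.encode('utf-8')   # fine for UTF-8 text like 'changeme'...
                                   # ...but a suffix made of arbitrary raw bytes
                                   # (e.g. b'\xff\xfe') has no str equivalent
                                   # that round-trips through the file as-is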


Change 237027: for the encoding of HTTP headers, it looks like Swift
doesn't respect the HTTP RFCs. HTTP requires headers to be encoded as
Latin-1, but Swift (server or client, sorry, I don't know which) encodes
headers as UTF-8. Something should be done here too, but it will require a
deep analysis, preparing a transition period, etc. This problem is complex
and cannot be fixed right now.
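
A toy illustration of the mismatch (not Swift code): a UTF-8 object name sent
as a header value and then decoded as Latin-1 by a strictly RFC-compliant
Python 3 stack:

    name = u'résumé.txt'
    wire = name.encode('utf-8')              # what goes on the wire today

    seen_by_app = wire.decode('latin-1')     # what a Latin-1 stack hands over:
                                             # 'rÃ©sumÃ©.txt' (mojibake)
    recovered = seen_by_app.encode('latin-1').decode('utf-8')
    assert recovered == name                 # recoverable, but only if the app
                                             # knows to re-encode and decode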


Just to be clear: Swift does not support Python 3 yet, so we are still free
to completely change the code for Python 3, until Swift is fully
compatible with Python 3.


Swift is not my main target, so I cannot spend too much time on it. If 
you expect me to solve all issues at once, I'm sorry, I cannot help :-/


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

