Re: [openstack-dev] [neutron] - are you attending the Boston summit?

2017-05-05 Thread Sukhdev Kapur
Hey Neutron Folks,

Following our past tradition, we should have Neutron dinner while we are
all in Boston.
Miguel has a few places in mind. I would propose that we nominate him as the
dinner organizer lieutenant.

Miguel, I hope you will take us to some cool place.

Thanks
-Sukhdev


On Thu, Apr 20, 2017 at 4:31 PM, Kevin Benton  wrote:

> Hi,
>
> If you are a Neutron developer attending the Boston summit, please add
> your name to the etherpad here so we can plan a Neutron social and easily
> coordinate in person meetings: https://etherpad.openstack.org/p/neutron-
> boston-summit-attendees
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-05 Thread Sukhdev Kapur
Rossella,

It has been nothing but a pleasure working with you. I wish you the best of
luck in your new endeavors.

regards..
-Sukhdev


On Thu, May 4, 2017 at 6:52 AM, Rossella Sblendido wrote:

> Hi all,
>
> I've moved to a new position recently, and despite my best intentions I
> was not able to devote as much time and energy to Neutron as I wanted.
> It's time for me to move on and to leave room for new core reviewers.
>
> It's been a great experience working with you all; I learned a lot, both
> on the technical side and on the human side.
> I won't disappear; you will see me around on IRC, etc. Don't hesitate to
> contact me if you have any questions or would like my feedback on something.
>
> ciao,
>
> Rossella
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-05 Thread Hongbin Lu
Hi all,

Thanks for your votes. Based on the feedback, I have adjusted the core team 
membership. Welcome Feng Shengqin to the core team.

https://review.openstack.org/#/admin/groups/1382,members

Best regards,
Hongbin

From: shubham sharma [mailto:shubham@gmail.com]
Sent: May-02-17 1:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Proposal a change of Zun core team

+1

Regards
Shubham

On Tue, May 2, 2017 at 6:33 AM, Qiming Teng wrote:
+1

Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][goals] Community goals for Queen

2017-05-05 Thread Sean Dague

On 05/05/2017 05:09 PM, Matt Riedemann wrote:


This time is tough given how front-loaded the sessions are at the Forum
on Monday. The Nova onboarding session overlaps with this, along with
some other sessions that impact or are related to Nova. It would have
been nice to do stuff like this toward the end of the week, but I
realize scheduling is a nightmare and not everyone can be pleased, and
that ship has sailed. So I don't think I can be there, but I assume
anything that comes out of it will be proposed to the governance repo or
recapped on the mailing list afterward so we can discuss it there.


Right, given that it's scheduled against Operating the VM/Baremetal Platform 
and Nova Onboarding, I'll just give feedback here.


A migration path off of paste would be a huge win. Paste Deploy is 
unmaintained (as noted in the etherpad), and its config living in etc means 
it's another piece of gratuitous state that makes upgrading harder than it 
really should be.
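
For anyone who hasn't stared at one recently, the state in question is an
api-paste.ini along these lines (an illustrative sketch only, not any
project's exact file; section and factory names approximate nova's):

  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /v2.1: openstack_compute_api_v21

  [pipeline:openstack_compute_api_v21]
  pipeline = cors request_id faultwrap authtoken keystonecontext osapi_compute_app_v21

  [app:osapi_compute_app_v21]
  paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory

Every middleware reordering or factory rename means hand-merging that file
on upgrade, which is exactly the gratuitous state problem.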


This is one of those goals that is going to require someone to commit to 
working out the migration path up front. But it would pay down a pretty good 
chunk of debt and make upgrades noticeably easier.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Joshua Harlow

Fox, Kevin M wrote:

Note, when I say OpenStack below, I'm talking about 
nova/glance/cinder/neutron/horizon/heat/octavia/designate. No offence intended 
to the other projects; just trying to constrain the conversation a bit... 
Those parts are fairly comparable to what k8s provides.

I think part of your point is valid: k8s isn't as feature-rich in some ways 
(networking, for example) and will get more complex in time. But it delivers a 
huge amount of functionality for significantly less effort than an OpenStack 
deployment with similar functionality today.

I think there are some major differences between the two projects that are 
really paying off for k8s over OpenStack right now. We can use those as 
learning opportunities moving forward, or the gap will continue to widen, as 
will the user migrations away from OpenStack. These are mostly architectural 
things.

Versions:
  * The real core of OpenStack is essentially version 1 + iterative changes.
  * k8s is essentially the third version of Borg. Plenty of room to ditch bad 
ideas/decisions.

That means OpenStack's architecture has essentially grown organically rather 
than being carefully thought out. Backwards compatibility has been a good 
goal, but it's so hard to upgrade that most places burn it down and stand up 
something new anyway, so it's a lot of work with a lot less payoff than you 
would think. Maybe it is time to consider OpenStack version 2...



Ya, just to add a thought (I agree with a lot of the rest; it's a very 
nice summary, btw, IMHO): afaik a lot (I can't really quantify the 
number, for obvious reasons) of people have been burned by upgrading or 
operating what is being produced, so there is a lot of trust to win back 
after such things failed to work from day zero. Trust is a really, really 
hard thing to get back once it is lost. I think it is great that people 
are trying, but what are we doing to regain that trust?


I honestly don't quite know. Anyone else?


I think OpenStack's greatest strength is its standardized APIs. Thus far we've 
been changing the APIs over time and keeping the implementations mostly the 
same... maybe we should consider keeping the APIs the same and switching some 
of the implementations out... It might take a while to get back to where we 
are now, but I suspect the overall solution would be much better now that we 
have so much experience from building the first one.

k8s and OpenStack do largely the same thing: take in a user request, schedule 
the resource onto some machines, and allow management/lifecycle of the thing.

Why then does k8s's scalability goal target 5,000 nodes while OpenStack really 
struggles with more than 300 nodes without a huge amount of extra work? I think 
it's architecture. OpenStack really abuses rabbit, does a lot with relational 
databases that is maybe better done elsewhere, and forces isolation between 
projects that is maybe not the best solution.

Part of it, I think, is combined services. They don't have separate services 
for cinder-api/nova-api/neutron-api/heat-api/etc., just kube-apiserver; same 
with the *-schedulers, just kube-scheduler. This means many fewer things for 
ops to manage and allows for faster communication (lower latency). In theory 
OpenStack could scale out much better with its finer-grained services, but I'm 
not sure that has ever shown true in practice.

Layered/eating own dogfood:
  * OpenStack assumes the operator will install "all the things".
  * K8s uses k8s to deploy lots of itself.

You use kubelet, with the same YAML file format normally used to deploy 
applications, to deploy etcd/kube-apiserver/kube-scheduler/kube-controller-manager 
and get a working base system.
You then use the base system to launch the SDN, ingress, service discovery, 
the UI, etc.

This means the learning required for debugging problems, performing upgrades, 
etc. is substantially less, because for the most part it's the same for k8s 
itself as it is for any other app running on it. The learning cost is way 
lower.

Versioning:
  * With OpenStack, upgrades are hard, and mismatched-version servers/agents 
are hard or impossible.
  * With k8s, the controllers are supported running 2 versions ahead of the 
clients.

It's hard to bolt this on after the fact, but it's also harder when you have 
multiple communication channels to do it over. Having to do it in HTTP, SQL, 
and rabbit messages makes it so much harder. Having only one place talk to the 
single datastore (etcd) makes that easier, as does having only one server that 
everything interacts with: kube-apiserver.

Some amount of distribution:
  * OpenStack components are generally expected to come from distros.
  * With k8s, core pieces like kube-apiserver are distributed prebuilt, ready 
to go in container images, if you choose to use them.

Minimal silos:
  * The various OpenStack projects are very siloed.
  * Most of the k8s subsystems currently are tightly integrated with each 
other and are managed by the same teams.

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Fox, Kevin M
Note, when I say OpenStack below, I'm talking about 
nova/glance/cinder/neutron/horizon/heat/octavia/designate. No offence intended 
to the other projects; just trying to constrain the conversation a bit... 
Those parts are fairly comparable to what k8s provides.

I think part of your point is valid: k8s isn't as feature-rich in some ways 
(networking, for example) and will get more complex in time. But it delivers a 
huge amount of functionality for significantly less effort than an OpenStack 
deployment with similar functionality today.

I think there are some major differences between the two projects that are 
really paying off for k8s over OpenStack right now. We can use those as 
learning opportunities moving forward, or the gap will continue to widen, as 
will the user migrations away from OpenStack. These are mostly architectural 
things.

Versions:
 * The real core of OpenStack is essentially version 1 + iterative changes.
 * k8s is essentially the third version of Borg. Plenty of room to ditch bad 
ideas/decisions.

That means OpenStack's architecture has essentially grown organically rather 
than being carefully thought out. Backwards compatibility has been a good 
goal, but it's so hard to upgrade that most places burn it down and stand up 
something new anyway, so it's a lot of work with a lot less payoff than you 
would think. Maybe it is time to consider OpenStack version 2...

I think OpenStack's greatest strength is its standardized APIs. Thus far we've 
been changing the APIs over time and keeping the implementations mostly the 
same... maybe we should consider keeping the APIs the same and switching some 
of the implementations out... It might take a while to get back to where we 
are now, but I suspect the overall solution would be much better now that we 
have so much experience from building the first one.

k8s and OpenStack do largely the same thing: take in a user request, schedule 
the resource onto some machines, and allow management/lifecycle of the thing.

Why then does k8s's scalability goal target 5,000 nodes while OpenStack really 
struggles with more than 300 nodes without a huge amount of extra work? I think 
it's architecture. OpenStack really abuses rabbit, does a lot with relational 
databases that is maybe better done elsewhere, and forces isolation between 
projects that is maybe not the best solution.

Part of it, I think, is combined services. They don't have separate services 
for cinder-api/nova-api/neutron-api/heat-api/etc., just kube-apiserver; same 
with the *-schedulers, just kube-scheduler. This means many fewer things for 
ops to manage and allows for faster communication (lower latency). In theory 
OpenStack could scale out much better with its finer-grained services, but I'm 
not sure that has ever shown true in practice.

Layered/eating own dogfood:
 * OpenStack assumes the operator will install "all the things".
 * K8s uses k8s to deploy lots of itself.

You use kubelet, with the same YAML file format normally used to deploy 
applications, to deploy etcd/kube-apiserver/kube-scheduler/kube-controller-manager 
and get a working base system.
You then use the base system to launch the SDN, ingress, service discovery, 
the UI, etc.
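
As a concrete illustration (a minimal sketch, not any distribution's blessed
manifest; the image tag and flags here are assumptions for the example), a
static pod file dropped in kubelet's manifest directory looks just like any
other pod spec:

  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-apiserver
    namespace: kube-system
  spec:
    hostNetwork: true
    containers:
    - name: kube-apiserver
      # assumed image/tag, purely illustrative
      image: gcr.io/google_containers/kube-apiserver:v1.6.2
      command:
      - kube-apiserver
      - --etcd-servers=http://127.0.0.1:2379
      - --service-cluster-ip-range=10.96.0.0/12

kubelet watches that directory and keeps the pod running, so the control
plane gets managed with the same machinery as any workload.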

This means the learning required for debugging problems, performing upgrades, 
etc. is substantially less, because for the most part it's the same for k8s 
itself as it is for any other app running on it. The learning cost is way 
lower.

Versioning:
 * With OpenStack, upgrades are hard, and mismatched-version servers/agents 
are hard or impossible.
 * With k8s, the controllers are supported running 2 versions ahead of the 
clients.

It's hard to bolt this on after the fact, but it's also harder when you have 
multiple communication channels to do it over. Having to do it in HTTP, SQL, 
and rabbit messages makes it so much harder. Having only one place talk to the 
single datastore (etcd) makes that easier, as does having only one server that 
everything interacts with: kube-apiserver.

Some amount of distribution:
 * OpenStack components are generally expected to come from distros.
 * With k8s, core pieces like kube-apiserver are distributed prebuilt, ready 
to go in container images, if you choose to use them.

Minimal silos:
 * The various OpenStack projects are very siloed.
 * Most of the k8s subsystems currently are tightly integrated with each 
other and are managed by the same teams.

This has led to architectural decisions that take more of the bigger picture 
into account. Under OpenStack's current model, each project does its own 
thing without much design for how it all comes together in the end. A lot 
of problems spring from this. I'm sure k8s will get more and more siloed as it 
grows and matures. But right now, the lack of silos really is speeding its 
development.

Anyway, I think I'm done rambling for now... I do hope we can reflect on some 
of the differences between the projects and see if we can figure out how to 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Chris Friesen

On 05/05/2017 02:34 PM, Matt Riedemann wrote:


I don't have a nice way to wrap this up. I'm depressed and just want to go
outside. I don't expect pleasant replies to my rant, so I probably won't reply
again after this anyway since this is always a never-ending back and forth which
just leaves everyone bitter.


For what it's worth, I think that there are always going to be disagreements of 
this sort.


The upstream developers want to see things get done upstream.  Makes sense.

The downstream teams that add stuff on top of vanilla OpenStack usually do it 
because customers want functionality, not just for fun or to cause vendor 
lock-in.  And sometimes those things that get added aren't easily upstreamable, 
either because the use-case is too specific, or because the implementation 
doesn't match upstream standards, or because it's too tightly coupled to other 
code outside of OpenStack.  Sometimes there just aren't the resources available 
to push everything upstream.  This leads to downstream distros carrying private 
patches, which naturally makes them want less upstream churn.


I don't think there's any magic answer to this other than to encourage 
downstream folks to continue to try to push things upstream, with the 
expectation that not everything will make it.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Chris Friesen

On 05/05/2017 02:04 PM, John Griffith wrote:



On Fri, May 5, 2017 at 11:24 AM, Chris Friesen wrote:

On 05/05/2017 10:48 AM, Chris Dent wrote:

Would it be accurate to say, then, that from your perspective the
tendency of OpenStack to adopt new projects willy-nilly contributes
to the sense of features winning out over deployment, configuration
and usability issues?


Personally I don't care about the new projects...if I'm not using them I can
ignore them, and if I am using them then I'll pay attention to them.

But within existing established projects there are some odd gaps.

For example, nova hasn't implemented cold migration or resize (or live migration) of
an instance with LVM local storage if you're using libvirt.

Image properties get validated, but not flavor extra-specs or instance 
metadata.

Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
for anything stressful it falls over.

Oh really?
I'd love some detail on this.  What falls over?


It's been a while since I looked at it, but the main issue was that with LIO as 
the iSCSI server there is no automatic traffic shaping/QoS between guests, or 
between guests and the host.  (There's no iSCSI server process to assign to a 
cgroup, for example.)


The throttling in IOPS/Bps is better than nothing, but doesn't really help when 
you don't necessarily know what your total IOPS/bandwidth actually is or how 
many volumes could get created.
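
(For reference, the throttling I mean is cinder's QoS specs attached to a
volume type; a rough sketch of the front-end flavor, with made-up names and
numbers:

  cinder qos-create guest-limits consumer=front-end \
      read_iops_sec=500 write_iops_sec=250 total_bytes_sec=52428800
  cinder type-create limited-lvm
  cinder qos-associate <qos-spec-id> <volume-type-id>

That caps each volume individually, but nothing arbitrates the host's
aggregate capacity across however many volumes get created.)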


So you have one or more guests hammering on the disk as fast as they can, 
combined with disks on the cinder server that maybe aren't as fast as they 
should be, and that ends up slowing down all the other guests.  And if the host 
is using the same physical disks for things like glance downloads or image 
conversion, then a badly-behaved guest can cause performance issues on the host 
as well due to IO congestion.  And if they fill up the host caches they can even 
affect writes to other unrelated devices.


So yes, it wasn't the ideal hardware for the purpose, and there are some tuning 
knobs, but in an ideal world we'd be able to reserve some amount/percentage of 
bandwidth/IOPS for the host and have the rest shared equally between all active 
iSCSI sessions (or unequally via a share allocation if desired).


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Michael Glasgow

On 5/4/2017 10:08 AM, Alex Schultz wrote:

On Thu, May 4, 2017 at 5:32 AM, Chris Dent  wrote:

I know you're not speaking as the voice of your employer when making
this message, so this is not directed at you, but from what I can
tell Oracle's presence upstream (both reviews and commits) in Ocata
and thus far in Pike has not been huge.


Probably because they are still on Kilo.


I don't want to stray off topic, but it seems worth clarifying that 
Oracle OpenStack for Oracle Linux 3.0 is based on Mitaka.


http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html

Our messaging tends to get confused because we (Oracle) publish separate 
OpenStack distributions for Solaris and Linux.  They are distinct 
products built and supported by different teams on their own schedules. 
It would be less confusing to the community and to customers if those 
efforts were coordinated or even consolidated, but so far that has not 
been possible.


--
Michael Glasgow

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Michał Jastrzębski
You are talking about OpenStack being hard because it's complex, and at
the same time you're talking about using "non-linux-mainstream" tools
around it. It's either flexibility or ease, guys... Prescriptive is easy;
flexible is hard. When you want to learn about Linux you don't start by
compiling Gentoo; you install Ubuntu, click "next" until it's finished,
and just trust that it's working. After some time you grow your Linux
skills and customize it to your needs.

We are talking about software that runs physics experiments, HPC-like
clusters, mobile phone communication, and WordPress installs across
thousands of companies. It won't ever be simple and prescriptive. The best
we can do is, as you said, hide its complexity and allow a smoother entry
until someone learns that complexity. No tooling will ever replace an
experienced operator, but tooling can make it easier to gain that
experience.

You mentioned Kubernetes as a good example. Kubernetes is still a
relatively young project and doesn't support some of the things that you
yourself said you need. As it grows and as options become available, it
too will become more and more complex.

On 5 May 2017 at 14:52, Octave J. Orgeron  wrote:
> +1
>
> On 5/5/2017 3:46 PM, Alex Schultz wrote:
>>
>>
>>> Sooo... I always get a little triggered when I hear that OpenStack is
>>> hard to deploy. We've spent the last few years fixing it, and I think it's
>>> pretty well fixed now. Even as we speak, I'm deploying 500+ VMs on an
>>> OpenStack cluster I deployed last week, within one day.
>>>
>> No, you've written a complex tool (that you understand) to do it.
>> That's not the same as someone who is not familiar with OpenStack trying
>> to deploy OpenStack. I too could quickly deploy a decently scaled
>> infrastructure with some of the tools (fuel/tripleo/puppet/etc), but
>> the reality is that each one of these tools is inherently hiding the
>> complexities of OpenStack.  Each (including yours) has its own
>> flavor of assumptions baked in to make it work.  That is also
>> confusing for the end user who tries to switch between them and only
>> gets some of the flexibility of each but then runs face first into
>> each tool's shortcomings.  Rather than assuming a tool has solved it
>> (which it hasn't, or we'd all be using the same one by now), how about
>> we take some time to understand why we've had to write these tools in
>> the first place and see if there's something we can improve on?  Learning
>> the tool to deploy OpenStack is not the same as deploying OpenStack,
>> managing it, and turning it around for the true cloud end user to
>> consume.
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Octave J. Orgeron

+1

On 5/5/2017 3:46 PM, Alex Schultz wrote:



Sooo... I always get a little triggered when I hear that OpenStack is
hard to deploy. We've spent the last few years fixing it, and I think it's
pretty well fixed now. Even as we speak, I'm deploying 500+ VMs on an
OpenStack cluster I deployed last week, within one day.


No, you've written a complex tool (that you understand) to do it.
That's not the same as someone who is not familiar with OpenStack trying
to deploy OpenStack. I too could quickly deploy a decently scaled
infrastructure with some of the tools (fuel/tripleo/puppet/etc), but
the reality is that each one of these tools is inherently hiding the
complexities of OpenStack.  Each (including yours) has its own
flavor of assumptions baked in to make it work.  That is also
confusing for the end user who tries to switch between them and only
gets some of the flexibility of each but then runs face first into
each tool's shortcomings.  Rather than assuming a tool has solved it
(which it hasn't, or we'd all be using the same one by now), how about
we take some time to understand why we've had to write these tools in
the first place and see if there's something we can improve on?  Learning
the tool to deploy OpenStack is not the same as deploying OpenStack,
managing it, and turning it around for the true cloud end user to
consume.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Alex Schultz
On Fri, May 5, 2017 at 1:02 PM, Michał Jastrzębski  wrote:
> On 5 May 2017 at 11:33, Alex Schultz  wrote:
>> On Fri, May 5, 2017 at 10:48 AM, Chris Dent  wrote:
>>> On Fri, 5 May 2017, Alex Schultz wrote:
>>>
 You have to understand that as I'm mainly dealing with having to
 actually deploy/configure the software, when I see 'new project X'
 that does 'cool new things Y, Z' it makes me cringe.  Because it's
 just added complexity for new features that who knows if they are
 actually going to be consumed by a majority of end users.  I see a
 lot of new features for edge cases while the core functionality
 (insert the most used project configuration) still has awkward
 deployment, configuration, and usability issues. But those aren't
 exciting, so people don't want to work on them...
>>>
>>>
>>> Would it be accurate to say, then, that from your perspective the
>>> tendency of OpenStack to adopt new projects willy-nilly contributes
>>> to the sense of features winning out over deployment, configuration
>>> and usability issues?
>>>
>>
>> It does not help.
>>
>>> I think a lot of project contributors may not really see it that way,
>>> because they think of their own project, and within that project there
>>> is a constant effort to clean things up, address bugs and tech debt,
>>> and try to slowly but surely evolve to some level of maturity. In
>>> their eyes those new projects are something else, separate from their
>>> project.
>>>
>>> From the outside, however, it is all OpenStack and maybe it looks
>>> like there's loads of diffuse attention.
>>>
>>> If that's the case, then a question is whether or not the people who
>>> are spending time on those new projects would be spending time on
>>> the older projects instead if the new projects didn't exist. I don't
>>> know, but seems unlikely.
>>>
>>
>> So there's a trade-off, and I don't think we can just restrict entry
>> because some projects aren't user friendly.  I see it as a common
>> issue across all projects. Some are better than others, but what I
>> would like to see is the bar for usability raised within the OpenStack
>> community such that the end user (and deployer/operator) are all taken
>> into consideration.  For me usability also goes with adoption. The
>> easier it is to consume, the easier it would be to adopt something.
>> If you take a look at what is required to configure OpenStack for a
>> basic deployment, it is not easy to consume.  If you were to compare
>> the basic getting started/install guide for Kubernetes[0] vs
>> OpenStack[1], you can see what I mean about complexity.  I think just
>> the install guide for neutron on a controller node[2] is about the
>> same length as the Kubernetes guide.  And we think this is ok?  We
>> should keep adding additional installation/management complexity for
>> each project?  You could argue that OpenStack has more features or is
>> more flexible, so it's apples to oranges, but I don't think it has to
>> be if we worked on better patterns for configuration/deployment/upgrades.
>> It feels like OpenStack is the thing that you should pay professional
>> services to deploy rather than do it yourself.  And I think that's a
>> shame.
>
> Sooo... I always get a little triggered when I hear that OpenStack is
> hard to deploy. We've spent the last few years fixing it, and I think it's
> pretty well fixed now. Even as we speak, I'm deploying 500+ VMs on an
> OpenStack cluster I deployed last week, within one day.
>

No, you've written a complex tool (that you understand) to do it.
That's not the same as someone who is not familiar with OpenStack trying
to deploy OpenStack. I too could quickly deploy a decently scaled
infrastructure with some of the tools (fuel/tripleo/puppet/etc), but
the reality is that each one of these tools is inherently hiding the
complexities of OpenStack.  Each (including yours) has its own
flavor of assumptions baked in to make it work.  That is also
confusing for the end user who tries to switch between them and only
gets some of the flexibility of each but then runs face first into
each tool's shortcomings.  Rather than assuming a tool has solved it
(which it hasn't, or we'd all be using the same one by now), how about
we take some time to understand why we've had to write these tools in
the first place and see if there's something we can improve on?  Learning
the tool to deploy OpenStack is not the same as deploying OpenStack,
managing it, and turning it around for the true cloud end user to
consume.

> These problems aren't a factor of OpenStack growing too fast; it's the
> tooling that people are using. Granted, it took some time for us to
> build these tools, but we did build them. One of the reasons we could
> build them is that OpenStack, after being turned into the Big Tent, allowed
> us (Kolla) to quickly join the "main herd" of OpenStack and innovate in
> our own way. If we'd put lots of barriers like 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Fox, Kevin M
+1.

From: Octave J. Orgeron [octave.orge...@oracle.com]
Sent: Friday, May 05, 2017 1:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too 
slow at the same time

Thank you Alex for the points you made below. These are some of the big
issues that I see all OpenStack customers and operators struggling with.
Regardless of the tools that the community or vendors put on the table,
OpenStack is still a very complicated piece of technology to deploy and
manage. Even if we look at trippleo, kolla, puppet, Fuel, JuJu, etc.
they address very specific use-cases. Most will handle deploying a basic
HA control plane and configure things that are suited for a dev/test
setup, demo, or POC type of use-case. But do they handle a production
ready deployment for a bank, telco, retailer, or government? Is
everything configured to handle HA, scale, security, auditing,
multi-tenancy, etc. with all of the knobs and options set the way the
customer needs? Do we end up with ceilometer, aodh, gnocchi, ELK, etc.
all configured optimally? How about networking and security? How about
upgrades? Can you expect people to hit an upgrade button and not have
anything break? Let's be realistic here.. these tools are all good
starting points, but you're going to have to get your hands dirty at
some level to configure OpenStack to fit your actual business and
technical requirements. The life-cycle management of OpenStack is not
easy and requires a lot of resources. Sure vendors can try to fill the
void, but everything they build is on quicksand.

This is why you see vendors and major consulting houses jumping all over
this to fill the void with professional services. There are plenty of
big shops that are now looking at ways to outsource the management of
OpenStack because of how complex it is to manage and maintain. BTW, this
is the biggest market sign that a product is too complicated.

From a vendor's perspective, it's incredibly difficult to keep up with
the releases, because once you get your automation tooling and any extra
value-added components integrated with a release, it's more than likely
already behind or EOL. Plus there won't be enough soak time with
customers to adopt it! Not only that, but by the time you make something
work for customers, there is a very high chance that the upstream
version of those components will have changed enough that you'll have to
either patch, re-architect, or slash and burn what you've already
delivered to your customers. Not to mention it may be impossible to
upgrade your customers in a seamless or automated fashion. This is why
customers will stick to an older release: the upgrade path is too
painful.

If you consider the realities of all of the above, what ends up
happening? Enterprise customers will end up sticking to an older release
or paying someone else to deal with the complexity. This places vendors
at risk, because there isn't a clear revenue model that can sustain the
engineering overhead required for maintaining and developing their
distribution. If customers aren't buying more or upgrading, then who's
keeping the lights on? You can only charge so much for support. So they
end up being bought or going under, which leaves customers with fewer
choices and more risk.

So ultimately, the right way to fix this is to have an LTS branch and
for there to be better governance around how new features are
introduced. There needs to be more customer- and vendor-driven
involvement in solidifying a core set of features that everyone can rely
on working consistently across releases and upgrades. When new features
or enhancements come along, there should be more emphasis on usability,
sustainability, and upgradeability, so that customers and vendors are not
stuck.

If we had an LTS branch and a solid governance model, both vendors and
customers would benefit greatly. It would help buffer the chaos of the
bleeding edge away from customers and allow vendors to deliver and
support them properly.

Octave

On 5/5/2017 12:33 PM, Alex Schultz wrote:
> So there's a trade-off and I don't think we can just restrict entry
> because some projects aren't user friendly. I see it as a common issue
> across all projects. Some are better than others, but what I would
> like to see is the bar for usability raised within the OpenStack
> community such that the end user (and deployer/operator) are all taken
> into consideration. For me usability also goes with adoption. The
> easier it is to consume, the easier it would be to adopt something. If
> you take a look at what is required to configure OpenStack for a basic
> deployment, it is not easy to consume. If you were to compare the
> basic getting started/install guide for Kubernetes[0] vs OpenStack[1],
> you can see what I mean about complexity. I think just the install
> guide for neutron on a controller node[2] is about the same length as
> the Kubernetes guide.

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean McGinnis
On Fri, May 05, 2017 at 04:17:29PM -0400, Jonathan Proulx wrote:
> On Fri, May 05, 2017 at 02:04:43PM -0600, John Griffith wrote:
> :On Fri, May 5, 2017 at 11:24 AM, Chris Friesen wrote:
> :> Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
> :> for anything stressful it falls over.
> :>
> :
> :Oh really?
> :
> :I'd love some detail on this.  What falls over?
> 
> I'm a bit out of date on this personally, but we ditched all iSCSI a
> few years ago because we found it generally flaky on Linux. We were
> mostly using EqualLogic SANs, both for OpenStack and standalone servers,
> but saw the same issues with some other targets as well.
> 
> So I wonder if this is a Cinder issue or just a Linux issue.
> 
> What we saw fall over was that any slight network bump would permanently
> drop the connection to backing storage, requiring a reset.  But as I say,
> this was decidedly not a Cinder issue.
> 
> -Jon
> 

Yeah, Cinder doesn't get in the data path, so it wouldn't be Cinder itself.

LVM will configure the iSCSI target, but then that is all in the Linux and
distro package realm.

Same for the iSCSI initiator. But since we have a lot of external storage
besides LVM that uses the Linux iSCSI initiator, that makes me want to
point fingers at it being an environment issue with the network or something
interfering with the traffic.

At any rate, if you do see issues with the way Cinder is configuring any of
that, please do let us know.
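
When digging into this sort of thing, the usual first step is to compare
what the initiator and the target actually see. Assuming open-iscsi and LIO,
something like:

  # on the compute/initiator side
  iscsiadm -m session -P 3

  # on the cinder-volume/LVM host
  targetcli ls

If the session output shows recovery timeouts after a network blip, that
points at initiator/network tuning (e.g. node.session.timeo.replacement_timeout
in iscsid.conf) rather than anything Cinder did.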

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Octave J. Orgeron

Hi Matt,

And this is actually part of the problem for vendors. Many Oracle 
engineers, including myself, have tried to get features and fixes pushed 
upstream. While that may sound easy, the reality is that it isn't! In 
many cases, it takes months for us to get something in or we get shot 
down altogether. Here are the big issues we run into:


 * If it's in support of Oracle-specific technologies such as Solaris,
   ZFS, MySQL Cluster, etc., we are often shunned because it's not
   Linux or "mainstream" enough. A great example is how our Nova
   drivers for Solaris Zones, Kernel Zones, and LDoms are turned away.
   So we have to spend extra cycles maintaining our patches because
   they are kept out of the gate.
 * If we release an OpenStack distribution and, a year later, a major
   CVE security bug comes along, we will patch it. But is there a way
   for us to push those changes back in? No, because the branch for
   that release is EOL'd and burned. So we have to maintain our own
   copy of the repos so we have something to work against.
 * Taking a release and productizing it takes more than just pulling
   the git repo and building packages. It requires integrated testing
   on a given OS distribution, hardware, and infrastructure. We have to
   test it against our own products and handle upgrades from the
   previous product release. We have to make sure it works for
   customers. Then we have to spin up our distribution, documentation, etc.

Lastly, just throwing resources at this isn't going to solve the 
cultural or logistics problems. Everyone has to work together and Oracle 
will continue to try and work with the community. If other vendors, 
customers, and operators are willing to work together to build an LTS 
branch and the governance around it, then Oracle will support that 
effort. But to go it alone I think is risky for any single individual or 
vendor. It's pretty obvious that over the past year, a lot of vendors 
that were ponying up efforts have had to pull the plug on their 
investments. A lot of the issues that I've outlined affect the 
bottom line for OpenStack vendors. This is not about which vendor does 
more or less or who has the bigger budget to spend. It's about making it 
easier for vendors to support and for customers to consume.


Octave

On 5/5/2017 2:40 PM, Matt Riedemann wrote:


If you're spending exorbitant amounts of time patching in your forks 
to keep up with the upstream code, then you're doing the wrong thing. 
Upstream your changes, or work against the APIs, or try to get the 
APIs you need upstream to build on for your downstream features. 
Otherwise this is all just burden you've put on yourself and I can't 
justify an LTS support model because it might make someone's 
downstream fork strategy easier to manage. As noted earlier, I don't 
see Oracle developers leading the way upstream. If you want to see 
major changes, then contribute those resources, get involved and make 
a lasting effect.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Alex Schultz
On Fri, May 5, 2017 at 1:49 PM, Sean Dague  wrote:
> On 05/05/2017 12:36 PM, Alex Schultz wrote:
>> On Fri, May 5, 2017 at 6:16 AM, Sean Dague  wrote:
>>> On 05/04/2017 11:08 AM, Alex Schultz wrote:
>>>
>>> The general statement of "people care more about features than
>>> usability/stability" gets thrown around a lot. And gets lots of head
>>> nodding. But rarely comes with specifics.
>>>
>>> Can we be specific about what feature work is outpacing the consumers
>>> that don't help with usability/stability?
>>>
>>
>> The cell v2 initial implementation was neither usable nor stable (for
>> my definition of stable). Yea, you could say 'but it's a work in
>> progress' and I would say, why is it required for the end user then?
>> If I wanted to I could probably go back and go through every project
>> and point out when a feature was added yet we still have a pile of
>> outstanding issues.  As Chris Friesen pointed out in his reply email,
>> there are specifics out there if you go looking.  You have
>> to understand that as I'm mainly dealing with having to actually
>> deploy/configure the software, when I see 'new project X' that does
>> 'cool new things Y, Z' it makes me cringe.  Because it's just added
>> complexity for new features that who knows if they are actually going
>> to be consumed by a majority of end users.  I see a lot of new
>> features for edge cases while the core functionality (insert the most
>> used project configuration) still has awkward deployment,
>> configuration, and usability issues. But those aren't exciting, so
>> people don't want to work on them...
>
> Chris pointed out bugs in partially implemented features that have not
> been completed. Those are things that haven't gotten done.
>
> Calling cells v2 an unneeded feature seems kind of a stretch. There was
> so much operator push to get cells v1 merged even though it was wildly
> incomplete, and full of races and very weird unfixable bugs. It was
> merged on user request because many large operators were patching it in,
> out of tree. And cells v2 was exactly a usability and stability path out
> of cells v1.
>

I didn't say it was an unneeded feature. I said the "initial
implementation was not usable or stable": not usable due to missing
commands for managing it (operator needs), and not stable because things
had to change when people actually tried to use them (tooling/workflow
changes).  I understand why we need it (although storing credentials
in a database table makes me cry); it's just that I wish it had been more
baked before being made a requirement outside of devstack.

> While there may have been bumps in the road getting there, calling it a
> feature unrelated to stability and usability doesn't seem right to me.
> That's more of an "I wish the following bits were done differently."
>

The whole point was that, yes, I wish it had been released differently,
such that it didn't end up being a multi-month affair of exposing bugs
and missing tooling when it was switched to mandatory.
That to me was the usability/stability problem.  A feature was made
mandatory without a proper understanding of the implications for the end
user.
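
(For anyone following along, the operator-facing surface I mean is the
nova-manage cell_v2 command family; an Ocata-era flow looked roughly like
this sketch, though exact flags varied and the URLs are placeholders:

  nova-manage cell_v2 map_cell0 --database_connection <cell0 db url>
  nova-manage cell_v2 simple_cell_setup --transport-url <rabbit url>
  nova-manage cell_v2 discover_hosts
  nova-manage cell_v2 map_instances --cell_uuid <cell uuid>

The pain was that pieces of that flow landed at different times, so
deployment tooling had to chase a moving target.)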

Thanks,
-Alex

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][goals] Community goals for Queen

2017-05-05 Thread Matt Riedemann

On 5/5/2017 10:47 AM, Mike Perez wrote:

Hello everyone,

May 11th at 11:00-12:30 at the forum we will be discussing our community wide
goals for the Queen release [1]!

We do OpenStack-wide goals to "achieve visible common changes, push for basic
levels of consistency and user experience, and efficiently improve certain
areas where technical debt payments have become too high – across all OpenStack
projects" [2].

Our goals backlog: [3]

* New goals are highly welcome.
* Each goal should be achievable in one cycle; if not, we need to break the goal
  into smaller connected goals.
* Some goals already have a team (e.g., Python 3) but some don't.  Maybe we
  could draw up a list of people able to step up and volunteer to help on
  those.
* Some goals might require documentation on how to achieve them.

Thanks to Emilien for leading our community wide goals for Pike [4]

* There was an overwhelming amount of support for beginning Python 3.5 support
  [5] by having our unit tests voting.
* Unfortunately, getting our API endpoints to support WSGI still needs some
  work [6].

Lets start adding/updating what's in our backlog [3] to prepare for the forum
next week!

[1] - https://governance.openstack.org/tc/goals/pike/index.html
[2] - https://governance.openstack.org/tc/goals/pike/index.html
[3] - https://etherpad.openstack.org/p/community-goals
[4] - https://governance.openstack.org/tc/goals/pike/index.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



This time is tough given how front-loaded the sessions are at the Forum 
on Monday. The Nova onboarding session overlaps with this, along with 
some other sessions that impact or are related to Nova. It would have 
been nice to do stuff like this toward the end of the week, but I 
realize scheduling is a nightmare and not everyone can be pleased, and 
that ship has sailed. So I don't think I can be there, but I assume 
anything that comes out of it will be proposed to the governance repo or 
recapped on the mailing list afterward so we can discuss it there.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Dean Troyer
On Fri, May 5, 2017 at 3:39 PM, Davanum Srinivas  wrote:
> "If we had an LTS branch and a solid governance model" - The way to
> make it happen is for everyone to show up in the Stable team, do the
> work that is needed to define and set things up, etc.
>
> Who is up for it? Please show up in the next weekly meeting and get
> active in the work needed to take care of the current stable work and
> the work needed to set up an LTS.

Thank you dims, this gets said every time we have this conversation,
and at best new interest wanes after a month; at worst, nothing
changes.

I'm going to adopt sdague's stance here and ask for specifics.  Octave
(quoted in dims' first line above) suggests LTS + governance is the
solution.  Make specific proposals that include actionable tasks.
Include a clear problem statement so we know _which_ problem is being
solved.

The number one task for LTS is dims' suggestion above: show up next
week.  Number two is to sustain, for six months, the level of interest
that will be expressed next week, and to prove that the resources are
available, fully trained, and operational before Newton is EOL.

I can't judge if starting next week is soon enough to extend Newton's
EOL date, but starting in August most certainly is too late. (Newton
is our first realistic opportunity based on the LTS status of the
current testing configurations.)

Oh, and the number zero task is to make sure our stable team PTL can
stay for the party.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][goals] Community goals for Queen

2017-05-05 Thread Emilien Macchi
On Fri, May 5, 2017 at 11:47 AM, Mike Perez  wrote:
> Hello everyone,
>
> May 11th at 11:00-12:30 at the forum we will be discussing our community wide
> goals for the Queen release [1]!
>
> We do OpenStack-wide goals to "achieve visible common changes, push for basic
> levels of consistency and user experience, and efficiently improve certain
> areas where technical debt payments have become too high – across all 
> OpenStack
> projects" [2].
>
> Our goals backlog: [3]
>
> * New goals are highly welcome.
> * Each goal should be achievable in one cycle; if not, we need to break the goal
>   into smaller connected goals.
> * Some goals already have a team (e.g., Python 3) but some don't.  Maybe we
>   could draw up a list of people able to step up and volunteer to help on
>   those.
> * Some goals might require documentation on how to achieve them.
>
> Thanks to Emilien for leading our community wide goals for Pike [4]

Count on me to be at the session and to continue bringing my little contribution.

> * There was an overwhelming amount of support for beginning Python 3.5 support
>   [5] by having our unit tests voting.
> * Unfortunately, getting our API endpoints to support WSGI still needs some
>   work [6].
>
> Lets start adding/updating what's in our backlog [3] to prepare for the forum
> next week!
>
> [1] - https://governance.openstack.org/tc/goals/pike/index.html
> [2] - https://governance.openstack.org/tc/goals/pike/index.html
> [3] - https://etherpad.openstack.org/p/community-goals
> [4] - https://governance.openstack.org/tc/goals/pike/index.html
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Matt Riedemann

On 5/5/2017 3:23 PM, Octave J. Orgeron wrote:

From a vendor's perspective, it's incredibly difficult to keep up with
the releases, because once you get your automation tooling and any extra
value-added components integrated with a release, it's more than likely
already behind or EOL. Plus there won't be enough soak time with
customers to adopt it! Not only that, but by the time you make something
work for customers, there is a very high chance that the upstream
version of those components will have changed enough that you'll have to
either patch, re-architect, or slash and burn what you've already
delivered to your customers. Not to mention it may be impossible to
upgrade your customers in a seamless or automated fashion. This is why
customers will stick to an older release: the upgrade path is too
painful.


If you're spending exorbitant amounts of time patching in your forks to 
keep up with the upstream code, then you're doing the wrong thing. 
Upstream your changes, or work against the APIs, or try to get the APIs 
you need upstream to build on for your downstream features. Otherwise 
this is all just burden you've put on yourself and I can't justify an 
LTS support model because it might make someone's downstream fork 
strategy easier to manage. As noted earlier, I don't see Oracle 
developers leading the way upstream. If you want to see major changes, 
then contribute those resources, get involved and make a lasting effect.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Davanum Srinivas
Octave, Folks,

"If we had an LTS branch and a solid governance model" - The way to
make it happen is for everyone to show up in the Stable team, do the
work that is needed to define and set things up, etc.

Who is up for it? Please show up in the next weekly meeting and get
active in the work needed to take care of the current stable work and
the work needed to set up an LTS.

Thanks,
Dims


On Fri, May 5, 2017 at 4:23 PM, Octave J. Orgeron wrote:
> Thank you Alex for the points you made below. These are some of the big
> issues that I see all OpenStack customers and operators struggling with.
> Regardless of the tools that the community or vendors put on the table,
> OpenStack is still a very complicated piece of technology to deploy and
> manage. Even if we look at TripleO, Kolla, Puppet, Fuel, Juju, etc., they
> address very specific use-cases. Most will handle deploying a basic HA
> control plane and configure things that are suited for a dev/test setup,
> demo, or POC type of use-case. But do they handle a production ready
> deployment for a bank, telco, retailer, or government? Is everything
> configured to handle HA, scale, security, auditing, multi-tenancy, etc. with
> all of the knobs and options set the way the customer needs? Do we end up
> with ceilometer, aodh, gnocchi, ELK, etc. all configured optimally? How
> about networking and security? How about upgrades? Can you expect people to
> hit an upgrade button and not have anything break? Let's be realistic here..
> these tools are all good starting points, but you're going to have to get
> your hands dirty at some level to configure OpenStack to fit your actual
> business and technical requirements. The life-cycle management of OpenStack
> is not easy and requires a lot of resources. Sure vendors can try to fill
> the void, but everything they build is on quicksand.
>
> This is why you see vendors and major consulting houses jumping all over
> this to fill the void with professional services. There are plenty of big
> shops that are now looking at ways to outsource the management of OpenStack
> because of how complex it is to manage and maintain. BTW, this is the
> biggest market sign that a product is too complicated.
>
> From a vendor's perspective, it's incredibly difficult to keep up with the
> releases because once you get your automation tooling and any extra
> value-added components integrated with a release, it's more than likely
> already behind or EOL. Plus there won't be enough soak time with customers
> to adopt it! Not only that, but by the time you make something work for
> customers, there is a very high chance that the upstream version of those
> components will have changed enough that you'll have to either patch,
> re-architect, or slash and burn what you've already delivered to your
> customers. Not to mention it may be impossible to upgrade your customers in a
> seamless or automated fashion. This is why customers will stick to an older
> release because the upgrade path is too painful.
>
> If you consider the realities of all of the above, what ends up happening?
> Enterprise customers will end up sticking to an older release or paying
> someone else to deal with the complexity. This places vendors at risk
> because there isn't a clear revenue model that can sustain the engineering
> overhead required for maintaining and developing their distribution. If
> customers aren't buying more or upgrading, then who's keeping the lights on?
> You can only charge so much for support. So they end up being bought or
> going under. Which leaves customers with fewer choices and more risk.
>
> So ultimately, the right way to fix this is to have an LTS branch and for
> there to be better governance around how new features are introduced. There
> needs to be more customer- and vendor-driven involvement in solidifying a
> core set of features that everyone can rely on working consistently across
> releases and upgrades. When new features or enhancements come along, there
> should be more emphasis on usability, sustainability, and upgradeability so
> that customers and vendors are not stuck.
>
> If we had an LTS branch and a solid governance model, both vendors and
> customers would benefit greatly. It would help buffer the chaos of the
> bleeding edge away from customers and allow vendors to deliver and support
> them properly.
>
> Octave
>
> On 5/5/2017 12:33 PM, Alex Schultz wrote:
>>
>> So there's a trade off and I don't think we can just restrict entry
>> because some projects aren't user friendly. I see it as a common issue
>> across all projects. Some are better than others, but what I would like to
>> see is the bar for usability raised within the OpenStack community such that
>> the end user (and deployer/operator) are all taken into consideration. For
>> me the usability also goes with adoption. The easier it is to consume, the
>> easier it would be to adopt something. If you take a look at what is
>> required to configure 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Matt Riedemann

On 5/5/2017 11:36 AM, Alex Schultz wrote:

The cell v2 initial implementation was neither usable nor stable (for
my definition of stable). Yea you could say 'but it's a work in
progress' and I would say, why is it required for the end user then?
If I wanted to I could probably go back and go through every project
and point out when a feature was added yet we still have a pile of
outstanding issues.  As Chris Friesen pointed out in his reply email,
there are specifics out there if you go looking.  You have
to understand that as I'm mainly dealing with having to actually
deploy/configure the software, when I see 'new project X' that does
'cool new things Y, Z' it makes me cringe.  Because it's just added
complexity for new features that who knows if they are actually going
to be consumed by a majority of end users.  I see a lot of new
features for edge cases while the core functionality (insert the most
used project configuration) still has awkward deployment,
configuration and usability issues. But those aren't exciting so
people don't want to work on them...


We've talked about bug-fix only stability releases before and there is 
never enough support to do that.


If I had a "go talk to x" option every time I have to say no to 
someone's blueprint after we do spec freeze, it would make my life much 
easier and less depressing.


My first release as PTL in Newton I really wanted to focus in fixing 
bugs, technical debt, docs, broken features, testing gaps, etc. That's 
still a main focus of mine, but the constant grind of feature requests 
and pressure is going to make you change priorities sometimes. So I've 
accepted that Nova is going to approve somewhere around 70 blueprints 
per release (and merge fewer than that, but that's another problem), and 
we just have to be better about requiring the docs/tests at the time 
that the feature is added, before the code is merged, and follow up with 
people who said they'd deliver those things.


We've also been pretty aggressive about deprecating a lot of old busted 
API and legacy code in Nova over the last couple of years. I'm sure 
there is a camp of people that would never want us to remove anything no 
matter how busted or not tested it is, or how only one virt driver 
supports that feature. If you want to get an idea about the horrible 
mess of resource tracking and complexity that goes into scheduling in 
Nova, attend Jay Pipes' talks on Placement at the summit (or watch the 
videos afterward). There are still gremlins in there that are news to me 
today.


And yes, Ocata sucked. We lost our main cells v2 driver (alaski), we 
lost two core reviewers for the entire release basically (danpb was 
re-assigned off openstack, sdague was laid off from HPE and was looking 
for another job), and I personally was looking for a job and 
interviewing with companies for about 4 months, just trying to continue 
working upstream in some fashion. Several people put a lot of attention 
and care into the in-tree placement and cells v2 docs, but we dropped 
the ball on the install guide for those. I've admitted that before and 
I'll admit it again if it makes anyone feel any better.


We didn't anticipate some of the issues that TripleO would have with 
deployment changes. I appreciate all of the work that Emilien put into 
working with us on raising those issues and being patient while we 
worked through them. The Kolla team was/is also a pleasure to work with, 
mostly because I don't hear from them. :)


Big changes are hard to get right. We've also put a hell of a lot of 
effort into keeping rolling upgrades in mind when working on any of this 
stuff, but I don't know if anyone has ever acknowledged that, or 
appreciated it, maybe because if it's not something to complain about 
it's not worth mentioning.


Anyway, you'll never hear me disagree that I'd love to put less of a 
focus on new features and instead focus on hardening the things we 
already have, but when push comes to shove it's not easy justifying the 
time you spend (if you're lucky enough to even pick what you want to 
work on at any given time) on stuff like that. For example, I don't hear 
anyone asking me every other day in IRC to fix the mess that is the 
shelve feature.


It's also unfair to say that the rest of us who don't work in deployment 
projects don't think or care about users, be those end users of the cloud, or 
the operators. I'd say we put a lot of thought into how something we're 
working on is going to be consumed and try to make that as painless as 
possible. It doesn't always end up that way, or maybe isn't appreciated 
for the alternatives we didn't take. It's easy to throw stones from the 
outside.


I don't have a nice way to wrap this up. I'm depressed and just want to 
go outside. I don't expect pleasant replies to my rant, so I probably 
won't reply again after this anyway since this is always a never-ending 
back and forth which just leaves everyone bitter.


--

Thanks,

Matt


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Octave J. Orgeron
Thank you Alex for the points you made below. These are some of the big 
issues that I see all OpenStack customers and operators struggling with. 
Regardless of the tools that the community or vendors put on the table, 
OpenStack is still a very complicated piece of technology to deploy and 
manage. Even if we look at TripleO, Kolla, Puppet, Fuel, Juju, etc. 
they address very specific use-cases. Most will handle deploying a basic 
HA control plane and configure things that are suited for a dev/test 
setup, demo, or POC type of use-case. But do they handle a production 
ready deployment for a bank, telco, retailer, or government? Is 
everything configured to handle HA, scale, security, auditing, 
multi-tenancy, etc. with all of the knobs and options set the way the 
customer needs? Do we end up with ceilometer, aodh, gnocchi, ELK, etc. 
all configured optimally? How about networking and security? How about 
upgrades? Can you expect people to hit an upgrade button and not have 
anything break? Let's be realistic here.. these tools are all good 
starting points, but you're going to have to get your hands dirty at 
some level to configure OpenStack to fit your actual business and 
technical requirements. The life-cycle management of OpenStack is not 
easy and requires a lot of resources. Sure vendors can try to fill the 
void, but everything they build is on quicksand.


This is why you see vendors and major consulting houses jumping all over 
this to fill the void with professional services. There are plenty of 
big shops that are now looking at ways to out-source the management of 
OpenStack because of how complex it is to manage and maintain. BTW, this 
is the biggest market sign that a product is too complicated.


From a vendor's perspective, it's incredibly difficult to keep up with 
the releases because once you get your automation tooling and any extra 
value-added components integrated with a release, it's more than likely 
already behind or EOL. Plus there won't be enough soak time with 
customers to adopt it! Not only that, but by the time you make something 
work for customers, there is a very high chance that the upstream 
version of those components will have changed enough that you'll have to 
either patch, re-architect, or slash and burn what you've already 
delivered to your customers. Not to mention it may be impossible to 
upgrade your customers in a seamless or automated fashion. This is why 
customers will stick to an older release because the upgrade path is too 
painful.


If you consider the realities of all of the above, what ends up 
happening? Enterprise customers will end up sticking to an older release 
or paying someone else to deal with the complexity. This places vendors 
at risk because there isn't a clear revenue model that can sustain the 
engineering overhead required for maintaining and developing their 
distribution. If customers aren't buying more or upgrading, then who's 
keeping the lights on? You can only charge so much for support. So they 
end up being bought or going under. Which leaves customers with fewer 
choices and more risk.


So ultimately, the right way to fix this is to have an LTS branch and 
for there to be better governance around how new features are 
introduced. There needs to be more customer- and vendor-driven 
involvement in solidifying a core set of features that everyone can rely 
on working consistently across releases and upgrades. When new features 
or enhancements come along, there should be more emphasis on usability, 
sustainability, and upgradeability so that customers and vendors are not 
stuck.


If we had an LTS branch and a solid governance model, both vendors and 
customers would benefit greatly. It would help buffer the chaos of the 
bleeding edge away from customers and allow vendors to deliver and 
support them properly.


Octave

On 5/5/2017 12:33 PM, Alex Schultz wrote:
So there's a trade off and I don't think we can just restrict entry 
because some projects aren't user friendly. I see it as a common issue 
across all projects. Some are better than others, but what I would 
like to see is the bar for usability raised within the OpenStack 
community such that the end user (and deployer/operator) are all taken 
into consideration. For me the usability also goes with adoption. The 
easier it is to consume, the easier it would be to adopt something. If 
you take a look at what is required to configure OpenStack for a basic 
deployment, it is not easy to consume. If you were to compare the 
basic getting started/install guide for Kubernetes[0] vs OpenStack[1], 
you can see what I mean about complexity. I think just the install 
guide for neutron on a controller node[2] is about the same length as 
the kubernetes guide. And we think this is ok? We should keep adding 
additional installation/management complexity for each project? You 
could argue that OpenStack has more features or is more flexible so it's 
apples to oranges but I don't 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Jonathan Proulx
On Fri, May 05, 2017 at 02:04:43PM -0600, John Griffith wrote:
:On Fri, May 5, 2017 at 11:24 AM, Chris Friesen 

:> Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
:> for anything stressful it falls over.
:>
:
:Oh really?
:
:I'd love some detail on this.  What falls over?

I'm a bit out of date on this personally, but we ditched all iSCSI a
few years ago because we found it generally flaky on Linux. We were
mostly using EqualLogic SANs both for OpenStack and standalone servers
but saw the same issues with some other targets as well.

So I wonder if this is a Cinder issue or just a Linux issue.

What we saw fall over was that any slight network bump would permanently
drop the connection to the backing storage, requiring a reset.  But as I
say, this was decidedly not a Cinder issue.
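
(If anyone wants to experiment, the stock open-iscsi timeouts are one
plausible culprit for that failure mode. A hypothetical
/etc/iscsi/iscsid.conf tweak, with illustrative values rather than a
recommendation:

  node.conn[0].timeo.noop_out_interval = 5     # ping the target regularly
  node.conn[0].timeo.noop_out_timeout = 10     # declare a dead session sooner
  node.session.timeo.replacement_timeout = 15  # fail I/O up the stack (e.g. to
                                               # dm-multipath) instead of
                                               # queueing for the default 120s

Whether that actually helps depends on the target and whether multipath
is in play.)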

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Dan Smith
> +1. Ocata's cell v2 stuff added a lot of extra required complexity
> with no perceivable benefit to end users. If there were a long-term
> stable version, then putting it in the non-LTS release would have
> been OK. In the absence of an LTS, I would have recommended the cell v2
> stuff be done in a branch instead and merged all together once
> it provided something (Pike, I think).

That's how cellsv1 was developed and that turned out spectacularly well.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread John Griffith
On Fri, May 5, 2017 at 11:24 AM, Chris Friesen 
wrote:

> On 05/05/2017 10:48 AM, Chris Dent wrote:
>
> Would it be accurate to say, then, that from your perspective the
>> tendency of OpenStack to adopt new projects willy nilly contributes
>> to the sense of features winning out over deployment, configuration
>> and usability issues?
>>
>
> Personally I don't care about the new projects...if I'm not using them I
> can ignore them, and if I am using them then I'll pay attention to them.
>
> But within existing established projects there are some odd gaps.
>
> Like nova hasn't implemented cold-migration or resize (or live-migration)
> of an instance with LVM local storage if you're using libvirt.
>
> Image properties get validated, but not flavor extra-specs or instance
> metadata.
>
> Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
> for anything stressful it falls over.
>

Oh really?

I'd love some detail on this.  What falls over?


> Some of the database pruning tools don't cover all the tables so the DB
> gets bigger over time.
>
> I'm sure there are historical reasons for all of these, I'm just pointing
> out some of the things that were surprising to me.
>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean Dague
On 05/05/2017 10:09 AM, Chris Friesen wrote:
> On 05/05/2017 06:16 AM, Sean Dague wrote:
>> On 05/04/2017 11:08 AM, Alex Schultz wrote:
> 
>>> Probably because they are still on Kilo. Not sure how much they could
>>> be contributing to the current when their customers are demanding that
>>> something is rock solid which by now looks nothing like the current
>>> upstream.   I think this is part of the problem as the upstream can
>>> tend to outpace anyone else in terms of features or anything else.  I
>>> think the the bigger question could be what's the benefit of
>>> continuing to press forward and add yet more features when consumers
>>> cannot keep up to consume these?  Personally I think usability (and
>>> some stability) sometimes tends to take a backseat to features in the
>>> upstream which is unfortunate because it makes these problems worse.
>>
>> The general statement of "people care more about features than
>> usability/stability" gets thrown around a lot. And gets lots of head
>> nodding. But rarely comes with specifics.
>>
>> Can we be specific about what feature work is outpacing the consumers
>> that don't help with usability/stability?
> 
> On the usability/stability front, in nova you still can't correctly
> live-migrate if you have dedicated CPUs or hugepages, or a specific NUMA
> topology.  The commits for this have been under review since Kilo, but
> never quite make it in.  At the same time, there are no warnings or
> errors to the user saying that it's not stable...it just migrates and
> hopes that it doesn't collide with another instance.
> 
> On the usability front, the new "openstack" client doesn't support
> microversions, which limits its usefulness with nova.  (I think some
> folks are starting to look at this one.)

Those are things that have not yet been done, of which there are lots.
I would love for more of it to be done.

But neither is actually an answer to the question of which features that
have no impact on usability/stability are getting prioritized.

I'm sorry about being pedantic here, but I really want anyone claiming
that features without stability/usability improvements are trumping that
work to identify the features in that category (not just complain about
things they wish would also fit into the 5 lb bag).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean Dague
On 05/05/2017 12:36 PM, Alex Schultz wrote:
> On Fri, May 5, 2017 at 6:16 AM, Sean Dague  wrote:
>> On 05/04/2017 11:08 AM, Alex Schultz wrote:
>>
>> The general statement of "people care more about features than
>> usability/stability" gets thrown around a lot. And gets lots of head
>> nodding. But rarely comes with specifics.
>>
>> Can we be specific about what feature work is outpacing the consumers
>> that don't help with usability/stability?
>>
> 
> The cell v2 initial implementation was neither usable nor stable (for
> my definition of stable). Yea you could say 'but it's a work in
> progress' and I would say, why is it required for the end user then?
> If I wanted to I could probably go back and go through every project
> and point out when a feature was added yet we still have a pile of
> outstanding issues.  As Chris Friesen pointed out in his reply email,
> there are specifics out there if you go looking.  You have
> to understand that as I'm mainly dealing with having to actually
> deploy/configure the software, when I see 'new project X' that does
> 'cool new things Y, Z' it makes me cringe.  Because it's just added
> complexity for new features that who knows if they are actually going
> to be consumed by a majority of end users.  I see a lot of new
> features for edge cases while the core functionality (insert the most
> used project configuration) still has awkward deployment,
> configuration and usability issues. But those aren't exciting so
> people don't want to work on them...

Chris pointed out bugs in partially implemented features that have not
been completed. Those are things that haven't gotten done.

Calling cells v2 an unneeded feature seems kind of a stretch. There was
so much operator push to get cells v1 merged even though it was wildly
incomplete, and full of races and very weird unfixable bugs. It was
merged on user request because many large operators were patching it in,
out of tree. And cells v2 was exactly a usability and stability path out
of cells v1.

While there may have been bumps in the road getting there, calling it a
feature unrelated to stability and usability doesn't seem right to me.
That's more of a "I wish the following bits were done differently."

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Matt Riedemann

On 5/5/2017 9:30 AM, Thierry Carrez wrote:

Davanum Srinivas wrote:

I would encourage folks here to help the stable branches we have right
now! Release/Requirements constantly wait on Stable team and Stable
team is way short of hands.

Please join #openstack-stable, throw your name in wiki etc
(https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch)
and get active.

If we have trouble taking care of what we have now, how can we do more?


Shameless plug:

For those in Boston next week, you can join the following on-boarding
session on Monday afternoon, to see what this work really means. It's
not as hard or time-consuming as you'd think:

https://www.openstack.org/summit/boston-2017/summit-schedule/events/18694/infraqarelease-mgmtregsstable-project-onboarding



If you want to be extra lazy on a sunny Friday, watch a video:

https://www.openstack.org/videos/video/openstack-stable-what-it-actually-means-to-maintain-stable-branches

I also ran a cross-project session at the Newton summit about the 
concept of extending the life of the oldest stable branch:


https://etherpad.openstack.org/p/stable-branch-eol-policy-newton

I even made slides!

https://docs.google.com/presentation/d/1k0mCHwRZ3_Z8zJw_WilsuTYYqnUDlY2PkgVJLz_xVQc/edit?usp=sharing

In the end it was determined that the benefit didn't justify the drain 
on (or lack of) resources to do this, particularly on the infra team, 
which would have to keep supporting older distro versions once they were 
past LTS.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Michał Jastrzębski
On 5 May 2017 at 11:33, Alex Schultz  wrote:
> On Fri, May 5, 2017 at 10:48 AM, Chris Dent  wrote:
>> On Fri, 5 May 2017, Alex Schultz wrote:
>>
>>> You have to understand that as I'm mainly dealing with having to
>>> actually deploy/configure the software, when I see 'new project X'
>>> that does 'cool new things Y, Z' it makes me cringe.  Because it's
>>> just added complexity for new features that who knows if they are
>>> actually going to be consumed by a majority of end users.  I see a
>>> lot of new features for edge cases while the core functionality
>>> (insert the most used project configuration) still has awkward
>>> deployment, configuration and usability issues. But those aren't
>>> exciting so people don't want to work on them...
>>
>>
>> Would it be accurate to say, then, that from your perspective the
>> tendency of OpenStack to adopt new projects willy nilly contributes
>> to the sense of features winning out over deployment, configuration
>> and usability issues?
>>
>
> It does not help.
>
>> I think a lot of project contributors may not really see it that way
>> because they think of their project and within that project there is
>> a constant effort to clean things up, address bugs and tech debt,
>> and try to slowly but surely evolve to some level of maturity. In
>> their eyes those new projects are something else separate from their
>> project.
>>
>> From the outside, however, it is all OpenStack and maybe it looks
>> like there's loads of diffuse attention.
>>
>> If that's the case, then a question is whether or not the people who
>> are spending time on those new projects would be spending time on
>> the older projects instead if the new projects didn't exist. I don't
>> know, but seems unlikely.
>>
>
> So there's a trade off and I don't think we can just restrict entry
> because some projects aren't user friendly.  I see it as a common
> issue across all projects. Some are better than others, but what I
> would like to see is the bar for usability raised within the OpenStack
> community such that the end user (and deployer/operator) are all taken
> into consideration.  For me the usability also goes with adoption. The
> easier it is to consume, the easier it would be to adopt something.
> If  you take a look at what is required to configure OpenStack for a
> basic deployment, it is not easy to consume.  If you were to compare
> the basic getting started/install guide for Kubernetes[0] vs
> OpenStack[1], you can see what I mean about complexity.  I think just
> the install guide for neutron on a controller node[2] is about the
> same length as the kubernetes guide.  And we think this is ok?  We
> should keep adding additional installation/management complexity for
> each project?  You could argue that OpenStack has more features or
> is more flexible so it's apples to oranges, but I don't think it has to be
> if we worked on better patterns for configuration/deployment/upgrades.
> It feels like OpenStack is the thing that you should pay professional
> services to deploy rather than do it yourself.  And I think that's a
> shame.

Sooo... I always get a little triggered when I hear that OpenStack is
hard to deploy. We've spent the last few years fixing it and I think it's
pretty well fixed now. Even as we speak I'm deploying 500+ VMs on an
OpenStack cluster I deployed last week within one day.

These problems aren't a factor of OpenStack growing too fast; it's the
tooling that people are using. Granted, it took some time for us to
build these tools, but we did build them. One of the reasons we could
build them is that OpenStack, after being turned into the Big Tent,
allowed us (Kolla) to quickly join the "main herd" of OpenStack and
innovate in our own way. If we'd put up lots of barriers like
incubation, we'd still have the same issue with deployment. Stability
doesn't always come from the age of a project; sometimes a change of
methodology altogether gives you better stability in the end. The Big
Tent is meant to allow this kind of innovation, among other things. The
setup I'm using now was deployed in a similar manner to this short guide
I wrote for the Boston summit [1]; the core flow is sketched below.
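
Roughly, that boils down to the standard kolla-ansible sequence after
preparing an inventory and /etc/kolla/globals.yml (exact subcommands and
flags can differ per release, so treat this as a sketch and see [1] for
the real steps):

  kolla-genpwd                                  # generate passwords.yml
  kolla-ansible -i inventory bootstrap-servers  # prep the target hosts
  kolla-ansible -i inventory prechecks          # sanity-check the setup
  kolla-ansible -i inventory deploy             # deploy the containers
  kolla-ansible post-deploy                     # write an admin openrc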

Deployment, upgrades, and such are problems we're fixing as we go.
Sometimes we make things harder (the nova placement API caused a bit of a
headache for deployment tools...), then we make it easier again with a
new feature. We might want to put some constraints on merge timelines for
significant, deployment-changing features, but that's logistics that we
can handle as a community.

I keep hearing that OpenStack lacks leadership, and that's true, but
consider that "leadership" is always limiting for innovation.

[1] https://github.com/inc0/kolla-ansible-workshop

Cheers,
Michal

> Thanks,
> -Alex
>
> [0] 
> https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
> [1] https://docs.openstack.org/newton/install-guide-rdo/
> [2] 
> 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Alex Schultz
On Fri, May 5, 2017 at 10:48 AM, Chris Dent  wrote:
> On Fri, 5 May 2017, Alex Schultz wrote:
>
>> You have to understand that as I'm mainly dealing with having to
>> actually deploy/configure the software, when I see 'new project X'
>> that does 'cool new things Y, Z' it makes me cringe.  Because it's
>> just added complexity for new features that who knows if they are
>> actually going to be consumed by a majority of end users.  I see a
>> lot of new features for edge cases while the core functionality
>> (insert the most used project configuration) still has awkward
>> deployment, configuration and usability issues. But those aren't
>> exciting so people don't want to work on them...
>
>
> Would it be accurate to say, then, that from your perspective the
> tendency of OpenStack to adopt new projects willy nilly contributes
> to the sense of features winning out over deployment, configuration
> and usability issues?
>

It does not help.

> I think a lot of project contributors may not really see it that way
> because they think of their project and within that project there is
> a constant effort to clean things up, address bugs and tech debt,
> and try to slowly but surely evolve to some level of maturity. In
> their eyes those new projects are something else separate from their
> project.
>
> From the outside, however, it is all OpenStack and maybe it looks
> like there's loads of diffuse attention.
>
> If that's the case, then a question is whether or not the people who
> are spending time on those new projects would be spending time on
> the older projects instead if the new projects didn't exist. I don't
> know, but seems unlikely.
>

So there's a trade off and I don't think we can just restrict entry
because some projects aren't user friendly.  I see it as a common
issue across all projects. Some are better than others, but what I
would like to see is the bar for usability raised within the OpenStack
community such that the end user (and deployer/operator) are all taken
into consideration.  For me the usability also goes with adoption. The
easier it is to consume, the easier it would be to adopt something.
If  you take a look at what is required to configure OpenStack for a
basic deployment, it is not easy to consume.  If you were to compare
the basic getting started/install guide for Kubernetes[0] vs
OpenStack[1], you can see what I mean about complexity.  I think just
the install guide for neutron on a controller node[2] is about the
same length as the kubernetes guide.  And we think this is ok?  We
should keep adding additional installation/management complexity for
each project?  You could argue that OpenStack has more features or
is more flexible so it's apples to oranges, but I don't think it has to be
if we worked on better patterns for configuration/deployment/upgrades.
It feels like OpenStack is the thing that you should pay professional
services to deploy rather than do it yourself.  And I think that's a
shame.

Thanks,
-Alex

[0] 
https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
[1] https://docs.openstack.org/newton/install-guide-rdo/
[2] 
https://docs.openstack.org/newton/install-guide-rdo/neutron-controller-install.html

>
> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean McGinnis
On Fri, May 05, 2017 at 11:24:49AM -0600, Chris Friesen wrote:
> 
[snip]
> 
> Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
> for anything stressful it falls over.
> 

A bit of a tangent, but I would love to hear more about this. We have a lot
of folks using LVM successfully, at least as far as I've seen, so if there
are issues out there that are viewed as Cinder "ignoring" in favor of adding new
features, I would really love to know what they are. And ideally have bug
reports to give us something to track and work on.

> Some of the database pruning tools don't cover all the tables so the DB gets
> bigger over time.
> 
> I'm sure there are historical reasons for all of these, I'm just pointing
> out some of the things that were surprising to me.
> 
> Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] So long, and thanks for all the fish

2017-05-05 Thread Bhatia, Manjeet S
Good luck!! It was nice working with you. 

> -Original Message-
> From: John Davidge [mailto:john.davi...@rackspace.com]
> Sent: Tuesday, May 2, 2017 3:08 AM
> To: OpenStack Development Mailing List (not for usage questions)  d...@lists.openstack.org>
> Subject: [openstack-dev] [neutron][all] So long, and thanks for all the fish
> 
> Friends and colleagues,
> 
> It is with a heavy heart that I write to say my adventure in the OpenStack
> community is coming to an end. It began in 2012 with my first job as an intern
> at Cisco, and ends here as the Technical Lead for Neutron in the OpenStack
> Innovation Center at Rackspace.
> 
> In that time I’ve worked with a great many wonderful people from all corners
> of this community, on a variety of projects that I’m proud to include my name
> in the commit logs. Thank you all for creating an exciting place to work, to
> debate, and occasionally to argue about the incredible democratizing power
> that is OpenStack. Your passion and expertise are an inspiration to the world.
> 
> Regretfully, I’m leaving a void in both the Neutron team and the OpenStack
> Manuals team. Neutron will need a new Docs liaison, and OpenStack Manuals
> will need a new lead for the Networking Guide. The cross-project work we’ve
> done together over the last couple of cycles has been engaging and fulfilling,
> and I encourage anyone interested in either or both roles to get in touch with
> Kevin Benton and Alexandra Settle.
> 
> Good luck and best wishes to all of you in the future.
> 
> Until we meet again,
> 
> John
> 
> 
> 
> Rackspace Limited is a company registered in England & Wales (company
> registered number 03897010) whose registered office is at 5 Millington Road,
> Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be
> viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may
> contain confidential or privileged information intended for the recipient. Any
> dissemination, distribution or copying of the enclosed material is 
> prohibited. If
> you receive this transmission in error, please notify us immediately by 
> e-mail at
> ab...@rackspace.com and delete the original message. Your cooperation is
> appreciated.
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Alex Schultz
On Fri, May 5, 2017 at 10:52 AM, Dean Troyer  wrote:
> On Fri, May 5, 2017 at 11:36 AM, Alex Schultz  wrote:
>> configuration and usability issues. But those aren't exciting so
>> people don't want to work on them...
>
> [Not picking on Alex here, but I've seen this repeated again in this
> thread like every other one]
>
> s/people/corporate project managers/
>
> Yes, there are individual preferences involved here, but those of us
> who are in a position to CHOOSE which projects we invest in are
> actually in the minority, and shrinking.

Yes I do not want to discount the folks who get paid to do a thing.
That being said, I believe the push for some of the initiatives (or at
least specific implementations) comes less from 'I'm getting paid for
this' and more from 'I have to do this because I'm paid to, so let me go
do it with this $hotness'.  We could point to the
$language conversations as well where it's asked "is that really
necessary?  Is this better for the end user?".  I know my view is a
bit different because I personally like solving issues for the end
user and I understand that's not everyone's thing.  It's just
something that I think the community could benefit from.

>
> The choices of what contributing companies invest in are largely
> determined by their customers, or where they feel their business needs
> to go.  Infrastructure, SDKs, documentation, none of these are on any
> of the public roadmaps that I have seen (I have not seen them all so
> speak up where I am wrong here).
>

I know I like to push for upstream documentation to make sure when we
write something we actually write the thing that we said we would and
it works as expected.  But yea, other items like infrastructure are
probably not high on some companies' lists.

> Those who are customers of these sponsor/contributor companies also
> need to be bugging them for these things too, and not just "enterprise
> ready" or "telco ready" or "GPU ready" or whatever their particular
> shiny thing is this year.
>
> Make long term releases important on corporate bottom lines and it
> WILL become a thing.
>

They already are, but just not in public.  So at that point, they
become a reason why people continue to use older versions and still
diverge from upstream: they don't need to keep up, because $vendor is
the one they go to when they need something.

Thanks,
-Alex

> dt
> --
>
> Dean Troyer
> dtro...@gmail.com
>
>
> Tragedy.  Commons.  Rinse.  Repeat.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Fox, Kevin M
+1. Ocata's cell v2 stuff added a lot of extra required complexity with no 
perceivable benefit to end users. If there were a long-term stable version, then 
putting it in the non-LTS release would have been OK. In the absence of an LTS, I 
would have recommended the cell v2 stuff be done in a branch instead and 
merged all together once it provided something (Pike, I think).

OpenStack Operators already suffer a lot of pain, and more pain without benefit 
is something we should be avoiding at all costs.

Thanks,
Kevin 

From: Alex Schultz [aschu...@redhat.com]
Sent: Friday, May 05, 2017 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too 
slow at the same time

On Fri, May 5, 2017 at 6:16 AM, Sean Dague  wrote:
> On 05/04/2017 11:08 AM, Alex Schultz wrote:
>> On Thu, May 4, 2017 at 5:32 AM, Chris Dent  wrote:
>>> On Wed, 3 May 2017, Drew Fisher wrote:
>>>
 This email is meant to be the ML discussion of a question I brought up
 during the TC meeting on April 25th.; [1]
>>>
>>>
>>> Thanks for starting this Drew, I hope my mentioning it in my tc
>>> report email wasn't too much of a nag.
>>>
>>> I've added [tc] and [all] tags to the subject in case people are
>>> filtering. More within.
>>>
 The TL;DR version is:

 Reading the user survey [2], I see the same issues time and time again.
 Pages 18-19 of the survey are especially common points.
 Things move too fast, no LTS release, upgrades are terrifying for
 anything that isn't N-1 -> N.
 These come up time and time again
 How is the TC working with the dev teams to address these critical issues?
>>>
>>>
>>> As I recall the "OpenStack-wide Goals"[a] are supposed to help address
>>> some of this sort of thing but it of course relies on people first
>>> proposing and detailing goals and then there actually being people
>>> to act on them. The first part was happening at[b] but it's not
>>> clear if that's the current way.
>>>
>>> Having people is the hard part. Given the current contribution
>>> model[c] that pretty much means enterprises ponying up the people do
>>> the work. If they don't do that then the work won't get done, and
>>> people won't buy the products they are supporting, I guess? Seems a
>>> sad state of affairs.
>>>
>>> There's also an issue where we seem to have decided that it is only
>>> appropriate to demand a very small number of goals per cycle
>>> (because each project already has too much on their plate, or too big
>>> a backlog, relative to resources). It might be that as the
>>> _Technical_ Committee it could be legitimate to make a larger demand.
>>> (Or it could be completely crazy.)
>>>
 I asked this because on page 18 is this comment:

 "Most large customers move slowly and thus are running older versions,
 which are EOL upstream sometimes before they even deploy them."
>>>
>>>
>>> Can someone with more of the history give more detail on where the
>>> expectation arose that upstream ought to be responsible things like
>>> long term support? I had always understood that such features were
>>> part of the way in which the corporately avaialable products added
>>> value?
>>>
 This is exactly what we're seeing with some of our customers and I
 wanted to ask the TC about it.
>>>
>>>
>>> I know you're not speaking as the voice of your employer when making
>>> this message, so this is not directed at you, but from what I can
>>> tell Oracle's presense upstream (both reviews and commits) in Ocata
>>> and thus far in Pike has not been huge. Maybe that's something that
>>> needs to change to keep the customers happy? Or at all.
>>>
>>
>> Probably because they are still on Kilo. Not sure how much they could
>> be contributing to the current when their customers are demanding that
>> something is rock solid which by now looks nothing like the current
>> upstream.   I think this is part of the problem as the upstream can
>> tend to outpace anyone else in terms of features or anything else.  I
>> think the the bigger question could be what's the benefit of
>> continuing to press forward and add yet more features when consumers
>> cannot keep up to consume these?  Personally I think usability (and
>> some stability) sometimes tends to take a backseat to features in the
>> upstream which is unfortunate because it makes these problems worse.
>
> The general statement of "people care more about features than
> usability/stability" gets thrown around a lot. And gets lots of head
> nodding. But rarely comes with specifics.
>
> Can we be specific about what feature work is outpacing the consumers
> that don't help with usability/stability?
>

The cell v2 initial implementation was neither usable nor stable (for
my definition of stable). Yea you could say 'but it's a work in
progress' and I would say, why is it 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Monty Taylor

On 05/05/2017 11:52 AM, Dean Troyer wrote:

On Fri, May 5, 2017 at 11:36 AM, Alex Schultz  wrote:

configuration and usability issues. But those aren't exciting so
people don't want to work on them...


[Not picking on Alex here, but I've seen this repeated again in this
thread like every other one]

s/people/corporate project managers/

Yes, there are individual preferences involved here, but those of us
who are in a position to CHOOSE which projects we invest in are
actually in the minority, and shrinking.


It's worth noting that with almost no exceptions I am aware of, those of 
us who are in a position to choose what we work on actually are the ones 
focusing on the "not exciting" things. As Dean says, there aren't many 
of us in that position - so there's only so much we can accomplish in a 
given unit of time.


I bug companies all the time to get them to allocate resources to these 
things. It is rarely successful.



The choices of what contributing companies invest in are largely
determined by their customers, or where they feel their business needs
to go.  Infrastructure, SDKs, documentation, none of these are on any
of the public roadmaps that I have seen (I have not seen them all so
speak up where I am wrong here).

Those who are customers of these sponsor/contributor companies also
need to be bugging them for these things too, and not just "enterprise
ready" or "telco ready" or "GPU ready" or whatever their particular
shiny thing is this year.

Make long term releases important on corporate bottom lines and it
WILL become a thing.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Chris Friesen

On 05/05/2017 10:48 AM, Chris Dent wrote:


Would it be accurate to say, then, that from your perspective the
tendency of OpenStack to adopt new projects willy nilly contributes
to the sense of features winning out over deployment, configuration
and usability issues?


Personally I don't care about the new projects...if I'm not using them I can 
ignore them, and if I am using them then I'll pay attention to them.


But within existing established projects there are some odd gaps.

Like nova hasn't implemented cold-migration or resize (or live-migration) of an 
instance with LVM local storage if you're using libvirt.


Image properties get validated, but not flavor extra-specs or instance metadata.

Cinder theoretically supports LVM/iSCSI, but if you actually try to use it for 
anything stressful it falls over.


Some of the database pruning tools don't cover all the tables so the DB gets 
bigger over time.
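
(To be fair, there is an archiver for the nova DB, something along the
lines of:

  nova-manage db archive_deleted_rows --max_rows 1000

but as far as I can tell it only moves soft-deleted rows into shadow
tables for the tables it knows about, and nothing purges the shadow
tables afterwards, so the overall footprint still grows.)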


I'm sure there are historical reasons for all of these, I'm just pointing out 
some of the things that were surprising to me.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MassivelyDistributed] Fog / Edge / Massively Distributed Cloud Sessions during the summit

2017-05-05 Thread lebre . adrien
Dear all, 

A brief email to inform you about our schedule next week in Boston. 

In addition to interesting presentations that will deal with Fog/Edge/Massively 
Distributed Clouds challenges [1], I would like to highlight two important 
sessions: 

* A new Birds of a Feather session ``OpenStack on the Edge'' is now scheduled 
on Tuesday afternoon [2]. 
This will be the primary call to action covered by Jonathan Bryce during 
Monday's keynote about Edge Computing.
After introducing the goal of the WG, I will give the floor to participants to 
share their use-cases (3 to 4 minutes for each presentation).
The Foundation has personally invited four large users that are planning for 
fog/edge computing.
This will guide the WG for the future and hopefully get more contributors 
involved.
Moreover, many of the Foundation staff already planned to attend and talk about 
the in-planning-phase OpenDev event and get input. 
The etherpad for this session is available at [3].

* Our regular face-to-face meeting for current and new members to discuss next 
cycle plans is still scheduled on Wednesday afternoon [4]
The etherpad for this session is available at [5]

I encourage all of you to attend both sessions.
See you in Boston and have a safe trip
ad_rien_

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/global-search?t=edge
 
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18988/openstack-on-the-edge-fogedgemassively-distributed-clouds-birds-of-a-feather
[3] 
https://etherpad.openstack.org/p/BOS-UC-brainstorming-MassivelyDistributed-Fog-Edge
[4] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18671/fogedgemassively-distributed-clouds-working-group
[5] https://etherpad.openstack.org/p/Massively_distributed_wg_boston_summit
https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] ironic-stable-maint update proposal

2017-05-05 Thread Thierry Carrez
Dmitry Tantsur wrote:
> With no objections recorded, I think we can make this change. Tony or
> someone with the required ACL, could you please do it?

Done!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Dean Troyer
On Fri, May 5, 2017 at 11:36 AM, Alex Schultz  wrote:
> configuration and usability issues. But those aren't exciting so
> people don't want to work on them...

[Not picking on Alex here, but I've seen this repeated again in this
thread like every other one]

s/people/corporate project managers/

Yes, there are individual preferences involved here, but those of us
who are in a position to CHOOSE which projects we invest in are
actually in the minority, and shrinking.

The choices of what contributing companies invest in are largely
determined by their customers, or where they feel their business needs
to go.  Infrastructure, SDKs, documentation, none of these are on any
of the public roadmaps that I have seen (I have not seen them all so
speak up where I am wrong here).

Those who are customers of these sponsor/contributor companies also
need to be bugging them for these things too, and not just "enterprise
ready" or "telco ready" or "GPU ready" or whatever their particular
shiny thing is this year.

Make long term releases important on corporate bottom lines and it
WILL become a thing.

dt
-- 

Dean Troyer
dtro...@gmail.com


Tragedy.  Commons.  Rinse.  Repeat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Chris Dent

On Fri, 5 May 2017, Alex Schultz wrote:


You have to understand that as I'm mainly dealing with having to
actually deploy/configure the software, when I see 'new project X'
that does 'cool new things Y, Z' it makes me cringe.  Because it's
just added complexity for new features that who knows if they are
actually going to be consumed by a majority of end users.  I see a
lot of new features for edge cases while the core functionality
(insert the most used project configuration) still has awkward
deployment, configuration and usability issues. But those aren't
exciting so people don't want to work on them...


Would it be accurate to say, then, that from your perspective the
tendency of OpenStack to adopt new projects willy nilly contributes
to the sense of features winning out over deployment, configuration
and usability issues?

I think a lot of project contributors may not really see it that way
because they think of their project and within that project there is
a constant effort to clean things up, address bugs and tech debt,
and try to slowly but surely evolve to some level of maturity. In
their eyes those new projects are something else separate from their
project.


From the outside, however, it is all OpenStack and maybe it looks
like there's loads of diffuse attention.

If that's the case, then a question is whether or not the people who
are spending time on those new projects would be spending time on
the older projects instead if the new projects didn't exist. I don't
know, but seems unlikely.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Alex Schultz
On Fri, May 5, 2017 at 6:16 AM, Sean Dague  wrote:
> On 05/04/2017 11:08 AM, Alex Schultz wrote:
>> On Thu, May 4, 2017 at 5:32 AM, Chris Dent  wrote:
>>> On Wed, 3 May 2017, Drew Fisher wrote:
>>>
 This email is meant to be the ML discussion of a question I brought up
 during the TC meeting on April 25th. [1]
>>>
>>>
>>> Thanks for starting this Drew, I hope my mentioning it in my tc
>>> report email wasn't too much of a nag.
>>>
>>> I've added [tc] and [all] tags to the subject in case people are
>>> filtering. More within.
>>>
 The TL;DR version is:

 Reading the user survey [2], I see the same issues time and time again.
 Pages 18-19 of the survey are especially common points.
 Things move too fast, no LTS release, upgrades are terrifying for
 anything that isn't N-1 -> N.
 These come up time and time again
 How is the TC working with the dev teams to address these critical issues?
>>>
>>>
>>> As I recall the "OpenStack-wide Goals"[a] are supposed to help address
>>> some of this sort of thing but it of course relies on people first
>>> proposing and detailing goals and then there actually being people
>>> to act on them. The first part was happening at[b] but it's not
>>> clear if that's the current way.
>>>
>>> Having people is the hard part. Given the current contribution
>>> model[c] that pretty much means enterprises ponying up the people do
>>> the work. If they don't do that then the work won't get done, and
>>> people won't buy the products they are supporting, I guess? Seems a
>>> sad state of affairs.
>>>
>>> There's also an issue where we seem to have decided that it is only
>>> appropriate to demand a very small number of goals per cycle
>>> (because each project already has too much on their plate, or too big
>>> a backlog, relative to resources). It might be that as the
>>> _Technical_ Committee it could be legitimate to make a larger demand.
>>> (Or it could be completely crazy.)
>>>
 I asked this because on page 18 is this comment:

 "Most large customers move slowly and thus are running older versions,
 which are EOL upstream sometimes before they even deploy them."
>>>
>>>
>>> Can someone with more of the history give more detail on where the
>>> expectation arose that upstream ought to be responsible things like
>>> long term support? I had always understood that such features were
>>> part of the way in which the corporately avaialable products added
>>> value?
>>>
 This is exactly what we're seeing with some of our customers and I
 wanted to ask the TC about it.
>>>
>>>
>>> I know you're not speaking as the voice of your employer when making
>>> this message, so this is not directed at you, but from what I can
>>> tell Oracle's presense upstream (both reviews and commits) in Ocata
>>> and thus far in Pike has not been huge. Maybe that's something that
>>> needs to change to keep the customers happy? Or at all.
>>>
>>
>> Probably because they are still on Kilo. Not sure how much they could
>> be contributing to the current when their customers are demanding that
>> something is rock solid which by now looks nothing like the current
>> upstream.   I think this is part of the problem as the upstream can
>> tend to outpace anyone else in terms of features or anything else.  I
>> think the the bigger question could be what's the benefit of
>> continuing to press forward and add yet more features when consumers
>> cannot keep up to consume these?  Personally I think usability (and
>> some stability) sometimes tends to take a backseat to features in the
>> upstream which is unfortunate because it makes these problems worse.
>
> The general statement of "people care more about features than
> usability/stability" gets thrown around a lot. And gets lots of head
> nodding. But rarely comes with specifics.
>
> Can we be specific about what feature work is outpacing the consumers
> that don't help with usability/stability?
>

The cell v2 initial implementation was neither usable nor stable (for
my definition of stable). Yea you could say 'but it's a work in
progress' and I would say, why is it required for the end user then?
If I wanted to I could probably go back and go through every project
and point out when a feature was added yet we still have a pile of
outstanding issues.  As Chris Friesen pointed out in his reply email,
there are things out there are specifics if you go looking.  You have
to understand that as I'm mainly dealing with having to actually
deploy/configure the software, when I see 'new project X' that does
'cool new things Y, Z' it makes me cringe.  Because it's just added
complexity for new features that who knows if they are actually going
to be consumed by a majority of end users.  I see a lot of new
features for edge cases while the core functionally (insert the most
used project configuration) still have awkward deployment,
configuration and usability issues. But 

Re: [openstack-dev] contribute to the travel assistance program

2017-05-05 Thread Amrith Kumar
That link again

https://www.eventbrite.com/e/openstack-summit-may-2017-boston-tickets-28375675409

Sorry for the duplication.

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: Amrith Kumar [mailto:amrith.ku...@gmail.com]
> Sent: Friday, May 5, 2017 11:56 AM
> To: 'OpenStack Development Mailing List (not for usage questions)'
> ; 'OpenStack Operators' <operat...@lists.openstack.org>
> Subject: contribute to the travel assistance program
> 
> Folks,
> 
> Summit is just a couple of days away and it is still not too late to
> participate in the foundation's travel assistance program. Every bit counts,
> and if you are able to contribute to it, you may be able to help one of your
> fellow openstackers attend the summit. As you all know, it is a major hiring
> place for openstack talent and given the recent shake-ups, this could make a
> significant difference for someone you know.
> 
> Anything you can contribute will be a plus, please point your browser at
> 
>
https://www.eventbrite.com/e/openstack-summit-may-2017-boston-tickets-28375675409
> 
> and scroll down to the last ticket type.
> 
> Thanks,
> 
> -amrith
> 
> --
> Amrith Kumar
> amrith.ku...@gmail.com
> 
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 22

2017-05-05 Thread Chris Dent


Placement and resource providers update 22. Please let me know if
anything is incorrect or missing.

If you're going to be in Boston there are some placement related
sessions that may be worth your while:

* Scheduler Wars: A New Hope

https://www.openstack.org/summit/boston-2017/summit-schedule/events/17501/scheduler-wars-a-new-hope

* Scheduler Wars: Revenge of the Split

https://www.openstack.org/summit/boston-2017/summit-schedule/events/17511/scheduler-wars-revenge-of-the-split

* Behind the Scenes with Placement and Resource Tracking in Nova

https://www.openstack.org/summit/boston-2017/summit-schedule/events/17511/behind-the-scenes-with-placement-and-resource-tracking-in-nova

* Comparing Kubernetes and OpenStack Resource Management

https://www.openstack.org/summit/boston-2017/summit-schedule/events/18726/comparing-kubernetes-and-openstack-resource-management

(I guess we'll have to see "NUMA Strikes Back" some other time.)

Next week there will be no scheduler subteam meeting, nor a
placement and resource providers update but efforts will be made to
summarize placement-related stuff that happens at the Forum.

# What Matters Most

Progress has begun on dealing with claims against the placement API.
Engaging with that is the top priority. There's plenty of other work
in progress too which needs to advance. Lots of links within.

# What's Changed

In addition to the work on claims, work has started on managing
resources that are shared via aggregates. When fully operational
this will finally allow correct consumption of shared disk!

Idempotent PUT for resource classes merged, which raises the max
microversion: https://review.openstack.org/#/c/448791/ .
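
For anyone who wants to poke at the new call, it is tiny; here is a
minimal sketch using requests (the endpoint URL and token are
assumptions, and this is not the code the reporting client uses):

    # Minimal sketch of the idempotent PUT for a custom resource class.
    # Endpoint and token are assumptions; real code gets both from the
    # Keystone catalog and an auth plugin.
    import requests

    PLACEMENT = 'http://placement.example.com/placement'
    HEADERS = {
        'X-Auth-Token': 'ADMIN_TOKEN',
        # 1.7 is the microversion where PUT became create-or-verify.
        'OpenStack-API-Version': 'placement 1.7',
    }

    resp = requests.put(
        PLACEMENT + '/resource_classes/CUSTOM_BAREMETAL_GOLD',
        headers=HEADERS)

    # 201 the first time the class is created, 204 when it already
    # exists; either way the caller can proceed.
    print(resp.status_code)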

# Help Wanted

(This section not changed since last week)

Areas where volunteers are needed.

* General attention to bugs tagged placement:
https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
section below).

* Helping to create and evaluate functional tests of the resource
tracker and the ways in which it and nova-scheduler use the
reporting client. For some info see
https://etherpad.openstack.org/p/nova-placement-functional
and talk to edleafe. He has a work in progress at:

https://review.openstack.org/#/c/446123/

that seeks input and assistance.

* Performance testing. If you have access to some nodes, some basic
   benchmarking and profiling would be very useful. See the
   performance section below.

# Main Themes

## Claims in the Scheduler

Work has started on placement-claims blueprint:

 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/placement-claims

We intentionally left some detail out of the spec because we knew
that we would find some edge cases while the implementation is
explored.
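
To make "claims" concrete: at the HTTP level a claim amounts to
writing an allocation for a consumer against a provider, roughly as in
the sketch below (endpoint, token, UUIDs and microversion are made up
for illustration; this is not the scheduler's code):

    # Rough sketch of claiming resources via the placement API: PUT an
    # allocation for a consumer (an instance) against a resource
    # provider (a compute node). All identifiers are made up.
    import requests

    PLACEMENT = 'http://placement.example.com/placement'
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
               'OpenStack-API-Version': 'placement 1.4'}

    consumer = '6d8ee675-0000-4000-8000-000000000001'  # instance uuid
    provider = '8f4fda32-0000-4000-8000-000000000002'  # compute node uuid

    body = {
        'allocations': [
            {'resource_provider': {'uuid': provider},
             'resources': {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 20}},
        ],
    }

    resp = requests.put('%s/allocations/%s' % (PLACEMENT, consumer),
                        headers=HEADERS, json=body)

    # 204 when the claim fits the provider's inventory, 409 when it
    # conflicts (e.g. capacity would be exceeded) and a retry is needed.
    print(resp.status_code)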

## Traits

The main API is in place. Debate raged on how best to manage updates
of standard os-traits. In the end a cache similar to the one used
for resource classes was created:

https://review.openstack.org/#/c/462769/

Work will be required at some point on filtering resource providers
based on traits, and adding traits to resource providers from the
resource tracker. There's been some discussion on that in the
reviews of shared providers (below) because it will be a part of
the same mass (MASS!) of SQL.

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Work and review on this is in progress at:

 https://review.openstack.org/#/q/status:open+topic:bp/shared-resources-pike

Reviewers should be aware that the patches, at least as of today,
are structured to evolve from the current state to the
eventual desired state in a way that duplicates some effort and
code. This was done intentionally by Jay to make the testing and
review more incremental. It's probably best to read through the
entire stack before jumping to any conclusions. I know that I got
very concerned and confused by some of the duplication until I was
informed that it's just part of the process: the end goal ought to
be pretty clean.

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

Several reviews are in progress for documenting the placement API.
This is likely going to take quite a few iterations as we work out
the patterns and tooling. But it's great to see the progress and
when looking at the draft rendered docs it makes placement feel like
a real thing™.

We need multiple reviewers on this stuff, early in the process, as
it helps to identify missteps in the phrasing and styling before we
develop bad habits. We've also found some ways in which the general
style of the docs can be improved to say more about when particular
errors might happen. We'll likely need more constructive use of
inclusions.

Find me (cdent) or Andrey (avolkov) if you want to help out or have
other questions.

## 

[openstack-dev] contribute to the travel assistance program

2017-05-05 Thread Amrith Kumar
Folks,

Summit is just a couple of days away and it is still not too late to
participate in the foundation's travel assistance program. Every bit counts,
and if you are able to contribute to it, you may be able to help one of your
fellow openstackers attend the summit. As you all know, it is a major hiring
place for openstack talent and given the recent shake-ups, this could make a
significant difference for someone you know.

Anything you can contribute will be a plus, please point your browser at

https://www.eventbrite.com/e/openstack-summit-may-2017-boston-tickets-28375675409

and scroll down to the last ticket type.

Thanks,

-amrith

--
Amrith Kumar
amrith.ku...@gmail.com




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][ptl][goals] Community goals for Queens

2017-05-05 Thread Mike Perez
Hello everyone,

May 11th at 11:00-12:30 at the forum we will be discussing our community-wide
goals for the Queens release [1]!

We do OpenStack-wide goals to "achieve visible common changes, push for basic
levels of consistency and user experience, and efficiently improve certain
areas where technical debt payments have become too high – across all OpenStack
projects" [2].

Our goals backlog: [3]

* New goals are highly welcome.
* Each goal should be achievable in one cycle; if not, we need to break the goal
  into smaller connected goals.
* Some goals already have a team (e.g. Python 3) but some don't.  Maybe we
  could draw up a list of people able to step up and volunteer to help on
  those.
* Some goals might require some documentation on how to achieve them.

Thanks to Emilien for leading our community wide goals for Pike [4]

* There was an overwhelming amount of support for beginning Python 3.5 support
  [5] by having our unit tests voting.
* Unfortunately getting our API endpoints to support WSGI still has some work
  [6].

Let's start adding/updating what's in our backlog [3] to prepare for the forum
next week!

[1] - https://governance.openstack.org/tc/goals/pike/index.html
[2] - https://governance.openstack.org/tc/goals/pike/index.html
[3] - https://etherpad.openstack.org/p/community-goals
[4] - https://governance.openstack.org/tc/goals/pike/index.html

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Thierry Carrez
Davanum Srinivas wrote:
> I would encourage folks here to help the stable branches we have right
> now! Release/Requirements constantly wait on Stable team and Stable
> team is way short of hands.
> 
> Please join #openstack-stable, throw your name in wiki etc
> (https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch)
> and get active.
> 
> If we have trouble taking care of what we have now, how can we do more?

Shameless plug:

For those in Boston next week, you can join the following on-boarding
session on Monday afternoon, to see what this work really means. It's
not as hard or time-consuming as you'd think:

https://www.openstack.org/summit/boston-2017/summit-schedule/events/18694/infraqarelease-mgmtregsstable-project-onboarding

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Davanum Srinivas
On Fri, May 5, 2017 at 9:34 AM, Bogdan Dobrelya  wrote:
> On 05.05.2017 14:12, Sean Dague wrote:
>> On 05/05/2017 07:16 AM, Bogdan Dobrelya wrote:
>>> So perhaps there is a (naive?) option #3: Do not support or nurse gates
>>> for stable branches upstream. Instead, only create and close them and
>>> attach 3rd party gating, if asked by contributors willing to support and
>>> nurse their gates.
>>
>> I think it's important to clarify the amount of infrastructure that goes
>> into testing OpenStack. We build a whole cloud, from source, installing
>> ~ 200 python packages, many from git trees, configure and boot 100+ VMs
>> on it in different ways. And do that with a number of different default
>> configs.
>>
>> Nothing prevents anyone from building a kilo branch in a public github
>> and doing their own CI against it. But we've never seen anyone do that,
>> because there is a lot of work in maintaining a CI system. A lot of
>> expertise needed to debug when things go wrong. Anyone can captain a
>
> There is no need to underscore the complexity of maintaining stable
> branches, indeed. Complexity and costs are huge, and it sounds even more
> reasonable for operators to join the efforts of isolated teams that have
> been patching Kilo for years in multiple downstream - therefore isolated
> as well - places, fighting the same problems in different ways, wasting
> engineering and management resources and hardware over and over again. It
> should be obvious that it is much less expensive to cooperate here. I
> can't grasp what "prevents anyone from building a kilo branch in a
> public github and doing their own CI against it", a riddle it is.

Bogdan, Folks,

I would encourage folks here to help the stable branches we have right
now! Release/Requirements constantly wait on Stable team and Stable
team is way short of hands.

Please join #openstack-stable, throw your name in wiki etc
(https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch)
and get active.

If we have trouble taking care of what we have now, how can we do more?

Thanks,
Dims

>> ship when it's a sunny day. Only under failures do we see what is
>> required to actually keep the ship afloat.
>>
>> You could always drop all the hard parts of CI, actually testing that the
>> trees build a full running cloud. But at that point, it becomes very odd
>> to call it a stable branch, as it is far less rigorous in validation
>> than master.
>>
>> At any rate, this basically comes up every year, and I don't think the
>> fundamental equation has changed.
>>
>>   -Sean
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Chris Friesen

On 05/05/2017 06:16 AM, Sean Dague wrote:

On 05/04/2017 11:08 AM, Alex Schultz wrote:



Probably because they are still on Kilo. Not sure how much they could
be contributing to the current when their customers are demanding that
something is rock solid which by now looks nothing like the current
upstream.   I think this is part of the problem as the upstream can
tend to outpace anyone else in terms of features or anything else.  I
think the bigger question could be what's the benefit of
continuing to press forward and add yet more features when consumers
cannot keep up with consuming them?  Personally I think usability (and
some stability) sometimes tends to take a backseat to features in the
upstream which is unfortunate because it makes these problems worse.


The general statement of "people care more about features than
usability/stability" gets thrown around a lot. And gets lots of head
nodding. But rarely comes with specifics.

Can we be specific about what feature work is outpacing the consumers
that don't help with usability/stability?


On the usability/stability front, in nova you still can't correctly live-migrate 
if you have dedicated CPUs or hugepages, or a specific NUMA topology.  The 
commits for this have been under review since Kilo, but never quite make it in. 
 At the same time, there are no warnings or errors to the user saying that it's 
not stable...it just migrates and hopes that it doesn't collide with another 
instance.


On the usability front, the new "openstack" client doesn't support 
microversions, which limits its usefulness with nova.  (I think some folks are 
starting to look at this one.)
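
For contrast, python-novaclient does let you pick the microversion
when constructing the client; a minimal sketch with placeholder
credentials and URLs:

    # Minimal sketch: requesting a specific compute API microversion
    # with python-novaclient. Credentials/URLs are placeholders.
    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller.example.com/identity/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    # 2.25 is the microversion that reworked the live-migration API.
    nova = client.Client('2.25', session=sess)

    for server in nova.servers.list():
        print(server.name)

So the gap is in the common client, not in the underlying libraries.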


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Chris Friesen

On 05/05/2017 05:16 AM, Bogdan Dobrelya wrote:


I may be wrong, but I have a strong perception that even when major
upgrades run smoothly and flawlessly, operators tend to operate legacy
enterprise software for a long, long, long time. It would be a utopia to
expect any changes here, IMO.


One reason for this is that the end-users of telco or enterprise companies have 
high expectations for robustness and reliability.  Telecom-level validation 
takes a *long* time.  It could easily be six months to a year before new 
software actually makes it out to end-users.  Because this effort is expensive, 
they want to amortize it over a longer period.


Even when an upgrade is smooth and easy, it's still new code, with new unknown 
bugs.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Bogdan Dobrelya
On 05.05.2017 14:12, Sean Dague wrote:
> On 05/05/2017 07:16 AM, Bogdan Dobrelya wrote:
>> So perhaps there is a (naive?) option #3: Do not support or nurse gates
>> for stable branches upstream. Instead, only create and close them and
>> attach 3rd party gating, if asked by contributors willing to support and
>> nurse their gates.
> 
> I think it's important to clarify the amount of infrastructure that goes
> into testing OpenStack. We build a whole cloud, from source, installing
> ~ 200 python packages, many from git trees, configure and boot 100+ VMs
> on it in different ways. And do that with a number of different default
> configs.
> 
> Nothing prevents anyone from building a kilo branch in a public github
> and doing their own CI against it. But we've never seen anyone do that,
> because there is a lot of work in maintaining a CI system. A lot of
> expertise needed to debug when things go wrong. Anyone can captain a

There is no need to underscore the complexity of maintaining stable
branches, indeed. Complexity and costs are huge, and it sounds even more
reasonable for operators to join the efforts of isolated teams that have
been patching Kilo for years in multiple downstream - therefore isolated
as well - places, fighting the same problems in different ways, wasting
engineering and management resources and hardware over and over again. It
should be obvious that it is much less expensive to cooperate here. I
can't grasp what "prevents anyone from building a kilo branch in a
public github and doing their own CI against it", a riddle it is.

> ship when it's a sunny day. Only under failures do we see what is
> required to actually keep the ship afloat.
> 
> You could always drop all the hard parts of CI, actually testing that the
> trees build a full running cloud. But at that point, it becomes very odd
> to call it a stable branch, as it is far less rigorous in validation
> than master.
> 
> At any rate, this basically comes up every year, and I don't think the
> fundamental equation has changed.
> 
>   -Sean
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [docs] Contributor and reviewer stats for API docs

2017-05-05 Thread Anne Gentle
Hi all,

I've started a patch to the openstack-infra/reviewstats repo [1] to measure
the number of reviews, reviewers, contributors, and contributions to API
docs where the docs are housed in project repos.

A couple of goals can be realized if we know more: to determine the experts
in the speciality and to figure out if there are processes that can be
improved, or projects that can use extra help. And, to improve both the API
docs processes and content.

Both my Python and bash skills are rudimentary on a good day so I'd like to
get some help and learn at the same time so I can improve. Is the patch on
the right track? Is this the right repo to get data (or is it really for
reviews only)? Take a look and let me know.

Thanks in advance -
Anne

1. https://review.openstack.org/#/c/461280/

--

Read my blog: justwrite.click
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2017-05-04 20:09:35 -0400:
> On 04/05/17 10:14, Thierry Carrez wrote:
> > Chris Dent wrote:
> >> On Wed, 3 May 2017, Drew Fisher wrote:
> >>> "Most large customers move slowly and thus are running older versions,
> >>> which are EOL upstream sometimes before they even deploy them."
> >>
> >> Can someone with more of the history give more detail on where the
> >> expectation arose that upstream ought to be responsible for things like
> >> long term support? I had always understood that such features were
> >> part of the way in which the corporately available products added
> >> value?
> >
> > We started with no stable branches, we were just producing releases and
> > ensuring that updates vaguely worked from N-1 to N. There were a lot of
> > distributions, and they all maintained their own stable branches,
> > handling backport of critical fixes. That is a pretty classic upstream /
> > downstream model.
> >
> > Some of us (including me) spotted the obvious duplication of effort
> > there, and encouraged distributions to share that stable branch
> > maintenance work rather than duplicate it. Here the stable branches were
> > born, mostly through a collaboration between Red Hat developers and
> > Canonical developers. All was well. Nobody was saying LTS back then
> > because OpenStack was barely usable so nobody wanted to stay on any
> > given version for too long.
> 
> Heh, if you go back _that_ far then upgrades between versions basically 
> weren't feasible, so everybody stayed on a given version for too long. 
> It's true that nobody *wanted* to though :D
> 
> > Maintaining stable branches has a cost. Keeping the infrastructure that
> > ensures that stable branches are actually working is a complex endeavor
> > that requires people to constantly pay attention. As time passed, we saw
> > the involvement of distro packagers become more limited. We therefore
> > limited the number of stable branches (and the length of time we
> > maintained them) to match the staffing of that team.
> 
> I wonder if this is one that needs revisiting. There was certainly a 
> time when closing a branch came with a strong sense of relief that you 
> could stop nursing the gate. I personally haven't felt that way in a 
> couple of years, thanks to a lot of *very* hard work done by the folks 
> looking after the gate to systematically solve a lot of those recurring 
> issues (e.g. by introducing upper constraints). We're still assuming 
> that stable branches are expensive, but what if they aren't any more?
> 
> > Fast-forward to
> > today: the stable team is mostly one person, who is now out of his job
> > and seeking employment.
> >
> > In parallel, OpenStack became more stable, so the demand for longer-term
> > maintenance is stronger. People still expect "upstream" to provide it,
> > not realizing upstream is made of people employed by various
> > organizations, and that apparently their interest in funding work in
> > that area is pretty dead.
> >
> > I agree that our current stable branch model is inappropriate:
> > maintaining stable branches for one year only is a bit useless. But I
> > only see two outcomes:
> >
> > 1/ The OpenStack community still thinks there is a lot of value in doing
> > this work upstream, in which case organizations should invest resources
> > in making that happen (starting with giving the Stable branch
> > maintenance PTL a job), and then, yes, we should definitely consider
> > things like LTS or longer periods of support for stable branches, to
> > match the evolving usage of OpenStack.
> 
> Speaking as a downstream maintainer, it sucks that backports I'm still 
> doing to, say, Liberty don't benefit anybody but Red Hat customers, 
> because there's nowhere upstream that I can share them. I want everyone 
> in the community to benefit. Even if I could only upload patches to 
> Gerrit and not merge them, that would at least be something.
> 
> (In a related bugbear, why must we delete the branch at EOL? This is 
> pure evil for consumers of the code. It breaks existing git checkouts 
> and thousands of web links in bug reports, review comments, IRC logs...)

Among other things, closing the branch lets us avoid all of the
discussions about why no one is reviewing patches there and why
folks shouldn't bother submitting them.

I would support having the stable maintenance team review the state
of the gate and revise the policy, if it's warranted. But we've had
that conversation at least once a year for the last 5 years, and
we only came to a different conclusion one time that I remember.
Even if branches are cheaper to maintain now, they aren't free. We
need people to be around to do the work.

> > 2/ The OpenStack community thinks this is better handled downstream, and
> > we should just get rid of them completely. This is a valid approach, and
> > a lot of other open source communities just do that.
> 
> Maybe we need a 5th 'Open', because to me the idea 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Jeremy Stanley
On 2017-05-04 20:09:35 -0400 (-0400), Zane Bitter wrote:
> On 04/05/17 10:14, Thierry Carrez wrote:
[...]
> >Maintaining stable branches has a cost. Keeping the infrastructure that
> >ensures that stable branches are actually working is a complex endeavor
> >that requires people to constantly pay attention. As time passed, we saw
> >the involvement of distro packagers become more limited. We therefore
> >limited the number of stable branches (and the length of time we
> >maintained them) to match the staffing of that team.
> 
> I wonder if this is one that needs revisiting. There was certainly a time
> when closing a branch came with a strong sense of relief that you could stop
> nursing the gate. I personally haven't felt that way in a couple of years,
> thanks to a lot of *very* hard work done by the folks looking after the gate
> to systematically solve a lot of those recurring issues (e.g. by introducing
> upper constraints). We're still assuming that stable branches are expensive,
> but what if they aren't any more?
[...]
> Speaking as a downstream maintainer, it sucks that backports I'm still doing
> to, say, Liberty don't benefit anybody but Red Hat customers, because
> there's nowhere upstream that I can share them. I want everyone in the
> community to benefit. Even if I could only upload patches to Gerrit and not
> merge them, that would at least be something.
> 
> (In a related bugbear, why must we delete the branch at EOL? This is
> pure evil for consumers of the code. It breaks existing git checkouts and
> thousands of web links in bug reports, review comments, IRC logs...)
[...]

The points above are all interrelated. We close upstream development
of a branch at some point (by tagging and deleting it) to ensure
contributors _don't_ post new changes to Gerrit targeting those
branches _because_ we can't indefinitely maintain the contemporary
infrastructure required to test them and confirm they work and don't
introduce new regressions. OpenStack upstream development has taken
the approach that everything we officially maintain should be
continuously buildable and testable. Revisiting the other points
means revisiting that decision as well.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean Dague
On 05/05/2017 08:50 AM, Dean Troyer wrote:
> On Fri, May 5, 2017 at 7:16 AM, Sean Dague  wrote:
>> The general statement of "people care more about features than
>> usability/stability" gets thrown around a lot. And gets lots of head
>> nodding. But rarely comes with specifics.
> 
> I think a lot of this general sentiment comes from watching what gets
> the investment from contributors, specifically from contributing
> companies.  HP may have been an aberration for a time given the
> quantity of infra folk they once employed, but take a look at the
> distribution between feature work and non-feature work that our major
> contributing companies sponsor.

Ok, I still don't see that as specific, unless you are stating that
everything that's not Infra is a Feature?

(which I honestly don't think you believe, but I'm going to keep
pressing on all respondents to be very specific)

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Dean Troyer
On Fri, May 5, 2017 at 7:16 AM, Sean Dague  wrote:
> The general statement of "people care more about features than
> usability/stability" gets thrown around a lot. And gets lots of head
> nodding. But rarely comes with specifics.

I think a lot of this general sentiment comes from watching what gets
the investment from contributors, specifically from contributing
companies.  HP may have been an aberration for a time given the
quantity of infra folk they once employed, but take a look at the
distribution between feature work and non-feature work that our major
contributing companies sponsor.

Maybe it is time to set up direct sponsorship for some of this work
that keeps coming up as important but "nobody wants to pay for it".
Like an endowed chair or just a GoFundMe that lets someone continue to
be the Stable Branch Overlord.  It is the sort of work that no single
company wants to be on the hook for, nor is it a good idea from the
community perspective to be single sourced in that regard (again, the
infra situation last fall), so let's make it easy for downstream
consumers to share the funding of the upstream work they all want to
see done but do not want or cannot afford the responsibility of
taking on themselves.

You might say we already do this, the Foundation employs people to
work directly on these things, and yes they do to great effect.  But
this topic keeps coming up because it is a specific request from
specific OpenStack consumers.  Let's help them help themselves.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] postpone our next meeting to 5/17 for Boston summit

2017-05-05 Thread Rico Lin
Dear team

As the Boston summit is next week, we will postpone our next meeting to 5/17.
In other words, we will not have a meeting next week. Feel free to
raise any discussion in the #heat IRC channel.

-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-05 Thread Sean Dague
On 05/04/2017 01:10 PM, Flavio Percoco wrote:

> Some of the current TC activities depend on the meeting to some extent:
> 
> * We use the meeting to give the final ack on some the formal-vote reviews.
> * Some folks (tc members and not) use the meeting agenda to know what they
>  should be reviewing.
> * Some folks (tc members and not) use the meeting as a way to review or
>  participate in active discussions.
> * Some folks use the meeting logs to catch up on what's going on in the TC
> 
> In the resolution that has been proposed[1], we've listed possible
> solutions for
> some of these issues and others:
> 
> * Having office hours
> * Sending weekly updates (pulse) on the current reviews and TC discussions
> 
> Regardless of whether we do this change in one shot or in multiple steps (or don't do
> it at
> all), I believe it requires changing the way TC activities are done:
> 
> * It requires folks (especially TC members) to be more active on reviewing
>  governance patches
> * It requires folks to engage more on the mailing list and start more
>  discussions there.
> 
> Sending this out to kick off a broader discussion on these topics.
> Thoughts?
> Opinions? Objections?

To baseline: I am all in favor of an eventual world where we get rid of the TC
IRC meeting (and honestly IRC meetings in general), for all the reasons
listed above.

I shut down my IRC bouncer over a year ago specifically because I think
that the assumption of being on IRC all the time is an anti-pattern that
we should be avoiding in our community.

But, that being said, we have a working system right now, one where I
honestly can't remember the last time we had an IRC meeting get to every
topic we wanted to cover and not run into the time limit. That is data
that these needs are not being addressed in other places (yet).

So the concrete steps I would go with is:

1) We need to stop requiring IRC meetings as part of meeting the Open
definition.

That has propagated this issue a lot -
https://review.openstack.org/#/c/462077

2) We really need to stop putting items like project team additions on the agenda.

That's often forcing someone to be up in the middle of the night for 15
minutes for no particularly good reason.

3) Don't do interactive reviews in gerrit.

Again, kind of a waste of time that is better spent async. It's mostly
triggered by the fact that gerrit doesn't make a good discussion medium
for broad strokes. It's really good for precision feedback,
but for broad strokes, it's tough.

One counter suggestion here is to have every governance patch that's not
trivial require that an email come to the list tagged [tc] [governance]
for people to comment more free form here.

4) See what impact the summary that Chris is sending out has on making
people feel like they understand what is going on in the meeting.
Because I also think that we make assumptions that the log of the
meeting describes what really happened. And I think that's often an
incorrect assumption. The same words used by Monty, Thierry, Jeremy mean
different things. Which you only know by knowing them all as people.
Having a human interpretation of the meeting is good and puts together a
more digestible narrative for people.


Then evaluate because we will know that we need the meeting less (or
less often) when we're regularly ending in 45 minutes, or 30 minutes,
instead of slamming up against the wall with people feeling they had
more to say.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean Dague
On 05/04/2017 11:08 AM, Alex Schultz wrote:
> On Thu, May 4, 2017 at 5:32 AM, Chris Dent  wrote:
>> On Wed, 3 May 2017, Drew Fisher wrote:
>>
>>> This email is meant to be the ML discussion of a question I brought up
>>> during the TC meeting on April 25th. [1]
>>
>>
>> Thanks for starting this Drew, I hope my mentioning it in my tc
>> report email wasn't too much of a nag.
>>
>> I've added [tc] and [all] tags to the subject in case people are
>> filtering. More within.
>>
>>> The TL;DR version is:
>>>
>>> Reading the user survey [2], I see the same issues time and time again.
>>> Pages 18-19 of the survey are especially common points.
>>> Things move too fast, no LTS release, upgrades are terrifying for
>>> anything that isn't N-1 -> N.
>>> These come up time and time again
>>> How is the TC working with the dev teams to address these critical issues?
>>
>>
>> As I recall the "OpenStack-wide Goals"[a] are supposed to help address
>> some of this sort of thing but it of course relies on people first
>> proposing and detailing goals and then there actually being people
>> to act on them. The first part was happening at[b] but it's not
>> clear if that's the current way.
>>
>> Having people is the hard part. Given the current contribution
>> model[c] that pretty much means enterprises ponying up the people to do
>> the work. If they don't do that then the work won't get done, and
>> people won't buy the products they are supporting, I guess? Seems a
>> sad state of affairs.
>>
>> There's also an issue where we seem to have decided that it is only
>> appropriate to demand a very small number of goals per cycle
>> (because each project already has too much on their plate, or too big
>> a backlog, relative to resources). It might be that, as the
>> _Technical_ Committee, it could be legitimate to make a larger demand.
>> (Or it could be completely crazy.)
>>
>>> I asked this because on page 18 is this comment:
>>>
>>> "Most large customers move slowly and thus are running older versions,
>>> which are EOL upstream sometimes before they even deploy them."
>>
>>
>> Can someone with more of the history give more detail on where the
>> expectation arose that upstream ought to be responsible for things like
>> long term support? I had always understood that such features were
>> part of the way in which the corporately available products added
>> value?
>>
>>> This is exactly what we're seeing with some of our customers and I
>>> wanted to ask the TC about it.
>>
>>
>> I know you're not speaking as the voice of your employer when making
>> this message, so this is not directed at you, but from what I can
>> tell Oracle's presence upstream (both reviews and commits) in Ocata
>> and thus far in Pike has not been huge. Maybe that's something that
>> needs to change to keep the customers happy? Or at all.
>>
> 
> Probably because they are still on Kilo. Not sure how much they could
> be contributing to the current when their customers are demanding that
> something is rock solid which by now looks nothing like the current
> upstream.   I think this is part of the problem as the upstream can
> tend to outpace anyone else in terms of features or anything else.  I
> think the bigger question could be what's the benefit of
> continuing to press forward and add yet more features when consumers
> cannot keep up with consuming them?  Personally I think usability (and
> some stability) sometimes tends to take a backseat to features in the
> upstream which is unfortunate because it makes these problems worse.

The general statement of "people care more about features than
usability/stability" gets thrown around a lot. And gets lots of head
nodding. But rarely comes with specifics.

Can we be specific about what feature work is outpacing the consumers
that don't help with usability/stability?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Sean Dague
On 05/05/2017 07:16 AM, Bogdan Dobrelya wrote:
> On 05.05.2017 2:09, Zane Bitter wrote:
>> On 04/05/17 10:14, Thierry Carrez wrote:
>>> We started with no stable branches, we were just producing releases and
>>> ensuring that updates vaguely worked from N-1 to N. There were a lot of
>>> distributions, and they all maintained their own stable branches,
>>> handling backport of critical fixes. That is a pretty classic upstream /
>>> downstream model.
>>>
>>> Some of us (including me) spotted the obvious duplication of effort
>>> there, and encouraged distributions to share that stable branch
>>> maintenance work rather than duplicate it. Here the stable branches were
>>> born, mostly through a collaboration between Red Hat developers and
>>> Canonical developers. All was well. Nobody was saying LTS back then
>>> because OpenStack was barely usable so nobody wanted to stay on any
>>> given version for too long.
>>
>> Heh, if you go back _that_ far then upgrades between versions basically
>> weren't feasible, so everybody stayed on a given version for too long.
>> It's true that nobody *wanted* to though :D
> 
> I may be wrong, but I have a strong perception that even when major
> upgrades run smoothly and flawlessly, operators tend to operate legacy
> enterprise software for a long, long, long time. It would be a utopia to
> expect any changes here, IMO. Unless the major shift comes to
> enterprises in the very software delivery paradigm (infrastructure as
> code, blue/green deployments done by a CD pipeline promoting stable
> changes from trunk commits, unicorns all around), which is even more of
> a utopia though.
> 
>>> Maintaining stable branches has a cost. Keeping the infrastructure that
>>> ensures that stable branches are actually working is a complex endeavor
>>> that requires people to constantly pay attention. As time passed, we saw
>>> the involvement of distro packagers become more limited. We therefore
>>> limited the number of stable branches (and the length of time we
>>> maintained them) to match the staffing of that team.
>>
>> I wonder if this is one that needs revisiting. There was certainly a
>> time when closing a branch came with a strong sense of relief that you
>> could stop nursing the gate. I personally haven't felt that way in a
> 
> This. What we really need, IMO, is that sense of relief, but inverted:
> *operators* in the enterprise world should feel it and start nursing
> their 3rd party CI gates *upstream* once a stable branch is created! And
> hopefully never closed, for as long as they care about it. That sounds
> fair for open source software. As Jay noted, one shall get out what one
> has put in, which is a direct function of how much one cares and gives away.
> 
> So perhaps there is a (naive?) option #3: Do not support or nurse gates
> for stable branches upstream. Instead, only create and close them and
> attach 3rd party gating, if asked by contributors willing to support and
> nurse their gates.

I think it's important to clarify the amount of infrastructure that goes
into testing OpenStack. We build a whole cloud, from source, installing
~ 200 python packages, many from git trees, configure and boot 100+ VMs
on it in different ways. And do that with a number of different default
configs.

Nothing prevents anyone from building a kilo branch in a public github
and doing their own CI against it. But we've never seen anyone do that,
because there is a lot of work in maintaining a CI system. A lot of
expertise needed to debug when things go wrong. Anyone can captain a
ship when it's a sunny day. Only under failures do we see what is
required to actually keep the ship afloat.

You could always drop all the hard parts of CI, actually testing that the
trees build a full running cloud. But at that point, it becomes very odd
to call it a stable branch, as it is far less rigorous in validation
than master.

At any rate, this basically comes up every year, and I don't think the
fundamental equation has changed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovn] metadata agent implementation

2017-05-05 Thread Daniel Alvarez Sanchez
Hi folks,

Now that it looks like the metadata proposal is more refined [0], I'd like
to get some feedback from you on the driver implementation.

The ovn-metadata-agent in networking-ovn will be responsible for
creating the namespaces, spawning haproxies and so on. But also,
it must implement most of the "old" neutron-metadata-agent functionality
which listens on a UNIX socket and receives requests from haproxy,
adds some headers and forwards them to Nova. This means that we can
import/reuse a big part of the neutron code.
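
To make that concrete, the piece in question boils down to something
like the toy sketch below (not the proposed implementation; the header
names coming from haproxy, the socket path and the shared secret are
all assumptions):

    # Toy sketch of the metadata-proxy role: accept HTTP on a UNIX
    # socket (from haproxy), add the Nova metadata headers including
    # the HMAC signature, and forward the request to the Nova metadata
    # API. Header names, paths and the secret are assumptions.
    import hashlib
    import hmac
    import http.client
    import os
    from http.server import BaseHTTPRequestHandler
    from socketserver import UnixStreamServer

    SOCKET_PATH = '/var/run/ovn-metadata-proxy.sock'
    NOVA_HOST, NOVA_PORT = '127.0.0.1', 8775
    SHARED_SECRET = b'secret'  # metadata_proxy_shared_secret equivalent

    class MetadataProxyHandler(BaseHTTPRequestHandler):
        def address_string(self):
            return 'unix-socket'  # AF_UNIX peers have no address to log

        def do_GET(self):
            # Assume haproxy stamped these from the client connection.
            instance_id = self.headers.get('X-OVN-Instance-ID', '')
            tenant_id = self.headers.get('X-OVN-Tenant-ID', '')
            sig = hmac.new(SHARED_SECRET, instance_id.encode(),
                           hashlib.sha256).hexdigest()

            conn = http.client.HTTPConnection(NOVA_HOST, NOVA_PORT)
            conn.request('GET', self.path, headers={
                'X-Instance-ID': instance_id,
                'X-Tenant-ID': tenant_id,
                'X-Instance-ID-Signature': sig,
            })
            resp = conn.getresponse()
            payload = resp.read()

            self.send_response(resp.status)
            self.send_header('Content-Length', str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == '__main__':
        if os.path.exists(SOCKET_PATH):
            os.unlink(SOCKET_PATH)
        UnixStreamServer(SOCKET_PATH,
                         MetadataProxyHandler).serve_forever()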

I wonder what you guys think about depending on the neutron tree for the
agent implementation, given that we can benefit from a lot of code reuse.
On the other hand, if we want to get rid of this dependency, we could
probably write the agent "from scratch" in C (what about having C
code in the networking-ovn repo?) and, at the same time, it should
buy us a performance boost (probably not very noticeable since it'll
respond to requests from local VMs involving a few lookups and
processing simple HTTP requests; talking to nova would take most
of the time and this only happens at boot time).

I would probably aim for a Python implementation reusing/importing
code from the neutron tree, but I'm not sure how we want to deal with
changes in the neutron codebase (we're actually importing code now).
Looking forward to reading your thoughts :)

Thanks,
Daniel

[0] https://review.openstack.org/#/c/452811/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] ironic-stable-maint update proposal

2017-05-05 Thread Dmitry Tantsur
With no objections recorded, I think we can make this change. Tony or someone 
with the required ACL, could you please do it?


On 04/27/2017 04:21 PM, Dmitry Tantsur wrote:

Hi all!

I'd like to propose the following changes to the ironic-stable-maint group [0]:

1. Add Ruby Loo (rloo) to the group. Ruby does not need an introduction in the 
Ironic community, she has been with the project for a really long time and is well 
known for her high-quality and thorough reviews. She has been pretty active on 
stable branches as well [1].


2. Remove Jay Faulkner (sigh..) per his request at [2].

3. Remove Devananda (sigh again..) as he's no longer active on the project and 
was removed from ironic-core several months ago [3].


So for those on the team already, please reply with a +1 or -1 vote.
I'll also need somebody to apply this change, as I don't have ACL for that.

[0] https://review.openstack.org/#/admin/groups/950,members
[1] 
https://review.openstack.org/#/q/project:openstack/ironic+NOT+branch:master+reviewer:%22Ruby+Loo+%253Cruby.loo%2540intel.com%253E%22 


[2] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115968.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2017-February/112442.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-05 Thread Chris MacNaughton
I'm looking forward to it!

On Thu, May 4, 2017 at 5:57 PM Andrew Mcleod 
wrote:

> I'll be there too! :)
>
>
> Andrew
>
> On Thu, May 4, 2017 at 5:34 PM, Alex Kavanagh  > wrote:
>
>> I will be there too.  Looking forward to catching up with existing and
>> new people.
>>
>> Cheers
>> Alex.
>>
>> On Tue, May 2, 2017 at 11:36 AM, James Page 
>> wrote:
>>
>>> Hi All
>>>
>>> The OpenStack summit is nearly upon us and for this summit we're running
>>> a project onboarding session on Monday at 4.40pm in MR-105 (see [0] for
>>> full details) for anyone who wants to get started either using the
>>> OpenStack Charms or contributing to the development of the Charms,
>>>
>>> The majority of the core development team will be present so its a great
>>> opportunity to learn more about our project from a use and development
>>> perspective!
>>>
>>> I've created an etherpad at [1] so if you're intending on coming along,
>>> please put your name down with some details on what you would like to get
>>> out of the session.
>>>
>>> Cheers
>>>
>>> James
>>>
>>> [0] http://tiny.cc/onhwky
>>> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Alex Kavanagh - Software Engineer
>> Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Bogdan Dobrelya
On 05.05.2017 2:09, Zane Bitter wrote:
> On 04/05/17 10:14, Thierry Carrez wrote:
>> We started with no stable branches, we were just producing releases and
>> ensuring that updates vaguely worked from N-1 to N. There were a lot of
>> distributions, and they all maintained their own stable branches,
>> handling backport of critical fixes. That is a pretty classic upstream /
>> downstream model.
>>
>> Some of us (including me) spotted the obvious duplication of effort
>> there, and encouraged distributions to share that stable branch
>> maintenance work rather than duplicate it. Here the stable branches were
>> born, mostly through a collaboration between Red Hat developers and
>> Canonical developers. All was well. Nobody was saying LTS back then
>> because OpenStack was barely usable so nobody wanted to stay on any
>> given version for too long.
> 
> Heh, if you go back _that_ far then upgrades between versions basically
> weren't feasible, so everybody stayed on a given version for too long.
> It's true that nobody *wanted* to though :D

I may be wrong, but I have a strong perception that even when major
upgrades run smoothly and flawlessly, operators tend to operate legacy
enterprise software for a long, long, long time. It would be a utopia to
expect any changes here, IMO. Unless the major shift comes to
enterprises in the very software delivery paradigm (infrastructure as
code, blue/green deployments done by a CD pipeline promoting stable
changes from trunk commits, unicorns all around), which is even more of
a utopia though.

>> Maintaining stable branches has a cost. Keeping the infrastructure that
>> ensures that stable branches are actually working is a complex endeavor
>> that requires people to constantly pay attention. As time passed, we saw
>> the involvement of distro packagers become more limited. We therefore
>> limited the number of stable branches (and the length of time we
>> maintained them) to match the staffing of that team.
> 
> I wonder if this is one that needs revisiting. There was certainly a
> time when closing a branch came with a strong sense of relief that you
> could stop nursing the gate. I personally haven't felt that way in a

This. What we really need, IMO, is that sense of relief, but inverted:
*operators* in the enterprise world should feel it and start nursing
their 3rd party CI gates *upstream* once a stable branch is created! And
hopefully never closed, for as long as they care about it. That sounds
fair for open source software. As Jay noted, one shall get out what one
has put in, which is a direct function of how much one cares and gives away.

So perhaps there is a (naive?) option #3: Do not support or nurse gates
for stable branches upstream. Instead, only create and close them and
attach 3rd party gating, if asked by contributors willing to support and
nurse their gates.

> couple of years, thanks to a lot of *very* hard work done by the folks
> looking after the gate to systematically solve a lot of those recurring
> issues (e.g. by introducing upper constraints). We're still assuming
> that stable branches are expensive, but what if they aren't any more?
> 
>> Fast-forward to
>> today: the stable team is mostly one person, who is now out of his job
>> and seeking employment.
>>
>> In parallel, OpenStack became more stable, so the demand for longer-term
>> maintenance is stronger. People still expect "upstream" to provide it,
>> not realizing upstream is made of people employed by various
>> organizations, and that apparently their interest in funding work in
>> that area is pretty dead.
>>
>> I agree that our current stable branch model is inappropriate:
>> maintaining stable branches for one year only is a bit useless. But I
>> only see two outcomes:
>>
>> 1/ The OpenStack community still thinks there is a lot of value in doing
>> this work upstream, in which case organizations should invest resources
>> in making that happen (starting with giving the Stable branch
>> maintenance PTL a job), and then, yes, we should definitely consider
>> things like LTS or longer periods of support for stable branches, to
>> match the evolving usage of OpenStack.
> 
> Speaking as a downstream maintainer, it sucks that backports I'm still
> doing to, say, Liberty don't benefit anybody but Red Hat customers,
> because there's nowhere upstream that I can share them. I want everyone
> in the community to benefit. Even if I could only upload patches to
> Gerrit and not merge them, that would at least be something.
> 
> (In a related bugbear, why must we delete the branch at EOL? This is
> pure evil for consumers of the code. It breaks existing git checkouts
> and thousands of web links in bug reports, review comments, IRC logs...)
> 
>> 2/ The OpenStack community thinks this is better handled downstream, and
>> we should just get rid of them completely. This is a valid approach, and
>> a lot of other open source communities just do that.
> 
> Maybe we need a 5th 'Open', because to me the idea that 

Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-05 Thread Luigi Toscano
On Thursday, 4 May 2017 20:11:14 CEST Emilien Macchi wrote:
> On Thu, May 4, 2017 at 9:41 AM, Dan Prince  wrote:

> > I like the idea of getting pingtest out of tripleo.sh as more of a
> > stand alone tool. I would support an effort that re-implemented it...
> > and using tempest-lib would be totally fine. And as you point out one
> > could even combine these tests with a more common "Tempest" run that
> > incorporates the scenarios, etc.
> 
> I don't understand why we would re-implement the pingtest in a tempest
> plugin. Could you please tell us what is the technical difference between
> what does this scenario :
> https://github.com/openstack/tempest/blob/master/tempest/scenario/test_volume_boot_pattern.py
> 
> And this pingtest:
> https://github.com/openstack/tripleo-heat-templates/blob/master/ci/pingtests/tenantvm_floatingip.yaml
> 
> They both create a Cinder volume, snapshot it in Glance, and spawn
> a Nova server from the volume.
> 
> What does one do that the other doesn't?

I assumed that you want to exercise that scenario (also) through Heat. If it 
is not the case, totally fine, a test like the one that you pointed out works.
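
For reference, the flow both tests exercise is roughly the following;
a minimal sketch using the shade library, with the cloud, image and
flavor names as assumptions:

    # Rough sketch of the volume-boot-pattern flow shared by both
    # tests: create a volume from an image, boot a server from that
    # volume, check it comes up, clean up. Names are assumed.
    import shade

    cloud = shade.openstack_cloud(cloud='devstack')

    volume = cloud.create_volume(size=1,
                                 image='cirros-0.3.5-x86_64-disk',
                                 wait=True)
    flavor = cloud.get_flavor('m1.tiny')
    server = cloud.create_server(name='pingtest-vm', flavor=flavor,
                                 boot_volume=volume['id'], wait=True)
    print(server['status'])  # expect ACTIVE

    cloud.delete_server(server['id'], wait=True)
    cloud.delete_volume(volume['id'], wait=True)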


> 
> > To me the message is clear that we DO NOT want to consume the normal
> > Tempest scenarios in TripleO upstream CI at this point. Sure there is
> > overlap there, but the focus of those tests is just plain different...
> 
> I haven't seen strong pushback in this thread except you.
> I'm against overlap in general and this one is pretty obvious. Why
> would we maintain a TripleO-specific Tempest scenario when existing
> ones would work for us? Please give me a technical reason why the
> existing scenarios are not good enough.

If that scenario is fine, sure. If you want (also) a Heat-based scenario, the
Tempest plugin would contain just that.

> 
> > So ++ for the idea of experimenting with the use of tempest.lib. But
> > stay away from the idea of using Tempest smoke tests and the like for
> > TripleO I think ATM.
> > 
> > It's also worth noting there is some risk when maintaining your own in-
> > tree Tempest tests [1]. If I understood that thread correctly that
> > breakage wouldn't have occurred if the stable branch tests were gating
> > Tempest proper... which is a very hard thing to do if we have our own
> > in-tree stuff. So there is a cost to doing what you suggest here, but
> > probably one that we'd be willing to accept.
> 
> I'm not sure we have the resources to write and maintain our own
> in-tree tempest plugin, tbh.

Just a technical note here: apart from the one-time initial setup, this
hypothetical plugin would basically feed the existing Heat templates through
a Heat Tempest plugin, so it would not require a lot of maintenance.
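
Purely as an illustration (this is not an existing plugin), a minimal sketch of
such a test could look like the following. The class name, the template path
and the environment-based credential handling are assumptions made for this
example; a real plugin would use tempest's own configuration and credential
providers, and the template's parameters would have to be satisfiable in the
target cloud:

import os
import time

from heatclient import client as heat_client
from keystoneauth1.identity import v3
from keystoneauth1 import session
from tempest.lib import base


class TenantVmPingTest(base.BaseTestCase):

    # Assumed path to the existing pingtest template in a
    # tripleo-heat-templates checkout.
    TEMPLATE = 'ci/pingtests/tenantvm_floatingip.yaml'

    def _heat_client(self):
        # Build a Heat client from the usual OS_* environment
        # variables, purely for brevity.
        auth = v3.Password(
            auth_url=os.environ['OS_AUTH_URL'],
            username=os.environ['OS_USERNAME'],
            password=os.environ['OS_PASSWORD'],
            project_name=os.environ['OS_PROJECT_NAME'],
            user_domain_name='Default',
            project_domain_name='Default')
        return heat_client.Client('1', session=session.Session(auth=auth))

    def test_tenantvm_floatingip(self):
        heat = self._heat_client()
        with open(self.TEMPLATE) as f:
            template = f.read()
        # Feed the unmodified template to Heat, exactly as pingtest does.
        created = heat.stacks.create(stack_name='tempest-pingtest',
                                     template=template)
        stack_id = created['stack']['id']
        self.addCleanup(heat.stacks.delete, stack_id)
        # Poll until the stack reaches a terminal state.
        status = 'CREATE_IN_PROGRESS'
        while status == 'CREATE_IN_PROGRESS':
            time.sleep(10)
            status = heat.stacks.get(stack_id).stack_status
        self.assertEqual('CREATE_COMPLETE', status)

The only moving part is the template itself, which is why the ongoing
maintenance cost should stay low.
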
You could argue that a Heat scenario test should go to a Heat Tempest plugin, 
but there is another discussion going on right now which shows that this may 
be complicated on the Heat side (see "[qa][heat][murano][daisycloud] Removing 
Heat support from Tempest").

Again, if the existing non-Heat scenario is fine, you are already covered.

Ciao
-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-05 Thread Thierry Carrez
Sean McGinnis wrote:
> [...]
> But part of my concern with getting rid of the meeting is that I do find it
> valuable. The arguments against having it are some of the same I've heard for
> our in-person events. It's hard for some to travel to the PTG. There's a lot
> of active discussion at the PTG that is definitely a challenge for non-native
> speakers to keep up with. But I think we all recognize what value having
> events like the PTG provides. Or the Summit/Design Summit/Forum/Midcycle/
> pick-your-favorite.

It's a great point. We definitely make faster progress on some reviews
by committing to that one-hour weekly busy segment. I think the
difference with the PTG (or midcycles) is that the PTG is a much more
productive setting than the meeting, due to increased face-to-face
bandwidth combined with a flexible schedule. It's also an exceptional
once-per-cycle event, rather than how we conduct business day-to-day.
It's useful to get together and we are very productive when we do, but
that doesn't mean we should all move and live all the time in the same
house to get things done.

I think we have come to rely too much on the weekly meeting. For a lot
of us, it provides a convenient, weekly hour to do TC business, and a
helpful reminder of what things should be reviewed before it. It also allows
us to conveniently ignore TC business for the rest of the week.
Unfortunately, due to us living on a globe, it happens at an hour that
is a problem for some, and a no-go for others. So that convenience is
paid at the price of others' inconvenience or exclusion. Changing or
rotating the hour just creates more confusion, disruption and misery. So
I think we need to reduce our dependency on that meeting.

We don't have to stop doing meetings entirely. But I think that
day-to-day TC business should be conducted more on the ML and the
reviews, and that meetings should be exceptional. That can be achieved
by posting a weekly pulse email, and only organizing meetings when we
need the additional bandwidth (like if the review and ML threads are not
going anywhere). Then the meeting can be organized at the most
convenient time for the most critical stakeholders, rather than at the
weekly set time that provides the least overall inconvenience. If we need
a meeting to directly discuss a new project team proposed by people
based in Beijing, we should not hold that meeting at 4am Beijing time,
and that proposal should be the meeting's only topic.

Cheers!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-05 Thread Anna Taraday
Rossella,
It has been a pleasure to know you, and your contribution to Neutron is
remarkable :) I wish you all the best!

On Fri, May 5, 2017 at 6:28 AM Edgar Magana 
wrote:

> What??? Are you OK?
> Your email surprised me.
>
> Sent from my iPhone
>
> > On May 4, 2017, at 6:55 AM, Rossella Sblendido 
> wrote:
> >
> > Hi all,
> >
> > I've moved to a new position recently and despite my best intentions I
> > was not able to devote to Neutron as much time and energy as I wanted.
> > It's time for me to move on and to leave room for new core reviewers.
> >
> > It's been a great experience working with you all, I learned a lot both
> > on the technical and on the human side.
> > I won't disappear, you will see me around in IRC, etc, don't hesitate to
> > contact me if you have any question or would like my feedback on
> something.
> >
> > ciao,
> >
> > Rossella
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Regards,
Ann Taraday
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Meeting minutes for IRC meeting 0800UTC May 5 2017

2017-05-05 Thread hu.zhijiang
Meeting ended Fri May  5 08:58:39 2017 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4) 

Minutes:
http://eavesdrop.openstack.org/meetings/daisycloud/2017/daisycloud.2017-05-05-07.59.html

Minutes (text): 
http://eavesdrop.openstack.org/meetings/daisycloud/2017/daisycloud.2017-05-05-07.59.txt

Log:
http://eavesdrop.openstack.org/meetings/daisycloud/2017/daisycloud.2017-05-05-07.59.log.html

B. R.,

Zhijiang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-16 and R-15, May 8-19

2017-05-05 Thread Thierry Carrez
Welcome to our regular release countdown email!

Development Focus
-----------------

Teams should be focused on Pike feature development and completion of
release goals. Team members attending the Forum at the Boston Summit
should be focused on requirements gathering and collecting feedback from
other parts of our community, to be able to bring it back to the rest of
their team.

Actions
-------

Some projects have not done any Ocata stable point release yet, and
should certainly consider doing so (assuming they have unreleased bugfix
backports landed on their stable/ocata branch):

aodh, barbican, congress, designate, freezer, glance, keystone, manila,
mistral, sahara, searchlight, tricircle, trove, zaqar

Some projects following an intermediary-releases model haven't done any
release in the Pike cycle yet, and should probably start considering
doing so. Remember that this is our "release early, release often" model:

aodh, bifrost, ceilometer, cloudkitty[-dashboard], ironic-python-agent,
karbor[-dashboard], magnum[-ui], murano-agent, panko, senlin-dashboard,
solum[-dashboard], tacker[-dashboard], vitrage-dashboard

This is also valid for libraries:

django-openstack-auth, glance-store, instack, networking-hyperv, os-vif,
pycadf, python-barbicanclient, python-ceilometerclient,
python-cloudkittyclient, python-congressclient, python-designateclient,
python-glanceclient, python-karborclient, python-keystoneclient,
python-magnumclient, python-muranoclient, python-searchlightclient,
python-swiftclient, python-tackerclient, python-vitrageclient,
requestsexceptions

Finally, some independent projects have not released anything in
2017 yet, and should also consider publishing a refresh:

solum, bandit, syntribos


Upcoming Deadlines & Dates
--------------------------

Forum at OpenStack Summit in Boston: May 8-11
Pike-2 milestone: June 8

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-05 Thread Julien Danjou
On Thu, May 04 2017, Flavio Percoco wrote:

> Sending this out to kick off a broader discussion on these topics. Thoughts?
> Opinions? Objections?

IRC meetings are a barrier to entry for many people because they make it
complicated to contribute, as you described (and as I did a while back¹).

So I'm completely in favor of dropping them as one of the main mediums of
conversation – that does not mean IRC has to be banned. :)

¹  https://julien.danjou.info/blog/2016/foss-projects-management-bad-practice

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Deep dive Thursday May 4th 1400 UTC on deployed-server

2017-05-05 Thread Marios Andreou
On Thu, May 4, 2017 at 8:58 PM, James Slagle  wrote:

> On Wed, May 3, 2017 at 8:34 AM, James Slagle 
> wrote:
> > I saw there was no deep dive scheduled for tomorrow, so I decided I'd
> > go ahead and plan one.
> >
> > It will be recorded if you can't make it on the short notice.
> >
> > I plan to cover the "deployed-server" feature in TripleO. We have been
> > using this feature since Newton to drive our multinode CI. I'll go
> > over how the multinode CI uses this feature to test TripleO on
> > pre-provisioned nodes.
> >
> > I'll also discuss the improvements that were done in Ocata to make
> > this feature ready for end user consumption. Finally, I'll cover
> > what's being done in Pike around using this feature more fully for an
> > end to end "split-stack".
> >
> > Thursday May 4th at 1400 UTC at https://bluejeans.com/176756457/
> >
> > You don't want to miss it! (or maybe you do). Go Owls!
> >
> > --
> > -- James Slagle
> > --
>
> Hi, Here is the recording link for this deep dive:
> https://www.youtube.com/watch?v=s8Hm4n9IjYg
>
>
Sorry to miss this, and thanks for the link - fwiw I had a clash with
another call at that time :/ (& same for Gabriele's quickstart deep dive
last week)

> Also, here is an etherpad of links I used during the deep dive:
> https://etherpad.openstack.org/p/tripleo-deep-dive-deployed-server
>
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev