Re: [openstack-dev] [User-committee] Boston Forum - Formal Submission Now Open!

2017-03-22 Thread Eoghan Glynn
Thanks for putting this together!

But one feature gap is some means to tag topic submissions, e.g.
tagging the project-specific topics by individual project relevance.
That could be a basis for grouping topics, to allow folks to better
manage their time during the Forum.

(e.g. if someone was mostly interested in say networking issues, they
could plan to attend all the neutron- and kuryr-tagged topics more
easily if those slots were all scheduled in a near-contiguous block
with minimal conflicts)

On Mon, Mar 20, 2017 at 9:49 PM, Emilien Macchi  wrote:
> +openstack-dev mailing-list.
>
> On Mon, Mar 20, 2017 at 3:55 PM, Melvin Hillsman  wrote:
>> Hey everyone!
>>
>> We have made it to the next stage of the topic selection process for the
>> Forum in Boston.
>>
>> Starting today, our submission tool is open for you to submit abstracts for
>> the most popular sessions that came out of your brainstorming. Please note
>> that the etherpads are not being pulled into the submission tool and
>> discussion around which sessions to submit are encouraged.
>>
>> We are asking all session leaders to submit their abstracts at:
>>
>> http://forumtopics.openstack.org/
>>
>> before 11:59PM UTC on Sunday April 2nd!
>>
>> We are looking for a good mix of project-specific, cross-project or
>> strategic/whole-of-community discussions, and sessions that emphasize
>> collaboration between users and developers are most welcome!
>>
>> We assume that anything submitted to the system has achieved a good amount
>> of discussion and consensus that it is a worthwhile topic. After submissions
>> close, a team of representatives from the User Committee, the Technical
>> Committee, and Foundation staff will take the sessions proposed by the
>> community and fill out the schedule.
>>
>> You can expect the draft schedule to be released on April 10th.
>>
>> Further details about the Forum can be found at:
>> https://wiki.openstack.org/wiki/Forum
>>
>> Regards,
>>
>> OpenStack User Committee
>>
>>
>> ___
>> User-committee mailing list
>> user-commit...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>>
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Eoghan Glynn
> > We are close to the first milestone in Pike, right ? We also have
> > priorities for Placement that we discussed at the PTG and we never
> > discussed about how to cut placement during the PTG.
> >
> > Also, we haven't discussed yet with operators about how they would like
> > to see Placement being cut. At least, we should wait for the Forum about
> > that.
> >
> > For the moment, only operators using Ocata are using the placement API
> > and we know that most of them had problems when using it. Running for
> > cutting Placement in Queens would then mean that they would only have
> > one stable cycle after Ocata for using it.
> > Also, discussing at the above would then mean that we could punt other
> > disucssions. For example, I'd prefer to discuss how we could fix the
> > main problem we have with the scheduler about scheduler claims *before*
> > trying to think on how to cut Placement.
>
> It's definitely good to figure out what challenges people were having in
> rolling things out and document them, to figure out if they've been
> addressed or not. One key thing seemed to be not understanding that
> services need to all be registered in the catalog before services beyond
> keystone are launched. There is also probably a keystoneauth1 fix for
> this make it a softer fail.
>
> The cut over can be pretty seamless. Yes, upgrade scenarios need to be
> looked at. But that's honestly not much different from deprecating
> config options or making new aliases. It should be much less user
> noticable than the newly required cells v2 support.
>
> The real question to ask, now that there is a well defined external
> interface, will evolution of the Placement service stack, and addressing
> bugs and shortcomings related to it's usage, work better as a dedicated
> core team, or inside of Nova. My gut says Queens is the right time to
> make that split, and to start planning for it now.

>From a downstream perspective, I'd prefer to see a concentration on
deriving *user-visible* benefits from placement before incurring more
churn with an extraction (given the proximity to the churn on
deployment tooling from the scheduler decision-making cutover to
placement at the end of ocata).

Just my $0.02 ...

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-20 Thread Eoghan Glynn


> >> What are these issues? My original message was to highlight one
> >> particular deployment type which is completely independent of
> >> how things get packaged in the traditional sense of the word
> >> (rpms/deb/tar.gz).  Perhaps it's getting lost in terminology,
> >> but packaging the software in one way and how it's run can be two
> >> separate issues.  So what I'd like to know is how is that
> >> impacted by whatever ordering is necessary, and if there's anyway
> >> way not to explicitly have special cases that need to be handled
> >> by the end user when applying updates.  It seems like we all want
> >> similar things. I would like not to have to do anything different
> >> from the install for upgrade. Why can't apply configs, restart
> >> all services?  Or can I?  I seem to be getting mixed messages...
> >> 
> >> 
> > 
> > Sorry for being unclear on the issue. As Jay pointed out, if
> > nova-scheduler is upgraded before the placement service, the
> > nova-scheduler service will continue to start and take requests.
> > The problem is if the filter scheduler code is requesting a
> > microversion in the placement API which isn't available yet, in
> > particular this 1.4 microversion, then scheduling requests will
> > fail which to the end user means NoValidHost (the same as if we
> > don't have any compute nodes yet, or available).
> > 
> > So as Jay also pointed out, if placement and n-sch are upgraded
> > and restarted at the same time, the window for hitting this is
> > minimal. If deployment tooling is written to make sure to restart
> > the placement service *before* nova-scheduler, then there should be
> > no window for issues.
> > 
> 
> 
> Thanks all for providing insights. I'm trying to see a consensus here,
> and while I understand the concerns from Alex about the upgrade, I
> think it's okay for a deployer having a "controller" node (disclaimer:
> Nova doesn't have this concept, rather a list of components that are
> not compute nodes) to have a very quick downtime (I mean getting
> NoValidHosts if an user asks for an instance while the "controller" is
> upgraded).

Do we also need to be concerned about the placement API "warm-up" time?

i.e. if a placement-less newton deployment is upgraded to placement-ful
ocata, then would there surely be a short period during which placement
is able to respond to the incoming queries from the scheduler, but only
with incomplete information since all the computes haven't yet triggered
their first reporting cycle?

In that case, it wouldn't necessarily lead to a NoValidHost failure on a
instance boot request, but rather a potentially faulty placement decision,
being based on incomplete information. I mean "faulty" there in the sense
of not strictly following the configured scheduling strategy.

Is that a concern, or an acceptable short degradation of service?

Cheers,
Eoghan

> To be honest, Nova has never supported (yet) having rolling upgrades
> for services that are not computes. If you look at the upgrade devref,
> we ask for a maintenance window [1]. During that maintenance window,
> we say it's safer to upgrade "nova-conductor first and nova-api last"
> for coherence reasons but since that's during the maintenance window,
> we're not supposed to have user requests coming in.
> 
> So, to circle back with the original problem, I think having the
> nova-scheduler upgraded *before* placement is not a problem. If
> deployers don't want to implement an upgrade scenario where placement
> is upgraded before scheduler, that's fine. No need of extra work for
> deployers. That's just that *if* you implement that path, the
> scheduler could still get requests.
> 
> -Sylvain
> 
> [1]
> http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process
> 
> 
> > --
> > 
> > Thanks,
> > 
> > Matt
> > 
> > __
> >
> > 
> OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Eoghan Glynn

> >> Sylvain and I were talking about how he's going to work placement
> >> microversion requests into his filter scheduler patch [1]. He needs to
> >> make
> >> requests to the placement API with microversion 1.4 [2] or later for
> >> resource provider filtering on specific resource classes like VCPU and
> >> MEMORY_MB.
> >>
> >> The question was what happens if microversion 1.4 isn't available in the
> >> placement API, i.e. the nova-scheduler is running Ocata code now but the
> >> placement service is running Newton still.
> >>
> >> Our rolling upgrades doc [3] says:
> >>
> >> "It is safest to start nova-conductor first and nova-api last."
> >>
> >> But since placement is bundled with n-api that would cause issues since
> >> n-sch now depends on the n-api code.
> >>
> >> If you package the placement service separately from the nova-api service
> >> then this is probably not an issue. You can still roll out n-api last and
> >> restart it last (for control services), and just make sure that placement
> >> is
> >> upgraded before nova-scheduler (we need to be clear about that in [3]).
> >>
> >> But do we have any other issues if they are not packaged separately? Is it
> >> possible to install the new code, but still only restart the placement
> >> service before nova-api? I believe it is, but want to ask this out loud.
> >>
> >
> > Forgive me as I haven't looked really in depth, but if the api and
> > placement api are both collocated in the same apache instance this is
> > not necessarily the simplest thing to achieve.  While, yes it could be
> > achieved it will require more manual intervention of custom upgrade
> > scripts. To me this is not a good idea. My personal preference (now
> > having dealt with multiple N->O nova related acrobatics) is that these
> > types of requirements not be made.  We've already run into these
> > assumptions for new installs as well specifically in this newer code.
> > Why can't we turn all the services on and they properly enter a wait
> > state until such conditions are satisfied?
> 
> Simply put, because it adds a bunch of conditional, temporary code to
> the Nova codebase as a replacement for well-documented upgrade steps.
> 
> Can we do it? Yes. Is it kind of a pain in the ass? Yeah, mostly because
> of the testing requirements.
> 
> But meh, I can whip up an amendment to Sylvain's patch that would add
> the self-healing/fallback to legacy behaviour if this is what the
> operator community insists on.

I think Alex is suggesting something different than falling back to the
legacy behaviour. The ocata scheduler would still roll forward to basing
its node selection decisions on data provided by the placement API, but
would be tolerant of the 3 different transient cases that are problematic:

 1. placement API momentarily not running yet

 2. placement API already running, but still on the newton micro-version

 3. placement API already running ocata code, but not yet warmed up

IIUC Alex is suggesting that the nova services themselves are tolerant
of those transient conditions during the upgrade, rather than requiring
multiple upgrade toolings to independently force the new ordering
constraint.

On my superficial understanding, case #3 would require the a freshly
deployed ocata placement (i.e. when upgraded from a placement-less
newton deployment) to detect that it's being run for the first time
(i.e. no providers reported yet) and return say 503s to the scheduler
queries until enough time has passed for all computes to have reported
in their inventories & allocations.

Cheers,
Eoghan 

 
> I think Matt generally has been in the "push forward" camp because we're
> tired of delaying improvements to Nova because of some terror that we
> may cause some deployer somewhere to restart their controller services
> in a particular order in order to minimize any downtime of the control
> plane.
> 
> For the distributed compute nodes, I totally understand the need to
> tolerate long rolling upgrade windows. For controller nodes/services,
> what we're talking about here is adding code into Nova scheduler to deal
> with what in 99% of cases will be something that isn't even noticed
> because the upgrade tooling will be restarting all these nodes at almost
> the same time and the momentary failures that might be logged on the
> scheduler (400s returned from the placement API due to using an unknown
> parameter in a GET request) will only exist for a second or two as the
> upgrade completes.
> 
> So, yeah, a lot of work and testing for very little real-world benefit,
> which is why a number of us just want to more forward...
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


Re: [openstack-dev] [tripleo] PTG space request

2016-10-14 Thread Eoghan Glynn


- Original Message -
> 
> > > > >>> I would like to request for some space dedicated to TripleO project
> > > > >>> for the first OpenStack PTG.
> > > > >>>
> > > > >>> https://www.openstack.org/ptg/
> > > > >>>
> > > > >>> The event will happen in February 2017 during the next PTG in
> > > > >>> Atlanta.
> > > > >>> Any feedback is welcome,
> > > > >>
> > > > >> Just a quick note: as you can imagine we have finite space at the
> > > > >> event,
> > > > >> and the OpenStack Foundation wants to give priority to teams which
> > > > >> have
> > > > >> a diverse affiliation (or which are not tagged "single-vendor").
> > > > >> Depending on which teams decide to take advantage of the event and
> > > > >> which
> > > > >> don't, we may or may not be able to offer space to single-vendor
> > > > >> projects -- and TripleO is currently tagged single-vendor.
> > > > > 
> > > > > Thanks for the feedback Thierry, I can understand the need to somehow
> > > > > keep
> > > > > PTG space requirements bounded, but I would agree with Emilien and
> > > > > Eoghan
> > > > > that perhaps the single-vendor tag is too coarse a metric with which
> > > > > to
> > > > > judge all projects (it really doesn't capture cross-project
> > > > > collaboration
> > > > > at all IMO).
> > > > > 
> > > > > One of the main goals of TripleO is using OpenStack projects where
> > > > > possible, and as such we have a very broad range of cross project
> > > > > collaboration happening, and clearly the PTG is the ideal forum for
> > > > > such
> > > > > discussions.
> > > > 
> > > > Indeed, I totally agree.
> > > > 
> > > > While at this stage we can't *guarantee* space for TripleO, I really
> > > > hope that we'll be able to provide space. One interesting factor is
> > > > that
> > > > there should be less tension for space in the "horizontal" part of the
> > > > week than in the "vertical" part of the week**. TripleO's cross-cutting
> > > > nature makes it a good fit for the "horizontal" segment, so I'm hopeful
> > > > we can make that work. We should know very soon, once we collect the
> > > > results of the PTL survey. Stay tuned!
> > > > 
> > > > ** The PTG week is split between horizontal team meetings
> > > > (Monday/Tuesday) and vertical team meetings (Wednesday-Friday). As we
> > > > have more vertical teams than horizontal teams (and the space is the
> > > > same size), we /should/ have more flexibility to add horizontal stuff.
> > > 
> > > This split in the PTG week raises an obvious question ...
> > > 
> > > Do we expect the horizontal folks to skip town by mid-week, or to hang
> > > around without a home-room in order to talk to the vertical folks who
> > > put the "project" in "cross-project"?
> > > 
> > > Conversely, do we expect the vertical teams to turn up on the Monday in
> > > order to collaborate with the horizontal teams? (similarly without a
> > > project room for the first 2 days?)
> > > 
> > > Cheers,
> > > Eoghan
> > > 
> > 
> > We expect the "horizontal folks" and "vertical teams" to be the same
> > people, focusing on different things on different days of the week. We
> > *hope* to enable more contributions to horizontal teams as a result.
> 
> OK, that assumption could be at least partially true, especially for say
> horizontal efforts such as release-mgmt, VMT, upgrades, API-WG & such,
> and in general the type of concerns that are generally aired on the cross-
> project track at summit.
> 
> However it misses the fact that TripleO, as well as having many touch-points
> with other projects, also has its own vertical personality as an individual
> project, and so will also need space to have discussions about issues more
> internal to the project.
> 
> For such a project, assigning it to the horizontal bucket (Wed-Fri) feels

Now I've gone and confused myself ... I meant the *vertical bucket (Wed-FRi).

> more appropriate. I'd expect joint sessions between projects (e.g.TripleO/
> Ironic or Nova/Cinder) would still be help during the "vertical" portion
> of the week, especially if there's a trend towards some project-specific
> contributors limiting their attendance to those 3 days.
> 
> Cheers,
> Eoghan
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG space request

2016-10-14 Thread Eoghan Glynn

> > > >>> I would like to request for some space dedicated to TripleO project
> > > >>> for the first OpenStack PTG.
> > > >>>
> > > >>> https://www.openstack.org/ptg/
> > > >>>
> > > >>> The event will happen in February 2017 during the next PTG in
> > > >>> Atlanta.
> > > >>> Any feedback is welcome,
> > > >>
> > > >> Just a quick note: as you can imagine we have finite space at the
> > > >> event,
> > > >> and the OpenStack Foundation wants to give priority to teams which
> > > >> have
> > > >> a diverse affiliation (or which are not tagged "single-vendor").
> > > >> Depending on which teams decide to take advantage of the event and
> > > >> which
> > > >> don't, we may or may not be able to offer space to single-vendor
> > > >> projects -- and TripleO is currently tagged single-vendor.
> > > > 
> > > > Thanks for the feedback Thierry, I can understand the need to somehow
> > > > keep
> > > > PTG space requirements bounded, but I would agree with Emilien and
> > > > Eoghan
> > > > that perhaps the single-vendor tag is too coarse a metric with which to
> > > > judge all projects (it really doesn't capture cross-project
> > > > collaboration
> > > > at all IMO).
> > > > 
> > > > One of the main goals of TripleO is using OpenStack projects where
> > > > possible, and as such we have a very broad range of cross project
> > > > collaboration happening, and clearly the PTG is the ideal forum for
> > > > such
> > > > discussions.
> > > 
> > > Indeed, I totally agree.
> > > 
> > > While at this stage we can't *guarantee* space for TripleO, I really
> > > hope that we'll be able to provide space. One interesting factor is that
> > > there should be less tension for space in the "horizontal" part of the
> > > week than in the "vertical" part of the week**. TripleO's cross-cutting
> > > nature makes it a good fit for the "horizontal" segment, so I'm hopeful
> > > we can make that work. We should know very soon, once we collect the
> > > results of the PTL survey. Stay tuned!
> > > 
> > > ** The PTG week is split between horizontal team meetings
> > > (Monday/Tuesday) and vertical team meetings (Wednesday-Friday). As we
> > > have more vertical teams than horizontal teams (and the space is the
> > > same size), we /should/ have more flexibility to add horizontal stuff.
> > 
> > This split in the PTG week raises an obvious question ...
> > 
> > Do we expect the horizontal folks to skip town by mid-week, or to hang
> > around without a home-room in order to talk to the vertical folks who
> > put the "project" in "cross-project"?
> > 
> > Conversely, do we expect the vertical teams to turn up on the Monday in
> > order to collaborate with the horizontal teams? (similarly without a
> > project room for the first 2 days?)
> > 
> > Cheers,
> > Eoghan
> > 
> 
> We expect the "horizontal folks" and "vertical teams" to be the same
> people, focusing on different things on different days of the week. We
> *hope* to enable more contributions to horizontal teams as a result.

OK, that assumption could be at least partially true, especially for say
horizontal efforts such as release-mgmt, VMT, upgrades, API-WG & such,
and in general the type of concerns that are generally aired on the cross-
project track at summit.

However it misses the fact that TripleO, as well as having many touch-points
with other projects, also has its own vertical personality as an individual
project, and so will also need space to have discussions about issues more
internal to the project.

For such a project, assigning it to the horizontal bucket (Wed-Fri) feels
more appropriate. I'd expect joint sessions between projects (e.g.TripleO/
Ironic or Nova/Cinder) would still be help during the "vertical" portion
of the week, especially if there's a trend towards some project-specific
contributors limiting their attendance to those 3 days.

Cheers,
Eoghan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-14 Thread Eoghan Glynn


> > > Excerpts from Jaesuk Ahn's message of 2016-10-12 15:08:24 +:
> > >> It can be cheap if you are in the US. However, for Asia folks, it is not
> > >> that cheap considering it is all overseas travel. In addition,
> > >> all-in-one
> > >> event like the current summit makes us much easier to get the travel
> > >> fund
> > >> from the company, since the company only need to send everyone (tech,
> > >> ops,
> > >> business, strategy) to one event. Even as an ops or developers, doing
> > >> presentation or a meeting with one or two important company can be very
> > >> good excuse to get the travel money.
> > >>
> > >
> > > This is definitely on the list of concerns I heard while the split was
> > > being discussed.
> > >
> > > I think the concern is valid, and we'll have to see how it affects
> > > attendance at PTG's and summits.
> > >
> > > However, I am not so sure the overseas cost is being accurately
> > > characterized. Of course, the complications are higher with immigration
> > > details, but ultimately hotels around international hub airports are
> > > extremely cheap, and flights tend to be quite a bit less expensive and
> > > more numerous to these locations. You'll find flights from Narita to
> > > LAX for < $500 where as you'd be hard pressed to find Narita to Boston
> > > for under $600, and they'll be less convenient, possibly requiring more
> > > hotel days.
> > 
> > The bit about hotels contradicts my whole experience. I've never seen
> > hotels in
> > big busy hubs cheaper than in less popular and crowded cities. Following
> > your
> > logic, hotels e.g. in Paris should be cheaper than ones in e.g. Prague,
> > which I
> > promise you is far from being the case :)
> > 
> 
> Sorry I communicated that horribly.
> 
> The hotels next to LAX, which are _ugly_ and _disgusting_ but perfectly
> suitable for a PTG, are much cheaper than say, the ones in DT LA near the
> convention center, or in Hollywood, or near Disneyland.
> 
> A better comparison than LAX might be Atlanta or Minneapolis, which
> are cities that aren't such common end-destinations, but have tons of
> flights in and out and generally affordable accommodations.
> 
> > >
> > > Also worth considering is how cheap the space is for the PTG
> > > vs. Summit. Without need for large expo halls, keynote speakers,
> > > catered lunch and cocktail hours, we can rent a smaller, less impressive
> > > space. That should mean either a cheaper ticket price (if there is one
> > > at all) or more sponsored travel to the PTG. Either one of those should
> > > help alleviate the concerns about travel budget.
> > 
> > For upstream developers ticker price was 0. Now it will be > 0, so for
> > companies
> > who send mostly developers, this is a clear budget increase.
> > 
> 
> The nominal price of the PTG is expected to be something like $25 or
> $50. This isn't to cover all the costs, but to ensure that people don't
> just sign up "just in case I'm in the area" or anything like that.

Well, I've heard this concern about no-shows multiple times on this and
other threads, and TBH it simply doesn't ring true to my ears.

Up to now, we've had a scenario where summit was effectively *free* to
contributors. Did we have hoards of people sign up and then not show up?

And even if we did, surely it's not beyond our collective intelligence
to simply account for that in the planning, based on historical trends.

Take a stab at it, say 20% no-shows or whatever rough rate we've seen for
past summits. Scale the accommodation accordingly for ATL. Iterate that
for the second PTG, based on the observed no-show rate at the first.

OTOH if the $100 in really intended to pay for the coffees and M, then
let's just be upfront and say so. But let's not pretend that 100 bucks is
cheaper than free.

Cheers,
Eoghan
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG space request

2016-10-13 Thread Eoghan Glynn


> >>> I would like to request for some space dedicated to TripleO project
> >>> for the first OpenStack PTG.
> >>>
> >>> https://www.openstack.org/ptg/
> >>>
> >>> The event will happen in February 2017 during the next PTG in Atlanta.
> >>> Any feedback is welcome,
> >>
> >> Just a quick note: as you can imagine we have finite space at the event,
> >> and the OpenStack Foundation wants to give priority to teams which have
> >> a diverse affiliation (or which are not tagged "single-vendor").
> >> Depending on which teams decide to take advantage of the event and which
> >> don't, we may or may not be able to offer space to single-vendor
> >> projects -- and TripleO is currently tagged single-vendor.
> > 
> > Thanks for the feedback Thierry, I can understand the need to somehow keep
> > PTG space requirements bounded, but I would agree with Emilien and Eoghan
> > that perhaps the single-vendor tag is too coarse a metric with which to
> > judge all projects (it really doesn't capture cross-project collaboration
> > at all IMO).
> > 
> > One of the main goals of TripleO is using OpenStack projects where
> > possible, and as such we have a very broad range of cross project
> > collaboration happening, and clearly the PTG is the ideal forum for such
> > discussions.
> 
> Indeed, I totally agree.
> 
> While at this stage we can't *guarantee* space for TripleO, I really
> hope that we'll be able to provide space. One interesting factor is that
> there should be less tension for space in the "horizontal" part of the
> week than in the "vertical" part of the week**. TripleO's cross-cutting
> nature makes it a good fit for the "horizontal" segment, so I'm hopeful
> we can make that work. We should know very soon, once we collect the
> results of the PTL survey. Stay tuned!
> 
> ** The PTG week is split between horizontal team meetings
> (Monday/Tuesday) and vertical team meetings (Wednesday-Friday). As we
> have more vertical teams than horizontal teams (and the space is the
> same size), we /should/ have more flexibility to add horizontal stuff.

This split in the PTG week raises an obvious question ...

Do we expect the horizontal folks to skip town by mid-week, or to hang
around without a home-room in order to talk to the vertical folks who
put the "project" in "cross-project"?

Conversely, do we expect the vertical teams to turn up on the Monday in
order to collaborate with the horizontal teams? (similarly without a
project room for the first 2 days?)

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG space request

2016-10-12 Thread Eoghan Glynn


> Emilien Macchi wrote:
> > I would like to request for some space dedicated to TripleO project
> > for the first OpenStack PTG.
> > 
> > https://www.openstack.org/ptg/
> > 
> > The event will happen in February 2017 during the next PTG in Atlanta.
> > Any feedback is welcome,
> 
> Just a quick note: as you can imagine we have finite space at the event,
> and the OpenStack Foundation wants to give priority to teams which have
> a diverse affiliation (or which are not tagged "single-vendor").
> Depending on which teams decide to take advantage of the event and which
> don't, we may or may not be able to offer space to single-vendor
> projects -- and TripleO is currently tagged single-vendor.
> 
> The rationale is, the more organizations are involved in a given project
> team, the more value there is to offer common meeting space to that team
> for them to sync on priorities and get stuff done.

One of the professed primary purposes of splitting off the PTG was to enable
cross-project collaboration, so as to avoid horizontally-oriented contributors
needing to attend multiple midcycle meetups.

Denying PTG space to a project that clearly requires an immense amount of
cross-project collaboration seems to run counter to those stated goals.

The need for cross-project collaboration seems to me orthogonal to the
diversity of corporate affiliation within any individual project, given
the potential diversity within the other projects they may need to
collaborate with.

So the criteria applied should concentrate less on individual project
diversity, and much more on the cross-cutting nature of that project's
concerns.

Cheers,
Eoghan

> If more than 90% of
> contributions / reviews / core reviewers come from a single
> organization, there is less coordination needs and less value in having
> all those people from a single org to travel to a distant place to have
> a team meeting. And as far as recruitment of new team members go (to
> increase that diversity), the OpenStack Summit will be a better venue to
> do that.
> 
> I hope we'll be able to accommodate you, though. And in all cases
> TripleO people are more than welcome to join the event to coordinate
> with other teams. It's just not 100% sure we'll be able to give you a
> dedicated room for multiple days. We should know better in a week or so,
> once we get a good idea of who plans to meet at the event and who doesn't.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Naming polls - and some issues

2016-07-15 Thread Eoghan Glynn


> (top posting on purpose)
> 
> I have re-started the Q poll and am slowly adding all of you fine folks
> to it. Let's keep our fingers crossed that it works this time.
> 
> I also removed Quay. Somehow my brain didn't process the "it would be
> like naming the S release "Street"" when reading the original names.
> Based on the naming critera, "Circular Quay" would be a great option for
> "Circular" - but sadly we already named the C release Cactus. It's
> possible this choice will make someone unhappy, and if it does, I'm
> certainly sorry. On the other hand, there are _so_ many awesome names
> possible in this list, I don't think we'll miss it.

Excellent, thanks Monty for fixing this ... agreed that the remaining
Q* choices are more than enough.

Cheers,
Eoghan 

> I'll fire out new emails for P once Q is up and going.
> 
> On 07/15/2016 11:02 AM, Jamie Lennox wrote:
> > Partially because its name is Circular Quay, so it would be like calling
> > the S release Street for  Street.
> > 
> > Having said that there are not that many of them and Sydney people know
> > what you mean when you are going to the Quay.
> > 
> > 
> > On 14 July 2016 at 21:35, Neil Jerram <n...@tigera.io
> > <mailto:n...@tigera.io>> wrote:
> > 
> >     Not sure what the problem would be with 'Quay' or 'Street' - they
> > both sound like good options to me.
> > 
> > 
> > On Thu, Jul 14, 2016 at 11:29 AM Eoghan Glynn <egl...@redhat.com
> > <mailto:egl...@redhat.com>> wrote:
> > 
> > 
> > 
> > > >> Hey all!
> > > >>
> > > >> The poll emails for the P and Q naming have started to go
> > out - and
> > > >> we're experiencing some difficulties. Not sure at the
> > moment what's
> > > >> going on ... but we'll keep working on the issues and get
> > ballots to
> > > >> everyone as soon as we can.
> > > >
> > > > You'll need to re-send at least some emails, because the
> > link I received
> > > > is wrong - the site just reports
> > > >
> > > >   "Your voter key is invalid. You should have received a
> > correct URL by
> > > >   email."
> > >
> > > Yup. That would be a key symptom of the problems. One of the
> > others is
> > > that I just uploaded 3000 of the emails to the Q poll and it
> > shows 0
> > > active voters.
> > >
> > > I think maybe it needs a nap...
> > 
> > Any chance we could remove "Quay" from the Q release naming poll
> > before
> > the links are fixed and the real voting starts?
> > 
> > Otherwise we risk looking a bit silly, since "Quay" for the Q
> > release
> > would be somewhat akin to choosing "Street" for the S release ;)
> > 
> > Cheers,
> > Eoghan
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > 
> > <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Naming polls - and some issues

2016-07-14 Thread Eoghan Glynn


> >> Hey all!
> >>
> >> The poll emails for the P and Q naming have started to go out - and
> >> we're experiencing some difficulties. Not sure at the moment what's
> >> going on ... but we'll keep working on the issues and get ballots to
> >> everyone as soon as we can.
> > 
> > You'll need to re-send at least some emails, because the link I received
> > is wrong - the site just reports
> > 
> >   "Your voter key is invalid. You should have received a correct URL by
> >   email."
> 
> Yup. That would be a key symptom of the problems. One of the others is
> that I just uploaded 3000 of the emails to the Q poll and it shows 0
> active voters.
> 
> I think maybe it needs a nap...

Any chance we could remove "Quay" from the Q release naming poll before
the links are fixed and the real voting starts?

Otherwise we risk looking a bit silly, since "Quay" for the Q release
would be somewhat akin to choosing "Street" for the S release ;)

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread Eoghan Glynn


> > Increase the commit requirements, but don't remove the summit pass
> > perk. You'll add a barrier for ATCs and see fewer of them at
> > summits.
> 
> The commit counts for the latest TC electorate show 27% had only one
> merged change in Gerrit during the qualifying one year period. It's
> worth keeping in mind that for recent summits only the current
> cycle's contributions were taken into account for free passes
> though, rather than a full year, so it's possibly more telling that
> 40% of the electorate had only 1 or 2 qualifying commits (the bare
> minimum to get free conference registration for Liberty, Mitaka or
> both).
> 
> A basic regression analysis of the owner totals for 3 through 50
> changes closely fits a power curve of 857.39*x^-1.239 and comes

Thanks Jeremy for doing the analysis, that model curve just begs
to be graphed :)  

/E

> within 3.3% of predicting how many of the electorate have only one
> change, but falls 15% short of predicting those with two. That said,
> the deviation only means an additional 94 more contributors who had
> one or two changes than the model predicts there should be, so I
> don't think there's a strong enough correlation with the limited
> data we have to say for sure that free summit passes result in a
> significant bloat in electorate size (certainly a lot less of one
> than I expected anyway).
> --
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread Eoghan Glynn


> >> However, the turnout continues to slide, dipping below 20% for
> >> the first time:
> >
> >Election | Electorate (delta %) | Votes | Turnout (delta %)
> >===
> >Oct '13  | 1106 | 342   | 30.92
> >Apr '14  | 1510  (+36.52)  | 448   | 29.69   (-4.05)
> >Oct '14  | 1893   (+25.35)  | 506   | 26.73   (-9.91)
> >Apr '15  | 2169   (+14.58)  | 548   | 25.27   (-5.48)
> >Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
> >Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)
> >
> >>
> >> This ongoing trend of a decreasing proportion of the electorate
> >> participating in TC elections is a concern.
> 
> One way to look at it is that every cycle (mostly due to the habit of
> giving summit passes to recent contributors) we have more and more
> one-patch contributors (more than 600 in Mitaka), and those usually are
> not really interested in voting... So the electorate number is a bit
> inflated, resulting in an apparent drop in turnout.
> 
> It would be interesting to run the same analysis but taking only >=3
> patch contributors as "expected voters" and see if the turnout still
> drops as much.
> 
> Long term I'd like to remove the summit pass perk (or no longer link it
> to "one commit"). It will likely result in a drop in contributors
> numbers (gasp), but a saner electorate.

I'd be -1 on removing this "perk" ... I prefer to think of it as
something that contributors "earn" by their efforts.

OTOH I'd be fine with moving the goalposts somewhat to require more
than a single commit. Increasing that to 3 or 5 commits would likely
still result in some contributors dividing that single patch into a
series of 3, but for others might not be worth the effort.

Another approach to consider would be to continue to offer the ATC
pass for a single commit, but to require a little more participation
in order to vote in TC/PTL elections (modulo Foundation bye-laws etc.)

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread Eoghan Glynn


> > > > Please join me in congratulating the 7 newly elected members of the TC.
> > > >
> > > > << REMOVE UNEEDED >
> > > > * Davanum Srinivas (dims)
> > > > * Flavio Percoco (flaper87)
> > > > * John Garbutt (johnthetubaguy)
> > > > * Matthew Treinish (mtreinish)
> > > > * Mike Perez (thingee)
> > > > * Morgan Fainberg (morgan)/(notmorgan)
> > > > * Thierry Carrez (ttx)
> > > >
> > > > Full results:
> > > > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a
> > > >
> > > > Thank you to all candidates who stood for election, having a good
> > group of
> > > > candidates helps engage the community in our democratic process.
> > > >
> > > > Thank you to all who voted and who encouraged others to vote. We need
> > to
> > > > ensure
> > > > your voice is heard.
> > > >
> > > > Thank you for another great round.
> > > >
> > > > Here's to Newton!
> > >
> > > Thanks Tony for efficiently running this election, congrats to
> > > the candidates who prevailed, and thanks to everyone who ran
> > > for putting themselves out there.
> > >
> > > It was the most open race since the pattern of TC 2.0 half-
> > > elections was established, which was great to see.
> > >
> > > However, the turnout continues to slide, dipping below 20% for
> > > the first time:
> > >
> > >  Election | Electorate (delta %) | Votes | Turnout (delta %)
> > >  ===
> > >  Oct '16  | 1106 | 342   | 30.92
> > >  Apr '14  | 1893   (+71.16)  | 506   | 26.73   (-13.56)
> > >  Apr '15  | 2169   (+14.58)  | 548   | 25.27   (-5.48)
> > >  Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
> > >  Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)
> >
> > Meh, I screwed up that table, not enough coffee yet today :)
> >
> > Should be:
> >
> >   Election | Electorate (delta %) | Votes | Turnout (delta %)
> >   ===
> >   Oct '13  | 1106 | 342   | 30.92
> >   Apr '14  | 1510   (+36.52)  | 448   | 29.69   (-4.05)
> >   Oct '14  | 1893   (+25.35)  | 506   | 26.73   (-9.91)
> >   Apr '15  | 2169   (+14.58)  | 548   | 25.27   (-5.48)
> >   Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
> >   Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)
> >
> 
> ​It would also be interesting to know how the "long tail" of OpenStack has
> evolved over time, as well.
> 
> https://twitter.com/tcarrez/status/710858829760598017
> 
> "​A long tail: ~2500 devs are involved in #OpenStack Mitaka, but less than
> 200 devs produce more than 50% of changes"
> 
> 652 contributors represents roughly 80% of the changes in Mitaka by
> eye-balling that graph.  That doesn't sound as bad.

Very true, though of course we've no way of knowing the intersection
between the ATCs responsible for 80% of the commits, and the voters
who turned out for the TC election. Intuition suggests the more engaged
a contributor is, the more likely she is to vote in TC elections. But
it would be great to have hard data to back that up.

I wonder if it would be possible to tag each vote with the approximate
range of commits that voter has on their record, without breaking
anonymity ... e.g. by encoding in each personalized voting URL the
bucket that voter falls into in rough terms (e.g. 1-5 commits, 10-50,
50-100 etc.)

I'd be less worried about the TC election participation rates declining
and lagging far behind the PTL elections if we'd more visibility on
what cohorts of voters are ignoring TC elections.

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread Eoghan Glynn


> > Please join me in congratulating the 7 newly elected members of the TC.
> > 
> > << REMOVE UNEEDED >
> > * Davanum Srinivas (dims)
> > * Flavio Percoco (flaper87)
> > * John Garbutt (johnthetubaguy)
> > * Matthew Treinish (mtreinish)
> > * Mike Perez (thingee)
> > * Morgan Fainberg (morgan)/(notmorgan)
> > * Thierry Carrez (ttx)
> > 
> > Full results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a
> > 
> > Thank you to all candidates who stood for election, having a good group of
> > candidates helps engage the community in our democratic process.
> > 
> > Thank you to all who voted and who encouraged others to vote. We need to
> > ensure
> > your voice is heard.
> > 
> > Thank you for another great round.
> > 
> > Here's to Newton!
> 
> Thanks Tony for efficiently running this election, congrats to
> the candidates who prevailed, and thanks to everyone who ran
> for putting themselves out there.
> 
> It was the most open race since the pattern of TC 2.0 half-
> elections was established, which was great to see.
> 
> However, the turnout continues to slide, dipping below 20% for
> the first time:
> 
>  Election | Electorate (delta %) | Votes | Turnout (delta %)
>  ===
>  Oct '16  | 1106 | 342   | 30.92
>  Apr '14  | 1893   (+71.16)  | 506   | 26.73   (-13.56)
>  Apr '15  | 2169   (+14.58)  | 548   | 25.27   (-5.48)
>  Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
>  Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)

Meh, I screwed up that table, not enough coffee yet today :)

Should be:

  Election | Electorate (delta %) | Votes | Turnout (delta %)
  ===
  Oct '13  | 1106 | 342   | 30.92
  Apr '14  | 1510   (+36.52)  | 448   | 29.69   (-4.05)
  Oct '14  | 1893   (+25.35)  | 506   | 26.73   (-9.91)
  Apr '15  | 2169   (+14.58)  | 548   | 25.27   (-5.48)
  Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
  Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)

Cheers,
Eoghan

> 
> This ongoing trend of a decreasing proportion of the electorate
> participating in TC elections is a concern.
> 
> Cheers,
> Eoghan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread Eoghan Glynn


> Please join me in congratulating the 7 newly elected members of the TC.
> 
> << REMOVE UNEEDED >
> * Davanum Srinivas (dims)
> * Flavio Percoco (flaper87)
> * John Garbutt (johnthetubaguy)
> * Matthew Treinish (mtreinish)
> * Mike Perez (thingee)
> * Morgan Fainberg (morgan)/(notmorgan)
> * Thierry Carrez (ttx)
> 
> Full results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a
> 
> Thank you to all candidates who stood for election, having a good group of
> candidates helps engage the community in our democratic process.
> 
> Thank you to all who voted and who encouraged others to vote. We need to
> ensure
> your voice is heard.
> 
> Thank you for another great round.
> 
> Here's to Newton!

Thanks Tony for efficiently running this election, congrats to
the candidates who prevailed, and thanks to everyone who ran
for putting themselves out there.

It was the most open race since the pattern of TC 2.0 half-
elections was established, which was great to see.

However, the turnout continues to slide, dipping below 20% for
the first time:

 Election | Electorate (delta %) | Votes | Turnout (delta %)
 ===
 Oct '16  | 1106 | 342   | 30.92
 Apr '14  | 1893   (+71.16)  | 506   | 26.73   (-13.56)
 Apr '15  | 2169   (+14.58)  | 548   | 25.27   (-5.48)
 Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
 Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)

This ongoing trend of a decreasing proportion of the electorate
participating in TC elections is a concern.

Cheers,
Eoghan 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Non-candidacy

2016-03-27 Thread Eoghan Glynn


> Hi all,
> 
> Thanks for all the fish. But it's time for me to move over and let some
> new voices contribute to the OpenStack Technical Committee.
> 
> I will continue to be a proponent for the viewpoint that OpenStack
> should be considered a toolkit of small, focused services and utilities,
> upon which great products can be built that expose cloud computing to
> ever-broader markets.
> 
> I'll just be a proponent of this view from outside the TC.
> 
> All the best, and thanks again for the opportunity to serve this past year.

Well, thank you Jay for your service ... and kudos also for the
gentlepersonly way you declared your non-candidacy early in the
nomination period.

The fact there's the longest possible notice that an incumbent
isn't running again may encourage some extra potential first-time
candidates off the bench and make for a more open race :)

Cheers,
Eoghan  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-01 Thread Eoghan Glynn

> > Current thinking would be to give preferential rates to access the main
> > summit to people who are present to other events (like this new
> > separated contributors-oriented event, or Ops midcycle(s)). That would
> > allow for a wider definition of "active community member" and reduce
> > gaming.
> >
> 
>  I think reducing gaming is important. It is valuable to include those
>  folks who wish to make a contribution to OpenStack, I have confidence
>  the next iteration of entry structure will try to more accurately
>  identify those folks who bring value to OpenStack.
> >>>
> >>> There have been a couple references to "gaming" on this thread, which
> >>> seem to imply a certain degree of dishonesty, in the sense of bending
> >>> the rules.
> >>>
> >>> Can anyone who has used the phrase clarify:
> >>>
> >>>  (a) what exactly they mean by gaming in this context
> >>>
> >>> and:
> >>>
> >>>  (b) why they think this is a clear & present problem demanding a
> >>>  solution?
> >>>
> >>> For the record, landing a small number of patches per cycle and thus
> >>> earning an ATC summit pass as a result is not, IMO at least, gaming.
> >>>
> >>> Instead, it's called *contributing*.
> >>>
> >>> (on a small scale, but contributing none-the-less).
> >>>
> >>> Cheers,
> >>> Eoghan
> >>
> >> Sure I can tell you what I mean.
> >>
> >> In Vancouver I happened to be sitting behind someone who stated "I'm
> >> just here for the buzz." Which is lovely for that person. The problem is
> >> that the buzz that person is there for is partially created by me and I
> >> create it and mean to offer it to people who will return it in kind, not
> >> just soak it up and keep it to themselves.
> >>
> >> Now I have no way of knowing who this person is and how they arrived at
> >> the event. But the numbers for people offering one patch to OpenStack
> >> (the bar for a summit pass) is significantly higher than the curve of
> >> people offering two, three or four patches to OpenStack (patches that
> >> are accepted and merged). So some folks are doing the minimum to get a
> >> summit pass rather than being part of the cohort that has their first
> >> patch to OpenStack as a means of offering their second patch to OpenStack.
> >>
> >> I consider it an honour and a privilege that I get to work with so many
> >> wonderful people everyday who are dedicated to making open source clouds
> >> available for whoever would wish to have clouds. I'm more than a little
> >> tired of having my energy drained by folks who enjoy feeding off of it
> >> while making no effort to return beneficial energy in kind.
> >>
> >> So when I use the phrase gaming, this is the dynamic to which I refer.
> > 
> > Thanks for the response.
> > 
> > I don't know if drive-by attendance at design summit sessions by under-
> > qualified or uninformed summiteers is encouraged by the availability of
> > ATC passes. But as long as those individuals aren't actively derailing
> > the conversation in sessions, I wouldn't consider their buzz soakage as
> > a major issue TBH.
> > 
> > In any case, I would say that just meeting the bar for an ATC summit pass
> > (by landing the required number of patches) is not bending the rules or
> > misrepresenting in any way.
> > 
> > Even if specifically motivated by the ATC pass (as opposed to scratching
> > a very specific itch) it's still simply an honest and rational response
> > to an incentive offered by the foundation.
> > 
> > One could argue whether the incentive is mis-designed, but that doesn't
> > IMO make a gamer of any contributor who simply meets the required threshold
> > of activity.
> > 
> > Cheers,
> > Eoghan
> > 
> 
> No I'm not saying that. I'm saying that the larger issue is one of
> motivation.
> 
> Folks who want to help (even if they don't know how yet) carry an energy
> of intention with them which is nourishing to be around. Folks who are
> trying to get in the door and not be expected to help and hope noone
> notices carry an entirely different kind of energy with them. It is a
> non-nourishing energy.

Personally I don't buy into that notion of the wrong sort of people
sneaking in the door of summit, keeping their heads down and hoping
no-one notices.

We have an open community that conducts its business in public. Not
wanting folks with the wrong sort of energy to be around when that
business is being done, runs counter to our open ethos IMO.

There are a whole slew of folks who work fulltime on OpenStack but
contribute mainly in the background: operating clouds, managing
engineering teams, supporting customers, designing product roadmaps,
training new users etc. TBH we should be flattered that the design
summit sessions are interesting and engaging enough to also attract
some of that sort of audience, as well as the core contributors of
code. If those interested folks happen to also have the gumption to
earn an ATC pass by meeting the threshold for contributor 

Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-29 Thread Eoghan Glynn


> >>> Current thinking would be to give preferential rates to access the main
> >>> summit to people who are present to other events (like this new
> >>> separated contributors-oriented event, or Ops midcycle(s)). That would
> >>> allow for a wider definition of "active community member" and reduce
> >>> gaming.
> >>>
> >>
> >> I think reducing gaming is important. It is valuable to include those
> >> folks who wish to make a contribution to OpenStack, I have confidence
> >> the next iteration of entry structure will try to more accurately
> >> identify those folks who bring value to OpenStack.
> > 
> > There have been a couple references to "gaming" on this thread, which
> > seem to imply a certain degree of dishonesty, in the sense of bending
> > the rules.
> > 
> > Can anyone who has used the phrase clarify:
> > 
> >  (a) what exactly they mean by gaming in this context
> > 
> > and:
> > 
> >  (b) why they think this is a clear & present problem demanding a
> >  solution?
> > 
> > For the record, landing a small number of patches per cycle and thus
> > earning an ATC summit pass as a result is not, IMO at least, gaming.
> > 
> > Instead, it's called *contributing*.
> > 
> > (on a small scale, but contributing none-the-less).
> > 
> > Cheers,
> > Eoghan
> 
> Sure I can tell you what I mean.
> 
> In Vancouver I happened to be sitting behind someone who stated "I'm
> just here for the buzz." Which is lovely for that person. The problem is
> that the buzz that person is there for is partially created by me and I
> create it and mean to offer it to people who will return it in kind, not
> just soak it up and keep it to themselves.
> 
> Now I have no way of knowing who this person is and how they arrived at
> the event. But the numbers for people offering one patch to OpenStack
> (the bar for a summit pass) is significantly higher than the curve of
> people offering two, three or four patches to OpenStack (patches that
> are accepted and merged). So some folks are doing the minimum to get a
> summit pass rather than being part of the cohort that has their first
> patch to OpenStack as a means of offering their second patch to OpenStack.
> 
> I consider it an honour and a privilege that I get to work with so many
> wonderful people everyday who are dedicated to making open source clouds
> available for whoever would wish to have clouds. I'm more than a little
> tired of having my energy drained by folks who enjoy feeding off of it
> while making no effort to return beneficial energy in kind.
> 
> So when I use the phrase gaming, this is the dynamic to which I refer.

Thanks for the response.

I don't know if drive-by attendance at design summit sessions by under-
qualified or uninformed summiteers is encouraged by the availability of
ATC passes. But as long as those individuals aren't actively derailing
the conversation in sessions, I wouldn't consider their buzz soakage as
a major issue TBH. 

In any case, I would say that just meeting the bar for an ATC summit pass
(by landing the required number of patches) is not bending the rules or
misrepresenting in any way.

Even if specifically motivated by the ATC pass (as opposed to scratching
a very specific itch) it's still simply an honest and rational response
to an incentive offered by the foundation.

One could argue whether the incentive is mis-designed, but that doesn't
IMO make a gamer of any contributor who simply meets the required threshold
of activity.

Cheers,
Eoghan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-29 Thread Eoghan Glynn

> > Current thinking would be to give preferential rates to access the main
> > summit to people who are present to other events (like this new
> > separated contributors-oriented event, or Ops midcycle(s)). That would
> > allow for a wider definition of "active community member" and reduce
> > gaming.
> > 
> 
> I think reducing gaming is important. It is valuable to include those
> folks who wish to make a contribution to OpenStack, I have confidence
> the next iteration of entry structure will try to more accurately
> identify those folks who bring value to OpenStack.

There have been a couple references to "gaming" on this thread, which
seem to imply a certain degree of dishonesty, in the sense of bending
the rules.

Can anyone who has used the phrase clarify:

 (a) what exactly they mean by gaming in this context

and:

 (b) why they think this is a clear & present problem demanding a
 solution?

For the record, landing a small number of patches per cycle and thus
earning an ATC summit pass as a result is not, IMO at least, gaming.

Instead, it's called *contributing*.

(on a small scale, but contributing none-the-less).

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Eoghan Glynn


> Hi everyone,
> 
> TL;DR: Let's split the events, starting after Barcelona.
> 
> Long long version:
> 
> In a global and virtual community, high-bandwidth face-to-face time is
> essential. This is why we made the OpenStack Design Summits an integral
> part of our processes from day 0. Those were set at the beginning of
> each of our development cycles to help set goals and organize the work
> for the upcoming 6 months. At the same time and in the same location, a
> more traditional conference was happening, ensuring a lot of interaction
> between the upstream (producers) and downstream (consumers) parts of our
> community.
> 
> This setup, however, has a number of issues. For developers first: the
> "conference" part of the common event got bigger and bigger and it is
> difficult to focus on upstream work (and socially bond with your
> teammates) with so much other commitments and distractions. The result
> is that our design summits are a lot less productive than they used to
> be, and we organize other events ("midcycles") to fill our focus and
> small-group socialization needs. The timing of the event (a couple of
> weeks after the previous cycle release) is also suboptimal: it is way
> too late to gather any sort of requirements and priorities for the
> already-started new cycle, and also too late to do any sort of work
> planning (the cycle work started almost 2 months ago).
> 
> But it's not just suboptimal for developers. For contributing companies,
> flying all their developers to expensive cities and conference hotels so
> that they can attend the Design Summit is pretty costly, and the goals
> of the summit location (reaching out to users everywhere) do not
> necessarily align with the goals of the Design Summit location (minimize
> and balance travel costs for existing contributors). For the companies
> that build products and distributions on top of the recent release, the
> timing of the common event is not so great either: it is difficult to
> show off products based on the recent release only two weeks after it's
> out. The summit date is also too early to leverage all the users
> attending the summit to gather feedback on the recent release -- not a
> lot of people would have tried upgrades by summit time. Finally a common
> event is also suboptimal for the events organization : finding venues
> that can accommodate both events is becoming increasingly complicated.
> 
> Time is ripe for a change. After Tokyo, we at the Foundation have been
> considering options on how to evolve our events to solve those issues.
> This proposal is the result of this work. There is no perfect solution
> here (and this is still work in progress), but we are confident that
> this strawman solution solves a lot more problems than it creates, and
> balances the needs of the various constituents of our community.
> 
> The idea would be to split the events. The first event would be for
> upstream technical contributors to OpenStack. It would be held in a
> simpler, scaled-back setting that would let all OpenStack project teams
> meet in separate rooms, but in a co-located event that would make it
> easy to have ad-hoc cross-project discussions. It would happen closer to
> the centers of mass of contributors, in less-expensive locations.
> 
> More importantly, it would be set to happen a couple of weeks /before/
> the previous cycle release. There is a lot of overlap between cycles.
> Work on a cycle starts at the previous cycle feature freeze, while there
> is still 5 weeks to go. Most people switch full-time to the next cycle
> by RC1. Organizing the event just after that time lets us organize the
> work and kickstart the new cycle at the best moment. It also allows us
> to use our time together to quickly address last-minute release-critical
> issues if such issues arise.
> 
> The second event would be the main downstream business conference, with
> high-end keynotes, marketplace and breakout sessions. It would be
> organized two or three months /after/ the release, to give time for all
> downstream users to deploy and build products on top of the release. It
> would be the best time to gather feedback on the recent release, and
> also the best time to have strategic discussions: start gathering
> requirements for the next cycle, leveraging the very large cross-section
> of all our community that attends the event.
> 
> To that effect, we'd still hold a number of strategic planning sessions
> at the main event to gather feedback, determine requirements and define
> overall cross-project themes, but the session format would not require
> all project contributors to attend. A subset of contributors who would
> like to participate in this sessions can collect and relay feedback to
> other team members for implementation (similar to the Ops midcycle).
> Other contributors will also want to get more involved in the
> conference, whether that's giving presentations or hearing user stories.
> 
> The split should 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread Eoghan Glynn


> > Honestly I don't know of any communication between two cores at a +2
> > party that couldn't have just as easily happened surrounded by other
> > contributors. Nor, I hope, does anyone put in the substantial
> > reviewing effort required to become a core in order to score a few
> > free beers and see some local entertainment. Similarly for the TC,
> > one would hope that dinner doesn't figure in the system incentives
> > that drives folks to throw their hat into the ring.
> 
> Heh, you'd be surprised.
> 
> I don't object to the proposal, just the implication that there's
> something wrong with parties for specific groups: we did abandon the
> speaker party at Plumbers because the separation didn't seem to be
> useful and concentrated instead on doing a great party for everyone.
> 
> > In any case, I've derailed the main thrust of the discussion here,
> > which I believe could be summed up by:
> >
> >   "let's dial down the glitz a notch, and get back to basics"
> > 
> > That sentiment I support in general, but I'd just be more selective
> > as to which social events should be first in line to be culled in
> > order to create a better atmosphere at summit.
> > 
> > And I'd be far more concerned about getting the choice of location,
> > cadence, attendees, and format right, than in questions of who drinks
> > with whom.
> 
> OK, so here's a proposal, why not reinvent the Cores party as a Meet
> the Cores Party instead (open to all design summit attendees)?  Just
> make sure it's advertised in a way that could only possibly appeal to
> design summit attendees (so the suits don't want to go), use the same
> buget (which will necessitate a dial down) and it becomes an inclusive
> event that serves a useful purpose.

Sure, I'd be totally agnostic on the branding as long as the widest
audience is invited ... e.g. all ATCs, or even all summit attendees.

Actually that distinction between ATCs and other attendees just sparked
another thought ...

Traditionally all ATCs earn a free pass for summit, whereas the other
attendees pay $600 or more for entry. I'm wondering if (a) there's some
cross-subsidization going on here and (b) if the design summit was
cleaved off, would the loss of the income from the non-ATCs sound the
death-knell for the traditional ATC free pass?

>From my Pov, that would not be an excellent outcome. Someone with better
visibility on the financial structure of summit funding might be able
to clarify that.

Cheers,
Eoghan

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> > [...]
> >   * much of the problem with the lavish parties is IMO related to the
> > *exclusivity* of certain shindigs, as opposed to devs socializing at
> > summit being inappropriate per se. In that vein, I think the cores
> > party sends the wrong message and has run its course, while the TC
> > dinner ... well, maybe Austin is the time to show some leadership
> > on that? ;)
> 
> Well, Tokyo was the time to show some leadership on that -- there was no
> "TC dinner" there :)

Excellent, that is/was indeed a positive step :)

For the cores party, much as I enjoyed the First Nation cuisine in Vancouver
or the performance art in Tokyo, IMO it's probably time to draw a line under
that excess also, as it too projects a notion of exclusivity that runs counter
to building a community.

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> Hello all,
> 
> tl;dr
> =
> 
> I have long thought that the OpenStack Summits have become too
> commercial and provide little value to the software engineers
> contributing to OpenStack.
> 
> I propose the following:
> 
> 1) Separate the design summits from the conferences
> 2) Hold only a single OpenStack conference per year
> 3) Return the design summit to being a low-key, low-cost working event
> 
> details
> ===
> 
> The design summits originally started out as working events. Developers
> got together in smallish rooms, arranged chairs in a fishbowl, and got
> to work planning and designing.
> 
> With the OpenStack Summit growing more and more marketing- and
> sales-focused, the contributors attending the design summit are often
> unfocused. The precious little time that developers have to actually
> work on the next release planning is often interrupted or cut short by
> the large numbers of "suits" and salespeople at the conference event,
> many of which are peddling a product or pushing a corporate agenda.
> 
> Many contributors submit talks to speak at the conference part of an
> OpenStack Summit because their company says it's the only way they will
> pay for them to attend the design summit. This is, IMHO, a terrible
> thing. The design summit is a *working* event. Companies that contribute
> to OpenStack projects should send their engineers to working events
> because that is where work is done, not so that their engineer can go
> give a talk about some vendor's agenda-item or newfangled product.
> 
> Part of the reason that companies only send engineers who are giving a
> talk at the conference side is that the cost of attending the OpenStack
> Summit has become ludicrously expensive. Why have the events become so
> expensive? I can think of a few reasons:
> 
> a) They are held every six months. I know of no other community or open
> source project that holds *conference-type* events every six months.
> 
> b) They are held in extremely expensive hotels and conference centers
> because the number of attendees is so big.
> 
> c) Because the conferences have become sales and marketing-focused
> events, companies shell out hundreds of thousands of dollars for schwag,
> for rented event people, for food and beverage sponsorships, for keynote
> slots, for lavish and often ridiculous parties, and more. This cost
> means less money to send engineers to the design summit to do actual work.
> 
> I would love to see the OpenStack contributor community take back the
> design summit to its original format and purpose and decouple it from
> the OpenStack Summit's conference portion.
> 
> I believe the design summits should be organized by the OpenStack
> contributor community, not the OpenStack Foundation and its marketing
> and event planning staff. This will allow lower-cost venues to be chosen
> that meet the needs only of the small group of active contributors, not
> of huge masses of conference attendees. This will allow contributor
> companies to send *more* engineers to *more* design summits, which is
> something that really needs to happen if we are to grow our active
> contributor pool.
> 
> Once this decoupling occurs, I think that the OpenStack Summit should be
> renamed to the OpenStack Conference and Expo to better fit its purpose
> and focus. This Conference and Expo event really should be held once a
> year, in my opinion, and continue to be run by the OpenStack Foundation.
> 
> I, for one, would welcome events that have no conference check-in area,
> no evening parties with 2000 people, no keynote and
> powerpoint-as-a-service sessions, and no getting pulled into sales meetings.
> 
> OK, there, I said it.
> 
> Thoughts? Criticism? Support? Suggestions welcome.

Largely agree with the need to re-imagine summit, and perhaps cleaving
off the design summit is the best way forward on that.

But in any case, just a few counter-points to consider:

 * nostalgia for the days of yore will only get us so far, as *some* of
   the friction in the current design summit is due to its scale (read:
   success/popularity) as opposed to a wandering band of suits ruining
   everything. A decoupled design summit will still be a large event
   and will never recreate the intimate atmosphere of say the Bexar
   summit.

 * much of the problem with the lavish parties is IMO related to the
   *exclusivity* of certain shindigs, as opposed to devs socializing at 
   summit being inappropriate per se. In that vein, I think the cores
   party sends the wrong message and has run its course, while the TC
   dinner ... well, maybe Austin is the time to show some leadership
   on that? ;)

 * cost-wise we need to be careful also about quantifying the real cost
   deltas between a typical midcycle location (often hard to get to,
   with a limited choice of hotels) and a major city with direct routes
   and competition between airlines keeping airfares under control.
   Agreed let's scale down the glitz, but 

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> > For the cores party, much as I enjoyed the First Nation cuisine in
> > Vancouver
> > or the performance art in Tokyo, IMO it's probably time to draw a line
> > under
> > that excess also, as it too projects a notion of exclusivity that runs
> > counter
> > to building a community.
> 
> ... first nation cuisine? you know that's not really what Canadian's
> eat? /me sips on maple syrup and chows on some beavertail.

LOL, I was thinking of the roasted chunks of bison on a stick ... though
now that you mention it, I recall a faint whiff of maple syrup ;)

Cheers,
Eoghan 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Eoghan Glynn


> > > > [...]
> > > >   * much of the problem with the lavish parties is IMO related to
> > > > the
> > > > *exclusivity* of certain shindigs, as opposed to devs
> > > > socializing at
> > > > summit being inappropriate per se. In that vein, I think the
> > > > cores
> > > > party sends the wrong message and has run its course, while
> > > > the TC
> > > > dinner ... well, maybe Austin is the time to show some
> > > > leadership
> > > > on that? ;)
> > > 
> > > Well, Tokyo was the time to show some leadership on that -- there
> > > was no "TC dinner" there :)
> > 
> > Excellent, that is/was indeed a positive step :)
> > 
> > For the cores party, much as I enjoyed the First Nation cuisine in
> > Vancouver or the performance art in Tokyo, IMO it's probably time to
> > draw a line under that excess also, as it too projects a notion of
> > exclusivity that runs counter to building a community.
> 
> Are you sure you're concentrating on the right problem?  Communities
> are naturally striated in terms of leadership.  In principle, there's
> nothing wrong with "exclusive" events that appear to be rewarding the
> higher striations, especially if it acts as an incentive to people to
> move up.  It's only actually "elitist" if you reward the top and
> there's no real way to move up there from the bottom.  You also want to
> be careful about being pejorative; after all the same principle would
> apply to the Board Dinner as well.
> 
> I think the correct question to ask would be "does the cash spent on
> the TC party provide a return on investment either as an incentive to
> become a TC or to facililtate communications among TC members?".  If
> you answer no to that, then eliminate it.

Well the cash spent on those two events is not my concern at all, as
both are privately sponsored by an OpenStack vendor as opposed to being
paid for by the Foundation (IIUC). So in that sense, it's not like the
events are consuming "community funds" for which I'm demanding an RoI.
Vendor's marketing dollars, so the return is their own concern.

Neither am I against partying devs in general, seems like a useful
ice-breaker at summit, just like at most other tech conferences.

My objection, FWIW, is simply around the "Upstairs, Downstairs" feel
to such events (or if you're not old enough to have watched the BBC
in the 1970s, maybe Downton Abbey would be more familiar).

Honestly I don't know of any communication between two cores at a +2
party that couldn't have just as easily happened surrounded by other
contributors. Nor, I hope, does anyone put in the substantial reviewing
effort required to become a core in order to score a few free beers and
see some local entertainment. Similarly for the TC, one would hope that
dinner doesn't figure in the system incentives that drives folks to
throw their hat into the ring. 

In any case, I've derailed the main thrust of the discussion here,
which I believe could be summed up by:

  "let's dial down the glitz a notch, and get back to basics"

That sentiment I support in general, but I'd just be more selective
as to which social events should be first in line to be culled in
order to create a better atmosphere at summit.

And I'd be far more concerned about getting the choice of location,
cadence, attendees, and format right, than in questions of who drinks
with whom.

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Meter-list with multiple filters in simple query is not working

2015-10-12 Thread Eoghan Glynn


> Hi
> 
> Can anyone plz help me on how to specify a simple query with multiple values
> for a query field in a Ceilometer meter-list request? I need to fetch meters
> that belongs to more than one project id. I have tried the following query
> format, but only the last query value (in this case, project_id=
> d41cdd2ade394e599b40b9b50d9cd623) is used for filtering. Any help is
> appreciated here.
> 
> curl -H 'X-Auth-Token:'
> http://localhost:8777/v2/meters?q.field=project_id=eq=f28d2e522e1f466a95194c10869acd0c=project_id=eq=d41cdd2ade394e599b40b9b50d9cd623
> 
> Thanks
> Srikanth

By "not working" you mean "not doing what you (incorrectly) expect it to do"

Your query asks for samples with aproject_id set to *both* f28d.. *and* d41c..
The result is empty, as a sample can't be associated with two project_ids.

The ceilometer simple query API combines all filters using logical AND.

Seems like you want logical OR here, which is possible to express via complex
queries:

  http://docs.openstack.org/developer/ceilometer/webapi/v2.html#complex-query

Cheers,
Eoghan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

2015-07-01 Thread Eoghan Glynn


   I think removing options from the API requires version bump. So if we
   plan to do this, that should be introduced in v3 as opposed to v2,
   which should remain the same and maintained for two cycles (assuming
   that we still have this policy in OpenStack). It this is achievable by
   removing the old code and relying on the new repo that would be the
   best, if not then we need to figure out how to freeze the old code.
  
  This is not an API change as we're not modifying anything in the API.
  It's just the endpoint *potentially* split in two. But you can also merge
  them as they are 2 separate entities (/v2/alarms and /v2/*).
  So there's no need for a v3 here.
 
 Will this be accessible in the same way as currently or it needs changes on
 client side?

How about ceilometer-api service returns 301 'Moved Permanently' for any
requests to /v2/alarms, redirecting to the new Aodh endpoint?

Being a standard HTTP response code, this should be handled gracefully by
any (non-broken) HTTP client.

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Eoghan Glynn


 Hi all,
 
 we have a problem with dependencies for the kilo-rc1 release of Heat - see
 bug [1]. Root cause is ceilometerclient was not updated for a long time and
 just got an update recently. We are sure that Heat in Kilo would not work
 with ceilometerclient =1.0.12 (users would not be able to create Ceilometer
 alarms in their stacks). In the same time, global requirements have
 ceilometerclient =1.0.12. That works on the gate, but will fail for any
 deployment that happens to use an outdated pypi mirror. I am also afraid
 that if the version of ceilometerclient would be upper-capped to 1.0.12 in
 stable/kilo, Heat in stable/kilo would be completely broken in regards to
 Ceilometer alarms usage.
 
 The patch to global requirements was already proposed [2] but is blocked by
 requirements freeze. Can we somehow apply for an exception and still merge
 it? Are there any other OpenStack projects besides Heat that use
 ceilometerclient's Python API (just asking to assert the testing burden)?

 [1] https://bugs.launchpad.net/python-ceilometerclient/+bug/1423291
 
 [2] https://review.openstack.org/#/c/167527/

Pavlo - part of the resistance here I suspect may be due to the
fact that I inadvertently broke the SEMVER rules when cutting
the ceilometerclient 1.0.13 release, i.e. it was not sufficiently
backward compatible with 1.0.12 to warrant only a Z-version bump.

Sean - would you be any happier with making a requirements freeze
exception to facilitate Heat if we were to cut a fresh ceiloclient
release that's properly versioned, i.e. 2.0.0?

Cheers,
Eoghan
 
 
 Best regards,
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [depfreeze] bug is unresolved due to requirements freeze

2015-04-02 Thread Eoghan Glynn

 Pavlo Shchelokovskyy wrote:
  unfortunately, in Heat we do not have yet integration tests for all the
  Heat resources (creating them in real OpenStack), and Ceilometer alarms
  are in those not covered. In unit tests the real client is of course
  mocked out. When we stumbled on this issue during normal Heat usage, we
  promptly raised a bug suggesting to make a new release, but propagating
  it to requirements took some time. The gate is not affected as it
  installs as per = in requirements the latest which is 1.0.13.
  
  With 1.0.12 ceilometerclient and Heat-Kilo, the Ceilometer alarm
  resource not doesn't work quite as expected, it can not be created at
  all, failing any stack that has it in the template.
 
 I'm +1 on the change.
 
 Let's wait until tomorrow to make sure this is not completely
 unacceptable to packagers.

Excellent, thank you sir!

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Stepping down as PTL

2015-04-02 Thread Eoghan Glynn

Hi Folks,

Just a quick note to say that I won't be running again for
ceilometer PTL over the liberty cycle.

I've taken on a new role internally that won't realistically
allow me the time that the PTL role deserves. But y'all haven't
seen the last of me, I'll be sticking around as a contributor,
bandwidth allowing.

I just wanted to take to opportunity to warmly thank everyone
in the ceilometer community for their efforts over the past two
cycles, and before.

And I'm sure I'll be leaving the reins in good hands :)

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] upgrades from juno to kilo

2015-03-31 Thread Eoghan Glynn


 I tracked down the cause of the check-grenade-dsvm failure on
 https://review.openstack.org/#/c/167370 . As I understand it, grenade is
 taking the previous stable release, deploying it, then upgrading to the
 current master (plus the proposed changeset) without changing any of the
 config from the stable deployment. Thus the policy.json file used in that
 test is the file from stable-juno. Then if we look at oslo_policy/policy.py
 we see that if the rule being looked for is missing then the default rule
 will be used, but then if that default rule is also missing a KeyError is
 thrown. Since the default rule was missing with ceilometer's policy.json
 file in Juno, that's what would happen here. I assume that KeyError then
 gets turned into the 403 Forbidden that is causing check-grenade-dsvm
 failure.
 
 I suspect the author of the already-merged
 https://review.openstack.org/#/c/115717 did what they did in
 ceilometer/api/rbac.py rather than what is proposed in
 https://review.openstack.org/#/c/167370 just to get the grenade tests to
 pass. I think they got lucky (unlucky for us), too, because I think they
 actually did break what the grenade tests are meant to catch. The patch set
 which was merged under https://review.openstack.org/#/c/115717 changed the
 rule that is checked in get_limited_to() from context_is_admin to
 segregation. But the segregation rule didn't exist in the Juno version
 of ceilometer's policy.json, so if a method that calls get_limited_to() was
 tested after an upgrade, I believe it would fail with a 403 Forbidden
 tracing back to a KeyError looking for the segregation rule... very
 similar to what we're seeing in https://review.openstack.org/#/c/167370
 
 Am I on the right track here? How should we handle this? Is there a way to
 maintain backward compatibility while fixing what is currently broken (as a
 result of https://review.openstack.org/#/c/115717 ) and allowing for a fix
 for https://bugs.launchpad.net/ceilometer/+bug/1435855 (the goal of
 https://review.openstack.org/#/c/167370 )? Or will we need to document in
 the release notes that the manual step of modifying ceilometer's policy.json
 is required when upgrading from Juno, and then correspondingly modify
 grenade's upgrade_ceilometer file?

Thanks for raising this issue.

IIUC the idea behind the unconventional approach taken by the original
RBAC patch that landed in juno was to ensure that API calls continued to
be allowed by default, as was previously the case.

However, you correctly point out that this missed a case where the new
logic is run against a completely unchanged policy.json from Juno or
before.

As we just discussed on #os-ceilometer IRC, we can achieve the following
three goals with a relatively minor change:

 1. allow API operations if no matching rule *and* no default rule

 2. apply the default rule *if* present

 3. tolerate the absence of the segregation rule

This would require:

 (a) explicitly checking for 'default' in _ENFORCER.rules.keys() before
 applying the enforcement approach in [1], otherwise falling back
 to the prior enforcement approach in [2]

 (b) explicitly checking for 'segregation' in _ENFORCER.rules.keys()
 before [3] otherwise falling back to checking for the literal
 'context_as_admin' as before.

If https://review.openstack.org/167370 is updated to follow this approach,
I think we can land it for kilo-rc1 without an upgrade exception.
 
Cheers,
Eoghan
 
[1] https://review.openstack.org/#/c/167370/5/ceilometer/api/rbac.py line 49

[2] https://review.openstack.org/#/c/115717/18/ceilometer/api/rbac.py line 51

[3] https://review.openstack.org/#/c/115717/18/ceilometer/api/rbac.py line 81

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Eoghan Glynn


 After some discussion with Sean Dague and a few others it became
 clear that it would be a good idea to introduce a new tool I've been
 working on to the list to get a sense of its usefulness generally,
 work towards getting it into global requirements, and get the
 documentation fleshed out so that people can actually figure out how
 to use it well.
 
 tl;dr: Help me make this interesting tool useful to you and your
 HTTP testing by reading this message and following some of the links
 and asking any questions that come up.
 
 The tool is called gabbi
 
  https://github.com/cdent/gabbi
  http://gabbi.readthedocs.org/
  https://pypi.python.org/pypi/gabbi
 
 It describes itself as a tool for running HTTP tests where requests
 and responses are represented in a declarative form. Its main
 purpose is to allow testing of APIs where the focus of test writing
 (and reading!) is on the HTTP requests and responses, not on a bunch of
 Python (that obscures the HTTP).
 
 The tests are written in YAML and the simplest test file has this form:
 
 ```
 tests:
 - name: a test
url: /
 ```
 
 This test will pass if the response status code is '200'.
 
 The test file is loaded by a small amount of python code which transforms
 the file into an ordered sequence of TestCases in a TestSuite[1].
 
 ```
 def load_tests(loader, tests, pattern):
  Provide a TestSuite to the discovery process.
  test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
  return driver.build_tests(test_dir, loader, host=None,
intercept=SimpleWsgi,
fixture_module=sys.modules[__name__])
 ```
 
 The loader provides either:
 
 * a host to which real over-the-network requests are made
 * a WSGI app which is wsgi-intercept-ed[2]
 
 If an individual TestCase is asked to be run by the testrunner, those tests
 that are prior to it in the same file are run first, as prerequisites.
 
 Each test file can declare a sequence of nested fixtures to be loaded
 from a configured (in the loader) module. Fixtures are context managers
 (they establish the fixture upon __enter__ and destroy it upon
 __exit__).
 
 With a proper group_regex setting in .testr.conf each YAML file can
 run in its own process in a concurrent test runner.
 
 The docs contain information on the format of the test files:
 
  http://gabbi.readthedocs.org/en/latest/format.html
 
 Each test can state request headers and bodies and evaluate both response
 headers and response bodies. Request bodies can be strings in the
 YAML, files read from disk, or JSON created from YAML structures.
 Response verifcation can use JSONPath[3] to inspect the details of
 response bodies. Response header validation may use regular
 expressions.
 
 There is limited support for refering to the previous request
 to construct URIs, potentially allowing traversal of a full HATEOAS
 compliant API.
 
 At the moment the most complete examples of how things work are:
 
 * Ceilometer's pending use of gabbi:
https://review.openstack.org/#/c/146187/
 * Gabbi's testing of gabbi:
https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
(the loader and faked WSGI app for those yaml files is in:
https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
 
 One obvious thing that will need to happen is a suite of concrete
 examples on how to use the various features. I'm hoping that
 feedback will help drive that.
 
 In my own experimentation with gabbi I've found it very useful. It's
 helped me explore and learn the ceilometer API in a way that existing
 test code has completely failed to do. It's also helped reveal
 several warts that will be very useful to fix. And it is fast. To
 run and to write. I hope that with some work it can be useful to you
 too.

Thanks for the write-up Chris,

Needless to say, we're sold on the utility of this on the ceilometer
side, in terms of crafting readable, self-documenting tests that reveal
the core aspects of an API in a easily consumable way.

I'd be interested in hearing the api-wg viewpoint, specifically whether
that working group intends to recommend any best practices around the
approach to API testing.

If so, I think gabbi would be a worthy candidate for consideration.

Cheers,
Eoghan

 Thanks.
 
 [1] Getting gabbi to play well with PyUnit style tests and
  with infrastructure like subunit and testrepository was one of
  the most challenging parts of the build, but the result has been
  a lot of flexbility.
 
 [2] https://pypi.python.org/pypi/wsgi_intercept
 [3] https://pypi.python.org/pypi/jsonpath-rw
 
 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] Where should Schema files live?

2014-12-08 Thread Eoghan Glynn


 From: Sandy Walsh [sandy.wa...@rackspace.com] Monday, December 01, 2014 9:29
 AM
  
 From: Duncan Thomas [duncan.tho...@gmail.com]
 Sent: Sunday, November 30, 2014 5:40 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Where should Schema files live?
  
 Duncan Thomas
 On Nov 27, 2014 10:32 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
  
  We were thinking each service API would expose their schema via a new
  /schema resource (or something). Nova would expose its schema. Glance
  its own. etc. This would also work well for installations still using
  older deployments.
 This feels like externally exposing info that need not be external (since
 the notifications are not external to the deploy) and it sounds like it
 will potentially leak fine detailed version and maybe deployment config
 details that you don't want to make public - either for commercial reasons
 or to make targeted attacks harder
  
  
 Yep, good point. Makes a good case for standing up our own service or just
 relying on the tarballs being in a well know place.
 
 Hmm, I wonder if it makes sense to limit the /schema resource to service
 accounts. Expose it by role.
 
 There's something in the back of my head that doesn't like calling out to the
 public API though. Perhaps unfounded.

I'm wondering here how this relates to the other URLs in the
service catalog that aren't intended for external consumption,
e.g. the internalURL and adminURL.

I had assumed that these URLs would be visible to external clients,
but protected by firewall rules such that clients would be unable
to do anything in anger with those raw addresses from the outside.

So would including a schemaURL in the service catalog actually
expose an attack surface, assuming this was in general safely
firewalled off in any realistic deployment?

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-12-01 Thread Eoghan Glynn


 detail=concise is not a media type and looking at the grammar in the RFC it
 wouldn’t be valid.
 I think the grammar would allow for application/json; detail=concise. See
 the last line in the definition of the media-range nonterminal in the
 grammar (copied below for convenience):
 Accept = Accept :
 #( media-range [ accept-params ] )
 media-range= ( */*
 | ( type / * )
 | ( type / subtype )
 ) *( ; parameter )
accept-params  = ; q = qvalue *( accept-extension )
accept-extension = ; token [ = ( token | quoted-string ) ]
 The grammar does not define the parameter nonterminal but there is an
 example in the same section that seems to suggest what it could look like:
 Accept: text/*, text/html, text/html;level=1, */*
 Shaunak
 On Nov 26, 2014, at 2:03 PM, Everett Toews  everett.to...@rackspace.com 
 wrote:
 
 
 
 
 On Nov 20, 2014, at 4:06 PM, Eoghan Glynn  egl...@redhat.com  wrote:
 
 
 
 How about allowing the caller to specify what level of detail
 they require via the Accept header?
 
 ▶ GET /prefix/resource_name
 Accept: application/json; detail=concise
 
 The Accept request-header field can be used to specify certain media types
 which are acceptable for the response.” [1]
 
 detail=concise is not a media type and looking at the grammar in the RFC it
 wouldn’t be valid. It’s not appropriate for the Accept header.

Well it's not a media type for sure, as it's intended to be an
accept-extension.

(which is allowed by the spec to be specified in the Accept header,
 in addition to media types  q-values)

Cheers,
Eoghan
 
 Everett
 
 [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-25 Thread Eoghan Glynn


 I think Doug's suggestion of keeping the schema files in-tree and pushing
 them to a well-known tarball maker in a build step is best so far.
 
 It's still a little clunky, but not as clunky as having to sync two repos.

Yes, I tend to agree.

So just to confirm that my understanding lines up:

* the tarball would be used by the consumer-side for unit tests and
  limited functional tests (where the emitter service is not running)

* the tarball would be also be used by the consumer-side in DSVM-based
  CI and in a full production deployments (where the emitter service is
  running)

* the tarballs will be versioned, with old versions remaining accessible
  (as per the current practice with released source on tarballs.openstack.org)

* the consumer side will know which version of each schema it expects to
  support, and will download the appropriate tarball at runtime

* the emitter side will signal the schema version that's it actually using,
  via say a well-known field in the notification body

* the consumer will reject notification payloads with a mismatched major
  version to what it's expecting to support
 
 [snip]
   d. Should we make separate distro packages? Install to a well known
   location all the time? This would work for local dev and integration
   testing and we could fall back on B and C for production distribution.
   Of
   course, this will likely require people to add a new distro repo. Is
   that
   a concern?
 
  Quick clarification ... when you say distro packages, do you mean
  Linux-distro-specific package formats such as .rpm or .deb?
 
  Yep.
 
 So that would indeed work, but just to sound a small note of caution
 that keeping an oft-changing package (assumption #5) up-to-date for
 fedora20/21  epel6/7, or precise/trusty, would involve some work.
 
 I don't know much about the Debian/Ubuntu packaging pipeline, in
 particular how it could be automated.
 
 But in my small experience of Fedora/EL packaging, the process is
 somewhat resistant to many fine-grained updates.
 
 Ah, good to know. So, if we go with the tarball approach, we should be able
 to avoid this. And it allows the service to easily service up the schema
 using their existing REST API.

I'm not clear on how servicing up the schema via an existing API would
avoid the co-ordination issue identified in the original option (b)?

Would that API just be a very simple proxying in front of the well-known
source of these tarballs? 

For production deployments, is it likely that some shops will not want
to require access to an external site such as tarballs.openstack.org?

So in that case, would we require that they mirror, or just assume that
downstream packagers will bundle the appropriate schema versions with
the packages for the emitter and consumer services?

Cheers,
Eoghan

 Should we proceed under the assumption we'll push to a tarball in a
 post-build step? It could change if we find it's too messy.
 
 -S
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-21 Thread Eoghan Glynn


 Right, agreed. This has always been an open meeting.

Yes of course, as I said up-thread.

This is a central plank of our 4 Opens principle[1], which should
apply across the board to all community meetings related to the
OpenStack project.

All project meetings are held in public IRC channels and recorded.

Cheers,
Eoghan

[1] https://wiki.openstack.org/wiki/Open

 Now we're just
 setting the tone that the primary mission for this Open Meeting is
 OpenStack as a whole things, where existing comms channels aren't
 sufficient to handle the situation in the narrow. I look forward to
 there being a forum primarily about discussing and addressing those
 larger issues, and hopefully making a place where we provide
 opportunities for people that want to get involved in OpenStack as a
 whole to know what the issues are and jump in to help.
 
 -Sean
 
 --
 Sean Dague
 http://dague.net
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-21 Thread Eoghan Glynn


  Why wouldn’t they live in the repo of the application that generates the
  notification, like we do with the database schema and APIs defined by
  those apps?
 
  That would mean downstream consumers (potentially in different languages)
  would need to pull all repos and extract just the schema parts. A
  separate repo would make it more accessible.
 
 OK, fair. Could we address that by publishing the schemas for an app in a
 tar ball using a post merge job?
 
 That's something to consider. At first blush it feels a little clunky to pull
 all projects to extract schemas whenever any of the projects change.
 
 But there is something to be said about having the schema files next to the
 code that going to generate the data.

My initial read of Doug's proposal was for a tarball of the project's schemas
to be published somewhere out-of-tree, e.g. to tarballs.openstack.org, via a
post-merge git hook or some-such.

Not 100% sure that's a correct interpretation of the proposal, but it would
avoid the need for the consumer projects to pull the repos of the emitter
projects. 

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-21 Thread Eoghan Glynn


  Some problems / options:
  a. Unlike Python, there is no simple pip install for text files. No
  version control per se. Basically whatever we pull from the repo. The
  problem with a git clone is we need to tweak config files to point to a
  directory and that's a pain for gating tests and CD. Could we assume a
  symlink to some well-known location?
  a': I suppose we could make a python installer for them, but that's a
  pain for other language consumers.
 
 Would it be unfair to push that burden onto the writers of clients
 in other languages?
 
 i.e. OpenStack, being largely python-centric, would take responsibility
 for both:
 
   1. Maintaining the text versions of the schema in-tree (e.g. as json)
 
 and:
 
   2. Producing a python-specific installer based on #1
 
 whereas, the first Java-based consumer of these schema would take
 #1 and package it up in their native format, i.e. as a jar or
 OSGi bundle.
 
 Certainly an option. My gut says it will lead to abandoned/fragmented
 efforts.
 If I was a ruby developer, would I want to take on the burden of maintaining
 yet another package?
 I think we need to treat this data as a form of API and there it's our
 responsibility to make easily consumable.
 
 (I'm not hard-line on this, again, just my gut feeling)

OK, that's fair.

[snip]
  d. Should we make separate distro packages? Install to a well known
  location all the time? This would work for local dev and integration
  testing and we could fall back on B and C for production distribution. Of
  course, this will likely require people to add a new distro repo. Is that
  a concern?
 
 Quick clarification ... when you say distro packages, do you mean
 Linux-distro-specific package formats such as .rpm or .deb?
 
 Yep.

So that would indeed work, but just to sound a small note of caution
that keeping an oft-changing package (assumption #5) up-to-date for
fedora20/21  epel6/7, or precise/trusty, would involve some work.

I don't know much about the Debian/Ubuntu packaging pipeline, in
particular how it could be automated.

But in my small experience of Fedora/EL packaging, the process is
somewhat resistant to many fine-grained updates.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A true cross-project weekly meeting

2014-11-20 Thread Eoghan Glynn

 Hi everyone,
 
 TL;DR:
 I propose we turn the weekly project/release meeting timeslot
 (Tuesdays at 21:00 UTC) into a weekly cross-project meeting, to
 discuss cross-project topics with all the project leadership, rather
 than keep it release-specific.
 
 
 Long version:
 
 Since the dawn of time (August 28, 2010 !), there has always been a
 project meeting on Tuesdays at 21:00 UTC. It used to be a all-hands
 meeting, then it turned more into a release management meeting. With the
 addition of more integrated projects, all the meeting time was spent in
 release status updates and there was no time to discuss project-wide
 issues anymore.
 
 During the Juno cycle, we introduced 1:1 sync points[1] for project
 release liaisons (usually PTLs) to synchronize their status with the
 release management team /outside/ of the meeting time. That freed time
 to discuss integrated-release-wide problems and announcements during the
 meeting itself.
 
 Looking back to the Juno meetings[2], it's quite obvious that the
 problems we discussed were not all release-management-related, though,
 and that we had free time left. So I think it would be a good idea in
 Kilo to recognize that and clearly declare that meeting the weekly
 cross-project meeting. There we would discuss release-related issues if
 needed, but also all the others cross-project hot topics of the day on
 which a direct discussion can help making progress.
 
 The agenda would be open (updated directly on the wiki and edited/closed
 by the chair a few hours before the meeting to make sure everyone knows
 what will be discussed). The chair (responsible for vetting/postponing
 agenda points and keeping the discussion on schedule) could rotate.
 
 During the Juno cycle we also introduced the concept of Cross-Project
 Liaisons[3], as a way to scale the PTL duties to a larger group of
 people and let new leaders emerge from our community. Those CPLs would
 be encouraged to participate in the weekly cross-project meeting
 (especially when a topic in their domain expertise is discussed), and
 the meeting would be open to all anyway (as is the current meeting).
 
 This is mostly a cosmetic change: update the messaging around that
 meeting to make it more obvious that it's not purely about the
 integrated release and that it is appropriate to put other types of
 cross-project issues on the agenda. Let me know on this thread if that
 sounds like a good idea, and we'll make the final call at next week
 meeting :)

+1 to involving the liaisons more directly

-1 to the meeting size growing too large for productive real-time
   communication on IRC

IME, there's a practical limit on the number of *active* participants
in an IRC meeting. Not sure what that magic threshold is, but I suspect
not much higher than 25.

So given that we're in an era of fretting about the scalability
challenges facing cross-project concerns, I'd hate to paint ourselves
into a corner with another cross-project scalability challenge.

How about the agenda each week includes a specific invitation to a
subset of the liaisons, based on relevance?

(e.g. the week there's a CI brownout, request all the QA liaisons attend;
 whereas the week that the docs team launch a new contribution workflow,
 request that all the docs liaisons are present).

Possibly with a standing invite to the release-mgmt liaison (or PTL)?

Of course, as you say, the meeting is otherwise open-as-open-can-be.

Cheers,
Eoghan
 
 [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
 [2] http://eavesdrop.openstack.org/meetings/project/2014/
 [3] https://wiki.openstack.org/wiki/CrossProjectLiaisons
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-20 Thread Eoghan Glynn

Thanks for raising this Sandy,

Some questions/observations inline.

 Hey y'all,
 
 To avoid cross-posting, please inform your -infra / -operations buddies about 
 this post. 
 
 We've just started thinking about where notification schema files should live 
 and how they should be deployed. Kind of a tricky problem.  We could really 
 use your input on this problem ...
 
 The assumptions:
 1. Schema files will be text files. They'll live in their own git repo 
 (stackforge for now, ideally oslo eventually). 
 2. Unit tests will need access to these files for local dev
 3. Gating tests will need access to these files for integration tests
 4. Many different services are going to want to access these files during 
 staging and production. 
 5. There are going to be many different versions of these files. There are 
 going to be a lot of schema updates. 
 
 Some problems / options:
 a. Unlike Python, there is no simple pip install for text files. No version 
 control per se. Basically whatever we pull from the repo. The problem with a 
 git clone is we need to tweak config files to point to a directory and that's 
 a pain for gating tests and CD. Could we assume a symlink to some well-known 
 location?
 a': I suppose we could make a python installer for them, but that's a 
 pain for other language consumers.

Would it be unfair to push that burden onto the writers of clients
in other languages?

i.e. OpenStack, being largely python-centric, would take responsibility
for both:

  1. Maintaining the text versions of the schema in-tree (e.g. as json)

and:

  2. Producing a python-specific installer based on #1

whereas, the first Java-based consumer of these schema would take
#1 and package it up in their native format, i.e. as a jar or
OSGi bundle.

 b. In production, each openstack service could expose the schema files via 
 their REST API, but that doesn't help gating tests or unit tests. Also, this 
 means every service will need to support exposing schema files. Big 
 coordination problem.

I kind of liked this schemaURL endpoint idea when it was first
mooted at summit.

The attraction for me was that it would allow the consumer of the
notifications always have access to the actual version of schema
currently used on the emitter side, independent of the (possibly
out-of-date) version of the schema that the consumer has itself
installed locally via a static dependency.

However IIRC there were also concerns expressed about the churn
during some future rolling upgrades - i.e. if some instances of
the nova-api schemaURL endpoint are still serving out the old
schema, after others in the same deployment have already been
updated to emit the new notification version.

 c. In production, We could add an endpoint to the Keystone Service Catalog to 
 each schema file. This could come from a separate metadata-like service. 
 Again, yet-another-service to deploy and make highly available. 

Also to {puppetize|chef|ansible|...}-ize.

Yeah, agreed, we probably don't want to do down that road.

 d. Should we make separate distro packages? Install to a well known location 
 all the time? This would work for local dev and integration testing and we 
 could fall back on B and C for production distribution. Of course, this will 
 likely require people to add a new distro repo. Is that a concern?

Quick clarification ... when you say distro packages, do you mean 
Linux-distro-specific package formats such as .rpm or .deb?

Cheers,
Eoghan
 
 Personally, I'm leaning towards option D but I'm not sure what the 
 implications are. 
 
 We're early in thinking about these problems, but would like to start the 
 conversation now to get your opinions. 
 
 Look forward to your feedback.
 
 Thanks
 -Sandy
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Counting resources

2014-11-20 Thread Eoghan Glynn

 Aloha guardians of the API!
 
 I haven recently* reviewed a spec for neutron [1] proposing a distinct URI
 for returning resource count on list operations.
 This proposal is for selected neutron resources, but I believe the topic is
 general enough to require a guideline for the API working group. Your
 advice is therefore extremely valuable.
 
 In a nutshell the proposal is to retrieve resource count in the following
 way:
 GET /prefix/resource_name/count
 
 In my limited experience with RESTful APIs, I've never encountered one that
 does counting in this way. This obviously does not mean it's a bad idea.
 I think it's not great from a usability perspective to require two distinct
 URIs to fetch the first page and then the total number of elements. I
 reckon the first response page for a list operation might include also the
 total count. For example:
 
 {'resources': [{meh}, {meh}, {meh_again}],
  'resource_count': 55
  link_to_next_page}

How about allowing the caller to specify what level of detail
they require via the Accept header?

▶ GET /prefix/resource_name
  Accept: application/json; detail=concise

◀ HTTP/1.1 200 OK
  Content-Type: application/json
  {'resource_count': 55,
   other_collection-level_properties}

▶ GET /prefix/resource_name
  Accept: application/json

◀ HTTP/1.1 200 OK
  Content-Type: application/json
  {'resource_count': 55,
   other_collection-level_properties,
   'resources': [{meh}, {meh}, {meh_again}],
   link_to_next_page}

Same API, but the caller can choose not to receive the embedded
resource representations if they're only interested in the
collection-level properties such as the count (if the collection
is indeed countable).

Cheers,
Eoghan
 
 I am however completely open to consider other alternatives.
 What is your opinion on this matter?
 
 Regards,
 Salvatore
 
 
 * it's been 10 days now
 
 [1] https://review.openstack.org/#/c/102199/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-15 Thread Eoghan Glynn
1. *make a minor concession to proportionality* - while keeping the
   focus on consensus, e.g. by adopting the proportional Condorcet
   variant.
  
  It would be interesting to see the analysis again, but in the past this
  proved to not make much difference.
 
 For the record, I just ran the ballots in CIVS proportional mode and
 obtained the same set of winners:
 
 http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_88cae988dff29be6

Thanks for doing this Thierry,

Not surprising that it made no difference to the outcome this time,
given the smaller number of seats contested and the gap between the
first 6 and the trailing pack. IIRC the only previous election where
your analysis showed the proportional variant would have any impact
was the Oct 2013 contest for 11 seats, where the margins were tighter
in the lower preferences.

So in the absence of switching to simultaneous terms, adopting the
proportional variant of Condorcet is probably not worth the extra
conceptual complexity for the interested voter to digest.

Of course, we could just throw in the towel and cede the community
leadership to a deck of playing cards ;)

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-15 Thread Eoghan Glynn

 It's certainly not a fun thing to do (trying to guide a
 community of disjoint folks) and it likely comes with little
 recognition when successful, but IMHO we surely need more of
 this active technical leadership vs. blessing of projects; of
 course the boundary between being a engaging leadership and one
 that is perceived as trying to dictate technical goals for
 projects must be approached carefully (and with thoughtfulness)
 to avoid creating more problems than solutions...

That's a good point Josh,

TBH I'm not entirely sure if the likely result of the proposed
big tent / small core duality would be more or less active
technical leadership, given the absence of carrots and sticks.

Certainly, I couldn't foresee anything like the Juno gap analysis
exercise happening under that structure, at least not for any of
the projects outside the ring-zero laager.

I suspect that example-based nudges[1] on common concerns would
be the practical alternative (hey, this is what we've done to
address issue X, you might want to try that approach also ...)

Cheers,
Eoghan

[1] http://en.wikipedia.org/wiki/Nudge_theory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-11 Thread Eoghan Glynn

  I think you missed the most important option I mentioned in the thread - 
  for 
  the TC to engage more with the community and take an active technical 
  leadership role in the design of OpenStack as a whole.
 
 +1
 
 I've been in this scene for about half a year now and the only
 reason I have any particular awareness of TC activities is because I
 dig for it, hard.

Believe it or not, the communication of what the TC is up to has
actually improved significantly over the last cycle, with those
regular TC Update posts from russellb, ttx, vishy  others:

  http://www.openstack.org/blog/tag/technical-committee

The community engagement aspect I referred to up-thread was more
related to whether that communication channel is as bi-directional
as it could or should be.

Cheers,
Eoghan

 I voted in this most recent election mostly for novelty, not because
 I had a strong sense that I was engaging a representative process
 that was going to impact direction.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-10 Thread Eoghan Glynn

  So, just to round out this thread, the key questions are:
 
* whether a low  declining turnout is a real problem
 
  and, if so:
 
* could this have been driven by a weakness in the voting model,
  and/or the perception of representative balance in the outcomes
 
  The options that were mooted on the thread could be ranked in order
  of how radical they are, and how likely to have an impact:
 
0. *do nothing* - accept low turnout as a fact of life, or hope that
   natural factors such as a slowdown in contributor growth will
   eventually cause it to stabilize.
 
1. *make a minor concession to proportionality* - while keeping the
   focus on consensus, e.g. by adopting the proportional Condorcet
   variant.
 
 It would be interesting to see the analysis again, but in the past this 
 proved to not make much difference.

True.

For the first TC2.0 election IIRC it only changed the destination of the
last seat.

2. *weaken the continuity guarantee* - by un-staggering the terms,
   so that all seats are contested at each election.
 
 This is probably not feasible.

Interesting, why do you think it wouldn't be feasible?

(Given that in the first TC2.0 election, 11 of 13 seats were contested,
 with 2 just seats held back for term completion, and short-terms allocated
 to 5 of the 11 winners - so seemed to me we could revert to simultaneous
 terms after just one cycle of transition) 

3. *go all out on proportionality* - by adopting a faction-oriented
   voting model, such as STV with simultaneous terms.
 
 I actually like Condorcet (to be clear, I meant that any possible 
 problems with Condorcet are addressable with better education, not by 
 changing the system). I don't think STV would be a good move.

Fair enough. STV is likely to give rise to different outcomes to Condorcet,
so at least worth doing the thought-experiment as to how that might impact
on the functioning and perception of the TC.

4. *ensure some minimal turn-over* - by adopting either traditional
   term limits, or the more nuanced approach that Jeremy referenced
   up-thread.
 
  If it came down to it, my money would be on #2 or #3 for the reasons
  stated before. With the added bonus that this would allow TC elections
  to be viewed more as a plebiscite on some major issue (such as the
  layering discussions).
 
 I think you missed the most important option I mentioned in the thread - 
 for the TC to engage more with the community and take an active 
 technical leadership role in the design of OpenStack as a whole.

Fair point, I should have qualified that I was summarizing mooted changes
to the electoral system itself, as opposed to how the TC leads, engages
with the community etc.

The growth challenges session at summit was the most prominent example
of that IMO.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-01 Thread Eoghan Glynn


  +1 to this, with a term limit.
 
 Notable that the Debian TC has been discussing term limits for
 months now, and since DebConf they seem to have gotten much closer
 to a concrete proposal[1] in the last week or so. Could be worth
 watching for ideas on how our community might attempt to implement
 something similar.

That is indeed an interesting approach that the Debian folks are
considering.

So, just to round out this thread, the key questions are:

 * whether a low  declining turnout is a real problem

and, if so:

 * could this have been driven by a weakness in the voting model,
   and/or the perception of representative balance in the outcomes

The options that were mooted on the thread could be ranked in order
of how radical they are, and how likely to have an impact:

 0. *do nothing* - accept low turnout as a fact of life, or hope that
natural factors such as a slowdown in contributor growth will
eventually cause it to stabilize.

 1. *make a minor concession to proportionality* - while keeping the
focus on consensus, e.g. by adopting the proportional Condorcet
variant.

 2. *weaken the continuity guarantee* - by un-staggering the terms,
so that all seats are contested at each election.

 3. *go all out on proportionality* - by adopting a faction-oriented
voting model, such as STV with simultaneous terms.

 4. *ensure some minimal turn-over* - by adopting either traditional
term limits, or the more nuanced approach that Jeremy referenced
up-thread.

If it came down to it, my money would be on #2 or #3 for the reasons
stated before. With the added bonus that this would allow TC elections
to be viewed more as a plebiscite on some major issue (such as the
layering discussions).

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-01 Thread Eoghan Glynn


  So, just to round out this thread, the key questions are:
  
   * whether a low  declining turnout is a real problem
 
 I'd like to point out that there 580 'regular' contributors at the
 moment[1], these are the authors of 95% of the OpenStack code. 506 total
 number of voters.

Thanks Stef, there's great data on that dashboard.

I've added the regular contributors per-release to the table (up-thread)
showing the percentage of single-patch contributors:

  Release | Committers | Single-patch | 2-cycle MA | Regular
  --
  Juno| 1556   | 485 (31.2%)  | 29.8%  | 444 (28.5%)
  Icehouse| 1273   | 362 (28.4%)  | 28.0%  | 378 (29.7%)
  Havana  | 1005   | 278 (27.6%)  | 28.0%  | 293 (29.2%)
  Grizzly | 630| 179 (28.4%)  | 29.2%  | 186 (33.8%)
  Folsom  | 401| 120 (29.9%)  | 27.9%  | 107 (26.7%) 

Apart from a spike around grizzly, we're not seeing a noticeable
dilution of the regular cohort within the total committer population.
 
  and, if so:
  
   * could this have been driven by a weakness in the voting model,
 and/or the perception of representative balance in the outcomes
 
 I would suggest to explore another possible cause: are the elections
 advertised enough? Is there enough time for developers to understand
 also this important part of OpenStack and get involved? Do we use all
 possible channels to communicate the elections and their importance?
 
 Thoughts?

Sure, more and better communication is always good.

Though I don't know if was anything much different about how this
election cycle was advertized compared to others, or whether the
typical voter had become harder to reach since previous cycles.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Eoghan Glynn

   IIRC, there is no method for removing foundation members. So there
   are likely a number of people listed who have moved on to other
   activities and are no longer involved with OpenStack. I'd actually
   be quite interested to see the turnout numbers with voters who
   missed the last two elections prior to this one filtered out.
  
  Well, the base electorate for the TC are active contributors with
  patches landed to official projects within the past year, so these
  are devs getting their code merged but not interested in voting.
  This is somewhat different from (though potentially related to) the
  dead weight foundation membership on the rolls for board
  elections.
  
  Also, foundation members who have not voted in two board elections
  are being removed from the membership now, from what I understand
  (we just needed to get to the point where we had two years worth of
  board elections in the first place).
 
 Thanks, I lost my mind here and confused the board with the TC.
 
 So then my next question is, of those who did not vote, how many are
 from under-represented companies? A higher percentage there might point
 to disenfranchisement.

Well, that we don't know, because the ballots are anonymized.

So we can only make a stab at detecting partisan voting patterns, in
the form a strict preference for candidates from one company over all
others, but we've no way of knowing whether voters from those same
companies actually cast the ballots in question.

... i.e. from these data, the conclusion that the preferred pairs of
candidates were just more popular across-the-board would be equally
valid.

Conversely, we've no way of knowing if the voters employed by those
under-represented companies you mention had a higher or lower turnout
than the average.

If there is a concern about balanced representation, then the biggest
single change we could make to address this, IMO, would be to contest
all TC seats at all elections.

Staggered terms optimize for continuity, but by amplifying the majority
voice (if such a thing exists in our case), they tend to pessimize for
balanced representation.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Eoghan Glynn


  IIRC, there is no method for removing foundation members. So there
  are likely a number of people listed who have moved on to other
  activities and are no longer involved with OpenStack. I'd actually
  be quite interested to see the turnout numbers with voters who
  missed the last two elections prior to this one filtered out.
 
  Well, the base electorate for the TC are active contributors with
  patches landed to official projects within the past year, so these
  are devs getting their code merged but not interested in voting.
  This is somewhat different from (though potentially related to) the
  dead weight foundation membership on the rolls for board
  elections.
 
  Also, foundation members who have not voted in two board elections
  are being removed from the membership now, from what I understand
  (we just needed to get to the point where we had two years worth of
  board elections in the first place).
  
  Thanks, I lost my mind here and confused the board with the TC.
  
  So then my next question is, of those who did not vote, how many are
  from under-represented companies? A higher percentage there might point
  to disenfranchisement.
 
 Different but related question (might be hard to calculate though):
 
 If we remove people who have only ever landed one patch from the
 electorate, what do the turnout numbers look like? 2? 5?
 
 Do we have the ability to dig in slightly and find a natural definition
 or characterization amongst our currently voting electorate that might
 help us understand who the people are who do vote and what it is about
 those people who might be or feel different or more enfranchised? I've
 personally been thinking that the one-patch rule is, while tractable,
 potentially strange for turnout - especially when one-patch also gets
 you a free summit pass... but I have no data to say what actually
 defined active in active technical contributor.

Again, the ballots are anonymized so we've no way of doing that analysis.

The best we could IIUC would be to analyze the electoral roll, bucketizing
by number of patches landed, to see if there's a significant long-tail of
potential voters with very few patches.

But that's just as likely to be a distortion caused by the free summit
pass policy, so I'm not sure we could draw any solid conclusions on that
basis.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Eoghan Glynn


  IIRC, there is no method for removing foundation members. So there
  are likely a number of people listed who have moved on to other
  activities and are no longer involved with OpenStack. I'd actually
  be quite interested to see the turnout numbers with voters who
  missed the last two elections prior to this one filtered out.
 
  Well, the base electorate for the TC are active contributors with
  patches landed to official projects within the past year, so these
  are devs getting their code merged but not interested in voting.
  This is somewhat different from (though potentially related to) the
  dead weight foundation membership on the rolls for board
  elections.
 
  Also, foundation members who have not voted in two board elections
  are being removed from the membership now, from what I understand
  (we just needed to get to the point where we had two years worth of
  board elections in the first place).
 
  Thanks, I lost my mind here and confused the board with the TC.
 
  So then my next question is, of those who did not vote, how many are
  from under-represented companies? A higher percentage there might point
  to disenfranchisement.
 
  Different but related question (might be hard to calculate though):
 
  If we remove people who have only ever landed one patch from the
  electorate, what do the turnout numbers look like? 2? 5?
 
  Do we have the ability to dig in slightly and find a natural definition
  or characterization amongst our currently voting electorate that might
  help us understand who the people are who do vote and what it is about
  those people who might be or feel different or more enfranchised? I've
  personally been thinking that the one-patch rule is, while tractable,
  potentially strange for turnout - especially when one-patch also gets
  you a free summit pass... but I have no data to say what actually
  defined active in active technical contributor.
  
  Again, the ballots are anonymized so we've no way of doing that analysis.
  
  The best we could IIUC would be to analyze the electoral roll, bucketizing
  by number of patches landed, to see if there's a significant long-tail of
  potential voters with very few patches.
 
 Just looking at stackalytices numbers for Juno: Out of 1556 committers,
 1071 have committed more than one patch and 485 only a single patch.
 That's a third!

Here's the trend over the past four cycles, with a moving average in the
last column, as the eligible voters are derived from the preceding two
cycles:

 Release | Committers | Single-patch | 2-cycle MA
 
 Juno| 1556   | 485 (31.2%)  | 29.8%
 Icehouse| 1273   | 362 (28.4%)  | 28.0%
 Havana  | 1005   | 278 (27.6%)  | 28.8%
 Folsom  | 401| 120 (29.9%)  | 27.9%

So the proportion of single-patch committers is creeping up slowly, but
not at a rate that would account for the decline in voter turnout.

And since we've no way of knowing if voting patterns among the single-patch
committers differs in any way from the norm, these data don't tell us much.

If we're serious about improving participation rates, then I think we
should consider factors what would tend to drive interest levels and
excitement around election time. My own suspicion is that open races
where the outcome is in doubt are more likely to garner attention from
voters, than contests that feel like a foregone conclusion. That would
suggest un-staggering the terms as a first step.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Eoghan Glynn


   IIRC, there is no method for removing foundation members. So there
   are likely a number of people listed who have moved on to other
   activities and are no longer involved with OpenStack. I'd actually
   be quite interested to see the turnout numbers with voters who
   missed the last two elections prior to this one filtered out.
  
   Well, the base electorate for the TC are active contributors with
   patches landed to official projects within the past year, so these
   are devs getting their code merged but not interested in voting.
   This is somewhat different from (though potentially related to) the
   dead weight foundation membership on the rolls for board
   elections.
  
   Also, foundation members who have not voted in two board elections
   are being removed from the membership now, from what I understand
   (we just needed to get to the point where we had two years worth of
   board elections in the first place).
  
   Thanks, I lost my mind here and confused the board with the TC.
  
   So then my next question is, of those who did not vote, how many are
   from under-represented companies? A higher percentage there might point
   to disenfranchisement.
  
   Different but related question (might be hard to calculate though):
  
   If we remove people who have only ever landed one patch from the
   electorate, what do the turnout numbers look like? 2? 5?
  
   Do we have the ability to dig in slightly and find a natural definition
   or characterization amongst our currently voting electorate that might
   help us understand who the people are who do vote and what it is about
   those people who might be or feel different or more enfranchised? I've
   personally been thinking that the one-patch rule is, while tractable,
   potentially strange for turnout - especially when one-patch also gets
   you a free summit pass... but I have no data to say what actually
   defined active in active technical contributor.
   
   Again, the ballots are anonymized so we've no way of doing that analysis.
   
   The best we could IIUC would be to analyze the electoral roll,
   bucketizing
   by number of patches landed, to see if there's a significant long-tail of
   potential voters with very few patches.
  
  Just looking at stackalytices numbers for Juno: Out of 1556 committers,
  1071 have committed more than one patch and 485 only a single patch.
  That's a third!
 
 Here's the trend over the past four cycles, with a moving average in the
 last column, as the eligible voters are derived from the preceding two
 cycles:
 
  Release | Committers | Single-patch | 2-cycle MA
  
  Juno| 1556   | 485 (31.2%)  | 29.8%
  Icehouse| 1273   | 362 (28.4%)  | 28.0%
  Havana  | 1005   | 278 (27.6%)  | 28.8%
  Folsom  | 401| 120 (29.9%)  | 27.9%

Correction, I skipped a cycle in that table:

  Release | Committers | Single-patch | 2-cycle MA
  
  Juno| 1556   | 485 (31.2%)  | 29.8%
  Icehouse| 1273   | 362 (28.4%)  | 28.0%
  Havana  | 1005   | 278 (27.6%)  | 28.0%
  Grizzly | 630| 179 (28.4%)  | 29.2%
  Folsom  | 401| 120 (29.9%)  | 27.9%

Doesn't alter the trend though, still quite flat with some jitter and
a small uptick.

Cheers,
Eoghan 
 
 So the proportion of single-patch committers is creeping up slowly, but
 not at a rate that would account for the decline in voter turnout.
 
 And since we've no way of knowing if voting patterns among the single-patch
 committers differs in any way from the norm, these data don't tell us much.
 
 If we're serious about improving participation rates, then I think we
 should consider factors what would tend to drive interest levels and
 excitement around election time. My own suspicion is that open races
 where the outcome is in doubt are more likely to garner attention from
 voters, than contests that feel like a foregone conclusion. That would
 suggest un-staggering the terms as a first step.
 
 Cheers,
 Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][All] Prelude to functional testing summit discussions

2014-10-30 Thread Eoghan Glynn


 Hi everyone,
 
 Before we start the larger discussion at summit next week about the future of
 testing in OpenStack - specifically about spinning up functional testing and
 how
 it relates to tempest - I would like to share some of my thoughts on how we
 can
 get things started and how I think they'll eventually come together.
 
 Currently in tempest we have a large number of tests (mostly api-focused)
 which are probably a better fit for a project's functional test suite. The
 best
 example I can think of is the nova flavors tests. Validation of flavor
 manipulation doesn't need to run in the integrated test suite on every commit
 to
 every project because it only requires Nova. A simple win for initiating
 in-tree
 functional testing would be to move these kinds of tests into the projects
 and
 run the tests from the project repos instead of from tempest.
 
 This would have the advantage of making tempest slimmer for every project
 and begin the process of getting projects to take responsibility for their
 functional testing rather than relying on tempest. As tests are moved tempest
 can start to become the integration test suite it was meant to be. It would
 retain only tests that involve multiple projects and stop being the OpenStack
 black box testing suite. I think that this is the right direction for tempest
 moving forward, especially as we move to having project-specific functional
 testing.
 
 Doing this migration is dependent on some refactors in tempest and moving
 the required bits to tempest-lib so they can be easily consumed by the
 other projects. This will be discussed at summit, is being planned
 for implementation this cycle, and is similar to what is currently in
 progress
 for the cli tests.
 
 The only reason this testing existed in tempest in the first place was as
 mechanism to block and then add friction against breaking api changes.
 Tempest's
 api testing has been been pretty successful at achieving these goals. We'll
 want
 to ensure that migrated tests retain these characteristics. If we are using
 clients from tempest-lib we should get this automatically since to break
 the api you'd have to change the api client. Another option proposed was to
 introduce a hacking rule that would block changes to api tests at the same
 time
 other code was being changed.
 
 There is also a concern for external consumers of tempest if we move the
 tests
 out of the tempest tree (I'm thinking refstack). I think the solution is
 to maintain a load_tests discovery method inside of tempest or elsewhere that
 will run the appropriate tests from the other repos for something like
 refstack.
 Assuming that things are built in a compatible way using the same framework
 then
 running the tests from separate repos should be a simple matter of pointing
 the
 test runner in the right direction.
 
 I also want to comment on the role of functional testing. What I've proposed
 here is only one piece of what project specific functional testing should be
 and just what I feel is a good/easy start. I don't feel that this should be
 the only testing done in the projects.  I'm suggesting this as a first
 step because the tests already exist and it should be a relatively simple
 task.
 I also feel that using tempest-lib like this shouldn't be a hard requirement.
 Ideally the client definitions shouldn't have to live externally, or if they
 did
 they would be the official clients, but I am suggesting this as a first step
 to
 start a migration out of tempest.
 
 I don't want anyone to feel that they need block their functional testing
 efforts until tempest-lib becomes more useable. The larger value from
 functional
 testing is actually in enabling testing more tightly coupled to the projects
 (e.g. whitebox testing). I feel that any work necessary to enable functional
 testing should be prioritized.

Thanks Matt for getting the ball rolling on this conversation in advance
of summit.

Probably stating the obvious here, but I feel we should make a concious
effort to keep the approaches to in-tree functional testing as consistent
as possible across projects.

Towards that end, it would be good for folks with an interest in this area
to attend each other's sessions where possible:
  
 Cross-project: Tue, 12:05 [1]
 Heat:  Wed, 13:50 [2]
 Nova:  Wed, 16:30 [3]
 Ceilometer:Wed, 17:20 [4]
 QA:Wed, 17:20 [5]

Unfortunately there's a clash there between the QA tempest-lib session
and the ceilo session. I'm not sure how reasonable it would be to make
a last-minute schedule change to avoid that clash.

Cheers,
Eoghan

[1] http://kilodesignsummit.sched.org/event/575938e4837e8293615845582d7e3e7f
[2] http://kilodesignsummit.sched.org/event/eb261fb08b18ec1eaa2c67492e7fc385
[3] http://kilodesignsummit.sched.org/event/271a9075c1ced6c1269100ff4b8efc37
[4] http://kilodesignsummit.sched.org/event/daf63526a1883e84cec107c70cc6cad3
[5] 

Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Eoghan Glynn

  If we're serious about improving participation rates, then I think we
  should consider factors what would tend to drive interest levels and
  excitement around election time. My own suspicion is that open races
  where the outcome is in doubt are more likely to garner attention from
  voters, than contests that feel like a foregone conclusion. That would
  suggest un-staggering the terms as a first step.

 I am curious why you believe the staggering is dramatically changing the
 outcome of the elections.

Well, I don't.

In fact I've already stated the opposite in a prior discussion on TC
election turnout[1].

So I don't think un-staggering the terms would dramatically alter the
outcome, but I do think it would have a better chance of increasing the
voter turnout, than say standardized questionnaires. 

The last few seats would be perceived as being in play to a greater
extent IMO, hence increasing both voter interest, and possibly promoting
slightly more balance in the representation.

On the balance aspect, which does concern the outcome, the logic goes
along the lines that the impact of the majority opinion is amplified by
being applied *independently* to each staggered cohort ... e.g. the same
voter can rate both Monty and Thierry, say, as their #1 preference.

In the real world, I believe research suggests a weak correlation between
simultaneous terms and representation of minorities in local government;
some references can be found in [2] if interested, as per usual with
academic research, paywalls abound :(. The applicability of such research
to our scenario is, of course, questionable.

This is one further quirk (bug?) in the design of TC2.0, that may tend to
muddy the waters: the results of original TC2.0 election were used to
determine the term cohorts, as opposed to a random selection.

So the most popular candidates from that race who're still in the running
end in competition with each other every second election, whereas the less
popular remaining candidates contest the alternate elections.  

Switching to simultaneous terms would also remove that quirk (or fix that
bug, depending on your PoV).

Cheers,
Eoghan

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035832.html
[2] http://books.google.ie/books?id=xSibrZC0XSQC

 Because this is a condorcet system, and not a
 weighted one vote one, in a staggered election that would just mean
 that: Thierry, Vish, Jim, Mark, Jay, Michael, and Deva would be in the
 pool as well. Who I'd honestly expect to be ranked really highly (based
 on past election results, and based on their impact across a lot of
 projects).
 
 If there is some reference you have about why a race for 6 or 7 seats
 with 6 or 7 incumbents is considered less open than a race for 13 seats
 with 13 incumbents, would be great to see. Because to me, neither the
 math nor the psychology seem to support that.
 
 Note in both elections since we started open elections all incumbents
 that chose to run were re-elected. Which is approximately the same
 results we've seen in PTL elections (with only 1 exception that I know
 of). So that seems consistent with the rest of the community tone. We
 can argue separately whether we should be actively creating more turn
 over across the board, maybe we should be. But the TC doesn't seem
 massively out of line with the rest of the elections we do in the project.
 
 -Sean
 
 
  Cheers,
  Eoghan
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 --
 Sean Dague
 http://dague.net
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][All] Prelude to functional testing summit discussions

2014-10-30 Thread Eoghan Glynn

 Matthew wrote:

 This would have the advantage of making tempest slimmer for every project
 and begin the process of getting projects to take responsibility for their
 functional testing rather than relying on tempest.

[much snipping]

 Sean wrote:

 Ok, so part of this remains to be seen about what the biggest bang for the
 buck is. The class of bugs I feel like we need to nail in Nova right now are
 going to require tests that bring up pieces of the wsgi stack, but are
 probably not runable on a real deploy. Again, this is about debugability.

So this notion of the biggest bang for our buck is an aspect of the drive
for in-tree functional tests, that's not entirely clear to me as yet.

i.e. whether individual projects should be prioritizing within this effort:

(a) the creation of net-new coverage for scenarios (especially known or
suspected bugs) that were not previously tested, in a non-unit sense

(b) the relocation of existing integration test coverage from Tempest to
the project trees, in order to make the management of Tempest more
tractable

It feels like there may be a tension between (a) and (b) in terms of the
pay-off for this effort. I'd interested in hearing other opinions on this,
on what aspect projects are expecting (and expected) to concentrate on
initially.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Topics for the Board/TC joint meeting in Paris

2014-10-30 Thread Eoghan Glynn


 This is already on the agenda proposed by the board (as well as a quick
 presentation on the need for structural reform in the ways we handle
 projects in OpenStack).

Would it be possible for the slidedeck and a quick summary of that
presentation to be posted to the os-dev list after the Board/TC joint
meeting on Sunday?

(Given that discussion will be highly relevant to the cross-project
sessions on Growth Challenges running on the Tuesday)

Thanks,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-30 Thread Eoghan Glynn


 The low (and dropping) level of turnout is worrying, particularly in
 light of that analysis showing the proportion of drive-by contributors
 is relatively static, but it is always going to be hard to discern the
 motives of people who didn't vote from the single bit of data we have on
 them.
 
 There is, however, another metric that we can pull from the actual
 voting data: the number of candidates actually ranked by each voter:
 
 Candidates
rankedFrequency
 
   08   2%
   1   17   3%
   2   24   5%
   3   20   4%
   4   31   6%
   5   36   7%
   6   68  13%
   7   39   8%
   8   17   3%
   99   2%
  10   21   4%
  11-   -
  12  216  43%
 
 (Note that it isn't possible to rank exactly n-1 candidates.)

 So even amongst the group of people who were engaged enough to vote,
 fewer than half ranked all of the candidates. A couple of hypotheses
 spring to mind:
 
 1) People don't understand the voting system.
 
 Under Condorcet, there is no such thing as tactical voting by an
 individual. So to the extent that these figures might reflect deliberate
 'tactical' voting, it means people don't understand Condorcet. The size
 of the spike at 6 (the number of positions available - the same spike
 appeared at 7 in the previous election) strongly suggests that lack of
 understanding of the voting system is at least part of the story. The
 good news is that this problem is eminently addressable.

Addressable by educating the voters on the subtleties of Condorcet, or
by switching to another model such as the single-transferable vote?  

I can see the attractions of Condorcet, in particular it tends to favor
consensus over factional candidates. Which could be seen as A Good Thing.

But in our case, seems to me, we're doubling down on consensus.

By combining Condorcet with staggered terms and no term limits, seems
we're favoring both consensus in general and the tendency to return the
*same* consensus candidates. (No criticism of the individual candidates
intended, just the sameness)

STV on the other hand combined with simultaneous terms, is actually
used in the real world[1] and has the advantage of ensuring factions
get some proportional representation and hence don't feel excluded
or disenfranchised.

Just a thought ... given that we're in the mood, as a community, to
consider radical structural reforms.

Cheers,
Eoghan

[1] so at least would be familiar to the large block of Irish and
Australian voters ... though some centenarian citizens of
Marquette, Michigan, may be a tad more comfortable with Condorcet ;)


 2) People aren't familiar with the candidates
 
 This is the one that worries me - it looks a lot like most voters are
 choosing not to rank many of the candidates because they don't feel they
 know enough about them to have an opinion. It seems to me that the TC
 has failed to engage the community enough on the issues of the day to
 move beyond elections as a simple name-recognition contest. (Kind of
 like how I imagine it is when you have to vote for your local
 dog-catcher here in the US. I have to imagine because they don't let me
 vote.) It gets worse, because the less the TC tries to engage the
 community on the issues and the less it attempts to actually lead (as
 opposed to just considering checklists and voting to ask for more time
 to consider checklists), the more entrenched the current revolving-door
 membership becomes. So every election serves to reinforce the TC
 members' perception that everything is going great, and also to
 reinforce the perception of those whose views are not represented that
 the TC is an echo chamber from which their views are more or less
 structurally excluded. That's a much harder problem to address.
 
 cheers,
 Zane.
 
 
  Cheers,
  Eoghan
 
  So the proportion of single-patch committers is creeping up slowly, but
  not at a rate that would account for the decline in voter turnout.
 
  And since we've no way of knowing if voting patterns among the
  single-patch
  committers differs in any way from the norm, these data don't tell us
  much.
 
  If we're serious about improving participation rates, then I think we
  should consider factors what would tend to drive interest levels and
  excitement around election time. My own suspicion is that open races
  where the outcome is in doubt are more likely to garner attention from
  voters, than contests that feel like a foregone conclusion. That would
  suggest un-staggering the terms as a first step.
 
  Cheers,
  Eoghan
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


[openstack-dev] TC election by the numbers

2014-10-29 Thread Eoghan Glynn

Folks,

I haven't seen the customary number-crunching on the recent TC election,
so I quickly ran the numbers myself.

Voter Turnout
=

The turnout rate continues to decline, in this case from 29.7% to 26.7%.

Here's how the participation rates have shaped up since the first TC2.0
election:

 Election | Electorate | Voted | Turnout | Change
 
 10/2013  | 1106   | 342   | 30.9%   | -8.0% 
 04/2014  | 1510   | 448   | 29.7%   | -4.1%
 10/2014  | 1892   | 506   | 26.7%   | -9.9%

Partisan Voting
===

As per the usual analysis done by ttx, the number of ballots that
strictly preferred candidates from an individual company (with
multiple candidates) above all others:

 HP   ahead in 30 ballots (5.93%)
 RHAT ahead in 18 ballots (3.56%)
 RAX  ahead in 8  ballots (1.58%)

The top 6 pairings strictly preferred above all others were:

 35 voters (6.92%) preferred Monty Taylor  Doug Hellmann  (HP/HP)
 34 voters (6.72%) preferred Monty Taylor  Sean Dague (HP/HP)
 26 voters (5.14%) preferred Anne Gentle  Monty Taylor(RAX/HP)
 21 voters (4.15%) preferred Russell Bryant  Sean Dague   (RHAT/HP)
 21 voters (4.15%) preferred Russell Bryant  Eoghan Glynn (RHAT/RHAT)
 16 voters (3.16%) preferred Doug Hellmann  Sean Dague(HP/HP)

Conclusion
==

The rate of potentially partisan voting didn't diverge significantly
from the norms we've seen in previous elections.

The continuing decline in the turnout rate is a concern however, as
the small-scale changes tried (blogging on TC activity, standardized
questions in the TC nomination mails) have not arrested the fall-off
in participation.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-10-29 Thread Eoghan Glynn


  On Oct 29, 2014, at 3:32 PM, Eoghan Glynn egl...@redhat.com wrote:
  
  
  Folks,
  
  I haven't seen the customary number-crunching on the recent TC election,
  so I quickly ran the numbers myself.
  
  Voter Turnout
  =
  
  The turnout rate continues to decline, in this case from 29.7% to 26.7%.
  
  Here's how the participation rates have shaped up since the first TC2.0
  election:
  
  Election | Electorate | Voted | Turnout | Change
  
  10/2013  | 1106   | 342   | 30.9%   | -8.0%
  04/2014  | 1510   | 448   | 29.7%   | -4.1%
  10/2014  | 1892   | 506   | 26.7%   | -9.9%
 
 
 Overall percentage of the electorate voting is declining, but absolute
 numbers of voters has increased. And in fact, the electorate has grown more
 than the turnout has declined.

True that, but AFAIK the generally accepted metric on participation rates
in elections is turnout as opposed to absolute voter numbers.

Cheers,
Eoghan

  
  Partisan Voting
  ===
  
  As per the usual analysis done by ttx, the number of ballots that
  strictly preferred candidates from an individual company (with
  multiple candidates) above all others:
  
  HP   ahead in 30 ballots (5.93%)
  RHAT ahead in 18 ballots (3.56%)
  RAX  ahead in 8  ballots (1.58%)
  
  The top 6 pairings strictly preferred above all others were:
  
  35 voters (6.92%) preferred Monty Taylor  Doug Hellmann  (HP/HP)
  34 voters (6.72%) preferred Monty Taylor  Sean Dague (HP/HP)
  26 voters (5.14%) preferred Anne Gentle  Monty Taylor(RAX/HP)
  21 voters (4.15%) preferred Russell Bryant  Sean Dague   (RHAT/HP)
  21 voters (4.15%) preferred Russell Bryant  Eoghan Glynn (RHAT/RHAT)
  16 voters (3.16%) preferred Doug Hellmann  Sean Dague(HP/HP)
  
  Conclusion
  ==
  
  The rate of potentially partisan voting didn't diverge significantly
  from the norms we've seen in previous elections.
  
  The continuing decline in the turnout rate is a concern however, as
  the small-scale changes tried (blogging on TC activity, standardized
  questions in the TC nomination mails) have not arrested the fall-off
  in participation.
  
  Cheers,
  Eoghan
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] scheduling Kilo summit topics

2014-10-16 Thread Eoghan Glynn

Folks,

Just a quick reminder that we'll be considering the summit design
sessions proposals[1] at the weekly meeting today at 1500UTC.

We'll start the process of collaborative scheduling of topics with
each session proposer giving a short pitch for inclusion.

We've got 15 proposals already, contending for 6 slots, so let's try
to keep the pitches brief, as opposed to foreshadowing the entire
summit discussion :)

Thanks,
Eoghan 

[1] http://bit.ly/kilo-ceilometer-summit-topics 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-10-08 Thread Eoghan Glynn


Folks,

I would like to throw my hat into the ring for the upcoming Technical
Committee election.

I've been involved in the OpenStack community for nearly three years,
starting off by becoming core on glance, before moving my focus mainly
onto the ceilometer project. Along the way I've landed a smattering of
commits across nova, heat, cinder, horizon, neutron, oslo, devstack,
and openstack-manuals, while contributing to the stable-maint effort
over multiple cycles.

More recently, I've served the ceilometer project as PTL over the Juno
cycle, and will be doing so again for Kilo.

I'm employed by Red Hat to work primarily upstream, but I also have
some perspective on the sausage machine that operates downstream in
order to put the technology we produce into the hands of users.

My motivation in running is to bring more perspective from a smaller
project to the deliberations of the TC, which I think already has a
tonne of large-project and cross-project perspective to draw on.
Balance is a good and healthy thing :)

As a community we have a big challenge ahead of us to figure out how
we respond to the growing pains that been felt in our governance
structure and cross-project resources. This has crystallized around
the recent layering and big tent discussions. My own feeling has
always been in favor of a big welcoming tent, but I'm less convinced
that a small blessed core is necessarily a corollary of such
inclusiveness. Before we radically alter the release cycle model
that's served us relatively well thus far, I think a critical eye
should be brought to the proposals; in particular we really need to
clearly identify the core problems that these proposed changes are
intended to solve.

And so, onwards to the stock questions ...

Topic: OpenStack Mission


How do you feel the technical community is doing in meeting the
OpenStack Mission?

In all honesty, I would give us an A+ for effort, but more like a B-
for execution. In our excitement and enthusiasm to add shiny new
features, we sometimes take our eye off the ball in terms of what's
needed to make the lives of our users easier. I'm as guilty of this as
anyone else, perhaps even more so. But I think at this stage, it would
be of much benefit to our user community if we swung the pendulum
somewhat in the other direction, and shifted some focus onto easing
the practical challenges of operating our stuff at scale.

Topic: Technical Committee Mission
==

How do you feel the technical committee is doing in meeting the
technical committee mission?

Well, to be honest, if I thought this couldn't be improved, I wouldn't
be running for election.

On the one hand, I felt the gap analysis for already-integrated
projects was a good and healthy thing, and certainly assisted in
focusing resources over Juno on the areas where the TC saw
deficiencies.

On the other hand, I was quite disheartened by how some of the TC
discussions around project status played out. IMO there was a failure
of due diligence and mentoring. If we continue to have the TC making
important determinations about the future of projects (whether it be
their integrated status or their production readiness), then I think
we need to ensure that the TC keeps itself fully appraised from an
earlier stage about the direction that the project is taking, and
gives fair warning when it feels that a project needs a
course-correction.

Topic: Contributor Motivation
=

How would you characterize the various facets of contributor
motivation?

Most of my prior opensource experience was on various Apache projects
and in contrast it's striking that the OpenStack contributor community
is on the whole more skewed away from pure volunteerism, and towards
corporate contributors. By that, I mean contributors who are employed
by major operators and distributors of OpenStack, where their day-job
is to go forth and make it better. On balance, this is actually a
strength in our community. It certainly aids in the continuity and
sustainability of effort. It also helps ensure that less glamorous,
but ultra-important, efforts such as stable-maint and vulnerability
management get the attention they deserve.

However, despite this skew, I'm well aware that we're building a
broad church here. I think we all benefit from active efforts to
build diversity in our contributor community and harness the energy of
contributors with all kinds of motivations. Not just because it's the
right-on thing to do, but because it's the *smart* thing to do
... ensuring that we get access to all of the talents, especially from
previously under-represented groups. Towards this end, I continue to
support the efforts of programs such as the GNOME OPW and their recent
moves towards extending their reach to a wider set of contributor
backgrounds.

Topic: Rate of Growth
=

There is no argument the OpenStack technical community has a
substantial rate of 

Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-06 Thread Eoghan Glynn

  So a bit of background here. This began from thinking about functional
  dependencies, and pondering whether a map of the dependency graph of
  our projects could inform our gating structure, specifically to
  encourage (or dare I say, actually force) all of us (the project
  teams) to become more cognizant of the API contracts we are making
  with each other, and the pain it causes when we break those contracts.
 
  Let's not extend this exercise to a gigantic
  everything-needs-everything-to-do-everything picture, which is where
  it's heading now. Sure, telemetry is important for operators, and in
  no way am I saying anything else when I say: for Nova to fulfill its
  primary goal, telemetry is not necessary. It is optional. Desired, but
  optional.
 
  I don't follow the optional-but-not-necessary argument here, or
  where you're applying the cut-off for the graph not turning into
  a gigantic picture.
 
  There were a bunch of relationships in the original graph that
  are not strictly necessary for nova to fulfill it's primary goal,
  but are desired and existing functional dependencies in any case.
 
 
 For nova to do anything useful at all, it very simply needs an
 identity service (keystone), an image registry service (glance), and a
 network service (neutron (ignoring the fact that nova-network is still
 there because we actually want it to go away)). Without these, Nova is
 utterly useless.
 
 So, from a minimalist compute-centric perspective, THAT'S IT.

Sure, I get that idea of isolating the minimal set of dependencies
for the compute use-case.

I was questioning the fact the dependency graph referenced on the
thread earlier:

  http://i.imgur.com/y8zmNIM.png

selectively included *some* dependencies that lay outside this narrow
use-case for nova, but not others.

  So, are we trying to capture all dependencies here, or to apply
  some value-judgement and selectively capture just the good
  dependencies, for some definition of good?
 
 Nope. I am not making any value judgement whatsoever. I'm describing
 dependencies for minimally satisfying the intended purpose of a given
 project. For example, Nova's primary goal is not emit telemetry, it
 is scalable, on demand, self service access to compute resources [1]
 
 There are a lot of other super-awesome additional capabilities for
 which Nova depends on other services. And folks want to add more cool
 things on top of, next to, and underneath this ring compute. And
 make new non-compute-centric groups of projects. That's all wonderful.
 
 I happen to also fall in that camp - I think Ironic is a useful
 service, but I'm happy for it to not be in that inner ring of
 codependency. The nova.virt.ironic driver is optional from Nova's POV
 (Nova works fine without it), and Nova is optional from Ironic's POV
 (it's a bit awkward, but Ironic can deploy without Nova, though we're
 not testing it like that today).
 
 On the other hand, from a minimalist telemetry-centric perspective,
 what other projects do I need to run Ceilometer? Correct me if I'm
 wrong, but I think the answer is None.

At the very least, ceilometer would need keystone to function.

 Or rather, which ever ones I
 want. If I'm running Nova and not running Swift, Ceilometer works with
 Nova. If I'm running Swift but not Nova, Ceilometer works with Swift.
 There's no functional dependency on either Nova or Swift here - it's
 just consumption of an API. None of which is bad - but this informs
 how we gate test these projects, and how we allocate certain resources
 (like debugging gate-breaking bugs) across projects.

OK, so if project A doesn't *need* project B to function minimally,
but will use if it's available, and it's likely to be so in most
realistic deployments, then we still need to ensure somehow that
changes in project B don't break project A.

i.e. even if the dependency isn't a strict necessity, it may still
very likely be an actual reality in practice.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Eoghan Glynn

 As promised at this week’s TC meeting, I have applied the various blog posts
 and mailing list threads related to changing our governance model to a
 series of patches against the openstack/governance repository [1].
 
 I have tried to include all of the inputs, as well as my own opinions, and
 look at how each proposal needs to be reflected in our current policies so
 we do not drop commitments we want to retain along with the processes we are
 shedding [2].
 
 I am sure we need more discussion, so I have staged the changes as a series
 rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps won’t
 make sense without the changes that come after (hey, just like code!).

Thanks Doug for moving this discussion out of the blogosphere and
into gerrit. That will be very helpful in driving the discussion
forward.

However, given the proximity of the TC elections, should these all
patches all be workflow -1'd as WIP to ensure nothing lands before
the incoming TC is ratified?

(I'm assuming here that the decision-making on these fairly radical
proposals should rest with the new post-election TC - is that a
correct assumption?)

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-03 Thread Eoghan Glynn


- Original Message -
 
 
 On Fri, Oct 3, 2014 at 6:07 AM, Doug Hellmann  d...@doughellmann.com 
 wrote:
 
 
 
 
 On Oct 3, 2014, at 12:46 AM, Joe Gordon  joe.gord...@gmail.com  wrote:
 
 
 
 
 
 On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
 devananda@gmail.com  wrote:
 
 
 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann  d...@doughellmann.com 
 wrote:
  As promised at this week’s TC meeting, I have applied the various blog
  posts and mailing list threads related to changing our governance model to
  a series of patches against the openstack/governance repository [1].
  
  I have tried to include all of the inputs, as well as my own opinions, and
  look at how each proposal needs to be reflected in our current policies so
  we do not drop commitments we want to retain along with the processes we
  are shedding [2].
  
  I am sure we need more discussion, so I have staged the changes as a series
  rather than one big patch. Please consider the patches together when
  commenting. There are many related changes, and some incremental steps
  won’t make sense without the changes that come after (hey, just like
  code!).
  
  Doug
  
  [1]
  https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes
 
 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)
 
 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy
 
 
 After seeing Jay's idea of making a yaml file modeling things and talking to
 devananda about this I went ahead and tried to graph the relationships out.
 
 repo: https://github.com/jogo/graphing-openstack
 preliminary YAML file:
 https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
 sample graph: http://i.imgur.com/LwlkE73.png
 It turns out its really hard to figure out what the relationships are without
 digging deep into the code for each project, so I am sure I got a few things
 wrong (along with missing a lot of projects).
 
 The relationships are very important for setting up an optimal gate
 structure. I’m less convinced they are important for setting up the
 governance structure, and I do not think we want a specific gate
 configuration embedded in the governance structure at all. That’s why I’ve
 tried to describe general relationships (“optional inter-project
 dependences” vs. “strict co-dependent project groups” [1]) up until the very
 last patch in the series [2], which redefines the integrated release in
 terms of those other relationships and a base set of projects.
 
 
 I agree the relationships are very important for gate structure and less so
 for governance. I thought it would be nice to codify the relationships in a
 machine readable format so we can do things with it, like try making
 different rules and see how they would work. For example we can already make
 two groups of things that may be useful for testing:
 
 * services that nothing depends on
 * services that don't depend on other services
 
 Latest graph: http://i.imgur.com/y8zmNIM.png

This diagram is missing any relationships for ceilometer.

Ceilometer calls APIs provided by:

 * keystone
 * nova
 * glance
 * neutron
 * swift

Ceilometer consumes notifications from:

 * keystone
 * nova
 * glance
 * neutron
 * cinder
 * ironic
 * heat
 * sahara

Ceilometer serves incoming API calls from:

 * heat
 * horizon

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's a dependency (was Re: [all][tc] governance changes for big tent...) model

2014-10-03 Thread Eoghan Glynn


 So a bit of background here. This began from thinking about functional
 dependencies, and pondering whether a map of the dependency graph of
 our projects could inform our gating structure, specifically to
 encourage (or dare I say, actually force) all of us (the project
 teams) to become more cognizant of the API contracts we are making
 with each other, and the pain it causes when we break those contracts.
 
 Let's not extend this exercise to a gigantic
 everything-needs-everything-to-do-everything picture, which is where
 it's heading now. Sure, telemetry is important for operators, and in
 no way am I saying anything else when I say: for Nova to fulfill its
 primary goal, telemetry is not necessary. It is optional. Desired, but
 optional.

I don't follow the optional-but-not-necessary argument here, or
where you're applying the cut-off for the graph not turning into
a gigantic picture.

There were a bunch of relationships in the original graph that
are not strictly necessary for nova to fulfill it's primary goal,
but are desired and existing functional dependencies in any case.

So, are we trying to capture all dependencies here, or to apply
some value-judgement and selectively capture just the good
dependencies, for some definition of good?

 Even saying nova CAN-USE ceilometer is incorrect, though, since Nova
 isn't actually using Ceilometer to accomplish any functional task
 within it's domain. More correct would be to say Ceilometer
 CAN-ACCEPT notifications FROM Nova. Incidentally, this is very
 similar to the description of Heat and Horizon, except that, instead
 of consuming a public API, Ceilometer is consuming something else
 (internal notifications).

In addition to consuming notifications from nova, ceilometer also
calls out to the public nova, keystone, glance, neutron, and swift
APIs.

Hence: ceilometer CANUSE [nova, keystone, glance, neutron, swift].

In addition, the ceilometer API is invoked by heat and horizon.

I've submitted a pull request with these relationships, neglecting
the consumes-notifications-from relationship for now.

Cheers,
Eoghan

 -Deva
 
 On Fri, Oct 3, 2014 at 10:38 AM, Chris Dent chd...@redhat.com wrote:
  On Fri, 3 Oct 2014, Joe Gordon wrote:
 
  data is coming from here:
  https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
  and the key is here: https://github.com/jogo/graphing-openstack
 
 
  Cool, thanks.
 
  Many of those services expect[1] to be able to send notifications (or
  be polled by) ceilometer[2]. We've got an ongoing thread about the need
  to contractualize notifications. Are those contracts (or the desire
  for them) a form of dependency? Should they be?
 
 
  So in the case of notifications, I think that is a Ceilometer CAN-USE Nova
  THROUGH notifications
 
 
  Your statement here is part of the reason I asked. I think it is
  possible to argue that the dependency has the opposite order: Nova might
  like to use Ceilometer to keep metrics via notifications or perhaps:
  Nova CAN-USE Ceilometer FOR telemetry THROUGH notifications and polling.
 
  This is perhaps not the strict technological representation of the
  dependency, but it represents the sort of pseudo-social
  relationships between projects: Nova desires for Ceilometer (or at
  least something doing telemetry) to exist.
 
  Ceilometer itself is^wshould be agnostic about what sort of metrics are
  coming its way. It should accept them, potentially transform them, store
  them, and make them available for later use (including immediately). It
  doesn't^wshouldn't really care if Nova exists or not.
 
  There are probably lots of other relationships of this form between
  other services, thus the question: Is a use-of-notifications
  something worth tracking? I would say yes.
 
 
  --
  Chris Dent tw:@anticdent freenode:cdent
  https://tank.peermore.com/tanks/cdent
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] PTL Candidacy

2014-09-23 Thread Eoghan Glynn

Folks,

I'd like to continue serving as Telemetry PTL for a second cycle.

When I took on the role for Juno, I saw some challenges facing the
project that would take multi-cycle efforts to resolve, so I'd like to
have the opportunity to see that move closer to completion.

Over Juno, our focus as a project has necessarily been on addressing
the TC gap analysis. We've been successful in ensuring that the agreed
gap coverage tasks were completed. The team made great strides in
making the sql-alchemy driver a viable option for PoCs and small
deployments, getting meaningful Tempest  Grenade coverage in place,
and writing quality user- and operator-oriented documentation. This
has addressed a portion of our usability debt, but as always we need
to continue chipping away at that.

In parallel, an arms-length effort was kicked off to look at paying
down accumulated architectural debt in Ceilometer via a new approach
to more lightweight timeseries data storage via the Gnocchi project.
This was approached in such a way as to minimize the disruption to
the core project.

My vision for Kilo would be to shift our focus a bit more onto such
longer-terms strategic efforts. Clearly we need to complete the work
on Gnocchi and figure out the migration and co-existence issues.

In addition, we started a conversation with the Monasca folks at the
Juno summit on the commonality between the two projects. Over Kilo I
would like to broaden and deepen the collaboration that was first
mooted in Atlanta, by figuring out specific incremental steps around
converging some common functional areas such as alarming. We can also
learn from the experience of the Monasca project in getting the best
possible performance out of TSD storage in InfluxDB, or achieving very
high throughput messaging via Apache Kafka.

There are also cross-project debts facing our community that we need
to bring some of our focus to IME. In particular, I'm thinking here
about the move towards taking integration test coverage back out of
Tempest and into new project-specific functional test suites. Also the
oft-proposed, but never yet delivered-upon, notion of contractizing
cross-project interactions mediated by notifications.

Finally, it's worth noting that our entire community has a big
challenge ahead of it in terms of the proposed move towards a new
layering structure. If re-elected, I would see myself as an active
participant in that discussion, ensuring the interests of the project
are positively represented.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-23 Thread Eoghan Glynn

The ceilometer team released python-ceilometerclient vesion 1.0.11 yesterday:

  https://pypi.python.org/pypi/python-ceilometerclient/1.0.11

Cheers,
Eoghan

 Keystone team has released 0.11.1 of python-keystoneclient. Due to some
 delays getting things through the gate this took a few extra days.
 
 https://pypi.python.org/pypi/python-keystoneclient/0.11.1
 
 —Morgan
 
 
 —
 Morgan Fainberg
 
 
 -Original Message-
 From: John Dickinson m...@not.mn
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: September 17, 2014 at 20:54:19
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [release] client release deadline - Sept 18th
 
  I just release python-swiftclient 2.3.0
   
  In addition to some smaller changes and bugfixes, the biggest changes are
  the support
  for Keystone v3 and a refactoring that allows for better testing and
  extensibility of
  the functionality exposed by the CLI.
   
  https://pypi.python.org/pypi/python-swiftclient/2.3.0
   
  --John
   
   
   
  On Sep 17, 2014, at 8:14 AM, Matt Riedemann wrote:
   
  
  
   On 9/15/2014 12:57 PM, Matt Riedemann wrote:
  
  
   On 9/10/2014 11:08 AM, Kyle Mestery wrote:
   On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
   wrote:
  
  
   On 9/9/2014 4:19 PM, Sean Dague wrote:
  
   As we try to stabilize OpenStack Juno, many server projects need to
   get
   out final client releases that expose new features of their servers.
   While this seems like not a big deal, each of these clients releases
   ends up having possibly destabilizing impacts on the OpenStack whole
   (as
   the clients do double duty in cross communicating between services).
  
   As such in the release meeting today it was agreed clients should
   have
   their final release by Sept 18th. We'll start applying the dependency
   freeze to oslo and clients shortly after that, all other requirements
   should be frozen at this point unless there is a high priority bug
   around them.
  
   -Sean
  
  
   Thanks for bringing this up. We do our own packaging and need time
   for legal
   clearances and having the final client releases done in a reasonable
   time
   before rc1 is helpful. I've been pinging a few projects to do a final
   client release relatively soon. python-neutronclient has a release
   this
   week and I think John was planning a python-cinderclient release this
   week
   also.
  
   Just a slight correction: python-neutronclient will have a final
   release once the L3 HA CLI changes land [1].
  
   Thanks,
   Kyle
  
   [1] https://review.openstack.org/#/c/108378/
  
   --
  
   Thanks,
  
   Matt Riedemann
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
   python-cinderclient 1.1.0 was released on Saturday:
  
   https://pypi.python.org/pypi/python-cinderclient/1.1.0
  
  
   python-novaclient 2.19.0 was released yesterday [1].
  
   List of changes:
  
   mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 --oneline
   --no-merges
   cd56622 Stop using intersphinx
   d96f13d delete python bytecode before every test run
   4bd0c38 quota delete tenant_id parameter should be required
   3d68063 Don't display duplicated security groups
   2a1c07e Updated from global requirements
   319b61a Fix test mistake with requests-mock
   392148c Use oslo.utils
   e871bd2 Use Token fixtures from keystoneclient
   aa30c13 Update requirements.txt to include keystoneclient
   bcc009a Updated from global requirements
   f0beb29 Updated from global requirements
   cc4f3df Enhance network-list to allow --fields
   fe95fe4 Adding Nova Client support for auto find host APIv2
   b3da3eb Adding Nova Client support for auto find host APIv3
   3fa04e6 Add filtering by service to hosts list command
   c204613 Quickstart (README) doc should refer to nova
   9758ffc Updated from global requirements
   53be1f4 Fix listing of flavor-list (V1_1) to display swap value
   db6d678 Use adapter from keystoneclient
   3955440 Fix the return code of the command delete
   c55383f Fix variable error for nova --service-type
   caf9f79 Convert to requests-mock
   33058cb Enable several checks and do not check docs/source/conf.py
   abae04a Updated from global requirements
   68f357d Enable check for E131
   b6afd59 Add support for security-group-default-rules
   ad9a14a Fix rxtx_factor name for creating a flavor
   ff4af92 Allow selecting the network for doing the ssh with
   9ce03a9 fix host resource repr to use 'host' attribute
   4d25867 Enable H233
   60d1283 Don't log sensitive auth data
   

Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-19 Thread Eoghan Glynn


 Hi,
 
 Nejc has been doing a great work and has been very helpful during the
 Juno cycle and his help is very valuable.
 
 I'd like to propose that we add Nejc Saje to the ceilometer-core group.
 
 Please, dear ceilometer-core members, reply with your votes!

With eight yeas and zero nays, I think we can call a result in this
vote.

Welcome to the ceilometer-core team Nejc!

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-19 Thread Eoghan Glynn


 Hi,
 
 Dina has been doing a great work and has been very helpful during the
 Juno cycle and her help is very valuable. She's been doing a lot of
 reviews and has been very active in our community.
 
 I'd like to propose that we add Dina Belova to the ceilometer-core
 group, as I'm convinced it'll help the project.
 
 Please, dear ceilometer-core members, reply with your votes!

With seven yeas and zero nays, I think we can call a result in this
vote.

Welcome to the ceilometer-core team Dina!

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Eoghan Glynn


 Hi All,
 
 My understanding of Zaqar is that it's like SQS. SQS uses distributed queues,
 which have a few unusual properties [0]:
 Message Order
 
 
 Amazon SQS makes a best effort to preserve order in messages, but due to the
 distributed nature of the queue, we cannot guarantee you will receive
 messages in the exact order you sent them. If your system requires that
 order be preserved, we recommend you place sequencing information in each
 message so you can reorder the messages upon receipt.
 At-Least-Once Delivery
 
 
 Amazon SQS stores copies of your messages on multiple servers for redundancy
 and high availability. On rare occasions, one of the servers storing a copy
 of a message might be unavailable when you receive or delete the message. If
 that occurs, the copy of the message will not be deleted on that unavailable
 server, and you might get that message copy again when you receive messages.
 Because of this, you must design your application to be idempotent (i.e., it
 must not be adversely affected if it processes the same message more than
 once).
 Message Sample
 
 
 The behavior of retrieving messages from the queue depends whether you are
 using short (standard) polling, the default behavior, or long polling. For
 more information about long polling, see Amazon SQS Long Polling .
 
 With short polling, when you retrieve messages from the queue, Amazon SQS
 samples a subset of the servers (based on a weighted random distribution)
 and returns messages from just those servers. This means that a particular
 receive request might not return all your messages. Or, if you have a small
 number of messages in your queue (less than 1000), it means a particular
 request might not return any of your messages, whereas a subsequent request
 will. If you keep retrieving from your queues, Amazon SQS will sample all of
 the servers, and you will receive all of your messages.
 
 The following figure shows short polling behavior of messages being returned
 after one of your system components makes a receive request. Amazon SQS
 samples several of the servers (in gray) and returns the messages from those
 servers (Message A, C, D, and B). Message E is not returned to this
 particular request, but it would be returned to a subsequent request.
 
 
 
 
 
 
 
 Presumably SQS has these properties because it makes the system scalable, if
 so does Zaqar have the same properties (not just making these same
 guarantees in the API, but actually having these properties in the
 backends)? And if not, why? I looked on the wiki [1] for information on
 this, but couldn't find anything.

The premise of this thread is flawed I think.

It seems to be predicated on a direct quote from the public
documentation of a closed-source system justifying some
assumptions about the internal architecture and design goals
of that closed-source system.

It then proceeds to hold zaqar to account for not making
the same choices as that closed-source system.

This puts the zaqar folks in a no-win situation, as it's hard
to refute such arguments when they have no visibility over
the innards of that closed-source system.

Sure, the assumption may well be correct that the designers
of SQS made the choice to expose applications to out-of-order
messages as this was the only practical way of acheiving their
scalability goals.

But since the code isn't on github and the design discussions
aren't publicly archived, we have no way of validating that.

Would it be more reasonable to compare against a cloud-scale
messaging system that folks may have more direct knowledge
of?

For example, is HP Cloud Messaging[1] rolled out in full
production by now?

Is it still cloning the original Marconi API, or has it kept
up with the evolution of the API? Has the nature of this API
been seen as the root cause of any scalability issues?

Cheers,
Eoghan

[1] 
http://www.openstack.org/blog/2013/05/an-introductory-tour-of-openstack-cloud-messaging-as-a-service

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][TC][Zaqar] Another graduation attempt, new lessons learned

2014-09-17 Thread Eoghan Glynn

Thanks for bring this to the list, Flavio.

A few thoughts in line ...

 Greetings,
 
 As probably many of you already know, Zaqar (formerly known as Marconi)
 has recently been evaluated for integration. This is the second time
 this project (and team) has gone through this process and just like last
 time, it wasn't as smooth as we all would have liked it to be.
 
 I thought about sending this email - regardless of what result is - to
 give a summary of what the experience have been like from the project
 side. Some things were quite frustrating and I think they could be
 revisited and improved, hence this email and ideas as to how I think we
 could make them better.
 
 ## Misunderstanding of the project goals:
 
 For both graduation attempts, the goals of the project were not
 clear. It felt like the communication between TC and PTL was
 insufficient to convey enough data to make an informed decision.
 
 I think we need to work on a better plan to follow-up with incubated
 projects. I think these projects should have a schedule and specific
 incubated milestones in addition to the integrated release milestones.
 For example, it'd be good to have at least 3 TC meetings were the
 project shows the progress, the goals that have been achieved and where
 it is standing on the integration requirements.
 
 These meetings should be used to address concerns right away. Based on
 Zaqar's experience, it's clear that graduating is more than just meeting
 the requirements listed here[0]. The requirements may change and other
 project-specific concerns may also be raised. The important thing here,
 though, is to be all on the same page of what's needed.
 
 I suggested after the Juno summit that we should have TC representative
 for each incubated project[1]. I still think that's a good idea and we
 should probably evaluate a way to make that, or something like that,
 happen. We tried to put it in practice during Juno - Devananda
 volunteered to be Zaqar's representative. Thanks for doing this - but it
 didn't work out as we expected. It would probably be a better idea,
 given the fact that we're all overloaded with things to do, to have a
 sub-team of 2 or 3 TC members assigned to a project. These TC
 representatives could lead incubated projects through the process and
 work as a bridge between the TC and the project.
 
 Would a plan like the one mentioned above scale for the current TC and
 the number of incubated projects?

Agreed that the expectations on the TC representative should be made
clearer. It would be best IMO if this individual (or small sub-team)
could commit to doing a deep-dive on the project and be ready to act
as a mediator with the rest of the TC around the project's intended
use-cases, architecture, APIs etc.

There need not necessarily be an expectation that the representative(s)
would champion the project, but they should ensure that there aren't
what the heck is this thing? style questions still being asked right
at the end of the incubation cycle. 

 [0]
 https://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst#n79
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035341.html
 
 ## Have a better structured meeting
 
 One of the hard things of attending a TC meeting as a representative for
 a project is that you get 13 people asking several different questions
 at the same time, which is impossible to keep up with. I think, for
 future integration/incubation/whatever reviews/meetings, there should be
 a better structure. One of the things I'd recommend is to move all the
 *long* technical discussions to the mailing list and avoid having them
 during the graduation meeting. IRC discussions are great but I'd
 probably advice having them in the project channel or during the project
 meeting time and definitely before the graduation meeting.
 
 What makes this `all-against-one` thing harder are the parallel
 discussions that normally happen during these meetings. We should really
 work hard on avoiding these kind of parallel discussions because they
 distract attendees and make the really discussion harder and frustrating.

As an observer from afar at those TC meetings, the tone of *some* of
the discussion seemed a bit adversarial, or at least very challenging
to respond to in a coherent way. I wouldn't relish the job of trying 
to field rapid-fire, overlapping questions in real-time, some of which
cast doubts on very fundamental aspects of the project. While I agree
every TC member's questions are important, that approach isn't the most
effective way of ensuring good answers are forthcoming.

+1 that this could be improved by ensuring that most of the detailed
technical discussion has already been aired well in advance on the ML.

In addition, I wonder might there be some mileage in approaches such
as:

 * encouraging the TC to register fundamental concerns/doubts in
   advance via an etherpad so that the project team gets a 

Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Eoghan Glynn


 I visited the Paris Design Summit space on Monday and confirmed that it
 should be possible to split it in a way that would allow to have
 per-program contributors meetups on the Friday. The schedule would go
 as follows:
 
 Tuesday: cross-project workshops
 Wednesday, Thursday: traditional scheduled slots
 Friday: contributors meetups
 
 We'll also have pods available all 4 days for more ad-hoc small meetings.

Excellent :)

 In the mean time, we need to discuss how we want to handle the selection
 of session topics.
 
 In past summits we used a Design-Summit-specific session suggestion
 website, and PTLs would approve/deny them. This setup grew less and less
 useful: session topics were selected collaboratively on etherpads,
 discussed in meetings, and finally filed/reorganized/merged on the
 website just before scheduling. Furthermore, with even less scheduled
 slots, we would have to reject most of the suggestions, which is more
 frustrating for submitters than the positive experience of joining team
 meetings to discuss which topics are the most important. Finally, topics
 will need to be split between scheduled sessions and the contributors
 meetup agenda, and that's easier to do on an Etherpad anyway.
 
 This is why I'd like to suggest that all programs use etherpads to
 collect important topics, select which ones would get in the very few
 scheduled slots we'll have left, which will get discussed in the
 contributors meetup, and which are better left for a pod discussion.
 I suggest we all use IRC team meetings to collaboratively discuss that
 content between interested contributors.
 
 To simplify the communication around this, I tried to collect the
 already-announced etherpads on a single page at:
 
 https://wiki.openstack.org/wiki/Summit/Planning
 
 Please add any that I missed !
 
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.

+1 on a collaborative scheduling process within each project.

That's pretty much what we did within the ceilometer core group for
the Juno summit, except that we used a googledocs spreadsheet instead
of an etherpad.

So I don't think we need to necessarily mandate usage of an etherpad,
just let every project decide whatever shared document format they
want to use.

FTR the benefit of a googledocs spreadsheet in my view would include
the ease of totalling votes  sessions slots, color-coding candidate
sessions for merging etc.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Eoghan Glynn


 Hi,
 
 Nejc has been doing a great work and has been very helpful during the
 Juno cycle and his help is very valuable.
 
 I'd like to propose that we add Nejc Saje to the ceilometer-core group.
 
 Please, dear ceilometer-core members, reply with your votes!

A hearty +1 for me, Nejc has made a great impact in Juno.

Cheers,
Eoghan 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-11 Thread Eoghan Glynn


 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].
 
 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what
 they think the project wide Kilo cycle goals should be and post them on this
 thread ...

Here's my list of high-level cycle goals, for consideration ...


1. Address our usability debts

With some justification, we've been saddled with the perception
of not caring enough about the plight of users and operators. The
frustrating thing is that much of this is very fixable, *if* we take
time out from the headlong rush to add features. Achievable things
like documentation completeness, API consistency, CLI intuitiveness,
logging standardization, would all go a long way here.

These things are of course all not beyond the wit of man, but we
need to take the time out to actually do them. This may involve
a milestone, or even longer, where we accept that the rate of
feature addition will be deliberately slowed down. 


2. Address the drags on our development velocity

Despite the Trojan efforts of the QA team, the periodic brownouts
in the gate are having a serious impact on our velocity. Over the
past few cycles, we've seen the turnaround time for patch check/
verification spike up unacceptably long multiple times, mostly
around the milestones.

Whatever we can do to smoothen out these spikes, whether it be
moving much of the Tempest coverage into the project trees, or
switching focus onto post-merge verification as suggested by
Sean on this thread, or even considering some more left-field
approaches such as staggered milestones, we need to grasp this
nettle as a matter of urgency.

Further back in the pipeline, the effort required to actually get
something shepherded through review is steadily growing. To the
point that we need to consider some radical approaches that
retain the best of our self-organizing model, while setting more
reasonable  reliable expectations for patch authors, and making
it more likely that narrow domain expertise is available to review
their contributions in timely way. For the larger projects, this
is likely to mean something different (along the lines of splits
or sub-domains) than it does for the smaller projects.


3. Address the long-running what's in and what's out questions

The way some of the discussions about integration and incubation 
played out this cycle have made me sad. Not all of these discussions
have been fully supported by the facts on the ground IMO. And not
all of the issues that have been held up as justifications for
whatever course of exclusion or inclusion would IMO actually be
solved in that way.

I think we need to move the discussion around a new concept of
layering, or redefining what it means to be in the tent, to a
more constructive and collaborative place than heretofore.


4. Address the fuzziness in cross-service interactions

In a semi-organic way, we've gone and built ourselves a big ol'
service-oriented architecture. But without necessarily always
following the strong contracts, loose coupling, discoverability,
and autonomy that a SOA approach implies.

We need to take the time to go back and pay down some of the debt
that has accreted over multiple cycles around these these
cross-service interactions. The most pressing of these would
include finally biting the bullet on the oft-proposed but never
delivered-upon notion of stabilizing notifications behind a
well-defined contract. Also, the more recently advocated notions
of moving away from coarse-grained versioning of the inter-service
APIs, and supporting better introspection and discovery of
capabilities.

 by end of day Wednesday, September 10th.

Oh, yeah, and impose fewer arbitrary deadlines ;)

Cheers,
Eoghan

 After which time we can
 begin discussing the results.
 The goal of this exercise is to help us see if our individual world views
 align with the greater community, and to get the ball rolling on a larger
 discussion of where as a project we should be focusing more time.
 
 
 best,
 Joe Gordon
 
 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
 [1]
 http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-11 Thread Eoghan Glynn


 Hi,
 
 Dina has been doing a great work and has been very helpful during the
 Juno cycle and her help is very valuable. She's been doing a lot of
 reviews and has been very active in our community.
 
 I'd like to propose that we add Dina Belova to the ceilometer-core
 group, as I'm convinced it'll help the project.
 
 Please, dear ceilometer-core members, reply with your votes!

A definite +1 from me.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-04 Thread Eoghan Glynn


   The implementation in ceilometer is very different to the Ironic one -
   are you saying the test you linked fails with Ironic, or that it fails
   with the ceilometer code today?
 
  Disclaimer: in Ironic terms, node = conductor, key = host
 
  The test I linked fails with Ironic hash ring code (specifically the
  part that tests consistency). With 1000 keys being mapped to 10 nodes,
  when you add a node:
  - current ceilometer code remaps around 7% of the keys ( 1/#nodes)
  - Ironic code remaps  90% of the keys
 
  So just to underscore what Nejc is saying here ...
 
  The key point is the proportion of such baremetal-nodes that would
  end up being re-assigned when a new conductor is fired up.
 
 That was 100% clear, but thanks for making sure.
 
 The question was getting a proper understanding of why it was
 happening in Ironic.
 
 The ceilometer hashring implementation is good, but it uses the same
 terms very differently (e.g. replicas for partitions) - I'm adapting
 the key fix back into Ironic - I'd like to see us converge on a single
 implementation, and making sure the Ironic one is suitable for
 ceilometer seems applicable here (since ceilometer seems to need less
 from the API),

Absolutely +1 on converging on a single implementation. That was
our intent on the ceilometer side from the get-go, to promote a
single implementation to oslo that both projects could share.

This turned out not to be possible in the short term when the
non-consistent aspect of the Ironic implementation was discovered
by Nejc, with the juno-3 deadline looming. However for kilo, we
would definitely be interested in leveraging a best-of-breed
implementation from oslo.

 If reassigning was cheap Ironic wouldn't have bothered having a hash ring :)

Fair enough, I was just allowing for the possibility that avoidance
of needless re-mapping hasn't as high a priority on the ironic side
as it was for ceilometer.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-09-04 Thread Eoghan Glynn


 Hi everyone,
 
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 
 Day 1. Cross-project sessions / incubated projects / other projects
 
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.
 
 Day 2 and Day 3. Scheduled sessions for various programs
 
 That's our traditional scheduled space. We'll have a 33% less slots
 available. So, rather than trying to cover all the scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.
 
 Day 4. Contributors meetups
 
 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.
 
 
 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.
 
 There are still details to work out (is it possible split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.

Apologies for jumping on this thread late.

I'm all for the idea of accommodating a more fluid form of project-
specific discussion, with the schedule emerging in a dynamic way. 

But one aspect of the proposed summit redesign that isn't fully clear
to me is the cross-over between the new Contributors meetups and the
Project pods that we tried out for the first time in Atlanta.

That seemed, to me at least, to be a very useful experiment. In fact:

 parallel midcycle-meetup-like contributors gatherings, with no time
  boundaries and an open agenda

sounds like quite a good description of how some projects used their
pods in ATL.

The advantage of the pods approach in my mind, included:

 * no requirement for reducing the number of design sessions slots,
   as the pod time ran in parallel with the design session tracks
   of other projects

 * depending on where in the week the project track occurred, the
   pod time could include a chunk of scene-setting/preparation 
   discussion *in advance of* the more structured design sessions

 * on a related theme, the pods did not rely on the graveyard shift
   at the backend of the summit when folks tend to hit their Friday
   afternoon brain-full state

Am I missing some compelling advantage of moving all these emergent
project-specific meetups to the Friday?

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-09-04 Thread Eoghan Glynn


  Am I missing some compelling advantage of moving all these emergent
  project-specific meetups to the Friday?
 
 One is that due to space limitations, we won't have nearly as many
 pods as in Atlanta (more like half or a third of them). Without one
 pod per program, the model breaks a bit.

A-ha, OK.

Will the subset of projects allocated a pod be fixed, or will the
pod space float between projects as the week progresses?

(for example, it's unlikely that a project will be using its pod
space when its design session track is in-progress, so the pod could
be passed on to another project)

Cheers,
Eoghan 
 
 The Friday setup also allows for more room (rather than a small
 roundtable) since we can reuse regular rooms (and maybe split them up).
 
 It appears on the schedule as a specific set of hours where contributors
 on a given program gather, so it's easier to reach critical mass.
 
 Finally for projects like Nova, which had regular sessions all the days.
 I also like having them all on the last day so that you can react to
 previous sessions and discuss useful stuff.
 
 If that makes you feel more comfortable, you can think of it as a
 pod-only day, with a bit more publicity, larger pods and where we use
 all the summit space available for pods :)
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-03 Thread Eoghan Glynn


 On 09/02/2014 11:33 PM, Robert Collins wrote:
  The implementation in ceilometer is very different to the Ironic one -
  are you saying the test you linked fails with Ironic, or that it fails
  with the ceilometer code today?
 
 Disclaimer: in Ironic terms, node = conductor, key = host
 
 The test I linked fails with Ironic hash ring code (specifically the
 part that tests consistency). With 1000 keys being mapped to 10 nodes,
 when you add a node:
 - current ceilometer code remaps around 7% of the keys ( 1/#nodes)
 - Ironic code remaps  90% of the keys

So just to underscore what Nejc is saying here ... 

The key point is the proportion of such baremetal-nodes that would
end up being re-assigned when a new conductor is fired up.

The defining property of a consistent hash-ring is that it
significantly reduces the number of re-mappings that occur when
the number of buckets change, when compared to a plain ol' hash.

This desire for stickiness would often be motivated by some form
of statefulness or initialization cost. In the ceilometer case,
we want to minimize unnecessary re-assignment so as to keep the
cadence of meter collection and alarm evaluation as even as
possible (given that each agent will be running off
non-synchronized periodic tasks).

In the case of the Ironic baremetal-node to conductor mapping,
perhaps such stickiness isn't really of any benefit?

If so, that's fine, but Nejc's main point UUIC is that an
approach that doesn't minimize the number of re-mappings isn't
really a form of *consistent* hashing.

Cheers,
Eoghan
 
 The problem lies in the way you build your hash ring[1]. You initialize
 a statically-sized array and divide its fields among nodes. When you do
 a lookup, you check which field in the array the key hashes to and then
 return the node that that field belongs to. This is the wrong approach
 because when you add a new node, you will resize the array and assign
 the fields differently, but the same key will still hash to the same
 field and will therefore hash to a different node.
 
 Nodes must be hashed onto the ring as well, statically chopping up the
 ring and dividing it among nodes isn't enough for consistency.
 
 Cheers,
 Nejc
 
 
  The Ironic hash_ring implementation uses a hash:
   def _get_partition(self, data):
   try:
   return (struct.unpack_from('I',
   hashlib.md5(data).digest())[0]
self.partition_shift)
   except TypeError:
   raise exception.Invalid(
   _(Invalid data supplied to HashRing.get_hosts.))
 
 
  so I don't see the fixed size thing you're referring to. Could you
  point a little more specifically? Thanks!
 
  -Rob
 
  On 1 September 2014 19:48, Nejc Saje ns...@redhat.com wrote:
  Hey guys,
 
  in Ceilometer we're using consistent hash rings to do workload
  partitioning[1]. We've considered generalizing your hash ring
  implementation
  and moving it up to oslo, but unfortunately your implementation is not
  actually consistent, which is our requirement.
 
  Since you divide your ring into a number of equal sized partitions,
  instead
  of hashing hosts onto the ring, when you add a new host,
  an unbound amount of keys get re-mapped to different hosts (instead of the
  1/#nodes remapping guaranteed by hash ring). I've confirmed this with the
  test in aforementioned patch[2].
 
  If this is good enough for your use-case, great, otherwise we can get a
  generalized hash ring implementation into oslo for use in both projects or
  we can both use an external library[3].
 
  Cheers,
  Nejc
 
  [1] https://review.openstack.org/#/c/113549/
  [2]
  https://review.openstack.org/#/c/113549/21/ceilometer/tests/test_utils.py
  [3] https://pypi.python.org/pypi/hash_ring
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-21 Thread Eoghan Glynn


 One of the outcomes from Juno will be horizontal scalability in the
 central agent and alarm evaluator via partitioning[1]. The compute
 agent will get the same capability if you choose to use it, but it
 doesn't make quite as much sense.
 
 I haven't investigated the alarm evaluator side closely yet, but one
 concern I have with the central agent partitioning is that, as far
 as I can tell, it will result in stored samples that give no
 indication of which (of potentially very many) central-agent it came
 from.
 
 This strikes me as a debugging nightmare when something goes wrong
 with the content of a sample that makes it all the way to storage.
 We need some way, via the artifact itself, to narrow the scope of
 our investigation.
 
 a) Am I right that no indicator is there?
 
 b) Assuming there should be one:
 
 * Where should it go? Presumably it needs to be an attribute of
   each sample because as agents leave and join the group, where
   samples are published from can change.
 
 * How should it be named? The never-ending problem.
 
 Thoughts?


Probably best to keep the bulk of this dicussion on-gerrit, but
FWIW here's my riff just commented there ...

Cheers,
Eoghan


WRT to marking each sample with an indication of originating agent.

First, IIUC, true provenance would require that the full chain-of-
ownership could be reconstructed for the sample, so we'd need to
also record the individual collector that persisted each sample.
So let's assume that we're only talking here about associating the
originating agent with the sample.  For most classes of bugs/issues
that could impact on an agent, we'd expect an equivalent impact on
all agents. However, I guess there would be a subset of issues, e.g.
an agent being left behind after an upgrade, that could be localized.

So in the classic ceilometer approach to metadata, one could imagine
the agent identity being recorded in the sample itself. However this
would become a lot more problematic, I think, after a shift to pure
timeseries data. In which case, I don't think we'd necessarily want
to pollute the limited number of dimensions that can be efficiently
associated with a datapoint with additional information purely related
to the implementation/architecture of ceilometer.

So how about turning the issue on its head, and putting the onus on
the agent to record its allocated resources for each cycle? The
obvious way to do that would be via logging.

Then in order to determine which agent was responsible for polling a
particular resource at a particular time, the problem would collapse
down to a distributed search over the agent log files for that period
(perhaps aided by whatever log retention scheme is in use, e.g. logstash).

 [1] https://review.openstack.org/#/c/113549/
 [2] https://review.openstack.org/#/c/115237/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Eoghan Glynn


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
  
  Sure additional cross-project resources can and need to be ponied up, but I
  am doubtful that will be enough.
 
 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?
 
 
 I am not sure what would be enough to get OpenStack back in a position where
 more developers/users are happier with the current state of affairs. Which
 is why I think we may want to try several things.
 
 
 
 Is it the likely number of such new resources, or the level of domain-
 expertise that they can be realistically be expected bring to the
 table, or the period of time to on-board them, or something else?
 
 
 Yes, all of the above.

Hi Joe,

In coming to that conclusion, have you thought about and explicitly
rejected all of the approaches that have been mooted to mitigate
those concerns?

Is there a strong reason why the following non-exhaustive list
would all be doomed to failure:

 * encouraging projects to follow the successful Sahara model,
   where one core contributor also made a large contribution to
   a cross-project effort (in this case infra, but could be QA
   or docs or release management or stable-maint ... etc)

   [this could be seen as essentially offsetting the cost of
that additional project drawing from the cross-project well]

 * assigning liaisons from each project to *each* of the cross-
   project efforts

   [this could be augmented/accelerated with one of the standard
on-boarding approaches, such as a designated mentor for the
liaison or even an immersive period of secondment]

 * applying back-pressure via the board representation to make
   it more likely that the appropriate number of net-new
   cross-project resources are forthcoming

   [c.f. Stef's we're not amateurs or volunteers mail earlier
on this thread]

I really think we need to do better than dismissing out-of-hand
the idea of beefing up the cross-project efforts. If it won't
work for specific reasons, let's get those reasons out onto
the table and make a data-driven decision on this.

 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:
 
 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint
 
 or something else?
 
 
 Good question.
 
 IMHO QA, Infra and release management are probably the most strained.

OK, well let's brain-storm on how some of those efforts could
potentially be made more scalable.

Should we for example start to look at release management as a
program onto itself, with a PTL *and* a group of cores to divide
and conquer the load?

(the hands-on rel mgmt for the juno-2 milestone, for example, was
 delegated - is there a good reason why such delegation wouldn't
 work as a matter of course?)

Should QA programs such as grenade be actively seeking new cores to
spread the workload?

(until recently, this had the effective minimum of 2 cores, despite
 now being a requirement for integrated projects)

Could the infra group potentially delegate some of the workload onto
the distro folks?

(given that it's strongly in their interest to have their distro
 represented in the CI gate.

None of the above ideas may make sense, but it doesn't feel like
every avenue has been explored here. I for one don't feel entirely
satisfied that every potential solution to cross-project strain was
fully thought-out in advance of the de-integration being presented
as the solution.

Just my $0.02 ...

Cheers,
Eoghan

[on vacation with limited connectivity]

 But I also think there is something missing from this list. Many of the 
 projects
 are hitting similar issues and end up solving them in different ways, which
 just leads to more confusion for the end user. Today we have a decent model
 for rolling out cross-project libraries (Oslo) but we don't have a good way
 of having broader cross project discussions such as: API standards (such as
 discoverability of features), logging standards, aligning on concepts
 (different projects have different terms and concepts for scaling and
 isolating failure domains), and an overall better user experience. So I
 think we have a whole class of cross project issues that we have not even
 begun addressing.
 
 
 
 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.
 
 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?
 
 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-15 Thread Eoghan Glynn

 But, if someone asked me what they should use for metering today,
 I'd point them towards Monasca in a heartbeat.

FWIW my view is that Monasca is an interesting emerging project, with a
team accreting around it that seems to be interested in collaboration.

We've had ongoing discussions with them about overlaps and differences
since the outset of this cycle, though of course our over-riding focus
for Juno has had to be on the TC gap analysis and on addressing
architectural debts.

But going forward into Kilo, I think there should be scope for possible
closer collaboration between the projects, figuring out the aspects that
are complementary and possible shared elements and/or converged APIs.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Eoghan Glynn

  Letting the industry field-test a project and feed their experience
  back into the community is a slow process, but that is the best
  measure of a project's success. I seem to recall this being an
  implicit expectation a few years ago, but haven't seen it discussed in
  a while.
  
  I think I recall us discussing a must have feedback that it's
  successfully deployed requirement in the last cycle, but we recognized
  that deployers often wait until a project is integrated.
 
 In the early discussions about incubation, we respected the need to
 officially recognize a project as part of OpenStack just to create the
 uptick in adoption necessary to mature projects. Similarly, integration is a
 recognition of the maturity of a project, but I think we have graduated
 several projects long before they actually reached that level of maturity.
 Actually running a project at scale for a period of time is the only way to
 know it is mature enough to run it in production at scale.
 
 I'm just going to toss this out there. What if we set the graduation bar to
 is in production in at least two sizeable clouds (note that I'm not saying
 public clouds). Trove is the only project that has, to my knowledge, met
 that bar prior to graduation, and it's the only project that graduated since
 Havana that I can, off hand, point at as clearly successful. Heat and
 Ceilometer both graduated prior to being in production; a few cycles later,
 they're still having adoption problems and looking at large architectural
 changes. I think the added cost to OpenStack when we integrate immature or
 unstable projects is significant enough at this point to justify a more
 defensive posture.
 
 FWIW, Ironic currently doesn't meet that bar either - it's in production in
 only one public cloud. I'm not aware of large private installations yet,
 though I suspect there are some large private deployments being spun up
 right now, planning to hit production with the Juno release.

We have some hard data from the user survey presented at the Juno summit,
with respectively 26  53 production deployments of Heat and Ceilometer
reported.

There's no cross-referencing of deployment size with services in production
in those data presented, though it may be possible to mine that out of the
raw survey responses.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Eoghan Glynn

  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.
 
 Sure additional cross-project resources can and need to be ponied up, but I
 am doubtful that will be enough.

OK, so what exactly do you suspect wouldn't be enough, for what
exactly?

Is it the likely number of such new resources, or the level of domain-
expertise that they can be realistically be expected bring to the
table, or the period of time to on-board them, or something else?

And which cross-project concern do you think is most strained by the
current set of projects in the integrated release? Is it:
 
 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint
  
or something else?

Each of those teams has quite different prerequisite skill-sets, and
the on-ramp for someone jumping in seeking to make a positive impact
will vary from team to team.

Different approaches have been tried on different teams, ranging from
dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
newly assigned dedicated resources (QA/Infra). Which of these models
might work in your opinion? Which are doomed to failure, and why?

So can you be more specific here on why you think adding more cross-
project resources won't be enough to address an identified shortage
of cross-project resources, while de-integrating projects would be?

And, please, can we put the proverbial strawman back in its box on
this thread? It's all well and good as a polemic device, but doesn't
really move the discussion forward in a constructive way, IMO.

Thanks,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.

That's fair.
 
 The proposal as it stands now is that we would have a public list of
 features that are ready to occupy a slot. That list would the ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.

Yeah, that's pretty much what I mean by the championing being subsumed
under the group will.

What's lost is not so much the ability to champion something, as the
freedom to do so in an independent/emergent way.

(Note that this is explicitly not verging into the retrospective veto
policy discussion on another thread[1], I'm totally assuming good faith
and good intent on the part of such champions)
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
 the process and setting expectations for our users. At the moment its
 very confusing as a user, there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.

Yeah, so I guess it would be worth drilling down into that user
confusion.

Are users confused because they don't understand the current nature
of the group dynamic, the unseen hand that causes some blueprints to
prosper while others fester seemingly unnoticed?

(for example, in the sense of not appreciating the emergent championing
done by say the core subset interested in libvirt)

Or are they confused in that they read some implicit contract or
commitment into the targeting of those 100 blueprints to a release
cycle?

(in sense of expecting that the core team will land all/most of those
100 target'd BPs within the cycle)

Cheers,
Eoghan 

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html

  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
 
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
 
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
 
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Michael
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


It seems like this is exactly what the slots give us, though. The core
review
   team picks a number of slots indicating how much work they think they can
   actually do (less than the available number of blueprints), and then
   blueprints queue up to get a slot based on priorities and turnaround time
   and other criteria that try to make slot allocation fair. By having the
   slots, not only is the review priority communicated to the review team,
   it
   is also communicated to anyone watching the project.
  
  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
  
  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
  
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
  
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
  
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Yeah, I'm really nervous about that aspect.
 
 Say a contributor proposes a new feature, a couple of core reviewers
 think it's important exciting enough for them to champion it but somehow
 the 'group will' is that it's not a high enough priority for this
 release, even if everyone agrees that it is actually cool and useful.
 
 What does imposing that 'group will' on the two core reviewers and
 contributor achieve? That the contributor and reviewers will happily
 turn their attention to some of the higher priority work? Or we lose a
 contributor and two reviewers because they feel disenfranchised?
 Probably somewhere in the middle.

Yeah, the outcome probably depends on the motivation/incentives that
are operating for individual contributors.

If their brief or primary interest was to land *specific* features,
then they may sit out the cycle, or just work away on their pet features
anyway under the radar.

If, OTOH, they have more of a over-arching make the project better
goal, they may gladly (or reluctantly) apply themselves to the group-
defined goals.

However, human nature being what it is, I'd suspect that the energy
levels applied to self-selected goals may be higher in the average case.
Just a gut feeling on that, no hard data to back it up. 

 On the other hand, what happens if work proceeds ahead even if not
 deemed a high priority? I don't think we can say that the contributor
 and two core reviewers were distracted from higher priority work,
 because blocking this work is probably unlikely to shift their focus in
 a productive way. Perhaps other reviewers are distracted because they
 feel the work needs more oversight than just the two core reviewers? It
 places more of a burden on the gate?

Well I think we have accept the reality that we can't force people to
work on stuff they don't want to, or entirely stop them working on the
stuff that they do.

So inevitably there will be some deviation from the shining path, as
set out in the group will. Agreed that blocking this work from say
being proposed on gerrit won't necessarily have the desired outcome

(OK, it could stop the transitive distraction of other reviewers, and
remove the gate load, but won't restore the time spent working off-piste
by the contributor and two cores in your example)

 I dunno ... the consequences of imposing group will worry me more than
 the consequences of allowing small groups to self-organize like this.

Yep, this capacity for self-organization of informal groups with aligned
interests (as opposed to corporate affiliations) is, or at least should
be IMO, seen as one of the primary strengths of the open source model.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn

  At the end of the day, that's probably going to mean saying No to more
  things. Everytime I turn around everyone wants the TC to say No to
  things, just not to their particular thing. :) Which is human nature.
  But I think if we don't start saying No to more things we're going to
  end up with a pile of mud that no one is happy with.
 
 That we're being so abstract about all of this is frustrating. I get
 that no-one wants to start a flamewar, but can someone be concrete about
 what they feel we should say 'no' to but are likely to say 'yes' to?
 
 
 I'll bite, but please note this is a strawman.

 No:
 * Accepting any more projects into incubation until we are comfortable with
 the state of things again
 * Marconi
 * Ceilometer

Well -1 to that, obviously, from me.

Ceilometer is on track to fully execute on the gap analysis coverage
plan agreed with the TC at the outset of this cycle, and has an active
plan in progress to address architectural debt.
 
 Divert all cross project efforts from the following projects so we can focus
 our cross project resources. Once we are in a bitter place we can expand our
 cross project resources to cover these again. This doesn't mean removing
 anything.
 * Sahara
 * Trove
 * Tripleo

You write as if cross-project efforts are both of fixed size and
amenable to centralized command  control.

Neither of which is actually the case, IMO.

Additional cross-project resources can be ponied up by the large
contributor companies, and existing cross-project resources are not
necessarily divertable on command.

 Yes:
 * All integrated projects that are not listed above

And what of the other pending graduation request?

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


  Divert all cross project efforts from the following projects so we can
  focus
  our cross project resources. Once we are in a bitter place we can expand
  our
  cross project resources to cover these again. This doesn't mean removing
  anything.
  * Sahara
  * Trove
  * Tripleo
  
  You write as if cross-project efforts are both of fixed size and
  amenable to centralized command  control.
  
  Neither of which is actually the case, IMO.
  
  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.
 
 What “cross-project efforts” are we talking about? The liaison program in
 Oslo has been a qualified success so far. Would it make sense to extend that
 to other programs and say that each project needs at least one designated
 QA, Infra, Doc, etc. contact?

Well my working assumption was that we were talking about people with
the appropriate domain knowledge who are focused primarily on standing
up the QA infrastructure.

(as opposed to designated points-of-contact within the individual
project teams who would be the first port of call for the QA/infra/doc
folks if they needed a project-specific perspective on some live issue)
 
That said however, I agree that it would be useful for the QA/infra/doc
teams to know who in each project is most domain-knowledgeable when they
need to reach out about a project-specific issue.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-12 Thread Eoghan Glynn


  Doesn't InfluxDB do the same?
  InfluxDB stores timeseries data primarily.
 
  Gnocchi in intended to store strongly-typed OpenStack resource
  representations (instance, images, etc.) in addition to providing
  a means to access timeseries data associated with those resources.
 
  So to answer your question: no, IIUC, it doesn't do the same thing.
 
 Ok, I think I'm getting closer on this.

Great!

 Thanks for the clarification. Sadly, I have more questions :)

Any time, Sandy :)
 
 Is this closer? a metadata repo for resources (instances, images, etc)
 + an abstraction to some TSDB(s)?

Somewhat closer (more clarification below on the metadata repository
aspect, and the completeness/authority of same).

 Hmm, thinking out loud ... if it's a metadata repo for resources, who is
 the authoritative source for what the resource is? Ceilometer/Gnocchi or
 the source service?

The source service is authoritative.

 For example, if I want to query instance power state do I ask ceilometer
 or Nova?

In that scenario, you'd ask nova.

If, on the other hand, you wanted to average out the CPU utilization
over all instances with a certain metadata attribute set (e.g. some
user metadata set by Heat that indicated membership of an autoscaling
group), then you'd ask ceilometer.

 Or is it metadata about the time-series data collected for that
 resource?

Both. But the focus of my preceding remarks was on the latter.

 In which case, I think most tsdb's have some sort of series
 description facilities.

Sure, and those should be used for metadata related directly to
the timeseries (granularity, retention etc.)

 I guess my question is, what makes this metadata unique and how
 would it differ from the metadata ceilometer already collects?

The primary difference between the way ceilometer currently stores
metadata, is the avoidance of per-sample snapshots of resource
metadata (as stated in the initial mail on this thread).
 
 Will it be using Glance, now that Glance is becoming a pure metadata repo?

No, we have no plans to use glance for this.

By becoming a pure metadata repo, presumably you mean this spec:

  
https://github.com/openstack/glance-specs/blob/master/specs/juno/metadata-schema-catalog.rst

I don't see this on the glance roadmap for Juno:

  https://blueprints.launchpad.net/glance/juno 

so presumably the integration of graffiti and glance is still more
of a longer term intent, than a present-tense becoming.

I'm totally open to correction on this by markwash and others,
but my reading of the debate around the recent change in glance's
mission statement was that the primary focus in the immediate
term was to expand into providing an artifact repository (for
artifacts such as Heat templates), while not to *precluding* any
future expansion into also providing a metadata repository.

The fossil-record of that discussion is here:

  https://review.openstack.org/98002

  Though of course these things are not a million miles from each
  other, one is just a step up in the abstraction stack, having a
  wider and more OpenStack-specific scope.
 
 Could it be a generic timeseries service? Is it openstack specific
 because it uses stackforge/python/oslo?

No, I meant OpenStack-specific in terms of it understanding
something of the nature of OpenStack resources and their ownership
(e.g. instances, with some metadata, each being associated with a
user  tenant etc.)

Not OpenStack-specific in the sense that it takes dependencies from
oslo or stackforge.

As for using python: yes, gnocchi is implemented in python, like
much of the rest of OpenStack.  However, no, I don't think that
choice of implementation language makes it OpenStack-specific.

 I assume the rules and schemas will be data-driven (vs. hard-coded)?

Well one of the ideas was to move away from loosely typed
representations of resources in ceilometer, in the form of a dict
of metadata containing whatever it contains, and instead decide
upfront what was the specific minimal information per resource
type that we need to store.

 ... and since the ceilometer collectors already do the bridge work, is
 it a pre-packaging of definitions that target openstack specifically?

I'm not entirely sure of what you mean by the bridge work in
this context.

The ceilometer collector effectively acts a concentrator, by
persisting the metering messages emitted by the other ceilometer
agents (i.e. the compute, central,  notification agents) to the
metering store.

These samples are stored by the collector pretty much as-is, so
there's no real bridging going on currently in the collector (in
the sense of mapping or transforming).

However, the collector is indeed the obvious hook point for
ceilometer to emit data to gnocchi.

 (not sure about wider and more specific)

I presume you're thinking oxymoron with wider and more specific?

I meant:

 * wider in the sense that it covers more ground than generic
   timeseries data storage

 * more specific in the sense that some of 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-12 Thread Eoghan Glynn
 
 It seems like this is exactly what the slots give us, though. The core review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

One thing I'm not seeing shine through in this discussion of slots is
whether any notion of individual cores, or small subsets of the core
team with aligned interests, can champion blueprints that they have
a particular interest in.

For example it might address some pain-point they've encountered, or
impact on some functional area that they themselves have worked on in
the past, or line up with their thinking on some architectural point.

But for whatever motivation, such small groups of cores currently have
the freedom to self-organize in a fairly emergent way and champion
individual BPs that are important to them, simply by *independently*
giving those BPs review attention.

Whereas under the slots initiative, presumably this power would be
subsumed by the group will, as expressed by the prioritization
applied to the holding pattern feeding the runways?

I'm not saying this is good or bad, just pointing out a change that
we should have our eyes open to.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Eoghan Glynn


 Hi Eoghan,
 
 Thanks for the note below. However, one thing the overview below does not
 cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged. Many
 folks feel that this technology is a viable solution for the problem space
 discussed below.

Great question Brad!

As it happens we've been working closely with Paul Dix (lead
developer of InfluxDB) to ensure that this metrics store would be
usable as a backend driver. That conversation actually kicked off
at the Juno summit in Atlanta, but it really got off the ground
at our mid-cycle meet-up in Paris on in early July.

I wrote a rough strawman version of an InfluxDB driver in advance
of the mid-cycle to frame the discussion, and Paul Dix traveled
to the meet-up so we could have the discussion face-to-face. The
conclusion was that InfluxDB would indeed potentially be a great
fit, modulo some requirements that we identified during the detailed
discussions:

 * shard-space-based retention  backgrounded deletion
 * capability to merge individual timeseries for cross-aggregation
 * backfill-aware downsampling

The InfluxDB folks have committed to implementing those features in
over July and August, and have made concrete progress on that score.

I hope that provides enough detail to answer to your question?

Cheers,
Eoghan

 Thanks,
 
 Brad
 
 
 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet: bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680
 
 
 
 From: Eoghan Glynn egl...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 08/06/2014 11:17 AM
 Subject: [openstack-dev] [tc][ceilometer] Some background on the gnocchi
 project
 
 
 
 
 
 Folks,
 
 It's come to our attention that some key individuals are not
 fully up-to-date on gnocchi activities, so it being a good and
 healthy thing to ensure we're as communicative as possible about
 our roadmap, I've provided a high-level overview here of our
 thinking. This is intended as a precursor to further discussion
 with the TC.
 
 Cheers,
 Eoghan
 
 
 What gnocchi is:
 ===
 
 Gnocchi is a separate, but related, project spun up on stackforge
 by Julien Danjou, with the objective of providing efficient
 storage and retrieval of timeseries-oriented data and resource
 representations.
 
 The goal is to experiment with a potential approach to addressing
 an architectural misstep made in the very earliest days of
 ceilometer, specifically the decision to store snapshots of some
 resource metadata alongside each metric datapoint. The core idea
 is to move to storing datapoints shorn of metadata, and instead
 allow the resource-state timeline to be reconstructed more cheaply
 from much less frequently occurring events (e.g. instance resizes
 or migrations).
 
 
 What gnocchi isn't:
 ==
 
 Gnocchi is not a large-scale under-the-radar rewrite of a core
 OpenStack component along the lines of keystone-lite.
 
 The change is concentrated on the final data-storage phase of
 the ceilometer pipeline, so will have little initial impact on the
 data-acquiring agents, or on transformation phase.
 
 We've been totally open at the Atlanta summit and other forums
 about this approach being a multi-cycle effort.
 
 
 Why we decided to do it this way:
 
 
 The intent behind spinning up a separate project on stackforge
 was to allow the work progress at arms-length from ceilometer,
 allowing normalcy to be maintained on the core project and a
 rapid rate of innovation on gnocchi.
 
 Note that that the developers primarily contributing to gnocchi
 represent a cross-section of the core team, and there's a regular
 feedback loop in the form of a recurring agenda item at the
 weekly team meeting to avoid the effort becoming silo'd.
 
 
 But isn't re-architecting frowned upon?
 ==
 
 Well, the architecture of other OpenStack projects have also
 under-gone change as the community understanding of the
 implications of prior design decisions has evolved.
 
 Take for example the move towards nova no-db-compute  the
 unified-object-model in order to address issues in the nova
 architecture that made progress towards rolling upgrades
 unneccessarily difficult.
 
 The point, in my understanding, is not to avoid doing the
 course-correction where it's deemed necessary. Rather, the
 principle is more that these corrections happen in an open
 and planned way.
 
 
 The path forward:
 
 
 A subset of the ceilometer community will continue to work on
 gnocchi in parallel with the ceilometer core over the remainder
 of the Juno cycle and into the Kilo timeframe. The goal is to
 have an initial implementation of gnocchi ready for tech preview
 by the end of Juno, and to have the integration/migration/
 co-existence questions addressed in Kilo.
 
 Moving the ceilometer core to using gnocchi will be contingent

Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Eoghan Glynn


 On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
 
  Hi Eoghan,
 
  Thanks for the note below. However, one thing the overview below does not
  cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged.
  Many
  folks feel that this technology is a viable solution for the problem space
  discussed below.
  Great question Brad!
 
  As it happens we've been working closely with Paul Dix (lead
  developer of InfluxDB) to ensure that this metrics store would be
  usable as a backend driver. That conversation actually kicked off
  at the Juno summit in Atlanta, but it really got off the ground
  at our mid-cycle meet-up in Paris on in early July.
 ...
 
  The InfluxDB folks have committed to implementing those features in
  over July and August, and have made concrete progress on that score.
 
  I hope that provides enough detail to answer to your question?
 
 I guess it begs the question, if influxdb will do what you want and it's
 open source (MIT) as well as commercially supported, how does gnocchi
 differentiate?

Hi Sandy,

One of the ideas behind gnocchi is to combine resource representation
and timeseries-oriented storage of metric data, providing an efficient
and convenient way to query for metric data associated with individual
resources.

Also, having an API layered above the storage driver avoids locking in
directly with a particular metrics-oriented DB, allowing for the
potential to support multiple storage driver options (e.g. to choose
between a canonical implementation based on Swift, an InfluxDB driver,
and an OpenTSDB driver, say).

A less compelling reason would be to provide a well-defined hook point
to innovate with aggregation/analytic logic not supported natively
in the underlying drivers (e.g. period-spanning statistics such as
exponentially-weighted moving average or even Holt-Winters).

Cheers,
Eoghan

 
  Cheers,
  Eoghan
 
  Thanks,
 
  Brad
 
 
  Brad Topol, Ph.D.
  IBM Distinguished Engineer
  OpenStack
  (919) 543-0646
  Internet: bto...@us.ibm.com
  Assistant: Kendra Witherspoon (919) 254-0680
 
 
 
  From: Eoghan Glynn egl...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 08/06/2014 11:17 AM
  Subject: [openstack-dev] [tc][ceilometer] Some background on the gnocchi
  project
 
 
 
 
 
  Folks,
 
  It's come to our attention that some key individuals are not
  fully up-to-date on gnocchi activities, so it being a good and
  healthy thing to ensure we're as communicative as possible about
  our roadmap, I've provided a high-level overview here of our
  thinking. This is intended as a precursor to further discussion
  with the TC.
 
  Cheers,
  Eoghan
 
 
  What gnocchi is:
  ===
 
  Gnocchi is a separate, but related, project spun up on stackforge
  by Julien Danjou, with the objective of providing efficient
  storage and retrieval of timeseries-oriented data and resource
  representations.
 
  The goal is to experiment with a potential approach to addressing
  an architectural misstep made in the very earliest days of
  ceilometer, specifically the decision to store snapshots of some
  resource metadata alongside each metric datapoint. The core idea
  is to move to storing datapoints shorn of metadata, and instead
  allow the resource-state timeline to be reconstructed more cheaply
  from much less frequently occurring events (e.g. instance resizes
  or migrations).
 
 
  What gnocchi isn't:
  ==
 
  Gnocchi is not a large-scale under-the-radar rewrite of a core
  OpenStack component along the lines of keystone-lite.
 
  The change is concentrated on the final data-storage phase of
  the ceilometer pipeline, so will have little initial impact on the
  data-acquiring agents, or on transformation phase.
 
  We've been totally open at the Atlanta summit and other forums
  about this approach being a multi-cycle effort.
 
 
  Why we decided to do it this way:
  
 
  The intent behind spinning up a separate project on stackforge
  was to allow the work progress at arms-length from ceilometer,
  allowing normalcy to be maintained on the core project and a
  rapid rate of innovation on gnocchi.
 
  Note that that the developers primarily contributing to gnocchi
  represent a cross-section of the core team, and there's a regular
  feedback loop in the form of a recurring agenda item at the
  weekly team meeting to avoid the effort becoming silo'd.
 
 
  But isn't re-architecting frowned upon?
  ==
 
  Well, the architecture of other OpenStack projects have also
  under-gone change as the community understanding of the
  implications of prior design decisions has evolved.
 
  Take for example the move towards nova no-db-compute  the
  unified-object-model in order to address issues in the nova
  architecture that made progress towards rolling upgrades
  unneccessarily difficult.
 
  The point, in my

Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project

2014-08-11 Thread Eoghan Glynn


  On 8/11/2014 4:22 PM, Eoghan Glynn wrote:
  Hi Eoghan,
 
  Thanks for the note below. However, one thing the overview below does
  not
  cover is why InfluxDB ( http://influxdb.com/ ) is not being leveraged.
  Many
  folks feel that this technology is a viable solution for the problem
  space
  discussed below.
  Great question Brad!
 
  As it happens we've been working closely with Paul Dix (lead
  developer of InfluxDB) to ensure that this metrics store would be
  usable as a backend driver. That conversation actually kicked off
  at the Juno summit in Atlanta, but it really got off the ground
  at our mid-cycle meet-up in Paris on in early July.
  ...
  The InfluxDB folks have committed to implementing those features in
  over July and August, and have made concrete progress on that score.
 
  I hope that provides enough detail to answer to your question?
  I guess it begs the question, if influxdb will do what you want and it's
  open source (MIT) as well as commercially supported, how does gnocchi
  differentiate?
  Hi Sandy,
 
  One of the ideas behind gnocchi is to combine resource representation
  and timeseries-oriented storage of metric data, providing an efficient
  and convenient way to query for metric data associated with individual
  resources.
 
 Doesn't InfluxDB do the same?

InfluxDB stores timeseries data primarily.

Gnocchi in intended to store strongly-typed OpenStack resource
representations (instance, images, etc.) in addition to providing
a means to access timeseries data associated with those resources.

So to answer your question: no, IIUC, it doesn't do the same thing.

Though of course these things are not a million miles from each
other, one is just a step up in the abstraction stack, having a
wider and more OpenStack-specific scope.
 
  Also, having an API layered above the storage driver avoids locking in
  directly with a particular metrics-oriented DB, allowing for the
  potential to support multiple storage driver options (e.g. to choose
  between a canonical implementation based on Swift, an InfluxDB driver,
  and an OpenTSDB driver, say).
 Right, I'm not suggesting to remove the storage abstraction layer. I'm
 just curious what gnocchi does better/different than InfluxDB?
 
 Or, am I missing the objective here and gnocchi is the abstraction layer
 and not an influxdb alternative? If so, my apologies for the confusion.

No worries :)

The intention is for gnocchi to provide an abstraction over
timeseries, aggregation, downsampling and archiving/retention
policies, with a number of drivers mapping onto real timeseries
storage options. One of those drivers is based on Swift, another
is in the works based on InfluxDB, and a third based on OpenTSDB
has also been proposed.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Eoghan Glynn


 Ignoring the question of is it ok to say: 'to run ceilometer in any sort of
 non-trivial deployment you must manager yet another underlying service,
 mongodb' I would prefer not adding an addition gate variant to all projects.
 With the effort to reduce the number of gate variants we have [0] I would
 prefer to see just ceilometer gate on both mongodb and sqlalchemy and the
 main integrated gate [1] pick just one.

Just checking to see that I fully understand what you mean there, Joe.

So would we:

 (a) add a new integrated-gate-ceilometer project-template to [1],
 in the style of integrated-gate-neutron or integrated-gate-sahara,
 which would replicate the main integrated-gate template but with
 the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)

or:

 (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
 the experimental column[2] in the openstack-ceilometer project,
 to the gate column on that project

or:

 (c) something else

Please excuse the ignorance of gate mechanics inherent in that question.

Cheers,
Eoghan


[1] 
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238
[2] 
http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n801

 
 [0] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
 [1]
 http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n238
 
 
 
 Does that work for you Devananda?
 
 Cheers,
 Eoghan
 
  -Deva
  
  
  [1]
  https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage
  
  [2]
  http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html
  is a very articulate example of this objection
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-11 Thread Eoghan Glynn


  So would we:
 
   (a) add a new integrated-gate-ceilometer project-template to [1],
   in the style of integrated-gate-neutron or integrated-gate-sahara,
   which would replicate the main integrated-gate template but with
   the addition of gate-tempest-dsvm-ceilometer-mongodb(-full)
 
  or:
 
   (b) simply move gate-tempest-dsvm-ceilometer-mongodb(-full) from
   the experimental column[2] in the openstack-ceilometer project,
   to the gate column on that project
 
  or:
 
   (c) something else
 
  Please excuse the ignorance of gate mechanics inherent in that question.
 
 
 
  Correct, AFAIK (a) or (b) would be sufficient.
 
  There is another option, which is make the mongodb version the default in
  integrated-gate and only run SQLA on ceilometer.
 
 
 Joe,
 
 I believe this last option is equivalent to making mongodb the
 recommended implementation by virtue of suddenly being the most tested
 implementation. I would prefer not to see that.
 
 Eoghan,
 
 IIUC (and I am not an infra expert) I would suggest (b) since this
 keeps the mongo tests within the ceilometer project only, which I
 think is fine from a what we test is what we recommend standpoint.

Fair enough ... though I think (a) would also have that quality
of encapsulation, as long as the new integrated-gate-ceilometer
project-template was only referenced by the openstack/ceilometer
project.

I'm not sure it makes a great deal of difference though, so would
be happy enough to go with either (b) or (a).

 Also, if there is a situation where a change in Nova passes with
 ceilometer+mysql and thus lands in Nova, but fails with
 ceilometer+mongodb, yes, that would break the ceilometer project's
 gate (but not the integrated gate). It would also indicate a
 substantial abstraction violation within ceilometer. I have proposed
 exactly this model for Ironic's deploy driver testing, and am willing
 to accept the consequences within the project if we break our own
 abstractions.

Fair point.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-09 Thread Eoghan Glynn


  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
 
 I think it's quite reasonable for a project to set aside some time to
 focus on stability, whether that is a whole release cycle or just a
 milestone. However, I think your question here is more about
 OpenStack-wide issues, and how we enco^D^D^D^D whether we can require
 integrated projects that are seen as having gate-affecting instability
 to pause and address that.
 
  On the integrated release side, more projects means stretching our
  limited strategic resources more. Is it time for the Technical Committee
  to more aggressively define what is in and what is out ? If we go
  through such a redefinition, shall we push currently-integrated projects
  that fail to match that definition out of the integrated release inner
  circle ?
 
 The integrated release is an overloaded term at the moment. Outside
 of the developer community, I see it often cited as a blessing of a
 project's legitimacy and production-worthiness. While I feel that a
 non-production-ready project should not be in the integrated
 release, this has not been upheld as a litmus test for integration in
 the past. Also, this does not imply that non-integrated projects
 should not be used in production. I've lost track of how many times
 I've heard someone say, Why would I deploy Ironic when it hasn't
 graduated yet.
 
 Integration is foremost an artifact of our testing and development
 processes -- an indication that a project has been following the
 release cadence, adheres to cross-project norms, is ready for
 cogating, and can be counted on to produce timely and stable builds at
 release time. This can plainly be seen by looking at the criteria for
 incubation and integration in our governance repo [1]. As written
 today, this does not have anything to do with the technical merit or
 production-worthiness of a project. It also does not have anything to
 do with what layer the project sits at -- whether it is IaaS, PaaS,
 or SaaS does not dictate whether it can be integrated.
 
 The TC has begun to scrutinize new projects more carefully on
 technical grounds, particularly since some recently-integrated
 projects have run into scaling limitations that have hampered their
 adoption. I believe this sort of technical guidance (or curation, if
 you will) is an essential function of the TC. We've learned that
 integration bestows legitimacy, as well as assumptions of stability,
 performance, and scalability, upon projects which are integrated.
 While that wasn't the intent, I think we need to accept that that is
 where we stand. We will be faced with some hard decisions regarding
 projects, both incubated and already integrated, which are clearly not
 meeting those expectations today.

How does this relate to the recent gap analysis undertaken by the
TC for already integrated projects, in order to measure their status
against the steadily rising bar for integration?

That aforementioned process is actually still ongoing, with the TC
doing progress reviews against the projects' action plans throughout
the Juno cycle.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-09 Thread Eoghan Glynn

Hi Folks,

Dina Belova has recently landed some infra patches[1,2] to create
an experimental mongodb-based Tempest job. This effectively just
overrides the ceilometer storage backend config so that mongodb
is used instead of sql-alchemy. The new job has been running
happily for a few days so I'd like now to consider the path
forwards with this.

One of our Juno goals under the TC gap analysis was to more fully
gate against mongodb, given that this is the storage backend
recommended/supported by many distros. The sql-alchemy backend,
on the other hand, is more suited for proofs of concept or small
deployments. However up to now we've been hampered from reflecting
that reality in the gate, due to the gate being stuck on Precise
for a long time, as befits LTS, and the version of mongodb needed
by ceilometer (i.e. 2.4) effectively unavailable on that Ubuntu
release (in fact it was limited to 2.0.4).

So the orientation towards gating on sql-alchemy was mostly
driven by legacy issues in the gate's usage of Precise, as
opposed to this being considered the most logical basket in
which to put all our testing eggs.

However, we're now finally in the brave new world of Trusty :)
So I would like to make the long-delayed change over soon.

This would involve transposing the roles of sql-alchemy and
mongodb in the gate - the mongodb variant becomes the blessed
job run by default, whereas the sql-alchemy based job to
relegated to the second tier.

So my questions are:

 (a) would the QA side of the house be agreeable to this switch?

and:

 (b) how long would the mongodb job need to be stable in this
 experimental mode before we pull the trigger on swicthing?

If the answer to (a) is yes, we can get infra patches proposed
early next week to make the swap.

Cheers,
Eoghan

[1] 
https://review.openstack.org/#/q/project:openstack-infra/config+branch:master+topic:ceilometer-mongodb-job,n,z
[2] 
https://review.openstack.org/#/q/project:openstack-infra/devstack-gate+branch:master+topic:ceilometer-backend,n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-09 Thread Eoghan Glynn


  Hi Folks,
  
  Dina Belova has recently landed some infra patches[1,2] to create
  an experimental mongodb-based Tempest job. This effectively just
  overrides the ceilometer storage backend config so that mongodb
  is used instead of sql-alchemy. The new job has been running
  happily for a few days so I'd like now to consider the path
  forwards with this.
  
  One of our Juno goals under the TC gap analysis was to more fully
  gate against mongodb, given that this is the storage backend
  recommended/supported by many distros. The sql-alchemy backend,
  on the other hand, is more suited for proofs of concept or small
  deployments. However up to now we've been hampered from reflecting
  that reality in the gate, due to the gate being stuck on Precise
  for a long time, as befits LTS, and the version of mongodb needed
  by ceilometer (i.e. 2.4) effectively unavailable on that Ubuntu
  release (in fact it was limited to 2.0.4).
  
  So the orientation towards gating on sql-alchemy was mostly
  driven by legacy issues in the gate's usage of Precise, as
  opposed to this being considered the most logical basket in
  which to put all our testing eggs.
  
  However, we're now finally in the brave new world of Trusty :)
  So I would like to make the long-delayed change over soon.
  
  This would involve transposing the roles of sql-alchemy and
  mongodb in the gate - the mongodb variant becomes the blessed
  job run by default, whereas the sql-alchemy based job to
  relegated to the second tier.
  
  So my questions are:
  
  (a) would the QA side of the house be agreeable to this switch?
  
  and:
  
  (b) how long would the mongodb job need to be stable in this
  experimental mode before we pull the trigger on swicthing?
  
  If the answer to (a) is yes, we can get infra patches proposed
  early next week to make the swap.
  
  Cheers,
  Eoghan
  
  [1]
  https://review.openstack.org/#/q/project:openstack-infra/config+branch:master+topic:ceilometer-mongodb-job,n,z
  [2]
  https://review.openstack.org/#/q/project:openstack-infra/devstack-gate+branch:master+topic:ceilometer-backend,n,z
  
 
 My interpretation of the gap analysis [1] is merely that you have coverage,
 not that you switch to it and relegate the SQLAlchemy tests to second chair.
 I believe that's a dangerous departure from current standards. A dependency
 on mongodb, due to it's AGPL license, and lack of sufficient support for a
 non-AGPL storage back end, has consistently been raised as a blocking issue
 for Marconi. [2]

Sure, the main goal is to have full mongodb-based coverage in the gate.

So, if the QA/infra folks are prepared to host *both* jobs, then I'd be
happy to change my request to simply:

  let's promote the mongodb-based Tempest variant to the first tier,
  to run alongside the current sqlalchemy-based job 

Does that work for you Devananda?

Cheers,
Eoghan
 
 -Deva
 
 
 [1]
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Ceilometer_Gap_Coverage
 
 [2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html
 is a very articulate example of this objection

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-09 Thread Eoghan Glynn


 Eoghan,
 
 Nice work on this. I think that first of all this job should be run on every
 patch for some period of time (not only in experimental pipe)
 
 By the way, If you would like we can help from Rally side.
 We are running benchmarks on every patch in it's gates. Ceilometer is fully
 turned on in these jobs, so we can be first adopters and switch to mongodb.

Hi Boris,

Excellent, that additional coverage would certainly be helpful.

Though in terms of the performance characteristics reported by
Rally, I'm guessing we wouldn't see much change, given the faster
metering store access would all be happening asynchronously to
the paths measured by Rally, amiright?

Cheers,
Eoghan

 This will ensure that everything works stable even under load, and i hope
 convince people to switch integrate gate to mongo.
 
 
 Best regards,
 Boris Pavlovic
 
 
 On Sat, Aug 9, 2014 at 3:19 PM, Eoghan Glynn  egl...@redhat.com  wrote:
 
 
 
 Hi Folks,
 
 Dina Belova has recently landed some infra patches[1,2] to create
 an experimental mongodb-based Tempest job. This effectively just
 overrides the ceilometer storage backend config so that mongodb
 is used instead of sql-alchemy. The new job has been running
 happily for a few days so I'd like now to consider the path
 forwards with this.
 
 One of our Juno goals under the TC gap analysis was to more fully
 gate against mongodb, given that this is the storage backend
 recommended/supported by many distros. The sql-alchemy backend,
 on the other hand, is more suited for proofs of concept or small
 deployments. However up to now we've been hampered from reflecting
 that reality in the gate, due to the gate being stuck on Precise
 for a long time, as befits LTS, and the version of mongodb needed
 by ceilometer (i.e. 2.4) effectively unavailable on that Ubuntu
 release (in fact it was limited to 2.0.4).
 
 So the orientation towards gating on sql-alchemy was mostly
 driven by legacy issues in the gate's usage of Precise, as
 opposed to this being considered the most logical basket in
 which to put all our testing eggs.
 
 However, we're now finally in the brave new world of Trusty :)
 So I would like to make the long-delayed change over soon.
 
 This would involve transposing the roles of sql-alchemy and
 mongodb in the gate - the mongodb variant becomes the blessed
 job run by default, whereas the sql-alchemy based job to
 relegated to the second tier.
 
 So my questions are:
 
 (a) would the QA side of the house be agreeable to this switch?
 
 and:
 
 (b) how long would the mongodb job need to be stable in this
 experimental mode before we pull the trigger on swicthing?
 
 If the answer to (a) is yes, we can get infra patches proposed
 early next week to make the swap.
 
 Cheers,
 Eoghan
 
 [1]
 https://review.openstack.org/#/q/project:openstack-infra/config+branch:master+topic:ceilometer-mongodb-job,n,z
 [2]
 https://review.openstack.org/#/q/project:openstack-infra/devstack-gate+branch:master+topic:ceilometer-backend,n,z
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][ceilometer] swapping the roles of mongodb and sqlalchemy for ceilometer in Tempest

2014-08-09 Thread Eoghan Glynn


 +2 from me,
 
 More mongodb adoption (as stated) when it's questionable legally doesn't seem
 like a good long term strategy (I know it will/does impact yahoo adopting or
 using ceilometer...). Is this another one of those tactical changes that we
 keep on making that ends up being yet another piece of technical debt that
 someone will have to cleanup :-/
 
 If we thought a little more about this strategically maybe we would end up in
 a better place short term *and* long term??

Hi Joshua,

Since we currently do support mongodb as an *optional* storage driver,
and some distros do recommend its usage, then surely we should test this
driver fully in the upstream gate to support those users who take that
option?

(i.e. those users who accept MongoDB Inc's assurances[1] in regard to
licensing of the client-side driver)

Cheers,
Eohgan

[1] http://www.mongodb.org/about/licensing/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >