Re: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-08-02 Thread Pradeep Kilambi
On Wed, Aug 1, 2018 at 6:06 PM Jill Rouleau  wrote:

> On Tue, 2018-07-31 at 07:38 -0400, Pradeep Kilambi wrote:
> >
> >
> > On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau  wrote:
> > > On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote:
> > > >
> > > >
> > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz wrote:
> > > > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr 
> > > > > wrote:
> > > > > >
> > > > > >
> > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi wrote:
> > > > > >>
> > > > > >> Your fellow reporter took a break from writing, but is now
> > > back
> > > > > on his
> > > > > >> pen.
> > > > > >>
> > > > > >> Welcome to the twenty-fifth edition of a weekly update in
> > > TripleO
> > > > > world!
> > > > > >> The goal is to provide a short reading (less than 5 minutes)
> > > to
> > > > > learn
> > > > > >> what's new this week.
> > > > > >> Any contributions and feedback are welcome.
> > > > > >> Link to the previous version:
> > > > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/
> > > 1314
> > > > > 26.html
> > > > > >>
> > > > > >> +-+
> > > > > >> | General announcements |
> > > > > >> +-+
> > > > > >>
> > > > > >> +--> Rocky Milestone 3 is next week. After, any feature code
> > > will
> > > > > require
> > > > > >> Feature Freeze Exception (FFE), asked on the mailing-list.
> > > We'll
> > > > > enter a
> > > > > >> bug-fix only and stabilization period, until we can push the
> > > > > first stable
> > > > > >> version of Rocky.
> > > > > >
> > > > > >
> > > > > > Hey guys,
> > > > > >
> > > > > >   I would like to ask for FFE for backup and restore, where we
> > > > > ended up
> > > > > > deciding where is the best place for the code base for this
> > > > > project (please
> > > > > > see [1] for details). We believe that B&R support for
> > > overcloud
> > > > > control
> > > > > > plane will be a good addition to the Rocky release, but we started
> > > > > with this
> > > > > > initiative quite late indeed. The final result should be the
> > > support
> > > > > in
> > > > > > openstack client, where "openstack overcloud (backup|restore)"
> > > > > would work as
> > > > > > a charm. Thanks in advance for considering this feature.
> > > > > >
> > > > >
> > > > > Was there a blueprint/spec for this effort?  Additionally do we
> > > have
> > > > > a
> > > > > list of the outstanding work required for this? If it's just
> > > these
> > > > > two
> > > > > playbooks, it might be ok for an FFE. But if there's additional
> > > > > tripleoclient related changes, I wouldn't necessarily feel
> > > > > comfortable
> > > > > with these unless we have a complete list of work.  Just as a
> > > side
> > > > > note, I'm not sure putting these in tripleo-common is going to
> > > be
> > > > > the
> > > > > ideal place for this.
> > >
> > > Was it this review? https://review.openstack.org/#/c/582453/
> > >
> > > For Stein we'll have an ansible role[0] and playbook repo[1] where
> > > these
> > > types of tasks should live.
> > >
> > > [0] https://github.com/openstack/ansible-role-openstack-operations
> > > [1] https://review.openstack.org/#/c/583415/
> > Thanks Jill! The issue is, we want to be able to backport this to
> > Queens once merged. With the new repos you're mentioning, would this be
> > possible? If not, then this won't work for us, unfortunately.
> >
>
> We wouldn't backport the new packages to Queens; however, the repos will
> be on github and available to clone

Re: [openstack-dev] [tripleo][ci][metrics] FFE request for QDR integration in TripleO (Was: Stucked in the middle of work because of RDO CI)

2018-07-31 Thread Pradeep Kilambi
Hi Alex:

Can you consider this our FFE for the QDR patches? It's mainly blocked on CI
issues. Half the patches for QDR integration are already merged. The other
3 referenced need to get merged once CI passes. Please consider this our
formal request for an FFE for QDR integration in TripleO.

Cheers,
~ Prad

On Tue, Jul 31, 2018 at 7:40 AM Sagi Shnaidman  wrote:

> Hi, Martin
>
> I see master OVB jobs are passing now [1], please recheck.
>
> [1] http://cistatus.tripleo.org/
>
> On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr  wrote:
>
>> Greetings guys,
>>
>>   it is pretty obvious that RDO CI jobs in TripleO projects are broken
>> [0]. Once the Zuul CI jobs pass, would it be possible to have the AMQP/collectd
>> patches ([1],[2],[3]) merged despite the negative result of the RDO
>> CI jobs? Half of the patches for this feature are merged and the other half
>> is stuck in a situation where nobody reviews these patches, because
>> there is a red -1. Those patches passed the Zuul jobs several times already and
>> were manually tested too.
>>
>> Thanks in advance for consideration of this situation,
>> Martin
>>
>> [0]
>> https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure
>> [1] https://review.openstack.org/#/c/578749
>> [2] https://review.openstack.org/#/c/576057/
>> [3] https://review.openstack.org/#/c/572312/
>>
>> --
>> Martin Mágr
>> Senior Software Engineer
>> Red Hat Czech
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best regards
> Sagi Shnaidman
>


-- 
Cheers,
~ Prad
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-07-31 Thread Pradeep Kilambi
On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau  wrote:

> On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote:
> >
> >
> > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz 
> > wrote:
> > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr 
> > > wrote:
> > > >
> > > >
> > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi wrote:
> > > >>
> > > >> Your fellow reporter took a break from writing, but is now back
> > > on his
> > > >> pen.
> > > >>
> > > >> Welcome to the twenty-fifth edition of a weekly update in TripleO
> > > world!
> > > >> The goal is to provide a short reading (less than 5 minutes) to
> > > learn
> > > >> what's new this week.
> > > >> Any contributions and feedback are welcome.
> > > >> Link to the previous version:
> > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/1314
> > > 26.html
> > > >>
> > > >> +-+
> > > >> | General announcements |
> > > >> +-+
> > > >>
> > > >> +--> Rocky Milestone 3 is next week. After, any feature code will
> > > require
> > > >> Feature Freeze Exception (FFE), asked on the mailing-list. We'll
> > > enter a
> > > >> bug-fix only and stabilization period, until we can push the
> > > first stable
> > > >> version of Rocky.
> > > >
> > > >
> > > > Hey guys,
> > > >
> > > >   I would like to ask for FFE for backup and restore, where we
> > > ended up
> > > > deciding where is the best place for the code base for this
> > > project (please
> > > > see [1] for details). We believe that B&R support for overcloud
> > > control
> > > > plane will be a good addition to the Rocky release, but we started
> > > with this
> > > > initiative quite late indeed. The final result should be the support
> > > in
> > > > openstack client, where "openstack overcloud (backup|restore)"
> > > would work as
> > > > a charm. Thanks in advance for considering this feature.
> > > >
> > >
> > > Was there a blueprint/spec for this effort?  Additionally do we have
> > > a
> > > list of the outstanding work required for this? If it's just these
> > > two
> > > playbooks, it might be ok for an FFE. But if there's additional
> > > tripleoclient related changes, I wouldn't necessarily feel
> > > comfortable
> > > with these unless we have a complete list of work.  Just as a side
> > > note, I'm not sure putting these in tripleo-common is going to be
> > > the
> > > ideal place for this.
>
> Was it this review? https://review.openstack.org/#/c/582453/
>
> For Stein we'll have an ansible role[0] and playbook repo[1] where these
> types of tasks should live.
>
> [0] https://github.com/openstack/ansible-role-openstack-operations
> [1] https://review.openstack.org/#/c/583415/


Thanks Jill! The issue is, we want to be able to backport this to Queens
once merged. With the new repos you're mentioning, would this be possible?
If not, then this won't work for us, unfortunately.




>
>
> >
> > Thanks Alex. For Rocky, if we can ship the playbooks with relevant
> > docs we should be good. We will integrate with the client in the Stein
> > release with restore logic included. Regarding putting these in
> > tripleo-common, we're open to suggestions. I think Dan just submitted the
> > review so we can get some eyes on the playbooks. Where do you suggest
> > would be a better place for these instead?
> >
> > >
> > > Thanks,
> > > -Alex
> > >
> > > > Regards,
> > > > Martin
> > > >
> > > > [1] https://review.openstack.org/#/c/582453/
> > > >
> > > >>
> > > >> +--> Next PTG will be in Denver, please propose topics:
> > > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein
> > > >> +--> Multiple squads are currently brainstorming a framework to
> > > provide
> > > >> validations pre/post upgrades - stay in touch!
> > > >>
> > > >> +--+
> > > >> | Continuous Integration |
> > > >> +--+
> > > >>

Re: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-07-30 Thread Pradeep Kilambi
On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz  wrote:

> On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr  wrote:
> >
> >
> > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi 
> wrote:
> >>
> >> Your fellow reporter took a break from writing, but is now back on his
> >> pen.
> >>
> >> Welcome to the twenty-fifth edition of a weekly update in TripleO world!
> >> The goal is to provide a short reading (less than 5 minutes) to learn
> >> what's new this week.
> >> Any contributions and feedback are welcome.
> >> Link to the previous version:
> >>
> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html
> >>
> >> +-+
> >> | General announcements |
> >> +-+
> >>
> >> +--> Rocky Milestone 3 is next week. After, any feature code will
> require
> >> Feature Freeze Exception (FFE), asked on the mailing-list. We'll enter a
> >> bug-fix only and stabilization period, until we can push the first
> stable
> >> version of Rocky.
> >
> >
> > Hey guys,
> >
> >   I would like to ask for FFE for backup and restore, where we ended up
> > deciding where is the best place for the code base for this project
> (please
> > see [1] for details). We believe that B&R support for the overcloud control
> > plane will be a good addition to the Rocky release, but we started with this
> > initiative quite late indeed. The final result should be the support in
> > openstack client, where "openstack overcloud (backup|restore)" would
> work as
> > a charm. Thanks in advance for considering this feature.
> >
>
> Was there a blueprint/spec for this effort?  Additionally do we have a
> list of the outstanding work required for this? If it's just these two
> playbooks, it might be ok for an FFE. But if there's additional
> tripleoclient related changes, I wouldn't necessarily feel comfortable
> with these unless we have a complete list of work.  Just as a side
> note, I'm not sure putting these in tripleo-common is going to be the
> ideal place for this.
>


Thanks Alex. For Rocky, if we can ship the playbooks with relevant docs we
should be good. We will integrate with the client in the Stein release, with
restore logic included. Regarding putting these in tripleo-common, we're open to
suggestions. I think Dan just submitted the review so we can get some eyes
on the playbooks. Where do you suggest would be a better place for these instead?
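For reference, a rough sketch of what the Stein client integration could look
like as an OpenStackClient plugin command that just wraps the playbook. The
class name, plugin wiring and paths below are illustrative assumptions, not the
actual tripleoclient implementation:

```python
# Hypothetical sketch of an "openstack overcloud backup" command. It assumes an
# osc_lib/cliff-style plugin and a playbook shipped by the new ansible role;
# names and default paths are illustrative, not real tripleoclient code.
import subprocess

from osc_lib.command import command


class BackupOvercloud(command.Command):
    """Back up the overcloud control plane (illustrative sketch)."""

    def get_parser(self, prog_name):
        parser = super(BackupOvercloud, self).get_parser(prog_name)
        parser.add_argument(
            '--playbook',
            default='/usr/share/ansible/backup-and-restore/backup.yml',
            help='Path to the backup playbook (assumed location).')
        parser.add_argument(
            '--inventory',
            default='/usr/share/ansible/inventory/tripleo-hosts.yml',
            help='Ansible inventory describing the overcloud nodes.')
        return parser

    def take_action(self, parsed_args):
        # Keep the client a thin wrapper: delegate the real work to the
        # playbook so the same logic can also be run by hand.
        subprocess.check_call(
            ['ansible-playbook', '-i', parsed_args.inventory,
             parsed_args.playbook])
```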


>
> Thanks,
> -Alex
>
> > Regards,
> > Martin
> >
> > [1] https://review.openstack.org/#/c/582453/
> >
> >>
> >> +--> Next PTG will be in Denver, please propose topics:
> >> https://etherpad.openstack.org/p/tripleoci-ptg-stein
> >> +--> Multiple squads are currently brainstorming a framework to provide
> >> validations pre/post upgrades - stay in touch!
> >>
> >> +--+
> >> | Continuous Integration |
> >> +--+
> >>
> >> +--> Sprint theme: migration to Zuul v3 (More on
> >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals)
> >> +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI
> >> issue.
> >> +--> Promotion on master is 4 days, 0 days on Queens and Pike and 1 day
> on
> >> Ocata.
> >> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
> >>
> >> +-+
> >> | Upgrades |
> >> +-+
> >>
> >> +--> Good progress on major upgrades workflow, need reviews!
> >> +--> More:
> https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
> >>
> >> +---+
> >> | Containers |
> >> +---+
> >>
> >> +--> We switched python-tripleoclient to deploy containerized undercloud
> >> by default!
> >> +--> Image prepare via workflow is still work in progress.
> >> +--> More:
> >> https://etherpad.openstack.org/p/tripleo-containers-squad-status
> >>
> >> +--+
> >> | config-download |
> >> +--+
> >>
> >> +--> UI integration is almost done (need review)
> >> +--> Bug with failure listing is being fixed:
> >> https://bugs.launchpad.net/tripleo/+bug/1779093
> >> +--> More:
> >> https://etherpad.openstack.org/p/tripleo-config-download-squad-status
> >>
> >> +--+
> >> | Integration |
> >> +--+
> >>
> >> +--> We're enabling decoupled deployment plans e.g for OpenShift, DPDK
> >> etc:
> >>
> https://review.openstack.org/#/q/topic:alternate_plans+(status:open+OR+status:merged)
> >> (need reviews).
> >> +--> More:
> >> https://etherpad.openstack.org/p/tripleo-integration-squad-status
> >>
> >> +-+
> >> | UI/CLI |
> >> +-+
> >>
> >> +--> Good progress on network configuration via UI
> >> +--> Config-download patches are being reviewed and a lot of testing is
> >> going on.
> >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
> >>
> >> +---+
> >> | Validations |
> >> +---+
> >>
> >> +--> Working on OpenShift validations, need reviews.
> >> +--> More:
> >> https://etherpad.openstack.org/p/tripleo-validations-squad-status
> >>
> >> 

Re: [openstack-dev] [tripleo] Proposing Bogdan Dobrelya core on TripleO / Containers

2017-07-21 Thread Pradeep Kilambi
On Fri, Jul 21, 2017 at 1:36 PM, Brent Eagles  wrote:
>
>
> On Fri, Jul 21, 2017 at 12:25 PM, Emilien Macchi  wrote:
>>
>> Hi,
>>
>> Bogdan (bogdando on IRC) has been very active in Containerization of
>> TripleO and his quality of review has increased over time.
>> I would like to give him core permissions on container work in TripleO.
>> Any feedback is welcome as usual, we'll vote as a team.
>>
>> Thanks,
>> --
>> Emilien Macchi
>
>
> +1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

+1


-- 
Cheers,
~ Prad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-12 Thread Pradeep Kilambi
On Tue, Jul 11, 2017 at 10:06 PM, Wesley Hayutin <whayu...@redhat.com> wrote:
>
>
> On Tue, Jul 11, 2017 at 9:04 PM, Emilien Macchi <emil...@redhat.com> wrote:
>>
>> On Tue, Jul 11, 2017 at 12:41 PM, Pradeep Kilambi <p...@redhat.com> wrote:
>> > On Tue, Jul 11, 2017 at 3:17 PM, Wesley Hayutin <whayu...@redhat.com>
>> > wrote:
>> >> Greetings,
>> >>
>> >> I was looking through the mailing list and I did not see any emails
>> >> explicitly calling out the decreased coverage for telemetry in tripleo
>> >> due
>> >> to [1].  A series of changes went into the CI system to disable
>> >> telemetry
>> >> [2].
>> >>
>> >> There is work being done to restore more coverage for telemetry by
>> >> limiting
>> >> the resources it consumes [3].  We are also working on additional
>> >> scenarios
>> >> in t-h-t/ci/environments/ to better cover ceilometer.
>> >>
>> >> If the CI environment you are working in has the resources to cover
>> >> ceilometer that is great, however if you find issues like [1] we highly
>> >> suggest you follow the same pattern until coverage is restored
>> >> upstream.
>> >>
>> >> Thank you!
>> >>
>> >> [1] https://bugs.launchpad.net/tripleo/+bug/1693174
>> >> [2] https://review.openstack.org/#/q/topic:bug/1680195
>> >> [3]
>> >> https://review.openstack.org/#/c/475838/
>> >> https://review.openstack.org/#/c/474969/
>> >> https://review.openstack.org/#/c/47/
>> >>
>> >>
>> >
>> > Thanks for starting this thread Wes. I concur with this. We got bitten
>> > recently by many issues that we could have caught in ci had telemetry
>> > been enabled. I spoke to trown and Emilien about this a few times
>> > already. I do understand the resource footprint it causes.  But with
>> > recent improvements and changes upstream, things should be back to
>> > being more manageable. We do have telemetry tested in scenario001 job,
>> > but that doesn't cover all scenarios. So there is a gap in coverage.
>>
>> What do you mean by gap in coverage?
>> We have scenarios on purpose, so we can horizontally scale the
>> coverage across multiple jobs and run the jobs only when we need (e.g.
>> touching telemetry files for scenario001).
>>
>> Please elaborate on what isn't covered by scenario001, because we
>> already cover Gnocchi, Panko, Aodh and Ceilometer (with RBD backend
>> and soon with Swift backend in scenario002).
>>
>
> Emilien,
> Gap is the wrong word to use in the case.
> Previously we had several jobs running with telemetry turned on including
> ovb jobs in tripleo and other jobs outside of the upstream CI system.
> The more jobs running, the more coverage.
> I think that is what Pradeep was referring to, but maybe I am
> misunderstanding this as well.

Yeah, maybe gap is not the right word. But mostly I meant what Wes
said, and I also feel we are not testing telemetry with full HA
currently in CI. The scenario jobs only test deploys with 1 controller, not
3. We have seen some recent issues where things work on controller 0
but controller 1 or 2 has statsd down, for example. The OVB HA job
would have shown us that, had it included telemetry
enabled. Is it possible to run the scenario001 job with full HA?
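To illustrate the kind of per-node drift a single-controller scenario job
cannot catch, here is a minimal sketch that checks one service on every
controller over SSH. The controller host names, SSH user and the
openstack-gnocchi-statsd unit name are assumptions for illustration only:

```python
# Minimal sketch: verify a telemetry service is active on every controller.
# Host names, SSH user and the service unit name are illustrative assumptions.
import subprocess

CONTROLLERS = ['overcloud-controller-0', 'overcloud-controller-1',
               'overcloud-controller-2']
SERVICE = 'openstack-gnocchi-statsd'

for host in CONTROLLERS:
    result = subprocess.run(
        ['ssh', 'heat-admin@%s' % host, 'systemctl', 'is-active', SERVICE],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    state = result.stdout.decode().strip() or 'unknown'
    # A single-controller job only ever exercises controller-0, so a bad
    # state on controller-1/2 like the statsd case above goes unnoticed.
    print('%s: %s is %s' % (host, SERVICE, state))
```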



>
>
>>
>> >  I hope we can either re-enable these services by default in CI and
>> > see how things work or at least add a separate gate job to be able to test
>> > HA scenario properly with telemetry enabled.
>> >
>> > --
>> > Cheers,
>> > ~ Prad
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Cheers,
~ Prad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] decreased coverage for telemetry

2017-07-11 Thread Pradeep Kilambi
On Tue, Jul 11, 2017 at 3:17 PM, Wesley Hayutin  wrote:
> Greetings,
>
> I was looking through the mailing list and I did not see any emails
> explicitly calling out the decreased coverage for telemetry in tripleo due
> to [1].  A series of changes went into the CI system to disable telemetry
> [2].
>
> There is work being done to restore more coverage for telemetry by limiting
> the resources it consumes [3].  We are also working on additional scenarios
> in t-h-t/ci/environments/ to better cover ceilometer.
>
> If the CI environment you are working in has the resources to cover
> ceilometer that is great, however if you find issues like [1] we highly
> suggest you follow the same pattern until coverage is restored upstream.
>
> Thank you!
>
> [1] https://bugs.launchpad.net/tripleo/+bug/1693174
> [2] https://review.openstack.org/#/q/topic:bug/1680195
> [3]
> https://review.openstack.org/#/c/475838/
> https://review.openstack.org/#/c/474969/
> https://review.openstack.org/#/c/47/
>
>

Thanks for starting this thread, Wes. I concur with this. We got bitten
recently by many issues that we could have caught in CI had telemetry
been enabled. I spoke to trown and Emilien about this a few times
already. I do understand the resource footprint it causes. But with
recent improvements and changes upstream, things should be back to
being more manageable. We do have telemetry tested in the scenario001 job,
but that doesn't cover all scenarios. So there is a gap in coverage.

I hope we can either re-enable these services by default in CI and see
how things work, or at least add a separate gate job to be able to test
the HA scenario properly with telemetry enabled.

-- 
Cheers,
~ Prad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] proposing Alex Schultz tripleo-core in all projects

2017-07-07 Thread Pradeep Kilambi
+1 to Alex

On Fri, Jul 7, 2017 at 4:20 PM, Juan Antonio Osorio  wrote:
> +1
>
> He's a great reviewer
>
> On 7 Jul 2017 8:40 pm, "Emilien Macchi"  wrote:
>>
>> Alex has demonstrated high technical and community skills in TripleO -
>> where he's already core on THT, instack-undercloud, and puppet-tripleo
>> - but also very involved in other repos.
>> I propose that we extend his core status to all TripleO projects and
>> of course trust him (like we trust all core members) to review patches
>> were we feel confortable with.
>>
>> He has shown an high interest in reviewed other TripleO projects and I
>> think he would be ready for this change.
>> As usual, this is an open proposal, any feedback is welcome.
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Cheers,
~ Prad

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG + Release status

2017-02-25 Thread Pradeep Kilambi


Sent from my iPhone

> On Feb 25, 2017, at 11:14 AM, Emilien Macchi  wrote:
> 
>> On Fri, Feb 24, 2017 at 8:52 PM, Emilien Macchi  wrote:
>> Before leaving in week-end, I thought useful to share some infos with the 
>> team.
>> 
>> First of all, it was awesome to meet people face to face, really. I'm
>> already looking forward to the next time.
>> 
>> About Ocata release, it seems like we're getting very close to be
>> ready for RC2, probably next week. I've been discussed with upgrade
>> team and we have defined most of blockers, that would hopefully be
>> fixed next week.
> 
> I've seen quite a few number of services without upgrade tasks. We
> need to add them asap before the RC2:
> - redis
> - auditd
> - ceph-mds
> - cinder-backup
> - ec2api
> - etcd
> - horizon: https://review.openstack.org/#/c/437545/
> - all manila services
> - all mistral services
> - memcached
> - neutron-l3-compute-dvr
> - some neutron agents (dkpg, sriov)
> - libvirt
> - all octavia services
> - all ODL services
> - zaqar
> - all contrail services
> - all resources managed by Pacemaker (unless I missed something?)
> - panko-api

Panko is a new service in Ocata, so we don't need upgrade tasks for it until
Pike. I'll get this into pike-1.

> 
> Some services were introduced in Ocata so they don't necessarily need
> the upgrade tasks, though it would be great to have it before end of
> Pike, so we can tests it. For the others, they are required for Ocata
> RC2.
> 
> Thanks,
> 
>> Also breaking news: the upgrade job is now testing Nova upgrades from
>> Newton to Ocata and we've got a successful pingtest:
>> http://logs.openstack.org/55/438055/1/check/gate-tripleo-ci-centos-7-multinode-upgrades-nv/353b16a/console.html#_2017-02-25_00_58_53_067521
>> First time we run an actual overcloud upgrade in CI with all our
>> critical services. Big kudos to the team!
>> 
>> Next week, I'll take some time to prepare the Launchpad for Pike
>> cycle. I'll probably reach some of you regarding the blueprints we
>> discussed at PTG.
>> I'll also write a summary on the mailing-list of what happened this week.
>> 
>> For those who travel this week-end have a safe trip back to home!
>> 
>> See you online next week,
>> --
>> Emilien Macchi
> 
> 
> 
> -- 
> Emilien Macchi
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] remaining ocata items

2017-01-24 Thread Pradeep Kilambi
On Tue, Jan 24, 2017 at 2:05 PM, Emilien Macchi  wrote:

> On Tue, Jan 24, 2017 at 12:52 PM, gordon chung  wrote:
> >
> >
> > On 24/01/17 11:53 AM, Emilien Macchi wrote:
> >> Yes, even outside TripleO, it's already hard to follow all changes
> >> made in projects, so please do not deprecate things at the end of a
> >> cycle.
> >> Let's take some time, we do it in Pike, making good communication, so
> >> folks like TripleO etc can have the whole Pike cycle to make
> >> adjustments and we can hopefully remove this service in Queen.
> >
> > sigh, this wasn't just a decision we came up with. just that work didn't
> > get done as quick as expected. ie. all the dependent work was done in
> > beginning of cycle. i suggest you read minds better :P.
>
> I'm ok if you deprecate it in Ocata, as long:
>
> 1) it's properly document how to make a transition to the new services.
> 2) We don't remove it in Pike, because work to deprecate it would have
> been done end of Ocata.
>
> Deal?
>

From what I understand, this would mean the default would change to setting
it via the pipeline, and the dispatcher options won't take effect. So this will
still impact us and we need to make changes to templates and puppet. It's not
just about deprecating it and removing the code.

So it's better we wait for Pike regardless.


> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Pradeep Kilambi
On Mon, Dec 12, 2016 at 3:48 PM, Michele Baldessari <mich...@acksyn.org>
wrote:

> Hi Pradeep,
>
> On Mon, Dec 12, 2016 at 02:51:59PM +0100, Giulio Fidente wrote:
> > On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:
> > >I would like to get some thoughts on $Subject. This came up when i was
> > >discussing the standalone roles for telemetry. Currently when we deploy
> > >redis in tripleo, its a pacemaker managed service. So if we were to
> > >deploy telemetry services on a dedicated node we could. But redis will
> > >have to be on a another node? (assuming we dont want to pull in
> > >pacemaker on to telemetry nodes).
>
> Ok so with the composable HA work [1] you should be able to split out
> the redis service on to dedicated nodes and these nodes can be either
> full pacemaker cluster members or only have the pacemaker-remote
> service.
>
> > currently redis instances are not configured as a redis cluster but use
> the
> > master/slave replication model instead and pacemaker is taking care of
> > electing/relocating the redis master as needed
> >
> > there shouldn't be any dependency on the redis profile for the telemetry
> > roles, they should instead just point at the redis_vip
> >
> > the redis_vip is always guaranteed (by haproxy) to point to the redis
> master
> >
> > >With most services moved out of pacemaker in Newton, I think its time to
> > >move redis as well? Are there any constraints in moving redis to be
> > >managed by systemd? Looking at how we do it, It should be easily movable
> > >to systemd? Can we consider doing this for Ocata?
> >
> > I think we could look at using the redis cluster which allows multiple
> > masters, but I am not sure this can happen in Ocata ... yet again, there
> > shouldn't be in the telemetry roles any dependency on redis itself
> >
> > if we were to use the cluster mode the only difference would probably be
> > that the redis_vip will start balancing requests across the nodes
>
> In general I am in favour to split out redis from pacemaker. There is
> the question that in theory we'd have two potentially separate quorums,
> but I think that with redis this should not be a big problem.
>
> Maybe let's start with a prototype and see how things look and iterate
> from there? I think it is a bit late for ocata, but we could at least
> start the work without changing the defaults (i.e. let the operator
> override the tripleo::service with a redis base profile instead of the
> pacemaker one)
>

Makes sense. I understand it might be too late for Ocata. We don't really have
any urgency so long as we can split out redis, like you say we can, with
composable HA. I was more curious what the long-term plan was, and what you
said makes sense.

Thanks.
~ Prad


>
> Does that make sense,
> Michele
>
> [1] https://review.openstack.org/#/q/topic:bp/composable-ha
> --
> Michele Baldessari<mich...@acksyn.org>
> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Pradeep Kilambi
On Mon, Dec 12, 2016 at 8:58 AM, Giulio Fidente <gfide...@redhat.com> wrote:

> On 12/12/2016 02:51 PM, Giulio Fidente wrote:
>
>> On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:
>>
>>> Hey Folks:
>>>
>>> I would like to get some thoughts on $Subject. This came up when i was
>>> discussing the standalone roles for telemetry. Currently when we deploy
>>> redis in tripleo, its a pacemaker managed service. So if we were to
>>> deploy telemetry services on a dedicated node we could. But redis will
>>> have to be on a another node? (assuming we dont want to pull in
>>> pacemaker on to telemetry nodes).
>>>
>>
>> currently redis instances are not configured as a redis cluster but use
>> the master/slave replication model instead and pacemaker is taking care
>> of electing/relocating the redis master as needed
>>
>> there shouldn't be any dependency on the redis profile for the telemetry
>> roles, they should instead just point at the redis_vip
>>
>> the redis_vip is always guaranteed (by haproxy) to point to the redis
>> master
>>
>> With most services moved out of pacemaker in Newton, I think its time to
>>> move redis as well? Are there any constraints in moving redis to be
>>> managed by systemd? Looking at how we do it, It should be easily movable
>>> to systemd? Can we consider doing this for Ocata?
>>>
>>
>> I think we could look at using the redis cluster which allows multiple
>> masters, but I am not sure this can happen in Ocata ... yet again, there
>> shouldn't be in the telemetry roles any dependency on redis itself
>>
>> if we were to use the cluster mode the only difference would probably be
>> that the redis_vip will start balancing requests across the nodes
>>
>
> also note that the cluster model is not replicating data across the nodes,
> but rather sharding the data, with the promise of surviving failures of
> some of the nodes and I'm not sure we want that anyway
>
> https://redis.io/topics/cluster-spec


I just thought we could use Redis Sentinel to achieve HA and avoid the
pacemaker headache, which I think was already integrated a while ago in
tripleo. It's just not used, as pacemaker is taking over control.

See https://redis.io/topics/sentinel
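For illustration, a minimal sketch of how a client would discover the current
master through Sentinel with redis-py; the sentinel addresses and the
'mymaster' service name are assumptions:

```python
# Minimal Sentinel sketch with redis-py: clients ask the sentinels which node
# is currently the master instead of relying on pacemaker to move a VIP.
# The sentinel addresses and the monitored master name are illustrative.
from redis.sentinel import Sentinel

sentinel = Sentinel([('192.0.2.10', 26379),
                     ('192.0.2.11', 26379),
                     ('192.0.2.12', 26379)],
                    socket_timeout=0.5)

# Writes go to whichever node Sentinel reports as master; reads can be
# spread across the replicas.
master = sentinel.master_for('mymaster', socket_timeout=0.5)
replica = sentinel.slave_for('mymaster', socket_timeout=0.5)

master.set('tripleo-demo-key', 'value')
print(replica.get('tripleo-demo-key'))
```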



>
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-09 Thread Pradeep Kilambi
Hey Folks:

I would like to get some thoughts on $Subject. This came up when I was
discussing the standalone roles for telemetry. Currently, when we deploy
redis in tripleo, it's a pacemaker-managed service. So if we wanted to deploy
telemetry services on a dedicated node, we could, but redis would have to be
on another node? (assuming we don't want to pull in pacemaker onto the
telemetry nodes).

With most services moved out of pacemaker in Newton, I think it's time to
move redis as well? Are there any constraints on moving redis to be managed
by systemd? Looking at how we do it, it should be easily movable to
systemd? Can we consider doing this for Ocata?
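For context, from the telemetry side Redis is consumed purely as a
coordination endpoint (via tooz), so the services don't care whether Redis is
managed by pacemaker or systemd. A minimal sketch, with an illustrative VIP
address and member name:

```python
# Minimal sketch of how a telemetry service consumes Redis: it only needs a
# coordination URL (here via tooz), independent of how Redis is managed.
# The VIP address and member/group names are illustrative values.
from tooz import coordination

coordinator = coordination.get_coordinator(
    'redis://192.0.2.20:6379', b'telemetry-node-0')
coordinator.start(start_heart=True)

# Join a coordination group, the way agents partition work among themselves.
group = b'demo-group'
try:
    coordinator.create_group(group).get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(group).get()

print(coordinator.get_members(group).get())
coordinator.stop()
```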

Thoughts?

~ Prad
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-01 Thread Pradeep Kilambi
On Thu, Dec 1, 2016 at 5:26 PM, Emilien Macchi  wrote:

> Team,
>
> Alex Schultz (mwhahaha on IRC) has been active on TripleO since a few
> months now.  While he's very active in different areas of TripleO, his
> reviews and contributions on puppet-tripleo have been very useful.
> Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
> think he perfectly understands how puppet-tripleo works. His
> involvement in the project and contributions on puppet-tripleo deserve
> that we allow him to +2 puppet-tripleo.
>
> Thanks Alex for your involvement and hard work in the project, this is
> very appreciated!
>
> As usual, I'll let the team to vote about this proposal.
>


+1


>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] proposing Michele Baldessari part of core team

2016-11-04 Thread Pradeep Kilambi
On Fri, Nov 4, 2016 at 1:40 PM, Emilien Macchi  wrote:

> MIchele Baldessari (bandini on IRC) has consistently demonstrated high
> levels of contributions in TripleO projects, specifically in High
> Availability area where he's a guru for us (I still don't understand
> how pacemaker works, but hopefully he does).
>
> He has done incredible work on composable services and also on
> improving our HA configuration by following reference architectures.
> Always here during meetings, and on #tripleo to give support to our
> team, he's a great team player and we are lucky to have him onboard.
> I believe he would be a great core reviewer on HA-related work and we
> expect his review stats to continue improving as his scope broadens
> over time.
>
> As usual, feedback is welcome and please vote for this proposal!
>


+1 of course :)


>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Pradeep Kilambi
On Wednesday, October 12, 2016, Emilien Macchi  wrote:

> On Wed, Oct 12, 2016 at 7:10 AM, Giulio Fidente wrote:
> > hi,
> >
> > we introduced support for the deployment of Ceph in the liberty release
> so
> > that it could optionally be used as backend for one or more of Cinder,
> > Glance, Nova and more recently Gnocchi.
> >
> > We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on
> > dedicated ceph-storage nodes so a deployment of OpenStack with Ceph would
> > need at least 1 more additional node to host a Ceph OSD.
> >
> > In our HA scenario the storage backends are configured as follows:
> >
> > Glance -> Swift
> > Nova (ephemeral) -> Local
> > Cinder (persistent) -> LVM (on controllers)
> > Gnocchi -> Swift
> >
> > The downside of the above configuration is that Cinder volumes can not be
> > replicated across the controller nodes and become unavailable if a
> > controller fails, while production environments generally expect
> persistent
> > storage to be highly available. Cinder volumes instead could even get
> lost
> > completely in case of a permanent failure of a controller.
> >
> > With the Newton release and the composable roles we can now deploy Ceph
> OSDs
> > on the compute nodes, removing the requirement we had for an additional
> node
> > to host a Ceph OSD.
> >
> > I would like to ask for some feedback on the possibility of deploying
> Ceph
> > by default in the HA scenario and use it as backend for Cinder.
> >
> > Also using Swift as backend for Glance and Gnocchi is enough to cover the
> > availability issue for the data, but it also means we're storing that
> data
> > on the controller nodes which might or might not be wanted; I don't see a
> > strong reason for defaulting them to Ceph, but it might make more sense
> when
> > Ceph is available; feedback about this would be appreciated as well.
> >
> > Finally a shared backend (Ceph) for Nova would allow live migrations but
> > probably decrease performances for the guests in general; so I'd be
> against
> > defaulting Nova to Ceph. Feedback?
>
> +1 on making ceph default backend for nova/glance/cinder/gnocchi.
> I think this is the most common use-case we currently have in our
> deployments AFIK.


+1 from me. Ceph is the recommended backend for Gnocchi, and this will
help a lot with some recent performance issues we have seen.

- Prad



> Also, I'll continue to work on scenarios jobs (scenario002 and
> scenario003 without Ceph to cover other use cases).
>
> > --
> > Giulio Fidente
> > GPG KEY: 08D733BA
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][FFE] Request to allow ceilometerclient 2.6.1 in upper-constraints for newton

2016-09-09 Thread Pradeep Kilambi
Hi,

This is an FFE request to get ceilometerclient 2.6.1 into upper-constraints.
There is already a patch for this, blocked by the freeze [1].

It looks like ceilometerclient 2.6.1 was cut 3 days after the freeze, and it
includes various critical fixes such as [2]. Without it, the aodh logs fill up
with the following errors [3] and alarm creation fails.

Please consider approving this for newton asap.

Thanks,

~ Prad

[1] https://review.openstack.org/#/c/367386/
[2] https://review.openstack.org/#/c/366638/
[3]
http://logs.openstack.org/96/366896/4/check/gate-tripleo-ci-centos-7-scenario001-multinode-nv/97ed4b4/logs/subnode-2/var/log/aodh/evaluator.txt.gz#_2016-09-09_13_05_23_261
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-17 Thread Pradeep Kilambi
On Tue, May 17, 2016 at 1:31 PM, James Slagle <james.sla...@gmail.com>
wrote:

> On Tue, May 17, 2016 at 12:04 PM, Pradeep Kilambi <p...@redhat.com> wrote:
> > Thanks Steve. I was under the impression we cannot run puppet at this
> > stage. Hence my suggestion to run bash or some script here, but if we
> > can find a way to easily wire the existing aodh puppet manifests into
> > the upgrade process and get aodh up and running then even better, we
> > don't have to duplicate what puppet gives us already and reuse that.
>
> We could add any SoftwareDeployment resource(s) to the templates that
> trigger either scripts or puppet.
>
> >
> >
> >>> At most, it seems we'd have to surround the puppet apply with some
> >>> pacemaker commands to possibly set maintenance mode and migrate
> >>> constraints.
> >>>
> >>> The puppet manifest itself would just be the includes and classes for
> aodh.
> >>
> >> +1
> >>
> >>> One complication might be that the aodh packages from Mitaka might
> >>> pull in new deps that required updating other OpenStack services,
> >>> which we wouldn't yet want to do. That is probably worth confirming
> >>> though.
> >>
> >> It seems like we should at least investigate this approach before going
> >> ahead with the backport proposed - I'll -2 the backports pending further
> >> discussion and investigation into this alternate approach.
> >>
> >
> > Makes sense to me. I understand the hesitation behind backports. I'm
> > happy to work with jistr and slagle to see if this is a viable
> > alaternative. If we can get this working without too much effort, i'm
> > all for dumping the backports and going with this.
>
> Using a liberty overcloud-full image, I enabled the mitaka repos and
> tried to install aodh:
> http://paste.openstack.org/show/497395/
>
> It looks like it will cleanly pull in just aodh packages, and there
> aren't any transitive dependencies thatt require updating any other
> OpenStack services.
>
> That means that we ought to be able to take a liberty cloud and update
> it to use aodh from mitaka. That could be step 1 of the upgrade. The
> operator could pause there for as long as they wanted, and then
> continue on with the rest of the upgrade of the other services to
> Mitaka. It may even be possible to implement them as separate stack
> updates.
>
> Does that sound like it could work? Would we have to update some parts
> of Ceilometer as well, or does Liberty Ceilometer and Mitaka Aodh work
> together nicely?
>


To install Aodh alongside ceilometer in Liberty, we have to explicitly
disable or remove the ceilometer-alarms services before aodh is installed.
Otherwise both evaluators will step on each other for alarms. But other
than that, they should work.


~ Prad




>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-17 Thread Pradeep Kilambi
On Tue, May 17, 2016 at 4:59 AM, Steven Hardy <sha...@redhat.com> wrote:
> Hi Pradeep,
>
> Firstly, as discussed on IRC, I echo all of bnemec's concerns, this is not
> well aligned with our stable branch policy[1], or the stable-maint
> direction towards "critical bugfixes only"[2], so if possible I'd rather we
> figured out a general way to solve this problem that doesn't involve
> invasive/risky feature backports.
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/090855.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094440.html
>
> On Mon, May 16, 2016 at 03:33:29PM -0400, James Slagle wrote:
>> On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi <p...@redhat.com> wrote:
>> > Hi Everyone:
>> >
>> > I wanted to start a discussion around considering backporting Aodh to
>> > stable/liberty for upgrades. We have been discussing quite a bit on whats
>> > the best way for our users to upgrade ceilometer alarms to Aodh when moving
>> > from liberty to mitaka. A quick refresh on what changed, In Mitaka,
>> > ceilometer alarms were replaced by Aodh. So only way to get alarms
>> > functionality is use aodh. Now when the user kicks off upgrades from 
>> > liberty
>> > to Mitaka, we want to make sure alarms continue to function as expected
>> > during the process which could take multiple days. To accomplish this I
>> > propose the following approach:
>> >
>> > * Backport Aodh functionality to stable/liberty. Note, Aodh functionality 
>> > is
>> > backwards compatible, so with Aodh running, ceilometer api and client will
>> > redirect requests to Aodh api. So this should not impact existing users who
>> > are using ceilometer api or client.
>> >
>> > * As part of Aodh deployed via heat stack update, ceilometer alarms 
>> > services
>> > will be replaced by openstack-aodh-*. This will be done by the puppet apply
>> > as part of stack convergence phase.
>> >
>> > * Add checks in the Mitaka pre upgrade steps when overcloud install kicks
>> > off to check and warn the user to update to liberty + aodh to ensure aodh 
>> > is
>> > running. This will ensure heat stack update is run and, if alarming is 
>> > used,
>> > Aodh is running as expected.
>> >
>> > The upgrade scenarios between various releases would work as follows:
>> >
>> > Liberty -> Mitaka
>> >
>> > * Upgrade starts with ceilometer alarms running
>> > * A pre-flight check will kick in to make sure Liberty is upgraded to
>> > liberty + aodh with stack update
>> > * Run heat stack update to upgrade to aodh
>> > * Now ceilometer alarms should be removed and Aodh should be running
>> > * Proceed with mitaka upgrade
>> > * End result, Aodh continue to run as expected
>> >
>> > Liberty + aodh -> Mitaka:
>> >
>> > * Upgrade starts with Aodh running
>> > * A pre-flight check will kick in to make sure Liberty is upgraded to Aodh
>> > with stack update
>> > * Confirming Aodh is indeed running, proceed with Mitaka upgrade with Aodh
>> > running
>> > * End result, Aodh continue to be run as expected
>> >
>> >
>> > This seems to be a good way to get the upgrades working for aodh. Other 
>> > less
>> > effective options I can think of are:
>> >
>> > 1. Let the Mitaka upgrade kick off and do "yum update" which replace aodh
>> > during migration, alarm functionality will be down until puppet converge
>> > runs and configures Aodh. This means alarms will be down during upgrade
>> > which is not ideal.
>> >
>> > 2. During Mitaka upgrades, replace with Aodh and add a bash script that
>> > fully configures Aodh and ensures aodh is functioning. This will involve
>> > significant work and results in duplicating everything puppet does today.
>>
>> How much duplication would this really be? Why would it have to be in bash?
>>
>> Could it be:
>>
>> Liberty -> Mitaka
>>
>> * Upgrade starts with ceilometer alarms running
>> * Add a new hook for the first step of Mitaka upgrade that does:
>>   ** sets up mitaka repos
>>   ** migrates from ceilometer alarms to aodh, can use puppet
>>   ** ensures aodh is running
>> * Proceed with rest of mitaka upgrade
>
> +1, I was thinking the same thing - I also don't get why it has to be bash,
> surely we could have a script that can apply a puppet manifest that uses 

Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-16 Thread Pradeep Kilambi
On Mon, May 16, 2016 at 3:33 PM, James Slagle <james.sla...@gmail.com>
wrote:

> On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi <p...@redhat.com> wrote:
> > Hi Everyone:
> >
> > I wanted to start a discussion around considering backporting Aodh to
> > stable/liberty for upgrades. We have been discussing quite a bit on whats
> > the best way for our users to upgrade ceilometer alarms to Aodh when
> moving
> > from liberty to mitaka. A quick refresh on what changed, In Mitaka,
> > ceilometer alarms were replaced by Aodh. So only way to get alarms
> > functionality is use aodh. Now when the user kicks off upgrades from
> liberty
> > to Mitaka, we want to make sure alarms continue to function as expected
> > during the process which could take multiple days. To accomplish this I
> > propose the following approach:
> >
> > * Backport Aodh functionality to stable/liberty. Note, Aodh
> functionality is
> > backwards compatible, so with Aodh running, ceilometer api and client
> will
> > redirect requests to Aodh api. So this should not impact existing users
> who
> > are using ceilometer api or client.
> >
> > * As part of Aodh deployed via heat stack update, ceilometer alarms
> services
> > will be replaced by openstack-aodh-*. This will be done by the puppet
> apply
> > as part of stack convergence phase.
> >
> > * Add checks in the Mitaka pre upgrade steps when overcloud install kicks
> > off to check and warn the user to update to liberty + aodh to ensure
> aodh is
> > running. This will ensure heat stack update is run and, if alarming is
> used,
> > Aodh is running as expected.
> >
> > The upgrade scenarios between various releases would work as follows:
> >
> > Liberty -> Mitaka
> >
> > * Upgrade starts with ceilometer alarms running
> > * A pre-flight check will kick in to make sure Liberty is upgraded to
> > liberty + aodh with stack update
> > * Run heat stack update to upgrade to aodh
> > * Now ceilometer alarms should be removed and Aodh should be running
> > * Proceed with mitaka upgrade
> > * End result, Aodh continue to run as expected
> >
> > Liberty + aodh -> Mitaka:
> >
> > * Upgrade starts with Aodh running
> > * A pre-flight check will kick in to make sure Liberty is upgraded to
> Aodh
> > with stack update
> > * Confirming Aodh is indeed running, proceed with Mitaka upgrade with
> Aodh
> > running
> > * End result, Aodh continue to be run as expected
> >
> >
> > This seems to be a good way to get the upgrades working for aodh. Other
> less
> > effective options I can think of are:
> >
> > 1. Let the Mitaka upgrade kick off and do "yum update" which replace aodh
> > during migration, alarm functionality will be down until puppet converge
> > runs and configures Aodh. This means alarms will be down during upgrade
> > which is not ideal.
> >
> > 2. During Mitaka upgrades, replace with Aodh and add a bash script that
> > fully configures Aodh and ensures aodh is functioning. This will involve
> > significant work and results in duplicating everything puppet does today.
>
> How much duplication would this really be? Why would it have to be in bash?
>

Well, pretty much the entire aodh configuration will need to happen. Here is
what we do in devstack, something along these lines [1]. So in short, we'll
need to install the packages, create users, configure the db and coordination
backends, and configure the api to run under mod_wsgi. Sure, it doesn't have
to be bash; I just assumed that would be easiest to invoke during upgrades.



>
> Could it be:
>
> Liberty -> Mitaka
>
> * Upgrade starts with ceilometer alarms running
> * Add a new hook for the first step of Mitaka upgrade that does:
>   ** sets up mitaka repos
>   ** migrates from ceilometer alarms to aodh, can use puppet
>   ** ensures aodh is running
> * Proceed with rest of mitaka upgrade
>
> At most, it seems we'd have to surround the puppet apply with some
> pacemaker commands to possibly set maintenance mode and migrate
> constraints.
>
> The puppet manifest itself would just be the includes and classes for aodh.
>


Yeah, I guess we could do something like this; I'm not fully clear on the
details of how and when this would be called. But with the below caveat you
mentioned already.



>
> One complication might be that the aodh packages from Mitaka might
> pull in new deps that required updating other OpenStack services,
> which we wouldn't yet want to do. That is probably worth confirming
> though.
>

Yeah, we will be pulling in at least some new oslo deps and client libraries.

Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-16 Thread Pradeep Kilambi
On Mon, May 16, 2016 at 12:25 PM, Ben Nemec <openst...@nemebean.com> wrote:

> From what I've seen of the IRC discussion it sounds like we're doing
> this, but can we at least agree that it is a bad way to handle service
> replacement?
>
> First, we are completely replacing a service on a minor upgrade, which
> even if it is a 100% compatible drop-in replacement may have
> implications for deployers around things like their monitoring setup.
> To me this is a pretty big violation of user expectations around what
> happens in a minor upgrade.
>

I completely understand the concern here and I fully agree that doing this
in a minor update is not something that stable branch policy agrees with,
even if it's fully backward compatible. I'm open to suggestions.


>
> Second, if I'm understanding the problem we're trying to solve here, it
> means that on major upgrades we are leaving the controller services
> upgraded but unconfigured for the duration of the upgrade.  That means
> if something happens and pacemaker restarts a service then all hell may
> break loose.  I've actually run into this problem on undercloud upgrades
> because the package update restarts the services automatically, and if
> we haven't run puppet yet sometimes the old configs don't work with the
> new service.
>


This is exactly the problem. If we can come up with a way to handle this,
we don't need to backport. Would it be possible to run puppet after the
controller upgrade runs and then re-run it after the compute upgrades again,
considering puppet runs should be idempotent?

It would be good to prioritize handling this use case, as sooner or later
we will run into this again.


>
> I understand that there are technical reasons things work the way they
> do today, but I also want to be on record that this is a problem to be
> solved, not a precedent to be followed in the future.
>
>
Thanks,
~ Prad


> On 05/16/2016 09:34 AM, Pradeep Kilambi wrote:
> > Hi Everyone:
> >
> > I wanted to start a discussion around considering backporting Aodh to
> > stable/liberty for upgrades. We have been discussing quite a bit about
> > the best way for our users to upgrade ceilometer alarms to Aodh
> > when moving from Liberty to Mitaka. A quick refresher on what changed: in
> > Mitaka, ceilometer alarms were replaced by Aodh, so the only way to get
> > alarms functionality is to use Aodh. Now when the user kicks off upgrades
> > from Liberty to Mitaka, we want to make sure alarms continue to function
> > as expected during the process, which could take multiple days. To
> > accomplish this I propose the following approach:
> >
> > * Backport Aodh functionality to stable/liberty. Note, Aodh
> > functionality is backwards compatible, so with Aodh running, ceilometer
> > api and client will redirect requests to Aodh api. So this should not
> > impact existing users who are using ceilometer api or client.
> >
> > * As part of Aodh deployed via heat stack update, ceilometer alarms
> > services will be replaced by openstack-aodh-*. This will be done by the
> > puppet apply as part of stack convergence phase.
> >
> > * Add checks in the Mitaka pre upgrade steps when overcloud install
> > kicks off to check and warn the user to update to liberty + aodh to
> > ensure aodh is running. This will ensure heat stack update is run and,
> > if alarming is used, Aodh is running as expected.
> >
> > The upgrade scenarios between various releases would work as follows:
> >
> > *Liberty -> Mitaka*
> > * Upgrade starts with ceilometer alarms running
> > * A pre-flight check will kick in to make sure Liberty is upgraded to
> > liberty + aodh with stack update
> > * Run heat stack update to upgrade to aodh
> > * Now ceilometer alarms should be removed and Aodh should be running
> > * Proceed with mitaka upgrade
> > * End result, Aodh continues to run as expected
> >
> > *Liberty + aodh -> Mitaka:*
> > * Upgrade starts with Aodh running
> > * A pre-flight check will kick in to make sure Liberty is upgraded to
> > Aodh with stack update
> > * Confirming Aodh is indeed running, proceed with the Mitaka upgrade with
> > Aodh running
> > * End result, Aodh continues to run as expected
> >
> >
> > This seems to be a good way to get the upgrades working for aodh. Other
> > less effective options I can think of are:
> >
> > 1. Let the Mitaka upgrade kick off and do a "yum update", which replaces
> > ceilometer alarms with aodh during migration; alarm functionality will be
> > down until puppet converge runs and configures Aodh. This means alarms
> > will be down during the upgrade, which is not

[openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-16 Thread Pradeep Kilambi
Hi Everyone:

I wanted to start a discussion around considering backporting Aodh to
stable/liberty for upgrades. We have been discussing quite a bit about the
best way for our users to upgrade ceilometer alarms to Aodh when moving
from Liberty to Mitaka. A quick refresher on what changed: in Mitaka,
ceilometer alarms were replaced by Aodh, so the only way to get alarms
functionality is to use Aodh. Now, when the user kicks off upgrades from
Liberty to Mitaka, we want to make sure alarms continue to function as
expected during the process, which could take multiple days. To accomplish
this I propose the following approach:

* Backport Aodh functionality to stable/liberty. Note that Aodh is
backwards compatible: with Aodh running, the ceilometer api and client will
redirect requests to the Aodh api, so this should not impact existing users
of the ceilometer api or client.

* As part of deploying Aodh via heat stack update, the ceilometer alarm
services will be replaced by openstack-aodh-*. This will be done by the
puppet apply as part of the stack convergence phase.

* Add checks to the Mitaka pre-upgrade steps, when the overcloud install
kicks off, to verify and warn the user to update to liberty + aodh so that
aodh is running. This ensures the heat stack update has been run and, if
alarming is used, that Aodh is running as expected.
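
A minimal sketch of what such a pre-flight check could look like (the
systemd unit names are assumptions based on the usual openstack-aodh-*
packaging; the real check would live in the upgrade tooling):

import subprocess

AODH_SERVICES = (
    "openstack-aodh-evaluator",
    "openstack-aodh-notifier",
    "openstack-aodh-listener",
)


def aodh_running():
    # Treat aodh as running only if every expected unit reports active.
    for unit in AODH_SERVICES:
        if subprocess.call(["systemctl", "is-active", "--quiet", unit]) != 0:
            return False
    return True


if not aodh_running():
    print("WARNING: aodh does not appear to be running; run a heat stack "
          "update to liberty + aodh before starting the Mitaka upgrade.")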

The upgrade scenarios between various releases would work as follows:


*Liberty -> Mitaka*
* Upgrade starts with ceilometer alarms running
* A pre-flight check will kick in to make sure Liberty is upgraded to
liberty + aodh with stack update
* Run heat stack update to upgrade to aodh
* Now ceilometer alarms should be removed and Aodh should be running
* Proceed with mitaka upgrade
* End result, Aodh continues to run as expected


*Liberty + aodh -> Mitaka:*
* Upgrade starts with Aodh running
* A pre-flight check will kick in to make sure Liberty is upgraded to Aodh
with stack update
* Confirming Aodh is indeed running, proceed with Mitaka upgrade with Aodh
running
* End result, Aodh continues to run as expected


This seems to be a good way to get the upgrades working for aodh. Other
less effective options I can think of are:

1. Let the Mitaka upgrade kick off and do a "yum update", which replaces
ceilometer alarms with aodh during migration; alarm functionality will be
down until puppet converge runs and configures Aodh. This means alarms will
be down during the upgrade, which is not ideal.

2. During Mitaka upgrades, replace ceilometer alarms with Aodh and add a
bash script that fully configures Aodh and ensures aodh is functioning.
This will involve significant work and results in duplicating everything
puppet does today.

If there are any suggestions please let me know. I'm open to any approach
that can save us time and effort to get this working.

Otherwise, if we can agree to consider backporting Aodh to Liberty, I think
this will save us time.

Let me know what you guys think.

Cheers,
- Prad


[openstack-dev] [TripleO] gnocchi backport exception for stable/mitaka

2016-03-29 Thread Pradeep Kilambi
Hi Everyone:

As the Mitaka branch was cut yesterday, I would like to request a backport
exception to get the gnocchi patches[1][2][3] into stable/mitaka. It should
be a low-risk feature, as we decided not to set ceilometer to use gnocchi
by default. So ceilometer would work as-is and gnocchi is deployed
alongside as a new service, but not used out of the box. This should make
upgrades pretty much a non-issue, as things should work exactly like
before. If someone wants to use the gnocchi backend, they can add an env
template file to override the backend. In Newton, we'll flip the switch to
make gnocchi the default backend.

If we can vote to agree to get this in as an exception, it would be super
useful.

Thanks,
~ Prad

[1] https://review.openstack.org/#/c/252032/
[2] https://review.openstack.org/#/c/290710/
[3] https://review.openstack.org/#/c/238013/



[openstack-dev] [TripleO] Including Aodh and Gnocchi in Mitaka

2016-03-09 Thread Pradeep Kilambi
Hey Guys:

As many of you know, we have been working on getting Aodh and Gnocchi
support into TripleO. We would like to formally request that these two be
made blockers for the Mitaka release of TripleO. We tried to get these into
Liberty, but due to time constraints the chances of that happening at this
point seem slim. We would like to get some assurance from the community on
whether this is possible. I will elaborate on each case and why we really
want them in Mitaka.

Aodh is basically the same alarm functionality from ceilometer, split out
into its own project. It is deprecated in Liberty, as in the code is
unmaintained. In Mitaka, the ceilometer alarms are completely removed, so
Aodh is the replacement. Hence, Aodh has to make it into Mitaka; without it
we will not be able to provide alarm functionality going forward. So please
consider this a blocker for Mitaka. Even better if we can get it backported
to Liberty, as the code there is unmaintained :) . The patch set is up for
review [1]

Gnocchi, on the other hand, is a new project that was born to address the
performance issues Ceilometer has by reworking our approach to metric
storage, in order to mitigate some missteps in the original storage
architecture. Going forward, Gnocchi is the preferred backend for
Ceilometer, along with other supported alternatives. Gnocchi should make
ceilometer much more usable at large scale. It would be great if we can get
gnocchi support into Mitaka as well. Since this is a new service, there
shouldn't be any need for upgrades or migration, which hopefully should
make it less risky. The patch set is up for review [2]

If we can make these a priority in the review queues and get them into
Mitaka, it would be a huge relief.

Please feel free to reach out to me on IRC if I can help answer any questions.

Thanks,
~ Pradeep

[1] https://review.openstack.org/#/c/289435/
[2] https://review.openstack.org/#/c/252032/



Re: [openstack-dev] [ceilometer] ceilometer Floatingip Pollster change

2016-01-20 Thread Pradeep Kilambi
On Wed, Jan 20, 2016 at 3:27 PM, gord chung <g...@live.ca> wrote:

> i guess i should probably add that the current pollster doesn't work
> because of limitation in nova api[1].
> if we want to support discovery of whether nova-network or neutron is
> used, the existing pollster and nova-api still needs to be fixed.
>
> [1] https://bugs.launchpad.net/ceilometer/+bug/1346429
>


Yeah, I think since the existing pollster doesn't return any data, it
shouldn't be disruptive to replace it with the neutron-api-based fip
pollster as I indicated.
We can always add endpoint discovery at a later time, when the nova api
gives the expected fip data, so we can poll the nova/neutron APIs when
available. For now, I feel we should go ahead with the proposed
implementation[1].

Any objections?

[1]  https://review.openstack.org/#/c/269369/
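
For what it's worth, the endpoint discovery mentioned above could be as
simple as checking the keystone service catalog for a 'network' (neutron)
endpoint. A hedged sketch only; the credentials are placeholders and the
error handling is simplified:

from keystoneauth1 import identity, session


def have_neutron(auth_url, username, password, project_name):
    # Placeholder auth wiring; adjust domains/versions for the deployment.
    auth = identity.Password(
        auth_url=auth_url, username=username, password=password,
        project_name=project_name,
        user_domain_id="default", project_domain_id="default")
    sess = session.Session(auth=auth)
    try:
        url = sess.get_endpoint(service_type="network", interface="public")
    except Exception:
        # No 'network' service registered in the catalog.
        return False
    return bool(url)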


~ Pradeep



>
>
> On 19/01/2016 11:28 PM, Pradeep Kilambi wrote:
>
> Ceilometer floatingip pollster currently polls the nova api to get the
> floating ip info periodically. But due to limitations in the nova api as
> listed in this bug[1],
> the data ceilometer receives isn't all that useful. Nova doesn't return
> this info for all tenants. There has been some work in juno time frame but
> it was
> reverted due to the fact that the nova api logs were being spammed [2].
>
> Due to these concerns, the proposal now is to use the neutron api to get
> this data as proposed in this patch[3]. What we would like to know is,
>
>
> 1. Is the data gathered from the current floating ip pollster being used?
> If so, how and in what context?
>
> 2. This might only be an issue for the pure nova-network scenario, but
> even there I'm not sure how useful the data we currently gather is.
>
> 3. Are there any other concerns regarding this change that we should
> discuss?
>
> Any feedback appreciated,
>
>
> Thanks,
> ~ Pradeep
>
> [1] https://bugs.launchpad.net/nova/+bug/1402514
>
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html
>
> [3] https://review.openstack.org/#/c/269369/
>
>
>
>
>
> --
> gord
>
>
>
>


-- 
--
Pradeep Kilambi; irc: prad
OpenStack Engineering


[openstack-dev] [ceilometer] ceilometer Floatingip Pollster change

2016-01-19 Thread Pradeep Kilambi
The ceilometer floatingip pollster currently polls the nova api
periodically to get the floating ip info. But due to limitations in the
nova api, as listed in this bug[1], the data ceilometer receives isn't all
that useful: nova doesn't return this info for all tenants. There was some
work in the Juno time frame, but it was reverted because the nova api logs
were being spammed [2].

Due to these concerns, the proposal now is to use the neutron api to get
this data, as proposed in this patch[3]. What we would like to know is:


1. Is the data gathered from the current floating ip pollster being used?
If so, how and in what context?

2. This might only be an issue for the pure nova-network scenario, but even
there I'm not sure how useful the data we currently gather is.

3. Are there any other concerns regarding this change that we should
discuss?

Any feedback appreciated,


Thanks,
~ Pradeep

[1] https://bugs.launchpad.net/nova/+bug/1402514

[2] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html

[3] https://review.openstack.org/#/c/269369/
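
For readers skimming the thread, the shape of the neutron-based approach in
[3] is roughly the following (a simplified sketch, not the actual patch; the
real pollster wires this into ceilometer's sample machinery):

from neutronclient.v2_0 import client as neutron_client


def poll_floating_ips(auth_url, username, password, tenant_name):
    # Credentials are placeholders; the pollster would reuse its own
    # service credentials in practice.
    neutron = neutron_client.Client(
        auth_url=auth_url, username=username,
        password=password, tenant_name=tenant_name)
    for fip in neutron.list_floatingips().get("floatingips", []):
        # Unlike the nova proxy API, this returns floating IPs for the
        # tenants the caller is allowed to see, with status included.
        yield {
            "resource_id": fip["id"],
            "address": fip["floating_ip_address"],
            "status": fip["status"],
            "project_id": fip["tenant_id"],
        }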


Re: [openstack-dev] [ceilometer] proposal to add Pradeep Kilambi to Ceilometer core

2015-09-03 Thread Pradeep Kilambi
On Thu, Sep 3, 2015 at 12:42 PM, gord chung <g...@live.ca> wrote:

>
>
> On 31/08/15 09:13 AM, gord chung wrote:
>
>> hi,
>>
>> we'd like to nominate Pradeep Kilambi to the Ceilometer core team. he has
>> contributed by adding declarative meter support in Ceilometer and provides
>> feedback/input in regards to packaging and design.
>>
>> as we did last time, please vote here:
>> https://review.openstack.org/#/c/218822/ . if for whatever reason you
>> cannot vote there, please respond to this.
>>
>> reviews:
>>
>> https://review.openstack.org/#/q/reviewer:%22Pradeep+Kilambi%22++project:openstack/ceilometer,n,z
>>
>> patches:
>>
>> https://review.openstack.org/#/q/owner:%22Pradeep+Kilambi%22+status:merged+project:openstack/ceilometer,n,z
>>
>> cheers,
>>
>> i'm pleased to welcome Pradeep to the Ceilometer core team. keep on
> keeping on.



Thanks! Appreciate the opportunity!



-- 
--
Pradeep Kilambi; irc: pradk
OpenStack Engineering


Re: [openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-13 Thread Pradeep Kilambi
On Thu, Aug 13, 2015 at 8:50 AM, Roman Vasilets rvasil...@mirantis.com
wrote:

 Hi,
Could you provide the link to this code?



Here it is:

https://github.com/openstack/ceilometer/blob/master/ceilometer/profiler/notifications.py#L76





 On Wed, Aug 12, 2015 at 9:22 PM, Pradeep Kilambi pkila...@redhat.com
 wrote:

 We're in the process of converting existing meters to use a more
 declarative approach where we add the meter definition as part of a yaml.
 As part of this transition there are a few notification handlers where the
 id is not consistent. For example, in the profiler notification handler the
 resource_id is set to "profiler-%s" % message["payload"]["base_id"]. Is
 there a reason we have the prefix? Can we ignore it and directly set the
 resource_id to message["payload"]["base_id"]? There seems to be no real
 need for the prefix here unless I'm missing something. Can we go ahead and
 drop it?

 If we don't hear anything I'll assume there is no objection to dropping
 this prefix.


 Thanks,

 --
 --
 Pradeep Kilambi; irc: prad
 OpenStack Engineering








-- 
--
Pradeep Kilambi; irc: prad
OpenStack Engineering


[openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-12 Thread Pradeep Kilambi
We're in the process of converting existing meters to use a more
declarative approach where we add the meter definition as part of a yaml.
As part of this transition there are a few notification handlers where the
id is not consistent. For example, in the profiler notification handler the
resource_id is set to "profiler-%s" % message["payload"]["base_id"]. Is
there a reason we have the prefix? Can we ignore it and directly set the
resource_id to message["payload"]["base_id"]? There seems to be no real
need for the prefix here unless I'm missing something. Can we go ahead and
drop it?
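
To make the question concrete, here is a paraphrased sketch of the behaviour
being discussed (not the exact upstream code), with a made-up base_id:

def resource_id_current(message):
    # Today: the sample's resource_id carries a "profiler-" prefix.
    return "profiler-%s" % message["payload"]["base_id"]


def resource_id_proposed(message):
    # Proposed: use the trace base_id directly, consistent with the
    # declarative (yaml-based) meter definitions.
    return message["payload"]["base_id"]


message = {"payload": {"base_id": "8d28af1e-acc0-498c-9890-fe1fe8bff3a3"}}
print(resource_id_current(message))   # profiler-8d28af1e-...
print(resource_id_proposed(message))  # 8d28af1e-...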

If we don't hear anything I'll assume there is no objection to dropping
this prefix.


Thanks,

-- 
--
Pradeep Kilambi; irc: prad
OpenStack Engineering


[openstack-dev] [Ceilometer] Liberty Mid Cycle Virtual Meetup

2015-06-19 Thread Pradeep Kilambi
Hi everyone,

In this week's Ceilometer IRC meeting, we took a vote[1] and decided that we
will have a *virtual* mid-cycle meetup for the Liberty cycle.

To figure out convenient dates we created a doodle[2]. Please vote at your
earliest convenience, so we can settle on a date and get the rest of the
details sorted out.

Thanks,
~ Pradeep

[1] 
http://eavesdrop.openstack.org/meetings/ceilometer/2015/ceilometer.2015-06-18-15.05.log.html
[2] http://doodle.com/6vfksdu38wcwqqd3



Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-26 Thread Pradeep Kilambi (pkilambi)


On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote:


On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi)
pkila...@cisco.com wrote:

 
 
 On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:
 
 
 On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam sumitnaiksa...@gmail.com
 wrote:
 
 On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com
 wrote:
 On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka
ihrac...@redhat.com
 wrote:
 
 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.
 
 Salvatore
 
 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:
 
 Hi all,
 
 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.
 
 Overall, the idea of giving some space for experimentation that
 does not alienate parts of community from Neutron is good. In that
 way, we may relax review rules and quicken turnaround for preview
 features without losing control of those features too much.
 
 Though the way it's to be implemented leaves several concerns, as
 follows:
 
 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with
 a single tarball instead of two. Meaning, it would be better to
 keep the code in the same tree.
 
 I know that we're afraid of shipping the code for which some users
 may expect the usual level of support and stability and
 compatibility. This can be solved by making it explicit that the
 incubated code is unsupported and used on your user's risk. 1) The
 experimental code wouldn't probably be installed unless explicitly
 requested, and 2) it would be put in a separate namespace (like
 'preview', 'experimental', or 'staging', as they call it in the Linux
 kernel world [2]).
 
 This would facilitate keeping commit history instead of losing it
 during graduation.
 
 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.
 
 
 I think this is the whole point of the discussion around the
 incubator and the reason for which, to the best of my knowledge,
 no proposal has been accepted yet.
 
 
 I wonder where discussion around the proposal is running. Is it
 public?
 
 The discussion started out privately as the incubation proposal was
 put together, but it's now on the mailing list, in person, and in IRC
 meetings. Lets keep the discussion going on list now.
 
 
 In the spirit of keeping the discussion going, I think we probably
 need to iterate in practice on this idea a little bit before we can
 crystallize on the policy and process for this new repo. Here are few
 ideas on how we can start this iteration:
 
 * Namespace for the new repo:
 Should this be in the neutron namespace, or a completely different
 namespace like neutron labs? Perhaps creating a separate namespace
 will help the packagers to avoid issues of conflicting package owners
 of the namespace.
 
 I don't think there is a technical requirement to choose a new
namespace.
 Python supports sharing a namespace, and packaging can support this
 feature (see: oslo.*).
 
 
 From what I understand there can be overlapping code between neutron and
 incubator to override/modify existing python/config files. In which case,
 packaging (e.g. rpm) will raise a path conflict. So we probably will
 need to worry about namespaces?

Doug's suggestion to use a separate namespace to indicate that the
incubator codebase isn’t fully supported is a good idea and what I had in
mind as a non-technical reason for a new namespace.  I still assert that
the potential for path conflicts can be avoided easily enough, and is not
a good reason on its own to use a different namespace.

 
 
 
 
 * Dependency on Neutron (core) repository:
 We would need to sort this out so that we can get UTs to run and pass
 in the new repo. Can we set the dependency on Neutron milestone
 releases? We already publish tar balls for the milestone releases, but
 I am not sure we publish these as packages to pypi. If not could we
 start doing that? With this in place, the incubator would always lag
 the Neutron core by at the most one milestone release.
 
 Given that it is possible to specify a dependency as a branch/hash/tag
in
 a git repo [1], I'm not sure it's worth figuring out how to target
 tarballs.  Master branch of the incubation repo could then target the
 master branch of the Neutron repo and always be assured of being
current,
 and then released versions could target milestone tags or released
 versions.
 
 1: http

Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-25 Thread Pradeep Kilambi (pkilambi)


On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:


On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:

 On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com
wrote:
 On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:
 
 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.
 
 Salvatore
 
 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:
 
 Hi all,
 
 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.
 
 Overall, the idea of giving some space for experimentation that
 does not alienate parts of community from Neutron is good. In that
 way, we may relax review rules and quicken turnaround for preview
 features without losing control of those features too much.
 
 Though the way it's to be implemented leaves several concerns, as
 follows:
 
 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would rather deal with
 a single tarball instead of two. Meaning, it would be better to
 keep the code in the same tree.
 
 I know that we're afraid of shipping the code for which some users
 may expect the usual level of support and stability and
 compatibility. This can be solved by making it explicit that the
 incubated code is unsupported and used on your user's risk. 1) The
 experimental code wouldn't probably be installed unless explicitly
 requested, and 2) it would be put in a separate namespace (like
 'preview', 'experimental', or 'staging', as they call it in the Linux
 kernel world [2]).
 
 This would facilitate keeping commit history instead of losing it
 during graduation.
 
 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.
 
 
 I think this is the whole point of the discussion around the
 incubator and the reason for which, to the best of my knowledge,
 no proposal has been accepted yet.
 
 
 I wonder where discussion around the proposal is running. Is it
public?
 
 The discussion started out privately as the incubation proposal was
 put together, but it's now on the mailing list, in person, and in IRC
 meetings. Lets keep the discussion going on list now.
 
 
 In the spirit of keeping the discussion going, I think we probably
 need to iterate in practice on this idea a little bit before we can
 crystallize on the policy and process for this new repo. Here are few
 ideas on how we can start this iteration:
 
 * Namespace for the new repo:
 Should this be in the neutron namespace, or a completely different
 namespace like neutron labs? Perhaps creating a separate namespace
 will help the packagers to avoid issues of conflicting package owners
 of the namespace.

I don't think there is a technical requirement to choose a new namespace.
 Python supports sharing a namespace, and packaging can support this
feature (see: oslo.*).


From what I understand there can be overlapping code between neutron and
incubator to override/modify existing python/config files. In which case,
packaging (e.g. rpm) will raise a path conflict. So we probably will
need to worry about namespaces?
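
For reference, the oslo.*-style namespace sharing referred to above boils
down to each distribution shipping a stub top-level package like the sketch
below (illustrative only; whether the incubator should actually do this, and
how rpm would own the shared file, is exactly the open question here):

# neutron/__init__.py as shipped by each distribution that shares the
# "neutron" namespace (pkgutil-style namespace package). Each distribution
# then installs its own modules underneath the shared package.
from pkgutil import extend_path

__path__ = extend_path(__path__, __name__)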



 
 * Dependency on Neutron (core) repository:
 We would need to sort this out so that we can get UTs to run and pass
 in the new repo. Can we set the dependency on Neutron milestone
 releases? We already publish tar balls for the milestone releases, but
 I am not sure we publish these as packages to pypi. If not could we
 start doing that? With this in place, the incubator would always lag
 the Neutron core by at the most one milestone release.

Given that it is possible to specify a dependency as a branch/hash/tag in
a git repo [1], I'm not sure it's worth figuring out how to target
tarballs.  Master branch of the incubation repo could then target the
master branch of the Neutron repo and always be assured of being current,
and then released versions could target milestone tags or released
versions.

1: http://pip.readthedocs.org/en/latest/reference/pip_install.html#git


 
 * Modules overlapping with the Neutron (core) repository:
 We could initially start with the features that required very little
 or no changes to the Neutron core, to avoid getting into the issue of
 blocking on changes to the Neutron (core) repository before progress
 can be made in the incubator.

+1

I agree that it would be in an incubated effort's best interest to put
off doing invasive changes to the Neutron tree as long as possible to
ensure