[openstack-dev] Redis licensing terms changes

2018-08-22 Thread Haïkel
Hi,

I haven't seen this mentioned here yet, but I'd like to point out that Redis
has moved to an open-core licensing model.
https://redislabs.com/community/commons-clause/

In short:
* the base engine remains under the BSD license
* modules move to ASL 2.0 + Commons Clause, which is non-free
  (it prohibits the sale of derived products)

IMHO, projects that rely on Redis as their default driver should consider
alternatives (of course, it's up to them).

Regards,
H.



Re: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens

2018-02-15 Thread Haïkel
2018-02-15 11:25 GMT+01:00 Bob Ball :
> Hi Thomas,
>
> As noted on the patch, XenServer only has python 2 (and some versions of 
> XenServer even has Python 2.4) in domain0.  This is code that will not run in 
> Debian (only in XenServer's dom0) and therefore can be ignored or removed 
> from the Debian package.
> It's not practical to convert these to support python 3.
>
> Bob
>

We're not there yet, but we also plan to work on migrating RDO to Python 3.
And I have to disagree: this code is called by other projects and their tests,
so it will likely be an impediment to migrating OpenStack to Python 3, not just
a "packaging" issue.

If this code is meant to run on dom0, fine, then we won't package it, but we
also have to decouple that dependency from Nova, Neutron, Ceilometer, etc., so
that they either communicate directly through an API endpoint or go through a
light wrapper around it.
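
To illustrate what I mean by a light wrapper, here is a rough sketch
(hypothetical names and endpoint layout, not an existing os-xenapi interface):
the Python 3 side only exchanges JSON over HTTP with a small dom0-side service,
so the py2-only dom0 code never has to be imported by Nova, Neutron or
Ceilometer.

# Hypothetical sketch only -- names and endpoint layout are illustrative,
# not an existing os-xenapi interface.
import json
import urllib.request


class Dom0Client(object):
    """Tiny py3-friendly client that talks to a dom0-side service over HTTP."""

    def __init__(self, endpoint):
        self.endpoint = endpoint.rstrip("/")  # e.g. "http://dom0.example:8080"

    def call(self, method, **kwargs):
        # The dom0 side can stay Python 2 only; we only exchange JSON with it.
        req = urllib.request.Request(
            "%s/%s" % (self.endpoint, method),
            data=json.dumps(kwargs).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))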

Regards,
H.

> -Original Message-
> From: Thomas Goirand [mailto:z...@debian.org]
> Sent: 15 February 2018 08:31
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens
>
> Hi,
>
> Since I'm getting some pressure from other DDs to actively remove Py2 support 
> from my packages, I'm very much considering switching all of the Debian 
> packages for Queens to using exclusively Py3. I would have like to read some 
> opinions about this. Is it a good time for such move? I hope it is, because 
> I'd like to maintain as few Python package with Py2 support at the time of 
> Debian Buster freeze.
>
> Also, doing Queens, I've noticed that os-xenapi is still full of py2 only 
> stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch:
>
> https://review.openstack.org/544809
>
> Cheers,
>
> Thomas Goirand (zigo)
>



Re: [openstack-dev] [tripleo][python3] python3 readiness?

2018-02-14 Thread Haïkel
2018-02-14 22:53 GMT+01:00 Tom Barron :
> On 13/02/18 16:53 -0600, Ben Nemec wrote:
>>
>>
>>
>> On 02/13/2018 01:57 PM, Tom Barron wrote:
>>>
>>> Since python 2.7 will not be maintained past 2020 [1] it is a reasonable
>>> conjecture that downstream distributions
>>> will drop support for python 2 between now and then, perhaps as early as
>>> next year.
>>
>>
>> I'm not sure I agree.  I suspect python 2 support will not go quietly into
>> that good night.  Personally I anticipate a lot of kicking and screaming
>> right up to the end, especially from change averse enterprise users.
>>
>> But that's neither here nor there.  I think we're all in agreement that
>> python 3 support is needed. :-)
>
>
> Yeah, but you raise a good issue.  How likely is it that EL8 will choose --
> perhaps under duress -- to support both python 2 and python 3 in the next
> big downstream release.  If this is done long enough that we can support
> TripleO deployments on CentOS 8 using python2 while at the same time testing
> TripleO deployments on CentOS using python3 then TripleO support for Fedora
> wouldn't be necessary.
>
> Perhaps this question is settled, perhaps it is open.  Let's try to nail
> down which for the record.
>

All I can say is that this question is definitely settled. As far as
OpenStack is concerned, there will be no Python 2 on EL8.

>
>>
>>> In Pike, OpenStack projects, including TripleO, added python 3 unit
>>> tests.  That effort was a good start, but likely we can agree that it is
>>> *only* a start to gaining confidence that real life TripleO deployments will
>>> "just work" running python 3.  As agreed in the TripleO community meeting,
>>> this email is intended to kick off a discussion in advance of PTG on what
>>> else needs to be done.
>>>
>>> In this regard it is worth observing that TripleO currently only supports
>>> CentOS deployments and CentOS won't have python 3 support until RHEL does,
>>> which may be too late to test deploying with python3 before support for
>>> python2 is dropped.  Fedora does have support for python 3 and for this
>>> reason RDO has decided [2] to begin work to run with *stabilized* Fedora
>>> repositories in the Rocky cycle, aiming to be ready on time to migrate to
>>> Python 3 and support its use in downstream and upstream CI pipelines.
>>
>>
>> So that means we'll never have Python 3 on CentOS 7 and we need to start
>> supporting Fedora again in order to do functional testing on py3? That's
>> potentially messy.  My recollection of running TripleO CI on Fedora is that
>> it was, to put it nicely, a maintenance headache.  Even with the
>> "stabilized" repos from RDO, TripleO has a knack for hitting edge case bugs
>> in a fast-moving distro like Fedora.  I guess it's not entirely clear to me
>> what the exact plan is since there's some discussion of frozen snapshots and
>> such, which might address the fast-moving part.
>>
>> It also means more CI jobs, unless we're okay with dropping CentOS support
>> for some scenarios and switching them to Fedora.  Given the amount of
>> changes between CentOS 7 and current Fedora that's a pretty big gap in our
>> testing.
>>
>> I guess if RDO has chosen this path then we don't have much choice.  As
>> far as next steps, the first thing that would need to be done is to get
>> TripleO running on Fedora again.  I suggest starting with
>> https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377
>> :-)
>>
>> -Ben
>
>



Re: [openstack-dev] [tripleo][python3] python3 readiness?

2018-02-14 Thread Haïkel
2018-02-14 17:05 GMT+01:00 Ben Nemec :
>
>
> On 02/13/2018 05:30 PM, David Moreau Simard wrote:
>>
>> On Tue, Feb 13, 2018 at 5:53 PM, Ben Nemec  wrote:
>>>
>>>
>>> I guess if RDO has chosen this path then we don't have much choice.
>>
>>
>> This makes it sound like we had a choice to begin with.
>> We've already had a lot of discussions around the topic but we're
>> ultimately stuck between a rock and a hard place.
>>
>> We're in this together and it's important that everyone understands
>> what's going on.
>>
>> It's not a secret to anyone that Fedora is more or less the upstream to
>> RHEL.
>> There's no py3 available in RHEL 7.
>> The alternative to making things work in Fedora is to use Software
>> Collections [1].
>>
>> If you're not familiar with Software Collections for python, it's more
>> or less the installation of RPM packages in a virtualenv.
>> Installing the "rh-python35" SCL would:
>> - Set up a chroot in /opt/rh/rh-python35/root
>> - Set up a py35 interpreter at /opt/rh/rh-python35/root/usr/bin/python3
>>
>> And then, when you would install packages *against* that SCL, they
>> would end up being installed
>> in /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/.
>>
>> That means that you need *all* of your python packages to be built
>> against the software collections and installed in the right path.
>>
>> Python script with a #!/usr/bin/python shebang ? Probably not going to
>> work.
>> Need python-requests ? Nope, sclo-python35-python-requests.
>> Need one of the 1000+ python packages maintained by RDO ?
>> Those need to be re-built and maintained against the SCL too.
>>
>> If you want to see what it looks like in practice, here's a Zuul spec
>> file [2] or the official docs for SCL [3].
>
>
> Ick, I didn't realize SCLs were that bad.
>

And that's only the tip of the iceberg :)
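
To give a concrete feel for what David described above, this is roughly what an
interpreter inside an SCL reports (purely illustrative; the paths are the ones
quoted above and the exact values depend on the collection installed):

# Run under the SCL's interpreter, e.g. after `scl enable rh-python35 bash`.
# Purely illustrative: it only shows where the interpreter and its packages live.
import sys

print(sys.executable)
# something like /opt/rh/rh-python35/root/usr/bin/python3

print([p for p in sys.path if 'site-packages' in p])
# expected to include /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages,
# which is why every Python dependency has to be rebuilt against the SCL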

> /me dons his fireproof suit
>
> I know this is a dirty word around these parts, but I note that EPEL appears
> to have python 3 packages...
>

All I can say is that option was put on the table.

> Ultimately, though, I'm not in a position to be making any definitive
> statements about how to handle this.  RDO has more consumers than just
> TripleO.  The purpose of my email was mostly to provide some historical
> perspective from back when we were doing TripleO CI on Fedora, why we're not
> doing that anymore, and in fact went so far as to explicitly disable Fedora
> in the undercloud installer.  If Fedora is still our best option then so be
> it, but I don't want anyone to think it's going to be as simple as
> s/CentOS/Fedora/ (I assume no one does, but you know what they say about
> ass-u-me :-).
>

I agree it won't be simple: we will have to provide those repositories,
determine how we will gate updates, fix puppet modules, POI, etc., and that's
only a beginning.

That's why we won't be providing raw Fedora but rather a curated version, and
at some point we'll likely freeze it. That's kinda similar to how EL8 is made,
but it won't be EL8. :o)

Let's say that the clock is ticking: if we want to ship a productized OpenStack
distro on Python 3, and possibly on EL8 (hint: I don't know when it will be
released, and moreover, I'm not the one who gets to decide when RHOSP will
support EL8), we're about to reach the point of no return.

H.

>
>>
>> Making stuff work on Fedora is not going to be easy for anyone but it
>> sure beats messing with 1500+ packages that we'd need to untangle
>> later.
>> Most of the hard work for Fedora is already done as far as packaging
>> is concerned, we never really stopped building packages for Fedora
>> [4].
>>
>> It means we should be prepared once RHEL 8 comes out.
>>
>> [1]: https://www.softwarecollections.org/en/
>> [2]:
>> https://softwarefactory-project.io/r/gitweb?p=scl/zuul-distgit.git;a=blob;f=zuul.spec;h=6bba6a79c1f8ff844a9ea3715ab2cef1b12d323f;hb=refs/heads/master
>> [3]:
>> https://www.softwarecollections.org/en/docs/guide/#chap-Packaging_Software_Collections
>> [4]: https://trunk.rdoproject.org/fedora-rawhide/report.html
>>
>> David Moreau Simard
>> Senior Software Engineer | OpenStack RDO
>>
>> dmsimard = [irc, github, twitter]
>>


Re: [openstack-dev] [tripleo][python3] python3 readiness?

2018-02-13 Thread Haïkel
2018-02-13 23:53 GMT+01:00 Ben Nemec :
>
>
> On 02/13/2018 01:57 PM, Tom Barron wrote:
>>
>> Since python 2.7 will not be maintained past 2020 [1] it is a reasonable
>> conjecture that downstream distributions
>> will drop support for python 2 between now and then, perhaps as early as
>> next year.
>
>
> I'm not sure I agree.  I suspect python 2 support will not go quietly into
> that good night.  Personally I anticipate a lot of kicking and screaming
> right up to the end, especially from change averse enterprise users.
>
> But that's neither here nor there.  I think we're all in agreement that
> python 3 support is needed. :-)
>
>> In Pike, OpenStack projects, including TripleO, added python 3 unit tests.
>> That effort was a good start, but likely we can agree that it is *only* a
>> start to gaining confidence that real life TripleO deployments will "just
>> work" running python 3.  As agreed in the TripleO community meeting, this
>> email is intended to kick off a discussion in advance of PTG on what else
>> needs to be done.
>>
>> In this regard it is worth observing that TripleO currently only supports
>> CentOS deployments and CentOS won't have python 3 support until RHEL does,
>> which may be too late to test deploying with python3 before support for
>> python2 is dropped.  Fedora does have support for python 3 and for this
>> reason RDO has decided [2] to begin work to run with *stabilized* Fedora
>> repositories in the Rocky cycle, aiming to be ready on time to migrate to
>> Python 3 and support its use in downstream and upstream CI pipelines.
>
>
> So that means we'll never have Python 3 on CentOS 7 and we need to start
> supporting Fedora again in order to do functional testing on py3? That's
> potentially messy.  My recollection of running TripleO CI on Fedora is that
> it was, to put it nicely, a maintenance headache.  Even with the
> "stabilized" repos from RDO, TripleO has a knack for hitting edge case bugs
> in a fast-moving distro like Fedora.  I guess it's not entirely clear to me
> what the exact plan is since there's some discussion of frozen snapshots and
> such, which might address the fast-moving part.
>
> It also means more CI jobs, unless we're okay with dropping CentOS support
> for some scenarios and switching them to Fedora.  Given the amount of
> changes between CentOS 7 and current Fedora that's a pretty big gap in our
> testing.
>
> I guess if RDO has chosen this path then we don't have much choice.  As far
> as next steps, the first thing that would need to be done is to get TripleO
> running on Fedora again.  I suggest starting with
> https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377
> :-)
>
> -Ben
>

RDO has *yet* to choose a plan, and people were invited to work on the
"stabilized" repository draft [0]. If anyone has a better plan that fits all the
constraints, please share it ASAP.
Whatever the plan is, we're launching it with the Rocky cycle.

Among the constraints (but not limited to):
* EL8 is not available
* No Python 3 on EL7 *and* no allocated resources to maintain it (that includes
  rebuilding/maintaining *all* Python modules + libraries)
* Bridge the gap between EL7 and EL8; Fedora 27/28 are the closest thing we
  have to EL8 [1][2]
* SCLs have a cost (and I cannot yet disclose why, but not jumping onto the SCL
  bandwagon has proven to be the right bet)
* Have something stable enough that the upstream gate can use it.
  That's why the plan stresses that updates will be gated (the definition of how
  is still open)
* Manage to align the planets so that we can ship version X of OpenStack [3] on
  EL8 without additional delay

Well, I cannot say that I can't relate to what you're saying, though. [4]

Regards,
H.

[0] 
https://etherpad.openstack.org/p/stabilized-fedora-repositories-for-openstack
[1] Do not assume anything about EL8 (name included); it's more
complicated than that.
[2] Take a breath, but we might have to ship RDO as modules, not just RPMs or
containers. I already have headaches about it.
[3] Do not ask which one, I do not know :)
[4] Good thing that the next PTG will be in Dublin; I'll need a lot of
Irish whiskey :)




Re: [Openstack] [openstack-dev] package gluster-swift with Openstack swift repository

2017-08-10 Thread Haïkel
2017-08-10 9:52 GMT+02:00 Venkata R Edara :
> Hello All,
>
> we are from Red Hat and we have product called Gluster which is distributed
> file system. we have integrated Gluster with openstack-swift , the product
> is called
>
> gluster-swift . gluster-swift allows users to have SWIFT/S3 APIs with
> Gluster filesystem as back-end. we did re-base of gluster-swift with
> openstack-swift Newton release.
>
> As gluster-swift is dependent on openstack-swift we would like to have it
> packaged in openstack-swift newton repository.
>
> Is it possible to package gluster-swift project in openstack repository?.
>
> we would like to know the process of packaging with openstack repo if
> openstack community agrees for that.
>
> -Thanks
>
> Venkata R Edara.
>
>

Hi,

AFAIK, shipping binary packages is the responsibility of the downstream
distributions of OpenStack, though there are efforts to collaborate on
packaging within OpenStack.
I'm one of the RDO release engineers, and the RDO community would be happy to
help you package gluster-swift.
Please contact the RDO mailing list or join the RDO weekly IRC meetings.

Regards,
H.





Re: [openstack-dev] package gluster-swift with Openstack swift repository

2017-08-10 Thread Haïkel
2017-08-10 9:52 GMT+02:00 Venkata R Edara :
> Hello All,
>
> we are from Red Hat and we have product called Gluster which is distributed
> file system. we have integrated Gluster with openstack-swift , the product
> is called
>
> gluster-swift . gluster-swift allows users to have SWIFT/S3 APIs with
> Gluster filesystem as back-end. we did re-base of gluster-swift with
> openstack-swift Newton release.
>
> As gluster-swift is dependent on openstack-swift we would like to have it
> packaged in openstack-swift newton repository.
>
> Is it possible to package gluster-swift project in openstack repository?.
>
> we would like to know the process of packaging with openstack repo if
> openstack community agrees for that.
>
> -Thanks
>
> Venkata R Edara.
>
>

Hi,

AFAIK, shipping binary packages is the responsibility of the downstream
distributions of OpenStack, though there are efforts to collaborate on
packaging within OpenStack.
I'm one of the RDO release engineers, and the RDO community would be happy to
help you package gluster-swift.
Please contact the RDO mailing list or join the RDO weekly IRC meetings.

Regards,
H.





Re: [Openstack] Deployment for production

2017-05-04 Thread Haïkel
2017-05-03 10:41 GMT+02:00 Fawaz Mohammed :
> Hi Satish,
>
> I believe RDO is not meant to be for production. I prefer to use the
> original upstream project "TripleO" as they have better documentation.
>

It is meant for production, but it is community-supported.
I won't comment further, but many people are working on RDO to make it
usable, either as full-time RH employees (such as myself) or as community
contributors (as I did previously).

Regards,
H.

> Other production grade deployment tools are:
> Fuel:
> https://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide.html
> Support CentOS and Ubuntu as hosts.
>
> Charm:
> https://docs.openstack.org/developer/charm-guide/
> Support Ubuntu only.
>
>
> On May 3, 2017 11:00 AM, "Satish Patel"  wrote:
>>
>> We did POC on RDO and we are happy with product but now question is,
>> should we use RDO for production deployment or other open source flavor
>> available to deploy on prod. Not sure what is the best method of production
>> deployment?
>>
>> Sent from my iPhone



Re: [openstack-dev] [Packaging-RPM] Nominating Alberto Planas Dominguez for Packaging-RPM core

2017-02-16 Thread Haïkel
2017-02-16 15:43 GMT+01:00 Igor Yozhikov :
> Hello team.
> I want to announce the following changes to Packaging-RPM core team:
> I’d like to nominate Alberto Planas Dominguez known as aplanas on irc for
> Packaging-RPM core.
> Alberto done a lot of reviews for as for project modules [1],[2] as for rest
> of OpenStack components [3]. His experience within OpenStack components and
> packaging are very appreciated.
>

+1
Alberto was nominated for the next cycle, and I supported this then and still do.

Regards,
H.

>
> [1]
> http://stackalytics.com/?metric=marks_type=all=packaging-rpm-group_id=aplanas
> [2]
> http://stackalytics.com/?metric=marks_type=all=packaging-rpm-group
> [3] http://stackalytics.com/?user_id=aplanas=marks
>
> Packaging-RPM team please respond with +1/-1 to the proposed changes.
>
> Thanks,
> Igor Yozhikov
> Senior Deployment Engineer
> at Mirantis
> skype: igor.yozhikov
> cellular: +7 901 5331200
> slack: iyozhikov
>



Re: [openstack-dev] The end of OpenStack packages in Debian?

2017-02-15 Thread Haïkel
2017-02-15 13:42 GMT+01:00 Thomas Goirand :
> Hi there,
>
> It's been a while since I planed on writing this message. I couldn't
> write it because the situation makes me really sad. At this point, it
> starts to be urgent to post it.
>
> As for many other folks, Mirantis decided to end its contract with me.
> This happened when I was the most successful doing the job, with all of
> the packaging CI moved to OpenStack infra at the end of the OpenStack
> Newton cycle, after we were able to release Newton this way. I was
> hoping to start packaging on every commit for Ocata. That's yet another
> reason for me to be very frustrated about all of this. Such is life...
>
> Over the last few months, I hoped for having enough strengths to
> continue my packaging work anyway, and get Ocata packages done. But
> that's not what happened. The biggest reason for this is that I know
> that this needs to be a full time job. And at this point, I still don't
> know what my professional future will be. A company, in Barcelona, told
> me I'd get hired to continue my past work of packaging OpenStack in
> Debian, but so far, I'm still waiting for a definitive answer, so I'm
> looking into some other opportunities.
>
> All this to say that, unless someone wants to hire me for it (which
> would be the best outcome, but I fear this wont happen), or if someone
> steps in (this seems unlikely at this point), both the packaging-deb and
> the faith of OpenStack packages in Debian are currently compromised.
>
> I will continue to maintain OpenStack Newton during the lifetime of
> Debian Stretch though, but I don't plan on doing anything more. This
> means that maybe, Newton will be the last release of OpenStack in
> Debian. If things continue this way, I probably will ask for the removal
> of all OpenStack packages from Debian Sid after Stretch gets released
> (unless I know that someone will do the work).
>
> As a consequence, the following projects wont get packages even in
> Ubuntu (as they were "community maintained", which means done by me and
> later sync into Ubuntu...):
>
> - congress
> - gnocchi
> - magnum
> - mistral
> - murano
> - sahara
> - senlin
> - watcher
> - zaqar
>
> Hopefully, Canonical will continue to maintain the other 15 (more
> core...) projects in UCA.
>
> Thanks for the fish,
>
> Thomas Goirand (zigo)
>
> P,S: To the infra folks: please keep the packaging CI as it is, as it
> will be useful for the lifetime of Stretch.
>

I'm sad to hear that, as a fellow packager.
You've been a driving force for Debian packaging and for improving
OpenStack since its early days.
Your work has helped many people use OpenStack on Debian and its
derivatives effectively. I hope that you'll find a sponsorship or a
day job ASAP so you can keep going.

Regards,
H.




Re: [openstack-dev] [daisycloud-core] Kolla Mitaka requirements supported by CentOS

2016-10-14 Thread Haïkel
On Oct 12, 2016 6:01 AM, "Steven Dake (stdake)" <std...@cisco.com> wrote:
>
> Haikel,
>
>
>
> We attempted removing EPEL from our repo lists.  We got build errors on
> cinder-volume.  We have iscsi integration because vendors require it to
> work with their third party plugins.  The package iscsi-target-utils is not
> in the newton repos for RDO.
>
> The package that fails can be seen here:
> http://logs.openstack.org/04/385104/1/check/gate-kolla-dsvm-build-centos-source-centos-7-nv/f6cc1d8/console.html#_2016-10-11_19_34_40_662928
>
> If you could fix that up, it would be grand :)
>
>

Sorry for the delay, it ended up in the wrong folder; I'll look into adding this
package.

H.
>
> Thanks
>
> -steve
>
>
>
>
>>
>> From: Haïkel <hgue...@fedoraproject.org>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>> Date: Saturday, July 2, 2016 at 2:14 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [daisycloud-core] Kolla Mitaka requirements supported by CentOS
>>
>> 2016-07-02 20:42 GMT+02:00 jason <huzhiji...@gmail.com>:
>>>
>>> Pip Package Name    Supported By CentOS    CentOS Name                   Repo Name
>>> ===================================================================================
>>> ansible             yes                    ansible1.9.noarch             epel
>>> docker-py           yes                    python-docker-py.noarch       extras
>>> gitdb               yes                    python-gitdb.x86_64           epel
>>> GitPython           yes                    GitPython.noarch              epel
>>> oslo.config         yes                    python2-oslo-config.noarch    centos-openstack-mitaka
>>> pbr                 yes                    python-pbr.noarch             epel
>>> setuptools          yes                    python-setuptools.noarch      base
>>> six                 yes                    python-six.noarch             base
>>> pycrypto            yes                    python2-crypto                epel
>>> graphviz            no
>>> Jinja2              no (Note: Jinja2 2.7.2 will be installed as a dependency by ansible)
>>>
>>
>>
>>
>> As one of RDO maintainer, I strongly invite kolla, not to use EPEL.
>>
>> It's proven very hard to prevent EPEL pushing broken updates, or push
>>
>> updates to fit OpenStack requirements.
>>
>>
>>
>> Actually, all the dependency above but ansible, docker and git python
>>
>> modules are in CentOS Cloud SIG repositories.
>>
>> If you are interested to work w/ CentOS Cloud SIG, we can add missing
>>
>> dependencies in our repositories.
>>
>>
>>
>>
>>>
>>> As above table shows, only two (graphviz and Jinja2) are not supported
>>> by centos currently. As those not supported packages are definitly not
>>> used by OpenStack as well as Daisy. So basicaly we can use pip to
>>> install them after installing other packages by yum. But note that
>>> Jinja2 2.7.2 will be installed as dependency while yum install
>>> ansible, so we need to using pip to install jinja2 2.8 after that to
>>> overide the old one. Also note that we must make sure pip is ONLY used
>>> for installing those two not supported packages.
>>>
>>> But before you trying to use pip, please consider these:
>>>
>>> 1) graphviz is just for saving image depend graph text file and is not
>>> used by default and only used in build process if it is configured to
>>> be used.
>>>
>>> 2) Jinja2 rpm can be found at
>>> http://koji.fedoraproject.org/koji/packageinfo?packageID=6506, which I
>>> think is suitable for CentOS. I have tested it.
>>>
>>> So, as far as Kolla

Re: [openstack-dev] [openstack-announce] OpenStack Newton is officially released!

2016-10-06 Thread Haïkel
RDO Newton GA was ready more than an hour ago; builds are currently running.
Formal publication should happen soon.

Good job to all projects and to the release management team!

Regards,
H.



Re: [openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-29 Thread Haïkel
2016-09-26 16:05 GMT+02:00 Anita Kuno <ante...@anteaya.info>:
> On 16-09-26 07:48 AM, Haïkel wrote:
>>
>> Hi,
>>
>> following our discussions about 3rd party gates in RPM packaging project,
>> I suggest that we vote in order to promote the following gates as voting:
>> - MOS CI
>> - SUSE CI
>>
>> After promotion, all patchsets submitted will have to validate these gates
>> in order to get merged. And gates maintainers should ensure that the gates
>> are running properly.
>>
>> Please vote before (and/or during) our thursday meeting.
>>
>>
>> +1 to promote both MOS and SUSE CI as voting gates.
>>
>> Regards,
>> H.
>
>
> I'm not sure what you mean by voting gates. Gates don't vote, an individual
> job can leave a verified +1 in the check queue or/and a verified +2 in the
> gate queue.
>
> Third party CI systems do not vote verified +2 in gerrit. They may if the
> project chooses vote verified +1 on a project.
>

Yeah, that was pretty much what was assumed.
Gates that do not leave a verified +1 are called non-voting, so
logically, gates that leave a verified +1 are called voting gates.

> If you need clarification in what third party ci systems may do in gerrit,
> you are welcome to reply to this email, join the #openstack-infra channel or
> participate in a third party meeting:
> http://eavesdrop.openstack.org/#Third_Party_Meeting
>
> Thank you,
> Anita.
>
>



[openstack-dev] [packaging][rpm] 3rd-party gates promotion to voting gates

2016-09-26 Thread Haïkel
Hi,

following our discussions about 3rd-party gates in the RPM packaging project,
I suggest that we vote on promoting the following gates to voting:
- MOS CI
- SUSE CI

After the promotion, all submitted patchsets will have to pass these gates
in order to get merged, and the gate maintainers should ensure that the gates
are running properly.

Please vote before (and/or during) our Thursday meeting.


+1 to promote both MOS and SUSE CI as voting gates.

Regards,
H.



Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-23 Thread Haïkel
2016-09-21 16:34 GMT+02:00 Steven Dake (stdake) <std...@cisco.com>:
>
>
>
> On 9/20/16, 11:18 AM, "Haïkel" <hgue...@fedoraproject.org> wrote:
>
> 2016-09-19 19:40 GMT+02:00 Jeffrey Zhang <zhang.lei@gmail.com>:
> > Kolla core reviewer team,
> >
> > Kolla supports multiple Linux distros now, including
> >
> > * Ubuntu
> > * CentOS
> > * RHEL
> > * Fedora
> > * Debian
> > * OracleLinux
> >
> > But only Ubuntu, CentOS, and OracleLinux are widely used and we have
> > robust gate to ensure the quality.
> >
> > For fedora, Kolla hasn't any test for it and nobody reports any bug
> > about it( i.e. nobody use fedora as base distro image). We (kolla
> > team) also do not have enough resources to support so many Linux
> > distros. I prefer to deprecate fedora support now.  This is talked in
> > past but inconclusive[0].
> >
> > Please vote:
> >
> > 1. Kolla needs support fedora( if so, we need some guys to set up the
> > gate and fix all the issues ASAP in O cycle)
> > 2. Kolla should deprecate fedora support
> >
> > [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
> >
>
>
> /me has no voting rights
>
> As RDO maintainer and Fedora developer, I support option 2. as it'd be
> very time-consuming to maintain Fedora support..
>
>
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
>
> Haikel,
>
> Quck Q – are you saying maintaining fedora in kolla is time consuming or that 
> maintaining rdo for fedora is time consuming (and something that is being 
> dropped)?
>

Both. In my experience maintaining RDO on Fedora, I encountered
issues similar to Kolla's. It's doable, but a lot of work.
One of the biggest problems is updates: you may see disruptive
updates to Python module packages quite frequently or, more rarely,
have some updates reverted.
So keeping Fedora in good shape would require a decent amount of effort.

Regards,
H.



> Thanks for improving clarity on this situation.
>
> Regards
> -steve
>



Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-20 Thread Haïkel
2016-09-19 19:40 GMT+02:00 Jeffrey Zhang :
> Kolla core reviewer team,
>
> Kolla supports multiple Linux distros now, including
>
> * Ubuntu
> * CentOS
> * RHEL
> * Fedora
> * Debian
> * OracleLinux
>
> But only Ubuntu, CentOS, and OracleLinux are widely used and we have
> robust gate to ensure the quality.
>
> For fedora, Kolla hasn't any test for it and nobody reports any bug
> about it( i.e. nobody use fedora as base distro image). We (kolla
> team) also do not have enough resources to support so many Linux
> distros. I prefer to deprecate fedora support now.  This is talked in
> past but inconclusive[0].
>
> Please vote:
>
> 1. Kolla needs support fedora( if so, we need some guys to set up the
> gate and fix all the issues ASAP in O cycle)
> 2. Kolla should deprecate fedora support
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
>


/me has no voting rights

As an RDO maintainer and Fedora developer, I support option 2, as it would be
very time-consuming to maintain Fedora support.


>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>



[openstack-dev] [Packaging Rpm] PTL candidacy

2016-09-16 Thread Haïkel
Fellow RPM packagers,

I announce my candidacy for PTL of the Packaging Rpm project.
During the Newton cycle, we reached the point where we provide enough
artefacts to build OpenStack clients usable on all supported platforms.

As a PTL, my primary focus would be on:
* 3rd-party CI: increase coverage and stability so that we can promote
  the existing CIs to voting gates. The next step would be allowing other
  projects to consume our packaging for their own CI (mostly the installer
  ones).
* better tooling to generate more native packages and reduce churn,
  also adding Python 3 support.
* allowing people to deploy a minimal OpenStack from our packages.
  Rather than focusing on shoving in as many services as possible, I'd
  like us to focus on curating a minimal but high-quality set of packages
  to build upon. After such a milestone, adding more services will be
  much easier.

Why? The goal is to produce a production-ready and curated set of
OpenStack packages for all supported RPM-based platforms (SUSE, RHEL,
etc.). Such deliverables could be used to seed downstream
distributions and encourage collaboration between them around
packaging. It would also help OpenStack installers' CI test against
fresh OpenStack packages.


Of course, I plan to continue supporting these ongoing efforts:
* extending our packages set
* extending our contributors pool (including core)
* last but not least, foster collaboration between downstream vendors.

Best regards,
H.



Re: [openstack-dev] [packaging-deb][PTL] candidacy

2016-09-13 Thread Haïkel
2016-09-12 21:10 GMT+02:00 Thomas Goirand :
> I am writing to submit my candidacy for re-election as the PTL for the
> packaging-deb project.
>
> The idea sparked in Vancouver (spring 2015). The project joined the
> big-tent about a year ago (in August 2015, it was approved by the TC)
> But it then took about a year to have it bootstraped. This was long and
> painful bootstrap, but today, I can proudly announce that it was finally
> well launched. Right now, all of Oslo and python-*client are built, and
> it is a mater of days until all services of Newton is completely built
> in OpenStack infra (Keystone is already there in Newton b2 version).
>
> I'll do my best to continue to drive the project, and hope to gather
> more contribution every day. Every contributor counts.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> P.S: It maybe will be a bit hard to find out who can vote, because only
> the debian/newton branch should count, and currently Stackalytics is
> counting the master which contains upstream commits. Hopefully, we can
> solve the issue before the elections.
>

Thomas has been very helpful in collaborating with other packaging
groups like the RPM ones, so I welcome his candidacy!

Regards,
H.




Re: [openstack-dev] [oslo] [telemetry] [requirements] [FFE] Oslo.db 4.13.3

2016-09-08 Thread Haïkel
2016-09-08 19:33 GMT+02:00 Mehdi Abaakouk :
>
>
> Le 2016-09-08 16:21, Matthew Thode a écrit :
>>>
>>> Once it’s in, we’ll trigger another oslo.db release.
>
>
> The release change is ready: https://review.openstack.org/#/c/367482/
>
> I have tested it against Gnocchi we don't have any issue anymore.
>
> Thanks all!
>

Good news, thanks for fixing it!

> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>



Re: [openstack-dev] [packaging-rpm] Javier Peña as additonal core reviewer for packaging-rpm core group

2016-09-02 Thread Haïkel
2016-09-02 12:45 GMT+02:00 Dirk Müller :
> Hi,
>
> I would like to suggest Javier Peña as an additional core reviewer for
> the packaging-rpm core group. He's been an extremely valueable

+1

Javier has done a good job as a reviewer, and has been a key contributor
in adding the RDO third-party CI.
Good job!

H.




> contributing more to the packaging effort overall.
>
> See http://stackalytics.com/?user_id=jpena-c=rpm-packaging
>
>
> Please reply with +1/-1
>
> Thanks a lot in advance,
> Dirk
>



Re: [Openstack] RDO manager

2016-07-25 Thread Haïkel
Can you point me to the documentation you're using?
And explain what you're looking for: installing RDO Manager? Which
version (master, Mitaka, etc.)?

Regards,
H.

PS: I'm one of the RDO maintainers.



Re: [openstack-dev] [daisycloud-core] Kolla Mitaka requirements supported by CentOS

2016-07-02 Thread Haïkel
2016-07-02 20:42 GMT+02:00 jason :
> Pip Package Name    Supported By CentOS    CentOS Name                   Repo Name
> ===================================================================================
> ansible             yes                    ansible1.9.noarch             epel
> docker-py           yes                    python-docker-py.noarch       extras
> gitdb               yes                    python-gitdb.x86_64           epel
> GitPython           yes                    GitPython.noarch              epel
> oslo.config         yes                    python2-oslo-config.noarch    centos-openstack-mitaka
> pbr                 yes                    python-pbr.noarch             epel
> setuptools          yes                    python-setuptools.noarch      base
> six                 yes                    python-six.noarch             base
> pycrypto            yes                    python2-crypto                epel
> graphviz            no
> Jinja2              no (Note: Jinja2 2.7.2 will be installed as a dependency by ansible)
>

As one of the RDO maintainers, I strongly invite Kolla not to use EPEL.
It has proven very hard to prevent EPEL from pushing broken updates, or to get
it to push updates that fit OpenStack requirements.

Actually, all the dependencies above except the ansible, docker and git Python
modules are in the CentOS Cloud SIG repositories.
If you are interested in working w/ the CentOS Cloud SIG, we can add the missing
dependencies to our repositories.


>
> As above table shows, only two (graphviz and Jinja2) are not supported
> by centos currently. As those not supported packages are definitly not
> used by OpenStack as well as Daisy. So basicaly we can use pip to
> install them after installing other packages by yum. But note that
> Jinja2 2.7.2 will be installed as dependency while yum install
> ansible, so we need to using pip to install jinja2 2.8 after that to
> overide the old one. Also note that we must make sure pip is ONLY used
> for installing those two not supported packages.
>
> But before you trying to use pip, please consider these:
>
> 1) graphviz is just for saving image depend graph text file and is not
> used by default and only used in build process if it is configured to
> be used.
>
> 2) Jinja2 rpm can be found at
> http://koji.fedoraproject.org/koji/packageinfo?packageID=6506, which I
> think is suitable for CentOS. I have tested it.
>
> So, as far as Kolla deploy process concerned, there is no need to use
> pip to install graphviz and Jinja2. Further more, if we do not install
> Kolla either then we can get ride of pip totally!
>
> I encourage all of you to think about not using pip any more for
> Daisy+Kolla, because pip hase a lot of overlaps between yum/rpm, files
> may be overide back and force if not using them carefully. So not
> using pip will make things easier and make jump server more cleaner.
> Any ideas?
>
>
> Thanks,
> Zhijiang
>
> --
> Yours,
> Jason
>



Re: [openstack-dev] [kolla] Continued support of Fedora as a base platform

2016-06-30 Thread Haïkel
2016-06-30 14:07 GMT+02:00 Steven Dake (stdake) :
> What really cratered our implementation of fedora was the introduction of
> DNF.  Prior to that, we led with Fedora.  I switched my focus to something
> slower moving (CentOS) so I could focus on a properly working RDO rather
> then working around the latest and greatest changes.
>
> That said, if someone wants to fix Kolla to run against dnf, that would be
> fantastic, as it will need to be done for CentOS8 an RHEL8.
>
> Regards
> -steve
>

That's something that we fixed for the Fedora Cloud image. I'll give it a shot.

Regards,
H.



Re: [openstack-dev] [kolla] Continued support of Fedora as a base platform

2016-06-30 Thread Haïkel
My opinion as one of the RDO release wranglers is not to support Fedora
for anything other than trunk.
It has proven really hard to maintain all dependencies in a good state,
and even when we managed to do that, an update could break things at any
time (like the python-pymongo update that was removed because of the Pulp
developers).

RDO actually ensures that spec files are buildable on Fedora, but you'd
have to maintain dependencies separately and rely on tools like the yum
priorities plugin to override base packages.

Fedora's lifecycle is also not synced with OpenStack's: OpenStack is
released around two months before the next Fedora stable.
So in practice, if you use stable N-1, you have nine months of support
from Fedora, and updating to stable N requires some amount of work.
H.



Re: [openstack-dev] [kolla] version of python2-oslo-config

2016-06-29 Thread Haïkel
2016-06-28 17:53 GMT+02:00 Steven Dake (stdake) :
> The mitaka branch of Kolla requires 3.7 or later.
>
> Git checkout stable/mitaka
>
> Master may require 3.10, but that happens via the global requirements update
> process, of which RDO will surely address in the future.
>
> Regards
> -steve
>

Yes, we haven't branched Newton in the stable repositories yet, but you can
get it from the trunk repositories.
Feel free to CC rdo-list or me directly for anything related to packaging.

H.


> From: "hu.zhiji...@zte.com.cn" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Tuesday, June 28, 2016 at 4:30 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [kolla] version of python2-oslo-config
>
> Hi Kolla team,
>
> Base upon requirement.txt, Kolla needs oslo-config version 3.10. But CentOS
> Mitaka uses 3.9 ,which is python2-oslo-config-3.9.0-1.el7.noarch.rpm.
>
> I want to know if Kolla can also work on oslo-config-3.9.0. If it can, then
> will be a benefit because pip is conflict with rpm on python2-oslo-config
> package. For example, the rpm version has the ability to find config file in
> /usr/share/keystone/keystone-dist.conf but the pip version not.
>
>
> Thanks
> Zhijiang,
>
> 
> ZTE Information Security Notice: The information contained in this mail (and
> any attachment transmitted herewith) is privileged and confidential and is
> intended for the exclusive use of the addressee(s).  If you are not an
> intended recipient, any disclosure, reproduction, distribution or other
> dissemination or use of the information contained is strictly prohibited.
> If you have received this mail in error, please delete it and notify us
> immediately.
>
>
>



Re: [openstack-dev] [requirements][packaging] Normalizing requirements file

2016-06-24 Thread Haïkel
2016-06-24 4:02 GMT+02:00 Tony Breeds :
>
> I think we need to pause on these 'normalizing' changes in g-r.  They're
> genertaing whitspace only reviews in many, (possibly all) projects that have
> managed requirements.
>
> We need to do more testing and possibly make the bot smarter befoer we look at
> this again.
>
>
> Yours Tony.
>

Roger.
Maybe we can put it on the agenda for the next (or a later) meeting to
work on specifications and define the next steps before moving on.

Regards,
H.



Re: [openstack-dev] [neutron][general] Multiple implementation of /usr/bin/foo stored at the same location, leading to conflicts

2016-06-22 Thread Haïkel
Yes, RDO faced the very same issue:
https://github.com/rdo-packages/neutron-fwaas-distgit/blob/rpm-master/openstack-neutron-fwaas.spec#L115

My understanding was that the Neutron folks were looking for a solution,
but we have been shipping this workaround for a month now.

Regards,
H



Re: [openstack-dev] [requirements][packaging] Normalizing requirements file

2016-06-21 Thread Haïkel
2016-06-22 7:23 GMT+02:00 Tony Breeds :
>
> I'm fine with doign something like this.  I wrote [1] some time ago but didn't
> push on it as I needed to verify that this wouldn't create a "storm" of
> pointless updates that just reorder things in every projects 
> *requirements.txt.
>
> I think the first step is to get the 'tool' added to the requirements repo to
> make it easy to run again when things get out of wack.
>
> perhaps openstack_requirements/cmds/normalize ?
>

Thanks Swapnil and Tony for your positive comments.

I didn't submit the script yet, as I wanted to see how well it fares in real
conditions and get feedback from my peers first. I'll submit the script in a
separate review.

> we can bikeshed on the output format / incrementally improve things if we have
> a common base.
>

Makes sense; I tried to stay as close as possible to the current style.

Regards,
H.

> So I think that's a -1 on your review as it stands until we have the tool 
> merged.
>
> Yours Tony.
>
> [1] https://gist.github.com/tbreeds/f250b964383922bdea4645740ae4b195
>



[openstack-dev] [requirements][packaging] Normalizing requirements file

2016-06-21 Thread Haïkel
Hi,

as a packager, I spend a lot of time scrutinizing the requirements
repo, and I find it easier to read if specifiers are ordered.
At a quick glance, you can then check the minimum and maximum required
versions without searching for them among the other specifiers.
I scripted a basic linter to do that (it also normalizes comments to
PEP 8 standards).

Initial review is here:
https://review.openstack.org/#/c/332623/

The script is available here:
https://gist.github.com/hguemar/7a17bf93f6c8bd8ae5ec34bf9ab311a1
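
To give an idea of the kind of normalization I mean, here is a simplified
sketch (not the exact script from the gist; it ignores extras and environment
markers):

import re


def normalize_line(line):
    """Order version specifiers and normalize the inline comment (PEP 8 style)."""
    code, _, comment = line.partition('#')
    code, comment = code.strip(), comment.strip()
    if not code:
        return line.rstrip()
    # Split the package name from its comma-separated version specifiers.
    match = re.match(r'^([A-Za-z0-9._-]+)(.*)$', code)
    name, rest = match.group(1), match.group(2)
    specifiers = [s.strip() for s in rest.split(',') if s.strip()]

    def rank(spec):
        # Lower bounds first, then pins, then upper bounds, then exclusions.
        op = re.match(r'[<>=!~]*', spec).group(0)
        return {'>=': 0, '>': 0, '==': 1, '~=': 1, '<': 2, '<=': 2, '!=': 3}.get(op, 4)

    normalized = name + ','.join(sorted(specifiers, key=rank))
    if comment:
        normalized += '  # ' + comment  # PEP 8: two spaces before an inline comment
    return normalized


print(normalize_line('oslo.config<4.0.0,>=3.10.0 # Apache-2.0'))
# -> oslo.config>=3.10.0,<4.0.0  # Apache-2.0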

Your thoughts?

Regards,
H.



Re: [openstack-dev] [kolla] Kolla rpm distribution

2016-05-07 Thread Haïkel
All Kolla requirements are packaged in RDO, either in Fedora or in the CentOS
Cloud SIG repositories.
Kolla relies on RDO for its RPM packages, but there's also an upstream RPM
packaging project.

Regards,
H.



Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-06 Thread Haïkel
I started removing some entries; I guess I have a big cleanup to do on the RDO side.

H.



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-05-05 Thread Haïkel
Well, I'm more in favor of having it as a sub-team of the release management team.

H.



[openstack-dev] [release] Mitaka stable packaged for RDO

2016-04-07 Thread Haïkel
Hi,

the RDO community has packages available for stable Mitaka in its
testing repositories.
The official RDO release will be announced after we validate this release
with our CI.

Regards,
H.



Re: [openstack-dev] [Rdo-list] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-17 Thread Haïkel
+1, it fuels the confusion that RDO Manager has downstream-only patches,
which is not the case anymore.

And I'll bite anyone who tries to sneak downstream-only patches into the
RDO packages of TripleO.

Regards,
H.



Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-15 Thread Haïkel
2016-02-14 23:16 GMT+01:00 Davanum Srinivas :
> Hi,
>
> Short Story:
> pycryptodome if installed inadvertently will break several projects:
> Example : https://review.openstack.org/#/c/279926/
>
> Long Story:
> There's a new kid in town pycryptodome:
> https://github.com/Legrandin/pycryptodome
>
> Because pycrypto itself has not been maintained for a while:
> https://github.com/dlitz/pycrypto
>
> So folks like pysaml2 and paramiko are trying to switch over:
> https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
> https://github.com/paramiko/paramiko/issues/637
>
> In fact pysaml2===4.0.3 has already switched over. So the requirements
> bot/script has been trying to alert us to this new dependency, you can
> see Nova fail.
> https://review.openstack.org/#/c/279926/
>
> Why does it fail? For example, the new library is strict about getting
> bytes for keys and has dropped some parameters in methods. for
> example:
> https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
> https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499
>
> Another problem, if pycrypto gets installed last then things will
> work, if it pycryptodome gets installed last, things will fail. So we
> definitely cannot allow both in our global-requirements and
> upper-constraints. We can always try to pin stuff, but things will
> fail as there are a lot of jobs that do not honor upper-constraints.
> And things will fail in the field for Mitaka.
>
> Action:
> So what can we do? One possibility is to pin requirements and hope for
> the best. Another is to tolerate the install of either pycrypto or
> pycryptodome and test both combinations so we don't have to fight this
> battle.
>
> Example for Nova : https://review.openstack.org/#/c/279909/
> Example for Glance : https://review.openstack.org/#/c/280008/
> Example for Barbican : https://review.openstack.org/#/c/280014/
>
> What do you think?
>
> Thanks,
> Dims
>

This is annoying from a packaging PoV.

We have dependencies relying on pycrypto (e.g. oauthlib used by
keystone, paramiko used by even more projects), and we can't control the
order of installation.
My two cents would be to favor the latter solution and test both
combinations until the N or O release (and then get rid of pycrypto
for good), so we can handle this gracefully.


Regards,
H.



Re: [openstack-dev] [all] Any projects using sqlalchemy-utils?

2016-02-12 Thread Haïkel
2016-02-12 21:57 GMT+01:00 Corey Bryant :
> Are any projects using sqlalchemy-utils?
>
> taskflow started using it recently, however it's only needed for a single
> type in taskflow (JSONType).  I'm wondering if it's worth the effort of
> maintaining it and it's dependencies in Ubuntu main or if perhaps we can
> just revert this bit to define the JSONType internally.
>
> --
> Regards,
> Corey

Gnocchi has been using it for a while.

Regards,
H.



[openstack-dev] [rpm-packaging] core reviewers nomination

2015-11-02 Thread Haïkel
I'd like to propose new candidates for RPM packaging core reviewers:
Alan Pevec
Jakub Ruzicka

Both are involved in the downstream RDO project and in the creation of this group.
Alan is part of the stable release team and Jakub has been working on
our tooling since the beginning.
Having them onboard as core reviewers would help accelerate the
bootstrap of the project.

RPM packaging core, please vote with +/- 1.

Regards,
H



Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Haïkel
2015-06-03 23:41 GMT+02:00 Allison Randal alli...@lohutok.net:

 TBH, I don't think pip or distro packaging are ever going to be the
 right answer for fully configuring an OpenStack cloud. Because, there is
 no one true cloud, there are a variety of different configurations and
 combinations depending on whether you're in a dev/test scenario, running
 a private cloud, a public cloud, how many machines you're deploying to,
 what services you want to run on which machines, what your underlying
 network looks like, etc, etc...


I have to disagree on that point: integration with the underlying OS and
low-level services is important. If that integration doesn't exist, it's
off-loaded to the operators. So downstream packages bring more value than
a pip deployment, as they pull in dependencies (not just things from PyPI),
a working combination with the underlying OS components, etc.

Packages could be used in a variety of different configurations, even ones we
didn't expect. Any sensible scenario that we can't support is likely to be a
packaging bug in my book.

In some cases, it makes sense for fine-tuning, but generally, you just want to
get things working and then tweak your configuration.

 Having pip or distro packaging that's very opinionated about configuring
 a large set of related services is worse than useless when it's fighting
 against the configuration you need. It's on the order of installing the
 nginx package and finding that apt has set up a Wordpress instance and
 database you didn't want or need. Operator's nightmare.

 Both pip and distro packaging should be consumable by any set of config
 management/orchestration tools, which means just install the software
 with minimal configuration.


+1
As a matter of fact, I prefer packages to be as agnostic as possible about the
deployment and leave that work to the orchestration tool.

H.

 Allison




Re: [openstack-dev] [packaging] Source RPMs for RDO Kilo?

2015-06-03 Thread Haïkel
Hi Neil,

We're already having this discussion on the downstream list.
RDO is currently moving package publication for RHEL/CentOS over to the CentOS
mirrors. It's just a matter of time and of finishing the tooling that automates
the publication process for source packages.

In the meantime, you can find the sources in the following places:
* our packaging sources live in Fedora dist-git:
ie: packaging sources for all services
http://pkgs.fedoraproject.org/cgit/openstack
* source packages are in Fedora and CBS (RHEL/CentOS) build systems.
http://koji.fedoraproject.org/
http://cbs.centos.org/koji/

Regards,
H.



Re: [openstack-dev] [packaging] Source RPMs for RDO Kilo?

2015-06-03 Thread Haïkel
2015-06-03 12:59 GMT+02:00 Neil Jerram neil.jer...@metaswitch.com:
 Many thanks, Haïkel, that looks like the information that my team needed.

 Neil


Feel free to ask or join us on our downstream irc channel (#rdo @ freenode) if
you have further questions.
We also hold weekly public irc meetings about downstream packaging.

H.



 On 03/06/15 11:18, Haïkel wrote:

 Hi Neil,

 We're already having this discussion on the downstream list.
 RDO is currently moving packages publication for RHEL/CentOS over CentOS
 mirrors. That's just a matter of time and finish the tooling
 automating the publication
 process for source packages.

 In the mean time, you can find sources in the following places
 * our packaging sources live in Fedora dist-git:
 ie: packaging sources for all services
 http://pkgs.fedoraproject.org/cgit/openstack
 * source packages are in Fedora and CBS (RHEL/CentOS) build systems.
 http://koji.fedoraproject.org/
 http://cbs.centos.org/koji/

 Regards,
 H.






Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Haïkel
2015-06-03 17:23 GMT+02:00 Thomas Goirand z...@debian.org:
 On 06/03/2015 12:41 AM, James E. Blair wrote:
 Hi,

 This came up at the TC meeting today, and I volunteered to provide an
 update from the discussion.

 I've just read the IRC logs. And there's one thing I would like to make
 super clear.


I still haven't read the logs as we had our post-mortem meeting today,
but I'll try to address your points.

 We, ie: Debian  Ubuntu folks, are very much clear on what we want to
 achieve. The project has been maturing in our heads for like more than 2
 years. We would like that ultimately, only a single set of packages Git
 repositories exist. We already worked on *some* convergence during the
 last years, but now we want a *full* alignment.

 We're not 100% sure how the implementation details will look like for
 the core packages (like about using the Debconf interface for
 configuring packages), but it will eventually happen. For all the rest
 (ie: Python module packaging), which represent the biggest work, we're
 already converging and this has zero controversy.

 Now, the Fedora/RDO/Suse people jumped on the idea to push packaging on
 the upstream infra. Great. That's socially tempting. But technically, I
 don't really see the point, apart from some of the infra tooling (super
 cool if what Paul Belanger does works for both Deb+RPM). Finally,
 indeed, this is not totally baked. But let's please not delay the
 Debian+Ubuntu upstream Gerrit collaboration part because of it. We would
 like to get started, and for the moment, nobody is approving the
 /stackforge/deb-openstack-pkg-tools [1] new repository because we're
 waiting on the TC decision.


First, we all agree that we should move packaging recipes (to use a neutral term)
and their reviews to upstream Gerrit. That should *NOT* be delayed.
We (RDO) are even willing to transfer full control of the openstack-packages
namespace on GitHub. If you want to use another namespace, that's also
fine with us.

Then, regarding the infra/tooling topics, it looks like a misunderstanding.
If we don't find an agreement on these topics, it's perfectly fine and
should not prevent moving to upstream Gerrit.

So let's break the discussion in two parts.

1. Move to an upstream Gerrit shared by everyone, and get this started ASAP.
2. Continue the discussion about infra/tooling within the new project, without
presuming the outcome.

Does it look like a good compromise to you?

Regards,
H.


 Cheers,

 Thomas Goirand (zigo)

 [1] https://review.openstack.org/#/c/185164/





Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-01 Thread Haïkel
2015-06-01 17:32 GMT+02:00 Alan Pevec ape...@gmail.com:
 *Plan C* would be to just let projects tag stable point releases from
 time to time. That would solve all the original stated problems. And
 that would solve objections 2 and 3, which I think are the most valid ones.

 and *Plan D* would be to start doing automatic per-project
 micro-versions on each commit: e.g. 2015.1.N where N is increased on
 each commit. There's just TBD item how to provide source tarballs for
 this.

+1 for micro-versions rather than raw git checksums.
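
To make the idea concrete, here is a rough sketch (an assumption on my
part, not an agreed scheme; the tag, branch and repository path below
are placeholders) of how such a per-commit micro-version could be
derived:

#!/usr/bin/env python
# Illustrative sketch only: derive "2015.1.N" where N is the number of
# commits on the stable branch since the series tag.
import subprocess


def micro_version(repo, series_tag='2015.1.0', branch='stable/kilo'):
    count = subprocess.check_output(
        ['git', '-C', repo, 'rev-list', '--count',
         '%s..%s' % (series_tag, branch)]).strip()
    return '2015.1.%s' % count.decode('utf-8')


if __name__ == '__main__':
    print(micro_version('/path/to/nova'))  # e.g. "2015.1.42"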

 This would solve 3. reference points for OSSAs and relnotes.
 Solution for 2. could be to add doc/source/release-notes.rst for each
 project wishing to maintain it, docs are already published for each
 branch e.g. http://docs.openstack.org/developer/keystone/kilo/

 Cheers,
 Alan




Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Haïkel
2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
 Hi everyone,

 TL;DR:
 - We propose to stop tagging coordinated point releases (like 2015.1.1)
 - We continue maintaining stable branches as a trusted source of stable
 updates for all projects though


Hi,

I'm one of the main maintainers of the packages for Fedora/RHEL/CentOS.
We try to stick as close as possible to upstream (almost zero
downstream patches), and without intermediate releases, that will get difficult.

I'm personally not fond of this, as it will lead to more fragmentation.
It may encourage bad behaviors, like shipping downstream patches for bug fixes
and CVEs instead of collaborating upstream, in order to differentiate themselves.
For instance, if we had no point-based releases, we would have to maintain our
own sets of tags somewhere for issue-tracking purposes.

There's also the release notes issue that has already been mentioned.
Still, continuous release notes won't solve the problem, as you wouldn't
be able to map them to the actual packages. Will we require operators
to find which git commit the packages were built from, and then try to figure
out which fixes are and are not included?

 Long version:

 At the stable branch session in Vancouver we discussed recent
 evolutions in the stable team processes and how to further adapt the
 work of the team in a big tent world.

 One of the key questions there was whether we should continue doing
 stable point releases. Those were basically tags with the same version
 number (2015.1.1) that we would periodically push to the stable
 branches for all projects.

 Those create three problems.

 (1) Projects do not all follow the same versioning, so some projects
 (like Swift) were not part of the stable point releases. More and more
 projects are considering issuing intermediary releases (like Swift
 does), like Ironic. That would result in a variety of version numbers,
 and ultimately less and less projects being able to have a common
 2015.1.1-like version.


And it's actually a pain point to track which OpenStack branch these
releases belong to. This is probably something that needs to be resolved.

 (2) Producing those costs a non-trivial amount of effort on a very small
 team of volunteers, especially with projects caring about stable
 branches in various amounts. We were constantly missing the
 pre-announced dates on those ones. Looks like that effort could be
 better spent improving the stable branches themselves and keeping them
 working.


Agreed, but why not switch to time-based releases?
We would regularly tag/generate/upload tarballs; this could even be automated.
As far as I'm concerned, I would be happier with more frequent releases.

 (3) The resulting stable point releases are mostly useless. Stable
 branches are supposed to be always usable, and the released version
 did not undergo significantly more testing. Issuing them actually
 discourages people from taking whatever point in stable branches makes
 the most sense for them, testing and deploying that.

 The suggestion we made during that session (and which was approved by
 the session participants) is therefore to just get rid of the stable
 point release concept altogether for non-libraries. That said:

 - we'd still do individual point releases for libraries (for critical
 bugs and security issues), so that you can still depend on a specific
 version there

 - we'd still very much maintain stable branches (and actually focus our
 efforts on that work) to ensure they are a continuous source of safe
 upgrades for users of a given series

 Now we realize that the cross-section of our community which was present
 in that session might not fully represent the consumers of those
 artifacts, which is why we expand the discussion on this mailing-list
 (and soon on the operators ML).


Thanks. I was not able to join this discussion, and this is the kind of
proposal that I was afraid to see happen.

 If you were a consumer of those and will miss them, please explain why.
 In particular, please let us know how consuming that version (which was
 only made available every n months) is significantly better than picking
 your preferred time and get all the current stable branch HEADs at that
 time.


We provide both types of builds:
* continuous git builds = for testing/CI and early feedback on potential issues
* point-release-based builds = for GA and production

Anyway, I won't force anyone to do something they don't want to do, but I'm
willing to step in to keep point releases in one form or another.

Regards,
H.

 Thanks in advance for your feedback,

 [1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch

 --
 Thierry Carrez (ttx)



Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Haïkel
2015-05-29 21:36 GMT+02:00 Dave Walker em...@daviey.com:
 Responses inline.

 On 29 May 2015 6:15 pm, Haïkel hgue...@fedoraproject.org wrote:

 2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
  Hi everyone,
 
  TL;DR:
  - We propose to stop tagging coordinated point releases (like 2015.1.1)
  - We continue maintaining stable branches as a trusted source of stable
  updates for all projects though
 

 Hi,

 I'm one of the main maintainer of the packages for Fedora/RHEL/CentOS.
 We try to stick as much as possible to upstream (almost zero
 downstream patches),
 and without intermediate releases, it will get difficult.

 If you consider *every* commit to be a release, then your life becomes
 easier. This is just a case of bumping the SemVer patch version per commit
 (as eloquently put by Jeremy).  We even have tooling to automate the version
 generation via pbr..

 Therefore, you might want to jump from X.X.100 to X.X.200 which would mean
 100 commits since the last update.


We have had continuous builds for every commit on master for a while now, and
it's been a great tool, together with CI, to get early feedback (missing deps,
integration issues, etc.).
We could easily reuse that platform to track stable branches.

The problem is that the downstream QA/CI cycle for a package can be much longer
than the time between two commits, so we'd end up jamming updates together.
I'd rather not drop downstream QA, as it tests the integration bits, and
that's unlikely to be something that could be done upstream.


 I'm personally not fond of this as it will lead to more fragmentation.
 It may encourage
 bad behaviors like shipping downstream patches for bug fixes and CVE
 instead
 of collaborating upstream to differentiate themselves.
 For instance, if we had no point-based release, for issues tracking
 purposes, we would
 have to maintain our sets of tags somewhere.

 I disagree, each distro already does security patching and whilst I expect
 this to still happens, it actually *encourages* upstream first workflow as
 you can select a release on your own cadence that includes commits you need,
 for your users.


If they choose to rebase upon stable branches, you could also cherry-pick.

 There's also the release notes issue that has already been mentioned.
 Still continuous release notes won't solve the problem, as you wouldn't
 be able to map these to the actual packages. Will we require operators
 to find from which git commit, the packages were built and then try to
 figure
 out which fixes are and are not included?

 Can you provide more detail? I'm not understanding the problem.


A release version makes it easy to know which fixes are shipped in a package.
If you rebase on stable branches, then you can just put the git sha1sum (though
it's not very friendly) in the version, and leverage git branch --contains to
find out whether your fix is included.
Some distributors may choose to use their own release scheme, adding complexity
to this simple but common problem.
Others may choose to cherry-pick, which adds even more complexity than the
previous scenario.

Let's say you're an operator and you want to check whether a CVE fix is shipped
on all your nodes: if you can't check with just the release version, it will be
complicated. It could be a barrier for heterogeneous systems.
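
To illustrate what the operator would have to do in the sha1-based case
(a rough sketch with placeholder paths and commit ids; it uses git
merge-base --is-ancestor rather than git branch --contains, since it
answers the question for one specific build commit):

#!/usr/bin/env python
# Illustrative sketch only: given the commit a package was built from
# (e.g. extracted from a sha1-based package version) and the commit of
# a CVE fix, ask git whether the fix is contained in that build.
import subprocess


def fix_included(repo, fix_sha, build_sha):
    # exit code 0 means fix_sha is an ancestor of build_sha, i.e. the
    # fix is part of what the package was built from
    return subprocess.call(
        ['git', '-C', repo, 'merge-base', '--is-ancestor',
         fix_sha, build_sha]) == 0


if __name__ == '__main__':
    repo = '/path/to/nova'      # placeholder
    cve_fix = 'abcdef012345'    # placeholder: sha1 of the CVE fix
    build = '123456abcdef'      # placeholder: sha1 from the package version
    if fix_included(repo, cve_fix, build):
        print('fix is shipped in this package')
    else:
        print('fix is NOT shipped in this package')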

  Long version:
 
  At the stable branch session in Vancouver we discussed recent
  evolutions in the stable team processes and how to further adapt the
  work of the team in a big tent world.
 
  One of the key questions there was whether we should continue doing
  stable point releases. Those were basically tags with the same version
  number (2015.1.1) that we would periodically push to the stable
  branches for all projects.
 
  Those create three problems.
 
  (1) Projects do not all follow the same versioning, so some projects
  (like Swift) were not part of the stable point releases. More and more
  projects are considering issuing intermediary releases (like Swift
  does), like Ironic. That would result in a variety of version numbers,
  and ultimately less and less projects being able to have a common
  2015.1.1-like version.
 

 And that's actually a pain point to track for these releases in which
 OpenStack branch belong. And this is probably something that needs to
 be resolved.

  (2) Producing those costs a non-trivial amount of effort on a very small
  team of volunteers, especially with projects caring about stable
  branches in various amounts. We were constantly missing the
  pre-announced dates on those ones. Looks like that effort could be
  better spent improving the stable branches themselves and keeping them
  working.
 

 Agreed, but why not switching to a time-based release?
 Regularly, we tag/generate/upload tarballs, this could even be automated.
 As far as I'm concerned, I would be more happy to have more frequent
 releases.

  (3) The resulting stable point releases are mostly useless. Stable
  branches are supposed to be always usable, and the released version
  did

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Haïkel
2015-05-29 21:23 GMT+02:00 Ian Cordasco ian.corda...@rackspace.com:


 On 5/29/15, 12:14, Haïkel hgue...@fedoraproject.org wrote:

2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
 Hi everyone,

 TL;DR:
 - We propose to stop tagging coordinated point releases (like 2015.1.1)
 - We continue maintaining stable branches as a trusted source of stable
 updates for all projects though


Hi,

I'm one of the main maintainer of the packages for Fedora/RHEL/CentOS.
We try to stick as much as possible to upstream (almost zero
downstream patches),
and without intermediate releases, it will get difficult.

 Can you expound on why this is difficult? I believe you, but I want to
 understand it better.


It's impossible to ship a package for every commit, so that means we'll
end up shipping one of:
1. random commits = bad for tracking issues
2. time-based releases (i.e. rebuilding every 2 or 4 weeks)
3. cherry-picked commits from stable branches = leads to practical forks

I'm personally not fond of this as it will lead to more fragmentation.

 Could you explain this as well? Do you mean fragmentation between what
 distros are offering? In other words, Ubuntu is packaging Kilo @ SHA1 and
 RHEL is at SHA2. I'm not entirely certain that's a bad thing. That seems
 to give the packagers more freedom.


Freedom leads to fragmentation; that's not bad per se, though I prefer
collaborating on stabilizing the same releases.
That's a personal preference, not an argument :)

It may encourage
bad behaviors like shipping downstream patches for bug fixes and CVE
instead
of collaborating upstream to differentiate themselves.

 Perhaps I'm wrong, but when a CVE is released, don't the downstream
 packagers usually patch whatever version they have and push that out?
 Isn't that the point of them being on an private list to receive embargoed
 notifications with the patches?


Yes, but everyone will have different strategies when releasing a package, as I
explained above.

For instance, if we had no point-based release, for issues tracking
purposes, we would
have to maintain our sets of tags somewhere.

 But, if I understand correct, downstream sometimes has patches they apply
 (or develop) to ensure the package is rock solid on their distribution.
 Those aren't always relevant upstream so you maintain them. How is this
 different?


Pure downstream patches are not a problem, though we aim to drop them
from our packages.
We'd need the aforementioned tags to track what we're currently shipping in our
packages. Having a proper release version/changelog makes it easy to check
whether you're shipping a fix or not, either by release version
or by git commit.

Usually, downstream patches are rebased against the latest release.

There's also the release notes issue that has already been mentioned.
Still continuous release notes won't solve the problem, as you wouldn't
be able to map these to the actual packages. Will we require operators
to find from which git commit, the packages were built and then try to
figure
out which fixes are and are not included?

 I think this is wrong. If it's a continuously updated set of notes, then
 whatever SHA the head of stable/X is at will be the correct set of notes
 for that branch. If you decide to package a SHA earlier than that, then
 you would need to do this, but I'm not sure why you would want to package
 a SHA that isn't at the HEAD of that branch.


We were speaking about leveraging the wiki; if we're talking about a changelog
shipped in git, then we agree.
But that requires ensuring that changelogs are properly updated with
every commit (which is not the case currently).


 Long version:

 At the stable branch session in Vancouver we discussed recent
 evolutions in the stable team processes and how to further adapt the
 work of the team in a big tent world.

 One of the key questions there was whether we should continue doing
 stable point releases. Those were basically tags with the same version
 number (2015.1.1) that we would periodically push to the stable
 branches for all projects.

 Those create three problems.

 (1) Projects do not all follow the same versioning, so some projects
 (like Swift) were not part of the stable point releases. More and more
 projects are considering issuing intermediary releases (like Swift
 does), like Ironic. That would result in a variety of version numbers,
 and ultimately less and less projects being able to have a common
 2015.1.1-like version.


And that's actually a pain point to track for these releases in which
OpenStack branch belong. And this is probably something that needs to
be resolved.

 Well there's been a lot of discussion around not integrating releases at
 all. That said, I'm not sure I disagree. Coordinating release numbers is
 fine. Coordinating release dates seems less so, especially since they
 prevent the project from delivering what it's promised so that it can
 manage to get something that's super stable by an arbitrary date.


Yes,  the problem to solve

Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-28 Thread Haïkel
2015-05-28 21:58 GMT+02:00 Paul Belanger pabelan...@redhat.com:

 Personally, I'm a fan of mock. Is there plan to add support for it? Also,
 currently containers are not being used in -infra.  Not saying it is a show
 stopper, but could see some initial planning that is required for it.



Nothing prevents us from running mock within the Delorean container, but I think
this would be useless overhead; we already have (much better) isolation with Docker.
Moreover, leveraging Docker is currently an option for mock upstream.



Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-28 Thread Haïkel
2015-05-28 10:40 GMT+02:00 Thomas Goirand z...@debian.org:

 I don't know delorean at all, but what should be kept in mind is that,
 for Debian and Ubuntu, we *must* use sbuild, which is what is used on
 the buildd networks.

 I also started working on openstack-pkg-tools to provide such sbuild
 based build env, so I'm not sure if we need to switch to Delorean. Could
 you point me to some documentation about it, so I can see by myself what
 Delorean is about?


Delorean basically retrieves the upstream sources and the packaging
recipes (i.e. spec + config files for RPM), and runs a script to build
packages in a Docker container.

Here's the main script to rebuild an RPM package:
https://github.com/openstack-packages/delorean/blob/master/scripts/build_rpm.sh

The script basically uses rpmbuild to build the packages; we could have a
build_deb.sh that uses sbuild, and add Dockerfiles for the Debian/Ubuntu
supported releases.
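
Just to sketch the idea (a rough, hypothetical outline with placeholder
paths, not an actual proposal for Delorean's code), the Debian
counterpart could be a small step that drives sbuild the same way
build_rpm.sh drives rpmbuild:

#!/usr/bin/env python
# Hypothetical build_deb step, for illustration only.  The .dsc path and
# distribution are placeholders, and the Debian source package is
# assumed to have been generated beforehand (e.g. with dpkg-source -b).
import subprocess


def build_deb(dsc_path, dist='unstable'):
    # build the binary packages in a clean sbuild chroot for the target
    # distribution, from an already generated Debian source package
    subprocess.check_call(['sbuild', '--dist=' + dist, dsc_path])


if __name__ == '__main__':
    build_deb('/data/output/foo_1.0-1.dsc')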

Regards,
H.

 Cheers,

 Thomas




Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-28 Thread Haïkel
2015-05-27 23:26 GMT+02:00 Derek Higgins der...@redhat.com:
 On 27/05/15 09:14, Thomas Goirand wrote:


 Hi all,

 tl;dr:
 - - We'd like to push distribution packaging of OpenStack on upstream
 gerrit with reviews.
 - - The intention is to better share the workload, and improve the overall
 QA for packaging *and* upstream.
 - - The goal is *not* to publish packages upstream
 - - There's an ongoing discussion about using stackforge or openstack.
 This isn't, IMO, that important, what's important is to get started.
 - - There's an ongoing discussion about using a distribution specific
 namespace, my own opinion here is that using /openstack-pkg-{deb,rpm} or
 /stackforge-pkg-{deb,rpm} would be the most convenient because of a
 number of technical reasons like the amount of Git repository.
 - - Finally, let's not discuss for too long and let's do it!!! :)

 Longer version:

 Before I start: some stuff below is just my own opinion, others are just
 given facts. I'm sure the reader is smart enough to guess which is what,
 and we welcome anyone involved in the project to voice an opinion if
 he/she differs.

 During the Vancouver summit, operation, Canonical, Fedora and Debian
 people gathered and collectively expressed the will to maintain
 packaging artifacts within upstream OpenStack Gerrit infrastructure. We
 haven't decided all the details of the implementation, but spent the
 Friday morning together with members of the infra team (hi Paul!) trying
 to figure out what and how.

 A number of topics have been raised, which needs to be shared.

 First, we've been told that such a topic deserved a message to the dev
 list, in order to let groups who were not present at the summit. Yes,
 there was a consensus among distributions that this should happen, but
 still, it's always nice to let everyone know.

 So here it is. Suse people (and other distributions), you're welcome to
 join the effort.

 - - Why doing this
 
 It's been clear to both Canonical/Ubuntu teams, and Debian (ie: myself)
 that we'd be a way more effective if we worked better together, on a
 collaborative fashion using a review process like on upstream Gerrit.
 But also, we'd like to welcome anyone, and especially the operation
 folks, to contribute and give feedback. Using Gerrit is the obvious way
 to give everyone a say on what we're implementing.

 As OpenStack is welcoming every day more and more projects, it's making
 even more sense to spread the workload.

 This is becoming easier for Ubuntu guys as Launchpad now understand not
 only BZR, but also Git.

 We'd start by merging all of our packages that aren't core packages
 (like all the non-OpenStack maintained dependencies, then the Oslo libs,
 then the clients). Then we'll see how we can try merging core packages.

 Another reason is that we believe working with the infra of OpenStack
 upstream will improve the overall quality of the packages. We want to be
 able to run a set of tests at build time, which we already do on each
 distribution, but now we want this on every proposed patch. Later on,
 when we have everything implemented and working, we may explore doing a
 package based CI on every upstream patch (though, we're far from doing
 this, so let's not discuss this right now please, this is a very long
 term goal only, and we will have a huge improvement already *before*
 this is implemented).

 - - What it will *not* be
 ===
 We do not have the intention (yet?) to publish the resulting packages
 built on upstream infra. Yes, we will share the same Git repositories,
 and yes, the infra will need to keep a copy of all builds (for example,
 because core packages will need oslo.db to build and run unit tests).
 But we will still upload on each distributions on separate repositories.
 So published packages by the infra isn't currently discussed. We could
 get to this topic once everything is implemented, which may be nice
 (because we'd have packages following trunk), though please, refrain to
 engage in this topic right now: having the implementation done is more
 important for the moment. Let's try to stay on tracks and be constructive.

 - - Let's keep efficiency in mind
 ===
 Over the last few years, I've been able to maintain all of OpenStack in
 Debian with little to no external contribution. Let's hope that the
 Gerrit workflow will not slow down too much the packaging work, even if
 there's an unavoidable overhead. Hopefully, we can implement some
 liberal ACL policies for the core reviewers so that the Gerrit workflow
 don't slow down anyone too much. For example we may be able to create
 new repositories very fast, and it may be possible to self-approve some
 of the most trivial patches (for things like typo in a package
 description, adding new debconf translations, and such obvious fixes, we
 shouldn't waste our time).

 There's a middle ground between the 

Re: [Openstack] Centos 7 root pasword

2014-10-14 Thread Haïkel
2014-10-14 9:41 GMT+02:00 Mridhul Pax mrid...@live.com:
 Hi Friends,

 I have downloaded a centos 7 image from the following site and created a
 glance image. Im able to provison a server via that image and the server
 booted up fine. Any one know how to login to the server ?

 I tried combinations like root/centos , centos/centos but no luck

 I downloaded the QCOW2 image from the following link :

 http://cloud.centos.org/centos/7/devel/

Cloud images have no password set; you must pass an SSH key to be able to
connect to your instance.
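
Roughly, the flow looks like this (a sketch using python-novaclient as
it was at the time; the key, image and flavor names are placeholders,
and credentials are assumed to come from the usual OS_* environment
variables):

#!/usr/bin/env python
# Illustrative sketch only: register an SSH public key as a keypair and
# boot the CentOS 7 image with it.  Names below are placeholders.
import os
from novaclient import client as nova_client

nova = nova_client.Client('2',
                          os.environ['OS_USERNAME'],
                          os.environ['OS_PASSWORD'],
                          os.environ['OS_TENANT_NAME'],
                          os.environ['OS_AUTH_URL'])

# upload your public key once
with open(os.path.expanduser('~/.ssh/id_rsa.pub')) as pubkey:
    nova.keypairs.create('mykey', public_key=pubkey.read())

# boot the instance with that keypair
image = nova.images.find(name='CentOS-7-x86_64-GenericCloud')
flavor = nova.flavors.find(name='m1.small')
nova.servers.create('centos7-test', image, flavor, key_name='mykey')

Then log in with the matching private key and the image's default user,
e.g. ssh -i ~/.ssh/id_rsa centos@<instance ip>.  The same can be done
from the CLI with nova keypair-add and nova boot --key-name.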
