Re: [openstack-dev] Proposing Mark Goddard to ironic-core

2018-05-23 Thread Shivanand Tendulker
+1 from me.



On Sun, May 20, 2018 at 8:15 PM, Julia Kreger 
wrote:

> Greetings everyone!
>
> I would like to propose Mark Goddard to ironic-core. I am aware he
> recently joined kolla-core, but his contributions in ironic have been
> insightful and valuable. The kind of value that comes from operative use.
>
> I also make this nomination knowing that our community landscape is
> changing and that we must not silo our team responsibilities or ability to
> move things forward to a small, highly focused team. I trust Mark to use his
> judgement as he has time or need to do so. He might not always have time,
> but I think at the end of the day, we’re all in that same boat.
>
> -Julia
>
>


Re: [openstack-dev] Proposing Mark Goddard to ironic-core

2018-05-23 Thread Shiina, Hironori
+1

> -Original Message-
> From: Julia Kreger [mailto:juliaashleykre...@gmail.com]
> Sent: Sunday, May 20, 2018 11:46 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: [openstack-dev] Proposing Mark Goddard to ironic-core
> 
> Greetings everyone!
> 
> I would like to propose Mark Goddard to ironic-core. I am aware he recently 
> joined kolla-core, but his contributions in ironic
> have been insightful and valuable. The kind of value that comes from 
> operative use.
> 
> I also make this nomination knowing that our community landscape is changing 
> and that we must not silo our team responsibilities
> or ability to move things forward to a small, highly focused team. I trust Mark 
> to use his judgement as he has time or need to
> do so. He might not always have time, but I think at the end of the day, 
> we’re all in that same boat.
> 
> -Julia



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Zane Bitter

On 23/05/18 11:25, Dean Troyer wrote:
> On Wed, May 23, 2018 at 12:58 PM, Julia Kreger
>  wrote:
>
>> There is definitely value to be gained for both projects in terms of a
>> different point of view that might not have been able to play out in
>
> Ironic is a bit different in this regard to the released code since
> there _is_ overlap with the STX Bare Metal service.  There are also
> non-overlapping aspects to it.  I would like to talk with you and the
> Ironic team at some point about scope and goals for the long term.
>
>> the public community, but since we're dealing with squashed commits of
>> changes, it is really hard for us to delineate the history/origin of code
>> fragments, and without that it is nearly impossible for projects to even
>> help them reconcile their technical debt, given the lack of surrounding
>> context. It would be so much more friendly to the community if we had
>> stacks of patch files that we could work with in git.


+1


> Unfortunately it was a requirement to not release the history.  There
> are some bits that we were not allowed to release (for legal reasons,
> not open core reasons) that are present in the history.  And yes it is
> in most cases unusable to do anything more than browse for pulling
> things upstream.


'git filter-branch' is your friend :)
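
To make that concrete, here is a rough sketch of the kind of history
rewrite being suggested, driven from Python; the encumbered paths are
invented placeholders, not anything from the actual StarlingX repos:

import subprocess

# Hypothetical cleanup before publishing: drop legally-encumbered paths
# from every commit. Run this on a throwaway clone, since filter-branch
# rewrites all refs destructively.
encumbered = ["proprietary/", "vendor/encumbered_driver.c"]  # invented

subprocess.run(
    ["git", "filter-branch", "--force",
     "--index-filter",
     "git rm -r --cached --ignore-unmatch " + " ".join(encumbered),
     "--prune-empty", "--", "--all"],
    check=True,
)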


> What I did manage to get was permission to publish the individual
> commits on top of the upstream base that do not run afoul of the legal
> issues.  Given that this is all against Pike and we need to propose to
> master first, they are not likely directly usable but the information
> needed for the upstream work will be available.  These have not been
> cleaned up yet but I plan to add them directly to the repos containing
> the squashes as they are done.
>
>> Can I add myself to the list of confused people wanting to understand
>> better? I can see and understand value, but context and understanding
>> as to the why, as I mentioned above, is going to be the main limiter
>> for interaction.
>
> I have heard multiple reasons why this has been done, this is one area
> I am not going to go into detail about other than the stuff that has
> been cleared and released.  Understanding (some) business decisions
> is not one of my strengths.
>
> I will say that my opinion from working with WRS for a few months is
> they do truly want to form a community around StarlingX and will be
> moving their ongoing Titanium development there.
>
> dt






[openstack-dev] Fwd: Follow Up: Private Enterprise Cloud Issues

2018-05-23 Thread David Medberry
There was a great turnout at the Private Enterprise Cloud Issues session
here in Vancouver. I'll propose a follow-on discussion for the Denver PTG,
as well as try to sift the data a bit and pre-populate it. Look for that
sifted data soon.

For folks unable to participate locally, the etherpad is here:

https://etherpad.openstack.org/p/YVR-private-enterprise-cloud-issues

(and I've cached a copy offline in case it gets reset/etc.)

-- 
-dave


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Michael Still
I think a good start would be a concrete list of the places you felt you
needed to change upstream and, for each, the specific reasons it wasn't
done as part of the community.

For example, I look at your nova fork and it has a "don't allow this call
during an upgrade" decorator on many API calls. Why wasn't that done
upstream? It doesn't seem overly controversial, so it would be useful to
understand the reasoning for that change.

To be blunt I had a quick scan of the Nova fork and I don't see much of
interest there, but it's hard to tell given how things are laid out now.
Hence the request for a list.

Michael




On Thu, May 24, 2018 at 6:36 AM, Dean Troyer  wrote:

> On Wed, May 23, 2018 at 2:20 PM, Brian Haley  wrote:
> > Even doing that is work - going through changes, finding nuggets,
> > proposing new specs... I don't think we can expect a project to even go
> > there, it has to be driven by someone already involved in StarlingX, IMHO.
>
> In the beginning at least it will be.  We have prioritized lists for
> where we want to start.  Once I get the list and commits cleaned up
> everyone can look at them and weigh in on our starting point.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>



-- 
Did this email leave you hoping to cause me pain? Good news!
Sponsor me in city2surf 2018 and I promise to suffer greatly.
http://www.madebymikal.com/city2surf-2018/


Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Sergii Golovatiuk
Hi,

On Wed, May 23, 2018 at 8:20 PM, Sagi Shnaidman  wrote:
>
>>
>> to reduce the impact of a change. From my original reply:
>>
>> > If there's a high maintenance cost, we haven't properly identified the
>> > optimal way to separate functionality between tripleo/quickstart.
>>
>> IMHO this is a side effect of having a whole bunch of roles in a
>> single repo.  oooq-extras has a mix of tripleo and non-tripleo related
>> content. The reproducer IMHO is related to provisioning and could fall
>> in the oooq repo and not oooq-extras.  This is a structure problem
>> with quickstart.  If it's not version specific, then don't put it in a
>> version specific repo. But that doesn't mean don't use version
>> specific repos at all.
>>
>> This is one of the reasons why we're opting not to use this pattern of
>> a bunch of roles in a single repo for tripleo itself[0][1][2].  We
>> learned with the puppet modules that carrying all this stuff in a
>> single repo has a huge maintenance cost and if you split them out you
>> can identify re-usability and establish proper patterns for moving
>> functionality into a shared place[3].  Yes there is a maintenance cost
>> of maintaining independent repos, but at the same time there's a
>> benefit of re-usability by other projects/groups when you expose
>> important pieces of functionality as a standalone. You can establish
>> clear ways to interact with each piece, test items, and release
>> independently.  For example the ansible-role-container-registry is not
>> tripleo specific and anyone looking to manage a standalone docker
>> registry can use it & contribute.
>>
>
> We were moving between having all roles in one repo and having a separate
> repo for each role a few times. Each case has its advantages and
> disadvantages. Last time we moved to have roles in 2 repos - quickstart and
> extras, it was a year ago I think. So far IMHO it's the best approach. There
> will be a mechanism to install additional roles, like we have for
> tripleo-upgrade, ops-tools, etc etc.

But at the moment we don't have that mechanism, so we will have to live
with this somehow until it's implemented.

> It may be a much broader topic to discuss, although I think having some
> roles branched and some not branched is much more of a headache.
> Tripleo-upgrade is a good example of it.
>
>>
>> > So in 90% of the code we DO need to backport every change; take for
>> > example the latest patch to extras: https://review.openstack.org/#/c/570167/,
>> > it's fixing the reproducer. If oooq-extras were branched, we would need to
>> > backport this fix to each and every branch. And the same for all the other
>> > 90% of code, which is complete nonsense.
>> > Just to avoid the "{% if release %}" construct, should we block the whole
>> > work of the CI team and make the CI code absolutely unmaintainable?
>> >
>>
>> And you're saying what we currently have is maintainable?  We keep
>> breaking ourselves, there are big gaps in coverage and it takes
>> time[4][5] to identify breakages. I don't consider that maintainable;
>> this is a recurring topic because we clearly haven't fixed it
>> with the current setup.  It's time to re-evaluate what we have and see
>> if there's room for improvement.  I know I wasn't proposing to branch
>> all the repositories, but it might make sense to figure out if there's
>> a way to reduce our recurring issues with stable branches or
>> independent modules for some of the functions in CI.
>
>
>> Considering this is how we broke Queens, I'm not sure I agree.

We broke Queens, Pike, Newton by merging [1] without testing against
these releases.

>>
>
> First of all I don't see any connection between maintenance and CI
> breakages; they're different topics. And yes, it IS maintainable CI that we
> have now, and I have plenty to compare it with. I remember very well the
> tripleo.sh based approach, and you can also see the almost green dashboards
> lately, which proves my statement. CI is not ideal now, but it's definitely
> much better than 1-2 years ago.
>
>
> Of course we have breakages; the CI is actually a history of breakages and
> fixes, like any other product. Wrt the queens issue, it took about a week to
> solve not because it was so hard, but because we had a few very difficult
> weeks trying to fix all the CentOS 7.5 issues, and the queens branch was
> second priority. And by the way, we fixed everything much faster than we did
> with CentOS 7.4.  Having the negative attitude that every CI breakage is
> proof of a wrong CI structure is not correct and doesn't help. Even if
> branching had helped in this case, it would have created much bigger
> problems in all other cases.

I would like to set feelings aside and discuss the technical side of the
two solutions, and the cost for every team and the product in general, to
find the solution that fits all.

>
> Anyway, we saw that having branch jobs in OVB only didn't catch the queens
> issue (why - you know better) so we added multinode branch-specific ones,
> which will catch such issues in the future.

Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Dean Troyer
On Wed, May 23, 2018 at 2:20 PM, Brian Haley  wrote:
> Even doing that is work - going through changes, finding nuggets, proposing
> new specs... I don't think we can expect a project to even go there, it has
> to be driven by someone already involved in StarlingX, IMHO.

In the beginning at least it will be.  We have prioritized lists for
where we want to start.  Once I get the list and commits cleaned up
everyone can look at them and weigh in on our starting point.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Jeremy Stanley
On 2018-05-23 15:20:28 -0400 (-0400), Brian Haley wrote:
> On 05/23/2018 02:00 PM, Jeremy Stanley wrote:
> > On 2018-05-22 17:41:18 -0400 (-0400), Brian Haley wrote:
> > [...]
> > > I read this the other way - the goal is to get all the forked code from
> > > StarlingX into upstream repos.  That seems backwards from how this should
> > > have been done (i.e. upstream first), and I don't see how a project would
> > > prioritize that over other work.
> > [...]
> > 
> > I have yet to see anyone suggest it should be prioritized over other
> > work. I expect the extracted and proposed changes/specs
> > corresponding to the divergence would be viewed on their own merits
> > just like any other change and ignored, reviewed, rejected, et
> > cetera as appropriate.
> 
> Even doing that is work - going through changes, finding nuggets,
> proposing new specs... I don't think we can expect a project to
> even go there, it has to be driven by someone already involved in
> StarlingX, IMHO.

I gather that's the proposal at hand. The StarlingX development team
would do the work to write specs for these feature additions,
propose them through the usual processes, then start extracting the
relevant parts of their "technical debt" corresponding to any specs
which get approved and propose patches to those services for review.
If they don't, then I agree this will go nowhere.
-- 
Jeremy Stanley




Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Brian Haley

On 05/23/2018 02:00 PM, Jeremy Stanley wrote:

> On 2018-05-22 17:41:18 -0400 (-0400), Brian Haley wrote:
> [...]
>
>> I read this the other way - the goal is to get all the forked code from
>> StarlingX into upstream repos.  That seems backwards from how this should
>> have been done (i.e. upstream first), and I don't see how a project would
>> prioritize that over other work.
>
> [...]
>
> I have yet to see anyone suggest it should be prioritized over other
> work. I expect the extracted and proposed changes/specs
> corresponding to the divergence would be viewed on their own merits
> just like any other change and ignored, reviewed, rejected, et
> cetera as appropriate.


Even doing that is work - going through changes, finding nuggets,
proposing new specs... I don't think we can expect a project to even go
there, it has to be driven by someone already involved in StarlingX, IMHO.


-Brian



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Colleen Murphy


On Wed, May 23, 2018, at 8:07 PM, Dean Troyer wrote:
> On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy  wrote:
> > It's also important to make the distinction between hosting something on 
> > openstack.org infrastructure and recognizing it in an official capacity. 
> > StarlingX is seeking both, but in my opinion the code hosting is not the 
> > problem here.
> 
> StarlingX is an OpenStack Foundation Edge focus area project and is
> seeking to use the CI infrastructure.  There may be a project or two
> contained within that may make sense as OpenStack projects in the
> not-called-big-tent-anymore sense but that is not on the table, there
> is a lot of work to digest before we could even consider that.  Is
> that the official capacity you are talking about?

I was talking about it being recognized by the OpenStack Foundation as part of 
one of its strategic focus areas. I understand StarlingX isn't seeking official 
recognition within the OpenStack project under the TC's governance.

Colleen



[openstack-dev] [edge][glance]: Wiki of the possible architectures for image synchronisation

2018-05-23 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

Here I send the wiki page [1] where I summarize what I understood from the
Forum session about image synchronisation in an edge environment [2], [3].

Please check and correct/comment.

Thanks,
Gerg0


[1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment
[2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images
[3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Dean Troyer
On Wed, May 23, 2018 at 1:24 PM, Matt Riedemann  wrote:
> Rather than literally making this a priority, I expect the politics and
> pressure of competition with a fork in another foundation are driving the
> defensiveness about feeling pressured to prioritize review on whatever
> specs/patches are proposed as a result of the code dump.

David Letterman used to say "This is not a competition, it is just an
exhibition.  No wagering!" about Stupid Pet Tricks.

The feeling that this is a competition is one aspect that I want to
help ease if I can.  Once we have the list of individual
upstream-desired changes we can talk about priorities (we do have a
priority list internally) and desirability.

The targeted use cases for StarlingX/Titanium have requirements that do
not fit other use cases or may not be widely useful.  We need to
figure out how to handle those in the long term.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Dean Troyer
On Wed, May 23, 2018 at 12:58 PM, Julia Kreger
 wrote:
> There is definitely value to be gained for both projects in terms of a
> different point of view that might not have been able to play out in

Ironic is a bit different in this regard to the released code since
there _is_ overlap with the STX Bare Metal service.  There are also
non-overlapping aspects to it.  I would like to talk with you and the
Ironic team at some point about scope and goals for the long term.

> the public community, but since we're dealing with squashed commits of
> changes, it is really hard for us to delineate the history/origin of code
> fragments, and without that it is nearly impossible for projects to even
> help them reconcile their technical debt, given the lack of surrounding
> context. It would be so much more friendly to the community if we had
> stacks of patch files that we could work with in git.

Unfortunately it was a requirement to not release the history.  There
are some bits that we were not allowed to release (for legal reasons,
not open core reasons) that are present in the history.  And yes it is
in most cases unusable to do anything more than browse for pulling
things upstream.

What I did manage to get was permission to publish the individual
commits on top of the upstream base that do not run afoul of the legal
issues.  Given that this is all against Pike and we need to propose to
master first, they are not likely directly usable but the information
needed for the upstream work will be available.  These have not been
cleaned up yet but I plan to add them directly to the repos containing
the squashes as they are done.

> Can I add myself to the list of confused people wanting to understand
> better? I can see and understand value, but context and understanding
> as to the why, as I mentioned above, is going to be the main limiter for
> interaction.

I have heard multiple reasons why this has been done, this is one area
I am not going to go into detail about other than the stuff that has
been cleared and released.  Understanding (some) business decisions
is not one of my strengths.

I will say that my opinion from working with WRS for a few months is
they do truly want to form a community around StarlingX and will be
moving their ongoing Titanium development there.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Matt Riedemann

On 5/23/2018 11:00 AM, Jeremy Stanley wrote:

> I have yet to see anyone suggest it should be prioritized over other
> work. I expect the extracted and proposed changes/specs
> corresponding to the divergence would be viewed on their own merits
> just like any other change and ignored, reviewed, rejected, et
> cetera as appropriate.


Rather than literally making this a priority, I expect the politics and
pressure of competition with a fork in another foundation are driving the
defensiveness about feeling pressured to prioritize review on whatever
specs/patches are proposed as a result of the code dump.


--

Thanks,

Matt



Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Sagi Shnaidman
> to reduce the impact of a change. From my original reply:
>
> > If there's a high maintenance cost, we haven't properly identified the
> > optimal way to separate functionality between tripleo/quickstart.
>
> IMHO this is a side effect of having a whole bunch of roles in a
> single repo.  oooq-extras has a mix of tripleo and non-tripleo related
> content. The reproducer IMHO is related to provisioning and could fall
> in the oooq repo and not oooq-extras.  This is a structure problem
> with quickstart.  If it's not version specific, then don't put it in a
> version specific repo. But that doesn't mean don't use version
> specific repos at all.
>
> This is one of the reasons why we're opting not to use this pattern of
> a bunch of roles in a single repo for tripleo itself[0][1][2].  We
> learned with the puppet modules that carrying all this stuff in a
> single repo has a huge maintenance cost and if you split them out you
> can identify re-usability and establish proper patterns for moving
> functionality into a shared place[3].  Yes there is a maintenance cost
> of maintaining independent repos, but at the same time there's a
> benefit of re-usability by other projects/groups when you expose
> important pieces of functionality as a standalone. You can establish
> clear ways to interact with each piece, test items, and release
> independently.  For example the ansible-role-container-registry is not
> tripleo specific and anyone looking to manage a standalone docker
> registry can use it & contribute.
>
>
We were moving between having all roles in one repo and having a separate
repo for each role a few times. Each case has its advantages and
disadvantages. Last time we moved to have roles in 2 repos - quickstart and
extras, it was a year ago I think. So far IMHO it's the best approach.
There will be a mechanism to install additional roles, like we have for
tripleo-upgrade, ops-tools, etc etc.
It may be a much broader topic to discuss, although I think having some
roles branched and some not branched is much more of a headache.
Tripleo-upgrade is a good example of it.


> > So in 90% of the code we DO need to backport every change; take for
> > example the latest patch to extras: https://review.openstack.org/#/c/570167/,
> > it's fixing the reproducer. If oooq-extras were branched, we would need to
> > backport this fix to each and every branch. And the same for all the other
> > 90% of code, which is complete nonsense.
> > Just to avoid the "{% if release %}" construct, should we block the whole
> > work of the CI team and make the CI code absolutely unmaintainable?
> >
>
> And you're saying what we currently have is maintainable?  We keep
> breaking ourselves, there are big gaps in coverage and it takes
> time[4][5] to identify breakages. I don't consider that maintainable;
> this is a recurring topic because we clearly haven't fixed it
> with the current setup.  It's time to re-evaluate what we have and see
> if there's room for improvement.  I know I wasn't proposing to branch
> all the repositories, but it might make sense to figure out if there's
> a way to reduce our recurring issues with stable branches or
> independent modules for some of the functions in CI.
>

> Considering this is how we broke Queens, I'm not sure I agree.
First of all I don't see any connection between maintenance and CI
breakages; they're different topics. And yes, it IS maintainable CI that we
have now, and I have plenty to compare it with. I remember very well the
tripleo.sh based approach, and you can also see the almost green dashboards
lately, which proves my statement. CI is not ideal now, but it's definitely
much better than 1-2 years ago.

Of course we have breakages; the CI is actually a history of breakages and
fixes, like any other product. Wrt the queens issue, it took about a week to
solve not because it was so hard, but because we had a few very difficult
weeks trying to fix all the CentOS 7.5 issues, and the queens branch was
second priority. And by the way, we fixed everything much faster than we did
with CentOS 7.4.  Having the negative attitude that every CI breakage is
proof of a wrong CI structure is not correct and doesn't help. Even if
branching had helped in this case, it would have created much bigger
problems in all other cases.

Anyway, we saw that having branch jobs in OVB only didn't catch the queens
issue (why - you know better) so we added multinode branch-specific ones,
which will catch such issues in the future. We hit the problem, solved it,
set up preventive actions and are ready to catch it next time. This is a
normal CI workflow and I don't see any problem with it. Having multinode
branch jobs is actually pretty similar to "branching" repos but without
the maintenance nightmare.

Thanks

> Thanks,
> -Alex
>
> [0] http://git.openstack.org/cgit/openstack/ansible-role-container-registry/
> [1] http://git.openstack.org/cgit/openstack/ansible-role-redhat-subscription/
> [2] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-keystone/
> [3] 

Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Dean Troyer
On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy  wrote:
> It's also important to make the distinction between hosting something on 
> openstack.org infrastructure and recognizing it in an official capacity. 
> StarlingX is seeking both, but in my opinion the code hosting is not the 
> problem here.

StarlingX is an OpenStack Foundation Edge focus area project and is
seeking to use the CI infrastructure.  There may be a project or two
contained within that may make sense as OpenStack projects in the
not-called-big-tent-anymore sense but that is not on the table, there
is a lot of work to digest before we could even consider that.  Is
that the official capacity you are talking about?

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Jeremy Stanley
On 2018-05-23 13:48:56 -0400 (-0400), Jay Pipes wrote:
[...]
> I believe you may be confusing packages (or package specs) with
> distributions?
> 
> Mirantis OpenStack was never hosted on an openstack
> infrastructure. Fuel is, as are deb spec files and Puppet
> manifests, etc. But the distribution of OpenStack is the
> collection of all those specs/build files along with a default
> configuration and things like project deltas exposed as patch
> files. Same goes for RDO, Canonical OpenStack, etc.
[...]

The Debian OpenStack packaging effort, when we were hosting it (the
maintainers eventually decided for the sake of consistency to move
it back into Debian's collaborative hosting instead), was in fact
done as forked copies of the Git repositories of official OpenStack
deliverables. Patch series and Git forks can be converted back and
forth, at some cost to developer efficiency, but ultimately are an
implementation detail.
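
As a rough sketch of that round trip (the ref and directory names are
assumptions for the example, not the actual Debian packaging layout):

import glob
import subprocess

# Fork -> patch series: one mailbox-format patch per commit carried on
# top of the upstream base.
subprocess.run(
    ["git", "format-patch", "--output-directory", "patches",
     "upstream/master..packaging-fork"],
    check=True,
)

# Patch series -> fork: replay the series onto a clean upstream checkout.
subprocess.run(
    ["git", "checkout", "-b", "rebuilt-fork", "upstream/master"],
    check=True,
)
subprocess.run(
    ["git", "am", *sorted(glob.glob("patches/*.patch"))],
    check=True,
)
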
-- 
Jeremy Stanley




Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Jeremy Stanley
On 2018-05-22 17:41:18 -0400 (-0400), Brian Haley wrote:
[...]
> I read this the other way - the goal is to get all the forked code from
> StarlingX into upstream repos.  That seems backwards from how this should
> have been done (i.e. upstream first), and I don't see how a project would
> prioritize that over other work.
[...]

I have yet to see anyone suggest it should be prioritized over other
work. I expect the extracted and proposed changes/specs
corresponding to the divergence would be viewed on their own merits
just like any other change and ignored, reviewed, rejected, et
cetera as appropriate.
-- 
Jeremy Stanley




[openstack-dev] [neutron] Failing fullstack and ovsfw jobs

2018-05-23 Thread Slawomir Kaplonski
Hi,

Yesterday we had an issue [1] with compiling the openvswitch kernel module
during fullstack and ovsfw scenario jobs.
This is now fixed by [2], so if you have a patch and those jobs are failing
for you, please rebase it to include this fix and it should work fine.

[1] https://bugs.launchpad.net/neutron/+bug/1772689
[2] https://review.openstack.org/#/c/570085/

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Julia Kreger
On Tue, May 22, 2018 at 5:41 PM, Brian Haley  wrote:
> On 05/22/2018 04:57 PM, Jay Pipes wrote:
[trim]

> I read this the other way - the goal is to get all the forked code from
> StarlingX into upstream repos.  That seems backwards from how this should
> have been done (i.e. upstream first), and I don't see how a project would
> prioritize that over other work.

There is definitely value to be gained for both projects in terms of a
different point of view that might not have been able to play out in
the public community, but since we're dealing with squashed commits of
changes, it is really hard for us to delineate the history/origin of code
fragments, and without that it is nearly impossible for projects to even
help them reconcile their technical debt, given the lack of surrounding
context. It would be so much more friendly to the community if we had
stacks of patch files that we could work with in git.

>> I'm truly wondering why this was even open-sourced to begin with? I'm as
>> big a supporter of open source as anyone, but I'm really struggling to
>> comprehend the business, technical, or marketing decisions behind this
>> action. Please help me understand. What am I missing?
>
>
> I'm just as confused.

Can I add myself to the list of confused people wanting to understand
better? I can see and understand value, but context and understanding
as to the why, as I mentioned above, is going to be the main limiter for
interaction.



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Jay Pipes

On 05/23/2018 12:49 PM, Colleen Murphy wrote:

> On Tue, May 22, 2018, at 10:57 PM, Jay Pipes wrote:
>
>> Are any of the distributions of OpenStack listed at
>> https://www.openstack.org/marketplace/distros/ hosted on openstack.org
>> infrastructure? No. And I think that is completely appropriate.
>
> Hang on, that's not quite true. From that list I see Mirantis, Debian, Ubuntu,
> and RedHat, who all have (or had until recently) significant parts of their
> distros hosted on openstack.org infrastructure and are/were even official
> OpenStack projects governed by the TC.


I believe you may be confusing packages (or package specs) with 
distributions?


Mirantis OpenStack was never hosted on an openstack infrastructure. Fuel 
is, as are deb spec files and Puppet manifests, etc. But the 
distribution of OpenStack is the collection of all those specs/build 
files along with a default configuration and things like project deltas 
exposed as patch files. Same goes for RDO, Canonical OpenStack, etc.



> It's also important to make the distinction between hosting something on
> openstack.org infrastructure and recognizing it in an official capacity.
> StarlingX is seeking both, but in my opinion the code hosting is not the
> problem here.


Yep, you're absolutely right that there is a distinction between hosting 
and consuming the foundation's resources and recognizing StarlingX in 
some official capacity. I'm concerned about both items.


My concern with the former item is that I believe this is setting a 
precedent that the foundation's resources are being used to host a 
particular OpenStack distribution -- which is something I don't believe 
should happen. Vendor products/distributions [1] should be supported by 
that vendor, IMHO. [2]


My concern with the latter item is more an annoyance with what I see as 
Intel / Wind River playing the Linux Foundation against the OpenStack 
foundation to see which will bear the burden of supporting code that I 
feel is being dumped on the upstream community. I fully understand that 
Dean has been put into a very awkward situation with all of this, and I 
want to be clear that I mean no disrespect towards any Intel or Wind 
River engineer/contributor. My gripe is with the business/management 
decisions that led to this. Dean was very gracious in answering a number 
of my questions on the etherpad linked in the original post. Thank you 
to Dean for being gracious under fire.


Finally, I'd like to say that I did read the long discussion thread the 
TC had about this [3]. A number of the TC folks brought up interesting 
points about the subject at hand, and I recognize there's a bit of a 
damned-if-we-do-damned-if-we-don't situation. Jeremy pointed out concern 
about the optics of having the Linux Foundation hosting a fork of 
OpenStack and how bad that would look. A number of folks, including 
Jeremy, also brought up the potential renaming of the OpenStack 
Foundation to the Open Infrastructure Foundation and what such a rename 
might do to ease concerns over things like Airship and StarlingX. I 
don't personally feel a rename would ease much of the discontent, but 
I'm also clearly biased and recognize that I am so.


One point that I brought up on the etherpad was whether folks have 
considered an "edge constellation" instead of a fork of OpenStack? In 
other words, the edge constellation would be a description of an 
opinionated build of OpenStack (and other supporting services) that 
would be focused on the mobile/edge cloud use cases, but there would not 
be a fork of OpenStack. Anyway, I think it's worth considering at least; 
it's a sticky and awkward situation, for sure.


Best,
-jay

[1] Yes, even if that vendor has now chosen a different strategy of open 
sourcing their code versus keeping it proprietary


[2] For the record, I believe it was a mistake to put Mirantis' Fuel 
product (and let's face it, Fuel was a product of Mirantis) under
openstack.org's hosting.


[3] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-20.log.html




Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Jeremy Stanley
On 2018-05-23 18:49:16 +0200 (+0200), Colleen Murphy wrote:
[...]
> It's also important to make the distinction between hosting
> something on openstack.org infrastructure and recognizing it in an
> official capacity. StarlingX is seeking both, but in my opinion
> the code hosting is not the problem here.

This may also be a poor time to mention that there have been
discussions within the Infra team for over a year about renaming the
infrastructure we're managing, since it's done in service of more
than just the OpenStack project. The hardest part has been coming up
with a good name. ;)
-- 
Jeremy Stanley




Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Alex Schultz
On Wed, May 23, 2018 at 10:49 AM, Sagi Shnaidman  wrote:
> Alex,
>
> the problem is that you're working on and focusing mostly on release-specific
> code like featuresets and some scripts. But tripleo-quickstart(-extras) and
> tripleo-ci are much, *much* more than a set of featuresets. Only 10% of the
> code may be related to releases and branches, while the other 90% is
> completely independent and not related to releases.
>

It is not necessarily about release specific code, it's about being
able to reduce the impact of a change. From my original reply:

> If there's a high maintenance cost, we haven't properly identified the 
> optimal way to separate functionality between tripleo/quickstart.

IMHO this is a side effect of having a whole bunch of roles in a
single repo.  oooq-extras has a mix of tripleo and non-tripleo related
content. The reproducer IMHO is related to provisioning and could fall
in the oooq repo and not oooq-extras.  This is a structure problem
with quickstart.  If it's not version specific, then don't put it in a
version specific repo. But that doesn't mean don't use version
specific repos at all.

This is one of the reasons why we're opting not to use this pattern of
a bunch of roles in a single repo for tripleo itself[0][1][2].  We
learned with the puppet modules that carrying all this stuff in a
single repo has a huge maintenance cost and if you split them out you
can identify re-usability and establish proper patterns for moving
functionality into a shared place[3].  Yes there is a maintenance cost
of maintaining independent repos, but at the same time there's a
benefit of re-usability by other projects/groups when you expose
important pieces of functionality as a standalone. You can establish
clear ways to interact with each piece, test items, and release
independently.  For example the ansible-role-container-registry is not
tripleo specific and anyone looking to manage a standalone docker
registry can use it & contribute.

> So in 90% of the code we DO need to backport every change; take for
> example the latest patch to extras: https://review.openstack.org/#/c/570167/,
> it's fixing the reproducer. If oooq-extras were branched, we would need to
> backport this fix to each and every branch. And the same for all the other
> 90% of code, which is complete nonsense.
> Just to avoid the "{% if release %}" construct, should we block the whole
> work of the CI team and make the CI code absolutely unmaintainable?
>

And you're saying what we currently have is maintainable?  We keep
breaking ourselves, there are big gaps in coverage and it takes
time[4][5] to identify breakages. I don't consider that maintainable;
this is a recurring topic because we clearly haven't fixed it
with the current setup.  It's time to re-evaluate what we have and see
if there's room for improvement.  I know I wasn't proposing to branch
all the repositories, but it might make sense to figure out if there's
a way to reduce our recurring issues with stable branches or
independent modules for some of the functions in CI.

> Some of the release-related templates we moved recently from tripleo-ci to
> the THT repo, like scenarios, OC templates, etc. If we discover other things
> in oooq that could be moved to branched THT, I'd only be happy about that.
>
> Sometimes it can be hard to maintain one file in the extras templates with
> different logic for releases, like we have in the tempest configuration for
> example. The solution is to create a few release-related templates and use
> the one that matches the current branch. It doesn't affect 90% of the code
> and is still a "branch-like" approach. But I didn't see other scripts that
> are so release dependent. If we find more, we could do the same. For now I
> see the "{% if release %}" construct working very well.

Considering this is how we broke Queens, I'm not sure I agree.

>
> I still don't see any advantage to branching the CI code, except slightly
> nicer jinja templates without "{% if release %}", but the number of
> disadvantages is so huge that it would literally block all current work in CI.
>

It's about reducing our risk with test coverage. We do not properly
test all jobs and all configurations when we make these changes. This
is a repeated problem: when we have to add version-specific logic,
unless we're able to identify what it actually impacts and verify that
with jobs, we risk breaking ourselves.  We've seen
that code review is not sufficient for these changes, as we merge
things and only find out after they've been merged that we broke
stable branches. Then it takes folks tracking down changes to decipher
what we broke. For example the original patch[4] broke Queens for
about a week.  That's 7 days during which nothing could be merged;
that's not OK.

Thanks,
-Alex

[0] http://git.openstack.org/cgit/openstack/ansible-role-container-registry/
[1] http://git.openstack.org/cgit/openstack/ansible-role-redhat-subscription/
[2] 

Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Matt Riedemann

On 5/23/2018 9:49 AM, Colleen Murphy wrote:

> Hang on, that's not quite true. From that list I see Mirantis, Debian, Ubuntu,
> and RedHat, who all have (or had until recently) significant parts of their
> distros hosted on openstack.org infrastructure and are/were even official
> OpenStack projects governed by the TC.


But isn't that primarily deployment tooling (Fuel, Charms, TripleO) 
rather than forks of other existing service projects like 
nova/cinder/ironic?


--

Thanks,

Matt



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-23 Thread Colleen Murphy
On Tue, May 22, 2018, at 10:57 PM, Jay Pipes wrote:
> 
> Are any of the distributions of OpenStack listed at 
> https://www.openstack.org/marketplace/distros/ hosted on openstack.org 
> infrastructure? No. And I think that is completely appropriate.

Hang on, that's not quite true. From that list I see Mirantis, Debian, Ubuntu, 
and RedHat, who all have (or had until recently) significant parts of their 
distros hosted on openstack.org infrastructure and are/were even official 
OpenStack projects governed by the TC.

It's also important to make the distinction between hosting something on 
openstack.org infrastructure and recognizing it in an official capacity. 
StarlingX is seeking both, but in my opinion the code hosting is not the 
problem here.

Colleen



Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Sagi Shnaidman
Alex,

the problem is that you're working on and focusing mostly on release-specific
code like featuresets and some scripts. But tripleo-quickstart(-extras) and
tripleo-ci are much, *much* more than a set of featuresets. Only 10% of the
code may be related to releases and branches, while the other 90% is
completely independent and not related to releases.

So in 90% of the code we DO need to backport every change; take for
example the latest patch to extras: https://review.openstack.org/#/c/570167/,
it's fixing the reproducer. If oooq-extras were branched, we would need to
backport this fix to each and every branch. And the same for all the other
90% of code, which is complete nonsense.
Just to avoid the "{% if release %}" construct, should we block the whole
work of the CI team and make the CI code absolutely unmaintainable?

Some of the release-related templates we moved recently from tripleo-ci to
the THT repo, like scenarios, OC templates, etc. If we discover other things
in oooq that could be moved to branched THT, I'd only be happy about that.

Sometimes it can be hard to maintain one file in the extras templates with
different logic for releases, like we have in the tempest configuration for
example. The solution is to create a few release-related templates and use
the one that matches the current branch. It doesn't affect 90% of the code
and is still a "branch-like" approach. But I didn't see other scripts that
are so release dependent. If we find more, we could do the same. For now I
see the "{% if release %}" construct working very well.

I still don't see any advantage to branching the CI code, except slightly
nicer jinja templates without "{% if release %}", but the number of
disadvantages is so huge that it would literally block all current work in CI.

Thanks



On Wed, May 23, 2018 at 7:04 PM, Alex Schultz  wrote:

> On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman 
> wrote:
> > Hi, Sergii
> >
> > thanks for the question. It's not the first time this topic has been
> > raised, and at first glance it could seem that branching would help with
> > that sort of issue.
> >
> > Although it's not the case. Tripleo-quickstart(-extras) is part of the CI
> > code, as is the tripleo-ci repo, which has never been branched. The reason
> > for that is the relatively small impact of product branching on CI code.
> > Think about backporting almost *every* patch to oooq and extras to all
> > supported branches, down to newton at least. This would be a really *huge*
> > price and an unreasonable amount of work. Just think about actively
> > maintaining 3-4 versions of CI code in each of 3 repositories. It would
> > take all of the CI team's time with almost zero value from this work.
> >
>
> So I'm not sure I completely agree with this assessment as there is a
> price paid for every {%if release in [...]%} that we have to carry in
> oooq{,-extras}.  These go away if we branch because we don't have to
> worry about breaking previous releases or current release (which may
> or may not actually have CI results).
>
> > As for the patch you listed, we would have to backport this change to
> > *every* branch, and it wouldn't really help to avoid the issue. The source
> > of the problem here is not the branchless repo.
> >
>
> No we shouldn't be backporting every change.  The logic in oooq-extras
> should be version specific and if we're changing an interface in
> tripleo in a breaking fashion we're doing it wrong in tripleo. If
> we're backporting things to work around tripleo issues, we're doing it
> wrong in quickstart.
>
> > Regarding catching such issues and Bogdan's point, that's right, we added
> > a few jobs to catch such issues in the future and prevent breakages, and a
> > few running jobs is a reasonable price to keep the configuration working
> > in all branches. Compared to the maintenance nightmare of branched CI
> > code, it's really a *zero* price.
> >
>
> Nothing is free. If there's a high maintenance cost, we haven't
> properly identified the optimal way to separate functionality between
> tripleo/quickstart.  I have repeatedly said that the provisioning
> parts of quickstart should be separate because those aren't tied to a
> tripleo version and this along with the scenario configs should be the
> only unbranched repo we have. Any roles related to how to
> configure/work with tripleo should be branched and tied to a stable
> branch of tripleo. This would actually be beneficial for tripleo as
> well because then we can see when we are introducing backwards
> incompatible changes.
>
> Thanks,
> -Alex
>
> > Thanks
> >
> >
> > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk 
> > wrote:
> >>
> >> Hi,
> >>
> >> Looking at [1], I am thinking about the price we paid for not
> >> branching tripleo-quickstart. Can we discuss the options to prevent
> >> the issues such as [1]? Thank you in advance.
> >>
> >> [1] https://review.openstack.org/#/c/569830/4
> >>
> >> --
> >> Best Regards,
> >> Sergii Golovatiuk
> >>
> >> 

Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Alex Schultz
On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman  wrote:
> Hi, Sergii
>
> thanks for the question. It's not the first time this topic has been raised,
> and at first glance it could seem that branching would help with that sort
> of issue.
>
> Although it's not the case. Tripleo-quickstart(-extras) is part of the CI
> code, as is the tripleo-ci repo, which has never been branched. The reason
> for that is the relatively small impact of product branching on CI code.
> Think about backporting almost *every* patch to oooq and extras to all
> supported branches, down to newton at least. This would be a really *huge*
> price and an unreasonable amount of work. Just think about actively
> maintaining 3-4 versions of CI code in each of 3 repositories. It would take
> all of the CI team's time with almost zero value from this work.
>

So I'm not sure I completely agree with this assessment as there is a
price paid for every {%if release in [...]%} that we have to carry in
oooq{,-extras}.  These go away if we branch because we don't have to
worry about breaking previous releases or current release (which may
or may not actually have CI results).

> As for the patch you listed, we would have to backport this change to
> *every* branch, and it wouldn't really help to avoid the issue. The source
> of the problem here is not the branchless repo.
>

No we shouldn't be backporting every change.  The logic in oooq-extras
should be version specific and if we're changing an interface in
tripleo in a breaking fashion we're doing it wrong in tripleo. If
we're backporting things to work around tripleo issues, we're doing it
wrong in quickstart.

> Regarding catching such issues and Bogdan's point, that's right, we added a
> few jobs to catch such issues in the future and prevent breakages, and a few
> running jobs is a reasonable price to keep the configuration working in all
> branches. Compared to the maintenance nightmare of branched CI code, it's
> really a *zero* price.
>

Nothing is free. If there's a high maintenance cost, we haven't
properly identified the optimal way to separate functionality between
tripleo/quickstart.  I have repeatedly said that the provisioning
parts of quickstart should be separate because those aren't tied to a
tripleo version and this along with the scenario configs should be the
only unbranched repo we have. Any roles related to how to
configure/work with tripleo should be branched and tied to a stable
branch of tripleo. This would actually be beneficial for tripleo as
well because then we can see when we are introducing backwards
incompatible changes.

Thanks,
-Alex

> Thanks
>
>
> On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk 
> wrote:
>>
>> Hi,
>>
>> Looking at [1], I am thinking about the price we paid for not
>> branching tripleo-quickstart. Can we discuss the options to prevent
>> the issues such as [1]? Thank you in advance.
>>
>> [1] https://review.openstack.org/#/c/569830/4
>>
>> --
>> Best Regards,
>> Sergii Golovatiuk
>>
>
>
>
>
> --
> Best regards
> Sagi Shnaidman
>



[openstack-dev] [Cinder] no meeting today

2018-05-23 Thread Jay Bryant
Just a reminder that there is no meeting today because of the summit.

Jay


Re: [openstack-dev] Proposing Mark Goddard to ironic-core

2018-05-23 Thread Ruby Loo
++. Great suggestion!

--ruby

On Sun, May 20, 2018 at 10:45 AM, Julia Kreger 
wrote:

> Greetings everyone!
>
> I would like to propose Mark Goddard to ironic-core. I am aware he
> recently joined kolla-core, but his contributions in ironic have been
> insightful and valuable. The kind of value that comes from operative use.
>
> I also make this nomination knowing that our community landscape is
> changing and that we must not silo our team responsibilities or ability to
> move things forward to a small, highly focused team. I trust Mark to use his
> judgement as he has time or need to do so. He might not always have time,
> but I think at the end of the day, we’re all in that same boat.
>
> -Julia
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Sagi Shnaidman
Hi, Sergii

thanks for the question. It's not the first time this topic has been raised,
and at first glance it could seem that branching would help with that sort of
issue.

However, that's not the case. Tripleo-quickstart(-extras) is part of the CI
code, as is the tripleo-ci repo, and these have never been branched. The
reason is the relatively small impact that product branching has on CI code.
Think about backporting almost *every* patch to oooq and extras to all
supported branches, down to Newton at least. That would be a really *huge*
price and an unreasonable amount of work. Just think about actively
maintaining 3-4 versions of CI code in each of 3 repositories. It would take
all of the CI team's time while delivering almost zero value.

As for the patch you listed, we would have had to backport that change to
*every* branch, and it wouldn't really have helped avoid the issue. The
branchless repo is not the source of the problem here.

Regarding catching such issues, and Bogdan's point: that's right, we added a
few jobs to catch these issues in the future and prevent breakages, and a few
running jobs is a reasonable price to keep the configuration working on all
branches. Compared to the maintenance nightmare of branched CI code, it's
really a *zero* price.

Thanks


On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk 
wrote:

> Hi,
>
> Looking at [1], I am thinking about the price we paid for not
> branching tripleo-quickstart. Can we discuss the options to prevent
> the issues such as [1]? Thank you in advance.
>
> [1] https://review.openstack.org/#/c/569830/4
>
> --
> Best Regards,
> Sergii Golovatiuk
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Reusing Cinder drivers' code directly without running Cinder: In our applications, from Ansible, and for Containers

2018-05-23 Thread Gorka Eguileor
Hi,

During the last OpenStack PTG, I announced in the Cinder room the
development of cinderlib, and explained how this library allowed any
Python application to use Cinder storage drivers (there are over 80)
without running any services.

This takes the standalone effort one step further. Now you don't need to
run any Cinder services (API, Scheduler, Volume), RabbitMQ, or even a
DB, to manage and attach volumes and snapshots.
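
As a rough illustration, this is what using it looks like -- a minimal
sketch adapted from the cinderlib README; the LVM/iSCSI driver options
are only an example, and the exact parameters depend entirely on your
backend:

    import cinderlib as cl

    # Initialize a backend with the same driver options a cinder.conf
    # section would carry; LVM/iSCSI is used here purely as an example.
    lvm = cl.Backend(
        volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
        volume_group='cinder-volumes',
        iscsi_protocol='iscsi',
        iscsi_helper='lioadm',
        volume_backend_name='lvm_iscsi',
    )

    # Create and attach a 1 GB volume -- no API, Scheduler, or Volume
    # service, no RabbitMQ, and no DB involved.
    vol = lvm.create_volume(size=1)
    attach = vol.attach()
    print('Volume %s attached at %s' % (vol.id, attach.path))

    # Clean up.
    vol.detach()
    vol.delete()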

Even though we don't need a DB, we still need to persist the metadata;
the library supports JSON serialization, so we can save it wherever we
want. I'm also finishing a metadata persistence plugin mechanism to
allow external plugins for different storage solutions (DBs, K8s CRDs,
key-value systems...).
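
As a sketch of how the serialization flow can look (the attribute and
function names follow the current cinderlib docs, but this is an early
release, so treat them as subject to change):

    import json
    import cinderlib as cl

    # Continuing from the sketch above (the backend must be initialized
    # in whichever process does the loading, so the volume can find its
    # driver). Serialize the volume's metadata and store it anywhere:
    # a file, a DB row, a K8s CRD, a key-value store...
    with open('/tmp/demo-volume.json', 'w') as f:
        json.dump(vol.json, f)    # vol.json is a JSON-serializable dict

    # Later, possibly from a different process: restore the object and
    # keep working with it as before.
    with open('/tmp/demo-volume.json') as f:
        vol = cl.load(json.load(f))
    vol.attach()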

This library opens a broad range of possibilities for the Cinder
drivers, and I have explored a couple of them: using it from Ansible,
and using it in containers with a CSI driver that includes the latest
features, including the snapshot support introduced last week.

The projects' documentation is lacking, but I've written a couple of
blog posts with a brief introduction to these POCs for anybody who is
interested:

- Cinderlib: https://gorka.eguileor.com/cinderlib
- Ansible storage role: https://gorka.eguileor.com/ansible-role-storage
- Cinderlib-CSI: https://gorka.eguileor.com/cinderlib-csi

And the repositories can be found in GitHub:

- Cinderlib: https://github.com/akrog/cinderlib
- Ansible storage role: https://github.com/akrog/ansible-role-storage
- Cinderlib-CSI: https://github.com/akrog/cinderlib-csi

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposing Mark Goddard to ironic-core

2018-05-23 Thread Dmitry Tantsur

On 05/20/2018 04:45 PM, Julia Kreger wrote:

Greetings everyone!

I would like to propose Mark Goddard to ironic-core. I am aware he recently 
joined kolla-core, but his contributions in ironic have been insightful and 
valuable. The kind of value that comes from operative use.


I also make this nomination knowing that our community landscape is changing and 
that we must not silo our team responsibilities or ability to move things 
forward to small highly focused team. I trust Mark to use his judgement as he 
has time or need to do so. He might not always have time, but I think at the end 
of the day, we’re all in that same boat.


I'm not sure I understand the first sentence, but I'm fully in support of adding 
Mark anyway.




-Julia



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Bogdan Dobrelya

On 5/23/18 2:43 PM, Sergii Golovatiuk wrote:

Hi,

Looking at [1], I am thinking about the price we paid for not
branching tripleo-quickstart. Can we discuss the options to prevent
the issues such as [1]? Thank you in advance.

[1] https://review.openstack.org/#/c/569830/4



That was only half of the full price, actually. See also the additional 
multinode containers check/gate jobs [0], [1], which from now on are 
executed against the master branches of all tripleo repos (IIUC) for 
the releases one and two behind master.

[0] https://review.openstack.org/#/c/569932/
[1] https://review.openstack.org/#/c/569854/


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Sergii Golovatiuk
Hi,

Looking at [1], I am thinking about the price we paid for not
branching tripleo-quickstart. Can we discuss the options to prevent
the issues such as [1]? Thank you in advance.

[1] https://review.openstack.org/#/c/569830/4

-- 
Best Regards,
Sergii Golovatiuk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg] no meeting today

2018-05-23 Thread Zhipeng Huang
Enjoy the water view folks :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] no meeting this week

2018-05-23 Thread Rico Lin
Hi all

As the OpenStack summit is happening this week, let’s skip the Heat meeting today.
-- 
May The Force of OpenStack Be With You,

*Rico Lin*
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev