Re: [openstack-dev] [kolla] Dropping core reviewer

2018-08-09 Thread Michał Jastrzębski
Hello Kollegues, Koalas and Koalines,

I feel I should do the same, as my work sadly doesn't involve Kolla,
or OpenStack for that matter, any more.

It has been a wonderful time; serving the Kolla community as core and
PTL is the achievement I'm most proud of, and I thank you all for giving
me this opportunity. We've built something great!

Cheers,
Michal
On Thu, 9 Aug 2018 at 08:55, Steven Dake (stdake)  wrote:
>
> Kollians,
>
>
> Thanks for the kind words.
>
>
> I do plan to stay involved in the OpenStack community - specifically
> targeting governance - and will definitely be around: IRC, mailing
> lists, summits, etc. :)
>
>
> Cheers
>
> -steve
>
>
> 
> From: Surya Singh 
> Sent: Wednesday, August 8, 2018 10:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Dropping core reviewer
>
> Words are not strong enough to appreciate your immense contribution and help
> in the OpenStack community.
> Projects like Kolla, Heat and Magnum are still rocking, and many more are to
> come from you in the future.
> Hope to see you around.
>
> Wishing you all the luck!
> -- Surya
>
> On Wed, Aug 8, 2018 at 6:15 PM Paul Bourke  wrote:
>>
>> +1. Will always have good memories of when Steve was getting the project
>> off the ground. Thanks Steve for doing a great job of building the
>> community around Kolla, and for all your help in general!
>>
>> Best of luck,
>> -Paul
>>
>> On 08/08/18 12:23, Eduardo Gonzalez wrote:
>> > Steve,
>> >
>> > It is sad to see you leaving the kolla core team; hope to still see you
>> > around on IRC and at Summits/PTGs.
>> >
>> > I truly appreciate your leadership, guidance and commitment to make
>> > kolla the great project it is now.
>> >
>> > Best of luck with your new projects and the board of directors.
>> >
>> > Regards
>> >
>> >
>> >
>> >
>> >
>> > 2018-08-07 16:28 GMT+02:00 Steven Dake (stdake) > > >:
>> >
>> > Kollians,
>> >
>> >
>> > Many of you that know me well know my feelings towards participating
>> > as a core reviewer in a project.  Folks with the ability to +2/+W
>> > gerrit changes can sometimes unintentionally harm a codebase if they
>> > are not consistently reviewing and maintaining codebase context.  I
>> > also believe in leading an exception-free life, and I'm no exception
>> > to my own rules.  As I am not actively reviewing Kolla, given my service
>> > as an individually elected member of the OpenStack board of directors and
>> > other responsibilities, I am dropping core reviewer ability for the Kolla
>> > repositories.
>> >
>> >
>> > I want to take a moment to thank the thousands of people that have
>> > contributed and shaped Kolla into the modern deployment system for
>> > OpenStack that it is today.  I personally find Kolla to be my finest
>> > body of work as a leader.  Kolla would not have been possible
>> > without the involvement of the OpenStack global community working
>> > together to resolve the operational pain points of OpenStack.  Thank
>> > you for your contributions.
>> >
>> >
>> > Finally, quoting Thierry [1] from our initial application to
>> > OpenStack, " ... Long live Kolla!"
>> >
>> >
>> > Cheers!
>> >
>> > -steve
>> >
>> >
>> > [1] https://review.openstack.org/#/c/206789/

Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Michał Jastrzębski
+1 from me:)

On Thu, May 31, 2018, 11:40 PM Martin André  wrote:

> If Steve wrote half of kolla-cli then it's a no-brainer to me. +1!
>
> On Thu, May 31, 2018 at 7:02 PM, Borne Mace  wrote:
> > Greetings all,
> >
> > I would like to propose the addition of Steve Noyes to the kolla-cli core
> > reviewer team.  Consider this nomination as my personal +1.
> >
> > Steve has a long history with the kolla-cli and should be considered its
> > co-creator, as probably half or more of the existing code is due to his
> > efforts.  He has been working diligently since it was pushed upstream to
> > improve the stability and testability of the cli, and has the second
> > most commits on the project.
> >
> > The kolla core team consists of 19 people, and the kolla-cli team of 2,
> > for a total of 21.  Steve therefore requires a minimum of 11 votes (so
> > just 10 more after my +1), with no veto -2 votes, within a 7 day voting
> > window ending on June 6th.  Voting will be closed immediately on a veto
> > or in the case of a unanimous vote.
> >
> > As I'm not sure how active all of the 19 kolla cores are, your attention
> > and timely vote are much appreciated.
> >
> > Thanks!
> >
> > -- Borne


Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-29 Thread Michał Jastrzębski
strong +1 from me! Great work Mark!

On 29 April 2018 at 03:16, Steven Dake (stdake)  wrote:
> +1
>
> From: Jeffrey Zhang 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Thursday, April 26, 2018 at 5:32 PM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard
> (mgoddard) as kolla core member
>
>
>
> Kolla core reviewer team,
>
> It is my pleasure to nominate mgoddard for the kolla core team.
>
> Mark has been working both upstream and downstream with kolla and
> kolla-ansible for over two years, building bare metal compute clouds with
> ironic for HPC. He's been involved with OpenStack since 2014. He started
> the kayobe deployment project, which complements kolla-ansible. He is
> also the most active non-core contributor for the last 90 days[1].
>
> Consider this nomination a +1 vote from me.
>
> A +1 vote indicates you are in favor of mgoddard as a candidate; a -1
> is a veto. Voting is open for 7 days, until May 4th, or until a unanimous
> response is reached or a veto vote occurs.
>
> [1] http://stackalytics.com/report/contribution/kolla-group/90
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me


Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-05 Thread Michał Jastrzębski
So I'll reiterate the comment I made in BCN. In a previous thread we
praised how Kolla provided a stable API for images, and I agree that it
was a great design choice (to provide a stable API, not necessarily how
the API looks), and this change would break it. So *if* we decide to do
it, we need to follow deprecation, which means we could deprecate these
files in this release and start removing them in the next.

Support for LOCI in kolla-ansible is a good thing, but I don't think
changing the Kolla image API is required for that. LOCI provides a base
image argument, so we could simply create a base image with all the
extended-start and set-config mechanisms and some shim to source the
extended-start script that belongs to a particular container. We will
need a kolla layer image anyway, because set_config is there to stay (as
Martin pointed out, it's a valuable tool fixing a real issue and it's
used by more projects than just kolla-ansible). We could add another
script that would look like extended_start.sh -> source
$CONTAINER_NAME-extended-start.sh and copy all of kolla's extended-start
scripts to a dir with proper naming (I believe this is the solution that
Sam came up with shortly after BCN). This is purely technical and not
that hard to do, much quicker and easier than deprecating the API...
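
A minimal sketch of what such a shim could look like (the env var name
and paths are hypothetical, not Kolla's actual layout):

    #!/bin/bash
    # Hypothetical dispatcher baked into the shared base image: each
    # service image ships its own extended-start script under a
    # well-known path, and the shim sources the one matching the
    # running container.
    CONTAINER_NAME="${CONTAINER_NAME:?CONTAINER_NAME must be set}"
    EXTENDED="/usr/local/bin/${CONTAINER_NAME}-extended-start.sh"

    if [[ -f "$EXTENDED" ]]; then
        # shellcheck disable=SC1090
        source "$EXTENDED"
    fi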

On 5 April 2018 at 12:28, Martin André  wrote:
> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke  wrote:
>> Hi all,
>>
>> This mail is to serve as a follow on to the discussion during yesterday's
>> team meeting[4], which was regarding the desire to move start scripts out of
>> the kolla images [0]. There's a few factors at play, and it may well be best
>> left to discuss in person at the summit in May, but hopefully we can get at
>> least some of this hashed out before then.
>>
>> I'll start by summarising why I think this is a good idea, and then attempt
>> to address some of the concerns that have come up since.
>>
>> First off, to be frank, this effort is driven by wanting to add support
>> for loci images[1] in kolla-ansible. I think it would be unreasonable for
>> anyone to argue this is a bad objective to have, loci images have very
>> obvious benefits over what we have in Kolla today. I'm not looking to drop
>> support for Kolla images at all, I simply want to continue decoupling things
>> to the point where operators can pick and choose what works best for them.
>> Stemming from this, I think moving these scripts out of the images provides
>> a clear benefit to our consumers, both users of kolla and third parties such
>> as triple-o. Let me explain why.
>
> It's still very obscure to me how removing the scripts from kolla
> images will benefit consumers. If the reason is that you want to
> re-use them in other, non-kolla images, I believe we should package
> the scripts. I've left some comments in your spec review.
>
>> Normally, to run a docker image, a user will do 'docker run
>> helloworld:latest'. In any non trivial application, config needs to be
>> provided. In the vast majority of cases this is either provided via a bind
>> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via
>> environment variables (docker run --env HELLO=paul helloworld:latest). This
>> is all bog standard stuff, something anyone who's spent an hour learning
>> docker can understand.
>>
>> Now, let's say someone wants to try out OpenStack with Docker, and they look
>> at Kolla. First off they have to look at something called set_configs.py[2]
>> - over 400 lines of Python. Next they need to understand what that script
>> consumes, config.json [3]. The only reference for config.json is the files
>> that live in kolla-ansible, a mass of jinja and assumptions about how the
>> service will be run. Next, they need to figure out how to bind mount the
>> config files and config.json into the container in a way that can be
>> consumed by set_configs.py (which by the way, requires the base kolla image
>> in all cases). This is only for the config. For the service start up
>> command, this needs to also be provided in config.json. This command is then
>> parsed out and written to a location in the image, which is consumed by a
>> series of start/extend start shell scripts. Kolla is *unique* in this
>> regard, no other project in the container world is interfacing with images
>> in this way. Being a snowflake in this regard is not a good thing. I'm still
>> waiting to hear from a real world operator who would prefer to spend time
>> learning the above to doing:
>
> You're pointing a very real documentation issue. I've mentioned in the
> other kolla thread that I have a stub for the kolla API documentation.
> I'll push a patch for what I have and we can iterate on that.
>
>>   docker run -v /etc/keystone:/etc/keystone --entrypoint /usr/bin/keystone
>> keystone:latest [args]
>>
>> This is the Docker API, it's easy to understand and pretty much the standard
>> at this point.
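
For readers new to the interface described above, a hedged sketch of a
kolla-style config.json and the bind mount involved (paths, image name,
and values are illustrative only, not taken from kolla-ansible's actual
templates):

    # Illustrative sketch only -- a minimal config.json of the shape
    # described above, plus the bind mount and env var a deployment
    # tool would supply.
    cat > /etc/kolla/keystone/config.json <<'EOF'
    {
        "command": "/usr/sbin/httpd -DFOREGROUND",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/keystone.conf",
                "dest": "/etc/keystone/keystone.conf",
                "owner": "keystone",
                "perm": "0600"
            }
        ]
    }
    EOF

    # set_configs.py reads config.json from the bind-mounted directory
    # and copies the files into place before the service command starts.
    docker run \
        -v /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro \
        -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        kolla/centos-source-keystone:latest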
>
> Sure, using the docker API works for simpler cases, 

Re: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project

2018-04-04 Thread Michał Jastrzębski
On 4 April 2018 at 14:45, Brandon Jozsa  wrote:
> I’ve been a part of the OpenStack-Helm project from the very beginning, and
> there was a lot of early brainstorming on how we could collaborate and
> contribute directly to Kolla-Kubernetes. In fact, this was the original
> intent when we met with Kolla back in Barcelona. We didn’t like the idea of
> fragmenting interested Kubernetes developers/operators in the
> OpenStack-via-Kubernetes space. Whatever the project, we wanted all the
> domain expertise concentrated on a single deployment effort. Even though
> OSH/K-k8s couldn’t reach an agreement on how to handle configmaps (our
> biggest difference from the start), there was a lot of early collaboration
> between the project cores. Early K-k8s contributors may remember Halcyon,
> which cores from both sides promoted for early development of
> OpenStack-via-Kubernetes, regardless of the project.
>
> One of the requests from the initial OSH team (in Barcelona) was to formally
> separate Kolla from Kolla-foo deployment projects, both at a project level
> and from a core perspective. Why have the same cores giving +2’s to Kolla,
> Kolla-Ansible, Kolla-Mesos (now dead) and Kolla-Kubernetes, who may not have
> any interest in another given discipline? We wanted reviews to be timely,
> and laser-focused, and we felt that this more atomic approach would benefit
> Kolla in the end. But unfortunately there was heavy resistance with limited
> yet very influential cores. I honestly think pushback was also because it
> would mean that any Kolla sub-projects would be subject to re-acceptance as
> big tent projects.

Limited but very influential cores sounds like a bad community, and as
it happens I was leading this community at that time, so I feel I
should comment. We would love to increase the number of cores (raise the
limit) for images, but that comes with a cost - the cost being that a
person who would like to become a core needs to contribute to the
project in question and review other people's contributions. The proper
way to address this problem would be just that: contributing to Kolla
and reviewing code. If I failed to notice contributions from someone who
did that a lot (I hope I didn't), I'm sorry. This is the best and only
way to solve the problem in question.

>
> There were also countless discussions about the preservation of the Kolla
> API, or Ansible + Jinja portions of Kolla-Ansible. It became clear to us
> that Kubernetes wasn’t going to be the first class citizen for the
> deployment model in Kolla-Kubernetes, forcing operators to troubleshoot
> between OpenStack, Kolla (container builds), Ansible, Kubernetes, and Helm.
> This is apparent still today. And while I understand the hesitation to
> change Kolla/Kolla-Ansible, I think this code-debt has somewhat contributed
> to sustainability of Kolla-Kubernetes. Somewhat to the point of tension, I
> very much agree with Thierry’s comments earlier.

How wasn't k8s a first-class citizen? I don't understand. All processes
were the same, and PTG time was generous compared to ansible etc. More
people use Ansible due to its maturity, so it's obvious it's going to
have better testing etc., but again, that's solved by contributions.

> I want all of these projects to succeed but consolidation with purposeful
> and deliberate planning, which Rich has so kindly agreed to do, could be the
> right answer. So I +1 the idea, because I think it puts all like-minded
> individuals on the same focus (for the overall benefit of OpenStack and the
> overall OpenStack community). But we have to make sure there isn’t a lot of
> fallout from the decision either. To Steve Dake's previous point, there
> could be orphaned users/operators who feel “forced” into another project. I
> would hate to see that. It would be nice to at least plan this with the
> user-base and give them fair warning. And to this point, what is the active
> specific Kolla-Kubernetes core? Who is “PTL” of Kolla-Kubernetes today?

As per election results it's Jeffrey.

> On the other hand, I think that OSH has some improvements to make as well.
> Gating could use some help and the OpenStack-Infra team has been kindly
> helping out recently (a huge "thank you" to them). Docs…I think docs could
> always use some love. Please offer your experiences to the OSH team! We
> would love to hear your user input. Ultimately, if users/operators want to
> run something that even closely resembles production, then we need some
> decent production quality docs as opposed to leveraging the nascent gate
> scripts (Zuulv3 ansible+heat). Releases and release planning should be
> addressed, as users/vendors are going to want to be closer to OpenStack
> release dates (recent versions of OpenStack, Helm and Kubernetes). Clear and
> open roadmaps, with potential use of community-led planning tools. Open
> elections for PTL. Finally, the OSH team may still be interested in
> diversifying its core-base. Matt M. would have to address this. I know 

Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-31 Thread Michał Jastrzębski
So my take on the issue.

I think splitting Kolla and Kolla-Ansible into a completely new project
(including a name change and all) might look good from a purity
perspective (they're effectively separate), but it would cause chaos and
damage to the production deployments people use. While the code will be
the same, do we scrub the "kolla" name from the kolla-ansible code? Do
we change config paths? Configs land in /etc/kolla, so I guess a new
project shouldn't do that? Not to mention that operators are used to
this nomenclature and build tools around it (for example Kayobe), and
there is no telling how many production deployments would get hurt. At
the same time, I don't think there is much to gain from a split like
that, so it's not really practical.

We can do this for Kolla-kubernetes, as it hasn't released 1.0, so there
won't (or shouldn't) be production environments based on it.

We already have separate core teams for Kolla and Kolla-Ansible. From
my experience, organizing PTG and other events for both (or rather all
3 deliverables) together makes sense and makes scheduling attendance
much easier.

On 31 March 2018 at 11:06, Steven Dake (stdake)  wrote:
> On March 31, 2018 at 6:45:03 AM, Jeremy Stanley (fu...@yuggoth.org) wrote:
>
> [...]
> Given this, it sounds like the current Kolla mission statement of
> "provide production-ready containers and deployment tools for
> operating OpenStack clouds" could use some adjustment to drop the
> production-ready containers aspect for further clarity. Do you
> agree?
> [...]
>
> I appreciate your personal interest in attempting to clarify the Kolla
> mission statement.
>
> The change in the Kolla mission statement you propose is unnecessary.
>
> Regards
>
> -steve
>
>
>
> Jeremy Stanley


Re: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency

2018-02-26 Thread Michał Jastrzębski
I'm definitely for option 1. An accidental ceph upgrade during a routine
minor version upgrade is something we don't want. We will need a big
warning about this version mismatch in the release notes.
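
As context for the quoted thread below: holding qemu back on Ubuntu, as
described there, would typically be expressed with an apt preferences
pin - a rough sketch only, with illustrative version strings, not the
exact pin that was used:

    # Rough sketch: pin qemu to the 2.5 series via apt preferences so it
    # stays co-installable with ceph Jewel (versions are illustrative).
    cat > /etc/apt/preferences.d/qemu-pin <<'EOF'
    Package: qemu*
    Pin: version 1:2.5*
    Pin-Priority: 1001
    EOF

    apt-get update
    apt-cache policy qemu-system-x86   # verify which candidate apt now picks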

On 26 February 2018 at 07:01, Eduardo Gonzalez  wrote:
> I prefer option 1; breaking stable policy is not good for users. They would
> be forced to upgrade a major ceph version during a minor upgrade, which is
> not good and not expected to ever be done.
>
> Regards
>
>
> 2018-02-26 9:51 GMT+01:00 Shake Chen :
>>
>> I prefer to the option 2.
>>
>> On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang 
>> wrote:
>>>
>>> Recently, the Ubuntu jobs on the pike branch are red[0]. With some
>>> debugging, I found it is caused by a package dependency issue.
>>>
>>>
>>> *Background*
>>>
>>> Since we had no time to upgrade ceph from Jewel to Luminous at the end
>>> of the pike cycle, we pinned Ceph to Jewel on the pike branch. This
>>> works on CentOS, because ceph jewel and ceph luminous are in different
>>> repos.
>>>
>>> But the Ubuntu Cloud Archive repo bumped ceph to Luminous. Ceph jewel
>>> still exists on UCA, but since qemu 2.10 depends on ceph luminous, we
>>> had to pin qemu to 2.5 to use ceph Jewel[1]. And this has worked since
>>> then.
>>>
>>>
>>> *The Issue Now*
>>>
>>> But recently, UCA changed the libvirt-daemon package dependency and
>>> added the following:
>>>
>>> Package: libvirt-daemon
>>> Version: 3.6.0-1ubuntu6.2~cloud0
>>> ...
>>> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<<
>>> 1:2.10+dfsg-0ubuntu3.4~)
>>>
>>> It now requires qemu 2.10, so the dependency is broken and the
>>> nova-libvirt container fails to build.
>>>
>>>
>>> *Possible Solutions*
>>>
>>> I think there are two possible ways forward, but neither of them is
>>> good.
>>>
>>> 1. Install ceph Luminous in the nova-libvirt container and ceph Jewel
>>> in the ceph-* containers.
>>> 2. Bump ceph from jewel to luminous. But this obviously breaks the
>>> backport policy.
>>>
>>> So any idea on this?
>>>
>>> [0] https://review.openstack.org/534149
>>> [1] https://review.openstack.org/#/c/526931/
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>>
>>
>>
>>
>> --
>> Shake Chen
>>
>>


Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-30 Thread Michał Jastrzębski
On 30 January 2018 at 09:34, Joshua Harlow  wrote:
> I'm ok with #2,
>
> Though I would like to show an alternative that we have been experimenting
> with that avoids the whole needs for a globals.yml and such files in the
> first place (and feels more naturally inline with how ansible works IMHO).
>
> So short explanation first: we have this yaml format that describes all of
> our clouds and their settings and such (and which servers belong in which
> cloud and so on and so forth). We have then set up a REST server (a small
> gunicorn based one) that renders/serves this format into other formats.
>
> One of those other formats is one that is compatible with ansibles concept
> of dynamic inventory [1] and that is the one we are trying to send into
> kolla-ansible to get it to configure all the things (via typical mechanisms
> such as hostvars and groupvars).
>
> An example of this rendering:
>
> https://gist.github.com/harlowja/9d7b57571a2290c315fc9a4bf2957dac (this is
> dynamically generated from the other format, which is git version
> controlled...).
>
> The goal here is that we can just render all the needed variables and such
> for kolla-ansible (at a per-host basis if we have to) and avoid the need for
> having a special globals.yml (per-cloud/environment) and per-host special
> files in the first place.
>
> Was this kind of approach ever thought of?

Well, that totally works :)
I routinely use the inventory to override parts of globals (a different
iface per node). You could have an [all:vars] section in the inventory
and set every variable usually set in globals there; see the sketch
below. However, I think the issue here is about files in
/etc/kolla/config - so, config overrides.
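
A small sketch of that pattern (variable names follow kolla-ansible
conventions; hosts and values are illustrative):

    # Illustrative inventory: per-host overrides plus an [all:vars]
    # section carrying values that would otherwise live in globals.yml.
    cat > multinode <<'EOF'
    [control]
    control01 network_interface=eth0
    control02 network_interface=eth1

    [all:vars]
    kolla_internal_vip_address=10.10.10.254
    enable_haproxy=yes
    EOF

    kolla-ansible -i multinode deploy   # overrides apply per host/group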

I think one potential solution would be to have some sort of ansible
task that would translate ansible vars to ini format and lay down
files in /etc/kolla/config, but I think that's beyond the scope of
Kolla-Ansible.

>
> Perhaps I can go into more detail if it seems like one others may want to
> follow
>
> [1]: http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html
>
>
> Paul Bourke wrote:
>>
>> Hi all,
>>
>> I'd like to revisit our policy of not templating everything in
>> kolla-ansible's template files. This is a policy that was set in place
>> very early on in kolla-ansible's development, but I'm concerned we
>> haven't been very consistent with it. This leads to confusion for
>> contributors and operators - "should I template this and submit a patch,
>> or do I need to start using my own config files?".
>>
>> The docs[0] are currently clear:
>>
>> "The Kolla upstream community does not want to place key/value pairs in
>> the Ansible playbook configuration options that are not essential to
>> obtaining a functional deployment."
>>
>> In practice though our templates contain many options that are not
>> necessary, and plenty of patches have merged that while very useful to
>> operators, are not necessary to an 'out of the box' deployment.
>>
>> So I'd like us to revisit the questions:
>>
>> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which
>> caters to operators via key/value config options?
>>
>> 2) Or, is it to be a solid reference implementation, where any degree of
>> customisation implies a clear 'bring your own configs' type policy.
>>
>> If 1), then we should potentially:
>>
>> * Update ours docs to remove the referenced paragraph
>> * Look at reorganising files like globals.yml into something more
>> maintainable.
>>
>> If 2),
>>
>> * We should make it clear to reviewers that patches templating options
>> that are non essential should not be accepted.
>> * Encourage patches to strip down existing config files to an absolute
>> minimum.
>> * Make this policy more clear in docs / templates to avoid frustration
>> on the part of operators.
>>
>> Thoughts?
>>
>> Thanks,
>> -Paul
>>
>> [0]
>>
>> https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization
>>
>>


Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-29 Thread Michał Jastrzębski
Hey,

So I'm also for option 2. There was a big discussion in Atlanta about
"how hard it is to keep configs up to date and remove deprecated
options". merge_config makes it easier for us to handle this. With the
amount of services we support, I don't think we have enough time to
keep tabs on every config change across OpenStack.

On 29 January 2018 at 08:03, Steven Dake (stdake)  wrote:
> Agree, the “why” of this policy is stated here:
>
> https://docs.openstack.org/developer/kolla-ansible/deployment-philosophy.html
>
>
>
> Paul, I think your corrective actions sound good.  Perhaps we should also
> reword “essential” to some other word that is more lenient.
>
>
>
> Cheers
>
> -steve
>
>
>
> From: Jeffrey Zhang 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Monday, January 29, 2018 at 7:14 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla] Policy regarding template customisation
>
>
>
> Thank Paul for pointing this out.
>
>
>
> For me, I prefer to stay consistent with 2).
>
>
>
> There are thousands of configuration options in OpenStack; it is hard for
> Kolla to add every key/value pair in the playbooks. Currently,
> merge_config is the better solution.
>
> On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke  wrote:
>
> Hi all,
>
> I'd like to revisit our policy of not templating everything in
> kolla-ansible's template files. This is a policy that was set in place very
> early on in kolla-ansible's development, but I'm concerned we haven't been
> very consistent with it. This leads to confusion for contributors and
> operators - "should I template this and submit a patch, or do I need to
> start using my own config files?".
>
> The docs[0] are currently clear:
>
> "The Kolla upstream community does not want to place key/value pairs in the
> Ansible playbook configuration options that are not essential to obtaining a
> functional deployment."
>
> In practice though our templates contain many options that are not
> necessary, and plenty of patches have merged that while very useful to
> operators, are not necessary to an 'out of the box' deployment.
>
> So I'd like us to revisit the questions:
>
> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which
> caters to operators via key/value config options?
>
> 2) Or, is it to be a solid reference implementation, where any degree of
> customisation implies a clear 'bring your own configs' type policy.
>
> If 1), then we should potentially:
>
> * Update ours docs to remove the referenced paragraph
> * Look at reorganising files like globals.yml into something more
> maintainable.
>
> If 2),
>
> * We should make it clear to reviewers that patches templating options that
> are non essential should not be accepted.
> * Encourage patches to strip down existing config files to an absolute
> minimum.
> * Make this policy more clear in docs / templates to avoid frustration on
> the part of operators.
>
> Thoughts?
>
> Thanks,
> -Paul
>
> [0]
> https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me


Re: [openstack-dev] [Openstack-operators] [all][kolla][rdo] Collaboration with Kolla for the RDO test days

2018-01-29 Thread Michał Jastrzębski
Cool, thank you David, sign me up!:)

On 29 January 2018 at 05:30, David Moreau Simard  wrote:
> Hi !
>
> For those who might be unfamiliar with the RDO [1] community project:
> we hang out in #rdo, we don't bite and we build vanilla OpenStack
> packages.
>
> These packages are what allows you to leverage one of the deployment
> projects such as TripleO, PackStack or Kolla to deploy on CentOS or
> RHEL.
> The RDO community collaborates with these deployment projects by
> providing trunk and stable packages in order to let them develop and
> test against the latest and the greatest of OpenStack.
>
> RDO test days typically happen around a week after an upstream
> milestone has been reached [2].
> The purpose is to get everyone together in #rdo: developers, users,
> operators, maintainers -- and test not just RDO but OpenStack itself
> as installed by the different deployment projects.
>
> We tried something new at our last test day [3] and it worked out great.
> Instead of encouraging participants to install their own cloud for
> testing things, we supplied a cloud of our own... a bit like a limited
> duration TryStack [4].
> This lets users without the operational knowledge, time or hardware to
> install an OpenStack environment see what's coming in the upcoming
> release of OpenStack and get the feedback loop going ahead of the
> release.
>
> We used Packstack for the last deployment and invited Packstack cores
> to deploy, operate and troubleshoot the installation for the duration
> of the test days.
> The idea is to rotate between the different deployment projects to
> give every interested project a chance to participate.
>
> Last week, we reached out to Kolla to see if they would be interested
> in participating in our next RDO test days [5] around February 8th.
> We supply the bare metal hardware and their core contributors get to
> deploy and operate a cloud with real users and developers poking
> around.
> All around, this is a great opportunity to get feedback for RDO, Kolla
> and OpenStack.
>
> We'll be advertising the event a bit more as the test days draw closer
> but until then, I thought it was worthwhile to share some context for
> this new thing we're doing.
>
> Let me know if you have any questions !
>
> Thanks,
>
> [1]: https://www.rdoproject.org/
> [2]: https://www.rdoproject.org/testday/
> [3]: 
> https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
> [4]: http://trystack.org/
> [5]: 
> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>


[openstack-dev] [kolla] PTL non candidacy

2018-01-10 Thread Michał Jastrzębski
Hello,

A bit earlier than usual, but I'd like to say that I won't be
running for PTL reelection for the Rocky cycle. I had the privilege of
being PTL of Kolla for the last 3 cycles and I would like to thank the
Kolla community for this opportunity and trust. I'm very proud of what
we've accomplished over the last 3 releases and I'm sure we will
accomplish even greater things in the future!

It's good for a project to change leadership every now and then. I would
encourage everyone in the community to consider running; I can promise
that this job is ... very interesting ;) and extremely rewarding!

Thank you all for your support, and please support the new PTL as much
as you supported me.

Regards,
Michal



[openstack-dev] [kolla] no meeting 27th Dec

2017-12-20 Thread Michał Jastrzębski
Because merry Christmas everyone:)



Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-14 Thread Michał Jastrzębski
On 14 December 2017 at 07:09, Nick Barcet  wrote:
> Hello Thierry,
>
> Really appreciate the effort you are putting to find creative ways to help
> new contributors join our project.  However, based on my experience, it
> seems that:
> - the longer the delay between the releases, the harder they are to deliver
> - the main problem newcomers may have today, is that the release process is
> very disruptive if they have not completed their feature in time for the
> next cut, and not getting it in time will mean a 6 months delay, which is
> very frustrating.

I agree with these points. I would also add that besides bigger releases
being harder to deliver, stable branches are harder to maintain as time
goes on. By that I mean:
1. a bug is submitted
2. the bug is fixed in master
3. the bug is backported to stable

As time goes on, the discrepancy between master and stable grows and
backports become more expensive. It will also lower the cadence of "this
bug is fixed by a new feature" and could potentially mean users/ops will
have to suffer it for a longer time.
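
As an aside, the mechanics of step 3 in OpenStack's gerrit workflow look
roughly like this (branch name and sha are placeholders):

    # Rough sketch of a stable backport: the fix merges to master first,
    # then is cherry-picked (-x records the original sha) onto stable.
    git fetch origin
    git checkout -b bug/XXXXXX-pike origin/stable/pike   # placeholder names
    git cherry-pick -x <sha-of-master-fix>
    git review stable/pike   # submit the backport to gerrit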

> As a consequence, I would rather propose:
> - a longer cadence for stable releases
> - a quicker cadence for intermediary releases (release early, release
> often...)

I agree with that approach. With lightweight releases (but still
releases, so some cross-community testing is involved, even if only an
automated one), we leave the decision to operators to either wait for a
year, upgrade in smaller chunks with some testing, or chase master with
nearly no testing.

> A longer cadence for stable release would mean that we pick a duration
> between stable branches that would fit our users' need and that is a
> multiplier of the shorter release.  Based on the interview we conducted on
> our user base a year and a half ago, it seemed that 18mo was the right
> cadence, but we could easily poll the wider OpenStack user base to have
> confirmation.  The justification we got for an 18 month cadence was that it
> was itself a divider of most user hardware life-cycle (3 years) and would
> help in their overall life-cycle management (ie they can decide to upgrade
> their hw once in the duration, or not and get to a new version at hw renewal
> every 3 years).
>
> A quicker cadence for intermediary release would mean that instead of
> creating a branch per release, we would only tag the various project
> branches for a release, validating that integration tests and fixes are
> aligned at this point.  Distributions could independently decide to provide
> these release and create their own branch to maintain those, but this would
> not be the burden of the overall project.   As a consequence of the quicker
> cadence, the integration milestone duration should be reduced to something
> like 2 or 3 weeks.  Switching to tagging a release instead of branching,
> should allow for almost uninterrupted merge request, to the exception of the
> integration period when only integration fixes should be (temporarily)
> accepted, but overall simplifying what one has to do to resume his work from
> one version to another.  Also, with a quicker release cycle, the impact of
> missing the window would be less penalizing, which I believe is a big part
> of the newcomers frustration with the project.  If we were going with
> 18month stable, then 3 month or 1.5 months intermediary releases would be my
> recommendation.
>
> What would that mean for summits? I would think that we could only have one
> "worldwide" summit yearly, with the ability to have regional summits in
> between.
> What would that mean for PTG? I would propose to keep a 6 monthly cadence
> for in person PTG, but formalize the creation of a 3 monthly virtual project
> gathering over a period of 48h.  No cross project topics would happen in
> those.
>
> As a consequence of this I think we would have:
> - a stable branch lifecycle which is more aligned with our user base
> - the ability for fast user to run from master "tagged version"
> - the ability for distros to differentiate on the model the adopt with
> respect to the intermediary release
> - less frustration for newcomers
> - a project that moves faster
>
> Thanks for taking the time to read this proposal.
>
> Nick
>
>
>
> On Wed, Dec 13, 2017 at 11:17 AM, Thierry Carrez 
> wrote:
>>
>> Hi everyone,
>>
>> Over the past year, it has become pretty obvious to me that our
>> self-imposed rhythm no longer matches our natural pace. It feels like we
>> are always running elections, feature freeze is always just around the
>> corner, we lose too much time to events, and generally the impression
>> that there is less time to get things done. Milestones in the
>> development cycles are mostly useless now as they fly past us too fast.
>> A lot of other people reported that same feeling.
>>
>> As our various components mature, we have less quick-paced feature
>> development going on. As more and more people adopt OpenStack, we are
>> more careful about not breaking them, 

[openstack-dev] [kolla] Proposal to add Marcin (hrw) to kolla images core team

2017-11-02 Thread Michał Jastrzębski
It's my great pleasure to start another vote for a core team addition!

Everyone knows hrw; thanks to him, Kolla now runs on both Power and ARM!
Voting will be open for 14 days (until the 16th of Nov).

Consider this mail my +1 vote

Regards,
Michal



[openstack-dev] [kolla] Cancelling meeting 8 Nov

2017-11-01 Thread Michał Jastrzębski
Have a good summit everyone!



Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-21 Thread Michał Jastrzębski
Additionally, you can build only the images you need. Some of the images
we have are quite... niche. If you limit the number of images to only
those you care about, that will lower the total size significantly.

On Oct 20, 2017 3:51 PM, "Steven Dake (stdake)"  wrote:

> Sam,
>
>
>
> Agreed this matches my experience. Building one by one though will result
> in massive image size sprawl.
>
>
>
> Regards
>
> -steve
>
>
>
> *From: *Sam Yaple 
> *Reply-To: *"s...@yaple.net" , "OpenStack Development
> Mailing List (not for usage questions)"  >
> *Date: *Thursday, October 19, 2017 at 10:37 PM
> *To: *Gabriele Cerami 
> *Cc: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [TripleO][Kolla] Concerns about containers
> images in DockerHub
>
>
>
>
>
>
>
> On Thu, Oct 19, 2017 at 11:23 PM, Gabriele Cerami 
> wrote:
>
> On 19 Oct, Sam Yaple wrote:
> > So it seems tripleo is building *all* images and then pushing them.
> > Reworking your number leads me to believe you will be consuming 10-15GB
> in
> > total on Dockerhub. Kolla images are only the size that you posted when
> > built as seperate services. Just keep building all the images at the same
> > time and you wont get anywhere near the numbers you posted.
>
> Makes sense, so considering the shared layers
> - a size of 10-15GB per build.
> - 4-6 builds rotated per release
> - 3-4 releases
>
>
>
> - a size of 1-2GB per build
>
> - 4-6 builds rotated per release
>
> - 3-4 releases
>
>
>
> At worst you are looking at 48GB, not 360GB. Don't worry so much there!
>
>
> total size will be approximately 360GB in the worst case, and 120GB in
> the best case, which seems a bit more reasonable.
>
> Thanks for the clarifications
>
>
>


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Michał Jastrzębski
On 19 October 2017 at 13:37, Michał Jastrzębski <inc...@gmail.com> wrote:
> On 19 October 2017 at 13:32, Joshua Harlow <harlo...@fastmail.com> wrote:
>> This reminded me of something I wanted to ask.
>>
>> Is it true to state that the only way to get 'fully' shared-base layers is to
>> have `kolla-build` build all the projects (that a person/company/other may
>> use) in one invocation? (in part because of the jinja2 template generation
>> which would cause differences in dockerfiles?...)
>
> Well jinja2 should render same dockerfile no matter when you call it,
> so it should be fine. Alternatively you can run something like
> kolla-build nova --skip-parents  - this call will try to build all
> images with "nova" in them while not rebuilding openstack-base and
> base image.
>
>> I was pretty sure this was the case (unless things have changed), but just
>> wanting to check since that question seems (somewhat) on-topic...
>>
>> At godaddy we build individual projects using `kolla-build` (in part because
>> it makes it easy to rebuild + test + deploy a single project with either an
>> update or a patch or ...) and I suspect others are doing this also (after
>> all the kolla-build command does take a regex of projects to build) - though
>> doing it in this way does seem like it would not reuse (all the layers
>> outside of the base operating system) layers 'optimally'?
>>
>> Thoughts?
>>
>> -Josh
>>
>> Sam Yaple wrote:
>>>
>> docker_image wouldn't be the best place for that. But if you are looking
>>> for a quicker solution, kolla_docker was written specifically to be
>>> license compatible for openstack. its structure should make it easily
>>> adapted to delete an image. And you can copy it and cut it up thanks to
>>> the license.
>>>
>>> Are you pushing images with no shared base layers at all (300MB
>>> compressed image is no shared base layers)? With shared base layers a
>>> full image set of Kolla images should be much smaller than the numbers
>>> you posted.
>>>
>>> Thanks,
>>> SamYaple
>>>
>>> On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami <gcer...@redhat.com
>>> <mailto:gcer...@redhat.com>> wrote:
>>>
>>> Hi,
>>>
>>> our CI scripts are now automatically building, testing and pushing
>>> approved openstack/RDO services images to public repositories in
>>> dockerhub using ansible docker_image module.
>>>
>>> Promotions have had some hiccups, but we're starting to regularly
>>> upload new images every 4 hours.
>>>
>>> When we get to full speed, we'll potentially have
>>> - 3-4 different sets of images, one per release of openstack (counting
>>>   an EOL release grace period)
>>> - 90-100 different service images per release
>>> - 4-6 different versions of the same image (keeping older promoted
>>>   images for a while)
>>>
>>> At around 300MB per image a possible grand total is around 650GB of
>>> space used.

That doesn't sound correct, as images share a lot - a full registry of a
single type/distro (like centos source) is ~10GB.

>>> We don't know if this is acceptable usage of dockerhub space, and for
>>> this we already sent a similar email to the docker support to ask
>>> specifically if the user would get penalized in any way (e.g. enforcing
>>> quotas, rate limiting, blocking). We're still waiting for a reply.
>>>
>>> In any case it's critical to keep the usage around the estimate, and to
>>> achieve this we need a way to automatically delete the older images.
>>> docker_image module does not provide this functionality, and we think
>>> the only way is issuing direct calls to dockerhub API
>>>
>>> https://docs.docker.com/registry/spec/api/#deleting-an-image
>>> <https://docs.docker.com/registry/spec/api/#deleting-an-image>
>>>
>>> docker_image module structure doesn't seem to encourage the addition of
>>> such functionality directly in it, so we may be forced to use the uri
>>> module.
>>> With new images uploaded potentially every 4 hours, this will become a
>>> problem to be solved within the next two weeks.
>>>
>>> We'd appreciate any input for an existing, in progress and/or better
>>> solution for bulk deletion, and issues 

Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Michał Jastrzębski
On 19 October 2017 at 13:32, Joshua Harlow  wrote:
> This reminded me of something I wanted to ask.
>
> Is it true to state that the only way to get 'fully' shared-base layers is to
> have `kolla-build` build all the projects (that a person/company/other may
> use) in one invocation? (in part because of the jinja2 template generation
> which would cause differences in dockerfiles?...)

Well, jinja2 should render the same dockerfile no matter when you call
it, so it should be fine. Alternatively, you can run something like
kolla-build nova --skip-parents - this call will try to build all
images with "nova" in them while not rebuilding the openstack-base and
base images.
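
Concretely, the two invocation styles being contrasted look roughly like
this (a sketch; the image set depends on your kolla-build config):

    # One full build: parent images (base, openstack-base) are built
    # once and every service image layers on top of them.
    kolla-build

    # Later, rebuild only the nova images; --skip-parents reuses the
    # existing parent layers instead of rebuilding them.
    kolla-build nova --skip-parents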

> I was pretty sure this was the case (unless things have changed), but just
> wanting to check since that question seems (somewhat) on-topic...
>
> At godaddy we build individual projects using `kolla-build` (in part because
> it makes it easy to rebuild + test + deploy a single project with either an
> update or a patch or ...) and I suspect others are doing this also (after
> all the kolla-build command does take a regex of projects to build) - though
> doing it in this way does seem like it would not reuse (all the layers
> outside of the base operating system) layers 'optimally'?
>
> Thoughts?
>
> -Josh
>
> Sam Yaple wrote:
>>
>> docker_image wouldn't be the best place for that. But if you are looking
>> for a quicker solution, kolla_docker was written specifically to be
>> license compatible for openstack. its structure should make it easily
>> adapted to delete an image. And you can copy it and cut it up thanks to
>> the license.
>>
>> Are you pushing images with no shared base layers at all (300MB
>> compressed image is no shared base layers)? With shared base layers a
>> full image set of Kolla images should be much smaller than the numbers
>> you posted.
>>
>> Thanks,
>> SamYaple
>>
>> On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami > > wrote:
>>
>> Hi,
>>
>> our CI scripts are now automatically building, testing and pushing
>> approved openstack/RDO services images to public repositories in
>> dockerhub using ansible docker_image module.
>>
>> Promotions have had some hiccups, but we're starting to regularly
>> upload new images every 4 hours.
>>
>> When we get to full speed, we'll potentially have
>> - 3-4 different sets of images, one per release of openstack (counting
>>   an EOL release grace period)
>> - 90-100 different service images per release
>> - 4-6 different versions of the same image (keeping older promoted
>>   images for a while)
>>
>> At around 300MB per image a possible grand total is around 650GB of
>> space used.
>>
>> We don't know if this is acceptable usage of dockerhub space, and for
>> this we already sent a similar email to the docker support to ask
>> specifically if the user would get penalized in any way (e.g. enforcing
>> quotas, rate limiting, blocking). We're still waiting for a reply.
>>
>> In any case it's critical to keep the usage around the estimate, and to
>> achieve this we need a way to automatically delete the older images.
>> docker_image module does not provide this functionality, and we think
>> the only way is issuing direct calls to dockerhub API
>>
>> https://docs.docker.com/registry/spec/api/#deleting-an-image
>> 
>>
>> docker_image module structure doesn't seem to encourage the addition
>> of
>> such functionality directly in it, so we may be forced to use the uri
>> module.
>> With new images uploaded potentially every 4 hours, this will become a
>> problem to be solved within the next two weeks.
>>
>> We'd appreciate any input for an existing, in progress and/or better
>> solution for bulk deletion, and issues that may arise with our space
>> usage in dockerhub
>>
>> Thanks
>>
>>

Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] Proposing changes in stable policy for installers

2017-10-16 Thread Michał Jastrzębski
So, my $0.02.

The problem with handling Newton goes beyond deployment tools. Yes, it's
popular to use, but if our dependencies (the OpenStack services
themselves) are unmaintained, we should be too. If we say "we support
Newton" in deployment tools, we make a kind of promise we can't keep.
If, for example, there is a CVE in Nova that affects Newton, there is
nothing we can do about it and our "support" is meaningless.

Not having an LTS kind of model has been an issue for OpenStack
operators forever, but that's not a problem we can solve in deployment
tools (although we are often asked to, because our communities are
largely operators and we're arguably the projects closest to operators).

I, for one, think we should keep the current stable policy, not make an
exception for deployment tools, and address this issue across the
board. What Emilien is describing is a real issue that hurts operators.

On 16 October 2017 at 15:38, Emilien Macchi  wrote:
> On Mon, Oct 16, 2017 at 4:27 AM, Thierry Carrez  wrote:
>> Emilien Macchi wrote:
>>> [...]
>>> ## Proposal
>>>
>>> Proposal 1: create a new policy that fits for projects like installers.
>>> I kicked-off something here: https://review.openstack.org/#/c/511968/
>>> (open for feedback).
>>> Content can be read here:
>>> http://docs-draft.openstack.org/68/511968/1/check/gate-project-team-guide-docs-ubuntu-xenial/1a5b40e//doc/build/html/stable-branches.html#support-phases
>>> Tag created here: https://review.openstack.org/#/c/511969/ (same,
>>> please review).
>>>
>>> The idea is really to not touch the current stable policy and create a
>>> new one, more "relax" that suits well for projects like installers.
>>>
>>> Proposal 2: change the current policy and be more relax for projects
>>> like installers.
>>> I haven't worked on this proposal, even though it was something I was
>>> considering doing first, because I realized it could bring confusion
>>> about which projects actually follow the real stable policy and which
>>> ones have exceptions.
>>> That's why I thought having a dedicated tag would help to separate them.
>>>
>>> Proposal 3: no change anywhere, projects like installer can't claim
>>> stability etiquette (not my best option in my opinion).
>>>
>>> Anyway, feedback is welcome, I'm now listening. If you work on Kolla,
>>> TripleO, OpenStack-Ansible, PuppetOpenStack (or any project who has
>>> this need), please get involved in the review process.
>>
>> My preference goes to proposal 1, however rather than call it "relaxed"
>> I would make it specific to deployment/lifecycle or cycle-trailing
>> projects.
>>
>> Ideally this policy could get adopted by any such project. The
>> discussion started on the review and it's going well, so let's see where
>> it goes :)
>
> Thierry, when I read your comment on Gerrit I understand that you prefer to
> amend the existing policy and just make a note for installers (which
> is, I think, the option #2 that I proposed). Can you please confirm
> that?
> So far I see that option #1 has a large consensus here; I'll wait for
> Thierry's answer to continue working on it.
>
> Thanks for the feedback so far!
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [kolla] [PTL] who can start the official meeting?

2017-10-11 Thread Michał Jastrzębski
I haven't seen "malicious" meeting starters yet; let's hope that won't
happen :) On the other hand, an ad-hoc chair change can, and did, happen,
so I agree with fungi - I don't think we need to put restrictions on
that.
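
For reference, an ad-hoc chair handoff is just a meetbot command; a
minimal illustration (the meeting name and nick are placeholders):

    #startmeeting kolla
    #chair rwellum
    ...
    #endmeeting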

On 11 October 2017 at 09:11, Jeremy Stanley  wrote:
> On 2017-10-11 21:35:26 +0530 (+0530), Swapnil Kulkarni wrote:
> [...]
>> The problem here is that if we know who is most likely to chair the
>> meeting, e.g. [1], we can allow them to start the meeting.
> [...]
>
> I'm pretty certain I wouldn't want to have to propose a patch to
> update that every time I needed someone to chair a meeting in my
> absence. This doesn't seem like a common enough issue to warrant the
> added complexity and red tape of access controls on our meeting
> automation.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Non-critical, non-gate patches freeze during migration to zuul3

2017-10-03 Thread Michał Jastrzębski
Since infra is splitting Zuul into zuulv2 and v3 [1], we get our gates
back, and that allows us to work on zuulv3 gates at the same time.
Therefore we can un-freeze the repo. Feel free to merge patches :)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123049.html

On 2 October 2017 at 10:13, Michał Jastrzębski <inc...@gmail.com> wrote:
> Hello,
>
> As you all know, Zuul v3 is on! An unfortunate side effect was that it
> broke our gates. For that reason I submitted a patch removing the legacy
> jobs entirely, and we will do a quick migration to zuulv3-compatible,
> local jobs. That means that between when this patch merges [1] and when
> we finish that work, we will be without effective CI. For that reason I
> want us to not merge any patches that aren't critical bugfixes or
> gate-related work.
>
> The patches that migrate us to Zuul v3 are [2] and [3]; please prioritize them.
>
> Regards,
> Michal
>
> [1] https://review.openstack.org/#/c/508944/
> [2] https://review.openstack.org/#/c/508661/
> [3] https://review.openstack.org/#/c/508376/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Non-critical, non-gate patches freeze during migration to zuul3

2017-10-02 Thread Michał Jastrzębski
Hello,

As you all know, Zuul v3 is on! An unfortunate side effect was that it
broke our gates. For that reason I submitted a patch removing the legacy
jobs entirely, and we will do a quick migration to zuulv3-compatible,
local jobs. That means that between when this patch merges [1] and when
we finish that work, we will be without effective CI. For that reason I
want us to not merge any patches that aren't critical bugfixes or
gate-related work.

The patches that migrate us to Zuul v3 are [2] and [3]; please prioritize them.

Regards,
Michal

[1] https://review.openstack.org/#/c/508944/
[2] https://review.openstack.org/#/c/508661/
[3] https://review.openstack.org/#/c/508376/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Michał Jastrzębski
On 26 September 2017 at 13:54, Alex Schultz <aschu...@redhat.com> wrote:
> On Tue, Sep 26, 2017 at 2:34 PM, Michał Jastrzębski <inc...@gmail.com> wrote:
>> In Kolla, during this PTG, we came up with the idea of scenario-based
>> testing+documentation. Basically what we want to do is to provide a set
>> of kolla configurations, howtos and tempest configs to test out
>> different "constellations" or use-cases. If we do these in a
>> cross-community manner instead of in Kolla (and just host
>> kolla-specific things in kolla), I think that would partially address
>> what you're asking for here.
>>
>
> So I'd like to point out that we have been doing a lot of these similar
> deployments in puppet[0] and tripleo[1] for a while now, but more to get
> the most coverage out of the fewest jobs in terms of CI.  They aren't
> necessarily realistic deployment use cases. We can't actually fully
> test deployment scenarios given the limited resources available.
>
> The problem with trying to push the constellation concept to
> deployment tools is that you're effectively saying that the
> upstream isn't going to bother doing it and is relying on
> understaffed (see the chef/puppet people's emails) groups to now implement
> the thing you expect end users to use.  Simplification in openstack
> needs to not be pushed off to someone else as we're all responsible
> for it.  Have you seen the number of feature/configuration options the
> upstream services have? Now multiply by 20-30. Welcome to OpenStack
> configuration management.  Oh, and try to keep up with all the new ones
> and the ones being deprecated every 6 months. /me cries
>
> Honestly it's time to stop saying yes to things unless they have some
> sort of minimum viability or it makes sense why we would force it on
> the end user (as confirmed by the end user, not because it sounds like
> a good idea).
>
> OpenStack has always been a pick your poison and construct your own
> cloud. The problem is that those pieces used for building are getting
> more complex and have even more inter-dependencies being added each
> cycle without a simple way for the operator to install or be able to
> migrate between versions.
>
> Thanks,
> -Alex
>
> [0] https://github.com/openstack/puppet-openstack-integration
> [1] 
> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html

Right, I don't think anyone is considering addressing *all* of them... But
if you break down actual use cases, most people want nova (qemu+kvm),
neutron (vxlan, potentially vlan), cinder+ceph... If we agree to
cover 90% of users, that'll boil down to 4-5 different
"constellations". If you want fancy networking, we will try our best
to make it possible, but not necessarily as easily as a 20-or-so-node
mini private cloud for vms. I think if we could provide these 4 or 5
use cases, easy to deploy and manage, provide a testing suite so people
can validate the env, provide robust upgrades and so on, that alone would
make a lot of people happy.
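
To make that concrete, a purely illustrative sketch of what one such
"constellation" definition could look like (the format and field names
are hypothetical, just to show the shape):

    constellation: small-private-cloud
    size: ~20 nodes
    services: [keystone, glance, nova, neutron, cinder, horizon]
    backends:
      compute: qemu+kvm
      network: vxlan
      storage: ceph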

>> On 26 September 2017 at 13:01, Jonathan Proulx <j...@csail.mit.edu> wrote:
>>> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>>>
>>> :OpenStack is big. Big enough that a user will likely be fine with learning
>>> :a new set of tools to manage it.
>>>
>>> New users in the startup sense of new, probably.
>>>
>>> People with entrenched environments, I doubt it.
>>>
>>> But OpenStack is big. Big enough I think all the major config systems
>>> are fairly well represented, so whether I'm right or wrong this
>>> doesn't seem like an issue to me :)
>>>
>>> Having common targets (constellations, reference architectures,
>>> whatever) so all the config systems build the same things (or a subset
>>> or superset of the same things) seems like it would have benefits all
>>> around.
>>>
>>> -Jon
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Michał Jastrzębski
In Kolla, during this PTG, we came up with the idea of scenario-based
testing+documentation. Basically what we want to do is to provide a set
of kolla configurations, howtos and tempest configs to test out
different "constellations" or use-cases. If we do these in a
cross-community manner instead of in Kolla (and just host kolla-specific
things in kolla), I think that would partially address what you're
asking for here.

On 26 September 2017 at 13:01, Jonathan Proulx  wrote:
> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>
> :OpenStack is big. Big enough that a user will likely be fine with learning
> :a new set of tools to manage it.
>
> New users in the startup sense of new, probably.
>
> People with entrenched environments, I doubt it.
>
> But OpenStack is big. Big enough I think all the major config systems
> are fairly well represented, so whether I'm right or wrong this
> doesn't seem like an issue to me :)
>
> Having common targets (constellations, reference architectures,
> whatever) so all the config systems build the same things (or a subset
> or superset of the same things) seems like it would have benefits all
> around.
>
> -Jon
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Michał Jastrzębski
On 26 September 2017 at 07:34, Attila Fazekas <afaze...@redhat.com> wrote:
> decompressing those registry tar.gz files takes ~0.5 min on a 2.2 GHz CPU.
>
> Fully pulling all containers takes something like ~4.5 min (from localhost,
> one leaf request at a time),
> but on the gate vm we usually have 4 cores,
> so it is possible to go below 2 min with a better pulling strategy,
> unless we hit some disk limit.

Check your $ docker info output. If you kept the defaults, the storage
driver will be devicemapper on loopback, which is awfully slow and not
very reliable. Overlay2 is much better and should speed things up quite a
bit. For me, deployment of a 5-node openstack on vms similar to the gate
took 6 min (I had a registry available in the same network). Also, if you
pull a single image it will download all the base images as well, so the
next one will be significantly faster.
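
A minimal sketch of the check and the switch (assumes a kernel with
overlay2 support; existing devicemapper data is not migrated):

    $ docker info | grep 'Storage Driver'
    Storage Driver: devicemapper
    # /etc/docker/daemon.json
    {
      "storage-driver": "overlay2"
    }
    $ sudo systemctl restart docker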

>
> On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski <inc...@gmail.com>
> wrote:
>>
>> On 22 September 2017 at 17:21, Paul Belanger <pabelan...@redhat.com>
>> wrote:
>> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> >> > "if DevStack gets custom images prepped to make its jobs
>> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> >> > do we draw that line?). "
>> >> >
>> >> > IMHO we can try to have only one big image per distribution,
>> >> > where the packages are the union of the packages requested by all
>> >> > team,
>> >> > minus the packages blacklisted by any team.
>> >> [...]
>> >>
>> >> Until you realize that some projects want packages from UCA, from
>> >> RDO, from EPEL, from third-party package repositories. Version
>> >> conflicts mean they'll still spend time uninstalling the versions
>> >> they don't want and downloading/installing the ones they do so we
>> >> have to optimize for one particular set and make the rest
>> >> second-class citizens in that scenario.
>> >>
>> >> Also, preinstalling packages means we _don't_ test that projects
>> >> actually properly declare their system-level dependencies any
>> >> longer. I don't know if anyone's concerned about that currently, but
>> >> it used to be the case that we'd regularly add/break the package
>> >> dependency declarations in DevStack because of running on images
>> >> where the things it expected were preinstalled.
>> >> --
>> >> Jeremy Stanley
>> >
>> > +1
>> >
>> > We spend a lot of effort trying to keep the 6 images we have in nodepool
>> > working
>> > today, I can't imagine how much work it would be to start adding more
>> > images per
>> > project.
>> >
>> > Personally, I'd like to audit things again once we roll out zuulv3, I am
>> > sure
>> > there are some tweaks we could make to help speed up things.
>>
>> I don't understand - why would you add images per project? We have all
>> the images there... What I'm talking about is leveraging what we'll
>> have soon (a registry) to lower gate times and DIB infra requirements
>> (DIB would hardly need to refresh images...)
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 17:21, Paul Belanger  wrote:
> On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> > "if DevStack gets custom images prepped to make its jobs
>> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> > do we draw that line?). "
>> >
>> > IMHO we can try to have only one big image per distribution,
>> > where the packages are the union of the packages requested by all team,
>> > minus the packages blacklisted by any team.
>> [...]
>>
>> Until you realize that some projects want packages from UCA, from
>> RDO, from EPEL, from third-party package repositories. Version
>> conflicts mean they'll still spend time uninstalling the versions
>> they don't want and downloading/installing the ones they do so we
>> have to optimize for one particular set and make the rest
>> second-class citizens in that scenario.
>>
>> Also, preinstalling packages means we _don't_ test that projects
>> actually properly declare their system-level dependencies any
>> longer. I don't know if anyone's concerned about that currently, but
>> it used to be the case that we'd regularly add/break the package
>> dependency declarations in DevStack because of running on images
>> where the things it expected were preinstalled.
>> --
>> Jeremy Stanley
>
> +1
>
> We spend a lot of effort trying to keep the 6 images we have in nodepool 
> working
> today, I can't imagine how much work it would be to start adding more images 
> per
> project.
>
> Personally, I'd like to audit things again once we roll out zuulv3, I am sure
> there are some tweaks we could make to help speed up things.

I don't understand - why would you add images per project? We have all
the images there... What I'm talking about is leveraging what we'll
have soon (a registry) to lower gate times and DIB infra requirements
(DIB would hardly need to refresh images...)

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 11:45, Clark Boylan <cboy...@sapwetik.org> wrote:
> On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
>> Another, more revolutionary (for good or ill) alternative would be to
>> move the gates to run Kolla instead of DevStack. We're working towards a
>> registry of images, and we support most of the openstack services now.
>> If we enable mixed installation (your service in a devstack-ish way,
>> others via Kolla), that should lower the amount of downloads quite
>> dramatically (much of it will be downloads from a registry which will be
>> mirrored/cached in every nodepool). Then all we really need to support
>> is a barebones image with docker and ansible installed, and that's it.
>
> Except that it very likely isn't going to use less bandwidth. We already
> mirror most of these package repos so all transfers are local to the
> nodepool cloud region. In total we seem to grab about 139MB of packages
> for a neutron dvr multinode scenario job (146676348 bytes) on Ubuntu
> Xenial. This is based off the package list compiled at
> http://paste.openstack.org/raw/621753/ then asking apt-cache for the
> package size for the latest version.
>
> Kolla images on the other hand are in the multigigabyte range
> http://tarballs.openstack.org/kolla/images/.
>
> Clark

Right, all 200+ of them; with proper registry management it's going to
be more streamlined. That will lower the amount of effort needed to
handle DIB images, though. We are going to build them anyway, so the net
bandwidth will actually be lower... Also, I don't think it's bandwidth
that's the issue here as much as general package management and the
installation of packages even from a locally available mirror; docker
would help with that.

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 07:31, Jeremy Stanley  wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> "if DevStack gets custom images prepped to make its jobs
>> run faster, won't Triple-O, Kolla, et cetera want the same and where
>> do we draw that line?). "
>>
>> IMHO we can try to have only one big image per distribution,
>> where the packages are the union of the packages requested by all team,
>> minus the packages blacklisted by any team.
> [...]
>
> Until you realize that some projects want packages from UCA, from
> RDO, from EPEL, from third-party package repositories. Version
> conflicts mean they'll still spend time uninstalling the versions
> they don't want and downloading/installing the ones they do so we
> have to optimize for one particular set and make the rest
> second-class citizens in that scenario.
>
> Also, preinstalling packages means we _don't_ test that projects
> actually properly declare their system-level dependencies any
> longer. I don't know if anyone's concerned about that currently, but
> it used to be the case that we'd regularly add/break the package
> dependency declarations in DevStack because of running on images
> where the things it expected were preinstalled.
> --
> Jeremy Stanley

Another, more revolutionary (for good or ill) alternative would be to
move the gates to run Kolla instead of DevStack. We're working towards a
registry of images, and we support most of the openstack services now.
If we enable mixed installation (your service in a devstack-ish way,
others via Kolla), that should lower the amount of downloads quite
dramatically (much of it will be downloads from a registry which will be
mirrored/cached in every nodepool). Then all we really need to support
is a barebones image with docker and ansible installed, and that's it.
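
For illustration, the per-nodepool caching could be as simple as
pointing each gate node's docker daemon at a local mirror (hostname and
port are placeholders):

    # /etc/docker/daemon.json
    {
      "registry-mirrors": ["http://mirror.regionone.example.org:8082"]
    }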

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][deployment][kolla][tripleo][osa] Service diagnostics task force

2017-09-13 Thread Michał Jastrzębski
Hello my dearest of communities,

During the deployment tools session at the PTG we discussed the need for
deep health checking and metering of running services. It's very relevant
in the context of containers (especially k8s) and HA - things like
watchdogs, heartbeats or exposing relevant metrics (I don't want to get
into design here; suffice to say it's a non-trivial problem to solve).

We would like to start a "task force": a few volunteers from both the
deployment tool side (ops, HA) and project dev teams. We would like to
design together a proper health check mechanism for one of the projects,
to create best practices and a design that later could be implemented in
all other services.
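
As a purely illustrative sketch of the kind of mechanism I mean,
expressed in k8s terms (the endpoint, port and timings are hypothetical,
not an agreed design):

    # liveness: restart the container if the service stops answering
    livenessProbe:
      httpGet:
        path: /healthcheck   # e.g. the oslo healthcheck middleware
        port: 9292
      periodSeconds: 30
    # readiness: only route traffic once the service can actually serve
    readinessProbe:
      tcpSocket:
        port: 9292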

We would like to ask for a volunteer project team to join us and spearhead this effort.

Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [cinder] [ironic] [tripleo] PTG cross-platform meetup possibility

2017-09-08 Thread Michał Jastrzębski
Hey,

I know about the Wed meeting. This is when we wanted to discuss kolla-k8s
and tripleo shared resources, whatever they might be. I assumed the Mon
meeting is different?

On Sep 8, 2017 6:22 PM, "Richard Wellum"  wrote:

> Emilien,
>
> Can you please update this on the schedule if it's not already? The link
> to the spreadsheet is in this thread.
>
> Michal didn't seem to know about this Wednesday meeting because he replied
> to my email for Monday with a confirmation. But maybe I'm missing
> something. And we in kolla all thought it was Monday, although we searched
> for the conversation today and couldn't find anything. So I'm not sure
> everyone is synched up. Could just be me; it's been a long week.
>
> Cheers,
>
> Rich
>
> On Fri, Sep 8, 2017, 5:15 PM Vikram Hosakote (vhosakot) <
> vhosa...@cisco.com> wrote:
>
>> Cool, we’ll hold the room reservation in Durango on Monday 2-4 pm for the
>> cross-project meeting with the deployment tool groups (Kolla, Ansible,
>> TripleO, Chef, Puppet, Helm, etc).
>>
>> Regards,
>> Vikram Hosakote
>> IRC: vhosakot
>>
>> From: Emilien Macchi
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: Friday, September 8, 2017 at 4:04 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>  <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [kolla] [cinder] [ironic] [tripleo] PTG
>> cross-platform meetup possibility
>>
>>
>>
>> On Fri, Sep 8, 2017 at 12:49 PM, Richard Wellum wrote:
>> > Triple-O team - can you please confirm the Monday slot, 2-4pm still
>> > works for you?
>>
>> TripleO / Kolla interactions were scheduled on Wednesday morning in
>> the TripleO room, but I'm fine if you want to move it (just let us
>> know).
>> On Monday afternoon, the Deployment tools group (Kolla, Ansible,
>> TripleO, Chef, Puppet, Helm, etc) was supposed to meet in the
>> Packaging room and collaborate.
>>
>> All of this was scheduled a few days / weeks ago, but we're happy to
>> re-visit it. Please let us know though, because it will confuse people
>> otherwise.
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [cinder] [ironic] PTG cross-platform meetup possibility

2017-09-08 Thread Michał Jastrzębski
Awesome!

Thanks Rich and see you all in Denver!

On 8 September 2017 at 12:19, Richard Wellum  wrote:
> Room booked: 'Durango', 2-4pm for Kolla+Triple-O and 4-6 for
> Kolla+Ironic+Cinder.
>
> Please see: https://ethercalc.openstack.org/Queens-PTG-Discussion-Rooms -
> for more details.
>
> Thanks,
>
> ||Rich
>
>
> On Thu, Sep 7, 2017 at 11:09 AM Vikram Hosakote (vhosakot)
>  wrote:
>>
>> Sending the confirmed cross-project meets with the kolla community:
>>
>> Monday with Triple-O at 2 pm
>> Monday with Cinder and Ironic at 4 pm
>>
>> Regards,
>> Vikram Hosakote
>> IRC: vhosakot
>>
>> From: Jay S Bryant 
>> Reply-To: "jsbry...@electronicjungle.net" ,
>> "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Thursday, August 31, 2017 at 1:43 PM
>> To: "openstack-dev@lists.openstack.org"
>> 
>> Subject: Re: [openstack-dev] [kolla] [cinder] [ironic] PTG cross-platform
>> meetup possibility
>>
>>
>>
>> Rich,
>>
>> I can make Monday at 4 pm work.  Conflicts with some Docs discussions but
>> I can step out for a bit.
>>
>> Thanks!
>>
>> Jay
>>
>>
>>
>>
>>
>> On 8/31/2017 8:25 AM, Richard Wellum wrote:
>>
>> Hi,
>>
>>
>>
>> How does Monday at 4pm sound? Kolla already has a cross-platform
>> discussion with Triple-O at 2pm, so this would dovetail nicely.
>>
>>
>>
>> Thanks,
>>
>>
>> ||Rich
>>
>>
>>
>> On Wed, Aug 30, 2017 at 2:48 PM Ivan Kolodyazhny  wrote:
>>
>> Hi team,
>>
>>
>>
>> I'm interested in cinder containerization too. It would be great if we can
>> schedule our meetup after 3pm or even 4pm, It will increase my chances to
>> attend it.
>>
>>
>>
>> Thanks in advance.
>>
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>>
>>
>> On Wed, Aug 30, 2017 at 4:23 PM, Dmitry Tantsur 
>> wrote:
>>
>> Hi!
>>
>> I'm all for it. Monday sounds okay to me, though I'll have to manage some
>> conflicts, of course.
>>
>>
>>
>> On 08/29/2017 05:56 PM, Richard Wellum wrote:
>>
>> Hi Folks,
>>
>> Would there be some interest from Cinder and Ironic (and others of course)
>> team members to have a quick session at the PTG with the Kolla team on the
>> latest developments in Kolla (like the new kolla-ansible devmode for
>> example)?
>>
>> Also it would give the Kolla team an opportunity to hear about your teams
>> interest and experiences in containerization and what you need from Kolla
>> going forward.
>>
>> I'm thinking an hour or two on Monday afternoon, the first day of the PTG?
>>
>> Thanks,
>>
>> ||Rich
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>>
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Next 2 team meetings canceled

2017-09-04 Thread Michał Jastrzębski
Hey,

Let's cancel the meetings on 6.9 and 13.9 because of the PTG.

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG Agenda draft - action required

2017-09-04 Thread Michał Jastrzębski
Hey, I'll be there; we already have a ceph topic (including the move to L), thanks!

On 1 September 2017 at 07:39, Emilien Macchi  wrote:
> On Fri, Sep 1, 2017 at 6:20 AM, Giulio Fidente  wrote:
> [...]
>> roger, I have added to the thursday afternoon a 1h slot to discuss future
>> developments around Ceph integration
>>
>> specifically three topics:
>>
>>  - is use of jinja to create multiple ceph clusters a good idea?
>>  - upgrade ceph to luminous (maybe also in Kolla)
>>  - support multiple ceph-pools for cinder-volume
>
> Cool works for me.
> Kolla PTL in CC, to make sure it's visible.
> Michal, can you join you think? and some from your team?
>
>> ack, will do and add links to the etherpad to some LP blueprints
>>
>> thanks Emilien for setting everything up :D
>
> I appreciate your help in doing the agenda :-)
> --
> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [ptls] Install guide testing

2017-08-28 Thread Michał Jastrzębski
One thing I'd be worried about is that maintaining this doc will not only
be very costly (we're now talking about only a few services out of the
big tent...) and cover just basic architectures, but also, if someone
actually uses it, it will make things like upgrades quite hard.

If it's only meant to be a PoC installation guide, we should clearly say
so; if we mean it to be prod-ready, we need things like HA
configuration, database clustering, network architecture and so on...
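
For example, even a minimal HA setup already has knobs a prod-ready
guide would need to explain; a tiny kolla-ansible globals.yml fragment
as an illustration (the VIP value is a placeholder):

    kolla_internal_vip_address: "10.0.0.250"   # keepalived-managed VIP
    enable_haproxy: "yes"                      # front the APIs with haproxy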

On 28 August 2017 at 09:13, Jay S Bryant  wrote:
> Alex,
>
> Thank you for starting this thread.  I will bring this up with Cinder team
> and see what we can do to help.
>
> Jay
>
>
> On 8/25/2017 3:57 AM, Alexandra Settle wrote:
>
> Hi everyone,
>
>
>
> The documentation team is searching for volunteers to help test and verify
> the OpenStack installation instructions here:
> https://docs.openstack.org/install-guide/
>
>
>
> Previously this action has been undertaken by a series of volunteers, mostly
> from the documentation team. However due to the migration, and a significant
> drop in contributors, we are now seeking new individuals to help us complete
> this task.
>
>
>
> We will be tracking any work here:
> https://wiki.openstack.org/wiki/Documentation/PikeDocTesting You can see
> what we have previously done for testing here:
> https://wiki.openstack.org/wiki/Documentation/OcataDocTesting
>
>
>
> PTLs of cinder/keystone/horizon/neutron/nova/glance – Previously the
> documentation team performed the testing tasks for your respective projects
> as they lived within the openstack-manuals repo. We appreciate that you may
> or may not have the resources to continue this effort, but it would be great
> if your teams are able. Please let me know if you are able to so we can
> verify the instructions of these projects :)
>
>
>
> Thanks,
>
>
>
> Alex
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [tripleo] [openstack-ansible] [deployment] Collaboration at PTG

2017-08-25 Thread Michał Jastrzębski
We (Kolla) planned some time for that discussion :) It would be awesome
if we could have it on Mon-Tue, because Wed-Fri we'll have a
kolla-specific design room. That being said, if needed we can use it
for our cross-community discussions.

The biggest one for me is the new direction of tripleo (k8s+ansible) and
how that corresponds to kolla-k8s (k8s+ansible).

On 25 August 2017 at 15:53, Jean-Philippe Evrard
 wrote:
> Hello Emilien,
>
> The Discussion room is a good idea. I like it.
> Most of the OpenStack-Ansible crew will be available the whole week, so we
> can even think of doing a conversation outside the Wed-Friday timeframe.
>
> If you/we all have enough time, maybe we could organise two sessions,
> probably with different formats?
> For example, a brainstorming session about how we collaborated on previous
> cycles and how we could collaborate in the future, and another session with
> the real action points based on the first conversation?
>
> On top of that, I have extra points I'd like to discuss with you:
> - Architecture of LB + web server + uwsgi
> - Possibility of sharing infrastructure (mariadb/rabbitmq/...)
> experience/code between projects.
>
> Thank you in advance.
>
> Best regards,
> JP
>
> On Fri, Aug 25, 2017 at 8:16 PM, Emilien Macchi  wrote:
>>
>> Cool, sounds like some people are interested (I haven't hear from
>> Kolla yet but I'm sure they are as well).
>>
>> I was wondering if we should take benefit of Discussion Rooms, useful
>> for inter-projects topics:
>> https://ethercalc.openstack.org/Queens-PTG-Discussion-Rooms
>>
>> There is still some place, let me know what you think and we can block
>> a slot (maybe 2h?)
>> I want to hear from Kolla and OpenStack Ansible at least and know if
>> you have schedule constraints otherwise I'll go ahead and block a
>> slot.
>>
>> Thanks,
>>
>> On Fri, Aug 18, 2017 at 4:37 AM, Flavio Percoco  wrote:
>> > On 17/08/17 10:24 -0500, Major Hayden wrote:
>> >>
>> >> On 08/17/2017 09:30 AM, Emilien Macchi wrote:
>> >>>
>> >>> If you're working on Kolla / OpenStack-Ansible - please let us know if
>> >>> you have specific constraints on the schedule, so we can maybe block a
>> >>> timeslot in the agenda from now.
>> >>> We'll have a "Packaging" room which is reserved for all topics related
>> >>> to OpenStack deployments, so we can use this one.
>> >>
>> >>
>> >> I don't have any constraints (that I'm aware of), but I'd be interested
>> >> in
>> >> participating!  Performance in the gate jobs has been one of my tasks
>> >> lately
>> >> and I'd like to see if we can collaborate there to make improvements
>> >> without
>> >> ruining infra's day. ;)
>> >>
>> >> As long as you can put up with a few Dad jokes, I'll be there.
>> >
>> >
>> > ++ I'm interested in this topic too!
>> >
>> > Flavio
>> >
>> > --
>> > @flaper87
>> > Flavio Percoco
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Kolla is now stable:follows-policy project

2017-08-18 Thread Michał Jastrzębski
As of today Kolla has this tag. That doesn't really mean we change
anything in our model, as the tag is given to projects that already
follow the rules of the stable policy, but it makes it official :)

Keep up the good work, Kolla team!

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla-kubernetes] Proposing Rich Wellum to core team

2017-08-11 Thread Michał Jastrzębski
Hello,

It's my pleasure to start another core team vote, this time for our
colleague rwellum. I propose that he join the kolla-kubernetes team.

This is my +1 vote. Every kolla-kubernetes core has a vote, and it can
be veto'ed.

Voting will last 2 weeks and will end on the 25th of August.

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][PTL][elections] Candidacy for Kolla PTL

2017-08-02 Thread Michał Jastrzębski
Hello everyone,

Once more unto the breach, dear friends! I would like to run for PTL again
for Queens. Pike was a very exciting release for Kolla. With a strong focus
on Kolla-Kubernetes, Kolla-Ansible getting more and more production
deployments, and Kolla images being successfully consumed by projects
outside of Kolla like TripleO or OpenStack-Helm, I think this was quite a
success for us.

We managed to improve one of the biggest pain points we saw in Kolla -
gates. We still have a long way to go, but progress is significant.

I'd love to help our community make Queens even better. We have several
exciting features on our plates, like an automated push mechanism for
images or an orchestration layer for kolla-kubernetes.

We also need to focus on another big pain point of Kolla - documentation.

I would like us to strengthen cooperation with the Kubernetes community,
and I'm a strong believer that the best bridge between communities is
built on top of common technical issues and mutual help. I think
Kolla-Kubernetes naturally helps with this cooperation, and we can use it
even more.

Per request, I also add a small haiku to my nomination ;)

Glues Stack together
Project by great people made
Queens approaching soon

Regards,
Michal inc0 Jastrzebski

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-24 Thread Michał Jastrzębski
>> ...
>> DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
>> DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-keystone:latest
>> ...

That's a strange construction - are you sure, guys, that you don't want
to separate the address:port from the namespace (tripleoupstream here)?

Say you'd like to set up docker to point to an insecure registry (add
--insecure-registry to the systemd conf); that flag takes addr:port, not
the whole thing.
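
To illustrate, a hypothetical split of the quoted parameters, plus the
matching docker daemon setting (the parameter names are my sketch, not
the actual TripleO interface):

    DockerInsecureRegistryAddress: 172.19.0.2:8787
    DockerNamespace: tripleoupstream
    DockerKeystoneImage: centos-binary-keystone:latest

    # systemd drop-in, e.g. /etc/systemd/system/docker.service.d/registry.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd --insecure-registry 172.19.0.2:8787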

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Michał Jastrzębski
On 17 July 2017 at 10:13, Emilien Macchi  wrote:
> On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
>> On 14/07/17 08:08 -0700, Emilien Macchi wrote:
>>>
>>> On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:


 Greetings,

 As some of you know, I've been working on the second phase of TripleO's
 containerization effort. This phase is about migrating the docker based
 deployment onto Kubernetes.

 This phase requires work on several areas: Kubernetes deployment,
 OpenStack
 deployment on Kubernetes, configuration management, etc. While I've been
 diving
 into all of these areas, this email is about the second point, OpenStack
 deployment on Kubernetes.

 There are several tools we could use for this task. kolla-kubernetes,
 openstack-helm, ansible roles, among others. I've looked into these tools
 and
 I've come to the conclusion that TripleO would be better of by having
 ansible
 roles that would allow for deploying OpenStack services on Kubernetes.

 The existing solutions in the OpenStack community require using Helm.
 While
 I
 like Helm and both, kolla-kubernetes and openstack-helm OpenStack
 projects,
 I
 believe using any of them would add an extra layer of complexity to
 TripleO,
 which is something the team has been fighting for years years -
 especially
 now
 that the snowball is being chopped off.

 Adopting any of the existing projects in the OpenStack community would
 require
 TripleO to also write the logic to manage those projects. For example, in
 the
 case of openstack-helm, the TripleO team would have to write either
 ansible
 roles or heat templates to manage - install, remove, upgrade - the charts
 (I'm
 happy to discuss this point further but I'm keeping it at a high-level
 on
 purpose for the sake of not writing a 10k-words-long email).

 James Slagle sent an email[0], a couple of days ago, to form TripleO
 plans
 around ansible. One take-away from this thread is that TripleO is
 adopting
 ansible more and more, which is great and it fits perfectly with the
 conclusion
 I reached.

 Now, what this work means is that we would have to write an ansible role
 for
 each service that will deploy the service on a Kubernetes cluster.
 Ideally
 these
 roles will also generate the configuration files (removing the need of
 puppet
 entirely) and they would manage the lifecycle. The roles would be
 isolated
 and
 this will reduce the need of TripleO Heat templates. Doing this would
 give
 TripleO full control on the deployment process too.

 In addition, we could also write Ansible Playbook Bundles to contain
 these
 roles
 and run them using the existing docker-cmd implementation that is coming
 out
 in
 Pike (you can find a PoC/example of this in this repo[1]).

 Now, I do realize the amount of work this implies and that this is my
 opinion/conclusion. I'm sending this email out to kick-off the discussion
 and
 gather thoughts and opinions from the rest of the community.

 Finally, what I really like about writing pure ansible roles is that
 ansible
 is
 a known, powerfull, tool that has been adopted by many operators already.
 It'll
 provide the flexibility needed and, if structured correctly, it'll allow
 for
 operators (and other teams) to just use the parts they need/want without
 depending on the full-stack. I like the idea of being able to separate
 concerns
 in the deployment workflow and the idea of making it simple for users of
 TripleO
 to do the same at runtime. Unfortunately, going down this road means that
 my
 hope of creating a field where we could collaborate even more with other
 deployment tools will be a bit limited but I'm confident the result would
 also
 be useful for others and that we all will benefit from it... My hopes
 might
 be a
 bit naive *shrugs*
>>>
>>>
>>> Of course I'm biased since I've been (a little) involved in that work
>>> but I like the idea of :
>>>
>>> - Moving forward with our containerization. docker-cmd will help us
>>> for sure for this transition (I insist on the fact TripleO is a
>>> product that you can upgrade and we try to make it smooth for our
>>> operators), so we can't just trash everything and switch to a new
>>> tool. I think the approach that we're taking is great and made of baby
>>> steps where we try to solve different problems.
>>> - Using more Ansible - the right way - when it makes sense : with the
>>> TripleO containerization, we only use Puppet for Configuration
>>> Management, managing a few resources but not for orchestration (or not
>>> all the 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Michał Jastrzębski
Guys, you just described Kolla-Kubernetes, pretty much... how about
we join efforts and work towards this goal together?
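
For comparison, kolla-kubernetes drives Helm from Ansible in roughly
this shape - a rough sketch only, with a hypothetical module name and
arguments (mirroring the ansible+helm PoC mentioned below):

    - name: Install the mariadb chart via helm
      helm:
        host: "{{ tiller_host }}"
        chart:
          name: mariadb
          source:
            type: repo
            location: "{{ charts_repo_url }}"
        state: installed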

On 14 July 2017 at 08:43, Flavio Percoco  wrote:
> On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote:
>>
>> On 14.07.2017 11:17, Flavio Percoco wrote:
>>>
>>>
>>> Greetings,
>>>
>>> As some of you know, I've been working on the second phase of TripleO's
>>> containerization effort. This phase is about migrating the docker based
>>> deployment onto Kubernetes.
>>>
>>> This phase requires work on several areas: Kubernetes deployment,
>>> OpenStack
>>> deployment on Kubernetes, configuration management, etc. While I've been
>>> diving
>>> into all of these areas, this email is about the second point, OpenStack
>>> deployment on Kubernetes.
>>>
>>> There are several tools we could use for this task. kolla-kubernetes,
>>> openstack-helm, ansible roles, among others. I've looked into these
>>> tools and
>>> I've come to the conclusion that TripleO would be better of by having
>>> ansible
>>> roles that would allow for deploying OpenStack services on Kubernetes.
>>>
>>> The existing solutions in the OpenStack community require using Helm.
>>> While I
>>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>>> projects, I
>>> believe using any of them would add an extra layer of complexity to
>>> TripleO,
>>
>>
>> It's hard to estimate that complexity w/o having a PoC of such an
>> integration. We should come up with a final choice once we have it done.
>>
>> My vote would go for investing engineering resources into solutions that
>> have problems already solved, even by the price of added complexity (but
>> that sort of depends...). Added complexity may be compensated with
>> removed complexity (like those client -> Mistral -> Heat -> Mistral ->
>> Ansible manipulations discussed in the mail thread mentioned below [0])
>
>
> I agree it's hard to estimate but you gotta draw the line somewhere. I
> actually
> spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote
> the
> pyhelm lib (took some code from the openstack-helm folks) and I wrote the
> ansible helm module myself. I'd say I've spent enough time on this research.
>
> I don't think getting a full PoC working is worth it as that will require
> way
> more work for not much value since we can anticipate some of the
> complexities
> already.
>
> As far as the complexity comment goes, I disagree with you. I don't think
> you're
> evaluating the amount of complexity that there *IS* already in TripleO and
> how
> adding more complexity (layers, states, services) would make things worse
> for
> not much extra value.
>
> By all means, I might be wrong here so, do let me know if you're seeing
> something I'm not.
> Flavio
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Michał Jastrzębski
Hey,

I'll just throw a grenade (pun intended) into your discussion - sorry!
How about kolla-kubernetes? State awareness is done by kubernetes, it's
designed for containers, and we already have most of the services
ready; we'll be running ansible inside containers on top of k8s
for all the things that k8s is not natively good at. It sounds like
what you describe, just with k8s swapped in for Heat.

Cheers,
Michal

On 10 July 2017 at 08:37, Lars Kellogg-Stedman  wrote:
> On Fri, Jul 7, 2017 at 1:50 PM, James Slagle  wrote:
>>
>> There are also some ideas forming around pulling the Ansible playbooks
>>
>> and vars out of Heat so that they can be rerun (or run initially)
>> independently from the Heat SoftwareDeployment delivery mechanism:
>
>
> I think the closer we can come to "the operator runs ansible-playbook to
> configure the overcloud" the better, but not because I think Ansible is
> inherently a great tool: rather, I think the many layers of indirection in
> our existing model make error reporting and diagnosis much more complicated
> that it needs to be.  Combined with Puppet's "fail as late as possible"
> model, this means that (a) operators waste time waiting for a deployment
> that is ultimately going to fail but hasn't yet, and (b) when it does fail,
> they need relatively intimate knowledge of our deployment tools to backtrack
> through logs and find the root cause of the failure.
>
> If we can offer a deployment mode that reduces the number of layers between
> the operator and the actions being performed on the hosts I think we would
> win on both fronts: faster failures and reporting errors as close as
> possible to the actual problem will result in less frustration across the
> board.
>
> I do like Steve's suggestion of a split model where Heat is responsible for
> instantiating OpenStack resources while Ansible is used to perform host
> configuration tasks.  Despite all the work done on Ansible's OpenStack
> modules, they feel inflexible and frustrating to work with when compared to
> Heat's state-aware, dependency ordered deployments.  A solution that allows
> Heat to output configuration that can subsequently be consumed by Ansible --
> either running manually or perhaps via Mistral for API-driven-deployments --
> seems like an excellent goal.  Using Heat as a "front-end" to the process
> means that we get to keep the parameter validation and documentation that is
> missing in Ansible, while still following the Unix philosophy of giving you
> enough rope to hang yourself if you really want it.
>
> --
> Lars Kellogg-Stedman 
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-08 Thread Michał Jastrzębski
Hello,

Unfortunately we still don't have a proper dockerhub uploading
mechanism; that's in progress. For now you need to build your own
images; here's the doc for that:
https://docs.openstack.org/kolla/latest/image-building.html
Also feel free to join us on #openstack-kolla on irc if you have further
questions.
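
A minimal sketch of that build flow (the image names, registry address
and flags shown are examples; check the doc above for what your release
supports):

    pip install kolla
    # build just the images you need:
    kolla-build --base centos --type binary glance cinder
    # or build and push to your own registry instead of dockerhub:
    kolla-build --base centos --type binary \
        --registry 192.168.1.100:5000 --push glance cinder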

Cheers,
Michal

On 8 July 2017 at 11:03, Leni Kadali Mutungi  wrote:
> Hello all.
>
> I am trying to use the Cinder and Glance Docker images you provide in
> relation to the setup here:
> http://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/
>
> I tried to run `sudo docker pull
> kollaglue/centos-rdo-glance-registry:latest` and got a "not found" error.
> I thought it might be possible to use a Dockerfile to spin up
> an equivalent of it, so I would like some guidance on how to go about
> doing that. Best practices and so on. Alternatively, if it is
> possible, could you point me in the direction of the equivalent images
> mentioned in the guides, if they have been superseded by something else?
> Thanks.
>
> CCing the oVirt users and devel lists to see if anyone has experienced
> something similar.
>
> --
> - Warm regards
> Leni Kadali Mutungi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Today meeting cancelled due to lack of agenda

2017-07-05 Thread Michał Jastrzębski
Some time to kick start brains after 4th of July:)

Regards,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-28 Thread Michał Jastrzębski
Aaaand it's done. Congrats Surya and welcome to core team!

On 27 June 2017 at 19:56, zhubingbing <zhu.bingb...@99cloud.net> wrote:
>
>
>
> +1
>
>>> -Original Message-
>>> From: Michał Jastrzębski [mailto:inc...@gmail.com]
>>> Sent: Wednesday, June 14, 2017 10:46 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> <openstack-dev@lists.openstack.org>
>>> Subject: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya)
>>> for
>>> core
>>>
>>> Hello,
>>>
>>> With great pleasure I'm kicking off another core voting to kolla-ansible
>>> and
>>> kolla teams:) this one is about spsurya. Voting will be open for 2 weeks
>>> (till
>>> 28th Jun).
>>>
>>> Consider this mail my +1 vote, you know the drill:)
>>>
>>> Regards,
>>> Michal
>>>
>>>
>>
>>-
>>duonghq
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-23 Thread Michał Jastrzębski
Great idea!

I would also throw in another issue new people often have (I had it too):
namely, what to contribute. Lots of people want to do something but don't
quite know where to start.
So, a few ideas for a start:
* A list of triaged bugs
* A list of work items from large blueprints

Thoughts?
Michal

On 23 June 2017 at 13:17, Mike Perez  wrote:
> Hello all,
>
> Every month we have people asking on IRC or the dev mailing list having
> interest in working on OpenStack, and sometimes they're given different
> answers from people, or worse, no answer at all.
>
> Suggestion: let's combine our efforts to create some common
> documentation so that all teams in OpenStack can benefit.
>
> First it’s important to note that we’re not just talking about code projects
> here. OpenStack contributions come in many forms such as running meet ups,
> identifying use cases (product working group), documentation, testing, etc.
> We want to make sure those potential contributors feel welcomed too!
>
> What is common documentation? Things like setting up Git, the many accounts
> you need to set up to contribute (gerrit, launchpad, OpenStack foundation
> account). Not all teams will use some common documentation, but the point is
> one or more projects will use them. Having the common documentation worked
> on by various projects will help prevent duplicated effort and
> inconsistent documentation, and hopefully yield more accurate information.
>
> A team might use special tools to do their work. These can also be
> integrated in this idea as well.
>
> Once we have common documentation we can have something like:
> 1. Choose your own adventure: I want to contribute by code
> 2. What service type are you interested in? (Database, Block storage,
> compute)
> 3. Here’s step-by-step common documentation for setting up Git, IRC,
> Mailing Lists, Accounts, etc.
> 4. A service type project might choose to also include additional
> documentation in that flow for special tools, etc.
>
> Important things to note in this flow:
> * How do you want to contribute?
> * Here are **clear** names that identify the team. Not code names like
> Cloud Kitty, Cinder, etc.
> * The documentation should really aim to not be daunting:
> * Someone should be able to glance at it and feel like they can finish
> things in five minutes. Not be yet another tab left in their browser that
> they’ll eventually forget about
> * No wall of text!
> * Use screen shots
> * Avoid covering every issue you could hit along the way.
>
> ## Examples of More Simple Documentation
> I worked on some documentation for the Upstream University preparation that
> has received excellent feedback and comes close to these suggestions:
> * IRC [1]
> * Git [2]
> * Account Setup [3]
>
> ## 500-Foot Bird's-Eye View
> There will be a Contributor landing page on the openstack.org website.
> Existing contributors will find reference links to quickly jump to things.
> New contributors will find a banner at the top of the page to direct them to
> the choose your own adventure to contributing to OpenStack, with ordered
> documentation flow that reuses existing documentation when necessary.
> Picture also a progress bar somewhere to show how close you are to being
> ready to contribute to whatever team. Of course there are a lot of other
> fancy things we can come up with, but I think getting something up as an
> initial pass would be better than what we have today.
>
> Here's an example of what the sections/chapters could look like:
>
> - Code
> * Volumes (Cinder)
>  * IRC
>  * Git
>  * Account Setup
>  * Generating Configs
> * Compute (Nova)
>  * IRC
>  * Git
>  * Account Setup
> * Something about hypervisors (matrix?)
> -  Use Cases
> * Products (Product working group)
> * IRC
> * Git
> * Use Case format
>
> There are some rough mock up ideas [4]. Probably Sphinx will be fine for
> this. Potentially we could use this content for conference lunch and learns,
> upstream university, and the on-boarding events at the Forum. What do you
> all think?
>
> [1] - http://docs.openstack.org/upstream-training/irc.html
> [2] - http://docs.openstack.org/upstream-training/git.html
> [3] - http://docs.openstack.org/upstream-training/accounts.html
> [4] -
> https://www.dropbox.com/s/o46xh1cp0sv0045/OpenStack%20contributor%20portal.pdf?dl=0
>
> —
>
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Michał Jastrzębski
One of the key components which, imho, made SIGs successful in k8s is
the infrastructure behind them.

When someone files an issue, they can tag a SIG on it. Everyone in
that SIG is then notified that there is an issue they might be
interested in, so they check it out and provide feedback. That also
creates additional familiarity with the dev toolset for non-dev SIG
members. I think what would be important for OpenStack SIGs to be
successful is connecting SIGs to both Launchpad and Gerrit.

For example:
A new blueprint is introduced to kolla-ansible that allows easy PCI
passthrough; we tag the HPC and Scientific SIGs, and everyone is notified
(via mail) that there is this thing in project Kolla they might want
to check out.
A new change is proposed that addresses an important issue - we also tag
SIGs to encourage their reviews of the actual implementation.

I think GitHub gives k8s a good all-in-one toolset for SIG mgmt, issue
mgmt, code reviews and all. With our diverse tools this will be more
challenging, but important. And yes, we need SIG people to have
visibility into Gerrit. If you ask me what's the biggest problem in
OpenStack, I'd say that the operator community doesn't review
implementation details enough. Having notifications pushed to them would
hopefully change this a little bit.


On 21 June 2017 at 09:55, Matt Riedemann  wrote:
> On 6/21/2017 11:17 AM, Shamail Tahir wrote:
>>
>>
>>
>> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez > > wrote:
>>
>> Shamail Tahir wrote:
>> > In the past, governance has helped (on the UC WG side) to reduce
>> > overlaps/duplication in WGs chartered for similar objectives. I would
>> > like to understand how we will handle this (if at all) with the new
>> > SIG proposal?
>>
>> I tend to think that any overlap/duplication would get solved naturally,
>> without having to force everyone through an application process that may
>> discourage natural emergence of such groups. I feel like an application
>> process would be premature optimization. We can always encourage groups
>> to merge (or clean them up) after the fact. How many
>> overlapping/duplicative groups did you end up having?
>>
>>
>> Fair point, it wasn't many. The reason I recalled this effort was because
>> we had to go through the exercise after the fact and that made the volume of
>> WGs to review much larger than had we asked the purpose whenever they were
>> created. As long as we check back periodically and not let the work for
>> validation/clean up pile up then this is probably a non-issue.
>>
>>
>> > Also, do we have to replace WGs as a concept or could SIG
>> > augment them? One suggestion I have would be to keep projects on the
>> TC
>> > side and WGs on the UC side and then allow for spin-up/spin-down of
>> SIGs
>> > as needed for accomplishing specific goals/tasks (picture of a
>> diagram
>> > I created at the Forum[1]).
>>
>> I feel like most groups should be inclusive of all community, so I'd
>> rather see the SIGs being the default, and ops-specific or
>> dev-specific
>> groups the exception. To come back to my Public Cloud WG example, you
>> need to have devs and ops in the same group in the first place before
>> you would spin-up a "address scalability" SIG. Why not just have a
>> Public Cloud SIG in the first place?
>>
>>
>> +1, I interpreted originally that each use-case would be a SIG versus the
>> SIG being able to be segment oriented (in which multiple use-cases could be
>> pursued)
>>
>>
>>  > [...]
>> > Finally, how will this change impact the ATC/AUC status of the SIG
>> > members for voting rights in the TC/UC elections?
>>
>> There are various options. Currently you give UC WG leads the AUC
>> status. We could give any SIG lead both statuses. Or only give the AUC
>> status to a subset of SIGs that the UC deems appropriate. It's really
>> an
>> implementation detail imho. (Also I would expect any SIG lead to
>> already
>> be both AUC and ATC somehow anyway, so that may be a non-issue).
>>
>>
>> We can discuss this later because it really is an implementation detail.
>> Thanks for the answers.
>>
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> --
>> Thanks,
>> Shamail Tahir
>> t: @ShamailXD
>> tz: Eastern Time
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> 

Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-20 Thread Michał Jastrzębski
On 19 June 2017 at 06:05, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-06-16 15:50:54 -0700:
>> So I'm trying to figure out how to actually use it.
>>
>> We (and any other container based deploy..) will run into some
>> chicken/egg problem - you need to deploy container to generate big
>> yaml with defaults, then you need to overload it with your
>
> The config schema file (the "big YAML with defaults") should be part of
> the packaged software, so the deployment tool shouldn't need to generate
> it unless you're handling drivers that are not included in tree.

Right, that's what I was missing. I guess we can generate these at
build time without big issues; then the schema will be embedded into the
container. It shouldn't be too hard a change, and it would work for both
source and binary images.
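For instance, something like this at image build time (just a sketch, using
the generator command quoted further down this thread, with cinder as the
example namespace):

    $ oslo-config-generator --namespace cinder --format yaml \
        > /etc/cinder/cinder-schema.yaml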
>> configurations, validate if they're not deprecated, run container with
>
> It doesn't do it today, but the thing that converts the input data to
> the INI file could automatically translate old option names to their new
> names.
>
>> this ansible role (or module...really doesn't matter), spit out final
>
> Why does the config file need to be generated inside a container?

Outside of the container you don't have oslo or nova (Python libs), so to
get access to these you need to do it inside the container.

>> config, lay it down, deploy container again. And that will have to be
>> done for every host class (as configs might differ host to host). Imho
>> a bit too much for this to be appealing (but I might be wrong). I'd
>> much rather have:
>> 1. Yaml as input to oslo.config instead of broken ini
>
> I'm not opposed to switching to YAML, but it's a bit more involved than
> just adding support in the parser. All of the work that has been done on
> generating sample default files and documentation needs to be updated to
> support YAML. We need a migration path to move everyone from INI to
> YAML. And we need to update devstack and all of its plugins to edit the
> new file format. There are probably more tasks involved in the
> migration. I'm dealing with a couple of other projects right now, and
> don't have time to plan all of that out myself. If someone else wants to
> pick it up, I can help with reviews on the spec and code changes.

Switching outright is a big no; everyone would hate us with emotion as pure
as mountain spring water. The idea is to support both at the same time,
which makes it slightly more complex. We could make the full switch after a
few releases of deprecation, I guess. Anyway, agreed, lots of work.

>
>> 2. Validator to throw an error if one of our regular,
>> template-rendered, configs is deprecated
>>
>> We can run this validator in gate to have quick feedback when
>> something gets deprecated.
>>
>> Thoughts?
>> Michal
>>
>> On 16 June 2017 at 13:24, Emilien Macchi  wrote:
>> > On Fri, Jun 16, 2017 at 11:09 AM, Jiří Stránský  wrote:
>> >> On 15.6.2017 19:06, Emilien Macchi wrote:
>> >>>
>> >>> I missed [tripleo] tag.
>> >>>
>> >>> On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi 
>> >>> wrote:
>> 
>>  If you haven't followed the "Configuration management with etcd /
>>  confd" thread [1], Doug found out that using confd to generate
>>  configuration files wouldn't work for the Cinder case where we don't
>>  know in advance of the deployment what settings to tell confd to look
>>  at.
>>  We are still looking for a generic way to generate *.conf files for
>>  OpenStack, that would be usable by Deployment tools and operators.
>>  Right now, Doug and I are investigating some tooling that would be
>>  useful to achieve this goal.
>> 
>>  Doug has prototyped an Ansible role that would generate configuration
>>  files by consuming 2 things:
>> 
>>  * Configuration schema, generated by Ben's work with Machine Readable
>>  Sample Config.
>> $ oslo-config-generator --namespace cinder --format yaml >
>>  cinder-schema.yaml
>> 
>>  It also needs: https://review.openstack.org/#/c/474306/ to generate
>>  some extra data not included in the original version.
>> 
>>  * Parameter values provided in config_data directly in the playbook:
>>  config_data:
>>    DEFAULT:
>>      transport_url: rabbit://user:password@hostname
>>      verbose: true
>> 
>>  There are 2 options disabled by default but which would be useful for
>>  production environments:
>>  * Set to true to always show all configuration values:
>>  config_show_defaults
>>  * Set to true to show the help text: config_show_help: true
>> 
>>  The Ansible module is available on github:
>>  https://github.com/dhellmann/oslo-config-ansible
>> 
>>  To try this out, just run:
>> $ ansible-playbook ./playbook.yml
>> 
>>  You can quickly see the output of cinder.conf:
>>   https://clbin.com/HmS58
>> 
>> 

Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-16 Thread Michał Jastrzębski
So I'm trying to figure out how to actually use it.

We (and any other container-based deployment...) will run into a
chicken-and-egg problem: you need to deploy a container to generate the big
YAML with defaults, then you need to overload it with your
configurations, validate that they're not deprecated, run a container with
this Ansible role (or module... really doesn't matter), spit out the final
config, lay it down, and deploy the container again. And that will have to
be done for every host class (as configs might differ host to host). Imho
that's a bit too much for this to be appealing (but I might be wrong). I'd
much rather have:
1. YAML as input to oslo.config instead of broken INI
2. A validator that throws an error if one of our regular,
template-rendered configs is deprecated

We can run this validator in the gate to get quick feedback when
something gets deprecated.
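To illustrate point 1: the data we template into INI today, e.g.

    [DEFAULT]
    transport_url = rabbit://user:password@hostname
    verbose = true

could instead be fed to oslo.config as YAML (purely illustrative, reusing
the config_data example from Doug's prototype):

    DEFAULT:
      transport_url: rabbit://user:password@hostname
      verbose: true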

Thoughts?
Michal

On 16 June 2017 at 13:24, Emilien Macchi  wrote:
> On Fri, Jun 16, 2017 at 11:09 AM, Jiří Stránský  wrote:
>> On 15.6.2017 19:06, Emilien Macchi wrote:
>>>
>>> I missed [tripleo] tag.
>>>
>>> On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi 
>>> wrote:

 If you haven't followed the "Configuration management with etcd /
 confd" thread [1], Doug found out that using confd to generate
 configuration files wouldn't work for the Cinder case where we don't
 know in advance of the deployment what settings to tell confd to look
 at.
 We are still looking for a generic way to generate *.conf files for
 OpenStack, that would be usable by Deployment tools and operators.
 Right now, Doug and I are investigating some tooling that would be
 useful to achieve this goal.

 Doug has prototyped an Ansible role that would generate configuration
 files by consuming 2 things:

 * Configuration schema, generated by Ben's work with Machine Readable
 Sample Config.
$ oslo-config-generator --namespace cinder --format yaml >
 cinder-schema.yaml

 It also needs: https://review.openstack.org/#/c/474306/ to generate
 some extra data not included in the original version.

 * Parameter values provided in config_data directly in the playbook:
 config_data:
   DEFAULT:
     transport_url: rabbit://user:password@hostname
     verbose: true

 There are 2 options disabled by default but which would be useful for
 production environments:
 * Set to true to always show all configuration values:
 config_show_defaults
 * Set to true to show the help text: config_show_help: true

 The Ansible module is available on github:
 https://github.com/dhellmann/oslo-config-ansible

 To try this out, just run:
$ ansible-playbook ./playbook.yml

 You can quickly see the output of cinder.conf:
  https://clbin.com/HmS58


 What are the next steps:

 * Getting feedback from Deployment Tools and operators on the concept
 of this module.
Maybe this module could replace what is done by Kolla with
 merge_configs and OpenStack Ansible with config_template.
 * On the TripleO side, we would like to see if this module could
 replace the Puppet OpenStack modules that are now mostly used for
 generating configuration files for containers.
A transition path would be having Heat to generate Ansible vars
 files and give it to this module. We could integrate the playbook into
 a new task in the composable services, something like
"os_gen_config_tasks", a bit like we already have for upgrade tasks,
 also driven by Ansible.
>>
>>
>> This sounds good to me, though one issue i can presently see is that Puppet
>> modules sometimes contain quite a bit of data processing logic ("smart"
>> variables which map 1-to-N rather than 1-to-1 to actual config values, and
>> often not just in openstack service configs, e.g. puppet-nova also
>> configures libvirt, etc.). Also we use some non-config aspects from the
>> Puppet modules (e.g. seeding Keystone tenants/services/endpoints/...). We'd
>> need to implement this functionality elsewhere when replacing the Puppet
>> modules. Not a blocker, but something to keep in mind.
>
> 2 interesting things:
>
> - For the logic that are done by puppet modules for some parameters:
> yes I agree, this problem isn't solved now. This thread talks about
> config management with some data as input; it's a very small step, I
> know, but it's on purpose.
>   Once we figure how to do that, we can think about the data
> generation and where to put the logic (I think the logic is too
> opinionated to be in a common project, but I might be wrong).
>
> - Things like libvirt, mysql, etc. will be managed by something other
> than Puppet, I think; this is off topic for now. For Keystone
> resources, same thing, we could use some native Python clients or
> Ansible modules if we switch to Ansible.

Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Michał Jastrzębski
Since there are 2 topics that are very, very important to me:
1. the binary resolution waiting for votes
2. the kolla stable:follows-policy tag

Is there anything I can do to help with either?

On 16 June 2017 at 09:23, Thierry Carrez  wrote:
> Clay Gerrard wrote:
>> I'm loving this new ML thing the TC is doing!  Like... I'm not going to
>> come to the meeting.  I'm not a helpful person in general and probably
>> wouldn't have anything productive to say.
>>
>> But I love the *idea* that I know *when and where* this is being decided
>> so that if I *did* care enough about community goals to come make a
>> stink about it I know exactly what I should do - _show up and say my
>> piece_!  Just this *idea* is going to help a *ton* later when John tells
>> me "shut up clay; just review the patch" [1] - because if I had
>> something to say about it i should have been there when it was time to
>> say something about it!
>
> FWIW the "decision" won't be made at the meeting, but we'll try to reach
> consensus on the set of goals we find reasonable to propose. Expect
> another heated thread as a result of the meeting :)
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Michał Jastrzębski
First of all, we definitely need that distinction to be clear.
Second, what are the incentives to actually be an OpenStack project?
1. TC oversight - it's more a requirement than an incentive
2. PTG space - definitely an incentive
...anything else?

The TC has an important role; we need oversight to keep
OpenStack tidy, working together, following the four opens and all that.
But to agree to TC oversight (which you could argue makes life harder -
more rules to adhere to) a project has to be an OpenStack project. How
do we encourage projects to join?

On 15 June 2017 at 10:17, Davanum Srinivas  wrote:
> On Thu, Jun 15, 2017 at 1:01 PM, gordon chung  wrote:
>>
>>
>> On 15/06/17 11:28 AM, Davanum Srinivas wrote:
>>> The purpose (my 2 cents) is to highlight what projects are under
>>> governance and those that are not.
>>
>> going down the rabbit hole, what does it mean to be under governance?
>> projects that want to use the openstack brand and were, at the time of
>> acceptance, supported some subset[1] of: 'open',
> [DIMS] Yes,
>> had some testing,
> [DIMS] Yes
>> supported keystone,
> [DIMS] No. It was never mandatory
>> had a human resource?
> [DIMS] Yes, hopefully a bunch of people work on it
>> i don't really see how this
>> differs from big tent?
> [DIMS] Tons of folks confused about "Big-Tent", folks are confusing
> that label with "projects under governance".
>> seems more like the same but without the
>> 'big-tent' stigma?
> [DIMS] Not sure about stigma. The label is useless right now.
>
>
>
>>
>> are we hoping openstack foundation to be a cloud-specific apache
>> foundation? maybe it already is, and if so, i don't really understand
>> the additional labeling we're trying to achieve.
>>
>> [1]
>> https://governance.openstack.org/tc/reference/new-projects-requirements.html
>>
>> --
>> gord
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-14 Thread Michał Jastrzębski
Hello,

With great pleasure I'm kicking off another core vote for the
kolla-ansible and kolla teams :) This one is about spsurya. Voting will
be open for 2 weeks (till Jun 28th).

Consider this mail my +1 vote; you know the drill :)

Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Michał Jastrzębski
On 8 June 2017 at 09:50, Michał Jastrzębski <inc...@gmail.com> wrote:
> On 8 June 2017 at 09:27, Flavio Percoco <fla...@redhat.com> wrote:
>> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>>>
>>> On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>>>>
>>>> On 06.06.2017 18:08, Emilien Macchi wrote:
>>>>>
>>>>> Another benefit is that confd will generate a configuration file when
>>>>> the application will start. So if etcd is down *after* the app
>>>>> startup, it shouldn't break the service restart if we don't ask confd
>>>>> to re-generate the config. It's good for operators who were concerned
>>>>> about the fact the infrastructure would rely on etcd. In that case, we
>>>>> would only need etcd at the initial deployment (and during lifecycle
>>>>> actions like upgrades, etc).
>>>>>
>>>>> The downside is that in the case of containers, they would still have
>>>>> a configuration file within the container, and the whole goal of this
>>>>> feature was to externalize configuration data and stop having
>>>>> configuration files.
>>>>
>>>>
>>>> It doesn't look like a strict requirement. Those configs may (and should) be
>>>> bind-mounted into containers, as hostpath volumes. Or, am I missing
>>>> something what *does* make embedded configs a strict requirement?..
>>>
>>>
>>> mmh, one thing I liked about this effort was the possibility of stopping
>>> bind-mounting config files into the containers. I'd rather find a way to
>>> not need any bindmount and have the services get their configs themselves.
>>
>>
>> Probably sent too early!
>>
>> If we're not talking about OpenStack containers running in a COE, I guess
>> this is fine. For k8s-based deployments, I think I'd prefer having
>> installers creating configmaps directly and using that. The reason is that
>> depending on files that are in the host is not ideal for these scenarios.
>> I hate this idea because it makes deployments inconsistent and I don't
>> want that.
>
> Well, I disagree. If we're doing this we're essentially getting rid of
> "files" at all. It might actually be easier to handle from COE than
> configmap, as configmap has to be generated and when you get to host
> specific things it's quite a pain to handle. I, for one, would happily
> use a central DB for config options if we define the schema correctly.
>
> That being said defining schema correctly is quite a challenge. Few
> hard cases I see right now can be found in single use case - PCI
> Passthrough
>
> 1. I have multiple PCI devices in host, I need to specify list of them
> 2. PCI buses differ host to host, I need to specify groups of hosts
> that will share same bus configuration and reflect that in service
> config
>
> Maybe we should gather few of hard use cases like that and make sure
> we can address them in our config schema?

Speaking of hard use cases, here's another: config rolling upgrade +
config rollback. If we have a single option in etcd, then when a service
restarts it automatically gets the new config, which creates funny edge
cases when you want to do a rolling upgrade of config and some other
node fails -> its service restarts -> its config gets updated "accidentally".

>>
>> Flavio
>>
>> --
>> @flaper87
>> Flavio Percoco
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Michał Jastrzębski
On 8 June 2017 at 09:27, Flavio Percoco  wrote:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>>
>> On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>>>
>>> On 06.06.2017 18:08, Emilien Macchi wrote:

 Another benefit is that confd will generate a configuration file when
 the application will start. So if etcd is down *after* the app
 startup, it shouldn't break the service restart if we don't ask confd
 to re-generate the config. It's good for operators who were concerned
 about the fact the infrastructure would rely on etcd. In that case, we
 would only need etcd at the initial deployment (and during lifecycle
 actions like upgrades, etc).

 The downside is that in the case of containers, they would still have
 a configuration file within the container, and the whole goal of this
 feature was to externalize configuration data and stop having
 configuration files.
>>>
>>>
>>> It doesn't look like a strict requirement. Those configs may (and should) be
>>> bind-mounted into containers, as hostpath volumes. Or, am I missing
>>> something what *does* make embedded configs a strict requirement?..
>>
>>
>> mmh, one thing I liked about this effort was the possibility of stopping
>> bind-mounting config files into the containers. I'd rather find a way to
>> not need any bindmount and have the services get their configs themselves.
>
>
> Probably sent too early!
>
> If we're not talking about OpenStack containers running in a COE, I guess
> this is fine. For k8s-based deployments, I think I'd prefer having
> installers creating configmaps directly and using that. The reason is that
> depending on files that are in the host is not ideal for these scenarios.
> I hate this idea because it makes deployments inconsistent and I don't
> want that.

Well, I disagree. If we're doing this, we're essentially getting rid of
"files" altogether. It might actually be easier to handle from a COE than
a configmap, as the configmap has to be generated, and when you get to
host-specific things it's quite a pain to handle. I, for one, would happily
use a central DB for config options if we define the schema correctly.

That being said, defining the schema correctly is quite a challenge. A few
hard cases I see right now can be found in a single use case - PCI
passthrough:

1. I have multiple PCI devices in a host; I need to specify a list of them
2. PCI buses differ host to host; I need to specify groups of hosts
that will share the same bus configuration and reflect that in the service
config

Maybe we should gather a few hard use cases like that and make sure
we can address them in our config schema?
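For example, case 2 would need the schema to express something like this
(a totally made-up sketch - the host_groups layout is invented, and the
whitelist option stands in for nova's PCI passthrough whitelist):

    host_groups:
      gpu_hosts:
        members: [compute01, compute02]
        nova:
          pci:
            passthrough_whitelist:
              - address: "0000:05:00.*"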

>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][kolla][release] Policy regarding backports of gate code

2017-06-05 Thread Michał Jastrzębski
Hello,

Since we're working hard on providing a pipeline for Docker publishing,
we will require heavy gating of the container images to be published. We
also would like to publish stable/ocata images to enable release
upgrade gates from O to P.

My question is: is it OK to backport gate logic to a stable branch?
Regular code doesn't change, so it might not be considered a feature
backport (users won't see a thing).
Since Zuul runs all the jobs regardless of branch, unless we backport
this code, our multinode Ocata jobs will be just a huge waste of
resources.

The first of the reviews in question: https://review.openstack.org/#/c/466007/
As you can see, it's quite an extensive overhaul of gating, so it's much
more than a bug fix. How should we proceed?

Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread Michał Jastrzębski
On 1 June 2017 at 09:22, Jeremy Stanley  wrote:
> On 2017-06-01 16:38:05 +0200 (+0200), Thierry Carrez wrote:
> [...]
>> For teams that are placed on the Wednesday-Friday segment, please
>> let us know whether you'd like to make use of the room on Friday
>> (pick between 2 days or 3 days).
> [...]
>
> As you didn't specify how to let you know, I'll just reply here.
>
> If at all possible, I'd like the Infrastructure room available
> through Friday.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Looks good! What is the approximate size of the L room?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-23 Thread Michał Jastrzębski
On 23 May 2017 at 08:13, Doug Hellmann  wrote:
> Excerpts from Davanum Srinivas (dims)'s message of 2017-05-23 10:44:30 -0400:
>> Team,
>>
>> Background:
>> For projects based on Go and Containers we need to ship binaries, for
>
> Can you elaborate on the use of the term "need" here. Is that because
> otherwise the projects can't be consumed? Is it the "norm" for
> projects from those communities? Something else?
>
>> example Kubernetes, etcd both ship binaries and maintain stable
>> branches as well.
>>   https://github.com/kubernetes/kubernetes/releases
>>   https://github.com/coreos/etcd/releases/
>>
>> Kubernetes for example ships container images to public registeries as well:
>>   
>> https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube?pli=1
>>   
>> https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube
>
> What are the support lifetimes for those images? Who maintains them?
>
>> So here's a proposal based on the really long thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
>>
>> The idea is to augment the existing processes for the new deliverables.
>>
>> * Projects define CI jobs for generating binaries and containers (some
>> already do!)
>> * Release team automation will kick builds off when specific versions
>> are released for the binaries and containers (Since Go based projects
>> can do cross-builds, we won't need to run these jobs on multiple
>> architectures which will keep the release process simple)
>
> I see how this would work for Go builds, since we would be tagging the
> thing being built. My understanding is that Kolla images are using the
> Kolla version, not the version of the software inside the image, though.
> How would that work? (Or maybe I misunderstood something from another
> thread and that's not how the images are versioned?)

Tagging is currently not a fully answered question; it depends on what
cadence/method for pushing we end up with. But since one image can
have multiple tags, we can apply several at once. We can tag with :pike,
:pike-2 (rev number), and :version-of-main-component, all pointing to the
same image.
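To illustrate with plain docker (the image name is just an example, and
:16.0.0 stands in for the version-of-main-component tag):

    $ docker tag kolla/centos-binary-nova-api:pike-2 \
        kolla/centos-binary-nova-api:pike
    $ docker tag kolla/centos-binary-nova-api:pike-2 \
        kolla/centos-binary-nova-api:16.0.0

All three tags would then point at the same image layers.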

>> * Just like we upload stuff to tarballs.openstack.org, we will upload
>> binaries and containers there as well
>
> I know there's an infra spec for doing some of this, so I assume we
> anticipate having the storage capacity needed?
>
>> * Just like we upload things to pypi, we will upload containers with
>> specific versions to public repos.
>> * Projects can choose from the existing release models to make this
>> process as frequent as they need.
>>
>> Please note that I am deliberately ruling out the following
>> * Daily/Nightly releases that are accessible to end users, especially
>> from stable branches.
>
> The Kolla team did seem to want periodic builds for testing (to avoid
> having to build images in the test pipeline, IIUC). Do we still want to
> build those to tarballs.o.o? Does that even meet the needs of those test
> jobs?
>
>> * Project teams directly responsible for pushing stuff to end users

One thing to consider here is exactly the same issue that was raised in a
different thread, maybe to an even higher degree. Golang binaries have
their dependencies built into them, so if one of the deps has a CVE, the
whole binary has it. A higher degree, because while containers can carry a
manifest of versions built into them, Golang binaries don't really
(versioning of deps in Golang is actually quite a tricky thing). If we want
to ship these binaries, they will carry the same dangers as images pushed
to Dockerhub.

>> What do you think?
>>
>> Thanks,
>> Dims
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread Michał Jastrzębski
[snip]

So from the Kolla perspective, since our dev guide is really also an
operators guide (we are an operators' tool, so we're kinda "special" on
that front), we'd love to handle the deployment guide, user manuals,
and all that in our tree. If we could create infrastructure that would
allow us to segregate our content and manage it ourselves, I think
that would be useful. Tell us how to help :)

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-19 Thread Michał Jastrzębski
Kolla:
Attendees - full room (20-30?)
Notes - A conflict with the kolla-k8s demo probably didn't help

While we didn't have an etherpad, slides, or a recording (or a video dongle
that could fit my laptop), we had a great session with analog tools
(a whiteboard and my vocal cords). We walked through the architecture of
each Kolla project, how they relate to each other, and so on.

A couple of things to take away from our onboarding:
1. Bring dongles
2. We could've used a bigger room - people were leaving because we had
no chairs left
3. A recording would be awesome
4. Low tech is not bad tech

All in all, when we started the session I didn't know what to expect or
what people would expect, so we just... rolled with it, and people seemed
to be happy with it :) I think the onboarding rooms were a great idea
(kudos to whoever came up with it)! I'll be happy to run it again in Sydney.

Cheers,
Michal


On 19 May 2017 at 08:12, Julien Danjou  wrote:
> On Fri, May 19 2017, Sean Dague wrote:
>
>> If you ran a room, please post the project, what you did in the room,
>> what you think worked, what you would have done differently. If you
>> attended a room you didn't run, please provide feedback about which one
>> it was, and what you thought worked / didn't work from the other side of
>> the table.
>
> We shared a room for Telemetry and CloudKitty for 90 minutes.
> I was there with Gordon Chung for Telemetry.
> Christophe Sauthier was there for CloudKitty.
>
> We only had 3 people showing up in the session. One wanted to read his
> emails in a quiet room, the two others had a couple of question on
> Telemetry – though it was not really related to contribution as far as I
> can recall.
>
> I had to leave after 45 minutes because there was an overlap with a talk
> I was doing and rescheduling did not seem possible. And everybody left a
> few minutes after I left, apparently.
>
> --
> Julien Danjou
> -- Free Software hacker
> -- https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Michał Jastrzębski
On 18 May 2017 at 08:03, Paul Belanger <pabelan...@redhat.com> wrote:
> On Tue, May 16, 2017 at 02:11:18PM +, Sam Yaple wrote:
>> I would like to bring up a subject that hasn't really been discussed in
>> this thread yet, forgive me if I missed an email mentioning this.
>>
>> What I personally would like to see is a publishing infrastructure to allow
>> pushing built images to an internal infra mirror/repo/registry for
>> consumption of internal infra jobs (deployment tools like kolla-ansible and
>> openstack-ansible). The images built from infra mirrors with security
>> turned off are perfect for testing internally to infra.
>>
> Zuulv3 should help a little with this; it will allow for a DAG of jobs,
> which means the top-level job could be an image build, then all jobs below
> can consume said image.  The step we are still working on is artifact
> handling, but long term it should be possible for the testing jobs to set
> up the dynamic infrastructure needed themselves.
>
>> If you build images properly in infra, then you will have an image that is
>> not security checked (no gpg verification of packages) and completely
>> unverifiable. These are absolutely not images we want to push to
>> DockerHub/quay for obvious reasons. Security and verification being chief
>> among them. They are absolutely not images that should ever be run in
>> production and are only suited for testing. These are the only types of
>> images that can come out of infra.
>>
> We disable gpg for Ubuntu packaging for a specific reason; mostly this is
> because our APT repos are not official mirrors of upstream. We regenerate
> indexes every 2 hours so as not to break long-running jobs.  We have talked
> in the past of fixing this, but it requires openstack-infra to move to a
> new mirroring tool for APT.

So the idea to solve this particular problem goes like this:

The publish job is not change-driven; it'll be periodic (every 24h?) during
a low-traffic window. Then in this job we can turn off the infra mirrors
and just use the signed upstream repos.
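In Zuul terms that would look roughly like this (a sketch against the v2
layout format; the job name is invented):

    pipelines:
      - name: periodic
        manager: IndependentPipelineManager
        precedence: low
        trigger:
          timer:
            - time: '0 6 * * *'

    projects:
      - name: openstack/kolla
        periodic:
          - periodic-kolla-publish-images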

That being said, all the technical issues we've seen so far (unless I'm
missing something) are solvable, and we (the kolla community) would love to
do all the heavy lifting to solve them. We need to wait for the TC to
resolve the non-technical issues before we can proceed, though.

>> Thanks,
>> SamYaple
>>
>> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski <inc...@gmail.com>
>> wrote:
>>
>> > On 16 May 2017 at 06:22, Doug Hellmann <d...@doughellmann.com> wrote:
>> > > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> > >> Flavio Percoco wrote:
>> > >> > From a release perspective, as Doug mentioned, we've avoided
>> > releasing projects
>> > >> > in any kind of built form. This was also one of the concerns I raised
>> > when
>> > >> > working on the proposal to support other programming languages. The
>> > problem of
>> > >> > releasing built images goes beyond the infrastructure requirements.
>> > It's the
>> > >> > message and the guarantees implied with the built product itself that
>> > are the
>> > >> > concern here. And I tend to agree with Doug that this might be a
>> > problem for us
>> > >> > as a community. Unfortunately, putting your name, Michal, as contact
>> > point is
>> > >> > not enough. Kolla is not the only project producing container images
>> > and we need
>> > >> > to be consistent in the way we release these images.
>> > >> >
>> > >> > Nothing prevents people for building their own images and uploading
>> > them to
>> > >> > dockerhub. Having this as part of the OpenStack's pipeline is a
>> > problem.
>> > >>
>> > >> I totally subscribe to the concerns around publishing binaries (under
>> > >> any form), and the expectations in terms of security maintenance that it
>> > >> would set on the publisher. At the same time, we need to have images
>> > >> available, for convenience and testing. So what is the best way to
>> > >> achieve that without setting strong security maintenance expectations
>> > >> for the OpenStack community ? We have several options:
>> > >>
>> > >> 1/ Have third-parties publish images
>> > >> It is the current situation. The issue is that the Kolla team (and
>> > >> likely others) would rather automate the process and use OpenStack

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Michał Jastrzębski
>> Issue with that is
>>
>> 1. Apache-served is harder to use because we want to follow the Docker API
>> and we'd have to reimplement it
>
> No, the idea is apache is transparent; for now we have been using the
> proxypass module in apache.  I think what Doug was mentioning was to have a
> primary docker registry, which is RW for a publisher, then proxy it to
> regional mirrors as RO.

That would also work, yes

>> 2. Running registry is single command
>>
> I've seen this mentioned a few times before, just because it is one command or
> 'simple' to do, doesn't mean we want to or can.  Currently our infrastructure 
> is
> complicated, for various reasons.  I am sure we'll get to the right technical
> solution for making jobs happy. Remember our infrastructure spans 6 clouds 
> and 15
> regions and want to make sure it is done correctly.

And that's why we discussed Dockerhub. Remember that I was willing to
implement a proper registry, but we decided to go with Dockerhub simply
because it puts less stress on both the infra and the infra team, and I
totally agree with that decision. A Dockerhub publisher + apache
caching was our working idea.
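(For the record, the "single command" in point 2 is essentially:

    $ docker run -d -p 5000:5000 --restart=always --name registry registry:2

though, as noted above, running it *well* across 6 clouds and 15 regions is
the actual hard part.)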

>> 3. If we host in in infra, in case someone actually uses it (there
>> will be people like that), that will eat up lot of network traffic
>> potentially
>
> We can monitor this and adjust as needed.
>
>> 4. With local caching of images (working already) in nodepools we
>> lose the complexity of mirroring registries across nodepools
>>
>> So bottom line, having dockerhub/quay.io is simply easier.
>>
> See comment above.
>
>> > Doug
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Michał Jastrzębski
Be careful with overlay; I've seen it act in ways you don't want
it to. That was some time ago, but memories persist. In my
experience the best option is btrfs. If you don't want to repartition
the disk, btrfs on loopback isn't horrible either. devicemapper on loopback
is horrible, but that's a different story.
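If you want to check what a given host is using today:

    $ docker info | grep 'Storage Driver'
    Storage Driver: devicemapper

(example output; 'docker info' also prints a loud warning when devicemapper
is running on loopback, which is what the "not recommended for production"
note below refers to).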

On 17 May 2017 at 17:24, Dan Prince  wrote:
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
>
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
>
>  https://review.openstack.org/#/c/451916/
>
> For TripleO there are a couple of considerations:
>
>  - we intend to support in place upgrades from baremetal to containers
>
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
>
>  - we'd like to to use a docker storage backend that is production
> ready.
>
>  - our target OS is latest Centos/RHEL 7
>
> As we approach pike 2 I'm keen to move towards a more production docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the combinations
> above?
>
> Looking around at what is recommended in other projects it seems to be
> a mix as well from devicemapper to btrfs.
>
> [1] https://docs.openshift.com/container-platform/3.3/install_config/in
> stall/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedH
> at.sh#n30
>
>
> Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 11:36, Michał Jastrzębski <inc...@gmail.com> wrote:
> On 17 May 2017 at 11:04, Doug Hellmann <d...@doughellmann.com> wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-05-17 07:47:31 -0700:
>>> On 17 May 2017 at 04:14, Chris Dent <cdent...@anticdent.org> wrote:
>>> > On Wed, 17 May 2017, Thierry Carrez wrote:
>>> >
>>> >> Back to container image world, if we refresh those images daily and they
>>> >> are not versioned or archived (basically you can only use the latest and
>>> >> can't really access past dailies), I think we'd be in a similar situation
>>> >> ?
>>> >
>>> >
>>> > Yes, this.
>>>
>>> I think it's not a bad idea to message "you are responsible for
>>> archving your containers". Do that, combine it with good toolset that
>>> helps users determine versions of packages and other metadata and
>>> we'll end up with something that itself would be greatly appreciated.
>>>
>>> Few potential user stories.
>>>
>>> I have OpenStack <100 nodes and need every single one of them, hence
>>> no CI. At the same time I want to have fresh packages to avoid CVEs. I
>>> deploy kolla with tip-of-the-stable-branch and setup cronjob that will
>>> upgrade it every week. Because my scenerio is quite typical and
>>> containers already ran through gates that tests my scenerio, I'm good.
>>>
>>> Another one:
>>>
>>> I have 300+ node cloud, heavy CI and security team examining every
>>> container. While I could build containers locally, downloading them is
>>> just simpler and effectively the same (after all, it's containers
>>> being tested, not the build process). On every download our security team
>>> scrutinizes containers and uses the toolset Kolla provides to help them.
>>> Additional benefit is that on top of our CI these images went through
>>> Kolla CI which is nice, more testing is always good.
>>>
>>> And another one
>>>
>>> We are Kolla community. We want to provide testing for full release
>>> upgrades every day in gates, to make sure OpenStack and Kolla is
>>> upgradable and improve general user experience of upgrades. Because
>>> infra is resource constrained, we cannot afford building 2 sets of
>>> containers (stable and master) and doing deploy->test->upgrade->test.
>>> However because we have these cached containers, that are fresh and
>>> passed CI for deploy, we can just use them! Now effectively we're not
>>> only testing Kolla's correctness of upgrade procedure but also all the
>>> other project team upgrades! Oh, it seems Nova merged something that
>>> negatively affects upgrades, let's make sure they are aware!
>>>
>>> And last one, which cannot be underestimated
>>>
>>> I am CTO of some company and I've heard OpenStack is no longer hard to
>>> deploy, I'll just download kolla-ansible and try. I'll follow this
>>> guide that deploys simple OpenStack with 2 commands and few small
>>> configs, and it's done! Super simple! We're moving to OpenStack and
>>> start contributing tomorrow!
>>>
>>> Please, let's solve messaging problems, put burden of archiving on
>>> users, whatever it takes to protect our community from wrong
>>> expectations, but not kill this effort. There are very real and
>>> immediate benefits to OpenStack as a whole if we do this.
>>>
>>> Cheers,
>>> Michal
>>
>> You've presented some positive scenarios. Here's a worst case
>> situation that I'm worried about.
>>
>> Suppose in a few months the top several companies contributing to
>> kolla decide to pull out of or reduce their contributions to
>> OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
>> redirect their efforts to other projects.  Maybe they start
>> contributing directly to kubernetes. The kolla team is hit badly,
>> and all of the people from that team who know how the container
>> publishing jobs work are gone.
>
> There are only 2 ways to defend against that: diverse community, which
> we have. If Intel, Red Hat, Oracle, Cisco and IBM back out of
> OpenStack, we'd still have almost 50% of contributors. I think we'll
> much more likely to survive than most of other Big Tent projects. In
> fact, I'd think with our current diversity, that we'll survive for as
> long as OpenStack survives.

Diverse community and off-by-one errors ;) I was meaning to say a diverse
community.

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 11:04, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-17 07:47:31 -0700:
>> On 17 May 2017 at 04:14, Chris Dent  wrote:
>> > On Wed, 17 May 2017, Thierry Carrez wrote:
>> >
>> >> Back to container image world, if we refresh those images daily and they
>> >> are not versioned or archived (basically you can only use the latest and
>> >> can't really access past dailies), I think we'd be in a similar situation
>> >> ?
>> >
>> >
>> > Yes, this.
>>
>> I think it's not a bad idea to message "you are responsible for
>> archving your containers". Do that, combine it with good toolset that
>> helps users determine versions of packages and other metadata and
>> we'll end up with something that itself would be greatly appreciated.
>>
>> Few potential user stories.
>>
>> I have OpenStack <100 nodes and need every single one of them, hence
>> no CI. At the same time I want to have fresh packages to avoid CVEs. I
>> deploy kolla with tip-of-the-stable-branch and setup cronjob that will
>> upgrade it every week. Because my scenerio is quite typical and
>> containers already ran through gates that tests my scenerio, I'm good.
>>
>> Another one:
>>
>> I have 300+ node cloud, heavy CI and security team examining every
>> container. While I could build containers locally, downloading them is
>> just simpler and effectively the same (after all, it's containers
>> being tested, not the build process). On every download our security team
>> scrutinizes containers and uses the toolset Kolla provides to help them.
>> Additional benefit is that on top of our CI these images went through
>> Kolla CI which is nice, more testing is always good.
>>
>> And another one
>>
>> We are Kolla community. We want to provide testing for full release
>> upgrades every day in gates, to make sure OpenStack and Kolla is
>> upgradable and improve general user experience of upgrades. Because
>> infra is resource constrained, we cannot afford building 2 sets of
>> containers (stable and master) and doing deploy->test->upgrade->test.
>> However because we have these cached containers, that are fresh and
>> passed CI for deploy, we can just use them! Now effectively we're not
>> only testing Kolla's correctness of upgrade procedure but also all the
>> other project team upgrades! Oh, it seems Nova merged something that
>> negatively affects upgrades, let's make sure they are aware!
>>
>> And last one, which cannot be underestimated
>>
>> I am the CTO of some company and I've heard OpenStack is no longer
>> hard to deploy; I'll just download kolla-ansible and try it. I'll
>> follow this guide that deploys a simple OpenStack with 2 commands and
>> a few small configs, and it's done! Super simple! We're moving to
>> OpenStack and will start contributing tomorrow!
>>
>> Please, let's solve the messaging problems, put the burden of
>> archiving on users, whatever it takes to protect our community from
>> wrong expectations - but let's not kill this effort. There are very
>> real and immediate benefits to OpenStack as a whole if we do this.
>>
>> Cheers,
>> Michal
>
> You've presented some positive scenarios. Here's a worst case
> situation that I'm worried about.
>
> Suppose in a few months the top several companies contributing to
> kolla decide to pull out of or reduce their contributions to
> OpenStack.  IBM, Intel, Oracle, and Cisco either lay folks off or
> redirect their efforts to other projects.  Maybe they start
> contributing directly to kubernetes. The kolla team is hit badly,
> and all of the people from that team who know how the container
> publishing jobs work are gone.

There are only 2 ways to defend against that: diverse community, which
we have. If Intel, Red Hat, Oracle, Cisco and IBM back out of
OpenStack, we'd still have almost 50% of contributors. I think we're
much more likely to survive than most other Big Tent projects. In
fact, I'd think that with our current diversity we'll survive for as
long as OpenStack survives.

Also, all the more reason why *we shouldn't build images personally*;
we should have an autonomous process do it for us.

> The day after everyone says goodbye, the build breaks. Maybe a bad
> patch lands, or maybe some upstream assumption changes. The issue
> isn't with the infra jobs themselves. The break means no new container
> images are being published. Since there's not much of a kolla team
> any more, it looks like it will be a while before anyone has time
> to figure out how to fix the problem.

> Later that same day, a new zero-day exploit is announced in a
> component included in all or most of those images. Something that
> isn't developed in the community, such as OpenSSL or glibc. The
> exploit allows a complete breach of any app running with it. All
> existing published containers include the bad bits and need to be
> updated.

I guess this is a problem for all software ever written. If the
community around it dies, the people who use it are in lots of...

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 08:55, Doug Hellmann  wrote:
> Excerpts from Chris Dent's message of 2017-05-17 12:14:40 +0100:
>> On Wed, 17 May 2017, Thierry Carrez wrote:
>>
>> > Back to container image world, if we refresh those images daily and they
>> > are not versioned or archived (basically you can only use the latest and
>> > can't really access past dailies), I think we'd be in a similar situation ?
>>
>> Yes, this.
>>
>
> Is that how container publishing works? Can we overwrite an existing
> archive, so that there is only ever 1 version of a published container
> at any given time?

We can do it either way, but that's how we want it: the top of each
stable branch daily, plus the top of master.
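
For illustration, on the publishing side this is just a moving tag -
pushing today's build under the same tag replaces yesterday's image
(the image name below is made up):

    docker tag <today's build> kolla/ubuntu-source-nova-api:ocata
    docker push kolla/ubuntu-source-nova-api:ocata   # overwrites yesterday's tag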

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Michał Jastrzębski
On 17 May 2017 at 04:14, Chris Dent  wrote:
> On Wed, 17 May 2017, Thierry Carrez wrote:
>
>> Back to container image world, if we refresh those images daily and they
>> are not versioned or archived (basically you can only use the latest and
>> can't really access past dailies), I think we'd be in a similar situation
>> ?
>
>
> Yes, this.

I think it's not a bad idea to message "you are responsible for
archiving your containers". Do that, combine it with a good toolset
that helps users determine versions of packages and other metadata, and
we'll end up with something that would itself be greatly appreciated.
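
For illustration, archiving on the user side is just a pull/tag/push
into your own registry (the image and registry names below are made
up):

    docker pull kolla/ubuntu-source-nova-api:master
    docker tag kolla/ubuntu-source-nova-api:master \
        registry.example.com:5000/nova-api:2017-05-17
    docker push registry.example.com:5000/nova-api:2017-05-17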

A few potential user stories:

I have an OpenStack deployment of <100 nodes and need every single one
of them, hence no CI. At the same time I want fresh packages to avoid
CVEs. I deploy kolla with the tip of the stable branch and set up a
cron job that will upgrade it every week. Because my scenario is quite
typical and the containers already ran through gates that test my
scenario, I'm good.
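
Roughly something like this - the inventory path is made up, and a real
job would add prechecks:

    # crontab entry: refresh images and upgrade every Sunday at 03:00
    0 3 * * 0 kolla-ansible -i /etc/kolla/multinode pull && kolla-ansible -i /etc/kolla/multinode upgrade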

Another one:

I have a 300+ node cloud, heavy CI and a security team examining every
container. While I could build containers locally, downloading them is
just simpler and effectively the same (after all, it's the containers
being tested, not the build process). On every download our security
team scrutinizes the containers and uses the toolset Kolla provides to
help them. An additional benefit is that on top of our CI these images
went through Kolla's CI, which is nice; more testing is always good.

And another one

We are the Kolla community. We want to provide testing for full release
upgrades every day in the gates, to make sure OpenStack and Kolla are
upgradable and to improve the general user experience of upgrades.
Because infra is resource constrained, we cannot afford to build 2 sets
of containers (stable and master) and do deploy->test->upgrade->test.
However, because we have these cached containers, which are fresh and
passed CI for deploy, we can just use them! Now we're effectively
testing not only the correctness of Kolla's upgrade procedure but also
every other project team's upgrades! Oh, it seems Nova merged something
that negatively affects upgrades - let's make sure they are aware!

And the last one, which should not be underestimated:

I am the CTO of some company and I've heard OpenStack is no longer hard
to deploy; I'll just download kolla-ansible and try it. I'll follow
this guide that deploys a simple OpenStack with 2 commands and a few
small configs, and it's done! Super simple! We're moving to OpenStack
and will start contributing tomorrow!
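
The 2 commands are roughly these, using the sample all-in-one inventory
(config steps omitted for brevity):

    kolla-ansible -i all-in-one bootstrap-servers
    kolla-ansible -i all-in-one deploy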

Please, let's solve the messaging problems, put the burden of archiving
on users, whatever it takes to protect our community from wrong
expectations - but let's not kill this effort. There are very real and
immediate benefits to OpenStack as a whole if we do this.

Cheers,
Michal

> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 12:36, Jeremy Stanley <fu...@yuggoth.org> wrote:
> On 2017-05-16 11:46:14 -0700 (-0700), Michał Jastrzębski wrote:
> [...]
>> So CVE tracking might not be required by us. Since we still use
>> distro packages under the hood, we can just use these.
> [...]
>
> I think the question is how I, as a semi-clueful downstream user of
> your images, can tell whether the image I'm deploying has fixes for
> some specific recently disclosed vulnerability. It sounds like your
> answer is that I should compare the package manifest against the
> versions listed on the distro's CVE tracker or similar service? That
> should be prominently documented, perhaps in a highly visible FAQ
> list.

One thing we've been working on prior to the summit was a manifest of
versions - I think we can provide a single file with all the versions
of the packages in a container, plus a record of the CI jobs that led
the container to this place: all the information a semi-careful
downstream user can use to determine what it is they're getting. I'm
all for that kind of feature.
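
Something as simple as dumping the package list from an image would get
us most of the way there (the image name is made up; rpm -qa for
centos-based images):

    docker run --rm kolla/ubuntu-source-nova-api:master dpkg -l > nova-api.versions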

>> Since we'd rebuild daily, that alone would ensure timely updates to
>> our containers. What we can promise potential users is that the
>> containers out there were built recently (within 24 hrs)
> [...]
>
> As outlined elsewhere in the thread, there are a myriad of reasons
> why this could end up not being the case from time to time so I can
> only assume your definition of "promise" differs from mine (and
> unfortunately, from most people who might be trying to decide
> whether it's safe to rely on these images in a sensitive/production
> environment).

By "promise" I mean clear documentation of where containers came from
and what did they pass. After that, take it or leave it.

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:49, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
>> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
>> >> So another consideration. Do you think the whole rule of "not
>> >> building binaries" should be reconsidered? We are kind of a new use
>> >> case here. We aren't a distro, but we are packagers (kind of). I
>> >> don't think putting us on an equal footing with Red Hat, Canonical
>> >> or other companies is correct here.
>> >>
>> >> K8s is something we want to work with, and what we are discussing is
>> >> central to how k8s is used. The k8s community has created this
>> >> culture of "organic packages" built by anyone; most
>> >> companies/projects already have semi-official container images, and
>> >> I think the expectations on quality of these are, well... none? You
>> >> get what you're given, and if you don't agree, there is always a way
>> >> to reproduce it yourself.
>> >>
>> >> [Another huge snip]
>> >>
>> >
>> > I wanted to have the discussion, but my position for now is that
>> > we should continue as we have been and not change the policy.
>> >
>> > I don't have a problem with any individual or group of individuals
>> > publishing their own organic packages. The issue I have is with
>> > making sure it is clear those *are* "organic" and not officially
>> > supported by the broader community. One way to do that is to say
>> > they need to be built somewhere other than on our shared infrastructure.
>> > There may be other ways, though, so I'm looking for input on that.
>>
>> What I was trying to say here is: current discussion aside, maybe we
>> should revise this "not supported by the broader community" rule. They
>> may very well be supported up to a certain point. Support is not just
>> yes or no; it's all the levels in between. I think we can afford
>> *some* level of official support, even if that level means best effort
>> made by the community. If the Kolla community, not an individual like
>> myself, would like to support these images to the best of our ability,
>> why aren't we allowed? As long as we are crystal clear about the scope
>> of our support, why can't we do it? I think we've already proven that
>> it's going to be tremendously useful for a lot of people, even in the
>> shape we discuss today, that is "best effort, you still need to
>> validate it for yourself"...
>
> Right, I understood that. So far I haven't heard anything to change
> my mind, though.
>
> I think you're underestimating the amount of risk you're taking on
> for yourselves and by extension the rest of the community, and
> introducing to potential consumers of the images, by promising to
> support production deployments with a small team of people without
> the economic structure in place to sustain the work.

Again, we say what it is and what it is not. I think "support" is a
loaded term here. Instead we can create lengthy documentation
explaining in detail the lifecycle and testing a certain container had
to pass before it landed on Dockerhub. Maybe add a link to the
particular set of jobs that the container passed. The only thing we can
offer is an automated and transparent publishing process. On top of
that? You are on your own. But even within these boundaries, a lot of
people could have a better experience of running OpenStack...
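
One possible mechanism - an idea, not something we've implemented - is
to bake that metadata into the image as labels at build time, so users
can read it back:

    # hypothetical labels set during build:
    #   LABEL build_date="2017-05-16" ci_jobs="<link to gate results>"
    docker inspect --format '{{ json .Config.Labels }}' kolla/ubuntu-source-nova-api:master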

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:33, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
>> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >> >>
>> >> >> Flavio Percoco wrote:
>> >> >>>
>> >> >>> From a release perspective, as Doug mentioned, we've avoided
>> >> >>> releasing projects in any kind of built form. This was also one
>> >> >>> of the concerns I raised when working on the proposal to support
>> >> >>> other programming languages. The problem of releasing built
>> >> >>> images goes beyond the infrastructure requirements. It's the
>> >> >>> message and the guarantees implied with the built product itself
>> >> >>> that are the concern here. And I tend to agree with Doug that
>> >> >>> this might be a problem for us as a community. Unfortunately,
>> >> >>> putting your name, Michal, as contact point is not enough. Kolla
>> >> >>> is not the only project producing container images and we need
>> >> >>> to be consistent in the way we release these images.
>> >> >>>
>> >> >>> Nothing prevents people from building their own images and
>> >> >>> uploading them to dockerhub. Having this as part of OpenStack's
>> >> >>> pipeline is a problem.
>> >> >>
>> >> >>
>> >> >> I totally subscribe to the concerns around publishing binaries
>> >> >> (under any form), and the expectations in terms of security
>> >> >> maintenance that it would set on the publisher. At the same time,
>> >> >> we need to have images available, for convenience and testing. So
>> >> >> what is the best way to achieve that without setting strong
>> >> >> security maintenance expectations for the OpenStack community ?
>> >> >> We have several options:
>> >> >>
>> >> >> 1/ Have third-parties publish images
>> >> >> It is the current situation. The issue is that the Kolla team
>> >> >> (and likely others) would rather automate the process and use
>> >> >> OpenStack infrastructure for it.
>> >> >>
>> >> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> >> This would allow to automate the process, but it would be a bit
>> >> >> weird to use common infra resources to publish in a private repo.
>> >> >>
>> >> >> 3/ Publish transient (per-commit or daily) images
>> >> >> A "daily build" (especially if you replace it every day) would
>> >> >> set relatively-limited expectations in terms of maintenance. It
>> >> >> would end up picking up security updates in upstream layers, even
>> >> >> if not immediately.
>> >> >>
>> >> >> 4/ Publish images and own them
>> >> >> Staff release / VMT / stable team in a way that lets us properly
>> >> >> own those images and publish them officially.
>> >> >>
>> >> >> Personally I think (4) is not realistic. I think we could make
>> >> >> (3) work, and I prefer it to (2). If all else fails, we should
>> >> >> keep (1).
>> >> >
>> >> >
>> >> > Agreed #4 is a bit unrealistic.
>> >> >
>> >> > Not sure I understand the difference between #2 and #3. Is it just
>> >> > the cadence?
>> >> >
>> >> > I'd prefer for these builds to have a daily cadence because it
>> >> > sets the expectations w.r.t maintenance right: "These images are
>> >> > daily builds and not certified releases. For stable builds you're
>> >> > better off building it yourself"
>> >>
>> >> And daily builds are exactly what I wanted in the first place :) We
>> >> probably will keep publishing release packages too, but we can be a
>> >> so-called 3rd party. I also agree [4] is completely unrealistic and
>> >> I would be against putting such a heavy burden of responsibility on
>> >> any community, including Kolla.
>> >>
>> >> While the daily cadence will send the message that it's not stable,
>> >> the truth is that it will be more stable than what people would
>> >> normally build locally (again, it passes more gates), but I'm
>> >> totally fine with not saying that and letting people decide how they
>> >> want to use it.
>> >>
>> >> So, can we move on with implementation?
>> >
>> > I don't want the images published to docker hub. Are they still useful
>> > to you if they aren't published?
>>
>> What do you mean? We need images available... whether it's dockerhub,
>> an infra-hosted registry or any other way to have them, we need to be
>> able to have images that are available and fresh without building.
>> Dockerhub/quay.io is the least of the problems for the infra
>> team/resources.
>
> There are 2 separate concerns.
>
> The first concern is whether this is a good idea at all, from a
> policy perspective. Do we have the people to maintain the images,
> track CVEs, etc.? Do we have the response time to update or remove
> bad images? 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:27, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
>> So another consideration. Do you think the whole rule of "not building
>> binaries" should be reconsidered? We are kind of a new use case here.
>> We aren't a distro, but we are packagers (kind of). I don't think
>> putting us on an equal footing with Red Hat, Canonical or other
>> companies is correct here.
>>
>> K8s is something we want to work with, and what we are discussing is
>> central to how k8s is used. The k8s community has created this culture
>> of "organic packages" built by anyone; most companies/projects already
>> have semi-official container images, and I think the expectations on
>> quality of these are, well... none? You get what you're given, and if
>> you don't agree, there is always a way to reproduce it yourself.
>>
>> [Another huge snip]
>>
>
> I wanted to have the discussion, but my position for now is that
> we should continue as we have been and not change the policy.
>
> I don't have a problem with any individual or group of individuals
> publishing their own organic packages. The issue I have is with
> making sure it is clear those *are* "organic" and not officially
> supported by the broader community. One way to do that is to say
> they need to be built somewhere other than on our shared infrastructure.
> There may be other ways, though, so I'm looking for input on that.

What I was trying to say here is: current discussion aside, maybe we
should revise this "not supported by the broader community" rule. They
may very well be supported up to a certain point. Support is not just
yes or no; it's all the levels in between. I think we can afford *some*
level of official support, even if that level means best effort made by
the community. If the Kolla community, not an individual like myself,
would like to support these images to the best of our ability, why
aren't we allowed? As long as we are crystal clear about the scope of
our support, why can't we do it? I think we've already proven that it's
going to be tremendously useful for a lot of people, even in the shape
we discuss today, that is "best effort, you still need to validate it
for yourself"...

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 10:41, Jeremy Stanley  wrote:
> On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
>> Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> [...]
>> > If you build images properly in infra, then you will have an image that is
>> > not security checked (no gpg verification of packages) and completely
>> > unverifiable. These are absolutely not images we want to push to
>> > DockerHub/quay for obvious reasons. Security and verification being chief
>> > among them. They are absolutely not images that should ever be run in
>> > production and are only suited for testing. These are the only types of
>> > images that can come out of infra.
>>
>> This sounds like an implementation detail of option 3? I think not
>> signing the images does help indicate that they're not meant to be used
>> in production environments.
> [...]
>
> I'm pretty sure Sam wasn't talking about whether or not the images
> which get built are signed, but whether or not the package manager
> used when building the images vets the distro packages it retrieves
> (the Ubuntu package mirror we maintain in our CI doesn't have
> "secure APT" signatures available for its indices so we disable that
> security measure by default in the CI system to allow us to use
> those mirrors). Point being, if images are built in the upstream CI
> with packages from our Ubuntu package mirror then they are (at least
> at present) not suitable for production use from a security
> perspective for this particular reason even in absence of the other
> concerns expressed.
> --
> Jeremy Stanley

This is a valid concern, but also one that's particularly easy to
solve. If we decide to use nightly builds (or midday in Hawaii - any
timezone with the least traffic would do), we can skip the infra
mirrors. In fact, that approach would help us in a different sense as
well. Since these builds wouldn't be bound to any particular patchset,
we could test them to an extreme - say, voting deploy gates for both
kolla-ansible and kolla-kubernetes. I was reluctant to have deploy
gates voting inside Kolla, but that would allow us to do it. In fact,
the net uplink consumption from infra would go down, as we won't need
to publish tarballs of the registry on every commit; we'll do it once a
day at the most convenient hour.
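
The nightly job itself would boil down to something like this (flags
shown for illustration, pointed at upstream package repos rather than
the CI mirrors):

    kolla-build --base ubuntu --type source --tag nightly --push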

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 09:40, Clint Byrum  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>> > Container images introduce some extra complexity, over the basic
>> > operating system style packages mentioned above. Due to the way
>> > they are constructed, they are likely to include content we don't
>> > produce ourselves (either in the form of base layers or via including
>> > build tools or other things needed when assembling the full image).
>> > That extra content means there would need to be more tracking of
>> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> > as needed.
>>
>> We can do this by building daily, which was the plan in fact. If we
>> build every day you have at most 24hrs old packages, CVEs and things
>> like that on non-openstack packages are still maintained by distro
>> maintainers.
>>
>
> What's at stake isn't so much "how do we get the bits to the users" but
> "how do we only get bits to users that they need". If you build and push
> daily, do you expect all of your users to also _pull_ daily? Redeploy
> all their containers? How do you detect that there's new CVE-fixing
> stuff in a daily build?
>
> This is really the realm of distributors that have full-time security
> teams tracking issues and providing support to paying customers.
>
> So I think this is a fine idea, however, it needs to include a commitment
> for a full-time paid security team who weighs in on every change to
> the manifest. Otherwise we're just lobbing time bombs into our users'
> data-centers.

One thing I struggle with is... well... how does *not having* built
containers help with that? If your company has a full-time security
team, they can check our containers prior to deployment. If your
company doesn't, then building locally will be subject to the same
risks as downloading from dockerhub. The difference is, the dockerhub
containers were tested in our CI, to the extent that our CI allows.
Whether or not you have your own security team, local CI, or staging
env, this is just a little bit of testing on top of that, which you get
for free - and I think that's value enough for users to push for this.
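
For example, a downstream security team could diff the package
manifests of two consecutive dailies before rolling anything out (the
image name is made up):

    docker run --rm kolla/ubuntu-source-nova-api:master dpkg -l > today.versions
    diff yesterday.versions today.versions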

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
So another consideration. Do you think the whole rule of "not building
binaries" should be reconsidered? We are kind of a new use case here.
We aren't a distro, but we are packagers (kind of). I don't think
putting us on an equal footing with Red Hat, Canonical or other
companies is correct here.

K8s is something we want to work with, and what we are discussing is
central to how k8s is used. The k8s community has created this culture
of "organic packages" built by anyone; most companies/projects already
have semi-official container images, and I think the expectations on
quality of these are, well... none? You get what you're given, and if
you don't agree, there is always a way to reproduce it yourself.

[Another huge snip]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:32, Doug Hellmann  wrote:
> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
>> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>> > Folks,
>> >
>> > See $TITLE :)
>> >
>> > Thanks,
>> > Dims
>> >
>>
>> My preference would be to have an #openstack-tc channel.
>>
>> One thing I like about the dedicated meeting time was if I was not able to
>> attend, or when I was just a casual observer, it was easy to catch up on
>> what was discussed because it was all in one place and did not have any
>> non TC conversations interlaced.
>>
>> If we just use -dev, there is a high chance there will be a lot of cross-
>> talk during discussions. There would also be a lot of effort to grep
>> through the full day of activity to find things relevant to TC
>> discussions. If we have a dedicated channel for this, it makes it very
>> easy for anyone to know where to go to get a clean, easy to read capture
>> of all relevant discussions. I think that will be important with the
>> lack of a captured and summarized meeting to look at.
>>
>> Sean
>>
>
> I definitely understand this desire. I think, though, that any
> significant conversations should be made discoverable via an email
> thread summarizing them. That honors the spirit of moving our
> "decision making" to asynchronous communication tools.

To both this and Dims's concerns: I actually think we need some place
to just come and ask "guys, is this fine?". If the answer would be
"let's talk on the ML because it's important", that's cool, but on the
other hand sometimes a simple "yes" would suffice. Not every
conversation with the TC requires a mailing thread, but I'd love to
have some "semi-official" TC space where I can drop a question, quickly
discuss cross-project issues, and such.

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:30, Emilien Macchi <emil...@redhat.com> wrote:
> On Tue, May 16, 2017 at 11:12 AM, Doug Hellmann <d...@doughellmann.com> wrote:
>> Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
>>> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
>>> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
>>> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
>>> >> >On 15 May 2017 at 11:19, Davanum Srinivas <dava...@gmail.com> wrote:
>>> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
>>> >> >> things:
>>> >> >>
>>> >> >> 1) Can folks install just one or two services for their specific 
>>> >> >> scenario?
>>> >> >
>>> >> >Yes, that's more of a kolla-ansible feature and requires a little
>>> >> >bit of ansible know-how, but entirely possible. Kolla-k8s is built
>>> >> >to allow maximum flexibility in that space.
>>> >> >
>>> >> >> 2) Can the container images from kolla be run on bare docker daemon?
>>> >> >
>>> >> >Yes, but they need to either override our default CMD (kolla_start)
>>> >> >or provide the ENVs required by it, not a huge deal
>>> >> >
>>> >> >> 3) Can someone take the kolla container images from say dockerhub and
>>> >> >> use it without the Kolla framework?
>>> >> >
>>> >> >Yes, there is no such thing as a kolla framework really. Our images
>>> >> >follow stable ABI and they can be deployed by any deploy mechanism
>>> >> >that will follow it. We have several users who wrote their own deploy
>>> >> >mechanism from scratch.
>>> >> >
>>> >> >Containers are just blobs with binaries in it. Little things that we
>>> >> >add are kolla_start script to allow our config file management and
>>> >> >some custom startup scripts for things like mariadb to help with
>>> >> >bootstrapping, both are entirely optional.
>>> >>
>>> >> Just as a bonus example, TripleO is currently using kolla images.
>>> >> They used to be vanilla and they are not anymore, but only because
>>> >> TripleO depends on puppet being in the image, which has nothing to
>>> >> do with kolla.
>>> >>
>>> >> Flavio
>>> >>
>>> >
>>> >When you say "using kolla images," what do you mean? In upstream
>>> >CI tests? On contributors' dev/test systems? Production deployments?
>>>
>>> All of them. Note that TripleO now builds its own "kolla images" (it
>>> uses the kolla Dockerfiles and kolla-build) because of the puppet
>>> dependency. When I said TripleO uses kolla images, it was intended to
>>> answer Dims's question on whether these images (or Dockerfiles) can be
>>> consumed by other projects.
>>>
>>> Flavio
>>>
>>
>> Ah, OK. So TripleO is using the build instructions for kolla images, but
>> not the binary images being produced today?
>
> Exactly. We have to add Puppet packaging into the list of things we
> want in the binary, that's why we don't consume the binary directly.

And frankly, if we get this thing agreed on, I don't see why TripleO
couldn't publish their images too. If we build the technical infra in
Kolla, everyone else can benefit from it.
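
And to the earlier question about running our images on a bare docker
daemon, it is roughly this - the image name and config path are made
up, KOLLA_CONFIG_STRATEGY is the env kolla_start expects:

    docker run -d --name nova_api \
        -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        -v /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro \
        kolla/ubuntu-source-nova-api:master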

>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:12, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >>
>> >> Flavio Percoco wrote:
>> >>>
>> >>> From a release perspective, as Doug mentioned, we've avoided releasing
>> >>> projects
>> >>> in any kind of built form. This was also one of the concerns I raised
>> >>> when
>> >>> working on the proposal to support other programming languages. The
>> >>> problem of
>> >>> releasing built images goes beyond the infrastructure requirements. It's
>> >>> the
>> >>> message and the guarantees implied with the built product itself that are
>> >>> the
>> >>> concern here. And I tend to agree with Doug that this might be a problem
>> >>> for us
>> >>> as a community. Unfortunately, putting your name, Michal, as contact
>> >>> point is
>> >>> not enough. Kolla is not the only project producing container images and
>> >>> we need
>> >>> to be consistent in the way we release these images.
>> >>>
>> >>> Nothing prevents people for building their own images and uploading them
>> >>> to
>> >>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>> >>
>> >>
>> >> I totally subscribe to the concerns around publishing binaries (under
>> >> any form), and the expectations in terms of security maintenance that it
>> >> would set on the publisher. At the same time, we need to have images
>> >> available, for convenience and testing. So what is the best way to
>> >> achieve that without setting strong security maintenance expectations
>> >> for the OpenStack community ? We have several options:
>> >>
>> >> 1/ Have third-parties publish images
>> >> It is the current situation. The issue is that the Kolla team (and
>> >> likely others) would rather automate the process and use OpenStack
>> >> infrastructure for it.
>> >>
>> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> This would allow to automate the process, but it would be a bit weird to
>> >> use common infra resources to publish in a private repo.
>> >>
>> >> 3/ Publish transient (per-commit or daily) images
>> >> A "daily build" (especially if you replace it every day) would set
>> >> relatively-limited expectations in terms of maintenance. It would end up
>> >> picking up security updates in upstream layers, even if not immediately.
>> >>
>> >> 4/ Publish images and own them
>> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> those images and publish them officially.
>> >>
>> >> Personally I think (4) is not realistic. I think we could make (3) work,
>> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >
>> >
>> > Agreed #4 is a bit unrealistic.
>> >
>> > Not sure I understand the difference between #2 and #3. Is it just the
>> > cadence?
>> >
>> > I'd prefer for these builds to have a daily cadence because it sets the
>> > expectations w.r.t maintenance right: "These images are daily builds and 
>> > not
>> > certified releases. For stable builds you're better off building it
>> > yourself"
>>
>> And daily builds are exactly what I wanted in the first place:) We
>> probably will keep publishing release packages too, but we can be so
>> called 3rd party. I also agree [4] is completely unrealistic and I
>> would be against putting such heavy burden of responsibility on any
>> community, including Kolla.
>>
>> While daily cadence will send message that it's not stable, truth will
>> be that it will be more stable than what people would normally build
>> locally (again, it passes more gates), but I'm totally fine in not
>> saying that and let people decide how they want to use it.
>>
>> So, can we move on with implementation?
>
> I don't want the images published to docker hub. Are they still useful
> to you if they aren't published?

What do you mean? We need images available... whether it's dockerhub,
an infra-hosted registry or any other way to have them, we need to be
able to have images that are available and fresh without building.
Dockerhub/quay.io is the least of the problems for the infra
team/resources.

> Doug
>
>>
>> Thanks!
>> Michal
>>
>> >
>> > Flavio
>> >
>> > --
>> > @flaper87
>> > Flavio Percoco
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 07:49, Sean Dague  wrote:
> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
>> Folks,
>>
>> See $TITLE :)
>>
>> Thanks,
>> Dims
>
> I'd rather avoid #openstack-tc and just use #openstack-dev.
> #openstack-dev is a pretty low-traffic environment (compared to, say,
> #openstack-infra or #openstack-nova). I've personally been trying to
> make it my go-to way to hit up members of other teams, instead of
> diving into project-specific channels, because typically it means we
> can get a broader conversation around the item in question.
>
> Our fragmentation of shared understanding on many issues is definitely
> exacerbated by many project channels, and the assumption that people
> need to watch 20+ different channels, with different context, to stay up
> on things.
>
> I would love us to have the problem that too many interesting topics are
> being discussed in #openstack-dev that we feel the need to parallelize
> them with a different channel. But I would say we should wait until
> that's actually a problem.
>
> -Sean

I, on the flip side, would be all for #openstack-tc. First,
#openstack-dev is not an obvious place to look for TC members;
#openstack-tc would be a channel to talk about TC-related stuff, which
in large part would be significant and worth coming back to. Having
this "filtered" channel just for cross-community discussions would make
digging through logs much easier.

> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 07:11, Sam Yaple <sam...@yaple.net> wrote:
> I would like to bring up a subject that hasn't really been discussed in this
> thread yet, forgive me if I missed an email mentioning this.
>
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security turned
> off are perfect for testing internally to infra.
>
> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.

So I guess we need a new feature :) since we can test gpg packages...
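
For context, what the CI currently does is roughly this (an
illustration of the mechanism, not the exact infra config):

    # allow unsigned indices from the CI package mirrors
    echo 'APT::Get::AllowUnauthenticated "true";' > /etc/apt/apt.conf.d/99allow-unauth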

> Thanks,
> SamYaple
>
> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski <inc...@gmail.com>
> wrote:
>>
>> On 16 May 2017 at 06:22, Doug Hellmann <d...@doughellmann.com> wrote:
>> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> >> Flavio Percoco wrote:
>> >> > From a release perspective, as Doug mentioned, we've avoided
>> >> > releasing projects in any kind of built form. This was also one
>> >> > of the concerns I raised when working on the proposal to support
>> >> > other programming languages. The problem of releasing built
>> >> > images goes beyond the infrastructure requirements. It's the
>> >> > message and the guarantees implied with the built product itself
>> >> > that are the concern here. And I tend to agree with Doug that
>> >> > this might be a problem for us as a community. Unfortunately,
>> >> > putting your name, Michal, as contact point is not enough. Kolla
>> >> > is not the only project producing container images and we need to
>> >> > be consistent in the way we release these images.
>> >> >
>> >> > Nothing prevents people from building their own images and
>> >> > uploading them to dockerhub. Having this as part of OpenStack's
>> >> > pipeline is a problem.
>> >>
>> >> I totally subscribe to the concerns around publishing binaries
>> >> (under any form), and the expectations in terms of security
>> >> maintenance that it would set on the publisher. At the same time,
>> >> we need to have images available, for convenience and testing. So
>> >> what is the best way to achieve that without setting strong
>> >> security maintenance expectations for the OpenStack community ? We
>> >> have several options:
>> >>
>> >> 1/ Have third-parties publish images
>> >> It is the current situation. The issue is that the Kolla team (and
>> >> likely others) would rather automate the process and use OpenStack
>> >> infrastructure for it.
>> >>
>> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> This would allow to automate the process, but it would be a bit
>> >> weird to use common infra resources to publish in a private repo.
>> >>
>> >> 3/ Publish transient (per-commit or daily) images
>> >> A "daily build" (especially if you replace it every day) would set
>> >> relatively-limited expectations in terms of maintenance. It would
>> >> end up picking up security updates in upstream layers, even if not
>> >> immediately.
>> >>
>> >> 4/ Publish images and own them
>> >> Staff release / VMT / stable team in a way that lets us properly
>> >> own those images and publish them officially.
>> >>
>> >> Personally I think (4) is not realistic. I think we could make (3)
>> >> work, and I prefer it to (2). If all else fails, we should keep
>> >> (1).
>> >>
>> >
>> > At the forum we talked about putting test images on a "private"
>> > repository hosted on openstack.org somewhere. I think that's option
>> > 3 from your list?

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> Flavio Percoco wrote:
>> > From a release perspective, as Doug mentioned, we've avoided
>> > releasing projects in any kind of built form. This was also one of
>> > the concerns I raised when working on the proposal to support other
>> > programming languages. The problem of releasing built images goes
>> > beyond the infrastructure requirements. It's the message and the
>> > guarantees implied with the built product itself that are the concern
>> > here. And I tend to agree with Doug that this might be a problem for
>> > us as a community. Unfortunately, putting your name, Michal, as
>> > contact point is not enough. Kolla is not the only project producing
>> > container images and we need to be consistent in the way we release
>> > these images.
>> >
>> > Nothing prevents people from building their own images and uploading
>> > them to dockerhub. Having this as part of OpenStack's pipeline is a
>> > problem.
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>>
>
> At the forum we talked about putting test images on a "private"
> repository hosted on openstack.org somewhere. I think that's option
> 3 from your list?
>
> Paul may be able to shed more light on the details of the technology
> (maybe it's just an Apache-served repo, rather than a full blown
> instance of Docker's service, for example).

The issues with that are:

1. An Apache-served repo is harder to use, because we want to follow
the Docker registry API and we'd have to reimplement it.
2. Running a registry is a single command (see the sketch below).
3. If we host it in infra, and someone actually uses it (there will be
people like that), it could potentially eat up a lot of network
traffic.
4. With local caching of images in nodepools (working already), we lose
the complexity of mirroring registries across nodepools.

So, bottom line, having dockerhub/quay.io is simply easier.
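
The sketch mentioned in point 2 - the stock registry image is all it
takes:

    docker run -d -p 5000:5000 --restart=always --name registry registry:2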

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>>
>> Flavio Percoco wrote:
>>>
>>> From a release perspective, as Doug mentioned, we've avoided
>>> releasing projects in any kind of built form. This was also one of
>>> the concerns I raised when working on the proposal to support other
>>> programming languages. The problem of releasing built images goes
>>> beyond the infrastructure requirements. It's the message and the
>>> guarantees implied with the built product itself that are the concern
>>> here. And I tend to agree with Doug that this might be a problem for
>>> us as a community. Unfortunately, putting your name, Michal, as
>>> contact point is not enough. Kolla is not the only project producing
>>> container images and we need to be consistent in the way we release
>>> these images.
>>>
>>> Nothing prevents people from building their own images and uploading
>>> them to dockerhub. Having this as part of OpenStack's pipeline is a
>>> problem.
>>
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>
>
> Agreed #4 is a bit unrealistic.
>
> Not sure I understand the difference between #2 and #3. Is it just the
> cadence?
>
> I'd prefer for these builds to have a daily cadence because it sets the
> expectations w.r.t maintenance right: "These images are daily builds and not
> certified releases. For stable builds you're better off building it
> yourself"

And daily builds are exactly what I wanted in the first place :) We
probably will keep publishing release packages too, but we can be a
so-called 3rd party. I also agree [4] is completely unrealistic and I
would be against putting such a heavy burden of responsibility on any
community, including Kolla.

While the daily cadence will send the message that it's not stable, the
truth is that it will be more stable than what people would normally
build locally (again, it passes more gates), but I'm totally fine with
not saying that and letting people decide how they want to use it.

So, can we move on with implementation?

Thanks!
Michal

>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
On 15 May 2017 at 12:12, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>> For starters, I want to emphasize that a fresh set of dockerhub images
>> was one of the most requested features from Kolla at this summit, and
>> a few other features more or less require a readily-available docker
>> registry. Features like full release upgrade gates.
>>
>> This will have numerous benefits for users that don't have the
>> resources to put up a sophisticated CI/staging env, which, I'm willing
>> to bet, is still quite a significant user base. If we do it correctly
>> (and we will do it correctly), the images we're going to push will go
>> through the series of gates which we have in Kolla (and will have
>> more). So when you pull an image, you know that it was successfully
>> deployed within the scenarios available in our gates - and maybe we
>> even add upgrade gates and increase scenario coverage later? That is a
>> huge benefit for actual users.
>
> I have no doubt that consumers of the images would like us to keep
> creating them. We had lots of discussions last week about resource
> constraints and sustainable practices, though, and this strikes me
> as an area where we're deviating from our history in a way that
> will require more maintenance work upstream.
>
>> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
>> > Last week at the Forum we had a couple of discussions about
>> > collaboration between the various teams building or consuming
>> > container images. One topic that came up was deciding how to publish
>> > images from the various teams to docker hub or other container
>> > registries. While the technical bits seem easy enough to work out,
>> > there is still the question of precedence and whether it's a good
>> > idea to do so at all.
>> >
>> > In the past, we have refrained from publishing binary packages in
>> > other formats such as debs and RPMs. (We did publish debs way back
>> > in the beginning, for testing IIRC, but switched away from them to
>> > sdists to be more inclusive.) Since then, we have said it is the
>> > responsibility of downstream consumers to build production packages,
>> > either as distributors or as a deployer that is rolling their own.
>> > We do package sdists for python libraries, push some JavaScript to
>> > the NPM registries, and have tarballs of those and a bunch of other
>> > artifacts that we build out of our release tools.  But none of those
>> > is declared as "production ready," and so the community is not
>> > sending the signal that we are responsible for maintaining them in
>> > the context of production deployments, beyond continuing to produce
>> > new releases when there are bugs.
>>
>> So for us that would mean something really hacky and bad. We are a
>> community-driven, not a company-driven, project. We don't have Red Hat
>> or Canonical teams behind us (we have contributors, but that's
>> different).
>
> Although I work at Red Hat, I want to make sure it's clear that my
> objection is purely related to community concerns. For this
> conversation, I'm wearing my upstream TC and Release team hats.
>
>> > Container images introduce some extra complexity, over the basic
>> > operating system style packages mentioned above. Due to the way
>> > they are constructed, they are likely to include content we don't
>> > produce ourselves (either in the form of base layers or via including
>> > build tools or other things needed when assembling the full image).
>> > That extra content means there would need to be more tracking of
>> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> > as needed.
>>
>> We can do this by building daily, which was in fact the plan. If we
>> build every day, you have at most 24-hour-old packages; CVEs and the
>> like in non-openstack packages are still maintained by distro
>> maintainers.
>
> A daily build job introduces new questions about how big the images
> are and how many of them we keep, but let's focus on whether the
> change in policy is something we want to adopt before we consider
> those questions.

http://tarballs.openstack.org/kolla/images/ - we have already been
doing this for the last few months. The only difference is that it's
hacky, and we want something that's not hacky.

Let's set resource constraints aside for now, please, because from the
current standpoint all the resources we need are a single vm that will
run for 1hr every day and some uplink megabytes (probably less than
1gig every day, as Docker will cache a lot). If that's an issue, we can
work on it and limit the number of pushes to just version changes,
something we were discussing anyway.

>
>> > Given our security and stable team resources, I'm not entirely
>> > comfortable with us publishing these images, and giving the appearance
>> > that the community *as a whole* is committing to supporting them.
>> > I don't have any objection to someone from the community publishing
>> > them, as long as it is made clear who the 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
On 15 May 2017 at 11:47, Sean Dague <s...@dague.net> wrote:
> On 05/15/2017 01:52 PM, Michał Jastrzębski wrote:
>> For starters, I want to emphasize that a fresh set of dockerhub images
>> was one of the most requested features from Kolla at this summit, and
>> a few other features more or less require a readily-available docker
>> registry. Features like full release upgrade gates.
>>
>> This will have numerous benefits for users that don't have the
>> resources to put up a sophisticated CI/staging env, which, I'm willing
>> to bet, is still quite a significant user base. If we do it correctly
>> (and we will do it correctly), the images we're going to push will go
>> through the series of gates which we have in Kolla (and will have
>> more). So when you pull an image, you know that it was successfully
>> deployed within the scenarios available in our gates - and maybe we
>> even add upgrade gates and increase scenario coverage later? That is a
>> huge benefit for actual users.
>
> That concerns me quite a bit. Given the nature of the patch story on
> containers (which is a rebuild), I really feel like users should have
> their own build / CI pipeline locally to be deploying this way. Making
> that easy for them to do, is great, but skipping that required local
> infrastructure puts them in a bad position should something go wrong.

I totally agree they should. Even if they do, it would still be
additive to the gating that we run, so it's even better.

> I do get that many folks want that, but I think it builds in a set of
> expectations that it's not possible to actually meet from an upstream
> perspective.
>
>> On 15 May 2017 at 10:34, Doug Hellmann <d...@doughellmann.com> wrote:
>>> Last week at the Forum we had a couple of discussions about
>>> collaboration between the various teams building or consuming
>>> container images. One topic that came up was deciding how to publish
>>> images from the various teams to docker hub or other container
>>> registries. While the technical bits seem easy enough to work out,
>>> there is still the question of precedence and whether it's a good
>>> idea to do so at all.
>>>
>>> In the past, we have refrained from publishing binary packages in
>>> other formats such as debs and RPMs. (We did publish debs way back
>>> in the beginning, for testing IIRC, but switched away from them to
>>> sdists to be more inclusive.) Since then, we have said it is the
>>> responsibility of downstream consumers to build production packages,
>>> either as distributors or as a deployer that is rolling their own.
>>> We do package sdists for python libraries, push some JavaScript to
>>> the NPM registries, and have tarballs of those and a bunch of other
>>> artifacts that we build out of our release tools.  But none of those
>>> is declared as "production ready," and so the community is not
>>> sending the signal that we are responsible for maintaining them in
>>> the context of production deployments, beyond continuing to produce
>>> new releases when there are bugs.
>>
>> So for us that would mean something really hacky and bad. We are a
>> community-driven, not a company-driven, project. We don't have Red Hat
>> or Canonical teams behind us (we have contributors, but that's
>> different).
>>
>>> Container images introduce some extra complexity, over the basic
>>> operating system style packages mentioned above. Due to the way
>>> they are constructed, they are likely to include content we don't
>>> produce ourselves (either in the form of base layers or via including
>>> build tools or other things needed when assembling the full image).
>>> That extra content means there would need to be more tracking of
>>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>>> as needed.
>>
>> We can do this by building daily, which was in fact the plan. If we
>> build every day, you have at most 24-hour-old packages; CVEs and the
>> like in non-openstack packages are still maintained by distro
>> maintainers.
>
> There have been many instances where 24 hours wasn't good enough as
> embargoes end up pretty weird in terms of when things hit mirrors. It
> also assumes that when a CVE hits some other part of the gate or
> infrastructure isn't wedged so that it's not possible to build new
> packages. Or the capacity demands happen during a feature freeze, with
> tons of delay in there. There are many single points of failure in this
> process.
>
>>> Given our security and stable team resources, I'm not entirely
>>> comfortable with us 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
On 15 May 2017 at 11:19, Davanum Srinivas <dava...@gmail.com> wrote:
> Sorry for the top post, Michal. Can you please clarify a couple of things:
>
> 1) Can folks install just one or two services for their specific scenario?

Yes. That's more of a kolla-ansible feature and requires a little bit
of ansible know-how, but it's entirely possible. Kolla-k8s is built to
allow maximum flexibility in that space.
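
For illustration, that ansible know-how boils down to something like
this (a sketch only; the role tag names are assumptions and vary by
release):

    # Limit the deploy to selected services by passing their role tags
    # through to ansible-playbook (kolla-ansible tags roles per service).
    ansible-playbook -i multinode site.yml --tags mariadb,rabbitmq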

> 2) Can the container images from kolla be run on bare docker daemon?

Yes, but they need to either override our default CMD (kolla_start) or
provide the ENVs required by it - not a huge deal.
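
For example (a sketch; the image name, env var and config path are from
memory, so treat them as assumptions rather than gospel):

    # Option A: override the default CMD entirely
    docker run --rm -it kolla/centos-source-nova-api:ocata /bin/bash

    # Option B: keep kolla_start and provide the ENVs/config it expects
    docker run --rm \
        -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        -v /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro \
        kolla/centos-source-nova-api:ocata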

> 3) Can someone take the kolla container images from say dockerhub and
> use it without the Kolla framework?

Yes, there is really no such thing as a kolla framework. Our images
follow a stable ABI and can be deployed by any deploy mechanism that
follows it. We have several users who wrote their own deploy mechanism
from scratch.

Containers are just blobs with binaries in them. The little things we
add are the kolla_start script, to allow our config file management,
and some custom startup scripts for things like mariadb to help with
bootstrapping; both are entirely optional.

>
> Thanks,
> Dims
>
> On Mon, May 15, 2017 at 1:52 PM, Michał Jastrzębski <inc...@gmail.com> wrote:
>> For starters, I want to emphasize that a fresh set of dockerhub images
>> was one of the most requested features from Kolla at this summit, and
>> a few other features more or less require a readily-available docker
>> registry. Features like full release upgrade gates.
>>
>> This will have numerous benefits for users that don't have the
>> resources to put up a sophisticated CI/staging env, which, I'm willing
>> to bet, is still quite a significant user base. If we do it correctly
>> (and we will do it correctly), the images we're going to push will go
>> through the series of gates which we have in Kolla (and will have
>> more). So when you pull an image, you know that it was successfully
>> deployed within the scenarios available in our gates - and maybe we
>> even add upgrade gates and increase scenario coverage later? That is a
>> huge benefit for actual users.
>>
>> On 15 May 2017 at 10:34, Doug Hellmann <d...@doughellmann.com> wrote:
>>> Last week at the Forum we had a couple of discussions about
>>> collaboration between the various teams building or consuming
>>> container images. One topic that came up was deciding how to publish
>>> images from the various teams to docker hub or other container
>>> registries. While the technical bits seem easy enough to work out,
>>> there is still the question of precedence and whether it's a good
>>> idea to do so at all.
>>>
>>> In the past, we have refrained from publishing binary packages in
>>> other formats such as debs and RPMs. (We did publish debs way back
>>> in the beginning, for testing IIRC, but switched away from them to
>>> sdists to be more inclusive.) Since then, we have said it is the
>>> responsibility of downstream consumers to build production packages,
>>> either as distributors or as a deployer that is rolling their own.
>>> We do package sdists for python libraries, push some JavaScript to
>>> the NPM registries, and have tarballs of those and a bunch of other
>>> artifacts that we build out of our release tools.  But none of those
>>> is declared as "production ready," and so the community is not
>>> sending the signal that we are responsible for maintaining them in
>>> the context of production deployments, beyond continuing to produce
>>> new releases when there are bugs.
>>
>> So for us that would mean something really hacky and bad. We are a
>> community-driven, not a company-driven, project. We don't have Red Hat
>> or Canonical teams behind us (we have contributors, but that's
>> different).
>>
>>> Container images introduce some extra complexity, over the basic
>>> operating system style packages mentioned above. Due to the way
>>> they are constructed, they are likely to include content we don't
>>> produce ourselves (either in the form of base layers or via including
>>> build tools or other things needed when assembling the full image).
>>> That extra content means there would need to be more tracking of
>>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>>> as needed.
>>
>> We can do this by building daily, which was in fact the plan. If we
>> build every day, you have at most 24-hour-old packages; CVEs and the
>> like in non-openstack packages are still maintained by distro
>> maintainers.
>>
>>> Given our security and 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
For starters, I want to emphasize that a fresh set of dockerhub images
was one of the most requested features from Kolla at this summit, and a
few other features more or less require a readily-available docker
registry. Features like full release upgrade gates.

This will have numerous benefits for users that don't have the
resources to put up a sophisticated CI/staging env, which, I'm willing
to bet, is still quite a significant user base. If we do it correctly
(and we will do it correctly), the images we're going to push will go
through the series of gates which we have in Kolla (and will have
more). So when you pull an image, you know that it was successfully
deployed within the scenarios available in our gates - and maybe we
even add upgrade gates and increase scenario coverage later? That is a
huge benefit for actual users.

On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> Last week at the Forum we had a couple of discussions about
> collaboration between the various teams building or consuming
> container images. One topic that came up was deciding how to publish
> images from the various teams to docker hub or other container
> registries. While the technical bits seem easy enough to work out,
> there is still the question of precedence and whether it's a good
> idea to do so at all.
>
> In the past, we have refrained from publishing binary packages in
> other formats such as debs and RPMs. (We did publish debs way back
> in the beginning, for testing IIRC, but switched away from them to
> sdists to be more inclusive.) Since then, we have said it is the
> responsibility of downstream consumers to build production packages,
> either as distributors or as a deployer that is rolling their own.
> We do package sdists for python libraries, push some JavaScript to
> the NPM registries, and have tarballs of those and a bunch of other
> artifacts that we build out of our release tools.  But none of those
> is declared as "production ready," and so the community is not
> sending the signal that we are responsible for maintaining them in
> the context of production deployments, beyond continuing to produce
> new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

> Container images introduce some extra complexity, over the basic
> operating system style packages mentioned above. Due to the way
> they are constructed, they are likely to include content we don't
> produce ourselves (either in the form of base layers or via including
> build tools or other things needed when assembling the full image).
> That extra content means there would need to be more tracking of
> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
> as needed.

We can do this by building daily, which was in fact the plan. If we
build every day, you have at most 24-hour-old packages; CVEs and the
like in non-openstack packages are still maintained by distro
maintainers.
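
The daily pipeline could be as small as a nightly cron job running
something like this (a sketch; double-check kolla-build's
--registry/--push options against your version):

    # Rebuild and push nightly so published images are at most ~24h
    # behind distro security updates.
    kolla-build --base centos --type source \
        --registry registry.example.com:5000 --push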

> Given our security and stable team resources, I'm not entirely
> comfortable with us publishing these images, and giving the appearance
> that the community *as a whole* is committing to supporting them.
> I don't have any objection to someone from the community publishing
> them, as long as it is made clear who the actual owner is. I'm not
> sure how easy it is to make that distinction if we publish them
> through infra jobs, so that may mean some outside process. I also
> don't think there would be any problem in building images on our
> infrastructure for our own gate jobs, as long as they are just for
> testing and we don't push those to any other registries.

Today we use the Kolla account for that and I'm more than happy to
keep it this way. We license our code under the ASL, which gives no
guarantees. The containers will be licensed this way too, so they're
available as-is, and "production readiness" should be decided by
whoever runs them. That being said, what we *can* promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages/self-built containers ever do. I think that value
would be appreciated by small to mid-size companies that just want to
work with openstack and don't have the means to spare teams/resources
for CI.

> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already, we just do it manually on release
today. If we decide to block it, I assume we should stop doing that
too? That will hurt users who use this piece of Kolla, and I'd hate
to hurt our 

Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Michał Jastrzębski
You are talking about OpenStack being hard because it's complex, and
at the same time you're talking about using "non-linux-mainstream"
tools around it. It's either flexibility or ease, guys... Prescriptive
is easy; flexible is hard. When you want to learn about linux, you
don't start by compiling gentoo: you install ubuntu, click "next" until
it's finished, and just trust that it's working. After some time you
grow your linux skills and customize it to your needs.

We are talking about software that runs physics, hpc-like clusters,
mobile phone communication and wordpresses across thousands of
companies. It won't ever be simple and prescriptive. The best we can
do is, as you said, hide its complexity and allow a smoother entry
until someone learns that complexity. No tooling will ever replace an
experienced operator; tooling can only make it easier to gain that
experience.

You mentioned Kubernetes as a good example. Kubernetes is still a
relatively young project and doesn't support some of the things that
you yourself said you need. As it grows, and as options become
available, it too will become more and more complex.

On 5 May 2017 at 14:52, Octave J. Orgeron  wrote:
> +1
>
> On 5/5/2017 3:46 PM, Alex Schultz wrote:
>>
>>
>>> Sooo... I always get a little triggered when I hear that OpenStack is
>>> hard to deploy. We've spent the last few years fixing it and I think
>>> it's pretty well fixed now. Even as we speak, I'm deploying 500+ vms
>>> within one day on an OpenStack cluster I deployed last week.
>>>
>> No, you've written a complex tool (that you understand) to do it.
>> That's not the same as someone who is not familiar with OpenStack trying
>> to deploy OpenStack. I too could quickly deploy a decently scaled
>> infrastructure with some of the tools (fuel/tripleo/puppet/etc), but
>> the reality is that each one of these tools is inherently hiding the
>> complexities of OpenStack.  Each (including yours) has their own
>> flavor of assumptions baked in to make it work.  That is also
>> confusing for the end user who tries to switch between them and only
>> gets some of the flexibility of each but then runs face first into
>> each tool's shortcomings.  Rather than assuming a tool has solved it
>> (which it hasn't or we'd all be using the same one by now), how about
>> we take some time to understand why we've had to write these tools in
>> the first place and see if there's something we improve on?  Learning
>> the tool to deploy OpenStack is not the same as deploying OpenStack,
>> managing it, and turning it around for the true cloud end user to
>> consume.
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Michał Jastrzębski
On 5 May 2017 at 11:33, Alex Schultz  wrote:
> On Fri, May 5, 2017 at 10:48 AM, Chris Dent  wrote:
>> On Fri, 5 May 2017, Alex Schultz wrote:
>>
>>> You have to understand that as I'm mainly dealing with having to
>>> actually deploy/configure the software, when I see 'new project X'
>>> that does 'cool new things Y, Z' it makes me cringe.  Because it's
>>> just added complexity for new features that who knows if they are
>>> actually going to be consumed by a majority of end users.  I see a
>>> lot of new features for edge cases while the core functionally
>>> (insert the most used project configuration) still have awkward
>>> deployment, configuration and usability issues. But those aren't
>>> exciting so people don't want to work on them...
>>
>>
>> Would it be accurate to say, then, that from your perspective the
>> tendency of OpenStack to adopt new projects willy nilly contributes
>> to the sense of features winning out over deployment, configuration
>> and usability issues?
>>
>
> It does not help.
>
>> I think a lot of project contributors may not really see it that way
>> because they think of their project and within that project there is
>> a constant effort to clean things up, address bugs and tech debt,
>> and try to slowly but surely evolve to some level of maturity. In
>> their eyes those new projects are something else separate from their
>> project.
>>
>> From the outside, however, it is all OpenStack and maybe it looks
>> like there's loads of diffuse attention.
>>
>> If that's the case, then a question is whether or not the people who
>> are spending time on those new projects would be spending time on
>> the older projects instead if the new projects didn't exist. I don't
>> know, but seems unlikely.
>>
>
> So there's a trade off and I don't think we can just restrict entry
> because some projects aren't user friendly.  I see it as a common
> issue across all projects. Some are better than others, but what I
> would like to see is the bar for usability raised within the OpenStack
> community such that the end user (and deployer/operator) are all taken
> into consideration.  For me the usability also goes with adoption. The
> easier it is to consume, the easier it would be to adopt something.
> If  you take a look at what is required to configure OpenStack for a
> basic deployment, it is not easy to consume.  If you were to compare
> the basic getting started/install guide for Kubernetes[0] vs
> OpenStack[1], you can see what I mean about complexity.  I think just
> the install guide for neutron on a controller node[2] is about the
> same length as the kubernetes guide.  And we think this is ok?  We
> should keep adding additional installation/management complexity for
> each project?  You could argue that OpenStack has more features or
> more flexible so it's apples to oranges but I don't think it has to be
> if we worked on better patterns for configuration/deployment/upgrades.
> It feels like OpenStack is the thing that you should pay professional
> services to deploy rather than I do it yourself.  And I think that's a
> shame.

Sooo... I always get a little triggered when I hear that OpenStack is
hard to deploy. We've spent the last few years fixing it and I think
it's pretty well fixed now. Even as we speak, I'm deploying 500+ vms
within one day on an OpenStack cluster I deployed last week.

These problems aren't a factor of OpenStack growing too fast; it's the
tooling that people are using. Granted, it took some time for us to
build these tools, but we did build them. One of the reasons we could
build them is that OpenStack, after being turned into the Big Tent,
allowed us (Kolla) to quickly join the "main herd" of OpenStack and
innovate in our own way. If we'd put up lots of barriers like
incubation, we'd still have the same issues with deployment. Stability
doesn't always come from the age of a project; sometimes a change of
methodology altogether gives you better stability in the end. The Big
Tent is meant to allow this kind of innovation, among other things.
The setup I'm using now was deployed in a similar manner to this short
guide I wrote for the Boston summit [1].

Deployment, upgrades and such are problems we're fixing as we go.
Sometimes we make things harder (the nova placement API caused a bit
of a headache for deployment tools...), then we make them easier again
with a new feature. We might want to put some constraints on merge
timelines for significant, deployment-changing features, but that's
logistics that we handle as a community.

I keep hearing that OpenStack lacks leadership, and that's true, but
consider that "leadership" always limits innovation.

[1] https://github.com/inc0/kolla-ansible-workshop

Cheers,
Michal

> Thanks,
> -Alex
>
> [0] 
> https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
> [1] https://docs.openstack.org/newton/install-guide-rdo/
> [2] 
> 

[openstack-dev] [kolla] Next 2 meetings canceled

2017-05-02 Thread Michał Jastrzębski
As we said at the last meeting, the next 2 meetings (3.05 and 10.05)
are canceled due to the summit and pre-summit preparation. See you all
on the 17th!

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-ansible] Proposing Bertrand Lallau for kolla and kolla-ansible core

2017-05-02 Thread Michał Jastrzębski
Hello,

It's my pleasure to start another core reviewer vote. Today it's
Bertrand (blallau). Consider this mail my +1 vote. Members of the
kolla-ansible and kolla core teams, please cast your votes :) Voting
will be open for 2 weeks (until the 16th of May).

I also wanted to say that Bertrand went through our core mentorship
program (if only for a few weeks, because he did an awesome job before
too) :)

Thank you,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Forum] Moderators needed!

2017-04-28 Thread Michał Jastrzębski
I can moderate the HA session if you want (although there is one
listed in the schedule?). Feel free to sign me up.

On 28 April 2017 at 06:07, Jay Pipes  wrote:
> On 04/28/2017 08:22 AM, Shamail Tahir wrote:
>>
>> Hi everyone,
>>
>> Most of the proposed/accepted Forum sessions currently have moderators
>> but there are six sessions that do not have a confirmed moderator yet.
>> Please look at the list below and let us know if you would be willing to
>> help moderate any of these sessions.
>
>
> 
>
>> Cloud-Native Design/Refactoring across OpenStack
>>
>> 
>
>
> Hi Shamail,
>
> The one above looks like Alan (cc'd) is the moderator. :)
>
> Despite him having an awkwardly over-sized brain -- which unfortunately will
> limit the number of other people that can fit in the room -- I do think Alan
> will be a good moderator.
>
> Best,
> -jay
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Which distros are used as base ones?

2017-04-28 Thread Michał Jastrzębski
We tried to use the mariadb package a few months ago, but it turned
out to be an ancient version that broke horribly on multinode.

On 28 April 2017 at 02:41, Christian Berendt
 wrote:
>
>> On 27. Apr 2017, at 11:46, Marcin Juszkiewicz 
>>  wrote:
>>
>> Does someone care about Ubuntu?
>
> Yes, we do. We are using the Ubuntu source images with the Newton and Ocata 
> branches from kolla/kolla-ansible.
>
> Christian.
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Which distros are used as base ones?

2017-04-27 Thread Michał Jastrzębski
Of course we do. I, for one, run mostly ubuntu (though I must admit I
haven't been building images for the last 2 or so weeks). What you're
saying is strange, because the ubuntu-source build is a voting gate, so
if there were a problem like that we couldn't merge anything... Let's
try to find out why your build failed.

On 27 April 2017 at 02:46, Marcin Juszkiewicz
 wrote:
> Hi
>
> When I joined the Kolla project I got info that Debian was going away.
> So I took care of it, and now it is updated to current 'testing' and
> has far more images enabled than in the past.
>
> CentOS support works fine. Even on AArch64 (if you have [1] applied).
>
> 1. https://review.openstack.org/#/c/430940/
>
> But what about Ubuntu? I have a feeling that no one is using it. Ocata
> packages were used in master until I switched them to use Pike
> repository instead.
>
> Today 'openstack-base' failed for me in 'ubuntu/source' build because
> 'libmariadbclient-dev' does not exist in Ubuntu repositories.
>
> Does someone care about Ubuntu?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Michał Jastrzębski
So after the discussion started here [1], we came up with something
like this:

1. Docker build will create a "fingerprint" - a manifest of versions
saved somewhere (LABEL?)
2. We create a new CLI tool, kolla-registry, for easier management of
pushing and versioning
3. kolla-registry will be able to query an existing source docker
registry (incl. dockerhub) for the latest tag-revision and its version
manifest, and the dest registry for its tags-revisions and manifests
4. if source image manifest != dest image manifest -> push the source
image to the dest registry and increase the tag-revision by 1 (see the
sketch below)
5. kolla-registry will output the latest list of images:tags-revisions
available for kolla-k8s/ansible to consume
6. we keep :4.0.0 style images for every tag in the kolla repository.
These are static and will not be revised.

Different scenarios can be handled this way:
1. Autopushing to dockerhub will query the freshest built registry
(tarballs, source) and dockerhub (dest); it will create
image:branchname (nova-api:ocata) for the HEAD of the stable branch on
every run, and image:branchname-revision with a revision increase
2. Users will have an easy time managing their local registry -
dockerhub (source) and local (dest): if nova-api:ocata on dockerhub is
newer than local, pull it to local and increase the local tip and
revision
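
To make step 4 concrete, the check could look something like this (a
rough sketch only; kolla-registry doesn't exist yet and the label name
is made up):

    # Push the source image only when its version manifest differs from
    # the one already in the destination registry (assumes both images
    # have been pulled locally; revision bumping omitted).
    src=kolla/nova-api:ocata
    dst=registry.local:5000/kolla/nova-api:ocata
    get_manifest() {
        docker inspect --format \
            '{{ index .Config.Labels "kolla.version_manifest" }}' "$1"
    }
    if [ "$(get_manifest "$src")" != "$(get_manifest "$dst")" ]; then
        docker tag "$src" "$dst"
        docker push "$dst"
    fi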

Thoughts?
Michal

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2017-04-19.log.html#t2017-04-19T19:10:25

On 19 April 2017 at 10:45, Fox, Kevin M <kevin@pnnl.gov> wrote:
> That works for detecting changes in the build system.
>
> It does not solve the issue of how to keep containers atomic on end user 
> systems.
>
> All images in a k8s deployment should be the same image. This is done by 
> specifying the same tag. When a new update is done, the updated deployment 
> should specify a new tag to distinguish it from the old tag so that roll 
> forwards/roll backs work atomically and as designed. Otherwise, roll back can 
> actually break or roll forward wont actually grab newer images.
>
> Thanks,
> Kevin
>
> ____
> From: Michał Jastrzębski [inc...@gmail.com]
> Sent: Wednesday, April 19, 2017 8:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub
>
> I think LABEL is a great idea for all the "informative" stuff. In
> fact, if we could somehow abuse LABEL to fill it up after we get
> packages installed, we could use it for the version manifest. That
> will make the logic around "if version changed" much easier, since
> we'll have easy access to this information on both image and container.
>
> Our autopushing mechanism will work with tags and the HEAD of the
> stable branch in this case.
>
> Kevin, then your use case would be done like this:
> 1. pull container nova-compute:ocata, tag it locally to
> nova-compute:ocata-deployed, deploy it
> 2. every now and then, pull a fresh nova-compute:ocata from dockerhub
> 3. compare the versions in the LABELs to see whether you want to
> upgrade or not
> 4. if you do, retag :ocata-deployed to :ocata-old and :ocata to
> :ocata-deployed, and run the upgrade
> 5. keep ocata-old, revision it, back it up for as long as you want
>
> I also think that we can ship utils to do this in kolla, so people
> won't need to write these themselves.
>
> Does that work?
>
> Cheers,
> Michal
>
> On 19 April 2017 at 05:02, Flavio Percoco <fla...@redhat.com> wrote:
>> On 19/04/17 11:20 +0100, Paul Bourke wrote:
>>>
>>> I'm wondering if moving to using docker labels is a better way of solving
>>> the various issues being raised here.
>>>
>>> We can maintain a tag for each of master/ocata/newton/etc, and on each
>>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link to CI
>>> of build/etc'. I believe this solves all points Kevin mentioned except
>>> rollback, which afaik, OpenStack doesn't support anyway. It also solves
>>> people's concerns with what is actually in the images, and is a standard
>>> Docker mechanism.
>>>
>>> Also as Michal mentioned, if users are concerned about keeping images,
>>> they can tag and stash them away themselves. It is overkill to maintain
>>> hundreds of (imo meaningless) tags in a registry, the majority of which
>>> people don't care about - they only want the latest of the branch they're
>>> deploying.
>>>
>>> Every detail of a running Kolla system can be easily deduced by scanning
>>> across nodes and printing the labels of running containers, functionality
>>> which can be shipped by Kolla. There are also methods for fetching labels of
>>> remote images[0][1] for users wishing to ins

[openstack-dev] [kolla][all] Core mentorship program kickoff

2017-04-19 Thread Michał Jastrzębski
Hello everyone,

At today's meeting we officially started the mentorship program in
Kolla :) If you are a core or interested in becoming one, please sign
up on this etherpad:

https://etherpad.openstack.org/p/kolla-mentorship-signup

The idea is to provide a safe environment to ask questions, get
feedback from a trusted person on the core team, and ultimately join
the core team.

The role of the mentor is:
1. Make sure to review the changes that your student reviewed,
providing feedback on his/her reviews as well
2. Review the changes your student proposed
3. Answer questions about the review process, technical issues and
things like that
4. Be a trusted friend in the community :)
5. Ultimately, when you decide that your student is ready, kick off
the voting process for core addition, or let me know and I'll do it
for you

The role of the student is:
1. Review, review, review; your voice counts
2. Don't be shy to ask your mentor, either openly or privately
3. Care for the project at large, care for the code and the community;
it's your project, and someday you might be mentoring another person :)

I encourage everyone to take part in this program! This is just a
pilot; we're figuring it out as we go, so help us evolve this effort
and maybe make it more cross-community :)

Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Michał Jastrzębski
I think LABEL is a great idea for all the "informative" stuff. In fact,
if we could somehow abuse LABEL to fill it up after we get packages
installed, we could use it for the version manifest. That will make the
logic around "if version changed" much easier, since we'll have easy
access to this information on both image and container.
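
For instance, one way to "fill up" a LABEL after the packages are in
could be (a sketch; the label name is made up, and pip's location
differs between image types):

    # Extract an installed-versions manifest from a built image, then
    # stamp it on as a label via a trivial follow-up build.
    img=kolla/centos-source-nova-api:ocata
    docker run --rm "$img" pip freeze | base64 -w0 > manifest.b64
    printf 'FROM %s\nARG MANIFEST\nLABEL kolla.version_manifest="$MANIFEST"\n' \
        "$img" > Dockerfile.label
    docker build -t "$img" --build-arg MANIFEST="$(cat manifest.b64)" \
        -f Dockerfile.label .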

Our autopushing mechanism will work with tags and the HEAD of the
stable branch in this case.

Kevin, then your use case would be done like this:
1. pull container nova-compute:ocata, tag it locally to
nova-compute:ocata-deployed, deploy it
2. every now and then, pull a fresh nova-compute:ocata from dockerhub
3. compare the versions in the LABELs to see whether you want to
upgrade or not
4. if you do, retag :ocata-deployed to :ocata-old and :ocata to
:ocata-deployed, and run the upgrade
5. keep ocata-old, revision it, back it up for as long as you want

I also think that we can ship utils to do this in kolla, so people
won't need to write these themselves; a rough sketch is below.
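
For illustration (the label name matches the made-up one above, and
the image names are just examples):

    # 1. pull the image and mark it as the deployed one
    docker pull kolla/centos-source-nova-compute:ocata
    docker tag kolla/centos-source-nova-compute:ocata \
        nova-compute:ocata-deployed

    # 2./3. later, pull a fresh :ocata and compare the version labels
    docker pull kolla/centos-source-nova-compute:ocata
    fmt='{{ index .Config.Labels "kolla.version_manifest" }}'
    new=$(docker inspect --format "$fmt" kolla/centos-source-nova-compute:ocata)
    old=$(docker inspect --format "$fmt" nova-compute:ocata-deployed)

    # 4. if they differ, rotate the tags and kick off the upgrade
    if [ "$new" != "$old" ]; then
        docker tag nova-compute:ocata-deployed nova-compute:ocata-old
        docker tag kolla/centos-source-nova-compute:ocata \
            nova-compute:ocata-deployed
    fi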

Does that work?

Cheers,
Michal

On 19 April 2017 at 05:02, Flavio Percoco <fla...@redhat.com> wrote:
> On 19/04/17 11:20 +0100, Paul Bourke wrote:
>>
>> I'm wondering if moving to using docker labels is a better way of solving
> the various issues being raised here.
>>
>> We can maintain a tag for each of master/ocata/newton/etc, and on each
>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link to CI
>> of build/etc'. I believe this solves all points Kevin mentioned except
>> rollback, which afaik, OpenStack doesn't support anyway. It also solves
>> people's concerns with what is actually in the images, and is a standard
>> Docker mechanism.
>>
>> Also as Michal mentioned, if users are concerned about keeping images,
>> they can tag and stash them away themselves. It is overkill to maintain
>> hundreds of (imo meaningless) tags in a registry, the majority of which
>> people don't care about - they only want the latest of the branch they're
>> deploying.
>>
>> Every detail of a running Kolla system can be easily deduced by scanning
>> across nodes and printing the labels of running containers, functionality
>> which can be shipped by Kolla. There are also methods for fetching labels of
>> remote images[0][1] for users wishing to inspect what they are upgrading to.
>>
>> [0] https://github.com/projectatomic/skopeo
>> [1] https://github.com/docker/distribution/issues/1252
>
>
>
> You beat me to it, Paul.
>
> I think using lables to communicate the version of each openstack software
> installed in the image is the way to go here. We're looking into doing this
> ourselves as part of the RDO pipeline and it'd be awesome to have it being
> part
> of kolla-build itself. Steve Baker, I believe, was working on this.
>
> The more explicit we are about the contents of the image, the better. People
> want to know what's in there, rather than assuming based on the tag.
>
> Flavio
>
>
>> -Paul
>>
>> On 18/04/17 22:10, Michał Jastrzębski wrote:
>>>
>>> On 18 April 2017 at 13:54, Doug Hellmann <d...@doughellmann.com> wrote:
>>>>
>>>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>>>>>
>>>>> On 18 April 2017 at 12:41, Doug Hellmann <d...@doughellmann.com> wrote:
>>>>>>
>>>>>> Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>>>>>>>
>>>>>>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann
>>>>>>> <d...@doughellmann.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34
>>>>>>>> -0700:
>>>>>>>>>
>>>>>>>>> My dear Kollegues,
>>>>>>>>>
>>>>>>>>> Today we had discussion about how to properly name/tag images being
>>>>>>>>> pushed to dockerhub. That moved towards general discussion on
>>>>>>>>> revision
>>>>>>>>> mgmt.
>>>>>>>>>
>>>>>>>>> Problem we're trying to solve is this:
>>>>>>>>> If you build/push images today, your tag is 4.0
>>>>>>>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0
>>>>>>>>> until
>>>>>>>>> we tag new release.
>>>>>>>>>
>>>>>>>>> But image built today is not equal to image built tomorrow, so we
>>>>>>&

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Michał Jastrzębski
On 18 April 2017 at 13:54, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
>> > Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>> >> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
>> >> wrote:
>> >>
>> >> > Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>> >> > > My dear Kollegues,
>> >> > >
>> >> > > Today we had discussion about how to properly name/tag images being
>> >> > > pushed to dockerhub. That moved towards general discussion on revision
>> >> > > mgmt.
>> >> > >
>> >> > > Problem we're trying to solve is this:
>> >> > > If you build/push images today, your tag is 4.0
>> >> > > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>> >> > > we tag new release.
>> >> > >
>> >> > > But image built today is not equal to image built tomorrow, so we
>> >> > > would like something like 4.0.0-1, 4.0.0-2.
>> >> > > While we can reasonably detect history of revisions in dockerhub,
>> >> > > local env will be extremely hard to do.
>> >> > >
>> >> > > I'd like to ask you for opinions on desired behavior and how we want
>> >> > > to deal with revision management in general.
>> >> > >
>> >> > > Cheers,
>> >> > > Michal
>> >> > >
>> >> >
>> >> > What's in the images, kolla? Other OpenStack components?
>> >>
>> >>
>> >> Yes, each image will typically contain all software required for one
>> >> OpenStack service, including dependencies from OpenStack projects or the
>> >> base OS. Installed via some combination of git, pip, rpm, deb.
>> >>
>> >> > Where does the
>> >> > 4.0.0 come from?
>> >> >
>> >> >
>> >> Its the python version string from the kolla project itself, so ultimately
>> >> I think pbr. I'm suggesting that we switch to using the
>> >> version.release_string[1] which will tag with the longer version we use 
>> >> for
>> >> other dev packages.
>> >>
>> >> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
>> >
>> > Why are you tagging the artifacts containing other projects with the
>> > version number of kolla, instead of their own version numbers and some
>> > sort of incremented build number?
>>
>> This is what we do in Kolla, and I'd say for logistics and simplicity
>> of implementation. Tags are more than just information for us. We have to
>
> But for a user consuming the image, they have no idea what version of
> nova is in it because the version on the image is tied to a different
> application entirely.

That's easy enough to check, though (just docker exec into the
container and run pip freeze). On the other hand, you'll have the
information that "this set of versions was tested together", which is
arguably more important.
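
e.g. (the container name is whatever your deployment tool used):

    # See exactly which service versions shipped in a running container
    docker exec nova_api pip freeze | grep -i nova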

>> deploy these images and we have to know a tag. Combine that with the
>> clear separation of the build phase from the deployment phase (really,
>> the build phase is entirely optional thanks to dockerhub), and you'll
>> end up with either an automagical script that will have to somehow
>> detect a correct version mix of containers that work with each other,
>> or a hand-crafted list with 100+ versions hardcoded.
>>
>> Incremental builds are hard because builds are atomic and you never
>> really know how many times images were rebuilt (also, local rebuilds
>> vs dockerhub-pushed rebuilds will cause collisions in tags).
>>
>> > Doug
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Michał Jastrzębski
On 18 April 2017 at 12:41, Doug Hellmann  wrote:
> Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
>> wrote:
>>
>> > Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>> > > My dear Kollegues,
>> > >
>> > > Today we had discussion about how to properly name/tag images being
>> > > pushed to dockerhub. That moved towards general discussion on revision
>> > > mgmt.
>> > >
>> > > Problem we're trying to solve is this:
>> > > If you build/push images today, your tag is 4.0
>> > > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>> > > we tag new release.
>> > >
>> > > But image built today is not equal to image built tomorrow, so we
>> > > would like something like 4.0.0-1, 4.0.0-2.
>> > > While we can reasonably detect history of revisions in dockerhub,
>> > > local env will be extremely hard to do.
>> > >
>> > > I'd like to ask you for opinions on desired behavior and how we want
>> > > to deal with revision management in general.
>> > >
>> > > Cheers,
>> > > Michal
>> > >
>> >
>> > What's in the images, kolla? Other OpenStack components?
>>
>>
>> Yes, each image will typically contain all software required for one
>> OpenStack service, including dependencies from OpenStack projects or the
>> base OS. Installed via some combination of git, pip, rpm, deb.
>>
>> > Where does the
>> > 4.0.0 come from?
>> >
>> >
>> Its the python version string from the kolla project itself, so ultimately
>> I think pbr. I'm suggesting that we switch to using the
>> version.release_string[1] which will tag with the longer version we use for
>> other dev packages.
>>
>> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
>
> Why are you tagging the artifacts containing other projects with the
> version number of kolla, instead of their own version numbers and some
> sort of incremented build number?

This is what we do in Kolla, and I'd say for logistics and simplicity
of implementation. Tags are more than just information for us. We have
to deploy these images and we have to know a tag. Combine that with the
clear separation of the build phase from the deployment phase (really,
the build phase is entirely optional thanks to dockerhub), and you'll
end up with either an automagical script that will have to somehow
detect a correct version mix of containers that work with each other,
or a hand-crafted list with 100+ versions hardcoded.

Incremental builds are hard because builds are atomic and you never
really know how many times images were rebuilt (also, local rebuilds vs
dockerhub-pushed rebuilds will cause collisions in tags).

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Michał Jastrzębski
Our issue is a bit complex, though. Dockerhub-pushed images are less
affected by the version of our code than by the versions of everyone
else's code.

On 18 April 2017 at 07:36, Flavio Percoco <fla...@redhat.com> wrote:
> On 13/04/17 13:48 +1200, Steve Baker wrote:
>>
>> On Thu, Apr 13, 2017 at 10:59 AM, Michał Jastrzębski <inc...@gmail.com>
>> wrote:
>>
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>>
>> I already have a change which proposes tagging images with a pbr built
>> version [1]. I think if users want tags which are stable for the duration
>> of a major release they should switch to using the tag specified by
>> kolla-build.conf base_tag, which can be set to latest, ocata, pike, etc.
>> This would leave the version tag to at least track changes to the kolla
>> repo itself. Since the contents of upstream kolla images come from such
>> diverse sources, all I could suggest to ensure unique tags are created for
>> unique images is to append a datestamp to [1] (or have an extra datestamp
>> based tag). Bonus points for only publishing a new datestamp tag if the
>> contents of the image really changes.
>>
>> In the RDO openstack-kolla package we now tag images with the
>> {Version}-{Release} of the openstack-kolla package which built it[2]. I
>> realise this doesn't solve the problem of the tag needing to change when
>> other image contents need to be updated, but I believe this can be solved
>> within the RDO image build pipeline by incrementing the {Release} whenever
>> a new image needs to be published.
>>
>> [1] https://review.openstack.org/#/c/448380/
>> [2] https://review.rdoproject.org/r/#/c/5923/1/openstack-kolla.spec
>
>
> I like this option better because it's more consistent with how things are
> done
> elsewhere in OpenStack.
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-17 Thread Michał Jastrzębski
So, while I agree that everyone should use images built locally, I
also see the value of downloading dockerhub-hosted images (besides
speed). The images on dockerhub passed our gates and are CI-tested,
which means the quality of these images is ensured by our CI. Not
everyone can afford a CI/CD system in their infra, so for small/medium
installations this actually might be more stable than local builds.

Given that both local and dockerhub-hosted images have valid
production use cases, I'd like us to keep our tagging mechanism the
same for both. That makes revisions per se impossible to handle
(4.0.0-3 on dockerhub doesn't mean 4.0.0-3 locally). Also, how often
would we push 4.0.0-x? Daily, for the quickest update on security
patches (preferable for me); that would mean our dockerhub registry
would grow extremely fast if we wanted to retain every revision. One
idea would be to put a little of this weight on users (sorry!). We
could upload daily :ocata images and delete the old ones. I think not
many people do daily openstack upgrades (cool use case though), which
means they will have a somewhat stale :ocata image locally. We can
just set the expectation that you back up the images you actually use
locally. Just tar.gz /var/lib/docker/volumes/registry/_data and save
it somewhere safe. Then you can always come back to it.
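
e.g.:

    # Back up the local registry volume before pulling fresh :ocata images
    tar czf registry-backup-$(date +%F).tar.gz \
        /var/lib/docker/volumes/registry/_data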

Bottom line, my suggestion for now is to have a schema like this:

image-name:4.0.0 -> corresponding docker tag, image built (preferably
automatically) close to the tagging date
image-name:ocata -> tip of the ocata branch, built daily - the
freshest code that passed the gates
image-name:master -> tip of the master branch

To achieve fully repeatable builds, we would need 4.0.0-type tagging
(based on pbr + git tag), version manifest generation (as discussed at
the PTG), and a mechanism to consume existing manifests and rebuild
images with those exact versions (barring issues with repos removing
previous versions...). That is quite a project in its own right...

Thoughts?

Cheers,
Michal

On 17 April 2017 at 19:43, Jeffrey Zhang  wrote:
> I think we have two topics and improvements here
>
> 1. images in https://hub.docker.com/r/kolla/
> 2. tag in end-user env.
>
> # images in hub.docker.com
>
> we are building kolla tag images and pushing them to hub.docker.com.
> After this, we do nothing for these images.
>
> The issue is
>
> 1. any security update is not included in these images.
>solution: I do not think using 4.0.0-1, 4.0.0-2 on hub.docker.com is a
> good idea.
>if so, we would need to mark what is in the 4.0.0-1 container and what
> the difference is with 4.0.0-2. This would just create more chaos.
>And no prod env should depend on hub.docker.com's images anyway, which
> are vulnerable to attack and mutable.
>
> 2. branch images are not pushed.
>solution: we can add a job to push branch images into hub.docker.com like
> inc0
>said. For example:
>centos-source-nova-api:4.0.0
>centos-source-nova-api:ocata
>centos-source-nova-api:pike
>centos-source-nova-api:master
>But branch tag images is not stable ( even its name is stable/ocata ),
> users are
>not recommended to use these images
>
> # images in end-user env
>
> I recommend end users build their own images rather than use
> hub.docker.com directly.
> In my env, I build images with the following tag rule:
>
> when building 4.0.0 multiple times, I use a different tag each time.
> For example:
>    1st: 4.0.0.1
>    2nd: 4.0.0.2
>    3rd: 4.0.0.3
>    ...
>
> The advantage of this approach: each tag stays immutable (never
> overridden).
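>
> For illustration, a tiny python sketch of how the next tag could be
> picked (using the docker SDK; the helper is my own, adapt as needed):
>
> import docker
>
> def next_local_tag(client, image_name, base="4.0.0"):
>     # Scan existing local base.N tags and return base.(N+1).
>     suffixes = [0]
>     for image in client.images.list(image_name):
>         for tag in image.tags:
>             _, _, t = tag.rpartition(":")
>             if t.startswith(base + "."):
>                 tail = t[len(base) + 1:]
>                 if tail.isdigit():
>                     suffixes.append(int(tail))
>     return "%s.%d" % (base, max(suffixes) + 1)
>
> # e.g. next_local_tag(docker.from_env(), "kolla/centos-source-nova-api")
> # returns "4.0.0.1" on a fresh env, "4.0.0.4" after three builds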
>
> On Tue, Apr 18, 2017 at 6:46 AM, Steve Baker  wrote:
>>
>>
>>
>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
>> wrote:
>>>
>>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>>> > My dear Kollegues,
>>> >
>>> > Today we had a discussion about how to properly name/tag images
>>> > being pushed to dockerhub. That moved towards a general discussion
>>> > of revision mgmt.
>>> >
>>> > The problem we're trying to solve is this:
>>> > if you build/push images today, your tag is 4.0;
>>> > if you do it tomorrow, it's still 4.0, and it will keep being 4.0
>>> > until we tag a new release.
>>> >
>>> > But the image built today is not equal to the image built tomorrow,
>>> > so we would like something like 4.0.0-1, 4.0.0-2.
>>> > While we can reasonably detect the history of revisions on
>>> > dockerhub, that will be extremely hard to do in a local env.
>>> >
>>> > I'd like to ask you for opinions on desired behavior and how we want
>>> > to deal with revision management in general.
>>> >
>>> > Cheers,
>>> > Michal
>>> >
>>>
>>> What's in the images, kolla? Other OpenStack components?
>>
>>
>> Yes, each image will typically contain all software required for one
>> OpenStack service, including dependencies from OpenStack projects or the
>> base OS. Installed via some combination of git, pip, rpm, deb.
>>
>>>
>>> Where does the
>>> 4.0.0 come from?
>>>
>>
>> It's the 

Re: [openstack-dev] [kolla][nova] Starting a core reviewer mentorship program for Kolla deliverables

2017-04-13 Thread Michał Jastrzębski
Thanks sdake!

I added this topic to the next meeting agenda. The way I see it, this
might not add any load on core reviewers, as answering questions and
reviewing are already part of our job. However, having a "trusted"
person to ask can mean a lot to a new reviewer, as it lowers the fear
of being embarrassed by asking in public (there is nothing to fear,
but that's how our brains are wired). It also directs cores' attention
to commenting on the mentee's reviews to improve their quality, and
the mentor can start the actual core voting (or ask me to start it)
when the mentee is ready.

Let's give it a try. I'm very hopeful about it!

Cheers,
Michal

On 13 April 2017 at 06:19, Steven Dake (stdake)  wrote:
> Ok sounds fair.  I guess I had misunderstood what nova had done, and thought
> it was a path to success.
>
>
>
> I’m happy we can have the conversation about a mentorship program now at
> least in the Kolla project.
>
>
>
> I’m not sure how to evaluate the cost of the mentorship program in terms of
> core reviewer time.  Perhaps we should start there.  I know core reviewers
> are swamped for bandwidth, however, I also feel making time for mentorship
> is essential to the project’s long term health.
>
>
>
> I am also not sure if we have more individuals interested.  If only Rich is
> interested, that is only one person to mentor.  If 100 people are
> interested, that is beyond our capacity as a team to handle :)
>
>
>
> To help sort out the time commitment for core reviewers, I have started a
> “sign-up sheet” for folks interested in mentoring here:
>
> https://etherpad.openstack.org/p/kolla-mentorship-signup
>
>
>
> Regards
>
> -steve
>
>
>
>
>
> From: "Serguei Bezverkhi (sbezverk)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Thursday, April 13, 2017 at 8:45 AM
>
>
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla][nova] Starting a core reviewer
> mentorship program for Kolla deliverables
>
>
>
> The idea is great, no doubt here, meaning mentoring and everything, but it
> should not come at the price of reduced quality control. 2 x +2 and +W
> should still be required from “regular” cores for a PS to merge.
>
>
>
> Serguei
>
>
>
> From: Richard Wellum [mailto:richwel...@gmail.com]
> Sent: Thursday, April 13, 2017 7:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [kolla][nova] Starting a core reviewer
> mentorship program for Kolla deliverables
>
>
>
> As a relatively new member of the openstack community I think the idea of a
> mentorship program is a good one; I'd like to throw my hat in the ring if
> the kolla community needs a guinea-pig to try this on. :)
>
>
>
> Rich
>
>
>
> On Wed, Apr 12, 2017 at 7:53 PM Matt Riedemann  wrote:
>
> On 4/12/2017 3:40 PM, Steven Dake (stdake) wrote:
>> Matt,
>>
>> Thanks for the response.  It is helpful.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Matt Riedemann 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Wednesday, April 12, 2017 at 4:36 PM
>> To: "openstack-dev@lists.openstack.org"
>> 
>> Subject: Re: [openstack-dev] [kolla][nova] Starting a core reviewer
>> mentorship program for Kolla deliverables
>>
>> On 4/12/2017 11:59 AM, Steven Dake (stdake) wrote:
>> > Hey folks,
>> >
>> > In today’s Kolla team meeting, the idea was proposed of adopting
>> > nova’s “protocore” mentorship program for Kolla.  We would like to
>> > know what nova has learned from this effort.
>> >
>> > In today’s Kolla meeting we had broad consensus on the following:
>> >
>> > 1) Kolla has participants that want to be core reviewers
>> > 2) These participants don’t know how to become core reviewers
>> > 3) The core reviewers in Kolla should mentor “protocore” reviewers
>> >    on how to do good reviews
>> >
>> > From that, we concluded some form of mentorship program for
>> > potential core reviewers was in order.  We got into some debate
>> > about _how_ the program should be rolled out.  Let’s use this
>> > thread to discuss the rollout, since that seems to be the sticking
>> > point; I saw no dissent in the discussion suggesting the basic
>> > concepts were a negative change.
>> >
>> > I am aware that nova uses a +1 review from a “protocore” and a
>> > +2/+W from a core reviewer prior to merge.  Nova cores – would you
>> > mind defining your process (on the ml is fine) more thoroughly and
>> > your
>> > 
