Re: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core
On 26/03/18 11:52 -0400, Doug Hellmann wrote:

> Ken has been managing oslo.messaging for ages now but his participation in the team has gone far beyond that single library. He regularly attends meetings, including the PTG, and has provided input into several of our team decisions recently. I think it's time we make him a full member of the oslo-core group. Please respond here with a +1 or -1 to indicate your opinion.

YAY! +1

-- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] Changes to Glance core team
On 01/03/18 12:09 +, Erno Kuvaja wrote:

> 2) Removing Flavio Percoco from Glance core. Flavio requested to be removed a couple of cycles ago already, and we begged him to stick around to help with the Interoperable Image Import, which he has been an integral part of designing since the very beginning, due to his knowledge of the internals of the Glance tasks. The majority of this work is finished and we would like to thank Flavio for his help and hard work for the Glance community.

Makes sense to me! +1

Thanks for all the fish. :)

Flavio
Re: [openstack-dev] [oslo] Oslo team updates
On 02/01/18 11:53 +0800, ChangBo Guo wrote:

> In the last two cycles some people's situations have changed and they can't focus on Oslo code review, so I propose some changes to the Oslo team. Remove the following people, with thanks for their past hard work making Oslo what it is, and welcome them back if they want to join the team again. Please +1/-1 for the change.
>
> Generalist Code Reviewers: Brant Knudson
> Specialist API Maintainers:
>   oslo-cache-core: Brant Knudson, David Stanek
>   oslo-db-core: Viktor Serhieiev
>   oslo-messaging-core: Dmitriy Ukhlov, Oleksii Zamiatin, Viktor Serhieiev
>   oslo-policy-core: Brant Knudson, David Stanek, guang-yee
>   oslo-service-core: Marian Horban

+1 Thanks everyone

Flavio
Re: [openstack-dev] [tripleo] The Weekly Owl - 1st Edition
Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work
On 30/11/17 10:23 -0500, Dan Prince wrote:

> On Fri, Nov 17, 2017 at 4:43 AM, Steven Hardy wrote:
>
>> In the ansible/kubernetes model, it could work like:
>> 1. Ansible role makes k8s API call creating pod with multiple containers
>> 2. Pod starts temporary container that runs puppet, config files written out to shared volume
>> 3. Service container starts, config consumed from shared volume
>> 4. Optionally run temporary bootstrapping container inside pod
>> This sort of pattern is documented here: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
>
> Regarding the use of the shared volume, I agree this is a nice iteration. We considered using it within Pike as well but, due to the hybrid nature of the deployment and the desire to have config files that are easy to debug on the host itself, we ended up not going there. In Queens, however, we are aiming for more or less full containerization, so we could consider the merits of this approach again. Just pointing out that I don't think Kubernetes is a requirement in order to be able to proceed with some of this improvement.

Agreed! The sooner we can move things out of "host paths" the better!

Kubernetes is not a requirement for these improvements but, I would say, these improvements are a requirement for a k8s based deployment. That is to say, a k8s based deployment should not depend on hostpaths :)

Flavio
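The four steps Steven describes above map naturally onto a single pod with an init container and a shared emptyDir volume. A minimal sketch of that shape, with all image names and mount paths invented for illustration (nothing here is taken from the actual TripleO templates):

```yaml
# Hypothetical pod illustrating the shared-volume pattern described above.
apiVersion: v1
kind: Pod
metadata:
  name: nova-api
spec:
  volumes:
    - name: config-data
      emptyDir: {}              # shared between the containers below
  initContainers:
    - name: config-gen          # step 2: run puppet, write config to the volume
      image: example/nova-api-puppet:latest
      command: ["puppet", "apply", "/etc/puppet/manifests/nova.pp"]
      volumeMounts:
        - name: config-data
          mountPath: /var/lib/config-data
  containers:
    - name: nova-api            # step 3: service consumes the generated config
      image: example/nova-api:latest
      volumeMounts:
        - name: config-data
          mountPath: /etc/nova
          readOnly: true
```

An optional bootstrapping container (step 4) would be another entry under `containers`, sharing the same volume.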
[openstack-dev] [tripleo] Updates from the ansible-role-k8s-* front
Hey folks,

Some updates from the work on ansible-role-k8s-*. I've got the CI jobs working and running tempest. Well, to be honest, the kubernetes one does, whereas the openshift one seems to be failing right now. But this is good anyway. It is not running the full tempest suite but the tests for the specific role. This is not meant to be an integration job but a functional one. We'll eventually add integration jobs.

In addition, I've created a little script for those who want to play with the ansible-role-k8s-* repos. You can start from a vanilla CentOS box and run this script (sudo required). Ok, I lied, you need to have a Kubernetes cluster somewhere and, as of now, the script expects it to be on localhost. If it is not, feel free to modify the script and change the `coe_host` in the playbook.

Here's a link to the script: https://gist.github.com/flaper87/f69a5dcc7b6ab0fdc290fa01eb8e7bdb

And an asciinema recording of the script running: https://asciinema.org/a/150376

The requirements are small and it doesn't take too long to run. Eventually, all the cleanup operations should go into the deprovision tasks file. Working on that!

Cheers, Flavio
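For reference, driving one of these roles against a cluster boils down to a playbook that sets `coe_host` and applies the role. A rough sketch, assuming the mariadb role and with the endpoint value and variable layout guessed rather than taken from the gist:

```yaml
# Hypothetical standalone playbook for one of the ansible-role-k8s-* roles.
# The coe_host value is an assumption; point it at your cluster's API
# endpoint if it is not on localhost, as noted above.
- hosts: localhost
  connection: local
  vars:
    coe_host: "https://127.0.0.1:6443"
  roles:
    - ansible-role-k8s-mariadb
```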
Re: [openstack-dev] [all] Dublin PTG format
On 27/11/17 14:21 -0500, Doug Hellmann wrote:

> Excerpts from Thierry Carrez's message of 2017-11-27 11:58:04 +0100:
>
>> Hi everyone, We are in the final step in the process of signing the contract with the PTG venue. We should be able to announce the location this week! So it's time to start preparing. We'll have 5 days, like in Denver. One thing we'd like to change for this round is to give a bit more flexibility in the topics being discussed, especially in the first two days.
>>
>> In Denver, we selected a number of general "themes" and gave them all a room for 2 days on Monday-Tuesday. Then all the "teams" that wanted a project team meeting could get a room for 2 or 3 days on Wednesday-Friday. That resulted in a bit of flux during the first two days, with a lot of empty rooms as most of the themes did not really need 2 days, and a lot of conflicts were present.
>>
>> For Dublin, the idea would be to still predetermine topics (themes and teams) and assign them rooms in advance. But we would be able to assign smaller amounts of time (per half-day) based on the expressed needs. Beyond those pre-assigned themes/teams we'd add flexibility for other groups to book the remaining available rooms in 90-min slots on-demand. A bit like how we did reservable rooms in the past, but more integrated with the rest of the event. It would all be driven by the PTGbot, which would show which topic is being discussed in which room, in addition to the current discussion subject within each topic.
>>
>> We have two options in how we do the split for predetermined topics. We used to split the week between Mon-Tue (themes) and Wed-Fri (teams). The general idea there was to allow some people only interested in a team meeting to only attend the second part of the week. However most people attend all 5 days, and during event feedback some people suggested that "themes" should be in the mornings and "teams" in the afternoons (and all Friday). What would be your preference?
>> The Mon-Tue/Wed-Fri split means less room changes, which makes it easier on the events team. So all else being equal we'd rather keep it the way it is, but I'm open to changing it if attendees think it's a good idea. If you have any other suggestion (that we could implement in the 3 months we have between now and the event) please let me know :)
>
> What sort of options do we have for trying the new morning/afternoon split approach without increasing the burden on the events team? Can we print the signs so they have both the project team names and a theme listed on the same sign so we can avoid changing them at all? Can we have the project teams or theme room organizers manage their own signs, placing them in prepared holders outside of the rooms?

Regardless of the format, I think we can experiment with something like this. It will give teams more flexibility.

> Or do we need signs at all? The rooms all have names or numbers already right?

Or this!

Flavio
Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas
On 27/11/17 13:14 -0600, Jimmy McArthur wrote:

> Joshua Harlow wrote:
>
>> With say an editor that solicits (and backlogs topics and stories and such) various developers/architects at various companies and creates an actually human-curated place for developers and technology and architecture to be spotlighted. To me personal blogs can be used for this, sure, but that sort of misses the point of having a place that is targeted for this (and no, I don't really care about finding and subscribing to 100+ random joe blogs that I will never look at more than once). Ideally that place would not become `elitist` as some others have mentioned in this thread (ie, don't pick an elitist editor? lol). The big desire for me is to actually have an editor (a person or people) involved that keeps such a blog going, editing it, curating it, and ensuring it gets found in google searches and is *developer* focused...
>
> This is basically what https://www.openstack.org/blog/ is for. It's using Wordpress. It's developer-centric. Anyone can submit to it and we have editors that can publish it. We also have pretty solid SEO.

Interestingly enough, not many folks are (were) aware of this. It was not brought up during the discussion at the forum. I'm glad you brought it up here, though. If we already have a platform for this then I would say we need to promote it more and find someone (Josh? ;) who will actively seek out content.

Thanks for pointing us to o.o/blog, Jimmy.

Flavio
Re: [openstack-dev] [all] Dublin PTG format
On 27/11/17 11:58 +0100, Thierry Carrez wrote:

> Hi everyone, We are in the final step in the process of signing the contract with the PTG venue. We should be able to announce the location this week! So it's time to start preparing. We'll have 5 days, like in Denver. One thing we'd like to change for this round is to give a bit more flexibility in the topics being discussed, especially in the first two days.
>
> In Denver, we selected a number of general "themes" and gave them all a room for 2 days on Monday-Tuesday. Then all the "teams" that wanted a project team meeting could get a room for 2 or 3 days on Wednesday-Friday. That resulted in a bit of flux during the first two days, with a lot of empty rooms as most of the themes did not really need 2 days, and a lot of conflicts were present.
>
> For Dublin, the idea would be to still predetermine topics (themes and teams) and assign them rooms in advance. But we would be able to assign smaller amounts of time (per half-day) based on the expressed needs. Beyond those pre-assigned themes/teams we'd add flexibility for other groups to book the remaining available rooms in 90-min slots on-demand. A bit like how we did reservable rooms in the past, but more integrated with the rest of the event. It would all be driven by the PTGbot, which would show which topic is being discussed in which room, in addition to the current discussion subject within each topic.
>
> We have two options in how we do the split for predetermined topics. We used to split the week between Mon-Tue (themes) and Wed-Fri (teams). The general idea there was to allow some people only interested in a team meeting to only attend the second part of the week. However most people attend all 5 days, and during event feedback some people suggested that "themes" should be in the mornings and "teams" in the afternoons (and all Friday).
If most people attend the full week, then I would argue that the format we used in Denver is the one that will bring more people together: both people interested in attending the full PTG and those interested only in team discussions will participate. If we change the format, there's a risk that we'll exclude folks only interested in team-specific rooms, as it'll likely increase their travel expenses and travel time. If we were to adopt this new format, we could work on making sure that team discussions happen on consecutive days, to avoid teams like Sahara having sessions on Monday afternoon and then Thursday afternoon. However, I doubt this will work well for everyone interested only in team discussions.

> What would be your preference? The Mon-Tue/Wed-Fri split means less room changes, which makes it easier on the events team. So all else being equal we'd rather keep it the way it is, but I'm open to changing it if attendees think it's a good idea.

Others have raised concerns about being able to properly keep the momentum of the day going if we adopt the new format. I have to admit that I'm also concerned about this. Switching context every half day may not be ideal.

> If you have any other suggestion (that we could implement in the 3 months we have between now and the event) please let me know :)

My suggestion is to keep it as-is. This is our 3rd PTG and the first one outside the U.S. I would prefer us to gather some extra data and feedback before we make any drastic change to the format of the event.

Flavio
[openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas
Greetings,

Last Thursday[0], at the TC office hours, we brainstormed a bit around the idea of having a tech blog. This idea came first from Joshua Harlow and it was then briefly discussed at the summit too. The idea, we have gathered, is to have a space where the community could write technical posts about OpenStack. The idea is not to have an aggregator (that's what our planet[1] is for) but a place to write original and curated content.

During the conversation, we argued about what kind of content would be acceptable for this platform. Here are some ideas of things we could have there:

- Posts that are dev-oriented (e.g: new functions on an oslo lib)
- Posts that facilitate upstream development (e.g: My awesome dev setup)
- Deep dives into libvirt internals
- ideas?

As Chris Dent pointed out in that conversation, we should avoid making this place a replacement for things that would otherwise go on the mailing list - activity reports, for example. By having dev news on this platform, we would overlap with things that already go on the mailing list and, arguably, we would be defeating the purpose of the platform. But there might be room for both(?)

Ultimately, we should avoid topics promoting new features in services, as that's what superuser[2] is for.

So, what are your thoughts about this? What kind of content would you rather have posted here? Do you like the idea at all?

[0] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-11-23.log.html#t2017-11-23T15:01:25
[1] http://planet.openstack.org/
[2] http://superuser.openstack.org/

Flavio
[openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work
Hi Team,

I wanted to take a chance to send some updates about the work we have been doing on the Kubernetes side of things and how things are progressing. As you read this update, please bear in mind that we are still at the early stages of this work and there are many things under discussion, WIP, discussed but not implemented, etc. I'm sure many of you have many questions and I hope we will be able to answer them all as the work progresses. For now, let's take the update below and see where we are headed from here:

Kubernetes on the overcloud
===

The work on this front started with two patches[0][1] that some of you might have seen, and then evolved into using the config download mechanism to execute these tasks as part of the undercloud tasks[2][3] (Thanks a bunch, Jiri, for your work here). Note that [0] needs to be refactored to use the same mechanism used in [2]. There are quite a few things to improve here:

- How to configure/manage the loadbalancer/vips on the overcloud
- Kubespray is currently being cloned and we need to build a package for it
- More CI is likely needed for this work

[0] https://review.openstack.org/494470
[1] https://review.openstack.org/471759
[2] https://review.openstack.org/#/c/511272/
[3] https://review.openstack.org/#/c/514730/

Ansible roles for k8s
=

We discussed and did research[0] on the topic of whether we should use ansible or some other tool to deploy OpenStack services on Kubernetes. The conclusion from that topic was that TripleO would be better fit by a solution based on pure ansible modules, and that's the work we have been pushing forward. As some of you might have noticed, we started importing some of the roles that were created for the PoC[0] into openstack. So far we have imported 3 roles (mariadb, keystone, tripleo) and there are more to come[1], but before importing the remaining roles, we would like to nail down the CI jobs for the ones that have been imported.
You'll notice that these roles don't mention tripleo in their name (except for the tripleo one) because they are intended to be consumed not only by TripleO. Hopefully, they'll grow into more robust roles that will be consumed by other tools.

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119696.html
[1] https://github.com/tripleo-apb

CI for the ansible-role-k8s-* repos
===

If you look closely at these repos, you'll notice that these roles can be run standalone, in full Ansible fashion. Following the same strategy, the first jobs that have been added test the ability to deploy these roles with the minimum set of requirements. For example, the ansible-role-k8s-mariadb role is deployed without extra dependencies, whereas the ansible-role-k8s-keystone role requires ansible-role-k8s-mariadb. This is very, very basic testing. I'm working on running tempest jobs for openstack services as I write this email, and I'll be working on full-blown integration jobs as soon as we nail down some of these basic jobs.

If we compare what's been done so far to what we have in the rest of tripleo, it doesn't sound too exciting. It's great progress, nonetheless. In addition to the things missing in our CI effort, we would also like to build a CI job that is consumable by other projects in the community (or, eventually, consume some of the jobs created by other projects in the community).[0]

[0] https://etherpad.openstack.org/p/tripleo-ptg-queens-kolla-collaboration

Integration with TripleO Heat Templates
===

This work is on-going and you should eventually see some patches popping up on the reviews list. One of the goals, besides consuming these ansible roles from t-h-t, is to be able to create a PoC for upgrades and have an end-to-end test/demo of this work. As we progress, we are trying to nail down an end-to-end deployment before creating roles for all the services that are currently supported by TripleO.
We will be adding projects as needed, with a focus on the end-to-end goal. As a final note, we're collecting patches and updates on this etherpad[0] and we'll provide more concrete updates on the containers squad etherpad as well. Admittedly, we should be sending updates like this one more often, so I commit to doing so.

[0] https://etherpad.openstack.org/p/tripleo-on-kubernetes

Flavio
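As described in the CI section of this update, the imported roles are exercised standalone with only their minimum dependencies. A playbook sketch of roughly the shape the keystone functional job takes, with the role dependency order coming from the text above and everything else assumed:

```yaml
# Hedged sketch: deploy keystone with its one extra dependency (mariadb),
# mirroring the functional CI jobs described above. Nothing here is taken
# from the actual job definitions.
- hosts: localhost
  connection: local
  roles:
    - ansible-role-k8s-mariadb    # keystone's only extra dependency
    - ansible-role-k8s-keystone
```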
Re: [openstack-dev] Upstream LTS Releases
On 14/11/17 15:10 -0500, Doug Hellmann wrote:

> Excerpts from Chris Friesen's message of 2017-11-14 14:01:58 -0600:
>
>> On 11/14/2017 01:28 PM, Dmitry Tantsur wrote:
>>
>>>> The quality of backported fixes is expected to be a direct (and only?) interest of those new teams of new cores, coming from users and operators and vendors.
>>>
>>> I'm not assuming bad intentions, not at all. But there is a lot involved in a decision whether to make a backport or not. Will these people be able to evaluate the risk of each patch? Do they have enough context on how that release was implemented and what can break? Do they understand why feature backports are bad? Why they should not skip (supported) releases when backporting? I know a lot of very reasonable people who do not understand the things above really well.
>>
>> I would hope that the core team for upstream LTS would be the (hopefully experienced) people doing the downstream work that already happens within the various distros. Chris
>
> Presumably those are the same people we've been trying to convince to work on the existing stable branches for the last 5 years. What makes these extended branches more appealing to those people than the existing branches? Is it the reduced requirements on maintaining test jobs? Or maybe some other policy change that could be applied to the stable branches?

Guessing based on the feedback so far, I would say that these branches are more appealing because they are the ones these folks are actually running in production.

Flavio
Re: [openstack-dev] [Openstack-operators] LTS pragmatic example
On 14/11/17 22:33 +1100, Davanum Srinivas wrote:

> Saverio, This is still under the stable team reviews... NOT LTS. Your contacts for the Nova Stable team are ... https://review.openstack.org/#/admin/groups/540,members Let's please be clear, we need new people to help with LTS plans. Current teams can't scale, they should not have to, and it's totally unfair to expect them to do so.

I think you may have misunderstood Saverio's email. IIUC, what he was trying to do was provide an example in favor of the LTS branches as discussed in Sydney, rather than requesting reviews or suggesting the stable team should do LTS.

Flavio

> On Tue, Nov 14, 2017 at 8:02 PM, Saverio Proto wrote:
>
>> Hello, here's an example of a trivial patch that is important for people who do operations and have to troubleshoot stuff. With the old stable release thinking, this patch would not be accepted on old stable branches. Let's see if this gets accepted back to stable/newton: https://review.openstack.org/#/q/If525313c63c4553abe8bea6f2bfaf75431ed18ea Please note that developers/operators who make the effort of fixing this in master should also do all the cherry-picks back. We don't have any automatic procedure for this. thank you Saverio
Re: [openstack-dev] [all] [elections] Technical Committee Election Results
On 26/10/17 10:52 -0400, Doug Hellmann wrote:

> Excerpts from Jeremy Stanley's message of 2017-10-26 13:30:53 +:
>
>> On 2017-10-26 14:42:35 +0200 (+0200), Flavio Percoco wrote: [...]
>>
>>> I personally don't think the campaign period was too short. I saw enough interactions between candidates and the rest of the community, which was useful for me to make up my mind and vote. This is, of course, my own view and I don't mean to imply David's view is not valid. [...]
>>
>> Indeed, by my quick count we had 84 e-mails to the list for questions to and answers from the candidates (not including candidacy/platform statements, announcements from election officials, et cetera). Also, it's entirely legitimate for these discussions to continue into the voting week, as some voters may wait until toward the end of the period to make up their minds on how to rank various candidates (I know I did, at least).
>
> It would also be good to see some discussion of those issues outside of campaign periods. Some of the questions, like the one about user perspectives held by the candidates, were clearly meant to elicit more info to help make a choice in the election. The discussion of inclusiveness shouldn't be reserved for the campaign period, though.

+1K

What triggered the question is that many candidates mentioned inclusiveness and diversity in their candidacies. Since it's a topic that I believe is sensitive and vibrant in our industry, I felt it was fair to dive into it further before the voting period. It definitely influenced the way I voted. That said, I agree we should have this kind of discussion more often than not, and I intend to pursue this topic further.

Flavio
Re: [openstack-dev] [all] [elections] Technical Committee Election Results
On 26/10/17 11:27 +0200, Thierry Carrez wrote:

> Tony Breeds wrote:
>
>> On Wed, Oct 25, 2017 at 10:06:46PM -0400, David Moreau Simard wrote: Was it just me or was the "official" period for campaigning/questions awfully short? The schedule [1] went: TC Campaigning: (Start) Oct 11, 2017 23:59 UTC (End) Oct 14, 2017 23:45 UTC
>>
>> The original was: - name: 'TC Campaigning' start: '2017-10-09T23:59' end: '2017-10-12T23:45' but that needed to be adjusted (https://review.openstack.org/509654/). While that was still the same duration, it was mid-week.
>>
>> That's three days, one of which was a Saturday. Was it always this short? It seems to me that this is not a lot of time for the community to ask (read, and answer) thoughtful questions.
>
> There used to be no campaigning period at all, so it had been shorter :)
>
>> I realize this doesn't mean you can't keep asking questions once the actual election voting starts, but I wonder if we should cut a few days from the nomination period and give them to the campaigning.
>>
>> I can't find anything that documents how long the nomination period needed to be, perhaps I missed it? So we could do this but it's already quite short. So more likely we could just extend the campaigning period if that's the consensus.
>
> The duration of the campaigning period is not mandated by the TC charter, so it is left to the appreciation of the election officials. The whole election takes close to 3 weeks of officials' time, so I'd like to ask that we be mindful of that before we extend things too much. It's clearly a balance between having interesting discussions and triggering election fatigue. I'd say we need to have /some/ campaigning time but not too much :) Ideally discussions would start once people self-nominate, and we could keep the period between nomination close and election start relatively short (3/4 business days max).

As an observation, participating in the elections (not only as an election official but also as a candidate) can be stressful.
I personally don't think the campaign period was too short. I saw enough interactions between candidates and the rest of the community, which was useful for me to make up my mind and vote. This is, of course, my own view and I don't mean to imply David's view is not valid.

I would be a bit hesitant to make the total election period too long, but I'm sure we can adjust a few things here and there. I would also prefer waiting until the nomination period is closed to start discussions. It might not feel fair if the discussions start early and then some candidacies use the data from the discussions for promotion. Again, personal preference.

Flavio
Re: [openstack-dev] [tc] [all] TC Report 43
On 24/10/17 19:26 +0100, Chris Dent wrote:

> # TC Participation
>
> At last Thursday's [office hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-19.log.html#t2017-10-19T15:01:02) Emilien asked, as a thought experiment, what people thought of the idea of TC term limits. In typical office hours fashion, this quickly went off into a variety of topics, some only tangentially related to term limits. To summarize, incompletely, the pro-reason is: Make room and opportunities for new leadership. The con-reason is: Maintain a degree of continuity.
>
> This led to some discussion of the value of "history and baggage" and whether such things are a keel or an anchor in managing the nautical metaphor of OpenStack. We did not agree, which is probably good because somewhere in the middle is likely true. Things then circled back to the nature of the TC: court of last resort or something with a more active role in executive leadership. If the former, who does the latter? Many questions related to significant change are never resolved because it is not clear who does these things. There's a camp that says "the people who step up to do it". In my experience this is a statement made by people in a position of privilege and may (intentionally or otherwise) exclude others or lead to results which have unintended consequences. This then led to meandering about the nature of facilitation. (Like I said, a variety of topics.)
>
> We did not resolve these questions except to confirm that the only way to address these things is to engage with not just the discussion, but also the work.

Sad I couldn't attend this office hour :( I would love to see this idea explored further. Perhaps a mailing list thread, then a resolution (depending on the ML thread feedback), and some f2f conversations at the next PTG (or even the Forum). Emilien, up to start the thread?
Flavio

> # OpenStack Technical Blog
>
> Josh Harlow showed up with [an idea](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-19.log.html#t2017-10-19T18:19:30). An OpenStack equivalent of the [kubernetes blog](http://blog.kubernetes.io/), focused on interesting technology in OpenStack. This came up again on [Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-20.log.html#t2017-10-20T18:13:01). It's clear that anyone and everyone _could_ write their own blogs and syndicate to the [OpenStack planet](http://planet.openstack.org/) but this doesn't have the same panache and potential cadence as an official thing _might_. It comes down to people having the time. Eking out the time for this blog, for example, can be challenging. Since this is the second [week in a row](https://anticdent.org/tc-report-42.html) that Josh showed up with an idea, I wonder what next week will bring?

It might not be exactly the same but, I think, the superuser blog could be a good place to do some of this writing. There are posts of various kinds on that blog: technical, community, news, etc. I wonder how many folks from the community are aware of it and how many would be willing to contribute to it too. Contributing to the superuser blog is quite simple, really. http://superuser.openstack.org/

Flavio
Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] Proposing changes in stable policy for installers
-- Emilien Macchi -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella
On 11/10/17 07:48 +0200, Flavio Percoco wrote: On 10/10/17 10:34 -0600, Alex Schultz wrote: On Tue, Oct 10, 2017 at 5:24 AM, Flavio Percoco wrote: On 09/10/17 12:41 -0700, Emilien Macchi wrote: On Mon, Oct 9, 2017 at 2:29 AM, Flavio Percoco wrote: [...] 1. A repo per role: Each role would have its own repo - this is the way I've been developing it on Github. This model is closer to the ansible way of doing things and it'll make it easier to bundle, ship, and collaborate on, individual roles. Going this way would produce something similar to what the openstack-ansible folks have. +1 on #1 for the composability. [...] Have we considered renaming it to something without tripleo in the name? Or is it too specific to TripleO that we want it in the name? The roles don't have tripleo in their names. The only role that mentions tripleo is tripleo specific. As for the APB, yeah, I had thought about renaming that repo to something without tripleo in there: Perhaps just `ansible-k8s-apbs`. I'm about to refactor this repo to remove all the code duplication. We should be able to generate most of the APB code that's in there from a python script. We could even have this script in tripleo_common, if it sounds sensible. It should be its own thing and not in tripleo_common. When I was proposing a cookiecutter repo it was because in Puppet we do the same thing to bootstrap the modules[0]. It would be a good idea to establish this upfront with the appropriate repo & zuul v3 configurations that could be used to test these modules. We have a similar getting started with a new module doc[1] that we should probably establish for these ansible-k8s-* roles. Yes, I shall work on a cookiecutter repo for these roles. Good thinking. I've moved ahead with this. I created a cookiecutter template and I've proceeded to use this repo as the first one to migrate under `openstack/` for this work. https://review.openstack.org/#/c/512323/ Please, provide feedback there. 
I'll soon create the governance patch. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tc][election] Question for candidates: How do you think we can make our community more inclusive?
/me waves Thanks a bunch for replying to my email. As some of you may know, this is a topic that is very close to me and that I pay lots of attention to. There's some overlap in some of your replies and I've taken notes offline so we can work together on some of them (although I'd love to see y'all pushing your ideas forward). I've decided not to summarize the thread here because I would prefer to encourage voters to read each of the replies. A summary from me would not do justice to the effort you've put into replying to my question. Thanks again and good luck to y'all, Flavio On 13/10/17 14:45 +0200, Flavio Percoco wrote: Greetings, Some of you, TC candidates, expressed concerns about diversity and inclusiveness (or inclusivity, depending on your taste) in your candidacy. I believe this is a broad, and sometimes ill-used, topic so, I'd like to know, from y'all, how you think we could make our community more inclusive. What areas would you improve first? Thank you, Flavio -- @flaper87 Flavio Percoco -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tc][election] Question for candidates: How do you think we can make our community more inclusive?
On 15/10/17 01:26 +0100, Erno Kuvaja wrote: What we really need to focus on is to get people _wanting_ to join us. There is next to nothing easy in OpenStack with all its complexity and that's perfectly fine. Easy is not fun; we all want to challenge ourselves. And we have an amazing community to support those who want to join and make a difference. That is the group we need to grow, and when we run out of scalability helping the people who really want to make the effort, then we should focus on streamlining that process. I'm eager to say we're wasting our time trying to make joining super welcoming and easy for everyone as long as we do not have a queue of people who really want to make a difference. Think about it, feel free to tell me that I'm totally wrong and just being an ass by saying this, and when you do, please explain why you think so. I don't think you're totally wrong and I also think we might be having a problem attracting people to the community. Nevertheless, I don't think attracting people to the community is entirely related to being inclusive. If you do a massive marketing campaign to attract people and your community is not welcoming (or simply not ready to deal with that) then you'll end up pushing them away, hence my question ;) As you correctly pointed out, our community is amazing but not perfect. We do have some serious issues to deal with to be more inclusive. So, I think you're onto something and your answer is valid, perhaps better in a different context. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc][election] Question for candidates: How do you think we can make our community more inclusive?
Greetings, Some of you, TC candidates, expressed concerns about diversity and inclusiveness (or inclusivity, depending on your taste) in your candidacy. I believe this is a broad, and sometimes ill-used, topic so, I'd like to know, from y'all, how you think we could make our community more inclusive. What areas would you improve first? Thank you, Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session
rectly for various use cases (dev/test/POC/demos). This preserves the quick iteration via Ansible that is often desired. - The remaining SoftwareDeployment resources in tripleo-heat-templates need to be supported by config download so that the entire configuration can be driven with Ansible, not just the deployment steps. The success criteria for this point would be to illustrate using an image that does not contain a running os-collect-config. - The ceph-ansible implementation done in Pike could be reworked to use this model. "config download" could generate playbooks that have hooks for calling external playbooks, or those hooks could be represented in the templates directly. The result would be the same either way though in that Heat would no longer be triggering a separate Mistral workflow just for ceph-ansible. I'd say for ceph-ansible, kubernetes and in general anything else which needs to run with a standard playbook installed on the undercloud and not one generated via the heat templates... these "external" services usually require the inventory file to be in a different format, to describe the hosts to use on a per-service basis, not per-role (and I mean tripleo roles here, not ansible roles obviously). About that, we discussed a more long term vision where the playbooks (static data) needed to describe how to deploy/upgrade a given service are in a separate repo (like tripleo-apb) and we "compose" from heat the list of playbooks to be executed based on the roles/enabled services; in this scenario we'd be much closer to what we had to do for ceph-ansible and I feel like that might finally allow us to merge back the ceph deployment (or kubernetes deployment) process into the more general approach driven by tripleo. James, Dan, comments? Agreed, I think this is the longer term plan in regards to using APB's, where everything consumed is an external playbook/role. 
We definitely want to consider this plan in parallel with the POC work that Flavio is pulling together and make sure that they are aligned so that we're not constantly reworking the framework. I've not yet had a chance to review the material he sent out this morning, but perhaps we could work together to update the sequence diagram to also have a "future" state to indicate where we are going and what it would look like with APB's and external playbooks. Indeed that would be great :) IIUC, APBs are deployed by running a short-lived container with Ansible inside, which then connects to the Kubernetes endpoint to create resources. So this should be a less complicated case than running non-containerized external playbooks. This would be awesome. Note that it isn't only ceph and kubernetes anymore in this scenario... I just spotted a submission for the Skydive composable service and it uses the same mistral/ansible-playbook approach, so it's already 3. Looking forward to this! https://review.openstack.org/#/c/502353/ [1] https://github.com/ansibleplaybookbundle/ansible-playbook-bundle/blob/master/docs/design.md#deploy -- Emilien Macchi -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
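The inventory-format mismatch mentioned in the thread above (external services like ceph-ansible wanting hosts grouped per-service, while TripleO groups them per-role) can be illustrated with a minimal sketch. All group and host names here are hypothetical, not taken from an actual TripleO deployment:

```ini
; Hypothetical per-service inventory, as an external playbook
; such as ceph-ansible would expect it:
[mons]
overcloud-controller-0
overcloud-controller-1

[osds]
overcloud-cephstorage-0

; ...versus the per-(TripleO)-role grouping that "config download"
; would naturally produce:
[Controller]
overcloud-controller-0
overcloud-controller-1
```

Bridging these two shapes (or generating both from the same data) is part of what the "compose from heat" idea above would have to address.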
Re: [openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella
On 10/10/17 10:34 -0600, Alex Schultz wrote: On Tue, Oct 10, 2017 at 5:24 AM, Flavio Percoco wrote: On 09/10/17 12:41 -0700, Emilien Macchi wrote: On Mon, Oct 9, 2017 at 2:29 AM, Flavio Percoco wrote: [...] 1. A repo per role: Each role would have its own repo - this is the way I've been developing it on Github. This model is closer to the ansible way of doing things and it'll make it easier to bundle, ship, and collaborate on, individual roles. Going this way would produce something similar to what the openstack-ansible folks have. +1 on #1 for the composability. [...] Have we considered renaming it to something without tripleo in the name? Or is it too specific to TripleO that we want it in the name? The roles don't have tripleo in their names. The only role that mentions tripleo is tripleo specific. As for the APB, yeah, I had thought about renaming that repo to something without tripleo in there: Perhaps just `ansible-k8s-apbs`. I'm about to refactor this repo to remove all the code duplication. We should be able to generate most of the APB code that's in there from a python script. We could even have this script in tripleo_common, if it sounds sensible. It should be its own thing and not in tripleo_common. When I was proposing a cookiecutter repo it was because in Puppet we do the same thing to bootstrap the modules[0]. It would be a good idea to establish this upfront with the appropriate repo & zuul v3 configurations that could be used to test these modules. We have a similar getting started with a new module doc[1] that we should probably establish for these ansible-k8s-* roles. Yes, I shall work on a cookiecutter repo for these roles. Good thinking. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella
On 09/10/17 12:41 -0700, Emilien Macchi wrote: On Mon, Oct 9, 2017 at 2:29 AM, Flavio Percoco wrote: [...] 1. A repo per role: Each role would have its own repo - this is the way I've been developing it on Github. This model is closer to the ansible way of doing things and it'll make it easier to bundle, ship, and collaborate on, individual roles. Going this way would produce something similar to what the openstack-ansible folks have. +1 on #1 for the composability. [...] Have we considered renaming it to something without tripleo in the name? Or is it too specific to TripleO that we want it in the name? The roles don't have tripleo in their names. The only role that mentions tripleo is tripleo specific. As for the APB, yeah, I had thought about renaming that repo to something without tripleo in there: Perhaps just `ansible-k8s-apbs`. I'm about to refactor this repo to remove all the code duplication. We should be able to generate most of the APB code that's in there from a python script. We could even have this script in tripleo_common, if it sounds sensible. Thoughts? Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
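The python script mentioned above for generating the duplicated APB boilerplate did not exist at the time of this thread. A minimal sketch of the idea might look like the following; the template fields and service names are illustrative assumptions, not the actual tripleo-apb layout:

```python
# Sketch: generate per-service apb.yml boilerplate from one template,
# instead of copy-pasting it into every repo. The metadata fields and
# the service list below are hypothetical.
from string import Template

APB_TEMPLATE = Template("""\
version: 1.0
name: $service-apb
description: Deploys $service on Kubernetes
bindable: False
async: optional
""")

def render_apb(service):
    """Return the apb.yml contents for a single service."""
    return APB_TEMPLATE.substitute(service=service)

if __name__ == "__main__":
    for svc in ("mariadb", "glance", "keystone"):
        print(render_apb(svc))
```

The real script would presumably also emit the Dockerfile and playbook skeletons, but the principle is the same: one template, many generated APBs.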
Re: [openstack-dev] [ptls] Sydney Forum Project Onboarding Rooms
On 06/10/17 07:31 -0700, Emilien Macchi wrote: On Thu, Oct 5, 2017 at 8:50 AM, Kendall Nelson wrote: [...] If you are interested in reserving a spot, just reply directly to me and I will put your project on the list. Please let me know if you want one and also include the names and emails anyone that will be speaking with you. TripleO - Emilien Macchi - emil...@redhat.com I'll help here if needed :) Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella
Greetings, I've been working on something called triple-apbs (and its respective roles) in the last couple of months. You can find more info about this work here[0][1][2] This work is at the point where I think it would be worth starting to discuss how we want these repos to exist under the TripleO umbrella. As far as I can tell, we have 2 options (please comment with alternatives if there are more): 1. A repo per role: Each role would have its own repo - this is the way I've been developing it on Github. This model is closer to the ansible way of doing things and it'll make it easier to bundle, ship, and collaborate on, individual roles. Going this way would produce something similar to what the openstack-ansible folks have. 2. Everything in a single repo: this would ease the import process and integration with the rest of TripleO. It'll make the early days of this work a bit easier but it will take us in a direction that doesn't serve one of the goals of this work. My preferred option is #1 because one of the goals of this work is to have independent roles that can also be consumed standalone. In other words, I would like to stay closer to the ansible recommended structure for roles. Some examples[3][4] Any thoughts? preferences? Flavio [0] http://blog.flaper87.com/deploy-mariadb-kubernetes-tripleo.html [1] http://blog.flaper87.com/glance-keystone-mariadb-on-k8s-with-tripleo.html [2] https://github.com/tripleo-apb [3] https://github.com/tripleo-apb/ansible-role-k8s-mariadb [4] https://github.com/tripleo-apb/ansible-role-k8s-glance -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
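For reference, the "ansible recommended structure" mentioned above is the standard role skeleton (the same layout `ansible-galaxy init` produces). A sketch that scaffolds it by hand, using a hypothetical role name:

```shell
# Scaffold the conventional Ansible role skeleton by hand
# (equivalent in spirit to `ansible-galaxy init <role>`).
role=ansible-role-k8s-example   # hypothetical role name

for d in defaults handlers meta tasks templates vars; do
    mkdir -p "$role/$d"
done

# Each directory (except templates/) gets a main.yml entry point
# by convention.
for d in defaults handlers meta tasks vars; do
    printf -- '---\n' > "$role/$d/main.yml"
done

ls "$role"
```

Keeping one role per repo, as option #1 proposes, means each repo is exactly one such skeleton and can be consumed standalone via `requirements.yml` or galaxy.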
[openstack-dev] Reminder (1 day left) -- Forum Topic Submission
Hello Everyone, This is a friendly reminder that the submission period ends tomorrow (Sep 29th). Take some time to think about the topics you would like to talk about and submit them at: http://forumtopics.openstack.org/cfp/create Submit your topic before 11:59PM UTC on Friday September 29th! Regards, UC/TC __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project
On 27/09/17 01:59 +, Jeremy Stanley wrote: On 2017-09-27 09:15:21 +0800 (+0800), Zhenguo Niu wrote: [...] I don't mean there are deficiencies in Ironic. Ironic itself is cool, it works well with TripleO, Nova, Kolla, etc. Mogan just wants to be another client to schedule workloads on Ironic and provide bare metal specific APIs for users who seek a way to provide virtual machines and bare metals separately, or just a bare metal cloud without interoperating with other compute resources under Nova. [...] The short explanation which clicked for me (granted it's probably an oversimplification, but still) was this: Ironic provides an admin API for managing bare metal resources, while Mogan gives you a user API (suitable for public cloud use cases) to your Ironic backend. I suppose it could have been implemented in Ironic, but implementing it separately allows Ironic to be agnostic to multiple user frontends and also frees the Ironic team up from having to take on yet more work directly. ditto! I had a similar question at the PTG and this was the answer that convinced me it may be worth the effort. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][infra] Zuul v3 migration update
Just wanted to say thanks to all of you for the hard work. I can only imagine how hard it must be to do this migration without causing downtimes. Flavio On 26/09/17 18:04 -0500, Monty Taylor wrote: Hey everybody, We got significantly further along with our Zuul v3 rollout today. We uncovered some fun bugs in the migration but were able to fix most of them rather quickly. We've pretty much run out of daylight though for the majority of the team and there is a tricky zuul-cloner related issue to deal with, so we're not going to push things further tonight. We're leaving most of today's work in place, having gotten far enough that we feel comfortable not rolling back. The project-config repo should still be considered frozen except for migration-related changes. Hopefully we'll be able to flip the final switch early tomorrow. If you haven't yet, please see [1] for information about the transition. [1] https://docs.openstack.org/infra/manual/zuulv3.html Thanks, Monty __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Garbage patches for simple typo fixes
On 22/09/17 14:47 -0400, Doug Hellmann wrote: Excerpts from Davanum Srinivas (dims)'s message of 2017-09-22 13:47:06 -0400: Doug, Howard (cc'ed) already did a bunch of reaching out especially on wechat. We should request his help. Howard, Can you please help with communications and follow up? Thanks, Dims Thanks, Dims and Howard, I think the problem has reached a point where it would be a good idea to formalize our approach to outreach. We should track the patches or patch series identified as problematic, so reviewers know not to bother with them. We can also track who is contacting whom (and how) so we don't have a bunch of people replicating work or causing confusion for people who are trying to contribute. Having that information will also help us figure out when we need to escalate by finding the right managers to be talking to. Let's put together a small team to manage this instead of letting it continue to cause frustration for everyone. Count me in! I'm interested in helping with this effort. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Garbage patches for simple typo fixes
On 22/09/17 14:20 +, Jeremy Stanley wrote: On 2017-09-22 08:30:21 -0400 (-0400), Amrith Kumar wrote: [...] When can we take some concrete action to stop these same kinds of things from coming up again and again? Technical solutions to social problems rarely do more than increase complexity for everyone involved. Just wanted to +1 this as I just mentioned a technical solution in one of my replies to this thread on which I also said I doubt it would ever work/help. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Garbage patches for simple typo fixes
On 23/09/17 12:25 -0400, Doug Hellmann wrote: Excerpts from Qiming Teng's message of 2017-09-23 23:55:34 +0800: To some extent, I think Zhipeng is right. There are times we as a community have to do something beyond mentoring new developers. One of the reasons behind these patches is the management chain of those companies. They need numbers, and they don't care what kind of contributions were made. They don't bother to read these emails. Another fact is that some companies are doing such things not just in the OpenStack community. Their developers are producing tons of low-quality "patches" to play this as a game in other communities as well. If we don't place a STOP sign, things will never get improved. By not doing something, we are hurting everyone, including those developers who could have done more meaningful contributions, though their number of patches may decrease. Just my 2 cents. - Qiming This may be true. Before we create harsh processes, however, we need to collect the data to show that other attempts to provide guidance have not worked. We have a lot of anecdotal information right now. We need to collect that and summarize it. If the results show that there are clear abuses, rather than misunderstandings, then we can use the data to design effective blocks without hurting other contributors or creating a reputation that our community is not welcoming. I would also like to take this opportunity to encourage everyone to assume good faith. There are ways to evaluate whether there are reasons to believe there are abusive behaviors from some companies and/or contributors. It is neither encouraged nor right to publicly shame anyone. This is not the way we operate and I would like to keep it that way. Let's build a process and/or tool that can be used to analyze these behaviors so that we can communicate them through the right channels. 
Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] patches for simple typo fixes
On 25/09/17 09:28 -0400, Doug Hellmann wrote: Excerpts from Sean Dague's message of 2017-09-25 08:24:18 -0400: On 09/25/2017 07:56 AM, Chris Dent wrote: > On Fri, 22 Sep 2017, Paul Belanger wrote: > >> This is not a good example of encouraging anybody to contribute to the >> project. > > Yes. This entire thread was a bit disturbing to read. Yes, I totally > agree that mass patches that do very little are a big cost to > reviewer and CI time but a lot of the responses sound like: "go away > you people who don't understand our special culture and our > important work". > > That's not a good look. > > Matt's original comment is good in and of itself: I saw a thing, > let's remember to curtail this stuff and do it in a nice way. > > But then we generate a long thread about it. It's odd to me that > these threads sometimes draw more people out then discussions about > actually improving the projects. > > It's also odd that if OpenStack were small and differently > structured, any self-respecting maintainer would be happy to see > a few typo fixes and generic cleanups. Anything to push the quality > forward is nice. But because of the way we do review and because of > the way we do CI these things are seen as expensive distractions[1]. > We're old and entrenched enough now that our tooling enforces our > culture and our culture enforces our tooling. > > [1] Note that I'm not denying they are expensive distractions nor > that they need to be managed as such. They are, but a lot of that > is on us. I was trying to ignore the thread in the hopes it would die out quick. But torches and pitchforks all came out from the far corners, so I'm going to push back on that a bit. I'm not super clear why there is always so much outrage about these patches. They are fixing real things. When I encounter them, I just approve them to get them merged quickly and not backing up the review queue, using more CI later if they need rebasing. They are fixing real things. 
Maybe there is a CI cost, but the faster they are merged the less likely someone else is to propose it in the future, which keeps down the CI cost. And if we have a culture of just fixing typos later, then we spend less CI time on patches the first time around with 2 or 3 iterations catching typos. I think the concern is the ascribed motive for why people are putting these up. That's fine to feel that people are stat padding (and that too many things are driven off metrics). But, honestly, that's only important if we make it important. Contributor stats are always going to be pretty much junk stats. They are counting things to be the same which are wildly variable in meaning (number of patches, number of Lines of Code). My personal view is just merge things that fix things that are wrong, don't care why people are doing it. If it gets someone a discounted ticket somewhere, so be it. It's really not any skin off our back in the process. If people are deeply concerned about CI resources, step one is to get some better accounting into the existing system to see where resources are currently spent, and how we could ensure that time is fairly spread around to ensure maximum productivity by all developers. -Sean I'm less concerned with the motivation of someone submitting the patches than I am with their effect. Just like the situation we had with the bug squash days a year or so ago, if we had a poorly timed set of these trivial patches coming in at our feature freeze deadline, it would be extremely disruptive. So to me the fact that we're seeing them in large batches means we have people who are not fully engaged with the community and don't understand the impact they're having. My goal is to reach out and try to improve that engagement, and try to help them become more fully constructive contributors. I agree with the sentiment that these patches might be coming from folks that are not fully engaged with the community, but they won't stop coming. 
There's a risk behind these mass-submitted patches, but I agree with Sean's comment that they are still fixing things. Once they've been submitted, I think we're better off merging them if we're not in a release phase. A more aggressive fix would be to limit the number of patches a single person can propose in a given day and in a specific period of time (or forever), but this might not be possible or not exactly what we want, as I would rather work with the community. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session
- The ceph-ansible implementation done in Pike could be reworked to use this model. "config download" could generate playbooks that have hooks for calling external playbooks, or those hooks could be represented in the templates directly. The result would be the same either way though in that Heat would no longer be triggering a separate Mistral workflow just for ceph-ansible. I'd say for ceph-ansible, kubernetes and in general anything else which needs to run with a standard playbook installed on the undercloud and not one generated via the heat templates... these "external" services usually require the inventory file to be in a different format, to describe the hosts to use on a per-service basis, not per-role (and I mean tripleo roles here, not ansible roles obviously). About that, we discussed a more long term vision where the playbooks (static data) needed to describe how to deploy/upgrade a given service are in a separate repo (like tripleo-apb) and we "compose" from heat the list of playbooks to be executed based on the roles/enabled services; in this scenario we'd be much closer to what we had to do for ceph-ansible and I feel like that might finally allow us to merge back the ceph deployment (or kubernetes deployment) process into the more general approach driven by tripleo. James, Dan, comments? Agreed, I think this is the longer term plan in regards to using APB's, where everything consumed is an external playbook/role. We definitely want to consider this plan in parallel with the POC work that Flavio is pulling together and make sure that they are aligned so that we're not constantly reworking the framework. I've not yet had a chance to review the material he sent out this morning, but perhaps we could work together to update the sequence diagram to also have a "future" state to indicate where we are going and what it would look like with APB's and external playbooks. So far, I think it aligns just fine. 
I would like to start playing with it and see if I can leverage this work directly instead of modifying the existing templates we use for paunch. Will look into the details of how this works and get back to you, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session
On 18/09/17 09:37 -0600, James Slagle wrote: On Wednesday at the PTG, TripleO held a session around our current use of Ansible and how to move forward. I'll summarize the results of the session. Feel free to add anything I forgot and provide any feedback or questions. We discussed the existing uses of Ansible in TripleO and how they differ in terms of what they do and how they interact with Ansible. I covered this in a previous email[1], so I'll skip over summarizing those points again. I explained a bit about the "openstack overcloud config download" approach implemented in Pike by the upgrades squad. This method no-op's out the deployment steps during the actual Heat stack-update, then uses the cli to query stack outputs to create actual Ansible playbooks from those output values. The Undercloud is then used as the Ansible runner to apply the playbooks to each Overcloud node. I created a sequence diagram for this method and explained how it would also work for initial stack deployment[2]: https://slagle.fedorapeople.org/tripleo-ansible-arch.png The high level proposal was to move in a direction where we'd use the config download method for all Heat driven stack operations (stack-create and stack-update). We highlighted and discussed several key points about the method shown in the diagram: - The entire sequence and flow is driven via Mistral on the Undercloud by default. This preserves the API layer and provides a clean reusable interface for the CLI and GUI. - It would still be possible to run ansible-playbook directly for various use cases (dev/test/POC/demos). This preserves the quick iteration via Ansible that is often desired. - The remaining SoftwareDeployment resources in tripleo-heat-templates need to be supported by config download so that the entire configuration can be driven with Ansible, not just the deployment steps. The success criteria for this point would be to illustrate using an image that does not contain a running os-collect-config. 
- The ceph-ansible implementation done in Pike could be reworked to use this model. "config download" could generate playbooks that have hooks for calling external playbooks, or those hooks could be represented in the templates directly. The result would be the same either way though in that Heat would no longer be triggering a separate Mistral workflow just for ceph-ansible. - We will need some centralized log storage for the ansible-playbook results and should consider using ARA. As it would be a lot of work to eventually make this method the default, I don't expect or plan that we will complete all this work in Queens. We can however start moving in this direction. Specifically, I hope to soon add support to config download for the rest of the SoftwareDeployment resources in tripleo-heat-templates as that will greatly simplify the undercloud container installer. Doing so will illustrate using the ephemeral heat-all process as simply a means for generating ansible playbooks. I plan to create blueprints this week for Queens and beyond. If you're interested in this work, please let me know. I'm open to the idea of creating an official squad for this work, but I'm not sure if it's needed or not. As not everyone was able to attend the PTG, please do provide feedback about this plan as it should still be considered open for discussion. Hey James, sorry for getting back to this thread this late! I like the approach and I think it makes sense for us to go down this path. As far as my research on tripleo+kubernetes goes, I think this plan aligns quite well with what I've been doing so far. That said, I need to dive more into how `config download` works and start using it for future demos and development. 
Flavio -- @flaper87 Flavio Percoco
[openstack-dev] [tripleo] Deploy Keystone, Glance and Mariadb in Kubernetes with TripleO
Hey Folks, I just posted a screencast sharing the progress I've made on the kubernetes effort[0]. I've pasted part of the blog post below in case you want to discuss some parts of it.

What if I want to play with it?
===

Here's a small recap of what's needed to play with this PoC. Before you do, though, bear in mind that this work is in its very early days and that there are *many* things that don't work or that could be better. As usual, any kind of feedback and/or contribution is welcome. Note that some of the steps below require root access.

1. Clone the tripleo-apbs repository and its submodules:

   git clone --recursive https://github.com/tripleo-apb/tripleo-apbs

2. Build the images you want to run:

   ./build.sh mariadb
   ./build.sh glance
   ./build.sh keystone

3. Clone the `undercloud_containers` repo. This repo is meant to be used only for development purposes:

   git clone https://github.com/flaper87/undercloud_containers

4. Prepare the environment:

   cd undercloud_containers && ./doit.sh

5. Deploy the undercloud (as root):

   cd $HOME && ./run.sh

The `doit.sh` script uses my fork of tripleo-heat-templates, which contains the changes to use the APBs. It's important to highlight that this fork doesn't introduce changes to the existing API. You can see the comparison between the fork and the main tripleo-heat-templates repo[1]:

[0] http://blog.flaper87.com/glance-keystone-mariadb-on-k8s-with-tripleo.html
[1] https://github.com/openstack/tripleo-heat-templates/compare/master...flaper87:tht-apbs

Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?
On 20/09/17 12:21 +, Jeremy Stanley wrote: On 2017-09-20 07:51:29 -0400 (-0400), Davanum Srinivas wrote: [...] please indicate which file from Nova, so if anyone wanted to cross check for fixes etc can go look in Nova [...] While the opportunity has probably passed in this case, the ideal method is to start with a Git fork of the original as your seed project (perhaps with history pruned to just the files you're reusing via git filter-branch or similar). This way the complete change history of the files in question is preserved for future inspection. If it's not too late, I would definitely recommend going with a fork, fwiw. Flavio -- @flaper87 Flavio Percoco
[openstack-dev] [tc] Technical Committee Status update, August 25th
Greetings, This is the weekly update on Technical Committee initiatives. You can find the full list of all open topics at: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee == Recently-approved changes == Thierry is out this week so no patches have been approved. == PTG prep == Preparations for the PTG are still in progress. Notes and ideas are being proposed on this etherpad: https://etherpad.openstack.org/p/queens-PTG-TC-SWG == New project teams == As we open Queens cycle development, it's time to reconsider our current backlog of project additions: * Stackube: https://review.openstack.org/462460 The project was set up on OpenStack infrastructure, so it's time to review the proposal again. * Glare: https://review.openstack.org/479285 * Blazar: https://review.openstack.org/482860 * Gluon: https://review.openstack.org/463069 Also ready for review, those teams will be at the PTG and available to spend some time in the TC room on Monday-Tuesday. == Open discussions == Monty's proposal to be explicit about supported database versions is still under discussion: https://review.openstack.org/493932 John updated his resolution to highlight that decisions should be globally inclusive. The proposal seems to have gotten enough iterations and it will likely be moved into a reference document: https://review.openstack.org/#/c/460946/ == Voting in progress == Emmet's rewording of the leaderless project resolution reached majority yesterday and will be approved early next week unless new objections are brought: https://review.openstack.org/#/c/492578/ Flavio's resolutions on dropping Technical Committee meetings[3] and allowing teams to use their channel for meetings[4] are both still under review and voting: [3] https://review.openstack.org/459848 [4] https://review.openstack.org/485117 For leaderless teams, the TC is proposing to appoint Kota TSUYUZAKI as Storlets PTL for Queens. 
Voting is ongoing and majority was reached on August 21st: https://review.openstack.org/#/c/493846/ == Need for a TC meeting next Tuesday == There are no meetings scheduled for next week. Cheers, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [kolla] [tripleo] [openstack-ansible] [deployment] Collaboration at PTG
On 17/08/17 10:24 -0500, Major Hayden wrote: On 08/17/2017 09:30 AM, Emilien Macchi wrote: If you're working on Kolla / OpenStack-Ansible - please let us know if you have specific constraints on the schedule, so we can maybe block a timeslot in the agenda from now. We'll have a "Packaging" room which is reserved for all topics related to OpenStack deployments, so we can use this one. I don't have any constraints (that I'm aware of), but I'd be interested in participating! Performance in the gate jobs has been one of my tasks lately and I'd like to see if we can collaborate there to make improvements without ruining infra's day. ;) As long as you can put up with a few Dad jokes, I'll be there. ++ I'm interested in this topic too! Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc] Technical Committee Status update, August 18th
On 18/08/17 11:17 +0200, Thierry Carrez wrote: == Need for a TC meeting next Tuesday == I'll be off next week (therefore skipping this update next Friday). I don't think we have anything thoroughly blocked at this point requiring a meeting. However if something comes up and requires a meeting next week while I'm off, please feel free to self-organize :) I'd be happy to send the Friday update email until you're back. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [kolla] [tripleo] [openstack-ansible] [deployment] Collaboration at PTG
On 17/08/17 07:30 -0700, Emilien Macchi wrote: Hey folks, As usual, we'll meet in Denver and I hope we can spend some time together (in a meeting room first) to have face to face discussions on the recent topics that we had. Right now, TripleO sessions are not scheduled in our agenda, so we're pretty flexible: https://etherpad.openstack.org/p/tripleo-ptg-queens I would like to propose one topic (happy to coordinate the discussion) on some efforts regarding doing configuration management with Ansible, and k8s integration as well. Flavio made some progress [1] - I really hope we can make progress here. [1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119696.html If you're working on Kolla / OpenStack-Ansible - please let us know if you have specific constraints on the schedule, so we can maybe block a timeslot in the agenda from now. We'll have a "Packaging" room which is reserved for all topics related to OpenStack deployments, so we can use this one. Looking forward to meeting you at PTG! Thanks, Just want to raise my hand to help drive some of these conversations. I'd love to see some sessions around collaborating with other teams. Some ideas that we've discussed in the past could use some more discussion. For example: - Configuration management - Sharing playbooks with kolla - Kubernetes based jobs - What's there? - What can be shared? Going to add these points to the etherpad, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc] Technical Committee Status update, August 11th
On 11/08/17 12:11 +0200, Thierry Carrez wrote: == TC member actions for the coming week(s) == All TC members should have a close look on Storlets team status to make up their mind on what to do with it in Queens. Flavio still needs to incorporate feedback in the "Drop TC meetings" proposal and produce a new patchset, or abandon it since we pretty much already implemented the described change. I actually did already (probably seconds after you sent this email) :P https://review.openstack.org/#/c/459848/ Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][barbican][freezer][horizon][karbor][keystone][mistral][nova][pack-deb][refstack][solum][storlets][swift][tacker][telemetry][watcher][zaqar] Last days for PTL candidate announ
On 10/08/17 14:04 -0400, Anita Kuno wrote: On 2017-08-07 03:50 PM, Kendall Nelson wrote: Hello Everyone :) A quick reminder that we are in the last days for PTL candidate announcements. If you want to stand for PTL, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your candidacy has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: This means that with approximately 2.5 days left more than 27% of projects will be deemed leaderless. In this case the TC will be bound by [3]. I thought that the language in this sentence, that the TC is bound by the referenced resolution was a bit of mis-stating the relationship, but I thought I would let it pass and that things would work themselves out. However having read the language in Emmett's post to the TC reporting which programs don't have a self-nominated PTL, I'm motivated to clarify. The TC is not bound. The TC has agreed to follow a process. The election officials are bound, in as much as they are obliged to communicate the list of leaderless programs to the TC without delay. The TC is enabled by the process, not bound by it. The TC CAN appoint a leader, they are not obliged to appoint a leader. The TC may do other things as well, it depends on the circumstances. Also to clarify, the election officials serve the TC, not the other way around. I would like to clarify this sentence, though. I do not think the election officials serve the TC, nor does the TC serve the election officials. Both bodies serve the community and the processes defined by it. The fact that the TC oversees the community does not mean the latter (or any group in it) serves the TC. If anything, I'd prefer to think the TC serves the community and the groups that comprise it. That said, I think we could argue for a long time on the terms and words used to communicate the various relationships. 
While I believe words are extremely important, I also believe they matter a bit less if the message gets through. In this case the message is that there are cases of leaderless teams this time around and there's a process we can, should, and will follow. Thanks for the clarifications, Anita. Thanks for the hard work on the elections, Kendall. Flavio -- @flaper87 Flavio Percoco
[openstack-dev] [tripleo] Screencast of the undercloud deploying mariadb
Hey Team, I took the time to record a small screencast[0] showing the tripleo undercloud deploying mariadb on kubernetes. It's a *PoC*. It's very basic, some things are hard-coded, but it highlights 3 things that I believe are critical for TripleO: * Unified configuration management * Re-use of existing data * Re-use of existing templates and libraries You can find some more info in the blog post. I'll be working on a detailed version to explain how some of these parts work. This way we can keep moving the discussions forward. Flavio [0] https://blog.flaper87.com/deploy-mariadb-kubernetes-tripleo.html#one -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all] New code contributors no longer forced to join the foundation
On 07/08/17 23:28 +, Jeremy Stanley wrote: Due to improvements in the OpenStack Foundation member directory and our technical election tooling, it is no longer necessary to join the foundation as an individual member just to be able to submit changes through Gerrit for official repositories. You do, however, still eventually need to join if you want to be able to participate in technical elections (as a candidate or a voter). This is already a great step towards simplifying the new contributor experience. Thank you, and thanks to everyone involved in this effort. Great work, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.5.0
On 30/07/17 08:08 -0700, James E. Blair wrote: Thanks to the following people whose changes are included in this release: Jim Rollenhagen Kevin Benton Masayuki Igawa Matthew Thode Thank you all! <3 Gertty Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc] Technical Committee Status update, July 28th
On 28/07/17 10:50 +0200, Thierry Carrez wrote: Hi! This is the weekly update on Technical Committee initiatives. You can find the full list of all open topics at: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee == Recently-approved changes == * Declare plainly the current state of PostgreSQL in OpenStack [1] * Clean up remaining 'big tent' mention in Licensing requirements [2] * Queens goals updates: octavia * New repositories: charm-deployment-guide * Repositories moved to legacy: api-site, faafo * Removed repositories: deb-mistral-dashboard [1] https://review.openstack.org/#/c/427880/ [2] https://review.openstack.org/484607 The big item of the week is the final merge, after almost 6 months of discussion, of the resolution declaring plainly the state of PostgreSQL in OpenStack: https://governance.openstack.org/tc/resolutions/20170613-postgresql-status.html Additionally, the governance repository was tagged in preparation for the PTL elections, to clearly define the teams and associated repositories that will be considered. == Open discussions == Flavio's resolution about allowing teams to host meetings in their own IRC channels is still in the early days of discussion, and is likely to need a few iterations to iron out: https://review.openstack.org/485117 I'll update this patch asap. Heads up, I'll be out most of this week so expect an update by next week (unless I get to it today). == TC member actions for the coming week(s) == Flavio still needs to incorporate feedback in the "Drop TC meetings" proposal and produce a new patchset, or abandon it since we pretty much already implemented the described change. ditto! :D Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO
On 20/07/17 08:18 -0700, Emilien Macchi wrote: On Thu, Jul 20, 2017 at 7:27 AM, Andy McCrae wrote: [...] Hopefully that is useful, happy to discuss this more (or any other collaboration points!) if that does sound interesting. Andy [1] https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py [2] https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html Yes, this is very useful and this is what I also wanted to investigate more back in June: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118417.html . Like Flavio said, it sounds like we might just re-use what you guys did, since it looks flexible. What Doug wrote [1] stays very useful, since we don't want to re-use your templates; we would rather generate the list of options available in OpenStack projects by using oslo.config directly. We could provide a YAML with key/values of things we want to generate in an inifile. Now we could ask ourselves, in that case, why not directly make oslo.config read YAML instead of INI? Do we really need a translator? User input → YAML → OSA config template plugin → INI → read by oslo.config we could have: User input → YAML → read by oslo.config I've discussed these options with some operators but I want to re-iterate on it here. Any thoughts? [1] https://github.com/dhellmann/oslo-config-ansible The plugin, as is, is capable of generating INI files as well as YAML files. I don't really see the need of making oslo.config read YAML. On one side the idea sounds appealing because, well, we can do more with YAML than we can with INI files. On the other side, though, I don't think it's worth the time right now. A migration to YAML files requires way more work than we can account for. Not only would we need to make oslo.config support it, we'd also have to maintain compatibility for quite a few cycles, etc. Don't get me wrong. If someone wants to work on this, I'm good. 
What I'm saying is that, at this point, I don't think it's worth it. It may be in the future. The reality is that we depend on INI files now and we will for the foreseeable future, so the work has to be done anyway. Flavio -- @flaper87 Flavio Percoco
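As a rough illustration of the translator step debated above (User input → YAML → INI → read by oslo.config), the YAML-to-INI rendering can be done with nothing but the Python standard library. This is a hedged sketch, not the config_template plugin's actual code: the section and option names are made up, and a plain dict stands in for the parsed YAML.

```python
import configparser
import io


def yaml_dict_to_ini(options):
    """Render a nested dict (e.g. the result of yaml.safe_load on a
    deployer's file) into the INI format oslo.config consumes today.

    `options` maps section names to {option: value} dicts; this is a
    simplified stand-in for what a config-template-style plugin does.
    """
    parser = configparser.ConfigParser()
    for section, opts in options.items():
        # configparser requires string values, so coerce everything.
        parser[section] = {key: str(value) for key, value in opts.items()}
    buf = io.StringIO()
    parser.write(buf)
    return buf.getvalue()


# Hypothetical deployer input, as it might look after parsing YAML.
user_input = {
    "DEFAULT": {"debug": True},
    "database": {"connection": "mysql+pymysql://glance@db/glance"},
}

print(yaml_dict_to_ini(user_input))
```

The point of the sketch is that the translation itself is cheap; the contested part of the thread is whether teaching oslo.config to read YAML natively would ever pay for the compatibility burden.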
Re: [openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO
On 20/07/17 15:27 +0100, Andy McCrae wrote: Hi all, Some areas of collaboration: * Kubernetes resources: Work on the same set of resources. In this case, resources means the existing templates in kolla-kubernetes. Find ways to share the same resources rather than having 2 different sets of resources. * Configuration management: Work on a common ansible role/module for generating configuration files. There's a PoC already[1] but it's still being worked on. The PoC will likely turn into an Ansible module rather than a role. @flaper87 is working on this. On this point specifically, we have the config_template module[1] in OpenStack-Ansible, which sounds like it already does similar things to what you are after. Essentially you can supply a yaml formatted config and it will generate a json, ini or yaml conf file for you. We have some docs around using the module [2] - and it's already in use by the ceph-ansible project. We use it on top of templates, to allow the deployer to specify any options that aren't templated, but you could just as easily use it on a blank/empty start point and do away with templates completely. We tried to push it into Ansible core a few years ago, but there was push back based on there being other ways to achieve that, but I think there has been a shift in Ansible's approach to accepting new features/modules - so Kevin Carter (cloudnull) is going to give that another go at upstreaming it, since it seems generically useful for Ansible projects. Hopefully that is useful, happy to discuss this more (or any other collaboration points!) if that does sound interesting. Andy [1] https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py [2] https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html I just learned this module exists. Thanks for reaching out! 
By looking at the source code, it looks like this is exactly what we need and what we were hoping to come up with. YAY! Open Source! YAY! OpenStack! As mentioned on IRC, I'll add validation to this module (check the keys actually exist, the types are valid, etc.) based on the YAML schema that can be generated with oslo-config-gen now. I'll reach out again as soon as I have something to show on this front. Thanks again for your work, Flavio -- @flaper87 Flavio Percoco
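A minimal sketch of the kind of validation described above, assuming a hypothetical flat "section/option" schema. The real module would derive its schema from oslo-config-gen's machine-readable output; the schema shape, option names, and values here are invented for illustration.

```python
def validate_overrides(overrides, schema):
    """Check deployer-supplied config overrides against an option schema.

    `schema` maps "section/option" keys to an expected Python type.
    Returns a list of human-readable error strings; an empty list means
    the overrides passed validation.
    """
    errors = []
    for section, opts in overrides.items():
        for name, value in opts.items():
            key = f"{section}/{name}"
            expected = schema.get(key)
            if expected is None:
                # The option is not declared anywhere: likely a typo.
                errors.append(f"unknown option: {key}")
            elif not isinstance(value, expected):
                errors.append(
                    f"{key}: expected {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
    return errors


# Hypothetical schema and overrides, for illustration only.
schema = {"DEFAULT/debug": bool, "glance_store/stores": str}
overrides = {"DEFAULT": {"debug": "yes"}, "glance_store": {"driver": "file"}}
print(validate_overrides(overrides, schema))
```

Catching a misspelled option name or a string where a boolean is expected before the file is rendered is exactly the feedback loop a deployer otherwise only gets when the service fails to start.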
[openstack-dev] [kolla][TripleO] New areas for collaboration between Kolla and TripleO
Hello Team, The TripleO team and the Kolla team met on IRC yesterday to explore areas where collaboration is possible, now that TripleO is looking into jumping on the Kubernetes wagon. Below you can find a brief summary of the meeting and some of the action items that came out of it. But, before that, I'd like to take the chance to thank everyone who participated in the meeting as I believe it was a productive conversation. There are still many more to have but it's a good example of what is possible. Bullet summary: * The Kolla team went into details about how kolla-kubernetes uses Helm. * kolla-kubernetes doesn't depend on Helm as much as it depends on gotpl. Helm is still being used to render the templates and to run the services, though. Although it's not planned, it would be technically possible to change the latter with calls to kubectl and the former with calls to gotpl directly. Again, not planned, not even discussed. Just a thought. * TripleO would rather not have another template language. * TripleO is interested in a solution that is primarily based on Ansible. Some areas of collaboration: * Kubernetes resources: Work on the same set of resources. In this case, resources means the existing templates in kolla-kubernetes. Find ways to share the same resources rather than having 2 different sets of resources. * Configuration management: Work on a common ansible role/module for generating configuration files. There's a PoC already[1] but it's still being worked on. The PoC will likely turn into an Ansible module rather than a role. @flaper87 is working on this. * Work on a common orchestration playbook: It would be possible to work on a set of playbooks that could be shared across kolla-kubernetes, TripleO and other projects to orchestrate an OpenStack deployment. Moving Forward: Configuration management is certainly one area that we can start working on already. As mentioned above, I've started working on it based on a previous PoC that Doug Hellmann did. 
I'm in the process of translating the role into an ansible module 'cause I believe a python module would be better for this case. The work on common orchestration depends, to some extent, on the work for using the same set of kubernetes resources. I'm also looking into this topic. As mentioned in the meeting, the TripleO team would rather not add a new templating language to the stack so I'm looking into other ways we could make this happen. For example, I added support for generating k8s YAML files to ansible-kubernetes[2]. No idea whether that will land or whether it makes sense but I'm actively working on this. Once we figure some of the above out, we can start working on a common playbook for orchestration. I've not mentioned anything about repos, teams, etc. because I don't think this discussion is relevant right now. Let's get something going and work the logistics out later on. Finally, Emilien and Michal will sync to make sure the PTG sessions for Kolla and TripleO don't overlap so we can have more chances for shared sessions. Ideally, we'll get to the PTG with some prototypes done already and we'll use that time for more granular planning. Thoughts? Corrections? Did I miss something? Flavio [0] http://eavesdrop.openstack.org/meetings/kolla/2017/kolla.2017-07-19-16.00.log.html [1] https://github.com/flaper87/oslo-config-ansible [2] https://github.com/ansible/ansible-kubernetes-modules/pull/4 -- @flaper87 Flavio Percoco
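To illustrate the idea of emitting Kubernetes resources from structured data rather than adding a second templating language, here is a hedged sketch. The resource shape follows the standard Deployment spec, but the service name and image are made up, and this is not the actual ansible-kubernetes code referenced above. Since Kubernetes accepts JSON as well as YAML, the stdlib json module suffices for the dump.

```python
import json


def make_deployment(name, image, replicas=1):
    """Build a minimal Kubernetes Deployment manifest as plain data.

    Generating manifests as Python dicts means composition, defaults,
    and validation can happen in code, with serialization as the last
    step, instead of string-templating YAML.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }


# Hypothetical service/image names, for illustration only.
manifest = make_deployment("mariadb", "tripleoupstream/centos-binary-mariadb")
print(json.dumps(manifest, indent=2))
```

The same dict could just as easily be fed to a yaml.safe_dump call or handed to an Ansible k8s module as a `resource_definition`; the design point is that the data structure, not a template file, is the source of truth.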
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
On 17/07/17 14:05 +0200, Flavio Percoco wrote: Thanks for all the feedback so far. This is one of the things I appreciate the most about this community: open conversations, honest feedback and a will to collaborate. I'm top-posting to announce that we'll have a joint meeting with the Kolla team on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not for me) but I do want to have a live discussion with the rest of the Kolla team. Some questions about the meeting: * How much time can we allocate? * Can we prepare an agenda rather than just discussing "TripleO is thinking of using Ansible and not kolla-kubernetes"? (I'm happy to come up with such an agenda) One last point. I'm not interested in conversations around competition, re-invention, etc. I think I speak for the entire TripleO team when I say that this is not about "winning" in this space but rather seeing how/if we can collaborate and how/if it makes sense to keep exploring the path described in the email below. Hey y'all, Sorry for not having sent this earlier but, Life Happened (TM). In preparation for the meeting today, I took the time to collect some thoughts on the topic so that we can, hopefully, have a more focused and constructive conversation. Please, find my thoughts on this etherpad and feel free to comment on it. I've disabled color so please tag your comments with your nickname. https://etherpad.openstack.org/p/tripleo-ptg-queens-kubernetes Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc][all] Move away from meeting channels
Hey folks, Based on the outcome of this thread, I've submitted this resolution to allow teams to host meetings outside meeting channels. Please, comment and review :) https://review.openstack.org/#/c/485117/ Flavio On 26/06/17 10:37 +0200, Flavio Percoco wrote: Hey Y'all, Not so long ago there was a discussion about how we manage our meeting channels and whether there's need for more or fewer of them[0]. Good points were made in that thread in favor of keeping the existing model but some things have changed, hence this new thread. More teams - including the Technical Committee[1] - have started to adopt office hours as a way to provide support and have synchronous discussion. Some of these teams have also discontinued their IRC meetings or moved to an ad-hoc meetings model. As these changes start popping up in the community, we need to have a good way to track the office hours for each team and allow for teams to meet at the time they prefer. Before we go deep into the discussion again, I'd like to summarize what has been discussed in the past (thanks ttx for the summary): The main objections to just letting people meet anywhere are: - how do we ensure the channel is logged/accessible - we won't catch random mentions of our name as easily anymore - might create a pile-up of meetings at peak times rather than force them to spread around - increases silo effect Main benefits being: - No more scheduling nightmare - More flexibility in listing things in the calendar Some of the problems above can be solved programmatically - cross-check on eavesdrop to make sure logging is enabled, for example. The problems that I'm more worried about are the social ones, because they'll require a change in the way we interact among us. Not being able to easily ping someone during a meeting is kind of a bummer but I'd argue that assuming someone is in the meeting channel and available at all times is a mistake to begin with. There will be conflicts on meeting times. 
There will be slots that will be used by several teams as these slots are convenient for cross-timezone interaction. We can check this and highlight the various conflicts but I'd argue we shouldn't. We already have some overlaps in the current structure. The social drawbacks related to this change can be overcome by interacting more on the mailing list. Ideally, this change should help raise awareness about the distributed nature of our community, encourage folks to do more office hours, fewer meetings and, more importantly, to encourage folks to favor the mailing list over IRC conversations for *some* discussions. So, should we let teams host IRC meetings in their own channels? Thoughts? Flavio [0] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108360.html [1] https://governance.openstack.org/tc/#office-hours -- @flaper87 Flavio Percoco
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
On 17/07/17 16:48 -0400, Ryan Hallisey wrote: One other thing to mention. Maybe folks can speed up writing these playbooks by using kolla-ansible's playbooks as a shell. Here's an example: [1] Take lines 1-16 and replace them with helm install mariadb or kubectl create -f mariadb-pod.yaml and set inventory to localhost. Just a thought. There may be some other playbooks out there I don't know about that you can use, but that could at least get some of the collaboration started so folks don't have to start from scratch. [1] - https://github.com/openstack/kolla-ansible/blob/afdd11b9a22ecca70962a4637d89ad50b7ded2e5/ansible/roles/mariadb/tasks/start.yml#L1-L16 +1 This is why I think there's still room for collaboration and we can re-use several of the existing things. I don't think everything would have to be written from scratch. Flavio -- @flaper87 Flavio Percoco
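To make Ryan's suggestion concrete, here is a minimal sketch of what the replacement playbook could look like. This is purely illustrative: the chart name (kolla/mariadb), manifest filename, and helm v2-style flags are assumptions, not something taken from the linked start.yml.

```yaml
# Hypothetical replacement for the mariadb bootstrap tasks in
# kolla-ansible: instead of starting containers directly, delegate
# the pod lifecycle to Kubernetes. All names here are illustrative.
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy mariadb via helm (one option)
      command: helm install kolla/mariadb --name mariadb

    # Alternative to the helm task above, not in addition to it:
    # - name: Deploy mariadb via kubectl
    #   command: kubectl create -f mariadb-pod.yaml
```

Either way, the orchestration stays in Ansible while the actual service scheduling moves to Kubernetes, which is the division of labor being discussed in this thread.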
Re: [openstack-dev] [Glare] Application for inclusion of Glare in the list of official projects - Answers
one plugin. But in order to avoid potential collisions, it was decided not to include this plugin in the official repository, since it has not yet been properly tested. 6. "Is there any documentation to familiarize with the project closer" Yes, there is documentation, but it obviously is not enough. Here are the main links: Glare repo: https://github.com/openstack/glare Glare client repo: https://github.com/openstack/python-glareclient How to deploy Glare in Docker: https://github.com/Fedosin/docker-glare How to deploy Glare in Devstack: https://github.com/openstack/glare/tree/master/devstack Glare API description: https://github.com/openstack/glare/blob/master/doc/source/developer/webapi/v1.rst Glare architecture description: https://github.com/openstack/glare/blob/master/doc/source/architecture.rst Set of glare demos (slightly outdated): Glare artifact lifecycle: https://asciinema.org/a/97985 Listing of artifacts in Glare: https://asciinema.org/a/97986 Creating a new artifact type: https://asciinema.org/a/97987 Locations, Tags, Links and Folders: https://asciinema.org/a/99771 Now I'm writing Get Started doc for people who want to get to know the project more closely. It would also be nice to create a wiki page with the most useful and up-to-date information, as well as with FAQ. Idan is recording new demos, based on the recent changes. In conclusion, I want to say that I agree that it is not necessary to make Glare an official project right now. Therefore, I want to hold 1-2 sessions at PTG in Denver to demonstrate all service possibilities, because I believe that the picture's worth a thousand words. After the discussions, we will be able to jointly take a weighted decision. If you want to help the project, you can always find us in the channel #openstack-glare, or write to my email (mfedo...@gmail.com) 5 and 6 and your conclusion is what I believe we should be focusing on. Mike, thanks a lot for sending this email out. 
Thanks for summarizing things, for addressing comments and for understanding that we should probably have a better discussion before we move forward. Understanding Glare's goals, plan, current state, etc. is extremely important. Enjoy your time off, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
On 17/07/17 09:47 -0400, James Slagle wrote: On Mon, Jul 17, 2017 at 8:05 AM, Flavio Percoco wrote: Thanks for all the feedback so far. This is one of the things I appreciate the most about this community, Open conversations, honest feedback and will to collaborate. I'm top-posting to announce that we'll have a joint meeting with the Kolla team on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not for me) but I do want to have a live discussion with the rest of the Kolla team. Some questions about the meeting: * How much time can we allocate? * Can we prepare an agenda rather than just discussing "TripleO is thinking of using Ansible and not kolla-kubernetes"? (I'm happy to come up with such agenda) It may help to prepare some high level requirements around what we need out of a solution. For the ansible discussion I started this etherpad: https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible How we use Ansible and what we want to use it for, is related to this discussion around Helm. Although, it's not the exact same discussion, so if you wanted to start a new etherpad more specific to tripleo/kubernetes that may be good as well. One thing I think is important in this discussion is that we should be thinking about deploying containers on both Kubernetes and !Kubernetes. That is one of the reasons I like the ansible approach, in that I think it could address both cases with a common interface and API. I don't think we should necessarily choose a solution that requires to deploy on Kubernetes. Because then we are stuck with that choice. It'd be really nice to just "docker run" sometimes for dev/test. I don't know if Helm has that abstraction or not, I'm just trying to capture the requirement. Yes! Thanks for pointing this out as this is one of the reasons why I was proposing ansible as our common interface w/o any extra layer. 
I'll probably start a new etherpad for this as I would prefer not to distract the rest of the TripleO + ansible discussion. In the end, if ansible ends up being the tool we pick, I'll make sure to update your etherpad. Flavio If you consider the parallel with Heat in this regard, we are currently "stuck" deploying on OpenStack (undercloud with Heat). We've had to work on a lot of complementary features to add the flexibility to TripleO that are a result of having to use OpenStack (OVB, split-stack). That's exactly why we are starting a discussion around using Ansible, and is one of the fundamental changes that operators have been requesting in TripleO. -- James Slagle -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tripleo] scenario006 conflict
On 17/07/17 15:56 +0100, Derek Higgins wrote: On 17 July 2017 at 15:37, Emilien Macchi wrote: On Thu, Jul 13, 2017 at 6:01 AM, Emilien Macchi wrote: On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins wrote: On 12 July 2017 at 22:33, Emilien Macchi wrote: On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi wrote: [...] Derek, it seems like you want to deploy Ironic on scenario006 (https://review.openstack.org/#/c/474802). I was wondering how it would work with multinode jobs. Derek, I also would like to point out that https://review.openstack.org/#/c/474802 is missing the environment file for non-containerized deployments and also the pingtest file. Just for the record, if we can have it before the job moves in gate. I knew I had left out the ping test file, this is the next step but I can create a noop one for now if you'd like? Please create a basic pingtest with common things we have in other scenarios. Are non-containerized deployments a requirement? Until we stop supporting non-containerized deployments, I would say yes. Thanks, -- Emilien Macchi So if you create a libvirt domain, would it be possible to do it on scenario004 for example and keep coverage for other services that are already on scenario004? It would avoid consuming a scenario just for Ironic. If not possible, then talk with Flavio and one of you will have to prepare scenario007 or scenario008, depending on where Numans is in his progress to have OVN coverage as well. I haven't seen much resolution / answers about it. We still have the conflict right now and open questions. Derek, Flavio - let's solve this one this week if we can. Yes, I'll be looking into using scenario004 this week. I was traveling last week so wasn't looking at it. Awesome! Thanks, Derek.
Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
On 14/07/17 08:08 -0700, Emilien Macchi wrote: On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco wrote: Greetings, As some of you know, I've been working on the second phase of TripleO's containerization effort. This phase is about migrating the docker-based deployment onto Kubernetes. This phase requires work on several areas: Kubernetes deployment, OpenStack deployment on Kubernetes, configuration management, etc. While I've been diving into all of these areas, this email is about the second point, OpenStack deployment on Kubernetes. There are several tools we could use for this task. kolla-kubernetes, openstack-helm, ansible roles, among others. I've looked into these tools and I've come to the conclusion that TripleO would be better off by having ansible roles that would allow for deploying OpenStack services on Kubernetes. The existing solutions in the OpenStack community require using Helm. While I like Helm and both the kolla-kubernetes and openstack-helm OpenStack projects, I believe using any of them would add an extra layer of complexity to TripleO, which is something the team has been fighting for years - especially now that the snowball is being chopped off. Adopting any of the existing projects in the OpenStack community would require TripleO to also write the logic to manage those projects. For example, in the case of openstack-helm, the TripleO team would have to write either ansible roles or heat templates to manage - install, remove, upgrade - the charts (I'm happy to discuss this point further but I'm keeping it at a high-level on purpose for the sake of not writing a 10k-words-long email). James Slagle sent an email[0], a couple of days ago, to form TripleO plans around ansible. One take-away from this thread is that TripleO is adopting ansible more and more, which is great and it fits perfectly with the conclusion I reached.
Now, what this work means is that we would have to write an ansible role for each service that will deploy the service on a Kubernetes cluster. Ideally these roles will also generate the configuration files (removing the need for puppet entirely) and they would manage the lifecycle. The roles would be isolated and this will reduce the need for TripleO Heat templates. Doing this would give TripleO full control over the deployment process too. In addition, we could also write Ansible Playbook Bundles to contain these roles and run them using the existing docker-cmd implementation that is coming out in Pike (you can find a PoC/example of this in this repo[1]). Now, I do realize the amount of work this implies and that this is my opinion/conclusion. I'm sending this email out to kick off the discussion and gather thoughts and opinions from the rest of the community. Finally, what I really like about writing pure ansible roles is that ansible is a known, powerful tool that has been adopted by many operators already. It'll provide the flexibility needed and, if structured correctly, it'll allow for operators (and other teams) to just use the parts they need/want without depending on the full-stack. I like the idea of being able to separate concerns in the deployment workflow and the idea of making it simple for users of TripleO to do the same at runtime. Unfortunately, going down this road means that my hope of creating a field where we could collaborate even more with other deployment tools will be a bit limited but I'm confident the result would also be useful for others and that we all will benefit from it... My hopes might be a bit naive *shrugs* Of course I'm biased since I've been (a little) involved in that work but I like the idea of: - Moving forward with our containerization.
docker-cmd will help us for sure for this transition (I insist on the fact TripleO is a product that you can upgrade and we try to make it smooth for our operators), so we can't just trash everything and switch to a new tool. I think the approach that we're taking is great and made of baby steps where we try to solve different problems. - Using more Ansible - the right way - when it makes sense : with the TripleO containerization, we only use Puppet for Configuration Management, managing a few resources but not for orchestration (or not all the features that Puppet provide) and for Data Binding (Hiera). To me, it doesn't make sense for us to keep investing much in Puppet modules if we go k8s & Ansible. That said, see the next point. - Having a transition path between TripleO with Puppet and TripleO with apbs and have some sort of binding between previous hieradata generated by TripleO & a similar data binding within Ansible playbooks would help. I saw your PoC Flavio, I found it great and I think we should make https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/hiera.yaml optional when running apbs, and a
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
Thanks for all the feedback so far. This is one of the things I appreciate the most about this community: open conversations, honest feedback, and a will to collaborate. I'm top-posting to announce that we'll have a joint meeting with the Kolla team on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not for me) but I do want to have a live discussion with the rest of the Kolla team. Some questions about the meeting: * How much time can we allocate? * Can we prepare an agenda rather than just discussing "TripleO is thinking of using Ansible and not kolla-kubernetes"? (I'm happy to come up with such an agenda) One last point. I'm not interested in conversations around competition, re-invention, etc. I think I speak for the entire TripleO team when I say that this is not about "winning" in this space but rather seeing how/if we can collaborate and how/if it makes sense to keep exploring the path described in the email below. Flavio On 14/07/17 11:17 +0200, Flavio Percoco wrote: Greetings, As some of you know, I've been working on the second phase of TripleO's containerization effort. This phase is about migrating the docker-based deployment onto Kubernetes. This phase requires work on several areas: Kubernetes deployment, OpenStack deployment on Kubernetes, configuration management, etc. While I've been diving into all of these areas, this email is about the second point, OpenStack deployment on Kubernetes. There are several tools we could use for this task. kolla-kubernetes, openstack-helm, ansible roles, among others. I've looked into these tools and I've come to the conclusion that TripleO would be better off by having ansible roles that would allow for deploying OpenStack services on Kubernetes. The existing solutions in the OpenStack community require using Helm.
While I like Helm and both the kolla-kubernetes and openstack-helm OpenStack projects, I believe using any of them would add an extra layer of complexity to TripleO, which is something the team has been fighting for years - especially now that the snowball is being chopped off. Adopting any of the existing projects in the OpenStack community would require TripleO to also write the logic to manage those projects. For example, in the case of openstack-helm, the TripleO team would have to write either ansible roles or heat templates to manage - install, remove, upgrade - the charts (I'm happy to discuss this point further but I'm keeping it at a high-level on purpose for the sake of not writing a 10k-words-long email). James Slagle sent an email[0], a couple of days ago, to form TripleO plans around ansible. One take-away from this thread is that TripleO is adopting ansible more and more, which is great and it fits perfectly with the conclusion I reached. Now, what this work means is that we would have to write an ansible role for each service that will deploy the service on a Kubernetes cluster. Ideally these roles will also generate the configuration files (removing the need for puppet entirely) and they would manage the lifecycle. The roles would be isolated and this will reduce the need for TripleO Heat templates. Doing this would give TripleO full control over the deployment process too. In addition, we could also write Ansible Playbook Bundles to contain these roles and run them using the existing docker-cmd implementation that is coming out in Pike (you can find a PoC/example of this in this repo[1]). Now, I do realize the amount of work this implies and that this is my opinion/conclusion. I'm sending this email out to kick off the discussion and gather thoughts and opinions from the rest of the community. Finally, what I really like about writing pure ansible roles is that ansible is a known, powerful tool that has been adopted by many operators already.
It'll provide the flexibility needed and, if structured correctly, it'll allow for operators (and other teams) to just use the parts they need/want without depending on the full-stack. I like the idea of being able to separate concerns in the deployment workflow and the idea of making it simple for users of TripleO to do the same at runtime. Unfortunately, going down this road means that my hope of creating a field where we could collaborate even more with other deployment tools will be a bit limited but I'm confident the result would also be useful for others and that we all will benefit from it... My hopes might be a bit naive *shrugs* Flavio [0] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html [1] https://github.com/tripleo-apb/tripleo-apbs -- @flaper87 Flavio Percoco
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
First and foremost I just realized that I forgot to tag kolla and openstack-helm in the subject so, I apologize. I'm glad the subject was catchy enough to get your attention. Just want to raise here what I just mentioned on IRC: It's late in EU so I shouldn't be here right now but, I do want to point out that, as usual, I asked for feedback and clarifications from everyone in this thread. I'm not trying to re-invent the wheel. What's in my original email is my conclusion based on research I did across the different tools there are. I can, of course, be wrong and I'd like you all to help us by providing feedback. I'm not expecting sales pitches but I'd love to have a more technical discussion on how we can, hopefully, make this work. On 14/07/17 16:16 +, Fox, Kevin M wrote: https://xkcd.com/927/ I don't think adopting helm as a dependency adds more complexity than writing more new k8s object deployment tooling? There are efforts to make it easy to deploy kolla-kubernetes microservice charts using ansible for orchestration in kolla-kubernetes. See: https://review.openstack.org/#/c/473588/ What kolla-kubernetes brings to the table is a tested/shared base k8s object layer. Orchestration is done by ansible via TripleO, and the solutions already found/debugged for how to deploy OpenStack in containers on Kubernetes can be reused/shared. See for example: https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml I don't see much by way of dealing with fernet token rotation. That was a tricky bit of code to get to work, but kolla-kubernetes has a solution to it. You can get it by: helm install kolla/keystone-fernet-rotate-job. It's just a PoC, don't take the implementation as definitive. We designed this layer to be shareable so we all can contribute to the commons rather than having every project reimplement their own and have to chase bugs across all the implementations.
The deployment projects will be stronger together if we can share as much as possible. Please reconsider. I'd be happy to talk with you more if you want. Let's talk, that's the whole point of this thread. Flavio -- @flaper87 Flavio Percoco
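As a sketch of the kind of sharing Kevin describes - ansible orchestrating kolla-kubernetes microservice charts - the fernet rotation job he mentions could be driven from a playbook roughly like this. The chart name comes from his email; the repository variable and release name are assumptions for illustration.

```yaml
# Hypothetical playbook driving a kolla-kubernetes chart from
# ansible. Only the chart name (kolla/keystone-fernet-rotate-job)
# comes from the thread; everything else is assumed.
- hosts: localhost
  connection: local
  tasks:
    - name: Add the kolla chart repository (URL is a placeholder variable)
      command: helm repo add kolla {{ kolla_repo_url }}

    - name: Install the fernet key rotation job
      command: helm install kolla/keystone-fernet-rotate-job --name fernet-rotate
```

The point of the sketch is the division of labor: the tricky fernet rotation logic lives in the shared chart, while the deployment tool only decides when and where to install it.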
Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote: On 14.07.2017 11:17, Flavio Percoco wrote: Greetings, As some of you know, I've been working on the second phase of TripleO's containerization effort. This phase is about migrating the docker-based deployment onto Kubernetes. This phase requires work on several areas: Kubernetes deployment, OpenStack deployment on Kubernetes, configuration management, etc. While I've been diving into all of these areas, this email is about the second point, OpenStack deployment on Kubernetes. There are several tools we could use for this task. kolla-kubernetes, openstack-helm, ansible roles, among others. I've looked into these tools and I've come to the conclusion that TripleO would be better off by having ansible roles that would allow for deploying OpenStack services on Kubernetes. The existing solutions in the OpenStack community require using Helm. While I like Helm and both the kolla-kubernetes and openstack-helm OpenStack projects, I believe using any of them would add an extra layer of complexity to TripleO, It's hard to estimate that complexity w/o having a PoC of such an integration. We should come up with a final choice once we have it done. My vote would go for investing engineering resources into solutions that have problems already solved, even at the price of added complexity (but that sort of depends...). Added complexity may be compensated with removed complexity (like those client -> Mistral -> Heat -> Mistral -> Ansible manipulations discussed in the mail thread mentioned below [0]) I agree it's hard to estimate but you gotta draw the line somewhere. I actually spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote the pyhelm lib (took some code from the openstack-helm folks) and I wrote the ansible helm module myself. I'd say I've spent enough time on this research.
I don't think getting a full PoC working is worth it as that will require way more work for not much value since we can anticipate some of the complexities already. As far as the complexity comment goes, I disagree with you. I don't think you're evaluating the amount of complexity that there *IS* already in TripleO and how adding more complexity (layers, states, services) would make things worse for not much extra value. By all means, I might be wrong here so, do let me know if you're seeing something I'm not. Flavio -- @flaper87 Flavio Percoco
[openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes
Greetings, As some of you know, I've been working on the second phase of TripleO's containerization effort. This phase is about migrating the docker-based deployment onto Kubernetes. This phase requires work on several areas: Kubernetes deployment, OpenStack deployment on Kubernetes, configuration management, etc. While I've been diving into all of these areas, this email is about the second point, OpenStack deployment on Kubernetes. There are several tools we could use for this task. kolla-kubernetes, openstack-helm, ansible roles, among others. I've looked into these tools and I've come to the conclusion that TripleO would be better off by having ansible roles that would allow for deploying OpenStack services on Kubernetes. The existing solutions in the OpenStack community require using Helm. While I like Helm and both the kolla-kubernetes and openstack-helm OpenStack projects, I believe using any of them would add an extra layer of complexity to TripleO, which is something the team has been fighting for years - especially now that the snowball is being chopped off. Adopting any of the existing projects in the OpenStack community would require TripleO to also write the logic to manage those projects. For example, in the case of openstack-helm, the TripleO team would have to write either ansible roles or heat templates to manage - install, remove, upgrade - the charts (I'm happy to discuss this point further but I'm keeping it at a high-level on purpose for the sake of not writing a 10k-words-long email). James Slagle sent an email[0], a couple of days ago, to form TripleO plans around ansible. One take-away from this thread is that TripleO is adopting ansible more and more, which is great and it fits perfectly with the conclusion I reached. Now, what this work means is that we would have to write an ansible role for each service that will deploy the service on a Kubernetes cluster.
Ideally these roles will also generate the configuration files (removing the need for puppet entirely) and they would manage the lifecycle. The roles would be isolated and this will reduce the need for TripleO Heat templates. Doing this would give TripleO full control over the deployment process too. In addition, we could also write Ansible Playbook Bundles to contain these roles and run them using the existing docker-cmd implementation that is coming out in Pike (you can find a PoC/example of this in this repo[1]). Now, I do realize the amount of work this implies and that this is my opinion/conclusion. I'm sending this email out to kick off the discussion and gather thoughts and opinions from the rest of the community. Finally, what I really like about writing pure ansible roles is that ansible is a known, powerful tool that has been adopted by many operators already. It'll provide the flexibility needed and, if structured correctly, it'll allow for operators (and other teams) to just use the parts they need/want without depending on the full-stack. I like the idea of being able to separate concerns in the deployment workflow and the idea of making it simple for users of TripleO to do the same at runtime. Unfortunately, going down this road means that my hope of creating a field where we could collaborate even more with other deployment tools will be a bit limited but I'm confident the result would also be useful for others and that we all will benefit from it... My hopes might be a bit naive *shrugs* Flavio [0] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html [1] https://github.com/tripleo-apb/tripleo-apbs -- @flaper87 Flavio Percoco
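To make the per-service role shape in the proposal above more tangible, here is a minimal sketch of what one such role's task list might look like: render the service configuration without puppet, publish it to the cluster, then apply the deployment manifest. The role name, template, and manifest file are hypothetical; only the general shape (config generation plus kubectl-driven deployment) follows the email.

```yaml
# tasks/main.yml of a hypothetical ansible-role-k8s-keystone.
# Render the config from role variables, publish it as a ConfigMap,
# then apply the deployment manifest. The j2 template and the
# manifest file are assumed to ship with the role.
- name: Render keystone.conf from role variables
  template:
    src: keystone.conf.j2
    dest: /tmp/keystone.conf

- name: Publish the rendered config as a ConfigMap
  command: kubectl create configmap keystone-config --from-file=/tmp/keystone.conf

- name: Apply the keystone deployment to the cluster
  command: kubectl apply -f "{{ role_path }}/files/keystone-deployment.yaml"
```

Each service would get a role of this shape, which is what keeps the roles isolated and usable on their own without the full TripleO stack.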
Re: [openstack-dev] [glance][rally] Disabling Glance Testing in Rally gates
On 13/07/17 00:56 -0700, Boris Pavlovic wrote: Hi stackers, Unfortunately what was discussed in the other thread (situation in glance is critical) happened. Glance stopped working and the Rally team is forced to disable checking of it in Rally gates. P.S. Seems like this patch is causing the problems: https://github.com/openstack-dev/devstack/commit/1fa653635781cd975a1031e212b35b6c38196ba4 Hey Boris, Has this been brought up to the Glance team? Or is this email meant to do that? FWIW, the switch to uwsgi is a community goal and not so much a Glance thing. Would you mind elaborating on what exactly is failing and how the glance team can help? Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tripleo] scenario006 conflict
On 12/07/17 14:23 -0700, Emilien Macchi wrote: Hey folks, Derek, it seems like you want to deploy Ironic on scenario006 (https://review.openstack.org/#/c/474802). I was wondering how it would work with multinode jobs. Also, Flavio would like to test k8s on scenario006: https://review.openstack.org/#/c/471759/ . To avoid having too many scenarios and too much complexity, I think if ironic tests can be done on a 2-node job, then we can deploy ironic on scenario004 maybe. If not, then please give the requirements so we can see how to structure it. For Flavio's need, I think we need a dedicated scenario for now, since he's not going to deploy any OpenStack service on the overcloud for now, just k8s. True, this is the plan for now. However, I don't think deploying other services in the overcloud is going to affect what I'm doing now. Ultimately, I'll start moving the services onto Kubernetes as soon as the rest of the work starts to take shape. Thanks for the email, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects
On 11/07/17 19:21 -0500, Monty Taylor wrote: On 07/11/2017 06:47 AM, Flavio Percoco wrote: On 11/07/17 14:20 +0300, Mikhail Fedosin wrote: On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor wrote: On 07/10/2017 04:31 PM, Mikhail Fedosin wrote: Third, all these changes can be hidden in the Glare client. So if we try a little, we can achieve 100% compatibility there, and other projects can use the Glare client instead of Glance's without even noticing the differences. I think we should definitely not do this... I think instead, if we decide to go down this road, we want to look at adding an endpoint to glare that speaks the glance v2 API so that users can have a transition period while libraries and tools get updated to understand the artifacts API. This is optional and depends on the project developers. For my part, I can only offer the most compatible client, so that the Glance module can be simply copied into the new Glare module. Unfortunately, adding this sort of logic to the client is almost never the right choice. To be completely honest, I'm not even convinced having a Glance-like API in Glare is the right thing to do. As soon as that API hits the codebase, you'll have to maintain it. Anything that delays the transition to the new thing is providing a fake bridge to the users. It's a bridge that will be blown up eventually. To make a hypothetical transition from Glance to Glare work smoothly, we should first figure out how to migrate the database (assuming this has not been done yet), how to migrate the images, etc. Only when these things have been figured out would I start worrying about what compatibility layer we want to provide. The answer could also be: "Hey, we're sorry but the best thing you can do is to migrate your code base as soon as possible". I think this is a deal breaker.
The problem is - if glare doesn't provide a v2 compat layer, then a deployer is going to have to run glance AND glare at the same time and we'll have to make sure both glance and glare can write to the same backend. The reason is that with our major version bumps both versions co-exist for a period of time, which allows consumers to gracefully start consuming the nicer and newer api while not being immediately broken when the old api isn't there. What we'd be looking at is: * a glare service that runs two endpoints - an /image endpoint and an /artifact endpoint - and that registers the /image endpoint with the catalog as the 'image' service_type and the /artifact endpoint with the catalog as the 'artifact' service_type, followed by a deprecation period of the image endpoint from the bazillion things that use it and a migration to the artifact service. OR First - immediately bump the glare api version to 3.0. This will affect some glare users, but given the relative numbers of glance v. glare users, it may be the right choice. Run a single set of versioned endpoints - no /v1, /v2 has /image at the root and /v3 has /artifact at the root. Register that endpoint with the catalog as both artifact and image. That means service and version discovery will find the /v2 endpoint of the glare service if someone says "I want 'image' api 'v2'". It's already fair game for a cloud to run without v1 - so that's not a problem. (This, btw, is the reason glare has to bump its api to v3 - if it still had a v1 in its version discovery document, glance users would potentially find that but it would not be a v1 of the image API) In both cases, /v2/images needs to be the same as glance /v2/images.
If both are running side-by-side, which is how we normally do major version bumps, then client tools and libraries can use the normal version discovery process to discover that the cloud has the new /v3 version of the api with service-type of 'image', and they can decide if they want to use it or not. Yes - this is going to provide a pile of suck for the glare team, because they're going to have to maintain an API mapping layer, and they're going to have to maintain it for a full glance v2 api deprecation period. Because glance v2 is in DefCore, that is longer than a normal deprecation period - but that's life. Right! This is the extended version of what I tried to say. :D I'm not a huge fan of the Glare team having a Glance v2 API but I think it's our best option forward. FWIW, this WAS tried before, but a bit differently. Remember the Glance v3 discussion? That Glance v3 was Glare living in Glance's codebase. The main difference now is that it would be Glare providing Glance's v2 and Glare's v3 rather than Glance doing yet another major version change. I still think we should figure out how to migrate a Glance deployment to Glare (database, stores, etc) before the work on this API even starts. I would like to see a good plan forward for this. Ultimately, the t
Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects
On 11/07/17 14:20 +0300, Mikhail Fedosin wrote: On Tue, Jul 11, 2017 at 1:43 AM, Monty Taylor wrote: On 07/10/2017 04:31 PM, Mikhail Fedosin wrote: Third, all these changes can be hidden in the Glare client. So if we try a little, we can achieve 100% compatibility there, and other projects can use the Glare client instead of Glance's without even noticing the differences. I think we should definitely not do this... I think instead, if we decide to go down this road, we want to look at adding an endpoint to glare that speaks the glance v2 API so that users can have a transition period while libraries and tools get updated to understand the artifacts API. This is optional and depends on the project developers. For my part, I can only offer the most compatible client, so that the Glance module can be simply copied into the new Glare module. Unfortunately, adding this sort of logic to the client is almost never the right choice. To be completely honest, I'm not even convinced having a Glance-like API in Glare is the right thing to do. As soon as that API hits the codebase, you'll have to maintain it. Anything that delays the transition to the new thing is providing a fake bridge to the users. It's a bridge that will be blown up eventually. To make a hypothetical transition from Glance to Glare work smoothly, we should first figure out how to migrate the database (assuming this has not been done yet), how to migrate the images, etc. Only when these things have been figured out would I start worrying about what compatibility layer we want to provide. The answer could also be: "Hey, we're sorry but the best thing you can do is to migrate your code base as soon as possible". If projects use Glance without the client, it means that some direct API requests will need to be rewritten. But in any case, the number of differences between Glance v1 and Glance v2 was much larger, and we switched pretty smoothly. So I hope everything will be fine here, too.
v1 vs v2 is still a major headache for end users. I don't think it's ok for us to do that to our users again if we can help it. However, as you said, conceptually the calls are very similar so making an API controller that can be registered in the catalog as "image" should be fairly easy to do, no? Indeed, the interfaces are almost identical. And all the differences were made on purpose. For example, deactivating an image in Glance looks like *POST* /v2/images/{image_id}/actions/deactivate with empty body. At one time, Chris Dent advised us to avoid such decisions, and simply change the status of the artifact to 'deactivated' using *PATCH*, which we did. Despite this not being my preferred option, I definitely prefer it over the "compatible" client library. Flavio -- @flaper87 Flavio Percoco
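To put the two API styles from the exchange above side by side: the Glance call below matches the documented v2 action endpoint, while the Glare request shape is an assumption reconstructed from the description in the thread, not a documented payload.

```python
# Sketch contrasting the two deactivation styles discussed above.
# The Glance call is the documented v2 action endpoint; the Glare
# PATCH body is an illustrative guess, not a documented payload.

def glance_deactivate(image_id):
    """Glance v2: a dedicated action endpoint with an empty body."""
    return {
        "method": "POST",
        "path": "/v2/images/%s/actions/deactivate" % image_id,
        "body": None,
    }

def glare_deactivate(artifact_id):
    """Glare: a plain JSON-patch update of the status field (assumed shape)."""
    return {
        "method": "PATCH",
        "path": "/artifacts/images/%s" % artifact_id,
        "body": [{"op": "replace", "path": "/status", "value": "deactivated"}],
    }
```

The second style keeps the API uniform: every state change is just another field update, with no per-action endpoint to document and maintain.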
Re: [openstack-dev] [tc] Status update, July 7th
On 07/07/17 10:19 +0200, Thierry Carrez wrote: == Need for a TC meeting next Tuesday == I propose we have a meeting next week to discuss the next steps in establishing the vision. I feel like we should approve it soon, otherwise we'll get too close to the vision date (Spring 2019)... We also need to wrap up the goals (selecting the two and deferring the others). Who is up for discussing those items at our usual meeting slot time on Tuesday? I was out last Friday but here I am now. I'll be there. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc] Wiki (was: How to deal with confusion around "hosted projects")
On 03/07/17 11:04 -0400, Doug Hellmann wrote: Excerpts from Flavio Percoco's message of 2017-07-03 16:11:44 +0200: On 03/07/17 13:58 +0200, Thierry Carrez wrote: >Flavio Percoco wrote: >> Sometimes I wonder if we still need to maintain a Wiki. I guess some >> projects still use it but I wonder if the use they make of the Wiki could be moved >> somewhere else. >> >> For example, in the TC we use it for the Agenda but I think that could be moved >> to an etherpad. Things that should last forever should be documented somewhere >> (project repos, governance repo in the TC case) where we can actually monitor >> what goes in and easily clean up. >This is a complete tangent, but I'll bite :) We had a thorough >discussion about that last year, summarized at: > >http://lists.openstack.org/pipermail/openstack-dev/2016-June/096481.html > >TL,DR; was that while most authoritative content should (and has been >mostly) moved off the wiki, it's still useful as a cheap publication >platform for teams and workgroups, somewhere between a git repository >with a docs job and an etherpad. > >FWIW the job of migrating authoritative things off the wiki is still >on-going. As an example, Thingee is spearheading the effort to move the >"How to Contribute" page and other first pointers to a reference website >(see recent thread about that). I guess the short answer is that we hope one day we won't need it. I certainly do. What would happen if we made the wiki read-only? Would that break people's workflow? Do we know which teams modify the wiki most often and what it is they do there? Thanks for biting :) Flavio The docs team is looking for operators to take over the operators guide and move that content to the wiki (operators have said they don't want to deal with gerrit reviews). ++ This is the perfect answer. If there's a use-case, I think we're good.
Thanks for bringing this up, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc] Wiki (was: How to deal with confusion around "hosted projects")
On 03/07/17 13:58 +0200, Thierry Carrez wrote: Flavio Percoco wrote: Sometimes I wonder if we still need to maintain a Wiki. I guess some projects still use it but I wonder if the use they make of the Wiki could be moved somewhere else. For example, in the TC we use it for the Agenda but I think that could be moved to an etherpad. Things that should last forever should be documented somewhere (project repos, governance repo in the TC case) where we can actually monitor what goes in and easily clean up. This is a complete tangent, but I'll bite :) We had a thorough discussion about that last year, summarized at: http://lists.openstack.org/pipermail/openstack-dev/2016-June/096481.html TL,DR; was that while most authoritative content should (and has been mostly) moved off the wiki, it's still useful as a cheap publication platform for teams and workgroups, somewhere between a git repository with a docs job and an etherpad. FWIW the job of migrating authoritative things off the wiki is still on-going. As an example, Thingee is spearheading the effort to move the "How to Contribute" page and other first pointers to a reference website (see recent thread about that). I guess the short answer is that we hope one day we won't need it. I certainly do. What would happen if we made the wiki read-only? Would that break people's workflow? Do we know which teams modify the wiki most often and what it is they do there? Thanks for biting :) Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"
On 28/06/17 16:50 +0200, Thierry Carrez wrote: Removing the root cause would be a more radical move: stop offering hosting to non-OpenStack projects on OpenStack infrastructure altogether. We originally did that for a reason, though. The benefits of offering that service are: 1- it lets us set up code repositories and testing infrastructure before a project applies to be an official OpenStack project. 2- it lets us host things that are not openstack but which we work on (like abandoned Python libraries or GPL-licensed things) in a familiar environment 3- it spreads "the openstack way" (Gerrit, Zuul) beyond openstack itself I would argue that we could handle (1) and (2) within our current governance. For (1) we could have an "onboarding" project team that would help incoming projects through the initial steps of becoming an openstack project. The team would act as an umbrella team, an experimental area for projects that have some potential to become an OpenStack project one day. There would be a time limit -- if after one year(?) it looks like you won't become an openstack project after all, the onboarding team would clean you up. I actually think a bit more project mentoring would serve us better than our current hands-free approach. I'd say that we should do this regardless. I believe in mentoring and I see great value in onboarding projects. It's a job in itself and I think having a team of volunteers doing that would be awesome. One could argue that, given the current status of some of the teams, it'd not be wise to create a new one that, well, requires more volunteers. However, I think that it doesn't have to take a volunteer's full time and it'd be great for new projects. I've mentored new teams and I know a few other folks have done it, including Thierry. I wonder how many of the currently hosted teams feel they need mentoring.
Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"
On 28/06/17 16:50 -0500, Monty Taylor wrote: On 06/28/2017 09:50 AM, Thierry Carrez wrote: Removing the root cause would be a more radical move: stop offering hosting to non-OpenStack projects on OpenStack infrastructure altogether. We originally did that for a reason, though. The benefits of offering that service are: I disagree that this is removing the root cause. I believe this is reacting to a misunderstanding by hiding from it. I do not believe that doing this provides any value to us as a community. Even though we do not actually use github for development, we have implicitly accepted the false premise that github is a requirement. It is suggested that the existence of git repos in the openstack/ github org is confusing to people. And our reaction to that is to cut off access to our Open Source tools that we set up to collaboratively develop cloud software and tell people to go use the thing that people suggest is one of the causes of people being confused? * People are not 'confused' by what OpenStack is. Being "confused" is a passive-aggressive way of expressing that they DISAGREE with what OpenStack is. We still have _plenty_ of people who express that they think we should only be IaaS - so they're still going to be unhappy with cloudkitty, congress and karbor. Such people are under the misguided impression that kicking cloudkitty out of OpenStack will somehow cause Nova features to land quicker. I can't even begin to express all of the ways in which it's wrong. We aren't a top-down corporate structure and we can't 'reassign' humans - but even if we WERE - this flawed thinking runs afoul of the Mythical Man Month. I agree with Monty on this one. My main concern with the proposal is that we'll end up exactly where we are at now because we simply can't make everyone happy. There are folks confused by what Google says (mainly people not familiar with OpenStack) and there are folks confused because they disagree with what OpenStack *is*. 
As others proposed in this thread (or was it another thread?), I'd suggest we first finish working on the vision, and take a few more steps in this area before we make any other major change in the community. That way we can measure whether the changes we're discussing serve that vision. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"
On 29/06/17 10:33 -0500, Monty Taylor wrote: On 06/29/2017 10:00 AM, Jimmy McArthur wrote: Thierry Carrez <mailto:thie...@openstack.org> June 29, 2017 at 9:54 AM Unfortunately, those pages just exist -- those hundreds of projects projects might be inactive, they still have git repositories and wiki pages. We could more actively clean them up (and then yes, adjusting the corresponding Google juice), but (1) we don't really have any right to do so unless we get permission (which is hard to get from dead projects), and (2) that's a giganormous amount of maintenance work. It might be a giganormous amount of maintenance work, but it's the only way you're going to properly fix the Google problem. You can still keep the data archived, but I would change the link to something like /inactive-projects/meteos, again with the proper redirects. And again, updating the sitemap. As far as github, if the project is legitimately dead, the repo should be set to private. Just because something is a lot of work doesn't mean it's not worth doing :) When we retire a project, we land a commit to that project that removes all of the content and replaces it with a commit message that indicates that the project has been retired. We could probably add a flag to our projects.yaml file that is "retired" or something, that would cause the cgit mirror config to stop listing the project (the git repo would still exist and still be cloneable, it just wouldn't show up in the web listings) Since github for us is just a read-only mirror, I would not object to having that flag cause our automation to delete the mirror repo from github. Again, we would not be deleting any content, we would just be un-publishing it. I do not believe either of those would be much work- other than someone needing to go through and flag retired projects as such in projects.yaml - and I do not believe there are any downsides. There is still the wiki- which is still a wiki. 
Sometimes I wonder if we still need to maintain a Wiki. I guess some projects still use it but I wonder if the use they make of the Wiki could be moved somewhere else. For example, in the TC we use it for the Agenda but I think that could be moved to an etherpad. Things that should last forever should be documented somewhere (project repos, governance repo in the TC case) where we can actually monitor what goes in and easily clean up. That said, I agree, wiki is still a wiki. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc][all] Move away from meeting channels
On 28/06/17 14:02 +, Jeremy Stanley wrote: Anyway, I don't think we need to propose "moving away from meeting channels" but rather "allowing teams to not use meeting channels if it's inconvenient for them." I expect many (most even?) teams who continue to hold regularly scheduled meetings would do so in the official meeting channels even if given the opportunity and freedom to use a different channel. Yes, this is the goal. I chose a poor title for the thread. The idea is not to delete the existing meeting channels, but certainly not to add more. Allow folks to use their own channels instead. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [tc][all] Move away from meeting channels
On 27/06/17 18:22 +, Jeremy Stanley wrote: On 2017-06-26 15:27:21 +0200 (+0200), Thierry Carrez wrote: Flavio Percoco wrote: > [...] > Not being able to easily ping someone during a meeting is kind > of a bummer but I'd argue that assuming someone is in the > meeting channel and available at all times is a mistake to begin > with. I think people can be pinged by PM or on #openstack-dev, it's just a habit to take. It's just that there are cases where people passively mention you, without going up to a formal ping -- I usually go back later to that person to answer the issue they informally raised. We'll lose that, but it's minor enough. [...] By lurking in official meeting channels I'm often able to jump straight into a discussion when someone asks me a question in a meeting I wouldn't normally attend but am around during. I can see the discussion instantly up to that point as opposed to inconveniencing the attendees by asking to have everything repeated for me after /join'ing. The channel logs on eavesdrop.o.o aren't really a substitute there because the batched flushing in the bot delays Web-based logs by some number of minutes. Would recommending that folks ping you (and others) on #openstack-dev be enough? You could then momentarily join the channel and they could bring you up to speed, although it is not as convenient as just being there. I'd also like to encourage folks to reach out on the ML if people don't happen to be around on IRC. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project
On 26/06/17 17:35 +0300, Mikhail Fedosin wrote: 2. We would like to become an official OpenStack project, and in general we follow all the necessary rules and recommendations, from weekly IRC meetings and our own channel, to the Apache license and Keystone support. For this reason, I want to file an application and hear objections and recommendations on this matter. Note that IRC meetings are not a requirement anymore: https://review.openstack.org/#/c/462077/ As far as the rest of the process goes, it looks like you are all good to go. I'd recommend you submit the request to the governance repo and let the discussion begin: https://governance.openstack.org/tc/reference/new-projects-requirements.html Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services
On 15/06/17 13:06 -0400, Emilien Macchi wrote: I missed the [tripleo] tag. On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi wrote: If you haven't followed the "Configuration management with etcd / confd" thread [1], Doug found out that using confd to generate configuration files wouldn't work for the Cinder case, where we don't know in advance of the deployment what settings to tell confd to look at. We are still looking for a generic way to generate *.conf files for OpenStack, one that would be usable by deployment tools and operators. Right now, Doug and I are investigating some tooling that would be useful to achieve this goal. Doug has prototyped an Ansible role that would generate configuration files by consuming two things: * The configuration schema, generated by Ben's work on the Machine Readable Sample Config. $ oslo-config-generator --namespace cinder --format yaml > cinder-schema.yaml It also needs: https://review.openstack.org/#/c/474306/ to generate some extra data not included in the original version. * Parameter values provided in config_data directly in the playbook: config_data: DEFAULT: transport_url: rabbit://user:password@hostname verbose: true There are two options, disabled by default, that would be useful for production environments: * Set to true to always show all configuration values: config_show_defaults * Set to true to show the help text: config_show_help: true The Ansible module is available on github: https://github.com/dhellmann/oslo-config-ansible To try this out, just run: $ ansible-playbook ./playbook.yml You can quickly see the output of cinder.conf: https://clbin.com/HmS58 What are the next steps: * Getting feedback from deployment tools and operators on the concept of this module. Maybe this module could replace what is done by Kolla with merge_configs and by OpenStack Ansible with config_template.
* On the TripleO side, we would like to see if this module could replace the Puppet OpenStack modules that are now mostly used for generating configuration files for containers. A transition path would be having Heat generate Ansible vars files and give them to this module. We could integrate the playbook into a new task in the composable services, something like "os_gen_config_tasks", a bit like we already have for upgrade tasks, also driven by Ansible. * Another option, similar to what Doug did, is to write a standalone tool that would generate configuration, and for Ansible users we would write a new module to use this tool. Example: Step 1. oslo-config-generator --namespace cinder --format yaml > cinder-schema.yaml (note this tool already exists) Step 2. Create config_data.yaml in a specific format with parameter values for what we want to configure (note this format doesn't exist yet, but look at what Doug did in the role; we could use the same kind of schema). Step 3. oslo-gen-config -i config_data.yaml -s schema.yaml > cinder.conf (note this tool doesn't exist yet) For Ansible users, we would write an Ansible module that would take two files as input: the schema and the data. The module would just run the tool provided by oslo.config. Example: - name: Generate cinder.conf oslo-gen-config: schema=cinder-schema.yaml data=config_data.yaml I finally caught up with this thread and got the time to get back to y'all. Sorry. I like the roles version more because it's flexible and easier to distribute. We can upload it to galaxy, package it, etc. Distributing ansible modules is a bit painful right now and you end up adding them as roles in the playbook for the modules to be loaded. I'm about to work on a prototype and I'll use option #1, and perhaps we can discuss option #2 further.
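Whichever packaging wins (role vs. standalone tool), the core transformation described above is the same: overlay operator-supplied values on schema defaults and render an oslo-style .conf file. A minimal sketch of that step, using deliberately simplified schema/data shapes rather than the real machine-readable sample-config format:

```python
import configparser
import io

def render_conf(schema, data):
    """Merge operator-supplied values over schema defaults and render INI.

    `schema` maps section -> {option: default} and `data` maps
    section -> {option: value}; both are simplified stand-ins for the
    real oslo-config-generator YAML output.
    """
    conf = configparser.ConfigParser()
    # Start from the schema defaults...
    for section, options in schema.items():
        conf[section] = {key: str(value) for key, value in options.items()}
    # ...then overlay the operator-provided values.
    for section, options in data.items():
        if section not in conf:
            conf[section] = {}
        for key, value in options.items():
            conf[section][key] = str(value)
    out = io.StringIO()
    conf.write(out)
    return out.getvalue()
```

Feeding it the config_data example from the email above would yield a [DEFAULT] section containing transport_url and verbose, with any schema options the operator didn't set left at their defaults.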
Flavio -- @flaper87 Flavio Percoco
[openstack-dev] [tc][all] Move away from meeting channels
Hey Y'all, Not so long ago there was a discussion about how we manage our meeting channels and whether there's a need for more or fewer of them[0]. Good points were made in that thread in favor of keeping the existing model but some things have changed, hence this new thread. More teams - including the Technical Committee[1] - have started to adopt office hours as a way to provide support and have synchronous discussions. Some of these teams have also discontinued their IRC meetings or moved to an ad-hoc meetings model. As these changes start popping up in the community, we need a good way to track the office hours for each team and to allow teams to meet at the time they prefer. Before we go deep into the discussion again, I'd like to summarize what has been discussed in the past (thanks ttx for the summary): The main objections to just letting people meet anywhere are: - how do we ensure the channel is logged/accessible - we won't catch random mentions of our name as easily anymore - might create a pile-up of meetings at peak times rather than force them to spread around - increases silo effect Main benefits being: - No more scheduling nightmare - More flexibility in listing things in the calendar Some of the problems above can be solved programmatically - cross-check on eavesdrop to make sure logging is enabled, for example. The problems that I'm more worried about are the social ones, because they'll require a change in the way we interact with one another. Not being able to easily ping someone during a meeting is kind of a bummer but I'd argue that assuming someone is in the meeting channel and available at all times is a mistake to begin with. There will be conflicts on meeting times. There will be slots that are used by several teams because those slots are convenient for cross-timezone interaction. We can check this and highlight the various conflicts but I'd argue we shouldn't. We already have some overlaps in the current structure.
The social drawbacks related to this change can be overcome by interacting more on the mailing list. Ideally, this change should help raise awareness of the distributed nature of our community and encourage folks to hold more office hours and fewer meetings and, more importantly, to favor the mailing list over IRC conversations for *some* discussions. So, should we let teams host IRC meetings in their own channels? Thoughts? Flavio [0] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108360.html [1] https://governance.openstack.org/tc/#office-hours -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
On 21/06/17 16:27 -0400, Sean Dague wrote: On 06/21/2017 02:52 PM, Lauren Sell wrote: Two things we should address: 1) Make it more clear which projects are "officially" part of OpenStack. It's possible to find that information, but it's not obvious. I am one of the people who laments the demise of stackforge…it was very clear that stackforge projects were not official, but part of the OpenStack ecosystem. I wish it could be resurrected, but I know that's impractical. To make this actionable...Github is just a mirror of our repositories, but for better or worse it's the way most people in the world explore software. If you look at OpenStack on Github now, it's impossible to tell which projects are official. Maybe we could help by better curating the Github projects (pinning some of the top projects, using the new topics feature to put tags like openstack-official or openstack-unofficial, coming up with more standard descriptions or naming, etc.). Same goes for our repos…if there's a way we could differentiate between official and unofficial projects on this page it would be really useful: https://git.openstack.org/cgit/openstack/ I think even if it was only solvable on github, and not cgit, it would help a lot. The idea of using github project tags and pinning suggested by Lauren seems great to me. If we replicated the pinning on github.com/openstack to "popular projects" here - https://www.openstack.org/software/, and then even just start with the tags as defined in governance - https://governance.openstack.org/tc/reference/tags/index.html it would go a long way. We can also standardize the README files in the projects and use the badges that were created already. These badges are automatically generated for every project. I think there's a way we could make this work in cgit too, so we won't need something that is github-specific. These badges can be used for documentation too. 
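As a sketch of how that README standardization could be automated: something like the snippet below could emit the badge block for each repository (the exact badge URL layout under governance.openstack.org is an assumption here, so treat the paths as illustrative):

```python
def badge_rst(repo):
    """Return an RST snippet embedding a project's team-and-repository-tags
    badge, linking back to the governance tag reference.

    The badges/<name>.svg URL layout is assumed for illustration.
    """
    name = repo.split("/")[-1]
    return (
        ".. image:: https://governance.openstack.org/tc/badges/{0}.svg\n"
        "    :target: https://governance.openstack.org/tc/"
        "reference/tags/index.html".format(name)
    )
```

Run over the repo list, this would give every README the same opening badge block, regardless of whether it's viewed on GitHub or cgit-rendered docs.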
Here's Glance's example: https://github.com/openstack/glance#team-and-repository-tags Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
On 21/06/17 06:18 +, joehuang wrote: hello, Flavio, Hi :D This thread is to discuss moving away from the "big tent" term, not removing some project. Removing a project will make this flavor disappear from the ice-cream counter, but this thread, it's to use another concept to describe projects under openstack project governance. If we don't want to use "big tent" for those projects staying in the counter, I hope all projects could be treated as flat, just like different ice-cream flavors are flat in the same counter, and kids can make choices by themselves. Even Nova may be only "core" to some cloud operators, but not always for all cloud operators, for example, those who only run an object storage service; hyper.sh also doesn't use Nova; some day some cloud operators may only use Zun or K8S instead for computing. It should not be an issue to the OpenStack community. I think you misunderstood my message. I'm not talking about removing projects, I'm talking about the staging of these projects to join the "Big tent" - regardless of what we call it. The distinction *is* important and we ought to find a way to preserve it and communicate it so that there's the least amount of confusion possible. OpenStack should be "OPEN" stack for infrastructure, just like kids can choose how many balls of ice-cream, cloud operators can make the decision to choose which projects to use to manage their infrastructure. You keep mentioning "OPEN stack" as if we weren't being open (enough?) and I think I'm failing to see why you think that. Could you please elaborate more? What you're describing seems to be the current status. 
Flavio Best Regards Chaoyi Huang (joehuang) From: Flavio Percoco [fla...@redhat.com] Sent: 20 June 2017 17:44 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all][tc] Moving away from "bigtent" terminology On 20/06/17 00:33 +, joehuang wrote: I think openstack community provides a flat project market place for infrastructure is good enough: all projects are just some "goods" in the market place, let the cloud operators to select projects from the project market place for his own infrastructure. We don't have to mark a project a core project or not, only need to tag attribute of a project, for example how mature it is, how many "like" they have, what the cloud operator said for the project. etc. All flat, just let people make decision by themselves, they are not idiot, they have wisdom on building infrastructure. Not all people need a package: you bought a package of ice-cream, but not all you will like it, If they want package, distribution provider can help them to define and customize a package, if you want customization, you will decide which ball of cream you want, isn't it? The flavors you see in an ice-cream shop counter are not there by accident. Those flavors have gone through a creation process, they have been tested and they have also survived over the years. Some flavors are removed with time and some others stay there forever. Unfortunately, tagging those flavors won't cut it, which is why you don't see tags in their labels when you go to an ice-cream shop. Some tags are implied, other tags are inferred and other tags are subjective. Experimenting with new flavors doesn't happen overnight in some person's bedroom. The new flavors are tested using the *same* infrastructure as the other flavors and once they reach a level of maturity, they are exposed in the counter so that customers will be able to consume them. 
Ultimately, experimentation is part of the ice-cream shop's mission and it requires time, effort and resources, but not all experiments end well. In the end, though, what really matters is that all these flavors serve the same mission and that's why they are sold at the ice-cream shop, that's why they are exposed in the counter. Customers of the ice-cream shop know they can trust what's in the counter. They know the exposed flavors serve their needs at a high level and they can now focus on their specific needs. So, do you really think it's just a set of flavors and it doesn't really matter how those flavors got there? Flavio -- @flaper87 Flavio Percoco __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [glance] Stepping down from core
On 20/06/17 09:31 +1200, feilong wrote: Hi there, I've been a Glance core since 2013 and been involved in the Glance community even longer, so I care deeply about Glance. My situation right now is such that I cannot devote sufficient time to Glance, and, as you've seen elsewhere on the mailing list, Glance needs reviewers. I'm afraid that keeping my name on the core list is giving people a false impression of how dire the current Glance personnel situation is. So after discussing with the Glance PTL, I'd like to offer my resignation as a member of the Glance core reviewer team. Thank you for your understanding. Thanks for being honest and open about the situation. I agree with you that this is the right move. I'd like to thank you for all these years of service and I think it goes without saying that you're welcome back in the team anytime you want. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
On 20/06/17 00:33 +, joehuang wrote: I think openstack community provides a flat project market place for infrastructure is good enough: all projects are just some "goods" in the market place, let the cloud operators to select projects from the project market place for his own infrastructure. We don't have to mark a project a core project or not, only need to tag attribute of a project, for example how mature it is, how many "like" they have, what the cloud operator said for the project. etc. All flat, just let people make decision by themselves, they are not idiot, they have wisdom on building infrastructure. Not all people need a package: you bought a package of ice-cream, but not all you will like it, If they want package, distribution provider can help them to define and customize a package, if you want customization, you will decide which ball of cream you want, isn't it? The flavors you see in an ice-cream shop counter are not there by accident. Those flavors have gone through a creation process, they have been tested and they have also survived over the years. Some flavors are removed with time and some others stay there forever. Unfortunately, tagging those flavors won't cut it, which is why you don't see tags in their labels when you go to an ice-cream shop. Some tags are implied, other tags are inferred and other tags are subjective. Experimenting with new flavors doesn't happen overnight in some person's bedroom. The new flavors are tested using the *same* infrastructure as the other flavors and once they reach a level of maturity, they are exposed in the counter so that customers will be able to consume them. Ultimately, experimentation is part of the ice-cream shop's mission and it requires time, effort and resources, but not all experiments end well. In the end, though, what really matters is that all these flavors serve the same mission and that's why they are sold at the ice-cream shop, that's why they are exposed in the counter. 
Customers of the ice-cream shop know they can trust what's in the counter. They know the exposed flavors serve their needs at a high level and they can now focus on their specific needs. So, do you really think it's just a set of flavors and it doesn't really matter how those flavors got there? Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 19/06/17 11:33 -0500, Sean McGinnis wrote: [snip] Who else would like to volunteer to help? The help needed is not so much on fixing bugs but rather reviewing the patches that fix bugs and help moving the release forward. I hope the community will grow soonish so that we can go back to the regular core team. Flavio [0] https://review.openstack.org/#/c/474604/ -- @flaper87 Flavio Percoco I've been trying to spend some time doing reviews there. I will continue to do so as long as it is needed/useful. Awesome, thanks a bunch! I'll propose adding you as well. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 13/06/17 09:50 -0500, Flavio Percoco wrote: On 13/06/17 10:49 +0200, Thierry Carrez wrote: Quick attempt at a summary of the discussion so far, with my questions: * Short-term, Glance needs help to stay afloat - Sean volunteered to help - but glance needs to add core reviewers to get stuff flowing -> could the VM/BM workgroup also help ? Any progress there ? +1 Given the current situation, I think we'll take any help we can get. I'd be happy to add Sean and a couple of other volunteers to the core team until the end of the cycle. When Pike is out, we can do a status check and see how to proceed. I've proposed a patch to add Glance to the list of top-5 help wanted[0]. Please review and let me know what y'all think. In addition to this, I'd like for the Glance team to seriously consider the possibility of having a provisional, extra core team to get through the Pike cycle. I'm ok with adding people to the general core team and describing in an email thread who these folks are, for how long we think we'll need this, etc. Who else would like to volunteer to help? The help needed is not so much on fixing bugs but rather reviewing the patches that fix bugs and helping move the release forward. I hope the community will grow soonish so that we can go back to the regular core team. Flavio [0] https://review.openstack.org/#/c/474604/ -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
On 16/06/17 04:32 +, gordon chung wrote: On 15/06/17 06:28 PM, Doug Hellmann wrote: i see, so this is less an existential question of 'what is openstack' and more 'how to differentiate governance projects from a random repo created last weekend'. this might have been just me, but big tent was exactly 'big tent == governance' so when i read 'moving away from "big tent"' i think 'what is this *new* thing we're moving to and if we're redefining this new thing, what for?'. it seems this is not the case. No. We're trying to pick new words, because there continues to be confusion about the old words. my bad, apologies for taking the scenic route. regardless of new words, we failed to properly describe what the big tent was the first go to some people, how do we make sure they're not confused this time? and how do we not confuse the ones that did understand the first time? for me personally, the first go, the messaging was kind of muddled. i remember 'level playing field' being used frequently. not sure if that's still one of the reasons for ? sorry, i probably wasn't clear, i simply noticed that it was a corporate sponsor that was misusing the 'big tent' name so was just thinking we could easily tell them, that's not what it means. wasn't suggesting anything else by sponsor comment. You'd think it would be that easy. A surprising number of folks within the community don't really understand the old naming either, though (see the rest of this thread for examples). *sigh* so this is why we can't have nice things :p as an aside, in telemetry project, we did something somewhat similar when we renamed/rebranded to telemetry from ceilometer. we wrote several notes to the ML, had a few blog posts, fixed the docs, mentioned the new project structure in our presentations... 2 years on, we still occasionally get asked "what's ceilometer", "is xyz not ceilometer?", or "so ceilometer is deprecated?". 
to a certain extent i think we'll have to be prepared to do some hand holding and say "hey, that's not what the big tent is." Is it clear to these people, once you explain the difference, what telemetry is? I would assume it is, and this is one of the problems we're trying to solve. Even after explaining the difference, it's sometimes hard for people to grasp the concept because the naming that was used is poor and, to be honest, it feels like it came from an analogy without properly considering the impact it would have on the community. Over-communicating won't get rid of surprises, but sometimes the problem is in the message and not in its receivers. We must stay honest with ourselves. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project
On 15/06/17 10:48 +0200, Thierry Carrez wrote: Hi everyone, Part of reducing OpenStack perceived complexity is to cull projects that have not delivered on their initial promises. Those are always difficult discussions, but we need to have them. In this email I'd like to discuss whether we should no longer consider Fuel an official OpenStack project, and turn it into a hosted (unofficial) project. Fuel originated at Mirantis as their OpenStack installer. It was proposed as an official OpenStack project in July 2015 and approved in November 2015. The promise at that time was that making it official would drive other organizations to participate in its development and turn it into the one generic OpenStack installer that everyone wanted. Fuel was not a small endeavor: in Mitaka and Newton it represented more commits than Nova. The Fuel team fully embraced open collaboration, but failed to attract other organizations. Mitaka and Newton were still 96% the work of Mirantis. In my view, while deployment/packaging tools sit at the periphery of the "OpenStack" map, they make sense as official OpenStack teams if they create an open collaboration playing field and attract multiple organizations. Otherwise they are just another opinionated install tool that happens to be blessed with an "official" label. Since October 2016, Fuel's activity has dropped, following the gradual disengagement of its main sponsor. Comparing activity in the 5 first months of the year, there was a 68% drop between 2016 and 2017, the largest of any official OpenStack project. The Fuel team hasn't met on IRC for the last 3 months. Activity dropped from ~990 commits/month (Apr 2016, Aug 2016) to 52 commits in April 2017 and 25 commits in May 2017. And there are unsolved issues around licensing that have been lingering for the last 6 months. 
I think that, despite the efforts of the Fuel team, Fuel did not become what we hoped when we made it official: a universal installer that would be used across the board. It was worth a try, I'm happy that we tried, but I think it's time to stop considering it a part of "OpenStack" proper and make it a hosted project. It can of course continue its existence as an unofficial project hosted on OpenStack infrastructure. Thoughts ? +1 to change Fuel* status Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
On 15/06/17 14:09 +, Jeremy Stanley wrote: On 2017-06-15 14:57:20 +0200 (+0200), Thierry Carrez wrote: [...] An alternative would be to give "the OpenStack project infrastructure" some kind of a brand name (say, "Opium", for OpenStack project infrastructure ultimate madness) and then call the hosted projects "Opium projects". Rename the Infra team to Opium team, and voilà! Not to be cynical, but it sounds like a return to StackForge under a different name. The thing I like about _not_ having a name for that is it's not an either/or situation. There are OpenStack projects under official governance, and everything else in existence (some of which we might host, other stuff is elsewhere on the Internet at large). Keeping the discussion focused on OpenStack is key for me. I am not personally keen on the idea of branding the Infrastructure team's work as an unrelated hosting service and feel like we only recently managed to get away from that paradigm when we ditched the StackForge branding as a euphemism for projects that weren't under OpenStack governance. +1 I literally just sent an email asking whether we want to make this separation more evident. The fact that we're picking these names makes me think it's important for us to have such separation so that we can be clear on what the releases will bring, among other things. If we're going to have such separation, then I'd rather make it evident since it's confusing for people to understand what the difference between big-tent and official project is and the name change won't help much with this problem, I reckon. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology
On 15/06/17 11:15 +0200, Thierry Carrez wrote: I'd like to propose that we introduce a new concept: "OpenStack-Hosted projects". There would be "OpenStack projects" on one side, and "Projects hosted on OpenStack infrastructure" on the other side (all still under the openstack/ git repo prefix). We'll stop saying "official OpenStack project" and "unofficial OpenStack project". The only "OpenStack projects" will be the official ones. We'll chase down the last mentions of "big tent" in documentation and remove it from our vocabulary. I think this new wording (replacing what was previously Stackforge, replacing what was previously called "unofficial OpenStack projects") will bring some clarity as to what is OpenStack and what is beyond it. The wording sounds good to me. I've found that it's a bit unclear to folks who are not entirely familiar with the terminology which projects are part of OpenStack (regardless of the terminology used). Stackforge made this very clear, so I wonder if we should find a better way to clarify which projects are Hosted and which are part of OpenStack. So far we have badges, which were added to some READMEs, but they only show up on GitHub, so badges might not be clear enough. To be honest, I don't have an idea that I'm happy with, but here are a couple:

* Have an autogenerated (?) doc with the list of hosted services
* Update badges to reflect the terminology change
* Have a different documentation theme for hosted projects (?)

I'm not super happy with these ideas, but I'm throwing them out there hoping that we can brainstorm a bit on how we can do this and whether this is something we really want/need to do.

Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
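For the autogenerated-doc idea, the official list could plausibly be derived by flattening the governance repository's projects.yaml and diffing it against everything hosted; anything hosted but absent would be documented as OpenStack-Hosted. A minimal sketch, assuming that file parses into a rough team → deliverables → repos shape (the structure here is an assumption for illustration, not the exact schema):

```python
def official_repos(projects):
    """Flatten a parsed projects.yaml-style mapping into a sorted list
    of official repositories.

    Expected (assumed) input shape:
      {team: {"deliverables": {name: {"repos": [repo, ...]}}}}
    """
    repos = set()
    for team in projects.values():
        for deliverable in team.get("deliverables", {}).values():
            repos.update(deliverable.get("repos", []))
    return sorted(repos)
```

The same flattened list could also feed the badge generation, so the doc and the badges never drift apart.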
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 13/06/17 10:49 +0200, Thierry Carrez wrote: Quick attempt at a summary of the discussion so far, with my questions: * Short-term, Glance needs help to stay afloat - Sean volunteered to help - but glance needs to add core reviewers to get stuff flowing -> could the VM/BM workgroup also help ? Any progress there ? +1 Given the current situation, I think we'll take any help we can get. I'd be happy to add Sean and a couple of other volunteers to the core team until the end of the cycle. When Pike is out, we can do a status check and see how to proceed. * Long-term, is Glance still our best bet for the future ? - The code base is way more complicated than it should be - Difficult to work on necessary refactoring with current resources - Glare is a sane base, but achieves more than just image catalog - Disk images may be special enough to require their own service -> Elaborate on "optimizing for their specialness is really important" I'd like to start working on a more formal proposal for this. The email threads have covered some interesting points and there have been a good number of sessions at various summits on this same topic. There could be another session in Denver, but I'd like to see a more formal document, etherpad, whatever, that explains the different features that would make the migration worth it and a set of different paths we could explore to make this migration happen. With this info, I think we will be able to make a thoughtful decision. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On Mon, Jun 12, 2017, 19:47 Mikhail Fedosin wrote: > On Tue, Jun 13, 2017 at 12:01 AM, Flavio Percoco > wrote: > >> On 12/06/17 23:20 +0300, Mikhail Fedosin wrote: >> >>> My opinion is that Glance stagnates and it's really hard to implement new >>> features there. In two years, only one major improvement was developed >>> (Image Import Refactoring), and no one has tested it in production yet. >>> And >>> this is in the heyday of the community, as you said! >>> >> >> You're skipping 2 important things here: >> >> The first one is that focusing on the image import refactor (IIR) was a >> community choice. It's fixing a bigger problem that requires more focus. >> The >> design of the feature took a couple of cycles too, not the >> implementation. The >> second thing is that the slow pace may also be caused by the lack of >> contributors. > > > It's exactly what I'm talking about - implementing medium-size feature > (IIR is about 600 lines of code [1][2]) took 1 year of discussions and 1 > year for implementation of 5 full-time developers. And most importantly, it > took all the community attention. What if we need to implement more serious > features? How much time will it take, given that there are not so many > developers left? > What I was referring to is that this is not the normal case. The IIR was a special case, which doesn't mean implementing features is easy, as you mentioned. On the other hand OpenStack users have been requesting for new features for >>> a long time: I'm talking about multistore support, versioning of images, >>> image slicing (like in docker), validation and conversion of uploading >>> data >>> and so on. And I can say that it is impossible to implement them without >>> breaking Glance. But all this stuff is already done in Glare (multistore >>> support is implemented partially, because modifications of glance_store >>> are >>> required). And if we switch OpenStack to Glare users will get these >>> features out of the box. 
>>> >> Some of these features could be implemented in Glance. As you mentioned, >> the >> code base is over-engineered but it could be simplified. > > > Everything is possible, I know that. But at what cost? > Exactly! This is what I'm asking you to help me out with. I'm trying to have a constructive discussion on the cost of this and find a short-term solution and then a long-term one. I don't think the current problem is caused by Glance's lack of "exciting" >> features and I certainly don't think replacing it with Glare would be of >> any >> help now. It may be something we want to think about in the future (and >> this is >> not the first time I say this) but what you're proposing will be an >> expensive >> distraction from the real problem. > > > And for the very last time - I don't suggest to replace Glance now or even > in a year. At the moment, an email with the title "Glance needs help, it's > getting critical" is enough. > I call to think about the distant future, probably two years or near that. > What can prevent Flavio from writing of such emails in T cycle? Bringing > people from Nova and Cinder part-time will not work, because, as we > discussed above, even medium-size feature requires years of dedicated work, > and having their +1 on typo fixes... what's the benefit of that? > Fully agree here. What I think we need is a short-term and a long-term solution. Would you agree with this? I mentioned in my previous email that I've never been opposed to a future transition away from Glance as soon as it happens naturally. I understand that you're not proposing to replace Glance now. What I was trying to understand is why you thought migrating away from Glance in the future would help us now. And for the very last time - I'm here not to promote Glare. As you know, I > will soon be involved in this project extremely mediately. I'm here to > decide what to do with Glance next. 
In the original email Flavio said "So, > before things get even worse, I'd like us to brainstorm a bit on what > solutions/options we have now". I described in detail my personal feelings > about the current situation in Glance for the members of TC, who are > unfamiliar with the project. And also I suggested one possible solution > with Glare, maybe not the best one, but I haven't heard any other proposals. > I know you're not promoting Glare and I hope my emails are not coming through as accusations of any kind. I'm playing th
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On Mon, Jun 12, 2017, 19:25 Mike Perez wrote: > On 16:01 Jun 12, Flavio Percoco wrote: > > On 12/06/17 23:20 +0300, Mikhail Fedosin wrote: > > > My opinion is that Glance stagnates and it's really hard to implement > new > > > features there. In two years, only one major improvement was developed > > > (Image Import Refactoring), and no one has tested it in production > yet. And > > > this is in the heyday of the community, as you said! > > > > You're skipping 2 important things here: > > > > The first one is that focusing on the image import refactor (IIR) was a > > community choice. It's fixing a bigger problem that requires more focus. > The > > design of the feature took a couple of cycles too, not the > implementation. The > > second thing is that the slow pace may also be caused by the lack of > > contributors. > > +1 image import refactor work. That's great that the image import refactor > work > is done! > > Mikhail, > > I'm pretty thorough on reading this list for the dev digest, so even I > missed > that news. Which release was that done in? Are people not using it in > production right away because of having to upgrade to a new release? > It's actually coming out with Pike. Patches landed last week. Flavio __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 12/06/17 23:20 +0300, Mikhail Fedosin wrote: My opinion is that Glance stagnates and it's really hard to implement new features there. In two years, only one major improvement was developed (Image Import Refactoring), and no one has tested it in production yet. And this is in the heyday of the community, as you said! You're skipping 2 important things here: The first one is that focusing on the image import refactor (IIR) was a community choice. It's fixing a bigger problem that requires more focus. The design of the feature took a couple of cycles too, not the implementation. The second thing is that the slow pace may also be caused by the lack of contributors. On the other hand OpenStack users have been requesting for new features for a long time: I'm talking about multistore support, versioning of images, image slicing (like in docker), validation and conversion of uploading data and so on. And I can say that it is impossible to implement them without breaking Glance. But all this stuff is already done in Glare (multistore support is implemented partially, because modifications of glance_store are required). And if we switch OpenStack to Glare users will get these features out of the box. Some of these features could be implemented in Glance. As you mentioned, the code base is over-engineered but it could be simplified. Then, Glance works with images only, but Glare supports various types of data, like heat and tosca templates. Next week we will add Secrets artifact type to store private data, and Mistral workflows. I mean - we'll have unified catalog of all cloud data with the possibility to combine them in metastructures, when artifact of one type depends on the other. Glance working only with images is a design choice and I don't think that's something bad. I also don't think Glare's support for other artifacts is bad. Just different choices. I will repeat it once again, in order to be understood as much as possible. 
It takes too much time to develop new features and fix old bugs (years to be exact). If we continue in the same spirit, it certainly will not increase the joy of OpenStack users and they will look for other solutions that meet their desires. Mike, I understand that you think that the broader set of features that Glare provides would be better for users, which is something I disagree with a bit. More features don't make a service better. What I'm failing to see, though, is why you believe that replacing Glance with Glare will solve the current problem. I don't think the current problem is caused by Glance's lack of "exciting" features and I certainly don't think replacing it with Glare would be of any help now. It may be something we want to think about in the future (and this is not the first time I say this) but what you're proposing will be an expensive distraction from the real problem. Flavio -- @flaper87 Flavio Percoco signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 12/06/17 15:37 -0400, Sean Dague wrote: On 06/12/2017 03:20 PM, Flavio Percoco wrote: Could you please elaborate more on why you think switching code bases is going to solve the current problem? In your email you talked about Glance's over-engineered code as being the thing driving people away and while I disagree with that statement, I'm wondering whether you really think that's the motivation or there's something else. Let's not talk about proxy API's or ways we would migrate users. I'd like to understand why *you* (or others) might think that a complete change of projects is a good solution to this problem. Ultimately, I believe Glance, in addition to not being the "sexiest" project in OpenStack, is taking the hit of the recent lay-offs, which it kinda managed to avoid last year. As someone from outside the Glance team, I'd really like to avoid the artifacts path. I feel like 2 years ago there was a promise that if glance headed in that direction it would bring in new people, and everything would be great. But, it didn't bring in folks solving the class of issues that current glance users are having. 80+ GB disk images could be classified as a special case of Artifacts, but it turns out optimizing for their specialness is really important to a well-functioning cloud. Glance might not be the most exciting project, but what seems to be asked for is help on the existing stuff. I'd rather focus there. Just want to make clear that I'm *not* proposing going down any artifacts path. I actually disagree with this idea but I do want to understand why other folks think this is going to solve the issue. There might be some insights there that we can learn from and use to improve Glance (or not). Glance can be very exciting if one focuses on the interesting bits and it's an *AWESOME* place where newcomers can start contributing, new developers can learn and practice, etc. That said, I believe that code doesn't have to be challenging to be exciting. 
There's also excitement in the simple but interesting things. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 12/06/17 16:56 +0300, Mikhail Fedosin wrote: Hello! Flavio raised a very difficult and important question, and I think that we, as community members, should decide what to do with Glance next. Hi Mike, I will try to state my subjective opinion. I was involved in the Glance project for almost three years and studied it fairly thoroughly. I believe that the main problem is that the project was designed extremely poorly. Glance does not have many tasks to solve, but nevertheless, there are a lot of Java design patterns used (factory of factories, visitors, proxy and other things that are unnecessary in this case). All this leads to absolutely sad consequences: in order to add an image tag, over 180 objects of different classes are created, and the code execution passes through more than 25 locations, with callbacks invoked 3 times along the way. So I can say that the code base is artificially over-complicated and incredibly inflated. The next problem is that over the years the code has accumulated a number of workarounds, which make it difficult to implement new changes - any change leads to something breaking down somewhere else. In the long run, we get a lot of pain associated with race conditions, hard-to-reproduce heisenbugs and other horrors of a programmer's life. It is difficult to talk about attracting new developers, because developing code in such conditions is mentally exhausting. I don't disagree on this. The code base *is* over-engineered in many areas. However, I don't think this is a good reason to just throw the entire project away. With enough time and contributions, the code could be refactored. We can continue to deny the obvious, saying that Glance simply needs people and everything will be wonderful. But unfortunately this is not so - we should admit that it is simply not profitable to engage in further development. 
I suggest thinking about moving the current code base into a support mode and starting to develop an alternative (which I have been doing for the past year and a half). If you are allergic to the word "artifacts", do not read the following paragraph: We are actively developing the Glare project, which offers a universal catalog of various binary data along with its metadata - at the moment the catalog supports the storage of virtual machine images and has feature parity with Glance. The service is used in production by Nokia, and it has been thoroughly tested in various configurations. Next week we plan to release the first stable version and begin the integration with various OpenStack projects: Mistral and Vitrage in the first place. As a solution, I can propose to implement an additional API to Glare, which would correspond to OpenStack Image API v2, and verify that OpenStack is able to work on top of it. After that, leave Glance at rest and start developing Glare as a universal catalog of binary data for OpenStack. Could you please elaborate more on why you think switching code bases is going to solve the current problem? In your email you talked about Glance's over-engineered code as being the thing driving people away and while I disagree with that statement, I'm wondering whether you really think that's the motivation or there's something else. Let's not talk about proxy API's or ways we would migrate users. I'd like to understand why *you* (or others) might think that a complete change of projects is a good solution to this problem. Ultimately, I believe Glance, in addition to not being the "sexiest" project in OpenStack, is taking the hit of the recent lay-offs, which it kinda managed to avoid last year. 
Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
On 12/06/17 09:13 -0400, Sean Dague wrote: On 06/09/2017 01:07 PM, Flavio Percoco wrote: Would it be possible to get help w/ reviews from folks from teams like nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to think about teams that may be familiar with the Glance code/api already. I'm happy to help here, I just went through and poked at a few things. It is going to be tough to make meaningful contributions there without approve authority, especially given the normal trust-building exercise for core teams takes 3+ months. It might be useful to figure out if there are a set of folks already in the community that the existing core team would be happy to provisionally promote to help work through the current patch backlog and get things flowing. I think this is fine. I'd be happy to add you and a couple of other folks that have some time to spend on this to the core team. This would be until the core team is healthier. Brian has been sending emails with focus reviews/topics every week and I think that would be useful especially for folks joining the team provisionally. That sounds like a better way to invest time. Not sure whether Brian will have time to keep doing this, perhaps Erno can take this task on? Erno? Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd
On 12/06/17 10:07 +0200, Bogdan Dobrelya wrote: On 09.06.2017 18:51, Flavio Percoco wrote: On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann wrote: Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +: > Unless I'm missing something, to use confd with an OpenStack deployment on > k8s, we'll have to do something like this: > > * Deploy confd in every node where we may want to run a pod (basically > every node) Oh, no, no. That's not how it works at all. confd runs *inside* the containers. Its input files and command-line arguments tell it how to watch for the settings to be used just for that one container instance. It does all of its work (reading templates, watching settings, HUPing services, etc.) from inside the container. The only inputs confd needs from outside of the container are the connection information to get to etcd. Everything else can be put in the system package for the application. A-ha, ok! I figured this was another option. In this case I guess we would have 2 options: 1. Run confd + openstack service inside the container. My concern in this case would be that we'd have to run 2 services inside the container and structure things in a way we can monitor both services and make sure they are both running. Nothing impossible but one more thing to do. 2. Run confd `-onetime` and then run the openstack service. A sidecar confd container running in a shared pod, which shares a PID namespace with the managed service, would look much more containerish. So confd could still HUP the service or signal it to be restarted w/o baking itself into the container image. We have to deal with the Pod abstraction as we want to be prepared for future integration with k8s. Yeah, this might work too. I was just trying to think of options that were generic enough. In a k8s scenario, this should do the job. 
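Bogdan's sidecar idea hinges on the managed service reacting to SIGHUP: in a pod with a shared PID namespace, the confd sidecar can send SIGHUP to the service's PID after re-rendering templates, without living in the same container image. A rough, hypothetical Python sketch of the receiving side (the handler and counter here are invented for illustration; a real service would re-read its rendered config files in the handler):

```python
import os
import signal

reload_count = {"hups": 0}

def on_hup(signum, frame):
    # A real OpenStack service would re-read its rendered config
    # files here instead of bumping a counter.
    reload_count["hups"] += 1

# Install the handler, then simulate what the confd sidecar would do:
# across a shared PID namespace it would effectively run
# os.kill(<service pid>, signal.SIGHUP) after rewriting the templates.
signal.signal(signal.SIGHUP, on_hup)
os.kill(os.getpid(), signal.SIGHUP)
```

The point of the sketch is only that the service and confd need no shared filesystem beyond the rendered config files: the reload channel is a plain POSIX signal, which is exactly what a shared PID namespace makes possible between containers in one pod.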
Flavio -- @flaper87 Flavio Percoco
[openstack-dev] [all][tc][glance] Glance needs help, it's getting critical
(sorry if duplicate, having trouble with email) Hi Team, I've been working a bit with the Glance team and trying to help where I can, and I can't help but be worried about the critical status of the Glance team. Unfortunately, the number of participants in the Glance team has been reduced a lot, resulting in the project not being able to keep up with the goals, the reviews required, etc.[0] I've always said that Glance is one of those critical projects that not many people notice until it breaks. It's in every OpenStack cloud sitting in a corner and allowing for VMs to be booted. So, before things get even worse, I'd like us to brainstorm a bit on what solutions/options we have now. I know Glance is not the only project "suffering" from lack of contributors but I don't want us to get to the point where there won't be contributors left. How do people feel about adding Glance to the list of "help wanted" areas of interest? Would it be possible to get help w/ reviews from folks from teams like nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to think about teams that may be familiar with the Glance code/api already. Cheers, Flavio [0] http://stackalytics.com/?module=glance-group&metric=marks [1] https://review.openstack.org/#/c/466684/ -- @flaper87 Flavio Percoco
Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd
On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) wrote: > How does confd run inside the container? Does this mean we’d need some > kind of systemd in every container which would spawn both confd and the > real service? That seems like a very large architectural change. But > maybe I’m misunderstanding it. > > Copying part of my reply to Doug's email: 1. Run confd + openstack service inside the container. My concern in this case would be that we'd have to run 2 services inside the container and structure things in a way we can monitor both services and make sure they are both running. Nothing impossible but one more thing to do. 2. Run confd `-onetime` and then run the openstack service. In either case, we could run confd as part of the entrypoint and have it run in the background for case #1 or just run it sequentially for case #2. > Thx, > britt > > On 6/9/17, 9:04 AM, "Doug Hellmann" wrote: > > Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +: > > > Unless I'm missing something, to use confd with an OpenStack > deployment on > > k8s, we'll have to do something like this: > > > > * Deploy confd in every node where we may want to run a pod > (basically > > every node) > > Oh, no, no. That's not how it works at all. > > confd runs *inside* the containers. Its input files and command line > arguments tell it how to watch for the settings to be used just for > that > one container instance. It does all of its work (reading templates, > watching settings, HUPing services, etc.) from inside the container. > > The only inputs confd needs from outside of the container are the > connection information to get to etcd. Everything else can be put > in the system package for the application. 
> Doug
Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd
On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann wrote: > Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +: > > > Unless I'm missing something, to use confd with an OpenStack deployment > on > > k8s, we'll have to do something like this: > > > > * Deploy confd in every node where we may want to run a pod (basically > > every node) > > Oh, no, no. That's not how it works at all. > > confd runs *inside* the containers. Its input files and command line > arguments tell it how to watch for the settings to be used just for that > one container instance. It does all of its work (reading templates, > watching settings, HUPing services, etc.) from inside the container. > > The only inputs confd needs from outside of the container are the > connection information to get to etcd. Everything else can be put > in the system package for the application. > A-ha, ok! I figured this was another option. In this case I guess we would have 2 options: 1. Run confd + openstack service inside the container. My concern in this case would be that we'd have to run 2 services inside the container and structure things in a way we can monitor both services and make sure they are both running. Nothing impossible but one more thing to do. 2. Run confd `-onetime` and then run the openstack service. Either would work but #2 means we won't have config files monitored and the container would have to be restarted to update the config files. Thanks, Doug. Flavio
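For concreteness, the two entrypoint shapes discussed here could be sketched roughly like this in Python (the service command and etcd endpoint are placeholders; the `-backend`, `-node`, and `-onetime` flags are confd's standard command-line options):

```python
import os
import subprocess

# Hypothetical etcd endpoint; in a real deployment this would be
# injected from the environment by the installer.
ETCD = os.environ.get("ETCD_ENDPOINT", "http://127.0.0.1:2379")

def confd_cmd(onetime):
    """Build the confd invocation for one of the two cases."""
    cmd = ["confd", "-backend", "etcd", "-node", ETCD]
    if onetime:
        # Case #2: render the templates once, then exit.
        cmd.append("-onetime")
    return cmd

def entrypoint(onetime, service_cmd):
    """Case #1: leave confd watching etcd in the background, then start
    the service. Case #2: run confd -onetime to completion first.
    Either way the entrypoint replaces itself with the service process."""
    if onetime:
        subprocess.run(confd_cmd(True), check=True)
    else:
        # Background confd: a second process we now have to monitor,
        # which is exactly the concern raised for case #1.
        subprocess.Popen(confd_cmd(False))
    os.execvp(service_cmd[0], service_cmd)
```

This is only a sketch of the control flow, not a production entrypoint: case #1 still needs real supervision of the background confd (the "2 services in one container" concern), while case #2 trades that away for the restart-to-reconfigure behavior Doug and I discuss above.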
Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd
On Thu, Jun 8, 2017, 19:14 Doug Hellmann wrote: > Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200: > > On 08/06/17 18:23 +0200, Flavio Percoco wrote: > > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote: > > >>On 06.06.2017 18:08, Emilien Macchi wrote: > > >>>Another benefit is that confd will generate a configuration file when > > >>>the application will start. So if etcd is down *after* the app > > >>>startup, it shouldn't break the service restart if we don't ask confd > > >>>to re-generate the config. It's good for operators who were concerned > > >>>about the fact the infrastructure would rely on etcd. In that case, we > > >>>would only need etcd at the initial deployment (and during lifecycle > > >>>actions like upgrades, etc). > > >>> > > >>>The downside is that in the case of containers, they would still have > > >>>a configuration file within the container, and the whole goal of this > > >>>feature was to externalize configuration data and stop having > > >>>configuration files. > > >> > > >>It doesn't look like a strict requirement. Those configs may (and should) be > > >>bind-mounted into containers, as hostpath volumes. Or, am I missing > > >>something that *does* make embedded configs a strict requirement?.. > > > > > >mmh, one thing I liked about this effort was the possibility of not bind-mounting > > >config files into the containers. I'd rather find a way to not need any > > >bindmount and have the services get their configs themselves. > > > > Probably sent too early! > > > > If we're not talking about OpenStack containers running in a COE, I guess this > > is fine. For k8s based deployments, I think I'd prefer having installers > > creating configmaps directly and using that. The reason is that depending on files > > that are in the host is not ideal for these scenarios. I hate this idea because > > it makes deployments inconsistent and I don't want that. 
> > > > Flavio > > > > I'm not sure I understand how a configmap is any different from what is > proposed with confd in terms of deployment-specific data being added to > a container before it launches. Can you elaborate on that? > > Unless I'm missing something, to use confd with an OpenStack deployment on k8s, we'll have to do something like this: * Deploy confd in every node where we may want to run a pod (basically every node) * Configure it to download all configs from etcd locally (we won't be able to download just some of them because we don't know what services may run in specific nodes. Except, perhaps, in the case of compute nodes and some other similar nodes) * Enable hostpath volumes (iirc it's disabled by default) so that we can mount these files in the pod * Run the pods and mount the files assuming the files are there. All of the above is needed because confd syncs files locally from etcd. Having a centralized place to manage these configs allows for controlling the deployment better. For example, if a configmap doesn't exist, then stop everything. Not trying to be negative but rather explain why I think confd may not work well for the k8s based deployments. I think it's a good fit for the rest of the deployments. Am I missing something? Am I overcomplicating things? Flavio
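By contrast, the installer-creates-configmaps approach sketched above would publish the rendered config files through the Kubernetes API server rather than syncing them onto every host. A minimal, hypothetical sketch of what such a manifest could look like, built as a plain dict so it needs no client library (the name, namespace, and file contents are made-up examples):

```python
def configmap_manifest(name, namespace, files):
    """Build a Kubernetes ConfigMap manifest (as a plain dict) from a
    mapping of file name -> rendered config contents. A pod would mount
    this ConfigMap as a volume instead of relying on hostpath files, and
    the scheduler can refuse to start the pod if the ConfigMap is missing
    ("if a configmap doesn't exist, then stop everything")."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": files,
    }

# Hypothetical example: an installer publishing a rendered glance-api.conf.
manifest = configmap_manifest(
    "glance-api-config",
    "openstack",
    {"glance-api.conf": "[DEFAULT]\ndebug = False\n"},
)
```

The design point is that the ConfigMap lives in one centralized place (etcd behind the API server) and is mounted on demand, so no node needs confd, a local sync of all configs, or hostpath volumes enabled.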
Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd
On Thu, Jun 8, 2017, 19:51 Steven Dake (stdake) wrote: > Flavio, > > At least for the kubernetes variant of kolla, bindmounting will always be > used as this is fundamentally how configmaps operate. In order to maintain > maximum flexibility and compatibility with kubernetes, I am not keen to > try a non-configmap way of doing things. > I was referring to bindmounts of files that were created in the host and reside in the host. While configmaps are bindmounts, they don't really live in the host until the pod/container is created. Flavio > Regards > -steve > > -----Original Message- > From: Flavio Percoco > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev@lists.openstack.org> > Date: Thursday, June 8, 2017 at 9:23 AM > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev@lists.openstack.org> > Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] > [kolla] [helm] Configuration management with etcd / confd > > On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote: > >On 06.06.2017 18:08, Emilien Macchi wrote: > >> Another benefit is that confd will generate a configuration file > when > >> the application will start. So if etcd is down *after* the app > >> startup, it shouldn't break the service restart if we don't ask > confd > >> to re-generate the config. It's good for operators who were > concerned > >> about the fact the infrastructure would rely on etcd. In that case, > we > >> would only need etcd at the initial deployment (and during lifecycle > >> actions like upgrades, etc). > >> > >> The downside is that in the case of containers, they would still > have > >> a configuration file within the container, and the whole goal of > this > >> feature was to externalize configuration data and stop having > >> configuration files. > > > >It doesn't look like a strict requirement. Those configs may (and should) > be >bind-mounted into containers, as hostpath volumes. 
Or, am I missing > >something that *does* make embedded configs a strict requirement?.. > > mmh, one thing I liked about this effort was the possibility of not > bind-mounting > config files into the containers. I'd rather find a way to not need any > bindmount and have the services get their configs themselves. > > Flavio > > > -- > @flaper87 > Flavio Percoco
[openstack-dev] [tripleo] Install Kubernetes in the overcloud using TripleO
Hey y'all, Just wanted to give an update on the work around tripleo+kubernetes. This is still far in the future but as we move tripleo to containers using docker-cmd, we're also working on the final goal, which is to have it run these containers on kubernetes. One of the first steps is to have TripleO install Kubernetes in the overcloud nodes and I've moved forward with this work: https://review.openstack.org/#/c/471759/ The patch depends on the `ceph-ansible` work and it uses the mistral-ansible action to deploy kubernetes by leveraging kargo. As it is, the patch doesn't quite work as it requires some files to be in some places (ssh keys) and a couple of other things. None of these "things" are blockers as in they can be solved by just sending some patches here and there. I thought I'd send this out as an update and to request some early feedback on the direction of this patch. The patch, of course, works in my local environment ;) Flavio -- @flaper87 Flavio Percoco