Re: [openstack-dev] [all] Switching to longer development cycles
Hey

On 13 December 2017 at 17:31, Thierry Carrez wrote:
> See attached for the PDF strawman the release team came up with when
> considering how that change would roll out in practice...
> [in which the final stage of the release is 8 weeks, therefore shorter
> than the other 10/11 week stages]

I'm not going to go on about this beyond this mail, since I've been
roundly shot down about this before, but please could we have a longer
gap between the final milestone and the release? Every extra week there
means more release quality :)

--
Cheers,

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Switching to longer development cycles
Hey

On 13 December 2017 at 17:12, Jimmy McArthur wrote:
> Thierry Carrez wrote:
>> - It doesn't mean that teams can only meet in-person once a year.
>> Summits would still provide a venue for team members to have an
>> in-person meeting. I also expect a revival of the team-organized
>> midcycles to replace the second PTG for teams that need or want to
>> meet more often.
>
> The PTG seems to allow greater coordination between groups. I worry
> that going back to an optional mid-cycle would reduce this
> cross-collaboration, while also reducing project face-to-face time.

I can't speak for the Foundation, but I would think it would be good to
have an official PTG in the middle of the cycle (perhaps neatly aligned
with some kind of milestone/event) that lets people discuss plans for
finishing off the release, and early work they want to get started on
for the subsequent release. The problem with team-organised midcycles
(as I'm sure everyone remembers) is that there's little/no opportunity
for cross-project work.

--
Cheers,

Chris
Re: [openstack-dev] [all] Switching to longer development cycles
Hey

On 13 December 2017 at 16:17, Thierry Carrez wrote:
> self-imposed rhythm no longer matches our natural pace. It feels like
> we are always running elections, feature freeze is always just around
> the corner, we lose too much time to events, and generally the
> impression that there is less time to get things done. Milestones in
> the development cycles are mostly useless now as they fly past us too
> fast. A lot of other people reported that same feeling.

Strongly agree.

> In another thread, John Dickinson suggested that we move to one-year
> development cycles, and I've been thinking a lot about it. I now think
> it is actually the right way to reconcile our self-imposed rhythm with
> the current pace of development, and I would like us to consider
> switching to year-long development cycles for coordinated releases as
> soon as possible.

+1 (and +1 for starting with Rocky)

For me, the first thing that comes to mind with this proposal is how the
milestones/FF/etc. would be arranged within that year. As I've raised
previously on this list [0], I would prefer more time for testing and
stabilisation between Feature Freeze and Release. I continue to think
that the unit testing our CI provides is not sufficient protection
against real-world deployment issues. I think building in a useful
amount of time for functional testing would be a huge benefit to both
the quality of upstream releases and the timeliness of downstream
releases.

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116911.html

--
Cheers,

Chris
Re: [openstack-dev] [release] Proposal to change the timing of Feature Freeze
Hey

Well, it definitely seems like there's not much support for the idea ;)
Thanks everyone who replied. I'll go away and think about ways we can
improve things without moving FF :)

Cheers,
--
Chris Jones

> On 18 May 2017, at 11:18, Thierry Carrez <thie...@openstack.org> wrote:
>
> Chris Jones wrote:
>> I have a fairly simple proposal to make - I'd like to suggest that
>> Feature Freeze move to being much earlier in the release cycle (no
>> earlier than M.1 and no later than M.2 would be my preference).
>> [...]
>
> Hey Chris,
>
> From my (admittedly too long) experience in release management, forcing
> more time for stabilization work does not magically yield better
> results. There is nothing like a "perfect" release, it's always a "good
> enough" trade-off. Holding releases in the hope that more bugs will be
> discovered and fixed only works so far: some bugs will only emerge once
> people start deploying software in their unique environments and use
> cases. It's better to put it out there when it's "good enough".
>
> So a Feature Freeze should be placed early enough to give you an
> opportunity to slow down, fix known blockers, and have documentation
> and translations catch up. Currently that means 5-6 weeks. Moving it
> earlier than this reasonable trade-off just brings more pain for little
> benefit. It is hard enough to get people to stop pushing features and
> feature freeze exceptions and do stabilization work for 5 weeks.
> Forcing a longer freeze would just see an explosion of local feature
> branches, not a more "stable" release.
>
> Furthermore, we have a number of projects (newly-created ones that need
> to release early, or mature ones that want to push that occasional new
> feature more often) that bypass the feature freeze / RC system
> completely. With more constraints, I'd expect most projects to switch
> to that model instead.
>
>> Rather than getting hung up on the specific numbers of weeks, perhaps
>> it would be helpful to start with opinions on whether or not there is
>> enough stabilisation time in the current release schedules.
>
> Compared to the early days of OpenStack (where we'd still use a
> 5-6-week freeze period) our automated testing has come a long way. The
> cases where we need to respin release candidates due to a major blocker
> that was not caught in automated testing are becoming rarer. If
> anything, the data points to a need for shorter freezes rather than
> longer ones. The main reason we are still at 5-6 weeks these days is
> translations and docs, rather than real stabilization work. I'm not
> advocating for making it shorter; I still think it's the right
> trade-off :)
>
> --
> Thierry Carrez (ttx)
[openstack-dev] [release] Proposal to change the timing of Feature Freeze
Hey folks

I have a fairly simple proposal to make - I'd like to suggest that
Feature Freeze move to being much earlier in the release cycle (no
earlier than M.1 and no later than M.2 would be my preference).

In the current arrangement (looking specifically at Pike), FF is
scheduled to happen 5 weeks before the upstream release. This means that
of a 26 week release cycle, 21 weeks are available for making large
changes, and only 5 weeks are available for stabilising the release
after the feature work has landed (possibly less if FF exceptions are
granted).

In my experience, significant issues are generally still being found
after the upstream release, by which point fixing them is much harder -
the patches need to land twice (master and stable/foo) and master may
already have diverged.

If the current model were inverted, and ~6 weeks of each release were
available for landing features, there would be ~20 weeks available for
upstream and downstream folk to do their testing/stabilising work. The
upstream release ought to have higher quality, and downstream releases
would be more likely to be able to happen at the same time.

Obviously not all developers would be working on the stabilisation work
for those ~20 weeks; many would move on to working on features for the
following release, which would then be ready to land in the much shorter
period. This might slow the feature velocity of projects, and maybe ~6
weeks is too aggressive, but I feel that the balance right now is
weighted strongly against timely, stable releasing of OpenStack,
particularly for downstream consumers :)

Rather than getting hung up on the specific numbers of weeks, perhaps it
would be helpful to start with opinions on whether or not there is
enough stabilisation time in the current release schedules.

--
Cheers,

Chris
[openstack-dev] [tripleo] CI Q session at PTG - submit your questions!
Hey folks

We're doing a Q&A session tomorrow (Friday) morning at the PTG on
TripleO CI. Whether you're at the PTG or not, if you have
questions/confusions about TripleO CI, please put them in the Etherpad
at: https://etherpad.openstack.org/p/tripleo-ptg-ci-qa

--
Cheers,

Chris
Re: [openstack-dev] [TripleO] diskimage-builder 1.0.0
Hey

On 27 Jul 2015, at 16:22, Gregory Haynes g...@greghaynes.net wrote:
> I just cut the 1.0.0 release, so no going back now. Enjoy!

woot!

Cheers,

Chris
Re: [openstack-dev] [TripleO] Core reviewer update proposal
Hi

On 5 May 2015 at 12:57, James Slagle james.sla...@gmail.com wrote:
> Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to
> TripleO Core.

+1

> I'd also like to give a heads-up to the following folks whose review
> activity is very low for the last 90 days:
> | cmsj ** | 60 2 0 4 266.7% | 0 ( 0.0%) |

I want to pick up my review rate, mostly with a focus on DIB, but I
suspect that will not keep me on track to remain core, which is fine :)

--
Cheers,

Chris
Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL
Hi

Thanks for stepping forward, James :)

Cheers,
--
Chris Jones

On 18 Feb 2015, at 21:45, Clint Byrum cl...@fewbar.com wrote:

> Excerpts from Clint Byrum's message of 2015-02-17 08:52:46 -0800:
>> Excerpts from Anita Kuno's message of 2015-02-17 07:38:01 -0800:
>>> On 02/17/2015 09:21 AM, Clint Byrum wrote:
>>>> There has been a recent monumental shift in my focus around
>>>> OpenStack, and it has required me to take most of my attention off
>>>> TripleO. Given that, I don't think it is in the best interest of
>>>> the project that I continue as PTL for the Kilo cycle. I'd like to
>>>> suggest that we hold an immediate election for a replacement who
>>>> can be 100% focused on the project. Thanks everyone for your hard
>>>> work up to this point. I hope that one day soon TripleO can deliver
>>>> on the promise of a self-deploying OpenStack that is stable and
>>>> automated enough to sit in the gate for many if not all OpenStack
>>>> projects.
>>>
>>> So in the middle of a release, changing PTLs can take 3 avenues:
>>>
>>> 1) The new PTL is appointed. Usually there is a leadership candidate
>>> in waiting which the rest of the project feels it can rally around
>>> until the next election. The stepping down PTL takes the pulse of
>>> the developers on the project and informs us on the mailing list who
>>> the appointed PTL is. Barring any huge disagreement, we continue on
>>> with work and the appointed PTL has the option of standing for
>>> election in the next election round. The appointment lasts until the
>>> next round of elections.
>>
>> Thanks for letting me know about this Anita. I'd like to appoint
>> somebody, but I need to have some discussions with a few people
>> first. As luck would have it, some of those people will be in Seattle
>> with us for the mid-cycle starting tomorrow.
>>
>>> 2) We have an election, in which case we need candidates and some
>>> dates. Let me know if we want to exercise this option so that
>>> Tristan and I can organize some dates.
>>
>> Let's wait a bit until I figure out if there's a clear and willing
>> appointee. That should be clear by Thursday.
>
> Ok, we talked this morning, and James Slagle has agreed to step in as
> the PTL for the rest of this cycle. So I hereby appoint him so. Thanks
> everyone!
Re: [openstack-dev] [TripleO] Stepping down as TripleO PTL
Hi Clint

Thanks very much for your awesome work as our PTL :)

Cheers,

Chris

On 17 February 2015 at 14:21, Clint Byrum cl...@fewbar.com wrote:
> There has been a recent monumental shift in my focus around OpenStack,
> and it has required me to take most of my attention off TripleO. Given
> that, I don't think it is in the best interest of the project that I
> continue as PTL for the Kilo cycle. I'd like to suggest that we hold an
> immediate election for a replacement who can be 100% focused on the
> project. Thanks everyone for your hard work up to this point. I hope
> that one day soon TripleO can deliver on the promise of a
> self-deploying OpenStack that is stable and automated enough to sit in
> the gate for many if not all OpenStack projects.

--
Cheers,

Chris
Re: [openstack-dev] [TripleO] stepping down as core reviewer
Hi Rob

Thanks for your excellent run of insightful reviewing :)

Cheers,

Chris

On 15 February 2015 at 21:40, Robert Collins robe...@robertcollins.net wrote:
> Hi, I've really not been pulling my weight as a core reviewer in
> TripleO since late last year, when personal issues really threw me for
> a while. While those are behind me now, and I had a good break over
> the Christmas and new year period, I'm sufficiently out of touch with
> the current (fantastic) progress being made that I don't feel
> comfortable +2'ing anything except the most trivial things. Now the
> answer to that is to get stuck back in, page in the current blueprints
> and charge ahead - but...
>
> One of the things I found myself reflecting on during my break was the
> extreme fragility of the things we were deploying in TripleO - most of
> our time is spent fixing fallout from unintended, unexpected
> consequences in the system. I think it's time to put some effort
> directly in on that in a proactive fashion, rather than just reacting
> to whichever failure du jour is breaking deployments / scale /
> performance. So for the last couple of weeks I've been digging into
> the Nova (initially) bugtracker and code with an eye to 'how did we
> get this bug in the first place', and refreshing my paranoid
> distributed-systems-ops mindset. I'll be writing more about that
> separately, but it's clear to me that there's enough meat there - both
> analysis, discussion, and hopefully execution - that it would be
> self-deceptive for me to think I'll be able to meaningfully contribute
> to TripleO in the short term.
>
> I'm super excited by Kolla - I think that containers really address
> the big set of hurdles we had with image based deployments, and if we
> can one-way-or-another get Cinder and Ironic running out of
> containers, we should have a pretty lovely deployment story. But I
> still think helping on the upstream stuff more is more important for
> now. We'll see where we're at in a cycle or two :)
>
> -Rob
>
> --
> Robert Collins rbtcoll...@hp.com
> Distinguished Technologist
> HP Converged Cloud

--
Cheers,

Chris
Re: [openstack-dev] [TripleO] nominating James Polley for tripleo-core
Hi

+1!

Cheers,
--
Chris Jones

On 14 Jan 2015, at 18:14, Clint Byrum cl...@fewbar.com wrote:
> Hello!
>
> It has been a while since we expanded our review team. The numbers
> aren't easy to read with recent dips caused by the summit and
> holidays. However, I believe James has demonstrated superb review
> skills and a commitment to the project that shows broad awareness of
> the project.
>
> Below are the results of a meta-review I did, selecting recent reviews
> by James with comments and a final score. I didn't find any reviews by
> James that I objected to.
>
> https://review.openstack.org/#/c/133554/ -- Took charge and provided
> valuable feedback. +2
>
> https://review.openstack.org/#/c/114360/ -- Good -1 asking for better
> commit message and then timely follow-up +1 with positive comments for
> more improvement. +2
>
> https://review.openstack.org/#/c/138947/ -- Simpler review, +1'd on
> Dec. 19 and no follow-up since. Allowing 2 weeks for holiday vacation,
> this is only really about 7 - 10 working days and acceptable. +2
>
> https://review.openstack.org/#/c/146731/ -- Very thoughtful -1 review
> of recent change with alternatives to the approach submitted as
> patches.
>
> https://review.openstack.org/#/c/139876/ -- Simpler review, +1'd in
> agreement with everyone else. +1
>
> https://review.openstack.org/#/c/142621/ -- Thoughtful +1 with
> consideration for other reviewers. +2
>
> https://review.openstack.org/#/c/113983/ -- Thorough spec review with
> grammar pedantry noted as something that would not prevent a positive
> review score. +2
>
> All current tripleo-core members are invited to vote at this time.
> Thank you!
Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?
Hi

On 8 Jan 2015, at 17:37, Steven Hardy sha...@redhat.com wrote:
> Pretty sure there's a way to make DIB do this, but don't know what -
> anyone able to share some clues? Do I have to hack the elements, or is
> there a better way? The docs are pretty sparse, so any help would be
> much appreciated! :)

We do have a mechanism for overriding the git sources for things, but
os-*-config don't use them at the moment; they either install from
packages or pip. I'm not sure what the rationale was for not including
a git source for those tools, but I think we should do it, even if it's
limited to situations where the procedure for overriding sources is
being followed. (The procedure that should be used is the DIB_REPO*
environment variables documented in
diskimage-builder/elements/source-repositories/README.md)

So, for now I think you're going to be stuck hacking the elements,
unfortunately.

Cheers,

Chris
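For anyone landing here with the same question, the override mechanism
looks roughly like this (a sketch from memory - treat the exact variable
spellings and the "os_collect_config" repo name as assumptions to verify
against the source-repositories README):

```shell
# Point the source-repositories element at a fork before building.
# The variable suffix must match the repo name declared in the element's
# source-repository-* file, with non-alphanumerics turned into
# underscores; "os_collect_config" here is illustrative.
export DIB_REPOLOCATION_os_collect_config="https://github.com/example/os-collect-config"
export DIB_REPOREF_os_collect_config="my-fork-branch"

# Then build as usual, e.g.:
#   disk-image-create -o my-image ubuntu vm os-collect-config
```

This only helps for elements that actually install from a declared
source repository, which is exactly the gap described above for
os-*-config.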
Re: [openstack-dev] [heat][tripleo] Making diskimage-builder install from forked repo?
Hi

On 8 Jan 2015, at 17:58, Clint Byrum cl...@fewbar.com wrote:
> Excerpts from Steven Hardy's message of 2015-01-08 09:37:55 -0800:
>
> So you can probably setup a devpi instance locally, and upload the
> commits you want to it, and then build the image with the 'pypi'
> element.

Given that we have a pretty good release frequency for all our tools, is
this burden on devs/testers actually justified at this point, versus the
potential consistency we could have with source repo flexibility, as in
other OpenStack components?

Cheers,

Chris
Re: [openstack-dev] [TripleO] Do we want to remove Nova-bm support?
Hi

AFAIK there are no products out there using TripleO with nova-bm, but
maybe a quick post to -operators asking if this will ruin anyone's day
would be good?

Cheers,
--
Chris Jones

On 4 Dec 2014, at 04:47, Steve Kowalik ste...@wedontsleep.org wrote:
> Hi all,
>
> I'm becoming increasingly concerned about all of the code paths in
> tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use
> nova-baremetal rather than Ironic. We do not check nova-bm support in
> CI, haven't for at least a month, and I'm concerned that parts of it
> may be slowly bit-rotting. I think our documentation is fairly clear
> that nova-baremetal is deprecated and Ironic is the way forward, and I
> know it flies in the face of backwards-compatibility, but do we want
> to bite the bullet and remove nova-bm support?
>
> Cheers,
> --
> Steve
>
> Oh, in case you got covered in that Repulsion Gel, here's some advice
> the lab boys gave me: [paper rustling] DO NOT get covered in the
> Repulsion Gel.
>  - Cave Johnson, CEO of Aperture Science
Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)
Hi

I am very sympathetic to this view. We have a patch in hand that
improves the situation. We also have disagreement about the ideal
situation. I +2'd Ian's patch because it makes things work better than
they do now.

If we can arrive at an ideal solution later, great, but the more I think
about logging from a multitude of bash scripts, and tricks like
XTRACE_FD, the more I think it's crazy and we should just incrementally
improve the non-trace logging as a separate exercise, leaving working
tracing for true debugging situations.

Cheers,
--
Chris Jones

On 3 Dec 2014, at 05:00, Ian Wienand iwien...@redhat.com wrote:
> On 12/03/2014 09:30 AM, Clint Byrum wrote:
>> I for one find the idea of printing every cp, cat, echo and ls
>> command out rather frustratingly verbose when scanning logs from a
>> normal run.
>
> I for one find this ongoing discussion over a flag whose own help says
> "-x -- turn on tracing" not doing the blindingly obvious thing of
> turning on tracing, and the seeming inability to reach a conclusion on
> a posted review over 3 months, a troubling narrative for potential
> consumers of diskimage-builder.
>
> -i
Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)
Hi

On 3 Dec 2014, at 18:41, Clint Byrum cl...@fewbar.com wrote:
> What if the patch is reworked to leave the current trace-all-the-time
> mode in place, and we iterate on each script to make tracing
> conditional as we add proper logging?

+1

Cheers,
--
Chris Jones
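For concreteness, the per-script conditional tracing being discussed
might look something like this (a sketch only - the DIB_DEBUG_TRACE
variable name is an assumption for illustration, not necessarily what
any final patch used):

```shell
# Tracing is off by default so a normal run doesn't echo every
# cp/cat/echo; exporting a debug knob turns it back on for true
# debugging situations. DIB_DEBUG_TRACE is an assumed name.
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
    set -x
fi
set -eu
set -o pipefail

# Deliberate, readable log lines replace wall-to-wall xtrace output.
echo "installing package foo"
```

Each script keeps working `set -x` for debugging while the default log
output stays scannable, which is the incremental compromise described
above.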
Re: [openstack-dev] [TripleO] [Ironic] [Cinder] Baremetal volumes -- how to model direct attached storage
Hi

My thoughts:

Shoe-horning the ephemeral partition into Cinder seems like a lot of
pain for almost no gain [1]. The only gain I can think of would be that
we could bring a node down and boot it into a special ramdisk that
exposes the volume to the network, so cindery operations (e.g.
migration) could be performed, but I'm not even sure if anyone is asking
for that? Forcing Cinder to understand and track something it can never
normally do anything with seems like we're just trying to squeeze
ourselves into an ever-shrinking VM costume!

Having said that, "preserve ephemeral" is a terrible oxymoron, so if we
can do something about it, we probably should.

How about instead, we teach Nova/Ironic about a concept of "no
ephemeral"? They make a partition on the first disk for the first image
they deploy, and then they never touch the other part(s) of the disk(s)
until the instance is destroyed. This creates one additional burden for
operators, which is to create and format a partition the first time they
boot, but since this is a very small number of commands, and something
we could trivially bake into our (root?) elements, I'm not sure it's a
huge problem.

This gets rid of the cognitive dissonance of preserving something that
is described as ephemeral, and (IMO) makes it extremely clear that
OpenStack isn't going to touch anything but the first partition of the
first disk. If this were baked into the flavour rather than something we
tack onto a nova rebuild command, it offers greater safety for operators
against the risk of accidentally wiping a vital state partition with a
misconstructed rebuild command.

[1] For local disk, I mean. I still think it'd be nice for operators to
be able to use a networked Cinder volume for /mnt/state/, but that
presents a whole different set of challenges :)

Cheers,
--
Chris Jones

On 13 Nov 2014, at 09:25, Robert Collins robe...@robertcollins.net wrote:
> Back in the day, before the ephemeral hack (though that was something
> folk have said they would like for libvirt too - so it's not such a
> hack per-se), this was (broadly) sketched out. We spoke with the Cinder
> PTL at the time in Portland, from memory. There was no spec, so here is
> my brain-dumpy-recollection...
>
> - actual volumes are a poor match because we wouldn't be running
>   cinder-volume on an ongoing basis, and service records would
>   accumulate etc.
> - we'd need cross-service scheduler support to make cinder operations
>   line up with allocated bare metal nodes (and to e.g. make sure both
>   our data volume and golden image volume are scheduled to the same
>   machine).
> - folk want to be able to do fairly arbitrary RAID (+JBOD) setups, and
>   that affects scheduling as well. One way to work it is to have
>   Ironic export capabilities and specify actual RAID setups via
>   matching flavors - this is the direction the ephemeral work took us,
>   and is conceptually straightforwardly extended to RAID. We did talk
>   about doing a little JSON schema to describe RAID / volume layouts,
>   which Cinder could potentially use for user-defined volume flavors
>   too.
>
> One thing I think is missing from your description is in this:
>
>> To be clear, in TripleO, we need a way to keep the data on a local
>> direct attached storage device while deploying a new image to the
>> box.
>
> I think we need to be able to do this with a single drive shared
> between image and data - doing one disk image, one disk data would add
> substantial waste given the size of disks these days (and for some
> form factors, like Moonshot, it would rule out using them at all).
> Of course, being able to do entirely network-stored golden images
> might be something some deployments want, but we can't require them
> all to do that ;)
>
> -Rob
>
> On 13 November 2014 at 11:30, Clint Byrum cl...@fewbar.com wrote:
>> Each summit since we created "preserve ephemeral" mode in Nova, I
>> have some conversations where at least one person's brain breaks for
>> a second. There isn't always alcohol involved before; there almost
>> certainly is always a drink needed after. The very term is vexing,
>> and I think we have done ourselves a disservice to have it, even if
>> it was the best option at the time.
>>
>> To be clear, in TripleO, we need a way to keep the data on a local
>> direct attached storage device while deploying a new image to the
>> box. If we were on VMs, we'd attach volumes, and just deploy new VMs
>> and move the volume over. If we had a SAN, we'd just move the LUNs.
>> But at some point when you deploy a cloud you're holding data that is
>> expensive to replicate all at once, and so you'd rather just keep
>> using the same server instead of trying to move the data.
>>
>> Since we don't have baremetal Cinder, we had to come up with a way to
>> do this, so we used Nova rebuild, and slipped it a special command
>> that said "don't overwrite the partition you'd normally make the
>> 'ephemeral' partition". This works fine, but it is confusing and
>> limiting. We'd like something better. I had
Re: [openstack-dev] [TripleO] Kilo Mid-Cycle Meetup Planning
Hi

On 9 October 2014 23:56, James Polley j...@jamezpolley.com wrote:
> Assuming it's in the US or Europe, Mon-Fri gives me about 3 useful
> days, once you take out the time I lose to jet lag. That's barely
> worth the 48 hours or so I spent in transit last time.

It may well be reasonable/possible, assuming it's not inconvenient for
you, to add a day or two to the trip, to recover before the meetup
starts :)

--
Cheers,

Chris
Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state
Hi On 6 Oct 2014, at 17:41, Clint Byrum cl...@fewbar.com wrote: We have to be _extremely_ careful in how we manage this. I actually think it has potential to really blow up in our faces. Yes, anything we do here has the potential to be extremely ruinous for operators, but the reality is that any existing TripleO deployment is at pretty severe risk of blowing up because of UIDs/GIDs changing when they update. We need to give people a way to move forward without us merging a patch, and at the same time we need to make sure we provide a consistent set of UIDs for anything people may want to deploy with diskimage-builder. IMO the only desirable option *has* to be that we statically define UIDs and GIDs in the elements, because: 1: Requires no data fragments to be kept safe and fed to subsequent build processes 2: Doesn't do anything dynamic on first boot that could take hours/days 3: Can be thoroughly audited at build time to ensure correctness As you rightly point out though, any existing deployments will definitely be disrupted by this, but as I said above, all we'd be doing there is moving the needle from possible/probable to definite. Since the only leftovers we have from their previous image builds, are the images themselves, we could add the ability for a DIB run to extract IDs from a previous image, but this couldn't be required as a default build option, so we'd still risk existing deployments if they don't notice this feature. We could create a script that would spider an existing cloud and extract its ID mappings, to produce a fragment to feed into future builds, but again we're relying on operators to know that they need to do this. Instead, I agree with Greg's view that this is our fault and we should fix it. We didn't think of this sooner, and as a result, our users are at risk. If we don't entirely fix this ourselves, we will be both expecting them to become aware of this issue and expecting them to do additional work to mitigate it. 
To that end, I think we should audit all of our elements for use of /mnt/state/ and use the specific knowledge we have of the software they relate to, to build one-time ID migration scripts, which would:

1: Execute before any related services start
2: Compare the now-static ID mappings against known files in /mnt/state
3: chown/chgrp any files/directories that need migrating
4: Store a flag file in /mnt/state indicating that this process doesn't need to run again

It does mean operators have a potentially painfully long update process once, but the result will be a completely stable, static arrangement that will not require them to preserve precious build fragments for the rest of time. Nor does it require some odd run-time remapping, or any additional mechanisms to centralise user management (e.g. LDAP. Please, no LDAP!)

I think that tying ourselves and our operators into knots because we're afraid of the hit of a one-time data migration is crazy. AFAICS, the only risk left at that point is elements that other people are maintaining. If we consider that to be a sufficient risk, we can still build the mechanism for injecting ID values from a previous build (essentially just seeding the static values that we'd be setting anyway) and apologise to the users who need that, or who don't discover its existence and break their clouds.

Cheers,

Chris

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
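For illustration only, the migration steps described above could be sketched roughly like this. The old:new UID pairs, the flag file name, and the helper itself are invented for the sketch; they are not taken from any real element:

```shell
#!/bin/bash
# Hypothetical one-time ownership migration for preserved state.
# The old:new UID pairs below are invented examples.
set -eu

migrate_state_ownership() {
    local state_dir=$1
    local flag="$state_dir/.uid-migration-done"
    # Assumed mapping of old (dynamic) UID -> new (static) UID.
    local mappings="54321:27 54322:91"
    local pair old new

    # Step 4 (guard): if a previous run completed, do nothing.
    if [ -e "$flag" ]; then
        return 0
    fi

    for pair in $mappings; do
        old=${pair%:*}
        new=${pair#*:}
        # Steps 2 and 3: find anything still owned by the old numeric
        # ID and re-own it to the new static ID.
        find "$state_dir" -xdev -uid "$old" -exec chown -h "$new" {} +
    done

    # Step 4: record completion so this never runs twice.
    touch "$flag"
}
```

Step 1 (running before any related services start) would come from where the element wires this in, e.g. ordering it early in the os-refresh-config run.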
Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state
Hi

On 7 Oct 2014, at 18:49, Clint Byrum cl...@fewbar.com wrote: * Create an element which imports /etc/passwd and /etc/group from local disk into image. This will have an element-provides of uid-gid-map

I don't think it should import those files wholesale; IMO that's making way too many assumptions about the similarity of the two builds. I think this needs to be at least somewhat driven from the elements themselves - they need to declare the users/groups they need and likely offer a default UID/GID for each. The element you describe could consult the elements to determine which users/groups need to be created, pull the values it needs from the passwd/group files if they exist, or fall back on the offered static defaults if not, then use normal tools to create the user/group (mainly so we respect whatever libnss settings the operator has chosen).

* Create a separate element called 'static-users' which also provides uid-gid-map. Contains a map of uids and gids, and creates users early on

I don't like the idea of keeping the map in a single element, for several reasons:

1: It's harder to maintain across multiple element repos (e.g. we have two repos, do we have a static-users equivalent in each?)
2: It's harder for downstreams to fork if they want to add an element we don't carry
3: It's harder for devs to commit to, especially if it means they need to simultaneously land something in di-b and t-i-e.

with static UIDs/GIDs only. Disables usual commands used to add users and groups (error message should explain well enough that user can add their

Are you suggesting disabling those during build time only, or at runtime too? I strongly dislike the latter and I'm not thrilled about the former. I'd rather we leave them as-is and audit the passwd/group files at the end of the build, vs what we were expecting from the elements.
(we should also be aware that by enforcing this, we'll be increasing the number of elements we need to supply, because any dependencies that get pulled in during the build, which create users/groups, will now error/fail the build) As for migrations, that is fairly simple and can be done generically, I've already written a script that does it fairly reliably. The only Curious to see how that works - how can it know that /mnt/state/some/random/dir currently listed as ceilometer was actually owned by swift on the previous boot? Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
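As a sketch of the fallback logic described above (the helper name, default UIDs, and passwd path are hypothetical, not dib code): an element declares a user plus a default UID, and the build prefers the UID recorded in a previous image's passwd file when one is offered, else uses the static default:

```shell
#!/bin/bash
# Hypothetical helper: choose a UID for a user an element declares.
# Prefers the UID recorded in a previous image's passwd file (if one
# was provided), falling back to the element's static default.
set -eu

resolve_uid() {
    local user=$1 default_uid=$2 old_passwd=${3:-}
    local entry
    if [ -n "$old_passwd" ] && [ -r "$old_passwd" ]; then
        # passwd format: name:password:uid:gid:gecos:home:shell
        entry=$(grep "^${user}:" "$old_passwd" || true)
        if [ -n "$entry" ]; then
            echo "$entry" | cut -d: -f3
            return 0
        fi
    fi
    echo "$default_uid"
}

# The build would then create the user with normal tools, so the
# operator's libnss settings are respected, e.g. (illustrative):
#   useradd --uid "$(resolve_uid rabbitmq 42 "$OLD_PASSWD")" rabbitmq
```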
Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state
Hi On 7 Oct 2014, at 20:47, Clint Byrum cl...@fewbar.com wrote: I don't think it should import those files wholesale, IMO that's making way too many assumptions about the similarity of the two builds. Disagree on the grounds that the point of this is to deal with people who already have images and want to update them. We're not trying to enable people to deploy radically different OS's or anything like that. My point is that we don't know what else is changing under the hood. Switching OS is a bit extreme, but we don't know that they're not going to pull in an OS upgrade at the same time and have everything change substantially, or even just introduce one additional dependency which we would destroy the uid/gid for. I think this needs to be at least somewhat driven from the elements themselves - they need to declare the users/groups they need and likely offer a default UID/GID for each. The element you describe could consult the elements to determine which users/groups need to be created, pull the values it needs from the passwd/group files if they exist, or fall back on the offered static defaults if not, then use normal tools to create the user/group (mainly so we respect whatever libnss settings the operator has chosen). That is a lot of complexity for the case that we don't want people to use (constantly carrying /etc/passwd and /etc/group forward). As tchaypo pointed out on IRC, if we do this static route, we are laying down a great big concrete slab of opinion in our images. I'm all for opinionated software, but we need to give people an out, which means we probably want to have a way for the suggested default UID/GIDs to be overridden, i.e. roughly what I described. We can just use that override to inject the pre-existing passwd/group values, if we so desire. I think that makes sense. The per-element expressions will still need to be coalesced into one place to check for conflicts. I suppose if we Definitely. 
If we use normal system tools to create the users/groups, the build will fail if there are conflicts. So I think we still do care about these. MySQL puts its files on /mnt/state. RabbitMQ puts its files on /mnt/state. icinga puts its files on /mnt/state. If an element installs a package that needs a user, we have to treat that as our responsibility to handle, because we want to preserve /mnt/state. Agreed. I just wanted to put a note on the table that we will have to care about these things :) It also has to have access to the previous image's /etc/passwd and /etc/group. In the context I wrote it in, updating via ansible, I can *nods* Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
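The "coalesce the per-element expressions into one place to check for conflicts" step could be as simple as this sketch (the name:uid input format is an assumption for illustration, not a real dib interface):

```shell
#!/bin/bash
# Hypothetical conflict check over the combined user declarations
# gathered from every element. Reads "name:uid" lines on stdin;
# fails if a name maps to two UIDs or a UID to two names.
set -eu

detect_uid_conflicts() {
    local decls dup_names dup_uids
    decls=$(sort -u)   # identical duplicate declarations are harmless
    dup_names=$(printf '%s\n' "$decls" | cut -d: -f1 | sort | uniq -d)
    dup_uids=$(printf '%s\n' "$decls" | cut -d: -f2 | sort | uniq -d)
    if [ -n "$dup_names$dup_uids" ]; then
        echo "conflicting declarations: $dup_names $dup_uids" >&2
        return 1
    fi
}
```

Failing the build at this point is cheap; as noted above, using the normal system tools afterwards gives a second line of defence, since user/group creation will also fail on a clash.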
Re: [openstack-dev] [TripleO][elections] Not running as PTL this cycle
Hi On 22 September 2014 04:26, Robert Collins robe...@robertcollins.net wrote: I'm not running as PTL for the TripleO program this cycle. As someone who's been involved in TripleO for a couple of years, I'd like to say thank you very much for your efforts in bootstrapping and PTLing the program. I think it has benefitted enormously from your contributions (leadership and otherwise). -- Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers
Hi +1 Cheers, -- Chris Jones On 9 Sep 2014, at 19:32, Gregory Haynes g...@greghaynes.net wrote: Hello everyone! I have been working on a meta-review of StevenK's reviews and I would like to propose him as a new member of our core team. As I'm sure many have noticed, he has been above our stats requirements for several months now. More importantly, he has been reviewing a wide breadth of topics and seems to have a strong understanding of our code base. He also seems to be doing a great job at providing valuable feedback and being attentive to responses on his reviews. As such, I think he would make a great addition to our core team. Can the other core team members please reply with your votes if you agree or disagree. Thanks! Greg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?
Hi When register-nodes blows up, is the error we get from Ironic sufficiently unique that we can just consume it and move on? I'm all for making the API more powerful wrt inspecting the current setup, but I also like idempotency :) Cheers, -- Chris Jones On 22 Aug 2014, at 07:32, Steve Kowalik ste...@wedontsleep.org wrote: Hi, TripleO has a bridging script we use to register nodes with a baremetal service (eg: Ironic or Nova-bm), called register-nodes, which given a list of node descriptions (in JSON), will register them with the appropriate baremetal service. At the moment, if you run register-nodes a second time with the same list of nodes, it will happily try and register them and then blow up when Ironic or Nova-bm returns an error. If operators are going to update their master list of nodes to add or remove machines and then run register-nodes again, we need a way to skip nodes that are already registered -- except that I don't really want to extract out the UUID of the registered nodes, because that puts an onus on the operators to make sure that the UUID is listed in the master list, and that would mean requiring manual data entry, or some way to get that data back out in the tool they use to manage their master list, which may not even have an API. Because this is intended to be a bridge between an operator's master list and a baremetal service, it will run again and again when changes happen. This means we need a way to uniquely identify the machines in the list so we can tell if they are already registered. For the pxe_ssh driver, this means the set of MAC addresses must intersect. For other drivers, we think that the pm_address for each machine will be unique. Would it be possible to add some advice to that effect to Ironic's driver API? Cheers, -- Steve Stop breathing down my neck! My breathing is merely a simulation. So is my neck! Stop it anyway.
- EMH vs EMH, USS Prometheus ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?
Hi

Nice, sounds like a very good thing from an operator's perspective.

Cheers,
--
Chris Jones

On 22 Aug 2014, at 08:44, Steve Kowalik ste...@wedontsleep.org wrote: On 22/08/14 17:35, Chris Jones wrote: Hi When register-nodes blows up, is the error we get from Ironic sufficiently unique that we can just consume it and move on? I'm all for making the API more powerful wrt inspecting the current setup, but I also like idempotency :)

If the master nodes list changes, because say you add a second NIC, and up the amount of RAM for a few of your nodes, we then want to update those details in the baremetal service, rather than skipping those nodes since they are already registered.

-- Steve In the beginning was the word, and the word was content-type: text/plain ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
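For the pxe_ssh matching rule ("the set of MAC addresses must intersect"), the check itself is trivial; this is just an illustrative sketch, not the real register-nodes code:

```shell
#!/bin/bash
# Hypothetical helper: do two whitespace-separated MAC address lists
# share at least one address? Returns 0 (true) if they intersect.
set -eu

macs_intersect() {
    local a b
    for a in $1; do
        for b in $2; do
            if [ "$a" = "$b" ]; then
                return 0
            fi
        done
    done
    return 1
}

# register-nodes could then treat a master-list entry and an
# already-registered node as the same machine when their MAC sets
# intersect (pxe_ssh), or when pm_address matches (other drivers),
# updating details like RAM or NICs instead of erroring out.
```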
Re: [openstack-dev] [nova][core] Expectations of core reviewers
Hi On 14 Aug 2014, at 19:48, Dugger, Donald D donald.d.dug...@intel.com wrote: My experience with mics, no matter how good, in conference rooms is not good. +1 The Ubuntu dev summits went through several iterations of trying to make remote participation effective, and I don't think we ever achieved it. I think the reality that needs to be accepted is that not every IRL conversation is going to be accessible to everyone who wants to take part (even if you have superb remote access, maybe some particular discussion is happening at 3am in some developer's timezone). I think we should be ok with that. So long as a quorum of cores can be present at any given event, progress can be made. Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] os-refresh-config run frequency
Hi I also have some strong concerns about this. Can we get round a table this week and hash it out? Cheers, Chris On 20 Jul 2014, at 14:51, Dan Prince dpri...@redhat.com wrote: On Thu, 2014-07-17 at 15:54 +0100, Michael Kerrin wrote: On Thursday 26 June 2014 12:20:30 Clint Byrum wrote: Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-26 04:13:31 -0700: Hi all, I've been working more and more with TripleO recently and whilst it does seem to solve a number of problems well, I have found a couple of idiosyncrasies that I feel would be easy to address. My primary concern lies in the fact that os-refresh-config does not run on every boot/reboot of a system. Surely a reboot *is* a configuration change and therefore we should ensure that the box has come up in the expected state with the correct config? This is easily fixed through the addition of an @reboot entry in /etc/crontab to run o-r-c or (less easily) by re-designing o-r-c to run as a service. My secondary concern is that through not running os-refresh-config on a regular basis by default (i.e. every 15 minutes or something in the same style as chef/cfengine/puppet), we leave ourselves exposed to someone trying to make a quick fix to a production node and taking that node offline the next time it reboots because the config was still left as broken owing to a lack of updates to HEAT (I'm thinking a quick change to allow root access via SSH during a major incident that is then left unchanged for months because no-one updated HEAT). There are a number of options to fix this including Modifying os-collect-config to auto-run os-refresh-config on a regular basis or setting os-refresh-config to be its own service running via upstart or similar that triggers every 15 minutes I'm sure there are other solutions to these problems, however I know from experience that claiming this is solved through education of users or (more severely!) 
via HR is not a sensible approach to take, as by the time you realise that your configuration has been changed for the last 24 hours it's often too late! So I see two problems highlighted above. 1) We don't re-assert ephemeral state set by o-r-c scripts. You're right, and we've been talking about it for a while. The right thing to do is have os-collect-config re-run its command on boot. I don't think a cron job is the right way to go; we should just have a file in /var/run that is placed there only on a successful run of the command. If that file does not exist, then we run the command. I've just opened this bug in response: https://bugs.launchpad.net/os-collect-config/+bug/1334804 I have been looking into bug #1334804 and I have a review up to resolve it. I want to highlight something. Currently on a reboot we start all services via upstart (on Debian, anyway) and there have been quite a lot of issues around this - missing upstart scripts and timing issues. I don't know the issues on Fedora. So with a fix to #1334804, on a reboot upstart will start all the services first (with potentially out-of-date configuration), then o-c-c will start o-r-c, which will now configure all services and restart them, or start them if upstart isn't configured properly. I would like to turn off all boot scripts for services we configure and leave all this to o-r-c. I think this will simplify things and put us in control of starting services. I believe that it will also narrow the gap between Fedora and Debian (or Debian and Debian), so what works on one should work on the other, and make it easier for developers. I'm not sold on this approach. At the very least I think we want to make this optional, because not all deployments may want to have o-r-c be the central service starting agent. So I'm opposed to this being our (only!) default... The job of o-r-c in this regard is to assert state...
which to me means making sure that a service is configured correctly (config files, set to start on boot, and initially started). Requiring o-r-c to be the service starting agent (always) is beyond the scope of the o-r-c tool. If people want to use it in that mode I think having an *option* to do this is fine. I don't think it should be required though. Furthermore I don't think we should get into the habit of writing our elements in such a manner that things no longer start on boot without o-r-c in the mix. I do think we can solve these problems. But taking a hardwired prescriptive approach is not good here... Having the ability to service nova-api stop|start|restart is very handy, but this will be a manual thing and I intend to leave that there. What do people think, and how best do I push this forward? I feel that this leads into the re-assert-system-state spec but mainly I think this is a bug and doesn't
Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team
Hi

+1

Cheers,
--
Chris Jones

On 9 Jul 2014, at 16:52, Clint Byrum cl...@fewbar.com wrote: Hello! I've been looking at the statistics, and doing a bit of review of the reviewers, and I think we have an opportunity to expand the core reviewer team in TripleO. We absolutely need the help, and I think these two individuals are well positioned to do that. I would like to draw your attention to this page: http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt Specifically these two lines:

+------------------+-----------------------------------------+----------------+
| Reviewer         | Reviews  -2  -1   +1  +2  +A    +/- %   | Disagreements* |
+------------------+-----------------------------------------+----------------+
| jonpaul-sullivan |     188   0  43  145   0   0    77.1%   |   28 ( 14.9%)  |
| lxsli            |     186   0  23  163   0   0    87.6%   |   27 ( 14.5%)  |
+------------------+-----------------------------------------+----------------+

Note that they are right at the level we expect, 3 per work day. And I've looked through their reviews and code contributions: it is clear that they understand what we're trying to do in TripleO, and how it all works. I am a little dismayed at the slightly high disagreement rate, but looking through the disagreements, most of them were jp and lxsli being more demanding of submitters, so I am less dismayed. So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core reviewer team.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] os-refresh-config run frequency
Hi Given... On 26 Jun 2014, at 20:20, Clint Byrum cl...@fewbar.com wrote: we should just have a file in /var/run that and... I think we should focus just on how do we re-assert state?. ... for the reboot case, we could have os-collect-config check for the presence of the /var/run file when it starts, if it doesn't find it, unconditionally call o-r-c and then write out the file. Given that we're starting o-c-c on boot, this seems like a fairly simple way to get o-r-c to run on boot (and one that could be trivially disabled by configuration or just dumbly pre-creating the /var/run file). Whenever a new version is detected, os-collect-config would set a value in the environment that informs the command this is a new version of I like the idea of exposing the fact that a new config version has arrived, to o-r-c scripts, but... if !service X status ; then service X start ... I always worry when I see suggestions to have periodic state-assertion tasks take care of starting services that are not running, but in this case I will try to calm my nerves with the knowledge that service(1) is almost certainly talking to a modern init which is perfectly capable of supervising daemons :D Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
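A minimal sketch of that flag-file logic (the function name and flag path are assumptions; the real os-collect-config is Python, so this only illustrates the shape of the behaviour):

```shell
#!/bin/bash
# Hypothetical boot-time check: run the configured command (normally
# os-refresh-config) only if the success flag is missing, then write
# the flag. A reboot clears /var/run, so the command re-runs on boot.
set -eu

run_command_if_needed() {
    local flag=$1; shift
    if [ ! -e "$flag" ]; then
        "$@"            # with set -e, a failure here skips the touch
        touch "$flag"   # only written after a successful run
    fi
}

# e.g. (paths illustrative):
#   run_command_if_needed /var/run/os-collect-config.ran os-refresh-config
```

Pre-creating the flag file, or a config switch, would be the trivial opt-out mentioned above.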
Re: [openstack-dev] [TripleO] Post-summit next steps
Hi On 20 May 2014, at 16:25, Sanchez, Cristian A cristian.a.sanc...@intel.com wrote: Could you please point me where the spec repo is? http://git.openstack.org/cgit/openstack/tripleo-specs/ Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh
Hi On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com wrote: I suppose that rewriting the code to be in Python is out of the question? IMHO shell is just a terrible language for doing any program that is remotely complicated (ie longer than 10 lines of I don't think it's out of the question - where something makes sense to switch to Python, that would seem like a worthwhile thing to be doing. I do think it's a different question though - we can quickly flip things from /bin/sh to /bin/bash without affecting their suitability for replacement with Python. -- Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Thoughts from the PTL -- possible mid cycle meetup dates
Hey Co-locating still has the option to partially overlap the two sprints. Cheers, -- Chris Jones On 16 Apr 2014, at 02:38, Michael Still mi...@stillhq.com wrote: On Wed, Apr 16, 2014 at 10:01 AM, Robert Collins robe...@robertcollins.net wrote: On 16 April 2014 11:28, Michael Still mi...@stillhq.com wrote: On Wed, Apr 16, 2014 at 7:57 AM, Hugh O. Brock hbr...@redhat.com wrote: On Wed, Apr 16, 2014 at 09:30:45AM +1200, Robert Collins wrote: Redhat offered to host the next TripleO midcycle meetup in Raleigh, I don't know if they have space for Nova TripleO at once, but I'd love to get more collaboration time betwixt Nova and TripleO. The TripleO midcycle meetups are 'doing' meetings, not planning meetings - but plenty of planning does still happen ;) Date wise, how about before OSCON ? PyConAU which often gets a heavy openstack contingent is august 1-5. I am sure we have enough space, we would be very happy to host both at the same time. I envision at the same time being back to back to be honest, as I think running two in parallel would be a bit bonkers. I can't travel for a single 2 week trip - my daughter doesn't cope super well with me being gone, and I don't want to subject her to a 2 week trip. Doing a 2-or-3 day meetup for TripleO is pointless IMO - folk spend a day getting there in the first place. Last cycle TripleO and Ironic co-located and it was productive for all involved. This may mean that co-locating is an idea which doesn't work out. Based on the way the last nova meetup went, there will be little time to dig into the deeper specifics of tripleo (ironic especially) if its in time that's also allocated to other nova discussion -- I think the absolute longest we spent on a single topic last time was in the order of a couple of hours. The nova meetup also wasn't a hackfest -- it was more about design review and progress tracking, and I think that was a model that worked well for us. 
I see a need for a lot of sync between Ironic and Nova for Juno, mostly around the replacement of the baremetal driver. Perhaps instead we should go back to trying to have these events separately, and try and get a few key Nova people to the TripleO meetup. Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic][TripleO] ubuntu deploy-ironic element ramdisk does not detect sata disks
Hi At the point you get that error, it should also give you about 10 seconds to press t to get a troubleshooting shell (or you can specify troubleshoot on the kernel command line if you prefer). I would start with that and have a poke around dmesg and /dev to see what it can see by way of disks. Cheers, Chris On 11 April 2014 12:10, Rohan Kanade openst...@rohankanade.com wrote: I am using Ironic's out of tree Nova driver and running Ironic + PXESeamicro driver. I am using a ubuntu based deployment ramdisk created using below cmd in diskimage-builder. sudo bin/ramdisk-image-create -a amd64 ubuntu deploy-ironic -o /tmp/deploy-ramdisk I can see that the ramdisk is pxe-booted on the baremetal correctly. But the ramdisk throws an error saying it cannot find the target disk device https://github.com/openstack/diskimage-builder/blob/master/elements/deploy-ironic/init.d/80-deploy-ironic#L18 I then hard-code /dev/sda as target_disk, yet the ramdisk does not actually detect any disks after booting up or while booting up. I have cross-checked using a SystemRescueCD linux image on the same baremetal, i can see all the sata disks attached to it fine. Any pointers? Regards, Rohan Kanade ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
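From the troubleshooting shell, something along these lines would show whether the kernel found any disks at all (the exact driver messages vary by hardware; these commands are just a starting point, not a prescribed procedure):

```shell
#!/bin/sh
# Commands to poke around from the ramdisk troubleshooting shell.
# If none of these show a disk, the ramdisk kernel is probably
# missing the SATA/RAID controller driver for this hardware.

# Kernel's view of block devices (always present on Linux):
cat /proc/partitions

# Device nodes the ramdisk has created, if any:
ls -l /dev/sd? /dev/vd? 2>/dev/null || echo "no sd/vd device nodes"

# Driver probe messages (may need root):
dmesg 2>/dev/null | grep -i -e ahci -e 'ata[0-9]' -e scsi | tail -20 || true
```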
Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh
Hi Apart from special cases like the ramdisk's /init, which is a script that needs to run in busybox's shell, everything should be using bash. There's no point us tying ourselves in knots trying to achieve POSIX compliance for the sake of it, when bashisms are super useful. Cheers, Chris On 14 April 2014 17:26, Ben Nemec openst...@nemebean.com wrote: tldr: I propose we use bash explicitly for all diskimage-builder scripts (at least for the short-term - see details below). This is something that was raised on my linting changes to enable set -o pipefail. That is a bash-ism, so it could break in the diskimage-builder scripts that are run using /bin/sh. Two possible fixes for that: switch to /bin/bash, or don't use -o pipefail But I think this raises a bigger question - does diskimage-builder require bash? If so, I think we should just add a rule to enforce that /bin/bash is the shell used for everything. I know we have a bunch of bash-isms in the code already, so at least in the short-term I think this is probably the way to go, so we can get the benefits of things like -o pipefail and lose the ambiguity we have right now. For reference, a quick grep of the diskimage-builder source shows we have 150 scripts using bash explicitly and only 24 that are plain sh, so making the code truly shell-agnostic is likely to be a significant amount of work. In the long run it might be nice to have cross-shell compatibility, but if we're going to do that I think we need a couple of things: 1) Someone to do the work (I don't have a particular need to run dib in not-bash, so I'm not signing up for that :-) 2) Testing in other shells - obviously just changing /bin/bash to /bin/sh doesn't mean we actually support anything but bash. We really need to be gating on other shells if we're going to make a significant effort to support them. It's not good to ask reviewers to try to catch every bash-ism proposed in a change. 
This also relates to some of the unit testing work that is going on right now too - if we had better unit test coverage of the scripts we would be able to do this more easily. Thoughts? Thanks. -Ben ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
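As a tiny illustration of the kind of bash-only behaviour in question: `set -o pipefail` changes a pipeline's exit status, which plain sh does not support:

```shell
#!/bin/bash
# Why `set -o pipefail` is worth having (and why it forces bash):
# without it, an early failure in a pipeline is masked by a
# successful final stage.

false | cat
echo "without pipefail: $?"   # prints 0 -- cat's status wins

set -o pipefail
false | cat
echo "with pipefail: $?"      # prints 1 -- false's status surfaces
```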
Re: [openstack-dev] TripleO fully uploaded to Debian Experimental
Hi On 11 April 2014 08:00, Thomas Goirand z...@debian.org wrote: it is with great joy that I can announce today that TripleO is now fully in Debian [1]. It is currently only uploaded to Debian Experimental. woo! Thanks very much Thomas :) -- Cheers, Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] reviewer update march [additional cores]
Hi

+1

Cheers,
--
Chris Jones

On 8 Apr 2014, at 00:50, Robert Collins robe...@robertcollins.net wrote:

tl;dr: 3 more core members to propose: bnemec greghaynes jdob

On 4 April 2014 08:55, Chris Jones c...@tenshu.net wrote: Hi +1 for your proposed -core changes. Re your question about whether we should retroactively apply the 3-a-day rule to the 3 month review stats, my suggestion would be a qualified no. I think we've established an agile approach to the member list of -core, so if there are one or two people who we would have added to -core before the goalposts moved, I'd say look at their review quality. If they're showing the right stuff, let's get them in and helping. If they don't feel our new goalposts are achievable with their workload, they'll fall out again naturally before long.

So I've actioned the prior vote. I said: Bnemec, jdob, greg etc - good stuff, I value your reviews already, but... So... looking at a few things - long period of reviews:

60 days:
| greghaynes | 121  0  22   99  0  0  81.8% | 14 ( 11.6%) |
| bnemec     | 116  0  38   78  0  0  67.2% | 10 (  8.6%) |
| jdob       |  87  0  15   72  0  0  82.8% |  4 (  4.6%) |

90 days:
| bnemec     | 145  0  40  105  0  0  72.4% | 17 ( 11.7%) |
| greghaynes | 142  0  23  119  0  0  83.8% | 22 ( 15.5%) |
| jdob       | 106  0  17   89  0  0  84.0% |  7 (  6.6%) |

Ben's reviews are thorough, he reviews across all contributors, he shows good depth of knowledge and awareness across TripleO, and is sensitive to the pragmatic balance between 'right' and 'good enough'. I'm delighted to support him for core now. Greg is very active, reviewing across all contributors with pretty good knowledge and awareness. I'd like to see a little more contextual awareness though - there's a few (but not many) reviews where looking more at the big picture of how things fit together would have been beneficial. *however*, I think that's a room-to-improve issue vs not-good-enough-for-core - to me it makes sense to propose him for core too.
> Jay's reviews are also very good and consistent, somewhere between Greg and Ben in terms of bigger-context awareness - so another definite +1 from me.
>
> -Rob
> --
> Robert Collins rbtcoll...@hp.com
> Distinguished Technologist
> HP Converged Cloud

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] config options, defaults, oh my!
Hi

On 8 Apr 2014, at 11:20, Sean Dague s...@dague.net wrote:

> I think Phil is dead on. I'll also share the devstack experience here. Until we provided the way for arbitrary pass through we were basically getting a few patches every week that were "let me configure this variable in the configs", over and over again.

+1 We can't be in the business of prescribing what users can/can't configure in the daemons they are using us to deploy.

Cheers,
Chris
Re: [openstack-dev] [TripleO] reviewer update march
Hi

+1 for your proposed -core changes.

Re your question about whether we should retroactively apply the 3-a-day rule to the 3 month review stats, my suggestion would be a qualified no. I think we've established an agile approach to the member list of -core, so if there are one or two people who we would have added to -core before the goalposts moved, I'd say look at their review quality. If they're showing the right stuff, let's get them in and helping. If they don't feel our new goalposts are achievable with their workload, they'll fall out again naturally before long.

Cheers,
Chris

On 3 April 2014 12:02, Robert Collins robe...@robertcollins.net wrote:

> Getting back in the swing of things...
>
> Hi, like most OpenStack projects we need to keep the core team up to date: folk who are not regularly reviewing will lose context over time, and new folk who have been reviewing regularly should be trusted with -core responsibilities.
>
> In this month's review:
> - Dan Prince for -core
> - Jordan O'Mara for removal from -core
> - Jiri Tomasek for removal from -core
> - Jaromir Coufal for removal from -core
>
> Existing -core members are eligible to vote - please indicate your opinion on each of the changes above in reply to this email. Ghe, please let me know if you're willing to be in tripleo-core. Jan, Jordan, Martyn, Jiri, Jaromir, if you are planning on becoming substantially more active in TripleO reviews in the short term, please let us know.
>
> My approach to this caused some confusion a while back, so I'm keeping the boilerplate :) - I'm going to talk about stats here, but they are only part of the picture: folk that aren't really being /felt/ as effective reviewers won't be asked to take on -core responsibility, and folk who are less active than needed but still very connected to the project may still keep them: it's not pure numbers.
> Also, it's a vote: that is direct representation by the existing -core reviewers as to whether they are ready to accept a new reviewer as core or not. This mail from me merely kicks off the proposal for any changes. But the metrics provide an easy fingerprint - they are a useful tool to avoid bias (e.g. remembering folk who are just short-term active) - human memory can be particularly treacherous - see 'Thinking, Fast and Slow'.
>
> With that prelude out of the way, please see Russell's excellent stats:
> http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
>
> For joining and retaining core I look at the 90 day statistics; folk who are particularly low in the 30 day stats get a heads up so they aren't caught by surprise.
>
> 90 day active-enough stats:
>
> | Reviewer         | Reviews  -2   -1   +1   +2   +A   +/- % | Disagreements* |
> | slagle **        |     655   0  145    7  503  154   77.9% |  36 (  5.5%)  |
> | clint-fewbar **  |     549   4  120   11  414  115   77.4% |  32 (  5.8%)  |
> | lifeless **      |     518  34  203    2  279  113   54.2% |  21 (  4.1%)  |
> | rbrady           |     453   0   14  439    0    0   96.9% |  60 ( 13.2%)  |
> | cmsj **          |     322   0   24    1  297  136   92.5% |  22 (  6.8%)  |
> | derekh **        |     261   0   50    1  210   90   80.8% |  12 (  4.6%)  |
> | dan-prince       |     257   0   67  157   33   16   73.9% |  15 (  5.8%)  |
> | jprovazn **      |     190   0   21    2  167   43   88.9% |  13 (  6.8%)  |
> | ifarkas **       |     186   0   28   18  140   82   84.9% |   6 (  3.2%)  |
> ===
> | jistr **         |     177   0   31   16  130   28   82.5% |   4 (  2.3%)  |
> | ghe.rivero **    |     176   1   21   25  129   55   87.5% |   7 (  4.0%)  |
> | lsmola **        |     172   2   12   55  103   63   91.9% |  21 ( 12.2%)  |
> | jdob             |     166   0   31  135    0    0   81.3% |   9 (  5.4%)  |
> | bnemec           |     138   0   38  100    0    0   72.5% |  17 ( 12.3%)  |
> | greghaynes       |     126   0   21  105    0    0   83.3% |  22 ( 17.5%)  |
> | dougal           |     125   0   26   99    0    0   79.2% |  13 ( 10.4%)  |
> | tzumainn **      |     119   0   30   69   20   17   74.8% |   2 (  1.7%)  |
> | rpodolyaka       |     115   0   15  100    0    0   87.0% |  15 ( 13.0%)  |
> | ftcjeff          |     103   0    3  100    0    0   97.1% |   9 (  8.7%)  |
> | thesheep         |      93   0   26   31   36   21   72.0% |   3 (  3.2%)  |
> | pblaho **        |      88   1    8   37   42   22   89.8% |   3 (  3.4%)  |
> | jonpaul-sullivan |      80   0   33   47    0    0   58.8% |  17 ( 21.2%)  |
> | tomas-8c8 **     |      78   0   15    4   59   27
Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests
Hi

We don't have a strong attachment to stunnel though; I quickly dropped it in front of our CI/CD undercloud and Rob wrote the element so we could repeat the deployment. In the fullness of time I would expect there to exist elements for several SSL terminators, but we shouldn't necessarily stick with stunnel because it happened to be the one I was most familiar with :)

I would think that an httpd would be a good option to go with as the default, because I tend to think that we'll need an httpd running/managing the python code by default.

Cheers,
--
Chris Jones

On 26 Mar 2014, at 13:49, stuart.mcla...@hp.com wrote:

> Just spotted the openstack-ssl element which uses 'stunnel'...
>
> On Wed, 26 Mar 2014, stuart.mcla...@hp.com wrote:
>> All,
>>
>> I know there's a preference for using a proxy to terminate SSL connections rather than using the native python code. There's a good write up of configuring the various proxies here: http://docs.openstack.org/security-guide/content/ch020_ssl-everywhere.html
>>
>> If we're not using native python SSL termination in TripleO we'll need to pick which one of these would be a reasonable choice for initial https support. Pound may be a good choice -- it's lightweight (6,000 lines of C), easy to configure and gives good control over the SSL connections (ciphers etc). Plus, we've experience with pushing large (GB) requests through it.
>>
>> I'm interested if others have a strong preference for one of the other options (stud, nginx, apache) and if so, what are the reasons you feel it would make a better choice for a first implementation.
>>
>> Thanks,
>> -Stuart
Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests
Hi

On 26 March 2014 16:51, Clint Byrum cl...@fewbar.com wrote:

> quite a bit differently than app serving), there is a security implication in having the private SSL keys on the same box that runs the app.

This is a very good point, thanks :)

--
Cheers,
Chris
Re: [openstack-dev] [TripleO][reviews] We're falling behind
Hey

+1, 3 a day seems pretty sustainable.

Cheers,
--
Chris Jones

On 25 Mar 2014, at 20:17, Robert Collins robe...@robertcollins.net wrote:

> TripleO has just seen an influx of new contributors. \o/. Flip side - we're now slipping on reviews /o\.
>
> In the meeting today we had basically two answers: more cores, and more work by cores. We're slipping by 2 reviews a day, which given 16 cores is a small amount. I'm going to propose some changes to core in the next couple of days - I need to go and re-read a bunch of reviews first - but, right now we don't have a hard lower bound on the number of reviews we request cores commit to (on average). We're seeing 39/day from the 16 cores - which isn't enough as we're falling behind. That's 2.5 or so. So - I'd like to ask all cores to commit to doing 3 reviews a day, across all of tripleo (e.g. if your favourite stuff is all reviewed, find two new things to review even if outside your comfort zone :)).
>
> And we always need more cores - so if you're not a core, this proposal implies that we'll be asking that you a) demonstrate you can sustain 3 reviews a day on average as part of stepping up, and b) be willing to commit to that. Obviously if we have enough cores we can lower the minimum commitment - so I don't think this figure should be fixed in stone.
>
> And now - time for a loose vote - who (who is a tripleo core today) supports / disagrees with this proposal - let's get some consensus here. I'm in favour, obviously :), though it is hard to put reviews ahead of direct itch scratching, it's the only way to scale the project.
>
> -Rob
> --
> Robert Collins rbtcoll...@hp.com
> Distinguished Technologist
> HP Converged Cloud
[openstack-dev] [Neutron] Error while running tox -v -epy27 tests.unit.ml2
Hello,

I was talking with some people on the openstack-neutron IRC channel, trying to get assistance to root cause why the ml2 unit tests are failing.

I have cloned a fresh neutron tree that I have pegged back to hash b78eea6. I am attempting to run the following on a Vagrant Ubuntu 12.04 VM running in VirtualBox on a Mac. The neutron code resides on my Mac with a mount point in the VM so I can access the code from there. I am using tox version 1.6.1, as the latest version didn't seem to work and this was recommended by one of the users on IRC.

Here is the command I'm running, with the output below:

tox -v -epy27 tests.unit.ml2

Non-zero exit code (2) from test listing. stdout='\xb3)\x01@o@dneutron.tests.functional.agent.linux.test_async_process.TestAsyncProcess.test_async_process_respawns8\x10\x05\x8e\xb3)\x01@y@nneutron.tests.functional.agent.linux.test_async_process ... (COPIOUS output omitted for brevity) ... error: testr failed (3)
ERROR: InvocationError: '/Users/cjones/dev/siaras/stack/vagrant/project/openstack/n3/.tox/py27/bin/python -m neutron.openstack.common.lockutils python setup.py testr --slowest --testr-args=tests.unit.ml2'
__ summary __
ERROR: py27: commands failed

If I take the mount point out of the equation, I don't get this copious output, but fail with the following:

Killed
== FAIL: process-returncode
tags: worker-3
Binary content: traceback (test/plain; charset=utf8)
Ran 920 (+473) tests in 296.233s (+190.902s)
FAILED (id=2, failures=2)
error: testr failed (1)
ERROR: InvocationError: '/home/vagrant/neutron/.tox/py27/bin/python -m neutron.openstack.common.lockutils python setup.py testr --slowest --testr-args=tests.unit.ml2'

If anyone can provide me with some assistance on why this would be erring out, or how to debug and root cause this, it would be much appreciated.

Cheers,
Chris
[openstack-dev] [Ironic] Manual scheduling nodes in maintenance mode
Hey

I wanted to throw out an idea that came to me while I was working on diagnosing some hardware issues in the TripleO CD rack at the sprint last week.

Specifically, if a particular node has been dropped from automatic scheduling by the operator, I think it would be super useful to be able to still manually schedule the node. Examples might be that someone is diagnosing a hardware issue and wants to boot an image that has all their favourite diagnostic tools in it, or they might be booting an image they use for updating firmwares, etc (frankly, just being able to boot a generic, unmodified host OS on a node can be super useful if you're trying to crash cart the machine for something hardware related).

Any thoughts? :)

--
Cheers,
Chris
Re: [openstack-dev] [Tripleo] tripleo-cd-admins team update / contact info question
Hi

On 25 February 2014 14:30, Robert Collins robe...@robertcollins.net wrote:

> So - I think we need to define two things:
> - a stock way for $randoms to ask for support w/ these clouds that will be fairly low latency and reliable.
> - a way for us to escalate to each other *even if folk happen to be away from the keyboard at the time*.
>
> And possibly a third:
> - a way for openstack-infra admins to escalate to us in the event of OMG things happening. Like, we send 1000 VMs all at once at their git mirrors or something.

I think action zero is to define an SLA, so everyone has a very clear picture of what to expect from us, and we have a clear picture of what we're signing up to provide. Also, I'd note that talking about non-IRC escalation methods, coverage of weekends, etc. is moving us into a pretty different realm than we have been in, so it might be worth checking that all the current people (who might not all have been in the meeting) are OK with fixing a cloud on a Sunday :)

Then we need to map out who can be contacted at any given time of week, and how they can be contacted. Hopefully follow-the-sun covers us with normal working hours, apart from the gap between US/Pacific finishing their week and New Zealand starting the next week. Since we're essentially relying on volunteer efforts to service these production clouds, we would need to let people be pretty flexible about when they can be contacted.

Then we need to publish that information somewhere that the relevant folk can see, with some kind of monitoring that can escalate beyond IRC if it's not getting a response. James mentioned PagerDuty, and I've had good experiences with it in previous operational roles.

Then we need to write a playbook so each outage isn't a voyage of discovery unless it's something completely new, and commit to updating the playbook after each outage, with what we learned that time.
Have we considered reaching out to OpenStack sponsors who have operational folk, to see if they would be interested in contributing human resources to this?

--
Cheers,
Chris
Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO
Hi

Assuming I am interpreting your mail correctly, I think option A makes vastly more sense, with one very specific provision I'd add. More on that in a moment.

Option B, the idea that we would mangle a package-installed environment to suit our desired layout, is not going to work out well for us at all. Firstly, I think we'll find there are a ton of problems here (just thinking about assuming that we can alter a system username, in the context of the crazy things you can hook into libnss, makes me shiver). Secondly, it is also going to cause a significant increase in distro-specific manglery in our DIB elements, no?

Right now, handling the username case could be simplified if we recognised "OK, this is a thing that can vary", abstracted the usernames into environment variables, and allowed the distros to override them in their base OS element. That is not very much code, it would simplify the elements from where we are today, and the documentation could be auto-generated to account for it, or made to refer to the usernames in a way the operator can dereference locally. (Maybe we can't do something like that for every friction point we hit, but I'd wager we could for most.)

Back to the specific provision I mentioned for option A. Namely, put the extra work you mention on the distros. If they want to get their atypical username layout into TripleO, ask them to provide a fork of our documentation that accounts for it, and keep that up to date. If their choice is to do that, or have their support department maintain a fork of all their OpenStack support material (because it might be wildly different if the customer is using TripleO), I suspect they'd prefer to do a bit of work on our docs.
I completely agree with your comment later in the thread that our job is to be the upstream installer, so I suggest we do our best to only focus on upstream, but in a way that enables our downstreams to engage with our element repositories, by shouldering most of the burdens of their divergence from upstream.

For me, the absolute worst case scenario in any distro's adoption of TripleO is that they are unwilling to be part of the community that maintains tripleo-image-elements, and instead have their own elements that are streamlined for their OS, but lack our community's polish and don't contribute back their features/fixes. I think option B would drive any serious distro away from us purely on the grounds that it would be a nightmare for them to support.

Cheers,
Chris
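The environment-variable idea for option A would be a very small amount of element code. A sketch, assuming a hypothetical NOVA_USER variable rather than any existing tripleo-image-elements convention:

```shell
#!/bin/bash
# Sketch: take the service username from the environment if a distro's
# base OS element exported one, otherwise fall back to the upstream
# default. (NOVA_USER is a hypothetical variable name.)
NOVA_USER=${NOVA_USER:-nova}

echo "would create service user: $NOVA_USER"
```

A distro element whose packages use a different account would simply export its own value (e.g. `export NOVA_USER=openstack-nova`, again hypothetical) in an environment hook, and the shared element needs no distro-specific branches.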
Re: [openstack-dev] [TripleO] our update story: can people live with it?
Hi

Not a tarball. The node would notice from Heat metadata that it should update to a new image, and would fetch that image and sync the contents to its /. This would leave it bit-for-bit identical to a fresh deployment of the new image, at least on disk. The running state would differ, and that still requires some design and implementation to figure out.

Cheers,
--
Chris Jones

On 23 Jan 2014, at 12:57, Angus Thomas atho...@redhat.com wrote:

> On 22/01/14 20:54, Clint Byrum wrote:
>
>>> I don't understand the aversion to using existing, well-known tools to handle this?
>>
>> These tools are of course available to users and nobody is stopping them from using them. We are optimizing for not needing them. They are there and we're not going to explode if you use them. You just lose one aspect of what we're aiming at.
>>
>>> I believe that having image based deploys will be well received as long as it is simple to understand. A hybrid model (blending 2 and 3, above) here I think would work best, where TripleO lays down a baseline image and the cloud operator would employ a well-known and supported configuration tool for any small diffs. These tools are popular because they control entropy and make it at least more likely that what you tested ends up on the boxes.
>>
>> A read-only root partition is a much stronger control on entropy.
>>
>>> The operator would then be empowered to make the call for any major upgrades that would adversely impact the infrastructure (and ultimately the users/apps). He/she could say "this is a major release, let's deploy the image". Something logically like this seems reasonable: if (system_change > 10%) { use TripleO; } else { use Existing_Config_Management; }
>>
>> I think we can make deploying minor updates minimally invasive. We've kept it simple enough, this should be a fairly straightforward optimization cycle. And the win there is that we also improve things for the 11% change.
Hi Clint,

For deploying minimally-invasive minor updates, the idea, if I've understood it correctly, would be to deploy a tarball which replaced selected files on the (usually read-only) root filesystem. That would allow for selective restarting of only the services which are directly affected. The alternative, pushing out a complete root filesystem image, would necessitate the same amount of disruption in all cases.

There are a handful of costs with that approach which concern me. It simplifies the deployment itself, but increases the complexity of preparing the deployment. The administrator is going to have to identify the services which need to be restarted, based on the particular set of libraries which are touched in their partial update, and put together the service restart scripts accordingly.

We're also making the administrator responsible for managing the sequence in which incremental updates are deployed. Since each incremental update will re-write a particular set of files, any machine which gets updates 1, 2, 3, and then (through an oversight) has update 5 deployed, would end up in an odd state which would require additional tooling to detect. Package based updates, with versioning and dependency tracking on each package, mitigate that risk.

Then there's the relationship between the state of running machines, with applied partial updates, and the images which are put onto new machines by Ironic. We would need to apply the partial updates to the images which Ironic writes, or to have the tooling to ensure that newly deployed machines immediately apply the set of applicable partial updates, in sequence.

Solving these issues feels like it'll require quite a lot of additional tooling.

Angus
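Part of the administrator burden described above (working out which files a partial update must carry) could in principle be computed rather than hand-maintained, by diffing the currently deployed tree against the new image's tree. A minimal sketch, assuming both trees are available as local directories; `changed_files` is a hypothetical helper, not existing TripleO tooling:

```shell
#!/bin/bash
# List files that are new or changed in $2 (new image tree) relative
# to $1 (current root tree) - i.e. the minimal file set a partial
# update would need to ship. Deleted files are ignored in this sketch.
changed_files() {
    local old=$1 new=$2 f
    ( cd "$new" && find . -type f ) | while read -r f; do
        # cmp -s is also non-zero when the file is absent from $old,
        # so newly added files are reported too.
        cmp -s "$old/$f" "$new/$f" || echo "${f#./}"
    done
}
```

The set of services to restart could then be derived from that list (e.g. by mapping the touched libraries back to the units that load them), rather than curated by hand for each update.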
Re: [openstack-dev] [TripleO] our update story: can people live with it?
Hi

On 22 January 2014 21:33, Fox, Kevin M kevin@pnnl.gov wrote:

> I think we're pretty far off in a tangent though. My main point was, if you can't selectively restart services as needed, I'm not sure how useful patching the image really is over a full reboot. It should take on the same order of magnitude service unavailability I think.

The in-place upgrade (currently known as takeovernode) is not yet well designed, and while there is a script in tripleo-incubator called takeovernode, nobody is likely to resume working on it until a little later in our roadmap. The crude hack we have atm does no detection of services that need to be restarted, but that is not intended to suggest that we don't care about such a feature :)

I think Clint has covered pretty much all the bases here, but I would re-iterate that in no way do we think the kind of upgrade we're working on at the moment (i.e. a nova rebuild driven instance reboot) is the only one that should exist. We know that in-place upgrades need to happen for tripleo's full story to be taken seriously, and we will get to it. If folk have suggestions for behaviours/techniques/tools, those would be great to capture, probably in https://etherpad.openstack.org/p/tripleo-image-updates

http://manpages.ubuntu.com/manpages/oneiric/man1/checkrestart.1.html is one such tool that we turned up in earlier research about how to detect services that need to be restarted after an upgrade. It's not a complete solution on its own, but it gets us some of the way.

(Also, just because we favour low-entropy golden images for all software changes, doesn't mean that any given user can't choose to roll out an upgrade to some piece(s) of software via any other mechanism they choose, if that is what they feel is right for their operation. A combination of the two strategies is entirely possible.)

--
Cheers,
Chris
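The core of what checkrestart detects can be sketched in a few lines of shell: a process whose memory maps still reference a deleted file is usually holding an old library and is a restart candidate. This is only the crude heart of the idea; the real tool also filters out false positives such as deleted temporary files:

```shell
#!/bin/bash
# Print "PID executable" for each process that still maps a file
# which has since been deleted (e.g. an upgraded shared library).
find_stale_processes() {
    local pid
    for pid in /proc/[0-9]*; do
        # Unreadable maps (permissions, already-exited processes)
        # are silently skipped.
        grep -q ' (deleted)' "$pid/maps" 2>/dev/null || continue
        echo "${pid#/proc/} $(readlink "$pid/exe" 2>/dev/null)"
    done
    return 0
}

find_stale_processes
```

Run after a library upgrade, the output is the candidate list of services to restart; mapping each PID back to its init script/unit is the remaining (distro-specific) step.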
Re: [openstack-dev] [ironic] Disk Eraser
Hi

If you need more than /dev/zero, scrub should be packaged in most distros and offers a choice of high grade algorithms: https://code.google.com/p/diskscrub/

Cheers,
--
Chris Jones

On 15 Jan 2014, at 14:31, Alan Kavanagh alan.kavan...@ericsson.com wrote:

> Hi fellow OpenStackers
>
> Does anyone have any recommendations on open source tools for disk erasure/data destruction software? I have so far looked at DBAN and disk scrubber and was wondering if the ironic team have some better recommendations?
>
> BR
> Alan
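For anyone curious what scrub-style wiping amounts to, here is a deliberately crude illustration of the multi-pass overwrite idea against an ordinary file. Do not use this for real sanitisation - tools like scrub and shred implement verified multi-pass patterns and handle device-level details this ignores:

```shell
#!/bin/bash
# Toy multi-pass wipe: one pseudorandom pass, then one zero pass,
# preserving the target's size. Illustration only - NOT a substitute
# for scrub/shred on real media.
wipe() {
    local target=$1 size
    size=$(stat -c %s "$target")
    head -c "$size" /dev/urandom > "$target"   # pass 1: random data
    head -c "$size" /dev/zero > "$target"      # pass 2: zeros
}

# Demo against a throwaway file standing in for a block device.
printf 'example secret data' > /tmp/demo.img
wipe /tmp/demo.img
```

On a real device you would point a proper tool at /dev/sdX instead; the point here is only that "erasure" is repeated whole-device overwrites, which is why it is slow and why Ironic wants it automated.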
Re: [openstack-dev] a common client library
Hi

Once a common library is in place, is there any intention to (or resistance against) collapsing the clients into a single project, or even a single command (a la busybox)? (I'm thinking reduced load for packagers, simpler installation for users, etc.)

Cheers,
--
Chris Jones

On 15 Jan 2014, at 19:37, Doug Hellmann doug.hellm...@dreamhost.com wrote:

> Several people have mentioned to me that they are interested in, or actively working on, code related to a common client library -- something meant to be reused directly as a basis for creating a common library for all of the openstack clients to use. There's a blueprint [1] in oslo, and I believe the keystone devs and unified CLI teams are probably interested in ensuring that the resulting API ends up meeting all of our various requirements.
>
> If you're interested in this effort, please subscribe to the blueprint and use that to coordinate efforts so we don't produce more than one common library. ;-)
>
> Thanks,
> Doug
>
> [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2
Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements
Hi

My guess for the easiest answer to that: distro vendor support.

Cheers,
--
Chris Jones

On 7 Jan 2014, at 20:23, Clint Byrum cl...@fewbar.com wrote:

> What would be the benefit of using packages? We've specifically avoided packages because they complect[1] configuration and system state management with software delivery. The recent friction we've seen with MySQL is an example where the packages are not actually helping us, they're hurting us, because they encode too much configuration instead of just delivering binaries.
>
> Perhaps those of us who have been involved a bit longer haven't done a good job of communicating our reasons. I for one believe in the idea that image based updates eliminate a lot of the entropy that comes along with a package based updating system. For that reason alone I tend to look at any packages that deliver configurable software as potentially dangerous (note that I think they're wonderful for libraries, utilities, and kernels. :)
>
> [1] http://www.infoq.com/presentations/Simple-Made-Easy
>
> Excerpts from James Slagle's message of 2014-01-07 12:01:07 -0800:
>> Hi,
>>
>> I'd like to discuss some possible ways we could install the OpenStack components from packages in tripleo-image-elements. As most folks are probably aware, there is a fork of tripleo-image-elements called tripleo-puppet-elements which does install using packages, but it does so using Puppet to do the installation and for managing the configuration of the installed components. I'd like to kind of set that aside for a moment and just discuss how we might support installing from packages using tripleo-image-elements directly and not using Puppet.
>>
>> One idea would be to add support for a new type (or likely 2 new types: rpm and dpkg) to the source-repositories element. source-repositories already knows about the git, tar, and file types, so it seems somewhat natural to have additional types for rpm and dpkg.
>> A complication with that approach is that the existing elements assume they're setting up everything from source. So, if we take a look at the nova element, and specifically install.d/74-nova, that script does stuff like install a nova service, adds a nova user, creates needed directories, etc. All of that wouldn't need to be done if we were installing from rpm or dpkg, b/c presumably the package would take care of all that.
>>
>> We could fix that by making the install.d scripts only run if you're installing a component from source. In that sense, it might make sense to add a new hook, source-install.d, and only run those scripts if the type is a source type in the source-repositories configuration. We could then have a package-install.d to handle the installation from the packages type. The install.d hook could still exist to do things that might be common to the 2 methods.
>>
>> Thoughts on that approach or other ideas? I'm currently working on a patchset I can submit to help prove it out. But, I'd like to start discussion on the approach now to see if there are other ideas or major opposition to that approach.
Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements
Hi

Assuming we want to do this, but not necessarily agreeing that we do want to, I would suggest:

1) I think it would be nice if we could avoid separate dpkg/rpm types by having a package type and reusing the package map facility.
2) Clear up the source-repositories inconsistency by making it clear that multiple repositories of the same type do not work in source-repositories-nova (this would be a behaviour change, but would mesh more closely with the docs, and would require refactoring the 4 elements we ship atm with multiple git repos listed).
3) Extend arg_to_element to parse element names like nova/package, nova/tar, nova/file and nova/source (defaulting to source), storing the choice for later.
4) When processing the nova element, apply only the appropriate entry in source-repositories-nova.
5) Keep install.d as-is and make the scripts in the elements be aware of the previously stored choice of element origin (as they add support for a package origin).
6) Probably rename source-repositories to something more appropriate.

As for whether we should do this or not... like Clint I want to say no, but I'm also worried about people forking t-i-e and not pushing their fixes/improvements and new elements back up to us because we're too diverged. If this is a real customer need, I would come down in favour of doing it if the cost of the above implementation (or an alternate one) isn't too high.

Cheers,
--
Chris Jones

On 7 Jan 2014, at 20:01, James Slagle james.sla...@gmail.com wrote:

> Hi,
>
> I'd like to discuss some possible ways we could install the OpenStack components from packages in tripleo-image-elements. As most folks are probably aware, there is a fork of tripleo-image-elements called tripleo-puppet-elements which does install using packages, but it does so using Puppet to do the installation and for managing the configuration of the installed components.
> I'd like to kind of set that aside for a moment and just discuss how we might support installing from packages using tripleo-image-elements directly and not using Puppet.
>
> One idea would be to add support for a new type (or likely 2 new types: rpm and dpkg) to the source-repositories element. source-repositories already knows about the git, tar, and file types, so it seems somewhat natural to have additional types for rpm and dpkg.
>
> A complication with that approach is that the existing elements assume they're setting up everything from source. So, if we take a look at the nova element, and specifically install.d/74-nova, that script does stuff like install a nova service, adds a nova user, creates needed directories, etc. All of that wouldn't need to be done if we were installing from rpm or dpkg, b/c presumably the package would take care of all that.
>
> We could fix that by making the install.d scripts only run if you're installing a component from source. In that sense, it might make sense to add a new hook, source-install.d, and only run those scripts if the type is a source type in the source-repositories configuration. We could then have a package-install.d to handle the installation from the packages type. The install.d hook could still exist to do things that might be common to the 2 methods.
>
> Thoughts on that approach or other ideas? I'm currently working on a patchset I can submit to help prove it out. But, I'd like to start discussion on the approach now to see if there are other ideas or major opposition to that approach.
>
> --
> -- James Slagle
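Point 3 of the list above (parsing nova/package-style element names) would be a very small amount of shell; a sketch, with parse_element as a hypothetical helper name rather than anything in diskimage-builder today:

```shell
#!/bin/bash
# Split an element argument like "nova/package" into element name and
# install type, defaulting the type to "source" when none is given.
parse_element() {
    local arg=$1
    local name=${arg%%/*}   # part before the first "/"
    local type=source       # default origin
    [ "$arg" != "$name" ] && type=${arg#*/}
    echo "$name $type"
}

parse_element nova/package   # -> nova package
parse_element nova           # -> nova source
```

The stored type could then gate which entry of source-repositories-nova is applied (point 4) and which of source-install.d / package-install.d runs.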
Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements
Hi

(FWIW I suggested using the element arguments like nova/package to avoid a huge and crazy environment by using DIB_REPO foo for every element)

Cheers,
--
Chris Jones

On 7 Jan 2014, at 20:32, James Slagle james.sla...@gmail.com wrote:

On Tue, Jan 7, 2014 at 3:22 PM, Fox, Kevin M kevin@pnnl.gov wrote:

Sounds very useful. Would there be a diskimage-builder flag then to say you prefer packages over source? Would it fall back to source if you specified packages and there were only source-install.d for a given element?

Yes, you could pick which you wanted via environment variables, similar to the way you can pick if you want git head, a specific gerrit review, or a released tarball today via $DIB_REPOTYPE_name, etc. See: https://github.com/openstack/diskimage-builder/blob/master/elements/source-repositories/README.md for more info about that.

If you specified something that didn't exist, it should probably fail with an error. The default behavior would still be installing from git master source if you specified nothing though.

Thanks,
Kevin

From: James Slagle [james.sla...@gmail.com]
Sent: Tuesday, January 07, 2014 12:01 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

Hi,

I'd like to discuss some possible ways we could install the OpenStack components from packages in tripleo-image-elements. As most folks are probably aware, there is a fork of tripleo-image-elements called tripleo-puppet-elements which does install using packages, but it does so using Puppet to do the installation and for managing the configuration of the installed components. I'd like to kind of set that aside for a moment and just discuss how we might support installing from packages using tripleo-image-elements directly and not using Puppet. One idea would be to add support for a new type (or likely 2 new types: rpm and dpkg) to the source-repositories element.
--
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
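The source-install.d / package-install.d split being discussed could be dispatched along these lines. The hook directory names exist only in the proposal, and treating "package" as a possible DIB_REPOTYPE_&lt;element&gt; value is an assumption about how the idea might slot into the existing per-element variables, not current diskimage-builder behaviour.

```python
import os

# Sketch only: which install hook directory should run for an element,
# given a per-element override such as DIB_REPOTYPE_nova=package.
SOURCE_TYPES = ("git", "tar", "file")

def install_hook_dir(element, default="git"):
    """Return the hook directory to run for the given element."""
    origin = os.environ.get("DIB_REPOTYPE_%s" % element, default)
    if origin == "package":
        return "package-install.d"
    if origin in SOURCE_TYPES:
        # git, tar and file are all "from source" origins.
        return "source-install.d"
    # Per the thread: specifying something that doesn't exist should fail.
    raise ValueError("unknown origin %r for element %r" % (origin, element))
```

Scripts in the shared install.d hook would still run in either case.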
Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements
Hi

On 7 Jan 2014, at 21:17, James Slagle james.sla...@gmail.com wrote:

> Could you expand on this a bit? I'm not sure what inconsistency you're referring to.

That multiple repos work, but the docs don't say so, and the DIB_REPO* variables don't support multiple repos.

> I wonder if we could have a global build option as well that said to use packages or source

Definitely. Maybe DIB_PREFER_ORIGIN?

> I feel that not offering a choice will only turn people off from using t-i-e.

+1 (even if I wish that wasn't the case!)

Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
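The suggested DIB_PREFER_ORIGIN is, at this point, entirely hypothetical, but one natural way for it to compose with the existing per-element DIB_REPOTYPE_* overrides would be a simple precedence chain:

```python
import os

def element_origin(element):
    """Resolve an element's install origin.

    Sketch of an assumed precedence: a per-element DIB_REPOTYPE_*
    override wins, then the (hypothetical) global DIB_PREFER_ORIGIN,
    and finally the current default of building from git source.
    """
    return (os.environ.get("DIB_REPOTYPE_%s" % element)
            or os.environ.get("DIB_PREFER_ORIGIN")
            or "git")
```

That keeps today's behaviour when neither variable is set, while letting a user say "prefer packages everywhere" in one place.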
Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements
Hi

On 7 Jan 2014, at 22:18, Clint Byrum cl...@fewbar.com wrote:

> Packages do the opposite, and encourage entropy by promising to try and update software

Building with packages doesn't require updating running systems with packages, any more than building with git requires updating running systems with git pull. One can simply build (and test!) a new image with updated packages and rebuild/takeover nodes.

Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements
Hi

On 7 Jan 2014, at 23:04, Clint Byrum cl...@fewbar.com wrote:

> My question still stands, what are the real advantages? So far the only one that matters to me is makes it easier for people to think about using it.

If I were to put on my former sysadmin hat, I would always strongly prefer to use packages for things, so I have the weight of the distro vendor behind me (particularly if I only have a few hundred servers and want a 6 month upgrade cadence with easy access to security fixes between upgrades).

I'm not necessarily advocating for or against tripleo supporting packages as a source of openstack, but I do think it is likely that some/many users will have their reasons for wanting to leverage our tools without following all of our preferred processes.

What I am advocating, though, is that if there is a need and someone gives us a patch that satisfies it without hurting our preferred processes, we look favourably on it. I would rather have users on the road to doing things our way, than be immediately turned away or forced into dramatically forking us.

Cheers,
--
Chris Jones

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [TripleO] Ubuntu element switched to Saucy
Hi

Apologies for being a little late announcing this, but the Ubuntu element in diskimage-builder has been switched[1] to defaulting to the Saucy release (i.e. 13.10). Please file bugs if you find any regressions!

[1] https://review.openstack.org/#/c/58714/

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] capturing build details in images
Hi

On 4 December 2013 22:19, Robert Collins robe...@robertcollins.net wrote:

> So - what about us capturing this information outside the image: we can create a uuid for the build, and write a file in the image with that uuid, and outside the image we can write

+1 I think having a UUID inside the image is a spectacularly good idea generally, and this seems like a good way to solve the general problem of what to put in the image.

It would also be nice to capture the build logs automatically to $UUID-build.log or something, for folk who really really care about audit trails and reproducibility.

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
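The UUID-plus-log idea above could be wrapped around a build command roughly as follows. Everything here is a sketch with invented names: DIB_BUILD_UUID is not a real diskimage-builder variable, just a stand-in for however the UUID would be passed into the image.

```python
import os
import subprocess
import uuid

def run_logged_build(build_cmd, log_dir="."):
    """Give a build a UUID and capture its log outside the image.

    Sketch: the UUID is exposed to the build via a hypothetical
    DIB_BUILD_UUID variable (so an element could write it into the
    image), and all build output is captured to <uuid>-build.log for
    anyone who cares about audit trails and reproducibility.
    """
    build_id = str(uuid.uuid4())
    log_path = os.path.join(log_dir, "%s-build.log" % build_id)
    env = dict(os.environ, DIB_BUILD_UUID=build_id)
    with open(log_path, "w") as log:
        subprocess.check_call(build_cmd, stdout=log,
                              stderr=subprocess.STDOUT, env=env)
    return build_id, log_path
```

The metadata written outside the image (element list, repo revisions, etc.) could then be keyed on the same UUID.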
Re: [openstack-dev] [Tripleo] Core reviewer update Dec
Hi

On 4 December 2013 07:12, Robert Collins robe...@robertcollins.net wrote:

> - Ghe Rivero for -core

+1

> - Jan Provaznik for removal from -core
> - Jordan O'Mara for removal from -core
> - Martyn Taylor for removal from -core
> - Jiri Tomasek for removal from -core
> - Jaromir Coufal for removal from -core

I'm skipping voting on these for now, since not all have responded, but in general I am +1 on de-core-ing folk who have shifted their focus elsewhere. I thank them for their efforts on TripleO to date, and hope that the winds of time and focus bring them back to us at some point in the future :)

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC: reverse the default Gerrit sort order
Hey

On 11 November 2013 12:36, Monty Taylor mord...@inaugust.com wrote:

> In the meantime, I've been making complex search queries in the search box and then just saving the resulting URL as 'saved search' link.

I got bored enough with maintaining them in the tiny search box that I quickly whipped up: https://github.com/cmsj/os-oddsods/blob/master/reviews.py

(but it still doesn't solve the sort order problem, which is the thing I would most like to have the ability to control!)

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] Version numbering of TripleO releases
Hi

On 31 October 2013 05:09, Roman Podoliaka rpodoly...@mirantis.com wrote:

> 0.MAJOR.MINOR versioning totally makes sense to me until we get to 1.0.0.

+1 Your examples are both excellent. Thanks very much for taking the release baton for now :)

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] TripleO core reviewer update - november
Hi

On 30 October 2013 09:06, Robert Collins robe...@robertcollins.net wrote:

> - James Slagle for -core

Very +1

> - Arata Notsu to be removed from -core

+1

> - Devananda van der Veen to be removed from -core

+1

Thanks to Arata and Devananda for their efforts to date (and of course the awesome work they are doing currently in other projects :)

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] Undercloud Ceilometer
Hi

On 4 October 2013 16:28, Ladislav Smola lsm...@redhat.com wrote:

> test it. Anybody volunteers for this task? There will be a hard part: doing the right configurations (firewall, keystone, snmpd.conf) so it's all configured in a clean and secured way. That would require a seasoned sysadmin to at least observe the thing. Any volunteers here? :-)

I'm not familiar at all with Ceilometer, but I'd be happy to discuss how/where things like snmpd are going to be exposed, and look over the resulting bits in tripleo :)

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre
Hi

On 25 September 2013 04:15, Robert Collins robe...@robertcollins.net wrote:

> E.g. for any node I should be able to ask:
> - what failure domains is this in? [e.g. power-45, switch-23, ac-15, az-3, region-1]
> - what locality-of-reference features does this have? [e.g. switch-23, az-3, region-1]
> - where is it? [e.g. DC 2, pod 4, enclosure 2, row 5, rack 3, RU 30, cartridge 40]
>
> So, what do you think?

As a recovering data-centre person, I love the idea of being able to map a given thing to not only its physical location, but its failure domain. +1

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
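The three queries above can be illustrated with a toy data model. The field names here are invented for illustration only, not a proposed schema:

```python
# Toy model: a node carries its failure domains, locality-of-reference
# features, and physical location as plain data, so the three queries
# become trivial lookups.

def make_node(name, failure_domains, locality, location):
    return {"name": name,
            "failure_domains": set(failure_domains),
            "locality": set(locality),
            "location": location}

node = make_node(
    "node-1",
    failure_domains=["power-45", "switch-23", "ac-15", "az-3", "region-1"],
    locality=["switch-23", "az-3", "region-1"],
    location={"dc": 2, "pod": 4, "enclosure": 2, "row": 5,
              "rack": 3, "ru": 30, "cartridge": 40},
)

def shares_failure_domain(a, b):
    """True if two nodes could fail together (any shared failure domain)."""
    return bool(a["failure_domains"] & b["failure_domains"])
```

A scheduler could then use shares_failure_domain to avoid placing replicas on nodes that can fail together.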
Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core
Hi

On 27 August 2013 22:25, Robert Collins robe...@robertcollins.net wrote:

> So - calling for votes for Derek to become a TripleO core reviewer

+1

> I think we're nearly at the point where we can switch to the 'two +2's' model - what do you think?

Selfishly I'd quite like to see a little more EU core reviewer presence, but in reality there's not many hours where we'll be potentially unable to land things. That aside, I like the idea.

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Python overhead for rootwrap
Hi

On 2 August 2013 13:14, Daniel P. Berrange berra...@redhat.com wrote:

> for managing VMs. Nova isn't using as much as it could do though. Nova isn't using any of libvirt's storage or network related APIs currently, which could obsolete some of its uses of rootwrap.

That certainly sounds like a useful thing that could happen regardless of what happens with sudo/rootwrap.

>> * DBus isn't super pleasing to work with as a developer or a sysadmin
>
> No worse than OpenStack's own RPC system

Replacing a thing we don't really like with another thing that isn't super awesome may not be a good move :)

> As a host-local service only, I'm not sure high availability is really a relevant issue.

So, I mentioned that because exec()ing out to a shell script has way fewer ways it can go wrong. exec() doesn't go away and forget the thing you just asked for because something is being upgraded, or restarted, or crashed. I'm a little rusty with DBus, but I don't think those sorts of things are well catered for. Perhaps we don't care about that, but the change would be big enough to at least figure out whether we care.

> but I still think it is better to have your root/non-root barrier defined in terms of APIs. It is much simpler to validate well defined parameters to API calls, than to parse and validate shell command lines. Shell commands have a nasty habit of changing over time, or being inconsistent across distros, or have ill defined error handling / reporting behaviour.

I do agree that privileged operations should be well defined and separated, but I think in almost all cases you're going to find that the shell commands are just moving to a different bit of code and will still be fragile, just fragile inside a privileged daemon instead of fragile inside a shell script. To pick a random example, nova's calls to iptables. Something, somewhere is still going to be composing an iptables command out of fragments and executing /sbin/iptables and hoping for a detailed answer.
Regardless of the mechanism used to implement this, I think that from the perspective of someone hacking on the code that needs to make a privileged call, and the code that implements that privileged call, the mechanism for the call should be utterly transparent. As in: you are just calling a method with some arguments in one place, and implementing that method and returning something in another place. That could be implemented on top of sudo, rootwrap, dbus, AMQP, SOAP, etc, etc.

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
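The "transparent privileged call" idea above can be sketched as a proxy object: the caller just invokes a method, and the transport behind it could be sudo, rootwrap, DBus, AMQP, or anything else. All names here are made up for illustration; this is not any real OpenStack API.

```python
# Sketch: the caller-side of a privileged-call boundary. The caller
# writes priv.some_operation(args); only the transport object knows
# how the call actually crosses the privilege barrier.

class PrivilegedProxy(object):
    def __init__(self, transport):
        self._transport = transport  # e.g. a rootwrap or DBus backend

    def __getattr__(self, name):
        # Any unknown attribute becomes a remote/privileged method call.
        def call(*args, **kwargs):
            return self._transport.invoke(name, args, kwargs)
        return call

class EchoTransport(object):
    """Stand-in transport that just records what was asked of it."""
    def invoke(self, name, args, kwargs):
        return (name, args, kwargs)

priv = PrivilegedProxy(EchoTransport())
```

Swapping EchoTransport for a real backend would change nothing on the caller's side, which is the point being made above.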
Re: [openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder
Hi

On 25 July 2013 14:20, Derek Higgins der...@redhat.com wrote:

> which only gives people an incorrect sense of security.

I agree with your analysis of the effects of the sudoers file, and I think it makes a great argument for recommending people run the main command itself with sudo, rather than a blanket passwordless sudo. But really, all we need to say is "this tool needs to be run as root" and let people make their own decision :)

--
Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] headsup - transient test failures on py26 ' cannot import name OrderedDict'
Hi

On 17 July 2013 21:27, Robert Collins robe...@robertcollins.net wrote:

> Surely that's fixable by having a /opt/ install of Python 2.7 built for RHEL? That would make life so much easier for all concerned, and is super

Possibly not easier for those tasked with keeping OS security patches up to date, which is part of what a RHEL customer is paying Red Hat a bunch of money to do.

Cheers,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] mid-cycle sprint?
Hey

I'm away from 5th-15th August and if possible I would prefer to be at home for 11th September. I'll start putting out feelers for family to come and stay to help with childcare while I'm out :)

No preferences on the location.

Cheers,
Chris

On 12 July 2013 17:02, Devananda van der Veen devananda@gmail.com wrote:

> Neither week is possible for me. There's that thing in the desert... So while I'd like to attend, I don't think my absence will affect the rest of you being productive :)
>
> Cheers,
> Devananda
>
> On Wed, Jul 10, 2013 at 8:54 PM, Robert Collins robe...@robertcollins.net wrote:
>
>> Clint suggested we do a mid-cycle sprint at the weekly meeting a fortnight ago, but ETIME and stuff - so I'm following up.
>>
>> HP would be delighted to host a get-together of TripleO contributors [or 'I will be contributing soon, honest'] folk. We're proposing a late August / early Sept time - a couple weeks before H3, so we can be dealing with features that have landed // ensuring necessary features *do* land. That would give a start date of the 19th or 26th August. Probable venue of either Sunnyvale, CA or Seattle.
>>
>> I need a rough count of numbers to kick off the approval and final venue stuff w/in HP. I've cc'd some fairly obvious folk that should come :)
>>
>> So - who is interested and would come, and what constraints do you have?
>>
>> -Rob
>> --
>> Robert Collins rbtcoll...@hp.com
>> Distinguished Technologist
>> HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev