Re: [openstack-dev] [ironic] Summit recap
Hi, Jim, thank you for the recap and blog!

> # Newton priorities
>
> [Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-priorities)
>
> We discussed our priorities for the Newton cycle here.
>
> Of note, we decided that we need to get cold upgrade testing (i.e.
> grenade) running ASAP. We have lots of large changes lined up that feel
> like they could easily break upgrades, and want to be able to test them.
> Much of the team is jumping in to help get this going.
>
> The priorities for the cycle have been published
> [here](http://specs.openstack.org/openstack/ironic-specs/priorities/newton-priorities.html).

I'd like to discuss priorities. I'm not saying that we shouldn't implement items that aren't on the priority list. But we do need to concentrate on the high-priority items that must be completed in this cycle, and core developers especially, because they have the Workflow+1 privilege. :)

I think there are two concerns. One is that we need to set a scope for the cycle. For example, we have been implementing Neutron integration since the L cycle, and we want to complete it in this cycle. There are many things we want, and completing everything would take a very long time. So we need to set priorities within the Neutron integration work as well, and accept that some items will be deferred to the next cycle.

The other is that we need to be a little more strict about priorities. Recently, I watched the presentation "How Open Source Projects Survive Poisonous People (And You Can Too)":

https://www.youtube.com/watch?v=Q52kFL8zVoM

In this presentation, Subversion developers talk about how to manage an OSS community. They keep a TODO list, and when a new proposal comes in that is not on the list, it is denied. Do we need to do the same thing? Of course not. But now is the time to start thinking about it.

Thank you for reading this long email. What should we do to manage the project efficiently? Everyone's ideas are welcome.
Best Regards,
Yuiko Takada Mori

2016-05-11 23:16 GMT+09:00 Jim Rollenhagen:

> Others made good points for posting this on the ML, so here it is in
> full. Sorry for the markdown formatting, I just copied this from the
> blog post.
>
> // jim
>
> Another cycle, another summit. The ironic project had ten design summit
> sessions to get together and chat about some of our current and future
> work. We also led a cross-project session on bare metal networking, had
> a joint session with nova, and a contributor's meetup for the first half
> of Friday. The following is a summary of those sessions.
>
> # Cross-project: the future of bare-metal networking
>
> [Etherpad](https://etherpad.openstack.org/p/newton-baremetal-networking)
>
> This session was meant to have the Nova, Ironic, and Neutron folks get
> together and figure out some of the details of the [work we're
> doing](https://review.openstack.org/#/c/277853/) to decouple the
> physical network infrastructure from the logical networking that users
> interact with. Unfortunately, we spent most of the time explaining the
> problem and the goals, and not much time actually figuring out how
> things should work. We were able to decide that the trunk port work in
> neutron should mostly work for us.
>
> There were plenty of hallway chats about this throughout the week, and
> from those I think we have a good idea of what needs to be done. The
> spec linked above will be updated soon to clarify where we are at here.
>
> # Nova-compatible serial and graphical consoles
>
> [Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-console)
>
> This session began with a number of proposals to implement serial and
> graphical consoles that would work with Nova, and a goal to narrow them
> down so folks can move forward with the code.
>
> The first thing we decided is that in the short term, we want to focus
> on the serial console. It's supported by almost all hardware, and most
> cases where someone needs a console are covered by a simple serial
> console. We do want to do graphical consoles eventually, but would like
> to take one thing at a time.
>
> We then spent some time dissecting our requirements (and preferences)
> for what we want an implementation to do, which are listed toward the
> bottom of the etherpad.
>
> We narrowed the serial console work down to two implementations:
>
> * [ironic-console-server](https://review.openstack.org/#/c/306755/).
>   The tl;dr here is that the conductor will shell out to a command that
>   creates a listening port, forks a process that connects to the
>   console, and pipes data between the two. This command is called once
>   per console session. The upside of this approach is that operators
>   don't need to do much when the change is deployed.
>
> * [ironic-ipmiproxy](https://review.openstack.org/#/c/296869/).
>   This is similar to ironic-console-server, except that it runs as its
>   own daemon with a small REST API for start/stop/get. It spawns a
>   process for each console, which does not fork itself.
Re: [openstack-dev] [ironic] Summit recap
Others made good points for posting this on the ML, so here it is in
full. Sorry for the markdown formatting, I just copied this from the
blog post.

// jim

Another cycle, another summit. The ironic project had ten design summit
sessions to get together and chat about some of our current and future
work. We also led a cross-project session on bare metal networking, had
a joint session with nova, and a contributor's meetup for the first half
of Friday. The following is a summary of those sessions.

# Cross-project: the future of bare-metal networking

[Etherpad](https://etherpad.openstack.org/p/newton-baremetal-networking)

This session was meant to have the Nova, Ironic, and Neutron folks get
together and figure out some of the details of the [work we're
doing](https://review.openstack.org/#/c/277853/) to decouple the
physical network infrastructure from the logical networking that users
interact with. Unfortunately, we spent most of the time explaining the
problem and the goals, and not much time actually figuring out how
things should work. We were able to decide that the trunk port work in
neutron should mostly work for us.

There were plenty of hallway chats about this throughout the week, and
from those I think we have a good idea of what needs to be done. The
spec linked above will be updated soon to clarify where we are at here.

# Nova-compatible serial and graphical consoles

[Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-console)

This session began with a number of proposals to implement serial and
graphical consoles that would work with Nova, and a goal to narrow them
down so folks can move forward with the code.

The first thing we decided is that in the short term, we want to focus
on the serial console. It's supported by almost all hardware, and most
cases where someone needs a console are covered by a simple serial
console. We do want to do graphical consoles eventually, but would like
to take one thing at a time.
We then spent some time dissecting our requirements (and preferences)
for what we want an implementation to do, which are listed toward the
bottom of the etherpad.

We narrowed the serial console work down to two implementations:

* [ironic-console-server](https://review.openstack.org/#/c/306755/).
  The tl;dr here is that the conductor will shell out to a command that
  creates a listening port, forks a process that connects to the
  console, and pipes data between the two. This command is called once
  per console session. The upside of this approach is that operators
  don't need to do much when the change is deployed.

* [ironic-ipmiproxy](https://review.openstack.org/#/c/296869/).
  This is similar to ironic-console-server, except that it runs as its
  own daemon with a small REST API for start/stop/get. It spawns a
  process for each console, which does not fork itself. The upside here
  is that it can be scaled independently and has no implications for
  conductor failover; however, it will need its own HA model and will
  be more work for deployers.

It seems like most folks agree that the latter is more desirable, in
terms of scaling model and such, but we didn't quite come to consensus
during the session. We need to do that ASAP.

We also talked a bit about console logging, and the pitfalls of doing
it automatically for every instance. For example, some BMCs crash if
power status is called repeatedly with a serial-over-LAN session active
(this is something to consider for regular console attach as well).
We'll need to make this operator-configurable, possibly per-node, so
that we aren't automatically crashing bad BMCs for people. The nova
team agreed later that this is fine, as long as a decent error is
returned in this case.

# Status and future of our gate

[Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-gate)

We discussed the current status of our gate, and the plans for Newton.
We first talked about third-party CI, and where we're at with that.
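To make the ironic-console-server idea above concrete, here is a minimal, hypothetical sketch of the listen-and-pipe pattern it describes: a one-shot command that opens a listening TCP port, spawns a process attached to the console, and copies data between the two. The function name, port, and command arguments are invented for illustration; the actual proposal is in the review linked above.

```python
# Hypothetical sketch of the ironic-console-server pattern, NOT the
# proposed implementation: listen on a port, accept one client, spawn
# a console process, and pipe its output to the client.
import socket
import subprocess


def serve_console(port, console_cmd):
    """Listen on `port`, accept one client, spawn `console_cmd`
    (e.g. a serial-over-LAN invocation), and copy the process's
    output to the client until the process exits."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen(1)
    conn, _addr = server.accept()
    proc = subprocess.Popen(console_cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    proc.stdin.close()
    # One-directional copy for brevity; a real implementation would
    # also forward client keystrokes and multiplex with select/poll.
    while True:
        data = proc.stdout.read1(4096)
        if not data:
            break
        conn.sendall(data)
    proc.wait()
    conn.close()
    server.close()
```

Because the command is invoked once per console session and exits with it, the conductor side stays simple, which is the "operators don't need to do much" upside mentioned above; the trade-off is that the processes live with the conductor, which is exactly the failover concern the ironic-ipmiproxy daemon approach tries to avoid.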
Kurt Taylor is doing the main tracking of that, and explained where and
how we're tracking it. There was a call for help with some of the
missing data, and with getting all the right pages updated
(stackalytics, the openstack.org marketplace, etc.).

We also talked about the current changes going into our gate that we
want to push forward. Moving to tinyipa and virtualbmc (with ipmitool
drivers) are the main changes right now.

We discussed the progress on upgrade testing via grenade. There hasn't
been a lot of progress made, but some of the groundwork to make local
testing easy has been done. Later in the week, during the priorities
session, we agreed that the upgrade testing was our top priority right
now, and some folks volunteered to help move it along.

# Hardware pool management

[Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-hardware-pools)

This topic is talked about at nearly every summit, and we said that we
need to at least solve the internals this round. We
[openstack-dev] [ironic] Summit recap
Hey all,

I wrote a recap of the summit on my blog:

http://jroll.ghost.io/newton-summit-recap/

I hope this covers everything that folks missed or couldn't remember.
As always, questions/comments/concerns welcome.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ironic] Summit recap
On 11/02/2015 03:29 PM, Jim Rollenhagen wrote:
> Hi friends,
>
> I wrote a recap of the summit (from my perspective) that some of you may
> find interesting. Feedback is very welcome. :)
>
> http://words.jimrollenhagen.com/mitaka-summit-recap/
>
> // jim

I can't tell you how much I appreciate this. I wish every project did
this, particularly Horizon.

--
Jason E. Rist
Senior Software Engineer
OpenStack Infrastructure Integration
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen
[openstack-dev] [ironic] Summit recap
Hi friends,

I wrote a recap of the summit (from my perspective) that some of you may
find interesting. Feedback is very welcome. :)

http://words.jimrollenhagen.com/mitaka-summit-recap/

// jim