[openstack-dev] TC Candidacy
Greetings! I am currently a member of the TC, and I would like to continue to serve. I'm going to write this email backwards because I am aware it is quite long. I have put what I hope to achieve on the TC at the top, but provide background detail afterwards for those who want to dig deeper. I am of course happy to answer questions.

== Executive summary ==

* I am a Nova and Oslo core reviewer who works full time on upstream OpenStack
* Provide geographic diversity to the TC, doing my best to represent the APAC region
* Continue to incubate new projects so long as they form a logical part of a cloud deployment, regardless of whether they will graduate within a single release
* We need to work on improving documentation, and I'd like to see the TC work on this in Icehouse
* Assist the Foundation Board in defining what is core and placing high quality technical evangelists at conferences around the world
* Also, I have a cool accent

== What I want to get done on the TC in Icehouse ==

First off, the TC has incubated a number of projects in the Havana release, and I'd like to see that continue. I think it's important that we build a platform that includes the services a deployer would need to build a cloud, and that those platform elements work well together. Now, it's clear that not everyone will deploy all of the projects we are incubating, but I think it's still important that they play well together and have a consistent look and feel.

I suspect that ultimately we'll need to work out how to handle this growth differently -- we have a lot of PTLs now and the summit is going to be super busy, but these are both good problems to have and I am confident that we can solve them as a community. I also do not believe that an incubated project needs to graduate within one release.
I'd rather take a less mature project, if we think it has a good chance of getting to graduation in the foreseeable future, and work with them, than ignore them and then be surprised that they never became a well integrated project.

We need to get better at documentation, and the TC needs to do more in this area. Having high quality documentation is very important to the continued success of OpenStack. Anne Gentle and the docs team are doing a fantastic job, but I am personally of the belief that the docs team simply isn't big enough to keep up with the workload we impose on them. I'd like to see the community, led by the TC, discuss how we can grow that team and produce the documentation the project deserves. I've seen proposals that we block code reviews which don't have an associated doc patch, for example, and while I think that's too blunt an instrument, we need to do _something_. Could we do something with reporting the number of undocumented features that are landing? Could we be better at approaching corporate contributors and asking for more documentation support?

The TC also needs to provide more assistance to the Foundation Board. The reality is that the Board and TC don't solve isolated problems -- they both work on different aspects of the same problem. The Board has been doing really good work, but the TC should be helping it define what core OpenStack is. While this discussion might be framed as being about trademarks, it ultimately affects how users see our software, and I think that matters a lot to the technical people as well.

I'd also like the TC to be helping the Foundation place technical talks at conferences around the world. We have a limited window to drive OpenStack deployment, and having solid technical talks at as many technical conferences around the world as possible is one of the ways we can achieve that. While I'm lucky enough to work somewhere with good support for these activities internally, that's not true of all of our developers.
The TC and Board should be working together to identify high quality technical evangelists, and then helping them get accepted at conferences. Perhaps the Board can also allocate some travel support for this sort of activity -- I'd love to see that happen.

Overall I think the TC should be helping the Board more. The TC has unique insights into what projects and events matter to the technical deployers of OpenStack, and we should be helping the Foundation make good decisions. During the Havana release there was one meeting between the TC and the Board (the day before the Havana summit opened) that I am aware of, and I think we need to be talking more than that.

== Background ==

I first started hacking on Nova during the Diablo release, with my first code contributions appearing in the Essex release. Since then I've hacked mostly on Nova and Oslo, although I have also contributed to many other projects as my travels have required. For example, I've tried hard to keep various projects in sync with their imports of the parts of Oslo I maintain.

I work full time on OpenStack at Rackspace, leading a team of developers who work solely on upstream open source OpenStack. I am a Nova and Oslo
Re: [openstack-dev] Gate issues - what you can do to help
Hi Gary,

I have posted another fix for almost the same issue: https://review.openstack.org/#/c/49942/. Both try to fix the same problem. Gary's change modifies neutronclient itself, and mine changes the quantumclient proxy. I am not sure which is the right direction, but at least one of them should be merged ASAP to fix the stable/grizzly blocking failure.

Thanks,

On Sun, Oct 6, 2013 at 5:45 PM, Gary Kotton gkot...@vmware.com wrote:

Hi, Can some Neutron cores please look at https://review.openstack.org/#/c/49943/. I have tested this locally and it addresses the issues that I have encountered. Thanks Gary

On Fri, Oct 4, 2013 at 2:06 AM, Akihiro Motoki amot...@gmail.com wrote:

Hi, I would like to share what Gary and I investigated, while it is not addressed yet. The cause is the failure of the quantum-debug command in setup_quantum_debug (https://github.com/openstack-dev/devstack/blob/stable/grizzly/stack.sh#L996). We can reproduce the issue in a local environment by setting Q_USE_DEBUG_COMMAND=True in localrc. Mark proposed a patch, https://review.openstack.org/#/c/49584/, but it does not address the issue. We need another way to proxy quantumclient to neutronclient. Note that in some cases the devstack log in the gate does not contain the end of the console logs. In http://logs.openstack.org/39/48539/5/check/check-tempest-devstack-vm-neutron/b9e6559/, the last command logged is quantum subnet-create, but the quantum-debug command was actually executed and it failed. Thanks, Akihiro

On Fri, Oct 4, 2013 at 12:05 AM, Alan Pevec ape...@gmail.com wrote:

The problems occur when the following line is invoked: https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

But that line is reached only in case baremetal is enabled, which isn't the case in the gate, is it?
Cheers, Alan

--
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
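The "proxy" approach discussed in this thread (keeping old quantumclient imports working by delegating to neutronclient) could be sketched as module aliasing. This is an illustrative sketch only, not the actual patch under review; the stand-in module below takes the place of a real neutronclient install, and all helper names are hypothetical.

```python
# Hypothetical sketch: register ``quantumclient`` as an alias that simply
# re-exports neutronclient, so legacy imports keep resolving.
import sys
import types

def install_quantum_alias(neutron_module):
    """Make ``import quantumclient`` return the given neutron module."""
    sys.modules['quantumclient'] = neutron_module
    # Alias any already-imported submodules too, so dotted imports like
    # ``quantumclient.v2_0`` resolve to their neutronclient counterparts.
    for name, sub in list(sys.modules.items()):
        if name.startswith('neutronclient.'):
            sys.modules['quantumclient.' + name[len('neutronclient.'):]] = sub

# Demonstration with a stand-in module (neutronclient itself may not be
# installed in this environment):
fake = types.ModuleType('neutronclient')
fake.Client = object
sys.modules['neutronclient'] = fake
install_quantum_alias(fake)

import quantumclient
print(quantumclient.Client is fake.Client)  # True
```

The real reviews above differ in where this indirection lives (inside neutronclient vs. inside the quantumclient proxy package), but the mechanism is the same idea.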
Re: [openstack-dev] Gate issues - what you can do to help
Regarding https://review.openstack.org/#/c/49942/ (against the quantumclient branch): the gate for the quantumclient branch of python-neutronclient seems broken. The script appears to expect the master branch of python-neutronclient, and I am not sure of the right way to propose a patch to the quantumclient branch. Gary's patch https://review.openstack.org/#/c/49943/ looks like a short path to fixing the gate issue, since once it is merged the gate issue will be fixed. I am fine with that patch as a temporary solution.

Thanks,
Akihiro

On Sun, Oct 6, 2013 at 5:51 PM, Akihiro Motoki amot...@gmail.com wrote:
[earlier thread quoted above snipped]

--
Akihiro MOTOKI amot...@gmail.com
Re: [openstack-dev] TC Candidacy
Confirmed.

On 10/06/2013 04:04 AM, Michael Still wrote:
[candidacy statement quoted above snipped]
Re: [openstack-dev] [Climate] Questions and comments
Hello, Mike!

As for the Nova-related thoughts: that was just a reference to the roadmap, which includes implementing the reservation opportunity first for the simple virtual resources (VMs, volumes, ...) and next for the complex ones (Heat stacks, Savanna clusters, ...). To implement complex reservations we first need the simple ones. Climate will be an independent service that acts as the underlying layer for other OpenStack services that want not just to start a virtual resource, but to reserve it for the future with different policies.

We assume the following reservation process for the OpenStack services:

1) The user goes to the service and does all the usual actions, only passing his/her wish to reserve the resource rather than really start it.

2) The service does all the usual actions, but does not really start the resource. Instead, it uses Climate to create a lease, and then Climate decides what events should happen to the lease -- start, end, send a notification about some event, and so on.

We were initially thinking about leases containing resources from different services, but here we found a logical problem -- nobody really needs just a VM, just a volume, or just a floating IP. To work together, the volume should be attached to the VM and the IP should be assigned too. But this is exactly what Heat does. That's why we now think of a lease as reserving resources from one service -- to reserve VM+volume+IP, using Heat would be the better idea, I suppose.

As for the scheduling part, in our current scheme we will use the services' own schedulers, because right now they do that work best. That's why Climate now looks like an underlying service, not an upper one. That approach lets us use all the capabilities of the existing services without proxying all the resource-creation preparations and calls when the user really wants to reserve something.

Also, I think we do not really understand each other on the question of how an OpenStack service and Climate may use each other. We were thinking that a service goes to Climate to create a lease with resource IDs that already exist in the service's DB. In the Nova case it looks like this: Nova schedules the VM, creates the instance in its DB with the 'RESERVED' status, and then sends Climate a lease creation request. Climate then uses its internal scheduler to perform, at lease start/end, all the actions defined by the plugin for that resource. In this view nobody except Climate really owns the lease with all its reservations. But I see you have some other view on how that process might look, although it was not the idea we had in mind for Climate.

As for the service-Climate workflow we were considering: the service will already have marked the resource 'RESERVED' in its DB when it sends the lease creation request to Climate. That means the backing capacity is already held by the service and accounted for in further scheduling. For now we reserve the resource right at the moment of lease creation, which means it is most useful for the not-so-distant future. We are also thinking about supporting more than just immediate reservation, but that is still an open question.

Thank you,
Dina

On Sun, Oct 6, 2013 at 10:36 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

I looked at the blueprint (https://blueprints.launchpad.net/heat/+spec/stacks-reservation) and associated wiki page (https://wiki.openstack.org/wiki/Heat/Reservation), and I have a few comments/questions. The wiki page has some remarks that are Nova-centric, and some other remarks that emphasize that Climate is not just about Nova, and I do not understand the relationship between these remarks. Is this a roadmapping thought (start with just Nova, expand to other services later), or the inclusion of some details (related to Nova) and omission of other similar details (related to the other services), or what? Will Climate be an independent service, or part of Nova, or what? What will be the atomic operations?
I presume the primary interesting operation will be something like reserving a bag of resources, where that bag is allowed to contain any mixture of resources from any services. Have I got that right? What exactly does reserving a resource mean? Does this atomic reservation operation include some atomic cooperation from the resources' services' schedulers (e.g., Nova scheduler, Cinder scheduler, etc)? Or is this reservation service logically independent of the resources' primary schedulers? Overall I am getting the suggestion that reservation is an independent service. The flow is something like first reserve a bag of resources, and then proceed to use them at your leisure. But I also suppose that the important thing about a reservation is that it includes the result of scheduling (placement) --- the point of a reservation is that it is holding capacity to host the reserved resources. You do not want an atomic operation to take a long time; do the scheduling decisions get made (tentatively, of
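The two-step flow described earlier in this thread (the service marks the resource RESERVED in its own DB, then hands lifecycle events over to Climate) could be sketched as follows. Everything here is illustrative -- the class, method names, and event names are hypothetical stand-ins, not the real Climate API.

```python
# Hypothetical sketch of the service -> Climate lease-creation workflow.
import datetime

class FakeClimate:
    """Stand-in for the Climate lease service (illustrative only)."""
    def __init__(self):
        self.leases = {}
        self._next_id = 1

    def create_lease(self, resource_id, start, end, events=()):
        # Climate records the lease and will later fire start/end events.
        lease = {'id': self._next_id, 'resource_id': resource_id,
                 'start': start, 'end': end, 'events': list(events)}
        self.leases[lease['id']] = lease
        self._next_id += 1
        return lease

def reserve_instance(db, climate, instance_id, start, end):
    # Step 1: the service schedules the resource as usual, but marks it
    # RESERVED in its own DB instead of starting it.
    db[instance_id] = 'RESERVED'
    # Step 2: the service asks Climate to create the lease; from here on,
    # Climate owns the lease and drives start/end/notification events.
    return climate.create_lease(instance_id, start, end,
                                events=['notify_before_start'])

db = {}
climate = FakeClimate()
lease = reserve_instance(db, climate, 'vm-1',
                         datetime.datetime(2013, 11, 1),
                         datetime.datetime(2013, 11, 8))
print(db['vm-1'], lease['id'])  # RESERVED 1
```

Note how this matches the claim that the backing capacity is already accounted for by the service's scheduler before Climate is involved: the RESERVED row exists before the lease does.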
Re: [openstack-dev] [Climate] Questions and comments
Thanks, Dina. Yes, we do not understand each other; can I ask some more questions?

You outlined a two-step reservation process ("We assume the following reservation process for the OpenStack services..."), and right after that talked about changing your mind to use Heat instead of individual services. So I am confused; I am not sure which of your remarks reflect your current thinking and which reflect old thinking. Can you just state your current thinking?

On what basis would Climate decide to start or stop a lease? What sort of event notifications would Climate be sending, and when and why, and what would subscribers do upon receipt of such notifications? If the individual resource services continue to make independent scheduling decisions as they do today, what value does Climate add? Maybe a little more detailed outline of what happens in your current thinking, in support of an explicitly stated use case that shows the value, would help here.

Thanks,
Mike
[openstack-dev] [neutronclient] tox -epy27 failed because test_ssl
Sometimes, when I run tox on neutronclient, test_ssl.TestSSL.test_client_manager_properly_creates_httpclient_instance may fail. However, the gate doesn't catch this problem, and even in my environment it doesn't fail every time. Has any other developer met the same problem?

--
blog: zqfan.github.com
git: github.com/zqfan
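One way to characterise an intermittent failure like this is to re-run just the suspect test many times and count failures. The helper below is a generic sketch of that idea (not part of the thread); in practice the command would be something like the tox invocation above, but here it is demonstrated with trivially passing and failing stand-in commands.

```python
# Illustrative helper: re-run a (possibly flaky) test command several times
# and report how often it exits nonzero.
import subprocess
import sys

def count_failures(cmd, runs):
    """Run ``cmd`` ``runs`` times; return the number of nonzero exits."""
    failures = 0
    for _ in range(runs):
        if subprocess.call(cmd) != 0:
            failures += 1
    return failures

# Stand-in commands instead of the real ``tox -epy27 -- test_ssl``:
ok = [sys.executable, '-c', 'raise SystemExit(0)']
bad = [sys.executable, '-c', 'raise SystemExit(1)']
print(count_failures(ok, 3), count_failures(bad, 2))  # 0 2
```

A nonzero-but-not-total failure count over many runs would confirm the nondeterminism described above and give a rough failure rate to quote in a bug report.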
[openstack-dev] Ephemeral bases and block-migration
Hi there,

We've been investigating some guest filesystem issues recently and noticed what looks like a slight inconsistency in base image handling during block-migration. We're on Grizzly from the associated Ubuntu cloud archive and using qcow on local storage.

What we've noticed is that after block-migration the instance's secondary disk has a generic backing file, _base/ephemeral, as opposed to the backing file it was created with, e.g., _base/ephemeral_30_default. These backing files have different virtual sizes:

$ qemu-img info _base/ephemeral
image: _base/ephemeral
file format: raw
virtual size: 2.0G (2147483648 bytes)
disk size: 778M

$ qemu-img info _base/ephemeral_30_default
image: _base/ephemeral_30_default
file format: raw
virtual size: 30G (32212254720 bytes)
disk size: 614M

This seems like it could be problematic, considering virtual disks of different sizes end up pointed at this _base/ephemeral file, and I've no idea how that file is created in the first place. Can anyone explain?

--
Cheers,
~Blairo
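To survey how widespread the mismatch described above is, one could scan the `qemu-img info` output for each instance disk and flag any whose backing file is the generic `_base/ephemeral` rather than a sized one like `_base/ephemeral_30_default`. This is an illustrative sketch only -- the helper names are hypothetical, and it is demonstrated on a captured sample of `qemu-img info` output rather than live disks.

```python
# Illustrative check for disks pointing at the generic ephemeral base image.
import re

def backing_file(info_text):
    """Extract the backing file path from ``qemu-img info`` output."""
    m = re.search(r'^backing file:\s*(\S+)', info_text, re.MULTILINE)
    return m.group(1) if m else None

def is_generic_ephemeral(path):
    # Sized bases look like _base/ephemeral_<gigabytes>_<fsname>;
    # the bare ``_base/ephemeral`` name is the suspect one.
    return bool(path) and path.endswith('/_base/ephemeral')

# Sample output in the shape qemu-img produces (path is hypothetical):
sample = """image: disk.local
file format: qcow2
virtual size: 30G (32212254720 bytes)
backing file: /opt/stack/data/nova/instances/_base/ephemeral
"""
bf = backing_file(sample)
print(is_generic_ephemeral(bf))  # True
```

Running such a check across all instances after a round of block-migrations would show whether only migrated disks get rebased onto the generic file.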