Re: [openstack-dev] [Networking] Basic Bug Etiquette
Maru Newby wrote: I'd like to remind all Neutron contributors that if you are not the assignee for a bug, you should be consulting the assignee before you consider reassigning the bug to yourself or others. Failing to do so is not only disrespectful, but may also result in unnecessary work. It could be that the assignee has been making progress on the problem - code or otherwise - that would be duplicated by the new assignee. Or the assignee could have specialized knowledge that makes them best equipped to fix the bug. In any case, please ensure that bugs are not reassigned without the involvement of the current assignee. Note that the bug updates script may switch assignees when a new patch claiming to fix the bug is submitted for review in Gerrit... -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume
Hi, I just proposed a patch for the boot_from_volume_exercise.sh to get rid of --image. To be honest, I did not look at the various execution paths. My initial thought is that boot from volume means you boot from volume. If you only have a kernel + ramdisk image, I simply assumed that you can't do it. I would not do any magic. Boot from volume should boot from volume. If you only have 3-part images, you need to find another way to prepare your bootable volume. btw, here is my change: https://review.openstack.org/34761 Cheers, Mate On Mon, Jul 01, 2013 at 01:25:23AM -0400, Sheng Bo Hou wrote: Hi Cinder folks, I am currently fixing the bugs related to booting the instance from the volume. I found there are bugs both in Nova and Cinder. Cinder: https://bugs.launchpad.net/cinder/+bug/1159824 Nova: https://bugs.launchpad.net/nova/+bug/1191069 For the volumes created from the image, I propose to copy the reference images during the creation of the volume. For example, an image may refer to a kernel image and a ramdisk image. When we create a volume from this image, we only copy this one image to the volume. The kernel and ramdisk images are still in glance, and the volume still refers to the kernel and ramdisk images. I think if an image has other reference images, the reference images also need to be copied to volumes (a kernel volume and a ramdisk volume), and the volume should then be set to refer to the kernel volume and the ramdisk volume. This feature will make booting from a volume completely independent of the existence of the glance image. Do you think we can first add this feature to cinder? Do folks have any comments on it? Thanks.
Best wishes, Vincent Hou (侯胜博) Staff Software Engineer, Open Standards and Open Source Team, Emerging Technology Institute, IBM China Software Development Lab Tel: 86-10-82450778 Fax: 86-10-82453660 Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing, P.R.C.100193 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193 -- Mate Lakat
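[Editor's note] The proposal above hinges on finding the images an AMI-style image references. A minimal sketch (not Cinder code; it assumes Glance exposes the standard 'kernel_id'/'ramdisk_id' image properties, and the dict shapes here are illustrative) of the lookup step:

```python
# Sketch: given Glance image metadata, find the referenced images
# (kernel/ramdisk) that would also need to be copied so that the
# volume no longer depends on anything left in Glance.

def referenced_image_ids(image_meta):
    """Return IDs of images referenced via AMI-style properties."""
    props = image_meta.get('properties', {})
    refs = []
    for key in ('kernel_id', 'ramdisk_id'):
        if props.get(key):
            refs.append(props[key])
    return refs

# Hypothetical metadata, for illustration only.
ami = {'id': 'img-1', 'disk_format': 'ami',
       'properties': {'kernel_id': 'img-kernel', 'ramdisk_id': 'img-rd'}}
qcow = {'id': 'img-2', 'disk_format': 'qcow2', 'properties': {}}

print(referenced_image_ids(ami))    # ['img-kernel', 'img-rd']
print(referenced_image_ids(qcow))   # []
```

A single-file image (e.g. qcow2) returns no references, which is why the bug only shows up for 3-part images.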
Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume
On 1 July 2013 20:18, Mate Lakat mate.la...@citrix.com wrote: Hi, I just proposed a patch for the boot_from_volume_exercise.sh to get rid of --image. To be honest, I did not look at the various execution paths. My initial thought is that boot from volume means you boot from volume. If you only have a kernel + ramdisk image, I simply assumed that you can't do it. I would not do any magic. Boot from volume should boot from volume. If you only have 3-part images, you need to find another way to prepare your bootable volume. This doesn't make any sense to me. Where you boot from and what your root filesystem is are orthogonal issues. boot from volume shouldn't alter that. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Cloud Services
Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3
On Sun, Jun 30, 2013 at 8:46 PM, Christopher Yeoh cbky...@gmail.com wrote: On Sat, Jun 29, 2013 at 10:40 PM, Anne Gentle annegen...@justwriteclick.com wrote: I was thinking of making sure it gets in the weekly newsletter and asking the user committee if they have ideas, plus Twitter. Another mailing list post may not be a different enough channel. I've made a blog post, tweeted it and asked for it to be included in the weekly newsletter. http://chris.yeoh.info/2013/07/01/nova-api-extensions-not-to-be-ported-to-v3/ Just wondering if you knew what the right way to contact the user committee is? There didn't seem to be a contact email address on their web page. Good thinking Chris. Here's the contact list for the user-committee: http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee. I am also realizing that what I really want is a full document describing v3. What are the plans for that? (Sorry if that's really obvious and I'm not seeing it.) Thanks, Anne Regards, Chris -- Anne Gentle annegen...@justwriteclick.com
Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3
On Mon, Jul 1, 2013 at 11:05 PM, Anne Gentle annegen...@justwriteclick.com wrote: On Sun, Jun 30, 2013 at 8:46 PM, Christopher Yeoh cbky...@gmail.com wrote: On Sat, Jun 29, 2013 at 10:40 PM, Anne Gentle annegen...@justwriteclick.com wrote: I was thinking of making sure it gets in the weekly newsletter and asking the user committee if they have ideas, plus Twitter. Another mailing list post may not be a different enough channel. I've made a blog post, tweeted it and asked for it to be included in the weekly newsletter. http://chris.yeoh.info/2013/07/01/nova-api-extensions-not-to-be-ported-to-v3/ Just wondering if you knew what the right way to contact the user committee is? There didn't seem to be a contact email address on their web page. Good thinking Chris. Here's the contact list for the user-committee: http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee. Thanks! I am also realizing that what I really want is a full document describing v3. What are the plans for that? (Sorry if that's really obvious and I'm not seeing it.) We have a blueprint for H3 to do doco and I've been planning on sending you an email about that :-) https://blueprints.launchpad.net/nova/+spec/v3-api-specification https://blueprints.launchpad.net/openstack-manuals/+spec/multi-version-api-site The rough plan was to have a v3 version of test_api_samples and produce pretty much exactly what we do for the v2 API. But if there is a better way to do it that makes it easier to produce API docs, this is a good opportunity to change what is done for v3. If we do that, will the doc team be able to handle the creation of the documentation? There's around 60-70 extensions, so there will be quite a lot of stuff going in throughout H3. Regards, Chris
Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3
One more thought, about os-multiple-create: I was also thinking to remove it. I don't see any real advantage to using it, since it doesn't offer any kind of flexibility like choosing different flavors, images and other attributes. So anyone creating multiple servers would probably prefer an external automation tool instead, IMHO. So is anyone using it? Is there a good reason to keep it? Did I miss something about this extension? On 06/28/2013 09:31 AM, Christopher Yeoh wrote: Hi, The following is a list of API extensions for which there are no plans to port. Please shout if you think any of them needs to be! baremetal_nodes.py os_networks.py networks_associate.py os_tenant_networks.py virtual_interfaces.py createserverext.py floating_ip_dns.py floating_ip_pools.py floating_ips_bulk.py floating_ips.py cloudpipe.py cloudpipe_update.py volumes.py Also I'd like to propose that after H2 any new API extension submitted HAS to have a v3 version. That will give us enough time to ensure that the V3 API in Havana can do everything that the V2 one can, except where we explicitly don't want to support something. For developers who have had new API extensions merged in H2 but haven't submitted a v3 version, I'd appreciate it if you could check the following etherpad to see if your extension is on the list and put it on there ASAP if it isn't there already: https://etherpad.openstack.org/NovaV3ExtensionPortWorkList I've tried to keep track of new API extensions to make sure we do v3 ports but may have missed some. Chris
Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3
On 07/01/13 at 11:23am, Mauro S M Rodrigues wrote: One more though, about os-multiple-create: I was also thinking to remove it, I don't see any real advantage to use it since it doesn't offer any kind of flexibility like chose different flavors, images and other attributes. So anyone creating multiple servers would probably prefer an external automation tool instead of multiple server IMHO. So anyone using it? There are a good reason to keep it? Did I miss something about this extension? I would like to see this extension go away, but only by a small margin. Just because it complicates the boot/spawn workflow a bit, which really isn't a big deal. As far as I understand it the benefit is that if you have a use case for booting large numbers of the same type of instance then this extension allows it to happen faster by sending all of the information to the scheduler in one message. I don't know how useful this actually is in practice, but I could see it being helpful for someone. On 06/28/2013 09:31 AM, Christopher Yeoh wrote: Hi, The following is a list of API extensions for which there are no plans to port. Please shout if you think any of them needs to be! baremetal_nodes.py os_networks.py networks_associate.py os_tenant_networks.py virtual_interfaces.py createserverext.py floating_ip_dns.py floating_ip_pools.py floating_ips_bulk.py floating_ips.py cloudpipe.py cloudpipe_update.py volumes.py Also I'd like to propose that after H2 any new API extension submitted HAS to have a v3 version. That will give us enough time to ensure that the V3 API in Havana can do everything that the V2 one except where we explicitly don't want to support something. 
For developers who have had new API extensions merged in H2 but haven't submitted a v3 version, I'd appreciate it if you could check the following etherpad to see if your extension is on the list and put it on there ASAP if it isn't there already: https://etherpad.openstack.org/NovaV3ExtensionPortWorkList I've tried to keep track of new API extensions to make sure we do v3 ports but may have missed some. Chris
Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3
I have used it. In the case where large numbers of instances are to be created, it (in theory) allows the compute system to optimize the request. It also avoids rate limit and other such issues that could come from hundreds of calls to create a server in sequence. -David On 07/01/2013 10:23 AM, Mauro S M Rodrigues wrote: One more though, about os-multiple-create: I was also thinking to remove it, I don't see any real advantage to use it since it doesn't offer any kind of flexibility like chose different flavors, images and other attributes. So anyone creating multiple servers would probably prefer an external automation tool instead of multiple server IMHO. So anyone using it? There are a good reason to keep it? Did I miss something about this extension? On 06/28/2013 09:31 AM, Christopher Yeoh wrote: Hi, The following is a list of API extensions for which there are no plans to port. Please shout if you think any of them needs to be! baremetal_nodes.py os_networks.py networks_associate.py os_tenant_networks.py virtual_interfaces.py createserverext.py floating_ip_dns.py floating_ip_pools.py floating_ips_bulk.py floating_ips.py cloudpipe.py cloudpipe_update.py volumes.py Also I'd like to propose that after H2 any new API extension submitted HAS to have a v3 version. That will give us enough time to ensure that the V3 API in Havana can do everything that the V2 one except where we explicitly don't want to support something. For developers who have had new API extensions merged in H2 but haven't submitted a v3 version, I'd appreciate it if you could check the following etherpad to see if your extension is on the list and put it on there ASAP if it isn't there already: https://etherpad.openstack.org/NovaV3ExtensionPortWorkList I've tried to keep track of new API extensions to make sure we do v3 ports but may have missed some. 
Chris
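[Editor's note] The benefit being debated above is that os-multiple-create turns N identical boots into one API call, so the scheduler gets one message and the caller hits rate limits once. A minimal sketch of the request body this extension accepts (field names follow the v2 server-create API; the name/image/flavor values are placeholders):

```python
# Sketch: one POST /servers body requesting `count` identical
# instances via min_count/max_count, instead of N separate calls.

def multi_create_body(name, image_ref, flavor_ref, count):
    """Build a server-create body for `count` identical instances."""
    return {
        'server': {
            'name': name,
            'imageRef': image_ref,
            'flavorRef': flavor_ref,
            'min_count': count,  # fail unless at least this many can boot
            'max_count': count,  # never boot more than this many
        }
    }

body = multi_create_body('worker', 'image-uuid', '1', 100)
print(body['server']['min_count'])  # 100
```

Setting min_count lower than max_count would let the request partially succeed, which is the flexibility an external automation tool would otherwise have to reimplement.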
Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch
Hey Thierry I actually didn't notice this go by last week, the other thread got all the attention. On Wed, 2013-06-26 at 14:51 +0200, Thierry Carrez wrote: Hi everyone, Yesterday at the TC meeting we agreed that as a first step to establishing programs, we should have a basic definition of them and establish the first (undisputed) ones. We can solve harder questions (like if horizontal efforts should be a program or a separate thing, or where each current official repo exactly falls) as a second step. So here is my proposal for step 1: 'OpenStack Programs' are efforts which are essential to the completion of our mission, but which do not produce deliverables included in the common release of OpenStack 'integrated' projects every 6 months, like Projects do. Hmm, this wasn't what I understood our direction to be. And maybe this highlights a subtle difference in thinking - as I see it, Oslo absolutely is producing release deliverables. For example, in what way was oslo.config 1.1.0 *not* a part of the grizzly release? The idea that documentation isn't a part of our releases seems a bit off too. This distinction feels like it's based on an extremely constrained definition of what constitutes a release. Programs can create any code repository and produce any deliverable they deem necessary to achieve their goals. Programs are placed under the oversight of the Technical Committee, and contributing to one of their code repositories grants you ATC status. Current efforts or teams which want to be recognized as an 'OpenStack Program' should place a request to the Technical Committee, including a clear mission statement describing how they help the OpenStack general mission and how that effort is essential to the completion of our mission. Programs do not need to go through an Incubation period. The initial Programs are 'Documentation', 'Infrastructure', 'QA' and 'Oslo'. 
Those programs should retroactively submit a mission statement and initial lead designation, if they don't have one already. This motion is expected to be discussed and voted on at the next TC meeting, so please comment on this thread. It's funny, I think we're all fine with the idea of Programs but can't quite explain what distinguishes a program from a project, etc. and we're reaching for things like programs don't produce release deliverables or projects don't have multiple code repositories. I'm nervous of picking a distinguishing characteristic that will artificially limit what Programs can do. Say we go with the no release deliverables definition and, by extension, Oslo wasn't a program ... then what happens if the QA team wanted to start producing a release deliverable? Say, a test suite to validate that your Havana deployment isn't screwed up? Would they have to drop the program moniker? How about we say the distinction is whether a project exposes a public REST API? 'OpenStack Programs' are efforts which are essential to the completion of our mission. However, while they may produce release deliverables which are included in our coordinated release every 6 months, programs do not include projects which implement REST APIs intended to be exposed to the users of OpenStack clouds. Such projects are known as 'integrated' projects and join our releases through the incubation process. In terms of the issue of integrated projects also producing client libraries, that ties in nicely with the REST API distinction - if a project is producing a server which exposes an API, of course it should also produce a client for that API. Cheers, Mark.
[openstack-dev] [OSLO] Current DB status
Hi, Can someone please clarify what the DB status is in OSLO? Last week the code was imported to Neutron (aka Quantum). The review process in Nova has been -1'ed due to issues with the common code (https://review.openstack.org/#/c/34671/). What is the current status? Does the code need to be reverted from OSLO if it is problematic? Thanks Gary
Re: [openstack-dev] [OSLO] Current DB status
On 07/01/2013 06:13 PM, Mark McLoughlin wrote: (Oslo is not an acronym) OK. Can someone please clarify what the database status is in Oslo? On Mon, 2013-07-01 at 18:07 +0300, Gary Kotton wrote: Hi, Can someone please clarify what the DB status is in OSLO. Last week...
Re: [openstack-dev] [Metrics][Nova] Another take on review turnaround stats
On 06/28/2013 06:49 PM, Nachi Ueno wrote: Hi Russell Your tool is awesome. I also have two ideas (I believe they're not crazy :P ). 1) Show the LOC of the review Long patches take a long time.. 2) Show the priority of the review If it showed the priority of the review, it would be more useful, because we should work in order of priority. The priority of the patch can be derived from the related bug report or blueprint. (It would also be awesome if Gerrit showed this priority.) Cool, thanks for the ideas. Everything I've done so far is from info that comes back from gerrit, so these would be a bit more work. By the way, these scripts are now imported into openstack-infra/reviewstats, so anyone can hack on them and submit changes through gerrit. -- Russell Bryant
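[Editor's note] Nachi's first idea may need less extra work than suggested: Gerrit's ssh query interface can report per-patch-set size information. A minimal sketch under that assumption (the `sizeInsertions`/`sizeDeletions` field names come from Gerrit's `query --format=JSON --patch-sets` output, but verify them against your Gerrit version; the sample record is invented):

```python
import json

# Sketch: compute a rough LOC figure for a review from the size
# fields of its latest patch set, as reported in Gerrit query JSON.

sample = '''
{"project": "openstack/nova", "number": "34671",
 "patchSets": [{"number": "1", "sizeInsertions": 90, "sizeDeletions": 10},
               {"number": "2", "sizeInsertions": 120, "sizeDeletions": 30}]}
'''

def review_loc(record):
    """Insertions + deletions of the most recent patch set."""
    last = record['patchSets'][-1]
    return last['sizeInsertions'] + last['sizeDeletions']

rec = json.loads(sample)
print(review_loc(rec))  # 150
```

Since reviewstats already consumes this JSON stream, sorting the open-review report by this number would be a small change.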
Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch
On Jul 1, 2013, at 8:03 AM, Mark McLoughlin mar...@redhat.com wrote: Hey Thierry I actually didn't notice this go by last week, the other thread got all the attention. On Wed, 2013-06-26 at 14:51 +0200, Thierry Carrez wrote: Hi everyone, Yesterday at the TC meeting we agreed that as a first step to establishing programs, we should have a basic definition of them and establish the first (undisputed) ones. We can solve harder questions (like if horizontal efforts should be a program or a separate thing, or where each current official repo exactly falls) as a second step. So here is my proposal for step 1: 'OpenStack Programs' are efforts which are essential to the completion of our mission, but which do not produce deliverables included in the common release of OpenStack 'integrated' projects every 6 months, like Projects do. Hmm, this wasn't what I understood our direction to be. And maybe this highlights a subtle difference in thinking - as I see it, Oslo absolutely is producing release deliverables. For example, in what way was oslo.config 1.1.0 *not* a part of the grizzly release? The idea that documentation isn't a part of our releases seems a bit off too. This distinction feels like it's based on an extremely constrained definition of what constitutes a release. Programs can create any code repository and produce any deliverable they deem necessary to achieve their goals. Programs are placed under the oversight of the Technical Committee, and contributing to one of their code repositories grants you ATC status. Current efforts or teams which want to be recognized as an 'OpenStack Program' should place a request to the Technical Committee, including a clear mission statement describing how they help the OpenStack general mission and how that effort is essential to the completion of our mission. Programs do not need to go through an Incubation period. The initial Programs are 'Documentation', 'Infrastructure', 'QA' and 'Oslo'. 
Those programs should retroactively submit a mission statement and initial lead designation, if they don't have one already. This motion is expected to be discussed and voted at the next TC meeting, so please comment on this thread. It's funny, I think we're all fine with the idea of Programs but can't quite explain what distinguishes a program from a project, etc. and we're reaching for things like programs don't produce release deliverables or projects don't have multiple code repositories. I'm nervous of picking a distinguishing characteristic that will artificially limit what Programs can do. I think the concern I have with the current discussions is that the definition is becoming so specific that we'll someday have such an over categorization of things to the point of repos created on a Tuesday. What is the end result here, and what are we trying to promote? I think we want to give ATC status to people who contribute to code that is managed as part of the OpenStack organization. In that sense, everything (ie nova, swift, neutron, cinder, etc) is a program, right? What happens if an existing project wants to deliver an independent code library? Just put it in oslo! may be the current answer, but moving a bunch of unrelated deliverables into oslo causes review issues (a separate review community) and may slow development. (that's actually not the argument I want to have in this email thread) I'd suggest that everything we have today are openstack programs. Many have multiple deliverables (eg a server side and a client side). As a specific example (only because it's what I'm most familiar with, not that this is something Swift is considering), if Swift wanted to separately deliver the swob library (replacement for WebOb) or our logging stuff, then they simply become another deliverable under the Swift program. I completely support (going back years now) the idea of having CI, QA, Docs, etc as additional top-level openstack things. 
To reiterate, what are we trying to accomplish with further classification of code into programs and projects? What is lacking in the current structure that further classification (ie a new name) gets us? --John
Re: [openstack-dev] Move keypair management out of Nova and into Keystone?
+1.. makes sense to me, I always thought that was weird hehe Say the word and we will remove it from v3. On 07/01/2013 01:02 PM, Russell Bryant wrote: On 07/01/2013 11:47 AM, Jay Pipes wrote: Recently a colleague asked me whether their key pair from one of our deployment zones would be usable in another deployment zone. His identity credentials are shared between the two zones (we use a shared identity database) and he was wondering if the key pairs were also shared. I responded that no, they were not, because Nova, not Keystone, manages key pairs. But that got me thinking: is it time to change this? Key pairs really are an element of identity/authentication, and not specific to OpenStack Compute. Has there been any talk of moving the key pair management API out of Nova and into Keystone? I haven't heard any talk about it, but it does seem to make sense.
Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch
Mark McLoughlin wrote: 'OpenStack Programs' are efforts which are essential to the completion of our mission, but which do not produce deliverables included in the common release of OpenStack 'integrated' projects every 6 months, like Projects do. Hmm, this wasn't what I understood our direction to be. And maybe this highlights a subtle difference in thinking - as I see it, Oslo absolutely is producing release deliverables. For example, in what way was oslo.config 1.1.0 *not* a part of the grizzly release? The idea that documentation isn't a part of our releases seems a bit off too. I suspect what I call the common release of OpenStack 'integrated' projects every 6 months is what you call server projects later in the thread. We publish a number of things, and a subset of them (which I call the integrated release) are released together every 6 months. Oslo libraries are also released, but on a different cadence. Docs are released, usually with a slight delay. I call those deliverables, to avoid confusion with the first set... I think we are on the same page, just not using the term release for the same thing? -- Thierry Carrez (ttx)
[openstack-dev] [Keystone][Nova] Why migrate_version not InnoDB?
'Stackers - I've got a review up in Keystone that converts tables from MyISAM to InnoDB [0], which I patterned after a change in Nova. One of the comments in the review is suggesting that the migrate_version table should also be changed. The reason I didn't include migrate_version is because that's the way Nova did it, but other than that I don't know why migrate_version should not be converted. The Nova code is pretty explicit that migrate_version isn't changed [1]. Maybe somebody who knows MySQL or SQLAlchemy-migrate better than I do can come up with a reason why migrate_version shouldn't be changed from MyISAM to InnoDB. [0] https://review.openstack.org/#/c/33102/ - Use InnoDB for MySQL [1] https://github.com/openstack/nova/blob/master/nova/tests/db/test_migrations.py#L331 - Brant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
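[Editor's note] For readers following the review, the conversion in question boils down to per-table engine changes like the following sketch (table names are illustrative, not Keystone's actual schema). One plausible reason migrate_version is skipped, hedged: sqlalchemy-migrate creates and owns that table itself before any migration runs, so migrations conventionally leave it untouched.

```sql
-- Illustrative per-table conversion a MyISAM-to-InnoDB migration runs.
ALTER TABLE user ENGINE=InnoDB;
ALTER TABLE tenant ENGINE=InnoDB;
-- migrate_version is deliberately not converted: it is managed by
-- sqlalchemy-migrate itself, not by the project's migrations.
```

Note that ALTER TABLE ... ENGINE rebuilds the table, so on large deployments the migration can take a while and should be run in a maintenance window.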
Re: [openstack-dev] Question about locking
On Jul 1, 2013, at 2:27 AM, Rosa, Andrea (HP Cloud Services) andrea.r...@hp.com wrote: Hi Ben, Thank you very much for your reply. That function is using the synchronized decorator, which means that it's wrapped by a semaphore context. As I understand it (and someone correct me if I'm wrong), if an error happens and an exception is thrown the context would be exited and the semaphore released. Of course, I suppose there are situations where a thread could be terminated without being able to do that cleanup, but I suspect most of those cases would kill the entire process, making the lock irrelevant (since you specify not external). Ok, that is my understanding. Thanks for confirming it. If not I think that all other actions for that instance are blocked waiting for the lock, is that correct? That is a potential pitfall of synchronization, but I think it shouldn't happen in this case. Are you seeing this behavior? I am seeing an odd behaviour, sometimes (not often) I find instances in DELETED status (vm_state) which are not marked as deleted. Below what I found when I was debugging it: I found an instance in that odd status, looking at the log file for the compute node I didn't find any error, the service was running, the only thing I spotted was a gap of several minutes in the log file of the compute node. That is very unlikely. I tried to delete again the same instances but the operation never got completed. Maybe the thread which was trying to manage the first deletion died but the lock was still valid so all the other attempts to delete the same instance failed. Were other commands working on the compute node? It seems much more likely that the node had a hung connection to rabbit. If you are not using tcp keepalives, a network hiccup (or failover) can cause half open connections where the server thinks the connection is still active so it sends the message but the compute node never receives it. 
Vish To fix the issue I had to restart the nova-compute service (so all locks were released) and then I was able to complete the deletion. Does that make sense to you? PS: As you are on this topic, I submitted a fix to complete the pending deletion when the compute service starts; it would be great if you could have a look at it: https://review.openstack.org/33265 Regards -- Andrea Rosa
Re: [openstack-dev] VPNaaS
Hi Tatiana! Could you share the code on gerrit? The first driver can be tested now: https://wiki.openstack.org/wiki/Quantum/VPNaaS/HowToInstall Could you add instructions for using your UI to this page? Nice work! Best Nachi 2013/6/30 Tatiana Mazur tma...@mirantis.com: Hello, I have finished the prototype of VPNaaS UI. The corresponding blueprint is here: https://blueprints.launchpad.net/horizon/+spec/vpnaas-ui. Since that's a prototype, the code will be polished and some features will be added later when dependencies are merged. Unit tests are also to be added. For now 'Create' and 'Delete' options are implemented for VPNService, IKEPolicy, IPSecPolicy and VPNConnection. 'Update' actions are to be added (I think I'll add them in a separate patch set in order not to overcomplicate this one). -- Kind regards, Tatiana On Wed, May 15, 2013 at 10:14 PM, Nachi Ueno na...@ntti3.com wrote: Hi Ilya Wow. Sounds Great! Thank you for your contribution. Best Nachi 2013/5/15 Ilya Shakhat ishak...@mirantis.com: Hi Nachi, Tatyana and I volunteer to work on the UI for VPNaaS. The corresponding bp is https://blueprints.launchpad.net/horizon/+spec/vpnaas-ui. We will start filling in the specification soon. Thanks, Ilya 2013/5/15 Nachi Ueno na...@ntti3.com Hi Folks We had VPN meetings yesterday. Agenda : 1. local_subnet vs local_cidr -- Keep discussion 2. Use cidr value or subnet_id? -- Keep discussion 3. Task assignment - move doc to wiki (Swami) Done https://wiki.openstack.org/wiki/Quantum/VPNaaS - Register BP and get approval by Mark (Swami) Done - H2 - check default value for lifetime value (Swami) Done - Implement Data Model (Swami will push code to the gerrit) by 5/20 - CLI (python-quantum client) work (Swami will push code to the gerrit) by 5/20 - Implement Driver (Nachi PCM ) by 5/31 - Investigate strongswan - rpc (spec needed) - Design driver architecture (spec needed) - Write driver code - Installation instructions on Wiki 5/31 - Devstack support (nati) late June?
- Write openstack network api document wiki (Sachin) - Horizon work (needs contributor) - Tempest (needs contributor) Next meeting is 5/16 Thursday at 3pm (PST) . On IRC #openstack-meetings Meeting ended Tue May 14 01:00:58 2013 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) Minutes: http://eavesdrop.openstack.org/meetings/openstack_networking_vpn/2013/openstack_networking_vpn.2013-05-14-00.06.html Minutes (text): http://eavesdrop.openstack.org/meetings/openstack_networking_vpn/2013/openstack_networking_vpn.2013-05-14-00.06.txt Log: http://eavesdrop.openstack.org/meetings/openstack_networking_vpn/2013/openstack_networking_vpn.2013-05-14-00.06.log.htm Thanks! Nachi Ueno 2013/5/10 Nachi Ueno na...@ntti3.com: Hi Paul Thanks for your contributions! :) Nachi 2013/5/10 Paul Michali p...@cisco.com: Sure! Glad to work with you Nachi. Anything I can do to help out on the project! I'll start looking at strongswan and how to configure it. Regards, PCM (Paul Michali) On May 10, 2013, at 12:35 PM, Nachi Ueno wrote: Hi Paul Sounds Great. The first driver will be strong-swan based. http://www.strongswan.org/ How about working with me to implement the strong-swan vpn driver? Honestly, i'm new to strong-swan, so I would really appreciate it if you could try strong-swan on ubuntu and share how to configure it based on the current API model. Thanks Nachi 2013/5/10 Paul Michali p...@cisco.com: Nachi, Mark, Swami, Sachin, et al, Any suggestions on where/how I can help on this? I'm new to OS (just working it for a few months), so no specific expertise area, but have bandwidth to contribute. Also, any pointers to information that will help me get up to speed on this would be appreciated (Mark gave me a link to the Amazon URL for info on what they provide for VPNaaS). I was going to look at the LBaaS code next week and have been monitoring those discussions, as there seem to be some parallels there. If there is companion info that you think would help, let me know. 
Regards, PCM (Paul Michali) On May 9, 2013, at 9:12 PM, Nachi Ueno wrote: Hi Folks We had a meeting about VPN today. #Conclusions 1. We agreed on the ipsec api https://blueprints.launchpad.net/quantum/+spec/vpnaas-python-apis 2. Swami will push the api CRUD code to review (continue discussion on code) https://blueprints.launchpad.net/quantum/+spec/vpnaas-python-apis 3. We agreed on the first-implementation vpn architecture 4. Next meeting is 5/13 PST 5:00 PM on #openstack-meetings #Questions for IPSec API 1
Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume
On Jul 1, 2013, at 3:35 AM, Sheng Bo Hou sb...@cn.ibm.com wrote: Hi Mate, First, thanks for answering. I was trying to find a way to prepare the bootable volume. Take the default images downloaded by devstack: there are three images: cirros-0.3.0-x86_64-uec, cirros-0.3.0-x86_64-uec-kernel and cirros-0.3.0-x86_64-uec-ramdisk. cirros-0.3.0-x86_64-uec-kernel is referred to as the kernel image and cirros-0.3.0-x86_64-uec-ramdisk is referred to as the ramdisk image. Issue: If only the image (cirros-0.3.0-x86_64-uec) is copied to the volume when creating a volume from an image, this volume is unable to boot an instance without the references to the kernel and the ramdisk images. The current cinder only copies the image cirros-0.3.0-x86_64-uec to one targeted volume (Vol-1), which is marked as bootable but unable to do a successful boot with the current nova code, even if image-id is removed from the parameters. Possible solutions: There are two ways in my mind to resolve it. One is that we just need a code change in Nova to let it find the reference images for the bootable volume (Vol-1), and there is no need to change anything in cinder, since the kernel and ramdisk ids are saved in the volume_glance_metadata, where the references point to the images (kernel and ramdisk) for the volume (Vol-1). You should be able to create an image in glance that references the volume in block device mapping but also has a kernel_id and ramdisk_id parameter so it can boot properly. I know this is kind of an odd way to do things, but this seems like an edge case and I think it is a valid workaround. Vish The other is that if we need multiple images to boot an instance, we need a new way to create the bootable volume. For example, we can create three separate volumes for the three images and set the new references in volume_glance_metadata with the kernel_volume_id and ramdisk_volume_id. The benefit of this approach is that the volume can live independently of the existence of the original images. 
Even if the images get lost accidentally, the volumes are still sufficient to boot an instance, because all the information has been copied to the Cinder side. I am trying to look for another way to prepare the bootable volume, as you mentioned, and am asking for suggestions. And I think the second approach could be one way. Do you think it is a good approach? Best wishes, Vincent Hou (侯胜博) Staff Software Engineer, Open Standards and Open Source Team, Emerging Technology Institute, IBM China Software Development Lab Tel: 86-10-82450778 Fax: 86-10-82453660 Notes ID: Sheng Bo Hou/China/IBM@IBMCN E-mail: sb...@cn.ibm.com Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing, P.R.C. 100193 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193 Mate Lakat mate.la...@citrix.com 2013/07/01 04:18 Please respond to OpenStack Development Mailing List openstack-dev@lists.openstack.org To OpenStack Development Mailing List openstack-dev@lists.openstack.org, cc jsbry...@us.ibm.com, Duncan Thomas duncan.tho...@gmail.com, John Griffith Subject Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume Hi, I just proposed a patch for the boot_from_volume_exercise.sh to get rid of --image. To be honest, I did not look at the various execution paths. My initial thought is that boot from volume means you boot from volume. If you only have a kernel + ramdisk image, I simply assumed that you can't do it. I would not do any magic. Boot from volume should boot from volume. If you only have 3 part images, you need to find another way to prepare your bootable volume. btw, here is my change: https://review.openstack.org/34761 Cheers, Mate On Mon, Jul 01, 2013 at 01:25:23AM -0400, Sheng Bo Hou wrote: Hi Cinder folks, I am currently fixing the bugs related to booting the instance from the volume. I found there are bugs both in Nova and Cinder. 
Cinder: https://bugs.launchpad.net/cinder/+bug/1159824 Nova: https://bugs.launchpad.net/nova/+bug/1191069 For the volumes created from an image, I propose to copy the reference images during the creation of a volume from the main image. For example, an image may refer to a kernel image and a ramdisk image. When we create a volume from this image, we only copy this one to the volume. The kernel and ramdisk images are still in glance, and the volume still refers to the kernel and ramdisk images. I think if an image has other reference images, the reference images also need to be copied to volumes (a kernel volume and a ramdisk volume), and then the volume set to refer to the kernel volume and the ramdisk volume. This feature will make booting from a volume completely independent of the existence of the glance image. Do you
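In rough data-flow terms, the second approach discussed in this thread — copying a three-part (UEC-style) image into separate volumes and recording the new references in volume_glance_metadata — could be sketched as follows. This is a conceptual illustration only: the helper names (copy_image_to_volume, set_volume_glance_metadata) and the client object are hypothetical stand-ins, not actual cinder code.

```python
def create_bootable_volume_set(cinder, image):
    """Copy a main image plus its kernel/ramdisk references into volumes,
    so booting no longer depends on the glance images still existing.

    `cinder` is any object providing the two hypothetical helpers used below;
    `image` is a dict of glance image properties.
    """
    # Copy the main (root) image into a volume, as cinder already does today.
    root_vol = cinder.copy_image_to_volume(image['id'])
    metadata = {'image_id': image['id']}

    # If the image references a kernel and/or ramdisk, copy those into
    # volumes too, and point the root volume's metadata at the new volumes
    # (kernel_volume_id / ramdisk_volume_id) instead of the glance image ids.
    if 'kernel_id' in image:
        metadata['kernel_volume_id'] = cinder.copy_image_to_volume(
            image['kernel_id'])
    if 'ramdisk_id' in image:
        metadata['ramdisk_volume_id'] = cinder.copy_image_to_volume(
            image['ramdisk_id'])

    cinder.set_volume_glance_metadata(root_vol, metadata)
    return root_vol
```

With this shape, a volume created from cirros-0.3.0-x86_64-uec would carry references to a kernel volume and a ramdisk volume rather than to the glance images, which is what makes the boot independent of glance.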
Re: [openstack-dev] [PTLs] Proposed simplification around blueprint tracking
No objections here. I'll sync with Flavio and see if we can take the proposal for a test drive in Marconi. On 6/28/13 9:53 AM, Flavio Percoco fla...@redhat.com wrote: On 19/06/13 15:51 +0200, Thierry Carrez wrote: Thoughts ? Although it is not an integrated project - not even incubated - we've been doing this for Marconi as well and it's been quite a pain. Also, I've been following how Glance's blueprints have evolved and again, I've noticed how hard it is to keep those fields synced and updated. So, a big +1 from my side. Cheers, FF -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Nominating John Garbutt for nova-core
On 06/26/2013 11:09 AM, Russell Bryant wrote: Greetings, I would like to nominate John Garbutt for the nova-core team. John received plenty of support. Welcome to the team! -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] HP OpenStack Stress Test Tool
Hi All, We have a stress tool (called qaStressTool) that we want to contribute to the OpenStack community to help improve the quality of OpenStack. Initially it was written to stress the block storage driver, but it appears to touch several pieces of OpenStack. We would like your feedback on which project to submit the tool to. One suggestion is to submit qaStressTool into the Tempest project. Is this the appropriate location? A preliminary copy of qaStressTool is available from github at https://github.com/terry7/openstack-stress.git for you to review to help make this determination. Here is a brief summary of what qaStressTool can do and what issues were found while running it. qaStressTool is used to generate a load that tests the block storage drivers and the related OpenStack software. It is written in Python and uses the Python Cinder Client and Nova Client APIs. To run qaStressTool, the user specifies the number of threads, the number of servers, and the number of volumes on the command line, along with the IP address of the controller node. The user name, the tenant name, and the user password must be configured in environment variables before the start of the test. qaStressTool will perform the following during the run:
* Create the specified number of virtual machines (or instances) that will be used by all the threads.
* Create the specified number of threads that will be generating the load.
* Each thread will create the specified number of volumes.
* After creating the specified number of volumes, create a snapshot for each of the volumes.
* After creating the snapshots, the thread will attach each volume to a randomly selected virtual machine until all volumes are used.
* After attaching all the volumes, start detaching the volumes from the instances.
* After detaching all the volumes, delete the snapshot from each volume.
* After deleting all the snapshots, delete the volumes.
* Finally, delete all the virtual machines.
* Display the results of the test after performing any necessary cleanup.
* Note that each thread runs asynchronously, performing the volume creation, snapshot creation, attaching of volumes to instances, detaching of volumes from instances, snapshot deletion, and volume deletion.
* Initially, the test ran with no confirmation for each action. This proved to be too stressful. However, there is a command line option to turn off confirmation if one wants to run in this mode.
We have found the following bugs from running qaStressTool:
* Cinder Bug# 1157506 Snapshot in-use count not decrement after snapshot is deleted
* Cinder Bug# 1172503 3par driver not synchronize causing duplicate hostname error
* Nova Bug# 1175366 Fibre Channel Multipath attach race condition
* Nova Bug# 1180497 FC attach code doesn't discover multipath device
* Neutron Bug# 1182662 Cannot turn off quota checking
* Nova Bug# 1192287 Creating server did not fail when exceeded Quantum quota limit
* Nova Bug# 1192763 Removing FC device causes exception preventing detachment completion
We have found other issues, like libvirt errors or dangling LUNs, that we need to investigate further to determine whether we have a problem or not. -Terry ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
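In outline, each qaStressTool worker thread runs the create/snapshot/attach/detach/delete cycle described above. The following is a simplified sketch of that per-thread lifecycle against stand-in `cinder`/`nova` client objects — it is not the tool's actual code (the real tool drives python-cinderclient and python-novaclient), and all method names on the clients are assumptions for illustration.

```python
import random
import threading

def stress_worker(cinder, nova, servers, num_volumes):
    """One worker thread's lifecycle, following the steps listed above."""
    # Each thread creates its own volumes and snapshots...
    volumes = [cinder.create_volume(size=1) for _ in range(num_volumes)]
    snapshots = [cinder.create_snapshot(v) for v in volumes]
    # ...attaches each volume to a randomly selected instance...
    for v in volumes:
        nova.attach_volume(random.choice(servers), v)
    # ...then unwinds everything in reverse order.
    for v in volumes:
        nova.detach_volume(v)
    for s in snapshots:
        cinder.delete_snapshot(s)
    for v in volumes:
        cinder.delete_volume(v)

def run_stress(cinder, nova, num_threads, num_servers, num_volumes):
    """Create shared instances, run the worker threads, then clean up."""
    servers = [nova.create_server() for _ in range(num_servers)]
    threads = [threading.Thread(target=stress_worker,
                                args=(cinder, nova, servers, num_volumes))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for s in servers:
        nova.delete_server(s)
```

The concurrency is the point of the tool: because all threads hammer the same drivers at once, races like the attach/detach bugs listed above become much easier to reproduce than with serial API calls.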
Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch
On Mon, 2013-07-01 at 18:46 +0200, Thierry Carrez wrote: Mark McLoughlin wrote: 'OpenStack Programs' are efforts which are essential to the completion of our mission, but which do not produce deliverables included in the common release of OpenStack 'integrated' projects every 6 months, like Projects do. Hmm, this wasn't what I understood our direction to be. And maybe this highlights a subtle difference in thinking - as I see it, Oslo absolutely is producing release deliverables. For example, in what way was oslo.config 1.1.0 *not* a part of the grizzly release? The idea that documentation isn't a part of our releases seems a bit off too. I suspect what I call common release of OpenStack 'integrated' projects every 6 months is what you call server projects later in the thread. We publish a number of things, and a subset of them (which I call the integrated release) are released together every 6 months. Oslo libraries are also released, but on a different cadence. Docs are released, usually with a slight delay. I call those deliverables, to avoid confusion with the first set... I think we are on the same line, just not using the term release for the same thing ? Oslo libraries and docs are very much on the same cadence - as in the true meaning of the word, rhythm. The fact that, for process reasons, oslo libraries get released a little before the server releases and, for pragmatic reasons, docs get released a little after the server releases changes little in my mind - everyone's working in the same cycle to produce a single, integrated OpenStack product. Cheers, Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch
Thierry Carrez wrote: What would be your alternate definition ? Let me see if I can answer myself :) I think we have two ways forward. Solution (1) is to make Programs and Projects separate entities, where Projects would be the teams which end up producing at least one integrated thing (as in: a server/REST thing which may end up being considered for core labelling). It's the same as what I originally proposed, but perhaps this wording would be clearer: 'OpenStack Programs' are efforts which are essential to the completion of our mission, but which do not produce a server 'integrated' delivery, like Projects do. Programs can create any code repository and produce any deliverable they deem necessary to achieve their goals. (other paragraphs are similar to the initial RFC) Solution (2) is to make everything a Program. Some have a goal of producing an 'integrated' piece and those must go through incubation. Something like: 'OpenStack Programs' are efforts which are essential to the completion of our mission. Programs can create any code repository and produce any deliverable they deem necessary to achieve their goals. Programs are placed under the oversight of the Technical Committee, and contributing to one of their code repositories grants you ATC status. Current efforts or teams which want to be recognized as an 'OpenStack Program' should place a request to the Technical Committee, including a clear mission statement describing how they help the OpenStack general mission and how that effort is essential to the completion of our mission. Only programs which have a goal that includes the production of a server 'integrated' deliverable need to go through an Incubation period. The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron', 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer', 'Documentation', 'Infrastructure', 'QA' and 'Oslo'. 'Trove' and 'Ironic' are in incubation. 
Those programs should retroactively submit a mission statement and initial lead designation, if they don't have one already. In that scenario, we could also name them Official Teams... because that's the central piece. -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Keystone][Nova] Why migrate_version not InnoDB?
On 07/01/2013 12:49 PM, Brant Knudson wrote: 'Stackers - I've got a review up in Keystone that converts tables from MyISAM to InnoDB [0], which I patterned after a change in Nova. One of the comments in the review is suggesting that the migrate_version table should also be changed. The reason I didn't include migrate_version is because that's the way Nova did it, but other than that I don't know why migrate_version should not be converted. The Nova code is pretty explicit that migrate_version isn't changed [1]. Maybe somebody who knows MySQL or SQLAlchemy-migrate better than I do can come up with a reason why migrate_version shouldn't be changed from MyISAM to InnoDB. [0] https://review.openstack.org/#/c/33102/ - Use InnoDB for MySQL [1] https://github.com/openstack/nova/blob/master/nova/tests/db/test_migrations.py#L331 sqlalchemy-migrate relies on the migrate_versions table, so modifying it from within a sqlalchemy-migrate script is scary. And it's a tiny table that's only used during DB migrations, so I doubt you'd see any actual benefit. -- David Ripton Red Hat drip...@redhat.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Seperate out 'soft-deleted' instances from 'deleted' ones?
For everyone's awareness, there is a bug related to this: https://bugs.launchpad.net/nova/+bug/1196255 Thanks, MATT RIEDEMANN Advisory Software Engineer Cloud Solutions and OpenStack Development Phone: 1-507-253-7622 | Mobile: 1-507-990-1889 E-mail: mrie...@us.ibm.com 3605 Hwy 52 N Rochester, MN 55901-1407 United States From: Yufang Zhang yufang521...@gmail.com To: openstack-dev@lists.openstack.org, Date: 06/30/2013 10:01 AM Subject: [openstack-dev] Seperate out 'soft-deleted' instances from 'deleted' ones? In nova-api, both the vm_states.DELETED and vm_states.SOFT_DELETED states are mapped to the 'DELETED' status. Thus, although nova-api supports filtering instances by instance status, we cannot get instances which are in 'soft-deleted' status, like: nova list --status SOFT_DELETED So does it make sense to separate out 'soft-deleted' instances from 'deleted' ones at the api level? To achieve this, we can modify the state-status mappings in nova-api to map vm_states.SOFT_DELETED to a dedicated status (like SOFT_DELETED) and vice versa. Of course, some modifications will be needed in the instance filter logic. Could anyone give some opinions before I start working on it? Thanks. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
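The change Yufang proposes amounts to splitting one entry in nova-api's vm_state-to-status mapping, roughly as below. This is an illustrative sketch only — the dict names and the exact state strings are assumptions, not nova's actual code:

```python
# Today (per the message above): both deleted states collapse to 'DELETED',
# so the API cannot distinguish soft-deleted instances when filtering.
CURRENT_STATE_MAP = {
    'deleted': 'DELETED',
    'soft-delete': 'DELETED',
}

# Proposed: give soft-deleted instances their own API status, so that
# `nova list --status SOFT_DELETED` has something to filter on.
PROPOSED_STATE_MAP = {
    'deleted': 'DELETED',
    'soft-delete': 'SOFT_DELETED',
}

def status_from_state(vm_state, state_map):
    """Map an internal vm_state to the status string exposed by the API."""
    return state_map.get(vm_state, 'UNKNOWN')
```

As the message notes, the reverse (status-to-state) mapping used by the filter logic would need the matching change, so a SOFT_DELETED filter resolves back to the soft-delete vm_state.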
[openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.
Last week I went to use oslo.config in a utility I am writing called os-collect-config[1]... While running unit tests on the main() method that is used for the CLI, I was surprised to find that my unit tests were picking up values from a config file I had created just as a test. The tests can be fixed to disable config file lookups, but what was more troublesome was that the config file was overriding values I was passing in as sys.argv. I have read the thread[2] which suggests that CLI should defer to the config file because config files are somehow less permanent than the CLI. I am writing today to challenge that notion, and also to suggest that even if that is the case, it is inappropriate to have oslo.config operate in such a profoundly different manner than basically any other config library or system software in general use. CLI options are _for config files_ and if packagers are shipping configurations in systemd unit files, upstart jobs, or sysvinits, they are doing so to control the concerns of that particular invocation of whatever command they are running, and not to configure the software entirely. CLI args are by definition ephemeral; even if somebody might make them permanent in their system, I doubt any packager would then expect that these CLI args would be overridden by any config files. This default is just wrong, and needs to be fixed. [1] https://github.com/SpamapS/os-collect-config.git [2] http://lists.openstack.org/pipermail/openstack-dev/2013-May/008691.html Clint Byrum HP Cloud Services ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.
I think I had a different takeaway from that thread. My understanding was that people were agreeing with you that CLI args *should* have highest precedence and override everything else. The talk about permanence confuses me, unless we mean that more permanent values are overridden by less permanent ones. I was also confused by the ordering in this list, though when I read more carefully it seems to agree with me:

- Default value in source code
- Overridden by value in config file
- Overridden by value in environment variable
- Overridden by value given as command line option

I'd like to rewrite that list as:

- Value given as a command line option
- Failing that, value in environment variable
- Failing that, value in config file
- Failing that, default value in source code

which I believe to be completely equivalent to Monty's original formulation. On Mon, Jul 1, 2013 at 2:52 PM, Clint Byrum cl...@fewbar.com wrote: Last week I went to use oslo.config in a utility I am writing called os-collect-config[1]... While running unit tests on the main() method that is used for the CLI, I was surprised to find that my unit tests were picking up values from a config file I had created just as a test. The tests can be fixed to disable config file lookups, but what was more troublesome was that the config file was overriding values I was passing in as sys.argv. I have read the thread[2] which suggests that CLI should defer to the config file because config files are somehow less permanent than the CLI. I am writing today to challenge that notion, and also to suggest that even if that is the case, it is inappropriate to have oslo.config operate in such a profoundly different manner than basically any other config library or system software in general use. 
CLI options are _for config files_ and if packagers are shipping configurations in systemd unit files, upstart jobs, or sysvinits, they are doing so to control the concerns of that particular invocation of whatever command they are running, and not to configure the software entirely. CLI args are by definition ephemeral, even if somebody might make them permanent in their system, I doubt any packager would then expect that these CLI args would be overridden by any config files. This default is just wrong, and needs to be fixed. [1] https://github.com/SpamapS/os-collect-config.git [2] http://lists.openstack.org/pipermail/openstack-dev/2013-May/008691.html Clint Byrum HP Cloud Services ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
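The precedence order argued for in this thread (CLI over environment over config file over default) can be illustrated with a small resolver. This is a sketch of the desired behavior only — not oslo.config's implementation — and the option name and sources below are made up for the example:

```python
import os

def resolve(name, cli_args=None, env=None, config=None, default=None):
    """Return the value for `name`, honoring the precedence discussed in
    this thread: CLI arg > environment variable > config file > default."""
    cli_args = cli_args or {}
    env = os.environ if env is None else env
    config = config or {}

    if name in cli_args:          # highest precedence: the command line
        return cli_args[name]
    env_key = name.upper()
    if env_key in env:            # then an environment variable
        return env[env_key]
    if name in config:            # then the config file
        return config[name]
    return default                # finally, the source-code default

# The CLI value wins over every other source:
value = resolve('bind_port',
                cli_args={'bind_port': 9292},
                env={'BIND_PORT': '8080'},
                config={'bind_port': 7070},
                default=80)
# value == 9292
```

Dropping any one source simply falls through to the next, which is the "Failing that, ..." reading of the list above.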
Re: [openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.
On Mon, Jul 01, 2013, Clint Byrum cl...@fewbar.com wrote: I am writing today to challenge that notion, and also to suggest that even if that is the case, it is inappropriate to have oslo.config operate in such a profoundly different manner than basically any other config library or system software in general use. CLI options are _for config files_ and if packagers are shipping configurations in systemd unit files, upstart jobs, or sysvinits, they are doing so to control the concerns of that particular invocation of whatever command they are running, and not to configure the software entirely. I completely agree and I even replied to the mail you've referenced expressing the same concern at the time. I'm sorry I didn't push harder at the time to fix this trap that has been unnecessarily set up. If no one beats me to it by the end of the week, I'll put up a review to fix this. JE ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Email is not registered problem
The emails from this list stopped coming to my email address, is this related? -Original Message- From: Jeremy Stanley [mailto:fu...@yuggoth.org] Sent: Wednesday, June 26, 2013 7:38 AM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] Email is not registered problem On 2013-06-26 17:31:26 +0800 (+0800), Wenhao Xu wrote: I am reworking on on one of my old patch today. After that, when I tried to do git review, I got the following error message: [...] remote: ERROR: committer email address wen...@zelin.io remote: ERROR: does not match your user account. remote: ERROR: remote: ERROR: The following addresses are currently registered: remote: ERROR:xuwenhao2...@gmail.com [...] I have updated my primary email address to wen...@zelin.io in launchpad, Gerrit and openstack foundation profile as well. I am wondering is there anything I missed? It's possible you've got two accounts in Gerrit, one you're updating through the WebUI (tied to your current Launchpad OpenID) and another which is associated with your SSH username but has the same SSH key along with your old E-mail address. We've run into a few like that which needed cleaning up, usually following LP account/OpenID changes. I'll look into it and follow up with you privately to resolve the issue. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Oslo] moving sphinx theme for development docs to a library
On Mon, Jul 1, 2013 at 4:51 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: As part of the oslo.sphinx blueprint, I am working on moving the theme we use for the development docs out of all of the various projects and into a reusable library. The first change, to put the theme into a new repository, is up for review at https://review.openstack.org/#/c/35202/ Good idea. I would appreciate any feedback you have, Doug ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Anne Gentle annegen...@justwriteclick.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3
On Tue, Jul 2, 2013 at 1:44 AM, Anne Gentle annegen...@justwriteclick.com wrote: It's pretty rare to assign a doc team person to an API spec, typically the PTL works on it. We do have Diane Fleming on the doc team for API work and she's super knowledgeable and helpful. I have her working with the Keystone team on their v3 right now; possibly we can have her help you as well if we plan and scope the work. Or you can identify someone with the right knowledge. Let's talk more at the meeting on Tuesday July 9, 2013, 1300 UTC. I'll be there. Thanks! Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Move keypair management out of Nova and into Keystone?
On 07/01/2013 07:49 PM, Jamie Lennox wrote: On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote: Hi folks I'm interested in it too. I'm working on VPN support for Neutron. Public key authentication is one of feature milestone in the IPsec implementation. But I believe key-pair management api and the implementation will be quite similar in Key for IPsec and Nova. so I'm +1 for moving key management for Keystone. Best Nachi I don't know how nova's keypair management works but i assume we are talking about keys for ssh-ing into new virtual machines rather than keys for authentication against nova. Keystone's v3 api has credentials storage (see https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/markdown/identity-api-v3.md ), is this sufficient on behalf of keystone? There is some support in the current master of keystoneclient for working with these credentials. Otherwise would the upcoming barbican be a more appropriate place? If i've got this wrong and we are using these keys to actually authenticate against nova then if someone can point me to the code i'll see how hard it is to transfer to keystone. Actually, no, I think you have it right (though the correct link is https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md) I think the main work, though, has to be in removing/replacing the Nova API /keypairs stuff with calls to Keystone's v3/credentials API. Would the appropriate way to do this be to add an API shim into Nova's API that simply calls out to the Keystone v3/credentials API IFF Keystone's v3 API is enabled in the deployment? Then, deprecate the old code and when Keystone v2 API is sunsetted, then remove the old Nova keypairs API codepaths? Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Nova scheduler sub-group meeting agenda 7/2
1) Follow ups on the scheduler BPs 2) Opens? I'm hoping we can get something going through the opens; people must have some issues they want to discuss. PS: The current list of scheduler BPs is:
1) Extending data in host state
2) Utilization based scheduling
3) Whole host allocation capability
4) Coexistence of different schedulers
5) Rack aware scheduling
6) List scheduler hints via API
7) Host directory service
8) The future of the scheduler
9) Network bandwidth aware scheduling (and wider aspects)
10) ensembles/vclusters
-- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] A patch review
Hi guys, The review (https://review.openstack.org/#/c/34652/) has been idle for a while. I am wondering if anyone has a free time slot to review it? Thanks. Regards, Wenhao ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack] Keystone store-quota-data blueprint
On Tue, 2013-07-02 at 02:03 +0000, Everett Toews wrote: This topic came up at the last summit in Portland at [1] and [2]. Yehia and another colleague of his from HP had a design that was discussed and it seemed like they were going to start work on it. Another developer from CERN expressed interest too. I'm not sure if anything ever really got started on it. I think you'll want to wait until Dolph gets back (next week?) before doing any major work on it. Ask him before moving forward. Also know that Dolph has said that nothing that affects the API will be accepted after h-2, so it would have to be finished, reviewed and committed by the 16th. Given that he's away and the discussion that would be around this, I'd say it's very tight. Regards, Everett [1] http://openstacksummitapril2013.sched.org/event/c0c6befcb4361e54d5c7e45b2f772de7 [2] http://openstacksummitapril2013.sched.org/event/7bf2cdde2dfad733b499d9c2a3f60b08 P.S. This email really belongs in openstack-dev On Jul 1, 2013, at 10:24 AM, Dmitry Stepanenko wrote: Hi folks, we're going to work on the store-quota-data blueprint (https://blueprints.launchpad.net/keystone/+spec/store-quota-data). Did anyone already work on it? Thanks regards, Dmitry ___ Mailing list: https://launchpad.net/~openstack Post to : openst...@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Oslo] moving sphinx theme for development docs to a library
On Monday, July 1, 2013, Anne Gentle wrote: On Mon, Jul 1, 2013 at 4:51 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: As part of the oslo.sphinx blueprint, I am working on moving the theme we use for the development docs out of all of the various projects and into a reusable library. The first change, to put the theme into a new repository, is up for review at https://review.openstack.org/#/c/35202/ Good idea. Credit where due, it was Sean's idea. I'm just implementing it. Doug I would appreciate any feedback you have, Doug ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Anne Gentle annegen...@justwriteclick.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev