Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients
Hi,

I'd rather not subclass dict directly. For various reasons, adding extra attributes to a normal Python dict seems prone to errors, since people will be expecting regular dicts; and on the other hand, if we want to expand it in the future we might run into problems playing with dict methods (such as update). I suggest (roughly):

    class ResponseBody(dict):
        def __init__(self, body=None, resp=None):
            self._data_dict = body or {}
            self.resp = resp

        def __getitem__(self, index):
            return self._data_dict[index]

Thus we can keep the previous dict interface, but protect the data and make sure the object behaves exactly as we expect it to. If we want it to have more dict attributes/methods, we can add them explicitly.

----- Original Message -----
From: Boris Pavlovic bpavlo...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Sent: Saturday, August 30, 2014 2:53:37 PM
Subject: Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

Sean,

    class ResponseBody(dict):
        def __init__(self, body={}, resp=None):
            self.update(body)
            self.resp = resp

Are you sure that you would like to have the default value {} for a method argument, and not something like:

    class ResponseBody(dict):
        def __init__(self, body=None, resp=None):
            body = body or {}
            self.update(body)
            self.resp = resp

In your case you have a side effect. Take a look at:
http://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument

Best regards,
Boris Pavlovic

On Sat, Aug 30, 2014 at 10:08 AM, GHANSHYAM MANN ghanshyamm...@gmail.com wrote:

+1. That will also be helpful for APIs coming up with microversions, like Nova.

On Fri, Aug 29, 2014 at 11:56 PM, Sean Dague s...@dague.net wrote:

On 08/29/2014 10:19 AM, David Kranz wrote:

While reviewing patches for moving response checking to the clients, I noticed that there are places where client methods do not return any value. This is usually, but not always, a delete method.
IMO, every rest client method should return at least the response. Some services return just the response for delete methods, and others return (resp, body). Does anyone object to cleaning this up by just making all client methods return (resp, body)? This is mostly a change to the clients. There were only a few places where a non-delete method was returning just a body that was used in test code.

Yair and I were discussing this yesterday. As the response correctness checking is happening deeper in the code (and you are seeing more and more people assigning the response object to _), my feeling is Tempest clients should probably return a body object that's basically:

    class ResponseBody(dict):
        def __init__(self, body={}, resp=None):
            self.update(body)
            self.resp = resp

Then all the clients would have single return values, the body would be the default thing you were accessing (which is usually what you want), and the response object would be accessible if needed to examine headers.

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Thanks & Regards
Ghanshyam Mann
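The two suggestions in this thread can be combined into one runnable sketch: Sean's dict subclass carrying the response, with Boris's fix for the mutable default argument. This is only an illustration of the idea, not the actual Tempest implementation; the `FakeResponse` helper is hypothetical.

```python
# Sketch of the ResponseBody wrapper discussed above, combining the
# dict-subclass idea with the mutable-default-argument fix.
# Hypothetical illustration only -- not the real Tempest code.

class ResponseBody(dict):
    """Dict-like body that also carries the HTTP response object."""

    def __init__(self, resp=None, body=None):
        # Use None as the default and substitute a fresh dict per call,
        # avoiding the shared-mutable-default pitfall Boris points out.
        body = body or {}
        super(ResponseBody, self).__init__(body)
        self.resp = resp


class FakeResponse(object):
    """Stand-in for an HTTP response with status and headers (assumption)."""

    def __init__(self, status, headers):
        self.status = status
        self.headers = headers


resp = FakeResponse(200, {'x-openstack-request-id': 'req-123'})
server = ResponseBody(resp=resp, body={'id': '42', 'name': 'vm1'})

# Callers keep the plain-dict interface...
print(server['name'])
# ...but can still reach the response when they need headers or status.
print(server.resp.status)
```

With this shape, client methods can have a single return value while tests that care about headers still have access to them via `.resp`.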
Re: [openstack-dev] [nova] libvirt version_cap, a postmortem
Hi,

Very nice write-up. I take my hat off to you for taking the time to do a postmortem. As a community we really need to work on how we communicate with one another. At the end of the day we all share a common goal: the success of the project. At times I feel like things are done very quickly without any discussion at all, for example the reverting of patches. I can understand that this is a must when the gate is broken, but in other cases I think we need to discuss things in order to build trust, and so that people in the community can learn what they may or may not have done correctly.

Thanks
Gary

On 8/30/14, 7:08 PM, Mark McLoughlin mar...@redhat.com wrote:

Hey

The libvirt version_cap debacle continues to come up in conversation, and one perception of the whole thing appears to be:

  A controversial patch was ninjaed by three Red Hat nova-cores, and then the same individuals piled on with -2s when a revert was proposed to allow further discussion.

I hope it's clear to everyone why that's a pretty painful thing to hear. However, I do see that I didn't behave perfectly here. I apologize for that.

In order to understand where this perception came from, I've gone back over the discussions spread across gerrit and the mailing list in order to piece together a precise timeline. I've appended that below. Some conclusions I draw from that tedious exercise:

- Some people came at this from the perspective that we already have a firm, unwritten policy that all code must have functional written tests. Others see "test all the things" as a worthy aspiration, but only one of a number of nuanced factors that need to be taken into account when considering the addition of a new feature. I.e. the former camp saw Dan Smith's devref addition as attempting to document an existing policy (perhaps even a more forgiving version of an existing policy), whereas others see it as a dramatic shift to a draconian implementation of "test all the things".
- Dan Berrange, Russell and I didn't feel like we were ninjaing a controversial patch - you can see our perspective expressed in multiple places. The patch would have helped the live snapshot issue, and has other useful applications. It does not affect the broader testing debate. Johannes was a solitary voice expressing concerns with the patch, and you could see that Dan was particularly engaged in trying to address those concerns, repeating his feeling that the patch was orthogonal to the testing debate. All that being said, the patch did merge too quickly.

- What exacerbates the situation - particularly when people attempt to look back at what happened - is how spread out our conversations are. You look at the version_cap review and don't see any of the related discussions on the devref policy review or the mailing list threads. Our disjoint methods of communicating contribute to misunderstandings.

- When it came to the revert, a couple of things resulted in misunderstandings, hurt feelings and frayed tempers: (a) our "retrospective veto revert" policy wasn't well understood, and (b) there was a feeling of private, in-person grumbling about us at the mid-cycle while we were absent, with no attempt to talk to us directly.

To take an even further step back: successful communities like ours require a huge amount of trust between the participants. Trust requires communication and empathy. If communication breaks down and the pressure we're all under erodes our empathy for each other's positions, then situations can easily get horribly out of control. This isn't a pleasant situation and we should all strive for better.
However, I tend to measure our flamewars against this:

https://mail.gnome.org/archives/gnome-2-0-list/2001-June/msg00132.html

GNOME in June 2001 was my introduction to full-time open-source development, so this episode sticks out in my mind. The two individuals in that email were/are immensely capable and reasonable people, yet ... So far, we're doing pretty okay compared to that and many other open-source flamewars. Let's make sure we continue that way by avoiding letting situations fester.

Thanks, and sorry for being a windbag,
Mark.

---

= July 1 =

The starting point is this review: https://review.openstack.org/103923

Dan Smith proposes a policy that the libvirt driver may not use libvirt features until they have been available in Ubuntu or Fedora for at least 30 days. The commit message mentions that this has "broken us in the past when we add a new feature that requires a newer libvirt than we test with, and we discover that it's totally broken when we upgrade in the gate."
Re: [openstack-dev] [nova][NFV] VIF_VHOSTUSER
On 8/30/2014 11:22 PM, Ian Wells wrote:

The problem here is that you've removed the vif_driver option, and now you're preventing the inclusion of named VIF types into the generic driver, which means that rather than adding a package to an installation to add support for a VIF driver, it's now necessary to change the Nova code (and repackage it, or - ew - patch it in place after installation). I understand where you're coming from, but unfortunately the two changes together make things very awkward.

Granted that vif_driver needed to go away - it was the wrong level of code, and the actual value was coming from the wrong place anyway (Nova config and not Neutron) - but it's been removed without a suitable substitute.

It's a little late for a feature for Juno, but I think we need to write something that discovers VIF types installed on the system. That way you can add a new VIF type to Nova by deploying a package (and perhaps naming it in config as an available selection to offer to Neutron) *without* changing the Nova tree itself. In the meantime, I recommend you consult with the Neutron cores and see if you can make an exception for the VHOSTUSER driver for the current timescale.
--
Ian.

I agree with Ian. My understanding from a conversation a month ago was that there would be an alternative to the deprecated config option. As far as I understand, there is no such alternative in Juno, and anyone with an out-of-tree VIF driver will be left with a broken solution. What do you say about the option of reverting the change? In any case, it might be a good idea to discuss proposals to address this issue towards the Kilo summit.

BR,
Itzik
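The kind of discovery Ian describes - adding a VIF type by deploying a package rather than patching the Nova tree - could be done with setuptools entry points. The sketch below is purely illustrative: the entry-point group name `nova.vif_drivers` and the example package layout are assumptions, not anything Nova actually implements.

```python
# Hypothetical sketch of out-of-tree VIF driver discovery via setuptools
# entry points. The group name 'nova.vif_drivers' is an assumption for
# illustration; Nova has no such mechanism as of this thread.

import pkg_resources


def discover_vif_drivers(group='nova.vif_drivers'):
    """Map VIF type name -> driver class for every installed package
    that advertises a driver under the given entry-point group."""
    drivers = {}
    for ep in pkg_resources.iter_entry_points(group):
        drivers[ep.name] = ep.load()
    return drivers


# A deployment shipping a vhost-user driver would then only need to
# declare an entry point in its own package, e.g. in setup.py:
#
#   entry_points={'nova.vif_drivers': [
#       'vhostuser = vif_vhostuser.driver:VhostUserVIFDriver']}
#
# and the generic driver could offer the discovered types to Neutron
# without any change to the Nova tree.
print(discover_vif_drivers('example.fake.group'))  # {} unless a package registers one
```

This is roughly the shape stevedore (already used elsewhere in OpenStack) provides; the point is only that the lookup lives outside the Nova tree.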
Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?
I tend to say (2) is the best option. There is a lot of open source and commercial backup software, for both VMs and volumes. If we do option (1), it would mean implementing something similar to VMware's method, and it would make Nova really heavy.

On Sun, Aug 31, 2014 at 4:04 AM, Preston L. Bannister pres...@bannister.us wrote:

You are thinking of written-for-cloud applications. For those, state should not persist with the instance.

There are a very large number of existing applications, not written to the cloud model, but which could be deployed in a cloud. Those applications are not all going to get re-written (as the cost is often greater than the benefit). Those applications need some ready and efficient means of backup. The benefits of the cloud-application model and the cloud-deployment model are distinct.

The existing nova backup (if it worked) is an inefficient snapshot. Not really useful at scale. There are two basic paths forward here: (1) build a complete common backup implementation that everyone can use, or (2) define a common API for invoking backup, allow vendors to supply differing implementations, and add to OpenStack the APIs needed by backup implementations.

Given past history, there does not seem to be enough focus or resources to get (1) done. That makes (2) much more likely. Reasonably sure we can find the interest and resources for this path. :)

On Fri, Aug 29, 2014 at 10:55 PM, laserjetyang laserjety...@gmail.com wrote:

I think the purpose of a nova VM is not persistent usage; it should be used statelessly. However, there are use cases that use VMs to replace bare-metal applications, and those require the same coverage, which I think VMware did pretty well. The nova backup is a snapshot indeed, so it should be re-implemented to fit into a real backup solution.

On Sat, Aug 30, 2014 at 1:14 PM, Preston L. Bannister pres...@bannister.us wrote:

The current backup APIs in OpenStack do not really make sense (and apparently do not work ...
which perhaps says something about usage and usability). So in that sense, they could be removed.

I wrote out a bit as to what is needed:
http://bannister.us/weblog/2014/08/21/cloud-application-backup-and-openstack/

At the same time, to do efficient backup at cloud scale, OpenStack is missing a few primitives needed for backup. We need to be able to quiesce instances, and collect changed-block lists, across hypervisors and filesystems. There is some relevant work in this area - for example:
https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots

Switching hats - as a cloud developer, on AWS there is an excellent current means of backup-through-snapshots, which is very quick and is charged based on changed blocks. (The performance and cost both reflect the use of changed-block tracking underneath.) If OpenStack completely lacks any equivalent API, then the platform is less competitive.

Are you thinking about backup as performed by the cloud infrastructure folk, or as a service used by cloud developers in deployed applications? The first might do behind-the-scenes backups. The second needs an API.

On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes jaypi...@gmail.com wrote:

On 08/29/2014 02:48 AM, Preston L. Bannister wrote:

Looking to put a proper implementation of instance backup into OpenStack. I started by writing a simple set of baseline tests and running them against the stable/icehouse branch. They failed!

https://github.com/dreadedhill-work/openstack-backup-scripts

Scripts and configuration are in the above. Simple tests. At first I assumed there was a configuration error in my Devstack ... but at this point I believe the errors are in fact in OpenStack. (Also I have rather more colorful things to say about what is and is not logged.) Trying to back up bootable Cinder volumes attached to instances ... all fail. Trying to back up instances booted from images, all but one fail (without logged errors, so far as I see).
I was concerned about preserving existing behaviour (as I am currently hacking the Nova backup API), but ... if the existing behaviour is badly broken, this may not be a concern. (Makes my job a bit simpler.) If someone is using nova backup successfully (more than one backup at a time), I *would* rather like to know! Anyone with different experience?

IMO, the create_backup API extension should be removed from the Compute API. It's completely unnecessary, and backups should be the purview of external (to Nova) scripts or configuration management modules. This API extension is essentially trying to be a "Cloud Cron", which is inappropriate for the Compute API, IMO.

-jay
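Option (2) from the thread - a common API with pluggable vendor implementations - can be sketched as an abstract driver exposing the primitives the thread says are missing (quiesce, changed-block lists). Every name below is hypothetical; nothing like this exists in Nova, and the toy implementation only illustrates the shape of the interface.

```python
# Hypothetical sketch of a pluggable backup-driver interface, per
# option (2) in the thread. All class and method names are invented
# for illustration; this is not a Nova or Cinder API.

import abc


class BackupDriver(abc.ABC):
    """Common API a vendor backup backend would implement."""

    @abc.abstractmethod
    def quiesce(self, instance_id):
        """Flush guest I/O so the disks are crash-consistent."""

    @abc.abstractmethod
    def changed_blocks(self, volume_id, since_snapshot_id=None):
        """Return block ranges modified since a prior snapshot, so an
        incremental backup only copies what changed."""

    @abc.abstractmethod
    def backup(self, instance_id, destination):
        """Quiesce, snapshot, and ship changed blocks to storage."""


class NaiveFullBackupDriver(BackupDriver):
    """Toy backend: without real change tracking, every block is
    reported as changed (i.e. a full backup every time)."""

    def quiesce(self, instance_id):
        return True

    def changed_blocks(self, volume_id, since_snapshot_id=None):
        # Pretend the volume is 4 blocks long and all of it changed.
        return [(0, 4)]

    def backup(self, instance_id, destination):
        self.quiesce(instance_id)
        ranges = self.changed_blocks(instance_id)
        destination.extend(ranges)
        return len(ranges)


store = []
driver = NaiveFullBackupDriver()
print(driver.backup('vm-1', store))  # 1 changed-block range copied
```

The gap between this toy and a real backend - hypervisor-level quiesce and true changed-block tracking - is exactly the set of primitives the thread argues OpenStack needs to expose.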
Re: [openstack-dev] [nova] Is the BP approval process broken?
Indeed, this is pretty much what we are going to do about Gantt. Nobody has said "don't do it"; all of the objections have been around how and when to do the split. We will revisit in Kilo (hopefully early in the cycle) and try again. Note there is still the issue of the Nova BP review process, which I think needs to be tweaked, but that is separate from the issue of how we get Gantt going.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Friday, August 29, 2014 8:39 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On Aug 29, 2014 10:42 AM, Dugger, Donald D donald.d.dug...@intel.com wrote:

Well, I think that there is a sign of a broken (or at least bent) process, and that's what I'm trying to expose. Especially given the ongoing conversations over Gantt, it seems wrong that ultimately it was rejected due to silence. Maybe rejecting the BP was the right decision, but the way the decision was made was just wrong.

Note that dealing with silence is *really* difficult. You point out that maybe silence means people don't agree with the BP, but how do I know? Maybe it means no one has time, maybe no one has an opinion, maybe it got lost in the shuffle, maybe I'm being too obnoxious - who knows. A simple -1 with a one-sentence explanation would have helped a lot.

How is this: -1, we already have too many approved blueprints in Juno, and it sounds like there are still concerns about the Gantt split in general. Hopefully after trunk is open for Kilo we can revisit the Gantt idea. I'm thinking yet another ML thread outlining why and how to get there.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D.
Gale
Ph: 303/443-3786

-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Friday, August 29, 2014 10:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/29/2014 12:25 PM, Zane Bitter wrote:

On 28/08/14 17:02, Jay Pipes wrote:

I understand your frustration about the silence, but the silence from core team members may actually be a loud statement about where their priorities are.

I don't know enough about the Nova review situation to say whether the process is broken or not. But I can say that if passive-aggressively ignoring people is considered a primary communication channel, something is definitely broken.

Nobody is ignoring anyone. There have been ongoing conversations about the scheduler and Gantt, and those conversations haven't resulted in all the decisions that Don would like. That is unfortunate, but it's not a sign of a broken process.

-jay
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
I'm fairly certain the buzzing sound I can hear is a bee in my bonnet ... so I suspect I'm starting to sound like someone chasing a bee that only they can hear. I'm not sure if it's helpful to keep this discussion on this list - would there be a better forum somewhere else?

On Fri, Aug 29, 2014 at 7:34 PM, Thierry Carrez thie...@openstack.org wrote:

James Polley wrote:

However, Thierry pointed to https://wiki.openstack.org/wiki/Governance/Foundation/Structure which still refers to "Project Technical Leads" and says explicitly that they lead individual projects, not programs. I actually have edit access to that page, so I could at least update that with a simple s/Project/Program/, if I was sure that was the right thing to do.

Don't underestimate how stale wiki pages can become! Yes, fix it.

I don't know if I've fixed it, but I've certainly replaced all uses of the word "Project" with "Program". Whether or not it now matches reality, I'm not sure. I also removed (what I assume is) a stale reference to the PPB and added a new heading for the TC.

It looks correct to me, thanks!

http://www.openstack.org/ has a link in the bottom nav that says "Projects"; it points to http://www.openstack.org/projects/ which redirects to http://www.openstack.org/software/ which has a list of things like Compute and Storage - which, as far as I know, are Programs, not Projects. I don't know how to update that link in the nav panel.

That's because the same word (compute) is used for two different things: a program name (Compute) and an official OpenStack name for a project ("OpenStack Compute", a.k.a. Nova). Basically, the official OpenStack names reduce confusion for newcomers ("What is Nova?"), but they confuse old-timers at some point ("so the Compute program produces Nova, a.k.a. OpenStack Compute?").

That's confusing to me.
I had thought that part of the reason for the separation was to enable a level of indirection - if the Compute program team decided that a new project called (for example) SuperNova should be the main project, that would just mean that "OpenStack Compute" now points to a different project, supported by the same program team. It sounds like that isn't the intent, though?

That's more of a side effect than the intent, IMHO. The indirection we created is between teams and code repositories.

I wasn't around when the original Programs/Projects discussion was happening - which, I suspect, has a lot to do with why I'm confused today - it seems as though people who were around at the time understand the difference, but people who have joined since then are relying on multiple conflicting verbal definitions. I believe, though, that http://lists.openstack.org/pipermail/openstack-dev/2013-June/010821.html was one of the earliest starting points of the discussion. That page points at https://wiki.openstack.org/wiki/Projects, which today contains a list of Programs. That page does have a definition of what a Program is, but doesn't explain what a Project is or how they relate to Programs. This page seems to be locked down, so I can't edit it.

https://wiki.openstack.org/wiki/Projects was renamed to https://wiki.openstack.org/wiki/Programs, with the wiki helpfully leaving a redirect behind. So the content you are seeing here is the Programs wiki page, which is why it doesn't define projects. We don't really use the word "project" that much anymore; we prefer to talk about code repositories. Programs are teams working on a set of code repositories. Some of those code repositories may appear in the integrated release.

This explanation of the difference between projects and programs sounds like it would be useful to add to /Programs - but I can't edit that page.

That page reflects the official list of programs, which is why it's protected.
It's supposed to be replaced by an automatic publication from http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml which is the ultimate source of truth on that topic.

I was going to ask about the reference to "The process new projects can follow to become an Integrated project" - is that intended to refer to a project or a program? But then I read https://review.openstack.org/#/c/116727/ and http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst, which seem to make it clear that it's entirely possible for the Kitty program to have a mix of Integrated and non-Integrated projects.

Is it safe to assume that the Governance repo is canonical and up-to-date, and rework the wiki pages based on the information in the Governance repo? [1]
Re: [openstack-dev] [nova] nova backup not working in stable/icehouse?
I also believe (2) is the most workable option.

Full disclosure ... my current job is at EMC, and we just shipped a backup product for the VMware vCloud (the one used for VMware vCloud Air - http://vcloud.vmware.com/). The first release of that project was wrapping up, and I was asked to look at backup for OpenStack. As I am familiar with both the open-source community and with how to do backup at cloud scale ... the problem is an easy fit.

Different backup vendors might approach the problem in differing ways, which calls for a pluggable backend. What I find is that OpenStack is missing the hooks we need to do backup efficiently. There are promising proposed bits, but ... we are not there yet. Storage for backups is of a different character than existing services in OpenStack. Backup vendors need a place to plug in, and need some relevant primitive operations.

It turns out the AWS folk already have rather nice support for backup from a cloud developer's perspective. At present, there is nothing equivalent in OpenStack. From a cloud developer's perspective, that is a huge lack. While AWS is an API wrapped around a single service, OpenStack is an API wrapped around differing services. This is both harder (in terms of defining the API) and an advantage to customers with differing requirements. What is lacking, and needed by backup vendors, is the right set of primitives.

On Sun, Aug 31, 2014 at 8:19 AM, laserjetyang laserjety...@gmail.com wrote:

I tend to say (2) is the best option. There is a lot of open source and commercial backup software, for both VMs and volumes. If we do option (1), it would mean implementing something similar to VMware's method, and it would make Nova really heavy.

On Sun, Aug 31, 2014 at 4:04 AM, Preston L. Bannister pres...@bannister.us wrote:

You are thinking of written-for-cloud applications. For those, state should not persist with the instance.
Re: [openstack-dev] [bashate] .bashateignore
On 08/29/2014 10:42 PM, Sean Dague wrote:

I'm actually kind of convinced now that none of these approaches are what we need, and that we should instead have a .bashateignore file in the root dir of the project, which would be regexes matching files or directories to throw out of the walk.

Dean's idea of reading .gitignore might be good. I had a quick poke at git's dir.c:match_pathspec_item() [1] and came up with something similar [2] which roughly follows that, and then only matches files that have a shell-script mimetype; which I feel is probably sane for a default implementation.

IMO devstack should just generate its own file list to pass in for checking, and bashate shouldn't have special guessing code for it. It all feels a bit like a solution looking for a problem. Making bashate work only on a passed-in list of files, and leaving generation of that list up to the test infrastructure, would probably be the best KISS choice ...

-i

[1] https://github.com/git/git/blob/master/dir.c#L216
[2] https://review.openstack.org/#/c/117425/