Re: [openstack-dev] baremetal nova boot issue
On 13 October 2013 20:16, Ravikanth Samprathi rsamp...@gmail.com wrote:

> Hi Rob
> The steps are well known, the devil is in the details. Following the
> instructions in the wiki was not very straightforward, and led to many
> issues down the line.
>
> Now which images are deploy ramdisk and kernel?

The ones you create.

> How is the nova agent downloaded to the baremetal node, in which step
> and how?

What agent?

> What does 'nova boot' do?

It triggers the deployment process as normal for nova. For the pxe
baremetal driver that means extracting the images from glance, writing
them to tftp, then powering on the machine.

> I see that using any of the diskimage-builder built images (ramdisk,
> kernel) is not booting up the system.

Can you give some more detail about what happens?

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
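The pxe deploy flow Robert describes (pull the deploy kernel/ramdisk from glance, stage them where the tftp server can serve them, then power the node on) can be sketched roughly as below. This is an illustrative toy, not the actual nova baremetal code: the function names, the dict standing in for glance, and the callback standing in for IPMI power control are all invented for the example.

```python
import os
import tempfile

def fetch_image(image_id, images):
    """Stand-in for the glance image download step."""
    return images[image_id]

def stage_for_tftp(tftp_root, node_id, kernel, ramdisk):
    """Write the deploy images under the tftp root for this node."""
    node_dir = os.path.join(tftp_root, node_id)
    os.makedirs(node_dir, exist_ok=True)
    for name, data in (("kernel", kernel), ("ramdisk", ramdisk)):
        with open(os.path.join(node_dir, name), "wb") as f:
            f.write(data)
    return node_dir

def deploy(node_id, kernel_id, ramdisk_id, images, tftp_root, power_on):
    """The three steps 'nova boot' triggers for the pxe driver."""
    kernel = fetch_image(kernel_id, images)
    ramdisk = fetch_image(ramdisk_id, images)
    stage_for_tftp(tftp_root, node_id, kernel, ramdisk)
    power_on(node_id)  # a real driver would do this via IPMI

# Exercise the sketch with fake images and a fake power-on callback.
images = {"k1": b"kernel-bits", "r1": b"ramdisk-bits"}
powered = []
with tempfile.TemporaryDirectory() as root:
    deploy("node-1", "k1", "r1", images, root, powered.append)
    staged = sorted(os.listdir(os.path.join(root, "node-1")))
print(powered, staged)
```

If the node never PXE-boots after this point, the usual suspects are the staged images themselves or the tftp/dhcp configuration, which is why Robert asks for detail on what actually happens at boot.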
Re: [openstack-dev] [Hyper-V] Havana status
On Sat, 12 Oct 2013 23:12:26 +1300 Robert Collins robe...@robertcollins.net wrote:

> On 12 October 2013 21:35, Christopher Yeoh cbky...@gmail.com wrote:
>> On Fri, 11 Oct 2013 08:27:54 -0700 Dan Smith d...@danplanet.com wrote:
>
> A fairly fundamental thing in SOA architectures - which we have here -
> is to make all changes backwards compatibly. It's pretty easy if
> you're in the habit of it - there's only a handful of basic primitives
> around evolving APIs gracefully - and it results in a much smoother
> deployment story - and ultimately that's what we're aiming at.

I think that approach is fine for external APIs, where we want
stability. But for internal APIs, where we don't seek to provide that
sort of guarantee, there is a benefit to being able to do major
reworking of code without having to worry about backwards compatibility
and the cruft that comes with providing it.

Chris
Re: [openstack-dev] [Hyper-V] Havana status
On Sat, 12 Oct 2013 09:30:30 -0700 Dan Smith d...@danplanet.com wrote:

>> If the idea is to gate with nova-extra-drivers this could lead to a
>> rather painful process to change the virt driver API. When all the
>> drivers are in the same tree all of them can be updated at the same
>> time as the infrastructure.
>
> Right, and I think if we split those drivers out, then we do *not*
> gate on them for the main tree. It's asymmetric, which means
> potentially more trouble for the maintainers of the extra drivers.
> However, as has been said, we *want* the drivers in the tree as we
> have them now.

Being moved out would be something the owners of a driver would choose
in order to achieve a faster pace of development, with the consequence
of having to play catch-up if and when we change the driver API. If
that's what the owners of the driver want to do then I've no problem
with supporting that approach. But I very much think that we should aim
to have drivers integrated into the Nova tree as they mature so we can
gate on them. Or, if not in the tree, then at least have a system that
supports developing in a way that makes gating on them possible without
the downside pains of not being able to change internal APIs easily.

Chris.
Re: [openstack-dev] [Hyper-V] Havana status
On Sat, 12 Oct 2013 15:20:44 -0700 Joe Gordon joe.gord...@gmail.com wrote:

> Once again you raise the issue of bug triage and prioritization of
> reviews (and blueprints), so help us fix that! This isn't a virt
> driver only issue though. The issues you originally raise are only
> incidentally related to virt drivers and hyper-v. The same issues can
> be brought up, as you point out, by any sub-project (scheduling, APIs,
> DB, etc). So a fix for only virt drivers hardly sounds like an
> appropriate solution.

+1

I see there is a session submitted for the summit which is meant to
specifically cover the future of compute drivers:

http://summit.openstack.org/cfp/details/4

But I'm wondering if it would be better to generalise this to one where
we can more broadly discuss the issues around bug triage, review
prioritisation and feature planning (e.g. the risk of aiming to merge
major features in H3/I3) - what people can do to help the situation and
what tools might help make reviewers more effective.

Chris
Re: [openstack-dev] [Hyper-V] Havana status
On Oct 13, 2013, at 14:54, Christopher Yeoh cbky...@gmail.com wrote:

> On Sat, 12 Oct 2013 09:30:30 -0700 Dan Smith d...@danplanet.com wrote:
>
>>> If the idea is to gate with nova-extra-drivers this could lead to a
>>> rather painful process to change the virt driver API. When all the
>>> drivers are in the same tree all of them can be updated at the same
>>> time as the infrastructure.
>>
>> Right, and I think if we split those drivers out, then we do *not*
>> gate on them for the main tree. It's asymmetric, which means
>> potentially more trouble for the maintainers of the extra drivers.
>> However, as has been said, we *want* the drivers in the tree as we
>> have them now.
>
> Being moved out would be something the owners of a driver would choose
> in order to achieve a faster pace of development, with the consequence
> of having to play catch-up if and when we change the driver API. If
> that's what the owners of the driver want to do then I've no problem
> with supporting that approach. But I very much think that we should
> aim to have drivers integrated into the Nova tree as they mature so we
> can gate on them. Or, if not in the tree, then at least have a system
> that supports developing in a way that makes gating on them possible
> without the downside pains of not being able to change internal APIs
> easily.
>
> Chris.

As far as the driver's interface stability is concerned, I wouldn't see
it as a major issue as long as Nova and the driver devs coordinate the
effort. Besides that, having a versioned stable driver interface
wouldn't IMHO be such a hassle but, as I wrote, this is our very last
problem.
[openstack-dev] Havana RC2's available in the Ubuntu Cloud Archive for 12.04
Hi Folks

RC2's for nova, cinder, glance, heat and neutron should be available in
the Havana Cloud Archive for Ubuntu 12.04 in the next hour or so - see:

http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/havana_versions.html

For details of how to use the Ubuntu Cloud Archive for Havana, please
refer to:

https://wiki.ubuntu.com/ServerTeam/CloudArchive

Cheers

James

--
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
[openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting
Hi All,

For the next phase of FWaaS development we will be considering a number
of features. I am proposing an IRC meeting on Wednesday, Oct 16th,
18:00 UTC (11 AM PDT) to discuss this.

The etherpad for the summit session proposal is here:
https://etherpad.openstack.org/p/icehouse-neutron-fwaas
and has a high-level list of features under consideration.

Thanks,
~Sumit.
Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows
On 11/10/13 14:25 -0400, Lakshminaraya Renganarayana wrote:

> Excellent discussion on various issues around orchestration and
> coordination -- thanks to you all, in particular to Clint, Angus,
> Stan, Thomas, Joshua, Zane, Steve ... After reading the discussions, I
> am finding the following themes emerging (please feel free to
> correct/add):
>
> 1. Most of the building blocks needed for effective coordination and
> orchestration are already in Heat/HOT.
> 2. Heat would like to view software configuration as a resource (type)
> with related providers + plugins.
> 3. There is scope for communication/synchronization mechanisms that
> would complement the wait-conditions and signals.
>
> I would like to propose a simple abstraction that would complement the
> current wait-conditions and signals. My proposal is based on our
> experience with supporting such an abstraction in our DSL and also in
> an extension of Heat. In a nutshell, this abstraction is a global data
> space (visible across resources and stacks) from which resources can
> read and write their inputs/outputs, PLUS the semantics that reads
> will block until the read values are available while writes are
> non-blocking.
>
> We used ZooKeeper to implement this global data space and the
> blocking-read/non-blocking-write semantics, but these could be
> implemented using several other mechanisms, and I believe the
> techniques currently used by Heat for the metadata service can be used
> here.
>
> I would like to make clear that I am not proposing a replacement for
> wait-conditions and signals. I am hoping that wait-conditions and
> signals would be used by power-users (concurrent/distributed
> programming experts) and the proposed abstraction would be used by
> folks (like me) who do not want to reason about concurrency and
> related problems.
>
> Also, the proposed global data-space with blocking reads and
> non-blocking writes is not a new idea (google tuple-spaces, Linda) and
> it has been proven in other domains, such as coordination languages,
> to improve the level of abstraction and productivity.
>
> The benefits of the proposed abstraction are:
>
> G1. Support finer granularity of dependences.
> G2. Allow Heat to reason/analyze these dependences so that it can
> order resource creation/management.
> G3. Avoid classic synchronization problems such as deadlocks and race
> conditions.
> G4. *Conjecture*: Capture most of the coordination use cases
> (including those required for software configuration/orchestration).
>
> Here is a more detailed description. Let us say that we can use either
> pre-defined or custom resource types to define resources at arbitrary
> levels of granularity. This can be easily supported and I guess is
> already possible in the current version of Heat/HOT. Given this, the
> proposed abstraction has two parts: (1) an interface-style
> specification of a resource's inputs and outputs and (2) a global
> name/data space.

We do specify the resources' inputs (properties) and outputs
(attributes). I am not sure what is new here?

> - INPUTS: all the attributes that are consumed/used/read by that
> resource (currently, we have Ref and GetAttr that can give this
> implicitly).
> - OUTPUTS: all the attributes that are produced/written by that
> resource (I do not know if this write-set is currently well-defined
> for a resource; I think some of them are implicitly defined by Heat on
> particular resource types).
> - Global name-space and data-space: all the values produced and
> consumed (INPUTS/OUTPUTS) are described using names that are fully
> qualified (XXX.stack_name.resource_name.property_name). The data
> values associated with these names are stored in a global data-space.
> Reads are blocking, i.e., reading a value will block the executing
> resource/thread until the value is available. Writes are non-blocking,
> i.e., any thread can write a value and the write will succeed
> immediately.

I don't believe this would give us any new behaviour.

> The ability to define resources at arbitrary levels of granularity,
> together with the explicit specification of INPUTS/OUTPUTS, allows us
> to reap the benefits G1 and G2 outlined above. Note that the ability
> to reason about the inputs/outputs of each resource and the induced
> dependencies will also allow Heat to detect deadlocks via dependence
> cycles (benefit G3).

This is already done today in Heat for Refs and GetAttr on
base resources, but the proposal is to extend the same to arbitrary
attributes for any resource. How are TemplateResources and NestedStacks
any different? To my knowledge this is already the case.

> The blocking reads and non-blocking writes further structure the
> specification to avoid deadlocks and race conditions (benefit G3).

Have you experienced deadlocks with Heat? I have never seen this...

> As for G4, the conjecture, I can only give as evidence our experience
> with using our DSL with the proposed abstraction to deploy a few
> reasonably large applications :-) I would like to know your comments
> and suggestions. Also, if there is interest I can
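The blocking-read/non-blocking-write data space being debated above can be sketched in a few lines. This is an illustrative in-process toy built on threading primitives, not the ZooKeeper-backed implementation the proposal describes; the class name and the fully-qualified key used below are invented for the example.

```python
import threading

class DataSpace:
    """Toy global data space: reads block until a value has been
    written for the requested name; writes never block."""

    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def write(self, name, value):
        # Non-blocking: publish the value and wake any waiting readers.
        with self._cond:
            self._data[name] = value
            self._cond.notify_all()

    def read(self, name, timeout=5.0):
        # Blocking: wait until some writer has produced the value.
        with self._cond:
            if not self._cond.wait_for(lambda: name in self._data, timeout):
                raise TimeoutError(name)
            return self._data[name]

space = DataSpace()
results = []

def consumer():
    # Blocks until the "producer" resource writes its output.
    results.append(space.read("stack1.db.endpoint"))

t = threading.Thread(target=consumer)
t.start()
space.write("stack1.db.endpoint", "10.0.0.5:3306")
t.join()
print(results)
```

Because every read names the value it depends on, an engine holding these names could also build the dependence graph up front and reject cycles before execution, which is the deadlock-avoidance argument (G3) made above.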
Re: [openstack-dev] Thanks for fixing my patch
On 11/10/13 11:34 -0700, Clint Byrum wrote:

> Recently in the TripleO meeting we identified situations where we need
> to make it very clear that it is ok to pick up somebody else's patch
> and finish it. We are broadly distributed, time-zone-wise, and I know
> other teams working on OpenStack projects have the same situation. So
> when one of us starts the day and sees an obvious issue with a patch,
> we have decided to take action, rather than always -1 and move on.
>
> We clarified for our core reviewers that this does not mean that now
> both of you cannot +2. We just need at least one person who hasn't
> been in the code to also +2 for an approval*.
>
> I think all projects can benefit from this model, as it will raise
> velocity. It is not perfect for everything, but it is really great
> when running up against deadlines or when a patch has a lot of churn
> and thus may take a long time to get through the rebase gauntlet.
>
> So, all of that said, I want to encourage all OpenStack developers to
> say thanks for fixing my patch when somebody else does so. It may seem
> obvious, but publicly expressing gratitude will make it clear that you
> do not take things personally and that we're all working together.
>
> Thanks for your time
>
> -Clint
>
> * If all core reviewers have been in on the patch, then any two +2's
> work.

Note the commit will be authored by the original poster, so perhaps if
you modify a patch we should add a Modified-by: line to indicate that
it was dual-authored.

-Angus
Re: [openstack-dev] Thanks for fixing my patch
On 2013-10-14 09:45:38 +1100 (+1100), Angus Salkeld wrote:

> Note the commit will be authored by the original poster, so perhaps if
> you modify a patch we should add a Modified-by: line to indicate that
> it was dual-authored.

We encourage the use of "Co-Authored-By: name n...@example.com" in
commit messages to indicate people who worked on a particular patch.
It's a convention for recognizing multiple authors, and our projects
would encourage the stats tools to observe it when collecting
statistics.

https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

That said, if the work I'm doing is a trivial fixup to someone else's
change and I'm not substantially contributing to the overall
idea/implementation, I don't bother to add one... only if it's a
significant departure from/improvement on the original author's work.

--
Jeremy Stanley
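For illustration, a commit message using that convention puts the trailer in the footer, after the body. The change summary and name below are invented for the example:

```
Fix race when detaching volumes

Wait for the hypervisor to report the device gone before
updating the block device mapping.

Co-Authored-By: Jane Doe <jane@example.com>
```

The original author stays in the git `Author` field; the trailer credits the person who finished the patch.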
Re: [openstack-dev] Thanks for fixing my patch
On 13/10/13 22:58 +0000, Jeremy Stanley wrote:

> On 2013-10-14 09:45:38 +1100 (+1100), Angus Salkeld wrote:
>> Note the commit will be authored by the original poster, so perhaps
>> if you modify a patch we should add a Modified-by: line to indicate
>> that it was dual-authored.
>
> We encourage the use of Co-Authored-By: name n...@example.com in
> commit messages to indicate people who worked on a particular patch.
> It's a convention for recognizing multiple authors, and our projects
> would encourage the stats tools to observe it when collecting
> statistics.
>
> https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references

Well there you go, I missed that. Always read the little writing at the
bottom :)

-Angus

> That said, if the work I'm doing is a trivial fixup to someone else's
> change and I'm not substantially contributing to the overall
> idea/implementation, I don't bother to add one... only if it's a
> significant departure from/improvement on the original author's work.

Sure.

> --
> Jeremy Stanley
Re: [openstack-dev] [Swift] container forwarding/cluster federation blueprint
Hi,

I'd be interested in the differences this has to using Swift global
clusters?

Cheers,
Sam

On 12/10/2013, at 3:49 AM, Coles, Alistair alistair.co...@hp.com wrote:

> We’ve just committed a first set of patches to gerrit that address
> this blueprint:
> https://blueprints.launchpad.net/swift/+spec/cluster-federation
>
> Quoting from that page: “The goal of this work is to enable account
> contents to be dispersed across multiple clusters, motivated by (a)
> accounts that might grow beyond the remaining capacity of a single
> cluster and (b) clusters offering differentiated service levels such
> as different levels of redundancy or different storage tiers.
> Following feedback at the Portland summit, the work is initially
> limited to dispersal at the container level, i.e. each container
> within an account may be stored on a different cluster, whereas every
> object within a container will be stored on the same cluster.”
>
> It is work in progress, but we’d welcome feedback on this thread, or
> in person for anyone who might be at the hackathon in Austin next
> week.
>
> The bulk of the new features are in this patch:
> https://review.openstack.org/51236 (Middleware module for container
> forwarding.)
>
> There’s a couple of patches refactoring/adding support to existing
> modules:
> https://review.openstack.org/51242 (Refactor proxy/controllers obj
> base http code)
> https://review.openstack.org/51228 (Store x-container-attr-* headers
> in container db.)
>
> And some tests…
> https://review.openstack.org/51245 (Container-forwarding unit and
> functional tests)
>
> Regards,
> Alistair Coles, Eric Deliot, Aled Edwards
> HP Labs, Bristol, UK
Re: [openstack-dev] Thanks for fixing my patch
On 14 October 2013 09:58, Jeremy Stanley fu...@yuggoth.org wrote:

> On 2013-10-14 09:45:38 +1100 (+1100), Angus Salkeld wrote:
>> Note the commit will be authored by the original poster, so perhaps
>> if you modify a patch we should add a Modified-by: line to indicate
>> that it was dual-authored.
>
> We encourage the use of Co-Authored-By: name n...@example.com in
> commit messages to indicate people who worked on a particular patch.
> It's a convention for recognizing multiple authors, and our projects
> would encourage the stats tools to observe it when collecting
> statistics.
>
> https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
>
> That said, if the work I'm doing is a trivial fixup to someone else's
> change and I'm not substantially contributing to the overall
> idea/implementation, I don't bother to add one... only if it's a
> significant departure from/improvement on the original author's work.
>
> --
> Jeremy Stanley

While we're talking about attribution - it's polite to reassign the bug
to the original author when it finally merges, since Launchpad
automatically assigns you whenever you push a new patchset.

Kieran
[openstack-dev] OpenStack Havana RC1 RC2 available in Debian Experimental
Hi,

I failed to announce it on time, though: Havana RC1 has been available
from Debian Experimental since the 8th of October. I'm packaging the
next RCs (Cinder, Glance, etc.) as they come. The plan is to upload all
of Havana into Sid as soon as we have a definitive release (overwriting
Grizzly).

Also, I maintain Wheezy backports. For Wheezy, you need the below
(non-official) repositories:

deb http://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main

While there's not enough space for multiple releases of OpenStack
within the main Debian archive, the plan is still to use Debian
PPAs [1] when they become available (please do not confuse these with
Ubuntu PPAs; this is a very different concept), and to keep maintaining
the above non-official repositories until then.

Cheers,

Thomas Goirand (zigo)

[1] https://lists.debian.org/debian-devel/2013/05/msg00131.html
[openstack-dev] Debian Jessie freeze date announced: 5th of November 2014
Hi,

The Debian release team has announced the freeze date for Jessie:

https://lists.debian.org/debian-devel-announce/2013/10/msg4.html

This means that for this release, we will *not* have any kind of sync
with the Ubuntu LTS (last time, for Wheezy, we froze a few months after
the 12.04 LTS).

I haven't made up my mind yet whether we should release Debian Jessie
with OpenStack Icehouse (to have the same release as the Ubuntu LTS) or
with the J release. I'd be happy to gather comments and suggestions
about this, and to discuss it at the HK summit.

Cheers,

Thomas Goirand (zigo)
Re: [openstack-dev] [nova][Libvirt] Disabling nova-compute when a connection to libvirt is broken.
Thanks for the feedback. I'm already working on this. Will send the
patch for review very soon.

Thanks,
Vladik

----- Original Message -----
From: Lingxian Kong anlin.k...@gmail.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Saturday, October 12, 2013 5:42:15 AM
Subject: Re: [openstack-dev] [nova][Libvirt] Disabling nova-compute when a connection to libvirt is broken.

+1 for me. And I am willing to be a volunteer.

2013/10/12 Joe Gordon joe.gord...@gmail.com

> On Thu, Oct 10, 2013 at 4:47 AM, Vladik Romanovsky
> vladik.romanov...@enovance.com wrote:
>
>> Hello everyone,
>>
>> I have been recently working on a migration bug in nova
>> (Bug #1233184). I noticed that the compute service remains available
>> even if the connection to libvirt is broken. I thought that it might
>> be better to disable the service (using
>> conductor.manager.update_service()) and resume it once it's connected
>> again (maybe keep the host_stats periodic task running, or create a
>> dedicated one; once it succeeds, the service will become available
>> again). This way new VMs won't be scheduled nor migrated to the
>> disconnected host.
>>
>> Any thoughts on that?
>
> Sounds reasonable to me. If we can't reach libvirt there isn't much
> that nova-compute can / should do.

>> Is anyone already working on that?
>>
>> Thank you,
>> Vladik

--
Lingxian Kong
Huawei Technologies Co., LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
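The behaviour proposed in the thread above (disable the compute service record while libvirt is unreachable, re-enable it once the connection comes back) can be sketched as below. This is a toy illustration of the idea only: the `ServiceRegistry` class stands in for nova's conductor `update_service()` call, and `monitor_tick` stands in for one run of a periodic task; none of it is real nova code.

```python
class ServiceRegistry:
    """Stand-in for the conductor-side service record."""

    def __init__(self):
        self.disabled = False
        self.history = []  # record of disabled-state transitions

    def update_service(self, disabled):
        self.disabled = disabled
        self.history.append(disabled)

def monitor_tick(conn_ok, registry):
    """One run of the periodic task: sync the service record with
    libvirt connectivity, only writing on actual state changes."""
    if not conn_ok and not registry.disabled:
        registry.update_service(disabled=True)   # stop scheduling VMs here
    elif conn_ok and registry.disabled:
        registry.update_service(disabled=False)  # connection restored

# Simulate a libvirt outage followed by recovery.
registry = ServiceRegistry()
for conn_ok in [True, False, False, True]:
    monitor_tick(conn_ok, registry)
print(registry.history, registry.disabled)
```

Only transitions are written back, so a flapping check produces one disable and one enable rather than an update per tick, which keeps pressure off the conductor.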