Re: [openstack-dev] [Neutron][LBaaS] Migrations in feature branch
If these are just feature branches and they aren't intended to be deployed for long life cycles, why don't we just skip the db migration and enable auto-schema generation inside of the feature branch? Then a migration can be created once it's time to actually merge into master.

On Tue, Sep 23, 2014 at 9:37 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Well, the problem with resequencing on a merge is that a code change for the first migration must be added first and merged into the feature branch before the merge is done. Obviously this takes review time unless someone of authority pushes it through. We'll run into this same problem on rebases too if we care about keeping the migration sequenced correctly after rebases (which we don't have to; only on a merge do we really need to care).

If we did what Henry suggested, in that we only keep one migration file for the entire feature, we'd still have to do the same thing. I'm not sure that buys us much other than keeping the feature's migration all in one file.

I'd also say that code in master should definitely NOT be dependent on code in a feature branch, much less a migration. This was a requirement of the incubator as well.

So yeah, this sounds like a problem, but one that really only needs to be solved at merge time. There will definitely need to be coordination with the cores when merge time comes. Then again, I'd be a bit worried if there wasn't, since a feature branch being merged into master is a huge deal. Unless I am missing something I don't see this as a big problem, but I am highly capable of being blind to many things.

Thanks,
Brandon

On Wed, 2014-09-24 at 01:38 +0000, Doug Wiegley wrote:

Hi Eugene,

Just my take, but I assumed that we’d re-sequence the migrations at merge time, if needed. Feature branches aren’t meant to be optional add-on components (I think), nor are they meant to live that long. Just a place to collaborate and work on a large chunk of code until it’s ready to merge.
Though exactly what those merge criteria are is also yet to be determined. I understand that you’re raising a general problem, but given lbaas v2’s state, I don’t expect this issue to cause many practical problems in this particular case. This is also an issue for the incubator, whenever it rolls around.

Thanks,
doug

On September 23, 2014 at 6:59:44 PM, Eugene Nikanorov (enikano...@mirantis.com) wrote:

Hi neutron and lbaas folks,

Recently I briefly looked at one of the lbaas patches proposed to the feature branch. I see migration IDs there are lined into the general migration sequence. I think something is definitely wrong with this approach, as feature-branch components are optional, and also the master branch can't depend on revision IDs in the feature branch (as we moved to unconditional migrations).

So far the solution to this problem that I see is to have a separate migration script, or in fact, a separate revision sequence. The problem is that DB models in the feature branch may depend on models of the master branch, which means that each revision of the feature branch should have a kind of minimum required revision of the master branch. The problem is that revision IDs don't form a linear order, so we can't have a 'minimum' unless that separate migration script analyzes the master branch migration sequence and finds the minimum required migration ID.

Thoughts?

Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Kevin Benton
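Both Kevin's auto-schema suggestion and Doug's re-sequence-at-merge-time approach hinge on how alembic-style migrations (which Neutron uses) chain revisions: each script names its parent via down_revision, so "re-sequencing" means re-pointing the feature branch's oldest migration at master's new head. A minimal pure-Python sketch of that step, with invented revision ids (real scripts are alembic modules with upgrade()/downgrade() functions, not dicts):

```python
# Each migration names its parent via down_revision; a feature branch
# is resequenced by re-pointing its root migration at the master head
# so the combined history stays linear.  Revision ids are invented.
master = [
    {"revision": "aaa1", "down_revision": None},
    {"revision": "bbb2", "down_revision": "aaa1"},   # current master head
]
feature = [
    {"revision": "feat1", "down_revision": "aaa1"},  # written against old head
    {"revision": "feat2", "down_revision": "feat1"},
]

def resequence(branch, master_head):
    """Return a copy of the branch with its root re-parented onto master_head."""
    branch = [dict(m) for m in branch]
    local = {m["revision"] for m in branch}
    # The root is the one migration whose parent is outside the branch.
    root = next(m for m in branch if m["down_revision"] not in local)
    root["down_revision"] = master_head
    return branch

merged = master + resequence(feature, "bbb2")
# merged history is now linear: aaa1 -> bbb2 -> feat1 -> feat2
```

This is also why Brandon notes the resequencing change must land in the feature branch first: until "feat1" is re-parented, the two chains share "aaa1" as a fork point rather than forming one sequence.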
Re: [openstack-dev] [Glance] PTL Non-Candidacy
On 09/22/2014 07:22 PM, Mark Washenberger wrote:

Greetings,

I will not be running for PTL for Glance for the Kilo release. I want to thank all of the nice folks I've worked with--especially the attendees and sponsors of the mid-cycle meetups, which I think were a major success and one of the highlights of the project for me.

Thanks for all your hard work, Mark. You did a great job and I hope you'll still be around.

Flavio

--
@flaper87
Flavio Percoco
Re: [openstack-dev] [oslo] adding James Carey to oslo-i18n-core
On 09/23/2014 11:03 PM, Doug Hellmann wrote:

James Carey (jecarey) from IBM has done the 3rd most reviews of oslo.i18n this cycle [1]. His feedback has been useful, and I think he would be a good addition to the team for maintaining oslo.i18n. Let me know what you think, please.

Doug

+1

--
@flaper87
Flavio Percoco
Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues
On 09/23/2014 11:59 PM, Joe Gordon wrote:

On Tue, Sep 23, 2014 at 9:13 AM, Zane Bitter zbit...@redhat.com wrote:

On 22/09/14 22:04, Joe Gordon wrote:

To me this is less about valid or invalid choices. The Zaqar team is comparing Zaqar to SQS, but after digging into the two of them, zaqar barely looks like SQS. Zaqar doesn't guarantee what IMHO is the most important parts of SQS: the message will be delivered and will never be lost by SQS.

I agree that this is the most important feature. Happily, Flavio has clarified this in his other thread[1]:

*Zaqar's vision is to provide a cross-cloud interoperable, fully-reliable messaging service at scale that is both, easy and not invasive, for deployers and users.*
...
Zaqar aims to be a fully-reliable service, therefore messages should never be lost under any circumstances except for when the message's expiration time (ttl) is reached

So Zaqar _will_ guarantee reliable delivery.

Zaqar doesn't have the same scaling properties as SQS.

This is true. (That's not to say it won't scale, but it doesn't scale in exactly the same way that SQS does because it has a different architecture.) It appears that the main reason for this is the ordering guarantee, which was introduced in response to feedback from users. So this is clearly a different design choice: SQS chose reliability plus effectively infinite scalability, while Zaqar chose reliability plus FIFO. It's not feasible to satisfy all three simultaneously, so the options are:

1) Implement two separate modes and allow the user to decide
2) Continue to choose FIFO over infinite scalability
3) Drop FIFO and choose infinite scalability instead

This is one of the key points on which we need to get buy-in from the community on selecting one of these as the long-term strategy.

Zaqar is aiming for low latency per message, SQS doesn't appear to be.

I've seen no evidence that Zaqar is actually aiming for that.
There are waaay lower-latency ways to implement messaging if you don't care about durability (you wouldn't do store-and-forward, for a start). If you see a lot of talk about low latency, it's probably because for a long time people insisted on comparing Zaqar to RabbitMQ instead of SQS.

I thought this was why Zaqar uses Falcon and not Pecan/WSME? "For an application like Marconi where throughput and latency is of paramount importance, I recommend Falcon over Pecan." https://wiki.openstack.org/wiki/Zaqar/pecan-evaluation#Recommendation Yes, that statement mentions throughput, but it mentions latency as well.

Right, but that doesn't make low-latency the main goal, and as I've already said, the fact that latency is not the main goal doesn't mean the team will overlook it. (Let's also be careful not to talk about high latency as if it were a virtue in itself; it's simply something we would happily trade off for other properties. Zaqar _is_ making that trade-off.)

So if Zaqar isn't SQS, what is Zaqar and why should I use it?

If you are a small-to-medium user of an SQS-like service, Zaqar is like SQS but better, because not only does it never lose your messages but they always arrive in order, and you have the option to fan them out to multiple subscribers. If you are a very large user along one particular dimension (I believe it's number of messages delivered from a single queue, but probably Gordon will correct me :D) then Zaqar may not _yet_ have a good story for you.

cheers,
Zane.
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-September/046809.html

--
@flaper87
Flavio Percoco
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
I think Joe's idea pretty much sums it up; the ASF model is definitely worth following (Mesos is awesome). Non-layer-#1 projects will still be shepherded, but not so closely coupled as to make OpenStack over-bloated. Incubation projects can't just be dropped.

On Wed, Sep 24, 2014 at 2:46 AM, Joe Gordon joe.gord...@gmail.com wrote:

On Tue, Sep 23, 2014 at 9:50 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

On Sep 23, 2014, at 8:40 AM, Doug Hellmann d...@doughellmann.com wrote:

If we are no longer incubating *programs*, which are the teams of people who we would like to ensure are involved in OpenStack governance, then how do we make that decision? From a practical standpoint, how do we make a list of eligible voters for a TC election? Today we pull a list of committers from the git history from the projects associated with “official programs”, but if we are dropping “official programs” we need some other way to build the list.

Joe Gordon mentioned an interesting idea to address this (which I am probably totally butchering), which is that we make incubation more similar to the ASF Incubator. In other words, make it more lightweight, with no promise of governance or infrastructure support.

you only slightly butchered it :). From what I gather, the Apache Software Foundation's primary goals are to:

* provide a foundation for open, collaborative software development projects by supplying hardware, communication, and business infrastructure
* create an independent legal entity to which companies and individuals can donate resources and be assured that those resources will be used for the public benefit
* provide a means for individual volunteers to be sheltered from legal suits directed at the Foundation's projects
* protect the 'Apache' brand, as applied to its software products, from being abused by other organizations [0]

This roughly translates into: JIRA, SVN, Bugzilla, Confluence etc. for infrastructure resources.
So ASF provides infrastructure, legal support, a trademark and some basic oversight. The [Apache] incubator is responsible for:

* filtering the proposals about the creation of a new project or sub-project
* help the creation of the project and the infrastructure that it needs to operate
* supervise and mentor the incubated community in order for them to reach an open meritocratic environment
* evaluate the maturity of the incubated project, either promoting it to official project/sub-project status or by retiring it, in case of failure.

It must be noted that the incubator (just like the board) does not perform filtering on the basis of technical issues. This is because the foundation respects and suggests variety of technical approaches. It doesn't fear innovation or even internal confrontation between projects which overlap in functionality. [1]

So my idea, which is very similar to Monty's, is to move all the non-layer-1 projects into something closer to an ASF model where there is still incubation and graduation. But the only things a project receives out of this process are:

* Legal support
* A trademark
* Mentorship
* Infrastructure to use
* Basic oversight via the incubation/graduation process with respect to the health of the community.

They do not get:

* Required co-gating or integration with any other projects
* People to write their docs for them, etc.
* Technical review/oversight
* Technical requirements
* Evaluation on how the project fits into a bigger picture
* Language requirements
* etc.

Note: this is just an idea, not a fully formed proposal.

[0] http://www.apache.org/foundation/how-it-works.html#what
[1] http://www.apache.org/foundation/how-it-works.html#incubator

It is also interesting to consider that we may not need much governance for things outside of layer1.
Of course, this may be dancing around the actual problem to some extent, because there are a bunch of projects that are not layer1 that are already a part of the community, and we need a solution that includes them somehow.

Vish

--
Zhipeng Huang
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402
OpenStack, OpenDaylight, OpenCompute aficionado
[openstack-dev] [Octavia] Weekly meeting agenda
Hi Folks!

Please note that the IRC channel for the weekly Octavia meeting has changed (see below)! We've got the following agenda for tomorrow's Octavia meeting, so far:

- Review progress on action items from last week
- From blogan: Neutron lbaas v1 and v2 right now create a neutron port before passing any control to the driver; we need to decide how Octavia is going to handle that
- Discuss any outstanding blockers
- Review status on outstanding gerrit reviews
- Review list of blueprints, assign people to specific blueprints and/or tasks

Please feel free to add additional agenda items here: https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Agenda

If you've been working on Octavia, please update our standup etherpad: https://etherpad.openstack.org/p/octavia-weekly-standup

Beyond that, this is your friendly reminder that the Octavia team meets on Wednesdays at 20:00 UTC in #openstack-meeting-alt

Thanks,
Stephen

--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On 24 September 2014 16:38, Jay Pipes jaypi...@gmail.com wrote:

On 09/23/2014 10:29 PM, Steven Dake wrote:

There is a deployment program - tripleo is just one implementation.

Nope, that is not correct. Like it or not (I personally don't), Triple-O is *the* Deployment Program for OpenStack:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284

Saying Triple-O is just one implementation of a deployment program is like saying Heat is just one implementation of an orchestration program. It isn't. It's *the* implementation of an orchestration program that has been blessed by the TC:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112

That's not what Steve said. He said that the codebase they are creating is a *project* with a target home of the OpenStack Deployment *program*, aka TripleO. The TC blesses social structure and code separately: no part of TripleO has had its code blessed by the TC yet (incubation/graduation), but the team was blessed.

I've no opinion on the Murano bits you raise.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
Excerpts from Jay Pipes's message of 2014-09-23 21:38:37 -0700:

On 09/23/2014 10:29 PM, Steven Dake wrote:

There is a deployment program - tripleo is just one implementation.

Nope, that is not correct. Like it or not (I personally don't), Triple-O is *the* Deployment Program for OpenStack:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284

Saying Triple-O is just one implementation of a deployment program is like saying Heat is just one implementation of an orchestration program. It isn't. It's *the* implementation of an orchestration program that has been blessed by the TC:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112

That was written before we learned everything we've learned in the last 12 months. I think it is unfair to simply point to this and imply that bending or even changing it is not open for discussion.

We went through this with Heat and various projects that want to extend heat (e.g. Murano), and one big mistake I think Murano folks made was not figuring out where their code would go prior to writing it. I'm only making a statement as to where I think it should belong.

Sorry, I have to call you to task on this. You think it was a mistake for the Murano folks to not figure out where the code would go prior to writing it? For the record, Murano existed nearly 2 years ago, as a response to various customer requests. Having the ability to properly deploy Windows applications like SQL Server and Active Directory into an OpenStack cloud was more important to the Murano developers than trying to predict what the whims of the OpenStack developer and governance model would be months or years down the road. Tell me, did any of Heat's code exist prior to deciding to propose it for incubation? Saying that Murano developers should have thought about where their code would live is holding them to a higher standard than any of the other developer communities.
Did folks working on disk-image-builder pre-validate with the TC or the mailing list that the dib code would live in the triple-o program? No, of course not. It was developed naturally and then placed into the program that fit it best. Murano was developed naturally in exactly the same way, and the Murano developers have been nothing but accommodating to every request made of them by the TC (and those requests have been entirely different over the last 18 months, ranging from "split it out" to "just propose another program") and by the PTLs for projects that requested they split various parts of Murano out into existing programs.

The Murano developers have done no power grab, have deliberately tried to be as community-focused and amenable to all requests as possible, and yet they are treated with disdain by a number of folks in the core Heat developer community, including yourself, Clint and Zane. And honestly, I don't get it... all Murano is doing is generating Heat templates and trying to fill in some pieces that Heat isn't interested in doing. I don't see why there is so much animosity towards a project that has, to my knowledge, acted in precisely the ways that we've asked projects to act in the OpenStack community: with openness, transparency, and community good will.

Disdain is hardly the right word. Disdain implies we don't have any respect at all for Murano. I cannot speak for others, but I do have respect. I'm just not interested in Murano.

FWIW, I think what Steven Dake is saying is that he does not want to end up in the same position Murano is in. I think that is unlikely, as we're seeing many projects hitting the same wall, which is the cause for discussing changing how we include or exclude projects.
Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues
On 09/24/2014 03:48 AM, Devananda van der Veen wrote:

I've taken a bit of time out of this thread, and I'd like to jump back in now and attempt to summarize what I've learned and hopefully frame it in such a way that it helps us to answer the question Thierry asked:

I *loved* it! Thanks a lot for taking the time.

On Fri, Sep 19, 2014 at 2:00 AM, Thierry Carrez thie...@openstack.org wrote:

The underlying question being... can Zaqar evolve to ultimately reach the massive scale use case Joe, Clint and Devananda want it to reach, or are those design choices so deeply rooted in the code and architecture that Zaqar won't naturally mutate to support that use case.

I also want to sincerely thank everyone who has been involved in this discussion, and helped to clarify the different viewpoints and uncertainties which have surrounded Zaqar lately. I hope that all of this helps provide the Zaqar team guidance on a path forward, as I do believe that a scalable cloud-based messaging service would greatly benefit the OpenStack ecosystem.

Use cases
--

So, I'd like to start from the perspective of a hypothetical user evaluating messaging services for the new application that I'm developing. What does my application need from a messaging service so that it can grow and become hugely popular with all the hipsters of the world? In other words, what might my architectural requirements be? (This is certainly not a complete list of features, and it's not meant to be -- it is a list of things that I *might* need from a messaging service. But feel free to point out any glaring omissions I may have made anyway :) )

1. Durability - I can't risk losing any messages
   Example: Using a queue to process votes. Every vote should count.
2. Single Delivery - each message must be processed *exactly* once
   Example: Using a queue to process votes. Every vote must be counted only once.
3. Low latency to interact with service
   Example: Single-threaded application that can't wait on external calls
4. Low latency of a message through the system
   Example: Video streaming. Messages are very time-sensitive.
5. Aggregate throughput
   Example: Ad banner processing. Remember when sites could get slash-dotted? I need a queue resilient to truly massive spikes in traffic.
6. FIFO - when ordering matters
   Example: I can't stop a job that hasn't started yet.

So, as a developer, I actually probably never need all of these in a single application -- but depending on what I'm doing, I'm going to need some of them. Hopefully, the examples above give some idea of what I have in mind for different sorts of applications I might develop which would require these guarantees from a messaging service.

Why is this at all interesting or relevant? Because I think Zaqar and SQS are actually, in their current forms, trying to meet different sets of requirements. And, because I have not actually seen an application using a cloud which requires the things that Zaqar is guaranteeing - which doesn't mean they don't exist - it frames my past judgements about Zaqar in a much better way than simply "I have doubts". It explains _why_ I have those doubts.

I'd now like to offer the following as a summary of this email thread and the available documentation on SQS and Zaqar, as far as which of the above requirements are satisfied by each service and why I believe that. If there are fallacies in here, please correct me.

SQS
--
Requirements it meets: 1, 5

The SQS documentation states that it guarantees durability of messages (1) and handles unlimited throughput (5). It does not guarantee once-and-only-once delivery (2) and requires applications that care about this to de-duplicate on the receiving side. It also does not guarantee message order (6), making it unsuitable for certain uses. SQS is not an application-local service nor does it use a wire-level protocol, so from this I infer that (3) and (4) were not design goals.
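The at-least-once caveat in the SQS summary is worth making concrete: a receiver that needs requirement (2) has to de-duplicate on its own side, typically keyed on a message id. A minimal sketch (the message format here is invented; real SQS messages carry a MessageId attribute):

```python
# Receiver-side de-duplication for an at-least-once queue: the same
# message may be delivered twice, so track handled ids and skip repeats.
def process_once(messages, handler, seen=None):
    """Apply handler to each message, skipping duplicate ids."""
    seen = set() if seen is None else seen
    for msg in messages:
        if msg["id"] in seen:
            continue            # redelivered duplicate: drop it
        handler(msg)
        seen.add(msg["id"])     # record only after successful handling
    return seen

votes = []
batch = [{"id": "m1", "body": "yes"},
         {"id": "m2", "body": "no"},
         {"id": "m1", "body": "yes"}]   # "m1" delivered twice
seen = process_once(batch, lambda m: votes.append(m["body"]))
# votes == ["yes", "no"]: each vote counted exactly once
```

Note that this only approximates exactly-once within one process: a crash between handler(msg) and seen.add() would re-process the message on redelivery, which is why handlers on at-least-once queues are usually made idempotent as well.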
Zaqar
--
Requirements it meets: 1*, 2, 6

Zaqar states that it aims to guarantee message durability (1) but does so by pushing the guarantee of durability into the storage layer. Thus, Zaqar will not be able to guarantee durability of messages when a storage pool fails, is misconfigured, or what have you. Therefore I do not feel that message durability is a strong guarantee of Zaqar itself; in some configurations, this guarantee may be possible based on the underlying storage, but this capability will need to be exposed in such a way that users can make informed decisions about which Zaqar storage back-end (or flavor) to use for their application based on whether or not they need durability.

I agree with the above, but I would like to add a couple of things. The first one is just a clarification on flavors. Flavors are not required to use pools, whereas pools are required to use flavors. Operators can
[openstack-dev] [TripleO] PTL Candidacy
I am writing to announce my candidacy for OpenStack Deployment PTL.

Those of you involved with the deployment program may be surprised to see my name here. I've been quiet lately, distracted by an experiment which was announced by Allison Randal a few months back. [1] The experiment has been going well. We've had to narrow our focus from the broader OpenStack project and just push hard to get HP's Helion Product ready for release, but we're ready to bring everything back out into the open and add it to the options for the deployment program. Most recently our 'tripleo-ansible' repository has been added to stackforge [2], and I hope we can work out a way where it lands in the official deployment namespace once we have broader interest.

Those facts may cause some readers to panic, and others to rejoice, but I would ask you to keep reading, even if you think the facts above might disqualify me from your ballot.

My intention is to serve as PTL for OpenStack Deployment. I want to emphasize the word serve. I believe that a PTL's first job is to serve the mission of the program. I have watched Robert serve closely, and I think I understand the wide reach the program already has. We make use of Ironic, Nova, Glance, Neutron, and Heat, and we need to interface directly with those projects to be successful, regardless of any other tools in use.

However, I don't think the way to scale this project is to buckle down and try to be a hero-PTL. We need to make the program's mission more appealing to a greater number of OpenStack operators that want to deploy and manage OpenStack. This will widen our focus, which may slow some things down, but we can collaborate, and find common ground on many issues while still pushing forward on the fronts that are important to each organization. My recent experience with Ansible has convinced me that Ansible is not _the_ answer, but that Ansible is _an_ answer which serves the needs of some OpenStack users.
Heat serves other needs, where Puppet, Chef, Salt, and SSH in a for loop serve yet more diverse needs. So, with that in mind, I want to succinctly state my priorities for the role:

* Serve the operators. Our feedback from operators has been extremely mixed. We need to do a better job of turning operators into OpenStack Deployment users and contributors.

* Improve diversity. I have been as guilty as anyone else in the past of slamming the door on those who wanted to join our effort but with a different use case. This was a mistake. Looking forward, the door needs to stay open, and be widened. Without that, we won't be able to welcome more operators.

* March toward a presence in the gate. I know that "the gate" is a hot term and up for debate right now. However, there will always be a gate of some kind for the projects in the integrated release, and I'd like to see a more production-like test in that gate. From the beginning, TripleO has been focused on supporting continuous deployment models, so it would make a lot of sense to have TripleO doing integration testing of the integrated release. If there is a continued stripping down of the gate, then TripleO would still certainly be a valuable CI job for the integrated release. We've had TripleO break numerous times because we run with a focus on production-ready settings and multiple nodes, which exposes new facets of the code that go untouched in the single-node, simple-and-fast focused devstack tests. Of course, our CI has not exactly been rock solid, for various reasons. We need to make it a priority to get CI handled for at least the primary tooling, and at the same time welcome and support efforts to make use of our infrastructure for alternative tooling. This isn't something I necessarily think will happen in the next 6 months, but I think one role that a PTL can be asked to serve is as shepherd of long term efforts, and this is definitely one of those.
So, I thank you for taking the time to read this, and hope that whatever happens we can build a better deployment program this cycle.

-Clint Byrum

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042589.html
[2] https://git.openstack.org/cgit/stackforge/tripleo-ansible
Re: [openstack-dev] [Glance] PTL Non-Candidacy
Hi Mark,

Many thanks for your great work and leadership! Personally, I have to say thank you for your mentorship of me. Let's still keep in touch in Glance/OpenStack.

zhiyan

On Tue, Sep 23, 2014 at 1:22 AM, Mark Washenberger mark.washenber...@markwash.net wrote:

Greetings,

I will not be running for PTL for Glance for the Kilo release. I want to thank all of the nice folks I've worked with--especially the attendees and sponsors of the mid-cycle meetups, which I think were a major success and one of the highlights of the project for me.

Cheers,
markwash
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On Wed, Sep 24, 2014 at 12:40 AM, Steven Dake sd...@redhat.com wrote:

I'm pleased to announce the development of a new project Kolla, which is Greek for glue :). Kolla has a goal of providing an implementation that deploys OpenStack using Kubernetes and Docker. This project will begin as a StackForge project separate from the TripleO/Deployment program code base

Congratulations, this sounds promising!

If I understand correctly from reading your POC, there are two parts to Kolla: the repository of docker images for the openstack services, and a future service (or kubernetes plugin? [1]) driving the communication and deployments to kubernetes.

I think making sure that we separate the two would be nice to have, since if we can plug those images into devstack, thanks to the abstraction of how we run processes that was introduced by Dean (http://git.io/Px1nMg), perhaps that would be a nice way to make devstack more robust, with the nice side effect of getting pretty good testing for those docker images.

Chmouel

[1] CAVEAT: I don't know kubernetes very well,
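The process abstraction Chmouel mentions can be sketched roughly as follows (purely illustrative: the function names and image naming scheme are invented, not devstack's or Kolla's actual API). The point is that the same "run this service" call can be backed either by a plain process or by a docker container:

```python
import shlex

def run_process_argv(service, cmd):
    # Plain-process backend: just split the command line into argv.
    return shlex.split(cmd)

def run_docker_argv(service, cmd, image_tmpl="kolla/{}"):
    # Container backend: wrap the same command in a docker invocation,
    # naming the container after the service it runs.
    return ["docker", "run", "-d", "--name", service,
            image_tmpl.format(service)] + shlex.split(cmd)

# One switch selects the backend; callers are unchanged either way.
run_backend = run_docker_argv
argv = run_backend("glance-api", "--config-file /etc/glance/glance-api.conf")
```

With such a seam in place, running devstack against the docker images would exercise both the images and the services, which is the testing side effect described above.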
Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues
On 09/24/2014 12:06 AM, Joe Gordon wrote:

On Tue, Sep 23, 2014 at 2:40 AM, Flavio Percoco fla...@redhat.com wrote:

On 09/23/2014 05:13 AM, Clint Byrum wrote:

Excerpts from Joe Gordon's message of 2014-09-22 19:04:03 -0700:

[snip]

To me this is less about valid or invalid choices. The Zaqar team is comparing Zaqar to SQS, but after digging into the two of them, zaqar barely looks like SQS. Zaqar doesn't guarantee what IMHO is the most important parts of SQS: the message will be delivered and will never be lost by SQS. Zaqar doesn't have the same scaling properties as SQS. Zaqar is aiming for low latency per message, SQS doesn't appear to be. So if Zaqar isn't SQS what is Zaqar and why should I use it?

I have to agree. I'd like to see a simple, non-ordered, high latency, high scale messaging service that can be used cheaply by cloud operators and users. What I see instead is a very powerful, ordered, low latency, medium scale messaging service that will likely cost a lot to scale out to the thousands of users level.

I don't fully agree :D Let me break the above down into several points:

* Zaqar team is comparing Zaqar to SQS: True, we're comparing to the *type* of service SQS is but not *all* the guarantees it gives. We're not working on an exact copy of the service but on a service capable of addressing the same use cases.

* Zaqar is not guaranteeing reliability: This is not true. Yes, the current default write concern for the mongodb driver is `acknowledge` but that's a bug, not a feature [0] ;)

* Zaqar doesn't have the same scaling properties as SQS: What are SQS scaling properties? We know they have a big user base, we know they have lots of connections, queues and what not, but we don't have numbers to compare ourselves with.

Here is *a* number: 30k messages per second on a single queue: http://java.dzone.com/articles/benchmarking-sqs

I know how to get those links and I had read them before.
For example, here's[0] one that is two years older, tests a different scenario, and has quite a different result. My point is that it's not as simple as saying X doesn't scale like Y. We know, based on Zaqar's architecture, that depending on the storage there are some scaling limits the service could hit, but without more (or proper) load tests I think that's just an assumption based on what we know about the service architecture and not the storage itself. There are benchmarks about mongodb, but it'd be unfair to use those as the definitive reference since the schema plays a huge role there. And with this, I'm neither saying Zaqar scales without limit regardless of the storage backend, nor that there are no limits at all. I'm aware there's a lot to improve in the service. [0] http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/throughput.html Thanks for sharing, Flavio * Zaqar is aiming for low latency per message: This is not true and I'd be curious to know where this came from. A couple of things to consider: - First and foremost, low latency is a very relative measure and it depends on each use-case. - The benchmarks Kurt did were purely informative. I believe it's good to do them every once in a while, but this doesn't mean the team is mainly focused on that. - Not being focused on 'low-latency' does not mean the team will overlook performance. * Zaqar has FIFO and SQS doesn't: FIFO won't hurt *your use-case* if ordering is not a requirement, but the lack of it does hurt when ordering is a must. * Scaling out Zaqar will cost a lot: In terms of what? I'm pretty sure it's not for free, but I'd like to understand this point better and figure out a way to improve it, if possible. * If Zaqar isn't SQS then what is it? Why should I use it?: I don't believe Zaqar is SQS, just as I don't believe nova is EC2. Do they share similar features and provide similar services? Yes. Does that mean you can address similar use cases, hence similar users? Yes.
In addition to the above, I believe Zaqar is a simple service, easy to install and to interact with. From a user perspective the semantics are few and the concepts are neither new nor difficult to grasp. From an operator's perspective, I don't believe it adds tons of complexity. It does require the operator to deploy a replicated storage environment, but I believe all services require that. Cheers, Flavio P.S: Sorry for my late answer or lack of it. I lost *all* my emails yesterday and I'm working on recovering them. [0] https://bugs.launchpad.net/zaqar/+bug/1372335 -- @flaper87 Flavio Percoco
[openstack-dev] [OpenStack][Trove] Building new image for trove
Hello, Currently I am trying to use Trove services configured with devstack. The services are configured and a default datastore for a MySQL image on Ubuntu has been created, but launching instances always fails with a polling timeout error. I tried the same installation with redstack, and created a new image with dib and tripleo-image-elements, but to no avail. Is there any document which describes how I can create a new image which can be used in Trove? What are the prerequisites for the image, and which Trove services need to be running? Best Regards, Swapnil Kulkarni irc : coolsvap ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Deprecating localfs?
On Wed, Sep 24, 2014 at 08:26:44AM +1000, Michael Still wrote: On Tue, Sep 23, 2014 at 8:58 PM, Daniel P. Berrange berra...@redhat.com wrote: On Tue, Sep 23, 2014 at 02:27:52PM +0400, Roman Bogorodskiy wrote: Michael Still wrote: Hi. I know we've been talking about deprecating nova.virt.disk.vfs.localfs for a long time, in favour of wanting people to use libguestfs instead. However, I can't immediately find any written documentation for when we said we'd do that thing. Additionally, this came to my attention because Ubuntu 14.04 is apparently shipping a libguestfs old enough to cause us to emit the falling back to localfs warning, so I think we need Ubuntu to catch up before we can do this thing. So -- how about we remove localfs early in Kilo to give Canonical a release to update libguestfs? Thoughts appreciated, Michael If at some point we start moving toward supporting FreeBSD as a host OS for OpenStack, then it would make sense to keep localfs for that configuration. libguestfs doesn't work on FreeBSD yet. On the other hand, the localfs code in Nova doesn't look like it'd be hard to port. Yep, that's a good point, and in fact it applies to Linux too when considering the non-KVM/QEMU drivers libvirt supports. E.g. if your host does not have virtualization and you're using LXC for container virt, then we need localfs to still be present. Likewise if running Xen. So we definitely cannot delete or even deprecate it unconditionally. We simply want to make sure localfs isn't used when Nova is configured to run QEMU/KVM via libvirt. So if we take the config option approach I suggested, then we'd set a default value for the vfs_impl parameter according to which libvirt driver you have enabled. I'm glad we've had this thread, because I hadn't thought of the FreeBSD case at all.
In that case I wonder if we want to water down the warning we currently log in this case: LOG.warn(_LW("Unable to import guestfs, falling back to VFSLocalFS")) I feel like it should be an info-level message if we know some platforms will always hit this. I know this is a minor thing, but it came to my attention because at least one operator was concerned by seeing that warning in their logs. If we take my suggested approach of using a fixed impl based on libvirt driver type, then we wouldn't have the fallback, and so would not see this warning. Even when we do have the fallback, we should only warn if libguestfs is installed but not working. Regards, Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
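Daniel's suggestion of a fixed implementation per libvirt driver could be sketched roughly as follows. This is purely illustrative: the function name, return values, and virt_type strings are assumptions, not Nova's actual code.

```python
# Rough sketch of the per-driver default discussed above: choose the VFS
# implementation from the configured libvirt virt_type, instead of always
# trying libguestfs and warning on fallback. Hypothetical, not Nova code.

def choose_vfs_impl(virt_type, guestfs_available):
    # For QEMU/KVM, libguestfs is required so the host kernel never
    # mounts untrusted guest images directly; fail loudly, don't warn.
    if virt_type in ('qemu', 'kvm'):
        if not guestfs_available:
            raise RuntimeError('libguestfs is required for %s' % virt_type)
        return 'guestfs'
    # Containers (LXC), Xen, and platforms without libguestfs
    # (e.g. FreeBSD hosts) keep using localfs, with no warning emitted.
    return 'localfs'
```

With this shape there is no fallback path at all for QEMU/KVM, which is what makes the "falling back to localfs" warning disappear.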
Re: [openstack-dev] [Nova] [All] API standards working group
-Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 7:19 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: [snip] I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay [Rocky Grober] ++ Jay, are you volunteering to head up the working group? Or at least be an active member? I'll certainly follow with interest, but I think I have my hands full with the log rationalization working group. Yes, I'd be willing to head up the working group... or at least participate in it. I also would like to join the group. Thanks Ken'ichi Ohmichi
Re: [openstack-dev] [Nova] Some ideas for micro-version implementation
-Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 12:47 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] Some ideas for micro-version implementation On 09/22/2014 04:27 PM, Brant Knudson wrote: On Fri, Sep 19, 2014 at 1:39 AM, Alex Xu x...@linux.vnet.ibm.com wrote: Close to Kilo, it is time to think about what's next for the nova API. In Kilo, we will continue developing the important micro-version feature. The previous v2-on-v3 proposal (https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst) included some implementations that could be used for micro-versions, but in the end those implementations were considered too complex. So I have tried to find a simpler implementation and solution for micro-versions. I wrote down some ideas in a blog post at http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/ which also includes some proof-of-concept work for those ideas. As discussed in the Nova API meeting, we want to bring this to the mailing list for discussion, and hope to get more ideas and options from all developers. We will appreciate any comments and suggestions! Thanks Alex Did you consider JSON Home[1] for this? For Juno we've got JSON Home support in Keystone for Identity v3 (Zaqar was using it already). We weren't planning to use it for microversioning since we weren't planning on doing microversioning, but I think JSON Home could be used for this purpose. Using JSON Home, you'd have relationships that include the version, then the client can check the JSON Home document to see if the server has support for the relationship the client wants to use.
[1] http://tools.ietf.org/html/draft-nottingham-json-home-03 ++ I used JSON-Home extensively in the Compute API blueprint I put together a few months ago: http://docs.oscomputevnext.apiary.io/ vNext seems an interesting idea, and I have thought a little about how to implement it for Nova. API route discoverability is a nice design, but a root / URL would conflict with the current list-versions API. Maybe there is a workaround. Thanks Ken'ichi Ohmichi
Re: [openstack-dev] [Neutron][LBaaS] Migrations in feature branch
Relying again on automatic schema generation could be error-prone. It can only be enabled globally, and it does not work when models are altered if the table for the model being altered already exists in the DB schema. I don't think it would be a big problem to put these migrations in the main sequence once the feature branch is merged back into master. Alembic unfortunately does not yet do a great job of maintaining multiple timelines. Even if only a single migration branch is supported, in theory one could have a separate alembic environment for the feature branch, but in my opinion that just creates the additional problem of handling a new environment, and does not solve the initial problem of re-sequencing migrations. Re-sequencing at merge time is not going to be a problem in my opinion. However, keeping all the lbaas migrations chained together will help. You can also do as Henry suggests, but that option has the extra (possibly negligible) cost of squashing all migrations for the whole feature branch at merge time. As an example:

MASTER  --- X - X+1 - ... - X+n
              \
FEATURE        \- Y - Y+1 - ... - Y+m

At every rebase, the migration timeline for the feature branch could be rearranged as follows:

MASTER  --- X - X+1 - ... - X+n ---
                                   \
FEATURE                             \- Y=X+n - Y+1 - ... - Y+m = X+n+m

And therefore when the final merge into master comes, all the migrations in the feature branch can be inserted in sequence on top of master's HEAD. I have not tried this, but I reckon that conceptually it should work. Salvatore On 24 September 2014 08:16, Kevin Benton blak...@gmail.com wrote: If these are just feature branches and they aren't intended to be deployed for long life cycles, why don't we just skip the db migration and enable auto-schema generation inside of the feature branch? Then a migration can be created once it's time to actually merge into master.
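The re-sequencing described above can be modelled as re-pointing the feature chain's root at master's HEAD. A small, hypothetical sketch follows; the revision IDs are invented and this is not how Alembic itself is driven, just a model of the down_revision bookkeeping.

```python
# Model a migration timeline as {revision: down_revision}. Re-sequencing
# the feature branch means re-pointing its root migration's down_revision
# at master's HEAD, so the feature chain applies after all of master's.

def head(chain):
    # HEAD is the revision that no other revision points down to.
    downs = set(chain.values())
    return next(rev for rev in chain if rev not in downs)

def resequence(master, feature):
    # The feature root is the revision whose down_revision lies outside
    # the feature chain (i.e. the point where it branched off master).
    root = next(rev for rev, down in feature.items() if down not in feature)
    rebased = dict(feature)
    rebased[root] = head(master)  # Y's down_revision becomes X+n
    return rebased
```

After `resequence`, walking down from the feature HEAD passes through every feature migration and then every master migration, which is exactly the single linear sequence needed at merge time.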
On Tue, Sep 23, 2014 at 9:37 PM, Brandon Logan brandon.lo...@rackspace.com wrote: Well the problem with resequencing on a merge is that a code change for the first migration must be added first and merged into the feature branch before the merge is done. Obviously this takes review time unless someone of authority pushes it through. We'll run into this same problem on rebases too if we care about keeping the migration sequenced correctly after rebases (which we don't have to, only on a merge do we really need to care). If we did what Henry suggested in that we only keep one migration file for the entire feature, we'd still have to do the same thing. I'm not sure that buys us much other than keeping the feature's migration all in one file. I'd also say that code in master should definitely NOT be dependent on code in a feature branch, much less a migration. This was a requirement of the incubator as well. So yeah this sounds like a problem but one that really only needs to be solved at merge time. There will definitely need to be coordination with the cores when merge time comes. Then again, I'd be a bit worried if there wasn't since a feature branch being merged into master is a huge deal. Unless I am missing something I don't see this as a big problem, but I am highly capable of being blind to many things. Thanks, Brandon On Wed, 2014-09-24 at 01:38 +, Doug Wiegley wrote: Hi Eugene, Just my take, but I assumed that we’d re-sequence the migrations at merge time, if needed. Feature branches aren’t meant to be optional add-on components (I think), nor are they meant to live that long. Just a place to collaborate and work on a large chunk of code until it’s ready to merge. Though exactly what those merge criteria are is also yet to be determined. I understand that you’re raising a general problem, but given lbaas v2’s state, I don’t expect this issue to cause many practical problems in this particular case. 
This is also an issue for the incubator, whenever it rolls around. Thanks, doug On September 23, 2014 at 6:59:44 PM, Eugene Nikanorov (enikano...@mirantis.com) wrote: Hi neutron and lbaas folks. Recently I briefly looked at one of the lbaas patches proposed to the feature branch. I see that the migration IDs there are lined up in the general migration sequence. I think something is definitely wrong with this approach, as feature-branch components are optional, and the master branch also can't depend on revision IDs in the feature branch (as we moved to unconditional migrations). So far the solution to this problem that I see is to have a separate migration script or, in fact, a separate revision sequence. The problem is that DB models in the feature branch may depend on models of the master branch, which means that each revision of the feature branch should have a kind of minimum required revision of the master branch. The problem that
Re: [openstack-dev] [Nova] [All] API standards working group
Please keep me in the loop. The importance of ensuring consistent style across OpenStack APIs increases as the number of integrated projects increases. Unless we decide to merge all API endpoints as proposed in another thread! [1] Regards, Salvatore [1] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36012.html On 24 September 2014 11:15, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote: -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 7:19 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: [snip] I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay [Rocky Grober] ++ Jay, are you volunteering to head up the working group? Or at least be an active member? I'll certainly follow with interest, but I think I have my hands full with the log rationalization working group. Yes, I'd be willing to head up the working group... or at least participate in it. I also would like to join the group. Thanks Ken'ichi Ohmichi
[openstack-dev] [Ceilometer] Why alarm name is unique per project?
Hi all, I am just a little confused about why the alarm name should be unique per project. Does anyone know the reason?
Re: [openstack-dev] [Ceilometer] Why alarm name is unique per project?
On 09/24/2014 12:23 PM, Long Suo wrote: Hi all, I am just a little confused about why the alarm name should be unique per project. Does anyone know the reason? Good point, I admit I can't find a compelling reason for that either. Perhaps someone else can? Also, an interesting use-case comes to mind, where you could have, for example, an alarm for each instance, all of them named 'cpu_alarm' but with a unique action per instance. You could then retrieve all these alarms at once with a proper query.
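The per-instance use-case above can be sketched with plain data; the field names below are illustrative, not the exact Ceilometer alarm schema.

```python
# Several alarms sharing the name 'cpu_alarm', one per instance, each
# with its own action. If names need not be unique, retrieving them all
# is a simple filter on the name field.

alarms = [
    {'name': 'cpu_alarm', 'instance': 'vm-1', 'action': 'scale-up-vm-1'},
    {'name': 'cpu_alarm', 'instance': 'vm-2', 'action': 'scale-up-vm-2'},
    {'name': 'disk_alarm', 'instance': 'vm-1', 'action': 'notify'},
]

cpu_alarms = [a for a in alarms if a['name'] == 'cpu_alarm']
```

This is exactly the query that per-project name uniqueness would forbid, since only one alarm could carry the name 'cpu_alarm'.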
[openstack-dev] [Heat] stack-update with existing parameters
TL;DR Is there any reason why stack-update doesn't reuse the existing parameters when I extend my stack definition with a resource that uses them? I have created a stack from the hello_world.yaml template (https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml). It has the following parameters: key_name, image, flavor, admin_pass, db_port. heat stack-create hello_world -P key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1 -f hello_world.yaml Then I added one more nova server resource with a new name (server1); all other details are untouched. I get the following when I use this new template without providing any of the parameter values: heat --debug stack-update hello_world -f hello_world_modified.yaml On debugging it throws the below exception. The resource was found at http://localhost:8004/v1/7faee9dd37074d3e8896957dc4a52e22/stacks/hello_world/85a0bc2c-1a20-45c4-a8a9-7be727db6a6d; you should be redirected automatically. DEBUG (session) RESP: [400] CaseInsensitiveDict({'date': 'Wed, 24 Sep 2014 10:08:08 GMT', 'content-length': '961', 'content-type': 'application/json; charset=UTF-8'}) RESP BODY: {"explanation": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "code": 400, "error": {"message": "The Parameter (admin_pass) was not provided.", "traceback": "Traceback (most recent call last): File /opt/stack/heat/heat/engine/service.py, line 63, in wrapped return func(self, ctx, *args, **kwargs) File /opt/stack/heat/heat/engine/service.py, line 576, in update_stack env, **common_params) File /opt/stack/heat/heat/engine/parser.py, line 109, in __init__ context=context) File /opt/stack/heat/heat/engine/parameters.py, line 403, in validate param.validate(validate_value, context) File /opt/stack/heat/heat/engine/parameters.py, line 215, in validate raise exception.UserParameterMissing(key=self.name) UserParameterMissing: The Parameter (admin_pass) was not provided.", "type": "UserParameterMissing"}, "title": "Bad Request"} When I mention all the parameters, it updates the stack properly: heat --debug stack-update hello_world -P key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1 -f hello_world_modified.yaml Any reason why I can't reuse the existing parameters during stack-update if I don't want to specify them again? - Dimitri
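What the question asks for amounts to a parameter merge at update time: start from the stack's stored parameters and overlay only what the caller supplies. A minimal, hypothetical sketch of that behaviour (not Heat's actual implementation):

```python
# Hypothetical sketch of "reuse existing parameters on update":
# unspecified parameters keep their previous values, while explicitly
# supplied values win. This is not Heat's actual code.

def merged_update_params(existing, supplied):
    params = dict(existing)   # start from what the stack already has
    params.update(supplied)   # explicit new values take precedence
    return params
```

With this behaviour, the hello_world update above would succeed with no -P flags at all, because admin_pass and the rest would be carried over from stack-create.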
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
On 09/18/2014 02:53 PM, Monty Taylor wrote: Hey all, I've recently been thinking a lot about Sean's Layers stuff. So I wrote a blog post which Jim Blair and Devananda were kind enough to help me edit. http://inaugust.com/post/108 When I first read Monty's post, my basic reaction was yes, please. I think there are plenty of devils in the details, but we can work through all of them; we're mostly all reasonable people. A couple of follow-ups for parts of the thread: Concerning the summit: while I understand ttx's concerns at http://lists.openstack.org/pipermail/openstack-dev/2014-September/046504.html my experience with the summits is that in-project alignment isn't well served by the current format. The absolutely most valuable parts of the last summit for me were the Operator meetup sessions and some of the cross-project sessions. I think there is an interesting question of what the TC governs. Honestly, I'm more in the camp that the TC focus should be on the Foundational Infrastructure (hey, new words, not sure if they are any better than layer 1 or ring 0), and have the ecosystem largely outside TC governance per Joe / Vish's ASF model - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046877.html. There are pragmatic reasons for that: a TC based around that foundation will tend to have more shared context about how we make it better and move forward. I'm not sure I see the point of a TC whose main job is ranking 100s of ecosystem projects on their production readiness... when most of them don't run production clouds. I really like markmc's point about Production Ready being something the User Committee should probably have more of a hand in - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046653.html. I actually think some kind of self-certification by projects to make them easy to evaluate by potential consumers would be really handy.
This template might be a good thing to co-evolve between the User Committee and the TC. I'm completely happy getting rid of incubation, given that we're talking about a basically static foundation. I think the process of raising TC expectations on projects this past year exposed an interesting fact: there were things some of us felt were core values of OpenStack that a lot of projects weren't doing. Our approach was that they were doing it wrong, and we put them on an improvement plan. But I think Monty's slicing up of things brings out an interesting point. Maybe they were doing it fine; they just weren't part of the particular shared culture needed to build foundational infrastructure. And maybe that was OK, because they weren't actually part of that. So, honestly, I'd say full speed ahead on Monty's plan. Is it perfect? Probably not. But I think it's a demonstrable move towards a more sustainable base, a more inclusive ecosystem, and a better consumption experience for our users. So how do we put a big stamp on it and make this the direction we are headed in? -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [Heat] stack-update with existing parameters
On 24/09/14 13:50, Dimitri Mazmanov wrote: TL;DR Is there any reason why stack-update doesn't reuse the existing parameters when I extend my stack definition with a resource that uses them? Hey Dimitri, There is an open bug for this feature: https://bugs.launchpad.net/heat/+bug/1224828 and it seems to be being worked on. [quoted reproduction steps and traceback from the original message snipped]
[openstack-dev] [RelMgt] PTL candidacy
I am writing to announce my candidacy for OpenStack Release Cycle Management PTL. This is a little-known program, so I'll take the bi-yearly opportunity to explain what this covers: 1. Release Management This is about coordinating the process that will turn the master branches of the integrated projects into a common release at the end of our development cycle. It's no longer a one-person job: Russell Bryant and Sean Dague, in particular, have stepped up during the Juno cycle to help me there. 2. Stable Maintenance This is about maintaining stable branches, reviewing backports according to our Stable branch policy, and publishing point releases from time to time. Alan Pevec is our subteam lead there, playing the drum that keeps us all in sync. 3. Vulnerability Management This is about handling incoming vulnerability reports and pushing them through our patching and advisory process. Tristan de Cacqueray has been taking on the bulk of the work there. If I get elected, we have several challenges ahead of us for the Kilo cycle. In particular, we'll need to adapt our rules and processes to either support more projects, or follow structural changes (if any). For example, I think the centralized stable maintenance team does not scale that well beyond 10 projects, and we may need to refactor it into team-specific stable maintenance groups. If we adopt Monty's layer #1, the release management team will have less work to produce the common release, but will need to educate and build reusable tooling for everyone else to be able to handle releases. These are interesting times :) Thanks for taking the time to read this! -- Thierry Carrez (ttx)
Re: [openstack-dev] [Nova] Some ideas for micro-version implementation
On 09/24/2014 05:26 AM, Kenichi Oomichi wrote: -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 12:47 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] Some ideas for micro-version implementation On 09/22/2014 04:27 PM, Brant Knudson wrote: On Fri, Sep 19, 2014 at 1:39 AM, Alex Xu x...@linux.vnet.ibm.com wrote: Close to Kilo, it is time to think about what's next for the nova API. In Kilo, we will continue developing the important micro-version feature. The previous v2-on-v3 proposal (https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst) included some implementations that could be used for micro-versions, but in the end those implementations were considered too complex. So I have tried to find a simpler implementation and solution for micro-versions. I wrote down some ideas in a blog post at http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/ which also includes some proof-of-concept work for those ideas. As discussed in the Nova API meeting, we want to bring this to the mailing list for discussion, and hope to get more ideas and options from all developers. We will appreciate any comments and suggestions! Thanks Alex Did you consider JSON Home[1] for this? For Juno we've got JSON Home support in Keystone for Identity v3 (Zaqar was using it already). We weren't planning to use it for microversioning since we weren't planning on doing microversioning, but I think JSON Home could be used for this purpose. Using JSON Home, you'd have relationships that include the version, then the client can check the JSON Home document to see if the server has support for the relationship the client wants to use.
[1] http://tools.ietf.org/html/draft-nottingham-json-home-03 ++ I used JSON-Home extensively in the Compute API blueprint I put together a few months ago: http://docs.oscomputevnext.apiary.io/ vNext seems an interesting idea, and I have thought a little about how to implement it for Nova. API route discoverability is a nice design, but a root / URL would conflict with the current list-versions API. Maybe there is a workaround. Completely agreed, Ken'ichi. The root URL that returns the JSON-Home doc in the vNext API is actually *after* the version in the URI, though... So, the JSON-Home doc would be returned from: http://compute.example.com/vNext/ Of course, replacing vNext with v4 or v42 or whatever the next major version of the API would be. The real root would still return the versions list as it exists today, with a 300 Multiple Choices. Best, jay
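The client-side check described in this thread can be sketched against a toy JSON Home document. The relation URI and document shape below follow the general draft-nottingham-json-home layout but are illustrative, not an actual OpenStack API's document.

```python
# A JSON Home document maps relation URIs to resource descriptions. A
# client checks whether the server advertises the relation it wants
# (which can encode a version), instead of probing the API and handling
# errors. Relation URI below is an invented example.

SERVERS_REL = 'http://docs.openstack.org/api/openstack-compute/rel/servers'

json_home = {
    'resources': {
        SERVERS_REL: {'href': '/servers'},
    },
}

def supports(doc, rel):
    # True if the server's JSON Home doc advertises this relation.
    return rel in doc.get('resources', {})
```

A client would fetch the document once from the API root, then call `supports` before using any versioned relation.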
[openstack-dev] [nova] Create an instance with a custom uuid
Hello, I would like to be able to specify the UUID of an instance when I create it. I found this discussion about the matter: https://lists.launchpad.net/openstack/msg22387.html but I could not find any blueprint; anyway, I understood this modification should not pose any particular issue. Would it be acceptable to pass the UUID as metadata, or should I instead modify the API if I want to set the UUID from novaclient? Best regards -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues
On 23/09/14 17:59, Joe Gordon wrote: Zaqar is aiming for low latency per message, SQS doesn't appear to be. I've seen no evidence that Zaqar is actually aiming for that. There are waaay lower-latency ways to implement messaging if you don't care about durability (you wouldn't do store-and-forward, for a start). If you see a lot of talk about low latency, it's probably because for a long time people insisted on comparing Zaqar to RabbitMQ instead of SQS. I thought this was why Zaqar uses Falcon and not Pecan/WSME? For an application like Marconi where throughput and latency are of paramount importance, I recommend Falcon over Pecan. https://wiki.openstack.org/wiki/Zaqar/pecan-evaluation#Recommendation Yes, that statement mentions throughput, but it mentions latency as well. I think we're talking about two different kinds of latency - latency for a message passing end-to-end through the system, and latency for a request to the API (which also affects throughput, and may not be a great choice of word). By not caring about the former, which Zaqar and SQS don't, you can add guarantees like never losing your message, which Zaqar and SQS have. By not caring about the latter you can add a lot of cost to operating the service and... that's about it. (Which is why *both* Zaqar and clearly SQS *do* care about it.) There's really no upside to doing more work than you need to on every API request, of which there will be *a lot*. The latency trade-off here is against using the same framework as... a handful of other OpenStack projects - I can't even say all other OpenStack projects, since there are at least 2 or 3 frameworks in use out there already. IMHO the whole Falcon vs. Pecan thing is a storm in a teacup. cheers, Zane.
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On 09/24/2014 03:57 AM, Clint Byrum wrote: Excerpts from Jay Pipes's message of 2014-09-23 21:38:37 -0700: On 09/23/2014 10:29 PM, Steven Dake wrote: There is a deployment program - tripleo is just one implementation. Nope, that is not correct. Like it or not (I personally don't), Triple-O is *the* Deployment Program for OpenStack: http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284 Saying Triple-O is just one implementation of a deployment program is like saying Heat is just one implementation of an orchestration program. It isn't. It's *the* implementation of an orchestration program that has been blessed by the TC: http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112 That was written before we learned everything we've learned in the last 12 months. I think it is unfair to simply point to this and imply that bending or even changing it is not open for discussion. My statement above is a reflection of the current reality of OpenStack governance policies and organizational structure. It's neither fair nor unfair. We went through this with Heat and various projects that want to extend heat (eg Murano) and one big mistake I think Murano folks made was not figuring out where their code would go prior to writing it. I'm only making a statement as to where I think it should belong. Sorry, I have to call you to task on this. You think it was a mistake for the Murano folks to not figure out where the code would go prior to writing it? For the record, Murano existed nearly 2 years ago, as a response to various customer requests. Having the ability to properly deploy Windows applications like SQL Server and Active Directory into an OpenStack cloud was more important to the Murano developers than trying to predict what the whims of the OpenStack developer and governance model would be months or years down the road. Tell me, did any of Heat's code exist prior to deciding to propose it for incubation?
Saying that Murano developers should have thought about where their code would live is holding them to a higher standard than any of the other developer communities. Did folks working on disk-image-builder pre-validate with the TC or the mailing list that the dib code would live in the triple-o program? No, of course not. It was developed naturally and then placed into the program that fit it best. Murano was developed naturally in exactly the same way, and the Murano developers have been nothing but accommodating to every request made of them by the TC (and those requests have been entirely different over the last 18 months, ranging from split it out to just propose another program) and by the PTLs for projects that requested they split various parts of Murano out into existing programs. The Murano developers have done no power grab, have deliberately tried to be as community-focused and amenable to all requests as possible, and yet they are treated with disdain by a number of folks in the core Heat developer community, including yourself, Clint and Zane. And honestly, I don't get it... all Murano is doing is generating Heat templates and trying to fill in some pieces that Heat isn't interested in doing. I don't see why there is so much animosity towards a project that has, to my knowledge, acted in precisely the ways that we've asked projects to act in the OpenStack community: with openness, transparency, and community good will. Disdain is hardly the right word. Disdain implies we don't have any respect at all for Murano. I cannot speak for others, but I do have respect. I'm just not interested in Murano. OK. FWIW, I think what Steven Dake is saying is that he does not want to end up in the same position Murano is in. Perhaps. I just took offense to the implication (big mistake .. 
the Murano folks made) that somehow it was the Murano developer team's fault that they didn't have the foresight to predict the mess that the governance structure and policies have caused projects that want to be in the openstack/ code namespace but need to go through several arbitrary Trials by Fire before the TC to do so. I think that is unlikely, as we're seeing many projects hitting the same wall, which is the cause for discussing changing how we include or exclude projects. Hey, I'm all for changing the way we build the OpenStack tent. I just didn't think it was right to call out the Murano team in the way that it was. Best, -jay
Re: [openstack-dev] [OpenStack][Trove] Building new image for trove
Swapnil, If the default image being created by devstack gives you a timeout on launch, I don’t think your issue is with the image itself. Your best guide (for now) for creating a guest image is to follow the template that devstack uses. I’m on the hook for writing some documentation on how to build a guest image and I’ll send you a draft as soon as I have one. The trove guestagent service is the only one (that I know of) that must be running on the guest. Out of curiosity, are you able to launch the image you created as a simple Nova image? And if you do that, does it go active? -amrith From: Swapnil Kulkarni [mailto:cools...@gmail.com] Sent: Wednesday, September 24, 2014 5:04 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [OpenStack][Trove] Building new image for trove Hello, Currently I am trying to use Trove services configured with devstack. The services are configured and a default datastore for a MySQL image on ubuntu has been created, but launching instances always fails with a polling timeout error. I tried the same installation with redstack, and created a new image with dib and tripleo-image-elements, but to no avail. Is there any document which describes how I can create a new image which can be used in Trove? What are the prerequisites for the image, and which trove services need to be running? Best Regards, Swapnil Kulkarni irc : coolsvap
Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues
On 23/09/14 19:29, Devananda van der Veen wrote: On Mon, Sep 22, 2014 at 5:47 PM, Zane Bitter zbit...@redhat.com wrote: On 22/09/14 17:06, Joe Gordon wrote: If 50,000 messages per second doesn't count as small-to-moderate then Zaqar does not fulfill a major SQS use case. It's not a drop-in replacement, but as I mentioned you can recreate the SQS semantics exactly *and* get the scalability benefits of that approach by sharding at the application level and then round-robin polling. This response seems dismissive to application developers deciding what cloud-based messaging system to use for their application. If I'm evaluating two messaging services, and one of them requires my application to implement autoscaling and pool management, and the other does not, I'm clearly going to pick the one which makes my application development *simpler*. This is absolutely true, but the point I was trying to make earlier in the thread is that for other use cases you can make exactly the same argument going in the other direction: if I'm evaluating two messaging services, and one of them requires my application to implement reordering of messages by sequence number, and the other does not, I'm clearly going to pick the one which makes my application development *simpler*. So it's not a question of do we make developers do more work?. It's a question of *which* developers do we make do more work?. Also, choices made early in a product's lifecycle (like, say, a facebook game) about which technology they use (like, say, for messaging) are often informed by hopeful expectations of explosive growth and fame. So, based on what you've said, if I were a game developer comparing SQS and Zaqar today, it seems clear that, if I picked Zaqar, and my game gets really popular, it's also going to have to be engineered to handle autoscaling of queues in Zaqar. Based on that, I'm going to pick SQS. 
Because then I don't have to worry about what I'm going to do when my game has 100 million users and there's still just one queue. I totally agree, and that's why I'm increasingly convinced that Zaqar should eventually offer the choice of either. Happily, thanks to the existence of Flavours, I believe this can be implemented in the future as an optional distribution layer *above* the storage back end without any major changes to the current API or architecture. (One open question: would this require dropping browsability from the API?) The key question here is if we're satisfied with the possibility of adding this in the future, or if we want to require Zaqar to dump the users with the in-order use case in favour of the users with the massive-scale use case. If we wanted that then a major re-think would be in order. cheers, Zane.
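Zane's earlier suggestion of recreating the SQS semantics on Zaqar by sharding at the application level and round-robin polling can be sketched roughly as follows. This is a toy illustration only: in-memory deques stand in for the per-shard queues, and all names (ShardedQueue, post, claim) are invented for the example, not Zaqar client API.

```python
from collections import deque

class ShardedQueue:
    """Treat N independent queues as one logical queue.

    Real code would hold one Zaqar client per shard; here the shards
    are in-memory deques so the round-robin logic itself can be run.
    Global ordering is given up, exactly as it is in SQS.
    """

    def __init__(self, num_shards):
        self.shards = [deque() for _ in range(num_shards)]
        self._post_idx = 0   # next shard to post to
        self._poll_idx = 0   # next shard to poll from

    def post(self, message):
        # Spread producers across shards; any placement policy works
        # once global ordering is not a guarantee.
        self.shards[self._post_idx].append(message)
        self._post_idx = (self._post_idx + 1) % len(self.shards)

    def claim(self):
        # Round-robin poll: try each shard once, starting after the
        # last shard polled, and return the first message found.
        for i in range(len(self.shards)):
            idx = (self._poll_idx + i) % len(self.shards)
            if self.shards[idx]:
                self._poll_idx = (idx + 1) % len(self.shards)
                return self.shards[idx].popleft()
        return None

q = ShardedQueue(num_shards=3)
for n in range(5):
    q.post("msg-%d" % n)
drained = [q.claim() for _ in range(5)]
```

Each message is delivered exactly once across the shard set; what is lost is any ordering between messages that landed on different shards, which is the trade-off being discussed in this thread.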
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On Wed, Sep 24, 2014 at 9:16 AM, Jay Pipes jaypi...@gmail.com wrote: On 09/24/2014 03:19 AM, Robert Collins wrote: On 24 September 2014 16:38, Jay Pipes jaypi...@gmail.com wrote: On 09/23/2014 10:29 PM, Steven Dake wrote: There is a deployment program - tripleo is just one implementation. Nope, that is not correct. Like it or not (I personally don't), Triple-O is *the* Deployment Program for OpenStack: http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284 Saying Triple-O is just one implementation of a deployment program is like saying Heat is just one implementation of an orchestration program. It isn't. It's *the* implementation of an orchestration program that has been blessed by the TC: http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112 That's not what Steve said. He said that the codebase they are creating is a *project* with a target home of the OpenStack Deployment *program*, aka TripleO. The TC blesses social structure and code separately: no part of TripleO has had its code blessed by the TC yet (incubation/graduation), but the team was blessed. There are zero programs in the OpenStack governance repository that have competing implementations for the same thing. Like it or not, the TC process of blessing these teams has effectively blessed a single implementation of something. And it looks to me like what's being proposed here is that there is a group of folks who intend to work on Knoll, and they are indicating that they plan to participate and would like to be a part of that team. Personally, as a TripleO team member, I welcome that approach and their willingness to participate and share experience with the Deployment program. Meaning: exactly what you seem to claim is not possible due to some perceived blessing, is indeed in fact happening, or trying to come about. It would be great if Heat was already perfect and great at doing container orchestration *really* well.
I'm not saying Kubernetes is either, but I'm not going to dismiss it just b/c it might compete with Heat. I see lots of other integration points with OpenStack services (using heat/nova/ironic to deploy kubernetes host, still using ironic to deploy baremetal storage nodes due to the iscsi issue, etc). -jay -- James Slagle
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On Wed, Sep 24, 2014 at 9:41 AM, James Slagle james.sla...@gmail.com wrote: And it looks to me like what's being proposed here is that there is a group of folks who intend to work on Knoll, and they are indicating Oops, I meant Kolla, obviously :-). -- James Slagle
Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?
I think we should aim to /always/ have 3 notifications using a pattern of try: ...notify start... ...do the work... ...notify end... except: ...notify abort... Precisely my viewpoint as well. Unless we standardize on the above, our notifications are less than useful, since they will be open to interpretation by the consumer as to what precisely they mean (and the consumer will need to go looking into the source code to determine when an event actually occurred...) Smells like a blueprint to me. Anyone have objections to me writing one up for Kilo? Best, -jay Hi Jay, So just to be clear, are you saying that we should generate 2 notification messages on Rabbit for every DB update? That feels like a big overkill for me. If I follow that logic then the current state transition notifications should also be changed to Starting to update task state / finished updating task state - which seems just daft and confuses logging with notifications. Sandy's answer where start/end are used if there is a significant amount of work between the two and/or the transaction spans multiple hosts makes a lot more sense to me. Sending just a single notification on success to show that something changed, rather than bracketing a single DB call with two notification messages, would seem to me to be much more in keeping with the concept of notifying on key events. Phil
Re: [openstack-dev] [nova] Create an instance with a custom uuid
+1. Or at least provide a way to specify an external UUID for the instance, so that the instance can be retrieved through the external UUID, which may be linked to an external system's object. Chaoyi Huang ( joehuang ) From: Pasquale Porreca [pasquale.porr...@dektech.com.au] Sent: September 24, 2014 21:08 To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova] Create an instance with a custom uuid Hello, I would like to be able to specify the UUID of an instance when I create it. I found this discussion about the matter: https://lists.launchpad.net/openstack/msg22387.html but I could not find any blueprint; in any case, I understood this modification should not cause any particular issue. Would it be acceptable to pass the uuid as metadata, or should I instead modify the api if I want to set the UUID from the novaclient? Best regards -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
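Pending any change to the API, the metadata approach Pasquale mentions would mean storing the external UUID as instance metadata at boot and resolving it client-side. A hypothetical sketch: plain dicts stand in for the server objects a client library would return, and the 'external_uuid' metadata key is an application-chosen convention, not anything Nova defines.

```python
def find_by_external_uuid(servers, external_uuid):
    """Return the server whose metadata carries the given external UUID.

    'servers' is whatever listing the client returns; here plain dicts
    stand in for server objects so the lookup logic can be exercised.
    """
    for server in servers:
        if server.get("metadata", {}).get("external_uuid") == external_uuid:
            return server
    return None

# Stand-in server records: Nova assigned its own ids, the application
# attached its external UUID as metadata at creation time.
servers = [
    {"id": "fake-nova-uuid-1", "metadata": {"external_uuid": "ext-123"}},
    {"id": "fake-nova-uuid-2", "metadata": {}},
]
match = find_by_external_uuid(servers, "ext-123")
```

The cost of this workaround is a list-and-filter lookup instead of a direct GET by UUID, which is presumably why joehuang suggests first-class support for retrieval by external UUID.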
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On 09/24/2014 09:41 AM, James Slagle wrote: On Wed, Sep 24, 2014 at 9:16 AM, Jay Pipes jaypi...@gmail.com wrote: There are zero programs in the OpenStack governance repository that have competing implementations for the same thing. Like it or not, the TC process of blessing these teams has effectively blessed a single implementation of something. And it looks to me like what's being proposed here is that there is a group of folks who intend to work on Knoll, and they are indicating that they plan to participate and would like to be a part of that team. Personally, as a TripleO team member, I welcome that approach and their willingness to participate and share experience with the Deployment program. Nobody is saying what the Kolla folks are doing is not laudable. I'm certainly not saying that. I think it's great to participate and be open from the start. What I took umbrage with was the statement that it was the Murano developers who made the mistake years ago of basically not being in the right place at the right time. Meaning: exactly what you seem to claim is not possible due to some perceived blessing, is indeed in fact happening, or trying to come about. :) Talking about something on the ML is not the same thing as having that thing happen in real life. Kolla folks can and should discuss their end goal of being in the openstack/ code namespace and offering an alternate implementation for deploying OpenStack. That doesn't mean that the Technical Committee will allow this, though. Which is what I'm saying... the real world right now does not match this perception that a group can just state where they want to end up in the openstack/ code namespace and by just being up front about it, that magically happens. It would be great if Heat was already perfect and great at doing container orchestration *really* well. I'm not saying Kubernetes is either, but I'm not going to dismiss it just b/c it might compete with Heat. 
I see lots of other integration points with OpenStack services (using heat/nova/ironic to deploy kubernetes host, still using ironic to deploy baremetal storage nodes due to the iscsi issue, etc). Again, I'm not dismissing Kolla whatsoever. I think it's a great initiative. I'd point out that Fuel has been doing deployment with Docker containers for a while now, also out in the open, but on stackforge. Would the deployment program welcome Fuel into the openstack/ code namespace as well? Something to think about. -jay
Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?
On 09/24/2014 09:48 AM, Day, Phil wrote: I think we should aim to /always/ have 3 notifications using a pattern of try: ...notify start... ...do the work... ...notify end... except: ...notify abort... Precisely my viewpoint as well. Unless we standardize on the above, our notifications are less than useful, since they will be open to interpretation by the consumer as to what precisely they mean (and the consumer will need to go looking into the source code to determine when an event actually occurred...) Smells like a blueprint to me. Anyone have objections to me writing one up for Kilo? Best, -jay Hi Jay, So just to be clear, are you saying that we should generate 2 notification messages on Rabbit for every DB update? That feels like a big overkill for me. If I follow that logic then the current state transition notifications should also be changed to Starting to update task state / finished updating task state - which seems just daft and confuses logging with notifications. Sandy's answer where start/end are used if there is a significant amount of work between the two and/or the transaction spans multiple hosts makes a lot more sense to me. Sending just a single notification on success to show that something changed, rather than bracketing a single DB call with two notification messages, would seem to me to be much more in keeping with the concept of notifying on key events. I can see your point, Phil. But what about when the set of DB calls takes a not-insignificant amount of time? Would the event be considered significant then? If so, sending only the I completed creating this thing notification message might mask the fact that the total amount of time spent creating the thing was significant. That's why I think it's safer to always wrap tasks -- a series of actions that *do* one or more things -- with start/end/abort context managers that send the appropriate notification messages.
Some notifications are for events that aren't tasks, and I don't think those need to follow start/end/abort semantics. Your example of an instance state change is not a task, and therefore would not need a start/end/abort notification manager. However, the user action of say, Reboot this server *would* have a start/end/abort wrapper for the REBOOT_SERVER event. In between the start and end notifications for this REBOOT_SERVER event, there may indeed be multiple SERVER_STATE_CHANGED notification messages sent, but those would not have start/end/abort wrappers around them. Make a bit more sense? -jay
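The start/end/abort wrapper discussed in this thread maps naturally onto a context manager. A minimal sketch, with an in-memory list standing in for the notifier; the names (task_notifier, notify, REBOOT_SERVER) are illustrative, not Nova's actual notification API:

```python
from contextlib import contextmanager

notifications = []  # stand-in for the message bus / notifier

def notify(event, phase):
    notifications.append("%s.%s" % (event, phase))

@contextmanager
def task_notifier(event):
    """Emit start, then end on success or abort on failure --
    the three-notification pattern proposed above."""
    notify(event, "start")
    try:
        yield
    except Exception:
        notify(event, "abort")
        raise  # the caller still sees the original failure
    else:
        notify(event, "end")

# A task that succeeds...
with task_notifier("REBOOT_SERVER"):
    pass  # ...the actual work goes here

# ...and one that fails, producing start followed by abort.
try:
    with task_notifier("REBOOT_SERVER"):
        raise RuntimeError("hypervisor unreachable")
except RuntimeError:
    pass
```

Because the wrapper re-raises, consumers can rely on the invariant that every start is eventually paired with exactly one end or abort, which is what makes the notifications interpretable without reading the source.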
Re: [openstack-dev] [OpenStack][Trove] Building new image for trove
Hi Amrith, Thanks for the response. I did check that; the instance is active and I can access it. Is there any way to check the guestagent in the image, update it if required, and create a snapshot which can be used later? Best Regards, Swapnil Kulkarni irc : coolsvap On Wed, Sep 24, 2014 at 7:08 PM, Amrith Kumar amr...@tesora.com wrote: Swapnil, If the default image being created by devstack gives you a timeout on launch, I don’t think your issue is with the image itself. Your best guide (for now) for creating a guest image is to follow the template that devstack uses. I’m on the hook for writing some documentation on how to build a guest image and I’ll send you a draft as soon as I have one. The trove guestagent service is the only one (that I know of) that must be running on the guest. Out of curiosity, are you able to launch the image you created as a simple Nova image? And if you do that, does it go active? -amrith *From:* Swapnil Kulkarni [mailto:cools...@gmail.com] *Sent:* Wednesday, September 24, 2014 5:04 AM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] [OpenStack][Trove] Building new image for trove Hello, Currently I am trying to use Trove services configured with devstack. The services are configured and a default datastore for a MySQL image on ubuntu has been created, but launching instances always fails with a polling timeout error. I tried the same installation with redstack, and created a new image with dib and tripleo-image-elements, but to no avail. Is there any document which describes how I can create a new image which can be used in Trove? What are the prerequisites for the image, and which trove services need to be running?
Best Regards, Swapnil Kulkarni irc : coolsvap
Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues
Apologies in advance for possible repetition and pedantry... On 09/24/2014 02:48 AM, Devananda van der Veen wrote: 2. Single Delivery - each message must be processed *exactly* once Example: Using a queue to process votes. Every vote must be counted only once. It is also important to consider the ability of the publisher to reliably publish a message exactly once. If that can't be done, there may need to be de-duplication even if there is an exactly-once delivery guarantee of messages from the queue (because there could exist two copies of the same logical message). 5. Aggregate throughput Example: Ad banner processing. Remember when sites could get slash-dotted? I need a queue resilient to truly massive spikes in traffic. A massive spike in traffic can also be handled by allowing the queue to grow, rather than increasing the throughput. This is obviously only effective if it is indeed a spike and the rate of ingress drops again to allow the backlog to be processed. So scaling up aggregate throughput is certainly an important requirement for some. However the example illustrates another, which is scaling the size of the queue (because the bottleneck for throughput may be in the application processing or this processing may be temporarily unavailable). The latter is something that both Zaqar and SQS I suspect would do quite well at. 6. FIFO - When ordering matters Example: I can't stop a job that hasn't started yet. I think FIFO is insufficiently precise. The most extreme requirement is total ordering, i.e. all messages are assigned a place in a fixed sequence and the order in which they are seen is the same for all receivers. The example you give above is really causal ordering. Since the need to stop a job is caused by the starting of that job, the stop request must come after the start request. However the ordering of the stop request for task A with respect to a stop request for task B may not be defined (e.g. if they are triggered concurrently).
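The de-duplication point above, where a publisher that cannot reliably publish exactly once produces two copies of the same logical message, is usually handled by having the consumer de-duplicate on a client-generated message id. A minimal sketch with invented names and plain dicts standing in for messages:

```python
class DedupConsumer:
    """Process each logical message at most once by tracking message ids.

    Assumes the publisher attaches a stable client-generated id to every
    message, so a retried publish yields two copies with the same id.
    (A production version would bound or expire the 'seen' set.)
    """

    def __init__(self):
        self.seen = set()
        self.processed = []

    def handle(self, message):
        msg_id = message["id"]
        if msg_id in self.seen:
            return False          # duplicate: acknowledge but do not count
        self.seen.add(msg_id)
        self.processed.append(message["body"])
        return True

consumer = DedupConsumer()
# The second copy of vote-1 models a publisher retry after a lost ack.
stream = [
    {"id": "vote-1", "body": "candidate-a"},
    {"id": "vote-1", "body": "candidate-a"},
    {"id": "vote-2", "body": "candidate-b"},
]
results = [consumer.handle(m) for m in stream]
```

This is why "single delivery" from the queue alone is not sufficient for the vote-counting example: exactly-once processing has to be established end to end.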
The pattern in use is also relevant. For multiple competing consumers, if there are ordering requirements such as the one in your example, it is not sufficient to *deliver* the messages in order, they must also be *processed* in order. If I have two consumers processing task requests, and give the 'start A' message to one, and then the 'stop A' message to another, it is possible that the second, though dispatched by the messaging service after the first message, is still processed before it. One way to avoid that would be to have the application use a separate queue per processing consumer, and ensure causally related messages are sent through the same queue. The downside is less adaptive load balancing and resiliency. Another option is to have the messaging service recognise message groupings and ensure that messages in a group in which a previously delivered message has not been acknowledged are delivered only to the same consumer as that previous message. [...] Zaqar relies on a store-and-forward architecture, which is not amenable to low-latency message processing (4). I don't think store-and-forward precludes low-latency ('low' is of course subjective). Polling however is not a good fit for latency-sensitive applications. Again, as with SQS, it is not a wire-level protocol, It is a wire-level protocol, but as it is based on HTTP it doesn't support asynchronous delivery of messages from server to client at present. so I don't believe low-latency connectivity (3) was a design goal. Agreed (and that is the important thing, so sorry for the nitpicking!).
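The message-grouping option described above, where a group with unacknowledged messages sticks to one consumer, could look roughly like this on the dispatch side. All names are invented for illustration; no existing messaging service's API is implied:

```python
class GroupedDispatcher:
    """Dispatch so that all messages of one group go to the same consumer
    while any earlier message of that group is still unacknowledged,
    preserving per-group processing order across competing consumers."""

    def __init__(self, consumers):
        self.consumers = consumers
        self.owner = {}        # group -> consumer currently holding it
        self.inflight = {}     # group -> count of unacked messages
        self._next = 0         # round-robin cursor for free groups

    def dispatch(self, group):
        if self.inflight.get(group, 0) > 0:
            consumer = self.owner[group]   # sticky: same consumer again
        else:
            consumer = self.consumers[self._next % len(self.consumers)]
            self._next += 1
            self.owner[group] = consumer
        self.inflight[group] = self.inflight.get(group, 0) + 1
        return consumer

    def ack(self, group):
        self.inflight[group] -= 1

d = GroupedDispatcher(["c1", "c2"])
first = d.dispatch("task-A")    # 'start A' lands on some consumer
second = d.dispatch("task-A")   # 'stop A' must follow it to the same one
d.ack("task-A")
d.ack("task-A")
third = d.dispatch("task-A")    # group fully acked: may be rebalanced
```

The stickiness lasts only while messages are in flight, so the adaptive load balancing lost by the queue-per-consumer approach is mostly retained.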
Re: [openstack-dev] [OpenStack][Trove] Building new image for trove
I've had good luck recently enabling heat support and then tweaking the trove default template to use a standard image and install the guest agent at launch. So no custom image needed. Thanks, Kevin From: Swapnil Kulkarni Sent: Wednesday, September 24, 2014 2:03:38 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [OpenStack][Trove] Building new image for trove Hello, Currently I am trying to use Trove services configured with devstack. The services are configured and a default datastore for a MySQL image on ubuntu has been created, but launching instances always fails with a polling timeout error. I tried the same installation with redstack, and created a new image with dib and tripleo-image-elements, but to no avail. Is there any document which describes how I can create a new image which can be used in Trove? What are the prerequisites for the image, and which trove services need to be running? Best Regards, Swapnil Kulkarni irc : coolsvap
Re: [openstack-dev] [Nova] [All] API standards working group
I'm interested in the group too! On September 24, 2014, at 18:01, Salvatore Orlando wrote: Please keep me in the loop. The importance of ensuring consistent style across Openstack APIs increases as the number of integrated projects increases. Unless we decide to merge all API endpoints as proposed in another thread! [1] Regards, Salvatore [1] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36012.html On 24 September 2014 11:15, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote: -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 7:19 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: Snip I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay [Rocky Grober] ++ Jay, are you volunteering to head up the working group? Or at least be an active member? I'll certainly follow with interest, but I think I have my hands full with the log rationalization working group. Yes, I'd be willing to head up the working group... or at least participate in it. I also would like to join the group.
Thanks Ken'ichi Ohmichi
Re: [openstack-dev] [Nova] [All] API standards working group
It is an important topic for good user experience. There are many inconsistencies across projects, and every API has pros and cons, so sharing API best practices would be valuable. I am interested in the topic both as a Neutron developer and as an API consumer through Horizon development. If the meeting time allows, I would like to join! Regards, Akihiro On Wed, Sep 24, 2014 at 7:01 PM, Salvatore Orlando sorla...@nicira.com wrote: Please keep me in the loop. The importance of ensuring consistent style across Openstack APIs increases as the number of integrated projects increases. Unless we decide to merge all API endpoints as proposed in another thread! [1] Regards, Salvatore [1] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36012.html On 24 September 2014 11:15, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote: -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 7:19 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: Snip I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay [Rocky Grober] ++ Jay, are you volunteering to head up the working group? Or at least be an active member? I'll certainly follow with interest, but I think I have my hands full with the log rationalization working group. Yes, I'd be willing to head up the working group... or at least participate in it. I also would like to join the group.
Thanks Ken'ichi Ohmichi -- Akihiro Motoki amot...@gmail.com
Re: [openstack-dev] [Nova] [All] API standards working group
You can add me to this list as well. Thanks! Lance On Wed, Sep 24, 2014 at 9:41 AM, Alex Xu x...@linux.vnet.ibm.com wrote: I'm interested in the group too! On September 24, 2014 18:01, Salvatore Orlando wrote: Please keep me in the loop. The importance of ensuring consistent style across OpenStack APIs increases as the number of integrated projects increases. Unless we decide to merge all API endpoints as proposed in another thread! [1] Regards, Salvatore [1] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36012.html On 24 September 2014 11:15, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote: -----Original Message----- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 7:19 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com mailto:jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: _Snip I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay */[Rocky Grober] /*++ */Jay, are you volunteering to head up the working group? Or at least be an active member? I’ll certainly follow with interest, but I think I have my hands full with the log rationalization working group./* Yes, I'd be willing to head up the working group... or at least participate in it. I also would like to join the group. 
Thanks, Ken'ichi Ohmichi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On 9/24/14, 9:51 AM, Lance Bragstad lbrags...@gmail.com wrote: You can add me to this list as well. Thanks! Lance On Wed, Sep 24, 2014 at 9:41 AM, Alex Xu x...@linux.vnet.ibm.com wrote: I'm interested in the group too! On September 24, 2014 18:01, Salvatore Orlando wrote: Please keep me in the loop. The importance of ensuring consistent style across OpenStack APIs increases as the number of integrated projects increases. Unless we decide to merge all API endpoints as proposed in another thread! [1] Regards, Salvatore [1] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36012.html On 24 September 2014 11:15, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote: -----Original Message----- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: Wednesday, September 24, 2014 7:19 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com mailto:jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: _Snip I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay */[Rocky Grober] /*++ */Jay, are you volunteering to head up the working group? Or at least be an active member? I’ll certainly follow with interest, but I think I have my hands full with the log rationalization working group./* Yes, I'd be willing to head up the working group... or at least participate in it. I also would like to join the group. Thanks, Ken'ichi Ohmichi I’d like to participate as well. Cheers, Ian ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Some ideas for micro-version implementation
vNext seems like an interesting idea, and I have given a little thought to how it could be implemented in Nova. API route discoverability is a nice design, but a root / URL would conflict with the current list-versions API. Maybe there is a workaround. Completely agreed, Ken'ichi. The root URL that returns the JSON-Home doc in the vNext API is actually *after* the version in the URI, though... So, the JSON-Home doc would be returned from: http://compute.example.com/vNext/ Of course, replacing vNext with v4 or v42 or whatever the next major version of the API would be. The real root would still return the versions list as it exists today, with a 300 Multiple Choices. JSON Home and your JSON versions document can exist on the same path. The JSON Home response should be returned when the Accept header is application/json-home[1], and the JSON document when the Accept header is application/json. Webob makes it easy to support qvalues[2] for the Accept header. This is how Keystone works for Juno: if you request `/` with Accept: application/json-home, you get the JSON Home document with paths like `v3/auth/tokens`. If you request `/v3` with Accept: application/json-home, you get the JSON Home document with paths like `/auth/tokens`. This way, whether your auth endpoint is / or /v3, the client can use the json-home document. # TODO(blk-u): Implement json-home in keystoneclient. [1] https://tools.ietf.org/html/draft-nottingham-json-home-03#section-2 [2] http://tools.ietf.org/html/rfc2616#section-3.9 - Brant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
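The content negotiation Brant describes can be sketched without any framework: parse the Accept header's media types and their qvalues (RFC 2616 §3.9) and pick the best supported representation. This is an illustrative stand-in, not Keystone's or webob's actual code:

```python
# Minimal sketch of choosing between a JSON-Home document and a plain
# JSON versions document served from the same path, driven by the
# client's Accept header and its qvalues.

def parse_accept(header):
    """Parse an Accept header into (media_type, qvalue) pairs."""
    offers = []
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        offers.append((mtype, q))
    return offers


def negotiate(header, supported=("application/json-home", "application/json")):
    """Return the supported media type with the highest qvalue.

    Falls back to plain application/json when the client asks for
    neither, mirroring the behaviour described above.
    """
    best, best_q = supported[-1], 0.0
    for mtype, q in parse_accept(header):
        if mtype in supported and q > best_q:
            best, best_q = mtype, q
    return best
```

So a request with `Accept: application/json-home` gets the JSON-Home document, while ordinary `Accept: application/json` clients keep getting the versions document from the same URL.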
Re: [openstack-dev] [RelMgt] PTL candidacy
confirmed On 24/09/14 08:56 AM, Thierry Carrez wrote: I am writing to announce my candidacy for OpenStack Release Cycle Management PTL. This is a little-known program, so I'll take the bi-yearly opportunity to explain what it covers: 1. Release Management This is about coordinating the process that will turn the master branches of the integrated projects into a common release at the end of our development cycle. It's no longer a one-person job: Russell Bryant and Sean Dague, in particular, have stepped up during the Juno cycle to help me there. 2. Stable Maintenance This is about maintaining stable branches, reviewing backports according to our Stable branch policy, and publishing point releases from time to time. Alan Pevec is our subteam lead there, playing the drum that keeps us all in sync. 3. Vulnerability Management This is about handling incoming vulnerability reports and pushing them through our patching and advisory process. Tristan de Cacqueray has been taking on the bulk of the work there. If I get elected, we have several challenges ahead of us for the Kilo cycle. In particular, we'll need to adapt our rules and processes to either support more projects, or follow structural changes (if any). For example, I think the centralized stable maintenance team does not scale that well beyond 10 projects, and we may need to refactor it into team-specific stable maintenance groups. If we adopt Monty's layer #1, the release management team will have less work to produce the common release, but will need to educate and build reusable tooling for everyone else to be able to handle releases. These are interesting times :) Thanks for taking the time to read this! ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO] PTL Candidacy
confirmed On 24/09/14 04:03 AM, Clint Byrum wrote: I am writing to announce my candidacy for OpenStack Deployment PTL. Those of you involved with the deployment program may be surprised to see my name here. I've been quiet lately, distracted by an experiment which was announced by Allison Randal a few months back. [1] The experiment has been going well. We've had to narrow our focus from the broader OpenStack project and just push hard to get HP's Helion Product ready for release, but we're ready to bring everything back out into the open and add it to the options for the deployment program. Most recently our 'tripleo-ansible' repository has been added to stackforge [2], and I hope we can work out a way where it lands in the official deployment namespace once we have broader interest. Those facts may cause some readers to panic, and others to rejoice, but I would ask you to keep reading, even if you think the facts above might disqualify me from your ballot. My intention is to serve as PTL for OpenStack Deployment. I want to emphasize the word serve. I believe that a PTL's first job is to serve the mission of the program. I have watched Robert serve closely, and I think I understand the wide reach the program already has. We make use of Ironic, Nova, Glance, Neutron, and Heat, and we need to interface directly with those projects to be successful, regardless of any other tools in use. However, I don't think the way to scale this project is to buckle down and try to be a hero-PTL. We need to make the program's mission more appealing to a greater number of OpenStack operators that want to deploy and manage OpenStack. This will widen our focus, which may slow some things down, but we can collaborate, and find common ground on many issues while still pushing forward on the fronts that are important to each organization. 
My recent experience with Ansible has convinced me that Ansible is not _the_ answer, but that Ansible is _an_ answer which serves the needs of some OpenStack users. Heat serves other needs, where Puppet, Chef, Salt, and SSH in a for loop serve yet more diverse needs. So, with that in mind, I want to succinctly state my priorities for the role: * Serve the operators. Our feedback from operators has been extremely mixed. We need to do a better job of turning operators into OpenStack Deployment users and contributors. * Improve diversity. I have been as guilty as anyone else in the past of slamming the door on those who wanted to join our effort but with a different use case. This was a mistake. Looking forward, the door needs to stay open, and be widened. Without that, we won't be able to welcome more operators. * March toward a presence in the gate. I know that the gate is a hot term and up for debate right now. However, there will always be a gate of some kind for the projects in the integrated release, and I'd like to see a more production-like test in that gate. From the beginning, TripleO has been focused on supporting continuous deployment models, so it would make a lot of sense to have TripleO doing integration testing of the integrated release. If there is a continued stripping down of the gate, then TripleO would still certainly be a valuable CI job for the integrated release. We've had TripleO break numerous times because we run with a focus on production ready settings and multiple nodes which exposes new facets of the code that go untouched in the single-node simple-and-fast focused devstack tests. Of course, our CI has not exactly been rock solid, for various reasons. We need to make it a priority to get CI handled for at least the primary tooling, and at the same time welcome and support efforts to make use of our infrastructure for alternative tooling. 
This isn't something I necessarily think will happen in the next 6 months, but I think one role that a PTL can be asked to serve is as shepherd of long term efforts, and this is definitely one of those. So, I thank you for taking the time to read this, and hope that whatever happens we can build a better deployment program this cycle. -Clint Byrum [1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042589.html [2] https://git.openstack.org/cgit/stackforge/tripleo-ansible ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On Wed, Sep 24, 2014 at 10:03 AM, Jay Pipes jaypi...@gmail.com wrote: On 09/24/2014 09:41 AM, James Slagle wrote: Meaning: exactly what you seem to claim is not possible due to some perceived blessing, is indeed in fact happening, or trying to come about. :) Talking about something on the ML is not the same thing as having that thing happen in real life. Hence the trying to come about. And the only thing proposed for real life right now is a project under stackforge whose long term goal is to merge into the Deployment program. I don't get the opposition to a long term goal. Kolla folks can and should discuss their end goal of being in the openstack/ code namespace and offering an alternate implementation for deploying OpenStack. That doesn't mean that the Technical Committee will allow this, though. Certainly true. Perhaps the mission statement for the Deployment program needs some tweaking. Perhaps it will be covered by whatever plays out within the larger OpenStack changes that are being discussed about the future of programs/projects/etc. Personally, I think there is some room for interpretation in the existing mission statement around the wherever possible phrase. Where it's not possible, OpenStack does not have to be used. So again, we probably need to update for clarity. I think the Deployment program should work with the TC to help define what it wants to be. Which is what I'm saying... the real world right now does not match this perception that a group can just state where they want to end up in the openstack/ code namespace and by just being up front about it, that magically happens. I'm not sure who you are arguing against that has that perception :). I've reread the thread, and I see desires being voiced to join an existing program, and some initial support offered in favor of that, minus your responses ;-). 
Obviously patches would have to be proposed to the governance repo to add projects under the program, those would have to be approved by people with +2 in governance, etc. No one claims it will be magically done. It would be great if Heat was already perfect and great at doing container orchestration *really* well. I'm not saying Kubernetes is either, but I'm not going to dismiss it just b/c it might compete with Heat. I see lots of other integration points with OpenStack services (using heat/nova/ironic to deploy kubernetes host, still using ironic to deploy baremetal storage nodes due to the iscsi issue, etc). Again, I'm not dismissing Kolla whatsoever. I think it's a great initiative. I'd point out that Fuel has been doing deployment with Docker containers for a while now, also out in the open, but on stackforge. Would the deployment program welcome Fuel into the openstack/ code namespace as well? Something to think about. Based on what you're saying about the Deployment program, you seem to indicate the TC would say No. I don't speak for the program. In the past, I've personally expressed support for alternative implementations where they make sense for OpenStack as a whole, and I still feel that way. -- -- James Slagle -- ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
I am interested in participating as well. Thanks, Brad Brad Topol, Ph.D. IBM Distinguished Engineer OpenStack (919) 543-0646 Internet: bto...@us.ibm.com Assistant: Kendra Witherspoon (919) 254-0680 From: Morgan Fainberg morgan.fainb...@gmail.com To: Dolph Mathews dolph.math...@gmail.com, OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 09/23/2014 08:01 PM Subject:Re: [openstack-dev] [Nova] [All] API standards working group -Original Message- From: Dolph Mathews dolph.math...@gmail.com Reply: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: September 23, 2014 at 16:41:27 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group I'd be interested in participating in this as well. I wrote Keystone's v3 Identity API Document Overview and Conventions [1] with an eye toward hopefully establishing *some* consistency across multiple projects (or at least having a starting ground with which to discuss and iterate on). [1] https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3.md#document-overview On Sep 23, 2014 6:22 PM, Jay Pipes wrote: On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote: jaypi...@gmail.com on Tuesday, September 23, 2014 9:09 AM wrote: _Snip I'd like to say finally that I think there should be an OpenStack API working group whose job it is to both pull together a set of OpenStack API practices as well as evaluate new REST APIs proposed in the OpenStack ecosystem to provide guidance to new projects or new subprojects wishing to add resources to an existing REST API. Best, -jay */[Rocky Grober] /*++ */Jay, are you volunteering to head up the working group? Or at least be an active member? 
I’ll certainly follow with interest, but I think I have my hands full with the log rationalization working group./* Yes, I'd be willing to head up the working group... or at least participate in it. Best, -jay I would also be interested in participating on this front. —Morgan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Heat] Covergence: Persisting dependency task graph and resource versioning
Hi, The convergence spec is too big to be reviewed and understood in its entirety without going through multiple iterations. It is probably a good idea to break it into multiple implementable specs, moving chunks of material from the main convergence spec into individual implementable specs such as the engine or the observer, or even newer specs. One of the steps toward convergence is to enable the Heat engine to persist the dependency task graph and to version resources. The main convergence spec talks about this; the new spec elaborates on it and discusses what needs to be done and why the backup stack needs to be removed. Convergence: https://review.openstack.org/#/c/95907/7 Persisting dependency graph and resource version: https://review.openstack.org/#/c/123749/1 Please review it and let us know your thoughts. - Anant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
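For intuition on what persisting the dependency task graph buys, here is a toy sketch (the structures and names are assumptions for illustration, not Heat's actual schema): the graph is stored as a plain edge list, and a traversal order can be recomputed from that edge list by whichever engine picks up the work.

```python
# Sketch: a persisted dependency graph as (requirement, dependent)
# edges, plus a topological sort to recover a safe traversal order.
from collections import defaultdict, deque


def topological_order(edges):
    """Return nodes in dependency order (requirements first).

    `edges` is an iterable of (req, node) pairs meaning `node`
    depends on `req`, i.e. the rows one might persist in a DB table.
    """
    deps = defaultdict(set)        # node -> unmet requirements
    dependents = defaultdict(set)  # node -> nodes waiting on it
    nodes = set()
    for req, node in edges:
        deps[node].add(req)
        dependents[req].add(node)
        nodes.update((req, node))
    ready = deque(n for n in nodes if not deps[n])
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            deps[m].discard(n)
            if not deps[m]:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("dependency cycle detected")
    return order
```

Because the edge list is plain data, it can be written to and read back from a database, which is what lets stack processing survive engine restarts and be distributed across engines.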
Re: [openstack-dev] [Nova] [All] API standards working group
On Tue, 2014-09-23 at 18:18 -0400, Jay Pipes wrote: Yes, I'd be willing to head up the working group... or at least participate in it. I'll raise my hand to participate… -- Kevin L. Mitchell kevin.mitch...@rackspace.com Rackspace ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Convergence: Backing up template instead of stack
On 24-Sep-14 00:25, Joshua Harlow wrote: I believe heat has its own dependency graph implementation but if that was switched to networkx[1] that library has a bunch of nice read/write capabilities. See: https://github.com/networkx/networkx/tree/master/networkx/readwrite And one made for sqlalchemy @ https://pypi.python.org/pypi/graph-alchemy/ Networkx has worked out pretty well for taskflow (and I believe mistral is also using it). [1] https://networkx.github.io/ Something to think about... On Sep 23, 2014, at 11:32 AM, Zane Bitter zbit...@redhat.com wrote: On 23/09/14 09:44, Anant Patil wrote: On 23-Sep-14 09:42, Clint Byrum wrote: Excerpts from Angus Salkeld's message of 2014-09-22 20:15:43 -0700: On Tue, Sep 23, 2014 at 1:09 AM, Anant Patil anant.pa...@hp.com wrote: Hi, One of the steps in the direction of convergence is to enable Heat engine to handle concurrent stack operations. The main convergence spec talks about it. Resource versioning would be needed to handle concurrent stack operations. As of now, while updating a stack, a backup stack is created with a new ID and only one update runs at a time. If we keep the raw_template linked to it's previous completed template, i.e. have a back up of template instead of stack, we avoid having backup of stack. Since there won't be a backup stack and only one stack_id to be dealt with, resources and their versions can be queried for a stack with that single ID. The idea is to identify resources for a stack by using stack id and version. Please let me know your thoughts. Hi Anant, This seems more complex than it needs to be. I could be wrong, but I thought the aim was to simply update the goal state. The backup stack is just the last working stack. So if you update and there is already an update you don't need to touch the backup stack. Anyone else that was at the meetup want to fill us in? The backup stack is a device used to collect items to operate on after the current action is complete. 
It is entirely an implementation detail. Resources that can be updated in place will have their resource record superseded, but retain their physical resource ID. This is one area where the resource plugin API is particularly sticky, as resources are allowed to raise the replace me exception if in-place updates fail. That is o-k though, at that point we will just comply by creating a replacement resource as if we never tried the in-place update. In order to facilitate this, we must expand the resource data model to include a version. Replacement resources will be marked as current and to-be-removed resources marked for deletion. We can also keep all current - 1 resources around to facilitate rollback until the stack reaches a complete state again. Once that is done, we can remove the backup stack. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Backup stack is a good way to take care of rollbacks or cleanups after the stack action is complete. By cleanup I mean the deletion of resources that are no longer needed after the new update. It works very well when one engine is processing the stack request and the stacks are in memory. It's actually a fairly terrible hack (I wrote it ;) It doesn't work very well because in practice during an update there are dependencies that cross between the real stack and the backup stack (due to some resources remaining the same or being updated in place, while others are moved to the backup stack ready for replacement). So in the event of a failure that we don't completely roll back on the spot, we lose some dependency information. As a step towards distributing the stack request processing and making it fault-tolerant, we need to persist the dependency task graph. 
The backup stack can also be persisted along with the new graph, but then the engine has to traverse both the graphs to proceed with the operation and later identify the resources to be cleaned-up or rolled back using the stack id. There would be many resources for the same stack but different stack ids. Right, yeah this would be a mistake because in reality there is only one graph, so that's how we need to model it internally. In contrast, when we store the current dependency task graph(from the latest request) in DB, and version the resources, we can identify those resources that need to be rolled-back or cleaned up after the stack operations is done, by comparing their versions. With versioning of resources and template, we can avoid creating a deep stack of backup stacks. The processing of stack operation can happen from multiple engines, and IMHO, it is simpler when all the engines just see one stack and versions of resources, instead of seeing many stacks with many resources for each stack. Bingo. I think all you need
Re: [openstack-dev] pycharm license?
Hey Devs, I just got clarification on what was meant by 'commercial developer': if a developer participates in commercial consulting, or develops commercial per-order customization / implementation for OpenStack (if any), they are considered a commercial developer. As long as you do not meet those criteria, I can provide you the key. To request the key from me, please send me the name of the company you work for (if applicable) and your launchpad id. Thanks again, Andrew Melton From: Andrew Melton [andrew.mel...@rackspace.com] Sent: Tuesday, September 23, 2014 3:12 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] pycharm license? Hi Devs, I have the new license, but it has some new restrictions on its use. I am still waiting on some clarification, but I do know some situations in which I can distribute the license. I cannot distribute the license to any commercial developers. This means that if, as part of your job, you are contributing to OpenStack, and the company you work for provides paid services, support, or training relating to OpenStack, I cannot provide you the license. If you meet those criteria (I know I do), you are now required to have a commercial license. I'm sure quite a few of us now meet these criteria and are without a license. Another alternative is the free Community Edition. If you do not meet those criteria, I can provide you with the license. To request the license, please send me an email with your name, the company you work for (if applicable), and your launchpad id. If you are unsure of your situation, it may be best to hold off until I hear back from Jetbrains. Lastly, as part of Jetbrains granting us this new license, they have asked if anyone would be willing to write a review. If anyone would like to do that, please let me know. 
Thanks, Andrew Melton From: Andrew Melton [andrew.mel...@rackspace.com] Sent: Monday, September 22, 2014 3:19 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] pycharm license? Hi Devs, I'm working on the new license, but it is taking longer than it normally does, as Jetbrains is requiring some new steps to get the license. I'll send out an update when I have it, but until then we'll just have to deal with the pop-ups on start. If I'm remembering correctly, a new license simply grants access to newer versions; current versions should still work. Thanks for your patience, Andrew Melton From: Manickam, Kanagaraj [kanagaraj.manic...@hp.com] Sent: Sunday, September 21, 2014 11:20 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] pycharm license? Hi, Does anyone have a PyCharm license for OpenStack project development? Thanks. Regards Kanagaraj M ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On Tue, Sep 23, 2014 at 5:18 PM, Jay Pipes jaypi...@gmail.com wrote: Yes, I'd be willing to head up the working group... or at least participate in it. I'll bring an API consumer's perspective. dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] stack-update with existing parameters
On 24/09/14 08:47, Tomas Sedovic wrote: On 24/09/14 13:50, Dimitri Mazmanov wrote: TL;DR Is there any reason why stack-update doesn't reuse the existing parameters when I extend my stack definition with a resource that uses them? Hey Dimitri, There is an open bug for this feature: https://bugs.launchpad.net/heat/+bug/1224828 and it seems to be being worked on. In fact it's complete, and you can now use the -x (or --existing) flag to do this on the latest master of Heat (i.e. it will be available in Juno) and the latest release of python-heatclient. cheers, Zane. I have created a stack from the hello_world.yaml template (https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml). It has the following parameters: key_name, image, flavor, admin_pass, db_port. heat stack-create hello_world -P key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1 -f hello_world.yaml Then I added one more nova server resource with a new name (server1); the rest of the details are untouched. I get the following when I use this new template without mentioning any of the parameter values. heat --debug stack-update hello_world -f hello_world_modified.yaml On debugging it throws the below exception. The resource was found at http://localhost:8004/v1/7faee9dd37074d3e8896957dc4a52e22/stacks/hello_world/85a0bc2c-1a20-45c4-a8a9-7be727db6a6d; you should be redirected automatically. 
DEBUG (session) RESP: [400] CaseInsensitiveDict({'date': 'Wed, 24 Sep 2014 10:08:08 GMT', 'content-length': '961', 'content-type': 'application/json; charset=UTF-8'}) RESP BODY: {explanation: The server could not comply with the request since it is either malformed or otherwise incorrect., code: 400, error: {message: The Parameter (admin_pass) was not provided., traceback: Traceback (most recent call last): File /opt/stack/heat/heat/engine/service.py, line 63, in wrapped: return func(self, ctx, *args, **kwargs) File /opt/stack/heat/heat/engine/service.py, line 576, in update_stack: env, **common_params) File /opt/stack/heat/heat/engine/parser.py, line 109, in __init__: context=context) File /opt/stack/heat/heat/engine/parameters.py, line 403, in validate: param.validate(validate_value, context) File /opt/stack/heat/heat/engine/parameters.py, line 215, in validate: raise exception.UserParameterMissing(key=self.name) UserParameterMissing: The Parameter (admin_pass) was not provided., type: UserParameterMissing}, title: Bad Request} When I mention all the parameters, it updates the stack properly: heat --debug stack-update hello_world -P key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1 -f hello_world_modified.yaml Any reason why I can't reuse the existing parameters during the stack-update if I don't want to specify them again? - Dimitri ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
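The -x/--existing behaviour Zane mentions amounts to a parameter merge: start from the parameters already stored with the stack and overlay any newly supplied -P values. A minimal illustration of that merge semantics (function and variable names here are hypothetical, not Heat's actual code):

```python
# Sketch of "--existing" parameter reuse: stored stack parameters are
# the base, and any freshly supplied -P values win over them.

def merge_parameters(existing, supplied):
    """Reuse the stack's stored parameters, overridden by new ones."""
    merged = dict(existing)
    merged.update(supplied)
    return merged
```

With this behaviour, an update that only adds a resource needs no -P arguments at all, and an update that changes one parameter needs only that one.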
Re: [openstack-dev] pycharm license?
On 24/09/2014 17:41, Andrew Melton wrote: Hey Devs, I just got clarification on what was meant by 'commercial developer.' If a developer participates in commercial consulting, or develops commercial per-order customization / implementation for OpenStack (if any), they are considered a commercial developer. Well, if that applies to company objectives, then all major contributors to OpenStack are considered commercial developers. I would even consider that only individual contributors could get that license, because companies on the Foundation board are, per se, participating in commercial consulting, customization or implementation. Thanks for raising this up, -Sylvain As long as you do not meet those criteria, I can provide you the key. To request the key from me, please send me the name of the company you work for (if applicable) and your launchpad id. Thanks again, Andrew Melton *From:* Andrew Melton [andrew.mel...@rackspace.com] *Sent:* Tuesday, September 23, 2014 3:12 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] pycharm license? Hi Devs, I have the new license, but it has some new restrictions on its use. I am still waiting on some clarification, but I do know some situations in which I can distribute the license. I cannot distribute the license to any commercial developers. This means that if, as part of your job, you are contributing to OpenStack, and the company you work for provides paid services, support, or training relating to OpenStack, I cannot provide you the license. If you meet those criteria (I know I do), you are now required to have a commercial license. I'm sure quite a few of us now meet these criteria and are without a license. Another alternative is the free Community Edition. If you do not meet those criteria, I can provide you with the license. To request the license, please send me an email with your name, the company you work for (if applicable), and your launchpad id. 
If you are unsure of your situation, it may be best to hold off until I hear back from JetBrains. Lastly, as part of JetBrains granting us this new license, they have asked if anyone would be willing to write a review. If anyone would like to do that, please let me know. Thanks, Andrew Melton *From:* Andrew Melton [andrew.mel...@rackspace.com] *Sent:* Monday, September 22, 2014 3:19 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] pycharm license? Hi Devs, I'm working on the new license, but it is taking longer than it normally does, as JetBrains is requiring some new steps to get the license. I'll send out an update when I have it, but until then we'll just have to deal with the pop-ups on start. If I'm remembering correctly, a new license simply grants access to newer versions; current versions should still work. Thanks for your patience, Andrew Melton *From:* Manickam, Kanagaraj [kanagaraj.manic...@hp.com] *Sent:* Sunday, September 21, 2014 11:20 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] pycharm license? Hi, Does anyone have a PyCharm license for OpenStack project development? Thanks. Regards Kanagaraj M ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [NFV] Meeting Minutes for 2014-09-24
Hi all, Thanks to those who attended today's meeting, please find links to the minutes and log below: Minutes: http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-09-24-14.03.html Minutes (text): http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-09-24-14.03.txt Log: http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-09-24-14.03.log.html In particular please note that items for Kilo design summit planning need to be socialized with the relevant teams and then added to the planning etherpads here for consideration as the PTLs attempt to plan the design tracks: https://wiki.openstack.org/wiki/Summit/Planning Thanks, Steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [QA] Meeting Thursday September 25th at 17:00 UTC
Hi Everyone, Just a quick reminder that the weekly OpenStack QA team IRC meeting will be this Thursday, September 25th at 17:00 UTC in the #openstack-meeting channel. The agenda for tomorrow's meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. It's also worth noting that a few weeks ago we started having a regular dedicated Devstack topic during the meetings. So if anyone is interested in Devstack development please join the meetings to be a part of the discussion. To help people figure out what time 17:00 UTC is in other timezones tomorrow's meeting will be at: 13:00 EDT 02:00 JST 02:30 ACST 19:00 CEST 12:00 CDT 10:00 PDT -Matt Treinish ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On Sep 24, 2014, at 9:42 AM, Dean Troyer dtro...@gmail.com wrote: I'll bring an API consumer's perspective. +1 I’d bring an API consumer’s perspective as well. Looks like there’s lots of support for an API WG. What’s the next step? Form a WG under the User Committee [1] or is there something more appropriate? Thanks, Everett [1] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [OSSN 0029] Neutron FWaaS rules lack port restrictions when using protocol 'any'
Neutron FWaaS rules lack port restrictions when using protocol 'any' --- ### Summary ### A bug in the Neutron FWaaS (Firewall as a Service) code results in iptables rules being generated that do not reflect desired port restrictions. This behaviour is triggered when a protocol other than 'udp' or 'tcp' is specified, e.g. 'any'. The scope of this bug is limited to Neutron FWaaS and systems built upon it. Security Groups are not affected. ### Affected Services / Software ### Neutron FWaaS, Grizzly, Havana, Icehouse ### Discussion ### When specifying firewall rules using Neutron that should match multiple protocols, it is convenient to specify a protocol of 'any' in place of defining multiple specific rules. For example, in order to allow DNS (TCP and UDP) requests, the following rule might be defined: neutron firewall-rule-create --protocol any --destination-port 53 \ --action allow However, this rule results in the generation of iptables firewall rules that do not reflect the desired port restriction. An example generated iptables rule might look like the following: -A neutron-l3-agent-iv441c58eb2 -j ACCEPT Note that the restriction on port 53 is missing. As a result, the generated rule will match and accept any traffic being processed by the rule chain to which it belongs. Additionally, iptables arranges sets of rules into chains and processes packets entering a chain one rule at a time. Rule matching stops at the first matched exit condition (e.g. accept or drop). Since the generated rule above will match and accept all packets, it will effectively short-circuit any filtering rules lower down in the chain. Consequently, this can break other firewall rules regardless of the protocol specified when defining those rules with Neutron. They simply need to appear later in the generated iptables rule chain. 
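To make the failure mode concrete, here is a deliberately simplified, hypothetical sketch of this kind of rule-generation logic. It is not the actual Neutron FWaaS iptables driver code: in iptables a --dport match is only valid alongside "-p tcp" or "-p udp", so a generator that silently skips the port clause for other protocols emits a rule that matches everything in the chain.

```python
# Simplified, hypothetical sketch of the flawed rule generation -- NOT
# the real Neutron FWaaS driver. A --dport match requires -p tcp/udp,
# so for any other protocol the port restriction is silently dropped.

def build_iptables_rule(chain, protocol, dest_port, action="ACCEPT"):
    parts = ["-A", chain]
    if protocol in ("tcp", "udp"):
        parts += ["-p", protocol, "--dport", str(dest_port)]
    # For protocol 'any', no match clause is emitted at all, so the rule
    # below accepts ALL traffic and short-circuits later rules.
    parts += ["-j", action]
    return " ".join(parts)

# The port restriction survives for tcp/udp...
print(build_iptables_rule("neutron-l3-agent-iv441c58eb2", "tcp", 53))
# -> -A neutron-l3-agent-iv441c58eb2 -p tcp --dport 53 -j ACCEPT
# ...but is lost entirely for 'any':
print(build_iptables_rule("neutron-l3-agent-iv441c58eb2", "any", 53))
# -> -A neutron-l3-agent-iv441c58eb2 -j ACCEPT
```

The second rule is exactly the over-broad ACCEPT shown in the advisory above.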
This bug is triggered when any protocol other than 'tcp' or 'udp' is specified in conjunction with a source or destination port number. ### Recommended Actions ### Avoid the use of 'any' when specifying the protocol for Neutron FWaaS rules. Instead, create multiple rules for both 'tcp' and 'udp' protocols independently. A fix has been submitted to Juno. ### Contacts / References ### This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0029 Original LaunchPad Bug : https://bugs.launchpad.net/neutron/+bug/1365961 OpenStack Security ML : openstack-secur...@lists.openstack.org OpenStack Security Group : https://launchpad.net/~openstack-ossg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
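The recommended workaround, sketched below, reuses the CLI invocation quoted in the advisory but replaces the single 'any' rule with one explicit rule per protocol, so the generated iptables rules keep the port match:

```shell
# Workaround sketch (per the Recommended Actions above): allow DNS with
# two explicit per-protocol rules instead of '--protocol any', which
# loses the destination-port restriction.
neutron firewall-rule-create --protocol tcp --destination-port 53 \
    --action allow
neutron firewall-rule-create --protocol udp --destination-port 53 \
    --action allow
```

These commands target a live Neutron endpoint, so they are shown as a fragment rather than a runnable script.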
Re: [openstack-dev] [Nova] [All] API standards working group
On Wed, Sep 24, 2014 at 11:42 AM, Dean Troyer dtro...@gmail.com wrote: On Tue, Sep 23, 2014 at 5:18 PM, Jay Pipes jaypi...@gmail.com wrote: Yes, I'd be willing to head up the working group... or at least participate in it. I'll bring an API consumer's perspective. I would love to participate too. I have an interest in RESTful API design and the surrounding architecture. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek www: http://dstanek.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On 09/24/2014 12:48 PM, Everett Toews wrote: On Sep 24, 2014, at 9:42 AM, Dean Troyer dtro...@gmail.com wrote: I'll bring an API consumer's perspective. +1 I’d bring an API consumer’s perspective as well. Looks like there’s lots of support for an API WG. What’s the next step? Form a WG under the User Committee [1] or is there something more appropriate? Thanks, Everett [1] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups Whatever it ends up being, it needs to have some teeth to it. Otherwise, we're going to end up in the exact same place we're in now, where each project does something slightly different. That could mean putting the working group under the user committee and giving it teeth via some policy that would allow the group to look over proposed API changes *before* the code that implements the API was merged into an OpenStack project. -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Zaqar] The horse is dead. Long live the horse.
Sorry for the vague subject[1]. I just wanted to commend Flavio Percoco and the Zaqar team for maintaining poise and being excellent citizens of OpenStack whilst being questioned intensely by the likes of me, and others. I feel that this questioning has been useful, and will allow us to reason about Zaqar in the future. So, I recommend that we stop questioning, and start coding. If you feel that a lighter weight system with different guarantees will serve the users of OpenStack better than Zaqar, then own up and write it. Meanwhile, I suggest we spend our communication bandwidth and effort on reasoning about the bigger problem that Zaqar exposes, and which I think Monty has highlighted in his recent thread about the big tent. Anyway, thanks for listening. -Clint [1] The subject is a reference to beating a dead horse. Zaqar is not a horse, and is not dead. The first person to get angry about my declaration of Zaqar's death should be asked to wear a ridiculous sombrero to the next summit. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
Sounds like an interesting project/goal and will be interesting to see where this goes. A few questions/comments: How much golang will people be exposed to with this addition? Seeing that this could be the first project using 'go', it will be interesting to see where this goes (since afaik none of the infra support exists, and people aren't likely to be familiar with go vs python in the openstack community overall). What's your thoughts on how this will affect the existing openstack container effort? I see that kubernetes isn't exactly a small project either (~90k LOC, for those who use these types of metrics), so I wonder how that will affect people getting involved here, aka, who has the resources/operators/other... available to actually setup/deploy/run kubernetes, when operators are likely still just struggling to run openstack itself (at least operators are getting used to the openstack warts; a new set of kubernetes warts might not be so helpful). On Sep 23, 2014, at 3:40 PM, Steven Dake sd...@redhat.com wrote: Hi folks, I'm pleased to announce the development of a new project Kolla which is Greek for glue :). Kolla has a goal of providing an implementation that deploys OpenStack using Kubernetes and Docker. This project will begin as a StackForge project separate from the TripleO/Deployment program code base. Our long term goal is to merge into the TripleO/Deployment program rather than create a new program. Docker is a container technology for delivering hermetically sealed applications and has about 620 technical contributors [1]. We intend to produce docker images for a variety of platforms beginning with Fedora 20. We are completely open to any distro support, so if folks want to add a new Linux distribution to Kolla please feel free to submit patches :) Kubernetes at the most basic level is a Docker scheduler produced by and used within Google [2]. Kubernetes has in excess of 100 technical contributors. 
Kubernetes is more than just a scheduler; it provides additional functionality such as load balancing and scaling, and has a significant roadmap. The #tripleo channel on Freenode will be used for Kolla developer and user communication. Even though we plan to become part of the Deployment program long term, as we experiment we believe it is best to hold a separate weekly one hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3]. This project has been discussed with the current TripleO PTL (Robert Collins) and he seemed very supportive and agreed with the organization of the project outlined above. James Slagle, a TripleO core developer, has kindly offered to liaise between Kolla and the broader TripleO community. I personally feel it is necessary to start from a nearly empty repository when kicking off a new project. As a result, there is limited code in the repository [4] at this time. I suspect folks will start cranking out a kick-ass implementation once the Kolla/Stackforge integration support is reviewed by the infra team [5]. The initial core team is composed of Steven Dake, Ryan Hallisey, James Lebocki, Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel. The core team will be reviewed every 6 weeks to add fresh developers. Please join the core team in designing and inventing this rockin' new technology! Regards -steve ~~ [1] https://github.com/docker/docker [2] https://github.com/GoogleCloudPlatform/kubernetes [3] https://wiki.openstack.org/wiki/Meetings/Kolla [4] https://github.com/jlabocki/superhappyfunshow [5] https://review.openstack.org/#/c/122972/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Sahara][Doc] Filing a bug for modifying the image
Hi Sharan, Yes, file a bug and commit the new image to review. Thanks, Andrew. On Wed, Sep 24, 2014 at 10:16 AM, Sharan Kumar M sharan.monikan...@gmail.com wrote: Hi all, In this documentation http://docs.openstack.org/developer/sahara/overview.html#details, the missing services were added. And I notice that the image is not in parallel to the description and I think it needs to be fixed. So should I file a new bug in launchpad before proceeding? Thanks, Sharan Kumar M ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Zaqar] The horse is dead. Long live the horse.
On 09/24/2014 06:07 PM, Clint Byrum wrote: I just wanted to commend Flavio Percoco and the Zaqar team for maintaining poise and being excellent citizens of OpenStack whilst being questioned intensely by the likes of me, and others. +1 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Zaqar] The horse is dead. Long live the horse.
On 09/24/2014 07:07 PM, Clint Byrum wrote: Sorry for the vague subject[1]. I just wanted to commend Flavio Percoco and the Zaqar team for maintaining poise and being excellent citizens of OpenStack whilst being questioned intensely by the likes of me, and others. And I'd personally like to thank all of you for taking the time to go through the code, docs, specs and emails. It's been very helpful and it's highlighted the bads and goods of the service. Lets work on making it better. I feel that this questioning has been useful, and will allow us to reason about Zaqar in the future. So, I recommend that we stop questioning, and start coding. https://review.openstack.org/#/c/123750/ ;) If you feel that a lighter weight system with different guarantees will serve the users of OpenStack better than Zaqar, then own up and write it. Meanwhile, I suggest we spend our communication bandwidth and effort on reasoning about the bigger problem that Zaqar exposes, and which I think Monty has highlighted in his recent thread about the big tent. +1 Thank you, Clint Flavio -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
On 18/09/14 14:53, Monty Taylor wrote: Hey all, I've recently been thinking a lot about Sean's Layers stuff. So I wrote a blog post which Jim Blair and Devananda were kind enough to help me edit. http://inaugust.com/post/108 Thanks Monty, I think there are some very interesting ideas in here. I'm particularly glad to see the 'big tent' camp reasserting itself, because I have no sympathy with anyone who wants to join the OpenStack community and then bolt the door behind them. Anyone who contributes to a project that is related to OpenStack's goals, is willing to do things the OpenStack way, and submits itself to the scrutiny of the TC deserves to be treated as a member of our community with voting rights, entry to the Design Summit and so on. I'm curious how you're suggesting we decide which projects satisfy those criteria though. Up until now, we've done it through the incubation process (or technically, the new program approval process... but in practice we've never added a project that was targeted for eventual inclusion in the integrated release to a program without incubating it). Would the TC continue to judge whether a project is doing things the OpenStack way prior to inclusion, or would we let projects self-certify? What does it mean for a project to submit itself to TC scrutiny if it knows that realistically the TC will never have time to actually scrutinise it? Or are you not suggesting a change to the current incubation process, just a willingness to incubate multiple projects in the same problem space? I feel like I need to play devil's advocate here, because overall I'm just not sure I understand the purpose of arbitrarily - and it *is* arbitrary - declaring Layer #1 to be anything required to run Wordpress. To anyone whose goal is not to run Wordpress, how is that relevant? 
Speaking of arbitrary, I had to laugh a little at this bit: Also, please someone notice that the above is too many steps and should be: openstack boot gentoo on-a 2G-VM with-a publicIP with-a 10G-volume call-it blog.inaugust.com That's kinda sorta exactly what Heat does ;) Minus the part about assuming there is only one kind of application, obviously. I think there are a number of unjustified assumptions behind this arrangement of things. I'm going to list some here, but I don't want anyone to interpret this as a personal criticism of Monty. The point is that we all suffer from biases - not for any questionable reasons but purely as a result of our own experiences, who we spend our time talking to and what we spend our time thinking about - and therefore we should all be extremely circumspect about trying to bake our own mental models of what OpenStack should be into the organisational structure of the project itself. * Assumption #1: The purpose of OpenStack is to provide a Compute cloud This assumption is front-and-centre throughout everything Monty wrote. Yet this wasn't how the OpenStack project started. In fact there are now at least three services - Swift, Nova, Zaqar - that could each make sense as the core of a standalone product. Yes, it's true that Nova effectively depends on Glance and Neutron (and everything depends on Keystone). We should definitely document that somewhere. But why does it make Nova special? * Assumption #2: Yawnoc's Law Don't bother Googling that, I just made it up. It's the reverse of Conway's Law: Infra engineers who design governance structures for OpenStack are constrained to produce designs that are copies of the structure of Tempest. I just don't understand why that needs to be the case. Currently, for understandable historic reasons, every project gates against every other project. That makes no sense any more, completely independently of the project governance structure. We should just change it! 
There is no organisational obstacle to changing how gating works. Even this proposal doesn't entirely make sense on this front - e.g. Designate requires only Neutron and Keystone... why should Nova, Glance and every other project in Layer 1 gate against it, and vice-versa? I suggested in another thread[1] a model where each project would publish a set of tests, each project would decide which sets of tests to pull in and gate on, and Tempest would just be a shell for setting up the environment and running the selected tests. Maybe that idea is crazy or at least needs more work (it certainly met with only crickets and tumbleweeds on the mailing list), but implementing it wouldn't require TC intervention and certainly not by-laws changes. It just requires... implementing it. Perhaps the idea here is that by designating Layer 1 the TC is indicating to projects which other projects they should accept gate test jobs from (a function previously fulfilled by Incubation). I'd argue that this is a very bad way to do it, because (a) it says nothing to projects outside of Layer 1 how they should
Re: [openstack-dev] [Nova] [All] API standards working group
Seems like there is some overlap with the end user (i.e. consumer) working group at https://wiki.openstack.org/wiki/End_User_Working_Group Sounds like it would be worth discussing with them how to focus on these needs. The scope of the end user is much larger than just the API, but a consistent API is needed for easy consumption. How about seeing with Chris Kemp where things fit together? There is a fairly open structure under the user committee if you would like a place to form, but it is important to avoid duplication. With the specs process, there is now much more opportunity for users (in all different forms) to give input before code is written. Tim -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: 24 September 2014 19:05 To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] [All] API standards working group On 09/24/2014 12:48 PM, Everett Toews wrote: On Sep 24, 2014, at 9:42 AM, Dean Troyer dtro...@gmail.com wrote: I'll bring an API consumer's perspective. +1 I'd bring an API consumer's perspective as well. Looks like there's lots of support for an API WG. What's the next step? Form a WG under the User Committee [1] or is there something more appropriate? Thanks, Everett [1] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups Whatever it ends up being, it needs to have some teeth to it. Otherwise, we're going to end up in the exact same place we're in now, where each project does something slightly different. That could mean putting the working group under the user committee and giving it teeth via some policy that would allow the group to look over proposed API changes *before* the code that implements the API was merged into an OpenStack project. 
-jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ceilometer] Why alarm name is unique per project?
@Nejc Saje you're talking about group alarm, I'm going to try it on Kilo On Wed, Sep 24, 2014 at 7:34 PM, Nejc Saje ns...@redhat.com wrote: On 09/24/2014 12:23 PM, Long Suo wrote: Hi, all I am just a little confused why alarm name should be unique per project, anyone knows this? Good point, I admit I can't find a compelling reason for that either. Perhaps someone else can? Also, an interesting use-case comes to mind, where you can have for example an alarm for each instance, all of them named 'cpu_alarm', but with unique action per instance. You could then retrieve all these alarms at once with a proper query. -- blog: zqfan.github.com git: github.com/zqfan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] sw process
Folks – we have gotten into a couple of loose habits in our mad dash to beta 1 that we need to tighten up on. I've asked Ryan to set Gerrit up to fail reviews when the commit message does not contain one of the following, so please add them to your commit. In addition, please make your commit message useful, with enough detail that the reviewer knows what he/she is reviewing. Implements-Story: XXX Fixes-Bug: YYY Other issues: Do not +2 your own patch. We won't enforce this unless we need to, but don't do it. Reviews pushed on master and containing multiple changes: You should create a branch to hold your work and name it something that makes sense. This branch should ONLY contain this work. When you check in your code it makes it much easier for the reviewer to understand what you are doing. If you have work that is dependent on other work, create a dependency on your branch, i.e. git checkout -b <branch-name>, where <branch-name> is something like bug/1234 or implement-ui-validation. This is the OpenStack workflow, which talks about how to do dependencies, rebasing, etc. It's a pretty good guide: https://wiki.openstack.org/wiki/Gerrit_Workflow#Normal_Workflow Anyone have other guides they want to share on a good workflow? Let me know if you have comments or suggestions. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
Excerpts from Robert Collins's message of 2014-09-23 21:14:47 -0700: No one helped me edit this :) http://rbtcollins.wordpress.com/2014/09/24/what-poles-for-the-tent/ I hope I haven't zoned out and just channelled someone else here ;) This sounds like API's are what matters. You did spend some time working with Simon Wardley, didn't you? ;) I think it's a sound argument, but I'd like to banish the term reference implementation from any discussions around what OpenStack, as a project, delivers. It has too many negative feelings wrapped up in it. I also want to call attention to how what you describe feels an awful lot like POSIX to me. Basically offering guarantees of API compatibility, but then letting vendors run wild around and behind it. I'm not sure if that is a good thing, or a bad thing. I do, however, think if we can avoid a massive vendor battle that involves multiple vendors pushing multiple implementations, we will save our companies a lot of money, and our users will get what they need sooner. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] sw process
SORRY – please ignore that email – it was clearly internal…… I used the wrong ML (blush) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On 09/24/2014 10:05 AM, Jay Pipes wrote: Whatever it ends up being, it needs to have some teeth to it. Otherwise, we're going to end up in the exact same place we're in now, where each project does something slightly different. +1 I think getting started and producing some material to discuss in Paris would be the first step. Like other teams or WGs, a wiki page for the team with a mission, roadmap, and members would be a start. I would suggest staying close to the TC and PTLs to get buy-in from them, increasing the chances of smooth implementation, while getting constant feedback from downstream consumers of the APIs. /stef -- Ask and answer questions on https://ask.openstack.org ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] sw process
On 09/24/2014 02:51 PM, Tracy Jones wrote: SORRY – please ignore that email – it was clearly internal…… I used the wrong ML (blush) I can get behind any email that states don't +2 your own patch, so yeah, let's not do that wherever you are. Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Juno release images
Here are the updated Fedora images; could we get them hosted at sahara-files.mirantis.com please? https://mimccune.fedorapeople.org/sahara-juno-vanilla-1.2.1-fedora-20.qcow2 https://mimccune.fedorapeople.org/sahara-juno-vanilla-2.4.1-fedora-20.qcow2 md5sum 7e8a39bb4d43ebf07ceeaf66670d8726 sahara-juno-vanilla-1.2.1-fedora-20.qcow2 b218e236be9f95cc86073afa6b248b06 sahara-juno-vanilla-2.4.1-fedora-20.qcow2 I'll create a bug and submit a CR to the doc change with sahara-files.m.c as the location. thanks, mike - Original Message - - Original Message - Mike, please, propose a CR to our docs w/ Fedora images when it'll be ready. will do mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] usability anti-pattern, part 2
On 09/19/2014 09:01 PM, Monty Taylor wrote: except exc.Unauthorized: raise exc.CommandError("Invalid OpenStack credentials.") except exc.AuthorizationFailure: raise exc.CommandError("Unable to authorize user") This is pervasive enough that both of those exceptions come from openstack.common. Anyone? Please. Explain the difference. In words. I think that there are two problems here: first, the message for Unauthorized is wrong; it should be something like "You are not authorized to do X". The second exception should most likely be 'AuthenticationFailure', and should have the error text from the Authentication exception. I've seen confusion between authz and authn in many projects; looks like OpenStack is no different, unfortunately. -- Ed Leafe ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
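The authn/authz split Ed describes can be sketched with clearer, user-actionable messages. This is an illustrative sketch only: the exception names below are hypothetical stand-ins modeled on the snippet above, not the actual openstack.common classes.

```python
# Hypothetical sketch: map authentication (401-style) vs authorization
# (403-style) failures to messages a user can actually act on.

class Unauthorized(Exception):
    """Authentication failed: the credentials were not accepted."""

class Forbidden(Exception):
    """Authenticated, but the account lacks permission for the action."""

def friendly_error(exc):
    """Return a user-facing message that names the right failure mode."""
    if isinstance(exc, Unauthorized):
        return ("Authentication failed: the credentials you supplied were "
                "not accepted. Check your username/password or token.")
    if isinstance(exc, Forbidden):
        return ("Not authorized: your credentials are valid, but your "
                "account lacks permission for this operation.")
    return str(exc)

print(friendly_error(Unauthorized()))
print(friendly_error(Forbidden()))
```

The point is simply that the two messages name different problems, so a user knows whether to fix their credentials or ask for more permissions.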
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
On 19/09/14 22:37, Monty Taylor wrote: I think we can do what you're saying and generalize a little bit. What if we declared programs, as needed, when we think there is a need to pick a winner. (I think we can all agree that early winner picking is an unintended but very real side effect of the current system) Ooh, a challenge. I'll bite ;) Here's a question: how many instances are there of two Stackforge projects working on the same problem domain? I can think of one example: Gnocchi and StackTach - though those are (respectively) a branch of and a sort-of competitor to an integrated project (where we supposedly picked the winner already). So we have evidence of competitors surviving the picking of a winner, but not a lot of evidence of competition in general. (To be fair, you could make an argument that people are being put off starting competing projects by the prospect of eventually having to submit to winner-picking by the TC.) Don't get me wrong, I think it's clear that our implicit hope for the current model - that picking a winner would mean everyone getting on board with that project and making it better - has failed to materialise. But there's also every reason to think that a model where we rely on competition between projects in the marketplace to determine a winner owes just as much to wishful thinking. I suspect that part of the problem is that there are so many different things any given person could be working on to make users' lives better that when they see a project whose approach they don't agree with, they're more likely to just go work on something else than start a competitor or jump in to try and change the direction. (In fact, the people most qualified to do so are almost by definition the most busy already.) Maybe just throw a few rocks right before the graduation review, that sort of thing. If you were hoping this email would end with some kind of proposed solution, prepare for disappointment. 
Early winner-picking obviously sucks for the developer who realises that they need to make major changes and has to deal with the extra challenge of maintaining API compatibility and upgradability while doing it. On the other hand, it sucks precisely because that's great for users and operators. A competition model, even assuming that the competition actually arises, just means that the portion of operators who chose the 'wrong' side and their users get hosed when an eventual winner emerges. Potential consequences include delayed interoperability, delayed adoption of new features altogether to avoid the risk, or even permanent lack of interoperability with proprietary solutions being used instead. The only suggestion I can make is one I mentioned in another thread[1]: establish a design principle that the parts of the design which are hard to change (e.g. APIs) must be as simple as possible in order to provide the maximum flexibility of implementation, until such time as both the implementation and the need for more complexity have been validated in the real world. cheers, Zane. [1] http://lists.openstack.org/pipermail/openstack-dev/2014-September/046563.html ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
On 09/24/2014 02:48 PM, Clint Byrum wrote: Excerpts from Robert Collins's message of 2014-09-23 21:14:47 -0700: No one helped me edit this :) http://rbtcollins.wordpress.com/2014/09/24/what-poles-for-the-tent/ I hope I haven't zoned out and just channelled someone else here ;) This sounds like API's are what matters. You did spend some time working with Simon Wardley, didn't you? ;) I think it's a sound argument, but I'd like to banish the term reference implementation from any discussions around what OpenStack, as a project, delivers. It has too many negative feelings wrapped up in it. I also want to call attention to how what you describe feels an awful lot like POSIX to me. Basically offering guarantees of API compatibility, but then letting vendors run wild around and behind it. I'm not sure if that is a good thing, or a bad thing. I do, however, think if we can avoid a massive vendor battle that involves multiple vendors pushing multiple implementations, we will save our companies a lot of money, and our users will get what they need sooner. I like what Rob had to say here, and have expressed similar views. Having competition between implementations is good for everyone (except for the losers) if that competition takes place in a way that shields users and the ecosystem from the aftermath of such competition. That is what standards, defined APIs, whatever we want to call it, is all about. By analogy, competition by electronics companies around who can make the best performing blu-ray player with the most features is a good thing for users and that ecosystem. Competition about whether the ecosystem should use blu-ray or HD DVD, not so much: http://en.wikipedia.org/wiki/High_definition_optical_disc_format_war. This is what I see as the main virtue of the TC blessing things as the one OpenStack way to do X. There is also the potential of efficiency if more people contribute to the same project that is doing X as compared to multiple projects doing X. 
But as we have seen, that efficiency is only realized if X turns out to be the right thing. There is no particular reason to think the TC will be great at picking winners. Blessing APIs, though difficult, would have huge benefit and provide more room for leeway and experimentation. Blessing code, before it has been proven in the real world, is the worst of all worlds when it turns out to be wrong. I believe our scale problems can be addressed by thoughtful decentralization and I hope we move in that direction, and in terms of how many of the pieces needed to run a real cloud we have in our tent, we may have shot too high. But some of the recent proposals to move to an extreme in the other direction would be a mistake IMO. To be important, and be competitive with non-OpenStack cloud solutions, we need to provide a critical mass so that most other interesting things can glom on and form a larger ecosystem. -David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Sahara][Doc] Features overview suggests Cinder to increase reliability
Hi all, In this bug https://bugs.launchpad.net/sahara/+bug/1371360 it's said that Cinder itself has replication, and using it with HDFS will increase replication even more. That said, will mentioning this in the documentation, and also stating that it makes sense to use Cinder when the replication factor for HDFS is 1, fix the bug? Thanks, Sharan Kumar M ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model
On 2014-09-24 13:55:57 -0400 (-0400), Zane Bitter wrote: [...] * Assumption #2: Yawnoc's Law Don't bother Googling that, I just made it up. It's the reverse of Conway's Law: Infra engineers who design governance structures for OpenStack are constrained to produce designs that are copies of the structure of Tempest. I just don't understand why that needs to be the case. Currently, for understandable historic reasons, every project gates against every other project. That makes no sense any more, completely independently of the project governance structure. We should just change it! There is no organisational obstacle to changing how gating works. [...] Note that to a great extent the current proliferation of integration testing was driven from the developers of many projects. The Infra and QA teams didn't just wake up one morning and decide to chuck all the projects in. Rewind a year or two and remember that we had massive amounts of asymmetry in our testing. Project A would implement some new change and Project B developers would get their jobs insta-broken, then come complaining that clearly this meant we should be gating Project A on whether or not Project B works. For a time we had sufficient (human and server) resources to do that, and so it was comparatively cheap to just keep expanding the list of projects which shared a common set of jobs we hoped exercised enough of everything to act as a canary in the coal mine. We're now running into pretty clear scalability limits on the number of development teams whose changes you can effectively test against each other given the realities of nondeterministic failures, breakdowns in cross-group communication, growth in size of integrated releases, tragedy of the commons effects on horizontal efforts/shared resources, external factors like dependency changes, et cetera. 
As a community we should explore solutions (and clearly are) to these underlying problems, but also need to reconsider some old habits that need changing such as tight coupling to the internal APIs and implementation details of other projects... especially if doing so lets us scale back our integration testing, rather than leaning on it harder so that these undesirable development patterns can continue unabated. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Fuel] Documentation process
Hi, I would like to discuss the documentation process and align it with the OpenStack flow. At the moment we add special tags to bugs in Launchpad, which is not optimal: anyone can add or remove tags, so we cannot really enforce a documentation process that way. I suggest switching to the standard workflow that is used by the OpenStack community. All we need is to move the tracking of documentation from Launchpad to Gerrit. This process gives individual developers and the community more control over tracking changes and reflecting them in documentation. Every reviewer checks the commit. If he thinks that the commit requires a documentation update, he will set -1 with the comment message Docs impact required. This will force the author of the patchset to update the commit with a DocImpact commit message. Our documentation team will get all messages with DocImpact from 'git log'. The documentation team will make the documentation, with the author of the patch playing a key role. All other reviewers from the original patch must give their own +1 for the documentation update. Patches in fuel-docs may have the same Change-Id as the original patch. It will allow us to match documentation and patches in Gerrit. More details about the DocImpact flow can be obtained at https://wiki.openstack.org/wiki/Documentation/DocImpact -- Best regards, Sergii Golovatiuk, Skype #golserge IRC #holser ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
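To make the step concrete: once the docs team has commit messages in hand (e.g. retrieved via 'git log'), picking out the DocImpact ones is trivial. A minimal sketch, assuming the (sha, message) pairs have already been parsed out of the git output; `find_doc_impact` is a hypothetical helper name, not anything in Fuel or the OpenStack tooling:

```python
# Minimal sketch: select commits whose messages carry the DocImpact
# flag. Parsing `git log` output into (sha, message) pairs is assumed
# to have happened already; find_doc_impact() is a hypothetical helper.

def find_doc_impact(commits):
    """Return (sha, message) pairs whose message mentions DocImpact."""
    return [(sha, msg) for sha, msg in commits if "DocImpact" in msg]

commits = [
    ("a1b2c3", "Add bonding support\n\nDocImpact\nChange-Id: I123"),
    ("d4e5f6", "Fix typo in comment\n\nChange-Id: I456"),
]
print(find_doc_impact(commits))  # only the first commit matches
```

In practice the same Change-Id line that Gerrit adds could be extracted the same way to pair the fuel-docs patch with the original patch.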
Re: [openstack-dev] [nova] Create an instance with a custom uuid
Hi Joe, Tools like Pumphouse [1] (migrates workloads, e.g. instances, between two OpenStack clouds) would benefit from supporting this (Pumphouse would be able to replicate user instances in a new cloud up to their UUIDs). Are there any known gotchas with support of this feature in REST APIs (in general)? Thanks, Roman [1] https://github.com/MirantisLabs/pumphouse On Wed, Sep 24, 2014 at 10:23 AM, Joe Gordon joe.gord...@gmail.com wrote: What's the use case for this? We should be thorough when making API changes like this. On Wed, Sep 24, 2014 at 6:56 AM, joehuang joehu...@huawei.com wrote: +1. Or at least provide a way to specify an external UUID for the instance, and be able to retrieve the instance through the external UUID, which may be linked to an external system's object. Chaoyi Huang ( joehuang ) From: Pasquale Porreca [pasquale.porr...@dektech.com.au] Sent: 24 September 2014 21:08 To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [nova] Create an instance with a custom uuid Hello I would like to be able to specify the UUID of an instance when I create it. I found this discussion about this matter: https://lists.launchpad.net/openstack/msg22387.html but I could not find any blueprint; anyway, I understood this modification should not pose any particular issue. Would it be acceptable to pass the uuid as metadata, or should I instead modify the api if I want to set the UUID from the novaclient? 
Best regards -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
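Whatever API shape is chosen, a user-supplied UUID would presumably have to be validated before being accepted. A minimal stdlib sketch of that check (this is illustrative only, not Nova's actual validation code):

```python
# Minimal sketch, not the actual Nova API: validate a user-supplied
# instance UUID before accepting it, using only the stdlib.
import uuid

def is_valid_uuid(value):
    """Return True if `value` parses as a canonical UUID string."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (ValueError, AttributeError, TypeError):
        return False

print(is_valid_uuid("64f067bd-ce03-4f04-a354-7188a4828e8e"))  # True
print(is_valid_uuid("not-a-uuid"))                            # False
```

Comparing against the canonical form rejects variants like braced or URN-style UUIDs, which keeps the stored identifier unambiguous.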
Re: [openstack-dev] [nova] Create an instance with a custom uuid
On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka rpodoly...@mirantis.com wrote: Are there any known gotchas with support of this feature in REST APIs (in general)? I'd be worried about relying on a user-defined attribute in that use case; that's ripe for a DoS. Since these are cloud-unique, I wouldn't even need to be in your project to block you from creating that clone instance if I knew your UUID. dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] PTL Candidacy
confirmed On 24/09/14 04:28 PM, Devananda van der Veen wrote: Hi all, I would like to announce my candidacy for the PTL role of the Bare Metal Provisioning program. I have been the PTL of Ironic since I began the project at the Havana summit, and I am partly to blame for the baremetal driver that Ironic was forked from. (If you've touched that code, you will share in my joy when it is finally deleted from Nova!) In that time, I have been delighted to work with many people whose knowledge of the innards of hardware management and provisioning is far deeper than mine, and I have come to rely on their experience to inform the project's course. Without each of you, Ironic would not be what it is today. Thank you! As we move into the Kilo cycle, I would like us to remember this purpose: provisioning workloads on physical machines, driven by the principles of cloud computing, which I'll sum up as: abstraction, automation, and elasticity. Where a proposed feature would violate any of these principles, I do not believe it will benefit the project. In the Juno cycle, we have seen a large (but not unexpected) influx of hardware drivers. This, I believe, is a result of the stability of the code and the narrow focus of the core of the project. This focus has discouraged certain uses and, perhaps, alienated some contributors whose needs were not well served -- and I'm OK with that. I believe that all OpenStack projects (like all software) must have a clear and limited purpose, and, especially early in their life cycle, resist scope creep. There is plenty of room for neighboring projects within the bare metal space. On the one hand, vendors are adding functionality to their drivers that extends beyond the existing core capabilities. We must work with them to stabilize and abstract this into common APIs. On the other hand, we currently require operators to prepare a machine before it can be used with Ironic or when changing the role it fulfils. 
We have seen a lot of interest in expanding the project's scope to increase automation of these preparation and decommissioning phases, and I believe we will see incremental progress here during Kilo through the exposure of hardware capabilities that may be changed just-in-time during provisioning. The topics of discovery and inventory management also consistently arise, and we've discussed these several times recently. As a result, my position on this has changed over the last two years - I no longer believe this has a place within a *provisioning* service, but it is a necessary component in fleet management. While I believe that Ironic can already be integrated with such systems, I do not know of any agentless inventory management systems in the open source ecosystem. We must continue to integrate with other OpenStack projects, and the ongoing Big Tent discussion does not change our goals here (though it may change the processes we go through to get there). I believe that Horizon integration will be very important in Kilo, as well as better operational monitoring through statsd / Ceilometer integration, and we should add a notification bus between Ironic and Nova to reduce the time lag before resource changes are visible to users. I would also like to promote a separate effort to use Ironic in a stand-alone fashion. I believe this will meet the needs of an important set of users who do not require the full abstraction which OpenStack provides. I believe we also need to grow the breadth of knowledge within the core team through hands-on usage of Ironic in production deployments. Rackspace's contributions based on running the OnMetal service during Juno have been immeasurably beneficial to the project, and I would like more direct operator input during Kilo. 
Sincerely, Devananda ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On Wed, Sep 24, 2014 at 11:05 AM, David Stanek dsta...@dstanek.com wrote: On Wed, Sep 24, 2014 at 11:42 AM, Dean Troyer dtro...@gmail.com wrote: On Tue, Sep 23, 2014 at 5:18 PM, Jay Pipes jaypi...@gmail.com wrote: Yes, I'd be willing to head up the working group... or at least participate in it. I'll bring an API consumer's perspective. I would love to participate too. I have an interest in RESTful API design and the surrounding architecture. +1. Add me to the loop for the same reasons. Carl -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek www: http://dstanek.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] [All] API standards working group
On Wed, Sep 24, 2014 at 12:18 AM, Jay Pipes jaypi...@gmail.com wrote: Yes, I'd be willing to head up the working group... or at least participate in it. I am certainly interested, count me in. Chmouel ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][L3] Team Meeting Thursday at 1500 UTC
The Neutron L3 Subteam will meet tomorrow at the regular time and place. The agenda and details are posted [1]. I think the RC1 ship will have sailed for most potential fixes by then so I'd like to take some time during the meeting tomorrow to chat about the work that is coming up for Kilo. There is some work I'd like to accomplish quickly to make way for other Kilo work, namely in the L3 agent. We'll need a spec posted very soon for it. All of those blueprints that were postponed to Kilo will be up for discussion. Carl [1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance][Nova] Does glance support upper case key/value pair?
Yes, IIRC, these extra property keys will be converted to lower case, see https://github.com/openstack/glance/blob/master/glance/common/utils.py#L236 What's the nova bug you're talking about? Cheers. On 24/09/14 16:21, Chen CH Ji wrote: Got the following result when doing bug analysis on nova. Was this key/value uppercase-to-lowercase conversion done on purpose, or is it a restriction? It might be related to a nova defect, thanks

jichen@cloudcontroller:~$ glance image-update --property Key1=Value2 --purge-props 64f067bd-ce03-4f04-a354-7188a4828e8e
+-----------------+--------+
| Property        | Value  |
+-----------------+--------+
| Property 'key1' | Value2 |
+-----------------+--------+

Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -- ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
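The behaviour the linked glance/common/utils.py helper implements amounts to folding property keys to lower case before storage. An illustrative sketch of that effect (this is not Glance's actual code, just the behaviour it produces for the example above):

```python
# Illustrative sketch (not Glance's actual implementation) of the
# behaviour seen in the glance image-update output: extra property
# keys are folded to lower case, while values are left untouched.

def normalize_properties(props):
    """Lower-case all property keys, keeping values as-is."""
    return {key.lower(): value for key, value in props.items()}

print(normalize_properties({"Key1": "Value2"}))  # {'key1': 'Value2'}
```

This explains the output above: the user passes --property Key1=Value2 and the stored property comes back as 'key1'.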
Re: [openstack-dev] [nova] Create an instance with a custom uuid
On 9/24/2014 3:17 PM, Dean Troyer wrote: On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka rpodoly...@mirantis.com mailto:rpodoly...@mirantis.com wrote: Are there any known gotchas with support of this feature in REST APIs (in general)? I'd be worried about relying on a user-defined attribute in that use case, that's ripe for a DOS. Since these are cloud-unique I wouldn't even need to be in your project to block you from creating that clone instance if I knew your UUID. dt -- Dean Troyer dtro...@gmail.com mailto:dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev We talked about this a bit before approving the 'enforce-unique-instance-uuid-in-db' blueprint [1]. As far as we knew there was no one using null instance UUIDs or duplicates for that matter. The instance object already enforces that the UUID field is unique but the database schema doesn't. I'll be re-proposing that for Kilo when it opens up. If it's a matter of tagging an instance, there is also the tags blueprint [2] which will probably be proposed again for Kilo. [1] https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db [2] https://blueprints.launchpad.net/nova/+spec/tag-instances -- Thanks, Matt Riedemann ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
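The gap Matt describes - the object layer enforcing uniqueness while the database schema does not - is exactly what a UNIQUE constraint at the DB layer closes. A minimal sketch using plain sqlite3 rather than Nova's real schema (table and column names here are illustrative, not Nova's actual DDL):

```python
# Sketch of what a DB-level uniqueness guarantee on instance UUIDs
# looks like, using stdlib sqlite3 instead of Nova's real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, "
             "uuid TEXT NOT NULL UNIQUE)")

conn.execute("INSERT INTO instances (uuid) VALUES (?)",
             ("64f067bd-ce03-4f04-a354-7188a4828e8e",))

# A second insert with the same UUID is rejected by the schema itself,
# regardless of what the application layer does.
try:
    conn.execute("INSERT INTO instances (uuid) VALUES (?)",
                 ("64f067bd-ce03-4f04-a354-7188a4828e8e",))
except sqlite3.IntegrityError as exc:
    print("duplicate rejected:", exc)
```

The NOT NULL half of the constraint also rules out the null-UUID rows the blueprint discussion mentions.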
[openstack-dev] [TripleO] PTL Candidacy
I'd like to announce my candidacy for TripleO PTL. I think most folks who have worked in the TripleO community probably know me. For those who don't, I work for Red Hat, and over the last year and a half that I've been involved with TripleO I've worked in different areas. My focus has been on improvements to the frameworks to support things such as other distros, packages, and offering deployment choices. I've also tried to focus on stabilization and documentation as well. I stand by what I said in my last candidacy announcement[1], so I'm not going to repeat all of that here :-). One of the reasons I've been so active in reviewing changes to the project is because I want to help influence the direction and move progress forward for TripleO. The spec process was new for TripleO during the Juno cycle, and I also helped define that. I think that process is working well and will continue to evolve during Kilo as we find what works best. The TripleO team has made a lot of great progress towards full HA deployments, CI improvements, rearchitecting Tuskar as a deployment planning service, and driving features in Heat to support our use cases. I support this work continuing in Kilo. I continue to believe in TripleO's mission to use OpenStack itself. I think the feedback provided by TripleO to other projects is very valuable. Given the complexity to deploy OpenStack, TripleO has set a high bar for other integrated projects to meet to achieve this goal. The resulting new features and bug fixes that have surfaced as a result has been great for all of OpenStack. Given that TripleO is the Deployment program though, I also support alternative implementations where they make sense. Those implementations may be in TripleO's existing projects themselves, new projects entirely, or pulling in existing projects under the Deployment program where a desire exists. 
Not every operator is going to deploy OpenStack the same way, and some organizations already have entrenched and accepted tooling. To that end, I would also encourage integration with other deployment tools. Puppet is one such example and already has wide support in the broader OpenStack community. I'd also like to see TripleO support different update mechanisms potentially with Heat's SoftwareConfig feature, which didn't yet exist when TripleO first defined an update strategy. The tripleo-image-elements repository is a heavily used part of our process and I've seen some recurring themes come up that I'd like to see addressed. Element idempotence seems to often come up, as well as the ability to edit already built images. I'd also like to see our elements more generally applicable to installing OpenStack vs. just installing OpenStack in an image building context. Personally, I support these features, but mostly, I'd like to drive to a consensus on those points during Kilo. I'd love to see more people developing and using TripleO where they can and providing feedback. To enable that, I'd like for easier developer setups to be a focus during Kilo so that it's simpler for people to contribute without such a large initial learning curve investment. Downloadable prebuilt images could be one way we could make that process easier. There have been a handful of mailing list threads recently about the organization of OpenStack and how TripleO/Deployment may fit into that going forward. One thing is clear, the team has made a ton of great progress since its inception. I think we should continue on the mission of OpenStack owning its own production deployment story, regardless of how programs may be organized in the future, or what different paths that story may take. Thanks for your consideration! 
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-April/031772.html -- -- James Slagle -- ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Having a problem regarding git review.
Thanks Julie! It worked. Actually I had a different email address as a foundation member and for my Gerrit account. At last I could submit my first fix. Sorry for the late reply. And thank you very much once again for your help. Thanks Regards, Tahmina Ahmed On Tue, Sep 23, 2014 at 7:19 PM, Julie Pichon jpic...@redhat.com wrote: On 24/09/14 07:48, Tahmina Ahmed wrote: Hi, I am a newbie to openstack. I am just trying to submit my first fix but I am having a problem updating my contact information. Without contact information I cannot issue the git review command. When I put my contact information in https://review.openstack.org/#/settings/contact it shows Code Review - Error Server Error Cannot store contact information FYI : I have removed all the special characters from my contact information but still it is showing the same error. Has anyone faced this problem? Hello Tahmina, and welcome! This reminds me of the bug at [1], did you join the Foundation before creating your Gerrit account? I believe this needs to be done first (and make sure the email addresses match). Hopefully this helps, Julie [1] https://bugs.launchpad.net/openstack-ci/+bug/1346833 Thanks Regards, Tahmina ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [sahara] team meeting Sept 25 1800 UTC
Hi folks, We'll be having the Sahara team meeting as usual in #openstack-meeting-alt channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140925T18 -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Interaction with Barbican and Keystone
Hi Adam, For me the thing needs to be user friendly. So my main question is how do things look in Horizon? Will there just be a popup saying Establish Trust (Y/N)? Like you, I am wondering how other teams are handling that... Thanks, German -Original Message- From: Adam Harwell [mailto:adam.harw...@rackspace.com] Sent: Thursday, September 18, 2014 3:16 PM To: OpenStack Development Mailing List (not for usage questions) Cc: sbaluk...@bluebox.net; Doug Wiegley; Eichberger, German; Adam Young; Balle, Susanne; Douglas Mendizabal Subject: [openstack-dev] [Neutron][LBaaS] Interaction with Barbican and Keystone I've made an attempt at mapping out exactly how Neutron Advanced Services will communicate with Barbican to retrieve Certificate/Key info for TLS purposes. This is far from solidified, since there are some issues that I'll go over momentarily. First, here is a *high level* diagram of the process flow: http://i.imgur.com/VQcbGJS.png (I use the term hijack purposefully) And here is a more detailed flow, including each and every operation: http://goo.gl/Wc8oIj There are some valid concerns about this workflow, and at least one issue that may be a blocker. Following are the two main issues that I've run into: 1) Possible blocker: Keystone won't allow Trust creation using a Trust Token. Example: A user creates a Trust for Heat, and gives Heat their TrustID. The user configures Heat to spin up Load Balancers. Heat contacts LBaaS on behalf of the user with a Trust Token. LBaaS contacts Keystone to create a Trust using the token received from Heat. LBaaS would be unable to create a Trust because the Token we're trying to use doesn't have the ability to create Trusts, and our operation would fail. 2) Security concern: If the Neutron/LBaaS config contains a Service Account's user/pass and Database URI/user/pass, then anyone with access to the config file would be able to connect to the DB, pull out TrustIDs, and use the Neutron Service Account to execute them. 
Essentially, the only difference between storing private keys directly in the database and storing them in Barbican is that there's one additional (trivial) step to get the key data (contact the Barbican API). The keystone folks I talked to (primarily Adam Young) suggested that the solution to issue #1 is to require the user to create the Trust beforehand in Keystone, then pass the TrustID to Neutron/LBaaS along with the ContainerID. This could originally be based on a template we provide to the user, probably in the form of a suggested JSON body and keystone URI. Eventually, there could/should/might be a system in place to allow services to pre-define a Trust with Keystone and the user would just need to tell Keystone that they accept that Trust. Either way, this would require action by the user before they could create a TLS Terminated LB. I don't particularly LIKE that option, but if 90% of our users come through Horizon anyway, it should be as simple as having Horizon pop up a Yes/No box prompting to enable the Trust when the user creates their first TLS LB. As for issue #2, I don't really have a solution to propose. There was some talk about the Postern project, but there isn't really any usable code yet, or even solid specs from what I can tell -- it looks like the project was proposed and never went past the PoC stage. https://github.com/cloudkeep/postern I know there are some other teams looking into very similar issues, so I have a bit of research to do on that front, but in the meantime, what are people's thoughts? I've cc'd a few of the people who were already in the IRC version of this discussion (I may have missed anyone who wasn't already in my address book, sorry), but I'd love to hear from anyone who has ideas on the subject. --Adam https://keybase.io/rm_you ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
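For reference, the "suggested JSON body and keystone URI" template mentioned above could be a sketch along these lines against the Identity v3 OS-TRUST API (POST /v3/OS-TRUST/trusts). All IDs and the role name below are placeholders, and the exact field set should be checked against the Keystone documentation rather than taken from this sketch:

```json
{
    "trust": {
        "trustor_user_id": "<end-user id>",
        "trustee_user_id": "<lbaas service account id>",
        "project_id": "<end-user project id>",
        "impersonation": true,
        "roles": [{"name": "<role to delegate>"}]
    }
}
```

The user would fill in the trustee with the service account ID provided in the template, submit it to Keystone, and then hand the resulting TrustID to Neutron/LBaaS along with the ContainerID.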
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
Excerpts from Steven Dake's message of 2014-09-23 15:40:29 -0700: Hi folks, I'm pleased to announce the development of a new project Kolla which is Greek for glue :). Kolla has a goal of providing an implementation that deploys OpenStack using Kubernetes and Docker. This project will begin as a StackForge project separate from the TripleO/Deployment program code base. Our long term goal is to merge into the TripleO/Deployment program rather than create a new program. Docker is a container technology for delivering hermetically sealed applications and has about 620 technical contributors [1]. We intend to produce docker images for a variety of platforms beginning with Fedora 20. We are completely open to any distro support, so if folks want to add a new Linux distribution to Kolla please feel free to submit patches :) Kubernetes at the most basic level is a Docker scheduler produced by and used within Google [2]. Kubernetes has in excess of 100 technical contributors. Kubernetes is more than just a scheduler, it provides additional functionality such as load balancing and scaling and has a significant roadmap. You had me at Docker.. Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers require active controllers, not just imperative orchestration. But the bit above has me a bit nervous... I'm not exactly ignorant of what declarative orchestration is, and of late I've found it to be more trouble than I had previously imagined it would be. All of the features above are desirable in any application, whether docker managed or not, and have been discussed for Heat specifically. 
I'm not entirely sure I want these things in my OpenStack deployment, but it will be interesting to see if there are operators who want them bad enough to deal with the inherent complexities of trying to write such a thing for an application as demanding as OpenStack. Anyway, I would definitely be interested in seeing if we can plug it into the interfaces we already have for image building, config file and system state management. Thanks for sharing, and see you in the deployment trenches. :) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
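The "declarative primitives" and "active controllers" being debated above boil down to a reconciliation loop: repeatedly compare observed state against desired state and emit corrective actions, rather than running a fixed imperative script once. A minimal illustrative sketch of that pattern (not Kubernetes code; all names are made up):

```python
# Illustrative sketch of a declarative reconciliation loop: an active
# controller compares desired state with observed state and emits the
# corrective actions needed to converge. Container names are invented
# for the example; this is not the Kubernetes API.

def reconcile(desired_replicas, running):
    """Return the actions needed to converge `running` on the desired count."""
    actions = []
    while len(running) < desired_replicas:
        name = "container-%d" % len(running)
        running.append(name)
        actions.append(("start", name))
    while len(running) > desired_replicas:
        actions.append(("stop", running.pop()))
    return actions
```

A controller runs this loop continuously, which is where the self-healing comes from: a crashed container makes observed state fall below desired state, and the next pass starts a replacement without any operator-issued command.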
Re: [openstack-dev] [Neutron][LBaaS] Migrations in feature branch
Apparently I mistakenly thought that the feature branch would form a separate optional component. If it will eventually be a part of neutron, then it's fine. Thanks, Eugene. On Wed, Sep 24, 2014 at 1:30 PM, Salvatore Orlando sorla...@nicira.com wrote: Relying again on automatic schema generation could be error-prone. It can only be enabled globally, and does not work when models are altered if the table for the model being altered already exists in the DB schema. I don't think it would be a big problem to put these migrations in the main sequence once the feature branch is merged back into master. Alembic unfortunately does not yet do a great job of maintaining multiple timelines. Even if only a single migration branch is supported, in theory one could have a separate alembic environment for the feature branch, but that in my opinion just creates the additional problem of handling a new environment, and does not solve the initial problem of re-sequencing migrations. Re-sequencing at merge time is not going to be a problem in my opinion. However, keeping all the lbaas migrations chained together will help. You can also do as Henry suggests, but that option has the extra (possibly negligible) cost of squashing all migrations for the whole feature branch at merge time. As an example:

MASTER  --- X - X+1 - ... - X+n
                               \
FEATURE                         \- Y - Y+1 - ... - Y+m

At every rebase, the migration timeline for the feature branch could be rearranged as follows:

MASTER  --- X - X+1 - ... - X+n ---
                                   \
FEATURE                             \- Y=X+n - Y+1 - ... - Y+m=X+n+m

And therefore when the final merge into master comes, all the migrations in the feature branch can be inserted in sequence on top of master's HEAD. I have not tried this, but I reckon that conceptually it should work.
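Since each alembic migration records its parent via `down_revision`, the re-sequencing Salvatore describes amounts to re-pointing the first feature migration at master's new HEAD. A minimal sketch of that bookkeeping, using made-up revision ids (this models the `down_revision` links only; it does not invoke alembic itself):

```python
# Model the feature branch as rev -> down_revision links. Revision ids
# here are invented for illustration; real alembic revisions are hashes.
feature_links = {"y": "x", "y1": "y", "y2": "y1"}

def resequence(links, old_base, new_base):
    """Re-point migrations that descended from old_base onto new_base."""
    return {rev: (new_base if parent == old_base else parent)
            for rev, parent in links.items()}

# After rebasing onto a master whose HEAD moved from "x" to "x_n",
# only the first feature migration's parent changes; the rest of the
# feature chain stays intact.
rebased = resequence(feature_links, "x", "x_n")
```

In practice this is a one-line edit to the `down_revision` attribute of the first feature migration, done once per rebase and once at the final merge.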
Salvatore On 24 September 2014 08:16, Kevin Benton blak...@gmail.com wrote: If these are just feature branches and they aren't intended to be deployed for long life cycles, why don't we just skip the db migration and enable auto-schema generation inside of the feature branch? Then a migration can be created once it's time to actually merge into master. On Tue, Sep 23, 2014 at 9:37 PM, Brandon Logan brandon.lo...@rackspace.com wrote: Well the problem with resequencing on a merge is that a code change for the first migration must be added first and merged into the feature branch before the merge is done. Obviously this takes review time unless someone of authority pushes it through. We'll run into this same problem on rebases too if we care about keeping the migration sequenced correctly after rebases (which we don't have to, only on a merge do we really need to care). If we did what Henry suggested in that we only keep one migration file for the entire feature, we'd still have to do the same thing. I'm not sure that buys us much other than keeping the feature's migration all in one file. I'd also say that code in master should definitely NOT be dependent on code in a feature branch, much less a migration. This was a requirement of the incubator as well. So yeah this sounds like a problem but one that really only needs to be solved at merge time. There will definitely need to be coordination with the cores when merge time comes. Then again, I'd be a bit worried if there wasn't since a feature branch being merged into master is a huge deal. Unless I am missing something I don't see this as a big problem, but I am highly capable of being blind to many things. Thanks, Brandon On Wed, 2014-09-24 at 01:38 +, Doug Wiegley wrote: Hi Eugene, Just my take, but I assumed that we’d re-sequence the migrations at merge time, if needed. Feature branches aren’t meant to be optional add-on components (I think), nor are they meant to live that long. 
Just a place to collaborate and work on a large chunk of code until it’s ready to merge. Though exactly what those merge criteria are is also yet to be determined. I understand that you’re raising a general problem, but given lbaas v2’s state, I don’t expect this issue to cause many practical problems in this particular case. This is also an issue for the incubator, whenever it rolls around. Thanks, doug On September 23, 2014 at 6:59:44 PM, Eugene Nikanorov (enikano...@mirantis.com) wrote: Hi neutron and lbaas folks. Recently I briefly looked at one of the lbaas patches proposed into the feature branch. I see the migration IDs there are lined up in the general migration sequence. I think something is definitely wrong with this approach, as feature-branch components are optional, and the master branch can't depend on revision IDs in the feature branch (as we moved to unconditional migrations). So far the solution to this problem that I see is to have a separate migration script,
Re: [openstack-dev] [Neutron][LBaaS] Interaction with Barbican and Keystone
Yeah, I was hoping for something like that... Essentially, Horizon would need to detect that particular response and be prepared to make a simple Yes/No dialog pop up to create that Trust, then continue with the original operation again automatically afterwards. That said, I have not looked at programming Horizon interfaces at all yet, so I don't know how feasible that is. --Adam https://keybase.io/rm_you On 9/24/14 5:02 PM, Eichberger, German german.eichber...@hp.com wrote: Hi Adam, For me, the main thing is that it needs to be user friendly. So my main question is how things will look in Horizon. Will there just be a popup saying Establish Trust (Y/N)? I am wondering, like you, how other teams are handling that... Thanks, German -Original Message- From: Adam Harwell [mailto:adam.harw...@rackspace.com] Sent: Thursday, September 18, 2014 3:16 PM To: OpenStack Development Mailing List (not for usage questions) Cc: sbaluk...@bluebox.net; Doug Wiegley; Eichberger, German; Adam Young; Balle, Susanne; Douglas Mendizabal Subject: [openstack-dev] [Neutron][LBaaS] Interaction with Barbican and Keystone I've made an attempt at mapping out exactly how Neutron Advanced Services will communicate with Barbican to retrieve Certificate/Key info for TLS purposes. This is far from solidified, since there are some issues that I'll go over momentarily. First, here is a *high level* diagram of the process flow: http://i.imgur.com/VQcbGJS.png (I use the term hijack purposefully) And here is a more detailed flow, including each and every operation: http://goo.gl/Wc8oIj There are some valid concerns about this workflow, and at least one issue that may be a blocker. Following are the two main issues that I've run into: 1) Possible blocker: Keystone won't allow Trust creation using a Trust Token. Example: A user creates a Trust for Heat, and gives Heat their TrustID. The user configures Heat to spin up Load Balancers. Heat contacts LBaaS on behalf of the user with a Trust Token.
LBaaS contacts Keystone to create a Trust using the token received from Heat. LBaaS would be unable to create a Trust because the Token we're trying to use doesn't have the ability to create Trusts, and our operation would fail. 2) Security concern: If the Neutron/LBaaS config contains a Service Account's user/pass and Database URI/user/pass, then anyone with access to the config file would be able to connect to the DB, pull out TrustIDs, and use the Neutron Service Account to execute them. Essentially, the only difference between storing private keys directly in the database and storing them in Barbican is that there's one additional (trivial) step to get the key data (contact the Barbican API). The keystone folks I talked to (primarily Adam Young) suggested that the solution to issue #1 is to require the user to create the Trust beforehand in Keystone, then pass the TrustID to Neutron/LBaaS along with the ContainerID. This could originally be based on a template we provide to the user, probably in the form of a suggested JSON body and keystone URI. Eventually, there could/should/might be a system in place to allow services to pre-define a Trust with Keystone and the user would just need to tell Keystone that they accept that Trust. Either way, this would require action by the user before they could create a TLS Terminated LB. I don't particularly LIKE that option, but if 90% of our users come through Horizon anyway, it should be as simple as having Horizon pop up a Yes/No box prompting to enable the Trust when the user creates their first TLS LB. As for issue #2, I don't really have a solution to propose. There was some talk about the Postern project, but there isn't really any usable code yet, or even solid specs from what I can tell -- it looks like the project was proposed and never went past the PoC stage. 
https://github.com/cloudkeep/postern I know there are some other teams looking into very similar issues, so I have a bit of research to do on that front, but in the meantime, what are people's thoughts? I've cc'd a few of the people who were already in the IRC version of this discussion (I may have missed anyone who wasn't already in my address book, sorry), but I'd love to hear from anyone who has ideas on the subject. --Adam https://keybase.io/rm_you ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
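The "suggested JSON body and keystone URI" template Adam mentions could look something like the following sketch against Keystone's v3 OS-TRUST API. This is a guess at the template's shape, not an agreed-upon artifact; all ids, the hostname, and the role name are placeholders:

```python
import json

# Sketch of a trust-creation template the service could hand to users.
# The endpoint path follows Keystone's v3 OS-TRUST API; every id, the
# hostname, and the role name below are placeholder assumptions.

KEYSTONE_TRUST_URI = "https://keystone.example.com:5000/v3/OS-TRUST/trusts"

trust_request = {
    "trust": {
        "trustor_user_id": "<the user's id>",
        "trustee_user_id": "<the LBaaS service account's id>",
        "project_id": "<the user's project id>",
        "impersonation": True,
        # scope the delegation down to what certificate retrieval needs
        "roles": [{"name": "<some minimal role>"}],
    }
}

body = json.dumps(trust_request)
```

Per issue #1, the user would POST this body with a regular (non-trust) token, then hand the returned trust id to Neutron/LBaaS alongside the ContainerID.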
Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
Steven, I have to ask: what is the motivation, and what benefits do we get from integrating Kubernetes into OpenStack? It would be really useful if you could elaborate and outline some use cases and benefits OpenStack and Kubernetes can gain. /Alan From: Steven Dake [mailto:sd...@redhat.com] Sent: September-24-14 7:41 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker On 09/24/2014 10:12 AM, Joshua Harlow wrote: Sounds like an interesting project/goal, and it will be interesting to see where this goes. A few questions/comments: How much golang will people be exposed to with this addition? Joshua, I expect very little. We intend to use Kubernetes as an upstream project, rather than something we contribute to directly. Seeing that this could be the first project using Go, it will be interesting to see where this goes (since afaik none of the infra support exists, and people aren't likely to be familiar with Go vs. Python in the openstack community overall). What are your thoughts on how this will affect the existing openstack container effort? I don't think it will have any impact on the existing Magnum project. At some point, if Magnum implements scheduling of docker containers, we may add support for Magnum in addition to Kubernetes, but it is impossible to tell at this point. I don't want to derail either project by trying to force them together unnaturally so early. I see that kubernetes isn't exactly a small project either (~90k LOC, for those who use these types of metrics), so I wonder how that will affect people getting involved here, aka, who has the resources/operators/other... available to actually set up/deploy/run kubernetes, when operators are likely still just struggling to run openstack itself (at least operators are getting used to the openstack warts; a new set of kubernetes warts would not be so helpful). Yup, it is fairly large in size.
Time will tell if this approach will work. This is an experiment, as Robert and others on the thread have pointed out :). Regards -steve On Sep 23, 2014, at 3:40 PM, Steven Dake sd...@redhat.com wrote: Hi folks, I'm pleased to announce the development of a new project, Kolla, which is Greek for glue :). Kolla has a goal of providing an implementation that deploys OpenStack using Kubernetes and Docker. This project will begin as a StackForge project separate from the TripleO/Deployment program code base. Our long term goal is to merge into the TripleO/Deployment program rather than create a new program. Docker is a container technology for delivering hermetically sealed applications and has about 620 technical contributors [1]. We intend to produce Docker images for a variety of platforms, beginning with Fedora 20. We are completely open to any distro support, so if folks want to add a new Linux distribution to Kolla, please feel free to submit patches :) Kubernetes at the most basic level is a Docker scheduler produced by and used within Google [2]. Kubernetes has in excess of 100 technical contributors. Kubernetes is more than just a scheduler; it provides additional functionality such as load balancing and scaling, and has a significant roadmap. The #tripleo channel on Freenode will be used for Kolla developer and user communication. Even though we plan to become part of the Deployment program long term, as we experiment we believe it is best to hold a separate weekly one-hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3]. This project has been discussed with the current TripleO PTL (Robert Collins) and he seemed very supportive and agreed with the organization of the project outlined above. James Slagle, a TripleO core developer, has kindly offered to liaise between Kolla and the broader TripleO community. I personally feel it is necessary to start from a nearly empty repository when kicking off a new project.
As a result, there is limited code in the repository [4] at this time. I suspect folks will start cranking out a kick-ass implementation once the Kolla/Stackforge integration support is reviewed by the infra team [5]. The initial core team is composed of Steven Dake, Ryan Hallisey, James Lebocki, Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel. The core team will be reviewed every 6 weeks to add fresh developers. Please join the core team in designing and inventing this rockin' new technology! Regards -steve ~~ [1] https://github.com/docker/docker [2] https://github.com/GoogleCloudPlatform/kubernetes [3] https://wiki.openstack.org/wiki/Meetings/Kolla [4] https://github.com/jlabocki/superhappyfunshow [5] https://review.openstack.org/#/c/122972/ ___ OpenStack-dev mailing list
[openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)
Hi, so, I'd really like to see https://review.openstack.org/#/c/121663/ merged in rc1. That patch is approved right now. However, it depends on https://review.openstack.org/#/c/119521/, which is not approved. 119521 fixes a problem where we make five RPC calls per call to get_network_info, which is an obvious efficiency problem. Talking to Vish, who is the author of these patches, it sounds like the efficiency issue is a pretty big deal for users of nova-network, and he'd like to see 119521 land in Juno. I think that means he's effectively arguing that the bug is release critical. On the other hand, it's only a couple of days until rc1, so we're trying to be super conservative about what we land now in Juno. So... I'd like to see a bit of a conversation on what call we make here. Do we land 119521? Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
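The cost being fixed (five RPC round trips on every get_network_info call) can be sketched schematically. The method names below are invented for illustration and are not nova-network's actual RPC interface:

```python
# Schematic sketch of why consolidating RPC calls matters: each call() is
# a network round trip to the RPC server, so five calls per instance adds
# up quickly. Method names here are hypothetical, not nova-network's API.

class CountingRPC:
    """Stands in for an RPC client; counts round trips to the server."""
    def __init__(self):
        self.round_trips = 0

    def call(self, method, **kwargs):
        self.round_trips += 1
        return {}  # a real client would return the server's response

def get_network_info_naive(rpc, instance_id):
    # one RPC per piece of network data: five round trips per call
    for method in ("get_vifs", "get_fixed_ips", "get_floating_ips",
                   "get_networks", "get_dns_entries"):
        rpc.call(method, instance_id=instance_id)

def get_network_info_batched(rpc, instance_id):
    # a single consolidated RPC returns everything in one round trip
    rpc.call("get_instance_nw_info", instance_id=instance_id)
```

The consolidated form trades one slightly larger response payload for an 80% reduction in round trips, which is the kind of win that justifies arguing release criticality for users with many instances.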
Re: [openstack-dev] [Zaqar] The horse is dead. Long live the horse.
+1 From: Gordon Sim [g...@redhat.com] Sent: Wednesday, September 24, 2014 10:26 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Zaqar] The horse is dead. Long live the horse. On 09/24/2014 06:07 PM, Clint Byrum wrote: I just wanted to commend Flavio Percoco and the Zaqar team for maintaining poise and being excellent citizens of OpenStack whilst being questioned intensely by the likes of me, and others. +1 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev