Re: [openstack-dev] [Fuel] removing single mode
Definitely a fuel spec is needed :)

On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L e...@mirantis.com wrote:

Hi Andrew, I have some comments regarding your action items:

2) Removing simple mode from the ui and tests
3) Removing simple mode support from nailgun (maybe we leave it) and cli

We shouldn't do it, because nailgun should handle both versions of cluster. What we have to do here is to use openstack.yaml to keep all possible modes. For the new release there will be only ha; to manage previous releases we have to create data migrations in nailgun to create the field with modes, i.e. multinode and ha. Fixes for the ui are required too; I think it's mostly related to the wizard 'mode' tab, where the user can choose an ha or non-ha cluster: in case of a new release there should be only ha, and in case of old releases there should be ha and multinode. Thanks,

On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward xar...@gmail.com wrote:

Started a new thread so that we don't hijack the older thread.

as Andrew, will you work on it in 6.0? What are remaining items there? Also, it might affect our tests - simple mode runs faster so we use it for the smoke ISO test. Anastasia, please confirm that we can switch smoke to a one-ha-controller model, or even drop smoke at all and use BVT only (running CentOS 3 HA controllers and same with Ubuntu).

The primary reason that we haven't disabled single yet was due to [0], where we were having problems adding additional controllers. With the changes to galera and rabbit clustering it appears that we ended up fixing it already.
The remaining issues are:
1) Ensuring we have good test coverage for the cases we expect to support [1]
2) Removing simple mode from the ui and tests
3) Removing simple mode support from nailgun (maybe we leave it) and cli
4) Updating documentation

[0] https://bugs.launchpad.net/fuel/+bug/1350266
[1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7

--
Andrew
Mirantis
Ceph community

--
Mike Scherbakov
#mihgen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
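Evgeniy's openstack.yaml suggestion could be sketched roughly like this. Note this is an illustrative fragment with made-up field names, not the actual nailgun release schema:

```
# Hypothetical excerpt from openstack.yaml: each release advertises
# which deployment modes it still supports.
- fields:
    name: "New release"
    modes: ["ha_compact"]                # new releases: HA only
- fields:
    name: "Previously shipped release"
    modes: ["ha_compact", "multinode"]   # older releases keep both,
                                         # filled in by a data migration
```

The UI wizard would then populate its 'mode' tab from this per-release list instead of a hardcoded pair.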
Re: [openstack-dev] [Fuel] Pre-5.1 and master builds ISO are available for download
Igor, thanks a lot for improving UX over it - this table allows me to see which ISO passed verification tests.

On Mon, Aug 25, 2014 at 7:54 PM, Vladimir Kuklin vkuk...@mirantis.com wrote:

I would also like to add that you can use our library called devops along with the system tests we use for QA and CI. These tests use libvirt and kvm so that you can easily fire up an environment with a specific configuration (Centos/Ubuntu, Nova/Neutron, Ceph/Swift and so on). All the documentation on how to use this library is here: http://docs.mirantis.com/fuel-dev/devops.html. If you find any bugs or gaps in the documentation, please feel free to file bugs at https://launchpad.net/fuel.

On Mon, Aug 25, 2014 at 6:39 PM, Igor Shishkin ishish...@mirantis.com wrote:

Hi all, along with building your own ISO following the instructions [1], you can always download a nightly build [2] and run it, for example by using the virtualbox scripts [3]. For your convenience, you can see a build status table on CI [4]. The first tab now refers to pre-5.1 builds, and the second to master builds. The BVT column stands for Build Verification Test, which is essentially a full HA deployment test. Currently pre-5.1 and master builds are actually built from the same master branch. As soon as we call for Hard Code Freeze, pre-5.1 builds will be reconfigured to use the stable/5.1 branch. Thanks,

[1] http://docs.mirantis.com/fuel-dev/develop/env.html#building-the-fuel-iso
[2] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
[3] https://github.com/stackforge/fuel-main/tree/master/virtualbox
[4] https://fuel-jenkins.mirantis.com/view/ISO/

--
Igor Shishkin
DevOps

--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str., Moscow, Russia
www.mirantis.com
vkuk...@mirantis.com

--
Mike Scherbakov
#mihgen
Re: [openstack-dev] gettext question about oslo.i18n library
Hi Doug, I created an i18n blueprint for oslo.i18n:
https://blueprints.launchpad.net/openstack-i18n/+spec/more-gettext-supports
Could you review it? I am reading the django gettext code, and I think I will use a similar approach for the contextual-marker gettext function. Thanks, Peng Wu

On Mon, 2014-08-18 at 10:05 -0400, Doug Hellmann wrote:

Yes, that would be a good next step. Doug

On Aug 17, 2014, at 10:03 PM, Peng Wu peng.e...@gmail.com wrote:

Yes, I am interested in adding these missing gettext functions to the oslo.i18n library. I guess the next step is to create a blueprint for Kilo? Thanks, Peng Wu

On Fri, 2014-08-15 at 16:02 -0400, Doug Hellmann wrote:

On Aug 15, 2014, at 3:18 AM, Peng Wu peng.e...@gmail.com wrote:

Hi, Recently I read the code of the oslo.i18n library. The lazy translation idea is great! But I have a question about the gettext contextual markers and plural forms, i.e. the pgettext and ungettext functions; see [3]. It seems these two gettext functions are missing from the oslo.i18n library. Is that correct? Or will they be supported? Thanks, Peng Wu

You’re right, those are not present. We apparently haven’t used them anywhere, yet, because they weren’t exposed via the old gettextutils module in the incubator. We should add them. Are you interested in working on a blueprint for Kilo to do that? Doug

Refer URLs:
1. https://github.com/openstack/oslo.i18n
2. http://lists.openstack.org/pipermail/openstack-dev/2014-July/039217.html
3.
https://wiki.openstack.org/wiki/I18n/TranslatableStrings
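For reference, CPython's own gettext module already exposes both APIs under discussion: ngettext for plural forms (ungettext is the Python 2-era spelling), and pgettext for contextual markers (added to the stdlib in Python 3.8). A minimal sketch with a null translation, which is also the fallback behaviour a lazy-translation wrapper like oslo.i18n would need:

```python
import gettext

# NullTranslations returns the source strings untranslated, the same
# fallback a lazy-translation wrapper uses when no catalog is loaded.
t = gettext.NullTranslations()

# Plural form: picks the singular or plural message by count.
print(t.ngettext("%d host", "%d hosts", 1) % 1)   # 1 host
print(t.ngettext("%d host", "%d hosts", 3) % 3)   # 3 hosts

# Contextual marker: disambiguates identical source strings that are
# translated differently per context ("Open" a file vs. an "Open" port).
print(t.pgettext("file menu", "Open"))            # Open
```

With a real catalog loaded, the same calls return the translated forms; the missing piece in oslo.i18n was wrapping these two methods the way it already wraps gettext().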
[openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
Hello folks:

I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core.

Amrith has been working with Trove for a while now. He has been a consistently active reviewer, and has provided insightful comments on numerous reviews. He has submitted quality code for multiple bug-fixes in Trove, and most recently drove the audit and clean-up of log messages across all Trove components.

https://review.openstack.org/#/q/reviewer:amrith,n,z
https://review.openstack.org/#/q/owner:amrith,n,z

Please respond with +1/-1, or any further comments.

Thanks, Nikhil
Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects
Doug Hellmann wrote:

On Aug 22, 2014, at 5:59 AM, Thierry Carrez thie...@openstack.org wrote:

TL;DR: Let's create an Oslo projectgroup in Launchpad to track work across all Oslo libraries. In library projects, let's use milestones connected to published versions rather than the common milestones.

Long version: As we graduate more Oslo libraries (which is awesome), tracking Oslo work in Launchpad (bugs, milestones...) has become more difficult. There used to be only one Launchpad project (oslo, which covered the oslo incubator). It would loosely follow the integrated milestones (juno-1, juno-2...), get blueprints and bugs targeted to those, and get tags pushed around those development milestones: the same as the integrated projects, just with no source code tarball uploads.

When oslo.messaging graduated, a specific Launchpad project was created to track work around it. It still had integrated development milestones -- only at the end it would publish a 1.4.0 release instead of a 2014.2 one. That approach creates two problems. First, it's difficult to keep track of oslo work since it now spans two projects. Second, the release management logic of marking bugs Fix Released at development milestones doesn't really apply (bugs should rather be marked released when a published version of the lib carries the fix). Git tags and Launchpad milestones no longer align, which creates a lot of confusion.

Then as more libraries appeared, some of them piggybacked on the general oslo Launchpad project (generally adding tags to point to the specific library), and some others created their own project. More confusion ensues.

Here is a proposal that we could use to solve that, until StoryBoard gets proper milestone support and Oslo is just migrated to it:

1. Ask for an oslo project group in Launchpad

This would solve the first issue, by allowing us to see all oslo work on single pages (see examples at [1] or [2]).
The trade-off here is that Launchpad projects can't be part of multiple project groups (and project groups can't belong to project groups). That means that Oslo projects will no longer be in the openstack Launchpad projectgroup. I think the benefits outweigh the drawbacks here: the openstack projectgroup is not very strict anyway, so I don't think it's used in people's workflows that much.

2. Create one project per library, adopt tag-based milestones

Each graduated library should get its own project (part of the oslo projectgroup). It should use the same series/cycles as everyone else (juno), but it should have milestones that match the alpha release tags, so that you can target work to it and mark it fix released when that means the fix is released. That would solve the issue of misaligned tags/milestones. The trade-off here is that you lose the common milestone rhythm (although I guess you can still align some alphas to the common development milestones). That seems a small price to pay to better communicate which version has which fix.

We don’t necessarily decide the version numbers for all of the libraries in advance. I think we talked about this on IRC, and your suggestion was to use a “next” milestone and then rename it at the point of release. Am I remembering correctly?

Yes, using next and renaming it once you know is the way to go.

3. Rename oslo project to oslo-incubator

Keep the Launchpad oslo project as-is, part of the same projectgroup, to cover oslo-incubator work. This can keep the common development milestones, since the incubator doesn't do releases anyway. However, it has to be renamed to oslo-incubator so that it doesn't conflict with the projectgroup namespace. Once it no longer contains graduated libs, that name makes much more sense anyway.

This plan requires Launchpad admin assistance to create a projectgroup and rename a project, so I'd like to get approval on it before moving to ask them for help. Comments, thoughts?
[1] https://blueprints.launchpad.net/openstack
[2] https://bugs.launchpad.net/openstack

This makes sense to me, so let’s move ahead with your plan.

I'd like to wait for Mark McLoughlin's position, as he was defending the current setup (oslo.messaging following the juno-X milestones).

It would be good if we had a script to automate some of the release processes now, and it seems like this change is going to make it easier to implement parts like marking all of the tickets as released and updating their milestone.

I'll come up with tooling to facilitate releasing of Oslo libs.

We do have launchpad projects for some of the other oslo libraries, we just haven’t been using them for release tracking:
https://launchpad.net/python-stevedore
https://launchpad.net/python-cliff
https://launchpad.net/taskflow
https://launchpad.net/pbr
https://launchpad.net/oslo.vmware

Cool, good to know. I'll include them in the oslo group if we create it.

--
Thierry Carrez (ttx)
Re: [openstack-dev] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps
Hi Angus, We are not using the nova-docker driver to deploy docker containers. In our manual, we are using Heat (thanks to the docker plugin) to deploy docker containers, and nova is just used to deploy the VM. Inside this VM, heat deploys the docker software. The figure below (inline image attachment) describes the interactions between the different components.

Regards, Marouen

2014-08-26 0:13 GMT+02:00 Angus Salkeld asalk...@mirantis.com:

This seems misleading as there is no description of setting up nova-docker or using the heat docker container. -Angus

On Tue, Aug 26, 2014 at 5:56 AM, Marouen Mechtri mechtri.mar...@gmail.com wrote:

Hi all, I want to present you our guide for Docker containers deployment with OpenStack Heat. In this guide we dockerize and deploy a lamp application on two containers.

https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Docker-containers-deployment-with-OpenStack-Heat.rst
https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst

Hope it will be helpful for many people. Please let us know your opinion about it. Regards, Marouen Mechtri
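For context, Heat's contrib docker plugin exposes a DockerInc::Docker::Container resource that talks to a Docker daemon; pointing it at the daemon running inside the Nova-provisioned VM might look roughly like this sketch (the endpoint address and image name are illustrative, and property availability depends on the plugin version installed):

```
heat_template_version: 2013-05-23
description: Sketch - one Docker container on a VM-hosted Docker daemon
resources:
  web:
    type: DockerInc::Docker::Container
    properties:
      # Hypothetical endpoint: the Docker API exposed by the VM that
      # Heat provisioned earlier in the stack.
      docker_endpoint: tcp://192.0.2.10:2375
      image: tutum/lamp
```

This is the shape of the approach the guide takes: Nova supplies the VM, and Heat then drives the in-VM Docker daemon directly, with no nova-docker driver involved.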
Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective
On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi) pkila...@cisco.com wrote:

On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote:

On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com wrote:

On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

On 20/08/14 18:28, Salvatore Orlando wrote: Some comments inline. Salvatore

On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote:

Hi all, I've read the proposal for the incubator as described at [1], and I have several comments/concerns/suggestions on it. Overall, the idea of giving some space for experimentation that does not alienate parts of the community from Neutron is good. That way, we may relax review rules and quicken turnaround for preview features without losing control of those features too much. Though the way it's to be implemented leaves several concerns, as follows:

1. From a packaging perspective, having a separate repository and tarballs seems not optimal. As a packager, I would rather deal with a single tarball instead of two. Meaning, it would be better to keep the code in the same tree. I know that we're afraid of shipping code for which some users may expect the usual level of support, stability and compatibility. This can be solved by making it explicit that the incubated code is unsupported and used at the user's risk. 1) The experimental code probably wouldn't be installed unless explicitly requested, and 2) it would be put in a separate namespace (like 'preview', 'experimental', or 'staging', as they call it in the Linux kernel world [2]). This would facilitate keeping commit history instead of losing it during graduation. Yes, I know that people don't like to be called experimental or preview or incubator...
And maybe a neutron-labs repo sounds more appealing than an 'experimental' subtree in the core project. Well, there are lots of EXPERIMENTAL features in the Linux kernel that we actively use (for example, btrfs is still considered experimental by Linux kernel devs, while being exposed as a supported option to RHEL7 users), so I don't see how that naming concern is significant.

I think this is the whole point of the discussion around the incubator and the reason for which, to the best of my knowledge, no proposal has been accepted yet.

I wonder where discussion around the proposal is running. Is it public?

The discussion started out privately as the incubation proposal was put together, but it's now on the mailing list, in person, and in IRC meetings. Let's keep the discussion going on list now.

In the spirit of keeping the discussion going, I think we probably need to iterate in practice on this idea a little bit before we can crystallize on the policy and process for this new repo. Here are a few ideas on how we can start this iteration:

* Namespace for the new repo: Should this be in the neutron namespace, or a completely different namespace like neutron labs? Perhaps creating a separate namespace will help the packagers avoid issues of conflicting package owners of the namespace.

I don't think there is a technical requirement to choose a new namespace. Python supports sharing a namespace, and packaging can support this feature (see: oslo.*).

From what I understand there can be overlapping code between neutron and the incubator to override/modify existing python/config files. In that case, packaging (e.g. rpm) will raise a path conflict. So we probably will need to worry about namespaces?

Doug's suggestion to use a separate namespace to indicate that the incubator codebase isn’t fully supported is a good idea and what I had in mind as a non-technical reason for a new namespace.
I still assert that the potential for path conflicts can be avoided easily enough, and is not a good reason on its own to use a different namespace.

* Dependency on the Neutron (core) repository: We would need to sort this out so that we can get UTs to run and pass in the new repo. Can we set the dependency on Neutron milestone releases? We already publish tarballs for the milestone releases, but I am not sure we publish these as packages to pypi. If not, could we start doing that? With this in place, the incubator would always lag Neutron core by at most one milestone release.

Given that it is possible to specify a dependency as a branch/hash/tag in a git repo [1], I'm not sure it's worth figuring out how to target tarballs. The master branch of the incubation repo could then target the master branch of the Neutron repo and always be assured of being current, and released versions could target milestone tags or released versions.

1: http://pip.readthedocs.org/en/latest/reference/pip_install.html#git
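Concretely, a branch/hash/tag dependency per the pip docs linked above is just a VCS requirement line; the repo URL pattern and tag below are illustrative, not an actual published tag:

```
# requirements.txt (illustrative)
# track Neutron master:
-e git+https://git.openstack.org/openstack/neutron@master#egg=neutron
# or pin to a milestone tag for a released incubator version:
-e git+https://git.openstack.org/openstack/neutron@2014.2.b2#egg=neutron
```

The `@<ref>` component accepts any branch name, tag, or commit hash, which is what lets the incubator's master track Neutron master while its releases pin exact milestones.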
Re: [openstack-dev] [Openstack] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps
Hi Chandra, I updated the link to the docker guide:
https://github.com/MarouenMechtri/Docker-containers-deployment-with-OpenStack-Heat

Regards, Marouen

2014-08-26 1:21 GMT+02:00 Chandra Kamalakantha chandra_kamalakan...@yahoo.com:

Hi, Did you by chance include the wrong link? The link below is mostly Heat and nothing on Docker.
https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst

*From:* Marouen Mechtri mechtri.mar...@gmail.com
*To:* openstack openst...@lists.openstack.org; openstack-dev openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
*Sent:* Monday, August 25, 2014 2:56 PM
*Subject:* [Openstack] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps

Hi all, I want to present you our guide for Docker containers deployment with OpenStack Heat. In this guide we dockerize and deploy a lamp application on two containers.

https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Docker-containers-deployment-with-OpenStack-Heat.rst
https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst

Hope it will be helpful for many people. Please let us know your opinion about it. Regards, Marouen Mechtri
Re: [openstack-dev] [Openstack] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps
Hi all, I want to present you our guide for Docker containers deployment with OpenStack Heat. In this guide we dockerize and deploy a lamp application on two containers:
https://github.com/MarouenMechtri/Docker-containers-deployment-with-OpenStack-Heat

Hope it will be helpful for many people. Please let us know your opinion about it. Apologies if you receive multiple copies of this message :). I included the wrong link in the previous mail.

Regards, Marouen Mechtri

2014-08-26 10:55 GMT+02:00 Marouen Mechtri mechtri.mar...@gmail.com:

Hi Chandra, I updated the link to the docker guide:
https://github.com/MarouenMechtri/Docker-containers-deployment-with-OpenStack-Heat
Regards, Marouen

2014-08-26 1:21 GMT+02:00 Chandra Kamalakantha chandra_kamalakan...@yahoo.com:

Hi, Did you by chance include the wrong link? The link below is mostly Heat and nothing on Docker.
https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need?

In the thread were mentioned:
- Bugs janitor (keep reported bugs under control)
- Oslo liaison (already in place)
- Security mule (VMT first point of contact)
- Release secretary (communication with integrated release management)
- Infrastructure contact (for gate and other infra issues)
- Docs lieutenant (docs point of contact)

Anita mentioned the 3rd-party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role? Do we need someone to cover the QA space? Anything else missing?

At first glance I don't think we need a role for logistics (chairing meetings and organizing meetups), design summit planning, roadmapping, user point of contact, or spokesperson -- as I expect the PTL to retain those roles anyway...

--
Thierry Carrez (ttx)
[openstack-dev] [Keystone] Domain-specific Drivers
Hi folks!

I would like to know what is the status of the “Domain-specific Drivers” feature for Juno. I see that there’s documentation on this already, but I was not able to use it with the master branch. I was trying to configure LDAP on the default domain and SQL for the heat domain, but with no luck. Is the feature ready?

Best Regards, Bruno Bompastor.
Re: [openstack-dev] [third-party] - rebasing patches for CI
I am experiencing the exact same issue. Several reviews were not rebased recently, which makes some of my tests fail. What should the behaviour be from the CI point of view?

1) -1 and request a rebase
2) 0 and request a rebase
3) CI does the cherry-pick, but as Kevin stated, that needs some custom code that I want to avoid

Thanks, Franck

On Thu, Jul 24, 2014 at 4:15 PM, Kevin Benton blak...@gmail.com wrote:

Cherry-picking onto the target branch requires an extra step and custom code that I wanted to avoid. Right now I can just pass the gerrit ref into devstack's local.conf as the branch and everything works. If there was a way to get that Zuul ref, I could just use that instead and no new code would be required. Is exposing that ref in a known format/location something the infra team might consider? Thanks

On Tue, Jul 22, 2014 at 4:16 PM, Jeremy Stanley fu...@yuggoth.org wrote:

On 2014-07-21 11:36:43 -0700 (-0700), Kevin Benton wrote:

I see. So then back to my other question: is it possible to get access to the same branch that is being passed to the OpenStack CI devstack tests? For example, in the console output I can see it uses a ref like refs/zuul/master/Z75ac747d605b4eb28d4add7fa5b99890. [1] Is that visible somewhere (other than the logs of course) that could be used in a third-party system?

Right now, no. It's information passed from Zuul to a Jenkins master via Gearman, but as far as I know is currently only discoverable within the logs and the job parameters displayed in Jenkins. There has been some discussion in the past of Zuul providing some more detailed information to third-party systems (perhaps the capability to add them as additional Gearman workers) but that has never been fully fleshed out.
For the case of independent pipelines (which is all I would expect a third-party CI to have any interest in running for the purpose of testing a proposed change) it should be entirely sufficient to cherry-pick a patch/series from our Gerrit onto the target branch. Only _dependent_ pipelines currently make use of Zuul's capability to provide a common ref representing a set of different changes across multiple projects, since independent pipelines will only ever have an available ZUUL_REF on a single project (the same project for which the change is being proposed).

--
Jeremy Stanley

--
Kevin Benton
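A self-contained sketch of the cherry-pick approach Jeremy describes, rebasing a proposed change onto the current state of the target branch. A throwaway local repo stands in for Gerrit here (a real CI would `git fetch` the change's `refs/changes/...` ref from Gerrit and cherry-pick FETCH_HEAD); all names are illustrative:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email ci@example.com
git config user.name "Third-party CI"
branch=$(git symbolic-ref --short HEAD)   # master or main, depending on git defaults

echo base > file.txt
git add file.txt && git commit -qm "base"

# Simulate the proposed change, written against an older parent.
git checkout -qb proposed
echo change >> file.txt
git add file.txt && git commit -qm "proposed change"

# Meanwhile the target branch has moved on.
git checkout -q "$branch"
echo trunk > other.txt
git add other.txt && git commit -qm "trunk moved on"

# The CI applies the proposed change on top of the current target
# branch, then runs its tests against the result.
git cherry-pick proposed
git log --oneline
```

This is the "extra step and custom code" Kevin mentions: a fetch plus a cherry-pick, instead of checking out a ready-made Zuul ref.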
Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects
On Fri, 2014-08-22 at 11:59 +0200, Thierry Carrez wrote:

TL;DR: Let's create an Oslo projectgroup in Launchpad to track work across all Oslo libraries. In library projects, let's use milestones connected to published versions rather than the common milestones.

Sounds good to me, Thierry. Thanks for the thoughtful proposal.

The part about using integrated release milestones was more about highlighting that we follow a similar development model and cadence - i.e. it's helpful from a planning perspective to predict whether a given feature is likely to land in juno-1, juno-2 or juno-3. When it comes to release time, though, I'd much rather have a launchpad milestone that reflects the release itself rather than the development milestone. Sounds like we need to choose between using launchpad milestones for planning or releases, and choosing the latter makes sense to me.

Mark.
[openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation
Hi keystone/infra,

One key upcoming Juno feature (Keystone-to-keystone federation) is currently blocked on adding pysaml2 to requirements: https://review.openstack.org/#/c/113294/

It was -1ed by Doug after the discussion at the release meeting last week, where the xmlsec1 dependency was raised as a potential infra issue. There don't seem to be many good alternatives, though. Steve mentioned saml, but it's a bit alpha, and I have no idea how much work would be required to use that instead of pysaml2 at this point.

How blocking is the xmlsec1 dependency from an Infra perspective? How doable would a migration to saml be at this point? I'm trying to find a solution so that we can ship this feature :)

Regards, -- Thierry Carrez (ttx)
Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds
From: Michael Still [mailto:mi...@stillhq.com]
Sent: 25 August 2014 23:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

... Mark McClain and I discussed a possible plan for nova-network to neutron upgrades at the Ops Meetup today, and it seemed generally acceptable. It defines a cold migration as freezing the ability to create or destroy instances during the upgrade, and then requiring a short network outage for each instance in the cell. This is why I'm trying to understand the no-downtime use case better. Is it literally no downtime, ever? Or is it the simpler no simultaneous downtime for instances? Michael

Simultaneous downtime across the cloud is the one we really need to avoid. Short network outages (depending on how you define short) can be handled, along with blocking API operations for short periods.

The other item was how to stage the upgrade: with a cloud of a significant size and some concerns about scalability, we would like to be able to do the migration as a set of steps rather than a big bang. During the gap between the steps, we'd like to open the APIs for usage, such that new VMs get created on Neutron hypervisors. Would that be a possibility?

Tim
[openstack-dev] [infra] Centos 7 images for HP Helion
Hi, I would like to get CentOS 7-based testing working, but I am stuck without images being provided in the HP Cloud. Rackspace has a (slightly quirky, but workable) image and we have an experimental job that runs fine.

I am aware that building our own custom images with disk-image-builder is the way forward for this. I will certainly be working on this once the changes have made their way into nodepool and have been deployed. However, this is a very large change to the way upstream infra works, and I think the existing change is enough to digest without adding new platforms from day 1. Also, we generally find a few quirks in the differences between the rax and hp platforms (the f20 work certainly did), so getting them sorted before we add in the complexity of d-i-b is a win.

Can someone from HP *please* contact me about this. If there are issues with the CentOS side, I will be able to find people to help.

Thanks, -i
Re: [openstack-dev] [Keystone] Domain-specific Drivers
Hi It was fully merged for Juno-2 - so if you are having problems, feel free to share the settings in your main config and keystone.heat.config files Henry On 26 Aug 2014, at 10:26, Bruno Luis Dos Santos Bompastor bruno.bompas...@cern.ch wrote: Hi folks! I would like to know what the status of the “Domain-specific Drivers” feature for Juno is. I see that there’s documentation on this already, but I was not able to use it with the master branch. I was trying to configure LDAP on the default domain and SQL for the heat domain, but with no luck. Is the feature ready? Best Regards, Bruno Bompastor. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report
On Fri, Aug 22, 2014 at 03:39:24PM -0400, Zane Bitter wrote: We held the inaugural Heat mid-cycle meetup in Raleigh, North Carolina this week. There were a dozen folks in attendance, and I think everyone agreed that it was a very successful event. Notes from the meetup are on the Etherpad here: https://etherpad.openstack.org/p/heat-juno-midcycle-meetup Here are a few of the conclusions: Thanks for the update Zane, for those of us who were unable to attend it is much appreciated! :) Steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] usage patterns for oslo.config
On Mon, 2014-08-11 at 15:06 -0400, Doug Hellmann wrote: On Aug 8, 2014, at 7:22 PM, Devananda van der Veen devananda@gmail.com wrote: On Fri, Aug 8, 2014 at 12:41 PM, Doug Hellmann d...@doughellmann.com wrote: That’s right. The preferred approach is to put the register_opt() in *runtime* code somewhere before the option will be used. That might be in the constructor for a class that uses an option, for example, as described in http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options Doug Interesting. I've been following the prevailing example in Nova, which is to register opts at the top of a module, immediately after defining them. Is there a situation in which one approach is better than the other? The approach used in Nova is the “old” way of doing it. It works, but assumes that all of the application code is modifying a global configuration object. The runtime approach allows you to pass a configuration object to a library, which makes it easier to mock the configuration for testing and avoids having the configuration options bleed into the public API of the library. We’ve started using the runtime approach in new Oslo libraries that have configuration options, but changing the implementation in existing application code isn’t strictly necessary. I've been meaning to dig up some of the old threads and reviews to document how we got here. But briefly: * this global CONF variable originates from the gflags FLAGS variable in Nova before oslo.config * I was initially determined to get rid of any global variable use and did a lot of work to allow glance to use oslo.config without a global variable * one example detail of this work - when you use paste.deploy to load an app, you have no ability to pass a config object through paste.deploy to the app. I wrote a little helper that used a thread-local variable to mimic this pass-through. * with glance done, I moved on to making keystone use oslo.config and initially didn't use the global variable. 
Then I ran into a veto from termie who felt very strongly that a global variable should be used. * in the end, I bought the argument that the use of a global variable was pretty deeply ingrained (especially in Nova) and that we should aim for consistent coding patterns across projects (i.e. Oslo shouldn't be just about shared code, but also shared patterns). The only realistic standard pattern we could hope for was the use of the global variable. * with that agreed, we reverted glance back to using a global variable and all projects followed suit * the case of libraries is different IMO - we'd be foolish to design APIs which lock us into using the global object So ... I wouldn't quite agree that this is the new way vs the old way, but I think it would be reasonable to re-open the discussion about using the global object in our applications. Perhaps, at least, we could reduce our dependence on it. Oh look, we have a FAQ on this: https://wiki.openstack.org/wiki/Oslo#Why_does_oslo.config_have_a_CONF_object.3F_Global_object_SUCK.21 Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
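The two patterns discussed in this thread can be sketched side by side. This is a minimal stdlib-only illustration: `ConfigOpts` here is an invented stand-in for oslo.config's real class, kept just rich enough to contrast import-time registration against a global object with runtime registration on a passed-in object.

```python
# Minimal stand-in for oslo.config's ConfigOpts -- illustrative only,
# not the real implementation.
class ConfigOpts(object):
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        # Re-registering the same option is a no-op.
        self._opts.setdefault(name, default)

    def __getattr__(self, name):
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError(name)

# Pattern 1: the "Nova" style -- a global object, with options
# registered at import time, right after they are defined.
CONF = ConfigOpts()
CONF.register_opt('bind_host', default='0.0.0.0')

def global_style_server():
    return CONF.bind_host

# Pattern 2: the "library" style -- a config object is passed in, and
# the option is registered at runtime in the constructor of the class
# that uses it.  Nothing here touches the global CONF.
class Server(object):
    def __init__(self, conf):
        self.conf = conf
        self.conf.register_opt('bind_host', default='0.0.0.0')

    @property
    def bind_host(self):
        return self.conf.bind_host
```

The second pattern is what makes library testing easy: hand the class a private config object and override whatever you like, with no global state to reset between tests.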
Re: [openstack-dev] [Heat] [Keystone] Heat cfn-push-stats failed with '403 SignatureDoesNotMatch', it may be Keystone problem.
On Sun, Aug 24, 2014 at 02:55:14PM +0900, Yukinori Sagara wrote: Hi. I am trying Heat instance HA, using RDO Icehouse. After instance boot, the instance pushes its own stats to a heat alarm with the cfn-push-stats command. But cfn-push-stats always fails with the error '403 SignatureDoesNotMatch'; this message is output to /var/log/cfn-push-stats.log. I debugged client and server side code (i.e. cfn-push-stats, boto, heat, keystone). Thanks for debugging this - unfortunately it's not the first time a change in boto has broken heat like this :( Can you please provide a link to your patch, so we can help review the change to python-keystoneclient? Also, please remember that we need to ensure the signature calculation on the server side works not only with the latest boto, but also with older versions too - so if this is a recent change in behavior, we may need to use the User-Agent header to conditionally modify the logic (unless it can be done in a backwards compatible way), e.g. like I did here: https://review.openstack.org/#/c/38784/ Thanks, Steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Issues with hardcoded versions of requirements in specs of packages
It is not enough; you need to review the requirements in the code of nailgun, ostf and astute. I'll be happy to have our requirements files and specs as close to global-requirements as possible. It will ruin our current solid structure, where we have the same versions of dependencies in production, development and test environments. And from time to time we will face issues with updates from pypi. Development and test environments will be affected. We are somewhat protected from this by the maintainers of the global-requirements file. And changes in production environments are protected by the OSCI team. So, if we want to build an iso with custom packages, we have to add flexibility to our dependency lists. Any objections? On Mon, Aug 25, 2014 at 8:28 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: Commit with fast fix was submitted: https://review.openstack.org/#/c/116667/ Need review :) I will try to build an image with this commit and will send my comments with my results. On Mon, Aug 25, 2014 at 7:55 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: When I started the build of the ISO from the master branch, I saw the following errors: https://bugs.launchpad.net/fuel/+bug/1361279 I want to submit a patch set to remove all hardcoded requirements and change all '==' to '=', but before that I want to discuss how we can organize the specs to avoid problems with dependencies. Thank you. On Mon, Aug 25, 2014 at 6:21 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: Hi team, Today I started to build a Fuel ISO from the master branch, with packages built from the master branches, and found strange errors: http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/77/console Looks like we have hardcoded versions of all required packages in the specs: https://github.com/stackforge/fuel-main/blob/master/packages/rpm/specs/nailgun.spec#L17-L44 and this is the root of the problem. 
As a result, we can't build the ISO from the master branch, because the master branches of the components have different versions of requirements. It looks like a common issue for several components. Could we discuss how we can organize the specs to avoid problems with dependencies? Thank you! -- Timur, QA Engineer OpenStack Projects Mirantis Inc ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
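The trade-off between hard pins and minimum bounds in the specs can be shown with a toy specifier matcher. This is illustration only: rpm and pip have far richer specifier grammars than this, and the version numbers in the test are invented. The point is that a spec pinned with '==' rejects the newer version that master actually produces, while a '>=' bound accepts it.

```python
def satisfies(installed, spec):
    # Toy version-specifier matcher, illustration only.  Supports the
    # '==' / '>=' forms from requirements files and rpm's single '='.
    for op in ('>=', '==', '='):
        if spec.startswith(op):
            wanted = spec[len(op):].strip()
            break
    else:
        raise ValueError('unsupported specifier: %r' % spec)

    as_tuple = lambda v: tuple(int(p) for p in v.split('.'))
    if op == '>=':
        return as_tuple(installed) >= as_tuple(wanted)
    # '==' and rpm-style '=' both mean an exact pin.
    return as_tuple(installed) == as_tuple(wanted)
```

With a hypothetical spec line pinning `nailgun == 0.1.0`, the build breaks the moment master bumps the version, which is exactly the failure mode described above; a minimum bound keeps newer or custom packages installable.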
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On 08/26/2014 11:13 AM, Thierry Carrez wrote: OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need ? In the thread were mentioned: - Bugs janitor (keep reported bugs under control) - Oslo liaison (already in place) - Security mule (VMT first point of contact) - Release secretary (communication with integrated release management) - Infrastructure contact (for gate and other infra issues) - Docs lieutenant (docs point of contact) Sounds good to me. Anita mentioned the 3rd party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role ? Do we need someone to cover the QA space ? Anything else missing ? I think we do. We have someone assigned to QA in Zaqar and that has worked pretty well for us. Malini has been taking care of tempest, devstack patches and syncing with OpenStack's QA team. Her efforts there have been key to getting Zaqar in the gate and making sure we're up-to-date with the latest changes happening in our QA. At first glance I don't think we need a role for logistics (chairing meetings and organizing meetups), design summit planning, roadmapping, user point of contact, or spokesperson -- as I expect the PTL to retain those roles anyway... +1 for letting the PTL take care of this. Flavio -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes
On 08/25/2014 10:06 PM, Brandon Logan wrote: 2. Therefore, there should be some configuration to specifically enable either version (not both) in case LBaaS is needed. In this case, the other version is disabled (i.e. a REST query for the non-active version should return a not-activated error). Additionally, adding an 'lb-version' command to return the version currently active seems like a good user-facing idea. We should see how this doesn't negatively affect the db migration process (for example, allowing read-only commands for both versions?) A /version endpoint can be added for both v1 and v2 extensions and service plugins. If it doesn't already exist, it would be nice if neutron had an endpoint that would return the list of loaded extensions and their versions. There is 'neutron ext-list', but I'm not familiar enough with it or with the REST API to say if we can use that. 3. Another decision that needs to be made is the syntax for v2. As mentioned, the current new syntax is 'neutron lbaas-object-command' (against the old 'lb-object-action'), keeping in mind that once v1 is deprecated, a syntax like 'lbv2-object-action' would probably be unwanted. Is 'lbaas-object-command' okay with everyone? That is the reason we went with lbaas: lbv2 looks ugly and we'd be stuck with it for the lifetime of v2, unless we did another migration back to lb for it. Which seemed wrong to do, since then we'd have to accept both lbv2 and lb commands, and then deprecate lbv2. I assume this also means you are fine with the prefix in the API resource of /lbaas as well then? I don't mind, as long as there is a similar mechanism which disables the non-active REST API commands. Does anyone disagree? 4. If we are going for a different API between versions, appropriate patches also need to be written for lbaas-related scripts and also Tempest, and their maintainers should probably be notified. Could you elaborate on this? I don't understand what you mean by a different API between versions. 
The intention was that the change of the user-facing API also forces changes on other levels - not only does neutronclient need to be modified accordingly, but also the tempest system tests, the horizon interface regarding LBaaS... ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
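The "one active version, the other disabled" behavior from point 2 could be modeled like this. This is a toy dispatcher with invented names and an invented error shape, not actual neutron code; it just shows the active version serving requests, the other version returning a not-activated error, and a version query reporting which one is live.

```python
import json

class LBaaSDispatcher(object):
    """Toy dispatcher: exactly one LBaaS API version is active."""

    def __init__(self, active_version):
        assert active_version in ('v1', 'v2')
        self.active = active_version

    def version(self):
        # What an 'lb-version' command could report.
        return json.dumps({'active-version': self.active})

    def handle(self, version, request):
        # Queries against the non-active version get an error instead
        # of silently working against the wrong data model.
        if version != self.active:
            return 409, json.dumps(
                {'error': 'LBaaS %s is not activated' % version})
        return 200, json.dumps({'handled-by': version,
                                'request': request})
```

A read-only allowance during db migration, as floated in the thread, would just be an extra branch in `handle` that lets GET-style requests through for the non-active version.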
Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation
On 08/26/2014 05:38 AM, Thierry Carrez wrote: Hi keystone/infra, One key upcoming Juno feature (Keystone to keystone federation) is currently blocked on adding pysaml2 to requirements: https://review.openstack.org/#/c/113294/ It was -1ed by Doug after the discussion at the release meeting last week, where the xmlsec1 dependency was raised as a potential infra issue. There doesn't seem to be so many good alternatives though. Steve mentioned saml, but it's a bit alpha, and I have no idea how much work would be required to use that instead of pysaml2 at this point. How blocking is the xmlsec1 dependency from an Infra perspective ? How doable would a migration to saml at this point be ? I'm trying to find a solution so that we can ship this feature :) I don't think this has anything to do with Infra. xmlsec1 is included in Debian / Ubuntu and Fedora. I think the complaint was about whether this library existed for MacOSX, which honestly, I *don't* think is a valid argument against adding a requirement as that's not a target environment for OpenStack. I'm +2 on this moving forward. I feel that the keystone team answered the questions needed. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [mistral] Access filtering BP
Team, Please take a look at the new BP https://blueprints.launchpad.net/mistral/+spec/mistral-access-filtering. Interested in your feedback and shared experience. Although it’s a purely internal design thing, I find it pretty important to be accurate about things like that. Thanks Renat Akhmerov @ Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Private IP allocation
Hi, Openstack deployment details: Single node devstack setup Issue Description: We are writing a core plugin and trying to create instances provisioned by our plugin. We are able to launch the VM instance successfully via OpenStack horizon. From horizon, we can see the IP getting allocated to the VM. But the IP is not seen when we do an ifconfig in the VM's terminal. Also, we are not able to ping between the 2 instances. From the CLI, the ovs-vsctl show command shows: f858da98-365a-4ca0-bfdc-7ab19987a128 Bridge br-int Port br-int Interface br-int type: internal ovs_version: 1.4.6 However, we could see the interfaces when we did ifconfig from the terminal. What are we missing? Do we need to add some configuration for the interface in our plugin? Regards, Sanjivini Naikar ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [rally] Introduction to feature request mechanism in Rally
Hi stackers, It goes without saying that the end users who are actually using your tool/project/service usually don't have enough time to join the community, understand all the technical details and the road map of the project, and after that contribute code that implements their features... But it's well known that their experience is very useful and important for the project. Specially for such users, who can just say what is missing (or not working), we designed a very simple way to request a new feature from the rally team. The end user should add a proposal for the new feature: https://github.com/stackforge/rally/tree/master/doc/feature_request It's very similar to specs, but much simpler for end users, because the only thing they should fill in is a description of what is needed in rally, and why. Thoughts? Best regards, Boris Pavlovic ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds
Hi, I'd like to encourage everybody interested to take a look and leave comments on the Neutron migration spec here: https://review.openstack.org/#/c/101921 The design currently includes both cold and live approaches, supports host-by-host migration (as opposed to a big bang) and doesn't require freezing the whole deployment during the upgrade. I've also started prototyping the above spec: https://review.openstack.org/#/c/111755 - Neutron migration: synchronize IP (de)allocations with Nova-net https://review.openstack.org/#/c/115635 - Neutron migration as part of cold migration On Tue, Aug 26, 2014 at 1:59 PM, Tim Bell tim.b...@cern.ch wrote: From: Michael Still [mailto:mi...@stillhq.com] Sent: 25 August 2014 23:38 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds ... Mark McClain and I discussed a possible plan for nova-network to neutron upgrades at the Ops Meetup today, and it seemed generally acceptable. It defines a cold migration as freezing the ability to create or destroy instances during the upgrade, and then requiring a short network outage for each instance in the cell. This is why I'm trying to understand the no downtime use case better. Is it literally no downtime, ever? Or is it a more simple no simultaneous downtime for instances? Michael The simultaneous downtime across the cloud is the one we really need to avoid. Short network outages (depending on how you define short) can be handled along with blocking API operations for short periods. The other item was how to stage the upgrade.. with a cloud of a significant size and some concerns about scalability, we would like to be able to do the migration as a set of steps rather than a big bang. During the gap between the steps, we'd like to open the APIs for usage, such as new VMs get created on Neutron hypervisors. Would that be a possibility ? 
Tim ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?
Hi, All: I've deployed an undercloud and overcloud on some baremetals. All overcloud machines are deployed by the undercloud. Then I tried to shut down the undercloud machines. After that, if I reboot one overcloud machine, it will never boot from the net, AKA PXE provided by the undercloud. Is that what TripleO is designed to be? Can we never shut down undercloud machines for maintenance of the overcloud? Please help me clarify that. Best Regards! Chao Yan -- My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727 My Weibo: http://weibo.com/herewearenow ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [OpenStack][Nova]May be performance issues of connect_volume in Nova
Hi, All I have done some work to test the performance of LUN scanning, using iscsiadm with --rescan like Nova does. In my test, a host was connected to a lot of LUNs, more than 1000. Because --rescan causes the kernel to scan all of the LUNs connected to the host, it took several minutes to complete the scanning. According to connect_volume at line 284 in nova.virt.libvirt.volume.py: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L252 Nova uses iscsiadm with --rescan to detect a new volume, but this command will scan all of the LUNs, including all the others already connected to this host. So if a host has a large number of LUNs connected to it, connect_volume will be very slow. I think connect_volume needn't scan all of the LUNs; it only needs to scan the LUN specified by connection_info. Is it worth discussing a more efficient way to improve this issue? -- Best wishes == Peter.W == ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
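On Linux, a single LUN can be probed by writing "channel target lun" to the SCSI host's sysfs scan file, rather than rescanning the whole session the way `iscsiadm --rescan` does. The sketch below only builds the path and payload; resolving the host/channel/target numbers from Nova's connection_info is the real work of such a change and is left out here as an assumption.

```python
def targeted_scan(host_num, channel, target, lun):
    # Writing 'c t l' to /sys/class/scsi_host/hostN/scan asks the
    # kernel to probe just that LUN ('- - -' would wildcard-scan
    # everything, which is roughly what --rescan triggers per session).
    path = '/sys/class/scsi_host/host%d/scan' % host_num
    payload = '%d %d %d' % (channel, target, lun)
    return path, payload

def scan_single_lun(host_num, channel, target, lun, opener=open):
    # Needs root in real life; 'opener' is injectable for testing.
    path, payload = targeted_scan(host_num, channel, target, lun)
    with opener(path, 'w') as f:
        f.write(payload)
```

With more than 1000 LUNs attached, probing one LUN this way avoids the minutes-long full rescan the thread describes, at the cost of having to know the address of the new LUN up front.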
Re: [openstack-dev] [all] The future of the integrated release
On 08/25/2014 03:50 PM, Adam Lawson wrote: I recognize I'm joining the discussion late but I've been following the dialog fairly closely and want to offer my perspective FWIW. I have a lot going through my head, not sure how to get it all out there so I'll do a brain dump, get some feedback and apologize in advance. One of the things I like most about Openstack is its incredible flexibility - a modular architecture where certain programs/capabilities can be leveraged for a specific install - or not, and ideally the rest of the feature suite remains functional irrespective of a program's status. When it comes to a program being approved as part of Openstack Proper (pardon my stepping over that discussion), I think a LOT of what is being discussed here touches on what Openstack will ultimately be about and what it won't. With products like Cloudstack floating around consuming market share, all I see is Citrix. A product billed as open source but so closely aligned with one vendor that it almost doesn't matter. They have a matured decision structure, UI polish and organized support, but they don't have community. Not like us anyway. With Openstack we have the moral authority to call ourselves the champions of open cloud, and with that, we have competing interests that make our products better. We don't have a single vendor (yet) that dictates whether something will happen or not. The maturity of the Openstack products themselves is driven by a community of consumers where the needs are accommodated rather than sold. A positive that comes with such a transparent design pipeline is the increased capability for design agility and accommodating changes when a change is needed. But I'm becoming increasingly disappointed at the amount of attention being given to whether one product is blessed by Openstack or not. In a modular design, these programs should be interchangeable with only a couple exceptions. Does being blessed really matter? 
The consensus I've garnered in this thread is the desperate need for the consuming community's continued involvement. What I /haven't/ heard much about is how Openstack can standardize how these programs - blessed or not - can interact with the rest of the suite, to the extent they adhere to the correct inputs/outputs which make them functional. Program status is irrelevant. I guess when it comes right down to it, I love what Openstack is and where we're going, and I especially appreciate these discussions. But I'm disappointed at the number of concerns I've been reading about things that ultimately don't matter (like being blessed, about who has the power, etc.) and I have concerns that we lose sight of what this is all about to the point that the vision for Openstack gets clouded. We have a good thing, and no project can accommodate every request, so a decision must be made as to what is 'included' and what is 'supported'. But with modularity, it really doesn't matter one iota if a program is blessed in the Openstack integrated release cycle or not. Couldn't agree with you more, Adam. I believe if OpenStack is to succeed in the future, our community and our governance structure need to embrace the tremendous growth in scope that OpenStack's success to-date has generated. The last thing we should do, IMO, is reverse course and act like a single-vendor product in order to tame the wildlings. Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1. Congratulations, Amrith. Best regards, Denis M. On Tuesday, August 26, 2014, Sergey Gotliv wrote: Strong +1 from me! -Original Message- From: Nikhil Manchanda [mailto:nik...@manchanda.me] Sent: August-26-14 3:48 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Hello folks: I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core. Amrith has been working with Trove for a while now. He has been a consistently active reviewer, and has provided insightful comments on numerous reviews. He has submitted quality code for multiple bug-fixes in Trove, and most recently drove the audit and clean-up of log messages across all Trove components. https://review.openstack.org/#/q/reviewer:amrith,n,z https://review.openstack.org/#/q/owner:amrith,n,z Please respond with +1/-1, or any further comments. Thanks, Nikhil ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1 From: Sergey Gotliv [sgot...@redhat.com] Sent: Tuesday, August 26, 2014 8:11 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Strong +1 from me! -Original Message- From: Nikhil Manchanda [mailto:nik...@manchanda.me] Sent: August-26-14 3:48 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Hello folks: I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core. Amrith has been working with Trove for a while now. He has been a consistently active reviewer, and has provided insightful comments on numerous reviews. He has submitted quality code for multiple bug-fixes in Trove, and most recently drove the audit and clean-up of log messages across all Trove components. https://review.openstack.org/#/q/reviewer:amrith,n,z https://review.openstack.org/#/q/owner:amrith,n,z Please respond with +1/-1, or any further comments. Thanks, Nikhil ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] usage patterns for oslo.config
On Aug 26, 2014, at 6:30 AM, Mark McLoughlin mar...@redhat.com wrote: On Mon, 2014-08-11 at 15:06 -0400, Doug Hellmann wrote: On Aug 8, 2014, at 7:22 PM, Devananda van der Veen devananda@gmail.com wrote: On Fri, Aug 8, 2014 at 12:41 PM, Doug Hellmann d...@doughellmann.com wrote: That’s right. The preferred approach is to put the register_opt() in *runtime* code somewhere before the option will be used. That might be in the constructor for a class that uses an option, for example, as described in http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options Doug Interesting. I've been following the prevailing example in Nova, which is to register opts at the top of a module, immediately after defining them. Is there a situation in which one approach is better than the other? The approach used in Nova is the “old” way of doing it. It works, but assumes that all of the application code is modifying a global configuration object. The runtime approach allows you to pass a configuration object to a library, which makes it easier to mock the configuration for testing and avoids having the configuration options bleed into the public API of the library. We’ve started using the runtime approach in new Oslo libraries that have configuration options, but changing the implementation in existing application code isn’t strictly necessary. I've been meaning to dig up some of the old threads and reviews to document how we got here. But briefly: * this global CONF variable originates from the gflags FLAGS variable in Nova before oslo.config * I was initially determined to get rid of any global variable use and did a lot of work to allow glance use oslo.config without a global variable * one example detail of this work - when you use paste.deploy to load an app, you have no ability to pass a config object through paste.deploy to the app. I wrote a little helper that used a thread-local variable to mimic this pass-through. 
* with glance done, I moved on to making keystone use oslo.config and initially didn't use the global variable. Then I ran into a veto from termie who felt very strongly that a global variable should be used. * in the end, I bought the argument that the use of a global variable was pretty deeply ingrained (especially in Nova) and that we should aim for consistent coding patterns across projects (i.e. Oslo shouldn't be just about shared code, but also shared patterns). The only realistic standard pattern we could hope for was the use of the global variable. * with that agreed, we reverted glance back to using a global variable and all projects followed suit * the case of libraries is different IMO - we'd be foolish to design APIs which lock us into using the global object So ... I wouldn't quite agree that this is the new way vs the old way, but I think it would be reasonable to re-open the discussion about using the global object in our applications. Perhaps, at least, we could reduce our dependence on it. The aspect I was calling “old” was the “register options at import time” pattern, not the use of a global. Whether we use a global or not, registering options at runtime in a code path that will be using them is better than relying on import ordering to ensure options are registered before they are used. Doug Oh look, we have a FAQ on this: https://wiki.openstack.org/wiki/Oslo#Why_does_oslo.config_have_a_CONF_object.3F_Global_object_SUCK.21 Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Configuring libvirt VIF driver
Hi, Following the conversation [1]: My understanding was that the way to use an out-of-tree vif_driver is to set the vif_driver option in nova.conf until there is a better way to support such cases, but the commit [2] removed this option. Can someone clarify the current status (i.e. what is the current way to do it) in Juno? [1] https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg30177.html [2] https://github.com/openstack/nova/commit/7561c8ded211d53e8745d1420a73b82bd0fc35cf (libvirt: remove vif_driver config parameter) Thanks, Itzik ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation
On Aug 26, 2014, at 7:44 AM, Sean Dague s...@dague.net wrote: On 08/26/2014 05:38 AM, Thierry Carrez wrote: Hi keystone/infra, One key upcoming Juno feature (Keystone to keystone federation) is currently blocked on adding pysaml2 to requirements: https://review.openstack.org/#/c/113294/ It was -1ed by Doug after the discussion at the release meeting last week, where the xmlsec1 dependency was raised as a potential infra issue. There don't seem to be many good alternatives, though. Steve mentioned saml, but it's a bit alpha, and I have no idea how much work would be required to use that instead of pysaml2 at this point. How blocking is the xmlsec1 dependency from an Infra perspective ? How doable would a migration to saml at this point be ? I'm trying to find a solution so that we can ship this feature :) I don't think this has anything to do with Infra. xmlsec1 is included in Debian / Ubuntu and Fedora. I think the complaint was about whether this library existed for MacOSX, which honestly, I *don't* think is a valid argument against adding a requirement as that's not a target environment for OpenStack. My impression was this library would also be needed for keystone client, not just the server or middleware. Did I misunderstand? Doug I'm +2 on this moving forward. I feel that the keystone team answered the questions needed. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On Aug 26, 2014, at 5:13 AM, Thierry Carrez thie...@openstack.org wrote: OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need ? In the thread were mentioned: - Bugs janitor (keep reported bugs under control) - Oslo liaison (already in place) - Security mule (VMT first point of contact) - Release secretary (communication with integrated release management) - Infrastructure contact (for gate and other infra issues) - Docs lieutenant (docs point of contact) Anita mentioned the 3rd party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role ? Do we need someone to cover the QA space ? Anything else missing ? It seems the QA team is also feeling pressure due to the growing community, so it seems wise to ensure every team has someone designated to help with coordinating work on QA projects. Doug At first glance I don't think we need a role for logistics (chairing meetings and organizing meetups), design summit planning, roadmapping, user point of contact, or spokesperson -- as I expect the PTL to retain those roles anyway... -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?
As an update, please check and review commit [1] to fuel-specs with a detailed feature description. According to this feature, we are going to switch our CI system to lrzipped tarballs. [1] https://review.openstack.org/#/c/116874/ On Thu, Aug 21, 2014 at 5:50 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote: Fuelers, Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce its size by 2Gb with the lrzip tool (ticket https://bugs.launchpad.net/fuel/+bug/1356813, change in build system https://review.openstack.org/#/c/114201/, change in docs https://review.openstack.org/#/c/115331/), but it will dramatically increase unpacking time. I've run the unpack on my virtualbox environment and got this result:

[root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
Decompressing... 100% 7637.48 / 7637.48 MB
Average DeCompression Speed: 8.014MB/s
[OK] - 8008478720 bytes
Total time: 00:15:52.93

My suggestion is to reject this change, release 5.1 with the big tarball and find another solution in the next release. Any objections? ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Aleksandra Fedorova bookwar ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On Fri, Aug 22, 2014 at 8:19 PM, John Dickinson m...@not.mn wrote: I think Anne makes some excellent points about the pattern being proposed being unlikely to be commonly implemented across all the programs (or, at best, very difficult). Let's not try to formalize another best practice that works many times and force it to work every time. Here's an alternate proposal: Let's let PTLs be PTLs and effectively coordinate and manage the activity in their respective projects. And let's get the PTLs together for one or two days every cycle to discuss project issues. Just PTLs, and let's focus on the project management stuff and some cross-project issues. Getting the PTLs together would allow them to discuss cross-project issues, share frustrations and solutions about what does and doesn't work. Basically, think of it as a mid-cycle meetup, but for PTLs. (Perhaps we could even ask the Foundation to sponsor it.) +100 I think John nails the key point here: PTLs are likely already doing a lot of what Thierry originally proposed here. I know that I work with many people in Neutron to help offload things like bug triaging, gate failure analysis, etc. Without that, Neutron wouldn't scale. In effect what's proposed here is something we're already doing a lot of. I don't think forcing this on each project is a good way of doing this, because each project has different challenges and needs. Thanks, Kyle --John On Aug 22, 2014, at 6:02 PM, Anne Gentle a...@openstack.org wrote: On Fri, Aug 22, 2014 at 6:17 PM, Rochelle.RochelleGrober rochelle.gro...@huawei.com wrote: /flame-on Ok, this is funny to some of us in the community. The general populace of this community is so against the idea of management that they will use the term for a despotic dictator as a position name rather than manager. Sorry, but this needed to be said. /flame-off Specific comments in line: Thierry Carrez wrote: Hi everyone, We all know being a project PTL is an extremely busy job. 
That's because in our structure the PTL is responsible for almost everything in a project: - Release management contact - Work prioritization - Keeping bugs under control - Communicate about work being planned or done - Make sure the gate is not broken - Team logistics (run meetings, organize sprints) - ... Point of clarification: I've heard PTL=Project Technical Lead and PTL=Program Technical Lead. Which is it? It is kind of important as OpenStack grows, because the first is responsible for *a* project, and the second is responsible for all projects within a program. Now Program, formerly Project. I'd also like to set out as an example of a Program that is growing to encompass multiple projects, the Neutron Program. Look at how it is expanding: Multiple sub-teams for: LBAAS, DNAAS, GBP, etc. This model could be extended such that: - the subteam is responsible for code reviews, including the first +2 for design, architecture and code of the sub-project, always also keeping an eye out that the sub-project code continues to both integrate well with the program, and that the program continues to provide the needed code bits, architecture modifications and improvements, etc. to support the sub-project. - the final +2/A would be from the Program reviewers to ensure that all integrate nicely together into a single, cohesive program. - This would allow sub-projects to have core reviewers, along with the program and be a good separation of duties. It would also help to increase the number of reviews moving to merged code. - Taken to a logical stepping stone, you would have project technical leads for each project, and they would make up a program council, with the program technical lead being the chair of the council. This is a way to offload a good chunk of PTL tactical responsibilities and help them focus more on the strategic. 
They end up being completely drowned in those day-to-day operational duties, miss the big picture, can't help in development that much anymore, get burnt out. Since you're either the PTL or not the PTL, you're very alone and succession planning is not working that great either. There have been a number of experiments to solve that problem. John Garbutt has done an incredible job at helping successive Nova PTLs handle the release management aspect. Tracy Jones took over Nova bug management. Doug Hellmann successfully introduced the concept of Oslo liaisons to get clear points of contact for Oslo library adoption in projects. It may be time to generalize that solution. The issue is one of responsibility: the PTL is ultimately responsible for everything in a project. If we can more formally delegate that responsibility, we can avoid escalating everything up to the PTL, and rely on a team of people rather than just one person. Enter the Czar system: each project should have a
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On Mon, Aug 25, 2014 at 4:45 AM, Alan Kavanagh alan.kavan...@ericsson.com wrote: That's a fair point Jay. The Czar does sound like a reasonable approach, and what would be useful and helpful would be to appoint additional PTLs and not have the burden of everything falling on one individual, which becomes overwhelming after a period of time. In this case, imho it would be useful to have 2 or more PTLs assigned per project to share the workload and bring different views when assessing the sticky points. /Alan I disagree with this assessment. Having multiple PTLs defeats the purpose of a PTL. Also, the PTL should be about building consensus, not using a hammer, as was noted in other parts of this thread. As developers in Open Source, you have to be able to build consensus before some ideas and concepts can move forward. The PTL, in my opinion, is about helping to establish that consensus and being able to say no when that consensus isn't built. I don't think having multiple PTLs would help here. Thanks, Kyle -Original Message- From: Jay Pipes [mailto:jaypi...@gmail.com] Sent: August-25-14 1:58 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs On 08/23/2014 06:35 PM, Clint Byrum wrote: I agree as well. PTL is a servant of the community, as any good leader is. If the PTL feels they have to drop the hammer, or if an impasse is reached where they are asked to, it is because they have failed to get everyone communicating effectively, not because that's their job. The problem isn't really that teams are not communicating effectively, nor is the problem to do with some deficit of a PTL in either putting the hammer down or failing to figure out common ground.
The issue in my opinion and my experience is that there are multiple valid ways of doing something (say, deployment or metering or making toast) and the TC and our governing structure have decided to pick winners in spaces instead of having a big tent and welcoming different solutions and projects into the OpenStack fold. We pick winners and by doing so, we are exclusionary, and this exclusivity does not benefit our user community, but rather just gives it fewer options. IMHO, the TC should become an advisory team that recommends to interested project teams ways in which they can design and architect their projects to integrate well with other projects in the OpenStack community, and design their projects for the scale, stability and requirements (such as multi-tenancy) that an open cloud software ecosystem demands. Just my two cents, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa][all][Heat] Packaging of functional tests
Steve Baker has started the process of moving Heat tests out of the Tempest repository and into the Heat repository, and we're looking for some guidance on how they should be packaged in a consistent way. Apparently there are a few projects already packaging functional tests in the package projectname.tests.functional (alongside projectname.tests.unit for the unit tests). That strikes me as odd in our context, because while the unit tests run against the code in the package in which they are embedded, the functional tests run against some entirely different code - whatever OpenStack cloud you give it the auth URL and credentials for. So these tests run from the outside, just like their ancestors in Tempest do. There's all kinds of potential confusion here for users and packagers. None of it is fatal and all of it can be worked around, but if we refrain from doing the thing that makes zero conceptual sense then there will be no problem to work around :) I suspect from reading the previous thread about In-tree functional test vision that we may actually be dealing with three categories of test here rather than two: * Unit tests that run against the package they are embedded in * Functional tests that run against the package they are embedded in * Integration tests that run against a specified cloud i.e. the tests we are now trying to add to Heat might be qualitatively different from the projectname.tests.functional suites that already exist in a few projects. Perhaps someone from Neutron and/or Swift can confirm? I'd like to propose that tests of the third type get their own top-level package with a name of the form projectname-integrationtests (second choice: projectname-tempest on the principle that they're essentially plugins for Tempest). How would people feel about standardising that across OpenStack? thanks, Zane. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation
From: Doug Hellmann d...@doughellmann.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 08/26/2014 10:11 AM Subject: Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation On Aug 26, 2014, at 7:44 AM, Sean Dague s...@dague.net wrote: On 08/26/2014 05:38 AM, Thierry Carrez wrote: Hi keystone/infra, One key upcoming Juno feature (Keystone to keystone federation) is currently blocked on adding pysaml2 to requirements: https://review.openstack.org/#/c/113294/ It was -1ed by Doug after the discussion at the release meeting last week, where the xmlsec1 dependency was raised as a potential infra issue. There don't seem to be many good alternatives, though. Steve mentioned saml, but it's a bit alpha, and I have no idea how much work would be required to use that instead of pysaml2 at this point. How blocking is the xmlsec1 dependency from an Infra perspective ? How doable would a migration to saml at this point be ? I'm trying to find a solution so that we can ship this feature :) I don't think this has anything to do with Infra. xmlsec1 is included in Debian / Ubuntu and Fedora. I think the complaint was about whether this library existed for MacOSX, which honestly, I *don't* think is a valid argument against adding a requirement as that's not a target environment for OpenStack. My impression was this library would also be needed for keystone client, not just the server or middleware. Did I misunderstand? Doug Hey Doug, Just talked it over with Marek, we shouldn't need it for keystoneclient. Just the server side. I'm +2 on this moving forward. I feel that the keystone team answered the questions needed.
-Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation
On Aug 26, 2014, at 10:18 AM, Steve Martinelli steve...@ca.ibm.com wrote: From: Doug Hellmann d...@doughellmann.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 08/26/2014 10:11 AM Subject: Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation On Aug 26, 2014, at 7:44 AM, Sean Dague s...@dague.net wrote: On 08/26/2014 05:38 AM, Thierry Carrez wrote: Hi keystone/infra, One key upcoming Juno feature (Keystone to keystone federation) is currently blocked on adding pysaml2 to requirements: https://review.openstack.org/#/c/113294/ It was -1ed by Doug after the discussion at the release meeting last week, where the xmlsec1 dependency was raised as a potential infra issue. There don't seem to be many good alternatives, though. Steve mentioned saml, but it's a bit alpha, and I have no idea how much work would be required to use that instead of pysaml2 at this point. How blocking is the xmlsec1 dependency from an Infra perspective ? How doable would a migration to saml at this point be ? I'm trying to find a solution so that we can ship this feature :) I don't think this has anything to do with Infra. xmlsec1 is included in Debian / Ubuntu and Fedora. I think the complaint was about whether this library existed for MacOSX, which honestly, I *don't* think is a valid argument against adding a requirement as that's not a target environment for OpenStack. My impression was this library would also be needed for keystone client, not just the server or middleware. Did I misunderstand? Doug Hey Doug, Just talked it over with Marek, we shouldn't need it for keystoneclient. Just the server side. Great! Sorry for the confusion. +2a Doug I'm +2 on this moving forward. I feel that the keystone team answered the questions needed.
-Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On Aug 26, 2014, at 10:10 AM, Kyle Mestery mest...@mestery.com wrote: On Fri, Aug 22, 2014 at 8:19 PM, John Dickinson m...@not.mn wrote: I think Anne makes some excellent points about the pattern being proposed being unlikely to be commonly implemented across all the programs (or, at best, very difficult). Let's not try to formalize another best practice that works many times and force it to work every time. Here's an alternate proposal: Let's let PTLs be PTLs and effectively coordinate and manage the activity in their respective projects. And let's get the PTLs together for one or two days every cycle to discuss project issues. Just PTLs, and let's focus on the project management stuff and some cross-project issues. Getting the PTLs together would allow them to discuss cross-project issues, share frustrations and solutions about what does and doesn't work. Basically, think of it as a mid-cycle meetup, but for PTLs. (Perhaps we could even ask the Foundation to sponsor it.) +100 I think John nails the key point here: PTLs are likely already doing a lot of what Thierry originally proposed here. I know that I work with many people in Neutron to help offload things like bug triaging, gate failure analysis, etc. Without that, Neutron wouldn't scale. In effect what's proposed here is something we're already doing a lot of. I don't think forcing this on each project is a good way of doing this, because each project has different challenges and needs. This proposal isn’t about how teams are organized internally; it’s about how they interface with the other OpenStack teams. The cross-project teams need more direct coordination and participation than we are getting from some projects, and so we want the projects and PTLs to recognize that the areas Thierry has listed are responsibilities that need to be met. Someone needs to do these basic coordination tasks in order for the overall project to be successful. 
It’s up to each team to decide who fills a given role, but by identifying these points of contact the other teams don’t have to know whether you make those decisions by asking for volunteers, holding an election, or drawing straws. Doug Thanks, Kyle --John On Aug 22, 2014, at 6:02 PM, Anne Gentle a...@openstack.org wrote: On Fri, Aug 22, 2014 at 6:17 PM, Rochelle.RochelleGrober rochelle.gro...@huawei.com wrote: /flame-on Ok, this is funny to some of us in the community. The general populace of this community is so against the idea of management that they will use the term for a despotic dictator as a position name rather than manager. Sorry, but this needed to be said. /flame-off Specific comments in line: Thierry Carrez wrote: Hi everyone, We all know being a project PTL is an extremely busy job. That's because in our structure the PTL is responsible for almost everything in a project: - Release management contact - Work prioritization - Keeping bugs under control - Communicate about work being planned or done - Make sure the gate is not broken - Team logistics (run meetings, organize sprints) - ... Point of clarification: I've heard PTL=Project Technical Lead and PTL=Program Technical Lead. Which is it? It is kind of important as OpenStack grows, because the first is responsible for *a* project, and the second is responsible for all projects within a program. Now Program, formerly Project. I'd also like to set out as an example of a Program that is growing to encompass multiple projects, the Neutron Program. Look at how it is expanding: Multiple sub-teams for: LBAAS, DNAAS, GBP, etc. This model could be extended such that: - the subteam is responsible for code reviews, including the first +2 for design, architecture and code of the sub-project, always also keeping an eye out that the sub-project code continues to both integrate well with the program, and that the program continues to provide the needed code bits, architecture modifications and improvements, etc.
to support the sub-project. - the final +2/A would be from the Program reviewers to ensure that all integrate nicely together into a single, cohesive program. - This would allow sub-projects to have core reviewers, along with the program and be a good separation of duties. It would also help to increase the number of reviews moving to merged code. - Taken to a logical stepping stone, you would have project technical leads for each project, and they would make up a program council, with the program technical lead being the chair of the council. This is a way to offload a good chunk of PTL tactical responsibilities and help them focus more on the strategic. They end up being completely drowned in those day-to-day operational duties, miss the big picture, can't help in development that much anymore, get burnt out. Since you're either the PTL or not the PTL, you're very
Re: [openstack-dev] gettext question about oslo.i18n library
Hi Peng, We're using the spec process described in https://wiki.openstack.org/wiki/Oslo#Design_Proposals to manage blueprints now. You'll need to create a spec review as described in the wiki for this. Unfortunately, I don't think we've opened up Kilo specs yet, so you might want to either wait, or just put it in Juno for now and rebase to Kilo when that's available. -Ben On 08/26/2014 02:27 AM, Peng Wu wrote: Hi Doug, I created an i18n blueprint for oslo.i18n. URL: https://blueprints.launchpad.net/openstack-i18n/+spec/more-gettext-supports Could you review it? I am reading the django gettext code; I think I will use a similar approach for the contextual markers gettext function. Thanks, Peng Wu On Mon, 2014-08-18 at 10:05 -0400, Doug Hellmann wrote: Yes, that would be a good next step. Doug On Aug 17, 2014, at 10:03 PM, Peng Wu peng.e...@gmail.com wrote: Yes, I am interested in adding these missing gettext functions to the oslo.i18n library. Guess the next step is to create a blueprint for Kilo? Thanks, Peng Wu On Fri, 2014-08-15 at 16:02 -0400, Doug Hellmann wrote: On Aug 15, 2014, at 3:18 AM, Peng Wu peng.e...@gmail.com wrote: Hi, Recently I just read the code of the oslo.i18n library. The lazy translation idea is great! But I found a question about gettext contextual markers and plural forms, such as the pgettext and ungettext functions, see [3]. It seems the two gettext functions are missing in the oslo.i18n library. Is it correct? Or will it be supported? Thanks, Peng Wu You’re right, those are not present. We apparently haven’t used them anywhere, yet, because they weren’t exposed via the old gettextutils module in the incubator. We should add them. Are you interested in working on a blueprint for Kilo to do that? Doug Refer URL: 1. https://github.com/openstack/oslo.i18n 2. http://lists.openstack.org/pipermail/openstack-dev/2014-July/039217.html 3.
https://wiki.openstack.org/wiki/I18n/TranslatableStrings ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective
On 8/26/14, 4:49 AM, Maru Newby ma...@redhat.com wrote: On Aug 25, 2014, at 4:39 PM, Pradeep Kilambi (pkilambi) pkila...@cisco.com wrote: On 8/23/14, 5:36 PM, Maru Newby ma...@redhat.com wrote: On Aug 23, 2014, at 4:06 AM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote: On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com wrote: On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka ihrac...@redhat.com wrote: On 20/08/14 18:28, Salvatore Orlando wrote: Some comments inline. Salvatore On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote: Hi all, I've read the proposal for the incubator as described at [1], and I have several comments/concerns/suggestions on this. Overall, the idea of giving some space for experimentation that does not alienate parts of the community from Neutron is good. In that way, we may relax review rules and quicken turnaround for preview features without losing control of those features too much. Though the way it's to be implemented leaves several concerns, as follows: 1. From a packaging perspective, having a separate repository and tarballs seems not optimal. As a packager, I would rather deal with a single tarball instead of two. Meaning, it would be better to keep the code in the same tree. I know that we're afraid of shipping the code for which some users may expect the usual level of support and stability and compatibility. This can be solved by making it explicit that the incubated code is unsupported and used at the user's own risk. 1) The experimental code wouldn't probably be installed unless explicitly requested, and 2) it would be put in a separate namespace (like 'preview', 'experimental', or 'staging', as they call it in the Linux kernel world [2]). This would facilitate keeping commit history instead of losing it during graduation. Yes, I know that people don't like to be called experimental or preview or incubator...
And maybe a neutron-labs repo sounds more appealing than an 'experimental' subtree in the core project. Well, there are lots of EXPERIMENTAL features in the Linux kernel that we actively use (for example, btrfs is still considered experimental by Linux kernel devs, while being exposed as a supported option to RHEL7 users), so I don't see how that naming concern is significant. I think this is the whole point of the discussion around the incubator and the reason for which, to the best of my knowledge, no proposal has been accepted yet. I wonder where discussion around the proposal is running. Is it public? The discussion started out privately as the incubation proposal was put together, but it's now on the mailing list, in person, and in IRC meetings. Let's keep the discussion going on list now. In the spirit of keeping the discussion going, I think we probably need to iterate in practice on this idea a little bit before we can crystallize on the policy and process for this new repo. Here are a few ideas on how we can start this iteration: * Namespace for the new repo: Should this be in the neutron namespace, or a completely different namespace like neutron labs? Perhaps creating a separate namespace will help the packagers to avoid issues of conflicting package owners of the namespace. I don’t think there is a technical requirement to choose a new namespace. Python supports sharing a namespace, and packaging can support this feature (see: oslo.*). From what I understand there can be overlapping code between neutron and the incubator to override/modify existing python/config files. In which case, packaging (e.g. rpm) will raise a path conflict. So we probably will need to worry about namespaces? Doug's suggestion to use a separate namespace to indicate that the incubator codebase isn’t fully supported is a good idea and what I had in mind as a non-technical reason for a new namespace.
I still assert that the potential for path conflicts can be avoided easily enough, and is not a good reason on its own to use a different namespace. * Dependency on Neutron (core) repository: We would need to sort this out so that we can get UTs to run and pass in the new repo. Can we set the dependency on Neutron milestone releases? We already publish tarballs for the milestone releases, but I am not sure we publish these as packages to pypi. If not, could we start doing that? With this in place, the incubator would always lag the Neutron core by at most one milestone release. Given that it is possible to specify a dependency as a branch/hash/tag in a git repo [1], I’m not sure it’s worth figuring out how to target tarballs. Master branch of the incubation repo could then target the master branch of the Neutron repo and always be assured of being current, and then released versions could target milestone tags or released versions. 1:
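[Editor's note: the branch/hash/tag option in [1] presumably refers to pip's VCS requirement syntax. A hedged sketch of what the incubator's requirements entry could look like; the repo URL and refs below are illustrative only, not an actual incubator requirements file.]

```
# requirements fragment (illustrative): pin the Neutron dependency to a
# branch, a milestone tag, or an exact commit; pick exactly one line.
-e git+https://git.openstack.org/openstack/neutron@master#egg=neutron
# -e git+https://git.openstack.org/openstack/neutron@2014.2.b3#egg=neutron
# -e git+https://git.openstack.org/openstack/neutron@<commit-sha>#egg=neutron
```

Targeting master keeps the incubator current, while released incubator versions could pin a tag or sha, which is the trade-off Maru describes.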
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On 08/26/2014 05:13 AM, Thierry Carrez wrote: OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need ? In the thread were mentioned: - Bugs janitor (keep reported bugs under control) - Oslo liaison (already in place) - Security mule (VMT first point of contact) - Release secretary (communication with integrated release management) - Infrastructure contact (for gate and other infra issues) - Docs lieutenant (docs point of contact) Anita mentioned the 3rd party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role ? I think your assessment of specific to some projects is accurate. For instance, for Neutron Kyle wants to continue to be responsible for this role. Fine by me as long as I have a name. For Cinder, Jay Bryant attends the third party meeting and gives updates, Ramy has put in the most work on CIs and is a great resource for answering questions and Duncan holds the hammer on what stays in master but the third party meetings are too late for him to attend most times, so he co-ordinates with Jay. Again as long as I know who to talk to. For Nova? Well right now Nova is kind of hit and miss so I would appreciate a name there. Other projects? If you have vendors or ci accounts testing your projects, please get in touch. Thanks, Anita. Do we need someone to cover the QA space ? Anything else missing ? At first glance I don't think we need a role for logistics (chairing meetings and organizing meetups), design summit planning, roadmapping, user point of contact, or spokesperson -- as I expect the PTL to retain those roles anyway... 
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [horizon] JavaScript library support
Horizon has been actively moving away from having 3rd party JavaScript libraries bundled in the Horizon repo. Most have been removed, barring one or two exceptions. Moving forward, new JavaScript library dependencies should either use existing xstatic packages or have a new one created for the library. The dependency will then be treated like any other Python dependency in openstack/requirements. For more details, see the updated documentation: http://docs.openstack.org/developer/horizon/contributing.html#javascript-and-css-libraries David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
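Concretely, adding a JavaScript dependency then becomes a one-line Python requirement; the package name and version below are illustrative, not a real xstatic package:

```
# requirements.txt (sketch) -- a JS library consumed via its xstatic wrapper,
# versioned like any other Python dependency:
XStatic-SomeJSLib>=1.0.0
```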
Re: [openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?
Oh, after writing my response below I realized this is cross-posted between openstack and openstack-dev. Please don't do that. I suppose this probably belongs on the users list, but since I've already written the response I guess I'm not going to argue too much. :-) On 08/26/2014 07:36 AM, 严超 wrote: Hi, All: I've deployed undercloud and overcloud on some baremetals. All overcloud machines are deployed by undercloud. Then I tried to shut down the undercloud machines. After that, if I reboot one overcloud machine, it will never boot from net, AKA PXE used by undercloud. Yes, that's normal. With the way our baremetal deployments work today, the deployed systems always PXE boot. After deployment they PXE boot a kernel and ramdisk that use the deployed hard disk image, but it's still a PXE boot. Is that what TripleO is designed to be ? We can never shut down undercloud machines for maintenance of the overcloud ? Please help me clarify that. Yes, that's working as intended at the moment. I recall hearing that there were plans to eliminate the PXE requirement after deployment, but you'd have to talk to the Ironic team about that. Also, I don't think it was ever the intent of TripleO that the undercloud would be shut down after deployment. The idea is that you use the undercloud to manage the overcloud machines, so if you want to reboot one you do it via the undercloud nova, not directly on the system itself. Best Regards! Chao Yan -- My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727 My Weibo: http://weibo.com/herewearenow -- ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
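As a sketch of what "reboot one via the undercloud nova" looks like in practice (the credentials file name and server name here are illustrative):

```shell
# Load the undercloud's credentials, then reboot the overcloud node
# through the undercloud's nova API rather than power-cycling the box:
source ~/stackrc
nova reboot overcloud-controller-0   # server name is hypothetical
```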
Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests
On 08/26/2014 10:14 AM, Zane Bitter wrote: Steve Baker has started the process of moving Heat tests out of the Tempest repository and into the Heat repository, and we're looking for some guidance on how they should be packaged in a consistent way. Apparently there are a few projects already packaging functional tests in the package projectname.tests.functional (alongside projectname.tests.unit for the unit tests). That strikes me as odd in our context, because while the unit tests run against the code in the package in which they are embedded, the functional tests run against some entirely different code - whatever OpenStack cloud you give it the auth URL and credentials for. So these tests run from the outside, just like their ancestors in Tempest do. There's all kinds of potential confusion here for users and packagers. None of it is fatal and all of it can be worked around, but if we refrain from doing the thing that makes zero conceptual sense then there will be no problem to work around :) Thanks, Zane. The point of moving functional tests to projects is to be able to run more of them in gate jobs for those projects, and allow tempest to survive being stretched-to-breaking horizontally as we scale to more projects. At the same time, there are benefits to the tempest-as-all-in-one-functional-and-integration-suite that we should try not to lose: 1. Strong integration testing without thinking too hard about the actual dependencies 2. Protection from mistaken or unwise api changes (tempest two-step required) 3. Exportability as a complete blackbox functional test suite that can be used by Rally, RefStack, deployment validation, etc. I think (1) may be the most challenging because tests that are moved out of tempest might be testing some integration that is not being covered by a scenario. We will need to make sure that tempest actually has a complete enough set of tests to validate integration. 
Even if this is all implemented in a way where tempest can see in-project tests as plugins, there will still not be time to run them all as part of tempest on every commit to every project, so a selection will have to be made. (2) is quite difficult. In Atlanta we talked about taking a copy of functional tests into tempest for stable apis. I don't know how workable that is but don't see any other real options except vigilance in reviews of patches that change functional tests. (3) is what Zane was addressing. The in-project functional tests need to be written in a way that they can, at least in some configuration, run against a real cloud. I suspect from reading the previous thread about In-tree functional test vision that we may actually be dealing with three categories of test here rather than two: * Unit tests that run against the package they are embedded in * Functional tests that run against the package they are embedded in * Integration tests that run against a specified cloud i.e. the tests we are now trying to add to Heat might be qualitatively different from the projectname.tests.functional suites that already exist in a few projects. Perhaps someone from Neutron and/or Swift can confirm? That seems right, except that I would call the third functional tests and not integration tests, because the purpose is not really integration but deep testing of a particular service. Tempest would continue to focus on integration testing. Is there some controversy about that? The second category could include whitebox tests. I don't know about swift, but in neutron the intent was to have these tests be configurable to run against a real cloud, or not. Maru Newby would have details. I'd like to propose that tests of the third type get their own top-level package with a name of the form projectname-integrationtests (second choice: projectname-tempest on the principle that they're essentially plugins for Tempest). 
How would people feel about standardising that across OpenStack? +1 But I would not call it integrationtests for the reason given above. -David thanks, Zane. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On 08/26/2014 10:04 AM, Doug Hellmann wrote: On Aug 26, 2014, at 5:13 AM, Thierry Carrez thie...@openstack.org wrote: OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need ? In the thread were mentioned: - Bugs janitor (keep reported bugs under control) - Oslo liaison (already in place) - Security mule (VMT first point of contact) - Release secretary (communication with integrated release management) - Infrastructure contact (for gate and other infra issues) - Docs lieutenant (docs point of contact) Anita mentioned the 3rd party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role ? Do we need someone to cover the QA space ? Anything else missing ? It seems the QA team is also feeling pressure due to the growing community, so it seems wise to ensure every team has someone designated to help with coordinating work on QA projects. Very much so, and having such a someone would help. But I also think that the moving of functional tests to be housed in-project will help even more. -David Doug At first glance I don't think we need a role for logistics (chairing meetings and organizing meetups), design summit planning, roadmapping, user point of contact, or spokesperson -- as I expect the PTL to retain those roles anyway... -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On Tue, Aug 26, 2014 at 10:04:41AM -0400, Doug Hellmann wrote: On Aug 26, 2014, at 5:13 AM, Thierry Carrez thie...@openstack.org wrote: OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need ? In the thread were mentioned: - Bugs janitor (keep reported bugs under control) - Oslo liaison (already in place) - Security mule (VMT first point of contact) - Release secretary (communication with integrated release management) - Infrastructure contact (for gate and other infra issues) - Docs lieutenant (docs point of contact) Anita mentioned the 3rd party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role ? Do we need someone to cover the QA space ? Anything else missing ? It seems the QA team is also feeling pressure due to the growing community, so it seems wise to ensure every team has someone designated to help with coordinating work on QA projects. Yes I agree, I was actually planning to start a liaison (I guess I'll have to come up with a different name...) system similar to oslo at some point soon. We discussed it during the QA meeting last week. [1] My plan was actually going to be starting a thread on that this week before Thierry beat me to it. -Matt Treinish [1] http://eavesdrop.openstack.org/meetings/qa/2014/qa.2014-08-21-22.01.html ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] [keystone] pysaml2/xmlsec1 dep blocking keystone-to-keystone federation
On Tue, Aug 26, 2014 at 6:44 AM, Sean Dague s...@dague.net wrote: On 08/26/2014 05:38 AM, Thierry Carrez wrote: Hi keystone/infra, One key upcoming Juno feature (Keystone to keystone federation) is currently blocked on adding pysaml2 to requirements: https://review.openstack.org/#/c/113294/ It was -1ed by Doug after the discussion at the release meeting last week, where the xmlsec1 dependency was raised as a potential infra issue. There don't seem to be many good alternatives though. Steve mentioned saml, but it's a bit alpha, and I have no idea how much work would be required to use that instead of pysaml2 at this point. How blocking is the xmlsec1 dependency from an Infra perspective ? How doable would a migration to saml at this point be ? I'm trying to find a solution so that we can ship this feature :) I don't think this has anything to do with Infra. xmlsec1 is included in Debian / Ubuntu and Fedora. I think the complaint was about whether this library existed for MacOSX, which honestly, I *don't* think is a valid argument against adding a requirement as that's not a target environment for OpenStack. FWIW, it's available on OS X via homebrew: $ brew install libxmlsec1 I'm +2 on this moving forward. I feel that the keystone team answered the questions needed. Thanks! -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] The future of the integrated release
On Mon, Aug 25, 2014 at 8:36 AM, Sean Dague s...@dague.net wrote: On 08/20/2014 12:37 PM, Zane Bitter wrote: On 11/08/14 05:24, Thierry Carrez wrote: So the idea that being (and remaining) in the integrated release should also be judged on technical merit is a slightly different effort. It's always been a factor in our choices, but like Devananda says, it's more difficult than just checking a number of QA/integration checkboxes. In some cases, blessing one project in a problem space stifles competition, innovation and alternate approaches. In some other cases, we reinvent domain-specific solutions rather than standing on the shoulders of domain-specific giants in neighboring open source projects. I totally agree that these are the things we need to be vigilant about. Stifling competition is a big worry, but it appears to me that a lot of the stifling is happening even before incubation. Everyone's time is limited, so if you happen to notice a new project on the incubation trajectory doing things in what you think is the Wrong Way, you're most likely to either leave some drive-by feedback or to just ignore it and carry on with your life. What you're most likely *not* to do is to start a competing project to prove them wrong, or to jump in full time to the existing project and show them the light. It's really hard to argue against the domain experts too - when you're acutely aware of how shallow your knowledge is in a particular area it's very hard to know how hard to push. (Perhaps ironically, since becoming a PTL I feel I have to be much more cautious in what I say too, because people are inclined to read too much into my opinion - I wonder if TC members feel the same pressure.) I speak from first-hand instances of guilt here - for example, I gave some feedback to the Mistral folks just before the last design summit[1], but I haven't had time to follow it up at all. 
I wouldn't be a bit surprised if they showed up with an incubation request, a largely-unchanged user interface and an expectation that I would support it. The result is that projects often don't hear the feedback they need until far too late - often when they get to the incubation review (maybe not even their first incubation review). In the particularly unfortunate case of Marconi, it wasn't until the graduation review. (More about that in a second.) My best advice to new projects here is that you must be like a ferret up the pant-leg of any negative feedback. Grab hold of any criticism and don't let go until you have either converted the person giving it into your biggest supporter, been converted by them, or provoked them to start a competing project. (Any of those is a win as far as the community is concerned.) Perhaps we could consider a space like a separate mailing list (openstack-future?) reserved just for announcements of Related projects, their architectural principles, and discussions of the same? They certainly tend to get drowned out amidst the noise of openstack-dev. (Project management, meeting announcements, and internal project discussion would all be out of scope for this list.) As for reinventing domain-specific solutions, I'm not sure that happens as often as is being made out. IMO the defining feature of IaaS that makes the cloud the cloud is on-demand (i.e. real-time) self-service. Everything else more or less falls out of that requirement, but the very first thing to fall out is multi-tenancy and there just aren't that many multi-tenant services floating around out there. There are a couple of obvious strategies to deal with that: one is to run existing software within a tenant-local resource provisioned by OpenStack (Trove and Sahara are examples of this), and the other is to wrap a multi-tenancy framework around an existing piece of software (Nova and Cinder are examples of this). 
(BTW the former is usually inherently less satisfying, because it scales at a much coarser granularity.) The answer to a question of the form: Why do we need OpenStack project $X, when open source project $Y already exists? is almost always: Because $Y is not multi-tenant aware; we need to wrap it with a multi-tenancy layer with OpenStack-native authentication, metering and quota management. That even allows us to set up an abstraction layer so that you can substitute $Z as the back end too. This is completely uncontroversial when you substitute X, Y, Z = Nova, libvirt, Xen. However, when you instead substitute X, Y, Z = Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial. I'm all in favour of a healthy scepticism, but I think we've passed that point now. (How would *you* make an AMQP bus multi-tenant?) To be clear, Marconi did make a mistake. The Marconi API presented semantics to the user that excluded many otherwise-obvious choices of back-end plugin
Re: [openstack-dev] [OpenStack][Nova]May be performance issues of connect_volume in Nova
On Tue, Aug 26, 2014 at 5:36 AM, Wang Shen ws1...@gmail.com wrote: Hi, All I have done some work to test the performance of LUN scanning, using iscsiadm with --rescan like Nova does. In my test, a host was connected to a large number of LUNs, more than 1000. Because --rescan causes the kernel to scan all of the LUNs connected to the host, it takes several minutes to complete the scanning. According to connect_volume at line 284 in nova/virt/libvirt/volume.py: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L252 Nova uses iscsiadm with --rescan to detect a new volume, but this command scans all of the LUNs, including all the others already connected to this host. So if a host has a large number of LUNs connected to it, connect_volume will be very slow. I think connect_volume needn't scan all of the LUNs; it only needs to scan the LUN specified by connection_info. Is it necessary to discuss a more efficient way to improve this issue? It sounds like this is a bug; we use https://bugs.launchpad.net/nova to track bugs so they don't get lost. -- Best wishes == Peter.W == ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
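For reference, the targeted alternative being suggested exists in the kernel's sysfs interface: writing a single "channel target lun" triple to a SCSI host's scan file scans only that LUN, instead of a full rescan that walks every attached LUN. The host number and LUN below are hypothetical; in the Nova case they would come from connection_info:

```shell
# Scan only LUN 42 on channel 0, target 0 of SCSI host 5, rather than
# `iscsiadm ... --rescan`, which rescans every LUN on the session:
echo "0 0 42" > /sys/class/scsi_host/host5/scan
```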
Re: [openstack-dev] [all] The future of the integrated release
On Wed, Aug 20, 2014 at 2:25 AM, Eoghan Glynn egl...@redhat.com wrote: Additional cross-project resources can be ponied up by the large contributor companies, and existing cross-project resources are not necessarily divertable on command. Sure, additional cross-project resources can and need to be ponied up, but I am doubtful that will be enough. OK, so what exactly do you suspect wouldn't be enough, for what exactly? I am not sure what would be enough to get OpenStack back in a position where more developers/users are happier with the current state of affairs. Which is why I think we may want to try several things. Is it the likely number of such new resources, or the level of domain expertise that they can realistically be expected to bring to the table, or the period of time to on-board them, or something else? Yes, all of the above. Hi Joe, In coming to that conclusion, have you thought about and explicitly rejected all of the approaches that have been mooted to mitigate those concerns? Is there a strong reason why the following non-exhaustive list would all be doomed to failure: * encouraging projects to follow the successful Sahara model, where one core contributor also made a large contribution to a cross-project effort (in this case infra, but could be QA or docs or release management or stable-maint ... etc) [this could be seen as essentially offsetting the cost of that additional project drawing from the cross-project well] * assigning liaisons from each project to *each* of the cross-project efforts [this could be augmented/accelerated with one of the standard on-boarding approaches, such as a designated mentor for the liaison or even an immersive period of secondment] * applying back-pressure via the board representation to make it more likely that the appropriate number of net-new cross-project resources are forthcoming [c.f.
Stef's we're not amateurs or volunteers mail earlier on this thread] All of these are good ideas and I think we should try them. I am just afraid this won't be enough. Imagine for a second that the gate is always stable, and none of the existing cross-project efforts are short-staffed. OpenStack would still have a pretty poor user experience and return errors in production. Our 'official' CLIs are poor, our logs are cryptic, we have scaling issues (by number of nodes), people are concerned about operational readiness [0], upgrades are very painful, etc. Solving the issue of scaling cross-project efforts is not enough; we still have to solve a whole slew of usability issues. [0] http://robhirschfeld.com/2014/08/04/oscon-report/ I really think we need to do better than dismissing out-of-hand the idea of beefing up the cross-project efforts. If it won't work for specific reasons, let's get those reasons out onto the table and make a data-driven decision on this. And which cross-project concern do you think is most strained by the current set of projects in the integrated release? Is it: * QA * infra * release management * oslo * documentation * stable-maint or something else? Good question. IMHO QA, Infra and release management are probably the most strained. OK, well let's brain-storm on how some of those efforts could potentially be made more scalable. Should we for example start to look at release management as a program unto itself, with a PTL *and* a group of cores to divide and conquer the load? (the hands-on rel mgmt for the juno-2 milestone, for example, was delegated - is there a good reason why such delegation wouldn't work as a matter of course?) Should QA programs such as grenade be actively seeking new cores to spread the workload? (until recently, this had the effective minimum of 2 cores, despite now being a requirement for integrated projects) Could the infra group potentially delegate some of the workload onto the distro folks?
(given that it's strongly in their interest to have their distro represented in the CI gate.) None of the above ideas may make sense, but it doesn't feel like every avenue has been explored here. I for one don't feel entirely satisfied that every potential solution to cross-project strain was fully thought out in advance of the de-integration being presented as the solution. Just my $0.02 ... Cheers, Eoghan [on vacation with limited connectivity] But I also think there is something missing from this list. Many of the projects are hitting similar issues and end up solving them in different ways, which just leads to more confusion for the end user. Today we have a decent model for rolling out cross-project libraries (Oslo) but we don't have a good way of having broader cross-project discussions such as: API standards (such as discoverability of features), logging standards, aligning on concepts
Re: [openstack-dev] What does NASA not using OpenStack mean to OS's future
Don't feed the troll. :) :)= On Monday, August 25, 2014 12:39 PM, Joshua Harlow harlo...@outlook.com wrote: So to see if we can get something useful from this thread. What was your internal analysis, can it be published? Even negative analysis is useful to make openstack better... It'd be nice to have some details on what you found, what u didn't find, so that we can all improve... After all that is what it's all about. -Josh On Aug 25, 2014, at 11:13 AM, Aryeh Friedman aryeh.fried...@gmail.com wrote: If I was doing that then I would be promoting the platform by name (which I am not). I was just pointing out that in our own internal analysis OS came in dead last among all the open source IaaS/PaaS's (the current version of mine is not #1 btw) On Mon, Aug 25, 2014 at 2:03 PM, Ian Wells ijw.ubu...@cack.org.uk wrote: On 25 August 2014 10:34, Aryeh Friedman aryeh.fried...@gmail.com wrote: Do you call Martin Meckos having no clue... he is the one that leveled the second-worst criticism after mine... or is Eucalyptus not one of the founding members of OpenStack (after all many of the glance commands still use its name)? You appear to be trolling, and throwing around amazingly easy-to-disprove 'factoids', in an inappropriate forum, in order to drum up support for your own competing open source cloud platform. Please stop. Your time would be much better spent improving your platform rather than coming up with frankly bizarre criticism of the competitors. -- Ian. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Aryeh M.
Friedman, Lead Developer, http://www.PetiteCloud.org ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [rally] Introduction to feature request mechanism in Rally
On Tue, Aug 26, 2014 at 5:31 AM, Boris Pavlovic bo...@pavlovic.me wrote: Hi stackers, It goes without saying that end users who are actually using your tool/project/service usually don't have enough time to join the community, understand all the technical details and the road map of the project, and after that contribute code that implements their features... But it's well known that their experience is very useful and important for the project. Especially for such users, who can just say what is missing (or not working), we designed a very simple way to request a new feature from the Rally team. The end user should add a proposal for the new feature: https://github.com/stackforge/rally/tree/master/doc/feature_request It's very similar to specs, but much simpler for end users, because the only thing they need to fill in is a description of what is needed in Rally and why. Why not just use wishlist bugs in launchpad? The launchpad definition of Wishlist is 'Not a bug. It's an enhancement/new feature.' Thoughts? Best regards, Boris Pavlovic ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
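To make the "what and why" shape concrete, a hypothetical entry under doc/feature_request might be as small as this (the title, use case, and exact file format are illustrative, not taken from the Rally repo):

```
Title: Historical comparison of benchmark runs

Use case: An operator wants to see whether performance regressed
between two deployments without re-reading raw task output.

Problem description: Rally currently reports each run on its own;
there is no built-in way to diff two runs side by side.
```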
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
On Aug 26, 2014, at 11:28 AM, Matthew Treinish mtrein...@kortar.org wrote: On Tue, Aug 26, 2014 at 10:04:41AM -0400, Doug Hellmann wrote: On Aug 26, 2014, at 5:13 AM, Thierry Carrez thie...@openstack.org wrote: OK, now that we have evacuated the terminology issue (we'll use liaison or janitor or secretary, not czar), and side-stepped the offtopic development (this is not about suppressing PTLs, just a framework to let them delegate along predetermined lines if they want to)... which of those unnamed roles do we need ? In the thread were mentioned: - Bugs janitor (keep reported bugs under control) - Oslo liaison (already in place) - Security mule (VMT first point of contact) - Release secretary (communication with integrated release management) - Infrastructure contact (for gate and other infra issues) - Docs lieutenant (docs point of contact) Anita mentioned the 3rd party space person, but I wonder if it would not be specific to some projects. Would it actually be separate from the Infra contact role ? Do we need someone to cover the QA space ? Anything else missing ? It seems the QA team is also feeling pressure due to the growing community, so it seems wise to ensure every team has someone designated to help with coordinating work on QA projects. Yes I agree, I was actually planning to start a liaison (I guess I'll have to come up with a different name...) system similar to oslo at some point soon. We discussed it during the QA meeting last week. [1] My plan was actually going to be starting a thread on that this week before Thierry beat me to it. In the spirit of collaboration, the Oslo team is willing to share the term “liaison” with the QA team if you would like to use it. 
;-) Doug -Matt Treinish [1] http://eavesdrop.openstack.org/meetings/qa/2014/qa.2014-08-21-22.01.html ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1 On 8/26/14, 6:54 AM, Tim Simpson tim.simp...@rackspace.com wrote: +1 From: Sergey Gotliv [sgot...@redhat.com] Sent: Tuesday, August 26, 2014 8:11 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Strong +1 from me! -Original Message- From: Nikhil Manchanda [mailto:nik...@manchanda.me] Sent: August-26-14 3:48 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Hello folks: I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core. Amrith has been working with Trove for a while now. He has been a consistently active reviewer, and has provided insightful comments on numerous reviews. He has submitted quality code for multiple bug-fixes in Trove, and most recently drove the audit and clean-up of log messages across all Trove components. https://review.openstack.org/#/q/reviewer:amrith,n,z https://review.openstack.org/#/q/owner:amrith,n,z Please respond with +1/-1, or any further comments. Thanks, Nikhil ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1 On Tue, Aug 26, 2014 at 8:54 AM, Tim Simpson tim.simp...@rackspace.com wrote: +1
[openstack-dev] [neutron][VMware NSX CI] How to see the log for failure.
Hi, The VMware NSX CI is failing and I'm not able to get any failure log to find the reason for the failure. The message that I get is this: VMware NSX CI has posted comments on this change. Patch Set 24: Verified-1 You might have hit the following known bug(s): bug #1271230 Add review comment recheck-vmware to retrigger a new build. Is the failure because of bug #1271230 or some other reason? It would be great if someone could point me to the failure log. Thanks, Nader.
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1 On Tue, Aug 26, 2014 at 11:43 AM, Robert Myers myer0...@gmail.com wrote: +1 On Tue, Aug 26, 2014 at 8:54 AM, Tim Simpson tim.simp...@rackspace.com wrote: +1
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
+1 On Tue, Aug 26, 2014 at 1:55 PM, Vipul Sabhaya vip...@gmail.com wrote: +1 On Tue, Aug 26, 2014 at 11:43 AM, Robert Myers myer0...@gmail.com wrote: +1
Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP
Just want to clarify something: this public-IP-as-fixed-IP setup is only for the external-facing interfaces on a set of VM instances. At the same time, the majority of interfaces on the same set of VM instances will not have public IPs, and their subnets are isolated networks. Will this change your conclusion about leaving gateway_external_network_id blank for the L3 agent? On Mon, Aug 25, 2014 at 1:07 AM, Kevin Benton blak...@gmail.com wrote: I think this will depend on the deployment type for the L3 agent. If gateway_external_network_id is left blank for the L3 agent, the external network is VLAN-tagged just like any regular network and doesn't have an independent bridge.[1] In that deployment scenario it should work fine. On Sun, Aug 24, 2014 at 9:30 AM, Mohammad Banikazemi m...@us.ibm.com wrote: Would this work? We used to have warnings in Neutron docs indicating that instances should not be attached to external networks: It is important to understand that you should not attach the instance to Ext-Net directly. Instead, you must use a floating IP to make it accessible from the external network. In this particular case and with the OVS plugin, the traffic on the external network which now hosts tenant VMs (on OpenStack compute nodes) should get routed from br-int to the external bridge br-ex using, for example, the appropriate VLAN id (what if the external network does not use VLAN?) and then to the external network without doing the NATing. Would this traffic go through the veth pair connecting br-int and br-ex? Mohammad
From: Kevin Benton blak...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 08/23/2014 01:37 AM Subject: Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP -- Yes, you should be able to create a shared/external network within Neutron to accomplish this. On Fri, Aug 22, 2014 at 7:25 AM, Bao Wang bywan...@gmail.com wrote: Thank you for your response. Could this be done natively with OpenStack Neutron, or does it have to be done manually outside Neutron? As we are expecting to orchestrate hundreds of NFV instances with similar network configurations, programmability is another key element. On Thu, Aug 21, 2014 at 3:52 PM, Kevin Benton blak...@gmail.com wrote: Have you tried making the external network shared as well? Instances that need a private IP with NAT attach to an internal network and go through the router like normal. Instances that need a public IP without NAT would just attach directly to the external network. On Thu, Aug 21, 2014 at 7:06 AM, Bao Wang bywan...@gmail.com wrote: I have a very complex OpenStack deployment for NFV. It could not be deployed as flat. It will have a lot of isolated private networks. Some interfaces of a group of VM instances will need a bridged network with their fixed IP addresses to communicate with the outside world, while other interfaces on the same set of VMs should stay isolated with real private/fixed IP addresses. What happens if we use public IP addresses directly as fixed IPs on those interfaces? Will this work with OpenStack Neutron networking? Will OpenStack do NAT automatically on those? Overall, the requirement is to use the fixed/public IP to communicate with the outside directly on some interfaces of some VM instances while keeping others private.
The floating IP is not an option here -- Kevin Benton
Re: [openstack-dev] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps
Hey Stackers! Wait! =) Let me ask something... Why are you guys using Docker within a VM?! What is the point of doing such a thing? I thought Docker was here to entirely replace the virtualization layer, bringing a bare-metal cloud. Am I right?! Tks! Thiago On 26 August 2014 05:45, Marouen Mechtri mechtri.mar...@gmail.com wrote: Hi Angus, We are not using the nova-docker driver to deploy Docker containers. In our manual, we are using Heat (thanks to the Docker plugin) to deploy Docker containers, and Nova is just used to deploy the VM. Inside this VM, Heat deploys the Docker software. The figure below describes the interactions between the different components. Regards, Marouen 2014-08-26 0:13 GMT+02:00 Angus Salkeld asalk...@mirantis.com: This seems misleading, as there is no description of setting up nova-docker or using the Heat Docker container. -Angus On Tue, Aug 26, 2014 at 5:56 AM, Marouen Mechtri mechtri.mar...@gmail.com wrote: Hi all, I want to present you our guide for Docker container deployment with OpenStack Heat. In this guide we dockerize and deploy a LAMP application on two containers. https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Docker-containers-deployment-with-OpenStack-Heat.rst https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst Hope it will be helpful for many people. Please let us know your opinion about it. Regards, Marouen Mechtri
Re: [openstack-dev] [neutron][VMware NSX CI] How to see the log for failure.
Hi Nader, Sorry about that failure. We have temporarily stopped mine sweeper for Neutron while we update our devstack images. Unfortunately, some jobs did not complete properly, and therefore you had failures without logs being reported. The situation should be back to normal soon, and you should be able to do a recheck. Regards, Salvatore On 26 August 2014 20:44, Nader Lahouti nader.laho...@gmail.com wrote: Hi, The VMware NSX CI is failing and I'm not able to get any failure log to find the reason for the failure.
Re: [openstack-dev] [neutron][VMware NSX CI] How to see the log for failure.
On 08/26/2014 03:43 PM, Salvatore Orlando wrote: Hi Nader, Sorry about that failure. We have temporarily stopped mine sweeper for neutron while we update our devstack images. However, unfortunately some jobs did not complete properly, and therefore you had failures without logs being reported. The situation should be back to normal soon, and you should be able to do a recheck. Regards, Salvatore Hi Salvatore, Could someone update the system status for VMware on this page, please: https://wiki.openstack.org/wiki/ThirdPartySystems Thank you, Anita.
Re: [openstack-dev] [neutron] [third-party] What tests are required to be run
Hi Edgar, We are also facing CI issues when the Neutron patch set is not rebased with the latest changes. For example, the CI audit patch that you posted today (https://review.openstack.org/#/c/114393/) is not rebased with the Neutron test_lib related changes. We had refactored the Brocade Vyatta plugin unit tests to accommodate the test_lib related changes, but our plugin is not compatible with the patch you have posted, so CI is failing. I had a discussion with Dane Leblanc on this. We also need to post the SKIPPED status for such patch sets. We will also experiment with Kevin's suggestion. Thanks, Karthik -Original Message- From: Dane Leblanc (leblancd) [mailto:lebla...@cisco.com] Sent: Monday, August 25, 2014 10:02 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to be run Edgar, Kyle: Kevin's suggestion should work for me (still hashing out the implementation). I've added an item to the 3rd Party IRC agenda anyway to discuss this corner case. Thanks! Dane -Original Message- From: Edgar Magana [mailto:edgar.mag...@workday.com] Sent: Monday, August 25, 2014 12:44 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to be run Dane, I will second Kyle's idea. Let's discuss this during today's IRC meeting if Kevin's suggestion does not work for you. Thanks, Edgar On 8/25/14, 10:08 AM, Kyle Mestery mest...@mestery.com wrote: Dane, thanks for all the great work you're doing in the third-party CI area. It's great to see you working to share this knowledge with others as well! Did Kevin's idea work for you to move past this issue? If not, I suggest you put an item on the neutron meeting agenda today and we cover this there. You could put the item on the third-party meeting agenda as well. Thanks!
Kyle On Sun, Aug 24, 2014 at 9:43 AM, Dane Leblanc (leblancd) lebla...@cisco.com wrote: Hi Kevin: Thanks, this is a great idea! I may try just a slight variation of this concept. Maybe your idea could be the recommended way to create a 3rd party CI for plugins that are just being introduced and need to limit the scope of testing to a small set of plugin-related commits (or plugins blocked on a certain fix). Thanks, Dane From: Kevin Benton [mailto:blak...@gmail.com] Sent: Saturday, August 23, 2014 5:47 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to be run Can you disable posting of results directly from your Jenkins/Zuul setup and have a script that just checks the log file for special markers to determine if the vote should be FAILED/PASSED/SKIPPED? Another advantage of this approach is that it gives you an opportunity to detect when a job just failed to set up due to infrastructure reasons and trigger a recheck without first posting a failure to gerrit. On Fri, Aug 22, 2014 at 3:06 PM, Dane Leblanc (leblancd) lebla...@cisco.com wrote: Thanks Edgar for updating the APIC status!!! Edgar and Kyle: PLEASE NOTE: I need your understanding and advice on the following: We are still stuck with a problem stemming from a design limitation of Jenkins that prevents us from being compliant with the Neutron 3rd Party CI requirements for our DFA CI. The issue is that Jenkins only allows our scripts to (programmatically) return either Success or Fail. There is no option to return Aborted, Not Tested, or Skipped. Why does this matter? The DFA plugin is just being introduced, and the initial DFA-enabling change sets have not yet been merged. Therefore, all other change sets will fail our Tempest tests, since they are not DFA-enabled. Similarly, we were recently blocked in our APIC CI by a critical bug, causing all change sets without this fix to fail on our APIC testbed.
In these cases, we would like to enter a throttled or partially blocked mode, where we would skip testing on change sets we know will fail, and (in an ideal world) signal this shortcoming to Gerrit, e.g. by returning a Skipped status. Unfortunately, this option is not available in Jenkins scripts as Jenkins is currently designed. The only options we have available are Success and Fail, both of which are misleading. We would also incorrectly report success or failure on one of the following test commits: https://review.openstack.org/#/c/114393/ https://review.openstack.org/#/c/40296/ I've brought this issue up on the openstack-infra IRC, and jeblair confirmed the Jenkins limitation, but asked me to get consensus from the Neutron community as to this being a problem/requirement. I've also sent out an e-mail on the Neutron ML trying to start a discussion on this problem (no traction). I plan on bringing this up in the 3rd Party CI IRC on Monday, assuming there is time permitted in
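Kevin's suggestion earlier in the thread (let Jenkins post nothing, and have a wrapper script scan the console log for special markers to decide the vote) can be sketched roughly as follows. This is only an illustration: the marker strings and vote labels are hypothetical, not from any real CI setup.

```python
# Hedged sketch of the marker-scanning approach: Jenkins itself posts no
# vote; a wrapper script inspects the console log for special markers and
# decides what to report. Marker strings and vote labels are hypothetical.

MARKERS = {
    "CI_RESULT: PASSED": "Verified+1",
    "CI_RESULT: FAILED": "Verified-1",
    "CI_RESULT: SKIPPED": None,           # post no vote for skipped runs
    "CI_RESULT: INFRA_ERROR": "recheck",  # retrigger instead of voting
}

def decide_vote(log_text: str):
    """Return the action for a finished job based on log markers.

    Falls back to "recheck" when no marker is found, treating an absent
    marker as an infrastructure failure rather than a code failure, so
    the system never posts a misleading -1.
    """
    for marker, action in MARKERS.items():
        if marker in log_text:
            return action
    return "recheck"

print(decide_vote("tempest run ok\nCI_RESULT: PASSED\n"))         # Verified+1
print(decide_vote("setup exploded before tests started"))         # recheck
print(decide_vote("change not DFA-enabled\nCI_RESULT: SKIPPED"))  # None
```

This sidesteps the Jenkins limitation Dane describes: the Skipped case simply results in no comment being posted to Gerrit at all, rather than a misleading Success or Fail.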
Re: [openstack-dev] [Infra] Meeting Tuesday August 26th at 19:00 UTC
On Mon, Aug 25, 2014 at 11:37 AM, Elizabeth K. Joseph l...@princessleia.com wrote: The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday August 26th, at 19:00 UTC in #openstack-meeting Thanks to everyone who joined us for the meeting, log and minutes available here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-26-19.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-26-19.01.txt Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-26-19.01.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2 http://www.princessleia.com
Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP
No, the gateway_external_network_id option just refers to how your network is deployed. If the external network uses a regular segmentation identifier like the rest of the networks, this will work. If not, it won't, because the instances will still try to use a segmentation identifier. In other words, if you have a separate physical interface for external networks on your L3 agent nodes, this will not work. On Aug 26, 2014 12:14 PM, Bao Wang bywan...@gmail.com wrote: Just want to clarify something: this public-IP-as-fixed-IP setup is only for the external-facing interfaces on a set of VM instances... -- Kevin Benton
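Kevin's shared-external-network suggestion can be made concrete with the request bodies a client would send to the Neutron API. This is a sketch only: the network names, CIDR, and UUID placeholder are hypothetical, and whether direct attachment works depends on the deployment, as discussed above.

```python
# Hedged sketch: Neutron API request bodies for the setup discussed in this
# thread -- a shared external network that instances can attach to directly
# (public fixed IPs, no NAT), alongside an isolated internal network. Names
# and CIDRs are hypothetical; with python-neutronclient these dicts would be
# passed to neutron.create_network() / neutron.create_subnet().

shared_ext_net = {
    "network": {
        "name": "ext-net",
        "shared": True,            # tenants may attach instances directly
        "router:external": True,   # also usable as a router gateway
    }
}

ext_subnet = {
    "subnet": {
        "network_id": "<ext-net-uuid>",  # filled in from the create response
        "ip_version": 4,
        "cidr": "203.0.113.0/24",        # public range; fixed IPs, no NAT
        "enable_dhcp": False,            # often disabled on external nets
    }
}

isolated_net = {
    "network": {
        "name": "nfv-internal",
        "shared": False,           # stays private to the tenant
    }
}

# Instances needing a public fixed IP attach a port on "ext-net"; the rest
# of their interfaces attach to "nfv-internal" and remain isolated.
```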
Re: [openstack-dev] [neutron][OpenDayLight CI] Is this a real failure? The failure log is gone
Thanks Salvatore for the quick response on the NSX CI failure! OpenDaylight CI experts, we are also seeing a failure on the OpenDaylight CI, but it appears there is no log at the link below. This is an urgent issue for us; we would appreciate your prompt help here. OpenDaylight CI 9:35 AM Patch Set 24: Verified-1 OpenDaylight Jenkins Build Failed https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/1763/ : FAILURE Thanks, Milton From: Salvatore Orlando [mailto:sorla...@nicira.com] Sent: Tuesday, August 26, 2014 12:44 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][VMware NSX CI] How to see the log for failure. Hi Nader, Sorry about that failure. We have temporarily stopped mine sweeper for neutron while we update our devstack images. However, unfortunately some jobs did not complete properly, and therefore you had failures without logs being reported. The situation should be back to normal soon, and you should be able to do a recheck. Regards, Salvatore
Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds
On Tue, Aug 26, 2014 at 7:59 PM, Tim Bell tim.b...@cern.ch wrote: From: Michael Still [mailto:mi...@stillhq.com] Sent: 25 August 2014 23:38 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds ... Mark McClain and I discussed a possible plan for nova-network to Neutron upgrades at the Ops Meetup today, and it seemed generally acceptable. It defines a cold migration as freezing the ability to create or destroy instances during the upgrade, and then requiring a short network outage for each instance in the cell. This is why I'm trying to understand the no-downtime use case better. Is it literally no downtime, ever? Or is it a simpler no simultaneous downtime for instances? Michael The simultaneous downtime across the cloud is the one we really need to avoid. Short network outages (depending on how you define short) can be handled, along with blocking API operations for short periods. The other item was how to stage the upgrade: with a cloud of a significant size and some concerns about scalability, we would like to be able to do the migration as a set of steps rather than a big bang. During the gap between the steps, we'd like to open the APIs for usage, such that new VMs get created on Neutron hypervisors. Would that be a possibility? Mark and I finally got a chance to sit down and write out a basic proposal.
It looks like this:

== neutron step 0 ==
Configure neutron to reverse proxy calls to Nova (part to be written)

== nova-compute restart one ==
Freeze nova's network state (probably by stopping nova-api, but we could be smarter than that if required)
Update all nova-compute nodes to point to Neutron, and remove the nova-net agent in favor of the Neutron Nova-aware L2 agent
Enable the Neutron Layer 2 agent on each node; this might have the side effect of causing the network configuration to be rebuilt for some instances
The API can be unfrozen at this time until ready for step 2

== neutron restart two ==
Freeze nova's network state (probably by stopping nova-api, but we could be smarter than that if required)
Dump/translate/restore data from Nova-Net to Neutron
Configure Neutron to point to its own database
Unfreeze the Nova API
*** Stopping point for linuxbridge-to-linuxbridge translation, or continue for rollout of new tech

== nova-compute restart two ==
Configure OVS or the new technology, and ensure that the proper ML2 driver is installed
Restart the Layer 2 agent on each hypervisor where next-gen networking should be enabled

So, I want to stop using the word cold to describe this. It's more of a rolling upgrade than a cold migration. So... Would two shorter nova API outages be acceptable? Michael -- Rackspace Australia
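The rolling plan above is essentially a fixed sequence of freeze/cutover/unfreeze phases. As a rough illustration (the phase labels are hypothetical shorthand, and this is not a real migration tool), the ordering and the API outages it implies can be written down explicitly:

```python
# Hedged sketch: the rolling nova-network -> Neutron plan above as an
# explicit ordered list of phases, each tagged with whether the Nova API
# must be frozen while it runs. Purely illustrative; phase labels are
# hypothetical shorthand for the steps in the proposal.

PLAN = [
    ("neutron step 0: reverse-proxy Neutron calls to Nova", False),
    ("nova-compute restart one: repoint computes, enable L2 agent", True),
    ("unfreeze window: API open between steps", False),
    ("neutron restart two: dump/translate/restore data, repoint DB", True),
    ("nova-compute restart two: roll out OVS/ML2 per hypervisor", False),
]

def api_outages(plan):
    """Count the distinct frozen windows (API outages) a plan implies."""
    outages = 0
    previously_frozen = False
    for _, frozen in plan:
        if frozen and not previously_frozen:
            outages += 1
        previously_frozen = frozen
    return outages

# The plan implies exactly the "two shorter nova API outages" that
# Michael asks about at the end of the thread.
print(api_outages(PLAN))  # 2
```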
[openstack-dev] [congress] specs update
The current specs are updated and in sync with the project blueprints: https://blueprints.launchpad.net/congress enjoy! ~ sean
Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core
Thanks everyone for the show of support. Amrith: welcome to trove-core! On Tue, Aug 26, 2014 at 12:02 PM, Craig Vyvial cp16...@gmail.com wrote: +1 On Tue, Aug 26, 2014 at 1:55 PM, Vipul Sabhaya vip...@gmail.com wrote: +1 On Tue, Aug 26, 2014 at 11:43 AM, Robert Myers myer0...@gmail.com wrote: +1 On Tue, Aug 26, 2014 at 8:54 AM, Tim Simpson tim.simp...@rackspace.com wrote: +1 From: Sergey Gotliv [sgot...@redhat.com] Sent: Tuesday, August 26, 2014 8:11 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Strong +1 from me! -Original Message- From: Nikhil Manchanda [mailto:nik...@manchanda.me] Sent: August-26-14 3:48 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core Hello folks: I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core. Amrith has been working with Trove for a while now. He has been a consistently active reviewer, and has provided insightful comments on numerous reviews. He has submitted quality code for multiple bug-fixes in Trove, and most recently drove the audit and clean-up of log messages across all Trove components. https://review.openstack.org/#/q/reviewer:amrith,n,z https://review.openstack.org/#/q/owner:amrith,n,z Please respond with +1/-1, or any further comments. 
Thanks, Nikhil
Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests
On 27/08/14 03:18, David Kranz wrote:
On 08/26/2014 10:14 AM, Zane Bitter wrote:
Steve Baker has started the process of moving Heat tests out of the Tempest repository and into the Heat repository, and we're looking for some guidance on how they should be packaged in a consistent way. Apparently there are a few projects already packaging functional tests in the package projectname.tests.functional (alongside projectname.tests.unit for the unit tests). That strikes me as odd in our context, because while the unit tests run against the code in the package in which they are embedded, the functional tests run against some entirely different code - whatever OpenStack cloud you give it the auth URL and credentials for. So these tests run from the outside, just like their ancestors in Tempest do. There's all kinds of potential confusion here for users and packagers. None of it is fatal and all of it can be worked around, but if we refrain from doing the thing that makes zero conceptual sense then there will be no problem to work around :)

Thanks, Zane. The point of moving functional tests to projects is to be able to run more of them in gate jobs for those projects, and allow tempest to survive being stretched-to-breaking horizontally as we scale to more projects. At the same time, there are benefits to the tempest-as-all-in-one-functional-and-integration-suite that we should try not to lose:
1. Strong integration testing without thinking too hard about the actual dependencies
2. Protection from mistaken or unwise api changes (tempest two-step required)
3. Exportability as a complete blackbox functional test suite that can be used by Rally, RefStack, deployment validation, etc.
I think (1) may be the most challenging because tests that are moved out of tempest might be testing some integration that is not being covered by a scenario. We will need to make sure that tempest actually has a complete enough set of tests to validate integration.
Even if this is all implemented in a way where tempest can see in-project tests as plugins, there will still not be time to run them all as part of tempest on every commit to every project, so a selection will have to be made. (2) is quite difficult. In Atlanta we talked about taking a copy of functional tests into tempest for stable apis. I don't know how workable that is but don't see any other real options except vigilance in reviews of patches that change functional tests. (3) is what Zane was addressing. The in-project functional tests need to be written in a way that they can, at least in some configuration, run against a real cloud.

I suspect from reading the previous thread about In-tree functional test vision that we may actually be dealing with three categories of test here rather than two:
* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud
i.e. the tests we are now trying to add to Heat might be qualitatively different from the projectname.tests.functional suites that already exist in a few projects. Perhaps someone from Neutron and/or Swift can confirm?

That seems right, except that I would call the third functional tests and not integration tests, because the purpose is not really integration but deep testing of a particular service. Tempest would continue to focus on integration testing. Is there some controversy about that? The second category could include whitebox tests. I don't know about swift, but in neutron the intent was to have these tests be configurable to run against a real cloud, or not. Maru Newby would have details.

I'd like to propose that tests of the third type get their own top-level package with a name of the form projectname-integrationtests (second choice: projectname-tempest on the principle that they're essentially plugins for Tempest).
How would people feel about standardising that across OpenStack? +1 But I would not call it integrationtests for the reason given above. Because all heat does is interact with other services, what we call functional tests are actually integration tests. Sure, we could mock at the REST API level, but integration coverage is what we need most. This lets us verify things like:
- how heat handles races in other services leading to resources going into ERROR
- connectivity and interaction between heat and agents on orchestrated servers
So I'm +1 on Zane's suggestion of projectname-integrationtests for the sort of scenario integration testing we want to do. I would imagine the swift and neutron functional tests are much more self-contained, possibly only interacting with keystone. The heat API tests are very much in this category, which currently live in tempest.api.orchestration. These can move into the heat tree using whatever established convention once tempest-lib is consumable, and/or
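As a concrete illustration of the third category, here is a minimal sketch (entirely illustrative -- the class name, env-var handling, and package placement are assumptions, not an established convention) of what a test in a projectname-integrationtests package might look like. The key property is that it targets whatever cloud the environment points at, and skips cleanly when there is none:

```python
import os
import unittest


class StackCreateTest(unittest.TestCase):
    """Black-box test that runs against an external cloud, Tempest-style."""

    @classmethod
    def setUpClass(cls):
        # The cloud under test is configured from the outside, e.g. via the
        # standard OS_* environment variables -- nothing from the package's
        # own code base is exercised directly.
        cls.auth_url = os.environ.get("OS_AUTH_URL")
        if not cls.auth_url:
            raise unittest.SkipTest("OS_AUTH_URL not set; no cloud to test")

    def test_auth_url_is_http(self):
        # A real suite would create a stack via python-heatclient and wait
        # for CREATE_COMPLETE; here we only sanity-check the configuration
        # to keep the sketch self-contained.
        self.assertTrue(self.auth_url.startswith(("http://", "https://")))
```

Contrast this with a unit test, which imports and exercises the surrounding package directly; the skip-when-unconfigured behavior is what makes the suite exportable for Rally/RefStack-style use.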
Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP
Sorry, not good with neutron. Could you explain what "use a regular segmentation identifier like the rest of the networks" means? What is this segmentation identifier? On Tue, Aug 26, 2014 at 3:07 PM, Kevin Benton blak...@gmail.com wrote: No, the gateway_external_network_id option just refers to how your network is deployed. If the external network uses a regular segmentation identifier like the rest of the networks, this will work. If not, it won't because the instances will try to use a segmentation identifier. In other words, if you have a separate physical interface for external networks on your L3 agent nodes, this will not work. On Aug 26, 2014 12:14 PM, Bao Wang bywan...@gmail.com wrote: Just want to clarify something: this public-IP-as-private-IP is only for external-facing interfaces on a set of VM instances. At the same time, the majority of interfaces on the same set of VM instances will not have a public IP, and their subnets are isolated networks. Will this change your conclusion when you mentioned the gateway_external_network_id is left blank for the L3 agent? On Mon, Aug 25, 2014 at 1:07 AM, Kevin Benton blak...@gmail.com wrote: I think this will depend on the deployment type for the L3 agent. If the gateway_external_network_id is left blank for the L3 agent, the external network is VLAN tagged just like any regular network and doesn't have an independent bridge.[1] In that deployment scenario it should work fine. On Sun, Aug 24, 2014 at 9:30 AM, Mohammad Banikazemi m...@us.ibm.com wrote: Would this work? We used to have warnings in Neutron docs indicating that instances should not be attached to external networks: It is important to understand that you should not attach the instance to Ext-Net directly. Instead, you must use a floating IP to make it accessible from the external network.
In this particular case and with the OVS plugin, the traffic on the external network which now hosts tenant VMs (on OpenStack compute nodes) should get routed from br-int to the external bridge br-ex using, for example, the appropriate VLAN id (what if the external network does not use VLAN?) and then to the external network without NAT. Would this traffic go through the veth pair connecting br-int and br-ex? Mohammad

Kevin Benton wrote on 08/23/2014 01:37 AM: From: Kevin Benton blak...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 08/23/2014 01:37 AM Subject: Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP -- Yes, you should be able to create a shared/external network within Neutron to accomplish this. On Fri, Aug 22, 2014 at 7:25 AM, Bao Wang bywan...@gmail.com wrote: Thank you for your response. Could this be done naturally with OpenStack Neutron, or does it have to be done manually outside Neutron? As we are expecting to orchestrate hundreds of NFV instances with all-similar network configuration, programmability is another key element. On Thu, Aug 21, 2014 at 3:52 PM, Kevin Benton blak...@gmail.com wrote: Have you tried making the external network shared as well? Instances that need a private IP with NAT attach to an internal network and go through the router like normal. Instances that need a public IP without NAT would just attach directly to the external network. On Thu, Aug 21, 2014 at 7:06 AM, Bao Wang bywan...@gmail.com wrote: I have a very complex OpenStack deployment for NFV. It could not be deployed as Flat. It will have a lot of isolated private networks.
Some interfaces of a group of VM instances will need a bridged network with their fixed IP addresses to communicate with the outside world, while other interfaces from the same set of VMs should stay isolated with real private/fixed IP addresses. What happens if we use public IP addresses directly as fixed IPs on those interfaces? Will this work with OpenStack Neutron networking? Will OpenStack do NAT automatically on those? Overall, the requirement is to use the fixed/public IP to communicate with the outside directly on some interfaces of some VM instances while keeping others private. The floating IP is not an option here.
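For completeness, the shared external network approach Kevin suggests would look roughly like this with the 2014-era neutron/nova CLIs. This is a sketch only: the network names, CIDR, physical network label, and VLAN id are placeholders, and (per the caveats above) it only works when the external network uses a regular segmentation identifier:

```shell
# External network, marked shared so instances may plug into it directly:
neutron net-create ext-net --shared --router:external=True \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 101

# Subnet holding the public addresses that will be handed out as fixed IPs:
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
    --allocation-pool start=203.0.113.10,end=203.0.113.200

# Boot with one NIC on the external network (public fixed IP, no NAT) and
# one on an isolated tenant network:
nova boot my-nfv-vm --image <image> --flavor <flavor> \
    --nic net-id=<ext-net-uuid> --nic net-id=<private-net-uuid>
```

Instances attached this way get the public address as their fixed IP, with no NAT and no floating IP involved.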
Re: [openstack-dev] [neutron][VMware NSX CI] How to see the log for failure.
Hi Salvatore, Thanks for the information. Will do the recheck-vmware. Regards, Nader. On Tue, Aug 26, 2014 at 12:43 PM, Salvatore Orlando sorla...@nicira.com wrote: Hi Nader, Sorry about that failure. We have temporarily stopped mine sweeper for neutron while we update our devstack images. However, unfortunately some jobs did not complete properly, and therefore you had failures without logs being reported. The situation should be back to normal soon, and you should be able to do a recheck. Regards, Salvatore On 26 August 2014 20:44, Nader Lahouti nader.laho...@gmail.com wrote: Hi, The VMWare NSX CI is failing and I'm not able to get any failure log to find the reason of failure. The message that I get is this: VMware NSX CI has posted comments on this change. Patch Set 24: Verified-1 You might have hit the following known bug(s): bug #1271230 Add review comment recheck-vmware to retrigger a new build. Is the failure because of bug #1271230 or some other reason? Would be great if someone can point me to the failure log. Thanks, Nader.
Re: [openstack-dev] [neutron][OpenDayLight CI] Is this a real failure? The failure log is gone
There is an opendaylight failure on one of my patches as well. Can the maintainer update the message it posts with the syntax to retrigger a check? On Tue, Aug 26, 2014 at 2:27 PM, Kyle Mestery mest...@mestery.com wrote: Looking at the console output from the ODL Jenkins failure, it appears like Jenkins reaped this instance before everything was complete. I've copied Dave Tucker who may be able to provide some additional input from the ODL side as well. On Tue, Aug 26, 2014 at 3:13 PM, Milton Xu (mxu) m...@cisco.com wrote: Thanks Salvatore for the quick response for the NSX CI failure! OpenDayLight CI experts, We are also seeing a failure on OpenDayLight CI but it appears there is no log at the link below. This is an urgent issue for us. Appreciate your prompt help here. OpenDaylight CI 9:35 AM Patch Set 24: Verified-1 OpenDaylight Jenkins Build Failed https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/1763/ : FAILURE Thanks, Milton -- Kevin Benton
Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP
VLAN tag, VXLAN id, etc. On Tue, Aug 26, 2014 at 2:27 PM, Bao Wang bywan...@gmail.com wrote: Sorry, not good with neutron. Could you explain what "use a regular segmentation identifier like the rest of the networks" means? What is this segmentation identifier?
[openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)
Hi folks, I ran some rough benchmarks to get an idea of where Zaqar currently stands re latency and throughput for Juno. These results are by no means conclusive, but I wanted to publish what I had so far for the sake of discussion. Note that these tests do not include results for our new Redis driver, but I hope to make those available soon. As always, the usual disclaimers apply (i.e., benchmarks mostly amount to lies; these numbers are only intended to provide a ballpark reference; you should perform your own tests, simulating your specific scenarios and using your own hardware; etc.).

## Setup ##

Rather than VMs, I provisioned some Rackspace OnMetal[8] servers to mitigate noisy neighbor when running the performance tests:

* 1x Load Generator
  * Hardware
    * 1x Intel Xeon E5-2680 v2 2.8Ghz
    * 32 GB RAM
    * 10Gbps NIC
    * 32GB SATADOM
  * Software
    * Debian Wheezy
    * Python 2.7.3
    * zaqar-bench from trunk with some extra patches[1]
* 1x Web Head
  * Hardware
    * 1x Intel Xeon E5-2680 v2 2.8Ghz
    * 32 GB RAM
    * 10Gbps NIC
    * 32GB SATADOM
  * Software
    * Debian Wheezy
    * Python 2.7.3
    * zaqar server from trunk @47e07cad
      * storage=mongodb
      * partitions=4
      * MongoDB URI configured with w=majority
    * uWSGI + gevent
      * config: http://paste.openstack.org/show/100592/
      * app.py: http://paste.openstack.org/show/100593/
* 3x MongoDB Nodes
  * Hardware
    * 2x Intel Xeon E5-2680 v2 2.8Ghz
    * 128 GB RAM
    * 10Gbps NIC
    * 2x LSI Nytro WarpDrive BLP4-1600[2]
  * Software
    * Debian Wheezy
    * mongod 2.6.4
      * Default config, except setting replSet and enabling periodic logging of CPU and I/O
      * Journaling enabled
      * Profiling on message DBs enabled for requests over 10ms

For generating the load, I used the zaqar-bench tool we created during Juno as a stepping stone toward integration with Rally. Although the tool is still fairly rough, I thought it good enough to provide some useful data[3]. The tool uses the python-zaqarclient library.
Note that I didn't push the servers particularly hard for these tests; web head CPUs averaged around 20%, while the mongod primary's CPU usage peaked at around 10% with DB locking peaking at 5%. Several different messaging patterns were tested, taking inspiration from: https://wiki.openstack.org/wiki/Use_Cases_(Zaqar) Each test was executed three times and the best time recorded. A ~1K sample message (1398 bytes) was used for all tests.

## Results ##

### Event Broadcasting (Read-Heavy) ###

OK, so let's say you have a somewhat low-volume source, but tons of event observers. In this case, the observers easily outpace the producer, making this a read-heavy workload.

Options
* 1 producer process with 5 gevent workers
  * 1 message posted per request
* 2 observer processes with 25 gevent workers each
  * 5 messages listed per request by the observers
* Load distributed across 4[7] queues
* 10-second duration[4]

Results
* Producer: 2.2 ms/req, 454 req/sec
* Observer: 1.5 ms/req, 1224 req/sec

### Event Broadcasting (Balanced) ###

This test uses the same number of producers and consumers, but note that the observers are still listing (up to) 5 messages at a time[5], so they still outpace the producers, but not as quickly as before.

Options
* 2 producer processes with 10 gevent workers each
  * 1 message posted per request
* 2 observer processes with 25 gevent workers each
  * 5 messages listed per request by the observers
* Load distributed across 4 queues
* 10-second duration

Results
* Producer: 2.2 ms/req, 883 req/sec
* Observer: 2.8 ms/req, 348 req/sec

### Point-to-Point Messaging ###

In this scenario I simulated one client sending messages directly to a different client. Only one queue is required in this case[6]. Note the higher latency. While running the test there were 1-2 message posts that skewed the average by taking much longer (~100ms) than the others to complete.
Such outliers are probably present in the other tests as well, and further investigation is needed to discover the root cause.

Options
* 1 producer process with 1 gevent worker
  * 1 message posted per request
* 1 observer process with 1 gevent worker
  * 1 message listed per request
* All load sent to a single queue
* 10-second duration

Results
* Producer: 5.5 ms/req, 179 req/sec
* Observer: 3.5 ms/req, 278 req/sec

### Task Distribution ###

This test uses several producers and consumers in order to simulate distributing tasks to a worker pool. In contrast to the observer worker type, consumers claim and delete messages in such a way that each message is processed once and only once.

Options
* 2 producer processes with 25 gevent
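The outlier effect described above is worth illustrating: a couple of ~100 ms requests among otherwise ~5 ms requests drag the mean up noticeably while leaving the median untouched. The numbers below are invented for illustration only (they are not zaqar-bench output):

```python
# 98 "normal" requests at ~5 ms plus two ~100 ms outliers, similar in shape
# to what was observed in the point-to-point test.
latencies_ms = [5.0] * 98 + [100.0, 110.0]

mean_ms = sum(latencies_ms) / len(latencies_ms)
median_ms = sorted(latencies_ms)[len(latencies_ms) // 2]

print("mean   = %.2f ms/req" % mean_ms)    # 7.00 -- dragged up by 2 requests
print("median = %.2f ms/req" % median_ms)  # 5.00 -- robust to the outliers
```

Reporting a median or percentiles alongside the mean in future rounds would make such spikes stand out immediately.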
Re: [openstack-dev] [rally] Introduction to feature request mechanism in Rally
Joe, Why not just use wishlist bugs in launchpad? The launchpad definition of Wishlist is 'Not a bug. It's an enhancement/new feature.' There are a few reasons:
1) It's not a wish list, it's the roadmap of the project, based on end-user requests
2) It should be easy for everybody to understand what the major direction of Rally development is, and it's much simpler to read a few really simple and short files than to analyze the whole launchpad
3) Feature requests require reviews and future work, so it's much simpler to do it via gerrit
4) It keeps the Rally team concentrated on real use cases and moving in the right direction
Best regards, Boris Pavlovic

On Tue, Aug 26, 2014 at 10:05 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Aug 26, 2014 at 5:31 AM, Boris Pavlovic bo...@pavlovic.me wrote: Hi stackers, It goes without saying that end users who are actually using your tool/project/service usually don't have enough time to join the community, understand all technical details and the road map of the project, and after that contribute code that implements their features... But it's well known that their experience is very useful and important for the project. Especially for such users, who can just say what is missing (or not working), we designed a very simple way to request a new feature from the rally team. The end user should add a proposal for the new feature: https://github.com/stackforge/rally/tree/master/doc/feature_request It's very similar to specs, but much simpler for end users, because the only thing they should fill in is a description of what is needed and why. Why not just use wishlist bugs in launchpad? The launchpad definition of Wishlist is 'Not a bug. It's an enhancement/new feature.' Thoughts?
Best regards, Boris Pavlovic
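To illustrate, a feature request under doc/feature_request is just a short text file proposed via Gerrit. The content below is a made-up example (both the feature and the section headings are hypothetical; check the repository for the exact expected layout):

```
Add ability to export task results
==================================

Use Case
--------
Our CI publishes results to an external dashboard; we would like Rally
to emit results in a format that dashboard can consume directly.

Problem Description
-------------------
Rally currently stores task results internally, and we have to write
and maintain a custom converter. (That description of "what and why"
is all an end user has to fill in.)
```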
[openstack-dev] [neutron] Juno-3 BP meeting
I'd like to propose a meeting at 1300 UTC on Thursday in #openstack-meeting-3 to discuss the Neutron BPs remaining for Juno at this point. We're talking specifically about medium and high priority ones, with a focus on these three:
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
These three BPs will provide a final push for scalability in a few areas and are things we as a team need to work to merge this week. The meeting will allow for discussion of final issues on these patches with the goal of trying to merge them by Feature Freeze next week. If time permits, we can discuss other medium and high priority community BPs as well. Let me know if this works by responding on this thread, and I hope to see people there Thursday! Thanks, Kyle
[openstack-dev] [Fuel] Beta milestone of Fuel 5.1 now available!
Hi All, I'm thrilled to announce that we've reached the beta milestone for the Fuel Project. This beta release is made available to allow a broad user base to test and evaluate the next minor version of Fuel, but is not recommended for production use at this stage.

What's New in Fuel 5.1? The primary new features of Fuel 5.1 are:
* An upgrade path from 5.0 or 5.0.1 to 5.1
* Automated updating of an existing OpenStack environment (e.g. from 2014.1 to 2014.1.1)
* Access control to the Fuel UI and API
* Deployment of the ML2 Open vSwitch plug-in for Neutron
* The Fuel Master Node can be backed up and restored
* VMWare NSX is supported as a network option for KVM hypervisors
* VMWare vCenter integration supports multiple vCenter clusters
* Mellanox hardware support for iSER SR-IOV based networking
* The Zabbix monitoring solution can be deployed by Fuel (experimental)
* Experimental features can now be explicitly enabled or disabled

How you can participate: To join us in the beta program, please follow these guidelines:
* You can download the latest beta build from the public jenkins repository: https://fuel-jenkins.mirantis.com/view/ISO/
  * Look for the latest build that has passed the standard BVT tests.
  * You can choose to download the ISO, IMG or Upgrade (UPGD) file by clicking on these links next to the build name. UPGD is for those folks upgrading from Fuel 5.0 or 5.0.1 to 5.1. Download the ISO or IMG if you're installing fresh.
* For this beta, we're making the files available via BitTorrent, so please be sure to have a BitTorrent compatible client to download the files.
* Beta documentation for the release is available: http://docs.mirantis.com/openstack/fuel/master/
* If you have questions, want to provide feedback or encounter issues with the beta release, you can contact the community developers in a couple of ways:
  * IRC at freenode.net: #fuel-dev
  * OpenStack developers mailing list (http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev): Obviously, if you're receiving this message, you're already signed up, but if you want to forward this info to someone else, just let them know to sign up for the openstack-dev mailing list and send e-mails with the subject starting with [Fuel].
* If you find a reproducible bug, you can log it on Launchpad here: https://bugs.launchpad.net/fuel
  * Please be sure to run the Diagnostic Snapshot to collect the logs and configuration files needed by our dev team to troubleshoot the issue.

We welcome any and all levels of participation in this Beta, and we look forward to making this release of Fuel the best ever! Thanks, - David J. Easter Director of Product Management, Mirantis, Inc.
Re: [openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?
Thank you very much. And sorry for the cross-posting. Best Regards! Chao Yan -- My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727 My Weibo: http://weibo.com/herewearenow --

2014-08-26 23:17 GMT+08:00 Ben Nemec openst...@nemebean.com: Oh, after writing my response below I realized this is cross-posted between openstack and openstack-dev. Please don't do that. I suppose this probably belongs on the users list, but since I've already written the response I guess I'm not going to argue too much. :-) On 08/26/2014 07:36 AM, 严超 wrote: Hi, All: I've deployed an undercloud and overcloud on some baremetals. All overcloud machines are deployed by the undercloud. Then I tried to shut down the undercloud machines. After that, if I reboot one overcloud machine, it will never boot from the net, AKA PXE used by the undercloud. Yes, that's normal. With the way our baremetal deployments work today, the deployed systems always PXE boot. After deployment they PXE boot a kernel and ramdisk that use the deployed hard disk image, but it's still a PXE boot. Is that what TripleO is designed to be? We can never shut down undercloud machines for maintenance of the overcloud? Please help me clarify that. Yes, that's working as intended at the moment. I recall hearing that there were plans to eliminate the PXE requirement after deployment, but you'd have to talk to the Ironic team about that. Also, I don't think it was ever the intent of TripleO that the undercloud would be shut down after deployment. The idea is that you use the undercloud to manage the overcloud machines, so if you want to reboot one you do it via the undercloud nova, not directly on the system itself.
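To make Ben's description concrete: even after deployment, each node network-boots a config that the undercloud serves over TFTP; the kernel and ramdisk come over the network, and only the root filesystem comes from the image on local disk. A pxelinux-style sketch follows (the file name, labels, and root argument are illustrative, not the exact files TripleO writes):

```
# /tftpboot/pxelinux.cfg/<node> -- served by the undercloud on every boot
default deployed
label deployed
  kernel deploy-kernel          # fetched over TFTP from the undercloud
  append initrd=deploy-ramdisk root=<deployed-disk-uuid> ro
```

With the undercloud down, nothing answers the node's DHCP/TFTP request, so the rebooted node has no boot configuration at all -- which matches the behavior Chao observed.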
Re: [openstack-dev] [all] The future of the integrated release
On Wed, Aug 27, 2014 at 4:01 AM, Joe Gordon joe.gord...@gmail.com wrote: On Wed, Aug 20, 2014 at 2:25 AM, Eoghan Glynn egl...@redhat.com wrote: Additional cross-project resources can be ponied up by the large contributor companies, and existing cross-project resources are not necessarily divertable on command. Sure, additional cross-project resources can and need to be ponied up, but I am doubtful that will be enough. OK, so what exactly do you suspect wouldn't be enough, for what exactly? I am not sure what would be enough to get OpenStack back to a position where more developers/users are happier with the current state of affairs. Which is why I think we may want to try several things. Is it the likely number of such new resources, or the level of domain expertise that they can realistically be expected to bring to the table, or the period of time to on-board them, or something else? Yes, all of the above. Hi Joe, In coming to that conclusion, have you thought about and explicitly rejected all of the approaches that have been mooted to mitigate those concerns? Is there a strong reason why the following non-exhaustive list would all be doomed to failure: * encouraging projects to follow the successful Sahara model, where one core contributor also made a large contribution to a cross-project effort (in this case infra, but could be QA or docs or release management or stable-maint ... etc) [this could be seen as essentially offsetting the cost of that additional project drawing from the cross-project well] * assigning liaisons from each project to *each* of the cross-project efforts [this could be augmented/accelerated with one of the standard on-boarding approaches, such as a designated mentor for the liaison or even an immersive period of secondment] * applying back-pressure via the board representation to make it more likely that the appropriate number of net-new cross-project resources are forthcoming [c.f.
Stef's "we're not amateurs or volunteers" mail earlier on this thread] All of these are good ideas and I think we should try them. I am just afraid this won't be enough. Imagine for a second that the gate is always stable, and none of the existing cross-project efforts are short staffed. OpenStack would still have a pretty poor user experience and return errors in production. Our 'official' CLIs are poor, our logs are cryptic, we have scaling issues (by number of nodes), people are concerned about operational readiness [0], upgrades are very painful, etc. Solving the issue of scaling cross-project efforts is not enough; we still have to solve a whole slew of usability issues. I believe developers working on OpenStack work for companies that really want this to happen. The developers also want their projects to be well regarded. But the way the problem is framed above is very daunting for any one person to solve. If we can quantify the problem, break the work into doable items (bugs), and prioritize it, it will be solved a lot faster. -Angus [0] http://robhirschfeld.com/2014/08/04/oscon-report/ I really think we need to do better than dismissing out-of-hand the idea of beefing up the cross-project efforts. If it won't work for specific reasons, let's get those reasons out onto the table and make a data-driven decision on this. And which cross-project concern do you think is most strained by the current set of projects in the integrated release? Is it: * QA * infra * release management * oslo * documentation * stable-maint or something else? Good question. IMHO QA, Infra and release management are probably the most strained. OK, well let's brain-storm on how some of those efforts could potentially be made more scalable. Should we for example start to look at release management as a program unto itself, with a PTL *and* a group of cores to divide and conquer the load?
(the hands-on rel mgmt for the juno-2 milestone, for example, was delegated - is there a good reason why such delegation wouldn't work as a matter of course?) Should QA programs such as grenade be actively seeking new cores to spread the workload? (until recently, this had an effective minimum of 2 cores, despite now being a requirement for integrated projects) Could the infra group potentially delegate some of the workload onto the distro folks? (given that it's strongly in their interest to have their distro represented in the CI gate) None of the above ideas may make sense, but it doesn't feel like every avenue has been explored here. I for one don't feel entirely satisfied that every potential solution to cross-project strain was fully thought out in advance of de-integration being presented as the solution. Just my $0.02 ... Cheers, Eoghan
Re: [openstack-dev] [neutron] Juno-3 BP meeting
Hi Kyle, I have conflicts for 13 UTC - Thursday is already full for me, but I'll try anyway to join the convo on IRC. I agree the 3 blueprints you've mentioned are the ones we should really merge for Juno. To this aim, I wonder why [1] has not been set to high; nevertheless it does not matter a lot. The code for this blueprint has been thoroughly reviewed and we're just facing a little problem with RPC versioning. Akihiro has stepped in to help with [2], which should merge before the RPC refactor. I have not followed l3-high-availability; however, I see only 3 patches merged and there are still a bunch in progress/abandoned. Do we have enough reviewer coverage there? Finally, the ipsets patch [3] has not been updated in a while, probably just because the owner is waiting for reviews, even if imho this should be serialized on top of [1]. I'm sure we'll be able to identify more blueprints as highly likely candidates for Juno. On the other hand, I would like to ensure the meeting agenda is set up in a way that we won't spend all the time discussing inclusion of a few contentious blueprints. Salvatore [1] https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor [2] https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bug/1359416,n,z [3] https://review.openstack.org/#/c/111877/ On 27 August 2014 00:00, Kyle Mestery mest...@mestery.com wrote: I'd like to propose a meeting at 1300UTC on Thursday in #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this point.
We're talking specifically about medium and high priority ones, with a focus on these three: https://blueprints.launchpad.net/neutron/+spec/l3-high-availability https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor These three BPs will provide a final push for scalability in a few areas and are things we as a team need to work to merge this week. The meeting will allow for discussion of final issues on these patches with the goal of trying to merge them by Feature Freeze next week. If time permits, we can discuss other medium and high priority community BPs as well. Let me know if this works by responding on this thread and I hope to see people there Thursday! Thanks, Kyle
Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report
On 23/08/14 07:39, Zane Bitter wrote: We held the inaugural Heat mid-cycle meetup in Raleigh, North Carolina this week. There were a dozen folks in attendance, and I think everyone agreed that it was a very successful event. Notes from the meetup are on the Etherpad here: https://etherpad.openstack.org/p/heat-juno-midcycle-meetup Here are a few of the conclusions: ... * Marconi is now called Zaqar. Who knew? * Marc^W Zaqar is critical to pretty much every major non-Convergence feature on the roadmap. We knew that we wanted to use it for notifications, but we also want to make those a replacement for events, and a conduit for warnings and debugging information to the user. This is becoming so important that we're going to push ahead with an implementation now without waiting to see when Zaqar will graduate. Zaqar would also be a good candidate for pushing metadata changes to servers, to resolve the performance issues currently caused by polling. Until Zaqar is generally available we can still remove the polling load from heat by pushing metadata to a swift TempURL. This is ready now for review: https://review.openstack.org/#/q/topic:bp/swift-deployment-transport,n,z
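For context on the TempURL approach mentioned above: Swift temporary URLs are signed with an HMAC-SHA1 over the HTTP method, an expiry timestamp, and the object path, keyed by the account's Temp-URL key. The sketch below is illustrative only, not Heat's actual code (python-swiftclient also ships a helper along these lines):

```python
import hmac
from hashlib import sha1
from time import time

def generate_temp_url(path, key, method="GET", ttl=3600, now=None):
    """Sign a Swift object path with the account's X-Account-Meta-Temp-URL-Key.

    path is e.g. '/v1/AUTH_account/container/object'; the signature covers
    the HTTP method, the expiry timestamp, and the path, joined by newlines.
    """
    expires = int((now if now is not None else time()) + ttl)
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)
```

Anyone holding the resulting URL can GET the object until the expiry passes, without needing Keystone credentials, which is what makes it usable as a push target for instance metadata.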
Re: [openstack-dev] [neutron] [third-party] What tests are required to be run
Hi Karthik, what do you mean that the plugin is incompatible with https://review.openstack.org/#/c/114393/? You're mentioning a rebase issue - but the patch in question appears to apply cleanly to master. Is your problem perhaps that patch #114393 does not have in its log some changes you need to accommodate the test_lib changes? Are the changes you need already merged? In that case you might try to rebase the patch you're going to test on master before running devstack, which I think is also what happens in the upstream gate. Salvatore On 26 August 2014 21:57, Karthik Natarajan natar...@brocade.com wrote: Hi Edgar, We are also facing CI issues when the neutron patch set is not rebased with the latest changes. For example, the CI audit that you posted today (https://review.openstack.org/#/c/114393/) is not rebased with the neutron test_lib related changes. We had refactored the Brocade Vyatta plugin unit tests to accommodate the test_lib related changes, but our plugin is not compatible with the patch you have posted, so CI is failing. I had a discussion with Dane Leblanc on this. We also need to post the SKIPPED status for such patch sets. We will also experiment with Kevin's suggestion. Thanks, Karthik -Original Message- From: Dane Leblanc (leblancd) [mailto:lebla...@cisco.com] Sent: Monday, August 25, 2014 10:02 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to be run Edgar, Kyle: Kevin's suggestion should work for me (still hashing out the implementation). I've added an item to the 3rd Party IRC agenda anyway to discuss this corner case. Thanks! Dane -Original Message- From: Edgar Magana [mailto:edgar.mag...@workday.com] Sent: Monday, August 25, 2014 12:44 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to be run Dane, I will second Kyle's idea.
Let's discuss this during today's IRC meeting if Kevin's suggestion does not work for you. Thanks, Edgar On 8/25/14, 10:08 AM, Kyle Mestery mest...@mestery.com wrote: Dane, thanks for all the great work you're doing in the third-party CI area. It's great to see you working to share this knowledge with others as well! Did Kevin's idea work for you to move past this issue? If not, I suggest you put an item on the neutron meeting agenda today and we cover this there. You could put the item on the third-party meeting agenda as well. Thanks! Kyle On Sun, Aug 24, 2014 at 9:43 AM, Dane Leblanc (leblancd) lebla...@cisco.com wrote: Hi Kevin: Thanks, this is a great idea! I may try just a slight variation of this concept. Maybe your idea could be the recommended way to create a 3rd party CI for plugins that are just being introduced and need to limit the scope of testing to a small set of plugin-related commits (or plugins blocked on a certain fix). Thanks, Dane From: Kevin Benton [mailto:blak...@gmail.com] Sent: Saturday, August 23, 2014 5:47 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to be run Can you disable posting of results directly from your Jenkins/Zuul setup and have a script that just checks the log file for special markers to determine if the vote should be FAILED/PASSED/SKIPPED? Another advantage of this approach is that it gives you an opportunity to detect when a job just failed to set up due to infrastructure reasons and trigger a recheck without ever first posting a failure to gerrit. On Fri, Aug 22, 2014 at 3:06 PM, Dane Leblanc (leblancd) lebla...@cisco.com wrote: Thanks Edgar for updating the APIC status!!!
Edgar and Kyle: PLEASE NOTE: I need your understanding and advice on the following: We are still stuck with a problem stemming from a design limitation of Jenkins that prevents us from being compliant with Neutron 3rd Party CI requirements for our DFA CI. The issue is that Jenkins only allows our scripts to (programmatically) return either Success or Fail. There is no option to return Aborted, Not Tested, or Skipped. Why does this matter? The DFA plugin is just being introduced, and the initial DFA-enabling change sets have not yet been merged. Therefore, all other change sets will fail our Tempest tests, since they are not DFA-enabled. Similarly, we were recently blocked in our APIC CI by a critical bug, causing all change sets without this fix to fail on our APIC testbed. In these cases, we would like to enter a throttled or partially blocked mode, where we would skip testing on change sets we know will fail, and (in an ideal world) signal this shortcoming to Gerrit, e.g. by returning a Skipped status. Unfortunately, this option is not
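Kevin's suggestion from earlier in the thread - don't let Jenkins vote directly, but post-process the job's console log and decide what (if anything) to send to Gerrit - could be sketched roughly like this. The marker strings are invented for illustration; a real CI would emit its own:

```python
# Hedged sketch of a log post-processor for third-party CI voting.
# Instead of Jenkins returning only Success/Fail, a wrapper scans the
# console log for markers and maps them to richer outcomes.
def decide_vote(log_text):
    if "DFA_NOT_APPLICABLE" in log_text:      # change set isn't DFA-enabled
        return "SKIPPED"                      # post nothing, or a neutral comment
    if "INFRA_SETUP_FAILED" in log_text:      # testbed problem, not the patch
        return "RECHECK"                      # re-queue instead of voting -1
    if "TEMPEST_RESULT: PASS" in log_text:
        return "PASSED"
    return "FAILED"
```

The SKIPPED and RECHECK branches are exactly the two cases discussed above: change sets known to be untestable, and infrastructure failures that should never reach Gerrit as a -1.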
Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests
Excerpts from Steve Baker's message of 2014-08-26 14:25:46 -0700: On 27/08/14 03:18, David Kranz wrote: On 08/26/2014 10:14 AM, Zane Bitter wrote: Steve Baker has started the process of moving Heat tests out of the Tempest repository and into the Heat repository, and we're looking for some guidance on how they should be packaged in a consistent way. Apparently there are a few projects already packaging functional tests in the package projectname.tests.functional (alongside projectname.tests.unit for the unit tests). That strikes me as odd in our context, because while the unit tests run against the code in the package in which they are embedded, the functional tests run against some entirely different code - whatever OpenStack cloud you give it the auth URL and credentials for. So these tests run from the outside, just like their ancestors in Tempest do. There's all kinds of potential confusion here for users and packagers. None of it is fatal and all of it can be worked around, but if we refrain from doing the thing that makes zero conceptual sense then there will be no problem to work around :) Thanks, Zane. The point of moving functional tests to projects is to be able to run more of them in gate jobs for those projects, and allow tempest to survive being stretched-to-breaking horizontally as we scale to more projects. At the same time, there are benefits to the tempest-as-all-in-one-functional-and-integration-suite that we should try not to lose: 1. Strong integration testing without thinking too hard about the actual dependencies 2. Protection from mistaken or unwise api changes (tempest two-step required) 3. Exportability as a complete blackbox functional test suite that can be used by Rally, RefStack, deployment validation, etc. I think (1) may be the most challenging because tests that are moved out of tempest might be testing some integration that is not being covered by a scenario. 
We will need to make sure that tempest actually has a complete enough set of tests to validate integration. Even if this is all implemented in a way where tempest can see in-project tests as plugins, there will still not be time to run them all as part of tempest on every commit to every project, so a selection will have to be made. (2) is quite difficult. In Atlanta we talked about taking a copy of functional tests into tempest for stable apis. I don't know how workable that is but don't see any other real options except vigilance in reviews of patches that change functional tests. (3) is what Zane was addressing. The in-project functional tests need to be written in a way that they can, at least in some configuration, run against a real cloud. I suspect from reading the previous thread about In-tree functional test vision that we may actually be dealing with three categories of test here rather than two: * Unit tests that run against the package they are embedded in * Functional tests that run against the package they are embedded in * Integration tests that run against a specified cloud i.e. the tests we are now trying to add to Heat might be qualitatively different from the projectname.tests.functional suites that already exist in a few projects. Perhaps someone from Neutron and/or Swift can confirm? That seems right, except that I would call the third functional tests and not integration tests, because the purpose is not really integration but deep testing of a particular service. Tempest would continue to focus on integration testing. Is there some controversy about that? The second category could include whitebox tests. I don't know about swift, but in neutron the intent was to have these tests be configurable to run against a real cloud, or not. Maru Newby would have details. 
I'd like to propose that tests of the third type get their own top-level package with a name of the form projectname-integrationtests (second choice: projectname-tempest, on the principle that they're essentially plugins for Tempest). How would people feel about standardising that across OpenStack? +1 But I would not call it integrationtests, for the reason given above. Because all heat does is interact with other services, what we call functional tests are actually integration tests. Sure, we could mock at the REST API level, but integration coverage is what we need most. (I'd call that faking, not mocking, but both could apply.) This lets us verify things like: - how heat handles races in other services leading to resources going into ERROR A fake that predictably fails (and thus tests failure handling) will result in better coverage than a real service that only fails when that real service is broken. What's frustrating is that _both_ are needed to catch bugs. - connectivity and interaction between heat and agents on orchestrated servers That is definitely
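To make the third category concrete, here is a hedged sketch of a test that runs "from the outside" against a specified cloud: it takes its target from the environment (the usual OS_* variables) and skips itself when no cloud is configured, which is roughly what distinguishes it from an in-process functional test. Class and test names are illustrative, not any project's actual suite:

```python
import os
import unittest

# Minimal sketch of an "integration" (outside-in) test: it runs against
# whatever cloud OS_AUTH_URL points at, and skips cleanly when none is
# configured (e.g. in a plain unit-test run).
class StackLifecycleTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.auth_url = os.environ.get("OS_AUTH_URL")
        if not cls.auth_url:
            raise unittest.SkipTest("no cloud configured; set OS_AUTH_URL")

    def test_auth_url_is_http(self):
        # a real suite would authenticate and exercise the service here
        self.assertTrue(self.auth_url.startswith("http"))
```

Packaging tests of this shape under a separate top-level package keeps the auth-URL-and-credentials contract explicit, whereas projectname.tests.functional implies they test the code they sit next to.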
[openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
Hi, After reading https://wiki.openstack.org/wiki/Network/Incubator I have some thoughts about the proposed workflow. We have quite a bit of experience and some good tools around splitting code out of projects and into new projects. But we don't generally do a lot of importing code into projects. We've done this once, to my recollection, in a way that preserved history, and that was with the switch to keystone-lite. It wasn't easy; it's major git surgery and would require significant infra-team involvement any time we wanted to do it. However, reading the proposal, it occurred to me that it's pretty clear that we expect these tools to be able to operate outside of the Neutron project itself, to even be releasable on their own. Why not just stick with that? In other words, the goal of this process should be to create separate projects with their own development lifecycle that will continue indefinitely, rather than expecting the code itself to merge into the neutron repo. This has advantages in simplifying workflow and making it more consistent. Plus it builds on known integration mechanisms like APIs and python project versions. But more importantly, it helps scale the neutron project itself. I think that a focused neutron core upon which projects like these can build in a reliable fashion would be ideal. -Jim
Re: [openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)
Correction: there were 25 workers per producer process, not 10. On 8/26/14, 4:41 PM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: ### Event Broadcasting (Balanced) ### This test uses the same number of producers and consumers, but note that the observers are still listing (up to) 5 messages at a time[5], so they still outpace the producers, but not as quickly as before. Options:
* 2 producer processes with 10 gevent workers each
* 1 message posted per request
* 2 observer processes with 25 gevent workers each
* 5 messages listed per request by the observers
* Load distributed across 4 queues
* 10-second duration
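To make the shape of this workload concrete, here is a toy sketch using plain stdlib threads and an in-memory dict of queues in place of the gevent workers and Zaqar itself. The per-worker request count is invented; only the worker/batch/queue shape mirrors the corrected configuration (2 producer "processes" x 25 workers, 1 message per post, lists of up to 5, load spread over 4 queues):

```python
import queue
import threading

QUEUES = {"q%d" % i: queue.Queue() for i in range(4)}  # load spread over 4 queues

def producer(worker_id, n_requests=20):
    # each worker posts 1 message per request, always to the same queue
    target = QUEUES["q%d" % (worker_id % 4)]
    for i in range(n_requests):
        target.put(("msg", worker_id, i))

def run():
    # 2 producer "processes" x 25 workers each = 50 concurrent producers
    workers = [threading.Thread(target=producer, args=(w,)) for w in range(50)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    # observers list up to 5 messages per request; count how many list
    # requests it takes to drain everything once the producers are done
    drained = list_requests = 0
    for q in QUEUES.values():
        while not q.empty():
            batch = 0
            while batch < 5 and not q.empty():
                q.get_nowait()
                batch += 1
            drained += batch
            list_requests += 1
    return drained, list_requests
```

Because each list request returns at most 5 messages, an observer fleet needs far fewer requests than the producers made posts, which is why the observers outpace the producers in the test above.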
Re: [openstack-dev] [neutron] [third-party] What tests are required to be run
On 2014-08-27 00:52:35 +0200 (+0200), Salvatore Orlando wrote: [...] In this case you might try to rebase the patch you're going to test on master before running devstack, which I think is also what happens in the upstream gate. More or less. Technically we try to merge the proposed change (and its open parent changes in Gerrit if any, in series) onto the current target branch tip. Rebasing/cherry-picking are a little muddier since they alter the commits being tested rather than merely adding merge commits to the mix. Also, upstream's Zuul (the Jenkins account in Gerrit) now automatically runs a built-in job against all open changes for a project+branch every time a change is merged for that combination, testing to see whether the remaining open changes can still merge without conflict, and leaves a -1 verify on them with a relevant message if not. So if a third-party CI is encountering changes which really can't be merged (rebased, cherry-picked, whatever) to the project then it should refrain from commenting at all... at this point it would be at best redundant, and at worst entirely incorrect. -- Jeremy Stanley
Re: [openstack-dev] [nova] Server Groups - remove VM from group?
On 08/25/2014 11:25 AM, Joe Cropper wrote: I was thinking something simple such as only allowing the add operation to succeed IFF no policies are found to be in violation... and then nova wouldn't need to get into all the complexities you mention? Personally I would be in favour of this... nothing fancy, just add it if it already meets all the criteria. This is basically just a database operation so I would hope we could make it reliable in the face of simultaneous things going on with the instance. And remove would be fairly straightforward as well since no constraints would need to be checked. Agreed. Chris
[openstack-dev] [infra][qa][neutron] Neutron full job, advanced services, and the integrated gate
TL;DR: A few folks are proposing to stop running tests for neutron advanced services [i.e. (lb|vpn|fw)aas] in the integrated gate, and run them only on the neutron gate. Reason: projects like nova are 100% orthogonal to neutron advanced services. Also, there have been episodes in the past of unreliability of tests for these services, and it would be good to limit the affected projects considering that more api tests and scenarios are being added. - So far the neutron full job runs tests (api and scenarios) for neutron core functionality as well as neutron advanced services, which run as neutron service plugins. It's highly unlikely, if not impossible, that changes in projects such as nova, glance or ceilometer can have an impact on the stability of these services. On the other hand, instability in these services can trigger gate failures in unrelated projects as long as tests for these services are run in the neutron full job in the integrated gate. There have already been several gate-breaking bugs in lbaas scenario tests and firewall api tests. Admittedly, advanced services do not have the same level of coverage as core neutron functionality. Therefore, as more tests are being added, there is an increased possibility of unearthing dormant bugs. For this reason we are proposing to no longer run tests for neutron advanced services in the integrated gate, but to keep them running on the neutron gate. This means we will have two neutron jobs: 1) check-tempest-dsvm-neutron-full, which will run only core neutron functionality 2) check-tempest-dsvm-neutron-full-ext, which will be what the neutron full job is today. The former will be part of the integrated gate, the latter will be part of the neutron gate. Considering that other integrated services should not have an impact on neutron advanced services, this should not make gate testing asymmetric.
However, there might be exceptions for: - orchestration projects like heat, which in the future might leverage capabilities like load balancing - oslo-* libraries, as changes in them might have an impact on neutron advanced services, since they consume those libraries Another good question is whether extended tests should be performed as part of functional or tempest checks. My take on this is that scenario tests should always be part of tempest. On the other hand, I reckon API tests should exclusively be part of functional tests, but as so far tempest is running a gazillion API tests, this is probably a discussion for the medium/long term. In order to add this new job there are a few patches under review: [1] and [2] introduce the 'full-ext' job and devstack-gate support for it. [3] are the patches implementing a blueprint which will enable us to specify for which extensions tests should be executed. Finally, one more note about smoketests. Although we're planning to get rid of them soon, we still have failures in the pg job because of [4]. For this reason smoketests are still running for postgres in the integrated gate. As load balancing and firewall API tests are part of it, they should be removed from the smoke tests executed on the integrated gate ([5], [6]). This is a temporary measure until the postgres issue is fixed. Regards, Salvatore [1] https://review.openstack.org/#/c/114933/ [2] https://review.openstack.org/#/c/114932/ [3] https://review.openstack.org/#/q/status:open+branch:master+topic:bp/branchless-tempest-extensions,n,z [4] https://bugs.launchpad.net/nova/+bug/1305892 [5] https://review.openstack.org/#/c/115022/ [6] https://review.openstack.org/#/c/115023/
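The extension-selection idea behind [3] above can be sketched as a skip decorator driven by the job's enabled-extension list: with only core extensions enabled, the advanced-service tests simply skip, so the same test tree serves both the core job and the full-ext job. The config variable and decorator name here are illustrative, not tempest's actual interface:

```python
import unittest

# Hedged sketch: gate individual tests on which neutron extensions the job
# enables. A core-only job lists only core extensions; the -full-ext job
# would add lbaas/fwaas/vpnaas to this set.
ENABLED_EXTENSIONS = {"router", "security-group"}   # a core-only job

def requires_ext(extension):
    # skip the decorated test unless the extension is enabled in this job
    return unittest.skipUnless(
        extension in ENABLED_EXTENSIONS,
        "%s extension not enabled in this job" % extension)

class FirewallApiTest(unittest.TestCase):
    @requires_ext("fwaas")
    def test_create_firewall_rule(self):
        self.fail("would talk to the fwaas API here")  # never runs in a core job

    @requires_ext("router")
    def test_router_exists(self):
        self.assertTrue(True)  # stands in for a real core-API check
```

Running this suite in a core-only configuration executes the router test and records the fwaas test as skipped rather than failed, which is exactly the behavior the two-job split needs.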
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
From what I understand, the intended projects for the incubator can't operate without neutron because they are just extensions/plugins/drivers. For example, if the DVR modifications to the reference L3 plugin weren't already being developed in the tree, DVR could have been developed in the incubator and then merged into Neutron once the bugs were ironed out, so a huge string of Gerrit patches didn't need to be tracked. If that had happened, would it make sense to keep the L3 plugin as a completely separate project or merge it? I understand this is the approach the load balancer folks took by making Octavia a separate project, but I think it can still operate on its own, where the reference L3 plugin (and many of the other incubator projects) are just classes that expect to be able to make core Neutron calls.
[openstack-dev] [Octavia] Agenda for tomorrow's meeting, where meeting is happening
Hello, The current agenda for tomorrow's Octavia meeting is posted here: https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Agenda Please feel free to add agenda items as necessary. Also, based on the poll Doug Wiegley created on where we should hold our weekly meetings (IRC vs. webex), the result was a tie. So, the plan I am going to follow at this point is to hold the next two meetings in IRC, and then we can re-evaluate where the active participants in the group want to be holding the weekly meetings. So again, the next Octavia meeting will be: Wed, 27 Aug 2014 at 20:00 UTC in #openstack-lbaas on the usual (Freenode) IRC network. Thanks, Stephen -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
Incubator doesn't mean being kicked out of the tree; it just means that the API and resource model need to be baked through fast iteration, and so temporarily can't be put in the tree. As Kyle has said, the incubator is not about moving 3rd-party drivers out of tree, which is in another thread. For DVR, as it has no influence on the tenant-facing API resource model, it works as the built-in backend, and this feature has attracted wide common interest; it's just an internal performance optimization tightly coupled with existing code, so it should be developed in tree. On Wed, Aug 27, 2014 at 8:08 AM, Kevin Benton blak...@gmail.com wrote: From what I understand, the intended projects for the incubator can't operate without neutron because they are just extensions/plugins/drivers. For example, if the DVR modifications to the reference L3 plugin weren't already being developed in the tree, DVR could have been developed in the incubator and then merged into Neutron once the bugs were ironed out, so a huge string of Gerrit patches didn't need to be tracked. If that had happened, would it make sense to keep the L3 plugin as a completely separate project or merge it? I understand this is the approach the load balancer folks took by making Octavia a separate project, but I think it can still operate on its own, where the reference L3 plugin (and many of the other incubator projects) are just classes that expect to be able to make core Neutron calls.
Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective
On Sun, Aug 24, 2014 at 5:09 PM, Luke Gorrie l...@tail-f.com wrote: On 21 August 2014 12:12, Ihar Hrachyshka ihrac...@redhat.com wrote: Let the ones that are primarily interested in good quality of that code (vendors) drive development. And if some plugins become garbage, it's bad news for specific vendors; if neutron screws up because of a lack of concentration on core features and open source plugins, everyone is doomed. Completely agree with this sentiment. Is there a crisp distinction between a vendor plugin and an open source plugin, though? This topic is interesting: should all opensource backend drivers be put into the tree? But as Kyle has mentioned earlier, the Incubator is not the place to discuss in-tree vs. out-of-tree for 3rd-party vs. built-in drivers, but the place to bake newly introduced APIs and resource models through fast iteration, so I'll forward this topic in another thread. The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks like a vendor plugin but is actually completely open source. The development is driven by end-user organisations who want to make the standard upstream Neutron support their NFV use cases. We are looking for a good way to engage with the upstream community. In this cycle we have found kindred spirits in the NFV subteam, but we did not find a good way to engage with Neutron upstream (see https://review.openstack.org/#/c/116476/). It would be wonderful if there is a suitable process available for us to use in Kilo, e.g. incubation. Cheers, -Luke
Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on
Forwarded from the other thread discussing the incubator: http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html Completely agree with this sentiment. Is there a crisp distinction between a vendor plugin and an open source plugin, though? I think that opensource is not the only factor; it's about built-in vs. 3rd-party backends. Built-in must be opensource, but opensource is not necessarily built-in. In my view, the current OVS and linuxbridge drivers are built-in, but shim RESTful proxies for all kinds of SDN controllers should be 3rd-party, for they keep all virtual networking data model and service logic in their own places, using the Neutron API just as the NB shell (they can't even co-work with the built-in l2pop driver for vxlan/gre network types today). As for Snabb or DPDKOVS (they also plan to support official qemu vhost-user), or some other similar contributions, if one or two of them win the high-performance userspace vswitch competition and receive large common interest, then they may be accepted as built-in. The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks like a vendor plugin but is actually completely open source. The development is driven by end-user organisations who want to make the standard upstream Neutron support their NFV use cases. We are looking for a good way to engage with the upstream community. In this cycle we have found kindred spirits in the NFV subteam, but we did not find a good way to engage with Neutron upstream (see https://review.openstack.org/#/c/116476/). It would be wonderful if there is a suitable process available for us to use in Kilo, e.g. incubation. Cheers, -Luke