[openstack-dev] [mistral] Best practices for the DB maintenance in production
Hey, We are planning to use Mistral in production in the next few months. We noticed that having even a simple workflow with a cron-trigger (For example monitor and heal workflow) can create large amounts of data in the DB (MariaDB). Does Mistral have a mechanism / configuration of automatic deletion of old executions? What is the best practice to handle this type of challenge? Thanks. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all][packaging] Adding files to /etc in a package
On 07/02/2015 05:51 AM, Robert Collins wrote: On 2 July 2015 at 13:26, Thomas Goirand z...@debian.org wrote: On 07/02/2015 02:07 AM, Tony Breeds wrote: On Wed, Jul 01, 2015 at 11:07:30PM +, Perry, Sean wrote: BTW, see dh_bash-completion from the debhelper package. When in doubt about packaging on a deb-based distro, look at the debhelper tools source (which is Perl). -Original Message- From: Perry, Sean Sent: Wednesday, July 01, 2015 4:04 PM To: OpenStack Development Mailing List (not for usage questions) Subject: RE: [openstack-dev] [all][packaging] Adding files to /etc in a package According to Debian standards (which Ubuntu mostly follows), if a package ships bash completion information, that file belongs in /etc/bash_completion.d with a file named after the package. You can look in that dir on an Ubuntu/Debian box and see the setup. Right, but I'm talking about Python packaging. Which is certainly closely related to system/distribution packaging, but lacks a lot of the machinery to get it right. Correct. Python packaging is made for packaging ... Python! Not configuration files and other system bits. I'm with you there. BUT. Python programs use configuration files. And Python programs provide daemons. It's entirely reasonable and expected, within the context of a given program, that 'sudo make install' or 'sudo setup.py install' or any number of variations should make the program usable by all users of the system it is being installed into. I'm going to presume we agree on that in the rest of my reply. You may want to stop here if we don't agree about that. We do agree. The issue being that there's no universal way to do things, and different policy depending on the Unix vendor and the way the package is designed. What I'm trying to do is: 1) make it simple for the 'developer' consumers of the python package to use bash completions (without needing the system packages) Use the source, Luke! Or write yourself a small shell script... 
Unless-and-until distributions provide a standard home-dir scoped place to install bash completion scripts, developers and users installing from python packaging are going to expect that the completion scripts in the software they are installing get installed. Expecting every user to home-roll a workaround multiplies the overall cost by the number of users. So I can imagine three routes... - teach the thing being installed to install completion scripts into the existing right place(s) - define a new place that's homedir- and virtualenv-friendly and teach the thing installing them to install there. - punt and do nothing There's a 4th way: add some kind of parameter to (not) do the installation of config files. I'd say: have it not install config files by default, and add an option to setup.py install to do it. This way, everyone is happy. There's a big chunk of complexity hidden in my second point there. And even if we do it we still need to get it onto our dev machines: Mac OS X, various versions of Ubuntu, Fedora, RHEL and SUSE. So I don't think it makes a lot of sense to bank on getting that right-and-out-there before we look at the user experience directly. Yup. 2) Help the system/distribution packagers, or at the very least not make their life more difficult. If you attempt to address this, you're making my life miserable. Please don't do it, thanks. Since the entire job of the dh-* script ecosystem is to automate repeated patterns without making individual developers figure things out, I find 'miserable' comes across as a massive exaggeration. 
Let's say every bit of the software we release was doing things wrong, because of wrong assumptions by the upstream maintainers who aren't familiar with the Debian Policy Manual, or with packaging automation principles (you mentioned debhelper, but there's also OpenStack-specific tooling which I wrote); then there's a high chance that python setup.py install would do the wrong things with configuration files (or at least, not what *I* want it to do). Repeat this by the number of packages I maintain, and unwinding the cruft effectively makes my life miserable. Surely it's a single one-off dh script to create to handle bash completion scripts and move them into the right place, and you are done? The fact that I would like to use the dh script is exactly the reason why I don't want upstream to get involved. If upstream maintainers do nothing, then I just write: echo tools/nova.bash_completion nova > debian/python-novaclient.bash-completion This way, I leave it up to dh_bash-completion to decide where to install the bash completion script. Using this way, I am sure that my package is always Debian policy compliant, even if the Debian policy evolves (and it did for completion scripts, moving from /etc to /usr recently). If upstream maintainers pretend that they know how to do things, then I have to do
Re: [openstack-dev] [kolla][release] Announcing Liberty-1 release of Kolla
On 7/1/15, 3:50 PM, Kevin Carter kevin.car...@rackspace.com wrote: Steve, The initial review you guys had done did help a bunch and it was great to work with you and everyone else in the channel. As you're aware, the code base that you had tested was our Juno (stable at that time) release, which has more than its fair share of Rackspace-isms. One of which is the requirement to have access to the upstream repository for the installation of its python bits. So within that release it is true that if the upstream repository were to go away, a redeployment or the expansion of the stack would be impossible until service was restored. While you could always self-host the upstream repos (there is an open rsync relay), that wasn't functionality baked into OSAD at that time. However, since your eval we've released Kilo, which now provides the ability to self-host all of the python bits, container images, and anything else you may need or want from within the infrastructure (that's the default and what we gate on). While this functionality existed in master when you guys had done the test, it had not been officially released, so it's likely you had not looked into it at that point. Additionally, we've done a huge amount of work to separate Kilo / Master from what was done in Icehouse / Juno while also providing an upgrade path for our existing deployments, which will ensure that deployers are able to take advantage of the general improvements throughout the stack in Kilo and beyond. We, like you, do still have some reliance on upstream resources; however, the inclusion of the repo-server containers should thwart these issues. Our python bits are built once within that repo-server infrastructure, and everything within OSAD points to the internal repository for its source of truth. As I said, we still have some reliance on upstream and likely always will, but once an OSAD deployment is online, in Kilo or Master, it should be able to redeploy itself indefinitely. 
Obviously there's still more that we can do to make this better, and we're getting there, but I don't believe the same theoretical issues you had seen before are present now. All that said, great work on the Liberty-1 release and I look forward to playing with Kolla with these new bits sometime in the near future. Kevin, Thanks! The development team did a fantastic job focusing in Liberty-1: 14 blueprints, a pretty amazing amount of work in a short 5-week cycle. Plan to see the same level of focus to meet our Liberty-2 milestone goals and deliver on our complete mission. Regards -steve -- Kevin Carter IRC: cloudnull From: Steven Dake (stdake) std...@cisco.com Sent: Wednesday, July 1, 2015 2:21 PM To: OpenStack Development Mailing List (not for usage questions); s...@yaple.net Subject: Re: [openstack-dev] [kolla][release] Announcing Liberty-1 release of Kolla On 7/1/15, 8:11 AM, Ian Cordasco ian.corda...@rackspace.com wrote: On 6/30/15, 23:36, Sam Yaple sam...@yaple.net wrote: Ian, The most significant difference would be that Kolla uses image-based deployment rather than building from source on each node at runtime, allowing for a more consistent and repeatable deployment. Do you mean specific docker images? Can you expand on how os-ansible-deployment is not repeatable? They use an lxc-container cached image so all containers are uniform (consistent, repeatable, etc.) and build wheels (once) and use an internal repo mirror so that all installations use the exact same set of wheels (e.g., consistent and repeatable). Are there places where you've found osad to be not consistent or repeatable? Ian, We did a 10-day eval of OSAD and liked the tech. We did find the way the deployment pipeline works to be lacking. A purely theoretical problem with the deployment pipeline is that key repositories used to build the software could be offline. 
Since the building of the software occurs during deployment, this could result in an inability to alter the configuration of the deployment after OpenStack is deployed. Kolla suffers from this same problem during the installation (build pipeline) step. But as long as you have already built images somewhere in your system, you are still able to deploy, avoiding the complete downtime on deployment that OSAD could theoretically suffer. This theoretical issue makes the deployment non-repeatable. Hope our 10-day eval analysis helps improve OSAD. Regards -steve On Tue, Jun 30, 2015 at 2:28 PM, Ian Cordasco ian.corda...@rackspace.com wrote: On 6/29/15, 23:59, Steven Dake (stdake) std...@cisco.com wrote: The Kolla community is pleased to announce the release of the Kolla Liberty 1 milestone. This release fixes 56 bugs and implements 14 blueprints! Our community developed the following notable features: * A start at source-based containers So how does this now compare to the stackforge/os-ansible-deployment (soon to be openstack/openstack-ansible) project?
[openstack-dev] OpenStack miniconf at PyCon AU Ticket giveaway
Hi all, The OpenStack miniconf at PyCon Australia is rapidly approaching (31st July) but it's not too late to register! You can find all of the details on http://2015.pycon-au.org/, including the OpenStack miniconf programme http://2015.pycon-au.org/programme/schedule/friday. We're also excited to be able to give away 2 full access (professional) tickets, courtesy of the OpenStack foundation! If you would like to attend the miniconf, but it would be a hardship for you to do so, please send an email to myself (j...@nitrotech.org) and Robert Collins (robe...@robertcollins.net) stating why you'd like to attend and we'll consider you for receiving a ticket. Cheers, Josh __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Glance Image Protection Policy
Hi everyone, The glance policy.json allows specific users/roles to download an image. If we apply a policy on a specific role, only that role can download and/or boot an image. What if we want to restrict downloading an image, but at the same time allow the user to boot it via nova boot? The catch is that we will have to restrict the user from taking the snapshot, right? Can glance differentiate between a user downloading an image and nova doing the same on behalf of a user? Or how else can this puzzle be solved? Please guide. Thanks __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
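For reference, a role-based download restriction might start from a policy.json fragment along these lines (a sketch only: the `image_downloader` role name is hypothetical, and the exact rule syntax should be checked against the glance release actually deployed):

```json
{
    "download_image": "role:admin or role:image_downloader",
    "get_image": ""
}
```

Note that the catch raised above remains: since nova fetches the image with the user's own token when it boots the instance, a rule like this would block nova boot as well, unless the deployment arranges for nova's glance traffic to use credentials that satisfy the rule.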
[openstack-dev] [Fuel] python-fuelclient-6.1.2
Folks, I'm glad to announce that version 6.1.2 of Fuel Client was released. The team fixed a few important issues in order to make it possible to interact with Fuel 6.1 using the official client. CHANGELOG === 6.1.2 --- * Change version to 6.1.2 * Update link to developers' manual * Fix typo and output of fuel env --help * Pin oslo.i18n for keystoneclient * Return realistic data from mocked library * Reconfigure the wheel * Fix manual assignment of nodes to node groups * Block deploy/provision action for unassigned nodes * Bump version to 7.0 * Fix test for creating env with ha mode * Change default network mode for env creation to neutron As usual, you can install or upgrade the python-fuelclient package from PyPI: https://pypi.python.org/pypi/python-fuelclient -- romcheg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [puppet] [infra] issues with beaker/centos7 job
2015-07-02 5:00 GMT+02:00 Ian Wienand iwien...@redhat.com: So I looked into this with Gilles, and that error is a red herring (it's just saying the rdo repos don't create the presto/deltarpm stuff); yep, the real issue is when python-requests fails to install [1] a bit later due to [2] (mentioned in the comments of [3]). [2] is an unrelated EL6 issue; the job is running on EL7 + the RDO Kilo repo. So this is really an upstream packaging issue and not an issue with the nodepool images. I could not reproduce on a clean centos7, so I'd like to grab a nodepool image to have a closer look: https://bugs.launchpad.net/puppet-openstack/+bug/1470685/comments/1 Cheers, Alan __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [mistral] Best practices for the DB maintenance in production
Hi Elisha, Currently Mistral doesn't support any expiration policies for workflow/task/action runtime objects. It keeps them forever until someone deletes them manually. I see the following ways of addressing your need: Implement some cleanup component within Mistral (how to call it?) using its Scheduler component to periodically query and delete objects based on criteria provided in a config. Just implement something on top of the Mistral API to do the same. The con of this approach is that Mistral currently doesn't provide any flexible mechanism to do criteria-based search of its objects, so this option may not be effective from a performance perspective. There's an adjacent BP for that [1]. Generally, there's a number of things in the Mistral API we are not satisfied with, and we've been planning to design and suggest API v3 for Mistral to support those features (don't confuse with DSL v3; there's no plan for now to implement a new backwards-incompatible DSL). I think it deserves its own blueprint so that we can discuss the nuances. [1] https://blueprints.launchpad.net/mistral/+spec/mistral-items-filtering Renat Akhmerov @ Mirantis Inc. On 02 Jul 2015, at 13:37, ELISHA, Moshe (Moshe) moshe.eli...@alcatel-lucent.com wrote: Hey, We are planning to use Mistral in production in the next few months. We noticed that having even a simple workflow with a cron-trigger (for example a monitor-and-heal workflow) can create large amounts of data in the DB (MariaDB). Does Mistral have a mechanism / configuration of automatic deletion of old executions? What is the best practice to handle this type of challenge? Thanks. 
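Until Mistral grows such an expiration mechanism, the periodic-cleanup idea above can be scripted externally. Below is a minimal sketch of the selection logic; the `state`/`updated_at` field names and the set of terminal states are assumptions about the shape of Mistral's execution objects, not a verified API contract:

```python
from datetime import datetime, timedelta


def expired_execution_ids(executions, max_age_days, now=None):
    """Return IDs of finished executions older than max_age_days.

    `executions` is a list of dicts shaped like Mistral execution
    objects; the 'state' and 'updated_at' keys used here are
    assumptions, not a verified API contract.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    terminal = {'SUCCESS', 'ERROR'}  # assumed terminal states
    return [
        e['id'] for e in executions
        if e['state'] in terminal
        and datetime.strptime(e['updated_at'], '%Y-%m-%d %H:%M:%S') < cutoff
    ]
```

A cron job could then feed this from a listing call against the Mistral API and delete each returned execution; only terminal executions are considered so that running workflows are never touched.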
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps
Hi, I'd like to see how we can develop new alarm features, since I'm working on event-alarm. Having duplicated code bases may confuse developers too, so we should have some policies like: * aodh focuses on making sure that it provides the existing API and functionality as of kilo to end users * ceilometer/alarm is open to develop new experimental features until L2/L3 * having a merge window to move those new features to aodh from ceilometer/alarm around L3 What do you think? Thanks, Ryota -Original Message- From: gordon chung [mailto:g...@live.ca] Sent: Tuesday, June 30, 2015 3:48 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps On 29/06/2015 11:40 AM, Chris Dent wrote: On Mon, 29 Jun 2015, Julien Danjou wrote: Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/ woot! I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty? i think we should follow up with the packagers. if i understand correctly, the location of the code is not known from a user pov; it's the packagers that build the appropriate packages for them to use. if from the packagers' pov they just need to work against Aodh, then i would lean more to removing alarming from the Ceilometer repo asap to avoid maintaining duplicate code bases and the eventual divergence of the two. This is the big question and is one of the things listed on the potential agenda for the mid-cycle. When we do the splits do we deprecate or delete the old code? Given the high chance of us missing some of the potential issues, it seems like hashing it out some before the mid-cycle is a good idea. 
The two big overarching issues (that inform a lot of the details) that I'm aware of are: * If we delete then we need to make sure we're working hand in hand with all of: downstream packagers, tempest, grenade, devstack, etc. * If we deprecate will people bother to use the new stuff? i would think/hope the experience from the end user doesn't actually change. ie. all the same packaged services remain. I'm sure there are plenty of others. -- gord __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Rebuilding instances booted from volume
Hi ZhengZhenyu, I'd say it's more like a new feature, and rebuild of volume-backed instances is simply not implemented yet. Though I agree that the existing behaviour is rather confusing, as such a rebuild will be a no-op, and Nova won't report any errors either. AFAIK, Eugeniya K. (CC'ed) started to work on this; maybe she will be able to upload some WIP patches soon. Thanks, Roman On Thu, Jul 2, 2015 at 7:11 AM, ZhengZhenyu zheng.zhe...@outlook.com wrote: Hi, All According to my test, Nova cannot rebuild volume-booted instances. The patch https://review.openstack.org/#/c/176891/ fixes rebuild for instances launched from an image with volumes attached, but rebuilding an instance booted from a volume is still not working. The rebuild action for volume-booted instances after implementing the above patch performs like this: the volumes are detached and attached again, and the selected image/snapshot for rebuilding is actually useless. This means that if the /dev/vda of an instance booted from volume is broken for some reason, we cannot rebuild it from a new image or a snapshot of this instance (nova just detaches and re-attaches the same volume). I don't know whether this is a bug or it is designed on purpose. Thanks, BR, Zheng __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ceilometer] virtual mid-cycle planning
On Thu, 2 Jul 2015, Chris Dent wrote: The voting related to the ceilometer mid-cycle is now closed. Prad and I will work to come up with a proposed schedule. Can interested participants please make sure they are up to date on http://doodle.com/6vfksdu38wcwqqd3 with two consecutive days where they can be present (obviously ignore the dates that have already passed). If you can't do two consecutive days, that's okay, we'll try to put the stuff you care about on the day you can be around. Thanks. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ceilometer] virtual mid-cycle planning
The voting related to the ceilometer mid-cycle is now closed. Prad and I will work to come up with a proposed schedule. Some general comments: As there is limited time, not all topics proposed will be addressed. To maximize value for the sessions that got an above-average number of votes, we will be cutting, with some severity, the topics that did not. For those topics that received only a small number of votes, it may be worthwhile for people to arrange their own small meetings with other interested parties. Just because a topic isn't going to be on the schedule, that doesn't mean it is not important; it simply means that the right people weren't paying attention in the past few days. They may be later. Unfortunately, when building the schedule we will be unable to accommodate concurrent sessions because of the small number of participants (everyone wants to go to the same stuff). It's looking like, based on the voting, we should go for 3 days, with the third day being an optional sprint day. Some topics are broad enough that having a session on day 1 (to figure out what matters) and day 2 (do something about it) probably makes sense. The primary zone of time overlap is the Euro-afternoon, followed by the Euro-morning. Thank you to those of you who are not local to Europe who were willing to stretch a bit to make the scheduling a bit easier. Topics will need leaders. Visit the relevant etherpads to find the summary information and links to a planning spreadsheet, and volunteer to lead if you like. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends
On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com wrote: On 12:24 Jun 26, Matt Riedemann wrote: snip So the question is, is everyone OK with this and ready to make that change? Thanks for all your work on this Matt. +100, awesome debug, followup and fixing work by Matt I'm fine with this. I say bite the bullet and we'll see the CI's surface that aren't skipping or failing this test. Just curious, shouldn't this mean we need to have some way of Cinder querying Nova for "do you have this capability" and only then setting the 'encryption' key in conn_info? Better communication between nova and cinder? thanx, deepak __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [mistral] Best practices for the DB maintenance in production
Thanks, Renat. I also believe the right place to do it is in Mistral. I will create a blueprint and we will discuss the details in the spec. Thanks. From: Renat Akhmerov [mailto:rakhme...@mirantis.com] Sent: Thursday, 02 July 2015 12:34 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [mistral] Best practices for the DB maintenance in production Hi Elisha, Currently Mistral doesn't support any expiration policies for workflow/task/action runtime objects. It keeps them forever until someone deletes them manually. I see the following ways of addressing your need: * Implement some cleanup component within Mistral (how to call it?) using its Scheduler component to periodically query and delete objects based on criteria provided in a config. * Just implement something on top of the Mistral API to do the same. The con of this approach is that Mistral currently doesn't provide any flexible mechanism to do criteria-based search of its objects, so this option may not be effective from a performance perspective. There's an adjacent BP for that [1]. Generally, there's a number of things in the Mistral API we are not satisfied with, and we've been planning to design and suggest API v3 for Mistral to support those features (don't confuse with DSL v3; there's no plan for now to implement a new backwards-incompatible DSL). I think it deserves its own blueprint so that we can discuss the nuances. [1] https://blueprints.launchpad.net/mistral/+spec/mistral-items-filtering Renat Akhmerov @ Mirantis Inc. On 02 Jul 2015, at 13:37, ELISHA, Moshe (Moshe) moshe.eli...@alcatel-lucent.com wrote: Hey, We are planning to use Mistral in production in the next few months. We noticed that having even a simple workflow with a cron-trigger (for example a monitor-and-heal workflow) can create large amounts of data in the DB (MariaDB). Does Mistral have a mechanism / configuration of automatic deletion of old executions? 
What is the best practice to handle this type of challenge? Thanks. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends
On 7/2/2015 4:12 AM, Deepak Shetty wrote: On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com wrote: On 12:24 Jun 26, Matt Riedemann wrote: snip So the question is, is everyone OK with this and ready to make that change? Thanks for all your work on this Matt. +100, awesome debug, followup and fixing work by Matt I'm fine with this. I say bite the bullet and we'll see the CI's surface that aren't skipping or failing this test. Just curious, shouldn't this mean we need to have some way of Cinder querying Nova for "do you have this capability" and only then setting the 'encryption' key in conn_info? Better communication between nova and cinder? thanx, deepak __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I thought the same about some capability flag in cinder where the volume driver would tell the volume manager if it supported encryption, and then the cinder volume manager would use that to tell if a request to create a volume from an encryption type was possible. But the real problem in our case is the encryption provider support, which is currently the luks and cryptsetup modules in nova. However, the encryption provider is completely pluggable [1] from what I can tell; the libvirt driver in nova just creates the provider class (assuming it can import it) and calls the methods defined in the VolumeEncryptor abstract base class [2]. So whether or not encryption is supported during attach is really up to the encryption provider implementation, the volume driver connector code (now in os-brick), and what the cinder volume driver is providing back to nova during os-initialize_connection. I guess my point is I don't have a simple solution besides actually failing when we know we can't encrypt the volume during attach - which is at least better than the false positive we have today. 
[1] http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/__init__.py#n47 [2] http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/base.py#n28 -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
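To make the pluggability described above concrete, here is a self-contained sketch of the provider shape: a stand-in for the abstract base class (the real one lives in nova/volume/encryptors/base.py; the method names below are assumptions for illustration, not copied from nova) plus a provider that fails loudly rather than producing the false positive discussed in the thread:

```python
import abc


class VolumeEncryptor(abc.ABC):
    """Stand-in mirroring the shape of nova's VolumeEncryptor base
    class; the attach/detach method names are illustrative assumptions."""

    def __init__(self, connection_info, **kwargs):
        self.connection_info = connection_info

    @abc.abstractmethod
    def attach_volume(self, context, **kwargs):
        """Set up encryption when the volume is attached."""

    @abc.abstractmethod
    def detach_volume(self, **kwargs):
        """Tear down encryption before the volume is detached."""


class UnsupportedEncryptor(VolumeEncryptor):
    """Provider that raises instead of silently skipping encryption."""

    def attach_volume(self, context, **kwargs):
        # Failing here surfaces the problem at attach time instead of
        # leaving the volume unencrypted while the user believes otherwise.
        raise RuntimeError('this backend cannot encrypt volume %s'
                           % self.connection_info.get('serial'))

    def detach_volume(self, **kwargs):
        raise RuntimeError('this backend cannot encrypt volumes')
```

The libvirt driver would instantiate whichever provider class the connection info names and call these methods; a provider like `UnsupportedEncryptor` implements the "actually failing when we know we can't encrypt" behaviour suggested above.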
Re: [openstack-dev] [Neutron] - breaking changes for plugins/drivers
- Original Message - I think we need to revisit the test infrastructure requirement. We have a lot of logic to set up and test plugins/drivers, and making each repo duplicate all of that is a pretty big waste of effort. Maybe some base stuff should go in neutron lib? Absolutely. On Jul 1, 2015 12:32 PM, Doug Wiegley doug...@parksidesoftware.com wrote: On Jun 30, 2015, at 11:22 PM, Kevin Benton blak...@gmail.com wrote: Hi, We have had at least two breaking changes merge this week for out-of-tree drivers/plugins. These are just the two I noticed that broke the Big Switch CI (the one I keep an eye on since I had set it up): 1. Removed test_lib that changes config files. https://review.openstack.org/#/c/196583/ 2. Removed the loopingcall common util with no deprecation cycle or announcement. https://review.openstack.org/#/c/192999/ I proposed a revert for 1 that merged, but I don't particularly want to keep fighting this. What is our current policy on this? Just change whatever we want and tell plugin maintainers this is just the way things work? So, this is a big hairy bit of suck right now. We expected some of this fallout with the services split and plugin decomp (in fact, we counted on it to move this ball forward), and we had adopted these guidelines: 1. Other repos should not rely on oslo-incubated modules. (neutron/openstack/…) 2. Other repos should not rely on neutron's test infrastructure. (neutron/tests/…) 3. For changes in any other area, they should be additive, or have a backwards compatibility shim or a big warning notice (the last being the suckiest answer.) 4. When we start getting "stable" interfaces in neutron/lib/…, which has the proviso of NO breaking changes without a shim or deprecation cycle, we get rid of restriction #3. We've been regularly merging code that breaks #3 and we have plugins that use code from #1 and #2 today. 
IMO, the core review team needs to be aware that neutron is now a library, and refactors and gratuitous cleanups have a pretty hefty cost. Especially in Liberty, be careful. Thanks, doug -- Kevin Benton __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
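As a concrete illustration of the "backwards compatibility shim" option in point 3 above, a module left behind at the old import location can forward calls to the relocated code while emitting a deprecation warning for one cycle. A minimal sketch with made-up names (not actual neutron modules):

```python
import warnings


def deprecated_alias(new_func, old_path):
    """Wrap a relocated function so callers of the old import path keep
    working but see a DeprecationWarning pointing at the new location."""
    def wrapper(*args, **kwargs):
        warnings.warn(
            '%s is deprecated; use %s.%s instead'
            % (old_path, new_func.__module__, new_func.__name__),
            DeprecationWarning, stacklevel=2)
        return new_func(*args, **kwargs)
    return wrapper

# Hypothetical usage at the old module location, e.g. a module that
# used to hold a looping-call utility before it moved elsewhere:
#   looping_call = deprecated_alias(new_home.looping_call,
#                                   'old.package.looping_call')
```

A shim like this costs the core team one extra module per move, and gives out-of-tree plugin maintainers a full cycle to migrate instead of a broken CI.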
Re: [openstack-dev] [puppet] [infra] issues with beaker/centos7 job
On 2015-07-02 13:29:34 +0200 (+0200), Alan Pevec wrote: [...] I could not reproduce on a clean centos7, so I'd like to grab nodepool image to have a closer look: [...] Get up with one of the root admins in #openstack-infra and we can get you access to one. I did it yesterday for Ian's earlier pass at troubleshooting these errors. -- Jeremy Stanley
Re: [openstack-dev] [puppet] [infra] issues with beaker/centos7 job
I could not reproduce on a clean centos7, so I'd like to grab the nodepool image to have a closer look. After having a closer look, I see that the image has requests 2.7 installed from pypi, which overwrites the python-requests RPM installation and wreaks havoc when trying to upgrade the RPM. I'm not sure why or where pypi is used during the image build, but it should not be installed system-wide on an RPM system. If really needed, install it in a venv. Cheers, Alan
[openstack-dev] [neutron] db migration for vendor extensions
Hello folks, After the Neutron core and vendor code decomposition [1], it was decided to keep db migration scripts in the Neutron repo. I was wondering if any of the networking-* project owners figured out an alternative to this approach, where DB migration can reside in networking-* repositories instead. As far as DB models are concerned, keeping them in networking-* is simple. I plan to introduce some extensions and it would be ideal if DB migration and DB models live outside the Neutron repository. Any suggestions for addressing this? Does anyone have a working mechanism? [1] http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html Thanks, Fawad Khaliq
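As a rough illustration of the approach being asked about — the networking-* project carrying its own schema step for its extension tables — here is a minimal, self-contained sketch. The table and column names are invented, and a real implementation would use its own alembic migration branch rather than raw SQL against sqlite:

```python
# Illustrative only: a vendor extension table kept in a networking-* repo,
# created by the project's own migration step rather than neutron's
# alembic tree. Table/column names are made up for this sketch.
import sqlite3

VENDOR_MIGRATION = """
CREATE TABLE IF NOT EXISTS networking_example_port_ext (
    port_id VARCHAR(36) PRIMARY KEY,
    vendor_profile VARCHAR(255)
);
"""


def run_vendor_migration(conn):
    """Apply the extension's schema without touching neutron's migrations."""
    conn.executescript(VENDOR_MIGRATION)
    return [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]


tables = run_vendor_migration(sqlite3.connect(":memory:"))
```

The key design point is that the extension's schema lives and evolves in the networking-* repo, so only the shared core tables depend on neutron's migration chain.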
[openstack-dev] [Heat] Show attribute is a collection of other attributes or not?
Hi Heaters. I don't think my question is too big for openstack-dev, but it affects a lot of Heat resources and I need to collect more opinions before applying one of the following approaches. I recently uploaded an initial approach for implementing a common 'show' attribute [1] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:show-attribute,n,z On one of these reviews an interesting suggestion was raised: the 'show' attribute should return a map of all of the resource's attributes, i.e. for each attr in self.attributes_schema: outputs[attr] = _resolve_attribute(attr) return outputs I agree that it's easier than a separate show_resource method for each resource, and it's the same as what the Neutron API returns on a show request. However, we already have an opposite example: the OS::Nova::Server resource has a bunch of attributes which do not match the current 'show' attribute output: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L918 I suppose the same situation will hold for other resources. So I want to ask which way we would like to follow: [1] 'show' as a collection of attributes [2] 'show' as the same output as the client's resource-show command Regards, Sergey.
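The quoted suggestion (option [1]: build 'show' from attributes_schema) can be sketched with a toy resource; the class, attribute names and values below are illustrative, not Heat's real API:

```python
# Toy sketch of option [1]: derive 'show' by resolving every attribute in
# the resource's attribute schema. Names/values are illustrative only.
class FakeResource(object):
    attributes_schema = {"name": None, "status": None, "flavor": None}

    def _resolve_attribute(self, name):
        # A real resource would query the backing API here.
        return {"name": "web-1", "status": "ACTIVE",
                "flavor": "m1.small"}[name]

    def show_attribute(self):
        """Return a map of every schema attribute to its resolved value."""
        return {attr: self._resolve_attribute(attr)
                for attr in self.attributes_schema}


outputs = FakeResource().show_attribute()
```

The trade-off raised in the thread is visible here: this map is whatever the schema declares, which may differ from what the native client's resource-show command prints (option [2]).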
[openstack-dev] [keystone][all] Deprecating slash ('/') in project names
Hi everyone, In Kilo, keystone introduced the concept of Hierarchical Multitenancy[1], which allows cloud operators to organize projects in hierarchies. This concept is evolving in Liberty with the addition of the Reseller use case[2], where, among other features, it’ll have hierarchies of domains by making the domain concept a feature of projects and not a different entity: from now on, every domain will be treated as a project that has the “is_domain” property set to True. Currently, a project scoped token can be obtained by passing only the project name and the domain it belongs to, since project names are unique within a domain. However, with those hierarchies of projects, in M we intend to remove this constraint in order to make a project name unique only at its level in the hierarchy (under its parent project). In other words, it won’t be possible to have sibling projects with the same name. For example, the following hierarchy will be valid:

    A       - project with the domain feature
   / \
  B   C     - “pure” projects, children of A
  |   |
  A   B     - “pure” projects, children of B and C respectively

Therefore, the cloud user faces some problems when getting a project scoped token by name for projects A or B, since keystone won’t be able to distinguish them by their names alone. The best way to solve this problem is providing the full hierarchy, like “A/B/A”, “A/B”, “A/C/B” and so on. To achieve this, we intend to deprecate the “/” character in project names in Liberty and prohibit it in M, removing/replacing this character in a database migration**. Do you have some strong reason to keep using this character in project names? How bad would it be for existing deploys? We’d like to hear from you. Best regards, Henrique ** LDAP as assignment backend does not support Hierarchical Multitenancy. This change will only be applied to SQL backends. 
[1] http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html [2] http://specs.openstack.org/openstack/keystone-specs/specs/kilo/reseller.html
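To make the ambiguity concrete: with a hierarchy like the A/{B,C}/{A,B} example in the thread, the name "A" or "B" alone no longer identifies one project, while a full '/'-separated path does — which is exactly why '/' must be reserved. A toy resolver, assuming a simple parent-to-children mapping (the ids are made up):

```python
# Illustrative parent -> {child name -> project id} tree mirroring the
# thread's example hierarchy; ids are invented for this sketch.
TREE = {
    None: {"A": "id-A"},
    "id-A": {"B": "id-A/B", "C": "id-A/C"},
    "id-A/B": {"A": "id-A/B/A"},
    "id-A/C": {"B": "id-A/C/B"},
}


def resolve(path):
    """Map a full hierarchical name like 'A/B/A' to a unique project id."""
    parent = None
    for name in path.split("/"):
        parent = TREE[parent][name]
    return parent
```

A bare lookup of "A" or "B" would have to search every level and could match several projects; splitting the path on '/' makes each segment unambiguous — and would break if a project name itself contained '/'.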
Re: [openstack-dev] [neutron] db migration for vendor extensions
On Thu, Jul 02, 2015, Fawad Khaliq fa...@plumgrid.com wrote: After Neutron core and vendor code decomposition [1], it was decided to keep db migration scripts in Neutron repo. I was wondering if any of the networking-* project owners figured out an alternative to this approach where DB migration can reside in networking-* repositories instead. As far as DB models are concerned, keeping them in networking-* is simple. I plan to introduce some extensions and it would ideal if DB migration and DB models live out of Neutron repository. Any suggestions for addressing this? Anyone has a working mechanism? Neutron's contributing devref is being updated to include information about this. Please participate in the review [2] and let us know if there is anything you feel is missing or if it can be explained better. [2] https://review.openstack.org/187267
Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps
On 02/07/2015 4:43 AM, Ryota Mibu wrote: Hi, I'd like to see how we can develop new alarm features, since I'm working on event-alarm. Having duplicated code bases may confuse developers too, so we should have some policies like: * aodh focuses on making sure that it provides the existing API and functionality as of kilo to end users * ceilometer/alarm is open to develop new experimental features until L2/L3 * having a merge window to move those new features to aodh from ceilometer/alarm around L3 What do you think? this sounds like a good idea, we should probably adopt something similar to the graduation process for oslo libs. at quick glance, the code is all structured the same -- under a different main folder -- so i believe it should be an easy port, if coding against the current ceilometer repo, to move it under aodh afterwards. Thanks, Ryota -Original Message- From: gordon chung [mailto:g...@live.ca] Sent: Tuesday, June 30, 2015 3:48 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps On 29/06/2015 11:40 AM, Chris Dent wrote: On Mon, 29 Jun 2015, Julien Danjou wrote: Hi team, Aodh has been imported and is now available at: https://git.openstack.org/cgit/openstack/aodh/ woot! I'm pretty clear about the next steps for Aodh and what we need to build, but something is still not clear to me. Do we go ahead and bite the bullet and remove ceilometer-alarming from ceilometer in Liberty? i think we should follow up with the packagers. if i understand correctly, the location of the code is not known from a user pov; it's the packagers that build the appropriate packages for them to use. if from the packagers' pov they just need to work against Aodh, then i would lean more to removing alarming from the Ceilometer repo asap to avoid maintaining duplicate code bases and the eventual divergence of the two. This is the big question and is one of the things listed on the potential agenda for the mid-cycle. 
When we do the splits, do we deprecate or delete the old code? Given the high chance of us missing some potential issues, it seems like hashing it out some before the mid-cycle is a good idea. The two big overarching issues (that inform a lot of the details) that I'm aware of are: * If we delete then we need to make sure we're working hand in hand with all of: downstream packagers, tempest, grenade, devstack, etc. * If we deprecate will people bother to use the new stuff? i would think/hope the experience from the end user doesn't actually change, i.e. all the same packaged services remain. I'm sure there are plenty of others. -- gord
Re: [openstack-dev] [all] [test-]requirements-PYN.txt will cause gate failures on check-*-requirements
Sergey, i don't think so. Please go ahead and file one for oslo.* if you get a chance. -- dims On Thu, Jul 2, 2015 at 9:58 AM, Sergey Vilgelm svilg...@mirantis.com wrote: Do we have a common bug on launchpad for this work, to collect all patches? On Thu, Jul 2, 2015 at 2:39 AM, Robert Collins robe...@robertcollins.net wrote: On 2 July 2015 at 11:36, Robert Collins robe...@robertcollins.net wrote: On 29 June 2015 at 15:59, Robert Collins robe...@robertcollins.net wrote: Hi, so we're nearly ready to deprecate the python-version-specific requirements files. Once we have infra's requirements cross checking jobs all copacetic again, we should be able to move forward. So we've got them working again in master, and I'm about to work on support for stable - things are a bit awkward and all tied together there. Sorry for the extended disruption on requirement updates! The status right now is: - all stable jobs are failing on requirements checks, because we don't have the needed code modules in the openstack/requirements stable branches. This is critical and my top priority to unblock everything stable/. - master requirement checks should work Just Fine, as long as the requirements are current. However the bot doesn't know enough to merge -PYN requirements files for you. So you need to do that manually. The easiest way IMO is:

  REQS=$(pwd)/reqs
  virtualenv $REQS
  . $REQS/bin/activate
  pip install -U pip setuptools -e git+https://git.openstack.org/openstack/requirements#egg=openstack.requirements
  cd path-to-project
  cat requirements-py3.txt >> requirements.txt
  cat test-requirements-py3.txt >> test-requirements.txt
  git rm *requirements-py3.txt
  update-requirements --source $REQS/src/openstack.requirements .
  git diff
  # you'll have some cruft to cleanup - duplicate comments at the
  # bottom of the files and so on
  # commit and review
  # repeat from 'cd path-to-project' as needed.

One word of caution. 
Some of the projects i've done this for - keystone and glance - had -e editable lines in their requirements files: these got through because infra wasn't linting the -py3 files at all until recently, and I'm not yet sure if we can get them back in. So we may need to actually fix up those dependencies asap, e.g. get /a/ release of glance-store out with its partial python3 support, to permit the glance python34 gate to work at all. Second word of caution: tox.ini probably needs editing too. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud -- Thanks, Sergey Vilgelm OpenStack Software Engineer Mirantis Inc. Skype: sergey.vilgelm Phone: +36 70 512 3836 -- Davanum Srinivas :: https://twitter.com/dims
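For anyone scripting the merge step across many repos, the "cat the -py3 file in, then clean up the cruft" part can be approximated in Python. This is a rough sketch, not the official update-requirements tool; it simply folds the -py3 entries into the main list, dropping comments and duplicates:

```python
# Rough Python equivalent of the manual 'cat requirements-py3.txt >>
# requirements.txt' merge described above: concatenate and deduplicate.
def merge_requirements(base_lines, py3_lines):
    """Fold the -py3 variant into the base list, skipping dup/comment cruft."""
    seen, merged = set(), []
    for line in base_lines + py3_lines:
        key = line.strip()
        if not key or key.startswith("#"):
            continue  # drop blank lines and comments instead of doubling them
        if key not in seen:
            seen.add(key)
            merged.append(key)
    return merged


merged = merge_requirements(
    ["pbr>=0.11", "six>=1.9.0", "# comment"],
    ["six>=1.9.0", "enum34;python_version<'3.4'"])
```

After a merge like this you would still run update-requirements and eyeball the git diff, exactly as the steps above describe.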
Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends
Oh, just to be clear, I don't mean to discard what you fixed. My intention was to discuss what would be a better way to fix this in the future through a feature/blueprint, given there is a consensus. thanx, deepak On Thu, Jul 2, 2015 at 8:57 PM, Deepak Shetty dpkshe...@gmail.com wrote: On Thu, Jul 2, 2015 at 7:05 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 7/2/2015 4:12 AM, Deepak Shetty wrote: On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com mailto:thin...@gmail.com wrote: On 12:24 Jun 26, Matt Riedemann wrote: snip So the question is, is everyone OK with this and ready to make that change? Thanks for all your work on this Matt. +100, awesome debug, followup and fixing work by Matt I'm fine with this. I say bite the bullet and we'll see the CIs surface that aren't skipping or failing this test. Just curious, shouldn't this mean we need to have some way of Cinder querying Nova for "do you have this capability" and only then setting the 'encryption' key in conn_info? Better communication between nova and cinder? thanx, deepak I thought the same about some capability flag in cinder where the volume driver would tell the volume manager if it supported encryption, and then the cinder volume manager would use that to tell if a request to create a volume from an encryption type was possible. But the real problem in our case is the encryption provider support, which is currently the luks and cryptsetup modules in nova. However, the encryption provider is completely pluggable [1] from what I can tell; the libvirt driver in nova just creates the provider class (assuming it can import it) and calls the methods defined in the VolumeEncryptor abstract base class [2]. 
So whether or not encryption is supported during attach is really up to the encryption provider implementation, the volume driver connector code (now in os-brick), and what the cinder volume driver is providing back to nova during os-initialize_connection. Yes, I understand the issue, hence I said: why not have Cinder check with Nova whether it supports encryption for volume-attach; Nova returns yes/no, and based on that Cinder accepts/rejects the 'create new encrypted volume' request. I guess my point is I don't have a simple solution besides actually failing when we know we can't encrypt the volume during attach - which is at least better than the false positive we have today. Definitely what you have proposed/fixed is appreciated, but it's a workaround; the better way seems to be improving the Nova-Cinder communication? thanx, deepak [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/__init__.py#n47 [2] http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/base.py#n28 -- Thanks, Matt Riedemann
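The "completely pluggable" provider pattern described above boils down to an abstract base class that the libvirt driver instantiates and calls around attach/detach. A simplified sketch — the method names and return values here are illustrative stand-ins, not nova's exact signatures:

```python
# Simplified sketch of a VolumeEncryptor-style pluggable interface; the
# real class lives in nova/volume/encryptors/base.py and differs in detail.
import abc


class VolumeEncryptor(abc.ABC):
    """Interface the hypervisor driver calls around volume attach/detach."""

    @abc.abstractmethod
    def attach_volume(self, context):
        """Open/format the encrypted volume before guest attach."""

    @abc.abstractmethod
    def detach_volume(self, context):
        """Close the encrypted volume on detach."""


class NoopEncryptor(VolumeEncryptor):
    """Stand-in provider that passes the volume through untouched --
    this is the silent-fallthrough behavior the thread is worried about."""

    def attach_volume(self, context):
        return "attached-unencrypted"

    def detach_volume(self, context):
        return "detached"


result = NoopEncryptor().attach_volume(context=None)
```

Because the provider is resolved at attach time from what cinder hands back in the connection info, a backend can "succeed" with a no-op provider — which is exactly the false positive the fix discussed here turns into an explicit failure.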
Re: [openstack-dev] [Murano][Congress] Application placement use case
Hi, All applications are monolithic, so they don't need to be split over multiple regions. It is necessary to have an ability to select a region based on requirements, and for now I don't care how they are placed inside the region. I am not sure how a region's capabilities will be stored, and actually this is the reason why I am asking if Congress will solve this. I can imagine a policy which says: if Solaris is required, then select Region_Solaris. Or a more complex one: if Solaris is required, then select the less loaded of the regions [Region_Solaris1, Region_Solaris2]. In this use case Murano will deploy a complex environment which consists of multiple atomic applications with different requirements, so deployment will be across clouds but for the whole environment. Imagine IBM MQ on AIX and PowerPC + Oracle DB on Solaris + Microsoft IIS on Windows 2012 HyperV + WebSphere on RHEL KVM. Thanks Gosha On Wed, Jul 1, 2015 at 10:17 PM, ruby.krishnasw...@orange.com wrote: Hi Did you mean placement at “two levels”? First to select the region, and then within each region Nova scheduler will place on hosts. But where will the capabilities of each region (based on which the placement decision will be made) be stored? Will each region be queried to obtain this information? Will a single application need to be placed (split) across different regions? Ruby *From:* Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com] *Sent:* Wednesday, July 1, 2015 21:26 *To:* OpenStack Development Mailing List *Subject:* [openstack-dev] [Murano][Congress] Application placement use case Hi, I would like to share with the community one of the real use cases which we saw while working with a Murano customer, and ask for advice. This customer has multiple OpenStack regions which serve different hypervisors. The reason for that is Oracle OpenStack, which is used to work with Solaris on top of SPARC architecture. There are other hypervisors, KVM and VMWare, which are used. 
There are multiple applications which should be placed in different regions based on their requirements (OS, hypervisor, networking restrictions). As there is no single cloud, Nova scheduler can’t be used (at least in my understanding), so we need to have some placement policies to put applications properly. And obviously we don’t want to ask the end user about the placement. Right now in Murano we can do this by:

1. Hardcoding placement inside the application. This approach will work and does not require any significant change in Murano. But there are obvious limitations, like: if we have two options for placement, which one should be hardcoded?

2. Creating a special placement scheduler application\class in Murano which will use some logic to place applications properly. This is a better approach as nothing is hardcoded in applications except their requirements. Applications will just have a workflow to ask the placement scheduler for a decision and then will just use this decision.

3. Using some external tool or OpenStack component for the placement decision. This is a very generic use case which we saw multiple times. Tools like CIRBA are often used for this. Murano will need an interface to ask these tools. I think about this solution as an extension of 2.

I am aware that Murano is working on integration with Congress and I am looking for an opportunity here to address a real use case of Murano usage in a real customer environment. It will be great to know if OpenStack can offer something here without involving 3rd party tools. I suspect that this is a good use case for Congress, but I would like to see how it might be implemented. Thanks Gosha -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
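Option 2 (a placement scheduler that matches application requirements against region capabilities) can be sketched in miniature as follows. The region names echo the example in the thread, while the capability keys and the flat dict are invented for illustration:

```python
# Toy "placement scheduler": pick a region whose advertised capabilities
# satisfy an application's requirements. Capability keys are made up.
REGIONS = {
    "Region_Solaris": {"os": "solaris", "hypervisor": "sparc-ldom"},
    "Region_KVM": {"os": "rhel", "hypervisor": "kvm"},
    "Region_HyperV": {"os": "windows2012", "hypervisor": "hyperv"},
}


def place(requirements):
    """Return the first region satisfying every requirement, else None."""
    for region, caps in REGIONS.items():
        if all(caps.get(key) == value for key, value in requirements.items()):
            return region
    return None


choice = place({"os": "solaris"})
```

A Congress-style policy engine would express the same matching declaratively (and could also fold in load, as the "less loaded region" example suggests), but the requirement-vs-capability join is the core of it.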
Re: [openstack-dev] [nova][cinder][qa] encrypted volumes tests don't actually test encrypted volumes for most backends
On Thu, Jul 2, 2015 at 7:05 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 7/2/2015 4:12 AM, Deepak Shetty wrote: On Wed, Jul 1, 2015 at 5:17 AM, Mike Perez thin...@gmail.com mailto:thin...@gmail.com wrote: On 12:24 Jun 26, Matt Riedemann wrote: snip So the question is, is everyone OK with this and ready to make that change? Thanks for all your work on this Matt. +100, awesome debug, followup and fixing work by Matt I'm fine with this. I say bite the bullet and we'll see the CI's surface that aren't skipping or failing this test. Just curious, shouldn't this mean we need to have some way of Cinder querying Nova for do u have this capability and only then setting the 'encryption' key in conn_info ? Better communication between nova and cinder ? thanx, deepak __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I thought the same about some capability flag in cinder where the volume driver would tell the volume manager if it supported encryption and then the cinder volume manager would use that to tell if a request to create a volume from an encryption type was possible. But the real problem in our case is the encryption provider support, which is currently the luks and cryptsetup modules in nova. However, the encryption provider is completely pluggable [1] from what I can tell, the libvirt driver in nova just creates the provider class (assuming it can import it) and calls the methods defined in the VolumeEncryptor abstract base class [2]. So whether or not encryption is supported during attach is really up to the encryption provider implementation, the volume driver connector code (now in os-brick), and what the cinder volume driver is providing back to nova during os-initialize_connection. 
Yes I understand the issue, hence i said that why not cinder checks with Nova whether it supports enc for volume-attach , nova returns yes/no and based on that cinder accepts/rejects the 'create new enc volume' request. I guess my point is I don't have a simple solution besides actually failing when we know we can't encrypt the volume during attach - which is at least better than the false positive we have today. Definitely what u have proposed/fixed is appreciated. But its a workaround, the better way seems to be improving the Nova-Cinder communication ? thanx, deepak [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/__init__.py#n47 [2] http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/encryptors/base.py#n28 -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [packaging] how to deal with the rename of config files in neutron on upgrade?
This change in neutron [1] renames the linuxbridge and openvswitch plugin config files. I'm familiar with the %config(noreplace) directive in rpm but I'm not sure if there is a special trick with rpm to rename a config file while not losing the changes in the config file during the upgrade. Is this just something that has to be handled with trickery in the %post macro where we merge the contents together if the old config file exists? Would symbolic links help? Changes like this seem like a potential giant pain in the ass for packagers. [1] https://review.openstack.org/#/c/195277/ -- Thanks, Matt Riedemann
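One possible shape for the %post "trickery", sketched in Python purely for illustration (a real package would more likely do this in shell, or leave it to config management): merge the operator's old renamed file into the newly shipped defaults, with the old values winning. The section and option names below are invented, not the real neutron plugin files:

```python
# Illustrative merge of a renamed config file: start from the new packaged
# defaults, then overlay the operator's old settings so they survive the
# rename. Section/option names are made up for this sketch.
import configparser


def merge_configs(old_text, new_text):
    """Old operator-set values win over the newly shipped defaults."""
    merged = configparser.ConfigParser()
    merged.read_string(new_text)       # new packaged defaults
    old = configparser.ConfigParser()
    old.read_string(old_text)          # the operator's old, renamed file
    for section in old.sections():
        if not merged.has_section(section):
            merged.add_section(section)
        for key, value in old.items(section):
            merged.set(section, key, value)  # preserve operator changes
    return merged


cfg = merge_configs("[agent]\npolling_interval = 5\n",
                    "[agent]\npolling_interval = 2\nnew_opt = on\n")
```

The %post macro would only run this when the old file still exists on disk, then move the old file aside (e.g. to *.rpmsave) so the merge happens exactly once.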
Re: [openstack-dev] [packaging] how to deal with the rename of config files in neutron on upgrade?
On Thu, Jul 2, 2015 at 10:35 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: This change in neutron [1] renames the linuxbridge and openvswitch plugin config files. I'm familiar with the %config(noreplace) directive in rpm but I'm not sure if there is a special trick with rpm to rename a config file while not losing the changes in the config file during the upgrade. Is this just something that has to be handled with trickery in the %post macro where we merge the contents together if the old config file exists? Would symbolic links help? Changes like this seem like a potential giant pain in the ass for packagers. While a pain in the ass, this should have been done when we deprecated the agents two cycles ago, so this was really just bleeding the pain out longer. I flagged this as DocImpact so we can add a documentation note, and we'll update the Release Notes with this as well. [1] https://review.openstack.org/#/c/195277/ -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
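One common pattern for the rename (a sketch only — the paths and the merge policy here are illustrative, not the actual neutron packaging) is to keep %config(noreplace) on the new path and seed it from the old file in a %post scriptlet on upgrade:

```spec
# Illustrative spec fragment; not the real neutron packaging.
%files
%config(noreplace) %{_sysconfdir}/neutron/plugins/ml2/openvswitch_agent.ini

%post
# $1 >= 2 means this is an upgrade rather than a fresh install. If the
# pre-rename config exists, preserve the operator's edits by copying it
# over the freshly installed default at the new path.
if [ "$1" -ge 2 ] && [ -f %{_sysconfdir}/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ]; then
    cp -p %{_sysconfdir}/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
          %{_sysconfdir}/neutron/plugins/ml2/openvswitch_agent.ini
fi
```

A symlink from the old path to the new one is the other option mentioned above, but it leaves two names for one file, which configuration management tools tend to handle badly.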
Re: [openstack-dev] [all]deprecating [test-]requirements-PYN.txt
On Thu, Jul 02, 2015 at 09:22:03PM +0100, Dave Walker wrote: On 29 June 2015 at 04:59, Robert Collins robe...@robertcollins.net wrote: Hi, so we're nearly ready to deprecate the python-version-specific requirements files. Once we have infra's requirements cross checking jobs all copacetic again, we should be able to move forward. There isn't a specific spec for this in pbr, and I wanted to get some broad input into the manner of the deprecation. SNIP Slightly offtopic, but I've noticed that some consumers of bandit[0] have been creating requirements-bandit.txt. This is to specify bandit requirements without requiring the whole test-requirements.txt env to be installed, to run what is essentially a linting tool. I'm not sure I like the idea of creating MORE requirements.txt style files as it pollutes the project root namespace and currently has no syncing from global-requirements. I wondered if you had any ideas on how to solve this for bandit usage, and potentially other projects? I would use setuptools extras to do this.[1] PBR has support for this since the 1.0.0 release. [2] I used it on subunit2sql to separate the fairly heavyweight requirements for using the graph command from the rest of the dependencies. [3] -Matt Treinish [0] https://wiki.openstack.org/wiki/Security/Projects/Bandit [1] https://www.python.org/dev/peps/pep-0426/#extras-optional-dependencies [2] http://docs.openstack.org/developer/pbr/#extra-requirements [3] https://review.openstack.org/#/c/184278/ signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all]deprecating [test-]requirements-PYN.txt
On 2015-07-02 21:22:03 +0100 (+0100), Dave Walker wrote: [...] I wondered if you had any ideas on how to solve this for bandit usage, and potentially other projects? Brainstorming here, but I wonder if extras_require[1] would be a good solution down the road. I.e., having `tox -e bandit` run something like `pip install .[test_bandit]` in its virtualenv before invoking bandit. [1] URL: https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
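Both replies boil down to the same mechanism. As a rough sketch (the extra name `bandit` and the project name are illustrative, not an established convention), a pbr-managed project would declare the optional dependency in setup.cfg and reference it from a dedicated tox environment:

```ini
# setup.cfg -- illustrative fragment; pbr >= 1.0.0 reads the [extras]
# section to declare setuptools extras.
[extras]
bandit =
    bandit

# tox.ini -- an env that installs only that extra, not the full
# test-requirements.txt.
[testenv:bandit]
deps = .[bandit]
commands = bandit -r myproject
```

With this in place, `pip install .[bandit]` (or `tox -e bandit`) pulls in bandit without installing the whole test environment, and there is no extra requirements-*.txt file polluting the project root.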
Re: [openstack-dev] [packaging] [puppet] how to deal with the rename of config files in neutron on upgrade?
Will the agent still consider/read the old configuration file? That's up to the packagers. The agent reads whatever config files are passed as args to the process. On Thu, Jul 2, 2015 at 11:02 AM, Mathieu Gagné mga...@iweb.com wrote: Adding [puppet] tag to subject. On 2015-07-02 11:35 AM, Matt Riedemann wrote: This change in neutron [1] renames the linuxbridge and openvswitch plugin config files. I'm familiar with the %config(noreplace) directive in rpm but I'm not sure if there is a special trick with rpm to rename a config file while not losing the changes in the config file during the upgrade. Is this just something that has to be handled with trickery in the %post macro where we merge the contents together if the old config file exists? Would symbolic links help? Changes like this seem like a potential giant pain in the ass for packagers. And for people maintaining configuration managers. Will the agent still consider/read the old configuration file? I'm not sure how we will be able to maintain compatibility without involving manual steps or potential misconfiguration. Furthermore, we have to consider upgrades. Neutron agent configuration files are already a mess in distribution packaging. Ok, I exaggerated the situation, it's only a mess with Ubuntu [2] where it thought it would be a great idea to read the agent config from ml2_conf.ini instead of ovs_neutron_plugin.ini like all the other distributions. Now, as Puppet module authors, should we just update the path to the configuration file and hope it's compatible with upstream packages? 
[1] https://review.openstack.org/#/c/195277/ [2] https://gist.github.com/mgagne/e2e06f5a8cb283a81cab -- Mathieu __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Kevin Benton __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] Liberty mid-cycle meet up.
Thanks for the pointer Jeremy! Malini, we are planning a conferencing capability. Please add some details incl. your preference/suggestion to the etherpad ( https://etherpad.openstack.org/p/liberty-glance-mid-cycle-meetup ) so as to help us plan accordingly. On 7/1/15 10:20 AM, Jeremy Stanley wrote: On 2015-07-01 04:39:31 +0000 (+0000), Bhandaru, Malini K wrote: Nikhil any chance we can have remote participation? Based on the agenda folks can remote dial in. If IRC/Etherpad are insufficient for remote participation and you feel you need a dial-in conference bridge, remember that our project infrastructure includes a PBX with voice conferencing capability (both traditional telephone line and VoIP/SIP): https://wiki.openstack.org/wiki/Infrastructure/Conferencing -- Thanks, Nikhil __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team
Phillip V. Toohill III Software Developer phone: 210-312-4366 mobile: 210-440-8374 From: Eichberger, German [german.eichber...@hp.com] Sent: Thursday, July 2, 2015 5:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team Al has been a great asset to LBaaS. Well deserved! +1000 German +1 On 7/2/15, 3:16 PM, Doug Wiegley doug...@parksidesoftware.com wrote: Hi all, As the Lieutenant of the advanced services, I would like to nominate Al Miller to be a member of the neutron-lbaas core reviewer team. Review stats are in line with other cores[2] and feedback on patches has been great. Additionally, he has been instrumental in our devstack support and octavia work. Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, and Kyle.) Thanks, doug 1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy 2. http://stackalytics.com/report/contribution/neutron-lbaas/90 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [manila][puppet-manila] Bypassing jenkins FAILURE
Hi, On Thu, Jul 2, 2015 at 11:12 PM, Jeremy Stanley fu...@yuggoth.org wrote: On 2015-07-02 22:00:43 +0530 (+0530), sac wrote: I have a patch[1] which should work only on Red Hat systems and fails on non Red Hat systems due to non-availability of a package. I suppose it's up to the puppet-manager-core group members to decide whether non-portable additions are something they want to support. Agree. This currently supports only Red Hat because there is no corresponding package for nfs-ganesha in Debian/Ubuntu. However, jenkins runs the beaker tests on Ubuntu and by definition this fails, which is quite intended. However, jenkins marks FAILURE due to this error. Due to this, patch stays in Verified-1 state incorrectly. The job has been instructed to run against the entirety of the acceptance spec, and the contents of that spec aren't currently conditional for specific classes (besides repo selection and package providers). Okay, makes sense. How to bypass or waive this failure? Can I request an exception? [1] https://review.openstack.org/#/c/186699/ Waiving your change past the job wouldn't solve this, since that job would just become instantly broken for the puppet-manila repo from that point on (and also we have no way to actually do that short of disabling the job or involving an admin for the code review system itself to bypass our gating safeties). If non-portable/platform-specific features are acceptable to the puppet-manila maintainers, I think your options are to either wrap your addition to the spec in a check for $::osfamily == 'RedHat' or modify your new class to not fail when applied on non-RedHat-family platforms. I think this is a good plan. Instead of `fail', I would go ahead with `warning'. Thanks. Anyway, I suspect this is a discussion far better had in the change review itself rather than on the mailing list. Sure. Thanks, sac. 
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
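The guard Jeremy suggests for the thread above would look roughly like this in the manifest (a sketch only — the class content and resource names are illustrative, not the actual puppet-manila code):

```puppet
# Illustrative sketch only -- not the real puppet-manila manifest.
if $::osfamily == 'RedHat' {
  package { 'nfs-ganesha':
    ensure => present,
  }
} else {
  # warning() instead of fail(), as proposed in the thread, so the class
  # stays applicable (as a no-op) on non-RedHat platforms and the beaker
  # job run on Ubuntu no longer marks the change Verified-1.
  warning("nfs-ganesha is only packaged for RedHat-family platforms; skipping on ${::osfamily}")
}
```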
[openstack-dev] [third-party][infra] Common OpenStack CI Solution - 'Zuul' live
All, I’m pleased to say that there are now 3 components merged in the puppet-openstackci repo [1]. This means 3rd party CI operators can now use the same scripts that the OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for: 1. Log Server 2. Jenkins 3. Zuul This work is being done as part of the common-ci spec [2] Big thanks to Fabien Boucher for completing the Zuul script refactoring, which went live today! Thanks to all the reviewers for careful reviews which led to a smooth migration. I’ve updated my repo [3] and switched all my CI systems to use it. As a reminder, there will be a virtual sprint next week July 8-9, 2015 15:00 UTC to finish the remaining tasks. If you’re interested in helping out in any of the remaining tasks (Jenkins Job Builder, Nodepool, Logstash/Kibana, Documentation, Sample site.pp), sign up on the etherpad. [4] Also, we can use the 3rd party meeting time slot next week to discuss plans and answer questions [5]. Tuesday 7/7/15 1700 UTC #openstack-meeting Ramy IRC: asselin [1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/ [2] http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html [3] https://github.com/rasselin/os-ext-testing (forked from jaypipes/os-ext-testing) [4] https://etherpad.openstack.org/p/common-ci-sprint [5] https://wiki.openstack.org/wiki/Meetings/ThirdParty __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core, switching to 2x +2 rule
Congrats Yuiko! +2:) On Wed, Jul 1, 2015 at 6:09 PM, Yuiko Takada yuikotakada0...@gmail.com wrote: Thanks for all the support!! I will do my best to meet your expectations. Let's make Ironic Inspector better and better together. As our core team grows, I'd like us to try to stick with 2x +2 rules. Up to now it was mostly Dmitry approves everything rule, now let us make sure we have at least 2 +2 on a patch before merging it, unless it's critical for release or fixing gate. Don't wait for me to W+1 if you see that patch already has 2x +2. I'd ask the core team to review all the incoming patches. Once our devstack gate is finally working, review will be a lot easier. Nice :) +2! Best Regards, Yuiko Takada 2015-07-01 17:56 GMT+09:00 Dmitry Tantsur dtant...@redhat.com: Hi all! Please welcome Yuiko Takada to ironic-inspector-core team. Yuiko has been with the team for some time already. She did substantial work on porting ironic-inspector to Oslo libraries and on our new devstack gate job. Thanks Yuiko, it's a pleasure to work with you. As our core team grows, I'd like us to try to stick with 2x +2 rules. Up to now it was mostly Dmitry approves everything rule, now let us make sure we have at least 2 +2 on a patch before merging it, unless it's critical for release or fixing gate. Don't wait for me to W+1 if you see that patch already has 2x +2. I'd ask the core team to review all the incoming patches. Once our devstack gate is finally working, review will be a lot easier. 
Cheers, Dmitry __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
Here is the bp of Magnum+Hyper+Metal integration: https://blueprints.launchpad.net/magnum/+spec/hyperstack Wanted to hear more thoughts and kickstart some brainstorming. Thanks, Peng - Hyper - Make VM run like Container __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team
+1 for me. Phil your message did not come through. On 7/2/15, 6:32 PM, Phillip Toohill phillip.tooh...@rackspace.com wrote: Phillip V. Toohill III Software Developer phone: 210-312-4366 mobile: 210-440-8374 From: Eichberger, German [german.eichber...@hp.com] Sent: Thursday, July 2, 2015 5:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team Al has been a great asset to LBaaS. Well deserved! +1000 German +1 On 7/2/15, 3:16 PM, Doug Wiegley doug...@parksidesoftware.com wrote: Hi all, As the Lieutenant of the advanced services, I would like to nominate Al Miller to be a member of the neutron-lbaas core reviewer team. Review stats are in line with other cores[2] and feedback on patches has been great. Additionally, he has been instrumental in our devstack support and octavia work. Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, and Kyle.) Thanks, doug 1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy 2. http://stackalytics.com/report/contribution/neutron-lbaas/90 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Kolla] Essential to sign by July 9th - Kolla-Palooza midcycle event!!
Hey community folks! The Kolla team is having a mid-cycle event in San Jose, CA. Coffee is provided throughout the day (I believe, but not certain on this point), and lunch, soda, water are provided at lunch time. An RSVP dinner is provided the night of July 27th at 7 PM so food costs should be minimal. If you plan to attend in person, please book your hotel and flight reservations quickly. Silicon Valley prices are quickly increasing and many companies have a 14 day window (July 13th) for booking travel arrangements. We can handle folks that walk in at the last moment, but for the RSVP dinner, please RSVP by July 9th so we can get an accurate count for organizing a dinner. The eventbrite information is at the bottom of this web page: https://wiki.openstack.org/wiki/Sprints/KollaLibertySprint __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Cinder][Glance] Fixing up Cinder store in Glance
Hi Cinder experts, Currently Glance has a cinder backend, but it has been broken for a long time. I am proposing a glance-spec/patch to fix it by implementing uploading/downloading of images to/from cinder volumes. Glance-spec: https://review.openstack.org/#/c/183363/ Glance_store patch: https://review.openstack.org/#/c/166414/ This will also be useful for sharing volume data among tenants (a similar use-case to the public snapshots proposal discussed at the design summit). I need a review from cinder developers to make progress. The main part of the patch is in cinder.py. It implements 3 methods, get(), add(), and delete(), using cinderclient and os-brick. - get() will attach a specified cinder volume to the glance node, read the data from it, then detach it. - add() will create a new volume, attach it to the glance node to write image data to it, then detach it. - delete() will delete the specified volume. The image volume will be placed in the tenant specified by cinder_store_tenant_name, or the current user's tenant if cinder_store_tenant_name is not set. Any comments are much appreciated, thanks. Regards, Tomoki Sekiyama __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
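The attach/write/detach flow of the three methods can be sketched roughly as follows. This is a simplified stand-in using an in-memory fake in place of the real cinderclient/os-brick attach machinery, not the actual glance_store driver code:

```python
# Simplified sketch of the get()/add()/delete() flow described above.
# FakeCinder is an in-memory stand-in for volume create/attach/detach;
# the real driver goes through cinderclient and os-brick.

import io
import uuid

class FakeCinder:
    def __init__(self):
        self.volumes = {}

    def create(self):
        vol_id = str(uuid.uuid4())
        self.volumes[vol_id] = b""
        return vol_id

class CinderStoreSketch:
    def __init__(self, cinder):
        self.cinder = cinder

    def add(self, image_data):
        """Create a volume, 'attach' it, write the image data, 'detach'."""
        vol_id = self.cinder.create()
        self.cinder.volumes[vol_id] = image_data.read()
        return vol_id

    def get(self, vol_id):
        """'Attach' the volume and stream the stored image data back."""
        return io.BytesIO(self.cinder.volumes[vol_id])

    def delete(self, vol_id):
        """Delete the backing volume."""
        del self.cinder.volumes[vol_id]

store = CinderStoreSketch(FakeCinder())
image_id = store.add(io.BytesIO(b"qcow2-image-bytes"))
print(store.get(image_id).read())  # b'qcow2-image-bytes'
```

The interesting part of the real patch is everything this fake elides: safely attaching the volume to the glance node and detaching it again even when the read/write fails partway through.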
Re: [openstack-dev] [puppet] [infra] issues with beaker/centos7 job
On Thu, Jul 2, 2015 at 11:40 AM, Jeremy Stanley fu...@yuggoth.org wrote: On 2015-07-02 16:02:32 +0200 (+0200), Alan Pevec wrote: After having a closer look, I see that image has requests 2.7 installed from pypi which overwrites python-requests RPM installation and wreaks havoc when trying to upgrade RPM. I'm not sure why and where is pypi used during the image build but it should not be installed system-wide on RPM system. If really needed, install it in venv. After some deep digging, I think https://review.openstack.org/198082 will solve this (I'll fire up manual image updates once it merges). -- Jeremy Stanley It looks like things are starting to work again. Thanks Ian and Jeremy for your tremendous help. Colleen __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] ALL dependencies pinned in devstack-gate now
I want to give an update on http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html - we've just passed a critical milestone there, and this affects how everyone updates requirements. As of a few minutes ago devstack-gate landed the change to set USE_CONSTRAINTS=True. What this means is that the file http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt is now used to determine the version of every dependency that is present in it. It is created by the script 'generate-constraints' in the same repository, and there's an infra job that will propose changes to that file based on what is in PyPI. It's currently pending a couple of tweaks to get it fully operational, but we should resume operations in the near future. This constraints file can be used for local development too - just set USE_CONSTRAINTS=True in your stack config file. Now, the things you have to remember as developers: * If you are adding a new requirement you should also add it to upper-constraints.txt with an exact pin. * If you are raising a minimum version of a requirement, you need to also raise it in upper-constraints.txt. * New releases of *any* library will not be used by devstack-gate jobs until upper-constraints.txt has been edited to enable them. This makes it much much harder to break the world, for all of us that release things - whether it be upstream components like testtools, oslo libraries or the python-$FOOclient libraries. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
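For reference, upper-constraints.txt is a flat list of exact pins using the PEP 440 arbitrary-equality operator, one line per dependency (the entries below are made-up examples, not real current pins):

```
# Illustrative upper-constraints.txt entries -- versions are examples only.
requests===2.7.0
PyYAML===3.11
six===1.9.0
```

Recent pip releases can apply such a file directly with `pip install -c upper-constraints.txt -r requirements.txt`; a constraint only takes effect for packages that are actually being installed, which is why one shared file can cover every project.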
Re: [openstack-dev] [Ironic] [Inspector] Addition to ironic-inspector-core, switching to 2x +2 rule
Great news Yuiko! On Wed, Jul 1, 2015 at 6:26 PM, Dmitry Tantsur dtant...@redhat.com wrote: Hi all! Please welcome Yuiko Takada to ironic-inspector-core team. Yuiko has been with the team for some time already. She did substantial work on porting ironic-inspector to Oslo libraries and on our new devstack gate job. Thanks Yuiko, it's a pleasure to work with you. As our core team grows, I'd like us to try to stick with 2x +2 rules. Up to now it was mostly Dmitry approves everything rule, now let us make sure we have at least 2 +2 on a patch before merging it, unless it's critical for release or fixing gate. Don't wait for me to W+1 if you see that patch already has 2x +2. I'd ask the core team to review all the incoming patches. Once our devstack gate is finally working, review will be a lot easier. Cheers, Dmitry -- Michael Davies mich...@the-davies.net Rackspace Cloud Builders Australia __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team
I think I bottom posted somehow. +1 for Al! Phillip V. Toohill III Software Developer phone: 210-312-4366 mobile: 210-440-8374 From: Brandon Logan brandon.lo...@rackspace.com Sent: Thursday, July 2, 2015 6:57 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team +1 for me. Phil your message did not come through. On 7/2/15, 6:32 PM, Phillip Toohill phillip.tooh...@rackspace.com wrote: Phillip V. Toohill III Software Developer phone: 210-312-4366 mobile: 210-440-8374 From: Eichberger, German [german.eichber...@hp.com] Sent: Thursday, July 2, 2015 5:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team Al has been a great asset to LBaaS. Well deserved! +1000 German +1 On 7/2/15, 3:16 PM, Doug Wiegley doug...@parksidesoftware.com wrote: Hi all, As the Lieutenant of the advanced services, I would like to nominate Al Miller to be a member of the neutron-lbaas core reviewer team. Review stats are in line with other cores[2] and feedback on patches has been great. Additionally, he has been instrumental in our devstack support and octavia work. Existing cores, please vote +1/-1 for his addition to the team (that¹s Brandon, Phil, and Kyle.) Thanks, doug 1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html# c ore-review-hierarchy 2. 
http://stackalytics.com/report/contribution/neutron-lbaas/90 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [keystone] Liberty FFE Request - Dynamic Policies
Dear Keystoners, Dynamic Policies is a great subject which affects OpenStack horizontally, offering a better mechanism for defining and delivering policies to endpoints of any OpenStack service. An overview of this subject is presented at its wiki page ( https://wiki.openstack.org/wiki/DynamicPolicies ). What we propose for the Liberty cycle is the dynamic delivery of policies, i.e., adding to the Keystone server the capability to distribute the policy information to service endpoints. This goal is represented by the following core specs: * Dynamic Policies Overlay ( https://review.openstack.org/#/c/196753/ ), specifying how the oslo.policy library will overlay the existing local policy file with custom rules uploaded dynamically (from a Dynamic Policy); * Dynamic Policies Fetch and Cache ( https://review.openstack.org/#/c/134655/ ), defining how the Keystone Middleware will fetch the policy for the current service endpoint it is serving and then ask oslo.policy to overlay the existing local policy file; * Dynamic Policies Delivering Mechanism ( https://review.openstack.org/#/c/197980/ ), defining how the Keystone Server will control the cache mechanism in order to keep policies consistent across different service endpoints which must have the same policy, for example, multiple Nova processes running behind an HAProxy. Currently, there is some discussion around the association of a Dynamic Policy with a given service endpoint. Alternatives are presented in the following specs: * Dynamic Policies with Custom IDs ( https://review.openstack.org/#/c/198000/ ), proposing to allow the creation of policy entities with custom IDs, easing the configuration of the Keystone Middleware and improving UX; * Policy by URL ( https://review.openstack.org/#/c/192422/ ), proposing to identify service endpoints by their URL and then use that URL to associate policy entities with them. 
On behalf of the team working on the Dynamic Policies subject, I would like to ask for a Feature Freeze Exception in Liberty for it. Sincerely, Samuel de Medeiros Queiroz __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [diskimage-builder] disk-image-create from virtualenv not working
Thanks for the tip! after a few trials I finally made it work by commenting out the following line in /etc/sudoers: #Defaults secure_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Alec On 7/2/15, 11:25 AM, Jens Rosenboom j.rosenb...@x-ion.de wrote: 2015-07-02 19:58 GMT+02:00 Alec Hothan (ahothan) ahot...@cisco.com: I wonder if anybody else ran into the same issue, as it looks like PyYAML is required on the native python in order for disk-image-create to succeed. Installing PyYAML in a virtual environment and running disk-image-create from that venv won't work, as one of the scripts fails to import yaml: dib-run-parts Thu Jul 2 09:27:50 PDT 2015 Running /tmp/image.ewtpa5DW/hooks/extra-data.d/99-squash-package-install ['/tmp/image.ewtpa5DW/hooks/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] Traceback (most recent call last): File /tmp/image.ewtpa5DW/hooks/extra-data.d/../bin/package-installs-squash, line 26, in module import yaml ImportError: No module named yaml I added a trace to print sys.path in that script and sure enough, it is completely bypassing my venv path; it looks like the PATH is not picking up the existing PATH from the calling shell. from the same shell that ran disk-image-create above: $ pip list | grep YAML PyYAML (3.11) $ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type help, copyright, credits or license for more information. 
import sys; print sys.path ['', '/home/localadmin/kb/lib/python2.7', '/home/localadmin/kb/lib/python2.7/plat-x86_64-linux-gnu', '/home/localadmin/kb/lib/python2.7/lib-tk', '/home/localadmin/kb/lib/python2.7/lib-old', '/home/localadmin/kb/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/localadmin/kb/local/lib/python2.7/site-packages', '/home/localadmin/kb/lib/python2.7/site-packages'] Is that a known issue/requirement, or is there a workaround (other than installing PyYAML on the native python)? elements/package-installs/extra-data.d/99-squash-package-install does a sudo -E package-installs-squash, however at least on Ubuntu the default seems to be having env_reset defined, causing the environment variables to be cleared and thereby losing information about your venv. As a workaround you could try to change the env_reset setting in your /etc/sudoers. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
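For context, the two sudoers settings at play in this thread look like this (an illustrative fragment; exact defaults vary by distro):

```
# Illustrative /etc/sudoers fragment.
# env_reset clears the caller's environment, dropping VIRTUAL_ENV and the
# venv's PATH; secure_path then replaces PATH outright, even with sudo -E.
Defaults env_reset
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```

Commenting out secure_path (as done above) or relaxing env_reset lets `sudo -E` inherit the venv's PATH; either change should be made via visudo, and both weaken sudo's environment sanitization, so weigh the trade-off on shared machines.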
Re: [openstack-dev] [Neutron] - breaking changes for plugins/drivers
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 On 07/01/2015 04:09 PM, Kyle Mestery wrote: On Wed, Jul 1, 2015 at 5:49 AM, Kevin Benton blak...@gmail.com mailto:blak...@gmail.com wrote: For the second one, I think we made it clear before that no plugins must rely on neutron.openstack.* contents. Where was this made clear? I didn't know about this until it broke and I'm a very frequent contributor and reviewer, which is the core of my complaint. The updated wiki is helpful, but doing it after-the-fact only helps when people try to figure out why they can't make changes to their repos. I agree, and I'd also like to mention that it would be good to move this type of documentation in-tree so it can be autogenerated rather than using the wiki. https://review.openstack.org/#/q/status:open+project:openstack/neutron+b ranch:master+topic:devref-breakage-docs,n,z Ihar -BEGIN PGP SIGNATURE- Version: GnuPG v2 iQEcBAEBCAAGBQJVlV3LAAoJEC5aWaUY1u57SocIAOEb9TyurP2L3ki8K9y9rGV4 ty+uj0F+/zOOXmpCcmMP51Xe1EMHYQBPF6qdxxGYngp3vNhGJa8xzBXOeig7jL/q g52zyzWWJK2WliP7nARNZH6QO2/I5L7kMb2UPbw61WPvqXDbxthElmAGEbZGc7IK 6bQVtS+KgIfBU9D8no/NOZUZ7ZKqpBw1USALmfdn8WW1Wq2JsNQf1sKJEDRcgAkA 5ES0dOB1wDeZbGGwpcIDpQ30gDfEWupKDe1uNRfcfJc96cY/EYENf+gIuoOKBKkC kVSOLy79ExWzESDEdxt7aZGhAapi34wE0KaWxnB4Kdkd9H4A6jZWxiZyu5QZH6Y= =LNaO -END PGP SIGNATURE- __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel][Plugins] Task priority in post deployment
Hi, Currently it's not possible to configure your service before the image is uploaded: the pre-deployment stage is too early, and the post-deployment stage is too late. As a workaround I can suggest uploading the TestVM image once more after the new backend gets configured. On Wed, Jul 1, 2015 at 11:49 AM, loic.nico...@orange.com wrote: Hi, I'm wondering if it's possible to create a plugin which does some actions after the OpenStack deployment but before the upload of the TestVM image. In my case I have made a plugin which changes the glance backend, but the TestVM image was uploaded earlier to the default backend (swift), so I can't use it afterwards. My task priority (at plugin level) is: stage: post_deployment/2000 I also tried changing the priority to a lower level (e.g. 700). Regards, Loic 
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
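For context, the priority Loic mentions lives in the plugin's tasks.yaml; a sketch of such a post-deployment task follows (the manifest path and roles are hypothetical, and the exact schema depends on the Fuel plugin framework version):

```yaml
# tasks.yaml -- illustrative Fuel plugin task definition
- role: ['primary-controller', 'controller']
  stage: post_deployment/2000      # the priority discussed in the thread
  type: puppet
  parameters:
    puppet_manifest: puppet/manifests/glance_backend.pp   # hypothetical path
    puppet_modules: puppet/modules
    timeout: 600
```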
[openstack-dev] [manila][puppet-manila] Bypassing jenkins FAILURE
Hi, I have a patch[1] which should work only on Red Hat systems and fails on non Red Hat systems due to the non-availability of a package. However, jenkins runs the beaker tests on Ubuntu, where the patch fails, which is entirely expected. Jenkins nevertheless marks the run FAILURE, so the patch incorrectly stays in the Verified -1 state. How can I bypass or waive this failure? Can I request an exception? [1] https://review.openstack.org/#/c/186699/ __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?
On Thu, Jul 02, 2015 at 04:40:49PM +0300, Sergey Kraynev wrote: Hi Heaters. I don't think my question is very big for openstack-dev, but it affects a lot of Heat resources and I need to collect more opinions before applying one of the following approaches. I recently uploaded an initial approach for implementing a common 'show' attribute [1]. On one of these reviews an interesting suggestion was raised: the 'show' attribute should return a map of all of the resource's attributes, i.e. for each attr in self.attributes_schema: outputs[attr] = _resolve_attribute(attr); return outputs. I agree that it's easier than a separate show_resource method for each resource, and it's the same as what the Neutron API returns on a show request. However, we already have an opposite example: the OS::Nova::Server resource has a bunch of attributes which are not similar to the current 'show' attribute output: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L918 I suppose the same situation will hold for other resources. So I want to ask which way we would like to follow? [1] show as a collection of attributes [2] show as the same output as the corresponding client's show command. I think [2] is the most useful, and most consistent with both the nova and all neutron resources: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/neutron.py#L129 Another advantage of this transparent passthrough of the data returned by the client is that folks have a workaround in the event the heat attributes schema lacks some new value that the client returns. Obviously when it's added to the attributes schema, it'll be better to use that instead. Steve __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
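To make option [1] concrete, here is a minimal self-contained sketch of the generic 'show' resolution being proposed (the class below is a hypothetical stand-in for illustration, not Heat's real Resource base class):

```python
# Sketch of option [1]: build the 'show' map generically from the
# attributes schema instead of writing a show_resource() per resource.
# FakeResource is a hypothetical stand-in, not Heat's Resource class.

class FakeResource(object):
    attributes_schema = {'name': None, 'status': None, 'addresses': None}

    def _resolve_attribute(self, name):
        # A real resource would resolve this from the backing API object;
        # here we fabricate a value so the sketch is runnable.
        return 'value-of-%s' % name

    def show_attribute(self):
        return dict((attr, self._resolve_attribute(attr))
                    for attr in self.attributes_schema)


print(FakeResource().show_attribute())
```

Option [2], by contrast, would simply pass through whatever the client's show call returns, untouched.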
Re: [openstack-dev] [packaging] [puppet] how to deal with the rename of config files in neutron on upgrade?
Adding [puppet] tag to subject. On 2015-07-02 11:35 AM, Matt Riedemann wrote: This change in neutron [1] renames the linuxbridge and openvswitch plugin config files. I'm familiar with the %config(noreplace) directive in rpm but I'm not sure if there is a special trick with rpm to rename a config file while not losing the changes in the config file during the upgrade. Is this just something that has to be handled with trickery in the %post macro where we merge the contents together if the old config file exists? Would symbolic links help? Changes like this seem like a potential giant pain in the ass for packagers. And for people maintaining configuration management tools. Will the agent still consider/read the old configuration file? I'm not sure how we will be able to maintain compatibility without involving manual steps or potential misconfiguration. Furthermore, we have to consider upgrades. Neutron agent configuration files are already a mess in distribution packaging. Ok, I exaggerated the situation; it's only a mess with Ubuntu [2], where someone thought it would be a great idea to read the agent config from ml2_conf.ini instead of ovs_neutron_plugin.ini like all the other distributions. Now, as Puppet module authors, should we just update the path to the configuration file and hope it's compatible with upstream packages? [1] https://review.openstack.org/#/c/195277/ [2] https://gist.github.com/mgagne/e2e06f5a8cb283a81cab -- Mathieu __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [manila][puppet-manila] Bypassing jenkins FAILURE
On 2015-07-02 22:00:43 +0530 (+0530), sac wrote: I have a patch[1] which should work only on Red Hat systems and fails on non Red Hat systems due to non-availability of a package. I suppose it's up to the puppet-manager-core group members to decide whether non-portable additions are something they want to support. However, jenkins runs the beaker tests on Ubuntu and by definition this fails, which is quite intended. However, jenkins marks FAILURE due to this error. Due to this, patch stays in Verified-1 state incorrectly. The job has been instructed to run against the entirety of the acceptance spec, and the contents of that spec aren't currently conditional for specific classes (besides repo selection and package providers). How to bypass or waive this failure? Can I request an exception? [1] https://review.openstack.org/#/c/186699/ Waiving your change past the job wouldn't solve this, since that job would just become instantly broken for the puppet-manila repo from that point on (and also we have no way to actually do that short of disabling the job or involving an admin for the code review system itself to bypass our gating safeties). If non-portable/platform-specific features are acceptable to the puppet-manila maintainers, I think your options are to either wrap your addition to the spec in a check for $::osfamily == 'RedHat' or modify your new class to not fail when applied on non-RedHat-family platforms. Anyway, I suspect this is a discussion far better had in the change review itself rather than on the mailing list. -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
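Jeremy's first option, guarding the platform-specific bits on $::osfamily, might look roughly like this in the manifest (the package name is hypothetical):

```puppet
# Illustrative guard; only attempt the RedHat-only package where it exists.
if $::osfamily == 'RedHat' {
  package { 'some-redhat-only-dependency':   # hypothetical package name
    ensure => present,
  }
} else {
  notice('skipping feature: only supported on RedHat-family platforms')
}
```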
Re: [openstack-dev] [Sahara] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?
Hi Timur, Generally I think that it is a good idea to have a gate that will check whether new code is covered by unit tests or not. But I am not sure that this gate should be voting (if I understand you correctly), because a new patch may not be just new code; the committer may delete something or fix typos in a docstring, etc. On Thu, Jul 2, 2015 at 8:15 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: Hi all, I suggest to add CI job which will check the unit tests coverage for Sahara repository and will set -1 for commits with new code and without unit tests (if we have some degradation of tests coverage). This job successfully works for Rally project and it helps to organize the right code development process when developers write new unit tests for new functionality. we can just copy this job from Rally and start to use it for Sahara: Coverage control script: https://github.com/openstack/rally/blob/master/tests/ci/cover.sh Configuration file for coverage plugin (to exclude code which shouldn't be affected): https://github.com/openstack/rally/blob/master/.coveragerc Example of job in infra repository: https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088 I expect that it will help to increase the tests coverage by unit tests. Do we have any objections? -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc -- Best regards, Anastasia Kuznetsova __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Sahara] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?
Hi all, I suggest to add CI job which will check the unit tests coverage for Sahara repository and will set -1 for commits with new code and without unit tests (if we have some degradation of tests coverage). This job successfully works for Rally project and it helps to organize the right code development process when developers write new unit tests for new functionality. we can just copy this job from Rally and start to use it for Sahara: Coverage control script: https://github.com/openstack/rally/blob/master/tests/ci/cover.sh Configuration file for coverage plugin (to exclude code which shouldn't be affected): https://github.com/openstack/rally/blob/master/.coveragerc Example of job in infra repository: https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088 I expect that it will help to increase the tests coverage by unit tests. Do we have any objections? -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [diskimage-builder] disk-image-create from virtualenv not working
I wonder if anybody else ran into the same issue, as it looks like PyYAML is required on the native python in order for disk-image-create to succeed. Installing PyYAML in a virtual environment and running disk-image-create from that venv won't work, as one of the scripts fails to import yaml: dib-run-parts Thu Jul 2 09:27:50 PDT 2015 Running /tmp/image.ewtpa5DW/hooks/extra-data.d/99-squash-package-install ['/tmp/image.ewtpa5DW/hooks/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] Traceback (most recent call last): File /tmp/image.ewtpa5DW/hooks/extra-data.d/../bin/package-installs-squash, line 26, in module import yaml ImportError: No module named yaml I added a trace to print sys.path in that script and sure enough, it is completely bypassing my venv path; it looks like the PATH is not picking up the existing PATH from the calling shell. From the same shell that ran disk-image-create above: $ pip list | grep YAML PyYAML (3.11) $ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type help, copyright, credits or license for more information. import sys print sys.path ['', '/home/localadmin/kb/lib/python2.7', '/home/localadmin/kb/lib/python2.7/plat-x86_64-linux-gnu', '/home/localadmin/kb/lib/python2.7/lib-tk', '/home/localadmin/kb/lib/python2.7/lib-old', '/home/localadmin/kb/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/localadmin/kb/local/lib/python2.7/site-packages', '/home/localadmin/kb/lib/python2.7/site-packages'] Is that a known issue/requirement, or is there a workaround (other than installing PyYAML on the native python)? 
Thanks Alec __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo][oslo-vmware] Core review team
Welcome Eric and many thanks to Vui, Arnaud, Kartik, Subbu and Shawn. -- dims On Thu, Jul 2, 2015 at 1:03 PM, Gary Kotton gkot...@vmware.com wrote: Hi, Over time the team of people working in the project has changed and evolved. We would like to add the following people following their contributions for the project: Eric Brown We would like to remove the following people as they are no longer working on the project and thank them for their contributions: Vui Lam Arnaud Legendre Kartik Bommepally Subbu Shawn Hartsock Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?
Maybe use 'all' for all attributes in the schema and use 'show' for the raw output from the service (as is done today for server and neutron stuff). On Jul 2, 2015, at 12:46 PM, Steven Hardy sha...@redhat.com wrote: On Thu, Jul 02, 2015 at 04:40:49PM +0300, Sergey Kraynev wrote: [...] I think [2] is the most useful, and most consistent with both the nova and all neutron resources: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/neutron.py#L129 Another advantage of this transparent passthrough of the data returned by the client is that folks have a workaround in the event the heat attributes schema lacks some new value that the client returns. Obviously when it's added to the attributes schema, it'll be better to use that instead. 
Steve __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Glance Image Protection Policy
On 07/02/2015 03:10 AM, masoom alam wrote: Hi everyone, The glance policy.json allows specific users/roles to download an image. If we apply a policy on a specific role, only that role can download and/or boot an image. What if we want to restrict downloading an image, but at the same time allow the user to boot it via nova boot? The catch is that we will have to restrict the user from taking a snapshot, right? Can glance differentiate between a user downloading an image and nova doing the same on behalf of the user? No, as it is done with a token. The token is passed to nova, and nova passes it to glance to perform the action. If snapshot is a different API call than download, then you apply a different role for each, and make sure that tokens passed to Nova do not have the snapshot role in them. It is issues like this that are making me try to drive the Dynamic Policy effort in Keystone. My initial write-up is here: https://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/ And the wiki is here: https://wiki.openstack.org/wiki/DynamicPolicies I'd love to have your input on the process. OR how to solve the puzzle, please guide. Thanks __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
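For reference, the knob in question is glance's download_image policy target; a sketch restricting direct downloads to a dedicated role is below (the role name is hypothetical, and per Adam's point above a role-based rule alone cannot tell nova's use of the user's token apart from the user's own request):

```json
{
    "download_image": "role:image_downloader or role:admin",
    "get_image": ""
}
```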
[openstack-dev] Can't add a table to nova DB
Hi all, I want to add a table to the nova DB and I followed these suggestions: https://stackoverflow.com/questions/19424901/how-to-add-a-table-in-nova-database-openstack/24900366#24900366 but after the synchronization there are no changes in the DB. In order, I've added the structure of my new table as a class in models.py, then I've written a migration file with its upgrade and downgrade functions, and then I've synchronized the DB. It seems everything goes fine. Can you give me some suggestions and tips? Regards, -- Silvia __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
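A migration file of the kind Silvia describes might be sketched as below (the table and column names are made up for illustration; requires SQLAlchemy). One pitfall worth checking, though not confirmed from this thread, is the migration file's version number: sqlalchemy-migrate only runs files numbered above the database's current version, so a mis-numbered file makes "db sync" a silent no-op.

```python
# Hypothetical nova DB migration sketch (sqlalchemy-migrate style):
# defines upgrade()/downgrade() that create and drop one new table.
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, inspect)


def _define_table(meta):
    return Table('my_new_table', meta,
                 Column('id', Integer, primary_key=True, nullable=False),
                 Column('name', String(255)),
                 mysql_engine='InnoDB')


def upgrade(migrate_engine):
    # Use a fresh MetaData so repeated calls do not redefine the Table.
    _define_table(MetaData()).create(migrate_engine, checkfirst=True)


def downgrade(migrate_engine):
    _define_table(MetaData()).drop(migrate_engine, checkfirst=True)


if __name__ == '__main__':
    engine = create_engine('sqlite://')   # in-memory DB for the sketch
    upgrade(engine)
    print(inspect(engine).get_table_names())
```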
[openstack-dev] [oslo][oslo-vmware] Core review team
Hi, Over time the team of people working in the project has changed and evolved. We would like to add the following people following their contributions for the project: * Eric Brown We would like to remove the following people as they are no longer working on the project and thank them for their contributions: * Vui Lam * Arnaud Legendre * Kartik Bommepally * Subbu * Shawn Hartsock Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Error starting designate (DNSaaS)
Thanks Tim for the info. I've tried the installation of designate using the recommended guide ( http://docs.openstack.org/developer/designate/install/ubuntu-dev.html) in a Vagrant VM with Ubuntu (precise64 image). I've found a problem in the same step. However, now the error is different: $ designate-manage database sync No handlers could be found for logger oslo_config.cfg usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug] [--log-config-append PATH] [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR] [--log-file PATH] [--log-format FORMAT] [--nouse-syslog] [--nouse-syslog-rfc-format] [--noverbose] [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-syslog] [--use-syslog-rfc-format] [--verbose] [--version] [--nodebug] {powerdns} ... designate: error: argument category: invalid choice: 'database' (choose from 'powerdns') I've tried with the master branch and even with the stable/kilo branch, with the same result. I've also noticed that the master branch requires a custom installation of SQLAlchemy to avoid a version conflict: *pip install SQLAlchemy==0.9.9* I've contacted #openstack-dns today and it looks like a dependency problem. However, all the dependencies were installed successfully. For me it's too hard to investigate the root cause of the problem. Tomorrow, I'll try to pursue this issue again on IRC. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel][Fuel-Library] Nominate Aleksandr Didenko for fuel-library core
On 02 Jul 2015, at 06:59, Mike Scherbakov mscherba...@mirantis.com wrote: Alex - congratulations! Added you to fuel-library core. Also congrats from me, well deserved! Regards, -- Tomasz 'Zen' Napierala Product Engineering - Poland __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Murano] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?
Hi all, I suggest to add CI job which will check the unit tests coverage for Murano repository and will set -1 for commits with new code and without unit tests (if we have some degradation of tests coverage). This job successfully works for Rally project and it helps to organize the right code development process when developers write new unit tests for new functionality. we can just copy this job from Rally and start to use it for Murano: Coverage control script: https://github.com/openstack/rally/blob/master/tests/ci/cover.sh Configuration file for coverage plugin (to exclude code which shouldn't be affected): https://github.com/openstack/rally/blob/master/.coveragerc Example of job in infra repository: https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088 I expect that it will help to increase the tests coverage by unit tests. Do we have any objections? -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo][oslo-vmware] Core review team
+1, it'd be great to have Eric in the oslo.vmware team -Rado On 7/2/15 8:03 PM, Gary Kotton wrote: Hi, Over time the team of people working in the project has changed and evolved. We would like to add the following people following their contributions for the project: * Eric Brown We would like to remove the following people as they are no longer working on the project and thank them for their contributions: * Vui Lam * Arnaud Legendre * Kartik Bommepally * Subbu * Shawn Hartsock Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [fuel] Minutes from meeting on Kilo packages merge
We found that we were having problems staying in sync on where everyone was with the actions to get the Kilo packages merged. Here is the summary from the meeting: https://etherpad.openstack.org/p/fuel-kilo -- -- Andrew Woodward Mirantis Fuel Community Ambassador Ceph Community __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [diskimage-builder] disk-image-create from virtualenv not working
2015-07-02 19:58 GMT+02:00 Alec Hothan (ahothan) ahot...@cisco.com: I wonder if anybody else ran into the same issue as it looks like PyYAML is required on the native python in order for disk-image-crate to succeed. Installing PyYAML in a virtual environment and running disk-iage-create from that venv won't work as one of the script fails to import yaml: dib-run-parts Thu Jul 2 09:27:50 PDT 2015 Running /tmp/image.ewtpa5DW/hooks/extra-data.d/99-squash-package-install ['/tmp/image.ewtpa5DW/hooks/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] Traceback (most recent call last): File /tmp/image.ewtpa5DW/hooks/extra-data.d/../bin/package-installs-squash, line 26, in module import yaml ImportError: No module named yaml I added a trace to print sys.path in that script and sure enough, it is bypassing completely my venv path, it looks like the PATH is not picking up the existing PATH from the calling shell. from the same shell that ran disk-iage-create above: $ pip list | grep YAML PyYAML (3.11) $ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type help, copyright, credits or license for more information. import sys print sys.path ['', '/home/localadmin/kb/lib/python2.7', '/home/localadmin/kb/lib/python2.7/plat-x86_64-linux-gnu', '/home/localadmin/kb/lib/python2.7/lib-tk', '/home/localadmin/kb/lib/python2.7/lib-old', '/home/localadmin/kb/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/localadmin/kb/local/lib/python2.7/site-packages', '/home/localadmin/kb/lib/python2.7/site-packages'] Is that a known issue/requirement or is there a workaround (other than installing pyYAML on the native python? 
elements/package-installs/extra-data.d/99-squash-package-install does a sudo -E package-installs-squash, however at least on Ubuntu the default seems to be having env_reset defined, causing the environment variables being cleared and thereby losing information about your venv. As a workaround you could try to change the env_reset setting in your /etc/sudoers. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [packaging] how to deal with the rename of config files in neutron on upgrade?
On 7/2/2015 10:39 AM, Kyle Mestery wrote: On Thu, Jul 2, 2015 at 10:35 AM, Matt Riedemann mrie...@linux.vnet.ibm.com mailto:mrie...@linux.vnet.ibm.com wrote: This change in neutron [1] renames the linuxbridge and openvswitch plugin config files. I'm familiar with the %config(noreplace) directive in rpm but I'm not sure if there is a special trick with rpm to rename a config file while not losing the changes in the config file during the upgrade. Is this just something that has to be handled with trickery in the %post macro where we merge the contents together if the old config file exists? Would symbolic links help? Changes like this seem like a potential giant pain in the ass for packagers. While a pain in the ass, this should have been done when we deprecated the agents two cycles ago, so this was really just bleeding the pain out longer. I flagged this as DocImpact so we can add a documentation note, and we'll update the Release Notes with this as well. [1] https://review.openstack.org/#/c/195277/ -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yeah, I'm just looking for ideas. Someone mentioned you could just copy the existing config and name it the new config so it'd have the old settings, and on install rpm won't overwrite it b/c of %config(noreplace). That's something easy to do in %pre or %post. 
-- Thanks, Matt Riedemann
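The copy-the-old-config trick could look roughly like the sketch below as a %pre/%post scriptlet. The helper name and demo paths are illustrative, not the real neutron file names:

```shell
# Sketch of seeding a renamed config file in %pre/%post so that
# %config(noreplace) then protects the admin's edits on upgrade.
# preserve_config and the demo paths are illustrative names.
preserve_config() {
    old=$1
    new=$2
    # Seed the new path only when the old file exists and the new
    # one doesn't; rpm's %config(noreplace) handling does the rest.
    if [ -f "$old" ] && [ ! -f "$new" ]; then
        cp -p "$old" "$new"
    fi
}

# Demo with throwaway files standing in for the old/new plugin configs:
demo=$(mktemp -d)
echo "tenant_network_type = vlan" > "$demo/linuxbridge_conf.ini"
preserve_config "$demo/linuxbridge_conf.ini" "$demo/linuxbridge_agent.ini"
cat "$demo/linuxbridge_agent.ini"   # old settings survive under the new name
```

In a real spec this would run from %pre of the package shipping the new file name, before rpm lays down the new %config(noreplace) default.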
Re: [openstack-dev] [puppet] [infra] issues with beaker/centos7 job
On 2015-07-02 16:02:32 +0200 (+0200), Alan Pevec wrote: After having a closer look, I see that the image has requests 2.7 installed from pypi, which overwrites the python-requests RPM installation and wreaks havoc when trying to upgrade the RPM. I'm not sure why and where pypi is used during the image build, but it should not be installed system-wide on an RPM system. If really needed, install it in a venv. After some deep digging, I think https://review.openstack.org/198082 will solve this (I'll fire up manual image updates once it merges). -- Jeremy Stanley
Re: [openstack-dev] [fuel] Minutes from meeting on Kilo packages merge
Thanks xarses, most appreciated! On 07/02/2015 02:20 PM, Andrew Woodward wrote: We found that we were having problems getting in sync with where everyone was on actions to get the kilo packages merged. Here is the summary from the meeting https://etherpad.openstack.org/p/fuel-kilo -- -- Andrew Woodward Mirantis Fuel Community Ambassador Ceph Community
[openstack-dev] [Security][Bandit] Bandit gate usage
Hello Stackers, A few intrepid projects have started adopting Bandit, an automatic security linter built by the security project, into their gate tests. This is very rewarding to see for those of us who have worked on the project and for people with an interest in securing the OpenStack codebase. The list of (known) adopters so far: - Keystone - Keystone-client - Barbican - Anchor - Sahara - Magnum If you know of, or are involved in, a project that’s using Bandit and isn’t on our list then please let us know; it would be great to hear your feedback. If you would like to begin using it then check out our wiki for instructions here [1]. If you have no idea what this Bandit thing is then perhaps this presentation from the Vancouver summit might be interesting to you [2]. A Bandit gate job can be configured either as an experimental or non-voting job, so if you're interested in trying it out you can give it a go and decide if it's a good fit for your project before fully committing. Bandit is regularly discussed in the Security Project IRC meetings and feedback is very welcome. If you have questions or suggestions then feel free to drop in or reply here. [1] https://wiki.openstack.org/wiki/Security/Projects/Bandit [2] https://www.youtube.com/watch?v=hxbbpdUdU_k Many thanks -- Tim Kelsey OpenStack Security member
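For projects that want to try Bandit locally before wiring up a gate job, a tox environment is the usual shape. A sketch only — the source directory is a placeholder, and the wiki [1] has the project-specific setup:

```ini
# tox.ini -- illustrative bandit environment; "yourproject" is a
# placeholder for the package directory to scan.
[testenv:bandit]
deps = bandit
commands = bandit -r yourproject
```

Running `tox -e bandit` then gives the same result locally that an experimental or non-voting gate job would report.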
Re: [openstack-dev] Barbican : Regarding the Tempest Tests for Barbican
On Wed, Jul 01, 2015 at 03:30:55PM -0500, Douglas Mendizábal wrote: Hi Asha, The blueprint you linked for Tempest is over a year old. I think it pre-dates the Tempest team's decision to stop putting all project tests in the same repo. I believe the spec is obsolete, but someone from the Tempest team can correct me if I'm wrong. Yes, that blueprint was quite old and if you look at the history for it there was nary a patch submitted against it. So, I guess whoever was planning to do that work never got around to it. The reason the BP was sitting around for so long is mostly because I'm terrible at the LP maintenance. I apologize for any confusion that caused. I took some time this afternoon to go through open blueprints and the specs repo to clean things up. I marked this particular BP as obsolete now to reflect its actual state. You're correct in your assertion that we will be moving to a limited set of projects for which tests are maintained in the tempest tree. The plan is to have everything else that wants to use tempest for testing but doesn't fit into that set of projects leverage tempest-lib and the plugin interface which is currently in progress. However, until all the pieces are in place, including docs to explain this all, we're not blocking additions for projects that are currently in-tree but outside that set (which does not include barbican because nothing was ever added). -Matt Treinish The automated tests that validate the API are the Functional Tests I linked in my earlier email. - Douglas Mendizábal On 7/1/15 3:22 PM, Asha Seshagiri wrote: Hi Douglas, Are there any automated test cases created for validating the Barbican APIs? 
Thanks and Regards, Asha Seshagiri On Wed, Jul 1, 2015 at 3:12 PM, Asha Seshagiri asha.seshag...@gmail.com wrote: Thanks Douglas for your response, and I appreciate you pointing me to the right link. I was talking about the tempest tests to validate the Barbican APIs. Please find the spec [1] and blueprint link [2] for the same. [1] http://specs.openstack.org/openstack/qa-specs/specs/barbican-api-tests.html [2] https://blueprints.launchpad.net/tempest/+spec/add-basic-tests-for-barbican Have the above spec and blueprint become void for Barbican? Now I can use the link sent by you for validating the APIs. Thanks and Regards, Asha Seshagiri On Wed, Jul 1, 2015 at 2:32 PM, Douglas Mendizábal douglas.mendiza...@rackspace.com wrote: Hi Asha, I'm not sure what you mean by tempest tests. If you're looking for Functional Tests for Barbican, then you can find them in the functionaltests directory [1] inside the Barbican repo. We have no intentions of adding Barbican-specific tests to the Tempest repo. It's my understanding that Tempest is moving away from one monolithic repository into a modular approach using tempest-lib. - Douglas Mendizábal [1] http://git.openstack.org/cgit/openstack/barbican/tree/functionaltests On 7/1/15 2:12 PM, Asha Seshagiri wrote: Hi All, Has anyone done the Tempest tests for the Barbican API? Any help would be highly appreciated. -- Thanks and Regards, Asha Seshagiri
Re: [openstack-dev] [nova] schedule instance based on CPU frequency ?
On 07/01/2015 12:23 AM, ChangBo Guo wrote: thanks Dan and Jay, we don't need to add a new scheduler for that :-), what about providing cpu frequency via the /os-hypervisors API? That means we can report this value automatically, and the value can be used in high-level management tools. Meh, I'm not too big of a fan of the os-hypervisors extension. Actually, one might say I despise that extension :) That said, I suppose it should be possible to include the output of the CPU frequency in the cpu_info field there... -jay 2015-07-01 2:58 GMT+08:00 Jay Pipes jaypi...@gmail.com: On 06/30/2015 02:42 AM, ChangBo Guo wrote: CPU frequency is an important performance parameter; currently nova drivers just report cpu_info without frequency. We stored the compute node cpu_info in the database in the column compute_nodes.cpu_info, so we can add the frequency easily. One use of cpu frequency I can think of is scheduling to meet applications which need high frequency. Add a frequency-based filter? If we need this, I would like to propose a spec for it. There are two steps to leverage cpu frequency: 1. report cpu frequency and record the value; nova hypervisor-show will include the value. 2. filter compute nodes based on cpu frequency: add a new scheduler filter to do that. Before I start on this work, I would like your input. Do we need to leverage CPU frequency in Nova? If yes, do we need a new filter or can we leverage an existing filter to use frequency? Like Dan B, I question whether CPU frequency really is a useful metric for scheduling decisions. That said, it is already possible to use CPU frequency in the MetricsWeigher scheduler weigher. 
The compute monitor plugin system is currently being overhauled [1], but the functionality to monitor CPU-related metrics already exists in Nova and can be enabled by doing the following in your nova-compute nova.conf: compute_monitors = ComputeDriverCPUMonitor Note that with the refactoring of the monitoring plugin interface, the above option will change due to using stevedore to load monitor extensions: compute_monitors = nova.compute.monitors.cpu.virt_driver:Monitor In your Nova scheduler nova.conf, you will need to add the following in the [metrics] section of the file: weights_setting = cpu.frequency=10.0 Again, I'm not saying that the above will result in any appreciable enhancement to the scheduler's decision-making, but it will do what you're trying to accomplish :) Best, -jay [1] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1468012,n,z -- ChangBo Guo(gcb)
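Collected into config-file form, Jay's suggestions look roughly like this. Option spellings are taken from the thread — verify them against your release's nova.conf reference, and note the scheduler must also have the metrics weigher enabled in its weigher list:

```ini
# nova.conf on the compute node (pre-refactor spelling from the thread):
[DEFAULT]
compute_monitors = ComputeDriverCPUMonitor
# post-refactor, stevedore-loaded spelling:
# compute_monitors = nova.compute.monitors.cpu.virt_driver:Monitor

# nova.conf on the scheduler node:
[metrics]
weights_setting = cpu.frequency=10.0
```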
Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?
On Jul 2, 2015, at 2:35 PM, Steve Baker sba...@redhat.com wrote: On 03/07/15 06:03, Randall Burt wrote: Maybe use all for all attributes in the schema and use show for the raw output from the service (as is done today for server and neutron stuff). Instead of all, how about allowing a special form of {get_attr: [resource_name]} with no extra arguments to return a dict of all attributes? This would be consistent with how extra arguments traverse attribute data. +1 (Hope you can read this despite my bobo client). On Jul 2, 2015, at 12:46 PM, Steven Hardy sha...@redhat.com wrote: On Thu, Jul 02, 2015 at 04:40:49PM +0300, Sergey Kraynev wrote: Hi Heaters. I don't think that my question is very big for openstack-dev, but it affects a lot of Heat resources and I need to collect more opinions before applying one of the following approaches. I recently uploaded an initial approach for implementing a common 'show' attribute [1]. On one of these reviews an interesting suggestion was raised: the 'show' attribute should return a map of all of the resource's attributes, i.e.: for attr in self.attributes_schema: outputs[attr] = _resolve_attribute(attr); return outputs. I agree that it's easier than a separate show_resource method for each resource, and it's the same as what the Neutron API returns on a show request. However, we already have an opposite example: the OS::Nova::Server resource has a bunch of attributes which do not match the current 'show' attribute output: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L918 I suppose the same situation holds for other resources. So I want to ask which way we would like to follow: 
[1] 'show' as a collection of attributes [2] 'show' as the same output as the client's show command I think [2] is the most useful, and most consistent with both the nova and all neutron resources: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/neutron.py#L129 Another advantage of this transparent passthrough of the data returned by the client is that folks have a workaround in the event the heat attributes schema lacks some new value that the client returns. Obviously, when it's added to the attributes schema it'll be better to use that instead. Steve
Re: [openstack-dev] [Sahara] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?
Hi Timur, I absolutely disagree with this approach. IMO such checks just help to create an unhealthy dev process where contributors write tests only to pass the check. We have a non-voting coverage job to track the unit tests coverage in addition to the code review itself. It's the responsibility of the core team to ask for additional unit tests if they are missing. For the Sahara project specifically there are tons of places where unit tests would be just mock testing and such tests are mostly useless. For the places not covered by unit tests we have tons of integration tests. Currently we are trying to force contributors to cover new code with unit tests where it's really applicable. Let me add some numbers. We have ~55% total code coverage by unit tests and ~70% by integration tests (significantly shifted in terms of code-block coverage compared to the unit tests). If we ignore the plugins dir, which contains mostly deployment code fully tested by integration tests, we have ~70% unit test coverage. Thanks. On Thu, Jul 2, 2015 at 8:55 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: Hi all, I suggest to add CI job which will check the unit tests coverage for Sahara repository and will set -1 for commits with new code and without unit tests (if we have some degradation of tests coverage). This job successfully works for Rally project and it helps to organize the right code development process when developers write new unit tests for new functionality. we can just copy this job from Rally and start to use it for Sahara: Coverage control script: https://github.com/openstack/rally/blob/master/tests/ci/cover.sh Configuration file for coverage plugin (to exclude code which shouldn't be affected): https://github.com/openstack/rally/blob/master/.coveragerc Example of job in infra repository: https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088 I expect that it will help to increase the tests coverage by unit tests. 
Do we have any objections? -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc.
Re: [openstack-dev] [all]deprecating [test-]requirements-PYN.txt
On 29 June 2015 at 04:59, Robert Collins robe...@robertcollins.net wrote: Hi, so we're nearly ready to deprecate the python-version-specific requirements files. Once we have infra's requirements cross checking jobs all copacetic again, we should be able to move forward. There isn't a specific spec for this in pbr, and I wanted to get some broad input into the manner of the deprecation. SNIP Slightly offtopic, but I've noticed that some consumers of bandit[0] have been creating requirements-bandit.txt. This is to specify bandit requirements without requiring the whole test-requirements.txt env to be installed, to run what is essentially a linting tool. I'm not sure I like the idea of creating MORE requirements.txt style files as it pollutes the project root namespace and currently has no syncing from global-requirements. I wondered if you had any ideas on how to solve this for bandit usage, and potentially other projects? [0] https://wiki.openstack.org/wiki/Security/Projects/Bandit -- Kind Regards, Dave Walker __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Deadline for mid-cycle meetup signup
Just a reminder that the deadline is next week! Cheers, Michael On Fri, Jun 26, 2015 at 8:57 AM, Michael Still mi...@stillhq.com wrote: Hi, I just wanted to let people know that we're setting a deadline of 7 July for mid-cycle meetup signups. So, if you're intending to sign up but haven't already I'd do it real soon now. You can sign up at: https://www.eventbrite.com.au/e/openstack-nova-liberty-mid-cycle-developer-meetup-tickets-16908756546 Cheers, Michael -- Rackspace Australia -- Rackspace Australia
Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?
On Fri, Jul 03, 2015 at 07:35:18AM +1200, Steve Baker wrote: On 03/07/15 06:03, Randall Burt wrote: Maybe use all for all attributes in the schema and use show for the raw output from the service (as is done today for server and neutron stuff). Instead of all, how about allowing a special form of {get_attr: [resource_name]} with no extra arguments to return a dict of all attributes? This would be consistent with how extra arguments traverse attribute data. +1
Re: [openstack-dev] [Sahara] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?
Anastasia, re: "a new patch may not be just new code; the committer may delete something or fix typos in docstrings, etc." — the job compares the number of uncovered lines before and after the patch. If you just remove code there will be fewer lines that need to be covered, so the number of uncovered lines will be lower or the same (if everything was covered before). Fixing typos in docstrings won't introduce new lines. Btw, the job also allows a few (N) new uncovered lines for the cases where covering them isn't practical. Best regards, Boris Pavlovic On Thu, Jul 2, 2015 at 10:46 AM, Anastasia Kuznetsova akuznets...@mirantis.com wrote: Hi Timur, Generally I think that it is a good idea to have a gate that will check whether new code is covered by unit tests or not. But I am not sure that this gate should be voting (if I understand you correctly), because a new patch may not be just new code; the committer may delete something or fix typos in docstrings, etc. On Thu, Jul 2, 2015 at 8:15 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: Hi all, I suggest to add CI job which will check the unit tests coverage for Sahara repository and will set -1 for commits with new code and without unit tests (if we have some degradation of tests coverage). This job successfully works for Rally project and it helps to organize the right code development process when developers write new unit tests for new functionality. we can just copy this job from Rally and start to use it for Sahara: Coverage control script: https://github.com/openstack/rally/blob/master/tests/ci/cover.sh Configuration file for coverage plugin (to exclude code which shouldn't be affected): https://github.com/openstack/rally/blob/master/.coveragerc Example of job in infra repository: https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088 I expect that it will help to increase the tests coverage by unit tests. Do we have any objections? 
-- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc -- Best regards, Anastasia Kuznetsova
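The comparison Boris describes can be sketched as below. A toy model only: the counts are placeholders for numbers the real cover.sh parses out of `coverage report` runs on master and on the patched tree, and `check_coverage` is a made-up name:

```shell
# Toy model of the coverage-delta gate: fail only when a patch adds
# more uncovered lines than a small allowance.
check_coverage() {
    baseline_missing=$1   # uncovered lines before the patch
    patched_missing=$2    # uncovered lines after the patch
    allowed_new=$3        # the small allowance Boris mentions
    if [ $(( patched_missing - baseline_missing )) -gt "$allowed_new" ]; then
        echo "FAIL: patch adds too many uncovered lines"
        return 1
    fi
    echo "PASS"
}

check_coverage 120 123 4   # prints "PASS": only 3 new uncovered lines
```

Note that pure deletions and docstring fixes leave `patched_missing` at or below the baseline, so they pass — which is exactly Boris's point about Anastasia's concern.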
Re: [openstack-dev] [magnum] Magnum Midcycle Event Scheduling Doodle Poll closes July 7th
Yes, if you're remotely participating please still vote so we can get the maximum people in. Regards -steve From: Kai Qiang Wu wk...@cn.ibm.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Wednesday, July 1, 2015 at 6:11 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [magnum] Magnum Midcycle Event Scheduling Doodle Poll closes July 7th Hi Stdake, If remote participation, do I need to vote for http://doodle.com/pinkuc5hw688zhxw ? Thanks Best Wishes, Kai Qiang Wu (吴开强 Kennan) IBM China System and Technology Lab, Beijing E-mail: wk...@cn.ibm.com Tel: 86-10-82451647 Address: Building 28(Ring Building), ZhongGuanCun Software Park, No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193 Follow your heart. You are miracle! From: Steven Dake (stdake) std...@cisco.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 07/02/2015 02:06 AM Subject: [openstack-dev] [magnum] Magnum Midcycle Event Scheduling Doodle Poll closes July 7th Apologies for double post – left off [magnum] prior by error. Ton Ngo of IBM Silicon Valley Research has graciously offered to host the 2 day Magnum midcycle event at IBM’s facilities. The sessions will run from 9AM – 5PM and catered lunch and refreshments (soda/water) will be provided. 
The mid-cycle will be a standard mid-cycle with a 1 hour introduction followed by two days of design sessions. Please cast your votes on any days you can make. http://doodle.com/pinkuc5hw688zhxw There are ~25 seats available. Preference will be given to in-person core reviewers, followed by any folks that have made commits to the repository. After dates are settled, a separate eventbrite event will be set up to sort out the specifics such as dietary needs, etc. and confirm in-person seating if we are past capacity limits. We will make remote participation available, but the experience will likely be less than optimal for remote participants. Regards -steve
Re: [openstack-dev] [Sahara] [QA] [tests coverage] Can we add CI job to control the unit tests coverage?
Hi Sergey, Looks like I suggested a job which we already have :) So I think the non-voting coverage job is OK for the Sahara project and we can just continue to use it. Thank you! On Thu, Jul 2, 2015 at 11:06 PM, Sergey Lukjanov slukja...@mirantis.com wrote: Hi Timur, I absolutely disagree with this approach. IMO such checks just help to create an unhealthy dev process where contributors write tests only to pass the check. We have a non-voting coverage job to track the unit tests coverage in addition to the code review itself. It's the responsibility of the core team to ask for additional unit tests if they are missing. For the Sahara project specifically there are tons of places where unit tests would be just mock testing and such tests are mostly useless. For the places not covered by unit tests we have tons of integration tests. Currently we are trying to force contributors to cover new code with unit tests where it's really applicable. Let me add some numbers. We have ~55% total code coverage by unit tests and ~70% by integration tests (significantly shifted in terms of code-block coverage compared to the unit tests). If we ignore the plugins dir, which contains mostly deployment code fully tested by integration tests, we have ~70% unit test coverage. Thanks. On Thu, Jul 2, 2015 at 8:55 PM, Timur Nurlygayanov tnurlygaya...@mirantis.com wrote: Hi all, I suggest to add CI job which will check the unit tests coverage for Sahara repository and will set -1 for commits with new code and without unit tests (if we have some degradation of tests coverage). This job successfully works for Rally project and it helps to organize the right code development process when developers write new unit tests for new functionality. 
we can just copy this job from Rally and start to use it for Sahara: Coverage control script: https://github.com/openstack/rally/blob/master/tests/ci/cover.sh Configuration file for coverage plugin (to exclude code which shouldn't be affected): https://github.com/openstack/rally/blob/master/.coveragerc Example of job in infra repository: https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L4088 I expect that it will help to increase the tests coverage by unit tests. Do we have any objections? -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc
[openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team
Hi all, As the Lieutenant of the advanced services, I would like to nominate Al Miller to be a member of the neutron-lbaas core reviewer team. Review stats are in line with other cores[2] and feedback on patches has been great. Additionally, he has been instrumental in our devstack support and octavia work. Existing cores, please vote +1/-1 for his addition to the team (that’s Brandon, Phil, and Kyle.) Thanks, doug 1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy 2. http://stackalytics.com/report/contribution/neutron-lbaas/90 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all]deprecating [test-]requirements-PYN.txt
On 3 July 2015 at 08:22, Dave Walker em...@daviey.com wrote: On 29 June 2015 at 04:59, Robert Collins robe...@robertcollins.net wrote: Hi, so we're nearly ready to deprecate the python-version-specific requirements files. Once we have infra's requirements cross checking jobs all copacetic again, we should be able to move forward. There isn't a specific spec for this in pbr, and I wanted to get some broad input into the manner of the deprecation. SNIP Slightly offtopic, but I've noticed that some consumers of bandit[0] have been creating requirements-bandit.txt. This is to specify bandit requirements without requiring the whole test-requirements.txt env to be installed, to run what is essentially a linting tool. Well, this doesn't really make sense to me. Making a new virtualenv with tools like that takes a few seconds (after the first warm-up), and the virtualenvs persist. For CI, it's not the bottleneck and thus perf is irrelevant. I'm not sure I like the idea of creating MORE requirements.txt style files as it pollutes the project root namespace and currently has no syncing from global-requirements. I wondered if you had any ideas on how to solve this for bandit usage, and potentially other projects? [0] https://wiki.openstack.org/wiki/Security/Projects/Bandit It's feasible to use extras for this, but I wouldn't. requirements-bandit.txt is definitely a problem though, as it's not synced, not linted, and not cross-checked. So my recommendations are: 1) test-requirements. That's what it's there for. 2) extras, if that really is an issue. Also, if doing this, help everyone by documenting where the issue is. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
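For the extras route Rob mentions, with pbr-style packaging the declaration could look something like this in setup.cfg — a sketch only; the section layout reflects pbr's extras support and the unpinned requirement is illustrative:

```ini
# setup.cfg -- sketch: bandit as an optional extra instead of a
# separate requirements-bandit.txt.
[extras]
bandit =
    bandit
```

Consumers would then run `pip install .[bandit]` to pull in just the linting dependency, keeping the project root free of extra requirements files.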
Re: [openstack-dev] [neutron][lbaas] Proposing Al Miller for neutron-lbaas core team
Al has been a great asset to LBaaS. Well deserved! +1000 German On 7/2/15, 3:16 PM, Doug Wiegley doug...@parksidesoftware.com wrote: Hi all, As the Lieutenant of the advanced services, I would like to nominate Al Miller to be a member of the neutron-lbaas core reviewer team. Review stats are in line with other cores[2] and feedback on patches has been great. Additionally, he has been instrumental in our devstack support and octavia work. Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, and Kyle.) Thanks, doug 1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy 2. http://stackalytics.com/report/contribution/neutron-lbaas/90