[openstack-dev] [nova][neutron][SR-IOV]
Hi all, I prepared an etherpad with all the SR-IOV features [1] that were submitted to Neutron/Nova for Liberty. Please feel free to add new features or existing features that I missed. The etherpad also includes an issues-to-discuss section; please feel free to add your feedback/issues under it. I will arrange a BoF session; time and day are TBD. [1] https://etherpad.openstack.org/p/liberty-sriov Regards, Moshe Levi __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] setting the clock to remove pypy testing
We've disabled all the pypy tests across OpenStack because they were failing, and after 48 hours no one was actually working on any fixes. They're thus effectively just burning nodes for no value. It's not clear that there are any active contributors to OpenStack who find the pypy use case interesting enough to stay on top of it. A failure in this non-main path blocks a ton of projects from landing any code. I would recommend we set the following removal criteria for June 1st - 2 weeks out. * the pypy jobs all need to be passing again * there are 2 champions who have come forward and will be active in #openstack-dev, #openstack-infra, and #openstack-qa and will commit to actively keeping an eye on such things. I feel like we need 2 champions because we need a hot spare (people go on vacation, have other distractions; having only 1 person able to do a thing means the responsibility is really thrust back onto -infra and -qa folks). I'd expect these champions to be the ones who fix the current pypy issues. I think the original theory of pypy is that we would rub cheetah blood on OpenStack and make it magically faster. But, as has been discussed in other threads: control plane services aren't being slowed down by python, they are being slowed down by other solvable architecture changes. Data plane services (like swift) aren't helped enough by pypy for performance-critical paths, and are thus looking into alternative languages for those parts. -Sean -- Sean Dague http://dague.net
[openstack-dev] [Merlin] Cannot log into the dashboard
Hello all, I am trying to install Merlin and integrate it into Horizon as reported at https://github.com/stackforge/merlin, but I cannot log into the OpenStack dashboard because it no longer works. This occurs after I copy the pluggable config (_50_add_mistral_panel.py) for the Mistral panel into Horizon. If I remove it, the dashboard becomes operational again. Is it a known issue? Is there another installation guide or something like that, or some workarounds? Thanks in advance!
Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?
On 05/14/2015 06:52 AM, John Garbutt wrote: On 12 May 2015 at 20:33, Sean Dague s...@dague.net wrote: On 05/12/2015 01:12 PM, Jeremy Stanley wrote: On 2015-05-12 10:04:11 -0700 (-0700), Clint Byrum wrote: It's a nice upside. However, as others have pointed out, it's only capable of displaying the most basic pieces of the architecture. For higher-level views with more components, I don't think ASCII art can provide enough bandwidth to help as much as a vector diagram. Of course, simply a reminder that just because you have one or two complex diagram callouts in a document doesn't mean it's necessary to also go back and replace your simpler ASCII art diagrams with unintelligible (without rendering) SVG or Postscript or whatever. Doing so pointlessly alienates at least some fraction of readers. Sure, it's all about trade-offs. But I believe that statement implicitly assumes that ascii art diagrams do not alienate some fraction of readers, and I think that's a bad assumption. If we all feel alienated every time anyone does anything that's not exactly the way we would have done it, it's time to give up and pack it in. :) This thread specifically mentioned source-based image formats that are internationally adopted open standards (w3c SVG, ISO ODG) and that have free software editors for Windows, Mac, and Linux (Inkscape and Open/LibreOffice). Some great points made here. Let's try to decide something, and move forward. Key requirements seem to be: * we need something that gives us readable diagrams * if it's not easy to edit, it will go stale * ideally it needs to be source based, so it lives happily inside git * it needs to integrate into our sphinx pipeline * ideally there is an open-source editor for that format (import and export), for most platforms ascii art fails on many of these, but it's always a trade-off. 
Possible way forward: * let's avoid merging large, hard-to-edit bitmap-style images * nova-core reviewers can apply their judgement on merging source-based formats * however it *must* render correctly in the generated html (see the result of the docs CI job) Trying out SVG, and possibly blockdiag, seem like the front-runners. I don't think we will get consensus without trying them, so let's do that. Will that approach work? Sounds great to me. This is all about being pragmatic. -Sean -- Sean Dague http://dague.net
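As an illustration of the source-based, text-editable option mentioned above, a minimal blockdiag sketch looks like the following (the component names are invented for the example, not taken from any actual nova doc):

blockdiag {
  "nova-api" -> "nova-conductor" -> "nova-compute";
  "nova-api" -> "nova-scheduler";
}

blockdiag renders this source to PNG/SVG and plugs into the Sphinx pipeline via the sphinxcontrib-blockdiag extension, so the diagram source can live in git and render during the docs CI job.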
Re: [openstack-dev] [nova][heat][oslo] sqlalchemy-migrate tool to alembic
Excerpts from Manickam, Kanagaraj's message of 2015-05-14 06:18:25 +: Hi Nova team, This mail is regarding help required on the migration from the sqlalchemy-migrate tool to the alembic tool. Heat is currently using the sqlalchemy-migrate tool, and in the Liberty release we are investigating how to bring alembic into heat. We found that nova already tried the same (https://review.openstack.org/#/c/15196/ ) almost 2 years ago, and as of the Kilo release nova is still using the sqlalchemy-migrate tool. (https://github.com/openstack/nova/tree/master/nova/db/sqlalchemy/migrate_repo/versions) So we are assuming that, in nova, you might have faced blockers in bringing in alembic. We would like to seek your recommendations/suggestions based on your experience with this; it will help us take the proper direction on using alembic in heat. Could you kindly share it. There are some folks on the oslo.db team who have looked into making alembic work. You may want to check in with them to see what the state of that project is before redoing a lot of the work yourselves. Doug Thanks in advance ! Regards, Kanagaraj M
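For context on what either tool manages: both sqlalchemy-migrate and alembic boil down to tracking a schema version and applying ordered upgrade steps. A toy, stdlib-only sketch of that mechanism (table and column names are invented for illustration and are not Heat's actual schema):

```python
import sqlite3

# Ordered upgrade steps; each entry's position is its schema version.
# Migration tools generate these as versioned script files instead.
MIGRATIONS = [
    "CREATE TABLE stack (id INTEGER PRIMARY KEY, name TEXT)",  # version 1
    "ALTER TABLE stack ADD COLUMN status TEXT",                # version 2
]

def upgrade(conn):
    """Apply any not-yet-applied migrations; return (old, new) version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, statement in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    return current, len(MIGRATIONS)

conn = sqlite3.connect(":memory:")
old, new = upgrade(conn)  # fresh DB: applies both steps
```

A real alembic setup records the version in an alembic_version table and generates the steps as script files with upgrade()/downgrade() functions; the sketch only shows the underlying idea both tools share.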
Re: [openstack-dev] [nova][neutron]Fail to communicate new host when the first host for a new instance fails
Hi Rossella, Many thanks for your quick reply! On 14/05/15 11:08, Rossella Sblendido wrote: Hi Neil, what's the status of the port after the migration? You might be hitting [1]. See also the patch that fixes the issue [2] Thanks, but that is definitely not the cause of the problem in my case, because my agent does not call get_device_details. (BTW - it seems obviously wrong to me for an API named get_device_details to change the port status to BUILD, even if the call is coming from the correct host. I would expect that an agent could safely call get_device_details at any time without having any effect on the port state.) If you wait a bit longer, is the host_id updated by Nova? No, it isn't. I've now been able to reproduce this again and look directly at the Neutron DB, and I think what I see indicates that this is definitely an OpenStack bug (as opposed to a problem in my mechanism driver). My hosts are named calico-vm13 and calico-vm15, and calico-vm13 is set up so that libvirt will fail to launch any instances. When I use the Horizon UI to create an instance, Nova tries calico-vm13 first - which fails - and then calico-vm15, which succeeds. Horizon then shows that the instance is on calico-vm15:

  admin | calico-vm15 | dltst | cirros-0.3.2-x86_64 | 10.28.29.214, 2001:db8:c41:2::1d9a | m1.tiny | Active | None | Running | 24 minutes

The port for that instance is the cc80291c one here:

mysql> select * from ports;
+------------+-------------+------+-------------+-------------------+----------------+--------+-------------+--------------+
| tenant_id  | id          | name | network_id  | mac_address       | admin_state_up | status | device_id   | device_owner |
+------------+-------------+------+-------------+-------------------+----------------+--------+-------------+--------------+
| b2d9f70... | 79fd9d6c... |      | 1fca4aa4... | fa:16:3e:d3:1a:62 |              1 | DOWN   | dhcpea9f... | network:dhcp |
| b2d9f70... | cc80291c... |      | 1fca4aa4... | fa:16:3e:bc:df:f0 |              1 | ACTIVE | e2b61f5f... | compute:None |
| b2d9f70... | d9f7d1d0... |      | 1fca4aa4... | fa:16:3e:0b:29:3e |              1 | DOWN   | dhcp2ffe... | network:dhcp |
+------------+-------------+------+-------------+-------------------+----------------+--------+-------------+--------------+

And the ml2_port_bindings table shows that Neutron/ML2 thinks that port is still on calico-vm13:

mysql> select * from ml2_port_bindings;
+-------------+-------------+----------+--------+-------------+-----------+---------------------+---------+
| port_id     | host        | vif_type | driver | segment     | vnic_type | vif_details         | profile |
+-------------+-------------+----------+--------+-------------+-----------+---------------------+---------+
| 79fd9d6c... | calico-vm13 | tap      | calico | fdc5ef44... | normal    | {port_filter: true} |         |
| cc80291c... | calico-vm13 | tap      | calico | fdc5ef44... | normal    | {port_filter: true} |         |
| d9f7d1d0... | calico-vm15 | tap      | calico | fdc5ef44... | normal    | {port_filter: true} |         |
+-------------+-------------+----------+--------+-------------+-----------+---------------------+---------+

Where should I start looking, to see where Nova / Neutron _should_ be updating the port binding, in this scenario? Many thanks, Neil cheers, Rossella [1] https://bugs.launchpad.net/neutron/+bug/1439857 [2] https://review.openstack.org/#/c/163178/ On 05/14/2015 11:29 AM, Neil Jerram wrote: Hi all, this is about a problem I'm seeing with my Neutron ML2 mechanism driver [1]. I'm expecting to see an update_port_postcommit call to signal that the binding:host_id for a port is changing, but I don't see that. The scenario is launching a new instance in a cluster with two compute hosts, where we've rigged things so that one of the compute hosts will always be chosen first, but libvirt isn't correctly configured there and hence the instance launch attempt will fail. Then Nova tries to use the other compute host instead, and that mostly works - except that my mechanism driver still thinks that the new instance's port is bound to the first compute host. Is anyone aware of a known problem in this area (in Juno-level code), or of where I should start pinning this down in more detail? 
Many thanks, Neil
Re: [openstack-dev] [nova-scheduler] Cross-project requirements for an independent (gantt) scheduler at Vancouver
For sure I'll be at both sessions and can report back to the Nova scheduler session. The main reason for the Tues. session is to get cross-project ideas; I would expect that the Wed. Liberty scheduler session will be more Nova-focused, but there will clearly be overlap. -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 -Original Message- From: John Garbutt [mailto:j...@johngarbutt.com] Sent: Thursday, May 14, 2015 3:35 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova-scheduler] Cross-project requirements for an independent (gantt) scheduler at Vancouver On 13 May 2015 at 16:48, Dugger, Donald D donald.d.dug...@intel.com wrote: I'd like to get together to talk about what kind of APIs/requirements different projects would need from a common scheduler. To that end, room 221 is available on Tues. at 3:40PM. If you're at all interested in a common scheduler, come on by and join the conversation. Location: Rm 221 Time: 3:40PM Date: Tues., 5/19 It's possible some of that will come up in this nova-scheduler session (see line 44): https://etherpad.openstack.org/p/YVR-nova-scheduler-in-liberty But frankly, there is probably more than 40 mins of topics in that session right now. In any case, it would be great if you could feed the above discussions back into that session. Thanks, John PS Sorry for the duplicate email, I replied to the wrong thread.
Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?
On 05/13/2015 04:09 PM, Dolph Mathews wrote: Developers can handle ASCII. Developers can't handle steel blue versus cornflower blue. But seriously, graphics collaboratively authored by developers should, ideally, be editable via a text file. Otherwise they won't be maintained. Like how all those code comments and inline docs are awesomely maintained all over the place. :) I'm only half trolling, but I think there is an important bit in this thread. There is concern that context-switching tools to create a commit adds friction; that's a valid concern. I can accept it, which is why for simple things I think it's fine. There is also the "I'm not good at art" concern - making the assumption that it's not a useful skill in developing a source base, thus optimizing the workflow to prevent it from being in the tree, and thus telling people who might be good at such things, but less good at writing unit tests, that their skills are not valued. There are people who are better at writing coherent paragraphs describing things, people who are better at writing reusable code, people who are better at creating good tests (which might be different from writing those tests), people who are good at ascii art, and people who are good at creating other kinds of diagrams. We are a better project if we value all these contributions, and don't declare valueless the ones we personally are bad at. We all have gaps in our skill base; that's being human. The point of developing in a community is that we accept those gaps, and hopefully build a community of developers that covers all the bases collaboratively. -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [Nova] The security group is confused in the create server api
On Wed, May 13, 2015 at 4:54 PM, Lei Zhang zhang.lei@gmail.com wrote: Thanks for your reply. I read that thread, but it just throws an exception when confusing params are used. To solve this issue, is it worth creating a new API microversion to implement the above CLI? As I see from Matt's latest comment (May 13) in https://review.openstack.org/#/c/154068/ some work is planned or even started. I do not know the details.
Re: [openstack-dev] [ceilometer] proposal to add Chris Dent to Ceilometer core
hi folks, since we had strong support for cdent in the review below, it's my pleasure to welcome Chris Dent as the newest member of the Ceilometer core team. welcome Dude! you'll really tie the room together. cheers, gord From: g...@live.ca To: openstack-dev@lists.openstack.org Date: Fri, 8 May 2015 10:05:23 -0400 Subject: [openstack-dev] [ceilometer] proposal to add Chris Dent to Ceilometer core hi, i'd like to nominate Chris Dent to the Ceilometer core team. he has been one of the leading reviewers in Ceilometer and gives solid comments. he also has led the api effort in Ceilometer and provides insight in specs. as we did last time, please vote here: https://review.openstack.org/#/c/181394/ . if for whatever reason you cannot vote there, please respond to this. reviews: https://review.openstack.org/#/q/project:openstack/ceilometer+reviewer:%22Chris+Dent%22,n,z patches: https://review.openstack.org/#/q/project:openstack/ceilometer+owner:%22Chris+Dent%22,n,z cheers, gord
Re: [openstack-dev] [Merlin] Cannot log into the dashboard
Hello, Fabio! Sorry, it was my fault :(. In one of the most recent commits I've added persistent storage of Mistral workbooks via Django models - and to enable them, you must add a DATABASES setting to the openstack_dashboard.settings module as described at https://docs.djangoproject.com/en/1.8/ref/settings/#databases (use the sqlite3 driver). This change is a temporary one - as soon as we have a real Mistral API backing Merlin, I'll remove these models, because they're contrary to how things are done in Horizon. Nevertheless, I'll update the docs at github ASAP. If you need any further assistance, please contact me in the #openstack-merlin channel on the Freenode IRC server. On Thu, May 14, 2015 at 4:44 PM, Fabio Imperato fabio.imper...@dektech.com.au wrote: Hello all, I am trying to install Merlin and integrate it into Horizon as reported at https://github.com/stackforge/merlin, but I cannot log into the OpenStack dashboard because it no longer works. This occurs after I copy the pluggable config (_50_add_mistral_panel.py) for the Mistral panel into Horizon. If I remove it, the dashboard becomes operational again. Is it a known issue? Is there another installation guide or something like that, or some workarounds? Thanks in advance! -- Timur Sufiev
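For reference, the DATABASES setting described above might look roughly like this in a Horizon settings module (the sqlite file path below is an arbitrary example, not a required location; see the linked Django docs for the full set of options):

```python
# Hypothetical sketch of the temporary DATABASES setting Merlin needs.
# ENGINE selects Django's bundled sqlite3 backend; NAME is the DB file path.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/tmp/merlin-horizon.sqlite3',
    }
}
```

This would go alongside the other settings in openstack_dashboard.settings (or a local_settings override), and is only needed while Merlin stores workbooks in Django models.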
Re: [openstack-dev] [CI] gate wedged by tox = 2.0
On 5/14/2015 5:46 AM, Sean Dague wrote: On 05/14/2015 04:16 AM, Robert Collins wrote: Tox 2.0 just came out, and it isolates environment variables - which is good, except if you use them (which we do). So everything is broken. https://review.openstack.org/182966 should fix it until projects have had time to fix up their local tox.ini's to let through the needed variables. As an aside, it might be nice to get this specifier from global-requirements, so that it's managed in the same place as all our other specifiers. This will only apply to tempest jobs, and I see lots of tempest jobs passing without it. Do we have a bug with some failures linked because of it? If this is impacting unit tests, that has to be directly fixed there. -Sean python-novaclient, neutron and python-manilaclient are being tracked against bug https://bugs.launchpad.net/neutron/+bug/1455102. Heat is being tracked against bug https://bugs.launchpad.net/heat/+bug/1455065. -- Thanks, Matt Riedemann
Re: [openstack-dev] [all] setting the clock to remove pypy testing
Excerpts from Sean Dague's message of 2015-05-14 08:53:31 -0400: We've disabled all the pypy tests across OpenStack because they were failing, and after 48 hours no one was actually working on any fixes. They're thus effectively just burning nodes for no value. The original thread, for reference: http://lists.openstack.org/pipermail/openstack-dev/2015-May/063720.html It's not clear that there are any active contributors to OpenStack who find the pypy use case interesting enough to stay on top of it. A failure in this non-main path blocks a ton of projects from landing any code. I would recommend we set the following removal criteria for June 1st - 2 weeks out. +1 * the pypy jobs all need to be passing again * there are 2 champions who have come forward and will be active in #openstack-dev, #openstack-infra, and #openstack-qa and will commit to actively keeping an eye on such things. I wonder if we have a place yet where we are documenting champions like this. They aren't quite liaisons, so I'm not sure the CrossProjectLiaisons page in the wiki makes sense. We could list these on the QA page somewhere, but there are probably other themes for which we need champions, so maybe we should make a new Champions page and start making lists so it's easier to find someone to help with non-mainstream issues like this. I feel like we need 2 champions because we need a hot spare (people go on vacation, have other distractions; having only 1 person able to do a thing means the responsibility is really thrust back onto -infra and -qa folks). I'd expect these champions to be the ones who fix the current pypy issues. +1 to having 1 champion for anything like this I think the original theory of pypy is that we would rub cheetah blood on OpenStack and make it magically faster. But, as has been discussed in other threads: control plane services aren't being slowed down by python, they are being slowed down by other solvable architecture changes. 
Data plane services (like swift) aren't helped enough by pypy for performance-critical paths, and are thus looking into alternative languages for those parts. -Sean
Re: [openstack-dev] [ceilometer] proposal to add Chris Dent to Ceilometer core
On Thu, 14 May 2015, gordon chung wrote: welcome Dude! you'll really tie the room together. Thanks very much to everyone for the vote of confidence. I will try to be good and not disrupt the color scheme. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
+1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn't helpful. At the very least, it would be useful if discussions at the summit could result in a decent Wiki page on the subject. Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region, which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure a Horizon installation to log in to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to log in to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone. It wouldn't take a lot; the main points: * Implement the Regions API discussed back in the Havana time period - https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi-region models is problematic if we're serious about things like multi-region Heat. Thoughts? 
Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com wrote: I'm looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I've found, and it's pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names "X" and "Y", and each of those Keystones returns a service catalog with multiple regions ("A" and "B" for one; "P", "Q", and "R" for the other), what's Horizon going to do? Or rather, what's it expected to do? Yes, I'm being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I've added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won't support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py. Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. AVAILABLE_REGIONS is merely a list of keystone endpoints. They can be backed by a different IdP per endpoint and thus not share a token store. 
Or, they can just be keystone endpoints that are geographically different but backed by the same IdP, which may or may not share a token store. The funny thing is, for Horizon, it doesn't matter: they are all supported. But as one keystone endpoint may not know about another, unless nested, this has to be done with settings, as it's not typically discoverable. If you are asking about token sharing between keystones, which the thread you linked seems to indicate, then yes, you can have a synced token store. But that is an exercise left to the operator. I'd like to quickly go on record and say that a token store sync like this is not recommended. It is possible to work around this in Kilo with some limited data sync (resource, assignment) and the use of Fernet tokens. However, as you introduce higher latencies and WAN transit, this type of syncing becomes more and more prone to error. It would be possible to make DOA multi-keystone aware, but before we dive too far into this I'd like to get a clear view of what exactly the use
[openstack-dev] [ceilometer] skipping ceilometer meeting today
hi folks, since we don't seem to have any topics for this week and everyone is prepping (or is already at) the summit, we'll skip today's meeting. if there are any issues to discuss pre-summit, please post them to the list here or to #openstack-ceilometer. see you next week! cheers, gord
Re: [openstack-dev] [CI] gate wedged by tox = 2.0
On Thu, May 14, 2015 at 9:41 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 5/14/2015 5:46 AM, Sean Dague wrote: On 05/14/2015 04:16 AM, Robert Collins wrote: Tox 2.0 just came out, and it isolates environment variables - which is good, except if you use them (which we do). So everything is broken. https://review.openstack.org/182966 Should fix it until projects have had time to fix up their local tox.ini's to let through the needed variables. As an aside it might be nice to get this specifier from global-requirements, so that its managed in the same place as all our other specifiers. This will only apply to tempest jobs, and I see lots of tempest jobs passing without it. Do we have a bug with some failures linked because of it? If this is impacting unit tests, that has to be directly fixed there. -Sean python-novaclient, neutron and python-manilaclient are being tracked against bug https://bugs.launchpad.net/neutron/+bug/1455102. Heat is being tracked against bug https://bugs.launchpad.net/heat/+bug/1455065. -- Thanks, Matt Riedemann Here's the fix in keystoneclient if you need an example: https://review.openstack.org/#/c/182900/ It just added passenv = OS_* If you're seeing jobs pass without the workaround then those jobs are probably not running with tox >= 2.0. - Brant
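To make the keystoneclient fix concrete, the tox.ini change looks roughly like this (the OS_* pattern follows the fix referenced above; whether a given project needs additional variables, such as proxy settings, is an assumption to verify per project):

[testenv]
# tox >= 2.0 no longer passes the parent environment through by default;
# explicitly whitelist the variables the tests rely on.
passenv = OS_*

The passenv option accepts a space-separated list of names or glob patterns, so projects can add further entries on the same line as needed.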
Re: [openstack-dev] Add 'Ally Skills Workshop' to the sched.org Schedule.
Thierry Carrez wrote: Tony Breeds wrote: Hi All, Is there any chance we can get the Ally Skills Workshop added to the schedule (specifically the design summit schedule)? https://adainitiative.org/2015/04/register-now-ally-skills-workshop-at-openstack-summit-2015/ It'd be nice to see it on the schedule so I don't keep wondering why I'm not attending any of the interesting talks that afternoon. +1 The design summit schedule syncs general sessions from the master schedule. I'll ask about adding it there. So apparently the workshop has a max capacity of 40 people and folks have to register in advance - they can't just show up to the session without pre-registering. It would therefore be slightly counter-productive to add it to the schedule, although you should advise your friends to sign up and join! -- Thierry Carrez (ttx)
[openstack-dev] [monasca] [java]
The Monasca project currently has three major components written in Java: monasca-persister, monasca-thresh, and monasca-api. These components work with InfluxDB 0.9.0 and Vertica 7.1, and they integrate with Kafka and MySQL. The monasca team is currently bringing the Python versions of these components up to parity with their Java counterparts. This effort is being undertaken because there seems to be considerable friction in introducing Java components into the OpenStack community. At this point, I'd like to test the waters a bit and determine what the larger community's reaction to having these components remain in Java would be. Would there be general acceptance, or a visceral rejection? Is the issue more one of integration with the existing CI/CD architecture, or is it more of a cultural issue? The arguments for Java are non-trivial: Monasca has requirements for very high throughput, and integration with Kafka is better supported by Kafka's Java libraries. We've seen that Swift has introduced components in Go, so this looks like a precedent for allowing other languages where deemed appropriate. Before we spend many man-hours hacking on the Python components, it seems reasonable to determine if there really exists a reason to do so. I'm interested in soliciting any feedback from the community, be it pleasant or unpleasant. Thanks. — Deklan Dieterly Software Engineer HP
Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session
We will have someone from the Murano team also in this session. This topic is really hot for most of the app-level projects in OpenStack. Thanks Gosha On Thu, May 14, 2015 at 10:07 AM, Sergey Lukjanov slukja...@mirantis.com wrote: Hey, in Sahara we're looking at using Zaqar as a transport for our agent some day as well. Unfortunately this session overlaps with the Sahara sessions. On Thu, May 14, 2015 at 12:19 AM, Flavio Percoco fla...@redhat.com wrote: On 13/05/15 18:06 +, Fox, Kevin M wrote: Sahara also has the same problem, but worse, since they currently only support ssh with their agent, so it's either assign floating IPs to all nodes, or your Sahara controller must be on your one and only network node so it can tunnel. :/ Should we have a chat with them too? We've scheduled a discussion with Trove's team on Thursday at 5pm. It'd be great to have this discussion once and together to know what the common issues are and what things need to be done. I'll ping folks from both teams to invite them to this session. If they can't make it, I'm happy to use another working session slot. Cheers, Flavio http://libertydesignsummit.sched.org/event/59dc6ec910a732cdbf5970b6792e1cef#.VVRL0PYU9hE Thanks, Kevin From: Zane Bitter [zbit...@redhat.com] Sent: Wednesday, May 13, 2015 10:26 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session On 11/05/15 05:49, Flavio Percoco wrote: On 08/05/15 00:45 -0700, Nikhil Manchanda wrote: 3) The trove-guest-agent runs in the VM; it is connected to the taskmanager via RabbitMQ. We designed it that way, but is there an established practice for doing this? How do we make the VM connect to both the VM network and the management network? Most deployments of Trove that I am familiar with set up a separate RabbitMQ server in the cloud that is used by Trove. It is not recommended to use the same infrastructure RabbitMQ server for Trove, for security reasons.
Also, most deployments of Trove set up a private (Neutron) network that the RabbitMQ server and guests are connected to, and all RPC messages are sent over this network. We've discussed trove+zaqar in the past and I believe some folks from the Trove team have been in contact with Fei Long lately about this. Since one of the project's goals for this cycle is to provide support to other projects and contribute to adoption, I'm wondering if any members of the Trove team would be willing to participate in a Zaqar working session completely dedicated to this integration? +1 I learned from a concurrent thread ([Murano] [Mistral] SSH workflow action) that Murano are doing exactly the same thing with a separate RabbitMQ server to communicate with guest agents. It's a real waste of energy when multiple OpenStack projects all have to solve the same problem from scratch, so a single answer to this would be great. In that thread I suggested (and Murano developers agreed with) making the transport pluggable so that operators could choose Zaqar instead. I would strongly support doing the same here. +1 :) Flavio cheers, Zane. It'd be a great opportunity to figure out what's really needed, identify edge cases, and get some work done on this specific case.
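The pluggable-transport idea suggested above could look roughly like this stdlib-only sketch (all class and method names are hypothetical; real drivers would wrap oslo.messaging/RabbitMQ or the Zaqar client):

```python
import abc

class GuestTransport(abc.ABC):
    """Hypothetical pluggable transport between a control-plane service
    (e.g. a taskmanager) and its in-VM guest agents."""

    @abc.abstractmethod
    def send(self, guest_id, message): ...

    @abc.abstractmethod
    def receive(self, guest_id): ...

class InMemoryTransport(GuestTransport):
    """Test double; a RabbitMQ or Zaqar driver would implement the
    same interface, letting operators pick one in configuration."""

    def __init__(self):
        self._queues = {}

    def send(self, guest_id, message):
        self._queues.setdefault(guest_id, []).append(message)

    def receive(self, guest_id):
        return self._queues[guest_id].pop(0)

transport = InMemoryTransport()
transport.send("guest-1", {"method": "create_database", "args": {"name": "db1"}})
msg = transport.receive("guest-1")
print(msg["method"])  # create_database
```

The point of the abstraction is that Trove, Sahara, and Murano could share the interface while each deployment chooses the driver.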
Thanks, Flavio -- @flaper87 Flavio Percoco -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -- Georgy Okrokvertskhov
Re: [openstack-dev] [all] setting the clock to remove pypy testing
On 15 May 2015 at 00:53, Sean Dague s...@dague.net wrote: We've disabled all the pypy tests across OpenStack because it was failing, and after 48hrs no one was actually working on any fixes. It's thus effectively just burning nodes for no value. It's not clear that there are any active contributors to OpenStack that find the pypy use case interesting enough to stay on top of it. A failure in this non main path blocks a ton of projects from landing any code. I would recommend we set the following remove criteria for June 1st - 2 weeks out. * the pypy jobs all need to be passing again What about just the end-user ones (heatclient etc.); no servers? [see below] * there are 2 champions that have come forward that will be active in #openstack-dev, #openstack-infra, and #openstack-qa that will commit to actively keeping an eye on such things. I feel like we need 2 champions because we need a hot spare (people go on vacation, have other distractions, having only 1 person able to do a thing means the responsibility is really thrust back onto -infra and -qa folks). I'd expect these champions to be the ones that fix the current pypy issues. Agreed. I think the original theory of pypy is that we would rub cheetah blood on OpenStack and make it magically faster. But, as has been discussed in other threads: Yeah, that's not a reason for pypy (today). Though - pypy is faster than Go head-to-head in my previous benchmarking work, so I'd be interested in the details of the Swift Go comparison methodology, if performance is the key thing being looked for. CSP as a way of writing concurrent programs seems like a stronger argument to me... Anyhow, I hope we do find two champions for the client libraries, because I believe in enabling as many people as possible to use our clouds; and just like we have a PHP SDK, it would be great to know that our Python SDKs work on pypy too.
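For context, the pypy jobs under discussion are typically driven through a tox environment; a hedged sketch of what such an env looks like in a tox.ini (illustrative only, not any particular project's actual config):

```ini
[tox]
envlist = py27,pypy

[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt

[testenv:pypy]
# tox infers the interpreter from the env name "pypy";
# basepython can pin it explicitly if needed
basepython = pypy
```

A champion for the client-library jobs would mostly be keeping environments like this green as pip/virtualenv and the libraries evolve.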
-Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
Re: [openstack-dev] [chef] OpenStack-Chef Dev Meetup
On May 12, 2015, at 3:00 PM, JJ Asghar jasg...@chef.io wrote: I'd like to announce the OpenStack-Chef Dev Meetup in Vancouver. We have an etherpad[1] going with topics people would like to discuss. I haven't found a room or space for us yet, but when I do I'll comment back on this thread and add it to the etherpad. We do have a time, though, which is Wednesday 0950 to 1030, so please keep that in mind. As discussed in our IRC meeting on 2015-05-11, we are planning on using most if not all of the time to plan out our last mile to getting to stable/kilo. This is an opportunity, like we've had in the past, to all share a screen and make sure we get all the major points done or discussed. Hey everyone, it seems I need to take a step back because I got some wires crossed. It turns out the Ops meetup is our official meeting space during the Summit, which I put on the etherpad[2]. I'd like to keep this on topic per the agenda we've created on that etherpad. I think this is best: it gives the larger community of OpenStack+Chef users a chance to come together and discuss future plans and questions face to face in a real-time discussion. This brings up the topic of the Dev meetup, though. As the majority of the core members are going to be at the Summit, we still need a time and a place to discuss the stable/kilo branching. I propose we find a working space and just spend an hour or two together. I've created a doodle[3] with some time slots; if you could put in your ideal time, including your email address, we can organize off the mailing list. I'll @core this in the IRC channel too. I'm sorry for the confusion; I made some assumptions and hopefully haven't confused things more. Please don't hesitate to reach out to me personally with my contact info at the bottom, and I'm looking forward to collaborating with you in any and all capacities next week.
Best Regards, JJ Asghar c: 512.619.0722 t: @jjasghar e: j...@chef.io [1]: https://etherpad.openstack.org/p/YVR-dev-chef [2]: https://etherpad.openstack.org/p/YVR-ops-chef [3]: http://doodle.com/gb4ww6izbwg8k9di
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
On 5/14/15 11:58 AM, Doug Hellmann wrote: At one point we were exploring having both sqlalchemy-migrate and alembic run, one after the other, so that we only need to create new migrations with alembic and do not need to change any of the existing migrations. Was that idea dropped? To my knowledge the idea wasn't dropped. If a project wants to implement that using the oslo.db system, that is fine; however, from my POV I'd prefer to just port the sqlalchemy-migrate files over and drop the migrate dependency altogether. Whether or not a project does the "run both" step as an interim step doesn't affect that effort very much.
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
Excerpts from Mike Bayer's message of 2015-05-14 11:40:44 -0400: On 5/14/15 5:44 AM, John Garbutt wrote: On 14 May 2015 at 07:18, Manickam, Kanagaraj kanagaraj.manic...@hp.com wrote: Hi Nova team, This mail is regarding help required on the migration from the sqlalchemy-migrate tool to the alembic tool. Heat is currently using the sqlalchemy-migrate tool, and in the Liberty release we are investigating how to bring alembic into Heat. We found that Nova already tried the same (https://review.openstack.org/#/c/15196/) almost 2 years back, and in the Kilo release Nova is still using the sqlalchemy-migrate tool (https://github.com/openstack/nova/tree/master/nova/db/sqlalchemy/migrate_repo/versions). So we are assuming that, in Nova, you might have faced blockers bringing in alembic, and we would like to seek your recommendations/suggestions based on your experience with this. It will help us take the proper direction on using alembic in Heat. Could you kindly share it? We are currently looking at going in this direction (it's not merged yet, though): http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/online-schema-changes.html If we get that working, we wouldn't need to write or generate any migrations; we end up doing a diff between the desired state and the current database state, and split that into expand and contract phases. To support that approach, we are no longer doing any data migrations in our migration scripts; they are happening in a different way, online: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html#other-deployer-impact There are more details on how this is all planned to fit together here: http://docs.openstack.org/developer/nova/devref/upgrade.html Anyways, this is why we have paused the alembic vs sqlalchemy-migrate debate for the moment.
The online schema changes patch has been abandoned. I regret that I was not able to review the full nature of this spec in time to note some concerns I have, namely that Alembic does not plan on ever achieving 100% automation of migration generation; such a thing is not possible, and would in any case require a vast amount of development resources to constantly keep up with the ever-changing features and behaviors of all target databases. The online migration spec, AFAICT, does not offer any place for manual migrations to be added, and I think this will be a major problem. I have always stated that the decisions made by autogenerate should be manually reviewed and corrected, so I'd be very nervous about a system that uses autogenerate on the fly and sends those changes directly to a production database without any review. FWIW, in Nova I am planning to look into doing a simple one-shot port of the Nova migration scripts in nova/db/sqlalchemy/migrate_repo/versions to Alembic. The only large migration script in this repo is 216_havana.py, which consists of all Table defs that do not need any changes. There are 39 remaining scripts, all of which are extremely short and simple, and a direct manual conversion to Alembic style is most expedient for these. At one point we were exploring having both sqlalchemy-migrate and alembic run, one after the other, so that we only need to create new migrations with alembic and do not need to change any of the existing migrations. Was that idea dropped? Doug
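The "diff desired state against current state, split into expand and contract phases" idea described in this thread can be illustrated with a stdlib-only sketch (the function and schema format are hypothetical; a real tool diffs live SQLAlchemy metadata, handles types and indexes, and, per Mike's point, the result still needs manual review):

```python
def diff_schema(current, desired):
    """Split the difference between two {table: set(columns)} descriptions
    into an additive 'expand' phase and a destructive 'contract' phase.
    Hypothetical sketch of the idea in the online-schema-changes spec."""
    expand, contract = [], []
    for table, cols in desired.items():
        if table not in current:
            expand.append(("create_table", table))
            continue
        for col in sorted(cols - current[table]):
            expand.append(("add_column", table, col))
    for table, cols in current.items():
        if table not in desired:
            contract.append(("drop_table", table))
            continue
        for col in sorted(cols - desired[table]):
            contract.append(("drop_column", table, col))
    return expand, contract

current = {"instances": {"id", "name", "flavor"}}
desired = {"instances": {"id", "name", "flavor_id"}, "quotas": {"id"}}
expand, contract = diff_schema(current, desired)
print(expand)
print(contract)
```

The expand phase is safe to run while old code is still live; the contract phase only runs once nothing reads the dropped columns, which is what makes the two-phase split attractive for rolling upgrades.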
Re: [openstack-dev] [monasca] [java]
Thanks, Kevin. Performance is critical. At this point, we are trying to do 100K measurements per second. Yeah, Vertica is not open source. Monasca uses either Vertica or InfluxDB as the backend DB; you get to decide which you want. Zookeeper is used by Kafka for distributed synchronization and is very well regarded in the internet-applications realm. It looks like what you are saying is that the issue goes beyond just Java vs Python; it's an ops issue. There may be issues with supporting Kafka and InfluxDB. That's good feedback. Interestingly, at the 2014 Summit in Atlanta, some members of the community heavily lobbied for InfluxDB. We've seen perf problems with MySQL in the Ceilometer project and wanted to avoid that by using a scalable open source DB for the backend. -- Deklan Dieterly Software Engineer HP On 5/14/15, 10:34 AM, Fox, Kevin M kevin@pnnl.gov wrote: The open source version of Java is much better off than it used to be, so I'd say it's not out of the question any more. My preference is still Python whenever possible, since it tends to be much easier to debug/patch in the field. Performance-critical stuff is another matter. I would recommend very strongly considering it from the standpoint of which distros are willing to support it, and how much additional learning/operations work you are asking ops to perform, though. OpenStack already pushes an enormous amount of learning onto the ops folks. This will make or break the project. yum list | grep -i influxdb | wc -l 0 hmm... the rpm from the website looks very unusual... The distro folks won't support a package that looks like that. My gut reaction looking at it as an op is to wince and hope I don't have to install it. If I were to, I'd have to carefully pull it apart to figure out how to support it long term. Definitely not an rpm -Uvh and forget. Vertica doesn't look to be open source? Kafka: yet another messaging system... It might be needed, but it's yet another thing for ops to figure out how to deal with.
The quickstart says Kafka needs Zookeeper: now yet another dependency for an op to deal with. What does Zookeeper give that Pacemaker (already used in a lot of clouds) doesn't? I might like to deploy Monasca here some day, but it looks like it will take a large amount of work for me to do so, relative to all the other OpenStack components I want to install, so I probably can't for a while because of some of these design decisions. Thanks, Kevin From: Dieterly, Deklan [deklan.diete...@hp.com] Sent: Thursday, May 14, 2015 8:29 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [monasca] [java] [quoted original post trimmed; see the first message in this thread]
-- Deklan Dieterly Software Engineer HP
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
That's interesting, because I wasn't aware that "cloud" was part of the formal OpenStack taxonomy. Historically, we defined a region as a set of endpoints, supplied by an instance of Keystone. You seem to be saying that a cloud is a collection of regions configured in the same Keystone. [citation needed] Puzzled. Geoff On May 14, 2015, at 7:56 AM, Zane Bitter zbit...@redhat.com wrote: On 14/05/15 10:39, Geoff Arnold wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn't helpful. At the very least, it would be useful if discussions at the summit could result in a decent wiki page on the subject. The terminology I (and Heat) have always used is that regions are sets of endpoints configured in the same Keystone. Where you have a different Keystone auth URL, that is straight up a separate cloud, no matter how you slice it. The confusion here seems to be that Horizon is using the name AVAILABLE_REGIONS to denote available Keystone auth URLs - i.e. different clouds, not different regions at all. Looked at through that lens, things seem a bit easier to understand. Heat supports multi-region trees of stacks (i.e. you can create a nested stack in another region). Multi-cloud support has been considered, but afaik has not yet landed; figuring out where to store the credentials securely is tricky. cheers, Zane. Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region, which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints.
This means you can configure a Horizon installation to log in to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to log in to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone. It wouldn't take a lot; the main points: * Implement the Regions API discussed back in the Havana time period - https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi-region models is problematic if we're serious about things like multi-region Heat. Thoughts? Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com wrote: I'm looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I've found, and it's pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names "X" and "Y", and each of those Keystones returns a service catalog with multiple regions ("A" and "B" for one; "P", "Q", and "R" for the other), what's Horizon going to do? Or rather, what's it expected to do? Yes, I'm being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design.
Geoff PS I’ve added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won’t support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py. Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. AVAILABLE_REGIONS is merely a
Re: [openstack-dev] [api] Proposing Michael McCune as an API Working Group core
Top posting to make it official... Michael McCune (elmiko) is an API Working Group core! Cheers, Everett On May 11, 2015, at 8:57 PM, Ryan Brown rybr...@redhat.com wrote: On 05/11/2015 04:18 PM, Everett Toews wrote: I would like to propose Michael McCune (elmiko) as an API Working Group core. Not core, but +1 from me! -- Ryan Brown / Software Engineer, OpenStack / Red Hat, Inc.
[openstack-dev] Ally Skills Workshop at the OpenStack Summit - sign up or spread the word?
Hello everyone! I'm reaching out to you all on something I feel is important for our community to continue to grow. Eight months ago, the OpenStack community raised over $17,403 for the Ada Initiative to support women in open source software [1]. I thought this outpouring of support was an encouraging sign that our community supports this work. Now the Ada Initiative is teaching a (free) Ally Skills Workshop at the OpenStack Summit at 2pm on Monday May 18. The workshop teaches men to support women in open source. I thought you all might want to sign up if you're at the conference: https://adainitiative.org/2015/04/register-now-ally-skills-workshop-at-openstack-summit-2015/ People of all genders are more than welcome to attend! And if you think of someone who would like to attend this workshop, I would super appreciate you forwarding this to them. Thanks! [1] - http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html -- Mike Perez
Re: [openstack-dev] [all] setting the clock to remove pypy testing
On Thu, May 14, 2015, at 05:53 AM, Sean Dague wrote: We've disabled all the pypy tests across OpenStack because it was failing, and after 48hrs no one was actually working on any fixes. It's thus effectively just burning nodes for no value. I think my change to upgrade virtualenv on our test nodes should've fixed the PyPy jobs. Some spot checks earlier in the week seemed to confirm that it had done so. (It is possible we have a stray old image in a particular provider that needs updating, though.) It's not clear that there are any active contributors to OpenStack that find the pypy use case interesting enough to stay on top of it. A failure in this non main path blocks a ton of projects from landing any code. I would recommend we set the following remove criteria for June 1st - 2 weeks out. * the pypy jobs all need to be passing again * there are 2 champions that have come forward that will be active in #openstack-dev, #openstack-infra, and #openstack-qa that will commit to actively keeping an eye on such things. While I think I fixed it this time, I did so more because we needed newer pip for other reasons, and getting that required upgrading virtualenv. I did not do it because I have a specific interest in PyPy. I would likely be a bad champion for this cause, so am definitely not volunteering here :) Clark
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
+1 A wiki page laying out a mutually agreeable taxonomy seems like a good starting point. Geoff On May 14, 2015, at 7:47 AM, Anne Gentle annegen...@justwriteclick.com wrote: On Thu, May 14, 2015 at 9:39 AM, Geoff Arnold ge...@geoffarnold.com wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn't helpful. At the very least, it would be useful if discussions at the summit could result in a decent wiki page on the subject. We have a cross-project session and spec started about the service catalog: https://review.openstack.org/#/c/181393/ http://sched.co/3BL3 I hope more than a wiki page comes of it. :) Anne Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region, which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure a Horizon installation to log in to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to log in to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone.
It wouldn’t take a lot; the main points: * Implement the Regions API discussed back in the Havana time period - https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi-region models is problematic if we’re serious about things like multi-region Heat. Thoughts? Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com wrote: I’m looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I’ve found, and it’s pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names “X” and “Y”, and each of those Keystones returns a service catalog with multiple regions (“A” and “B” for one, “P”, “Q”, and “R” for the other), what’s Horizon going to do? Or rather, what’s it expected to do? Yes, I’m being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I’ve added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won’t support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py.
Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. AVAILABLE_REGIONS is merely a list of keystone endpoints. They can be backed by a different IdP per endpoint and thus not share a token store. Or, they can just be keystone endpoints that are geographically different but backed by the same IdP, which may or may not share a token store. The funny thing is, for Horizon, it doesn't matter. They are all supported. But as one keystone endpoint may not know about another, unless nested, this has to be done with settings as it's not typically discoverable. If you are asking about token sharing between keystones which the thread you linked seems to
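The catalog-driven region filtering described above can be sketched in a few lines (a hedged illustration, not Horizon's actual code; the catalog shape follows the usual Keystone convention of services carrying per-region endpoints, and the sample data is invented):

```python
def regions_from_catalog(catalog):
    """Collect the distinct region names present in a service catalog,
    the way the region selector described above derives its list."""
    regions = set()
    for service in catalog:
        for endpoint in service.get("endpoints", []):
            region = endpoint.get("region") or endpoint.get("region_id")
            if region:
                regions.add(region)
    return sorted(regions)

# Invented sample catalog for illustration only.
example_catalog = [
    {"type": "compute", "endpoints": [{"region": "A", "url": "http://nova.a/v2"},
                                      {"region": "B", "url": "http://nova.b/v2"}]},
    {"type": "identity", "endpoints": [{"region": "A", "url": "http://keystone.a/v3"}]},
]
print(regions_from_catalog(example_catalog))  # -> ['A', 'B']
```

The point of the sketch: the region list is purely a function of the one catalog returned with your token, which is why a second Keystone's regions can never show up here.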
[openstack-dev] [nova] Nested Quota Implementation in nova
Hi All, This is with regard to the Nested Quota Implementation in nova. The blueprint has been approved for Liberty (it was earlier approved for Kilo): https://review.openstack.org/#/c/160605 The complete code has been uploaded: https://review.openstack.org/#/c/149828/ There are two issues. 1. In nested quota, we are making the default quota values zero. This is because, in nested projects, the quota of a child project is limited by its parent's free quota, so there is no point in having finite default quota values for a project. If I make the default quota values zero, will it have any effect on modules in nova other than quota? Is there any dependency, at least in the functional tests? Actually many tests other than quota are failing and I am trying to figure it out. 2. In nova, we have context checking in files such as wsgi.py, sqlalchemy/db/api.py, quotas.py, quota_sets.py etc., out of which wsgi.py and api.py are common to many modules other than quota. In nested quota, we require that, to update the quota of a project, the token should be scoped to the parent of the project rather than to the project itself. This is because an increase in the quota limit of a child project results in a reduction of the free quota of the parent, so it makes sense to scope the token to the parent. But this may or may not be followed by modules other than quota. In that case, I think the context checking should be eliminated from files which are common. It will not create any security problem, since context checking is already done by the individual modules themselves. In fact, I feel context checking in common files is redundant. Kindly give me your opinion on these issues. Actually there is a session for quota at the summit.
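The parent/child constraint behind issue 1 can be sketched as follows (a hedged illustration with invented names, not the code under review): a child's limit may only grow by as much free quota as its parent has, which is also why a finite default child quota makes little sense.

```python
def parent_free_quota(parent_limit, allocated_to_children, parent_used):
    """Quota the parent still has available to hand out."""
    return parent_limit - allocated_to_children - parent_used

def can_raise_child_limit(parent_limit, allocated_to_children, parent_used,
                          child_old_limit, child_new_limit):
    """An increase in a child's limit consumes the parent's free quota,
    so the delta must fit inside it (function names are illustrative)."""
    delta = child_new_limit - child_old_limit
    return delta <= parent_free_quota(parent_limit, allocated_to_children,
                                      parent_used)

# With a default child quota of 0, a fresh child project consumes nothing
# until someone scoped to the parent explicitly grants it quota.
print(can_raise_child_limit(100, 40, 10, 0, 50))  # 50 <= 100-40-10 -> True
print(can_raise_child_limit(100, 40, 10, 0, 60))  # 60 > 50         -> False
```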
best regards, sajeesh
[openstack-dev] [openstack-operators][chef] OpenStack-Chef Official Ops Meetup
On May 12, 2015, at 3:00 PM, JJ Asghar jasg...@chef.io wrote: I’d like to announce the OpenStack-Chef Ops Meetup in Vancouver. We have an etherpad[1] going with topics people would like to discuss. I haven’t found a room or space for us yet, but when I do I’ll comment back on this thread and add it to the etherpad. (Right now, looking at the other declarations of time slots, I think Wednesday 1630 is going to be our best bet.) As with the Paris Ops Meetup, we had a lot of great questions and use cases discussed, but it was very unstructured and I got some negative feedback about that, hence the etherpad. Hey Everyone, so it seems I need to take a step back because I got some wires crossed. Turns out the Ops meetup is our official meeting space during the Summit, which I put on the etherpad[1]. I’d like to keep this on topic per the agenda we’ve created on that etherpad. I think this is best; it gives the larger community of OpenStack+Chef users a chance to come together and discuss future plans and questions face to face, in a real-time discussion. Just so we have it documented in this email: the meetup is in Room 217 on Wednesday at 0950am[2]. Though this brings up the topic of the Dev meetup. As the majority of the core members are going to be at the Summit, we still need a time and a place for us to discuss the stable/kilo branching. I propose we find a working space and just spend an hour or two together. I’ve created a doodle[3] with some time slots; if you could put your ideal time in, including your email address, we can organize off the mailing list. I’ll @core this in the IRC channel too. This is also open to the community as a whole, so if you’d like to post your ideal time don’t hesitate to. If I can ask for a volunteer note-taker to step up and directly contact me, so when we start the meeting we can just jump in, that would be amazing. I can sweeten the deal if you step up with some Chef Swag too ;).
Just to follow up on this, I haven’t had anyone step up yet, so if you were thinking about it, please don’t hesitate to reach out. [1]: https://etherpad.openstack.org/p/YVR-ops-chef [2]: http://sched.co/3D8a [3]: http://doodle.com/gb4ww6izbwg8k9di
Re: [openstack-dev] Barbican : Unable to execute the curl command for uploading/retrieving the secrets with the latest Barbican code.
Hi all, We are able to execute the curl commands on the new Barbican code provided we integrate it with Keystone. I ran into this issue because I was trying to configure localhost to the actual IP on a plain Barbican server, so that I would get the response and request objects with the actual IP rather than localhost. This configuration was required for setting up HAProxy for Barbican. And then I thought of integrating with Keystone and configuring the Barbican server for HTTPS. It's good to know that the latest code drop of Barbican enforces authentication with Keystone: unlike the previous Barbican versions, it will not allow us to execute the curl command without providing an Identity service (Keystone) token in the request. Please find the curl command requests and responses for uploading/retrieving the secrets on the Barbican server: [root@Clientfor-HAProxy barbican]# curl -X POST -H 'content-type:application/json' -H 'X-Project-Id:12345' -H 'X-Auth-Token:c9ac81784e1e4e089fccbca19f862be2' -d '{"payload": "my-secret-here", "payload_content_type": "text/plain"}' -k https://localhost:9311/v1/secrets {"secret_ref": "https://localhost:9311/v1/secrets/02336016-623b-4deb-bca5-caedc0bf0e35"} [root@Clientfor-HAProxy barbican]# curl -H 'Accept: application/json' -H 'X-Project-Id:12345' -H 'X-Auth-Token:c9ac81784e1e4e089fccbca19f862be2' -k https://localhost:9311/v1/secrets {"secrets": [{"status": "ACTIVE", "secret_type": "opaque", "updated": "2015-05-14T16:35:44.109536", "name": null, "algorithm": null, "created": "2015-05-14T16:35:44.103982", "secret_ref": "https://localhost:9311/v1/secrets/02336016-623b-4deb-bca5-caedc0bf0e35", "content_types": {"default": "text/plain"}, "creator_id": "cedd848a8a9e410196793c601c03b99a", "mode": null, "bit_length": null, "expiration": null}], "total": 1} [root@Clientfor-HAProxy barbican]# Thanks and Regards, Asha Seshagiri On Wed, May 13, 2015 at 4:26 PM, Asha Seshagiri asha.seshag...@gmail.com wrote: Hi all, When I started
debugging, we found that the DEFAULT group is not used; instead the oslo_policy group would be used. Please find the logs below: 2015-05-13 15:59:34.393 13210 WARNING oslo_config.cfg [-] Option policy_default_rule from group DEFAULT is deprecated. Use option policy_default_rule from group oslo_policy. 2015-05-13 15:59:34.394 13210 WARNING oslo_config.cfg [-] Option policy_file from group DEFAULT is deprecated. Use option policy_file from group oslo_policy. 2015-05-13 15:59:34.395 13210 DEBUG oslo_policy.openstack.common.fileutils [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Reloading cached file /etc/barbican/policy.json read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/openstack/common/fileutils.py:64 2015-05-13 15:59:34.398 13210 DEBUG oslo_policy.policy [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Reloaded policy file: /etc/barbican/policy.json _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:424 2015-05-13 15:59:34.399 13210 ERROR barbican.api.controllers [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Secret creation attempt not allowed - please review your user/project privileges 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers Traceback (most recent call last): 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 104, in handler 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers return fn(inst, *args, **kwargs) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 85, in enforcer 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers _do_enforce_rbac(inst, pecan.request, action_name, ctx, **kwargs) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 68, in _do_enforce_rbac 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers credentials, do_raise=True)
2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /usr/lib/python2.7/site-packages/oslo_policy/policy.py, line 493, in enforce 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers raise PolicyNotAuthorized(rule, target, creds) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers PolicyNotAuthorized: secrets:post on {u'payload': u'my-secret-here', u'payload_content_type': u'text/plain'} by {'project': '12345', 'user': None, 'roles': []} disallowed by policy 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers 2015-05-13 15:59:34.400 13210 INFO barbican.api.middleware.context [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] req-556e8733-aea2-4acf-ac8b-30bc671a6f22 | Processed request: 403 Forbidden - POST http://localhost:9311/v1/secrets {address space usage: 364666880 bytes/347MB} {rss usage: 65622016 bytes/62MB} [pid: 13210|app: 0|req: 1/1] 127.0.0.1 () {30 vars
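The deprecation warnings in the log above point at the likely fix: the policy options moved from the [DEFAULT] group to [oslo_policy]. A sketch of the non-deprecated form in barbican.conf (the file path is the one from the log; the default rule value is oslo.policy's usual default and is an assumption to verify against your deployment):

```ini
# barbican.conf -- move the policy options out of [DEFAULT]
# into the [oslo_policy] group to silence the deprecation warnings.
[oslo_policy]
policy_file = /etc/barbican/policy.json
policy_default_rule = default
```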
Re: [openstack-dev] [qa] tempest is broken on stable/kilo
On Thu, May 14, 2015 at 06:00:01PM +0400, Evgeny Antyshev wrote: Hello, We faced the following problem when running tempest on the stable/kilo branch: tempest requires tempest-lib>=0.5.0, while global requirements from stable/kilo have tempest-lib==0.4.0 (the common/ssh module was moved from tempest to tempest-lib in 0.5.0). It looks like this problem was introduced by the change: https://review.openstack.org/#/c/176039/ Please clarify if tempest is supposed to work with stable/kilo or not. Tempest does not have a stable branch like other openstack projects. For more info about that see: http://specs.openstack.org/openstack/qa-specs/specs/implemented/branchless-tempest.html and http://docs.openstack.org/developer/tempest/HACKING.html#branchless-tempest-considerations Tempest will run fine against a stable/kilo cloud; however, if you're trying to install everything on a single box there will be some complexities, as you've encountered. The only way to install tempest with an all-in-one OpenStack deployment on the same node is to make sure you venv-isolate tempest. This is actually what we do for the stable/kilo jobs in the gate: we don't try to system-install tempest, but instead just use tox to create a venv and run the tempest tests from inside the venv. This is because, as you pointed out, tempest's requirements change with master global requirements, which will conflict with the stable branches. -Matt Treinish
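Matt's venv-isolation suggestion can be sketched like this (illustrative commands, not the gate's actual job definition; in the gate, tox creates and manages the venv for you):

```shell
# Keep tempest's requirements (tempest-lib>=0.5.0, tracked against master
# global-requirements) out of the stable/kilo system site-packages by
# giving tempest a virtualenv of its own.
WORKDIR=$(mktemp -d)
python3 -m venv --without-pip "$WORKDIR/tempest-venv"
# In a real run you would bootstrap pip and install tempest into the venv
# (or just run `tox -efull` from a tempest checkout); commented out here:
#   "$WORKDIR/tempest-venv/bin/pip" install tempest
"$WORKDIR/tempest-venv/bin/python" --version
```

Whatever you install into this venv can drift with master requirements without ever touching the stable/kilo packages on the node.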
[openstack-dev] [QA] Question about Heat Tempest tests
Hello everyone, I have a question about Heat Tempest tests. Is there any dsvm job that runs these tests? At first glance no dsvm job runs them. Thank you! Regards, Yaroslav Lobankov.
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
On Thu, May 14, 2015 at 9:39 AM, Geoff Arnold ge...@geoffarnold.com wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn’t helpful. At the very least, it would be useful if discussions at the summit could result in a decent Wiki page on the subject. We have a cross-project session and spec started about the service catalog: https://review.openstack.org/#/c/181393/ http://sched.co/3BL3 I hope more than a wiki page comes of it. :) Anne Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure an Horizon installation to login to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to login to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone. It wouldn’t take a lot; the main points: * Implement the Regions API discussed back in the Havana time period - https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi region models is problematic if we’re serious about things like multi-region Heat. Thoughts? 
Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com wrote: I’m looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I’ve found, and it’s pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names “X” and “Y”, and each of those Keystones returns a service catalog with multiple regions (“A” and “B” for one, “P”, “Q”, and “R” for the other), what’s Horizon going to do? Or rather, what’s it expected to do? Yes, I’m being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I’ve added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won’t support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py. Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. AVAILABLE_REGIONS is merely a list of keystone endpoints. They can be backed by a different IdP per endpoint and thus not share a token store. Or, they can just be keystone endpoints that are geographically different but backed by the same IdP, which may or may not share a token store. The funny thing is, for Horizon, it doesn't matter.
They are all supported. But as one keystone endpoint may not know about another, unless nested, this has to be done with settings as it's not typically discoverable. If you are asking about token sharing between keystones which the thread you linked seems to indicate. Then yes, you can have a synced token store. But that is an exercise left to the operator. I'd like to quickly go on record and say that a token store sync like this is not recommended. It is possible to work around this in Kilo with some limited data sync (resource, assignment) and the use of Fernet tokens. However, as you introduce higher latencies and WAN transit this type of syncing becomes more and more prone to error. It would be possible to make DOA multi keystone aware, but before we dive too far into this I'd like to get a clear view of what exactly the use case (and goal is); let's do this at the summit (since it is happening soon). Having a clear view will make this easier
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
On 14/05/15 10:39, Geoff Arnold wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn’t helpful. At the very least, it would be useful if discussions at the summit could result in a decent Wiki page on the subject. The terminology I (and Heat) have always used is that regions are sets of endpoints configured in the same Keystone. Where you have a different Keystone auth URL, that is straight up a separate cloud, no matter how you slice it. The confusion here seems to be that Horizon is using the name AVAILABLE_REGIONS to denote available Keystone auth URLs - i.e. different clouds, not different regions at all. Looked at through that lens, things seem a bit easier to understand. Heat supports multi-region trees of stacks (i.e. you can create a nested stack in another region). Multi-cloud support has been considered, but afaik has not yet landed. Figuring out where to store the credentials securely is tricky. cheers, Zane. Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure an Horizon installation to login to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to login to multiple independent OpenStack installations?
Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone. It wouldn’t take a lot; the main points: * Implement the Regions API discussed back in the Havana time period - https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi-region models is problematic if we’re serious about things like multi-region Heat. Thoughts? Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com wrote: I’m looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I’ve found, and it’s pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names “X” and “Y”, and each of those Keystones returns a service catalog with multiple regions (“A” and “B” for one, “P”, “Q”, and “R” for the other), what’s Horizon going to do? Or rather, what’s it expected to do? Yes, I’m being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I’ve added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won’t support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py.
Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. AVAILABLE_REGIONS is merely a list of keystone endpoints. They can be backed by a different IdP per endpoint and thus not share a token store. Or, they can just be keystone endpoints that are geographically different but backed by the same IdP, which may or may not share a token store. The funny thing is, for Horizon, it doesn't matter. They are all supported. But as one keystone endpoint may not know about another, unless nested, this has to be done with settings as it's not
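In Zane's terminology, a multi-region tree of stacks can be expressed directly in HOT: the OS::Heat::Stack resource takes a context with a region_name. A hedged sketch (the nested template file and region name are invented; check the property names against your Heat release):

```yaml
heat_template_version: 2014-10-16
# Parent stack in one region creating a nested stack in another --
# both regions registered in the same Keystone, per Zane's definition.
resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionB   # a different region, same cloud
      template: { get_file: nested.yaml }
```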
Re: [openstack-dev] [nova][neutron]Fail to communicate new host when the first host for a new instance fails
On 14/05/15 16:04, Brian Haley wrote: On 05/14/2015 05:29 AM, Neil Jerram wrote: Hi all, this is about a problem I'm seeing with my Neutron ML2 mechanism driver [1]. I'm expecting to see an update_port_postcommit call to signal that the binding:host_id for a port is changing, but I don't see that. The scenario is launching a new instance in a cluster with two compute hosts, where we've rigged things so that one of the compute hosts will always be chosen first, but libvirt isn't correctly configured there and hence the instance launching attempt will fail. Then Nova tries to use the other compute host instead, and that mostly works - except that my mechanism driver still thinks that the new instance's port is still bound to the first compute host. Is anyone aware of a known problem in this area (in Juno-level code), or where I could start pinning this down in more detail? We saw something like this before, perhaps: https://review.openstack.org/#/c/98340/ https://bugs.launchpad.net/nova/+bug/1327124 It's only fixed in Kilo, if that's it. Thanks so much, Brian. This does indeed look like the right fix for the problem that I'm seeing. I'll report back once we've tested further here. Regards, Neil
Re: [openstack-dev] [monasca] [java]
The open source version of java is much better off than it used to be, so I'd say it's not out of the question any more. My preference is still python whenever possible since it tends to be much easier to debug/patch in the field. Performance critical stuff is another matter. I would recommend very strongly considering it from the standpoint of what distros are willing to support, and how much additional learning/operations work you are asking of ops to perform though. OpenStack already pushes an enormous amount of learning onto the ops folks. This will make or break the project. yum list | grep -i influxdb | wc -l returns 0. hmm... the rpm from the website looks very unusual... The distro folks won't support a package that looks like that. My gut reaction looking at it as an op is to wince and hope I don't have to install it. If I were, I'd have to carefully pull it apart to figure out how to support it long term. Definitely not an rpm -Uvh and forget. Vertica doesn't look to be Open Source? Kafka: yet another messaging system... It might be needed, but it's yet another thing for ops to figure out how to deal with. The quickstart says Kafka needs Zookeeper. Now yet another dependency for an op to deal with. What does ZooKeeper give that Pacemaker (already used in a lot of clouds) doesn't? I might like to deploy Monasca here some day, but it looks like it will take a large amount of work for me to do so, relative to all the other OpenStack components I want to install, so I probably can't for a while because of some of these design decisions. Thanks, Kevin From: Dieterly, Deklan [deklan.diete...@hp.com] Sent: Thursday, May 14, 2015 8:29 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [monasca] [java] The Monasca project currently has three major components written in Java. Monasca-persister, monasca-thresh, and monasca-api. These components work with Influxdb 0.9.0 and Vertica 7.1. They integrate with Kafka and MySQL.
The monasca team is currently bringing the Python versions of these components up to parity with their Java counterparts. This effort is being undertaken because there seems to be considerable friction in introducing Java components into the OpenStack community. At this point, I'd like to test the waters a bit and determine what the larger community’s reaction to having these components remain in Java would be. Would there be a general acceptance or would there be a visceral rejection? Is the issue more of integration with existing CI/CD architecture or is there more of a cultural issue? The arguments for Java are non-trivial. Monasca has requirements for very high throughput. Furthermore, integration with Kafka is better supported with Kafka's Java libraries. We’ve seen that Swift has introduced components in Go. So, this looks like a precedent for allowing other languages where deemed appropriate. Before we spend many man-hours hacking on the Python components, it seems reasonable to determine if there really exists a reason to do so. I’m interested in soliciting any feedback from the community, be it pleasant or unpleasant. Thanks. — Deklan Dieterly Software Engineer HP
Re: [openstack-dev] [nova][cinder][neutron][security] Rootwrap discussions at OSSG mid-cycle
Brant, I started work to use rootwrap as a daemon in Nova, fyi: https://review.openstack.org/#/c/180695/ Don't know if this will help -- dims On Thu, May 14, 2015 at 11:55 AM, Brant Knudson b...@acm.org wrote: On Thu, May 14, 2015 at 2:48 AM, Angus Lees g...@inodes.org wrote: On Wed, 13 May 2015 at 02:16 Thierry Carrez thie...@openstack.org wrote: Lucas Fisher wrote: We spent some time at the OSSG mid-cycle meet-up this week discussing rootwrap, looking at the existing code, and considering some of the mailing list discussions. Summary of our discussions: https://github.com/hyakuhei/OSSG-Security-Practices/blob/master/ossg_rootwrap.md The one line summary is we like the idea of a privileged daemon with higher level interfaces to the commands being run. It has a number of advantages such as easier to audit, enables better input sanitization, cleaner interfaces, and easier to take advantage of Linux capabilities, SELinux, AppArmour, etc. The write-up has some more details. For those interested in that topic and willing to work on the next stage, we'll have a work session on the future of rootwrap in the Oslo track at the Design Summit in Vancouver: http://sched.co/3B2B Fwiw, I've continued work on my privsep proposal(*) and how it interacts with existing rootwrap. I look forward to discussing it and alternatives at the session. (*) https://review.openstack.org/#/c/155631 - Gus As part of the OSSG work, I started prototyping changes in Nova where the goal is to 1) Get all the code that's calling rootwrap into one place so that it's easy to find, and get tests for this code. 2) Next (or even in step 1 if it's easy enough), tighten the interfaces, so that rather than providing a function to do 'chmod %s %s' it would only allow whatever chmod nova actually has to do, maybe passing in a server ID rather than a bare file name. With this, we should be able to tighten up the rootwrap filters in the same way, or switch to privsep or whatever we decide to do in the future.
So maybe it looks like rearranging deckchairs on the Titanic, but in this case the deckchairs are blocking the emergency exits. I didn't get too far into it to even see how viable the approach is since I was working on other things. I'll put this session on my calendar. - Brant -- Davanum Srinivas :: https://twitter.com/dims
[openstack-dev] [OpenStackClient] Meeting cancellation this week and next
We will not have the usual OpenStackClient meeting today at 19:00 UTC, and of course will be doing other things at the Design Summit next week. Regular meetings will resume 28 May 2015. https://wiki.openstack.org/wiki/Meetings/OpenStackClient Thanks dt -- Dean Troyer dtro...@gmail.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [openstack-operators][Rally][announce] What's new in Rally v0.0.4
Mike, Thank you for the release notes! Nice work! Best regards, Boris Pavlovic On Thu, May 14, 2015 at 5:48 PM, Mikhail Dubov mdu...@mirantis.com wrote: Hi everyone, Rally team is happy to announce that we have just cut the new release 0.0.4! *Release stats:* - Commits: *87* - Bug fixes: *21* - New scenarios: *14* - New contexts: *2* - New SLA: *1* - Dev cycle: *30 days* - Release date: *14/May/2015* *New features:* - *Rally can now generate load with users that already exist. *This makes it possible to use Rally for benchmarking OpenStack clouds that use LDAP, AD or any other read-only keystone backend where it is not possible to create users dynamically. - *New decorator **@osclients.Clients.register. *This decorator adds new OpenStack clients at runtime. The added client will be available from *osclients.Clients* at the module level and cached. - *Improved installation script.* The installation script for Rally can now be run by an unprivileged user, supports different database types, allows specifying a custom python binary, automatically installs needed software if run as root, etc. For more details, take a look at the *Release notes for 0.0.4* https://rally.readthedocs.org/en/latest/release_notes/latest.html. Best regards, Mikhail Dubov Engineering OPS Mirantis, Inc. E-Mail: mdu...@mirantis.com Skype: msdubov __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
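The `@osclients.Clients.register` decorator mentioned in the release notes can be pictured with a small sketch like the one below. This is not Rally's actual implementation — the class, the caching details, and the `fakeclient` name are all illustrative — but it shows the general shape of registering a new lazily-built, per-instance-cached client attribute at runtime:

```python
# Illustrative registry sketch (not Rally's real code): a classmethod
# decorator that attaches a new cached client factory to the Clients class.
class Clients(object):
    def __init__(self):
        self._cache = {}

    @classmethod
    def register(cls, name):
        def decorator(factory):
            def getter(self):
                # Build the client on first access, then reuse it.
                if name not in self._cache:
                    self._cache[name] = factory(self)
                return self._cache[name]
            setattr(cls, name, property(getter))
            return factory
        return decorator

@Clients.register("fakeclient")
def make_fakeclient(clients):
    # Stand-in for constructing a real OpenStack client object.
    return {"endpoint": "http://example.com"}
```

After registration, `Clients().fakeclient` returns the same cached object on every access for a given `Clients` instance.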
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
On Thursday, May 14, 2015, Anne Gentle annegen...@justwriteclick.com wrote: On Thu, May 14, 2015 at 9:39 AM, Geoff Arnold ge...@geoffarnold.com wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn’t helpful. At the very least, it would be useful if discussions at the summit could result in a decent Wiki page on the subject. We have a cross-project session and spec started about the service catalog: https://review.openstack.org/#/c/181393/ http://sched.co/3BL3 I hope more than a wiki page comes of it. :) Anne I, for one, am already planning on being there :) Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region, which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure a Horizon installation to log in to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to log in to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone.
It wouldn’t take a lot; the main points: * Implement the Regions API discussed back in the Havana time period - https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi-region models is problematic if we’re serious about things like multi-region Heat. Thoughts? Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com wrote: I’m looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I’ve found, and it’s pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names “X” and “Y”, and each of those Keystones returns a service catalog with multiple regions (“A” and “B” for one, “P”, “Q”, and “R” for the other), what’s Horizon going to do? Or rather, what’s it expected to do? Yes, I’m being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I’ve added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won’t support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py.
Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. AVAILABLE_REGIONS is merely a list of keystone endpoints. They can be backed by a different IdP per endpoint and thus not share a token store. Or, they can just be keystone endpoints that are geographically different but backed by the same IdP, which may or may not share a token store. The funny thing is, for Horizon, it doesn't matter. They are all supported. But as one keystone endpoint may not know about another, unless nested, this has to be done with settings as it's not typically discoverable. If you are asking about token sharing between keystones, which the thread you linked seems to indicate, then yes, you can have a synced token store. But that is an exercise left to the operator. I'd like to quickly go on record and say that a token store sync like this is not recommended. It is possible to work around this in Kilo with some
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
On 5/14/15 5:44 AM, John Garbutt wrote: On 14 May 2015 at 07:18, Manickam, Kanagaraj kanagaraj.manic...@hp.com wrote: Hi Nova team, This mail is regarding help required on the migration from the sqlalchemy-migrate tool to alembic. Heat is currently using the sqlalchemy-migrate tool, and in the Liberty release we are investigating how to bring alembic into heat. We found that nova has already tried the same (https://review.openstack.org/#/c/15196/ ) almost 2 years back, and as of the Kilo release, nova is still using the sqlalchemy-migrate tool. (https://github.com/openstack/nova/tree/master/nova/db/sqlalchemy/migrate_repo/versions) So we are assuming that, in nova, you might have faced blockers to bringing in alembic. We would like to seek your recommendations/suggestions based on your experience with this; it will help us take the proper direction on using alembic in heat. Could you kindly share it. We are currently looking at going this direction (it's not merged yet though): http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/online-schema-changes.html If we get that working, we wouldn't need to write or generate any migrations; we end up doing a diff between the desired state and the current database state, and split that into expand and contract phases. To support that approach, we are no longer doing any data migrations in our migration scripts; they are happening in a different way, online: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html#other-deployer-impact There are more details on how this is all planned to fit together here: http://docs.openstack.org/developer/nova/devref/upgrade.html Anyway, this is why we have paused the alembic vs sqlalchemy-migrate debate for the moment.
The online schema changes patch has been abandoned. I regret that I was not able to review the full nature of this spec in time to note some concerns I have, namely that Alembic does not plan on ever achieving 100% automation of migration generation; such a thing is not possible, and would in any case require a vast amount of development resources to constantly keep up with the ever-changing features and behaviors of all target databases. The online migration spec, AFAICT, does not offer any place for manual migrations to be added, and I think this will be a major problem. I have always stated that the decisions made by autogenerate should be manually reviewed and corrected, so I'd be very nervous about a system that uses autogenerate on the fly and sends those changes directly to a production database without any review. FWIW, in Nova I am planning to look into doing a simple one-shot conversion of the Nova migration scripts in nova/db/sqlalchemy/migrate_repo/versions to Alembic. The only large migration script in this repo is 216_havana.py, which consists of all Table defs that do not need any changes. There are 39 remaining scripts, and all of these are extremely short and simple; a direct manual conversion to Alembic style is most expedient for these. The more intellectual part of the exercise is to replace the use of sqlalchemy-migrate APIs in Nova's installation scripts and test harnesses with Alembic as well. At that point Nova will be on a straight Alembic install that will at first behave similarly to the Migrate one. Assuming I can get that done in the next few months, the next step would be that the migration streams can be broken into branches, e.g. juno, kilo, liberty, etc., so that we can easily add new migration files that can be backported in place to a target release.
Additionally, the issue of end-user migrations that are divergent from what is part of Nova itself should also be addressed using additional branches; if a customer wants to make their own changes to a database, such as adding extra indexes, this can be done in their own user-specific branch that will integrate with the primary migration stream. That would eliminate the need for a system that is tasked with diffing live database schemas and then executing decisions directly without any chance to review or version-control the changes that are being made. Does that help? Thanks, John __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
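At their core, both sqlalchemy-migrate and Alembic do the same thing the thread is discussing: apply an ordered chain of schema-change scripts and record the current version in the database. The toy runner below illustrates just that core idea using only stdlib sqlite3 — the script names, DDL, and single-table version bookkeeping are invented for the example, and real tools add downgrades, branching, autogenerate, and more:

```python
import sqlite3

# Toy illustration of a linear migration chain with a recorded version.
# Script names and DDL are made up; this is not nova's or heat's schema.
MIGRATIONS = [
    ("0001_create_instances",
     "CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)"),
    ("0002_add_flavor_column",
     "ALTER TABLE instances ADD COLUMN flavor TEXT"),
]

def upgrade(conn):
    """Apply any migrations newer than the recorded version, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (ver TEXT)")
    row = conn.execute("SELECT ver FROM schema_version").fetchone()
    current = row[0] if row else None
    applied = []
    past_current = current is None  # no version recorded: apply everything
    for ver, ddl in MIGRATIONS:
        if past_current:
            conn.execute(ddl)
            applied.append(ver)
        if ver == current:
            past_current = True
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version VALUES (?)", (MIGRATIONS[-1][0],))
    return applied
```

Running `upgrade` a second time applies nothing, since the recorded version already matches the latest script.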
[openstack-dev] [openstack-operators][Rally][announce] What's new in Rally v0.0.4
Hi everyone, Rally team is happy to announce that we have just cut the new release 0.0.4! *Release stats:* - Commits: *87* - Bug fixes: *21* - New scenarios: *14* - New contexts: *2* - New SLA: *1* - Dev cycle: *30 days* - Release date: *14/May/2015* *New features:* - *Rally can now generate load with users that already exist. *This makes it possible to use Rally for benchmarking OpenStack clouds that use LDAP, AD or any other read-only keystone backend where it is not possible to create users dynamically. - *New decorator **@osclients.Clients.register. *This decorator adds new OpenStack clients at runtime. The added client will be available from *osclients.Clients* at the module level and cached. - *Improved installation script.* The installation script for Rally can now be run by an unprivileged user, supports different database types, allows specifying a custom python binary, automatically installs needed software if run as root, etc. For more details, take a look at the *Release notes for 0.0.4* https://rally.readthedocs.org/en/latest/release_notes/latest.html. Best regards, Mikhail Dubov Engineering OPS Mirantis, Inc. E-Mail: mdu...@mirantis.com Skype: msdubov __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] tempest is broken on stable/kilo
Hello, We faced the following problem when running tempest on the stable/kilo branch: tempest requires tempest-lib>=0.5.0, while global requirements for stable/kilo pin tempest-lib==0.4.0 (the common/ssh module was moved from tempest to tempest-lib in 0.5.0). It looks like this problem was introduced by this change: https://review.openstack.org/#/c/176039/ Please clarify whether tempest is supposed to work with stable/kilo or not. -- Best regards, Evgeny Antyshev. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
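The conflict described above can be checked mechanically, assuming setuptools' `pkg_resources` is available (version numbers here are the ones from the report):

```python
from pkg_resources import Requirement

# Does the version pinned by stable/kilo global requirements satisfy the
# requirement tempest declares? Requirement objects support membership
# tests against version strings.
tempest_needs = Requirement.parse("tempest-lib>=0.5.0")

print("0.4.0" in tempest_needs)  # the stable/kilo pin
print("0.5.0" in tempest_needs)  # the version tempest actually needs
```

The first check fails, which is exactly the pip resolution conflict hit when installing tempest against the stable/kilo constraints.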
Re: [openstack-dev] [nova][neutron]Fail to communicate new host when the first host for a new instance fails
On 05/14/2015 05:29 AM, Neil Jerram wrote: Hi all, this is about a problem I'm seeing with my Neutron ML2 mechanism driver [1]. I'm expecting to see an update_port_postcommit call to signal that the binding:host_id for a port is changing, but I don't see that. The scenario is launching a new instance in a cluster with two compute hosts, where we've rigged things so that one of the compute hosts will always be chosen first, but libvirt isn't correctly configured there and hence the instance launching attempt will fail. Then Nova tries to use the other compute host instead, and that mostly works - except that my mechanism driver still thinks that the new instance's port is bound to the first compute host. Is anyone aware of a known problem in this area (in Juno-level code), or of where I should start pinning this down in more detail? We saw something like this before, perhaps: https://review.openstack.org/#/c/98340/ https://bugs.launchpad.net/nova/+bug/1327124 It's only fixed in Kilo, if that's it. -Brian __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
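For reference, the pattern a mechanism driver uses to notice the host change discussed above is to compare `context.current` against `context.original` in `update_port_postcommit` (those two attributes are part of the real ML2 driver API; everything else in this sketch — the class, the reprogramming hook — is invented for illustration):

```python
# Minimal sketch of an ML2 mechanism driver reacting to a port being
# rebound to a different compute host. context.current / context.original
# mirror neutron's PortContext; _reprogram_dataplane is a made-up hook.
class MyMechanismDriver(object):
    last_move = None

    def update_port_postcommit(self, context):
        new_host = context.current.get("binding:host_id")
        old_host = context.original.get("binding:host_id")
        if new_host != old_host:
            # The port moved (or was bound for the first time):
            # tear down state for old_host, program new_host.
            self._reprogram_dataplane(context.current["id"],
                                      old_host, new_host)

    def _reprogram_dataplane(self, port_id, old_host, new_host):
        self.last_move = (port_id, old_host, new_host)  # stand-in for real work
```

The bug under discussion was that, in the failure-and-reschedule scenario, this postcommit call with a changed `binding:host_id` was never delivered to the driver.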
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
On 5/14/15 11:40 AM, Mike Bayer wrote: Assuming I can get that done in the next few months, the next step would be that the migration streams can be broken into branches, e.g. juno, kilo, liberty, etc., so that we can easily add new migration files that can be backported in place to a target release. Additionally, the issue of end-user migrations that are divergent from what is part of Nova itself should also be addressed using additional branches; if a customer wants to make their own changes to a database, such as adding extra indexes, this can be done in their own user-specific branch that will integrate with the primary migration stream. That would eliminate the need for a system that is tasked with diffing live database schemas and then executing decisions directly without any chance to review or version-control the changes that are being made. I would also note that this system can maintain the expand and contract idea from the online schema changes system, such that the migration files themselves would be organized into branches for expand and contract that can be run separately (the current Alembic branching model did not exist at the time that the online schema changes spec was written). This way the ability to run just the expansion in one phase and the contraction in another can be maintained; it's just that the migration steps themselves would live in version-controlled migration files as they always do. Does that help? Thanks, John __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
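The expand/contract split Mike describes can be illustrated with a toy example (table and column names invented, stdlib sqlite3 standing in for a real migration tool): "expand" makes only additive, backwards-compatible changes that are safe to run while old code is still live, and "contract" removes things only once nothing depends on them:

```python
import sqlite3

# Toy sketch of separately runnable expand and contract phases.
# Names are made up for the example; a real deployment would keep these
# in version-controlled migration files, as discussed above.
EXPAND = [
    "ALTER TABLE instances ADD COLUMN flavor_json TEXT",  # additive only
]
CONTRACT = [
    "DROP TABLE instance_flavor_legacy",  # destructive, runs last
]

def apply_phase(conn, phase):
    for ddl in {"expand": EXPAND, "contract": CONTRACT}[phase]:
        conn.execute(ddl)
```

An operator would run the expand phase, upgrade the services, then run the contract phase once no running code touches the old structures.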
Re: [openstack-dev] [nova][cinder][neutron][security] Rootwrap discussions at OSSG mid-cycle
On Thu, May 14, 2015 at 2:48 AM, Angus Lees g...@inodes.org wrote: On Wed, 13 May 2015 at 02:16 Thierry Carrez thie...@openstack.org wrote: Lucas Fisher wrote: We spent some time at the OSSG mid-cycle meet-up this week discussing rootwrap, looking at the existing code, and considering some of the mailing list discussions. Summary of our discussions: https://github.com/hyakuhei/OSSG-Security-Practices/blob/master/ossg_rootwrap.md The one-line summary is that we like the idea of a privileged daemon with higher-level interfaces to the commands being run. It has a number of advantages: it is easier to audit, enables better input sanitization, offers cleaner interfaces, and makes it easier to take advantage of Linux capabilities, SELinux, AppArmor, etc. The write-up has some more details. For those interested in that topic and willing to work on the next stage, we'll have a work session on the future of rootwrap in the Oslo track at the Design Summit in Vancouver: http://sched.co/3B2B Fwiw, I've continued work on my privsep proposal(*) and how it interacts with existing rootwrap. I look forward to discussing it and alternatives at the session. (*) https://review.openstack.org/#/c/155631 - Gus As part of the OSSG work, I started prototyping changes in Nova where the goal is to 1) Get all the code that's calling rootwrap into one place so that it's easy to find, and get tests for this code. 2) Next (or even in step 1 if it's easy enough), tighten the interfaces, so that rather than providing a function to do chmod %s %s, it would only allow whatever chmod nova actually has to do, maybe passing in a server ID rather than a bare file name. With this, we should be able to tighten up the rootwrap filters in the same way, or switch to privsep or whatever we decide to do in the future. So maybe it looks like rearranging deckchairs on the Titanic, but in this case the deckchairs are blocking the emergency exits.
I didn't get far enough into it to even see how viable the approach is, since I was working on other things. I'll put this session on my calendar. - Brant __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session
If there's a free session, can we dedicate a session specifically to the zaqar, barbican, sahara, heat, trove, guestagent, keystone auth thingy so everyone's all together? Thanks, Kevin From: Flavio Percoco [fla...@redhat.com] Sent: Thursday, May 14, 2015 12:15 PM To: Sergey Lukjanov Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session On 14/05/15 10:07 -0700, Sergey Lukjanov wrote: Hey, in Sahara we're looking at using Zaqar as a transport for agents some day as well. Unfortunately this session overlaps with Sahara sessions. Sergey, We still have some free sessions, we'd be happy to dedicate one to Sahara. Any slot that looks good for you? http://libertydesignsummit.sched.org/overview/type/design+summit/Zaqar#.VVT0CvYU9hE Thanks, Flavio On Thu, May 14, 2015 at 12:19 AM, Flavio Percoco fla...@redhat.com wrote: On 13/05/15 18:06 +, Fox, Kevin M wrote: Sahara also has the same problem, but worse, since they currently only support ssh with their agent, so it's either assigning floating ip's to all nodes, or your sahara controller must be on your one and only network node so it can tunnel. :/ Should we have a chat with them too? We've scheduled a discussion with Trove's team on Thursday at 5pm. It'd be great to have this discussion once and together, to know what the common issues are and what things need to be done. I'll ping folks from both teams to invite them to this session. If they can't make it, I'm happy to use another working session slot. Cheers, Flavio http://libertydesignsummit.sched.org/event/59dc6ec910a732cdbf5970b6792e1cef#.VVRL0PYU9hE Thanks, Kevin From: Zane Bitter [zbit...@redhat.com] Sent: Wednesday, May 13, 2015 10:26 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration.
Summit working session On 11/05/15 05:49, Flavio Percoco wrote: On 08/05/15 00:45 -0700, Nikhil Manchanda wrote: 3) The trove-guest-agent is in the vm; it is connected to the taskmanager by rabbitmq. We designed it this way. But is there some practice for doing this? How can the vm be connected to both the vm network and the management network? Most deployments of Trove that I am familiar with set up a separate RabbitMQ server in the cloud that is used by Trove. It is not recommended to use the same infrastructure RabbitMQ server for Trove, for security reasons. Also, most deployments of Trove set up a private (neutron) network that the RabbitMQ server and guests are connected to, and all RPC messages are sent over this network. We've discussed trove+zaqar in the past, and I believe some folks from the Trove team have been in contact with Fei Long lately about this. Since one of the project's goals for this cycle is to provide support to other projects and contribute to adoption, I'm wondering if any of the members of the trove team would be willing to participate in a Zaqar working session completely dedicated to this integration? +1 I learned from a concurrent thread ([Murano] [Mistral] SSH workflow action) that Murano are doing exactly the same thing with a separate RabbitMQ server to communicate with guest agents. It's a real waste of energy when multiple OpenStack projects all have to solve the same problem from scratch, so a single answer to this would be great. In that thread I suggested (and Murano developers agreed with) making the transport pluggable so that operators could choose Zaqar instead. I would strongly support doing the same here. +1 :) Flavio cheers, Zane. It'd be a great opportunity to figure out what's really needed, edge cases and get some work done on this specific case. Thanks, Flavio __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?
subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
On 2015-05-14 12:34 AM, David Lyle wrote: Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py. Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. I'm asking to NOT remove the feature provided by AVAILABLE_REGIONS, which is what you described: support for multiple keystone endpoints (or OpenStack installations) in one Horizon installation. If you are asking about token sharing between keystones, which the thread you linked seems to indicate, then yes, you can have a synced token store. But that is an exercise left to the operator. I'm not suggesting token sharing. I'm merely trying to explain that AVAILABLE_REGIONS answers a different need than multiple regions in the same keystone endpoint, which Horizon already supports fine. Those are 2 features answering different needs, and AVAILABLE_REGIONS shouldn't be removed as suggested previously: we might consider deprecating AVAILABLE_REGIONS in Horizon. -- Mathieu __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?
On 5/14/2015 2:59 PM, Kris G. Lindgren wrote: How would this impact someone running juno nova-compute on rhel 6 boxes? Or installing python2.7 from SCL and running kilo+ code on rhel6? For [3], couldn't we get the exact same information from /proc/cpuinfo? Kris Lindgren Senior Linux Systems Engineer GoDaddy, LLC. On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: The minimum required version of libvirt in the driver is still 0.9.11 [1]. We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno. The libvirt distro support matrix is here: [2] Can we safely assume that people aren't going to be running libvirt compute nodes on RHEL < 7.1 or Ubuntu Precise? Regarding RHEL, I think this is a safe bet because in Kilo nova dropped python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble running kilo+ nova on RHEL 6.x anyway. There are some workarounds in the code [3] I'd like to see removed by bumping the minimum required version. [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335 [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754 -- Thanks, Matt Riedemann ___ OpenStack-operators mailing list openstack-operat...@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators This would be Liberty, so when you upgrade nova-compute to Liberty you'd also need to upgrade the host OS to something that supports libvirt >= 1.2.2. -- Thanks, Matt Riedemann __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
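The gate behind MIN_LIBVIRT_VERSION boils down to a tuple comparison on the reported version. The sketch below shows just that comparison idea — nova's real driver gets the version from the libvirt python bindings and the helper names here are invented:

```python
# Illustrative version gate, not nova's actual code: compare a dotted
# version string against a minimum as integer tuples, so "1.10.0" correctly
# compares greater than "1.2.2" (string comparison would get this wrong).
MIN_LIBVIRT_VERSION = (1, 2, 2)

def parse_version(ver_str):
    return tuple(int(part) for part in ver_str.split("."))

def has_min_version(ver_str, minimum=MIN_LIBVIRT_VERSION):
    return parse_version(ver_str) >= minimum
```

With a minimum of (1, 2, 2), a host reporting 0.9.11 would be refused at startup, which is exactly the behavior change the version bump implies for older distros.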
Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
On May 14, 2015 12:50 PM, Maish Saidel-Keesing mais...@maishsk.com wrote: I just saw an email on the Operators list [1] that I think would allow a much simpler process for the non-developer community to submit a feature request. I understand that this was raised, at least in part, a while back [2]. Rally has the option to submit a feature request (a.k.a. backlog) - which I think is straightforward and simple. I think this will be a good way for those who are not familiar with the way a spec should be written, and honestly would not know how to write such a spec for any of the projects, but have identified a missing feature or a need for an improvement in one of the projects. They only need to specify 3 small things (a sentence or two for each): 1. Use Case 2. Problem description 3. Possible solution That is exactly what the backlog is supposed to be. These specifications have the problem description completed, but all other sections are optional. From: http://specs.openstack.org/openstack/nova-specs/specs/backlog/ As I am sure there is room for improvement, why not propose a change to the specs repo to improve the backlog spec process? I am not saying that each feature request should be implemented - or that each possible solution is the best and only way to solve the problem. It should be up to each and every project how (or even whether) this will be implemented, how important it is for them to implement this feature, and what priority it should receive. A developer then picks up the request and turns it into a proper blueprint with proper actionable items. Of course this has to be a valid feature request, and not just someone looking for support - exactly how this should be vetted, I have not thought through to the end. But I was looking to hear some feedback on the idea of making this a way for all of the OpenStack projects to collect actual feedback in a simple way.
Your thoughts and feedback would be appreciated. [1] http://lists.openstack.org/pipermail/openstack-operators/2015-May/006982.html [2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/044057.html -- Best Regards, Maish Saidel-Keesing __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron][lbaas] LBaaSv2 videos / links
Hi Everyone, As you may be aware, we have a speaking slot at the Vancouver summit on Monday to discuss LBaaS v2, Kilo and beyond: https://openstacksummitmay2015vancouver.sched.org/event/3f1e9e24f36238152749afea9c21a264#.VVTwCPmqqko We are considering showing vendor demos, or listing/linking such videos. Please let us know if you have any so we can list/include them in the talk. Regards, -Sam. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] what ticket tracking system does the openstack community use?
I am new to the openstack community. Any reply will be appreciated. Regards! Chen __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session
Flavio, it would be great if we could chat about Sahara-Zaqar on Thu at 9am. On Thu, May 14, 2015 at 1:13 PM, Fox, Kevin M kevin@pnnl.gov wrote: If there's a free session, can we dedicate a session specifically to the zaqar, barbican, sahara, heat, trove, guestagent, keystone auth thingy so everyone's all together? Thanks, Kevin From: Flavio Percoco [fla...@redhat.com] Sent: Thursday, May 14, 2015 12:15 PM To: Sergey Lukjanov Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session On 14/05/15 10:07 -0700, Sergey Lukjanov wrote: Hey, in Sahara we're looking at using Zaqar as a transport for agents some day as well. Unfortunately this session overlaps with Sahara sessions. Sergey, We still have some free sessions, we'd be happy to dedicate one to Sahara. Any slot that looks good for you? http://libertydesignsummit.sched.org/overview/type/design+summit/Zaqar#.VVT0CvYU9hE Thanks, Flavio On Thu, May 14, 2015 at 12:19 AM, Flavio Percoco fla...@redhat.com wrote: On 13/05/15 18:06 +, Fox, Kevin M wrote: Sahara also has the same problem, but worse, since they currently only support ssh with their agent, so it's either assigning floating ip's to all nodes, or your sahara controller must be on your one and only network node so it can tunnel. :/ Should we have a chat with them too? We've scheduled a discussion with Trove's team on Thursday at 5pm. It'd be great to have this discussion once and together, to know what the common issues are and what things need to be done. I'll ping folks from both teams to invite them to this session. If they can't make it, I'm happy to use another working session slot.
Cheers, Flavio http://libertydesignsummit.sched.org/event/59dc6ec910a732cdbf5970b6792e1cef #.VVRL0PYU9hE Thanks, Kevin From: Zane Bitter [zbit...@redhat.com] Sent: Wednesday, May 13, 2015 10:26 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session On 11/05/15 05:49, Flavio Percoco wrote: On 08/05/15 00:45 -0700, Nikhil Manchanda wrote: 3) The trove-guest-agent runs in the VM and is connected to the taskmanager via RabbitMQ. We designed it this way, but is there an established practice for doing this? How can the VM be connected to both the VM network and the management network? Most deployments of Trove that I am familiar with set up a separate RabbitMQ server in the cloud that is used by Trove. It is not recommended to use the same infrastructure RabbitMQ server for Trove, for security reasons. Also, most deployments of Trove set up a private (neutron) network that the RabbitMQ server and guests are connected to, and all RPC messages are sent over this network. We've discussed trove+zaqar in the past and I believe some folks from the Trove team have been in contact with Fei Long lately about this. Since one of the project's goals for this cycle is to provide support to other projects and contribute to its adoption, I'm wondering if any of the members of the trove team would be willing to participate in a Zaqar working session completely dedicated to this integration? +1 I learned from a concurrent thread ([Murano] [Mistral] SSH workflow action) that Murano are doing exactly the same thing with a separate RabbitMQ server to communicate with guest agents. It's a real waste of energy when multiple OpenStack projects all have to solve the same problem from scratch, so a single answer to this would be great. In that thread I suggested (and Murano developers agreed with) making the transport pluggable so that operators could choose Zaqar instead. I would strongly support doing the same here. +1 :) Flavio cheers, Zane. 
It'd be a great opportunity to figure out what's really needed, identify edge cases, and get some work done on this specific case. Thanks, Flavio
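As an aside for readers following the "pluggable transport" suggestion above: the idea is that the guest-agent messaging layer becomes an interface, and the operator picks RabbitMQ or Zaqar via configuration instead of the service hard-coding RabbitMQ. The following is an illustrative sketch only; none of these class or function names come from Trove, Murano, or Zaqar.

```python
import abc


class GuestTransport(abc.ABC):
    """Minimal contract a guest-agent transport driver would satisfy."""

    @abc.abstractmethod
    def send(self, guest_id, message):
        """Deliver a message dict to the agent running in guest `guest_id`."""


class RabbitTransport(GuestTransport):
    def send(self, guest_id, message):
        # A real driver would publish to a per-guest RabbitMQ queue here.
        return ("rabbit", guest_id, message)


class ZaqarTransport(GuestTransport):
    def send(self, guest_id, message):
        # A real driver would post to a Zaqar queue via its REST API here.
        return ("zaqar", guest_id, message)


# Hypothetical config-driven selection: the operator names the backend
# in a config option rather than the project hard-coding one broker.
TRANSPORT_DRIVERS = {"rabbit": RabbitTransport, "zaqar": ZaqarTransport}


def load_transport(name):
    """Instantiate the transport driver named in the operator's config."""
    return TRANSPORT_DRIVERS[name]()
```

With an interface like this, the "separate RabbitMQ per project" pattern and the Zaqar-based pattern become two drivers behind one configuration knob, which is the shape of the solution both threads converged on.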
Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
On 05/14/2015 03:48 PM, Maish Saidel-Keesing wrote: I just saw an email on the Operators list [1] that I think would allow a much simpler process for the non-developer community to submit a feature request. I understand that this was raised once upon a time [2] - at least in part - a while back. Rally has the option to submit a feature request (a.k.a. backlog), which I think is straightforward and simple. I think this will be a good way for those who are not familiar with the way a spec should be written, and honestly would not know how to write such a spec for any of the projects, but have identified a missing feature or a need for an improvement in one of the projects. They only need to specify 3 small things (a sentence or two for each): 1. Use Case 2. Problem description 3. Possible solution I am not saying that each feature request should be implemented - or that each possible solution is the best and only way to solve the problem. That should be up to each and every project: how this will be (or even if it should be) implemented, how important it is for them to implement this feature, and what priority it should receive. A developer then picks up the request and turns it into a proper blueprint with proper actionable items. Of course this has to be a valid feature request, and not just someone looking for support - how exactly this should be vetted, I have not thought through to the end. But I was looking to hear some feedback on the idea of making this a way for all of the OpenStack projects to allow them to collect actual feedback in a simple way. Hi Maish, I would support this kind of thing for projects that wish to do it, but at the same time, I wouldn't want the TC to mandate all projects use this method of collecting feedback. Projects, IMHO, should be free to self-organize as they wish, including developing processes that make the most sense for the project team. Best, -jay Your thoughts and feedback would be appreciated. 
[1] http://lists.openstack.org/pipermail/openstack-operators/2015-May/006982.html [2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/044057.html
Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
On 05/14/2015 04:34 PM, Jay Pipes wrote: [...] I would support this kind of thing for projects that wish to do it, but at the same time, I wouldn't want the TC to mandate all projects use this method of collecting feedback. Projects, IMHO, should be free to self-organize as they wish, including developing processes that make the most sense for the project team. 
Many projects, including at least keystone and nova (others too, I believe), support the idea of a backlog spec, which is exactly this kind of thing: a feature, described in some detail as to why it's interesting, that has no current implementer. It can then live in the specs tree as a good idea, and future implementers are encouraged to dive in. -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
On 15 May 2015 at 08:34, Jay Pipes jaypi...@gmail.com wrote: Hi Maish, I would support this kind of thing for projects that wish to do it, but at the same time, I wouldn't want the TC to mandate all projects use this method of collecting feedback. Projects, IMHO, should be free to self-organize as they wish, including developing processes that make the most sense for the project team. I think there is a balance to be struck. Where we tell users and operators to learn something different for every project, that has real impact. It makes it harder to engage with us, and it makes it harder to move between projects for contributors. Imagine if we had a spread of Gerrit, GitHub PRs, Launchpad reviews, GitLab PRs and Bitbucket PRs - say nova, swift, barbican, keystone and glance. That sounds silly because we all recognise the costs of switching there: I think we need to recognise the costs for other people even in things that as developers we don't interact with all that much. So I think we should explicitly leave room for experimentation and divergence, but also encourage a single common path - don't be different to be different, be different because it is important in this specific case. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
[openstack-dev] [all] Technical Committee Highlights May 13, 2015
In response to the feedback during elections, the Technical Committee now has a subteam dedicated to communications. Below is a link to the first post in our revitalized series. As always, we're here for you and listening and adjusting. http://www.openstack.org/blog/2015/05/technical-committee-highlights-may-13-2015 Thanks, Anne -- Anne Gentle annegen...@justwriteclick.com
[openstack-dev] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?
The minimum required version of libvirt in the driver is still 0.9.11 [1]. We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno. The libvirt distro support matrix is here: [2] Can we safely assume that people aren't going to be running libvirt compute nodes on RHEL 6.x or Ubuntu Precise? Regarding RHEL, I think this is a safe bet because in Kilo nova dropped python 2.6 support, and RHEL 6 doesn't have py27, so you'd be in trouble running kilo+ nova on RHEL 6.x anyway. There are some workarounds in the code [3] I'd like to see removed by bumping the minimum required version. [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335 [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754 -- Thanks, Matt Riedemann
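For readers unfamiliar with how such a version floor is enforced, the usual pattern is to pack the (major, minor, micro) tuple into a single comparable integer and compare it against the minimum. A minimal sketch of that pattern follows; the helper names here are hypothetical, not nova's actual ones.

```python
# Hypothetical sketch of a min-version gate like the one discussed above.
# MIN_LIBVIRT_VERSION mirrors the proposed 1.2.2 floor from the thread.

MIN_LIBVIRT_VERSION = (1, 2, 2)


def version_to_int(version):
    """Pack a (major, minor, micro) tuple into one comparable integer."""
    major, minor, micro = version
    return major * 1000000 + minor * 1000 + micro


def has_min_version(reported, minimum=MIN_LIBVIRT_VERSION):
    """Return True if the hypervisor's reported version meets the floor."""
    return version_to_int(reported) >= version_to_int(minimum)


print(has_min_version((1, 2, 8)))   # Trusty-era libvirt: new enough
print(has_min_version((0, 9, 11)))  # the old 0.9.11 floor: rejected at 1.2.2
```

Raising the constant is cheap; the real cost discussed in the thread is deleting the conditional workaround branches that only exist to tolerate versions below the new floor.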
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
The keystone API has had regions as part of the API for a long time, I think. This would imply the one keystone, multiple regions definition. Thanks, Kevin From: Geoff Arnold [ge...@geoffarnold.com] Sent: Thursday, May 14, 2015 11:41 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive? That’s interesting, because I wasn’t aware that “cloud” was part of the formal OpenStack taxonomy. Historically, we defined a region as a set of endpoints, supplied by an instance of Keystone. You seem to be saying that a cloud is a collection of regions configured in the same Keystone. [citation needed] Puzzled. Geoff On May 14, 2015, at 7:56 AM, Zane Bitter zbit...@redhat.com wrote: On 14/05/15 10:39, Geoff Arnold wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn’t helpful. At the very least, it would be useful if discussions at the summit could result in a decent Wiki page on the subject. The terminology I (and Heat) have always used is that regions are sets of endpoints configured in the same Keystone. Where you have a different Keystone auth URL, that is straight up a separate cloud, no matter how you slice it. The confusion here seems to be that Horizon is using the name AVAILABLE_REGIONS to denote available Keystone auth URLs - i.e. different clouds, not different regions at all. Looked at through that lens, things seem a bit easier to understand. Heat supports multi-region trees of stacks (i.e. you can create a nested stack in another region). Multi-cloud support has been considered, but afaik has not yet landed. Figuring out where to store the credentials securely is tricky. cheers, Zane. 
Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com mailto:morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com mailto:dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com mailto:mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure an Horizon installation to login to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to login to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone. It wouldn’t take a lot; the main points: * Implement the Regions API discussed back in the Havana time period -https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi region models is problematic if we’re serious about things like multi-region Heat. Thoughts? Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com mailto:ge...@geoffarnold.com mailto:ge...@geoffarnold.com mailto:ge...@geoffarnold.com wrote: I’m looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I’ve found, and it’s pretty inconclusive. 
More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names “X” and “Y”, and each of those Keystones returns a service catalog with multiple regions (“A” and “B” for one, “P”, “Q”, and “R” for the other), what’s Horizon going to do? Or rather, what’s it expected to do? Yes, I’m being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I’ve added Heat to the subject, because from a quick read of https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat it looks as if Heat won’t support the AVAILABLE_REGIONS model. That seems like an unfortunate disconnect. Horizon only supports authenticating to one keystone endpoint at a time,
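For context on the setting this thread keeps circling: AVAILABLE_REGIONS in Horizon's local_settings.py is a list of (Keystone auth URL, display label) pairs, so each entry really does point at a separate Keystone, i.e. a separate cloud. A minimal sketch, with placeholder URLs and labels:

```python
# Sketch of the Horizon local_settings.py option discussed in this thread.
# Despite the name, each tuple selects a separate Keystone auth URL
# (a separate cloud), not a region within one catalog.
# The endpoint URLs and labels below are placeholders, not real deployments.
AVAILABLE_REGIONS = [
    ('http://cloud-east.example.com:5000/v2.0', 'cloud-east'),
    ('http://cloud-west.example.com:5000/v2.0', 'cloud-west'),
]
```

Regions *within* one cloud, by contrast, come from the service catalog that the selected Keystone returns after login, which is exactly the ambiguity Geoff's question probes.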
Re: [openstack-dev] what is the ticket tracking system openstack community use?
On 5/14/2015 2:24 PM, Chen He wrote: I am new to the OpenStack community. Any reply will be appreciated. Regards! Chen Launchpad. More info here: https://wiki.openstack.org/wiki/Bugs -- Thanks, Matt Riedemann
Re: [openstack-dev] what is the ticket tracking system openstack community use?
Most projects track their bugs on Launchpad: https://launchpad.net/openstack Some projects are also using a new system called StoryBoard: https://storyboard.openstack.org/ You can also find a wealth of information about such issues in the infrastructure manual: http://docs.openstack.org/infra/manual/ Welcome! Excerpts from Chen He's message of 2015-05-14 12:24:02 -0700: I am new to the OpenStack community. Any reply will be appreciated. Regards! Chen
Re: [openstack-dev] [all] Technical Committee Highlights May 13, 2015
On 15 May 2015 at 07:15, Anne Gentle annegen...@justwriteclick.com wrote: In response to the feedback during elections, the Technical Committee now has a subteam dedicated to communications. Below is a link to the first post in our revitalized series. As always, we're here for you and listening and adjusting. http://www.openstack.org/blog/2015/05/technical-committee-highlights-may-13-2015 Cool - thanks very much for leading this! Uhm, one small erratum: my term just started :) -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
[openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
I just saw an email on the Operators list [1] that I think would allow a much simpler process for the non-developer community to submit a feature request. I understand that this was raised once upon a time [2] - at least in part - a while back. Rally has the option to submit a feature request (a.k.a. backlog), which I think is straightforward and simple. I think this will be a good way for those who are not familiar with the way a spec should be written, and honestly would not know how to write such a spec for any of the projects, but have identified a missing feature or a need for an improvement in one of the projects. They only need to specify 3 small things (a sentence or two for each): 1. Use Case 2. Problem description 3. Possible solution I am not saying that each feature request should be implemented - or that each possible solution is the best and only way to solve the problem. That should be up to each and every project: how this will be (or even if it should be) implemented, how important it is for them to implement this feature, and what priority it should receive. A developer then picks up the request and turns it into a proper blueprint with proper actionable items. Of course this has to be a valid feature request, and not just someone looking for support - how exactly this should be vetted, I have not thought through to the end. But I was looking to hear some feedback on the idea of making this a way for all of the OpenStack projects to allow them to collect actual feedback in a simple way. Your thoughts and feedback would be appreciated. [1] http://lists.openstack.org/pipermail/openstack-operators/2015-May/006982.html [2] http://lists.openstack.org/pipermail/openstack-dev/2014-August/044057.html -- Best Regards, Maish Saidel-Keesing
Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
Robert, So I think we should explicitly leave room for experimentation and divergence, but also encourage a single common path - don't be different to be different, be different because it is important in this specific case. First of all, feature requests follow the same process as specs in other projects; the difference is in what we expect to get in a spec versus a feature request (and who the audience is). By the way, feature requests in Rally were introduced far, far before the backlogs in Keystone and Nova. It seems strange to me that those projects are reinventing a working mechanism from another project =( instead of just reusing it. Best regards, Boris Pavlovic On Thu, May 14, 2015 at 11:45 PM, Robert Collins robe...@robertcollins.net wrote: [...] 
[openstack-dev] [Neutron] Data-plane performance testing with Shaker
Hi all! Let me introduce Shaker - a tool for data-plane performance testing in OpenStack. The motivation behind it is to have a simple way of measuring networking bandwidth between instances. Shaker's key features are: 1. *User-defined topology*. The topology is specified as a Heat template, so users may do arbitrary configuration of instances, networks, routers, floating IPs, etc. Instance scheduling is controlled: it is possible to specify the number of instances per compute node and their location. 2. *Simultaneous test execution*. By default Shaker runs tests synchronously on all deployed instances. It is also possible to increase the load, thus measuring the dependency on the number of concurrently working instances. The feature is useful when one needs to find a bottleneck in the cloud (like usage of non-DVR routers). 3. *Pluggable tools*. Out of the box Shaker supports iperf and netperf and is able to calculate aggregated stats based on their output. Adding a new tool is easy; in the simplest case it does not even require coding. 4. *Interactive report*. Shaker produces the report as a single-page HTML application. The report contains aggregated charts for bandwidth depending on concurrency, bandwidth per node, and a precise timeline of traffic on every node. The report does not have any dependencies and can be shared easily. If you are interested in knowing more about Shaker, welcome to the Neutron lightning talk presentation by Oleg Bondarev next Wed in Vancouver (http://sched.co/3BNR). And certainly welcome to use and contribute! Code: https://github.com/stackforge/shaker Docs: http://pyshaker.readthedocs.org/ Launchpad: https://launchpad.net/shaker/ PyPI: https://pypi.python.org/pypi/pyshaker/ Thanks, Ilya
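To make point 1 concrete: a Shaker topology is an ordinary Heat (HOT) template. The fragment below is an illustrative sketch only - the resource names, image, and flavor are placeholders, and real Shaker scenarios ship templates that additionally wire up the measurement agents.

```yaml
heat_template_version: 2013-05-23

description: >
  Illustrative two-instance test topology on one private network
  (placeholder names; not an actual Shaker scenario template).

resources:
  test_net:
    type: OS::Neutron::Net

  test_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: test_net }
      cidr: 10.0.0.0/24

  master:
    type: OS::Nova::Server
    properties:
      image: shaker-image      # placeholder image name
      flavor: m1.small         # placeholder flavor
      networks: [{ network: { get_resource: test_net } }]

  slave:
    type: OS::Nova::Server
    properties:
      image: shaker-image
      flavor: m1.small
      networks: [{ network: { get_resource: test_net } }]
```

Because the topology is plain Heat, anything Heat can express (routers, floating IPs, placement across compute nodes) can become part of a bandwidth test.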
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
On 14/05/15 14:41, Geoff Arnold wrote: That’s interesting, because I wasn’t aware that “cloud” was part of the formal OpenStack taxonomy. Um, OK. AWS, Rackspace and Helion are all different clouds, even though the last two both run OpenStack. Do we really need a formal taxonomy for that? Likewise, if you install OpenStack twice, you get two different clouds. Each of which may or may not be split into multiple regions. Historically, we defined a region as a set of endpoints, supplied by an instance of Keystone. Right. You seem to be saying that a cloud is a collection of regions configured in the same Keystone. Yes, that's what I'm saying. How is that different? [citation needed] Seriously, this is not nearly as complicated as you are making out. Puzzled. Two regions that don't appear in the same Keystone catalog are not part of the same cloud. Horizon offers support for dealing with multiple regions within a single cloud. It also offers an option to switch between multiple different clouds using an option unfortunately called AVAILABLE_REGIONS, which is a total misnomer. - ZB Geoff On May 14, 2015, at 7:56 AM, Zane Bitter zbit...@redhat.com wrote: On 14/05/15 10:39, Geoff Arnold wrote: +1 There seems to be a significant disconnect between Heat, Horizon and Keystone on the subject of multi-region configurations, and the documentation isn’t helpful. At the very least, it would be useful if discussions at the summit could result in a decent Wiki page on the subject. The terminology I (and Heat) have always used is that regions are sets of endpoints configured in the same Keystone. Where you have a different Keystone auth URL that is straight up a separate cloud, no matter how you slice it. The confusion here seems to be that Horizon is using the name AVAILABLE_REGIONS to denote available Keystone auth URLS - i.e. different clouds, not different regions at all. Looked at through that lens, things seem a bit easier to understand. 
Heat supports multi-region trees of stacks (i.e. you can created a nested stack in another region). Multi-cloud support has been considered, but afaik has not yet landed. Figuring out where to store the credentials securely is tricky. cheers, Zane. Geoff On May 13, 2015, at 9:49 PM, Morgan Fainberg morgan.fainb...@gmail.com mailto:morgan.fainb...@gmail.com wrote: On May 13, 2015, at 21:34, David Lyle dkly...@gmail.com mailto:dkly...@gmail.com wrote: On Wed, May 13, 2015 at 3:24 PM, Mathieu Gagné mga...@iweb.com mailto:mga...@iweb.com wrote: When using AVAILABLE_REGIONS, you get a dropdown at login time to choose your region which is in fact your keystone endpoint. Once logged in, you get a new dropdown at the top right to switch between the keystone endpoints. This means you can configure an Horizon installation to login to multiple independent OpenStack installations. So I don't fully understand what enhancing the multi-region support in Keystone would mean. Would you be able to configure Horizon to login to multiple independent OpenStack installations? Mathieu On 2015-05-13 5:06 PM, Geoff Arnold wrote: Further digging suggests that we might consider deprecating AVAILABLE_REGIONS in Horizon and enhancing the multi-region support in Keystone. It wouldn’t take a lot; the main points: * Implement the Regions API discussed back in the Havana time period -https://etherpad.openstack.org/p/havana-availability-zone-and-region-management - but with full CRUD * Enhance the Endpoints API to allow filtering by region Supporting two different multi region models is problematic if we’re serious about things like multi-region Heat. Thoughts? Geoff On May 13, 2015, at 12:01 PM, Geoff Arnold ge...@geoffarnold.com mailto:ge...@geoffarnold.com mailto:ge...@geoffarnold.com mailto:ge...@geoffarnold.com wrote: I’m looking at implementing dynamically-configured multi-region support for service federation, and the prior art on multi-region support in Horizon is pretty sketchy. 
This thread: http://lists.openstack.org/pipermail/openstack/2014-January/004372.html is the only real discussion I’ve found, and it’s pretty inconclusive. More precisely, if I configure a single Horizon with AVAILABLE_REGIONS pointing at two different Keystones with region names “X” and “Y”, and each of those Keystones returns a service catalog with multiple regions (“A” and “B” for one, “P”, “Q”, and “R” for the other), what’s Horizon going to do? Or rather, what’s it expected to do? Yes, I’m being lazy: I could actually configure this to see what happens, but hopefully it was considered during the design. Geoff PS I’ve added Heat to the subject, because from a quick read of
Re: [openstack-dev] [all] Technical Committee Highlights May 13, 2015
On Thu, May 14, 2015 at 2:32 PM, Robert Collins robe...@robertcollins.net wrote: On 15 May 2015 at 07:15, Anne Gentle annegen...@justwriteclick.com wrote: In response to the feedback during elections, the Technical Committee now has a subteam dedicated to communications. Below is a link to the first post in our revitalized series. As always, we're here for you and listening and adjusting. http://www.openstack.org/blog/2015/05/technical-committee-highlights-may-13-2015 Cool - thanks very much for leading this! Uhm, one small errata. My term just started :) Ha, oops! Thanks Rob, working on it. :) -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Anne Gentle annegen...@justwriteclick.com __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [new][cognitive] Announcing Cognitive - a project to deliver Machine Learning as a Service for OpenStack
Hi! It is a great pleasure to announce the development of a new project called Cognitive. Cognitive provides Machine Learning [1] as a Service that enables operators to offer next-generation, data-science-based services on top of their OpenStack clouds. This project will begin as a StackForge project based upon an empty cookiecutter [2] repo. The repos to work in are: Server: https://github.com/stackforge/cognitive Client: https://github.com/stackforge/python-cognitiveclient Please join us via IRC in #openstack-cognitive on freenode. We will be holding a doodle poll to select times for our first meeting the week after summit. This doodle poll will close May 24th and meeting times will be announced on the mailing list at that time. At our first IRC meeting, we will draft additional core team members. We would like to invite interested individuals to join this exciting new development effort! Please enter your availability in the doodle poll here: http://doodle.com/drrka5tgbwpbfbxy Initial core team: Steven Dake, Aparupa Das Gupa, Debo~ Dutta, Johnu George, Kyle Mestery, Sarvesh Ranjan, Ralf Rantzau, Komei Shimamura, Marc Solanas, Manoj Sharma, Yathi Udupi, Kai Zhang. A little bit about Cognitive: Data-driven applications on cloud infrastructure increasingly rely on Machine Learning (ML). Today this often requires application developers and data scientists to write their own machine learning stack or deploy other packages before they can build any kind of data-science-based application. Data scientists also need an easy way to rapidly experiment with data without having to write basic infrastructure for data manipulation. Cognitive is a Machine Learning service on top of OpenStack that provides machine-learning-based services to tenants (API, workbench, compute service). 
For information about blueprints check out: https://blueprints.launchpad.net/cognitive https://blueprints.launchpad.net/python-cognitiveclient For more details, check out our Wiki: https://wiki.openstack.org/wiki/Cognitive Please join the awesome Cognitive team in designing a world class Machine Learning as a Service solution. We look forward to seeing you on IRC on #openstack-cognitive. Regards, Debo~ Dutta (on behalf of the initial team) [1] http://en.wikipedia.org/wiki/Machine_learning [2] https://github.com/openstack-dev/cookiecutter
Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session
-BEGIN PGP SIGNED MESSAGE- Hash: SHA512 I'm very much interested in talking with some Keystone folks about this auth issue. I would be willing to dedicate a Barbican Working Session to this discussion if there is a time slot that works for all the interested parties. - - Douglas Mendizabal On 5/14/15 3:13 PM, Fox, Kevin M wrote: If there's a free session, can we dedicate a session specifically to the zaqar, barbican, sahara, heat, trove, guestagent, keystone auth thingy so everyone's all together? Thanks, Kevin From: Flavio Percoco [fla...@redhat.com] Sent: Thursday, May 14, 2015 12:15 PM To: Sergey Lukjanov Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session On 14/05/15 10:07 -0700, Sergey Lukjanov wrote: Hey, in Sahara we're looking at using Zaqar as a transport for the agent some day as well. Unfortunately this section overlaps with Sahara sessions. Sergey, We still have some free sessions, we'd be happy to dedicate one to Sahara. Any slot that looks good for you? http://libertydesignsummit.sched.org/overview/type/design+summit/Zaqar#.VVT0CvYU9hE Thanks, Flavio On Thu, May 14, 2015 at 12:19 AM, Flavio Percoco fla...@redhat.com wrote: On 13/05/15 18:06, Fox, Kevin M wrote: Sahara also has the same problem, but worse, since they currently only support ssh with their agent, so it's either assign floating IPs to all nodes, or your Sahara controller must be on your one and only network node so it can tunnel. :/ Should we have a chat with them too? We've scheduled a discussion with Trove's team on Thursday at 5pm. It'd be great to have this discussion once and together, to know what the common issues are and what things need to be done. I'll ping folks from both teams to invite them to this session. If they can't make it, I'm happy to use another working session slot. 
Cheers, Flavio http://libertydesignsummit.sched.org/event/59dc6ec910a732cdbf5970b6792e1cef#.VVRL0PYU9hE Thanks, Kevin From: Zane Bitter [zbit...@redhat.com] Sent: Wednesday, May 13, 2015 10:26 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [trove][zaqar] Trove and Zaqar integration. Summit working session On 11/05/15 05:49, Flavio Percoco wrote: On 08/05/15 00:45 -0700, Nikhil Manchanda wrote: 3) The trove-guest-agent runs in the VM; it is connected to the taskmanager via RabbitMQ. We designed it this way, but is there an established practice for doing this? How do we make the VM reachable on both the VM network and the management network? Most deployments of Trove that I am familiar with set up a separate RabbitMQ server in the cloud that is used by Trove. It is not recommended to use the same infrastructure RabbitMQ server for Trove, for security reasons. Also, most deployments of Trove set up a private (neutron) network that the RabbitMQ server and guests are connected to, and all RPC messages are sent over this network. We've discussed trove+zaqar in the past and I believe some folks from the Trove team have been in contact with Fei Long lately about this. Since one of the project's goals for this cycle is to provide support to other projects and contribute to adoption, I'm wondering if any members of the Trove team would be willing to participate in a Zaqar working session completely dedicated to this integration? +1 I learned from a concurrent thread ([Murano] [Mistral] SSH workflow action) that Murano are doing exactly the same thing with a separate RabbitMQ server to communicate with guest agents. It's a real waste of energy when multiple OpenStack projects all have to solve the same problem from scratch, so a single answer to this would be great. In that thread I suggested (and Murano developers agreed with) making the transport pluggable so that operators could choose Zaqar instead. I would strongly support doing the same here. +1 :) Flavio cheers, Zane. 
It'd be a great opportunity to figure out what's really needed, edge cases and get some work done on this specific case. Thanks, Flavio
Re: [openstack-dev] [horizon][keystone][heat] Are AVAILABLE_REGIONS and multi-region service catalog mutually exclusive?
If we don’t want to deprecate AVAILABLE_REGIONS, we certainly need to clean up the ambiguity. And to be honest, the existing documentation for both “multi-region” schemes (AVAILABLE_REGIONS and Keystone-based) is completely inadequate. Geoff On May 14, 2015, at 1:13 PM, Mathieu Gagné mga...@iweb.com wrote: On 2015-05-14 12:34 AM, David Lyle wrote: Horizon only supports authenticating to one keystone endpoint at a time, specifically to one of the entries in AVAILABLE_REGIONS as defined in settings.py. Once you have an authenticated session in Horizon, the region selection support is merely for filtering between regions registered with the keystone endpoint you authenticated to, where the list of regions is determined by parsing the service catalog returned to you with your token. What's really unclear to me is what you are intending to ask. I'm asking NOT to remove the feature provided by AVAILABLE_REGIONS, which is what you described: support for multiple keystone endpoints (i.e. multiple OpenStack installations) in one Horizon installation. If you are asking about token sharing between keystones, which the thread you linked seems to indicate, then yes, you can have a synced token store. But that is an exercise left to the operator. I'm not suggesting token sharing. I'm merely trying to explain that AVAILABLE_REGIONS answers a different need than multi-regions in the same keystone endpoint, which Horizon already supports fine. Those are 2 features answering different needs, and AVAILABLE_REGIONS shouldn't be removed as suggested previously: “we might consider deprecating AVAILABLE_REGIONS in Horizon.” 
-- Mathieu
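For readers trying to reconcile the two models discussed above, a rough sketch of the Horizon setting may help. This is illustrative only: the endpoint URLs and display names below are invented, and the exact tuple format should be checked against your Horizon release's documentation.

```python
# local_settings.py (illustrative sketch; URLs and names are made up).
# Each tuple is (Keystone auth URL, display name). Despite the setting's
# name, every entry points at a separate Keystone, i.e. a separate cloud,
# not a region within one cloud.
AVAILABLE_REGIONS = [
    ('https://keystone-east.example.com:5000/v2.0', 'cloud-east'),
    ('https://keystone-west.example.com:5000/v2.0', 'cloud-west'),
]
```

Regions in the Keystone sense (sets of endpoints within one service catalog) need no Horizon-side configuration at all; they are discovered from the catalog after login, which is exactly the distinction this thread is drawing.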
[openstack-dev] [all][api] State of the OpenStack API Working Group: Liberty Edition
This blog post is basically a preview of our cross-project session: http://blog.phymata.com/2015/05/14/state-of-the-api-wg-liberty-edition/ The API WG will also be a busy bunch at the Summit. 1 cross-project session: API Working Group: State of the Group [1] 2 working group sessions: API Working Group: Working Session 1 [2] API Working Group: Working Session 2 [3] 2 summit sessions related to the API WG: APIs Matter [4] The Good and the Bad of the OpenStack REST APIs [5] If you're attending the summit, we hope to see you there! Cheers, Everett [1] https://libertydesignsummit.sched.org/event/e14d84514003140fe30e984027299a44 [2] https://libertydesignsummit.sched.org/event/3fe7ba65fed52540e6116f7bee2392a6 [3] https://libertydesignsummit.sched.org/event/c02c575cd390b71d5e17a3f27f6b5806 [4] https://libertydesignsummit.sched.org/event/bf6f86afe58148a96ab9d1dd0d30a554 [5] https://libertydesignsummit.sched.org/event/602a2acdca6f546cef89dc0c4356e3d8
Re: [openstack-dev] [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?
On 5/14/2015 3:35 PM, Matt Riedemann wrote: On 5/14/2015 2:59 PM, Kris G. Lindgren wrote: How would this impact someone running juno nova-compute on rhel 6 boxes? Or installing python 2.7 from SCL and running kilo+ code on rhel6? For [3], couldn't we get the exact same information from /proc/cpuinfo? Kris Lindgren Senior Linux Systems Engineer GoDaddy, LLC. On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: The minimum required version of libvirt in the driver is still 0.9.11 [1]. We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno. The libvirt distro support matrix is here: [2] Can we safely assume that people aren't going to be running libvirt compute nodes on RHEL < 7.1 or Ubuntu Precise? Regarding RHEL, I think this is a safe bet because in Kilo nova dropped python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble running kilo+ nova on RHEL 6.x anyway. There are some workarounds in the code [3] I'd like to see removed by bumping the minimum required version. [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335 [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754 -- Thanks, Matt Riedemann ___ OpenStack-operators mailing list openstack-operat...@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators This would be Liberty, so when you upgrade nova-compute to Liberty you'd also need to upgrade the host OS to something that supports libvirt >= 1.2.2. Here is the patch to see what this would look like: https://review.openstack.org/#/c/183220/ -- Thanks, Matt Riedemann
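For context on what the version gate in question looks like, here is a minimal sketch. It is not nova's actual code: the function names `version_to_int` and `has_min_version` are illustrative, but the packing convention (major * 1,000,000 + minor * 1,000 + micro) is the one libvirt uses when reporting its version.

```python
# Illustrative sketch of a MIN_LIBVIRT_VERSION gate (not nova's real code).
MIN_LIBVIRT_VERSION = (1, 2, 2)

def version_to_int(version):
    """Pack a (major, minor, micro) tuple the way libvirt encodes versions."""
    major, minor, micro = version
    return major * 1000000 + minor * 1000 + micro

def has_min_version(reported, minimum=MIN_LIBVIRT_VERSION):
    """True when the host's reported libvirt version meets the minimum."""
    return reported >= version_to_int(minimum)

print(has_min_version(version_to_int((1, 2, 8))))  # True (e.g. RHEL 7.1 era)
print(has_min_version(version_to_int((0, 9, 8))))  # False (Ubuntu Precise era)
```

Bumping MIN_LIBVIRT_VERSION to (1, 2, 2) is what lets the workarounds guarded by older-version checks be deleted.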
Re: [openstack-dev] [Neutron] Data-plane performance testing with Shaker
Hi Ilya, I am interested in this, and many thanks for posting it. I have to ask how relevant the performance testing is, given that Neutron overlays are dependent on the underlay. I can see some uses and value for your point 4 below, but I am struggling to see this being used as a “tool for data-plane performance testing” in Neutron networks. I look forward to the lightning talks. /Alan From: Ilya Shakhat [mailto:ishak...@mirantis.com] Sent: May-14-15 11:30 PM To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org) Subject: [openstack-dev] [Neutron] Data-plane performance testing with Shaker Hi all! Let me introduce Shaker - a tool for data-plane performance testing in OpenStack. The motivation behind it is to have a simple way of measuring networking bandwidth between instances. Shaker's key features are: 1. User-defined topology. The topology is specified as a Heat template, so users may do arbitrary configuration of instances, networks, routers, floating IPs, etc. Instance scheduling is controlled: it is possible to specify the number of instances per compute node and their location. 2. Simultaneous test execution. By default Shaker runs tests synchronously on all deployed instances. It is also possible to increase the load, thus measuring the dependency on the number of concurrently working instances. This feature is useful when one needs to find a bottleneck in the cloud (like usage of non-DVR routers). 3. Pluggable tools. Out of the box Shaker supports iperf and netperf and is able to calculate aggregated stats based on their output. Adding a new tool is easy; in the simplest case it does not even require coding. 4. Interactive report. Shaker produces the report as a single-page HTML application. The report contains aggregated charts for bandwidth depending on concurrency, bandwidth per node, and a precise timeline of traffic on every node. The report does not have any dependencies and can be shared easily. 
If you are interested in knowing more about Shaker, come to the Neutron lightning talk presentation by Oleg Bondarev next Wed in Vancouver (http://sched.co/3BNR). And certainly welcome to use and contribute! Code: https://github.com/stackforge/shaker Docs: http://pyshaker.readthedocs.org/ Launchpad: https://launchpad.net/shaker/ PyPI: https://pypi.python.org/pypi/pyshaker/ Thanks, Ilya
Re: [openstack-dev] [Murano] [Mistral] [Zaqar] [Keystone] SSH workflow action
-BEGIN PGP SIGNED MESSAGE- Hash: SHA512 +1 to a Keystone and Oslo solution for this problem. One of my objections to Kevin's spec for Barbican is the copying of Keystone code into the Barbican tree. It seems to me like a code smell that we're trying to solve a problem that Keystone should be solving. Like I mentioned in the other thread [1] I would be willing to schedule this discussion during one of the Barbican Working Sessions. I'm also very interested in learning more about the Policy work being done, since we recently did a lot of work to provide additional policies for some of our contributors who find the current Keystone models burdensome. [2] - - Douglas Mendizábal [1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064196.html [2] http://specs.openstack.org/openstack/barbican-specs/specs/kilo/add-creat or-only-option.html On 5/12/15 8:43 PM, Zane Bitter wrote: On 12/05/15 13:06, Georgy Okrokvertskhov wrote: There is one thing which still bothers me. It is authentication. Right now with separate RabbitMQ instance we keep VMs authentication isolated from OpenStack infra. This is still a problem if you want to use webhooks (Heat autoscaling, Murano actions) via our own authentication models. If we plan to use Zaqar it will be interesting to know how Zaqar solves this issue. Aha, if you'd read my blog post you would know the answer ;) There's no specific provision for this in Zaqar at the moment AFAIK. The problem is really Keystone: it was never designed for authenticating applications to the cloud, only real live users. We need to fix this, in Keystone Oslo, for the benefit of all application-facing services. 
Some work has already started: - Keystone can now support separate backends for separate domains, so even if you are backed by read-only LDAP you can create service users in a separate DB-backed domain: http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/ - Work is planned on Dynamic Policy to make the authorisation model more powerful: http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/ I'm not sure that this goes far enough though (although I don't completely grok the implications of Dynamic Policy). We really want users to be able to define their own policy for application accounts that they create, and at an even more fine-grained level, so a common library for this sort of authorisation would be helpful. Frankly, I don't think that this is a good idea to use Keystone credentials or tokens for MQ clients on VMs. This topic, probably, deserves its own e-mail thread. Keystone credentials _are_ the answer, but they must not be the *user's* Keystone credentials. I can tell you how Heat does this right now for authenticating application callbacks for WaitConditions. It's not exactly pretty, but it works. Basically we create the application users in a separate domain and then do extra authorisation checks ourselves. Steve Hardy wrote a comprehensive summary on his blog: http://hardysteven.blogspot.com/2014/04/heat-auth-model-updates-part-2 - -stack.html So the mechanisms are there. In the short term we'd need some cross-project co-operation to define a system through which we can do this across projects (i.e. Murano or any other service can create a user and have Zaqar authorise it for listening on a particular queue). Maybe this is an extra parameter when creating a queue in Zaqar to also create a user account in a separate domain (the way Heat does) with permission to send and/or receive only on that particular queue and return its credentials. 
That would be easier to secure and easier to implement than having other services create the user accounts themselves. In the medium term hopefully we'll be able to come up with a less hacky solution that uses Oslo libraries to manage all of the user creation and authorisation. It will be interesting to discuss this with the Keystone team. What if it were possible to have a token which is restricted to authenticating against a specific API URL, like GET /v1/queues/queue-id/? Yes, we should definitely discuss this with the Keystone team and make sure they're getting feedback from all of the many groups who need it so that they can prioritise the work appropriately :) cheers, Zane. Thanks Gosha On Tue, May 12, 2015 at 8:58 AM, Fox, Kevin M kevin@pnnl.gov mailto:kevin@pnnl.gov wrote: +1 From: Zane Bitter [zbit...@redhat.com mailto:zbit...@redhat.com] Sent: Monday, May 11, 2015 6:15 PM To: openstack-dev@lists.openstack.org mailto:openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Murano] [Mistral] SSH workflow action Hello! This looks like a perfect soapbox from which to talk about my favourite issue ;) You're right about the ssh idea, for the reasons discussed related to networking and a few more that weren't (e.g.
Re: [openstack-dev] Barbican : Unable to execute the curl command for uploading/retrieving the secrets with the latest Barbican code.
Asha, I spent some time looking into this. It looks to be a regression that occurred a few days ago when a CR was merged that moved us over to oslo_context. I have reported the issue here: https://bugs.launchpad.net/barbican/+bug/1455247 I have a couple of ideas on how to fix it, so keep your eyes out for a CR to resolve the issue. John Vrbanac On Thu, 2015-05-14 at 12:26 -0500, Asha Seshagiri wrote: Hi all, We are able to execute the curl commands on the new Barbican code provided we integrate it with Keystone. I ran into this issue because I was trying to change localhost to the actual IP on a plain Barbican server, so that I would get the response and request objects with the actual IP rather than localhost. This configuration was required for setting up an HA proxy for Barbican. And then I thought of integrating with Keystone and configuring the Barbican server for HTTPS. It's good to learn that the latest code drop of Barbican enforces the authentication mechanism with Keystone, which does not allow us to execute the curl command without providing a token from the Identity service (Keystone) in the request, unlike previous Barbican versions. Please find the curl command requests and responses for uploading/retrieving the secrets on the Barbican server: [root@Clientfor-HAProxy barbican]# curl -X POST -H 'content-type:application/json' -H 'X-Project-Id:12345' \ -H "X-Auth-Token:c9ac81784e1e4e089fccbca19f862be2" -d '{"payload": "my-secret-here","payload_content_type": "text/plain"}' \ -k https://localhost:9311/v1/secrets {"secret_ref": "https://localhost:9311/v1/secrets/02336016-623b-4deb-bca5-caedc0bf0e35"}[root@Clientfor-HAProxy barbican]# [root@Clientfor-HAProxy barbican]# curl -H 'Accept: application/json' -H 'X-Project-Id:12345' \ -H "X-Auth-Token:c9ac81784e1e4e089fccbca19f862be2" -k https://localhost:9311/v1/secrets {secrets: [{status: ACTIVE, secret_type: opaque, updated: 2015-05-14T16:35:44.109536, name: null, algorithm: null, created: 2015-05-14T16:35:44.103982, 
secret_ref: https://localhost:9311/v1/secrets/02336016-623b-4deb-bca5-caedc0bf0e35;, content_types: {default: text/plain}, creator_id: cedd848a8a9e410196793c601c03b99a, mode: null, bit_length: null, expiration: null}], total: 1}[root@Clientfor-HAProxy barbican]# Thanks and Regards, Asha Seshagiri On Wed, May 13, 2015 at 4:26 PM, Asha Seshagiri asha.seshag...@gmail.commailto:asha.seshag...@gmail.com wrote: Hi all , When I started debugging ,we find that default group is not used instead oslo_policy would be used Please find the logs below : 2015-05-13 15:59:34.393 13210 WARNING oslo_config.cfg [-] Option policy_default_rule from group DEFAULT is deprecated. Use option policy_default_rule from group oslo_policy. 2015-05-13 15:59:34.394 13210 WARNING oslo_config.cfg [-] Option policy_file from group DEFAULT is deprecated. Use option policy_file from group oslo_policy. 2015-05-13 15:59:34.395 13210 DEBUG oslo_policy.openstack.common.fileutils [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Reloading cached file /etc/barbican/policy.json read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/openstack/common/fileutils.py:64 2015-05-13 15:59:34.398 13210 DEBUG oslo_policy.policy [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Reloaded policy file: /etc/barbican/policy.json _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:424 2015-05-13 15:59:34.399 13210 ERROR barbican.api.controllers [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Secret creation attempt not allowed - please review your user/project privileges 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers Traceback (most recent call last): 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 104, in handler 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers return fn(inst, *args, **kwargs) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File 
/root/barbican/barbican/api/controllers/__init__.py, line 85, in enforcer 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers _do_enforce_rbac(inst, pecan.request, action_name, ctx, **kwargs) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 68, in _do_enforce_rbac 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers credentials, do_raise=True) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /usr/lib/python2.7/site-packages/oslo_policy/policy.py, line 493, in enforce 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers raise PolicyNotAuthorized(rule, target, creds) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers PolicyNotAuthorized: secrets:post on {u'payload': u'my-secret-here', u'payload_content_type': u'text/plain'} by {'project': '12345', 'user': None, 'roles': []} disallowed by
Re: [openstack-dev] [all] Technical Committee Highlights May 13, 2015
On Thu, 14 May 2015, Anne Gentle wrote: In response to the feedback during elections, the Technical Committee now has a subteam dedicated to communications. Below is a link to the first post in our revitalized series. As always, we're here for you and listening and adjusting. Awesome, thanks very much for getting this rolling. A very promising start. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
On Fri, May 15, 2015 at 4:46 AM, Mike Bayer mba...@redhat.com wrote: On 5/14/15 11:58 AM, Doug Hellmann wrote: At one point we were exploring having both sqlalchemy-migrate and alembic run, one after the other, so that we only need to create new migrations with alembic and do not need to change any of the existing migrations. Was that idea dropped? To my knowledge the idea wasn't dropped. If a project wants to implement that using the oslo.db system, that is fine; however, from my POV I'd prefer to just port the SQLA-migrate files over and drop the migrate dependency altogether. Whether or not a project does the "run both" step as an interim step doesn't affect that effort very much. Hi Mike, Just a quick question: how would the alembic scripts know where to start the migration from if the current installation had, up until that point, been using migrate? (I believe both alembic and migrate write the current version to a small table; would you look for that?) -Angus
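To illustrate Angus's question: sqlalchemy-migrate records its current version in a small table (named `migrate_version` by default), so a one-time cutover could read that table and stamp the corresponding alembic revision. The sketch below is hedged: the table layout follows sqlalchemy-migrate's defaults, but the `MIGRATE_TO_ALEMBIC` mapping of migrate numbers to alembic revision ids is hypothetical and would have to be maintained by each project.

```python
import sqlite3

# Hypothetical mapping: last sqlalchemy-migrate number -> alembic revision id
# representing the same schema. A real project would maintain this by hand.
MIGRATE_TO_ALEMBIC = {291: '1f2d3c4b5a69'}

def detect_migrate_version(conn):
    """Return sqlalchemy-migrate's recorded version, or None if never used."""
    table = conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type='table' AND name='migrate_version'").fetchone()
    if table is None:
        return None
    return conn.execute('SELECT version FROM migrate_version').fetchone()[0]

# Simulate a database previously managed by sqlalchemy-migrate
# (columns follow sqlalchemy-migrate's default version table).
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE migrate_version '
             '(repository_id VARCHAR(250), repository_path TEXT, '
             'version INTEGER)')
conn.execute("INSERT INTO migrate_version VALUES ('nova', '/migrations', 291)")

version = detect_migrate_version(conn)
print(version)                          # 291
print(MIGRATE_TO_ALEMBIC.get(version))  # 1f2d3c4b5a69
```

With the revision id in hand, one could then use alembic's stamp command to mark the database as being at that revision without re-running migrations; fresh installations (no `migrate_version` table) would simply start from alembic's base.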
[openstack-dev] [Horizon]Informal Pre-summit get together
The Horizon team will be meeting on Sunday night informally over a beer and dinner: Sunday 6pm @ The Charles Bar http://thecharlesbar.ca/ Hope to see you there! Doug
[openstack-dev] OpenStack 2015.1.0 for Debian Sid and Jessie
Hi, I am pleased to announce the general availability of OpenStack 2015.1.0 (aka Kilo) in Debian unstable (aka Sid) and through the official Debian backports repository for Debian 8.0 (aka Jessie). Debian 8.0 Jessie just released === As you may know, Debian 8.0 was released on the 25th of April, just a few days before OpenStack Kilo (on the 30th of April). Right after Debian Jessie got released, OpenStack Kilo was uploaded to unstable, and slowly migrated the usual way to the new Debian Testing, named Stretch. A lot of new packages had to go through the Debian FTP master NEW queue for review (they check mainly the copyright / licensing information, but also whether the package conforms to the Debian policy). I'd like to publicly thank Paul Tagliamonte from the Debian FTP team for his prompt work, which allowed Kilo to reach the Debian repositories just a few days after its release (in fact, Kilo was fully available in unstable more than a week ago). Debian Jessie Backports === Previously, each release of OpenStack, as a backport for Debian Stable, was only available through private repositories. This wasn't a satisfying solution, and we wanted to address it by uploading to the official Debian backports. The result is now available: all of OpenStack Kilo has been uploaded to Debian jessie-backports. If you want to use these repositories, just add them to your sources.list (note that the Debian installer proposes to add it by default): deb http://httpredir.debian.org/debian jessie-backports main (of course, you can use any Debian mirror, not just httpredir) All of the usual OpenStack components are currently available in the official backports, but there's still more to come, for example Heat, Murano, Trove and Sahara. For Heat, it's because we're still waiting for python-oslo.versionedobjects 0.1.1-2 to migrate to Stretch (as a rule: we can't upload to backports unless the package is already in Testing). 
For the last 3, I'm not sure if they will be backported to Jessie. Please provide your feedback and tell the Debian packaging team whether having them in the official jessie-backports repository is important for you, or whether Sid is enough. Also, at the time of writing, Horizon and Designate are still in the backports FTP master NEW queue (but they should be approved very soon). I have also just uploaded a first version of Barbican (still in the NEW queue waiting for approval...), and a package for Manila is currently being worked on by a new contributor. Note on Neutron off-tree drivers === The neutron-lbaas, neutron-fwaas and neutron-vpnaas packages have been uploaded and are part of Sid. If you need them through jessie-backports, please just let me know. All vendor-specific drivers have been separated from Neutron, and are now available as separate packages. I wrote packages for them all, but most of them wouldn't even build due to failed unit tests. Most of them used to build with the Kilo beta 3 of Neutron (all but 2 of them, which were already broken at the time), but they broke with the Kilo final release, as they weren't updated afterwards. I have repaired some of them, but working on these packages has proven to be very frustrating, as they receive very few updates from upstream. I do not plan to work much on them unless one of the conditions below holds: - my employer needs them - things move forward upstream, and the unit tests are repaired in the stackforge repositories. If you are a network hardware vendor reading this, please push for more maintenance, as it's in a really bad state at the moment. You are welcome to get in touch with me, and I'll be happy to help you help. Bug report === If you see any issue in the packages, please do report it to the Debian bug tracker. 
Instructions are available here: https://www.debian.org/Bugs/Reporting Happy installation, Thomas Goirand (zigo) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
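For readers who want the short version of the backports setup from the announcement above: the repository entry is a plain sources.list fragment, and packages must then be requested from backports explicitly, since apt never installs from backports on its own. The file path and package name below are only illustrative:

```
# /etc/apt/sources.list.d/jessie-backports.list
# (any Debian mirror works here, not only httpredir)
deb http://httpredir.debian.org/debian jessie-backports main
```

After an `apt-get update`, install a component with the target-release flag, e.g. `apt-get -t jessie-backports install nova-api` (nova-api being just an example package name).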
Re: [openstack-dev] Barbican : Unable to execute the curl command for uploading/retrieving the secrets with the latest Barbican code.
Thanks a lot John for your response. But I would like to know why we would have to fix the issue of creating a secret in an unauthenticated context for Barbican, since it would be good to have an access control mechanism enforced for accessing secrets, orders and other entities in Barbican. This should be the expected behavior from a security perspective. And also, we are able to access secrets by providing the right token from the Identity service (Keystone). Looking forward to your response. Thanks and Regards, Asha Seshagiri On Thu, May 14, 2015 at 4:43 PM, John Vrbanac john.vrba...@rackspace.com wrote: Asha, I spent some time looking into this. It looks to be a regression that occurred a few days ago when a CR was merged that moved us over to oslo_context. I have reported the issue here: https://bugs.launchpad.net/barbican/+bug/1455247 I have a couple of ideas on how to fix it, so keep your eyes out for a CR to resolve the issue. John Vrbanac On Thu, 2015-05-14 at 12:26 -0500, Asha Seshagiri wrote: Hi all, We are able to execute the curl commands on the new Barbican code provided we integrate it with Keystone. I ran into this issue because I was trying to change the configuration from localhost to the actual IP on a plain Barbican server, so that I would get the response and request objects with the actual IP rather than localhost. This configuration was required for setting up an HA proxy for Barbican. And then I thought of integrating with Keystone and configuring the Barbican server for https. 
*It's good learning to know that the latest code drop of Barbican enforces the authentication mechanism with Keystone, which does not allow us to execute the curl command without providing a token from the Identity service (Keystone) in the request, unlike the previous Barbican versions.* Please find the curl command requests and responses for uploading/retrieving the secrets on the Barbican server:

[root@Clientfor-HAProxy barbican]# curl -X POST -H 'content-type:application/json' -H 'X-Project-Id:12345' \
  -H 'X-Auth-Token:c9ac81784e1e4e089fccbca19f862be2' \
  -d '{"payload": "my-secret-here", "payload_content_type": "text/plain"}' \
  -k https://localhost:9311/v1/secrets
{"secret_ref": "https://localhost:9311/v1/secrets/02336016-623b-4deb-bca5-caedc0bf0e35"}

[root@Clientfor-HAProxy barbican]# curl -H 'Accept: application/json' -H 'X-Project-Id:12345' \
  -H 'X-Auth-Token:c9ac81784e1e4e089fccbca19f862be2' -k https://localhost:9311/v1/secrets
{"secrets": [{"status": "ACTIVE", "secret_type": "opaque", "updated": "2015-05-14T16:35:44.109536", "name": null, "algorithm": null, "created": "2015-05-14T16:35:44.103982", "secret_ref": "https://localhost:9311/v1/secrets/02336016-623b-4deb-bca5-caedc0bf0e35", "content_types": {"default": "text/plain"}, "creator_id": "cedd848a8a9e410196793c601c03b99a", "mode": null, "bit_length": null, "expiration": null}], "total": 1}

Thanks and Regards, Asha Seshagiri On Wed, May 13, 2015 at 4:26 PM, Asha Seshagiri asha.seshag...@gmail.com wrote: Hi all, When I started debugging, we found that the DEFAULT group is not used; instead oslo_policy is used. Please find the logs below: *2015-05-13 15:59:34.393 13210 WARNING oslo_config.cfg [-] Option "policy_default_rule" from group "DEFAULT" is deprecated. Use option "policy_default_rule" from group "oslo_policy".* *2015-05-13 15:59:34.394 13210 WARNING oslo_config.cfg [-] Option "policy_file" from group "DEFAULT" is deprecated. 
Use option policy_file from group oslo_policy.* 2015-05-13 15:59:34.395 13210 DEBUG oslo_policy.openstack.common.fileutils [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Reloading cached file /etc/barbican/policy.json read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/openstack/common/fileutils.py:64 2015-05-13 15:59:34.398 13210 DEBUG oslo_policy.policy [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Reloaded policy file: /etc/barbican/policy.json _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:424 2015-05-13 15:59:34.399 13210 ERROR barbican.api.controllers [req-0c6d2db4-bc15-4752-93ca-5203cf742d79 - 12345 - - -] Secret creation attempt not allowed - please review your user/project privileges 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers Traceback (most recent call last): 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 104, in handler 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers return fn(inst, *args, **kwargs) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File /root/barbican/barbican/api/controllers/__init__.py, line 85, in enforcer 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers _do_enforce_rbac(inst, pecan.request, action_name, ctx, **kwargs) 2015-05-13 15:59:34.399 13210 TRACE barbican.api.controllers File
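For anyone else hitting these warnings: they come from oslo.config's option deprecation handling and go away once the options are declared under the section oslo.policy now expects. A hypothetical /etc/barbican/barbican.conf fragment (the values shown are illustrative, not Barbican defaults):

```ini
# Deprecated location, still honored but warned about:
# [DEFAULT]
# policy_file = /etc/barbican/policy.json
# policy_default_rule = default

# New location expected by oslo.policy:
[oslo_policy]
policy_file = /etc/barbican/policy.json
policy_default_rule = default
```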
[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting 5/20/15
All, We won't have an Octavia meeting next week due to the OpenStack summit, but we will have a few sessions there, so please make sure to say hi... German
Re: [openstack-dev] Barbican : Unable to execute the curl command for uploading/retrieving the secrets with the latest Barbican code.
Hi Asha, The reason we support an Unauthenticated Context in Barbican is purely for development purposes. We recommend that all production Barbican deployments use Keystone or an alternative AuthN/AuthZ service in front of Barbican. Setting up a working Keystone environment just to hack on Barbican is a steep requirement, which is why we need the Unauthenticated Context to work. - Douglas Mendizabal
Re: [openstack-dev] [QA] Question about Heat Tempest tests
On Fri, May 15, 2015 at 1:53 AM, Yaroslav Lobankov yloban...@mirantis.com wrote: Hello everyone, I have a question about Heat Tempest tests. Is there any dsvm job that runs these tests? At first glance, no dsvm job runs them. We are using in-tree functional tests now: https://github.com/openstack/heat/tree/master/heat_integrationtests And Heat has a Tempest check job too (for the API tests): check-tempest-dsvm-heat, for example: https://review.openstack.org/#/c/182971/ -Angus Thank you! Regards, Yaroslav Lobankov.
Re: [openstack-dev] [neutron] Neutron API rate limiting
There isn't anything in neutron at this point that does that. I think the assumption so far is that you could rate limit at your load balancer or whatever distributes requests to neutron servers.
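To make the load-balancer suggestion concrete, here is one hypothetical HAProxy fragment that throttles per-client request rates in front of neutron-server. All names, the port, and the thresholds are assumptions for illustration, not anything Neutron ships:

```
frontend neutron-api
    bind *:9696
    # Track per-source-IP HTTP request rate over a 10s window ...
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    # ... and reject clients exceeding ~100 requests per 10s
    http-request deny if { sc_http_req_rate(0) gt 100 }
    default_backend neutron-servers

backend neutron-servers
    server neutron1 10.0.0.11:9696 check
    server neutron2 10.0.0.12:9696 check
```

This rejects abusive clients before they reach the API workers at all, which is exactly the property you want for the DoS scenario described below.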
Re: [openstack-dev] Add 'Ally Skills Workshop' to the sched.org Schedule.
On Thu, May 14, 2015 at 04:52:23PM +0200, Thierry Carrez wrote: So apparently the workshop has a max capacity of 40 people and folks have to register in advance - they can't just show up to the session without pre-registering. It would therefore be slightly counter-productive to add it to the schedule, although you should advise your friends to sign up and join! Okay, no problem. I'll find another way to remember :) Yours Tony.
[openstack-dev] [neutron] Neutron API rate limiting
I was batting around some ideas regarding IPAM functionality, and it occurred to me that rate-limiting at the API level might come in handy; as an example, it might help provide one level of defense against DoS for an external IPAM provider that Neutron makes calls out to. I'm simply using IPAM as an example here; there are a number of other (i.e. better) reasons for rate-limiting at the API level. I may just be ignorant (please forgive me if I am :) ), but I'm not aware of any rate-limiting functionality at the API level in Neutron. Does anyone know if such a feature exists and could point me at some documentation? If it doesn't exist, has the Neutron community broached this subject before? I have to imagine someone has brought this up before and I was just out of the loop. Anyone have thoughts they care to share? Thanks! -Ryan
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
On 5/14/15 7:12 PM, Angus Salkeld wrote: On Fri, May 15, 2015 at 4:46 AM, Mike Bayer mba...@redhat.com wrote: On 5/14/15 11:58 AM, Doug Hellmann wrote: At one point we were exploring having both sqlalchemy-migrate and alembic run, one after the other, so that we only need to create new migrations with alembic and do not need to change any of the existing migrations. Was that idea dropped? to my knowledge the idea wasn't dropped. If a project wants to implement that using the oslo.db system, that is fine, however from my POV I'd prefer to just port the SQLA-migrate files over and drop the migrate dependency altogether. Whether or not a project does the "run both" step as an interim step doesn't affect that effort very much. Hi Mike, Just a quick question: how would the alembic scripts know where to start the migration from if the current installation had, up until that point, been using migrate? (I believe both alembic and migrate write the current version to a small table; would you look for that?) Thinking about that issue, the most controllable and clean-break way to do it would be to add support to Alembic itself that augments its own handling of the alembic_version table to transfer data from an existing sqlalchemy_migrate table. I can even see using traditional alembic hex-style version numbers in migration files which then also can refer to their previous numerically-based migrate file. It's not unreasonable that Alembic would support some standard upgrade path from Migrate, the only other migration tool SQLAlchemy has ever had, so I'd just add that as a feature. 
-Angus
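Mike's cutover idea above can be sketched in a few lines: read the version recorded by sqlalchemy-migrate and map it onto the alembic revision to stamp. This is purely illustrative - the table name (`migrate_version`) is the one sqlalchemy-migrate uses, but the version number and revision hash in the mapping are made up:

```python
import sqlite3  # stand-in for whatever DB-API connection the deployment uses

# Assumed mapping: the final sqlalchemy-migrate version number -> the
# alembic revision representing the same schema (both values invented).
MIGRATE_TO_ALEMBIC = {73: "1a2b3c4d5e6f"}

def read_migrate_version(conn):
    """Return the current sqlalchemy-migrate version, or None if absent."""
    row = conn.execute("SELECT version FROM migrate_version").fetchone()
    return row[0] if row else None

def alembic_revision_for(conn):
    """Translate the migrate version into the alembic revision to stamp."""
    return MIGRATE_TO_ALEMBIC.get(read_migrate_version(conn))
```

A one-time upgrade script would then run `alembic stamp <revision>` (or call `alembic.command.stamp`) with the returned value, so that subsequent `alembic upgrade head` runs start from the right point.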
Re: [openstack-dev] [new][cognitive] Announcing Cognitive - a project to deliver Machine Learning as a Service for OpenStack
On 15 May 2015 at 00:19, Debojyoti Dutta ddu...@gmail.com wrote: Hi! It is a great pleasure to announce the development of a new project called Cognitive. Cognitive provides Machine Learning [1] as a Service that enables operators to offer next generation data science based services on top of their OpenStack clouds. I was indeed wondering when Machine Learning as a Service would come up... This project will begin as a StackForge project based upon an empty cookiecutter [2] repo. The repos to work in are: Server: https://github.com/stackforge/cognitive Client: https://github.com/stackforge/python-cognitiveclient Please join us via IRC on #openstack-cognitive on freenode. We will be holding a doodle poll to select times for our first meeting the week after summit. This doodle poll will close May 24th and meeting times will be announced on the mailing list at that time. At our first IRC meeting, we will draft additional core team members. We would like to invite interested individuals to join this exciting new development effort! From my little experience, drafting core members before even actually having a code base has drawbacks. Also, it seems the initial starting team is already large enough to ensure support for 1 or 2 release cycles. Please commit your schedule in the doodle poll here: http://doodle.com/drrka5tgbwpbfbxy Initial core team: Steven Dake, Aparupa Das Gupa, Debo~ Dutta, Johnu George, Kyle Mestery, Sarvesh Ranjan, Ralf Rantzau, Komei Shimamura, Marc Solanas, Manoj Sharma, Yathi Udupi, Kai Zhang. Hey! What's the Neutron PTL doing there? Sorry, we need his reviews; we can't loan him to you! A little bit about Cognitive: Data driven applications on cloud infrastructure increasingly rely on Machine Learning. Most data driven applications today use Machine Learning (ML). This often requires application developers and data scientists to write their own machine learning stack or deploy other packages to do any kind of data science based applications. 
Data scientists also need an easy way to rapidly experiment with data without having to write basic infrastructure for data manipulation. Cognitive is a Machine Learning service on top of OpenStack and provides machine learning based services to tenants (API, workbench, compute service). I wonder what kind of services you would offer; also, you could have shared something about the architecture of this service. Is it providing a full machine learning stack, or just facilitating the use of existing ones? But I see that there's a link to a wiki page below. This might have all the answers. For information about blueprints check out: https://blueprints.launchpad.net/cognitive https://blueprints.launchpad.net/python-cognitiveclient For more details, check out our Wiki: https://wiki.openstack.org/wiki/Cognitive ... and unfortunately the wiki is empty ;) Please join the awesome Cognitive team in designing a world class Machine Learning as a Service solution. We look forward to seeing you on IRC on #openstack-cognitive. Regards, Debo~ Dutta (on behalf of the initial team) [1] http://en.wikipedia.org/wiki/Machine_learning [2] https://github.com/openstack-dev/cookiecutter
Re: [openstack-dev] [Rally] Improve review process
it's super cool!! On Wed, May 6, 2015 at 9:41 AM Yingjun Li yinja...@163.com wrote: Nice! On May 5, 2015, at 8:11 PM, Roman Vasilets rvasil...@mirantis.com wrote: Hi, Rally Team. I have created a Rally Gerrit dashboard that organizes patches into groups: Critical for next release, Waiting for final approve, Bug fixes, Proposed specs, Important patches, Ready for review, Has -1 but passed tests. Please use the link http://goo.gl/iRxA5t for your convenience. The patch is here: https://review.openstack.org/#/c/179610/ It was made with gerrit-dash-creator. The first group contains the patches that need to merge for the nearest release. The content of the next three groups is obvious from the titles. Important patches - patches chosen (starred) by Boris Pavlovic or Mikhail Dubov. Ready for review - patches that are waiting for attention. And the last section - patches with a -1 mark that nevertheless passed CI. Roman Vasilets, Mirantis Inc. Intern Software Engineer
Re: [openstack-dev] [Murano] [Mistral] [Zaqar] [Keystone] SSH workflow action
I would agree some kind of keystone support would be great. Whether it's the right solution for the barbican/vm workflow though, I'm not as sure. You can have a look at the implementation if you'd like. It does not copy any Keystone code. It does use Keystone's token signing code though, to create a signed message between nova and barbican. The full review is here: https://review.openstack.org/#/c/159573/ Relevant bits are creating the signed token: https://review.openstack.org/#/c/159573/5/barbican/plugin/vendordata_barbican.py which calls: https://review.openstack.org/#/c/159573/5/barbican/token.py and verifying the token: https://review.openstack.org/#/c/159573/5/barbican/api/middleware/context.py Pretty simple stuff really. Thanks, Kevin From: Douglas Mendizábal [douglas.mendiza...@rackspace.com] Sent: Thursday, May 14, 2015 4:09 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Murano] [Mistral] [Zaqar] [Keystone] SSH workflow action +1 to a Keystone and Oslo solution for this problem. One of my objections to Kevin's spec for Barbican is the copying of Keystone code into the Barbican tree. It seems to me like a code smell that we're trying to solve a problem that Keystone should be solving. Like I mentioned in the other thread [1] I would be willing to schedule this discussion during one of the Barbican Working Sessions. I'm also very interested in learning more about the Policy work being done, since we recently did a lot of work to provide additional policies for some of our contributors who find the current Keystone models burdensome. [2] - Douglas Mendizábal [1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064196.html [2] http://specs.openstack.org/openstack/barbican-specs/specs/kilo/add-creator-only-option.html On 5/12/15 8:43 PM, Zane Bitter wrote: On 12/05/15 13:06, Georgy Okrokvertskhov wrote: There is one thing which still bothers me. It is authentication. 
Right now with a separate RabbitMQ instance we keep VM authentication isolated from the OpenStack infra. This is still a problem if you want to use webhooks (Heat autoscaling, Murano actions) via our own authentication models. If we plan to use Zaqar it will be interesting to know how Zaqar solves this issue. Aha, if you'd read my blog post you would know the answer ;) There's no specific provision for this in Zaqar at the moment AFAIK. The problem is really Keystone: it was never designed for authenticating applications to the cloud, only real live users. We need to fix this, in Keystone and Oslo, for the benefit of all application-facing services. Some work has already started: - Keystone can now support separate backends for separate domains, so even if you are backed by read-only LDAP you can create service users in a separate DB-backed domain: http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/ - Work is planned on Dynamic Policy to make the authorisation model more powerful: http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/ I'm not sure that this goes far enough though (although I don't completely grok the implications of Dynamic Policy). We really want users to be able to define their own policy for application accounts that they create, and at an even more fine-grained level, so a common library for this sort of authorisation would be helpful. Frankly, I don't think it is a good idea to use Keystone credentials or tokens for MQ clients on VMs. This topic probably deserves its own e-mail thread. Keystone credentials _are_ the answer, but they must not be the *user's* Keystone credentials. I can tell you how Heat does this right now for authenticating application callbacks for WaitConditions. It's not exactly pretty, but it works. Basically we create the application users in a separate domain and then do extra authorisation checks ourselves. 
Steve Hardy wrote a comprehensive summary on his blog: http://hardysteven.blogspot.com/2014/04/heat-auth-model-updates-part-2-stack.html So the mechanisms are there. In the short term we'd need some cross-project co-operation to define a system through which we can do this across projects (i.e. Murano or any other service can create a user and have Zaqar authorise it for listening on a particular queue). Maybe this is an extra parameter when creating a queue in Zaqar to also create a user account in a separate domain (the way Heat does) with permission to send and/or receive only on that particular queue and return its credentials. That would be easier to secure and easier to implement than having other services create the user accounts themselves. In the medium term hopefully we'll be able to come up with a less hacky solution that uses Oslo libraries to manage all of the user creation and authorisation. It will be interesting to discuss this with Keystone
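Stepping back to the signed-message approach Kevin described earlier in the thread (nova handing barbican a message whose origin barbican can verify): the general shape of such an exchange can be sketched with stdlib HMAC. The real patch relies on Keystone's token signing rather than a shared key, so treat this purely as an illustration of the pattern, not the implementation:

```python
import hashlib
import hmac
import json

# Illustrative only: the real code verifies a signed Keystone token,
# not a pre-shared key.
SHARED_KEY = b"illustrative-shared-secret"

def sign_message(payload):
    """Producer side (nova, in the vendordata case): serialize and sign."""
    body = json.dumps(payload, sort_keys=True)
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def verify_message(message):
    """Consumer side (barbican middleware): reject unverifiable input."""
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["signature"]):
        raise ValueError("signature mismatch")
    return json.loads(message["body"])
```

The point of the pattern is that the consumer never trusts the payload itself, only the signature it can independently recompute, which is also why a tampered body is rejected outright.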
Re: [openstack-dev] [nova][heat] sqlalchemy-migrate tool to alembic
On Fri, May 15, 2015 at 11:54 AM, Mike Bayer mba...@redhat.com wrote: Thinking about that issue, the most controllable and clean-break way to do it would be to add support to Alembic itself that augments its own handling of the alembic_version table to transfer data from an existing sqlalchemy_migrate table. I can even see using traditional alembic hex-style version numbers in migration files which then also can refer to their previous numerically-based migrate file. It's not unreasonable that Alembic would support some standard upgrade path from Migrate, the only other migration tool SQLAlchemy has ever had, so I'd just add that as a feature. Thanks. Would you suggest we hold off moving to alembic (in Heat) until you have this ironed out? I just want to make sure we don't do this prematurely. 
-Angus
[openstack-dev] [devstack] What's required to accept a new distro release?
Hi All, I have my reasons, but I need to use Ubuntu 15.04 and devstack. Clearly I can run: FORCE=yes ./stack.sh and with a couple of patches I get a devstack up and running *for my config/system*. I'm wondering what the requirements are for accepting something like:

-if [[ ! ${DISTRO} =~ (precise|trusty|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then

I figure there must be more to it, as utopic was never added. Yours Tony.
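For context, the check being patched is just an unanchored bash regex match against $DISTRO (devstack's actual check is inline in stack.sh; the function wrapper and echo here are mine, with the distro list taken from the proposed patch):

```shell
# Simplified stand-in for devstack's supported-distro gate; stack.sh
# aborts (unless FORCE=yes is set) when the running distro fails this match.
is_supported_distro() {
    local distro=$1
    [[ ${distro} =~ (precise|trusty|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]
}

if is_supported_distro "vivid"; then
    echo "supported"
else
    echo "unsupported"
fi
```

Adding a release is therefore a one-word change to the alternation; the real question raised here is what CI commitment should back that word.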
Re: [openstack-dev] [api] Proposing Michael McCune as an API Working Group core
On 05/14/2015 12:58 PM, Everett Toews wrote: Top posting to make it official... Michael McCune (elmiko) is an API Working Group core! Cheers, Everett thanks everybody! =) mike
Re: [openstack-dev] [devstack] What's required to accept a new distro release?
On Fri, May 15, 2015 at 01:35:06PM +1000, Ian Wienand wrote: On 05/15/2015 01:05 PM, Tony Breeds wrote: I'm wondering what the requirements are for accepting something like:

-if [[ ! ${DISTRO} =~ (precise|trusty|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then

Having a CI story makes it a no-brainer... anyone working on getting vivid nodes up in nodepool? In the past we've generally accepted it if people have a convincing story that it works (i.e. they're using it and tempest is working, etc). Thanks. Apart from a strange problem with rabbitmq taking too long to start, it works fine for me. I'll do a complete tempest run and look into what would be involved in getting (and maintaining) vivid in nodepool [until wily is released]. Honestly I doubt 7.0|wheezy|sid|testing|jessie all work anyway -- they don't have any CI (that I know of) -- so we're probably overselling it anyway. Could be. If I find time after the summit I'll try them. 7.0/wheezy is oldstable now. It'd be good to know that 8.0/jessie works. I figure there must be more to it as utopic was never added. I think that was approved, failed a test and was never re-merged [1]. So I guess it's more that nobody cared? Ahh okay. Yours Tony.
Re: [openstack-dev] [all] [tc] A way for Operators/Users to submit feature requests
On 05/14/15 23:34, Jay Pipes wrote: On 05/14/2015 03:48 PM, Maish Saidel-Keesing wrote: I just saw an email on the Operators list [1] that I think would allow a much simpler process for the non-developer community to submit a feature request. I understand that this was raised once upon a time [2] - at least in part a while back. Rally has the option to submit a feature request (a.k.a. backlog) - which I think is straightforward and simple. I think this will be a good way for those who are not familiar with the way a spec should be written, and honestly would not know how to write such a spec for any of the projects, but have identified a missing feature or a need for an improvement in one of the projects. They only need to specify 3 small things (a sentence or two for each): 1. Use Case 2. Problem description 3. Possible solution I am not saying that each feature request should be implemented - or that each possible solution is the best and only way to solve the problem. It should be up to each and every project how this will be (or even if it should be) implemented, how important it will be for them to implement this feature, and what priority it should receive. A developer then picks up the request and turns it into a proper blueprint with actionable items. Of course this has to be a valid feature request, and not just someone looking for support - how exactly this should be vetted, I have not thought through to the end. But I was looking to hear some feedback on the idea of making this a way for all of the OpenStack projects to collect actual feedback in a simple way. Hi Maish, I would support this kind of thing for projects that wish to do it, but at the same time, I wouldn't want the TC to mandate that all projects use this method of collecting feedback. Projects, IMHO, should be free to self-organize as they wish, including developing processes that make the most sense for the project team. Best, -jay Thanks for the feedback Jay. 
There is a line between projects self-organizing and providing a standard way that OpenStack does things. The TC (and the community as a whole) has decided on several guidelines for projects to be part of OpenStack: the way we gate and test, security guidelines, naming conventions, etc. These are mandated. A standard way of accepting feature requests would be a good thing, in my opinion. -- Best Regards, Maish Saidel-Keesing