[openstack-dev] [Neutron][stable] How to backport database schema fixes
Hi all, It seems that it is currently hard to backport any database schema fix to Neutron [1], which uses alembic to manage the db schema version. Nova had the same issue before, and its workaround is to add some placeholder files before each release. So, first: do we allow db schema fixes to be backported to stable branches for Neutron? If we do, how about adding placeholder files at the end of each release cycle, similar to Nova? Or do we have a better solution for alembic? From the stable-maintainer side, we have a policy for stable backports (https://wiki.openstack.org/wiki/StableBranch) under which DB schema changes are forbidden. If we allow db schema backports for more than one project, I think we need to update the wiki.

[1] https://review.openstack.org/#/c/110642/

-- Tang Yaguang Canonical Ltd. | www.ubuntu.com | www.canonical.com gpg key: 0x187F664F

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
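Nova's placeholder trick can be mimicked under alembic: reserve a chain of empty revisions when a release is cut, then let a stable-branch backport fill one in without moving the migration head. A minimal sketch of such a placeholder revision follows; the filename, revision ids, and naming scheme are hypothetical, and the usual `from alembic import op` import is omitted because a placeholder touches no tables:

```python
# juno_placeholder_001.py -- hypothetical placeholder migration reserved
# at the end of a release cycle.  A later stable-branch backport can put
# real DDL into upgrade()/downgrade() without renumbering the head.

# revision identifiers used by alembic (ids here are made up)
revision = 'juno_placeholder_001'
down_revision = 'icehouse_final'


def upgrade():
    # Intentionally a no-op until a backported schema fix claims it.
    pass


def downgrade():
    # Mirror of upgrade(): nothing to undo yet.
    pass
```

Because the revision chain already contains these empty slots on master, a stable branch that fills one in stays compatible with later releases that kept it empty.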
Re: [openstack-dev] [Nova][libvirt] when deleting an instance which is in migrating state, instance files can stay on the destination node forever
Could we use TaskFlow (https://wiki.openstack.org/wiki/TaskFlow) to manage task state and resources for this kind of task in Nova? Cinder has piloted TaskFlow for volume backup tasks. Is anyone interested in this suggestion, or has anyone done some research on improving the live migration workflow?

2013/12/17 Vladik Romanovsky vladik.romanov...@enovance.com: I would block it in the API, or have the API cancel the migration first. I don't see a reason to start an operation that is meant to fail, and which also has a complex chain of events following its failure. Regardless of the above, I think the suggested exception handling is needed in any case. Vladik

- Original Message - From: Loganathan Parthipan parthi...@hp.com: Isn't just handling the instance_not_found exception enough? By this time the source will have been cleaned up, and the destination VM's resources will be cleaned up by the periodic task, since the VM is not associated with that host. Am I missing something here?

From: 王宏 [mailto:w.wangho...@gmail.com]: Hi all. While trying to fix bug https://bugs.launchpad.net/nova/+bug/1242961 I ran into trouble. The bug is easy to reproduce: live-migrate a VM in block_migration mode, then delete the VM immediately. The cause of the bug is as follows: 1. Live migration takes a relatively long time, so the VM can be deleted successfully before the migration completes; we then get an exception while live-migrating. 2. After the live migration fails, we start to roll back.
But the rollback method reads and modifies the VM's info in the db. Because the VM has already been deleted, we get an instance_not_found exception and the rollback fails too. I see two ways to fix the bug: i) Add a check in nova-api: when asked to delete a VM, return an error if its vm_state is LIVE_MIGRATING. This is very simple but needs careful consideration; there is a related discussion at http://lists.openstack.org/pipermail/openstack-dev/2013-October/017454.html, but it reached no conclusion. ii) Before the live migration, collect all the data the rollback method needs, and add a new rollback method that cleans up the resources at the destination based on that data (the resources at the source have already been cleaned up by the delete). I am not sure which one to choose; any other ideas? :) Regards, wanghong
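Option (i) amounts to a guard in the delete path. A minimal sketch is below; the state constant, exception class, and helper name are illustrative stand-ins, not Nova's actual code:

```python
# Sketch of option (i): refuse to delete an instance while it is
# live-migrating.  LIVE_MIGRATING and InstanceInvalidState are assumed
# names for illustration, not necessarily Nova's real identifiers.
LIVE_MIGRATING = 'migrating'


class InstanceInvalidState(Exception):
    pass


def check_delete_allowed(instance):
    """Raise if the instance is mid-live-migration, else return None."""
    if instance.get('task_state') == LIVE_MIGRATING:
        raise InstanceInvalidState(
            'Cannot delete instance %s during live migration; cancel '
            'the migration or wait for it to finish.'
            % instance.get('uuid'))


# example: a quiesced instance passes the check
check_delete_allowed({'uuid': 'abc-123', 'task_state': None})
```

The API layer would call such a check before queuing the delete, which is the "block it in the API" behaviour Vladik suggests above.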
[openstack-dev] [nova] where to expose network quota
Hi all, Neutron now has its own quota management API for network-related items (floating IPs, security groups, etc.) that are also managed by Nova. When using Nova with Neutron as the network service, the network-related quota items are stored in two different databases and managed by different APIs. I'd like your suggestions on which of the following is the best fix: 1. Have Nova proxy all network-related quota operations (update, list, delete) through the Neutron API. 2. Filter the network-related quota info out of Nova when Neutron is the network service, and change novaclient to get quota info from both the Nova and Neutron quota APIs.

-- Tang Yaguang, Canonical Ltd.
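Option 2 is essentially a client-side merge of the two quota views. A rough sketch (the key names and the network/compute split are illustrative, not the exact Nova/Neutron quota schemas):

```python
# Sketch of option 2: a novaclient-side merge of Nova and Neutron
# quota dicts.  The set of network-related keys below is illustrative,
# not an exhaustive list of Neutron-managed quota items.
NETWORK_QUOTA_KEYS = {'floating_ips', 'security_groups',
                      'security_group_rules'}


def merged_quotas(nova_quotas, neutron_quotas):
    """Drop Nova's network items and take Neutron's values instead,
    so the user sees one consistent quota view."""
    merged = {k: v for k, v in nova_quotas.items()
              if k not in NETWORK_QUOTA_KEYS}
    merged.update(neutron_quotas)
    return merged


# example: Neutron's floating-ip quota wins over Nova's stale copy
view = merged_quotas({'instances': 10, 'floating_ips': 10},
                     {'floating_ips': 50})
# view == {'instances': 10, 'floating_ips': 50}
```

Option 1 would instead keep this merge server-side, with Nova forwarding the network items to Neutron's quota API on every update/list/delete.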
[openstack-dev] [Keystone] python-keystoneclient v3 functionality
Hi all, I am sorry if this has been discussed before, but the question is: will we support Keystone v3 operations in python-keystoneclient? I know most of the v3 functionality has been implemented in python-openstackclient, but the python-openstackclient wiki says it is primarily a wrapper around the python-*client libraries, providing a unified interface to users. End users have so far used python-keystoneclient to manage users, tenants, and services; if we don't intend to support v3 functionality in keystoneclient, that means we force end users to switch from keystoneclient to openstackclient. Is this what we want to do?

-- Tang Yaguang, Canonical Ltd.
Re: [openstack-dev] [Keystone] python-keystoneclient v3 functionality
Thanks Jamie. The follow-up question, then, is: do we intend to move the other services' client-library v3 identity support to python-openstackclient? AFAIK v3 is poorly supported in the Nova, Cinder, and Neutron client libraries, and I am working on adding v3 support to those libraries [1]; I just want to make sure that is the correct direction.

[1] https://review.openstack.org/#/c/81749/ https://review.openstack.org/#/c/81767/

2014-04-01 12:08 GMT+08:00 Jamie Lennox jamielen...@redhat.com: On Tue, 2014-04-01 at 11:53 +0800, Yaguang Tang wrote: [original question quoted; trimmed] It depends what you mean by python-keystoneclient. If you mean the Python library, then yes, it supports the v3 API already. If you mean the keystone CLI that is currently bundled as part of python-keystoneclient, then yes, that is deprecated in favour of python-openstackclient. We will maintain the CLI application in keystoneclient; however, even for v2 API calls I recommend that you use the openstack CLI tool. Jamie

-- Tang Yaguang, Canonical Ltd.
Re: [openstack-dev] [Keystone] python-keystoneclient v3 functionality
2014-04-03 7:12 GMT+08:00 Jamie Lennox jamielen...@redhat.com:

- Original Message - From: Adam Young ayo...@redhat.com: On 04/01/2014 07:36 AM, Yaguang Tang wrote: [follow-up question quoted; trimmed] Sort of. The Keystone client should be responsible for all of the service-catalog manipulation throughout OpenStack, so the Cinder client should use the Keystone client. To see the idea, read Jamie's blog post: http://www.jamielennox.net/blog/2014/02/24/client-session-objects/ The main idea is that Keystone will handle the HTTP session setup, so that we have a single place to focus attention on for HTTP network-security issues, and you should not need to parse the service catalog at all.

[Jamie:] So Adam is right in that the general idea for transitioning people to the v3 API will be to use the keystoneclient.session.Session object, which will do everything for you. This is not going to be an easy transition for everyone, and I've proposed a summit session (http://summit.openstack.org/cfp/details/205) in which I want to deal with precisely this problem. Regarding your notion of openstackclient, though, you need to separate the concept of a CLI from the library. Take Heat or Horizon, for example: they communicate with Keystone through python-keystoneclient and with Nova via python-novaclient, etc. They do not use the keystone or nova CLI utilities. The primary job of the python-*client libraries is NOT to provide a CLI.
The CLI is just an application that makes use of the library.

[Yaguang:] Yes, but when we talk about v3 support for Nova, Cinder, and Neutron, I think it is mostly about the auth_token middleware and the CLI support in the client libraries.

[Jamie:] So yes, I think it has generally been accepted that the clients will move (at their own pace) to using openstackclient for their CLIs, but openstackclient will still rely on the various libraries to do the actual communication with the services. Jamie

[Yaguang:] I wonder whether it is worth enabling v3 in the CLIs while we move to openstackclient, so that OpenStack users and operators can test and evaluate the v3 API, and we could probably add a v3 test gate to CI. What we face today is that users cannot use the v3 API with the novaclient CLI; this has been asked about many times on the OpenStack user mailing list.

[earlier exchange quoted again; trimmed]

-- Tang Yaguang, Canonical Ltd.
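The Session-based pattern Jamie and Adam describe looks roughly like the following. This is sketched from memory of the keystoneclient API of that era; treat the module paths, class names, and argument spellings as assumptions to verify against the keystoneclient documentation, and note it is not runnable without python-keystoneclient and a live Keystone endpoint:

```python
# Assumed keystoneclient session API (circa Icehouse); endpoint and
# credential values below are placeholders.
from keystoneclient import session
from keystoneclient.auth.identity import v3

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# Other clients reuse the same authenticated session, so no project
# parses the service catalog or handles tokens itself, e.g.:
#   from novaclient import client as nova_client
#   nova = nova_client.Client('2', session=sess)
```

The design point is the one made above: Keystone owns HTTP session setup and catalog lookup in one place, and every other client library simply consumes the session.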
[openstack-dev] [Neutron] Release Notes for Icehouse
Hi all, I think it is important for our developers to publish an official release note, as the other core OpenStack projects do at the end of the Icehouse development cycle; it should contain the new features added and the upgrade issues users need to be aware of. Would anyone like to volunteer to help accomplish it? https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse

-- Tang Yaguang, Canonical Ltd.
Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!
How can I enable or trigger Mine Sweeper for VMware-related patches? I updated a VMware driver patch today (https://review.openstack.org/#/c/51793/) but haven't seen any results posted.

2013/10/18 Sean Dague s...@dague.net: On 10/17/2013 02:29 PM, Dan Smith wrote: This system is running Tempest against a VMware deployment and posting the results publicly. This is really great progress; it will go a long way in helping reviewers be more confident in changes to this driver. This is huge progress: congrats and thanks to the VMware team for making this happen! There is really no substitute for the value this will provide for overall quality.

[Sean:] Agreed. Nice job, guys! It's super cool to now see SmokeStack and Mine Sweeper posting back on patches. Tip of the hat to the VMware team for pulling this together so quickly. -Sean -- Sean Dague http://dague.net

-- Tang Yaguang, Canonical Ltd.
Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps
After updating to iso8601>=0.1.8, the stable/neutron Jenkins tests break, because stable/glance pins iso8601 to 0.1.4; log: https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console . I have filed a bug to track this: https://bugs.launchpad.net/glance/+bug/1255419.

2013/11/26 Thomas Goirand z...@debian.org: I'm sorry to restart this topic. I don't mind if we upgrade to 0.1.8, but then I will need patches for Havana to support version 0.1.8. Otherwise it's going to be very difficult on the packaging side: I will need to upload 0.1.8 for Icehouse, but then it will break everything else (e.g. Havana) that is currently in Sid. Were there already patches for that? If so, please point me to them so that I can cherry-pick them and carry them in the Debian packages (they don't have to be backported to the Havana branch; I'm fine keeping the patches in the packages, as long as they are identified). Is there a way to grep all commits in Gerrit to see if such patches were committed recently? Cheers, Thomas Goirand (zigo)

On 10/24/2013 09:37 PM, Morgan Fainberg wrote: It seems like adopting 0.1.8 is the right approach. If it doesn't work with other projects, we should work to help those projects get updated to work with it. --Morgan

On Thursday, October 24, 2013, Zhi Yan Liu wrote: Hi all, adopting 0.1.8 as the iso8601 minimum version: https://review.openstack.org/#/c/53567/ zhiyan

On Thu, Oct 24, 2013 at 4:09 AM, Dolph Mathews dolph.math...@gmail.com wrote: On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins robe...@robertcollins.net wrote: On 24 October 2013 07:34, Mark Washenberger mark.washenber...@markwash.net wrote: Hi folks! 1) Adopt 0.1.8 as the minimum version in openstack-requirements. 2) Do nothing (i.e.
let Glance behavior depend on iso8601 in this way, and just fix the tests so they don't care about these extra formats). 3) Make Glance work with the added formats even if 0.1.4 is installed.

[Robert:] I think we should do (1), because (2) will permit surprising, non-obvious changes in behaviour and (3) is just nasty engineering. Alternatively, add a (4), which is (2) plus whinging on startup if 0.1.4 is installed, to make this situation easy to identify.

[Dolph:] I'm in favor of (1), unless there's a reason why 0.1.8 is not viable for another project or packager; in which case, I've never heard the term whinge before, so there should definitely be some of that.

[Robert:] The last thing a new or upgraded deployment wants is something like Nova, or a third-party API script, failing in non-obvious ways with no breadcrumbs to lead them to 'upgrade iso8601' as the answer. -Rob -- Robert Collins rbtcoll...@hp.com, Distinguished Technologist, HP Converged Cloud

-- Tang Yaguang, Canonical Ltd.
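Option (1), which the thread converges on, is a one-line change in the shared requirements list; a hedged sketch of the relevant line (the exact file name and layout of openstack-requirements varied by release):

```
# global requirements (sketch): raise the floor so every project gets
# the 0.1.8 parser behaviour instead of varying silently with 0.1.4
iso8601>=0.1.8
```

Since each project's requirements are synced from this list, the pin propagates the minimum version everywhere, which is what makes the Grizzly/Havana packaging story Thomas describes above the hard part.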
Re: [openstack-dev] [Neutron]Tempest testing error
Hi Trinath, I think you are missing some configuration in tempest.conf for your testing; the exception is due to no public_network_id being defined. An example tempest.conf from the upstream gate tests is here for your reference: http://logs.openstack.org/32/99832/5/check/check-tempest-dsvm-neutron/0bab272/logs/tempest_conf.txt.gz

2014-08-01 10:29 GMT+08:00 trinath.soman...@freescale.com: Hi - When I execute the router tests from Tempest in my devstack environment, I get the following error for all the tests:

ft46.5: tempest.api.network.test_routers.RoutersTest.test_create_show_list_update_delete_router[gate,smoke]_StringException: Empty attachments: stderr stdout
pythonlogging:'': {{{2014-08-01 02:29:26,473 22759 INFO [tempest.common.rest_client] Request (RoutersTest:test_create_show_list_update_delete_router): 400 POST http://127.0.0.1:9696/v2.0/routers 0.164s}}}
Traceback (most recent call last):
  File tempest/api/network/test_routers.py, line 60, in test_create_show_list_update_delete_router
    admin_state_up=False)
  File tempest/services/network/json/network_client.py, line 71, in create_router
    resp, body = self.post(uri, body)
  File tempest/services/network/network_client_base.py, line 73, in post
    return self.rest_client.post(uri, body, headers)
  File tempest/common/rest_client.py, line 218, in post
    return self.request('POST', url, extra_headers, headers, body)
  File tempest/common/rest_client.py, line 430, in request
    resp, resp_body)
  File tempest/common/rest_client.py, line 479, in _error_checker
    raise exceptions.BadRequest(resp_body)
BadRequest: Bad request
Details: {u'NeutronError': u"Invalid input for external_gateway_info. Reason: '' is not a valid UUID."}

Kindly help me troubleshoot this. Is it an issue with the devstack environment? Thank you.
-- Trinath Somanchi - B39208, trinath.soman...@freescale.com | extn: 4048

-- Tang Yaguang, Canonical Ltd.
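For reference, the missing piece is the `[network]` section of tempest.conf. The fragment below is illustrative only; the values are placeholders, and option names should be checked against the Tempest configuration sample for your release:

```ini
[network]
# UUID of the pre-created external (public) network; an empty value
# produces exactly the "'' is not a valid UUID" error above.
public_network_id = <uuid-of-your-external-network>
# CIDR Tempest carves tenant networks out of (placeholder value)
tenant_network_cidr = 10.100.0.0/16
```

On devstack the external network UUID can usually be found with `neutron net-list` and is normally written into tempest.conf by devstack itself, so its absence suggests a partially configured environment.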
Re: [openstack-dev] [Horizon] Project list with turned-on policy in Keystone
I think this is a common requirement for users who want to use Keystone v3. I have filed a blueprint for it: https://blueprints.launchpad.net/horizon/+spec/domain-based-rbac.

2014-04-24 23:30 GMT+08:00 Roman Bodnarchuk roman.bodnarc...@indigitus.ch: Hello, As far as I can tell, Horizon uses python-openstack-auth to authenticate users. At the same time, the openstack_auth.KeystoneBackend.authenticate method generates only project-scoped tokens. After enabling policy checks in Keystone, I tried to view the list of all projects on the Admin panel and got "Error: Unauthorized: Unable to retrieve project list." on the dashboard, and the following in the Keystone log:

enforce identity:list_projects: {'project_id': u'80d91944f5af4c53ad5df4e386376e08', 'group_ids': [], 'user_id': u'ed14fd91122b47d2a6f575499ed0c4bb', 'roles': [u'admin']}
...
WARNING keystone.common.wsgi [-] You are not authorized to perform the requested action, identity:list_projects.

This is expected, since the user's token is scoped to a project and no access to domain-wide resources should be allowed. How can I work around this? Is it possible to use policy checks on the Keystone side while working with Horizon? I am using stable/icehouse and Keystone API v3. Thanks, Roman

-- Tang Yaguang, Canonical Ltd.
Re: [openstack-dev] Hierarchical administrative boundary [keystone]
Frittoli, I think for the other services we could achieve that by modifying policy.json (adding a domain-admin role and controlling what the cloud admin can do), so that a domain-admin user is able to manage the resources belonging to users and projects in that domain.

2014-05-09 15:24 GMT+08:00 Frittoli, Andrea (HP Cloud) fritt...@hp.com:

On 05/08/2014 07:55 PM, Tiwari, Arvind wrote: Hi All, below is my proposal for addressing the VPC use case using a hierarchical administrative boundary. This topic is scheduled in the Hierarchical Multitenancy session (http://junodesignsummit.sched.org/event/20465cd62e9054d4043dda156da5070e) of the Atlanta design summit: https://wiki.openstack.org/wiki/Hierarchical_administrative_boundary Please take a look. Thanks, Arvind

[Adam Young:] Looks very good. One question: why hierarchical domains and not projects? I'm not disagreeing, mind you; it's just that I think the Nova team is going for hierarchical projects.

[Andrea:] Looks good, thank you! But for this to be even more interesting, Nova (and the other services) should be domain-aware, e.g. so that a domain admin could have control over all the resources which belong to users and projects in that domain. andrea

-- Tang Yaguang, Canonical Ltd.
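The policy.json approach sketched above might look like the following fragment. The rule names and the domain-matching tokens are modelled loosely on Keystone's v3 sample policy and are assumptions; verify them against your release's actual policy files before use:

```json
{
    "cloud_admin": "role:admin and domain_id:admin_domain_id",
    "domain_admin": "role:admin and domain_id:%(domain_id)s",
    "identity:list_projects": "rule:cloud_admin or rule:domain_admin",
    "identity:create_project": "rule:cloud_admin or rule:domain_admin"
}
```

The idea is that `domain_admin` only matches when the token's domain matches the target resource's domain, while `cloud_admin` is anchored to a designated admin domain, which is the split between "manage your own domain" and "manage the cloud" described above.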
Re: [openstack-dev] Hierarchical administrative boundary [keystone]
Tiwari, could you elaborate on how unique role names would solve the issue? With the domain model, services like Nova have to distinguish domain-admin users from cloud-admin users by their roles: a domain admin manages IAM and non-IAM resources via roles inherited to projects, while a cloud admin has the additional privilege of enabling or disabling OpenStack services. But the admin role can be granted by a domain user to its own users, so how does the Nova API identify that a user is a real admin user from the admin domain?

2014-05-10 2:23 GMT+08:00 Tiwari, Arvind arvind.tiw...@hp.com: Hi All, thanks for looking into my proposal. Below are my comments and answers, based on my personal opinion.

*Why domain hierarchy, and not project hierarchy?* Because a project hierarchy is more disruptive and needs cross-project changes. As I understand it, we are all trying to solve one business problem: how to support a VPC or reseller model on an OpenStack-based cloud deployment. As described in the various proposals, this is purely an IAM use case, where different identities (users, admins, resellers, ...) have different views of the system and its resources (IAM and non-IAM) and want the ability to manage them. Keystone (the OpenStack IAM service) abstracts all the IAM complexity from the lower-level services (Nova, Swift, Cinder, ...) by providing a unified integration model (an auth token, verified by the auth middleware). The lower-level services trust Keystone and allow access to the actual resources based on the subject's roles as provided by Keystone. Each service supports multi-tenancy, and the tenancy mapping is established by Keystone through projects. If the hierarchy were enforced at the project level, we would need to propagate the hierarchy info to all the lower-level services, where it serves no good purpose beyond mapping one tenant.
Enforcing the hierarchy at the project level is more disruptive because all services would have to change their implementations to consume the notion of hierarchy. Propagating the project hierarchy to the services would only make sense if the end resources (VMs, Cinder volumes, Swift objects, ...) obeyed a project-based hierarchy, which I think is not the case. By definition, domains are containers for projects, users, and groups, and they map well onto business entities (ProductionIT, SuperDevShop, WidgetMaster, SPI, resellers, ...). Using domains to establish the hierarchy (as in my design) abstracts the complexity from the lower-level services: services don't have to worry about the domain hierarchy, we retain the current integration model (Keystone project -> service tenant), and no big changes are needed in the various services. The change is mostly in one place: Keystone.

*Do services have to be domain-aware?* IMO services (Nova, Swift, ...) don't have to be domain-aware (unless I am missing something), as they manage resources for Keystone projects. The domain is an IAM concept used to scope IAM resources and is not very useful for the end services. I think what we are lacking is a unique role (role name) per service; having unique role names for each service (IAM, Nova, Swift, ...) would resolve the problem mentioned below by Yaguang Tang. Please let me know why services would have to be domain-aware. Thoughts? Thanks, Arvind

Note: IAM resources: users, groups, projects, ... Non-IAM resources: VMs, Swift objects, ...

[Yaguang's earlier reply quoted; trimmed]
Re: [openstack-dev] [Horizon] Project list with turned-on policy in Keystone
Roman, it's not fully supported: right now domain, project, and user management is not available to an admin or domain user, but you can log in with a domain user and operate as a normal user.

2014-05-06 16:23 GMT+08:00 Roman Bodnarchuk roman.bodnarc...@indigitus.ch: Hello, does this mean that there is no real support for non-default domains in Horizon? Thanks, Roman

On 5/5/2014 2:30 PM, Yaguang Tang wrote: [earlier reply and original question quoted again; trimmed]

-- Tang Yaguang, Canonical Ltd.
Re: [openstack-dev] common codes
We put OpenStack common code in oslo and sync it to the other projects so that the common code in each project is always up to date; when oslo is mature enough, we will publish it as an OpenStack common library. After oslo is released, the common code in each project just needs to change from "from nova.openstack.common import something" to "from oslo.openstack.common import something". Since the common code is always synced from oslo, there shouldn't be any big change. Correct me if my understanding is wrong. On 2013-7-16 10:25 AM, Gareth academicgar...@gmail.com wrote: Hi, all There is some common code in most projects, such as openstack/common, db, and some others (?). I know a good way is using 'import oslo' instead of copying that code here and there. And now we already have the oslo and trove projects, but how and when do we handle the old code — remove it in the next major release? -- Gareth *Cloud Computing, OpenStack, Fitness, Basketball* *OpenStack contributor* *Company: UnitedStack http://www.ustack.com* *My promise: if you find any spelling or grammar mistakes in my email from Mar 1 2013, notify me * *and I'll donate $1 or ¥1 to an open organization you specify.* ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] review request
Hi all, I'd appreciate it if any of the nova-core reviewers could take a look at https://review.openstack.org/#/c/39226/, as this adds keypair notification events. It is more like a small feature than a bug fix, so I am calling for review now in case it can't be accepted after the feature freeze. -- Tang Yaguang Canonical Ltd. | www.ubuntu.com | www.canonical.com Mobile: +86 152 1094 6968 gpg key: 0x187F664F ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
On Thu, Sep 10, 2015 at 1:52 PM, Igor Kalnitsky <ikalnit...@mirantis.com> wrote: > Hello, > > I agree with Vladimir - the idea of online repos is a right way to > move. In 2015 I believe we can ignore this "poor Internet connection" > reason, and simplify both Fuel and UX. Moreover, take a look at Linux > distributives - most of them fetch needed packages from the Internet > during installation, not from CD/DVD. The netboot installers are > popular, I can't even remember when was the last time I install my > Debian from the DVD-1 - I use netboot installer for years. > You are thinking from a developer's point of view, but the fact is that Fuel is widely used by enterprises worldwide, and many enterprise security policies mandate having no Internet connection. > Thanks, > Igor > > > On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <yt...@mirantis.com> wrote: > > > > > > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschu...@mirantis.com> > wrote: > >> > >> > >> Hey Vladimir, > >> > >>> > >>> > >>>>> > >>>>> 1) There won't be such things in like [1] and [2], thus less > >>>>> complicated flow, less errors, easier to maintain, easier to > understand, > >>>>> easier to troubleshoot > >>>>> 2) If one wants to have local mirror, the flow is the same as in case > >>>>> of upstream repos (fuel-createmirror), which is clear for a user to > >>>>> understand. > >>>> > >>>> > >>>> From the issues I've seen, fuel-createmirror isn't very straight > >>>> forward and has some issues making it a bad UX. > >>> > >>> > >>> I'd say the whole approach of having such tool as fuel-createmirror is > a > >>> way too naive. Reliable internet connection is totally up to network > >>> engineering rather than deployment. Even using proxy is much better > that > >>> creating local mirror. But this discussion is totally out of the scope > of > >>> this letter. 
Currently, we have fuel-createmirror and it is pretty > >>> straightforward (installed as rpm, has just a couple of command line > >>> options). The quality of this script is also out of the scope of this > >>> thread. BTW we have plans to improve it. > >> > >> > >> > >> Fair enough, I just wanted to raise the UX issues around these types of > >> things as they should go into the decision making process. > >> > >> > >>> > >>>>> > >>>>> > >>>>> Many people still associate ISO with MOS, but it is not true when > using > >>>>> package based delivery approach. > >>>>> > >>>>> It is easy to define necessary repos during deployment and thus it is > >>>>> easy to control what exactly is going to be installed on slave nodes. > >>>>> > >>>>> What do you guys think of it? > >>>>> > >>>>> > >>>> > >>>> Reliance on internet connectivity has been an issue since 6.1. For > many > >>>> large users, complete access to the internet is not available or not > >>>> desired. If we want to continue down this path, we need to improve > the > >>>> tools to setup the local mirror and properly document what > urls/ports/etc > >>>> need to be available for the installation of openstack and any mirror > >>>> creation process. The ideal thing is to have an all-in-one CD > similar to a > >>>> live cd that allows a user to completely try out fuel wherever they > want > >>>> with out further requirements of internet access. If we don't want to > >>>> continue with that, we need to do a better job around providing the > tools > >>>> for a user to get up and running in a timely fashion. Perhaps > providing an > >>>> net-only iso and an all-included iso would be a better solution so > people > >>>> will have their expectations properly set up front? > >>> > >>> > >>> Let me explain why I think having local MOS mirror by default is bad: > >>> 1) I don't see any reason why we should treat MOS repo other way than > >>> all other online repos. 
A user sees on the settings tab the list of > repos > >>> one of which is local by default while others are online. It can make >
Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
to have simple working scheme clear for everyone. I >> think dealing with all repos the same way is what is gonna make things >> simpler. >> >> > > Who is going to build a custom ISO? How does one request that? What > resources are consumed by custom ISO creation process/request? Does this > scale? > > > >> This thread is not about internet connectivity, it is about aligning >> things. >> >> > You are correct in that this thread is not explicitly about internet > connectivity, but they are related. Any changes to remove a local > repository and only provide an internet based solution makes internet > connectivity something that needs to be included in the discussion. I just > want to make sure that we properly evaluate this decision based on end user > feedback not because we don't want to manage this from a developer > standpoint. > +1. Whatever the change is, please keep Fuel a tool that can deploy without Internet access; this is part of the reason people like it and find it better than other tools. > > -Alex > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Yaguang Tang Technical Support, Mirantis China *Phone*: +86 15210946968 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ceilometer][panko] Ocata ceilometer event storage configuration
Thanks Along and Gordon; after making the changes, it works. On Fri, Jun 30, 2017 at 12:02 AM, Along Meng <alongm...@gmail.com> wrote: > Yes, and you need to make sure ceilometer-agent-notification and panko are > installed on the same machine in your environment. > Ceilometer will load the panko publisher dispatcher from the > panko module and use the database url[1] configured in panko.conf to > initialize the database connection. > > [1] > https://github.com/openstack/panko/blob/stable/ocata/panko/ > dispatcher/database.py#L43 > > MengAlong > > > On Thu, Jun 29, 2017 at 8:56 PM, gordon chung <g...@live.ca> wrote: >> >> >> On 29/06/17 07:24 AM, Yaguang Tang wrote: >> > sinks: >> > - name: event_sink >> > transformers: >> > triggers: >> > publishers: >> > - direct:// >> > - panko:// >> >> the publisher path is only available if you have Pike Panko. you need to >> either backport[1] or configure your publisher as: >> >> direct://?dispatcher=panko >> >> >> [1] >> https://github.com/openstack/panko/commit/d785015552937455d7 >> 6f083d313a73a7c0c076b3 >> >> cheers, >> -- >> gord >> >> >> __ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
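Putting gord's advice together, an event_pipeline.yaml that works on stock Ocata (which lacks the Pike-era panko:// publisher) would look roughly like this — a sketch that assumes the default catch-all event_source section:

```yaml
---
sources:
  - name: event_source
    events:
      - "*"
    sinks:
      - event_sink
sinks:
  - name: event_sink
    transformers:
    triggers:
    publishers:
      # On Ocata, route events to Panko via the direct publisher's
      # dispatcher option instead of the (Pike-only) panko:// publisher.
      - direct://?dispatcher=panko
```

With this in place, ceilometer-agent-notification hands events to Panko's database dispatcher, which writes them using the database url configured in panko.conf.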
[openstack-dev] [ceilometer][panko] Ocata ceilometer event storage configuration
Hi all, I am using Ocata Ceilometer and Gnocchi with Panko; Panko is configured to use MySQL as the backend to store events. I configured Ceilometer's event_pipeline.yaml as follows: sinks: - name: event_sink transformers: triggers: publishers: - direct:// - panko:// but it seems no events are actually stored. Looking at the code, Ceilometer no longer has an event storage backend of its own, so how do I configure Ceilometer to publish/dispatch events for storage in the Panko database? -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [gnocchi][ceilometer] gnocchi-metricd error using redis as coordination
Thanks Along, I finally figured out that this is a bug, fixed by this commit: commit e749b60f49a4a3b48cc5da67a797f717dd8cd01d Author: Julien Danjou <jul...@danjou.info> Date: Tue Jun 20 16:36:14 2017 +0200 utils: use ASCII bytes as member id Tooz actually wants ASCII bytes and not random bytes. Fixes #130 diff --git a/gnocchi/utils.py b/gnocchi/utils.py index f81d93e..7666711 100644 --- a/gnocchi/utils.py +++ b/gnocchi/utils.py @@ -90,7 +90,7 @@ def _enable_coordination(coord): def get_coordinator_and_start(url): -my_id = uuid.uuid4().bytes +my_id = str(uuid.uuid4()).encode() coord = coordination.get_coordinator(url, my_id) _enable_coordination(coord) return coord, my_id On Mon, Aug 7, 2017 at 10:10 PM, Along Meng <alongm...@gmail.com> wrote: > From the log info, it shows that your 'node' may not be a valid str. > You can show the node name via 'print node' and try calling > str(node).encode('utf-8') > to check whether it succeeds. > > 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key = > str(node).encode('utf-8') > 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils > UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 4: > ordinal not in range(128) > 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils > > > > On Sat, Aug 5, 2017 at 7:16 PM, Yaguang Tang <heut2...@gmail.com> wrote: > >> Hi gnocchi devs, >> >> I have an issue when using gnocchi 4.0, the storage backend is ceph, and >> tooz coordination is redis. currently gnocchi api in apache wsgi mode, >> only one controller node running gnocchi-metricd & gnocchi-statsd daemon. >> the error log of gnocchi-metricd is as follows. 
>> >> >> >> 2017-08-05 18:14:18.643 1329257 INFO gnocchi.storage.common.ceph [-] Ceph >> storage backend use 'cradox' python library >> 2017-08-05 18:14:18.654 1329257 INFO gnocchi.storage.common.ceph [-] Ceph >> storage backend use 'cradox' python library >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils [-] Unhandled >> exception >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Traceback (most >> recent call last): >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 84, in >> exit_on_exception >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils yield >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in >> _run >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.run() >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 120, in run >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils >> self._configure() >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in >> wrapped_f >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return >> r.call(f, *args, **kw) >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 177, in >> call >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return >> fut.result() >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line >> 396, in result >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return >> self.__get_result() >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 159, in >> call >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils result = 
>> fn(*args, **kwargs) >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 193, in >> _configure >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.GROUP_ID, >> partitions=200) >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 284, in >> join_partitioned_group >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return >> partitioner.Partitioner(self, group_id) >> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File >> "/usr/lib/python2.7/site-packages/tooz/partitioner.py", line 45, in >> __init__ >> 2017-08-05 18:14:1
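The root cause behind that commit is easy to reproduce: tooz's hash ring runs each member id through str(node).encode('utf-8'), which on Python 2 implies an implicit ASCII decode, and raw uuid4().bytes routinely contains bytes >= 0x80 (like the 0xde in the traceback above). A minimal illustration of the bad vs. fixed member id:

```python
import uuid

# The buggy member id from the old code: 16 raw random bytes, which
# frequently contain bytes >= 0x80 and so are not valid ASCII.
raw_id = uuid.uuid4().bytes

# The fixed member id: the canonical hex form of the UUID, which is
# guaranteed to contain only ASCII characters.
ascii_id = str(uuid.uuid4()).encode()
ascii_id.decode('ascii')  # always succeeds for the hex form

# Demonstrate the failure mode with a fixed byte string containing 0xde,
# the same kind of byte that triggered the UnicodeDecodeError above.
try:
    b'\x12\x34\xde\xad'.decode('ascii')
    decoded = True
except UnicodeDecodeError:
    decoded = False
```

Here `decoded` ends up False, which is exactly the error path the traceback shows inside tooz's add_nodes.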
[openstack-dev] [gnocchi][ceilometer] gnocchi-metricd error using redis as coordination
Hi gnocchi devs, I have an issue when using gnocchi 4.0, the storage backend is ceph, and tooz coordination is redis. currently gnocchi api in apache wsgi mode, only one controller node running gnocchi-metricd & gnocchi-statsd daemon. the error log of gnocchi-metricd is as follow. 2017-08-05 18:14:18.643 1329257 INFO gnocchi.storage.common.ceph [-] Ceph storage backend use 'cradox' python library 2017-08-05 18:14:18.654 1329257 INFO gnocchi.storage.common.ceph [-] Ceph storage backend use 'cradox' python library 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils [-] Unhandled exception 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Traceback (most recent call last): 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 84, in exit_on_exception 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils yield 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in _run 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.run() 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 120, in run 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self._configure() 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in wrapped_f 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return r.call(f, *args, **kw) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 177, in call 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return fut.result() 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 396, in result 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return self.__get_result() 2017-08-05 18:14:19.100 1329257 ERROR 
cotyledon._utils File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 159, in call 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils result = fn(*args, **kwargs) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 193, in _configure 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.GROUP_ID, partitions=200) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 284, in join_partitioned_group 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return partitioner.Partitioner(self, group_id) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/tooz/partitioner.py", line 45, in __init__ 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils partitions=self.partitions) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 47, in __init__ 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.add_nodes(set(nodes)) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils File "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 71, in add_nodes 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key = str(node).encode('utf-8') 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 4: ordinal not in range(128) 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Is this a config issue or bug ? any tips or help is much appreciated :-) -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [gnocchi] Redis for storage backend
Hi all, I see the latest Gnocchi supports using Redis as a storage backend. I am testing the performance of Redis versus Ceph, and it seems that with Redis as the storage backend we can achieve more realtime metric data; gnocchi status always shows only a few metrics left to process. Is Redis a recommended storage backend? -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack-operators] PCI pass through settings on a flavor without aliases on the API nodes
Hi Van, you can try creating a PCI-passthrough port with the PCI info, then create the VM using that port, without any alias. On Wed, Oct 18, 2017 at 11:27 PM, Eric Fried wrote: > Robert- > > No. > > Some day, once generic device management is baked, there will be. > Depending how your favorite virt driver decides to model things, one > could envision a flavor with extra specs like: > > resources:SRIOV_NET_PF:1 > trait:CUSTOM_PCI_VENDORID_8086=required > trait:CUSTOM_PCI_PRODUCTID_154D=required > > In the meantime, it's especially useful to get this kind of > feedback > from ops so we can ensure we're meeting the right requirements as we > design things. Please reach out if you want to discuss further. > > Thanks, > Eric Fried (efried) > > On 10/18/2017 09:56 AM, Van Leeuwen, Robert wrote: > > Hi, > > > > > > > > Does anyone know if it is possible to set PCI pass through on a flavor > > without also needing to set the alias on the nova API nodes as mentioned > > here: > > https://docs.openstack.org/nova/pike/admin/pci-passthrough.html > > > > > > > > E.G you need to set in nova.conf: > > > > [pci] > > > > alias = { "vendor_id":"8086", "product_id":"154d ", > > "device_type":"type-PF", "name":"a1" } > > > > > > > > Then you can set the flavor: > > > > openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2" > > > > > > > > > > > > E.g. 
I would be fine with just setting the PCI vendor/product on the > > flavor instead of also needing to set this at the api node > > > > So something like: > > > > openstack flavor set m1.large –property “pci_passthrough:vendor”=”8086” > > “pci_passthrough:device”=”154d:1” > > > > > > > > Thx, > > > > Robert van Leeuwen > > > > > > > > ___ > > OpenStack-operators mailing list > > openstack-operat...@lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
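As a side note on the alias syntax quoted from the docs above: the value of the [pci] alias option in nova.conf is parsed as JSON, so a malformed entry fails at service start, and note that the "154d " in the quoted snippet carries a trailing space, which would likely keep it from matching the device's real product id. A quick structural sanity check (illustrative only):

```python
import json

# Parse the [pci] alias entry from the quoted docs (trailing space removed).
alias = json.loads(
    '{"vendor_id": "8086", "product_id": "154d", '
    '"device_type": "type-PF", "name": "a1"}'
)

# The flavor extra spec then references the alias by name plus a device count,
# e.g. "a1:2" means two devices matching alias "a1".
extra_spec = {"pci_passthrough:alias": "a1:2"}
name, count = extra_spec["pci_passthrough:alias"].split(":")
```

This is why the alias must live in nova.conf on the API nodes: the flavor only carries the name:count pair, and the API side resolves the name back to vendor/product ids.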
Re: [openstack-dev] [gnocchi] Redis for storage backend
Hi Gordon, We launched 300 VMs, and each VM has about 10 metrics; the OpenStack cluster has 3 controllers and 2 compute nodes (Ceph replication is set to 2). What we want to achieve is to have all metric measures processed as soon as possible; the metric processing delay is set to 10s, and the Ceilometer polling interval is 30s. When the backend for incoming and storage is set to Ceph, the average of "gnocchi status" shows around 7000 measures waiting to be processed, but after changing the incoming and storage backend to Redis, gnocchi status shows around 200 unprocessed measures. We tried adding more metricd processes on every controller node to improve calculation and write performance to the storage backend, but it had little effect. On Fri, Oct 13, 2017 at 9:03 PM, gordon chung <g...@live.ca> wrote: > > > On 2017-10-13 03:37 AM, Julien Danjou wrote: > > On Fri, Oct 13 2017, Yaguang Tang wrote: > > > >> I see the latest Gnocchi supports using Redis as a storage backend, I am > >> testing performance of Redis and Ceph, it seems using Redis as storage > >> backend we can achieve more realtime metric > >> data, gnocchi status shows there are always few metrics to process. > >> > >> Is Redis a recommended storage backend ? > > > > Redis is recommended as an incoming measures backend, not really as a > > storage – though it really depends on the size of your installation. > > > > Up until 4.0 version, Gnocchi process metrics every > > $metricd.metric_processing_delay seconds. With version 4.1 (to be > > released), the Redis incoming driver has a more realtime processing > > delay which avoids having to poll for incoming data. > > > > what Julien said :) > > redis as a storage driver really depends on how you setup persistence[1] > and how much you trust it[2]. 
> > would love to see your redis vs ceph numbers compared to mine[3] :) > > [1] https://redis.io/topics/persistence > [2] https://aphyr.com/posts/283-jepsen-redis > [3] https://medium.com/@gord.chung/gnocchi-4-introspective-a83055e99776 > (tested part of 4.1 optimisations) > > cheers, > > -- > gord > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
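For anyone wanting to try the split Julien describes — Redis for incoming measures, Ceph for long-term aggregate storage — the gnocchi.conf would look roughly like this. Option names are from the Gnocchi 4.x era and the URLs/paths are illustrative; verify against your release's documentation:

```ini
# Sketch only: Redis handles the high-churn incoming measures,
# Ceph holds the processed aggregates.
[incoming]
driver = redis
redis_url = redis://localhost:6379

[storage]
driver = ceph
ceph_pool = gnocchi
ceph_username = gnocchi
ceph_conffile = /etc/ceph/ceph.conf
```

The design rationale matches the thread: incoming measures are small, short-lived writes where Redis shines, while Ceph's durability matters more for the long-lived aggregates.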
Re: [openstack-dev] [gnocchi] Redis for storage backend
Hi Gordon, Thanks for your test results. We investigated our environment further, and it finally turns out that our Ceph cluster wasn't working as expected, which degraded Gnocchi's performance a lot. On Thu, Oct 19, 2017 at 1:09 AM, gordon chung <g...@live.ca> wrote: > > > On 2017-10-18 12:15 PM, Yaguang Tang wrote: > > > > We launched 300vms and each vm has about 10 metrics, OpenStack cluster > > have 3 controllers and 2 compute nodes(ceph replication is set to 2). > > seems smaller than my test, i have 20K metrics in my test > > > > > what we want to archive is to make all metric measures data get > > processed as soon as possible, metric processing delay is set to 10s, > > and ceilometer polling interval is 30s. > > are you batching the data you push to gnocchi? in gnocchi4.1, the redis > driver will (attempt to) process immediately, rather than cyclically > using the metric processing delay. > > > > > when the backend of incoming and storeage is set to ceph, the average of > > "gnocchi status" > > shows that there are around 7000 measures waiting to be process, but > > when changing incoming and storage backend to Redis, the result of > > gnocchi status shows unprocessed measures is around 200. > > i should clarify, having a high gnocchi status is not a bad thing, ie, > if you just send a large spike of measures, it's expected to jump. it's > bad if never shrinks. > > that said, how many workers do you have? i have 18 workers for 20K > metrics and it takes 2minutes i believe? do you have debug enable? how > long does it take to process metric? > > when i tested gnocchi+ceph vs gnocchi+redis, i didn't see a very large > difference in performance (redis was definitely better though). maybe > it's your ceph environment? > > > > > we try to add more metricd process on every controller nodes, to improve > > the performance of > > calculate and writing speed to storage backend but have little effect. > > performance should increase (relatively) proportionally. ie. 
if you 2x > metricd, you should perform (almost) 2x quicker. if you add 5% more > metricd, you should perform (almost) 5% quicker. > > cheers, > > -- > gord > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone][all] v2.0 API removal
Keystone is one project that all other OpenStack projects use, so personally I think removing an API that is so widely used should be discussed at a TC meeting. As far as I know, not all OpenStack projects support keystone v3 domains (domain, project, user) as well as keystone itself does; you can check the policy.json of each project to verify. Most projects have no domain-scoped role API. I'd ask: how much effort do we need to maintain the keystone v2 API? Can we just keep the code there? On Fri, Oct 20, 2017 at 2:41 AM, Alex Schultz wrote: > On Thu, Oct 19, 2017 at 11:49 AM, Lance Bragstad > wrote: > > Yeah - we specifically talked about this in a recent meeting [0]. We > > will be more verbose about this in the future. > > > > I'm glad to see a review of this. In reading the meeting logs, I > understand it was well communicated that the api was going to go away > at some point. Yes we all knew it was coming, but the exact time of > impact wasn't known outside of Keystone. Also saying "oh it works in > devstack" is not enough when you do something this major. So a "FYI, > patches to remove v2.0 to start landing next week (or today)" is more > what would have been helpful for the devs who consume master. It > dramatically shortens the time spent debugging failures if you have an > idea about when something major changes and then we don't have to go > through git logs/gerrit to figure out what happened :) > > IMHO when large efforts that affect the usage of your service are > going to start to land, it's good to send a note before landing those > patches. Or at least at the same time. Anyway I hope other projects > will also follow a similar pattern when they ultimately need to do > something like this in the future. 
> > Thanks, > -Alex > > > > > [0] > > http://eavesdrop.openstack.org/meetings/keystone/2017/ > keystone.2017-10-10-18.00.log.html#l-107 > > > > On 10/19/2017 12:00 PM, Alex Schultz wrote: > >> On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad > wrote: > >>> Hey all, > >>> > >>> Now that we're finishing up the last few bits of v2.0 removal, I'd > like to > >>> send out a reminder that Queens will not include the v2.0 keystone APIs > >>> except the ec2-api. Authentication and validation of v2.0 tokens has > been > >>> removed (in addition to the public and admin APIs) after a lengthy > >>> deprecation period. > >>> > >> In the future can we have a notice before the actual code removal > >> starts? We've been battling various places where we thought we had > >> converted to v3 only to find out we hadn't correctly done so because > >> it use to just 'work' and the only way we know now is that CI blew up. > >> A heads up on the ML probably wouldn't have lessened the pain in this > >> instance but at least we might have been able to pinpoint the exact > >> problem quicker. > >> > >> Thanks, > >> -Alex > >> > >> > >>> Let us know if you have any questions. > >>> > >>> Thanks! 
> >>> > >>> > >>> > __ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject: > unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > __ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
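For context on the "domain-scoped" capability mentioned above: v3 lets a token be scoped to a domain rather than a project, which is what domain-aware policy rules check for and what v2.0 could never express. A sketch of a domain-scoped authentication request body (standard Keystone v3 API shape; the user name, domain, and password are illustrative):

```python
import json

# Body for POST /v3/auth/tokens requesting a domain-scoped token.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",                # illustrative user
                    "domain": {"name": "Default"},
                    "password": "secret",           # illustrative secret
                }
            },
        },
        # Scoping to a domain (instead of a project) is the part that
        # project-scoped-only clients and policies cannot handle.
        "scope": {"domain": {"name": "Default"}},
    }
}
body = json.dumps(auth_request)
```

Projects whose policy.json only ever checks project-scoped attributes will reject or mishandle tokens produced by a request like this, which is the compatibility gap described above.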
[openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal
Should this kind of change be discussed with, and agreed on by, the TC and the User Committee? -- Forwarded message -- From: Lance Bragstad Date: Fri, Oct 20, 2017 at 12:08 AM Subject: [Openstack-operators] [keystone][all] v2.0 API removal To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev@lists.openstack.org>, openstack-operat...@lists.openstack.org Hey all, Now that we're finishing up the last few bits of v2.0 removal, I'd like to send out a reminder that *Queens* will not include the *v2.0 keystone APIs* except the ec2-api. Authentication and validation of v2.0 tokens has been removed (in addition to the public and admin APIs) after a lengthy deprecation period. Let us know if you have any questions. Thanks! ___ OpenStack-operators mailing list openstack-operat...@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone][all][tc] v2.0 API removal
I am not saying the keystone team didn't follow the policy; I just want to express my concern about this big action. It's a cross-project thing, so I want there to be wide agreement. From the user's perspective, I'd ask the keystone team to keep the v2 API for a long time if we don't have to spend too much effort maintaining it. On Fri, Oct 20, 2017 at 8:46 PM, Jeremy Stanley <fu...@yuggoth.org> wrote: > On 2017-10-20 10:52:36 +0800 (+0800), Yaguang Tang wrote: > > Keystone is one project that all other OpenStack projects use, so > > personally I think the change to remove the API which are widely > > used should be discussed at TC meeting . > [...] > > The OpenStack Technical Committee ceased holding regular weekly > meetings around 6 months ago: > > https://governance.openstack.org/tc/resolutions/20170425- > drop-tc-weekly-meetings.html > > Also, the TC is not generally in the business of making decisions on > behalf of projects and instead provides opt-in policy in the form of > "tags" which projects can choose to apply to their teams or > deliverables, such as: > > https://governance.openstack.org/tc/reference/tags/assert_ > follows-standard-deprecation.html > > As you can see, the Keystone team asserts the keystone API service > follows the deprecation model indicated there. Are you suggesting > that policy was not followed, or that it's merely insufficient? > -- > Jeremy Stanley > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Tang Yaguang __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev