Re: [openstack-dev] [Congress] hosts table of nova datasource driver
Hey, the hosts table works fine and is enabled now. You should be able to pull the latest commit and use it for the appropriate policies. https://review.openstack.org/#/c/172333/ Regards, Samta Rangare

On 10 Apr 2015 06:38, Masahito MUROI muroi.masah...@lab.ntt.co.jp wrote: Hi Alex, Thank you for replying. I'm glad to hear the team is keeping the table. I'll try to use it, and if I can reproduce the bug in my environment I'll add details to the report. Best regards, masa

On 2015/04/09 3:03, Alex Yip wrote: Hi Masahito, Thanks for the reminder. We do intend to keep the hosts table. The code is commented out only because there is a bug that keeps the driver from working with the hosts table right now. I don't know if anyone is looking into it, but I just created a bug report for it: https://bugs.launchpad.net/congress/+bug/1441778 thanks, Alex

From: Masahito MUROI muroi.masah...@lab.ntt.co.jp Sent: Tuesday, April 7, 2015 11:11 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Congress] hosts table of nova datasource driver Hi, Congress folks, I'm new to Congress and I'd like to use the hosts table of the nova datasource driver in my use case. I, however, found that the hosts table is commented out in current master. Is there any reason to comment it out? Or will it be removed from the official datasource in the future?
Best regards, masa -- 室井 雅仁 (Masahito MUROI) Software Innovation Center, NTT Tel: +81-422-59-4539

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] Neutron scaling datapoints?
Which periodic updates did you have in mind to eliminate? One of the few remaining ones I can think of is sync_routers, but it would be great if you could enumerate the ones you observed, because eliminating overhead in agents is something I've been working on as well. One of the most common is the heartbeat from each agent. However, I don't think we can eliminate those, because they are used to determine whether the agents are still alive for scheduling purposes. Did you have something else in mind to determine if an agent is alive?

On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas afaze...@redhat.com wrote: I'm 99.9% sure that, for scaling above 100k managed nodes, we do not really need to split OpenStack into multiple smaller OpenStacks, or use a significant number of extra controller machines. The problem is that OpenStack is using the right tools (SQL/AMQP/zk), but in the wrong way. For example: periodic updates can be avoided in almost all cases. The new data can be pushed to the agent just when it is needed. The agent can know when the AMQP connection becomes unreliable (queue or connection loss), and then it needs to do a full sync. https://bugs.launchpad.net/neutron/+bug/1438159 Also, when the agents get a notification, they start asking for details via AMQP -> SQL. Why don't they know it already, or get it with the notification?

- Original Message - From: Neil Jerram neil.jer...@metaswitch.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Thursday, April 9, 2015 5:01:45 PM Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints? Hi Joe, Many thanks for your reply! On 09/04/15 03:34, joehuang wrote: Hi Neil, In theory, Neutron is like a broadcast domain: for example, enforcement of DVR and security groups has to touch each host where a VM of the project resides. Even using an SDN controller, touching each host is inevitable.
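Attila's pattern — push full payloads with notifications, and fall back to a full resync only after a detected AMQP connection loss — can be sketched in a few lines. This is purely illustrative; the class and method names below are hypothetical and are not the actual Neutron agent or RPC API:

```python
class SyncingAgent:
    """Sketch of an agent that applies incremental pushed updates and
    performs a full resync only at startup or after a connection loss."""

    def __init__(self, plugin_rpc):
        self.plugin_rpc = plugin_rpc      # hypothetical server-side API
        self.state = {}
        self.needs_full_sync = True       # first start always syncs

    def on_connection_lost(self):
        # Notifications may have been dropped while disconnected, so the
        # incrementally built state can no longer be trusted.
        self.needs_full_sync = True

    def on_notification(self, resource_id, resource):
        # The notification carries the full resource, so no extra
        # AMQP -> SQL round-trip is needed to fetch details.
        self.state[resource_id] = resource

    def run_once(self):
        # Called periodically or on wakeup; cheap when no sync is needed.
        if self.needs_full_sync:
            self.state = dict(self.plugin_rpc.get_all_resources())
            self.needs_full_sync = False
```

Under this model the steady-state cost is zero periodic traffic per agent; the expensive `get_all_resources()` call happens only when the agent actually knows its view may be stale.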
If there are many physical hosts, for example 10k, inside one Neutron, it's very hard to overcome the broadcast storm issue under concurrent operation; that's the bottleneck for the scalability of Neutron. I think I understand that in general terms - but can you be more specific about the broadcast storm? Is there one particular message exchange that involves broadcasting? Is it only from the server to agents, or are there 'broadcasts' in other directions as well? (I presume you are talking about control plane messages here, i.e. between Neutron components. Is that right? Obviously there can also be broadcast storm problems in the data plane - but I don't think that's what you are talking about here.) We need a layered architecture in Neutron to solve the broadcast domain bottleneck of scalability. The test report from OpenStack cascading shows that through the layered architecture of Neutron cascading, Neutron can support up to a million ports and on the order of 100k physical hosts. You can find the report here: http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers Many thanks, I will take a look at this. Neutron cascading also brings an extra benefit: one cascading Neutron can have many cascaded Neutrons, and different cascaded Neutrons can leverage different SDN controllers - maybe one is ODL, the other OpenContrail.

          Cascading Neutron
          /              \
  cascaded Neutron   cascaded Neutron
        |                  |
       ODL            OpenContrail

And furthermore, if using Neutron cascading in multiple data centers, a DCI controller (data center interconnection controller) can also be used under the cascading Neutron, to provide NaaS (network as a service) across data centers.

              Cascading Neutron
          /          |           \
  cascaded Neutron  DCI controller  cascaded Neutron
        |                |               |
       ODL               |          OpenContrail
  (Data center 1)  (DCI networking)  (Data center 2)

Is it possible for us to discuss this at the OpenStack Vancouver summit? Most certainly, yes.
I will be there from mid-Monday afternoon through the end of Friday. But it will be my first summit, so I have no idea yet as to how I might run into you - please suggest! Best Regards, Chaoyi Huang (Joe Huang) Regards, Neil
Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon
Events shouldn't help much, since the workflow is: the user does something, it breaks with an error, the admin is notified, and then the admin needs to figure out why... Ideally there needs to be a 'why' field in the db, and the dashboard can show it to admins. Thanks, Kevin

From: Duncan Thomas Sent: Friday, April 10, 2015 11:47:07 AM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

Maybe Cinder can emit more notification events; then a dashboard can be built separately with the appropriate access control and filtering. The same dash can then be used for many projects - all they need to add are the events with enough metadata. On 10 Apr 2015 18:53, Ben Swartzlander b...@swartzlander.org wrote: On 04/10/2015 08:11 AM, John Griffith wrote: On Fri, Apr 10, 2015 at 6:16 AM, liuxinguo liuxin...@huawei.com wrote: Hi, when we create a volume in Horizon, some errors may occur at the driver backend, and in Horizon we just see an "error" volume status. So is there any way to pass the error information to Horizon so users can know exactly what happened, just from Horizon? Thanks, Liu

The challenge here is that the calls to the driver are async, so we don't actually have any direct feedback/messages regarding the result. One thing that's been talked about in the past is adding additional information to the status which could be presented so that end users and tools like Horizon would have a little more information.
The trick, however, is that we also do NOT want driver info going back to the end user; the whole point of Cinder is to abstract away all of that backend-specific stuff. The idea kicked around in the past was to introduce a sub-state with a bit more detail, but still nothing specific from the driver. Sounds like a good topic to revisit for the Liberty release. Agreed that driver messages shouldn't reach the end user, but would it be helpful at all to the administrator to have driver error messages visible on the admin panel? -Ben
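The sub-state idea John describes - a coarse status that end users see, plus a slightly more detailed but still backend-neutral reason, with the raw driver message kept out of the user-facing path entirely - could look something like the sketch below. The field names and sub-state values are purely illustrative, not Cinder's actual schema:

```python
# Backend-neutral sub-states an admin or tool could act on, without
# leaking driver-specific details to end users (illustrative values).
SUB_STATES = {"quota_exceeded", "backend_unreachable", "provisioning_failed"}

def set_error(volume, sub_state, driver_detail, admin_log):
    """Record a failure: coarse status for users, a generic sub-state for
    tooling, and the raw driver message only in an admin-visible log."""
    assert sub_state in SUB_STATES
    volume["status"] = "error"          # all end users ever see
    volume["sub_status"] = sub_state    # generic, backend-neutral detail
    admin_log.append(driver_detail)     # driver text never reaches users

vol = {"id": "vol-1", "status": "creating"}
admin_log = []
set_error(vol, "backend_unreachable",
          "iSCSI portal 10.0.0.5 timed out", admin_log)
```

The split keeps the abstraction intact: Horizon can render `sub_status` for everyone, while only an admin panel reads `admin_log`.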
Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon
I'd say events are *more* useful in that workflow, not less, as long as they contain enough context. For example, the user creates a volume, then tries to attach it, which fails due to some config error, so the user deletes it. With an event-based model, the admin now has an error event in their queue. If we used a db field, then the error status is potentially revived by the successful delete. Similarly, if an intermittent fault happens but the action succeeds on a retry, then the error info is again lost.

On 10 Apr 2015 20:45, Fox, Kevin M kevin@pnnl.gov wrote: Events shouldn't help much, since the workflow is: the user does something, it breaks with an error, the admin is notified, and then the admin needs to figure out why... Ideally there needs to be a 'why' field in the db, and the dashboard can show it to admins. Thanks, Kevin -- *From:* Duncan Thomas *Sent:* Friday, April 10, 2015 11:47:07 AM *To:* OpenStack Development Mailing List *Subject:* Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon Maybe Cinder can emit more notification events; then a dashboard can be built separately with the appropriate access control and filtering. The same dash can then be used for many projects - all they need to add are the events with enough metadata. On 10 Apr 2015 18:53, Ben Swartzlander b...@swartzlander.org wrote: On 04/10/2015 08:11 AM, John Griffith wrote: On Fri, Apr 10, 2015 at 6:16 AM, liuxinguo liuxin...@huawei.com wrote: Hi, when we create a volume in Horizon, some errors may occur at the driver backend, and in Horizon we just see an "error" volume status. So is there any way to pass the error information to Horizon so users can know exactly what happened, just from Horizon?
Thanks, Liu

The challenge here is that the calls to the driver are async, so we don't actually have any direct feedback/messages regarding the result. One thing that's been talked about in the past is adding additional information to the status which could be presented so that end users and tools like Horizon would have a little more information. The trick, however, is that we also do NOT want driver info going back to the end user; the whole point of Cinder is to abstract away all of that backend-specific stuff. The idea kicked around in the past was to introduce a sub-state with a bit more detail, but still nothing specific from the driver. Sounds like a good topic to revisit for the Liberty release. Agreed that driver messages shouldn't reach the end user, but would it be helpful at all to the administrator to have driver error messages visible on the admin panel? -Ben
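Duncan's core argument - that a mutable status field loses error history while an event stream preserves it - can be shown in a few lines. This is an illustration of the two models, not actual Cinder code:

```python
# Model 1: a single mutable status field on the volume row.
status = "creating"
status = "error"      # attach failed for a config reason
status = "deleted"    # a later successful delete overwrites the error,
                      # so the failure is no longer visible anywhere

# Model 2: an append-only event log.
events = []
events.append(("vol-1", "create", "ok"))
events.append(("vol-1", "attach", "error: config problem"))
events.append(("vol-1", "delete", "ok"))

# The admin can still find the failure after the volume is gone.
errors = [e for e in events if e[2].startswith("error")]
```

The same applies to Duncan's retry case: with events, a transient failure followed by a successful retry leaves both records, instead of the success silently replacing the error.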
Re: [openstack-dev] [neutron] New version of python-neutronclient release for Kilo: 2.4.0
On Thu, Apr 09, 2015 at 04:00:39PM -0500, Matt Riedemann wrote: On 4/9/2015 3:14 PM, Kyle Mestery wrote: The Neutron team is proud to announce the release of the latest version of python-neutronclient. This release includes the following bug fixes and improvements:

aa1215a Merge Fix one remaining E125 error and remove it from ignore list
cdfcf3c Fix one remaining E125 error and remove it from ignore list
b978f90 Add Neutron subnetpool API
d6cfd34 Revert Remove unused AlreadyAttachedClient
5b46457 Merge Fix E265 block comment should start with '# '
d32298a Merge Remove author tag
da804ef Merge Update hacking to 0.10
8aa2e35 Merge Make secgroup rules more readable in security-group-show
a20160b Merge Support fwaasrouterinsertion extension
ddbdf6f Merge Allow passing None for subnetpool
5c4717c Merge Add Neutron subnet-create with subnetpool
c242441 Allow passing None for subnetpool
6e10447 Add Neutron subnet-create with subnetpool
af3fcb7 Adding VLAN Transparency support to neutronclient
052b9da 'neutron port-create' missing help info for --binding:vnic-type
6588c42 Support fwaasrouterinsertion extension
ee929fd Merge Prefer argparse mutual exclusion
f3e80b8 Prefer argparse mutual exclusion
9c6c7c0 Merge Add HA router state to l3-agent-list-hosting-router
e73f304 Add HA router state to l3-agent-list-hosting-router
07334cb Make secgroup rules more readable in security-group-show
639a458 Merge Updated from global requirements
631e551 Fix E265 block comment should start with '# '
ed46ba9 Remove author tag
e2ca291 Update hacking to 0.10
9b5d397 Merge security-group-rule-list: show all info of rules briefly
b56c6de Merge Show rules in handy format in security-group-list
c6bcc05 Merge Fix failures when calling list operations using Python binding
0c9cd0d Updated from global requirements
5f0f280 Fix failures when calling list operations using Python binding
c892724 Merge Add commands from extensions to available commands
9f4dafe Merge Updates pool session persistence options
ce93e46 Merge Added client calls for the lbaas v2 agent scheduler
c6c788d Merge Updating lbaas cli for TLS
4e98615 Updates pool session persistence options
a3d46c4 Merge Change Creates to Create in help text
4829e25 security-group-rule-list: show all info of rules briefly
5a6e608 Show rules in handy format in security-group-list
0eb43b8 Add commands from extensions to available commands
6e48413 Updating lbaas cli for TLS
942d821 Merge Remove unused AlreadyAttachedClient
a4a5087 Copy functional tests from tempest cli
dd934ce Merge exec permission to port_test_hook.sh
30b198e Remove unused AlreadyAttachedClient
a403265 Merge Reinstate Max URI length checking to V2_0 Client
0e9d1e5 exec permission to port_test_hook.sh
4b6ed76 Reinstate Max URI length checking to V2_0 Client
014d4e7 Add post_test_hook for functional tests
9b3b253 First pass at tempest-lib based functional testing
09e27d0 Merge Add OS_TEST_PATH to testr
7fcb315 Merge Ignore order of query parameters when compared in MyUrlComparator
ca52c27 Add OS_TEST_PATH to testr
aa0042e Merge Fixed pool and health monitor create bugs
45774d3 Merge Honor allow_names in *-update command
17f0ca3 Ignore order of query parameters when compared in MyUrlComparator
aa0c39f Fixed pool and health monitor create bugs
6ca9a00 Added client calls for the lbaas v2 agent scheduler
c964a12 Merge Client command extension support
e615388 Merge Fix lbaas-loadbalancer-create with no --name
c61b1cd Merge Make some auth error messages more verbose
779b02e Client command extension support
e5e815c Fix lbaas-loadbalancer-create with no --name
7b8c224 Honor allow_names in *-update command
b9a7d52 Updated from global requirements
62a8a5b Make some auth error messages more verbose
8903cce Change Creates to Create in help text

For more details on the release, please see the LP page and the detailed git log history. https://launchpad.net/python-neutronclient/2.4/2.4.0 Please report any bugs in LP. Thanks!
Kyle

And the gate has exploded on kilo-rc1: http://goo.gl/dnfSPC Proposed: https://review.openstack.org/#/c/172150/ Besides breaking the gate, this release also appears to have introduced a backwards-incompatible CLI change which breaks the subnet-create command when the args are used in a certain order (which is how some of the docs show invoking subnet-create). The bug was opened here: https://bugs.launchpad.net/python-neutronclient/+bug/1442771 -Matt Treinish
[openstack-dev] [cinder] volume driver for Blockbridge EPS backend
Hi, I'm working on getting our Cinder volume driver in shape for submission for the Liberty release. To that end, I have some questions about the OpenStack development process. The associated blueprint is here: https://blueprints.launchpad.net/cinder/+spec/blockbridge-eps-driver

Our volume driver is in good shape, and has been used in-house for the better part of a year. I'm currently writing unit tests in preparation for submission. I've signed the Individual CLA and set up Gerrit/git-review. The driver passes all of the style requirements imposed by hacking, except for a complaint about multiple imports from the oslo.i18n package. A couple of questions:

1) When is it appropriate to submit the driver for review? Is 3rd-party CI a prerequisite?
2) How do we get the blueprint assigned to target a specific release?

Thanks in advance for your feedback and assistance, Josh
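For what it's worth, the "multiple imports" complaint Josh mentions is plausibly hacking's one-import-per-line family of checks; that is an assumption on my part (the actual error code in the flake8 output would confirm it), but if so, the fix is mechanical. A generic illustration of the pattern, using a stdlib module rather than oslo.i18n:

```python
# Flagged style: several names pulled in with one statement, e.g.
#   from os.path import join, exists
# The style-check-friendly form is one import statement per line:
from os.path import exists
from os.path import join

# The imported names behave identically either way.
path = join("tmp", "example.txt")
```

If the check turns out to be a different hacking rule, the flake8 output will name the exact code (H-prefixed for hacking checks), which can then be looked up in the hacking documentation.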
Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management
On 04/10/2015 01:04 PM, Boris Pavlovic wrote: Hi, I believe that specs are too detailed and too dev-oriented for managers, operators, and devops. They actually don't want to, or don't have time to, write/read all the stuff in specs, and that's why the communication between the dev and operator communities is broken. +1 I would recommend thinking about simpler approaches, like making a mechanism for proposing features/changes in projects, as we have in Rally: https://rally.readthedocs.org/en/latest/feature_requests.html Are any other OpenStack projects handling feature requests like this? I think a feature request process like this would be useful across more components than just Neutron. I'm sure there are some requests that would also require changes across multiple components. This is similar to specs but concentrates more on WHAT rather than HOW. Best regards, Boris Pavlovic

On Fri, Apr 10, 2015 at 7:34 PM, Kevin Benton blak...@gmail.com wrote: The Neutron drivers team, on the other hand, don't have a clear incentive (or, I suspect, the will) to spend enormous amounts of time doing 'product management', as being a driver is essentially your third or fourth job by this point, and they are the same people solving gate issues, merging code, triaging bugs, and so on. Are you hinting here that there should be a separate team of people from the developers who decide what should and should not be worked on in Neutron? Have there been examples of that working in other open source projects where the majority of the development isn't driven by one employer? I ask because I don't see much of an incentive for a developer to follow requirements generated by people not familiar with the Neutron code base. One of the roles of the drivers team is to determine what is feasible in the release cycle. How would that be possible without actively contributing or (at a minimum) being involved in code reviews?
On Thu, Apr 9, 2015 at 7:52 AM, Assaf Muller amul...@redhat.com wrote: The Neutron specs process was introduced during the Juno cycle. At the time it was mostly a bureaucratic bottleneck (the ability to say no) to ease the pain of cores and manage workloads throughout a cycle. Perhaps this is a somewhat naive outlook, but I see other positives, such as more upfront design (some is better than none), less high-level talk during the implementation review process and more focus on the details, and 'free' documentation for every major change to the project (some would say this is kind of a big deal; what better way to write documentation than to force the developers to do it in order for their features to get merged?). That being said, you can only get a feature merged if you propose a spec, and the people largely proposing specs are developers. This ingrains the open source culture of developer-focused evolution which, while empowering and great for developers, is bad for product managers and users (who are sometimes under-represented, as is the case I'm trying to make) and generally causes a lack of a cohesive vision. Like it or not, the specs process and the drivers team's approval process form a sort of product management, deciding what features will ultimately go into Neutron and in what time frame. We shouldn't ignore the fact that we clearly have people and product managers pulling the strings in the background, often deciding where developers will spend their time and what specs to propose, for the purpose of this discussion. I argue that managers often don't have the tools to understand what is important to the project, only to their own customers.
The Neutron drivers team, on the other hand, don't have a clear incentive (or, I suspect, the will) to spend enormous amounts of time doing 'product management', as being a driver is essentially your third or fourth job by this point, and they are the same people solving gate issues, merging code, triaging bugs, and so on. I'd like to avoid going into a discussion of what's wrong with the current specs process, as I'm sure people have heard me complain about this in #openstack-neutron plenty of times before. Instead, I'd like to suggest a system that would perhaps get us to implement specs that are currently not being proposed, and give an additional form of input that would make sure that the development community is spending its time in the right places.
Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon
True. I was thinking more of the Horizon event notifications that show for a few seconds, then vanish - not useful for this case... Maybe something like the way Heat keeps track of events, but more global, then? A list of error events in the admin tab that admins should look at and clear?

From: Duncan Thomas Sent: Friday, April 10, 2015 12:52:21 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon

I'd say events are *more* useful in that workflow, not less, as long as they contain enough context. For example, the user creates a volume, then tries to attach it, which fails due to some config error, so the user deletes it. With an event-based model, the admin now has an error event in their queue. If we used a db field, then the error status is potentially revived by the successful delete. Similarly, if an intermittent fault happens but the action succeeds on a retry, then the error info is again lost. On 10 Apr 2015 20:45, Fox, Kevin M kevin@pnnl.gov wrote: Events shouldn't help much, since the workflow is: the user does something, it breaks with an error, the admin is notified, and then the admin needs to figure out why... Ideally there needs to be a 'why' field in the db, and the dashboard can show it to admins. Thanks, Kevin

From: Duncan Thomas Sent: Friday, April 10, 2015 11:47:07 AM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon Maybe Cinder can emit more notification events; then a dashboard can be built separately with the appropriate access control and filtering.
The same dash can then be used for many projects - all they need to add are the events with enough metadata. On 10 Apr 2015 18:53, Ben Swartzlander b...@swartzlander.org wrote: On 04/10/2015 08:11 AM, John Griffith wrote: On Fri, Apr 10, 2015 at 6:16 AM, liuxinguo liuxin...@huawei.com wrote: Hi, when we create a volume in Horizon, some errors may occur at the driver backend, and in Horizon we just see an "error" volume status. So is there any way to pass the error information to Horizon so users can know exactly what happened, just from Horizon? Thanks, Liu

The challenge here is that the calls to the driver are async, so we don't actually have any direct feedback/messages regarding the result. One thing that's been talked about in the past is adding additional information to the status which could be presented so that end users and tools like Horizon would have a little more information. The trick, however, is that we also do NOT want driver info going back to the end user; the whole point of Cinder is to abstract away all of that backend-specific stuff. The idea kicked around in the past was to introduce a sub-state with a bit more detail, but still nothing specific from the driver. Sounds like a good topic to revisit for the Liberty release. Agreed that driver messages shouldn't reach the end user, but would it be helpful at all to the administrator to have driver error messages visible on the admin panel?
-Ben
Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management
I like this idea. It leaves specs and implementation details to people familiar with the code base while providing a good place for users and devs to discuss feature requests. On Apr 10, 2015 2:04 PM, John Kasperski jckas...@linux.vnet.ibm.com wrote: On 04/10/2015 01:04 PM, Boris Pavlovic wrote: Hi, I believe that specs are too detailed and too dev-oriented for managers, operators, and devops. They actually don't want to, or don't have time to, write/read all the stuff in specs, and that's why the communication between the dev and operator communities is broken. +1 I would recommend thinking about simpler approaches, like making a mechanism for proposing features/changes in projects, as we have in Rally: https://rally.readthedocs.org/en/latest/feature_requests.html Are any other OpenStack projects handling feature requests like this? I think a feature request process like this would be useful across more components than just Neutron. I'm sure there are some requests that would also require changes across multiple components. This is similar to specs but concentrates more on WHAT rather than HOW. Best regards, Boris Pavlovic On Fri, Apr 10, 2015 at 7:34 PM, Kevin Benton blak...@gmail.com wrote: The Neutron drivers team, on the other hand, don't have a clear incentive (or, I suspect, the will) to spend enormous amounts of time doing 'product management', as being a driver is essentially your third or fourth job by this point, and they are the same people solving gate issues, merging code, triaging bugs, and so on. Are you hinting here that there should be a separate team of people from the developers who decide what should and should not be worked on in Neutron? Have there been examples of that working in other open source projects where the majority of the development isn't driven by one employer? I ask because I don't see much of an incentive for a developer to follow requirements generated by people not familiar with the Neutron code base.
One of the roles of the drivers team is to determine what is feasible in the release cycle. How would that be possible without actively contributing or (at a minimum) being involved in code reviews? On Thu, Apr 9, 2015 at 7:52 AM, Assaf Muller amul...@redhat.com wrote: The Neutron specs process was introduced during the Juno cycle. At the time it was mostly a bureaucratic bottleneck (the ability to say no) to ease the pain of cores and manage workloads throughout a cycle. Perhaps this is a somewhat naive outlook, but I see other positives, such as more upfront design (some is better than none), less high-level talk during the implementation review process and more focus on the details, and 'free' documentation for every major change to the project (some would say this is kind of a big deal; what better way to write documentation than to force the developers to do it in order for their features to get merged?). That being said, you can only get a feature merged if you propose a spec, and the people largely proposing specs are developers. This ingrains the open source culture of developer-focused evolution which, while empowering and great for developers, is bad for product managers and users (who are sometimes under-represented, as is the case I'm trying to make) and generally causes a lack of a cohesive vision. Like it or not, the specs process and the drivers team's approval process form a sort of product management, deciding what features will ultimately go into Neutron and in what time frame. We shouldn't ignore the fact that we clearly have people and product managers pulling the strings in the background, often deciding where developers will spend their time and what specs to propose, for the purpose of this discussion. I argue that managers often don't have the tools to understand what is important to the project, only to their own customers.
The Neutron drivers team, on the other hand, doesn't have a clear incentive (or, I suspect, the will) to spend enormous amounts of time doing 'product management', as being a driver is essentially your third or fourth job by this point, and they are the same people solving gate issues, merging code, triaging bugs and so on. I'd like to avoid going into a discussion of what's wrong with the current specs process, as I'm sure people have heard me complain about this in #openstack-neutron plenty of times before. Instead, I'd like to suggest a system that would perhaps get us to implement specs that are currently not being proposed, and give an additional form of input that would make sure that the development community is spending its time in the right places. While 'super users' have been given more exposure, and operator summits give operators an additional tool to provide feedback, from a developer's point of view the input is non-empiric and scattered. I also have a hunch that operators still feel their voice is not being heard.
Re: [openstack-dev] [cinder] Is there any way to put the driver backend error message to the horizon
I'd say events are *more* useful in that workflow, not less, as long as they contain enough context. For example, the user creates a volume, tries to attach it which fails for some config error, so the user deletes it. With an event based model, the admin now has an error event in their queue. If we used a db field then the error status is potentially revived by the successful delete. +1 Nova currently emits a good set of events and errors and we've found it especially useful to debug / do postmortem analysis by collecting these notifications and being able to view the entire workflow. we've found quite a few occasions where the error popups presented in Horizon are not the real error but just the last/wrapped error. there are various consumers that already collate these error notifications from Nova and i don't think it's much of a change if any to collect error notifications from Cinder. i don't think there's any change from Ceilometer POV -- just publish to error topic. cheers, gord __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
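To make the event-based workflow concrete, here is a minimal sketch (plain Python, standard library only) of the general shape of an error notification a service might emit on the error priority topic. The field and event-type names follow common notification conventions but are illustrative here, not the actual Nova or Cinder schema.

```python
import uuid
from datetime import datetime, timezone

def build_error_notification(publisher_id, event_type, tenant_id,
                             resource_id, message):
    """Assemble an error notification in the general shape OpenStack
    services emit at ERROR priority (field names are illustrative)."""
    return {
        "message_id": str(uuid.uuid4()),
        "publisher_id": publisher_id,      # e.g. "volume.host1"
        "event_type": event_type,          # e.g. "volume.create.error"
        "priority": "ERROR",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": {
            "tenant_id": tenant_id,
            "resource_id": resource_id,
            # Carry the *original* backend error, not the last wrapped one,
            # so postmortem tools see the real cause.
            "message": message,
        },
    }

note = build_error_notification("volume.host1", "volume.create.error",
                                "tenant-123", "vol-456",
                                "iSCSI target unreachable")
print(note["event_type"], note["priority"])
```

Because the error lives in the event stream rather than in a DB status field, a later successful delete of the volume does not erase the admin's record of the failure.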
Re: [openstack-dev] [docs] [End User Guide] [Keystone] keyring support for python-keystoneclient and python-openstackclient
On 04/07/2015 09:44 AM, Brant Knudson wrote: On Tue, Apr 7, 2015 at 3:52 AM, Olena Logvinova ologvin...@mirantis.com wrote: Good day to everyone! My name is Olena, I am new to OpenStack (working as a tech writer). And I'm stuck on a bug https://launchpad.net/bugs/1419990 (patch https://review.openstack.org/#/c/163503/). I have 2 questions, please: Does the page http://docs.openstack.org/user-guide/content/cli_openrc.html contain info about python-keystoneclient only, or both python-keystoneclient and python-openstackclient? The page contains info about all the CLIs, not just keystone and openstack. Users should be able to create an OpenStack RC file that works with the keystone command or the openstack command or nova, glance, etc. Note that the keystone command is deprecated in favor of the openstack command, so I think we should be removing any use of it from the documentation. And should we remove the keyring support part here (because it was removed from python-openstackclient), or do some re-wording (because it is still left in python-keystoneclient)? (Here is a patch saying that keyring is still used by keystoneclient: https://review.openstack.org/#/c/107719/ ) I tried the keystone command and it doesn't have an --os-use-keyring option, and neither does the openstack command. I think the Keyring support section should be removed. Has anyone looked into Keyring support in python-openstackclient yet? - Brant
Re: [openstack-dev] [Nova] [Horizon] Insufficient (?) features in current Nova API
This discussion is actually rather timely. In Kilo, we just added a new experimental API in Glance called the Catalog Index Service [1]. Basically, it provides multi-tenant Elasticsearch-based caching and indexing for various resources using a plugin mechanism. We proposed it and built it initially to be targeted specifically at Glance resources for a number of reasons, including constraining the scope and ensuring that we had proper guidance over some RBAC aspects. Not surprisingly, quite a few people expressed a very strong interest in this service providing search and indexing for many OpenStack services, including Nova. There is also debate on where it should reside. Ultimately, the decision was to address this during Liberty and to have at least one session at the Vancouver summit on it in the Glance track. I have also proposed it as a topic on the Horizon summit planning etherpad and plan to provide more details shortly (finishing out Kilo was the top priority). The current experimental service was built with a resource plugin architecture so that the actual indexing (with notification listening) and RBAC handling for different resource types is handled via plugins. In addition, when deployed it is registered in Keystone as a service of type "search" with its own endpoint. The basic idea for Horizon has been that we can check to see if the search service is enabled for the current user's logged-in region. If so, we will add support in the API layers to leverage the search service for resources indexed by the service when appropriate. We will also be able to take better advantage of advanced search faceting with near real-time results and autocompletion using an Angular UI component called Magic Search [2] that Eucalyptus open sourced and which IBM / HP have been working on to enable in Horizon [3]. We want to also discuss this at the summit in the Horizon track.
We are currently investigating and prototyping a number of things related to all of the above and will provide more info and results very shortly. We'll definitely be looking for feedback and help on how to approach a number of topics! Thank you, Travis [1] http://docs-draft.openstack.org/51/138051/9/check/gate-glance-specs-docs/d9ad1b8//doc/build/html/specs/kilo/catalog-index-service.html [2] https://www.youtube.com/watch?v=FTjqFddBJBA [3] https://review.openstack.org/#/c/157481/ On 4/10/15, 11:30 AM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Attila Fazekas's message of 2015-04-10 06:20:02 -0700: I think the Nova API (list) needs to be extended with custom attribute filtering and with multi-item `server` get by uuid (up to 128+). The basic list usually does not show what I need, and Nova processes more data than it really needs just for a basic list. The detailed list is very slow. Maybe the attribute filtering or the multi-item get (/POST) is not a very RESTish thing, but it can be very efficient! Multi-get is a good idea anyway, as it will help with workloads that need it. However, it won't really be much better than the detail list for things like Horizon. It will just break the problem up into a less-slow list and a less-slow fetch, instead of one slow list. This is related to the discussion about how OpenStack should talk to users. It would make a lot more sense for Horizon to poll a high-speed but not 100% consistent Atom feed that is updated by a daemon listening to notifications, than to always ask for the details from the realtime-accurate API. I believe Rackspace has something in place called 'yagi' [1] that does this for them, but I don't know what UIs they have that use it. Another option is for the listing API to just cache aggressively on its backend, with users being given the option to add a request option that means go slow and get me up-to-date information.
However, one must be careful to avoid thundering herds and other problems that backend caches tend to mask until the worst moment. For instance, if the cache starts getting invalidated faster than users are querying it because there are too many users asking for the slow version, then you are suddenly back in the sinking boat without a bucket to bail water anymore. [1] https://github.com/rackerlabs/yagi
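Clint's caveat about thundering herds can be sketched as a toy request-coalescing cache: when an entry expires, only one caller refreshes it while everyone else keeps getting the last-known value, so the slow backend is never hit by a herd at once. This is an illustrative standard-library sketch, not anything from Nova.

```python
import threading
import time

class CoalescingCache:
    """Toy cache with a thundering-herd guard: on expiry, exactly one
    caller refreshes while the rest serve the stale value."""
    def __init__(self, fetch, ttl=5.0):
        self._fetch = fetch          # the slow backend call, e.g. a detail list
        self._ttl = ttl
        self._lock = threading.Lock()
        self._value = None
        self._expires = 0.0

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if now < self._expires:
            return self._value       # fresh: no backend hit at all
        if self._lock.acquire(blocking=False):
            try:                     # this caller does the refresh...
                self._value = self._fetch()
                self._expires = now + self._ttl
            finally:
                self._lock.release()
        return self._value           # ...everyone else gets last-known data

calls = []
cache = CoalescingCache(lambda: calls.append(1) or len(calls), ttl=10)
cache.get(now=0); cache.get(now=1); cache.get(now=2)
print(len(calls))  # backend was hit only once
```

A real implementation would also jitter the TTL and handle the cold-start case (first callers may see None while the fetch runs), which is exactly the kind of masked failure mode the email warns about.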
Re: [openstack-dev] [all] how to send messages (and events) to our users
From: Angus Salkeld asalk...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Thursday, April 9, 2015 6:43 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [all] how to send messages (and events) to our users On Fri, Apr 10, 2015 at 2:24 AM, Sandy Walsh sandy.wa...@rackspace.com wrote: From: Angus Salkeld asalk...@mirantis.com Sent: Wednesday, April 8, 2015 8:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] how to send messages (and events) to our users I also want to point out that what I'd actually rather see is that all of the services provide functionality like this. Users would be served by having an event stream from Nova telling them when their instances are active, deleted, stopped, started, error, etc. Also, I really liked Sandy's suggestion to use the notifications on the backend, and then funnel them into something that the user can consume. The project they have, Yagi, for putting them into Atom feeds is pretty interesting. If we could give people a simple API that says subscribe to Nova/Cinder/Heat/etc. notifications for instance X, and put them in an Atom feed, that seems like something that would make sense as an under-the-cloud service that would be relatively low cost and would ultimately reduce load on API servers. An under-the-cloud service? - That is not what I am after here. Yeah, we're using this as an under-cloud service. Our notifications are only consumed internally, so it's not a multi-tenant/SaaS solution. What I am really after is a general OpenStack solution for how end users can consume service notifications (and replace heat event-list).
Right now there is ceilometer event-list, but as some Ceilometer devs have said, they don't want to store every notification that comes. So is the yagi + atom hopper solution something we can point end-users to? Is it per-tenant etc... However, there is a team within Rax working on this SaaS offering: Peter Kazmir and Joe Savak. I'll let them respond with their lessons on AtomHopper, etc. Great, thanks Sandy. It would be good to see what this is (is it just Zaqar? or something totally different). AtomHopper is an atom-pub server (it's open source: http://atomhopper.org/ ), we use it with Repose (http://openrepose.org/) to support Keystone auth, etc. We use Yagi to read notifications from the rabbit queues and publish notifications in Atom format to a feed. We've been using this setup internally for a couple of years now. As Sandy mentioned, Peter and Joe are working on the project to provide these Atom feeds to end users. Sandy, do you have a write up somewhere on how to set this up so I can experiment a bit? Yagi: https://github.com/rackerlabs/yagi AtomHopper: http://atomhopper.org/ (java warning) The StackTach.v3 sandbox is DevStack-for-Notifications. It simulates notifications (no openstack deploy needed) and it has Yagi set up to consume them. There's also Vagrant scripts to get you going. http://www.stacktach.com/install.html https://github.com/stackforge/stacktach-sandbox and some, slightly older, screencasts on the Sandbox here: http://www.stacktach.com/screencasts.html We're in the #stacktach channel, by all means ping us if you run into problems. Or if a Hangout works better for you, just scream :) Thanks for the help! 
-Angus
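For readers unfamiliar with the Atom side of this thread, a consumer of such a feed is just walking Atom entries. Below is a minimal standard-library sketch; the entry layout is hand-written for illustration, and the real Yagi/AtomHopper output may carry different element details.

```python
import xml.etree.ElementTree as ET

# A hand-written Atom feed in the general shape a notification feed
# might carry (illustrative; not the actual Yagi/AtomHopper schema).
FEED = """
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>server events</title>
  <entry>
    <id>urn:uuid:1234</id>
    <title>compute.instance.create.end</title>
    <updated>2015-04-10T12:00:00Z</updated>
    <content type="text">instance X is now ACTIVE</content>
  </entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

def latest_events(xml_text):
    """Return (event_type, detail) tuples from an Atom feed document."""
    root = ET.fromstring(xml_text)
    return [(e.findtext("atom:title", namespaces=NS),
             e.findtext("atom:content", namespaces=NS))
            for e in root.findall("atom:entry", NS)]

print(latest_events(FEED))
```

An end-user client would poll the feed URL periodically (Atom is plain HTTP, so ordinary caching headers apply) and resume from the last seen entry id.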
Re: [openstack-dev] [cinder] volume driver for Blockbridge EPS backend
On 10 Apr 2015 21:02, Joshua Huber jhu...@blockbridge.com wrote: A couple questions: 1) When is it appropriate to submit the driver for review? Is 3rd-party CI a prerequisite? We will not be merging any drivers without stable 3rd-party CI in future, so it is a requirement. You can submit the driver for initial review anytime now, but it will not get final approval without CI. 2) How do we get the blueprint assigned to target a specific release? Join the channel #openstack-cinder on IRC and ask Mike Perez (Thingee on IRC) to target it. The channel is a good place in general for Cinder questions.
[openstack-dev] [puppet] Puppet PTL
Just to make it official: since we only had one nominee for PTL, we will go ahead and name Emilien Macchi as our new PTL without proceeding with an election process. Thanks, Emilien, for all your hard work and for taking on this responsibility! Colleen (crinkle)
Re: [openstack-dev] [puppet] Puppet PTL
Congratulations Emilien, Looking forward to the progress we can make as a community! On Apr 10, 2015 4:58 PM, Colleen Murphy coll...@puppetlabs.com wrote: Just to make it official: since we only had one nominee for PTL, we will go ahead and name Emilien Macchi as our new PTL without proceeding with an election process. Thanks, Emilien, for all your hard work and for taking on this responsibility! Colleen (crinkle)
Re: [openstack-dev] [cinder] volume driver for Blockbridge EPS backend
On Fri, Apr 10, 2015 at 6:28 PM, Duncan Thomas duncan.tho...@gmail.com wrote: We will not be merging any drivers without stable 3rd party CI in future, so it is a requirement. You can submit the driver for initial review anytime now, but it will not get final approval without CI. Great, we'll get the CI set up ASAP, but in the meantime start the review process. Join the channel #Openstack-cinder on IRC and ask Mike Perez (Thingee on IRC) to target it. The channel is a good place in general for cinder questions. Sounds good, thanks for the quick response. :) -Josh
Re: [openstack-dev] [barbican] Utilizing the KMIP plugin
Hello Christopher, It does seem that configs are being read from another location. Try removing that copy in your home directory (so just keep the /etc location). If you see the same issue, try renaming your /etc/barbican/barbican-api.conf file to something else. Barbican should crash, probably with a No SQL connection error. Also, double-check the ‘kmip_plugin’ setting in setup.cfg as per below, and try running ‘pip install -e .’ again in your virtual environment. FWIW, this CR adds better logging of plugin errors once the loading problem you have is figured out: https://review.openstack.org/#/c/171868/ Thanks, John From: Christopher N Solis cnso...@us.ibm.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Thursday, April 9, 2015 at 1:55 PM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin Hey John. Thanks for letting me know about the error. But I think my configuration is not seeing the kmip_plugin selection. In my barbican-api.conf file in /etc/barbican I have set enabled_secretstore_plugins = kmip_plugin However, I don't think it is creating a KMIPSecretStore instance. I edited the code in kmip_secret_store.py and put a breakpoint at the very beginning of the init function. When I make a barbican request to put a secret in there, it did not stop at the breakpoint at all. I put another breakpoint in the store_crypto.py file inside the init function for the StoreCryptoAdapterPlugin and I was able to enter the code at that breakpoint. So even though in my barbican-api.conf file I specified kmip_plugin, it seems to be using the store_crypto plugin instead. Is there something that might cause this to happen?
I also want to note that my code has the most up-to-date pull from the community code. Here's what my /etc/barbican/barbican-api.conf file has in it:

# ========== Secret Store Plugin ==========
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = kmip_plugin
...
# ========== KMIP plugin ==========
[kmip_plugin]
username = '**'
password = '**'
host = 10.0.2.15
port = 5696
keyfile = '/etc/barbican/rootCA.key'
certfile = '/etc/barbican/rootCA.pem'
ca_certs = '/etc/barbican/rootCA.pem'

Regards, Christopher Solis From: John Wood john.w...@rackspace.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 04/08/2015 03:16 PM Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin Hello Christopher, My local configuration is indeed seeing the kmip_plugin selection, but when stevedore tries to load the KMIP plugin it crashes because required files are missing in my local environment (see https://github.com/openstack/barbican/blob/master/barbican/plugin/kmip_secret_store.py#L131 for example). Stevedore logs the exception but then doesn’t load this module, so when Barbican asks for an available plugin it doesn’t see it and crashes as you see. So the root exception from stevedore isn’t showing up in my logs for some reason, and probably not in yours as well. We’ll try to put up a CR to at least expose this exception in logs. In the meantime, make sure the KMIP values checked via that link above are configured on your machine.
Sorry for the inconvenience, John From: Christopher N Solis cnso...@us.ibm.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Wednesday, April 8, 2015 at 11:27 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin Hey John. I do have the barbican-api.conf file located in the /etc/barbican folder. But that does not seem to be the one that barbican reads from. It seems to be reading from the barbican-api.conf file located in my home directory. Either way, both have the exact same configurations. I also checked the setup.cfg file and it does have the line for kmip_plugin. Regards, CHRIS SOLIS
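John's "double check setup.cfg" advice boils down to this: stevedore can only load names registered under the barbican.secretstore.plugin entry-point namespace. Here is a small sketch that parses a setup.cfg-style excerpt and lists what would be loadable; the module paths mirror the files mentioned in the thread but should be treated as illustrative, so check Barbican's actual setup.cfg.

```python
import configparser

# setup.cfg-style excerpt (illustrative; verify against barbican's file).
SETUP_CFG = """
[entry_points]
barbican.secretstore.plugin =
    store_crypto = barbican.plugin.store_crypto:StoreCryptoAdapterPlugin
    kmip_plugin = barbican.plugin.kmip_secret_store:KMIPSecretStore
"""

def registered_plugins(cfg_text, namespace):
    """Names stevedore could load for `namespace` per [entry_points].
    If 'kmip_plugin' is missing here, no barbican-api.conf option can
    make Barbican instantiate KMIPSecretStore."""
    cp = configparser.ConfigParser()
    cp.read_string(cfg_text)
    block = cp.get("entry_points", namespace)
    return [line.split("=")[0].strip()
            for line in block.strip().splitlines()]

print(registered_plugins(SETUP_CFG, "barbican.secretstore.plugin"))
```

This also explains why `pip install -e .` matters: entry points only take effect once the package metadata is (re)installed, so editing setup.cfg alone is not enough.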
Re: [openstack-dev] [neutron] Neutron scaling datapoints?
Hi, Neil, See inline comments. Best Regards Chaoyi Huang From: Neil Jerram [neil.jer...@metaswitch.com] Sent: 09 April 2015 23:01 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints? Hi Joe, Many thanks for your reply! On 09/04/15 03:34, joehuang wrote: Hi, Neil, In theory, Neutron is like a broadcast domain: for example, enforcement of DVR and security groups has to touch every host where a VM of the project resides. Even using an SDN controller, touching those hosts is inevitable. If there are plenty of physical hosts, for example 10k, inside one Neutron, it's very hard to overcome the broadcast storm issue under concurrent operation; that's the bottleneck for scalability of Neutron. I think I understand that in general terms - but can you be more specific about the broadcast storm? Is there one particular message exchange that involves broadcasting? Is it only from the server to agents, or are there 'broadcasts' in other directions as well? [[joehuang]] For example: L2 population, security group rule updates, DVR route updates. Both directions, in different scenarios. (I presume you are talking about control plane messages here, i.e. between Neutron components. Is that right? Obviously there can also be broadcast storm problems in the data plane - but I don't think that's what you are talking about here.) [[joehuang]] Yes, the control plane here. We need a layered architecture in Neutron to solve the broadcast domain bottleneck of scalability. The test report from OpenStack cascading shows that through a layered architecture, Neutron cascading, Neutron can support on the order of a million ports and 100k physical hosts. You can find the report here: http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers Many thanks, I will take a look at this.
Neutron cascading also brings an extra benefit: one cascading Neutron can have many cascaded Neutrons, and different cascaded Neutrons can leverage different SDN controllers; maybe one is ODL, the other one is OpenContrail.

                  ---Cascading Neutron---
                 /                       \
    --cascaded Neutron--          --cascaded Neutron--
             |                             |
          --ODL--                  --OpenContrail--

And furthermore, if using Neutron cascading in multiple data centers, the DCI controller (data center inter-connection controller) can also be used under the cascading Neutron, to provide NaaS (network as a service) across data centers.

                     ---Cascading Neutron---
                    /          |            \
    --cascaded Neutron--  -DCI controller-  --cascaded Neutron--
             |                 |                    |
          --ODL--              |            --OpenContrail--
    --(Data center 1)--  --(DCI networking)--  --(Data center 2)--

Is it possible for us to discuss this at the OpenStack Vancouver summit? Most certainly, yes. I will be there from mid Monday afternoon through to end Friday. But it will be my first summit, so I have no idea yet as to how I might run into you - please can you suggest! I will also attend the summit the whole week, sometimes in the OPNFV parts, sometimes in the OpenStack parts. Let me see how to meet. Best Regards Chaoyi Huang ( Joe Huang ) Regards, Neil
Re: [openstack-dev] [cinder] volume driver for Blockbridge EPS backend
On 2015-04-11 01:28:51 +0300 (+0300), Duncan Thomas wrote: [...] We will not be merging any drivers without stable 3rd party CI in future [...] For clarity, hopefully this is stable testing reporting on changes to the project in general, and not 3rd party CI specifically. After all, the Project Infrastructure team is thrilled to have other free/libre software leveraging our systems, and we are happy to work with authors of the same to help set that up, as long as it doesn't have any dependency on special hardware or otherwise present serious complications with our system design. In these cases it wouldn't be a third party doing the testing, because it would be tested upstream. Unfortunately we've seen too many free software devs in recent months trying to bow and scrape hardware into service so they could meet the testing requirements set forth by OpenStack projects, without realizing that we're here to help. It's a burden for anyone to get running, but at least for non-proprietary drivers and plug-ins we can try to make sure it's a little less of a burden by showing them how to test upstream. -- Jeremy Stanley
Re: [openstack-dev] Neutron scaling datapoints?
Hi, Attila, Interesting idea. Can you show us your test (or simulation) results showing that Neutron can support up to 100k managed nodes? Best Regards Chaoyi Huang ( joehuang ) From: Attila Fazekas [afaze...@redhat.com] Sent: 10 April 2015 17:18 To: neil.jer...@metaswitch.com Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints? I'm 99.9% sure that, for scaling above 100k managed nodes, we do not really need to split OpenStack into multiple smaller OpenStacks, or use a significant number of extra controller machines. The problem is that OpenStack is using the right tools (SQL/AMQP/(zk)), but in the wrong way. For example: periodic updates can be avoided in almost all cases; the new data can be pushed to the agent just when it is needed. The agent can know when the AMQP connection becomes unreliable (queue or connection loss), and needs to do a full sync. https://bugs.launchpad.net/neutron/+bug/1438159 Also, when the agents get some notification, they start asking for details via AMQP - SQL. Why do they not know it already, or get it with the notification? - Original Message - From: Neil Jerram neil.jer...@metaswitch.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Thursday, April 9, 2015 5:01:45 PM Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints? Hi Joe, Many thanks for your reply! On 09/04/15 03:34, joehuang wrote: Hi, Neil, In theory, Neutron is like a broadcast domain: for example, enforcement of DVR and security groups has to touch every host where a VM of the project resides. Even using an SDN controller, touching those hosts is inevitable. If there are plenty of physical hosts, for example 10k, inside one Neutron, it's very hard to overcome the broadcast storm issue under concurrent operation; that's the bottleneck for scalability of Neutron.
I think I understand that in general terms - but can you be more specific about the broadcast storm? Is there one particular message exchange that involves broadcasting? Is it only from the server to agents, or are there 'broadcasts' in other directions as well? (I presume you are talking about control plane messages here, i.e. between Neutron components. Is that right? Obviously there can also be broadcast storm problems in the data plane - but I don't think that's what you are talking about here.) We need a layered architecture in Neutron to solve the broadcast domain bottleneck of scalability. The test report from OpenStack cascading shows that through a layered architecture, Neutron cascading, Neutron can support on the order of a million ports and 100k physical hosts. You can find the report here: http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers Many thanks, I will take a look at this. Neutron cascading also brings an extra benefit: one cascading Neutron can have many cascaded Neutrons, and different cascaded Neutrons can leverage different SDN controllers; maybe one is ODL, the other one is OpenContrail.

                  ---Cascading Neutron---
                 /                       \
    --cascaded Neutron--          --cascaded Neutron--
             |                             |
          --ODL--                  --OpenContrail--

And furthermore, if using Neutron cascading in multiple data centers, the DCI controller (data center inter-connection controller) can also be used under the cascading Neutron, to provide NaaS (network as a service) across data centers.

                     ---Cascading Neutron---
                    /          |            \
    --cascaded Neutron--  -DCI controller-  --cascaded Neutron--
             |                 |                    |
          --ODL--              |            --OpenContrail--
    --(Data center 1)--  --(DCI networking)--  --(Data center 2)--

Is it possible for us to discuss this at the OpenStack Vancouver summit? Most certainly, yes. I will be there from mid Monday afternoon through to end Friday. But it will be my first summit, so I have no idea yet as to how I might run into you - please can you suggest!
Best Regards Chaoyi Huang ( Joe Huang ) Regards, Neil
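Attila's push-plus-resync idea can be sketched as a toy agent state machine: consume pushed deltas that already carry their details (no follow-up AMQP-to-SQL round trip), and do a full sync only after the connection is known to have been unreliable. All names here are invented for illustration; this is not Neutron agent code.

```python
class Agent:
    """Toy agent: cheap deltas while connected, one full sync after a
    known-unreliable connection (illustrative names only)."""
    def __init__(self, server):
        self.server = server         # stands in for the server-side state
        self.state = {}
        self.dirty = True            # no data yet -> must full-sync once
        self.full_syncs = 0

    def on_connected(self):
        if self.dirty:               # queue was lost: resync everything once
            self.state = dict(self.server)
            self.full_syncs += 1
            self.dirty = False

    def on_notification(self, key, value):
        # The delta already carries the details: no extra fetch needed.
        self.state[key] = value

    def on_connection_lost(self):
        self.dirty = True            # anything missed while down is unknown

server = {"port1": "up"}
agent = Agent(server)
agent.on_connected()                 # initial full sync
server["port2"] = "up"
agent.on_notification("port2", "up") # cheap pushed delta
agent.on_connection_lost()
server["port3"] = "up"               # change missed while disconnected
agent.on_connected()                 # triggers exactly one more full sync
print(agent.full_syncs, sorted(agent.state))  # -> 2 ['port1', 'port2', 'port3']
```

The point of the design is that periodic polling disappears entirely: full syncs happen only on the (rare) unreliable-connection path, so steady-state load scales with the change rate, not with the node count.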
Re: [openstack-dev] [all] [api] [log] Erring is Caring
Hi all, The first draft of the spec for Log Message Error Codes (from the Log Working Group) is out for review: https://review.openstack.org/#/c/172552/ Please comment, and please look for the way forward, so that the different places, ways and reasons errors get reported provide consistency across the projects that make up the OpenStack ecosystem. Consistency across uses will speed problem solving and will provide a common language across the diversity of users of the OpenStack code and environments. This cross-project spec is focused on just a part of the log message header, but it is the start of where log messages need to go. It dovetails with developer-focused API errors, but is aimed at the log files the operators rely on to keep their clouds running. Over the next couple of days, I will also specifically add reviewers to the list if you haven't already commented on the spec ;-). Thanks for your patience waiting for this. It *is* a work in progress, but I think the meat is there for discussion. Also, please look at and comment on: Return Request ID to caller https://review.openstack.org/#/c/156508/ as this is also critical to get right for logging and developer efforts. --Rocky -Original Message- From: Everett Toews [mailto:everett.to...@rackspace.com] Sent: Tuesday, March 31, 2015 14:36 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [all] [api] Erring is Caring Hi All, An API Working Group Guideline for Errors https://review.openstack.org/#/c/167793/ Errors are a crucial part of the developer experience when using an API. As developers learn the API, they inevitably run into errors. The quality and consistency of the error messages returned to them will play a large part in how quickly they can learn the API, how effective they can be with it, and how much they enjoy using it. We need consistency across all services for the error format returned in the response body.
The Way Forward I did a bit of research into the current state of consistency in errors across OpenStack services [1]. Since no services seem to respond with a top-level errors key, it's possible that they could just include this key in the response body along with their usual response, and the two can live side by side for some deprecation period. Hopefully those services with unstructured errors will be okay with adding some structure. That said, the current error formats aren't documented anywhere that I've seen, so this all feels fair game anyway. How this would get implemented in code is up to you. It could eventually be implemented in all projects individually, or perhaps an Oslo utility is called for. However, this discussion is not about the implementation. This discussion is about the error format. The Review I've explicitly added all of the API WG and Logging WG CPLs as reviewers to that patch, but feedback from all is welcome. You can find a more readable version of patch set 4 at [2]. I see the id and code fields as the connection point to what the Logging working group is doing. Thanks, Everett [1] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Errors [2] http://docs-draft.openstack.org/93/167793/4/check/gate-api-wg-docs/e2f5b6e//doc/build/html/guidelines/errors.html
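For concreteness, here is a sketch of what a response body with the proposed top-level errors key might look like. The field names (request_id, code, status, title, detail) are a guess at the draft guideline based on this discussion, not a settled format; check the review itself for what was agreed.

```python
import json

# Illustrative error response body with the proposed top-level "errors"
# key; the field names are assumptions, not the published guideline.
body = {
    "errors": [{
        "request_id": "req-3ea4aa9f",          # ties API error to log lines
        "code": "compute.flavor-not-found",    # stable, greppable identifier
        "status": 404,
        "title": "Flavor could not be found",
        "detail": "Flavor 'm1.giant' does not exist in region RegionOne.",
    }]
}

text = json.dumps(body)
err = json.loads(text)["errors"][0]
print(err["code"], err["status"])
```

Because errors lives alongside the existing payload, a service could return both formats during a deprecation period, exactly as Everett suggests, and the code field is the natural hook for the Log Working Group's error codes.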