Re: [openstack-dev] The Evolution of core developer to maintainer?
On 03/31/2015 06:24 PM, John Griffith wrote:
> What is missing for me here however is who picks these special people. I'm convinced that this does more to promote the idea of special contributors than anything else. Maybe that's actually what you want, but it seemed based on your message that wasn't the case. Anyway, core nominations are fairly objective in my opinion and are *mostly* based on the number of reviews and the perceived quality of those reviews (measured somewhat by disagreement rates etc.). What are the metrics for this special group of folks that you're proposing we empower and title as maintainers? Do I get to be a maintainer, or is it reserved for a special group of people, a specific company? What's the criteria? Do *you* get to be a maintainer? What standards are *maintainers* held to? Who/how do we decide he/she is doing their job? Are there any rules about representation and interests (keeping the team of people balanced)? What about the work by those maintainers that introduces more/new bugs?

I think Joe's comments about giving more people more responsibility make a lot of sense. I worked with the Linux kernel more-or-less professionally for about a decade, and while the kernel project has its problems, there were a few things about its maintainer model that I liked.

1) There was a MAINTAINERS file at the top level of the source, listing who was currently responsible for which areas of code, along with their contact information. Generally this was one or two people, with larger subsystems having a mailing list as well.

2) The maintainers were generally chosen by consensus because they were the experts in that area, they had time available, and they were willing to take on the task. Usually when a maintainer stepped down there was someone to take their place who had been working closely with them for some time.

3) If you found a bug in a particular area, you could look up that area and find out who was in charge and take the problem to them.
Similarly, if you wanted to contribute some code in a particular area, there was a relatively small number of specific people that you could talk to about whether the change made sense, or what modifications would be needed to get it accepted.

I think some of this exists informally within OpenStack, but it's not obvious to a newcomer who they need to talk to if they have an issue with libvirt, or with the scheduler, or with the DB, or some minutiae of the REST API. (Sorry for the nova-specific examples, it's where I've spent most of my time.)

I don't know what sort of process would be appropriate for selecting these people within OpenStack, but I think it would be useful to follow Joe's suggestion and give people approval privileges within a subsection. It's *hard* to find people that are able to wrap their heads around the entirety of something like nova. I suspect it would be easier to find people willing to own a smaller piece of the code.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
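[Editorial note] To make the MAINTAINERS file Chris describes concrete for readers who haven't browsed the kernel tree: an entry is a block of tagged lines (the subsystem name, then M: for maintainers, L: for the relevant mailing list, S: for maintenance status, F: for the files covered). The entry below is illustrative only; the names and addresses are made up, not copied from the kernel.

```text
VIRTIO BLOCK AND NET DRIVERS
M:      Jane Hacker <jane@example.org>
L:      virtualization@example-lists.org
S:      Maintained
F:      drivers/virtio/
F:      drivers/net/virtio_net.c
```

The kernel's scripts/get_maintainer.pl walks this file to tell a contributor exactly whom to CC on a patch, which is the discoverability property Chris is pointing at.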
Re: [openstack-dev] [neutron] Design Summit Session etherpad
The next meeting is Tuesday 1400 UTC, that is Apr 7. The Neutron meeting time rotates between Monday 2100 UTC and Tuesday 1400 UTC.

Akihiro

2015-04-01 14:12 GMT+09:00 Vikram Choudhary vikram.choudh...@huawei.com:
> Hi Kyle,
>
> The link [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics shows the next meeting is scheduled on (4/7/2015), but that is a Tuesday, not a Monday. The date on Monday is (4/6/2015), so I got confused :( "Agenda for Next Neutron Team Meeting Monday (4/7/2015) at 1400 UTC on #openstack-meeting"
>
> Thanks
> Vikram
>
> From: Kyle Mestery [mailto:mest...@mestery.com]
> Sent: 01 April 2015 01:02
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [neutron] Design Summit Session etherpad
>
> Hi folks! Now that we're deep into the feature freeze and the Liberty specs repository is open [1], I wanted to let everyone know I've created an etherpad to track Design Summit Sessions. If you'd like to propose something, please have a look at the etherpad. We'll discuss these next week in the Neutron meeting [3], so please join and come with your ideas!
>
> Thanks,
> Kyle
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/060183.html
> [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics
> [3] https://wiki.openstack.org/wiki/Network/Meetings

-- 
Akihiro Motoki amot...@gmail.com
Re: [openstack-dev] [neutron] Design Summit Session etherpad
It's UTC; check against your local time.

From: Vikram Choudhary [mailto:vikram.choudh...@huawei.com]
Sent: Wednesday, April 01, 2015 10:42 AM
To: mest...@mestery.com
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Design Summit Session etherpad

> Hi Kyle,
>
> The link [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics shows the next meeting is scheduled on (4/7/2015), but it's Tuesday, not Monday. The date on Monday is (4/6/2015), so I got confused ☹ "Agenda for Next Neutron Team Meeting Monday (4/7/2015) at 1400 UTC on #openstack-meeting"
>
> Thanks
> Vikram
>
> From: Kyle Mestery [mailto:mest...@mestery.com]
> Sent: 01 April 2015 01:02
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [neutron] Design Summit Session etherpad
>
> Hi folks! Now that we're deep into the feature freeze and the Liberty specs repository is open [1], I wanted to let everyone know I've created an etherpad to track Design Summit Sessions. If you'd like to propose something, please have a look at the etherpad. We'll discuss these next week in the Neutron meeting [3], so please join and come with your ideas!
>
> Thanks,
> Kyle
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/060183.html
> [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics
> [3] https://wiki.openstack.org/wiki/Network/Meetings
[openstack-dev] [cinder] Issue for backup speed
Hi,

I tested the Swift backup driver in cinder-backup and its performance isn't high. In our test environment, the average time to back up a 50 GB volume is 20 min. I found a patch for this that adds multi-threading to the Swift backup driver (https://review.openstack.org/#/c/111314), but it's also too slow. It looks like that patch doesn't implement threading properly.

Is there any way to improve this? I'd appreciate others' thoughts on these issues.

Thanks.
[openstack-dev] [nova]: Doubts regarding Cells V2 deployment
Hi Devs,

I have a couple of questions regarding cells V2. Once cells V2 is implemented there will be no concept of a non-cell deployment, which means there will be at least one cell (by default).

1. How is a deployment with multiple availability zones done with cells V2?
2. How will availability zones and cells be mapped? Is there any specific plan to target this?

Please let me know your opinions on the same.

Thanks & Regards,
Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
Re: [openstack-dev] [Nova][cells] Meeting time change
On 26/03/2015 17:17, Andrew Laski wrote:
> Daylight saving time has made it so that the 2200 UTC meeting time is fairly inconvenient for a few of us, and the 2100 UTC timeslot is open, so we're going to shift the meeting up by an hour. I have already spoken with many of the people in regular attendance at the meetings, so this should come as little surprise. See you all next week at 2100!

I amended https://wiki.openstack.org/wiki/Meetings#Nova_Cellsv2_Meeting accordingly. I guess we're still on #openstack-meeting-3?
Re: [openstack-dev] The Evolution of core developer to maintainer?
John Griffith wrote:
> On Tue, Mar 31, 2015 at 4:30 PM, Joe Gordon joe.gord...@gmail.com wrote:
>> I am starting this thread based on Thierry's feedback on [0]. Instead of writing the same thing twice, you can look at the rendered HTML from that patch [1]. Neutron tried to go from core to maintainer, but after input from the TC and others, they are keeping the term 'core' but are clarifying what it means to be a neutron core [2]. [2] does a very good job of showing how what it means to be core is evolving. From "everyone is a dev and everyone is a reviewer. No committers or repo owners, no aristocracy. Some people just commit to do a lot of reviewing and keep current with the code, and have votes that matter more (+2)." (Thierry) To a system where cores are more than people who have votes that matter more. Neutron's proposal tries to align that document with what is already happening:
>>
>> 1. They share responsibility in the project's success.
>> 2. They have made a long-term, recurring time investment to improve the project.
>> 3. They spend their time doing what needs to be done to ensure the project's success, not necessarily what is the most interesting or fun.
>>
>> I think there are a few issues at the heart of this debate:
>>
>> 1. Our current concept of a core team has never been able to grow past 20 or so people, even for really big projects like nova and cinder. Why is that? How do we delegate responsibility for subsystems? How do we keep growing?
>> 2. If everyone is just developers and reviewers, who is actually responsible for the project's success? How does that mesh with the ideal of no 'aristocracy'? Do our early goals still make sense today?
>>
>> Do you feel like a core developer/reviewer (we initially called them core developers) [1]: "In OpenStack a core developer is a developer who has submitted enough high quality code and done enough code reviews that we trust their code reviews for merging into the base source tree."
>> "It is important that we have a process for active developers to be added to the core developer team." Or a maintainer [1]:
>>
>> 1. They share responsibility in the project's success.
>> 2. They have made a long-term, recurring time investment to improve the project.
>> 3. They spend that time doing whatever needs to be done, not necessarily what is the most interesting or fun.
>>
>> Maintainers are often under-appreciated, because their work is harder to appreciate. It's easy to appreciate a really cool and technically advanced feature. It's harder to appreciate the absence of bugs, the slow but steady improvement in stability, or the reliability of a release process. But those things distinguish a good project from a great one.
>>
>> [0] https://review.openstack.org/#/c/163660/
>> [1] http://docs-draft.openstack.org/60/163660/3/check/gate-governance-docs/f386acf//doc/build/html/resolutions/20150311-rename-core-to-maintainers.html
>> [2] https://review.openstack.org/#/c/164208/
>
> Hey Joe, I mentioned in last week's TC meeting that I didn't really see a burning need to change or create new labels, but that's probably beside the point. So if I read this, it really comes down to: a number of people in the community want core to mean something more than special reviewer; is that right? I mean, regardless of whether you change the name from core to maintainer, I really don't care. If it makes some folks feel better to have that title/label associated with themselves, that's cool by me (yes, I get the *extra* responsibilities part you lined out).

+1 to this. I feel we have much, much, much bigger things to be thinking about than just labels. Maybe this was a misunderstanding of this mail thread, but in all honesty, who cares...
Re: [openstack-dev] [nova]: Doubts regarding Cells V2 deployment
On 01/04/2015 08:37, Kekane, Abhishek wrote:
> Hi Devs, I have a couple of questions regarding cells V2. Once cells V2 is implemented there will be no concept of a non-cell deployment, which means there will be at least one cell (by default). 1. How is a deployment with multiple availability zones done with cells V2? 2. How will availability zones and cells be mapped? Is there any specific plan to target this?

At the moment, we haven't yet agreed on whether the availability zones will be per-cell or at the top level only - or both. To be honest, all of these questions are tied to how cells V2 will schedule boot requests, and the spec is currently at a draft stage [1] for Liberty. Some discussions will happen during the cells V2 weekly meetings [2], feel free to attend.

-Sylvain

[1] https://review.openstack.org/#/c/141486/
[2] https://wiki.openstack.org/wiki/Meetings#Nova_Cellsv2_Meeting

> Please let me know your opinions on the same. Thanks & Regards, Abhishek Kekane
Re: [openstack-dev] [all] Million level scalability test report from cascading
Hi Rob,

There is no API request failure under the current hardware configuration. If you remove one physical server for the Nova API service or for the scheduler service, then API request failures will happen under 1000 concurrency. Similar for Neutron. Before expansion to the current test environment hardware configuration, API request failures were observed.

For the system load during the steady state: because the cascaded OpenStack simulator makes the VM status change now and then (and also for volumes and ports), these status changes are periodically batch-polled and synchronized to the cascading layer. The status changes are more frequent than in the normal steady state of one OpenStack instance. That's why you see the load is not very low even when there are no API requests.

For the VM operation failure rate: because the cascaded OpenStack is a simulator, it was not measured. The rate is correlated with how the model is implemented in the cascaded simulator.

=== Something went wrong with my openstack-dev mailing list subscription. I can receive others' mail, but cannot receive any mails related to the current thread. I had to re-subscribe and send the mail again. ===

Best Regards,
Chaoyi Huang (joehuang)

On 31 March 2015 at 22:05, joehuang joehu...@huawei.com wrote:

Hi, all, During the last cross project meeting [1][2] on the next step for the OpenStack cascading solution [3], the conclusion of the meeting was that OpenStack isn't ready for the project, and that if he wants it ready sooner rather than later, joehuang needs to help make it ready by working on the scaling being coded now; scaling is the first priority for the OpenStack community. We just finished the 1 million VM semi-simulation test report [4] for the OpenStack cascading solution. The most interesting finding during the test is that the cascading architecture can support million-level ports in Neutron, and also million-level VMs in Nova. The test report also shows that the OpenStack cascading solution can manage up to 100k physical hosts without challenge.
Some scaling issues were found during the test and are listed in the report. The conclusion of the report is: according to the Phase I and Phase II test data analysis, due to hardware resource limitations, the OpenStack cascading solution with the current configuration can support a maximum of 1 million virtual machines, and is capable of handling 500 concurrent API requests if L3 (DVR) mode is included, or 1000 concurrent API requests if only L2 networking is needed. It's up to deployment policy whether to use the OpenStack cascading solution inside one site (one data center) or across multiple sites (multiple data centers); the maximum number of sites (data centers) supported is 100, i.e., 100 cascaded OpenStack instances. The test report is shared first; let's discuss the next step later.

Wow, that's beautiful stuff. The next time someone does a report like this, I'd like to suggest some extra metrics to capture:

API failure rate: what % of API calls error.
VM failure rate: what % of operations lead to a failed VM (e.g. not deleted on delete, or not started on create, or didn't boot correctly).
Block device failure rate, similarly.

Looking at your results, I observe significant load in steady-state mode for most of the DBs. That's a little worrying if, as I assume, steady-state means 'no new API calls being made'.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
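[Editorial note] The periodic batch polling Chaoyi describes, which is the source of the steady-state DB load Rob asks about, can be sketched in a few lines. This is a toy illustration with made-up names, not the actual cascading-solution code:

```python
# Toy sketch: the cascading layer fetches resource statuses (VMs, volumes,
# ports) from a cascaded instance in bulk each poll interval and applies only
# the ones that changed. Even with no API requests in flight, these polls and
# writes keep the databases visibly busy.
def sync_statuses(cached, fetch_batch):
    """Merge one batch poll into the cache; return (cache, changed ids).

    cached:      dict mapping resource id -> last known status
    fetch_batch: callable returning a dict of id -> current status
    """
    current = fetch_batch()  # one bulk call per poll interval, per cell
    changed = {rid for rid, status in current.items()
               if cached.get(rid) != status}
    cached.update(current)   # only `changed` ids actually need DB writes
    return cached, changed

# One poll cycle: vm-1 finished building since the last poll.
cache = {"vm-1": "BUILD", "vm-2": "ACTIVE"}
cache, changed = sync_statuses(
    cache, lambda: {"vm-1": "ACTIVE", "vm-2": "ACTIVE"})
```

In the real system the per-cycle cost scales with the churn rate across all cascaded instances, which is why the load stays non-trivial even at "steady state".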
Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs
On Wed, Apr 1, 2015 at 8:20 AM, Ian Wienand iwien...@redhat.com wrote:
> On 03/27/2015 08:47 PM, Alan Pevec wrote:
>> But how come that same recent pyOpenSSL doesn't consume more memory on Ubuntu?
>
> Just to loop back on the final status of this ... pyOpenSSL 0.14 does seem to use about an order of magnitude more memory than 0.13 (2 MB -> 20 MB). For details see [1]. This is due to the way it now goes through cryptography (the package, not the concept :) which binds to openssl using cffi. This ends up parsing a bunch of C to build up the ABI representation, and it seems pycparser's model of this consumes most of the memory [2]. Whether that is a bug or not remains to be seen.
>
> Ubuntu doesn't notice this in our CI environment because it comes with python-openssl 0.13 pre-installed in the image. CentOS started hitting this when I merged my change to start using as many libraries from pip as possible. I have a devstack workaround for CentOS out (preinstall the package) [3] and I think a global solution of avoiding it in requirements [4] (reviews appreciated).
>
> I'm also thinking about how we can better monitor memory usage for jobs. Being able to see exactly what change pushed up memory usage by a large % would have made finding this easier. We keep some overall details for devstack runs in a log file, but there is room to do better.

Interesting debug, and good to see this was finally nailed. A few questions:

1) Why did this happen on the rax VMs only? The same (CentOS) job on hpcloud didn't seem to hit it, even when we ran the hpcloud VM with 8 GB memory.
2) Should this also be sent to the centos-devel folks so that they don't upgrade/update the pyOpenSSL in their distro repos until the issue is resolved?
thanx,
deepak

> -i
>
> [1] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
> [2] https://github.com/eliben/pycparser/issues/72
> [3] https://review.openstack.org/168217
> [4] https://review.openstack.org/169596
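[Editorial note] On the memory-monitoring idea at the end of Ian's mail, here is a minimal sketch of what per-step tracking could look like, assuming a Linux /proc filesystem. Devstack would log this per service rather than in-process; the helper name is made up:

```python
# Minimal, Linux-only sketch: sample the process RSS before and after a step
# (here, an import) so a jump like pyOpenSSL 0.13 -> 0.14 (2 MB -> 20 MB)
# would show up in the job logs instead of surfacing later as an OOM.
def rss_kb():
    """Return this process's resident set size in kB (0 if /proc is absent)."""
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # value is reported in kB
    except OSError:
        pass
    return 0

before = rss_kb()
import json  # stand-in for "import OpenSSL", the import whose cost grew 10x
after = rss_kb()
print("RSS delta around import: %d kB" % (after - before))
```

Logging such deltas per devstack service at start and end of a run would make "which change pushed memory up by a large %" answerable from the logs alone.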
Re: [openstack-dev] [cinder] Issue for backup speed
This is something we're working on (I work with the author of the patch you referenced), but the refactoring of the backup code in this cycle has made progress challenging. If you have a patch that works, please submit it, even if it needs some cleaning up; we'd be happy to work with you on testing, cleanup, and improvements.

The basic problem is that backup is CPU-bound (compression, SSL), so the existing parallelisation techniques used in cinder don't help. Running many cinder-backup processes can give you good aggregate throughput if you're running many backups at once, but this appears not to be a common case, even in a large public cloud.

On 1 April 2015 at 11:41, Jae Sang Lee hyan...@gmail.com wrote:
> Hi, I tested the Swift backup driver in cinder-backup and its performance isn't high. In our test environment, the average time to back up a 50 GB volume is 20 min. I found a patch for this that adds multi-threading to the Swift backup driver (https://review.openstack.org/#/c/111314), but it's also too slow. It looks like that patch doesn't implement threading properly. Is there any way to improve this? I'd appreciate others' thoughts on these issues. Thanks.

-- 
Duncan Thomas
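[Editorial note] A toy sketch of Duncan's point, illustrative and not cinder code: the per-chunk work is CPU-bound, so Python threads serialize on the GIL and a threading patch buys little, whereas worker processes can give a single large backup real parallelism. Function names here are made up:

```python
# Illustrative sketch, not cinder code: compression dominates the backup path
# (plus SSL in the real driver), so fan volume chunks out to worker
# *processes*. A thread pool doing the same zlib.compress() calls would
# mostly run one-at-a-time under the GIL.
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk):
    """CPU-bound per-chunk work; the real driver also encrypts and uploads."""
    return zlib.compress(chunk, 6)

def backup_chunks(chunks, workers=4):
    """Compress volume chunks in parallel; map() preserves chunk order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_chunk, chunks))
```

The upload side can stay threaded (it is I/O-bound); it is only the compress/encrypt stage that needs to escape the GIL for one backup to go faster.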
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Tue, 31 Mar 2015, Anita Kuno wrote:
> I am really having a problem with a lack of common vision. Now this may just be my problem here, and if it is, that is fine, I'll own that.

It's not just you. But other folks, as Dean mentions above, do indicate in their language that they feel something was present at one point and is either gone now or is in danger of going. I haven't got enough history to know if it was once around and is now gone, but I get a powerful sense that it is not here now.

In part I think this, like so many other things, is an issue of scale and growth: things get fuzzy as they expand and diffuse, so it is inevitable. But I also think that part of it is about identity. People often ask me what OpenStack _is_ and I really struggle to give a concise answer. There are a lot of economic factors driving that lack of identity: many parties want to be under the OpenStack umbrella because being there has cachet and other value.

It's a tricky business because at many levels, including:

* project inclusion under the big tent
* contributions (of all types) from everyone (from people 100% of whose time is dedicated to projects, to casual passers-by)
* properly acknowledging the value of contributions of different types

we want to be inclusive (and non-aristocratic), yet by being inclusive we cause the diffusion that we then need to counteract in some way to manage the culture.

This doesn't really answer your question about what to name things, but I think the question is missing the forest for the trees. It's common in large groups that are trying to collaborate to see them reach a point where they say "oops, we're not working as well as we want to, important things are being dropped" and then to discover that one of the primary drivers for that lack of effectiveness is that people aren't actually working towards the same goal, and a reason they aren't is that they've been using similar words to talk about issues, but meaning entirely different things.
You gotta have shared language and shared understanding before you can go on to create the shared goals which are required to really be collaborating. In the compressed and rushed environment that we're working in, it is easy to skip the part where we establish the shared language. It seems that underlying Joe posting this thread is an invitation to do the hard work of finding and formalizing some language, so that we can use that to set some goals that we all share.

-- 
Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
[openstack-dev] [Congress][Horizon]New BP submitted for Congress UI improvement
Hello all,

A month ago, I registered a blueprint named horizon-policy-abstraction in Congress; the link is https://blueprints.launchpad.net/congress/+spec/horizon-policy-abstraction. In this BP, I abstract a Congress policy into a name, object, violation condition, action, and data, and this abstraction will be shown in Horizon. This way, users can express their intent via these elements more easily and intuitively.

This month, I completed the spec and discussed it with Janet Yu. She thinks it is a good way to express users' policies. I have now submitted the spec; the Gerrit topic is https://review.openstack.org/#/q/topic:bp/horizon-policy-abstraction,n,z (https://review.openstack.org/#/c/168539/). I have also completed some code, and I think it looks good for typical use cases.

So could you give some advice on whether, in your opinion, this policy abstraction is an easy and intuitive way to express policies in Congress? Thanks a lot!

Best regards,
Yali Zhang
Re: [openstack-dev] The Evolution of core developer to maintainer?
Joe Gordon wrote:
> I am starting this thread based on Thierry's feedback on [0]. Instead of writing the same thing twice, you can look at the rendered HTML from that patch [1]. Neutron tried to go from core to maintainer, but after input from the TC and others, they are keeping the term 'core' but are clarifying what it means to be a neutron core [2]. [2] does a very good job of showing how what it means to be core is evolving. From "everyone is a dev and everyone is a reviewer. No committers or repo owners, no aristocracy. Some people just commit to do a lot of reviewing and keep current with the code, and have votes that matter more (+2)." (Thierry) To a system where cores are more than people who have votes that matter more. Neutron's proposal tries to align that document with what is already happening:
>
> 1. They share responsibility in the project's success.
> 2. They have made a long-term, recurring time investment to improve the project.
> 3. They spend their time doing what needs to be done to ensure the project's success, not necessarily what is the most interesting or fun.

A bit of history is useful here. We used[1] to have 4 groups for each project, mostly driven by the need to put people in ACL groups: the PTL (which has ultimate control), the Drivers (the trusted group around the PTL which had control over blueprint targeting in Launchpad), the Core reviewers (which have +2 on the repos in Gerrit), and the bug team (which had special Launchpad bug rights, like the ability to confirm stuff).

[1] https://wiki.openstack.org/wiki/Launchpad_Teams_and_Gerrit_Groups

In that model, drivers is closer to what you describe for maintainers -- people invested 100% in the project's success, and able to spend 95% of their work time to ensure it. My main objection to the model you propose is its binary nature. You bundle core reviewing duties with drivers duties into a single group.
That simplification means that drivers have to be core reviewers, and that core reviewers have to be drivers. Sure, a lot of core reviewers are good candidates to become drivers. But I think bundling the two concepts excludes a lot of interesting people from being a driver. If someone steps up and owns bug triaging in a project, that is very interesting and I'd like that person to be part of the drivers group. That said, bug triaging (like core reviewing) is a full-time job. You can't expect the person who owns bug triaging to commit to the level of reviewing that core reviewers commit to. It's also a different skillset. By saying core reviewers and maintainers are the same thing, you basically exclude people from stepping up to project leadership unless they are code reviewers. I think that's a bad thing. We need more people volunteering to own bug triaging and liaison work, not fewer.

So, in summary:

* I'm not against reviving the concept of drivers
* I'm against making core reviewing a requirement for drivers
* I'm for recognizing other duties (like bug triaging or liaison work) as being key project leadership positions

Hope this clarifies,

-- 
Thierry Carrez (ttx)
Re: [openstack-dev] The Evolution of core developer to maintainer?
Joe Gordon wrote:
> On Tue, Mar 31, 2015 at 5:46 PM, Dean Troyer dtro...@gmail.com wrote:
>> On Tue, Mar 31, 2015 at 5:30 PM, Joe Gordon joe.gord...@gmail.com wrote:
>>> Do you feel like a core developer/reviewer (we initially called them core developers) [1]: "In OpenStack a core developer is a developer who has submitted enough high quality code and done enough code reviews that we trust their code reviews for merging into the base source tree. It is important that we have a process for active developers to be added to the core developer team." Or a maintainer [1]: 1. They share responsibility in the project's success. 2. They have made a long-term, recurring time investment to improve the project. 3. They spend that time doing whatever needs to be done, not necessarily what is the most interesting or fun.
>>
>> First, I don't think these two things are mutually exclusive, that's a false dichotomy. They sound like two groups of attributes (or roles), both of which must be earned in the eyes of the rest of the project team. Frankly, being a PTL is your maintainer list on steroids for some projects, except that the PTL is directly elected.
>
> +1000

Yes, these are not orthogonal ideas. The question should be rephrased to 'which description do you identify with the most: core developer/reviewer or maintainer?'

- Some people are core reviewers and maintainers (or drivers, to reuse the OpenStack terminology we already have for that)
- Some people are core reviewers only (because they can't commit 90% of their work time to work on project priorities)
- Some people are maintainers/drivers only (because their project duties don't give them enough time to also do reviewing)
- Some people are casual developers / reviewers (because they can't spend more than 30% of their day on project stuff)

All those people are valuable. Simply renaming core reviewers to maintainers (creating a single super-developer class) just excludes valuable people.
-- Thierry Carrez (ttx) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] unit tests result in false negatives on system z platform CI
Context: During the Kilo development cycle, some effort was made to support KVM/libvirt on the system z platform in the libvirt driver [1]. Observation: Our first tests in a prototype platform CI showed some false negatives because some unit tests don't seem to be fully platform independent. For example the result of test:

nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_without_qga_through_image_meta [0.016369s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File nova/tests/unit/virt/libvirt/test_driver.py, line 3112, in test_get_guest_config_without_qga_through_image_meta
    vconfig.LibvirtConfigGuestSerial)
[...]
raise mismatch_error
testtools.matchers._impl.MismatchError: 'nova.virt.libvirt.config.LibvirtConfigGuestConsole object' is not an instance of LibvirtConfigGuestSerial
~~~

This mismatch makes perfect sense if x86 is assumed as the default underlying platform for unit test execution. The root cause (in this case) is that the call to libvirt_utils.get_arch() is not mocked and actually speaks to the underlying platform. On our system z CI this call returns s390x, which hits platform switches in the code (a search for arch.S390X will show you all this platform-specific code).
A first test
A change of this specific test could look like this:

# git diff nova/tests/unit/virt/libvirt/test_driver.py
diff --git a/nova/tests/unit/virt/libvirt/test_driver.py b/nova/tests/unit/virt/libvirt/test_driver.py
index 5fbe5e1..ebcc9ed 100644
--- a/nova/tests/unit/virt/libvirt/test_driver.py
+++ b/nova/tests/unit/virt/libvirt/test_driver.py
@@ -3091,7 +3091,9 @@ class LibvirtConnTestCase(test.NoDBTestCase):
             image_meta, disk_info)

-    def test_get_guest_config_without_qga_through_image_meta(self):
+    @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
+                       return_value=arch.X86_64)
+    def test_get_guest_config_without_qga_through_image_meta(self, mock_get_arch):
         self.flags(virt_type='kvm', group='libvirt')
         drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

Open questions
--
We have so far discovered around 30 test cases (mostly in the class LibvirtConnTestCase) which could be treated that way, which seems cumbersome from my point of view. I'm looking for a way to express the assumption that x86 should be the default platform in the unit tests and prevent calls to the underlying system. This has to be overridable if platform-specific code like in [2] has to be tested. I'd like to discuss how that could be achieved in a maintainable way.

References
--
[1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz
[2] test_driver.py; test_get_guest_config_with_type_kvm_on_s390; https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L2592
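One way to express that default assumption centrally — sketched here with a stand-in FakeLibvirtUtils object and a made-up base-class name, since this is not how nova's actual test base classes are structured — is a mixin that patches get_arch() to x86_64 in setUp and lets platform-specific tests override it:

```python
import platform
from unittest import mock


class FakeLibvirtUtils(object):
    """Stand-in for nova.virt.libvirt.utils (hypothetical)."""

    @staticmethod
    def get_arch(image_meta=None):
        # The real helper inspects the host, which is exactly what
        # makes the unit tests platform dependent.
        return platform.machine()


class ArchPinnedTestBase(object):
    """Pins the reported architecture to x86_64 by default.

    A test class that exercises platform-specific code (e.g. the s390x
    paths) can set `arch` before setUp runs.
    """

    arch = 'x86_64'

    def setUp(self):
        self._arch_patcher = mock.patch.object(
            FakeLibvirtUtils, 'get_arch', return_value=self.arch)
        self._arch_patcher.start()

    def tearDown(self):
        self._arch_patcher.stop()
```

With such a base class, the ~30 affected tests would inherit a deterministic platform for free, and only tests like test_get_guest_config_with_type_kvm_on_s390 would need to override `arch`.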
Re: [openstack-dev] [kolla][tripleo] Optional Ansible deployment coming
On 03/31/2015 03:30 PM, Steven Dake (stdake) wrote: Hey folks, One of our community members submitted a review to add optional Ansible support to deploy OpenStack using Ansible and the containers within Kolla. Our main objective remains: for third party deployment tools to use Kolla as a nice, I think it would be nice down the line if we could have direct interaction from Ansible with the Docker containers rather than having to call docker-compose from Ansible playbooks, to allow more flexibility. Monty Taylor mord...@inaugust.com writes: If you haven't already, I'd connect with the os-ansible-deployment folks, who are deploying OpenStack with Ansible into containers. I believe they are currently doing production deploys, so you may find working with them, rather than duplicating, more productive. As far as I know os-ansible-deployment is doing straight integration, but that's based on LXC and not using packages (and they don't seem to respect the one daemon/process per container aspect); even though, I am sure there are a lot of things they have solved there that could potentially be shared with Kolla. Chmouel
Re: [openstack-dev] [api] API WG Meeting Time
On 03/31/2015 10:13 PM, Everett Toews wrote: Ever since daylight savings time it has been increasingly difficult for many API WG members to make it to the Thursday 00:00 UTC meeting time. Do we change it so there's only the Thursday 16:00 UTC meeting time? On a related note, I can't make it to tomorrow's meeting. Can someone else please #startmeeting? We should value our Asian friends' input. I think having multiple meeting times is appropriate. Perhaps not 00:00 UTC, but maybe something more in their afternoon? Best, -jay
Re: [openstack-dev] [api] API WG Meeting Time
On Wed, 1 Apr 2015, Ryan Brown wrote: On 03/31/2015 10:13 PM, Everett Toews wrote: Ever since daylight savings time it has been increasingly difficult for many API WG members to make it to the Thursday 00:00 UTC meeting time. Do we change it so there's only the Thursday 16:00 UTC meeting time? On a related note, I can't make it to tomorrow's meeting. Can someone else please #startmeeting? Thanks, Everett +1 for moving to only 16:00 UTC +1 -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work
It might be possible with iptables or ebtables rules, but it's not planned that I'm aware of, and it would be non-trivial to do. The current implementation depends heavily on OVS flow rules. [1] 1. https://wiki.openstack.org/wiki/Neutron/DVR_L2_Agent On Tue, Mar 31, 2015 at 10:37 PM, Dr. Jens Rosenboom j.rosenb...@x-ion.de wrote: On 01/04/15 at 04:10, Kevin Benton wrote: It's worth pointing out here that the in-tree OVS solution controls traffic using iptables on regular bridges too. The difference between the two occurs when it comes to how traffic is separated into different networks. It's also worth noting that DVR requires OVS as well. If nobody is comfortable with OVS then they can't use DVR, and they won't have parity with nova-network as far as floating IP resilience and performance are concerned. It was my understanding that the reason for this was that the first implementation of DVR was only done for OVS, probably because it is the default. Or is there some reason to assume that DVR also cannot be made to work with linuxbridge within Liberty? FWIW, I think I made some progress in getting [1] to work, though if someone could jump in and make a proper patch from my hack, that would be great. [1] https://review.openstack.org/168423 -- Kevin Benton
Re: [openstack-dev] [Openstack-operators] Security around enterprise credentials and OpenStack API
+ developers mailing list, hopefully a developer might be able to chime in. On Wed, Apr 1, 2015 at 3:58 AM, Marc Heckmann marc.heckm...@ubisoft.com wrote: Hi all, I was going to post a similar question this evening, so I decided to just bounce on Mathieu's question. See below inline. On Mar 31, 2015, at 8:35 PM, Matt Fischer m...@mattfischer.com wrote: Mathieu, We use LDAP (AD) with a fallback to MySQL. This allows us to store service accounts (like nova) and team accounts, for use in Jenkins/scripts etc., in MySQL. We only do Identity via LDAP, and we have a forked copy of this driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this. We don't have any permissions to write into LDAP or move people into groups, so we keep a copy of users locally for purposes of user-list operations. The only interaction between OpenStack and LDAP for us is when that driver tries a bind. On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagné mga...@iweb.com wrote: Hi, Let's say I wish to use an existing enterprise LDAP service to manage my OpenStack users so I only have one place to manage users. How would you manage authentication and credentials from a security point of view? Do you tell your users to use their enterprise credentials or do you use another method/credentials? We too have integration with enterprise credentials through LDAP, but as you suggest, we certainly don't want users to use those credentials in scripts or store them on instances. Instead we have a custom Web portal where they can create separate Keystone credentials for their project/tenant, which are stored in Keystone's MySQL database. Our LDAP integration actually happens at a level above Keystone. We don't actually let users acquire Keystone tokens using their LDAP accounts. We're not really happy with this solution; it's a hack and we are looking to revamp it entirely. The problem is that I have never been able to find a clear answer on how to do this with Keystone.
I'm actually quite partial to the way AWS IAM works, especially the instance "role" feature. Roles in AWS IAM are similar to trusts in Keystone, except that they are integrated into the instance metadata. It's pretty cool. Other than that, RBAC policies in OpenStack get us a good way towards IAM-like functionality. We just need a policy editor in Horizon. Anyway, the problem is around delegation of credentials which are used non-interactively. We need to limit what those users can do (through RBAC policy) but also somehow make the credentials ephemeral. If someone (Keystone developer?) could point us in the right direction, that would be great. Thanks in advance. The reason is that (usually) enterprise credentials also give access to a whole lot of systems other than OpenStack itself. And it goes without saying that I'm not fond of the idea of storing my password in plain text to be used by some scripts I created. What's your opinion/suggestion? Do you guys have a second credential system solely used for OpenStack? -- Mathieu ___ OpenStack-operators mailing list openstack-operat...@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Wed, Apr 01 2015, Joshua Harlow wrote: +1 to this. There will always be people who want to work on fun stuff and those who don't; it's the job of leadership in the community to direct people if they can (but it is also the job of that leadership to understand that they can't direct everyone; it is open source after all, and saying 'no' to people just makes them run to some other project that doesn't do this...). +1, and as a casual contributor to a lot of different projects in OpenStack, I think there is a lot of work to be done in that area. The leadership is too scarce and too rarely available in general, and random people are discouraging others from contributing regularly with bad feedback. IMHO (and a rant probably better for another thread), I've seen too many projects/specs/split-outs (i.e. scheduler tweaks, constraint-solving scheduler...) get abandoned because of cores saying this or that is the priority right now (and this, in all honesty, pisses me off); I don't feel this is right (cores should be leaders and guides, not dictators); if a core is going to tell anyone that, then they had better act as a guide to the person they are telling it to; after all, any child can say no, but it takes a real man/woman to go the extra distance... +1 And I'm not sure it's completely orthogonal to this thread, actually. It's sort of funny looking back over the years. We used to complain over and over that we don't have enough reviewers, and that reviewing is crucial but underappreciated work. Since then there are all sorts of people striving to spend time doing reviews and, in some cases, providing real constructive feedback. Now we seem to be saying reviewing isn't where it's at, anybody can do that; bug fixes are the new coolness. I think there are other ways to address this, by the way, possibly more effective ways.
Heck, you could even do commit credits; it costs five bug fixes to the overall project before you can commit a feature (OK, don't take me seriously there). Maybe I'm misinterpreting some of this; maybe there's something in between. Regardless, I personally need a good deal more detail before I form my opinion. The problem I see now is that random people who have very little knowledge of $PROJECT or OpenStack as a whole jump into a random review and put a -1 in Gerrit. And then never remove it. And then your patch is stuck forever in review. Probably because we pushed people to review patches, because we needed reviews, etc. Personally this is hitting me back a lot and I'm getting more and more tired of it. How can you have people reviewing code when they have never even written a patch on the project? I've _never_ used only review numbers to promote people to core reviewer. We had people trying to play the game that way, but I don't think you can become a core reviewer if you never fixed a bug nor wrote a patch in a project. -- Julien Danjou // Free Software hacker // http://julien.danjou.info
Re: [openstack-dev] [all] Million level scalability test report from cascading
Hi Rob, There is no API request failure under the current hardware configuration. If you remove one physical server for the Nova API service or for the scheduler service, then API request failures will happen under 1000 concurrency. Similar for Neutron. Before expansion to the current test environment hardware configuration, API request failures were observed. For the system load during steady state: because the cascaded OpenStack simulator makes VM status (and also volume and port status) change now and then, these status changes are periodically batch-polled and synchronized to the cascading layer. The status changes are more frequent than in the normal steady state of one OpenStack instance. That's why you see the load is not very low even when there is no API request. For the VM operation failure rate: because the cascaded OpenStack is a simulator, it is not measured. The rate is correlated with how the model is implemented in the cascaded simulator. === I don't know why I can't see my mail and your reply in the OpenStack-dev mailing list; I had thought that the mail had not been sent successfully, but someone else told me he saw it. I only found your reply through the mailing list archive: http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg49335.html Best Regards, Chaoyi Huang (joehuang) On 31 March 2015 at 22:05, joehuang joehu...@huawei.com wrote: Hi, all, During the last cross-project meeting [1][2] on the next step for the OpenStack cascading solution [3], the conclusion of the meeting was: OpenStack isn't ready for the project, and if he wants it ready sooner rather than later, joehuang needs to help make it ready by working on the scaling being coded now, and scaling is the first priority for the OpenStack community. We just finished the 1 million VMs semi-simulation test report [4] for the OpenStack cascading solution. The most interesting finding during the test is that the cascading architecture can support million-level ports in Neutron, and also million-level VMs in Nova.
And the test report also shows that the OpenStack cascading solution can manage up to 100k physical hosts without challenge. Some scaling issues were found during the test and are listed in the report. The conclusion of the report is: According to the Phase I and Phase II test data analysis, due to hardware resource limitations, the OpenStack cascading solution with the current configuration can support a maximum of 1 million virtual machines and is capable of handling 500 concurrent API requests if L3 (DVR) mode is included, or 1000 concurrent API requests if only L2 networking is needed. It's up to deployment policy whether to use the OpenStack cascading solution inside one site (one data center) or across multiple sites (multiple data centers); the maximum number of sites (data centers) supported is 100, i.e. 100 cascaded OpenStack instances. The test report is shared first; let's discuss the next step later. Wow, that's beautiful stuff. The next time someone does a report like this, I'd like to suggest some extra metrics to capture. API failure rate: what % of API calls result in errors. VM failure rate: what % of operations lead to a failed VM (e.g. not deleted on delete, or not started on create, or didn't boot correctly). Block device failure rate, similarly. Looking at your results, I observe significant load in the steady-state mode for most of the DBs. That's a little worrying if, as I assume, steady state means 'no new API calls being made'. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
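The failure-rate metrics Rob suggests could be aggregated with something as simple as the following sketch; the per-operation record format (category, succeeded) is an assumption for illustration, not part of the actual test harness:

```python
def failure_rates(records):
    """Compute percent-failed per category from outcome records.

    records: iterable of (category, succeeded) tuples, e.g.
    ('api', True), ('vm', False), ('block_device', True).
    Returns {category: percent_failed}.
    """
    totals, failures = {}, {}
    for category, ok in records:
        totals[category] = totals.get(category, 0) + 1
        if not ok:
            failures[category] = failures.get(category, 0) + 1
    # Percentage of failed operations per category.
    return {c: 100.0 * failures.get(c, 0) / totals[c] for c in totals}
```

Fed from the CI driver's per-call log, this would directly yield the API, VM, and block-device failure rates the report currently omits.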
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 04/01/2015 05:41 AM, Thierry Carrez wrote: Joe Gordon wrote: I am starting this thread based on Thierry's feedback on [0]. Instead of writing the same thing twice, you can look at the rendered html from that patch [1]. Neutron tried to go from core to maintainer, but after input from the TC and others, they are keeping the term 'core' but are clarifying what it means to be a neutron core [2]. [2] does a very good job of showing how what it means to be core is evolving. From: everyone is a dev and everyone is a reviewer. No committers or repo owners, no aristocracy. Some people just commit to do a lot of reviewing and keep current with the code, and have votes that matter more (+2). (Thierry) To a system where cores are more than people who have votes that matter more. Neutron's proposal tries to align that document with what is already happening. 1. They share responsibility in the project's success. 2. They have made a long-term, recurring time investment to improve the project. 3. They spend their time doing what needs to be done to ensure the project's success, not necessarily what is the most interesting or fun. A bit of history is useful here. We used [1] to have 4 groups for each project, mostly driven by the need to put people in ACL groups: the PTL (which has ultimate control), the Drivers (the trusted group around the PTL which had control over blueprint targeting in Launchpad), the Core reviewers (which have +2 on the repos in Gerrit), and the bug team (which had special Launchpad bug rights, like the ability to confirm stuff). [1] https://wiki.openstack.org/wiki/Launchpad_Teams_and_Gerrit_Groups In that model, drivers is closer to what you describe for maintainers -- people invested 100% in the project's success, and able to spend 95% of their work time to ensure it. My main objection to the model you propose is its binary nature. You bundle core reviewing duties with drivers duties into a single group.
That simplification means that drivers have to be core reviewers, and that core reviewers have to be drivers. Sure, a lot of core reviewers are good candidates to become drivers. But I think bundling the two concepts excludes a lot of interesting people from being a driver. If someone steps up and owns bug triaging in a project, that is very interesting and I'd like that person to be part of the drivers group. That said, bug triaging (like core reviewing) is a full-time job. You can't expect the person who owns bug triaging to commit to the level of reviewing that core reviewers commit to. It's also a different skill set. By saying core reviewers and maintainers are the same thing, you basically exclude people from stepping up to the project leadership unless they are code reviewers. I think that's a bad thing. We need more people volunteering to own bug triaging and liaison work, not fewer. So, in summary: * I'm not against reviving the concept of drivers * I'm against making core reviewing a requirement for drivers * I'm for recognizing other duties (like bug triaging or liaison work) as being key project leadership positions ++
Re: [openstack-dev] [neutron] Design Summit Session etherpad
Thanks for the clarification, Akihiro. -----Original Message----- From: Akihiro Motoki [mailto:amot...@gmail.com] Sent: 01 April 2015 13:41 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Design Summit Session etherpad The next meeting is Tuesday 1400 UTC, that is Apr 7. The Neutron meeting time rotates between Monday 2100 UTC and Tuesday 1400 UTC. Akihiro 2015-04-01 14:12 GMT+09:00 Vikram Choudhary vikram.choudh...@huawei.com: Hi Kyle, The link [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics shows the next meeting is scheduled on (4/7/2015), but that is a Tuesday, not a Monday. The date on Monday is (4/6/2015), so I got confused: Agenda for Next Neutron Team Meeting Monday (4/7/2015) at 1400 UTC on #openstack-meeting Thanks, Vikram From: Kyle Mestery [mailto:mest...@mestery.com] Sent: 01 April 2015 01:02 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [neutron] Design Summit Session etherpad Hi folks! Now that we're deep into the feature freeze and the Liberty specs repository is open [1], I wanted to let everyone know I've created an etherpad to track Design Summit Sessions. If you'd like to propose something, please have a look at the etherpad. We'll discuss these next week in the Neutron meeting [3], so please join and come with your ideas!
Thanks, Kyle [1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/060183.html [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics [3] https://wiki.openstack.org/wiki/Network/Meetings -- Akihiro Motoki amot...@gmail.com
Re: [openstack-dev] [api] API WG Meeting Time
On 03/31/2015 10:13 PM, Everett Toews wrote: Ever since daylight savings time it has been increasingly difficult for many API WG members to make it to the Thursday 00:00 UTC meeting time. Do we change it so there's only the Thursday 16:00 UTC meeting time? On a related note, I can't make it to tomorrow's meeting. Can someone else please #startmeeting? Thanks, Everett +1 for moving to only 16:00 UTC -- Ryan Brown / Software Engineer, OpenStack / Red Hat, Inc.
[openstack-dev] Changing the resources of a running vm instance
Hi All, I am looking into whether we can change the resources of a running VM instance (i.e. CPU, RAM; changing the flavor). On running instances you can only edit security groups, attach volumes or associate floating IPs. So what can we do if we want to change the flavor of a running VM or resize it? Thanks and Regards, Abhishek Talwar =-=-= Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you
Re: [openstack-dev] [Fuel] Let's stick to OpenStack global requirements
Today I encountered an issue which is caused by the lack of automatic version checks. One of the users of Fuel Client asked me why it doesn't work in their environment. I realized that the issue was caused by a wrong version specification in the Fuel Client requirements list. I analyzed python-fuelclient's requirements, the RPM package's requirements and the Global Requirements, and found that none of them was compatible with the others. And the issue was not that there was only one package that was not compatible, but that there was only one package that WAS compatible. Fuel Client is a set of very simple single-threaded scripts, so detecting problems in it is relatively simple. However, we also have complex components like Nailgun and different kinds of agents, where a version mismatch may produce errors which will take months to discover. My proposal of sticking to requirements partially fixes this issue. The only thing I'd like to add to that scheme is this: - Add a CI job that checks on a daily basis whether the python requirements are in line with the requirements in the RPM specs. - romcheg On 26 Mar 2015 at 12:07, Roman Prykhodchenko m...@romcheg.me wrote: So guys, I think it's reasonable to find a consensus on this thread. I think this rule fits fine within the general frame of Roman's proposal: - if the base distro already has a package that satisfies OpenStack global requirements (or Fuel requirements), the distro package is used; - else, the OSCI mirror should contain the maximum version of a requirement that matches its version specification. Yup, that also fits well. The only note I'd like to make here is that OpenStack services are tested against the latest versions of requirements. So perhaps we want to test them against the versions supplied in distros, if those requirements are used. - romcheg On 19 Mar
2015 at 19:14, Dmitry Borodaenko dborodae...@mirantis.com wrote: Maciej, Maintaining multiple versions of the same package concurrently, and tracking their compatibility with the many different components of OpenStack and Fuel, creates additional work on many different levels, from spec branch management to repo management to validation to container building and so on. Unified global requirements help avoid such work where it isn't necessary (and when you look closely enough into each specific case, you're likely to find that it's never really necessary). -DmitryB On Thu, Mar 19, 2015 at 4:04 AM, Maciej Kwiek mkw...@mirantis.com wrote: I guess it would depend on how many Docker containers are running on the master node and whether we are able to pull off such a stunt :). I am not familiar with the amount of work needed to do something like that, so the proposition may be silly. Just let me know if it is. On Thu, Mar 19, 2015 at 11:51 AM, Dmitry Burmistrov dburmist...@mirantis.com wrote: Folks, Correct me if I am wrong, but isn't this what we have containers on the master node for? On the master node itself conflicts won't happen because the components are run in their containers. Do you propose to use a _separate_ package repository for each Docker container? (That would mean a separate Gerrit project for each package of each container, including OpenStack projects.) On Thu, Mar 19, 2015 at 1:16 PM, Roman Prykhodchenko m...@romcheg.me wrote: Folks, I assume you meant: If a requirement that previously was only in Fuel Requirements is merged to Global Requirements, it should be removed from *Fuel* Requirements. Exactly. I'm not sure it's a good idea. We should stay as close to the upstream distro as we can. There should be a very important reason to update a package beyond its upstream distro version. The issue is the following: OpenStack's components are tested against those versions of dependencies that are specified in their requirements.
IIRC those requirements are set up by pip, so CI nodes contain the latest versions of python packages that match their specs. The question is whether it's required to ship OpenStack services with packages from a distro or with the packages that are used for testing. Splitting of repositories doesn't help to solve python package conflicts, because the master node uses a number of OpenStack components. On the master node itself conflicts won't happen because the components are run in their containers. - romcheg On 19 Mar 2015 at 10:47, Dmitry Burmistrov dburmist...@mirantis.com wrote: Roman, all - The OSCI mirror should contain the maximum version of a requirement that matches its version specification. I'm not sure it's a good idea. We should stay as close to the upstream distro as we can. There should be a very important reason to update a package beyond its upstream distro version. If we update some package, we should maintain it too: tracking bugs, CVEs and so on. The more packages, the more effort to support them. - Set up CI jobs to notify the OSCI team if either Global Requirements or Fuel
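The proposed daily CI job could start from something like the following sketch: parse a requirements.txt-style list and an RPM spec's Requires: lines, then flag names whose version constraints differ. The file formats handled, the plain string comparison of normalized constraints, and the RPM-to-pip name mapping are all simplifying assumptions for illustration:

```python
import re


def parse_pip_requirements(text):
    """Return {name: constraint} from requirements.txt-style content."""
    reqs = {}
    for line in text.splitlines():
        line = line.split('#')[0].strip()   # drop comments and blanks
        if not line:
            continue
        m = re.match(r'([A-Za-z0-9._\-\[\]]+)\s*(.*)', line)
        if m:
            # Normalize whitespace inside the constraint for comparison.
            reqs[m.group(1).lower()] = m.group(2).replace(' ', '')
    return reqs


def parse_spec_requires(text):
    """Return {name: constraint} from 'Requires:' lines of an RPM spec."""
    reqs = {}
    for line in text.splitlines():
        m = re.match(r'Requires:\s*(\S+)\s*(.*)', line)
        if m:
            reqs[m.group(1).lower()] = m.group(2).replace(' ', '')
    return reqs


def mismatches(pip_reqs, rpm_reqs, name_map=None):
    """List (pip_name, pip_constraint, rpm_constraint) for packages
    present in both lists whose constraints differ. name_map translates
    RPM package names to pip names (e.g. python-six -> six)."""
    name_map = name_map or {}
    out = []
    for rpm_name, rpm_spec in rpm_reqs.items():
        pip_name = name_map.get(rpm_name, rpm_name)
        if pip_name in pip_reqs and pip_reqs[pip_name] != rpm_spec:
            out.append((pip_name, pip_reqs[pip_name], rpm_spec))
    return sorted(out)
```

A real check would compare the constraint *ranges* (e.g. with pip's specifier parsing) rather than normalized strings, but even this naive version would have caught the python-fuelclient case where only one package was compatible.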
Re: [openstack-dev] [all][oslo] oslo.log hacking rules
On 03/31/2015 09:19 PM, Ivan Kolodyazhny wrote: Hi all, After moving to oslo.log and a lot of reviews to Cinder, we merged some hacking checks into our code [1], [2]. Some of them are also implemented in Nova [2], [3]. I didn't check other projects. We try to make our code follow the logging guidelines [5], [6], and making cross-project hacking checks for all logging guidelines would help every project. Is anybody from oslo or other projects interested in it? If it is needed for oslo.log, and I really hope it is, I could volunteer to move the hacking checks into openstack-dev/hacking or the oslo.log project. We should definitely maintain the rules in a single place. Neutron also has those, btw. /Ihar
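For reference, a hacking check in the style Nova and Cinder carry is just a function that receives each logical source line and yields an (offset, message) tuple on violation. The check below and its 'N999' code are made up for illustration; it sketches one of the logging guidelines (debug-level messages should not be translated):

```python
import re

# Matches a translated debug log call such as LOG.debug(_('...')).
_LOG_DEBUG_TRANSLATE = re.compile(r"LOG\.debug\(\s*_\(")


def check_no_translated_debug_logs(logical_line):
    """N999 - debug log messages should not be translated.

    Okay: LOG.debug('resizing instance')
    N999: LOG.debug(_('resizing instance'))
    """
    if _LOG_DEBUG_TRANSLATE.search(logical_line):
        yield (0, 'N999: debug log messages should not be translated')
```

In-tree, such functions are registered with the flake8/hacking plugin machinery so they run as part of pep8 jobs; moving them into openstack-dev/hacking or oslo.log would make one registration serve every project.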
Re: [openstack-dev] Changing the resources of a running vm instance
http://developer.openstack.org/api-ref-compute-v2.html#compute_server-actions Fawad Khaliq On Wed, Apr 1, 2015 at 4:45 PM, Abhishek Talwar/HYD/TCS abhishek.tal...@tcs.com wrote: Hi All, I am looking into whether we can change the resources of a running VM instance (i.e. CPU, RAM, changing the flavor). On running instances you can just edit security groups, attach volumes or associate floating IPs. So what can we do if we want to change the flavor of a running VM or resize it? Thanks and Regards Abhishek Talwar __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
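For reference, the server-actions page linked above covers this: the flavor of an instance is changed with the `resize` action (POST to /servers/{server_id}/action), and the resize is then confirmed or reverted. A minimal sketch of the request bodies, with a made-up flavor id:

```python
import json

def resize_body(flavor_ref):
    """Build the JSON body for the Compute v2 'resize' server action."""
    return {"resize": {"flavorRef": flavor_ref}}

def confirm_body():
    """Build the JSON body for confirming a completed resize."""
    return {"confirmResize": None}

# These bodies are POSTed to /servers/{server_id}/action;
# "42" is a hypothetical flavor id used for illustration.
print(json.dumps(resize_body("42")))   # {"resize": {"flavorRef": "42"}}
print(json.dumps(confirm_body()))      # {"confirmResize": null}
```

The instance reboots into the new flavor as part of the resize, so this is not a live, in-place change of CPU/RAM.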
Re: [openstack-dev] [api] API WG Meeting Time
On 04/01/2015 08:35 AM, Jay Pipes wrote: On 03/31/2015 10:13 PM, Everett Toews wrote: Ever since daylight savings time it has been increasingly difficult for many API WG members to make it to the Thursday 00:00 UTC meeting time. Do we change it so there’s only the Thursday 16:00 UTC meeting time? On a related note, I can’t make it to tomorrow’s meeting. Can someone else please #startmeeting? We should value our Asian friends' input. I think having multiple meeting times is appropriate. Perhaps not 00:00 UTC, but maybe something more in their afternoon? Although the 16:00 UTC time works well for me, I agree with Jay's sentiment here. +1 for finding another alternating time mike
[openstack-dev] [Fuel] glusterfs plugin
Hello, I've been investigating bug [1], concentrating on fuel-plugin-external-glusterfs. First of all: [2] there are no core reviewers in Gerrit for this repo, so even if there was a patch to fix [1], no one could merge it. I saw the same issue with fuel-plugin-external-nfs; I haven't checked other repos. Why is this? Can we fix it quickly? Second, the plugin throws: DEPRECATION WARNING: The plugin has old 1.0 package format, this format does not support many features, such as plugins updates, find plugin in new format or migrate and rebuild this one. I don't think this is appropriate for a plugin that is listed in the official catalog [3]. Third, I created a proposed fix for this bug [4] and wanted to test it with the fuel-qa scripts. Basically I built an .fp file with fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable to point to that .fp file and then ran the group=deploy_ha_one_controller_glusterfs tests. The test failed [5]. Then I reverted the changes from the patch and the test still failed [6]. But installing the plugin by hand shows that it's available there, so I don't know whether the plugin test is broken or I am still missing something. It would be nice to get some QA help here. P. [1] https://bugs.launchpad.net/fuel/+bug/1415058 [2] https://review.openstack.org/#/admin/groups/577,members [3] https://fuel-infra.org/plugins/catalog.html [4] https://review.openstack.org/#/c/169683/ [5] https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0 [6] https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 2015-04-01 11:41:29 +0200 (+0200), Thierry Carrez wrote: [...] We used[1] to have 4 groups for each project, mostly driven by the need to put people in ACL groups. The PTL (which has ultimate control), the Drivers (the trusted group around the PTL which had control over blueprint targeting in Launchpad), the Core reviewers (which have +2 on the repos in Gerrit), and the bug team (which had special Launchpad bugs rights like the ability to confirm stuff). [...] And here is the crux of the situation, which I think bears highlighting. These empowered groups are (or at least started out as) nothing more than an attempt to map responsibilities onto the ACLs available to our projects in the tools we use to do the work. Coming up with some new pie-in-the-sky model of leadership hierarchy is an interesting thought exercise, but many people in this discussion are losing sight of the fact that the model we have is determined to a great extent by the tools we use. Change the tools and you may change the model, but changing the model doesn't automatically change the tools to support it (and those proposing a new model need to pony up the resources to implement it in _reality_, not just in _thought_). Responsibilities not tied to specific controls in our tools do exist in abundance, but they tend to be more fluid and ad-hoc because in most cases there's been no need to wrap authorization/enforcement around them. What I worry is happening is that as a community we're enshrining the arbitrary constructs which we invented to be able to configure our tools sanely. I see this discussion as an attempt to recognize those other responsibilities as well, but worry that creation of additional unnecessary authorization/enforcement process will emerge as a solution and drive us further into pointless bureaucracy. 
-- Jeremy Stanley
[openstack-dev] [nova] bug expiration
I just spent a chunk of the morning purging out some really old Incomplete bugs, because about 9 months ago we disabled the auto expiration bit in Launchpad - https://bugs.launchpad.net/nova/+configure-bugtracker This is a manual, grueling task which, judging by these bugs, no one else is doing. I'd like to turn that bit back on so we can actually get attention focused on actionable bugs. Any objections here? -Sean -- Sean Dague http://dague.net
[openstack-dev] [Murano] Murano projects pylint job
Hello, I have noticed that some OpenStack projects [1] use a pylint gate job. From my point of view it could simplify code reviews even as a non-voting job, and generally it could improve code quality. Some code issues, like code duplication, are not easy to discover during code review, so an automatic job would be helpful. Please let me know your opinion about that. Thanks [1] https://review.openstack.org/#/c/164772/ Regards Filip
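For context, a non-voting pylint check is usually wired up as a tox environment that the CI job invokes. A minimal sketch follows; the environment name, rcfile path, and target package are illustrative, not Murano's actual configuration:

```ini
# tox.ini fragment - illustrative sketch, not Murano's real config.
# CI would run "tox -e pylint" as a non-voting job.
[testenv:pylint]
deps = pylint
commands = pylint --rcfile=.pylintrc murano
```

Keeping the job non-voting at first lets the team tune the rcfile (disabling noisy checkers) before anyone's patches can be blocked by it.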
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Wed, Apr 01 2015, Jeremy Stanley wrote: Responsibilities not tied to specific controls in our tools do exist in abundance, but they tend to be more fluid and ad-hoc because in most cases there's been no need to wrap authorization/enforcement around them. What I worry is happening is that as a community we're enshrining the arbitrary constructs which we invented to be able to configure our tools sanely. I see this discussion as an attempt to recognize those other responsibilities as well, but worry that creation of additional unnecessary authorization/enforcement process will emerge as a solution and drive us further into pointless bureaucracy. +1 We never used such fine-grained ACLs in Ceilometer. If a person knows enough about the project, sounds responsible and is helping, then I give him/her the rights to help the project. That usually includes all the rights, so that person is not blocked by some ACL if he/she suddenly wants to give advice on a piece of code or triage some bugs. I've never seen big mistakes, and we don't have a lot of unrecoverable ones. In the end I prefer to ask forgiveness rather than permission. -- Julien Danjou /* Free Software hacker http://julien.danjou.info */
[openstack-dev] [neutron] open Virtual Router support in OpenStack from Cloud-Router
Hi, I think it would be better to have an open-source virtual router available in OpenStack, supported by CloudRouter. With an open virtual router available, much of the L3/routing functionality can be easily integrated in the cloud for overlay networks. Regards, keshava -Original Message- From: A, Keshava Sent: Wednesday, April 01, 2015 7:15 PM To: Chandrasekar Kannan; de...@lists.cloudrouter.org Cc: A, Keshava Subject: RE: [Devel] Cloudrouter as a L3 plugin in Openstack neutron ? Hi, I agree with this idea; having an open-sourced router (like CloudRouter) available in OpenStack would be a great advantage. OpenStack currently needs such an open virtual router. If CloudRouter is available on the OpenStack OVS-enabled server, many of the L3 functions can be added/enhanced. Regards, keshava -Original Message- From: Devel [mailto:devel-boun...@lists.cloudrouter.org] On Behalf Of Chandrasekar Kannan Sent: Wednesday, April 01, 2015 6:36 PM To: de...@lists.cloudrouter.org Subject: [Devel] Cloudrouter as a L3 plugin in Openstack neutron ? This is an idea for now, but I was wondering if it makes sense to start pushing/promoting CloudRouter as a neutron agent in OpenStack for routing needs, similar to this: https://blueprints.launchpad.net/neutron/+spec/l3-plugin-brocade-vyatta-vrouter Thoughts? -Chandra ___ Devel mailing list de...@lists.cloudrouter.org https://lists.cloudrouter.org/mailman/listinfo/devel
Re: [openstack-dev] [nova] bug expiration
On 01/04/2015 15:51, Sean Dague wrote: I just spent a chunk of the morning purging out some really old Incomplete bugs because about 9 months ago we disabled the auto expiration bit in launchpad - https://bugs.launchpad.net/nova/+configure-bugtracker This is a manually grueling task, which by looking at these bugs, no one else is doing. I'd like to turn that bit back on so we can actually get attention focused on actionable bugs. Any objections here? +1000. Incomplete for 9 months means that the bug was reported at least one release ago. We can hardly investigate (and reproduce) the bug any further, so that's a dead end. There is still the possibility for the reporter to reopen the bug if he feels that it still deserves a fix, but in that case he can also provide the missing details. -Sylvain -Sean
Re: [openstack-dev] [kolla][tripleo] Optional Ansible deployment coming
On 4/1/15, 5:12 AM, Monty Taylor mord...@inaugust.com wrote: On 03/31/2015 03:30 PM, Steven Dake (stdake) wrote: Hey folks, One of our community members submitted a review to add optional Ansible support to deploy OpenStack using Ansible and the containers within Kolla. Our main objective remains: for third party deployment tools to use Kolla as a building block for container content and management. Since this is scope expansion for our small core team, I required a majority vote on the first commit. See the review here: https://review.openstack.org/#/c/168637/ A couple follow-on reviews: https://review.openstack.org/169154 https://review.openstack.org/169152 If folks in the community want to build an Ansible deployment tool that deploys thin containers, now is your chance to get involved from nearly the first commit. The core team doesn't know much about Ansible, so we could really use extra expertise :) If you haven't already, I'd connect with the os-ansible-deployment folks, who are deploying openstack with ansible into containers. I believe they are currently doing production deploys, so you may find working with them rather than duplicating more productive. Already working with them and have been sorting out getting summit design track space (half-day) for the OSAD project specifically at summit. I'd like to see our efforts merge, but I think in the short term we would have to see what each of these efforts look like. Regards -steve
Re: [openstack-dev] [kolla][tripleo] Optional Ansible deployment coming
On 4/1/15, 5:32 AM, Chmouel Boudjnah chmo...@chmouel.com wrote: On 03/31/2015 03:30 PM, Steven Dake (stdake) wrote: Hey folks, One of our community members submitted a review to add optional Ansible support to deploy OpenStack using Ansible and the containers within Kolla. Our main objective remains: for third party deployment tools to use Kolla as a nice, I think it would be nice down the line if we could have direct interaction from Ansible with the Docker containers, not having to call docker-compose from Ansible playbooks, to allow more flexibility. The playbooks in that review introduce a docker-compose module into Ansible. The docker-compose tool offers the flexibility needed to get the job done already. If the real goal is dependency management (trimming dependencies) I could see that rationale. Regards -steve Monty Taylor mord...@inaugust.com writes: If you haven't already, I'd connect with the os-ansible-deployment folks, who are deploying openstack with ansible into containers. I believe they are currently doing production deploys, so you may find working with them rather than duplicating more productive. As far as I know os-ansible-deployment is doing straight integration, but that's based on LXC and not using packages (and doesn't seem to respect the one daemon/process per container aspect); even though, I am sure there is a lot they have solved there that could potentially be shared with Kolla. Chmouel
Re: [openstack-dev] [Murano] Murano projects pylint job
Hi Filip, I think adding a pylint job to the Murano gates is an awesome idea; have you checked out how to do this? On Wed, Apr 1, 2015 at 4:03 PM, Filip Blaha filip.bl...@hp.com wrote: Hello, I have noticed that some OpenStack projects [1] use a pylint gate job. From my point of view it could simplify code reviews even as a non-voting job, and generally it could improve code quality. Some code issues, like code duplication, are not easy to discover during code review, so an automatic job would be helpful. Please let me know your opinion about that. Thanks [1] https://review.openstack.org/#/c/164772/ Regards Filip -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelik...@mirantis.com +7 (495) 640-4904, 0261 +7 (903) 156-0836
[openstack-dev] [Ironic] Ironic Bug Squash Day
Hello Fellow Ironicers! We have a Bug Squash scheduled for Thursday, 2 April 2015, starting at 0800hrs PST. We have quite a few bugs listed for RC1 that need to be tackled, so the more folks who are able to join, the merrier! Here is the link to the pad: https://etherpad.openstack.org/p/IronicReviewDay Look forward to seeing y'all there! :) Cheers! John Stafford Program Manager | HP Helion OpenSource | OpenStack-Ironic E: john.staff...@hp.com | V: 360.212.9720 | M: 206.963.0916 | IRC: BadCub (Freenode)
Re: [openstack-dev] [api] API WG Meeting Time
On 4/1/15, 08:24, michael mccune m...@redhat.com wrote: On 04/01/2015 08:35 AM, Jay Pipes wrote: On 03/31/2015 10:13 PM, Everett Toews wrote: Ever since daylight savings time it has been increasingly difficult for many API WG members to make it to the Thursday 00:00 UTC meeting time. Do we change it so there’s only the Thursday 16:00 UTC meeting time? On a related note, I can’t make it to tomorrow’s meeting. Can someone else please #startmeeting? We should value our Asian friends' input. I think having multiple meeting times is appropriate. Perhaps not 00:00 UTC, but maybe something more in their afternoon? Although the 16:00 UTC time works well for me, I agree with Jay's sentiment here. +1 for finding another alternating time Mike Not just Asia, but Australia and other parts of Europe. I’d like Christopher to weigh in since he was the person who pushed for the 00:00 UTC meeting time. Is something like 02:00 UTC easier for them? If no one else volunteers, I’ll #startmeeting the meeting at 00:00 UTC this week (tonight for the US folks). — Ian
Re: [openstack-dev] [Manila] FFE request for Automatic cleanup of share_servers
On 03/31/2015 08:49 AM, Julia Varlamova wrote: Hello, I'd like to request a Feature Freeze Exception for Automatic cleanup of share_servers (Launchpad: https://blueprints.launchpad.net/manila/+spec/automatic-cleanup-of-share-servers). The patch can be found here: https://review.openstack.org/#/c/166182 I am looking forward to your decision about considering this change for an FFE. Thank you! I haven't heard any negative feedback on this in 24 hours, so... Approved! -- Regards, Julia Varlamova
Re: [openstack-dev] Mellanox request for permission for Nova CI
On Wed, Apr 1, 2015 at 8:28 AM, Lenny Verkhovsky len...@mellanox.com wrote: Hi all, We had some issues with presentation of the logs, now it looks ok. You can see Nova CI logs here http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150401_1102/ Tempest output is http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150401_1102/testr_results.html.gz We are currently running tempest api tests on Mellanox HW using SRiOV configuration, We are working to add tempest scenario tests with port direct configuration for SRiOV We are also planning to extend tests with our in-house tests developments. Thanks, that looks a lot better. I would like to get a second opinion from another nova-core but this looks like enough to start commenting on nova patches. *Lenny Verkhovsky* SW Engineer, Mellanox Technologies www.mellanox.com Office:+972 74 712 9244 Mobile: +972 54 554 0233 Fax:+972 72 257 9400 *From:* Joe Gordon [mailto:joe.gord...@gmail.com] *Sent:* Thursday, March 26, 2015 3:29 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] Mellanox request for permission for Nova CI On Thu, Mar 19, 2015 at 5:52 AM, Nurit Vilosny nur...@mellanox.com wrote: Hi Joe, Sorry for the late response. Here are some latest logs for the Nova CI: http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1650/ http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1506/ http://144.76.193.39/ci-artifacts/37/165437/1/check-nova/Check-MLNX-Nova-ML2-Sriov-driver/e90a677/ http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1851/ I couldn't find the equivalent of: http://logs.openstack.org/68/135768/9/check/check-tempest-dsvm-full/f6c95de/logs/testr_results.html.gz Also what tests are running and how do they actually check if sriov works? I can provide more if needed. Thanks, Nurit. 
*From:* Joe Gordon [mailto:joe.gord...@gmail.com] *Sent:* Wednesday, March 11, 2015 7:50 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev] Mellanox request for permission for Nova CI On Wed, Mar 11, 2015 at 12:49 AM, Nurit Vilosny nur...@mellanox.com wrote: Hi , I would like to ask for a CI permission to start commenting on Nova branch. Mellanox is engaged in pci pass-through features for quite some time now. We have an operating Neutron CI for ~2 years, and since the pci pass-through features are part of Nova as well, we would like to start monitoring Nova’s patches. Our CI had been silently running locally over the past couple of weeks, and I would like to step ahead, and start commenting in a *non-voting mode*. During this period we will be closely monitor our systems and be ready to solve any problem that might occur. Do you have a link to the output of your testing system, so we can check what its testing etc. Thanks, Nurit Vilosny SW Cloud Solutions Manager Mellanox Technologies 13 Zarchin St. Raanana, Israel Office: 972-74-712-9410 Cell: 972-54-4713000 Fax: 972-74-712-9111 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 1 April 2015 at 10:04, Joshua Harlow harlo...@outlook.com wrote: +1 to this. There will always be people who will want to work on fun stuff and those who don't; it's the job of leadership in the community to direct people if they can (but also the same job of that leadership to understand that they can't direct everyone; it is open source after all, and saying 'no' to people just makes them run to some other project that doesn't do this...). IMHO (and a rant probably better for another thread) I've seen too many projects/specs/split-outs (i.e. scheduler tweaks, constraint-solving scheduler...) get abandoned because of cores saying this or that is the priority right now (and this in all honesty pisses me off); I don't feel this is right (cores should be leaders and guides, not dictators); if a core is going to tell anyone that, then they had better act as a guide to the person they are telling it to and make sure they lead that person; after all, any child can say no, but it takes a real man/woman to go the extra distance... So I think saying no is sometimes a vital part of the core team's role; keeping up code quality and vision is really hard to do while new features are flooding in, and doing architectural reworking while features are merging is an epic task. There are also plenty of features that don't necessarily fit the shared vision of the project; just because we can do something doesn't mean we should. For example: there are plenty of companies trying to turn OpenStack into a datacentre manager rather than a cloud (i.e. too much focus on pets vs. cattle style VMs), and I think we're right to push back against that. Right now there are some strong indications that there are areas we are very weak at (nova-network still being preferred to neutron, the amount of difficulty people had establishing third-party CI setups for Cinder) that really *should* be prioritised over new features.
That said, some projects can be worked on successfully in parallel with the main development - I suspect that a scheduler split-out proposal is one of them. This doesn't need much/any buy-in from cores; it can be demonstrated in a fairly complete state before it is evaluated, so the only buy-in needed is on the concept. This is a common development mode in the kernel world too. -- Duncan Thomas
Re: [openstack-dev] [Openstack-operators][keystone] Security around enterprise credentials and OpenStack API
On Wed, Apr 1, 2015 at 1:17 AM, Daniel Comnea comnea.d...@gmail.com wrote: + developers mailing list, hopefully a developer might be able to chime in. On Wed, Apr 1, 2015 at 3:58 AM, Marc Heckmann marc.heckm...@ubisoft.com wrote: Hi all, I was going to post a similar question this evening, so I decided to just bounce on Mathieu’s question. See below inline. On Mar 31, 2015, at 8:35 PM, Matt Fischer m...@mattfischer.com wrote: Mathieu, We use LDAP (AD) with a fallback to MySQL. This allows us to store service accounts (like nova) and team accounts for use in Jenkins/scripts etc. in MySQL. We only do Identity via LDAP, and we have a forked copy of this driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this. We don't have any permissions to write into LDAP or move people into groups, so we keep a copy of users locally for the purposes of user-list operations. The only interaction between OpenStack and LDAP for us is when that driver tries a bind. On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagné mga...@iweb.com wrote: Hi, Let's say I wish to use an existing enterprise LDAP service to manage my OpenStack users so I only have one place to manage users. How would you manage authentication and credentials from a security point of view? Do you tell your users to use their enterprise credentials, or do you use another method/credentials? We too have integration with enterprise credentials through LDAP, but as you suggest, we certainly don't want users to use those credentials in scripts or store them on instances. Instead we have a custom web portal where they can create separate Keystone credentials for their project/tenant, which are stored in Keystone's MySQL database. Our LDAP integration actually happens at a level above Keystone; we don't actually let users acquire Keystone tokens using their LDAP accounts. We're not really happy with this solution: it's a hack, and we are looking to revamp it entirely.
The problem is that I have never been able to find a clear answer on how to do this with Keystone. I'm actually quite partial to the way AWS IAM works, especially the instance "role" feature. Roles in AWS IAM are similar to trusts in Keystone, except that they are integrated into the instance metadata. It's pretty cool. Other than that, RBAC policies in OpenStack get us a good way towards IAM-like functionality. We just need a policy editor in Horizon. Anyway, the problem is around delegation of credentials which are used non-interactively. We need to limit what those users can do (through RBAC policy) but also somehow make the credentials ephemeral. If someone (a Keystone developer?) could point us in the right direction, that would be great. Thanks in advance. The reason is that (usually) enterprise credentials also give access to a whole lot of systems other than OpenStack itself. And it goes without saying that I'm not fond of the idea of storing my password in plain text to be used by some scripts I created. What's your opinion/suggestion? Do you guys have a second credential system solely used for OpenStack? -- Mathieu The solution for this in Keystone is to use domain-specific drivers. The only documentation on it I can find is in the developer docs -- http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers -- like a lot of Keystone features, it hasn't made it to the admin guide. To give an idea how it works, you have a domain for service users which is local in SQL, and a separate domain for regular users which uses LDAP. - Brant
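To make the domain-specific-drivers suggestion concrete, here is a sketch of the configuration described in the doc linked above: enable per-domain backends in keystone.conf, then give each domain its own config file in the domain config directory. The domain names ("corp", "services") are made up for illustration:

```ini
# /etc/keystone/keystone.conf - enable per-domain identity backends
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.corp.conf
# The hypothetical "corp" domain authenticates real people against LDAP.
[identity]
driver = keystone.identity.backends.ldap.Identity

# /etc/keystone/domains/keystone.services.conf
# The hypothetical "services" domain keeps service/script accounts in SQL,
# so no enterprise passwords ever land in automation.
[identity]
driver = keystone.identity.backends.sql.Identity
```

Per-domain config files follow the keystone.DOMAIN_NAME.conf naming convention; each file overrides only the identity backend for that domain, so tokens, assignments, etc. still come from the main configuration.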
Re: [openstack-dev] Mellanox request for permission for Nova CI
Hi all, We had some issues with presentation of the logs, now it looks ok. You can see Nova CI logs here http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150401_1102/ Tempest output is http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150401_1102/testr_results.html.gz We are currently running tempest api tests on Mellanox HW using SRiOV configuration, We are working to add tempest scenario tests with port direct configuration for SRiOV We are also planning to extend tests with our in-house tests developments. Lenny Verkhovsky SW Engineer, Mellanox Technologies www.mellanox.comhttp://www.mellanox.com Office:+972 74 712 9244 Mobile: +972 54 554 0233 Fax:+972 72 257 9400 From: Joe Gordon [mailto:joe.gord...@gmail.com] Sent: Thursday, March 26, 2015 3:29 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI On Thu, Mar 19, 2015 at 5:52 AM, Nurit Vilosny nur...@mellanox.commailto:nur...@mellanox.com wrote: Hi Joe, Sorry for the late response. Here are some latest logs for the Nova CI: http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1650/ http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1506/ http://144.76.193.39/ci-artifacts/37/165437/1/check-nova/Check-MLNX-Nova-ML2-Sriov-driver/e90a677/ http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1851/ I couldn't find the equivalent of: http://logs.openstack.org/68/135768/9/check/check-tempest-dsvm-full/f6c95de/logs/testr_results.html.gz Also what tests are running and how do they actually check if sriov works? I can provide more if needed. Thanks, Nurit. 
From: Joe Gordon [mailto:joe.gord...@gmail.com] Sent: Wednesday, March 11, 2015 7:50 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI On Wed, Mar 11, 2015 at 12:49 AM, Nurit Vilosny nur...@mellanox.com wrote: Hi, I would like to ask for CI permission to start commenting on the Nova branch. Mellanox has been engaged in PCI pass-through features for quite some time now. We have had an operating Neutron CI for ~2 years, and since the PCI pass-through features are part of Nova as well, we would like to start monitoring Nova's patches. Our CI has been silently running locally over the past couple of weeks, and I would like to step ahead and start commenting in a non-voting mode. During this period we will closely monitor our systems and be ready to solve any problem that might occur. Do you have a link to the output of your testing system, so we can check what it's testing etc.? Thanks, Nurit Vilosny SW Cloud Solutions Manager Mellanox Technologies 13 Zarchin St. Raanana, Israel Office: 972-74-712-9410 Cell: 972-54-4713000 Fax: 972-74-712-9111
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Apr 1, 2015, at 3:52 AM, Thierry Carrez thie...@openstack.org wrote: Joe Gordon wrote: On Tue, Mar 31, 2015 at 5:46 PM, Dean Troyer dtro...@gmail.com wrote: On Tue, Mar 31, 2015 at 5:30 PM, Joe Gordon joe.gord...@gmail.com wrote: Do you feel like a core developer/reviewer (we initially called them core developers) [1]: In OpenStack a core developer is a developer who has submitted enough high quality code and done enough code reviews that we trust their code reviews for merging into the base source tree. It is important that we have a process for active developers to be added to the core developer team. Or a maintainer [1]: 1. They share responsibility in the project's success. 2. They have made a long-term, recurring time investment to improve the project. 3. They spend that time doing whatever needs to be done, not necessarily what is the most interesting or fun. First, I don't think these two things are mutually exclusive; that's a false dichotomy. They sound like two groups of attributes (or roles), both of which must be earned in the eyes of the rest of the project team. Frankly, being a PTL is your maintainer list on steroids for some projects, except that the PTL is directly elected. +1000 Yes, these are not orthogonal ideas. The question should be rephrased to 'which description do you identify with the most: core developer/reviewer or maintainer?' - Some people are core reviewers and maintainers (or drivers, to reuse the OpenStack terminology we already have for that) - Some people are core reviewers only (because they can't commit 90% of their work time to work on project priorities) - Some people are maintainers/drivers only (because their project duties don't give them enough time to also do reviewing) - Some people are casual developers / reviewers (because they can't spend more than 30% of their day on project stuff) That's a nice, concise list. I like that. All those people are valuable.
Simply renaming core reviewers to maintainers (creating a single super-developer class) just excludes valuable people. I don't care about the name, but... It's been interesting to watch reactions to the naming thing, because some folks see maintainer as an upgrade, and some don't, and it's easy to tell what someone's reaction will be simply by the bias they're bringing to that word. I'd like to see in the text of any of the proposals where it actually advocates a super developer, because I'm not seeing it, and the constant repeating of this meme isn't helping. Thanks, doug -- Thierry Carrez (ttx)
Re: [openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability
Great. I'm just doing some experiments to evaluate the REQ/REP pattern. It seems that your implementation is complete. Looking forward to reviewing your updates. On Mon, Mar 30, 2015 at 4:02 PM, ozamiatin ozamia...@mirantis.com wrote: Hi, Sorry for taking so long to reply to the comments on [1]. I'm almost ready to return to the spec with updates. The main shortcoming of the current zmq driver implementation is that it manually implements REQ/REP on top of PUSH/PULL. This results in: 1. PUSH/PULL is a one-way socket (a reply needs another connection), so we need to support a backwards socket pipeline (two pipelines); with REQ/REP we have it all in one socket pipeline. 2. Having to support delivery of the reply over a second pipeline (the REQ/REP state machine). I would like to propose this socket pipeline: rpc_client(REQ(tcp)) = proxy_frontend(ROUTER(tcp)) = proxy_backend(DEALER(ipc)) = rpc_server(REP(ipc)) ROUTER and DEALER are asynchronous substitutes for REQ/REP for building 1-N and N-N topologies, and they don't break the pattern. The recommended pipeline matches CALL nicely. However, CAST can also be implemented over REQ/REP by using the reply as a message-delivery acknowledgement but not returning it to the caller; listening for the CAST reply in a background thread keeps it async as well. Regards, Oleksii Zamiatin On 30.03.15 06:39, Li Ma wrote: Hi all, I'd like to propose a simple but straightforward method to improve the stability of the current implementation. Here's the current implementation: receiver(PULL(tcp)) -- service(PUSH(tcp)) receiver(PUB(ipc)) -- service(SUB(ipc)) receiver(PUSH(ipc)) -- service(PULL(ipc)) Actually, as far as I know, the local IPC transport is much more stable than the network one. I'd like to switch PULL/PUSH to REP/REQ for TCP communication. The change is very simple but effective for stable network communication. I cannot apply the patch to our production systems, but I tried it in my lab and it works well. I know there's another blueprint for the REP/REQ pattern [1], but it's not the same, I think.
I'd like to discuss how to take advantage of the REP/REQ pattern in ZeroMQ. [1] https://review.openstack.org/#/c/154094/2/specs/kilo/zmq-req-rep-call.rst Best regards, -- Li Ma (Nick) Email: skywalker.n...@gmail.com
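For anyone experimenting along the same lines, here is a minimal sketch of the REQ/REP pattern under discussion, using pyzmq over the inproc transport (socket names are illustrative; this is not the oslo.messaging driver code):

```python
# REQ/REP demo: the reply travels back over the same socket pipeline,
# unlike PUSH/PULL, which would need a second connection for replies.
import threading
import zmq

ctx = zmq.Context.instance()

# Bind the REP socket before starting the worker so the REQ connect
# below cannot race the bind (required for inproc endpoints).
rep = ctx.socket(zmq.REP)
rep.bind("inproc://rpc")

def rpc_server(sock):
    msg = sock.recv_string()          # block for one request
    sock.send_string("reply:" + msg)  # answer on the same socket

server = threading.Thread(target=rpc_server, args=(rep,))
server.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://rpc")
req.send_string("ping")               # REQ enforces strict send/recv order
answer = req.recv_string()
print(answer)                         # reply:ping

server.join()
req.close()
rep.close()
ctx.term()
```

Using the reply purely as a delivery acknowledgement (and discarding it) is what makes the same pattern workable for CAST, as Oleksii suggests.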
Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design documentation.
Hi all, Do we have a plan to implement it in Liberty? I am really interested and want to join the effort. /Yalei From: Miguel Ángel Ajo [mailto:majop...@redhat.com] Sent: Saturday, February 21, 2015 12:31 AM To: Ben Pfaff Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [ovs-dev] [PATCH 8/8] [RFC] [neutron] ovn: Start work on design documentation. On Friday, 20 February 2015 at 17:06, Ben Pfaff wrote: On Fri, Feb 20, 2015 at 12:45:46PM +0100, Miguel Ángel Ajo wrote: On Thursday, 19 February 2015 at 23:15, Kyle Mestery wrote: On Thu, Feb 19, 2015 at 3:55 PM, Ben Pfaff b...@nicira.com wrote: My initial reaction is that we can implement security groups as another action in the ACL table that is similar to allow but in addition permits reciprocal inbound traffic. Does that sound sufficient to you? Yes, having fine-grained allows (matching on protocols, ports, and remote IPs) would satisfy the Neutron use case. Also, we use connection tracking to allow reciprocal inbound traffic via ESTABLISHED/RELATED; any equivalent solution would do. For reference, our SG implementation is currently able to match on combinations of: * direction: ingress/egress * protocol: icmp/tcp/udp/raw number * port_range: min-max (it's always dst) * L2 packet ethertype: IPv4, IPv6, etc. * remote_ip_prefix: as a CIDR, or * remote_group_id (to reference all other IPs in a certain group) All of them assume connection tracking, so known-connection packets will go the other way around. OK. All of those should work OK. (I don't know for sure whether we'll have explicit groups; initially, probably not.) That makes sense. Is the exponential explosion due to cross-producting, that is, because you have, say, n1 source addresses and n2 destination addresses and so you need n1*n2 flows to specify all the combinations?
We aim to solve that in OVN by giving the CMS direct support for more sophisticated matching rules, so that it can say something like: ip.src in {a, b, c, ...} ip.dst in {d, e, f, ...} (tcp.src in {80, 443, 8080} || tcp.dst in {80, 443, 8080}) That sounds good and very flexible. and let OVN implement it in OVS via the conjunctive match feature recently added, which is like a set membership match but more powerful. Hmm, where can I find examples of that feature? Sounds interesting. If you look at ovs-ofctl(8) in a development version of OVS, such as http://benpfaff.org/~blp/dist-docs/ovs-ofctl.8.pdf search for conjunction, which explains the implementation. Amazing, yes, it seems like conjunctions will do the work quite optimally at the OpenFlow level. My hat off… :) (This isn't the form that Neutron would use with OVN; that is the Boolean expression syntax above.) Of course, understood; I was curious about the low level supporting the high level above. It might still be nice to support lists of IPs (or whatever), since these lists could still recur in a number of circumstances, but my guess is that this will help a lot even without that. As far as I understood, given the way megaflows resolve rules via hashes, even if we had lots of rules with different IP addresses, that would be very fast, probably as fast as or faster than our current ipset solution. The only caveat would be having to update lots of flow rules when a port goes in or out of a security group, since you have to go and clear/add the rules to each single port in the same security group (as long as they have one rule referencing the SG). That sounds like another good argument for allowing explicit groups. I have a design in mind for that, but I doubt it's the first thing to implement. Of course, one step at a time. I will do a second pass on your documents, looking a bit more at the higher level. I'm very happy to see that the low level is very well tied up and capable. Best regards, Miguel Ángel.
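The flow-count arithmetic behind the cross-product question above can be sketched as follows (the set sizes are illustrative):

```python
# With flat OpenFlow rules, matching n1 sources against n2 destinations
# needs one flow per (src, dst) pair: n1 * n2 flows. A conjunctive match
# instead installs one flow per set member plus a single conj_id flow.
from itertools import product

sources = ["10.0.0.%d" % i for i in range(1, 51)]  # n1 = 50 addresses
dests = ["10.0.1.%d" % i for i in range(1, 51)]    # n2 = 50 addresses

flat_count = len(list(product(sources, dests)))    # 50 * 50 = 2500 flows
conj_count = len(sources) + len(dests) + 1         # 50 + 50 + 1 = 101 flows

print(flat_count, conj_count)
```

The gap widens quadratically as the sets grow, which is why the conjunctive match matters for large security groups.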
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 04/01/2015 12:31 PM, Duncan Thomas wrote: On 1 April 2015 at 10:04, Joshua Harlow harlo...@outlook.com wrote: +1 to this. There will always be people who want to work on the fun stuff and those who don't; it's the job of leadership in the community to direct people if they can (but it is also the job of that leadership to understand that they can't direct everyone; it is open source, after all, and saying 'no' to people just makes them run to some other project that doesn't do this...). IMHO (and a rant probably better for another thread), I've seen too many projects/specs/split-outs (e.g., scheduler tweaks, the constraint-solving scheduler...) get abandoned because of cores saying this or that is the priority right now (and this, in all honesty, pisses me off); I don't feel this is right (cores should be leaders and guides, not dictators); if a core is going to tell anyone that, then they had better act as a guide to the person they are telling it to and make sure they lead that person they just told no; after all, any child can say no, but it takes a real man/woman to go the extra distance... So I think saying no is sometimes a vital part of the core team's role; keeping up code quality and vision is really hard to do while new features are flooding in, and doing architectural reworking while features are merging is an epic task. There are also plenty of features that don't necessarily fit the shared vision of the project; just because we can do something doesn't mean we should. For example: there are plenty of companies trying to turn OpenStack into a datacentre manager rather than a cloud (i.e. too much focus on pets vs. cattle style VMs), and I think we're right to push back against that. Amen to the above. All of it.
Right now there are some strong indications that there are areas we are very weak at (nova-network still being preferred to neutron, the amount of difficulty people had establishing 3rd party CI setups for cinder) that really *should* be prioritised over new features. That said, some projects can be worked on successfully in parallel with the main development - I suspect that a scheduler split-out proposal is one of them. This doesn't need much/any buy-in from cores; it can be demonstrated in a fairly complete state before it is evaluated, so the only buy-in needed is on the concept. Ha, I had to laugh at this last paragraph :) You mention the fact that nova-network is still very much in use in the paragraph above (for good reasons that have been highlighted in other threads). And yet you then go on to suspect that a nova-scheduler split would be something that could be worked on successfully in parallel... The Gantt project tried and failed to split the Nova scheduler out (before it had any public or versioned interfaces). The solver scheduler has not gotten any traction, not because, as Josh says, some cores are acting like dictators, but because it doesn't solve the right problem: it makes more complex scheduling placement decisions in a different way from the Nova scheduler, but it doesn't solve the distributed-scale problems in the Nova scheduler architecture. If somebody developed an external generic resource placement engine that scaled in a distributed, horizontal fashion and that had well-documented public interfaces, I'd welcome that work and quickly work to add a driver for it inside Nova. But both Gantt and the solver scheduler fall victim to the same problem: trying to use the existing Nova scheduler architecture when it's flat-out not scalable. Alright, now that I've said that, I'll wait here for the inevitable complaints that as a Nova core, I'm being a dictator because I speak my mind about major architectural issues I see in proposals.
Best, -jay
Re: [openstack-dev] [Neutron] initial OVN testing
On Mar 31, 2015, at 12:34 AM, Kevin Benton blak...@gmail.com wrote: Very cool. What's the latest status on data-plane support for the conntrack based things like firewall rules and conntrack integration? As Miguel mentioned, we have working code: https://github.com/justinpettit/ovs/tree/conntrack We're in the process of trying to get the support upstreamed in the Linux kernel. --Justin
Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler
I think there's a lot of 'a rose by any other name would smell as sweet' going on here; we're really just arguing about how we label things. I admit I use the term gantt very expansively: this is the effort to clean up the current scheduler and create a separate scheduler-as-a-service project. There should be no reason for this effort to turn people off; if you're interested in the scheduler, then very quickly you will get pointed to gantt. I'd like to hear what others think, but I still don't see a need to change the name (though I'm willing to change if the majority thinks we should drop gantt for now). -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 -----Original Message----- From: Sylvain Bauza [mailto:sba...@redhat.com] Sent: Tuesday, March 31, 2015 1:49 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler On 31/03/2015 02:57, Dugger, Donald D wrote: I actually prefer to use the term Gantt; it neatly encapsulates the discussions, and it doesn't take much effort to realize that Gantt refers to the scheduler. If you feel there is confusion, we can clarify things in the wiki page to emphasize the process: clean up the current scheduler interfaces and then split off the scheduler. The end goal will be the Gantt scheduler and I'd prefer not to change the discussion. Bottom line is I don't see a need to drop the Gantt reference. While I agree with you that *most* of the scheduler effort is to spin off the scheduler as a dedicated repository whose codename is Gantt, there are some points to note: 1. Not all the efforts are related to the split; some are only reducing the tech debt within Nova (e.g.
bp/detach-service-from-computenode has very little impact on the scheduler itself, but rather on what is passed to the scheduler as resources) and may confuse people who could wonder why it is related to the split. 2. We haven't yet agreed on a migration path for Gantt and what will become of the existing nova-scheduler. I seriously doubt that the Nova community would accept keeping the existing nova-scheduler as a feature duplicate of the future Gantt codebase, but that has not yet been discussed and things can be less clear. 3. Based on my experience, we are losing contributors or people interested in the scheduler area because they just don't know that Gantt is actually, at the moment, the Nova scheduler. I seriously don't think that deciding to leave the Gantt codename unused while we're working on Nova will impact our capacity to propose an alternative based on a separate repository, ideally as a cross-project service. It will just reflect the reality, i.e. that Gantt is at the moment more an idea than a project. -Sylvain -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 -----Original Message----- From: Sylvain Bauza [mailto:sba...@redhat.com] Sent: Monday, March 30, 2015 8:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler Hi, tl;dr: I used the [gantt] tag for this e-mail, but I would prefer if we could do this for the last time until we spin off the project. As it is confusing for many people to understand the difference between the future Gantt project and the Nova scheduler effort we're doing, I'm proposing to stop using that name for all the efforts related to reducing the technical debt and splitting out the scheduler.
That includes, not exhaustively, the topic name for our IRC weekly meetings on Tuesdays, any ML thread related to the Nova scheduler, or any discussion related to the scheduler happening on IRC. Instead of using [gantt], please use the [nova] [scheduler] tags. That said, any discussion related to the real future of a cross-project scheduler based on the existing Nova scheduler makes sense to be tagged as Gantt, of course. -Sylvain
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 04/02/2015 09:02 AM, Jeremy Stanley wrote: but since parties who don't understand our mostly non-hierarchical community can see those sets of access controls, they cling to them as a sign of importance and hierarchy of the people listed within. There is no hierarchy for submitting code -- that is good. We all know situations in a traditional company where people say that's foo's area, we don't work on that. Once code is submitted, there *is* a hierarchy. The only way something gets merged in OpenStack is by Brownian motion of this hierarchy. These special cores float around, and as a contributor you just hope that two of them meet up and decide your change is ready. You have zero insight into when this might happen, if at all. The efficiency is appalling, but somehow we get there in the end. IMO requiring two cores to approve *every* change is too much. What we should do is move the responsibility downwards. Currently, as a contributor I am only 1/3 responsible for my change making it through. I write it, test it, clean it up and contribute it; then I require the extra 2/3 to come from the hierarchy. If you only need one core, then that core and I share the responsibility for the change. In my mind, this better recognises the skill of the contributor -- we are essentially saying we trust you. People involved in OpenStack are not idiots. If a change is controversial, or a reviewer isn't confident, they can and will ask for assistance or second opinions. This isn't a two-person-key system in a nuclear missile silo; we can always revert. If you want cores to be less special, then talking about it or calling them something else doesn't help -- the only way is to make them actually less special. -i
Re: [openstack-dev] Reducing noise of the ML (was: Re: [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core)
Actually, for some projects the +1 is part of a public voting process and therefore required. Michael On 2 Apr 2015 8:11 am, Steve Martinelli steve...@ca.ibm.com wrote: *puts mailing list police hat on* Refer to http://lists.openstack.org/pipermail/openstack-dev/2015-March/059642.html I know we're trying to show support for our peers, and +1'ing lets them know just that. But it causes a lot of noise, and in the end it's up to the PTL. Thanks, Steve Martinelli OpenStack Keystone Core Fei Long Wang feil...@catalyst.net.nz wrote on 04/01/2015 04:34:12 PM: From: Fei Long Wang feil...@catalyst.net.nz To: openstack-dev@lists.openstack.org Date: 04/01/2015 04:39 PM Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core +1 if this can be counted :) On 02/04/15 06:18, gordon chung wrote: hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him.
-- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington --
Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core
Super good news!!! - Kun Huang (Gareth) -----Original Message----- From: gordon chung [mailto:g...@live.ca] Sent: Thursday, April 02, 2015 8:47 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core for those who i have/haven't already discussed this nomination with, please vote here: https://review.openstack.org/#/c/169959/ for those who can't comment in gerrit, just holler something in #openstack-ceilometer. cheers, gord Date: Thu, 2 Apr 2015 09:34:12 +1300 From: feil...@catalyst.net.nz To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core +1 if this can be counted :) On 02/04/15 06:18, gordon chung wrote: hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him.
-- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington --
[openstack-dev] [tripleo][puppet] Running custom puppet manifests during overcloud post-deployment
Hey all, I've run into a requirement where it'd be useful if, as an end user, I could inject a personal ssh key onto all provisioned overcloud nodes. Obviously this is something that not every user would need or want. I talked about some options with Dan Prince on IRC, and (besides suggesting that I bring the discussion to the mailing list) he proposed some generic solutions - and Dan, please feel free to correct me if I misunderstood any of your ideas. The first is to specify a pre-set custom puppet manifest to be run when the Heat stack is created, by adding a post_deployment_customizations.pp puppet manifest to be run by all roles. Users would simply override this manifest. The second solution is essentially the same as the first, except we'd perform the override at the Heat resource registry level: the user would update the resource reference to point to their custom manifest (rather than overriding the default post-deployment customization manifest). Do either of these solutions seem acceptable to others? Would one be preferred? Thanks, Tzu-Mainn Chen
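A sketch of the second option as a Heat environment file; the registry key shown here is hypothetical -- the real one would be whatever alias tripleo-heat-templates registers for the post-deployment hook:

```yaml
# custom_env.yaml -- override the post-deployment customization resource.
# "OS::TripleO::PostDeploymentCustomizations" is an illustrative alias,
# not necessarily the actual registry key in tripleo-heat-templates.
resource_registry:
  OS::TripleO::PostDeploymentCustomizations: /home/stack/my_post_deploy.yaml
```

The user would then pass the file at stack-creation time (e.g. `heat stack-create ... -e custom_env.yaml`), leaving the default templates untouched.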
Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core
for those who i have/haven't already discussed this nomination with, please vote here: https://review.openstack.org/#/c/169959/ for those who can't comment in gerrit, just holler something in #openstack-ceilometer. cheers, gord Date: Thu, 2 Apr 2015 09:34:12 +1300 From: feil...@catalyst.net.nz To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core +1 if this can be counted :) On 02/04/15 06:18, gordon chung wrote: hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him. -- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington --
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 04/01/2015 03:23 AM, Julien Danjou wrote: The problem I see now is that random people who have very little knowledge of $PROJECT or OpenStack as a whole jump into a random review and put a -1 in Gerrit. And then never remove it. And then your patch is stuck forever in review. Probably because we pushed people to review patches, because we needed reviews, etc. Personally this is hitting me back a lot and I'm getting more and more tired of it. How can you have people reviewing code when they have never even written a patch on the project? We encourage people to do code reviews as a way to get involved in the project, to learn about the code base, to learn from the core reviewer team. We encourage people to do bug fixing and commit useful patches in the same way. If the core team won't take the time to help new contributors participate in the project, that is a problem with the core team members. If you don't like someone's review feedback, tell them. Teach them and guide them to more productive contributions. If you don't speak up, you cannot expect the person to change their actions. I've _never_ used only review numbers to promote people to core reviewer. We had people trying to play the game that way, but I don't think you can become a core reviewer on any code if you never fixed a bug nor wrote a patch in the project. Show me a single person who is on a core team, or who has been nominated for a core team, who never pushed a patch or fixed a bug in the project. Best, -jay
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 04/01/2015 06:31 PM, Duncan Thomas wrote: Right now there are some strong indications that there are areas we are very weak at (nova network still being preferred to neutron ...) I don't think that's correct. As per the latest summit survey [1], nova-network was used by 30% of production sites, while the other setups are neutron-based. [1]: http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014 /Ihar __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core
hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
Julien Danjou wrote: On Wed, Apr 01 2015, Jeremy Stanley wrote: Responsibilities not tied to specific controls in our tools do exist in abundance, but they tend to be more fluid and ad-hoc because in most cases there's been no need to wrap authorization/enforcement around them. What I worry is happening is that as a community we're enshrining the arbitrary constructs which we invented to be able to configure our tools sanely. I see this discussion as an attempt to recognize those other responsibilities as well, but worry that creation of additional unnecessary authorization/enforcement process will emerge as a solution and drive us further into pointless bureaucracy. +1 We never used such fine-grained ACLs in Ceilometer. If a person knows enough about the project, sounds responsible, and is helping, then I give him/her the rights to help the project. Which usually includes all the rights, so that the person is not blocked by some ACL if he/she suddenly wants to give advice on a piece of code or triage some bugs. I've never seen big mistakes, and we don't have a lot of unrecoverable mistakes. In the end I prefer to grant forgiveness rather than require permission. +1 Thank you thank you thank you for being a good/decent human :) (I also prefer to do the same...) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core
On Wed, Apr 01 2015, gordon chung wrote: i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. +1 -- Julien Danjou -- Free Software hacker -- http://julien.danjou.info signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova][cells] Meeting time change
On 04/01/2015 03:40 AM, Sylvain Bauza wrote: On 26/03/2015 17:17, Andrew Laski wrote: Daylight saving time has made it so that the 2200 UTC meeting time is fairly inconvenient for a few of us, and the 2100 UTC timeslot is open, so we're going to shift the meeting up by an hour. I have already spoken with many of the people in regular attendance at the meetings, so this should come as little surprise. See you all next week at 2100! I amended https://wiki.openstack.org/wiki/Meetings#Nova_Cellsv2_Meeting accordingly. I guess we're still on #openstack-meeting-3 ? Yes. Thanks. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
Duncan Thomas wrote: On 1 April 2015 at 10:04, Joshua Harlow harlo...@outlook.com wrote: +1 to this. There will always be people who want to work on fun stuff and those who don't; it's the job of leadership in the community to direct people if they can (but also the same job of that leadership to understand that they can't direct everyone; it is open source after all, and saying 'no' to people just makes them run to some other project that doesn't do this...). IMHO (and a rant probably better for another thread), I've seen too many projects/specs/split-outs (i.e., scheduler tweaks, constraint-solving scheduler...) get abandoned because of cores saying this or that is the priority right now (and this in all honesty pisses me off); I don't feel this is right (cores should be leaders and guides, not dictators); if a core is going to tell anyone that, then they'd better act as a guide to the person they are telling it to and make sure they lead that person they just told no; after all, any child can say no, but it takes a real man/woman to go the extra distance... So I think saying no is sometimes a vital part of the core team's role; keeping up code quality and vision is really hard to do while new features are flooding in, and doing architectural reworking while features are merging is an epic task. There are also plenty of features that don't necessarily fit the shared vision of the project; just because we can do something doesn't mean we should. For example: there are plenty of companies trying to turn OpenStack into a datacentre manager rather than a cloud (i.e. too much focus on pets vs. cattle style VMs), and I think we're right to push back against that. Sure, say 'no', but guide the person you just told that to in a way that gets them to work on something that both of you find useful; just saying no and 'shove off' (for lack of a better phrase) IMHO isn't the right thing to do. 
It should IMHO be the responsibility of the person saying 'no' to someone else (I guess this is the core team?) to man up and guide the person they said 'no' to (and not the other way around). I don't feel like this has happened, though (but maybe I'm too much in my own little world). Right now there are some strong indications that there are areas we are very weak at (nova network still being preferred to neutron, the amount of difficulty people had establishing 3rd party CI setups for cinder) that really *should* be prioritised over new features. Sure; I'm not gonna assign blame; but I feel like something hasn't worked out right and I start to look at the TC for some of this, but I'm not gonna go much deeper into this since blame is a bad thing to try to place (and doesn't really help make anything better)... That said, some projects can be worked on successfully in parallel with the main development - I suspect that a scheduler split-out proposal is one of them. This doesn't need much/any buy-in from cores; it can be demonstrated in a fairly complete state before it is evaluated, so the only buy-in needed is on the concept. This is a common development mode in the kernel world too. Agreed. -- Duncan Thomas __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Wed, Apr 1, 2015 at 2:41 AM, Thierry Carrez thie...@openstack.org wrote: Joe Gordon wrote: I am starting this thread based on Thierry's feedback on [0]. Instead of writing the same thing twice, you can look at the rendered HTML from that patch [1]. Neutron tried to go from core to maintainer but after input from the TC and others, they are keeping the term 'core' but are clarifying what it means to be a neutron core [2]. [2] does a very good job of showing how what it means to be core is evolving. From everyone is a dev and everyone is a reviewer. No committers or repo owners, no aristocracy. Some people just commit to do a lot of reviewing and keep current with the code, and have votes that matter more (+2). (Thierry) To a system where cores are more than people who have votes that matter more. Neutron's proposal tries to align that document with what is already happening. 1. They share responsibility in the project's success. 2. They have made a long-term, recurring time investment to improve the project. 3. They spend their time doing what needs to be done to ensure the project's success, not necessarily what is the most interesting or fun. A bit of history is useful here. We used [1] to have 4 groups for each project, mostly driven by the need to put people in ACL groups. The PTL (which has ultimate control), the Drivers (the trusted group around the PTL which had control over blueprint targeting in Launchpad), the Core reviewers (which have +2 on the repos in Gerrit), and the bug team (which had special Launchpad bug rights like the ability to confirm stuff). [1] https://wiki.openstack.org/wiki/Launchpad_Teams_and_Gerrit_Groups In that model, drivers is closer to what you describe for maintainers -- people invested 100% in the project success, and able to spend 95% of their work time to ensure it. I am having a hard time aligning your description to what we have today. 
This comparison misses what IMHO is the most important part of the maintainer model: subsystem maintainers. Maintaining a subsystem doesn't need to be a full-time job. My main objection to the model you propose is its binary nature. You bundle core reviewing duties with drivers duties into a single group. That simplification means that drivers have to be core reviewers, and that core reviewers have to be drivers. Sure, a lot of core reviewers are good candidates to become drivers. But I think bundling the two concepts excludes a lot of interesting people from being a driver. I cannot speak for all projects, but at least in Nova you have to be a nova-core to be part of nova-drivers. If someone steps up and owns bug triaging in a project, that is very interesting and I'd like that person to be part of the drivers group. In our current model, I'm not sure why they would need to be part of drivers; the bug triage group is open to anyone. That said, bug triaging (like core reviewing) is a full-time job. You can't expect the person who owns bug triaging to commit to the level of reviewing that core reviewers commit to. It's also a different skillset. Saying core reviewers and maintainers are the same thing, you basically I don't want to make it harder for people to feel empowered to help, I want to make it easier. exclude people from stepping up to the project leadership unless they are code reviewers. I think that's a bad thing. We need more people volunteering to own bug triaging and liaison work, not less. I don't agree with this statement; I am not saying reviewing and maintenance need to be tightly coupled. Why do we review code? http://docs.openstack.org/infra/manual/developers.html#code-review gives an incomplete list of what we are looking for. But I think it boils down to two general components: * Does the change make sense in the context of the project? 
* Does the patch pass our code quality requirements (testing, pythonic style, formatting, commit message, logging, etc.)? If someone doesn't have a good grasp of the code they are trying to maintain, they won't be able to review whether the patch makes sense or not. In this case, yes, you really cannot be responsible for a piece of code if you don't understand it -- but we have this constraint today. As for the second aspect of reviewing, this has a lot less to do with what the patch is doing and more with how it is doing it. There is no reason reviews for code quality need to be coupled with anything else. I think our idea of combining two separate review criteria into a single review (and ACL) is making things more confusing. So, in summary: * I'm not against reviving the concept of drivers * I'm against making core reviewing a requirement for drivers * I'm for recognizing other duties (like bug triaging or liaison work) as key project leadership positions Hope this clarifies, I really want to know what you meant by 'no aristocracy' and the
Re: [openstack-dev] [Openstack-dev] resource quotas limit per stacks within a project
Any ideas/thoughts, please? In the VMware world this is basically the same feature provided by resource pools. Thanks, Dani On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea comnea.d...@gmail.com wrote: Hi all, I'm trying to understand what options i have for the below use case... Having multiple stacks (various numbers of instances) deployed within 1 OpenStack project (tenant), how can i guarantee that there will be no race for the project's resources? E.g. - say i have a few stacks like stack 1 = production stack 2 = development stack 3 = integration i don't want to be in a situation where stack 3 (because of a need to run some heavy tests) uses all of the resources for a short while, while production suffers from it. Any ideas? Thanks, Dani P.S - i'm aware of the heavy work being put into improving quotas and CPU pinning; however, that is at the project level __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Apr 1, 2015, at 6:09 AM, Jeremy Stanley fu...@yuggoth.org wrote: And here is the crux of the situation, which I think bears highlighting. These empowered groups are (or at least started out as) nothing more than an attempt to map responsibilities onto the ACLs available to our projects in the tools we use to do the work. Coming up with some new pie-in-the-sky model of leadership hierarchy is an interesting thought exercise, but many people in this discussion are losing sight of the fact that the model we have is determined to a great extent by the tools we use. Change the tools and you may change the model, but changing the model doesn't automatically change the tools to support it (and those proposing a new model need to pony up the resources to implement it in _reality_, not just in _thought_). Responsibilities not tied to specific controls in our tools do exist in abundance, but they tend to be more fluid and ad-hoc because in most cases there's been no need to wrap authorization/enforcement around them. What I worry is happening is that as a community we're enshrining the arbitrary constructs which we invented to be able to configure our tools sanely. I see this discussion as an attempt to recognize those other responsibilities as well, but worry that creation of additional unnecessary authorization/enforcement process will emerge as a solution and drive us further into pointless bureaucracy. Given how important trust and relationships are to the functioning of individual projects, I think we’re past the point where we should allow our tooling to be the limiting factor in how we structure ourselves. Do we need finer-grained permissions in gerrit to enable something like subtree maintainers? I don't believe we do. In large projects like Neutron, there is no such thing as someone who knows everything anymore, so we all need to be aware of our limitations and know not to merge things we don't understand without oversight from those of our peers that do. 
Responsibility in this case could be subject to social rather than tool-based oversight. Maru __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 1 April 2015 at 12:52, Thierry Carrez thie...@openstack.org wrote: Yes, these are not orthogonal ideas. The question should be rephrased to 'which description do you identify the most with: core developer/reviewer or maintainer?' - Some people are core reviewers and maintainers (or drivers, to reuse the openstack terminology we already have for that) - Some people are core reviewers only (because they can't commit 90% of their work time to work on project priorities) - Some people are maintainers/drivers only (because their project duties don't give them enough time to also do reviewing) - Some people are casual developers / reviewers (because they can't spend more than 30% of their day on project stuff) All those people are valuable. Simply renaming core reviewers to maintainers (creating a single super-developer class) just excludes valuable people. Ok, I'd misunderstood the proposal further up the thread when I replied before. This sounds eminently sensible. There's certainly no harm at all in recognising large contributions other than reviews, and bug triage is becoming almost as large a job at various points in the cycle. -- Duncan Thomas __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Reducing noise of the ML
Thanks for the heads up. Has it been a common policy already? On 02/04/15 10:09, Steve Martinelli wrote: *puts mailing list police hat on* Refer to http://lists.openstack.org/pipermail/openstack-dev/2015-March/059642.html I know we're trying to show support for our peers, and +1'ing lets them know just that. But it causes a lot of noise, and in the end it's up to the PTL. Thanks, Steve Martinelli OpenStack Keystone Core Fei Long Wang feil...@catalyst.net.nz wrote on 04/01/2015 04:34:12 PM: From: Fei Long Wang feil...@catalyst.net.nz To: openstack-dev@lists.openstack.org Date: 04/01/2015 04:39 PM Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core +1 if this can be counted :) On 02/04/15 06:18, gordon chung wrote: hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him. 
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -- __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Apr 1, 2015, at 1:47 PM, Jeremy Stanley fu...@yuggoth.org wrote: On 2015-04-01 12:00:53 -0700 (-0700), Maru Newby wrote: Given how important trust and relationships are to the functioning of individual projects, I think we’re past the point where we should allow our tooling to be the limiting factor in how we structure ourselves. I'm definitely not suggesting that either, merely pointing out that if you have an ACL which, for example, defines the set of people able to push a particular button then it's helpful to have a term for that set of people. As soon as you start to conflate that specific permission with other roles and responsibilities then the term for it gets overloaded. To me a core reviewer is just that: people with accounts in the .*-core Gerrit groups granted the ability to push a review button indicating that a proposed change is suitable to merge. Whether or not those same people are also afforded permissions outside that system is orthogonal. I find your perspective on the term ‘core reviewer’ to be interesting indeed, and for me it underscores the need to consider whether using the term outside of gerrit is justified. Maru __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [release] python-heatclient 0.4.0
We are pleased to announce the release of: python-heatclient 0.4.0: OpenStack Orchestration API Client Library For more details, please see the git log history below and: http://launchpad.net/python-heatclient/+milestone/0.4.0 Please report issues through launchpad: http://bugs.launchpad.net/python-heatclient Notable changes - The following new CLI commands: deployment-output-show deployment-create - The ability to set and clear pre-create hooks from the CLI and environment - The ability to specify a filename to populate a parameter value with its contents using --parameter-file Changes in python-heatclient 0.3.0..0.4.0 - 22660e9 Parse nested files if they are template 7965bf6 Add wildcard support to hook-clear fa7dd96 Add options for setting and clearing of hooks e8ccf2e Fix test class names 02f7f72 Add option for heatclient to accept parameter value from file 0146483 Sync with oslo_incubator 284c1c5 Migrate to new oslo_xxx namespace d4dab8c Updated from global requirements f213590 Implement deployment-create 747d7d4 Implement deployment-output-show 5475a2e Make ; parsing optional in format_parameters 6b7b1a6 Updated from global requirements ebc1676 Fix SessionClient error when endpoint=None 4cda08d Fix non-working endpoint type argument 5834b62 Updated from global requirements d565efc Updates heat.rst with 'service-list 9b28902 Sort event-list by oldest first Diffstat (except docs and test files) - heatclient/common/deployment_utils.py | 147 + heatclient/common/http.py | 25 +- heatclient/common/template_utils.py| 84 ++ heatclient/common/utils.py | 103 +-- heatclient/exc.py | 2 +- heatclient/openstack/common/_i18n.py | 49 +-- heatclient/openstack/common/apiclient/auth.py | 13 + heatclient/openstack/common/apiclient/base.py | 16 +- heatclient/openstack/common/apiclient/client.py| 6 +- .../openstack/common/apiclient/fake_client.py | 28 +- heatclient/openstack/common/apiclient/utils.py | 17 +- heatclient/openstack/common/cliutils.py| 4 +- 
heatclient/openstack/common/uuidutils.py | 37 --- heatclient/shell.py | 9 +- heatclient/v1/events.py | 3 +- heatclient/v1/resource_types.py | 3 +- heatclient/v1/resources.py | 3 +- heatclient/v1/shell.py | 270 - requirements.txt | 11 +- test-requirements.txt | 6 +- 28 files changed, 1836 insertions(+), 265 deletions(-) Requirements updates diff --git a/requirements.txt b/requirements.txt index dcd74bb..b8eaca6 100644 --- a/requirements.txt +++ b/requirements.txt @@ -10,4 +10,5 @@ PrettyTable>=0.7,<0.8 -oslo.i18n>=1.3.0 # Apache-2.0 -oslo.serialization>=1.2.0 # Apache-2.0 -oslo.utils>=1.2.0 # Apache-2.0 -python-keystoneclient>=1.0.0 +oslo.i18n>=1.5.0,<1.6.0 # Apache-2.0 +oslo.serialization>=1.4.0,<1.5.0 # Apache-2.0 +oslo.utils>=1.4.0,<1.5.0 # Apache-2.0 +python-keystoneclient>=1.1.0 +python-swiftclient>=2.2.0 @@ -16 +17 @@ requests>=2.2.0,!=2.4.0 -six>=1.7.0 +six>=1.9.0 diff --git a/test-requirements.txt b/test-requirements.txt index bc0e85c..cc3962d 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -10 +10 @@ fixtures>=0.3.14 -requests-mock>=0.5.1 # Apache-2.0 +requests-mock>=0.6.0 # Apache-2.0 @@ -13,2 +13,2 @@ mox3>=0.7.0 -oslosphinx>=2.2.0 # Apache-2.0 -oslotest>=1.2.0 # Apache-2.0 +oslosphinx>=2.5.0,<2.6.0 # Apache-2.0 +oslotest>=1.5.1,<1.6.0 # Apache-2.0 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
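The `--parameter-file` feature mentioned in the notable changes (populating a parameter value with a file's contents) can be sketched roughly as below. This is an illustrative approximation only, not python-heatclient's actual implementation; `merge_parameter_files` is a hypothetical helper name.

```python
# Hypothetical sketch of the --parameter-file idea: before calling
# stack create/update, replace named parameter values with the
# contents of local files. Not python-heatclient's real code.

def merge_parameter_files(parameters, parameter_files):
    """Return a copy of `parameters` where each entry named in
    `parameter_files` is populated from the given file's contents."""
    merged = dict(parameters)
    for name, path in parameter_files.items():
        # Read the whole file and use it as the parameter value.
        with open(path) as f:
            merged[name] = f.read()
    return merged
```

On the CLI this roughly corresponds to passing something like `--parameter-file public_key=id_rsa.pub` alongside the usual `--parameters` options (see the client's own help output for the exact syntax).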
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 2015-04-01 14:35:22 -0700 (-0700), Maru Newby wrote: I find your perspective on the term ‘core reviewer’ to be interesting indeed, and for me it underscores the need to consider whether using the term outside of gerrit is justified. Agreed, that's why I said I'm worried that our community is enshrining an implementation detail, and ascribing something more to it than is warranted. Many of the people who have access to mark changes as ready to merge also do bug triage or undertake thankless refactoring of the code commons or set development priorities or write documentation or translate strings or... these are all valuable contributions within the community. Some of these require access to specific controls in our tools granted based on the trust of the community, while others do not, and many of us do more than just one of these things at a time too. There are certainly some nuanced relationships between various tasks, and how our community self-organizes determines some of this. However I'm not sure codifying it and wrapping those relationships in process and policy is always beneficial. I really just wanted to warn against the temptation I've seen for people to confuse the work being done (which is valuable) for the permissions needed to safely do some of that work (which is merely an implementation detail). Work which can be done without needing special permission is not necessarily any less valuable than that which requires addition to some access control; but since parties who don't understand our mostly non-hierarchical community can see those sets of access controls, they cling to them as a sign of importance and hierarchy of the people listed within. -- Jeremy Stanley __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Reducing noise of the ML (was: Re: [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core)
*puts mailing list police hat on* Refer to http://lists.openstack.org/pipermail/openstack-dev/2015-March/059642.html I know we're trying to show support for our peers, and +1'ing lets them know just that. But it causes a lot of noise, and in the end it's up to the PTL. Thanks, Steve Martinelli OpenStack Keystone Core Fei Long Wang feil...@catalyst.net.nz wrote on 04/01/2015 04:34:12 PM: From: Fei Long Wang feil...@catalyst.net.nz To: openstack-dev@lists.openstack.org Date: 04/01/2015 04:39 PM Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core +1 if this can be counted :) On 02/04/15 06:18, gordon chung wrote: hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him. -- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -- __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [puppet] PTL Candidacy
Emilien Macchi wrote: As we want to move under the big tent, we decided in the last Puppet OpenStack meeting that we need a PTL for the next Cycle. I would like to announce my candidacy. Emilien, Though I've only been involved with the project for a short time, it is clear that you are an extremely qualified PTL candidate. FWIW, a +1 from me. Regards, Richard signature.asc Description: OpenPGP digital signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tripleo][kolla] Investigating containerizing TripleO
Hi all, I've had various discussions about $subject, which have been re-started lately due to some excellent work going on in the Heat community (Rabi Mishra's work integrating SoftwareDeployments with various container launching tools [1]) tl;dr It's now possible to launch containers via heat SoftwareConfig resources in much the same way as we currently apply puppet manifests. I'm also aware there has been some great work going on around the kolla community making things work well with both docker-compose and atomic. I'm interested in discussing the next steps, which would appear to involve providing an optional way to deploy services via containers using TripleO. It seems that we can potentially build on the existing abstractions which were added for puppet integration, e.g: https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-resource-registry-puppet.yaml We could have an alternative resource-registry which maps in a different set of templates (which have the same parameter interfaces) which bootstrap a container host, and deploy each service in a container. This might look something like: https://github.com/hardys/heat-templates/blob/docker-host/hot/software-config/example-templates/example-docker-script.yaml This is just a simple example using docker run, but similar (probably much cleaner) approaches will be possible using atomic, docker-compose and other tools. 
For example, here's how we might bootstrap a pristine atomic image, install a privileged container hosting the agents needed to talk to heat, then use that container to launch containers with the services: https://review.openstack.org/#/c/164572/6/hot/software-config/example-templates/example-pristine-atomic-docker-compose.yaml Similar example for docker-compose: https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/example-docker-compose-template.yaml There does seem to be a variety of tools folks prefer, but the pattern appears to be the same in most cases:
1. Provide input parameters to the template
2. Map parameters to an environment consumable by the container-launching tool
3. Run the tool and wait for success
It may be possible to abstract the details of the various tools inside the heat hooks, such that you could e.g. choose the tool you want via a template parameter - i.e. it should be possible to build the templates in a way which is somewhat tool-agnostic, if we get the heat interfaces refined correctly. What do people think, is this direction reasonable? I'm keen to figure out how we do a simple PoC which will bottom out the details, but it'd be great to get some feedback on the general approach. Thanks! Steve
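Steve's three-step pattern (parameters -> environment -> run the tool and wait) could be prototyped tool-agnostically in a few lines of Python. The sketch below is purely illustrative - the helper names and the env-var mapping convention are invented here, and this is not Heat hook or TripleO code:

```python
import subprocess


def build_env(params):
    """Step 2: map template parameters to env vars for the tool."""
    return {k.upper(): str(v) for k, v in params.items()}


def build_command(tool, image, env):
    """Build the launch command for the chosen tool (e.g. 'docker')."""
    cmd = [tool, 'run', '-d']
    for key, value in sorted(env.items()):
        cmd += ['-e', '%s=%s' % (key, value)]
    cmd.append(image)
    return cmd


def launch(tool, image, params):
    """Steps 1-3: take parameters, build the env, run and wait."""
    cmd = build_command(tool, image, build_env(params))
    return subprocess.check_call(cmd)  # raises if the tool fails
```

The `tool` argument could itself come from a template parameter, which is what would keep the templates tool-agnostic.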
Re: [openstack-dev] The Evolution of core developer to maintainer?
On Apr 1, 2015, at 2:52 AM, Thierry Carrez thie...@openstack.org wrote:
- Some people are core reviewers and maintainers (or drivers, to reuse the openstack terminology we already have for that)
- Some people are core reviewers only (because they can't commit 90% of their work time to work on project priorities)
- Some people are maintainers/drivers only (because their project duties don't give them enough time to also do reviewing)
- Some people are casual developers / reviewers (because they can't spend more than 30% of their day on project stuff)
All those people are valuable. Simply renaming core reviewers to maintainers (creating a single super-developer class) just excludes valuable people.
I hear that you believe that the proposal to rename 'core reviewer' to 'maintainer' in Neutron was intended to entrench privilege. Nothing could be further from the truth - it was actually intended to break it down. As per Joe's recent reply, 'drivers' in Nova have to be core reviewers. This is true in Neutron as well. I think a more accurate taxonomy, at least in Neutron, is the following:
- Everyone that participates in the project is a 'contributor'
- Some contributors are 'core reviewers' - members of the team with merge rights on a primary repo and a responsibility to actively review for that repo.
- Some core reviewers are 'drivers' - members of a team with merge rights on the spec repo and a responsibility to actively review for that repo.
This is obviously a gross simplification, but it should serve for what I'm trying to communicate. Many of us in the Neutron community find this taxonomy restrictive and not representative of all the work that makes the project possible. Worse, 'cores' are put on a pedestal, and not just in the project. Every summit a 'core reviewer dinner' is held that underscores the glorification of this designation.
By proposing to rename 'core reviewer' to 'maintainer' the goal was to lay the groundwork for broadening the base of people whose valuable contribution could be recognized. The goal was to recognize not just review-related contributors, but also roles like doc/bug/test czar and cross-project liaison. The stature of the people filling these roles today is lower if they are not also 'core', and that makes the work less attractive to many. Given the TC's apparent mandate to define the organizational taxonomy that a project like Neutron is allowed to use, I would ask you and your fellow committee members to consider addressing the role that the current taxonomy plays in valuing reviewing ahead of other forms of contribution. It provides a disincentive against other forms of contribution, since they aren't recognized on an equal footing, and I think this needs to change if we want to ensure the long-term viability of projects like Neutron (if not OpenStack as a whole). Maru
Re: [openstack-dev] [tripleo][kolla] Investigating containerizing TripleO
Hi, Over the last few weeks Ian Main and I have been working on getting some integration between Kolla and tripleo. For a simple proof of concept, we launched devstack to serve as an undercloud and used heat to spawn an overcloud on a rhel-atomic image. In the overcloud, we deployed openstack in containers pulling from the latest kolla images. We successfully tested keystone, loading an image into glance, booting an image in nova, and sshing into that image. We're starting to move over to a tripleo environment to try and directly integrate now that we've proven it works. Here is the heat template that we are using: https://github.com/rthallisey/atomic-osp-installer/blob/master/heat/openstack_deploy.yaml This will serve as a template that will get openstack up and running on a single node. This template is a good foundation for creating additional templates, since any other config is mostly going to be copy and paste. This POC uses atomic to start up openstack, but it can also be done using docker-compose. The heat template for docker-compose should look about the same. We're going to push this work upstream shortly, but in the meantime the work can be tracked in the repo I linked above. Thanks, Ryan Hallisey
Re: [openstack-dev] [nova] bug expiration
On Thu, Apr 2, 2015 at 4:28 AM, Joe Gordon joe.gord...@gmail.com wrote: On Wed, Apr 1, 2015 at 6:59 AM, Sylvain Bauza sba...@redhat.com wrote: Le 01/04/2015 15:51, Sean Dague a écrit : I just spent a chunk of the morning purging out some really old Incomplete bugs because about 9 months ago we disabled the auto expiration bit in launchpad - https://bugs.launchpad.net/nova/+configure-bugtracker This is a manually grueling task, which by looking at these bugs, no one else is doing. I'd like to turn that bit back on so we can actually get attention focused on actionable bugs. Any objections here? +1000. ++ to re-enabling this. I wonder why it was disabled in the first place. I agree, I didn't realise it was off. Is the manual cleanup complete, or do we need to do more of that at some point? Michael -- Rackspace Australia
Re: [openstack-dev] Mellanox request for permission for Nova CI
This looks good to me, but it would be interesting to see what Sean or Matt thought. Michael On Thu, Apr 2, 2015 at 3:25 AM, Joe Gordon joe.gord...@gmail.com wrote: On Wed, Apr 1, 2015 at 8:28 AM, Lenny Verkhovsky len...@mellanox.com wrote: Hi all, We had some issues with presentation of the logs; now it looks ok. You can see Nova CI logs here http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150401_1102/ Tempest output is http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150401_1102/testr_results.html.gz We are currently running tempest api tests on Mellanox HW using SRiOV configuration. We are working to add tempest scenario tests with port direct configuration for SRiOV. We are also planning to extend tests with our in-house test development. Thanks, that looks a lot better. I would like to get a second opinion from another nova-core, but this looks like enough to start commenting on nova patches. Lenny Verkhovsky SW Engineer, Mellanox Technologies www.mellanox.com Office: +972 74 712 9244 Mobile: +972 54 554 0233 Fax: +972 72 257 9400 From: Joe Gordon [mailto:joe.gord...@gmail.com] Sent: Thursday, March 26, 2015 3:29 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI On Thu, Mar 19, 2015 at 5:52 AM, Nurit Vilosny nur...@mellanox.com wrote: Hi Joe, Sorry for the late response.
Here are some recent logs for the Nova CI: http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1650/ http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1506/ http://144.76.193.39/ci-artifacts/37/165437/1/check-nova/Check-MLNX-Nova-ML2-Sriov-driver/e90a677/ http://144.76.193.39/ci-artifacts/Check-MLNX-Nova-ML2-Sriov-driver_20150318_1851/ I couldn't find the equivalent of: http://logs.openstack.org/68/135768/9/check/check-tempest-dsvm-full/f6c95de/logs/testr_results.html.gz Also, what tests are running and how do they actually check if sriov works? I can provide more if needed. Thanks, Nurit. From: Joe Gordon [mailto:joe.gord...@gmail.com] Sent: Wednesday, March 11, 2015 7:50 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Mellanox request for permission for Nova CI On Wed, Mar 11, 2015 at 12:49 AM, Nurit Vilosny nur...@mellanox.com wrote: Hi, I would like to ask for CI permission to start commenting on the Nova branch. Mellanox has been engaged in pci pass-through features for quite some time now. We have had an operating Neutron CI for ~2 years, and since the pci pass-through features are part of Nova as well, we would like to start monitoring Nova's patches. Our CI has been silently running locally over the past couple of weeks, and I would like to step ahead and start commenting in a non-voting mode. During this period we will be closely monitoring our systems and be ready to solve any problem that might occur. Do you have a link to the output of your testing system, so we can check what it's testing, etc.? Thanks, Nurit Vilosny SW Cloud Solutions Manager Mellanox Technologies 13 Zarchin St.
Raanana, Israel Office: 972-74-712-9410 Cell: 972-54-4713000 Fax: 972-74-712-9111 -- Rackspace Australia
Re: [openstack-dev] [nova] bug expiration
On 04/01/2015 04:56 PM, Michael Still wrote: On Thu, Apr 2, 2015 at 4:28 AM, Joe Gordon joe.gord...@gmail.com wrote: On Wed, Apr 1, 2015 at 6:59 AM, Sylvain Bauza sba...@redhat.com wrote: Le 01/04/2015 15:51, Sean Dague a écrit : I just spent a chunk of the morning purging out some really old Incomplete bugs because about 9 months ago we disabled the auto expiration bit in launchpad - https://bugs.launchpad.net/nova/+configure-bugtracker This is a manually grueling task, which by looking at these bugs, no one else is doing. I'd like to turn that bit back on so we can actually get attention focused on actionable bugs. Any objections here? +1000. ++ to re-enabling this. I wonder why it was disabled in the first place. I agree, I didn't realise it was off. Is the manual cleanup complete, or do we need to do more of that at some point? I did a lot of manual cleanup today, which meant closing out about 100 bugs that were old and highly unlikely to ever get enough info to do anything with. However, turning on auto expire again would be nice. -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [nova] unit tests result in false negatives on system z platform CI
Thanks for the detailed email on this. How about we add this to the agenda for this week's nova meeting? One option would be to add a fixture to some higher level test class, but perhaps someone has a better idea than that. Michael
On Wed, Apr 1, 2015 at 8:54 PM, Markus Zoeller mzoel...@de.ibm.com wrote:
Context: During the Kilo development cycle the KVM/libvirt on system z platform made some effort to be supported by the libvirt driver [1].
Observation: Our first tests in a prototype platform CI showed some false negatives because some unit tests don't seem to be fully platform independent. For example the result of test:

    nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.
    test_get_guest_config_without_qga_through_image_meta [0.016369s] ... FAILED

    Captured traceback:
    ~~~
    Traceback (most recent call last):
      File "nova/tests/unit/virt/libvirt/test_driver.py", line 3112, in test_get_guest_config_without_qga_through_image_meta
        vconfig.LibvirtConfigGuestSerial)
    [...]
    raise mismatch_error
    testtools.matchers._impl.MismatchError: 'nova.virt.libvirt.config.LibvirtConfigGuestConsole object' is not an instance of LibvirtConfigGuestSerial

This mismatch makes complete sense if x86 is assumed as the default underlying platform for unit test execution. The root cause (in this case) is that the call of libvirt_utils.get_arch() is not mocked and actually speaks to the underlying platform. On our system z CI this call returns s390x, which hits platform switches in the code (a search for arch.S390X will show you all this platform-specific code).
A first test
------------
A change of this specific test could look like this:

    # git diff nova/tests/unit/virt/libvirt/test_driver.py
    diff --git a/nova/tests/unit/virt/libvirt/test_driver.py b/nova/tests/unit/virt/libvirt/test_driver.py
    index 5fbe5e1..ebcc9ed 100644
    --- a/nova/tests/unit/virt/libvirt/test_driver.py
    +++ b/nova/tests/unit/virt/libvirt/test_driver.py
    @@ -3091,7 +3091,9 @@ class LibvirtConnTestCase(test.NoDBTestCase):
                                               image_meta, disk_info)

    -    def test_get_guest_config_without_qga_through_image_meta(self):
    +    @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
    +                       return_value=arch.X86_64)
    +    def test_get_guest_config_without_qga_through_image_meta(self, mock_get_arch):
             self.flags(virt_type='kvm', group='libvirt')
             drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

Open questions
--------------
So far we have discovered around 30 test cases (mostly in the class LibvirtConnTestCase) which could be treated that way, which seems cumbersome from my point of view. I'm looking for a way to express the assumption that x86 should be the default platform in the unit tests and prevent calls to the underlying system. This has to be overridable if platform-specific code like in [2] has to be tested. I'd like to discuss how that could be achieved in a maintainable way.

References
----------
[1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz
[2] test_driver.py; test_get_guest_config_with_type_kvm_on_s390; https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L2592

-- Rackspace Australia
Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core
+1 if this can be counted :) On 02/04/15 06:18, gordon chung wrote: hi, i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he has been a leading reviewer in Ceilometer and consistently gives insightful reviews. he also contributes patches and helps triage bugs. reviews: https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z patches: https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z cheers, gord ps. this isn't an april fool's joke as he initially thought when i asked him. -- Cheers Best regards, Fei Long Wang (王飞龙) -- Senior Cloud Software Engineer Tel: +64-48032246 Email: flw...@catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington
Re: [openstack-dev] The Evolution of core developer to maintainer?
On 2015-04-01 12:00:53 -0700 (-0700), Maru Newby wrote: Given how important trust and relationships are to the functioning of individual projects, I think we’re past the point where we should allow our tooling to be the limiting factor in how we structure ourselves. I'm definitely not suggesting that either, merely pointing out that if you have an ACL which, for example, defines the set of people able to push a particular button then it's helpful to have a term for that set of people. As soon as you start to conflate that specific permission with other roles and responsibilities then the term for it gets overloaded. To me a core reviewer is just that: people with accounts in the .*-core Gerrit groups granted the ability to push a review button indicating that a proposed change is suitable to merge. Whether or not those same people are also afforded permissions outside that system is orthogonal. Do we need finer-grained permissions in gerrit to enable something like subtree maintainers? I don't believe we do. In large projects like Neutron, there is no such thing as someone who knows everything anymore, so we all need to be aware of our limitations and know not to merge things we don't understand without oversight from those of our peers that do. Responsibility in this case could be subject to social rather than tool-based oversight. Right, there's nothing stopping you from doing this now. A lot of our project teams already operate in the way you're describing. -- Jeremy Stanley
Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs
Note: I haven't finished debugging the glusterfs job yet. This relates to the OOM that started happening on CentOS after we moved to using as much pip-packaging as possible. glusterfs was still failing even before this. On 04/01/2015 07:58 PM, Deepak Shetty wrote: 1) So why did this happen on the rax VM only? The same (CentOS job) on hpcloud didn't seem to hit it, even when we ran the hpcloud VM with 8GB memory. I am still not entirely certain that hp wasn't masking the issue when we were accidentally giving hosts 32GB RAM. We can get back to this once these changes merge. 2) Should this also be sent to centos-devel folks so that they don't upgrade/update the pyopenssl in their distro repos until the issue is resolved? I think let's give the upstream issues a little while to play out, then we decide our next steps around use of the library based on that information. thanks -i