[openstack-dev] [nova] [neutron] Specs for K release
Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Nova API meeting
Hi, Just a reminder that the weekly Nova API meeting is being held tomorrow (Friday) UTC. We encourage cloud operators and those who use the REST API, such as SDK developers and others who are interested in the future of the API, to participate. In other timezones the meeting is at: EST 20:00 (Thu) Japan 09:00 (Fri) China 08:00 (Fri) ACDT 9:30 (Fri) The proposed agenda and meeting details are here: https://wiki.openstack.org/wiki/Meetings/NovaAPI Please feel free to add items to the agenda. Chris
Re: [openstack-dev] [nova] [neutron] Specs for K release
You could just make the kilo folder in your commit and then rebase it once Kilo is open. On Thu, Aug 28, 2014 at 12:07 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas -- Kevin Benton
Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them
On 08/27/2014 05:52 PM, Doug Hellmann wrote: On Aug 27, 2014, at 11:14 AM, Flavio Percoco fla...@redhat.com wrote: On 08/27/2014 04:31 PM, Sean Dague wrote: So this change came in with adding glance.store - https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a bad direction to be headed. Here is the problem when it comes to working with code from git, in python, that uses namespaces: it's kind of a hack that violates the principle of least surprise. For instance: cd /opt/stack/oslo.vmware; pip install .; cd /opt/stack/oslo.config; pip install -e .; python -m oslo.vmware => /usr/bin/python: No module named oslo.vmware In python 2.7 (using pip) namespaces are a bolt-on because of the way importing modules works, and depending on how you install things in a namespace, an install will overwrite the base __init__.py for the top-level part of the namespace in such a way that you can't get access to the submodules. It's well known, and every conversation I've had with dstufft in the past came down to "don't use namespaces". A big reason we see this a lot is that devstack does 'editable' pip installs for most things, because the point is that it's a development environment: you should be able to change code and see it live without having to go through the install step again. If people remember the constant issues with oslo.config in unit tests 9 months ago, this was because of a mismatch of editable vs. non-editable libraries in the system and virtualenvs. This took months to get to a consistent workaround. The *workaround* that was done is we just gave up on installing oslo libs in a development-friendly way. I don't consider that a solution, it's a work around. But it has some big implications for the usefulness of the development environment. It also definitely violates the principle of least surprise, as changes to oslo.messaging in a devstack env don't immediately apply; you have to reinstall oslo.messaging to get them to take.
If this is just oslo, that's one thing (and still something I think should be revisited, because when the maintainer of pip says don't do this, I'm inclined to go by that). But this change aims to start bringing this pattern into other projects. Realistically I'm quite concerned that this will trigger more workarounds and confusion. It also means, for instance, that once we are in a namespace we can never decide to install some of the namespace from pypi and some of it from git editable (because it's a part that's under more interesting rapid development). So I'd like us to revisit using a namespace for glance, and honestly, for other places in OpenStack, because these kinds of violations of the principle of least surprise are something that I'd like us to be actively minimizing. Sean, Thanks for bringing this up. To be honest, I became familiar with these namespace issues when I started working on glance.store. That said, I realize how this can be an issue even with the current workaround. Unfortunately, it's already quite late in the release, and starting the rename process will delay Glance's migration to glance.store, leaving us with not enough time to test it and make sure things are working as expected. With full transparency, I don't have a counter-argument for what you're saying/proposing. I talked to Doug on IRC and he mentioned this is something that won't be fixed in py27, so there's not even hope of seeing it fixed/working soon. Based on that, I'm happy to rename glance.store, but I'd like us to think of a good way to make this rename happen without blocking the glance.store work in Juno. When you asked me about namespace packages, I thought using them was fine because we had worked out how to do it for Oslo and the same approach worked for you in glance. I didn’t realize that was still considered an unresolved issue, so I apologize that my mistake has ended up causing you more work, Flavio. NP.
Good thing is we now have a plan and a reference for future cases if there happen to be any. You've been way too helpful in this whole process. Thanks for all your guidance and support, really. I see 2 ways to make this happen: 1. Do a partial rename and then complete it after the glance migration is done. If I'm not missing anything, we should be able to do something like: - Rename the project internally - Release a new version with the new name `glancestore` - Switch glance over to `glancestore` - Complete the rename process with support from infra This seems like the best approach. If nothing is using the library now, all of the name changes would need to happen within the library. You can release “glancestore” from a git repo called “glance.store” and we can rename that repo somewhere down the line, so you shouldn’t be blocked on the infra team (who I expect are going to be really busy keeping an eye on the gate as we get close to
Re: [openstack-dev] [nova] [neutron] Specs for K release
I think it's ok to submit specs for Kilo - mostly because it would be a bit pointless submitting them for Juno! Salvatore On 28 August 2014 09:26, Kevin Benton blak...@gmail.com wrote: You could just make the kilo folder in your commit and then rebase it once Kilo is open. On Thu, Aug 28, 2014 at 12:07 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas -- Kevin Benton
Re: [openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)
On 08/27/2014 06:33 PM, Nataliia Uvarova wrote: I don't support the idea of removing this endpoint, although it requires some effort to maintain. First of all, because of the confusion it could bring among users. The href to a message is returned in many cases and is seen as the canonical way to deal with it. (As far as I understand, we encourage users to use the links we provide rather than ids, etc.) And if you could only send DELETE requests to this url and not GET, that does not look good. Second, we do have the ability to get a set of messages by id, using the /messages?ids=ids endpoint. By removing the ability to get a message the normal way, we could unintentionally force users to use this hacky approach to get a single message. The cost of supporting both endpoints is not that much higher than supporting only one of them. As for me, the changes in v1.1 are more cosmetic in the queues/messages part, and it is better to make a decision about this in v2, where we will have more understanding of what is needed and what is not. These are only my thoughts; I'm not very experienced with the Zaqar API yet. On 08/27/2014 05:48 PM, Kurt Griffiths wrote: Crew, as we continue implementing v1.1 in anticipation of a “public preview” at the summit, I’ve started to wonder again about removing the ability to GET a message by ID from the API. Previously, I was concerned that it may be too disruptive a change and should wait for 2.0. But consider this: in order to GET a message by ID you already have to have either listed or claimed that message, in which case you already have the message. Therefore, this operation would appear to have no practical purpose, and so probably won’t be missed by users if we remove it. Am I missing something? What does everyone think about removing getting messages by ID in v1.1? Can I agree with both of you? :D I agree that getting a message by ID is not very useful and also quite confusing for a messaging service.
This is one of those things we agreed on in the early days that we've had to keep for backwards compatibility. Unfortunately, as Nataliia mentioned, we can't just get rid of it in v1.1 because that implies a major change in the API, which would require a major release. What we can do, though, is start working on a spec for the V2 of the API. API v2 sounds like a good topic for a design session, we've already done a good cleanup of minor things in v1.1 and I believe we can make v2 happen in Kilo. Thoughts? Flavio -- @flaper87 Flavio Percoco
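Kurt's observation above — that a client can only learn a message ID by already having listed or claimed the message — can be illustrated with a toy in-memory queue. This is purely a sketch; the class and URL layout below are illustrative, not Zaqar's actual implementation.

```python
import uuid


class ToyQueue(object):
    """Toy in-memory queue showing why GET-by-ID is redundant:
    listing (or claiming) already hands back the full message."""

    def __init__(self, name):
        self.name = name
        self._messages = []

    def post(self, body, ttl=300):
        mid = uuid.uuid4().hex
        msg = {"id": mid, "ttl": ttl, "body": body,
               "href": "/v1.1/queues/%s/messages/%s" % (self.name, mid)}
        self._messages.append(msg)
        return msg

    def list(self):
        # Complete messages come back, bodies included -- the only way a
        # client ever learns an id/href is by receiving the message itself.
        return list(self._messages)


q = ToyQueue("demo")
posted = q.post({"event": "ping"})
fetched = q.list()[0]
print(fetched["body"] == posted["body"])
```

Nothing in this flow ever needs a follow-up GET on the href to read the message, which is the basis of the argument for dropping that endpoint.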
Re: [openstack-dev] [nova] [neutron] Specs for K release
Thanks for your feedback. So I will create a branch for my bp, add the kilo folder and commit it together with my spec. To get sphinx running I also had to make changes to the toc tree in nova-specs/doc/source/index.rst I will also commit this change within my branch, right? thanks, Andreas On Thu, 2014-08-28 at 09:48 +0200, Salvatore Orlando wrote: I think it's ok to submit specs for Kilo - mostly because it would be a bit pointless submitting them for Juno! Salvatore On 28 August 2014 09:26, Kevin Benton blak...@gmail.com wrote: You could just make the kilo folder in your commit and then rebase it once Kilo is open. On Thu, Aug 28, 2014 at 12:07 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas -- Kevin Benton
[openstack-dev] [Zaqar] Early proposals for design summit sessions
Greetings, I'd like to join the early coordination effort for design sessions. I've shamelessly copied Doug's template for Oslo into a new etherpad so we can start proposing sessions there. https://etherpad.openstack.org/p/kilo-zaqar-summit-topics Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA
Anthony and Robert, Thanks for your reply. I don't know if the arping is there for NAT, but I am pretty sure it's for the HA setup to broadcast the router's own change, since the arping is controlled by the send_arp_for_ha config option. By checking the man page of arping, you can find that the 'arping -A' we use in the code sends out an ARP REPLY instead of an ARP REQUEST. This is like saying "I am here" instead of "where are you". I didn't realize this either until Brian pointed it out in my code review below. http://linux.die.net/man/8/arping https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py Thoughts? Xu Han On 08/27/2014 10:01 PM, Veiga, Anthony wrote: Hi Xuhan, What I saw is that GARP is sent to the gateway port and also to the router ports, from a neutron router. I'm not sure why it's sent to the router ports (internal network). My understanding for arping to the gateway port is that it is needed for proper NAT operation. Since we are not planning to support ipv6 NAT, this is not required/needed for ipv6 any more? I agree that this is no longer necessary. There is an abandoned patch that disabled the arping for the ipv6 gateway port: https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py thanks, Robert On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com wrote: As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to start a discussion about how to support l3 agent HA when the IP version is IPv6. This problem is triggered by bug [1], where sending a gratuitous arp packet for HA doesn't work for IPv6 subnet gateways. This is because neighbor discovery instead of ARP should be used for IPv6. My thought to solve this problem turns into how to send out a neighbor advertisement for IPv6 routers, just like sending an ARP reply for IPv4 routers, after reading the comments on code review [2]. I searched for utilities which can do this and only found a utility called ndsend [3] as part of vzctl on ubuntu.
I could not find similar tools on other linux distributions. There are comments in yesterday's meeting that it's the new router's job to send out RA and there is no need for neighbor discovery. But we didn't get enough time to finish the discussion. Because OpenStack runs the l3 agent, it is the router. Instead of needing to do gratuitous ARP to alert all clients of the new MAC, a simple RA from the new router for the same prefix would accomplish the same, without having to resort to a special package to generate unsolicited NA packets. RAs must be generated from the l3 agent anyway if it's the gateway, and we're doing that via radvd now. The HA failover simply needs to start the proper radvd process on the secondary gateway and resume normal operation. Can you comment your thoughts about how to solve this problem in this thread, please? [1] https://bugs.launchpad.net/neutron/+bug/1357068 [2] https://review.openstack.org/#/c/114437/ [3] http://manpages.ubuntu.com/manpages/oneiric/man8/ndsend.8.html Thanks, Xu Han -Anthony
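The distinction discussed above — gratuitous ARP replies for IPv4 versus an unsolicited neighbor advertisement (or a fresh RA) for IPv6 — can be sketched as a small command builder. The function and device names here are hypothetical illustrations, not Neutron's actual l3-agent helpers.

```python
def advertise_address_cmd(device, ip_version, address, count=3):
    """Sketch of the command an HA failover path might run to announce
    that an address has moved (illustrative names, not Neutron code)."""
    if ip_version == 4:
        # 'arping -A' emits unsolicited ARP *replies* (gratuitous ARP):
        # "I am here", as opposed to a request's "where are you".
        return ["arping", "-A", "-I", device, "-c", str(count), address]
    # There is no ARP in IPv6: either send an unsolicited Neighbor
    # Advertisement (e.g. vzctl's ndsend) or, as Anthony suggests, just
    # have radvd on the new master emit an RA for the same prefix.
    return ["ndsend", address, device]


print(advertise_address_cmd("qg-abc123", 4, "10.0.0.1"))
print(advertise_address_cmd("qg-abc123", 6, "2001:db8::1"))
```

The IPv6 branch is exactly the open question in the thread: whether to depend on a distribution-specific tool like ndsend or lean on radvd alone.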
[openstack-dev] [TripleO] Heat AWS WaitCondition's count
Hi all, the AWS::CloudFormation::WaitCondition resource in Heat allows updating the 'count' property, although in real AWS this is prohibited ( https://bugs.launchpad.net/heat/+bug/1340100 ). My question is: does TripleO still depend on this behavior of the AWS WaitCondition in any way? I want to be sure that fixing the mentioned bug will not break TripleO. Best regards, Pavlo Shchelokovskyy.
Re: [openstack-dev] [nova] Is the BP approval process broken?
On Thu, Aug 28, 2014 at 01:04:57AM +0000, Dugger, Donald D wrote: I'll try and not whine about my pet project but I do think there is a problem here. For the Gantt project to split out the scheduler there is a crucial BP that needs to be implemented ( https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP has been rejected and we'll have to try again for Kilo. My question is did we do something wrong or is the process broken? Note that we originally proposed the BP on 4/23/14, went through 10 iterations to the final version on 7/25/14, and the final version got three +1s and a +2 by 8/5. Unfortunately, even after reaching out to specific people, we didn't get the second +2, hence the rejection. I see that it did not even get one +2 at the time of the feature proposal approval freeze. You then successfully requested an exception and after a couple more minor updates got a +2 from John, but from no one else. I do think this shows a flaw in our (core team's) handling of the blueprint. When we agreed upon the freeze exception, that should have included a firm commitment from at least 2 core devs to review it. IOW I think it is reasonable to say that either your feature should have ended up with two +2s and +A, or you should have seen a -1 from another core dev. I don't think it is acceptable that after the exception was approved it only got feedback from one core dev. I actually thought that when approving exceptions, we always got 2 cores to agree to review the item to avoid this, so I'm not sure why we failed here. I understand that reviews are a burden and very hard, but it seems wrong that a BP with multiple positive reviews and no negative reviews is dropped because of what looks like indifference. Given that there is still time to review the actual code patches, it seems like there should be a simpler way to get a BP approved. Without an approved BP it's difficult to even start the coding process.
I see 2 possibilities here: 1) This is an isolated case specific to this BP. If so, there's no need to change the procedures but I would like to know what we should be doing differently. We got a +2 review on 8/4 and then silence for 3 weeks. 2) This is a process problem that other people encounter. Maybe there are times when silence means assent. Something like a BP with multiple +1s and at least one +2 should automatically be accepted if no one reviews it 2 weeks after the +2 is given. My two thoughts are - When we approve something for an exception we should actively monitor the progress of the review to ensure it gets the necessary attention to either approve or reject it. It makes no sense to approve an exception and then let it lie silently waiting for weeks with no attention. I'd expect that any time exceptions are approved we should babysit them and actively review their status in the weekly meeting to ensure they are followed up on. - Core reviewers should prioritize reviews of things which already have a +2 on them. I wrote about this in the context of code reviews last week, but all my points apply equally to spec reviews I believe. http://lists.openstack.org/pipermail/openstack-dev/2014-August/043657.html Also note that in Kilo the process will be slightly less heavyweight in that we're going to try to allow some feature changes into the tree without first requiring a spec/blueprint to be written. I can't say offhand whether this particular feature would have qualified for the lighter process, but in general by reducing the need for specs for the more trivial items, we'll have more time available for review of things which do require specs.
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
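Option 2 above — auto-accepting a spec after two weeks of silence following a +2, provided it has multiple +1s and no negative reviews — could be written down as a simple predicate. This is a hypothetical policy sketch, not anything Gerrit or the nova-specs process actually implements.

```python
from datetime import date, timedelta


def auto_approved(plus_ones, plus_twos, negatives, last_plus_two, today,
                  window_days=14):
    """Sketch of the proposed silence-means-assent rule (hypothetical):
    multiple +1s, at least one +2, no negatives, and two weeks of
    reviewer silence after the +2 would mean automatic acceptance."""
    if negatives or plus_twos < 1 or plus_ones < 2:
        return False
    return (today - last_plus_two) >= timedelta(days=window_days)


# The Gantt scheduler spec's situation: three +1s, a +2 on 8/4,
# then silence for roughly three weeks.
print(auto_approved(3, 1, 0, date(2014, 8, 4), date(2014, 8, 28)))
```

Under this rule the spec in question would have been accepted around 8/18 rather than rejected at the freeze.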
[openstack-dev] [Nova] libvirt: connect_volume scans all LUNs, it will be very slow with a large number of volumes. RE: [OpenStack][Nova] May be performance issues of connect_volume in Nova
Hi, All I’ve reported a bug related to this mail: https://bugs.launchpad.net/nova/+bug/1362513 From: Joe Gordon [mailto:joe.gord...@gmail.com] Sent: Wednesday, August 27, 2014 1:12 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [OpenStack][Nova] May be performance issues of connect_volume in Nova On Tue, Aug 26, 2014 at 5:36 AM, Wang Shen ws1...@gmail.com wrote: Hi, All I have done some work to test the performance of LUN scanning, using iscsiadm with --rescan like Nova does. In my test, a host was connected with a lot of LUNs, more than 1000. Because --rescan causes the kernel to scan all of the LUNs connected to the host, it takes several minutes to complete the scanning. See connect_volume at line 284 in nova/virt/libvirt/volume.py: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L252 Nova uses iscsiadm with --rescan to detect a new volume, but this command will scan all of the LUNs, including all the others already connected to this host. So if a host has a large number of LUNs connected to it, connect_volume will be very slow. I think connect_volume needn't scan all of the LUNs; it only needs to scan the LUN specified by connection_info. Is it worth discussing a more efficient way to address this issue? It sounds like this is a bug; we use https://bugs.launchpad.net/nova to track bugs so they don't get lost. -- Best wishes == Peter.W ==
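One possible shape for the fix suggested above — scanning only the LUN named in connection_info instead of rescanning the whole session — is a targeted write to the SCSI host's sysfs scan file. This is an assumption about one way the change could look, not Nova's actual code; the function name and parameters are illustrative.

```python
def targeted_scan(host_number, channel, target_id, lun):
    """Sketch (assumption, not Nova's implementation): build the sysfs
    path and payload to scan exactly one LUN, instead of running
    'iscsiadm ... --rescan', which walks every LUN on the host."""
    path = "/sys/class/scsi_host/host%d/scan" % host_number
    payload = "%d %d %d" % (channel, target_id, lun)
    # Equivalent shell: echo "0 0 42" > /sys/class/scsi_host/host3/scan
    return path, payload


path, payload = targeted_scan(3, 0, 0, 42)
print(path, payload)
```

The cost of such a scan is constant per attached volume, instead of growing with the total number of LUNs already connected to the host.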
[openstack-dev] [murano] Murano documentation needs improvement
Hi, All! I'd like to draw attention to the Murano documentation. Documentation is really important and shapes the impression of the whole project, since it's the first thing a user or developer encounters. So we need to concentrate on its improvement. I prepared an etherpad [1] with a list of possible changes. Anyone is welcome to add items and to take on their implementation, since it's a good way to get involved in the project. Please contact us in the #murano channel if you have any questions. Good luck everyone! Kate. [1] - https://etherpad.openstack.org/p/murano-docs
Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them
On 27/08/14 16:31, Sean Dague wrote: [snip] In python 2.7 (using pip) namespaces are a bolt on because of the way importing modules works. And depending on how you install things in a namespace will overwrite the base __init__.py for the top level part of the namespace in such a way that you can't get access to the submodules. It's well known, and every conversation with dstuft that I've had in the past was don't use namespaces. I think this is actually a solved problem. You just need a single line in your __init__.py files: https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py -- Radomir Dopieralski
Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them
On 28/08/14 12:41, Radomir Dopieralski wrote: On 27/08/14 16:31, Sean Dague wrote: [snip] In python 2.7 (using pip) namespaces are a bolt on because of the way importing modules works. And depending on how you install things in a namespace will overwrite the base __init__.py for the top level part of the namespace in such a way that you can't get access to the submodules. It's well known, and every conversation with dstuft that I've had in the past was don't use namespaces. I think this is actually a solved problem. You just need a single line in your __init__.py files: https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py More on that at http://pythonhosted.org/setuptools/setuptools.html?namespace-packages#namespace-packages -- Radomir Dopieralski
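For what it's worth, the mechanism behind that one-liner can be demonstrated in isolation: two separate install locations each contributing a submodule to one namespace, stitched together by a pkgutil-based __init__.py. This is a runnable toy with a made-up package name ("acme"); it shows how namespace __path__ extension works, not a claim that it avoids the pip editable-install overwrite problem discussed elsewhere in this thread.

```python
import os
import sys
import tempfile

# Build two independent "install locations" that each ship part of a
# hypothetical "acme" namespace package.
root = tempfile.mkdtemp()
for location, submodule in (("site_a", "storage"), ("site_b", "messaging")):
    pkg = os.path.join(root, location, "acme")
    os.makedirs(pkg)
    # The single-line idea: extend __path__ so every sys.path entry
    # contributing to the namespace gets searched for submodules.
    with open(os.path.join(pkg, "__init__.py"), "w") as init:
        init.write("from pkgutil import extend_path\n"
                   "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, submodule + ".py"), "w") as mod:
        mod.write("NAME = %r\n" % submodule)
    sys.path.insert(0, os.path.join(root, location))

# Both halves of the namespace are importable, whichever copy of
# acme/__init__.py Python happens to load first.
import acme.storage
import acme.messaging
print(acme.storage.NAME, acme.messaging.NAME)
```

The failure mode Sean describes arises when an installer flattens one of these locations' __init__.py over the other's, which this in-process demo deliberately never does.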
Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them
On 08/28/2014 12:50 PM, Radomir Dopieralski wrote: On 28/08/14 12:41, Radomir Dopieralski wrote: On 27/08/14 16:31, Sean Dague wrote: [snip] In python 2.7 (using pip) namespaces are a bolt on because of the way importing modules works. And depending on how you install things in a namespace will overwrite the base __init__.py for the top level part of the namespace in such a way that you can't get access to the submodules. It's well known, and every conversation with dstuft that I've had in the past was don't use namespaces. I think this is actually a solved problem. You just need a single line in your __init__.py files: https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py More on that at http://pythonhosted.org/setuptools/setuptools.html?namespace-packages#namespace-packages This is not solved. In fact, we already declare the namespace in the __init__ files. The problem, as Sean mentioned, is the way pip and setuptools will install these packages, which ends up overwriting the base __init__ file. Thanks for the links, though. Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3
On 08/27/2014 03:35 PM, Ken Giusti wrote: Hi All, I believe Juno-3 is our last chance to get this feature [1] included into oslo.messaging. I honestly believe this patch is about as low risk as possible for a change that introduces a whole new transport into oslo.messaging. The patch shouldn't affect the existing transports at all, and doesn't come into play unless the application specifically turns on the new 'amqp' transport, which won't be the case for existing applications. The patch includes a set of functional tests which exercise all the messaging patterns, timeouts, and even broker failover. These tests do not mock out any part of the driver - a simple test broker is included which allows the full driver codepath to be executed and verified. AFAIK, the only remaining technical block to adding this feature, aside from core reviews [2], is sufficient infrastructure test coverage. We discussed this a bit at the last design summit. The root of the issue is that this feature depends on a platform-specific library (proton) that isn't in the base repos for most of the CI platforms. But it is available via EPEL, and the Apache QPID team is actively working towards getting the packages into Debian (a PPA is available in the meantime). In the interim I've proposed a non-voting CI check job that will sanity check the new driver on EPEL-based systems [3]. I'm also working towards adding devstack support [4], which won't be done in time for Juno but nevertheless I'm making it happen. I fear that this feature's inclusion is stuck in a chicken/egg deadlock: the driver won't get merged until there is CI support, but the CI support won't run correctly (and probably won't get merged) until the driver is available. The driver really has to be merged first, before I can continue with CI/devstack development.
[1] https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation [2] https://review.openstack.org/#/c/75815/ [3] https://review.openstack.org/#/c/115752/ [4] https://review.openstack.org/#/c/109118/ Hi Ken, Thanks a lot for your hard work here. As I stated in my last comment on the driver's review, I think we should let this driver land and let future patches improve it where/when needed. I agreed on letting the driver land as-is based on the fact that there are patches already submitted, ready to enable the gates for this driver. Let's see what others think about this, Flavio -- @flaper87 Flavio Percoco
Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes
I would like to add a question to John's list - Original Message - From: John Schwarz jschw...@redhat.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Sent: Tuesday, August 26, 2014 2:22:33 PM Subject: Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes On 08/25/2014 10:06 PM, Brandon Logan wrote: 2. Therefore, there should be some configuration to specifically enable either version (not both) in case LBaaS is needed. In this case, the other version is disabled (i.e. a REST query for the non-active version should return a not activated error). Additionally, adding an 'lb-version' command to return the currently active version seems like a good user-facing idea. We should see how this doesn't negatively affect the db migration process (for example, allowing read-only commands for both versions?) A /version endpoint can be added for both v1 and v2 extensions and service plugins. If it doesn't already exist, it would be nice if neutron had an endpoint that returned the list of loaded extensions and their versions. There is 'neutron ext-list', but I'm not familiar enough with it or with the REST API to say if we can use that. 3. Another decision that needs to be made is the syntax for v2. As mentioned, the current new syntax is 'neutron lbaas-object-command' (against the old 'lb-object-action'), keeping in mind that once v1 is deprecated, a syntax like 'lbv2-object-action' would probably be unwanted. Is 'lbaas-object-command' okay with everyone? That is the reason we went with lbaas: lbv2 looks ugly and we'd be stuck with it for the lifetime of v2, unless we did another migration back to lb for it, which seemed wrong to do, since then we'd have to accept both lbv2 and lb commands, and then deprecate lbv2. I assume this also means you are fine with the prefix in the API resource being /lbaas as well then?
I don't mind, as long as there is a similar mechanism which disables the non-active REST API commands. Does anyone disagree? 4. If we are going for a different API between versions, appropriate patches also need to be written for lbaas-related scripts and also Tempest, and their maintainers should probably be notified. Could you elaborate on this? I don't understand what you mean by a different API between versions. The intention was that the change to the user-facing API also forces changes on other levels - not only does neutronclient need to be modified accordingly, but also tempest system tests, the horizon interface regarding LBaaS... 5. If we accept #3 and #4 to mean that the python client API and CLI must be changed/updated, and so do the Tempest clients and tests, then what about other projects consuming the Neutron API? How are Heat and Ceilometer going to be affected by this change? Yair
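The behavior proposed in point #2 — exactly one active LBaaS version, with the inactive version refusing requests while a version query always answers — could be sketched like this. The class and return strings are a hypothetical illustration, not Neutron code or its actual error format.

```python
class LBaaSVersionGate(object):
    """Sketch of the one-active-version proposal (hypothetical helper):
    commands against the inactive version fail fast with a 'not
    activated' error, and 'lb-version' reports which one is live."""

    def __init__(self, active="v2"):
        self.active = active

    def handle(self, version, command):
        if command == "lb-version":
            # The version query works regardless of which API is active.
            return self.active
        if version != self.active:
            return "error: LBaaS %s is not activated" % version
        return "ok: %s %s" % (version, command)


gate = LBaaSVersionGate(active="v2")
print(gate.handle("v2", "lbaas-loadbalancer-list"))
print(gate.handle("v1", "lb-pool-list"))
print(gate.handle("v1", "lb-version"))
```

A refinement raised in the thread would be to allow read-only v1 commands during db migration instead of rejecting them outright.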
Re: [openstack-dev] [nova] [neutron] Specs for K release
How do we handle specs that have slipped through the cracks and did not make it for Juno? /Alan From: Salvatore Orlando [mailto:sorla...@nicira.com] Sent: August-28-14 9:48 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [neutron] Specs for K release I think it's ok to submit specs for Kilo - mostly because it would be a bit pointless submitting them for Juno! Salvatore On 28 August 2014 09:26, Kevin Benton blak...@gmail.com wrote: You could just make the kilo folder in your commit and then rebase it once Kilo is open. On Thu, Aug 28, 2014 at 12:07 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas -- Kevin Benton
Re: [openstack-dev] [nova] [neutron] Specs for K release
On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote: How do we handle specs that have slipped through the cracks and did not make it for Juno? Rebase the proposal so it is under the 'kilo' directory path instead of 'juno' and submit it for review again. Make sure to keep the Change-Id line intact so people see the history of any review comments on the earlier Juno proposal. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
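Daniel's advice can be sketched roughly as below. The paths and the Change-Id value are illustrative placeholders; in a real specs repo you would use `git mv` and `git commit --amend` so Gerrit links the new review to the old one:

```shell
# Hedged sketch of re-proposing a Juno spec for Kilo.
mkdir -p specs/juno specs/kilo
printf 'My feature spec\n' > specs/juno/my-feature.rst

# Move the spec under the kilo directory (in a real repo: git mv).
mv specs/juno/my-feature.rst specs/kilo/my-feature.rst

# When amending the commit, keep the Change-Id footer intact so
# reviewers can see the history from the earlier Juno proposal.
printf 'Re-propose my-feature for Kilo\n\nChange-Id: I3f2504e04f8911d39a0c0305e82c3301\n' > commit-msg.txt
grep -c '^Change-Id:' commit-msg.txt   # → 1
```

If the Change-Id is dropped, Gerrit treats the re-proposal as a brand-new change and the earlier review comments are not linked.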
Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects
Thierry Carrez wrote: Doug Hellmann wrote: This makes sense to me, so let’s move ahead with your plan. OK, this is now done: Project group @ https://launchpad.net/oslo Oslo incubator: https://launchpad.net/oslo-incubator oslo.messaging: https://launchpad.net/oslo.messaging General blueprint view: https://blueprints.launchpad.net/oslo General bug view: https://bugs.launchpad.net/oslo We do have launchpad projects for some of the other oslo libraries, we just haven’t been using them for release tracking: https://launchpad.net/python-stevedore https://launchpad.net/python-cliff https://launchpad.net/taskflow https://launchpad.net/pbr https://launchpad.net/oslo.vmware Cool, good to know. I'll include them in the oslo group if we create it. I added pbr, but I don't have the rights to move the other ones. It would generally be good to have oslo-drivers added as maintainer or driver for those projects so that we can fix them, if they are part of oslo. -- Thierry Carrez (ttx)
Re: [openstack-dev] [all] gate debugging
David Kranz wrote: On 08/27/2014 03:43 PM, Sean Dague wrote: On 08/27/2014 03:33 PM, David Kranz wrote: Race conditions are what makes debugging very hard. I think we are in the process of experimenting with such an idea: asymmetric gating by moving functional tests to projects, making them deeper and more extensive, and gating against their own projects. The result should be that when a code change is made, we will spend much more time running tests of code that is most likely to be growing a race bug from the change. Of course there is a risk that we will impair integration testing and we will have to be vigilant about that. One mitigating factor is that if cross-project interaction uses apis (official or not) that are well tested by the functional tests, there is less risk that a bug will only show up when those apis are used by another project. So, sorry, this is really not about systemic changes (we're running those in parallel), but more about skills transfer in people getting engaged. Because we need both. I guess that's the danger of breaking the thread: apparently I lost part of the context. I agree we need both. I made the comment because if we can make gate debugging less daunting then less skill will be needed, and I think that is key. Honestly, I am not sure the full skill you have can be transferred. It was gained partly through learning in simpler times. I think we could develop tools and visualizations that would help the debugging tasks. We could make those tasks more visible, and therefore more appealing to the brave souls that step up to tackle them. Sean and Joe did a ton of work improving the raw data, deriving graphs from it, highlighting log syntax or adding helpful Apache footers. But these days they spend so much time fixing the issues themselves that they can't continue improving those tools.
And that's part of where the gate burnout comes from: spending so much time on the issues themselves that you can no longer work on preventing them from happening, or making the job of handling the issues easier, or documenting/mentoring other people so that they can do it in your place. -- Thierry Carrez (ttx)
[openstack-dev] [sahara] team meeting Aug 28 1800 UTC
Hi folks, We'll be having the Sahara team meeting as usual in the #openstack-meeting-alt channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140828T18 -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc.
[openstack-dev] [Fuel] Goals for 5.1.1 6.0
Hi Fuelers, while we are busy with the last bugs which block us from releasing 5.1, we need to start thinking about upcoming releases. Some of you have already started POCs, some - specs, and I see discussions on the ML and IRC. From an overall strategy perspective, the focus for 6.0 is: - OpenStack Juno release - Certify 100-node deployment. In terms of OpenStack, if not possible for Juno, let's do it for Icehouse - Send anonymous stats about deployment (deployment modes, features used, etc.) - Stability and Reliability Let's take a little break and think, in the first order, about features, sustaining items and bugs which block us from releasing either 5.1.1 or 6.0. We have to start creating blueprints (and moving them to the 6.0 milestone) and make sure there are critical bugs assigned to the appropriate milestone, if there are any. Examples which come to my mind immediately: - Use a service token to auth in Keystone for upgrades (affects 5.1.1), instead of a plain admin login / pass. Otherwise it affects security, as the user would have to keep the password in plain text - Decrease upgrade tarball size Please come up with blueprint and LP bug links, and a short explanation of why it's a blocker for upcoming releases. Thanks, -- Mike Scherbakov #mihgen
[openstack-dev] [Neutron][third-party] Midokura Third-Party CI Status
Hello, today we realised that the Midokura third-party Jenkins was down and didn't vote on new patches. Apologies. Lucas (who is in charge of this) is on vacation and I'm fighting other wars... Now it should work again as expected, although we have cleaned the queue to avoid overloading our server. Cheers, -- Jaume Devesa Software Engineer at Midokura
Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3
On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote: On 08/27/2014 03:35 PM, Ken Giusti wrote: Hi All, I believe Juno-3 is our last chance to get this feature [1] included into oslo.messaging. I honestly believe this patch is about as low risk as possible for a change that introduces a whole new transport into oslo.messaging. The patch shouldn't affect the existing transports at all, and doesn't come into play unless the application specifically turns on the new 'amqp' transport, which won't be the case for existing applications. The patch includes a set of functional tests which exercise all the messaging patterns, timeouts, and even broker failover. These tests do not mock out any part of the driver - a simple test broker is included which allows the full driver codepath to be executed and verified. AFAIK, the only remaining technical block to adding this feature, aside from core reviews [2], is sufficient infrastructure test coverage. We discussed this a bit at the last design summit. The root of the issue is that this feature is dependent on a platform-specific library (proton) that isn't in the base repos for most of the CI platforms. But it is available via EPEL, and the Apache QPID team is actively working towards getting the packages into Debian (a PPA is available in the meantime). In the interim I've proposed a non-voting CI check job that will sanity-check the new driver on EPEL-based systems [3]. I'm also working towards adding devstack support [4], which won't be done in time for Juno but nevertheless I'm making it happen. I fear that this feature's inclusion is stuck in a chicken/egg deadlock: the driver won't get merged until there is CI support, but the CI support won't run correctly (and probably won't get merged) until the driver is available. The driver really has to be merged first, before I can continue with CI/devstack development.
[1] https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation [2] https://review.openstack.org/#/c/75815/ [3] https://review.openstack.org/#/c/115752/ [4] https://review.openstack.org/#/c/109118/ Hi Ken, Thanks a lot for your hard work here. As I stated in my last comment on the driver's review, I think we should let this driver land and let future patches improve it where/when needed. I agreed on letting the driver land as-is based on the fact that there are patches already submitted, ready to enable the gates for this driver. I feel bad that the driver has been in a pretty complete state for quite a while but hasn't received a whole lot of reviews. There's a lot of promise to this idea, so it would be ideal if we could unblock it. One thing I've been meaning to do this cycle is add concrete advice for operators on the state of each driver. I think we'd be a lot more comfortable merging this in Juno if we could somehow make it clear to operators that it's experimental right now. My idea was:
- Write up some notes which discuss the state of each driver, e.g.:
  - RabbitMQ - the default, used by the majority of OpenStack deployments; perhaps list some of the known bugs, particularly around HA.
  - Qpid - suitable for production, but used in a limited number of deployments. Again, list known issues. Mention that it will probably be removed as the amqp10 driver matures.
  - Proton/AMQP 1.0 - experimental, in active development; will support multiple brokers and topologies; perhaps a pointer to a wiki page with the current TODO list.
  - ZeroMQ - unmaintained and deprecated, planned for removal in Kilo.
- Propose this addition to the API docs and ask the operators list for feedback
- Propose a patch which adds a load-time deprecation warning to the ZeroMQ driver
- Include a load-time experimental warning in the proton driver
Thoughts on that?
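As a concrete illustration of "the application specifically turns on the new 'amqp' transport": the snippet below writes a sample config selecting the driver via the transport URL scheme. The broker host and credentials are placeholders, and the exact option an operator would set is an assumption based on oslo.messaging's transport-URL mechanism, not taken from the patch itself:

```shell
# Hedged sketch: opt in to the experimental 'amqp' transport explicitly.
# Nothing like this is enabled by default; existing deployments keep rabbit/qpid.
cat > oslo-amqp-sample.conf <<'EOF'
[DEFAULT]
transport_url = amqp://guest:guest@broker.example.com:5672//
EOF
grep '^transport_url' oslo-amqp-sample.conf
```

Because the driver is only reached when a URL with the `amqp` scheme is configured, the risk to deployments that never set it should be minimal, which is the point Ken makes above.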
(I understand the ZeroMQ situation needs further discussion - I don't think that's on-topic for the thread, I was just using it as an example of what kind of advice we'd be giving in these docs) Mark.
Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs
James Polley wrote: Point of clarification: I've heard PTL=Project Technical Lead and PTL=Program Technical Lead. Which is it? It is kind of important as OpenStack grows, because the first is responsible for *a* project, and the second is responsible for all projects within a program. Now Program, formerly Project. I think this is worthy of more exploration. Our docs seem to be very inconsistent about what a PTL is - and more broadly, what the difference is between a Project and a Program. Just a few examples: https://wiki.openstack.org/wiki/PTLguide says Program Technical Lead. https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014 simply says PTL - but does say that each PTL is elected by/for a Program. However, Thierry pointed to https://wiki.openstack.org/wiki/Governance/Foundation/Structure which still refers to Project Technical Leads and says explicitly that they lead individual projects, not programs. I actually have edit access to that page, so I could at least update that with a simple s/Project/Program/, if I was sure that was the right thing to do. Don't underestimate how stale wiki pages can become! Yes, fix it. http://www.openstack.org/ has a link in the bottom nav that says Projects; it points to http://www.openstack.org/projects/ which redirects to http://www.openstack.org/software/ which has a list of things like Compute and Storage - which as far as I know are Programs, not Projects. I don't know how to update that link in the nav panel. That's because the same word (compute) is used for two different things: a program name (Compute) and an official OpenStack name for a project (OpenStack Compute a.k.a. Nova). Basically official OpenStack names reduce confusion for newcomers (What is Nova ?), but they confuse old-timers at some point (so the Compute program produces Nova a.k.a. OpenStack Compute ?). 
I wasn't around when the original Programs/Projects discussion was happening - which, I suspect, has a lot to do with why I'm confused today - it seems as though people who were around at the time understand the difference, but people who have joined since then are relying on multiple conflicting verbal definitions. I believe, though, that http://lists.openstack.org/pipermail/openstack-dev/2013-June/010821.html was one of the earliest starting points of the discussion. That page points at https://wiki.openstack.org/wiki/Projects, which today contains a list of Programs. That page does have a definition of what a Program is, but doesn't explain what a Project is or how they relate to Programs. This page seems to be locked down, so I can't edit it. https://wiki.openstack.org/wiki/Projects was renamed to https://wiki.openstack.org/wiki/Programs with the wiki helpfully leaving a redirect behind. So the content you are seeing here is the Programs wiki page, which is why it doesn't define projects. We don't really use the word project that much anymore, we prefer to talk about code repositories. Programs are teams working on a set of code repositories. Some of those code repositories may appear in the integrated release. That page does mention projects, once. The context makes it read, to me, as though a program can follow one process to become part of OpenStack and then another process to become an Integrated project and part of the OpenStack coordinated release - when my understanding of reality is that the second process applies to Projects, not Programs. I've tried to find any other page that talks about what a Project is and how they relate to Programs, but I haven't been able to find anything. Perhaps there's some definition locked up in a mailing list thread or some TC minutes, but I haven't been able to find it. 
During the previous megathread, I got the feeling that at least some of the differing viewpoints we saw were possibly down to some people thinking of a PTL as responsible for just one project, while others think of a PTL as being responsible for any projects that might fit within a Program's scope. As we approach the next PTL elections, I think it would be helpful for us to recap the discussions that led to the Program/Project split and make sure our docs are consistent, so that people who weren't following the discussion this time last year can still be clear what they're voting for. Programs are just acknowledging that code repositories should be organized in the way that makes the most sense technically. They should not be artificially organized to match our governance structure. Before programs existed, it was difficult for teams to organize their code the way they wanted, because there was only one code repository (The Project), so everything had to be in it. Then we added an exception for the Python client projects, so the Nova team could work on the Nova project *and* the Python client for it. But then it made sense to organize the code differently,
Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?
Hi Nikola, Firstly, thanks for waiting for me to respond and sorry I was absent for the last couple of weeks. The extensible resource tracker bp deals with two distinct information flows: 1. information about resources that is passed from the compute node to the scheduler, 2. information about resource requirements passed to the scheduler and the compute node. If I understand your email below correctly, you are saying that information such as extra_specs is not made available to the compute node or the resource plugins. This is specifically about the second item (2.) above. The patch that you propose to revert addresses the first item (1.), i.e. it provides a means to select which resources are tracked and to pass that information to the scheduler. It gives us two things: we can add resource plugins and pass information to the scheduler without having to change the resource tracker or scheduler. We can also pick and choose which resource plugins to use, and so what information we want to write to the database and pass to the scheduler. The ability to omit resource information is as useful as the ability to add it. So if a new resource plugin is added, operators that do not use that information do not need to configure it. As an operator myself, I would be happy to omit the proliferation of compute node details that are coming, while benefiting from those that are of use to me. The interface for the plugins does not need to be considered a fixed external interface, it is not. It is ok to add necessary parameters if there is no other sensible way to pass information you need. So in short, the ability to add resource information without impacting everybody is the value that the patch you want to revert brings. If in the future another design is settled on for resource tracking and scheduling it will still have to face the same requirement.
The compute node will have a set of resource information that could be tracked and used, but not everyone will want the overhead of discovering and reporting all of it, so they should not need to have all of it. Paul On 19 August 2014 10:11, Nikola Đipanov ndipa...@redhat.com wrote: Since after a week of discussing it I see no compelling argument against reverting it - here's the proposal: https://review.openstack.org/115218 Thanks, N. On 08/12/2014 12:21 PM, Nikola Đipanov wrote: Hey Nova-istas, While I was hacking on [1] I was considering how to approach the fact that we now need to track one more thing (NUMA node utilization) in our resources. I went with - I'll add it to compute nodes table thinking it's a fundamental enough property of a compute host that it deserves to be there, although I was considering Extensible Resource Tracker at one point (ERT from now on - see [2]) but looking at the code - it did not seem to provide anything I desperately needed, so I went with keeping it simple. So fast-forward a few days, and I caught myself solving a problem that I kept thinking ERT should have solved - but apparently hasn't, and I think it is fundamentally a broken design without it - so I'd really like to see it re-visited. The problem can be described by the following lemma (if you take 'lemma' to mean 'a sentence I came up with just now' :)): Due to the way scheduling works in Nova (roughly: pick a host based on stale(ish) data, rely on claims to trigger a re-schedule), _same exact_ information that scheduling service used when making a placement decision, needs to be available to the compute service when testing the placement. This is not the case right now, and the ERT does not propose any way to solve it - (see how I hacked around needing to be able to get extra_specs when making claims in [3], without hammering the DB). 
The result will be that any resource that we add and needs user supplied info for scheduling an instance against it, will need a buggy re-implementation of gathering all the bits from the request that scheduler sees, to be able to work properly. This is obviously a bigger concern when we want to allow users to pass data (through image or flavor) that can affect scheduling, but still a huge concern IMHO. As I see that there are already BPs proposing to use this IMHO broken ERT ([4] for example), which will surely add to the proliferation of code that hacks around these design shortcomings in what is already a messy, but also crucial (for perf as well as features) bit of Nova code. I propose to revert [2] ASAP since it is still fresh, and see how we can come up with a cleaner design. Would like to hear opinions on this, before I propose the patch tho! Thanks all, Nikola [1] https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement [2] https://review.openstack.org/#/c/109643/ [3] https://review.openstack.org/#/c/111782/ [4] https://review.openstack.org/#/c/89893
[openstack-dev] What's Up Doc? Aug 28, 2014
_Operators midcycle meetup report _ I went to the Operators midcycle meetup this week. Nice work Gauvain on the automation of the New, Changed, and Deprecated configuration options -- at the Operators Midcycle Meetup yesterday they asked for it and lo and behold, it was already there! Nice. The discussion about docs at the Operators mid-cycle meetup is here: https://etherpad.openstack.org/p/SAT-ops-docs Despite the bit of discussion around where to find networking info, the feedback I got from talking to people is that a separate networking guide will be helpful, so we're staying the course for Juno. I also hear that logging and monitoring are hot topics for operators. Another outcome from the Operators midcycle meetup is that the High Availability Guide is going to move out of the openstack-manuals repo and into its own repository with its own set of core reviewers. _ Security Guide updates _ Bryan Payne did a nice summary post last week about the work and activity going on with the Security Guide. It was also discussed at the Operators Midcycle Meetup and much of the audience there knows about it. http://lists.openstack.org/pipermail/openstack-docs/2014-August/005080.html _ Networking information _ We're continuing to work on the Networking Guide although it is not published yet due to being full of bacon lorem ipsum. We would love more collaborators on that guide, so refer to the spec and the outline to get started. https://wiki.openstack.org/wiki/NetworkingGuide/TOC We're working through issues with the LBaaS v1 API needing to be documented along with the experimental v2 Load Balancing API (such as bug 1361413 https://bugs.launchpad.net/openstack-api-site/+bug/1361413). _ Doc tools _ I'm trying to pin down release timing and a collection process for the clouddocs-maven-plugin. Stay tuned. _ Doc bugs _ We're definitely seeing an uptick in new doc bugs with the upcoming feature freeze (Winter is Coming) so please keep triaging incoming doc bugs.
The ones that come in from DocImpact are merged and should be set to Confirmed. _ Ongoing doc work _ The cinder and trove configuration tables have been updated this week. The HOT user information is ongoing. Gauvain, I think you got your question about audience answered, sounds like assuming the audience understands cloud will work for that info. Looks like Telemetry got their admin information completed, nice work all! Ildiko and Eoghan were especially attentive, thanks all for the efforts here. The training team met this week and the log is here: http://eavesdrop.openstack.org/meetings/training_guides/2014/training_guides.2014-08-25-17.00.log.html Docs are on target, but I wondered if you'd had a chance to bring back any changes from the basic install into the openstack-manuals repo? Let us know the plan there please. Any questions or other items to report? Let us know! Thanks, Anne
Re: [openstack-dev] [nova] [neutron] Specs for K release
For nova we haven't gotten around to doing this, but it shouldn't be a big deal. I'll add it to the agenda for today's meeting. Michael On Thu, Aug 28, 2014 at 2:07 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas -- Rackspace Australia
[openstack-dev] [neutron] Juno-3 BP Review Meeting Minutes
See below. As discussed in the meeting, let's focus on the medium/high priority BPs this week and try to merge them before feature freeze next week. Thanks to all who attended! Kyle Minutes: http://eavesdrop.openstack.org/meetings/neutron_juno_3_bp_review/2014/neutron_juno_3_bp_review.2014-08-28-13.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_juno_3_bp_review/2014/neutron_juno_3_bp_review.2014-08-28-13.01.txt Log: http://eavesdrop.openstack.org/meetings/neutron_juno_3_bp_review/2014/neutron_juno_3_bp_review.2014-08-28-13.01.log.html
Re: [openstack-dev] [nova] [neutron] Specs for K release
On Thu, Aug 28, 2014 at 8:30 AM, Michael Still mi...@stillhq.com wrote: For nova we haven't gotten around to doing this, but it shouldn't be a big deal. I'll add it to the agenda for today's meeting. Michael For Neutron, I have not gone through and removed specs which merged and haven't made it yet. I'll do that today with a review to neutron-specs, and once we hit FF next week I'll make another pass to remove things which didn't make Juno. Keep in mind if your spec doesn't make Juno you will have to re-propose it for Kilo. Thanks! Kyle On Thu, Aug 28, 2014 at 2:07 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com wrote: Hi, is it already possible to submit specs (nova neutron) for the K release? Would be great for getting early feedback and tracking comments. Or should I just commit it to the juno folder? Thanks, Andreas -- Rackspace Australia
Re: [openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)
On 08/26/2014 05:41 PM, Kurt Griffiths wrote: * uWSGI + gevent * config: http://paste.openstack.org/show/100592/ * app.py: http://paste.openstack.org/show/100593/ Hi Kurt! Thanks for posting the benchmark configuration and results. Good stuff :) I'm curious about what effect removing http-keepalive from the uWSGI config would make. AIUI, for systems that need to support lots and lots of random reads/writes from lots of tenants, using keepalive sessions would cause congestion for incoming new connections, and may not be appropriate for such systems. Totally not a big deal; really, just curious if you'd run one or more of the benchmarks with keepalive turned off and what results you saw. Best, -jay
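For anyone wanting to reproduce the comparison Jay suggests, here is a hedged sketch of the uWSGI knob in question. `http-keepalive` is a real uWSGI HTTP-router option, but the rest of this fragment is illustrative and not a copy of Kurt's pasted config:

```shell
# Sample uWSGI config with keepalive on; to benchmark without keepalive,
# simply omit the http-keepalive line so each request opens and closes
# its own connection.
cat > uwsgi-keepalive.ini <<'EOF'
[uwsgi]
http = :8888
gevent = 2000
http-keepalive = true
EOF
grep -c 'http-keepalive' uwsgi-keepalive.ini   # → 1
```

Running the same benchmark against both variants would show whether keepalive helps a few chatty clients at the expense of new incoming connections, which is the congestion concern raised above.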
[openstack-dev] [nova]instance_info_cache_update method has been removed
Hello All, I can see that the instance_info_cache_update method has been removed in version 1.59 of nova/conductor/rpcapi.py in the Juno-2 code. Please let me know how I can update the instance-info-cache table by making an rpc call from nova in the Juno-2 code. Thanks Praveen.
Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA
Anthony and Robert, Thanks for your reply. I don't know if the arping is there for NAT, but I am pretty sure it's for the HA setup to broadcast the router's own change, since the arping is controlled by the send_arp_for_ha config. By checking the man page of arping, you can find that the arping -A we use in the code sends out an ARP REPLY instead of an ARP REQUEST. This is like saying I am here instead of where are you. I didn't realize this either until Brian pointed this out in my code review below. That’s what I was trying to say earlier. Sending out the RA has the same effect. RA says “I’m here, oh and I’m also a router” and should supersede the need for an unsolicited NA. The only thing to consider here is that RAs are sent from LLAs. If you’re doing IPv6 HA, you’ll need to have two gateway IPs for the RA of the standby to work. So far as I know, I think there’s still a bug out on this since you can only have one gateway per subnet. http://linux.die.net/man/8/arping https://review.openstack.org/#/c/114437/2/neutron/agent/l3_agent.py Thoughts? Xu Han On 08/27/2014 10:01 PM, Veiga, Anthony wrote: Hi Xuhan, What I saw is that GARP is sent to the gateway port and also to the router ports, from a neutron router. I’m not sure why it’s sent to the router ports (internal network). My understanding for arping to the gateway port is that it is needed for proper NAT operation. Since we are not planning to support ipv6 NAT, this is not required/needed for ipv6 any more? I agree that this is no longer necessary. There is an abandoned patch that disabled the arping for the ipv6 gateway port: https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py thanks, Robert On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com wrote: As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to start a discussion about how to support l3 agent HA when the IP version is IPv6.
This problem is triggered by bug [1] where sending a gratuitous arp packet for HA doesn't work for IPv6 subnet gateways. This is because neighbor discovery instead of ARP should be used for IPv6. My thought on solving this problem turns into how to send out a neighbor advertisement for IPv6 routers, just like sending an ARP reply for IPv4 routers, after reading the comments on code review [2]. I searched for utilities which can do this and only found a utility called ndsend [3], part of vzctl on Ubuntu. I could not find similar tools on other linux distributions. There were comments in yesterday's meeting that it's the new router's job to send out the RA and there is no need for neighbor discovery. But we didn't get enough time to finish the discussion. Because OpenStack runs the l3 agent, it is the router. Instead of needing to do gratuitous ARP to alert all clients of the new MAC, a simple RA from the new router for the same prefix would accomplish the same, without having to resort to a special package to generate unsolicited NA packets. RAs must be generated from the l3 agent anyway if it’s the gateway, and we’re doing that via radvd now. The HA failover simply needs to start the proper radvd process on the secondary gateway and resume normal operation. Can you comment your thoughts about how to solve this problem in this thread, please? [1] https://bugs.launchpad.net/neutron/+bug/1357068 [2] https://review.openstack.org/#/c/114437/ [3] http://manpages.ubuntu.com/manpages/oneiric/man8/ndsend.8.html Thanks, Xu Han -Anthony
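To summarize the mechanics the thread compares, here is a hedged sketch. The interface name and addresses are placeholders, and the commands are only echoed because they need root and a live interface to run; the arping flags mirror what the l3-agent uses, while the radvd invocation is illustrative:

```shell
IFACE=qg-1234abcd
V4_GW=192.168.0.1
# IPv4 HA: announce the gateway MAC with unsolicited ARP replies
# (arping -A sends ARP REPLY, "I am here", not ARP REQUEST).
echo "arping -A -c 3 -I $IFACE $V4_GW"
# IPv6 HA: no unsolicited NA is needed; the new master resumes sending
# Router Advertisements for the prefix, e.g. by starting radvd with the
# config for that interface.
echo "radvd -C /etc/neutron/ra/$IFACE.radvd.conf"
```

The asymmetry is the whole point of the thread: for IPv4 a dedicated announcement packet is required, while for IPv6 the routing protocol's own periodic RA already carries the "I am here, and I am a router" signal.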
Re: [openstack-dev] [neutron] static IP DHCP
Hello Sanjivini, How are you trying to do it? Creating a port with a static IP: $ neutron port-create --fixed-ip subnet_id=subnet_id,ip_address=10.0.0.100 net_id and then deploying a VM with this port should work: $ nova boot --flavor 1 --image image_id --nic port-id=created_port_id test_vm Please note that this is a usage question and this is the development list. Can you please send the mail to openst...@lists.openstack.org next time? On 27 August 2014 15:49, Sanjivini Naikar sanjivininai...@tataelxsi.co.in wrote: Hi, I want to assign a static IP to my instance. However, when trying to do so, the IP doesn't get associated with the VM. My VM boot logs show: Sending discover... Sending discover... Sending discover... No lease, failing WARN: /etc/rc3.d/S40-network failed How do I assign a static IP to my VM? Regards, Sanjivini Naikar -- Jaume Devesa Software Engineer at Midokura
Re: [openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?
It is not a big worry, but it would have been better if overcloud nodes were independent of the undercloud except for being managed by it. Think of the use case where: 1. nova has three regions, geographically apart 2. the geographical area hosting the undercloud gets doomed 3. all regions of nova get affected as they cannot reboot But we can leave it to the operator to ensure the undercloud comes back. Or am I thinking too much? On Thu, Aug 28, 2014 at 10:47 AM, James Polley j...@jamezpolley.com wrote: That is correct, we intend that the undercloud should be designed with HA in mind. In terms of the design of TripleO, the devtest.sh implementation aims to have this available as a default (see http://docs.openstack.org/developer/tripleo-incubator/README.html#stage-2-being-worked-on for some dated and very brief notes about our aims). This is something that's been receiving a lot of attention over the last cycle, but it's not quite ready out-of-the-box yet. In terms of implementation, this will still require the implementor to make sure that their implementation meets their needs for HA. For instance, it doesn't matter how many redundant neutron nodes we run if they all hang off the back of a single physical non-redundant switch... Are there particular concerns you have with this design? On Thu, Aug 28, 2014 at 2:20 PM, Jyoti Ranjan jran...@gmail.com wrote: I do agree, but it creates an extra requirement for the undercloud if high availability is an important criterion. Because of this, the undercloud has to be there 24x7, 365 days, and to make it available we need to have HA for it also. So you indirectly mean that the undercloud should also be designed keeping high availability in mind. On Wed, Aug 27, 2014 at 7:53 PM, Ben Nemec openst...@nemebean.com wrote: We probably will at some point, but I don't know that it's a huge priority right now.
The PXE booting method works fine, and as I mentioned we don't intend for you to reboot machines without using the undercloud anyway, just like Nova doesn't expect you to reboot VMs directly via libvirt (or your driver of choice). It's likely there would be other issues booting a deployed machine if the undercloud is down anyway (nothing to respond to DHCP requests, for one), so I don't see that as something we want to encourage. -Ben On 08/27/2014 04:11 AM, Jyoti Ranjan wrote: I believe that a local boot option is available in Ironic. Would it not be a good idea to boot from local disk instead of relying on PXE boot always? Curious to know why we are not going down this path? On Wed, Aug 27, 2014 at 3:54 AM, 严超 yanchao...@gmail.com wrote: Thank you very much. And sorry for the cross-posting. Best Regards! Chao Yan -- My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727 My Weibo: http://weibo.com/herewearenow -- 2014-08-26 23:17 GMT+08:00 Ben Nemec openst...@nemebean.com: Oh, after writing my response below I realized this is cross-posted between openstack and openstack-dev. Please don't do that. I suppose this probably belongs on the users list, but since I've already written the response I guess I'm not going to argue too much. :-) On 08/26/2014 07:36 AM, 严超 wrote: Hi, All: I've deployed an undercloud and overcloud on some baremetals. All overcloud machines are deployed by the undercloud. Then I tried to shut down the undercloud machines. After that, if I reboot one overcloud machine, it will never boot from the net, AKA PXE used by the undercloud. Yes, that's normal. With the way our baremetal deployments work today, the deployed systems always PXE boot. After deployment they PXE boot a kernel and ramdisk that use the deployed hard disk image, but it's still a PXE boot. Is that what TripleO is designed to be? We can never shut down undercloud machines for maintenance of the overcloud? Please help me clarify that.
Yes, that's working as intended at the moment. I recall hearing that there were plans to eliminate the PXE requirement after deployment, but you'd have to talk to the Ironic team about that. Also, I don't think it was ever the intent of TripleO that the undercloud would be shut down after deployment. The idea is that you use the undercloud to manage the overcloud machines, so if you want to reboot one you do it via the undercloud nova, not directly on the system itself. Best Regards! Chao Yan -- My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727 My Weibo: http://weibo.com/herewearenow --
[openstack-dev] Raising priority for Dynamic Routing
I would like to ask you to consider raising the priority from low to medium (I completely understand this is not a high priority feature) of the BGP Dynamic Routing feature (https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing). The feature has just 3 patches: * https://review.openstack.org/#/c/115554/ * https://review.openstack.org/#/c/115667/ * https://review.openstack.org/#/c/115938/ ready since last week. It has been proposed as one of the extensions to include in the Neutron Incubator project. I'm sorry, but I don't know the status of the rest of the proposed extensions; I would be happy to collaborate on anything to include it there as the first experimental extension. Cheers, -- Jaume Devesa Software Engineer at Midokura
[openstack-dev] [Neutron] Raising priority for Dynamic Routing
Sorry, I forgot to tag the subject. On 28 August 2014 16:24, Jaume Devesa devv...@gmail.com wrote: [snip] -- Jaume Devesa Software Engineer at Midokura
[openstack-dev] [Fuel] Maintenance mode for patching
All, I'm not sure if it deserves to be mentioned in our documentation, as this seems to be common practice. If an administrator wants to patch his environment, he should be prepared for a temporary downtime of OpenStack services. And he should plan to perform patching in advance: choose a time with minimal load and warn users about possible interruptions of service availability. Our current implementation of patching does not protect from downtime during the patching procedure. HA deployments seem to be more or less stable. But it looks like it is possible to schedule an action on a compute node and get an error because of a service restart. Deployments with one controller... well, you won’t be able to use your cluster until the patching is finished. There is no way to get rid of downtime here. As I understand it, we can get rid of possible issues with computes in HA. But it will require migration of instances and stopping of the nova-compute service before patching. And it will make the overall patching procedure much longer. Do we want to investigate this process?
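To make the compute-node caveat above concrete, the rolling procedure hinted at (migrate, then stop nova-compute, then patch) could be planned roughly like this. This is a sketch only; the step names are hypothetical labels, not Fuel or Nova API calls:

```python
def patch_plan(compute_hosts):
    """Generate an ordered rolling-maintenance plan for compute nodes.

    Assumption: the cloud has spare capacity to live-migrate instances
    away from one host at a time, which is what makes this slow but
    downtime-free for HA deployments.
    """
    plan = []
    for host in compute_hosts:
        plan.append(("disable", host))   # stop scheduling new VMs here
        plan.append(("migrate", host))   # live-migrate instances away
        plan.append(("patch", host))     # apply packages, restart services
        plan.append(("enable", host))    # return host to the scheduler
    return plan
```

Each host passes through all four steps before the next host is touched, which is exactly why the overall patching procedure gets much longer.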
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
On 08/27/2014 04:28 PM, Kevin Benton wrote: What are you talking about? The only reply was from me clarifying that one of the purposes of the incubator was for components of neutron that are experimental but are intended to be merged. Right. The special unicorns. In that case it might not make sense to have a life cycle of their own in another repo indefinitely. The main reasons these experimental components don't make sense to live in their own repo indefinitely are: a) Neutron's design doesn't make it easy or straightforward to build/layer other things on top of it, or: b) The experimental piece of code intends to replace whole-hog a large chunk of Neutron's existing codebase, or: c) The experimental piece of code relies so heavily on inconsistent, unversioned internal interface and plugin calls that it cannot be designed externally due to the fragility of those interfaces Fixing a) is the solution to these problems. An incubator area where experimental components can live will just continue to mask the true problem domain, which is that Neutron's design is cumbersome to build on top of, and its cross-component interfaces need to be versioned, made consistent, and cleaned up to use versioned data structures instead of passing random nested dicts of randomly-prefixed string key/values. Frankly, we're going through a similar problem in Nova right now. There is a group of folks who believe that separating the nova-scheduler code into the Gantt project will magically make placement decision code and solver components *easier* to work on (because the pace of coding can be increased if there wasn't that pesky nova-core review process). But this is not correct, IMO. Separating out the scheduler into its own project before internal interfaces and data structures are cleaned up and versioned will just lead to greater technical debt and an increase in frustration on the part of Nova developers and scheduler developers alike. 
-jay On Wed, Aug 27, 2014 at 11:52 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/26/2014 07:09 PM, James E. Blair wrote: Hi, After reading https://wiki.openstack.org/wiki/Network/Incubator I have some thoughts about the proposed workflow. We have quite a bit of experience and some good tools around splitting code out of projects and into new projects. But we don't generally do a lot of importing code into projects. We've done this once, to my recollection, in a way that preserved history, and that was with the switch to keystone-lite. It wasn't easy; it's major git surgery and would require significant infra-team involvement any time we wanted to do it. However, reading the proposal, it occurred to me that it's pretty clear that we expect these tools to be able to operate outside of the Neutron project itself, to even be releasable on their own. Why not just stick with that? In other words, the goal of this process should be to create separate projects with their own development lifecycle that will continue indefinitely, rather than expecting the code itself to merge into the neutron repo. This has advantages in simplifying workflow and making it more consistent. Plus it builds on known integration mechanisms like APIs and python project versions. But more importantly, it helps scale the neutron project itself. I think that a focused neutron core upon which projects like these can build in a reliable fashion would be ideal. Despite replies to you saying that certain branches of Neutron development work are special unicorns, I wanted to say I *fully* support your above statement.
Best, -jay -- Kevin Benton
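The argument above for versioned, consistent cross-component data structures (instead of "random nested dicts of randomly-prefixed string key/values") can be sketched in a few lines. This is a minimal illustration loosely in the style of oslo.versionedobjects; the field names and `VersionedPort` class are invented for the example, not Neutron's actual schema:

```python
class VersionedPort:
    """A cross-component data structure that knows its own schema version
    and can downgrade itself for older consumers."""
    VERSION = "1.1"
    fields = ("id", "mac_address", "host")  # version 1.1 added "host"

    def __init__(self, **kwargs):
        for name in self.fields:
            setattr(self, name, kwargs.get(name))

    def obj_to_primitive(self, target_version="1.1"):
        """Serialize for the wire, dropping fields the target version
        has never seen instead of surprising it with unknown keys."""
        data = {name: getattr(self, name) for name in self.fields}
        if target_version == "1.0":
            data.pop("host")
        return {"versioned_object.version": target_version,
                "versioned_object.data": data}

port = VersionedPort(id="p1", mac_address="fa:16:3e:00:00:01", host="node-1")
```

With interfaces like this, a consumer pinned to 1.0 keeps working while 1.1 producers evolve, which is the prerequisite Jay argues must come before splitting code into separate repos.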
[openstack-dev] Design sessions for Neutron LBaaS. What do we want/need?
LBaaS team, As we discussed in the weekly LBaaS meeting this morning, we should make sure we get the design sessions scheduled that we are interested in. We currently agreed on the following: * Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we want to go over status and also the whole incubator thingy and how we will best move forward. * Octavia: we want to schedule 2 sessions. --- During one of the sessions I would like to discuss the pros and cons of putting Octavia into the Neutron LBaaS incubator project right away. If it is going to be the reference implementation for LBaaS v2, then I believe Octavia belongs in the Neutron LBaaS v2 incubator. * Flavors, which should be coordinated with markmcclain and enikanorov. --- https://review.openstack.org/#/c/102723/ Is this too many sessions given the constraints? I am assuming that we can also meet at the pods like we did at the last summit. Thoughts? Regards Susanne Thierry Carrez thie...@openstack.org Aug 27 (1 day ago) to OpenStack Hi everyone, I've been thinking about what changes we can bring to the Design Summit format to make it more productive. I've heard the feedback from the mid-cycle meetups and would like to apply some of those ideas for Paris, within the constraints we have (already booked space and time). Here is something we could do: Day 1. Cross-project sessions / incubated projects / other projects I think that worked well last time. 3 parallel rooms where we can address top cross-project questions and discuss the results of the various experiments we conducted during Juno. Don't hesitate to schedule 2 slots for discussions, so that we have time to get to the bottom of those issues. Incubated projects (and maybe other projects, if space allows) occupy the remaining space on day 1, and could occupy pods on the other days. Day 2 and Day 3. Scheduled sessions for various programs That's our traditional scheduled space. We'll have 33% fewer slots available.
So, rather than trying to cover the whole scope, the idea would be to focus those sessions on specific issues which really require face-to-face discussion (which can't be solved on the ML or using spec discussion) *or* require a lot of user feedback. That way, appearing in the general schedule is very helpful. This will require us to be a lot stricter on what we accept there and what we don't -- we won't have space for courtesy sessions anymore, and traditional/unnecessary sessions (like my traditional release schedule one) should just move to the mailing list. Day 4. Contributors meetups On the last day, we could try to split the space so that we can conduct parallel midcycle-meetup-like contributors gatherings, with no time boundaries and an open agenda. Large projects could get a full day, smaller projects would get half a day (but could continue the discussion in a local bar). Ideally that meetup would end with some alignment on release goals, but the idea is to make the best of that time together to solve the issues you have. Friday would finish with the design summit feedback session, for those who are still around. I think this proposal makes the best use of our setup: discuss clear cross-project issues, address key specific topics which need face-to-face time and broader attendance, then try to replicate the success of midcycle meetup-like open unscheduled time to discuss whatever is hot at this point. There are still details to work out (is it possible to split the space, should we use the usual design summit CFP website to organize the scheduled time...), but I would first like to have your feedback on this format. Also, if you have alternative proposals that would make a better use of our 4 days, let me know. Cheers,
[openstack-dev] [Neutron][LBaaS] Design sessions for Neutron LBaaS. What do we want/need?
With a corrected Subject. Susanne On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle sleipnir...@gmail.com wrote: [snip]
Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more
Hello, Drago! I'm extremely interested in learning more about your HOT graphical builder. The screenshots you attached look gorgeous! Your visual representation of Heat resources is much more concise and simple than what I had drawn in the Merlin PoC mock-ups [1]. On the other hand, I have some doubts that D3.js is a good fit for the general purpose UI toolkit Merlin aims to provide. Please don't get me wrong, D3.js is a great library which can do fantastic things with data - in case your data-visualization use-case maps to one of the facilities D3.js provides out of the box. In case it doesn't, there are 2 options: either change your approach to what should be visualized/how it should be visualized, or tweak some inner machinery of D3.js. While bending the design towards the facilities of D3.js doesn't seem a viable choice, changing D3.js from the inside can be painful too. AFAIK the force-directed graph layout from D3.js doesn't provide the means to represent composable entities (which isn't a big problem for Heat, but is a very serious issue for Murano) out of the box. By composable I mean something like [2] - but with a much more complex inner structure (imagine the Resource entity [3] having as its properties other Resource entities which are shown as simple rounded rectangles with labels in that picture, but are expanded into complex objects similar to [3] once the user, say, clicks on them). As far as I understand, you are visualizing that kind of composition via arrow links, but I'd like to try other design options (especially in the case of Murano) and fear that D3.js will constrain me here. I've been thinking a bit about using a more low-level SVG js-framework like Raphael.js - it doesn't offer most of the goodies D3.js does, but it also doesn't force me to create the design based on some data transformations the way D3.js does, providing the good old procedural API instead.
Of course, I may be wrong; perhaps more time and effort invested into the Merlin PoC would allow me to realize it (or not). Yet you are totally right to have stressed the importance of the right tool for implementing the underlying object model (or JSON-wrapper as you called it) - Barricade.js. That's the second big part of the work Merlin has to do, and I can't overstate how beneficial it would be for Merlin to leverage some of the facilities that Barricade.js provides. I'll gladly look at the demo of the template builder and Barricade. Is there any chance I could also take a look at the source code of Barricade.js, so I would better understand to what extent it suits Merlin’s needs? I've searched through github.com and didn't find any traces of a Barricade.js repo, so it seems like an in-house project to me. What are your plans for sharing this library with the community? [1] https://wiki.openstack.org/wiki/Merlin/PoC [2] http://mbostock.github.io/d3/talk/2016/pack-hierarchy.html [3] https://wiki.openstack.org/wiki/File:Merlin_poc_3.png On Wed, Aug 27, 2014 at 6:14 PM, Drago Rosson drago.ros...@rackspace.com wrote: Hello Timur, I have been developing a graphical Heat Orchestration Template builder. I first heard about the Merlin project around a week ago and found that what I have developed in the past few months is in line with what Merlin is trying to accomplish. Some of the features of the template builder (described further down) could be used as part of the UI framework Merlin aims to produce. However, its most important and useful component is a JavaScript library I have developed called Barricade.js, which powers the template builder’s data layer. Barricade.js aims to solve the problem of using JSON data across a web app. It creates data model objects out of JSON using a predefined schema (which looks similar to the schema used in your Murano Dynamic UI). This removes the disadvantages of JSON and introduces some very important advantages: - Encapsulation.
JSON values can be set to any type or deleted entirely, which either causes errors when UI components expect these values to exist or be of a certain type, or forces the UI components to constantly check for correctness. Barricade instead wraps around the JSON and provides accessor methods to ensure type-safe data manipulation. Additionally, Barricade objects are observable, so changes made to their data trigger events that can be subscribed to by UI components. - Normalization. Whenever properties that are defined in the schema are missing in the JSON, Barricade will fill them in with default values. This way, UIs will always have valid values where they expect them, making their design much simpler. Optional properties are extremely common in Heat templates. - Metadata. Anything extra attached to JSON must be handled carefully (such as when converting back to the original YAML format). By wrapping the JSON with Barricade, metadata and convenience methods that UI components can use can be defined. For instance, the datatype of any
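The encapsulation and normalization behaviors described above are easy to picture with a toy example. Here is a rough Python re-imagining of the schema-driven JSON wrapping (names and API are illustrative only, not Barricade.js's actual interface):

```python
class Schema:
    """Wrap raw JSON-like dicts so consumers always see typed, complete data."""

    def __init__(self, fields):
        # fields: name -> (expected_type, default)
        self.fields = fields

    def wrap(self, raw):
        wrapped = {}
        for name, (ftype, default) in self.fields.items():
            value = raw.get(name, default)    # normalization: fill missing keys
            if not isinstance(value, ftype):  # encapsulation: reject bad types
                value = default
            wrapped[name] = value
        return wrapped

# Hypothetical schema for a Heat-resource-like object.
resource_schema = Schema({"type": (str, "OS::Nova::Server"),
                          "count": (int, 1)})
```

A UI component reading the wrapped result never has to check whether "count" exists or is an integer, which is the simplification the email argues for.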
Re: [openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?
So I think the thing to keep in mind here is that your overcloud nodes are OpenStack _instances_ in TripleO. Would you expect to start a VM in Nova, shut off all of the OpenStack services, and still be able to reboot that VM successfully? Maybe if you had static networking, but I highly doubt this is a use case we intentionally support. Running instances should stay running and available, but until you have OpenStack back up you won't be able to make changes to those instances. The same applies to TripleO overcloud nodes. Also keep in mind that TripleO isn't simply a deployment tool - it's also the thing you manage your cloud with. No undercloud = no management interface, which is why HA is so important. Enabling the no-undercloud use case might be possible, but it doesn't scale and I think it would be a diversion from the ultimate goal of the project. Just my opinion, of course. -Ben On 08/28/2014 09:20 AM, Jyoti Ranjan wrote: [snip]
[openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths
Hi folks, I've updated this etherpad with notes from an investigation of Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra repo. Following the notes there, I was able to access swift:// paths from Spark jobs on a Spark standalone cluster launched from Sahara and then fixed up by hand. Comments welcome. This is a POC at this point imho, we have work to do to fully integrate this into Sahara. https://etherpad.openstack.org/p/sahara_spark_edp Best, Trevor
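For readers who haven't dug into the etherpad: the hadoop-openstack plugin resolves swift:// paths via a set of `fs.swift.service.<provider>` properties in the Hadoop configuration. The fragment below is a hedged sketch of what such a core-site.xml could look like; the provider name "sahara" is an assumption chosen to match the `swift://<container>.sahara/<object>` path form, and the endpoint and credentials are placeholders:

```xml
<!-- core-site.xml fragment (illustrative values only) -->
<property>
  <name>fs.swift.impl</name>
  <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
</property>
<property>
  <name>fs.swift.service.sahara.auth.url</name>
  <value>http://keystone.example.com:5000/v2.0/tokens</value>
</property>
<property>
  <name>fs.swift.service.sahara.tenant</name>
  <value>demo</value>
</property>
<property>
  <name>fs.swift.service.sahara.username</name>
  <value>demo</value>
</property>
<property>
  <name>fs.swift.service.sahara.password</name>
  <value>secret</value>
</property>
```

With such properties on the classpath of the Spark executors, a job could then read `swift://mycontainer.sahara/input.txt` like any other Hadoop filesystem path.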
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
Right. The special unicorns. Repeating this without defining it isn't helping anything. b) The experimental piece of code intends to replace whole-hog a large chunk of Neutron's existing codebase, or: In the DVR example I gave, this is the only relevant reason. Regardless of how well the internal Neutron APIs are defined, this same problem would have arisen. DVR completely changed the reference L3 service plugin, which lives in the main tree. A well-defined, versioned internal API would not have helped any of the issues I brought up. The 'experimental' part of the DVR work isn't referring to its internal API modifications, it's referring to how the L3 service plugin will integrate with the data plane. On Thu, Aug 28, 2014 at 7:45 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/27/2014 04:28 PM, Kevin Benton wrote: What are you talking about? The only reply was from me clarifying that one of the purposes of the incubator was for components of neutron that are experimental but are intended to be merged. Right. The special unicorns. In that case it might not make sense to have a life cycle of their own in another repo indefinitely. The main reasons these experimental components don't make sense to live in their own repo indefinitely are: a) Neutron's design doesn't make it easy or straightforward to build/layer other things on top of it, or: b) The experimental piece of code intends to replace whole-hog a large chunk of Neutron's existing codebase, or: c) The experimental piece of code relies so heavily on inconsistent, unversioned internal interface and plugin calls that it cannot be designed externally due to the fragility of those interfaces Fixing a) is the solution to these problems.
An incubator area where experimental components can live will just continue to mask the true problem domain, which is that Neutron's design is cumbersome to build on top of, and its cross-component interfaces need to be versioned, made consistent, and cleaned up to use versioned data structures instead of passing random nested dicts of randomly-prefixed string key/values. Frankly, we're going through a similar problem in Nova right now. There is a group of folks who believe that separating the nova-scheduler code into the Gantt project will magically make placement decision code and solver components *easier* to work on (because the pace of coding can be increased if there wasn't that pesky nova-core review process). But this is not correct, IMO. Separating out the scheduler into its own project before internal interfaces and data structures are cleaned up and versioned will just lead to greater technical debt and an increase in frustration on the part of Nova developers and scheduler developers alike. -jay On Wed, Aug 27, 2014 at 11:52 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/26/2014 07:09 PM, James E. Blair wrote: Hi, After reading https://wiki.openstack.org/wiki/Network/Incubator I have some thoughts about the proposed workflow. We have quite a bit of experience and some good tools around splitting code out of projects and into new projects. But we don't generally do a lot of importing code into projects. We've done this once, to my recollection, in a way that preserved history, and that was with the switch to keystone-lite. It wasn't easy; it's major git surgery and would require significant infra-team involvement any time we wanted to do it. However, reading the proposal, it occurred to me that it's pretty clear that we expect these tools to be able to operate outside of the Neutron project itself, to even be releasable on their own. Why not just stick with that?
In other words, the goal of this process should be to create separate projects with their own development lifecycle that will continue indefinitely, rather than expecting the code itself to merge into the neutron repo. This has advantages in simplifying workflow and making it more consistent. Plus it builds on known integration mechanisms like APIs and python project versions. But more importantly, it helps scale the neutron project itself. I think that a focused neutron core upon which projects like these can build in a reliable fashion would be ideal. Despite replies to you saying that certain branches of Neutron development work are special unicorns, I wanted to say I *fully* support your above statement. Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org
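Jay's point above about "random nested dicts of randomly-prefixed string key/values" versus versioned data structures can be made concrete with a toy sketch. This is purely illustrative with invented names, not Neutron's actual interfaces:

```python
# Hypothetical illustration (invented names, not Neutron's real code) of
# the difference between passing an unversioned nested dict and a
# versioned data structure across a component boundary.
from dataclasses import dataclass

# Fragile: consumers must guess keys, prefixes, and nesting from call sites.
legacy_port = {"binding:vif_type": "ovs",
               "fixed_ips": [{"ip_address": "10.0.0.5"}]}

@dataclass(frozen=True)
class PortInfo:
    """Versioned cross-component payload; consumers check VERSION explicitly."""
    vif_type: str
    fixed_ips: tuple
    VERSION = "1.0"  # class attribute, not a dataclass field

def consume(port):
    # A versioned structure lets the receiver fail loudly on a mismatch
    # instead of silently missing a renamed or re-prefixed dict key.
    if port.VERSION.split(".")[0] != "1":
        raise ValueError("unsupported PortInfo version %s" % port.VERSION)
    return port.vif_type

print(consume(PortInfo(vif_type="ovs", fixed_ips=("10.0.0.5",))))  # ovs
```

The dict form offers external consumers nothing to program against; the versioned form is the kind of contract that would let incubated or out-of-tree components track interface changes deliberately.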
[openstack-dev] [all] Gerrit Downtime on August 30, 2014
Hi, Gerrit will be unavailable starting at 1600-1630 UTC on Saturday, August 30, 2014 to rename the glance.store project to glancestore. I apologize for the late notice, however, in another thread on the -dev list, you'll find the rationale for executing this change swiftly. Thanks, Jim ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ironic] [neutron] make mac address updatable - port status for ironic servers
Right, so the question now is whether there are plans to change this. It doesn't seem desirable to be showing DOWN status when things are functional, so I'm nervous about relying on this behavior to allow mac address update to work. If future plans involved ironic providing port status, and did it in such a way that the status would not be ACTIVE when there's a mac address mismatch (for example), it would make me feel better. :) I guess I'm a worrier. I could see a user filing a bug, someone "fixing" it, and then no more mac address updates. Chuck On Aug 27, 2014, at 9:59 PM, Adam Gandelman ad...@ubuntu.com wrote: Yes, it's to be expected. IIRC, you were helping me investigate the same thing at the TripleO mid-cycle. :) The port gets the necessary configuration setup by the DHCP agent to take care of PXE, but this being baremetal there is no hypervisor on which some agent is running and plugging VIFs into vSwitch ports at the compute host level. We rely (at least in devstack) on a flat networking environment setup where baremetal nodes are already tapped into the tenant network namespace without the help of the agents. I believe devtest sets up something similar on the host that's running the devtest VMs. The end result is ports showing DOWN though they are functional. On Wed, Aug 27, 2014 at 5:06 PM, Carlino, Chuck chuck.carl...@hp.com wrote: Hi all, I'm working on bug [1] and have code in review [2]. The bug wants neutron's port['mac_address'] to be updatable to handle the case where an ironic server's nic is replaced. A comment in the review suggests that we only allow mac address update when the port status is not ACTIVE. While I've made another change which may (or may not) address the underlying concern, I want to ask if port status is correct for ironic server ports.
When I run devtest (tripleo) on my laptop, I get VM ironic servers, which when booted all have neutron ports with 'binding:vif_type': 'binding_failed' and 'status': 'DOWN'. Is this expected? Does this happen with hardware ironic servers? Thanks in advance! Chuck [1] https://bugs.launchpad.net/neutron/+bug/1341268 [2] https://review.openstack.org/112129 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
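The guard suggested in the review can be sketched in a few lines. This is a hypothetical helper, not the actual Neutron patch; it simply shows why the review suggestion depends on port status being trustworthy, which is exactly the concern raised above for ironic ports that report DOWN while functional:

```python
# Sketch of the review suggestion (hypothetical helper, not the real
# Neutron code): allow a mac_address update only when the port is not
# ACTIVE, so a live port can't be re-addressed out from under a workload.
# If ironic ports always report DOWN, this guard would never block them --
# which is both why the fix "works" today and why it is fragile.
def validate_mac_update(port, new_mac):
    """Update mac_address on a port dict, rejecting updates to ACTIVE ports."""
    if port["status"] == "ACTIVE":
        raise ValueError("mac_address of an ACTIVE port cannot be changed")
    port["mac_address"] = new_mac
    return port

port = {"status": "DOWN", "mac_address": "fa:16:3e:00:00:01"}
validate_mac_update(port, "fa:16:3e:00:00:02")
print(port["mac_address"])  # fa:16:3e:00:00:02
```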
Re: [openstack-dev] [all] Gerrit Downtime on August 30, 2014
On 08/28/2014 05:39 PM, James E. Blair wrote: Hi, Gerrit will be unavailable starting at 1600-1630 UTC on Saturday, August 30, 2014 to rename the glance.store project to glancestore. I went with glance_store. Hope that's fine! Thanks a lot for addressing this so quickly. Flavio -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters
Hi Mathieu, please see my comments below. On Wed, 2014-08-27 at 16:13 +0200, Mathieu Rohon wrote: hi irena, in the proposal of andreas you want to enforce the non-promisc mode per l2-agent? yes, kind of. We're not yet sure how to figure out the right interface for the registration. In the worst case it would be another config parameter saying "register macs on ethX". But maybe there's a better automated way. We need to have a closer look at this. so every port managed by this agent will have to be in a non-promisc state? At a first read of the mail, I understood that you wanted to manage that per port with an extension. I guess you came to that conclusion because I mentioned it only for vlan and flat networking, right? This applies at least for my use case, but I do not yet have full insight into the sriov use case of Irena. But if it's also only for vlan and flat networking, an extension might be a good option. But we need a better understanding of the problem and of the extension framework first! Thanks for your hint! By using an extension, an agent could host promisc and non-promisc net-adapters, and other MDs could potentially leverage this info (at least the LB MD). On Wed, Aug 27, 2014 at 3:45 PM, Irena Berezovsky ire...@mellanox.com wrote: Hi Mathieu, We had a short discussion with Andreas about the use case stated below and also considered the SR-IOV related use case. It seems that all required changes can be encapsulated in the L2 OVS agent, since it requires adding fdb mac registration on the adapted interface. What was your idea related to the extension manager in ML2?
Thanks, Irena -Original Message- From: Mathieu Rohon [mailto:] Sent: Wednesday, August 27, 2014 3:11 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters you probably should consider using the future extension manager in ML2 : https://review.openstack.org/#/c/89211/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron] VPNaaS pending state handling
https://bugs.launchpad.net/neutron/+bug/1355360 I'm working on this vpn vendor bug and am looking for guidance on the approach. I'm also relatively new to neutron development, so bear with some newbie gaffes :) The problem reported in this bug, in a nutshell, is that the policies in the neutron vpn db and the virtual machine implementing vpn go out of sync when the agent restarts (a restart could be either operator driven or due to a software error). The CSR vpn device driver currently doesn't do a sync when it comes up. I'm going to add that as part of this bug fix. Still, it will only partially solve the problem, as it will take care of new connections created while the agent was down (which go to the PENDING_CREATE state) and updates to existing connections, but NOT deletes. For deletes, the connection entry gets deleted right at the vpn_db level. My proposal is to introduce a PENDING_DELETE state for vpn site-to-site connections. Implementing pending_delete will involve: 1) Moving the delete operation from vpn_db into the service driver 2) Changing the reference ipsec service driver to handle the PENDING_DELETE state. For now we can just do a simple db delete to preserve the existing behavior. 3) The CSR device driver will make use of PENDING_DELETE to correctly delete the entries in the CSR device when the agent comes up. Sounds reasonable? Any thoughts? thanks, - Sridhar ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
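The proposed flow can be sketched as a tiny state machine. All names here are invented for illustration, not the actual neutron-vpnaas code; the point is that parking the row in PENDING_DELETE instead of deleting it at the db layer gives a restarted agent something to resync against:

```python
# Rough sketch of the proposal (hypothetical names, not neutron-vpnaas
# code): instead of deleting the row at the vpn_db layer, a delete request
# parks the connection in PENDING_DELETE, and the device driver completes
# the delete during its sync -- including after an agent restart.
PENDING_CREATE = "PENDING_CREATE"
PENDING_DELETE = "PENDING_DELETE"
ACTIVE = "ACTIVE"

class IpsecConnectionStore:
    """Toy stand-in for the vpn db table of site-to-site connections."""
    def __init__(self):
        self.rows = {}

    def create(self, conn_id):
        self.rows[conn_id] = PENDING_CREATE

    def request_delete(self, conn_id):
        # Step 1 of the proposal: mark instead of deleting at the db layer.
        self.rows[conn_id] = PENDING_DELETE

    def device_driver_sync(self):
        # Step 3: on agent (re)start, finish any pending deletes.
        for conn_id, status in list(self.rows.items()):
            if status == PENDING_DELETE:
                # Real driver would remove config from the device here.
                del self.rows[conn_id]

store = IpsecConnectionStore()
store.create("conn-1")
store.request_delete("conn-1")   # survives even if the agent is down
store.device_driver_sync()       # restarted agent cleans it up
print(store.rows)                # {}
```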
Re: [openstack-dev] [Spam] Re: [Openstack][TripleO] [Ironic] What if undercloud machines down, can we reboot overcloud machines?
Excerpts from Jyoti Ranjan's message of 2014-08-27 21:20:19 -0700: I do agree, but it creates an extra requirement for the undercloud if high availability is an important criterion. Because of this, the undercloud has to be there 24x7, 365 days a year, and to make it available we need HA for it as well. So, you indirectly mean that the undercloud should also be designed with high availability in mind. I'm worried that you may be overstating the needs of a typical cloud. The undercloud needs to be able to reach a state of availability when you need to boot boxes. Even if you are doing CD and _constantly_ rebooting boxes, you can take your undercloud down for an hour, as long as it can be brought back up for emergencies. However, Ironic has already been designed this way. I believe that Ironic has a nice dynamic hash ring of server ownership, and if you mark a conductor down, the other conductors will assume ownership of the machines that it was holding. So the path to making this HA is basically add one more undercloud server. Ironic experts, please tell me this is true, and not just something I inserted into my own distorted version of reality to help me sleep at night. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
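The hash-ring idea described above can be shown with a toy consistent-hash sketch. This is illustrative only: Ironic's real implementation (in ironic.common.hash_ring) differs in detail, and the conductor/node names are invented. The property being demonstrated is that when a conductor is marked down, its nodes deterministically fall to the surviving conductors:

```python
# Toy consistent hash ring illustrating conductor/node ownership
# (illustrative only; not Ironic's actual hash ring implementation).
import bisect
import hashlib

def _key(value):
    """Map a string to a point on the ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, conductors, replicas=64):
        # Each conductor gets many virtual points for even distribution.
        self.ring = sorted(
            (_key("%s-%d" % (c, i)), c)
            for c in conductors for i in range(replicas)
        )
        self.keys = [k for k, _ in self.ring]

    def owner(self, node):
        """Return the conductor owning this node: first point clockwise."""
        idx = bisect.bisect(self.keys, _key(node)) % len(self.ring)
        return self.ring[idx][1]

full = HashRing(["cond1", "cond2", "cond3"])
survivors = HashRing(["cond1", "cond2"])  # cond3 marked down
# Nodes owned by cond1/cond2 keep their owner; cond3's nodes are reassigned.
for node in ("node-1", "node-2", "node-3"):
    print(node, full.owner(node), "->", survivors.owner(node))
```

Adding "one more undercloud server" in this model is just adding another conductor to the ring, which takes over a share of the virtual points.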
Re: [openstack-dev] [all] Gerrit Downtime on August 30, 2014
Flavio Percoco fla...@redhat.com writes: On 08/28/2014 05:39 PM, James E. Blair wrote: Hi, Gerrit will be unavailable starting at 1600-1630 UTC on Saturday, August 30, 2014 to rename the glance.store project to glancestore. I went with glance_store Hope that's fine! Even better! Thanks a lot for addressing this so quickly. No problem. -Jim ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
On Aug 28, 2014, at 10:45 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/27/2014 04:28 PM, Kevin Benton wrote: What are you talking about? The only reply was from me clarifying that one of the purposes of the incubator was for components of neutron that are experimental but are intended to be merged. Right. The special unicorns. In that case it might not make sense to have a life cycle of their own in another repo indefinitely. The main reasons these experimental components don't make sense to live in their own repo indefinitely are: a) Neutron's design doesn't make it easy or straightforward to build/layer other things on top of it, or: Correct, and this is something I want the team to address in Kilo. Many of the L7 services would be easier to build if we invest some time early in the cycle in establishing a well-defined interface for a few items. I'm sure the LBaaS team has good feedback to share with everyone. b) The experimental piece of code intends to replace whole-hog a large chunk of Neutron's existing codebase, or: c) The experimental piece of code relies so heavily on inconsistent, unversioned internal interface and plugin calls that it cannot be designed externally due to the fragility of those interfaces I'm glad Jim reminded us of the pain of merging histories and the availability of feature branches. I think for items where we're replacing large chunks of code, feature branches make lots of sense. Fixing a) is the solution to these problems. An incubator area where experimental components can live will just continue to mask the true problem domain, which is that Neutron's design is cumbersome to build on top of, and its cross-component interfaces need to be versioned, made consistent, and cleaned up to use versioned data structures instead of passing random nested dicts of randomly-prefixed string key/values. Frankly, we're going through a similar problem in Nova right now.
There is a group of folks who believe that separating the nova-scheduler code into the Gantt project will magically make placement decision code and solver components *easier* to work on (because the pace of coding can be increased if there wasn't that pesky nova-core review process). But this is not correct, IMO. Separating out the scheduler into its own project before internal interfaces and data structures are cleaned up and versioned will just lead to greater technical debt and an increase in frustration on the part of Nova developers and scheduler developers alike. Right, hopefully the incubator will allow us to develop components that should be independent from the start without incurring too much debt. -jay On Wed, Aug 27, 2014 at 11:52 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/26/2014 07:09 PM, James E. Blair wrote: Hi, After reading https://wiki.openstack.org/wiki/Network/Incubator I have some thoughts about the proposed workflow. We have quite a bit of experience and some good tools around splitting code out of projects and into new projects. But we don't generally do a lot of importing code into projects. We've done this once, to my recollection, in a way that preserved history, and that was with the switch to keystone-lite. It wasn't easy; it's major git surgery and would require significant infra-team involvement any time we wanted to do it. However, reading the proposal, it occurred to me that it's pretty clear that we expect these tools to be able to operate outside of the Neutron project itself, to even be releasable on their own. Why not just stick with that? In other words, the goal of this process should be to create separate projects with their own development lifecycle that will continue indefinitely, rather than expecting the code itself to merge into the neutron repo. This has advantages in simplifying workflow and making it more consistent.
Plus it builds on known integration mechanisms like APIs and python project versions. But more importantly, it helps scale the neutron project itself. I think that a focused neutron core upon which projects like these can build in a reliable fashion would be ideal. Despite replies to you saying that certain branches of Neutron development work are special unicorns, I wanted to say I *fully* support your above statement. Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them
On Aug 28, 2014, at 6:41 AM, Radomir Dopieralski openst...@sheep.art.pl wrote: On 27/08/14 16:31, Sean Dague wrote: [snip] In python 2.7 (using pip) namespaces are a bolt on because of the way importing modules works. And depending on how you install things in a namespace will overwrite the base __init__.py for the top level part of the namespace in such a way that you can't get access to the submodules. It's well known, and every conversation with dstuft that I've had in the past was don't use namespaces. I think this is actually a solved problem. You just need a single line in your __init__.py files: https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py The problem is that the setuptools implementation of namespace packages breaks in a way that is repeatable but difficult to debug when a common OpenStack installation pattern is used. So the fix is “don’t do that” where I thought “that” meant the installation pattern and Sean thought it meant “use namespace packages”. :-) Doug ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
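The "single line in your __init__.py" and the failure mode it addresses can be demonstrated in a self-contained script. This is a runnable sketch of the pkgutil-style variant of that one-liner (all package and module names are invented for the demo); the setuptools declare_namespace variant behaves differently under editable (`pip install -e`) installs, which is the breakage described earlier in the thread:

```python
# Demo of pkgutil-style namespace packages: two separate "install
# locations" both provide package "ns", and with the extend_path
# one-liner in each copy of ns/__init__.py, submodules from both
# locations can be imported. All names here are invented for the demo.
import sys
import tempfile
from pathlib import Path

INIT = "__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n"

root = Path(tempfile.mkdtemp())
for site, mod in (("site_a", "alpha"), ("site_b", "beta")):
    pkg = root / site / "ns"
    pkg.mkdir(parents=True)
    (pkg / "__init__.py").write_text(INIT)            # the "single line"
    (pkg / (mod + ".py")).write_text("VALUE = %r\n" % mod)
    sys.path.insert(0, str(root / site))

from ns import alpha, beta  # both halves of the namespace resolve

print(alpha.VALUE, beta.VALUE)  # alpha beta
```

Without the extend_path line, whichever copy of ns/__init__.py is found first on sys.path wins and the other location's submodules become unimportable, which is essentially the "No module named olso.vmware"-style surprise Sean describes.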
Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths
Hi, In case this is helpful for you, this is the patch i submitted to Spark about Swift and Spark integration ( about to be merged ) https://github.com/apache/spark/pull/1010 I sent information about this patch to this mailing list about two months ago. All the best, Gil. From: Trevor McKay tmc...@redhat.com To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 28/08/2014 06:22 PM Subject:[openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths Hi folks, I've updated this etherpad with notes from an investigation of Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra repo. Following the notes there, I was able to access swift:// paths from Spark jobs on a Spark standalone cluster launched from Sahara and then fixed up by hand. Comments welcome. This is a POC at this point imho, we have work to do to fully integrate this into Sahara. https://etherpad.openstack.org/p/sahara_spark_edp Best, Trevor ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them
On 08/28/2014 12:22 PM, Doug Hellmann wrote: On Aug 28, 2014, at 6:41 AM, Radomir Dopieralski openst...@sheep.art.pl wrote: On 27/08/14 16:31, Sean Dague wrote: [snip] In python 2.7 (using pip) namespaces are a bolt on because of the way importing modules works. And depending on how you install things in a namespace will overwrite the base __init__.py for the top level part of the namespace in such a way that you can't get access to the submodules. It's well known, and every conversation with dstuft that I've had in the past was don't use namespaces. I think this is actually a solved problem. You just need a single line in your __init__.py files: https://bitbucket.org/thomaswaldmann/xstatic-jquery/src/tip/xstatic/__init__.py The problem is that the setuptools implementation of namespace packages breaks in a way that is repeatable but difficult to debug when a common OpenStack installation pattern is used. So the fix is “don’t do that” where I thought “that” meant the installation pattern and Sean thought it meant “use namespace packages”. :-) Stupid english... be more specific! Yeh, Doug provides the most concise statement of where we failed on communication (I take a big chunk of that blame). Hopefully now it's a lot clearer what's going on, and why it hurts if you do it. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] gate debugging
On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down "Write it down." is my theme for Kilo. I definitely get the sentiment. Writing it down is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information sometimes, as it takes people down very wrong paths. I think we break down on communication when we get into a conversation of "I want to learn gate debugging", because I don't quite know what that means, or where the starting point of understanding is. So those intentions are well meaning, but tend to stall. The reality was there was no road map for those of us that dive in, it's just understanding how OpenStack holds together as a whole and where some of the high risk parts are.
And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller, more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the "I want to learn" at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don't know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I'm looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there's not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don't have to keep adding disclaimers. Sure.
Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I've found it hard, and not just me complaining. I'd like to work on fixing any of these things that can be fixed, by writing or reviewing patches early in kilo. Here is my off the cuff set of guidelines: #1 - is it a test failure or a setup failure This should be pretty easy to figure out. Test failures come at the end of the console log and say that tests failed (after you see a bunch of passing tempest tests). Always start at *the end* of files and work backwards. That's interesting because in my case I saw a lot of failures after the initial "real" problem. So I usually read the logs like C compiler output: Assume the first error is real, and the rest is likely fallout from it.
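The two heuristics in this exchange, work backwards from the end of the file, and treat the first error as real and later failures as fallout, can be sketched as a trivial log scanner. Patterns, file shape, and sample lines are illustrative, not the actual gate log format:

```python
# Toy scanner for the two log-reading heuristics discussed above:
# the first match is the "first error is real" candidate, the last
# match is where "start at the end and work backwards" begins.
import re

ERROR = re.compile(r"\b(ERROR|CRITICAL|Traceback|FAILED)\b")

def first_and_last_errors(lines):
    """Return ((line_no, text), (line_no, text)) for the first and last
    error-looking lines, or None if the log is clean."""
    hits = [(n, line.rstrip()) for n, line in enumerate(lines, 1)
            if ERROR.search(line)]
    return (hits[0], hits[-1]) if hits else None

log = [
    "INFO starting tempest run",
    "ERROR neutron port create failed",        # likely the root cause
    "FAILED tempest.api.network.test_ports",   # fallout from the above
]
print(first_and_last_errors(log))
```

In practice the hard part is the ERROR pattern itself: gate logs contain expected errors and red herrings, which is exactly the pattern knowledge the thread is trying to capture.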
Re: [openstack-dev] [nova] [neutron] Specs for K release
On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange berra...@redhat.com wrote: On Thu, Aug 28, 2014 at 11:51:32AM +0000, Alan Kavanagh wrote: How do we handle specs that have slipped through the cracks and did not make it for Juno? Rebase the proposal so it is under the 'kilo' directory path instead of 'juno' and submit it for review again. Make sure to keep the Change-Id line intact so people see the history of any review comments in the earlier Juno proposal. Yes, but... I think we should talk about tweaking the structure of the juno directory. Something like having proposed, approved, and implemented directories. That would provide better signalling to operators about what we actually did, what we thought we'd do, and what we didn't do. I worry that gerrit is a terrible place to archive the things which were proposed but not approved. If someone else wants to pick something up later, it's super hard for them to find. Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
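The re-proposal workflow Daniel describes boils down to a rename plus a commit that preserves the Change-Id footer. A hypothetical walk-through (the spec name, paths, and Change-Id are invented, and a throwaway repo stands in for a real specs repository):

```shell
# Hypothetical example of re-proposing a Juno spec for Kilo: move the
# file into the kilo directory and keep the Change-Id footer intact so
# Gerrit links the new review to the earlier one's comment history.
set -e
repo=$(mktemp -d)
cd "$repo" && git init -q .
git config user.email dev@example.com && git config user.name Dev

# Stand-in for the spec originally proposed for Juno.
mkdir -p specs/juno
echo "My feature" > specs/juno/my-feature.rst
git add . && git commit -qm "Propose my-feature for Juno"

# Rebase it under the kilo directory and re-propose.
mkdir -p specs/kilo
git mv specs/juno/my-feature.rst specs/kilo/my-feature.rst
git commit -qm "Re-propose my-feature for Kilo

Change-Id: I0123456789abcdef0123456789abcdef01234567"

git log -1 --format=%B | grep Change-Id
```

Normally the commit-msg hook supplies the Change-Id; the point here is simply not to strip it when re-proposing, so Gerrit treats the Kilo submission as a new patch set of the same change history.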
Re: [openstack-dev] [all] gate debugging
On Thu, Aug 28, 2014 at 11:48 AM, Doug Hellmann d...@doughellmann.com wrote: In my case, a neutron call failed. Most of the other services seem to have a *-api.log file, but neutron doesn't. It took a little while to find the API-related messages in screen-q-svc.txt (I'm glad I've been around long enough to know it used to be called "quantum"). I get that screen-n-*.txt would collide with nova. Is it necessary to abbreviate those filenames at all? Cleaning up the service names has been a background conversation for some time and came up again last night in IRC. I abbreviated them in the first place to try to get them all in my screen status bar, so that was a while ago... I don't think the current ENABLED_SERVICES is scaling well, and using full names (nova-api, glance-registry, etc) will make it even harder to read. Maybe that is a misplaced concern? I do think though that making the logfile names and locations more obvious in the gate results will be helpful. I've started scratching out a plan to migrate to full names and will get it into an Etherpad soon. Also simplifying the log file configuration vars and locations. dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
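For readers hitting the same "which file is neutron?" problem, a partial decoder of the abbreviated screen names is easy to keep at hand. This table is illustrative and incomplete; the authoritative mapping lives in devstack's service definitions:

```python
# Partial, illustrative map of devstack's abbreviated screen session
# names to the services whose logs they hold. The real, complete
# mapping is defined by devstack itself.
SCREEN_NAMES = {
    "n-api": "nova-api",
    "n-cpu": "nova-compute",
    "q-svc": "neutron-server",   # "q" for quantum, neutron's old name
    "q-agt": "neutron L2 agent",
    "g-api": "glance-api",
    "g-reg": "glance-registry",
    "c-api": "cinder-api",
}

def logfile_for(abbrev):
    """Return the screen log filename and a human-readable service name."""
    return "screen-%s.txt" % abbrev, SCREEN_NAMES.get(abbrev, "unknown")

print(logfile_for("q-svc"))  # ('screen-q-svc.txt', 'neutron-server')
```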
Re: [openstack-dev] [Spam] Re: [Openstack][TripleO] [Ironic] What if undercloud machines down, can we reboot overcloud machines?
On August 28, 2014 8:58:11 AM PDT, Clint Byrum cl...@fewbar.com wrote: Excerpts from Jyoti Ranjan's message of 2014-08-27 21:20:19 -0700: I do agree but it create an extra requirement for Undercloud if we high availability is important criteria. Because of this, undercloud has to be there 24x7, 365 days and to make it available we need to have HA for this also. So, you indirectly mean that undercloud also should be designed keeping high availability in mind. I'm worried that you may be overstating the needs of a typical cloud. The undercloud needs to be able to reach a state of availability when you need to boot boxes. Even if you are doing CD and _constantly_ rebooting boxes, you can take your undercloud down for an hour, as long as it can be brought back up for emergencies. However, Ironic has already been designed this way. I believe that Ironic has a nice dynamic hash ring of server ownership, and if you mark a conductor down, the other conductors will assume ownership of the machines that it was holding. So the path to making this HA is basically add one more undercloud server. Ironic experts, please tell me this is true, and not just something I inserted into my own distorted version of reality to help me sleep at night. This is correct, HA is achieved in Ironic by having multiple conductors and API servers. It isn't perfect today, but Greg Haynes is working on some of this and it is planned to land in Juno. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev // jim ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths
Gil, thanks! I'll take a look. Trevor

On Thu, 2014-08-28 at 19:31 +0300, Gil Vernik wrote:

Hi, In case this is helpful for you, this is the patch I submitted to Spark about Swift and Spark integration (about to be merged): https://github.com/apache/spark/pull/1010 I sent information about this patch to this mailing list about two months ago. All the best, Gil.

From: Trevor McKay tmc...@redhat.com To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 28/08/2014 06:22 PM Subject: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths

Hi folks, I've updated this etherpad with notes from an investigation of Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra repo. Following the notes there, I was able to access swift:// paths from Spark jobs on a Spark standalone cluster launched from Sahara and then fixed up by hand. Comments welcome. This is a POC at this point imho, we have work to do to fully integrate this into Sahara. https://etherpad.openstack.org/p/sahara_spark_edp Best, Trevor
Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths
hi Gil, that's cool about the patch to Spark, has there been any talk about upgrading that patch to include Keystone v3 operations?

- Original Message - Hi, In case this is helpful for you, this is the patch I submitted to Spark about Swift and Spark integration (about to be merged): https://github.com/apache/spark/pull/1010 I sent information about this patch to this mailing list about two months ago. All the best, Gil. From: Trevor McKay tmc...@redhat.com To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 28/08/2014 06:22 PM Subject: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths Hi folks, I've updated this etherpad with notes from an investigation of Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra repo. Following the notes there, I was able to access swift:// paths from Spark jobs on a Spark standalone cluster launched from Sahara and then fixed up by hand. Comments welcome. This is a POC at this point imho, we have work to do to fully integrate this into Sahara. https://etherpad.openstack.org/p/sahara_spark_edp Best, Trevor
Re: [openstack-dev] [nova] [neutron] Specs for K release
On 08/28/2014 12:50 PM, Michael Still wrote: On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange berra...@redhat.com wrote: On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote:

How do we handle specs that have slipped through the cracks and did not make it for Juno?

Rebase the proposal so it is under the 'kilo' directory path instead of 'juno' and submit it for review again. Make sure to keep the Change-Id line intact so people see the history of any review comments in the earlier Juno proposal.

Yes, but... I think we should talk about tweaking the structure of the juno directory. Something like having proposed, approved, and implemented directories. That would provide better signalling to operators about what we actually did, what we thought we'd do, and what we didn't do.

I think this would be really useful. -jay
Re: [openstack-dev] [all] gate debugging
On 08/28/2014 12:48 PM, Doug Hellmann wrote: On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down Write it down.” is my theme for Kilo. I definitely get the sentiment. Write it down is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information some times, as it takes people down very wrong paths. I think we break down on communication when we get into a conversation of I want to learn gate debugging because I don't quite know what that means, or where the starting point of understanding is. So those intentions are well meaning, but tend to stall. The reality was there was no road map for those of us that dive in, it's just understanding how OpenStack holds together as a whole and where some of the high risk parts are. 
And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the I want to learn at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don’t know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I’m looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there’s not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. 
Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I’ve found it hard, and not just me complaining. I’d like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo. Here is my off the cuff set of guidelines: #1 - is it a test failure or a setup failure This should be pretty easy to figure out. Test failures come at the end of console log and say that tests failed (after you see a bunch of passing tempest tests). Always start at *the end* of files and work backwards. That’s interesting because in my case I saw a lot of failures after the initial “real” problem. So I usually read the logs like C compiler
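Sean's rule of thumb above — always start at *the end* of the console log and work backwards — can be expressed as a tiny script. This is a hedged sketch: the failure-marker strings here are illustrative assumptions, not what the gate jobs actually emit, and the function is hypothetical:

```python
# Sketch of "start at the end and work backwards": find the failure
# closest to the end of a console log, rather than trusting the first
# error seen reading top-down (which is often a red herring).
FAILURE_MARKERS = ("FAILED", "Traceback", "ERROR")  # illustrative markers


def last_failure(log_lines):
    """Return (1-based line number, line) of the failure nearest the end."""
    for offset, line in enumerate(reversed(log_lines)):
        if any(marker in line for marker in FAILURE_MARKERS):
            return len(log_lines) - offset, line
    return None


log = [
    "tempest.api.compute.test_servers ... ok",
    "ERROR: earlier red herring",
    "tempest.api.network.test_ports ... ok",
    "FAILED (failures=1)",
]
print(last_failure(log))  # (4, 'FAILED (failures=1)')
```

Reading backwards surfaces the `FAILED (failures=1)` summary first, and only then the earlier `ERROR` line, which is the order Sean suggests investigating them in.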
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
I have another question about the incubator proposal, for CLI and GUI. Do we imply that the incubator feature will need to branch python-neutronclient, Horizon, and/or Nova (if changes are needed)?

On Tue, Aug 26, 2014 at 7:09 PM, James E. Blair cor...@inaugust.com wrote:

Hi, After reading https://wiki.openstack.org/wiki/Network/Incubator I have some thoughts about the proposed workflow. We have quite a bit of experience and some good tools around splitting code out of projects and into new projects. But we don't generally do a lot of importing code into projects. We've done this once, to my recollection, in a way that preserved history, and that was with the switch to keystone-lite. It wasn't easy; it's major git surgery and would require significant infra-team involvement any time we wanted to do it.

However, reading the proposal, it occurred to me that it's pretty clear that we expect these tools to be able to operate outside of the Neutron project itself, to even be releasable on their own. Why not just stick with that? In other words, the goal of this process should be to create separate projects with their own development lifecycle that will continue indefinitely, rather than expecting the code itself to merge into the neutron repo. This has advantages in simplifying workflow and making it more consistent. Plus it builds on known integration mechanisms like APIs and python project versions. But more importantly, it helps scale the neutron project itself. I think that a focused neutron core upon which projects like these can build in a reliable fashion would be ideal. -Jim
Re: [openstack-dev] [nova] [neutron] Specs for K release
+1 I agree that this is a good idea. Regards, Mandeep

On Thu, Aug 28, 2014 at 10:13 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/28/2014 12:50 PM, Michael Still wrote: On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange berra...@redhat.com wrote: On Thu, Aug 28, 2014 at 11:51:32AM +, Alan Kavanagh wrote: How do we handle specs that have slipped through the cracks and did not make it for Juno? Rebase the proposal so it is under the 'kilo' directory path instead of 'juno' and submit it for review again. Make sure to keep the Change-Id line intact so people see the history of any review comments in the earlier Juno proposal. Yes, but... I think we should talk about tweaking the structure of the juno directory. Something like having proposed, approved, and implemented directories. That would provide better signalling to operators about what we actually did, what we thought we'd do, and what we didn't do. I think this would be really useful. -jay
Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths
Hi Michael, I have an update to this patch with temp auth authentication also, but it's not yet submitted. I am not aware of v3 support. All the best, Gil. From: Michael McCune mimcc...@redhat.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 28/08/2014 08:14 PM Subject:Re: [openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths hi Gil, that's cool about the patch to Spark, has there been any talk about upgrading that patch to include Keystone v3 operations? - Original Message - Hi, In case this is helpful for you, this is the patch i submitted to Spark about Swift and Spark integration ( about to be merged ) https://github.com/apache/spark/pull/1010 I sent information about this patch to this mailing list about two months ago. All the best, Gil. From: Trevor McKay tmc...@redhat.com To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: 28/08/2014 06:22 PM Subject:[openstack-dev] [sahara] Notes on developing Sahara Spark EDP to work with swift:// paths Hi folks, I've updated this etherpad with notes from an investigation of Spark/Swift and the hadoop-openstack plugin carried in the sahara-extra repo. Following the notes there, I was able to access swift:// paths from Spark jobs on a Spark standalone cluster launched from Sahara and then fixed up by hand. Comments welcome. This is a POC at this point imho, we have work to do to fully integrate this into Sahara. 
https://etherpad.openstack.org/p/sahara_spark_edp Best, Trevor
Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more
Timur,

Composable entities can be a real need for Heat if provider templates (which allow templates to be used as a resource, with a template’s parameters and outputs becoming properties and attributes, respectively) are to be included in the app. A provider template resource, since it is a template itself, would be composed of resources, which would require a composable entity.

What is great about D3’s force graph is that its nodes and links can be completely arbitrary - meaning they can be any JavaScript object (including an SVG or DOM element). Additionally, the force graph simulation updates x and y properties on those elements and calls a user-defined “tick” function. The tick function can use the x and y properties in any way it wants to do the *actual* update to the position of each element. For example, this is how multiple foci can be implemented [1]. Lots of other customization is available, including starting and stopping the simulation, updating the node and link data, and having per-element control of most (all?) properties such as charge or link distance.

Composability can be achieved using SVG’s g elements to group multiple graphical elements together. The tick function would need to update the g’s transform attribute [2]. This is how it is done in my app, since my nodes and links are composed of icons, labels, backgrounds, etc. I think that D3’s force graph is not a limiting factor, since it does not concern itself with graphics at all. Therefore, the question seems to be whether D3 can do everything graphically that Merlin needs. D3 is not a graphics API, but it does have support for graphical manipulation, animations, and events. They have sufficed for me so far. Plus, D3 can do these things without having to use its fancy data transformations, so it can be used as a low-level SVG library where necessary. D3 can do a lot [3], so hopefully it could also do what Merlin needs.

You are in luck, because I have just now open-sourced Barricade!
Check it out [4]. I am working on getting documentation written for it, but to see some ways it can be used, look at its test suite [5].

[1] http://bl.ocks.org/mbostock/1021953
[2] node.attr("transform", function (d) { return "translate(" + d.x + "," + d.y + ")"; });
[3] http://christopheviau.com/d3list/
[4] https://github.com/rackerlabs/barricade
[5] https://github.com/rackerlabs/barricade/blob/master/test/barricade_Spec.js

On 8/28/14, 10:03 AM, Timur Sufiev tsuf...@mirantis.com wrote:

Hello, Drago! I'm extremely interested in learning more about your HOT graphical builder. The screenshots you had attached look gorgeous! Your visual representation of Heat resources is much more concise and simple than I had drawn in Merlin PoC mock-ups [1]. On the other hand, I have some doubts about whether D3.js is a good fit for the general purpose UI toolkit Merlin aims to provide. Please don't get me wrong, D3.js is a great library which can do fantastic things with data - in case your data-visualization use-case maps to one of the facilities D3.js provides out of the box. In case it doesn't, there are 2 options: either change your approach to what should be visualized/how it should be visualized, or tweak some inner machinery of D3.js. While bending the design towards the facilities of D3.js doesn't seem a viable choice, changing D3.js from inside can be painful too. AFAIK the force-directed graph layout from D3.js doesn't provide the means to represent composable entities (which isn't a big problem for Heat, but is a very serious issue for Murano) out of the box. By composable I mean something like [2] - but with much more complex inner structure (imagine the Resource entity [3] having as its properties other Resource entities which are shown as simple rounded rectangles with labels on that picture, but are expanded into complex objects similar to [3] once the user, say, clicks on them).
As far as I understand, you are visualizing that kind of composition via arrow links, but I'd like to try another design options (especially in case of Murano) and fear that D3.js will constrain me here. I've been thinking a bit about using more low-level SVG js-framework like Raphael.js - it doesn't offer most of the goodies D3.js does, but also it doesn't force me to create the design based on some data transformations in a way that D3.js does, providing the good old procedural API instead. Of course, I may be wrong, perhaps more time and efforts invested into Merlin PoC would allow me to realize it (or not). Yet you are totally right having stressed the importance of right tool for implementing the underlying object model (or JSON-wrapper as you called it) - Barricade.js. That's the second big part of work Merlin had to do, and I couldn't underestimate how it would be beneficial for Merlin to leverage some of the facilities that Barricade.js provides. I'll gladly look at the demo of template builder and Barricade. Is there any chance I could take a look also at the
Re: [openstack-dev] [all] gate debugging
On Aug 28, 2014, at 1:00 PM, Dean Troyer dtro...@gmail.com wrote: On Thu, Aug 28, 2014 at 11:48 AM, Doug Hellmann d...@doughellmann.com wrote: In my case, a neutron call failed. Most of the other services seem to have a *-api.log file, but neutron doesn’t. It took a little while to find the API-related messages in screen-q-svc.txt (I’m glad I’ve been around long enough to know it used to be called “quantum”). I get that screen-n-*.txt would collide with nova. Is it necessary to abbreviate those filenames at all?

Cleaning up the service names has been a background conversation for some time and came up again last night in IRC. I abbreviated them in the first place to try to get them all in my screen status bar, so that was a while ago... I don't think the current ENABLED_SERVICES is scaling well, and using full names (nova-api, glance-registry, etc) will make it even harder to read. Maybe that is a misplaced concern? I do think though that making the logfile names and locations more obvious in the gate results will be helpful.

I usually use the functions for editing ENABLED_SERVICES. Is it still common to edit the variable directly?

I've started scratching out a plan to migrate to full names and will get it into an Etherpad soon. Also simplifying the log file configuration vars and locations.

Cool. Let us know if we can make any changes in oslo.log to simplify that work. Doug

dt -- Dean Troyer dtro...@gmail.com
Re: [openstack-dev] [all] gate debugging
On Aug 28, 2014, at 1:17 PM, Sean Dague s...@dague.net wrote: On 08/28/2014 12:48 PM, Doug Hellmann wrote: On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down Write it down.” is my theme for Kilo. I definitely get the sentiment. Write it down is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information some times, as it takes people down very wrong paths. I think we break down on communication when we get into a conversation of I want to learn gate debugging because I don't quite know what that means, or where the starting point of understanding is. So those intentions are well meaning, but tend to stall. 
The reality was there was no road map for those of us that dive in, it's just understanding how OpenStack holds together as a whole and where some of the high risk parts are. And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the I want to learn at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don’t know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I’m looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there’s not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. 
The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I’ve found it hard, and not just me complaining. I’d like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo. Here is my off the cuff set of guidelines: #1 - is it a test failure or a setup failure This should be pretty easy to figure out. Test failures come at the end of console log and say that tests failed (after you see a bunch of passing tempest tests). Always start at *the end* of files and work backwards. That’s interesting because in my case I saw a lot of failures
Re: [openstack-dev] [all] Design Summit reloaded
On 08/27/2014 11:34 AM, Doug Hellmann wrote: On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, I've been thinking about what changes we can bring to the Design Summit format to make it more productive. I've heard the feedback from the mid-cycle meetups and would like to apply some of those ideas for Paris, within the constraints we have (already booked space and time). Here is something we could do: Day 1. Cross-project sessions / incubated projects / other projects I think that worked well last time. 3 parallel rooms where we can address top cross-project questions, discuss the results of the various experiments we conducted during juno. Don't hesitate to schedule 2 slots for discussions, so that we have time to come to the bottom of those issues. Incubated projects (and maybe other projects, if space allows) occupy the remaining space on day 1, and could occupy pods on the other days. If anything, I’d like to have fewer cross-project tracks running simultaneously. Depending on which are proposed, maybe we can make that happen. On the other hand, cross-project issues is a big theme right now so maybe we should consider devoting more than a day to dealing with them. I agree with Doug here. I'd almost say having a single cross-project room, with serialized content would be better than 3 separate cross-project tracks. By nature, the cross-project sessions will attract developers that work or are interested in a set of projects that looks like a big Venn diagram. By having 3 separate cross-project tracks, we would increase the likelihood that developers would once more have to choose among simultaneous sessions that they have equal interest in. For Infra and QA folks, this likelihood is even greater... I think I'd prefer a single cross-project track on the first day. Day 2 and Day 3. Scheduled sessions for various programs That's our traditional scheduled space. We'll have a 33% less slots available. 
So, rather than trying to cover all the scope, the idea would be to focus those sessions on specific issues which really require face-to-face discussion (which can't be solved on the ML or using spec discussion) *or* require a lot of user feedback. That way, appearing in the general schedule is very helpful. This will require us to be a lot stricter on what we accept there and what we don't -- we won't have space for courtesy sessions anymore, and traditional/unnecessary sessions (like my traditional release schedule one) should just move to the mailing-list. The message I’m getting from this change in available space is that we need to start thinking about and writing up ideas early, so teams can figure out which upcoming specs need more discussion and which don’t. ++ Also, I think as a community we need to get much better about saying No for certain things. No to sessions that don't have much specific details to them. No to blueprints that don't add much functionality that cannot be widely used or taken advantage of. No to specs that don't have a narrow-enough scope, etc. I also think we need to be better at saying Yes to other things, though... but that's a different thread ;) Day 4. Contributors meetups On the last day, we could try to split the space so that we can conduct parallel midcycle-meetup-like contributors gatherings, with no time boundaries and an open agenda. Large projects could get a full day, smaller projects would get half a day (but could continue the discussion in a local bar). Ideally that meetup would end with some alignment on release goals, but the idea is to make the best of that time together to solve the issues you have. Friday would finish with the design summit feedback session, for those who are still around. This is a good compromise between needing to allow folks to move around between tracks (including speaking at the conference) and having a large block of unstructured time for deep dives. Agreed. 
Best, -jay

I think this proposal makes the best use of our setup: discuss clear cross-project issues, address key specific topics which need face-to-face time and broader attendance, then try to replicate the success of midcycle meetup-like open unscheduled time to discuss whatever is hot at this point. There are still details to work out (is it possible to split the space, should we use the usual design summit CFP website to organize the scheduled time...), but I would first like to have your feedback on this format. Also if you have alternative proposals that would make a better use of our 4 days, let me know. Cheers, -- Thierry Carrez (ttx)
Re: [openstack-dev] [all] gate debugging
On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague s...@dague.net wrote: On 08/28/2014 12:48 PM, Doug Hellmann wrote: On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down Write it down.” is my theme for Kilo. I definitely get the sentiment. Write it down is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information some times, as it takes people down very wrong paths. I think we break down on communication when we get into a conversation of I want to learn gate debugging because I don't quite know what that means, or where the starting point of understanding is. So those intentions are well meaning, but tend to stall. 
The reality was there was no road map for those of us that dive in, it's just understanding how OpenStack holds together as a whole and where some of the high risk parts are. And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the I want to learn at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don’t know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I’m looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there’s not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. 
The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I’ve found it hard, and not just me complaining. I’d like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo. Here is my off the cuff set of guidelines: #1 - is it a test failure or a setup failure This should be pretty easy to figure out. Test failures come at the end of console log and say that tests failed (after you see a bunch of passing tempest tests). Always start at *the end* of files and work backwards. That’s
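Sean's first guideline — start at the end of the console log and work backwards to tell a test failure from a setup failure — can be expressed mechanically. A toy sketch in Python; the marker strings below are illustrative guesses, not the exact text the gate jobs emit:

```python
def classify_failure(console_log_lines):
    """Scan a console log from the end to guess test vs. setup failure.

    console_log_lines: list of strings, in file order. Marker strings are
    hypothetical; real jobs' output would need its own markers.
    """
    for line in reversed(console_log_lines):
        # Tempest reporting failed tests at the end of the log suggests
        # setup completed and a test genuinely failed.
        if "FAILED" in line and "tempest" in line:
            return "test failure"
        # An error from stack.sh means DevStack never finished setting up.
        if "stack.sh failed" in line or "ERROR" in line:
            return "setup failure"
    return "unknown"
```

This is only the triage step; the point of the guideline is that the *last* relevant line, not the first, tells you which class of problem you are chasing.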
[openstack-dev] [CEILOMETER] Trending Alarm
Hello, everyone! I want to have an alarm that is triggered by some kind of trend. For example, an alarm that is triggered when the CPU utilization is growing steadily (for example, has grown approximately 10% per 5 minutes, where the percentage and time window would be parameters, but then I would also evaluate more complex forms to compute trends). Is there any way to do this kind of task? I took a brief look at the code and saw that new evaluators can be created. So, I thought about two possibilities: the former involves creating a new Evaluator that considers a given window size, and the latter involves adding a change rate comparator, which would enable setting the growth rate as the threshold. What do you think about it? Best Regards -- Henrique Truta ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
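A rough sketch of the change-rate comparator idea described above: estimate the growth rate over a window of samples and compare it with a configurable rate threshold. All names here are illustrative assumptions, not Ceilometer's evaluator API:

```python
def growth_rate(samples):
    """Percentage-point change per minute across the sample window.

    samples: list of (timestamp_seconds, cpu_util_percent), oldest first.
    """
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    minutes = (t1 - t0) / 60.0
    return (v1 - v0) / minutes if minutes else 0.0

def trend_alarm(samples, growth_threshold=2.0):
    """Trigger when utilization grows at least growth_threshold points/min.

    growth_threshold=2.0 matches the example of roughly 10% growth
    per 5 minutes (10 points / 5 minutes = 2 points per minute).
    """
    return growth_rate(samples) >= growth_threshold

# 50% -> 60% over 5 minutes: rate is 2.0 points/min, so the alarm fires.
rising = [(0, 50.0), (150, 55.0), (300, 60.0)]
flat = [(0, 50.0), (300, 51.0)]
```

A real evaluator would also want smoothing (e.g. a least-squares slope rather than endpoints) to avoid triggering on a single noisy sample, which is where the "more complex forms to compute trends" would come in.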
Re: [openstack-dev] [all] gate debugging
On Thu, Aug 28, 2014 at 12:44 PM, Doug Hellmann d...@doughellmann.com wrote: I usually use the functions for editing ENABLED_SERVICES. Is it still common to edit the variable directly? Not generally. It was looking at it in log files to see what was/was not enabled that started me thinking about this. The default is already pretty long, however having full words might make the scan easier than x- does. I've started scratching out a plan to migrate to full names and will get it into an Etherpad soon. Also simplifying the log file configuration vars and locations. https://etherpad.openstack.org/p/devstack-logging Cool. Let us know if we can make any changes in oslo.log to simplify that work. I don't think oslo.log is involved, this is all of the log files that DevStack generates or captures: screen windows and the stack.sh run itself. There might be room to optimize if we're capturing something that is also being logged elsewhere, but when using screen people seem to want it all in a window (see horizon and recent keystone windows) anyway. dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] gate debugging
On 08/28/2014 01:48 PM, Doug Hellmann wrote: On Aug 28, 2014, at 1:17 PM, Sean Dague s...@dague.net wrote: On 08/28/2014 12:48 PM, Doug Hellmann wrote: On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down Write it down.” is my theme for Kilo. I definitely get the sentiment. Write it down is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information some times, as it takes people down very wrong paths. I think we break down on communication when we get into a conversation of I want to learn gate debugging because I don't quite know what that means, or where the starting point of understanding is. So those intentions are well meaning, but tend to stall. 
The reality was there was no road map for those of us that dive in, it's just understanding how OpenStack holds together as a whole and where some of the high risk parts are. And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the I want to learn at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don’t know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I’m looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there’s not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. 
The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I’ve found it hard, and not just me complaining. I’d like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo. Here is my off the cuff set of guidelines: #1 - is it a test failure or a setup failure This should be pretty easy to figure out. Test failures come at the end of console log and say that tests failed (after you see a bunch of passing tempest tests). Always start at *the end* of files and work backwards. That’s interesting because in my case I
Re: [openstack-dev] [all] gate debugging
On 08/28/2014 02:07 PM, Joe Gordon wrote: On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague s...@dague.net mailto:s...@dague.net wrote: On 08/28/2014 12:48 PM, Doug Hellmann wrote: On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net mailto:s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net mailto:s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com mailto:chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down Write it down.” is my theme for Kilo. I definitely get the sentiment. Write it down is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information some times, as it takes people down very wrong paths. I think we break down on communication when we get into a conversation of I want to learn gate debugging because I don't quite know what that means, or where the starting point of understanding is. 
So those intentions are well meaning, but tend to stall. The reality was there was no road map for those of us that dive in, it's just understanding how OpenStack holds together as a whole and where some of the high risk parts are. And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the I want to learn at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don’t know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I’m looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there’s not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. 
The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
On 2014-08-28 08:31:26 -0700 (-0700), Kevin Benton wrote: [...] DVR completely changed the reference L3 service plugin, which lives in the main tree. A well-defined, versioned internal API would not have helped any of the issues I brought up. [...] Except, perhaps, insofar as that (in some ideal world) it might allow the reference L3 service plugin to be extracted from the main tree and developed within a separate source code repository with its own life cycle. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Launch of a instance failed
Hi, I am deploying DevStack Juno on an Ubuntu 14.04 server virtual machine. After installation, when I try to launch an instance, it fails and I get a host not found error. Below is the relevant part of /opt/stack/logs/screen/screen-n-cond.log showing the error: 2014-08-28 23:44:59.448 ERROR nova.scheduler.utils [req-6f220296-8ec2-4e49-821d-0d69d3acc315 admin admin] [instance: 7f105394-414c-4458-b1a1-6f37d6cff87a] Error from last host: juno-devstack-server (node juno-devstack-server): [u'Traceback (most recent call last):\n', u' File /opt/stack/nova/nova/compute/manager.py, line 1932, in do_build_and_run_instance\nfilter_properties)\n', u' File /opt/stack/nova/nova/compute/manager.py, line 2067, in _build_and_run_instance\ninstance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 7f105394-414c-4458-b1a1-6f37d6cff87a was re-scheduled: not all arguments converted during string formatting\n'] Regards Nikesh ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
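For what it's worth, the "not all arguments converted during string formatting" text in the reschedule reason is a generic Python TypeError from %-formatting, which usually means the real error message got mangled while being interpolated (e.g. a message containing a literal % used as a format string, or a placeholder/argument count mismatch), so the underlying cause is likely earlier in the compute log. A minimal reproduction with a hypothetical helper, not Nova code:

```python
def format_error(template, *args):
    """Render a log/exception message with %-style formatting."""
    return template % args

# Correct usage: one placeholder, one argument.
ok = format_error("Build of instance %s was re-scheduled", "7f105394")

# Buggy usage: more arguments than placeholders raises exactly the
# TypeError text seen in the scheduler log above.
try:
    format_error("Build of instance %s was re-scheduled", "7f105394", "extra")
    caught = ""
except TypeError as exc:
    caught = str(exc)
```

So the message in the log is about how the error was *formatted*, not what actually went wrong during the boot.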
Re: [openstack-dev] [all] Design Summit reloaded
On 08/28/2014 01:58 PM, Jay Pipes wrote: On 08/27/2014 11:34 AM, Doug Hellmann wrote: On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, I've been thinking about what changes we can bring to the Design Summit format to make it more productive. I've heard the feedback from the mid-cycle meetups and would like to apply some of those ideas for Paris, within the constraints we have (already booked space and time). Here is something we could do: Day 1. Cross-project sessions / incubated projects / other projects I think that worked well last time. 3 parallel rooms where we can address top cross-project questions, discuss the results of the various experiments we conducted during juno. Don't hesitate to schedule 2 slots for discussions, so that we have time to come to the bottom of those issues. Incubated projects (and maybe other projects, if space allows) occupy the remaining space on day 1, and could occupy pods on the other days. If anything, I’d like to have fewer cross-project tracks running simultaneously. Depending on which are proposed, maybe we can make that happen. On the other hand, cross-project issues is a big theme right now so maybe we should consider devoting more than a day to dealing with them. I agree with Doug here. I'd almost say having a single cross-project room, with serialized content would be better than 3 separate cross-project tracks. By nature, the cross-project sessions will attract developers that work or are interested in a set of projects that looks like a big Venn diagram. By having 3 separate cross-project tracks, we would increase the likelihood that developers would once more have to choose among simultaneous sessions that they have equal interest in. For Infra and QA folks, this likelihood is even greater... I think I'd prefer a single cross-project track on the first day. So the fallout of that is there will be 6 or 7 cross-project slots for the design summit. 
Maybe that's the right mix if the TC does a good job picking the top 5 things we want accomplished from a cross project standpoint during the cycle. But it's going to have to be a pretty directed pick. I think last time we had 21 slots, and with a couple of doubling up that gave 19 sessions. (about 30 - 35 proposals for that slot set). Day 2 and Day 3. Scheduled sessions for various programs That's our traditional scheduled space. We'll have a 33% less slots available. So, rather than trying to cover all the scope, the idea would be to focus those sessions on specific issues which really require face-to-face discussion (which can't be solved on the ML or using spec discussion) *or* require a lot of user feedback. That way, appearing in the general schedule is very helpful. This will require us to be a lot stricter on what we accept there and what we don't -- we won't have space for courtesy sessions anymore, and traditional/unnecessary sessions (like my traditional release schedule one) should just move to the mailing-list. The message I’m getting from this change in available space is that we need to start thinking about and writing up ideas early, so teams can figure out which upcoming specs need more discussion and which don’t. ++ Also, I think as a community we need to get much better about saying No for certain things. No to sessions that don't have much specific details to them. No to blueprints that don't add much functionality that cannot be widely used or taken advantage of. No to specs that don't have a narrow-enough scope, etc. I also think we need to be better at saying Yes to other things, though... but that's a different thread ;) Day 4. Contributors meetups On the last day, we could try to split the space so that we can conduct parallel midcycle-meetup-like contributors gatherings, with no time boundaries and an open agenda. Large projects could get a full day, smaller projects would get half a day (but could continue the discussion in a local bar). 
Ideally that meetup would end with some alignment on release goals, but the idea is to make the best of that time together to solve the issues you have. Friday would finish with the design summit feedback session, for those who are still around. This is a good compromise between needing to allow folks to move around between tracks (including speaking at the conference) and having a large block of unstructured time for deep dives. Agreed. Best, -jay I think this proposal makes the best use of our setup: discuss clear cross-project issues, address key specific topics which need face-to-face time and broader attendance, then try to replicate the success of midcycle meetup-like open unscheduled time to discuss whatever is hot at this point. There are still details to work out (is it possible split the space, should we use the usual design summit CFP website to organize the scheduled time...), but I would first like
Re: [openstack-dev] [Mistral] Workflow on-finish
Is there an example somewhere that I can reference on how to define this special task? Thanks! On Wed, Aug 27, 2014 at 10:02 PM, Renat Akhmerov rakhme...@mirantis.com wrote: Right now, you can just include a special task into a workflow that, for example, sends an HTTP request to whatever you need to notify about workflow completion. Although, I see it rather as a hack (not so horrible though). Renat Akhmerov @ Mirantis Inc. On 28 Aug 2014, at 12:01, Renat Akhmerov rakhme...@mirantis.com wrote: There are two blueprints that I intended to use for this purpose: https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-http https://blueprints.launchpad.net/mistral/+spec/mistral-event-listeners-amqp So my opinion: - This functionality should be orthogonal to what we configure in DSL. - The mechanism of listeners is more generic and would cover your requirement as a special case. - At this point, I see that we may want to implement a generic transport-agnostic listener mechanism internally (not that hard a task) and then implement the required transport-specific plugins for it. Inviting everyone to the discussion. Thanks Renat Akhmerov @ Mirantis Inc. On 28 Aug 2014, at 06:17, W Chan m4d.co...@gmail.com wrote: Renat, It will be helpful to perform a callback on completion of the async workflow. Can we add on-finish to the workflow spec and, when the workflow completes, run task(s) defined in the on-finish section of the spec? This will allow the workflow author to define how the callback is to be done. Here's the bp link. https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-on-finish Thanks. 
Winson ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
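As a sketch of the "special task" hack described in this thread, the final task's action could simply assemble and POST a completion payload to the caller's webhook. The function and field names below are illustrative assumptions, not part of Mistral's DSL or API:

```python
import json

def build_completion_callback(workflow_name, state, output, url):
    """Assemble the HTTP notification a final callback task would send.

    This only constructs the request pieces; a real task's action would
    POST them with an HTTP client. All field names are illustrative.
    """
    body = json.dumps({
        "workflow": workflow_name,
        "state": state,
        "output": output,
    })
    headers = {"Content-Type": "application/json"}
    return url, headers, body

# Example: what the last task of a workflow might send on success.
cb_url, cb_headers, cb_body = build_completion_callback(
    "create_vm", "SUCCESS", {"vm_id": "abc123"},
    "http://example.com/callback")  # hypothetical receiver
```

The listener-based design discussed above would move this out of the workflow definition entirely, which is why it is described as a hack: the notification concern leaks into the DSL.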
Re: [openstack-dev] [nova] Is the BP approval process broken?
On Thu, Aug 28, 2014 at 2:40 AM, Daniel P. Berrange berra...@redhat.com wrote: On Thu, Aug 28, 2014 at 01:04:57AM +0000, Dugger, Donald D wrote: I'll try and not whine about my pet project but I do think there is a problem here. For the Gantt project to split out the scheduler there is a crucial BP that needs to be implemented ( https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP has been rejected and we'll have to try again for Kilo. My question is did we do something wrong or is the process broken? Note that we originally proposed the BP on 4/23/14, went through 10 iterations to the final version on 7/25/14 and the final version got three +1s and a +2 by 8/5. Unfortunately, even after reaching out to specific people, we didn't get the second +2, hence the rejection. I see that it did not even get one +2 at the time of the feature proposal approval freeze. You then successfully requested an exception and after a couple more minor updates got a +2 from John but from no one else. I do think this shows a flaw in our (core team's) handling of the blueprint. When we agreed upon the freeze exception, that should have included a firm commitment for at least 2 core devs to review it. IOW I think it is reasonable to say that either your feature should have ended up with two +2s and +A, or you should have seen a -1 from another core dev. I don't think it is acceptable that after the exception was approved it only got feedback from one core dev. I actually thought that when approving exceptions, we always got 2 cores to agree to review the item to avoid this, so I'm not sure why we failed here. I understand that reviews are a burden and very hard but it seems wrong that a BP with multiple positive reviews and no negative reviews is dropped because of what looks like indifference. Given that there is still time to review the actual code patches it seems like there should be a simpler way to get a BP approved. 
Without an approved BP it's difficult to even start the coding process. So the question "is the BP approval process broken" doesn't have a simple answer. There are definitely things we should change, but in this case I think the process sort of worked. The problem you hit is we just don't have enough people doing reviews. Your blueprint didn't get approved in part because the ratio of reviews needed to reviewers is off. If we don't even have enough bandwidth to approve this spec we certainly don't have enough bandwidth to review the code associated with the spec. I see 2 possibilities here: 1) This is an isolated case specific to this BP. If so, there's no need to change the procedures but I would like to know what we should be doing differently. We got a +2 review on 8/4 and then silence for 3 weeks. 2) This is a process problem that other people encounter. Maybe there are times when silence means assent. Something like a BP with multiple +1s and at least one +2 should automatically be accepted if no one reviews it 2 weeks after the +2 is given. My two thoughts are - When we approve something for exception we should actively monitor the progress of the review to ensure it gets the necessary attention to either approve or reject it. It makes no sense to approve an exception and then let it lie silently waiting for weeks with no attention. I'd expect that any time exceptions are approved we should babysit them and actively review their status in the weekly meeting to ensure they are followed up on. - Core reviewers should prioritize reviews of things which already have a +2 on them. I wrote about this in the context of code reviews last week, but all my points apply equally to spec reviews I believe. http://lists.openstack.org/pipermail/openstack-dev/2014-August/043657.html Also note that in Kilo the process will be slightly less heavyweight in that we're going to try to allow some feature changes into the tree without first requiring a spec/blueprint to be written. 
I can't say offhand whether this particular feature would have qualified for the lighter process, but in general by reducing the need for specs for the more trivial items, we'll have more time available for review of things which do require specs. Under the proposed changes to the spec/blueprint process, this would still need a spec. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___
Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow
Yes, in theory all of the plugins should be removable from the core neutron repo. So then it would only need to be responsible for the APIs, db models, etc. However, IIRC there are no plans to move any reference plugins from the tree. On Thu, Aug 28, 2014 at 11:20 AM, Jeremy Stanley fu...@yuggoth.org wrote: On 2014-08-28 08:31:26 -0700 (-0700), Kevin Benton wrote: [...] DVR completely changed the reference L3 service plugin, which lives in the main tree. A well-defined, versioned internal API would not have helped any of the issues I brought up. [...] Except, perhaps, insofar as that (in some ideal world) it might allow the reference L3 service plugin to be extracted from the main tree and developed within a separate source code repository with its own life cycle. -- Jeremy Stanley ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Kevin Benton ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity
Brandon, I am not sure how ready that nova feature is for general use and have asked our nova lead about that. He is on vacation but should be back by the start of next week. I believe this is the right approach for us moving forward. We cannot make it mandatory to run the 2 filters but we can say in the documentation that if these two filters aren't set we cannot guarantee Anti-affinity or Affinity. The other way we can implement this is by using availability zones and host aggregates. This is one technique we use to make sure we deploy our in-cloud services in an HA model. This also would assume that the operator is setting up Availability zones, which we can't. http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/ Sahara is currently using the following filters to support host affinity, which is probably due to the fact that they did the work before ServerGroups. I am not advocating the use of those filters but just showing you that we can document the feature and it will be up to the operator to set it up to get the right behavior. Regards Susanne Anti-affinity One of the problems in Hadoop running on OpenStack is that there is no ability to control where machine is actually running. We cannot be sure that two new virtual machines are started on different physical machines. As a result, any replication with cluster is not reliable because all replicas may turn up on one physical machine. Anti-affinity feature provides an ability to explicitly tell Sahara to run specified processes on different compute nodes. This is especially useful for Hadoop datanode process to make HDFS replicas reliable. The Anti-Affinity feature requires certain scheduler filters to be enabled on Nova. Edit your /etc/nova/nova.conf in the following way: [DEFAULT] ... 
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler scheduler_default_filters=DifferentHostFilter,SameHostFilter This feature is supported by all plugins out of the box. http://docs.openstack.org/developer/sahara/userdoc/features.html On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan brandon.lo...@rackspace.com wrote: Nova scheduler has ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter which does the colocation and apolocation for VMs. I think this is something we've discussed before about taking advantage of nova's scheduling. I need to verify that this will work with what we (RAX) plan to do, but I'd like to get everyone else's thoughts. Also, if we do decide this works for everyone involved, should we make it mandatory that the nova-compute services are running these two filters? I'm also trying to see if we can use this to also do our own colocation and apolocation on load balancers, but it looks like it will be a bit complex if it can even work. Hopefully, I can have something definitive on that soon. Thanks, Brandon ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
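To illustrate conceptually what the anti-affinity side of those filters does: drop candidate hosts that already run a member of the same group. This mirrors the idea behind ServerGroupAntiAffinityFilter / DifferentHostFilter, not their actual Nova implementations; all names are illustrative:

```python
def anti_affinity_hosts(candidate_hosts, group_instance_hosts):
    """Return candidate hosts that do not already hold a group member.

    candidate_hosts: hosts the scheduler is considering.
    group_instance_hosts: hosts where members of the server group
    (or, for DifferentHostFilter, the named instances) already run.
    """
    taken = set(group_instance_hosts)
    return [h for h in candidate_hosts if h not in taken]

hosts = ["compute1", "compute2", "compute3"]
# The group already has members on compute1 and compute3, so only
# compute2 survives the filter for the next member.
```

The affinity case is the mirror image (keep only hosts already holding a member), which is why both filters must be enabled for the scheduler to honor either policy.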
Re: [openstack-dev] [all] Design Summit reloaded
On 08/28/2014 02:21 PM, Sean Dague wrote: On 08/28/2014 01:58 PM, Jay Pipes wrote: On 08/27/2014 11:34 AM, Doug Hellmann wrote: On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, I've been thinking about what changes we can bring to the Design Summit format to make it more productive. I've heard the feedback from the mid-cycle meetups and would like to apply some of those ideas for Paris, within the constraints we have (already booked space and time). Here is something we could do: Day 1. Cross-project sessions / incubated projects / other projects I think that worked well last time. 3 parallel rooms where we can address top cross-project questions, discuss the results of the various experiments we conducted during juno. Don't hesitate to schedule 2 slots for discussions, so that we have time to come to the bottom of those issues. Incubated projects (and maybe other projects, if space allows) occupy the remaining space on day 1, and could occupy pods on the other days. If anything, I’d like to have fewer cross-project tracks running simultaneously. Depending on which are proposed, maybe we can make that happen. On the other hand, cross-project issues is a big theme right now so maybe we should consider devoting more than a day to dealing with them. I agree with Doug here. I'd almost say having a single cross-project room, with serialized content would be better than 3 separate cross-project tracks. By nature, the cross-project sessions will attract developers that work or are interested in a set of projects that looks like a big Venn diagram. By having 3 separate cross-project tracks, we would increase the likelihood that developers would once more have to choose among simultaneous sessions that they have equal interest in. For Infra and QA folks, this likelihood is even greater... I think I'd prefer a single cross-project track on the first day. 
So the fallout of that is there will be 6 or 7 cross-project slots for the design summit. Maybe that's the right mix if the TC does a good job picking the top 5 things we want accomplished from a cross-project standpoint during the cycle. But it's going to have to be a pretty directed pick. I think last time we had 21 slots, and with a couple doubled up that gave 19 sessions (about 30 - 35 proposals for that slot set). I'm not sure that would be a bad thing :) I think one of the reasons the mid-cycles have been successful is that they have adequately limited the scope of discussions, and I think by doing our homework by fully vetting and voting on cross-project sessions and being OK with saying "No, not this time," we will be more productive than if we had 20+ cross-project sessions. Just my two cents, though.. -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?
Hi Susanne-- Regarding the Octavia sessions: I think we probably will have enough to discuss that we could use two design sessions. However, I also think that we can probably come to conclusions on whether Octavia should become a part of Neutron Incubator right away via discussion on this mailing list. Do we want to have that discussion in another thread, or should we use this one? Stephen

On Thu, Aug 28, 2014 at 7:51 AM, Susanne Balle sleipnir...@gmail.com wrote: With a corrected Subject. Susanne

On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle sleipnir...@gmail.com wrote: LBaaS team, As we discussed in the weekly LBaaS meeting this morning, we should make sure we get the design sessions scheduled that we are interested in. We currently agreed on the following:

* Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we want to go over status and also the whole incubator thingy and how we will best move forward.
* Octavia: we want to schedule 2 sessions. During one of the sessions I would like to discuss the pros and cons of putting Octavia into the Neutron LBaaS incubator project right away. If it is going to be the reference implementation for LBaaS v2, then I believe Octavia belongs in the Neutron LBaaS v2 incubator.
* Flavors, which should be coordinated with markmcclain and enikanorov: https://review.openstack.org/#/c/102723/

Is this too many sessions given the constraints? I am assuming that we can also meet at the pods like we did at the last summit. Thoughts? Regards, Susanne

Thierry Carrez thie...@openstack.org Aug 27 (1 day ago) to OpenStack wrote: [...]

-- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Design sessions for Neutron LBaaS. What do we want/need?
Let's use a different email thread to discuss whether Octavia should be part of the Neutron incubator project right away. I would like to keep the two discussions separate. Susanne

On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle sleipnir...@gmail.com wrote: LBaaS team, As we discussed in the weekly LBaaS meeting this morning, we should make sure we get the design sessions scheduled that we are interested in. We currently agreed on the following:

* Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we want to go over status and also the whole incubator thingy and how we will best move forward.
* Octavia: we want to schedule 2 sessions. During one of the sessions I would like to discuss the pros and cons of putting Octavia into the Neutron LBaaS incubator project right away. If it is going to be the reference implementation for LBaaS v2, then I believe Octavia belongs in the Neutron LBaaS v2 incubator.
* Flavors, which should be coordinated with markmcclain and enikanorov: https://review.openstack.org/#/c/102723/

Is this too many sessions given the constraints? I am assuming that we can also meet at the pods like we did at the last summit. Thoughts? Regards, Susanne

Thierry Carrez thie...@openstack.org Aug 27 (1 day ago) to OpenStack wrote: [...]

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBass] Design sessions for Neutron LBaaS. What do we want/need?
Let's use a different email thread to discuss whether Octavia should be part of the Neutron incubator project right away. I would like to keep the two discussions separate. Susanne

On Thu, Aug 28, 2014 at 3:20 PM, Stephen Balukoff sbaluk...@bluebox.net wrote: Hi Susanne-- Regarding the Octavia sessions: I think we probably will have enough to discuss that we could use two design sessions. However, I also think that we can probably come to conclusions on whether Octavia should become a part of Neutron Incubator right away via discussion on this mailing list. Do we want to have that discussion in another thread, or should we use this one? Stephen [...]

-- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Design Summit reloaded
On 08/28/2014 03:06 PM, Jay Pipes wrote: [...] I'm not sure that would be a bad thing :) I think one of the reasons the mid-cycles have been successful is that they have adequately limited the scope of discussions, and I think by doing our homework by fully vetting and voting on cross-project sessions and being OK with saying "No, not this time," we will be more productive than if we had 20+ cross-project sessions. Just my two cents, though.. -jay

I'm not sure it would be a bad thing either. I just wanted to be explicit about what we are saying the cross-project sessions are for in this case: the 5 key cross-project activities the TC believes should be worked on this next cycle. The other question is: if we did that, what's running in competition to cross-project day? Is it another free-form pod day for people not working on those things? -Sean

-- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Design Summit reloaded
On Wed, Aug 27, 2014 at 7:51 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, I've been thinking about what changes we can bring to the Design Summit format to make it more productive. I've heard the feedback from the mid-cycle meetups and would like to apply some of those ideas for Paris, within the constraints we have (already booked space and time). Here is something we could do: Day 1. Cross-project sessions / incubated projects / other projects I think that worked well last time. 3 parallel rooms where we can address top cross-project questions, discuss the results of the various experiments we conducted during juno. Don't hesitate to schedule 2 slots for discussions, so that we have time to come to the bottom of those issues. Incubated projects (and maybe other projects, if space allows) occupy the remaining space on day 1, and could occupy pods on the other days. Yep, I think this works in theory; the tough part will be when all the incubating projects realize they're sending people for a single day. Maybe it'll work out differently than I think, though. It means fitting ironic, barbican, designate, manila, marconi in a day? Also, since QA, Infra, and Docs are cross-project AND Programs, where do they land? Day 2 and Day 3. Scheduled sessions for various programs That's our traditional scheduled space. We'll have 33% fewer slots available. So, rather than trying to cover all the scope, the idea would be to focus those sessions on specific issues which really require face-to-face discussion (which can't be solved on the ML or using spec discussion) *or* require a lot of user feedback. That way, appearing in the general schedule is very helpful. This will require us to be a lot stricter on what we accept there and what we don't -- we won't have space for courtesy sessions anymore, and traditional/unnecessary sessions (like my traditional release schedule one) should just move to the mailing-list.
I like thinking about what we can move to the mailing lists. Nice. Day 4. Contributors meetups On the last day, we could try to split the space so that we can conduct parallel midcycle-meetup-like contributors gatherings, with no time boundaries and an open agenda. Large projects could get a full day, smaller projects would get half a day (but could continue the discussion in a local bar). Ideally that meetup would end with some alignment on release goals, but the idea is to make the best of that time together to solve the issues you have. Friday would finish with the design summit feedback session, for those who are still around. Sounds good. I think this proposal makes the best use of our setup: discuss clear cross-project issues, address key specific topics which need face-to-face time and broader attendance, then try to replicate the success of midcycle meetup-like open unscheduled time to discuss whatever is hot at this point. There are still details to work out (is it possible to split the space, should we use the usual design summit CFP website to organize the scheduled time...), but I would first like to have your feedback on this format. Also if you have alternative proposals that would make better use of our 4 days, let me know. Cheers, -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Octavia] Octavia VM image design
I agree with Michael. We need to use the OpenStack tooling. Sahara is encountering some of the same issues we are as they build up their Hadoop VMs/clusters. See http://docs.openstack.org/developer/sahara/userdoc/vanilla_plugin.html and http://docs.openstack.org/developer/sahara/userdoc/diskimagebuilder.html for inspiration, Susanne On Wed, Aug 27, 2014 at 6:21 PM, Michael Johnson johnso...@gmail.com wrote: I am investigating building scripts that use diskimage-builder (https://github.com/openstack/diskimage-builder) to create a purpose-built image. This should allow some flexibility in the base image and the output image format (including a path to docker). The definition of purpose-built is open at this point. I will likely try to have a minimal Ubuntu-based VM image as a starting point/test case and we can add/change as necessary. Michael On Wed, Aug 27, 2014 at 2:12 PM, Dustin Lundquist dus...@null-ptr.net wrote: It seems to me there are two major approaches to the Octavia VM design:

1. Start with a standard Linux distribution (e.g. Ubuntu 14.04 LTS) and install HAProxy 1.5 and the Octavia control layer.
2. Develop a minimal purpose-driven distribution (similar to m0n0wall) with just HAProxy, iproute2 and a Python runtime for the control layer.

The primary difference here is additional development effort for option 2 versus the increased image size of option 1. Using Ubuntu and CirrOS images as representatives of the two options, it looks like the image size difference is about 20x for a full-featured distribution. If one of the HA models is to spin up a replacement instance on failure, the image size could significantly affect fail-over time. For initial work I think starting with a standard distribution would be sensible, but we should target systemd (Debian adopted systemd as the new default, and Ubuntu is following suit).
I wanted to find out if there is interest in a minimal Octavia image, and if so, this may affect design decisions on the instance control plane component. -Dustin ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
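Dustin's size-versus-failover tradeoff can be put in rough numbers. A back-of-envelope sketch: the sizes and throughput below are invented for illustration (only the ~20x ratio comes from the thread), and it assumes copying the image to the compute node dominates replacement time:

```python
# Back-of-envelope: how a ~20x image size difference could affect the
# time to spin up a replacement instance, assuming image transfer to the
# compute node dominates. All figures are illustrative assumptions.

FULL_DISTRO_MB = 1000   # a full-featured distro image (e.g. Ubuntu-based)
MINIMAL_MB = 50         # a purpose-built minimal image, ~20x smaller
THROUGHPUT_MB_S = 100   # sustained image-store-to-host throughput

def copy_time_s(image_mb, throughput_mb_s=THROUGHPUT_MB_S):
    """Seconds to transfer the image; ignores boot and scheduling time."""
    return image_mb / throughput_mb_s

print(copy_time_s(FULL_DISTRO_MB))  # 10.0
print(copy_time_s(MINIMAL_MB))      # 0.5
```

Under these assumptions the transfer-time gap tracks the size ratio directly, which is why the minimal-image option matters most for HA models that replace instances on failure.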
Re: [openstack-dev] [all] gate debugging
On Aug 28, 2014, at 2:16 PM, Sean Dague s...@dague.net wrote: On 08/28/2014 02:07 PM, Joe Gordon wrote: On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague s...@dague.net mailto:s...@dague.net wrote: On 08/28/2014 12:48 PM, Doug Hellmann wrote: On Aug 27, 2014, at 5:56 PM, Sean Dague s...@dague.net mailto:s...@dague.net wrote: On 08/27/2014 05:27 PM, Doug Hellmann wrote: On Aug 27, 2014, at 2:54 PM, Sean Dague s...@dague.net mailto:s...@dague.net wrote: Note: thread intentionally broken, this is really a different topic. On 08/27/2014 02:30 PM, Doug Hellmann wrote: On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com mailto:chd...@redhat.com wrote: On Wed, 27 Aug 2014, Doug Hellmann wrote: I have found it immensely helpful, for example, to have a written set of the steps involved in creating a new library, from importing the git repo all the way through to making it available to other projects. Without those instructions, it would have been much harder to split up the work. The team would have had to train each other by word of mouth, and we would have had constant issues with inconsistent approaches triggering different failures. The time we spent building and verifying the instructions has paid off to the extent that we even had one developer not on the core team handle a graduation for us. +many more for the relatively simple act of just writing stuff down "Write it down" is my theme for Kilo. I definitely get the sentiment. "Write it down" is also hard when you are talking about things that do change around quite a bit. OpenStack as a whole sees 250 - 500 changes a week, so the interaction pattern moves around enough that it's really easy to have *very* stale information written down. Stale information is even more dangerous than no information sometimes, as it takes people down very wrong paths.
I think we break down on communication when we get into a conversation of "I want to learn gate debugging" because I don't quite know what that means, or where the starting point of understanding is. So those intentions are well meaning, but tend to stall. The reality was there was no road map for those of us that dive in; it's just understanding how OpenStack holds together as a whole and where some of the high-risk parts are. And a lot of that comes with days staring at code and logs until patterns emerge. Maybe if we can get smaller, more targeted questions, we can help folks better? I'm personally a big fan of answering the targeted questions because then I also know that the time spent exposing that information was directly useful. I'm more than happy to mentor folks. But I just end up finding the "I want to learn" at the generic level something that's hard to grasp onto or figure out how we turn it into action. I'd love to hear more ideas from folks about ways we might do that better. You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don't know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to. I guess what I'm looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files; which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue? I realize there's not a way to write a how-to that will live forever.
Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I’ve found it hard, and not just me complaining. I’d like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo. Here is
Re: [openstack-dev] [all] gate debugging
On Aug 28, 2014, at 2:15 PM, Sean Dague s...@dague.net wrote: [...]
The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don’t have to keep adding disclaimers. Sure. Matt's actually working up a blog post describing the thing he nailed earlier in the week. Yes, I appreciate that both of you are responding to my questions. :-) I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I’ve found it hard, and not just me complaining. I’d like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo. Here is my off the cuff set of guidelines: #1 - is it a test failure or a setup failure This should be pretty easy to figure out. Test failures come at the end of console log and say that tests failed (after you see a bunch of passing tempest tests).
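The first question in that checklist lends itself to a mechanical first pass. As a rough illustration only (the function, the marker strings, and the heuristic are assumptions for this sketch, not part of any real gate tooling), a script could classify a console log by whether tempest results appear before the failure:

```python
# Illustrative sketch of guideline #1 above: decide whether a gate console
# log shows a test failure or a setup failure. The marker strings and the
# function itself are hypothetical, not actual OpenStack infra tooling.

def classify_console_log(log_text: str) -> str:
    """Guess the failure type from a gate job's console log.

    Heuristic from the thread: test failures show up at the end of the
    console log, after a run of passing tempest tests; if the job died
    before tempest ever ran, the failure was in setup (devstack).
    """
    lines = log_text.splitlines()
    # Did any tempest tests actually run and pass?
    ran_tests = any("tempest" in line.lower() and " ok" in line.lower()
                    for line in lines)
    # Failures are reported near the end of the console log.
    tail = "\n".join(lines[-50:]).lower()
    if ran_tests and "failed" in tail:
        return "test failure"
    if not ran_tests:
        return "setup failure"
    return "unclear"

sample = "tempest.api.compute.test_servers ... ok\n" * 3 + "FAILED (failures=1)"
print(classify_console_log(sample))  # test failure
```

This only automates the easy first branch; the hard part the thread discusses, knowing which of the zillion little log files to open next, still takes pattern recognition.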
Re: [openstack-dev] [nova] Is the BP approval process broken?
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:

I'll try and not whine about my pet project, but I do think there is a problem here. For the Gantt project to split out the scheduler there is a crucial BP that needs to be implemented ( https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP has been rejected and we'll have to try again for Kilo. My question is: did we do something wrong, or is the process broken?

Note that we originally proposed the BP on 4/23/14, went through 10 iterations to the final version on 7/25/14, and the final version got three +1s and a +2 by 8/5. Unfortunately, even after reaching out to specific people, we didn't get the second +2, hence the rejection. I understand that reviews are a burden and very hard, but it seems wrong that a BP with multiple positive reviews and no negative reviews is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may not have been a second +2 from a core team member may very well have been that the core team members did not feel that the blueprint's priority was high enough to put before other work, or that the core team members did not have the time to comment on the spec (due to them not feeling the blueprint had the priority to justify the time to do a full review). Note that I'm not a core drivers team member.

Best,
-jay

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] Design Summit reloaded
On 08/28/2014 03:31 PM, Sean Dague wrote:
On 08/28/2014 03:06 PM, Jay Pipes wrote:
On 08/28/2014 02:21 PM, Sean Dague wrote:
On 08/28/2014 01:58 PM, Jay Pipes wrote:
On 08/27/2014 11:34 AM, Doug Hellmann wrote:
On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org wrote:

Hi everyone, I've been thinking about what changes we can bring to the Design Summit format to make it more productive. I've heard the feedback from the mid-cycle meetups and would like to apply some of those ideas for Paris, within the constraints we have (already booked space and time). Here is something we could do:

Day 1. Cross-project sessions / incubated projects / other projects. I think that worked well last time: 3 parallel rooms where we can address top cross-project questions and discuss the results of the various experiments we conducted during Juno. Don't hesitate to schedule 2 slots for discussions, so that we have time to get to the bottom of those issues. Incubated projects (and maybe other projects, if space allows) occupy the remaining space on day 1, and could occupy pods on the other days.

If anything, I'd like to have fewer cross-project tracks running simultaneously. Depending on which are proposed, maybe we can make that happen. On the other hand, cross-project issues is a big theme right now, so maybe we should consider devoting more than a day to dealing with them.

I agree with Doug here. I'd almost say having a single cross-project room with serialized content would be better than 3 separate cross-project tracks. By nature, the cross-project sessions will attract developers that work on, or are interested in, a set of projects that looks like a big Venn diagram. By having 3 separate cross-project tracks, we would increase the likelihood that developers would once more have to choose among simultaneous sessions that they have equal interest in. For Infra and QA folks, this likelihood is even greater... I think I'd prefer a single cross-project track on the first day.

So the fallout of that is there will be 6 or 7 cross-project slots for the design summit. Maybe that's the right mix, if the TC does a good job picking the top 5 things we want accomplished from a cross-project standpoint during the cycle. But it's going to have to be a pretty directed pick. I think last time we had 21 slots, and with a couple doubled up that gave 19 sessions (about 30 - 35 proposals for that slot set).

I'm not sure that would be a bad thing :) I think one of the reasons the mid-cycles have been successful is that they have adequately limited the scope of discussions, and I think by doing our homework by fully vetting and voting on cross-project sessions and being OK with saying "No, not this time," we will be more productive than if we had 20+ cross-project sessions. Just my two cents, though.

I'm not sure it would be a bad thing either. I just wanted to be explicit about what we are saying the cross-project sessions are for in this case: the 5 key cross-project activities the TC believes should be worked on this next cycle.

Yes. The other question is, if we did that, what's running in competition to cross-project day? Is it another free-form pod day for people not working on those things?

It could be a pod day, sure. Or just an extended hallway session day... :)

-jay
Re: [openstack-dev] [oslo] change to deprecation policy in the incubator
On Aug 28, 2014, at 12:14 PM, Doug Hellmann d...@doughellmann.com wrote:

Before Juno we set a deprecation policy for graduating libraries that said the incubated versions of the modules would stay in the incubator repository for one full cycle after graduation. This gives projects time to adopt the libraries and still receive bug fixes to the incubated version (see https://wiki.openstack.org/wiki/Oslo#Graduation).

That policy worked well early on, but has recently introduced some challenges with the low-level modules. Other modules in the incubator are still importing the incubated versions of, for example, timeutils, and so tests that rely on mocking out or modifying the behavior of timeutils do not work as expected when different parts of the application code end up calling different versions of timeutils. We had similar issues with the notifiers and RPC code, and I expect to find other cases as we continue with the graduations.

To deal with this problem, I propose that for Kilo we delete graduating modules as soon as the new library is released, rather than waiting until the end of the cycle. We can update the other incubated modules at the same time, so that the incubator will always use the new libraries and be consistent.

We have not had a lot of patches where backports were necessary, but there have been a few important ones, so we need to retain the ability to handle them and allow projects to adopt libraries at a reasonable pace. To handle backports cleanly, we can "freeze" all changes to the master branch version of modules slated for graduation during Kilo (we would need to make a good list very early in the cycle), and use the stable/juno branch for backports.

The new process would be:

1. Declare which modules we expect to graduate during Kilo.
2. Changes to those pre-graduation modules could be made in the master branch before their library is released, as long as the change is also backported to the stable/juno branch at the same time (we should enforce this by having both patches submitted before accepting either).
3. When graduation for a library starts, freeze those modules in all branches until the library is released.
4. Remove modules from the incubator's master branch after the library is released.
5. Land changes in the library first.
6. Backport changes, as needed, to stable/juno instead of master.

It would be better to begin the export/import process as early as possible in Kilo to keep the window where point 2 applies very short. If there are objections to using stable/juno, we could introduce a new branch with a name like backports/kilo, but I am afraid having the extra branch to manage would just cause confusion.

I would like to move ahead with this plan by creating the stable/juno branch and starting to update the incubator as soon as the oslo.log repository is imported (https://review.openstack.org/116934).

That change has merged and the oslo.log repository has been created.

Doug

Thoughts?

Doug
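The mocking problem Doug describes, two live copies of timeutils seen by different parts of the application, can be reproduced in a toy form. The sketch below uses stand-in module objects rather than actual oslo code, and the module names are illustrative only:

```python
# Simplified sketch (not real oslo code) of why patching one copy of a
# duplicated module does not affect callers holding the other copy.
import types
from unittest import mock

def make_timeutils(name):
    # Stand-in for one copy of timeutils (incubated vs. graduated library).
    module = types.ModuleType(name)
    module.utcnow = lambda: "real-now"
    return module

incubated = make_timeutils("openstack.common.timeutils")  # incubator copy
released = make_timeutils("oslo.timeutils")               # library copy

# A test tries to freeze time by patching the released copy...
with mock.patch.object(released, "utcnow", return_value="frozen-now"):
    print(released.utcnow())   # frozen-now
    # ...but application code that imported the incubated copy is
    # untouched, so the time-freezing silently fails for that code path.
    print(incubated.utcnow())  # real-now
```

Deleting the incubated copy as soon as the library is released, as proposed above, removes the second module object entirely, so there is only one `timeutils` left to patch.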
Re: [openstack-dev] [all] Design Summit reloaded
On 08/28/2014 03:31 PM, Sean Dague wrote:

[snip quoted thread, same as above]

The other question is if we did that, what's running in competition to cross-project day? Is it another free-form pod day for people not working on those things?

-Sean

I'm curious to know how many people would be expected to be all in the same room? And what percentage of these folks are participating in the conversation, and how many are audience? One of the issues that seems to be universal in the identified areas of discontent with summit sessions currently (which gets discussed after each of the mid-cycles) is that 30 people talking in a room with an audience of 200 isn't very efficient.
I wonder if this well-intentioned direction might end up with this result, which many folks I talked to don't want.

The other issue that comes to mind for me is trying to allow everyone to be included in the discussion while keeping it focused and reducing the side conversations. If folks are impatient to have their point (or off-topic joke) heard, they won't wait for a turn from whoever is chairing; they will just start talking. This can create tension for the rest of the folks who *are* patiently trying to wait their turn. I chaired a day and a half of discussions at the QA/Infra mid-cycle (the rest of the time was code sprinting), and it was a real challenge in a room of 30 people with a full spectrum of contributor experience (at least one person made their first contribution in Germany, plus there were folks who have been involved since the beginning) to keep everyone
Re: [openstack-dev] [nova] Is the BP approval process broken?
On 08/28/2014 01:44 PM, Jay Pipes wrote:
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:

I understand that reviews are a burden and very hard, but it seems wrong that a BP with multiple positive reviews and no negative reviews is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may not have been a second +2 from a core team member may very well have been that the core team members did not feel that the blueprint's priority was high enough to put before other work, or that the core team members did not have the time to comment on the spec (due to them not feeling the blueprint had the priority to justify the time to do a full review).

The overall scheduler-lib blueprint is marked with a high priority at http://status.openstack.org/release/. Hopefully that would apply to sub-blueprints as well.

Chris