Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Fri, Dec 15, 2017 at 11:39:16AM -0700, Alex Schultz wrote:
> Perhaps we can start reviewing the items, and those with little to no
> impact we can merge for the remainder of the cycle. I know
> realistically everything has an impact so it'll be >0, but let's try
> and keep it as close to 0 as possible. I know we've already merged
> some arch support stuff this cycle but it does seem doubtful that
> it'll be properly fully supported in TripleO by the end of Queens. I
> think it best to leave the blueprint slated for Rocky-1 unless we get
> some sudden movement on patches. Please continue to propose patches
> for review and let's make sure they work in a backwards-compatible
> fashion with extra eyes on upgrade impacts.

Okay. We'll do our best and see where we get to. In the meantime I'll
start a related, but separate, thread to talk through the design issues
for the multi-arch images and containers to try and focus development
when we get there ;P

Yours Tony.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Thu, Dec 14, 2017 at 5:01 PM, Tony Breeds wrote:
> On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
>> I assume since some of this work was sort of done earlier outside of
>> tripleo and does not affect the default installation path that most
>> folks will consume, it shouldn't be impacting to general testing or
>> increase regressions. ...
>
> Sadly this is going to be more impactful on x86 than anyone will like,
> and I apologise for not raising these issues before now.
>
> [full breakdown of the ironic, container and CI aspects snipped; see
> Tony's original mail below]
>
> So as you can see the aim is to have minimal impact on x86_64 and
> default to the existing behaviour in the absence of anything
> specifically requesting multi-arch support. But minimal *may* be > none
> :(
>
> As to code ETAs, realistically all of the ironic-related code will be
> public by m3 but probably not merged, and the containers stuff is
> somewhat dependent on that work / direction from the community on how
> to handle the points I enumerated.

Perhaps we can start reviewing the items, and those with little to no
impact we can merge for the remainder of the cycle. I know
realistically everything has an impact so it'll be >0, but let's try
and keep it as close to 0 as possible. I know we've already merged
some arch support stuff this cycle but it does seem doubtful that
it'll be properly fully supported in TripleO by the end of Queens. I
think it best to leave the blueprint slated for Rocky-1 unless we get
some sudden movement on patches. Please continue to propose patches
for review and let's make sure they work in a backwards-compatible
fashion with extra eyes on upgrade impacts.
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
I have
https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-executions-yaql-function
almost all implemented and I would like to submit an FFE for it.

Cheers,
Adriano

On Fri, Dec 15, 2017 at 12:01 AM, Tony Breeds wrote:
> On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
> > I assume since some of this work was sort of done earlier outside of
> > tripleo and does not affect the default installation path that most
> > folks will consume, it shouldn't be impacting to general testing or
> > increase regressions. ...
>
> Sadly this is going to be more impactful on x86 than anyone will like,
> and I apologise for not raising these issues before now.
>
> [full breakdown of the ironic, container and CI aspects snipped; see
> Tony's original mail below]
>
> Yours Tony.
>
> [1] https://docs.docker.com/registry/spec/manifest-v2-2/
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
> I assume since some of this work was sort of done earlier outside of
> tripleo and does not affect the default installation path that most
> folks will consume, it shouldn't be impacting to general testing or
> increase regressions. My general requirement for anyone who needed an
> FFE for functionality that isn't essential is that it's off by
> default, has minimal impact to the existing functionality and we have
> a rough estimate on feature landing. Do you have an idea when you
> expect to land this functionality? Additionally the patches seem to be
> primarily around the ironic integration so have those been sorted out?

Sadly this is going to be more impactful on x86 than anyone will like,
and I apologise for not raising these issues before now.

There are 3 main aspects:

1. Ironic integration/provisioning setup.
   1.1 Teaching ironic inspector how to deal with ppc64le memory
       detection. There are a couple of approaches there but they don't
       directly impact tripleo.
   1.2 I think there will be some work with puppet-ironic to set up the
       introspection dnsmasq in a way that's compatible with multi-arch.
       Right now this is the introduction of a new tag (based on options
       in the DHCP request) and then sending different responses in the
       presence/absence of that tag. Very much akin to the ipxe stuff
       there today.
   1.3 Helping tripleo understand that there is now more than one
       deploy/overcloud image and correctly using that. These are mostly
       covered with the review Mark published but there are the
       backwards-compat/corner cases to deal with.
   1.4 Right now ppc64le has very specific requirements with respect to
       the boot partition layout. Last time I checked these weren't
       handled by default in ironic. The simple workaround here is to
       make the overcloud image on ppc64le a whole disk rather than a
       single partition, and I think given the scope of everything else
       that's the most likely outcome for queens.

2. Containers.
   Here we run in to several issues, not least of which is my general
   lack of understanding of containers, but the challenges as I
   understand them are:
   2.1 Having a venue to build/publish/test ppc64le container builds.
       This in many ways is tied to the CI issue below, but all of the
       potential solutions require some container image for ppc64le to
       be available to validate that adding them doesn't impact x86_64.
   2.2 As I understand it the right way to do multi-arch containers is
       with an image manifest or manifest list images[1]. There are so
       many open questions here.
       2.2.1 If the container registry supports manifest lists, when we
             pull them onto the undercloud can we get *all*
             layers/objects - or will we just get the one that matches
             the host CPU?
       2.2.2 If the container registry doesn't support manifest list
             images, can we use something like manifest-tool[2] to pull
             "nova" from multiple registries (or orgs on the same
             registry) and combine them into a single manifest image on
             the undercloud?
       2.2.3 Do we give up entirely on manifest images and just have
             multiple images / tags on the undercloud, for example:
                 nova:latest
                 nova:x86_64_latest
                 nova:ppc64le_latest
             and have the deployed node pull the $(arch)_latest tag
             first, and if $(arch) == x86_64 pull the :latest tag if the
             first pull failed?
   2.3 All the things I can't describe/know about 'cause I haven't
       gotten there yet.

3. CI
   There isn't any ppc64le CI for tripleo and frankly there won't be in
   the foreseeable future. Given the CI that's in place on x86 we can
   confidently assert that we won't break x86, but the code paths we add
   for power will largely be untested (beyond unit tests) and any/all
   issues will have to be caught by downstream teams.

So as you can see the aim is to have minimal impact on x86_64 and
default to the existing behaviour in the absence of anything
specifically requesting multi-arch support. But minimal *may* be > none
:(

As to code ETAs, realistically all of the ironic-related code will be
public by m3 but probably not merged, and the containers stuff is
somewhat dependent on that work / direction from the community on how to
handle the points I enumerated.

Yours Tony.

[1] https://docs.docker.com/registry/spec/manifest-v2-2/
[2] https://github.com/estesp/manifest-tool
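[Editor's note] The fallback scheme Tony sketches in 2.2.3 can be expressed as a small helper. This is a hypothetical sketch, not TripleO code: the `candidate_tags` name and the use of `docker pull` as the client are assumptions for illustration only.

```python
import platform
import subprocess

def candidate_tags(name, arch, tag="latest"):
    """Return image references to try, most specific first (scheme 2.2.3)."""
    refs = [f"{name}:{arch}_{tag}"]
    if arch == "x86_64":
        refs.append(f"{name}:{tag}")  # legacy un-prefixed tag, x86_64 only
    return refs

def pull_image(name, tag="latest"):
    """Try each candidate in order; 'docker pull' is a stand-in client."""
    arch = platform.machine()  # e.g. 'x86_64' or 'ppc64le'
    for ref in candidate_tags(name, arch, tag):
        if subprocess.call(["docker", "pull", ref]) == 0:
            return ref
    raise RuntimeError(f"no image found for {name} on {arch}")
```

The x86_64-only fallback to `:latest` is what keeps the existing single-arch behaviour working if arch-prefixed tags were never published.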
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Thu, Dec 14, 2017 at 12:38 PM, Mark Hamzy wrote:
> Alex Schultz wrote on 12/14/2017 09:24:54 AM:
>> On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy wrote:
>> ... As I said previously, please post the
>> patches ASAP so we can get eyes on these changes. Since this does
>> have an impact on the existing functionality this should have been
>> merged earlier in the cycle so we could work out any user-facing
>> issues.
>
> Sorry about that.
> https://review.openstack.org/#/c/528000/
> https://review.openstack.org/#/c/528060/

I reviewed it a bit and I think you can put in the backwards
compatibility in the few spots I listed. The problem is really that a
Queens undercloud (tripleoclient/tripleo-common) needs to be able to
manage a Pike overcloud. For now I think we can grant the FFE because
it's not too bad if this is the only bit of change we need to make. But
we will need to solve for the backwards compatibility prior to merging.
I'll update the blueprint with this.

Thanks,
-Alex

> I will see how easy it is to also support the old naming convention...
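[Editor's note] The backwards-compatibility requirement above - a Queens undercloud resolving both the old un-prefixed names and the new arch-prefixed ones - could look roughly like the following. This is an illustrative sketch only, not the code in Mark's patches; the `resolve_image` helper is invented for this example.

```python
def resolve_image(image_names, base_name, arch=None):
    """Pick the best match for base_name from the names known to Glance.

    Prefers the new '<arch>-' prefixed convention when arch is given,
    but falls back to the pre-Queens un-prefixed name so existing
    (e.g. Pike) image sets keep working.
    """
    available = set(image_names)
    if arch:
        prefixed = f"{arch}-{base_name}"
        if prefixed in available:
            return prefixed
    if base_name in available:  # legacy naming convention
        return base_name
    raise LookupError(f"image {base_name!r} not found (arch={arch})")
```

With something like this, a deployment that never uploaded arch-prefixed images behaves exactly as before, which is the "minimal impact on x86_64" goal from the FFE discussion.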
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
Alex Schultz wrote on 12/14/2017 09:24:54 AM:
> On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy wrote:
> ... As I said previously, please post the
> patches ASAP so we can get eyes on these changes. Since this does
> have an impact on the existing functionality this should have been
> merged earlier in the cycle so we could work out any user-facing
> issues.

Sorry about that.

https://review.openstack.org/#/c/528000/
https://review.openstack.org/#/c/528060/

I will see how easy it is to also support the old naming convention...
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy wrote:
> Alex Schultz wrote on 12/13/2017 04:29:49 PM:
>> On Wed, Dec 13, 2017 at 3:22 PM, Mark Hamzy wrote:
>> > What I have done at a high level is to rename the images into
>> > architecture-specific images.
>> >
>> > [image list showing ppc64le-* and x86_64-* images snipped]
>> >
>> > This will change existing functionality.
>>
>> Any chance of backwards compatibility if no arch is specified in the
>> image list so it's not that impacting?
>
> The patch as currently coded does not do that. It is more consistent
> and cleaner as it is currently written. How opposed is the community
> to a new convention? I know we are pushing up against holidays and
> deadlines and don't know how much longer it will take to also support
> the old naming convention.

It's not that the community is averse to new conventions; the issue is
the lack of backwards compatibility, especially late in the cycle. If we
need to extend out until the middle of January/the end of M3 for this,
that is an option to get the backwards compatibility.

I'm wondering if this lack of backwards compatibility would be a problem
for upgrades or the more advanced use cases. We do support the ability
for custom role images for the end users, so I'm wondering what the
impact would be for that. As I said previously, please post the patches
ASAP so we can get eyes on these changes. Since this does have an impact
on the existing functionality this should have been merged earlier in
the cycle so we could work out any user-facing issues.

> RedHat is asking for another identifier along with ppc64le given that
> there are different optimizations and CPU instructions between a
> Power 8 system and a Power 9 system. The kernel is certainly different
> and the base operating system might be as well.
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
Alex Schultz wrote on 12/13/2017 04:29:49 PM:
> On Wed, Dec 13, 2017 at 3:22 PM, Mark Hamzy wrote:
> > What I have done at a high level is to rename the images into
> > architecture-specific images.
> >
> > [image list showing ppc64le-* and x86_64-* images snipped]
> >
> > This will change existing functionality.
>
> Any chance of backwards compatibility if no arch is specified in the
> image list so it's not that impacting?

The patch as currently coded does not do that. It is more consistent and
cleaner as it is currently written. How opposed is the community to a
new convention? I know we are pushing up against holidays and deadlines
and don't know how much longer it will take to also support the old
naming convention.

RedHat is asking for another identifier along with ppc64le given that
there are different optimizations and CPU instructions between a Power 8
system and a Power 9 system. The kernel is certainly different and the
base operating system might be as well.
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Wed, Dec 13, 2017 at 3:22 PM, Mark Hamzy wrote:
>> I just need an understanding on the impact and the timeline. Replying
>> here is sufficient.
>>
>> I assume since some of this work was sort of done earlier outside of
>> tripleo and does not affect the default installation path that most
>> folks will consume, it shouldn't be impacting to general testing or
>> increase regressions. My general requirement for anyone who needed an
>> FFE for functionality that isn't essential is that it's off by
>> default, has minimal impact to the existing functionality and we have
>> a rough estimate on feature landing. Do you have an idea when you
>> expect to land this functionality? Additionally the patches seem to
>> be primarily around the ironic integration so have those been sorted
>> out?
>
> I have been working on a multi-architecture patch for TripleO and am
> almost ready to submit a WIP to r.o.o. I have delayed until I can get
> all of the testcases passing.

Please submit ASAP so we can get a proper review of what is actually
impacted. The failing test cases would also indicate how much of an
impact this really is.

> Currently the patches exist at:
> https://hamzy.fedorapeople.org/TripleO-multi-arch/05.bb2b96e/0001-fix_multi_arch-tripleo-common.patch
> https://hamzy.fedorapeople.org/TripleO-multi-arch/05.cc5fee3/0001-fix_multi_arch-python-tripleoclient.patch
>
> And the full installation instructions are at:
> https://fedoraproject.org/wiki/User:Hamzy/TripleO_mixed_undercloud_overcloud_try9
>
> What I have done at a high level is to rename the images into
> architecture-specific images.
>
> [image list showing ppc64le-* and x86_64-* images snipped]
>
> This will change existing functionality.

Any chance of backwards compatibility if no arch is specified in the
image list so it's not that impacting?

> I still need to work with RedHat on changing the patch for their
> needs, but it currently can deploy an x86_64 undercloud, an x86_64
> overcloud controller node and a ppc64le overcloud compute node.
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
> I just need an understanding on the impact and the timeline. Replying
> here is sufficient.
>
> I assume since some of this work was sort of done earlier outside of
> tripleo and does not affect the default installation path that most
> folks will consume, it shouldn't be impacting to general testing or
> increase regressions. My general requirement for anyone who needed an
> FFE for functionality that isn't essential is that it's off by
> default, has minimal impact to the existing functionality and we have
> a rough estimate on feature landing. Do you have an idea when you
> expect to land this functionality? Additionally the patches seem to be
> primarily around the ironic integration so have those been sorted out?

I have been working on a multi-architecture patch for TripleO and am
almost ready to submit a WIP to r.o.o. I have delayed until I can get
all of the testcases passing.

Currently the patches exist at:
https://hamzy.fedorapeople.org/TripleO-multi-arch/05.bb2b96e/0001-fix_multi_arch-tripleo-common.patch
https://hamzy.fedorapeople.org/TripleO-multi-arch/05.cc5fee3/0001-fix_multi_arch-python-tripleoclient.patch

And the full installation instructions are at:
https://fedoraproject.org/wiki/User:Hamzy/TripleO_mixed_undercloud_overcloud_try9

What I have done at a high level is to rename the images into
architecture-specific images. For example,

(undercloud) [stack@oscloud5 ~]$ openstack image list
+--------------------------------------+-------------------------------+--------+
| ID                                   | Name                          | Status |
+--------------------------------------+-------------------------------+--------+
| fa0ed7cb-21d7-427b-b8cb-7c62f0ff7760 | ppc64le-bm-deploy-kernel      | active |
| 94dc2adf-49ce-4db5-b914-970b57a8127f | ppc64le-bm-deploy-ramdisk     | active |
| 6c50587d-dd29-41ba-8971-e0abf3429020 | ppc64le-overcloud-full        | active |
| 59e512a7-990e-4689-85d2-f1f4e1e6e7a8 | x86_64-bm-deploy-kernel       | active |
| bcad2821-01be-4556-b686-31c70bb64716 | x86_64-bm-deploy-ramdisk      | active |
| 3ab489fa-32c7-4758-a630-287c510fc473 | x86_64-overcloud-full         | active |
| 661f18f7-4d99-43e8-b7b8-f5c8a9d5b116 | x86_64-overcloud-full-initrd  | active |
| 4a09c422-3de0-46ca-98c3-7c6f1f7717ff | x86_64-overcloud-full-vmlinuz | active |
+--------------------------------------+-------------------------------+--------+

This will change existing functionality.

I still need to work with RedHat on changing the patch for their needs,
but it currently can deploy an x86_64 undercloud, an x86_64 overcloud
controller node and a ppc64le overcloud compute node.
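[Editor's note] With arch-prefixed names like the ones above, picking deploy images per node might look like the sketch below. Ironic records a node's architecture in its `properties/cpu_arch` field; the helper name and the suffix list are illustrative assumptions, not Mark's actual patch.

```python
# Map an ironic node's cpu_arch to the arch-prefixed Glance image names
# introduced by the rename. Illustrative sketch only.
DEPLOY_IMAGE_SUFFIXES = ("bm-deploy-kernel", "bm-deploy-ramdisk", "overcloud-full")

def deploy_images_for_node(node_properties):
    # Default to x86_64, matching the pre-multi-arch behaviour.
    arch = node_properties.get("cpu_arch", "x86_64")
    return {suffix: f"{arch}-{suffix}" for suffix in DEPLOY_IMAGE_SUFFIXES}
```

Defaulting to x86_64 when `cpu_arch` is unset keeps nodes enrolled before the change behaving as they do today.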
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Wed, Dec 13, 2017 at 11:16 AM, Sven Anderson wrote:
> On Sat, Dec 9, 2017 at 12:35 AM Alex Schultz wrote:
>> Please take some time to review the list of blueprints currently
>> associated with Rocky[0] to see if your efforts have been moved. ...
>
> As discussed on IRC today, I'd like to try to implement
>
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-realtime
>
> by Queens M3. It has been punted many releases already, and depends
> now on the ironic ansible driver, which just merged and is now getting
> its finishing touches. Since it's a pure add-on feature that is off by
> default and shouldn't have an impact on existing functionality, it's a
> pretty safe thing to try on a best-effort basis. If we see it becomes
> unfeasible to land this by M3 I will punt it.
>
> Even if I make good progress next week, it is very unlikely I'll
> finish it this year, so I'd also like to submit an FFE for it.

Thanks Sven. As discussed, I updated the blueprint. Please keep me in
the loop if it will not make Queens.

Thanks,
-Alex
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Tue, Dec 12, 2017 at 5:50 PM, Tony Breeds wrote:
> On Fri, Dec 08, 2017 at 04:34:09PM -0700, Alex Schultz wrote:
>> Hey folks,
>>
>> So I went through the list of blueprints and moved some that were
>> either not updated or appeared to have a bunch of patches not in a
>> mergeable state.
>>
>> Please take some time to review the list of blueprints currently
>> associated with Rocky[0] to see if your efforts have been moved. If
>> you believe you're close to implementing the feature in the next week
>> or two, let me know and we can move it back into Queens. If you think
>> it will take an extended period of time (more than 2 weeks) to land
>> but we need it in Queens, please submit an FFE.
>
> I'd like to get the ball rolling on applying for an FFE for:
> https://blueprints.launchpad.net/tripleo/+spec/multiarch-support
>
> So how do I do that thing? For requirements it's just a thread on the
> mailing list; is there something more formal for tripleo?

I just need an understanding on the impact and the timeline. Replying
here is sufficient.

I assume since some of this work was sort of done earlier outside of
tripleo and does not affect the default installation path that most
folks will consume, it shouldn't be impacting to general testing or
increase regressions. My general requirement for anyone who needed an
FFE for functionality that isn't essential is that it's off by default,
has minimal impact to the existing functionality and we have a rough
estimate on feature landing. Do you have an idea when you expect to land
this functionality? Additionally the patches seem to be primarily around
the ironic integration so have those been sorted out?

Thanks,
-Alex

> Yours Tony.
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Sat, Dec 9, 2017 at 12:35 AM Alex Schultz wrote: > Please take some time to review the list of blueprints currently > associated with Rocky[0] to see if your efforts have been moved. If > you believe you're close to implementing the feature in the next week > or two, let me know and we can move it back into Queens. If you think > it will take an extended period of time (more than 2 weeks) to land > but we need it in Queens, please submit an FFE. > As discussed on IRC today, I'd like to try to implement https://blueprints.launchpad.net/tripleo/+spec/tripleo-realtime by Queens M3. It has been punted many releases already, and now depends on the ironic ansible driver, which just merged and is getting its finishing touches. Since it's a pure add-on feature that is off by default and shouldn't have an impact on existing functionality, it's a pretty safe thing to try on a best-effort basis. If it becomes unfeasible to land this by M3, I will punt it. Even if I make good progress next week, I am very unlikely to finish it this year, so I'd also like to submit an FFE for it. Cheers, Sven
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Fri, Dec 08, 2017 at 04:34:09PM -0700, Alex Schultz wrote: > Hey folks, > > So I went through the list of blueprints and moved some that were > either not updated or appeared to have a bunch of patches not in a > mergeable state. > > Please take some time to review the list of blueprints currently > associated with Rocky[0] to see if your efforts have been moved. If > you believe you're close to implementing the feature in the next week > or two, let me know and we can move it back into Queens. If you think > it will take an extended period of time (more than 2 weeks) to land > but we need it in Queens, please submit an FFE. I'd like to get the ball rolling on applying for an FFE for: https://blueprints.launchpad.net/tripleo/+spec/multiarch-support So how do I do that thing? For requirements it's just a thread on the mailing list; is there something more formal for tripleo? Yours Tony.
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Tue, Dec 12, 2017 at 4:56 AM, Moshe Levi wrote: > I believe so, > Just one thing: the sample-env-generator tool does not support > parameters for roles, such as: > > # Kernel arguments for ComputeSriov node > ComputeSriovParameters: > KernelArgs: "intel_iommu=on iommu=pt" > OvsHwOffload: True > It can, you just have to document them in the env generator file. See https://github.com/openstack/tripleo-heat-templates/blob/master/sample-env-generator/composable-roles.yaml#L128 This has the advantage of being able to properly document the parameters for end user consumption. > So can we merge the patches as is and fix the sample-env-generator later? If you throw up a WIP patch for this on top of the existing one, I'll merge it. I just don't want it forgotten, as it's important for the end user to be able to consume these environment files via the UI as well. Thanks, -Alex > >> -----Original Message----- >> From: Alex Schultz [mailto:aschu...@redhat.com] >> Sent: Tuesday, December 12, 2017 12:20 AM >> To: OpenStack Development Mailing List (not for usage questions) >> >> Subject: Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky >> >> On Mon, Dec 11, 2017 at 1:20 PM, Brad P. Crochet >> wrote: >> > >> > >> > On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz wrote: >> >> >> >> Hey folks, >> >> >> >> So I went through the list of blueprints and moved some that were >> >> either not updated or appeared to have a bunch of patches not in a >> >> mergeable state. >> >> >> >> Please take some time to review the list of blueprints currently >> >> associated with Rocky[0] to see if your efforts have been moved. If >> >> you believe you're close to implementing the feature in the next week >> >> or two, let me know and we can move it back into Queens. If you think >> >> it will take an extended period of time (more than 2 weeks) to land >> >> but we need it in Queens, please submit an FFE. 
>> >>
>> >
>> > I think these are in a close enough state to warrant inclusion in Queens:
>> >
>> > https://blueprints.launchpad.net/tripleo/+spec/get-networks-action
>> > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action
>> > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow
>> > https://blueprints.launchpad.net/tripleo/+spec/update-networks-action
>> > https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks
>> > https://blueprints.launchpad.net/tripleo/+spec/update-roles-action
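The role-parameter documentation Alex describes above would live in the sample-env-generator config alongside the other environment entries. The following is a hypothetical sketch only: the environment name, file path, and sample values are illustrative, and the keys mirror the layout of the composable-roles.yaml file linked above rather than a verified schema for this exact case.

```yaml
environments:
  - name: sriov/compute-sriov-hw-offload  # illustrative name
    title: ComputeSriov role with OVS hardware offload
    description: |
      Documents the ComputeSriov role-specific parameters so the
      generated sample environment carries proper descriptions and
      is consumable from the UI as well as the CLI.
    files:
      overcloud.yaml:  # illustrative path
        parameters:
          - OvsHwOffload
    sample_values:
      OvsHwOffload: true
      # Kernel arguments for ComputeSriov nodes
      ComputeSriovParameters:
        KernelArgs: "intel_iommu=on iommu=pt"
```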
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Mon, Dec 11, 2017 at 5:20 PM Alex Schultz wrote: > On Mon, Dec 11, 2017 at 1:20 PM, Brad P. Crochet wrote: > > > > > > On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz wrote: > >> > >> Hey folks, > >> > >> So I went through the list of blueprints and moved some that were > >> either not updated or appeared to have a bunch of patches not in a > >> mergeable state. > >> > >> Please take some time to review the list of blueprints currently > >> associated with Rocky[0] to see if your efforts have been moved. If > >> you believe you're close to implementing the feature in the next week > >> or two, let me know and we can move it back into Queens. If you think > >> it will take an extended period of time (more than 2 weeks) to land > >> but we need it in Queens, please submit an FFE. > >> > > > > I think these are in a close enough state to warrant inclusion in Queens: > > > > https://blueprints.launchpad.net/tripleo/+spec/get-networks-action > > > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action > > > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow > > https://blueprints.launchpad.net/tripleo/+spec/update-networks-action > > https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks > > https://blueprints.launchpad.net/tripleo/+spec/update-roles-action > > > > Ok I reviewed them and they do appear to have patches posted and are > getting reviews. I'll pull them back in to Queens and set the > milestone to queens-3. Please make sure to update us on the status > during this week and next week's IRC meetings. I would like to make > sure these land ASAP. Do you think they should be in a state to land > by the end of next week, say 12/21? > > Thanks, > -Alex > > Yes. They will be in a state to land by 12/21. Thanks, Brad > > There is a good chance of these being completed in the coming week. > > > > Thanks, > > > > Brad > >> > >> > > -- > > Brad P. 
Crochet, RHCA, RHCE, RHCVA, RHCDS > > Principal Software Engineer > > (c) 704.236.9385 -- Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS Principal Software Engineer (c) 704.236.9385
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
I believe so, Just one thing regarding the sample-env-generator tool: it does not support parameters for roles, such as: # Kernel arguments for ComputeSriov node ComputeSriovParameters: KernelArgs: "intel_iommu=on iommu=pt" OvsHwOffload: True So can we merge the patches as is and fix the sample-env-generator later? > -----Original Message----- > From: Alex Schultz [mailto:aschu...@redhat.com] > Sent: Tuesday, December 12, 2017 12:20 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky > > On Mon, Dec 11, 2017 at 1:20 PM, Brad P. Crochet > wrote: > > > > > > On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz wrote: > >> > >> Hey folks, > >> > >> So I went through the list of blueprints and moved some that were > >> either not updated or appeared to have a bunch of patches not in a > >> mergeable state. > >> > >> Please take some time to review the list of blueprints currently > >> associated with Rocky[0] to see if your efforts have been moved. If > >> you believe you're close to implementing the feature in the next week > >> or two, let me know and we can move it back into Queens. If you think > >> it will take an extended period of time (more than 2 weeks) to land > >> but we need it in Queens, please submit an FFE. 
> >>
> >
> > I think these are in a close enough state to warrant inclusion in Queens:
> >
> > https://blueprints.launchpad.net/tripleo/+spec/get-networks-action
> > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action
> > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow
> > https://blueprints.launchpad.net/tripleo/+spec/update-networks-action
> > https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks
> > https://blueprints.launchpad.net/tripleo/+spec/update-roles-action
> >
> Ok I reviewed them and they do appear to have patches posted and are getting reviews. I'll pull them back in to Queens and set the milestone to queens-3. Please make sure to update us on the status during this week and next week's IRC meetings. I would like to make sure these land ASAP. Do you think they should be in a state to land by the end of next week, say 12/21?
>
> Thanks,
> -Alex
>
> > There is a good chance of these being completed in the coming week.
> >
> > Thanks,
> >
> > Brad
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Mon, Dec 11, 2017 at 1:20 PM, Brad P. Crochet wrote: > > > On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz wrote: >> >> Hey folks, >> >> So I went through the list of blueprints and moved some that were >> either not updated or appeared to have a bunch of patches not in a >> mergeable state. >> >> Please take some time to review the list of blueprints currently >> associated with Rocky[0] to see if your efforts have been moved. If >> you believe you're close to implementing the feature in the next week >> or two, let me know and we can move it back into Queens. If you think >> it will take an extended period of time (more than 2 weeks) to land >> but we need it in Queens, please submit an FFE. >> > > I think these are in a close enough state to warrant inclusion in Queens: > > https://blueprints.launchpad.net/tripleo/+spec/get-networks-action > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow > https://blueprints.launchpad.net/tripleo/+spec/update-networks-action > https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks > https://blueprints.launchpad.net/tripleo/+spec/update-roles-action > Ok I reviewed them and they do appear to have patches posted and are getting reviews. I'll pull them back in to Queens and set the milestone to queens-3. Please make sure to update us on the status during this week and next week's IRC meetings. I would like to make sure these land ASAP. Do you think they should be in a state to land by the end of next week, say 12/21? Thanks, -Alex > There is a good chance of these being completed in the coming week. > > Thanks, > > Brad >> >> > -- > Brad P. 
Crochet, RHCA, RHCE, RHCVA, RHCDS > Principal Software Engineer > (c) 704.236.9385
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz wrote: > Hey folks, > > So I went through the list of blueprints and moved some that were > either not updated or appeared to have a bunch of patches not in a > mergeable state. > > Please take some time to review the list of blueprints currently > associated with Rocky[0] to see if your efforts have been moved. If > you believe you're close to implementing the feature in the next week > or two, let me know and we can move it back into Queens. If you think > it will take an extended period of time (more than 2 weeks) to land > but we need it in Queens, please submit an FFE. > > I think these are in a close enough state to warrant inclusion in Queens: https://blueprints.launchpad.net/tripleo/+spec/get-networks-action https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow https://blueprints.launchpad.net/tripleo/+spec/update-networks-action https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks https://blueprints.launchpad.net/tripleo/+spec/update-roles-action There is a good chance of these being completed in the coming week. Thanks, Brad > > -- Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS Principal Software Engineer (c) 704.236.9385
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
> -----Original Message-----
> From: Alex Schultz [mailto:aschu...@redhat.com]
> Sent: Monday, December 11, 2017 8:31 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
>
> On Fri, Dec 8, 2017 at 6:11 PM, Moshe Levi wrote:
> > Hi Alex,
> >
> > I don't see the tripleo ovs hardware offload feature. The spec was merged into queens [1], but for some reason the blueprint is not in the approved state [2].
>
> Just as a reminder, it's everyone's responsibility to make sure their blueprints are properly up to date. I've mentioned this in the weekly meeting a few[0][1] times in the last month. I've added the patches from this email into the blueprint for tracking.

Sorry, I didn't know.

> > I have only 3 patches left:
> > 1. https://review.openstack.org/#/c/507401/ has 2 +2
> > 2. https://review.openstack.org/#/c/507100/ has 1 +2
> > 3. https://review.openstack.org/#/c/518715/
> >
> > I would appreciate it if we could land all the patches in the queens release.
>
> We should be able to land these and they appear to be in decent shape. Please reach out on irc if you aren't getting additional reviews on these. It would be really beneficial to land these in the next week or so if possible.

Ok I will. Thanks Alex.

> Thanks,
> -Alex
>
> [0] http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-11-28-14.00.log.html#l-106
> [1] http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-11-14-14.01.log.html#l-170
>
> > [1] - https://review.openstack.org/#/c/502313/
> > [2] - https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-hw-offload
> >
> >> -----Original Message-----
> >> From: Alex Schultz [mailto:aschu...@redhat.com]
> >> Sent: Saturday, December 9, 2017 1:34 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [openstack-dev] [tripleo] Blueprints moved out to Rocky
> >>
> >> Hey folks,
> >>
> >> So I went through the list of blueprints and moved some that were
> >> either not updated or appeared to have a bunch of patches not in a
> >> mergeable state.
> >>
> >> Pleas
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
On Fri, Dec 8, 2017 at 6:11 PM, Moshe Levi wrote: > Hi Alex, > > I don't see the tripleo ovs hardware offload feature. The spec was merged into > queens [1], but for some reason the blueprint is not in the approved state [2]. > Just as a reminder, it's everyone's responsibility to make sure their blueprints are properly up to date. I've mentioned this in the weekly meeting a few[0][1] times in the last month. I've added the patches from this email into the blueprint for tracking > I have only 3 patches left: > 1. https://review.openstack.org/#/c/507401/ has 2 +2 > 2. https://review.openstack.org/#/c/507100/ has 1 +2 > 3. https://review.openstack.org/#/c/518715/ > > I would appreciate it if we could land all the patches in the queens release. We should be able to land these and they appear to be in decent shape. Please reach out on irc if you aren't getting additional reviews on these. It would be really beneficial to land these in the next week or so if possible. Thanks, -Alex [0] http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-11-28-14.00.log.html#l-106 [1] http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-11-14-14.01.log.html#l-170 > > [1] - https://review.openstack.org/#/c/502313/ > [2] - https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-hw-offload > >> -----Original Message----- >> From: Alex Schultz [mailto:aschu...@redhat.com] >> Sent: Saturday, December 9, 2017 1:34 AM >> To: OpenStack Development Mailing List (not for usage questions) >> >> Subject: [openstack-dev] [tripleo] Blueprints moved out to Rocky >> >> Hey folks, >> >> So I went through the list of blueprints and moved some that were either not >> updated or appeared to have a bunch of patches not in a mergeable state. >> >> Please take some time to review the list of blueprints currently associated >> with Rocky[0] to see if your efforts have been moved. 
If you believe you're >> close to implementing the feature in the next week or two, let me know and >> we can move it back into Queens. If you think it will take an extended period >> of time (more than 2 weeks) to land but we need it in Queens, please submit >> an FFE. >> >> If you have a blueprint that is currently not implemented in Queens[1], >> please make sure to update the blueprint status if possible. For the ones I >> left in due to the patches being in a decent state, please make sure those get >> merged in the next few weeks or we will need to push them out to Rocky.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://blueprints.launchpad.net/tripleo/rocky
>> [1] https://blueprints.launchpad.net/tripleo/queens
Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
Hi Alex, I don't see the tripleo ovs hardware offload feature. The spec was merged into queens [1], but for some reason the blueprint is not in the approved state [2]. I have only 3 patches left: 1. https://review.openstack.org/#/c/507401/ has 2 +2 2. https://review.openstack.org/#/c/507100/ has 1 +2 3. https://review.openstack.org/#/c/518715/ I would appreciate it if we could land all the patches in the queens release. [1] - https://review.openstack.org/#/c/502313/ [2] - https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-hw-offload > -----Original Message----- > From: Alex Schultz [mailto:aschu...@redhat.com] > Sent: Saturday, December 9, 2017 1:34 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [tripleo] Blueprints moved out to Rocky > > Hey folks, > > So I went through the list of blueprints and moved some that were either not > updated or appeared to have a bunch of patches not in a mergeable state. > > Please take some time to review the list of blueprints currently associated > with Rocky[0] to see if your efforts have been moved. If you believe you're > close to implementing the feature in the next week or two, let me know and > we can move it back into Queens. If you think it will take an extended period > of time (more than 2 weeks) to land but we need it in Queens, please submit > an FFE. > > If you have a blueprint that is currently not implemented in Queens[1], > please make sure to update the blueprint status if possible. For the ones I > left in due to the patches being in a decent state, please make sure those get > merged in the next few weeks or we will need to push them out to Rocky. 
>
> Thanks,
> -Alex
>
> [0] https://blueprints.launchpad.net/tripleo/rocky
> [1] https://blueprints.launchpad.net/tripleo/queens