Re: [openstack-dev] [puppet] Ubuntu problems + Help needed
> Just a note, the queens repo is not currently synced in the infra so
> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
> queens to the infra configuration to resolve this:
> https://review.openstack.org/529670

As a follow-up, the mirrors have landed and two of the four scenarios now pass. Scenario001 is failing on ceilometer-api, which was removed, so I have a patch[0] to remove it. Scenario004 is having issues with neutron, and the DB looks to be very unhappy[1].

Thanks,
-Alex

[0] https://review.openstack.org/529787
[1] http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [release][PTL] Cycle highlights reminder
On 12/14/2017 2:24 PM, Sean McGinnis wrote:

Hey all,

As we get closer to Queens-3 and our final RCs, I wanted to remind everyone about the new 'cycle-highlights' we have added to our deliverable info.

Background
----------

As a reminder on the background, we were finding that a lot of PTLs were getting pinged several times at the end of every release cycle by various folks asking for highlights of what was new and what significant changes were coming in the new release. It was often the same answer to journalists, product managers, and others who needed to compile that info. To mitigate that somewhat, we've built in the ability to capture these highlights as part of the release. They get compiled and published to the web site so we have one place to point these folks to. It is intended as a place where they can get the basic info they need, not as a complete marketing message.

As you prepare for upcoming releases, please start to consider what you might want to show up in this collection. We ideally want just a few highlights, probably no more than 3 or 4 in most cases, from each project team.

Usage
-----

This is done by adding some information to deliverables/queens/$PROJECT.yaml in the openstack/releases repo, similar to this:

cycle-highlights:
  - Introduced new service to use unused host to mine bitcoin.

The formatting options for this tag are the same as what you are probably used to with Reno release notes. You can check the formatting of the output either by running locally:

tox -e docs

and checking the resulting file under doc/build/html/queens/highlights.html, or you can view the output of the build-openstack-sphinx-docs job under html/queens/highlights.html.
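Since the tag accepts the same RST formatting as Reno release notes, a multi-line highlight with markup is also possible. A hypothetical sketch (the feature and URL are invented examples, not from any real deliverable):

```yaml
# Hypothetical excerpt of deliverables/queens/$PROJECT.yaml;
# the highlight text below is an invented example.
cycle-highlights:
  - Introduced new service to use unused host to mine bitcoin.
  - |
    Added a new ``example-feature`` API (hypothetical) allowing
    operators to query per-project usage. See the `docs
    <https://docs.openstack.org/example>`_ for details.
```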
Ongoing
-------

We now have three different places that document activities for three different audiences:

Commit messages: Developer documentation
Release notes: End user and deployer documentation
Cycle highlights: Journalists, product managers, and others

This will be the first release cycle with this last data source. It is our hope that this will ease some of the burden for this information from the PTLs. If you still get requests, please try to direct them here to spread awareness.

Thanks!

--
Sean McGinnis (smcginnis)

I didn't see this before the q1 or q2 tags - can the cycle highlights be applied retroactively?

--
Thanks,
Matt
Re: [openstack-dev] [all] Switching to longer development cycles
On 12/21/2017 4:37 PM, Joshua Harlow wrote:
> My 3 cents are that this isn't something that you get asked to put in
> by operators; it's something that is built in from the start. Just
> look at other workflow systems (which is really what
> nova/cinder/neutron... are); they don't try to add this functionality
> in later (or they shouldn't, at least), but c'est la vie... With that
> stated, I would agree this is a community-wide goal (and a very, very
> hard one, since every single project 'forgot' to build this in from
> the start).
>
> https://raw.githubusercontent.com/spotify/luigi/master/doc/visualiser_front_page.png
> (another example of another project's UI that does something similar).

Seems like something you could build by consuming the notification RPC queue events in an external system. Most operations in nova have a start/end/error notification sequence. You can track when there are tasks running based on the start events, and when they finish based on the end/error events.

So this is something you could build with what's available, as a separate optional dashboard thing. Not everything needs to be baked directly into a project as long as that project provides the APIs for consumers to build from it. Very similar concept to the zaqar thing that was discussed in the -tc channel yesterday.

--
Thanks,
Matt
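The tracking idea Matt describes can be sketched as a toy model. The event-type names follow nova's "compute.instance.<action>.start|end|error" convention, but the tracker class itself is a hypothetical illustration, not nova's actual notification consumer:

```python
# Minimal sketch of tracking in-flight operations from start/end/error
# notification events. TaskTracker is a hypothetical external consumer;
# only the event-naming convention comes from nova.
class TaskTracker:
    def __init__(self):
        self.in_flight = {}  # (instance_id, operation) -> start payload

    def consume(self, event_type, payload):
        # Split "compute.instance.create.start" into operation + phase.
        operation, _, phase = event_type.rpartition('.')
        key = (payload['instance_id'], operation)
        if phase == 'start':
            self.in_flight[key] = payload
        elif phase in ('end', 'error'):
            self.in_flight.pop(key, None)

    def running(self):
        return sorted(self.in_flight)


tracker = TaskTracker()
tracker.consume('compute.instance.create.start', {'instance_id': 'a1'})
tracker.consume('compute.instance.reboot.start', {'instance_id': 'b2'})
tracker.consume('compute.instance.create.end', {'instance_id': 'a1'})
print(tracker.running())  # -> [('b2', 'compute.instance.reboot')]
```

A dashboard could render `running()` as the "currently in progress" view, without any of this living inside nova itself.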
Re: [openstack-dev] [all] Switching to longer development cycles
My 3 cents are that this isn't something that you get asked to put in by operators; it's something that is built in from the start. Just look at other workflow systems (which is really what nova/cinder/neutron... are); they don't try to add this functionality in later (or they shouldn't, at least), but c'est la vie...

With that stated, I would agree this is a community-wide goal (and a very, very hard one, since every single project 'forgot' to build this in from the start).

https://raw.githubusercontent.com/spotify/luigi/master/doc/visualiser_front_page.png (another example of another project's UI that does something similar).

Matt Riedemann wrote:
> On 12/19/2017 2:29 PM, Joshua Harlow wrote:
>> * Clear workflow state (and transition) 'machine' that is followed in
>> code and can be used by operators/others such as UI developers to get
>> a view on what nova is or is not doing (may fit under the broad topic
>> of observability?)
>>
>> Take for example http://flower.readthedocs.io/en/latest/screenshots.html
>> and ask yourself why nova-compute (or nova-conductor or
>> nova-scheduler...) doesn't have an equivalent kind of 'viewer' (and
>> no, it doesn't need to be flower, that's just an example...)
>
> OK... first I've heard of this too. Is this something that the majority
> of people deploying, operating and/or using Nova are asking for as a
> priority? Also, this doesn't just seem like a nova thing - this smells
> like a community-wide goal.
[openstack-dev] [release] Release countdown for week R-9 and R-8, December 23 - January 5
Welcome to the extended Festivus edition of the countdown email. And now you're gonna hear about it.

Development Focus
-----------------

Teams should be focused on implementing planned work for the cycle. It is also a good time to review those plans and reprioritize if needed, based on what progress has been made and what looks realistic to complete in the next few weeks.

General Information
-------------------

The deadline for extra ATCs is coming up on January 12. If there is someone who contributes to your project in a way that is not reflected by the usual metrics, this is the last chance to have them recognized and therefore able to vote in PTL elections. Extra ATCs can be added by submitting an update to the reference/projects.yaml file in the openstack/governance repo.

We have a few other deadlines coming up as we get closer to the end of the cycle:

* Non-client libraries (generally, any library that is not python-${PROJECT}client) must have a final release by January 18. Only critical bugfixes will be allowed past this point. Please make sure any important feature work has its required library changes in by this time.
* Client libraries must have a final release by January 25.

Upcoming Deadlines & Dates
--------------------------

Final non-client library release deadline: January 18
Final client library release deadline: January 25
Queens-3 Milestone: January 25
Rocky PTG in Dublin: Week of February 26, 2018

--
Sean McGinnis (smcginnis)
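For reference, the extra-ATC update mentioned above takes roughly this shape in reference/projects.yaml (the person, email, and date are invented examples; check the governance repo for the exact schema):

```yaml
# Hypothetical excerpt of reference/projects.yaml in openstack/governance;
# the entry below is an invented example.
nova:
  extra-atcs:
    - name: Jane Doe
      email: jane@example.com
      expires-in: February 2019
```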
Re: [openstack-dev] Race in FixedIP.associate_pool
On Fri, 15 Dec 2017 18:38:00 -0800, Arun Sag wrote:
> Here is the sequence of actions that happens in nova-network:
>
> 1. allocate_for_instance calls -> allocate_fixed_ips
> 2. FixedIPs are successfully associated (we can see this in the log)
> 3. allocate_for_instance calls get_instance_nw_info, which in turn gets
>    the FixedIPs associated in step 2 using
>    objects.FixedIPList.get_by_instance_uuid. This raises a
>    FixedIPNotFound exception.
>
> When we removed the slave and ran with just a single master, the errors
> went away. When we switched to semi-synchronous replication between
> master and slave, the errors went away too. All of this points to a
> race between write and read to the DB. Does openstack expect
> synchronous replication to read-only slaves?

No, synchronous replication to read-only slaves is not expected. The way this is handled is that oslo.db has the notion of an "async reader", which is safe to use on an asynchronously updated slave database, and a regular "reader", which is only safe to use on a synchronously updated slave database; otherwise the master database will be used [1]. In nova, we indicate to oslo.db whether a database API method is safe for use on an asynchronously updated slave database using decorators [2][3]. There are few methods decorated this way.

The method you're seeing the race with, fixed_ip_get_by_instance [4], is decorated with the "reader" decorator, indicating that it's only safe for a synchronously updated slave database; otherwise it will use the master. So this query should *not* be going to an asynchronously updated slave database. If you're using asynchronous replication, it should be going to the master.

Have you patched any nova/db/sqlalchemy/api method decorators or patched oslo.db at all to use the "async reader" for more methods? If not, then it's possible there is a bug in oslo.db or nova related to "async reader" state leaking across green threads.
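The routing rule described above can be reduced to a toy model. This is an illustration of the reader/async-reader distinction only, not oslo.db's actual implementation (oslo.db does this via enginefacade context managers, linked in [1] below):

```python
# Toy model of oslo.db's reader routing, for illustration only.
# A plain "reader" may use the slave only when replication is
# synchronous; an "async_reader" may use the slave either way.

def route(reader_mode, synchronous_replication):
    """Return which database a query would be sent to."""
    if reader_mode == 'async_reader':
        return 'slave'  # safe even on async-replicated slaves
    if reader_mode == 'reader':
        # Plain readers tolerate only synchronously updated slaves.
        return 'slave' if synchronous_replication else 'master'
    return 'master'  # writers always hit the master

# fixed_ip_get_by_instance is a plain "reader", so with asynchronous
# replication it should be routed to the master:
print(route('reader', synchronous_replication=False))        # -> master
print(route('async_reader', synchronous_replication=False))  # -> slave
```

If the query is observed hitting an async slave despite the plain "reader" decorator, that points at the kind of state-leak bug discussed here.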
Which reminds me of a fairly recent bug [5] we ran into when doing a concurrent scatter-gather to multiple cell databases. You might try the patch [6] locally to see if it changes the behavior when you have asynchronous replication enabled. We had thought only scatter-gather was affected (which was introduced in Pike), but it's possible the async slave database read might also be affected. If you could try that patch, please let me know whether it helps and we will backport it.

Thanks,
-melanie

[1] https://github.com/openstack/oslo.db/blob/0260f0e/oslo_db/sqlalchemy/enginefacade.py#L44-L59
[2] https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L214-L219
[3] https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L272
[4] https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1469-L1470
[5] https://bugs.launchpad.net/nova/+bug/1722404
[6] https://review.openstack.org/#/c/511651
Re: [openstack-dev] [puppet] Ubuntu problems + Help needed
On Thu, Dec 21, 2017 at 10:40 AM, Alex Schultz wrote:
> Currently they are all globally failing in master (we are using pike
> still[0], which is probably the problem) in the tempest run[1] due to:
>
> AttributeError: 'module' object has no attribute 'requires_ext'
>
> I've submitted a patch[2] to switch UCA to queens. If history is any
> indication, it will probably end up with a bunch of failing tests that
> will need to be looked at. Feel free to follow along/help with the
> switch.

Just a note, the queens repo is not currently synced in the infra, so the queens repo patch is failing on Ubuntu jobs. I've proposed adding queens to the infra configuration to resolve this: https://review.openstack.org/529670

> Thanks,
> -Alex
>
> [0] https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
> [1] http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
> [2] https://review.openstack.org/#/c/529657/
>
> On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin wrote:
>> Thanks for letting us know!
>>
>> I can push for time on this if we can get a list.
>>
>> Best regards
>> Tobias
>>
>> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
>> Some pointers for perusal as to the observed problems would be
>> helpful. Thanks!
>>
>> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short wrote:
>>>
>>> Hi Mohammed,
>>>
>>> I might be able to help; where can I find this info?
>>>
>>> Thanks,
>>> chuck
>>>
>>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser wrote:
>>>>
>>>> Hi everyone,
>>>>
>>>> I'll get right to the point.
>>>>
>>>> At the moment, the Puppet OpenStack modules don't have many
>>>> contributors who can help maintain the Ubuntu support. We deploy on
>>>> CentOS (so we try to get all the fixes in that we can) and there is
>>>> a lot of activity from the TripleO team as well, which does its
>>>> deployments on CentOS, which means that the CentOS support is very
>>>> reliable and the CI is well looked after.
>>>>
>>>> However, starting a while back, we began seeing occasional failures
>>>> with Ubuntu deploys, which led us to set the job to non-voting. At
>>>> the moment, the Puppet integration jobs for Ubuntu are always
>>>> failing because of some Tempest issue. This means that with every
>>>> Puppet change, we're wasting ~80 minutes of CI run time on a job
>>>> that will always fail.
>>>>
>>>> We've had a lot of support from the packaging team at RDO (whose
>>>> packages are used in Puppet deployments), and they run our
>>>> integration tests before promoting packages, which helps us find
>>>> issues together. However, we do not have that with Ubuntu, nor has
>>>> anyone taken the initiative to look at and investigate those issues.
>>>>
>>>> I understand that there are users out there who use Ubuntu with the
>>>> Puppet OpenStack modules. We need your help to come and try to clear
>>>> those issues out. We'd be more than happy to give assistance to lead
>>>> you in the right direction to help fix those issues.
>>>>
>>>> Unfortunately, if we don't have any folks stepping up to resolve
>>>> this, we'll be forced to drop all CI for Ubuntu and make a note to
>>>> users that Ubuntu is not fully tested, and hope that as users run
>>>> into issues, they can contribute fixes back (or that someone can
>>>> work on getting Ubuntu gating working again).
>>>>
>>>> Thanks for reading through this. I am quite sad that we'd have to
>>>> drop support for such a major operating system, but there's only so
>>>> much we can do with a much smaller team.
>>>>
>>>> Thank you,
>>>> Mohammed
>>
>> --
>> Andrew Woodward
Re: [openstack-dev] [neutron][networking-ovn] Stable branch maintainers for networking-ovn
For the record, I added Lucas to the gerrit group. I assume he will mostly focus on OVN patches. You are still welcome to review other repo patches, and if you stick with it, I am happy to expand your role to cover all of them.

As for OVN, you only have one +2, and existing stable reviewers may not be responsive. Don't hesitate to poke us about patches that block your progress or don't attract enough attention in, let's say, a week.

Welcome to the game and enjoy!

Ihar

On Thu, Dec 21, 2017 at 1:55 AM, Lucas Alvares Gomes wrote:
> Hi,
>
>> Please tell me who from the OVN group is ready to take the burden, and
>> I will make you part of neutron-stable-maint. I think it's ok to be
>> more laissez-faire with backports for subprojects than we were used
>> to, with the recent drop in core team membership and reduced capacity.
>
> Great! I will reach out to the OVN core team so we can make a decision
> about who should do it.
>
> Cheers,
> Lucas
Re: [openstack-dev] [puppet] Ubuntu problems + Help needed
Currently they are all globally failing in master (we are using pike still[0], which is probably the problem) in the tempest run[1] due to:

AttributeError: 'module' object has no attribute 'requires_ext'

I've submitted a patch[2] to switch UCA to queens. If history is any indication, it will probably end up with a bunch of failing tests that will need to be looked at. Feel free to follow along/help with the switch.

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
[1] http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
[2] https://review.openstack.org/#/c/529657/

On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin wrote:
> Thanks for letting us know!
>
> I can push for time on this if we can get a list.
>
> Best regards
> Tobias
>
> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
> Some pointers for perusal as to the observed problems would be helpful.
> Thanks!
>
> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short wrote:
>>
>> Hi Mohammed,
>>
>> I might be able to help; where can I find this info?
>>
>> Thanks,
>> chuck
>>
>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser wrote:
>>>
>>> Hi everyone,
>>>
>>> I'll get right to the point.
>>>
>>> At the moment, the Puppet OpenStack modules don't have many
>>> contributors who can help maintain the Ubuntu support. We deploy on
>>> CentOS (so we try to get all the fixes in that we can) and there is a
>>> lot of activity from the TripleO team as well, which does its
>>> deployments on CentOS, which means that the CentOS support is very
>>> reliable and the CI is well looked after.
>>>
>>> However, starting a while back, we began seeing occasional failures
>>> with Ubuntu deploys, which led us to set the job to non-voting. At
>>> the moment, the Puppet integration jobs for Ubuntu are always failing
>>> because of some Tempest issue. This means that with every Puppet
>>> change, we're wasting ~80 minutes of CI run time on a job that will
>>> always fail.
>>>
>>> We've had a lot of support from the packaging team at RDO (whose
>>> packages are used in Puppet deployments), and they run our
>>> integration tests before promoting packages, which helps us find
>>> issues together. However, we do not have that with Ubuntu, nor has
>>> anyone taken the initiative to look at and investigate those issues.
>>>
>>> I understand that there are users out there who use Ubuntu with the
>>> Puppet OpenStack modules. We need your help to come and try to clear
>>> those issues out. We'd be more than happy to give assistance to lead
>>> you in the right direction to help fix those issues.
>>>
>>> Unfortunately, if we don't have any folks stepping up to resolve
>>> this, we'll be forced to drop all CI for Ubuntu and make a note to
>>> users that Ubuntu is not fully tested, and hope that as users run
>>> into issues, they can contribute fixes back (or that someone can work
>>> on getting Ubuntu gating working again).
>>>
>>> Thanks for reading through this. I am quite sad that we'd have to
>>> drop support for such a major operating system, but there's only so
>>> much we can do with a much smaller team.
>>>
>>> Thank you,
>>> Mohammed

--
Andrew Woodward
Re: [openstack-dev] [requirements][mistral][vitrage][octavia][taskflow][watcher] Networkx version 2.0
I don't expect it to be a big thing for taskflow; I'll mess around this weekend. AFAIK most of the changes in networkx 2.0 were around making all the things iterators.

Matthew Thode wrote:
> On 17-12-20 15:51:17, Afek, Ifat (Nokia - IL/Kfar Sava) wrote:
>> Hi,
>>
>> There is an open bug in Launchpad about the new release of Networkx
>> 2.0, which is backward-incompatible with versions 1.x [1]. Is there a
>> plan to change the Networkx version in the global requirements in
>> Queens? We need to make some code refactoring in Vitrage, and I'm
>> trying to understand how urgent it is.
>>
>> [1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576
>
> Mistral, Vitrage, Octavia, Taskflow, Watcher
>
> Those are the projects using NetworkX that'd need to be updated.
>
> http://codesearch.openstack.org/?q=networkx=nope=.*requirements.*=
>
> I'm open to uncapping networkx if these projects have buy-in.
Re: [openstack-dev] [sahara] [api] [sdks] [keystone] Sahara APIv2: service discovery
I've been delayed in actually starting the grunt work on this. If anyone else besides Monty is also able to chime in, feel free. I'm a bit lost trying to find examples of clients doing version discovery + endpoint manipulation the "right way". If I could find a good example, life would be easier.

Stuff that constitutes the "right way":

- use keystoneauth for version discovery (should we be using add_catalog_discover_hack [0] in Sahara's case?)
- actually use keystoneauth discovery features when creating the client object (in my case, modifying [1])
- somehow put the project id back in the URL (depending on at what point in the process this happens, it may be possible to use just "%(project_id)s" rather than the actual project id; not sure)

All help appreciated. Thanks!

[0] https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/discover.py#L1224
[1] https://github.com/openstack/python-saharaclient/blob/master/saharaclient/api/client.py

On Thu, Nov 30, 2017 at 7:34 PM, Monty Taylor wrote:
>
> On 11/30/2017 03:07 PM, Jeremy Freudberg wrote:
>>
>> Hi all,
>>
>> In the Sahara world, we are getting ready to expose our experimental
>> v2 API to real users and not just curious devs. Therefore we need to
>> start thinking about service/version discovery of this new API.
>
> \o/
>
>> Earlier this year, the service types authority was created, and one of
>> the things it asserted was that having different service types for
>> each API version (like Cinder and Mistral did) is bad.
>>
>> So it would follow that we should not adopt the `data-processingv2`
>> service type.
>
> Yes. Please don't... the service-types data has made its way into many
> places now.
>
>> Unfortunately it's not so easy, because Sahara API v1 relies on the
>> project ID in the URL and is therefore expected to be registered with
>> the project ID template in the Keystone service catalog. But API v2
>> does not accept a project ID in the URL.
>>
>> We don't want to break existing clients' ability to discover and use
>> Sahara on all clouds. So if we changed the expectation of the endpoint
>> for the current `data-processing` service type to never contain a
>> project ID, some clients might get spooked. (Correct me if I'm wrong.)
>
> WELL - there's totally a way to do this that works, although it's gonna
> be somewhat annoying.
>
> First and most importantly, you need to update python-saharaclient to
> make sure it can handle an unversioned endpoint in the catalog (by
> doing discovery) - and that if it finds an unversioned endpoint in the
> catalog it knows to prepend the project-id to the URLs it sends. The
> easiest/best way to do this is to make sure it's delegating version
> discovery to keystoneauth... I will be more than happy to help you get
> that updated.
>
> Then, for now, recommend that *new* deployments put the unversioned
> endpoint into their catalog, but that existing deployments keep the v1
> endpoint in the catalog even if they upgrade sahara to a version that
> has v2 as well. (The full description of version discovery describes
> how to get to a newer version even if an older version is in the
> catalog, so people can opt in to v2 if it's there with no trouble.)
>
> That gets us to a state where:
>
> - existing deployments with users using v1 are not broken
> - existing deployments that upgrade can have users opt in to v2 easily
> - new deployments will have both v1 and v2 - but users who want to use
>   v1 will have to do so with a client that understands actually doing
>   discovery
>
> Then let it sit that way for a while, and we can work to make sure that
> other clients with sahara support are also up to date with version
> discovery.
>
> There will eventually come a point where a deployer will decide they
> want to change their catalog from /v1/{project_id} to / ... but by then
> we should have all the clients able to understand discovery fully.
>
>> So we either need to break the rules and create the
>> `data-processingv2` type anyway, or we can create a new type just
>> called, for example, `bigdata` which going forward can be used to
>> discover either v1 or v2 without any interop concerns.
>
> I think renaming to bigdata is less terrible than data-processingv2...
> but let's see if we can't get things to work the other way first -
> there's a lot of churn otherwise.
>
>> This is not an aspect of OpenStack I know a lot about, so any guidance
>> is appreciated. Once we figure out a way forward I will make sure
>> patches get proposed to the service types authority repo.
>
> Almost nobody does. :) But we can totally figure this one out.
>
> Monty
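The client-side behavior Monty describes (use a versioned catalog endpoint as-is; for an unversioned one, pick a version via discovery and prepend the project id for v1-style URLs) can be sketched roughly like this. This is a toy illustration, not python-saharaclient or keystoneauth code; real clients should delegate the discovery step to keystoneauth, and the "discovered versions" input here is faked:

```python
# Toy sketch of handling versioned vs. unversioned catalog endpoints.
# The discovery result is passed in as a plain list for illustration.
import re

def build_base_url(catalog_endpoint, project_id, discovered_versions):
    """Return the base URL a client would use for API requests."""
    if re.search(r'/v\d+', catalog_endpoint):
        # Versioned endpoint in the catalog: use it unchanged
        # (v1 style, project id already templated in).
        return catalog_endpoint.rstrip('/')
    # Unversioned endpoint: pick the newest discovered version, then
    # prepend the project id for the v1-style API (v2 does not take
    # the project id in the URL).
    version = max(discovered_versions)
    base = catalog_endpoint.rstrip('/')
    if version == 1:
        return '%s/v1/%s' % (base, project_id)
    return '%s/v%d' % (base, version)

# Existing deployment: versioned v1 endpoint stays untouched.
print(build_base_url('https://sahara.example.com/v1/abc123', 'abc123', [1]))
# New deployment: unversioned endpoint, discovery finds v1 and v2.
print(build_base_url('https://sahara.example.com/', 'abc123', [1, 2]))
```

This is why both old clients (versioned catalog entries) and discovery-aware clients (unversioned entries) can coexist during the transition.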
[openstack-dev] [nova] Overview of local disk serial numbers series for reviewers
This series is ready for review. As normal I've done my best to break it up into logically separate changes, so it's currently 15 patches long. I've also done my best to write commit messages with the reviewer in mind. If something looks weird, please check if I called it out in the commit message. This email is an overview of the series to help see all 15 patches in context. The spec is here: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/local-disk-serial-numbers.html The patch series is here: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/local-disk-serial-numbers * Add a uuid to BlockDeviceMapping https://review.openstack.org/242602 Add uuid column to BlockDeviceMapping https://review.openstack.org/242603 Make BlockDeviceMapping object support uuid https://review.openstack.org/525599 Add an online migration for BDM.uuid The first 2 were actually originally Dan Smith's patches from 2015, resurrected for the second time. I've cribbed heavily from the original patches and also the recent migration uuid patches. I've added a bunch of robustification to the potentially racy bits, though, so they differ in some details. We want the libvirt driver to use this uuid as a serial number when creating a guest. We now need to expose it to the libvirt driver for all disks. * Expose BlockDeviceMapping.uuid to drivers https://review.openstack.org/524167 DriverBlockDevice: make subclasses inherit _proxy_as_attr https://review.openstack.org/529037 Expose BDM uuid to drivers Drivers don't consume BlockDeviceMapping directly. Instead, BlockDeviceMapping objects are translated to DriverBlockDevice objects and passed in the block_device_info struct. These 2 changes simply add uuid to existing DriverBlockDevice objects. 
* Expose the local root disk BDM to drivers https://review.openstack.org/526346 Give volume DriverBlockDevice classes a common prefix https://review.openstack.org/526347 Add DriverLocalImageBlockDevice https://review.openstack.org/529028 Rename block_device_info_get_root https://review.openstack.org/529029 Add local_root to block_device_info Unfortunately, for reasons I'm still not entirely clear on block_device_info has never contained a DriverBlockDevice object representing a local root disk. The patches create a new DriverBlockDevice class for a local root disk, and add it to block_device_info. We do this in such a way that drivers which don't expect it to be there will never notice it. At this point in the series, drivers have access to BDM.uuid for all of an instance's disks. * Fix an array of incorrect uses of DriverBlockDevice and block_device_info https://review.openstack.org/528362 Expose driver_block_device fields as attributes https://review.openstack.org/528363 Pass DriverBlockDevice to driver.attach_volume https://review.openstack.org/527916 Use real block_device_info data in libvirt tests https://review.openstack.org/529328 Fix libvirt volume tests passing invalid disk_info We're going to make changes which require BDM.uuid to be available. Unfortunately we hardcode BDM data all over the place. That's all going to break, even when the change isn't relevant to the test in question, which it isn't 95% of the time. This mini-series focuses on using real data everywhere we can. This reduces a ton of noise later. The specific driver for the second patch in the series (to driver.attach_volume) is that the libvirt driver needs a DriverBlockDevice there, and was creating a fake one in non-test code. At this point in the series all relevant test and non-test code is actually using the new data. This means we can make changes in the libvirt driver which require this new data without breaking everything. 
* Implement local disk serial numbers for the libvirt driver

  https://review.openstack.org/529329 Pass disk_info dict to libvirt_info
  https://review.openstack.org/529380 Local disk serial numbers for the libvirt driver

Here we finally make the libvirt driver-specific changes to expose BDM.uuid as a serial number for local disks.

Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
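To make the end state of the series concrete, here is a hedged sketch (not nova's actual implementation; paths and names are illustrative) of what the final patches buy us: a local disk's libvirt <disk> element carrying the BDM uuid as its <serial>, so the guest sees a stable identifier for each local disk.

```python
import xml.etree.ElementTree as ET


def disk_xml(source_path, target_dev, serial_uuid):
    """Build a minimal libvirt <disk> element with a serial number."""
    disk = ET.Element('disk', type='file', device='disk')
    ET.SubElement(disk, 'source', file=source_path)
    ET.SubElement(disk, 'target', dev=target_dev, bus='virtio')
    # The serial is what the guest sees, e.g. under /dev/disk/by-id/.
    ET.SubElement(disk, 'serial').text = serial_uuid
    return ET.tostring(disk, encoding='unicode')


xml = disk_xml('/var/lib/nova/instances/i-1/disk', 'vda', 'fake-bdm-uuid')
print(xml)
```

Because the serial comes from the BDM's own uuid rather than being invented per-boot, it stays stable across reboots and migrations, which is the point of the blueprint.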
Re: [openstack-dev] [neutron] Stepping down from core
On Fri, Dec 15, 2017 at 2:01 PM, Armando M. wrote:
> Hi neutrinos,
>
> To some of you this email may not come as a surprise.
>
> During the past few months my upstream community engagements have been more and more sporadic. While I tried hard to stay committed and fulfill my core responsibilities, I feel like I have failed to retain the level of quality and consistency that I would have liked ever since I stepped down from being the Neutron PTL at the end of Ocata.
>
> I have stated many times when talking to other core developers that being core is a duty rather than a privilege, and I personally feel it's way overdue for me to recognize on the mailing list that it's time to state officially my intention to step down due to other commitments.
>
> This does not mean that I will disappear tomorrow. I'll continue to be on neutron IRC channels, support the neutron team, be the release liaison for Queens, participate at meetings, and be open to providing feedback to anyone who thinks my opinion is still valuable, especially when dealing with the neutron quirks for which I might be (git) blamed :)

How weird. You're such a fixture in the Neutron community, I can't imagine you not being there. I can't think of many people who made a greater impact on both the code and the community than you. You'd probably get a kick knowing that I point new team members to your reviews as a positive example of deep, substantive review practices.
Even highly effective people benefit from some good amount of luck, so, good luck to you Armando :) > > Cheers, > Armando > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [etsinfv][gap-03][blazar]: Network resources to be reserved
Hi, During the Forum session about the ETSI NFV gaps I received a request to clarify which network resources should be reserved. According to the feedback from the IFA group, the most important network resources to reserve are bandwidth and public IP addresses. Any comments are welcome; if you need more clarification or have further comments on the gaps listed in [1], do not hesitate to contact me. Br, Gerg0 [1]: https://etherpad.openstack.org/p/ptg-denver-etsi-nfv-tst003-gaps-explained __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] Office Hours Report 2017-12-21
Hello, Thanks everyone for attending the QA office hour. The IRC report [0] and full log [1] are available through meetbot. We triaged 3 bugs and discussed some reviews; the report is below.

* Bug #1736988 in devstack: "using long-deprecated keystonemiddleware options"
  https://bugs.launchpad.net/devstack/+bug/1736988
  Status: Invalid, transferred to Keystone

* Bug #1738938 in devstack: ""[: ==: unary operator expected" when running unstack.sh"
  https://bugs.launchpad.net/devstack/+bug/1738938
  Status: Fix in progress.

* Bug #1737634 in tempest: "ImagesTestJSON.test_delete_saving_image can wait for image status SAVING when the snapshot is already ACTIVE"
  https://bugs.launchpad.net/tempest/+bug/1737634
  Status: Confirmed, tagged as low-hanging-fruit

Discussions on reviews:
* Remove jobs from tempest-lib as the repo is deprecated: https://review.openstack.org/#/c/529524/
* Add profiler support into Tempest: https://review.openstack.org/#/c/523935/

Links:
[0] http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-12-21-09.01.html
[1] http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-12-21-09.01.log.html

Thanks for reading. Happy holidays, see you next year :-) Thanks, Chandan Kumar __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][networking-ovn] Stable branch maintainers for networking-ovn
Hi, > Please tell me who from the OVN group is ready to take the burden, and > I will make you part of neutron-stable-maint. I think it's ok to be > more laissez faire with backports for subprojects than we were used > to, with the recent drop in core team membership and reduced capacity. Great! I will reach out to the OVN core team so we can make a decision about who should do it. Cheers, Lucas __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] question on quiesce using call and unquiesce using cast
Hi, @jichen: Something I found, FYI: the commit history here: https://review.openstack.org/#/c/138795 I noticed that unquiesce changed from call to cast between PS5 and PS6: https://review.openstack.org/#/c/138795/5..6/nova/compute/rpcapi.py and there are some comments at https://review.openstack.org/#/c/138795/5/nova/compute/api.py@2235 According to the review comments, the reason for "use cast for unquiesce" is that PS6 added a _wait_for_snapshots_completion step to the unquiesce method, which would hit the RPC timeout before the snapshot finished if a call RPC were used. As for "use call for quiesce", I think it's simply that quiesce is a short operation, so there was no need to change it from call to cast. In other words, either call or cast is okay for the quiesce operation, so he didn't change it. Hope this helps :) Regards, Yikun Jiang Yikun(Kero) Mail: yikunk...@gmail.com Tel: (+86) 13572822142 2017-12-21 16:48 GMT+08:00 Chen CH Ji: > While reviewing https://review.openstack.org/#/c/529278/2 , I have some questions on the methods for quiesce/unquiesce: > > https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L1140 uses call for quiesce > https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L1146 uses cast for unquiesce > > Just curious, is there any special reason for using different types here? > > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com > Phone: +86-10-82451493 <+86%2010%208245%201493> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
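The distinction the thread hinges on can be shown with a toy sketch (this is not oslo.messaging itself; the class and method names below are made up for illustration): an RPC "call" blocks waiting for a reply and can time out, while a "cast" is fire-and-forget and returns immediately, which is why a long-running unquiesce must use cast.

```python
import queue
import threading
import time


class ToyRPC:
    """Illustrative stand-in for an RPC client with call/cast semantics."""

    def __init__(self, handler):
        self._handler = handler

    def call(self, method, timeout, **kwargs):
        # Blocks until the server replies; raises if it takes too long.
        result = queue.Queue()
        threading.Thread(
            target=lambda: result.put(getattr(self._handler, method)(**kwargs)),
            daemon=True).start()
        try:
            return result.get(timeout=timeout)
        except queue.Empty:
            raise TimeoutError(f'{method} exceeded {timeout}s')

    def cast(self, method, **kwargs):
        # Fire-and-forget: returns immediately, server works in background.
        threading.Thread(
            target=lambda: getattr(self._handler, method)(**kwargs),
            daemon=True).start()


class ToyComputeServer:
    def quiesce(self):
        return 'quiesced'       # short operation: fine over call

    def unquiesce(self):
        time.sleep(0.2)         # stands in for _wait_for_snapshots_completion


rpc = ToyRPC(ToyComputeServer())
print(rpc.call('quiesce', timeout=1.0))  # blocks briefly, gets a reply
rpc.cast('unquiesce')                    # returns at once, no reply expected
```

Calling the slow method via `call` with a short timeout reproduces exactly the failure mode the review comments describe: the client gives up before the work completes.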
[openstack-dev] [nova] question on quiesce using call and unquiesce using cast
While reviewing https://review.openstack.org/#/c/529278/2 , I have some questions on the methods for quiesce/unquiesce: https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L1140 uses call for quiesce https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L1146 uses cast for unquiesce Just curious, is there any special reason for using different types here? Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev