Re: [openstack-dev] [nova] Pike PTG recap - cells
On Tue, Feb 28, 2017 at 3:10 AM, Zhenyu Zheng wrote:

Matt, thanks for the recap. It's a pity I cannot attend the PTG due to personal reasons; I would be willing to take on the work you mentioned and will check the details with you. Another thing, I don't know whether you discussed this or not, but I saw in [1] that we are talking about adding a tags field (and others, of course) to the instance notification payload, to be sent during instance.create and instance.update. The problem is that currently we cannot add tags during boot, nor do we send notifications when we add/update/delete tags later (it is a direct DB change and no instance.update notification is sent out), so to pick up the latest tags we have to wait for some other instance.update action. I have already started working on the boot part [2] and have planned to work on the tag notification part [3]. So, are there any plans for those? Maybe it would be OK to send out an instance.update notification on tag actions once [1] is merged?

Hi, [1] was briefly discussed and we agreed that it is high priority for Pike. The basic parts of that bp have code up for review [4], so we have a good chance to merge it early in Pike. The BDM part of [1] still needs some data-modeling work. I did an initial review round on [3] and left some questions in the spec.

[4] https://review.openstack.org/#/q/project:openstack/nova+topic:bp/additional-notification-fields-for-searchlight

Cheers, gibi

Thanks, Kevin Zheng
[1] https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
[2] https://blueprints.launchpad.net/nova/+spec/support-tag-instance-when-boot
[3] https://blueprints.launchpad.net/nova/+spec/send-tag-notification

On Tue, Feb 28, 2017 at 6:33 AM, Matt Riedemann wrote:

We talked about cells on Wednesday morning at the PTG. The full etherpad is here [1].
Searchlight integration
---

We talked a bit about what needs to happen for this, and it starts with getting the data into Searchlight so that it can serve the REST API, which is being worked on in this blueprint [2]. We want to get that done early in Pike. We plan on making the use of Searchlight configurable in Nova, since at first you might not even have anything in it, so listing instances wouldn't work. We're also going to attempt to merge-sort when listing instances across multiple cells, but it's going to be a known issue that it will be slow. For testing Nova with Searchlight, we need to start by enabling the Searchlight devstack plugin in the nova-next CI job, which I'll work on. I'm going to talk to Kevin Zheng about seeing if he can spend some time on getting Nova to use Searchlight if it's (1) configured for use and (2) available (the endpoint is in the service catalog). Kevin is a Searchlight core and familiar with the Nova API code, so he's a good candidate for working on this (assuming he's available and willing to own it).

Cells-aware gaps in the API
---

Dan Smith has started a blueprint [3] for closing gaps in the API which break in a multi-cell deployment. He has a test patch [4] to expose the failures, and then they can be worked on individually. The pattern for the work is in [5]. Help is welcome here, so please attend the weekly cells meeting [6] if you want to help out.

Auto-discovery of compute hosts
---

The "discover_hosts_in_cells_interval" config option was introduced in Ocata. It controls a periodic task in the scheduler that discovers new unmapped compute hosts, but it's not very efficient, since it queries all cell mappings, then all compute nodes in each cell mapping, and checks whether those compute nodes are already mapped to the cell in the nova_api database. Dan Smith has a series of changes [7] which should make that discovery process more efficient; it just needs to be cleaned up a bit.
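Going back to the cross-cell listing mentioned above: since each cell can return its instances already sorted, the merge-sort could look roughly like this sketch. The data shapes, sort key, and function name are made up for illustration; this is not Nova's actual code.

```python
import heapq
from operator import itemgetter

def list_instances_across_cells(per_cell_results, sort_key="created_at"):
    """Merge per-cell result lists (each already sorted by sort_key)
    into one globally sorted list, in the spirit of the planned
    cross-cell merge-sort. Hypothetical helper, not Nova code."""
    # heapq.merge lazily consumes the pre-sorted iterables, so each
    # cell's results are only pulled as needed.
    return list(heapq.merge(*per_cell_results, key=itemgetter(sort_key)))

# Two "cells", each returning instances sorted by created_at.
cell1 = [{"name": "a", "created_at": 1}, {"name": "c", "created_at": 3}]
cell2 = [{"name": "b", "created_at": 2}, {"name": "d", "created_at": 4}]
merged = list_instances_across_cells([cell1, cell2])
print([i["name"] for i in merged])  # -> ['a', 'b', 'c', 'd']
```

The slowness noted above comes from having to query every cell before the merged page can be returned, which a lazy merge alone does not fix.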
Service arrangement
---

Dan Smith is working on a series of changes in both Nova and devstack for testing with multiple cells [8]. The general idea is that there will still be two nodes and two nova-compute services. There will be three nova-conductor services, one per cell, and then another top-level "super conductor" which is there for building instances and sending the server create down to one of the cells. All three conductors are going to run on the subnode just to balance the resources a bit; otherwise the primary node is going to be starved. The multi-cell job won't run migration tests, since we don't currently support instance move operations between cells. We're going to work a hack into the scheduler to restrict a move operation to the same cell the instance is already in. This means the live migration job will still be a single-cell setup where both nova-computes are in the same cell.

Getting rid of nova-consoleauth
---

There i
[openstack-dev] [nova] Next notification meeting
Hi, The next notification subteam meeting will be held on 2017.02.28 17:00 UTC [1] on #openstack-meeting-4. Cheers, gibi [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170228T17 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Pike PTG recap - notifications
Hi,

We discussed a couple of notification-related items last week at the PTG [1].

Searchlight related notification enhancements
-

We decided that the "Extending versioned notifications for searchlight integration" blueprint [2] has high priority for Pike, to help the Nova-Searchlight integration which is needed for the cells v2 work. See more details in Matt's recap of the cells discussion [3]. Code is already in good shape for everything except the BDM part of the bp.

Short circuit notification generation
-

We agreed that we want to avoid generating notification payloads when nova or oslo.messaging is configured such that the actual notification will not be sent. There will be no new configuration parameter to turn off payload generation; the existing notification_format and oslo_messaging_notifications.driver configuration parameters will be used in the implementation. A WIP patch is already proposed [4].

Versioned notification transformation work
--

We agreed that it would help cores to review the patches if the subteam could keep a short list (about 5 items) of patches that the cores should look at, so I will make sure that such a list is kept up to date on the priority etherpad [5]. We also agreed that I will send out a short weekly mail about the items in focus, similar to the placement status mails. We also hope that the Nova-Searchlight integration will provide some focus and motivation by showing the impact of this work.
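The short-circuit idea boils down to: consult the notification configuration before doing the (potentially expensive) payload population. A toy sketch with stand-in config and function names (the real patch builds on notification_format and oslo_messaging_notifications.driver, not on these dicts):

```python
def notifications_enabled(conf):
    """Return True only if versioned notifications can actually be emitted.

    'conf' is a plain dict standing in for the real nova/oslo config:
    'drivers' mimics oslo_messaging_notifications.driver and
    'notification_format' mimics nova's option of the same name.
    """
    drivers_configured = bool(conf.get("drivers"))
    fmt = conf.get("notification_format", "both")
    return drivers_configured and fmt in ("both", "versioned")

def emit_instance_update(conf, instance):
    if not notifications_enabled(conf):
        return None  # short circuit: skip payload generation entirely
    # Payload population (DB lazy-loads etc.) only happens past this point.
    return {"uuid": instance["uuid"], "state": instance["state"]}

inst = {"uuid": "u1", "state": "active"}
print(emit_instance_update({"drivers": []}, inst))              # -> None
print(emit_instance_update({"drivers": ["messagingv2"]}, inst))
```

The point of the agreed approach is exactly this ordering: the cheap config check happens before any lazy-loaded instance fields are touched.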
Cheers, gibi

[1] https://etherpad.openstack.org/p/nova-ptg-pike
[2] https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
[3] http://lists.openstack.org/pipermail/openstack-dev/2017-February/112996.html
[4] https://review.openstack.org/#/c/428260
[5] https://etherpad.openstack.org/p/pike-nova-priorities-tracking
Re: [openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?
On Mon, Mar 6, 2017 at 3:09 AM, Zhenyu Zheng wrote:

Hi Matt, AFAIK Searchlight does delete the record; it catches the instance.delete notification and performs the action: http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n100 -> http://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/elasticsearch/plugins/nova/notification_handler.py#n307

Hi, there is an instance.soft_delete legacy notification [2] (delete_type == 'soft_delete'). This could be transformed to a versioned notification along with [3]. So I guess there could be a way to distinguish between soft delete and real delete on the Searchlight side based on these notifications. Cheers, gibi

[2] https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1872
[3] https://review.openstack.org/#/c/410297/

I will double check with others on the SL team, and if that is the case we will try to find a way to solve this ASAP. Thanks, Kevin Zheng

On Mon, Mar 6, 2017 at 1:21 AM, Matt Riedemann wrote:

I've posted a spec [1] for nova's integration with Searchlight for listing instances across multiple cells. One of the open questions I have on that is when/how do instances get removed from Searchlight? When an instance gets deleted via the compute API today, it's not really deleted from the database. It's considered "soft" deleted, and you can still list (soft) deleted instances from the database via the compute API if you're an admin. Nova will be sending instance.destroy notifications to Searchlight, but we don't really want the ES entry removed because we still have to support the compute API contract to list deleted instances. Granted, this is a pretty limp contract because there is no guarantee that you'll be able to list those deleted instances forever: once they get archived (moved to shadow tables in the nova database) or purged (hard deleted), they are gone from that API query path.
So I'm wondering at what point instances stored in Searchlight will be removed. Maybe there is already an answer to this and the Searchlight team can just inform me. Otherwise we might need to think about data retention policies and how long a deleted instance will be stored in Searchlight before it's removed. Again, I'm not sure if nova would control this or if it's something Searchlight supports already.

[1] https://review.openstack.org/#/c/441692/

-- Thanks, Matt Riedemann
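If the two delete flavors discussed in this thread could be told apart, a notification handler could mark soft-deleted documents instead of dropping them. A toy sketch of that idea, with a plain dict standing in for the Elasticsearch index (this is not Searchlight's actual handler code):

```python
def handle_delete(index, event_type, payload):
    """Flag soft-deleted instances, remove hard-deleted ones.

    'index' is a plain dict standing in for an ES index; the
    event_type names follow the versioned-notification convention.
    """
    uuid = payload["uuid"]
    if event_type == "instance.soft_delete.end":
        if uuid in index:
            index[uuid]["deleted"] = True   # keep the doc, just flag it
    elif event_type == "instance.delete.end":
        index.pop(uuid, None)               # hard delete: drop the doc

index = {"u1": {"deleted": False}, "u2": {"deleted": False}}
handle_delete(index, "instance.soft_delete.end", {"uuid": "u1"})
handle_delete(index, "instance.delete.end", {"uuid": "u2"})
print(index)  # u1 kept but flagged deleted, u2 gone
```

Whether even hard-deleted instances should be kept (to honor the deleted=true listing contract Matt describes) is exactly the open question; this sketch only shows the mechanical distinction.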
[openstack-dev] [nova] notification update week 9
Hi,

As we agreed at the PTG, I will send a short weekly status update / focus-setting mail about notification work items to the ML.

Bugs

There are a couple of important outstanding bugs:

[High] https://bugs.launchpad.net/nova/+bug/1665263 The legacy instance.delete notification is missing for unscheduled instances. This is a regression introduced when instance creation was moved to the conductor. The patch has a couple of +1s but needs core attention: https://review.openstack.org/#/c/437222/

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with an inconsistent timestamp format. The fix is ready for the cores to review: https://review.openstack.org/#/c/421981

[Medium] https://bugs.launchpad.net/nova/+bug/1653221 Lazy-loaded and uninitialized object fields are missing from the notification payloads. The patch needs review from the subteam: https://review.openstack.org/#/c/415857/

Versioned notification transformation
-

Plenty of transformation patches have been rebased to Pike and reviewed by the subteam. These are easy-to-review patches, so let's focus on merging the below short list this week:
* https://review.openstack.org/#/c/384922 Transform instance.rebuild notification
* https://review.openstack.org/#/c/396621 Transform instance.rebuild.error notification
* https://review.openstack.org/#/c/382959 Transform instance.reboot notification

There is a long list of patches that need to be respun to target the new Pike blueprint versioned-notification-transformation-pike.

Searchlight integration
---

Listing instances
~

Matt proposed a patch to list instances using Searchlight: https://review.openstack.org/#/c/441692/ . A separate ML thread has been started to sort out the open issues: http://lists.openstack.org/pipermail/openstack-dev/2017-March/113355.html .
bp additional-notification-fields-for-searchlight
~~

Four patches from blueprint additional-notification-fields-for-searchlight have been reviewed by the subteam and are waiting for core review: https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight The blueprint still needs discussion about how to model BlockDeviceMapping in the instance notifications.

Other items
---

Short circuit notification payload generation
~

Code review is ongoing to short-circuit notification payload generation when notifications are not configured to be emitted, to avoid unnecessary load (e.g. db load). There is a generic patch on oslo.messaging that needs some iteration https://review.openstack.org/#/c/441221 and then the nova patch can be adapted to the new oslo feature https://review.openstack.org/#/c/428260/

bp json-schema-for-versioned-notifications
~~~

The implementation progressed in Ocata, but the bp hasn't been reproposed to Pike yet, as we need somebody to pick up the implementation work for Pike. https://blueprints.launchpad.net/nova/+spec/json-schema-for-versioned-notifications

--

The notification subteam holds its weekly meeting on Tuesdays at 17:00 UTC on #openstack-meeting-4, so the next meeting will be held on the 7th of March: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170307T17

Cheers, gibi
Re: [openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?
On Mon, Mar 6, 2017 at 3:06 PM, Zhenyu Zheng wrote:

Hi Gibi, yes, the soft_delete.end notification doesn't get handled in SL, and we should handle it. But what Matt means here is different: even if you 'hard' delete an instance, the record still exists in the DB, and a user with a certain role can list it using deleted=true, so we should also support that in SL.

Yes, that is really different. If SL should be able to return hard-deleted instances as well, then catching the soft_delete notification would not help. Cheers, gibi
Re: [openstack-dev] [nova][searchlight] When do instances get removed from Searchlight?
On Mon, Mar 6, 2017 at 3:06 PM, Lei Zhang wrote:

On Mon, Mar 6, 2017 at 1:21 AM, Matt Riedemann wrote: So I'm wondering at what point instances stored in searchlight will be removed. Maybe there is already an answer to this and the searchlight team can just inform me. Otherwise we might need to think about data retention policies and how long a deleted instance will be stored in searchlight before it's removed. Again, I'm not sure if nova would control this or if it's something searchlight supports already.

Hi, currently Searchlight doesn't capture soft-delete notifications and simply removes the instance from ES when the real delete notification comes. If these two kinds of notifications can be distinguished, we could fix this issue by marking the document in ES instead of removing it. We would also need to capture some extra notifications, like restore (supposing it exists).

Yes, there is an instance.restore notification [1] to match instance.soft_delete. Cheers, gibi

[1] https://review.openstack.org/#/c/331972/

About data retention policies, I'm not sure if something like a TTL defining how long a deleted instance should be stored in Searchlight is enough. I know Nova has a CLI to purge the database of deleted instances; if no notifications are emitted for those operations, it's impossible for Searchlight to know when those deleted instances are removed.
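The TTL-based retention policy Lei mentions could, in its simplest form, look like the sketch below. All names and data shapes here are hypothetical; a real deployment would likely lean on Elasticsearch's own index/document expiry instead of application code.

```python
import time

def purge_expired(index, ttl_seconds, now=None):
    """Drop documents whose soft-delete timestamp is older than the TTL.

    'index' is a plain dict standing in for an ES index; each doc
    stores 'deleted_at' (epoch seconds) or None if still live.
    """
    now = time.time() if now is None else now
    expired = [uuid for uuid, doc in index.items()
               if doc.get("deleted_at") is not None
               and now - doc["deleted_at"] > ttl_seconds]
    for uuid in expired:
        del index[uuid]
    return expired

index = {
    "old":    {"deleted_at": 0},     # soft-deleted long ago
    "recent": {"deleted_at": 990},   # soft-deleted just now
    "live":   {"deleted_at": None},  # never deleted
}
print(purge_expired(index, ttl_seconds=100, now=1000))  # -> ['old']
```

As the thread notes, a TTL alone cannot stay in sync with nova's archive/purge CLI, since those operations emit no notifications at all.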
Re: [openstack-dev] [nova] Question to clarify versioned notifications
On Tue, Mar 7, 2017 at 8:42 PM, Matt Riedemann wrote:

While digging through nova code today to compare versioned and unversioned notifications, reading specs, and seeing how Searchlight handles nova notifications, I noticed that the unversioned notifications have a "compute." prefix in the event type, while the versioned notifications do not. It also took me a while, but I also sorted out that unversioned notifications are on the 'notifications' topic, which is the default in oslo.messaging ([oslo_messaging_notifications]topics), and versioned notifications are on the 'versioned_notifications' topic.

Yes, versioned notifications are always emitted to the 'versioned_notifications' topic. Sorry if that wasn't clear from the notification dev-ref. The actual code is here [5].

My question is, was it intentional to drop the "compute." prefix from the event type in the versioned notifications? I didn't see anything specifically stating that in the original spec [1].

Quote from the spec [1]: "The value of the event_type field of the envelope on the wire will be defined by the name of the affected object, the name of the performed action emitting the notification and the phase of the action. For example: instance.create.end, aggregate.removehost.start, filterscheduler.select_destinations.end. The notification model will do basic validation on the content of the event_type e.g. enum for valid phases will be created."

So yes, dropping the compute prefix was intentional, as the information carried by the prefix was already in the publisher_id of the notification. The goal was that the publisher_id should define which service emitted the notification, and the event_type should be a unique id of the given event regardless of which service emits it. For example, the event_type instance.delete.start is used by both the nova-compute and nova-conductor services.
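The naming scheme described above (event_type = object.action[.phase], with no service prefix) can be illustrated with a tiny helper. This is a sketch of the convention only, not Nova's actual implementation:

```python
def versioned_event_type(obj, action, phase=None):
    """Build 'object.action[.phase]', e.g. 'instance.create.end'.

    Unlike the legacy notifications there is no 'compute.' prefix:
    the emitting service is identified by publisher_id instead, so
    the same event_type can be emitted by different services.
    """
    parts = [obj, action]
    if phase:
        parts.append(phase)
    return ".".join(parts)

print(versioned_event_type("instance", "delete", "start"))       # -> instance.delete.start
print(versioned_event_type("aggregate", "removehost", "start"))  # -> aggregate.removehost.start
```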
The publisher_id will contain the name of the service binary and the hostname where that binary runs [3][4].

[3] https://review.openstack.org/#/c/410297/12/doc/notification_samples/instance-delete-start_not_scheduled.json@59
[4] https://github.com/openstack/nova/blob/master/doc/notification_samples/instance-delete-start.json#L72
[5] https://github.com/openstack/nova/blob/master/nova/rpc.py#L92-L99

Since the notifications are on independent topics it probably doesn't matter. I was just thinking about this from the searchlight perspective, because they don't support nova versioned notifications yet and already have code to map the "compute." event types [2], so I wasn't sure if they could re-use that and just listen on the 'versioned_notifications' topic. In talking with Steve McLellan it doesn't sound like the different event-type format will be an issue.

Honestly, if searchlight needs to be adapted to the versioned notifications, then the smallest thing to change is handling the removed prefix in the event_type. The biggest difference is the format and content of the payload. In the legacy notifications the payload was simply a json dict; in the versioned notifications the payload is a json-serialized ovo, which means quite a different data structure, e.g. extra keys, deeper nesting, etc. Cheers, gibi

[1] https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/versioned-notification-api.html
[2] https://github.com/openstack/searchlight/blob/2.0.0/searchlight/elasticsearch/plugins/nova/notification_handler.py#L82

-- Thanks, Matt Riedemann
[openstack-dev] [nova] notification update week 10
Hi,

Here is the status update / focus-setting mail about notification work for week 10.

Bugs

There are a couple of important outstanding bugs:

[High] https://bugs.launchpad.net/nova/+bug/1665263 The legacy instance.delete notification is missing for unscheduled instances. The Pike fix has been merged. The Ocata backport has been proposed: https://review.openstack.org/#/c/441171/

[High] https://bugs.launchpad.net/nova/+bug/1671847 Incorrectly set deprecated flag for notify_on_state_change. The patch is in the gate queue https://review.openstack.org/#/c/444357 and the Ocata backport has been proposed https://review.openstack.org/#/c/444374

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with an inconsistent timestamp format. The fix is ready for the cores to review: https://review.openstack.org/#/c/421981

Versioned notification transformation
-

Patches that just need a second core:
* https://review.openstack.org/#/c/384922 Transform instance.rebuild notification
* https://review.openstack.org/#/c/396621 Transform instance.rebuild.error notification
* https://review.openstack.org/#/c/382959 Transform instance.reboot notification

Here is the set of patches we would like to focus on this week:
* https://review.openstack.org/#/c/411791/ Transform instance.reboot.error notification
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification

There are patches that need to be respun to target the new Pike blueprint versioned-notification-transformation-pike. There are also patches in merge conflict that need to be rebased.

Searchlight integration
---

Listing instances
~

The list-instances spec https://review.openstack.org/#/c/441692/ was well discussed last week. There is still discussion about how to keep the Searchlight data in sync with the Nova db for deleted, or deleted and archived, instances.
It turned out that Searchlight still needs to be adapted to Nova's versioned notifications, so the https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications bp is a hard dependency for the integration work.

bp additional-notification-fields-for-searchlight
~~

Four patches from blueprint additional-notification-fields-for-searchlight need to be rebased due to merge conflicts, but the code seems ready to be merged: https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight The blueprint needs discussion about how to model BlockDeviceMapping in the instance notifications. I ran out of time last week, so I will try to draft the BDM payload object early this week.

Other items
---

Short circuit notification payload generation
~

The oslo.messaging dependency has been merged and a new oslo.messaging library release is promised early this week, so work will soon continue on the nova patch https://review.openstack.org/#/c/428260/

bp json-schema-for-versioned-notifications
~~~

We briefly discussed that this feature might help keep the Searchlight data model in sync with the nova notification model. However, we are still waiting for somebody to pick up the implementation work behind the bp: https://blueprints.launchpad.net/nova/+spec/json-schema-for-versioned-notifications

Weekly meeting
--

The notification subteam holds its weekly meeting on Tuesdays at 17:00 UTC on #openstack-meeting-4, so the next meeting will be held on the 14th of March: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170314T17

Cheers, gibi
[openstack-dev] [nova] [notification] BlockDeviceMapping in InstancePayload
Hi,

As part of the Searchlight integration we need to extend our instance notifications with BDM data [1]. As far as I understand, the main goal is to provide enough data about the instance to Searchlight so that Nova can use Searchlight to generate the response to GET /servers/{server_id} requests based on the data stored in Searchlight. I checked the server API response and found one field that needs BDM-related data: os-extended-volumes:volumes_attached. Only the uuid of the volume and the value of delete_on_termination are provided in the API response. I have two options for what to add to the InstancePayload and I want to get some opinions about which direction we should go with the implementation.

Option A: Add only the minimum required information from the BDM to the InstancePayload.

Additional InstancePayload field:

    block_devices: ListOfObjectsField(BlockDevicePayload)

    class BlockDevicePayload(base.NotificationPayloadBase):
        fields = {
            'delete_on_termination': fields.BooleanField(default=False),
            'volume_id': fields.StringField(nullable=True),
        }

This payload would be generated from the BDMs connected to the instance where BDM.destination_type == 'volume'.
Option B: Provide a comprehensive set of BDM attributes.

    class BlockDevicePayload(base.NotificationPayloadBase):
        fields = {
            'source_type': fields.BlockDeviceSourceTypeField(nullable=True),
            'destination_type': fields.BlockDeviceDestinationTypeField(
                nullable=True),
            'guest_format': fields.StringField(nullable=True),
            'device_type': fields.BlockDeviceTypeField(nullable=True),
            'disk_bus': fields.StringField(nullable=True),
            'boot_index': fields.IntegerField(nullable=True),
            'device_name': fields.StringField(nullable=True),
            'delete_on_termination': fields.BooleanField(default=False),
            'snapshot_id': fields.StringField(nullable=True),
            'volume_id': fields.StringField(nullable=True),
            'volume_size': fields.IntegerField(nullable=True),
            'image_id': fields.StringField(nullable=True),
            'no_device': fields.BooleanField(default=False),
            'tag': fields.StringField(nullable=True)
        }

In this case Nova would provide every BDM attached to the instance, not just the volume-backed ones. I intentionally left out connection_info and the db id, as those seem really system-internal. I also left out the instance-related references, as this BlockDevicePayload would be part of an InstancePayload, which already contains the instance uuid. What do you think? Which direction should we go?

Cheers, gibi

[1] https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
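To make the filtering in Option A concrete: the payload list would be built from the instance's BDMs, keeping only the volume-backed ones. Below is a sketch with plain classes standing in for the o.vo payload machinery; the class and function names are illustrative, not Nova's.

```python
class BlockDevicePayload:
    """Stand-in for the proposed o.vo payload (Option A fields only)."""
    def __init__(self, bdm):
        self.volume_id = bdm["volume_id"]
        self.delete_on_termination = bdm.get("delete_on_termination", False)

def block_devices_for(instance_bdms):
    # Under Option A, only BDMs with destination_type == 'volume'
    # make it into the notification payload.
    return [BlockDevicePayload(bdm) for bdm in instance_bdms
            if bdm["destination_type"] == "volume"]

bdms = [
    {"destination_type": "volume", "volume_id": "vol-1",
     "delete_on_termination": True},
    {"destination_type": "local", "volume_id": None},  # filtered out
]
payloads = block_devices_for(bdms)
print([(p.volume_id, p.delete_on_termination) for p in payloads])  # -> [('vol-1', True)]
```

Under Option B the filter would disappear and every BDM would be serialized with the full field set.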
Re: [openstack-dev] [nova] [notification] BlockDeviceMapping in InstancePayload
On Wed, Mar 15, 2017 at 1:44 PM, John Garbutt wrote:

On 13 March 2017 at 17:14, Balazs Gibizer wrote:
[snip: Option A / Option B proposal, quoted in full above]

I intentionally left out connection_info and the db id, as those seem really system-internal. I also left out the instance-related references, as this BlockDevicePayload would be part of an InstancePayload, which already contains the instance uuid.

+1 leaving those out.

What do you think? Which direction should we go?

There are discussions around extending the info we give out about BDMs in the API. What about something in between: list all types of BDMs, and include a touch more info so you can tell which one is a volume for sure.

    class BlockDevicePayload(base.NotificationPayloadBase):
        fields = {
            'destination_type': fields.BlockDeviceDestinationTypeField(
                nullable=True),  # Maybe just called "type"?
            'boot_index': fields.IntegerField(nullable=True),
            'device_name': fields.StringField(nullable=True),  # do we ignore that now?
            'delete_on_termination': fields.BooleanField(default=False),
            'volume_id': fields.StringField(nullable=True),
            'tag': fields.StringField(nullable=True)
        }

This payload is OK for me.
I agree to use 'type' instead of 'destination_type' as destination doesn't have much meaning after the device is attached. The libvirt driver ignores the device_name but I'm not sure about the other virt drivers. Cheers, gibi Thanks, John __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
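To make the trade-off concrete, here is a minimal sketch of how Option A would build payloads from BDMs, in plain Python rather than the actual nova versioned-object classes (the dataclass names and field set here are illustrative stand-ins for the OVO definitions above):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-in for a nova BlockDeviceMapping row; this is NOT
# the real nova model, just the fields the discussion above refers to.
@dataclass
class BlockDeviceMapping:
    destination_type: str          # 'volume' or 'local'
    volume_id: Optional[str] = None
    delete_on_termination: bool = False

# Option A: the payload carries only what the API response needs.
@dataclass
class BlockDevicePayload:
    volume_id: Optional[str]
    delete_on_termination: bool

def build_block_device_payloads(bdms):
    # Only BDMs with destination_type == 'volume' end up in the payload.
    return [
        BlockDevicePayload(bdm.volume_id, bdm.delete_on_termination)
        for bdm in bdms
        if bdm.destination_type == 'volume'
    ]

bdms = [
    BlockDeviceMapping('volume', 'vol-1', True),
    BlockDeviceMapping('local'),   # swap/ephemeral disk: filtered out
]
print([p.volume_id for p in build_block_device_payloads(bdms)])  # ['vol-1']
```

Under Option B (or John's middle ground) the same filter disappears and every BDM is emitted, which is why the extra `destination_type`/`type` field is needed so consumers can still tell the volume-backed entries apart.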
[openstack-dev] [nova] notification update week 11
Hi, Here is the status update / focus setting mail about notification work for week 11.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. Fix is ready for the cores to review https://review.openstack.org/#/c/421981
[Undecided] https://bugs.launchpad.net/nova/+bug/1673375 "ValueError: Circular reference detected" in send_notification. Fix proposed https://review.openstack.org/#/c/446948/

Versioned notification transformation
No new goal setting; let's try to focus on finishing the goals of the last two weeks. Patches that just need a second core:
* https://review.openstack.org/#/c/382959 Transform instance.reboot notification
* https://review.openstack.org/#/c/411791/ Transform instance.reboot.error notification
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification

Searchlight integration
changing Searchlight to use versioned notifications: the https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications bp is a hard dependency for the integration work. The Searchlight team promised to provide a list of notifications that need to be transformed so that they can switch to the versioned interface. We will prioritize the transformation work on the nova side accordingly.
bp additional-notification-fields-for-searchlight: Patches needing review: https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight The BlockDeviceMapping addition to the InstancePayload has been discussed on the ML and on the weekly meeting. Implementation is ongoing.

Other items
Short circuit notification payload generation: A new oslo.messaging version has been released with the needed addition and global requirements have been bumped.
Work needs to be continued on the nova patch https://review.openstack.org/#/c/428260/

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4, so the next meeting will be held on 21st of March: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170321T17

Cheers, gibi
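The "short circuit" item above is about skipping expensive payload generation entirely when notifications are effectively disabled. A minimal sketch of the idea in plain Python (the function names `emit_notifications_enabled`, `build_payload`, and `emit` are illustrative, not the actual nova/oslo.messaging API):

```python
# Sketch: only build the notification payload when something will consume
# it. Payload generation can touch the DB (lazy-loaded fields), so when
# notifications are disabled we want to avoid that work entirely.

def notify_about_instance_action(instance, action,
                                 emit_notifications_enabled,
                                 build_payload, emit):
    if not emit_notifications_enabled():
        # Short circuit: neither the payload nor the DB loads happen.
        return None
    payload = build_payload(instance)
    emit(action, payload)
    return payload

calls = []
result = notify_about_instance_action(
    {'uuid': 'abc'}, 'create',
    emit_notifications_enabled=lambda: False,
    build_payload=lambda inst: calls.append('built') or {'uuid': inst['uuid']},
    emit=lambda a, p: calls.append('emitted'))
print(result, calls)  # None [] -- the payload was never built
```

The oslo.messaging addition referenced above is what lets nova ask that "is anyone listening?" question before doing the work.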
[openstack-dev] [nova] notification update week 12
Hi, Here is the status update / focus setting mail about notification work for week 12.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. Fix is ready for the cores to review https://review.openstack.org/#/c/421981

Versioned notification transformation
Most of the transformation patches are in merge conflict. There is a patch to avoid such trivial merge conflicts in the future: https://review.openstack.org/#/c/448225/ Pre-add functional tests stub to notification testing
The following patches are needed for Searchlight to be able to switch to versioned notifications, hence they are in focus:
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification

Searchlight integration
changing Searchlight to use versioned notifications: the https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications bp is a hard dependency for the integration work. Searchlight needs the instance.volume_attach and instance.volume_detach notifications to be transformed before they can switch to nova's versioned notifications, so we treat those transformation patches with priority.
bp additional-notification-fields-for-searchlight: Patches needing review: https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight The BlockDeviceMapping addition to the InstancePayload has been proposed: https://review.openstack.org/#/c/448779/ [WIP] Add BDM to InstancePayload

Other items
Short circuit notification payload generation: Test coverage is still needed for the nova patch https://review.openstack.org/#/c/428260/

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4, so the next meeting will be held on 28th of March: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170328T17 Please note that most of Europe switched to daylight saving time this weekend but the meeting is booked in UTC.

Cheers, gibi
Re: [openstack-dev] rebuild_instance (nova evacuate) failed to add trunk port
On Thu, Mar 30, 2017 at 10:14 PM, KHAN, RAO ADNAN wrote: In Juno, there is an issue with instance rebuild (nova evacuate) when a trunk port is associated with that instance. On the target, it is not provisioning the tbr (bridge), and hence the 'ovs-vsctl' command fails when adding the trunk port. Does the Juno version of Neutron have trunk port support? As far as I can see, the trunk port feature was released in Newton [1]. Cheers, gibi [1] https://wiki.openstack.org/wiki/Neutron/TrunkPort To me this seems a design gap, but I couldn't pinpoint it. Can someone point me in the right direction? Thanks much, Rao Adnan Khan AT&T Integrated Cloud (AIC) Development | SE Software Development & Engineering (SD&E) Email: rk2...@att.com Cell phone: 972-342-5638
[openstack-dev] [nova] notification update week 13
Hi, Here is the status update / focus setting mail about notification work for week 13.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. Fix is ready for the cores to review https://review.openstack.org/#/c/421981

Versioned notification transformation
The volume_attach and detach notifications are still in focus to support Searchlight switching to the versioned notifications:
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification

Searchlight integration
changing Searchlight to use versioned notifications: the https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications bp is a hard dependency for the integration work.
bp additional-notification-fields-for-searchlight: Besides volume_attach and volume_detach we need the following patches to help Searchlight integration: https://review.openstack.org/#/q/status:open+topic:bp/additional-notification-fields-for-searchlight Some of them are only missing a second +2.

Other items
Short circuit notification payload generation: Implementation has been merged https://review.openstack.org/#/c/428260/

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4, so the next meeting will be held on 4th of April: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170404T17

Cheers, gibi
[openstack-dev] [nova] notification update week 14
Hi, Here is the status update / focus setting mail about notification work for week 14.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. The fix seems abandoned by the original author, so I will update it soon to address the review comments on https://review.openstack.org/#/c/421981

Versioned notification transformation
The volume_attach and detach notifications are still in focus to support Searchlight switching to the versioned notifications:
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification. It is ready for core review.
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification. It needs some updates to make Jenkins happy.

Searchlight integration
changing Searchlight to use versioned notifications: the https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications bp is a hard dependency for the integration work. There is a WIP patch on the Searchlight side to follow: https://review.openstack.org/#/c/453352/
bp additional-notification-fields-for-searchlight: Besides volume_attach and volume_detach we need the following patches to help Searchlight integration: https://review.openstack.org/#/q/status:open+topic:bp/additional-notification-fields-for-searchlight Some of them are only missing a second +2. There seems to be a debate about adding the tags field as it is a lazy-loaded field. Tags can only have a new value in instance.create and instance.update notifications, so we might be able to limit the db load by only adding tags there. This is a change from the original direction, where we made the tags field loaded by default to avoid this complication. The https://review.openstack.org/#/c/448779/ (Add BDM to InstancePayload) patch is now updated with the agreed bdms_in_notifications config option to allow limiting the db load.
Other items
soft_delete and force_delete: While solving a regression about sending delete notifications for unscheduled instances, some of the soft_delete cases were removed. The patch continuing the transformation of instance.delete notifications hit a wall of soft_ and force_delete in https://review.openstack.org/#/c/443764/ use context mgr in instance.delete. This needs some discussion on how to move forward.
Inheritance in notification payload types: During the implementation of https://review.openstack.org/#/c/453667 (Add snapshot id to the snapshot notifications) we realized that the inheritance used in the notification payload classes has some drawbacks. When we need to introduce new leaf classes, the content of the nova.obj_name field will change in the emitted payload. This should be avoided if possible, or at least we have to keep the version number increasing all the time. See https://review.openstack.org/#/c/453667/ (explain payload inheritance in notification devref) for more info.
Remove notification sample duplications: Every notification has its own sample file in the nova tree for functional testing and for doc generation. Many notifications use the same payload structure, and therefore these sample files are quite similar to each other. If a new field is added to the payload, a lot of sample files need to be updated.
The following patch series shows a way to remove such duplications: https://review.openstack.org/#/c/452818/ (Factor out duplicated notification sample data)
Small improvements:
* https://review.openstack.org/#/c/418489/ (Remove **kwargs passing in payload __init__)
* https://review.openstack.org/#/c/428199/ (Improve assertJsonEqual error reporting)
* https://review.openstack.org/#/c/443686/ (Using max api version in notification sample test)
* https://review.openstack.org/#/c/450787/ (remove ugly local import)
* https://review.openstack.org/#/c/443677/ (Add json style checking for sample notifications)

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4, so the next meeting will be held on 11th of April: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170411T17

Cheers, gibi
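The payload-inheritance drawback mentioned in the week 14 update above (the emitted obj_name changes when a new leaf class is introduced) can be shown with a tiny sketch; the class names here are illustrative, not the real nova payload classes, and `obj_name` mimics how versioned objects put the concrete class name on the wire:

```python
# Sketch of the obj_name problem: versioned objects serialize the concrete
# class name, so adding a subclass silently changes what consumers see.

class SnapshotPayload:
    def obj_name(self):
        # Versioned objects emit the concrete class name on the wire.
        return type(self).__name__

class SnapshotPayloadWithId(SnapshotPayload):
    """A new leaf class adding a field (illustrative)."""
    def __init__(self, snapshot_id):
        self.snapshot_id = snapshot_id

old = SnapshotPayload()
new = SnapshotPayloadWithId('snap-1')
print(old.obj_name(), new.obj_name())
# The emitted name changed from 'SnapshotPayload' to 'SnapshotPayloadWithId'
# even though the event_type stayed the same, which is what consumers
# would not expect; hence the advice to avoid new leaf classes or to bump
# versions consistently.
```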
[openstack-dev] [nova] notification update week 16
Hi, Here is the status update / focus setting mail about notification work for week 16.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. We need to iterate more on the solution I updated last week: https://review.openstack.org/#/c/421981

Versioned notification transformation
The volume_attach and detach notifications are still in focus to support Searchlight switching to the versioned notifications. Both are ready for core review:
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification
* https://review.openstack.org/#/c/455801/ Transform instance.volume_attach.error notification

Searchlight integration
changing Searchlight to use versioned notifications: the https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications bp is a hard dependency for the integration work. The patch on the Searchlight side https://review.openstack.org/#/c/453352/ evolves gradually. That code gives feedback about the usability of the versioned interface from a consumer perspective. The current versioning schema, which uses separate versions per event_type and separate independent versions per subobject in the payloads, makes the interface flexible but harder to consume. See some of my comments in PS5 on the above patch for details. Also, the Searchlight code still needs to do some transformation between notification fields and API fields to match them up.
bp additional-notification-fields-for-searchlight: Besides volume_attach and volume_detach we need the following patches to help Searchlight integration: https://review.openstack.org/#/q/status:open+topic:bp/additional-notification-fields-for-searchlight The debate about adding the tags to the instance payload now seems to be moving in the direction of adding the tags only to instance.create and instance.update.
This means that the related patch needs to be reworked. Patches that only need a second +2:
* https://review.openstack.org/#/c/419185/ Adding auto_disk_config field to InstancePayload
* https://review.openstack.org/#/c/407228/ add tags field to instance.update notification
The keypairs patch is also up to date and ready for review: https://review.openstack.org/#/c/419730/ We still need to work on the https://review.openstack.org/#/c/448779/ (Add BDM to InstancePayload) patch from a testing point of view.

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4, so the next meeting will be held on 25th of April: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170425T17

Cheers, gibi
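The consumption difficulty described in the week 16 update above (a separate payload version per event_type) roughly means a consumer like Searchlight ends up keeping a per-event_type handler table and doing its own major-version checks. A sketch of that consumer-side shape, with illustrative handler names and version values:

```python
# Sketch: a consumer of versioned notifications where every event_type
# carries its own independently-versioned payload. Minor version bumps are
# additive, so a handler tolerates them; a major bump is a hard error.

HANDLERS = {
    # event_type -> (supported major version, handler)
    'instance.create': (1, lambda p: p['uuid']),
    'instance.update': (1, lambda p: p['uuid']),
}

def consume(notification):
    event_type = notification['event_type']
    major = int(notification['payload_version'].split('.')[0])
    supported_major, handler = HANDLERS[event_type]
    if major != supported_major:
        raise ValueError('unsupported major version for %s' % event_type)
    # Minor bumps only add fields, so the handler ignores unknown keys.
    return handler(notification['payload'])

print(consume({'event_type': 'instance.create',
               'payload_version': '1.2',
               'payload': {'uuid': 'abc'}}))  # abc
```

Multiplying this table out over every event_type and every independently versioned subobject is what makes the interface "flexible but harder to consume".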
Re: [openstack-dev] [QA][Nova] Special scenario tests
On Thu, Apr 27, 2017 at 11:56 PM, Matt Riedemann wrote: On 4/21/2017 8:36 AM, Ferenc Horváth wrote: > Dear OpenStackerz, > > I'd like to improve the coverage of the current test suite over some > special code parts in Nova. > My main target is to add a few scenarios [1] that would exercise the > AggregateMultiTenancyIsolation scheduler filter. > I'm also planning on adding new test cases for other filters and for > some libvirt related features [2] as well. > > Unfortunately, [1] and [2] could not be merged into Tempest for various > reasons, hence I started working on functional tests in Nova. > However, since functional tests cannot be used to verify that a deployed > system behaves correctly, we still need end to end tests. > Therefore, I'm proposing a new Tempest test plug-in [3,4] that would be > the home of currently out of tree tests. > The idea is that these tests would run separately on a weekly basis to > not use too much resources, but the rest of the questions are still open. > > Therefore, I'd appreciate any advice or review on this topic. > > Thank You all in advance. > > Best regards, > Ferenc Horváth > > [1] https://review.openstack.org/#/c/374887/ > [2] https://review.openstack.org/#/c/315786/ > [3] https://review.openstack.org/#/c/448482/ > [4] https://review.openstack.org/#/c/451227/ > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev As discussed in the nova meeting today [1] I think some of this could make sense as a Tempest plugin, or an in-tree functional test using fixtures, or an in-tree dsvm integration style job that's not Tempest but runs real tests against a live devstack configuration. For the scheduler filter testing, I think that's something that is totally doable with the in-tree functional tests using our fixtures. 
You don't need devstack for that since it's just testing the logic of the scheduler filters. We have running services, database, and a live API in those tests, and external services like cinder/glance/neutron are stubbed out. An example of a test like this is here [2]. I understand you have some internal requirements for how these tests are performed though, so I can't really help you there. Keep in mind if you do it with an in-tree functional test, it gets tested on every change and is voting, whereas a periodic job is not and you only find out it's broken after we break it. I think the best of both worlds would be to set up a third-party CI to run the extra tempest tests. This way the upstream CI doesn't have to spend extra resources to run the tests but we could have e2e test coverage. At the moment I don't think this will be realized in the near future. So as a compromise we will provide filter tests in the nova functional test environment. I think hferenc has already started adding such coverage. For testing the libvirt watchdog action we obviously need a real deployment with running libvirt. I think you could do that as a tempest plugin or as an in-tree dsvm integration job, much like how the novaclient functional tests work (those aren't tempest but they run against a live devstack and execute real APIs via the nova CLI). The downside is we don't have any dsvm-integration infrastructure set up in Nova today, so you'd have to blaze that trail. But we've talked about this as a need for a long time, so it'd be great if someone worked on it. Alternatively it could be an in-tree Tempest plugin... but then we can't test things like the libvirt image cache, which is something we could do with a dsvm-integration job I think, or the evacuate API (we'd have to run that in serial so it doesn't break other concurrently running tests). Actually testing evacuate would be awesome though.
I think dsvm-integration-libvirt is a nice idea, so we will look at the novaclient environment to learn how that works and we will try to implement something similar in the nova tree. Thanks for the feedback! Cheers, gibi In general I feel like writing a CI job for a very specific scheduler filter configuration is overdoing it, unless you also made that job re-configurable on the fly, like how the live migration job works [3]. [1] http://eavesdrop.openstack.org/meetings/nova/2017/nova.2017-04-27-21.00.log.html#l-111 [2] https://github.com/openstack/nova/blob/master/nova/tests/functional/regressions/test_bug_1671648.py [3] https://github.com/openstack/nova/blob/master/nova/tests/live_migration/hooks/run_tests.sh -- Thanks, Matt
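The point that scheduler-filter logic is testable without devstack can be illustrated with a sketch. Here the core rule of AggregateMultiTenancyIsolation is re-stated as a pure function plus direct assertions; this is an illustration of the rule under test, not nova's actual filter implementation (which works on HostState and aggregate metadata objects):

```python
# Sketch of the AggregateMultiTenancyIsolation rule as a pure function:
# a host belonging to an aggregate with filter_tenant_id metadata only
# accepts requests from one of the listed tenants; a host with no such
# metadata accepts everyone. Names are illustrative.

def host_passes(host_aggregate_tenants, request_tenant):
    if not host_aggregate_tenants:
        return True          # unrestricted host
    return request_tenant in host_aggregate_tenants

# Direct checks of the rule -- the kind of thing an in-tree functional or
# unit test can assert without a live deployment.
assert host_passes(set(), 'tenant-a')             # no isolation metadata
assert host_passes({'tenant-a'}, 'tenant-a')      # isolated, matching tenant
assert not host_passes({'tenant-a'}, 'tenant-b')  # isolated, foreign tenant
print('filter logic checks passed')
```

The in-tree functional tests go one step further than this sketch by running the real scheduler service against the real filter with stubbed external services, which is what Matt's regression-test example [2] demonstrates.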
[openstack-dev] [nova] notification update week 18
Hi, Here is the status update / focus setting mail about notification work for week 18.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. We need to iterate more on the solution https://review.openstack.org/#/c/421981
[Medium] https://bugs.launchpad.net/nova/+bug/1687012 flavor-delete notification should not try to lazy-load projects The patch https://review.openstack.org/#/c/461032 seems to be in good shape already.

Versioned notification transformation
The volume_attach and detach notifications are still in focus to support Searchlight switching to the versioned notifications. Both are ready for core review:
* https://review.openstack.org/#/c/401992/ Transform instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform instance.volume_detach notification
* https://review.openstack.org/#/c/455801/ Transform instance.volume_attach.error notification

Searchlight integration
bp additional-notification-fields-for-searchlight: Besides volume_attach and volume_detach we need the following patches to help Searchlight integration: https://review.openstack.org/#/q/status:open+topic:bp/additional-notification-fields-for-searchlight Both the BDM and the keypairs patch are up to date and ready for review: https://review.openstack.org/#/c/419730/ https://review.openstack.org/#/c/448779/

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4, so the next meeting will be held on 2nd of May: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170502T17

Cheers, gibi
[openstack-dev] [nova] notification update week 19
Hi, Here is the status update / focus setting mail about notification work for week 19.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. The solution still needs time and effort from the subteam: https://review.openstack.org/#/c/421981
[Medium] https://bugs.launchpad.net/nova/+bug/1687012 flavor-delete notification should not try to lazy-load projects The patch https://review.openstack.org/#/c/461032 needs core review.

Versioned notification transformation
The volume_attach and detach patches merged last week. Thanks to everybody who made that happen. Currently the following three transformation patches are in good shape, so let's focus on them in the coming weeks:
* https://review.openstack.org/#/c/396225/ Transform instance.trigger_crash_dump notification
* https://review.openstack.org/#/c/396210/ Transform aggregate.add_host notification
* https://review.openstack.org/#/c/396211/ Transform aggregate.remove_host notification

Searchlight integration
bp additional-notification-fields-for-searchlight: The keypairs patch has been split to add whole keypair objects only to the instance.create notification and add only the key_name to every instance.* notification:
* https://review.openstack.org/#/c/463001 Add separate instance.create payload type
* https://review.openstack.org/#/c/419730 Add keypairs field to InstanceCreatePayload
* https://review.openstack.org/#/c/463002 Add key_name field to InstancePayload
Adding BDM to instance.* is also in the pipe:
* https://review.openstack.org/#/c/448779/
There is also a separate patch to add tags to instance.create: https://review.openstack.org/#/c/459493/ Add tags to instance.create Notification

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. Due to the Boston forum this week's meeting is cancelled and the next meeting will be held on 16th of May.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170516T17 Cheers, gibi
[openstack-dev] [nova] notification update week 20
Hi, Here is the status update / focus setting mail about notification work for week 20.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. The solution now consists of three patches and the series is waiting for code review: https://review.openstack.org/#/q/topic:bug/1657428
[Medium] https://bugs.launchpad.net/nova/+bug/1687012 flavor-delete notification should not try to lazy-load projects The patch https://review.openstack.org/#/c/461032 needs core review.

Versioned notification transformation
Let's continue focusing on the next three transformation patches:
* https://review.openstack.org/#/c/396225/ Transform instance.trigger_crash_dump notification
* https://review.openstack.org/#/c/396210/ Transform aggregate.add_host notification
* https://review.openstack.org/#/c/396211/ Transform aggregate.remove_host notification

Searchlight integration
bp additional-notification-fields-for-searchlight: The keypairs patch has been split to add whole keypair objects only to the instance.create notification and add only the key_name to every instance.* notification:
* https://review.openstack.org/#/c/463001 Add separate instance.create payload type
* https://review.openstack.org/#/c/419730 Add keypairs field to InstanceCreatePayload
* https://review.openstack.org/#/c/463002 Add key_name field to InstancePayload
Adding BDM to instance.*
is also in the pipe:
* https://review.openstack.org/#/c/448779/
There is also a separate patch to add tags to instance.create: https://review.openstack.org/#/c/459493/ Add tags to instance.create Notification

Small improvements
* https://review.openstack.org/#/c/418489/ Remove **kwargs passing in payload __init__
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/c/450787/ remove ugly local import
* https://review.openstack.org/#/c/453077 Add snapshot id to the snapshot notifications
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data. This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled json ref implementation to the notification sample test as the existing python json ref implementations are not well maintained.

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on 16th of May: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170516T17

Cheers, gibi
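To show what a "minimal hand-rolled json ref implementation" for sample deduplication looks like in principle, here is a self-contained sketch: a sample file can point at a shared payload fragment with `{"$ref": ...}` and the test framework inlines it. This resolver works on an in-memory dict instead of files, and all names are illustrative rather than nova's actual helper:

```python
import copy

# Sketch of a minimal JSON $ref resolver for notification samples:
# any dict consisting solely of a "$ref" key is replaced by the named
# fragment, recursively, so shared payload data lives in one place.

def resolve_refs(node, fragments):
    if isinstance(node, dict):
        if set(node) == {'$ref'}:
            # Deep-copy so one shared fragment can't be mutated via
            # another resolved sample.
            return resolve_refs(copy.deepcopy(fragments[node['$ref']]),
                                fragments)
        return {k: resolve_refs(v, fragments) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, fragments) for v in node]
    return node

fragments = {'common_payload.json': {'uuid': 'abc', 'state': 'active'}}
sample = {'event_type': 'instance.update',
          'payload': {'$ref': 'common_payload.json'}}
print(resolve_refs(sample, fragments))
```

With this shape, adding a field to the shared fragment updates every sample that references it, which is exactly the duplication problem the patch series above addresses.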
[openstack-dev] [nova]next notification subteam meeting
Hi, The next two notification subteam meetings are canceled so the next meeting will be held on 6th of June. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170606T17 Cheers, gibi
[openstack-dev] [nova] notification update week 23
Hi, After two weeks of silence here is the status update / focus setting mail about notification work for week 23.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. One of the prerequisite patches needs some discussion https://review.openstack.org/#/q/topic:bug/1657428
[New] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at We missed the updated_at field during the transformation of the instance notifications.

Versioned notification transformation
The following patches are waiting for core review: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

Searchlight integration
bp additional-notification-fields-for-searchlight: The patch series adding keypairs, BDMs and tags needs a rebase: https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open

Small improvements
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/c/450787/ remove ugly local import
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data. This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled json ref implementation to the notification sample test as the existing python json ref implementations are not well maintained.

Weekly meeting
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on 6th of June.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170606T17 Cheers, gibi
[openstack-dev] [nova] notification update week 24
Hi, Here is the status update / focus setting mail about notification work for week 24.

Bugs
[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance notifications are sent with inconsistent timestamp format. One of the prerequisite patches needs some discussion https://review.openstack.org/#/q/topic:bug/1657428
[New] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at We missed the updated_at field during the transformation of the instance notifications. This is pretty easy to fix so I marked it as low-hanging.
[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute We agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional.

Versioned notification transformation
Patches still need core attention: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

Searchlight integration
bp additional-notification-fields-for-searchlight: The first two patches in the series need core attention; the rest need some care from the author: https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open

Small improvements
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/c/450787/ remove ugly local import
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data. This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree.
We added a minimal hand-rolled json ref implementation to the notification sample tests, as the existing Python json ref implementations are not well maintained.

Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 13th of June: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170613T17

Cheers,
gibi
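The hand-rolled json ref approach mentioned above can be illustrated with a minimal sketch. This is not nova's actual implementation — the function name, the in-memory `store`, and the `name#` ref style are assumptions for illustration — but it shows the core idea: expand `{"$ref": ...}` nodes in a notification sample by splicing in a shared sample document, so common payloads live in one place.

```python
# Minimal sketch of a hand-rolled JSON $ref resolver (hypothetical names,
# NOT nova's implementation). It replaces {"$ref": "name#"} nodes with the
# referenced document from `store` and recurses into dicts and lists.
def resolve_refs(node, store):
    """Recursively expand $ref entries in `node` using the `store` mapping."""
    if isinstance(node, dict):
        if '$ref' in node:
            name = node['$ref'].rstrip('#')
            # Copy so the shared sample in the store is never mutated.
            return resolve_refs(dict(store[name]), store)
        return {k: resolve_refs(v, store) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, store) for v in node]
    return node

# A shared payload sample, referenced by several notification samples.
store = {'flavor': {'name': 'm1.tiny', 'vcpus': 1}}
sample = {'payload': {'flavor': {'$ref': 'flavor#'}, 'uuid': 'abc'}}
expanded = resolve_refs(sample, store)
print(expanded)
# {'payload': {'flavor': {'name': 'm1.tiny', 'vcpus': 1}, 'uuid': 'abc'}}
```

With such a helper, each notification sample file only carries what is unique to it, which is how the deduplication series shrinks the in-tree sample data.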
[openstack-dev] [nova] notification update week 25
Hi,

Here is the status update / focus setting mail about notification work for week 25.

Bugs
----
[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at. Takashi proposed the fix https://review.openstack.org/#/c/475276/ which looks good.

[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute. Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional.

Versioned notification transformation
-------------------------------------
Patches that need only a second +2:
* https://review.openstack.org/#/c/385644/ Transform rescue/unrescue instance notifications
* https://review.openstack.org/#/c/402124/ Transform instance.live_migration_rollback notification
* https://review.openstack.org/#/c/460029/ Transform instance.soft_delete notifications
* https://review.openstack.org/#/c/453077/ Add snapshot id to the snapshot notifications

Patches that look good from the subteam perspective: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open
The first patch in the series needs just a second +2; the rest need general review.

Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree.
We added a minimal hand-rolled json ref implementation to the notification sample tests, as the existing Python json ref implementations are not well maintained.

Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 20th of June: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170620T17

Cheers,
gibi
[openstack-dev] [nova] api.fault notification is never emitted
Hi,

I came across some questionable behavior in nova while I tried to use the notify_on_api_faults configuration option [0] while testing the related versioned notification transformation patch [1]. Based on the description of the config option and the code that uses it [2], nova sends an api.fault notification if the nova-api service encounters an unhandled exception. There is a FaultWrapper class [3] added to the pipeline of the REST request which catches every exception and calls the notification sending.

Based on some debugging in devstack, this FaultWrapper never catches any exception. I injected a ValueError at the beginning of the nova.objects.aggregate.Aggregate.create method. This resulted in an HTTPInternalServerError exception and an HTTP 500 error code, but the exception handling part of the FaultWrapper [4] was never reached.

So I dug a bit deeper and I think I found the reason. Every REST API method is decorated with the expected_errors decorator [5], which as a last resort translates any unexpected exception to HTTPInternalServerError. In the wsgi stack the actual REST API call is guarded by the ResourceExceptionHandler context manager [7], which translates HTTPException to a Fault [8]. The Fault is then caught and translated to the REST response [7]. This way the exception never propagates back to the FaultWrapper in [6], and therefore the api.fault notification is never emitted. You can see the api logs here [9] and the patch that I used to add the extra traces here [10]. Please note that there is a compute.exception notification visible in the log, but that is a different notification, emitted from the wrap_exception decorator [11] used only in compute.manager [12] and compute.api [13].

So my questions are:
1) Is this a bug in the nova wsgi stack, or is it expected that the wsgi code catches everything?
2) Do we need FaultWrapper at all if the wsgi stack catches every exception?
3) Do we need the api.fault notification at all? It seems nobody has missed it so far.
4) If we want to have the api.fault notification, then what would be a good place to emit it? Maybe the ResourceExceptionHandler at [8]?

I filed a bug for tracking purposes [14].

Cheers,
gibi

[0] https://github.com/openstack/nova/blob/e66e5822abf0e9f933cf6bd1b4c63007b170/nova/conf/notifications.py#L49
[1] https://review.openstack.org/#/c/469038
[2] https://github.com/openstack/nova/blob/d68626595ed54698c7eb013a788ee3b98e068cdd/nova/notifications/base.py#L83
[3] https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/__init__.py#L79
[4] https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/__init__.py#L87
[5] https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/extensions.py#L325
[6] https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/extensions.py#L368
[7] https://github.com/openstack/nova/blob/4a0fb6ae79acedabf134086d4dce6aae0e4f6209/nova/api/openstack/wsgi.py#L637
[8] https://github.com/openstack/nova/blob/4a0fb6ae79acedabf134086d4dce6aae0e4f6209/nova/api/openstack/wsgi.py#L418
[9] https://pastebin.com/Eu6rBjNN
[10] https://pastebin.com/en4aFutc
[11] https://github.com/openstack/nova/blob/master/nova/exception_wrapper.py#L57
[12] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L105
[13] https://github.com/openstack/nova/blob/master/nova/compute/api.py#L92
[14] https://bugs.launchpad.net/nova/+bug/1699115
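The exception-swallowing behavior described in the message above can be reproduced with a toy stack. All names here are hypothetical stand-ins (no real webob or nova code): an inner `expected_errors`-style wrapper converts the exception into a 500 response, so the outer FaultWrapper-style middleware only ever sees a normal response, and its notification hook is effectively dead code.

```python
# Toy reproduction of the described behavior (hypothetical names, not nova
# code): the inner wrapper turns every exception into a 500 response, so the
# outer fault wrapper's except-branch, which would emit api.fault, never runs.
notifications = []

def api_method(req):
    raise ValueError('injected fault')  # like the injected ValueError above

def expected_errors(func):
    # Analogous to the expected_errors decorator: unexpected exceptions
    # become an HTTP 500 response instead of propagating upward.
    def wrapper(req):
        try:
            return func(req)
        except Exception:
            return ('500 Internal Server Error', 'unexpected error')
    return wrapper

def fault_wrapper(app):
    # Analogous to FaultWrapper: would emit api.fault on an exception.
    def wrapper(req):
        try:
            return app(req)
        except Exception:
            notifications.append('api.fault')  # never reached
            return ('500 Internal Server Error', 'fault')
    return wrapper

app = fault_wrapper(expected_errors(api_method))
status, body = app({})
print(status)         # 500 Internal Server Error
print(notifications)  # [] -- the api.fault notification was never emitted
```

The client still gets a 500 either way; the only observable difference is the missing notification, which matches the debugging result in the message.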
[openstack-dev] [nova] notification update week 26
Hi,

Here is the status update / focus setting mail about notification work for week 26.

Bugs
----
[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at. Takashi proposed the fix https://review.openstack.org/#/c/475276/ which only needs a second +2.

[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute. Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. Patch has been proposed: https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted. It seems that the legacy api.fault notification is never emitted from nova. More details and the question about the way forward are in a separate ML thread: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1698779 aggregate related notification samples are missing from the notification dev-ref. It is a doc generation bug. A fix and an improvement of the doc generation have been proposed: https://review.openstack.org/#/c/475349/

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications are emitted per-cell instead of globally. The Vitrage tempest test was broken due to missing notifications from nova-compute, caused by the cells devstack change https://review.openstack.org/#/c/436094/. A revert is on the way: https://review.openstack.org/#/c/477436/. The final solution is to configure a separate, non cell-local transport_url for notifications.
This is already possible with current oslo.messaging: https://docs.openstack.org/developer/oslo.messaging/opts.html#oslo_messaging_notifications.transport_url

Versioned notification transformation
-------------------------------------
Patches that need only a second +2:
* https://review.openstack.org/#/c/402124/ Transform instance.live_migration_rollback notification

Patches that look good from the subteam perspective: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open
The next two patches in the series need just a second +2. The last patch needs a rebase due to a conflict.

Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled json ref implementation to the notification sample tests, as the existing Python json ref implementations are not well maintained.

Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 27th of June: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170627T17

Cheers,
gibi
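For the per-cell notification issue above (bug 1700496), the fix is configuration-only: oslo.messaging lets the notification transport differ from the RPC transport via the `[oslo_messaging_notifications] transport_url` option linked above. A sketch of what that could look like in nova.conf — the host names and credentials are placeholders, not taken from any real deployment:

```ini
# nova.conf on a cell-local service (placeholder hosts and credentials)
[DEFAULT]
# RPC traffic stays on the cell-local message queue
transport_url = rabbit://nova:secret@cell1-mq.example.org:5672/

[oslo_messaging_notifications]
# Notifications go to a single global message queue so that consumers
# such as Searchlight or Vitrage see events from every cell
transport_url = rabbit://nova:secret@global-mq.example.org:5672/
```

With this split, each cell keeps its own RPC bus while all notification traffic converges on one endpoint.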
[openstack-dev] [nova] notification update week 27
Hi,

Here is the status update / focus setting mail about notification work for week 27.

Bugs
----
[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at. Takashi proposed the fix https://review.openstack.org/#/c/475276/ which is ready for the cores to look at.

[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute. Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. Patch has been proposed: https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted. It seems that the legacy api.fault notification is never emitted from nova. More details and the question about the way forward are in a separate ML thread: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1698779 aggregate related notification samples are missing from the notification dev-ref. It is a doc generation bug. A fix and an improvement of the doc generation have been proposed https://review.openstack.org/#/c/475349/ and reviewed by the subteam, so it is ready for the cores to look at.

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications are emitted per-cell instead of globally. The fix is to configure a global MQ endpoint for the notifications in cells v2.
Patch is being worked on: https://review.openstack.org/#/c/477556/

Versioned notification transformation
-------------------------------------
A couple of patches have been merged, but we still have a bunch that are ready from the subteam perspective: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Two patches merged last week. The BDM addition just needs a second +2: https://review.openstack.org/#/c/448779/

Besides the BDM patch we are still missing the Add tags to instance.create Notification patch https://review.openstack.org/#/c/459493/, but that depends on supporting tags at instance boot https://review.openstack.org/#/c/394321/, which is still not ready.

Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled json ref implementation to the notification sample tests, as the existing Python json ref implementations are not well maintained.

Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 4th of July: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170704T17

Cheers,
gibi
[openstack-dev] [nova] notification update week 28
Hi,

Here is the status update / focus setting mail about notification work for week 28.

Bugs
----
[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at. Takashi's fix needs a second +2: https://review.openstack.org/#/c/475276/

[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute. Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. Patch has been proposed: https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id of the versioned instance.update notification is not consistent with other notifications. The inconsistency of publisher_ids was revealed by #1696152. Fix has been proposed: https://review.openstack.org/#/c/480984

[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted. Still no response on the ML thread about the way forward: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications are emitted per-cell instead of globally. The fix is to configure a global MQ endpoint for the notifications in cells v2.
Patch is being worked on: https://review.openstack.org/#/c/477556/

Versioned notification transformation
-------------------------------------
There is quite a long list of ready notification transformations for the cores to look at: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

If you are afraid of the long list, here is a short list of live migration related transformations:
* https://review.openstack.org/#/c/480214/
* https://review.openstack.org/#/c/420453/
* https://review.openstack.org/#/c/480119/
* https://review.openstack.org/#/c/469784/

Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BDM addition needs core review; it just lost its +2 due to a rebase: https://review.openstack.org/#/c/448779/

Besides the BDM patch we are still missing the Add tags to instance.create Notification patch https://review.openstack.org/#/c/459493/, but that depends on supporting tags at instance boot https://review.openstack.org/#/c/394321/, which is still not ready.

Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled json ref implementation to the notification sample tests, as the existing Python json ref implementations are not well maintained.

Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 11th of July.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170711T17

Cheers,
gibi
[openstack-dev] [nova][placement] scheduling with custom resource classes
Dear Placement developers,

I'm trying to build on top of the custom resource class implementation [1][2] from the current master [3]. I'd like to place instances based on normal resources (cpu, ram) and on a custom resource I will call MAGIC for this discussion. So far I have managed to use the placement API to define the CUSTOM_MAGIC resource class, create a provider, and report some inventory of MAGIC from that provider. Then I added 'resources:CUSTOM_MAGIC=512' to the flavor's extra_specs.

During server boot the scheduler builds a seemingly good placement request, "GET /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1", but placement returns an empty response. The nova scheduler then falls back to legacy behavior [4] and places the instance without considering the custom resource request. Then I tried to connect the compute provider and the MAGIC provider to the same aggregate via the placement API, but the above placement request still resulted in an empty response. See my exact steps in [5].

Am I still missing some environment setup on my side to make it work? Is the work in [1] incomplete? Are the missing pieces in [2] needed to make this use case work? If more implementation is needed, I can offer some help during the Queens cycle.

To make the above use case fully functional, I realized that I need a service that periodically updates the placement service with the state of the MAGIC resource, like the resource tracker in Nova. Are there any existing plans to create a generic service or framework that can be used for such tracking and reporting purposes?
Cheers,
gibi

[1] https://review.openstack.org/#/q/topic:bp/custom-resource-classes-pike
[2] https://review.openstack.org/#/q/topic:bp/custom-resource-classes-in-flavors
[3] 0ffe7b27892fde243fc1006f800f309c10d66028
[4] https://github.com/openstack/nova/blob/48268c73e3f43fa763d071422816942942987f4a/nova/scheduler/manager.py#L116
[5] http://paste.openstack.org/show/615152/
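The setup steps described in the message above (define the custom resource class, create a provider, report inventory, request the resource via a flavor extra spec) can be sketched as a sequence of placement REST calls. The endpoint paths follow the placement API, but the provider UUID and the exact body contents here are illustrative placeholders, not taken from the referenced paste:

```python
# Sketch of the placement API call sequence for a custom MAGIC resource.
# Bodies are abbreviated and the provider UUID is a generated placeholder.
import uuid

provider_uuid = str(uuid.uuid4())
calls = [
    # 1) Register the custom resource class (name must start with CUSTOM_).
    ('POST', '/resource_classes', {'name': 'CUSTOM_MAGIC'}),
    # 2) Create the resource provider that owns the MAGIC inventory.
    ('POST', '/resource_providers',
     {'name': 'magic-provider', 'uuid': provider_uuid}),
    # 3) Report inventory of MAGIC on that provider.
    ('PUT', '/resource_providers/%s/inventories/CUSTOM_MAGIC' % provider_uuid,
     {'total': 512, 'resource_provider_generation': 0}),
]
# 4) The flavor then requests the resource via an extra spec, so the
#    scheduler adds CUSTOM_MAGIC to the allocation_candidates query:
flavor_extra_specs = {'resources:CUSTOM_MAGIC': '512'}

for method, path, _body in calls:
    print(method, path)
```

As the follow-up thread below explains, these steps alone are not enough when MAGIC lives on a provider separate from the compute node: the sharing provider must also carry the MISC_SHARES_VIA_AGGREGATE trait and share an aggregate with the compute provider.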
[openstack-dev] [nova] notification update week 29
Hi,

Here is the status update / focus setting mail about notification work for week 29.

Bugs
----
[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at. The fix https://review.openstack.org/#/c/475276/ is in focus, but comments need to be addressed.

[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute. Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. Patch needs review: https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id of the versioned instance.update notification is not consistent with other notifications. The inconsistency of publisher_ids was revealed by #1696152. Patch needs review: https://review.openstack.org/#/c/480984

[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted. Still no response on the ML thread about the way forward: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications are emitted per-cell instead of globally. The fix is to configure a global MQ endpoint for the notifications in cells v2.
Patch looks good from the notification perspective but affects other parts of the system as well: https://review.openstack.org/#/c/477556/

Versioned notification transformation
-------------------------------------
Last week's merge conflicts are mostly cleaned up and there are 11 patches waiting for core review: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Verified%253C0+AND+NOT+label:Code-Review%253C0

If you are afraid of the long list, here is a short list of live migration related transformations to look at:
* https://review.openstack.org/#/c/480214/
* https://review.openstack.org/#/c/420453/
* https://review.openstack.org/#/c/480119/
* https://review.openstack.org/#/c/469784/

Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BDM addition has been merged.

As the last piece of the bp we are still missing the Add tags to instance.create Notification patch https://review.openstack.org/#/c/459493/, but that depends on supporting tags at instance boot https://review.openstack.org/#/c/394321/, which is getting closer to being merged. Focus is on these patches.

There is a set of follow-up patches for the BDM addition to optimize the payload generation, but these are not mandatory for the functionality: https://review.openstack.org/#/c/483324/

Instability of the notification sample tests
--------------------------------------------
Multiple instabilities of the sample tests were detected last week.
The nova functional tests fail intermittently for at least two distinct reasons:
* https://bugs.launchpad.net/nova/+bug/1704423 _test_unshelve_server intermittently fails in functional versioned notification tests. A possible solution was found; the fix is proposed and only needs a second +2: https://review.openstack.org/#/c/483986/
* https://bugs.launchpad.net/nova/+bug/1704392 TestInstanceNotificationSample.test_volume_swap_server fails with "testtools.matchers._impl.MismatchError: 7 != 6". A patch that improves logging of the failure has been merged https://review.openstack.org/#/c/483939/ and a detailed log is now available to look at: http://logs.openstack.org/82/482382/4/check/gate-nova-tox-functional-ubuntu-xenial/38a4cb4/console.html#_2017-07-16_01_14_36_313757

Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled json ref implementation to the notification sample tests, as the existing Python json ref implementations are not well maintained.

Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 18th of July: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170718T17

Cheers,
gibi
Re: [openstack-dev] [nova][placement] scheduling with custom resource classes
On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent wrote:
> On Thu, 13 Jul 2017, Balazs Gibizer wrote:
>> /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" but placement returns an empty response. Then nova scheduler falls back to legacy behavior [4] and places the instance without considering the custom resource request.
>
> As far as I can tell, at least one missing piece of the puzzle here is that your MAGIC provider does not have the 'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute and MAGIC providers to be in the same aggregate; the MAGIC provider needs to announce that its inventory is for sharing. The comments here have a bit more on that: https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678

Thanks a lot for the detailed answer. Yes, this was the missing piece. However, I had to add that trait both to the MAGIC provider and to my compute provider to make it work. Is it intentional that the compute also has to have that trait? I updated my script with the trait. [3]

> It's quite likely this is not well documented yet, as this style of declaring that something is shared was a later development. The initial code that added the support for GET /resource_providers was around; it was later reused for GET /allocation_candidates: https://review.openstack.org/#/c/460798/

What would be a good place to document this? I think I can help with enhancing the documentation from this perspective.

Thanks again.

Cheers,
gibi

> --
> Chris Dent ┬──┬◡ノ(° -°ノ) https://anticdent.org/
> freenode: cdent tw: @anticdent

[3] http://paste.openstack.org/show/615629/
Re: [openstack-dev] [nova] notification update week 29
On Mon, Jul 17, 2017 at 9:32 PM, Matt Riedemann wrote: On 7/17/2017 2:36 AM, Balazs Gibizer wrote:
> [snip: full quote of the week 29 status update, reproduced above]
Re: [openstack-dev] [nova][placement] scheduling with custom resource classes
On Mon, Jul 17, 2017 at 6:40 PM, Jay Pipes wrote: On 07/17/2017 11:31 AM, Balazs Gibizer wrote:
> On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent wrote:
>> On Thu, 13 Jul 2017, Balazs Gibizer wrote:
>>> /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" but placement returns an empty response. Then nova scheduler falls back to legacy behavior [4] and places the instance without considering the custom resource request.
>>
>> As far as I can tell at least one missing piece of the puzzle here is that your MAGIC provider does not have the 'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute and MAGIC to be in the same aggregate; the MAGIC provider needs to announce that its inventory is for sharing. The comments here have a bit more on that: https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678
>
> Thanks a lot for the detailed answer. Yes, this was the missing piece. However, I had to add that trait both to the MAGIC provider and to my compute provider to make it work. Is it intentional that the compute also has to have that trait?

No. The compute node doesn't need that trait. It only needs to be associated with an aggregate that is associated with the provider that is marked with the MISC_SHARES_VIA_AGGREGATE trait. In other words, you need to do this:

1) Create the provider record for the thing that is going to share the CUSTOM_MAGIC resources
2) Create an inventory record on that provider
3) Set the MISC_SHARES_VIA_AGGREGATE trait on that provider
4) Create an aggregate
5) Associate both the above provider and the compute node provider with the aggregate

That's it. The compute node provider will now have access to the CUSTOM_MAGIC resources that the other provider has in inventory.

Something doesn't add up.
We tried exactly your order of actions (see the script [1]) but placement returns an empty result (see the logs of the script[2], of the scheduler[3], of the placement[4]). However, as soon as we add the MISC_SHARES_VIA_AGGREGATE trait to the compute provider as well, placement-api returns allocation candidates as expected. We are trying to get some help from the related functional test [5] but honestly we still need some time to digest those LOCs. So any direct help is appreciated. BTW, should I open a bug for it? As a related question, I looked at the claim in the scheduler patch https://review.openstack.org/#/c/483566 and I am wondering if that patch wants to claim not just the resources a compute provider provides but also custom resources like MAGIC at [6]. In the meantime I will go and test that patch to see what it actually does with some MAGIC. :) Thanks for the help! Cheers, gibi [1] http://paste.openstack.org/show/615707/ [2] http://paste.openstack.org/show/615708/ [3] http://paste.openstack.org/show/615709/ [4] http://paste.openstack.org/show/615710/ [5] https://github.com/openstack/nova/blob/0e6cac5fde830f1de0ebdd4eebc130de1eb0198d/nova/tests/functional/db/test_resource_provider.py#L1969 [6] https://review.openstack.org/#/c/483566/3/nova/scheduler/filter_scheduler.py@167 Magic. :) Best, -jay > I updated my script with the trait. [3] > >> >> It's quite likely this is not well documented yet as this style of >> declaring that something is shared was a later development. The >> initial code that added the support for GET /resource_providers >> was around, it was later reused for GET /allocation_candidates: >> >> https://review.openstack.org/#/c/460798/ > > What would be a good place to document this? I think I can help with > enhancing the documentation from this perspective. > > Thanks again.
> Cheers, > gibi > >> >> -- >> Chris Dent ┬──┬◡ノ(° -°ノ) https://anticdent.org/ >> freenode: cdent tw: @anticdent > > [3] http://paste.openstack.org/show/615629/ > > > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
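The five-step sharing-provider recipe described in this thread can be modeled in a few lines of self-contained Python. This is an illustrative sketch of the intended placement semantics, not nova or placement code, and all names are made up for the example: a provider's inventory is usable by a compute node only when the provider announces the MISC_SHARES_VIA_AGGREGATE trait and shares at least one aggregate with that compute node.

```python
# Toy model of the provider-sharing semantics discussed above.
# NOT nova/placement code; data structures and names are illustrative.

def sharing_providers(providers, compute):
    """Return names of providers whose inventory the compute node can use.

    providers: list of dicts with 'name', 'traits' (set), 'aggregates' (set)
    compute:   dict with 'name' and 'aggregates' (set)
    """
    shared = []
    for rp in providers:
        if rp["name"] == compute["name"]:
            continue
        # The provider must announce that its inventory is shared...
        announces = "MISC_SHARES_VIA_AGGREGATE" in rp["traits"]
        # ...and sit in at least one aggregate with the compute node.
        same_agg = bool(rp["aggregates"] & compute["aggregates"])
        if announces and same_agg:
            shared.append(rp["name"])
    return shared

compute = {"name": "compute1", "aggregates": {"agg1"}}
magic = {"name": "magic_rp",
         "traits": {"MISC_SHARES_VIA_AGGREGATE"},
         "aggregates": {"agg1"}}
print(sharing_providers([magic], compute))  # compute1 can use magic_rp
```

Note that the compute node itself carries no trait here, matching the expected behavior in the thread: only the sharing provider needs MISC_SHARES_VIA_AGGREGATE.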
Re: [openstack-dev] [nova][placement] scheduling with custom resource classes
On Tue, Jul 18, 2017 at 2:39 PM, Balazs Gibizer wrote: On Mon, Jul 17, 2017 at 6:40 PM, Jay Pipes wrote: > On 07/17/2017 11:31 AM, Balazs Gibizer wrote: > > On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent > > > wrote: > >> On Thu, 13 Jul 2017, Balazs Gibizer wrote: > >> > >>> > /placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" > >>> but placement returns an empty response. Then nova scheduler falls > >>> back to legacy behavior [4] and places the instance without > >>> considering the custom resource request. > >> > >> As far as I can tell at least one missing piece of the puzzle here > >> is that your MAGIC provider does not have the > >> 'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute > >> and MAGIC to be in the same aggregate, the MAGIC needs to announce > >> that its inventory is for sharing. The comments here have a bit > more > >> on that: > >> > >> > https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678 > > > > Thanks a lot for the detailed answer. Yes, this was the missing > piece. > > However I had to add that trait both the the MAGIC provider and to > my > > compute provider to make it work. Is it intentional that the compute > > also has to have that trait? > > No. The compute node doesn't need that trait. It only needs to be > associated to an aggregate that is associated to the provider that is > marked with the MISC_SHARES_VIA_AGGREGATE trait. > > In other words, you need to do this: > > 1) Create the provider record for the thing that is going to share the > CUSTOM_MAGIC resources > > 2) Create an inventory record on that provider > > 3) Set the MISC_SHARES_VIA_AGGREGATE trait on that provider > > 4) Create an aggregate > > 5) Associate both the above provider and the compute node provider > with > the aggregate > > That's it. The compute node provider will now have access to the > CUSTOM_MAGIC resources that the other provider has in inventory. 
Something doesn't add up. We tried exactly your order of actions (see the script [1]) but placement returns an empty result (see the logs of the script[2], of the scheduler[3], of the placement[4]). However as soon as we add the MISC_SHARES_VIA_AGGREGATE trait to the compute provider as well then placement-api returns allocation candidates as expected. We are trying to get some help from the related functional test [5] but honestly we still need some time to digest that LOCs. So any direct help is appreciated. I managed to create a functional test case that reproduces the above problem https://review.openstack.org/#/c/485088/ BTW, should I open a bug for it? I also filed a bug so that we can track this work https://bugs.launchpad.net/nova/+bug/1705071 Cheers, gibi As a related question. I looked at the claim in the scheduler patch https://review.openstack.org/#/c/483566 and I wondering if that patch wants to claim not just the resources a compute provider provides but also custom resources like MAGIC at [6]. In the meantime I will go and test that patch to see what it actually does with some MAGIC. :) Thanks for the help! Cheers, gibi [1] http://paste.openstack.org/show/615707/ [2] http://paste.openstack.org/show/615708/ [3] http://paste.openstack.org/show/615709/ [4] http://paste.openstack.org/show/615710/ [5] https://github.com/openstack/nova/blob/0e6cac5fde830f1de0ebdd4eebc130de1eb0198d/nova/tests/functional/db/test_resource_provider.py#L1969 [6] https://review.openstack.org/#/c/483566/3/nova/scheduler/filter_scheduler.py@167 > > > Magic. :) > > Best, > -jay > > > I updated my script with the trait. [3] > > > >> > >> It's quite likely this is not well documented yet as this style of > >> declaring that something is shared was a later development. 
The > >> initial code that added the support for GET /resource_providers > >> was around, it was later reused for GET /allocation_candidates: > >> > >> https://review.openstack.org/#/c/460798/ > > > > What would be a good place to document this? I think I can help with > > enhancing the documentation from this perspective. > > > > Thanks again. > > Cheers, > > gibi > > > >> > >> -- > >> Chris Dent ┬──┬◡ノ(° -°ノ) > https://anticdent.org/ > >> freenode: cdent tw: > @anticdent > > > > [3] http://paste.openstack.org/show/615629/ > > > > > > > > > > >
Re: [openstack-dev] [nova][placement] scheduling with custom resource classes
On Wed, Jul 19, 2017 at 1:13 PM, Chris Dent wrote: On Wed, 19 Jul 2017, Balazs Gibizer wrote: We are trying to get some help from the related functional test [5] but honestly we still need some time to digest those LOCs. So any direct help is appreciated. I managed to create a functional test case that reproduces the above problem https://review.openstack.org/#/c/485088/ Excellent, thank you. I was planning to look into repeating this today, will first look at this test and see what I can see. Your experimentation is exactly the sort of stuff we need right now, so thank you very much. I added more info to the bug report and the review as it seems the test is fluctuating. BTW, should I open a bug for it? I also filed a bug so that we can track this work https://bugs.launchpad.net/nova/+bug/1705071 I guess Jay and Matt have already fixed a part of this, but not the whole thing. Sorry, copy pasted the wrong link, the correct link is https://bugs.launchpad.net/nova/+bug/1705231 Cheers, gibi -- Chris Dent ┬──┬◡ノ(° -°ノ) https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications
Hi, Steve asked the following question on IRC [1] < sjmc7> hi gibi. sorry, meant to bring this up in the notifications meeting but i had to step away for a bit. we were having a discussion last week about the field that the API returns as 'status' - do the notifications have an equivalent? I will try to answer it here so others can chime in. Internally in nova an instance has vm_state, task_state and power_state. On the REST API the instance has status which is calculated from vm_state and task_state. See the code doing the conversion here [2]. The instance notifications contain the vm_state, task_state and power_state of the instance but do not contain the calculated status value [3]. The instance.update notification has extra state fields to signal possible state transitions [4]. Technically we can add the calculated status field to the notifications but it is not there at the moment. So if searchlight needs that info right now then it needs to be calculated on the searchlight side based on the vm_state and the task_state from the notification. Adding this field can be a continuation of the bp additional-notification-fields-for-searchlight [5] in Queens. Cheers, gibi [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-07-18.log.html#t2017-07-18T17:39:36 [2] https://github.com/openstack/nova/blob/a4a9733f4a9ead01356f0f76c1bb1f04f905fa4e/nova/api/openstack/common.py#L113 [3] https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L52-L54 [4] https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L352 [5] https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
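The status derivation described above can be approximated in a few lines. This is a deliberately simplified sketch of the mapping in nova/api/openstack/common.py, not the full table; the vm_state/task_state entries shown are only an illustrative subset.

```python
# Simplified sketch of how nova derives the API-visible 'status' from
# vm_state and task_state. The real table in nova/api/openstack/common.py
# is larger; the entries below are only an illustrative subset.

# vm_state -> (default status, {task_state: status override})
_STATE_MAP = {
    "active": ("ACTIVE", {"rebooting": "REBOOT",
                          "rebuilding": "REBUILD"}),
    "building": ("BUILD", {}),
    "stopped": ("SHUTOFF", {}),
    "error": ("ERROR", {}),
}

def status_from_states(vm_state, task_state=None):
    """Pick the task_state-specific status if one exists, else the default."""
    default, overrides = _STATE_MAP.get(vm_state, ("UNKNOWN", {}))
    return overrides.get(task_state, default)

print(status_from_states("active"))               # ACTIVE
print(status_from_states("active", "rebooting"))  # REBOOT
```

A consumer like Searchlight could apply such a table to the vm_state and task_state fields it already receives in the versioned notifications, which is exactly the interim workaround suggested above.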
Re: [openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications
On Wed, Jul 19, 2017 at 5:38 PM, McLellan, Steven wrote: Thanks Balazs for noticing and replying to my message! The Status field is quite important to us since it's the indicator of VM state that Horizon displays most prominently and the most simple description of whether a VM is currently usable or not without having to parse the various _state fields. If we can't get this change added in Pike I'll probably implement a simplified version of the mapping in [2], but it would be really good to get it into the notifications in Pike if possible. I understand though that this late in the cycle it may not be possible. I can create a patch to add the status to the instance notifications but I don't know if nova cores will accept it this late in Pike. @Cores: Do you? Cheers, gibi Thanks, Steve On 7/19/17, 10:27 AM, "Balazs Gibizer" wrote: Hi, Steve asked the following question on IRC [1] < sjmc7> hi gibi. sorry, meant to bring this up in the notifications meeting but i had to step away for a bit. we were having a discussion last week about the field that the API returns as 'status' - do the notifications have an equivalent? I will try to answer it here so others can chime in. Internally in nova an instance has vm_state, task_state and power_state. On the REST API the instance has status which is calculated from vm_state and task_state. See the code doing the conversion here [2]. The instance notifications contain both the vm_state, task_state and power_state of the instance but do not contain the calculated status value [3]. The instance.update notification has extra state fields to signal possible state transitions [4]. Technically we can add the calculated status field to the notifications but it is not there at the moment. So if searchlight needs that info right now then it needs to be calculated on searchlight side based on the vm_state and the task_state from the notification.
Adding this field can be a continuation of the bp additional-notification-fields-for-searchlight [5] in Queens. Cheers, gibi [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-07-18.log.html#t2017-07-18T17:39:36 [2] https://github.com/openstack/nova/blob/a4a9733f4a9ead01356f0f76c1bb1f04f905fa4e/nova/api/openstack/common.py#L113 [3] https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L52-L54 [4] https://github.com/openstack/nova/blob/2e4417d57cb6f74664c5746b43db9a96797f33e9/nova/notifications/objects/instance.py#L352 [5] https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
Re: [openstack-dev] [nova][placement] scheduling with custom resource classes
On Wed, Jul 19, 2017 at 3:54 PM, Chris Dent wrote: On Wed, 19 Jul 2017, Balazs Gibizer wrote: I added more info to the bug report and the review as it seems the test is fluctuating. (Reflecting some conversation gibi and I have had in IRC) I've made a gabbi-based replication of the desired functionality. It also flaps, with a >50% failure rate: https://review.openstack.org/#/c/485209/ Sorry copy pasted the wrong link, the correct link is https://bugs.launchpad.net/nova/+bug/1705231 This has been updated (by gibi) to show that the generated SQL is different between the failure and success cases. Thanks Jay for proposing the fix https://review.openstack.org/#/c/485088/ . It works for me both in the functional env and in devstack. cheers, gibi -- Chris Dent ┬──┬◡ノ(° -°ノ) https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [nova]notification update week 30
Hi, Here is the status update / focus setting mail about notification work for week 30. Better late than never. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server notifications don't include updated_at The fix is on review https://review.openstack.org/#/c/475276/ and seems complete. However the solution uncovered another bug https://bugs.launchpad.net/nova/+bug/1704928 updated_at field is set on the instance only after it is scheduled. Matt wanted to see at least an analysis about the root cause to rule out the connection with the notification change. Analysis is available in the bug report, it seems it is a db api problem and a dirty fix is proposed. So I think the original notification fix can land now. [Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. Patch needs update based on the review feedback https://review.openstack.org/#/c/476538/ [Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id of the versioned instance.update notification is not consistent with other notifications The inconsistency of publisher_ids was revealed by #1696152. Patch needs a second +2: https://review.openstack.org/#/c/480984 [Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted Still no response on the ML thread about the way forward. http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html [Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications are emitted per-cell instead of globally Fix is to configure a global MQ endpoint for the notifications in cells v2.
Patch looks good from notification perspective and only needs a second +2 https://review.openstack.org/#/c/477556/ Versioned notification transformation - There are 16 patches that are waiting for core review: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Verified%253C0+AND+NOT+label:Code-Review%253C0 We will continue the work in Queens with the ones that miss the Feature Freeze. Searchlight integration --- bp additional-notification-fields-for-searchlight ~ All the must-have pieces have been merged and the bp is marked as implemented. Thank you to all who have been involved. Nice job! There are follow up patches related to the bp that most probably need to be moved to Queens: * There is a set of patches for the BDM addition to optimize the payload generation but these are not mandatory for the functionality https://review.openstack.org/#/c/483324/ * There was a late request from Searchlight to provide the 'status' field in the instance notifications as well. See the discussion on the ML http://lists.openstack.org/pipermail/openstack-dev/2017-July/119891.html There is a WIP patch with the solution but we are running out of time with that https://review.openstack.org/#/c/485525/ Instability of the notification sample tests Last week we cleaned up a couple of instabilities. But it seems one still remains in the test_create_delete_server test case https://bugs.launchpad.net/nova/+bug/1705818. Troubleshooting patch has been merged https://review.openstack.org/#/c/486301/ . Signature has been added to the bug report but no occurrence with a detailed log observed yet. Small improvements ~~ * https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting. I think it is an improvement that is safe to be considered after the FF as it is only a test improvement.
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is a start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand rolled json ref implementation to the notification sample tests as the existing python json ref implementations are not well maintained. This is also just a test and doc improvement so I hope it can continue after the FF. Weekly meeting -- The notification subteam holds its weekly meeting on Tuesday 17:00 UTC on openstack-meeting-4. The next meeting will be held on the 25th of July. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170725T17 Cheers, gibi
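The "minimal hand rolled json ref implementation" mentioned above is not shown in the thread; a comparable minimal resolver could look like the sketch below. This is illustrative only, not the nova code: it assumes `$ref` values name keys in a local store of sample fragments and ignores file paths and JSON pointers.

```python
# Minimal illustrative $ref resolver for notification sample templates.
# NOT the actual nova implementation; it assumes $ref values are keys in
# a local 'store' dict of sample fragments.

def resolve_refs(node, store):
    """Recursively replace {'$ref': name} nodes with store[name]."""
    if isinstance(node, dict):
        if set(node) == {"$ref"}:
            # The whole dict is a reference: splice in the target,
            # resolving any refs the target itself contains.
            return resolve_refs(store[node["$ref"]], store)
        return {k: resolve_refs(v, store) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, store) for v in node]
    return node

store = {"flavor_payload": {"vcpus": 1, "memory_mb": 512}}
sample = {"payload": {"flavor": {"$ref": "flavor_payload"}}}
print(resolve_refs(sample, store))
# {'payload': {'flavor': {'vcpus': 1, 'memory_mb': 512}}}
```

The deduplication idea is then simply to store shared fragments (like a flavor payload) once and reference them from each notification sample instead of repeating them.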
[openstack-dev] [nova]notification update week 31
Hi, Here is the status update / focus setting mail about notification work for week 31. Bugs [Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. The patch was split into two parts: * https://review.openstack.org/#/c/487126 rename binary to source in versioned notifications * https://review.openstack.org/#/c/476538 Use enum value instead of string service name [Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted Still no response on the ML thread about the way forward. http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in an infinite recursion. Can we somehow tell python not to truncate the stack trace in this case to see where the infinite recursion starts? [Undecided] https://bugs.launchpad.net/nova/+bug/1706533 TestInstanceNotificationSample.test_rebuild_server_exc fails with testtools.matchers._impl.MismatchError: 2 != 1 Yet another notification sample test instability. Fix is under review: https://review.openstack.org/#/c/487382/ Versioned notification transformation - As Pike FF happened I will open a new bp for Queens to track the remaining work there. Searchlight integration --- I will open a follow up bp for Queens.
There are follow up patches to be moved there: * There is a set of patches for the BDM addition to optimize the payload generation but these are not mandatory for the functionality https://review.openstack.org/#/c/483324/ * There was a late request from Searchlight to provide the 'status' field in the instance notifications as well. See the discussion on the ML http://lists.openstack.org/pipermail/openstack-dev/2017-July/119891.html There is a WIP patch with the solution but we are running out of time with that https://review.openstack.org/#/c/485525/ Small improvements -- These improvements are test and doc generation only so probably not affected by the FF. * https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting. * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is a start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand rolled json ref implementation to the notification sample tests as the existing python json ref implementations are not well maintained. Weekly meeting -- As Pike FF happened I suggest skipping this week's meeting. Please disagree in a reply if you have items to discuss. Cheers, gibi
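On the stack-overflow question in this mail: one generic way to locate where an infinite recursion starts, without relying on the gate printing a full stack trace, is to lower the recursion limit so the RecursionError fires early and carries a short, readable traceback. This is a general Python debugging technique, not something agreed in the thread; `buggy` below is a stand-in for the code under test.

```python
# Generic technique for localizing an infinite recursion: lower the
# recursion limit so RecursionError triggers quickly, then inspect the
# short traceback. Illustrative only; 'buggy' stands in for the real
# code under test.
import sys
import traceback

def buggy(n):
    # stand-in for the code that recurses forever
    return buggy(n + 1)

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(100)   # small limit -> short, readable traceback
try:
    buggy(0)
except RecursionError:
    tb = traceback.format_exc()
    # the repeating frame name points at the cycle
    print("buggy" in tb)
finally:
    sys.setrecursionlimit(old_limit)
```

Whether this helps for bug 1685333 depends on being able to run the suspect test locally at all, which, as noted above, is part of the problem.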
[openstack-dev] [nova]notification update week 32
Hi, Here is the status update / focus setting mail about notification work for week 32. Bugs [Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. The patch was split into two parts: * https://review.openstack.org/#/c/487126 rename binary to source in versioned notifications * https://review.openstack.org/#/c/476538 Use enum value instead of string service name [Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted Still no response on the ML thread about the way forward. http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in an infinite recursion. I don't know about a way to reproduce it locally or to change the gate env so that python prints out the full stack trace to see where the problematic call is. Also adding extra log messages won't help as a timed out test doesn't have the log messages printed to the logs. So this bug is pretty stuck. Versioned notification transformation - Thanks to Matt we have the Queens bp open https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-queens Open patches need to be retargeted to this bp as soon as master is open for Queens. Searchlight integration --- I opened a follow up bp for Queens: https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight-queens I left it in drafting state as I expect the Searchlight team to come back with some feedback and / or extra needs.
Small improvements -- These improvements are test and doc generation only so probably not affected by the FF. * https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting. * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is a start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand rolled json ref implementation to the notification sample tests as the existing python json ref implementations are not well maintained. Weekly meeting -- As nothing important is going on right now in the subteam, I'm planning not to have a meeting this week either. Please disagree in a reply if you have items to discuss. Cheers, gibi
Re: [openstack-dev] [nova] Thanks gibi!
On Thu, Aug 10, 2017 at 8:43 PM, Jay Pipes wrote: On 08/10/2017 01:57 PM, Matt Riedemann wrote: > Apparently we don't have community contributor awards at the PTG, only > the summit, and seeing as that's several months away now, which is kind > of an eternity, I wanted to take the time now to thank gibi (Balazs > Gibizer to his parents) for all the work he's been doing in Nova. > > Not only does gibi lead the versioned notification transformation work, > which includes running a weekly meeting (that only one other person > shows up to) and sending a weekly status email, and does it in a > ridiculously patient and kind way, but he's also been identifying > several critical issues late in the release related to the Placement and > claims in the scheduler work that's going on. > > And it's not just doing manual testing, reporting a bug and throwing it > over the wall - which is a major feat in OpenStack on it's own - but > also taking the time to write automated functional regression tests to > exhibit the bugs so when we have a fix we can tell it's actually > working, plus he's been fixing some on his own also. > > So with all that, I just wanted to formally and publicly say thanks to > gibi for the great work he's doing which often goes overlooked when > we're crunching toward a deadline. Couldn't agree more. Thank you Gibi for your hard work and valuable contributions over the last cycle and more. Your efforts have not gone unnoticed. Thank you guys for the nice words, the support, and the encouragement. I enjoy working with the community. So I'm planning to continue sending those bug reports in the future too. 
Cheers, gibi All the best, -jay
[openstack-dev] [nova] notification subteam meeting is canceled
Hi, The notification subteam meeting is canceled this week. Cheers, gibi
[openstack-dev] [nova] notification subteam meeting is cancelled this week
Hi, The notification subteam meeting is canceled this week. Cheers, gibi
[openstack-dev] [nova]notification update week 35
Hi, After a couple of weeks of silence here is the status update / focus setting mail for w35. Bugs As there was no critical bug for Pike in the notification area there was not much progress in the bugs below. [Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications use nova-api as binary name instead of nova-osapi_compute Agreed not to change the binary name in the notifications. Instead we make an enum for that name to show that the name is intentional. The patch was split into two parts: * https://review.openstack.org/#/c/487126 rename binary to source in versioned notifications * https://review.openstack.org/#/c/476538 Use enum value instead of string service name While I was preparing this mail both patches were approved. [Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted Still no response on the ML thread about the way forward. http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in an infinite recursion. I don't know about a way to reproduce it locally or to change the gate env so that python prints out the full stack trace to see where the problematic call is. Also adding extra log messages won't help as a timed out test doesn't have the log messages printed to the logs. So this bug is pretty stuck. [Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications are emitted per-cell instead of globally Devstack config has already been modified so notifications are emitted to the top level MQ. It seems that only a nova cells doc update is needed that tells the admin how to configure the transport_url for the notifications.
Versioned notification transformation - The BP for Queens is approved, and some patches have already been re-proposed to the new BP. I think we have to start reviewing those soon. https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-queens Searchlight integration --- I opened a follow up bp for Queens: https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight-queens I left it in drafting state as I expect the Searchlight team to come back with some feedback and / or extra needs. I pinged the Searchlight folks on IRC to get some feedback on this BP. Small improvements -- * https://review.openstack.org/#/c/428199/ Improve assertJsonEqual error reporting. This is on the gate now. :) * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is a start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand rolled json ref implementation to the notification sample tests as the existing python json ref implementations are not well maintained. I will rebase the series and resolve the merge conflicts, then I will ask for feedback about the validity of the current direction. Weekly meeting -- After a long pause let's have a short subteam meeting at the usual place and time: Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170829T17 Cheers, gibi
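For the cells-related doc item mentioned in this mail (bug 1700496): oslo.messaging lets notifications use a different transport than RPC, which is how cell conductors can emit notifications to a top-level MQ. A sketch of the relevant nova.conf fragment, with host names and credentials as placeholders, might look like:

```ini
# nova.conf fragment (illustrative; hosts and credentials are placeholders).
# RPC stays on the cell-local message queue...
[DEFAULT]
transport_url = rabbit://nova:secret@cell1-mq:5672/

# ...while notifications are directed at the top-level message queue.
[oslo_messaging_notifications]
transport_url = rabbit://nova:secret@top-mq:5672/
driver = messagingv2
```

This matches the general shape of the cells v2 guidance; consult the nova cells documentation for the authoritative configuration.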
Re: [openstack-dev] [nova]notification update week 35
On Mon, Aug 28, 2017 at 11:27 PM, Matt Riedemann wrote:
On 8/28/2017 10:52 AM, Balazs Gibizer wrote:
> [Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications
> are emitted per-cell instead of globally
> Devstack config has already been modified so notifications are emitted
> to the top level MQ. It seems that only a nova cells doc update is
> needed that tells the admin how to configure the transport_url for the
> notifications.

This was done as part of the cells v2 layout docs here: https://docs.openstack.org/nova/latest/user/cellsv2_layout.html#notifications

Thanks. I updated the bug report accordingly.

Cheers,
gibi

--
Thanks,
Matt
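For readers following the thread, the configuration the linked cellsv2_layout doc describes boils down to giving notifications their own transport, separate from the cell-local RPC transport. A minimal, illustrative sketch (host names and credentials below are invented; the doc above is the authoritative reference):

```ini
# nova.conf on a host inside a cell (illustrative values only)
[DEFAULT]
# RPC traffic stays on the cell-local message queue
transport_url = rabbit://nova:secret@cell1-mq.example.org:5672/

[oslo_messaging_notifications]
# Notifications are sent to the top-level (API) message queue instead,
# so consumers see one global notification stream rather than per-cell ones
transport_url = rabbit://nova:secret@top-mq.example.org:5672/
driver = messagingv2
```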
Re: [openstack-dev] [nova] Proposing Balazs Gibizer for nova-core
On Tue, Aug 29, 2017 at 6:02 PM, Matt Riedemann wrote:
On 8/22/2017 8:18 PM, Matt Riedemann wrote:
> I'm proposing that we add gibi to the nova core team. He's been around
> for awhile now and has shown persistence and leadership in the
> multi-release versioned notifications effort, which also included
> helping new contributors to Nova get involved which helps grow our
> contributor base.
>
> Beyond that though, gibi has a good understanding of several areas of
> Nova, gives thoughtful reviews and feedback, which includes -1s on
> changes to get them in shape before a core reviewer gets to them,
> something I really value and look for in people doing reviews who aren't
> yet on the core team. He's also really helpful with not only reporting
> and triaging bugs, but writing tests to recreate bugs so we know when
> they are fixed, and also works on fixing them - something I expect from
> a core maintainer of the project.
>
> So to the existing core team members, please respond with a yay/nay and
> after about a week or so we should have a decision (knowing a few cores
> are on vacation right now).

It's been a week and we've had enough +1s so it's a done deal. Welcome to the nova core team gibi!

Thank you all for the support and the trust.

Cheers,
gibi
[openstack-dev] [all] connecting nova notification users and developers
Hi,

Nova emits notifications for many different events, like different instance actions[1]. Also the nova developer community is working on making nova notifications well defined and easy to consume [2]. The goal of this mail is twofold.

1) We in the nova developer community would like to see which projects are using (or planning to use) the nova notification interface. Also we would like to know if you are using the legacy unversioned notifications or the new versioned ones. We would like to know what your use cases are towards our notification interface and we also would like to get any type of feedback about the interface (both the old and the new one). Based on this information we can make better decisions about where to focus our development effort. As a good example we already have a cooperation with the searchlight project to enhance nova's versioned notification interface based on their needs [3]. I opened an etherpad [4] to collect the projects and the feedback and we can go through that feedback at the PTG to define some actions.

2) Creating a well defined and easy to use notification interface gives us plenty of work in nova. So we are also looking for developers who can help us with this work. A big chunk of [2] is considered low-hanging fruit and I'm happy to mentor anybody who is interested in learning this part of nova. If you want to join this work just ping me (gibi) on IRC.

Cheers,
gibi

[1] https://docs.openstack.org/nova/latest/reference/notifications.html
[2] https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-queens
[3] https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
[4] https://etherpad.openstack.org/p/queens-nova-notifications
Re: [openstack-dev] [nova][scheduling] Can VM placement consider the VM network traffic need?
On Fri, Sep 1, 2017 at 10:42 AM, Rua, Philippe (Nokia - FI/Espoo) wrote:
> Will it be possible to include network bandwidth as a resource in Nova scheduling, for VM placement decisions?

I think it will.

> Context: in telecommunication applications, the network traffic is an important dimension of resource usage. For example, it is often important to distribute "bandwidth-greedy" VMs to different compute nodes. There were some earlier discussions on this topic, but I could not find a concrete outcome. [1][2][3]
> After some reading, I wonder whether the Custom resource classes can provide a generic mechanism? [4][5][6] Here is what I have in mind:
> - The VM need is specified in the flavor extra-specs, e.g. resources:CUSTOM_BANDWIDTH=123.
> - The compute node total capacity is specified in host aggregate metadata, e.g. CUSTOM_BANDWIDTH=999.

I'm not aware of any feature that considers an aggregate metadata key as resource inventory. As far as I know you have to define new resource providers for your CUSTOM_BANDWIDTH resource via the placement API and you have to report the 999 as inventory on those resource providers, also via the placement API. Also don't forget to connect your resource provider to the existing compute resource providers via an aggregate (this is an aggregate in placement, which is different from the host aggregate concept in nova). This review contains some test cases that can show you how to set things up: https://review.openstack.org/#/c/497399

> - Nova then takes care of the rest: scheduling where the free capacity is sufficient, and performing simple resource usage accounting (updating the compute node free network bandwidth capacity as required).

With the above flavor extra spec as request and the above resource provider setup nova will do the rest of the resource accounting for your custom resource.
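To make the setup gibi describes concrete, here is a hedged sketch of the placement API calls one would issue: register the CUSTOM_BANDWIDTH resource class, create a provider carrying the inventory, and join it to a placement aggregate shared with the compute node's provider. Provider names and the shape of the bodies are my reading of the Pike-era placement REST API (verify against the placement docs before relying on it); the sketch only builds (method, path, body) tuples rather than calling a live service:

```python
import uuid


def bandwidth_provider_requests(provider_name, total_bw, compute_rp_uuid):
    """Build the placement API calls (method, path, json-body) needed to
    model a CUSTOM_BANDWIDTH inventory next to a compute node provider.
    Illustrative sketch only -- auth, microversion headers, provider
    generations and error handling are omitted."""
    rp_uuid = str(uuid.uuid4())
    agg_uuid = str(uuid.uuid4())
    return [
        # 1) create the custom resource class (PUT is idempotent in
        #    later placement microversions)
        ("PUT", "/resource_classes/CUSTOM_BANDWIDTH", None),
        # 2) create a resource provider to hold the bandwidth inventory
        ("POST", "/resource_providers",
         {"name": provider_name, "uuid": rp_uuid}),
        # 3) report the total bandwidth (the "999") as inventory
        ("PUT", "/resource_providers/%s/inventories" % rp_uuid,
         {"resource_provider_generation": 0,
          "inventories": {"CUSTOM_BANDWIDTH": {"total": total_bw}}}),
        # 4) put the new provider and the compute node provider into the
        #    same placement aggregate so the scheduler can associate them
        ("PUT", "/resource_providers/%s/aggregates" % rp_uuid, [agg_uuid]),
        ("PUT", "/resource_providers/%s/aggregates" % compute_rp_uuid,
         [agg_uuid]),
    ]


calls = bandwidth_provider_requests(
    "compute1-bw", 999, "11111111-2222-3333-4444-555555555555")
for method, path, _body in calls:
    print(method, path)
```

A flavor requesting `resources:CUSTOM_BANDWIDTH=123` would then consume from this inventory, as described in the reply above.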
Except in case you hit one of the bugs we discovered in this area: https://bugs.launchpad.net/nova/+bugs?field.tag=placement

> Is the outline above according to current plans? If not, what would be possible/needed in order to achieve the same result, i.e. consider the VM network traffic need during VM placement?

You might want to keep an eye on the nested-resource-provider work planned for Queens as it will give you better options to model your resources: https://blueprints.launchpad.net/nova/+spec/nested-resource-providers

Cheers,
gibi

BR,
Philippe

[1] https://blueprints.launchpad.net/nova/+spec/bandwidth-as-scheduler-metric
[2] https://wiki.openstack.org/wiki/NetworkBandwidthEntitlement
[3] https://openstack.nimeyo.com/80515/openstack-scheduling-bandwidth-resources-nic_bw_kb-resource
[4] https://docs.openstack.org/nova/latest/user/placement.html
[5] http://specs.openstack.org/openstack/nova-specs/priorities/pike-priorities.html#placement
[6] https://review.openstack.org/#/c/473627/
[openstack-dev] [nova]notification update week 36
Hi,

Here is the status update / focus settings mail for w36.

Bugs
----
[Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted
We still have to figure out what the expected behavior is here based on: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout
[High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job
The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. I don't know about a way to reproduce it locally or to change the gate env so that python prints out the full stack trace to see where the problematic call is. Also adding extra log messages won't help as a timed out test doesn't have the log messages printed to the logs. So this bug is pretty stuck.

Versioned notification transformation
-------------------------------------
Review backlog is piling up behind https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-queens

Searchlight integration
-----------------------
I opened a follow-up bp for Queens: https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight-queens
I left it in drafting state as I expect the Searchlight team to come back with some feedback and / or extra needs. I pinged the Searchlight folks on IRC to get some feedback on this BP. I think we can still wait for feedback. No reason to rush here.

Small improvements
------------------
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree.
We added a minimal hand-rolled JSON ref implementation to the notification sample tests as the existing python JSON ref implementations are not well maintained. I will rebase the series and resolve the merge conflicts, then I will ask for feedback about the validity of the current direction.

Weekly meeting
--------------
This week I cannot chair the meeting. Next week most of us will be in Denver. So the next meeting is expected to be held on the 19th of September.

Cheers,
gibi
Re: [openstack-dev] [nova][scheduling] Can VM placement consider the VM network traffic need?
On Mon, Sep 4, 2017 at 9:11 PM, Jay Pipes wrote:
On 09/01/2017 04:42 AM, Rua, Philippe (Nokia - FI/Espoo) wrote:
> Will it be possible to include network bandwidth as a resource in Nova scheduling, for VM placement decision?

Yes. See here for a related Neutron spec that mentions Placement: https://review.openstack.org/#/c/396297/7/specs/pike/strict-minimum-bandwidth-support.rst

> Context: in telecommunication applications, the network traffic is an important dimension of resource usage. For example, it is often important to distribute "bandwidth-greedy" VMs to different compute nodes. There were some earlier discussions on this topic, but I could not find a concrete outcome. [1][2][3]
>
> After some reading, I wonder whether the Custom resource classes can provide a generic mechanism? [4][5][6]

No :) Custom resource classes are antithetical to generic/standard mechanisms. We want to add two *standard* resource classes, one called NET_INGRESS_BYTES_SEC and another called NET_EGRESS_BYTES_SEC, which would represent the total bandwidth in bytes per second for the corresponding traffic directions.

While I agree that the end goal is to have standard resource classes for bandwidth, I think custom resource classes are generic enough to model a bandwidth resource. If you want to play with the bandwidth-based scheduling idea on Pike then custom resource classes are available as a tool for a proof of concept.

What would be the resource provider, though? There are at least two potential answers here:

1) A network interface controller on the compute host

In this case, the NIC on the host would be a child provider of the compute host resource provider. It would have an inventory record of resource class NET_INGRESS_BYTES_SEC with a total value representing the entire bandwidth of the host NIC.
Instances would consume some amount of NET_INGRESS_BYTES_SEC corresponding to *either* the Nova flavor (if the resources:NET_INGRESS_BYTES_SEC extra-spec is set) *or* to the sum of consumed bandwidth amounts from the port profile of any ports specified when launching the instance (and thus would be part of the pci device request collection attached to the build request). 2) A "network slice" of a network interface controller on the compute host In this case, assume that the NIC on the compute host has had its total bandwidth constrained via traffic control so that 50% of its available ingress bandwidth is allocated to network A and 50% is allocated to network B. There would be multiple resources providers, each with an inventory record of resource class NET_INGRESS_BYTES_SEC with a total value of 1/2 the total NIC bandwidth. Both of these resource providers would be child providers of the compute host resource provider. One of these child resource providers will be decorated with the trait "CUSTOM_NETWORK_A" and the other with trait "CUSTOM_NETWORK_B". The scheduler would be able to determine which resource provider to consume the NET_INGRESS_BYTES_SEC resources from by looking for a resource provider that has both the required amount of NET_INGRESS_BYTES_SEC as well as the trait required by the port profile. If, say, the port profile specifies that the port is to go on a NIC with access to network "A", then the build request would contain a request to the scheduler for CUSTOM_NETWORK_A trait... The above setup can be simulated with custom resource classes and individual resource providers per compute node connected to the given compute node's resource provider via an aggregate. You most probably need to simulate the above network traits with individual custom resource classes in Pike. 
I definitely don't think it is something I would do in production based on Pike, for two reasons:
1) we have bugs in Pike GA that prevent nova from handling some edge cases (especially in VM moving scenarios)
2) I agree with Jay that nested providers and neutron support will allow us to do something much cleaner in the future.
However I think Pike is a good base to build a PoC and gather feedback. For example I already foresee a need to model OVS packet processing limits and in the long run even include the capacity of the TOR switches in the picture.

If you're coming to Denver, I encourage you to get with me, Sean Mooney, Moshe Levi and others who are interested in seeing this work move forward.

@Jay: sign me up for this list.

Cheers,
gibi

Best,
-jay

> Here is what I have in mind:
> - The VM need is specified in the flavor extra-specs, e.g. resources:CUSTOM_BANDWIDTH=123.
> - The compute node total capacity is specified in host aggregate metadata, e.g. CUSTOM_BANDWIDTH=999.
> - Nova then takes care of the rest: scheduling where the free capacity is sufficient, and performing simple resource usage accounting (updating the compute node free network bandwidth capacity as required).
>
> Is the outline above according to current plans?
> If not, what would be possible/needed in order to achieve the same result, i.e. consider the VM network traffic need during VM placement?
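gibi's suggestion above, simulating Jay's "network slice" option on Pike with one custom resource class per network instead of traits, can be sketched in the same payload-building style. The class and provider names are hypothetical and the request shapes are my reading of the Pike placement API, so treat this as an untested outline:

```python
import uuid


def sliced_nic_requests(nic_bw_total):
    """Sketch: model per-network NIC slices on Pike by giving each network
    its own custom resource class (no nested providers or flavor trait
    requests in Pike). Returns placement API calls as (method, path, body)
    tuples; auth, microversion headers and generation handling are omitted."""
    half = nic_bw_total // 2  # 50% of ingress bandwidth per network
    calls = []
    for net in ("A", "B"):
        rc = "CUSTOM_BANDWIDTH_NETWORK_%s" % net  # hypothetical class name
        rp_uuid = str(uuid.uuid4())
        calls += [
            # create the per-network custom resource class
            ("PUT", "/resource_classes/%s" % rc, None),
            # one provider per network slice
            ("POST", "/resource_providers",
             {"name": "compute1-nic-net-%s" % net.lower(), "uuid": rp_uuid}),
            # half the NIC bandwidth as that slice's inventory
            ("PUT", "/resource_providers/%s/inventories" % rp_uuid,
             {"resource_provider_generation": 0,
              "inventories": {rc: {"total": half}}}),
        ]
    return calls


# A flavor needing bandwidth on network A would then request, e.g.:
#   resources:CUSTOM_BANDWIDTH_NETWORK_A=123
```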
[openstack-dev] [nova] [notification] cleaning up networking related notifications
Hi,

The addFixedIp REST API was deprecated in Pike [1] (in microversion 2.44). As a result the legacy create_ip.start and create_ip.end notifications will not be emitted after microversion 2.44. We had a TODO [2] to transform this notification to the versioned format but now that seems a bit pointless. Also I've just found out that the existing POST os-interface REST API call does not emit any notification. I think it would make sense not to transform the legacy notification in a deprecated code path but instead to emit new instance.interface_attach and instance.interface_detach notifications from the POST os-interface REST API code path, preferably from compute.manager.attach_interface(). Do you agree?

We also have TODOs about transforming floating_ip related notifications [2], e.g. network.floating_ip.allocate emitted from [3]. As far as I understand these notifications are only emitted from an already deprecated code path. Is it OK to remove them from our TODO list?

Cheers,
gibi

[1] https://review.openstack.org/#/c/457181/
[2] https://vntburndown-gibi.rhcloud.com/index.html
[3] https://github.com/openstack/nova/blob/master/nova/network/floating_ips.py#L230
Re: [openstack-dev] [all] connecting nova notification users and developers
On Wed, Aug 30, 2017 at 4:30 PM, Balazs Gibizer wrote:
> 1) We in the nova developer community would like to see which projects are using (or planning to use) the nova notification interface. Also we would like to know if you are using the legacy unversioned notifications or the new versioned ones. We would like to know what your use cases are towards our notification interface and we also would like to get any type of feedback about the interface (both the old and the new one). Based on this information we can make better decisions about where to focus our development effort. As a good example we already have a cooperation with the searchlight project to enhance nova's versioned notification interface based on their needs [3]. I opened an etherpad [4] to collect the projects and the feedback and we can go through that feedback at the PTG to define some actions.

Thanks for all the replies and etherpad updates. We will discuss the gathered information on the nova track at the PTG, most probably on Friday.

Cheers,
gibi
[openstack-dev] [nova]notification update week 38
Hi,

Here is the status update / focus settings mail for w38.

Bugs
----
[Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted
We still have to figure out what the expected behavior is here based on: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html
I think I will propose a patch to remove the api.fault notification to help start the discussion.

[High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout
[High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job
The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. I don't know about a way to reproduce it locally or to change the gate env so that python prints out the full stack trace to see where the problematic call is. Also adding extra log messages won't help as a timed out test doesn't have the log messages printed to the logs. So this bug is pretty stuck.

Versioned notification transformation
-------------------------------------
There are 3 transformation patches that only need a second +2:
* https://review.openstack.org/#/c/454023/ Transform servergroup.create notification
* https://review.openstack.org/#/c/483902/ Transform servergroup.delete notification
* https://review.openstack.org/#/c/396210/ Transform aggregate.add_host notification

Searchlight integration
-----------------------
As we discussed at the PTG the Searchlight integration is not likely to happen in the near future, so extending the nova notifications is not a priority. This means that we are not planning to add the 'status' field to the instance notifications. The other task in the current bp https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight-queens is to avoid an unnecessary BDM DB query when we emit instance notifications. We agreed that we want to do this as it is a meaningful optimization of the current code.
Patches already proposed and waiting for review: https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight-queens

Small improvements
------------------
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled JSON ref implementation to the notification sample tests as the existing python JSON ref implementations are not well maintained.

Weekly meeting
--------------
Next subteam meeting will be held on the 19th of September, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170919T17

Cheers,
gibi
[openstack-dev] [nova] [notification] not transforming HostAPI related versioned notifications
Hi,

Similar to my earlier mail about not transforming legacy notifications in the networking area [1], now I want to propose not to transform HostAPI related notifications. We have the following legacy notifications on our TODO list [2] to be transformed:
* HostAPI.power_action.end
* HostAPI.power_action.start
* HostAPI.set_enabled.end
* HostAPI.set_enabled.start
* HostAPI.set_maintenance.end
* HostAPI.set_maintenance.start

However the os-hosts API has been deprecated since microversion 2.43. The suggested replacement is the os-services API. The os-services API already emits a service.update notification for every action on that API. So I suggest not to transform the above HostAPI notifications to the versioned notification format.

Cheers,
gibi

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121968.html
[2] https://vntburndown-gibi.rhcloud.com/index.html
Re: [openstack-dev] [nova] [notification] not transforming HostAPI related versioned notifications
On Wed, Sep 20, 2017 at 2:37 AM, Matt Riedemann wrote:
On 9/19/2017 10:35 AM, Balazs Gibizer wrote:
> Hi,
>
> Similar to my earlier mail about not transforming legacy notifications
> in the networking area [1], now I want to propose not to transform
> HostAPI related notifications.
> We have the following legacy notifications on our TODO list [2] to be
> transformed:
> * HostAPI.power_action.end
> * HostAPI.power_action.start
> * HostAPI.set_enabled.end
> * HostAPI.set_enabled.start
> * HostAPI.set_maintenance.end
> * HostAPI.set_maintenance.start
>
> However the os-hosts API has been deprecated since microversion 2.43. The
> suggested replacement is the os-services API. The os-services API already
> emits a service.update notification for every action on that API. So I
> suggest not to transform the above HostAPI notifications to the
> versioned notification format.
>
> Cheers,
> gibi
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121968.html
> [2] https://vntburndown-gibi.rhcloud.com/index.html

This also seems reasonable to me. I had to dig up what set_enabled was for again, but now I remember, it's basically the same thing as enabling/disabling a service in the os-services API, but only implemented for the xenapi driver. So yeah, +1 to not converting these to versioned notifications.

Cool, thanks.

As a side question: how do you keep track of the things we purposefully *aren't* going to implement for versioned notifications?

I removed them [2] from our TODO list [1] with a nice commit message explaining the reason [3]. Do you feel we need some more user-facing documentation about these decisions?
Cheers,
gibi

[1] https://vntburndown-gibi.rhcloud.com/index.html
[2] https://github.com/gibizer/nova-versioned-notification-transformation-burndown/commits/master/to_be_transformed
[3] https://github.com/gibizer/nova-versioned-notification-transformation-burndown/commit/112a25aecf7e9b1f344840ae4ce150f70e75b634#diff-cd2b276ea9db6ffddf9aa78d871ab2e9

--
Thanks,
Matt
[openstack-dev] [nova] notification update week 39
Hi,

Here is the status update / focus settings mail for w39.

Bugs
----
[Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted
We still have to figure out what the expected behavior is here based on: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html
I proposed a patch with one of the possible solutions and I hope this will help start some discussion.
* https://review.openstack.org/#/c/505164/ Remove dead code of api.fault notification sending

[Medium] https://bugs.launchpad.net/nova/+bug/1718485 instance.live.migration.force.complete is not a versioned notification and not whitelisted
Solution is simple and ready for review: https://review.openstack.org/#/c/506104/

[Undecided] https://bugs.launchpad.net/nova/+bug/1717917 test_resize_server_error_and_reschedule_was_failed failing due to missing notification
Test stability issue. The fix only needs a second +2: https://review.openstack.org/#/c/504930/

[Undecided] https://bugs.launchpad.net/nova/+bug/1718226 bdm is wastefully loaded for versioned instance notifications
This is a bug to follow up on the closed bp https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
The fix removes a lot of unnecessary BDM loading from the notification code path: https://review.openstack.org/#/q/topic:bug/1718226

[High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout
[High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job
The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. I don't know about a way to reproduce it locally or to change the gate env so that python prints out the full stack trace to see where the problematic call is. Also adding extra log messages won't help as a timed out test doesn't have the log messages printed to the logs.
So this bug is pretty stuck.

Versioned notification transformation
-------------------------------------
There are 3 transformation patches that only need a second +2:
* https://review.openstack.org/#/c/396210 Transform aggregate.add_host notification
* https://review.openstack.org/#/c/396211 Transform aggregate.remove_host notification
* https://review.openstack.org/#/c/503089 Add instance.interface_attach notification

Small improvements
------------------
* https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification sample data. The third patch already shows how much sample data can be deleted from the nova tree. We added a minimal hand-rolled JSON ref implementation to the notification sample tests as the existing python JSON ref implementations are not well maintained.

Weekly meeting
--------------
Next subteam meeting will be held on the 26th of September, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170926T17

Cheers,
gibi
Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM
On Wed, Sep 27, 2017 at 11:58 AM, Jakub Jursa wrote: On 27.09.2017 11:12, Jakub Jursa wrote: On 27.09.2017 10:40, Blair Bethwaite wrote: On 27 September 2017 at 18:14, Stephen Finucane wrote: What you're probably looking for is the 'reserved_host_memory_mb' option. This defaults to 512 (at least in the latest master) so if you up this to 4192 or similar you should resolve the issue. I don't see how this would help given the problem description - reserved_host_memory_mb would only help avoid causing OOM when launching the last guest that would otherwise fit on a host based on Nova's simplified notion of memory capacity. It sounds like both CPU and NUMA pinning are in play here, otherwise the host would have no problem allocating RAM on a different NUMA node and OOM would be avoided. I'm not quite sure if/how OpenStack handles NUMA pinning (why is VM being killed by OOM rather than having memory allocated on different NUMA node). Anyway, good point, thank you, I should have a look at exact parameters passed to QEMU when using CPU pinning. Jakub, your numbers sound reasonable to me, i.e., use 60 out of 64GB Hm, but the question is, how to prevent having some smaller instance (e.g. 2GB RAM) scheduled on such NUMA node? when only considering QEMU overhead - however I would expect that might be a problem on NUMA node0 where there will be extra reserved memory regions for kernel and devices. In such a configuration where you are wanting to pin multiple guests into each of multiple NUMA nodes I think you may end up needing different flavor/instance-type configs (using less RAM) for node0 versus other NUMA nodes. Suggest What do you mean using different flavor? 
From what I understand ( http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html https://docs.openstack.org/nova/pike/admin/cpu-topologies.html ) it can be specified that a flavor 'wants' a different amount of memory from each of its (virtual) NUMA nodes, but the vCPU <-> pCPU mapping is more or less arbitrary (meaning that there is no way to specify for NUMA node0 on a physical host that it has less memory available for VM allocation).

Can't the 'reserved_huge_pages' option be used to reserve memory on certain NUMA nodes? https://docs.openstack.org/ocata/config-reference/compute/config-options.html

I think the qemu memory overhead is allocated from the 4k memory pool, so the question is whether it is possible to reserve 4k pages with the reserved_huge_pages config option. I don't find any restriction in the code base about 4k pages (even if a 4k page is not considered a large page by definition), so in theory you can do it. However this also means you have to enable the NumaTopologyFilter.

Cheers,
gibi

freshly booting one of your hypervisors and then with no guests running take a look at e.g. /proc/buddyinfo/ and /proc/zoneinfo to see what memory is used/available and where.

Thanks, I'll look into it.

Regards,
Jakub
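To illustrate gibi's suggestion (the values are made up, and whether the 4K entry behaves as intended is exactly the open question in the thread, so treat this as an untested sketch): reserved_huge_pages in nova.conf takes node/size/count triples and can be repeated per NUMA node, so reserving pages on node 0 for host/QEMU overhead might look like:

```ini
[DEFAULT]
# Illustrative only: reserve ~2GB of 4K pages on NUMA node 0 for
# host/QEMU overhead, and 1024 x 2M pages on NUMA node 1 for guests.
# Whether the 4K entry is honored as discussed above is untested.
reserved_huge_pages = node:0,size:4,count:524288
reserved_huge_pages = node:1,size:2048,count:1024
```

As gibi notes, the NumaTopologyFilter must also be enabled in the scheduler's filter list for any of this to be considered during placement.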
[openstack-dev] [nova] notification update week 40
Hi, Here is the status update / focus settings mail for w40. Bugs [Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted We still have to figure out what the expected behavior is here, based on: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html We requested information from possible users of the api.fault notification [1] and it seems that Rackspace does not use the api.fault notification [2][3]. [1] http://lists.openstack.org/pipermail/openstack-operators/2017-September/014267.html [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-09-26.log.html#t2017-09-26T19:57:03 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-09-27.log.html#t2017-09-27T08:30:57 A patch is proposed to remove the legacy api.fault notification dead code and configuration: * https://review.openstack.org/#/c/505164/ Remove dead code of api.fault notification sending [Medium] https://bugs.launchpad.net/nova/+bug/1718485 instance.live.migration.force.complete is not a versioned notification and not whitelisted Solution merged to master: https://review.openstack.org/#/c/506104/ and now it needs to be backported to the stable branches. [Undecided] https://bugs.launchpad.net/nova/+bug/1718226 bdm is wastefully loaded for versioned instance notifications This is a follow-up bug to the closed bp https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight The fix removes a lot of unnecessary BDM loading from the notification code path: https://review.openstack.org/#/q/topic:bug/1718226 [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. 
Last week two competing patches were proposed that might help us get at least logs from the failing test cases: * https://review.openstack.org/#/c/507253/ * https://review.openstack.org/#/c/507239/ [Medium] https://bugs.launchpad.net/nova/+bug/1719915 test_live_migrate_delete race fail when checking allocations: MismatchError: 2 != 1 It turned out that the original failure reported in the bug report was only seen in two patches that actually change the behavior of live migration. However, during the investigation we found that there is a real race that affects the test_live_migrate_delete test case, and a patch is proposed and already on the gate. https://review.openstack.org/#/c/507911/ Versioned notification transformation - Let's try to merge the same 3 transformation patches as last week. All three only need a second core to look at: * https://review.openstack.org/#/c/396210 Transform aggregate.add_host notification * https://review.openstack.org/#/c/396211 Transform aggregate.remove_host notification * https://review.openstack.org/#/c/503089 Add instance.interface_attach notification Versioned notification burndown chart = Last week I realized that the current notification burndown chart [1] needs to be moved to another hosting solution as OpenShift2 is retired. Chris made a generous offer to host it, so the chart has now moved to [2]. Thank you Chris! [1] https://vntburndown-gibi.rhcloud.com/index.html [2] http://burndown.peermore.com/nova-notification/ Small improvements -- * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is the start of a longer patch series to deduplicate notification sample data. Takashi pointed out in the review that the current proposal actually changes the content of the sample appearing in our documentation. 
The reason is that some fields of the common sample fragment are overridden only during the functional test run and not during the doc generation. At the last subteam meeting we agreed with Matt to try to make the override work in a cleverer way that applies to both the functional test and the doc generation. See more in the meeting log: http://eavesdrop.openstack.org/meetings/nova_notification/2017/nova_notification.2017-09-26-17.00.log.html#l-63 Weekly meeting -- Next subteam meeting will be held on the 3rd of October, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171003T17 Cheers, gibi
[openstack-dev] [nova] notification subteam meeting is canceled
Hi, Today's notification subteam meeting is canceled so as not to disturb the spec review focus. Cheers, gibi
[openstack-dev] [nova] Notification update week 41
Hi, Here is the status update / focus settings mail for w41. Bugs [Medium] https://bugs.launchpad.net/nova/+bug/1699115 api.fault notification is never emitted It seems that Rackspace does not use the api.fault notification [2][3], so the agreement is to remove the dead code. A patch is proposed and needs a second +2: * https://review.openstack.org/#/c/505164/ Remove dead code of api.fault notification sending [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-09-26.log.html#t2017-09-26T19:57:03 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-09-27.log.html#t2017-09-27T08:30:57 [Undecided] https://bugs.launchpad.net/nova/+bug/1718226 bdm is wastefully loaded for versioned instance notifications Patch series needs a second +2: https://review.openstack.org/#/q/topic:bug/1718226 [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. Two competing patches were proposed that might help us get at least logs from the failing test cases, but there is no decision yet on which patch should be merged. * https://review.openstack.org/#/c/507253/ * https://review.openstack.org/#/c/507239/ [Undecided] https://bugs.launchpad.net/nova/+bug/1721670 Build notification in conductor fails to send due to InstanceNotFound Patch has been proposed: https://review.openstack.org/#/c/509967/ [Undecided] https://bugs.launchpad.net/nova/+bug/1721843 Unversioned notifications not being sent Regression in the legacy notifications introduced when the short-circuiting of the versioned notification payload generation was added. Only affects the compute.instance.update legacy notification. 
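The regression class behind bug 1721843 is easy to reproduce in miniature: a guard that short-circuits the (expensive) versioned payload generation is placed too early and silences the legacy notification as well. A hypothetical sketch of the general shape, with none of these names taken from Nova's actual code:

```python
def emit_notifications(event, versioned_enabled, send_legacy, send_versioned):
    """Send both notification flavors for one event.

    send_legacy / send_versioned are injected emitter callables so the
    sketch stays self-contained; they return what they sent.
    """
    sent = []
    # BUGGY variant (the shape of the regression described above):
    # an early return placed here to skip building the versioned
    # payload would also skip the legacy emission below.
    #
    #   if not versioned_enabled:
    #       return sent
    #
    sent.append(send_legacy(event))           # legacy must always go out
    if versioned_enabled:                     # guard only the expensive
        sent.append(send_versioned(event))    # versioned payload build
    return sent
```

With the guard in the right place, disabling versioned notifications still lets the legacy `compute.instance.update`-style event through.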
Versioned notification transformation - The interface.attach / detach notifications have been merged. \o/ Here are the 3 patches for this week: * https://review.openstack.org/#/c/396210 Transform aggregate.add_host notification * https://review.openstack.org/#/c/396211 Transform aggregate.remove_host notification * https://review.openstack.org/#/c/443764 use context mgr in instance.delete Just a reminder that the versioned notification burndown chart is available at a new address: http://burndown.peermore.com/nova-notification/ Small improvements -- * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is the start of a longer patch series to deduplicate notification sample data. Takashi pointed out in the review that the current proposal actually changes the content of the sample appearing in our documentation. The reason is that some fields of the common sample fragment are overridden only during the functional test run and not during the doc generation. We agreed with Matt to try to make the override work in a cleverer way that applies to both the functional test and the doc generation. The series needs to be updated. Weekly meeting -- Next subteam meeting will be held on the 10th of October, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171010T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 42
Hi, Here is the status update / focus settings mail for w42. Bugs [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. Related patch https://review.openstack.org/#/c/507239/ has been merged. It makes the test run with timeout and lock support; this might help with the troubleshooting of the bug. Based on logstash the last failure happened on the 12th of October and the above patch was merged on the 13th of October, so it is even possible that the problem does not happen after this related fix. [High] https://bugs.launchpad.net/nova/+bug/1721843 Unversioned notifications not being sent Regression in the legacy notifications introduced when the short-circuiting of the versioned notification payload generation was added. Only affects the compute.instance.update legacy notification. Fix merged on master; backporting is in progress. Versioned notification transformation - Here are the 3 patches for this week: * https://review.openstack.org/#/c/443764 use context mgr in instance.delete * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager Just a reminder that the versioned notification burndown chart is available at a new address: http://burndown.peermore.com/nova-notification/ Small improvements -- * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is the start of a longer patch series to deduplicate notification sample data. Takashi pointed out in the review that the current proposal actually changes the content of the sample appearing in our documentation. 
The reason is that some fields of the common sample fragment are overridden only during the functional test run and not during the doc generation. We agreed with Matt to try to make the override work in a cleverer way that applies to both the functional test and the doc generation. The series needs to be updated. Weekly meeting -- Next subteam meeting will be held on the 17th of October, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171017T17 Cheers, gibi
[openstack-dev] [nova] next week notification subteam meeting is canceled
Hi, The next notification subteam meeting is canceled. Cheers, gibi
[openstack-dev] [nova] Notification update week 43
Hi, Here is the status update / focus settings mail for w43. Bugs [High] https://bugs.launchpad.net/nova/+bug/1706563 TestRPC.test_cleanup_notifier_null fails with timeout [High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error: Cannot recover from stack overflow. - in py35 unit test job The first bug is just a duplicate of the second. It seems the TestRPC test suite has a way to end up in infinite recursion. Related patch https://review.openstack.org/#/c/507239/ has been merged. It makes the test run with timeout and lock support; this might help with the troubleshooting of the bug. Based on logstash there has been no new appearance of this problem since then, so I think the related patch actually fixed the problem. Versioned notification transformation - Here are the 3 patches for this week: * https://review.openstack.org/#/c/467514 Transform keypair.import notification * https://review.openstack.org/#/c/396225 Transform instance.trigger_crash_dump notification * https://review.openstack.org/#/c/443764 use context mgr in instance.delete Service create and destroy notifications This is the only notification-heavy spec that was approved for Queens. It adds two new notifications, service.create and service.delete, similar to the already existing service.update versioned notification. https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html Small improvements -- * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data This is the start of a longer patch series to deduplicate notification sample data. The series needs to be updated. Weekly meeting -- Next subteam meeting will be held on the 31st of October, Tuesday 17:00 UTC on openstack-meeting-4. 
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171031T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 44
Hi, Here is the status update / focus settings mail for w44. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation As the behavior has been unchanged for the last 5 years, a patch is proposed to update the documentation to reflect this long-standing behavior. https://review.openstack.org/516264 Versioned notification transformation - There are 3 patches that only need a second +2: * https://review.openstack.org/#/c/467514 Transform keypair.import notification * https://review.openstack.org/#/c/396225 Transform instance.trigger_crash_dump notification * https://review.openstack.org/#/c/443764 use context mgr in instance.delete Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html Waiting for the implementation to be proposed. Small improvements -- * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data Finally I had time to introduce the possibility to override fields coming from a common sample. This way the samples in the documentation can be kept realistic even if we deduplicate most of the sample data. The series is up to date and shows how to drastically decrease the amount of json sample data stored in the nova tree. Weekly meeting -- Next subteam meeting will be held on the 31st of October, Tuesday 17:00 UTC on openstack-meeting-4. (Please note that the EU already went through the daylight saving time switch last weekend but the USA has not yet.) https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171031T17 Cheers, gibi
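The override mechanism mentioned above boils down to recursively merging a shared sample fragment with per-notification overrides. A hedged sketch of the general idea follows; this is not Nova's actual helper, and the field values in the example fragment are illustrative only:

```python
import copy


def apply_overrides(common, overrides):
    """Return a deep copy of the common sample fragment with the
    given per-notification overrides merged in recursively."""
    result = copy.deepcopy(common)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            # Nested dicts are merged field by field, so an override
            # only needs to name the fields that differ.
            result[key] = apply_overrides(result[key], value)
        else:
            result[key] = value
    return result


# Illustrative common fragment for an instance payload sample;
# the values here are made up for the example.
COMMON_INSTANCE = {
    "nova_object.name": "InstanceActionPayload",
    "nova_object.data": {"uuid": "some-uuid", "state": "active"},
}

# A delete sample then only has to override the one field it changes.
sample = apply_overrides(COMMON_INSTANCE,
                         {"nova_object.data": {"state": "deleted"}})
```

Because the merge deep-copies the common fragment, the shared data is never mutated, which keeps the documentation samples and the functional-test samples consistent.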
[openstack-dev] [nova] Notification update week 45
Hi, Here is the status update / focus settings mail for w45. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation As the behavior has been unchanged for the last 5 years, a patch is proposed to update the documentation to reflect this long-standing behavior. The solution only needs a second +2: https://review.openstack.org/516264 Versioned notification transformation - As last week's 3 patches have been merged, this week we will try 4 patches :) * https://review.openstack.org/#/c/482070 Transform instance-live_migration_pre notification * https://review.openstack.org/#/c/420453 Transform instance-live_migration_abort notification * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html Still waiting for the implementation to be proposed. Small improvements -- * https://review.openstack.org/#/q/topic:refactor-notification-samples Factor out duplicated notification sample data The series is up to date and shows how to drastically decrease the amount of json sample data stored in the nova tree. Weekly meeting -- This week's meeting is cancelled due to the Forum in Sydney. Next subteam meeting will be held on the 14th of November, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171114T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 46
Hi, Here is the status update / focus settings mail for w46. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation As the behavior has been unchanged for the last 5 years, a patch is proposed to update the documentation to reflect this long-standing behavior. Matt's comments need to be addressed on the patch. [Undecided] https://bugs.launchpad.net/nova/+bug/1728884 A missing versioned notification sample Fix has been proposed, needs a second +2: https://review.openstack.org/#/c/516582/ Versioned notification transformation - The live migration abort patch was merged last week, but the pre live migration notification patch later uncovered a bug in it, so a fix will be proposed soon. This means the live migration series cannot move forward until that fix lands, so the review focus should be on other patches: * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager * https://review.openstack.org/#/c/403660 Transform instance.exists notification Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html Still waiting for the implementation to be proposed. I pinged the spec author about the coming milestone 2 deadline. Factor out duplicated notification sample -- https://review.openstack.org/#/q/topic:refactor-notification-samples The first set of patches has been merged. \o/ Now every new notification transformation needs to use the common json fragments when adding new sample files. I will continue proposing de-duplication patches for the already merged sample files, but if somebody wants to help with this easy work then just ping me on IRC and we can distribute the work. 
Weekly meeting -- Next subteam meeting will be held on the 14th of November, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171114T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 47
Hi, Here is the status update / focus settings mail for w47. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation As the behavior has been unchanged for the last 5 years, a patch was proposed to update the documentation to reflect this long-standing behavior. The fix has merged to master and backports have been proposed to the stable branches https://review.openstack.org/#/q/topic:bug/1535254 [Undecided] https://bugs.launchpad.net/nova/+bug/1732685 instance.snapshot notification samples are assigned to two different Notification classes This is a trivial doc bug with fix proposed: https://review.openstack.org/#/c/520579/ Versioned notification transformation - To move forward with the live migration notification transformations we have to merge a couple of prerequisites in the functional test environment. The live_migration_pre notification transformation is basically ready and approved in https://review.openstack.org/#/c/482070/ but the prerequisite patches need some core attention. Also the following transformation patches are ready to look at: * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html The implementation was proposed last week and code review is ongoing in https://review.openstack.org/#/c/519588/ . Factor out duplicated notification sample - https://review.openstack.org/#/q/topic:refactor-notification-samples This work is so easy that as soon as somebody pushes such a patch it is merged quickly. We still have plenty of samples to be deduplicated, so I will monitor the above topic for incoming patches for easy review. 
:) Weekly meeting -- Next subteam meeting will be held on the 21st of November, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171121T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 48
Hi, Here is the status update / focus settings mail for w48. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation As the behavior has been unchanged for the last 5 years, a patch was proposed to update the documentation to reflect this long-standing behavior. The fix has merged to master and backports have been proposed to the stable branches https://review.openstack.org/#/q/topic:bug/1535254 Versioned notification transformation - The following transformation patches are ready: * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager * https://review.openstack.org/#/c/396811 Transform instance.resize_revert notification Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html The implementation needs a second core to look at https://review.openstack.org/#/c/519588/ . Factor out duplicated notification sample - https://review.openstack.org/#/q/topic:refactor-notification-samples Nothing is open on that branch today. I have to do some of these patches during the week. Weekly meeting -- Next subteam meeting will be held on the 28th of November, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171128T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 49
Hi, Here is the status update / focus settings mail for w49. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation The fix has merged to master and backports have been proposed to the stable branches https://review.openstack.org/#/q/topic:bug/1535254 Versioned notification transformation - The following transformation patches are ready: * https://review.openstack.org/#/c/396811 Transform instance.resize_revert notification This only needs a second +2 * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification Implementation has been merged. One follow-up patch fixing nits needs a second core to look at: https://review.openstack.org/#/c/523162 Factor out duplicated notification sample - https://review.openstack.org/#/q/topic:refactor-notification-samples There are two patches on the branch, one of which only needs a second core to look at. Weekly meeting -- This week's subteam meeting has been cancelled. Next subteam meeting will be held on the 12th of December, Tuesday 17:00 UTC on openstack-meeting-4. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171212T17 Cheers, gibi
Re: [openstack-dev] [nova] [placement] resource providers update 43
On Sat, Dec 2, 2017 at 3:26 AM, Matt Riedemann wrote: On 12/1/2017 10:42 AM, Chris Dent wrote: > > December? Wherever does the time go? This is resource providers and > placement update 43. The first one of these was more than a year ago > > > http://lists.openstack.org/pipermail/openstack-dev/2016-November/107171.html > > > I like to think they've been pretty useful. I know they've helped me > keep track of stuff, and have a bit of focus. I'll carry on doing them > but I'm starting to worry that they are getting too big, both to read > and to create, and that this means something, not sure what, for the > volume of work we're trying to accomplish. There's so much work going > on all the time related to placement, writing it down in one place is > rather challenging, so surely creating and reviewing it all is also > challenging? And that's not taking into consideration the vast volume > of all the other stuff within the nova umbrella. Not sure what to do > about it, but something to start thinking about. > Thanks for continuing to do these. I don't read every one, but when I do, like tonight (read the whole damn thing), I end up clicking on a lot of the review links and going through a lot of them which moves the ball forward on some simple but important patches. I really like these placement summary mails. It gives me a weekly reminder to look at patches in series I'm not actively following. Also, even if it is long, it is in priority order. So if somebody starts at the top and does some review but never reaches the end of the mail (like me most of the time), the mail still helps move the important things forward. Thank you Chris for the effort! 
Cheers, gibi -- Thanks, Matt
[openstack-dev] [nova] Notification update week 50
Hi, Here is the status update / focus settings mail for w50. Bugs [Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration of 'notify_on_state_change' are different from implementation The fix has merged to master and stable/pike. The stable/ocata backport has been proposed https://review.openstack.org/#/q/topic:bug/1535254 [High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface [Undecided] https://bugs.launchpad.net/nova/+bug/1736976 test_live_migration_actions functional test randomly fails with "AssertionError: The migration table left empty." This is most probably a test instability. Versioned notification transformation - The following transformation patches are ready: * https://review.openstack.org/#/c/410297 Transform missing delete notifications * https://review.openstack.org/#/c/476459 Send soft_delete from context manager It only needs a second +2 * https://review.openstack.org/#/c/403660 Transform instance.exists notification Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification Implementation has been merged. One follow-up patch fixing nits needs a second core to look at: https://review.openstack.org/#/c/523162 Introducing instance.lock and instance.unlock notifications --- A new request has come from the Watcher project to send instance action notifications for the lock and unlock instance actions [1]. As the request [2] came late for Queens, we will discuss the BP for Rocky. [1] https://developer.openstack.org/api-ref/compute/#lock-server-lock-action [2] https://review.openstack.org/#/c/526251/ Factor out duplicated notification sample - https://review.openstack.org/#/q/topic:refactor-notification-samples There are two patches on the branch, one of which only needs a second core to look at. Weekly meeting -- Next subteam meeting will be held on the 12th of December, Tuesday 17:00 UTC on openstack-meeting-4. 
This will be the last subteam meeting in 2017. The first meeting in 2018 is expected to be held on the 9th of January. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171212T17 Cheers, gibi
[openstack-dev] [nova] Notification update week 51
Hi, Here is the last status update / focus settings mail for 2017. Bugs [High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface Fix proposed and needs a second core to look at: https://review.openstack.org/#/c/527920/ Versioned notification transformation - Only two transformation patches are ready, as the rest are in merge conflict or have failing tests: * https://review.openstack.org/#/c/403660 Transform instance.exists notification * https://review.openstack.org/#/c/480955 Add sample test for instance audit Service create and destroy notifications https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification Every related patch has been merged and the blueprint has been set to implemented state. Introducing instance.lock and instance.unlock notifications --- A specless bp needs to be proposed for the Rocky cycle https://review.openstack.org/#/c/526251/ Factor out duplicated notification sample - https://review.openstack.org/#/q/topic:refactor-notification-samples There are two patches on the branch, one of which only needs a second core to look at. Weekly meeting -- The vacation period is close, so there are no more meetings in 2017. The first meeting in 2018 is expected to be held on the 9th of January. https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180109T17 Thanks for the attention in 2017, let's continue the work next year. Cheers, gibi
Re: [openstack-dev] [nova] Feedback for upcoming user survey questionnaire
On Tue, Dec 27, 2016 at 12:08 AM, Matt Riedemann wrote: We have the opportunity to again [1] ask a question in the upcoming user survey which will be conducted in February. We can ask one question and have it directed to either *users* of Nova, people *testing* nova, or people *interested* in using/adopting nova. Given the existing adoption of Nova in OpenStack deployments (98% as of October 2016), I think that on that sliding scale it really only makes sense to direct a question at existing users of the project. It's also suggested that projects with over 50% adoption make the question quantitative rather than qualitative. We have until January 9th to submit a question. If you have any quantitative questions about Nova for users, please reply to this thread before then. Personally I tend to be interested in feedback on recent development, so I'd like to ask questions about cells v2 or the placement API, i.e. they were optional in Newton but how many deployments that have upgraded to Newton are deploying those features (maybe also noting they will be required to upgrade to Ocata)? However, the other side of me knows that most major production deployments are also lagging behind by a few releases, and may only now be upgrading, or planning to upgrade, to Mitaka since we've recently end-of-life'd the Liberty release. So asking questions about cells v2 or the placement service is probably premature. It might be better to ask about microversion adoption, i.e. if you're monitoring API request traffic to your cloud, what % of compute API requests are using a microversion > 2.1. Previous release priorities might spur some other ideas [2]. [1] http://lists.openstack.org/pipermail/openstack-dev/2016-September/103396.html [2] https://specs.openstack.org/openstack/nova-specs/#priorities -- Thanks, Matt Riedemann I would like to ask about notifications if possible: Do you consume (or plan to consume) nova notifications in your deployment? 
Have you heard about the versioned notifications? Do you consume (or plan to consume) versioned notifications? Cheers, gibi
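Matt's suggested survey question above (what percentage of compute API requests use a microversion above 2.1) could be answered mechanically from access logs. Below is a minimal sketch under an assumed log format where the `X-OpenStack-Nova-API-Version` request header is appended to each logged line; the header name is nova's real one, but the log layout and function name are illustrative assumptions:

```python
import re
from collections import Counter

# Hypothetical access-log lines; in a real deployment the microversion comes
# from the X-OpenStack-Nova-API-Version request header, logged per request.
LOG_LINES = [
    'GET /v2.1/servers X-OpenStack-Nova-API-Version: 2.1',
    'GET /v2.1/servers/detail X-OpenStack-Nova-API-Version: 2.26',
    'POST /v2.1/servers X-OpenStack-Nova-API-Version: 2.37',
    'GET /v2.1/flavors X-OpenStack-Nova-API-Version: 2.1',
]

VERSION_RE = re.compile(r'X-OpenStack-Nova-API-Version:\s*(\d+)\.(\d+)')

def microversion_usage(lines):
    """Return the fraction of requests using a microversion above 2.1."""
    counts = Counter()
    for line in lines:
        m = VERSION_RE.search(line)
        if not m:
            continue
        version = (int(m.group(1)), int(m.group(2)))
        # Compare as a (major, minor) tuple so 2.10 sorts above 2.9.
        counts['newer' if version > (2, 1) else 'base'] += 1
    total = counts['newer'] + counts['base']
    return counts['newer'] / total if total else 0.0
```

Note the tuple comparison: microversions are not decimal numbers, so 2.10 must compare greater than 2.9.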
Re: [openstack-dev] [nova] Consistency in versioned notification payloads
On Sat, Dec 31, 2016 at 5:08 PM, Doug Hellmann wrote: Excerpts from Matt Riedemann's message of 2016-12-30 16:23:24 -0600: While reviewing patches today to add versioned notifications for CRUD operations on aggregates and flavors I've come across some inconsistency. The existing non-versioned notification for aggregate.delete just sends the aggregate id, but the versioned notification is sending the whole aggregate object in the payload: https://review.openstack.org/#/c/394512/9/doc/notification_samples/aggregate-delete-end.json But with the flavor-delete versioned notification, it's just sending the flavorid: https://review.openstack.org/#/c/398171/16/doc/notification_samples/flavor-delete.json So which should we be doing? Either way you can correlate the id on the resource in the notification back to the full record if needed, but should we be sending the full object in the versioned notification payload while we have it? I don't much care either way which we do as long as we're consistent. Thanks Matt for taking this up. I agree that we need a consistent solution. The instance.delete notification also uses the same InstanceActionPayload as instance.create. I think this is a good precedent for sending the full payload at delete as well. When we originally wrote ceilometer's notification consumption code, we ran into issues processing the data for delete notifications that only included identifiers. IIRC, the primary issue at the time was with some of the CRUD operations in neutron, and we asked them to add all known data about objects to all notifications so the consumer could filter notifications based on those properties (maybe the receiver wants to only pay attention to certain tenants, for example) and ensure it has the most current settings for an object as it is being deleted (useful for ensuring that a billing record includes the right flavor, for example). Thanks Doug for adding the historical background. 
I think the use cases you described are still valid, so I propose a recommendation to emit a full payload at entity delete. I proposed this recommendation in the notification devref [1]. [1] https://review.openstack.org/415991 Cheers, gibi Doug
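Doug's consumer-side argument above can be made concrete with a toy callback: by the time a delete notification arrives, the deleted record can no longer be fetched over REST, so any filtering or billing must work from the payload alone. The `instance.delete.end` event type follows nova's start/end naming convention, but the payload field names here are illustrative assumptions, not nova's exact schema:

```python
# A toy consumer: bill deleted instances belonging to one project.
# With an id-only payload this would require a REST lookup that can no
# longer succeed, since the instance is already gone.
WATCHED_PROJECT = 'project-a'
billing_records = []

def on_notification(event_type, payload):
    if event_type != 'instance.delete.end':
        return
    # Full payload: filter by tenant and record the flavor without any
    # follow-up query against the (now deleted) resource.
    if payload.get('tenant_id') != WATCHED_PROJECT:
        return
    billing_records.append({
        'uuid': payload['uuid'],
        'flavor': payload['flavor']['flavorid'],
    })

# Two delete events; only the watched project's instance is billed.
on_notification('instance.delete.end',
                {'uuid': 'abc', 'tenant_id': 'project-a',
                 'flavor': {'flavorid': 'm1.small'}})
on_notification('instance.delete.end',
                {'uuid': 'def', 'tenant_id': 'project-b',
                 'flavor': {'flavorid': 'm1.large'}})
```

This is exactly the "filter by properties, bill with the right flavor" scenario Doug describes: both checks read only the notification itself.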
[openstack-dev] [nova] Next notification meeting
Hi, The next notification subteam meeting will be held on 2017.01.10 17:00 UTC [1] on #openstack-meeting-4. Cheers, gibi [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170110T17
Re: [openstack-dev] [Openstack] [nova] Accessing instance.flavor.projects fails due to orphaned Flavor
On Fri, Jan 13, 2017 at 9:51 AM, Balazs Gibizer wrote: On Thu, Jan 12, 2017 at 4:56 PM, Jay Pipes wrote: On 01/12/2017 05:31 AM, Balazs Gibizer wrote: Hi, The flavor field of the Instance object is a lazy-loaded field and the projects field of the Flavor object is also lazy-loaded. Now it seems to me that when the Instance object lazy-loads instance.flavor, the created Flavor object is orphaned [1]; therefore instance.flavor.projects will never work and results in an exception: OrphanedObjectError: Cannot call _load_projects on orphaned Flavor object. Is the Flavor left orphaned by intention or is it a bug? Depends :) I would say it is intentional for the most part. Is there a reason why the Flavor *notification* payload needs to contain a list of projects associated with the flavor? My gut says that information isn't particularly germane to the relationship of the Instance to the Flavor. The whole thing came up as part of https://blueprints.launchpad.net/nova/+spec/flavor-notifications where the FlavorPayload was extended with flavor.projects. As the same FlavorPayload is used in the instance.* notifications, the instance notification code path also needs the flavor.projects field. The payload of instance.* notifications contains the flavor related data of the instance in question, and to have flavor.projects in the payload as well the code would need to access the projects field via instance.flavor.projects. Sure, I understand it would ease access to the projects field in the notification payload packing, but is there really a reason to bother retrieving and sending that data each time an Instance notification event is made (which is quite often)? So it is mainly there to have a single, consistent FlavorPayload used across notifications. Sure, we could include just the flavor_id in the instance.* notifications. However, there was a similar discussion about how to handle delete notifications [1]. 
There we decided to include the whole entity in the delete notification, not just the uuid of the deleted entity. There the main reasoning (besides consistency) was that a notification consumer might want to listen only to certain notifications and still get enough information to avoid the need for a subsequent REST query. I think the same reasoning can be applied here. Cheers, gibi (Posting to openstack-dev as it wrongly went to the openstack list.) [1] http://lists.openstack.org/pipermail/openstack-dev/2017-January/109508.html Best, -jay ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openst...@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
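The orphaned-object failure discussed in this thread can be sketched with a stripped-down model: a lazy-loaded field needs a request context to reach the database, and an object created without one (an "orphan") cannot load it. This is a simplified illustration of the mechanism, not nova's or oslo.versionedobjects' actual code; the class and field names mirror the thread but everything else is invented:

```python
class OrphanedObjectError(Exception):
    """Raised when a lazy load is attempted without a context."""


class Flavor:
    """Toy object with one lazy-loaded field, 'projects'."""

    def __init__(self, context=None):
        self._context = context  # needed to query the DB on lazy load
        self._projects = None

    @property
    def projects(self):
        if self._projects is None:
            if self._context is None:
                # No context: the object is orphaned, so the DB query that
                # the lazy load needs cannot be issued.
                raise OrphanedObjectError(
                    'Cannot call _load_projects on orphaned Flavor object')
            # Stand-in for the real DB fetch done with self._context.
            self._projects = ['db-loaded-project']
        return self._projects


orphan = Flavor()                  # e.g. produced by instance.flavor lazy load
attached = Flavor(context=object())  # created with a context: loads fine
```

In this model, `attached.projects` succeeds while `orphan.projects` raises, which is exactly the behavior gibi observed with `instance.flavor.projects`.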
[openstack-dev] [nova] Next notification meeting
Hi, The next notification subteam meeting will be held on 2017.01.24 17:00 UTC [1] on #openstack-meeting-4. Cheers, gibi [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170124T17
[openstack-dev] [nova] Next notification meeting
Hi, The next notification subteam meeting will be held on 2017.01.31 17:00 UTC [1] on #openstack-meeting-4. Cheers, gibi [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170131T17
[openstack-dev] [nova] Next notification meeting on 02.14.
Hi, As we agreed at last week's meeting, there won't be a meeting this week. The next notification subteam meeting will be held on 2017.02.14 17:00 UTC [1] on #openstack-meeting-4. Cheers, gibi [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170214T17
Re: [openstack-dev] [nova] Next notification meeting on 02.14.
On Tue, Feb 7, 2017 at 3:56 PM, Matt Riedemann wrote: On 2/7/2017 7:40 AM, Balazs Gibizer wrote: Hi, As we agreed at last week's meeting, there won't be a meeting this week. The next notification subteam meeting will be held on 2017.02.14 17:00 UTC [1] on #openstack-meeting-4. Cheers, gibi [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170214T17 I certainly hope you'll be passing out Valentines. Sure. My only problem is that I don't know what the community prefers: chocolates or flowers. :) Cheers, gibi -- Thanks, Matt Riedemann
[openstack-dev] [nova] notification impact of moving the instance create to the conductor
Hi, It seems that the "Move instance creation to conductor" commit [1] changed when and how the instance.delete notification is emitted for an unscheduled instance. Unfortunately the legacy notification doesn't have test coverage, and the versioned notification coverage is still under review [2] for this case. Before [1], the instance.delete for an unscheduled instance was emitted from here [3]. But after [1], the execution of the same delete operation goes in a new direction [4] and never reaches [3]. Before [1] was merged, the new test coverage in [2] was passing, but now that [1] is merged, test_create_server_error fails as the instance.delete notification is not emitted. Is this an intentional change or a bug? If it is a bug, could you give me some pointers on how to restore the original notification behavior? Cheers, gibi [1] https://review.openstack.org/#/c/319379 [2] https://review.openstack.org/#/c/410297 [3] https://review.openstack.org/#/c/410297/9/nova/compute/api.py@1860 [4] https://review.openstack.org/#/c/319379/84/nova/compute/api.py@1790
Re: [openstack-dev] [nova] notification impact of moving the instance create to the conductor
On Wed, Feb 15, 2017 at 10:31 PM, Matt Riedemann wrote: On 2/15/2017 11:07 AM, Balazs Gibizer wrote: Hi, It seems that the "Move instance creation to conductor" commit [1] changed when and how the instance.delete notification is emitted for an unscheduled instance. Unfortunately the legacy notification doesn't have test coverage, and the versioned notification coverage is still under review [2] for this case. Before [1], the instance.delete for an unscheduled instance was emitted from here [3]. But after [1], the execution of the same delete operation goes in a new direction [4] and never reaches [3]. Before [1] was merged, the new test coverage in [2] was passing, but now that [1] is merged, test_create_server_error fails as the instance.delete notification is not emitted. Is this an intentional change or a bug? If it is a bug, could you give me some pointers on how to restore the original notification behavior? Cheers, gibi [1] https://review.openstack.org/#/c/319379 [2] https://review.openstack.org/#/c/410297 [3] https://review.openstack.org/#/c/410297/9/nova/compute/api.py@1860 [4] https://review.openstack.org/#/c/319379/84/nova/compute/api.py@1790 This isn't intentional, it was just missed. So please create a bug for tracking this and we'll have to backport the fix to stable/ocata. I think we could send the notification here [1] if we have a build request and delete it. Or here [2] if the build request is already deleted and we have to delete the instance in the cell. The tricky thing here is that we might also need to handle it in conductor here [3]. Needless to say, we're going to want a utility method so we don't have to duplicate the same notify_start/delete/notify_end block of code all over the compute API and conductor. 
[1] https://github.com/openstack/nova/blob/93bf6ba5186a3663606aa843a2f247709173f073/nova/compute/api.py#L1759 [2] https://github.com/openstack/nova/blob/93bf6ba5186a3663606aa843a2f247709173f073/nova/compute/api.py#L1790 [3] https://github.com/openstack/nova/blob/93bf6ba5186a3663606aa843a2f247709173f073/nova/conductor/manager.py#L963 Thanks for the confirmation. I filed a bug https://bugs.launchpad.net/nova/+bug/1665263 for this issue. I will try to propose a solution based on your input in the coming days. Cheers, gibi -- Thanks, Matt Riedemann
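The utility method Matt asks for above could plausibly take the shape of a context manager that brackets any delete path with the start/end notification pair, so the compute API and conductor call sites stay one-liners. A hypothetical sketch: `notify_about_delete` and `_emit` are invented names, and only the instance.delete start/end phase convention comes from nova:

```python
import contextlib

# Collected event names; a stand-in for actually emitting on the message bus.
emitted = []

def _emit(instance_uuid, action, phase):
    """Stand-in for nova's versioned-notification emit helpers."""
    emitted.append('instance.%s.%s' % (action, phase))

@contextlib.contextmanager
def notify_about_delete(instance_uuid):
    """Wrap a delete path with the instance.delete start/end pair."""
    _emit(instance_uuid, 'delete', 'start')
    yield
    _emit(instance_uuid, 'delete', 'end')

def delete_build_request(instance_uuid):
    # Each call site (build-request delete, cell-instance delete, conductor
    # cleanup) would use the same wrapper instead of duplicating the block.
    with notify_about_delete(instance_uuid):
        pass  # stand-in for destroying the BuildRequest / instance record
```

One design caveat with this shape: if the wrapped block raises, the plain `yield` above skips the end notification, so a real implementation would need to decide (in a try/finally) whether an error phase or no end event is the right behavior.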