[Yahoo-eng-team] [Bug 1796236] [NEW] Failed to create network "admin"
Public bug reported:

Hello, I have a problem creating a network. I get this error:

Error: Failed to create network "admin": Unable to create the network. No tenant network is available for allocation. Neutron server returns request_ids: ['req-d3120c4d-3fe4-4a61-b398-0465df1ee205']

System: Ubuntu 16.04.5 LTS
OpenStack version: 3.2.0 Newton

The same error appears in the log:

2018-10-05 09:10:46.161 25792 ERROR neutron.api.v2.resource NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.

openstack network agent list
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host            | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| 46643b60-1b3c-4195-a807-ec73a0d16180 | Metadata agent     | controller..com | None              | True  | UP    | neutron-metadata-agent    |
| 7fbb4eb2-7335-45e3-a162-1a211d215937 | Linux bridge agent | compute1..com   | None              | True  | UP    | neutron-linuxbridge-agent |
| b9d6bb8c-0275-46cd-9eda-3000c9ce59b4 | DHCP agent         | controller..com | nova              | True  | UP    | neutron-dhcp-agent        |
| f97e7891-b917-4923-8032-02885c951f66 | Linux bridge agent | controller..com | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+

root@controller:~# neutron ext-list
+---------------------------+---------------------------------+
| alias                     | name                            |
+---------------------------+---------------------------------+
| default-subnetpools       | Default Subnetpools             |
| availability_zone         | Availability Zone               |
| network_availability_zone | Network Availability Zone       |
| binding                   | Port Binding                    |
| agent                     | agent                           |
| subnet_allocation         | Subnet Allocation               |
| dhcp_agent_scheduler      | DHCP Agent Scheduler            |
| tag                       | Tag support                     |
| external-net              | Neutron external network        |
| flavors                   | Neutron Service Flavors         |
| net-mtu                   | Network MTU                     |
| network-ip-availability   | Network IP Availability         |
| quotas                    | Quota management support        |
| provider                  | Provider Network                |
| multi-provider            | Multi Provider Network          |
| address-scope             | Address scope                   |
| subnet-service-types      | Subnet service types            |
| standard-attr-timestamp   | Resource timestamps             |
| service-type              | Neutron Service Type Management |
| extra_dhcp_opt            | Neutron Extra DHCP opts         |
| standard-attr-revisions   | Resource revision numbers       |
| pagination                | Pagination support              |
| sorting                   | Sorting support                 |
| security-group            | security-group                  |
| rbac-policies             | RBAC Policies                   |
| standard-attr-description | standard-attr-description       |
| port-security             | Port Security                   |
| allowed-address-pairs     | Allowed Address Pairs           |
| project-id                | project_id field enabled        |
+---------------------------+---------------------------------+

What can cause this problem? Thanks in advance.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1796236

Title:
  Failed to create network "admin"

Status in neutron:
  New
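For context (not part of the original report): NoNetworkAvailable on tenant network creation usually means the ML2 plugin has no tenant network type configured, or the configured segment range is empty or exhausted. A hedged illustration of the relevant settings — the path and values below are assumptions, not taken from this deployment:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative values only)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
# Tenant (self-service) networks are allocated from this range; if it is
# missing, empty, or fully allocated, network creation fails with
# NoNetworkAvailable.
vni_ranges = 1:1000
```

If `tenant_network_types` is unset, non-admin network creation has nothing to allocate from, which matches the symptom reported here.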
[Yahoo-eng-team] [Bug 1794333] Re: Local delete emits only legacy start and end notifications
https://review.openstack.org/#/c/410297/ has been merged.

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794333

Title:
  Local delete emits only legacy start and end notifications

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If the compute api service does a 'local delete', it only emits legacy
  notifications when the operation starts and ends. If the delete goes to
  a compute host, the compute host emits both legacy and versioned
  notifications. This is both inconsistent, and a gap in versioned
  notifications.

  It would appear that every caller of
  compute_utils.notify_about_instance_delete in compute.API fails to emit
  versioned notifications. I suggest that the best way to fix this will
  be to fix compute_utils.notify_about_instance_delete, but note that
  there's also a caller in compute.Manager which emits versioned
  notifications explicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1794333/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
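A minimal sketch of the direction the report suggests — routing every delete notification through one helper so callers always emit the legacy and versioned payloads together. All names here are hypothetical stand-ins for illustration, not the actual nova API:

```python
# Hypothetical sketch only: the real fix lives in nova's
# compute_utils.notify_about_instance_delete; these names are invented.
from contextlib import contextmanager


def _notify(instance, phase, legacy_backend, versioned_backend):
    # Emit the legacy (unversioned) payload and the versioned payload
    # side by side, so no caller can get one without the other.
    legacy_backend(event="delete.%s" % phase, instance=instance)
    versioned_backend(event="delete", phase=phase, instance=instance)


@contextmanager
def notify_about_instance_delete(instance, legacy_backend, versioned_backend):
    """Wrap a delete operation with start/end notifications of both kinds."""
    _notify(instance, "start", legacy_backend, versioned_backend)
    try:
        yield
    finally:
        # Emitted even if the delete raises, mirroring start/end semantics.
        _notify(instance, "end", legacy_backend, versioned_backend)
```

With this shape, a "local delete" in the API layer and a delete on the compute host would share the same emission path, closing the versioned-notification gap.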
[Yahoo-eng-team] [Bug 1796230] [NEW] no libnetfilter-log1 package on centos
Public bug reported:

The recent version of the neutron-fwaas devstack plugin tries to install the libnetfilter-log1 package. [1] Unfortunately that package isn't available on CentOS, which makes the networking-midonet CentOS job fail.

[1] https://review.openstack.org/#/c/530694/

e.g. http://logs.openstack.org/95/490295/5/check/networking-midonet-tempest-aio-ml2-full-centos-7/0505fe4/logs/devstacklog.txt.gz

2018-10-04 17:23:30.718 | Installing collected packages: neutron, pyzmq, neutron-fwaas
2018-10-04 17:23:30.718 | Found existing installation: neutron 13.0.0.0rc2.dev196
2018-10-04 17:23:30.728 | Uninstalling neutron-13.0.0.0rc2.dev196:
2018-10-04 17:23:30.742 |   Successfully uninstalled neutron-13.0.0.0rc2.dev196
2018-10-04 17:23:30.742 | Running setup.py develop for neutron
2018-10-04 17:23:33.497 | Found existing installation: neutron-fwaas 13.0.0
2018-10-04 17:23:33.549 | Uninstalling neutron-fwaas-13.0.0:
2018-10-04 17:23:33.617 |   Successfully uninstalled neutron-fwaas-13.0.0
2018-10-04 17:23:33.617 | Running setup.py develop for neutron-fwaas
2018-10-04 17:23:35.421 | Successfully installed neutron neutron-fwaas pyzmq-17.1.2
2018-10-04 17:23:35.688 | You are using pip version 9.0.3, however version 18.0 is available.
2018-10-04 17:23:35.688 | You should consider upgrading via the 'pip install --upgrade pip' command.
2018-10-04 17:23:35.938 | Loaded plugins: fastestmirror
2018-10-04 17:23:36.003 | Loading mirror speeds from cached hostfile
2018-10-04 17:23:37.080 | No package libnetfilter-log1 available.
2018-10-04 17:23:37.293 | Error: Nothing to do
2018-10-04 17:23:37.312 | YUM_FAILED 1
2018-10-04 17:23:37.481 | Loaded plugins: fastestmirror
2018-10-04 17:23:37.546 | Loading mirror speeds from cached hostfile
2018-10-04 17:23:38.389 | No package libnetfilter-log1 available.
2018-10-04 17:23:38.597 | Error: Nothing to do
2018-10-04 17:23:38.615 | YUM_FAILED 1
2018-10-04 17:23:38.620 | Error on exit
2018-10-04 17:23:39.135 | World dumping...
see /opt/stack/new/screen-logs/worlddump-2018-10-04-172339.txt for details

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: fwaas

** Tags added: fwaas

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1796230

Title:
  no libnetfilter-log1 package on centos

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1796230/+subscriptions
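The underlying issue is that the library is packaged under a Debian-style name ("libnetfilter-log1") that has no exact match in the CentOS repositories. A hedged sketch of the kind of distro-conditional mapping an installer could use — the CentOS package name and all function names below are assumptions, not the plugin's actual code:

```python
# Hypothetical sketch: map the Debian/Ubuntu package name to a
# CentOS/RHEL equivalent before installing. The mapping is illustrative.

PACKAGE_MAP = {
    "debian": "libnetfilter-log1",
    "ubuntu": "libnetfilter-log1",
    "centos": "libnetfilter_log",   # assumed RPM name, to be verified
    "rhel": "libnetfilter_log",
}


def package_for(os_release_text):
    """Pick the package name from /etc/os-release style content.

    Returns None when the distro is unknown, so the caller can skip the
    install step instead of failing the whole job with YUM_FAILED.
    """
    distro = "unknown"
    for line in os_release_text.splitlines():
        if line.startswith("ID="):
            distro = line.split("=", 1)[1].strip().strip('"')
    return PACKAGE_MAP.get(distro)
```

The key design point is the None fallback: an unavailable optional package should degrade gracefully rather than abort the devstack run.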
[Yahoo-eng-team] [Bug 1793419] Re: database online data migration fail due to missing request spec marker
Reviewed:  https://review.openstack.org/605164
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ff03b157b930de23e5912802cbfbc86889c869c2
Submitter: Zuul
Branch:    master

commit ff03b157b930de23e5912802cbfbc86889c869c2
Author: Jack Ding
Date:   Tue Sep 25 13:20:25 2018 -0400

    Handle missing marker during online data migration

    During upgrade the instance used by the request spec marker could be
    deleted and purged between sessions. This would cause the database
    online data migration to fail as the marker instance couldn't be
    found. Fix by handling the MarkerNotFound exception and re-trying
    without the marker. This will go through all the instances and reset
    the marker when done.

    Closes-Bug: #1793419
    Change-Id: If96e3d038346f16cc93209bccf3db028bacfe59b
    Signed-off-by: Jack Ding

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1793419

Title:
  database online data migration fail due to missing request spec marker

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:

  Description
  ===========
  During upgrade we run the nova online migration that goes through the
  list of instances and creates a request spec record in the db if one
  does not exist. As the online migrations are batched, the request spec
  migration leaves a marker record in the request_specs table to indicate
  the last instance uuid that was processed, and continues processing
  from that instance on the next batch.

  In our upgrade test, we hit a scenario where the marker instance from
  the online migration that was run during the Mitaka->Newton upgrade
  had been deleted and purged from the db by the time we ran the
  Newton->Pike upgrade. This caused the online migration to fail as the
  marker instance couldn't be found.

  Steps to reproduce
  ==================
  - run data online migration on an installed Newton load:
    nova-manage db online_data_migrations
  - delete the instance referenced by the marker (instance_uuid ----)
  - purge the db: nova-manage db purge
  - upgrade to Pike

  Expected result
  ===============
  Upgrade successful with no exceptions.

  Actual result
  =============
  Exceptions occur during the upgrade because the marker is missing, and
  the upgrade fails.

  Error attempting to run
  14 rows matched query service_uuids_online_data_migration, 14 migrated
  13 rows matched query migrate_quota_limits_to_api_db, 13 migrated
  Error attempting to run

  +---------------------------------------------+--------------+-----------+
  | Migration                                   | Total Needed | Completed |
  +---------------------------------------------+--------------+-----------+
  | delete_build_requests_with_no_instance_uuid | 0            | 0         |
  | migrate_aggregate_reset_autoincrement       | 0            | 0         |
  | migrate_aggregates                          | 0            | 0         |
  | migrate_flavor_reset_autoincrement          | 0            | 0         |
  | migrate_flavors                             | 0            | 0         |
  | migrate_instance_groups_to_api_db           | 0            | 0         |
  | migrate_instance_keypairs                   | 0            | 0         |
  | migrate_instances_add_request_spec          | 0            | 0         |
  | migrate_keypairs_to_api_db                  | 0            | 0         |
  | migrate_quota_classes_to_api_db             | 0            | 0         |
  | migrate_quota_limits_to_api_db              | 0            | 0         |
  | service_uuids_online_data_migration         | 0            | 0         |
  +---------------------------------------------+--------------+-----------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1793419/+subscriptions
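The committed fix's idea — retry the batch without the marker when the marker instance has been purged — can be sketched as follows. The names and the list-backed "database" are invented for illustration; the real code works against nova's DB API:

```python
# Illustrative sketch of "handle MarkerNotFound and retry without the
# marker" from the commit message above. Not nova's actual code.

class MarkerNotFound(Exception):
    """Raised when the saved marker no longer exists (deleted + purged)."""


def migrate_batch(instances, marker, limit):
    """Return up to `limit` instances after `marker` (None = from start)."""
    start = 0
    if marker is not None:
        try:
            start = instances.index(marker) + 1
        except ValueError:
            raise MarkerNotFound(marker)
    return instances[start:start + limit]


def run_migration(instances, marker, limit):
    try:
        return migrate_batch(instances, marker, limit)
    except MarkerNotFound:
        # The marker instance was deleted and purged between runs: go
        # through all the instances again and let the migration reset
        # the marker when done, instead of failing the upgrade.
        return migrate_batch(instances, None, limit)
```

Re-processing from the start is safe here because the migration is idempotent: instances that already have a request spec are skipped.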
[Yahoo-eng-team] [Bug 1757061] Re: vm_state become ERROR after execute 'nova set-password ' failed
Reviewed:  https://review.openstack.org/555160
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=513f2d3d254e2ffcc5c9eb786bc1c7d52036d392
Submitter: Zuul
Branch:    master

commit 513f2d3d254e2ffcc5c9eb786bc1c7d52036d392
Author: jichen
Date:   Thu Mar 22 14:07:20 2018 +0800

    Not set instance to ERROR if set_admin_password failed

    In some cases, an instance will be set to ERROR state when
    set_admin_password failed (some Exception like Forbidden). This is
    inconsistent with other exceptions, and since set_admin_password is a
    sync call from API to compute, we can simply return the error to the
    upper layer (operator or user). This avoids making the user run reset
    to restore the instance status, since there were no changes to the
    guest at all.

    Change-Id: If1c901b974bc7295927b3f033a04eaa6ac36f603
    Closes-Bug: 1757061

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1757061

Title:
  vm_state become ERROR after execute 'nova set-password ' failed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:

  Description
  ===========
  A virtual machine's vm_state becomes ERROR when 'nova set-password '
  fails. In fact, the virtual machine is still running normally.

  Steps to reproduce
  ==================
  [root@nail-5300-1 ~(keystone_admin)]# nova set-password test_tx
  New password:
  Again:
  ERROR (Conflict): Failed to set admin password on cfa4ae9f-346f-46fe-984e-08d050d3a2fc because error setting admin password (HTTP 409) (Request-ID: req-77e9f49b-0fc7-40d5-8112-9da965b0304d)

  [root@nail-5300-1 ~(keystone_admin)]# nova list
  +--------------------------------------+---------+--------+------------+-------------+---------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks            |
  +--------------------------------------+---------+--------+------------+-------------+---------------------+
  | cfa4ae9f-346f-46fe-984e-08d050d3a2fc | test_tx | ERROR  | -          | Running     | HAHAHA=192.168.0.12 |
  +--------------------------------------+---------+--------+------------+-------------+---------------------+

  [root@nail-5300-1 ~(keystone_admin)]# nova show test_tx | grep instance_name
  | OS-EXT-SRV-ATTR:instance_name | instance-0011 |

  [root@nail-5300-1 ~(keystone_admin)]# ssh nail-5300-2
  Last login: Tue Mar 20 15:14:52 2018 from 10.43.203.85
  [root@nail-5300-2 ~]# virsh list
   Id    Name             State
   274   instance-0011    running

  Expected result
  ===============
  Setting the admin password failed, but the virtual machine's vm_state
  stays ACTIVE.

  Actual result
  =============
  The virtual machine's vm_state is ERROR.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1757061/+subscriptions
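The idea behind the fix can be sketched compactly: because set_admin_password is a synchronous call and the guest is untouched on failure, the error should be propagated to the caller instead of flipping vm_state to ERROR. Everything below is a hypothetical illustration, not nova's actual compute manager code:

```python
# Illustrative-only sketch; names and the dict-based instance are invented.

class SetAdminPasswdFailed(Exception):
    """Surfaced to the API caller; the instance state is left untouched."""


def set_admin_password(instance, new_password, driver_call):
    """Ask the hypervisor driver to set the guest password.

    `driver_call` stands in for the hypervisor-specific operation.
    """
    try:
        driver_call(instance, new_password)
    except Exception as exc:
        # Before the fix: instance["vm_state"] = "error" was set here,
        # forcing the operator to run a reset even though the guest was
        # never modified. After the fix: leave vm_state alone and simply
        # return the error to the upper layer.
        raise SetAdminPasswdFailed(str(exc))
```

The observable difference is exactly what the bug asks for: after a failed call the instance still reports its previous vm_state (e.g. ACTIVE) rather than ERROR.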
[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible for python 3
** Changed in: manila
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280105

Title:
  urllib/urllib2 is incompatible for python 3

Status in Ceilometer: Fix Released
Status in Cinder: Fix Released
Status in Fuel for OpenStack: Fix Released
Status in Glance: Fix Released
Status in OpenStack Dashboard (Horizon): Fix Released
Status in Magnum: Fix Released
Status in Manila: Fix Released
Status in neutron: Fix Released
Status in python-troveclient: Fix Released
Status in refstack: Fix Released
Status in Sahara: Fix Released
Status in OpenStack Object Storage (swift): Fix Released
Status in tacker: Fix Released
Status in tempest: Fix Released
Status in OpenStack DBaaS (Trove): In Progress
Status in Zuul: In Progress

Bug description:
  urllib/urllib2 is incompatible for python 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions
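For background: Python 3 split urllib/urllib2 into urllib.parse, urllib.request, and urllib.error. A common compatibility pattern (a generic sketch, not the specific patch landed in each of the projects above) is to try the Python 3 locations first and fall back to the Python 2 modules:

```python
# Generic urllib/urllib2 compatibility shim; many projects used
# six.moves.urllib for the same purpose instead.
try:
    # Python 3 locations
    from urllib.parse import urlencode, quote
    from urllib.request import urlopen, Request
    from urllib.error import HTTPError
except ImportError:
    # Python 2 fallbacks
    from urllib import urlencode, quote
    from urllib2 import urlopen, Request, HTTPError
```

After the shim, the rest of the module can use `urlencode`, `urlopen`, `Request`, and `HTTPError` unconditionally on either interpreter.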
[Yahoo-eng-team] [Bug 1794773] Re: Unnecessary warning when ironic node properties are not set
Reviewed:  https://review.openstack.org/605754
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=63b9c88386998bc584786ecb8ea7a2aae971a384
Submitter: Zuul
Branch:    master

commit 63b9c88386998bc584786ecb8ea7a2aae971a384
Author: Mark Goddard
Date:   Thu Sep 27 15:32:36 2018 +0100

    Don't emit warning when ironic properties are zero

    If an ironic node is registered without either of the 'memory_mb' or
    'cpus' properties, the following warning messages are seen in the
    nova-compute logs:

    Warning, memory usage is 0 for on baremetal node .
    Warning, number of cpus is 0 for on baremetal node .

    As of the Rocky release [1], the standard compute resources (VCPU,
    MEMORY_MB, DISK_GB) are not registered with placement for ironic
    nodes. They have not been required to be set since the Pike release,
    but this warning is still emitted. This change removes these warning
    messages.

    Backport: rocky, queens, pike

    [1] https://review.openstack.org/#/c/565841/

    Change-Id: I342b9b12ec869431c3abad75eb8194c34151a281
    Closes-Bug: #1794773

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794773

Title:
  Unnecessary warning when ironic node properties are not set

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If an ironic node is registered without either of the 'memory_mb' or
  'cpus' properties, the following warning messages are seen in the
  nova-compute logs:

  Warning, memory usage is 0 for on baremetal node .
  Warning, number of cpus is 0 for on baremetal node .

  As of the Rocky release [1], the standard compute resources (VCPU,
  MEMORY_MB, DISK_GB) are not registered with placement for ironic
  nodes. They have not been required to be set since the Pike release,
  but this warning is still emitted.

  [1] https://review.openstack.org/#/c/565841/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1794773/+subscriptions
[Yahoo-eng-team] [Bug 1796200] [NEW] Network security group logging not working: empty file being created w/o actual logs
Public bug reported:

Network security group logging is not working: an empty file is created without actual logs.

On a clean OpenStack (Ubuntu Xenial, Queens release) I tried to enable security group logging as described in https://docs.openstack.org/neutron/queens/admin/config-logging.html, and it is not working as expected.

Actual behaviour: the logfile is created in the place specified in the config, owned by the "neutron" user, but it stays empty.

Expected behaviour: the logfile is created and NSG traffic data is also logged into it.

Additional information:

a) OpenStack has been deployed from scratch using Juju and upstream bundles (with only two charms modified locally, enabling the config changes needed to follow the upstream documentation mentioned above); here is the actual charm link: http://paste.openstack.org/show/731530/

b) Full OpenStack configuration commands, from flavors through verifying that networking itself works: http://paste.openstack.org/show/731529/ (take a look at the EOF: I try to ping my instance's floating IP and cannot, but after enabling a rule in the NSG it succeeds, so traffic is actually being passed to the instance and security groups are working);

c) Config files that should be modified, according to the documentation:
   neutron-api neutron.conf: http://paste.openstack.org/show/731531/
   neutron-gateway /etc/neutron/plugins/ml2/openvswitch_agent.ini: http://paste.openstack.org/show/731534/
   nova-compute /etc/neutron/plugins/ml2/openvswitch_agent.ini: http://paste.openstack.org/show/731535/

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1796200

Title:
  Network security group logging not working: empty file being created
  w/o actual logs

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1796200/+subscriptions
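For reference, the referenced config-logging guide boils down to a few coordinated pieces; the fragment below is a condensed illustration with assumed values, not a reproduction of the reporter's pasted files, and all agents must be restarted after the change:

```ini
# neutron.conf (server side) - enable the log service plugin
[DEFAULT]
service_plugins = router,log

# openvswitch_agent.ini (gateway and compute nodes) - enable the agent
# extension and point it at the output file
[agent]
extensions = log

[network_log]
rate_limit = 100
burst_limit = 25
local_output_log_base = /var/log/neutron/security-group.log
```

Note that even with the config in place, nothing is logged until a log resource is created for a specific security group (per the same guide, via `openstack network log create --resource-type security_group ...`) — a missing log resource is one plausible explanation for an empty file.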
[Yahoo-eng-team] [Bug 1796192] [NEW] online_data_migrations exceptions quietly masked
Public bug reported:

When online_data_migrations raise exceptions, nova/cinder-manage catches the exception, prints a fairly useless "something didn't work" message, and moves on.

Two issues:

1) The user(/admin) has no way to see what actually failed (the exception is not logged)

2) The command returns exit status 0, as if all possible migrations have been completed successfully - this can cause failures to get missed, especially if automated

** Affects: cinder
   Importance: Undecided
   Status: New

** Affects: nova
   Importance: Undecided
   Assignee: iain MacDonnell (imacdonn)
   Status: In Progress

** Also affects: cinder
   Importance: Undecided
   Status: New

** Description changed:

  When online_data_migrations raise exceptions, nova/cinder-manage catches
  the exception, prints a fairly useless "something didn't work" message,
  and moves on. Two issues:

  1) The user(/admin) has no way to see what actually failed (exception is
  not logged)

- 2) The command returns exit status 0, as if all possible migrations have
- been completed successfully
+ 2) The command returns exit status 0, as if all possible migrations have
+ been completed successfully - this can cause failures to get missed,
+ especially if automated

** Changed in: nova
   Assignee: (unassigned) => iain MacDonnell (imacdonn)

** Changed in: nova
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1796192

Title:
  online_data_migrations exceptions quietly masked

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1796192/+subscriptions
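Both issues can be addressed in a few lines; the sketch below shows the intent with invented names (it is not the nova-manage/cinder-manage code): log the real exception and return a non-zero exit status so automation notices the failure.

```python
# Illustrative sketch of "log the failure, fail the command".
import logging

LOG = logging.getLogger(__name__)


def run_migrations(migrations):
    """Run each migration callable; return a shell-style exit status."""
    failed = False
    for migration in migrations:
        try:
            migration()
        except Exception:
            # Issue 1: record the full traceback instead of a vague
            # "something didn't work" message.
            LOG.exception("Migration %s failed", migration.__name__)
            failed = True
    # Issue 2: non-zero exit status when anything failed, so scripted
    # upgrades can detect the problem instead of silently continuing.
    return 2 if failed else 0
```

The caller would pass the returned value to sys.exit(), making the failure visible both in the logs and to any wrapping automation.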
[Yahoo-eng-team] [Bug 1796178] [NEW] community type images are not view able by admin, potential cause of dos
Public bug reported:

At the moment it is not possible for an admin to list all images being uploaded to glance. Images marked as public are only listed by the user doing the upload.

That means, for example, that if an automated system uploads community images as the result of a build process, they will take up space. This could lead to disks filling up with images if the process does not implement any kind of cleanup, and the admin will not be able to see that this is causing the problem.

Currently the only mitigation I can think of is putting quotas in place, which will limit the amount of storage that can be allocated by the user.

** Affects: glance
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1796178

Title:
  community type images are not view able by admin, potential cause of
  dos

Status in Glance:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1796178/+subscriptions
[Yahoo-eng-team] [Bug 74747] Re: Default sources.list file has source packages enabled by default
** Changed in: cloud-init
   Assignee: (unassigned) => Scott Moser (smoser)

** Changed in: cloud-init
   Status: Confirmed => In Progress

** Changed in: apt-setup (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: apt-setup (Ubuntu Bionic)
   Importance: Undecided => Medium

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/74747

Title:
  Default sources.list file has source packages enabled by default

Status in cloud-init: In Progress
Status in apt-setup package in Ubuntu: Fix Released
Status in cloud-init package in Ubuntu: Confirmed
Status in apt-setup source package in Xenial: Fix Released
Status in cloud-init source package in Xenial: Confirmed
Status in apt-setup source package in Bionic: Fix Released
Status in cloud-init source package in Bionic: Confirmed
Status in apt-setup source package in Cosmic: Fix Released
Status in cloud-init source package in Cosmic: Confirmed

Bug description:
  The default sources.list file has source packages enabled by default.
  This is bad for the average user (especially those on modems) because
  they are very unlikely to use source packages, yet they still pay the
  download overhead of the source package lists. For most people the
  deb-src lines could simply be commented out by default. (Bug reported
  at the behest of Robert Collins)

  Implementing this would probably invalidate bug 301602. See also bug
  987264.

  Mailing list discussion:
  https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2013-May/thread.html#14503
  https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2013-July/thread.html#14617

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/74747/+subscriptions
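For illustration only (mirrors and suite names below are examples, not from the bug), the proposed default keeps the binary 'deb' entries active while shipping the matching 'deb-src' entries commented out, so apt skips downloading the rarely used source package index:

```
deb http://archive.ubuntu.com/ubuntu bionic main restricted
# deb-src http://archive.ubuntu.com/ubuntu bionic main restricted
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted
# deb-src http://archive.ubuntu.com/ubuntu bionic-updates main restricted
```

Users who do need source packages can simply uncomment the deb-src lines and run apt-get update.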
[Yahoo-eng-team] [Bug 1796074] Re: interface-attach to instance with a large number of attached interfaces fails with RequestURITooLong from neutron
This issue can only be addressed in nova, so marking as Invalid for python-novaclient. ** Summary changed: - python-novaclient Unexpected API Error + interface-attach to instance with a large number of attached interfaces fails with RequestURITooLong from neutron ** Also affects: nova Importance: Undecided Status: New ** Changed in: python-novaclient Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1796074 Title: interface-attach to instance with a large number of attached interfaces fails with RequestURITooLong from neutron Status in OpenStack Compute (nova): In Progress Status in python-novaclient: Invalid Bug description: Hello! # nova-manage --version 14.0.0 Command which produce error: nova interface-attach --net-id I got Unexpected API Error when i try nova interface-attach to instance with attached 250 network interface. And after execute nova interface-attach i can't manipulate network interface, i can't see interface inside instance, only delete port. 
  DEBUG (session:727) GET call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269 used request id req-34fe7aae-75ed-4a90-833d-86ef8cd3d2a4
  DEBUG (client:85) GET call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269 used request id req-34fe7aae-75ed-4a90-833d-86ef8cd3d2a4
  DEBUG (session:375) REQ: curl -g -i -X POST http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "OpenStack-API-Version: compute 2.37" -H "X-OpenStack-Nova-API-Version: 2.37" -H "X-Auth-Token: {SHA1}04925ba60ec47cac9d6e099b287f94ba49e99113" -H "Content-Type: application/json" -d '{"interfaceAttachment": {"net_id": "728b6584-8f52-4613-b799-b1bff4f42f53"}}'
  DEBUG (connectionpool:396) http://ip:8774 "POST /v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface HTTP/1.1" 500 211
  DEBUG (session:423) RESP: [500] Openstack-Api-Version: compute 2.37 X-Openstack-Nova-Api-Version: 2.37 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version Content-Type: application/json; charset=UTF-8 Content-Length: 211 X-Compute-Request-Id: req-0725bd5b-f86e-4194-aa35-efe229413e90 Date: Thu, 04 Oct 2018 09:12:44 GMT
  RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}
  DEBUG (session:727) POST call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface used request id req-0725bd5b-f86e-4194-aa35-efe229413e90
  DEBUG (client:85) POST call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface used request id req-0725bd5b-f86e-4194-aa35-efe229413e90
  DEBUG (shell:984) Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0725bd5b-f86e-4194-aa35-efe229413e90)
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 982, in main
      OpenStackComputeShell().main(argv)
    File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 909, in main
      args.func(self.cs, args)
    File "/usr/lib/python2.7/dist-packages/novaclient/v2/shell.py", line 5047, in do_interface_attach
      res = server.interface_attach(args.port_id, args.net_id, args.fixed_ip)
    File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 552, in interface_attach
      return self.manager.interface_attach(self, port_id, net_id, fixed_ip)
    File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1822, in interface_attach
      body, 'interfaceAttachment')
    File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 356, in _create
      resp, body = self.api.client.post(url, body=body)
    File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 294, in post
      return self.request(url, 'POST', **kwargs)
    File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 117, in request
      raise exceptions.from_response(resp, body, url, method)
  ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0725bd5b-f86e-4194-aa35-efe229413e90)
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP
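The RequestURITooLong in the retitled summary suggests that, with ~250 attached interfaces, the port query nova sends to neutron lists every port ID in a single GET query string, which overflows the web server's URI limit. A minimal sketch of the usual remedy, chunking a long ID list into several smaller requests; all helper names here are illustrative, not nova's actual implementation:

```python
# Sketch: batching a long list of IDs into several GET requests so the
# query string stays under a typical URI length limit (e.g. Apache's
# default LimitRequestLine of 8190 bytes). Illustrative only.

def chunked(ids, batch_size):
    """Yield successive fixed-size batches from a list of IDs."""
    for i in range(0, len(ids), batch_size):
        yield ids[i:i + batch_size]

def fetch_ports(get_fn, port_ids, batch_size=50):
    """Collect results across batched calls instead of one huge URI.

    get_fn stands in for whatever performs the HTTP GET; each call
    only receives `batch_size` IDs worth of query parameters.
    """
    results = []
    for batch in chunked(port_ids, batch_size):
        results.extend(get_fn(batch))
    return results
```

With 250 port IDs and a batch size of 50, this issues five short requests instead of one over-long one.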
[Yahoo-eng-team] [Bug 1796132] Re: SSL Verification Error on Launch Instance (queens)
** Also affects: nova
   Importance: Undecided
       Status: New

** No longer affects: openstack-ansible

** Tags added: api

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1796132

Title:
  SSL Verification Error on Launch Instance (queens)

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm enforcing all endpoints to use HTTPS in user_variables.yml:

  openstack_service_publicuri_proto: https
  openstack_service_adminuri_proto: https
  openstack_service_internaluri_proto: https

  Trying to launch an instance fails and directs me to report a bug.
  Here are the logs that show the SSL verification errors:
  http://paste.openstack.org/show/731504/

  Error shown in the UI: Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible. (HTTP 500) (Request-ID: req-64108432-a960-4758-afc5-36c37ec1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1796132/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
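SSL verification errors like the ones in the pasted logs typically mean the services calling each other over HTTPS were never told which CA signed the internal certificates. A sketch of the kind of nova.conf options involved; the CA bundle path is an assumption for illustration:

```ini
# Sketch (assumed path): when all endpoints use HTTPS with an internal
# CA, each nova.conf section that talks to another service needs the
# CA configured, e.g.:

[keystone_authtoken]
cafile = /etc/ssl/certs/internal-ca.pem

[neutron]
cafile = /etc/ssl/certs/internal-ca.pem

[glance]
cafile = /etc/ssl/certs/internal-ca.pem
```

In an openstack-ansible deployment these would normally be driven from the deployment variables rather than edited by hand.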
[Yahoo-eng-team] [Bug 1796077] Re: policy.json doesn't allow user to change password
Users have their own self-service API[0] they can call to change their
own password. This is separate from the update_user one and is
currently not covered by any policy. There are also ways to enforce
security regulations (PCI-DSS) on users, described in more detail
here[1].

[0] https://developer.openstack.org/api-ref/identity/v3/#change-password-for-user
[1] https://docs.openstack.org/keystone/pike/admin/identity-security-compliance.html

** Changed in: keystone
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1796077

Title:
  policy.json doesn't allow user to change password

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Currently the default policy.v3cloudsample.json in Keystone doesn't
  allow users to change their own password. It's defined as:

  "identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id"

  which makes the user (the "owner" in policy.json) unable to change
  their own password. Not sure if this change is intended or not, but
  as an operator I would like to allow users to change their password
  by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1796077/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
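The self-service API in [0] is a plain POST to /v3/users/{user_id}/password with the user's original and new passwords in the body. A minimal stdlib sketch of building that request; the keystone endpoint URL is an assumption for illustration:

```python
import json
import urllib.request

def change_own_password(keystone_url, user_id, token, original, new):
    """Build the request for keystone's self-service change-password
    API (POST /v3/users/{user_id}/password). The endpoint URL passed
    in is a placeholder; the body shape follows the identity v3
    api-ref linked above.
    """
    body = json.dumps({
        "user": {"original_password": original, "password": new}
    }).encode()
    return urllib.request.Request(
        "%s/v3/users/%s/password" % (keystone_url, user_id),
        data=body,
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": token},
        method="POST",
    )
```

The request would then be sent with `urllib.request.urlopen` (or any HTTP client); a 204 response indicates the password was changed.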
[Yahoo-eng-team] [Bug 1788833] Re: Error during ComputeManager.update_available_resource: AttributeError: '_TransactionContextManager' object has no attribute 'async_
Ubuntu Rocky py3.7 is tripping over the async keyword issue. Since we
have python-oslo.db 4.40.0 in Ubuntu Rocky I'm planning to cherry-pick
the following patch to the nova rocky package:

commit 964832d37dd244f4f4ebc0dba46e4316241a2120
Author: Stephen Finucane
Date: Tue Aug 28 17:15:24 2018 +0100

    Revert "Don't use '_TransactionContextManager._async'"
    ...

** Also affects: nova (Ubuntu)
   Importance: Undecided
       Status: New

** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
       Status: New

** Changed in: nova (Ubuntu Cosmic)
       Status: New => Incomplete

** Changed in: nova (Ubuntu Cosmic)
       Status: Incomplete => Triaged

** Changed in: nova (Ubuntu Cosmic)
   Importance: Undecided => High

** Also affects: cloud-archive
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
       Status: New

** Changed in: cloud-archive/rocky
       Status: New => Triaged

** Changed in: cloud-archive/rocky
   Importance: Undecided => High

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1788833

Title:
  Error during ComputeManager.update_available_resource: AttributeError:
  '_TransactionContextManager' object has no attribute 'async_

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in nova package in Ubuntu:
  Triaged
Status in nova source package in Cosmic:
  Triaged

Bug description:
  Hi all, I have a rocky openstack cluster. I am using a MariaDB Galera
  cluster (3 galera nodes, Active/Active) behind haproxy. When I resize
  or migrate an instance, I hit the following errors.
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task [req-67381a5e-24e2-4dd2-bfc6-693bd1fabb8d 290bb90f6cbc46548951cbcaee0c0a34 9804c6f8ffe148bc9fa7ed409d41cb16 - default default] Error during ComputeManager._heal_instance_info_cache: AttributeError: '_TransactionContextManager' object has no attribute 'async_'
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 126, in _object_dispatch
      return getattr(target, method)(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 184, in wrapper
      result = fn(cls, context, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 1351, in get_by_host
      use_slave=use_slave)
    File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 218, in wrapper
      reader_mode = get_context_manager(context).async_
  AttributeError: '_TransactionContextManager' object has no attribute 'async_'
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task Traceback (most recent call last):
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 220, in run_periodic_tasks
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     task(self, context)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6632, in _heal_instance_info_cache
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     context, self.host, expected_attrs=[], use_slave=True)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 177, in wrapper
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     args, kwargs)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 241, in object_class_action_versions
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     args=args, kwargs=kwargs)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 179, in call
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     retry=self.retry)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 133, in _send
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     retry=retry)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 584, in send
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task     call_monitor_timeout, retry=retry)
  2018-08-24 12:06:37.668 19857 ERROR oslo_service.periodic_task   File
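The traceback shows nova reading `get_context_manager(context).async_`, the spelling oslo.db introduced in 4.40.0 because `async` became a reserved word in python 3.7, against an oslo.db that only has the older name. A version-tolerant sketch of the lookup; this is illustrative, not the patch that was actually merged:

```python
# Sketch: tolerating both spellings of oslo.db's reader-mode attribute
# ('async' became a keyword in py3.7, so newer oslo.db exposes
# 'async_' while older releases used '_async'). The two stub classes
# stand in for the two oslo.db generations.

class OldCtxMgr:            # pre-4.40.0 style
    _async = "reader-async"

class NewCtxMgr:            # 4.40.0+ style
    async_ = "reader-async"

def reader_mode(ctx_mgr):
    """Return the async reader mode under whichever name exists."""
    mode = getattr(ctx_mgr, "async_", None)
    if mode is None:
        mode = getattr(ctx_mgr, "_async")
    return mode
```

The Ubuntu package instead pins a consistent pair by reverting the nova side, which avoids carrying a shim like this.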
[Yahoo-eng-team] [Bug 1778227] Re: Docs needed for optional placement database
This is differently out of date now that we have an extracted
placement.

** Changed in: nova
       Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1778227

Title:
  Docs needed for optional placement database

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Blueprint
  https://blueprints.launchpad.net/nova/+spec/optional-placement-database
  added support for configuring a separate database for the placement
  service so it's not just part of the nova_api database (even though
  it's the same schema for now). This is important for extracting
  placement from nova, and should be part of a new base install so
  people don't have to migrate later.

  There are at least two places we should update in the docs:

  1. The install guide for fresh installs should tell people to create
  a database for placement and configure nova.conf appropriately using
  the new placement database config options. Looking at the install
  guide, we have 3 different guides to update (ubuntu, rdo, suse), but
  it looks like:

  a) https://docs.openstack.org/nova/latest/install/controller-install-ubuntu.html#prerequisites
  - create a placement database using the nova_api schema.

  b) configure the placement db:
  https://docs.openstack.org/nova/latest/install/controller-install-ubuntu.html#install-and-configure-components

  I think that's it. "nova-manage api_db sync" will sync the placement
  database and the nova_api database, so we should be good there.

  2. Update the placement upgrade docs for Rocky to mention the new
  config option and, at a high level, the options people have for
  migrating from nova_api to placement db, e.g. stop api services, copy
  the nova_api db, deploy placement db using the nova_api copy, config
  and restart api services.
https://docs.openstack.org/nova/latest/user/placement.html#rocky-18-0-0 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1778227/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
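The fresh-install configuration item 1 describes amounts to pointing the new option group at a dedicated database before syncing. A sketch, with a placeholder connection string:

```ini
# Sketch of the nova.conf change described above (credentials and host
# are placeholders): give placement its own database, then run
# "nova-manage api_db sync" to populate it alongside nova_api.

[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
```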
[Yahoo-eng-team] [Bug 1783654] Re: DVR process flow not installed on physical bridge for shared tenant network
This bug was fixed in the package neutron - 2:13.0.1-0ubuntu1

---
neutron (2:13.0.1-0ubuntu1) cosmic; urgency=medium

  * New stable point release for OpenStack Rocky.
  * d/p/revert-dvr-add-error-handling.patch: Cherry-picked from upstream
    to revert DVR regressions (LP: #1751396)
  * d/p/revert-dvr-inter-tenant.patch: Cherry-picked from upstream to
    revert DVR regression (LP: #1783654).

 -- Corey Bryant  Tue, 02 Oct 2018 17:18:19 -0400

** Changed in: neutron (Ubuntu Cosmic)
       Status: Triaged => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1783654

Title:
  DVR process flow not installed on physical bridge for shared tenant
  network

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  Triaged
Status in neutron source package in Cosmic:
  Fix Released

Bug description:
  Seems like collateral from
  https://bugs.launchpad.net/neutron/+bug/1751396

  In DVR, the distributed gateway port's IP and MAC are shared in the
  qrouter across all hosts. The dvr_process_flow on the physical bridge
  (which replaces the shared router_distributed MAC address with the
  unique per-host MAC when it's the source) is missing, and so is the
  drop rule which instructs the bridge to drop all traffic destined
  for the shared distributed MAC.
  Because of this, we are seeing the router MAC on the network
  infrastructure, causing it to flap on br-int on every compute host:

  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    1
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    2
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
    1     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
    1     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
    1     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
    1     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
    1     4  fa:16:3e:42:a2:ec    1
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    0
  root@milhouse:~# ovs-appctl fdb/show br-int | grep fa:16:3e:42:a2:ec
   11     4  fa:16:3e:42:a2:ec    0

  Where port 1 is phy-br-vlan, connecting to the physical bridge, and
  port 11 is the correct local qr-interface. Because these DVR flows
  are missing on br-vlan, packets with the source MAC ingress into the
  host and br-int learns it upstream.

  The symptom is that when pinging a VM's floating IP, we see
  occasional packet loss (10-30%), and sometimes the responses are
  sent upstream by br-int instead of the qrouter, so the ICMP replies
  come with the fixed IP of the replier since no NATing took place,
  and on the tenant network rather than the external network.
  When I force net_shared_only to False here, the problem goes away:
  https://github.com/openstack/neutron/blob/stable/pike/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py#L436

  It should be noted we *ONLY* need to do this on our dvr_snat host.
  The DVR process flows are missing on every compute host, but if we
  shut down the qrouter on the snat host, FIP functionality works and
  the DVR MAC stops flapping on the others. Or if we apply the fix
  only to the snat host, it works. Perhaps there is something unique
  about the SNAT node.

  Ubuntu SRU details:
  ---
  [Impact]
  See above

  [Test Case]
  Deploy OpenStack with dvr enabled and then follow the steps above.

  [Regression Potential]
  The patches that are backported have already landed upstream in the
  corresponding stable branches, helping to minimize any regression
  potential.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1783654/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
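The flapping shown in the fdb/show transcript above can be spotted mechanically by comparing successive snapshots and flagging any MAC whose learned port changes. A small, purely illustrative sketch:

```python
# Sketch: detecting MAC flapping from successive
# "ovs-appctl fdb/show <bridge>" snapshots. Illustrative only; in
# practice the snapshots would come from running the command
# periodically.

def learned_ports(fdb_output):
    """Map MAC -> learned port from 'port VLAN MAC age' lines,
    skipping the header row."""
    table = {}
    for line in fdb_output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and ":" in fields[2]:
            table[fields[2]] = fields[0]
    return table

def flapping_macs(snapshots):
    """Return the MACs learned on more than one port across the
    given snapshots."""
    seen = {}
    for snap in snapshots:
        for mac, port in learned_ports(snap).items():
            seen.setdefault(mac, set()).add(port)
    return {mac for mac, ports in seen.items() if len(ports) > 1}
```

Applied to the transcript above, the distributed router MAC would be reported because it alternates between ports 11 and 1.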
[Yahoo-eng-team] [Bug 1751396] Re: DVR: Inter Tenant Traffic between two networks and connected through a shared network not reachable with DVR routers
This bug was fixed in the package neutron - 2:13.0.1-0ubuntu1

---
neutron (2:13.0.1-0ubuntu1) cosmic; urgency=medium

  * New stable point release for OpenStack Rocky.
  * d/p/revert-dvr-add-error-handling.patch: Cherry-picked from upstream
    to revert DVR regressions (LP: #1751396)
  * d/p/revert-dvr-inter-tenant.patch: Cherry-picked from upstream to
    revert DVR regression (LP: #1783654).

 -- Corey Bryant  Tue, 02 Oct 2018 17:18:19 -0400

** Changed in: neutron (Ubuntu Cosmic)
       Status: Triaged => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1751396

Title:
  DVR: Inter Tenant Traffic between two networks and connected through
  a shared network not reachable with DVR routers

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Artful:
  Invalid
Status in neutron source package in Bionic:
  Triaged
Status in neutron source package in Cosmic:
  Fix Released

Bug description:
  Inter-tenant traffic between two tenants on two different private
  networks, connected through a common shared network (created by
  Admin), is not routable through DVR routers.

  Steps to reproduce (NOTE: no external network, just a shared
  network). This is only reproducible in a multinode scenario
  (1 controller, 2 computes). Make sure that the two VMs are isolated
  on two different computes.
  openstack network create --share shared_net
  openstack subnet create shared_net_sn --network shared_net --subnet-range 172.168.10.0/24

  openstack network create net_A
  openstack subnet create net_A_sn --network net_A --subnet-range 10.1.0.0/24
  openstack network create net_B
  openstack subnet create net_B_sn --network net_B --subnet-range 10.2.0.0/24

  openstack router create router_A
  openstack port create --network=shared_net --fixed-ip subnet=shared_net_sn,ip-address=172.168.10.20 port_router_A_shared_net
  openstack router add port router_A port_router_A_shared_net
  openstack router add subnet router_A net_A_sn

  openstack router create router_B
  openstack port create --network=shared_net --fixed-ip subnet=shared_net_sn,ip-address=172.168.10.30 port_router_B_shared_net
  openstack router add port router_B port_router_B_shared_net
  openstack router add subnet router_B net_B_sn

  openstack server create server_A --flavor m1.tiny --image cirros --nic net-id=net_A
  openstack server create server_B --flavor m1.tiny --image cirros --nic net-id=net_B

  Add static routes to the routers (each router reaches the other
  tenant's subnet via the other router's shared-network port):

  openstack router set router_A --route destination=10.2.0.0/24,gateway=172.168.10.30
  openstack router set router_B --route destination=10.1.0.0/24,gateway=172.168.10.20

  Ping from one instance to the other times out.

  Ubuntu SRU details:
  ---
  [Impact]
  See above

  [Test Case]
  Deploy OpenStack with dvr enabled and then follow the steps above.

  [Regression Potential]
  The patches that are backported have already landed upstream in the
  corresponding stable branches, helping to minimize any regression
  potential.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1751396/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1796077] [NEW] policy.json doesn't allow user to change password
Public bug reported:

Currently the default policy.v3cloudsample.json in Keystone doesn't
allow users to change their own password. It's defined as:

"identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id"

which makes the user (the "owner" in policy.json) unable to change
their own password. Not sure if this change is intended or not, but as
an operator I would like to allow users to change their password by
default.

** Affects: keystone
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1796077

Title:
  policy.json doesn't allow user to change password

Status in OpenStack Identity (keystone):
  New

Bug description:
  Currently the default policy.v3cloudsample.json in Keystone doesn't
  allow users to change their own password. It's defined as:

  "identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id"

  which makes the user (the "owner" in policy.json) unable to change
  their own password. Not sure if this change is intended or not, but
  as an operator I would like to allow users to change their password
  by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1796077/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1796054] [NEW] api-ref: the evacuate action should be in the admin action section
Public bug reported:

The evacuate action can be performed by administrators only by default.
https://github.com/openstack/nova/blob/8c3d02ac3d890f414ce4e05c41d44dca3b385424/nova/policies/evacuate.py#L27

But the description of the evacuate action is in the "Servers - run an
action (servers, action)" section:
https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action

It should be in the "Servers - run an administrative action (servers,
action)" section.

** Affects: nova
   Importance: Undecided
     Assignee: Takashi NATSUME (natsume-takashi)
       Status: In Progress

** Tags: api-ref

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1796054

Title:
  api-ref: the evacuate action should be in the admin action section

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The evacuate action can be performed by administrators only by
  default.
  https://github.com/openstack/nova/blob/8c3d02ac3d890f414ce4e05c41d44dca3b385424/nova/policies/evacuate.py#L27

  But the description of the evacuate action is in the "Servers - run
  an action (servers, action)" section:
  https://developer.openstack.org/api-ref/compute/#evacuate-server-evacuate-action

  It should be in the "Servers - run an administrative action
  (servers, action)" section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1796054/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1788009] Re: neutron bridge name is not always set for ml2/ovs
Reviewed: https://review.openstack.org/596896
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=995744c576503b1de90c922dbecf690ad49f244f
Submitter: Zuul
Branch: master

commit 995744c576503b1de90c922dbecf690ad49f244f
Author: Sean Mooney
Date: Mon Aug 27 18:30:53 2018 +0100

    Always set ovs bridge name in vif:binding-details

    - This change updates _set_bridge_name to set the bridge name
      field in the vif binding details.
    - This change adds the integration_bridge name to the agent
      configuration report.

    Change-Id: I454efcb226745c585935d5bd1b3d378f69a55ca2
    Closes-Bug: #1788009

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788009

Title:
  neutron bridge name is not always set for ml2/ovs

Status in neutron:
  Fix Released

Bug description:
  * Summary: neutron bridge name is not always set for ml2/ovs

  * High level description: To enable live migration between different
  ml2 drivers utilizing multiple port bindings, nova cannot assume the
  bridge name of the destination host when generating the vm domain
  (e.g. the libvirt xml). To this end, the ovs bridge name, which is
  already set when using trunk ports, also needs to be set in the
  general case.

  * Expected output: when a port is bound by the ml2/ovs mech driver,
  the bridge name is set in the vif binding details.

  * Actual output: because the bridge name is not set in the general
  case, when you live migrate from linux bridge to ovs the vm is
  attached to a new ovs bridge using the linux bridge name of the
  source node, instead of attaching the vm interface to the br-int
  bridge. As a result, while the migration is successful, the vm will
  have no network connectivity as this new bridge is not connected to
  br-int via a patch port pair.

  * Version:
  ** OpenStack version master/Rocky RC1
  ** Centos 7.5
  ** DevStack

  * Environment: multinode default devstack install.
  * Perceived severity: low (it is a trivial fix but it does prevent
  migrating from lb to ovs hosts)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788009/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
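On the consuming side, the point of always setting the bridge name is that nova (or any reader of the port binding) can take the destination bridge from `binding:vif_details` instead of assuming it. A hedged sketch; the fallback default is an assumption for illustration, not nova's actual logic:

```python
# Sketch: picking the target bridge from a neutron port dict,
# preferring the bridge_name that the fixed agent now always puts in
# binding:vif_details, and falling back to the conventional
# integration bridge when it is absent. Illustrative only.

def target_bridge(port, default="br-int"):
    """Return the bridge a consumer should plug the VIF into."""
    details = port.get("binding:vif_details") or {}
    return details.get("bridge_name", default)
```

Before the fix, ports bound outside the trunk-port path hit the fallback branch, which is exactly the guessing this bug removes.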