[Yahoo-eng-team] [Bug 1626010] Re: OVS Firewall cannot handle non unique MACs
Reviewed: https://review.openstack.org/385085
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=6370a0471076ccb095a90f97ffc869ae7ea2e5ed
Submitter: Jenkins
Branch: master

commit 6370a0471076ccb095a90f97ffc869ae7ea2e5ed
Author: Jakub Libosvar
Date: Tue Jun 13 12:07:28 2017 +

    ovsfw: Fix overlapping MAC addresses on integration bridge

    The patch relies on the fact that traffic not coming from an instance
    (and thus from a port not managed by the firewall) is tagged, while
    traffic coming from an instance is not tagged, so a network register
    is used to mark such traffic. Together, these two approaches make
    matching rules unique even if two ports from different networks share
    the same MAC address. Traffic coming from trusted ports is marked
    with its network in the register so the firewall can later decide
    which network the traffic belongs to.

    Closes-bug: #1626010
    Change-Id: Ia05d75a01b0469a0eaa82ada67b16a9481c50f1c

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1626010

Title:
  OVS Firewall cannot handle non unique MACs

Status in neutron:
  Fix Released

Bug description:
  It seems we have a case where the openvswitch firewall driver and the
  use of trunks interfere with each other. I tried using the parent's
  MAC address for a subport.
  Like this:

    openstack network create net0
    openstack network create net1
    openstack subnet create --network net0 --subnet-range 10.0.4.0/24 subnet0
    openstack subnet create --network net1 --subnet-range 10.0.5.0/24 subnet1
    openstack port create --network net0 port0
    parent_mac="$( openstack port show port0 | awk '/ mac_address / { print $4 }' )"
    openstack port create --network net1 --mac-address "$parent_mac" port1
    openstack network trunk create --parent-port port0 --subport port=port1,segmentation-type=vlan,segmentation-id=101 trunk0
    openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec --nic port-id=port0 --key-name key0 --wait vm0

  Then all packets are lost on the trunk's parent port:

    $ openstack server show vm0 | egrep addresses.*net0
    | addresses | net0=10.0.4.6 |
    $ sudo ip netns exec "qdhcp-$( openstack network show net0 | awk '/ id / { print $4 }' )" ping -c3 10.0.4.6
    WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
    PING 10.0.4.6 (10.0.4.6) 56(84) bytes of data.

    --- 10.0.4.6 ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2016ms

  If I change the firewall_driver to noop and redo the same, I have
  connectivity. If I keep the openvswitch firewall_driver but do not
  explicitly set the subport MAC, letting neutron assign one
  automatically, then again I have connectivity.
  devstack version: 81d89cf
  neutron version: 60010a8

  relevant parts of local.conf:

    [[local|localrc]]
    enable_service neutron-api
    enable_service neutron-l3
    enable_service neutron-agent
    enable_service neutron-dhcp
    enable_service neutron-metadata-agent

    [[post-config|$NEUTRON_CONF]]
    [DEFAULT]
    service_plugins = router,trunk

    [[post-config|$NEUTRON_PLUGIN_CONF]]
    [securitygroup]
    firewall_driver = openvswitch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1626010/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
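The approach in the fix above — making firewall matches unique by pairing each MAC with a per-network marker rather than matching on the MAC alone — can be sketched minimally. The names below are illustrative, not Neutron's actual flow code:

```python
def flow_match_key(mac, net_id):
    """A hypothetical match key: unique per (MAC, network) pair, so two
    ports on different networks may share a MAC without their firewall
    rules colliding. MACs are normalized to lowercase for comparison."""
    return (mac.lower(), net_id)

# Two ports on different networks sharing the parent's MAC (as in the
# trunk reproduction above) now get distinct match keys:
parent_key = flow_match_key("FA:16:3E:00:00:01", "net0")
subport_key = flow_match_key("fa:16:3e:00:00:01", "net1")
assert parent_key != subport_key
```

Matching on MAC alone, by contrast, would make the two keys identical, which is exactly the ambiguity the commit removes by writing the network into an OVS register.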
[Yahoo-eng-team] [Bug 1705487] Re: placement logs an ERROR when PUT /allocation results in an invalid inventory
Reviewed: https://review.openstack.org/485726
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=66988da3db3bf73cd380db303bd0ca9bebcb3c64
Submitter: Jenkins
Branch: master

commit 66988da3db3bf73cd380db303bd0ca9bebcb3c64
Author: Jay Pipes
Date: Thu Jul 20 11:58:09 2017 -0400

    Remove improper LOG.exception() calls in placement

    The PUT /allocations Placement API handler was improperly calling
    LOG.exception() when two normal-operation events were occurring:

    1) A concurrent attempt to allocate against the same resource
       providers had occurred
    2) Another process consuming resources concurrently caused capacity
       to be exceeded on one or more of the requested providers

    Neither of the above scenarios is a software error, so the
    LOG.exception() calls have been removed.

    Change-Id: I569b28313e52d979ac9be5bea88c021a0664d851
    Fixes-bug: #1705487

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705487

Title:
  placement logs an ERROR when PUT /allocation results in an invalid
  inventory

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When an allocation request on placement results in an invalid
  inventory because capacity is exceeded, placement properly returns
  HTTP 409 but at the same time logs an ERROR (see below). The ERROR
  log is not necessary, as this is not a software error.
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: [pid: 9543|app: 0|req: 790/1579] 192.168.122.191 () {66 vars in 1440 bytes} [Thu Jul 20 11:44:38 2017] PUT /placement/allocations/485c7480-939c-4b88-8c
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG nova.api.openstack.placement.requestlog [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] Starting request: 192.168.122.191 "GET
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] Acquired semaphore "rc_cache" {{(pid=9542) lock /usr
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] Releasing semaphore "rc_cache" {{(pid=9542) lock /us
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: INFO nova.api.openstack.placement.requestlog [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] 192.168.122.191 "GET /placement/allocati
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: [pid: 9542|app: 0|req: 790/1580] 192.168.122.191 () {60 vars in 1344 bytes} [Thu Jul 20 11:44:38 2017] GET /placement/allocations/9fff84b6-71e4-4f53-ad
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG nova.api.openstack.placement.requestlog [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Starting request: 192.168.122.191 "PUT
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Acquired semaphore "rc_cache" {{(pid=9543) lock /usr
  Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Releasing semaphore "rc_cache" {{(pid=9543) lock /us
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Acquired semaphore "rc_cache" {{(pid=9543) lock /usr
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Releasing semaphore "rc_cache" {{(pid=9543) lock /us
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Acquired semaphore "rc_cache" {{(pid=9543) lock /usr
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Releasing semaphore "rc_cache" {{(pid=9543) lock /us
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Bad inventory: InvalidInventor
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation Traceback (most recent call last):
  Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation   File
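The pattern the fix adopts — treating contention as a normal outcome that produces a 409 for the client rather than an ERROR-level traceback — can be illustrated with a small sketch. The handler and exception names here are hypothetical stand-ins, not Nova's actual code:

```python
import logging

LOG = logging.getLogger(__name__)


class ConcurrentUpdate(Exception):
    """Another request modified the same resource providers concurrently."""


def put_allocations(apply_allocations):
    """Handle the expected-contention path without LOG.exception().

    `apply_allocations` stands in for the database work. On conflict the
    handler answers HTTP 409 and logs at DEBUG, because a concurrent
    update is normal operation, not a software error.
    """
    try:
        apply_allocations()
    except ConcurrentUpdate:
        LOG.debug("Allocations conflicted with a concurrent update; "
                  "the client should retry.")
        return 409
    return 204
```

The key design point is that `LOG.exception()` is reserved for genuinely unexpected failures; expected races surface only through the HTTP status code.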
[Yahoo-eng-team] [Bug 1701097] Re: eni rendering of ipv6 gateways fails
This bug was fixed in the package cloud-init -
0.7.9-221-g7e41b2a7-0ubuntu1

---
cloud-init (0.7.9-221-g7e41b2a7-0ubuntu1) artful; urgency=medium

  * New upstream snapshot.
    - sysconfig: use MACADDR on bonds/bridges to configure mac_address
      [Ryan Harper] (LP: #1701417)
    - net: eni route rendering missed ipv6 default route config
      [Ryan Harper] (LP: #1701097)
    - sysconfig: enable mtu set per subnet, including ipv6 mtu
      [Ryan Harper] (LP: #1702513)
    - sysconfig: handle manual type subnets [Ryan Harper] (LP: #1687725)
    - sysconfig: fix ipv6 gateway routes [Ryan Harper] (LP: #1694801)
    - sysconfig: fix rendering of bond, bridge and vlan types.
      [Ryan Harper] (LP: #1695092)
    - Templatize systemd unit files for cross distro deltas. [Ryan Harper]
    - sysconfig: ipv6 and default gateway fixes. [Ryan Harper] (LP: #1704872)
    - net: fix renaming of nics to support mac addresses written in
      upper case. (LP: #1705147)

 -- Scott Moser  Thu, 20 Jul 2017 21:37:12 -0400

** Changed in: cloud-init (Ubuntu Artful)
   Status: Confirmed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1701097

Title:
  eni rendering of ipv6 gateways fails

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init source package in Artful:
  Fix Released

Bug description:
  cloud-init trunk and xenial, yakkety, zesty and artful all fail.

  A network config with an ipv6 gateway route like:

    subnets:
    - type: static
      address: 2001:4800:78ff:1b:be76:4eff:fe06:96b3
      netmask: ':::::'
      routes:
      - gateway: 2001:4800:78ff:1b::1
        netmask: '::'
        network: '::'

  For eni rendering, this should create post-up/pre-down route commands
  that install a default ipv6 route entry, like this:

    post-up route add -A inet6 default gw 2001:4800:78ff:1b::1 || true
    pre-down route del -A inet6 default gw 2001:4800:78ff:1b::1 || true

  However, what is currently generated is this:

    post-up route add -net :: netmask :: gw 2001:4800:78ff:1b::1 || true
    pre-down route del -net :: netmask :: gw 2001:4800:78ff:1b::1 || true

  That does not install the route correctly as a default gateway route.

  This is fallout from commit d00da2d5b0d45db5670622a66d833d2abb907388
  ("net: normalize data in network_state object"), which removed ipv6
  route 'netmask' values and converted them to prefix-length values,
  but failed to update the eni renderer's check for the ipv6 default
  gateway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1701097/+subscriptions
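The core of the fix is recognizing a default route in the normalized data, where the netmask has become a prefix length. A simplified sketch of such a check and renderer, assuming illustrative names rather than cloud-init's actual code:

```python
def _is_default_ipv6(route):
    # After commit d00da2d5, ipv6 routes carry a prefix length instead
    # of a netmask string; a default route is network '::' with prefix 0.
    return route.get("network") == "::" and route.get("prefix", 0) == 0


def render_route(route):
    """Emit the post-up/pre-down pair for an eni interface stanza."""
    gw = route["gateway"]
    if _is_default_ipv6(route):
        up = "post-up route add -A inet6 default gw %s || true" % gw
        down = "pre-down route del -A inet6 default gw %s || true" % gw
    else:
        up = "post-up route add -A inet6 -net %s/%s gw %s || true" % (
            route["network"], route["prefix"], gw)
        down = "pre-down route del -A inet6 -net %s/%s gw %s || true" % (
            route["network"], route["prefix"], gw)
    return [up, down]
```

With the buggy pre-fix check (which still looked for a 'netmask' key), a default route would fall into the `-net ::` branch, producing exactly the broken output quoted in the bug.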
[Yahoo-eng-team] [Bug 1702959] Re: MigrationNotFound in multi-cell setup doing server external events processing
Reviewed: https://review.openstack.org/445142
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b8be61eb39dc9f605ad853e5697a8f4bf73b025b
Submitter: Jenkins
Branch: master

commit b8be61eb39dc9f605ad853e5697a8f4bf73b025b
Author: Matthew Booth
Date: Mon Mar 13 18:28:17 2017 +

    Fix and optimize external_events for multiple cells

    server_external_events was previously making an api db query and a
    cell db query for every instance reference. We improve this by
    making exactly 1 api db query to fetch all instance mappings, and
    then 1 cell db query per cell to fetch all relevant instances from
    that cell. Further, it wasn't properly handling the case where
    events were delivered in one request for multiple instances across
    cells, which this also fixes. We also document an obtuse edge
    condition in ComputeAPI.external_instance_event which will cause the
    current code to break when we support migration between cells.

    Note this includes a tweak to the SingleCellSimple fixture to mock
    out the new InstanceMappingList method we use, as well as a fix to
    the other InstanceMapping mock, which was returning mappings with
    bogus instance uuids. This patch relies on the results of those
    being realistic and thus requires those changes.

    Closes-Bug: #1702959
    Co-Authored-By: Dan Smith
    Change-Id: If59453f1899e99040c554bcb9ad54c8a506adc56

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1702959

Title:
  MigrationNotFound in multi-cell setup doing server external events
  processing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I noticed this in the devstack change testing multi-cell, in the
  neutron multinode job:

  http://logs.openstack.org/56/477556/3/check/gate-tempest-dsvm-neutron-multinode-full-ubuntu-xenial-nv/ee3e9b6/logs/screen-n-api.txt.gz?level=TRACE#_Jul_05_20_55_53_749696

  We're getting MigrationNotFound because we're not targeted to a cell
  when looking up the migration:

  Jul 05 20:55:53.749696 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions [None req-d4ca5b2e-140c-4331-88eb-50bae2de2230 service nova] Unexpected exception in API method: MigrationNotFound: Migration 15 could not be found.
  Jul 05 20:55:53.749806 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions Traceback (most recent call last):
  Jul 05 20:55:53.749888 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/extensions.py", line 336, in wrapped
  Jul 05 20:55:53.749965 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  Jul 05 20:55:53.750053 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/validation/__init__.py", line 108, in wrapper
  Jul 05 20:55:53.750134 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  Jul 05 20:55:53.750222 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/server_external_events.py", line 120, in create
  Jul 05 20:55:53.750303 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions     context, accepted_instances, mappings, accepted_events)
  Jul 05 20:55:53.750438 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 4046, in external_instance_event
  Jul 05 20:55:53.750564 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions     for host in self._get_relevant_hosts(context, instance):
  Jul 05 20:55:53.750647 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 4072, in _get_relevant_hosts
  Jul 05 20:55:53.750722 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions     migration = objects.Migration.get_by_id(context, migration_id)
  Jul 05 20:55:53.750798 ubuntu-xenial-2-node-osic-cloud1-s3500-9668742 devstack@n-api.service[27811]: ERROR nova.api.openstack.extensions
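The query-batching optimization described in the commit message — one API-DB query for all mappings, then one cell-DB query per cell instead of one per instance — amounts to a group-by over cell IDs. A minimal sketch with illustrative stand-ins (the real code uses Nova's InstanceMappingList and cell-targeted contexts):

```python
from collections import defaultdict


def fetch_instances(mappings, query_cell):
    """One bulk query per cell instead of one query per instance.

    `mappings` is an iterable of (instance_uuid, cell_id) pairs, as a
    single API-DB query would return them, and `query_cell(cell_id,
    uuids)` performs a single bulk query against one cell and returns
    {uuid: instance}. Both are hypothetical stand-ins.
    """
    by_cell = defaultdict(list)
    for uuid, cell_id in mappings:
        by_cell[cell_id].append(uuid)

    instances = {}
    for cell_id, uuids in by_cell.items():
        # One DB round-trip per cell, regardless of instance count.
        instances.update(query_cell(cell_id, uuids))
    return instances
```

For N instances spread over C cells this issues C queries rather than 2N, and it naturally handles a single event request touching instances in several cells.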
[Yahoo-eng-team] [Bug 1661503] Re: If public_endpoint is set, the first call will always be to the public endpoint
That configuration option acts as a hard-coded value for the public
endpoint [0]. If left unset, the service will generate the endpoint
from the request environment [1]. Try unsetting public_endpoint if you
can and see if that helps your internal clients. External clients
using the public endpoint should have the same experience, since they
are using port 5000 for requests. Let me know if that helps.

[0] https://github.com/openstack/keystone/blob/025e844fc485c23be1de033473f3cadd7486b642/keystone/conf/default.py#L43-L49
[1] https://github.com/openstack/keystone/blob/025e844fc485c23be1de033473f3cadd7486b642/keystone/common/wsgi.py#L330-L337

** Changed in: keystone
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1661503

Title:
  If public_endpoint is set, the first call will always be to the
  public endpoint

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  I have set up a keystone service (Mitaka) on ubuntu, and it seems
  that the first call always goes to keystone's public api url when
  "public_endpoint" is set in keystone.conf. For example, when I run
  the following openstack commands, I always get this error:

    ubuntu@client:~$ openstack token issue
    Unable to establish connection to http://10.12.2.2:5000/fuga/v3/auth/tokens

  The keystone endpoints are like this:

    public:   http://10.12.2.2:5000/fuga/v3
    admin:    http://10.12.1.2:35357/fuga/v3
    internal: http://10.12.3.2:5000/fuga/v3

  The openstack client is installed on a client node, which is separate
  from the keystone node, and this client node has no network access to
  the public api network.
  So if I were accessing the public api this would be expected, but I
  have set the env variables like this:

    ubuntu@client:~$ env | grep OS_
    OS_USER_DOMAIN_NAME=default
    OS_PROJECT_NAME=admin
    OS_IDENTITY_API_VERSION=3
    OS_PASSWORD=openstack
    OS_AUTH_URL=http://10.12.1.2:35357/fuga/v3
    OS_USERNAME=admin
    OS_INTERFACE=admin
    OS_PROJECT_DOMAIN_NAME=default

  Therefore, my expectation is that api access goes only through the
  admin url. I have also tried with the internal api url, but get the
  same error. And of course, if the client node has public api network
  access, the openstack client works perfectly. Also, if I just don't
  use the special path for api urls, i.e. by not setting
  "public_endpoint", it also works perfectly.

  According to this:
  https://github.com/openstack/keystone/blob/stable/mitaka/keystone/version/service.py#L160
  the "public" string is given, and here:
  https://github.com/openstack/keystone/blob/stable/mitaka/keystone/common/wsgi.py#L372
  the string is combined with "_endpoint", which becomes
  "public_endpoint", and if that url is set, this public url will be
  the initial access.

  I have attached some info:
  - /etc/keystone/keystone.conf
  - /etc/apache2/sites-enabled/wsgi-keystone.conf
  - output with debug option

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1661503/+subscriptions
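The behaviour diagnosed in the reply can be modelled in a few lines: when `public_endpoint` is configured it overrides everything, otherwise the advertised URL is rebuilt from the incoming WSGI request, so clients talking to the admin or internal address get a matching URL back. This is a simplified model of the logic referenced at [0]/[1] above, not keystone's actual code:

```python
def base_url(public_endpoint, environ):
    """Return the endpoint advertised to a client.

    `public_endpoint` models the keystone.conf option; `environ` is a
    WSGI environment dict. If the option is set it wins for every
    request, which is exactly why clients on the admin/internal network
    get redirected to an unreachable public URL.
    """
    if public_endpoint:
        return public_endpoint
    scheme = environ.get("wsgi.url_scheme", "http")
    host = environ["HTTP_HOST"]
    return "%s://%s%s" % (scheme, host, environ.get("SCRIPT_NAME", ""))
```

Leaving the option unset (as the reply suggests) makes the second branch apply, so an admin-URL request yields an admin-URL response.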
[Yahoo-eng-team] [Bug 1705467] Re: [Get me a network] auto-allocated networks does not verify tenant ID while creating
As admin, you could create a network for a tenant that does not exist.
This is a well-known gotcha. To address it we'd have to have neutron
talk to Keystone to validate that the UUID is a valid one. Doable, but
we never got around to doing it, for a variety of reasons (performance
being one of them).

** Changed in: neutron
   Status: Invalid => Opinion

** Changed in: neutron
   Importance: Undecided => Wishlist

** Summary changed:
- [Get me a network] auto-allocated networks does not verify tenant ID while creating
+ project ID is not verified when creating neutron resources

** Tags added: rfe

** Summary changed:
- project ID is not verified when creating neutron resources
+ [RFE] project ID is not verified when creating neutron resources

** Changed in: neutron
   Status: Opinion => Confirmed

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705467

Title:
  [RFE] project ID is not verified when creating neutron resources

Status in neutron:
  Confirmed

Bug description:
  Environment: Ocata

  According to the current docs, in order to create an auto-allocated
  network in a tenant, the following command should be invoked:

    neutron auto-allocated-topology-show [project_id]

  Unfortunately, this command does not verify that the given project
  exists, so it is easy to create completely useless network
  components, even breaking the naming convention for the project_id
  field in database tables.
  Example:

    » neutron auto-allocated-topology-show 37-e088febf-64bd-4b5e-b4dd-1f5a80e
    +------------+--------------------------------------+
    | Field      | Value                                |
    +------------+--------------------------------------+
    | id         | 63f5f696-8a64-4bbb-af1e-b55a26f78f38 |
    | project_id | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
    | tenant_id  | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
    +------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705467/+subscriptions
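Full validation (does the project actually exist?) would require the round-trip to Keystone discussed in the reply, but the "breaking the naming convention" half of the report could be caught by a cheap format guard. A hypothetical sketch, not neutron code:

```python
import uuid


def looks_like_project_id(value):
    """Cheap format guard: accept only canonical UUID strings.

    Rejects values like '37-e088febf-64bd-4b5e-b4dd-1f5a80e' before
    they are written into project_id/tenant_id columns. This does NOT
    confirm the project exists in Keystone; it only filters malformed
    identifiers.
    """
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (TypeError, ValueError, AttributeError):
        return False
```

The round-trip through `str(uuid.UUID(...))` normalizes to the canonical hyphenated form, so variants such as braced or hyphen-free UUIDs are rejected as well.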
[Yahoo-eng-team] [Bug 1705467] Re: [Get me a network] auto-allocated networks does not verify tenant ID while creating
None of the neutron commands do.

** Changed in: neutron
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705467

Title:
  [RFE] project ID is not verified when creating neutron resources

Status in neutron:
  Confirmed

Bug description:
  Environment: Ocata

  According to the current docs, in order to create an auto-allocated
  network in a tenant, the following command should be invoked:

    neutron auto-allocated-topology-show [project_id]

  Unfortunately, this command does not verify that the given project
  exists, so it is easy to create completely useless network
  components, even breaking the naming convention for the project_id
  field in database tables.

  Example:

    » neutron auto-allocated-topology-show 37-e088febf-64bd-4b5e-b4dd-1f5a80e
    +------------+--------------------------------------+
    | Field      | Value                                |
    +------------+--------------------------------------+
    | id         | 63f5f696-8a64-4bbb-af1e-b55a26f78f38 |
    | project_id | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
    | tenant_id  | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
    +------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705467/+subscriptions
[Yahoo-eng-team] [Bug 1705567] [NEW] gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial failed with timeout
Public bug reported:

http://logs.openstack.org/85/385085/30/gate/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/75e271d/console.html

  2017-07-20 19:18:32.693567 | full runtests: commands[2] | tempest run --combine --serial --regex (?!.*\[.*\bslow\b.*\])(^tempest\.scenario) --concurrency=4
  2017-07-20 19:18:38.317292 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  2017-07-20 19:18:38.317404 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  2017-07-20 19:18:38.317449 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
  2017-07-20 19:18:38.317501 | OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
  2017-07-20 19:18:38.317574 | ${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./} ${OS_TEST_PATH:-./tempest/test_discover} --list
  2017-07-20 19:18:38.317621 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  2017-07-20 19:18:38.317657 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  2017-07-20 19:18:38.317692 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
  2017-07-20 19:18:38.317733 | OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
  2017-07-20 19:18:38.317814 | ${PYTHON:-python} -m subunit.run discover -t ${OS_TOP_LEVEL:-./} ${OS_TEST_PATH:-./tempest/test_discover} --load-list /tmp/tmpqrNAOh
  2017-07-20 19:20:27.479411 | {0} tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario [101.261843s] ... ok
  2017-07-20 19:20:27.479460 |
  2017-07-20 19:20:27.479481 | Captured stderr:
  2017-07-20 19:20:27.479498 |
  2017-07-20 19:20:27.479565 | /opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
  2017-07-20 19:20:27.479587 | _warn_sign_verify_deprecated()
  2017-07-20 19:20:27.479600 |
  2017-07-20 19:22:09.799157 | {0} tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_reboot [93.151292s] ... ok
  2017-07-20 19:23:37.650731 | {0} tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_hotplug_nic [84.459821s] ... ok
  2017-07-20 19:25:34.994829 | {0} tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops [117.306077s] ... ok
  2017-07-20 19:28:02.218494 | {0} tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac [143.769198s] ... ok
  2017-07-20 19:28:05.681801 | {0} tempest.scenario.test_object_storage_basic_ops.TestObjectStorageBasicOps.test_swift_basic_ops [0.381328s] ... ok
  2017-07-20 19:30:17.429097 | {0} tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_cross_tenant_traffic [128.538566s] ... ok
  2017-07-20 19:31:43.217495 | {0} tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_in_tenant_traffic [85.784122s] ... ok
  2017-07-20 19:32:41.287100 | {0} tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops [49.211398s] ... ok
  2017-07-20 19:32:48.257653 | {0} setUpClass (tempest.scenario.test_server_multinode.TestServerMultinode) ... SKIPPED: Less than 2 compute nodes, skipping multinode tests.
  2017-07-20 19:33:30.275095 | {0} tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_boot_server_from_encrypted_volume_luks [30.113681s] ... ok
  2017-07-20 19:34:42.975903 | {0} tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_create_ebs_image_and_check_boot [72.662936s] ... ok
  2017-07-20 19:37:50.719757 | {0} tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern [187.710856s] ... ok
  2017-07-20 19:38:03.835802 | /home/jenkins/workspace/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/devstack-gate/functions.sh: line 1127: 12175 Killed timeout -s 9 ${REMAINING_TIME}m bash -c "source $WORKSPACE/devstack-gate/functions.sh && $cmd"
  2017-07-20 19:38:03.835887 | ERROR: the main setup script run by this job failed - exit code: 137
  2017-07-20 19:38:03.845801 | please look at the relevant log files to determine the root cause

It's not fully clear why, despite --concurrency=4, it looks like a
single test thread was running. That said, the slow tests had only 20
minutes to run before being killed, so it may as well be an issue in an
earlier phase of the job.

** Affects: neutron
   Importance: High
   Status: New

** Tags: gate-failure linuxbridge tempest

** Changed in: neutron
   Importance: Undecided => High

** Tags added: gate-failure linuxbridge

** Tags added: tempest

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705567

Title:
  gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial failed with
  timeout

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/85/385085/30/gate/gate-tempest-dsvm-neutron-
[Yahoo-eng-team] [Bug 1705548] [NEW] Add virtio-forwarder VNIC type
Public bug reported:

Add support for virtio-forwarder VNIC type

* This is a feature request to add support for the virtio-forwarder
  VNIC type.
* The virtio-forwarder VNIC type has been added as another option for
  setting the "binding:vnic_type" property on a port. This requests a
  low-latency virtio port inside the instance, likely backed by
  hardware acceleration, and requires a supporting Neutron plugin.
* Corresponding neutronclient change:
  https://review.openstack.org/#/c/483533/

** Affects: horizon
   Importance: Undecided
   Assignee: Jan Gutter (jangutter)
   Status: In Progress

** Description changed:

  Add support for virtio-forwarder VNIC type

- * This is a feature request to adds support for the virtio-forwarder VNIC type.
- * The virtio-forwarder VNIC type has been added as another option for
- setting the "binding:vnic_type" property on a port. This requests a
- low-latency virtio port inside the instance, likely backed by hardware
- acceleration and requires a supporting Neutron plugin.
- * Corresponding neutronclient change:
- https://review.openstack.org/#/c/483533/
+ * This is a feature request to add support for the virtio-forwarder VNIC type.
+ * The virtio-forwarder VNIC type has been added as another option for
+ setting the "binding:vnic_type" property on a port. This requests a
+ low-latency virtio port inside the instance, likely backed by hardware
+ acceleration and requires a supporting Neutron plugin.
+ * Corresponding neutronclient change:
+ https://review.openstack.org/#/c/483533/

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1705548

Title:
  Add virtio-forwarder VNIC type

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Add support for virtio-forwarder VNIC type

  * This is a feature request to add support for the virtio-forwarder
    VNIC type.
  * The virtio-forwarder VNIC type has been added as another option for
    setting the "binding:vnic_type" property on a port. This requests a
    low-latency virtio port inside the instance, likely backed by
    hardware acceleration and requires a supporting Neutron plugin.
  * Corresponding neutronclient change:
    https://review.openstack.org/#/c/483533/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1705548/+subscriptions
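From the dashboard's perspective, adding a VNIC type is essentially extending the list of choices offered for the port's "binding:vnic_type" property. A hypothetical sketch of that shape (the tuples mirror common VNIC-type values, but this is not Horizon's actual code):

```python
# (value, human-readable label) pairs, as a form choice field would use.
# "virtio-forwarder" is the newly added option; the other entries are
# examples of existing VNIC types, listed here for illustration.
VNIC_TYPES = [
    ("normal", "Normal"),
    ("direct", "Direct"),
    ("macvtap", "MacVTap"),
    ("virtio-forwarder", "Virtio Forwarder"),
]


def vnic_type_values():
    """Values a form would accept for the binding:vnic_type property."""
    return [value for value, _label in VNIC_TYPES]
```

Whether a given value actually works then depends on the Neutron plugin backing the deployment, as the bug description notes.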
[Yahoo-eng-team] [Bug 1705536] [NEW] [RFE] L3-agent agent-mode dvr bridge.
Public bug reported: The use of linux network namespaces in the l3 agent routers causes a choke point for bandwidth on east/west and north/south traffic. In the case of east/west traffic, the source and destination interfaces are known to Neutron and could be routed using Open vSwitch if it is selected as the mechanism_driver for the L2-agent. This should allow the l3-agent to be compatible with DPDK and Windows. When using network namespaces with Open vSwitch to route an l3 ping packet: - arp from source vm -> tap1 (vlan tagging skipped) + broadcast to other ports - tap1-> kernel network stack - kernel sends arp reply tap1 - tap1-> source vm (vlan tagging skipped) - icmp from source vm -> tap1(vlan tagging skipped) - kernel receives icmp on tap1 and send arp request to dest vm via tap2(broadcast) - arp via tap2 -> dest vm (vlan tagging skipped) - dest vm replies -> tap2 - kernel updates dest mac and decrement ttl the forward icmp packet to tap2 - tap2 -> dest vm-> dest vm replies->tap2.(vlan tagging skipped) - kernel updates dest mac and decrement ttl the forward icmp reply packet to tap1 - tap1-> source vm When OpenFlow is used to route the same traffic: - arp from source vm -> arp rewritten to reply -> sent to source vm ( single openflow action). - icmp from source vm -> destination mac update, ttl decremented -> dest vm ( single openflow action) - icmp from dest vm -> destination mac update, ttl decremented -> source vm ( single openflow action) Introducing a new agent_mode would allow an operator to select which implementation is most suitable to their use case. ** Affects: neutron Importance: Undecided Status: New ** Tags: rfe -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1705536 Title: [RFE] L3-agent agent-mode dvr bridge. 
Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705536/+subscriptions
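The "arp rewritten to reply" step described in this RFE is the classic OVS ARP-responder pattern. Below is a minimal, illustrative sketch of building such a flow string; the NXM field names are real OVS extensions, but the helper function and its use here are an assumption about how the proposal could look, not the proposed implementation:

```python
def arp_responder_flow(ip, mac, mac_hex, ip_hex):
    """Build an ovs-ofctl flow that turns an ARP request for `ip` into a
    reply from `mac`, sent straight back out the ingress port.

    mac_hex/ip_hex are the same values as hex literals for load: actions
    (e.g. "0xfa163eaabbcc" and "0x0a000406")."""
    actions = ",".join([
        "move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[]",  # reply goes to the asker
        "mod_dl_src:%s" % mac,                      # source mac = answered mac
        "load:0x2->NXM_OF_ARP_OP[]",                # ARP opcode 2 = reply
        "move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[]",  # asker becomes target
        "move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[]",
        "load:%s->NXM_NX_ARP_SHA[]" % mac_hex,      # answer with our mac/ip
        "load:%s->NXM_OF_ARP_SPA[]" % ip_hex,
        "IN_PORT",                                  # hairpin back out same port
    ])
    return "priority=100,arp,arp_tpa=%s,actions=%s" % (ip, actions)
```

This is the "single openflow action" the report contrasts with the multi-hop kernel path: the request never leaves the bridge.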
[Yahoo-eng-team] [Bug 1705446] Re: filter scheduler raises TypeError: argument of type 'NoneType' is not iterable when placement returns no allocation candidates
I'm going to mark this as invalid given it's a bug against an in-review change, so we'll get it fixed in: https://review.openstack.org/#/c/483566/

** Changed in: nova
   Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705446

Title:
  filter scheduler raises TypeError: argument of type 'NoneType' is not iterable when placement returns no allocation candidates

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Building on top of the 'claim resources in placement API during schedule()' patch series [1] I tried to consume every resource available. When the placement API returns no allocation candidates because there are no resources left, the scheduler/manager blows up with a stack trace [2]. It seems the problem is introduced in [3]. The effect is not severe, as the exception fails the scheduling, which is the expected behavior when there are no resources left.

  [1] https://review.openstack.org/#/c/483566
  [2] http://paste.openstack.org/show/616000/
  [3] https://review.openstack.org/#/c/483565/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1705446/+subscriptions
[Yahoo-eng-team] [Bug 1705071] Re: [placement] Attempting to find allocation candidates for shared-only resources results in KeyError
Reviewed: https://review.openstack.org/484900
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d67d3459afd5271f09cc6bbeb0359ad419a4ec7a
Submitter: Jenkins
Branch: master

commit d67d3459afd5271f09cc6bbeb0359ad419a4ec7a
Author: Jay Pipes
Date: Tue Jul 18 14:03:53 2017 -0400

    placement: alloc candidates only shared resources

    When attempting to perform a GET /allocation_candidates request for only resources that are shared, a KeyError was being produced: http://paste.openstack.org/show/615753/

    The problem is that the _get_usages_by_provider_and_rc() method returns a dict with only the sharing resource provider usage information, but _get_all_with_shared() returns a list of resource provider IDs including both the *sharing* resource provider *and* the shared-with providers. When the code attempts to cross-reference provider summaries (which are constructed by looping over the usage dicts) with each provider ID in the result from _get_all_with_shared(), we hit a KeyError on the shared-with provider IDs because there was no usage record (because the usage query filters on resource class ID and we requested only a resource class ID that was shared).

    This patch fixes the KeyError produced for those providers that do not provide any resources (i.e. they are only included in the returned results because they have requested resources shared *with* them) by returning both the internal integer ID and the UUID of providers from the _get_all_shared_with() function and then, in the loop that creates allocation requests, simply ignoring any resource provider that doesn't exist in the provider_summaries dict.

    Closes-Bug: #1705071
    Change-Id: I742fd093a8b33ff88244b2990021784e4b65f51f

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705071

Title:
  [placement] Attempting to find allocation candidates for shared-only resources results in KeyError

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When attempting to perform a GET /allocation_candidates request for only resources that are shared, a KeyError is produced: http://paste.openstack.org/show/615753/

  The problem is that the _get_usages_by_provider_and_rc() method returns a dict with only the sharing resource provider usage information but the _get_all_with_shared() returns a list of resource provider IDs including both the *sharing* resource provider *and* the shared-with providers. When the code attempts to cross-reference provider summaries (which are constructed by looping over the usage dicts) with each provider ID in the result from _get_all_with_shared(), we hit a KeyError on the shared-with provider IDs because there was no usage record (because the usage query filters on resource class ID and we requested only a resource class ID that was shared).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1705071/+subscriptions
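The fix's core idea can be sketched in isolation: when cross-referencing providers against the usage-derived summaries, skip (rather than index into) any provider that contributed no usage rows. The function and field names below are illustrative stand-ins, not Nova's actual code:

```python
def build_allocation_requests(provider_ids, provider_summaries):
    """Cross-reference candidate providers against provider summaries.

    provider_summaries is built from usage rows, so a shared-with provider
    that supplies none of the requested resource classes has no entry.
    Using .get() and skipping None avoids the KeyError described above."""
    requests = []
    for pid in provider_ids:
        summary = provider_summaries.get(pid)  # no KeyError on missing IDs
        if summary is None:
            continue  # provider has nothing we asked for; ignore it
        requests.append({"provider": pid, "resources": summary})
    return requests
```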
[Yahoo-eng-team] [Bug 1705485] [NEW] policy rule identity:change password is no longer needed
Public bug reported:

With the policy-in-code changes, the rule below is added in keystone/common/policies/user.py, but enforcement of this rule was removed with change-set [0] for the user change_password API. As this rule is no longer used, it can be removed.

  policy.DocumentedRuleDefault(
      name=base.IDENTITY % 'change_password',
      check_str=base.RULE_ADMIN_OR_OWNER,
      description='Self-service password change.',
      operations=[{'path': '/v3/users/{user_id}/password',
                   'method': 'POST'}])

[0] https://github.com/openstack/keystone/commit/3ae73b67522bf388a0fdcecceb662831d853a313

** Affects: keystone
   Importance: Undecided
   Status: New

** Summary changed:

- policy rule identity:change_password is not used with change_password API
+ policy rule identity:change password is not enforced with API

** Summary changed:

- policy rule identity:change password is not enforced with API
+ policy rule identity:change password is no longer needed

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1705485

Title:
  policy rule identity:change password is no longer needed

Status in OpenStack Identity (keystone):
  New

Bug description:
  With the policy-in-code changes, the rule below is added in keystone/common/policies/user.py, but enforcement of this rule was removed with change-set [0] for the user change_password API. As this rule is no longer used, it can be removed.
  policy.DocumentedRuleDefault(
      name=base.IDENTITY % 'change_password',
      check_str=base.RULE_ADMIN_OR_OWNER,
      description='Self-service password change.',
      operations=[{'path': '/v3/users/{user_id}/password',
                   'method': 'POST'}])

  [0] https://github.com/openstack/keystone/commit/3ae73b67522bf388a0fdcecceb662831d853a313

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1705485/+subscriptions
[Yahoo-eng-team] [Bug 1705487] [NEW] placement logs an ERROR when PUT /allocations results in an invalid inventory
Public bug reported:

When an allocation request on placement results in an invalid inventory because capacity is exceeded, placement properly returns HTTP 409, but at the same time it logs an ERROR (see below). The ERROR log is not necessary, as this is not a software error.

Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: [pid: 9543|app: 0|req: 790/1579] 192.168.122.191 () {66 vars in 1440 bytes} [Thu Jul 20 11:44:38 2017] PUT /placement/allocations/485c7480-939c-4b88-8c
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG nova.api.openstack.placement.requestlog [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] Starting request: 192.168.122.191 "GET
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] Acquired semaphore "rc_cache" {{(pid=9542) lock /usr
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] Releasing semaphore "rc_cache" {{(pid=9542) lock /us
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: INFO nova.api.openstack.placement.requestlog [None req-ce309a62-3819-4ccb-8491-828046235f2d service placement] 192.168.122.191 "GET /placement/allocati
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: [pid: 9542|app: 0|req: 790/1580] 192.168.122.191 () {60 vars in 1344 bytes} [Thu Jul 20 11:44:38 2017] GET /placement/allocations/9fff84b6-71e4-4f53-ad
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG nova.api.openstack.placement.requestlog [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Starting request: 192.168.122.191 "PUT
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Acquired semaphore "rc_cache" {{(pid=9543) lock /usr
Jul 20 11:44:38 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Releasing semaphore "rc_cache" {{(pid=9543) lock /us
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Acquired semaphore "rc_cache" {{(pid=9543) lock /usr
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Releasing semaphore "rc_cache" {{(pid=9543) lock /us
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Acquired semaphore "rc_cache" {{(pid=9543) lock /usr
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: DEBUG oslo_concurrency.lockutils [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Releasing semaphore "rc_cache" {{(pid=9543) lock /us
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation [None req-149d9b07-f06c-4b5b-ae8d-504d2a51340e service placement] Bad inventory: InvalidInventor
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation Traceback (most recent call last):
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation   File "/opt/stack/nova/nova/api/openstack/placement/handlers/allocation.py", line 249, in _set_
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation     allocations.create_all()
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation   File "/opt/stack/nova/nova/objects/resource_provider.py", line 1869, in create_all
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation     self._set_allocations(self._context, self.objects)
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 979, in
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation     return fn(*args, **kwargs)
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation   File "/opt/stack/nova/nova/objects/resource_provider.py", line 1829, in _set_allocations
Jul 20 11:44:39 ubuntu devstack@placement-api.service[9538]: ERROR nova.api.openstack.placement.handlers.allocation     before_gens = _check_capacity_exceeded(conn, allocs)
Jul 20 11:44:39 ubuntu
[Yahoo-eng-team] [Bug 1458973] Re: Edit Port name without filling existing device id and device owner causes port detached
*** This bug is a duplicate of bug 1672215 ***
https://bugs.launchpad.net/bugs/1672215

This has already been fixed in https://github.com/openstack/horizon/commit/61fece80e1c153dd7569011cb247b066095753ee

** This bug has been marked a duplicate of bug 1672215
   device_id/device_owner field in Admin Edit Port form are not filled with the current values

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458973

Title:
  Edit Port name without filling existing device id and device owner causes port detached

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, when a user wants to edit the port information, the port update form doesn't display the currently attached device_id and device_owner values. This makes it inconvenient for the user to update the port info, as they need to put back the device_id and device_owner values in the port update form.

  One common use case is that a user wants to update the port name:
  1. Select the port.
  2. Edit port.
  3. Input a new port name.
  4. Click Save.

  After that, you will see the port is detached, as the device_id and device_owner values are empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458973/+subscriptions
[Yahoo-eng-team] [Bug 1687187] Re: metadata-api requires iptables-save/restore
Reviewed: https://review.openstack.org/480765
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b7cb3b7523b70dd94135f07b6307fa48563119f8
Submitter: Jenkins
Branch: master

commit b7cb3b7523b70dd94135f07b6307fa48563119f8
Author: Michael Still
Date: Tue Jul 4 18:19:44 2017 +1000

    Only setup iptables for metadata if using nova-net

    As discussed in the bug report, we setup iptables rules for the metadata service even if we're using neutron (which routes to metadata in a different way). This is because of the split-brain behaviour of the network driver interface versus the network API interface. Instead, only setup iptables if we are _not_ using neutron.

    Change-Id: I43df9200aba1018d2c7cd2f118864326af15fd42
    Closes-Bug: #1687187

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687187

Title:
  metadata-api requires iptables-save/restore

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The metadata-api still loads pieces of nova-network even when using neutron=True. Specifically, it is still loading linuxnet_interface_driver and it is adding ACCEPT rules with iptables to allow the metadata port. While this may make sense with nova-network, it doesn't make sense for an API to be messing with iptables. Since neutron uses metadata-api through its proxy, it cannot be said that the metadata-api is purely a nova-network thing. The MetadataManager class that is loaded makes note of the fact that all the class does is add that ACCEPT rule [0]. Previously, in Newton, I was able to work around this by overriding the MetadataManager class with 'nova.manager.Manager', but that option was removed in Ocata [1]. Now the 'nova.api.manager.MetadataManager' name is hardcoded [2] and requires modifying nova source.
  TL;DR: when using the metadata-api, bits of nova-network are still loaded when they shouldn't be.

  [0] https://github.com/openstack/nova/blob/4f91ed3a547965ed96a22520edcfb783e7936e95/nova/api/manager.py#L24
  [1] https://github.com/openstack/nova/blob/stable/newton/nova/conf/service.py#L51
  [2] https://github.com/openstack/nova/blob/065cd6a8d69c1ec862e5b402a3150131f35b2420/nova/service.py#L60

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687187/+subscriptions
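The committed fix gates the iptables setup on whether nova-network is the backend. That gating logic can be sketched like this; the function name and the conf dict are illustrative, not Nova's actual interfaces:

```python
def maybe_setup_metadata_firewall(conf, setup_iptables):
    """Add the metadata ACCEPT iptables rule only for nova-network.

    When neutron is in use, it routes requests to the metadata service
    through its own proxy, so the API must not touch iptables at all.
    Returns True when the rule was installed, False when skipped."""
    if conf.get("use_neutron", True):
        return False  # neutron handles metadata routing; nothing to do
    setup_iptables()
    return True
```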
[Yahoo-eng-team] [Bug 1692026] Re: failed to attach cinder volume with vstorage backend when log path doesn't exist
Reviewed: https://review.openstack.org/458557
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=7f5921fedf94f15c7290fa8bfde561b40e14e542
Submitter: Jenkins
Branch: master

commit 7f5921fedf94f15c7290fa8bfde561b40e14e542
Author: Pavel Gluschak
Date: Thu Apr 20 18:39:26 2017 +0300

    VStorage: changed default log path

    When the VStorage rpm is installed, the default log path /var/log/vstorage is created, so this should be the default value for the logging path. Otherwise the user is forced to create the logging path manually, because the volume mount will fail.

    Closes-Bug: #1692026
    Change-Id: If6be49dad553f7ad9a947ea56ce107f8d028d28a
    Signed-off-by: Pavel Gluschak

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1692026

Title:
  failed to attach cinder volume with vstorage backend when log path doesn't exist

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The VStorage share is mounted by fuse using the vstorage-mount utility (pstorage-mount is a symlink). This utility has the "-l" option to specify a log file. Currently the default log path used by nova-compute is "/var/lib/pstorage". However, when the VStorage package is installed it creates "/var/lib/vstorage" instead, so this needs to be the default value in nova.conf.
  A non-existent log path makes vstorage-mount fail like:

  Command: sudo cinder-rootwrap /etc/nova/rootwrap.conf pstorage-mount -c mycluster -l /var/log/pstorage/nova-mycluster.log.gz -u nova -g root -m 0770 /var/lib/nova/mnt/a0836a988e84c4c0245d1ffa7cee4921
  Exit code: 253
  Stdout: u''
  Stderr: u'19-05-17 16:28:52.719 failed to access directory /var/log/pstorage - No such file or directory\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1692026/+subscriptions
[Yahoo-eng-team] [Bug 1705467] [NEW] [Get me a network] auto-allocated networks does not verify tenant ID while creating
Public bug reported:

Environment: Ocata

According to the current docs, in order to create an auto-allocated network in a tenant, the following command should be invoked:

  `neutron auto-allocated-topology-show [project_id]`

Unfortunately, this command does not verify whether the given project exists, so it's easy to create completely useless network components, even breaking the naming convention for the project_id field in database tables.

Example:

  » neutron auto-allocated-topology-show 37-e088febf-64bd-4b5e-b4dd-1f5a80e
  +------------+--------------------------------------+
  | Field      | Value                                |
  +------------+--------------------------------------+
  | id         | 63f5f696-8a64-4bbb-af1e-b55a26f78f38 |
  | project_id | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
  | tenant_id  | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
  +------------+--------------------------------------+
  »

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705467

Title:
  [Get me a network] auto-allocated networks does not verify tenant ID while creating

Status in neutron:
  New

Bug description:
  Environment: Ocata

  According to the current docs, in order to create an auto-allocated network in a tenant, the following command should be invoked:

    `neutron auto-allocated-topology-show [project_id]`

  Unfortunately, this command does not verify whether the given project exists, so it's easy to create completely useless network components, even breaking the naming convention for the project_id field in database tables.
  Example:

    » neutron auto-allocated-topology-show 37-e088febf-64bd-4b5e-b4dd-1f5a80e
    +------------+--------------------------------------+
    | Field      | Value                                |
    +------------+--------------------------------------+
    | id         | 63f5f696-8a64-4bbb-af1e-b55a26f78f38 |
    | project_id | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
    | tenant_id  | 37-e088febf-64bd-4b5e-b4dd-1f5a80e   |
    +------------+--------------------------------------+
    »

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705467/+subscriptions
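A first line of defense for this bug would be a format check on the supplied project ID before any topology is created (a real fix would additionally ask keystone whether the project exists). The malformed ID from the report fails this check; the helper name is illustrative:

```python
import uuid


def is_valid_project_id(project_id):
    """Return True if project_id parses as a UUID (keystone project IDs
    are 32 hex digits). This is only a format check; it does not confirm
    the project actually exists in keystone."""
    try:
        uuid.UUID(project_id)
        return True
    except (ValueError, TypeError, AttributeError):
        return False
```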
[Yahoo-eng-team] [Bug 1705161] Re: Instance is not able to launch
Looks like your database connection setup is not correct, either in nova.conf or with the username and password on the database:

  2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions OperationalError: (pymysql.err.OperationalError) (1045, u"Access denied for user 'nova'@'controller' (using password: YES)")

If this is not the case, please follow up with more information.

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705161

Title:
  Instance is not able to launch

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I followed the procedure from https://docs.openstack.org/ocata/install-guide-ubuntu/InstallGuide.pdf to build an OpenStack environment, but while launching an instance I got the following error:

  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-d6a9eec9-0570-4844-9434-d85d04c52883)

  After getting this error, I made a new database for Nova, but it is using the old database and giving the same error.
This is the nova-api.log file:

2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 338, in wrapped
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 181, in wrapper
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 181, in wrapper
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 214, in detail
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     servers = self._get_servers(req, is_detail=True)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 357, in _get_servers
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 2466, in get_all
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     sort_dirs=sort_dirs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 2606, in _get_instances_by_filters
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     expected_attrs=fields, sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 184, in wrapper
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     result = fn(cls, context, *args, **kwargs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 1220, in get_by_filters
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     use_slave=use_slave, sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 235, in wrapper
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     with reader_mode.using(context):
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     return self.gen.next()
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 944, in _transaction_scope
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     allow_async=self._allow_async) as resource:
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions     return self.gen.next()
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions
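The "Access denied for user 'nova'@'controller'" error above usually means the connection URLs in nova.conf do not match the password and grants created for the nova database user. An illustrative fragment in the shape the Ocata install guide uses, with NOVA_DBPASS and the "controller" hostname as placeholders:

```
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```

If a new database was created, the `GRANT ... TO 'nova'@'controller'` statements also need to be re-run against it with the same password, or the old error persists.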
[Yahoo-eng-team] [Bug 1705450] [NEW] Nova doesn't pass the conf object to oslo_reports
Public bug reported:

oslo_reports accepts a few config options that cannot be used at the moment, since nova does not pass the config object. For example, one may want to use the file trigger feature, which has to be configured and is not possible at the moment. This especially affects Windows, where we cannot use the default signals.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705450

Title:
  Nova doesn't pass the conf object to oslo_reports

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1705450/+subscriptions
[Yahoo-eng-team] [Bug 1703390] Re: Allow setting default region to log into
Reviewed: https://review.openstack.org/478404
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=d8f95c64d64c1675dfbc32122cf1e418f1477a6c
Submitter: Jenkins
Branch: master

commit d8f95c64d64c1675dfbc32122cf1e418f1477a6c
Author: Timur Sufiev
Date: Mon Oct 12 10:07:47 2015 -0700

    Introduce DEFAULT_SERVICE_REGIONS

    It should go together with the related change in django-openstack-auth. If specified, it changes the default service region calculation: the region is taken from this setting (on a per-endpoint basis) instead of a value stored in cookies. This value is still checked for sanity, i.e. it should be present in the Keystone service catalog.

    Change-Id: I7e36f766870793f3e8fc391a06f0ee49deaa7add
    Related-Bug: #1506825
    Closes-Bug: #1703390

** Changed in: horizon
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1703390

Title:
  Allow setting default region to log into

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  This is based on the solution provided with https://bugs.launchpad.net/horizon/+bug/1506825 by Timur Sufiev. The point is to have a setting (in local_settings.py) that allows the user to set the value of the default region to log into. If it is specified (and no other region is set explicitly) and valid, it is used for logging in. If it is empty, the existing mechanism of checking the cookies is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1703390/+subscriptions
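Based on the commit's description (per-endpoint basis, checked against the Keystone catalog), a local_settings.py fragment would presumably look like the following. The endpoint URL, region name, and the exact dict shape are assumptions for illustration, not taken from the patch itself:

```python
# local_settings.py (illustrative values; the endpoint URL must appear
# in the Keystone service catalog, per the sanity check described above)
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

# Map each Keystone endpoint to the region users should land in by
# default when no region has been chosen explicitly (overrides cookies).
DEFAULT_SERVICE_REGIONS = {
    OPENSTACK_KEYSTONE_URL: "RegionOne",
}
```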
[Yahoo-eng-team] [Bug 1705141] Re: select_destinations results in "TypeError: 'NoneType' object is not iterable" if get_allocation_candidates fails to connect to Placement
Reviewed: https://review.openstack.org/484988
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=dcde535f957e2eb4b8bfddd48d6f19eef05ad4f9
Submitter: Jenkins
Branch: master

commit dcde535f957e2eb4b8bfddd48d6f19eef05ad4f9
Author: Matt Riedemann
Date: Tue Jul 18 20:08:57 2017 -0400

    Handle None returned from get_allocation_candidates due to connect failure

    The get_allocation_candidates method is decorated with the safe_connect decorator that handles any failures trying to connect to the Placement service. If keystoneauth raises an exception, safe_connect will log it and return None. The select_destinations() method in the SchedulerManager needs to handle the None case so it doesn't assume a tuple is coming back, which would result in a TypeError.

    Change-Id: Iffd72f51f25a9e874eaacf374d80794675236ac1
    Closes-Bug: #1705141

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705141

Title:
  select_destinations results in "TypeError: 'NoneType' object is not
  iterable" if get_allocation_candidates fails to connect to Placement

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/99/471899/25/check/gate-grenade-dsvm-neutron-ubuntu-xenial/3f9a9e3/logs/new/screen-n-cond.txt.gz?level=TRACE#_2017-07-18_22_04_39_026

  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager Traceback (most recent call last):
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/conductor/manager.py", line 920, in schedule_and_build_instances
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     instance_uuids)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/conductor/manager.py", line 624, in _schedule_instances
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     request_spec, instance_uuids)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/scheduler/utils.py", line 464, in wrapped
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     return func(*args, **kwargs)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 52, in select_destinations
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     instance_uuids)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/scheduler/client/query.py", line 33, in select_destinations
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     instance_uuids)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/opt/stack/new/nova/nova/scheduler/rpcapi.py", line 136, in select_destinations
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     retry=self.retry)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 123, in _send
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     timeout=timeout, retry=retry)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 578, in send
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     retry=retry)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 569, in _send
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     raise result
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager TypeError: 'NoneType' object is not iterable
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager Traceback (most recent call last):
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 153, in _process_incoming
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager     res = self.dispatcher.dispatch(message)
  2017-07-18 22:04:39.026 32126 ERROR nova.conductor.manager
[Yahoo-eng-team] [Bug 1705446] [NEW] filter scheduler raises TypeError: argument of type 'NoneType' is not iterable when placement returns no allocation candidates
Public bug reported:

Building on top of the 'claim resources in placement API during
schedule()' patch series [1], I tried to consume every available
resource. When the placement API returns no allocation candidates
because no resources are left, the scheduler manager blows up with a
stack trace [2]. It seems the problem is introduced in [3]. The effect
is not severe, as the exception fails the scheduling, which is the
expected behavior when no resources are left.

[1] https://review.openstack.org/#/c/483566
[2] http://paste.openstack.org/show/616000/
[3] https://review.openstack.org/#/c/483565/

** Affects: nova
   Importance: High
   Status: Confirmed

** Tags: placement scheduler

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705446

Title:
  filter scheduler raises TypeError: argument of type 'NoneType' is not
  iterable when placement returns no allocation candidates

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Building on top of the 'claim resources in placement API during
  schedule()' patch series [1], I tried to consume every available
  resource. When the placement API returns no allocation candidates
  because no resources are left, the scheduler manager blows up with a
  stack trace [2]. It seems the problem is introduced in [3]. The
  effect is not severe, as the exception fails the scheduling, which is
  the expected behavior when no resources are left.

  [1] https://review.openstack.org/#/c/483566
  [2] http://paste.openstack.org/show/616000/
  [3] https://review.openstack.org/#/c/483565/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1705446/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1614054] Re: Incorrect host cpu is given to emulator threads when cpu_realtime_mask flag is set
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Tags removed: mitaka-backport-potential
** Tags added: sts sts-sru-needed

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614054

Title:
  Incorrect host cpu is given to emulator threads when
  cpu_realtime_mask flag is set

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed

Bug description:
  Description of problem: When using the cpu_realtime and
  cpu_realtime_mask flags to create a new instance, the 'cpuset' of the
  'emulatorpin' option uses the id of the vcpu, which is incorrect. The
  id of the host cpu should be used here. e.g. the cpuset should be '2'
  here, when cpu_realtime_mask=^0.

  How reproducible: Boot a new instance with a cpu_realtime_mask
  flavor.

  Steps to Reproduce:
  1. Create an RT flavor:
     nova flavor-create m1.small.performance 6 2048 20 2
     nova flavor-key m1.small.performance set hw:cpu_realtime=yes
     nova flavor-key m1.small.performance set hw:cpu_realtime_mask=^0
     nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
  2. Boot an instance with this flavor
  3. Check the XML of the new instance

  Actual results:
  Expected results:

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1614054/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
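A sketch of the distinction at the heart of this bug (not nova's actual code): with hw:cpu_realtime_mask=^0, vcpu 0 is excluded from the realtime set and hosts the emulator threads, but the emulatorpin cpuset must be expressed as the *host* cpu that vcpu 0 is pinned to, not as the vcpu id itself. The `cpu_pinning` mapping and the helper name below are made-up examples.

```python
def emulator_cpuset(cpu_pinning, non_realtime_vcpus):
    """Return host cpu ids where emulator threads should be pinned.

    cpu_pinning: dict mapping guest vcpu id -> host cpu id
    non_realtime_vcpus: vcpu ids excluded by hw:cpu_realtime_mask
    """
    # The buggy behavior was to emit the vcpu ids directly; the correct
    # behavior translates each excluded vcpu to its pinned host cpu.
    return sorted(cpu_pinning[vcpu] for vcpu in non_realtime_vcpus)
```

With vcpu 0 pinned to host cpu 2 (the situation in the report), `emulator_cpuset({0: 2, 1: 3}, [0])` yields `[2]`, matching the report's "the cpuset should be '2' here" rather than the buggy vcpu id 0.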
[Yahoo-eng-team] [Bug 1705250] Re: OpenStack Administrator Guides: missing index for murano, cinder & keystone page
Reviewed: https://review.openstack.org/485185
Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=6ea7250c931a3feaad037938ea4f049922961144
Submitter: Jenkins
Branch: master

commit 6ea7250c931a3feaad037938ea4f049922961144
Author: Doug Hellmann
Date: Wed Jul 19 08:23:01 2017 -0400

    be more accurate when checking for guide links

    Use URLs with index.html to explicitly look for content being
    published, not just an apache directory listing.

    Closes-Bug: #1705250
    Change-Id: Iae116ee81210489d9de2a1833d341c3842ac95e5
    Signed-off-by: Doug Hellmann

** Changed in: openstack-manuals
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1705250

Title:
  OpenStack Administrator Guides: missing index for murano, cinder &
  keystone page

Status in Cinder:
  New
Status in OpenStack Identity (keystone):
  New
Status in Murano:
  New
Status in openstack-manuals:
  Fix Released

Bug description:
  These hrefs on https://docs.openstack.org/admin/ are generating
  directory listings instead of proper pages (missing index.html?):

    Block Storage service (cinder) (/cinder/latest/admin/)
    Identity service (keystone) (/keystone/latest/admin/)
    Application Catalog service (murano) (/murano/latest/admin/)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1705250/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
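A sketch of the check the commit describes: instead of requesting a bare directory URL (which Apache may happily answer with a directory listing), probe the explicit index.html to confirm real content is published. The guide paths and base URL come from the bug report; the `index_url` helper is illustrative, not the actual openstack-manuals link-check code.

```python
BASE = "https://docs.openstack.org"


def index_url(guide_path):
    """Build the explicit index.html URL for a published guide.

    A 200 response for this URL means content exists; a bare directory
    URL can return 200 for a mere listing, hiding the missing index.
    """
    return BASE + guide_path.rstrip("/") + "/index.html"
```

For example, the cinder admin guide from the report would be checked at `index_url("/cinder/latest/admin/")`, i.e. `https://docs.openstack.org/cinder/latest/admin/index.html`.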
[Yahoo-eng-team] [Bug 1704212] Re: MAAS marks node 'Deployed' before sshd is up
We discussed this internally and decided that MAAS is doing the right
thing here. Although unlikely, you may want to prevent enabling SSH at
all, and MAAS will still ensure the machine is marked deployed even if
the user prevents ssh from being run (and configures user login via a
curtin script).

** Changed in: maas
   Status: New => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1704212

Title:
  MAAS marks node 'Deployed' before sshd is up

Status in cloud-init:
  New
Status in MAAS:
  Won't Fix

Bug description:
  MAAS is marking one of my nodes as 'deployed' before sshd comes up.
  My code watches for the MAAS node to change to the Deployed state,
  attempts SSH, and then fails because sshd is not listening yet.

  After seeing the node turn to the Deployed state:

    Node changed status - From 'Deploying' to 'Deployed'
    Thu, 13 Jul. 2017 20:17:34

  my code attempts ssh to the node and fails:

    ssh: connect to host 10.245.208.32 port 22: Connection refused
    2017-07-13-20:17:41 ERROR Fatal: Could not establish SSH connection to ubuntu@10.245.208.32

  According to logs from the node, sshd didn't start until 20:17:42:

    Jul 13 20:17:42 swoobat systemd[1]: Starting OpenBSD Secure Shell server...
    Jul 13 20:17:42 swoobat sshd[2230]: Server listening on 0.0.0.0 port 22.
    Jul 13 20:17:42 swoobat sshd[2230]: Server listening on :: port 22.
    Jul 13 20:17:42 swoobat systemd[1]: Started OpenBSD Secure Shell server.

  In cloud-init.log, you can see the request for user data at 20:17:35,
  7 seconds before sshd starts up. That's when MAAS marks the node
  deployed.

  Here is /var/log/cloud-init.log from the node:
  http://paste.ubuntu.com/25084186/
  And /var/log/cloud-init-output.log from the node:
  http://paste.ubuntu.com/25084192/

  This is with maas 2.2.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1704212/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
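Since the Deployed state does not guarantee sshd is listening (per the Won't Fix resolution above), a client can poll port 22 itself before attempting SSH. This retry helper is a generic sketch for such a client, not part of MAAS or cloud-init.

```python
import socket
import time


def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Return True once a TCP connection to host:port succeeds.

    Retries until the deadline; returns False if the port never opens.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the server is accepting; close
            # immediately and let the real SSH client take over.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

In the scenario from the report, polling like this for a few extra seconds after the node turns Deployed would have bridged the 7-second gap before sshd started listening.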