[Yahoo-eng-team] [Bug 1667817] Re: Unexpected API error on neutronclient
[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

** Changed in: nova
Status: Incomplete => Expired

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667817

Title: Unexpected API error on neutronclient
Status in OpenStack Compute (nova): Expired

Bug description: Received an error which said to open a bug, so doing so. Including the command and the nova-api log. This is a fresh install following the Ocata install guide for Ubuntu while attempting to document how to install nova with the new nova-placement-api and cells v2. Can provide those steps if needed, as well as confs.

root@controller:~# openstack server create --debug --flavor m1.nano --image cirros --nic net-id=58da0f04-06a9-4f9f-89f7-f913e64eab67 --security-group default --key-name mykey testinstance

START with options: [u'server', u'create', u'--debug', u'--flavor', u'm1.nano', u'--image', u'cirros', u'--nic', u'net-id=58da0f04-06a9-4f9f-89f7-f913e64eab67', u'--security-group', u'default', u'--key-name', u'mykey', u'testinstance']

options: Namespace(access_key='', access_secret='***', access_token='***', access_token_endpoint='', access_token_type='', auth_type='', auth_url='http://controller:35357/v3', cacert=None, cert='', client_id='', client_secret='***', cloud='', code='', consumer_key='', consumer_secret='***', debug=True, default_domain='default', default_domain_id='', default_domain_name='', deferred_help=False, discovery_endpoint='', domain_id='', domain_name='', endpoint='', identity_provider='', identity_provider_url='', insecure=None, interface='', key='', log_file=None, old_profile=None, openid_scope='', os_beta_command=False, os_compute_api_version='', os_dns_api_version='2', os_identity_api_version='3', os_image_api_version='2', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None,
os_volume_api_version='', passcode='', password='***', profile=None, project_domain_id='', project_domain_name='Default', project_id='', project_name='demo', protocol='', redirect_uri='', region_name='', timing=False, token='***', trust_id='', url='', user_domain_id='', user_domain_name='Default', user_id='', username='demo', verbose_level=3, verify=None)

Auth plugin password selected

auth_config_hook(): {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'cacert': None, 'auth_url': 'http://controller:35357/v3', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': '2', u'object_store_api_version': u'1', 'username': 'demo', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': {'user_domain_name': 'Default', 'project_name': 'demo', 'project_domain_name': 'Default'}, 'default_domain': 'default', u'container_api_version': u'1', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'timing': False, 'password': '***', u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 'debug': True, u'interface': None, u'disable_vendor_agent': {}}

defaults: {u'auth_type': 'password', u'status': u'active', u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': u'2', u'container_infra_api_version': u'1', u'metering_api_version': u'2', u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 'cacert': None, u'network_api_version': u'2',
u'message': u'', u'image_format': u'qcow2', u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', 'verify': True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', u'container_api_version': u'1', u'dns_api_version': u'2', u'object_store_api_version': u'1', u'interface': None, u'disable_vendor_agent': {}} cloud cfg: {'auth_type': 'password', 'beta_command': False, u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 'cacert': None, 'auth_url': 'http://controller:35357/v3', u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': '2', u'object_store_api_version': u'1', 'username': 'demo', u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth':
[Yahoo-eng-team] [Bug 1686883] [NEW] sfc classifier flows deleted during ovs-agent restart
Public bug reported:

[root@EXTENV-10-254-9-82 ~]# openstack sfc flow classifier list
| ID | Name | Protocol | Source-IP | Destination-IP | Logical-Source-Port | Logical-Destination-Port |
| 56933578-1384-47d8-ac0f-945917590387 | | tcp | None | 192.168.100.10/32 | 1d058e59-2ea8-4f02-8f5e-3b81b8b9306b | None |

[root@EXTENV-10-254-9-82 ~]# openstack port show 1d058e59-2ea8-4f02-8f5e-3b81b8b9306b
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | EXTENV-10-254-8-14 |
| binding_profile | |
| binding_vif_details | ovs_hybrid_plug='True', port_filter='True' |
| binding_vif_type | ovs |
| binding_vnic_type | normal |
| created_at | 2017-04-26T02:19:27Z |
| description | |
| device_id | d9d5f523-a402-4019-b490-b4d68d65c06d |
| device_owner | network:router_interface |
| dns_assignment | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='192.168.100.1', subnet_id='14590ffd-2313-4cbf-aeed-dc522acb69ae' |
| id | 1d058e59-2ea8-4f02-8f5e-3b81b8b9306b |
| ip_address | None |
| mac_address | fa:16:3e:e4:18:86 |
| name | |
| network_id | f15412d7-4aad-498e-ad44-275be87a5681 |
| option_name | None |
| option_value | None |
| port_security_enabled | False |
| project_id | d8cdc97536a545e4a135a9505665f87e |
| qos_policy_id | None |
| revision_number | 86 |
| security_groups | |
| status | ACTIVE |
| subnet_id | None |
| updated_at | 2017-04-28T03:54:11Z |

During the ovs-agent restart, I can still fetch the classifier flows:

[root@EXTENV-10-254-8-14 agent]# ovs-ofctl dump-flows br-int |grep group
cookie=0xa4175c3a6393f0ba, duration=113.411s, table=0, n_packets=0, n_bytes=0, idle_age=113, priority=30,tcp,in_port=7539,nw_dst=192.168.100.10,tp_dst=0x1f40/0xffc0 actions=group:1
cookie=0xa4175c3a6393f0ba, duration=113.398s, table=0, n_packets=0, n_bytes=0,
[Yahoo-eng-team] [Bug 1686877] [NEW] ml2 physical network using Chinese characters fails to save to postgresql
Public bug reported:

Steps to reproduce this bug:

1. Configure ml2_conf.ini, dispatch the configuration, and restart the neutron-server service:
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:120,中文:121:125
Expected result: configuration data correctly saves into postgresql
Actual result: pass

2. Reconfigure ml2_conf.ini, dispatch the configuration, and restart the neutron-server service:
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:120,中文:121:124
Expected result: configuration data correctly saves into postgresql
Actual result: fail

neutron-server error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 143: ordinal not in range(128)

** Affects: neutron
Importance: Undecided
Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686877

Title: ml2 physical network using Chinese characters fails to save to postgresql
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686877/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp
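The failure mode above can be reproduced outside neutron: a config value containing non-ASCII bytes blows up as soon as something implicitly decodes it with the 'ascii' codec. A minimal sketch (not neutron's actual code path):

```python
# -*- coding: utf-8 -*-
# Sketch of the failure: ml2_conf.ini bytes containing a non-ASCII
# physical network name cannot be decoded with the 'ascii' codec.
raw = u"physnet1:100:120,中文:121:125".encode("utf-8")

try:
    raw.decode("ascii")
except UnicodeDecodeError as err:
    # The first offending byte is 0xe4, the leading byte of 中 in UTF-8
    print("decode failed at byte 0x%x" % ord(raw[err.start:err.start + 1]))

# Decoding explicitly as UTF-8 (the encoding the bytes were written in) succeeds
decoded = raw.decode("utf-8")
assert u"中文" in decoded
```

The position reported in the traceback (143) is simply the offset of the first multi-byte character in the full config value.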
[Yahoo-eng-team] [Bug 1686868] [NEW] live-migration failed when multipath command returned code 1
Public bug reported:

### Steps to reproduce
I have two compute nodes, a source and a destination. I executed the "live migration" command:
$ nova live-migration

### Expected result
The instance is migrated to the destination host and its status changes from MIGRATING to ACTIVE.

### Actual result
During the _post_live_migration function, the multipath command returned exit code 1 on the source node; I think it is a temporary error. The status remains MIGRATING. I think retrying the multipath command can avoid this problem.

### Environment
1. [version] My environment is actually Kilo (openstack-nova-compute-2015.1.1-1.el7.noarch), but I believe the implementation is not much changed in the latest version.
2. [hypervisor] Libvirt + KVM (RHEL7.1)
3. [storage] FC Multipath
4. [network] openvswitch vlan

### Logs
nova-compute.log on the source node:

Lock "connect_volume" acquired by "disconnect_volume" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:444
Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -l 123456789012345678901234567890123 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -l 123456789012345678901234567890123" returned: 1 in 0.109s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:225
u'sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -l 123456789012345678901234567890123' failed. Not Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:258
Multipath call failed exit (1)
Lock "connect_volume" released by "disconnect_volume" :: held 0.112s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:456
[instance: abcdefgh-ijkl-mnlo-pqrs-tuvwxyz12345] Error monitoring migration: 'NoneType' object has no attribute '__getitem__'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5708, in _live_migration
    dom, finish_event)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5638, in _live_migration_monitor
    migrate_data)
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
    payload)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
    return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 355, in decorated_function
    kwargs['instance'], e, sys.exc_info())
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5355, in _post_live_migration
    migrate_data)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5983, in post_live_migration
    self._disconnect_volume(connection_info, disk_dev)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1023, in _disconnect_volume
    return driver.disconnect_volume(connection_info, disk_dev)
  File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py", line 1402, in disconnect_volume
    devices = mdev_info['devices']
TypeError: 'NoneType' object has no attribute '__getitem__'
[instance: abcdefgh-ijkl-mnlo-pqrs-tuvwxyz12345] Live migration monitoring is all done _live_migration /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5715

** Affects: nova
Importance: Undecided
Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686868

Title: live-migration failed when multipath command returned code 1
Status in OpenStack Compute (nova): New
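The reporter's suggestion, retrying the multipath call instead of failing on the first non-zero exit, can be sketched as a generic wrapper. The attempt count and delay are illustrative, and this is not nova's actual code:

```python
import time


def retry(func, attempts=3, delay=0.5, exceptions=(Exception,)):
    """Call func(); on failure, retry up to `attempts` times in total."""
    for i in range(attempts):
        try:
            return func()
        except exceptions:
            if i == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(delay)


# Example: a flaky call that fails twice, then succeeds
calls = {"n": 0}

def flaky_multipath():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("multipath exited 1")
    return "ok"

print(retry(flaky_multipath, attempts=5, delay=0))  # prints "ok"
```

A transient `multipath -l` failure during _post_live_migration would then be absorbed instead of aborting volume disconnection.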
[Yahoo-eng-team] [Bug 1686584] Re: a few tempest tests are failing (fip related?)
** Changed in: networking-midonet
Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686584

Title: a few tempest tests are failing (fip related?)
Status in networking-midonet: Fix Released
Status in neutron: Fix Released

Bug description: the following tests are failing for both v2 and ml2:
test_router_interface_fip
test_update_floatingip_bumps_revision

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1686584/+subscriptions
[Yahoo-eng-team] [Bug 1686856] [NEW] sysconfig renderer does not render gateway settings in ifcfg-$iface files
Public bug reported:

cloud-init trunk with the following network config:

network:
  version: 1
  config:
    - type: physical
      name: interface0
      mac_address: "52:54:00:12:34:00"
      subnets:
        - type: static
          address: 10.0.2.15
          netmask: 255.255.255.0
          gateway: 10.0.2.2

renders an ifcfg-interface0 file without a GATEWAY=10.0.2.2 entry:

% cat /etc/sysconfig/network-scripts/ifcfg-interface0
# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=static
DEVICE=interface0
HWADDR=52:54:00:12:34:00
IPADDR=10.0.2.15
NETMASK=255.255.255.0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no

Subsequently, route -n shows that a default gateway is not set:

% route -n
Kernel IP routing table
Destination  Gateway   Genmask        Flags Metric Ref Use Iface
10.0.2.0     0.0.0.0   255.255.255.0  U     0      0   0   interface0
169.254.0.0  0.0.0.0   255.255.0.0    U     1002   0   0   interface0

If you add GATEWAY=10.0.2.2 you see a route like this:

% route -n
Kernel IP routing table
Destination  Gateway   Genmask        Flags Metric Ref Use Iface
0.0.0.0      10.0.2.2  0.0.0.0        UG    0      0   0   interface0
10.0.2.0     0.0.0.0   255.255.255.0  U     0      0   0   interface0
169.254.0.0  0.0.0.0   255.255.0.0    U     1002   0   0   interface0

** Affects: cloud-init
Importance: Undecided
Status: New

** Tags: centos7

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686856

Title: sysconfig renderer does not render gateway settings in ifcfg-$iface files
Status in cloud-init: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1686856/+subscriptions
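The expected behavior, emitting a GATEWAY key whenever the subnet defines a gateway, can be sketched with a toy renderer. This is not cloud-init's actual sysconfig code, just an illustration of the missing branch:

```python
def render_ifcfg(name, mac, subnet):
    """Render a minimal ifcfg-style file for one static subnet."""
    entries = [
        ("BOOTPROTO", "static"),
        ("DEVICE", name),
        ("HWADDR", mac),
        ("IPADDR", subnet["address"]),
        ("NETMASK", subnet["netmask"]),
    ]
    # The reported bug: this key was silently dropped by the renderer.
    if subnet.get("gateway"):
        entries.append(("GATEWAY", subnet["gateway"]))
    entries += [("NM_CONTROLLED", "no"), ("ONBOOT", "yes"),
                ("TYPE", "Ethernet"), ("USERCTL", "no")]
    return "\n".join("%s=%s" % kv for kv in entries)


print(render_ifcfg("interface0", "52:54:00:12:34:00",
                   {"address": "10.0.2.15", "netmask": "255.255.255.0",
                    "gateway": "10.0.2.2"}))
```

With the GATEWAY line present, ifup installs the 0.0.0.0/UG default route shown in the second route -n output.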
[Yahoo-eng-team] [Bug 1663170] Re: call neutron api twice for list subnet when count subnet quotas
Reviewed: https://review.openstack.org/431793
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=af8a01b362acd85aff34dfde0a7ace197007f768
Submitter: Jenkins
Branch: master

commit af8a01b362acd85aff34dfde0a7ace197007f768
Author: MinSun
Date: Fri Feb 10 09:00:35 2017 +0800

usage: Ensure to count resources of a given project

When retrieving resource usage, the current code calls subnet_list twice for shared=False/True, but the 'shared' attribute of subnet is not defined in the Networking API and actually there is no need to use it (even though it works as expected accidentally). What we need here is to specify tenant_id as a query parameter in a single API call to limit the scope to a given project. The same approach can be used for the network_list() and router_list() calls and it is more efficient. By doing so, we now need only one network_list() call.

Change-Id: I40d61ed9cbae4b083e4f3cec9caa269e92daf306
Closes-Bug: #1663170

** Changed in: horizon
Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1663170

Title: call neutron api twice for list subnet when count subnet quotas
Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: When updating a tenant's subnet quota, we have to list all existing subnets to count the minimal quota for subnets. The current code first lists non-shared subnets and then lists shared subnets, and finally puts the two values together, so it has to call the neutron list subnet API twice. In fact, it can be done with one API call, without the shared parameter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1663170/+subscriptions
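The fix described in the commit message amounts to replacing two list calls (shared=False and shared=True) with a single call scoped by tenant_id. A sketch against a stand-in client (the real code goes through horizon's neutron API wrapper):

```python
def count_project_subnets(client, project_id):
    """One API call scoped to the project, instead of two shared=... calls."""
    return len(client.list_subnets(tenant_id=project_id)["subnets"])


class FakeNeutron(object):
    """Stand-in for a neutron client, for illustration only."""
    def __init__(self, subnets):
        self._subnets = subnets

    def list_subnets(self, **filters):
        match = [s for s in self._subnets
                 if all(s.get(k) == v for k, v in filters.items())]
        return {"subnets": match}


client = FakeNeutron([{"id": "s1", "tenant_id": "demo"},
                      {"id": "s2", "tenant_id": "demo"},
                      {"id": "s3", "tenant_id": "other"}])
print(count_project_subnets(client, "demo"))  # prints 2
```

Scoping by tenant_id also means shared subnets owned by other projects no longer need a second query.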
[Yahoo-eng-team] [Bug 1406333] Re: LOG messages localized, shouldn't be
Reviewed: https://review.openstack.org/455635
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=ced987815d6a091fbcedbd4c319395038cb3f976
Submitter: Jenkins
Branch: master

commit ced987815d6a091fbcedbd4c319395038cb3f976
Author: Akihiro Motoki
Date: Tue Apr 11 10:24:01 2017 +

Ensure log messages are not translated

Previously, translated messages were included in log messages, so the logged language was determined by what language users chose. This makes it difficult for operators to understand log messages. This commit tries to use English messages for all log messages. The following policies are applied based on the past discussions in bug 1406333 and related reviews.
- English messages are used for log messages.
- Log messages include exception messages where possible, to help operators identify what happened.
- Use the ID rather than the name in log messages, as the ID is much more unique compared to the name.
- LOG.debug() calls in success code paths are deleted. We don't log success messages in most places, and API calls to back-end services can be logged from the python bindings.

Change-Id: Ie554463908327435d886d0d0f1671fd327c0cd00
Closes-Bug: #1406333

** Changed in: horizon
Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1406333

Title: LOG messages localized, shouldn't be
Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: LOG messages should not be localized. There are a few places in project/firewalls/forms.py where they are. These instances should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1406333/+subscriptions
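The policy in the commit, English-only log messages that embed the exception text and the resource ID, looks roughly like this. The function and names are illustrative, not horizon's exact code:

```python
import logging

LOG = logging.getLogger("horizon.example")


def delete_port(port_id, api_call):
    """Delete a port, logging failures per the bug-1406333 policy."""
    try:
        api_call(port_id)
    except Exception as exc:
        # Plain English, no _() translation marker; include the
        # exception text and the ID (IDs are more unique than names).
        LOG.error("Failed to delete port %(id)s: %(exc)s",
                  {"id": port_id, "exc": exc})
        raise
```

The anti-pattern being removed was `LOG.error(_('Failed to delete port %s') % name)`, which produced logs in whatever language the requesting user had selected.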
[Yahoo-eng-team] [Bug 1651887] Re: Ephemeral storage encryption is broken with interface mismatch
** Changed in: nova
Importance: Undecided => Medium

** Also affects: nova/newton
Importance: Undecided
Status: New

** Also affects: nova/ocata
Importance: Undecided
Status: New

** Changed in: nova/newton
Status: New => Confirmed

** Changed in: nova/ocata
Status: New => Confirmed

** Changed in: nova/ocata
Importance: Undecided => Medium

** Changed in: nova/newton
Importance: Undecided => Medium

** Tags added: newton-backport-potential ocata-backport-potential

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1651887

Title: Ephemeral storage encryption is broken with interface mismatch
Status in OpenStack Compute (nova): In Progress
Status in OpenStack Compute (nova) newton series: Confirmed
Status in OpenStack Compute (nova) ocata series: Confirmed

Bug description:

Description
===
Ephemeral storage encryption is broken because of an interface mismatch. The default key manager (Castellan with Barbican)'s create_key() interface requires at least 4 arguments. See https://github.com/openstack/castellan/blob/0.4.0/castellan/key_manager/barbican_key_manager.py#L200 However, nova is only passing in 3; the 'algorithm' argument appears to be missing. See https://github.com/openstack/nova/blob/stable/newton/nova/compute/api.py#L1401 This results in "TypeError: create_key() takes exactly 4 arguments (3 given)" on server create.

Steps to reproduce
==
1. Install devstack with the Barbican plugin enabled, i.e.:
cat local.conf
[[local|localrc]]
enable_plugin barbican https://git.openstack.org/openstack/barbican stable/newton

2. After devstack is installed, enable ephemeral storage encryption in nova.conf, i.e.:
[libvirt]
images_type = lvm
images_volume_group = vg-comp
[ephemeral_storage_encryption]
key_size = 256
cipher = aes-xts-plain64
enabled = True

3. Restart nova-api.

4. Using the nova user account, try to create a server, i.e.:

gyee@abacus:~$ env | grep OS_
OS_PROJECT_DOMAIN_ID=default
OS_USER_DOMAIN_ID=default
OS_PROJECT_NAME=service
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=secrete
OS_AUTH_URL=http://localhost:5000
OS_USERNAME=nova

gyee@abacus:~$ openstack flavor list
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 42 | m1.nano | 64 | 0 | 0 | 1 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
| 84 | m1.micro | 128 | 0 | 0 | 1 | True |
| c1 | cirros256 | 256 | 0 | 0 | 1 | True |
| d1 | ds512M | 512 | 5 | 0 | 1 | True |
| d2 | ds1G | 1024 | 10 | 0 | 1 | True |
| d3 | ds2G | 2048 | 10 | 0 | 2 | True |
| d4 | ds4G | 4096 | 20 | 0 | 4 | True |

gyee@abacus:~$ openstack image list
| ID | Name | Status |
| da447cd9-619a-41b3-9772-4a9a80fa55f9 | cirros-0.3.4-x86_64-uec | active |
| 718fff25-9d61-4a37-a974-fdef2f1f570a | cirros-0.3.4-x86_64-uec-ramdisk | active |
| 91c06518-a752-48ec-a7fd-3c0ad020d9a4 | cirros-0.3.4-x86_64-uec-kernel | active |

gyee@abacus:~$ openstack server create --image 91c06518-a752-48ec-a7fd-3c0ad020d9a4 --flavor 1 test_eph_enc
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-6d2eb531-b239-429d-8d25-f06b4fe6309c)

5. You'll see a traceback similar to this:
2016-12-21 14:04:40.903 ERROR nova.api.openstack.extensions [req-6d2eb531-b239-429d-8d25-f06b4fe6309c nova service] Unexpected exception in API method
2016-12-21 14:04:40.903 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2016-12-21 14:04:40.903 TRACE
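The mismatch can be illustrated with a stand-in class whose create_key signature is shaped like the Castellan one linked above (context, algorithm, length); nova's broken call supplied only the context and the length:

```python
class FakeBarbicanKeyManager(object):
    """Stand-in with a create_key signature shaped like Castellan 0.4.0's."""
    def create_key(self, context, algorithm, length, expiration=None):
        return {"algorithm": algorithm, "length": length}


km = FakeBarbicanKeyManager()

# The broken call: only context and length, no algorithm -> TypeError
try:
    km.create_key(None, 256)
except TypeError as e:
    print("broken call:", e)

# The fixed call also supplies the cipher's algorithm (e.g. 'aes'
# for a configured cipher of aes-xts-plain64 -- illustrative value)
key = km.create_key(None, "aes", 256)
assert key == {"algorithm": "aes", "length": 256}
```

Python 2 reports this as "takes exactly 4 arguments (3 given)" because it counts `self`; Python 3 instead reports the missing argument by name.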
[Yahoo-eng-team] [Bug 1683542] Re: After configuing Ubuntu Core system still displays subiquity wizard
** Changed in: maas
Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1683542

Title: After configuing Ubuntu Core system still displays subiquity wizard
Status in cloud-init: Won't Fix
Status in MAAS: Fix Released

Bug description: After deploying Ubuntu Core using MAAS, the console-conf wizard still runs on the deployed system. With it, a person with physical/console access can change the networking configuration and add a user which has sudo access. When cloud-init runs and creates a user, console-conf should be disabled, as cloud-init has already created a user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1683542/+subscriptions
[Yahoo-eng-team] [Bug 1686764] [NEW] technical debt: replace a location_stragegy test
Public bug reported:

Here's the situation. There was a discussion during the Newton-Ocata time frame about whether glance supports custom location_strategy modules. We decided that it does not, and at that time used the 'choices' parameter in the location_strategy api config option (defined using oslo.config) to restrict the names the option could take to the two location strategy modules that come with glance.

There's code that checks to make sure the module set by the config option actually exists. We had a test for this that used an override in oslo.config to set the location_strategy option to something like 'invalid_module' and then tested to make sure the missing module was handled properly. In the meantime, oslo.config has removed the ability to do this kind of override, so the test has to be changed.

We don't have the bandwidth to get into this right now, so I'm filing this bug to remind us to go back and fix it when someone has time. Here's the patch that removes the test: https://review.openstack.org/#/c/457050/

** Affects: glance
Importance: Medium
Status: Triaged

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1686764

Title: technical debt: replace a location_stragegy test
Status in Glance: Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1686764/+subscriptions
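The effect of the 'choices' restriction can be sketched without oslo.config: a setter that only accepts the two in-tree strategy names. The names below match glance's in-tree strategies, but treat the whole snippet as an illustrative stand-in for StrOpt(choices=...) validation, not glance's code:

```python
# Illustrative stand-in for oslo.config's StrOpt(choices=...) validation
VALID_STRATEGIES = ("location_order", "store_type")


def set_location_strategy(name):
    """Accept only the in-tree location strategy module names."""
    if name not in VALID_STRATEGIES:
        raise ValueError(
            "invalid location_strategy %r, must be one of: %s"
            % (name, ", ".join(VALID_STRATEGIES)))
    return name


print(set_location_strategy("location_order"))
# set_location_strategy("invalid_module") would raise ValueError,
# which is the condition the removed test used to exercise.
```

With 'choices' enforced at option-parse time, an 'invalid_module' value can no longer reach the existence check, which is why the old override-based test stopped making sense.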
[Yahoo-eng-team] [Bug 1686754] [NEW] sysconfig renderer leaves CIDR notation in IPADDR field
Public bug reported:

% cat simple-v1.yaml
network:
  version: 1
  config:
    - type: physical
      name: interface0
      mac_address: "52:54:00:12:34:00"
      subnets:
        - type: static
          address: 10.0.2.15/24
          gateway: 10.0.2.2

% cat target/etc/sysconfig/network-scripts/ifcfg-interface0
# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=static
DEVICE=interface0
HWADDR=52:54:00:12:34:00
IPADDR=10.0.2.15/24
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
USERCTL=no

When bringing up ifcfg-interface0 under centos7, the network-scripts complain with:

arping: 10.0.2.15/24: Name or service not known
Error: ??? prefix is expected rather than "10.0.2.15/24/".
ERROR: [/etc/sysconfig/network-scripts/ifup-eth] Error adding address 10.0.2.15/24 for interface0.
arping: 10.0.2.15/24: Name or service not known
Error: ??? prefix is expected rather than "10.0.2.15/24/".

Changing ifcfg-interface0 to this fixes the error:

IPADDR=10.0.2.15
NETMASK=255.255.255.0

The CIDR expression should get expanded into a NETMASK value.

** Affects: cloud-init
Importance: Undecided
Status: New

** Tags: centos7

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686754

Title: sysconfig renderer leaves CIDR notation in IPADDR field
Status in cloud-init: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1686754/+subscriptions
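Expanding the CIDR suffix into a NETMASK is a one-liner with the standard ipaddress module; a sketch of what the renderer could do (not cloud-init's actual implementation):

```python
import ipaddress


def split_cidr(addr):
    """'10.0.2.15/24' -> ('10.0.2.15', '255.255.255.0')"""
    iface = ipaddress.ip_interface(addr)
    return str(iface.ip), str(iface.network.netmask)


ip, netmask = split_cidr(u"10.0.2.15/24")
print("IPADDR=%s" % ip)        # IPADDR=10.0.2.15
print("NETMASK=%s" % netmask)  # NETMASK=255.255.255.0
```

The renderer would emit the two separate keys that the centos7 network-scripts expect instead of leaving the /24 suffix in IPADDR.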
[Yahoo-eng-team] [Bug 1686751] [NEW] cloud-init boot errors on centos7
Public bug reported:

[root@localhost ubuntu]# systemctl status cloud-init-local -l
● cloud-init-local.service - LSB: The initial cloud-init job (local fs contingent)
   Loaded: loaded (/etc/rc.d/init.d/cloud-init-local; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2017-04-27 14:39:57 UTC; 35s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 462 ExecStart=/etc/rc.d/init.d/cloud-init-local start (code=exited, status=1/FAILURE)

Apr 27 14:39:57 localhost cloud-init-local[462]: self.selinux.restorecon(path, recursive=self.recursive)
Apr 27 14:39:57 localhost cloud-init-local[462]: File "/usr/lib64/python2.7/site-packages/selinux/__init__.py", line 100, in restorecon
Apr 27 14:39:57 localhost cloud-init-local[462]: restorecon(os.path.join(root, name))
Apr 27 14:39:57 localhost cloud-init-local[462]: File "/usr/lib64/python2.7/site-packages/selinux/__init__.py", line 85, in restorecon
Apr 27 14:39:57 localhost cloud-init-local[462]: status, context = matchpathcon(path, mode)
Apr 27 14:39:57 localhost cloud-init-local[462]: OSError: [Errno 2] No such file or directory
Apr 27 14:39:57 localhost systemd[1]: cloud-init-local.service: control process exited, code=exited status=1
Apr 27 14:39:57 localhost systemd[1]: Failed to start LSB: The initial cloud-init job (local fs contingent).
Apr 27 14:39:57 localhost systemd[1]: Unit cloud-init-local.service entered failed state.
Apr 27 14:39:57 localhost systemd[1]: cloud-init-local.service failed.

cloud-init (net) errors:

Apr 27 14:30:48 localhost cloud-init: Starting cloud-init: Cloud-init v. 0.7.9 running 'init' at Thu, 27 Apr 2017 14:30:48 +. Up 4.77 seconds.
Apr 27 14:30:48 localhost cloud-init: 2017-04-27 14:30:48,387 - util.py[WARNING]: Route info failed: Unexpected error while running command.
Apr 27 14:30:48 localhost cloud-init: Command: ['netstat', '-rn']

** Affects: cloud-init
   Importance: Undecided
   Status: New

** Tags: centos7

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686751

Title:
  cloud-init boot errors on centos7

Status in cloud-init:
  New

Bug description:
  [root@localhost ubuntu]# systemctl status cloud-init-local -l
  ● cloud-init-local.service - LSB: The initial cloud-init job (local fs contingent)
     Loaded: loaded (/etc/rc.d/init.d/cloud-init-local; bad; vendor preset: disabled)
     Active: failed (Result: exit-code) since Thu 2017-04-27 14:39:57 UTC; 35s ago
       Docs: man:systemd-sysv-generator(8)
    Process: 462 ExecStart=/etc/rc.d/init.d/cloud-init-local start (code=exited, status=1/FAILURE)

  Apr 27 14:39:57 localhost cloud-init-local[462]: self.selinux.restorecon(path, recursive=self.recursive)
  Apr 27 14:39:57 localhost cloud-init-local[462]: File "/usr/lib64/python2.7/site-packages/selinux/__init__.py", line 100, in restorecon
  Apr 27 14:39:57 localhost cloud-init-local[462]: restorecon(os.path.join(root, name))
  Apr 27 14:39:57 localhost cloud-init-local[462]: File "/usr/lib64/python2.7/site-packages/selinux/__init__.py", line 85, in restorecon
  Apr 27 14:39:57 localhost cloud-init-local[462]: status, context = matchpathcon(path, mode)
  Apr 27 14:39:57 localhost cloud-init-local[462]: OSError: [Errno 2] No such file or directory
  Apr 27 14:39:57 localhost systemd[1]: cloud-init-local.service: control process exited, code=exited status=1
  Apr 27 14:39:57 localhost systemd[1]: Failed to start LSB: The initial cloud-init job (local fs contingent).
  Apr 27 14:39:57 localhost systemd[1]: Unit cloud-init-local.service entered failed state.
  Apr 27 14:39:57 localhost systemd[1]: cloud-init-local.service failed.

  cloud-init (net) errors:

  Apr 27 14:30:48 localhost cloud-init: Starting cloud-init: Cloud-init v. 0.7.9 running 'init' at Thu, 27 Apr 2017 14:30:48 +. Up 4.77 seconds.
  Apr 27 14:30:48 localhost cloud-init: 2017-04-27 14:30:48,387 - util.py[WARNING]: Route info failed: Unexpected error while running command.
  Apr 27 14:30:48 localhost cloud-init: Command: ['netstat', '-rn']

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/cloud-init/+bug/1686751/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
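The "Route info failed" warning comes from cloud-init shelling out to `netstat -rn`, which minimal CentOS 7 images no longer ship. One way to avoid the warning is to fall back to iproute2 when net-tools is absent — an assumed workaround sketched below, not cloud-init's actual fix:

```python
import shutil
import subprocess

def route_info():
    """Best-effort routing-table dump: try 'netstat -rn' first, then
    fall back to 'ip route' on images that ship only iproute2."""
    for cmd in (["netstat", "-rn"], ["ip", "route"]):
        if shutil.which(cmd[0]):
            try:
                return subprocess.check_output(cmd, text=True)
            except subprocess.CalledProcessError:
                continue
    return ""  # no usable tool found; caller can log and move on
```

With neither tool installed the function returns an empty string instead of raising, which is the behavior the WARNING suggests is missing.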
[Yahoo-eng-team] [Bug 1686729] [NEW] Creating object storage container causes user to be logged out
Public bug reported:

Version = openstack-dashboard 3:11.0.1-0ubuntu1~cloud0
Ceph version = 10.2.7

When using the Ceph RGW Swift interface for OpenStack, and the OpenStack dashboard version above, to create a Swift container, the dashboard makes a number of curl requests to check whether the bucket name already exists, to prevent the user from creating a bucket with the same name as an existing one. In most cases this works as expected; however, if I try to create a bucket whose name starts with the same prefix as an existing bucket whose ACL is set to private, I am unexpectedly logged out of the dashboard.

In my tests I have OpenStack user 'paul' and project 'paul', which owns a private Swift bucket called 'paul'. I then, as a second user 'sean' in project 'sean', try to create a Swift container called 'paul1'; this results in me getting logged out of the dashboard. The log below shows what happens when I try to create this bucket:

``
REQ: curl -i https://rgw.domain.com/swift/v1/p/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 400 Bad Request
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:01 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: InvalidBucketName

REQ: curl -i https://rgw.domain.com/swift/v1/pa/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 400 Bad Request
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:02 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: InvalidBucketName

REQ: curl -i https://rgw.domain.com/swift/v1/pau/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 404 Not Found
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:04 GMT', u'Content-Length': u'12', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: NoSuchBucket

REQ: curl -i https://rgw.domain.com/swift/v1/paul/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 401 Unauthorized
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:04 GMT', u'Content-Length': u'12', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: AccessDenied
Logging out user "sean"
``

As you can see, this works until the 401 is received by Horizon from the RGW when checking bucket 'paul'. I believe this is because the ACL of bucket 'paul' (created by user 'paul') is set to private: I don't have the same issue when the ACL is set to public, nor when the ACL is private and I try to create the bucket 'paul1' as the user 'paul'.

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686729

Title:
  Creating object storage container causes user to be logged out

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Version = openstack-dashboard 3:11.0.1-0ubuntu1~cloud0
  Ceph version = 10.2.7

  When using the Ceph RGW Swift interface for OpenStack, and the
  OpenStack dashboard version above, to create a Swift container, the
  dashboard makes a number of curl requests to check whether the bucket
  name already exists, to prevent the user from creating a bucket with
  the same name as an existing one. In most cases this works as
  expected; however, if I try to create a bucket whose name starts with
  the same prefix as an existing bucket whose ACL is set to private, I
  am unexpectedly logged out of the dashboard.

  In my tests I have OpenStack user 'paul' and project 'paul', which
  owns a private Swift bucket called 'paul'. I then, as a second user
  'sean' in project 'sean', try to create a Swift container called
  'paul1'; this results in me getting logged out of the dashboard. The
  log below shows what happens when I try to create this bucket:

  ``
  REQ: curl -i https://rgw.domain.com/swift/v1/p/ -X GET -H "X-Auth-Token: {hidden}"
  RESP STATUS: 400 Bad Request
  RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:01 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
  RESP BODY: InvalidBucketName

  REQ: curl -i https://rgw.domain.com/swift/v1/pa/ -X GET -H "X-Auth-Token: {hidden}"
  RESP STATUS: 400 Bad Request
  RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:02 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
  RESP BODY: InvalidBucketName

  REQ: curl -i https://rgw.domain.com/swift/v1/pau/ -X GET -H "X-Auth-Token: {hidden}"
  RESP STATUS: 404 Not Found
  RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:04 GMT', u'Content-Length': u'12', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id':
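The trace above suggests the probe treats any 401 as an expired session rather than as "name taken by another tenant". A hypothetical sketch of the distinction the reporter wants — `ClientException` here is a local stand-in modeled on python-swiftclient's exception type, and the status-code handling is an assumption about a fix, not Horizon's actual code:

```python
class ClientException(Exception):
    """Local stand-in for swiftclient.exceptions.ClientException,
    which carries the HTTP status of a failed Swift request."""
    def __init__(self, http_status):
        super().__init__(http_status)
        self.http_status = http_status

def container_name_available(get_container, name):
    """Probe a container name without treating 401 as session loss.

    get_container(name) is any callable that raises ClientException
    on failure (as a swiftclient HEAD/GET would).
    """
    try:
        get_container(name)
    except ClientException as exc:
        if exc.http_status == 404:   # NoSuchBucket: the name is free
            return True
        if exc.http_status == 401:   # AccessDenied: owned by another tenant
            return False             # name taken; do NOT log the user out
        raise                        # other errors propagate unchanged
    return False                     # container exists and is ours
```

The point of the sketch is that 401 from the RGW on someone else's private bucket is a name-availability answer, not an authentication failure for the current session.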
[Yahoo-eng-team] [Bug 1686717] [NEW] Default value of OPENSTACK_ENDPOINT_TYPE is inconsistent
Public bug reported:

The default value of OPENSTACK_ENDPOINT_TYPE has two values:
- publicURL in openstack_dashboard/api/base.py
- internalURL in openstack_dashboard/api/keystone.py

One setting should have one default value; otherwise operators are sometimes confused. local/local_settings.py says:

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'publicURL'.

In addition, recent **devstack** only configures the 'public' interface (endpoint type) and does not configure an 'internal' interface. As a result, the current default configuration prevents a member-role user from opening the 'Projects' panel.

** Affects: horizon
   Importance: High
   Assignee: Akihiro Motoki (amotoki)
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686717

Title:
  Default value of OPENSTACK_ENDPOINT_TYPE is inconsistent

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The default value of OPENSTACK_ENDPOINT_TYPE has two values:
  - publicURL in openstack_dashboard/api/base.py
  - internalURL in openstack_dashboard/api/keystone.py

  One setting should have one default value; otherwise operators are
  sometimes confused. local/local_settings.py says:

  # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
  # in the Keystone service catalog. Use this setting when Horizon is running
  # external to the OpenStack environment. The default is 'publicURL'.

  In addition, recent **devstack** only configures the 'public'
  interface (endpoint type) and does not configure an 'internal'
  interface. As a result, the current default configuration prevents a
  member-role user from opening the 'Projects' panel.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1686717/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
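The inconsistency could be removed by having both modules read one shared default instead of hard-coding two different literals. A minimal sketch — the module layout and function name are hypothetical; only the 'publicURL' default comes from the report:

```python
# One shared default, imported by both api/base.py and api/keystone.py
# instead of each module hard-coding its own literal.
DEFAULT_ENDPOINT_TYPE = "publicURL"  # the documented default

def get_endpoint_type(settings):
    """Look up OPENSTACK_ENDPOINT_TYPE in a Django-style settings
    mapping, falling back to the single shared default."""
    return settings.get("OPENSTACK_ENDPOINT_TYPE", DEFAULT_ENDPOINT_TYPE)
```

With one lookup path, a deployment that sets the value gets it everywhere, and an unset deployment consistently gets 'publicURL'.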
[Yahoo-eng-team] [Bug 1686703] [NEW] Error in finish_migration results in image deletion on source with no copy
Public bug reported:

ML post describing the issue here:
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115989.html

A user was resizing an instance whose Glance image had been deleted. An SSH failure occurred in finish_migration, which runs on the destination, while attempting to copy the image out of the image cache on the source. This left the instance and migration in an error state on the destination, but with no copy of the image on the destination. The cache manager later ran on the source and expired the image from the image cache there, leaving no remaining copies. At this point the user's instance was unrecoverable.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686703

Title:
  Error in finish_migration results in image deletion on source with no copy

Status in OpenStack Compute (nova):
  New

Bug description:
  ML post describing the issue here:
  http://lists.openstack.org/pipermail/openstack-dev/2017-April/115989.html

  A user was resizing an instance whose Glance image had been deleted.
  An SSH failure occurred in finish_migration, which runs on the
  destination, while attempting to copy the image out of the image
  cache on the source. This left the instance and migration in an error
  state on the destination, but with no copy of the image on the
  destination. The cache manager later ran on the source and expired
  the image from the image cache there, leaving no remaining copies. At
  this point the user's instance was unrecoverable.

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/nova/+bug/1686703/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1504725] Re: rabbitmq-server restart twice, log is crazy increasing until service restart
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: nova

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504725

Title:
  rabbitmq-server restart twice, log is crazy increasing until service restart

Status in neutron:
  Expired
Status in oslo.messaging:
  Won't Fix

Bug description:
  After I restart the rabbitmq-server for the second time, the service
  logs (such as nova, neutron, and so on) grow at a runaway rate with
  entries such as "TypeError: 'NoneType' object has no attribute
  '__getitem__'". It seems that the channel is set to None.

  trace log:

  2015-10-10 15:20:59.413 29515 TRACE root Traceback (most recent call last):
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 95, in inner_func
  2015-10-10 15:20:59.413 29515 TRACE root     return infunc(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/oslo_messaging/_executors/impl_eventlet.py", line 96, in _executor_thread
  2015-10-10 15:20:59.413 29515 TRACE root     incoming = self.listener.poll()
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 122, in poll
  2015-10-10 15:20:59.413 29515 TRACE root     self.conn.consume(limit=1, timeout=timeout)
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1202, in consume
  2015-10-10 15:20:59.413 29515 TRACE root     six.next(it)
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1100, in iterconsume
  2015-10-10 15:20:59.413 29515 TRACE root     error_callback=_error_callback)
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 868, in ensure
  2015-10-10 15:20:59.413 29515 TRACE root     ret, channel = autoretry_method()
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 458, in _ensured
  2015-10-10 15:20:59.413 29515 TRACE root     return fun(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 545, in __call__
  2015-10-10 15:20:59.413 29515 TRACE root     self.revive(create_channel())
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 251, in channel
  2015-10-10 15:20:59.413 29515 TRACE root     chan = self.transport.create_channel(self.connection)
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 91, in create_channel
  2015-10-10 15:20:59.413 29515 TRACE root     return connection.channel()
  2015-10-10 15:20:59.413 29515 TRACE root   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 289, in channel
  2015-10-10 15:20:59.413 29515 TRACE root     return self.channels[channel_id]
  2015-10-10 15:20:59.413 29515 TRACE root TypeError: 'NoneType' object has no attribute '__getitem__'
  2015-10-10 15:20:59.413 29515 TRACE root

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/neutron/+bug/1504725/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1682060] Re: empty nova service and hypervisor list
Reviewed: https://review.openstack.org/456920
Committed: https://git.openstack.org/cgit/openstack/kolla-ansible/commit/?id=5dfb81efc8975d68ed9b343ab85f6f937bdbec5d
Submitter: Jenkins
Branch: master

commit 5dfb81efc8975d68ed9b343ab85f6f937bdbec5d
Author: Eduardo Gonzalez
Date: Fri Apr 14 16:02:40 2017 +0100

    Update simple_cell_setup to manual creation

    simple_cell_setup is not recommended for use. It is better to create
    map_cell0 manually, create the base cell for non-cells deployments,
    and run discover_hosts. This patch migrates the current config to
    use the workflow described at [1]. With the current workflow we run
    into the issue that services are not mapped until cells are present,
    breaking deployment while waiting for compute services to appear.

    [1] https://docs.openstack.org/developer/nova/cells.html#fresh-install

    Change-Id: Id061e8039e72de77a04c51657705457193da2d0f
    Closes-Bug: #1682060

** Changed in: kolla-ansible
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1682060

Title:
  empty nova service and hypervisor list

Status in kolla-ansible:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  In current master, `openstack compute service list` and `openstack
  hypervisor list` (same issue with the nova CLI) return an empty list.
  If I check the database, the services are registered in the database.
  [root@controller tools]# docker exec -ti kolla_toolbox mysql -unova -pYVDa3l8vA57Smnbu9Q5qdgSKJckNxP3Q3rYvVxsD -h 192.168.100.10 nova -e "SELECT * from services WHERE topic = 'compute'";

  | created_at          | updated_at          | deleted_at | id | host       | binary       | topic   | report_count | disabled | deleted | disabled_reason | last_seen_up        | forced_down | version |
  | 2017-04-12 09:12:10 | 2017-04-12 09:14:33 | NULL       |  9 | controller | nova-compute | compute | 13           | 0        | 0       | NULL            | 2017-04-12 09:14:33 | 0           | 17      |

  [root@controller tools]# openstack compute service list --long
  [root@controller tools]#
  [root@controller tools]# openstack hypervisor list --long
  [root@controller tools]#

  Logs from kolla deploy gates:
  http://logs.openstack.org/08/456108/1/check/gate-kolla-dsvm-deploy-centos-source-centos-7-nv/9cf1e73/

  Environment:
  - source code
  - OS: centos/ubuntu/oraclelinux
  - Deployment type: kolla-ansible

  Please let me know if more info is needed or if there is a
  workaround. Regards

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/kolla-ansible/+bug/1682060/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
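The manual workflow the commit switches to can be sketched as the following nova-manage sequence for an Ocata-era fresh install. The connection URLs and credentials are placeholders; consult the cells doc linked in the commit for the exact flags used in a given deployment:

```shell
# Map cell0, the cell that holds instances which failed scheduling.
nova-manage cell_v2 map_cell0 \
    --database_connection mysql+pymysql://nova:SECRET@controller/nova_cell0

# Create the base cell for a non-cells deployment.
nova-manage cell_v2 create_cell --name cell1 \
    --database_connection mysql+pymysql://nova:SECRET@controller/nova \
    --transport-url rabbit://openstack:SECRET@controller

# Map compute services that have already registered into the cell;
# until this runs, 'openstack compute service list' stays empty even
# though the services table is populated.
nova-manage cell_v2 discover_hosts
```

Running discover_hosts after the compute services have checked in is what makes them appear in the service and hypervisor listings.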
[Yahoo-eng-team] [Bug 1686671] [NEW] Horizon doesn't have option of "in-use" Cinder volume backup
Public bug reported:

Created a 3PAR volume and launched a KVM/hLinux instance, then attached the volume to the instance. Tried taking a backup of the "in-use" volume, but the option is not available. Attached a screenshot for reference.

Observed in OpenStack version: Mitaka

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686671

Title:
  Horizon doesn't have option of "in-use" Cinder volume backup

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Created a 3PAR volume and launched a KVM/hLinux instance, then
  attached the volume to the instance. Tried taking a backup of the
  "in-use" volume, but the option is not available. Attached a
  screenshot for reference.

  Observed in OpenStack version: Mitaka

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/horizon/+bug/1686671/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1686584] Re: a few tempest tests are failing (fip related?)
Reviewed: https://review.openstack.org/460396
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=98373690b271cfd0fa0b30083544484deb49a34b
Submitter: Jenkins
Branch: master

commit 98373690b271cfd0fa0b30083544484deb49a34b
Author: YAMAMOTO Takashi
Date: Thu Apr 27 14:05:07 2017 +0900

    l3_db: Fix a regression in the recent CommonDbMixin change

    The recently merged change [1] modified the logic in a way that
    broke deployments without dns-integration. It broke
    networking-midonet gate jobs.

    [1] If1252c42c49cd59dba7ec7c02c9b887fdc169f51

    Closes-Bug: #1686584
    Change-Id: I80696fa1227d1d36ae67d8f2b70e4206c4cdcd70

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686584

Title:
  a few tempest tests are failing (fip related?)

Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  The following tests are failing for both v2 and ml2:

  test_router_interface_fip
  test_update_floatingip_bumps_revision

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/networking-midonet/+bug/1686584/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1686627] [NEW] Remove QoS rule check in agent
Public bug reported:

Now that https://bugs.launchpad.net/neutron/+bug/1586056 is implemented and merged, the check made in the QoS agent extension when handling QoS rules is no longer needed. In [1] and [2], a check was made to test whether the set of rules in the QoS policy was supported by the backend. This check is no longer needed because it is already made during the assignment of a QoS policy to a port; the agent will only handle rules supported by the driver.

[1] https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L93
[2] https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L114

** Affects: neutron
   Importance: Undecided
   Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
   Status: New

** Tags: qos

** Tags added: qos

** Changed in: neutron
   Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686627

Title:
  Remove QoS rule check in agent

Status in neutron:
  New

Bug description:
  Now that https://bugs.launchpad.net/neutron/+bug/1586056 is
  implemented and merged, the check made in the QoS agent extension
  when handling QoS rules is no longer needed. In [1] and [2], a check
  was made to test whether the set of rules in the QoS policy was
  supported by the backend. This check is no longer needed because it
  is already made during the assignment of a QoS policy to a port; the
  agent will only handle rules supported by the driver.

  [1] https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L93
  [2] https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L114

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/neutron/+bug/1686627/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
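The division of responsibility described above can be sketched as a server-side check at policy-attach time, which is what makes the agent-side check redundant. Rule-type names and the supported set below are illustrative assumptions, not neutron's actual driver API:

```python
# Illustrative supported set; a real QoS driver reports its own
# supported rule types to the server.
SUPPORTED_RULE_TYPES = {"bandwidth_limit", "dscp_marking"}

def validate_policy_rules(rule_types):
    """Reject a QoS policy at port-assignment time if any of its rule
    types is unsupported, so agents never receive unsupported rules
    and need no check of their own."""
    unsupported = set(rule_types) - SUPPORTED_RULE_TYPES
    if unsupported:
        raise ValueError(
            "QoS rule types not supported by the backend: "
            + ", ".join(sorted(unsupported)))
```

Because the server raises before the policy is ever bound to a port, the agent extension can apply every rule it receives unconditionally.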
[Yahoo-eng-team] [Bug 1686540] Re: test_create_server_invalid_bdm_in_2nd_dict Failed
** Also affects: newton
   Importance: Undecided
   Status: New

** No longer affects: newton

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686540

Title:
  test_create_server_invalid_bdm_in_2nd_dict Failed

Status in OpenStack Compute (nova):
  New

Bug description:
  When I run the test case test_create_server_invalid_bdm_in_2nd_dict
  in tempest, it fails with the result below.

  OpenStack version: Newton

  == Failed 1 tests - output below: ==

  tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_invalid_bdm_in_2nd_dict[id-12146ac1-d7df-4928-ad25-b1f99e5286cd,negative]
  --

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "tempest/test.py", line 163, in wrapper
      raise exc
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  Captured pythonlogging:
  ~~~
  2017-04-26 14:57:42,265 22886 INFO [tempest.lib.common.rest_client] Request (ServersNegativeTestJSON:setUp): 200 GET http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5 0.138s
  2017-04-26 14:57:42,266 22886 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
      Body: None
      Response - Headers: {'status': '200', u'content-length': '1676', 'content-location': 'http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5', u'date': 'Wed, 26 Apr 2017 21:57:42 GMT', u'x-compute-request-id': 'req-b24ea609-e7bb-4806-8679-98c32d75a780', u'content-type': 'application/json', u'connection': 'close'}
      Body: {"server": {"OS-EXT-STS:task_state": null, "addresses": {"rally_verify_3842fe6d_39ORN5Gi": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:30:8d:2a", "version": 4, "addr": "10.2.0.4", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5", "rel": "self"}, {"href": "http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5", "rel": "bookmark"}], "image": {"id": "2897cc0b-1d3c-40b9-8587-447b8d3e0445", "links": [{"href": "http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/images/2897cc0b-1d3c-40b9-8587-447b8d3e0445", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-04-26T21:57:41.00", "flavor": {"id": "c742038a-4d78-4899-aa5f-269f502c8665", "links": [{"href": "http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/flavors/c742038a-4d78-4899-aa5f-269f502c8665", "rel": "bookmark"}]}, "id": "6239b0ff-6900-4af8-8e49-5c4e0199afa5", "security_groups": [{"name": "default"}], "user_id": "5104ec988e964669997b4f8a80914288", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "metadata": {}, "status": "ACTIVE", "updated": "2017-04-26T21:57:41Z", "hostId": "e49a0d4ef3c82f4d32305bbfd64ae2131f549992e45b4c1cb9be0de7", "OS-SRV-USG:terminated_at": null, "key_name": null, "name": "tempest-ServersNegativeTestJSON-server-1443765423", "created": "2017-04-26T21:57:35Z", "tenant_id": "84fddff4dcc44eecbfa6e8dc824e291d", "os-extended-volumes:volumes_attached": [], "config_drive": ""}}
  2017-04-26 14:57:42,891 22886 INFO [tempest.lib.common.rest_client] Request (ServersNegativeTestJSON:test_create_server_invalid_bdm_in_2nd_dict): 200 POST http://172.26.232.170:8776/v1/84fddff4dcc44eecbfa6e8dc824e291d/volumes 0.620s
  2017-04-26 14:57:42,892 22886 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
      Body: {"volume": {"display_name": "tempest-ServersNegativeTestJSON-volume-1449183921", "size": 1}}
      Response - Headers: {'status': '200', u'content-length': '426', 'content-location': 'http://172.26.232.170:8776/v1/84fddff4dcc44eecbfa6e8dc824e291d/volumes', u'x-compute-request-id': 'req-e4dcf42c-7c6c-4339-807a-a12de5627b28', u'connection': 'close', u'date': 'Wed, 26 Apr 2017 21:57:42 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-e4dcf42c-7c6c-4339-807a-a12de5627b28'}
      Body: {"volume": {"status": "creating",