[Yahoo-eng-team] [Bug 1710131] Re: auto generation of language list does not work as expected
Reviewed: https://review.openstack.org/493335
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=00f74fc06c37c58c23ff00ee9f36048d158ceeb1
Submitter: Jenkins
Branch: master

commit 00f74fc06c37c58c23ff00ee9f36048d158ceeb1
Author: Akihiro Motoki
Date: Sat Aug 12 22:30:13 2017 +

    Revert "Generate language list automatically"

    This reverts commit a88092630014c53c9e6fe4ee9265f8443eea96bd.

    The reverted implementation depends on LOCALE_PATHS, but this
    assumption turns out to be incorrect. Django searches message
    catalogs in LOCALE_PATHS and in the locale directory of each of
    INSTALLED_APPS, but the usage of LOCALE_PATHS varies between
    deployers and we cannot assume the default value of LOCALE_PATHS.

    In addition, the logic of auto-generating the language list cannot
    handle locale name aliases ('fallback' in the Django code).
    Django 1.9 and later prefer zh-hant and zh-hans, and zh-cn and
    zh-tw are now defined as fallbacks.

    We can explore a better approach for auto-generating the language
    list, but we have no more reliable way so far. Considering the
    timing of the Pike release, the safest approach looks like
    reverting the original patch back to manual maintenance of the
    language list. Languages with over 50% progress (based on the
    number of translated messages over the total) are listed in
    settings.LANGUAGES. (http://paste.openstack.org/show/618254/)

    Closes-Bug: #1710131
    Change-Id: I5133d6317aba6107fc37bd5f30388c130b1fdaac

** Changed in: horizon
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1710131

Title:
  auto generation of language list does not work as expected

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  During the Pike cycle, we introduced a mechanism to auto-generate the
  language list based on PO file availability [1].
  However, we have received a couple of bug reports which appear to be
  triggered by this change. One is "can't change the language in user
  settings": only "English" is available in the user settings menu [2].
  Note that all messages are displayed in the local language, so it is
  just a problem in the generation logic of the language list. Another
  report is that Simplified Chinese is not available in the language
  list [3]. It looks better to revert the change [3]. It turns out there
  are several cases the patch did not account for when it was
  implemented.

  [1] https://review.openstack.org/#/c/450126/
  [2] http://eavesdrop.openstack.org/irclogs/%23openstack-i18n/%23openstack-i18n.2017-08-03.log.html#t2017-08-03T14:05:05
  [3] http://lists.openstack.org/pipermail/openstack-i18n/2017-August/003017.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1710131/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
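The reverted commit says the manually maintained list keeps languages with over 50% translation progress in settings.LANGUAGES. As an illustration of that selection rule only, here is a minimal sketch; the helper name and the progress numbers are hypothetical, not Horizon's actual tooling or statistics:

```python
# Hypothetical sketch: select locales whose translation progress exceeds a
# threshold, mirroring the "over 50% progress" rule from the commit message.
# The progress figures below are illustrative, not real Horizon data.

def languages_above_threshold(progress, threshold=0.5):
    """progress: mapping of locale code -> (translated, total) message counts.

    Returns the sorted locale codes whose ratio is strictly above threshold.
    """
    selected = []
    for code, (translated, total) in sorted(progress.items()):
        if total and translated / total > threshold:
            selected.append(code)
    return selected

progress = {
    "de": (900, 1000),       # 90% translated -> kept
    "zh-hans": (700, 1000),  # 70% translated -> kept
    "eo": (100, 1000),       # 10% translated -> dropped from LANGUAGES
}
print(languages_above_threshold(progress))  # ['de', 'zh-hans']
```

The real selection was done by hand against translation-site statistics; this only encodes the stated cutoff.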
[Yahoo-eng-team] [Bug 1709779] Re: reboot ovs service lose dhcp port in dhcp namespace
Hello, Boden: I installed Ocata using devstack today, and this issue
cannot be reproduced there either. So I think my configuration may be
incorrect. I will double check to figure out what's wrong. Moving it
to 'invalid'.

** Changed in: neutron
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709779

Title:
  reboot ovs service lose dhcp port in dhcp namespace

Status in neutron:
  Invalid

Bug description:
  a. I installed Ocata on two hosts; all agents work well, as below:

  [root@controller openstack]# openstack network agent list
  +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
  | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
  +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
  | 1296f653-7e28-47dc-b0c7-73e9fabb695f | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
  | 47bd5b59-feb7-47a6-864e-0cf7ed90ab8e | Open vSwitch agent | compute    | None              | True  | UP    | neutron-openvswitch-agent |
  | 9d8f5a9d-2fd4-4c6f-b6d6-1730843738e3 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
  | c420da8e-7028-4589-bd2f-9d25756e08f2 | Open vSwitch agent | controller | None              | True  | UP    | neutron-openvswitch-agent |
  | f79bf249-874b-422a-9d21-949786fbf367 | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
  +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

  [root@controller openstack]# openstack compute service list
  +----+------------------+------------+----------+---------+-------+------------------------+
  | ID | Binary           | Host       | Zone     | Status  | State | Updated At             |
  +----+------------------+------------+----------+---------+-------+------------------------+
  |  1 | nova-consoleauth | controller | internal | enabled | up    | 2017-08-10T05:47:04.00 |
  |  3 | nova-conductor   | controller | internal | enabled | up    | 2017-08-10T05:47:04.00 |
  |  7 | nova-scheduler   | controller | internal | enabled | up    | 2017-08-10T05:47:05.00 |
  | 10 | nova-compute     | controller | nova     | enabled | up    | 2017-08-10T05:47:06.00 |
  | 11 | nova-compute     | compute    | nova     | enabled | up    | 2017-08-10T05:47:09.00 |
  +----+------------------+------------+----------+---------+-------+------------------------+

  b.
  Create a tenant network with vlan mode:

  [root@controller openstack]# ip netns
  qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6
  [root@controller openstack]# ip netns exec qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6 ifconfig
  lo: flags=73  mtu 65536
          inet 127.0.0.1  netmask 255.0.0.0
          inet6 ::1  prefixlen 128  scopeid 0x10
          loop  txqueuelen 1  (Local Loopback)
          RX packets 0  bytes 0 (0.0 B)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 0  bytes 0 (0.0 B)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

  tapbfe934a3-9d: flags=323  mtu 1500
          inet 1.2.3.4  netmask 255.255.255.0  broadcast 1.2.3.255
          inet6 fe80::f816:3eff:feed:ea19  prefixlen 64  scopeid 0x20
          ether fa:16:3e:ed:ea:19  txqueuelen 1000  (Ethernet)
          RX packets 0  bytes 0 (0.0 B)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 5  bytes 438 (438.0 B)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

  3. When the ovs service is restarted, port tapbfe934a3-9d in the dhcp
  namespace is lost:

  [root@controller openstack]# systemctl restart openvswitch
  [root@controller openstack]# ip netns exec qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6 ifconfig -a
  lo: flags=73  mtu 65536
          inet 127.0.0.1  netmask 255.0.0.0
          inet6 ::1  prefixlen 128  scopeid 0x10
          loop  txqueuelen 1  (Local Loopback)
          RX packets 0  bytes 0 (0.0 B)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 0  bytes 0 (0.0 B)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

  4. When the dhcp agent is restarted, that port appears again:

  [root@controller openstack]# systemctl restart neutron-dhcp-agent.service
  [root@controller openstack]# ip netns exec qdhcp-006b70a9-9c44-40e9-b3a1-3334a472dda6 ifconfig -a
  lo: flags=73  mtu 65536
          inet 127.0.0.1  netmask 255.0.0.0
          inet6 ::1  prefixlen 128  scopeid
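The disappearance of the tap device above can be confirmed by diffing interface listings before and after the OVS restart. A minimal sketch, run here against captured sample strings rather than a live namespace (in a real deployment the output would come from `ip netns exec <ns> ip -o link` via subprocess; the second sample interface name is illustrative):

```python
# Sketch: diff the interfaces in a DHCP namespace before/after an OVS
# restart. Inputs imitate one-line-per-link `ip -o link` output; they are
# hard-coded samples here so the example is self-contained.

def interfaces(ip_link_output):
    """Extract interface names from `ip -o link`-style output.

    Each line looks like "<idx>: <name>: <flags> ..."; VLAN/veth names may
    carry an "@ifN" suffix which we strip.
    """
    names = set()
    for line in ip_link_output.strip().splitlines():
        name = line.split(":")[1].strip().split("@")[0]
        names.add(name)
    return names

before = """\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: tapbfe934a3-9d: <BROADCAST,MULTICAST,UP> mtu 1500
"""
after = """\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
"""

missing = interfaces(before) - interfaces(after)
print(missing)  # {'tapbfe934a3-9d'}
```

This only demonstrates the detection step; the actual fix in the report was restarting neutron-dhcp-agent, which replugs the port.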
[Yahoo-eng-team] [Bug 1710509] [NEW] ServerMovingTests.test_evacuate sometimes fails but not always
Public bug reported:

The newly added test_evacuate test in ServerMovingTests is slightly
racy. It seems to fail about 1 in 10 times. A recent failure is at
http://logs.openstack.org/72/489772/2/gate/gate-nova-tox-functional-py35-ubuntu-xenial/07f4a29/console.html#_2017-08-12_12_51_52_867765

Will look into this more closely tomorrow when I've got time, and add
an elastic-recheck entry etc., but wanted to get it written down.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: resource-tracker scheduler

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710509

Title:
  ServerMovingTests.test_evacuate sometimes fails but not always

Status in OpenStack Compute (nova):
  New

Bug description:
  The newly added test_evacuate test in ServerMovingTests is slightly
  racy. It seems to fail about 1 in 10 times. A recent failure is at
  http://logs.openstack.org/72/489772/2/gate/gate-nova-tox-functional-py35-ubuntu-xenial/07f4a29/console.html#_2017-08-12_12_51_52_867765

  Will look into this more closely tomorrow when I've got time, and add
  an elastic-recheck entry etc., but wanted to get it written down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1710509/+subscriptions
[Yahoo-eng-team] [Bug 1689278] Re: Compute Node goes down periodically
[Expired for OpenStack Compute (nova) because there has been no
activity for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1689278

Title:
  Compute Node goes down periodically

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hello,

  Recently a node on my OpenStack cloud goes down periodically without
  anything relevant happening; I have not found an explanation for this
  yet. I have the following environment:

  1. Version 13.1.2
  2. KVM
  3. Ceph
  4. Neutron with VXLAN tunneling

  After reviewing nova logs I have seen these ERROR lines repeatedly:

  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions [req-a88c0032-190f-4d44-a570-78e23714e761 5a2f3552384d4d6a9b45ddda6dd7b1a9 c7d1479503644cd0b793f2c81af5ee60 - - -] Unexpected exception in API method
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/remote_consoles.py", line 56, in get_vnc_console
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     console_type)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 171, in wrapped
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     return func(self, context, target, *args, **kwargs)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 151, in wrapped
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     return function(self, context, instance, *args, **kwargs)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 2933, in get_vnc_console
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     instance=instance, console_type=console_type)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/rpcapi.py", line 574, in get_vnc_console
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     instance=instance, console_type=console_type)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     retry=self.retry)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 91, in _send
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     timeout=timeout, retry=retry)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 512, in send
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     retry=retry)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 501, in _send
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     result = self._waiter.wait(msg_id, timeout)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 379, in wait
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     message = self.waiters.get(msg_id, timeout=timeout)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 277, in get
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions     'to message ID %s' % msg_id)
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions MessagingTimeout: Timed out waiting for a reply to message ID c670a34162914270ad9ca6b0bd40b2fb
  2017-05-05 00:33:51.117 10983 ERROR nova.api.openstack.extensions
  2017-05-05 00:33:51.119 10983 INFO nova.api.openstack.wsgi
[Yahoo-eng-team] [Bug 1697932] Re: Nova doesn't show instances in error state when a marker specified
[Expired for OpenStack Compute (nova) because there has been no
activity for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697932

Title:
  Nova doesn't show instances in error state when a marker specified

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description
  ===========
  When we use pagination in nova, it doesn't show instances in "Error"
  state. But if we do just `nova list`, it shows all instances.

  Steps to reproduce
  ==================
  * Create one instance in Error state
  * Create two instances in Active state

  Let's show all instances:

  $ nova list
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                       |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | 197a0316-a156-47b7-8e2b-3e915f8010bc | inst1 | ERROR  | -          | NOSTATE     |                                |
  | e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c | inst2 | ACTIVE | -          | Running     | public=2001:db8::b, 172.24.4.4 |
  | 4cb491a7-79fd-47d4-a9f7-96a5594f940a | inst3 | ACTIVE | -          | Running     | public=2001:db8::c, 172.24.4.2 |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+

  or

  $ nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  $ nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:asc
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                       |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | 4cb491a7-79fd-47d4-a9f7-96a5594f940a | inst3 | ACTIVE | -          | Running     | public=2001:db8::c, 172.24.4.2 |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+

  Expected result
  ===============
  After executing the steps above,
  `nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc`
  should show "inst1".

  Actual result
  =============
  After executing the steps above,
  `nova list --marker=e8d6b5cd-c8a0-4836-95f0-6af249bd9a4c --sort display_name:desc`
  shows nothing.
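The expected marker semantics in this report can be sketched in a few lines: sort all instances, locate the marker, and return what follows it, regardless of instance status. This is a hypothetical pure-Python model of that behavior (not nova's actual DB-level pagination), using the instance names from the report:

```python
# Sketch of marker-based pagination as the reporter expects it to work:
# everything after the marker in sort order is returned, including
# ERROR-state instances. Data below mirrors the bug report's instances
# (IDs shortened for readability).

def list_after_marker(instances, marker_id, key, reverse=False, limit=100):
    """Return up to `limit` instances strictly after the marker under `key`."""
    ordered = sorted(instances, key=lambda inst: inst[key], reverse=reverse)
    ids = [inst["id"] for inst in ordered]
    pos = ids.index(marker_id)  # marker must exist among the instances
    return ordered[pos + 1 : pos + 1 + limit]

instances = [
    {"id": "197a0316", "name": "inst1", "status": "ERROR"},
    {"id": "e8d6b5cd", "name": "inst2", "status": "ACTIVE"},
    {"id": "4cb491a7", "name": "inst3", "status": "ACTIVE"},
]

# Descending by name with inst2 as marker: inst1 (ERROR) should follow,
# matching the "Expected result" above.
page = list_after_marker(instances, "e8d6b5cd", "name", reverse=True)
print([inst["name"] for inst in page])  # ['inst1']
```

The reported bug is precisely that nova's real query dropped the ERROR instance from this page while the plain unpaginated listing included it.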
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697932/+subscriptions
[Yahoo-eng-team] [Bug 1709242] Re: inconsistent behavior for live-migration
** Changed in: nova
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709242

Title:
  inconsistent behavior for live-migration

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When live-migrating an instance without specifying a destination
  host, disabled nova-compute nodes are not scheduled. But if a
  disabled nova-compute node is given as the destination host, the
  migration executes anyway. What is the rationale for this
  inconsistent behavior? It would be better if disabled nova-compute
  nodes never received migrated instances at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709242/+subscriptions