[Yahoo-eng-team] [Bug 1364770] [NEW] Use simple file operation when running sync as a separate program for Nuage plugin's sync
Public bug reported:

During the review of https://blueprints.launchpad.net/neutron/+spec/nuage-neutron-sync, a point was brought up about using simple file operations instead of relying on oslo config when running sync as a separate program. This bug tracks those changes for Nuage's sync functionality.

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: nuage

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364770

Title: Use simple file operation when running sync as a separate program for Nuage plugin's sync
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364770/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1364814] [NEW] Neutron multiple api workers can't send cast messages to agents when using zeromq
Public bug reported:

When I set api_workers > 0 in the Neutron configuration, the Neutron L3 agent does not receive messages from the Neutron server when deleting or adding a router interface. In this situation the L3 agent's report_state can still cast to the Neutron server, and the agent can receive messages sent from the Neutron server via the call method. So the Neutron server clearly can send messages to the L3 agent; why does the cast of routers_updated fail? The same happens with other Neutron agents.

As a test, I added some code to the Neutron server startup and to l3_router_plugin that casts a periodic message directly to the L3 agent. The L3 agent's rpc-zmq-receiver log file shows that it receives these messages from the Neutron server. By the way, everything works when api_workers = 0.

Test environment: neutron (master) + oslo.messaging (master) + zeromq

** Affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1364814

Title: Neutron multiple api workers can't send cast messages to agents when using zeromq
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364814/+subscriptions
[Yahoo-eng-team] [Bug 1364839] [NEW] DVR namespaces not deleted on LBaaS VIP Port removal
Public bug reported:

The removal of an LBaaS VIP port (and other DVR-serviced ports, except compute ports) does not delete the DVR namespace from the service nodes.

** Affects: neutron
   Importance: Undecided
   Assignee: Vivekanandan Narasimhan (vivekanandan-narasimhan)
   Status: In Progress

** Tags: l3-dvr-backlog

** Changed in: neutron
   Assignee: (unassigned) => Vivekanandan Narasimhan (vivekanandan-narasimhan)

** Changed in: neutron
   Status: New => In Progress

https://bugs.launchpad.net/bugs/1364839

Title: DVR namespaces not deleted on LBaaS VIP Port removal
Status in OpenStack Neutron (virtual network service): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364839/+subscriptions
[Yahoo-eng-team] [Bug 1364849] [NEW] VMware driver doesn't return typed console
Public bug reported:

Change I8f6a857b88659ee30b4aa1a25ac52d7e01156a68 added typed consoles and updated drivers to use them. However, when it touched the VMware driver it modified get_vnc_console in VMwareVMOps, but not in VMwareVCVMOps, which is the class that is actually used. Incidentally, VMwareVMOps has since been removed, so this type of confusion should not happen again.

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1364849

Title: VMware driver doesn't return typed console
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364849/+subscriptions
[Yahoo-eng-team] [Bug 1364851] [NEW] CONF.allow_migrate_to_same_host does not work
Public bug reported:

The "resize" function in nova/compute/api.py contains the following code:

    if not CONF.allow_resize_to_same_host:
        filter_properties['ignore_hosts'].append(instance['host'])
    # Here when flavor_id is None, the process is considered as migrate.
    if (not flavor_id and not CONF.allow_migrate_to_same_host):
        filter_properties['ignore_hosts'].append(instance['host'])

When running a "migrate" operation while CONF.allow_resize_to_same_host is set to false, the first check already appends the instance's host to ignore_hosts, so CONF.allow_migrate_to_same_host has no effect.

** Affects: nova
   Importance: Undecided
   Assignee: zhangtralon (zhangchunlong1)
   Status: New

** Changed in: nova
   Assignee: (unassigned) => zhangtralon (zhangchunlong1)

https://bugs.launchpad.net/bugs/1364851

Title: CONF.allow_migrate_to_same_host does not work
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364851/+subscriptions
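The interaction can be reproduced with a small standalone sketch. The helper name `build_ignore_hosts` is hypothetical, not Nova code; it simply mirrors the two checks quoted from nova/compute/api.py:

```python
def build_ignore_hosts(host, flavor_id, allow_resize_to_same_host,
                       allow_migrate_to_same_host):
    """Mirror the two checks quoted from nova/compute/api.py (sketch)."""
    ignore_hosts = []
    if not allow_resize_to_same_host:
        ignore_hosts.append(host)
    # When flavor_id is None the operation is considered a migrate.
    if not flavor_id and not allow_migrate_to_same_host:
        ignore_hosts.append(host)
    return ignore_hosts

# Migrate (flavor_id=None) with allow_migrate_to_same_host=True but
# allow_resize_to_same_host=False: the host is still ignored, because
# the resize check already appended it.
print(build_ignore_hosts('node1', None, False, True))  # ['node1']
```

This shows the reporter's point: on a migrate, the allow_migrate_to_same_host flag can never re-allow the current host unless allow_resize_to_same_host is also true.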
[Yahoo-eng-team] [Bug 1352813] [NEW] [Sahara-dashboard] Button 'delete tag' does not work
You have been subscribed to a public bug:

Precondition steps: install horizon and sahara-dashboard (master branch), open the OpenStack dashboard, and register any image with any tags.

Steps to reproduce:
1. Go to the Sahara image registry panel.
2. Click the 'edit tags' button for the registered image.

The buttons to delete tags are not visible.

webdriver logs: http://paste.openstack.org/show/90404/

** Affects: horizon
   Importance: Medium
   Assignee: Nikita Konovalov (nkonovalov)
   Status: In Progress

[Sahara-dashboard] Button 'delete tag' does not work
https://bugs.launchpad.net/bugs/1352813
[Yahoo-eng-team] [Bug 1349807] [NEW] [UI] Failed to copy cluster template
You have been subscribed to a public bug:

Failed to copy a cluster template.

Reproduction steps:
1. Create one or a few node group templates.
2. Create a cluster template via an API request. The cluster template must use the IDs of the created node group templates, plus node groups described directly in the cluster template. For example, the request body for the cluster template may be:

    {
        "name": "some-cluster-template",
        "description": "Some Cluster template",
        "plugin_name": "vanilla",
        "hadoop_version": "1.2.1",
        "cluster_configs": {},
        "node_groups": [
            {
                "name": "master",
                "node_group_template_id": "35c17bf0-d74b-43b4-99b7-a395a6e4b407",
                "count": 1
            },
            {
                "name": "worker",
                "flavor_id": 2,
                "node_processes": ["tasktracker", "datanode"],
                "node_configs": {},
                "count": 2
            }
        ]
    }

3. Go to the Sahara dashboard and try to copy the cluster template.

Expected result: the template is copied successfully.
Observed result: the template is not copied. Horizon reports the following error:

    Error: {u'count': 2, u'name': u'worker', u'node_group_template_id': u'None'} is not valid under any of the given schemas

** Affects: horizon
   Importance: Medium
   Assignee: Nikita Konovalov (nkonovalov)
   Status: In Progress

[UI] Failed to copy cluster template
https://bugs.launchpad.net/bugs/1349807
[Yahoo-eng-team] [Bug 1352812] [NEW] [sahara] NodeGroupCreate first frame looks incorrect
You have been subscribed to a public bug:

After clicking the CreateNodeGroupTemplate button, sahara-dashboard shows several drop-down elements for the Hadoop Version field. A screenshot of the issue is attached.

** Affects: horizon
   Importance: High
   Assignee: Chad Roberts (croberts)
   Status: In Progress

** Tags: dashboard

[sahara] NodeGroupCreate first frame looks incorrect
https://bugs.launchpad.net/bugs/1352812
[Yahoo-eng-team] [Bug 1364876] [NEW] Specifying both rpc_workers and api_workers makes stopping neutron-server fail
Public bug reported:

Hi,

When both rpc_workers and api_workers are set to something bigger than 1, stopping the service (e.g. with upstart) does not kill all neutron-server processes, which makes the next start of neutron-server fail.

Details:
========

neutron-server creates two openstack.common.service.ProcessLauncher instances, one for each service (rpc and api). ProcessLauncher was not meant to be instantiated more than once in a single process, for the following reasons:

1. Each ProcessLauncher instance registers a callback to catch signals such as SIGTERM, SIGINT and SIGHUP. With two instances, signal.signal is called twice with different callbacks, and only the last registration takes effect.

2. Each ProcessLauncher thinks it owns all child processes of the current process; see for example the _wait_child method, which reaps every killed child process.

3. With only one ProcessLauncher instance handling process termination while the other does not, there is a race condition between the two:

3.1. Stopping neutron-server kills the child processes too, but because there are two ProcessLaunchers, the one that did not catch the kill signal keeps respawning new children when it detects that they died; the other does not, because its self.running was set to False.

3.2. When child processes die (i.e. on stop neutron-server), one of the ProcessLaunchers catches that with os.waitpid(0, os.WNOHANG) (both do this). If the death of a child process is caught by the wrong ProcessLauncher, i.e. not the one that has it in its children instance variable, the parent process hangs forever in this loop, because self.children still contains that child:

    if self.children:
        LOG.info(_LI('Waiting on %d children to exit'), len(self.children))
        while self.children:
            self._wait_child()

Hopefully I made this clear.

Cheers,

** Affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1364876

Title: Specifying both rpc_workers and api_workers makes stopping neutron-server fail
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364876/+subscriptions
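Point 1 above, that a second signal.signal registration silently replaces the first, can be demonstrated with a small standalone Python sketch (unrelated to the ProcessLauncher code itself):

```python
import os
import signal

calls = []

def handler_a(signum, frame):
    calls.append('a')

def handler_b(signum, frame):
    calls.append('b')

# Two "launchers" each register their own handler for the same signal.
# The second registration silently replaces the first, just as the
# second ProcessLauncher's SIGTERM/SIGINT handler replaces the first's.
signal.signal(signal.SIGUSR1, handler_a)
signal.signal(signal.SIGUSR1, handler_b)

os.kill(os.getpid(), signal.SIGUSR1)

print(calls)  # ['b'] -- only the last-registered handler ran
```

So when upstart sends SIGTERM, only one of the two ProcessLaunchers observes the signal and sets self.running = False; the other carries on respawning children, which is exactly the race described in 3.1.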
[Yahoo-eng-team] [Bug 1354087] Re: [UI] 'dropdown' config types displays as checkboxes
** No longer affects: sahara

https://bugs.launchpad.net/bugs/1354087

Title: [UI] 'dropdown' config types displays as checkboxes
Status in OpenStack Dashboard (Horizon): Fix Committed

Bug description: provisioning configs returned from a plugin whose config_type attribute equals 'dropdown' display on the dashboard as checkboxes, not as a dropdown list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1354087/+subscriptions
[Yahoo-eng-team] [Bug 1352813] Re: [Sahara-dashboard] Button 'delete tag' does not work
** Project changed: sahara => horizon

** Changed in: horizon
   Milestone: juno-3 => None

** Summary changed:
- [Sahara-dashboard] Button 'delete tag' does not work
+ [Sahara] Button 'delete tag' does not work

https://bugs.launchpad.net/bugs/1352813

Title: [Sahara] Button 'delete tag' does not work
Status in OpenStack Dashboard (Horizon): Fix Committed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352813/+subscriptions
[Yahoo-eng-team] [Bug 1350110] Re: [UI][Sahara] In case of job execution validation failure UI doesn't display error right
** No longer affects: sahara

https://bugs.launchpad.net/bugs/1350110

Title: [UI][Sahara] In case of job execution validation failure UI doesn't display error right
Status in OpenStack Dashboard (Horizon): Fix Committed

Bug description:

Steps to reproduce:
1. Create a cluster without Oozie.
2. Run a job on it using the UI.

Expected result: the UI displays that Oozie is missing.
Observed result: a failure error with a 'None' description. See screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1350110/+subscriptions
[Yahoo-eng-team] [Bug 1349807] Re: [UI] Failed to copy cluster template
** Project changed: sahara => horizon

** Changed in: horizon
   Milestone: juno-3 => None

** Summary changed:
- [UI] Failed to copy cluster template
+ [sahara] Failed to copy cluster template

https://bugs.launchpad.net/bugs/1349807

Title: [sahara] Failed to copy cluster template
Status in OpenStack Dashboard (Horizon): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349807/+subscriptions
[Yahoo-eng-team] [Bug 1352812] Re: [sahara] NodeGroupCreate first frame looks incorrect
It was fixed by https://review.openstack.org/#/c/110680/

** Summary changed:
- [UI] NodeGroupCreate first frame looks incorrect
+ [sahara] NodeGroupCreate first frame looks incorrect

** Project changed: sahara => horizon

** Changed in: horizon
   Milestone: juno-3 => None

** Changed in: horizon
   Status: In Progress => Fix Committed

https://bugs.launchpad.net/bugs/1352812

Title: [sahara] NodeGroupCreate first frame looks incorrect
Status in OpenStack Dashboard (Horizon): Fix Committed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352812/+subscriptions
[Yahoo-eng-team] [Bug 1364893] [NEW] New version of requests library breaks unit tests
Public bug reported:

The newest version of the requests library, 2.4.0, updated the bundled urllib3 library to version 1.9. Unfortunately, this version of urllib3 introduces a new exception, ProtocolError, which breaks the unit tests. This causes Jenkins to fail on every change set.

https://pypi.python.org/pypi/requests (Updated bundled urllib3 version.)
https://pypi.python.org/pypi/urllib3 (urllib3.exceptions.ConnectionError renamed to urllib3.exceptions.ProtocolError. (Issue #326))

My solution is to change the requirements so that python-glanceclient does not use the newest version of requests.

** Affects: glance
   Importance: Undecided
   Assignee: Pawel Koniszewski (pawel-koniszewski)
   Status: In Progress

** Changed in: glance
   Assignee: (unassigned) => Pawel Koniszewski (pawel-koniszewski)

** Changed in: glance
   Status: New => In Progress

https://bugs.launchpad.net/bugs/1364893

Title: New version of requests library breaks unit tests
Status in OpenStack Image Registry and Delivery Service (Glance): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1364893/+subscriptions
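A cap of the kind the reporter proposes would be a one-line change in the project's requirements file. The exact bounds below are illustrative, not the actual patch; an exclusion pin sidesteps the broken release while leaving room for a fixed one:

```text
requests>=1.1,!=2.4.0
```

An alternative is an upper bound such as `requests>=1.1,<2.4.0`, which is stricter but also blocks future fixed releases until it is bumped.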
[Yahoo-eng-team] [Bug 1338470] Re: LBaaS Round Robin does not work as expected
Unfortunately, and despite repeated efforts, the issue won't reproduce. I will reopen the bug if it reproduces. Thanks for looking into it.

** Changed in: neutron
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1338470

Title: LBaaS Round Robin does not work as expected
Status in OpenStack Neutron (virtual network service): Invalid

Bug description:

Description of problem:
=======================
I configured a load-balancing pool with 2 members using the round-robin mechanism. My expectation was that each request would be directed to the next available pool member, meaning the expected result was:

    Req #1 - Member #1
    Req #2 - Member #2
    Req #3 - Member #1
    Req #4 - Member #2
    etc.

I configured the instances' guest image to reply to each request with the private IP address of the instance, so I can easily see who handled the request. This is the result I witnessed:

    # for i in {1..10} ; do curl -s 192.168.170.9 ; echo ; done
    192.168.208.4
    192.168.208.4
    192.168.208.2
    192.168.208.2
    192.168.208.4
    192.168.208.4
    192.168.208.2
    192.168.208.4
    192.168.208.2
    192.168.208.4

Details about the pool: http://pastebin.com/index/MwRX7HCR

Version-Release number of selected component (if applicable):
=============================================================
Icehouse:
python-neutronclient-2.3.4-2
python-neutron-2014.1-35
openstack-neutron-2014.1-35
openstack-neutron-openvswitch-2014.1-35
haproxy-1.5-0.3.dev22.el7

How reproducible:
=================
100%

Steps to Reproduce:
===================
1. As detailed above, configure a LB pool with round robin and two members.

Additional info: tested with RHEL7. haproxy.cfg: http://pastebin.com/vuNe1p7H

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338470/+subscriptions
[Yahoo-eng-team] [Bug 1024586] Re: avoid the use of kpartx in file injection
For other reasons we stopped using this path for file injection by default. This bug is sufficiently old that I assume it is no longer in progress.

** Changed in: nova
   Status: In Progress => Invalid

https://bugs.launchpad.net/bugs/1024586

Title: avoid the use of kpartx in file injection
Status in OpenStack Compute (Nova): Invalid

Bug description:

kpartx has various problems...

1. The git repo on kernel.org is no longer available.

2. kpartx -l had side effects:

    $ kpartx -l /bin/ls
    $ ls
    text file busy

To fix this you need to run losetup -a to find the assigned loopback device and then losetup -d /dev/loop...

3. On an unconnected loop device we get warnings, but an EXIT_SUCCESS?

    # kpartx -a /dev/loop1 && echo EXIT_SUCCESS
    read error, sector 0
    llseek error
    llseek error
    llseek error
    EXIT_SUCCESS

4. Also, for a loop device that is connected, I get a failure warning, but here the EXIT_SUCCESS is appropriate, as the mapped device is present and usable:

    # kpartx -a /dev/loop0
    /dev/mapper/loop0p1: mknod for loop0p1 failed: File exists

5. There are issues with qcow2-encoded cirros images:

    # qemu-img info cirros-0.3.0-x86_64-disk.img
    image: cirros-0.3.0-x86_64-disk.img
    file format: qcow2
    virtual size: 39M (41126400 bytes)
    disk size: 9.3M
    cluster_size: 65536
    # qemu-nbd -c /dev/nbd15 $PWD/cirros-0.3.0-x86_64-disk.img
    # ls -la /sys/block/nbd15/pid
    -r--r--r--. 1 root root 4096 Jun 8 10:19 /sys/block/nbd15/pid
    # kpartx -a /dev/nbd15
    device-mapper: resume ioctl on nbd15p1 failed: Invalid argument
    create/reload failed on nbd15p1

6. There was a report that `kpartx -[ad]` were not synchronous with the creation/deletion of /dev/mapper/nbdXXpX, requiring sleep calls to avoid failures.

The best way to avoid the need for kpartx is to use the newer kernel auto partition mapping feature, available since kernel 3.2, and only fall back to kpartx if the device does not exist:

    '%sp%s' % (self.device, self.partition)

Note the nbd module must be loaded with the param max_part=16 etc., so that would need documentation. Also we would need to test that the same applies to raw loopback images as well as nbd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1024586/+subscriptions
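The proposed fallback can be sketched as a small standalone helper. The names here are hypothetical, not the actual Nova patch: prefer the kernel's auto-mapped partition path (the `'%sp%s'` expression quoted above), and fall back to the kpartx-style /dev/mapper name when it does not exist.

```python
import os

def partition_path(device, partition, exists=os.path.exists):
    """Prefer the kernel auto partition mapping (kernel >= 3.2),
    e.g. /dev/nbd15p1, and fall back to the kpartx-created
    /dev/mapper name otherwise.

    `exists` is injectable so the logic can be exercised without
    real block devices.
    """
    native = '%sp%s' % (device, partition)  # e.g. /dev/nbd15p1
    if exists(native):
        return native
    # kpartx names mappings after the device basename.
    return '/dev/mapper/%sp%s' % (os.path.basename(device), partition)

# With a fake `exists` that pretends the kernel mapping is present:
print(partition_path('/dev/nbd15', 1,
                     exists=lambda p: p == '/dev/nbd15p1'))
# /dev/nbd15p1
```

In the fallback branch, real code would also have to run `kpartx -a` on the device first and cope with the synchronization issues listed in point 6.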
[Yahoo-eng-team] [Bug 1006725] Re: Incorrect error returned during Create Image and multi byte characters used for Image name
We are running this test in the gate and not seeing it. Can you provide links to complete logs somewhere so we can figure out what's going on here?

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
   Status: Invalid => Incomplete

https://bugs.launchpad.net/bugs/1006725

Title: Incorrect error returned during Create Image when multi-byte characters are used for the image name
Status in OpenStack Compute (Nova): Incomplete
Status in Tempest: Fix Released

Bug description:

Our tempest test that checks for a 400 Bad Request return code fails with a ComputeFault instead.

Pass a multi-byte character image name during Create Image.
Actual response code: ComputeFault, 500
Expected response code: 400 Bad Request

    Return an error if the server name has a multi-byte character ... FAIL
    ======================================================================
    FAIL: Return an error if the server name has a multi-byte character
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/opt/stack/tempest/tests/test_images.py", line 251, in test_create_image_specify_multibyte_character_server_name
        self.fail("Should return 400 Bad Request if multi byte characters
    AssertionError: Should return 400 Bad Request if multi byte characters are used for image name

Captured logging:

    tempest.config: INFO: Using tempest config file /opt/stack/tempest/etc/tempest.conf
    tempest.common.rest_client: ERROR: Request URL: http://10.2.3.164:8774/v2/1aeac1cfbfdd43c2845b2cb3a4f15790/images/24ceff93-1af3-41ab-802f-9fc4d8b90b69
    tempest.common.rest_client: ERROR: Request Body: None
    tempest.common.rest_client: ERROR: Response Headers: {'date': 'Thu, 31 May 2012 06:02:33 GMT', 'status': '404', 'content-length': '62', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-7a15d284-e934-47a1-87f4-7746e949c7a2'}
    tempest.common.rest_client: ERROR: Response Body: {"itemNotFound": {"message": "Image not found.", "code": 404}}
    tempest.common.rest_client: ERROR: Request URL: http://10.2.3.164:8774/v2/1aeac1cfbfdd43c2845b2cb3a4f15790/servers/ecb51dfb-493d-4ef8-9178-1adc3d96a04d/action
    tempest.common.rest_client: ERROR: Request Body: {"createImage": {"name": "\ufeff43802479847"}}
    tempest.common.rest_client: ERROR: Response Headers: {'date': 'Thu, 31 May 2012 06:02:44 GMT', 'status': '500', 'content-length': '128', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-1a9505f5-4dfc-44e7-b04a-f8daec0f956e'}
    tempest.common.rest_client: ERROR: Response Body: {u'computeFault': {u'message': u'The server has either erred or is incapable of performing the requested operation.', u'code': 500}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1006725/+subscriptions
[Yahoo-eng-team] [Bug 1364912] [NEW] why i get one fixed ip when use a net which has two subnet to create a port
Public bug reported: A network includes two subnets. When I use this network to create a port, the port always gets a fixed IP from only one of the subnets. Code:

def _try_generate_ip(context, subnets):
    """Generate an IP address.

    The IP address will be generated from one of the subnets defined on
    the network.
    """
    range_qry = context.session.query(
        models_v2.IPAvailabilityRange).join(
            models_v2.IPAllocationPool).with_lockmode('update')
    for subnet in subnets:
        range = range_qry.filter_by(subnet_id=subnet['id']).first()
        if not range:
            LOG.debug(_("All IPs from subnet %(subnet_id)s (%(cidr)s) "
                        "allocated"),
                      {'subnet_id': subnet['id'], 'cidr': subnet['cidr']})
            continue
        ip_address = range['first_ip']
        LOG.debug(_("Allocated IP - %(ip_address)s from %(first_ip)s "
                    "to %(last_ip)s"),
                  {'ip_address': ip_address,
                   'first_ip': range['first_ip'],
                   'last_ip': range['last_ip']})
        if range['first_ip'] == range['last_ip']:
            # No more free indices on subnet => delete
            LOG.debug(_("No more free IP's in slice. "
                        "Deleting allocation pool."))
            context.session.delete(range)
        else:
            # increment the first free
            range['first_ip'] = str(netaddr.IPAddress(ip_address) + 1)
        return {'ip_address': ip_address,
                'subnet_id': subnet['id']}
    raise n_exc.IpAddressGenerationFailure(net_id=subnets[0]['network_id'])

With multiple subnets, only one IP is returned. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1364912 Title: why i get one fixed ip when use a net which has two subnet to create a port Status in OpenStack Neutron (virtual network service): New Bug description: A network includes two subnets; when this network is used to create a port, the port always gets a fixed IP from only one of the subnets (the `_try_generate_ip` code above). To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1364912/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
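The behavior the reporter observes follows directly from the loop above: `_try_generate_ip` returns as soon as the first subnet with free addresses yields an IP, so the remaining subnets are never tried. The first-fit behavior can be sketched standalone (plain dicts and made-up subnet data instead of SQLAlchemy models, not Neutron's actual code):

```python
import ipaddress

def try_generate_ip(subnets, ranges):
    """First-fit allocation: return one IP from the first subnet that
    still has a free range -- remaining subnets are never consulted."""
    for subnet in subnets:
        range_ = ranges.get(subnet['id'])
        if range_ is None:
            continue  # subnet exhausted, try the next one
        ip = range_['first_ip']
        if range_['first_ip'] == range_['last_ip']:
            del ranges[subnet['id']]  # pool exhausted, drop it
        else:
            # advance the first free address
            range_['first_ip'] = str(ipaddress.ip_address(ip) + 1)
        return {'ip_address': ip, 'subnet_id': subnet['id']}
    raise RuntimeError('no free IP addresses on network')

subnets = [{'id': 'sub-a'}, {'id': 'sub-b'}]
ranges = {'sub-a': {'first_ip': '10.0.0.2', 'last_ip': '10.0.0.254'},
          'sub-b': {'first_ip': '10.0.1.2', 'last_ip': '10.0.1.254'}}

alloc = try_generate_ip(subnets, ranges)
# Always allocates from sub-a first; sub-b is left untouched.
```

To receive one fixed IP per subnet, the port must request an address on each subnet explicitly; a single call to this allocator yields exactly one IP by design.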
[Yahoo-eng-team] [Bug 1364712] Re: Child processes not monitored by dhcp-agent
This should be covered by blueprint agent-child-processes-status ** Changed in: neutron Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1364712 Title: Child processes not monitored by dhcp-agent Status in OpenStack Neutron (virtual network service): Invalid Bug description: Currently the child processes dnsmasq and neutron-ns-metadata-agent are not monitored by neutron-dhcp-agent. If any of them crashes there is no indication, and service will be broken for the corresponding virtual networks. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1364712/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1092605] Re: Inconsistency between nova-manage help message and actual usage.
nova-manage is largely not used beyond db-sync at this point, marking as invalid because I think this is probably quite out of date. ** Changed in: nova Assignee: Mark McLoughlin (markmc) => (unassigned) ** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1092605 Title: Inconsistency between nova-manage help message and actual usage. Status in OpenStack Compute (Nova): Invalid Bug description: In the current implementation, a lot of optional arguments for nova-manage sub-actions are required (i.e. not actually optional). For example:

$ nova-manage shell script -h
usage: nova-manage shell script [-h] [--path path] [action_args [action_args ...]]

positional arguments:
  action_args

optional arguments:
  -h, --help   show this help message and exit
  --path path  Script path

The help message says --path is optional, which means the user should be able to safely omit this argument, but in fact they can't:

$ nova-manage shell script
Runs the script from the specifed path with flags set properly.
arguments: path
An argument is missing: path

nova-manage does detect the missing argument, but the behavior is confusing and inconsistent: why doesn't the help message say the argument is required? Looking into the implementation, nova-manage relies on the cliutils module from oslo to do argument inspection. This is an indirect and inefficient way to do it. The argparse module (used by the oslo cfg module) is able to do argument checking while parsing arguments.
nova-manage failed to do this due to incorrect usage of the @args decorator:

@@ -202,7 +202,7 @@ class ShellCommands(object):
         readline.parse_and_bind("tab:complete")
         code.interact()

-    @args('--path', dest='path', metavar='path', help='Script path')
+    @args('--path', required=True, dest='path', metavar='path', help='Script path')
     def script(self, path):

Simply adding required=True to @args allows the argparse module to detect incorrect input as well as generate a consistent help message, and the cliutils module is no longer needed. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1092605/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
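The `required=True` behavior the reporter suggests can be demonstrated with argparse directly: a flag-style option is optional by default, and marking it required makes argparse itself reject the missing argument and render consistent help text. A minimal sketch (a toy parser, not nova-manage's actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog='nova-manage shell script')
# Without required=True this option is listed under "optional
# arguments" in the help output, yet the command fails later
# without it -- exactly the inconsistency the report describes.
parser.add_argument('--path', required=True, metavar='path',
                    help='Script path')
parser.add_argument('action_args', nargs='*')

ns = parser.parse_args(['--path', '/tmp/x.py', 'a', 'b'])

# Omitting --path now makes argparse exit with its own clear error
# ("the following arguments are required: --path") before any
# command code runs.
try:
    parser.parse_args([])
    missing_detected = False
except SystemExit:
    missing_detected = True
```

Since argparse handles the check at parse time, no separate post-parse inspection layer (like cliutils) is needed.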
[Yahoo-eng-team] [Bug 1364893] Re: New version of requests library breaks unit tests
Review: https://review.openstack.org/#/c/118627/ ** Project changed: glance => python-glanceclient -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1364893 Title: New version of requests library breaks unit tests Status in Python client library for Glance: In Progress Bug description: The newest version of the requests library - 2.4.0 - updated the underlying library 'urllib3' to version 1.9. Unfortunately this version of urllib3 introduced a new exception, ProtocolError, which breaks unit tests. This causes Jenkins to fail on every change set. https://pypi.python.org/pypi/requests (Updated bundled urllib3 version.) https://pypi.python.org/pypi/urllib3 (urllib3.exceptions.ConnectionError renamed to urllib3.exceptions.ProtocolError. (Issue #326)) My solution is to change the requirements so we will not use the newest version of requests in python-glanceclient. To manage notifications about this bug go to: https://bugs.launchpad.net/python-glanceclient/+bug/1364893/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1190533] Re: Foreign keys are not enabled in sqlite
** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1190533 Title: Foreign keys are not enabled in sqlite Status in OpenStack Compute (Nova): Invalid Bug description: Foreign key constraints are not enabled in sqlite. It is impossible to write tests that involve foreign key constraints. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1190533/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1364976] [NEW] ML2 Cisco Nexus MD: Create vlan sent twice
Public bug reported: In the ML2 Cisco Nexus MD UT, the function test_nexus_add_trunk tries to verify that the create vlan message is only sent to the Nexus once when two ports are created. With the commit of https://review.openstack.org/#/c/113009, test_nexus_add_trunk sends two create vlan messages and this error is not caught in the UT. ** Affects: neutron Importance: Undecided Assignee: Robert Pothier (rpothier) Status: New ** Changed in: neutron Assignee: (unassigned) => Robert Pothier (rpothier) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1364976 Title: ML2 Cisco Nexus MD: Create vlan sent twice Status in OpenStack Neutron (virtual network service): New Bug description: In the ML2 Cisco Nexus MD UT, the function test_nexus_add_trunk tries to verify that the create vlan message is only sent to the Nexus once when two ports are created. With the commit of https://review.openstack.org/#/c/113009, test_nexus_add_trunk sends two create vlan messages and this error is not caught in the UT. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1364976/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1315095] Re: grenade nova network (n-net) fails to start
I think this is a screen issue ** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1315095 Title: grenade nova network (n-net) fails to start Status in Grenade - OpenStack upgrade testing: Confirmed Bug description: Here we see that n-net never started logging to its screen: http://logs.openstack.org/02/91502/1/check/check-grenade-dsvm/912e89e/logs/new/ The errors in n-cpu seem to support that the n-net service never started. According to http://logs.openstack.org/02/91502/1/check/check-grenade-dsvm/912e89e/logs/grenade.sh.log.2014-05-01-042623, circa 2014-05-01 04:31:15.580 the interesting bits should be in: /opt/stack/status/stack/n-net.failure But I don't see that captured. I'm not sure why n-net did not start. To manage notifications about this bug go to: https://bugs.launchpad.net/grenade/+bug/1315095/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1353131] Re: Failed to commit reservations in gate
Sounds like oslo.db should fix this: https://review.openstack.org/#/c/101901/ ** No longer affects: openstack-ci -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1353131 Title: Failed to commit reservations in gate Status in OpenStack Compute (Nova): In Progress Bug description: From: http://logs.openstack.org/31/105031/14/gate/gate-tempest-dsvm-full/c05b927/console.html 2014-08-05 02:54:01.131 | Log File Has Errors: n-cond 2014-08-05 02:54:01.132 | *** Not Whitelisted *** 2014-08-05 02:25:47.799 ERROR nova.quota [req-19feeaa2-e1d4-419b-a7bb-a19bb7000b1d AggregatesAdminTestJSON-2075387658 AggregatesAdminTestJSON-270189725] Failed to commit reservations [u'ceaa6ce7-db8d-4ba6-871a-b29c59f4a338', u'10d7550d-d791-44dd-8396-2fa6eaea7c20', u'e7a322e2-948d-45f7-892f-7ea4d9aa0e7c'] There are a number of errors happening in that file that aren't whitelisted. This one *seems* to be a possible cause of others, as there are then a number of InstanceNotFound errors. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1353131/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1364986] [NEW] oslo.db now wraps all DB exceptions
Public bug reported: tl;dr In a few versions of oslo.db (maybe when we release 1.0.0?), every project using oslo.db should inspect their code and remove usages of 'raw' DB exceptions like IntegrityError/OperationalError/etc from except clauses and replace them with the corresponding custom exceptions from oslo.db (at least a base one - DBError). Full version A recent commit to oslo.db changed the way the 'raw' DB exceptions are wrapped (e.g. IntegrityError, OperationalError, etc). Previously, we used decorators on Session methods and wrapped those exceptions with oslo.db custom ones. This is mostly useful for handling them later (e.g. to retry DB API methods on deadlocks). The problem with the Session decorators was that it wasn't possible to catch and wrap all possible exceptions; e.g. SA Core exceptions and exceptions raised in Query.all() calls were ignored. Now we are using a low-level SQLAlchemy event to catch all possible DB exceptions. This means that if consuming projects had workarounds for those cases and expected 'raw' exceptions instead of oslo.db ones, they would be broken. That's why we *temporarily* added both 'raw' exceptions and the new ones to except clauses in consuming projects' code when they were ported to oslo.db, to make the transition smooth and allow them to work with different oslo.db versions. On the positive side, we now have a solution for problems like https://bugs.launchpad.net/nova/+bug/1283987 where exceptions in Query method calls weren't handled properly. In a few releases of oslo.db we can safely remove 'raw' DB exceptions like IntegrityError/OperationalError/etc from project code and except only oslo.db-specific ones like DBDuplicateError/DBReferenceError/DBDeadLockError/etc (at least, we wrap all DB exceptions with our base exception DBError if we haven't found a better match).
oslo.db exceptions and their description: https://github.com/openstack/oslo.db/blob/master/oslo/db/exception.py ** Affects: nova Importance: Undecided Status: New ** Tags: db
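The migration the report asks for is mechanical: except clauses that currently catch SQLAlchemy's raw exceptions should catch oslo.db's wrappers instead. The before/after shape, sketched with an illustrative exception hierarchy standing in for oslo.db's (the real classes live in oslo.db's exception module; this block is self-contained):

```python
# Stand-ins mirroring oslo.db's hierarchy: DBError is the base
# wrapper, and more specific wrappers subclass it.
class DBError(Exception):
    pass

class DBDuplicateEntry(DBError):
    pass

def create_record(insert):
    # Old code would have caught the raw driver exception:
    #     except sqlalchemy.exc.IntegrityError: ...
    # With oslo.db wrapping every DB exception (via its SQLAlchemy
    # event handler), catch the wrapper instead.
    try:
        insert()
    except DBDuplicateEntry:
        return 'conflict'   # duplicate key -> report a conflict
    except DBError:
        return 'db-error'   # any other DB-related failure
    return 'ok'

def dup():
    raise DBDuplicateEntry()

def boom():
    raise DBError()
```

Ordering matters: the specific wrapper must come before the base `DBError`, otherwise the base clause swallows everything.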
[Yahoo-eng-team] [Bug 1358362] Re: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
*** This bug is a duplicate of bug 1241275 *** https://bugs.launchpad.net/bugs/1241275 So I think this bug is actually in python-neutronclient; however, looking at the code I can't see any way such a thing could happen, as that code path should have been protected from this since the 2013-10-23 commit:

commit e49819caf95fc6985036231b1e5717f0ff7b6c61
Author: Drew Thorstensen tho...@us.ibm.com
Date: Wed Oct 23 16:41:45 2013 -0500

    New exception when auth_url is not specified

    Certain scenarios into the neutron client will not specify the auth_url. This is typically when a token is specified. However, when the token is expired the neutron client will attempt to refresh the token. Users of this may not have passed in all of the required information for this reauthentication to properly occur.

    This code fixes an error that occurs in this flow where the auth_url (which is None) is appended to another string. This results in a core Python error. The update will provide a more targetted error message specifying to the user that the auth_url needs to be specified.

    An associated unit test is also included to validate this behavior.

    Change-Id: I577ce0c009a9a281acdc238d290a22c5e561ff82
    Closes-Bug: #1241275

** Changed in: nova Status: New => Incomplete ** Also affects: python-neutronclient Importance: Undecided Status: New ** Changed in: nova Status: Incomplete => Invalid ** This bug has been marked a duplicate of bug 1241275 Nova / Neutron Client failing upon re-authentication after token expiration -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358362 Title: TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' Status in OpenStack Compute (Nova): Invalid Status in Python client library for Neutron: New Bug description: We had several instances go into error state on bootstack with the following traceback: 2014-08-17 22:12:37.022 1232 ERROR nova.api.openstack.wsgi [req-068c2700-29a4-46ec-a9f7-9e956c06f3c6 4e68a0dd10e04db5b57c917ca8c521b1 d97d645e7867484b81311b7f9ee2ab15] Exception handling resource: unsupported operand type(s) for +: 'NoneType' and 'str' 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi Traceback (most recent call last): 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 887, in post_process_extensions 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi **action_args) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py, line 590, in show 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi return self._show(req, resp_obj) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py, line 586, in _show 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi self._extend_servers(req, [resp_obj.obj['server']]) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py, line 550, in _extend_servers 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi servers)) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py, line 345, in get_instances_security_groups_bindings 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi ports = self._get_ports_from_server_list(servers, 
neutron) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py, line 304, in _get_ports_from_server_list 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi ports.extend(neutron.list_ports(**search_opts).get('ports')) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 111, in with_params 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi ret = self.function(instance, *args, **kwargs) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 306, in list_ports 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi **_params) 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi File /usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 1250, in list 2014-08-17 22:12:37.022 1232 TRACE nova.api.openstack.wsgi for r in self._pagination(collection, path, **params): 2014-08-17 22:12:37.022 1232 TRACE
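The TypeError in the traceback arises when the client rebuilds an auth request by concatenating a string onto `auth_url` while `auth_url` is None (the token-only case described in the commit message above). The shape of the cited guard, sketched with hypothetical names (`NoAuthURLProvided`, `token_url`) rather than the client's actual code:

```python
class NoAuthURLProvided(Exception):
    """Raised when re-authentication is needed but no auth_url was given."""

def token_url(auth_url):
    # Without this guard, auth_url=None reproduces the reported crash:
    #     TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
    if auth_url is None:
        raise NoAuthURLProvided(
            'auth_url must be specified to re-authenticate an expired token')
    return auth_url + '/tokens'

url = token_url('http://keystone:5000/v2.0')

try:
    token_url(None)
    guarded = False
except NoAuthURLProvided:
    guarded = True
```

The targeted exception tells the operator what configuration is missing, instead of surfacing a generic core-Python error deep inside a list_ports call.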
[Yahoo-eng-team] [Bug 1239484] Re: failed nova db migration upgrading from grizzly to havana
Honestly, upgrade up from Folsom is pretty out of scope now, as the folsom and grizzly branches have been eoled, and havana is eol in a couple of weeks. ** Changed in: nova Status: In Progress = Won't Fix ** Changed in: nova Importance: High = Wishlist -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1239484 Title: failed nova db migration upgrading from grizzly to havana Status in Ubuntu Cloud Archive: New Status in OpenStack Compute (Nova): Won't Fix Status in OpenStack Compute (nova) icehouse series: New Bug description: I recently upgraded a Nova cluster from grizzly to havana. We're using the Ubuntu Cloud Archive and so in terms of package versions the upgrade was from 1:2013.1.3-0ubuntu1~cloud0 to 1:2013.2~rc2-0ubuntu1~cloud0. We're using mysql-server-5.5 5.5.32-0ubuntu0.12.04.1 from Ubuntu 12.04 LTS. After the upgrade, nova-manage db sync failed as follows: # nova-manage db sync 2013-10-13 21:08:54.132 26592 INFO migrate.versioning.api [-] 161 - 162... 2013-10-13 21:08:54.138 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.140 26592 INFO migrate.versioning.api [-] 162 - 163... 2013-10-13 21:08:54.145 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.146 26592 INFO migrate.versioning.api [-] 163 - 164... 2013-10-13 21:08:54.154 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.154 26592 INFO migrate.versioning.api [-] 164 - 165... 2013-10-13 21:08:54.162 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.162 26592 INFO migrate.versioning.api [-] 165 - 166... 2013-10-13 21:08:54.167 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.170 26592 INFO migrate.versioning.api [-] 166 - 167... 2013-10-13 21:08:54.175 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.176 26592 INFO migrate.versioning.api [-] 167 - 168... 
2013-10-13 21:08:54.184 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.184 26592 INFO migrate.versioning.api [-] 168 - 169... 2013-10-13 21:08:54.189 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.189 26592 INFO migrate.versioning.api [-] 169 - 170... 2013-10-13 21:08:54.199 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.199 26592 INFO migrate.versioning.api [-] 170 - 171... 2013-10-13 21:08:54.204 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.205 26592 INFO migrate.versioning.api [-] 171 - 172... 2013-10-13 21:08:54.841 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.842 26592 INFO migrate.versioning.api [-] 172 - 173... 2013-10-13 21:08:54.883 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 409 from table: key_pairs 2013-10-13 21:08:54.888 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 257 from table: key_pairs 2013-10-13 21:08:54.889 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 383 from table: key_pairs 2013-10-13 21:08:54.897 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 22 from table: key_pairs 2013-10-13 21:08:54.905 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 65 from table: key_pairs 2013-10-13 21:08:54.911 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 106 from table: key_pairs 2013-10-13 21:08:54.911 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 389 from table: key_pairs 2013-10-13 21:08:54.923 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 205 from table: key_pairs 2013-10-13 21:08:54.928 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 259 from table: key_pairs 2013-10-13 21:08:54.934 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 127 from table: key_pairs 2013-10-13 21:08:54.946 26592 INFO nova.db.sqlalchemy.utils [-] Deleted 
duplicated row with id: 337 from table: key_pairs 2013-10-13 21:08:54.951 26592 INFO nova.db.sqlalchemy.utils [-] Deleted duplicated row with id: 251 from table: key_pairs 2013-10-13 21:08:54.991 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:54.991 26592 INFO migrate.versioning.api [-] 173 - 174... 2013-10-13 21:08:55.052 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.053 26592 INFO migrate.versioning.api [-] 174 - 175... 2013-10-13 21:08:55.146 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.147 26592 INFO migrate.versioning.api [-] 175 - 176... 2013-10-13 21:08:55.171 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.172 26592 INFO migrate.versioning.api [-] 176 - 177... 2013-10-13 21:08:55.236 26592 INFO migrate.versioning.api [-] done 2013-10-13 21:08:55.237 26592 INFO migrate.versioning.api [-] 177 - 178... 2013-10-13
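The "Deleted duplicated row" lines in the 172 -> 173 migration log correspond to a cleanup pass that keeps one row per key and deletes the rest, typically so a unique constraint can then be added. The idea can be sketched against an in-memory sqlite table with made-up data (the real migration's schema and dedup key may differ):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE key_pairs (id INTEGER PRIMARY KEY, '
             'user_id TEXT, name TEXT)')
conn.executemany(
    'INSERT INTO key_pairs (id, user_id, name) VALUES (?, ?, ?)',
    [(22, 'alice', 'default'),
     (65, 'alice', 'default'),   # duplicate of id 22
     (106, 'bob', 'work')])

# Delete every duplicated row except the lowest id per (user_id, name),
# mirroring the "Deleted duplicated row with id: ..." log lines above.
conn.execute('DELETE FROM key_pairs WHERE id NOT IN '
             '(SELECT MIN(id) FROM key_pairs GROUP BY user_id, name)')

remaining = sorted(r[0] for r in conn.execute('SELECT id FROM key_pairs'))
```

After the cleanup, adding a unique index on (user_id, name) can no longer fail on pre-existing duplicates, which is usually the point of doing the deletion inside the migration.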
[Yahoo-eng-team] [Bug 1172774] Re: NameError: name '_' is not defined while running unit tests
** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1172774 Title: NameError: name '_' is not defined while running unit tests Status in OpenStack Compute (Nova): Invalid Bug description: when running nosetests -v against master of nova I get this error:

15:19:49 Traceback (most recent call last):
15:19:49   File "/usr/lib64/python2.6/site-packages/nose/loader.py", line 413, in loadTestsFromName
15:19:49     addr.filename, addr.module)
15:19:49   File "/usr/lib64/python2.6/site-packages/nose/importer.py", line 47, in importFromPath
15:19:49     return self.importFromDir(dir_path, fqname)
15:19:49   File "/usr/lib64/python2.6/site-packages/nose/importer.py", line 94, in importFromDir
15:19:49     mod = load_module(part_fqname, fh, filename, desc)
15:19:49   File "/var/lib/openstack-nova-test/nova/conductor/__init__.py", line 17, in <module>
15:19:49     from nova.conductor import api as conductor_api
15:19:49   File "/var/lib/openstack-nova-test/nova/conductor/api.py", line 19, in <module>
15:19:49     from nova.conductor import manager
15:19:49   File "/var/lib/openstack-nova-test/nova/conductor/manager.py", line 17, in <module>
15:19:49     from nova.api.ec2 import ec2utils
15:19:49   File "/var/lib/openstack-nova-test/nova/api/ec2/__init__.py", line 31, in <module>
15:19:49     from nova.api.ec2 import apirequest
15:19:49   File "/var/lib/openstack-nova-test/nova/api/ec2/apirequest.py", line 27, in <module>
15:19:49     from nova.api.ec2 import ec2utils
15:19:49   File "/var/lib/openstack-nova-test/nova/api/ec2/ec2utils.py", line 22, in <module>
15:19:49     from nova import availability_zones
15:19:49   File "/var/lib/openstack-nova-test/nova/availability_zones.py", line 20, in <module>
15:19:49     from nova import db
15:19:49   File "/var/lib/openstack-nova-test/nova/db/__init__.py", line 23, in <module>
15:19:49     from nova.db.api import *
15:19:49   File "/var/lib/openstack-nova-test/nova/db/api.py", line 48, in <module>
15:19:49     from nova.cells import rpcapi as cells_rpcapi
15:19:49   File "/var/lib/openstack-nova-test/nova/cells/rpcapi.py", line 27, in <module>
15:19:49     from nova import exception
15:19:49   File "/var/lib/openstack-nova-test/nova/exception.py", line 123, in <module>
15:19:49     class NovaException(Exception):
15:19:49   File "/var/lib/openstack-nova-test/nova/exception.py", line 131, in NovaException
15:19:49     message = _("An unknown exception occurred.")
15:19:49 NameError: name '_' is not defined

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1172774/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
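The NameError comes from importing nova modules before the gettext `_` builtin has been installed: `nova/exception.py` calls `_()` at class-definition time, and nova's entry points install `_` first, while pointing nose directly at the modules skips that step. The standard-library call that provides `_`, sketched standalone:

```python
import gettext

# Installing a translation-based "_" into builtins mirrors what
# nova's entry points do before importing modules that call _()
# at import time (e.g. nova/exception.py). With no message catalog
# found, a NullTranslations object is used and _() returns its
# argument unchanged.
gettext.install('nova')

# After install(), _ is available as a builtin everywhere:
message = _('An unknown exception occurred.')
```

The test-runner-side fix is the same idea: make sure something (a conftest, a nose plugin, or the package `__init__`) performs this installation before the first module using `_` is imported.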
[Yahoo-eng-team] [Bug 1283987] Re: Query Deadlock when creating 200 servers at once in sqlalchemy
** Changed in: oslo.db Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1283987 Title: Query Deadlock when creating 200 servers at once in sqlalchemy Status in OpenStack Compute (Nova): In Progress Status in Oslo Database library: Fix Released Bug description: Query deadlock when creating 200 servers at once in sqlalchemy. This bug occurred while I was testing this bug: https://bugs.launchpad.net/nova/+bug/1270725 The original info is logged here: http://paste.openstack.org/show/61534/ -- After checking the error log, we can see that the deadlocking function is 'all()' in the sqlalchemy framework. Previously, we used the '@retry_on_dead_lock' decorator to retry requests when a deadlock occurs, but it only covers session deadlocks (query/flush/execute); it doesn't cover some 'Query' actions in sqlalchemy. So we need to add the same protection for 'all()' in sqlalchemy. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1283987/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
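The retry mechanism discussed above has a simple shape: re-invoke the wrapped callable whenever the database reports a deadlock, up to a bounded number of attempts. A sketch with a stand-in `DBDeadlock` exception (oslo.db provides the real one; this is not the project's actual decorator):

```python
import functools

class DBDeadlock(Exception):
    """Stand-in for oslo.db's deadlock exception."""

def retry_on_deadlock(func, max_attempts=3):
    """Re-invoke func when the database reports a deadlock."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(1, max_attempts + 1):
            try:
                return func(*args, **kwargs)
            except DBDeadlock:
                if attempt == max_attempts:
                    raise  # give up after the last attempt
    return wrapper

calls = []

@retry_on_deadlock
def query_all():
    calls.append(1)
    if len(calls) < 2:
        raise DBDeadlock()   # first attempt deadlocks, second succeeds
    return ['row1', 'row2']

rows = query_all()
```

The bug's point is about coverage, not the retry itself: the decorator only helps if the deadlock exception actually propagates out of every DB call path, including `Query.all()`, which is why oslo.db moved the wrapping to a low-level event.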
[Yahoo-eng-team] [Bug 1060451] Re: detach volume has no effect with HpSanISCSIDriver
** Changed in: cinder Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1060451 Title: detach volume has no effect with HpSanISCSIDriver Status in Cinder: Invalid Status in devstack - openstack dev environments: Expired Status in OpenStack Dashboard (Horizon): Invalid Bug description: How to reproduce: Get the latest devstack folsom as of 10/2/2012. Once ./stack.sh starts, in the dashboard under project demo: create a VM out of a default image; create a volume; attach the volume to the instance by clicking Edit Attachments; once the instance is in-use, detach the volume from the instance by clicking Edit Attachments, then click Detach Volume for the volume. Checking screen -r, horizon, n-vol and n-cpu don't seem to have any activity in the log. However, if I run the nova command 'nova volume-detach <id of my vm> <id of my volume>' it works fine. I think this is more of a GUI problem. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1060451/+subscriptions
[Yahoo-eng-team] [Bug 1365031] [NEW] VMware fake session doesn't detect implicitly created directory
Public bug reported: The VMware fake session keeps an internal list of created files and directories. Directories can be created implicitly, e.g. by MakeDirectory(createParentDirectories=True), but the fake session will not recognise these. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1365031 Title: VMware fake session doesn't detect implicitly created directory Status in OpenStack Compute (Nova): New Bug description: The VMware fake session keeps an internal list of created files and directories. Directories can be created implicitly, e.g. by MakeDirectory(createParentDirectories=True), but the fake session will not recognise these. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1365031/+subscriptions
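The behaviour the report asks for can be sketched as: when a directory is created with createParentDirectories=True, the fake should record every intermediate directory, not only the leaf, so later existence checks succeed. The class and method names below are illustrative, not nova's actual fake:

```python
class FakeDatastore:
    """Tiny stand-in for the VMware fake session's directory tracking."""

    def __init__(self):
        self.directories = set()

    def make_directory(self, path, create_parent_directories=False):
        parts = path.strip("/").split("/")
        if create_parent_directories:
            # Register each ancestor so the implicitly created parents
            # are detected later, which is what the bug says is missing.
            for i in range(1, len(parts) + 1):
                self.directories.add("/".join(parts[:i]))
        else:
            self.directories.add("/".join(parts))

ds = FakeDatastore()
ds.make_directory("a/b/c", create_parent_directories=True)
```

After the call, `ds.directories` contains "a", "a/b", and "a/b/c", not just the leaf.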
[Yahoo-eng-team] [Bug 1364463] Re: Incorrect key in endpoint dictionary
** Also affects: python-keystoneclient Importance: Undecided Status: New ** Changed in: python-keystoneclient Assignee: (unassigned) => Sergey Kraynev (skraynev) ** Changed in: python-keystoneclient Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1364463 Title: Incorrect key in endpoint dictionary Status in OpenStack Identity (Keystone): In Progress Status in Python client library for Keystone: In Progress Bug description: Keystone v3 has the keyword 'region_id' in the endpoint dictionary instead of 'region'. This leads to a bug when you try to get an endpoint with a specific region. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1364463/+subscriptions
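The mismatch above — a v3 endpoint dictionary carrying 'region_id' while client code filters on 'region' — can be worked around with a lookup that tolerates both keys. The endpoint data below is made up for illustration:

```python
def endpoint_region(endpoint):
    # Prefer the v3-style key, fall back to the v2-style key.
    return endpoint.get("region_id", endpoint.get("region"))

v3_endpoint = {"url": "http://example:5000/v3", "region_id": "RegionOne"}
v2_endpoint = {"url": "http://example:5000/v2.0", "region": "RegionOne"}

assert endpoint_region(v3_endpoint) == "RegionOne"
assert endpoint_region(v2_endpoint) == "RegionOne"
```

Code that only reads `endpoint['region']` gets None (or a KeyError) for v3 catalogs, which is the failure the bug describes.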
[Yahoo-eng-team] [Bug 1365061] [NEW] Warn against sorting requirements
Public bug reported: Contrary to bug 1285478, requirements files should not be sorted alphabetically. Given that requirements files can contain comments, I'd suggest a header in all requirements files along the lines of: # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. ** Affects: keystone Importance: Low Status: Triaged ** Changed in: keystone Status: New => Triaged ** Changed in: keystone Importance: Undecided => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1365061 Title: Warn against sorting requirements Status in OpenStack Identity (Keystone): Triaged Bug description: Contrary to bug 1285478, requirements files should not be sorted alphabetically. Given that requirements files can contain comments, I'd suggest a header in all requirements files along the lines of: # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1365061/+subscriptions
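The header suggested above could be rolled out with a small idempotent helper that prepends it to any requirements text that lacks it. This is a sketch, not part of the bug's actual fix; the helper name and sample content are made up:

```python
# The exact header text proposed in the bug report.
HEADER = """\
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
"""

def ensure_order_header(text):
    # Idempotent: only prepend when the header isn't already there.
    if text.startswith("# The order of packages is significant"):
        return text
    return HEADER + text

sample = "pbr>=0.6\nBabel>=1.3\n"
updated = ensure_order_header(sample)
```

Running the helper twice leaves the file unchanged, so it is safe in a sync script applied across many projects.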
[Yahoo-eng-team] [Bug 1285478] Re: Enforce alphabetical ordering in requirements file
See bug 1365061 instead. ** Changed in: blazar Status: Triaged => Invalid ** Changed in: glance Status: In Progress => Invalid ** Changed in: keystone Status: In Progress => Invalid ** Changed in: trove Status: In Progress => Invalid ** Changed in: python-cinderclient Status: In Progress => Invalid ** Changed in: python-glanceclient Status: In Progress => Invalid ** Changed in: python-troveclient Status: In Progress => Invalid ** Changed in: storyboard Status: In Progress => Invalid ** Changed in: tempest Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1285478 Title: Enforce alphabetical ordering in requirements file Status in Blazar: Invalid Status in Cinder: Invalid Status in OpenStack Image Registry and Delivery Service (Glance): Invalid Status in Orchestration API (Heat): Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Bare Metal Provisioning Service (Ironic): Won't Fix Status in OpenStack Identity (Keystone): Invalid Status in OpenStack Neutron (virtual network service): Invalid Status in Python client library for Cinder: Invalid Status in Python client library for Glance: Invalid Status in Python client library for Ironic: Fix Committed Status in Python client library for Neutron: Invalid Status in Trove client binding: Invalid Status in OpenStack contribution dashboard: Fix Released Status in Storyboard database creator: Invalid Status in Tempest: Invalid Status in Openstack Database (Trove): Invalid Status in Tuskar: Fix Released Status in OpenStack Messaging and Notifications Service (Zaqar): Won't Fix Bug description: Sorting requirement files in alphabetical order makes code more readable, and makes it easy to check whether a specific library is in the requirements file. Hacking doesn't check *.txt files. We have enforced this check in oslo-incubator https://review.openstack.org/#/c/66090/. 
This bug is used to track syncing the check gating. How to sync this to other projects: 1. Copy tools/requirements_style_check.sh to project/tools. 2. Run tools/requirements_style_check.sh requirements.txt test-requirements.txt 3. Fix the violations. To manage notifications about this bug go to: https://bugs.launchpad.net/blazar/+bug/1285478/+subscriptions
[Yahoo-eng-team] [Bug 1251266] Re: allow_resize_to_same_host=true should be the default
This flag exists solely for all-in-one testing, and is not intended to be used on a real deployment. ** Changed in: nova Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1251266 Title: allow_resize_to_same_host=true should be the default Status in OpenStack Compute (Nova): Won't Fix Bug description: The flag allow_resize_to_same_host in the nova.conf file is set to 'false' as default. Thus, the command 'nova resize <instance uuid>' will fail. The functionality this flag offers doesn't introduce any vulnerability in the system, but gives the user the freedom to change the flavor of an instance. There's no logic in creating functionality just to withhold it in the default configuration. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1251266/+subscriptions
[Yahoo-eng-team] [Bug 1365061] Re: Warn against sorting requirements
** Also affects: python-keystoneclient Importance: Undecided Status: New ** Also affects: keystonemiddleware Importance: Undecided Status: New ** Changed in: keystone Assignee: (unassigned) => Dolph Mathews (dolph) ** Changed in: python-keystoneclient Assignee: (unassigned) => Dolph Mathews (dolph) ** Changed in: keystonemiddleware Assignee: (unassigned) => Dolph Mathews (dolph) ** Changed in: keystonemiddleware Importance: Undecided => Low ** Changed in: python-keystoneclient Importance: Undecided => Low ** Changed in: python-keystoneclient Status: New => In Progress ** Changed in: keystone Status: Triaged => In Progress ** Changed in: keystonemiddleware Status: New => In Progress ** Also affects: nova Importance: Undecided Status: New ** Also affects: glance Importance: Undecided Status: New ** Also affects: cinder Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1365061 Title: Warn against sorting requirements Status in Cinder: New Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in OpenStack Identity (Keystone): In Progress Status in OpenStack Identity (Keystone) Middleware: In Progress Status in OpenStack Compute (Nova): In Progress Status in Python client library for Keystone: In Progress Bug description: Contrary to bug 1285478, requirements files should not be sorted alphabetically. Given that requirements files can contain comments, I'd suggest a header in all requirements files along the lines of: # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1365061/+subscriptions
[Yahoo-eng-team] [Bug 1365061] Re: Warn against sorting requirements
** Also affects: neutron Importance: Undecided Status: New ** Also affects: horizon Importance: Undecided Status: New ** Also affects: swift Importance: Undecided Status: New ** Changed in: horizon Status: New => Fix Committed ** No longer affects: horizon -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1365061 Title: Warn against sorting requirements Status in Cinder: In Progress Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in OpenStack Identity (Keystone): In Progress Status in OpenStack Identity (Keystone) Middleware: In Progress Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Compute (Nova): In Progress Status in Python client library for Keystone: In Progress Status in OpenStack Object Storage (Swift): In Progress Bug description: Contrary to bug 1285478, requirements files should not be sorted alphabetically. Given that requirements files can contain comments, I'd suggest a header in all requirements files along the lines of: # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. This is the result of a mailing list discussion (thanks, Sean!): http://www.mail-archive.com/openstack- d...@lists.openstack.org/msg33927.html To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1365061/+subscriptions
[Yahoo-eng-team] [Bug 1365088] [NEW] Spelling mistakes in the comments
Public bug reported: Spelling mistake in comment in neutron/neutron/agent/l2population_rpc.py ** Affects: neutron Importance: Undecided Assignee: Rishabh (rishabja) Status: New ** Changed in: neutron Assignee: (unassigned) => Rishabh (rishabja) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1365088 Title: Spelling mistakes in the comments Status in OpenStack Neutron (virtual network service): New Bug description: Spelling mistake in comment in neutron/neutron/agent/l2population_rpc.py To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1365088/+subscriptions
[Yahoo-eng-team] [Bug 1365089] [NEW] selected region is lost on project selection
Public bug reported: When a user has selected region B, and then changes to a different project, the region is selected back to region A (or the default/first region). This confuses users and puts the burden on them to really understand the project and region scoping activity. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1365089 Title: selected region is lost on project selection Status in OpenStack Dashboard (Horizon): New Bug description: When a user has selected region B, and then changes to a different project, the region is selected back to region A (or the default/first region). This confuses users and puts the burden on them to really understand the project and region scoping activity. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1365089/+subscriptions
[Yahoo-eng-team] [Bug 1364659] Re: [HEAT] It's impossible to assign 'default' security group to node group
** Also affects: horizon Importance: Undecided Status: New ** Summary changed: - [HEAT] It's impossible to assign 'default' security group to node group + [Sahara][HEAT engine] It's impossible to assign 'default' security group to node group ** Changed in: horizon Assignee: (unassigned) => Andrew Lazarev (alazarev) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1364659 Title: [Sahara][HEAT engine] It's impossible to assign 'default' security group to node group Status in OpenStack Dashboard (Horizon): New Status in OpenStack Data Processing (Sahara, ex. Savanna): In Progress Bug description: Steps to repro: 1. Use the HEAT provisioning engine 2. Login as an admin user who has access to several tenants 3. Create a node group template with the 'default' security group assigned 4. Create a cluster with this node group Expected result: cluster is created Observed result: Cluster in error state. Heat stack is in state {stack_status_reason: Resource CREATE failed: PhysicalResourceNameAmbiguity: Multiple physical resources were found with name (default)., stack_status: CREATE_FAILED} Problem investigation: Heat searches for the security group name in all tenants accessible to the user, not only in the tenant where the stack is going to be created (a Heat bug?). Steps to make things better: 1. We can allow specifying the security group by ID 2. The Horizon UI can use IDs instead of names for security groups To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1364659/+subscriptions
[Yahoo-eng-team] [Bug 1365099] [NEW] Spelling errors in display messages
Public bug reported: Spelling errors in messages in tests.py ** Affects: horizon Importance: Undecided Assignee: Rishabh (rishabja) Status: New ** Changed in: horizon Assignee: (unassigned) => Rishabh (rishabja) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1365099 Title: Spelling errors in display messages Status in OpenStack Dashboard (Horizon): New Bug description: Spelling errors in messages in tests.py To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1365099/+subscriptions
[Yahoo-eng-team] [Bug 1365061] Re: Warn against sorting requirements
** Also affects: designate Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1365061 Title: Warn against sorting requirements Status in Cinder: In Progress Status in Designate: New Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in OpenStack Identity (Keystone): In Progress Status in OpenStack Identity (Keystone) Middleware: In Progress Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Compute (Nova): In Progress Status in Python client library for Keystone: In Progress Status in OpenStack Object Storage (Swift): In Progress Bug description: Contrary to bug 1285478, requirements files should not be sorted alphabetically. Given that requirements files can contain comments, I'd suggest a header in all requirements files along the lines of: # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. This is the result of a mailing list discussion (thanks, Sean!): http://www.mail-archive.com/openstack- d...@lists.openstack.org/msg33927.html To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1365061/+subscriptions
[Yahoo-eng-team] [Bug 1365107] [NEW] No xstatic-font-awesome in requirements
Public bug reported: $ tools/with_venv.sh python manage.py runserver 0.0.0.0:8080 ... ImportError: Could not import settings 'openstack_dashboard.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named font_awesome $ .venv/bin/pip install XStatic-Font-Awesome ... Successfully installed XStatic-Font-Awesome $ tools/with_venv.sh python manage.py runserver 0.0.0.0:8080 ... Starting development server at http://0.0.0.0:8080/ ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1365107 Title: No xstatic-font-awesome in requirements Status in OpenStack Dashboard (Horizon): New Bug description: $ tools/with_venv.sh python manage.py runserver 0.0.0.0:8080 ... ImportError: Could not import settings 'openstack_dashboard.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named font_awesome $ .venv/bin/pip install XStatic-Font-Awesome ... Successfully installed XStatic-Font-Awesome $ tools/with_venv.sh python manage.py runserver 0.0.0.0:8080 ... Starting development server at http://0.0.0.0:8080/ To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1365107/+subscriptions
[Yahoo-eng-team] [Bug 1364685] Re: VMware: Broken pipe ERROR when boot VM
** Also affects: openstack-vmwareapi-team Importance: Undecided Status: New ** Changed in: openstack-vmwareapi-team Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1364685 Title: VMware: Broken pipe ERROR when boot VM Status in OpenStack Compute (Nova): New Status in The OpenStack VMwareAPI subTeam: New Bug description: This error happens intermittently, but can always be reproduced after a long run with multiple VMware compute nodes connected to the same vCenter in our test environment: 2014-09-02 09:34:53.489 9439 ERROR nova.virt.vmwareapi.io_util [-] [Errno 32] Broken pipe 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util Traceback (most recent call last): 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/io_util.py, line 178, in _inner 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util self.output.write(data) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/read_write_util.py, line 138, in write 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util self.file_handle.send(data) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib64/python2.6/httplib.py, line 759, in send 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util self.sock.sendall(str) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 131, in sendall 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util v = self.send(data[count:]) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 107, in send 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util super(GreenSSLSocket, self).send, 
data, flags) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib/python2.6/site-packages/eventlet/green/ssl.py, line 77, in _call_trampolining 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util return func(*a, **kw) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util File /usr/lib64/python2.6/ssl.py, line 174, in send 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util v = self._sslobj.write(data) 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util error: [Errno 32] Broken pipe 2014-09-02 09:34:53.489 9439 TRACE nova.virt.vmwareapi.io_util We are using the 'VMware vCenter Server Appliance', version 5.5.0. Normally, there are about 2000+ connections in TIME_WAIT status on port 443 when this error happens, and 80 idle sessions in our test env. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1364685/+subscriptions
[Yahoo-eng-team] [Bug 1329737] Re: Valid tokens may remain after token's user was deleted
This has been addressed on the Keystone side with the above BP. ** Changed in: keystone Status: Triaged => Invalid ** Changed in: keystone Milestone: juno-3 => None -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1329737 Title: Valid tokens may remain after token's user was deleted Status in OpenStack Identity (Keystone): Invalid Status in OpenStack Security Advisories: Won't Fix Bug description: When a user is deleted, the deleted user's tokens are expired after committing the transaction that deletes the user. If the database dies while the tokens are being expired, the remaining tokens will lose the chance to expire until 24 hours later. (Because the user is already deleted.) In this case, the remaining tokens can be used to authenticate despite the fact that the token's user was deleted. I think this case is dangerous from the security point of view. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1329737/+subscriptions
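The race the reporter describes is that user deletion commits first and token expiry runs as a separate step, leaving a window where tokens outlive their user. Applying both changes together closes the window. A toy sketch with an in-memory store standing in for the database transaction (names and data are illustrative):

```python
class Store:
    """In-memory stand-in for the identity backend."""

    def __init__(self):
        self.users = {"alice"}
        self.tokens = {"tok1": "alice", "tok2": "alice"}

    def delete_user_atomically(self, user):
        # Compute both changes first, then apply them together, so a
        # failure between the two steps can't leave orphaned tokens --
        # unlike the two-phase delete-then-expire flow in the report.
        remaining = {t: u for t, u in self.tokens.items() if u != user}
        self.users.discard(user)
        self.tokens = remaining

s = Store()
s.delete_user_atomically("alice")
```

After the call, neither the user nor any of her tokens survive, even though no separate expiry pass ran.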
[Yahoo-eng-team] [Bug 1318973] Re: Inconsistent summaries of nova v3 api
I feel like with the new v2.1 plan this is currently invalid. We should reopen later when it's something that we might actively have on our horizon. ** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1318973 Title: Inconsistent summaries of nova v3 api Status in OpenStack Compute (Nova): Invalid Bug description: We can get summaries of each api extension through the show extensions api, and they seem inconsistent. Many summaries include "support.", but the other ones include "Extension.".
$ nova --os-compute-api-version 3 extension-list
| Name | Summary | Alias | Version |
| Consoles | Consoles. | consoles | 1 |
| Extensions | Extension information. | extensions | 1 |
| FlavorAccess | Flavor access support. | flavor-access | 1 |
| FlavorsExtraSpecs | Flavors Extension. | flavor-extra-specs | 1 |
| FlavorManage | Flavor create/delete API support. | flavor-manage | 1 |
| Flavors | Flavors Extension. | flavors | 1 |
| Ips | Server addresses. | ips | 1 |
| Keypairs | Keypair Support. | keypairs | 1 |
| AccessIPs | Access IPs support. | os-access-ips | 1 |
| AdminActions | Enable admin-only server actions... | os-admin-actions | 1 |
| AdminPassword | Admin password management support. | os-admin-password | 1 |
| Agents | Agents support. | os-agents | 1 |
| Aggregates | Admin-only aggregate administration. | os-aggregates | 1 |
| AttachInterfaces | Attach interface support. | os-attach-interfaces | 1 |
| AvailabilityZone | 1. Add availability_zone to the Create Server API | os-availability-zone | 1 |
| BlockDeviceMapping | Block device mapping boot support. | os-block-device-mapping | 1 |
| Cells | Enables cells-related functionality such as adding neighbor cells,... | os-cells | 1 |
| Certificates | Certificates support. | os-certificates | 1 |
| ConfigDrive | Config Drive Extension. | os-config-drive | 1 |
| ConsoleAuthTokens | Console token authentication support. | os-console-auth-tokens | 1 |
| ConsoleOutput | Console log output support, with tailing ability. | os-console-output | 1 |
| CreateBackup | Create a backup of a server. | os-create-backup | 1 |
| DeferredDelete | Instance deferred delete. | os-deferred-delete | 1 |
| Evacuate | Enables server evacuation. | os-evacuate | 1 |
| ExtendedAvailabilityZone | Extended Server Attributes support. | os-extended-availability-zone | 1 |
| ExtendedServerAttributes | Extended Server Attributes support. | os-extended-server-attributes | 1 |
| ExtendedStatus |
[Yahoo-eng-team] [Bug 1265416] Re: Use 'project' instead of 'tenant' in v3 api
** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1265416 Title: Use 'project' instead of 'tenant' in v3 api Status in OpenStack Compute (Nova): Invalid Bug description: For v3 API consistency, we prefer to use 'project' instead of 'tenant'. Discussion at: http://lists.openstack.org/pipermail/openstack-dev/2013-November/020222.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1265416/+subscriptions
[Yahoo-eng-team] [Bug 1297605] Re: VMware: Error when snapshotting ISO instance with 0GB root disk
** Changed in: nova Status: In Progress => Fix Committed ** Changed in: nova Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1297605 Title: VMware: Error when snapshotting ISO instance with 0GB root disk Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) icehouse series: Fix Released Bug description: When using the VC Driver, snapshotting an instance that was booted with an ISO and has no root disk will cause the following error (full trace below): AttributeError: 'NoneType' object has no attribute 'split' Scenario is as follows: 1. Boot an instance using an ISO. Make sure the flavor specifies a 0GB root disk size 2. Snapshot the instance Full traceback: Traceback (most recent call last): File /opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py, line 133, in _dispatch_and_reply incoming.message)) File /opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py, line 176, in _dispatch return self._do_dispatch(endpoint, method, ctxt, args) File /opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py, line 122, in _do_dispatch result = getattr(endpoint, method)(ctxt, **new_args) File /opt/stack/nova/nova/exception.py, line 88, in wrapped payload) File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File /opt/stack/nova/nova/exception.py, line 71, in wrapped return f(self, context, *args, **kw) File /opt/stack/nova/nova/compute/manager.py, line 280, in decorated_function pass File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File /opt/stack/nova/nova/compute/manager.py, line 266, in decorated_function return function(self, context, *args, **kwargs) File /opt/stack/nova/nova/compute/manager.py, line 309, in decorated_function e, 
sys.exc_info()) File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File /opt/stack/nova/nova/compute/manager.py, line 296, in decorated_function return function(self, context, *args, **kwargs) File /opt/stack/nova/nova/compute/manager.py, line 359, in decorated_function % image_id, instance=instance) File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__ six.reraise(self.type_, self.value, self.tb) File /opt/stack/nova/nova/compute/manager.py, line 349, in decorated_functionot)[0] File /opt/stack/nova/nova/virt/vmwareapi/ds_util.py, line 38, in split_datastore_path spl = datastore_path.split('[', 1)[1].split(']', 1) AttributeError: 'NoneType' object has no attribute 'split' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1297605/+subscriptions
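The traceback bottoms out in `split_datastore_path`, which assumes a string like "[datastore1] dir/file.vmdk"; with an ISO boot and a 0GB root disk there is no root VMDK, so the path is None and `.split()` raises AttributeError. A guarded, illustrative reconstruction (the function name mirrors the nova module, but the body and the error raised are assumptions, not the released fix):

```python
def split_datastore_path(datastore_path):
    # Guard the None case from the bug: fail with a clear message
    # instead of AttributeError deep inside string methods.
    if not datastore_path:
        raise ValueError("empty datastore path: %r" % datastore_path)
    datastore, _sep, path = datastore_path.partition("]")
    return datastore.lstrip("[").strip(), path.strip()

print(split_datastore_path("[datastore1] foo/bar.vmdk"))
```

Callers can then catch the explicit error for the no-root-disk case rather than crashing the snapshot operation.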
[Yahoo-eng-team] [Bug 1178156] Re: resource tracker for bare metal nodes tries to subdivide resource
Extremely old bm bug, marking as won't fix ** Changed in: nova Status: Triaged = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1178156 Title: resource tracker for bare metal nodes tries to subdivide resource Status in OpenStack Compute (Nova): Won't Fix Bug description: after deploying a small instance on big hardware: 2013-05-09 08:48:30,085.085 19736 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2013-05-09 08:48:30,208.208 19736 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 97792 2013-05-09 08:48:30,208.208 19736 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 2038 2013-05-09 08:48:30,209.209 19736 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 23 2013-05-09 08:48:30,308.308 19736 INFO nova.compute.resource_tracker [-] Compute_service record updated for ubuntu:96deccd5-0ad9-4bb5-979b-009bebac52fc This should show 0, 0 and 0 : the size of the instance is not the amount to subtract :). I don't know if this is just cosmetic. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1178156/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
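The accounting error described above is that the resource tracker subtracts the flavor's size from the node's capacity, while a bare metal node is consumed whole regardless of flavor size. A toy illustration of the two accounting modes (hypothetical helper, not the tracker's real API):

```python
def free_after_claim(total_mb, flavor_mb, bare_metal):
    # Hypothetical illustration: a virtualized host gives up only the
    # flavor's share of RAM; a bare metal node is consumed entirely by
    # one instance, so its free resources should drop to zero.
    if bare_metal:
        return 0
    return total_mb - flavor_mb

# KVM host: 97792 MB total minus a 2048 MB flavor leaves 95744 MB free.
# Bare metal: the same claim should report 0 MB free, not 95744.
```

The log excerpt in the bug shows the first formula being applied to a bare metal node, which is why free RAM/disk/VCPUs are non-zero after the deploy.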
[Yahoo-eng-team] [Bug 1174518] Re: rescue extension not supported by bare metal
** Changed in: nova Status: Triaged = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1174518 Title: rescue extension not supported by bare metal Status in OpenStack Compute (Nova): Won't Fix Status in tripleo - openstack on openstack: Triaged Bug description: And it would be super-useful there. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1174518/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1180664] Re: How to update flavor parameters
** Changed in: nova Status: Confirmed = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1180664 Title: How to update flavor parameters Status in OpenStack Compute (Nova): Invalid Bug description: I have created new flavors using the REST API, but I have not found any API for updating the parameters of a flavor such as vCPUs, memory, or disk. Let me know how I can proceed with this. I have Ubuntu 12.10 and Grizzly installed. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1180664/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1158328] Re: passwords in config files stored in plaintext
I feel like this is pretty strongly out of scope. Applications that need to talk to databases that require passwords need access to those passwords in plain text. While we could do obfuscation, it doesn't really address the issue; it just makes you think you addressed it. Honestly, it is better to leave things clear so people rightly understand that a compromise of that file means all bets are off. ** Changed in: nova Status: Confirmed = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1158328 Title: passwords in config files stored in plaintext Status in OpenStack Compute (Nova): Won't Fix Bug description: The credentials for database connections and the keystone authtoken are stored in plaintext within the nova.conf and api-paste config files. These values should be encrypted. A scheme similar to /etc/shadow would be great. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1158328/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
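Given the Won't Fix resolution, the practical mitigation is filesystem permissions rather than encryption. A small sketch (a hypothetical helper, not anything shipped by Nova) of checking that a credentials file is readable only by its owner, i.e. mode 0600:

```python
import os
import stat

def config_is_private(path):
    """Return True if the file grants no group/other permissions.

    Sketch of the practical mitigation for plaintext credentials:
    since obfuscation adds nothing, restrict who can read the file
    (chmod 0600, owned by the service user).
    """
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IRWXG | stat.S_IRWXO))
```

A deployment tool could run such a check against nova.conf and api-paste.ini at install time and warn when credentials are world-readable.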
[Yahoo-eng-team] [Bug 1089128] Re: Tests require MySQL to develop locally
** Changed in: nova Status: Confirmed = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1089128 Title: Tests require MySQL to develop locally Status in OpenStack Compute (Nova): Won't Fix Bug description: Currently developers need MySQL installed in order to develop locally (due to MySQL-Python being in the test-requires file). This can result in the following error my_config missing error: http://paste.openstack.org/show/27856/ The workaround for Mac is to install MySQL. If you use brew, `brew install mysql` will work. Long term, it would be nice to not require MySQL and instead make it optional. It looks like Monty started on this but later reverted this work with commit 6e9f3bb10a105411b0eb3e8f22a252af0784cb0b. This bug is to track somehow fixing this, either by completing what Monty started, or finding some other approach. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1089128/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1180540] Re: Conductor manager imports compute api
** Changed in: nova Status: Confirmed = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1180540 Title: Conductor manager imports compute api Status in OpenStack Compute (Nova): Invalid Bug description: There is work to move calls to conductor into the compute api. This creates a circular dependency which must be worked around. Ideally the conductor would not need to call into the compute api, so we should fix the three calls that occur. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1180540/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1160026] Re: If nova-dhcpbridge.conf is missing, no message will warn about it.
No longer valid ** Changed in: nova Status: Triaged = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1160026 Title: If nova-dhcpbridge.conf is missing, no message will warn about it. Status in OpenStack Compute (Nova): Won't Fix Bug description: Hello, I'm not quite sure whether this is a bug or more of a very nice-to-have for everybody. Over the last few days we've been investigating some weird behaviour which was due to the fact that the DHCP bridge wasn't working properly because /etc/nova/nova-dhcpbridge.conf was missing. Would it be possible to have a DEBUG/VERBOSE/WARNING message of some kind when it's missing? Or even better, set the default options in nova-dhcpbridge and use them if /etc/nova/nova-dhcpbridge.conf is missing! Thank you very much, Dave To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1160026/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 991531] Re: Use users credentials in s3 connection if using keystone
No longer seems valid, the code in this area has radically changed. ** Changed in: nova Status: Confirmed = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/991531 Title: Use users credentials in s3 connection if using keystone Status in OpenStack Compute (Nova): Invalid Bug description: When nova talks to an s3 image service it currently uses hard coded credentials FLAGS.s3_access_key and FLAGS.s3_secret_key. If using keystone auth it should/can use the users keystone credentials. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/991531/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1365147] [NEW] No test for tokens using inherited domain role
Public bug reported: If a user has an inherited role in a domain, he is able to get a token on every project inside that domain, even if he doesn't have a specific grant on that project. Currently, there is no test for this feature, we should create it. ** Affects: keystone Importance: Undecided Assignee: Samuel de Medeiros Queiroz (samuel-z) Status: New ** Changed in: keystone Assignee: (unassigned) = Samuel de Medeiros Queiroz (samuel-z) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1365147 Title: No test for tokens using inherited domain role Status in OpenStack Identity (Keystone): New Bug description: If a user has an inherited role in a domain, he is able to get a token on every project inside that domain, even if he doesn't have a specific grant on that project. Currently, there is no test for this feature, we should create it. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1365147/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1275906] Re: Remove str() from message formatting block
honestly, I think this is too low a priority to even keep in the tracker ** Changed in: nova Status: Triaged = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1275906 Title: Remove str() from message formatting block Status in OpenStack Compute (Nova): Invalid Bug description: Remove str() from message formatting code, for example: "Error %s" % str(x), because the %s conversion already converts any Python object using str(). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1275906/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
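The redundancy the bug describes is easy to demonstrate: the %s conversion already routes its argument through str(), so wrapping the argument in an explicit str() call changes nothing:

```python
class Widget:
    """Toy object with a custom string form, used only to show that
    "%s" % obj and "%s" % str(obj) produce identical output."""

    def __str__(self):
        return "widget-42"

w = Widget()
# %s already applies str() to the operand, so the explicit call is redundant:
assert "Error %s" % w == "Error %s" % str(w) == "Error widget-42"
```

This is why reviewers ask for `"Error %s" % x` rather than `"Error %s" % str(x)`; the triager simply judged the cleanup too minor to track.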
[Yahoo-eng-team] [Bug 1028688] Re: flavor details should include arch
This is so old now. While it's probably worth thinking about this in the current nova arch, we should do so with fresh eyes if it's important. ** Changed in: nova Status: Confirmed = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1028688 Title: flavor details should include arch Status in OpenStack Compute (Nova): Invalid Bug description: for supporting arm/arm64 in addition to x86 instances within the same region/zone, flavor details should also list arch, so a user/program can select the flavor appropriately. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1028688/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1333746] Re: novncproxy crash at start
*** This bug is a duplicate of bug 1334327 *** https://bugs.launchpad.net/bugs/1334327 ** This bug has been marked a duplicate of bug 1334327 spice not working on debian 7 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1333746 Title: novncproxy crash at start Status in OpenStack Compute (Nova): New Bug description: Hi everyone, since the last upgrade you have done on icehouse, novncproxy won't start. This is the trace I get: Traceback (most recent call last): File "/usr/bin/nova-novncproxy", line 10, in <module> sys.exit(main()) File "/usr/lib/python2.7/dist-packages/nova/cmd/novncproxy.py", line 87, in main wrap_cmd=None) File "/usr/lib/python2.7/dist-packages/nova/console/websocketproxy.py", line 38, in __init__ ssl_target=None, *args, **kwargs) File "/usr/lib/python2.7/dist-packages/websockify/websocketproxy.py", line 231, in __init__ websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'no_parent' It seems there is a version conflict with websockify. I'm running on Debian Wheezy amd64. If you need more information, please ask. Regards, Axel Vanzaghi To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1333746/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 810493] Re: No support for sparse images
Until the glance issue is addressed, it's not possible to do anything in nova for this. Removing nova. ** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/810493 Title: No support for sparse images Status in OpenStack Image Registry and Delivery Service (Glance): Confirmed Bug description: I could have sworn I filed this bug already, but I don't see it now. Oh, well. Glance does not seem to support any sort of sparse images. For example, Ubuntu's cloud images are a 1½ GB filesystem, but if it were sparsely allocated it would only take up a couple of hundred MB. Amazon handles this by using tarballs as their image transport format. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/810493/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
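What "sparse" buys can be seen directly on a POSIX filesystem: a file's apparent size (`st_size`) can far exceed the bytes actually allocated (`st_blocks * 512`). A small demonstration of the hole-punching the reporter wants image transport to preserve (any temp path will do; the saving depends on the filesystem supporting holes):

```python
import os
import tempfile

def sparse_sizes(length=10 * 1024 * 1024):
    """Create a file of `length` bytes containing one real byte at the
    end, and return (apparent_size, allocated_bytes). On filesystems
    with hole support the allocation is a tiny fraction of st_size."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.seek(length - 1)  # everything before this byte is a hole
            f.write(b"\0")
        st = os.stat(path)
        return st.st_size, st.st_blocks * 512
    finally:
        os.unlink(path)

apparent, allocated = sparse_sizes()
print(apparent, allocated)  # allocated is usually a few KiB on ext4/xfs
```

Transferring such a file byte-for-byte (as Glance did) expands the holes to real zeros, which is why the 1½ GB Ubuntu cloud image cannot shrink back to a couple of hundred MB without a hole-aware transport such as Amazon's tarball format.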
[Yahoo-eng-team] [Bug 1329995] Re: Sporadic tempest failures: The server could not comply with the request since it is either malformed or otherwise incorrect
The logs aren't available any more, without more info this isn't possible to address. ** Changed in: nova Status: New = Incomplete ** No longer affects: openstack-ci ** Changed in: nova Status: Incomplete = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1329995 Title: Sporadic tempest failures: The server could not comply with the request since it is either malformed or otherwise incorrect Status in OpenStack Compute (Nova): Invalid Bug description: In one of my Tempest review runs, I'm seeing the following error fail some tests: Traceback (most recent call last): File tempest/services/compute/xml/servers_client.py, line 388, in wait_for_server_status raise_on_error=raise_on_error) File tempest/common/waiters.py, line 86, in wait_for_server_status _console_dump(client, server_id) File tempest/common/waiters.py, line 27, in _console_dump resp, output = client.get_console_output(server_id, None) File tempest/services/compute/xml/servers_client.py, line 596, in get_console_output length=length) File tempest/services/compute/xml/servers_client.py, line 439, in action resp, body = self.post(servers/%s/action % server_id, str(doc)) File tempest/common/rest_client.py, line 209, in post return self.request('POST', url, extra_headers, headers, body) File tempest/common/rest_client.py, line 419, in request resp, resp_body) File tempest/common/rest_client.py, line 468, in _error_checker raise exceptions.BadRequest(resp_body) BadRequest: Bad request Details: {'message': 'The server could not comply with the request since it is either malformed or otherwise incorrect.', 'code': '400'} Full log for the run here: http://logs.openstack.org/93/98693/5/check /check-tempest-dsvm-full-icehouse/71d6c8c/console.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1329995/+subscriptions -- Mailing list: 
https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1331213] Re: Compute node configuration issue
looks like support request ** Changed in: nova Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1331213 Title: Compute node configuration issue Status in OpenStack Compute (Nova): Invalid Bug description: Hello. I'm having a problem with the nova compute node configuration. I have a controller node and a compute node. When I start the service on the compute node I get: 2014-06-17 18:08:27 15268 DEBUG nova.virt.libvirt.driver [-] Connecting to libvirt: qemu:///system _get_connection /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:344 2014-06-17 18:08:28 15268 CRITICAL nova [-] (OperationalError) no such table: instances u'SELECT instances.created_at ... Environment: Ubuntu 12.x, Icehouse. Any help is appreciated. Thank you To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1331213/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1334151] Re: tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_create_backup
*** This bug is a duplicate of bug 1329995 *** https://bugs.launchpad.net/bugs/1329995 ** This bug has been marked a duplicate of bug 1329995 Sporadic tempest failures: The server could not comply with the request since it is either malformed or otherwise incorrect -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1334151 Title: tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_create_backup Status in OpenStack Compute (Nova): New Bug description: http://logs.openstack.org/76/101876/1/gate/gate-tempest-dsvm- full/1543d84/ https://review.openstack.org/#/c/101876/ 2014-06-24 22:40:12.195 | == 2014-06-24 22:40:12.196 | Failed 1 tests - output below: 2014-06-24 22:40:12.196 | == 2014-06-24 22:40:12.197 | 2014-06-24 22:40:12.197 | tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_create_backup[gate] 2014-06-24 22:40:12.197 | - 2014-06-24 22:40:12.198 | 2014-06-24 22:40:12.198 | Captured traceback: 2014-06-24 22:40:12.199 | ~~~ 2014-06-24 22:40:12.199 | Traceback (most recent call last): 2014-06-24 22:40:12.200 | File tempest/api/compute/servers/test_server_actions.py, line 316, in test_create_backup 2014-06-24 22:40:12.200 | self.servers_client.wait_for_server_status(self.server_id, 'ACTIVE') 2014-06-24 22:40:12.201 | File tempest/services/compute/xml/servers_client.py, line 390, in wait_for_server_status 2014-06-24 22:40:12.201 | raise_on_error=raise_on_error) 2014-06-24 22:40:12.201 | File tempest/common/waiters.py, line 106, in wait_for_server_status 2014-06-24 22:40:12.202 | _console_dump(client, server_id) 2014-06-24 22:40:12.202 | File tempest/common/waiters.py, line 27, in _console_dump 2014-06-24 22:40:12.203 | resp, output = client.get_console_output(server_id, None) 2014-06-24 22:40:12.203 | File tempest/services/compute/xml/servers_client.py, line 598, in get_console_output 2014-06-24 
22:40:12.204 | length=length) 2014-06-24 22:40:12.204 | File tempest/services/compute/xml/servers_client.py, line 441, in action 2014-06-24 22:40:12.205 | resp, body = self.post(servers/%s/action % server_id, str(doc)) 2014-06-24 22:40:12.205 | File tempest/common/rest_client.py, line 218, in post 2014-06-24 22:40:12.206 | return self.request('POST', url, extra_headers, headers, body) 2014-06-24 22:40:12.206 | File tempest/common/rest_client.py, line 430, in request 2014-06-24 22:40:12.206 | resp, resp_body) 2014-06-24 22:40:12.207 | File tempest/common/rest_client.py, line 479, in _error_checker 2014-06-24 22:40:12.207 | raise exceptions.BadRequest(resp_body) 2014-06-24 22:40:12.208 | BadRequest: Bad request 2014-06-24 22:40:12.208 | Details: {'message': 'The server could not comply with the request since it is either malformed or otherwise incorrect.', 'code': '400'} To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1334151/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1297261] Re: messages when the module import fails are very misleading and not descriptive
this is an oslo issue, it's in the openstack/ namespace ** Also affects: oslo-incubator Importance: Undecided Status: New ** Changed in: oslo-incubator Status: New = Confirmed ** No longer affects: nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1297261 Title: messages when the module import fails are very misleading and not descriptive Status in The Oslo library incubator: Confirmed Bug description: I had a problem importing the vmware module due to a missing dependency. The log message, though, was saying that there is no vmware module, which is far from true. The problematic code is at least: /usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py def import_class(import_str): ... try: __import__(mod_str) return getattr(sys.modules[mod_str], class_str) except (ValueError, AttributeError): raise ImportError('Class %s cannot be found (%s)' % (class_str, traceback.format_exception(*sys.exc_info()))) Which would obfuscate the error message when some module import dies on ValueError, and: def import_object_ns(name_space, import_str, *args, **kwargs): """Tries to import object from default namespace. Imports a class and return an instance of it, first by trying to find the class in a default namespace, then failing back to a full path if not found in the default namespace.""" import_value = "%s.%s" % (name_space, import_str) try: return import_class(import_value)(*args, **kwargs) except ImportError: return import_class(import_str)(*args, **kwargs) Which will say ImportError: Missing module import_str, but only as a result of failure of the first import in the try-except block, effectively hiding the true reason of the failure. In other words, if the first import fails for some interesting reason, some other, possibly meaningless import is tried and only its error is allowed to propagate.
+++ This bug was initially created as a clone of Bug #1080424 +++ Description of problem: When having nova-compute alone on a node, I cannot start it (using PYTHONVERBOSE=1 and slightly modified init script for debuging purposes): 2014-03-25 11:10:42.153 10009 INFO nova.virt.driver [-] Loading compute driver 'vmwareapi.VMwareVCDriver' import nova.virt.vmwareapi # directory /usr/lib/python2.6/site-packages/nova/virt/vmwareapi # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/__init__.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/__init__.py import nova.virt.vmwareapi # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/__init__.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py import nova.virt.vmwareapi.driver # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.py import nova.virt.vmwareapi.error_util # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/host.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/host.py import nova.virt.vmwareapi.host # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/host.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim_util.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim_util.py import nova.virt.vmwareapi.vim_util # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim_util.pyc # /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vm_util.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vm_util.py import nova.virt.vmwareapi.vm_util # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vm_util.pyc # 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.pyc matches /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.py import nova.virt.vmwareapi.vim # precompiled from /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.pyc 2014-03-25 11:10:42.155 10009 ERROR nova.virt.driver [-] Unable to load the virtualization driver 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver Traceback (most recent call last): 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver File /usr/lib/python2.6/site-packages/nova/virt/driver.py, line 1115, in load_compute_driver 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver virtapi) 2014-03-25 11:10:42.155 10009 TRACE nova.virt.driver File /usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py,
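The pattern the reporter objects to reduces to a few lines: a namespaced import is tried first, and on failure a second, usually meaningless import is tried, whose error replaces the first. A sketch of one way out, re-raising the original error when both attempts fail (an importlib-based stand-in for illustration, not oslo's actual code):

```python
import importlib

def import_class(import_str):
    """Import a dotted path like 'pkg.mod.Class' and return the class."""
    mod_str, _, class_str = import_str.rpartition('.')
    module = importlib.import_module(mod_str)
    return getattr(module, class_str)

def import_object_ns(name_space, import_str, *args, **kwargs):
    # Try the namespaced path first, then fall back to the bare path,
    # but keep the *first* failure: it usually names the real problem
    # (e.g. a missing dependency of the vmware driver), whereas the
    # fallback's error only says the bare path does not exist.
    try:
        return import_class("%s.%s" % (name_space, import_str))(*args, **kwargs)
    except ImportError as first_err:
        try:
            return import_class(import_str)(*args, **kwargs)
        except ImportError:
            raise first_err
```

With this shape, the operator in the cloned bug would have seen the vmware driver's missing dependency in the traceback instead of a misleading "module not found" for the driver itself.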
[Yahoo-eng-team] [Bug 1290679] Re: Nova Cells cannot work with zmq
Honestly, zmq is basically unsupported at this point; I think that if this comes back as a feature request it needs to go through the oslo.messaging program ** Changed in: nova Status: New = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1290679 Title: Nova Cells cannot work with zmq Status in OpenStack Compute (Nova): Won't Fix Bug description: I use OpenStack Nova Havana with ZeroMQ to build my environment. In my environment, there is a controller node, two child cell nodes and four compute nodes, as follows: http://pastebin.com/WtG0GVDv controller is the parent cell; cell1 and cell2 are child cells. However, when I start the nova services, there are many errors in controller, cell1 and cell2. Controller cells.log: http://pastebin.com/VBVMdDym cell1 cells.log: http://pastebin.com/LsbpGvYc cell2 cells.log: http://pastebin.com/Q91STJWn The following are my matchmaker_ring.json files: controller: http://pastebin.com/BduvHw3H cell1: http://pastebin.com/Lv4F9MHw cell2: http://pastebin.com/1aKMiZJx I think the impl_zmq driver must implement the functions cast_to_server and fanout_cast_to_server. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1290679/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1281748] Re: nova-compute crash due to missing DB column 'compute_nodes_1.metrics'
This is a support request, I see a solution attached ** Changed in: nova Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1281748 Title: nova-compute crash due to missing DB column 'compute_nodes_1.metrics' Status in OpenStack Compute (Nova): Invalid Bug description: Doing a fresh install from http://docs.openstack.org/trunk/install- guide/install/apt/content/index.html Running Ubuntu 12.04.4 LTS + cloud-archive:havana PPA as per guide. The only variation from the guide is that I have setup a RBD store for Glance. Here are the package versions: # dpkg -l | grep nova ii nova-ajax-console-proxy 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - AJAX console proxy - transitional package ii nova-api 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - API frontend ii nova-cert1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - certificate management ii nova-common 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - common files ii nova-compute 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - compute node ii nova-compute-kvm 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - compute node (KVM) ii nova-conductor 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - conductor service ii nova-consoleauth 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - Console Authenticator ii nova-doc 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - documentation ii nova-network 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - Network manager ii nova-novncproxy 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - NoVNC proxy ii nova-scheduler 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute - virtual machine scheduler ii python-nova 1:2013.2.1-0ubuntu1~cloud0 OpenStack Compute Python libraries ii python-novaclient1:2.15.0-0ubuntu1~cloud0 client library for OpenStack Compute API Everything is smooth up to setting up nova-compute. 
Other nova services seem to be working correctly, but when I got to deploying the first image the instance didn't start. I am attaching the nova- compute.log file. The nova-compute process crashes immediately when you attempt to start it. According to the errors it appears that there is a missing field (metrics) in the compute_nodes table. The error doesn't mention it, but the table also appears to be missing the extra_resources column as well. Here is what my MySQL says the schema is for compute_nodes CREATE TABLE `compute_nodes` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `deleted_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `service_id` int(11) NOT NULL, `vcpus` int(11) NOT NULL, `memory_mb` int(11) NOT NULL, `local_gb` int(11) NOT NULL, `vcpus_used` int(11) NOT NULL, `memory_mb_used` int(11) NOT NULL, `local_gb_used` int(11) NOT NULL, `hypervisor_type` mediumtext NOT NULL, `hypervisor_version` int(11) NOT NULL, `cpu_info` mediumtext NOT NULL, `disk_available_least` int(11) DEFAULT NULL, `free_ram_mb` int(11) DEFAULT NULL, `free_disk_gb` int(11) DEFAULT NULL, `current_workload` int(11) DEFAULT NULL, `running_vms` int(11) DEFAULT NULL, `hypervisor_hostname` varchar(255) DEFAULT NULL, `deleted` int(11) DEFAULT NULL, `host_ip` varchar(39) DEFAULT NULL, `supported_instances` text, `pci_stats` text, PRIMARY KEY (`id`), KEY `fk_compute_nodes_service_id` (`service_id`), CONSTRAINT `fk_compute_nodes_service_id` FOREIGN KEY (`service_id`) REFERENCES `services` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$ So it would appear that there is no metrics column to select as the error indicates. I thought this might have been an issue with `nova- manage db sync`, so I dropped, recreated, ran `nova-manage db sync` again on the DB, and restarted the services but I am getting the same issue. The nova-manage log for the db schema updates doesn't have anything that would indicate any issues (schema appears to be at version 216). 
[Yahoo-eng-team] [Bug 1274767] Re: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf
** Changed in: nova
   Status: Confirmed => Won't Fix

https://bugs.launchpad.net/bugs/1274767

Title: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf

Status in OpenStack Compute (Nova): Won't Fix

Bug description: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf

http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-full/4860441/logs/syslog.txt.gz

logstash query: message:bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf AND filename:logs/syslog.txt

Seen in the gate:

Jan 30 22:38:43 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 2 of /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:43 localhost dnsmasq-dhcp[3604]: read /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 2 of /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 3 of /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 4 of /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq-dhcp[3604]: read /opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
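dnsmasq emits "bad DHCP host name" when a dhcp-host entry contains characters outside the letters/digits/hyphens a hostname label allows. A guard like the following could filter instance names before they are written to nova-br100.conf; this is a sketch, and `is_valid_dhcp_hostname` is a hypothetical helper, not nova's code.

```python
import re

# RFC 952/1123-style label: letters, digits, hyphens; 1-63 chars;
# no leading or trailing hyphen.
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_dhcp_hostname(name):
    """Return True if `name` looks like a host name dnsmasq will accept
    in a dhcp-host entry (sketch only)."""
    return bool(name) and all(_LABEL.match(label) for label in name.split("."))

# Entries with an invalid name would trigger "bad DHCP host name" on read,
# so drop them before writing the conf file.
entries = [("fa:16:3e:00:00:01", "server-1"), ("fa:16:3e:00:00:02", "bad_name!")]
good = [(mac, host) for mac, host in entries if is_valid_dhcp_hostname(host)]
```

Underscores are the usual offender here: they are legal in many naming schemes but not in DNS host names, which is exactly the mismatch dnsmasq complains about.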
[Yahoo-eng-team] [Bug 1259323] Re: Libvirt console parameter incorrect for ARM KVM
Calxeda is no longer in business, so marking as Won't Fix. If other arm folks come forward, please feel free to reopen.

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: Triaged => Won't Fix

https://bugs.launchpad.net/bugs/1259323

Title: Libvirt console parameter incorrect for ARM KVM

Status in OpenStack Compute (Nova): Won't Fix

Bug description: If you configure nova to run on Calxeda ARM with libvirt/KVM, the generated libvirt configuration passes the console as:

<os> … <cmdline>root=/dev/vda console=tty0 console=ttyS0</cmdline> … </os>

For ARM guests the libvirt configuration should use 'console=ttyAMA0', hence as it stands you lose serial output for the guest. Currently the console settings are hard coded in nova/virt/libvirt/driver.py. I think we should modify that to be operator configurable via an option in nova.conf. I can submit a change accordingly, but would like feedback on whether this sounds reasonable.
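The report asks for the hard-coded console= string to become arch-aware or operator-configurable. A minimal sketch of the arch-dependent selection follows; the function names and mapping are illustrative, not nova's actual driver code.

```python
# Illustrative only: map guest architecture to the serial console device
# that should appear in the generated <cmdline>. The driver hard-coded
# "ttyS0", which is wrong for ARM guests (PL011 UART -> "ttyAMA0").
_CONSOLE_BY_ARCH = {
    "armv7l": "ttyAMA0",
    "aarch64": "ttyAMA0",
    "x86_64": "ttyS0",
    "i686": "ttyS0",
}

def guest_console_device(arch, default="ttyS0"):
    """Pick the console= device for a guest architecture (sketch)."""
    return _CONSOLE_BY_ARCH.get(arch, default)

def guest_cmdline(arch, root="/dev/vda"):
    """Build the kernel command line as the libvirt config would embed it."""
    return "root=%s console=tty0 console=%s" % (root, guest_console_device(arch))
```

A nova.conf option, as the reporter suggests, would simply replace the `default` argument here, letting operators override the lookup for unusual boards.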
[Yahoo-eng-team] [Bug 1361230] Re: ad248f6 jsonutils sync breaks if simplejson < 2.2.0 (under python 2.6)
** Changed in: oslo-incubator
   Milestone: None => juno-rc1

** Also affects: oslo.serialization
   Importance: Undecided
   Status: New

** Changed in: oslo.serialization
   Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: oslo.serialization
   Milestone: None => juno-rc1

** Changed in: oslo.serialization
   Status: New => Triaged

** Changed in: oslo.serialization
   Importance: Undecided => High

https://bugs.launchpad.net/bugs/1361230

Title: ad248f6 jsonutils sync breaks if simplejson < 2.2.0 (under python 2.6)

Status in OpenStack Identity (Keystone): Invalid
Status in The Oslo library incubator: Triaged
Status in Oslo library for sending and saving object: Triaged

Bug description: This keystone sync: https://github.com/openstack/keystone/commit/94efafd6d6066f63a9226a6b943d0e86699e7edd pulled in this change to jsonutils: https://review.openstack.org/#/c/113760/ which uses a flag in json.dumps that is only available in simplejson >= 2.2.0. If you don't have a new enough simplejson, the keystone database migrations fail. Keystone doesn't even list simplejson as a requirement, and oslo-incubator lists simplejson >= 2.0.9 only as a test-requirement, since it's optional in the code.
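One defensive pattern for this class of breakage is to probe whether the installed JSON backend accepts an optional keyword and fall back when it does not (older simplejson raises TypeError for unknown kwargs). This is a sketch of the pattern only, not the oslo jsonutils implementation, and `sort_keys` here is just a stand-in keyword for illustration.

```python
import json

def dumps_compat(obj, **kwargs):
    """Serialize `obj`, dropping optional keywords the installed JSON
    library rejects. Sketch of a compatibility shim, not oslo's code."""
    try:
        return json.dumps(obj, **kwargs)
    except TypeError:
        # The backend does not support one of the optional keywords
        # (e.g. an old simplejson); retry with defaults only.
        return json.dumps(obj)

result = dumps_compat({"b": 1, "a": 2}, sort_keys=True)
```

The alternative taken in practice is to pin the requirement (`simplejson>=2.2.0`) rather than branch at runtime; the shim above just shows why the failure surfaced as a TypeError during migrations.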
[Yahoo-eng-team] [Bug 1365169] [NEW] Endpoint grouping extension does not handle deletion callbacks
Public bug reported: If a project or endpoint group is deleted, the endpoint grouping extension should respond by deleting the associated data. Instead, stale data remains in the backend.

** Affects: keystone
   Importance: Medium
   Assignee: Bob Thyne (bob-thyne)
   Status: In Progress

https://bugs.launchpad.net/bugs/1365169

Title: Endpoint grouping extension does not handle deletion callbacks

Status in OpenStack Identity (Keystone): In Progress
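The fix is for the extension to register cleanup callbacks that fire when a project or endpoint group is deleted. A generic sketch of that registry pattern follows; the names (`on_delete`, `delete_resource`, the store layout) are hypothetical and not keystone's notification API.

```python
from collections import defaultdict

# Hypothetical registry: resource type -> cleanup callbacks to run on delete.
_callbacks = defaultdict(list)

def on_delete(resource_type, callback):
    """Register a callback invoked whenever a resource of this type is deleted."""
    _callbacks[resource_type].append(callback)

def delete_resource(resource_type, resource_id, store):
    """Delete the resource, then run registered cleanups so no stale
    associations are left behind."""
    store[resource_type].pop(resource_id, None)
    for cb in _callbacks[resource_type]:
        cb(resource_id)

# Backend with a project and an endpoint-group association that would
# otherwise go stale after the project is deleted.
store = {"project": {"p1": {}}, "endpoint_group_assoc": {("eg1", "p1"): {}}}

def _purge_assocs(project_id):
    for key in list(store["endpoint_group_assoc"]):
        if key[1] == project_id:
            store["endpoint_group_assoc"].pop(key)

on_delete("project", _purge_assocs)
delete_resource("project", "p1", store)
```

Without the registered callback, the `("eg1", "p1")` association would survive the project deletion, which is exactly the stale-data symptom the bug describes.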
[Yahoo-eng-team] [Bug 1308983] Re: 'NameError: global name 'endpoint' is not defined' after browser crash
The method throwing the exception, delete_all_tokens, last existed in version 1.1.2 of django_openstack_auth. The current version is 1.1.6. Running an out-of-date version of django_openstack_auth will miss bug fixes and can potentially cause all sorts of exceptions. If a similar problem occurs with an up-to-date version of django_openstack_auth, please post that here and I'll reopen the bug.

** Changed in: horizon
   Status: New => Incomplete

** Changed in: horizon
   Status: Incomplete => Won't Fix

** Changed in: horizon
   Milestone: juno-rc1 => None

https://bugs.launchpad.net/bugs/1308983

Title: 'NameError: global name 'endpoint' is not defined' after browser crash

Status in OpenStack Dashboard (Horizon): Won't Fix

Bug description: I was working on horizon when my firefox browser stopped responding and I had to force shut down the browser and restart it. Once I restored the session, none of the objects could be retrieved by horizon, and when I logged out I could not log in (http access error). The only way I was able to refresh the session was logging in with the hostname instead of the ip, which somehow refreshed the session and allowed me to log in.

Version-Release number of selected component (if applicable):
python-httplib2-0.7.2-1.el6.noarch
httpd-tools-2.2.15-29.el6_4.x86_64
httpd-2.2.15-29.el6_4.x86_64
firefox-17.0.10-1.el6_4.x86_64
rhel-csb-firefox-config-0.1-1.el6.rhis.x86_64
python-django-horizon-2013.2.3-1.el6ost.noarch

How reproducible: unknown - below is what happened to me.

Steps to Reproduce:
1. open a session to horizon in firefox using ip (http://10.35.x.x)
2. crash firefox (I suppose killing -9 the pid should simulate this)
3. restore the browser
4. log out/log in with ip again
5.
log in with dnsname of host (http://hostname)

Actual results: we fail to get objects from the setup, and when we log out we fail to log in again until we change from the ip to the dns name of the host.

Expected results: we should refresh the session after a crash in the browser.

Additional info:
[Thu Apr 17 10:03:10 2014] [error] RESP BODY: {"limits": {"rate": [], "absolute": {"maxServerMeta": 128, "maxPersonality": 5, "maxImageMeta": 128, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxTotalKeypairs": 100, "totalRAMUsed": 4096, "totalInstancesUsed": 1, "maxSecurityGroups": 10, "totalFloatingIpsUsed": 0, "maxTotalCores": 20, "totalSecurityGroupsUsed": 0, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "totalCoresUsed": 2, "maxTotalRAMSize": 51200}}}
[Thu Apr 17 10:07:11 2014] [error] Exception in thread Thread-5:
[Thu Apr 17 10:07:11 2014] [error] Traceback (most recent call last):
[Thu Apr 17 10:07:11 2014] [error]   File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
[Thu Apr 17 10:07:11 2014] [error]     self.run()
[Thu Apr 17 10:07:11 2014] [error]   File "/usr/lib64/python2.6/threading.py", line 484, in run
[Thu Apr 17 10:07:11 2014] [error]     self.__target(*self.__args, **self.__kwargs)
[Thu Apr 17 10:07:11 2014] [error]   File "/usr/lib/python2.6/site-packages/openstack_auth/views.py", line 93, in delete_all_tokens
[Thu Apr 17 10:07:11 2014] [error]     endpoint=endpoint,
[Thu Apr 17 10:07:11 2014] [error] NameError: global name 'endpoint' is not defined
[Thu Apr 17 10:46:28 2014] [error] Exception in thread Thread-6:
[Thu Apr 17 10:46:28 2014] [error] Traceback (most recent call last):
[Thu Apr 17 10:46:28 2014] [error]   File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
[Thu Apr 17 10:46:28 2014] [error]     self.run()
[Thu Apr 17 10:46:28 2014] [error]   File "/usr/lib64/python2.6/threading.py", line 484, in run
[Thu Apr 17 10:46:28 2014] [error]     self.__target(*self.__args, **self.__kwargs)
[Thu Apr 17 10:46:28 2014] [error]   File "/usr/lib/python2.6/site-packages/openstack_auth/views.py", line 93, in delete_all_tokens
[Thu Apr 17 10:46:28 2014] [error]     endpoint=endpoint,
[Thu Apr 17 10:46:28 2014] [error] NameError: global name 'endpoint' is not defined
[Thu Apr 17 10:55:03 2014] [error] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 10.35.160.71
[Thu Apr 17 10:55:03 2014] [error] DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1309
[Thu Apr 17 10:55:03 2014] [error] DEBUG:iso8601.iso8601:Parsed 2014-04-18T10:55:03Z into {'tz_sign': None, 'second_fraction': None, 'hour': u'10', 'tz_hour': None, 'month': u'04', 'timezone': u'Z', 'second': u'03', 'tz_minute': None, 'year': u'2014', 'separator': u'T', 'day': u'18', 'minute': u'55'} with default timezone <iso8601.iso8601.Utc object at 0x7f973cec5750>
[Yahoo-eng-team] [Bug 1064854] Re: nova should have a option to reset(or delete) the user quota to default
Agreed, looks like this is already implemented.

** Changed in: nova
   Status: New => Fix Released

** Changed in: python-novaclient
   Status: New => Fix Released

https://bugs.launchpad.net/bugs/1064854

Title: nova should have a option to reset(or delete) the user quota to default

Status in OpenStack Compute (Nova): Fix Released
Status in Python client library for Nova: Fix Released

Bug description: At present the nova client has the following commands for quota operations:

$ nova --help | grep quota
  quota-defaults   List the default quotas for a tenant.
  quota-show       List the quotas for a tenant.
  quota-update     Update the quotas for a tenant.

It would be very helpful to have a command to reset (or delete) quota values to the defaults, for example for a user who wants to run heavy tests on the system and roll back once the tests are done. So a new command, quota-reset, needs to be added to the nova client which reverts the quota value supplied for the tenant to the default, something similar to `nova quota-reset <tenant-id> <key>`. We can use nova quota-defaults to list the default quotas and then use quota-update to set the quotas back to the defaults, but the problem with this approach is that if you then change the default quotas, they are not reflected for the tenant. A similar discussion I started here: https://lists.launchpad.net/openstack/msg17306.html
[Yahoo-eng-team] [Bug 1365226] [NEW] Add security group to running instance with nexus monolithic plugin throws error
Public bug reported: Adding a new security group to an existing instance with the cisco nexus plugin (provider network) throws the following error:

2014-09-02 20:10:22.116 52259 INFO neutron.wsgi [req-091df3c8-7bdb-42b5-801a-a26a650a451a None] (52259) accepted ('172.21.9.134', 39434)
2014-09-02 20:10:22.211 52259 ERROR neutron.plugins.cisco.models.virt_phy_sw_v2 [-] Unable to update port '' on Nexus switch
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 Traceback (most recent call last):
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2   File "/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py", line 405, in update_port
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2     self._invoke_nexus_for_net_create(context, *create_args)
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2   File "/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py", line 263, in _invoke_nexus_for_net_create
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2     [network, attachment])
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2   File "/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py", line 148, in _invoke_plugin_per_device
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2     return func(*args, **kwargs)
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2   File "/usr/lib/python2.7/site-packages/neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py", line 79, in create_network
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2     raise cisco_exc.NexusComputeHostNotConfigured(host=host)
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 NexusComputeHostNotConfigured: Connection to None is not configured.
2014-09-02 20:10:22.256 52259 INFO neutron.api.v2.resource [req-8ebea742-09b5-416b-820f-69461c496319 None] update failed (client error): Connection to None is not configured.
2014-09-02 20:10:22.257 52259 INFO neutron.wsgi [req-8ebea742-09b5-416b-820f-69461c496319 None] 172.21.9.134 - - [02/Sep/2014 20:10:22] PUT //v2.0/ports/c2e6b716-5c7d-4d23-ab78-ecd2a649469b.json HTTP/1.1 404 322 0.140213

** Affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1365226

Title: Add security group to running instance with nexus monolithic plugin throws error

Status in OpenStack Neutron (virtual network service): New
[Yahoo-eng-team] [Bug 1365228] [NEW] Rename cli variable in ironic driver
Public bug reported: In nova/virt/ironic/driver.py there is the IronicDriver class. It abbreviates references to the ironicclient as 'icli'. This should be unabbreviated to make the code clearer. This came up as part of https://review.openstack.org/#/c/111425/19/nova/virt/ironic/driver.py

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1365228

Title: Rename cli variable in ironic driver

Status in OpenStack Compute (Nova): New
[Yahoo-eng-team] [Bug 1365230] [NEW] Improve ironic driver logging configurability
Public bug reported: As part of review https://review.openstack.org/#/c/111425/19/nova/virt/ironic/driver.py it was suggested that logging configurability be addressed in the ironic driver. A number of different viewpoints exist regarding whether the ironic driver's logging should be independently configurable, and whether there should be a way to turn down the chattiness of the driver for operator sanity. This is raised as a bug so that the right solution for nova can be considered and implemented independently.

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1365230

Title: Improve ironic driver logging configurability

Status in OpenStack Compute (Nova): New
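One of the viewpoints mentioned above, a per-driver verbosity knob, falls out of standard hierarchical logging: give the driver its own named logger and let an operator set its level independently of the rest of nova. The sketch below uses only the stdlib; the logger name matches the module path but the `set_driver_verbosity` helper is hypothetical, not a nova API.

```python
import logging

# The driver's own logger; its level can be tuned without touching the
# root logger or other nova modules.
driver_log = logging.getLogger("nova.virt.ironic.driver")

def set_driver_verbosity(level_name):
    """Turn the driver's chattiness up or down (sketch; in nova this
    would be wired to a config option)."""
    driver_log.setLevel(getattr(logging, level_name.upper()))

set_driver_verbosity("warning")
noisy = driver_log.isEnabledFor(logging.DEBUG)   # driver debug spam suppressed
```

Because child loggers inherit from their parents, `nova.virt.ironic.driver` can be quieted while `nova.virt` stays at DEBUG, which is the operator-sanity case the bug describes.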
[Yahoo-eng-team] [Bug 1365255] [NEW] ofagent: possible crash in arp responder
Public bug reported: ofagent's arp responder does not handle possible exceptions when parsing packet-in data.

** Affects: neutron
   Importance: Undecided
   Assignee: YAMAMOTO Takashi (yamamoto)
   Status: In Progress

https://bugs.launchpad.net/bugs/1365255

Title: ofagent: possible crash in arp responder

Status in OpenStack Neutron (virtual network service): In Progress
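Packet-in payloads are attacker-influenced bytes, so the parser must treat truncated or malformed input as data, not as an exception path that kills the agent. A minimal sketch of that defensive parsing follows; it decodes the fixed ARP header with the stdlib `struct` module, whereas the real ofagent code parses packets via ryu.lib.packet.

```python
import struct

def parse_arp_header(data):
    """Parse the fixed 8-byte ARP header from raw packet-in bytes, returning
    None on malformed input instead of raising (illustrative sketch)."""
    try:
        htype, ptype, hlen, plen, oper = struct.unpack_from("!HHBBH", data)
    except struct.error:
        return None          # truncated or garbage payload
    if htype != 1 or ptype != 0x0800:
        return None          # not Ethernet/IPv4 ARP; ignore rather than crash
    return {"oper": oper, "hlen": hlen, "plen": plen}

# An ARP request header, and a truncated packet that would otherwise raise.
valid = parse_arp_header(struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1))
broken = parse_arp_header(b"\x00\x01")
```

The fix tracked by this bug is essentially the try/except above: swallow parse failures for individual packets so one bad frame cannot take down the responder.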
[Yahoo-eng-team] [Bug 1365259] [NEW] Lack of failure information (failure due to host key verification failure) displayed during instance migration from one host to another
Public bug reported: I am using a (1 controller + compute1 + compute2) openstack environment. When live migrating a server from one compute host to another using the CLI: if the migration fails due to a host key verification failure between the compute hosts, the failure information should be output to the console; otherwise the user has no way of knowing what is happening. To the user the migration looks successful, but it has actually failed. The set of operations is as follows:

1. root@nechldcst-PowerEdge-2950:# nova list

+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks              |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
| 1aea212b-0bee-498b-a10d-5b58a69e3293 | test-server | ACTIVE | -          | Running     | demo-net=203.0.113.26 |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+

2. root@nechldcst-PowerEdge-2950:# nova migrate 1aea212b-0bee-498b-a10d-5b58a69e3293
root@nechldcst-PowerEdge-2950:#

At this point the user thinks that the migration was successful, but see below:

3. root@nechldcst-PowerEdge-2950:# nova list

+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks              |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
| 1aea212b-0bee-498b-a10d-5b58a69e3293 | test-server | ERROR  | -          | Running     | demo-net=203.0.113.26 |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+

4. root@nechldcst-PowerEdge-2950:# nova show 1aea212b-0bee-498b-a10d-5b58a69e3293

+-------------------------------------+------------------------+
| Property                            | Value                  |
+-------------------------------------+------------------------+
| OS-DCF:diskConfig                   | MANUAL                 |
| OS-EXT-AZ:availability_zone         | nova                   |
| OS-EXT-SRV-ATTR:host                | compute2               |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute2               |
| OS-EXT-SRV-ATTR:instance_name       | instance-0003          |
| OS-EXT-STS:power_state              | 1                      |
| OS-EXT-STS:task_state               | -                      |
| OS-EXT-STS:vm_state                 | error                  |
| OS-SRV-USG:launched_at              | 2014-09-04T03:41:08.00 |
| OS-SRV-USG:terminated_at            | -                      |
| accessIPv4                          |                        |
| accessIPv6                          |                        |
| config_drive                        |                        |
| created                             | 2014-09-04T03:41:06Z   |
| demo-net network