[Yahoo-eng-team] [Bug 1361683] Re: Instance pci_devices and security_groups refreshing can break backporting
Since the bug reporter hasn't provided the information requested by Sean, closing it for now. Feel free to reopen the bug by providing the requested information and set the bug status back to ''New''.

** Changed in: nova
       Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361683

Title:
  Instance pci_devices and security_groups refreshing can break backporting

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In the Instance object, on a remotable operation such as save(), we refresh the pci_devices and security_groups with the information we get back from the database. Since this *replaces* the objects currently attached to the instance object (which might be backlevel) with current versions, an older client could get a failure upon deserializing the result. We need to figure out some way to either backport the results of remotable methods, or put matching backlevel objects into the instance during the refresh in the first place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
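The failure mode the bug describes can be illustrated with a minimal sketch. This is not nova's real object code (nova uses oslo-style versioned objects); the class names, fields, and string-based version compare are simplifications for illustration only. The point is that a nested object must be serialized at ("backported to") the version the remote client understands, not at the current version:

```python
# Minimal sketch (NOT nova's actual code) of why refreshing nested objects
# with current-version copies can break older clients: a versioned object
# must be backported to the client's known version before serialization.

class PciDevice:
    VERSION = '1.2'

    def __init__(self, address, parent_addr=None):
        self.address = address
        self.parent_addr = parent_addr  # hypothetical field added in 1.2

    def to_primitive(self, target_version):
        """Serialize, backporting to the version the client understands."""
        # String comparison works for these toy versions; real code
        # compares parsed version tuples.
        prim = {'version': target_version, 'address': self.address}
        if target_version >= '1.2':
            prim['parent_addr'] = self.parent_addr
        return prim


class OldClient:
    """A backlevel client that only knows PciDevice 1.1."""
    KNOWN_VERSION = '1.1'

    def deserialize(self, prim):
        if prim['version'] > self.KNOWN_VERSION:
            raise ValueError('unsupported version %s' % prim['version'])
        return PciDevice(prim['address'])


dev = PciDevice('0000:00:1f.2', parent_addr='0000:00:1c.0')
client = OldClient()

# Sending the refreshed, current-version object blows up on the old
# client -- this is the failure mode the bug describes.
try:
    client.deserialize(dev.to_primitive('1.2'))
except ValueError:
    pass

# Backporting to the client's version before sending succeeds.
old = client.deserialize(dev.to_primitive('1.1'))
assert old.address == '0000:00:1f.2'
```

The two remedies named in the report map onto this sketch: either backport the result of the remotable method before returning it (call to_primitive with the client's version), or install already-backlevel objects during the refresh so no downgrade is needed at serialization time.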
[Yahoo-eng-team] [Bug 1400574] Re: Create VMs sometimes fail when using a Mellanox NIC for SR-IOV
Since the bug reporter hasn't provided the necessary information, the bug has been closed. Feel free to reopen the bug by providing the requested information and set the bug status back to ''New''.

** Changed in: nova
       Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400574

Title:
  Create VMs sometimes fail when using a Mellanox NIC for SR-IOV

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  SYMPTOM: I used a Mellanox NIC for SR-IOV, and creating VMs is unreliable (some creates succeed, some fail). Traffic is also affected: the VLAN of some VMs is wrong.

  CAUSE: Because of a peculiarity of Mellanox NICs, one PCI address corresponds to two physical network ports, so nova only discovers one of the two ports behind the PCI address and is unaware of the other. In my environment, eth0 has three available VF resources and eth1 has four.

  The relevant lines of nova-compute.conf:

  pci_passthrough_whitelist={"devname":"eth1","physical_network":"sriov_net2","bandwidths":"0"}
  pci_passthrough_whitelist={"devname":"eth0","physical_network":"sriov_net","bandwidths":"1"}

  Even with the whitelist correctly configured, nova cannot obtain VF resource information for both network ports: it scans by PCI address, and once it has matched one port for an address it never scans the remaining port. The network behind the unscanned port is left with no VF resources, so creating a VM on that network fails. Worse, the port that was scanned claims the VF resources of the other port, which is why some of the earlier VMs end up with the wrong VLAN.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400574/+subscriptions
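The cause described above can be modeled in a few lines. This is a toy model (hypothetical PCI address, not nova's real whitelist-matching code): when device discovery is keyed by PCI address and two ports share one address, the second whitelist entry effectively shadows the first, leaving one physical network with no VF pool at all:

```python
# Toy model (NOT nova's real PCI manager) of two ports sharing one PCI
# address. Keying pools by address collapses the two whitelist entries
# into one, so one physical network ends up with no VF resources.
whitelist = [
    {'devname': 'eth1', 'physical_network': 'sriov_net2'},
    {'devname': 'eth0', 'physical_network': 'sriov_net'},
]
# Hypothetical: both ports of the Mellanox card report the same address.
ports = {'eth0': '0000:05:00.0', 'eth1': '0000:05:00.0'}

pools = {}
for spec in whitelist:
    addr = ports[spec['devname']]
    # Keyed by address, the later entry silently overwrites the earlier one.
    pools[addr] = spec['physical_network']

# Only one of the two physical networks gets a VF pool.
assert len(pools) == 1
```

Under this model, boot requests against the shadowed network fail for lack of VFs, and the surviving pool hands out VFs that really belong to the other port, matching the wrong-VLAN symptom.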
[Yahoo-eng-team] [Bug 1043148] Re: snapshots fail with client read timeout when using swift
Closing this bug based on Sam's comment.

** Changed in: nova
       Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1043148

Title:
  snapshots fail with client read timeout when using swift

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Apologies if this is a glance or swift bug (or not a bug at all!) but I think I've nailed it down to nova.

  Setup:
  * We are using the latest nova in Ubuntu precise.
  * Using swift as a backend to glance.
  * Compute nodes run nova-compute and nova-network (and have 10G ethernet).
  * glance-api and swift-proxy are installed on the same host (which also has 10G ethernet).

  When snapshotting instances we regularly see the snapshot fail. Sometimes it works, sometimes it fails adding the 1st hunk, and sometimes it fails after a few. The logs below show it failing after 34 hunks have been added to swift successfully (it takes around 3 seconds to PUT a hunk until the error).

  The reason I think this has something to do with nova is that I can successfully use the glance client from the compute node to upload the image. There's a lot of log info below; happy to provide more information if needed. It's been bugging us for some time. I think the client read timeout is between glance and nova: glance and swift are on the same host, so I doubt they would time out between themselves.

  Thanks in advance.
  Sam

  Nova:

  2012-08-29 07:33:08 ERROR nova.rpc.amqp [req-33589abd-db75-4a1f-b36c-a64f41e8862f 25 23] Exception during message handling
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-08-29 07:33:08 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Locals:{'args': {u'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'image_type': u'snapshot',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'instance_uuid': u'2fb5ba40-0f61-4360-bb57-f28871f7cebf',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'rotation': None},
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'ctxt': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'e': Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'method': u'snapshot_instance',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'node_args': {'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_type': u'snapshot',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'instance_uuid': u'2fb5ba40-0f61-4360-bb57-f28871f7cebf',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'rotation': None},
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'node_func': >,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'self': }
  2012-08-29 07:33:08 TRACE nova.rpc.amqp
  2012-08-29 07:33:08 TRACE nova.rpc.amqp
  2012-08-29 07:33:08 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Locals:{'args': (,),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'e': Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'event_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'exc_info': (,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp ),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'f': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'kw': {'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'context': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_type': u'snapshot',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'instance_uuid': u'2fb5ba40-0f61-4360-bb57-f28871f7cebf',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'rotation': None},
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'level': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'notifier': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'payload': {'args': (,),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'context': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'exception': Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp
[Yahoo-eng-team] [Bug 1560472] Re: nova interface-attach command removes pre-existing neutron ports from the environment if it fails to attach to an instance _even_ where '--port-id' has been specified
This bug lacks the necessary information to effectively reproduce and fix it, therefore it has been closed. Feel free to reopen the bug by providing the requested information and set the bug status back to ''New''.

** Changed in: nova
       Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560472

Title:
  nova interface-attach command removes pre-existing neutron ports from the environment if it fails to attach to an instance _even_ where '--port-id' has been specified

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Problem description: The nova interface-attach command removes pre-existing neutron ports from the environment if it fails to attach to an instance _even_ where '--port-id' has been specified. This behaviour was introduced by the fix for bug #1338551 [1].

  Steps to reproduce:

  1) Create a new neutron port
     $ neutron port-create --name 

  2) Boot an instance (make sure to specify a keypair and check sec groups for ssh connectivity to the instance)
     $ nova boot ...

  3) [OPTIONAL] Add/remove the port several times over to prove the functionality is working OK.
     $ nova interface-attach --port-id 
     $ nova interface-detach 

  4) Simulate a kernel crash on the instance, as this should cause a scenario where an interface attach will fail (ssh connectivity is assumed for this step).
     $ ssh "sudo kill -11 1"  # OR execute 'echo c > /proc/sysrq-trigger' while connected to the instance

  4a) Verify the kernel has actually crashed.
     $ nova console-log 

  5) Try to attach the port while the instance is still crashed. **Note**: if the port hasn't been attached before (i.e. you skipped step 3), the attach may succeed initially, then fail on subsequent attempts. Also, at this point it should not matter if the port is still attached to the instance.
  $ nova interface-attach --port-id 

  Errors observed:

  $ nova interface-attach --port-id 
  ERROR: Failed to attach interface (HTTP 500) (Request-ID: req-----)

  Expected results: The port should still exist after failure in this scenario.

  Actual results: 'neutron port-list' will no longer show the port. It has been removed from the environment and is therefore no longer available.

  Snippet from /var/log/nova/nova-compute.log:

  [instance: ----] attaching network adapter failed.
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1263, in attach_interface
      virt_dom.attachDeviceFlags(cfg.to_xml(), flags)
    File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
      result = proxy_call(self._autowrap, f, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
      rv = execute(f, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
      six.reraise(c, e, tb)
    File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
      rv = meth(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/libvirt.py", line 513, in attachDeviceFlags
      if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
  libvirtError: Unable to create tap device tap-xx: Device or resource busy
  attach interface failed , try to deallocate port ----, reason: Failed to attach network adapter device to ----
  Exception during message handling: Failed to attach network adapter device to ----

  Full error: http://pastebin.ubuntu.com/15471511/

  $ sudo apt-cache policy nova-compute
  nova-compute:
    Installed: 1:2015.1.2-0ubuntu2~cloud0

  Ubuntu 14.04.4 LTS

  Why does this matter: As specified in [1], where a port has been attached using the --net-id option it is automatically created before being attached to the VM, so it is correct behaviour to clean it up after a failure to attach.
  Where "--port-id" has been specified, however, it should not be assumed that the port was auto-created: it has been specifically created and may therefore have pre-existed the VM. This means the port should be re-usable if desired, and should not be cleaned up on attach failure. When the port has been pre-created and '--port-id' is specified in the interface-attach command, a failure to attach should be handled without the port being removed from the environment, so that it remains available for re-assignment to another instance or for a retry against the original instance once it has recovered from its failure. This behaviour is confirmed on both Kilo and Liberty.

  Related bugs: [1]
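The fix the reporter argues for can be sketched in a few lines. The class and function names below are hypothetical illustrations, not nova's real driver or network API; the point is simply to track whether the attach path created the port itself, and deallocate only in that case:

```python
# Sketch (hypothetical names, NOT nova's real code) of attach-failure
# cleanup that preserves user-supplied ports and deletes only ports the
# attach path created itself.

class FakeNetworkAPI:
    """Stand-in for a neutron client holding a port table."""

    def __init__(self):
        self.ports = {'pre-made': 'ACTIVE'}  # a pre-existing user port

    def create_port(self):
        self.ports['auto-made'] = 'ACTIVE'
        return 'auto-made'

    def delete_port(self, port_id):
        self.ports.pop(port_id, None)


def attach_interface(api, port_id=None, fail=True):
    """Attach a port; on failure, clean up only what we created."""
    created = port_id is None        # --net-id path: nova makes the port
    if created:
        port_id = api.create_port()
    try:
        if fail:                     # simulate the libvirt attach error
            raise RuntimeError('virDomainAttachDeviceFlags() failed')
    except RuntimeError:
        if created:
            api.delete_port(port_id)  # auto-created: safe to deallocate
        return False                  # user-supplied port is left alone
    return True


api = FakeNetworkAPI()
attach_interface(api, port_id='pre-made', fail=True)
assert 'pre-made' in api.ports        # pre-existing port survives
attach_interface(api, port_id=None, fail=True)
assert 'auto-made' not in api.ports   # auto-created port is cleaned up
```

This mirrors the distinction the description draws: the --net-id path auto-creates the port, so cleanup is correct there; the --port-id path should leave the port available for a retry or re-assignment.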
[Yahoo-eng-team] [Bug 1490238] Re: Configdrive fails to properly display within Windows Guest (Xenapi)
This bug lacks the necessary information, therefore it has been closed. Feel free to reopen the bug by providing the requested information and set the bug status back to ''New''.

** Changed in: nova
       Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490238

Title:
  Configdrive fails to properly display within Windows Guest (Xenapi)

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Windows guests within a XenServer environment currently cannot have a config drive properly attached unless the environment has the following set in its nova.conf:

  config_drive_format=vfat

  This issue ultimately results from this value defaulting to ISO9660 (CDFS) while the VBD object used for it is a disk (the nova.virt.xenapi.vm_utils.create_vbd default). In testing, while the VBD is attached without issue and in the proper state, I was unable to get this drive to show up within Windows at all: it was not detected in the GUI, nor in Windows PowerShell. This can be addressed by detecting the nova.conf configuration setting and adjusting the VBD attach accordingly. I will be submitting a follow-up commit shortly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490238/+subscriptions
[Yahoo-eng-team] [Bug 1514550] Re: nova.cells.messaging.instance_update_at_top is assuming it gets an Instance object
It was decided not to work on cells bugs, as cells v1 is going to be deprecated and a move to cells v2 is being planned.

** Changed in: nova
       Status: In Progress => Won't Fix

** Changed in: nova
     Assignee: Kasey Alusi (kasey-alusi) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1514550

Title:
  nova.cells.messaging.instance_update_at_top is assuming it gets an Instance object

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  This code assumes that the instance parameter is a nova Instance object:

  https://github.com/openstack/nova/blob/86fe90f7056432416ea3c2335ea8c2ad6e16b79a/nova/cells/messaging.py#L1020

  But if you're using cells RPC API < 1.35 it's a primitive dict:

  https://github.com/openstack/nova/blob/86fe90f7056432416ea3c2335ea8c2ad6e16b79a/nova/cells/rpcapi.py#L205

  This was introduced with eaaa659333c7586a71155c065dfb0f7b7e3758fc in 12.0.0 (Liberty).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1514550/+subscriptions
[Yahoo-eng-team] [Bug 1591434] Re: get 'itemNotFound' when flavor-show flavor_name
Hi PanFengyun! I was able to get flavor details using the nova flavor-show command. In order to add/change the API, please change the bug description or log a different bug. Regards, Siva.

** Changed in: nova
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591434

Title:
  get 'itemNotFound' when flavor-show flavor_name

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I use 'flavor-show' to get the detail info of a flavor, novaclient gets "Flavor m1.small could not be found".

  1. Create the m1.small flavor:

  $ nova flavor-list | grep m1.small
  | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |

  2. Get the detail info of the m1.small flavor:

  $ nova --debug flavor-show m1.small
  ...
  RESP BODY: {"itemNotFound": {"message": "Flavor m1.small could not be found.", "code": 404}}
  ...
  +----------------------------+----------+
  | Property                   | Value    |
  +----------------------------+----------+
  | OS-FLV-DISABLED:disabled   | False    |
  | OS-FLV-EXT-DATA:ephemeral  | 0        |
  | disk                       | 20       |
  | extra_specs                | {}       |
  | id                         | 2        |
  | name                       | m1.small |
  | os-flavor-access:is_public | True     |
  | ram                        | 2048     |
  | rxtx_factor                | 1.0      |
  | swap                       |          |
  | vcpus                      | 1        |
  +----------------------------+----------+

  Reason: nova does not allow the user to get a flavor by name. Nova only has get_flavor_by_flavor_id(), and has no get_flavor_by_flavor_name():

  def show(self, req, id):
      """Return data about the given flavor id."""
      context = req.environ['nova.context']
      try:
          # just get_flavor_by_flavor_id
          flavor = flavors.get_flavor_by_flavor_id(id, ctxt=context)
          req.cache_db_flavor(flavor)
      except exception.FlavorNotFound as e:
          raise webob.exc.HTTPNotFound(explanation=e.format_message())

      return self._view_builder.show(req, flavor)

  Helpful: add a get_flavor_by_flavor_name() into show().

  Reason: novaclient only allows the user to create a flavor with a unique id and a unique name, so we could get a flavor by either the id or the name by adding get_flavor_by_flavor_name(). The positional arguments of flavor-create are: Unique name of the new flavor. Unique ID of the new flavor.
  Specifying 'auto' will generate a UUID for the ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591434/+subscriptions
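The reporter's suggestion above amounts to a by-name fallback when the by-id lookup misses, which is unambiguous because both id and name are unique. A minimal sketch against a toy flavor store (these helpers are illustrations, not nova's actual flavors module):

```python
# Sketch of the proposed lookup: try by id, fall back to by name.
# Toy in-memory store; not nova's real flavors API.

FLAVORS = [{'id': '2', 'name': 'm1.small', 'ram': 2048}]


class FlavorNotFound(Exception):
    pass


def get_flavor_by_flavor_id(flavor_id):
    for f in FLAVORS:
        if f['id'] == flavor_id:
            return f
    raise FlavorNotFound(flavor_id)


def get_flavor(id_or_name):
    """Resolve a flavor by id, falling back to its (unique) name."""
    try:
        return get_flavor_by_flavor_id(id_or_name)
    except FlavorNotFound:
        # Names are unique too, so the fallback cannot be ambiguous.
        for f in FLAVORS:
            if f['name'] == id_or_name:
                return f
        raise


assert get_flavor('2')['name'] == 'm1.small'        # by id, as today
assert get_flavor('m1.small')['id'] == '2'          # by name fallback
```

One caveat worth noting with such a scheme: if a flavor's name happens to collide with another flavor's id, the id match wins, so the ordering of the two lookups matters.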
[Yahoo-eng-team] [Bug 1516536] Re: image-list should check filter correctness
Since this command is deprecated in the 15.0.0 release, this bug won't be fixed.

** Changed in: nova
       Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516536

Title:
  image-list should check filter correctness

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  curl -g -i -X GET http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/images/detail?status=ACTIVE

  gets the correct image list; however (note status --> stats),

  curl -g -i -X GET http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/images/detail?stats=ACTIVE

  still returns the same result. We should report that this is an incorrect param.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1516536/+subscriptions
[Yahoo-eng-team] [Bug 1376316] Re: nova absolute-limits floating ip count is incorrect in a neutron based deployment
We should not be tracking usage of network-related resources in Nova when using Neutron. The patch below filters network-related limits out of the API response:

https://review.openstack.org/#/c/344947/7

You can check this by trying a curl request with OpenStack-API-Version: compute 2.36:

curl -g -i -X GET http://192.168.0.31:8774/v2.1/limits -H "OpenStack-API-Version: compute 2.36" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.32" -H "X-Auth-Token: $OS_TOKEN"

** Changed in: nova
       Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376316

Title:
  nova absolute-limits floating ip count is incorrect in a neutron based deployment

Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  In Progress

Bug description:
  1. $ lsb_release -rd
     Description: Ubuntu 14.04 LTS
     Release: 14.04

  2. $ apt-cache policy python-novaclient
     python-novaclient:
       Installed: 1:2.17.0-0ubuntu1
       Candidate: 1:2.17.0-0ubuntu1
       Version table:
      *** 1:2.17.0-0ubuntu1 0
             500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
             100 /var/lib/dpkg/status

  3. nova absolute-limits should report the correct value of allocated floating ips

  4.
  nova absolute-limits shows 0 floating ips when I have 5 allocated:

  $ nova absolute-limits | grep Floating
  | totalFloatingIpsUsed | 0  |
  | maxTotalFloatingIps  | 10 |

  $ nova floating-ip-list
  +---------------+-----------+------------+---------+
  | Ip            | Server Id | Fixed Ip   | Pool    |
  +---------------+-----------+------------+---------+
  | 10.98.191.146 |           | -          | ext_net |
  | 10.98.191.100 |           | 10.5.0.242 | ext_net |
  | 10.98.191.138 |           | 10.5.0.2   | ext_net |
  | 10.98.191.147 |           | -          | ext_net |
  | 10.98.191.102 |           | -          | ext_net |
  +---------------+-----------+------------+---------+

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: python-novaclient 1:2.17.0-0ubuntu1
  ProcVersionSignature: User Name 3.13.0-24.47-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.2
  Architecture: amd64
  Date: Wed Oct 1 15:19:08 2014
  Ec2AMI: ami-0001
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: python-novaclient
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376316/+subscriptions
[Yahoo-eng-team] [Bug 1456899] Re: nova absolute-limits Security groups count incorrect when using Neutron
As John had mentioned in his comment, we should not be tracking usage of security groups in Nova. The patch below filters network-related limits out of the API response:

https://review.openstack.org/#/c/344947/7

You can check this by trying a curl request with OpenStack-API-Version: compute 2.36:

curl -g -i -X GET http://192.168.0.31:8774/v2.1/limits -H "OpenStack-API-Version: compute 2.36" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.32" -H "X-Auth-Token: $OS_TOKEN"

** Changed in: nova
       Status: Confirmed => Invalid

** Changed in: nova
     Assignee: Sivasathurappan Radhakrishnan (siva-radhakrishnan) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456899

Title:
  nova absolute-limits Security groups count incorrect when using Neutron

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The used security groups count always shows 1, even if I have 2 or 0 assigned to a VM.

  nova absolute-limits
  +--------------------+------+-------+
  | Name               | Used | Max   |
  +--------------------+------+-------+
  | Cores              | 2    | 20    |
  | FloatingIps        | 0    | 10    |
  | ImageMeta          | -    | 128   |
  | Instances          | 1    | 10    |
  | Keypairs           | -    | 100   |
  | Personality        | -    | 5     |
  | Personality Size   | -    | 10240 |
  | RAM                | 4096 | 51200 |
  | SecurityGroupRules | -    | 20    |
  | SecurityGroups     | 1    | 10    |
  | Server Meta        | -    | 128   |
  | ServerGroupMembers | -    | 10    |
  | ServerGroups       | 0    | 10    |
  +--------------------+------+-------+

  nova show 2e722ad7-d54b-4122-8b90-0debec882668
  +--------------------------------------+----------------------------------------------------------+
  | Property                             | Value                                                    |
  +--------------------------------------+----------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                   |
  | OS-EXT-AZ:availability_zone          | nova                                                     |
  | OS-EXT-SRV-ATTR:host                 | puma09.scl.lab.tlv.redhat.com                            |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | puma09.scl.lab.tlv.redhat.com                            |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0001                                            |
  | OS-EXT-STS:power_state               | 1                                                        |
  | OS-EXT-STS:task_state                | -                                                        |
  | OS-EXT-STS:vm_state                  | active                                                   |
  | OS-SRV-USG:launched_at               | 2015-05-18T06:20:45.00                                   |
  | OS-SRV-USG:terminated_at             | -                                                        |
  | accessIPv4                           |                                                          |
  | accessIPv6                           |                                                          |
  | config_drive                         |                                                          |
  | created                              | 2015-05-18T06:18:33Z                                     |
  | flavor                               | m1.medium (3)                                            |
  | hostId                               | 3e2a5e99d50824f33c61f2408bab8e92fd70f1af4e4f23d569c04a4f |
  | id                                   | 2e722ad7-d54b-4122-8b90-0debec882668                     |
  | image                                | rhel (565e7dc4-67d1-46d7-8ef5-765c1455e530)              |
  | int_net network                      | 192.168.1.3, 10.35.170.2                                 |
  | key_name                             | -                                                        |
  | metadata                             | {}                                                       |
  | name                                 | VM-1                                                     |
  | os-extended-volumes:volumes_attached | []                                                       |
  | progress                             | 0                                                        |
  | security_groups                      | default, test                                            |
  | status                               | ACTIVE                                                   |
  | tenant_id                            | 2c238e6d92af464889aca6a16d80f857                         |
  | updated
[Yahoo-eng-team] [Bug 1643623] [NEW] Not able to abort live migration
Public bug reported:

Tried to live migrate an instance to a destination host which was an invalid one. Got an error message saying the host was not available. Did a nova list and found that the status and task state were stuck in migrating forever. Couldn't see the instance in 'nova migration-list' and was not able to abort the migration using 'nova live-migration-abort'.

Steps to reproduce:
1) Create an instance test_1
2) Live migrate the instance using 'nova live-migration test_1 '
3) Check the status of the instance using 'nova show test_1' or 'nova list'.

Expected result: The instance should have returned to Active status, as the live migration failed with an invalid host name.
Actual result: The instance is stuck in 'migrating' status forever.

Environment: Multinode devstack environment with 2 compute nodes
1) Current master
2) Networking: neutron
3) Hypervisor: Libvirt-KVM

** Affects: nova
     Importance: Undecided
     Assignee: Sivasathurappan Radhakrishnan (siva-radhakrishnan)
         Status: New

** Tags: live-migration

** Tags added: live-migration

** Changed in: nova
     Assignee: (unassigned) => Sivasathurappan Radhakrishnan (siva-radhakrishnan)

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1643623

Title:
  Not able to abort live migration

Status in OpenStack Compute (nova):
  New

Bug description:
  Tried to live migrate an instance to a destination host which was an invalid one. Got an error message saying the host was not available. Did a nova list and found that the status and task state were stuck in migrating forever. Couldn't see the instance in 'nova migration-list' and was not able to abort the migration using 'nova live-migration-abort'.

  Steps to reproduce:
  1) Create an instance test_1
  2) Live migrate the instance using 'nova live-migration test_1 '
  3) Check the status of the instance using 'nova show test_1' or 'nova list'.
  Expected result: The instance should have returned to Active status, as the live migration failed with an invalid host name.
  Actual result: The instance is stuck in 'migrating' status forever.

  Environment: Multinode devstack environment with 2 compute nodes
  1) Current master
  2) Networking: neutron
  3) Hypervisor: Libvirt-KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1643623/+subscriptions
[Yahoo-eng-team] [Bug 1597686] Re: the return value in func process_request in nova/wsgi.py is not proper
I see that the above method is present in the base class of Middleware. Currently, nova doesn't implement any middleware which overrides it; in future it might be helpful if any middlewares are developed specific to nova. This doesn't seem to be a valid bug to me, hence invalidating it.

** Changed in: nova
       Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597686

Title:
  the return value in func process_request in nova/wsgi.py is not proper

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In nova/wsgi.py there is a function whose return value is hard-coded to None, although it could also be a response:

  def process_request(self, req):
      """Called on each request.

      If this returns None, the next application down the stack will be
      executed. If it returns a response then that response will be
      returned and execution will stop here.
      """
      return None

  From the comments we can see that the return value for this function should be "None" or a response.

  Thanks,
  Jeffrey Guan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597686/+subscriptions
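The contract the docstring describes (the reason returning None in the base class is intentional: subclasses override it) can be shown with a small self-contained sketch. These toy classes are illustrations, not nova's actual wsgi module:

```python
# Toy middleware sketch (NOT nova's real wsgi.py) showing the contract
# the quoted docstring describes: process_request returning None passes
# the request down the stack; returning a response short-circuits it.

class Response:
    def __init__(self, body, status=200):
        self.body = body
        self.status = status


class Middleware:
    def __init__(self, application):
        self.application = application

    def process_request(self, req):
        return None  # base-class default: always pass through

    def __call__(self, req):
        resp = self.process_request(req)
        if resp is not None:
            return resp                  # execution stops here
        return self.application(req)     # next application down the stack


class Blocker(Middleware):
    """A subclass that overrides process_request to short-circuit."""

    def process_request(self, req):
        if req.get('blocked'):
            return Response('forbidden', status=403)
        return None


app = Blocker(lambda req: Response('ok'))
assert app({'blocked': False}).body == 'ok'      # passed through
assert app({'blocked': True}).status == 403      # short-circuited
```

So the base class returning None is not a bug: it is the no-op default, and the docstring documents what overriding subclasses may return instead.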
[Yahoo-eng-team] [Bug 1286463] Re: Security-group-name is case sensitive when booting instance with neutron
Since nova-network is going to be deprecated in the near future, I don't feel a real need to make this change, and most of the commands in the OpenStack client seem to be case sensitive. Hence I am changing the bug status to 'Won't Fix'. If this needs to be fixed, please feel free to reopen the bug.

** Changed in: nova
       Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286463

Title:
  Security-group-name is case sensitive when booting instance with neutron

Status in OpenStack Compute (nova):
  Won't Fix
Status in python-novaclient:
  Invalid

Bug description:
  When using nova-networking, an instance boots correctly regardless of the case of the security-group name that is used (assuming the group exists, case-insensitively): http://paste.openstack.org/show/70477/

  However, when using neutron, the instance will queue with the scheduler but fail to boot.

  stack@devstack:~$ neutron security-group-list
  +--------------------------------------+---------+-------------+
  | id                                   | name    | description |
  +--------------------------------------+---------+-------------+
  | 57597299-782e-4820-b814-b27c2f125ee2 | FooBar  |             |
  | 9ae55da3-5246-4a28-b4d6-d45affe7b5d8 | default | default     |
  +--------------------------------------+---------+-------------+

  stack@devstack:~$ nova boot --image e051efff-ddd7-4b57-88af-d47b65aaa333 --flavor 1 --security-group NotARealGroup myinst2
  ERROR: Unable to find security_group with name 'NotARealGroup' (HTTP 400) (Request-ID: req-bb34592c-fc38-4a39-be8f-787e2a754b98)

  stack@devstack:~/devstack$ nova boot --image e051efff-ddd7-4b57-88af-d47b65aaa333 --flavor 1 --security-group FOOBAR myinst2
  +--------------------------------------+----------------------------------------------------------------+
  | Property                             | Value                                                          |
  +--------------------------------------+----------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                         |
  | OS-EXT-AZ:availability_zone          | nova                                                           |
  | OS-EXT-STS:power_state               | 0                                                              |
  | OS-EXT-STS:task_state                | scheduling                                                     |
  | OS-EXT-STS:vm_state                  | building                                                       |
  | OS-SRV-USG:launched_at               | -                                                              |
  | OS-SRV-USG:terminated_at             | -                                                              |
  | accessIPv4                           |                                                                |
  | accessIPv6                           |                                                                |
  | adminPass                            | ZzsCcS5AHHGR                                                   |
  | config_drive                         |                                                                |
  | created                              | 2014-03-01T07:30:24Z                                           |
  | flavor                               | m1.tiny (1)                                                    |
  | hostId                               |                                                                |
  | id                                   | 050af9f8-dbe0-4e69-afa4-d29d1e153913                           |
  | image                                | cirros-0.3.1-x86_64-uec (e051efff-ddd7-4b57-88af-d47b65aaa333) |
  | key_name                             | -                                                              |
  | metadata                             | {}                                                             |
  | name                                 | myinst2                                                        |
  | os-extended-volumes:volumes_attached | []                                                             |
  | progress                             | 0                                                              |
  | security_groups                      | FOOBAR                                                         |
  | status                               | BUILD                                                          |
  | tenant_id                            | be91fea7b53e4ad189dd66ef2d65cfa8                               |
  | updated                              | 2014-03-01T07:30:24Z                                           |
  | user_id                              | 4f0af1fd11a140e5807f2c436fd2660f
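The difference the reporter observes boils down to strict versus case-insensitive name matching. A minimal sketch with toy data (not either service's real lookup code): nova-network behaves like the relaxed match below, while with neutron the mismatched name is not resolved to the real group, so the boot queues but can never bind the group:

```python
# Toy sketch of strict vs. case-insensitive security-group lookup.
# Group data mirrors the security-group-list output above; neither
# function is nova's or neutron's actual implementation.

GROUPS = [
    {'id': '57597299', 'name': 'FooBar'},
    {'id': '9ae55da3', 'name': 'default'},
]


def find_group(name, case_sensitive=True):
    for g in GROUPS:
        if g['name'] == name or (
                not case_sensitive and g['name'].lower() == name.lower()):
            return g
    return None


# nova-network-style lookup matches 'FOOBAR' to 'FooBar'...
assert find_group('FOOBAR', case_sensitive=False)['id'] == '57597299'
# ...while a strict lookup misses entirely.
assert find_group('FOOBAR', case_sensitive=True) is None
```

The worst outcome in the report is the middle ground: the API layer accepts the mismatched name, but the strict match later fails, so the instance sits in BUILD instead of failing fast like 'NotARealGroup' did.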
[Yahoo-eng-team] [Bug 1462366] Re: nova compute info cache refresh should detach obsolete ports
I tried to reproduce the bug using the following commands:

1) neutron port-create 
2) nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.small --nic port-id= test1
3) nova interface-list 
4) neutron port-delete 
5) nova interface-list 

I wasn't able to reproduce the scenario described above, hence I am invalidating this bug. If it is reproducible, please feel free to reopen the bug.

** Changed in: nova
       Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462366

Title:
  nova compute info cache refresh should detach obsolete ports

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova runs a periodic task to heal/refresh its info cache. Obsolete ports should be detached during that process.

  commit 4a02d9415f64e8d579d1b674d6d2efda902b01fa
  Merge: 9fc5c05 13cf0c2
  Author: Jenkins
  Date: Thu Jun 4 11:32:03 2015 +
      Merge "Get rid of oslo-incubator copy of middleware"

  To test it, create an instance with neutron ports, and then delete one of the neutron ports using the neutron CLI. The deleted port remains attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462366/+subscriptions
[Yahoo-eng-team] [Bug 1632486] [NEW] add debug to tox environment
Public bug reported:

Using pdb breakpoints with testr fails with a BdbQuit exception rather
than stopping at the breakpoint. The oslotest package also distributes a
shell file that may be used to assist in debugging Python code. The
shell file uses testtools and supports debugging with pdb. The debug tox
environment implements the following test instructions:
https://wiki.openstack.org/wiki/Testr#Debugging_.28pdb.29_Tests

** Affects: nova
   Importance: Wishlist
     Assignee: Sivasathurappan Radhakrishnan (siva-radhakrishnan)
       Status: New

** Changed in: nova
     Assignee: (unassigned) => Sivasathurappan Radhakrishnan (siva-radhakrishnan)

** Changed in: nova
   Importance: Undecided => Wishlist

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632486

Title:
  add debug to tox environment

Status in OpenStack Compute (nova):
  New

Bug description:
  Using pdb breakpoints with testr fails with a BdbQuit exception rather
  than stopping at the breakpoint. The oslotest package also distributes
  a shell file that may be used to assist in debugging Python code. The
  shell file uses testtools and supports debugging with pdb. The debug
  tox environment implements the following test instructions:
  https://wiki.openstack.org/wiki/Testr#Debugging_.28pdb.29_Tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632486/+subscriptions
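For context, the requested change amounts to a small tox environment; a minimal sketch, assuming oslotest's oslo_debug_helper wrapper (the environment name and exact form in the eventually merged change may differ):

```ini
# Hypothetical sketch of the requested tox environment. oslo_debug_helper
# is the wrapper shipped by oslotest that runs tests through testtools so
# that pdb breakpoints stop the run instead of raising BdbQuit.
[testenv:debug]
commands = oslo_debug_helper {posargs}
```

Tests would then be run via something like `tox -e debug -- <test path>`, with breakpoints honored.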
[Yahoo-eng-team] [Bug 1671011] [NEW] Live migration of paused instance fails when post copy is enabled
Public bug reported:

Live migration of a paused instance fails when post copy is enabled.

Steps to Reproduce:
* Spin up an instance and pause it: nova pause
* Live migrate the instance: nova live-migration

Expected result
===============
Since post copy doesn't support live migration of a paused instance, we
need to return an error stating "Paused instance can't be migrated when
post copy is enabled" to give a better user experience.

Actual result
=============
The live migration command returns 202, but libvirt failures appear in
the compute logs.

Environment:
Multinode devstack environment with 2 compute nodes.
1) Current master
2) Networking: neutron
3) Hypervisor: Libvirt-KVM
4) Post copy enabled, for which the libvirt version must be greater than
   or equal to 1.3.3.

Logs:
The following error was found in the compute log:
http://paste.openstack.org/show/601362/

** Affects: nova
   Importance: Undecided
     Assignee: Sivasathurappan Radhakrishnan (siva-radhakrishnan)
       Status: New

** Tags: live-migration

** Changed in: nova
     Assignee: (unassigned) => Sivasathurappan Radhakrishnan (siva-radhakrishnan)

** Tags added: live-migration

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671011

Title:
  Live migration of paused instance fails when post copy is enabled

Status in OpenStack Compute (nova):
  New

Bug description:
  Live migration of a paused instance fails when post copy is enabled.

  Steps to Reproduce:
  * Spin up an instance and pause it: nova pause
  * Live migrate the instance: nova live-migration

  Expected result
  ===============
  Since post copy doesn't support live migration of a paused instance,
  we need to return an error stating "Paused instance can't be migrated
  when post copy is enabled" to give a better user experience.

  Actual result
  =============
  The live migration command returns 202, but libvirt failures appear in
  the compute logs.

  Environment:
  Multinode devstack environment with 2 compute nodes.
  1) Current master
  2) Networking: neutron
  3) Hypervisor: Libvirt-KVM
  4) Post copy enabled, for which the libvirt version must be greater
     than or equal to 1.3.3.

  Logs:
  The following error was found in the compute log:
  http://paste.openstack.org/show/601362/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671011/+subscriptions
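The "Expected result" in the report above amounts to an early pre-flight check rather than a late libvirt failure. Here is a minimal hypothetical sketch of that idea (none of these names are real nova symbols; where such a check would actually live in nova is not specified by the report):

```python
# Hypothetical sketch only -- models the suggested validation, not nova's
# actual implementation.

VM_STATE_PAUSED = "paused"  # simplified stand-in for a nova vm_states value


class PostCopyNotSupported(Exception):
    """Raised when post copy cannot be used for the requested migration."""


def check_post_copy_migration(vm_state, post_copy_enabled):
    """Reject the request up front instead of letting libvirt fail later.

    Returns None when the migration may proceed; raises otherwise.
    """
    if post_copy_enabled and vm_state == VM_STATE_PAUSED:
        raise PostCopyNotSupported(
            "Paused instance can't be migrated when post copy is enabled")


# An active instance passes the check; a paused one is rejected early.
check_post_copy_migration("active", True)
try:
    check_post_copy_migration(VM_STATE_PAUSED, True)
except PostCopyNotSupported as exc:
    print(exc)  # → Paused instance can't be migrated when post copy is enabled
```

With a check like this, the API could return an error immediately instead of accepting the request with a 202 and failing later on the compute node.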