[Yahoo-eng-team] [Bug 1333103] [NEW] wrong_info_is_displayed_during_no_probe_neutron_debug_probe_clear

2014-06-23 Thread Mh Raies
Public bug reported:

Take the case where no probe is present. If we try to clear probes in
this situation using the following CLI command -

#neutron-debug probe-clear

it should ideally print a message such as "Nothing to delete/clear" or
"No probe is present to clear".
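A minimal sketch of the kind of guard that would produce such a message (the function name, message text, and delete callback below are hypothetical stand-ins, not neutron-debug's actual internals):

```python
def clear_probes(probe_ports, delete_port):
    """Clear debug probes; report when there is nothing to clear.

    probe_ports: list of port dicts returned by the probe lookup.
    delete_port: callable that removes a single probe port.
    (Both names are illustrative, not Neutron's real API.)
    """
    if not probe_ports:
        return "No probe is present to clear"
    for port in probe_ports:
        delete_port(port)
    return "%d probe(s) cleared" % len(probe_ports)
```

With an empty port list this returns the friendly message up front instead of proceeding into the lookup/delete cycle that currently ends in the trace below.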

But when I try the above command, the following is traced -


2014-06-23 11:48:43.924 12056 INFO neutron.common.config [-] Logging enabled!
2014-06-23 11:48:43.925 12056 WARNING neutron.agent.common.config [-] 
Deprecated: DEFAULT.root_helper is deprecated! Please move root_helper 
configuration to [AGENT] section.
2014-06-23 11:48:43.925 12056 WARNING neutron.agent.common.config [-] 
Deprecated: DEFAULT.root_helper is deprecated! Please move root_helper 
configuration to [AGENT] section.
2014-06-23 11:48:43.927 12056 DEBUG neutron.debug.commands.ClearProbe [-] 
run(Namespace(request_format='json')) run 
/opt/stack/neutron/neutron/debug/commands.py:105
2014-06-23 11:48:43.928 12056 DEBUG neutronclient.client [-] 
REQ: curl -i 
http://10.0.9.40:9696/v2.0/ports.json?device_owner=network%3Aprobe&device_owner=compute%3Aprobe&device_id=openstack
 -X GET -H X-Auth-Token: 
MIIWRAYJKoZIhvcNAQcCoIIWNTCC... [several KB of base64 PKI token omitted]
 

[Yahoo-eng-team] [Bug 1333106] [NEW] Tempest:Running test_network_basic_ops scenario in tempest results is failing with internal server error

2014-06-23 Thread Ashish Kumar Gupta
Public bug reported:

Tested on build: 2014.2.dev543.g8bdc649

Prerequisite: an external network exists.
Both instances are created successfully, with internal and external
network connectivity verified.


neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48 GMT', 
'status': '204', 'content-length': '0', 'x-openstack-request-id': 
'req-c75f44c1-42e1-41ac-a163-8821d78ecddc'}

tempest.scenario.manager: DEBUG: Deleting {u'status': u'ACTIVE', u'subnets': 
[], u'name': u'network-smoke--1921748135', u'provider:physical_network': None, 
u'admin_state_up': True, u'tenant_id': u'b61abe9a4c8e4e439603941040610d90', 
u'provider:network_type': u'vxlan', u'shared': False, u'id': 
u'9a32273d-fc1c-4b9e-90cc-44702236b173', u'provider:segmentation_id': 1003} 
from shared resources of TestNetworkBasicOps
neutronclient.client: DEBUG:
REQ: curl -i 
http://192.0.2.26:9696//v2.0/networks/9a32273d-fc1c-4b9e-90cc-44702236b173.json 
-X DELETE -H X-Auth-Token: b345c371a7364c8ba4d4e9269c9db9b9 -H Content-Type: 
application/json -H Accept: application/json -H User-Agent: 
python-neutronclient

neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48
GMT', 'status': '500', 'content-length': '88', 'content-type':
'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-
7b8270d5-6346-4525-a6b4-19a58f400e78'} {NeutronError: Request Failed:
internal server error while processing your request.}

neutronclient.v2_0.client: DEBUG: Error message: {NeutronError: Request 
Failed: internal server error while processing your request.}
tempest.scenario.manager: DEBUG: Deleting {u'tenant_id': 
u'b61abe9a4c8e4e439603941040610d90', u'name': u'secgroup-smoke--1142035513', 
u'description': u'secgroup-smoke--1142035513 description', 
u'security_group_rules': [{u'remote_group_id': None, u'direction': u'egress', 
u'remote_ip_prefix': None, u'protocol': None, u'tenant_id': 
u'b61abe9a4c8e4e439603941040610d90', u'port_range_max': None, 
u'security_group_id': u'6737784c-ba3a-4c2b-805c-97a69c6ccf4b', 
u'port_range_min': None, u'ethertype': u'IPv4', u'id': 
u'92a73bea-0c43-4490-9470-da2b23013760'}, {u'remote_group_id': None, 
u'direction': u'egress', u'remote_ip_prefix': None, u'protocol': None, 
u'tenant_id': u'b61abe9a4c8e4e439603941040610d90', u'port_range_max': None, 
u'security_group_id': u'6737784c-ba3a-4c2b-805c-97a69c6ccf4b', 
u'port_range_min': None, u'ethertype': u'IPv6', u'id': 
u'406eb703-0d20-4a71-b13f-15ecbe832fbd'}], u'id': 
u'6737784c-ba3a-4c2b-805c-97a69c6ccf4b'} from shared resources of 
TestNetworkBasicOps
neutronclient.client: DEBUG:
REQ: curl -i 
http://192.0.2.26:9696//v2.0/security-groups/6737784c-ba3a-4c2b-805c-97a69c6ccf4b.json
 -X DELETE -H X-Auth-Token: b345c371a7364c8ba4d4e9269c9db9b9 -H 
Content-Type: application/json -H Accept: application/json -H User-Agent: 
python-neutronclient

neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48
GMT', 'status': '204', 'content-length': '0', 'x-openstack-request-id':
'req-876d170a-448c-4bb2-b358-adca177d3bd9'}

-  end captured logging  -

--
Ran 2 tests in 142.676s

FAILED (errors=2)

Actual result: just after deleting the subnet, deletion of the network
throws an internal server error. (Attached is the server log of the
controller; the errors are logged in that file as well.)

Adding a short delay before the network deletion in the
/tempest/tempest/api/network/common.py file works around the issue:

class DeletableNetwork(DeletableResource):

    def delete(self):
        time.sleep(3)  # workaround: wait before deleting the network
        self.client.delete_network(self.id)
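Rather than a fixed sleep, the same workaround can be expressed as a small retry loop, under the assumption that the 500 is a transient race with the subnet cleanup (names below are hypothetical, not tempest code):

```python
import time

def delete_with_retry(delete_fn, attempts=3, delay=1.0):
    """Call delete_fn, retrying on failure; re-raise after the last try."""
    for i in range(attempts):
        try:
            return delete_fn()
        except Exception:
            if i == attempts - 1:
                raise
            # give the server time to finish cleaning up the subnet
            time.sleep(delay)
```

DeletableNetwork.delete() could then call delete_with_retry(lambda: self.client.delete_network(self.id), delay=3) instead of sleeping unconditionally.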

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: log.zip
   https://bugs.launchpad.net/bugs/1333106/+attachment/4137225/+files/log.zip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333106

Title:
  Tempest:Running test_network_basic_ops scenario in tempest results is
  failing with internal server error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tested on build: 2014.2.dev543.g8bdc649

  Pre-requisite : External network exist.
  Both the instances are created successfully with internal and external 
network  connectivity passed.

  
  neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48 GMT', 
'status': '204', 'content-length': '0', 'x-openstack-request-id': 
'req-c75f44c1-42e1-41ac-a163-8821d78ecddc'}

  tempest.scenario.manager: DEBUG: Deleting {u'status': u'ACTIVE', u'subnets': 
[], u'name': u'network-smoke--1921748135', u'provider:physical_network': None, 
u'admin_state_up': True, u'tenant_id': u'b61abe9a4c8e4e439603941040610d90', 
u'provider:network_type': u'vxlan', u'shared': False, u'id': 
u'9a32273d-fc1c-4b9e-90cc-44702236b173', u'provider:segmentation_id': 1003} 
from shared resources of 

[Yahoo-eng-team] [Bug 1329929] Re: Cannot 'resize' while instance is in task_state resize_migrating

2014-06-23 Thread hzxiongwenwu
I think this is correct; Nova does not allow resizing while the VM's
task_state is resize_migrating.
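The check the commenter describes can be sketched as a simple task_state guard (exception name and message modeled on the 409 response in the traceback below; a sketch, not Nova's actual code path):

```python
class InstanceInvalidState(Exception):
    pass

def check_resize_allowed(task_state):
    """Reject a new resize while one is already in flight (illustrative)."""
    if task_state == 'resize_migrating':
        raise InstanceInvalidState(
            "Cannot 'resize' while instance is in task_state resize_migrating")
```

An idle instance (task_state None) passes; one mid-migration raises, which the API layer maps to HTTP 409 Conflict.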

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329929

Title:
  Cannot 'resize' while instance is in task_state resize_migrating

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/00/97500/4/check/check-tempest-dsvm-
  postgres-full/3376b43/

  2014-06-13 14:12:58.805 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestXML.test_resize_server_from_manual_to_auto[gate]
  2014-06-13 14:12:58.805 | 
-
  2014-06-13 14:12:58.806 | 
  2014-06-13 14:12:58.806 | Captured traceback:
  2014-06-13 14:12:58.806 | ~~~
  2014-06-13 14:12:58.806 | Traceback (most recent call last):
  2014-06-13 14:12:58.806 |   File 
tempest/api/compute/servers/test_disk_config.py, line 96, in 
test_resize_server_from_manual_to_auto
  2014-06-13 14:12:58.806 | self.client.resize(self.server_id, 
flavor_id, disk_config='AUTO')
  2014-06-13 14:12:58.806 |   File 
tempest/services/compute/xml/servers_client.py, line 508, in resize
  2014-06-13 14:12:58.806 | return self.action(server_id, 'resize', 
None, **kwargs)
  2014-06-13 14:12:58.806 |   File 
tempest/services/compute/xml/servers_client.py, line 439, in action
  2014-06-13 14:12:58.806 | resp, body = self.post(servers/%s/action 
% server_id, str(doc))
  2014-06-13 14:12:58.807 |   File tempest/common/rest_client.py, line 
209, in post
  2014-06-13 14:12:58.807 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-06-13 14:12:58.807 |   File tempest/common/rest_client.py, line 
419, in request
  2014-06-13 14:12:58.807 | resp, resp_body)
  2014-06-13 14:12:58.807 |   File tempest/common/rest_client.py, line 
473, in _error_checker
  2014-06-13 14:12:58.807 | raise exceptions.Conflict(resp_body)
  2014-06-13 14:12:58.807 | Conflict: An object with that identifier 
already exists
  2014-06-13 14:12:58.807 | Details: {'message': Cannot 'resize' while 
instance is in task_state resize_migrating, 'code': '409'}
  2014-06-13 14:12:58.807 | 
  2014-06-13 14:12:58.807 | 
  2014-06-13 14:12:58.808 | Captured pythonlogging:
  2014-06-13 14:12:58.808 | ~~~
  2014-06-13 14:12:58.808 | 2014-06-13 13:43:01,084 Request 
(ServerDiskConfigTestXML:test_resize_server_from_manual_to_auto): 200 GET 
http://127.0.0.1:8774/v2/76d9e5601274471ca2f91e4de5489f55/servers/28fe0236-95fe-4716-9e38-8d54eaf74e14
 0.081s
  2014-06-13 14:12:58.808 | 2014-06-13 13:43:01,178 Request 
(ServerDiskConfigTestXML:test_resize_server_from_manual_to_auto): 200 GET 
http://127.0.0.1:8774/v2/76d9e5601274471ca2f91e4de5489f55/servers/28fe0236-95fe-4716-9e38-8d54eaf74e14
 0.091s
  2014-06-13 14:12:58.808 | 2014-06-13 13:43:01,232 Request 
(ServerDiskConfigTestXML:test_resize_server_from_manual_to_auto): 409 POST 
http://127.0.0.1:8774/v2/76d9e5601274471ca2f91e4de5489f55/servers/28fe0236-95fe-4716-9e38-8d54eaf74e14/action
 0.052s
  2014-06-13 14:12:58.808 | 
  2014-06-13 14:12:58.808 | 
  2014-06-13 14:12:58.808 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestXML.test_update_server_from_auto_to_manual[gate]
  2014-06-13 14:12:58.808 | 
-
  2014-06-13 14:12:58.808 | 
  2014-06-13 14:12:58.808 | Captured traceback:
  2014-06-13 14:12:58.808 | ~~~
  2014-06-13 14:12:58.809 | Traceback (most recent call last):
  2014-06-13 14:12:58.809 |   File 
tempest/api/compute/servers/test_disk_config.py, line 124, in 
test_update_server_from_auto_to_manual
  2014-06-13 14:12:58.809 | 
self._update_server_with_disk_config(disk_config='AUTO')
  2014-06-13 14:12:58.809 |   File 
tempest/api/compute/servers/test_disk_config.py, line 43, in 
_update_server_with_disk_config
  2014-06-13 14:12:58.809 | 
self.client.wait_for_server_status(server['id'], 'ACTIVE')
  2014-06-13 14:12:58.809 |   File 
tempest/services/compute/xml/servers_client.py, line 388, in 
wait_for_server_status
  2014-06-13 14:12:58.809 | raise_on_error=raise_on_error)
  2014-06-13 14:12:58.809 |   File tempest/common/waiters.py, line 106, 
in wait_for_server_status
  2014-06-13 14:12:58.809 | _console_dump(client, server_id)
  2014-06-13 14:12:58.809 |   File tempest/common/waiters.py, line 27, in 
_console_dump
  2014-06-13 14:12:58.809 | resp, output = 
client.get_console_output(server_id, None)
  2014-06-13 14:12:58.810 |   File 
tempest/services/compute/xml/servers_client.py, line 

[Yahoo-eng-team] [Bug 1304409] Re: VMs can't be booted with networks without subnet

2014-06-23 Thread hzxiongwenwu
It makes no sense to boot an instance without a network.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304409

Title:
  VMs can't be booted with networks without subnet

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Recently a change in the nova/network/neutronv2/api.py file is causing
  nova boots to fail for networks that do not have a subnet associated
  with them.

  The following line in the api.py file is causing the issue:

  for net in nets:
      if not net.get('subnets'):
          raise exception.NetworkRequiresSubnet(
              network_uuid=net['id'])

  This has to be fixed to allow users to do boots with networks that do
  not have a subnet associated with them.
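One way to relax the quoted check, as the report requests, is to make the subnet requirement conditional (a sketch only; the flag name and surrounding plumbing are hypothetical, not the merged Nova fix):

```python
class NetworkRequiresSubnet(Exception):
    pass

def validate_networks(nets, require_subnet=False):
    """Only insist on a subnet when the caller actually needs fixed IPs."""
    for net in nets:
        if require_subnet and not net.get('subnets'):
            raise NetworkRequiresSubnet(net['id'])
```

Booting onto a subnet-less network would then pass validation, while callers that request a fixed IP could still opt into the strict check.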

  The issue seems to be occurring after the commit made here:

  
https://review.openstack.org/gitweb?p=openstack%2Fnova.git;a=commitdiff;h=45e2398f0c01c327db46ce92fb9dda886455db9d

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333137] [NEW] Can't launch Nova instance: no boot image available

2014-06-23 Thread Hong-Guang
Public bug reported:

Testing step:
1: log in as admin
2: go to Project / Instances / Launch Instance
3: no image is available, regardless of which flavor is chosen

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333137

Title:
  Can't launch Nova instance: no boot image available

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Testing step:
  1: log in as admin
  2: go to Project / Instances / Launch Instance
  3: no image is available, regardless of which flavor is chosen

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332149] Re: Can't login with master django_openstack_auth: 'module' object has no attribute 'Login'

2014-06-23 Thread Alan Pevec
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Status: New => Confirmed

** Changed in: horizon/icehouse
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332149

Title:
  Can't login with master django_openstack_auth: 'module' object has no
  attribute 'Login'

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress

Bug description:
  One of the recent django_openstack_auth cleanup patches
  (2ead8838e72ff02ced36133866046c4c1a7c0931) removed a relative import
  that Horizon was incorrectly relying on.

  Traceback:
  File 
/opt/stack/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py
 in get_response
112. response = wrapped_callback(request, 
*callback_args, **callback_kwargs)
  File 
/opt/stack/horizon/.venv/lib/python2.7/site-packages/django/views/decorators/vary.py
 in inner_func
36. response = func(*args, **kwargs)
  File /opt/stack/horizon/openstack_dashboard/views.py in splash
43. form = views.Login(request)

  Exception Type: AttributeError at /
  Exception Value: 'module' object has no attribute 'Login'

  The Login form should be imported directly from forms.py, not
  indirectly from views.py.
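The failure mode can be reproduced in miniature: an attribute that was only re-exported through another module disappears once the re-export is removed (the module objects below are stand-ins, not the real openstack_auth layout):

```python
import types

# Stand-ins for openstack_auth.forms and openstack_auth.views.
forms = types.ModuleType("forms")
forms.Login = type("Login", (), {})   # the form class is defined in forms

# After the cleanup, views no longer re-imports anything from forms:
views = types.ModuleType("views")

# Horizon's splash view did views.Login(request) -> AttributeError now:
broken = hasattr(views, "Login")      # False once the re-export is gone

# The fix: import the form from the module that defines it.
LoginForm = forms.Login
```

Importing `Login` from the defining module keeps working regardless of what `views` chooses to re-export.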

  We need to fix this very shortly or we will get bitten by it once
  critical bug 1331406 is merged and a new django_openstack_auth version
  is released.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333144] [NEW] Jenkins fails patch verification on: ConnectionError: HTTPConnectionPool(host='public.nova.example.com', port=8774): Max retries exceeded

2014-06-23 Thread Daniel Korn
Public bug reported:

Jenkins fails on three tests:
==

gate-horizon-python26
gate-horizon-python27
gate-horizon-python27-django14

The error that repeats in the log files is:


14-06-22 20:18:42.483 | ERROR: test_change_password_shows_message_on_login_page 
(openstack_dashboard.dashboards.settings.password.tests.ChangePasswordTests)
| --
 Traceback (most recent call last):
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py,
 line 81, in instance_stub_out
return fn(self, *args, **kwargs)
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/settings/password/tests.py,
 line 65, in test_change_password_shows_message_on_login_page
res = self.client.post(INDEX_URL, formData, follow=True)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py,
 line 485, in post
response = self._handle_redirects(response, **extra)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py,
 line 617, in _handle_redirects
 response = self.get(url.path, QueryDict(url.query), follow=False, **extra)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py,
 line 473, in get
response = super(Client, self).get(path, data=data, **extra)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py,
 line 280, in get
return self.request(**r)
  File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py,
 line 444, in request
 six.reraise(*exc_info)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/core/handlers/base.py,
 line 112, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
   File /home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py, 
line 36, in dec
return view_func(request, *args, **kwargs)
   File /home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py, 
line 52, in dec
 return view_func(request, *args, **kwargs)
   File /home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py, 
line 36, in dec
 return view_func(request, *args, **kwargs)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 69, in view
 return self.dispatch(request, *args, **kwargs)
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 87, in dispatch
return handler(request, *args, **kwargs)
   File 
/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py, line 
152, in get
 handled = self.construct_tables()
   File 
/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py, line 
143, in construct_tables
 handled = self.handle_table(table)
   File 
/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py, line 
116, in handle_table
 data = self._get_data_dict()
   File 
/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py, line 
179, in _get_data_dict
 self._data = {self.table_class._meta.name: self.get_data()}
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/project/overview/views.py,
 line 55, in get_data
 super(ProjectOverview, self).get_data()
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/usage/views.py,
 line 43, in get_data
 self.usage.summarize(*self.usage.get_date_range())
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/usage/base.py,
 line 195, in summarize
 if not api.nova.extension_supported('SimpleTenantUsage', self.request):
   File 
/home/jenkins/workspace/gate-horizon-python27/horizon/utils/memoized.py, line 
90, in wrapped
 value = cache[key] = func(*args, **kwargs)
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/api/nova.py,
 line 750, in extension_supported
 extensions = list_extensions(request)
   File 
/home/jenkins/workspace/gate-horizon-python27/horizon/utils/memoized.py, line 
90, in wrapped
value = cache[key] = func(*args, **kwargs)
   File 
/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/api/nova.py,
 line 741, in list_extensions
 return nova_list_extensions.ListExtManager(novaclient(request)).show_all()
   File 
/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/novaclient/v1_1/contrib/list_extensions.py,
 line 37, in show_all
 return self._list(/extensions, 'extensions')
  File 

[Yahoo-eng-team] [Bug 1333145] [NEW] quota-usage error in soft-delete

2014-06-23 Thread hzxiongwenwu
Public bug reported:

How to reproduce it:

My project_id is '30528b0d602c4a9c9d8b4cd3d416d710', and I have an
instance:

ubuntu@xfolsom:/opt/stack/nova$ nova list
+--+--+++-+--+
| ID   | Name | Status | Task State | Power 
State | Networks |
+--+--+++-+--+
| 6f6c1258-6eda-43f1-9531-7a4eb0b44724 | test | ACTIVE | -  | Running   
  | private=10.0.0.2 |
+--+--+++-+--+

1. First, select from quota_usages; the result is:

mysql> select * from quota_usages;
+-+-+++--+-++--+---+-+--+
| created_at  | updated_at  | deleted_at | id | project_id  
 | resource| in_use | reserved | until_refresh | 
deleted | user_id  |
+-+-+++--+-++--+---+-+--+
| 2014-06-20 08:24:35 | 2014-06-23 08:35:03 | NULL   |  1 | 
30528b0d602c4a9c9d8b4cd3d416d710 | instances   |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:35 | 2014-06-23 08:35:03 | NULL   |  2 | 
30528b0d602c4a9c9d8b4cd3d416d710 | ram | 64 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:35 | 2014-06-23 08:35:03 | NULL   |  3 | 
30528b0d602c4a9c9d8b4cd3d416d710 | cores   |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:35 | 2014-06-20 08:24:35 | NULL   |  4 | 
30528b0d602c4a9c9d8b4cd3d416d710 | security_groups |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:36 | 2014-06-23 03:56:03 | NULL   |  5 | 
30528b0d602c4a9c9d8b4cd3d416d710 | fixed_ips   |  1 |0 |
  NULL |   0 | NULL |
+-+-+++--+-++--+---+-+--+
5 rows in set (0.00 sec)

2. Using nova-network, set reclaim_instance_interval=600 in nova.conf.
3. nova delete 6f6c1258-6eda-43f1-9531-7a4eb0b44724
4. Select from quota_usages; the result is:

mysql> select * from quota_usages;
+-+-+++--+-++--+---+-+--+
| created_at  | updated_at  | deleted_at | id | project_id  
 | resource| in_use | reserved | until_refresh | 
deleted | user_id  |
+-+-+++--+-++--+---+-+--+
| 2014-06-20 08:24:35 | 2014-06-23 08:42:30 | NULL   |  1 | 
30528b0d602c4a9c9d8b4cd3d416d710 | instances   |  0 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:35 | 2014-06-23 08:42:30 | NULL   |  2 | 
30528b0d602c4a9c9d8b4cd3d416d710 | ram |  0 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:35 | 2014-06-23 08:42:30 | NULL   |  3 | 
30528b0d602c4a9c9d8b4cd3d416d710 | cores   |  0 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:35 | 2014-06-20 08:24:35 | NULL   |  4 | 
30528b0d602c4a9c9d8b4cd3d416d710 | security_groups |  1 |0 |
  NULL |   0 | e522bb6fecaa4a69b6d7df69211dab13 |
| 2014-06-20 08:24:36 | 2014-06-23 03:56:03 | NULL   |  5 | 
30528b0d602c4a9c9d8b4cd3d416d710 | fixed_ips   |  1 |0 |
  NULL |   0 | NULL |
+-+-+++--+-++--+---+-+--+
5 rows in set (0.00 sec)

5. nova delete 6f6c1258-6eda-43f1-9531-7a4eb0b44724 again

then select from quota_usages, result is :

mysql> select * from quota_usages;
+-+-+++--+-++--+---+-+--+
| created_at  | updated_at  | deleted_at | id 

[Yahoo-eng-team] [Bug 1324363] Re: Firewall status remains 'PENDING_CREATE' for regular user

2014-06-23 Thread Jakub Libosvar
This is the designed behavior. The firewall becomes ACTIVE once a
router with interfaces is added.
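The designed behavior described above can be sketched as a simple status function (illustrative only, not Neutron's FWaaS code; the field names are hypothetical):

```python
def firewall_status(routers):
    """A firewall stays PENDING_CREATE until some router has an interface."""
    for router in routers:
        if router.get('interfaces'):
            return 'ACTIVE'
    return 'PENDING_CREATE'
```

This is why a freshly created firewall for a tenant with no routed interfaces legitimately sits in PENDING_CREATE.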

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324363

Title:
  Firewall status remains 'PENDING_CREATE' for regular user

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Description of problem:
  The status of a firewall created by a regular user remains
'PENDING_CREATE'. The same works fine for the admin user.

  Steps to Reproduce:
  1. Install method: packstack --allinone

  2. Configure FwaaS.
 driver = 
neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
 enabled = True
  
 Set value for service_plugins and service_provider in neutron.conf. 

  3. Create firewall rule, policy and then firewall using 'demo' user

  Actual results:
  Status of firewall stuck at 'PENDING_CREATE'

  Expected results:
  Firewall should be created successfully with status changing to 'ACTIVE'

  Additional info:
  This happens only in case of regular user. For admin user, firewall is 
created successfully and status is changed to ACTIVE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333160] [NEW] Creating and updating empty names for lbaas pool and vip is successful

2014-06-23 Thread Vishal Agarwal
Public bug reported:

For VIPs and Pools, the name field is mandatory in Horizon and the CLI.
But if curl is used, both POST and PUT requests are processed
successfully by the API even when the name field is empty.
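A minimal server-side check of the kind the report implies is missing (a sketch only, not the Neutron LBaaS validator):

```python
def validate_name(body):
    """Reject a missing, empty, or whitespace-only 'name' attribute."""
    name = body.get('name')
    if not isinstance(name, str) or not name.strip():
        raise ValueError("'name' must be a non-empty string")
    return name.strip()
```

Applied to both POST and PUT bodies, this would make the API match the constraint Horizon and the CLI already enforce.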

** Affects: neutron
 Importance: Undecided
 Assignee: Vishal Agarwal (vishala)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Vishal Agarwal (vishala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333160

Title:
  Creating and updating empty names for lbaas pool and vip is
  successful

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  For VIPs and Pools, the name field is mandatory in Horizon and the CLI.
  But if curl is used, both POST and PUT requests are processed
  successfully by the API even when the name field is empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333161] [NEW] delete image url in glanceclient v2

2014-06-23 Thread Guangyu Suo
Public bug reported:

I notice that the delete image method in v2/images.py has no slash (/)
in front of 'v2', while the other methods do:

def delete(self, image_id):
    self.http_client.json_request('DELETE', 'v2/images/%s' % image_id)

def get(self, image_id):
    url = '/v2/images/%s' % image_id

And the log like follows:

curl -i -X DELETE -H 'X-Auth-Token: ***' -H 'Content-Type:
application/json' -H 'User-Agent: python-glanceclient'
http://127.0.0.1:9292v2/images/ad44c714-d4f3-4568-b5fc-d4f2dbbe1f89

There is no slash between the port and the path_info, which may cause
problems if nginx is running in front of glance-api.
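A defensive join that guarantees exactly one slash between the endpoint and the path would avoid both the missing slash on DELETE and accidental double slashes (a sketch, not glanceclient's actual fix):

```python
def join_url(endpoint, path):
    """Join an endpoint and a path with exactly one '/' between them."""
    return endpoint.rstrip('/') + '/' + path.lstrip('/')
```

For example, join_url('http://127.0.0.1:9292', 'v2/images/%s' % image_id) always yields a well-formed .../v2/images/... URL whether or not either side carries a slash.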

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  I notice the delete image method in v2/images.py has no slash(/) in
  front of v2, but others have:
  
  def delete(self, image_id):
- self.http_client.json_request('DELETE', 'v2/images/%s' % image_id)
+ self.http_client.json_request('DELETE', 'v2/images/%s' % image_id)
  
- def get(self, image_id):   
- url = '/v2/images/%s' % image_id
+ def get(self, image_id):
+ url = '/v2/images/%s' % image_id
  
- And when the log like follows:
+ And the log like follows:
  
  curl -i -X DELETE -H 'X-Auth-Token: ***' -H 'Content-Type:
  application/json' -H 'User-Agent: python-glanceclient'
  http://127.0.0.1:9292v2/images/ad44c714-d4f3-4568-b5fc-d4f2dbbe1f89
  
  There is no slash between port and path_info

** Description changed:

  I notice the delete image method in v2/images.py has no slash(/) in
  front of v2, but others have:
  
  def delete(self, image_id):
  self.http_client.json_request('DELETE', 'v2/images/%s' % image_id)
  
  def get(self, image_id):
  url = '/v2/images/%s' % image_id
  
  And the log like follows:
  
  curl -i -X DELETE -H 'X-Auth-Token: ***' -H 'Content-Type:
  application/json' -H 'User-Agent: python-glanceclient'
  http://127.0.0.1:9292v2/images/ad44c714-d4f3-4568-b5fc-d4f2dbbe1f89
  
- There is no slash between port and path_info
+ There is no slash between port and path_info, this may causes some
+ problems if there is nginx in front of glance-api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1333161

Title:
  delete image url in glanceclient v2

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I notice the delete image method in v2/images.py has no slash(/) in
  front of v2, but others have:

  def delete(self, image_id):
  self.http_client.json_request('DELETE', 'v2/images/%s' % image_id)

  def get(self, image_id):
  url = '/v2/images/%s' % image_id

  And the log like follows:

  curl -i -X DELETE -H 'X-Auth-Token: ***' -H 'Content-Type:
  application/json' -H 'User-Agent: python-glanceclient'
  http://127.0.0.1:9292v2/images/ad44c714-d4f3-4568-b5fc-d4f2dbbe1f89

  There is no slash between port and path_info, which may cause problems
  if nginx is in front of glance-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1333161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331217] [NEW] keystone should not import pbr

2014-06-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

pbr is a build-time tool, and pulls in dependencies that are not
appropriate at runtime.  It is only used for the version string in
order to load the config file.  Issues with pbr are discussed at greater
length in https://bugs.launchpad.net/keystone/+bug/1330771

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
keystone should not import pbr
https://bugs.launchpad.net/bugs/1331217
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Keystone.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333144] Re: Jenkins fails patch verification on: ConnectionError: HTTPConnectionPool(host='public.nova.example.com', port=8774): Max retries exceeded

2014-06-23 Thread Julie Pichon
The problem appears due to 1.1.6 and bug 1308637. However, as far as I
can tell that patch is not causing the problem itself but only surfacing
an issue that already existed within the Horizon test. At this point,
when we redirect the user to /auth/logout/ in a unit test, the user is
not properly logged out and the Django session is not terminated. (It
works fine when testing in a real environment.)

To fix the test we need to mock the logout method properly, though it
doesn't seem totally straightforward because of the way the redirection
is handled. I think the test is trying to do too much and would fit
better as an integration test.

My suggestion for now would be to disable the test to get the horizon
gate going again. I'll propose a second patch later today to change it
so it has a reduced scope.

This also affects Icehouse.
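To illustrate what "mocking the logout method" could look like, here is a minimal, self-contained sketch; the module and view below are toy stand-ins, not Horizon's actual code or the eventual fix:

```python
# Sketch only: auth_module and view are hypothetical stand-ins for the
# logout helper and the /auth/logout/ view Horizon would patch.
import types
from unittest import mock

auth_module = types.ModuleType('auth_module')

def real_logout(request):
    request['session'] = None  # would tear down the Django session

auth_module.logout = real_logout

def view(request):
    auth_module.logout(request)  # the view under test delegates to logout
    return 302                   # then redirects to the login page

request = {'session': 'active'}
with mock.patch.object(auth_module, 'logout') as fake_logout:
    status = view(request)

assert status == 302
fake_logout.assert_called_once_with(request)   # logout was invoked...
assert request['session'] == 'active'          # ...but the real one never ran
```

The point is that the test asserts the redirect and the logout call separately, instead of depending on the real session teardown, which is what fails inside the Django test client.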

** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333144

Title:
  Jenkins fails patch verification on: ConnectionError:
  HTTPConnectionPool(host='public.nova.example.com', port=8774): Max
  retries exceeded

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Dashboard (Horizon) icehouse series:
  New

Bug description:
  Jenkins fails on three tests:
  ==

  gate-horizon-python26
  gate-horizon-python27
  gate-horizon-python27-django14

  The error that repeats in the log files is:
  

  2014-06-22 20:18:42.483 | ERROR: test_change_password_shows_message_on_login_page (openstack_dashboard.dashboards.settings.password.tests.ChangePasswordTests)
  | ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py", line 81, in instance_stub_out
      return fn(self, *args, **kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/settings/password/tests.py", line 65, in test_change_password_shows_message_on_login_page
      res = self.client.post(INDEX_URL, formData, follow=True)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py", line 485, in post
      response = self._handle_redirects(response, **extra)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py", line 617, in _handle_redirects
      response = self.get(url.path, QueryDict(url.query), follow=False, **extra)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get
      response = super(Client, self).get(path, data=data, **extra)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get
      return self.request(**r)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request
      six.reraise(*exc_info)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 112, in get_response
      response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 52, in dec
      return view_func(request, *args, **kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view
      return self.dispatch(request, *args, **kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/views/generic/base.py", line 87, in dispatch
      return handler(request, *args, **kwargs)
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 152, in get
      handled = self.construct_tables()
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 143, in construct_tables
      handled = self.handle_table(table)
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 116, in handle_table
      data = self._get_data_dict()
    File "/home/jenkins/workspace/gate-horizon-python27/horizon/tables/views.py", line 179, in _get_data_dict
      self._data = {self.table_class._meta.name: self.get_data()}
    File 

[Yahoo-eng-team] [Bug 1333219] [NEW] Virt driver impls don't match ComputeDriver base class API

2014-06-23 Thread Daniel Berrange
Public bug reported:

There are a number of problems where the virt driver impls do not match the API 
defined by the base ComputeDriver class.
For example

 - Libvirt:  Adds 'SOFT' as default value for 'reboot' method but no other 
class does
 - XenAPI: set_admin_passwd takes 2 parameters but base class defines it with 3 
parameters in a different order
 - VMWare: update_host_status method which doesn't exist in base class  is 
never called in entire codebase
 - All: names of parameters are not the same as names of parameters in the base 
class
 - ...more...

These inconsistencies are functional bugs at worst, and misleading to
maintainers at best. It should be possible to write a test using the
Python 'inspect' module which guarantees that the sub-class APIs
actually match what they claim to implement from the base class.
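As a sketch of the suggested 'inspect'-based test (the class names below are toy stand-ins, not the real ComputeDriver API):

```python
# Sketch of a signature-conformance check with the 'inspect' module.
# BaseDriver/GoodDriver/BadDriver are illustrative, not nova's classes.
import inspect

class BaseDriver:
    def reboot(self, instance, network_info, reboot_type):
        raise NotImplementedError()

class GoodDriver(BaseDriver):
    def reboot(self, instance, network_info, reboot_type):
        pass

class BadDriver(BaseDriver):
    # Renamed parameter and an extra default: exactly the kind of
    # mismatch the report describes.
    def reboot(self, instance, net_info, reboot_type='SOFT'):
        pass

def signatures_match(base, impl, method):
    # Signature equality compares parameter names, order, and defaults.
    return (inspect.signature(getattr(base, method)) ==
            inspect.signature(getattr(impl, method)))

assert signatures_match(BaseDriver, GoodDriver, 'reboot')
assert not signatures_match(BaseDriver, BadDriver, 'reboot')
```

A real test would iterate over every public method of each driver class and assert conformance, so any drift from the base API fails the gate.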

** Affects: nova
 Importance: Undecided
 Assignee: Daniel Berrange (berrange)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333219

Title:
  Virt driver impls don't match ComputeDriver base class API

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are a number of problems where the virt driver impls do not match the 
API defined by the base ComputeDriver class.
  For example

   - Libvirt:  Adds 'SOFT' as default value for 'reboot' method but no other 
class does
   - XenAPI: set_admin_passwd takes 2 parameters but base class defines it with 
3 parameters in a different order
   - VMWare: update_host_status method which doesn't exist in base class  is 
never called in entire codebase
   - All: names of parameters are not the same as names of parameters in the 
base class
   - ...more...

  These inconsistencies are functional bugs at worst, and misleading to
  maintainers at best. It should be possible to write a test using the
  Python 'inspect' module which guarantees that the sub-class APIs
  actually match what they claim to implement from the base class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333160] Re: Creating and updating empty names for lbaas pool and vip is successfull

2014-06-23 Thread Eugene Nikanorov
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333160

Title:
  Creating and updating empty names for lbaas pool and vip is
  successfull

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  For VIP and Pool, the name field is mandatory in Horizon and the CLI.
  But if curl is used, in both POST and PUT, the API processing is
  successful even if we give the name field as empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333232] [NEW] Gate failure: autodoc: failed to import module X

2014-06-23 Thread Matthew Booth
Public bug reported:

Spurious gate failure: http://logs.openstack.org/65/99065/4/check/gate-nova-docs/af27af8/console.html

Logs are full of:

2014-06-23 09:55:32.057 | /home/jenkins/workspace/gate-nova-docs/doc/source/devref/api.rst:39: WARNING: autodoc: failed to import module u'nova.api.cloud'; the following exception was raised:
2014-06-23 09:55:32.057 | Traceback (most recent call last):
2014-06-23 09:55:32.057 |   File "/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 335, in import_object
2014-06-23 09:55:32.057 |     __import__(self.modname)
2014-06-23 09:55:32.057 | ImportError: No module named cloud
2014-06-23 09:55:32.057 | /home/jenkins/workspace/gate-nova-docs/doc/source/devref/api.rst:66: WARNING: autodoc: failed to import module u'nova.api.openstack.backup_schedules'; the following exception was raised:
2014-06-23 09:55:32.057 | Traceback (most recent call last):
2014-06-23 09:55:32.057 |   File "/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 335, in import_object
2014-06-23 09:55:32.057 |     __import__(self.modname)
2014-06-23 09:55:32.058 | ImportError: No module named backup_schedules

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333232

Title:
  Gate failure: autodoc: failed to import module X

Status in OpenStack Compute (Nova):
  New

Bug description:
  Spurious gate failure: http://logs.openstack.org/65/99065/4/check/gate-nova-docs/af27af8/console.html

  Logs are full of:

  2014-06-23 09:55:32.057 | /home/jenkins/workspace/gate-nova-docs/doc/source/devref/api.rst:39: WARNING: autodoc: failed to import module u'nova.api.cloud'; the following exception was raised:
  2014-06-23 09:55:32.057 | Traceback (most recent call last):
  2014-06-23 09:55:32.057 |   File "/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 335, in import_object
  2014-06-23 09:55:32.057 |     __import__(self.modname)
  2014-06-23 09:55:32.057 | ImportError: No module named cloud
  2014-06-23 09:55:32.057 | /home/jenkins/workspace/gate-nova-docs/doc/source/devref/api.rst:66: WARNING: autodoc: failed to import module u'nova.api.openstack.backup_schedules'; the following exception was raised:
  2014-06-23 09:55:32.057 | Traceback (most recent call last):
  2014-06-23 09:55:32.057 |   File "/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 335, in import_object
  2014-06-23 09:55:32.057 |     __import__(self.modname)
  2014-06-23 09:55:32.058 | ImportError: No module named backup_schedules

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333161] Re: delete image url in glanceclient v2

2014-06-23 Thread Erno Kuvaja
This is not glance bug, but seems to be correct for python-glanceclient

** Project changed: glance => python-glanceclient

** Changed in: python-glanceclient
 Assignee: (unassigned) => Erno Kuvaja (jokke)

** Changed in: python-glanceclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1333161

Title:
  delete image url in glanceclient v2

Status in Python client library for Glance:
  In Progress

Bug description:
  I notice the delete image method in v2/images.py has no slash(/) in
  front of v2, but others have:

  def delete(self, image_id):
  self.http_client.json_request('DELETE', 'v2/images/%s' % image_id)

  def get(self, image_id):
  url = '/v2/images/%s' % image_id

  And the log like follows:

  curl -i -X DELETE -H 'X-Auth-Token: ***' -H 'Content-Type:
  application/json' -H 'User-Agent: python-glanceclient'
  http://127.0.0.1:9292v2/images/ad44c714-d4f3-4568-b5fc-d4f2dbbe1f89

  There is no slash between port and path_info, which may cause problems
  if nginx is in front of glance-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1333161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333240] [NEW] filter project doesn't work per all pages

2014-06-23 Thread Ami Jeain
Public bug reported:

Go to the admin page with a big list of instances (I had around 100),
enough to have multiple pages of them.
While on the first page of instances, filter by a project name that
doesn't appear on the first page.
I would expect the filter to find it, but it only finds the project if I
navigate to the page where it actually appears.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333240

Title:
  filter project doesn't work per all pages

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Go to the admin page with a big list of instances (I had around 100),
  enough to have multiple pages of them.
  While on the first page of instances, filter by a project name that
  doesn't appear on the first page.
  I would expect the filter to find it, but it only finds the project if
  I navigate to the page where it actually appears.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269134] Re: ML2 unit test coverage - driver_api

2014-06-23 Thread Ilya Shakhat
The file neutron/plugins/ml2/driver_api is a collection of abstract
classes. The only code (which coverage complains about) is 'pass'
statements. Thus I am marking this bug as Invalid.
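As general background (not necessarily how driver_api is actually written), a 'pass'-only abstract API can instead be declared with Python's abc module, which makes the intent explicit and instantiation impossible:

```python
# Toy stand-in for an ML2-style driver interface; the names are
# illustrative, not the real driver_api classes.
import abc

class MechanismDriver(abc.ABC):
    @abc.abstractmethod
    def create_network_precommit(self, context):
        """Hook that concrete drivers must override."""

class NoopDriver(MechanismDriver):
    def create_network_precommit(self, context):
        return None

# The abstract base itself cannot be instantiated:
try:
    MechanismDriver()
except TypeError:
    pass  # expected: abstract class with unimplemented abstract methods

# A concrete subclass works normally.
assert NoopDriver().create_network_precommit(None) is None
```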

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269134

Title:
  ML2 unit test coverage - driver_api

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  From tox -e cover neutron.tests.unit.ml2; coverage report -m

  neutron/plugins/ml2/driver_api   94   20   10   0   81%   55, 65, 85,
  99, 115, 129, 150, 160, 165, 186, 196, 217, 227, 232, 237, 246, 260,
  293, 583, 594

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333325] [NEW] glance-api workers should default to number of CPUs available

2014-06-23 Thread Matt Riedemann
Public bug reported:

The docs recommend setting the 'workers' option equal to the number of
CPUs on the host, but it defaults to 1.  I proposed a change to devstack
to set workers=`nproc`, but it was decided to move this into glance itself:

https://review.openstack.org/#/c/99739/

Note that nova changed in Icehouse to default to the number of CPUs
available as well, and Cinder will most likely be doing the same for its
osapi_volume_workers option.

This will have a DocImpact and probably UpgradeImpact is also necessary
since if you weren't setting the workers value explicitly before the
change you'll now have `nproc` glance API workers by default after
restarting the service.
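For reference, the nova approach alluded to above is roughly the following (a sketch modeled on the linked review; glance's eventual implementation may differ):

```python
# Hedged sketch of a CPU-count default for the 'workers' option.
# Modeled on nova.utils.cpu_count(); not glance's final code.
import multiprocessing

def cpu_count():
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        # Platforms where the count is unavailable fall back to the
        # old behavior of a single worker.
        return 1

workers = cpu_count()  # equivalent of `nproc` as the default for 'workers'
assert workers >= 1
```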

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1333325

Title:
  glance-api workers should default to number of CPUs available

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The docs recommend setting the 'workers' option equal to the number of
  CPUs on the host, but it defaults to 1.  I proposed a change to devstack
  to set workers=`nproc`, but it was decided to move this into glance
  itself:

  https://review.openstack.org/#/c/99739/

  Note that nova changed in Icehouse to default to the number of CPUs
  available as well, and Cinder will most likely be doing the same for
  its osapi_volume_workers option.

  This will have a DocImpact and probably UpgradeImpact is also
  necessary since if you weren't setting the workers value explicitly
  before the change you'll now have `nproc` glance API workers by
  default after restarting the service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1333325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1101404] Re: nova syslog logging to /dev/log race condition in python 2.6

2014-06-23 Thread Bogdan Dobrelya
** Also affects: mos
   Importance: Undecided
   Status: New

** Changed in: mos
   Status: New => Confirmed

** Changed in: mos
   Importance: Undecided => High

** Changed in: mos
 Milestone: None => 5.1

** Changed in: mos
 Assignee: (unassigned) => MOS Nova (mos-nova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1101404

Title:
  nova syslog logging to /dev/log race condition in python 2.6

Status in OpenStack Identity (Keystone):
  Confirmed
Status in Mirantis OpenStack:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Confirmed

Bug description:
  
  running nova-api-ec2
  running rsyslog

  service rsyslog restart ; service nova-api-ec2 restart

  nova-api-ec2 consumes up to 100% of the available CPU (or at least a
  full core) and is not responsive.  /var/log/nova/nova-api-ec2.log
  states the socket is already in use.

  strace the process

  sendto(3, "<142>2013-01-18 20:00:22 24882 INFO nova.service [-] Caught
  SIGTERM, exiting\0", 77, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint
  is not connected)

  service nova-api-ec2 restart fails as upstart already thinks the
  process has been terminated.

  The only way to recover is to pkill -9 nova-api-ec2 and then restart
  it with 'service nova-api-ec2 restart'.

  The same behavior has been seen in all nova-api services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1101404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1307346] Re: Directory /var/lib/glance not owned by glance user

2014-06-23 Thread Justin Shepherd
** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1307346

Title:
  Directory /var/lib/glance not owned by glance user

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in “glance” package in Ubuntu:
  New

Bug description:
  Guys,

  Glance packages from Ubuntu 14.04 have wrong permissions on
  /var/lib/glance, look:

  ---
  root@controller:~# ll /var/lib/glance/
  total 60
  drwxr-xr-x  4 glance glance  4096 Apr 14 01:06 ./
  drwxr-xr-x 40 root   root4096 Apr 14 04:47 ../
  -rw-r--r--  1 glance glance 37888 Apr 14 01:06 glance.sqlite
  drwxr-xr-x  5 root   root4096 Apr 14 01:05 image-cache/
  drwxr-xr-x  2 root   root4096 Apr  2 10:40 images/
  ---

  This triggers an error:

  ---
  2014-04-14 04:52:59.475 973 WARNING glance.store.base 
[4f6aa565-e15e-4201-bb24-417add066796 - - - - -] Failed to configure store 
correctly: Store filesystem could not be configured correctly. Reason: 
Permission to write in /var/lib/glance/images/ denied Disabling add method.
  ---

  To fix it, I just did:

  ---
  root@controller:~# chown glance: /var/lib/glance -R
  ---

  Cheers!
  Thiago

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1307346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333325] Re: glance-api workers should default to number of CPUs available

2014-06-23 Thread Matt Riedemann
Added oslo since I'd like to move nova.utils.cpu_count() from nova into
oslo-incubator (probably the service.py module) so that it can be
re-used in glance and cinder.

https://review.openstack.org/#/c/69266/1/nova/utils.py

** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1333325

Title:
  glance-api workers should default to number of CPUs available

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  The docs recommend setting the 'workers' option equal to the number of
  CPUs on the host, but it defaults to 1.  I proposed a change to devstack
  to set workers=`nproc`, but it was decided to move this into glance
  itself:

  https://review.openstack.org/#/c/99739/

  Note that nova changed in Icehouse to default to the number of CPUs
  available as well, and Cinder will most likely be doing the same for
  its osapi_volume_workers option.

  This will have a DocImpact and probably UpgradeImpact is also
  necessary since if you weren't setting the workers value explicitly
  before the change you'll now have `nproc` glance API workers by
  default after restarting the service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1333325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333365] [NEW] Deleting a VM port does not remove Security rules in ip tables

2014-06-23 Thread chandrasekaran natarajan
Public bug reported:

Deleting a VM port does not remove the security rules associated with
the VM port in iptables.


Setup : 

ICEHOUSE GA with KVM Compute node,network node, controller

1. Spawn a VM with security group attached.
2. Delete a VM port 
3. Verify the ip tables


VM IP  :  10.10.1.4
Rules attached : TCP and icmp rule


root@ICN-KVM:~# ovs-vsctl show
f3b34ea5-9799-460d-99bb-26359fd26e38
Bridge br-eth1
Port br-eth1
Interface br-eth1
type: internal
Port phy-br-eth1
Interface phy-br-eth1
Port eth1
Interface eth1
Bridge br-int
Port br-int
Interface br-int
type: internal
Port qvof28b18dc-c3   <--- VM tap port
tag: 1
Interface qvof28b18dc-c3
Port int-br-eth1
Interface int-br-eth1
ovs_version: 2.0.1
root@ICN-KVM:~#


After deleting the port, the security rules are still present in iptables.
-

root@ICN-KVM:~# iptables-save | grep 28b18dc
:neutron-openvswi-if28b18dc-c - [0:0]
:neutron-openvswi-of28b18dc-c - [0:0]
:neutron-openvswi-sf28b18dc-c - [0:0]
-A neutron-openvswi-FORWARD -m physdev --physdev-out tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-of28b18dc-c
-A neutron-openvswi-if28b18dc-c -m state --state INVALID -j DROP
-A neutron-openvswi-if28b18dc-c -m state --state RELATED,ESTABLISHED -j RETURN
-A neutron-openvswi-if28b18dc-c -p tcp -m tcp -j RETURN
-A neutron-openvswi-if28b18dc-c -p icmp -j RETURN
-A neutron-openvswi-if28b18dc-c -s 10.10.1.3/32 -p udp -m udp --sport 67 
--dport 68 -j RETURN
-A neutron-openvswi-if28b18dc-c -j neutron-openvswi-sg-fallback
-A neutron-openvswi-of28b18dc-c -p udp -m udp --sport 68 --dport 67 -j RETURN
-A neutron-openvswi-of28b18dc-c -j neutron-openvswi-sf28b18dc-c
-A neutron-openvswi-of28b18dc-c -p udp -m udp --sport 67 --dport 68 -j DROP
-A neutron-openvswi-of28b18dc-c -m state --state INVALID -j DROP
-A neutron-openvswi-of28b18dc-c -m state --state RELATED,ESTABLISHED -j RETURN
-A neutron-openvswi-of28b18dc-c -j RETURN
-A neutron-openvswi-of28b18dc-c -j neutron-openvswi-sg-fallback
-A neutron-openvswi-sf28b18dc-c -s 10.10.1.4/32 -m mac --mac-source 
FA:16:3E:D4:47:F8 -j RETURN
-A neutron-openvswi-sf28b18dc-c -j DROP
-A neutron-openvswi-sg-chain -m physdev --physdev-out tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-if28b18dc-c
-A neutron-openvswi-sg-chain -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-of28b18dc-c
root@ICN-KVM:~#

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333365

Title:
  Deleting a VM port does not remove Security rules in ip tables

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Deleting a VM port does not remove the security rules associated with
  the VM port in iptables.

  
  Setup : 

  ICEHOUSE GA with KVM Compute node,network node, controller

  1. Spawn a VM with security group attached.
  2. Delete a VM port 
  3. Verify the ip tables


  VM IP  :  10.10.1.4
  Rules attached : TCP and icmp rule

  
  root@ICN-KVM:~# ovs-vsctl show
  f3b34ea5-9799-460d-99bb-26359fd26e38
  Bridge br-eth1
  Port br-eth1
  Interface br-eth1
  type: internal
  Port phy-br-eth1
  Interface phy-br-eth1
  Port eth1
  Interface eth1
  Bridge br-int
  Port br-int
  Interface br-int
  type: internal
  Port qvof28b18dc-c3   <--- VM tap port
  tag: 1
  Interface qvof28b18dc-c3
  Port int-br-eth1
  Interface int-br-eth1
  ovs_version: 2.0.1
  root@ICN-KVM:~#

  
  After deleting the port, the security rules are still present in iptables.
  -

  root@ICN-KVM:~# iptables-save | grep 28b18dc
  :neutron-openvswi-if28b18dc-c - [0:0]
  :neutron-openvswi-of28b18dc-c - [0:0]
  :neutron-openvswi-sf28b18dc-c - [0:0]
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-sg-chain
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-sg-chain
  -A neutron-openvswi-INPUT -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-of28b18dc-c
  -A neutron-openvswi-if28b18dc-c -m state --state INVALID -j DROP
  -A neutron-openvswi-if28b18dc-c -m state --state RELATED,ESTABLISHED -j RETURN
  -A 

[Yahoo-eng-team] [Bug 1333325] Re: glance-api workers should default to number of CPUs available

2014-06-23 Thread Matt Riedemann
** No longer affects: oslo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1333325

Title:
  glance-api workers should default to number of CPUs available

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The docs recommend setting the 'workers' option equal to the number of
  CPUs on the host, but it defaults to 1.  I proposed a change to devstack
  to set workers=`nproc`, but it was decided to move this into glance
  itself:

  https://review.openstack.org/#/c/99739/

  Note that nova changed in Icehouse to default to the number of CPUs
  available as well, and Cinder will most likely be doing the same for
  its osapi_volume_workers option.

  This will have a DocImpact and probably UpgradeImpact is also
  necessary since if you weren't setting the workers value explicitly
  before the change you'll now have `nproc` glance API workers by
  default after restarting the service.
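  The intended default can be sketched as follows (the helper name and
  fallback logic below are illustrative assumptions, not Glance's actual
  code):

```python
import multiprocessing

def get_workers(configured=None):
    """Return the API worker count: an explicit 'workers' setting wins,
    otherwise fall back to the number of CPUs on the host (`nproc`)."""
    if configured is not None:
        return configured
    return multiprocessing.cpu_count() or 1
```

  This mirrors the UpgradeImpact note above: with no explicit setting,
  the count silently jumps from 1 to the host's CPU count.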

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1333325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332601] Re: Refactor Authenticates and generates a token docs for Keystone v3

2014-06-23 Thread Dolph Mathews
Happy to see this improved, but we don't require a bug to track the
work. Thanks!

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1332601

Title:
  Refactor Authenticates and generates a token docs for Keystone v3

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The external docs for the Authenticates and generates a token API
  call in Keystone v3 are a mess, specifically related to how they lay
  out the various requests and their associated responses. There are
  many ways that a token can be generated (8 as far as the existing docs
  reflect), and there is no indication given to the application
  developer that if they submit a token request with a request body of
  X, they will receive a response that looks like Y. This seems to be an
  obvious way to lay out the possible request/response options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1332601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287938] Re: Keystoneclient logs auth tokens

2014-06-23 Thread Dolph Mathews
Closes-Bug should actually work, but unfortunately the bug was targeted
at keystone rather than python-keystoneclient.

** Project changed: keystone => python-keystoneclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1287938

Title:
  Keystoneclient logs auth tokens

Status in Python client library for Keystone:
  In Progress

Bug description:
  ``keystoneclient/middleware/auth_token.py`` contains a bunch of places
  where auth tokens are logged out.  This could be useful for debugging,
  but log files are the kinds of things that users often forget to
  secure.  We should make them not contain sensitive data.
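  One hedged way to keep tokens out of log files is a redacting logging
  filter; the helper names and truncation strategy below are assumptions
  for illustration, not keystoneclient's eventual fix:

```python
import logging

def mask_token(value):
    """Redact an auth token, keeping a short prefix for correlation.
    (The exact redaction strategy here is an assumption.)"""
    return value[:6] + '***' if len(value) > 6 else '***'

class TokenFilter(logging.Filter):
    """Logging filter that redacts X-Auth-Token values before a record
    is emitted, so raw tokens never reach the log file."""
    def filter(self, record):
        msg = record.getMessage()
        if 'X-Auth-Token:' in msg:
            head, _, token = msg.partition('X-Auth-Token:')
            record.msg = head + 'X-Auth-Token: ' + mask_token(token.strip())
            record.args = ()
        return True
```

  Attached via `logger.addFilter(TokenFilter())`, this keeps the curl
  debug lines useful without leaving full tokens behind.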

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-keystoneclient/+bug/1287938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331406] Re: can not login to Dashboard on devstack

2014-06-23 Thread David Lyle
While this affects Horizon, the problem was in django_openstack_auth and
a fix has been released.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1331406

Title:
  can not login to Dashboard on devstack

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Using fresh master of devstack and fresh masters of all services.

  When I try to login into the Dashboard, I do not leave the login page
  (as if nothing happened, no error displayed). Strangely the screen log
  for horizon service in devstack displays

  [Wed Jun 18 10:09:46.533780 2014] [:error] [pid 24605:tid 139679844230912] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.122.162
  [Wed Jun 18 10:09:46.535449 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:Setting read timeout to None
  [Wed Jun 18 10:09:46.623021 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:POST /v2.0/tokens HTTP/1.1 200 1352
  [Wed Jun 18 10:09:46.633130 2014] [:error] [pid 24605:tid 139679844230912] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.122.162
  [Wed Jun 18 10:09:46.633459 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:Setting read timeout to None
  [Wed Jun 18 10:09:46.652504 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:GET /v2.0/tenants HTTP/1.1 200 244
  [Wed Jun 18 10:09:46.654398 2014] [:error] [pid 24605:tid 139679844230912] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.122.162
  [Wed Jun 18 10:09:46.654701 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:Setting read timeout to None
  [Wed Jun 18 10:09:46.750292 2014] [:error] [pid 24605:tid 139679844230912] 
DEBUG:urllib3.connectionpool:POST /v2.0/tokens HTTP/1.1 200 7457
  [Wed Jun 18 10:09:46.753146 2014] [:error] [pid 24605:tid 139679844230912] 
Login successful for user demo.
  [Wed Jun 18 10:09:46.753354 2014] [:error] [pid 24605:tid 139679844230912] 
DeprecationWarning: check_for_test_cookie is deprecated; ensure your login view 
is CSRF-protected.
  [Wed Jun 18 10:09:46.753396 2014] [:error] [pid 24605:tid 139679844230912] 
WARNING:py.warnings:DeprecationWarning: check_for_test_cookie is deprecated; 
ensure your login view is CSRF-protected.

  
  Note the "Login successful" line. All the OS cli clients work as expected 
with the same credentials I use to login.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1331406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323299] Re: fwaas: firewall is not working for when destination ip address is VM's floating ip in firewall rule

2014-06-23 Thread Sumit Naiksatam
This is working as designed. The current FWaaS implementation sees the
ip address before it's DNAT'ed, not after it. Changing this would
probably be a bigger change, and a feature.

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323299

Title:
  fwaas: firewall is not working for when destination  ip address is
  VM's floating ip in firewall rule

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  DESCRIPTION: 
   
  Firewall is not working when setting the destination-ip-address as the VM's 
floating ip  
  Steps to Reproduce: 
  1. create one network and attach it to the newly created router
  2. Create VMs on the above network
  3. create security group rule for icmp 
  4. create an external network and attach it to the router as gateway
  5. create floating ip and associate it to the VMs
  6. create a first firewall rule with protocol=icmp, action=deny and 
destination-ip-address as the floating ip
  7. create second firewall rule as protocol=any action=allow
  8. attach the rule to the policy and the policy to the firewall
  9. ping the VMs floating ip from network node which is having the external 
network configured.

  Actual Results: 
  Ping succeeds

  Expected Results: 
  Ping should fail as per the firewall rule

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333410] [NEW] Several nova unit tests failing related to IPs

2014-06-23 Thread Matt Riedemann
Public bug reported:

There are several different tests failing here:

http://logs.openstack.org/79/101579/4/check/gate-nova-
python26/990cd05/console.html

Checking on the ec2 failure shows it started on 6/23:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5hcGkuZWMyLnRlc3RfY2xvdWQuQ2xvdWRUZXN0Q2FzZS50ZXN0X2Fzc29jaWF0ZV9kaXNhc3NvY2lhdGVfYWRkcmVzc1wiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNTU0ODE0OTQ2fQ==

I'm guessing this is the change that caused the problem:

https://github.com/openstack/nova/commit/077e3c770ebeebd037ce882863a6b5dcefd644cf

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333410

Title:
  Several nova unit tests failing related to IPs

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are several different tests failing here:

  http://logs.openstack.org/79/101579/4/check/gate-nova-
  python26/990cd05/console.html

  Checking on the ec2 failure shows it started on 6/23:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5hcGkuZWMyLnRlc3RfY2xvdWQuQ2xvdWRUZXN0Q2FzZS50ZXN0X2Fzc29jaWF0ZV9kaXNhc3NvY2lhdGVfYWRkcmVzc1wiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNTU0ODE0OTQ2fQ==

  I'm guessing this is the change that caused the problem:

  
https://github.com/openstack/nova/commit/077e3c770ebeebd037ce882863a6b5dcefd644cf

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333407] [NEW] Secure Site Recommendations recommends setting a flag that is already default

2014-06-23 Thread Matt Fischer
Public bug reported:

See: http://docs.openstack.org/developer/horizon/topics/deployment.html
#secure-site-recommendations

The docs recommend setting SESSION_COOKIE_HTTPONLY = True, however this
is already the default:

https://github.com/openstack/horizon/blob/master/openstack_dashboard/settings.py#L166

When I tried to add this line to the example config file I was told it's
already the default and not needed there; since that is the case, the docs
need to be fixed.


See discussion in:

https://review.openstack.org/#/c/101259/


<david-lyle> I don't agree with your change, 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/settings.py#L166
 already sets that
<mfisch> so then its a doc bug
<mfisch> see my comment
<mfisch> I'll file a doc bug

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333407

Title:
  Secure Site Recommendations recommends setting a flag that is already
  default

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  See:
  http://docs.openstack.org/developer/horizon/topics/deployment.html
  #secure-site-recommendations

  The docs recommend setting SESSION_COOKIE_HTTPONLY = True, however
  this is already the default:

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/settings.py#L166

  When I tried to add this line to the example config file I was told
  it's already the default and not needed there; since that is the case,
  the docs need to be fixed.

  
  See discussion in:

  https://review.openstack.org/#/c/101259/

  
   <david-lyle> I don't agree with your change, 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/settings.py#L166
 already sets that
   <mfisch> so then its a doc bug
   <mfisch> see my comment
   <mfisch> I'll file a doc bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158684] Re: Pre-created ports get deleted on VM delete

2014-06-23 Thread Clint Byrum
This affects Heat, but it isn't a bug in Heat. Marking Invalid.

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158684

Title:
  Pre-created ports get deleted on VM delete

Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  1) Pre create a port using port-create
  2) Boot a VM with nova boot --nic port_id=created port
  3) Delete a VM.

  Expected: VM should boot using the provided port_id.
  When the VM is deleted, the port corresponding to the pre-created port_id 
should not get deleted, as many applications and security settings in a large 
network could depend on the port's configured properties.

  Observed behavior:
  There is no way I could prevent the port associated with the VM from being 
deleted by nova delete.
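  A minimal sketch of the desired behaviour, assuming nova tracked which
  ports it created itself (the `created_by_nova` flag is hypothetical):

```python
def ports_to_delete(ports):
    """Return only the ports nova itself created for this instance.
    Pre-created ports (no created_by_nova marker) should merely be
    detached on 'nova delete', not destroyed."""
    return [p['id'] for p in ports if p.get('created_by_nova')]
```

  With that split, step 3 above would detach the pre-created port and
  leave its security settings intact.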

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1158684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333434] [NEW] Floating ip 172.24.4.1 is not associated with instance

2014-06-23 Thread clayg
Public bug reported:

Got a failed tempest test in the gate:

2014-06-23 20:41:24.763 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescued_vm_associate_dissociate_floating_ip[gate]
2014-06-23 20:41:24.763 | 
-
2014-06-23 20:41:24.763 | 
2014-06-23 20:41:24.763 | Captured traceback:
2014-06-23 20:41:24.763 | ~~~
2014-06-23 20:41:24.763 | Traceback (most recent call last):
2014-06-23 20:41:24.763 |   File 
tempest/api/compute/servers/test_server_rescue.py, line 109, in 
test_rescued_vm_associate_dissociate_floating_ip
2014-06-23 20:41:24.764 | self.server_id)
2014-06-23 20:41:24.764 |   File 
tempest/services/compute/xml/floating_ips_client.py, line 100, in 
disassociate_floating_ip_from_server
2014-06-23 20:41:24.764 | resp, body = self.post(url, str(doc))
2014-06-23 20:41:24.764 |   File tempest/common/rest_client.py, line 
218, in post
2014-06-23 20:41:24.764 | return self.request('POST', url, 
extra_headers, headers, body)
2014-06-23 20:41:24.764 |   File tempest/common/rest_client.py, line 
430, in request
2014-06-23 20:41:24.764 | resp, resp_body)
2014-06-23 20:41:24.764 |   File tempest/common/rest_client.py, line 
497, in _error_checker
2014-06-23 20:41:24.764 | raise 
exceptions.UnprocessableEntity(resp_body)
2014-06-23 20:41:24.764 | UnprocessableEntity: Unprocessable entity
2014-06-23 20:41:24.764 | Details: {'message': 'Floating ip 172.24.4.1 
is not associated with instance aa77a874-e802-43f0-beb5-e9c91d86519b.', 'code': 
'422'}

I'm trying to use logstash more, cause it seems like that helps
something?  It does sorta look like this one happens more today than the
previous run of the mill failures:

http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkZsb2F0aW5nIGlwIDE3Mi4yNC40LjEgaXMgbm90IGFzc29jaWF0ZWQgd2l0aCBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNTU4OTA3MTI2fQ==

but, idk, I never feel like reporting the symptom is really the bug?
I see errors in the n-cpu logs that look more like something broke:

http://logs.openstack.org/91/101991/1/check/check-tempest-dsvm-
full/ddf95f6/logs/screen-n-cpu.txt.gz?#_2014-06-23_20_39_05_833

oh well, I don't know why for some reason still think filing these bugs
is useful to someone... let me know.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333434

Title:
  Floating ip 172.24.4.1 is not associated with instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  Got a failed tempest test in the gate:

  2014-06-23 20:41:24.763 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescued_vm_associate_dissociate_floating_ip[gate]
  2014-06-23 20:41:24.763 | 
-
  2014-06-23 20:41:24.763 | 
  2014-06-23 20:41:24.763 | Captured traceback:
  2014-06-23 20:41:24.763 | ~~~
  2014-06-23 20:41:24.763 | Traceback (most recent call last):
  2014-06-23 20:41:24.763 |   File 
tempest/api/compute/servers/test_server_rescue.py, line 109, in 
test_rescued_vm_associate_dissociate_floating_ip
  2014-06-23 20:41:24.764 | self.server_id)
  2014-06-23 20:41:24.764 |   File 
tempest/services/compute/xml/floating_ips_client.py, line 100, in 
disassociate_floating_ip_from_server
  2014-06-23 20:41:24.764 | resp, body = self.post(url, str(doc))
  2014-06-23 20:41:24.764 |   File tempest/common/rest_client.py, 
line 218, in post
  2014-06-23 20:41:24.764 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-06-23 20:41:24.764 |   File tempest/common/rest_client.py, 
line 430, in request
  2014-06-23 20:41:24.764 | resp, resp_body)
  2014-06-23 20:41:24.764 |   File tempest/common/rest_client.py, 
line 497, in _error_checker
  2014-06-23 20:41:24.764 | raise 
exceptions.UnprocessableEntity(resp_body)
  2014-06-23 20:41:24.764 | UnprocessableEntity: Unprocessable entity
  2014-06-23 20:41:24.764 | Details: {'message': 'Floating ip 
172.24.4.1 is not associated with instance 
aa77a874-e802-43f0-beb5-e9c91d86519b.', 'code': '422'}

  I'm trying to use logstash more, cause it seems like that helps
  something?  It does sorta look like this one happens more today than
  the previous run of the mill failures:

  

[Yahoo-eng-team] [Bug 1333440] [NEW] Secure Site Recommendations does not discuss LOGGING settings

2014-06-23 Thread Matt Fischer
Public bug reported:

The Secure Site Recommendations
(http://docs.openstack.org/developer/horizon/topics/deployment.html
#secure-site-recommendations) does not mention anything about the
LOGGING section. One specific issue that should be covered is that if
you ship the example config file, it will log the keystone requests as
DEBUG and that will log plaintext passwords. This is very dangerous.
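A hedged example of the kind of LOGGING guidance the docs could add for
local_settings.py (the specific logger names are assumptions based on
what appears in Horizon's logs, not an official recommendation):

```python
# Raising these loggers from DEBUG keeps request bodies -- and thus
# plaintext keystone passwords -- out of the log files.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        'keystoneclient': {'level': 'INFO'},
        'urllib3': {'level': 'INFO'},
    },
}
```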

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333440

Title:
  Secure Site Recommendations does not discuss LOGGING settings

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Secure Site Recommendations
  (http://docs.openstack.org/developer/horizon/topics/deployment.html
  #secure-site-recommendations) does not mention anything about the
  LOGGING section. One specific issue that should be covered is that if
  you ship the example config file, it will log the keystone requests as
  DEBUG and that will log plaintext passwords. This is very dangerous.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298495] Re: Associate and disassociate floating IP broken logic in PLUMgrid Plugin

2014-06-23 Thread Fawad Khaliq
** Changed in: neutron/icehouse
 Assignee: (unassigned) => Fawad Khaliq (fawadkhaliq)

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298495

Title:
  Associate and disassociate floating IP broken logic in PLUMgrid Plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  It has been observed that the PLUMgrid Plugin does not handle association and 
disassociation of floating IPs well. Incomplete data is sent to the backend while 
doing a delete update. 
  The fix would require changes in update_floatingip and 
disassociate_floatingips.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333442] [NEW] PLUMgrid Plugin DHCP and gateway IP conflict

2014-06-23 Thread Fawad Khaliq
Public bug reported:

In the PLUMgrid Plugin, the DHCP server IP is reserved as the first IP in the
CIDR. This is a bad assumption, as it conflicts with a deployment that would
like to use that address as the gateway IP. This should be fixed.
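The clash is easy to see by computing the first usable address of a CIDR;
this sketch only illustrates the conflict and is not the plugin's code:

```python
import ipaddress

def first_host(cidr):
    """First usable address in a CIDR -- the address the plugin
    reserves for its DHCP server."""
    net = ipaddress.ip_network(cidr)
    return str(next(net.hosts()))

# On 10.0.0.0/24 this is 10.0.0.1, which is also the address Neutron
# defaults to for the subnet's gateway_ip -- hence the conflict.
print(first_host('10.0.0.0/24'))  # 10.0.0.1
```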

** Affects: neutron
 Importance: Undecided
 Assignee: Fawad Khaliq (fawadkhaliq)
 Status: New


** Tags: havana-backport-potential icehouse-backport-potential

** Tags added: icehouse-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => Fawad Khaliq (fawadkhaliq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333442

Title:
  PLUMgrid Plugin DHCP and gateway IP conflict

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the PLUMgrid Plugin, the DHCP server IP is reserved as the first IP in
  the CIDR. This is a bad assumption, as it conflicts with a deployment that
  would like to use that address as the gateway IP. This should be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327425] Re: With default configuration Horizon is exposed to session-fixation attack

2014-06-23 Thread Travis McPeak
** Changed in: ossn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1327425

Title:
  With default configuration Horizon is exposed to session-fixation
  attack

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  With the default configuration, if an attacker can obtain a sessionid
  value from a user, the attacker can view and perform actions as that
  user.  This ability does not go away after the user has logged out.

  To view a potential exploit:
  1)  Create an admin profile with access to the admin project and a non admin 
profile with no access to the admin project
  2)  Log in to Horizon as the admin, navigate to the project/instances page.  
Launch some vms.
  3)  Open up firebug and capture the sessionid value.
  4)  Log out of the admin user.
  5)  Log in as the non admin user
  6)  navigate to the project/instances page
  7)  Use firebug to paste in the admin user's sessionid value
  8)  click the project/instances link again to force a round trip.
  *!* It's possible for the non admin user to view all of the admin project vms
  9)  In the action column choose More-Terminate Instance
  *!* It's possible for the non admin user to delete an admin project vm.
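  For reference, a hedged hardening sketch for local_settings.py using
  stock Django settings (an assumption about reasonable mitigation, not
  a complete fix -- it limits what a captured sessionid is worth but
  does not by itself invalidate ids issued before logout):

```python
# Store session state server-side rather than only in the cookie, and
# restrict how the sessionid cookie can be read or transmitted.
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
SESSION_COOKIE_SECURE = True    # only send the cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True  # keep it away from JavaScript
```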

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1327425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333471] [NEW] Checking security group in nova immediately after instance is created results in error

2014-06-23 Thread Bill Rich
Public bug reported:

Environment:
Openstack Havana with Neutron for networking and security groups

Error:
Response from nova:
The server could not comply with the request since it is either malformed or 
otherwise incorrect., code: 400

In nova-api log
2014-06-19 00:48:39.483 17462 ERROR nova.api.openstack.wsgi 
[req-60aa8941-d129-4018-a30f-f815f0770118 10764ccc2d154d0a919f5104872fb9a8 
2b60ae3ba5bd41d893674d0e57ae4390] Exception handling resource: 'NoneType' 
object is not iterable
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 997, in 
_process_stack
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi action_result = 
self.dispatch(meth, request, action_args)
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 1078, in 
dispatch
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py,
 line 438, in index
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi for group in 
groups]
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi TypeError: 
'NoneType' object is not iterable
2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi 
2014-06-19 00:48:39.485 17462 INFO nova.osapi_compute.wsgi.server 
[req-60aa8941-d129-4018-a30f-f815f0770118 10764ccc2d154d0a919f5104872fb9a8 
2b60ae3ba5bd41d893674d0e57ae4390] 10.147.22.73,54.225.248.128 GET 
/v2/2b60ae3ba5bd41d893674d0e57ae4390/servers/c7e5f472-57fb-4a95-95cf-45c6506db0cd/os-security-groups
 HTTP/1.1 status: 400 len: 362 time: 0.0710380

Steps to reproduce:
1) Create new instance
2) Immediately check security group through nova 
(/v2/$tenant/servers/$server_id/os-security-groups)
3) Wait several seconds and try again (Works if given a small delay between 
instance creation and checking sec group)

Notes: This error did not come up in earlier versions of havana, but
started after a recent upgrade
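The traceback points at iterating over a `None` group list right after
boot; a defensive sketch of the failing spot (names are illustrative,
not nova's actual code):

```python
def format_groups(groups):
    """Format the security groups for an instance. Right after boot the
    lookup may not be populated yet; treat None as 'no groups yet'
    instead of letting the list comprehension raise TypeError."""
    return [g['name'] for g in (groups or [])]
```

With this guard the API would return an empty list (or could retry)
rather than a 400 for the brief window after instance creation.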

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333471

Title:
  Checking security group in nova immediately after instance is created
  results in error

Status in OpenStack Compute (Nova):
  New

Bug description:
  Environment:
  Openstack Havana with Neutron for networking and security groups

  Error:
  Response from nova:
  The server could not comply with the request since it is either malformed or 
otherwise incorrect., code: 400

  In nova-api log
  2014-06-19 00:48:39.483 17462 ERROR nova.api.openstack.wsgi 
[req-60aa8941-d129-4018-a30f-f815f0770118 10764ccc2d154d0a919f5104872fb9a8 
2b60ae3ba5bd41d893674d0e57ae4390] Exception handling resource: 'NoneType' 
object is not iterable
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 997, in 
_process_stack
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi action_result 
= self.dispatch(meth, request, action_args)
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 1078, in 
dispatch
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/security_groups.py,
 line 438, in index
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi for group in 
groups]
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi TypeError: 
'NoneType' object is not iterable
  2014-06-19 00:48:39.483 17462 TRACE nova.api.openstack.wsgi 
  2014-06-19 00:48:39.485 17462 INFO nova.osapi_compute.wsgi.server 
[req-60aa8941-d129-4018-a30f-f815f0770118 10764ccc2d154d0a919f5104872fb9a8 
2b60ae3ba5bd41d893674d0e57ae4390] 10.147.22.73,54.225.248.128 GET 
/v2/2b60ae3ba5bd41d893674d0e57ae4390/servers/c7e5f472-57fb-4a95-95cf-45c6506db0cd/os-security-groups
 HTTP/1.1 status: 400 len: 362 time: 0.0710380

  Steps to reproduce:
  1) Create new instance
  2) Immediately check security group through nova 
(/v2/$tenant/servers/$server_id/os-security-groups)
  3) Wait several seconds and try again (Works if given a small delay between 
instance creation and checking sec group)

  Notes: This error did not come up in earlier versions of havana, but
  started after a recent upgrade

To manage 

[Yahoo-eng-team] [Bug 1333475] [NEW] ML2 : network filters for provider attributes not implemented

2014-06-23 Thread Manish Godara
Public bug reported:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L207

TBD line item.

** Affects: neutron
 Importance: Undecided
 Assignee: Manish Godara (manishatyhoo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Manish Godara (manishatyhoo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333475

Title:
  ML2 : network filters for provider attributes  not implemented

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L207

  TBD line item.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333476] [NEW] keypair-add botches key from stdin

2014-06-23 Thread Mike Spreitzer
Public bug reported:

`cat X | nova keypair-add K` produces a different result from `nova
keypair-add --pub-key X K`.

The latter takes the contents of X as the public key; the former does
not.

For example:

ubuntu@mjs-dstk-623b:~$ cat bar.pub | nova keypair-add bar_stdin
-BEGIN RSA PRIVATE KEY-
MIIEpQIBAAKCAQEA7aiDpcq1JOLBS6l471eFAvBe7DcJqKaK5uM4+73DG+99aihF
RxY8kwYQfqS3MPdlAfVhNNBI4ehLpgDsAVfqm5QOZSZChkZPnJpjifOqszBynL99
txmwfDqFJnOvzLT1eY1Q/9O1GXHquolAIruB4T9N/6VW8yWnzK6fJ733uaJCnvIp
rgqem8Hjc8g+Yn6o3HnXkeykd9ayElB+d+79yd82LysXtZlmb+QOKLn8RHTWIgyT
qIqCp+qlxRQtqc04DT/qyhEJjC50il8jAi4DEYOWrSJSlQGTKsj/z9giVXyTp/Dx
Ops/zGpz6dAH8TBFldnr4XrEPm3mTEcXBEOfZwIDAQABAoIBAQCfZ+ZVb++sfAPW
8idRskxfOkcQ/aGW445La6EvCYsy06I1cCl3kuyyWOD7cRQG3gl8FNBMkmAwVpVX
FUs3Y3bTP62gHteEJOkFS3D0eOHIKvjVNoPmKm78BGyG7BXAoqf8DdOEpMXV+VjO
IX1JTqfBI6r3jDkUAe/ZFE9gYsUkVuybXhwZzksPJRRvoDU+76/Zqq9dEK8nUuqs
+Oq4bHOGZPdOLtGsOLfe4tgJ5vDCu4CNkmXjxcDwHQmId53n/xX5magsGC3YCyTg
iTy05A3XlO7EEPPEuhlrIoHToRfgnRFJRsc5DiY8jzqdS1MEYH4N+eafCszVdQfr
u8xIVWmRAoGBAPlrg9UK4k8kC4XLgcZKNm5+gir7Jceqx3fmK9qV1m+6RmHatdKH
zu8CHXpFiLstVwk+I+BjvpQoUK5yO4GwINNN3JPPsfEiPlIb9QLfrE+Hjtis4FBF
Qy2vwBTl+CK03J2OUc79wDJQffEhlgYMQbQtYQ2KK0mFbQz/IbMSqrYtAoGBAPPt
kLGbp3j7hq751r3MYbhBvhVBM9XHDIljOffvbOr9wwetTeoK/GYTjNXymKggNijO
Uhb+VNf5FbYE4hGx601AKizctwXGzDJX+BzccU8dzf0uyoqxmFHn2JYtZtgk6VRT
glyOwLXPMeBd2bo9nmJcdr7FWPwFFeBMCor5Q1xjAoGBAKtWLC3BWE09WZ0De5aX
jGTDCvAzrnRG4NeAikeR/sipkYfPEnAZUxHkxhMkiRTrxIpY4ZRXcKeeOi5b0nz4
XNRK/GedmYMoHt+QzPK4bEoFuR8nQsBhlBBiVvUENTzCOXsSNSiYL9tgZ+OpSsHE
0a3QLod6jtnmik8PRDsba6HRAoGACDMVKRM9ZvC1j04wrMKhCkuTcy1065u8TSX7
vdzbgW60Tp7BvrtNzrSbiFmWThh/GZIN6l30RipGU48IdmXPrhIZGNb2hAgxtwOE
AJxcZrduxDL9dfoQT7iGbE3sZhmfikkgWbImwjXLzGn7Nqp5l37aMwF5Q0d8e8Sy
mgdU/1cCgYEA6e+DGrKIZlTXij1pdCTgC/A6sQgrAu4yY/7duUdCquAtX4L5cjza
dERodDaWj584RlHSot5GRP3RIPRfS5TGozH3nkPSyQ3+6vJC8Af9uAj+TcVnlK+n
4iEC5PnZnfXG79rPu0YpdFTXPM/IzQZOFaMbwQ43+qjPcLX2pFdoyzA=
-----END RSA PRIVATE KEY-----

ubuntu@mjs-dstk-623b:~$ nova keypair-add --pub-key bar.pub bar_file

ubuntu@mjs-dstk-623b:~$ nova keypair-list
+---+-+
| Name  | Fingerprint |
+---+-+
...
| bar_stdin | 51:b4:0b:bc:6d:5d:3f:7e:bc:4c:e9:5c:03:fd:c2:4d |
| bar_file  | 93:85:70:f2:a5:40:ae:30:da:4a:1e:01:fd:80:79:24 |
+---+-+
ubuntu@mjs-dstk-623b:~$ 


This is with a DevStack install about an hour ago.  The following shows exact 
identifiers.

ubuntu@mjs-dstk-623b:~$ cd /opt/stack/nova

ubuntu@mjs-dstk-623b:/opt/stack/nova$ git branch -v
* master 80b827d Merge "Drop support for conductor 1.x rpc interface"

ubuntu@mjs-dstk-623b:/opt/stack/nova$ cd ../keystone/

ubuntu@mjs-dstk-623b:/opt/stack/keystone$ git branch -v
* master db0519d Merge "Make gen_pki.sh & debug_helper.sh bash8 compliant"
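For what it's worth, the stdin and --pub-key paths could be unified client-side with a small helper along these lines (a hypothetical sketch, not novaclient's actual code; the function name and fallback behaviour are illustrative only):

```python
import sys

def read_public_key(pub_key_path=None):
    """Return the public key text: from a file if a path is given,
    otherwise from piped stdin, otherwise None.

    Hypothetical helper sketching how piped input could be treated the
    same as --pub-key FILE, so both commands import the same key.
    """
    if pub_key_path is not None:
        with open(pub_key_path) as f:
            return f.read().strip()
    if not sys.stdin.isatty():
        # Input was piped in (e.g. `cat bar.pub | nova keypair-add ...`).
        return sys.stdin.read().strip()
    return None  # no key supplied: let the server generate one
```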

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333476

Title:
  keypair-add botches key from stdin

Status in OpenStack Compute (Nova):
  New

Bug description:
  `cat X | nova keypair-add K` produces a different result from `nova
  keypair-add --pub-key X K`.

  The latter takes the contents of X as the public key; the former does
  not.

  For example:

  ubuntu@mjs-dstk-623b:~$ cat bar.pub | nova keypair-add bar_stdin
  -----BEGIN RSA PRIVATE KEY-----
  MIIEpQIBAAKCAQEA7aiDpcq1JOLBS6l471eFAvBe7DcJqKaK5uM4+73DG+99aihF
  RxY8kwYQfqS3MPdlAfVhNNBI4ehLpgDsAVfqm5QOZSZChkZPnJpjifOqszBynL99
  txmwfDqFJnOvzLT1eY1Q/9O1GXHquolAIruB4T9N/6VW8yWnzK6fJ733uaJCnvIp
  rgqem8Hjc8g+Yn6o3HnXkeykd9ayElB+d+79yd82LysXtZlmb+QOKLn8RHTWIgyT
  qIqCp+qlxRQtqc04DT/qyhEJjC50il8jAi4DEYOWrSJSlQGTKsj/z9giVXyTp/Dx
  Ops/zGpz6dAH8TBFldnr4XrEPm3mTEcXBEOfZwIDAQABAoIBAQCfZ+ZVb++sfAPW
  8idRskxfOkcQ/aGW445La6EvCYsy06I1cCl3kuyyWOD7cRQG3gl8FNBMkmAwVpVX
  FUs3Y3bTP62gHteEJOkFS3D0eOHIKvjVNoPmKm78BGyG7BXAoqf8DdOEpMXV+VjO
  IX1JTqfBI6r3jDkUAe/ZFE9gYsUkVuybXhwZzksPJRRvoDU+76/Zqq9dEK8nUuqs
  +Oq4bHOGZPdOLtGsOLfe4tgJ5vDCu4CNkmXjxcDwHQmId53n/xX5magsGC3YCyTg
  iTy05A3XlO7EEPPEuhlrIoHToRfgnRFJRsc5DiY8jzqdS1MEYH4N+eafCszVdQfr
  u8xIVWmRAoGBAPlrg9UK4k8kC4XLgcZKNm5+gir7Jceqx3fmK9qV1m+6RmHatdKH
  zu8CHXpFiLstVwk+I+BjvpQoUK5yO4GwINNN3JPPsfEiPlIb9QLfrE+Hjtis4FBF
  Qy2vwBTl+CK03J2OUc79wDJQffEhlgYMQbQtYQ2KK0mFbQz/IbMSqrYtAoGBAPPt
  kLGbp3j7hq751r3MYbhBvhVBM9XHDIljOffvbOr9wwetTeoK/GYTjNXymKggNijO
  Uhb+VNf5FbYE4hGx601AKizctwXGzDJX+BzccU8dzf0uyoqxmFHn2JYtZtgk6VRT
  glyOwLXPMeBd2bo9nmJcdr7FWPwFFeBMCor5Q1xjAoGBAKtWLC3BWE09WZ0De5aX
  jGTDCvAzrnRG4NeAikeR/sipkYfPEnAZUxHkxhMkiRTrxIpY4ZRXcKeeOi5b0nz4
  XNRK/GedmYMoHt+QzPK4bEoFuR8nQsBhlBBiVvUENTzCOXsSNSiYL9tgZ+OpSsHE
  0a3QLod6jtnmik8PRDsba6HRAoGACDMVKRM9ZvC1j04wrMKhCkuTcy1065u8TSX7
  vdzbgW60Tp7BvrtNzrSbiFmWThh/GZIN6l30RipGU48IdmXPrhIZGNb2hAgxtwOE
  

[Yahoo-eng-team] [Bug 1333483] [NEW] pie chart doesn't display properly if total = 0

2014-06-23 Thread Cindy Lu
Public bug reported:

Example:

If you go to Admin > Project and change the number of volumes to 0 and
then go to the Project > Overview page, it shows "Volumes Used 2 of 0". It
shows as an empty pie chart.

Please see attached image.
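A guard along these lines (a hypothetical sketch, not Horizon's actual chart code) would avoid the divide-by-zero rendering when the quota total is 0:

```python
def usage_percent(used, total):
    """Percentage of quota consumed, clamped to [0, 100].

    A total of 0 (or a negative/unlimited quota) would otherwise
    divide by zero and render an empty pie chart.
    """
    if total <= 0:
        return 0.0
    return min(100.0, used * 100.0 / total)
```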

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: plane.png
   https://bugs.launchpad.net/bugs/1333483/+attachment/4137758/+files/plane.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333483

Title:
  pie chart doesn't display properly if total = 0

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Example:

  If you go to Admin > Project and change the number of volumes to 0 and
  then go to the Project > Overview page, it shows "Volumes Used 2 of 0". It
  shows as an empty pie chart.

  Please see attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333484] [NEW] Launch Instance boot from image (creates a new volume) broken

2014-06-23 Thread Cindy Lu
Public bug reported:

Once you select Instance Boot Source as "Boot from Image (creates a new
volume)" you expect another dynamic dropdown menu, but nothing happens
(see [A] in the image)

Please see image.

When I press Launch, it stays on the modal, but appends the other tab
content to the first tab (Details) (see [B] in the image)

At first I thought it was because I exceeded my volume quota, but that
doesn't seem to be the case.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Untitled.png
   
https://bugs.launchpad.net/bugs/1333484/+attachment/4137760/+files/Untitled.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333484

Title:
  Launch Instance > boot from image (creates a new volume) broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Once you select Instance Boot Source as "Boot from Image (creates a
  new volume)" you expect another dynamic dropdown menu, but nothing
  happens (see [A] in the image)

  Please see image.

  When I press Launch, it stays on the modal, but appends the other tab
  content to the first tab (Details) (see [B] in the image)

  At first I thought it was because I exceeded my volume quota, but that
  doesn't seem to be the case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333486] [NEW] Create Volume Snapshot incorrect infographic

2014-06-23 Thread Cindy Lu
Public bug reported:

Go to Project > Volumes and click on 'Create Snapshot' from a volume you
have.

The infographic says "Number of Volumes (2) --> 10". However, I
have changed my Volume Snapshots quota (from the Admin > Projects > Modify
Quota tab).

This graph is taking the *Volume* quota information instead of the
*Volume Snapshot* quota information.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333486

Title:
  Create Volume Snapshot incorrect infographic

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Go to Project > Volumes and click on 'Create Snapshot' from a volume
  you have.

  The infographic says "Number of Volumes (2) --> 10". However,
  I have changed my Volume Snapshots quota (from the Admin > Projects >
  Modify Quota tab).

  This graph is taking the *Volume* quota information instead of the
  *Volume Snapshot* quota information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332558] Re: Instance snapshot is created with wrong image format

2014-06-23 Thread hzxiongwenwu
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332558

Title:
  Instance snapshot is created with wrong image format

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When you boot an instance from an image in RAW format and then take an
  instance snapshot, the snapshot is created in QCOW2 format.

  Steps to reproduce using Horizon:

  1. Go to Project --> Compute --> Images and click on 'Create Image'.
  2. Create an image with 'Raw' format.
  3. Go to Project --> Compute --> Instances and click on 'Launch Instance'.
  4. Boot an instance by selecting 'Boot from image' as the source and the newly 
created raw image from the Image Name drop-down.
  5. Click on 'Create Snapshot' to create a snapshot of this instance.
  6. You will be redirected to the image list page, where you will see the 
format of the snapshot as 'QCOW2'.

  Ideally the snapshot format should be the same as its source image's.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1332558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333494] [NEW] os-agents api update return string that is different with index return integer

2014-06-23 Thread Alex Xu
Public bug reported:

This bug was found by Dan Smith in https://review.openstack.org/#/c/101995/

First problem: there is an inconsistency in the API samples.

The create and index actions actually return an integer for the agent id, but 
in the API sample files the agent id is a string. This is because the API 
sample files provide wrong fake data.

Second problem: the update action returns a string for the agent id.
For backward-compatibility reasons, we can't fix this for the v2 and v2.1 APIs.

We can only fix this for the v3 API, and we will need to add a translator
for the v2.1 API later.

This will be fixed once the v3 API features are exposed via microversions.
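A possible response translator for the fixed API could look like this (hypothetical sketch; the field name "agent_id" follows this report, not necessarily nova's actual translator code):

```python
def normalize_agent_id(agent):
    """Return a copy of an os-agents response body with the agent id
    coerced to an integer, so update matches create/index.

    Hypothetical translator sketch: only digit-strings are converted,
    so ids that are already integers pass through unchanged.
    """
    fixed = dict(agent)
    agent_id = fixed.get("agent_id")
    if isinstance(agent_id, str) and agent_id.isdigit():
        fixed["agent_id"] = int(agent_id)
    return fixed
```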

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333494

Title:
  os-agents api update return string that is different with index return
  integer

Status in OpenStack Compute (Nova):
  New

Bug description:
  This bug was found by Dan Smith in
  https://review.openstack.org/#/c/101995/

  First problem: there is an inconsistency in the API samples.

  The create and index actions actually return an integer for the agent id, 
but in the API sample files the agent id is a string. This is because the API 
sample files provide wrong fake data.

  Second problem: the update action returns a string for the agent id.
  For backward-compatibility reasons, we can't fix this for the v2 and v2.1 APIs.

  We can only fix this for the v3 API, and we will need to add a translator
  for the v2.1 API later.

  This will be fixed once the v3 API features are exposed via
  microversions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333498] [NEW] table nova.pci_devices lost device status every time. PciDeviceList.get_by_compute_node pass a wrong parameter

2014-06-23 Thread Young
Public bug reported:


I'm trying to use SR-IOV in OpenStack Havana.

After a PCI device (a virtual function in my case) is allocated to a VM, the 
status of the corresponding record in the 'nova.pci_devices' table is updated 
to 'allocated'. However, when I restart the OpenStack services, the devices' 
records are updated to 'available' again, even though the PCI devices are 
still allocated to VMs.

I looked into the code and found the problem below.

The __init__ function of PciDevTracker in pci/pci_manager.py takes a node_id.
If a node_id is passed in, it fetches PCI device information
from the database; otherwise, it creates an empty devices list.

However, the code initiating PciDevTracker (in
compute/resource_tracker.py) never passes node_id.   So it never
fetches PCI device information from the database, and the status is
reset to 'available' every time we restart services.
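A minimal sketch of the behaviour described above (illustrative only, not nova's actual PciDevTracker code):

```python
class PciDevTrackerSketch:
    """Sketch of the described __init__ logic.

    If node_id is given, device state is loaded from the database;
    otherwise the tracker starts with an empty list. A caller that
    never passes node_id therefore loses 'allocated' state on restart.
    """

    def __init__(self, fetch_from_db, node_id=None):
        if node_id is not None:
            # Restores previously persisted device status.
            self.pci_devs = fetch_from_db(node_id)
        else:
            # Status is effectively reset on every service restart.
            self.pci_devs = []
```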


=


Then I tried adding the node id to see what would happen.

Then I got this error:
 self.pci_tracker = pci_manager.PciDevTracker(node_id=1)
   File "/usr/lib/python2.6/site-packages/nova/pci/pci_manager.py", line 67, in 
__init__
 context, node_id)
   File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 106, in 
wrapper
 args, kwargs)
   File "/usr/lib/python2.6/site-packages/nova/conductor/rpcapi.py", line 492, 
in object_class_action
 objver=objver, args=args, kwargs=kwargs)
   File "/usr/lib/python2.6/site-packages/nova/rpcclient.py", line 85, in call
 return self._invoke(self.proxy.call, ctxt, method, **kwargs)
   File "/usr/lib/python2.6/site-packages/nova/rpcclient.py", line 63, in 
_invoke
 return cast_or_call(ctxt, msg, **self.kwargs)
   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/proxy.py", 
line 126, in call
 result = rpc.call(context, real_topic, msg, timeout)
   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 
139, in call
 return _get_impl().call(CONF, context, topic, msg, timeout)
   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 
783, in call
 rpc_amqp.get_connection_pool(conf, Connection))
   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", 
line 572, in call
 rv = multicall(conf, context, topic, msg, timeout, connection_pool)
   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", 
line 558, in multicall
 pack_context(msg, context)
   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", 
line 308, in pack_context
 for (key, value) in context.to_dict().iteritems()])
 AttributeError: 'module' object has no attribute 'to_dict'


It passes the module 'context' to pci_device_obj.PciDeviceList.get_by_compute_node, 
but to_dict is a method of RequestContext in the 'context' module. It seems 
that it should pass a RequestContext instance instead of the module.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333498

Title:
  table nova.pci_devices  lost device status every time. 
  PciDeviceList.get_by_compute_node pass a wrong parameter

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  I'm trying to use SR-IOV in OpenStack Havana.

  After a PCI device (a virtual function in my case) is allocated to a VM, the 
status of the corresponding record in the 'nova.pci_devices' table is updated 
to 'allocated'. However, when I restart the OpenStack services, the devices' 
records are updated to 'available' again, even though the PCI devices are 
still allocated to VMs.

  I looked into the code and found the problem below.

  The __init__ function of PciDevTracker in pci/pci_manager.py takes a 
node_id.
  If a node_id is passed in, it fetches PCI device information
  from the database; otherwise, it creates an empty devices list.

  However, the code initiating PciDevTracker (in
  compute/resource_tracker.py) never passes node_id.   So it never
  fetches PCI device information from the database, and the status is
  reset to 'available' every time we restart services.

  
  =

  
  Then I tried adding the node id to see what would happen.

  Then I got this error:
   self.pci_tracker = pci_manager.PciDevTracker(node_id=1)
 File "/usr/lib/python2.6/site-packages/nova/pci/pci_manager.py", line 67, 
in __init__
   context, node_id)
 File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 106, in 
wrapper
   args, kwargs)
 File "/usr/lib/python2.6/site-packages/nova/conductor/rpcapi.py", line 
492, in object_class_action
   objver=objver, args=args, kwargs=kwargs)
 File "/usr/lib/python2.6/site-packages/nova/rpcclient.py", line 85, in call
   return self._invoke(self.proxy.call, ctxt, method, **kwargs)

[Yahoo-eng-team] [Bug 1079452] Re: access denied when starting compute

2014-06-23 Thread Young
** Changed in: nova
   Status: Incomplete => Confirmed

** Changed in: nova
   Status: Confirmed => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1079452

Title:
  access denied when starting compute

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Ubuntu 12.10, XCP 1.6, Folsom

  I receive an access denied error when I try to start the nova-compute
  service.

  
  Here's a piece of nova-compute.log:

  2012-11-15 18:01:02 DEBUG nova.service [-] log_file : None from (pid=1397) 
wait /usr/lib/python2.7/dist-packages/nova/service.py:188
  2012-11-15 18:01:02 DEBUG nova.service [-] compute_manager : 
nova.compute.manager.ComputeManager from (pid=1397) wait 
/usr/lib/python2.7/dist-packages/nova/service.py:188
  2012-11-15 18:01:02 DEBUG nova.service [-] network_topic : network from 
(pid=1397) wait /usr/lib/python2.7/dist-packages/nova/service.py:188
  2012-11-15 18:01:02 AUDIT nova.service [-] Starting compute node (version 
2012.2-LOCALBRANCH:LOCALREVISION)
  2012-11-15 18:01:02 CRITICAL nova [-] [Errno 13] Permission denied
  2012-11-15 18:01:02 TRACE nova Traceback (most recent call last):
  2012-11-15 18:01:02 TRACE nova   File "/usr/bin/nova-compute", line 48, in 
<module>
  2012-11-15 18:01:02 TRACE nova service.wait()

  At the very end, at the end of the trace, it says:

  2012-11-15 18:01:02 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 1803, in 
get_this_vm_uuid
  2012-11-15 18:01:02 TRACE nova return f.readline().strip()
  2012-11-15 18:01:02 TRACE nova IOError: [Errno 13] Permission denied
  2012-11-15 18:01:02 TRACE nova

  
  Not having any luck figuring out where access is being denied.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1079452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333476] Re: keypair-add botches key from stdin

2014-06-23 Thread Joe Gordon
python-novaclient doesn't support reading the keypair from stdin. I see
no reason why novaclient shouldn't support this mode

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

** Changed in: python-novaclient
   Status: New => Triaged

** Changed in: python-novaclient
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333476

Title:
  keypair-add botches key from stdin

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Nova:
  Triaged

Bug description:
  `cat X | nova keypair-add K` produces a different result from `nova
  keypair-add --pub-key X K`.

  The latter takes the contents of X as the public key; the former does
  not.

  For example:

  ubuntu@mjs-dstk-623b:~$ cat bar.pub | nova keypair-add bar_stdin
  -----BEGIN RSA PRIVATE KEY-----
  MIIEpQIBAAKCAQEA7aiDpcq1JOLBS6l471eFAvBe7DcJqKaK5uM4+73DG+99aihF
  RxY8kwYQfqS3MPdlAfVhNNBI4ehLpgDsAVfqm5QOZSZChkZPnJpjifOqszBynL99
  txmwfDqFJnOvzLT1eY1Q/9O1GXHquolAIruB4T9N/6VW8yWnzK6fJ733uaJCnvIp
  rgqem8Hjc8g+Yn6o3HnXkeykd9ayElB+d+79yd82LysXtZlmb+QOKLn8RHTWIgyT
  qIqCp+qlxRQtqc04DT/qyhEJjC50il8jAi4DEYOWrSJSlQGTKsj/z9giVXyTp/Dx
  Ops/zGpz6dAH8TBFldnr4XrEPm3mTEcXBEOfZwIDAQABAoIBAQCfZ+ZVb++sfAPW
  8idRskxfOkcQ/aGW445La6EvCYsy06I1cCl3kuyyWOD7cRQG3gl8FNBMkmAwVpVX
  FUs3Y3bTP62gHteEJOkFS3D0eOHIKvjVNoPmKm78BGyG7BXAoqf8DdOEpMXV+VjO
  IX1JTqfBI6r3jDkUAe/ZFE9gYsUkVuybXhwZzksPJRRvoDU+76/Zqq9dEK8nUuqs
  +Oq4bHOGZPdOLtGsOLfe4tgJ5vDCu4CNkmXjxcDwHQmId53n/xX5magsGC3YCyTg
  iTy05A3XlO7EEPPEuhlrIoHToRfgnRFJRsc5DiY8jzqdS1MEYH4N+eafCszVdQfr
  u8xIVWmRAoGBAPlrg9UK4k8kC4XLgcZKNm5+gir7Jceqx3fmK9qV1m+6RmHatdKH
  zu8CHXpFiLstVwk+I+BjvpQoUK5yO4GwINNN3JPPsfEiPlIb9QLfrE+Hjtis4FBF
  Qy2vwBTl+CK03J2OUc79wDJQffEhlgYMQbQtYQ2KK0mFbQz/IbMSqrYtAoGBAPPt
  kLGbp3j7hq751r3MYbhBvhVBM9XHDIljOffvbOr9wwetTeoK/GYTjNXymKggNijO
  Uhb+VNf5FbYE4hGx601AKizctwXGzDJX+BzccU8dzf0uyoqxmFHn2JYtZtgk6VRT
  glyOwLXPMeBd2bo9nmJcdr7FWPwFFeBMCor5Q1xjAoGBAKtWLC3BWE09WZ0De5aX
  jGTDCvAzrnRG4NeAikeR/sipkYfPEnAZUxHkxhMkiRTrxIpY4ZRXcKeeOi5b0nz4
  XNRK/GedmYMoHt+QzPK4bEoFuR8nQsBhlBBiVvUENTzCOXsSNSiYL9tgZ+OpSsHE
  0a3QLod6jtnmik8PRDsba6HRAoGACDMVKRM9ZvC1j04wrMKhCkuTcy1065u8TSX7
  vdzbgW60Tp7BvrtNzrSbiFmWThh/GZIN6l30RipGU48IdmXPrhIZGNb2hAgxtwOE
  AJxcZrduxDL9dfoQT7iGbE3sZhmfikkgWbImwjXLzGn7Nqp5l37aMwF5Q0d8e8Sy
  mgdU/1cCgYEA6e+DGrKIZlTXij1pdCTgC/A6sQgrAu4yY/7duUdCquAtX4L5cjza
  dERodDaWj584RlHSot5GRP3RIPRfS5TGozH3nkPSyQ3+6vJC8Af9uAj+TcVnlK+n
  4iEC5PnZnfXG79rPu0YpdFTXPM/IzQZOFaMbwQ43+qjPcLX2pFdoyzA=
  -----END RSA PRIVATE KEY-----

  ubuntu@mjs-dstk-623b:~$ nova keypair-add --pub-key bar.pub bar_file

  ubuntu@mjs-dstk-623b:~$ nova keypair-list
  +---+-+
  | Name  | Fingerprint |
  +---+-+
  ...
  | bar_stdin | 51:b4:0b:bc:6d:5d:3f:7e:bc:4c:e9:5c:03:fd:c2:4d |
  | bar_file  | 93:85:70:f2:a5:40:ae:30:da:4a:1e:01:fd:80:79:24 |
  +---+-+
  ubuntu@mjs-dstk-623b:~$ 

  
  This is with a DevStack install about an hour ago.  The following shows exact 
identifiers.

  ubuntu@mjs-dstk-623b:~$ cd /opt/stack/nova

  ubuntu@mjs-dstk-623b:/opt/stack/nova$ git branch -v
  * master 80b827d Merge "Drop support for conductor 1.x rpc interface"

  ubuntu@mjs-dstk-623b:/opt/stack/nova$ cd ../keystone/

  ubuntu@mjs-dstk-623b:/opt/stack/keystone$ git branch -v
  * master db0519d Merge "Make gen_pki.sh & debug_helper.sh bash8 compliant"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333476] Re: keypair-add botches key from stdin

2014-06-23 Thread Mike Spreitzer
Those two commands used to produce the same result.  They have diverged
only recently.

Sorry about the wrong project identification; will remove the wrong one.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333476

Title:
  keypair-add botches key from stdin

Status in Python client library for Nova:
  Triaged

Bug description:
  `cat X | nova keypair-add K` produces a different result from `nova
  keypair-add --pub-key X K`.

  The latter takes the contents of X as the public key; the former does
  not.

  For example:

  ubuntu@mjs-dstk-623b:~$ cat bar.pub | nova keypair-add bar_stdin
  -----BEGIN RSA PRIVATE KEY-----
  MIIEpQIBAAKCAQEA7aiDpcq1JOLBS6l471eFAvBe7DcJqKaK5uM4+73DG+99aihF
  RxY8kwYQfqS3MPdlAfVhNNBI4ehLpgDsAVfqm5QOZSZChkZPnJpjifOqszBynL99
  txmwfDqFJnOvzLT1eY1Q/9O1GXHquolAIruB4T9N/6VW8yWnzK6fJ733uaJCnvIp
  rgqem8Hjc8g+Yn6o3HnXkeykd9ayElB+d+79yd82LysXtZlmb+QOKLn8RHTWIgyT
  qIqCp+qlxRQtqc04DT/qyhEJjC50il8jAi4DEYOWrSJSlQGTKsj/z9giVXyTp/Dx
  Ops/zGpz6dAH8TBFldnr4XrEPm3mTEcXBEOfZwIDAQABAoIBAQCfZ+ZVb++sfAPW
  8idRskxfOkcQ/aGW445La6EvCYsy06I1cCl3kuyyWOD7cRQG3gl8FNBMkmAwVpVX
  FUs3Y3bTP62gHteEJOkFS3D0eOHIKvjVNoPmKm78BGyG7BXAoqf8DdOEpMXV+VjO
  IX1JTqfBI6r3jDkUAe/ZFE9gYsUkVuybXhwZzksPJRRvoDU+76/Zqq9dEK8nUuqs
  +Oq4bHOGZPdOLtGsOLfe4tgJ5vDCu4CNkmXjxcDwHQmId53n/xX5magsGC3YCyTg
  iTy05A3XlO7EEPPEuhlrIoHToRfgnRFJRsc5DiY8jzqdS1MEYH4N+eafCszVdQfr
  u8xIVWmRAoGBAPlrg9UK4k8kC4XLgcZKNm5+gir7Jceqx3fmK9qV1m+6RmHatdKH
  zu8CHXpFiLstVwk+I+BjvpQoUK5yO4GwINNN3JPPsfEiPlIb9QLfrE+Hjtis4FBF
  Qy2vwBTl+CK03J2OUc79wDJQffEhlgYMQbQtYQ2KK0mFbQz/IbMSqrYtAoGBAPPt
  kLGbp3j7hq751r3MYbhBvhVBM9XHDIljOffvbOr9wwetTeoK/GYTjNXymKggNijO
  Uhb+VNf5FbYE4hGx601AKizctwXGzDJX+BzccU8dzf0uyoqxmFHn2JYtZtgk6VRT
  glyOwLXPMeBd2bo9nmJcdr7FWPwFFeBMCor5Q1xjAoGBAKtWLC3BWE09WZ0De5aX
  jGTDCvAzrnRG4NeAikeR/sipkYfPEnAZUxHkxhMkiRTrxIpY4ZRXcKeeOi5b0nz4
  XNRK/GedmYMoHt+QzPK4bEoFuR8nQsBhlBBiVvUENTzCOXsSNSiYL9tgZ+OpSsHE
  0a3QLod6jtnmik8PRDsba6HRAoGACDMVKRM9ZvC1j04wrMKhCkuTcy1065u8TSX7
  vdzbgW60Tp7BvrtNzrSbiFmWThh/GZIN6l30RipGU48IdmXPrhIZGNb2hAgxtwOE
  AJxcZrduxDL9dfoQT7iGbE3sZhmfikkgWbImwjXLzGn7Nqp5l37aMwF5Q0d8e8Sy
  mgdU/1cCgYEA6e+DGrKIZlTXij1pdCTgC/A6sQgrAu4yY/7duUdCquAtX4L5cjza
  dERodDaWj584RlHSot5GRP3RIPRfS5TGozH3nkPSyQ3+6vJC8Af9uAj+TcVnlK+n
  4iEC5PnZnfXG79rPu0YpdFTXPM/IzQZOFaMbwQ43+qjPcLX2pFdoyzA=
  -----END RSA PRIVATE KEY-----

  ubuntu@mjs-dstk-623b:~$ nova keypair-add --pub-key bar.pub bar_file

  ubuntu@mjs-dstk-623b:~$ nova keypair-list
  +---+-+
  | Name  | Fingerprint |
  +---+-+
  ...
  | bar_stdin | 51:b4:0b:bc:6d:5d:3f:7e:bc:4c:e9:5c:03:fd:c2:4d |
  | bar_file  | 93:85:70:f2:a5:40:ae:30:da:4a:1e:01:fd:80:79:24 |
  +---+-+
  ubuntu@mjs-dstk-623b:~$ 

  
  This is with a DevStack install about an hour ago.  The following shows exact 
identifiers.

  ubuntu@mjs-dstk-623b:~$ cd /opt/stack/nova

  ubuntu@mjs-dstk-623b:/opt/stack/nova$ git branch -v
  * master 80b827d Merge "Drop support for conductor 1.x rpc interface"

  ubuntu@mjs-dstk-623b:/opt/stack/nova$ cd ../keystone/

  ubuntu@mjs-dstk-623b:/opt/stack/keystone$ git branch -v
  * master db0519d Merge "Make gen_pki.sh & debug_helper.sh bash8 compliant"

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1333476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 918238] Re: wrong error message of nova-api after failing to login to rabbitmq-server

2014-06-23 Thread Christopher Yeoh
Sorry, this doesn't appear to be fixable at the moment

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/918238

Title:
  wrong error message of nova-api after failing to login to rabbitmq-
  server

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The rabbitmq-server is not actually unreachable; the error message is
  wrong. It should say "login failed" or something like that.

  log if nova-api:

  2012-01-18 14:55:29,625 ERROR nova.rpc [-] AMQP server on
  192.168.53.132:5672 is unreachable: Socket closed. Trying again in 1
  seconds.

  
  log of rabbitmq-server:

  
  =INFO REPORT==== 18-Jan-2012::14:55:30 ===
  starting TCP connection <0.762.0> from 192.168.53.130:42483

  =ERROR REPORT==== 18-Jan-2012::14:55:33 ===
  exception on TCP connection <0.762.0> from 192.168.53.130:42483
  {channel0_error,starting,
  {amqp_error,access_refused,
  "AMQPLAIN login refused: user 'guest' - invalid credentials",
  'connection.start_ok'}}

  =INFO REPORT==== 18-Jan-2012::14:55:33 ===
  closing TCP connection <0.762.0> from 192.168.53.130:42483

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/918238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333520] [NEW] Used VCPU's can show bigger value than Total VCPU's in nova hypervisor-show

2014-06-23 Thread Ami Jeain
Public bug reported:

I have a system where, when I type
nova hypervisor-show <hypervisor name>, I get:

vcpus | 32
vcpus_used| 38

It is weird for the Used value to be bigger than the Total value
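(For reference, nova schedules VCPUs with overcommit, so vcpus_used can legitimately exceed vcpus; the scheduler compares usage against vcpus multiplied by cpu_allocation_ratio, whose long-standing default is 16.0.) A quick sketch of the effective capacity calculation, assuming that default:

```python
def vcpu_capacity(physical_vcpus, cpu_allocation_ratio=16.0):
    """Effective schedulable guest VCPUs under CPU overcommit.

    With the default cpu_allocation_ratio of 16.0, 32 physical VCPUs
    can back up to 512 guest VCPUs, so vcpus_used=38 with vcpus=32 is
    not by itself an error.
    """
    return int(physical_vcpus * cpu_allocation_ratio)
```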

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1333520

Title:
  Used VCPU's can show bigger value than Total VCPU's in nova
  hypervisor-show

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a system where, when I type
  nova hypervisor-show <hypervisor name>, I get:

  vcpus | 32
  vcpus_used| 38

  It is weird for the Used value to be bigger than the Total value

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1333520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1054501] Re: Fail safe wsgi logging

2014-06-23 Thread Christopher Yeoh
The first thing we do is log the exception. This happens before we
attempt to return any information to the client or do any other
processing, so I don't think we need any other fallback, unless you
have a testcase which demonstrates it is necessary.

** Changed in: nova
   Status: In Progress = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1054501

Title:
  Fail safe wsgi logging

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  If there are any uncaught exceptions from api controllers, they are
  handled in nova/api/openstack/__init__.py:FaultWrapper, the handled
  exception is logged, and an HTTP 500 response is sent to the client.

  There could be an error while logging these uncaught exceptions, in
  which case we miss them in the nova logs. In such a case, these
  exceptions should be logged to syslog for further investigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1054501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333528] [NEW] Horizon doesn't allow access to dashboard

2014-06-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

After a devstack install of OpenStack on the bare metal of a laptop,
Horizon doesn't allow access to the dashboard and clears the credentials
fields even after the correct username and password are entered.

On entering wrong credentials it reports the error, but on entering the
correct details it does nothing.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit
-- 
Horizon doesn't allow access to dashboard
https://bugs.launchpad.net/bugs/1333528
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).



[Yahoo-eng-team] [Bug 1333528] [NEW] Horizon doesn't allow access to dashboard

2014-06-23 Thread Amit Prakash Pandey
Public bug reported:

After a devstack install of OpenStack on bare metal (a laptop), Horizon
doesn't allow access to the dashboard and clears the credentials fields
even after the correct username and password are entered.

On entering wrong credentials it reports the error, but on entering
correct details it does nothing.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Summary changed:

- It refreshes the page after entering correct username and password
+ Horizon doesn't allow access to dashboard

** Tags removed: horizon
** Tags added: low-hanging-fruit

** Project changed: tempest => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333528

Title:
  Horizon doesn't allow access to dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After a devstack install of OpenStack on bare metal (a laptop), Horizon
  doesn't allow access to the dashboard and clears the credentials fields
  even after the correct username and password are entered.

  On entering wrong credentials it reports the error, but on entering
  correct details it does nothing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333528/+subscriptions
