[Yahoo-eng-team] [Bug 1597613] [NEW] OVS firewall fails if of_interface=native and ovsdb_interface=native

2016-06-29 Thread IWAMOTO Toshihiro
Public bug reported:

OVSFirewallDriver fails to run with the following errors.
A fix is to follow.

2016-06-30 13:14:32.721 DEBUG neutron.agent.linux.utils 
[req-e89302ff-35eb-4bd8-bb6a-9e705fb1cdc2 None None] Running command (rootwrap 
daemon): ['ovs-ofctl', 'add-flows', 'br-int', '-'] from (pid=26921) 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:98  
 
2016-06-30 13:14:32.725 ERROR neutron.agent.linux.utils 
[req-e89302ff-35eb-4bd8-bb6a-9e705fb1cdc2 None None] Exit code: 1; Stdin: 
hard_timeout=0,idle_timeout=0,priority=0,table=71,cookie=13680950857646023732,actions=drop;
 Stdout: ; Stderr: 
2016-06-30T04:14:32Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x01, peer supports version 
0x04)   
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)
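
For reference, the negotiation failure in the log means ovs-ofctl is
speaking OpenFlow 1.0 (0x01) by default while the native of_interface has
configured br-int for OpenFlow 1.3 (0x04). A quick manual check (standard
ovs-ofctl option, bridge name from the log):

$ ovs-ofctl dump-flows br-int               # fails with the same version error
$ ovs-ofctl -O OpenFlow13 dump-flows br-int # succeeds once the protocol matches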

** Affects: neutron
 Importance: Undecided
 Assignee: IWAMOTO Toshihiro (iwamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597613

Title:
  OVS firewall fails if of_interface=native and ovsdb_interface=native

Status in neutron:
  In Progress

Bug description:
  OVSFirewallDriver fails to run with the following errors.
  A fix is to follow.

  2016-06-30 13:14:32.721 DEBUG neutron.agent.linux.utils 
[req-e89302ff-35eb-4bd8-bb6a-9e705fb1cdc2 None None] Running command (rootwrap 
daemon): ['ovs-ofctl', 'add-flows', 'br-int', '-'] from (pid=26921) 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:98  
 
  2016-06-30 13:14:32.725 ERROR neutron.agent.linux.utils 
[req-e89302ff-35eb-4bd8-bb6a-9e705fb1cdc2 None None] Exit code: 1; Stdin: 
hard_timeout=0,idle_timeout=0,priority=0,table=71,cookie=13680950857646023732,actions=drop;
 Stdout: ; Stderr: 
2016-06-30T04:14:32Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x01, peer supports version 
0x04)   
  ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597609] [NEW] Single Vm instance assigned with two ips

2016-06-29 Thread priya
Public bug reported:

While testing a single-node NetVirt OpenStack deployment in CSIT, I could
see two IPs assigned to a single VM instance.

Steps to reproduce:
-------------------

1. Create a network
2. Create a subnet
3. Boot three VM instances in the same subnet


Expected result:
----------------
Each VM instance should be assigned one IP.

Actual result:
--------------
One VM instance is assigned two IPs.

Observation:
------------

The first two VM instances were properly assigned IPs (vm1 got 30.0.0.3 and
vm2 got 30.0.0.4), whereas the third instance was assigned two IPs
(30.0.0.5 and 30.0.0.6); the subnet range is 30.0.0.0/24 and the gateway is
enabled.

After creating all three VM instances I inspected them with nova show.
The output below is for the third instance.

+---------------------------------------+----------------------------------------------------------------+
| Property                              | Value                                                          |
+---------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                     | MANUAL                                                         |
| OS-EXT-AZ:availability_zone           | nova                                                           |
| OS-EXT-SRV-ATTR:host                  | centos7-devstack-721                                           |
| OS-EXT-SRV-ATTR:hostname              | mythirdinstance-1                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname   | centos7-devstack-721                                           |
| OS-EXT-SRV-ATTR:instance_name         | instance-0003                                                  |
| OS-EXT-SRV-ATTR:kernel_id             | 55ce53be-9fd0-4df7-ab55-5e78a296376c                           |
| OS-EXT-SRV-ATTR:launch_index          | 0                                                              |
| OS-EXT-SRV-ATTR:ramdisk_id            | 179666d0-e4f4-4a9b-bbf3-3e8f23fd5488                           |
| OS-EXT-SRV-ATTR:reservation_id        | r-k85fjf88                                                     |
| OS-EXT-SRV-ATTR:root_device_name      | /dev/vda                                                       |
| OS-EXT-SRV-ATTR:user_data             | -                                                              |
| OS-EXT-STS:power_state                | 1                                                              |
| OS-EXT-STS:task_state                 | -                                                              |
| OS-EXT-STS:vm_state                   | active                                                         |
| OS-SRV-USG:launched_at                | 2016-06-28T07:10:10.00                                         |
| OS-SRV-USG:terminated_at              | -                                                              |
| accessIPv4                            |                                                                |
| accessIPv6                            |                                                                |
| config_drive                          | True                                                           |
| created                               | 2016-06-28T07:10:05Z                                           |
| description                           | -                                                              |
| flavor                                | m1.nano (42)                                                   |
| hostId                                | f7c621bc0d8cb4621e5a33dc1fdf12b1fd0adcfd96abc51eaf7c6e62       |
| host_status                           | UP                                                             |
| id                                    | f7d294d6-9b23-40f5-9151-1d7a1f8168e8                           |
| image                                 | cirros-0.3.4-x86_64-uec (05d7928d-4221-4e7f-ad27-f9d0bcf3a742) |
| key_name                              | -                                                              |
| l2_network_1 network                  | 30.0.0.5, 30.0.0.6                                             |
| locked                                | False                                                          |
| metadata                              | {}                                                             |
| name                                  | MyThirdInstance_1                                              |
| os-extended-volumes:volumes_attached  | []                                                             |
| progress                              | 0                                                              |
| security_groups                       | default                                                        |
| status                                | ACTIVE                                                         |
+---------------------------------------+----------------------------------------------------------------+
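
As a quick way to confirm the duplicate allocation from the CLI, the ports
attached to the affected instance can be listed using the instance UUID
shown above:

$ nova interface-list f7d294d6-9b23-40f5-9151-1d7a1f8168e8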

[Yahoo-eng-team] [Bug 1597596] [NEW] network not always cleaned up when spawning VMs

2016-06-29 Thread Aihua Edward Li
Public bug reported:

Here is the scenario:
1) The Nova scheduler/conductor selects nova-compute A to spin up a VM.
2) Nova-compute A tries to spin up the VM, but the process fails and
generates a RescheduledException.
3) In the reschedule exception handler, the network resources are properly
cleaned up only when retry is None. When retry is not None, the network is
not cleaned up and the port information stays with the VM.
4) The Nova conductor is notified about the failure. It selects
nova-compute B to spin up the VM.
5) Nova-compute B spins up the VM successfully. However, in the
instance_info_cache, the network_info shows two ports allocated for the VM:
one from network A, associated with nova-compute A, and one from network B,
associated with nova-compute B.

To simulate the case, raise a fake exception in
_do_build_and_run_instance in nova-compute A:

diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index ac6d92c..8ce8409 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
@@ -1746,6 +1746,7 @@ class ComputeManager(manager.Manager):
                         filter_properties)
                 LOG.info(_LI('Took %0.2f seconds to build instance.'),
                          timer.elapsed(), instance=instance)
+                raise exception.RescheduledException(instance_uuid=instance.uuid, reason="simulated-fault")
                 return build_results.ACTIVE
             except exception.RescheduledException as e:
                 retry = filter_properties.get('retry')
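
For context, a simplified sketch (not the actual nova code) of the handler
pattern described in step 3, where ports are released only on the no-retry
path:

    except exception.RescheduledException:
        retry = filter_properties.get('retry')
        if not retry:
            # no scheduler retry left: release the network resources
            self._cleanup_allocated_networks(context, instance,
                                             requested_networks)
        # when retry is set, the instance is handed back to the conductor
        # without releasing its ports - the leak reported here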

environments: 
*) nova master branch
*) ubuntu 12.04
*) kvm
*) bridged network.

** Affects: nova
 Importance: Undecided
 Assignee: Aihua Edward Li (aihuaedwardli)
 Status: New

** Summary changed:

- network not alwasy cleaned up when spawning VMs
+ network not always cleaned up when spawning VMs

** Changed in: nova
 Assignee: (unassigned) => Aihua Edward Li (aihuaedwardli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597596

Title:
  network not always cleaned up when spawning VMs

Status in OpenStack Compute (nova):
  New

Bug description:
  Here is the scenario:
  1) The Nova scheduler/conductor selects nova-compute A to spin up a VM.
  2) Nova-compute A tries to spin up the VM, but the process fails and
generates a RescheduledException.
  3) In the reschedule exception handler, the network resources are properly
cleaned up only when retry is None. When retry is not None, the network is
not cleaned up and the port information stays with the VM.
  4) The Nova conductor is notified about the failure. It selects
nova-compute B to spin up the VM.
  5) Nova-compute B spins up the VM successfully. However, in the
instance_info_cache, the network_info shows two ports allocated for the VM:
one from network A, associated with nova-compute A, and one from network B,
associated with nova-compute B.

  To simulate the case, raise a fake exception in
  _do_build_and_run_instance in nova-compute A:

  diff --git a/nova/compute/manager.py b/nova/compute/manager.py
  index ac6d92c..8ce8409 100644
  --- a/nova/compute/manager.py
  +++ b/nova/compute/manager.py
  @@ -1746,6 +1746,7 @@ class ComputeManager(manager.Manager):
                           filter_properties)
                   LOG.info(_LI('Took %0.2f seconds to build instance.'),
                            timer.elapsed(), instance=instance)
  +                raise exception.RescheduledException(instance_uuid=instance.uuid, reason="simulated-fault")
                   return build_results.ACTIVE
               except exception.RescheduledException as e:
                   retry = filter_properties.get('retry')

  environments: 
  *) nova master branch
  *) ubuntu 12.04
  *) kvm
  *) bridged network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597584] [NEW] error: Installed (but unpackaged) file(s) found: /usr/bin/keystone-all

2016-06-29 Thread wuwei
Public bug reported:

When using the Mitaka version of the source code to compile RPM keystone
and keystoneclient package,the following error is reported as follows;

keystone problem;
RPM build errors:
Installed (but unpackaged) file(s) found:
   /usr/bin/keystone-all

keystoneclient problem;
RPM build errors:
Installed (but unpackaged) file(s) found:
   /usr/bin/keystone

I found that in the spec file to add the following sentenc, the problem
can be solved.So I think the default generated spec file should have the
problem.

openstack-keystone.spec;
%{_bindir}/keystone-all

openstack-keystoneclient.spec;
%{_bindir}/keystone
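
For illustration, the entries belong in the %files section of each spec
file; a minimal sketch for openstack-keystone.spec (the surrounding entry
is an example only):

%files
%{_bindir}/keystone-manage
%{_bindir}/keystone-all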

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597584

Title:
  error: Installed (but unpackaged) file(s) found: /usr/bin/keystone-all

Status in OpenStack Identity (keystone):
  New

Bug description:
  When building the keystone and keystoneclient RPM packages from the
  Mitaka source code, the following errors are reported:

  keystone problem:
  RPM build errors:
  Installed (but unpackaged) file(s) found:
     /usr/bin/keystone-all

  keystoneclient problem:
  RPM build errors:
  Installed (but unpackaged) file(s) found:
     /usr/bin/keystone

  I found that adding the following entries to the spec files solves the
  problem, so I think the default generated spec files are missing them.

  openstack-keystone.spec;
  %{_bindir}/keystone-all

  openstack-keystoneclient.spec;
  %{_bindir}/keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597561] [NEW] L3 agent allows multiple gateway ports in fip namespace

2016-06-29 Thread Carl Baldwin
Public bug reported:

At the end of deleting a GW port for a router, l3_dvr_db.py will look
for any more router gw ports on the external network.  If there are
none, then it calls delete_floatingip_agent_gateway_port [1].  This
should fan out to all l3 agents on all compute nodes [2].  Each agent
should then delete the port [3].

In some cases, the fip namespace and the gateway port are not deleted.
I don't know where things are going wrong.  This seems pretty
straight-forward.  Do some agents miss the fanout?  We know at least
some of them are getting the fanout.  So, it is definitely being sent.

When I checked, the port had been deleted from the database.  The fact
that a new one is created supports this because if one existed in the DB
already then it would be returned.


[1] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/db/l3_dvr_db.py#L179
[2] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
[3] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/agent/l3/dvr.py#L73
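
For anyone trying to confirm the stale state, the leftover namespace and
its gateway port can be inspected on a compute node (DVR fip namespaces are
named fip-<external-network-uuid>):

$ ip netns | grep fip-
$ ip netns exec fip-<external-network-uuid> ip addr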

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: l3-dvr-backlog l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: l3-dvr-backlog l3-ipam-dhcp

** Description changed:

- At the end of deleting a GW port for a router, l3_dvr_db.py will look for any
- more router gw ports on the external network.  If there are none, then it 
calls
- delete_floatingip_agent_gateway_port [1].  This should fan out to all l3 
agents
- on all compute nodes [2].  Each agent should then delete the port [3].
+ At the end of deleting a GW port for a router, l3_dvr_db.py will look
+ for any more router gw ports on the external network.  If there are
+ none, then it calls delete_floatingip_agent_gateway_port [1].  This
+ should fan out to all l3 agents on all compute nodes [2].  Each agent
+ should then delete the port [3].
  
- In some cases, the fip namespace and the gateway port are not deleted.  I 
don't
- know where things are going wrong.  This seems pretty straight-forward.  Do
- some agents miss the fanout?  We know at least some of them are getting the
- fanout.  So, it is definitely being sent.
+ In some cases, the fip namespace and the gateway port are not deleted.
+ I don't know where things are going wrong.  This seems pretty
+ straight-forward.  Do some agents miss the fanout?  We know at least
+ some of them are getting the fanout.  So, it is definitely being sent.
  
- When I checked, the port had been deleted from the database.  The fact that a
- new one is created supports this because if one existed in the DB already then
- it would be returned.
+ When I checked, the port had been deleted from the database.  The fact
+ that a new one is created supports this because if one existed in the DB
+ already then it would be returned.
+ 
  
  [1] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/db/l3_dvr_db.py#L179
  [2] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
  [3] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/agent/l3/dvr.py#L73

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597561

Title:
  L3 agent allows multiple gateway ports in fip namespace

Status in neutron:
  Confirmed

Bug description:
  At the end of deleting a GW port for a router, l3_dvr_db.py will look
  for any more router gw ports on the external network.  If there are
  none, then it calls delete_floatingip_agent_gateway_port [1].  This
  should fan out to all l3 agents on all compute nodes [2].  Each agent
  should then delete the port [3].

  In some cases, the fip namespace and the gateway port are not deleted.
  I don't know where things are going wrong.  This seems pretty
  straight-forward.  Do some agents miss the fanout?  We know at least
  some of them are getting the fanout.  So, it is definitely being sent.

  When I checked, the port had been deleted from the database.  The fact
  that a new one is created supports this because if one existed in the DB
  already then it would be returned.

  
  [1] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/db/l3_dvr_db.py#L179
  [2] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
  [3] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/agent/l3/dvr.py#L73

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597561/+subscriptions

-- 

[Yahoo-eng-team] [Bug 1597557] [NEW] getting CSRF token missing or incorrect. /api/nova/servers/ when CSRF_COOKIE_HTTPONLY=True

2016-06-29 Thread Tracy Jones
Public bug reported:

Using stable/mitaka, if I set CSRF_COOKIE_HTTPONLY=True in
local_settings.py, then when I try to launch an instance I get

Forbidden (CSRF token missing or incorrect.): /api/nova/servers/

If I set it to False (or don't set it), then it works fine.

This is what does not work:

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
# prevent certain client-side attacks, such as cross-site scripting
CSRF_COOKIE_HTTPONLY = True
SESSION_COOKIE_HTTPONLY = True


This is what does work:

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
# prevent certain client-side attacks, such as cross-site scripting
CSRF_COOKIE_HTTPONLY = False
SESSION_COOKIE_HTTPONLY = True
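
A plausible explanation, assuming Horizon's AngularJS code reads the CSRF
cookie to build the X-CSRFToken header for its /api/... requests: Django's
CSRF_COOKIE_HTTPONLY adds the HttpOnly flag to the csrftoken cookie, which
hides it from JavaScript entirely.

# local_settings.py (sketch): with the flag on, document.cookie no longer
# exposes csrftoken, so AJAX POSTs arrive without a CSRF token and Django
# rejects them with "CSRF token missing or incorrect."
CSRF_COOKIE_HTTPONLY = True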

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597557

Title:
  getting CSRF token missing or incorrect. /api/nova/servers/ when
  CSRF_COOKIE_HTTPONLY=True

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using stable/mitaka, if I set CSRF_COOKIE_HTTPONLY=True in
  local_settings.py, then when I try to launch an instance I get

  Forbidden (CSRF token missing or incorrect.): /api/nova/servers/

  If I set it to False (or don't set it), then it works fine.

  This is what does not work:

  # If Horizon is being served through SSL, then uncomment the following two
  # settings to better secure the cookies from security exploits
  CSRF_COOKIE_SECURE = True
  SESSION_COOKIE_SECURE = True
  # prevent certain client-side attacks, such as cross-site scripting
  CSRF_COOKIE_HTTPONLY = True
  SESSION_COOKIE_HTTPONLY = True

  
  This is what does work:

  # If Horizon is being served through SSL, then uncomment the following two
  # settings to better secure the cookies from security exploits
  CSRF_COOKIE_SECURE = True
  SESSION_COOKIE_SECURE = True
  # prevent certain client-side attacks, such as cross-site scripting
  CSRF_COOKIE_HTTPONLY = False
  SESSION_COOKIE_HTTPONLY = True

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592463] Re: Avoid removing SegmentHostMapping in other host when update agent

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/329540
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b20188d2652e4907dcfc173f4d5d1a8d557227e1
Submitter: Jenkins
Branch: master

commit b20188d2652e4907dcfc173f4d5d1a8d557227e1
Author: Hong Hui Xiao 
Date:   Tue Jun 14 15:35:40 2016 +

Only update SegmentHostMapping for the given host

Now, when an agent removes a physical network from its configuration,
the segments with that physical network are not only unbound from the
host of the updated agent; they are unbound from all hosts. This is
incorrect: the segments should only be unbound from the host of the
updated agent.

Change-Id: Iccca843d1682ac54ec87c3b003a33a0fc5c62205
Closes-bug: #1592463


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592463

Title:
  Avoid removing SegmentHostMapping in other host when update agent

Status in neutron:
  Fix Released

Bug description:
  Found this when working on OVN, but it should also apply to a topology
  with l2 agents.

  Steps to reproduce:
  1) Have segment1 with physical network physical_net1.
  Have segment2 with physical network physical_net2.

  2) Have 2 agents (host1, host2), both configured with physical_net1.
  When the agents are created/updated in neutron, there will be a
  SegmentHostMapping for segment1->host1 and a SegmentHostMapping for
  segment1->host2.

  3) Update the agent at host2 to be configured with only physical_net2.
There should then be only one SegmentHostMapping for host2: segment2->host2.
  But the SegmentHostMapping for segment1->host1 is also deleted. This is
not expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597552] [NEW] neutron quota-update returns a list

2016-06-29 Thread Manjeet Singh Bhatia
Public bug reported:

When I run neutron quota-list, it returns nothing, and neutron
quota-update just returns the list of quotas instead of offering options
for updating the actual neutron resource quotas.


http://paste.openstack.org/show/524140/
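
For reference, a typical update invocation the client is expected to
support (resource options per the Mitaka-era neutronclient; the tenant ID
is a placeholder):

$ neutron quota-update --tenant-id <tenant-uuid> --network 20 --port 100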

** Affects: python-neutronclient
 Importance: Undecided
 Status: New

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597552

Title:
  neutron quota-update returns a list

Status in python-neutronclient:
  New

Bug description:
  When I run neutron quota-list, it returns nothing, and neutron
  quota-update just returns the list of quotas instead of offering options
  for updating the actual neutron resource quotas.

  
  http://paste.openstack.org/show/524140/

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1597552/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597551] [NEW] XenAPI plugin failure with fetch_all_bandwidth

2016-06-29 Thread Jianghua Wang
Public bug reported:

XenAPI can't fetch the bandwidth; the following error appears in the log:
2016-06-29 09:02:27.639 30900 ERROR oslo_service.periodic_task Failure: 
['XENAPI_PLUGIN_FAILURE', 'fetch_all_bandwidth', 'ValueError', 'need more than 
1 value to unpack']
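
The quoted ValueError is Python's generic error for unpacking too few
values; a minimal illustration of the failure class (not the plugin's
actual parsing code):

    # the plugin evidently unpacks a parsed field into two names; input
    # with no separator reproduces the same message on Python 2
    left, right = "single_field".split(".")
    # ValueError: need more than 1 value to unpack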

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597551

Title:
  XenAPI plugin failure with fetch_all_bandwidth

Status in OpenStack Compute (nova):
  New

Bug description:
  XenAPI can't fetch the bandwidth; the following error appears in the log:
  2016-06-29 09:02:27.639 30900 ERROR oslo_service.periodic_task Failure: 
['XENAPI_PLUGIN_FAILURE', 'fetch_all_bandwidth', 'ValueError', 'need more than 
1 value to unpack']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597503] Re: Preview Page: Material: Code Icon is wrong

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335654
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=878e53d19c1c543dcc33d6eb5aea653794e2af30
Submitter: Jenkins
Branch: master

commit 878e53d19c1c543dcc33d6eb5aea653794e2af30
Author: Diana Whitten 
Date:   Wed Jun 29 13:05:33 2016 -0700

Preview Page: Material: Code Icon fix

Somewhere along the lines, the preview page code button, when
viewed in 'Material' shows an alarm clock instead of a code icon.

Closes-bug: #1597503

Change-Id: I577cc5853f6580fa60dadc8fa34a4955b6aae710


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597503

Title:
  Preview Page: Material: Code Icon is wrong

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Somewhere along the lines, the preview page code button, when viewed
  in 'Material' shows an alarm clock instead of a code icon.  Seen here:
  https://i.imgur.com/MfpXYpC.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549793] Re: force_metadata = True : qdhcp namespace has no interface with ip 169.254.169.254

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/305615
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=dfacba0f2d7586df51962c2b927e5272358ea3c1
Submitter: Jenkins
Branch: master

commit dfacba0f2d7586df51962c2b927e5272358ea3c1
Author: Li Xipeng 
Date:   Tue Feb 23 13:54:34 2016 -0800

Add 169.254.169.254 when enable force_metadata

When force_metadata is enabled in dhcp_agent.ini and a network and a
subnet are created, no 169.254.169.254/24 IP is configured in the
related namespace (qdhcp-XXX, per the `ip a` command). In this case,
VMs cannot get metadata any more.

Change-Id: Ibd73824658c9759d32fa53ffcf41f2b719c1028b
Closes-Bug: #1549793


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549793

Title:
  force_metadata = True : qdhcp namespace has no interface with ip
  169.254.169.254

Status in neutron:
  Fix Released

Bug description:
  [root@overcloud-controller-0 ~]#  cat /etc/neutron/dhcp_agent.ini | grep 
metadata | grep -v "#"
  force_metadata = True
  enable_isolated_metadata = False
  enable_metadata_network = False

  [stack@undercloud ~]$ neutron net-list
  +--------------------------------------+---------+-----------------------------------------------------+
  | id                                   | name    | subnets                                             |
  +--------------------------------------+---------+-----------------------------------------------------+
  | d7ebddcd-9989-4068-a8d9-66381e83d1f5 | int_net | 739b813d-4863-44e3-acd5-0bf6c3aaec76 192.168.3.0/24 |
  +--------------------------------------+---------+-----------------------------------------------------+

  [root@overcloud-controller-0 ~]# ip netns exec qdhcp-d7ebddcd-9989-4068-a8d9-66381e83d1f5 ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  36: tap7002581e-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
  link/ether fa:16:3e:3b:e9:ae brd ff:ff:ff:ff:ff:ff
  inet 192.168.3.3/24 brd 192.168.3.255 scope global tap7002581e-a4
     valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe3b:e9ae/64 scope link
     valid_lft forever preferred_lft forever

  We should have an interface in the qdhcp namespace with the
  169.254.169.254 IP for metadata when "force_metadata = True" is set in
  /etc/neutron/dhcp_agent.ini.

  VMs are not receiving metadata in this scenario.
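
  Once fixed, the metadata address should be visible in the dhcp namespace,
  e.g. (namespace name taken from the output above):

  $ ip netns exec qdhcp-d7ebddcd-9989-4068-a8d9-66381e83d1f5 ip a | grep 169.254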


  [root@overcloud-controller-0 ~]# rpm -qa | grep neutron
  openstack-neutron-bigswitch-lldp-2015.1.38-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.2-9.el7ost.noarch
  python-neutronclient-2.4.0-2.el7ost.noarch
  python-neutron-2015.1.2-9.el7ost.noarch
  openstack-neutron-2015.1.2-9.el7ost.noarch
  openstack-neutron-lbaas-2015.1.2-1.el7ost.noarch
  python-neutron-lbaas-2015.1.2-1.el7ost.noarch
  openstack-neutron-common-2015.1.2-9.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.2-9.el7ost.noarch
  openstack-neutron-metering-agent-2015.1.2-9.el7ost.noarch


  [root@overcloud-controller-0 ~]# rpm -qa | grep meta
  yum-metadata-parser-1.1.4-10.el7.x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594756] Re: pep8 job runs a single check only

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/334986
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3ff5b704a7a52fe46f24061d266a30ee44bdeaef
Submitter: Jenkins
Branch: master

commit 3ff5b704a7a52fe46f24061d266a30ee44bdeaef
Author: Jakub Libosvar 
Date:   Tue Jun 28 07:26:26 2016 -0400

pep8: Register checks with their code

pep8 searches for check codes in their docstrings [1]. This allows us to
either ignore or select particular checks by their codes.

e.g. flake8 --select=N333

[1] 
https://github.com/PyCQA/pycodestyle/blob/4438622d0b62df53a1999301d1bdc9fa119ae763/pycodestyle.py#L110

Change-Id: I4644ab087abc441beed52a170df8b5279fed76a4
Closes-bug: 1594756


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594756

Title:
  pep8 job runs a single check only

Status in neutron:
  Fix Released

Bug description:
  Due to https://github.com/PyCQA/pycodestyle/issues/390 , we now run
  only the check that makes sure we use unittest2 instead of unittest.
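
  For context, the fix registers each hacking check's code in its
  docstring, which is where pycodestyle looks for selectable codes; a
  minimal sketch of the pattern (hypothetical check and code):

      def check_no_print(logical_line):
          """N999 - the leading code is what makes --select/--ignore work."""
          if 'print(' in logical_line:
              yield (0, "N999: example message")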

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404867] Re: Volume remains in-use status, if instance booted from volume is deleted in error state

2016-06-29 Thread melanie witt
The last fix was reverted https://review.openstack.org/#/c/335652/

** Changed in: nova
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404867

Title:
  Volume remains in-use status, if instance booted from volume is
  deleted in error state

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  If an instance is booted from a volume and goes into the error state for
some reason, the volume from which the instance was booted remains in the
in-use state even after the instance is deleted.
  IMO, the volume should be detached so that it can be used to boot another
instance.

  Steps to reproduce:

  1. Log in to Horizon, create a new volume.
  2. Create an Instance using newly created volume.
  3. Verify instance is in active state.
  $ source devstack/openrc demo demo
  $ nova list
  
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -          | Running     | private=10.0.0.3 |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  Note:
  Use the shelve/unshelve API to see the instance go into the error state.
  Unshelving a volume-backed instance does not work and sets the instance
state to error (ref: https://bugs.launchpad.net/nova/+bug/1404801)

  4. Shelve the instance
  $ nova shelve 

  5. Verify the status is SHELVED_OFFLOADED.
  $ nova list
  
  +--------------------------------------+------+-------------------+------------+-------------+------------------+
  | ID                                   | Name | Status            | Task State | Power State | Networks         |
  +--------------------------------------+------+-------------------+------------+-------------+------------------+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | -          | Shutdown    | private=10.0.0.3 |
  +--------------------------------------+------+-------------------+------------+-------------+------------------+

  6. Unshelve the instance.
  $ nova unshelve 

  7. Verify the instance is in the error state.
  $ nova list
  
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | Error  | unshelving | Spawning    | private=10.0.0.3 |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  8. Delete the instance using Horizon.

  9. Verify that the volume is still in the in-use state.
  $ cinder list
  
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | 4aeefd25-10aa-42c2-9a2d-1c89a95b4d4f | in-use | test |  1   | lvmdriver-1 |   true   | 8f7bdc24-1891-4bbb-8f0c-732b9cbecae7 |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

  10. In Horizon, the volume's "Attached To" information is displayed as
  "Attached to None on /dev/vda".

  11. The user is not able to delete this volume or attach it to another
  instance, as it is still in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596585] Re: test_db_find_column_type_list is failing depending on ovsdb result order

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335450
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7803175840cce3c7eb5d593f3bdfb79a57c48a43
Submitter: Jenkins
Branch: master

commit 7803175840cce3c7eb5d593f3bdfb79a57c48a43
Author: Jakub Libosvar 
Date:   Wed Jun 29 07:38:53 2016 -0400

functional: Use assertItemsEqual for db_find outputs

Change-Id: I3fc0fbecebb811fda669600173fb7c0832848935
Closes-Bug: 1596585
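
For context, the change swaps an order-sensitive assertion for an
order-insensitive one; a minimal sketch of the difference:

    # fails whenever ovsdb returns the rows in a different order
    self.assertEqual(expected_rows, found_rows)
    # passes regardless of row order, which is all db_find guarantees
    self.assertItemsEqual(expected_rows, found_rows)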


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596585

Title:
  test_db_find_column_type_list is failing depending on ovsdb result
  order

Status in neutron:
  Fix Released

Bug description:
  
  ft38.5: 
neutron.tests.functional.agent.test_ovs_lib.OVSLibTestCase.test_db_find_column_type_list(vsctl)_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('pika = 
oslo_messaging._drivers.impl_pika:PikaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
 DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
 DEBUG [oslo_policy._cache_handler] Reloading cached file 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
 DEBUG [oslo_policy.policy] Reloaded policy file: 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
  }}}

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_ovs_lib.py", line 395, in 
test_db_find_column_type_list
  self.assertEqual(tags_present, len_0_list)
File 

[Yahoo-eng-team] [Bug 1597479] Re: Preview Page: Form: Everything says Legend

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335631
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=93bbec1385a1b827398ab9f8270770415df61aa2
Submitter: Jenkins
Branch:master

commit 93bbec1385a1b827398ab9f8270770415df61aa2
Author: Diana Whitten 
Date:   Wed Jun 29 11:47:19 2016 -0700

Preview Page: Form: Everything shouldn't say Legend

Under the form section of the preview page, when you click the 'code'
button, everything shows with a text node of the text "Legend", this
has been fixed.

Change-Id: I40afdca853c5e41f873deb00e27ead0520adb095
Closes-bug: #1597479


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597479

Title:
  Preview Page: Form: Everything says Legend

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Under the form section of the preview page, when you click the 'code'
  button, everything shows with a text node of "Legend"

  See here:
  https://i.imgur.com/24Abkvf.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597532] [NEW] Containers/Swift has a LOT of padding

2016-06-29 Thread Diana Whitten
Public bug reported:

Containers/Swift has a LOT of padding

The two columns should align together ... and the empty state should use
an info or well or something more bootstrappy.

** Affects: horizon
 Importance: Undecided
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: branding containers swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597532

Title:
  Containers/Swift has a LOT of padding

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Containers/Swift has a LOT of padding

  The two columns should align together ... and the empty state should
  use an info or well or something more bootstrappy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597528] [NEW] Containers/Swift URL Redirection Loop

2016-06-29 Thread Diana Whitten
Public bug reported:

If you hit /project/containers/container/ directly, you hit an infinite
reloading loop.

See the captured animated gif attached.

** Affects: horizon
 Importance: Undecided
 Status: Confirmed


** Tags: containers swift

** Attachment added: "swift_reload.gif"
   
https://bugs.launchpad.net/bugs/1597528/+attachment/4692478/+files/swift_reload.gif

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597528

Title:
  Containers/Swift URL Redirection Loop

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  If you hit /project/containers/container/ directly, you hit an
  infinite reloading loop.

  See the captured animated gif attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597521] [NEW] files should not be injected if config drive is configured

2016-06-29 Thread Vladik Romanovsky
Public bug reported:

A regression was introduced by [1], which causes files to be injected
into the root disk even when the config drive is not in use.

[1] https://review.openstack.org/#/c/303335
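
For illustration, a sketch of the guard one would expect in the libvirt
driver (method names approximate, not the actual patch):

    from nova.virt import configdrive

    if not configdrive.required_by(instance):
        # only fall back to injecting files into the root disk when no
        # config drive will be attached to the instance
        self._inject_data(...)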

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: New


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Vladik Romanovsky (vladik-romanovsky)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597521

Title:
  files should not be injected if config drive is configured

Status in OpenStack Compute (nova):
  New

Bug description:
  A regression was introduced by [1], which causes files to be injected
  into the root disk even when the config drive is not in use.
  
  [1] https://review.openstack.org/#/c/303335

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597503] [NEW] Preview Page: Material: Code Icon is wrong

2016-06-29 Thread Diana Whitten
Public bug reported:

Somewhere along the lines, the preview page code button, when viewed in
'Material' shows an alarm clock instead of a code icon.  Seen here:
https://i.imgur.com/MfpXYpC.png

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: branding

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597503

Title:
  Preview Page: Material: Code Icon is wrong

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Somewhere along the lines, the preview page code button, when viewed
  in 'Material' shows an alarm clock instead of a code icon.  Seen here:
  https://i.imgur.com/MfpXYpC.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597502] [NEW] Windows ISO won't detect ephemeral drive with virtio driver

2016-06-29 Thread Kevin
Public bug reported:

Environment:

Fuel 9.0 deployed Mitaka; Ceph backs everything except the ephemeral drives.

Procedure:

Download en_windows_server_2012_r2_x64_dvd_2707946.iso from MSDN.

Download https://fedorapeople.org/groups/virt/virtio-win/direct-
downloads/latest-virtio/virtio-win.iso

Extract contents of en_windows_server_2012_r2_x64_dvd_2707946.iso to
C:\Images\ISO

Extract contents of virtio-win.iso (virtio-win-0.1.118.iso) to C:\Images
\virtio-win-0.1.118

copy virtio-win-0.1.118/ folder into .\ISO folder.

Run the following command:

oscdimg -n -m -bc:\Images\ISO\boot\etfsboot.com C:\Images\ISO 
C:\Images\en_windows_server_2012_r2_x64_dvd_
2707946_Openstack.iso

At this point if I look at the contents of en_windows_server_2012_r2_x64_dvd_
2707946_Openstack.iso I see a folder inside with the name "virtio-win-0.1.118" 
(which contains all of the virtio drivers).

Upload the image to OpenStack through Horizon. Boot a VM with Horizon; when
the Windows installation asks "Where do you want to install Windows?" the
box is blank (no disk drive is detected to install the OS on). I select
"Load driver", select "Browse...", and select the virtio driver
viostor\2k12R2\amd64; no driver is found. If I uncheck the box that says
"Hide drivers that aren't compatible with this computer's hardware." I see
"Red Hat VirtIO SCSI controller (D:\VIRTIO-
WIN-0.1.118\VIOSTOR\2k12R2\AMD64\VIOSTOR.INF)" show up. If I click "Next"
it installs the driver, but I still see a blank box for where to install
the operating system.

From everything I can find, the virtio drivers are the correct drivers I
need for Windows to install on OpenStack.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597502

Title:
  Windows ISO won't detect ephemeral drive with virtio driver

Status in OpenStack Compute (nova):
  New

Bug description:
  Environment:

  Fuel 9.0 deployed Mitaka; Ceph backs everything except the ephemeral drives.

  Procedure:

  Download en_windows_server_2012_r2_x64_dvd_2707946.iso from MSDN.

  Download https://fedorapeople.org/groups/virt/virtio-win/direct-
  downloads/latest-virtio/virtio-win.iso

  Extract contents of en_windows_server_2012_r2_x64_dvd_2707946.iso to
  C:\Images\ISO

  Extract contents of virtio-win.iso (virtio-win-0.1.118.iso) to
  C:\Images\virtio-win-0.1.118

  copy virtio-win-0.1.118/ folder into .\ISO folder.

  Run the following command:

  oscdimg -n -m -bc:\Images\ISO\boot\etfsboot.com C:\Images\ISO 
C:\Images\en_windows_server_2012_r2_x64_dvd_
  2707946_Openstack.iso

  At this point if I look at the contents of en_windows_server_2012_r2_x64_dvd_
  2707946_Openstack.iso I see a folder inside with the name 
"virtio-win-0.1.118" (which contains all of the virtio drivers).

  Upload the image to OpenStack through Horizon. Boot a VM with Horizon;
  when the Windows installation asks "Where do you want to install Windows?"
  the box is blank (no disk drive is detected to install the OS on). I
  select "Load driver", select "Browse...", and select the virtio driver
  viostor\2k12R2\amd64; no driver is found. If I uncheck the box that says
  "Hide drivers that aren't compatible with this computer's hardware." I see
  "Red Hat VirtIO SCSI controller (D:\VIRTIO-
  WIN-0.1.118\VIOSTOR\2k12R2\AMD64\VIOSTOR.INF)" show up. If I click "Next"
  it installs the driver, but I still see a blank box for where to install
  the operating system.

  From everything I can find, the virtio drivers are the correct drivers I
  need for Windows to install on OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597479] [NEW] Preview Page: Form: Everything says Legend

2016-06-29 Thread Diana Whitten
Public bug reported:

Under the form section of the preview page, when you click the 'code'
button, everything shows with a text node of "Legend"

See here:
https://i.imgur.com/24Abkvf.png

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: branding

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597479

Title:
  Preview Page: Form: Everything says Legend

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Under the form section of the preview page, when you click the 'code'
  button, everything shows with a text node of "Legend"

  See here:
  https://i.imgur.com/24Abkvf.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597478] [NEW] Clearer error message when deleting a protected image

2016-06-29 Thread Rafael Rivero
Public bug reported:

It would be helpful to provide a clearer error message when deleting an
image that is set to 'Protected': the message should state that the image
cannot be deleted while protected, display the unset commands in the CLI
clients, and give navigational directions in Horizon.

Horizon
Error: You are not allowed to delete image: cirros

openstackclient
ERROR: openstack 403 Forbidden: Image is protected (HTTP 403)
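
For reference, the unset command such a message could point to (flag as
provided by openstackclient; the image name is an example):

$ openstack image set --unprotected cirros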

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1597478

Title:
  Clearer error message when deleting a protected image

Status in Glance:
  New

Bug description:
  It would be helpful to provide a clearer error message when deleting an
  image that is set to 'Protected': the message should state that the image
  cannot be deleted while protected, display the unset commands in the CLI
  clients, and give navigational directions in Horizon.

  Horizon
  Error: You are not allowed to delete image: cirros

  openstackclient
  ERROR: openstack 403 Forbidden: Image is protected (HTTP 403)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1597478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568708] Re: translation sync broken due to wrong usage of venv

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335527
Committed: 
https://git.openstack.org/cgit/openstack/networking-ovn/commit/?id=e11be5726a51d4a7a41abb1899a6dcc85691b3e7
Submitter: Jenkins
Branch: master

commit e11be5726a51d4a7a41abb1899a6dcc85691b3e7
Author: Kyle Mestery 
Date:   Wed Jun 29 09:23:10 2016 -0500

tox.ini: Stop using zuul-cloner for venv

This should fix the translation job, similar to the fix for
networking-midonet [1].

[1] https://review.openstack.org/#/c/330871/

Change-Id: I83f7b41221614e9eb28e2f95bfd11bc54d7d404f
Closes-Bug: #1568708
Signed-off-by: Kyle Mestery 


** Changed in: networking-ovn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568708

Title:
  translation sync broken due to wrong usage of venv

Status in networking-midonet:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in neutron:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Sahara mitaka series:
  Fix Released

Bug description:
  Post jobs cannot use zuul-cloner and currently have no access to
  upper-constraints. See how nova or neutron themselves handle this.

  Right now the translation sync fails:

  https://jenkins.openstack.org/job/neutron-fwaas-propose-translation-
  update/119/console

  Please fix the venv tox environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1568708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597160] Re: linuxbridget agent-id is not linuxbridge agent uuid

2016-06-29 Thread Sean M. Collins
** Tags added: linuxbridge

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597160

Title:
  linuxbridget agent-id is not linuxbridge agent uuid

Status in neutron:
  Opinion

Bug description:
  I found that the linuxbridge agent calls RPC to the API server (the call
  method is 'get_devices_details_list') and passes an agent_id param to the
  RPC server. I also found that the agent_id param is not used by the API
  server; it only appears in debug output messages, so I think agent_id
  just serves to mark the incoming agent. If it is used like that, why not
  use the agent UUID instead of 'lb' + the physical interface MAC (like
  lb00505636ff2d)? A UUID identifies the incoming linuxbridge host more
  quickly than a MAC address. Of course, there are other ways to know the
  incoming linuxbridge host, such as the host param, so why not delete the
  useless param?


  ./server.log:48649:2016-06-27 16:36:56.403 108124 DEBUG
  neutron.plugins.ml2.rpc [req-d8a17733-6e4e-4012-8157-e7b9403c127d - -
  - - -] Device tapb57d75e8-f4 details requested by agent lb00505636ff2d
  with host com-net get_device_details /usr/lib/python2.7/site-
  packages/neutron/plugins/ml2/rpc.py:70
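
  For context, a sketch of how the reported agent id is evidently formed
  (matching lb00505636ff2d in the log above; not the exact neutron code):

      agent_id = 'lb' + physical_interface_mac.replace(':', '')
      # e.g. '00:50:56:36:ff:2d' -> 'lb00505636ff2d'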

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240043] Re: get_server_diagnostics must define a hypervisor-independent API

2016-06-29 Thread Roman Podoliaka
I'll double check this on my devstack and see what we can do.

** Changed in: nova
 Assignee: Gary Kotton (garyk) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240043

Title:
  get_server_diagnostics must define a hypervisor-independent API

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  get_server_diagnostics currently returns an unrestricted dictionary, which is 
only lightly documented in a few places, e.g.:
  
http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

  That documentation shows explicit differences between libvirt and
  XenAPI.

  There are moves to test and enforce the return values, and suggestions
  that Ceilometer may be interested in consuming the output; therefore
  we need an API which is explicitly defined and does not depend on
  hypervisor-specific behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597461] [NEW] L3 HA + DVR: 2 masters after reboot of controller

2016-06-29 Thread Ann Taraday
Public bug reported:

ENV: Mitaka 3 controllers 45 computes DVR + L3 HA

After a reboot of the controller on which the l3 agent is active,
another l3 agent becomes active. When the rebooted node recovers, its
l3 agent becomes active as well - this leads to an extra loss of
external connectivity in the tenant network. After some time only one
agent remains active - the one from the rebooted node. Sometimes
connectivity does not come back, as the snat port ends up on the wrong
host.

The root cause of this problem is that routers are processed by the l3
agent before the openvswitch agent sets up the appropriate HA ports, so
for some time the recovered HA routers are isolated from the HA routers
on other hosts and become active.

The possible solution is proper serialization: the l3 agent should
process HA routers only after the HA network has been set up on the
controller.

With 100 routers and networks this issue has been reproduced with every
reboot.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog l3-ha

** Attachment added: "openvswitch agent logs"
   
https://bugs.launchpad.net/bugs/1597461/+attachment/4692415/+files/neutron-openvswitch-agent.log.3.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597461

Title:
  L3 HA + DVR: 2 masters after reboot of controller

Status in neutron:
  New

Bug description:
  ENV: Mitaka 3 controllers 45 computes DVR + L3 HA

  After a reboot of the controller on which the l3 agent is active,
  another l3 agent becomes active. When the rebooted node recovers, its
  l3 agent becomes active as well - this leads to an extra loss of
  external connectivity in the tenant network. After some time only one
  agent remains active - the one from the rebooted node. Sometimes
  connectivity does not come back, as the snat port ends up on the
  wrong host.

  The root cause of this problem is that routers are processed by the
  l3 agent before the openvswitch agent sets up the appropriate HA
  ports, so for some time the recovered HA routers are isolated from
  the HA routers on other hosts and become active.

  The possible solution is proper serialization: the l3 agent should
  process HA routers only after the HA network has been set up on the
  controller.

  With 100 routers and networks this issue has been reproduced with
  every reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573605] Re: Fixed type:dict validator passes unexpected keys

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/309964
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=bb13c505bd0d8c032bacbf5d6615281bf29b
Submitter: Jenkins
Branch:master

commit bb13c505bd0d8c032bacbf5d6615281bf29b
Author: Pavel Gluschak 
Date:   Mon Apr 25 15:59:18 2016 +0300

Fixed type:dict validator passes unexpected keys

Validation should fail if passed parameter
does not match defined schema.

Change-Id: Ia93ff849396c6e2a5a170d7c01629a38e412f037
Closes-Bug: #1573605
Signed-off-by: Pavel Gluschak 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573605

Title:
  Fixed type:dict validator passes unexpected keys

Status in neutron:
  Fix Released

Bug description:
  Validation schema definition:
  'params': {
  ...
  'validate': {
  'type:dict': {
  'name': {'type:string': None}
  }
  }
  }

  Passed data:
  {'params': {'bad_param': 'val'}}

  Expected result:
  Validation fails, because bad_param is not defined in schema.

  Actual result:
  Validation passes
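
  A minimal sketch of the strict behaviour the fix introduces
  (simplified Python for illustration; not neutron-lib's actual
  implementation, which also runs the per-key validators):

  def validate_dict(data, key_specs=None):
      if not isinstance(data, dict):
          return "'%s' is not a dictionary" % (data,)
      if key_specs is None:
          return None
      # Reject any key that is not defined in the schema.
      unexpected = set(data) - set(key_specs)
      if unexpected:
          return "Unexpected keys supplied: %s" % ', '.join(sorted(unexpected))
      return None

  # With the fix this now reports a failure instead of passing:
  print(validate_dict({'bad_param': 'val'}, {'name': {'type:string': None}}))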

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597458] [NEW] Branding: New LI should use themable selects

2016-06-29 Thread Diana Whitten
Public bug reported:

Most of Horizon now uses themable selects ... the new LI should too.

** Affects: horizon
 Importance: Medium
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: branding

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597458

Title:
  Branding: New LI should use themable selects

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Most of Horizon now uses themable selects ... the new LI should too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597433] [NEW] xenapi test failures on Newton

2016-06-29 Thread Corey Bryant
Public bug reported:

We're seeing the following errors in ubuntu package builds since the
recent XenAPI patches landed:

commit 5d15918076eef173a3d140f968260aeb19d0614e
Merge: 44d8b0d 5915da7
Author: Jenkins 
Date:   Tue Jun 28 12:55:20 2016 +

Merge "XenAPI: Stream config drive to XAPI"

commit 44d8b0d0cfde1a0938f42eb6a89dbd210d9f88c5
Merge: 7c0de90 b19e377
Author: Jenkins 
Date:   Tue Jun 28 11:07:05 2016 +

Merge "Moving test helpers to a common place"

commit 7c0de90203c5b9efd527c65f4d8dd32a1513c5bc
Merge: 0c85650 f39e660
Author: Jenkins 
Date:   Tue Jun 28 11:01:34 2016 +

Merge "Improve image signature verification failure notification"

commit 0c8565097c690a0c93f7544c86318cee31417b76
Merge: 5f75130 3e85b80
Author: Jenkins 
Date:   Tue Jun 28 10:28:36 2016 +

Merge "XenAPI: Perform disk operations in dom0"


Test failures:

==
Failed 8 tests - output below:
==

nova.tests.unit.virt.xenapi.plugins.test_partition_utils.PartitionUtils.test_wait_for_dev_ok


Captured traceback:
~~~
Traceback (most recent call last):
  File "/«PKGBUILDDIR»/nova/tests/unit/virt/xenapi/plugins/test_partition_utils.py", line 25, in setUp
    self.partition_utils = self.load_plugin("partition_utils.py")
  File "/«PKGBUILDDIR»/nova/tests/unit/virt/xenapi/plugins/plugin_test.py", line 68, in load_plugin
    return imp.load_source(name, path)
  File "/«PKGBUILDDIR»/plugins/xenserver/xenapi/etc/xapi.d/plugins/partition_utils.py", line 26, in <module>
    pluginlib.configure_logging("disk_utils")
  File "/«PKGBUILDDIR»/plugins/xenserver/xenapi/etc/xapi.d/plugins/pluginlib_nova.py", line 42, in configure_logging
    sysh = logging.handlers.SysLogHandler('/dev/log')
  File "/usr/lib/python2.7/logging/handlers.py", line 761, in __init__
    self._connect_unixsocket(address)
  File "/usr/lib/python2.7/logging/handlers.py", line 789, in _connect_unixsocket
    self.socket.connect(address)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 237, in connect
    while not socket_connect(fd, address):
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 39, in socket_connect
    raise socket.error(err, errno.errorcode[err])
socket.error: [Errno 2] ENOENT


nova.tests.unit.virt.xenapi.plugins.test_partition_utils.PartitionUtils.test_mkfs_ext3_no_label
---

Captured traceback:
~~~
Traceback (most recent call last):
  File "/«PKGBUILDDIR»/nova/tests/unit/virt/xenapi/plugins/test_partition_utils.py", line 25, in setUp
    self.partition_utils = self.load_plugin("partition_utils.py")
  File "/«PKGBUILDDIR»/nova/tests/unit/virt/xenapi/plugins/plugin_test.py", line 68, in load_plugin
    return imp.load_source(name, path)
  File "/«PKGBUILDDIR»/plugins/xenserver/xenapi/etc/xapi.d/plugins/partition_utils.py", line 26, in <module>
    pluginlib.configure_logging("disk_utils")
  File "/«PKGBUILDDIR»/plugins/xenserver/xenapi/etc/xapi.d/plugins/pluginlib_nova.py", line 42, in configure_logging
    sysh = logging.handlers.SysLogHandler('/dev/log')
  File "/usr/lib/python2.7/logging/handlers.py", line 761, in __init__
    self._connect_unixsocket(address)
  File "/usr/lib/python2.7/logging/handlers.py", line 789, in _connect_unixsocket
    self.socket.connect(address)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 237, in connect
    while not socket_connect(fd, address):
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 39, in socket_connect
    raise socket.error(err, errno.errorcode[err])
socket.error: [Errno 2] ENOENT


nova.tests.unit.virt.xenapi.plugins.test_partition_utils.PartitionUtils.test_wait_for_dev_timeout
-

Captured traceback:
~~~
Traceback (most recent call last):
  File "/«PKGBUILDDIR»/nova/tests/unit/virt/xenapi/plugins/test_partition_utils.py", line 25, in setUp
    self.partition_utils = self.load_plugin("partition_utils.py")
  File "/«PKGBUILDDIR»/nova/tests/unit/virt/xenapi/plugins/plugin_test.py", line 68, in load_plugin
    return imp.load_source(name, path)
  File "/«PKGBUILDDIR»/plugins/xenserver/xenapi/etc/xapi.d/plugins/partition_utils.py", line 26, in <module>
    pluginlib.configure_logging("disk_utils")
  File "/«PKGBUILDDIR»/plugins/xenserver/xenapi/etc/xapi.d/plugins/pluginlib_nova.py", line 
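
All of these failures reduce to the same setUp problem; a minimal
reproduction, assuming (as in a package-build chroot) that no syslog
daemon is listening on /dev/log:

import logging.handlers

# Without a syslog daemon, the /dev/log Unix socket does not exist, so
# constructing the handler raises socket.error(ENOENT) -- the same
# failure the plugin import triggers via configure_logging().
handler = logging.handlers.SysLogHandler(address='/dev/log')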

[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/329787
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=c3de38ed530b8db77185e819af65574e35ebe134
Submitter: Jenkins
Branch:master

commit c3de38ed530b8db77185e819af65574e35ebe134
Author: zhengyao1 
Date:   Wed Jun 15 16:42:42 2016 +0800

Use correct order of arguments to assertEqual

The correct order of arguments to assertEqual that is expected by
testtools is (expected, observed).

This patch fixes the inverted usage of arguments in some places
that have cropped up since the last fix of this bug.

Change-Id: If8c0dcb58496bc2fcf4c635f384522a1f7d2b2af
Closes-Bug: #1259292


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Astara:
  In Progress
Status in Barbican:
  In Progress
Status in Blazar:
  New
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in Freezer:
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-glanceclient:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in sqlalchemy-migrate:
  New
Status in SWIFT:
  New
Status in tacker:
  New
Status in tempest:
  New
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
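
  For illustration, a minimal unittest-style example of the convention
  (compute_answer is a hypothetical stand-in for the code under test):

  import unittest

  def compute_answer():
      """Stand-in for the code under test."""
      return 42

  class TestOrder(unittest.TestCase):
      def test_argument_order(self):
          observed = compute_answer()
          # Expected value first, observed value second; reversing them
          # makes the "expected X, got Y" failure message lie about
          # which value is which.
          self.assertEqual(42, observed)

  if __name__ == '__main__':
      unittest.main()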

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481416] Re: Horizon UI refreshes the active panel when clicked again

2016-06-29 Thread Diana Whitten
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1481416

Title:
  Horizon UI refreshes the active panel when clicked again

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon refreshes the page even the panel selected is the same one as
  the one clicked

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1481416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349617] Re: SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer

2016-06-29 Thread Scott Moser
** No longer affects: cirros

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349617

Title:
  SSHException: Error reading SSH protocol banner[Errno 104] Connection
  reset by peer

Status in grenade:
  Invalid
Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Noticed a drop in categorized bugs on grenade jobs, so looking at
  latest I see this:

  http://logs.openstack.org/63/108363/5/gate/gate-grenade-dsvm-partial-
  ncpu/1458072/console.html

  Running this query:

  message:"Failed to establish authenticated ssh connection to cirros@"
  AND message:"(Error reading SSH protocol banner[Errno 104] Connection
  reset by peer). Number attempts: 18. Retry after 19 seconds." AND
  tags:"console"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGVzdGFibGlzaCBhdXRoZW50aWNhdGVkIHNzaCBjb25uZWN0aW9uIHRvIGNpcnJvc0BcIiBBTkQgbWVzc2FnZTpcIihFcnJvciByZWFkaW5nIFNTSCBwcm90b2NvbCBiYW5uZXJbRXJybm8gMTA0XSBDb25uZWN0aW9uIHJlc2V0IGJ5IHBlZXIpLiBOdW1iZXIgYXR0ZW1wdHM6IDE4LiBSZXRyeSBhZnRlciAxOSBzZWNvbmRzLlwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA2NTkwMTEwMzMyLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  I get 28 hits in 7 days, and it seems to be very particular to grenade
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1349617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597362] [NEW] ML2 manager throws spurious error messages

2016-06-29 Thread Ryan Moats
Public bug reported:

ML2 manager, while extending the network dictionary, reports and logs
an ERROR-level message if the network being created has no segments.
Since this doesn't appear to halt operation, the error message pollutes
the log file.

Version: master

** Affects: neutron
 Importance: Undecided
 Assignee: Ryan Moats (rmoats)
 Status: In Progress

** Description changed:

- ML2 manager, while extending the network dictionary, reports an logs an ERROR
- level message if the network being created has no segments.  Since this 
doesn't
- appear to halt operation, the error message pollutes the log file.
+ ML2 manager, while extending the network dictionary, reports an logs an
+ ERROR level message if the network being created has no segments.  Since
+ this doesn't appear to halt operation, the error message pollutes the
+ log file.
  
  Version: master

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597362

Title:
  ML2 manager throws spurious error messages

Status in neutron:
  In Progress

Bug description:
  ML2 manager, while extending the network dictionary, reports and logs
  an ERROR-level message if the network being created has no segments.
  Since this doesn't appear to halt operation, the error message
  pollutes the log file.

  Version: master

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597357] [NEW] When keystone is slow to respond: getting user fails

2016-06-29 Thread Sofer Athlan-Guyot
Public bug reported:

To test whether a user exists we check the keystone db by using

openstack user show 'foo' ...

If the user doesn't exist then we get an error.  The usual retry
behaviour of the openstack lib would imply that we wait the full
request_timeout to get this.  This is currently ~170s.  So 170s times
the number of users in the catalog!

To overcome this, the call is wrapped inside a no-retry outer
function[1]

The problem is that on a very slow platform a legitimate timeout can
occur; this is especially true for CI.  Here is an example of such a
failure:

Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]:
Could not evaluate: Command: 'openstack ["user", "show", "--format",
"shell", ["admin", "--domain", "default"]]' has been running for more
then 20 seconds (tried 0, for a total of 0 seconds)

From  http://logs.openstack.org/58/322858/11/check-tripleo/gate-tripleo-
ci-centos-7-ha/7e5b0a6/logs/postci.txt.gz


[1] 
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L81

** Affects: puppet-keystone
 Importance: High
 Assignee: Sofer Athlan-Guyot (sofer-athlan-guyot)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597357

Title:
  When keystone is slow to respond: getting user fails

Status in puppet-keystone:
  Confirmed

Bug description:
  To test whether a user exists we check the keystone db by using

  openstack user show 'foo' ...

  If the user doesn't exist then we get an error.  The usual retry
  behaviour of the openstack lib would imply that we wait the full
  request_timeout to get this.  This is currently ~170s.  So 170s times
  the number of users in the catalog!

  To overcome this, the call is wrapped inside a no-retry outer
  function[1]

  The problem is that on a very slow platform a legitimate timeout can
  occur; this is especially true for CI.  Here is an example of such a
  failure:

  Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]:
  Could not evaluate: Command: 'openstack ["user", "show", "--format",
  "shell", ["admin", "--domain", "default"]]' has been running for more
  then 20 seconds (tried 0, for a total of 0 seconds)

  From  http://logs.openstack.org/58/322858/11/check-tripleo/gate-
  tripleo-ci-centos-7-ha/7e5b0a6/logs/postci.txt.gz

  
  [1] 
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L81

To manage notifications about this bug go to:
https://bugs.launchpad.net/puppet-keystone/+bug/1597357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597357] Re: When kestone is slow to respond getting user fails

2016-06-29 Thread Emilien Macchi
** Also affects: puppet-keystone
   Importance: Undecided
   Status: New

** No longer affects: puppet-keystone

** Project changed: keystone => puppet-keystone

** Changed in: puppet-keystone
   Status: New => Confirmed

** Changed in: puppet-keystone
   Importance: Undecided => High

** Changed in: puppet-keystone
 Assignee: (unassigned) => Sofer Athlan-Guyot (sofer-athlan-guyot)

** Summary changed:

- When kestone is slow to respond getting user fails
+ When keystone is slow to respond: getting user fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597357

Title:
  When keystone is slow to respond: getting user fails

Status in puppet-keystone:
  Confirmed
Status in tripleo:
  Confirmed

Bug description:
  To test whether a user exists we check the keystone db by using

  openstack user show 'foo' ...

  If the user doesn't exist then we get an error.  The usual retry
  behaviour of the openstack lib would imply that we wait the full
  request_timeout to get this.  This is currently ~170s.  So 170s times
  the number of users in the catalog!

  To overcome this, the call is wrapped inside a no-retry outer
  function[1]

  The problem is that on a very slow platform a legitimate timeout can
  occur; this is especially true for CI.  Here is an example of such a
  failure:

  Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]:
  Could not evaluate: Command: 'openstack ["user", "show", "--format",
  "shell", ["admin", "--domain", "default"]]' has been running for more
  then 20 seconds (tried 0, for a total of 0 seconds)

  From  http://logs.openstack.org/58/322858/11/check-tripleo/gate-
  tripleo-ci-centos-7-ha/7e5b0a6/logs/postci.txt.gz

  
  [1] 
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L81

To manage notifications about this bug go to:
https://bugs.launchpad.net/puppet-keystone/+bug/1597357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597353] [NEW] TabbedTable redirection is broken in Horizon after switch to Django 1.9

2016-06-29 Thread Timur Sufiev
Public bug reported:

Horizon uses URL query for specifying which tab in a tabbed table should
be visible, see
https://github.com/openstack/horizon/blob/10.0.0.0b1/openstack_dashboard/dashboards/project/volumes/urls.py#L30-L36

This worked until Django 1.9 came in. Now `reverse()` on such URLs
escapes the '?' sign (the query-part delimiter), which leads to
TabbedTables always opening on the first tab and URL redirection not
working.

While we obviously have to come up with a better solution, in the
meantime it would be better to change the integration test expectations
which fail because the proper tab does not open in some scenarios.
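
A hedged sketch of the usual workaround (the URL name and tab value are
illustrative, reversing requires a configured Django URLconf, and this
uses the Django 1.9-era import path):

from django.core.urlresolvers import reverse
from django.utils.http import urlencode

def tab_url():
    # Keep '?' out of the URL pattern so reverse() has nothing to
    # escape, then append the query string afterwards.
    base = reverse('horizon:project:volumes:index')
    query = urlencode({'tab': 'volumes_and_snapshots__snapshots_tab'})
    return '%s?%s' % (base, query)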

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests

** Tags added: integration-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597353

Title:
  TabbedTable redirection is broken in Horizon after switch to Django
  1.9

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon uses URL query for specifying which tab in a tabbed table
  should be visible, see
  
https://github.com/openstack/horizon/blob/10.0.0.0b1/openstack_dashboard/dashboards/project/volumes/urls.py#L30-L36

  This worked until Django 1.9 came in. Now `reverse()` on such URLs
  escapes the '?' sign (the query-part delimiter), which leads to
  TabbedTables always opening on the first tab and URL redirection not
  working.

  While we obviously have to come up with a better solution, in the
  meantime it would be better to change the integration test
  expectations which fail because the proper tab does not open in some
  scenarios.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404867] Re: Volume remains in-use status, if instance booted from volume is deleted in error state

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/256059
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b7f83337658181f0e7117c7f3b07f69856ffe405
Submitter: Jenkins
Branch:master

commit b7f83337658181f0e7117c7f3b07f69856ffe405
Author: ankitagrawal 
Date:   Wed Sep 23 03:58:19 2015 -0700

Detach volume after deleting instance with no host

If an instance is booted from a volume, shelved, and goes into an error
state due to some reason, the volume from which instance is booted
remains even the instance is deleted because instance has no host
associated with it.

Called _local_delete() to detach volume and destroy bdm if instance is
in shelved_offloaded state or has no host associated with it. This will
cleanup both volumes and the networks.

Note:
Ankit had submitted same patch [1] earlier which was reverted [2] due
to a race condition on jenkins if an instance is deleted when it is in
building state.  The patch was then rebumitted [3] fixing the
the failure of race condition by reverting the ObjectActionError
exception handling in _delete.  This patch was later re-reverted [4]
due to continued jenkins race conditions.

The current patch avoids the jenkins race condition by leaving the flow
for instances in the BUILDING state unchanged and only calling
_local_delete() on instances in the shelved_offloaded or error states
when the instance has no host associated with it.  This addresses the
concerns of the referenced bugs.

[1] Ic630ae7d026a9697afec46ac9ea40aea0f5b5ffb
[2] Id4e405e7579530ed1c1f22ccc972d45b6d185f41
[3] Ic107d8edc7ee7a4ebb04eac58ef0cdbf506d6173
[4] Ibcbe35b5d329b183c4d0e8233e8ada26ebc512c2

Co-Authored-By: Ankit Agrawal 

Closes-Bug: 1404867
Closes-Bug: 1408527

Change-Id: I928a397c75b857e94bf5c002e50ec43a2bed9848


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404867

Title:
  Volume remains in-use status, if instance booted from volume is
  deleted in error state

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If an instance is booted from a volume and goes into an error state
  for some reason, the volume from which the instance was booted
  remains in the in-use state even after the instance is deleted.
  IMO, the volume should be detached so that it can be used to boot
  another instance.

  Steps to reproduce:

  1. Log in to Horizon, create a new volume.
  2. Create an Instance using newly created volume.
  3. Verify instance is in active state.
  $ source devstack/openrc demo demo
  $ nova list
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | ACTIVE | -          | Running     | private=10.0.0.3 |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  Note:
  Use the shelve/unshelve API to see the instance go into the error state.
  Unshelving a volume-backed instance does not work and sets the instance
  state to error (ref: https://bugs.launchpad.net/nova/+bug/1404801)

  4. Shelve the instance
  $ nova shelve 

  5. Verify the status is SHELVED_OFFLOADED.
  $ nova list
  +--------------------------------------+------+-------------------+------------+-------------+------------------+
  | ID                                   | Name | Status            | Task State | Power State | Networks         |
  +--------------------------------------+------+-------------------+------------+-------------+------------------+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | SHELVED_OFFLOADED | -          | Shutdown    | private=10.0.0.3 |
  +--------------------------------------+------+-------------------+------------+-------------+------------------+

  6. Unshelve the instance.
  $ nova unshelve 

  7. Verify the instance is in Error state.
  $ nova list
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | dae3a13b-6aa8-4794-93cd-5ab7bf90f604 | nova | Error  | unshelving | Spawning    | private=10.0.0.3 |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  8. 

[Yahoo-eng-team] [Bug 1597168] Re: There's no way to update '--shared' argument of qos-policy to False by using qos-policy-update command.

2016-06-29 Thread Darek Smigiel
Lack of '--shared' parameter means that qos-policy won't be shared.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597168

Title:
  There's no way to update '--shared' argument of qos-policy to False by
  using qos-policy-update command.

Status in neutron:
  Invalid

Bug description:
  In Mitaka, there's no way to update the '--shared' attribute of a
  qos-policy to False by using the qos-policy-update command.
  We may actually need to do this: for example, in the qos-policy RBAC
  scenario we may need to change the RBAC policy by updating '--shared'
  to False, so that we can stop sharing the qos-policy with all tenants.

  [root@localhost devstack]# neutron qos-policy-create qos-policy-01
  Created a new policy:
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 53e278d6-ad97-4875-9eb3-374e46d5fe9c |
  | name| qos-policy-01|
  | rules   |  |
  | shared  | False|
  | tenant_id   | aced7a29bb134dec82307a880d1cc542 |
  +-+--+
  [root@localhost devstack]# neutron qos-policy-update qos-policy-01 --shared   
  
  Updated policy: qos-policy-01
  [root@localhost devstack]# neutron qos-policy-show qos-policy-01
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 53e278d6-ad97-4875-9eb3-374e46d5fe9c |
  | name| qos-policy-01|
  | rules   |  |
  | shared  | True |
  | tenant_id   | aced7a29bb134dec82307a880d1cc542 |
  +-+--+
  [root@localhost devstack]# neutron qos-policy-update qos-policy-01 
--shared=False
  usage: neutron qos-policy-update [-h] [--request-format {json}] [--name NAME]
   [--description DESCRIPTION] [--shared]
   POLICY
  neutron qos-policy-update: error: argument --shared: ignored explicit 
argument u'False'
  Try 'neutron help qos-policy-update' for more information.
  [root@localhost devstack]# 

  [root@localhost devstack]# neutron help qos-policy-update
  usage: neutron qos-policy-update [-h] [--request-format {json}] [--name NAME]
   [--description DESCRIPTION] [--shared]
   POLICY

  Update a given qos policy.

  positional arguments:
POLICYID or name of policy to update.

  optional arguments:
-h, --helpshow this help message and exit
--request-format {json}
  DEPRECATED! Only JSON request format is supported.
--name NAME   Name of QoS policy.
--description DESCRIPTION
  Description of the QoS policy.
--shared  Accessible by other tenants. Set shared to True
  (default is False).
  [root@localhost devstack]#
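
  Note that the REST API itself can express shared=False even though
  the CLI flag cannot; a hedged sketch (the controller host and token
  are placeholders, the policy id is the one from the session above):

  import requests

  TOKEN = '<auth token>'              # placeholder
  NEUTRON = 'http://controller:9696'  # placeholder endpoint
  POLICY = '53e278d6-ad97-4875-9eb3-374e46d5fe9c'

  resp = requests.put(
      '%s/v2.0/qos/policies/%s' % (NEUTRON, POLICY),
      headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
      json={'policy': {'shared': False}},
  )
  print(resp.status_code)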

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597196] Re: doc: sample schema code in API change tutorial code is incorrect

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335341
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=4db765198dcad94d480e1a8bb0314dd125f3a5de
Submitter: Jenkins
Branch:master

commit 4db765198dcad94d480e1a8bb0314dd125f3a5de
Author: guoshan 
Date:   Wed Jun 29 15:12:14 2016 +0800

API Change Tutorial doc code modify

The code refactor made the guide inaccurate.
The code in the tutorial is out of date.

Change-Id: Ic986af1072f158f0f0f5608a9754db9d3e507409
Closes-Bug: #1597196


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597196

Title:
  doc: sample schema code in API change tutorial code is incorrect

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The code refactor made the guide inaccurate.
  The code in the tutorial is out of date.

  For example:
  role_create = {
  'properties': {
  'name': parameter_types.name,
  'description': parameter_types.description
  }
  ...
  }

  role_update = {
  'properties': {
  'name': parameter_types.name,
  'description': parameter_types.description
  }
  ...
  }

  For a better tutorial, these snippets should be updated.
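
  For illustration, a self-contained sketch of what the completed
  snippets might look like (the 'required'/'additionalProperties'
  choices and length limits here are assumptions, not keystone's
  verbatim schema):

  _role_properties = {
      'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
      'description': {'type': 'string'},
  }

  role_create = {
      'type': 'object',
      'properties': _role_properties,
      'required': ['name'],
      'additionalProperties': True,
  }

  role_update = {
      'type': 'object',
      'properties': _role_properties,
      'minProperties': 1,
      'additionalProperties': True,
  }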

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597317] [NEW] Launch instance by using default lvm backend volume failed

2016-06-29 Thread ganesan
Public bug reported:


Installed OpenStack Mitaka release (openstack 2.2.0)

Default hypervisor libvirt+KVM is used
Default volume backend "LVM" is used


Steps followed

1. Create bootable volume by using image "cirros" - OK
2. Launch VM instance using the volume - NOK

/var/log/cinder/volume.log


2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
[req-a3c3015e-353c-49d6-af28-8e97010a588b 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] Exception during message handling: list 
index out of range
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1441, in 
initialize_connection
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher volume, 
connector)
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 760, in 
create_export
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
volume_path)
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 195, in 
create_export
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher chap_auth 
= self._get_target_chap_auth(context, iscsi_name)
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 328, in 
_get_target_chap_auth
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher vol_id = 
iscsi_name.split(':volume-')[1]
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher IndexError: 
list index out of range
2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher
2016-06-29 07:45:22.280 24205 ERROR oslo_messaging._drivers.common 
[req-a3c3015e-353c-49d6-af28-8e97010a588b 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] Returning exception list index out of 
range to caller
2016-06-29 07:45:22.281 24205 ERROR oslo_messaging._drivers.common 
[req-a3c3015e-353c-49d6-af28-8e97010a588b 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
"/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1441, in 
initialize_connection\nvolume, connector)\n', '  File 
"/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 760, in 
create_export\nvolume_path)\n', '  File 
"/usr/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py",
  line 195, in create_export\nchap_auth = 
self._get_target_chap_auth(context, iscsi_name)\n', '  File 
"/usr/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 328, in 
_get_target_chap_auth\nvol_id = iscsi_name.split(\':volume-\')[1]\n', 
'IndexError: list index out of range\n']
2016-06-29 07:45:22.397 24205 ERROR cinder.volume.manager 
[req-864e09a5-1d40-49ce-b7c2-a46b1a91fe25 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] Terminate volume connection failed: 
'NoneType' object has no attribute 'split'
2016-06-29 07:45:22.398 24205 ERROR oslo_messaging.rpc.dispatcher 
[req-864e09a5-1d40-49ce-b7c2-a46b1a91fe25 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] Exception during message handling: Bad 
or unexpected response from the storage volume backend API: Terminate volume 
connection failed: 'NoneType' object has no attribute 'split'
2016-06-29 07:45:22.398 24205 ERROR 
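
The first traceback reduces to an unguarded string split; a minimal
reproduction (the IQN values below are illustrative):

# iscsi.py assumes every target IQN contains ':volume-'; when it does
# not, split() returns a single-element list and [1] raises IndexError.
good = 'iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e'
print(good.split(':volume-')[1])   # 72ce7ebf-7400-47da-91f5-3173e01a199e

bad = 'iqn.2010-10.org.openstack'  # no ':volume-' marker
print(bad.split(':volume-')[1])    # IndexError: list index out of range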

[Yahoo-eng-team] [Bug 1597302] [NEW] Horizon documentation about creating panels is outdated

2016-06-29 Thread Timur Sufiev
Public bug reported:

So, the docs at
http://docs.openstack.org/developer/horizon/topics/tutorial.html#defining-a-panel
tell that in order to define a panel one needs to define the panel,
dashboard and panel group in python code, while actually they can
already be created using plugin files, see
https://github.com/openstack/horizon/tree/master/openstack_dashboard/enabled
- which allows for greater flexibility and pluggability than the same
thing in python code.

Documentation needs to be updated.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: documentation low-hanging-fruit

** Tags added: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597302

Title:
  Horizon documentation about creating panels is outdated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  So, the docs at
  
http://docs.openstack.org/developer/horizon/topics/tutorial.html#defining-a-panel
  tell that in order to define a panel one needs to define the panel,
  dashboard and panel group in python code, while actually they can
  already be created using plugin files, see
  https://github.com/openstack/horizon/tree/master/openstack_dashboard/enabled
  - which allows for greater flexibility and pluggability than the same
  thing in python code.

  Documentation needs to be updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595878] Re: Memory leak in unit tests

2016-06-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/333827
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a5d19b41a57141ef51208b8a821bb7d2e0b6c159
Submitter: Jenkins
Branch:master

commit a5d19b41a57141ef51208b8a821bb7d2e0b6c159
Author: Oleg Bondarev 
Date:   Fri Jun 24 11:42:52 2016 +0300

Mock threading.Thread to prevent daemon creation by unit tests

tests.unit.agent.ovsdb.native.test_connection.TestOVSNativeConnection
calls Connection.start() which starts a daemon with a while True loop
full of mocks. mock._CallList of those mocks start to grow very
quick and finally eat all available memory.

Closes-Bug: #1595878
Change-Id: Ie053a2248925ce5bb960207c16c23b261d1d458c
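
A hedged, self-contained sketch of the approach (the real patch targets
the OVSDB connection tests; the class and test names here are
illustrative):

import threading
import unittest
from unittest import mock

class TestNoDaemonSpawned(unittest.TestCase):
    def setUp(self):
        # Replace threading.Thread with a mock so code under test that
        # calls Connection.start() cannot spawn a real while-True loop.
        patcher = mock.patch.object(threading, 'Thread')
        self.mock_thread = patcher.start()
        self.addCleanup(patcher.stop)

    def test_start_is_inert(self):
        t = threading.Thread(target=lambda: None)  # a MagicMock
        t.start()             # records the call; nothing actually runs
        self.mock_thread.assert_called_once_with(target=mock.ANY)

if __name__ == '__main__':
    unittest.main()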


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595878

Title:
  Memory leak in unit tests

Status in neutron:
  Fix Released

Bug description:
  tests.unit.agent.ovsdb.native.test_connection.TestOVSNativeConnection
  calls Connection.start() which starts a daemon with a while True loop
  full of mocks. mock._CallList of those mocks start to grow very quick
  and finally eat all available memory.

  mem_top output during unit tests run:

  refs:
  18118  [call(1),
          call().get_nowait(),
          call().get_nowait().do_commit(),
          call().get_nowait().results.put(),
          call().task_done(), ...]
  18117  [call.get_nowait(),
          call.get_nowait().do_commit(),
          call.get_nowait().results.put(),
          call.task_done(), ...]
  17990  [call(1),
          call().get_nowait(),
          call().get_nowait().do_commit(), ...]
  17989  [call.get_nowait(),
          call.get_nowait().do_commit(), ...]
  13592  [call(),
          call().fd_wait(..., 1),
          call().timer_wait(),
          call().block(), ...]
  9061   [call(..., ...),
          call().wait(),
          call().run(), ...]

  (remaining rows elided: the object reprs enclosed in angle brackets
  were stripped in transit; the bare counts that followed - 79091,
  47269, 45542, and so on - belonged to those stripped object types)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585122] Re: Uploading image failed with affirmative log

2016-06-29 Thread Erno Kuvaja
** Changed in: glance
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1585122

Title:
  Uploading image failed with affirmative log

Status in Glance:
  Won't Fix

Bug description:
  description:

  When uploading an image via "copy-from" fails because the "Remote
  server where the image is present is unavailable", the function
  returns, but the process carries on and prints a log message like this:
  "Uploaded data of image 952a0e62-500e-44fb-bbc4-f8cc17b5b6a4 from
  request payload successfully"
  That is unreasonable.

  glance/api/v1/images.py

  if copy_from:
      try:
          image_data, image_size = self._get_from_store(req.context,
                                                        copy_from,
                                                        dest=store)
      except Exception:
          upload_utils.safe_kill(req, image_meta['id'], 'queued')
          msg = (_LE("Copy from external source '%(scheme)s' failed for "
                     "image: %(image)s") % {'scheme': scheme,
                                            'image': image_meta['id']})
          LOG.exception(msg)
          return

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1585122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2016-06-29 Thread Ji.Wei
** Also affects: daisycloud-core
   Importance: Undecided
   Status: New

** Changed in: daisycloud-core
 Assignee: (unassigned) => Ji.Wei (jiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Astara:
  In Progress
Status in Barbican:
  In Progress
Status in Blazar:
  New
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in Freezer:
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-glanceclient:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in sqlalchemy-migrate:
  New
Status in SWIFT:
  New
Status in tacker:
  New
Status in tempest:
  New
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597237] [NEW] nova service-list error 500

2016-06-29 Thread fuegoel
Public bug reported:

I'm installing OpenStack Liberty on Ubuntu 14.04 to get OpenStack and
MidoNet up and running, nova image-list and nova endpoints work fine,
but nova service-list returns the following error:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: req-...)

This is the nova-api.log file:
2016-06-29 11:44:37.331 2070 INFO nova.osapi_compute.wsgi.server 
[req-5f633b2e-0251-4274-8233-efd74f39e26f 2fef13dec3c84026aff7cd82ba8f25a9 
c3f3f423740546a7b0095e15f34db99c - - -] 10.10.10.11 "GET 
/v2/c3f3f423740546a7b0095e15f34db99c/os$
2016-06-29 11:44:44.606 2069 INFO nova.osapi_compute.wsgi.server 
[req-5aa1e4c3-aa60-4fb4-aecf-38e26430f25d 2fef13dec3c84026aff7cd82ba8f25a9 
c3f3f423740546a7b0095e15f34db99c - - -] 10.10.10.11 "GET /v2/ HTTP/1.1" status: 
200 len: 572 tim$
2016-06-29 11:44:49.404 2065 INFO nova.osapi_compute.wsgi.server 
[req-31cd18c4-2471-4adf-92d3-e44b7b238376 2fef13dec3c84026aff7cd82ba8f25a9 
c3f3f423740546a7b0095e15f34db99c - - -] 10.10.10.11 "GET /v2/ HTTP/1.1" status: 
200 len: 572 tim$
2016-06-29 11:44:49.723 2065 INFO nova.osapi_compute.wsgi.server 
[req-89c7dffd-5d7f-41ef-8188-27013a7cf059 2fef13dec3c84026aff7cd82ba8f25a9 
c3f3f423740546a7b0095e15f34db99c - - -] 10.10.10.11 "GET 
/v2/c3f3f423740546a7b0095e15f34db99c/im$
2016-06-29 11:49:53.786 2069 INFO nova.osapi_compute.wsgi.server 
[req-4dadce44-e6f4-4420-9ed6-a75563d6ac03 2fef13dec3c84026aff7cd82ba8f25a9 
c3f3f423740546a7b0095e15f34db99c - - -] 10.10.10.11 "GET /v2/ HTTP/1.1" status: 
200 len: 572 tim$
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions 
[req-0977797e-613e-44e3-bdf7-f210a99c4de1 2fef13dec3c84026aff7cd82ba8f25a9 
c3f3f423740546a7b0095e15f34db99c - - -] Unexpected exception in API method
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/services.py", line 
188, in index
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions _services 
= self._get_services_list(req)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/services.py", line 
80, in _get_services_list
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions _services 
= self._get_services(req)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/services.py", line 
44, in _get_services
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions _services 
= self.host_api.service_get_all(context, set_zones=True)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 3497, in 
service_get_all
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions 
set_zones=set_zones)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 171, in 
wrapper
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/service.py", line 313, in get_all
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions 
db_services = db.service_get_all(context, disabled=disabled)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/db/api.py", line 115, in service_get_all
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions return 
IMPL.service_get_all(context, disabled)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 449, in 
service_get_all
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions return 
query.all()
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2399, in all
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions return 
list(self)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2516, in 
__iter__
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions return 
self._execute_and_instances(context)
2016-06-29 11:49:53.957 2069 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1597234] [NEW] VM with encrypted volume goes to error state when hard reboot

2016-06-29 Thread Lisa Li
Public bug reported:

In current master branch with LVM as backend:

Steps to reproduce
==

1. cinder type-create LUKS

2.  cinder encryption-type-create --cipher aes-xts-plain64 --key_size
512   --control_location front-end LUKS
nova.volume.encryptors.luks.LuksEncryptor

3.  cinder create --volume-type LUKS 1

4.  nova boot --flavor 1 --image 3feb30f7-d171-4b58-a126-2127016a6051
lisa

5.  nova volume-attach c2ee07df-f1d2-4c1c-b08f-9d001209d4cf
72ce7ebf-7400-47da-91f5-3173e01a199e

6. nova reboot --hard c2ee07df-f1d2-4c1c-b08f-9d001209d4cf

Actual result
=

The VM goes into error state.

2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server device_info = 
self.connector.connect_volume(connection_info['data'])
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner


2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server cmd=sanitized_cmd)
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server ProcessExecutionError: 
Unexpected error while running command.
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Command: sudo 
nova-rootwrap /etc/nova/rootwrap.conf scsi_id --page 0x83 --whitelisted 
/dev/disk/by-path/ip-10.239.48.111:3260-iscsi-iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e-lun-1
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Exit code: 1
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Stdout: u''
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Stderr: u''
2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server

Analysis:

When the encrypted volume is attached to the VM, the symlink path ends
up pointing to the dm device.
On reboot, the dm device is never detached. The problem may be here;
more investigation is needed.

-HP-Compaq-Elite-8300-CMT:/dev$ ls -lrta 
/dev/mapper/ip-10.239.48.111:3260-iscsi-iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e-lun-1
lrwxrwxrwx 1 root root 7 Jun 29 16:05 
/dev/mapper/ip-10.239.48.111:3260-iscsi-iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e-lun-1
 -> ../dm-2

** Affects: nova
 Importance: Undecided
 Assignee: Lisa Li (lisali)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597234

Title:
  VM with encrypted volume goes to error state when hard reboot

Status in OpenStack Compute (nova):
  New

Bug description:
  In current master branch with LVM as backend:

  Steps to reproduce
  ==

  1. cinder type-create LUKS

  2.  cinder encryption-type-create --cipher aes-xts-plain64 --key_size
  512   --control_location front-end LUKS
  nova.volume.encryptors.luks.LuksEncryptor

  3.  cinder create --volume-type LUKS 1

  4.  nova boot --flavor 1 --image 3feb30f7-d171-4b58-a126-2127016a6051
  lisa

  5.  nova volume-attach c2ee07df-f1d2-4c1c-b08f-9d001209d4cf
  72ce7ebf-7400-47da-91f5-3173e01a199e

  6. nova reboot --hard c2ee07df-f1d2-4c1c-b08f-9d001209d4cf

  Actual result
  =

  The VM goes into error state.

  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server device_info = 
self.connector.connect_volume(connection_info['data'])
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner

  
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server cmd=sanitized_cmd)
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server 
ProcessExecutionError: Unexpected error while running command.
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Command: sudo 
nova-rootwrap /etc/nova/rootwrap.conf scsi_id --page 0x83 --whitelisted 
/dev/disk/by-path/ip-10.239.48.111:3260-iscsi-iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e-lun-1
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Exit code: 1
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Stdout: u''
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server Stderr: u''
  2016-06-29 16:05:11.925 TRACE oslo_messaging.rpc.server

  Analysis:

  When the encrypted volume is attached to the VM, the by-path symlink
  ends up pointing at the device-mapper (dm) device.
  On hard reboot the dm device is never detached; the problem may lie
  here. More investigation is needed.

  -HP-Compaq-Elite-8300-CMT:/dev$ ls -lrta 
/dev/mapper/ip-10.239.48.111:3260-iscsi-iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e-lun-1
  lrwxrwxrwx 1 root root 7 Jun 29 16:05 
/dev/mapper/ip-10.239.48.111:3260-iscsi-iqn.2010-10.org.openstack:volume-72ce7ebf-7400-47da-91f5-3173e01a199e-lun-1
 -> ../dm-2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1595773] Re: Make print py3 compatible

2016-06-29 Thread Ji.Wei
** Also affects: daisycloud-core
   Importance: Undecided
   Status: New

** Changed in: daisycloud-core
 Assignee: (unassigned) => Ji.Wei (jiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595773

Title:
  Make print py3 compatible

Status in daisycloud-core:
  New
Status in Fuel Plugins:
  In Progress
Status in glance_store:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  In Python 3, the print statement has been removed; code must call the
  print() function to achieve the same behavior.

  Valid on Python 3:

  #!/usr/bin/python
  # -*- coding: utf-8 -*-
  print("cinder")

  Invalid on Python 3:

  print "cinder"

File "code", line 5
  print "cinder"
   ^
  SyntaxError: Missing parentheses in call to 'print'
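
  For code that must run on both interpreters, importing print_function
  from __future__ makes the function form mandatory on Python 2 as well;
  a minimal sketch:

  from __future__ import print_function

  # With this import the statement form becomes a syntax error on
  # Python 2 too, so the same source runs unchanged on 2 and 3.
  print("cinder")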

To manage notifications about this bug go to:
https://bugs.launchpad.net/daisycloud-core/+bug/1595773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595830] Re: StandardError raises NameError in Python3

2016-06-29 Thread Ji.Wei
** Also affects: daisycloud-core
   Importance: Undecided
   Status: New

** Changed in: daisycloud-core
 Assignee: (unassigned) => Ji.Wei (jiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595830

Title:
  StandardError raises NameError in Python3

Status in daisycloud-core:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  It raises NameError: name 'StandardError' is not defined in Python3.

  class TimeoutError(StandardError):
  pass

  Traceback (most recent call last):
File "Z:/test_py3.py", line 1, in 
  class TimeoutError(StandardError):
  NameError: name 'StandardError' is not defined
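
  One common portability fix (a sketch, not necessarily the patch that
  will land) is to derive from Exception, which exists on both Python 2
  and Python 3:

  # StandardError was removed in Python 3; Exception is the nearest
  # portable base class. Note that Python 3.3+ also ships a builtin
  # TimeoutError, which this class shadows within its own module.
  class TimeoutError(Exception):
      pass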

To manage notifications about this bug go to:
https://bugs.launchpad.net/daisycloud-core/+bug/1595830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597233] [NEW] rbac-create should return a duplicate error when using the same 'object_id', 'object_type' and 'target_tenant'

2016-06-29 Thread JianGang Weng
Public bug reported:

An RBAC entry should be unique for a given combination of 'object_id',
'object_type' and 'target_tenant'.
In fact, by changing only the 'action' value we can create another entry
with the same 'object_id', 'object_type' and 'target_tenant'.

The steps to reproduce are:

[root@localhost devstack]# neutron rbac-create 
a539e28b-5e6c-4436-b44f-e1f966b6a6a4 --type network --target_tenant tenant_id 
--action access_as_shared
Created a new rbac_policy:
+---+--+
| Field | Value|
+---+--+
| action| access_as_shared |
| id| 0897f09b-1799-416e-9b5d-99d0e153a1b1 |
| object_id | a539e28b-5e6c-4436-b44f-e1f966b6a6a4 |
| object_type   | network  |
| target_tenant | tenant_id|
| tenant_id | aced7a29bb134dec82307a880d1cc542 |
+---+--+
[root@localhost devstack]# neutron rbac-create 
a539e28b-5e6c-4436-b44f-e1f966b6a6a4 --type network --target_tenant tenant_id 
--action access_as_external
Created a new rbac_policy:
+---+--+
| Field | Value|
+---+--+
| action| access_as_external   |
| id| 2c12609e-7878-4161-b533-17b6413bcf0b |
| object_id | a539e28b-5e6c-4436-b44f-e1f966b6a6a4 |
| object_type   | network  |
| target_tenant | tenant_id|
| tenant_id | aced7a29bb134dec82307a880d1cc542 |
+---+--+
[root@localhost devstack]#
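
One way to enforce the proposed uniqueness at the database layer is a
composite unique constraint; an illustrative SQLAlchemy sketch (the
table and constraint names here are hypothetical, not the actual
neutron schema):

import sqlalchemy as sa

metadata = sa.MetaData()

# With this constraint in place, the second rbac-create above would
# fail with a duplicate-entry error instead of creating a near-twin row.
rbac_policies = sa.Table(
    'rbacpolicies', metadata,
    sa.Column('id', sa.String(36), primary_key=True),
    sa.Column('object_id', sa.String(36), nullable=False),
    sa.Column('object_type', sa.String(255), nullable=False),
    sa.Column('target_tenant', sa.String(255), nullable=False),
    sa.Column('action', sa.String(255), nullable=False),
    sa.UniqueConstraint('object_id', 'object_type', 'target_tenant',
                        name='uniq_rbac_policy'),
)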

** Affects: neutron
 Importance: Undecided
 Assignee: JianGang Weng (weng-jiangang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => JianGang Weng (weng-jiangang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597233

Title:
  rbac-create should return a duplicate error when using the same
  'object_id', 'object_type' and 'target_tenant'

Status in neutron:
  New

Bug description:
  An RBAC entry should be unique for a given combination of 'object_id',
  'object_type' and 'target_tenant'.
  In fact, by changing only the 'action' value we can create another entry
  with the same 'object_id', 'object_type' and 'target_tenant'.

  The steps to reproduce are:

  [root@localhost devstack]# neutron rbac-create 
a539e28b-5e6c-4436-b44f-e1f966b6a6a4 --type network --target_tenant tenant_id 
--action access_as_shared
  Created a new rbac_policy:
  +---+--+
  | Field | Value|
  +---+--+
  | action| access_as_shared |
  | id| 0897f09b-1799-416e-9b5d-99d0e153a1b1 |
  | object_id | a539e28b-5e6c-4436-b44f-e1f966b6a6a4 |
  | object_type   | network  |
  | target_tenant | tenant_id|
  | tenant_id | aced7a29bb134dec82307a880d1cc542 |
  +---+--+
  [root@localhost devstack]# neutron rbac-create 
a539e28b-5e6c-4436-b44f-e1f966b6a6a4 --type network --target_tenant tenant_id 
--action access_as_external
  Created a new rbac_policy:
  +---+--+
  | Field | Value|
  +---+--+
  | action| access_as_external   |
  | id| 2c12609e-7878-4161-b533-17b6413bcf0b |
  | object_id | a539e28b-5e6c-4436-b44f-e1f966b6a6a4 |
  | object_type   | network  |
  | target_tenant | tenant_id|
  | tenant_id | aced7a29bb134dec82307a880d1cc542 |
  +---+--+
  [root@localhost devstack]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596135] Re: Make raw_input py3 compatible

2016-06-29 Thread Ji.Wei
** No longer affects: freezer

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1596135

Title:
  Make raw_input py3 compatible

Status in anvil:
  New
Status in daisycloud-core:
  New
Status in Packstack:
  New
Status in Poppy:
  New
Status in python-solumclient:
  New
Status in vmware-nsx:
  New

Bug description:
  In Python 3, raw_input was renamed to input, so the code needs to be
  modified to stay compatible.


  https://github.com/openstack/python-
  solumclient/blob/ea37d226a6ba55d7ad4024233b9d8001aab92ca5/contrib
  /setup-tools/solum-app-setup.py#L76
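
  A common compatibility shim (a sketch; six.moves.input is another
  option) aliases the builtin on Python 2 so the same code runs on both:

  try:
      input = raw_input  # Python 2: make input() read a raw string
  except NameError:
      pass  # Python 3: raw_input is gone; builtin input() already fits

  answer = input("Continue? [y/N] ")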

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1596135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597219] [NEW] default value display error in rbac-networks.rst

2016-06-29 Thread JianGang Weng
Public bug reported:

In the below document:
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/rbac-networks.rst

In the table in the section "REST API Impact", the default value of
'target_tenant' should be '*', but the displayed value is a black dot
'●' (most likely because the bare '*' is being interpreted as
reStructuredText markup instead of being escaped as a literal).

** Affects: neutron
 Importance: Undecided
 Assignee: JianGang Weng (weng-jiangang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => JianGang Weng (weng-jiangang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597219

Title:
  default value display error in rbac-networks.rst

Status in neutron:
  New

Bug description:
  In the below document:
  
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/rbac-networks.rst

  In the table in the section "REST API Impact", the default value of
  'target_tenant' should be '*', but the displayed value is a black dot
  '●' (most likely because the bare '*' is being interpreted as
  reStructuredText markup instead of being escaped as a literal).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597213] [NEW] Deprecate method get_ipv6_addr_by_EUI64

2016-06-29 Thread ChangBo Guo(gcb)
Public bug reported:

oslo.utils provides the same method, get_ipv6_addr_by_EUI64, so the
neutron copy should be deprecated in Newton and removed in Ocata.
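
Callers can switch to the oslo.utils implementation today; a minimal
sketch (the prefix and MAC below are made-up example values):

from oslo_utils import netutils

# Same EUI-64 derivation, maintained in oslo.utils.
addr = netutils.get_ipv6_addr_by_EUI64('2001:db8::', '00:16:3e:33:44:55')
print(addr)  # 2001:db8::216:3eff:fe33:4455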

** Affects: neutron
 Importance: Undecided
 Assignee: ChangBo Guo(gcb) (glongwave)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597213

Title:
  Deprecate method get_ipv6_addr_by_EUI64

Status in neutron:
  In Progress

Bug description:
  oslo.utils provides the same method, get_ipv6_addr_by_EUI64, so the
  neutron copy should be deprecated in Newton and removed in Ocata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597208] [NEW] Failed to create an instance using macvtap because of "_" in the interface vf_name (regex miss)

2016-06-29 Thread Amichay Polishuk
Public bug reported:

Steps to reproduce:

1) neutron port-create --name port1 --binding:vnic_type=macvtap private
2) nova boot --flavor 2 --image  --nic port-id= vm_1

Interface Name on Compute = "p2p1_0"

From n-cpu.log:

2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Failed to allocate network(s)
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Traceback (most recent call last):
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/compute/manager.py", line 2064, in _build_and_run_instance
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     block_device_info=block_device_info)
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2780, in spawn
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     block_device_info=block_device_info)
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4946, in _create_domain_and_network
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     raise exception.VirtualInterfaceCreateException()
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] VirtualInterfaceCreateException: Virtual Interface creation failed
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]
2016-06-29 06:01:04.593 1265 DEBUG nova.compute.utils [req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Virtual Interface creation failed notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:284
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Build of instance 7a3063f5-43a7-4d25-b23e-335a2a3274ab aborted: Failed to allocate the network(s), not rescheduling.
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] Traceback (most recent call last):
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/compute/manager.py", line 1926, in _do_build_and_run_instance
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     filter_properties)
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]   File "/opt/stack/nova/nova/compute/manager.py", line 2102, in _build_and_run_instance
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]     reason=msg)
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab] BuildAbortException: Build of instance 7a3063f5-43a7-4d25-b23e-335a2a3274ab aborted: Failed to allocate the network(s), not rescheduling.
2016-06-29 06:01:04.593 1265 ERROR nova.compute.manager [instance: 7a3063f5-43a7-4d25-b23e-335a2a3274ab]
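
A plausible illustration of the regex miss (the pattern below is
hypothetical, not the exact one the agent uses): a device-name
character class that omits "_" silently fails to match names like
"p2p1_0", so the VF lookup finds nothing:

import re

# Hypothetical device-name pattern lacking "_" in its character class.
pattern = re.compile(r"^[a-zA-Z0-9\-]+$")
print(bool(pattern.match("p2p1")))    # True
print(bool(pattern.match("p2p1_0")))  # False -- underscore rejected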

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: sriov-pci-pt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597208

Title:
  Failed to create an instance using macvtap because of "_" in the
  interface vf_name (regex miss)

Status in neutron:
  New

Bug description:
  Steps to reproduce:

  1) neutron port-create --name port1 --binding:vnic_type=macvtap private
  2) nova boot --flavor 2 --image  --nic port-id= vm_1

  Interface Name on Compute = "p2p1_0"

  From n-cpu.log:

  2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager 
[req-a4694341-f7e0-4e7b-b98e-5f640abbf950 admin admin] [instance: 
7a3063f5-43a7-4d25-b23e-335a2a3274ab] Failed to allocate network(s) 
2016-06-29 06:01:04.592 1265 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1597196] [NEW] API Change Tutorial doc code needs updating

2016-06-29 Thread guoshan
Public bug reported:

A code refactor has made the guide inaccurate: the code shown in the
tutorial is out of date.

For example:
role_create = {
'properties': {
'name': parameter_types.name,
'description': parameter_types.description
}
...
}

role_update = {
'properties': {
'name': parameter_types.name,
'description': parameter_types.description
}
...
}

The tutorial should be updated to match the current code.
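
For context, keystone's validation schemas are plain JSON-schema-style
dicts; a complete (hypothetical) version of the create schema might look
like this, with the 'required' and 'additionalProperties' keys standing
in for the elided '...' above:

# Minimal stand-ins for keystone.common.validation.parameter_types;
# the exact definitions here are assumptions for illustration.
name = {'type': 'string', 'minLength': 1, 'maxLength': 255}
description = {'type': 'string'}

role_create = {
    'type': 'object',
    'properties': {'name': name, 'description': description},
    'required': ['name'],
    'additionalProperties': True,
}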

** Affects: keystone
 Importance: Undecided
 Assignee: guoshan (guoshan)
 Status: New


** Tags: documentation low-hanging-fruit

** Tags added: documentation

** Tags added: low-hanging-fruit

** Changed in: keystone
 Assignee: (unassigned) => guoshan (guoshan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597196

Title:
  API Change Tutorial doc code needs updating

Status in OpenStack Identity (keystone):
  New

Bug description:
  A code refactor has made the guide inaccurate: the code shown in the
  tutorial is out of date.

  For example:
  role_create = {
  'properties': {
  'name': parameter_types.name,
  'description': parameter_types.description
  }
  ...
  }

  role_update = {
  'properties': {
  'name': parameter_types.name,
  'description': parameter_types.description
  }
  ...
  }

  The tutorial should be updated to match the current code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596869] Re: APIv3 compatibility broken in Mitaka and Liberty

2016-06-29 Thread A.Ojea
I see my mistake now: I assumed that in Liberty "default" was the domain
name, but it is actually the ID, while in Mitaka, per the docs, "default"
is the domain name and the ID is a randomly generated UUID.

Apologies for any inconvenience, and thanks for your help @stevemar and
@guang-yee.



** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1596869

Title:
  APIv3 compatibility broken in Mitaka and Liberty

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Current API documentation [1] uses the fields   "domain": { "id":
  "default" }, to select a domain.

  This call works in Liberty as you can see in the following snippet:

  curl -i   -H "Content-Type: application/json"   -d '
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {
    "name": "admin",
    "domain": { "id": "default" },
    "password": "admin"
  }
    }
  },
  "scope": {
    "project": {
  "name": "admin",
  "domain": { "id": "default" }
    }
  }
    }
  }'   http://172.17.0.3:5000/v3/auth/tokens ; echo
  HTTP/1.1 201 Created
  X-Subject-Token: 8e861d59fb1847a388b27ab7150f2d15
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  Content-Type: application/json
  Content-Length: 2794
  X-Openstack-Request-Id: req-2fcb81ac-4adf-4d0d-85f9-41d355c0606d
  Date: Tue, 28 Jun 2016 08:59:42 GMT

  {"token": {"methods": ["password"], "roles": [{"id":
  "b1abb292e4af4ead9a1b62b4a6e39ba4", "name": "__member__"}, {"id":
  "f071d23c5131434e8823101f3b8e33db", "name": "admin"}], "expires_at":
  "2016-06-28T09:59:42.646127Z", "project": {"domain": {"id": "default",
  "name": "Default"}, "id": "890fc0394fe34024b62aab12fb335960", "name":
  "admin"}, "catalog": [{"..."}], "extras": {}, "user": {"domain":
  {"id": "default", "name": "Default"}, "id":
  "d1b7876ff28e4db29296797296daecfe", "name": "admin"}, "audit_ids":
  ["7p_bhw8tTvqAOjKRpkHE2Q"], "issued_at":
  "2016-06-28T08:59:42.646167Z"}}

  but it's turned out that in mitaka it fails if you use the id field
  with the name of the domain:

  curl -i   -H "Content-Type: application/json"   -d '
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {
    "name": "admin",
    "domain": { "id": "default" },
    "password": "openstack"
  }
    }
  },
  "scope": {
    "project": {
  "name": "admin",
  "domain": { "id": "default" }
    }
  }
    }
  }'   http://localhost:5000/v3/auth/tokens ; echo
  HTTP/1.1 401 Unauthorized
  Date: Tue, 28 Jun 2016 09:01:04 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-4898044b-25d4-4b9d-96c4-d823c0107cb0
  WWW-Authenticate: Keystone uri="http://localhost:5000"
  Content-Length: 114
  Content-Type: application/json

  {"error": {"message": "The request you have made requires
  authentication.", "code": 401, "title": "Unauthorized"}}

  in order to work you need to use name instead id:

  curl -i   -H "Content-Type: application/json"   -d '
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {
    "name": "admin",
    "domain": { "name": "default" },
    "password": "openstack"
  }
    }
  },
  "scope": {
    "project": {
  "name": "admin",
  "domain": { "name": "default" }
    }
  }
    }
  }'   http://localhost:5000/v3/auth/tokens ; echo
  HTTP/1.1 201 Created
  Date: Tue, 28 Jun 2016 09:01:53 GMT
  Server: Apache/2.4.7 (Ubuntu)
  X-Subject-Token: 0c293d9ceeba4a9f8c1a9edba99a1b11
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-1a414584-472f-4b87-9981-a838e3df6f4a
  Content-Length: 4155
  Content-Type: application/json

  {"token": {"methods": ["password"], "roles": [{"id":
  "444fc66b35834eafb3936dca445b56de", "name": "admin"}], "expires_at":
  "2016-06-28T10:01:53.680623Z", "project": {"domain": {"id":
  "0a686f9a064c46eda176a8670d2af12e", "name": "default"}, "id":
  "7c34e27bfb53415daef0b1696886fec5", "name": "admin"}, "catalog":
  [{"...}], "user": {"domain": {"id":
  "0a686f9a064c46eda176a8670d2af12e", "name": "default"}, "id":
  "bcc79501b12948d1b48540bea231b89c", "name": "admin"}, "audit_ids":
  ["U-uBxUKqStWW557xSCmgKA"], "issued_at":
  "2016-06-28T09:01:53.680711Z"}}

  breaking all the compatibility

  [1]
  http://docs.openstack.org/developer/keystone/api_curl_examples.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1596869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe