[Yahoo-eng-team] [Bug 1476123] [NEW] serial console does not honor webroot setting

2015-07-20 Thread Matthias Runge
Public bug reported:

Steps to Reproduce:
===
1. install openstack-nova-serialproxy
2. In /etc/openstack-dashboard/local_settings set CONSOLE_TYPE = "SERIAL"
3. On the compute node edit /etc/nova/nova.conf and set
[serial_console]
 enabled=True
 base_url=ws://10.35.64.150:6083/
 listen=0.0.0.0
4. Restart services
5. Restart httpd
6. launch instance.
6. launch instance.

Actual results:
===
The requested URL
/project/instances/2dd47f65-4563-4281-af96-b88ba50e9a25/serial was not found on
this server.

Expected results:
===
serial console opened successfully

Additional info:

Serial console opened successfully via CLI
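
Presumably the CLI check was along these lines (instance UUID taken from
the 404 URL above; nova get-serial-console asks nova directly and does
not go through Horizon's URL routing):

  nova get-serial-console 2dd47f65-4563-4281-af96-b88ba50e9a25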

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476123

Title:
  serial console does not honor webroot setting

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to Reproduce:
  ===
  1. install openstack-nova-serialproxy
  2. In /etc/openstack-dashboard/local_settings set CONSOLE_TYPE = "SERIAL"
  3. On compute node edit /etc/nova/nova.conf set 
  [serial_console]  
   enabled=True
   base_url=ws://10.35.64.150:6083/
   listen=0.0.0.0
  4. Restart services
  5. Restart httpd
  6. launch instance.

  Actual results:
  ===
  The requested URL
  /project/instances/2dd47f65-4563-4281-af96-b88ba50e9a25/serial was not found
  on this server.

  Expected results:
  ===
  serial console opened successfully

  Additional info:
  
  Serial console opened successfully via CLI

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476097] [NEW] [fwaas]Support fwaas to control east-west traffic in dvr router

2015-07-20 Thread lee jian
Public bug reported:

When FWaaS is enabled with a DVR router, the firewall rules are only
added to the snat-ROUTER_ID namespace on the controller node and the
floating IP namespaces on the compute nodes. As a result, only
north-south traffic can be controlled by FWaaS; east-west traffic, which
flows from one subnet to another, is outside FWaaS' control.
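
A quick way to observe this (a sketch; the router ID is a placeholder):

  # on the controller node: FWaaS chains show up here
  ip netns exec snat-<router-id> iptables -L -t filter -vn
  # on a compute node: the qrouter namespace that forwards east-west
  # traffic between subnets carries no FWaaS chains
  ip netns exec qrouter-<router-id> iptables -L -t filter -vn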

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476097

Title:
  [fwaas]Support fwaas to control east-west traffic in dvr router

Status in neutron:
  New

Bug description:
  When FWaaS is enabled with a DVR router, the firewall rules are only
  added to the snat-ROUTER_ID namespace on the controller node and the
  floating IP namespaces on the compute nodes. As a result, only
  north-south traffic can be controlled by FWaaS; east-west traffic,
  which flows from one subnet to another, is outside FWaaS' control.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476097/+subscriptions



[Yahoo-eng-team] [Bug 1476114] [NEW] Launch instance failed using instances' snapshot created volume

2015-07-20 Thread Zhenyu Zheng
Public bug reported:

Launching an instance fails when using a volume that is created from a
snapshot of a volume-backed instance.

How to reproduce:

Step 1:
Create a volume-backed instance.

root@zheng-dev1:/var/log/nova# nova boot --flavor 1 --boot-volume daaddb77-4257-4ccd-86f2-220b31a0ce9b --nic net-id=8744ee96-7690-43bb-89b4-fcac805557bc test1

root@zheng-dev1:/var/log/nova# nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                       |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| ef3c6074-4d38-4d7b-8d93-d0ace58d3a6a | test1 | ACTIVE | -          | Running     | public=2001:db8::6, 172.24.4.5 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+

Step 2:
Create a snapshot of this instance using nova image-create; this creates
an image in Glance.

root@zheng-dev1:/var/log/nova# nova image-create ef3c6074-4d38-4d7b-8d93-d0ace58d3a6a test-image
root@zheng-dev1:/var/log/nova# glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
| 7bdff9a3-d051-4e75-bcd3-de69dbffe063 | cirros-0.3.4-x86_64-uec         | ami         | ami              | 25165824 | active |
| 2af2dce2-f778-4d73-b827-5281741fc1cf | cirros-0.3.4-x86_64-uec-kernel  | aki         | aki              | 4979632  | active |
| 60ea7020-fcc1-4535-af5e-0e894a01a44a | cirros-0.3.4-x86_64-uec-ramdisk | ari         | ari              | 3740163  | active |
| ce7b2d17-196a-4871-bc1b-9dcb184863be | test-image                      |             |                  |          | active |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+

Step 3:
Create a new volume using the previously created image.

root@zheng-dev1:/var/log/nova# cinder create --image-id ce7b2d17-196a-4871-bc1b-9dcb184863be --name test-volume 1
+---------------------------------------+--------------------------------------+
| Property                              | Value                                |
+---------------------------------------+--------------------------------------+
| attachments                           | []                                   |
| availability_zone                     | nova                                 |
| bootable                              | false                                |
| consistencygroup_id                   | None                                 |
| created_at                            | 2015-07-20T06:44:41.00               |
| description                           | None                                 |
| encrypted                             | False                                |
| id                                    | cc21dc7d-aa4b-4e24-8f11-8b916c5d6347 |
| metadata                              | {}                                   |
| multiattach                           | False                                |
| name                                  | test-volume                          |
| os-vol-host-attr:host                 | None                                 |
| os-vol-mig-status-attr:migstat        | None                                 |
| os-vol-mig-status-attr:name_id        | None                                 |
| os-vol-tenant-attr:tenant_id          | b8112a8d8227490eba99419b8a8c2555     |
| os-volume-replication:driver_data     | None                                 |
| os-volume-replication:extended_status | None                                 |
| replication_status                    | disabled                             |
| size                                  | 1                                    |
| snapshot_id                           | None                                 |
| source_volid                          | None                                 |
| status                                | creating                             |
| user_id                               | ed64bccd0227444fa02dbd7695769a7d     |
| volume_type                           | lvmdriver-1                          |
+---------------------------------------+--------------------------------------+
root@zheng-dev1:/var/log/nova# cinder list
+--+---+-+--+-+--+--+
|  ID  |   Status  |   Name  | Size | 
Volume 
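
Given the bug title, the failing step is presumably booting from the new
volume, e.g., as a sketch reusing the IDs above:

  nova boot --flavor 1 --boot-volume cc21dc7d-aa4b-4e24-8f11-8b916c5d6347 --nic net-id=8744ee96-7690-43bb-89b4-fcac805557bc test2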

[Yahoo-eng-team] [Bug 1476145] [NEW] the port for floating IP should not include IPv6 address

2015-07-20 Thread shihanzhang
Public bug reported:

Now if we create a floating IP, neutron will create an internal port for
this floating IP, used purely for internal system and admin purposes
when managing floating IPs. But if the external network has both an IPv4
subnet and an IPv6 subnet, the port for the floating IP will have two
IPs: one IPv4, one IPv6.
reproduce steps:
1. create an external network
2. create an IPv4 subnet and an IPv6 subnet for this network
3. create a floating IP without the parameter '--floating-ip-address'

you will find that the port for this floating IP has two IPs
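
A CLI sketch of the reproduction (names and CIDRs are illustrative):

  neutron net-create ext-net --router:external=True
  neutron subnet-create ext-net 203.0.113.0/24
  neutron subnet-create --ip-version 6 ext-net 2001:db8::/64
  neutron floatingip-create ext-net
  # the port backing the floating IP now shows two fixed IPs
  neutron port-show <port-id-from-floatingip-create-output>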

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476145

Title:
  the port for floating IP should not include IPv6 address

Status in neutron:
  New

Bug description:
  Now if we create a floating IP, neutron will create an internal port
  for this floating IP, used purely for internal system and admin
  purposes when managing floating IPs. But if the external network has
  both an IPv4 subnet and an IPv6 subnet, the port for the floating IP
  will have two IPs: one IPv4, one IPv6.
  reproduce steps:
  1. create an external network
  2. create an IPv4 subnet and an IPv6 subnet for this network
  3. create a floating IP without the parameter '--floating-ip-address'

  you will find that the port for this floating IP has two IPs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476145/+subscriptions



[Yahoo-eng-team] [Bug 1474284] Re: Adding users from different domain to a group

2015-07-20 Thread Bajarang Jadhav
@Steve Martinelli (stevemar), @Henry Nash (henry-nash), @jiaxi


In the UI, I found that users from one domain are not allowed to be part
of a group in another domain.

Steps followed:
1. Created 2 domains, domain1 and domain2.
2. Created users, user1 in domain1 and user2 in domain2.
3. Created groups, group1 in domain1 and group2 in domain2.
4. In the UI, tried to add user1 to group2. When "Add users" is clicked
in the Group Management page of group2, it shows only user2. Have
attached a screenshot of the same.
5. Same behavior is observed while adding user2 to group1.

As per the discussion above, users from one domain are allowed to be
part of a group in another domain. In the CLI, the same behavior is
observed; however, in the UI the behavior is different, as described in
the above steps.

Can you please let me know if UI is behaving as designed?
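
One way to cross-check the API side (a sketch; the group ID is a placeholder):

  # v3 API: list the users in group2 -- a user added cross-domain via
  # the CLI should appear here even though the UI never offers them
  curl -s -H "X-Auth-Token: $token" https://url/v3/groups/<group2-id>/users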




** Attachment added: "7.adding_users_group2_in_domain2.png"
   
https://bugs.launchpad.net/keystone/+bug/1474284/+attachment/4431484/+files/7.adding_users_group2_in_domain2.png

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474284

Title:
  Adding users from different domain to a group

Status in Keystone:
  New

Bug description:
  I have created two domains, and I have created users in both domains.
  I created a group in the first domain, tried adding users from the
  other domain to this group, and it succeeded.

  But according to this page https://wiki.openstack.org/wiki/Domains,
  it should not be allowed.

  Here are the steps to reproduce this :-
  created new domain Domain9


  
  curl -i -k -X POST https://url/v3/domains -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @domain.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:48:15 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 214
  Content-Type: application/json

  {"domain": {"links": {"self":
  "https://url/v3/domains/dc1d36c037ac4e47b3b21424f1a13273"}, "enabled":
  true, "description": "Description.", "name": "Domain9", "id":
  "dc1d36c037ac4e47b3b21424f1a13273"}}



  
  created user fd22 in domain Domain9


   curl -i -k -X POST https://url/v3/users -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @user.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:49:27 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 269
  Content-Type: application/json

  {"user": {"links": {"self":
  "https://url/v3/users/533979e9b80645799028c51ccec55cce"},
  "description": "Sample keystone test user", "name": "fd22", "enabled":
  true, "id": "533979e9b80645799028c51ccec55cce", "domain_id":
  "dc1d36c037ac4e47b3b21424f1a13273"}}

  
  created user fd23 in default domain

  
  vi user.json
  provo-sand:~/bajarang # curl -i -k -X POST https://url/v3/users -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @user.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:50:56 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 244
  Content-Type: application/json

  {"user": {"links": {"self":
  "https://url/v3/users/8a43e5f3facb4fc2985a18a40de2046e"},
  "description": "Sample keystone test user", "name": "fd23", "enabled":
  true, "id": "8a43e5f3facb4fc2985a18a40de2046e", "domain_id":
  "default"}}


  created group DomainGroup10 in default domain


  curl -i -k -X POST https://url/v3/groups -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @newgroup.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:52:49 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 225
  Content-Type: application/json

  {"group": {"domain_id": "default", "description": "Description.",
  "id": "0b72f1dd6f514adb989a752b9a72e005", "links": {"self":
  "url/v3/groups/0b72f1dd6f514adb989a752b9a72e005"}, "name":
  "DomainGroup10"}}


  Added user 'fd22' from  Domain9 to DomainGroup10

  
  curl -i -k -X PUT https://url/v3/groups/0b72f1dd6f514adb989a752b9a72e005/users/533979e9b80645799028c51ccec55cce -H "Content-Type: application/json" -H "X-Auth-Token: $token"
  HTTP/1.1 204 No Content
  Date: Fri, 10 Jul 2015 09:53:17 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 0

  Added user 'fd23'  from Default  to DomainGroup10

   curl -i -k -X PUT https://url/v3/groups/0b72f1dd6f514adb989a752b9a72e005/users/8a43e5f3facb4fc2985a18a40de2046e -H "Content-Type: application/json" -H "X-Auth-Token: $token"
  HTTP/1.1 204 No Content
  Date: Fri, 10 Jul 2015 09:54:20 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1474284/+subscriptions



[Yahoo-eng-team] [Bug 1476213] [NEW] Adding users from different domain to a group

2015-07-20 Thread Bajarang Jadhav
Public bug reported:

In Horizon, I found that users from one domain are not allowed to be
part of a group in another domain.

Steps followed:
1. Created 2 domains, domain1 and domain2.
2. Created users, user1 in domain1 and user2 in domain2.
3. Created groups, group1 in domain1 and group2 in domain2.
4. In the UI, tried to add user1 to group2. When "Add users" is clicked
in the Group Management page of group2, it shows only user2. Have
attached a screenshot of the same.
5. Same behavior is observed while adding user2 to group1.

As per the discussion above, users from one domain are allowed to be
part of a group in another domain. In the CLI, the same behavior is
observed; however, in the UI the behavior is different, as described in
the above steps.

Can you please let me know if UI is behaving as designed?

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "7.adding_users_group2_in_domain2.png"
   
https://bugs.launchpad.net/bugs/1476213/+attachment/4431535/+files/7.adding_users_group2_in_domain2.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476213

Title:
  Adding users from different domain to a group

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Horizon, I found that users from one domain are not allowed to be
  part of a group in another domain.

  Steps followed:
  1. Created 2 domains, domain1 and domain2.
  2. Created users, user1 in domain1 and user2 in domain2.
  3. Created groups, group1 in domain1 and group2 in domain2.
  4. In the UI, tried to add user1 to group2. When "Add users" is
  clicked in the Group Management page of group2, it shows only user2.
  Have attached a screenshot of the same.
  5. Same behavior is observed while adding user2 to group1.

  As per the discussion above, users from one domain are allowed to be
  part of a group in another domain. In the CLI, the same behavior is
  observed; however, in the UI the behavior is different, as described
  in the above steps.

  Can you please let me know if UI is behaving as designed?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476213/+subscriptions



[Yahoo-eng-team] [Bug 1472347] Re: With multiple Neutron api/rpc workers enabled, intermittent failure deleting dhcp_port

2015-07-20 Thread Danny Choi
I was running a private RPM for cisco-networking neutron.

I could not reproduce the problem with stable/juno nor stable/kilo.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472347

Title:
  With multiple Neutron api/rpc workers enabled, intermittent failure
  deleting dhcp_port

Status in neutron:
  Invalid

Bug description:
  Neutron multiple workers are enabled as follows in neutron.conf:
 - api_workers=3
 - rpc_workers=3

  The following were configured:
 - 20 tenants
 - Each tenant had 5 tenant networks
 - For each network, one VM at each Compute nodes (2) for a total of 10 VMs
 - Total 100 VLANs/200 VMs
   
  A script which did the following at tenant-1:
 - Delete all 10 VMs
 - For each network, delete its router interface
 - Delete the subnet
 - Delete the network
 - Re-create the network, subnet and router interface
 - For each network, launch 2 VMs (one at each Compute node)
 - Repeat steps 1 – 6

  Intermittently the following delete port error is encountered:

  2015-07-06 16:17:51.903 43190 DEBUG neutron.plugins.ml2.plugin 
[req-f18af2a1-0047-4301-9fa1-01632fa5b2b8 None] Calling delete_port for 
fcf17b5d-235c-466b-b54b-ce80acca7359 owned by network:dhcp delete_p
  ort /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:1076
  2015-07-06 16:17:51.904 43216 ERROR oslo.db.sqlalchemy.exc_filters 
[req-cbb23fa8-5043-405c-a569-fcfdc912555a ] DB exception wrapped.
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 781, in 
fetchall
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters l = 
self.process_rows(self._fetchall_impl())
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 750, in 
_fetchall_impl
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters 
self._non_result()
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 755, in 
_non_result
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters This 
result object does not return rows. 
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters 
ResourceClosedError: This result object does not return rows. It has been 
closed automatically.
  2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters 
  2015-07-06 16:17:51.906 43216 DEBUG neutron.openstack.common.lockutils 
[req-cbb23fa8-5043-405c-a569-fcfdc912555a ] Releasing semaphore db-access 
lock /usr/lib/python2.7/site-packages/neutron/openstack
  /common/lockutils.py:238
  2015-07-06 16:17:51.906 43216 ERROR neutron.api.v2.resource 
[req-cbb23fa8-5043-405c-a569-fcfdc912555a None] delete failed
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 81, in 
resource
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 476, in delete
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py, line 680, in 
delete_network
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource continue
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py, line 
82, in __exit__
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py, line 640, in 
delete_network
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
with_lockmode('update').all())
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2300, in all
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource return 
list(self)
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py, line 66, in 
instances
  2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1476264] [NEW] Cannot delete resources in remote services once project is deleted

2015-07-20 Thread Adam Young
Public bug reported:

Steps to reproduce:

Create a project

Assign a non-admin role to a user

As the non-admin user, go to Glance and create an image

As admin, delete the project

As the non-admin, the image can no longer be deleted

If policy requires a scoped token, even admin cannot delete the image.

This has the effect of forcing "admin somewhere is admin everywhere".
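
A sketch of the flow with the CLI (names are placeholders):

  openstack project create doomed-project
  openstack role add --project doomed-project --user demo _member_
  # as demo, scoped to doomed-project:
  glance image-create --name orphan-image < image.img
  # as admin:
  openstack project delete doomed-project
  # as demo there is now no way to get a token scoped to the deleted
  # project, so the image cannot be deleted through normal policy:
  glance image-delete orphan-image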

** Affects: keystone
 Importance: High
 Assignee: Adam Young (ayoung)
 Status: New

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
 Assignee: (unassigned) => Adam Young (ayoung)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1476264

Title:
  Cannot delete resources in remote services once project is deleted

Status in Keystone:
  New

Bug description:
  Steps to reproduce:

  Create a project

  Assign a non-admin role to a user

  As the non-admin user, go to Glance and create an image

  As admin, delete the project

  As the non-admin, the image can no longer be deleted

  If policy requires a scoped token, even admin cannot delete the
  image.

  This has the effect of forcing "admin somewhere is admin everywhere".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1476264/+subscriptions



[Yahoo-eng-team] [Bug 1476252] [NEW] lbaas table actions are not implemented well

2015-07-20 Thread Eric Peterson
Public bug reported:

The lbaas table actions all use a post() call with a lot of if/then/else
blocks checking the actions.  This makes extending this page very
tricky, and does not follow documentation and examples / best practices
of how to implement table actions in a more flexible / extensible way.
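
For contrast, a minimal sketch of the documented class-based pattern
(class names and the api.lbaas helper here are illustrative, not the
actual fix):

  from horizon import tables
  from openstack_dashboard import api

  class DeletePool(tables.DeleteAction):
      # one class per action, instead of a branch inside post()
      @staticmethod
      def action_present(count):
          return "Delete Pool"

      @staticmethod
      def action_past(count):
          return "Deleted Pool"

      def delete(self, request, obj_id):
          api.lbaas.pool_delete(request, obj_id)

  class PoolsTable(tables.DataTable):
      name = tables.Column("name", verbose_name="Name")

      class Meta(object):
          name = "poolstable"
          row_actions = (DeletePool,)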

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476252

Title:
  lbaas table actions are not implemented well

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The lbaas table actions all use a post() call with a lot of
  if/then/else blocks checking the actions.  This makes extending this
  page very tricky, and does not follow documentation and examples /
  best practices of how to implement table actions in a more flexible /
  extensible way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476252/+subscriptions



[Yahoo-eng-team] [Bug 1476253] [NEW] Create a member with unexpected body returns 500

2015-07-20 Thread Niall Bunting
Public bug reported:

Overview:
When creating a member, if the user sends a number as the body or some
other malformed string, the server falls over with a type error.

How to produce:
curl -X POST http://your-ip:9292/v2/images/your-image-id/members -H "X-Auth-Token: your-auth-token" -d '123'

Rather than a number, many inputs can cause the type error to be
thrown, such as '[]' or '["some text"]'.
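
For reference, a well-formed body for this call looks like this (the
member value is the tenant ID to share with, shown as a placeholder):

  curl -X POST http://your-ip:9292/v2/images/your-image-id/members -H "X-Auth-Token: your-auth-token" -H "Content-Type: application/json" -d '{"member": "<tenant-id>"}'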

Actual:
500

Expected:
400 with body invalid message.

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Niall Bunting (niall-bunting)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476253

Title:
  Create a member with unexpected body returns 500

Status in Glance:
  New

Bug description:
  Overview:
  When creating a member, if the user sends a number as the body or
  some other malformed string, the server falls over with a type error.

  How to produce:
  curl -X POST http://your-ip:9292/v2/images/your-image-id/members -H "X-Auth-Token: your-auth-token" -d '123'

  Rather than a number, many inputs can cause the type error to be
  thrown, such as '[]' or '["some text"]'.

  Actual:
  500

  Expected:
  400 with body invalid message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1476253/+subscriptions



[Yahoo-eng-team] [Bug 1476347] [NEW] LDAP Resource backend should be deprecated

2015-07-20 Thread Samuel de Medeiros Queiroz
Public bug reported:

Change 8ff5520713251ec247eeeb783f140d757cbdceb0 deprecated LDAP
Assignment backend, which meant our current Assignment + Resource
backends.

The resource backend must be explicitly deprecated as of Kilo and
removed in Mitaka.

** Affects: keystone
 Importance: Medium
 Assignee: Samuel de Medeiros Queiroz (samueldmq)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1476347

Title:
  LDAP Resource backend should be deprecated

Status in Keystone:
  In Progress

Bug description:
  Change 8ff5520713251ec247eeeb783f140d757cbdceb0 deprecated LDAP
  Assignment backend, which meant our current Assignment + Resource
  backends.

  The resource backend must be explicitly deprecated as of Kilo and
  removed in Mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1476347/+subscriptions



[Yahoo-eng-team] [Bug 1476332] [NEW] update local.conf lbaas config

2015-07-20 Thread David Lyle
Public bug reported:

lbaas is now a devstack plugin and cannot be enabled via "enable_service
q-lbaas".

The local.conf sample in horizon needs to be updated.
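
A sketch of the replacement stanza (repo URL as of this writing; branch
may vary):

  [[local|localrc]]
  enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas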

** Affects: horizon
 Importance: Medium
 Assignee: David Lyle (david-lyle)
 Status: In Progress


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476332

Title:
  update local.conf lbaas config

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  lbaas is now a plugin and cannot be enabled via enable_service q-lbaas

  local.conf sample in horizon needs to be updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476332/+subscriptions



[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-07-20 Thread Doug Hellmann
** Changed in: os-brick
   Status: Fix Committed => Fix Released

** Changed in: os-brick
 Milestone: None => 0.3.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in Ceilometer:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in congress:
  Fix Committed
Status in Designate:
  Fix Committed
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Committed
Status in Keystone:
  Fix Committed
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Committed
Status in murano:
  Fix Committed
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  In Progress
Status in os-brick:
  Fix Released
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Committed
Status in OpenStack Search (Searchlight):
  Fix Committed
Status in Trove:
  Fix Committed

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.
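
  The port is mostly mechanical, e.g. (module path illustrative; the
  incubator copy lived under each project's openstack/common tree):

    # before: oslo-incubator copy carried in-tree
    from neutron.openstack.common import service
    # after: the graduated library
    from oslo_service import service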

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1466851/+subscriptions



[Yahoo-eng-team] [Bug 1476329] [NEW] v2 tokens validated on the v3 API are missing timezones

2015-07-20 Thread Dolph Mathews
Public bug reported:

v3 tokens contain the issued_at and expires_at timestamps for each
token. If a token is created on the v2 API and then validated on the v3
API, this timezone information is missing (the 'Z' at the end of the
timestamp), and thus cannot be validated as ISO 8601 extended format
timestamps.

This patch contains two FIXMEs which, if uncommented, will reproduce
this bug:

  https://review.openstack.org/#/c/203250/

This appears to affect all token formats.
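
A small illustration (not Keystone code) of why the missing 'Z' breaks
strict ISO 8601 parsing:

  from datetime import datetime
  fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
  datetime.strptime("2015-07-20T15:17:43.000000Z", fmt)  # parses
  datetime.strptime("2015-07-20T15:17:43.000000", fmt)   # ValueError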

** Affects: keystone
 Importance: Medium
 Status: Triaged

** Description changed:

  v3 tokens contain the issued_at and expires_at timestamps for each
  token. If a token is created on the v2 API and then validated on the v3
  API, this timezone information is missing (the 'Z' at the end of the
  timestamp), and thus cannot be validated as ISO 8601 extended format
  timestamps.
+ 
+ This patch contains two FIXMEs which, if uncommented, will reproduce
+ this bug:
+ 
+   https://review.openstack.org/#/c/203250/

** Description changed:

  v3 tokens contain the issued_at and expires_at timestamps for each
  token. If a token is created on the v2 API and then validated on the v3
  API, this timezone information is missing (the 'Z' at the end of the
  timestamp), and thus cannot be validated as ISO 8601 extended format
  timestamps.
  
  This patch contains two FIXMEs which, if uncommented, will reproduce
  this bug:
  
-   https://review.openstack.org/#/c/203250/
+   https://review.openstack.org/#/c/203250/
+ 
+ This appears to affect all token formats.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1476329

Title:
  v2 tokens validated on the v3 API are missing timezones

Status in Keystone:
  Triaged

Bug description:
  v3 tokens contain the issued_at and expires_at timestamps for each
  token. If a token is created on the v2 API and then validated on the
  v3 API, this timezone information is missing (the 'Z' at the end of
  the timestamp), and thus cannot be validated as ISO 8601 extended
  format timestamps.

  This patch contains two FIXMEs which, if uncommented, will reproduce
  this bug:

    https://review.openstack.org/#/c/203250/

  This appears to affect all token formats.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1476329/+subscriptions



[Yahoo-eng-team] [Bug 1476336] [NEW] Invalid parameters in list image requests return inconsistent responses

2015-07-20 Thread Anna Eilering
Public bug reported:

Using glance v2, most recent commit is
6dc5477a12b9b904332ac6fe7932abbc7a0275a7.

I see that GET (list image) requests with different invalid parameters
return different response codes. I would expect that invalid parameters
would be treated consistently. In other words I would expect invalid
parameters to always return a 400 or ignore invalid parameters and
return a 200.

Examples:

An invalid parameter of 'id=invalid' returns a 200

REQUEST SENT

request method..: GET
request url.: ENDPOINT/v2/images
request params..: id=invalid
request headers.: {'Accept-Encoding': 'gzip, deflate', 'Accept': 
'application/json', 'User-Agent': 'python-requests/2.7.0 CPython/2.7.8 
Linux/2.6.32-431.29.2.el6.x86_64', 'Connection': 'keep-alive', 'X-Auth-Token': 
u'TOKEN', 'Content-Type': 'application/json'}
request body: None

-----------------
RESPONSE RECEIVED
-----------------
response status..: Response [200]
response time: 0.236920833588
response headers.: {'content-length': '80', 'via': '1.1 Repose (Repose/2.12)', 
'server': 'Jetty(8.0.y.z-SNAPSHOT)', 'date': 'Mon, 20 Jul 2015 15:17:43 GMT', 
'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-req-093b3157-451e-4266-8297-fa42c0605c2f'}
response body: {images: [], schema: /v2/schemas/images, first: 
/v2/images?id=invalid}
-----------------


An invalid parameter of 'limit=invalid' returns a 400


REQUEST SENT

request method..: GET
request url.:ENDPOINT/v2/images
request params..: limit=invalid
request headers.: {'Accept-Encoding': 'gzip, deflate', 'Accept': 
'application/json', 'User-Agent': 'python-requests/2.7.0 CPython/2.7.8 
Linux/2.6.32-431.29.2.el6.x86_64', 'Connection': 'keep-alive', 'X-Auth-Token': 
u'TOKEN', 'Content-Type': 'application/json'}
request body: None

-----------------
RESPONSE RECEIVED
-----------------
response status..: Response [400]
response time: 0.143214941025
response headers.: {'content-length': '52', 'via': '1.1 Repose (Repose/2.12)', 
'server': 'Jetty(8.0.y.z-SNAPSHOT)', 'date': 'Mon, 20 Jul 2015 15:17:43 GMT', 
'content-type': 'text/plain;charset=UTF-8', 'x-openstack-request-id': 
'req-req-5067dcf8-a765-4336-88d5-3a85a3d50910'}
response body: 400 Bad Request

limit param must be an integer


Here are the different invalid params I have attempted and their
results:

Returns a 200:
request params..: auto_disk_config=invalid
request params..: checksum=invalid
request params..: container_format=invalid
request params..: created_at=invalid
request params..: disk_format=invalid
request params..: id=invalid
request params..: image_type=invalid
request params..: min_disk=invalid
request params..: min_ram=invalid
request params..: name=invalid
request params..: os_type=invalid
request params..: owner=invalid
request params..: protected=invalid
request params..: size=invalid
request params..: status=invalid
request params..: tag=invalid
request params..: updated_at=invalid


Returns a 400:
request params..: limit=invalid
request params..: marker=invalid
request params..: member_status=invalid&visibility=invalid
request params..: size_max=invalid
request params..: size_min=invalid
request params..: sort_dir=invalid
request params..: sort_key=invalid
request params..: visibility=invalid
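
A quick way to compare the two behaviors side by side (a sketch;
ENDPOINT and TOKEN are placeholders):

  curl -s -o /dev/null -w '%{http_code}\n' -H "X-Auth-Token: $TOKEN" "$ENDPOINT/v2/images?id=invalid"      # 200
  curl -s -o /dev/null -w '%{http_code}\n' -H "X-Auth-Token: $TOKEN" "$ENDPOINT/v2/images?limit=invalid"   # 400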

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476336

Title:
  Invalid parameters in list image requests return inconsistent
  responses

Status in Glance:
  New

Bug description:
  Using glance v2, most recent commit is
  6dc5477a12b9b904332ac6fe7932abbc7a0275a7.

  I see that GET (list image) requests with different invalid parameters
  return different response codes. I would expect that invalid
  parameters would be treated consistently. In other words I would
  expect invalid parameters to always return a 400 or ignore invalid
  parameters and return a 200.

  Examples:

  An invalid parameter of 'id=invalid' returns a 200
  
  REQUEST SENT
  
  request method..: GET
  request url.: ENDPOINT/v2/images
  request params..: id=invalid
  request headers.: {'Accept-Encoding': 'gzip, deflate', 'Accept': 
'application/json', 'User-Agent': 'python-requests/2.7.0 CPython/2.7.8 
Linux/2.6.32-431.29.2.el6.x86_64', 'Connection': 'keep-alive', 'X-Auth-Token': 
u'TOKEN', 'Content-Type': 'application/json'}
  request body: None

  -----------------
  RESPONSE RECEIVED
  -----------------
  response status..: Response [200]
  response time: 0.236920833588
  response headers.: {'content-length': '80', 'via': '1.1 Repose 
(Repose/2.12)', 'server': 'Jetty(8.0.y.z-SNAPSHOT)', 'date': 'Mon, 20 Jul 2015 
15:17:43 GMT', 'content-type': 'application/json; charset=UTF-8', 

[Yahoo-eng-team] [Bug 1382440] Re: Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device

2015-07-20 Thread Doug Hellmann
** Changed in: os-brick
   Status: Fix Committed => Fix Released

** Changed in: os-brick
 Milestone: None => 0.3.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382440

Title:
  Detaching multipath volume doesn't work properly when using different
  targets with same portal for each multipath device

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  Overview:
  On Icehouse (2014.1.2) with iscsi_use_multipath=true, detaching an
  iSCSI multipath volume doesn't work properly. When we use different
  targets (IQNs) associated with the same portal for different multipath
  devices, all of the targets will be deleted via disconnect_volume().

  This problem is not yet fixed in upstream. However, the attached patch
  fixes this problem.
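
  A sketch of the kind of guard such a patch applies (illustrative, not
  necessarily the actual attached patch): filter the discovered portals
  down to the IQN of the volume being detached before deleting devices:

    # keep only portals whose IQN matches the volume being detached
    ips_iqns = [(ip, iqn) for ip, iqn in ips_iqns
                if iqn == iscsi_properties['target_iqn']]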

  Steps to Reproduce:

  We can easily reproduce this issue without any special storage
  system in the following Steps:

1. configure iscsi_use_multipath=True in nova.conf on compute node.
2. configure volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   in cinder.conf on cinder node.
2. create an instance.
3. create 3 volumes and attach them to the instance.
4. detach one of these volumes.
5. check multipath -ll and iscsiadm --mode session.

  Detail:

  This problem was introduced with the following patch which modified
  attaching and detaching volume operations for different targets
  associated with different portals for the same multipath device.

commit 429ac4dedd617f8c1f7c88dd8ece6b7d2f2accd0
Author: Xing Yang xing.y...@emc.com
Date:   Mon Jan 6 17:27:28 2014 -0500

  Fixed a problem in iSCSI multipath

  We found out that:

   # Do a discovery to find all targets.
   # Targets for multiple paths for the same multipath device
   # may not be the same.
   out = self._run_iscsiadm_bare(['-m',
                                  'discovery',
                                  '-t',
                                  'sendtargets',
                                  '-p',
                                  iscsi_properties['target_portal']],
                                 check_exit_code=[0, 255])[0] or ""

   ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
   ...
   # If no other multipath device attached has the same iqn
   # as the current device
   if not in_use:
       # disconnect if no other multipath devices with same iqn
       self._disconnect_mpath(iscsi_properties, ips_iqns)
       return
   elif multipath_device not in devices:
       # delete the devices associated w/ the unused multipath
       self._delete_mpath(iscsi_properties, multipath_device, ips_iqns)
  When we use different targets (IQNs) associated with the same portal
  for different multipath devices, ips_iqns holds every target on the
  compute node, taken from the result of "iscsiadm -m discovery -t
  sendtargets -p <the same portal>". Then _delete_mpath() deletes all of
  the targets in ips_iqns via /sys/block/sdX/device/delete.

  For example, we create an instance and attach 3 volumes to the
  instance:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
# multipath -ll
330030001 dm-7 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 23:0:0:1 sdd 8:48 active ready running
330010001 dm-5 IET,VIRTUAL-DISK
size=2.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 21:0:0:1 sdb 8:16 active ready running
330020001 dm-6 IET,VIRTUAL-DISK
size=3.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 22:0:0:1 sdc 8:32 active ready running

  Then we detach one of these volumes:

# nova volume-detach 95f959cd-d180-4063-ae03-9d21dbd7cc50 5c526ffa-
  ba88-4fe2-a570-9e35c4880d12

  As a result of detaching the volume, the compute node remains 3 iSCSI sessions
  and the instance fails to access the attached multipath devices:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 

[Yahoo-eng-team] [Bug 1398267] Re: when restart the vpn and l3 agent, the firewall rule apply to all tenants' router.

2015-07-20 Thread Kyle Mestery
I believe this was addressed during Kilo when we refactored FWaaS to
allow FW's to apply per-router.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398267

Title:
  when restart the vpn and l3 agent, the firewall rule apply to all
  tenants' router.

Status in neutron:
  Invalid

Bug description:
  Hi all:
     When the vpn and l3 agents are restarted, the firewall rules are
  applied to all tenants' routers.
     Steps:
     1. Create a network and router in tenants A and B.
     2. Create a firewall in tenant A.
     3. Restart the vpn and l3 agent services.
     4. ip netns exec qrouter-B_router_uuid iptables -L -t filter -vn

  Then I find the firewall rule in chain neutron-l3-agent-FORWARD and
  neutron-vpn-agen-FORWARD.

  So I debugged the code and added some code in
  neutron/services/firewall/agents/l3reference/firewall_l3_agent.py:

      def _process_router_add(self, ri):
          """On router add, get fw with rules from plugin and update driver."""
          LOG.debug(_("Process router add, router_id: '%s'"), ri.router['id'])
          routers = []
          routers.append(ri.router)
          router_info_list = self._get_router_info_list_for_tenant(
              routers,
              ri.router['tenant_id'])
          if router_info_list:
              # Get the firewall with rules
              # for the tenant the router is on.
              ctx = context.Context('', ri.router['tenant_id'])
              fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
              LOG.debug(_("Process router add, fw_list: '%s'"),
                        [fw['id'] for fw in fw_list])
              for fw in fw_list:
  +               if fw['tenant_id'] == ri.router['tenant_id']:
                      self._invoke_driver_for_sync_from_plugin(
                          ctx,
                          router_info_list,
                          fw)

  My neutron version is icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398267/+subscriptions



[Yahoo-eng-team] [Bug 1476383] [NEW] collectstatic and compress are not producing consistent results

2015-07-20 Thread Eric Peterson
Public bug reported:

I have 3 identical machines (host names are different), and I have an
ansible script to deploy horizon.  As part of the script we run
collectstatic and compress by hand (offline compression).

Each machine's static contents looks slightly different.

This causes horizon to fail when behind a load balancer, specifically
when round robin or least conn policies are in place.
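
One common mitigation (a sketch, not necessarily the eventual fix):
build the static tree once and ship the identical result to every node
so the compressed manifests match:

  python manage.py collectstatic --noinput
  python manage.py compress --force
  rsync -a static/ web2:/path/to/horizon/static/
  rsync -a static/ web3:/path/to/horizon/static/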

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476383

Title:
  collectstatic and compress are not producing consistent results

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have 3 identical machines (host names are different), and I have an
  ansible script to deploy horizon.  As part of the script we run
  collectstatic and compress by hand (offline compression).

  Each machine's static contents looks slightly different.

  This causes horizon to fail when behind a load balancer, specifically
  when round robin or least conn policies are in place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476383/+subscriptions



[Yahoo-eng-team] [Bug 1476368] [NEW] FAIL in nova.tests.unit.virt.disk.test_api.APITestCase.test_can_resize_need_fs_type_specified

2015-07-20 Thread Davanum Srinivas (DIMS)
Public bug reported:

oslo.utils 2.0.0 entirely removes the "import oslo.*" namespace, so we
need to switch over. Here's the failure in the python27 tests:

======================================================================
FAIL: 
nova.tests.unit.virt.disk.test_api.APITestCase.test_can_resize_need_fs_type_specified
----------------------------------------------------------------------
Traceback (most recent call last):
testtools.testresult.real._StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

traceback-1: {{{
Traceback (most recent call last):
  File "nova/tests/unit/virt/disk/test_api.py", line 78, in test_can_resize_need_fs_type_specified
    fake_import_fails))
  File "/usr/local/lib/python2.7/site-packages/testtools/testcase.py", line 670, in useFixture
    gather_details(fixture.getDetails(), self.getDetails())
  File "/usr/local/lib/python2.7/site-packages/fixtures/fixture.py", line 170, in getDetails
    result = dict(self._details)
TypeError: 'NoneType' object is not iterable
}}}
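
The fix is the namespace switch, e.g. (illustrative):

  # before: removed in oslo.utils 2.0.0
  from oslo.utils import importutils
  # after
  from oslo_utils import importutils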

** Affects: nova
 Importance: Undecided
 Assignee: Davanum Srinivas (DIMS) (dims-v)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476368

Title:
  FAIL in
  
nova.tests.unit.virt.disk.test_api.APITestCase.test_can_resize_need_fs_type_specified

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  oslo.utils 2.0.0 entirely removes the "import oslo.*" namespace, so
  we need to switch over. Here's the failure in the python27 tests:

  ======================================================================
  FAIL: 
nova.tests.unit.virt.disk.test_api.APITestCase.test_can_resize_need_fs_type_specified
  ----------------------------------------------------------------------
  Traceback (most recent call last):
  testtools.testresult.real._StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  traceback-1: {{{
  Traceback (most recent call last):
    File "nova/tests/unit/virt/disk/test_api.py", line 78, in test_can_resize_need_fs_type_specified
      fake_import_fails))
    File "/usr/local/lib/python2.7/site-packages/testtools/testcase.py", line 670, in useFixture
      gather_details(fixture.getDetails(), self.getDetails())
    File "/usr/local/lib/python2.7/site-packages/fixtures/fixture.py", line 170, in getDetails
      result = dict(self._details)
  TypeError: 'NoneType' object is not iterable
  }}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476368/+subscriptions



[Yahoo-eng-team] [Bug 1381536] Re: ResourceClosedError occurs when neutron API run in parallel

2015-07-20 Thread Kyle Mestery
Marking Invalid per comment #19.

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
 Milestone: liberty-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381536

Title:
  ResourceClosedError occurs when neutron API run in parallel

Status in neutron:
  Invalid

Bug description:
  When the DHCP agent creates a port via neutron and another create-port
  request is received, a ResourceClosedError occurs in sqlalchemy.

  This may be related to bug #1282922 https://bugs.launchpad.net/bugs/1282922.
  That bug concerns the nec plugin, and it is mentioned there that other
  plugins may be affected. This error occurred in the ML2 plugin for
  both create and delete ports.
  Tested using 2014.3 Icehouse.

  ==
  2014-10-15 21:58:59.837 26167 INFO neutron.wsgi [-] (26167) accepted 
('172.16.2.86', 47007)

  2014-10-15 21:58:59.870 26167 INFO neutron.wsgi [req-424a01ca-f52b-
  43a6-8844-d0d3590feb8d None] 172.16.2.86 - - [15/Oct/2014 21:58:59]
  GET /v2.0/networks.json?fields=id&name=testnw2 HTTP/1.1 200 251
  0.031936

  2014-10-15 21:58:59.872 26167 INFO neutron.wsgi [req-424a01ca-f52b-
  43a6-8844-d0d3590feb8d None] (26167) accepted ('172.16.2.86', 47008)

  2014-10-15 21:58:59.950 26167 INFO neutron.wsgi [req-
  7ee742ef-6370-46b3-8f8b-f46ae5d262bc None] 172.16.2.86 - -
  [15/Oct/2014 21:58:59] POST /v2.0/subnets.json HTTP/1.1 201 572
  0.076879

  2014-10-15 21:59:00.074 26167 INFO neutron.wsgi [req-a6ef6c65-811f-
  40d8-9443-b9590809994a None] (26167) accepted ('172.16.2.86', 47010)

  2014-10-15 21:59:00.088 26167 INFO urllib3.connectionpool [-] Starting new 
HTTPS connection (1): 10.68.42.86
  2014-10-15 21:59:00.111 26167 INFO neutron.wsgi 
[req-22a84d34-f454-423d-bb7b-b4c7e2e6e08c None] 172.16.2.86 - - [15/Oct/2014 
21:59:00] GET /v2.0/networks.json?fields=id&name=testnw2 HTTP/1.1 200 251
0.033298

  2014-10-15 21:59:00.113 26167 INFO neutron.wsgi [req-22a84d34-f454
  -423d-bb7b-b4c7e2e6e08c None] (26167) accepted ('172.16.2.86', 47012)

  2014-10-15 21:59:51.165 26167 ERROR neutron.api.v2.resource [-] create failed
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 87, in 
resource
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 448, in create
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py, line 632, in 
create_port
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource result = 
super(Ml2Plugin, self).create_port(context, port)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 1371, 
in create_port
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource ips = 
self._allocate_ips_for_port(context, network, port)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 678, 
in _allocate_ips_for_port
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource result = 
NeutronDbPluginV2._generate_ip(context, subnets)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 359, 
in _generate_ip
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource return 
NeutronDbPluginV2._try_generate_ip(context, subnets)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 376, 
in _try_generate_ip
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource range = 
range_qry.filter_by(subnet_id=subnet['id']).first()
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2282, in 
first
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource ret = 
list(self[0:1])
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2149, in 
__getitem__
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource return 
list(res)
  2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py, line 65, in 
instances
  2014-10-15 21:59:51.165 26167 

[Yahoo-eng-team] [Bug 1476360] Re: stable/juno gate is failing

2015-07-20 Thread Lin Hua Cheng
** Description changed:

  File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/openstack_auth/utils.py", line 24, in <module>
  2015-07-20 18:48:01.107 |     from keystoneclient.v2_0 import client as client_v2
  2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/__init__.py", line 33, in <module>
  2015-07-20 18:48:01.107 |     from keystoneclient import access
  2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/access.py", line 20, in <module>
  2015-07-20 18:48:01.107 |     from oslo.utils import timeutils
  2015-07-20 18:48:01.107 | ImportError: No module named utils
+ 
+ Error is due to the oslo namespace

** Changed in: horizon
   Status: New => Confirmed

** Changed in: horizon
   Importance: Undecided => High

** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Summary changed:

- stable/juno gate is failing
+ stable/juno gate is failing on oslo import

** Changed in: horizon/juno
   Status: New => Confirmed

** Changed in: horizon/juno
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476360

Title:
  stable/juno gate is failing on oslo import

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Dashboard (Horizon) juno series:
  Confirmed

Bug description:
  File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/openstack_auth/utils.py", line 24, in <module>
  2015-07-20 18:48:01.107 |     from keystoneclient.v2_0 import client as client_v2
  2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/__init__.py", line 33, in <module>
  2015-07-20 18:48:01.107 |     from keystoneclient import access
  2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/access.py", line 20, in <module>
  2015-07-20 18:48:01.107 |     from oslo.utils import timeutils
  2015-07-20 18:48:01.107 | ImportError: No module named utils

  Error is due to the oslo namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476360/+subscriptions



[Yahoo-eng-team] [Bug 1476360] [NEW] stable/juno gate is failing on oslo import

2015-07-20 Thread Yash Bathia
Public bug reported:

File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/openstack_auth/utils.py", line 24, in <module>
2015-07-20 18:48:01.107 |     from keystoneclient.v2_0 import client as client_v2
2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/__init__.py", line 33, in <module>
2015-07-20 18:48:01.107 |     from keystoneclient import access
2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/access.py", line 20, in <module>
2015-07-20 18:48:01.107 |     from oslo.utils import timeutils
2015-07-20 18:48:01.107 | ImportError: No module named utils

Error is due to the oslo namespace

** Affects: horizon
 Importance: High
 Assignee: Yash Bathia (ybathia)
 Status: Confirmed

** Affects: horizon/juno
 Importance: High
 Status: Confirmed

** Changed in: horizon
 Assignee: (unassigned) => Yash Bathia (ybathia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476360

Title:
  stable/juno gate is failing on oslo import

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Dashboard (Horizon) juno series:
  Confirmed

Bug description:
  File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/openstack_auth/utils.py", line 24, in <module>
  2015-07-20 18:48:01.107 |     from keystoneclient.v2_0 import client as client_v2
  2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/__init__.py", line 33, in <module>
  2015-07-20 18:48:01.107 |     from keystoneclient import access
  2015-07-20 18:48:01.107 |   File "/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/access.py", line 20, in <module>
  2015-07-20 18:48:01.107 |     from oslo.utils import timeutils
  2015-07-20 18:48:01.107 | ImportError: No module named utils

  Error is due to the oslo namespace

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443522] Re: Neutron dsvm functional tests fail with TimeoutException in test_killed_monitor_respawns

2015-07-20 Thread Kyle Mestery
The fix referenced in #1 was merged during Kilo, so marking this Fix
Released.

** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: neutron
 Milestone: None => 2015.1.1

** Changed in: neutron
 Assignee: (unassigned) => Dane LeBlanc (leblancd)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443522

Title:
  Neutron dsvm functional tests fail with TimeoutException in
  test_killed_monitor_respawns

Status in neutron:
  Fix Released

Bug description:
  Occasionally the check-neutron-dsvm-functional upstream gating tests
  fail with a TimeoutException error in the
  neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestOvsdbMonitor.test_killed_monitor_respawns
  tests (both vsctl and native):

  http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpbmUgODEsIGluIHRlc3Rfa2lsbGVkX21vbml0b3JfcmVzcGF3bnNcIiAiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjg5NDA3NDYwMDh9

  Here's a sample log from a failing check-neutron-dsvm-functional test
  run:

  ft1.123: neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestOvsdbMonitor.test_killed_monitor_respawns(vsctl)_StringException: Empty attachments:
    pythonlogging:''
    pythonlogging:'neutron.api.extensions'
    stderr
    stdout

  Traceback (most recent call last):
    File "neutron/tests/functional/agent/linux/test_ovsdb_monitor.py", line 81, in test_killed_monitor_respawns
      output1 = self.collect_initial_output()
    File "neutron/tests/functional/agent/linux/test_ovsdb_monitor.py", line 76, in collect_initial_output
      eventlet.sleep(0.01)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, in sleep
      hub.switch()
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
      return self.greenlet.switch()
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 346, in run
      self.wait(sleep_time)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 85, in wait
      presult = self.do_poll(seconds)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/epolls.py", line 62, in do_poll
      return self.poll.poll(seconds)
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
      raise TimeoutException()
  fixtures._fixtures.timeout.TimeoutException
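
  The traceback shows the test polling for monitor output via
  eventlet.sleep() until the fixtures timeout fires. A minimal sketch of a
  bounded wait (a hypothetical helper, not the fix that actually merged)
  that fails with a readable assertion instead of a raw TimeoutException:

      # Hypothetical helper: poll with an explicit deadline so a dead or
      # slow monitor fails the test with a clear message rather than
      # tripping the fixtures timeout signal handler.
      import time

      def wait_for_output(collect, timeout=60, interval=0.01):
          deadline = time.time() + timeout
          while time.time() < deadline:
              output = collect()
              if output:
                  return output
              time.sleep(interval)
          raise AssertionError('monitor produced no output within %ss'
                               % timeout)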

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476448] [NEW] Private volume types don't show in admin volume type page

2015-07-20 Thread Liyingjun
Public bug reported:

The admin volume types page doesn't display private volume types:

Here are all my volume types:
~$ cinder type-list --all
+--------------------------------------+----------+--------------+-----------+
|                  ID                  |   Name   | Description  | Is_Public |
+--------------------------------------+----------+--------------+-----------+
| 9de17b91-f9a9-4424-b470-7b45a91a995e |  test4   |      -       |   False   |
| ab95f9b2-f76b-47b9-af9d-1359448c483e |   ssd    |     ssd      |   False   |
| ...-44fb-8763-289aad656460           |   test   | test xxx xxx |   False   |
| f3406818-007d-41f2-877e-8afc5c4b0bac | defaults |      -       |    True   |
+--------------------------------------+----------+--------------+-----------+

Attachment is admin volume type page.
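
The CLI's --all flag works because it disables the public-only filter; the
admin panel presumably needs the equivalent API call. A minimal
python-cinderclient sketch (assuming an authenticated session named 'sess'
and a client version that exposes is_public on volume_types.list()):

    # Sketch: list public and private volume types together.
    from cinderclient.v2 import client as cinder_client

    cinder = cinder_client.Client(session=sess)
    all_types = cinder.volume_types.list(is_public=None)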

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Attachment added: "Screen Shot 2015-07-21 at 10.12.56 AM.png"
   https://bugs.launchpad.net/bugs/1476448/+attachment/4431786/+files/Screen%20Shot%202015-07-21%20at%2010.12.56%20AM.png

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476448

Title:
  Private volume types don't show in admin volume type page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The admin volume types page doesn't display private volume types:

  Here are all my volume types:
  ~$ cinder type-list --all
  +--------------------------------------+----------+--------------+-----------+
  |                  ID                  |   Name   | Description  | Is_Public |
  +--------------------------------------+----------+--------------+-----------+
  | 9de17b91-f9a9-4424-b470-7b45a91a995e |  test4   |      -       |   False   |
  | ab95f9b2-f76b-47b9-af9d-1359448c483e |   ssd    |     ssd      |   False   |
  | ...-44fb-8763-289aad656460           |   test   | test xxx xxx |   False   |
  | f3406818-007d-41f2-877e-8afc5c4b0bac | defaults |      -       |    True   |
  +--------------------------------------+----------+--------------+-----------+

  Attachment is admin volume type page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476439] [NEW] update_metadata for flavors and images shows blank. static basePath not set correctly.

2015-07-20 Thread Ross Annetts
Public bug reported:

Currently using OpenStack Kilo on CentOS 7. Issue is with:

openstack-dashboard-2015.1.0-7.el7.noarch
/usr/share/openstack-dashboard/static/angular/widget.module.js

When using the update_metadata feature in Horizon's flavors and images
sections, the metadata table is not displayed. I have also seen this cause
problems when using Heat.

The basePath in the JavaScript is not set correctly, resulting in a
redirect loop:

[Tue Jul 21 00:14:22.097739 2015] [core:error] [pid 14453] (36)File name
too long: [client ] AH00036: access to
/dashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboardauth/login/
failed (filesystem path
'/var/www/html/dashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboardauth')

I was able to fix this by modifying the widget.module.js file:

$ diff -u /usr/share/openstack-dashboard/static/angular/widget.module.js.orig /usr/share/openstack-dashboard/static/angular/widget.module.js
--- /usr/share/openstack-dashboard/static/angular/widget.module.js.orig  2015-07-21 00:55:07.641502063 +0000
+++ /usr/share/openstack-dashboard/static/angular/widget.module.js  2015-07-21 00:41:37.476953146 +0000
@@ -17,6 +17,6 @@
 'hz.widget.metadata-display',
 'hz.framework.validators'
   ])
-.constant('basePath', '/static/angular/');
+.constant('basePath', '/dashboard/static/angular/');
 
 })();

Ideally this file should not need to be modified at all and should be
generated using WEBROOT in local_settings; alternatively, the documentation
should be updated if this file must be edited by hand.
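
A minimal sketch of deriving the path from WEBROOT (only WEBROOT and
STATIC_URL are real settings here; ANGULAR_BASE_PATH is an invented name to
show the relationship):

    # Sketch, assuming local_settings defines WEBROOT = '/dashboard/':
    WEBROOT = '/dashboard/'
    STATIC_URL = WEBROOT + 'static/'             # '/dashboard/static/'
    ANGULAR_BASE_PATH = STATIC_URL + 'angular/'  # '/dashboard/static/angular/'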

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angular metadata static

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476439

Title:
  update_metadata for flavors and images  shows blank.  static basePath
  not set correctly.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently using OpenStack Kilo on CentOS 7. Issue is with:

  openstack-dashboard-2015.1.0-7.el7.noarch
  /usr/share/openstack-dashboard/static/angular/widget.module.js

  When using the update_metadata feature in Horizon's flavors and images
  sections, the metadata table is not displayed. I have also seen this
  cause problems when using Heat.

  The basePath in the JavaScript is not set correctly, resulting in a
  redirect loop:

  [Tue Jul 21 00:14:22.097739 2015] [core:error] [pid 14453] (36)File
  name too long: [client ] AH00036: access to
  
/dashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboardauth/login/
  failed (filesystem path
  
'/var/www/html/dashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboardauth')

  I was able to fix this by modifying the widget.module.js file:

  $ diff -u /usr/share/openstack-dashboard/static/angular/widget.module.js.orig /usr/share/openstack-dashboard/static/angular/widget.module.js
  --- /usr/share/openstack-dashboard/static/angular/widget.module.js.orig  2015-07-21 00:55:07.641502063 +0000
  +++ /usr/share/openstack-dashboard/static/angular/widget.module.js  2015-07-21 00:41:37.476953146 +0000
  @@ -17,6 +17,6 @@
   'hz.widget.metadata-display',
   'hz.framework.validators'
     ])
  -.constant('basePath', '/static/angular/');
  +.constant('basePath', '/dashboard/static/angular/');
   
   })();

  Ideally this file should not need to be modified at all and should be
  generated using WEBROOT in local_settings; alternatively, the
  documentation should be updated if this file must be edited by hand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476469] [NEW] with DVR, a VM can't use floatingIP and VPN at the same time

2015-07-20 Thread shihanzhang
Public bug reported:

VPN service is now available for distributed routers thanks to patch
https://review.openstack.org/#/c/143203/, but another problem remains:
with DVR, a VM can't use a floating IP and VPN at the same time.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476469

Title:
  with DVR, a VM can't use floatingIP and VPN at the same time

Status in neutron:
  New

Bug description:
  VPN service is now available for distributed routers thanks to patch
  https://review.openstack.org/#/c/143203/, but another problem remains:
  with DVR, a VM can't use a floating IP and VPN at the same time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476481] [NEW] RFE - Add Neutron API extension for packet forwarding

2015-07-20 Thread YujiAzama
Public bug reported:

[Existing problem]
   This spec adds forwarding rules to the Neutron API as an extension.

   There are several use cases for forwarding packets without regard to the
   packet header. For example, in Service Function Chaining (SFC) use cases,
   when a transparent network function (e.g. an IDS or a WAN accelerator) is
   dynamically inserted into the chain, the destination of packets is changed
   regardless of their IP or MAC addresses. These functions can be described
   as forwarding rules, but the Neutron API does not define them. So in this
   spec, we propose forwarding rules as a new extension of the Neutron API.
   Note, however, that this proposal covers forwarding rules only, not the
   chain itself.

[What is the enhancement?]
   Add forwarding rules to the Neutron API as an extension (see the sketch
   after the spec link below).

-
Spec: https://review.openstack.org/#/c/186663
-
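
Purely as an illustration of the shape such an extension could take, here is
a hypothetical request body (the resource name and fields below are invented
for illustration; the spec above is authoritative):

    # Hypothetical payload for the proposed extension; none of these field
    # names are defined by the Neutron API today.
    body = {
        'forwarding_rule': {
            'ingress_port_id': 'PORT_UUID_A',  # traffic arriving here...
            'egress_port_id': 'PORT_UUID_B',   # ...is forwarded out here
        }
    }
    # e.g. POST /v2.0/forwarding_rules with 'body' as the JSON payload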

** Affects: neutron
 Importance: Undecided
 Assignee: YujiAzama (azama-yuji)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => YujiAzama (azama-yuji)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476481

Title:
  RFE - Add Neutron API extension for packet forwarding

Status in neutron:
  New

Bug description:
  [Existing problem]
     This spec adds forwarding rules to the Neutron API as an extension.

     There are several use cases for forwarding packets without regard to
     the packet header. For example, in Service Function Chaining (SFC) use
     cases, when a transparent network function (e.g. an IDS or a WAN
     accelerator) is dynamically inserted into the chain, the destination of
     packets is changed regardless of their IP or MAC addresses. These
     functions can be described as forwarding rules, but the Neutron API
     does not define them. So in this spec, we propose forwarding rules as a
     new extension of the Neutron API. Note, however, that this proposal
     covers forwarding rules only, not the chain itself.

  [What is the enhancement?]
     Add forwarding rules to the Neutron API as an extension.

  -
  Spec: https://review.openstack.org/#/c/186663
  -

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp