[Yahoo-eng-team] [Bug 1417762] Re: XSS in network create error reporting

2015-03-09 Thread Jeremy Stanley
Agreed on class D for this report, and since nobody has objected I've
switched it to public, tagged as a security hardening opportunity and
switched the advisory task to won't fix.

** Information type changed from Private Security to Public

** Tags added: security

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1417762

Title:
  XSS in network create error reporting

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  ---

  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added as to
  the bug as attachments.

  ---

  The error reporting in Horizon for creating a new network is
  susceptible to a Cross-Site Scripting vulnerability. Example
  request/response:

  Request

  POST /project/networks/create HTTP/1.1
  ...

  csrfmiddlewaretoken=6MGUvp62x8c6GU7TfRXQLZERmJuN7nXT&net_profile_id=<img src=zz onerror=alert(1)>&net_name=foobar&admin_state=True&with_subnet=on&subnet_name=&cidr=&ip_version=4&gateway_ip=&enable_dhcp=on&ipv6_modes=none%2Fnone&allocation_pools=&dns_nameservers=&host_routes=

  Response

  HTTP/1.1 200 OK
  Date: Tue, 03 Feb 2015 20:42:28 GMT
  Server: Apache/2.4.10 (Debian)
  Vary: Accept-Language,Cookie
  X-Frame-Options: SAMEORIGIN
  Content-Language: en
  Keep-Alive: timeout=5, max=100
  Connection: Keep-Alive
  Content-Type: application/json
  Content-Length: 209

  {"has_errors": true, "errors": {"createnetworkinfoaction":
  {"net_profile_id": ["Select a valid choice. <img src=zz
  onerror=alert(1)> is not one of the available choices."]}},
  "workflow_slug": "create_network"}

  In the above example, if the net_profile_id does not exist, the JSON
  response contains the user input and Horizon echoes it back verbatim.
  Although it would be difficult to exploit this vulnerability, because
  an attacker would need to manipulate the hidden net_profile_id HTML
  parameter or the POST body, Horizon should still HTML-encode the
  output.
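
  A minimal sketch, assuming Django's standard escaping helper (this is
  not the actual Horizon patch), of the output encoding the report asks
  for:

      from django.utils.html import escape

      def choice_error(user_value):
          # escape() turns <, >, &, ' and " into HTML entities, so a
          # payload like <img src=zz onerror=alert(1)> renders as inert
          # text instead of executing.
          return ('Select a valid choice. %s is not one of the '
                  'available choices.' % escape(user_value))

      print(choice_error('<img src=zz onerror=alert(1)>'))
      # Select a valid choice. &lt;img src=zz onerror=alert(1)&gt; is
      # not one of the available choices.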

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1417762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429126] Re: miss moving unlock_override policy enforcement into V2.1 REST API layer

2015-03-09 Thread Jeremy Stanley
I've switched the security advisory task to won't fix since this
shouldn't need an advisory published (class Y bug per
https://wiki.openstack.org/wiki/Vulnerability_Management#Incident_report_taxonomy
).

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429126

Title:
  miss moving unlock_override policy enforcement into V2.1 REST API
  layer

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Commit 01be083 misses the unlock_override policy check in the V2.1
  REST API layer.

  The V2.1 REST API can always call this policy check, because there is
  no skip_policy_check condition in the underlying layer.

But for the V2.1 API, we should not check any policy in the underlying
layer. This is the principle of the V2.1 API policy.
https://blueprints.launchpad.net/openstack/?searchtext=v3-api-policy
https://review.openstack.org/#/c/147782/ has cleaned this up, but it
missed this one.
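
A minimal, self-contained sketch (hypothetical names, not the merged
patch) of the layering principle described above: the V2.1 REST layer
enforces unlock_override itself, and the compute layer below does no
policy checks:

    class PolicyNotAuthorized(Exception):
        pass

    def enforce(context, action):
        # Stand-in for nova.policy.enforce(context, action, target).
        if not context.get('is_admin'):
            raise PolicyNotAuthorized(action)

    def _compute_unlock(instance):
        # Underlying compute layer: no policy code, no skip_policy_check.
        instance['locked'] = False
        instance['locked_by'] = None

    def unlock(context, instance):
        # V2.1 REST API layer: the only place the policy is enforced.
        if instance['locked_by'] != context['user_id']:
            enforce(context, 'unlock_override')
        _compute_unlock(instance)

    admin = {'user_id': 'admin', 'is_admin': True}
    vm = {'locked': True, 'locked_by': 'alice'}
    unlock(admin, vm)  # admin overrides alice's lock via the API-layer check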

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410536] Re: glance wsgi server should log requests

2015-03-09 Thread Erno Kuvaja
eventlet.wsgi.server does log all requests to glance at INFO level.

** Changed in: glance
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1410536

Title:
  glance wsgi server should log requests

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Maybe the glance wsgi server should log every request at INFO level as is done in nova or neutron. I could not find a way to enable this in glance.
  Example log from neutron:
  2015-01-13 06:31:11.944 16150 INFO neutron.wsgi [-] IP.ADDRESS - - [13/Jan/2015 06:31:11] "GET /v2.0/floatingips.json?fixed_ip_address=&port_id=9b297a7c-d032-4b3e-8be5-4de83e5aaf91 HTTP/1.1" 200 142 0.002960

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1410536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429879] [NEW] ML2 Cisco Nexus MD: model/db migration updates required for stackforge cisco nexus MD

2015-03-09 Thread Rich Curran
Public bug reported:

 ML2 Cisco Nexus MD: model/db migration updates required for stackforge
cisco nexus MD

** Affects: neutron
 Importance: Undecided
 Assignee: Rich Curran (rcurran)
 Status: New


** Tags: cisco ml2

** Changed in: neutron
 Assignee: (unassigned) => Rich Curran (rcurran)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429879

Title:
   ML2 Cisco Nexus MD: model/db migration updates required for
  stackforge cisco nexus MD

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
   ML2 Cisco Nexus MD: model/db migration updates required for
  stackforge cisco nexus MD

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429875] [NEW] dashboard_url is not used in horizon.conf and should be removed

2015-03-09 Thread Wu Hong Guang
Public bug reported:

dashboard_url is not used in horizon.conf and should be removed

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1429875

Title:
  dashboard_url is not used in horizon.conf and should be removed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  dashboard_url is not used in horizon.conf and should be removed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1429875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366067] Re: Neutron internal error on empty port update

2015-03-09 Thread Numan Siddique
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366067

Title:
  Neutron internal error on empty port update

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  PUTting an empty update object to neutron port-update call will cause
  an internal server error:

  $ curl -H 'Content-Type: application/json' -H 'X-Auth-Token: ...' -v -i -X PUT -d '{"port": {}}' 'http://127.0.1.1:9696/v2.0/ports/fc092916-c766-4e70-8788-b9b3edcd4c22'
  * Hostname was NOT found in DNS cache
  *   Trying 127.0.1.1...
  * Connected to 127.0.1.1 (127.0.1.1) port 9696 (#0)
  > PUT /v2.0/ports/fc092916-c766-4e70-8788-b9b3edcd4c22 HTTP/1.1
  > User-Agent: curl/7.35.0
  > Host: 127.0.1.1:9696
  > Accept: */*
  > Content-Type: application/json
  > X-Auth-Token: ...
  > Content-Length: 12
  >
  * upload completely sent off: 12 out of 12 bytes
  < HTTP/1.1 500 Internal Server Error
  HTTP/1.1 500 Internal Server Error
  < Content-Type: application/json; charset=UTF-8
  Content-Type: application/json; charset=UTF-8
  < Content-Length: 88
  Content-Length: 88
  < X-Openstack-Request-Id: req-97b2b096-263d-466c-9349-b45b135db499
  X-Openstack-Request-Id: req-97b2b096-263d-466c-9349-b45b135db499
  < Date: Fri, 05 Sep 2014 14:43:28 GMT
  Date: Fri, 05 Sep 2014 14:43:28 GMT
  <
  * Connection #0 to host 127.0.1.1 left intact
  {"NeutronError": "Request Failed: internal server error while processing your request."}

  The neutron log shows an invalid update SQL command:

  2014-09-05 14:43:28.751 2487 INFO neutron.wsgi [-] (2487) accepted ('127.0.0.1', 53273)

  2014-09-05 14:43:28.812 2487 ERROR neutron.openstack.common.db.sqlalchemy.session [-] DB exception wrapped.
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session Traceback (most recent call last):
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py", line 597, in _wrap
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     return f(*args, **kwargs)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py", line 836, in flush
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     return super(Session, self).flush(*args, **kwargs)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, in flush
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     self._flush(objects)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, in _flush
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     transaction.rollback(_capture_exception=True)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in __exit__
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     compat.reraise(exc_type, exc_value, exc_tb)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1900, in _flush
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     flush_context.execute()
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     rec.execute(self)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 525, in execute
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     uow
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 59, in save_obj
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     mapper, table, update)
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 495, in _emit_update_statements
  2014-09-05 14:43:28.812 2487 TRACE neutron.openstack.common.db.sqlalchemy.session     execute(statement, params)
  2014-09-05

[Yahoo-eng-team] [Bug 1430005] [NEW] Improve security rule notification message

2015-03-09 Thread Kahou Lei
Public bug reported:

If we create a rule that allows all ports through, we display the
following notification message (see attached image too):

Successfully added rule:
ALLOW -1:-1 from 0.0.0.0/0

The "-1:-1" doesn't convey any useful information.

Suggest changing "-1:-1" to "any port" instead.
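
A minimal sketch (hypothetical helper, not Horizon's actual code) of the
suggested message change:

    def format_port_range(from_port, to_port):
        # -1:-1 is how an unrestricted port range is displayed today.
        if (from_port, to_port) == (-1, -1):
            return 'any port'
        if from_port == to_port:
            return str(from_port)
        return '%d:%d' % (from_port, to_port)

    print('Successfully added rule: ALLOW %s from 0.0.0.0/0'
          % format_port_range(-1, -1))
    # Successfully added rule: ALLOW any port from 0.0.0.0/0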

** Affects: horizon
 Importance: Undecided
 Assignee: Kahou Lei (kahou82)
 Status: New

** Attachment added: Screen Shot 2015-03-09 at 1.25.38 PM.png
   
https://bugs.launchpad.net/bugs/1430005/+attachment/4339229/+files/Screen%20Shot%202015-03-09%20at%201.25.38%20PM.png

** Changed in: horizon
 Assignee: (unassigned) => Kahou Lei (kahou82)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1430005

Title:
  Improve security rule notification message

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If we create a rule that allows all ports through, we display the
  following notification message (see attached image too):

  Successfully added rule:
  ALLOW -1:-1 from 0.0.0.0/0

  The "-1:-1" doesn't convey any useful information.

  Suggest changing "-1:-1" to "any port" instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1430005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1345947] Re: DHCPNAK after neutron-dhcp-agent restart

2015-03-09 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Assignee: (unassigned) => Kevin Bringard (kbringard)

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
Milestone: None => 2014.1.4

** Tags removed: in-stable-icehouse in-stable-juno

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1345947

Title:
  DHCPNAK after neutron-dhcp-agent restart

Status in Grenade - OpenStack upgrade testing:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed
Status in neutron juno series:
  Fix Committed

Bug description:
  After rolling out a configuration change, we restarted the neutron-dhcp-agent service, and then the dnsmasq logs started flooding with "DHCPNAK ... lease not found".
  DHCPNAK is replied by dnsmasq to all DHCPREQUEST renews from all VMs, even though the MAC and IP pairs exist in the host files.
  The log flooding increases as more and more VMs start renewing; they keep retrying until their IPs expire, then send DHCPDISCOVER and reinitialize the IP.
  The log flooding gradually disappears as the VMs' IPs expire and they send DHCPDISCOVER, to which dnsmasq responds with DHCPOFFER properly.

  Analysis:
  I noticed that the option --leasefile-ro is used in the dnsmasq command when started by the neutron dhcp-agent. According to the dnsmasq manual, this option should be used together with --dhcp-script to customize the lease database. However, the option --dhcp-script was removed when fixing bug 1202392.
  Because of this, dnsmasq will not save lease information in persistent storage, and when it is restarted, lease information is lost.

  Solution:
  Simply replacing --leasefile-ro with --dhcp-leasefile=<path to dhcp runtime files>/lease would solve the problem. (patch attached)
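
A minimal sketch, assuming the layout of neutron's dhcp-agent working
directory (the attached patch does the equivalent inside the agent):
build the dnsmasq command with a persistent lease file instead of
--leasefile-ro, so leases survive a dnsmasq restart:

    import os

    def build_dnsmasq_cmd(confs_dir, network_id):
        lease_file = os.path.join(confs_dir, network_id, 'lease')
        return [
            'dnsmasq',
            '--no-hosts',
            '--no-resolv',
            # '--leasefile-ro',                  # old: leases lost on restart
            '--dhcp-leasefile=%s' % lease_file,  # new: persistent lease db
        ]

    print(' '.join(build_dnsmasq_cmd('/var/lib/neutron/dhcp', 'net-uuid')))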

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1345947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429968] [NEW] In Arista ML2 delete tenant if no resources associated with it

2015-03-09 Thread Sukhdev Kapur
Public bug reported:

When all resources for a tenant are deleted (ports and networks), the
tenant is removed from Arista's DB. This operation is performed during
the port/network_delete_precommit() operation. Arista's ML2 sync
mechanism used to ensure that the tenant was deleted from the back-end
(i.e. from the hardware) as well.

With enhancements to the sync mechanism to accommodate large-scale
deployments, it is prudent to move the deletion of the tenant from
port/network_delete_precommit() to port/network_delete_postcommit().
This ensures that the tenant is immediately deleted from the DB as well
as the back-end and removes the dependency on the sync mechanism.

This bug is to make this fix.
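
A minimal sketch (hypothetical helper names, not Arista's actual driver
code) of the move described above: tenant cleanup leaves
delete_network_precommit and happens in delete_network_postcommit, after
the Neutron DB transaction commits, so the back-end is updated
immediately instead of depending on the periodic sync:

    class AristaMechanismDriverSketch(object):
        def __init__(self, db, backend):
            self.db = db            # Arista's Neutron-side bookkeeping DB
            self.backend = backend  # client talking to the hardware

        def delete_network_precommit(self, context):
            # Runs inside the Neutron DB transaction: DB-only work here.
            self.db.forget_network(context.current['id'])

        def delete_network_postcommit(self, context):
            # Runs after the transaction commits: safe to touch hardware.
            tenant_id = context.current['tenant_id']
            if not self.db.tenant_has_resources(tenant_id):
                self.db.delete_tenant(tenant_id)
                self.backend.delete_tenant(tenant_id)  # no sync dependency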

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: arista ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429968

Title:
  In Arista ML2 delete tenant if no resources associated with it

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When all resources for a tenant are deleted (ports and networks), the
  tenant is removed from Arista's DB. This operation is performed during
  the port/network_delete_precommit() operation. Arista's ML2 sync
  mechanism used to ensure that the tenant was deleted from the back-end
  (i.e. from the hardware) as well.

  With enhancements to the sync mechanism to accommodate large-scale
  deployments, it is prudent to move the deletion of the tenant from
  port/network_delete_precommit() to port/network_delete_postcommit().
  This ensures that the tenant is immediately deleted from the DB as
  well as the back-end and removes the dependency on the sync mechanism.

  This bug is to make this fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365727] Re: Tenant able to create networks using N1kv network profiles not explicitly assigned to it

2015-03-09 Thread Jeremy Stanley
Thanks Kyle, in that case I've switched our security advisory task to
won't fix reflecting that.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365727

Title:
  Tenant able to create networks using N1kv network profiles not
  explicitly assigned to it

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Tenants are able to create networks using network profiles that are
  not explicitly assigned to them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429993] [NEW] service catalog parsing should be delegated to keystoneclient

2015-03-09 Thread Lin Hua Cheng
Public bug reported:

Horizon or django_openstack_auth should not be manually parsing the
service catalog and deciding between the v2 and v3 catalog formats.

Ideally, keystoneclient should be leveraged for endpoint lookup in the
service catalog.

Preferably the auth_plugin would perform the endpoint lookup; if that is
not available, another thing to look at is the ServiceCatalog object.
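
A minimal sketch, assuming keystoneclient's session and v3 auth-plugin
API, of letting the client resolve endpoints instead of hand-parsing the
catalog (the endpoint values and credentials here are placeholders):

    from keystoneclient.auth.identity import v3
    from keystoneclient import session

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # The auth plugin understands both v2 and v3 catalog formats, so the
    # caller never inspects the raw catalog structure.
    endpoint = sess.get_endpoint(service_type='compute',
                                 interface='public',
                                 region_name='RegionOne')
    print(endpoint)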

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1429993

Title:
  service catalog parsing should be delegated to keystoneclient

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon or django_openstack_auth should not be manually parsing the
  service catalog and determining the logic for v2/v3 catalog.

  Ideally, keystoneclient should be leverage for endpoint looking in the
  service catalog.

  Preferably using the auth_plugin to perform the endpoint lookup, if
  not available another thing to look at is the ServiceCatalog object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1429993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430003] [NEW] Corrupt POSTROUTING Chain when using Metering and VPN agents together

2015-03-09 Thread James Dempsey
Public bug reported:

I'm using the Icehouse UCA version 1:2014.1.3-0ubuntu1~cloud0 of the
VPN(openswan driver) and Metering(iptables driver) agents on Ubuntu
Precise.

The ordering of the POSTROUTING chain in the NAT table inside router
namespaces seems to be broken.  In many of my routers, the neutron-
postrouting-bottom rule is listed before the neutron-vpn-agen-
POSTROUTING rule.  This causes traffic that should have traversed a VPN
to be Source NAT'd as if it were traffic leaving via the default route.
Rescheduling the router or removing metering rules for it seem to cause
a re-ordering of rules.

It seems to me that neutron-postrouting-bottom should always be the last
rule in the POSTROUTING chain.  Is this correct?

In the following state, VPNs are broken regardless of the existence of
Phase 1 and Phase 2 IPsec

Chain POSTROUTING (policy ACCEPT 2194K packets, 129M bytes)
 pkts bytes target                        prot opt in out source     destination
2199K  129M neutron-meter-POSTROUTING     all  --  *  *   0.0.0.0/0  0.0.0.0/0
2568K  152M neutron-postrouting-bottom    all  --  *  *   0.0.0.0/0  0.0.0.0/0
2563K  151M neutron-vpn-agen-POSTROUTING  all  --  *  *   0.0.0.0/0  0.0.0.0/0

Removing metering rules can cause the above chain to be changed into the
following, which I assume (but have not verified) would break metering,
were it enabled.

Chain POSTROUTING (policy ACCEPT 2199K packets, 129M bytes)
 pkts bytes target                        prot opt in out source     destination
2569K  152M neutron-vpn-agen-POSTROUTING  all  --  *  *   0.0.0.0/0  0.0.0.0/0
2574K  152M neutron-postrouting-bottom    all  --  *  *   0.0.0.0/0  0.0.0.0/0
2204K  130M neutron-meter-POSTROUTING    all  --  *  *   0.0.0.0/0  0.0.0.0/0

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3 l3-agent metering vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430003

Title:
  Corrupt POSTROUTING Chain when using Metering and VPN agents together

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm using the Icehouse UCA version 1:2014.1.3-0ubuntu1~cloud0 of the
  VPN(openswan driver) and Metering(iptables driver) agents on Ubuntu
  Precise.

  The ordering of the POSTROUTING chain in the NAT table inside router
  namespaces seems to be broken.  In many of my routers, the neutron-
  postrouting-bottom rule is listed before the neutron-vpn-agen-
  POSTROUTING rule.  This causes traffic that should have traversed a
  VPN to be Source NAT'd as if it were traffic leaving via the default
  route.  Rescheduling the router or removing metering rules for it seem
  to cause a re-ordering of rules.

  It seems to me that neutron-postrouting-bottom should always be the
  last rule in the POSTROUTING chain.  Is this correct?

  In the following state, VPNs are broken regardless of the existence of
  Phase 1 and Phase 2 IPsec

  Chain POSTROUTING (policy ACCEPT 2194K packets, 129M bytes)
   pkts bytes target                        prot opt in out source     destination
  2199K  129M neutron-meter-POSTROUTING     all  --  *  *   0.0.0.0/0  0.0.0.0/0
  2568K  152M neutron-postrouting-bottom    all  --  *  *   0.0.0.0/0  0.0.0.0/0
  2563K  151M neutron-vpn-agen-POSTROUTING  all  --  *  *   0.0.0.0/0  0.0.0.0/0

  Removing metering rules can cause the above chain to be changed into
  the following, which I assume (but have not verified) would break
  metering, were it enabled.

  Chain POSTROUTING (policy ACCEPT 2199K packets, 129M bytes)
   pkts bytes target                        prot opt in out source     destination
  2569K  152M neutron-vpn-agen-POSTROUTING  all  --  *  *   0.0.0.0/0  0.0.0.0/0
  2574K  152M neutron-postrouting-bottom    all  --  *  *   0.0.0.0/0  0.0.0.0/0
  2204K  130M neutron-meter-POSTROUTING     all  --  *  *   0.0.0.0/0  0.0.0.0/0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429980] [NEW] Compress (losslessly) image files

2015-03-09 Thread mattfarina
Public bug reported:

The Images (gif and png) in the UI can be reduced in size without
loosing image quality. There is extra data in them not used for display.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1429980

Title:
  Compress (losslessly) image files

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The images (gif and png) in the UI can be reduced in size without
  losing image quality. There is extra data in them not used for
  display.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1429980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429987] [NEW] [data processing] After update to use data-processing, no sahara panels show up

2015-03-09 Thread Chad Roberts
Public bug reported:

Recently, horizon was updated to use data-processing as the service
type for Sahara.  That change went through without updating the
permissions checks in each panel.  They are still using
data_processing...

permissions = ('openstack.services.data-processing',)

Each panel needs to be updated to the following:

permissions = ('openstack.services.data_processing',)

** Affects: horizon
 Importance: Undecided
 Assignee: Chad Roberts (croberts)
 Status: In Progress


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1429987

Title:
  [data processing] After update to use data-processing, no sahara
  panels show up

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Recently, horizon was updated to use data-processing as the service
  type for Sahara.  That change went through without updating the
  permissions checks in each panel.  They are still using
  data_processing...

  permissions = ('openstack.services.data-processing',)

  Each panel needs to be updated to the following:

  permissions = ('openstack.services.data_processing',)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1429987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430057] [NEW] Fake instance stuck in MIGRATING state

2015-03-09 Thread Lorenzo Affetti
Public bug reported:

I am using FakeDriver.

It seems that when resize and live-migration operations run
concurrently, a fake instance can remain stuck in the MIGRATING state.

To reproduce the bug, I spawned a fake instance and ran a script that resized it to random flavors every second. Concurrently, from another node, I ran a script that tried to live-migrate the instance to another host (every 1.2 seconds).
Most of the time the messages were something like 'cannot migrate instance in state VERIFY_RESIZE', but when live-migration succeeded the instance was stuck in MIGRATING status and needed a `nova reset-state --active` command.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  I am using FakeDriver.
  
- It seems that when there are a resize and a live-migration concurrent
- operation, a fake instance can remain stacked in MIGRATING state.
+ It seems that when concurrent resize and live-migration operations, a
+ fake instance can remain stucked in MIGRATING state.
  
- To reproduce the bug, I spawned a fake instance and run a script that resized 
it to random flavors every second. Concurrently, from another node, I run a 
script that tried to live-migrate the instance to another host.
+ To reproduce the bug, I spawned a fake instance and run a script that resized 
it to random flavors every second. Concurrently, from another node, I run a 
script that tried to live-migrate the instance to another host (every 1.2 
seconds).
  Most of the times messages were something like 'cannot migrate instance in 
state VERIFY_RESIZE', but, when live-migration succeeded the instance was stuck 
in MIGRATING status and needed a `nova refresh --active` command.

** Summary changed:

- Fake instance hung on MIGRATING state
+ Fake instance stuck in MIGRATING state

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430057

Title:
  Fake instance stuck in MIGRATING state

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am using FakeDriver.

  It seems that when resize and live-migration operations run
  concurrently, a fake instance can remain stuck in the MIGRATING state.

  To reproduce the bug, I spawned a fake instance and ran a script that resized it to random flavors every second. Concurrently, from another node, I ran a script that tried to live-migrate the instance to another host (every 1.2 seconds).
  Most of the time the messages were something like 'cannot migrate instance in state VERIFY_RESIZE', but when live-migration succeeded the instance was stuck in MIGRATING status and needed a `nova reset-state --active` command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430042] [NEW] Virtual Machine could not be evacuated because virtual interface creation failed

2015-03-09 Thread Matt Rabe
Public bug reported:

I believe this issue is related to Question 257358
(https://answers.launchpad.net/ubuntu/+source/nova/+question/257358).

On the source host we see the successful vif plug:

2015-03-09 01:22:12.363 629 DEBUG neutron.plugins.ml2.rpc [req-5de70341-d64b-4a3a-bc05-54eb2802f25d None] Device 14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipx update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
2015-03-09 01:22:12.392 629 DEBUG oslo_concurrency.lockutils [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Acquired semaphore "db-access" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:377
2015-03-09 01:22:12.436 629 DEBUG oslo_concurrency.lockutils [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Releasing semaphore "db-access" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:390
2015-03-09 01:22:12.437 629 DEBUG oslo_messaging._drivers.amqp [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 740634ca8c7a49418a39c429669f2f27. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
2015-03-09 01:22:12.439 629 DEBUG oslo_messaging._drivers.amqp [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 3264e8d7dd7c492d9aa17d3e9892b1fc. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
2015-03-09 01:22:14.436 629 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'14ac5edd-269f-4808-9a34-c4cc93e9ab70', 'name': 'network-vif-plugged', 'server_uuid': u'2790be4a-5285-46aa-8ee2-c68f5b936c1d'}] send_events /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:237

Later, the destination host of the evacuation attempts to plug the vif
but can't:

2015-03-09 02:15:41.441 629 DEBUG neutron.plugins.ml2.rpc [req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipxx update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
2015-03-09 02:15:41.485 629 DEBUG neutron.plugins.ml2.rpc [req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 14ac5edd-269f-4808-9a34-c4cc93e9ab70 not bound to the agent host ipx update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:163

The cause of the problem seems to be that the neutron port does not have
its binding:host_id properly updated on evacuation; the answer to
question 257358 looks like the fix.
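
A minimal sketch, assuming python-neutronclient's v2.0 API (credentials
and host name are placeholders), of re-pointing the port binding at the
evacuation destination so its OVS agent can bind the vif:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    port_id = '14ac5edd-269f-4808-9a34-c4cc93e9ab70'
    # Without this update, the destination agent logs "not bound to the
    # agent host" as shown above.
    neutron.update_port(port_id,
                        {'port': {'binding:host_id': 'destination-host'}})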

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430042

Title:
  Virtual Machine could not be evacuated because virtual interface
  creation failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  I believe this issue is related to Question 257358
  (https://answers.launchpad.net/ubuntu/+source/nova/+question/257358).

  On the source host we see the successful vif plug:

  2015-03-09 01:22:12.363 629 DEBUG neutron.plugins.ml2.rpc [req-5de70341-d64b-4a3a-bc05-54eb2802f25d None] Device 14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipx update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
  2015-03-09 01:22:12.392 629 DEBUG oslo_concurrency.lockutils [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Acquired semaphore "db-access" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:377
  2015-03-09 01:22:12.436 629 DEBUG oslo_concurrency.lockutils [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Releasing semaphore "db-access" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:390
  2015-03-09 01:22:12.437 629 DEBUG oslo_messaging._drivers.amqp [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 740634ca8c7a49418a39c429669f2f27. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
  2015-03-09 01:22:12.439 629 DEBUG oslo_messaging._drivers.amqp [req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 3264e8d7dd7c492d9aa17d3e9892b1fc. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
  2015-03-09 01:22:14.436 629 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'14ac5edd-269f-4808-9a34-c4cc93e9ab70', 'name': 'network-vif-plugged', 'server_uuid': u'2790be4a-5285-46aa-8ee2-c68f5b936c1d'}] send_events /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:237

  Later, the destination host of the evacuation attempts to plug the vif
  but can't:

  2015-03-09 02:15:41.441 629 DEBUG neutron.plugins.ml2.rpc [req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipxx update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
  2015-03-09 02:15:41.485 629 DEBUG neutron.plugins.ml2.rpc

[Yahoo-eng-team] [Bug 1425258] Re: test_list_baremetal_nodes race fails with a node not found 404

2015-03-09 Thread Adam Gandelman
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: ironic
   Status: Confirmed => Invalid

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: tempest
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425258

Title:
  test_list_baremetal_nodes race fails with a node not found 404

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  In Progress

Bug description:
  http://logs.openstack.org/35/158435/1/check/check-grenade-dsvm-ironic-
  sideways/2beafaf/logs/new/screen-n-api.txt.gz?level=TRACE

  Apparently this is unhandled and we get a 500 response:

  http://logs.openstack.org/35/158435/1/check/check-grenade-dsvm-ironic-
  sideways/2beafaf/console.html#_2015-02-23_22_11_18_978

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiYmFyZW1ldGFsX25vZGVzLnB5XCIgQU5EIG1lc3NhZ2U6XCJOb2RlXCIgQU5EIG1lc3NhZ2U6XCJjb3VsZCBub3QgYmUgZm91bmRcIiBBTkQgdGFnczpcInNjcmVlbi1uLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNDgwNjgwMzM5MX0=

  21 hits in the last 7 days, check and gate, master and stable/juno,
  all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1425258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430062] [NEW] Fernet token response has wrong methods

2015-03-09 Thread Haneef Ali
Public bug reported:

If you validate a fernet token, the token response has 2 methods. Since
the token was obtained using the password method, the response should
only list the password method.


ex - token response (excerpt):

    "expires_at": "2015-03-14T03:06:39Z",
    "extras": {},
    "issued_at": "2015-03-09T23:06:39Z",
    "methods": [
        "password",
        "token"
    ],

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1430062

Title:
  Fernet token response has  wrong methods

Status in OpenStack Identity (Keystone):
  New

Bug description:
  If you validate a fernet token, the token response has 2 methods.
  Since the token was obtained using the password method, the response
  should only list the password method.


  ex - token response (excerpt):

      "expires_at": "2015-03-14T03:06:39Z",
      "extras": {},
      "issued_at": "2015-03-09T23:06:39Z",
      "methods": [
          "password",
          "token"
      ],

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1430062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429792] [NEW] log_out doesn't work after test_dashboard_help_redirection executed

2015-03-09 Thread Wu Hong Guang
Public bug reported:

Test steps:
1: run test_dashboard_help_redirection

2: self.home_pg.go_to_home_page()

3: self.home_pg.log_out()

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1429792

Title:
  log_out doesn't work after test_dashboard_help_redirection executed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Test steps:
  1: run test_dashboard_help_redirection

  2: self.home_pg.go_to_home_page()

  3: self.home_pg.log_out()

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1429792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429791] [NEW] Typo in nova/scheduler/filters/utils.py

2015-03-09 Thread Takashi NATSUME
Public bug reported:

In 'validate_num_values' method, nova/scheduler/filters/utils.py,
there is the following comment.

---
Returns a corretly casted value based on a set of values.
--

'corretly' should be 'correctly'.

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429791

Title:
  Typo in nova/scheduler/filters/utils.py

Status in OpenStack Compute (Nova):
  New

Bug description:
  In 'validate_num_values' method, nova/scheduler/filters/utils.py,
  there is the following comment.

  ---
  Returns a corretly casted value based on a set of values.
  --

  'corretly' should be 'correctly'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430146] [NEW] nova notification order is not right when delete a VERIFY_RESIZE instance

2015-03-09 Thread hougangliu
Public bug reported:

When I delete a VERIFY_RESIZE instance, I always receive the
compute.instance.delete.end notification for the VM instance first,
and then compute.instance.power_off.end. The order in which the
notifications are issued may not be right.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430146

Title:
  nova notification order is not right when delete a VERIFY_RESIZE
  instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I delete a VERIFY_RESIZE instance, I always receive the
  compute.instance.delete.end notification for the VM instance first,
  and then compute.instance.power_off.end. The order in which the
  notifications are issued may not be right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430098] [NEW] linuxbridge UTs need more mocking

2015-03-09 Thread YAMAMOTO Takashi
Public bug reported:

After commit b7a56fd1b44649daa1f768157e68a135b9e01dd1, some of the
linuxbridge UTs seem to try to run the ip command, and they fail if it
isn't available.
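
A minimal sketch, assuming the agent reaches the ip command through
neutron.agent.linux.ip_lib, of the extra mocking the UTs need so they
never shell out:

    import mock

    with mock.patch('neutron.agent.linux.ip_lib.device_exists',
                    return_value=True):
        # Construct and exercise the linuxbridge agent/manager here; any
        # device lookup now returns True without running "ip link show".
        pass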

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430098

Title:
  linuxbridge UTs need more mocking

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  After commit b7a56fd1b44649daa1f768157e68a135b9e01dd1, some of the
  linuxbridge UTs seem to try to run the ip command, and they fail if it
  isn't available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430112] [NEW] [sahara]loading node group template page takes a long time for cdh plugin

2015-03-09 Thread weiting-chen
Public bug reported:

Steps to reproduce:
1. Launch the horizon console from a browser
2. Click the Node Group Templates page in Project -> Data Processing
3. Click Create Template
4. You will see the page loading; after several minutes it can fail with an error because loading times out

Loading sometimes succeeds; the successful cases usually happen in IE 11.
It always fails in Chrome.

However, it looks like the root cause is that there are so many services
listed in the node group templates for the cdh plugin.
Is there any way to solve it?

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1430112

Title:
  [sahara]loading node group template page takes a long time for cdh
  plugin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Launch the horizon console from a browser
  2. Click the Node Group Templates page in Project -> Data Processing
  3. Click Create Template
  4. You will see the page loading; after several minutes it can fail with an error because loading times out

  Loading sometimes succeeds; the successful cases usually happen in IE 11.
  It always fails in Chrome.

  However, it looks like the root cause is that there are so many services
  listed in the node group templates for the cdh plugin.
  Is there any way to solve it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1430112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428872] Re: Nova CPU details socket not correct

2015-03-09 Thread Park
Sorry, I am NOT familiar with libvirt either...

but from your virsh nodeinfo output, nova stays consistent with virsh,
so I don't think this is a nova bug...

I am going to close this bug; please feel free to reopen it if there is
any problem.

** Changed in: nova
   Status: Confirmed => In Progress

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428872

Title:
  Nova CPU details  socket not correct

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  We are using Mirantis 6.0.
  We are trying to find physical CPU information from a Nova compute
  host. We are using the nova hypervisor-show command.

  When I tried running nova hypervisor-show <compute_node>, it shows the
  correct cpu_arch, cpu_info_features and cores, but it shows only one
  socket whereas we are using 2 CPUs (see details from the compute node
  below).

  We are trying to find the physical CPU details from the Nova compute
  host blade.

  Here is what nova shows:

  root@node-9:~# nova hypervisor-show node-12
  +-------------------------+--------------------------------------------------+
  | Property                | Value                                            |
  +-------------------------+--------------------------------------------------+
  | cpu_info_arch           | x86_64                                           |
  | cpu_info_features       | ["ssse3", "pge", "avx", "clflush", "sep",        |
  |                         | "syscall", "vme", "dtes64", "invpcid", "msr",    |
  |                         | "sse", "xsave", "vmx", "erms", "xtpr", "cmov",   |
  |                         | "tsc", "smep", "pbe", "est", "pat", "monitor",   |
  |                         | "smx", "lm", "abm", "nx", "fxsr", "tm",          |
  |                         | "sse4.1", "pae", "sse4.2", "pclmuldq", "acpi",   |
  |                         | "fma", "tsc-deadline", "popcnt", "mmx",          |
  |                         | "osxsave", "cx8", "mce", "mtrr", "rdtscp", "ht", |
  |                         | "dca", "lahf_lm", "pdcm", "mca", "pdpe1gb",      |
  |                         | "apic", "fsgsbase", "f16c", "pse", "ds", "pni",  |
  |                         | "tm2", "avx2", "aes", "sse2", "ss", "bmi1",      |
  |                         | "bmi2", "pcid", "de", "fpu", "cx16", "pse36",    |
  |                         | "ds_cpl", "movbe", "rdrand", "x2apic"]           |
  | cpu_info_model          | SandyBridge                                      |
  | cpu_info_topology_cores | 12                                               |

[Yahoo-eng-team] [Bug 1430127] [NEW] Delete VM low probablity lead to vxlan flow lost

2015-03-09 Thread KaiLin
Public bug reported:

ENV:
3 controllers
Juno OpenStack

SYMPTOM:
1. create a VM on a vxlan network
2. log in to the VM: it cannot be contacted
3. check the flows in br-tun: the VM's incoming flows in br-tun are lost

CAUSE:
1. In the neutron-openvswitch-agent log, the error info is:
in _del_fdb_flow: lvm.tun_ofports.remove(ofport) KeyError: '28'
The reason is that the flow for ofport=28 is deleted twice. Why it is deleted twice is explained below.
2. The error leaves the flow lock flag open, and the delete operation gets interleaved with the add process. So after a VM is created, the normal flow table is installed but is then deleted by the earlier, abnormal flow operation.

Why is the flow deleted twice? When the VM is deleted, nova deletes the tap device; the ovs-agent detects the change and then does the RPC handling (l2pop fdb_remove) on server1.
Then nova sends a delete-port request to neutron, and the neutron server does the RPC handling again on server2.
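
A minimal, self-contained sketch (hypothetical names, not the merged
fix): make the fdb_remove handling idempotent, so a duplicate l2pop
notification for the same tunnel port no longer raises KeyError:

    class LocalVLANMapping(object):
        def __init__(self):
            self.tun_ofports = set()

    def del_fdb_flow(lvm, ofport):
        if ofport not in lvm.tun_ofports:
            # Duplicate notification: nothing left to clean up, and no
            # KeyError to abort later flow programming.
            return
        lvm.tun_ofports.remove(ofport)
        # ... delete the flood-to-tun flow for this ofport here ...

    lvm = LocalVLANMapping()
    lvm.tun_ofports.add('28')
    del_fdb_flow(lvm, '28')
    del_fdb_flow(lvm, '28')  # second call no longer raises KeyError: '28'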

** Affects: neutron
 Importance: Undecided
 Assignee: KaiLin (linkai3)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => KaiLin (linkai3)

** Summary changed:

- Delete VM low probablity lead to vxlan upstream flow lost
+ Delete VM low probablity lead to vxlan flow lost

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430127

Title:
  Delete VM low probablity lead to vxlan flow lost

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  ENV:
  3 controllers
  Juno OpenStack

  SYMPTOM:
  1. create a VM on a vxlan network
  2. log in to the VM: it cannot be contacted
  3. check the flows in br-tun: the VM's incoming flows in br-tun are lost

  CAUSE:
  1. In the neutron-openvswitch-agent log, the error info is:
  in _del_fdb_flow: lvm.tun_ofports.remove(ofport) KeyError: '28'
  The reason is that the flow for ofport=28 is deleted twice. Why it is deleted twice is explained below.
  2. The error leaves the flow lock flag open, and the delete operation gets interleaved with the add process. So after a VM is created, the normal flow table is installed but is then deleted by the earlier, abnormal flow operation.

  Why is the flow deleted twice? When the VM is deleted, nova deletes the tap device; the ovs-agent detects the change and then does the RPC handling (l2pop fdb_remove) on server1.
  Then nova sends a delete-port request to neutron, and the neutron server does the RPC handling again on server2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430100] [NEW] vpnaas service doesn't work caused by a refactoring commit

2015-03-09 Thread Hua Zhang
Public bug reported:

The refactoring commit 56fd82 moves the router_info and NAT rule
handling from the l3-agent into the vpn device driver, which causes two
problems:

1. The router is maintained in the driver, and not the VPN service. The
router instance should not be deleted.

2. NAT rule handling has been moved from the l3-agent into the vpn
device driver, but some code in the vpn device driver is still referring
to the NAT-rule-related methods in the l3-agent.

** Affects: neutron
 Importance: Undecided
 Assignee: Hua Zhang (zhhuabj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hua Zhang (zhhuabj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430100

Title:
  vpnaas service doesn't work caused by a refactoring commit

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The refactoring commit 56fd82 moves the router_info and NAT rule
  handling from the l3-agent into the vpn device driver, which causes
  two problems:

  1. The router is maintained in the driver, and not the VPN service.
  The router instance should not be deleted.

  2. NAT rule handling has been moved from the l3-agent into the vpn
  device driver, but some code in the vpn device driver is still
  referring to the NAT-rule-related methods in the l3-agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429755] [NEW] Fix difficult log output in nova/nova/network/linux_net.py

2015-03-09 Thread SHIGEMATSU Mitsuhiro
Public bug reported:

 When reloading dnsmasq, the log message "Hupping dnsmasq threw ..." is
a little difficult to understand.

** Affects: nova
 Importance: Undecided
 Assignee: SHIGEMATSU Mitsuhiro (pshige)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => SHIGEMATSU Mitsuhiro (pshige)

** Summary changed:

- Fix wrong log output in nova/nova/network/linux_net.py
+ Fix difficult log output in nova/nova/network/linux_net.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429755

Title:
  Fix difficult log output in nova/nova/network/linux_net.py

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
   When reloading dnsmasq, the log message "Hupping dnsmasq threw ..."
  is a little difficult to understand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429740] [NEW] Using DVR - Instance with a floating IP can't reach other instances connected to a different network

2015-03-09 Thread Itzik Brown
Public bug reported:


A distributed router with interfaces connected to two private networks
and to an external network.
Instances without a floating IP connected to network A can reach other
instances connected to network B,
but instances with a floating IP connected to network A can't reach
other instances connected to network B.

Version
===
openstack-neutron-2014.2.2-1.el7ost.noarch
python-neutron-2014.2.2-1.el7ost.noarch
openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429740

Title:
  Using DVR - Instance with a floating IP can't reach other instances
  connected to a different network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  A distributed router has interfaces connected to two private networks
  and to an external network.
  Instances without a floating IP connected to network A can reach other
  instances connected to network B,
  but instances with a floating IP connected to network A can't reach
  other instances connected to network B.

  Version
  ===
  openstack-neutron-2014.2.2-1.el7ost.noarch
  python-neutron-2014.2.2-1.el7ost.noarch
  openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
  openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429753] [NEW] Improve performance of security groups rpc-related code

2015-03-09 Thread Eugene Nikanorov
Public bug reported:

In a case when a large number of VMs (> 2-3 thousand) reside in one L2
network, security group listing for ports requested from OVS agents
consumes a significant amount of CPU.

When a VM is spawned on such a network, every OVS agent requests updated
security group info for each of its devices.
The total time needed to process all the RPC requests caused by one VM
spawn may reach tens of CPU-seconds.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429753

Title:
  Improve performance of security groups rpc-related code

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In a case when a large number of VMs (> 2-3 thousand) reside in one L2
  network, security group listing for ports requested from OVS agents
  consumes a significant amount of CPU.

  When a VM is spawned on such a network, every OVS agent requests
  updated security group info for each of its devices.
  The total time needed to process all the RPC requests caused by one VM
  spawn may reach tens of CPU-seconds.
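
  A back-of-envelope sketch of the fan-out; the numbers are illustrative,
  not measurements from this report:

    # Illustrative arithmetic only -- not neutron code.
    vms_on_network = 3000        # ports sharing one L2 network
    agents = 100                 # OVS agents hosting those ports
    devices_per_agent = vms_on_network // agents

    # One VM spawn makes every agent refresh all of its local devices:
    recomputations = agents * devices_per_agent
    print(recomputations)        # 3000 security-group computations; at
                                 # ~10 ms each that is tens of cpu-seconds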

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429754] [NEW] always-true conditional in portsecurity_db

2015-03-09 Thread YAMAMOTO Takashi
Public bug reported:

portsecurity_db has a dubious conditional,
attrs.is_attr_set('security_group'), which seems to be always true.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429754

Title:
  always-true conditional in portsecurity_db

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  portsecurity_db has a dubious conditional,
  attrs.is_attr_set('security_group'), which seems to be always true.
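
  For reference, a minimal sketch of why the call is always true; the
  helper below mirrors the semantics of attrs.is_attr_set, and the
  "intended" call is only a guess:

    # Sketch -- mirrors neutron.api.v2.attributes.is_attr_set semantics.
    ATTR_NOT_SPECIFIED = object()

    def is_attr_set(attribute):
        return attribute is not None and attribute is not ATTR_NOT_SPECIFIED

    # A literal non-empty string is always "set":
    print(is_attr_set('security_group'))             # True, unconditionally

    # Presumably the intent was to test the port's attribute value:
    port = {'security_groups': ATTR_NOT_SPECIFIED}
    print(is_attr_set(port.get('security_groups')))  # False here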

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429759] [NEW] Fix wrong log output in nova/nova/tests/unit/fake_volume.py

2015-03-09 Thread SHIGEMATSU Mitsuhiro
Public bug reported:

When beginning to detach a volume, the wrong log output "beging
detaching volume .." occurs.

** Affects: nova
 Importance: Undecided
 Assignee: SHIGEMATSU Mitsuhiro (pshige)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => SHIGEMATSU Mitsuhiro (pshige)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429759

Title:
  Fix wrong log output in nova/nova/tests/unit/fake_volume.py

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When beginning to detach a volume, the wrong log output "beging
  detaching volume .." occurs.
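
  A sketch of the corrected message; the function name is an assumption,
  not the actual test stub:

    # Hypothetical fix -- not the actual nova test code.
    import logging

    LOG = logging.getLogger(__name__)

    def begin_detaching(volume_id):
        # 'beging detaching volume' corrected to:
        LOG.info('begin detaching volume %s', volume_id)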

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429729] [NEW] networks_associate V2 test case call networks_associate V2.1 controller

2015-03-09 Thread lvmxh
Public bug reported:

The networks_associate V2 test case should only focus on testing the V2
API; it should not test the V2.1 API.

This is a bug, and it blocks this blueprint:
https://blueprints.launchpad.net/openstack/?searchtext=v3-api-policy

** Affects: nova
 Importance: Undecided
 Assignee: lvmxh (shaohef)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => lvmxh (shaohef)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429729

Title:
  networks_associate V2 test case call networks_associate V2.1
  controller

Status in OpenStack Compute (Nova):
  New

Bug description:
  The networks_associate V2 test case should only focus on testing the
  V2 API; it should not test the V2.1 API.

  This is a bug, and it blocks this blueprint:
  https://blueprints.launchpad.net/openstack/?searchtext=v3-api-policy
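
  As an illustration of the intended separation (module paths and class
  names are assumptions, not the actual nova test tree):

    # Hypothetical sketch -- the real module layout may differ.
    from nova.api.openstack.compute.contrib import networks_associate as v2
    from nova import test

    class NetworksAssociateTestV2(test.NoDBTestCase):
        def setUp(self):
            super(NetworksAssociateTestV2, self).setUp()
            # Exercise the V2 controller; a V2.1 test would import its
            # controller from the v2.1 plugin tree instead.
            self.controller = v2.NetworkAssociateActionController()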

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429723] [NEW] Column role_id of table assignment should be properly reference with table role

2015-03-09 Thread Dave Chen
Public bug reported:

'role_id' should reference the 'id' column of the 'role' table, but this
is not specified in the current code; see
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L404

It seems the upgrade script is okay:
https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/038_add_assignment_table.py#L39

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: New

** Description changed:

- 'role_id' should be referenced with 'id' of 'role', but this is not specified 
in current code, see
+ 'role_id' should be referenced with 'id' of 'role' table, but this is not 
specified in current code, see
  
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L404
- 
  
  It seems the upgrade script is okay.
  
https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/038_add_assignment_table.py#L39

** Changed in: keystone
 Assignee: (unassigned) => Dave Chen (wei-d-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1429723

Title:
  Column role_id of table assignment should be properly reference with
  table role

Status in OpenStack Identity (Keystone):
  New

Bug description:
  'role_id' should reference the 'id' column of the 'role' table, but
  this is not specified in the current code; see
  https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L404

  It seems the upgrade script is okay:
  https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/038_add_assignment_table.py#L39
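
  For illustration, the foreign key the report expects versus what the
  model defines (a SQLAlchemy sketch, not the literal keystone source):

    # Illustrative sketch only -- not the literal keystone model.
    import sqlalchemy as sql

    meta = sql.MetaData()
    role = sql.Table('role', meta,
                     sql.Column('id', sql.String(64), primary_key=True))

    # With the FK the report expects; the actual model omits the
    # ForeignKey (a follow-up in this thread explains why):
    assignment = sql.Table('assignment', meta,
                           sql.Column('role_id', sql.String(64),
                                      sql.ForeignKey('role.id'),
                                      nullable=False))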

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1429723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429737] [NEW] Do not notify dead DHCP agent about removed network

2015-03-09 Thread Eugene Nikanorov
Public bug reported:

In cases when networks are removed from a dead DHCP agent in the process
of autorescheduling, notifying the dead agent leaves messages on its queue.
If that agent is started again, those messages are the first it will process.
If there are a dozen such messages, their processing may overlap with the
processing of active networks, so the DHCP agent may end up disabling DHCP
for active networks that it hosts.

An example of such a problem is a system with one DHCP agent that is
stopped, has its networks removed, and is then started again. The more
networks the agent hosts, the higher the chance that some
network.delete.end notification is processed after DHCP is enabled on
that network.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429737

Title:
  Do not notify dead DHCP agent about removed network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In cases when networks are removed from a dead DHCP agent in the
  process of autorescheduling, notifying the dead agent leaves messages
  on its queue.
  If that agent is started again, those messages are the first it will
  process.
  If there are a dozen such messages, their processing may overlap with
  the processing of active networks, so the DHCP agent may end up
  disabling DHCP for active networks that it hosts.

  An example of such a problem is a system with one DHCP agent that is
  stopped, has its networks removed, and is then started again. The more
  networks the agent hosts, the higher the chance that some
  network.delete.end notification is processed after DHCP is enabled on
  that network.
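
  A minimal sketch of the guard being suggested, assuming an agent object
  that exposes an is_active flag (names are assumptions, not the actual
  neutron scheduler code):

    # Hypothetical guard -- illustrative names only.
    class Agent(object):
        def __init__(self, host, is_active):
            self.host = host
            self.is_active = is_active

    def notify_network_removed(cast, agent, network_id):
        if not agent.is_active:
            # Skip dead agents: a queued network.delete.end would be
            # replayed after restart and could race with active networks.
            return
        cast('network_delete_end', {'network_id': network_id}, agent.host)

    # No message is queued for the dead agent:
    notify_network_removed(print, Agent('node-1', is_active=False), 'net-42')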

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429775] [NEW] Fix wrong log output in neutron/neutron/agent/linux/dhcp.py

2015-03-09 Thread SHIGEMATSU Mitsuhiro
Public bug reported:

When all subnets have turned off DHCP and the process is being killed,
the wrong log output "Killing dhcpmasq for network since all subnets have
turned off DHCP ..." occurs.

** Affects: neutron
 Importance: Undecided
 Assignee: SHIGEMATSU Mitsuhiro (pshige)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => SHIGEMATSU Mitsuhiro (pshige)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429775

Title:
  Fix wrong log output in neutron/neutron/agent/linux/dhcp.py

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When all subnets have turned off DHCP and the process is being killed,
  the wrong log output "Killing dhcpmasq for network since all subnets
  have turned off DHCP ..." occurs.
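
  A sketch of the corrected line, assuming the message lives in a
  LOG.debug call (not the literal neutron diff):

    # Hypothetical fix -- 'dhcpmasq' spelled correctly and the network
    # made explicit. Not the literal neutron source.
    import logging

    LOG = logging.getLogger(__name__)
    network_id = 'net-42'  # placeholder

    LOG.debug('Killing dnsmasq for network %s since all subnets have '
              'turned off DHCP', network_id)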

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429723] Re: Column role_id of table assignment should be properly referenced with table role

2015-03-09 Thread Henry Nash
No, we explicitly drop this constraint in the 062 migration.  The reason
is that roles are stored in a different backend to the assignment table
- and it isn't safe to have FK relationships across backends.
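
As a sketch, dropping such a cross-backend FK with sqlalchemy-migrate
looks roughly like this (not the literal 062 migration):

    # Illustrative only -- not the literal keystone 062 migration.
    import sqlalchemy as sql
    from migrate import ForeignKeyConstraint

    def upgrade(migrate_engine):
        meta = sql.MetaData(bind=migrate_engine)
        assignment = sql.Table('assignment', meta, autoload=True)
        role = sql.Table('role', meta, autoload=True)
        # Roles may live in a different backend, so the FK is dropped:
        ForeignKeyConstraint(columns=[assignment.c.role_id],
                             refcolumns=[role.c.id]).drop()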

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1429723

Title:
  Column role_id of table assignment should be properly referenced
  with table role

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  'role_id' should be referenced with 'id' of 'role' table, but this is not 
specified in current code, see
  
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L404

  It seems the upgrade script is okay.
  
https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/038_add_assignment_table.py#L39

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1429723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426816] Re: PageObject's switch_window doesn't switch to new tab

2015-03-09 Thread Wu Hong Guang
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1426816

Title:
  PageObject's switch_window doesn't switch to new tab

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  switch_window works when the page is opened in a new window, but
  doesn't work when the page is opened in a new tab.

  Take test_dashboard_help_redirection for example: the help link tries
  to open docs.openstack.org, the Chrome browser opens the link in a new
  tab, and switch_window won't switch to the new tab.
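
  A tab-safe variant is sketched below, assuming a Selenium WebDriver
  instance named driver (illustrative, not the actual Horizon PageObject
  helper):

    # Sketch only -- not the horizon integration-test code.
    def switch_to_new_window(driver, original_handle):
        # Works for new windows and new tabs alike: pick whichever
        # handle is not the one we started from.
        for handle in driver.window_handles:
            if handle != original_handle:
                driver.switch_to.window(handle)
                return handle
        raise RuntimeError('no new window or tab was opened')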

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1426816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp