[Yahoo-eng-team] [Bug 1519577] Re: project drop down menu reflect rename project immediately

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/316307
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f3825a5ff1bf927353a309812a7edb2afb543370
Submitter: Jenkins
Branch: master

commit f3825a5ff1bf927353a309812a7edb2afb543370
Author: Vijay Katam 
Date:   Fri May 13 11:00:36 2016 -0700

Fix project name refresh in project menu bar

When a project name is updated inline, the name in the menu bar
does not get updated even after reloading/refreshing the page. This happens
because the menu display is retrieved from a token in the session, which
is not updated until a switch to a different project is made.

This fix addresses the issue by retrieving the project name from the request
context variable authorized_tenants, which has up-to-date information,
instead of from the session.

Fix docstring to conform to PEP 8. Update template tests to account for
the tenant name in the menu bar. The tests do not set the session variable
for the tenant name, so previously the template did not render the active
tenant name; since this fix no longer uses the session, the tests need to
account for it.

Change-Id: Ia0d5b51699197904d1a578d188762ef39a4b67cf
Closes-Bug: 1519577


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1519577

Title:
  project drop down menu reflect rename project immediately

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  As admin, I'm a member of the 'admin' and 'demo' projects. If I rename
  'demo' to 'demo_renamed', it is listed correctly under Identity -
  Projects, reflecting the latest change. But the project drop-down menu
  in the top bar does not update: even after a refresh it still shows the
  old value, 'demo'. Everything still works, because (in the URL) the
  project is identified by UUID, not by name.

  Only when I switch to the renamed project are the items in the
  drop-down menu refreshed.
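
  A minimal sketch of the idea behind the fix, assuming a Django-style
  Horizon request object; the helper name is invented:

      # Hypothetical helper illustrating the committed fix: read the project
      # name from request.user.authorized_tenants (refreshed from Keystone)
      # instead of the session token, which goes stale after a rename.
      def get_active_project_name(request):
          project_id = request.user.project_id
          for tenant in getattr(request.user, 'authorized_tenants', None) or []:
              if tenant.id == project_id:
                  return tenant.name
          # fall back to the (possibly stale) name carried by the token
          return request.user.project_name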

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1519577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580898] Re: [RFE] Network node support and improved testing for macvtap agent

2016-08-02 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580898

Title:
  [RFE] Network node support and improved testing for macvtap agent

Status in neutron:
  Expired

Bug description:
  Today, only unit & some basic functional tests are executed for the
  macvtap agent.

  The goal is to extend the macvtap agent to:
  * support non-compute ports via macvlan (allows spawning a single-node
    system with l3, dhcp and so on)
  * add a Tempest test job for it
  * enhance functional tests to also verify L2 connectivity (requires some
    code that can listen on a tap or macvtap file descriptor and answer
    ping requests via it)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587992] Re: Services panels are not plugable

2016-08-02 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587992

Title:
  Services panels are not plugable

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The Services panels are currently hard-coded to a list of projects.

  Other projects that have dashboard plugins expose similar service
  heartbeat data and should be able to show it to admins.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587679] Re: old instance wizard default

2016-08-02 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587679

Title:
  old instance wizard default

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Mitaka introduced a regression to the old instance launch wizard where
  it no longer checks the default security group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609217] [NEW] DVR: dvr router should not exist in not-binded network node

2016-08-02 Thread LIU Yulong
Public bug reported:

ENV:
stable/mitaka
hosts:
compute1 (nova-compute, l3-agent (dvr), metadata-agent)
compute2 (nova-compute, l3-agent (dvr), metadata-agent)
network1 (l3-agent (dvr_snat), metadata-agent, dhcp-agent)
network2 (l3-agent (dvr_snat), metadata-agent, dhcp-agent)

How to reproduce? (scenario 1)
set: dhcp_agents_per_network = 2

1. Create a DVR router:
neutron router-create --ha False --distributed True test1

2. Create a network & subnet with DHCP enabled:
neutron net-create test1
neutron subnet-create --enable-dhcp test1 --name test1 192.168.190.0/24

3. Attach the router to the subnet:
neutron router-interface-add test1 subnet=test1

Scenario 2:
Change the network2 node deployment to run only metadata-agent and dhcp-agent.
The qdhcp namespace and the VM can still ping each other,
so the network node's qrouter namespace is not used.

Code:
The function at the following location should not return the DVR router ID in
scenario 1:
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L263
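
A sketch of the behaviour this report asks for; all names here are invented,
not neutron's actual API:

    # Invented toy model: a network node that only runs dhcp/metadata agents
    # should not get a qrouter namespace for a DVR router unless it holds
    # the router's SNAT binding or a port the router must service locally.
    def routers_needed_on_host(host, snat_bindings, serviceable_ports):
        """Return router ids that have a reason to exist on `host`."""
        return sorted(snat_bindings.get(host, set()) |
                      serviceable_ports.get(host, set()))

    # Example: router 'r1' is SNAT-bound to network1 only, so network2
    # (scenario 2: dhcp/metadata agents only) should get nothing back.
    snat_bindings = {'network1': {'r1'}}
    serviceable_ports = {'compute1': {'r1'}}
    print(routers_needed_on_host('network2', snat_bindings, serviceable_ports))  # []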

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  ENV:
  stable/mitaka
  hosts:
  compute1 (nova-compute, l3-agent (dvr), metedate-agent)
  compute2 (nova-compute, l3-agent (dvr), metedate-agent)
- network1 (l3-agent (dvr_snat), metedate-agent, dhcp-agent)
- network2 (l3-agent(dvr_snat), metedate-agent, dhcp-agent)
+ network1 (l3-agent (dvr_snat), metedata-agent, dhcp-agent)
+ network2 (l3-agent(dvr_snat), metedata-agent, dhcp-agent)
  
  How to reproduce? (scenario 1)
  set: dhcp_agents_per_network = 2
  
  1. create a DVR router:
  neutron router-create --ha False --distributed True test1
  
  2. Create a network & subnet with dhcp enabled.
  neutron net-create test1
  neutron subnet-create --enable-dhcp test1 --name test1 192.168.190.0/24
  
  3. Attach the router and subnet
  neutron router-interface-add test1 subnet=test1
  
- 
  And for another scenario 2:
- change the network2 node deployment to only run metedate-agent, dhcp-agent.
+ change the network2 node deployment to only run metedata-agent, dhcp-agent.
  Both in the qdhcp-namespace and the VM could ping each other.
  So the network node qrouter-namespace is not used.
- 
  
  Code:
  The function in following position should not return the DVR router id in 
scenario 1.
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L249

** Description changed:

  ENV:
  stable/mitaka
  hosts:
  compute1 (nova-compute, l3-agent (dvr), metedate-agent)
  compute2 (nova-compute, l3-agent (dvr), metedate-agent)
  network1 (l3-agent (dvr_snat), metedata-agent, dhcp-agent)
  network2 (l3-agent(dvr_snat), metedata-agent, dhcp-agent)
  
  How to reproduce? (scenario 1)
  set: dhcp_agents_per_network = 2
  
  1. create a DVR router:
  neutron router-create --ha False --distributed True test1
  
  2. Create a network & subnet with dhcp enabled.
  neutron net-create test1
  neutron subnet-create --enable-dhcp test1 --name test1 192.168.190.0/24
  
  3. Attach the router and subnet
  neutron router-interface-add test1 subnet=test1
  
  And for another scenario 2:
  change the network2 node deployment to only run metedata-agent, dhcp-agent.
  Both in the qdhcp-namespace and the VM could ping each other.
  So the network node qrouter-namespace is not used.
  
  Code:
  The function in following position should not return the DVR router id in 
scenario 1.
- 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L249
+ 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L263

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609217

Title:
  DVR: dvr router should not exist in not-binded network node

Status in neutron:
  New

Bug description:
  ENV:
  stable/mitaka
  hosts:
  compute1 (nova-compute, l3-agent (dvr), metedate-agent)
  compute2 (nova-compute, l3-agent (dvr), metedate-agent)
  network1 (l3-agent (dvr_snat), metedata-agent, dhcp-agent)
  network2 (l3-agent(dvr_snat), metedata-agent, dhcp-agent)

  How to reproduce? (scenario 1)
  set: dhcp_agents_per_network = 2

  1. create a DVR router:
  neutron router-create --ha False --distributed True test1

  2. Create a network & subnet with dhcp enabled.
  neutron net-create test1
  neutron subnet-create --enable-dhcp test1 --name test1 192.168.190.0/24

  3. Attach the router and subnet
  neutron router-interface-add test1 subnet=test1

  And for another scenario 2:
  change the network2 node deployment to only run metedata-agent, dhcp-agent.
  Both in the qdhcp-namespace and the VM could ping each other.
  So the network node qrouter-namespace is not used.

  Code:
  The function in following position should not return the DVR router id in 
scenario 1.
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L263


[Yahoo-eng-team] [Bug 1609213] [NEW] gate-neutron-fwaas-dsvm-tempest failure

2016-08-02 Thread YAMAMOTO Takashi
Public bug reported:

eg.

http://logs.openstack.org/41/349341/2/check/gate-neutron-fwaas-dsvm-
tempest/42b33ce/logs/screen-q-l3.txt.gz

+ functions-common:_run_process:1440   :   [[ -n '' ]]
+ functions-common:_run_process:1443   :   setsid /usr/local/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-file
+ functions-common:_run_process:1443   :   echo 22185
+ functions-common:_run_process:1447   :   exit 0
usage: neutron-l3-agent [-h] [--config-dir DIR] [--config-file PATH] [--debug]
[--log-config-append PATH]
[--log-date-format DATE_FORMAT] [--log-dir LOG_DIR]
[--log-file PATH] [--nodebug] [--nouse-syslog]
[--noverbose] [--nowatch-log-file]
[--state_path STATE_PATH]
[--syslog-log-facility SYSLOG_LOG_FACILITY]
[--use-syslog] [--verbose] [--version]
[--watch-log-file]
neutron-l3-agent: error: argument --config-file: expected one argument
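
The trailing "--config-file" with no value in the setsid command above is
what trips the parser. A minimal reproduction, assuming a stock
argparse-style option parser:

    import argparse

    parser = argparse.ArgumentParser(prog='neutron-l3-agent')
    # each --config-file expects exactly one PATH value
    parser.add_argument('--config-file', action='append')
    # a trailing --config-file with no value reproduces the error above
    parser.parse_args(['--config-file', '/etc/neutron/neutron.conf',
                       '--config-file'])
    # -> neutron-l3-agent: error: argument --config-file: expected one argument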

** Affects: devstack
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => In Progress

** Changed in: devstack
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609213

Title:
  gate-neutron-fwaas-dsvm-tempest failure

Status in devstack:
  In Progress
Status in neutron:
  In Progress

Bug description:
  eg.

  http://logs.openstack.org/41/349341/2/check/gate-neutron-fwaas-dsvm-
  tempest/42b33ce/logs/screen-q-l3.txt.gz

  + functions-common:_run_process:1440   :   [[ -n '' ]]
  + functions-common:_run_process:1443   :   setsid /usr/local/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-file
  + functions-common:_run_process:1443   :   echo 22185
  + functions-common:_run_process:1447   :   exit 0
  usage: neutron-l3-agent [-h] [--config-dir DIR] [--config-file PATH] [--debug]
  [--log-config-append PATH]
  [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR]
  [--log-file PATH] [--nodebug] [--nouse-syslog]
  [--noverbose] [--nowatch-log-file]
  [--state_path STATE_PATH]
  [--syslog-log-facility SYSLOG_LOG_FACILITY]
  [--use-syslog] [--verbose] [--version]
  [--watch-log-file]
  neutron-l3-agent: error: argument --config-file: expected one argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1609213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607699] Re: floating ip mangle iptables rules incorrect format

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/348805
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=244ef910d5dc03a8d53d969ad0b62bb973c10f3b
Submitter: Jenkins
Branch: master

commit 244ef910d5dc03a8d53d969ad0b62bb973c10f3b
Author: Kevin Benton 
Date:   Wed Jul 27 17:39:57 2016 -0700

Set prefix on floating_ip_mangle rules

Set the /32 prefix that iptables adds internally so that our format
matches the iptables-save format and we don't unnecessarily re-apply
rules.

Testing for this is provided by enabling the IPTables convergence
check in I6bee1d51155488e91857ee8bc45470d6a224fa37

Closes-Bug: #1607699
Change-Id: I0088636d2f8409f0f6f17b3ed2288f6edfac1e68


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607699

Title:
  floating ip mangle iptables rules incorrect format

Status in neutron:
  Fix Released

Bug description:
  The floating IP iptables mangle rules are generated without a prefix
  on the source address. iptables converts this into a /32, so every time
  the _apply function is called the iptables_manager thinks it has to
  delete a rule (the one with the prefix) and add a rule (the one
  without the prefix). This is unnecessary performance overhead in the
  L3 agent.
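
  A sketch of the normalization the fix implies, using the netaddr
  library; this is an illustration, not the agent's literal code:

      import netaddr

      def normalize_source(addr):
          # iptables-save prints host addresses with an explicit /32, so
          # emitting the same form keeps naive rule comparison stable
          return str(netaddr.IPNetwork(addr).cidr)

      print(normalize_source('10.0.0.5'))     # 10.0.0.5/32
      print(normalize_source('10.0.0.0/24'))  # 10.0.0.0/24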

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546910] Re: args pass to securitygroup precommit event should include the complete info

2016-08-02 Thread Isaku Yamahata
** Also affects: networking-odl
   Importance: Undecided
   Status: New

** Changed in: networking-odl
   Importance: Undecided => High

** Changed in: networking-odl
   Status: New => In Progress

** Changed in: networking-odl
 Assignee: (unassigned) => Manjeet Singh Bhatia (manjeet-s-bhatia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546910

Title:
  args pass to securitygroup precommit event should include the complete
  info

Status in networking-odl:
  In Progress
Status in neutron:
  In Progress

Bug description:
  We introduced the PRECOMMIT_XXX events, but in securitygroups_db.py the
  kwargs passed to them do not include the complete DB info that the
  AFTER_XXX events carry, for example the ID of the newly created
  security group or rule.
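
  A sketch of the kind of payload this asks for, loosely based on the
  neutron.callbacks registry of that era; treat the exact kwargs as an
  assumption:

      from neutron.callbacks import events, registry, resources

      def _create_security_group(self, context, sg_dict):
          # pass the fully populated dict (including the DB-generated id)
          # with the precommit event, as the AFTER_CREATE event already does
          registry.notify(resources.SECURITY_GROUP, events.PRECOMMIT_CREATE,
                          self, context=context, security_group=sg_dict)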

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1546910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608314] Re: vpn failure with neutron-lib 0.3.0

2016-08-02 Thread YAMAMOTO Takashi
** Changed in: networking-midonet
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608314

Title:
  vpn failure with neutron-lib 0.3.0

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/16/349016/1/check/gate-networking-midonet-
  python27-ubuntu-xenial/102bea5/testr_results.html.gz

  ft18.5: 
midonet.neutron.tests.unit.test_extension_vpnaas.VPNTestCase.test_update_ipsec_site_connection_error_StringException:
 Empty attachments:
stdout

  pythonlogging:'': {{{
  WARNING [stevedore.named] Could not load 
midonet.neutron.plugin_v2.MidonetPluginV2
   WARNING [stevedore.named] Could not load 
neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin
   WARNING [stevedore.named] Could not load 
midonet.neutron.services.vpn.service_drivers.midonet_ipsec.MidonetIPsecVPNDriver
   WARNING [neutron.api.extensions] Extension address-scope not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension auto-allocated-topology not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension availability_zone not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension default-subnetpools not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension dns-integration not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension dvr not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension flavors not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension ip_allocation not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension l3-ha not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension l3-flavors not supported by any 
of loaded plugins
   WARNING [neutron.api.extensions] Extension l3_agent_scheduler not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension metering not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension multi-provider not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension net-mtu not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension network_availability_zone not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension network-ip-availability not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension qos not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension revisions not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension router_availability_zone not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension router-service-type not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension segment not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension tag not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension timestamp_core not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension trunk not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension trunk-details not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension vlan-transparent not supported by 
any of loaded plugins
 ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
   WARNING [neutron.api.extensions] Extension bgp-speaker-router-insertion not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension gateway-device not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension logging-resource not supported by 
any of loaded plugins
   WARNING [neutron.quota.resource_registry] network is already registered
   WARNING [neutron.quota.resource_registry] subnetpool is already registered
   WARNING [neutron.quota.resource_registry] port is already registered
   WARNING [neutron.quota.resource_registry] subnet is already registered
   WARNING [neutron.quota.resource_registry] router is already registered
   WARNING [neutron.quota.resource_registry] floatingip is already registered
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network 1d884b17-a640-4392-9e83-ca704a8338fb: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network ce5d24a2-ab47-4c49-bff0-2e980e009b79: no agents available; 
will retry on subsequent 

[Yahoo-eng-team] [Bug 1489853] Re: when hard-reboot a instance with serial-port multiple times, instance will not start for port exhausted

2016-08-02 Thread Matt Riedemann
** Changed in: nova/liberty
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489853

Title:
  when hard-reboot a instance with serial-port multiple times, instance
  will not start for port exhausted

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Won't Fix
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  My running environment is
  openstack-nova-compute-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch
  openstack-nova-serialproxy-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch

  In my nova.conf, port_range=2:20020

  I booted an instance with two serial ports, and it worked well.
  When I hard-reboot this instance multiple times, it can't start and its
  status is shut-off.
  The log is below:
  2015-08-28 17:06:46.635 7258 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3830, in _create_serial_console_devices
  2015-08-28 17:06:46.635 7258 TRACE oslo_messaging.rpc.dispatcher     console.listen_host))
  2015-08-28 17:06:46.635 7258 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
  2015-08-28 17:06:46.635 7258 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
  2015-08-28 17:06:46.635 7258 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/console/serial.py", line 82, in acquire_port
  2015-08-28 17:06:46.635 7258 TRACE oslo_messaging.rpc.dispatcher     raise exception.SocketPortRangeExhaustedException(host=host)

  After checking the code, I think the hard-reboot code path has a
  problem.

  The function release_port() is only called from cleanup().

  When an instance is deleted, cleanup() is called.

  But when an instance is hard-rebooted, _hard_reboot() only calls
  _destroy().

  It then calls _get_guest_xml(), which in turn calls acquire_port().

  So the instance always acquires ports but never releases them; the
  port range will eventually be exhausted.

  I think _hard_reboot() in libvirt/driver.py should be edited as
  below:

  ...
  self._destroy(instance)
  if CONF.serial_console.enabled:
      serials = self._get_serial_ports_from_instance(instance)
      for hostname, port in serials:
          serial_console.release_port(host=hostname, port=port)
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609193] [NEW] resize error on the same current host with enough vcpu resource

2016-08-02 Thread Charlotte Han
Public bug reported:

Steps to reproduce
==
A chronological list of steps which will bring off the
issue you noticed:
* I had a compute node, set allow_resize_to_same_host=true.

* I booted an instance with flavor m1.tiny:
  | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
* then I booted many instances
* then I did Z
* then I did Z
A list of openstack client commands (with correct argument value)
would be the most descriptive example. To get more information use:

$ nova --debug   

[Yahoo-eng-team] [Bug 1276214] Re: Live migration failure in API doesn't revert task_state to None

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276214

Title:
  Live migration failure in API doesn't revert task_state to None

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  If API times out on a RPC during the processing of a migrate_server it
  does not revert the task_state back to NULL before or after sending
  the error response back to the user. This can prevent further API
  operations on the VM and leave a good VMs in non-operable state with
  the exception of perhaps a delete.

  This is one possible reproducer. I'm not sure if this is always true,
  and I'd appreciate if someone else confirm it.

  1. Somehow make RPC requests hang
  2. Issue a live migration request
  3. The call should return an HTTP error (409 perhaps)
  4. Check VM. It should be in a good state but the task_state stuck in 
'migrating'
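
  A minimal sketch of the cleanup this implies; the wiring is invented
  and only oslo_messaging's MessagingTimeout is assumed real:

      import oslo_messaging as messaging

      def migrate_server_api(instance, rpc_call):
          instance.task_state = 'migrating'
          instance.save()
          try:
              rpc_call()
          except messaging.MessagingTimeout:
              # revert so the VM stays operable after the error is returned
              instance.task_state = None
              instance.save()
              raise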

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545675] Re: Resizing a pinned VM results in inconsistent state

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova
 Assignee: John Garbutt (johngarbutt) => Stephen Finucane (stephenfinucane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545675

Title:
  Resizing a pinned VM results in inconsistent state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  It appears that executing certain resize operations on a pinned
  instance results in inconsistencies in the "state machine" that Nova
  uses to track instances. This was identified using Tempest and
  manifests itself in failures in follow up shelve/unshelve operations.

  ---

  # Steps

  Testing was conducted on a host containing a single-node, Fedora
  23-based (4.3.5-300.fc23.x86_64) OpenStack instance (built with
  DevStack). The '12d224e' commit of Nova was used. The Tempest tests
  (commit 'e913b82') were run using modified flavors, as seen below:

  nova flavor-create m1.small_nfv 420 2048 0 2
  nova flavor-create m1.medium_nfv 840 4096 0 4
  nova flavor-key 420 set "hw:numa_nodes=2"
  nova flavor-key 840 set "hw:numa_nodes=2"
  nova flavor-key 420 set "hw:cpu_policy=dedicated"
  nova flavor-key 840 set "hw:cpu_policy=dedicated"

  cd $TEMPEST_DIR
  cp etc/tempest.conf etc/tempest.conf.orig
  sed -i "s/flavor_ref = .*/flavor_ref = 420/" etc/tempest.conf
  sed -i "s/flavor_ref_alt = .*/flavor_ref_alt = 840/" etc/tempest.conf

  Tests were run in the order given below.

  1. 
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  2. 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server
  3. 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert
  4. 
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  5. 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server

  Like so:

  ./run_tempest.sh --
  tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance

  # Expected Result

  The tests should pass.

  # Actual Result

  +---+--------------------------------------+--------+
  | # | test id                              | status |
  +---+--------------------------------------+--------+
  | 1 | 1164e700-0af0-4a4c-8792-35909a88743c |   ok   |
  | 2 | 77eba8e0-036e-4635-944b-f7a8f3b78dc9 |   ok   |
  | 3 | c03aab19-adb1-44f5-917d-c419577e9e68 |   ok   |
  | 4 | 1164e700-0af0-4a4c-8792-35909a88743c |  FAIL  |
  | 5 | c03aab19-adb1-44f5-917d-c419577e9e68 |   ok*  |
  +---+--------------------------------------+--------+

  * this test reports as passing but is actually generating errors. Bad
  test! :)

  One test fails while the other "passes" but raises errors. The
  failures, where raised, are CPUPinningInvalid exceptions:

  CPUPinningInvalid: Cannot pin/unpin cpus [1] from the following
  pinned set [0, 25]

  **NOTE:** I also think there are issues with the non-reverted resize
  test, though I've yet to investigate this:

  *
  
tempest.scenario.test_server_advanced_ops.TestServerAdvancedOps.test_resize_server_confirm

  What's worse, this error "snowballs" on successive runs. Because of
  the nature of the failure (a failure to pin/unpin CPUs), we're left
  with a list of CPUs that Nova thinks to be pinned but which are no
  longer actually used. This is reflected by the resource tracker.

  $ openstack server list

  $ cat /opt/stack/logs/screen/n-cpu.log | grep 'Total usable vcpus' | tail -1
  *snip* INFO nova.compute.resource_tracker [*snip*] Total usable vcpus: 40, total allocated vcpus: 8

  The error messages for both are given below, along with examples of
  this "snowballing" CPU list:

  {0}
  tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  [36.713046s] ... FAILED

   Setting instance vm_state to ERROR
   Traceback (most recent call last):
     File "/opt/stack/nova/nova/compute/manager.py", line 2474, in do_terminate_instance
       self._delete_instance(context, instance, bdms, quotas)
     File "/opt/stack/nova/nova/hooks.py", line 149, in inner
       rv = f(*args, **kwargs)
     File "/opt/stack/nova/nova/compute/manager.py", line 2437, in _delete_instance
       quotas.rollback()
     File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
       self.force_reraise()
     File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
       six.reraise(self.type_, self.value, self.tb)
     File "/opt/stack/nova/nova/compute/manager.py", line 2432, in 

[Yahoo-eng-team] [Bug 1587386] Re: Unshelve results in duplicated resource deallocated

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: nova/mitaka
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587386

Title:
  Unshelve results in duplicated resource deallocated

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Description
  ===

  Shelve/unshelve operations fail when using "NFV flavors". This was
  reported on the mailing list initially.

  http://lists.openstack.org/pipermail/openstack-dev/2016-May/095631.html

  Steps to reproduce
  ==

  1. Create a flavor with 'hw:numa_nodes=2', 'hw:cpu_policy=dedicated' and 
'hw:mempage_size=large'
  2. Configure Tempest to use this new flavor
  3. Run Tempest tests

  Expected result
  ===

  All tests will pass.

  Actual result
  =

  The shelve/unshelve Tempest tests always result in a timeout exception 
  being raised, looking similar to the following, from [1]:

  Traceback (most recent call last):
    File "tempest/api/compute/base.py", line 166, in server_check_teardown
      cls.server_id, 'ACTIVE')
    File "tempest/common/waiters.py", line 95, in wait_for_server_status
      raise exceptions.TimeoutException(message)
  2016-05-22 22:25:30.697 13974 ERROR tempest.api.compute.base TimeoutException: Request timed out
  Details: (ServerActionsTestJSON:tearDown) Server cae6fd47-0968-4922-a03e-3f2872e4eb52 failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: SHELVED_OFFLOADED. Current task state: None.

  The following errors are raised in the compute logs:

  Traceback (most recent call last):
    File "/opt/stack/new/nova/nova/compute/manager.py", line 4230, in _unshelve_instance
      with rt.instance_claim(context, instance, limits):
    File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 271, in inner
      return f(*args, **kwargs)
    File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 151, in instance_claim
      self._update_usage_from_instance(context, instance_ref)
    File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 827, in _update_usage_from_instance
      self._update_usage(instance, sign=sign)
    File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 666, in _update_usage
      self.compute_node, usage, free)
    File "/opt/stack/new/nova/nova/virt/hardware.py", line 1482, in get_host_numa_usage_from_instance
      host_numa_topology, instance_numa_topology, free=free))
    File "/opt/stack/new/nova/nova/virt/hardware.py", line 1348, in numa_usage_from_instances
      newcell.unpin_cpus(pinned_cpus)
    File "/opt/stack/new/nova/nova/objects/numa.py", line 94, in unpin_cpus
      pinned=list(self.pinned_cpus))
  CPUPinningInvalid: Cannot pin/unpin cpus [6] from the following pinned set [0, 2, 4]

  [1] http://intel-openstack-ci-logs.ovh/86/319686/1/check/tempest-dsvm-full-nfv/b463722/testr_results.html.gz
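
  A toy illustration of the failure mode in the traceback: the tracker is
  asked to release CPUs that are no longer in its pinned set:

      def unpin_cpus(pinned, cpus):
          if not cpus <= pinned:
              raise ValueError('Cannot pin/unpin cpus %s from the following '
                               'pinned set %s' % (sorted(cpus), sorted(pinned)))
          return pinned - cpus

      print(unpin_cpus({0, 2, 4, 6}, {6}))  # ok: unpins cpu 6
      unpin_cpus({0, 2, 4}, {6})            # raises, as in the traceback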

  Environment
  ===

  1. Exact version of OpenStack you are running. See the following

  Commit '25fdf64'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417723] Re: when using dedicated cpus, the guest topology doesn't match the host

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: nova/mitaka
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417723

Title:
  when using dedicated cpus, the guest topology doesn't match the host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  According to "http://specs.openstack.org/openstack/nova-
  specs/specs/juno/approved/virt-driver-cpu-pinning.html", the topology
  of the guest is set up as follows:

  "In the absence of an explicit vCPU topology request, the virt drivers
  typically expose all vCPUs as sockets with 1 core and 1 thread. When
  strict CPU pinning is in effect the guest CPU topology will be setup
  to match the topology of the CPUs to which it is pinned."

  What I'm seeing is that when strict CPU pinning is in use, the guest
  seems to be configured with multiple threads, even if the host doesn't
  have threading enabled.

  As an example, I set up a flavor with 2 vCPUs and enabled dedicated
  CPUs. I then booted up an instance of this flavor on two separate
  compute nodes, one with hyperthreading enabled and one with
  hyperthreading disabled. In both cases, "virsh dumpxml" gave the
  following topology:

  <topology sockets='1' cores='1' threads='2'/>

  When running on the system with hyperthreading disabled, this should
  presumably have been set to "cores=2 threads=1".

  Taking this a bit further, even if hyperthreading is enabled on the
  host it would be more accurate to only specify multiple threads in the
  guest topology if the vCPUs are actually affined to multiple threads
  of the same host core.  Otherwise it would be more accurate to specify
  the guest topology with multiple cores of one thread each.
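
  A toy of the topology the report argues for, deriving guest threads
  only from whether the pinned host CPUs are sibling threads (invented
  logic, not nova's):

      def expected_guest_topology(vcpus, pinned_to_sibling_threads):
          threads = 2 if pinned_to_sibling_threads else 1
          return {'sockets': 1, 'cores': vcpus // threads, 'threads': threads}

      print(expected_guest_topology(2, False))  # 1 socket, 2 cores, 1 thread
      print(expected_guest_topology(2, True))   # 1 socket, 1 core, 2 threads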

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550317] Re: 'hw:cpu_thread_policy=isolate' does not schedule on non-HT hosts

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: nova/mitaka
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550317

Title:
  'hw:cpu_thread_policy=isolate' does not schedule on non-HT hosts

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  The 'isolate' policy is supposed to function on both hosts with
  HyperThreading (HT) and those without. The former works, but the
  latter does not. This appears to be a regression. Results below.

  ---

  # Platform

  Testing was conducted on two single-node, Fedora 23-based
  (4.3.5-300.fc23.x86_64) OpenStack instances (built with devstack). The
  systems are dual-socket, ten-core machines, with HT enabled on one and
  disabled on the other (2 sockets * 10 cores * 1/2 threads = 20/40
  "pCPUs"; 0-9/0-9,20-29 = node0, 10-19/10-19,30-39 = node1).

  Commit `8bafc9` of Nova was used.

  # Steps

  ## Create flavors

  $ openstack flavor create pinned.isolate \
  --id 103 --ram 2048 --disk 0 --vcpus 4
  $ openstack flavor set pinned.isolate \
  --property "hw:cpu_policy=dedicated" \
  --property "hw:cpu_thread_policy=isolate"

  ## Validate a HT-enabled node

  This should match the expectations of the spec and provide a single thread
  to guests while avoiding other guests scheduling on the other host
  sibling threads. Therefore, the guest should see four sockets, one core
  per socket, and one thread per core.

  $ openstack server create --flavor=pinned.isolate \
  --image=cirros-0.3.4-x86_64-uec --wait test1

  $ sudo virsh list
   Id    Name               State
  ----------------------------------
   3     instance-0003      running

  $ sudo virsh dumpxml 3
  <domain>
    <name>instance-0003</name>
    ...
    <vcpu>4</vcpu>
    <cputune>
      <shares>4096</shares>
      ...
    </cputune>
    ...
    <cpu>
      <topology sockets='4' cores='1' threads='1'/>
    </cpu>
    ...
  </domain>
  $ openstack server delete test1

  No problems here.

  ## Validate a HT-disabled node

  This should work exactly the same here as it did on the HT-enabled host,
  minus the reservation of any thread sibling (there aren't any)

  $ openstack server create --flavor=pinned.isolate \
  --image=cirros-0.3.4-x86_64-uec --wait test1
  Error creating server: test1

  Error creating server

  $ openstack server list
  +--------------------------------------+-------+--------+----------+
  | ID                                   | Name  | Status | Networks |
  +--------------------------------------+-------+--------+----------+
  | 1f212d45-585e-41df-abd7-6abb12ca86a1 | test1 | ERROR  |          |
  +--------------------------------------+-------+--------+----------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1550317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609184] [NEW] [RFE] ML2: Allow retry on db error by precommit

2016-08-02 Thread Isaku Yamahata
Public bug reported:

Allow a retry on retriable DB errors raised by a precommit method.

Currently the ML2 driver manager swallows all exceptions raised by a mechanism
driver and returns an error to the user. However, there are classes of
retriable DB errors (DBDeadlock, StaleDataError, DBDuplicateEntry,
RetryRequest). When a precommit raises one of those errors, it is better to
allow neutron.api.v2.base.Controller to retry the request.

Sometimes those DB errors in precommit are inevitable; subnet operations
in particular touch many resources (port, IP address, etc.).
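
A sketch of one way to honour such errors, assuming oslo_db's retry
decorator; whether this is the right layer is exactly what this RFE is
about:

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True,
                               retry_on_request=True)
    def create_resource(context, resource):
        # a DBDeadlock or RetryRequest raised here, e.g. from a mechanism
        # driver's precommit, would retry the whole transaction instead of
        # being swallowed and returned to the user as an error
        pass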

** Affects: neutron
 Importance: Undecided
 Assignee: Isaku Yamahata (yamahata)
 Status: New


** Tags: ml2 rfe

** Tags added: rfe

** Tags added: ml2

** Changed in: neutron
 Assignee: (unassigned) => Isaku Yamahata (yamahata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609184

Title:
  [RFE] ML2: Allow retry on db error by precommit

Status in neutron:
  New

Bug description:
  Allow a retry on retriable DB errors raised by a precommit method.

  Currently the ML2 driver manager swallows all exceptions raised by a
  mechanism driver and returns an error to the user. However, there are
  classes of retriable DB errors (DBDeadlock, StaleDataError,
  DBDuplicateEntry, RetryRequest). When a precommit raises one of those
  errors, it is better to allow neutron.api.v2.base.Controller to retry
  the request.

  Sometimes those DB errors in precommit are inevitable; subnet
  operations in particular touch many resources (port, IP address, etc.).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537062] Re: Fail to boot vm when set AggregateImagePropertiesIsolation filter and add custom metadata in the Host Aggregate

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
 Assignee: (unassigned) => Alexey Stupnikov (astupnikov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1537062

Title:
  Fail to boot vm when set AggregateImagePropertiesIsolation filter and
  add custom metadata in the Host Aggregate

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  An image with no custom metadata should not affect the
  AggregateImagePropertiesIsolation filter.

  Reproduce steps:

  (1) add Host Aggregate with custom metadata
  +----+-----------+-------------------+--------------+------------+
  | Id | Name      | Availability Zone | Hosts        | Metadata   |
  +----+-----------+-------------------+--------------+------------+
  | 1  | linux-agg | -                 | 'controller' | 'os=linux' |
  +----+-----------+-------------------+--------------+------------+

  (2) add the AggregateImagePropertiesIsolation filter
  scheduler_default_filters = RetryFilter,AggregateImagePropertiesIsolation,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter

  (3) boot vm and error log:
  2016-01-22 21:00:10.834 ERROR oslo_messaging.rpc.dispatcher [req-1cded809-cfe6-4657-8e31-b494f1b3278d admin admin] Exception during message handling: ImageMetaProps object has no attribute 'os'
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 143, in _dispatch_and_reply
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 189, in _dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/manager.py", line 78, in select_destinations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     dests = self.driver.select_destinations(ctxt, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 53, in select_destinations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     selected_hosts = self._schedule(context, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 113, in _schedule
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     spec_obj, index=num)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/host_manager.py", line 532, in get_filtered_hosts
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     hosts, spec_obj, index)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/filters.py", line 89, in get_filtered_objects
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     list_objs = list(objs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/filters.py", line 44, in filter_all
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     if self._filter_one(obj, spec_obj):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filters/__init__.py", line 26, in _filter_one
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     return self.host_passes(obj, filter_properties)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filters/aggregate_image_properties_isolation.py", line 48, in host_passes
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     prop = image_props.get(key)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1609174] [NEW] [api] Document query option (is_domain) for projects

2016-08-02 Thread Steve Martinelli
Public bug reported:

The "is_domain" query parameter is missing from GET /v3/projects
documentation in the API site

It can be seen in the spec repo:
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-projects

edit: This is also missing from the create and update APIs
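
For reference, the behaviour to document, as a hedged python-requests
example (endpoint and token are placeholders):

    import requests

    KEYSTONE = 'http://controller:5000'  # assumed endpoint
    TOKEN = '...'                        # a valid X-Auth-Token

    # list only projects that act as domains
    resp = requests.get(KEYSTONE + '/v3/projects',
                        params={'is_domain': 'true'},
                        headers={'X-Auth-Token': TOKEN})
    print(resp.json()['projects'])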

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

** Summary changed:

- [api] Document query option (is_domain) for listing projects
+ [api] Document query option (is_domain) for projects

** Description changed:

  The "is_domain" query parameter is missing from GET /v3/projects
  documentation in the API site
  
  It can be seen in the spec repo:
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-projects
+ 
+ edit: This is also missing from the create and update APIs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609174

Title:
  [api] Document query option (is_domain) for projects

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The "is_domain" query parameter is missing from GET /v3/projects
  documentation in the API site

  It can be seen in the spec repo:
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-projects

  edit: This is also missing from the create and update APIs

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609175] [NEW] [api] Document query options for GET /projects

2016-08-02 Thread Steve Martinelli
Public bug reported:

The following query options are missing from the GET /projects API site

parents_as_list (key-only, no value expected)
subtree_as_list (key-only, no value expected)
parents_as_ids (key-only, no value expected)
subtree_as_ids (key-only, no value expected)

They are already documented in the specs repo:
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609175

Title:
  [api] Document query options for GET /projects

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The following query options are missing from the GET /projects API
  site

  parents_as_list (key-only, no value expected)
  subtree_as_list (key-only, no value expected)
  parents_as_ids (key-only, no value expected)
  subtree_as_ids (key-only, no value expected)

  They are already documented in the specs repo:
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-project

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609177] [NEW] [api] Add "nocatalog" option to GET /v3/auth/tokens

2016-08-02 Thread Steve Martinelli
Public bug reported:

The following API route is missing from the API site (POST
/v3/auth/tokens?nocatalog), it can be seen in the specs repo:
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
api-v3.html#catalog-opt-out
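
A hedged example of the opt-out (endpoint and credentials are
placeholders):

    import requests

    KEYSTONE = 'http://controller:5000'  # assumed endpoint
    body = {'auth': {'identity': {'methods': ['password'],
                     'password': {'user': {'name': 'admin',
                                           'domain': {'id': 'default'},
                                           'password': 'secret'}}}}}
    # ?nocatalog asks Keystone to omit the service catalog from the response
    resp = requests.post(KEYSTONE + '/v3/auth/tokens?nocatalog', json=body)
    print(resp.headers.get('X-Subject-Token'))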

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609177

Title:
  [api] Add "nocatalog" option to GET /v3/auth/tokens

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The following API route is missing from the API site (POST
  /v3/auth/tokens?nocatalog), it can be seen in the specs repo:
  http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
  api-v3.html#catalog-opt-out

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609178] [NEW] [api] Document GET /auth/catalog, GET /auth/projects, GET /auth/domains

2016-08-02 Thread Steve Martinelli
Public bug reported:

The following routes are missing from the API site, but are available in
the specs repo:

/auth/projects ->
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-available-project-scopes

/auth/domains ->
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-available-domain-scopes

/auth/catalog ->
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-service-catalog

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Wishlist

** Tags added: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609178

Title:
  [api] Document GET /auth/catalog, GET /auth/projects, GET
  /auth/domains

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The following routes are missing from the API site, but are available
  in the specs repo:

  /auth/projects ->
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-available-project-scopes

  /auth/domains ->
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-available-domain-scopes

  /auth/catalog ->
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-service-catalog

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444446] Re: VMware: resizing a instance that has no root disk fails

2016-08-02 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Chinmaya Bharadwaj (acbharadwaj)

** Changed in: nova/mitaka
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/146

Title:
  VMware: resizing a instance that has no root disk fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 1278, in 
_resize_create_ephemerals
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher ds_ref = 
vmdk.device.backing.datastore
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'NoneType' object has no attribute 'backing'
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher 
  2015-04-13 21:25:51.442 7852 ERROR oslo.messaging._drivers.common [-] 
Returning exception 'NoneType' object has no attribute 'backing' to caller

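  The trace shows _resize_create_ephemerals dereferencing vmdk.device
  without a check; a minimal sketch of the kind of guard implied,
  assuming vmdk.device is None for instances that have no root disk
  (the helper name is hypothetical, not the actual fix):

    def _root_disk_datastore_or_none(vmdk):
        # Hypothetical helper: return None instead of raising when the
        # instance has no root disk backing (vmdk.device is None).
        if vmdk is None or vmdk.device is None:
            return None
        return vmdk.device.backing.datastore
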
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609172] [NEW] [api] Document create region by ID

2016-08-02 Thread Steve Martinelli
Public bug reported:

For whatever reason, we allow a user to create a region by ID. It was
documented in our specs:
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-region-with-specific-id

But it is missing from our API reference.
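
The route takes the caller-chosen ID in the URL; a minimal sketch (host
and $TOKEN are placeholders):

  $ curl -s -X PUT http://controller:5000/v3/regions/my-region-id \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"region": {"description": "region with a caller-chosen ID"}}'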

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609172

Title:
  [api] Document create region by ID

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  For whatever reason, we allow a user to create a region by ID. It was
  documented in our specs:
  http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-region-with-specific-id

  But it is missing from our API reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609173] [NEW] [api] Document default domain config behaviour

2016-08-02 Thread Steve Martinelli
Public bug reported:

The following are not documented in our API site:

Document default domain config: GET /domains/config/default
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#domain-configuration-management

As well as these two:
Document default for domain config group: GET /domains/config/{group}/default
Document default for domain config group option: GET 
/domains/config/{group}/{option}/default
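
All three are GETs against the server-side defaults rather than a
specific domain's config; a minimal sketch ('ldap' and 'url' stand in
for a real group and option, host and $TOKEN are placeholders):

  $ curl -s -H "X-Auth-Token: $TOKEN" http://controller:5000/v3/domains/config/default
  $ curl -s -H "X-Auth-Token: $TOKEN" http://controller:5000/v3/domains/config/ldap/default
  $ curl -s -H "X-Auth-Token: $TOKEN" http://controller:5000/v3/domains/config/ldap/url/default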

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609173

Title:
  [api] Document default domain config behaviour

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The following are not documented in our API site:

  Document default domain config: GET /domains/config/default

http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#domain-configuration-management

  As well as these two:
  Document default for domain config group: GET /domains/config/{group}/default
  Document default for domain config group option: GET 
/domains/config/{group}/{option}/default

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609173/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609171] [NEW] [api] Document domain specific roles

2016-08-02 Thread Steve Martinelli
Public bug reported:

- The create and update APIs do not include the domain_id parameter
- The role examples in the APIs do not include the domain_id parameter
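
For illustration, a domain-specific role is simply a role created with
domain_id set; a minimal sketch of the request body the docs should
show (the ID, host and $TOKEN are placeholders):

  $ curl -s -X POST http://controller:5000/v3/roles \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"role": {"name": "support", "domain_id": "<domain-uuid>"}}'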

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609171

Title:
  [api] Document domain specific roles

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  - The create and update APIs do not include the domain_id parameter
  - The role examples in the APIs do not include the domain_id parameter

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527370] Re: IOvisor vif driver fails with some network names because of schema usage in ifc_ctl

2016-08-02 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => Won't Fix

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Changed in: nova/mitaka
 Assignee: (unassigned) => Muawia Khan (muawia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527370

Title:
  IOvisor vif driver fails with some network names because of schema
  usage in ifc_ctl

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Won't Fix
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  IOvisor vif driver fails with some network names because of schema
  usage in ifc_ctl

  This will require changing plug_iovisor code [1] to not use network in
  the ifc_ctl event schema.

  [1]
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439869] Re: encrypted iSCSI volume attach fails when iscsi_use_multipath is enabled

2016-08-02 Thread Matt Riedemann
** Changed in: nova/liberty
   Status: New => Won't Fix

** Changed in: nova/mitaka
 Assignee: Tomoki Sekiyama (tsekiyama) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439869

Title:
  encrypted iSCSI volume attach fails when iscsi_use_multipath is
  enabled

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Won't Fix
Status in OpenStack Compute (nova) mitaka series:
  In Progress
Status in os-brick:
  Fix Released

Bug description:
  When attempting to attach an encrypted iSCSI volume to an instance
  with iscsi_use_multipath set to True in nova.conf an error occurs in
  n-cpu.

  The devstack system being used had the following nova version:

  commit ab25f5f34b6ee37e495aa338aeb90b914f622b9d
  Merge "instance termination with update_dns_entries set fails"

  The following error occurs in n-cpu:

  Stack Trace:

  2015-04-02 13:46:22.641 ERROR nova.virt.block_device 
[req-61f49ff8-b814-42c0-8cf8-ffe7b6a3561c admin admin] [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Driver failed to attach volume 
4778e71c-a1b5-4d
  b5-b677-1d8191468e87 at /dev/vdb
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Traceback (most recent call last):
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 251, in attach
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] device_type=self['device_type'], 
encryption=encryption)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] 
self._disconnect_volume(connection_info, disk_dev)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] six.reraise(self.type_, self.value, 
self.tb)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1051, in attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] encryptor.attach_volume(context, 
**encryption)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 93, in 
attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] self._open_volume(passphrase, 
**kwargs)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 78, in _open_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] check_exit_code=True, 
run_as_root=True)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/utils.py", 
line 206, in execute
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] return processutils.execute(*cmd, 
**kwargs)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
233, in execute
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] cmd=sanitized_cmd)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] ProcessExecutionError: Unexpected error 
while running command.
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf cryptsetup create --key-file=- 36000eb37601bcf0200
  00036c /dev/mapper/36000eb37601bcf02036c
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Exit code: 1
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stdout: u''
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stderr: u''
  2015-04-02 13:46:22.641 TRACE 

[Yahoo-eng-team] [Bug 1609161] [NEW] [api] Document set user's tenant

2016-08-02 Thread Steve Martinelli
Public bug reported:

The following route is not documented:
https://github.com/openstack/keystone/blob/master/keystone/v2_crud/admin_crud.py#L122-L126

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609161

Title:
  [api] Document set user's tenant

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The following route is not documented:
  
https://github.com/openstack/keystone/blob/master/keystone/v2_crud/admin_crud.py#L122-L126

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609164] [NEW] [api] Document implied roles

2016-08-02 Thread Steve Martinelli
Public bug reported:

The following are missing from the new API site:

  - 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-role-inference-rule
  - 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-role-inference-rule
  - 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#confirm-a-role-inference-rule
  - 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#delete-role-inference-rule
  - 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-implied-roles-for-role
  - 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-all-role-inference-rules
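
For reference, the rule CRUD hangs off the prior role; a minimal sketch
of the create and list-all calls (the IDs, host and $TOKEN are
placeholders):

  $ curl -s -X PUT -H "X-Auth-Token: $TOKEN" \
      http://controller:5000/v3/roles/$PRIOR_ROLE_ID/implies/$IMPLIED_ROLE_ID
  $ curl -s -H "X-Auth-Token: $TOKEN" http://controller:5000/v3/role_inferences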

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609164

Title:
  [api] Document implied roles

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The following are missing from the new API site:

- 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#create-role-inference-rule
- 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-role-inference-rule
- 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#confirm-a-role-inference-rule
- 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#delete-role-inference-rule
- 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-implied-roles-for-role
- 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#list-all-role-inference-rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609159] [NEW] [api] add relationship links to all routes

2016-08-02 Thread Steve Martinelli
Public bug reported:

Nearly all the "relationship" links are missing in v3 APIs

For instance, in: http://specs.openstack.org/openstack/keystone-
specs/api/v3/identity-api-v3.html#list-user-s-roles-on-domain

We see: "Relationship: http://docs.openstack.org/api/openstack-
identity/3/rel/domain_user_roles"

But in the new API refs: http://developer.openstack.org/api-
ref/identity/v3/index.html?expanded=list-role-assignments-for-user-on-
domain-detail#list-role-assignments-for-user-on-domain

There is no Relationship; this affects ALL v3 routes and v3 extensions.

** Affects: keystone
 Importance: Wishlist
 Status: Triaged


** Tags: api-ref

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Wishlist

** Tags added: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609159

Title:
  [api] add relationship links to all routes

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Nearly all the "relationship" links are missing in v3 APIs

  For instance, in: http://specs.openstack.org/openstack/keystone-
  specs/api/v3/identity-api-v3.html#list-user-s-roles-on-domain

  We see: "Relationship: http://docs.openstack.org/api/openstack-
  identity/3/rel/domain_user_roles"

  But in the new API refs: http://developer.openstack.org/api-
  ref/identity/v3/index.html?expanded=list-role-assignments-for-user-on-
  domain-detail#list-role-assignments-for-user-on-domain

  There is no Relationship; this affects ALL v3 routes and v3
  extensions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349768
Committed: 
https://git.openstack.org/cgit/openstack/murano/commit/?id=0fe151d07f1f77bec1e8b7827823bf7197b52408
Submitter: Jenkins
Branch:master

commit 0fe151d07f1f77bec1e8b7827823bf7197b52408
Author: LiuNanke 
Date:   Tue Aug 2 11:20:41 2016 +0800

Fix string interpolation to delayed by logging

String interpolation should be delayed to be handled by the logging
code, rather than being done at the point of the logging call.

See the oslo i18n guideline.
* http://docs.openstack.org/developer/oslo.i18n/guidelines.html

References: https://review.openstack.org/#/c/339268

Change-Id: Ie4ea466f951db796fd85277c52be40018dfb01ac
Closes-Bug:#1596829


** Changed in: murano
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596829

Title:
  String interpolation should be delayed at logging calls

Status in Ceilometer:
  New
Status in Glance:
  In Progress
Status in glance_store:
  New
Status in heat:
  New
Status in Ironic:
  Fix Released
Status in masakari:
  In Progress
Status in Murano:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in os-brick:
  Fix Released
Status in os-vif:
  In Progress
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  New
Status in taskflow:
  New

Bug description:
  String interpolation should be delayed to be handled by the logging
  code, rather than being done at the point of the logging call.

  Wrong: LOG.debug('Example: %s' % 'bad')
  Right: LOG.debug('Example: %s', 'good')

  See the following guideline.

  * http://docs.openstack.org/developer/oslo.i18n/guidelines.html
  #adding-variables-to-log-messages

  The rule for it should be added to hacking checks.
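
  The difference matters because a logging call below the effective
  level should not pay the cost of building the message; a minimal
  runnable sketch of both forms:

    import logging

    LOG = logging.getLogger(__name__)
    name = 'good'

    # Wrong: the message string is built eagerly, even when DEBUG
    # records are being discarded.
    LOG.debug('Example: %s' % name)

    # Right: interpolation is deferred to the logging machinery and is
    # skipped entirely when the DEBUG level is filtered out.
    LOG.debug('Example: %s', name)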

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1596829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609138] [NEW] UX: System Information Overflow Fail

2016-08-02 Thread Diana Whitten
Public bug reported:

System Information End points shouldn't ALWAYS overflow ...

https://i.imgur.com/sdKXDVu.png
https://i.imgur.com/9HY3PjO.png

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1609138

Title:
  UX: System Information Overflow Fail

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  System Information End points shouldn't ALWAYS overflow ...

  https://i.imgur.com/sdKXDVu.png
  https://i.imgur.com/9HY3PjO.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1609138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532222] Re: [api-ref]OS-FEDERATION extension missing

2016-08-02 Thread Steve Martinelli
fixed in https://review.openstack.org/#/c/342322/

** Changed in: keystone
Milestone: None => newton-3

** Changed in: keystone
   Status: Confirmed => Fix Released

** Changed in: keystone
 Assignee: Diane Fleming (diane-fleming) => Samuel de Medeiros Queiroz 
(samueldmq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1532222

Title:
  [api-ref]OS-FEDERATION extension missing

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The OS-FEDERATION extension is missing for Identity v3:
  http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
  api-v3-os-federation-ext.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1532222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561558] Re: Untranslated help text found in Launch Instance window

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349442
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e3ef7ae51caa91dc7365e6ded634387f59141c43
Submitter: Jenkins
Branch:master

commit e3ef7ae51caa91dc7365e6ded634387f59141c43
Author: Kenji Ishii 
Date:   Mon Aug 1 19:04:08 2016 +0900

Fix untranslated help text in Launch Instance window

- Source tab
  Flow context is not allowed to write to inner phrasing content,
  At the moment untranslation is occurred by a incorrect value of .html().
  yyy -> elem.html() // value is ''
  xxxyyy -> elem.html() // value is 'xxx'
  This phenomenon is caused by below.

  html in html file
yyyxxx

  html rendered in browser
yyy

- Security Groups tab
  '&' needs html escape

Change-Id: Ic414d232063cc12333ac2d70fd8f351a35a00b6d
Closes-Bug: #1561558


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561558

Title:
  Untranslated help text found in Launch Instance window

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Project > Instances > Launch Instance > Source
  Project > Instances > Launch Instance > Security Groups

  Found the following untranslated help text in Launch Instance window.

  
  [Source tab]
  Image: This option uses an image to boot the instance.
  Instance Snapshot: This option uses an instance snapshot to boot the instance.

  Image (with Create New Volume checked): This options uses an image to
  boot the instance, and creates a new volume to persist instance data.
  You can specify volume size and whether to delete the volume on
  deletion of the instance.

  Volume: This option uses a volume that already exists. It does not
  create a new volume. You can choose to delete the volume on deletion
  of the instance. Note: when selecting Volume, you can only launch one
  instance.

  Volume Snapshot: This option uses a volume snapshot to boot the
  instance, and creates a new volume to persist instance data. You can
  choose to delete the volume on deletion of the instance.

  [Security Groups tab]

  Security groups define a set of IP filter rules that determine how
  network traffic flows to and from an instance. Users can add
  additional rules to an existing security group to further define the
  access options for an instance. To create additional rules, go to the
  Compute | Access & Security view, then find the security group and
  click Manage Rules.

  Translations are already completed in Zanata and other latest
  translations have been imported to the test envrionment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369465] Re: [SRU] nova resize doesn't resize(extend) rbd disk files when using rbd disk backend

2016-08-02 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu Wily)
   Status: In Progress => Won't Fix

** Changed in: cloud-archive/liberty
   Status: Triaged => In Progress

** Changed in: cloud-archive/liberty
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Patch removed: "lp1369465-wily.debdiff"
   
https://bugs.launchpad.net/nova/+bug/1369465/+attachment/4712517/+files/lp1369465-wily.debdiff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369465

Title:
  [SRU] nova resize doesn't resize(extend) rbd disk files when using rbd
  disk backend

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Wily:
  Won't Fix

Bug description:
  [Impact]

Instance resize does not work if the target host has a cached copy of
the root disk. The resize will silently fail but be displayed as
successful in Nova.

  [Test Case]

1. deploy nova-compute with RBDImageBackend enabled
2. boot an instance from a QCOW2 image (to guarantee it gets downloaded for 
reformat prior to re-upload to ceph)
3. nova resize using flavor with larger root disk
4. wait for instance resize migration to complete
5. verify root disk actually resized by checking /proc/partitions in vm
6. do nova resize-confirm
7. repeat steps 3-6

  [Regression Potential]

   * None

  == original description below ==

  tested with nova trunk commit eb860c2f219b79e4f4c5984415ee433145197570

  Configured Nova to use rbd disk backend

  nova.conf

  [libvirt]
  images_type=rbd

  instances booted successfully and instance disks are in rbd pools,
  when perform a nova resize  to an existing instance,  memory and CPU
  changed to be new flavors but instance disks size doesn't change

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1369465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566706] Re: Update help text for [ml2] path_mtu

2016-08-02 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566706

Title:
  Update help text for [ml2] path_mtu

Status in neutron:
  Fix Released

Bug description:
  This help message is not optimal and is overloaded with details. While
  at it, the physnet_mtus option refers to segment_mtu, which has not
  existed since Mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586025] Re: metadata: review fallback to neutronclient on RPC messaging failure

2016-08-02 Thread Ihar Hrachyshka
There is nothing more than a log message that we can do in upstream.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586025

Title:
  metadata: review fallback to neutronclient on RPC messaging failure

Status in neutron:
  Fix Released

Bug description:
  Until Kilo we used the neutronclient library to get data from neutron-
  server to the metadata agent. Since Kilo we correctly introduced RPC
  communication between the server and the metadata agent - with a
  fallback mechanism for those who upgrade agents first. That could lead
  to a situation where the server is still on Juno, which does not have
  the RPC API needed by a Kilo agent. In such a situation, we start
  using neutronclient again.

  
https://github.com/openstack/neutron/blob/stable/liberty/neutron/agent/metadata/agent.py#L131

  The fallback mechanism stayed there and got to Liberty, where it's not
  needed anymore. Also there is a problem here, because we fallback on
  any exception that comes from rpc communication. So for new Liberty
  deployments, that are not supposed to configure metadata agent with
  credentials for neutron api (as since kilo it's not used), on any
  error that happens on rpc, it switches to unconfigured neutron client
  till metadata agent is restarted.

  We should just remove the fallback mechanism as we did in Mitaka:
  https://review.openstack.org/#/c/231065/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609097] [NEW] vif_port_id of ironic port is not updating after neutron port-delete

2016-08-02 Thread Andrey Shestakov
Public bug reported:

Steps to reproduce
==
1. Get list of attached ports of instance:
nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
| ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
+------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
2. Show ironic port. it has vif_port_id in extra with id of neutron port:
ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
+-----------------------+-----------------------------------------------------------+
| Property              | Value                                                     |
+-----------------------+-----------------------------------------------------------+
| address               | 52:54:00:85:19:89                                         |
| created_at            | 2016-07-20T13:15:23+00:00                                 |
| extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
| local_link_connection |                                                           |
| node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
| pxe_enabled           |                                                           |
| updated_at            | 2016-07-22T13:31:29+00:00                                 |
| uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
+-----------------------+-----------------------------------------------------------+
3. Delete neutron port:
neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
4. It is gone from the interface list:
nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
+------------+---------+--------+--------------+----------+
| Port State | Port ID | Net ID | IP addresses | MAC Addr |
+------------+---------+--------+--------------+----------+
+------------+---------+--------+--------------+----------+
5. ironic port still has vif_port_id with neutron's port id:
ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
+-----------------------+-----------------------------------------------------------+
| Property              | Value                                                     |
+-----------------------+-----------------------------------------------------------+
| address               | 52:54:00:85:19:89                                         |
| created_at            | 2016-07-20T13:15:23+00:00                                 |
| extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
| local_link_connection |                                                           |
| node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
| pxe_enabled           |                                                           |
| updated_at            | 2016-07-22T13:31:29+00:00                                 |
| uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
+-----------------------+-----------------------------------------------------------+

Expected result
===
ironic port should not have vif_port_id in extra field.

Actual result
=
ironic port has vif_port_id with id of deleted neutron port.

This can be confusing when a user wants to get the list of unused ports
of an ironic node.
vif_port_id should be removed after neutron port-delete.
Corresponding bug filed on neutron side 
https://bugs.launchpad.net/neutron/+bug/1606229
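
A minimal sketch of the cleanup the nova ironic driver should do on
unplug, assuming the usual ironic JSON-patch port update (the client
setup and patch below are illustrative, not the actual fix):

  from ironicclient import client as ironic_client

  ironic = ironic_client.get_client(1, os_auth_token='<token>',
                                    ironic_url='http://ironic:6385/v1')
  # Drop the stale neutron port reference from the ironic port's extra.
  patch = [{'op': 'remove', 'path': '/extra/vif_port_id'}]
  ironic.port.update('735fcaf5-145d-4125-8701-365c58c6b796', patch)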

** Affects: nova
 Importance: Undecided
 Assignee: Andrey Shestakov (ashestakov)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Andrey Shestakov (ashestakov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609097

Title:
  vif_port_id of ironic port is not updating after neutron port-delete

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Steps to reproduce
  ==
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  2. Show ironic port. it has vif_port_id in extra with id of neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
+---+---+
  | Property | Value |
  
+---+---+
  | address | 52:54:00:85:19:89 |
  | created_at | 

[Yahoo-eng-team] [Bug 1609090] [NEW] [ovs firewall] VM can't be reached regardless of security group with icmp allowed

2016-08-02 Thread Inessa Vasilevskaya
Public bug reported:

Reproduced on upstream devstack.

/etc/neutron/plugins/ml2/ml2_conf.ini has

[securitygroup]
firewall_driver = openvswitch

The issue was triggered by the following script
http://paste.openstack.org/show/545720/ (output from reproduction
http://paste.openstack.org/show/545724/)

Steps to reproduce:
1. create internal network and router connected to this network; set devstack 
public network as gateway.
2. create security group with ping/ssh allowed.
3. boot vm with security group from step 2
4. try to ping created vm

Will result in Destination Host Unreachable.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609090

Title:
  [ovs firewall] VM can't be reached regardless of security group with
  icmp allowed

Status in neutron:
  New

Bug description:
  Reproduced on upstream devstack.

  /etc/neutron/plugins/ml2/ml2_conf.ini has

  [securitygroup]
  firewall_driver = openvswitch

  The issue was triggered by the following script
  http://paste.openstack.org/show/545720/ (output from reproduction
  http://paste.openstack.org/show/545724/)

  Steps to reproduce:
  1. create internal network and router connected to this network; set devstack 
public network as gateway.
  2. create security group with ping/ssh allowed.
  3. boot vm with security group from step 2
  4. try to ping created vm

  Will result in Destination Host Unreachable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609071] [NEW] test_list_pagination_with_href_links fails intermittently

2016-08-02 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/08/347708/2/gate/gate-neutron-dsvm-
api/d70465a/testr_results.html.gz

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_ports.py", line 
80, in test_list_pagination_with_href_links
self._test_list_pagination_with_href_links()
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 485, in 
inner
return f(self, *args, **kwargs)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 476, in 
inner
return f(self, *args, **kwargs)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 663, in 
_test_list_pagination_with_href_links
self._test_list_pagination_iteratively(self._list_all_with_hrefs)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 592, in 
_test_list_pagination_iteratively
len(expected_resources), sort_args
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 643, in 
_list_all_with_hrefs
self.assertNotIn('next', prev_links)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
455, in assertNotIn
self.assertThat(haystack, matcher, message)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: {u'previous': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1&sort_dir=asc&sort_key=name&marker=a963fff7-eaf1-4455-9597-e82528720797&page_reverse=True',
 u'next': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1&sort_dir=asc&sort_key=name&marker=a963fff7-eaf1-4455-9597-e82528720797'}
 matches Contains('next')


1 occurrence in gate queue, a few more in the check queue.

** Affects: neutron
 Importance: Critical
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: api gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Tags added: api gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609071

Title:
  test_list_pagination_with_href_links fails intermittently

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/08/347708/2/gate/gate-neutron-dsvm-
  api/d70465a/testr_results.html.gz

  Traceback (most recent call last):
File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_ports.py", line 
80, in test_list_pagination_with_href_links
  self._test_list_pagination_with_href_links()
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 485, 
in inner
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 476, 
in inner
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 663, 
in _test_list_pagination_with_href_links
  self._test_list_pagination_iteratively(self._list_all_with_hrefs)
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 592, 
in _test_list_pagination_iteratively
  len(expected_resources), sort_args
File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 643, 
in _list_all_with_hrefs
  self.assertNotIn('next', prev_links)
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
455, in assertNotIn
  self.assertThat(haystack, matcher, message)
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: {u'previous': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1&sort_dir=asc&sort_key=name&marker=a963fff7-eaf1-4455-9597-e82528720797&page_reverse=True',
 u'next': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1&sort_dir=asc&sort_key=name&marker=a963fff7-eaf1-4455-9597-e82528720797'}
 matches Contains('next')

  
  1 occurrence in gate queue, a few more in the check queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609071/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608918] Re: Gate failures for neutron in test_dualnet_multi_prefix_dhcpv6_stateless

2016-08-02 Thread Armando Migliaccio
*** This bug is a duplicate of bug 1540983 ***
https://bugs.launchpad.net/bugs/1540983

http://logs.openstack.org/96/348396/1/gate/gate-tempest-dsvm-neutron-
dvr/f98482c/logs/testr_results.html.gz

http://logs.openstack.org/42/348642/2/gate/gate-tempest-dsvm-neutron-
dvr/fb7f31a/logs/testr_results.html.gz

Both instances are on test:

test_dualnet_multi_prefix_slaac


** Changed in: neutron
   Importance: Undecided => Critical

** This bug has been marked a duplicate of bug 1540983
   Gate failures for neutron in test_dualnet_multi_prefix_slaac

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608918

Title:
  Gate failures for neutron in
  test_dualnet_multi_prefix_dhcpv6_stateless

Status in neutron:
  New

Bug description:
  This is failing consistent recently.

  2016-08-02 10:22:41.743463 | Captured traceback-1:
  2016-08-02 10:22:41.743475 | ~
  2016-08-02 10:22:41.743497 | Traceback (most recent call last):
  2016-08-02 10:22:41.743536 |   File 
"tempest/lib/common/utils/test_utils.py", line 83, in 
call_and_ignore_notfound_exc
  2016-08-02 10:22:41.743559 | return func(*args, **kwargs)
  2016-08-02 10:22:41.743590 |   File 
"tempest/lib/services/network/subnets_client.py", line 49, in delete_subnet
  2016-08-02 10:22:41.743611 | return self.delete_resource(uri)
  2016-08-02 10:22:41.743638 |   File 
"tempest/lib/services/network/base.py", line 41, in delete_resource
  2016-08-02 10:22:41.743656 | resp, body = self.delete(req_uri)
  2016-08-02 10:22:41.743679 |   File "tempest/lib/common/rest_client.py", 
line 304, in delete
  2016-08-02 10:22:41.743705 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2016-08-02 10:22:41.743729 |   File "tempest/lib/common/rest_client.py", 
line 667, in request
  2016-08-02 10:22:41.743741 | resp, resp_body)
  2016-08-02 10:22:41.743767 |   File "tempest/lib/common/rest_client.py", 
line 780, in _error_checker
  2016-08-02 10:22:41.743788 | raise exceptions.Conflict(resp_body, 
resp=resp)
  2016-08-02 10:22:41.743815 | tempest.lib.exceptions.Conflict: An object 
with that identifier already exists
  2016-08-02 10:22:41.743876 | Details: {u'detail': u'', u'type': 
u'SubnetInUse', u'message': u'Unable to complete operation on subnet 
01199023-eb08-49b5-88a3-38931bca824e: One or more ports have an IP allocation 
from this subnet.'}
  2016-08-02 10:22:41.743884 | 
  2016-08-02 10:22:41.743890 | 
  2016-08-02 10:22:41.743902 | Captured traceback-3:
  2016-08-02 10:22:41.743914 | ~
  2016-08-02 10:22:41.743930 | Traceback (most recent call last):
  2016-08-02 10:22:41.743960 |   File 
"tempest/lib/common/utils/test_utils.py", line 83, in 
call_and_ignore_notfound_exc
  2016-08-02 10:22:41.743976 | return func(*args, **kwargs)
  2016-08-02 10:22:41.744011 |   File 
"tempest/lib/services/network/networks_client.py", line 49, in delete_network
  2016-08-02 10:22:41.744029 | return self.delete_resource(uri)
  2016-08-02 10:22:41.744056 |   File 
"tempest/lib/services/network/base.py", line 41, in delete_resource
  2016-08-02 10:22:41.744073 | resp, body = self.delete(req_uri)
  2016-08-02 10:22:41.744097 |   File "tempest/lib/common/rest_client.py", 
line 304, in delete
  2016-08-02 10:22:41.744122 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2016-08-02 10:22:41.744146 |   File "tempest/lib/common/rest_client.py", 
line 667, in request
  2016-08-02 10:22:41.744159 | resp, resp_body)
  2016-08-02 10:22:41.744185 |   File "tempest/lib/common/rest_client.py", 
line 780, in _error_checker
  2016-08-02 10:22:41.744220 | raise exceptions.Conflict(resp_body, 
resp=resp)
  2016-08-02 10:22:41.744252 | tempest.lib.exceptions.Conflict: An object 
with that identifier already exists
  2016-08-02 10:22:41.744316 | Details: {u'detail': u'', u'type': 
u'NetworkInUse', u'message': u'Unable to complete operation on network 
7a254d4c-0bc8-4a94-836c-ea5c6b640fca. There are one or more ports still in use 
on the network.'}
  2016-08-02 10:22:41.744324 | 
  2016-08-02 10:22:41.744330 | 
  2016-08-02 10:22:41.744342 | Captured traceback-2:
  2016-08-02 10:22:41.744354 | ~
  2016-08-02 10:22:41.744372 | Traceback (most recent call last):
  2016-08-02 10:22:41.744403 |   File 
"tempest/lib/common/utils/test_utils.py", line 83, in 
call_and_ignore_notfound_exc
  2016-08-02 10:22:41.744419 | return func(*args, **kwargs)
  2016-08-02 10:22:41.78 |   File 
"tempest/lib/services/network/routers_client.py", line 49, in delete_router
  2016-08-02 10:22:41.744465 | return self.delete_resource(uri)
  2016-08-02 10:22:41.744491 |   File 
"tempest/lib/services/network/base.py", line 41, 

[Yahoo-eng-team] [Bug 1609039] [NEW] Should not be able to sort instances based on joined tables

2016-08-02 Thread Matt Riedemann
Public bug reported:

This came up at the newton summit and the newton midcycle:

https://etherpad.openstack.org/p/nova-newton-midcycle

But we shouldn't allow users to sort instances on any column in the
instances table in the database, which can include joined tables like
system_metadata, info_cache, extras, etc.

This is not considered an API change, it's a bug fix, since those joined
tables and internal data model can change and should not be part of the
API contract.
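
A minimal sketch of the intended restriction, assuming sort keys are
checked against an explicit whitelist of real instance columns (the
names here are illustrative, not the actual nova patch):

  # Instance table columns that are safe to expose as API sort keys.
  ALLOWED_SORT_KEYS = frozenset(
      ['created_at', 'updated_at', 'display_name', 'uuid', 'vm_state'])

  def validate_sort_keys(sort_keys):
      for key in sort_keys or []:
          if key not in ALLOWED_SORT_KEYS:
              raise ValueError('Invalid sort key: %s' % key)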

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: api db

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609039

Title:
  Should not be able to sort instances based on joined tables

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This came up at the newton summit and the newton midcycle:

  https://etherpad.openstack.org/p/nova-newton-midcycle

  But we shouldn't allow users to sort instances on any column in the
  instances table in the database, which can include joined tables like
  system_metadata, info_cache, extras, etc.

  This is not considered an API change, it's a bug fix, since those
  joined tables and internal data model can change and should not be
  part of the API contract.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1609039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/347290
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=1761d87fafb5e2d3cbb60896fa302b23f4252dc0
Submitter: Jenkins
Branch:master

commit 1761d87fafb5e2d3cbb60896fa302b23f4252dc0
Author: dineshbhor 
Date:   Tue Jul 26 15:23:59 2016 +0530

Replace OpenStack LLC with OpenStack Foundation

Change-Id: I199fffd139a4d077985373231354343228d9d8b8
Closes-Bug: #1214176


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1214176

Title:
  Fix copyright headers to be compliant with Foundation policies

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in devstack:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in PBR:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  Correct the copyright headers to be consistent with the policies
  outlined by the OpenStack Foundation at http://www.openstack.org/brand
  /openstack-trademark-policy/

  Remove references to OpenStack LLC, replace with OpenStack Foundation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608314] Re: vpn failure with neutron-lib 0.3.0

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349342
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=06c8c0bbf9568dfcd518aacb3f12adcf4b4ef6cb
Submitter: Jenkins
Branch:master

commit 06c8c0bbf9568dfcd518aacb3f12adcf4b4ef6cb
Author: YAMAMOTO Takashi 
Date:   Mon Aug 1 13:06:18 2016 +0900

Update imports (common.config -> conf.common)

Update after the recent refactoring. [1]

[1] Ib5fa294906549237630f87b9c848eebe0644088c


This commit also includes the following unrelated change to
pass the gate.

Fix a typo in ipsec_site_connection dpd specification

Found by neutron-lib 0.3.0, which has a stricter validation
than previous versions. [2]

[2] Ia93ff849396c6e2a5a170d7c01629a38e412f037

Closes-Bug: #1608314
Partial-Bug: #1608346
Related-Bug: #1563069
Change-Id: I07430e3064d9900db94e6abcd6ab207030bd7c3d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608314

Title:
  vpn failure with neutron-lib 0.3.0

Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/16/349016/1/check/gate-networking-midonet-
  python27-ubuntu-xenial/102bea5/testr_results.html.gz

  ft18.5: 
midonet.neutron.tests.unit.test_extension_vpnaas.VPNTestCase.test_update_ipsec_site_connection_error_StringException:
 Empty attachments:
stdout

  pythonlogging:'': {{{
  WARNING [stevedore.named] Could not load 
midonet.neutron.plugin_v2.MidonetPluginV2
   WARNING [stevedore.named] Could not load 
neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin
   WARNING [stevedore.named] Could not load 
midonet.neutron.services.vpn.service_drivers.midonet_ipsec.MidonetIPsecVPNDriver
   WARNING [neutron.api.extensions] Extension address-scope not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension auto-allocated-topology not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension availability_zone not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension default-subnetpools not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension dns-integration not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension dvr not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension flavors not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension ip_allocation not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension l3-ha not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension l3-flavors not supported by any 
of loaded plugins
   WARNING [neutron.api.extensions] Extension l3_agent_scheduler not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension metering not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension multi-provider not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension net-mtu not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension network_availability_zone not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension network-ip-availability not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension qos not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension revisions not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension router_availability_zone not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension router-service-type not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension segment not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension tag not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension timestamp_core not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension trunk not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension trunk-details not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension vlan-transparent not supported by 
any of loaded plugins
 ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
   WARNING [neutron.api.extensions] Extension bgp-speaker-router-insertion not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension gateway-device not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension logging-resource not supported by 
any of loaded plugins
   

[Yahoo-eng-team] [Bug 1605743] Re: Create image default is not public

2016-08-02 Thread Anne Gentle
** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1605743

Title:
  Create image default is not public

Status in Glance:
  Invalid

Bug description:
  "Image visibility. Valid value is public or private. Default is
  public." located at http://developer.openstack.org/api-ref-
  image-v2.html#createImage-v2

  In practice, this is not accurate.

  2016-07-22 15:23:31,499 DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://devstack/v2/images -H "Content-Type: application/json" -H "User-Agent: 
openstacksdk/0.9.1 keystoneauth1/2.6.1 python-requests/2.9.1 CPython/3.4.1+" -H 
"X-Auth-Token: blah" -d '{"disk_format": "ami", "name": "lol", 
"container_format": "ami"}'
  2016-07-22 15:23:31,695 DEBUG: keystoneauth.session RESP: [201] 
X-Openstack-Request-Id: req-7b9b8a11-eb79-418b-9432-a63fdf9f85b2 Content-Type: 
application/json; charset=UTF-8 Content-Length: 545 Location: 
http://devstack/v2/images/543e75f4-5578-41c6-904d-135cfeaa3764 Connection: 
keep-alive Date: Fri, 22 Jul 2016 19:21:47 GMT
  RESP BODY: {"status": "queued", "name": "lol", "tags": [], 
"container_format": "ami", "created_at": "2016-07-22T19:21:47Z", "size": null, 
"disk_format": "ami", "updated_at": "2016-07-22T19:21:47Z", "visibility": 
"private", "self": "/v2/images/543e75f4-5578-41c6-904d-135cfeaa3764", 
"min_disk": 0, "protected": false, "id": 
"543e75f4-5578-41c6-904d-135cfeaa3764", "file": 
"/v2/images/543e75f4-5578-41c6-904d-135cfeaa3764/file", "checksum": null, 
"owner": "cb802e549b374492b884fdee89e8727b", "virtual_size": null, "min_ram": 
0, "schema": "/v2/schemas/image"}

  "visibility": "private" comes back in the response body

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1605743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608823] Re: a misspelling: "experiemental" in src code

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349840
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5b7bed2044034968f679d11af99526102961883b
Submitter: Jenkins
Branch:master

commit 5b7bed2044034968f679d11af99526102961883b
Author: Junjie Wang 
Date:   Tue Aug 2 15:16:08 2016 +0800

fixed a typo in src code

"experiemental" is a misspelling of "experimental".
Closes-Bug: #1608823

Change-Id: Ie9be69c86aa5b7595d01c31f6ce249300a548813


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608823

Title:
  a misspelling: "experiemental" in src code

Status in neutron:
  Fix Released

Bug description:
  There is a misspelling in source code.

  "experiemental" should be "experimental".

  
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/conf/common.py#n162

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608977] [NEW] When adding a custom rule to the security group rules, hidden field errors block form submit

2016-08-02 Thread liao...@hotmail.com
Public bug reported:

When creating a custom rule in the security group rules, if there are
errors in the hidden fields 'icmp_code', 'icmp_type', 'ip_protocol',
'from_port', 'to_port', and 'port', the form submit is blocked.
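
A minimal sketch, not Horizon's actual form, of the failure mode: a
required hidden field that the selected rule type never fills in still
fails Django validation, so is_valid() returns False and the dialog
cannot be submitted even though no visible field shows an error:

    from django import forms

    class RuleForm(forms.Form):
        # Hidden fields the user cannot see or correct in the dialog.
        ip_protocol = forms.CharField(widget=forms.HiddenInput)
        from_port = forms.IntegerField(widget=forms.HiddenInput)

    form = RuleForm(data={})   # hidden fields left empty by the rule type
    print(form.is_valid())     # False -> submit is blocked
    print(form.errors)         # errors attached to fields nobody can see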

** Affects: horizon
 Importance: Undecided
 Assignee: liao...@hotmail.com (liaozd)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => liao...@hotmail.com (liaozd)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1608977

Title:
  When adding a custom rule to the security group rules, hidden field
  errors block form submit

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a custom rule in the security group rules, if there are
  errors in the hidden fields 'icmp_code', 'icmp_type', 'ip_protocol',
  'from_port', 'to_port', and 'port', the form submit is blocked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1608977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537062] Re: Fail to boot vm when set AggregateImagePropertiesIsolation filter and add custom metadata in the Host Aggregate

2016-08-02 Thread Alexey Stupnikov
Removed the MOS project from the list of affected products since I have
opened a separate bug, #1608937. I will try to cherry-pick this issue to
stable/mitaka and then sync it to MOS.

@l-ivan, please check my comment above ^^^

** No longer affects: mos

** Tags removed: customer-found

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1537062

Title:
  Fail to boot vm when set AggregateImagePropertiesIsolation filter and
  add custom metadata in the Host Aggregate

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  An image that has no custom metadata should not affect the
  AggregateImagePropertiesIsolation filter.
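
  The trace below shows the filter crashing in image_props.get(key) when
  an aggregate key such as 'os' is unknown to ImageMetaProps. One possible
  defensive shape for the lookup, as a hedged sketch (not necessarily the
  released fix):

      def aggregate_values_match(image_props, aggregate_metadata):
          for key, allowed in aggregate_metadata.items():
              try:
                  prop = image_props.get(key)
              except AttributeError:
                  # Key set on the aggregate but not a valid ImageMetaProps
                  # field: treat it as unset instead of raising.
                  prop = None
              if prop is not None and prop not in allowed:
                  return False
          return True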

  Reproduce steps:

  (1) add Host Aggregate with custom metadata
  ++---+---+--++
  | Id | Name  | Availability Zone | Hosts| Metadata   |
  ++---+---+--++
  | 1  | linux-agg | - | 'controller' | 'os=linux' |
  ++---+---+--++

  (2) add  AggregateImagePropertiesIsolation filter
  scheduler_default_filters = 
RetryFilter,AggregateImagePropertiesIsolation,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter

  (3) boot vm and error log:
  2016-01-22 21:00:10.834 ERROR oslo_messaging.rpc.dispatcher 
[req-1cded809-cfe6-4657-8e31-b494f1b3278d admin admin] Exception during messa
  ge handling: ImageMetaProps object has no attribute 'os'
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", l
  ine 143, in _dispatch_and_reply
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", l
  ine 189, in _dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", l
  ine 130, in _do_dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line
  150, in inner
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/manager.py", line 78, in select_destin
  ations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher dests = 
self.driver.select_destinations(ctxt, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 53, in sele
  ct_destinations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher 
selected_hosts = self._schedule(context, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 113, in _sc
  hedule
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher spec_obj, 
index=num)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/host_manager.py", line 532, in get_fil
  tered_hosts
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher hosts, 
spec_obj, index)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/filters.py", line 89, in get_filtered_objects
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher list_objs = 
list(objs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/filters.py", line 44, in filter_all
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher if 
self._filter_one(obj, spec_obj):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filters/__init__.py", line 26, in _filter_one
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filters/aggregate_image_properties_isolation.py",
 line 48, in host_passes
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher prop = 
image_props.get(key)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1608934] [NEW] ephemeral disk not available in checked path

2016-08-02 Thread Jan Klare
Public bug reported:

Description
===
I am currently trying to launch an instance in my Mitaka cluster with a flavor
with ephemeral and root storage. Whenever I try to start the instance I run
into a "DiskNotFound" error (see trace below). Starting instances without
ephemeral storage works perfectly fine, and the root disk is created as
expected in /var/lib/nova/instance/$INSTANCEID/disk.

Steps to reproduce
==
1. Create a flavor with ephemeral and root storage.
2. Start an instance with that flavor.

Expected result
===
Instance starts and the ephemeral disk is created in
/var/lib/nova/instances/$INSTANCEID/disk.eph0 or disk.local? (Not sure where
the switch case for the naming is.)

Actual result
=
Instance does not start; the ephemeral disk seems to be created at
/var/lib/nova/instances/$INSTANCEID/disk.eph0, but nova checks
/var/lib/nova/instances/_base/ephemeral_* for disk_size.

TRACE: http://pastebin.com/raw/TwtiNLY2
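
A minimal sketch of the mismatch as described above (the instance UUID is
hypothetical and the _base wildcard is taken verbatim from this report):

    import glob
    import os

    instance_id = '11111111-2222-3333-4444-555555555555'  # hypothetical
    created = '/var/lib/nova/instances/%s/disk.eph0' % instance_id
    checked = '/var/lib/nova/instances/_base/ephemeral_*'

    print(os.path.exists(created))  # True once the guest is built
    print(glob.glob(checked))       # [] here -> the size check raises DiskNotFound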

Environment
===
I am running OpenStack mitaka on Ubuntu 16.04 in the latest version with 
Libvirt + KVM as hypervisor (also latest stable in xenial).

Config
==

nova.conf:

...
[libvirt]
images_type = raw
rbd_secret_uuid = XXX
virt_type = kvm
inject_key = true
snapshot_image_format = raw
disk_cachemodes = "network=writeback"
rng_dev_path = /dev/random
rbd_user = cinder
...

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Description
  ===
  I am currently trying to launch an instance in my mitaka cluster with a 
flavor with ephemeral and root storage. Whenever i am trying to start the 
instance i am running into an "DiskNotFound" Error (see trace below). Starting 
instances without ephemeral works perfectly fine and the root disk is created 
as expected in /var/lib/nova/instance/$INSTANCEID/disk .
  
  Steps to reproduce
  ==
  1. Create a flavor with ephemeral and root storage.
  2. Start an instance with that flavor.
  
  Expected result
  ===
  Instance starts and ephemeral disk is created in 
/var/lib/nova/instances/$INSTANCEID/disk.eph0 or disk.local ? (Not sure where 
the switchase for the naming is)
  
  Actual result
  =
  Instance does not start, ephemeral disk seems to be created at 
/var/lib/nova/instances/$INSTANCEID/disk.eph0, but nova checks 
/var/lib/nova/instances/_base/ephemeral_* for disk_size
  
  TRACE: http://pastebin.com/raw/TwtiNLY2
  
- 
  Environment
  ===
- I am running OpenStack mitaka on Ubuntu 16.04 in the latest version with 
Libvirt + KVM as hypervisor (also latest stable in xenial). I am using default
+ I am running OpenStack mitaka on Ubuntu 16.04 in the latest version with 
Libvirt + KVM as hypervisor (also latest stable in xenial).
  
  Config
  ==
  
  nova.conf:
  
  ...
  [libvirt]
  images_type = raw
  rbd_secret_uuid = XXX
  virt_type = kvm
  inject_key = true
  snapshot_image_format = raw
  disk_cachemodes = "network=writeback"
  rng_dev_path = /dev/random
  rbd_user = cinder
  ...

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608934

Title:
  ephemeral disk not available in checked path

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  I am currently trying to launch an instance in my Mitaka cluster with a
flavor with ephemeral and root storage. Whenever I try to start the instance
I run into a "DiskNotFound" error (see trace below). Starting instances
without ephemeral storage works perfectly fine, and the root disk is created
as expected in /var/lib/nova/instance/$INSTANCEID/disk.

  Steps to reproduce
  ==
  1. Create a flavor with ephemeral and root storage.
  2. Start an instance with that flavor.

  Expected result
  ===
  Instance starts and the ephemeral disk is created in
/var/lib/nova/instances/$INSTANCEID/disk.eph0 or disk.local? (Not sure where
the switch case for the naming is.)

  Actual result
  =
  Instance does not start; the ephemeral disk seems to be created at
/var/lib/nova/instances/$INSTANCEID/disk.eph0, but nova checks
/var/lib/nova/instances/_base/ephemeral_* for disk_size.

  TRACE: http://pastebin.com/raw/TwtiNLY2

  Environment
  ===
  I am running OpenStack mitaka on Ubuntu 16.04 in the latest version with 
Libvirt + KVM as hypervisor (also latest stable in xenial).

  Config
  ==

  nova.conf:

  ...
  [libvirt]
  images_type = raw
  rbd_secret_uuid = XXX
  virt_type = kvm
  inject_key = true
  snapshot_image_format = raw
  disk_cachemodes = "network=writeback"
  rng_dev_path = /dev/random
  rbd_user = cinder
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1608928] [NEW] Wrong test indentation

2016-08-02 Thread Tatiana Ovchinnikova
Public bug reported:

Unit test "test_update_project_when_default_role_does_not_exist" from
identity/projects/tests.py has wrong indentation which stops it from
running. This should be fixed along with the test itself.
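
A minimal sketch of how a mis-indented test silently stops running: the
over-indented method becomes a nested function inside the previous test
instead of a method on the TestCase, so the runner never collects it:

    import unittest

    class ProjectTests(unittest.TestCase):
        def test_update_project(self):
            self.assertTrue(True)

            # One indent level too deep: this is now a local function of
            # test_update_project and is never discovered or executed.
            def test_update_project_when_default_role_does_not_exist(self):
                self.assertTrue(False)

    unittest.main()  # runs 1 test, not 2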

** Affects: horizon
 Importance: Low
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Ovchinnikova (tmazur)

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1608928

Title:
  Wrong test indentation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Unit test "test_update_project_when_default_role_does_not_exist" from
  identity/projects/tests.py has wrong indentation which stops it from
  running. This should be fixed along with the test itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1608928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598302] Re: failed forms are not calling correct context data

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/337703
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e5057c6584f0b348f6dd00b6b315468392f1a7c2
Submitter: Jenkins
Branch:master

commit e5057c6584f0b348f6dd00b6b315468392f1a7c2
Author: Timur Sufiev 
Date:   Tue Jul 26 20:20:43 2016 +0300

Correcting form_invalid get_context_data call

This change corrects a bug where the form_invalid call was
not calling the correct instance of get_context_data.

Co-Authored-By: Timur Sufiev 
Closes-bug: #1598302
Change-Id: I5391e86dc85c48a601d6417fd7cd1ecf007b7196


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1598302

Title:
  failed forms are not calling correct context data

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The base form class has a form_invalid implementation. This class
  calls the super's get_context_data directly, so my form does not have
  the context data I would normally expect to have.
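
  A minimal sketch, with a stand-in base class rather than Horizon's real
  one, of the difference between dispatching through self and calling the
  super implementation directly:

      class ModalFormView(object):            # stand-in for the real base
          def get_context_data(self, **kwargs):
              return {'base': True}

      class MyView(ModalFormView):
          def get_context_data(self, **kwargs):
              context = super(MyView, self).get_context_data(**kwargs)
              context['extra'] = True         # context the subclass adds
              return context

          def form_invalid_buggy(self, form):
              # Calls the super implementation, skipping the override above.
              return super(MyView, self).get_context_data(form=form)

          def form_invalid_fixed(self, form):
              # Dispatches through self, so subclass context is included.
              return self.get_context_data(form=form)

      v = MyView()
      print(v.form_invalid_buggy(None))   # {'base': True}
      print(v.form_invalid_fixed(None))   # {'base': True, 'extra': True}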

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1598302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608921] [NEW] The signature of the QoSPluginBase methods is wrong

2016-08-02 Thread Miguel Angel Ajo
Public bug reported:


The automatic call wrapper calls:
https://github.com/openstack/neutron/blob/17d85e4748f05b9785686f1164b6a4fe2963b8eb/neutron/extensions/qos.py#L273


with: (context, rule_obj, ...) to rule related methods.

While the abstractmethods are defined with: (context, ..., rule_obj).

https://github.com/openstack/neutron/blob/17d85e4748f05b9785686f1164b6a4fe2963b8eb/neutron/extensions/qos.py#L314

And btw, those are not rule_obj (objects) but rule_cls (classes).
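
A minimal sketch of the mismatch (names are illustrative, not neutron's
actual code): the wrapper passes the rule class right after the context,
while the abstract method declares it last:

    import abc

    def call_rule_method(plugin, name, context, rule_cls, rule_id):
        # How the automatic wrapper builds the call:
        # (context, rule_cls, ...)
        return getattr(plugin, name)(context, rule_cls, rule_id)

    class QoSPluginBase(object):
        @abc.abstractmethod
        def get_policy_rule(self, context, rule_id, rule_cls):
            # Declared order: (context, ..., rule_cls) -- arguments end
            # up bound to the wrong parameters when called via the wrapper.
            pass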

** Affects: neutron
 Importance: Wishlist
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Triaged


** Tags: qos

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Triaged

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608921

Title:
  The signature of the QoSPluginBase methods is wrong

Status in neutron:
  Triaged

Bug description:
  
  The automatic call wrapper calls:
  
https://github.com/openstack/neutron/blob/17d85e4748f05b9785686f1164b6a4fe2963b8eb/neutron/extensions/qos.py#L273

  
  with: (context, rule_obj, ...) to rule related methods.

  While the abstractmethods are defined with: (context, ..., rule_obj).

  
https://github.com/openstack/neutron/blob/17d85e4748f05b9785686f1164b6a4fe2963b8eb/neutron/extensions/qos.py#L314

  And btw, those are not rule_obj (objects) but rule_cls (classes).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608918] [NEW] Gate failures for neutron in test_dualnet_multi_prefix_dhcpv6_stateless

2016-08-02 Thread Sreekumar S
Public bug reported:

This has been failing consistently recently.

2016-08-02 10:22:41.743463 | Captured traceback-1:
2016-08-02 10:22:41.743475 | ~
2016-08-02 10:22:41.743497 | Traceback (most recent call last):
2016-08-02 10:22:41.743536 |   File 
"tempest/lib/common/utils/test_utils.py", line 83, in 
call_and_ignore_notfound_exc
2016-08-02 10:22:41.743559 | return func(*args, **kwargs)
2016-08-02 10:22:41.743590 |   File 
"tempest/lib/services/network/subnets_client.py", line 49, in delete_subnet
2016-08-02 10:22:41.743611 | return self.delete_resource(uri)
2016-08-02 10:22:41.743638 |   File "tempest/lib/services/network/base.py", 
line 41, in delete_resource
2016-08-02 10:22:41.743656 | resp, body = self.delete(req_uri)
2016-08-02 10:22:41.743679 |   File "tempest/lib/common/rest_client.py", 
line 304, in delete
2016-08-02 10:22:41.743705 | return self.request('DELETE', url, 
extra_headers, headers, body)
2016-08-02 10:22:41.743729 |   File "tempest/lib/common/rest_client.py", 
line 667, in request
2016-08-02 10:22:41.743741 | resp, resp_body)
2016-08-02 10:22:41.743767 |   File "tempest/lib/common/rest_client.py", 
line 780, in _error_checker
2016-08-02 10:22:41.743788 | raise exceptions.Conflict(resp_body, 
resp=resp)
2016-08-02 10:22:41.743815 | tempest.lib.exceptions.Conflict: An object 
with that identifier already exists
2016-08-02 10:22:41.743876 | Details: {u'detail': u'', u'type': 
u'SubnetInUse', u'message': u'Unable to complete operation on subnet 
01199023-eb08-49b5-88a3-38931bca824e: One or more ports have an IP allocation 
from this subnet.'}
2016-08-02 10:22:41.743884 | 
2016-08-02 10:22:41.743890 | 
2016-08-02 10:22:41.743902 | Captured traceback-3:
2016-08-02 10:22:41.743914 | ~
2016-08-02 10:22:41.743930 | Traceback (most recent call last):
2016-08-02 10:22:41.743960 |   File 
"tempest/lib/common/utils/test_utils.py", line 83, in 
call_and_ignore_notfound_exc
2016-08-02 10:22:41.743976 | return func(*args, **kwargs)
2016-08-02 10:22:41.744011 |   File 
"tempest/lib/services/network/networks_client.py", line 49, in delete_network
2016-08-02 10:22:41.744029 | return self.delete_resource(uri)
2016-08-02 10:22:41.744056 |   File "tempest/lib/services/network/base.py", 
line 41, in delete_resource
2016-08-02 10:22:41.744073 | resp, body = self.delete(req_uri)
2016-08-02 10:22:41.744097 |   File "tempest/lib/common/rest_client.py", 
line 304, in delete
2016-08-02 10:22:41.744122 | return self.request('DELETE', url, 
extra_headers, headers, body)
2016-08-02 10:22:41.744146 |   File "tempest/lib/common/rest_client.py", 
line 667, in request
2016-08-02 10:22:41.744159 | resp, resp_body)
2016-08-02 10:22:41.744185 |   File "tempest/lib/common/rest_client.py", 
line 780, in _error_checker
2016-08-02 10:22:41.744220 | raise exceptions.Conflict(resp_body, 
resp=resp)
2016-08-02 10:22:41.744252 | tempest.lib.exceptions.Conflict: An object 
with that identifier already exists
2016-08-02 10:22:41.744316 | Details: {u'detail': u'', u'type': 
u'NetworkInUse', u'message': u'Unable to complete operation on network 
7a254d4c-0bc8-4a94-836c-ea5c6b640fca. There are one or more ports still in use 
on the network.'}
2016-08-02 10:22:41.744324 | 
2016-08-02 10:22:41.744330 | 
2016-08-02 10:22:41.744342 | Captured traceback-2:
2016-08-02 10:22:41.744354 | ~
2016-08-02 10:22:41.744372 | Traceback (most recent call last):
2016-08-02 10:22:41.744403 |   File 
"tempest/lib/common/utils/test_utils.py", line 83, in 
call_and_ignore_notfound_exc
2016-08-02 10:22:41.744419 | return func(*args, **kwargs)
2016-08-02 10:22:41.78 |   File 
"tempest/lib/services/network/routers_client.py", line 49, in delete_router
2016-08-02 10:22:41.744465 | return self.delete_resource(uri)
2016-08-02 10:22:41.744491 |   File "tempest/lib/services/network/base.py", 
line 41, in delete_resource
2016-08-02 10:22:41.744508 | resp, body = self.delete(req_uri)
2016-08-02 10:22:41.744532 |   File "tempest/lib/common/rest_client.py", 
line 304, in delete
2016-08-02 10:22:41.744558 | return self.request('DELETE', url, 
extra_headers, headers, body)
2016-08-02 10:22:41.744581 |   File "tempest/lib/common/rest_client.py", 
line 667, in request
2016-08-02 10:22:41.744594 | resp, resp_body)
2016-08-02 10:22:41.744620 |   File "tempest/lib/common/rest_client.py", 
line 780, in _error_checker
2016-08-02 10:22:41.744640 | raise exceptions.Conflict(resp_body, 
resp=resp)
2016-08-02 10:22:41.744680 | tempest.lib.exceptions.Conflict: An object 
with that identifier already exists
2016-08-02 10:22:41.744719 | Details: {u'detail': u'', u'type': 
u'RouterInUse', u'message': u'Router b4807ea6-fad3-4c4a-b1f7-3a7b83e69624 still 
has ports'}
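
The three Conflict errors above are ordering failures in teardown: ports
still exist when the subnet, network and router are deleted. A minimal
sketch of a dependency-ordered cleanup, assuming the tempest clients named
in the tracebacks:

    def ordered_teardown(routers_client, subnets_client, networks_client,
                         router, subnets, network):
        # Detach the router from each subnet first; this removes the
        # router interface ports that keep everything "in use".
        for subnet in subnets:
            routers_client.remove_router_interface(router['id'],
                                                   subnet_id=subnet['id'])
        for subnet in subnets:
            subnets_client.delete_subnet(subnet['id'])
        networks_client.delete_network(network['id'])
        routers_client.delete_router(router['id'])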

[Yahoo-eng-team] [Bug 1598860] Re: live migration failing when nova-conductor is started before other services

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339072
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=dc6b3ab6ef59b4d3f9e540f1cc07326494cd8565
Submitter: Jenkins
Branch:master

commit dc6b3ab6ef59b4d3f9e540f1cc07326494cd8565
Author: John Garbutt 
Date:   Thu Jul 7 16:20:27 2016 +0100

Don't cache RPC pin when service_version is 0

It's possible for RPC calls to be made before a nova-compute process
starts up. When this happens we cache the RPC pin of kilo. This causes
breakages in live-migrate, but in Mitaka we only work with Liberty.

Change-Id: I5f1e0904c34c4fe26fc75998441602925d2f0593
Closes-Bug: #1598860
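
A minimal sketch (not nova's exact code) of the guard the commit describes:
a minimum service_version of 0 means no nova-compute has reported in yet,
so the computed pin must not be cached for the life of the process:

    _CACHED_PIN = None

    def determine_rpc_pin(get_minimum_service_version, pin_for_version):
        # Both helpers are hypothetical stand-ins for nova internals.
        global _CACHED_PIN
        if _CACHED_PIN is not None:
            return _CACHED_PIN
        service_version = get_minimum_service_version('nova-compute')
        pin = pin_for_version(service_version)
        if service_version != 0:   # only cache once real data exists
            _CACHED_PIN = pin
        return pin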


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598860

Title:
  live migration failing when nova-conductor is started before other
  services

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Deploying OpenStack current master via kolla source, with kvm and
  ceph/rbd enabled and attempting to perform a live-migration, it
  becomes stuck in state migrating with this appearing in the nova-
  compute log.

  Looking at the nova git log, I'm seeing a bunch of live-migration changes
  in the last few days.  I suspect there might have been a regression.

  2016-07-04 10:00:15.643 1 INFO nova.compute.resource_tracker 
[req-1f6df1d8-a88a-4f27-8ce2-609ea25da4e2 - - - - -] Compute_service record 
updated for compute01:compute01
  2016-07-04 10:00:56.731 1 ERROR root 
[req-a2b97b2c-f479-4b7c-b3d9-142c1ab1b25f b3bedc85b7674ce2b7e0589a7427dc81 
22ab55c3ffe343639d872b1e4db6abb4 - - -] Original exception being dropped: 
['Traceback (most recent call last):\n', '  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 66, in wrapped\nreturn f(self, context, *args, **kw)\n', '  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/utils.py", line 
608, in decorated_function\n*args, **kwargs)\n', '  File 
"/usr/lib64/python2.7/inspect.py", line 980, in getcallargs\n\'arguments\' 
if num_required > 1 else \'argument\', num_total))\n', 'TypeError: 
live_migration() takes exactly 7 arguments (6 given)\n']
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server 
[req-a2b97b2c-f479-4b7c-b3d9-142c1ab1b25f b3bedc85b7674ce2b7e0589a7427dc81 
22ab55c3ffe343639d872b1e4db6abb4 - - -] Exception during message handling
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", 
line 133, in _process_incoming
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 150, in dispatch
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 121, in _do_dispatch
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 71, in wrapped
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server f, self, 
context, *args, **kw)
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", 
line 85, in _get_call_dict
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server context, *args, 
**kw)
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python2.7/inspect.py", line 980, in getcallargs
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server 'arguments' if 
num_required > 1 else 'argument', num_total))
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server TypeError: 
live_migration() takes exactly 7 arguments (6 given)
  2016-07-04 10:00:56.732 1 ERROR oslo_messaging.rpc.server

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1598860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608623] Re: Policy service caching does not work within hz-dynamic-table

2016-08-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349594
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=deda07c4c35c1bc5a80a76c0e8041a8a4bd57874
Submitter: Jenkins
Branch:master

commit deda07c4c35c1bc5a80a76c0e8041a8a4bd57874
Author: Travis Tripp 
Date:   Fri Jul 8 17:04:31 2016 -0600

Memoize policy service

There is a hole in the policy service caching layer
that has something to do with ng-repeat. The angular
http cache service doesn't seem to work within a single
digest cycle or something... so I simplified and
improved performance by using memoize to
prevent redundant checks which were happening in
hz-dynamic-table.

See bug for details on verifying this manually.

Change-Id: Ib0b3a806d1c6b065e20cb22b8255048bd836dd1b
Closes-Bug: 1608623
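
The same idea in a language-neutral Python sketch (the actual fix lives in
Horizon's Angular policy service): key the cache on the serialized rules so
repeated checks inside a single render pass never go to the network twice:

    import functools
    import json

    def memoize_policy(check):
        cache = {}

        @functools.wraps(check)
        def wrapper(rules):
            key = json.dumps(rules, sort_keys=True)
            if key not in cache:
                cache[key] = check(rules)   # only the first call is remote
            return cache[key]
        return wrapper

    @memoize_policy
    def check_policy(rules):
        print('network round trip for %s' % rules)
        return True

    check_policy([['identity', 'admin_required']])
    check_policy([['identity', 'admin_required']])  # served from the cache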


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1608623

Title:
  Policy service caching does not work within hz-dynamic-table

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The policy service caching (which should ensure that repeated lookups
  of the same policy are served from the cache) does not seem to work
  under hz-dynamic-table.

  Turn on ng-images which recently migrated to hz-dynamic-table and
  watch the network calls.  You'll note a bunch of policy calls and if
  you look at the request body, you'll see the same rules going in
  multiple times.

  http://imgur.com/a/Sh4fj

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1608623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-08-02 Thread Amrith
This is not a bug (in Trove).

** Changed in: trove
   Status: Incomplete => Invalid

** Changed in: trove
 Assignee: haobing1 (haobing1) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  Incomplete
Status in Cinder:
  Incomplete
Status in Glance:
  Incomplete
Status in OpenStack Identity (keystone):
  Won't Fix
Status in Magnum:
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in python-cinderclient:
  Incomplete
Status in python-glanceclient:
  Incomplete
Status in python-keystoneclient:
  Won't Fix
Status in python-neutronclient:
  Incomplete
Status in python-novaclient:
  Invalid
Status in python-rackclient:
  Incomplete
Status in python-swiftclient:
  Incomplete
Status in rack:
  Incomplete
Status in OpenStack Object Storage (swift):
  Incomplete
Status in OpenStack DBaaS (Trove):
  Invalid

Bug description:
  We should remove the logging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607625] Re: gate-neutron-fwaas-dsvm-tempest is not working properly

2016-08-02 Thread Jakub Libosvar
Patch was merged

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607625

Title:
  gate-neutron-fwaas-dsvm-tempest is not working properly

Status in neutron:
  Fix Released

Bug description:
  It's failing to enable q-fwaas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605742] Re: Paramiko 2.0 is incompatible with Mitaka

2016-08-02 Thread Markus Zoeller (markus_z)
The Nova periodic stable mitaka test job installs paramiko==1.16.0

http://logs.openstack.org/periodic-stable/periodic-nova-python27-db-mitaka/9d14b47/console.html#_2016-08-02_06_16_11_180799
This is the upper-constraint since Nov 2015

https://github.com/openstack/requirements/commit/6bb1357b2a4347a29ca5911499b86de71b92fdc8#diff-0bdd949ed8a7fdd4f95240bd951779c8R212
This works fine.

The openstack-ansible project used paramiko>=1.16.0 and installed 2.0.1

http://logs.openstack.org/09/342309/8/check/gate-openstack-ansible-dsvm-commit/224f9c0/console.html.gz#_2016-07-22_17_21_57_050330

https://github.com/openstack/openstack-ansible/commit/9de9f4def3a731563da5778546e7f9f73e2c4214#diff-b4ef698db8ca845e5845c4618278f29aR9
openstack-ansible removed the requirement "paramiko>=1.6.0" later

https://github.com/openstack/openstack-ansible/commit/b15363c#diff-b4ef698db8ca845e5845c4618278f29aL3
This however is only in openstack-ansible Newton and not in Mitaka:

https://github.com/openstack/openstack-ansible/blob/stable/mitaka/requirements.txt#L9

Based on ^ I believe this is an issue of the openstack-ansible project which
doesn't cap the upper-constraint of paramiko in its stable/mitaka branch.
I don't see the need to backport anything to Nova's stable/mitaka branch.
I leave this as "incomplete" to get a second pair of eyes from auggy.

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605742

Title:
  Paramiko 2.0 is incompatible with Mitaka

Status in OpenStack Compute (nova):
  Incomplete
Status in openstack-ansible:
  New

Bug description:
  Unexpected API Error. TypeError. Code: 500. os-keypairs v2.1 
  nova (stable/mitaka , 98b38df57bfed3802ce60ee52e4450871fccdbfa) 

  Tempest tests (for example
  TestMinimumBasicScenario:test_minimum_basic_scenario) fail on the
  gate job for the openstack-ansible project with the following error
  (please find full logs at [1]):

  -
  2016-07-22 18:46:07.399604 | 
  2016-07-22 18:46:07.399618 | Captured pythonlogging:
  2016-07-22 18:46:07.399632 | ~~~
  2016-07-22 18:46:07.399733 | 2016-07-22 18:45:47,861 2312 DEBUG
[tempest.scenario.manager] paths: img: 
/opt/images/cirros-0.3.4-x86_64-disk.img, container_fomat: bare, disk_format: 
qcow2, properties: None, ami: /opt/images/cirros-0.3.4-x86_64-blank.img, ari: 
/opt/images/cirros-0.3.4-x86_64-initrd, aki: 
/opt/images/cirros-0.3.4-x86_64-vmlinuz
  2016-07-22 18:46:07.399799 | 2016-07-22 18:45:48,513 2312 INFO 
[tempest.lib.common.rest_client] Request 
(TestMinimumBasicScenario:test_minimum_basic_scenario): 201 POST 
http://172.29.236.100:9292/v1/images 0.651s
  2016-07-22 18:46:07.399889 | 2016-07-22 18:45:48,513 2312 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'x-image-meta-name': 
'tempest-scenario-img--306818818', 'x-image-meta-container_format': 'bare', 
'X-Auth-Token': '', 'x-image-meta-disk_format': 'qcow2', 
'x-image-meta-is_public': 'False'}
  2016-07-22 18:46:07.399907 | Body: None
  2016-07-22 18:46:07.400027 | Response - Headers: {'status': '201', 
'content-length': '481', 'content-location': 
'http://172.29.236.100:9292/v1/images', 'connection': 'close', 'location': 
'http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe', 
'date': 'Fri, 22 Jul 2016 18:45:48 GMT', 'content-type': 'application/json', 
'x-openstack-request-id': 'req-6b3c6218-b3e6-4884-bb3c-b88c70733d0c'}
  2016-07-22 18:46:07.400183 | Body: {"image": {"status": "queued", 
"deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": 
"2016-07-22T18:45:48.00", "owner": "1fbbcc542db344f394b4f1565a7e48fd", 
"min_disk": 0, "is_public": false, "deleted_at": null, "id": 
"5c390277-ec8d-4d82-b8d8-b8978473ecbe", "size": 0, "virtual_size": null, 
"name": "tempest-scenario-img--306818818", "checksum": null, "created_at": 
"2016-07-22T18:45:48.00", "disk_format": "qcow2", "properties": {}, 
"protected": false}}
  2016-07-22 18:46:07.400241 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request: PUT 
http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400359 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request Headers: {'Transfer-Encoding': 'chunked', 
'User-Agent': 'tempest', 'Content-Type': 'application/octet-stream', 
'X-Auth-Token': 
'gABXkmnbJaM7C2EMxfEELQEWlU27v4pCt_9tF_XGlYrgEu-eXvDcEclzZc2OyFnVy79Dfz_pH2gGvKveSTihW-hzV6ucHyF1JrdqwOYr6Z7ZoUe_0BQ4gOdxKZoqzSaqQKfdfrZnojq9OE9Dy11frFI59qqkk0303j3fWlFIUeV6NtrzX-s'}
  2016-07-22 18:46:07.400403 | 2016-07-22 18:45:48,517 2312 DEBUG
[tempest.common.glance_http] Actual Path: 
/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  

[Yahoo-eng-team] [Bug 1604306] Re: Log request_id for each api call

2016-08-02 Thread Akihiro Motoki
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604306

Title:
  Log request_id for each api call

Status in python-neutronclient:
  Confirmed

Bug description:
  Blueprint [1], which returns the request_id back to the caller as per
  the design proposed in the cross-project specs [2], was up for review
  in the last Mitaka release cycle. Now, in step 2, we would like to log
  the x-openstack-request-id returned in the response header using the
  Python logging module.

  Following log message will be logged in debug logging level.

  _logger.debug('%(method)s call to neutron for '
'%(url)s used request id '
'%(response_request_id)s',
{'method': resp.request.method,
 'url': resp.url,
 'response_request_id': request_id})

  method: HTTP request method (GET/POST/PUT/DELETE etc.)
  url: Request URL with endpoint
  response_request_id: request_id extracted from response header
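
  A self-contained sketch of where these values come from, assuming a
  requests-style response object and the header name given above:

      import logging

      _logger = logging.getLogger('neutronclient.v2_0.client')

      def log_request_id(resp):
          # Header name as specified in this blueprint.
          request_id = resp.headers.get('x-openstack-request-id')
          _logger.debug('%(method)s call to neutron for '
                        '%(url)s used request id '
                        '%(response_request_id)s',
                        {'method': resp.request.method,
                         'url': resp.url,
                         'response_request_id': request_id})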

  The log messages logged by the log handlers would be dependent on how
  the root loggers are configured.

  1. python-neutronclient used as a shell command:

  In this case, the root loggers will be configured in the client
  itself. So if the --debug flag is passed in the command, the following
  log message will be shown on the console:-

  DEBUG: neutronclient.v2_0.client GET call to neutron for
  http://127.0.0.1:9696/v2.0/networks.json used request id req-
  bbf8ce60-19d0-44d8-b868-cd39e86d7211

  Nothing will be logged on console if --debug flag is not set.

  2. python-neutronclient is used in applications (e.g. Nova)

  In this case, when Nova calls APIs of the neutron service using python-
  neutronclient, the following log message for the port-create API will
  be logged in the nova log file:-

  DEBUG neutronclient.v2_0.client [req-35c6216e-53f1-4cb9-965a-
  5c1e5f532921 demo demo] POST call to neutron for
  http://127.0.0.1:9696/v2.0/ports.json used request id req-
  b7d44a15-137c-4760-bb77-3042d18ef0a5

  In the above log message, you will see both nova (callee) and neutron
  (caller) request_ids are logged in the same log message. This is
  because, the root loggers are configured in Nova and the same will be
  used by the client as well. Since nova uses oslo.log library, it
  internally logs request_id using ContextFormatter configured in the
  "formatter_context" section of the nova configuration file.

  This feature is already present in novaclient, cinderclient and
  glanceclient [3, 4, 5].

  References:
  [1] 
https://blueprints.launchpad.net/python-neutronclient/+spec/return-request-id-to-caller
  [2] 
http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html
  [3] https://review.openstack.org/322664
  [4] https://review.openstack.org/315925
  [5] https://review.openstack.org/331981

  Mailing List discussion:
  http://lists.openstack.org/pipermail/openstack-dev/2016-March/088001.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1604306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608842] [NEW] [api-ref] The 'id' parameters are defined as 'optional' in os-volume_attachments

2016-08-02 Thread Takashi NATSUME
Public bug reported:

http://developer.openstack.org/api-ref/compute/?expanded=#servers-with-
volume-attachments-servers-os-volume-attachments

In os-volume_attachments of the api-ref, the 'id' (attachment ID) parameters are
defined as 'optional'.
But they are not optional actually.

https://github.com/openstack/nova/blob/9fdb5a43f2d2853f67e28ed33a713c92c99e6869/nova/api/openstack/compute/volumes.py#L225
https://github.com/openstack/nova/blob/9fdb5a43f2d2853f67e28ed33a713c92c99e6869/nova/api/openstack/compute/volumes.py#L339

Originally they were defined as 'required',
but that was changed by the following patch.

https://review.openstack.org/#/c/320048/

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608842

Title:
  [api-ref] The 'id' parameters are defined as 'optional' in os-
  volume_attachments

Status in OpenStack Compute (nova):
  New

Bug description:
  http://developer.openstack.org/api-ref/compute/?expanded=#servers-
  with-volume-attachments-servers-os-volume-attachments

  In os-volume_attachments of the api-ref, the 'id' (attachment ID) parameters are
defined as 'optional'.
  But they are not optional actually.

  
https://github.com/openstack/nova/blob/9fdb5a43f2d2853f67e28ed33a713c92c99e6869/nova/api/openstack/compute/volumes.py#L225
  
https://github.com/openstack/nova/blob/9fdb5a43f2d2853f67e28ed33a713c92c99e6869/nova/api/openstack/compute/volumes.py#L339

  Originally they were defined as 'required',
  but that was changed by the following patch.

  https://review.openstack.org/#/c/320048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537062] Re: Fail to boot vm when set AggregateImagePropertiesIsolation filter and add custom metadata in the Host Aggregate

2016-08-02 Thread ivano
** Also affects: mos
   Importance: Undecided
   Status: New

** Description changed:

- 
- Image has no custom metadata, should not affect the 
AggregateImagePropertiesIsolation filter
+ Image has no custom metadata, should not affect the
+ AggregateImagePropertiesIsolation filter
  
  Reproduce steps:
  
  (1) add Host Aggregate with custom metadata
  ++---+---+--++
  | Id | Name  | Availability Zone | Hosts| Metadata   |
  ++---+---+--++
  | 1  | linux-agg | - | 'controller' | 'os=linux' |
- ++---+---+--++ 
+ ++---+---+--++
  
  (2) add  AggregateImagePropertiesIsolation filter
  scheduler_default_filters = 
RetryFilter,AggregateImagePropertiesIsolation,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter
-   
+ 
  (3) boot vm and error log:
  2016-01-22 21:00:10.834 ERROR oslo_messaging.rpc.dispatcher 
[req-1cded809-cfe6-4657-8e31-b494f1b3278d admin admin] Exception during messa
  ge handling: ImageMetaProps object has no attribute 'os'
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", l
  ine 143, in _dispatch_and_reply
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", l
  ine 189, in _dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", l
  ine 130, in _do_dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
- 2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
+ 2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line
  150, in inner
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/manager.py", line 78, in select_destin
  ations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher dests = 
self.driver.select_destinations(ctxt, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 53, in sele
  ct_destinations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher 
selected_hosts = self._schedule(context, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 113, in _sc
  hedule
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher spec_obj, 
index=num)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/host_manager.py", line 532, in get_fil
  tered_hosts
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher hosts, 
spec_obj, index)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/filters.py", line 89, in get_filtered_objects
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher list_objs = 
list(objs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/filters.py", line 44, in filter_all
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher if 
self._filter_one(obj, spec_obj):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filters/__init__.py", line 26, in _filter_one
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filters/aggregate_image_properties_isolation.py",
 line 48, in host_passes
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher prop = 
image_props.get(key)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/objects/image_meta.py", line 540, in get
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher if not 
self.obj_attr_is_set(name):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1608832] [NEW] cold migration of instances with affinity policy fails

2016-08-02 Thread Paul Carlton
Public bug reported:

Migration of shutdown instances doesn't exclude the current compute host
in the conductor, so given two instances with an affinity policy the host
that is chosen will always be the current host, which leads to an
exception being generated on the compute host in the prep_resize method.
This causes the migration to fail and thus places the instance in an
error state.

We should modify the conductor task code to assess if the operation is a
resize (in which case the same host is allowed) and if not exclude the
current host (as is done for live migration).

A wider issue here is the migration of instances with an affinity policy;
it seems to me there should be some option to override the policy to allow
the migration of these instances. Live migration supports a force option,
but that overrides the scheduler completely, which is not an ideal
solution. Cold migration provides no such option, effectively meaning
the migration of instances with an affinity policy is not possible when
multiple instances with the same group membership exist.
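
A minimal sketch (not nova's actual conductor code) of the check proposed
above; 'ignore_hosts' is the filter-properties key the scheduler already
honours for live migration:

    def build_filter_properties(instance_host, old_flavor_id, new_flavor_id,
                                filter_properties):
        is_resize = old_flavor_id != new_flavor_id
        if not is_resize:
            # Cold migrate: never consider the host the instance is on.
            filter_properties.setdefault('ignore_hosts', []).append(
                instance_host)
        return filter_properties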

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608832

Title:
  cold migration of instances with affinity policy fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Migration of shutdown instances doesn't exclude the current compute
  host in the conductor, so given two instances with an affinity policy
  the host that is chosen will always be the current host, which leads to
  an exception being generated on the compute host in the prep_resize
  method. This causes the migration to fail and thus places the instance
  in an error state.

  We should modify the conductor task code to assess if the operation is
  a resize (in which case the same host is allowed) and if not exclude
  the current host (as is done for live migration).

  A wider issue here is the migration of instances with an affinity
  policy; it seems to me there should be some option to override the
  policy to allow the migration of these instances. Live migration
  supports a force option, but that overrides the scheduler completely,
  which is not an ideal solution. Cold migration provides no such option,
  effectively meaning the migration of instances with an affinity policy
  is not possible when multiple instances with the same group membership
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608823] [NEW] a misspelling: "experiemental" in src code

2016-08-02 Thread Petro
Public bug reported:

There is a misspelling in source code.

"experiemental" should be "experimental".

https://git.openstack.org/cgit/openstack/neutron/tree/neutron/conf/common.py#n162

** Affects: neutron
 Importance: Undecided
 Assignee: Petro (wang-junjie)
 Status: New


** Tags: typo

** Changed in: neutron
 Assignee: (unassigned) => Petro (wang-junjie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608823

Title:
  a misspelling: "experiemental" in src code

Status in neutron:
  New

Bug description:
  There is a misspelling in source code.

  "experiemental" should be "experimental".

  
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/conf/common.py#n162

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-08-02 Thread Haifeng.Yan
** Also affects: oslo.middleware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Glance:
  In Progress
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in masakari:
  New
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  New
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  In Progress
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
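
  The mechanical fix, shown as a sketch; the two calls produce identical
  output, but only the second spelling is future-proof:

      import logging

      LOG = logging.getLogger(__name__)

      LOG.warn('instance %s is overdue', 'uuid')     # deprecated alias
      LOG.warning('instance %s is overdue', 'uuid')  # preferred spelling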

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608817] [NEW] Creating a new instance on Mitaka 9.1.1 (Devstack) removes the default route on the Host

2016-08-02 Thread Yevgeniy Ovsyannikov
Public bug reported:

Creating a new instance on Mitaka 9.1.1 (Devstack) removes the default route on
the Host (Ubuntu 16.04)

Reproduction steps:
1. Check that your default route on the host is persistent (make a few reboots)
2. Run a fresh devstack all-in-one node on Ubuntu 16.04 (Mitaka 9.1.1)
3. Start a new instance (I did it with horizon)
4. You will lose the ssh and horizon connections; if you connect via the
console you will see that the default route has disappeared

** Affects: nova
 Importance: Undecided
 Status: New

** Project changed: cinder => nova

** Description changed:

- Creation new instance for Mitka 9.1.1 (Devstack) remove default route on
+ Creating new instance for Mitka 9.1.1 (Devstack) remove default route on
  the Host on the Host (Ubuntu 16.4)
  
  Reproduction steps:
- 1. Check that your default route on the host is persistent (make few reboots) 
+ 1. Check that your default route on the host is persistent (make few reboots)
  2. Run fresh devstack allinone node on ubuntu 16.04 (Mitaka 9.1.1)
  3. Start new instance (I did it with horizon)
  4. You will lost ssh connection and horizon connection, if you connect via 
console you will see that default route disappear

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608817

Title:
  Creating a new instance on Mitaka 9.1.1 (Devstack) removes the default
  route on the Host

Status in OpenStack Compute (nova):
  New

Bug description:
  Creating a new instance on Mitaka 9.1.1 (Devstack) removes the default
  route on the Host (Ubuntu 16.04)

  Reproduction steps:
  1. Check that your default route on the host is persistent (make a few reboots)
  2. Run a fresh devstack all-in-one node on Ubuntu 16.04 (Mitaka 9.1.1)
  3. Start a new instance (I did it with horizon)
  4. You will lose the ssh and horizon connections; if you connect via the
console you will see that the default route has disappeared

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608817] [NEW] Creating a new instance on Mitaka 9.1.1 (Devstack) removes the default route on the Host

2016-08-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Creating a new instance on Mitaka 9.1.1 (Devstack) removes the default route on
the Host (Ubuntu 16.04)

Reproduction steps:
1. Check that your default route on the host is persistent (make a few reboots)
2. Run a fresh devstack all-in-one node on Ubuntu 16.04 (Mitaka 9.1.1)
3. Start a new instance (I did it with horizon)
4. You will lose the ssh and horizon connections; if you connect via the
console you will see that the default route has disappeared

** Affects: nova
 Importance: Undecided
 Status: New

-- 
Creation new instance for Mitka 9.1.1 (Devstack) remove default route on the Hos
https://bugs.launchpad.net/bugs/1608817
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-08-02 Thread Takashi NATSUME
** Also affects: masakari
   Importance: Undecided
   Status: New

** Changed in: masakari
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Glance:
  In Progress
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in masakari:
  New
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  In Progress
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1], but it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: if we are using the logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
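
  As a rough sketch of why warn is "still valid" with oslo.log per [2]:
  the adapter there keeps warn as an alias of warning. The class below
  illustrates that pattern; it is not the actual oslo.log source:

      import logging

      class CompatLoggerAdapter(logging.LoggerAdapter):
          # Sketch of the shim referenced in [2]: keep the old ``warn``
          # spelling working by aliasing it to ``warning``.
          warn = logging.LoggerAdapter.warning

      logging.basicConfig(level=logging.WARNING)
      LOG = CompatLoggerAdapter(logging.getLogger(__name__), {})

      LOG.warn("works via the alias")
      LOG.warning("still the preferred spelling")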

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp