[Yahoo-eng-team] [Bug 1091605] Re: Internal interfaces defined via OVS are not brought up properly after a reboot

2017-02-24 Thread James Page
quantum is no longer found in any supported Ubuntu release

** Changed in: quantum (Ubuntu)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1091605

Title:
  Internal interfaces defined via OVS are not brought up properly after
  a reboot

Status in neutron:
  Fix Released
Status in neutron folsom series:
  Fix Released
Status in quantum package in Ubuntu:
  Won't Fix

Bug description:
  The L3 agents and DHCP agents both define internal (qg-, qr-, tap-)
  ports via OVS.  In both cases, the agents call plug() to configure and
  bring the device up if it does not exist.  If the device does exist,
  however, the agents neither call plug nor do they ensure the link is
  up (OVS ensures that the devices survive a reboot but does not ensure
  that they are brought up on boot).

  The responsibility for bringing devices up should probably remain in
  quantum/agent/linux/interface.py, so a suggested implementation would
  be delegating the device existence check to the driver's plug()
  method, which could then ensure that the device was brought up if
  necessary.

  This bug reveals a hole in our current testing strategy.   Most
  developers presumably work on devstack rather than installed code.
  Since devstack agents don't survive a reboot, most developers would
  never have the chance to validate whether a quantum agent node still
  works after a reboot.  Documenting use-cases that need to be tested
  (e.g. quantum agent nodes need to work properly after a reboot) is a
  good first step - is this currently captured somewhere or can we find
  a place to do so?
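The suggested behaviour can be modelled as follows. This is an illustrative sketch, not the actual quantum/agent/linux/interface.py code; the class and the commands in the comments are stand-ins:

```python
# Illustrative model of the suggested fix: plug() handles both cases,
# creating the device when missing and ALWAYS ensuring the link is up.
# FakeIPDevice is a stand-in, not the real quantum driver API.

class FakeIPDevice:
    """Stand-in for an OVS internal port (qg-, qr-, tap-)."""
    def __init__(self, name, exists=False, link_up=False):
        self.name = name
        self.exists = exists
        self.link_up = link_up

def plug(device):
    """Create the device if needed, and always bring the link up."""
    if not device.exists:
        device.exists = True   # would be: ovs-vsctl add-port br-int <name> ...
    if not device.link_up:
        device.link_up = True  # would be: ip link set <name> up
    return device

# After a reboot OVS has preserved the port, but the link is down;
# with this shape of plug(), the agent no longer skips the link-up step.
dev = plug(FakeIPDevice("qr-12345678-90", exists=True, link_up=False))
assert dev.link_up
```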

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1091605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189909] Re: dhcp-agent does not always provide IP address for instances with re-cycled IP addresses.

2017-02-24 Thread James Page
quantum is no longer found in any supported Ubuntu release

** Changed in: quantum (Ubuntu)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1189909

Title:
  dhcp-agent does not always provide IP address for instances with
  re-cycled IP addresses.

Status in neutron:
  Fix Released
Status in quantum package in Ubuntu:
  Won't Fix
Status in CentOS:
  New

Bug description:
  Configuration: OpenStack Networking, OpenvSwitch Plugin (GRE tunnels), 
OpenStack Networking Security Groups
  Release: Grizzly

  Sometime when creating instances, the dnsmasq instance associated with
  the tenant l2 network does not have configuration for the requesting
  mac address:

  Jun 11 09:30:23 d7m88-cofgod dnsmasq-dhcp[10083]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available
  Jun 11 09:30:33 d7m88-cofgod dnsmasq-dhcp[10083]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available

  Restarting the quantum-dhcp-agent resolved the issue:

  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45
  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: DHCPOFFER(tap98031044-d8) 
10.5.0.2 fa:16:3e:da:41:45

  The IP address (10.5.0.2) was re-cycled from an instance that was
  destroyed just prior to creation of this one.
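  What the failure looks like from dnsmasq's side can be modelled
  simply: the dhcp-agent maintains a per-network host file (one
  `MAC,hostname,IP` line per port) that dnsmasq consults, and a MAC with
  no entry gets "no address available". The file contents and helper
  below are invented for illustration:

```python
# Why dnsmasq answers "no address available": if the requesting port's
# MAC never reached the agent-written host file, DHCPDISCOVER gets no
# offer. Restarting the agent regenerates the file. Contents invented.
hostsfile = """\
fa:16:3e:aa:bb:cc,host-10-5-0-3.openstacklocal,10.5.0.3
"""

def has_lease_entry(contents, mac):
    """Return True if the host file has an entry for the given MAC."""
    return any(line.split(",")[0] == mac
               for line in contents.splitlines() if line.strip())

# The MAC from the log is missing, matching the "no address available" symptom:
assert not has_lease_entry(hostsfile, "fa:16:3e:da:41:45")
```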

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: quantum-dhcp-agent 1:2013.1.1-0ubuntu1
  ProcVersionSignature: Ubuntu 3.8.0-23.34-generic 3.8.11
  Uname: Linux 3.8.0-23-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.1
  Architecture: amd64
  Date: Tue Jun 11 09:31:38 2013
  MarkForUpload: True
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: quantum
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.quantum.dhcp.agent.ini: [deleted]
  modified.conffile..etc.quantum.rootwrap.d.dhcp.filters: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1189909/+subscriptions




[Yahoo-eng-team] [Bug 1050540] Re: neutron-server requires plugin config at the command line

2017-02-24 Thread James Page
quantum is no longer found in any supported Ubuntu release

** Changed in: quantum (Ubuntu)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1050540

Title:
  neutron-server requires plugin config at the command line

Status in neutron:
  Invalid
Status in quantum package in Ubuntu:
  Won't Fix

Bug description:
  Currently, quantum-server apparently requires plugin config paths to
  be passed to the quantum-server binary at launch, along with the path
  to the standard quantum.conf.  This creates an issue for packagers who
  wish to keep quantum-server decoupled from specific plugins.  System
  init scripts need to either:

  - use a specific plugin as a default, and set a dependency between 
quantum-server and that plugin.
  - use mechanisms outside of quantum's configuration for specifying which 
plugin config file is to be used. (symlinks, variables from 
/etc/default/quantum)

  It would be useful if the path to the plugin config(s) to be loaded
  were contained in quantum-server.conf, similar to api_paste_config.
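  For illustration, this is the kind of stanza the report is asking for.
  The `plugin_config_file` option is hypothetical (it does not exist in
  quantum); only `api_paste_config` is a real setting:

```ini
# Hypothetical /etc/quantum/quantum.conf stanza illustrating the request;
# plugin_config_file is an invented option name.
[DEFAULT]
api_paste_config = /etc/quantum/api-paste.ini
plugin_config_file = /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
```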

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1050540/+subscriptions



[Yahoo-eng-team] [Bug 1663036] Re: api-ref: delete server async postcondition doc is missing some info

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/431190
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=26a961b58435300806444d6762c40f6c3f3528a4
Submitter: Jenkins
Branch:master

commit 26a961b58435300806444d6762c40f6c3f3528a4
Author: Matt Riedemann 
Date:   Wed Feb 8 16:41:31 2017 -0500

api-ref: fix delete server async postcondition docs

Apparently we thought it was useful to tell you that
while a server is being deleted you could watch it's
status, but not enough that we cared to tell you what
status.

Change-Id: Ibb175c448712cbc0ff80353b83dcab524b223e4d
Closes-Bug: #1663036


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663036

Title:
  api-ref: delete server async postcondition doc is missing some info

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://developer.openstack.org/api-ref/compute/?expanded=stop-server-
  os-stop-action-detail,delete-server-detail#delete-server

  "With correct permissions, you can see the server status as"

  AS WHAT?!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663036/+subscriptions



[Yahoo-eng-team] [Bug 1667877] [NEW] [RFE] DVR support for Configuring Floatingips in Network Node or in the Compute Node based on Config option.

2017-02-24 Thread Swaminathan Vasudevan
Public bug reported:

Provide a configurable option to have floating IPs for DVR-based routers
reside on the compute node or on the network node.
Also proactively check the status of the agent on the destination node; if
the agent is down, configure the floating IP on the network node.

Provide a configuration option in neutron.conf such as

DVR_FLOATINGIP_CENTRALIZED = 'enforced/circumstantial'

If DVR_FLOATINGIP_CENTRALIZED is configured as 'enforced', all floating
IPs will be configured on the network node.
If it is configured as 'circumstantial', floating IPs will be configured
either on the compute node or on the network node, depending on agent
health.

If this option is not configured, floating IPs will be distributed for
all bound ports, and only floating IPs for unbound ports will be
implemented on the network node.
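A hedged sketch of how the proposal might be spelled in neutron.conf; the
option name and values come from the RFE text and do not exist in neutron
today:

```ini
# Hypothetical option from the RFE; values: enforced | circumstantial
[DEFAULT]
dvr_floatingip_centralized = circumstantial
```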

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Summary changed:

- [RFE] DVR support for Configurable Floatingips in Network Node or in the 
Compute Node.
+ [RFE] DVR support for Configuring Floatingips in Network Node or in the 
Compute Node based on Config option.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667877

Title:
  [RFE] DVR support for Configuring Floatingips in Network Node or in
  the Compute Node based on Config option.

Status in neutron:
  New

Bug description:
  Provide a configurable option to have floating IPs for DVR-based
  routers reside on the compute node or on the network node.
  Also proactively check the status of the agent on the destination
  node; if the agent is down, configure the floating IP on the network
  node.

  Provide a configuration option in neutron.conf such as

  DVR_FLOATINGIP_CENTRALIZED = 'enforced/circumstantial'

  If DVR_FLOATINGIP_CENTRALIZED is configured as 'enforced', all
  floating IPs will be configured on the network node.
  If it is configured as 'circumstantial', floating IPs will be
  configured either on the compute node or on the network node,
  depending on agent health.

  If this option is not configured, floating IPs will be distributed
  for all bound ports, and only floating IPs for unbound ports will be
  implemented on the network node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667877/+subscriptions



[Yahoo-eng-team] [Bug 1667878] [NEW] fstab entries with x-systemd.requires=cloud-init.service mean cloud-init will always run.

2017-02-24 Thread Scott Moser
Public bug reported:

when cloud-init writes mount entries into /etc/fstab, it adds:
 x-systemd.requires=cloud-init.service

This is because a subsequent boot of cloud-init may need to remove that
entry (a new instance for example).

cloud-init has a feature where it can be disabled by:
  sudo touch /etc/cloud/cloud-init.disabled

The generator for cloud-init then says that the cloud-init.target does
not need to run.

However, due to the (possibly stale) x-systemd.requires=cloud-
init.service entries, cloud-init will still run.

I think what I'd like is a:
  x-systemd.after=cloud-init.service
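The difference can be sketched with fstab entries (device, mount point,
and entry shape invented for illustration). `x-systemd.requires` makes
the generated mount unit both require and order after
cloud-init.service, so the mount pulls cloud-init in even when it is
disabled; an `x-systemd.after` option would impose ordering only:

```
# as cloud-init writes it today: hard dependency + ordering
/dev/vdb  /mnt/ephemeral  auto  defaults,nofail,x-systemd.requires=cloud-init.service  0  2

# what this report asks for: ordering without a hard dependency
/dev/vdb  /mnt/ephemeral  auto  defaults,nofail,x-systemd.after=cloud-init.service  0  2
```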

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667878

Title:
  fstab entries with x-systemd.requires=cloud-init.service mean cloud-
  init will always run.

Status in cloud-init:
  Confirmed

Bug description:
  when cloud-init writes mount entries into /etc/fstab, it adds:
   x-systemd.requires=cloud-init.service

  This is because a subsequent boot of cloud-init may need to remove
  that entry (a new instance for example).

  cloud-init has a feature where it can be disabled by:
sudo touch /etc/cloud/cloud-init.disabled

  The generator for cloud-init then says that the cloud-init.target does
  not need to run.

  However, due to the (possibly stale) x-systemd.requires=cloud-
  init.service entries, cloud-init will still run.

  I think what I'd like is a:
x-systemd.after=cloud-init.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1667878/+subscriptions



[Yahoo-eng-team] [Bug 1667367] Re: V2 role create does not allow spaces in the role description

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/437797
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=eb535ca4768f20184db4885abec75996ffdeda0c
Submitter: Jenkins
Branch:master

commit eb535ca4768f20184db4885abec75996ffdeda0c
Author: Tin Lam 
Date:   Thu Feb 23 23:38:32 2017 -0600

Fix v2 role create schema validation

Currently, creating a new role using the v2 api no longer allows
spaces in the role description.  This patch set changes the
schema type, and also fixes a bug in the v2 validation test where
the validation is performed against the v3 schema instead of v2.

Co-Authored-By: Gage Hugo 

Change-Id: I858005c7805ba87f506933d49715cc3ce0d539e1
Closes-Bug: #1667367


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1667367

Title:
  V2 role create does not allow spaces in the role description

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Creating a new role using V2 APIs no longer allows spaces in the role
  description. Looks like it was broken since the introduction of JSON
  schema. See

  
https://github.com/openstack/keystone/blob/master/keystone/assignment/schema.py#L20

  Instead of parameter_types.id_string. It should be:
  parameter_types.description.
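  The mismatch can be illustrated with simplified stand-ins for
  keystone's parameter types; the regexes below are invented for the
  example, not keystone's actual definitions:

```python
# Simplified stand-ins: an id_string-style pattern rejects whitespace,
# while a description-style pattern accepts any text. These regexes are
# illustrative only, not keystone's actual parameter_types.
import re

ID_STRING = re.compile(r"^[a-zA-Z0-9_\-]+$")  # no spaces allowed
DESCRIPTION = re.compile(r"^[\s\S]*$")        # any text, spaces included

desc = "can view and edit servers"
assert ID_STRING.match(desc) is None          # why v2 role create failed
assert DESCRIPTION.match(desc) is not None    # what the fix validates against
```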

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1667367/+subscriptions



[Yahoo-eng-team] [Bug 1667679] Re: Setting quota fails saying admin project is not a valid project

2017-02-24 Thread Emilien Macchi
** Changed in: tripleo
   Status: Triaged => Fix Released

** Changed in: tripleo
 Assignee: (unassigned) => Emilien Macchi (emilienm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667679

Title:
  Setting quota fails saying admin project is not a valid project

Status in OpenStack Compute (nova):
  Confirmed
Status in tripleo:
  Fix Released

Bug description:
  This is what's in the logs http://logs.openstack.org/15/359215/63
  /check-tripleo/gate-tripleo-ci-centos-7-ovb-
  ha/3465882/console.html#_2017-02-24_11_07_08_893276

  2017-02-24 11:07:08.893276 | 2017-02-24 11:07:02.000 | 2017-02-24 
11:07:02,929 INFO: + openstack quota set --cores -1 --instances -1 --ram -1 
b0fe52b0ac15450ba0a38ac9acd8fea8
  2017-02-24 11:07:08.893365 | 2017-02-24 11:07:08.000 | 2017-02-24 
11:07:08,674 INFO: Project ID b0fe52b0ac15450ba0a38ac9acd8fea8 is not a valid 
project. (HTTP 400) (Request-ID: req-9e0a00b7-75ae-41d5-aeed-705bb1a54bae)
  2017-02-24 11:07:08.893493 | 2017-02-24 11:07:08.000 | 2017-02-24 
11:07:08,758 INFO: [2017-02-24 11:07:08,757] (os-refresh-config) [ERROR] during 
post-configure phase. [Command '['dib-run-parts', 
'/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit 
status 1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667679/+subscriptions



[Yahoo-eng-team] [Bug 1667831] Re: cloud-init dependency for open-vm-tools service

2017-02-24 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667831

Title:
  cloud-init dependency for open-vm-tools service

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New

Bug description:
  Had a private chat conversation with Scott Moser (@smoser). As per his
  instructions, logging this bug. We need to make the 'cloud-init'
  service depend on the 'open-vm-tools' service.

  This is how 'Guest Customization' works for the 'VMware' managed
  guests.

  1. open-vm-tools service comes up and populates a 'customization' 
configuration file.
  2. cloud-init service starts and waits for the 'customization config' file, 
reads it and applies the customization.

  (1) should start before (2). Else, (2) will just block itself and not
  find the config file. Everything was working fine. But due to recent
  'systemd' changes done to 'cloud-init', 'cloud-init' service starts
  early in the boot process before 'open-vm-tools' service. Due to this,
  no customization actually happens.

  Need to add the 'cloud-init' dependency for 'open-vm-tools' service so
  that (1) always happens before (2).

  Logging a bug.

  Thanks
  Sankar.
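  One way the requested ordering could be expressed is a systemd drop-in;
  this is a hedged sketch (drop-in path assumed, and the real fix may
  belong in the packaged unit files instead):

```ini
# Hypothetical drop-in: /etc/systemd/system/cloud-init.service.d/10-after-vmtools.conf
# Orders cloud-init after open-vm-tools so the customization config
# file exists before cloud-init looks for it.
[Unit]
After=open-vm-tools.service
```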

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1667831/+subscriptions



[Yahoo-eng-team] [Bug 1667831] [NEW] cloud-init dependency for open-vm-tools service

2017-02-24 Thread Sankar Tanguturi
Public bug reported:

Had a private chat conversation with Scott Moser (@smoser). As per his
instructions, logging this bug. We need to make the 'cloud-init' service
depend on the 'open-vm-tools' service.

This is how 'Guest Customization' works for the 'VMware' managed guests.

1. open-vm-tools service comes up and populates a 'customization' configuration 
file.
2. cloud-init service starts and waits for the 'customization config' file, 
reads it and applies the customization.

(1) should start before (2). Else, (2) will just block itself and not
find the config file. Everything was working fine. But due to recent
'systemd' changes done to 'cloud-init', 'cloud-init' service starts
early in the boot process before 'open-vm-tools' service. Due to this,
no customization actually happens.

Need to add the 'cloud-init' dependency for 'open-vm-tools' service so
that (1) always happens before (2).

Logging a bug.

Thanks
Sankar.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667831

Title:
  cloud-init dependency for open-vm-tools service

Status in cloud-init:
  New

Bug description:
  Had a private chat conversation with Scott Moser (@smoser). As per his
  instructions, logging this bug. We need to make the 'cloud-init'
  service depend on the 'open-vm-tools' service.

  This is how 'Guest Customization' works for the 'VMware' managed
  guests.

  1. open-vm-tools service comes up and populates a 'customization' 
configuration file.
  2. cloud-init service starts and waits for the 'customization config' file, 
reads it and applies the customization.

  (1) should start before (2). Else, (2) will just block itself and not
  find the config file. Everything was working fine. But due to recent
  'systemd' changes done to 'cloud-init', 'cloud-init' service starts
  early in the boot process before 'open-vm-tools' service. Due to this,
  no customization actually happens.

  Need to add the 'cloud-init' dependency for 'open-vm-tools' service so
  that (1) always happens before (2).

  Logging a bug.

  Thanks
  Sankar.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1667831/+subscriptions



[Yahoo-eng-team] [Bug 1667827] [NEW] Quota list API doesn't return project_id

2017-02-24 Thread Hirofumi Ichihara
Public bug reported:

Quota list API returns tenant_id with the project's resource quota but
it doesn't return project_id.

$ curl -g -i -X GET http://127.0.0.1:9696/v2.0/quotas.json -H "User-
Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-
Token: $TOKEN"

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 223
X-Openstack-Request-Id: req-888fc950-bcc3-4afd-b41c-d5cec29bbc37
Date: Fri, 24 Feb 2017 22:07:06 GMT

{"quotas": [{"subnet": 10, "network": 20, "floatingip": 50, "tenant_id":
"98349a417b15492ab750c7705bfe2fa1", "subnetpool": -1,
"security_group_rule": 100, "security_group": 10, "router": 10,
"rbac_policy": 10, "port": 50}]}

** Affects: neutron
 Importance: Undecided
 Assignee: Hirofumi Ichihara (ichihara-hirofumi)
 Status: In Progress


** Tags: api

** Changed in: neutron
 Assignee: (unassigned) => Hirofumi Ichihara (ichihara-hirofumi)

** Tags added: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667827

Title:
  Quota list API doesn't return project_id

Status in neutron:
  In Progress

Bug description:
  Quota list API returns tenant_id with the project's resource quota but
  it doesn't return project_id.

  $ curl -g -i -X GET http://127.0.0.1:9696/v2.0/quotas.json -H "User-
  Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-
  Token: $TOKEN"

  HTTP/1.1 200 OK
  Content-Type: application/json
  Content-Length: 223
  X-Openstack-Request-Id: req-888fc950-bcc3-4afd-b41c-d5cec29bbc37
  Date: Fri, 24 Feb 2017 22:07:06 GMT

  {"quotas": [{"subnet": 10, "network": 20, "floatingip": 50,
  "tenant_id": "98349a417b15492ab750c7705bfe2fa1", "subnetpool": -1,
  "security_group_rule": 100, "security_group": 10, "router": 10,
  "rbac_policy": 10, "port": 50}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667827/+subscriptions



[Yahoo-eng-team] [Bug 1523369] Re: clean a user's default project if the project has been deleted

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/429047
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=51d5597df729158d15b71e2ba80ab103df5d55f8
Submitter: Jenkins
Branch:master

commit 51d5597df729158d15b71e2ba80ab103df5d55f8
Author: Kalaswan Datta 
Date:   Wed Jan 20 00:55:29 2016 -0500

Clear the project ID from user information

Currently when a project is deleted, the project ID details
still exists in user information. After this fix, when a project
is deleted the default project ID in user
information will be cleared.

Closes-Bug: #1523369
Signed-off-by: Kalaswan Datta 

Change-Id: I3db0cf27d3cfdf6cf7c5bb34ec1b27ef80c139f4


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1523369

Title:
  clean a user's default project if the project has been deleted

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  It is possible to create a new keystone user with a default tenant,
  but after the tenant is deleted the user still contains info about the
  non-existent tenant.

  Steps to reproduce:

  1. keystone tenant-create --name ten
   'ten' created successfully

  2. keystone user-create --name user --tenant ten
  user created successfully with 'ten' tenantId

  3. keystone tenant-delete ten

  4. keystone user-get user
  show non existing tenantId

  5. keystone tenant-get 'tenant-id'
  No tenant with a name or ID of 'tenant-id' exists.
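  The fix described by the commit can be sketched in miniature; plain
  dicts stand in for the real keystone backends:

```python
# Minimal in-memory sketch of the fix: deleting a project also clears
# any user's default_project_id that referenced it. Dicts stand in for
# the real keystone project/user backends.
users = {"user": {"name": "user", "default_project_id": "ten-id"}}
projects = {"ten-id": {"name": "ten"}}

def delete_project(project_id):
    projects.pop(project_id)
    for user in users.values():
        if user.get("default_project_id") == project_id:
            user["default_project_id"] = None  # behaviour added by the fix

delete_project("ten-id")
assert users["user"]["default_project_id"] is None  # no dangling tenantId
```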

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1523369/+subscriptions



[Yahoo-eng-team] [Bug 1667817] [NEW] Unexpected API error on neutronclient

2017-02-24 Thread Amy Marrich
Public bug reported:

Received an error which said to open a bug, so doing so. Including the
command and nova-api log. This is a fresh install following the Ocata
install guide for Ubuntu while attempting to document how to install
nova with the new nova-placement-api and cells v2. Can provide those
steps if needed, as well as confs.

root@controller:~# openstack server create --debug --flavor m1.nano --image 
cirros   --nic net-id=58da0f04-06a9-4f9f-89f7-f913e64eab67 --security-group 
default   --key-name mykey testinstance
START with options: [u'server', u'create', u'--debug', u'--flavor', u'm1.nano', 
u'--image', u'cirros', u'--nic', 
u'net-id=58da0f04-06a9-4f9f-89f7-f913e64eab67', u'--security-group', 
u'default', u'--key-name', u'mykey', u'testinstance']
options: Namespace(access_key='', access_secret='***', access_token='***', 
access_token_endpoint='', access_token_type='', auth_type='', 
auth_url='http://controller:35357/v3', cacert=None, cert='', client_id='', 
client_secret='***', cloud='', code='', consumer_key='', consumer_secret='***', 
debug=True, default_domain='default', default_domain_id='', 
default_domain_name='', deferred_help=False, discovery_endpoint='', 
domain_id='', domain_name='', endpoint='', identity_provider='', 
identity_provider_url='', insecure=None, interface='', key='', log_file=None, 
old_profile=None, openid_scope='', os_beta_command=False, 
os_compute_api_version='', os_dns_api_version='2', os_identity_api_version='3', 
os_image_api_version='2', os_network_api_version='', os_object_api_version='', 
os_project_id=None, os_project_name=None, os_volume_api_version='', 
passcode='', password='***', profile=None, project_domain_id='', 
project_domain_name='Default', project_id='', project_name='demo', protocol='', 
redir
 ect_uri='', region_name='', timing=False, token='***', trust_id='', url='', 
user_domain_id='', user_domain_name='Default', user_id='', username='demo', 
verbose_level=3, verify=None)
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
'cacert': None, 'auth_url': 'http://controller:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'demo', 
u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': u'1', 'auth': 
{'user_domain_name': 'Default', 'project_name': 'demo', 'project_domain_name': 
'Default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', 'timing': False, 'password': '***', 
u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', 
u'metering_api_version': u'2', 'deferred_help': False, u'identity_api_version
 ': '3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': 
u'neutron', u'status': u'active', 'debug': True, u'interface': None, 
u'disable_vendor_agent': {}}
defaults: {u'auth_type': 'password', u'status': u'active', 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': 
u'2', u'container_infra_api_version': u'1', u'metering_api_version': u'2', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', 'cacert': None, u'network_api_version': 
u'2', u'message': u'', u'image_format': u'qcow2', 
u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', 
'verify': True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 
'cert': None, u'secgroup_source': u'neutron', u'container_api_version': u'1', 
u'dns_api_version': u'2', u'object_store_api_version': u'1', u'interface': 
None, u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
'cacert': None, 'auth_url': 'http://controller:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'demo', 
u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': u'1', 'auth': 
{'user_domain_name': 'Default', 'project_name': 'demo', 'project_domain_name': 
'Default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', 'timing': False, 'password': '***', 
u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', 
u'metering_api_version': u'2', 'deferred_help': False, 

[Yahoo-eng-team] [Bug 1659485] Re: Warnings about nova-status.rst and config in Document generation

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/425549
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1585ca189eb6e194441f9dff94c7a710d96e1acc
Submitter: Jenkins
Branch:master

commit 1585ca189eb6e194441f9dff94c7a710d96e1acc
Author: Takashi NATSUME 
Date:   Thu Jan 26 14:25:22 2017 +0900

Fix doc generation warnings

Fix the following warnings.

- A warning in config sample generation
- Warnings about nova-status.rst

Change-Id: Ifcc3b4a89eeea9d0dd62e2a8b560c5e6a9ff3d1a
Closes-Bug: #1659485


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659485

Title:
  Warnings about nova-status.rst and config in Document generation

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When executing 'tox -e docs', the following warnings are output.

  * WARNING:stevedore.named:Could not load nova.api, nova.cells
  * /tmp/nova/doc/source/man/nova-status.rst:66: WARNING: Literal block 
expected; none found.
  * /tmp/nova/doc/source/man/nova-status.rst:73: WARNING: Enumerated list ends 
without a blank line; 

  The warnings about nova-status.rst are related to a code block, but
  the block would be better replaced with a table.
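  For reference, well-formed versions of the two constructs the warnings
  complain about look like this (content illustrative, not the actual
  nova-status.rst text):

```rst
Return codes are listed in a literal block, introduced by ``::`` and a
blank line::

    0  success
    1  at least one check produced a warning
    2  at least one check failed

#. First upgrade check.
#. Second upgrade check.

A blank line must separate the enumerated list from the following
paragraph; without it, Sphinx reports "Enumerated list ends without a
blank line".
```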

  -
  stack@devstack-master:/tmp/nova$ tox -e docs
  docs create: /tmp/nova/.tox/docs
  docs installdeps: -r/tmp/nova/test-requirements.txt
  docs develop-inst: /tmp/nova
  docs installed: 
alabaster==0.7.9,alembic==0.8.10,amqp==1.4.9,anyjson==0.3.3,appdirs==1.4.0,Babel==2.3.4,bandit==1.4.0,cachetools==2.0.0,castellan==0.5.0,cffi==1.9.1,cliff==2.4.0,cmd2==0.6.9,colorama==0.3.7,contextlib2==0.5.4,coverage==4.3.4,cryptography==1.7.1,ddt==1.1.1,debtcollector==1.11.0,decorator==4.0.11,docutils==0.12,dogpile.cache==0.6.2,dulwich==0.16.3,enum34==1.1.6,eventlet==0.19.0,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.2.4,funcsigs==1.0.2,functools32==3.2.3.post2,futures==3.0.5,futurist==0.21.0,gabbi==1.31.0,gitdb2==2.0.0,GitPython==2.1.1,greenlet==0.4.11,hacking==0.10.2,httplib2==0.9.2,idna==2.2,ipaddress==1.0.18,iso8601==0.1.11,Jinja2==2.8.1,jsonpatch==1.15,jsonpath-rw==1.4.0,jsonpath-rw-ext==1.0.0,jsonpointer==1.10,jsonschema==2.5.1,keystoneauth1==2.18.0,keystonemiddleware==4.14.0,kombu==3.0.37,linecache2==1.0.0,lxml==3.7.2,Mako==1.0.6,MarkupSafe==0.23,mccabe==0.2.1,microversion-parse==0.1.4,mock==2.0.0,monotonic==1.2,mox3==0.20.0,msgpack-python==0.4.8,netaddr==0.7.19,netifaces==0.10.5,-e git+https://git.openstack.org/openstack/nova.git@5f59c0e3c61cd6f0600de3a69eeef2037358b782#egg=nova,numpy==1.12.0,openstackdocstheme==1.6.1,openstacksdk==0.9.10,os-api-ref==1.2.0,os-brick==1.11.0,os-client-config==1.26.0,os-testr==0.8.0,os-vif==1.4.0,os-win==1.4.0,os-xenapi==0.1.1,osc-lib==1.3.0,oslo.cache==1.17.0,oslo.concurrency==3.18.0,oslo.config==3.22.0,oslo.context==2.11.0,oslo.db==4.17.0,oslo.i18n==3.12.0,oslo.log==3.20.0,oslo.messaging==5.17.0,oslo.middleware==3.23.0,oslo.policy==1.18.0,oslo.privsep==1.16.0,oslo.reports==1.17.0,oslo.rootwrap==5.4.0,oslo.serialization==2.16.0,oslo.service==1.19.0,oslo.utils==3.22.0,oslo.versionedobjects==1.21.0,oslo.vmware==2.17.0,oslosphinx==4.10.0,oslotest==2.13.0,osprofiler==1.5.0,packaging==16.8,paramiko==2.1.1,Paste==2.0.3,PasteDeploy==1.5.2,pbr==1.10.0,pep8==1.5.7,pika==0.10.0,pika-pool==0.1.3,ply==3.9,positional==1.1.1,prettytable==0.7.2,psutil==5.0.1,psycopg2==2.6.2,py==1.4.32,pyasn1==0.1.9,pycadf==2.5.0,pycparser==2.17,pyflakes==0.8.1,Pygments==2.2.0,pyinotify==0.9.6,PyMySQL==0.7.9,pyparsing==2.1.10,pytest==3.0.6,python-barbicanclient==4.1.0,python-cinderclient==1.10.0,python-dateutil==2.6.0,python-editor==1.0.3,python-glanceclient==2.6.0,python-ironicclient==1.10.0,python-keystoneclient==3.9.0,python-mimeparse==1.6.0,python-neutronclient==6.1.0,python-novaclient==6.0.0,python-openstackclient==3.7.0,python-subunit==1.2.0,pytz==2016.10,PyYAML==3.12,reno==2.0.3,repoze.lru==0.6,requests==2.12.5,requests-mock==1.2.0,requestsexceptions==1.1.3,retrying==1.3.3,rfc3986==0.4.1,Routes==2.4.1,simplejson==3.10.0,six==1.10.0,smmap2==2.0.1,snowballstemmer==1.2.1,Sphinx==1.3.6,sphinx-rtd-theme==0.1.9,SQLAlchemy==1.0.17,sqlalchemy-migrate==0.10.0,sqlparse==0.2.2,statsd==3.2.1,stevedore==1.19.1,suds-jurko==0.6,tempest-lib==1.0.0,Tempita==0.5.2,tenacity==3.7.1,testrepository==0.0.20,testresources==2.0.1,testscenarios==0.5.0,testtools==2.2.0,traceback2==1.4.0,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.20,warlock==1.2.0,WebOb==1.6.3,websockify==0.8.0,wrapt==1.10.8,wsgi-intercept==1.4.1
  docs runtests: PYTHONHASHSEED='1259549316'
  docs runtests: commands[0] | rm -rf doc/source/api doc/build api-guide/build api-ref/build
  docs runtests: commands[1] | python setup.py build_sphinx
  running build_sphinx
  creating /tmp/nova/doc/build
  creating /tmp/nova/doc/build/doctrees
  creating 

[Yahoo-eng-team] [Bug 1667777] [NEW] identify ovf datasource

2017-02-24 Thread Scott Moser
Public bug reported:

Currently there is no positive identification for ovf datasource in ds-identify.
The ovf specification for iso transport only says that there will be a cdrom 
attached in iso9660 format, and that it would have an 'ovf-env.xml' file.

At this point, ds-identify does not do any mounting of devices, so
mounting and looking for the file is not something I'd like to do if we
can avoid it at all.

For further reference:
  https://lists.ubuntu.com/archives/ubuntu-devel/2017-February/039697.html
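One mount-free probe ds-identify could build on is reading the ISO9660 primary volume descriptor from the block device directly. A minimal Python sketch of that check, for illustration only (ds-identify itself is shell, and this only narrows the candidates; confirming ovf-env.xml would still require mounting or parsing directory records):

```python
def is_iso9660(path):
    """Return True if `path` looks like an ISO9660 filesystem.

    Per the ISO9660 layout, the primary volume descriptor starts at
    sector 16 (byte offset 32768); after a one-byte descriptor type,
    bytes 1-5 hold the standard identifier "CD001".
    """
    try:
        with open(path, "rb") as f:
            f.seek(32768 + 1)  # skip the volume descriptor type byte
            return f.read(5) == b"CD001"
    except OSError:
        return False
```

On a real system this would be run against cdrom device nodes such as /dev/sr0 before deciding whether deeper (mount-based) inspection is worthwhile.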

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667777

Title:
  identify ovf datasource

Status in cloud-init:
  Confirmed

Bug description:
  Currently there is no positive identification for ovf datasource in 
ds-identify.
  The ovf specification for iso transport only says that there will be a cdrom 
attached in iso9660 format, and that it would have an 'ovf-env.xml' file.

  At this point, ds-identify does not do any mounting of devices, so
  mounting and looking for the file is not something I'd like to do if
  we can avoid it at all.

  For further reference:
https://lists.ubuntu.com/archives/ubuntu-devel/2017-February/039697.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1667777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667778] [NEW] Show MAC to IP map in Horizon

2017-02-24 Thread tom king
Public bug reported:

Feature request, please let me know if this needs to go elsewhere.

Users with multi-interface instances have no simple method to find which
MAC address corresponds to which IP address in Horizon. A good first
implementation may be a port list for all ports in the project, and
later allow for per-instance filtering.
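A sketch of the data such a port list could present, assuming Neutron-style port dicts (only the mac_address and fixed_ips fields are assumed; this is not Horizon code):

```python
def mac_to_ip_rows(ports):
    """Build (mac_address, ip_addresses) rows for a per-project port table.

    `ports` is a list of Neutron-style port dicts; each fixed_ips entry is
    assumed to carry an "ip_address" key.
    """
    rows = []
    for port in ports:
        ips = [ip["ip_address"] for ip in port.get("fixed_ips", [])]
        rows.append((port["mac_address"], ", ".join(ips)))
    return rows
```

Per-instance filtering would then be a matter of restricting `ports` by device_id before building the rows.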

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: feature horizon ip mac

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1667778

Title:
  Show MAC to IP map in Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Feature request, please let me know if this needs to go elsewhere.

  Users with multi-interface instances have no simple method to find
  which MAC address corresponds to which IP address in Horizon. A good
  first implementation may be a port list for all ports in the project,
  and later allow for per-instance filtering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1667778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618769] Re: SR-IOV: add agent QoS driver to support egress minimum bandwidth

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/433834
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=f469c7f19bfb358d292af34c8d1a1b65971d5018
Submitter: Jenkins
Branch: master

commit f469c7f19bfb358d292af34c8d1a1b65971d5018
Author: Miguel Angel Ajo 
Date:   Tue Feb 14 18:56:45 2017 +0100

Document QoS support for egress minimum bandwidth

Change-Id: I606cc9b8ba2a8bbb0e23de5fd49d19cbb9d093db
Closes-Bug: #1618769


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618769

Title:
  SR-IOV: add agent QoS driver to support egress minimum bandwidth

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/347302
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 46de63c42e7c229529c0f101c9db8c84d73f6860
  Author: Rodolfo Alonso Hernandez 
  Date:   Mon Jul 18 11:52:12 2016 +0100

  SR-IOV: add agent QoS driver to support egress minimum bandwidth
  
  This patch adds SR-IOV agent driver, which uses eswitch manager, to set
  VF min_tx_rate parameter. This parameter defines the guaranteed minimum
  bandwidth for egress traffic.
  
  DocImpact
  Partial-Bug: #1560963
  
  Change-Id: Iefe5e698e99d186202d6ef170f84e93bfbba46dd
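For context, the guaranteed minimum is ultimately applied through the kernel's VF configuration. A hedged sketch of the "ip link" invocation involved, with illustrative device and rate values (min_tx_rate is the iproute2 VF parameter for minimum transmit bandwidth in Mbps; this is not the agent's actual code):

```python
def vf_min_tx_rate_cmd(device, vf_index, rate_mbps):
    """Build the "ip link" command that sets a VF's guaranteed egress rate.

    Example result: ip link set enp4s0f0 vf 3 min_tx_rate 500
    """
    return ["ip", "link", "set", device,
            "vf", str(vf_index), "min_tx_rate", str(rate_mbps)]
```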

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667735] Re: cloud-init doesn't retry metadata lookups and hangs forever if metadata is down

2017-02-24 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

** Changed in: cloud-init (Ubuntu Precise)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Trusty)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Precise)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Trusty)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667735

Title:
  cloud-init doesn't retry metadata lookups and hangs forever if
  metadata is down

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  Confirmed
Status in cloud-init source package in Trusty:
  Confirmed

Bug description:
  If a host SmartOS server is rebooted and the metadata service is not
  available, a KVM VM instance that use cloud-init (via the SmartOS
  datasource) will fail to start.

  If the metadata agent on the host server is not available the python
  code for cloud-init gets blocked forever waiting for data it will
  never receive. This causes the boot process for an instance to hang on
  cloud-init.

  This is problematic if there happens to be some reason the metadata
  agent is not available for any reason while a SmartOS KVM VM that
  relies on cloud-init is booting.

  From the engineer that worked on this (note: the svcadm command is run on
  the host SmartOS server):

  You can reproduce this by disabling the metadata service SmartOS host:

  svcadm disable metadata

  and then boot a KVM VM running an Ubuntu Certified Cloud image such
  as:

  c864f104-624c-43d2-835e-b49a39709b6b (ubuntu-certified-14.04
  20150225.2)

  when you do this, the VM's boot process will hang at cloud-init. If
  you then start the metadata service, cloud-init will not recover.

  One of our engineers who looked at this was able to cause forward
  progress by applying this patch:

  --- 
/usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceSmartOS.py.ori 
  2017-02-23 01:28:28.405885775 +
  +++ /usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceSmartOS.py   
2017-02-23 01:35:51.281885775 +
  @@ -286,7 +286,7 @@
   if not seed_device:
   raise AttributeError("seed_device value is not set")

  -ser = serial.Serial(seed_device, timeout=seed_timeout)
  +ser = serial.Serial(seed_device, timeout=10)
   if not ser.isOpen():
   raise SystemError("Unable to open %s" % seed_device)

  which causes the following strace output:

  [pid  2119] open("/dev/ttyS1", O_RDWR|O_NOCTTY|O_NONBLOCK) = 5
  [pid  2119] ioctl(5, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or 
TCGETS, {B9600 -opost -isig -icanon -echo ...}) = 0
  [pid  2119] write(5, "GET user-script\n", 16) = 16
  [pid  2119] select(6, [5], [], [], {10, 0}) = 0 (Timeout)
  [pid  2119] close(5)= 0
  [pid  2119] open("/dev/ttyS1", O_RDWR|O_NOCTTY|O_NONBLOCK) = 5
  [pid  2119] ioctl(5, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or 
TCGETS, {B9600 -opost -isig -icanon -echo ...}) = 0
  [pid  2119] write(5, "GET iptables_disable\n", 21) = 21
  [pid  2119] select(6, [5], [], [], {10, 0}) = 0 (Timeout)
  [pid  2119] close(5)= 0

  instead of:

  [pid  1977] open("/dev/ttyS1", O_RDWR|O_NOCTTY|O_NONBLOCK) = 5
  [pid  1977] ioctl(5, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or 
TCGETS, {B9600 -opost -isig -icanon -echo ...}) = 0
  [pid  1977] write(5, "GET base64_keys\n", 16) = 16
  [pid  1977] select(6, [5], [], [], NULL

  which you get without the patch (notice the NULL for the timeout
  parameter). The code that gets blocked in this version of cloud-init
  is:

  ser.write("GET %s\n" % noun.rstrip())
  status = str(ser.readline()).rstrip()

  in cloudinit/sources/DataSourceSmartOS.py. The ser.readline()
  documentation says

  (https://pyserial.readthedocs.io/en/latest/shortintro.html#readline):

  Be careful when using readline(). Do specify a timeout when opening
  the serial port otherwise it could block forever if no newline
  character is received. Also note that readlines() only works with a
  timeout. readlines() depends on having a timeout and interprets that
  as EOF (end of file). It raises an exception if the port is not opened
  correctly.

  which is exactly the situation we've hit here.
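The principle behind the fix can be sketched in pure Python: never issue a blocking read for metadata without a deadline. This is a hypothetical helper showing the timeout behaviour on a raw file descriptor, not cloud-init's actual SmartOS serial code (which goes through pyserial):

```python
import os
import select
import time

def readline_with_timeout(fd, timeout=10.0):
    """Read one newline-terminated line from fd, or return None on timeout.

    Mirrors the intent of passing a finite timeout to serial.Serial():
    a reply that never arrives must not block the boot forever.
    """
    buf = b""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return None  # overall deadline exceeded
        ready, _, _ = select.select([fd], [], [], remaining)
        if not ready:
            return None  # nothing arrived before the deadline
        chunk = os.read(fd, 1)
        if not chunk:
            return None  # EOF before a complete line
        buf += chunk
        if chunk == b"\n":
            return buf
```

The caller can then treat None as "metadata unavailable" and retry or fail cleanly instead of hanging.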


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2017-02-24 Thread Thomas Herve
** Changed in: heat
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in heat:
  Fix Released
Status in neutron:
  Confirmed
Status in octavia:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released

Bug description:
  The nova team has decided to remove the nova v2 API code completely. And it will 
be merged
  very soon: https://review.openstack.org/#/c/311653/

  we should bump to use v2.1 ASAP

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667755] [NEW] Default scope rules added to router may drop traffic unexpectedly

2017-02-24 Thread James Denton
Public bug reported:

Release: OpenStack-Ansible 13.3.4 (Mitaka)

Scenario:

Neutron routers are connected to single provider network and single
tenant network. Floating IPs are *not* used, and SNAT is disabled on the
router:

+-------------------------+--------------------------------------------------------------------------+
| Field                   | Value                                                                    |
+-------------------------+--------------------------------------------------------------------------+
| admin_state_up          | True                                                                     |
| availability_zone_hints |                                                                          |
| availability_zones      | nova                                                                     |
| description             |                                                                          |
| distributed             | False                                                                    |
| external_gateway_info   | {"network_id": "ce830329-4133-41fe-868f-698cc761e247", "enable_snat":    |
|                         | false, "external_fixed_ips": [{"subnet_id":                              |
|                         | "cf34a5c3-5d26-449f-b22e-2e3fdd69f262", "ip_address": "10.152.114.39"}]} |
| ha                      | False                                                                    |
| id                      | c965e7a1-98c0-4d5e-8dcb-cfafc2667ee1                                     |
| name                    | RTR                                                                      |
| routes                  |                                                                          |
| status                  | ACTIVE                                                                   |
| tenant_id               | 2ed1712187674c64acae83948e5b1928                                         |
+-------------------------+--------------------------------------------------------------------------+
 

Upstream routes exist that route tenant network traffic to the qg
interface of the routers (static, not BGP - yet).

In some cases, we have found that inbound/outbound traffic is getting
dropped within the Neutron qrouter namespace. Comparing to a working
router, we have found some differences in iptables:

Working router:

*mangle
-A neutron-l3-agent-scope -i qr-3dd65e85-f2 -j MARK --set-xmark 
0x401/0x
-A neutron-l3-agent-scope -i qg-2f55db22-5b -j MARK --set-xmark 
0x401/0x

*filter
-A neutron-l3-agent-scope -o qr-3dd65e85-f2 -m mark ! --mark 
0x401/0x -j DROP
-A neutron-l3-agent-scope -o qg-2f55db22-5b -m mark ! --mark 
0x401/0x -j DROP

Non-working router:

*mangle
-A neutron-l3-agent-scope -i qg-e3f65cf1-29 -j MARK --set-xmark 
0x401/0x
-A neutron-l3-agent-scope -i qr-125a3dc5-e3 -j MARK --set-xmark 
0x400/0x

*filter
-A neutron-l3-agent-scope -o qg-e3f65cf1-29 -m mark ! --mark 
0x401/0x -j DROP
-A neutron-l3-agent-scope -o qr-125a3dc5-e3 -m mark ! --mark 
0x400/0x -j DROP

Our working theory is that the marks in filter rules on the non-working
router are incorrectly set - traffic ingress to the qg interface is
being marked as x401, and the egress filter on the qr interface is
checking for x400. We were able to test this theory by swapping the
marks on those two filter rules and observed that inbound/outbound
traffic was working properly.

In the case of the working router, the mark set in the mangle rules is
the same (x401 for both), so the filter rules work fine.

We are not sure at this time how the mark is determined, and while we
can replicate the issue on new routers in the environment, we are unable
to replicate this behavior in other environments at this time.

Please let us know if you need any additional info.
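The mismatch described above can be checked mechanically by comparing the mark set on ingress of each interface with the mark each egress filter rule expects. A hypothetical diagnostic, for illustration only (the real rules carry a /mask suffix that this archive has stripped, so the regexes here match only the bare mark):

```python
import re

SET_RE = re.compile(
    r"-A neutron-l3-agent-scope -i (\S+) -j MARK --set-xmark (0x[0-9a-fA-F]+)")
CHECK_RE = re.compile(
    r"-A neutron-l3-agent-scope -o (\S+) -m mark ! --mark (0x[0-9a-fA-F]+)")

def scope_mark_mismatches(mangle_rules, filter_rules):
    """Return (in_if, out_if) pairs whose forwarded traffic would be dropped.

    Traffic marked on ingress of in_if is dropped on egress of out_if
    whenever the egress rule expects a different mark.
    """
    set_mark = {}
    for rule in mangle_rules:
        m = SET_RE.search(rule)
        if m:
            set_mark[m.group(1)] = m.group(2)
    expect_mark = {}
    for rule in filter_rules:
        m = CHECK_RE.search(rule)
        if m:
            expect_mark[m.group(1)] = m.group(2)
    return [(in_if, out_if)
            for in_if, mark in set_mark.items()
            for out_if, want in expect_mark.items()
            if in_if != out_if and mark != want]
```

Run against the working router above this returns nothing; against the non-working router it flags both forwarding directions between the qg and qr interfaces.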

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1663627] Re: Running db_sync --check against new installs fails

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/432376
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=99db3c83e09b8c1df25ec06fe99530aa088c325f
Submitter: Jenkins
Branch: master

commit 99db3c83e09b8c1df25ec06fe99530aa088c325f
Author: Richard Avelar 
Date:   Fri Feb 10 16:33:33 2017 +

Address db_sync check against new install

This patch fixes a bug and causes a log message along with an exit
code to be returned when a DBMigration error is raised.

Change-Id: Iba7aff606937561ad98e2ef551ca4005bd4f337d
Closes-Bug: #1663627


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1663627

Title:
  Running db_sync --check against new installs fails

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  If 'keystone-manage db_sync --check' is run against a new install,
  prior to any DB migrations, it will raise an exception.

  DBMigrationError: Invalid version : 66

  http://logs.openstack.org/34/432134/1/check/gate-openstack-ansible-
  os_keystone-ansible-func-ubuntu-
  
xenial/5920564/logs/openstack/keystone1/keystone/keystone.log.txt.gz#_2017-02-10_06_43_08_956
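  Per the fix's commit message, the desired behaviour is to turn the raised
  exception into a log message plus a nonzero exit code. A minimal sketch of
  that shape (the exception class, message, and exit code here are chosen for
  illustration, not taken from keystone):

```python
class DBMigrationError(Exception):
    """Stand-in for the migration error raised on a fresh install."""

def db_sync_check(get_current_version):
    """Report schema state without letting the migration error escape."""
    try:
        version = get_current_version()
    except DBMigrationError as exc:
        # New install: migrations have never been run, so there is no
        # version to check. Report it and return a distinct exit code.
        print("db_sync has not been run on this install: %s" % exc)
        return 2
    print("Schema is at version %s" % version)
    return 0
```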

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1663627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667756] [NEW] Backup HA router sending traffic, traffic from switch interrupted

2017-02-24 Thread Aaron Smith
Public bug reported:

As outlined in https://review.openstack.org/#/c/142843/, backup HA
routers should not send any traffic.  Any traffic will cause the
connected switch to learn a new port for the associated src mac address
since the mac address will be in use on the primary HA router.

We are observing backup routers sending IPv6 RA and RS messages probably
in response to incoming IPv6 RA messages.  The subnets associated with
the HA routers are not intended for IPv6 traffic.

A typical traffic sequence is:

Packet from external switch...
08:81:f4:a6:dc:01 > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 110: 
(hlim 255, next-header ICMPv6 (58) payload length: 56) fe80:52:0:136c::fe > 
ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 56

Immediately followed by a packet from the backup HA router...
fa:16:3e:a7:ae:63 > 33:33:ff:a7:ae:63, ethertype IPv6 (0x86dd), length 86: 
(hlim 1, next-header Options (0) payload length: 32) :: > ff02::1:ffa7:ae63: 
HBH (rtalert: 0x) (padn) [icmp6 sum ok] ICMP6, multicast listener report max 
resp delay: 0 addr: ff02::1:ffa7:ae63

Another pkt...
fa:16:3e:a7:ae:63 > 33:33:ff:a7:ae:63, ethertype IPv6 (0x86dd), length 78: 
(hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ffa7:ae63: 
[icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has 
2620:52:0:136c:f816:3eff:fea7:ae63

Another Pkt...
fa:16:3e:a7:ae:63 > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 86: 
(hlim 255, next-header ICMPv6 (58) payload length: 32) 

At this point, the switch has updated its mac table and traffic to the
fa:16:3e:a7:ae:63 address has been redirected to the backup host.
SSH/ping traffic resumes at a later time when the primary router node
sends traffic with the fa:16:3e:a7:ae:63 source address.

This problem is reproducible in our environment as follows:

1. Deploy OSP10
2. Create external network
3. Create external subnet (IPv4)
4. Create an internal network and VM
5. Attach floating ip
6. ssh into the VM through the FIP or ping the FIP
7. you will start to see ssh freeze or the ping fail occasionally


Additional info:
Setting accept_ra=0 on the backup host routers stops the problem from
happening.  Unfortunately, on a reboot, we lose the setting.  The current
sysctl files have accept_ra=0.
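For the workaround to survive reboots, the setting has to land in configuration that is applied at boot. A hedged sketch of rendering such a sysctl-style drop-in (the interface names are illustrative, and on a real node the qrouter network namespace complicates where this must be applied):

```python
def accept_ra_dropin(interfaces):
    """Render a sysctl.d-style snippet pinning accept_ra=0 per interface."""
    lines = ["# Keep backup HA router interfaces from reacting to IPv6 RAs"]
    lines += ["net.ipv6.conf.%s.accept_ra = 0" % ifc for ifc in interfaces]
    return "\n".join(lines) + "\n"
```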

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667756

Title:
  Backup HA router sending traffic, traffic from switch interrupted

Status in neutron:
  New

Bug description:
  As outlined in https://review.openstack.org/#/c/142843/, backup HA
  routers should not send any traffic.  Any traffic will cause the
  connected switch to learn a new port for the associated src mac
  address since the mac address will be in use on the primary HA router.

  We are observing backup routers sending IPv6 RA and RS messages
  probably in response to incoming IPv6 RA messages.  The subnets
  associated with the HA routers are not intended for IPv6 traffic.

  A typical traffic sequence is:

  Packet from external switch...
  08:81:f4:a6:dc:01 > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 110: 
(hlim 255, next-header ICMPv6 (58) payload length: 56) fe80:52:0:136c::fe > 
ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 56

  Immediately followed by a packet from the backup HA router...
  fa:16:3e:a7:ae:63 > 33:33:ff:a7:ae:63, ethertype IPv6 (0x86dd), length 86: 
(hlim 1, next-header Options (0) payload length: 32) :: > ff02::1:ffa7:ae63: 
HBH (rtalert: 0x) (padn) [icmp6 sum ok] ICMP6, multicast listener report max 
resp delay: 0 addr: ff02::1:ffa7:ae63

  Another pkt...
  fa:16:3e:a7:ae:63 > 33:33:ff:a7:ae:63, ethertype IPv6 (0x86dd), length 78: 
(hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ffa7:ae63: 
[icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has 
2620:52:0:136c:f816:3eff:fea7:ae63

  Another Pkt...
  fa:16:3e:a7:ae:63 > 33:33:00:00:00:01, ethertype IPv6 (0x86dd), length 86: 
(hlim 255, next-header ICMPv6 (58) payload length: 32) 

  At this point, the switch has updated its mac table and traffic to the
  fa:16:3e:a7:ae:63 address has been redirected to the backup host.
  SSH/ping traffic resumes at a later time when the primary router node
  sends traffic with the fa:16:3e:a7:ae:63 source address.

  This problem is reproducible in our environment as follows:

  1. Deploy OSP10
  2. Create external network
  3. Create external subnet (IPv4)
  4. Create an internal network and VM
  5. Attach floating ip
  6. ssh into the VM through the FIP or ping the FIP
  7. you will start to see ssh freeze or the ping fail occasionally

  
  Additional info:
  Setting accept_ra=0 on the backup host routers stops the problem from
  happening.  Unfortunately, on a reboot, we lose the setting.  The current
  sysctl files have accept_ra=0.

To manage 

[Yahoo-eng-team] [Bug 1534030] Re: Add tox debug env

2017-02-24 Thread Rob Cresswell
** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534030

Title:
  Add tox debug env

Status in Magnum:
  Fix Released
Status in python-magnumclient:
  Fix Released

Bug description:
  Once we add a tox debug env in tox.ini, we can debug test cases while
  tox is running.

  We use oslotest to do the debugging; oslotest is the OpenStack testing 
framework and utilities library.
  When we run "tox -e debug", tox uses oslotest to debug our test cases.
  links:  http://docs.openstack.org/developer/oslotest/index.html

  Usage:
  Insert "import pdb; pdb.set_trace()" where you want to break in a test case, 
and then run "tox -e debug" to stop at that point.
  For details about how to debug, see:
  http://docs.openstack.org/developer/oslotest/features.html

  It's easy to use and convenient for us to debug those test cases.

  Meanwhile, I added [testenv:debug-py27], so we can run "tox -e debug-py27" 
to choose the Python environment used for debugging, just as we run "tox -e 
py27". Plain "tox" works without naming py27 because py27 is already listed in 
the [tox] envlist, so it does not need to be written again. The debug env, 
however, is not in envlist, which is why the separate [testenv:debug-py27] 
section is added. It is not strictly necessary, but it makes the setup a 
little more robust.
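A minimal sketch of what the tox.ini additions described above can look like, following the oslo_debug_helper entry point documented by oslotest (the exact section contents are illustrative, not taken from magnum's actual tox.ini):

```ini
# Illustrative only -- shape follows the oslotest docs.
[testenv:debug]
commands = oslo_debug_helper {posargs}

[testenv:debug-py27]
basepython = python2.7
commands = oslo_debug_helper {posargs}
```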

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1534030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552631] Re: [RFE] Bulk Floating IP allocation

2017-02-24 Thread Rob Cresswell
Removing Horizon here; this is just dead weight until clients support
it. We can revisit the idea in the future if a service decides to
implement it.

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1552631

Title:
  [RFE] Bulk Floating IP allocation

Status in python-neutronclient:
  Confirmed
Status in python-openstackclient:
  In Progress

Bug description:
  I needed to allocate 2 floating IPs to my project.
  Via GUI: 
  access and security -> Floating IPs -> Allocate IP to project. 

  I noticed that in order to allocate 2 FIPs, I need to execute
  "Allocate IP to project" twice.

  The customers have no option to allocate a range of FIPs with one
  action. They need to do it one by one.

  BR
  Alex

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1552631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648339] Re: cloud_admin in non-default domain cannot see other domains

2017-02-24 Thread Rob Cresswell
Fixed on master. Propose a backport if needed, please.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648339

Title:
  cloud_admin in non-default domain cannot see other domains

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When the cloud admin is in a domain that is not domain "Default", then
  the cloud admin user loses the ability to see other domains in the
  Domains tab. The tab appears, yet only one domain is shown, the users
  domain.

  When a domain is created in horizon, it gets created successfully, but
  does not show up in the list of domains, still only one domain is
  shown.

  This only happens with horizon. On the command line, all created
  domains appear when doing "openstack domain list", including those
  created in horizon.

  As a result, the cloud admin cannot set the domain context on other
  domains in horizon and all admin  tasks for the non-admin domains must
  be completed via the command line.

  When the cloud admin is in the "Default" domain, everything works
  correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667736] [NEW] gate-neutron-fwaas-dsvm-functional failure after recent localrc change

2017-02-24 Thread YAMAMOTO Takashi
Public bug reported:

eg. http://logs.openstack.org/59/286059/1/check/gate-neutron-fwaas-dsvm-
functional/a0f2285/console.html

2017-02-24 15:27:58.187720 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:26 : 
  source /opt/stack/new/devstack/localrc
2017-02-24 15:27:58.187833 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh: line 26: 
/opt/stack/new/devstack/localrc: No such file or directory

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress


** Tags: fwaas gate-failure

** Tags added: fwaas gate-failure

** Changed in: neutron
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667736

Title:
  gate-neutron-fwaas-dsvm-functional failure after recent localrc change

Status in neutron:
  In Progress

Bug description:
  eg. http://logs.openstack.org/59/286059/1/check/gate-neutron-fwaas-
  dsvm-functional/a0f2285/console.html

  2017-02-24 15:27:58.187720 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:26 : 
  source /opt/stack/new/devstack/localrc
  2017-02-24 15:27:58.187833 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh: line 26: 
/opt/stack/new/devstack/localrc: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] Re: Remove Cinder V1 support

2017-02-24 Thread Thomas Herve
** Changed in: heat
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  Won't Fix
Status in devstack:
  Fix Released
Status in grenade:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Compute (nova):
  Opinion
Status in os-client-config:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in Rally:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Cinder added v2 support in the Grizzly release. This bug tracks
  progress in removing v1 support from other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662762] Re: Authentication for LDAP user fails at MFA rule check

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/437402
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=4e0029455ab45e3b9a15fe9fc151c14c502b7bdd
Submitter: Jenkins
Branch:master

commit 4e0029455ab45e3b9a15fe9fc151c14c502b7bdd
Author: Matthew Edmonds 
Date:   Fri Feb 24 00:41:11 2017 -0500

Fix MFA rule checks for LDAP auth

LDAP authentication was broken by the addition of MFA rule checking.
This patch fixes that.

Change-Id: I4efe4b1b90c93110509cd599f9dd047c313dade3
Closes-Bug: #1662762


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1662762

Title:
  Authentication for LDAP user fails at MFA rule check

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) ocata series:
  Triaged

Bug description:
  I have an OpenStack master deployment with an LDAP server configured
  (fernet token provider). With the new changes around MFA rules
  (https://blueprints.launchpad.net/keystone/+spec/per-user-auth-plugin-
  reqs), I see that the authentication (POST /token) call fails at
  
https://github.com/openstack/keystone/blob/029476272fb869c6413aa4e70f4cae6f890e598f/keystone/auth/core.py#L377

  def check_auth_methods_against_rules(self, user_id, auth_methods):
  user_ref = self.identity_api.get_user(user_id)
  mfa_rules = user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])

  In the last line, the code expects user_ref to always have an
  'options' key; this is not present for LDAP users, so we get the
  error below and authentication fails:

  INFO keystone.common.wsgi [req-279e9036-6c6a-4fc8-9dfe-1d219931195c - - - - 
-] POST https://ip9-114-192-140.pok.stglabs.ibm.com:5000/v3/auth/tokens
  ERROR keystone.common.wsgi [req-279e9036-6c6a-4fc8-9dfe-1d219931195c - - - - 
-] 'options'
  ERROR keystone.common.wsgi Traceback (most recent call last):
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in 
__call__
  ERROR keystone.common.wsgi result = method(req, **params)
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 132, in 
authenticate_for_token
  ERROR keystone.common.wsgi auth_context['user_id'], method_names_set):
  ERROR keystone.common.wsgi File 
"/usr/lib/python2.7/site-packages/keystone/auth/core.py", line 377, in 
check_auth_methods_against_rules
  ERROR keystone.common.wsgi mfa_rules = 
user_ref['options'].get(ro.MFA_RULES_OPT.option_name, [])
  ERROR keystone.common.wsgi KeyError: 'options'
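
  The failure mode above can be sketched in a few lines. This is an
  illustrative stand-in, not keystone's actual code: the option name and
  user dicts are hypothetical, and the "fixed" lookup only shows the
  general defensive-access pattern such a fix would take.

```python
# Hypothetical option name and user refs, for illustration only.
MFA_RULES_OPT_NAME = "multi_factor_auth_rules"

sql_user = {"id": "abc", "options": {MFA_RULES_OPT_NAME: [["password", "totp"]]}}
ldap_user = {"id": "def"}  # LDAP-backed user refs carry no 'options' key


def mfa_rules_broken(user_ref):
    # Direct indexing raises KeyError for LDAP users.
    return user_ref["options"].get(MFA_RULES_OPT_NAME, [])


def mfa_rules_fixed(user_ref):
    # Tolerate user refs that have no 'options' entry at all.
    return user_ref.get("options", {}).get(MFA_RULES_OPT_NAME, [])


assert mfa_rules_fixed(sql_user) == [["password", "totp"]]
assert mfa_rules_fixed(ldap_user) == []
```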

  
  Conversation from #openstack-keystone on Freenode: 
  
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-02-07.log.html#t2017-02-07T14:01:09

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1662762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667679] Re: Setting quota fails saying admin project is not a valid project

2017-02-24 Thread Juan Antonio Osorio Robles
This is because checking the validity of the project_id was recently
added to nova with this patch:
https://github.com/openstack/nova/commit/f6fbfc7ff07b790ef052a759552c69429b3d79c7
However, it seems to be tied to using keystone v3, while it should use
the keystoneclient's functions and attempt to be version-agnostic.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667679

Title:
  Setting quota fails saying admin project is not a valid project

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  New

Bug description:
  This is what's in the logs http://logs.openstack.org/15/359215/63
  /check-tripleo/gate-tripleo-ci-centos-7-ovb-
  ha/3465882/console.html#_2017-02-24_11_07_08_893276

  2017-02-24 11:07:08.893276 | 2017-02-24 11:07:02.000 | 2017-02-24 
11:07:02,929 INFO: + openstack quota set --cores -1 --instances -1 --ram -1 
b0fe52b0ac15450ba0a38ac9acd8fea8
  2017-02-24 11:07:08.893365 | 2017-02-24 11:07:08.000 | 2017-02-24 
11:07:08,674 INFO: Project ID b0fe52b0ac15450ba0a38ac9acd8fea8 is not a valid 
project. (HTTP 400) (Request-ID: req-9e0a00b7-75ae-41d5-aeed-705bb1a54bae)
  2017-02-24 11:07:08.893493 | 2017-02-24 11:07:08.000 | 2017-02-24 
11:07:08,758 INFO: [2017-02-24 11:07:08,757] (os-refresh-config) [ERROR] during 
post-configure phase. [Command '['dib-run-parts', 
'/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit 
status 1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667259] Re: one more pool is created for a loadbalancer

2017-02-24 Thread Rico Lin
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667259

Title:
  one more pool is created for a loadbalancer

Status in heat:
  New
Status in neutron:
  New

Bug description:
  One more pool is created when creating a load balancer with two pools.
  That pool doesn't have complete information but is related to that
  loadbalancer, which causes a failure when deleting the loadbalancer.

  heat resource-list lbvd
  WARNING (shell) "heat resource-list" is deprecated, please use "openstack 
stack resource list" instead
  
+---+--+---+-+--+
  | resource_name | physical_resource_id | resource_type
 | resource_status | updated_time |
  
+---+--+---+-+--+
  | listener  | 12dfe005-80e0-4439-a4f8-1333f688e73b | 
OS::Neutron::LBaaS::Listener  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | listener2 | 26ba1151-3d4b-4732-826b-7f318800070d | 
OS::Neutron::LBaaS::Listener  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | loadbalancer  | 3a5bfa24-220c-4316-9c3d-57dd9c13feb8 | 
OS::Neutron::LBaaS::LoadBalancer  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor   | 241bc328-4c9b-4f58-a34a-4e25ed7431ea | 
OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor2  | 6592b768-f3be-4ff9-bbf4-2c30b94f98e2 | 
OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | 
OS::Neutron::LBaaS::Pool  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool2 | fae40172-7f16-4b1a-93f0-877d404fe466 | 
OS::Neutron::LBaaS::Pool  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  
+---+--+---+-+--+

  
  neutron lbaas-pool-list | grep lbvd
  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81 | lbvd-pool-ujtp6ddt4g6o   
 | HTTP| True  |
  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | lbvd-pool-ujtp6ddt4g6o   
 | HTTP| True  |
  | fae40172-7f16-4b1a-93f0-877d404fe466 | lbvd-pool2-kn7rlwltbdxh  
  | HTTPS| True  |

  
  neutron lbaas-pool-show 095c94b8-8c18-443f-9ce9-3d34e94f0c81
  +-++
  | Field  | Value  |
  +-++
  | admin_state_up  | True  |
  | description||
  | healthmonitor_id||
  | id  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81  |
  | lb_algorithm| ROUND_ROBIN|
  | listeners  ||
  | loadbalancers  | {"id": "3a5bfa24-220c-4316-9c3d-57dd9c13feb8"} |
  | members||
  | name| lbvd-pool-ujtp6ddt4g6o|
  | protocol| HTTP  |
  | session_persistence ||
  | tenant_id  | 3dcf8b12327c460a966c1c1d4a6e2887  |
  +-++

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1667259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667667] [NEW] Rebuilding instance ignores image's property hw_disk_bus

2017-02-24 Thread Adam Kijak
Public bug reported:

How to reproduce it:

Two images, one of them with hw_disk_bus=scsi and
hw_scsi_model=virtio-scsi properties:

$ glance image-show 4bbf-ddbf-4ede-a30f-b5b4a8e68876
+--+--+
| Property | Value|
+--+--+
| checksum | 79b4436412283bb63c2cba4ac796bcd9 |
| container_format | bare |
| created_at   | 2017-02-24T08:45:48Z |
| disk_format  | qcow2|
| id   | 4bbf-ddbf-4ede-a30f-b5b4a8e68876 |
| min_disk | 0|
| min_ram  | 0|
| name | cirros-0.3.4-i386-disk   |
| owner| 277132bf94b040f0842bd66d71a0d574 |
| protected| False|
| size | 12506112 |
| status   | active   |
| tags | []   |
| updated_at   | 2017-02-24T08:45:52Z |
| virtual_size | None |
| visibility   | public   |
+--+--+

$ glance image-show 8ce9540e-a802-4e39-a1b4-20cbff14ec18
+--+--+
| Property | Value|
+--+--+
| checksum | 79b4436412283bb63c2cba4ac796bcd9 |
| container_format | bare |
| created_at   | 2017-02-24T09:07:44Z |
| disk_format  | qcow2|
| hw_disk_bus  | scsi |
| hw_scsi_model| virtio-scsi  |
| id   | 8ce9540e-a802-4e39-a1b4-20cbff14ec18 |
| min_disk | 0|
| min_ram  | 0|
| name | cirros-scsi  |
| owner| bd560276f6bd48219ddcd7c9fb245ec1 |
| protected| False|
| size | 12506112 |
| status   | active   |
| tags | []   |
| updated_at   | 2017-02-24T09:07:45Z |
| virtual_size | None |
| visibility   | shared   |
+--+--+

$ nova boot --flavor m1.small --image 4bbf-ddbf-4ede-a30f-b5b4a8e68876 vm1
$ virsh dumpxml instance-0003 | grep '.*target.*bus'
  <target dev='vda' bus='virtio'/>

$ nova rebuild vm1 cirros-scsi
$ nova show vm1 | grep image
| image| cirros-scsi 
(8ce9540e-a802-4e39-a1b4-20cbff14ec18)
$ virsh dumpxml instance-0003 | grep '.*target.*bus'
  <target dev='vda' bus='virtio'/>

The problem is that despite the hw_disk_bus property set on cirros-scsi
(8ce9540e-a802-4e39-a1b4-20cbff14ec18), 'virtio' (and vda) is not
replaced in the instance's XML.

The expected result IMO should be like a normal boot from this image:

$ nova boot --flavor m1.small --image 8ce9540e-a802-4e39-a1b4-20cbff14ec18 vm2
$ virsh dumpxml instance-0004 | grep '.*target.*bus'
  <target dev='sda' bus='scsi'/>

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: image rebuild scsi

** Description changed:

  How to reproduce it:
  
  Two images, one of them with with hw_disk_bus=scsi and hw_scsi_model
  =virtio-scsi properties:
  
- $ glance image-show 4bbf-ddbf-4ede-a30f-b5b4a8e68876 
+ $ glance image-show 4bbf-ddbf-4ede-a30f-b5b4a8e68876
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | 79b4436412283bb63c2cba4ac796bcd9 |
  | container_format | bare |
  | created_at   | 2017-02-24T08:45:48Z |
  | disk_format  | qcow2|
  | id   | 4bbf-ddbf-4ede-a30f-b5b4a8e68876 |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros-0.3.4-i386-disk   |
  | owner| 277132bf94b040f0842bd66d71a0d574 |
  | protected| False|
  | size | 12506112 |
  | status   | active   |
  | tags | []   |
  | updated_at   | 2017-02-24T08:45:52Z |
  | virtual_size | None   

[Yahoo-eng-team] [Bug 1666831] Re: Nova recreates instance directory after migration/resize

2017-02-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/437356
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6347baf3d09036525b7f6df991ae440d558f9cc3
Submitter: Jenkins
Branch:master

commit 6347baf3d09036525b7f6df991ae440d558f9cc3
Author: Maciej Józefczyk 
Date:   Thu Feb 23 12:56:04 2017 +0100

Ensure that instance directory is removed after success migration/resize

Nova recreates instance directory on source host after successful 
migration/resize.
This patch removes directory of migrated instance from source host.

Change-Id: Ic683f83e428106df64be42287e2c5f3b40e73da4
Closes-Bug: #1666831


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666831

Title:
  Nova recreates instance directory after migration/resize

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Nova recreates the instance directory on the source host after a
successful migration/resize when using QEMU Qcow2 file-backed disks.

  
  After a migration, Nova executes the method driver.confirm_migration().
  This method cleans up the instance directory (the instance directory
with the suffix _resize):

  nova/virt/libvirt/driver.py
  1115 if os.path.exists(target):
  1116 # Deletion can fail over NFS, so retry the deletion as 
required.
  1117 # Set maximum attempt as 5, most test can remove the 
directory
  1118 # for the second time.
  1119 utils.execute('rm', '-rf', target, delay_on_retry=True,
  1120   attempts=5)

  After that Nova executes:
  1122 root_disk = self.image_backend.by_name(instance, 'disk')

  root_disk is used to remove rbd snapshots, but during execution of
  self.image_backend.by_name() nova recreates the instance directory.

  Flow:

  
driver.confirm_migration()->self._cleanup_resize()->self.image_backend.by_name()
  -> (nova/virt/libvirt/imagebackend.py)
  image_backend.by_name()->Qcow2.__init__()->Qcow2.resolve_driver_format().

  Qcow2.resolve_driver_format():
   344 if self.disk_info_path is not None:
   345 
fileutils.ensure_tree(os.path.dirname(self.disk_info_path))
   346 write_to_disk_info_file()
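
  The interaction can be reproduced in miniature. This is a toy sketch
  using plain stdlib calls as stand-ins for nova's utils.execute('rm',
  '-rf', ...) cleanup and oslo's fileutils.ensure_tree(); the paths and
  file contents are illustrative, not nova's actual layout.

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
inst_dir = os.path.join(base, "instance-uuid")
os.makedirs(inst_dir)

# Cleanup step: the instance directory is removed after a successful
# migration (stand-in for utils.execute('rm', '-rf', target)).
shutil.rmtree(inst_dir)
assert not os.path.exists(inst_dir)

# Later lookup: resolving the image backend writes disk.info, and the
# ensure_tree call silently recreates the just-deleted directory.
disk_info_path = os.path.join(inst_dir, "disk.info")
os.makedirs(os.path.dirname(disk_info_path), exist_ok=True)  # ensure_tree
with open(disk_info_path, "w") as f:
    f.write("{}")

# The leftover directory the bug reports.
assert os.path.exists(inst_dir)

shutil.rmtree(base)
```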

  
  Steps to reproduce
  ==

  - spawn an instance
  - migrate/resize the instance
  - check that the instance dir on the old host still exists (example:
/home/instances/<instance_uuid>/disk.info)

  
  Expected result
  ===
  After migration, the directory /home/instances/<instance_uuid> and the
file /home/instances/<instance_uuid>/disk.info should not exist.

  Actual result
  =
  Nova leaves instance directory after migration/resize.

  
  Environment
  ===
  1. Openstack Newton (it seems master is affected too).

  2. Libvirt + KVM

  3. Qcow2 file images on local disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [Yahoo-eng-team] [Question #452737]: Have a picture and can't find out what it is

2017-02-24 Thread Launchpad Janitor
Question #452737 on anvil changed:
https://answers.launchpad.net/anvil/+question/452737

Status: Open => Expired

Launchpad Janitor expired the question:
This question was expired because it remained in the 'Open' state
without activity for the last 15 days.

-- 
You received this question notification because your team Yahoo!
Engineering Team is an answer contact for anvil.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp