[Yahoo-eng-team] [Bug 1460176] Re: Reschedules sometimes do not allocate networks

2015-06-22 Thread John Garbutt
it's not really released yet, moving back to Fix Committed.

** Changed in: nova
   Status: Fix Released => Fix Committed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Jim Rollenhagen (jim-rollenhagen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460176

Title:
  Reschedules sometimes do not allocate networks

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  https://gist.github.com/jimrollenhagen/b6b45aa43878cdc89d89

  Fixed by https://review.openstack.org/#/c/177470/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467570] [NEW] Nova can't provision instance from snapshot with a ceph backend

2015-06-22 Thread J-PMethot
Public bug reported:

This is a weird issue that does not happen in our Juno setup, but
happens in our Kilo setup. The configuration between the two setups is
pretty much the same, with only Kilo-specific changes (namely, moving
configuration lines to new sections).

Here's how to reproduce:
1. Provision an instance.
2. Make a snapshot of this instance.
3. Try to provision an instance from that snapshot.

Nova-compute will complain that it can't find the disk and the instance
will go into an error state.

Here's what the default behavior is supposed to be, from my observations:
- When an image is uploaded into ceph, a snapshot is created automatically
inside ceph (this is NOT an instance snapshot per se, but a ceph-internal
snapshot).
- When an instance is booted from an image in nova, this snapshot gets a
clone in the nova ceph pool. Nova then uses that clone as the instance's
disk. This is called copy-on-write cloning.

Here's when things get funky:
- When an instance is booted from a snapshot, the copy-on-write cloning
does not happen. Nova looks for the disk and, of course, fails to find it
in its pool, thus failing to provision the instance. There's no trace
anywhere of the copy-on-write clone failing (in part because ceph doesn't
log client commands, from what I see).

The compute logs I got are in this pastebin:
http://pastebin.com/ADHTEnhn

There are a few things I noticed here that I'd like to point out:

- Nova creates an ephemeral drive file, then proceeds to delete it before
using rbd_utils instead. While strange, this may be the intended but
somewhat dirty behavior, as nova considers it an ephemeral instance
before realizing that it's actually a ceph instance and doesn't need its
ephemeral disk. Or maybe these conjectures are completely wrong and this
is part of the issue.

- Nova creates the image (I'm guessing it's the copy-on-write cloning
happening here). What exactly happens here isn't very clear, but nova then
complains that it can't find the clone in its pool to use as a block
device.
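
For anyone hitting this, a quick way to check whether the copy-on-write
clone was ever created is to inspect the pools directly with the rbd CLI.
This is only a hedged diagnostic sketch: the pool names "images"/"vms" and
glance's "snap" snapshot name are common defaults, not confirmed by this
report.

    import subprocess

    def rbd(*args):
        # thin wrapper around the rbd CLI (requires client access to the cluster)
        return subprocess.check_output(('rbd',) + args).decode()

    image_id = 'GLANCE_IMAGE_UUID'  # placeholder for the real image UUID
    # glance's internal snapshot of the uploaded image
    print(rbd('snap', 'ls', 'images/%s' % image_id))
    # COW clones nova created from it, if any exist in the cluster
    print(rbd('children', 'images/%s@snap' % image_id))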

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ceph

** Tags added: ceph

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467570

Title:
  Nova can't provision instance from snapshot with a ceph backend

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is a weird issue that does not happen in our Juno setup, but
  happens in our Kilo setup. The configuration between the two setups is
  pretty much the same, with only Kilo-specific changes (namely, moving
  configuration lines to new sections).

  Here's how to reproduce:
  1. Provision an instance.
  2. Make a snapshot of this instance.
  3. Try to provision an instance from that snapshot.

  Nova-compute will complain that it can't find the disk and the
  instance will go into an error state.

  Here's what the default behavior is supposed to be, from my observations:
  - When an image is uploaded into ceph, a snapshot is created automatically
  inside ceph (this is NOT an instance snapshot per se, but a ceph-internal
  snapshot).
  - When an instance is booted from an image in nova, this snapshot gets a
  clone in the nova ceph pool. Nova then uses that clone as the instance's
  disk. This is called copy-on-write cloning.

  Here's when things get funky:
  - When an instance is booted from a snapshot, the copy-on-write cloning
  does not happen. Nova looks for the disk and, of course, fails to find it
  in its pool, thus failing to provision the instance. There's no trace
  anywhere of the copy-on-write clone failing (in part because ceph doesn't
  log client commands, from what I see).

  The compute logs I got are in this pastebin:
  http://pastebin.com/ADHTEnhn

  There are a few things I noticed here that I'd like to point out:

  - Nova creates an ephemeral drive file, then proceeds to delete it
  before using rbd_utils instead. While strange, this may be the
  intended but somewhat dirty behavior, as nova considers it an ephemeral
  instance before realizing that it's actually a ceph instance and
  doesn't need its ephemeral disk. Or maybe these conjectures are
  completely wrong and this is part of the issue.

  - Nova creates the image (I'm guessing it's the copy-on-write cloning
  happening here). What exactly happens here isn't very clear, but nova
  then complains that it can't find the clone in its pool to use as a
  block device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] [NEW] Remove Cinder V1 support

2015-06-22 Thread Mike Perez
Public bug reported:

Cinder added v2 API support in the Grizzly release. This is to track
progress on removing v1 support in other projects.

** Affects: cinder
 Importance: Undecided
 Assignee: Mike Perez (thingee)
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Assignee: Mike Perez (thingee)
 Status: In Progress

** Affects: rally
 Importance: Undecided
 Assignee: Ivan Kolodyazhny (e0ne)
 Status: In Progress

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Mike Perez (thingee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Rally:
  In Progress

Bug description:
  Cinder added v2 API support in the Grizzly release. This is to track
  progress on removing v1 support in other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467610] [NEW] Searchbar directive should be prefixed with hz

2015-06-22 Thread Thai Tran
Public bug reported:

We should rename the search-bar directive to hz-search-bar for consistency.

** Affects: horizon
 Importance: Low
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467610

Title:
  Searchbar directive should be prefixed with hz

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We should rename the search-bar directive to hz-search-bar for
  consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467610/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467615] [NEW] Integration test navigation overwrites itself.

2015-06-22 Thread Charles V Bock
Public bug reported:

The integration tests use a navigation menu structure to create go_to_ 
functions for each page path.
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L49
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L257

Unfortunately it only uses the last two segments of the path to generate
the name, resulting in an overwrite of previously defined functions.

https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L279


Ex:
[Project/Compute/Volumes/Volumes] creates:
go_to_volumes_volumespage
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L69

Which is the same as

[Admin/System/Volumes/Volumes] creates:
go_to_volumes_volumespage
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L146


This causes issues when trying to generate tests for volumes or any other 
conflicting pair.
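
A minimal illustration of the collision (an assumed simplification of how
navigation.py builds the names, not the actual code): with only the last
two segments, the two paths above produce the same attribute name, while
using the full path keeps them distinct.

    def go_to_name(path, segments=2):
        # mimic generating go_to_* names from the last N path segments
        return 'go_to_' + '_'.join(s.lower() for s in path[-segments:]) + 'page'

    a = ('Project', 'Compute', 'Volumes', 'Volumes')
    b = ('Admin', 'System', 'Volumes', 'Volumes')
    assert go_to_name(a) == go_to_name(b) == 'go_to_volumes_volumespage'  # collision
    assert go_to_name(a, len(a)) != go_to_name(b, len(b))  # full path disambiguates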

Thanks.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467615

Title:
  Integration test navigation overwrites itself.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The integration tests use a navigation menu structure to create go_to_ 
functions for each page path.
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L49
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L257

  Unfortunately it only uses the last two segments of the path to
  generate the name, resulting in an overwrite of previously defined
  functions.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L279

  
  Ex:
  [Project/Compute/Volumes/Volumes] creates:
  go_to_volumes_volumespage
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L69

  Which is the same as

  [Admin/System/Volumes/Volumes] creates:
  go_to_volumes_volumespage
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L146

  
  This causes issues when trying to generate tests for volumes or any other 
conflicting pair.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] Re: Remove Cinder V1 support

2015-06-22 Thread Ivan Kolodyazhny
** Also affects: rally
   Importance: Undecided
   Status: New

** Changed in: rally
 Assignee: (unassigned) => Ivan Kolodyazhny (e0ne)

** Changed in: rally
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Rally:
  In Progress

Bug description:
  Cinder added v2 API support in the Grizzly release. This is to track
  progress on removing v1 support in other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464750] Re: Service accounts can be used to log in to horizon

2015-06-22 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Incomplete => Won't Fix

** Information type changed from Private Security to Public

** Also affects: ossn
   Importance: Undecided
   Status: New

** Changed in: ossn
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464750

Title:
  Service accounts can be used to log in to horizon

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Incomplete

Bug description:
  This is not a bug and may or may not be a security issue ... but it
  appears that the service accounts created in keystone are at the same
  privilege level as any other admin accounts created through keystone,
  and I don't like that.

  Would it be possible to implement something that would distinguish
  user accounts from service accounts?  Is there a way to isolate some
  service accounts from the rest of the OpenStack APIs?

  One quick example of this is that any service account has admin
  privileges on all the other services. At this point, I'm trying to
  figure out why we are creating a distinct service account for each
  service if nothing isolates them.

  IE:

  glance account can spawn a VM
  cinder account can delete an image
  heat account can delete a volume
  nova account can create an image

  
  All of these service accounts have access to the horizon dashboard. One 
small hack could be to prevent those accounts from logging in to Horizon.

  Thanks,

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467560] [NEW] RFE: add instance uuid field to nova.quota_usages table

2015-06-22 Thread Dan Yocum
Public bug reported:

In Icehouse, the nova.quota_usages table frequently gets out-of-sync
with the currently active/stopped instances in a tenant/project,
specifically, there are times when the instance will be set to
terminated/deleted in the instances table and the quota_usages table
will retain the data, counting against the tenant's total quota.  As far
as I can tell there is no way to correlate instances.uuid with the
records in nova.quota_usages.

I propose adding an instance uuid column to make future cleanup of this
table easier.

I also propose a housecleaning task that does this cleanup
automatically.
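
As a sketch of what the proposed schema change could look like (a
hypothetical sqlalchemy-migrate style migration; only the table and column
names come from this report, the rest is illustrative):

    from sqlalchemy import Column, MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        quota_usages = Table('quota_usages', meta, autoload=True)
        # nullable, since existing rows have no instance to associate with
        quota_usages.create_column(
            Column('instance_uuid', String(36), nullable=True))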

Thanks,
Dan

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467560

Title:
  RFE: add instance uuid field to nova.quota_usages table

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Icehouse, the nova.quota_usages table frequently gets out-of-sync
  with the currently active/stopped instances in a tenant/project,
  specifically, there are times when the instance will be set to
  terminated/deleted in the instances table and the quota_usages table
  will retain the data, counting against the tenant's total quota.  As
  far as I can tell there is no way to correlate instances.uuid with the
  records in nova.quota_usages.

  I propose adding an instance uuid column to make future cleanup of
  this table easier.

  I also propose a housecleaning task that does this cleanup
  automatically.

  Thanks,
  Dan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467633] [NEW] OVS tunnel interface names are not host unique

2015-06-22 Thread Assaf Muller
Public bug reported:

OVS agent tunnel interfaces are named via:
'%s-%s' % (tunnel_type, destination_ip)

This means that the tunnel interface name is not unique if
two OVS agents on the same machine try to form a tunnel with a
third agent. This happens during full stack tests that start
multiple copies of the OVS agent on the test machine.

Making sure that the tunnel interface names are globally
unique on a host adds support to full stack tests that need 3+
OVS agents.
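
One way to get host-unique names while staying under Linux's 15-character
interface name limit is to hash the remote IP together with an agent-local
identifier. This is a hedged sketch of the general idea only; the hashing
scheme and the local_id parameter are assumptions, not the proposed patch.

    import hashlib

    def tunnel_port_name(tunnel_type, remote_ip, local_id=''):
        # local_id distinguishes multiple agents on one host (e.g. a
        # per-agent suffix in full stack tests); hashing keeps names short
        digest = hashlib.sha1((local_id + remote_ip).encode()).hexdigest()
        # Linux interface names are limited to 15 characters
        return ('%s-%s' % (tunnel_type, digest))[:15]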

** Affects: neutron
 Importance: Medium
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467633

Title:
  OVS tunnel interface names are not host unique

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  OVS agent tunnel interfaces are named via:
  '%s-%s' % (tunnel_type, destination_ip)

  This means that the tunnel interface name is not unique if
  two OVS agents on the same machine try to form a tunnel with a
  third agent. This happens during full stack tests that start
  multiple copies of the OVS agent on the test machine.

  Making sure that the tunnel interface names are globally
  unique on a host adds support to full stack tests that need 3+
  OVS agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451931] Re: ironic password config not marked as secret

2015-06-22 Thread Matt Riedemann
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/juno
 Assignee: (unassigned) => Michael McCune (mimccune)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451931

Title:
  ironic password config not marked as secret

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  New
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  The ironic config option for the password and auth token are not
  marked as secret so the values will get logged during startup in debug
  mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467208] Re: firewall remove router doesn't work

2015-06-22 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467208

Title:
  firewall remove router doesn't work

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The label of the firewall 'remove router' form ('remove routers') may be
  misunderstood.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467676] [NEW] OS-Brick connector needs to add HGST Solutions shim

2015-06-22 Thread Earle F. Philhower, III
Public bug reported:

When Nova is patched to use os-brick, there will be a couple of changes
required to support the HGST Solutions connector type in it.

Three very minor changes need to be made to make this happen:

1) Because os-brick is a library and not an application, the rootwrap
for Nova will need to include the required CLI commands to attach/detach
volumes as it can't live in os-brick's repository

2) libvirt_volume_types needs to include the HGST =>
LibvirtHGSTVolumeDriver mapping, because os-brick doesn't support
discovery of supported types

3) A small shim LibvirtHGSTVolumeDriver calling the os-brick library
needs to be added, again because there is no generic way presently for
os-brick to map to specific volume types in libvirt_volume_types.
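
A hedged sketch of what the shim in (3) might look like: a thin libvirt
volume driver that delegates attach/detach to an os-brick connector.
os_brick.initiator.connector.InitiatorConnector.factory() is os-brick's
documented entry point, but the class layout and the nova wiring shown
here are illustrative, not the actual patch.

    from os_brick.initiator import connector

    ROOT_HELPER = 'sudo nova-rootwrap /etc/nova/rootwrap.conf'

    class LibvirtHGSTVolumeDriver(object):
        """Illustrative shim mapping 'HGST' connection_info to os-brick."""

        def __init__(self, host):
            self.connector = connector.InitiatorConnector.factory(
                'HGST', root_helper=ROOT_HELPER)

        def connect_volume(self, connection_info, disk_info):
            # os-brick does the actual attach and returns the device path
            device_info = self.connector.connect_volume(connection_info['data'])
            connection_info['data']['device_path'] = device_info['path']

        def disconnect_volume(self, connection_info, disk_info):
            self.connector.disconnect_volume(connection_info['data'], None)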

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: flash hgst os-brick

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467676

Title:
  OS-Brick connector needs to add HGST Solutions shim

Status in OpenStack Compute (Nova):
  New

Bug description:
  When Nova is patched to use os-brick, there will be a couple of changes
  required to support the HGST Solutions connector type in it.

  Three very minor changes need to be made to make this happen:

  1) Because os-brick is a library and not an application, the rootwrap
  for Nova will need to include the required CLI commands to
  attach/detach volumes as it can't live in os-brick's repository

  2) libvirt_volume_types needs to include the HGST =>
  LibvirtHGSTVolumeDriver mapping, because os-brick doesn't support
  discovery of supported types

  3) A small shim LibvirtHGSTVolumeDriver calling the os-brick library
  needs to be added, again because there is no generic way presently for
  os-brick to map to specific volume types in libvirt_volume_types.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467679] [NEW] Floating ip test cannot be turned off for horizon integration tests

2015-06-22 Thread Timmy Vanderwiel
Public bug reported:

The floating IP test in the horizon integration tests cannot be turned
off. It needs to be possible to turn it off because not all horizon
deployments have floating IPs, so deployments without them will fail
that test every time.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467679

Title:
  Floating ip test cannot be turned off for horizon integration tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The floating IP test in the horizon integration tests cannot be turned
  off. It needs to be possible to turn it off because not all horizon
  deployments have floating IPs, so deployments without them will fail
  that test every time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448813] Re: radvd running as neutron user in Kilo, attached network dead

2015-06-22 Thread Christopher Aedo
** No longer affects: app-catalog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448813

Title:
  radvd running as neutron user in Kilo, attached network dead

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Kilo RC1 release, mirantis Debian Jessie build

  Linux Kernel 3.19.3, ML2 vlan networking

  radvd version 1:1.9.1-1.3

  Network with IPv6 ULA SLAAC, IPv6 GUA SLAAC, IPv4 RFC1918 configured.

  Radvd does not start, neutron-l3-agent does not set up OVS vlan
  forwarding between network and compute node, IPv4 completely
  disconnected as well. Looks like complete L2 breakage.

  Need to get this one fixed before release of Kilo.

  Work around:

  chown root:neutron /usr/sbin/radvd
  chmod 2750 /usr/sbin/radvd

  radvd gives a message about not being able to create an IPv6 ICMP port
  in the neutron-l3-agent log, just like when run as a non-root user.

  Notice radvd is not being executed via rootwrap/sudo anymore, like
  all the other ip route/ip address/ip netns information-gathering
  commands. Was executing it in a privileged fashion missed in the
  Neutron code refactor?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1448813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-06-22 Thread Sergey Vilgelm
Glance review: https://review.openstack.org/#/c/194408/

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
 Assignee: (unassigned) => Sergey Vilgelm (sergey.vilgelm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449775] [NEW] Got server fault when setting admin_state_up=false for health monitor

2015-06-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

This happens in tempest for health monitors: when setting
admin_state_up=false while creating or updating a health monitor, you
get the following failure:

Traceback (most recent call last):
  File "/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 504, in test_udpate_health_monitor_invalid_admin_state_up
    hm.get('id'), admin_state_up=False)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 422, in assertRaises
    self.assertThat(our_callable, matcher)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: <bound method type._update_health_monitor of <class 'test_health_monitors_non_admin.TestHealthMonitors'>> returned {u'admin_state_up': False, u'tenant_id': u'7e24ec89b7df4a7d8738d415b6ac8422', u'delay': 3, u'expected_codes': u'200', u'max_retries': 10, u'http_method': u'GET', u'timeout': 5, u'pools': [{u'id': u'1409120f-fde8-49fc-8db5-25dc3941f460'}], u'url_path': u'/', u'type': u'HTTP', u'id': u'5cee3bf8-d94e-42a4-ab30-b190c66f87de'}

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Got server fault when setting admin_state_up=false for health monitor
https://bugs.launchpad.net/bugs/1449775
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467684] [NEW] Karma configuration should be normalized

2015-06-22 Thread Shaoquan Chen
Public bug reported:

Currently, the Karma config includes many ad-hoc rules for which files to
list. The rules can be normalized so that developers do not always have
to manage the configuration manually when refactoring.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Shaoquan Chen (sean-chen2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467684

Title:
  Karma configuration should be normalized

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, the Karma config includes many ad-hoc rules for which files
  to list. The rules can be normalized so that developers do not always
  have to manage the configuration manually when refactoring.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467692] [NEW] keystone-manage mapping_engine does not support regex

2015-06-22 Thread Fernando Diaz
Public bug reported:

When running keystone-manage mapping_engine --rules rules.json --input
input.txt, if rules.json contains any mapping containing regex, it will
fail.

This is because
https://github.com/openstack/keystone/blob/master/keystone/cli.py#L598
uses jsonutils.load(rules.json) and regex contains a boolean instead of
a string, which will invalidate the json file. If the regex is provided
as a string it will fail because it does not match the schema in
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/utils.py#L88-L90.
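
For reference, this is roughly the shape of a rule that trips the problem
(a minimal made-up rule, not taken from the report): per the federation
schema, regex must be a JSON boolean, so writing it as the string "true"
fails validation, while the boolean form parses cleanly:

    from oslo_serialization import jsonutils

    rules = jsonutils.loads('''
    [{"local":  [{"user": {"name": "{0}"}}],
      "remote": [{"type": "REMOTE_USER",
                  "any_one_of": ["admin.*"],
                  "regex": true}]}]
    ''')
    # a boolean after parsing, as the schema expects; the string "true"
    # would not match the schema at utils.py#L88-L90
    assert rules[0]['remote'][0]['regex'] is True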

** Affects: keystone
 Importance: Undecided
 Assignee: Fernando Diaz (diazjf)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Fernando Diaz (diazjf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1467692

Title:
  keystone-manage mapping_engine does not support regex

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When running keystone-manage mapping_engine --rules rules.json --input
  input.txt, if rules.json contains any mapping containing regex, it
  will fail.

  This is because
  https://github.com/openstack/keystone/blob/master/keystone/cli.py#L598
  uses jsonutils.load(rules.json) and regex contains a boolean instead
  of a string, which will invalidate the json file. If the regex is
  provided as a string it will fail because it does not match the schema
  in
  
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/utils.py#L88-L90.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1467692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-06-22 Thread Sergey Vilgelm
ceilometer review: https://review.openstack.org/#/c/194423/

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
   Status: New => In Progress

** Changed in: ceilometer
 Assignee: (unassigned) => Sergey Vilgelm (sergey.vilgelm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448248] [NEW] Keystone Middleware Installation

2015-06-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Hi,

I was performing an OpenStack devstack Juno installation; I downloaded
the scripts from GitHub and got a keystone middleware error:

+ install_keystonemiddleware
+ use_library_from_git keystonemiddleware
+ local name=keystonemiddleware
+ local enabled=1
+ [[ ,, =~ ,keystonemiddleware, ]]
+ return 1
+ pip_install_gr keystonemiddleware
+ local name=keystonemiddleware
++ get_from_global_requirements keystonemiddleware
++ local package=keystonemiddleware
+++ cut -d# -f1
+++ grep -h '^keystonemiddleware' 
/opt/stack/requirements/global-requirements.txt
++ local required_pkg=
++ [[ '' == '' ]]
++ die 1601 'Can'\''t find package keystonemiddleware in requirements'
++ local exitcode=0
++ set +o xtrace
[ERROR] /home/stack/devstack/functions-common:1601 Can't find package 
keystonemiddleware in requirements
+ local 'clean_name=[Call Trace]
./stack.sh:781:install_keystonemiddleware
/home/stack/devstack/lib/keystone:496:pip_install_gr
/home/stack/devstack/functions-common:1535:get_from_global_requirements
/home/stack/devstack/functions-common:1601:die'
+ pip_install '[Call' 'Trace]' ./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
++ set +o
++ grep xtrace
+ local 'xtrace=set -o xtrace'
+ set +o xtrace
+ sudo -H PIP_DOWNLOAD_CACHE=/var/cache/pip http_proxy= https_proxy= no_proxy= 
/usr/local/bin/pip install '[Call' 'Trace]' 
./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
DEPRECATION: --download-cache has been deprecated and will be removed in the 
future. Pip now automatically uses and configures its cache.
Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 246, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 308, in run
    name, None, isolated=options.isolated_mode,
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 220, in from_line
    isolated=isolated)
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 79, in __init__
    req = pkg_resources.Requirement.parse(req)
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2960, in parse
    reqs = list(parse_requirements(s))
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2891, in parse_requirements
    raise ValueError("Missing distribution spec", line)
ValueError: ('Missing distribution spec', '[Call')

+ exit_trap
+ local r=2
++ jobs -p
+ jobs=
+ [[ -n '' ]]
+ kill_spinner
+ '[' '!' -z '' ']'
+ [[ 2 -ne 0 ]]
+ echo 'Error on exit'
Error on exit
+ [[ -z '' ]]
+ /home/stack/devstack/tools/worlddump.py

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: keystone middleware
-- 
Keystone Middleware Installation 
https://bugs.launchpad.net/bugs/1448248
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Keystone.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449492] [NEW] Cinder not working with IPv6 ISCSI

2015-06-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Testing configuring Openstack completely with IPv6

Found that IP address parsing was thrown off in a lot of cases by the
need to have '[]' encasing the address (or not) for use with URLs and by
the parsing of some user-space 3rd-party C binaries - iscsiadm for
example. Most of the others are best handled by using a name set to the
IPv6 address in the /etc/hosts file; for iSCSI, though, that's not possible.

Got Cinder working by setting iscsi_ip_address (/etc/cinder/cinder.conf)
to '[$my_ip]', where my_ip is an IPv6 address like 2001:db08::1 (not an
RFC documentation address?), and changing one line of python in the nova
virt/libvirt/volume.py code:


--- nova/virt/libvirt/volume.py.orig    2015-04-27 23:00:00.208075644 +1200
+++ nova/virt/libvirt/volume.py 2015-04-27 22:38:08.938643636 +1200
@@ -833,7 +833,7 @@
     def _get_host_device(self, transport_properties):
         """Find device path in devtemfs."""
         device = ("ip-%s-iscsi-%s-lun-%s" %
-                  (transport_properties['target_portal'],
+                  (transport_properties['target_portal'].replace('[','').replace(']',''),
                    transport_properties['target_iqn'],
                    transport_properties.get('target_lun', 0)))
         if self._get_transport() == "default":

Nova-compute was looking for
'/dev/disk/by-path/ip-[2001:db08::1]:3260-iscsi-iqn.2010-10.org.openstack:*'
when there were no '[]' in the udev-generated path.

This one can't be worked around by using the /etc/hosts file. iscsiadm
and tgt need the IPv6 address wrapped in '[]', and iscsiadm uses it in
output. The above patch could be matched with a bit in the cinder code
that puts '[]' around iscsi_ip_address if it is not supplied.

More work is obviously needed on a convention for writing IPv6 addresses
in the OpenStack configuration files, and there will be a lot of places
where code will need to be tweaked.

Let's start by fixing this blooper/low-hanging one first though, as it
makes it possible to get Cinder working in a pure IPv6 environment.
The above may be a bit of a hack, but only one code path needs
adjustment...
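
A small helper in the spirit of the workaround (a hedged sketch, not the
merged fix): keep the bracketed form for tools like iscsiadm that require
it, and strip brackets when building the udev by-path name.

    def bracketed_portal(ip, port=3260):
        # iscsiadm/tgt want IPv6 literals as [addr]:port; IPv4 stays bare
        if ':' in ip and not ip.startswith('['):
            return '[%s]:%d' % (ip, port)
        return '%s:%d' % (ip, port)

    def bypath_portal(target_portal):
        # udev's /dev/disk/by-path entries omit the brackets
        return target_portal.replace('[', '').replace(']', '')

    assert bracketed_portal('2001:db8::1') == '[2001:db8::1]:3260'
    assert bypath_portal('[2001:db8::1]:3260') == '2001:db8::1:3260'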

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cinder ipv6 iscsi
-- 
Cinder not working with IPv6 ISCSI
https://bugs.launchpad.net/bugs/1449492
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449775] Re: Got server fault when setting admin_state_up=false for health monitor

2015-06-22 Thread Christopher Aedo
** Project changed: app-catalog => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449775

Title:
  Got server fault when setting admin_state_up=false for health monitor

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This happens in tempest for health monitors: when setting
  admin_state_up=false while creating or updating a health monitor, you
  get the following failure:

  Traceback (most recent call last):
    File "/opt/stack/neutron-lbaas/neutron_lbaas/tests/tempest/v2/api/test_health_monitors_non_admin.py", line 504, in test_udpate_health_monitor_invalid_admin_state_up
      hm.get('id'), admin_state_up=False)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 422, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: <bound method type._update_health_monitor of <class 'test_health_monitors_non_admin.TestHealthMonitors'>> returned {u'admin_state_up': False, u'tenant_id': u'7e24ec89b7df4a7d8738d415b6ac8422', u'delay': 3, u'expected_codes': u'200', u'max_retries': 10, u'http_method': u'GET', u'timeout': 5, u'pools': [{u'id': u'1409120f-fde8-49fc-8db5-25dc3941f460'}], u'url_path': u'/', u'type': u'HTTP', u'id': u'5cee3bf8-d94e-42a4-ab30-b190c66f87de'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448248] Re: Keystone Middleware Installation

2015-06-22 Thread Christopher Aedo
** Project changed: app-catalog => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1448248

Title:
  Keystone Middleware Installation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi,

  I was performing an OpenStack devstack Juno installation; I downloaded
  the scripts from GitHub and got a keystone middleware error:

  + install_keystonemiddleware
  + use_library_from_git keystonemiddleware
  + local name=keystonemiddleware
  + local enabled=1
  + [[ ,, =~ ,keystonemiddleware, ]]
  + return 1
  + pip_install_gr keystonemiddleware
  + local name=keystonemiddleware
  ++ get_from_global_requirements keystonemiddleware
  ++ local package=keystonemiddleware
  +++ cut -d# -f1
  +++ grep -h '^keystonemiddleware' 
/opt/stack/requirements/global-requirements.txt
  ++ local required_pkg=
  ++ [[ '' == '' ]]
  ++ die 1601 'Can'\''t find package keystonemiddleware in requirements'
  ++ local exitcode=0
  ++ set +o xtrace
  [ERROR] /home/stack/devstack/functions-common:1601 Can't find package 
keystonemiddleware in requirements
  + local 'clean_name=[Call Trace]
  ./stack.sh:781:install_keystonemiddleware
  /home/stack/devstack/lib/keystone:496:pip_install_gr
  /home/stack/devstack/functions-common:1535:get_from_global_requirements
  /home/stack/devstack/functions-common:1601:die'
  + pip_install '[Call' 'Trace]' ./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
  ++ set +o
  ++ grep xtrace
  + local 'xtrace=set -o xtrace'
  + set +o xtrace
  + sudo -H PIP_DOWNLOAD_CACHE=/var/cache/pip http_proxy= https_proxy= 
no_proxy= /usr/local/bin/pip install '[Call' 'Trace]' 
./stack.sh:781:install_keystonemiddleware 
/home/stack/devstack/lib/keystone:496:pip_install_gr 
/home/stack/devstack/functions-common:1535:get_from_global_requirements 
/home/stack/devstack/functions-common:1601:die
  DEPRECATION: --download-cache has been deprecated and will be removed in the 
future. Pip now automatically uses and configures its cache.
  Exception:
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 246, in main
      status = self.run(options, args)
    File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 308, in run
      name, None, isolated=options.isolated_mode,
    File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 220, in from_line
      isolated=isolated)
    File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 79, in __init__
      req = pkg_resources.Requirement.parse(req)
    File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2960, in parse
      reqs = list(parse_requirements(s))
    File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2891, in parse_requirements
      raise ValueError("Missing distribution spec", line)
  ValueError: ('Missing distribution spec', '[Call')

  + exit_trap
  + local r=2
  ++ jobs -p
  + jobs=
  + [[ -n '' ]]
  + kill_spinner
  + '[' '!' -z '' ']'
  + [[ 2 -ne 0 ]]
  + echo 'Error on exit'
  Error on exit
  + [[ -z '' ]]
  + /home/stack/devstack/tools/worlddump.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449492] Re: Cinder not working with IPv6 ISCSI

2015-06-22 Thread Christopher Aedo
** Project changed: app-catalog => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449492

Title:
  Cinder not working with IPv6 ISCSI

Status in OpenStack Compute (Nova):
  New

Bug description:
  Testing configuring Openstack completely with IPv6

  Found that IP address parsing was thrown off in a lot of cases by the
  need to have '[]' encasing the address (or not) for use with URLs and
  by the parsing of some user-space 3rd-party C binaries - iscsiadm for
  example. Most of the others are best handled by using a name set to the
  IPv6 address in the /etc/hosts file; for iSCSI, though, that's not possible.

  Got Cinder working by setting iscsi_ip_address
  (/etc/cinder/cinder.conf) to '[$my_ip]', where my_ip is an IPv6 address
  like 2001:db08::1 (not an RFC documentation address?), and changing one
  line of python in the nova virt/libvirt/volume.py code:

  
  --- nova/virt/libvirt/volume.py.orig    2015-04-27 23:00:00.208075644 +1200
  +++ nova/virt/libvirt/volume.py 2015-04-27 22:38:08.938643636 +1200
  @@ -833,7 +833,7 @@
       def _get_host_device(self, transport_properties):
           """Find device path in devtemfs."""
           device = ("ip-%s-iscsi-%s-lun-%s" %
  -                  (transport_properties['target_portal'],
  +                  (transport_properties['target_portal'].replace('[','').replace(']',''),
                      transport_properties['target_iqn'],
                      transport_properties.get('target_lun', 0)))
           if self._get_transport() == "default":

  Nova-compute was looking for
  '/dev/disk/by-path/ip-[2001:db08::1]:3260-iscsi-iqn.2010-10.org.openstack:*'
  when there were no '[]' in the udev-generated path.

  This one can't be worked around by using the /etc/hosts file. iscsiadm
  and tgt need the IPv6 address wrapped in '[]', and iscsiadm uses it in
  output. The above patch could be matched with a bit in the cinder code
  that puts '[]' around iscsi_ip_address if it is not supplied.

  More work is obviously needed on a convention for writing IPv6
  addresses in the OpenStack configuration files, and there will be a lot
  of places where code will need to be tweaked.

  Let's start by fixing this blooper/low-hanging one first though, as it
  makes it possible to get Cinder working in a pure IPv6 environment.
  The above may be a bit of a hack, but only one code path needs
  adjustment...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-06-22 Thread Sergey Vilgelm
Cinder reviews:
https://review.openstack.org/#/c/193951/

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Sergey Vilgelm (sergey.vilgelm)

** Changed in: cinder
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in Cinder:
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467466] [NEW] [vpnaas] Unit test failure: cannot import name dvr_router

2015-06-22 Thread Elena Ezhova
Public bug reported:

2015-06-22 10:14:26.271 | Traceback (most recent call last):
2015-06-22 10:14:26.272 |   File "/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 445, in _find_test_path
2015-06-22 10:14:26.272 |     module = self._get_module_from_name(name)
2015-06-22 10:14:26.272 |   File "/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 384, in _get_module_from_name
2015-06-22 10:14:26.272 |     __import__(name)
2015-06-22 10:14:26.272 |   File "neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py", line 19, in <module>
2015-06-22 10:14:26.272 |     from neutron.agent.l3 import dvr_router
2015-06-22 10:14:26.272 | ImportError: cannot import name dvr_router

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467466

Title:
  [vpnaas] Unit test failure: cannot import name dvr_router

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  2015-06-22 10:14:26.271 | Traceback (most recent call last):
  2015-06-22 10:14:26.272 |   File "/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 445, in _find_test_path
  2015-06-22 10:14:26.272 |     module = self._get_module_from_name(name)
  2015-06-22 10:14:26.272 |   File "/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 384, in _get_module_from_name
  2015-06-22 10:14:26.272 |     __import__(name)
  2015-06-22 10:14:26.272 |   File "neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py", line 19, in <module>
  2015-06-22 10:14:26.272 |     from neutron.agent.l3 import dvr_router
  2015-06-22 10:14:26.272 | ImportError: cannot import name dvr_router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467462] [NEW] Error causes image creation window to lose values

2015-06-22 Thread Liron Kuchlani
Public bug reported:

Description of problem:
In the 'Create An Image' window, when inserting an illegal URL in the image
location field, the action fails and the image format field loses its content.

Version-Release number of selected component (if applicable):
python-django-horizon-2015.1.0-10.el7ost.noarch

How reproducible:
sometimes 

Steps to Reproduce:
1. From 'Create An Image' window, insert an illegal URL in image location field 
2. Try to confirm the creation image

Actual results:
The format field is not configured, although it was 

Expected results:
The format field should stay as it was configured 

Additional info:

From horizon log:
[Mon Jun 22 09:19:12.758853 2015] [:error] [pid 21067]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 129, in image_update
[Mon Jun 22 09:19:12.758997 2015] [:error] [pid 21067]     return image
[Mon Jun 22 09:19:12.759045 2015] [:error] [pid 21067] UnboundLocalError: local variable 'image' referenced before assignment
[Mon Jun 22 09:19:33.647517 2015] [:error] [pid 21068] Unhandled exception in thread started by <function image_update at 0x7fd151ff0320>
[Mon Jun 22 09:19:33.655274 2015] [:error] [pid 21068] Traceback (most recent call last):
[Mon Jun 22 09:19:33.655319 2015] [:error] [pid 21068]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 129, in image_update
[Mon Jun 22 09:19:33.655379 2015] [:error] [pid 21068]     return image
[Mon Jun 22 09:19:33.655418 2015] [:error] [pid 21068] UnboundLocalError: local variable 'image' referenced before assignment
[Mon Jun 22 09:20:07.599046 2015] [:error] [pid 21069] Unhandled exception in thread started by <function image_update at 0x7fd151ff0320>
[Mon Jun 22 09:20:07.606163 2015] [:error] [pid 21069] Traceback (most recent call last):
[Mon Jun 22 09:20:07.606205 2015] [:error] [pid 21069]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 129, in image_update
[Mon Jun 22 09:20:07.606293 2015] [:error] [pid 21069]     return image
[Mon Jun 22 09:20:07.606336 2015] [:error] [pid 21069] UnboundLocalError: local variable 'image' referenced before assignment
[Mon Jun 22 09:21:05.728746 2015] [:error] [pid 21069] Unhandled exception in thread started by <function image_update at 0x7fd151ff0320>
[Mon Jun 22 09:21:05.728826 2015] [:error] [pid 21069] Traceback (most recent call last):
[Mon Jun 22 09:21:05.728846 2015] [:error] [pid 21069]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 129, in image_update
[Mon Jun 22 09:21:05.728923 2015] [:error] [pid 21069]     return image
[Mon Jun 22 09:21:05.728972 2015] [:error] [pid 21069] UnboundLocalError: local variable 'image' referenced before assignment
[Mon Jun 22 09:21:26.640919 2015] [:error] [pid 21069] Unhandled exception in thread started by <function image_update at 0x7fd151ff0320>
[Mon Jun 22 09:21:26.640967 2015] [:error] [pid 21069] Traceback (most recent call last):
[Mon Jun 22 09:21:26.640990 2015] [:error] [pid 21069]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py", line 129, in image_update
[Mon Jun 22 09:21:26.641080 2015] [:error] [pid 21069]     return image
[Mon Jun 22 09:21:26.641123 2015] [:error] [pid 21069] UnboundLocalError: local variable 'image' referenced before assignment
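
The UnboundLocalError pattern in the log reduces to the following (a
minimal standalone reproduction, not the actual glance.py body): image is
only bound when the update succeeds, but the function still falls through
to return image after swallowing the exception.

    def image_update(do_update):
        try:
            image = do_update()  # raises, so 'image' is never bound
        except Exception:
            pass                 # error swallowed, mirroring the log
        return image             # UnboundLocalError: referenced before assignment

    try:
        image_update(lambda: 1 / 0)
    except UnboundLocalError as exc:
        print(exc)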

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: horizon_error.log
   
https://bugs.launchpad.net/bugs/1467462/+attachment/4418450/+files/horizon_error.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467462

Title:
  Error causes image creation window to lose values

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  In the 'Create An Image' window, when inserting an illegal URL in the image
  location field, the action fails and the image format field loses its content.

  Version-Release number of selected component (if applicable):
  python-django-horizon-2015.1.0-10.el7ost.noarch

  How reproducible:
  sometimes 

  Steps to Reproduce:
  1. From 'Create An Image' window, insert an illegal URL in image location 
field 
  2. Try to confirm the image creation

  Actual results:
  The format field is cleared, although it had been configured

  Expected results:
  The format field should stay as it was configured 

  Additional info:

  From horizon log:
  [Mon Jun 22 09:19:12.758853 2015] [:error] [pid 21067]   File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py,
 line 129, in image_update
  [Mon Jun 22 09:19:12.758997 2015] [:error] [pid 21067] return image
  [Mon Jun 22 09:19:12.759045 2015] [:error] [pid 21067] 
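
  For readers tracing this: a minimal sketch of the likely pattern (an
  assumption about the shape of the bug, not the actual Horizon code).
  If the glance call raises and the handler neither returns nor
  re-raises, the later return image references a name that was never
  bound:

      def image_update(update_call):
          try:
              image = update_call()  # may raise, e.g. on an illegal URL
          except Exception:
              pass                   # error swallowed; no return, no re-raise
          return image               # UnboundLocalError if update_call raised

      def failing_update():
          raise ValueError("illegal image location URL")

      try:
          image_update(failing_update)
      except UnboundLocalError as exc:
          print(exc)  # local variable 'image' referenced before assignment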

[Yahoo-eng-team] [Bug 1467471] [NEW] RFE - Support Distributed SNAT with DVR

2015-06-22 Thread Takanori Miyagishi
Public bug reported:

In the Juno release, DVR was implemented in Neutron,
so virtual routers can run on each Compute node.
However, SNAT still runs on the Network node and is not distributed yet.

Our proposal is to distribute SNAT to each Compute node.
With distributed SNAT, packets would no longer need to traverse the Network node.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467471

Title:
  RFE - Support Distributed SNAT with DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the Juno release, DVR was implemented in Neutron,
  so virtual routers can run on each Compute node.
  However, SNAT still runs on the Network node and is not distributed yet.

  Our proposal is to distribute SNAT to each Compute node.
  With distributed SNAT, packets would no longer need to traverse the Network node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450438] Re: loopingcall: if a time drift to the future occurs, all timers will be blocked

2015-06-22 Thread Elena Ezhova
Fix for oslo.service was committed in review:
https://review.openstack.org/#/c/190372/

** Changed in: oslo.service
   Status: New = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450438

Title:
  loopingcall: if a time drift to the future occurs, all timers will be
  blocked

Status in OpenStack Compute (Nova):
  Triaged
Status in Library for running OpenStack services:
  Fix Released

Bug description:
  Due to the fact that loopingcall.py uses time.time for recording wall-
  clock time which is not guaranteed to be monotonic, if a time drift to
  the future occurs, and then gets corrected, all the timers will get
  blocked until the actual time reaches the moment of the original
  drift.
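
  A toy illustration of the failure (a deliberate simplification, not
  loopingcall.py itself):

      import time

      class WallClockTimer(object):
          """Computes its deadline from the wall clock."""
          def __init__(self, interval):
              self.deadline = time.time() + interval

          def due(self):
              return time.time() >= self.deadline

      # If time.time() was three hours fast when the deadline was set
      # and is then corrected, due() stays False for roughly three
      # hours: the timer is effectively blocked, as described above.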

  This can be pretty bad if the interval is significant: in Nova's
  case, all services use FixedIntervalLoopingCall for their heartbeat
  periodic tasks, so if the drift is on the order of several hours, no
  heartbeats will happen.

  DynamicLoopingCall is affected by this as well, because it relies on
  eventlet, which also uses the non-monotonic time.time function for
  its internal timers.

  Solving this will require looping calls to start using a monotonic
  timer (for python 2.7 there is a monotonic package).

  Also, all services that want to use timers and avoid this issue
  should do something like the following immediately after calling
  eventlet.monkey_patch():

    import eventlet
    eventlet.monkey_patch()

    import monotonic

    # Point eventlet's hub at a monotonic clock so that wall-clock
    # corrections cannot block its timers.
    hub = eventlet.get_hub()
    hub.clock = monotonic.monotonic

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440538] Re: Fix typo in oslo_messaging/_drivers/protocols/amqp/opts.py

2015-06-22 Thread OpenStack Infra
** Changed in: openstack-manuals
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1440538

Title:
  Fix typo in oslo_messaging/_drivers/protocols/amqp/opts.py

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Magnum:
  Fix Released
Status in OpenStack Manuals:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  verifing - verifying. This typo affects a lot of projects, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1440538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467409] [NEW] HyperV destroy vm fails on Windows Server 2008R2

2015-06-22 Thread Adelina Tuvenie
Public bug reported:

Delete vm fails on Windows Server 2008R2 with the following error:

WindowsError: [Error 145] The directory is not empty:
'C:\\OpenStack\\Instances\\instance-0003'

This happens because on Windows Server 2008 R2 it takes a while (1-2
seconds) for Hyper-V to delete the instance-specific files that it
stores in the instance folder.
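
One way a fix could look (a sketch assuming a short retry loop is
acceptable; this is not the actual Nova patch):

    import shutil
    import time

    def delete_instance_dir(path, attempts=5, delay=1.0):
        """Retry removal while Hyper-V releases the instance files."""
        for _ in range(attempts - 1):
            try:
                shutil.rmtree(path)
                return
            except OSError:
                time.sleep(delay)   # give Hyper-V the 1-2 s it needs
        shutil.rmtree(path)         # last attempt; let the error propagate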

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467409

Title:
  HyperV destroy vm fails on Windows Server 2008R2

Status in OpenStack Compute (Nova):
  New

Bug description:
  Delete vm fails on Windows Server 2008R2 with the following error:

  WindowsError: [Error 145] The directory is not empty:
  'C:\\OpenStack\\Instances\\instance-0003'

  This happens because on Windows Server 2008 R2 it takes a while (1-2
  seconds) for Hyper-V to delete the instance-specific files that it
  stores in the instance folder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467451] [NEW] Hyper-V: fail to detach virtual hard disks

2015-06-22 Thread Lucian Petrut
Public bug reported:

The Nova Hyper-V driver fails to detach virtual hard disks when using
the virtualization v1 WMI namespace.

The reason is that it looks up the attached resource using the wrong
resource object connection attribute, so it cannot find it.

This affects Windows Server 2008 as well as Windows Server 2012 when the
old namespace is used.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467451

Title:
  Hyper-V: fail to detach virtual hard disks

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Nova Hyper-V driver fails to detach virtual hard disks when using
  the virtualization v1 WMI namespace.

  The reason is that it looks up the attached resource using the wrong
  resource object connection attribute, so it cannot find it.

  This affects Windows Server 2008 as well as Windows Server 2012 when
  the old namespace is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382390] Re: nova-api should shutdown gracefully

2015-06-22 Thread Elena Ezhova
Related fix for oslo.service: https://review.openstack.org/#/c/190175/

** Changed in: oslo.service
   Status: Confirmed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382390

Title:
  nova-api should shutdown gracefully

Status in OpenStack Compute (Nova):
  In Progress
Status in The Oslo library incubator:
  Confirmed
Status in Library for running OpenStack services:
  Fix Released

Bug description:
  In Icehouse, an awesome feature got implemented: graceful shutdown of
  Nova services, which makes sure in-progress RPC requests get done
  before the process is killed.

  But nova-api does not support graceful shutdown yet, which can cause
  problems when upgrading. For example, when a request to create an
  instance is in progress, killing nova-api may leave quotas out of
  sync or leave odd database records. Especially in large-scale
  deployments, with hundreds of requests per second, killing nova-api
  will interrupt many in-progress greenlets.

  In nova/wsgi.py, when stopping the WSGI service, we first shrink the
  greenlet pool size to 0, then kill the eventlet wsgi server. The
  workaround is quick and easy: wait for all greenlets in the pool to
  finish, then kill the wsgi server. The code looks like below:

  
  diff --git a/nova/wsgi.py b/nova/wsgi.py
  index ba52872..3c89297 100644
  --- a/nova/wsgi.py
  +++ b/nova/wsgi.py
  @@ -212,6 +212,9 @@ class Server(object):
           if self._server is not None:
               # Resize pool to stop new requests from being processed
               self._pool.resize(0)
  +            num = self._pool.running()
  +            LOG.info(_("Waiting WSGI server to finish %d requests." % num))
  +            self._pool.waitall()
               self._server.kill()
   
       def wait(self):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362863] Re: reply queues fill up with unacked messages

2015-06-22 Thread Jorge Niedbalski
** Also affects: oslo.messaging (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Fix Released
Status in oslo.messaging package in Ubuntu:
  New
Status in oslo.messaging source package in Trusty:
  New

Bug description:
  Since upgrading to icehouse we consistently get reply_x queues
  filling up with unacked messages. To fix this I have to restart the
  service. This seems to happen when something is wrong for a short
  period of time and it doesn't clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata, cinder-api but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362863] Re: reply queues fill up with unacked messages

2015-06-22 Thread Chris J Arges
** Also affects: oslo.messaging (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Fix Released
Status in oslo.messaging package in Ubuntu:
  New
Status in oslo.messaging source package in Trusty:
  New

Bug description:
  Since upgrading to icehouse we consistently get reply_x queues
  filling up with unacked messages. To fix this I have to restart the
  service. This seems to happen when something is wrong for a short
  period of time and it doesn't clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata, cinder-api but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467719] [NEW] image-create returns wrong error

2015-06-22 Thread takmatsu
Public bug reported:

When wrong credentials are set in glance-api.conf and
~/.glanceclient/image_schema.json does not exist,
image-create returns "unrecognized arguments".

ex)
glance-api.conf
   [keystone_authtoken]
   ...
   password = wrongpassword
   ...


$ glance image-create --name cirros-0.3.4-x86_64 --file 
/tmp/images/cirros-0.3.4-x86_64-disk.img   --disk-format qcow2 
--container-format bare --visibility public --progress

usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT]
  [--no-ssl-compression] [-f] [--os-image-url OS_IMAGE_URL]
  [--os-image-api-version OS_IMAGE_API_VERSION]
  [--profile HMAC_KEY] [-k] [--os-cert OS_CERT]
  [--cert-file OS_CERT] [--os-key OS_KEY] [--key-file OS_KEY]
  [--os-cacert ca-certificate-file] [--ca-file OS_CACERT]
  [--os-username OS_USERNAME] [--os-user-id OS_USER_ID]
  [--os-user-domain-id OS_USER_DOMAIN_ID]
  [--os-user-domain-name OS_USER_DOMAIN_NAME]
  [--os-project-id OS_PROJECT_ID]
  [--os-project-name OS_PROJECT_NAME]
  [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
  [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
  [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID]
  [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL]
  [--os-region-name OS_REGION_NAME]
  [--os-auth-token OS_AUTH_TOKEN]
  [--os-service-type OS_SERVICE_TYPE]
  [--os-endpoint-type OS_ENDPOINT_TYPE]
  subcommand ...
glance: error: unrecognized arguments: --name --disk-format qcow2 
--container-format bare --visibility public
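
The misleading message looks like an argument-parsing artifact. A
hypothetical argparse analogue (an assumption about the cause, not
glanceclient's actual code): if the schema fetch fails because of the
bad credentials and no cached schema file exists, the schema-derived
options never get registered, so argparse blames the arguments instead
of surfacing the auth failure:

    import argparse

    def build_parser(schema_available):
        parser = argparse.ArgumentParser(prog="glance")
        sub = parser.add_subparsers(dest="subcommand")
        create = sub.add_parser("image-create")
        if schema_available:  # these options normally come from the schema
            create.add_argument("--name")
            create.add_argument("--disk-format")
        return parser

    build_parser(True).parse_args(["image-create", "--name", "x"])   # OK
    build_parser(False).parse_args(["image-create", "--name", "x"])
    # -> glance: error: unrecognized arguments: --name x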

** Affects: glance
 Importance: Undecided
 Assignee: takmatsu (takeaki-matsumoto)
 Status: New

** Changed in: glance
 Assignee: (unassigned) = takmatsu (takeaki-matsumoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1467719

Title:
  image-create returns wrong error

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When wrong credentials are set in glance-api.conf and
  ~/.glanceclient/image_schema.json does not exist,
  image-create returns "unrecognized arguments".

  ex)
  glance-api.conf
 [keystone_authtoken]
 ...
 password = wrongpassword
 ...

  
  $ glance image-create --name cirros-0.3.4-x86_64 --file 
/tmp/images/cirros-0.3.4-x86_64-disk.img   --disk-format qcow2 
--container-format bare --visibility public --progress

  usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT]
[--no-ssl-compression] [-f] [--os-image-url OS_IMAGE_URL]
[--os-image-api-version OS_IMAGE_API_VERSION]
[--profile HMAC_KEY] [-k] [--os-cert OS_CERT]
[--cert-file OS_CERT] [--os-key OS_KEY] [--key-file OS_KEY]
[--os-cacert ca-certificate-file] [--ca-file OS_CACERT]
[--os-username OS_USERNAME] [--os-user-id OS_USER_ID]
[--os-user-domain-id OS_USER_DOMAIN_ID]
[--os-user-domain-name OS_USER_DOMAIN_NAME]
[--os-project-id OS_PROJECT_ID]
[--os-project-name OS_PROJECT_NAME]
[--os-project-domain-id OS_PROJECT_DOMAIN_ID]
[--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
[--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID]
[--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL]
[--os-region-name OS_REGION_NAME]
[--os-auth-token OS_AUTH_TOKEN]
[--os-service-type OS_SERVICE_TYPE]
[--os-endpoint-type OS_ENDPOINT_TYPE]
subcommand ...
  glance: error: unrecognized arguments: --name --disk-format qcow2 
--container-format bare --visibility public

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1467719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464554] Re: instance failed to spawn with external network

2015-06-22 Thread Yao Long
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464554

Title:
  instance failed to spawn with external network

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm trying to launch an instance with an external network, but it
  results in a failed status. Instances with an internal network are
  fine.

  Following is the nova-compute.log from the compute node:

  2015-06-12 15:22:50.899 3121 INFO nova.compute.manager 
[req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 
700e680640e0415faf591e950cdb42d0 - - -] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Starting instance...
  2015-06-12 15:22:50.997 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Attempting claim: memory 2048 MB, disk 50 
GB
  2015-06-12 15:22:50.997 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Total memory: 515884 MB, used: 2560.00 MB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] memory limit: 773826.00 MB, free: 
771266.00 MB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Total disk: 1144 GB, used: 50.00 GB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] disk limit not specified, defaulting to 
unlimited
  2015-06-12 15:22:51.023 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Claim successful
  2015-06-12 15:22:51.134 3121 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.270 3121 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.470 3121 INFO nova.virt.libvirt.driver 
[req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Creating image
  2015-06-12 15:22:51.760 3121 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.993 3121 ERROR nova.compute.manager 
[req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Instance failed to spawn
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Traceback (most recent call last):
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2442, in 
_build_resources
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] yield resources
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2314, in 
_build_and_run_instance
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] block_device_info=block_device_info)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2351, in 
spawn
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] write_to_disk=True)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4172, in 
_get_guest_xml
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] context)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4043, in 
_get_guest_config
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] flavor, virt_type)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py, line 374, in 
get_config
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] _("Unexpected vif_type=%s") % 
vif_type)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] NovaException: Unexpected 
vif_type=binding_failed
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]

[Yahoo-eng-team] [Bug 1467208] Re: firewall remove router doesn't work

2015-06-22 Thread yan.haifeng
Fix Committed https://review.openstack.org/#/c/193922/

** Changed in: horizon
   Status: Invalid = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467208

Title:
  firewall remove router doesn't work

Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  the label of the firewall remove router form, remove routers, may
  be misunderstood.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467728] [NEW] Do not check neutron port quota in API layer

2015-06-22 Thread shihanzhang
Public bug reported:

Currently the neutron API does not provide a reservation mechanism, so if a
tenant has a large number of ports, 'list_ports' in validate_networks will be
very expensive. Port creation also depends, in some cases, on MAC addresses
only available on the compute manager, so I think it is better to remove this
check from validate_networks:

def validate_networks(self, context, requested_networks, num_instances):
    ...
    neutron = get_client(context)
    ports_needed_per_instance = self._ports_needed_per_instance(
        context, neutron, requested_networks)
    if ports_needed_per_instance:
        ports = neutron.list_ports(tenant_id=context.project_id)['ports']
        quotas = neutron.show_quota(tenant_id=context.project_id)['quota']
        if quotas.get('port', -1) == -1:
            # Unlimited Port Quota
            return num_instances

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467728

Title:
  Do not check neutron port quota in API layer

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently the neutron API does not provide a reservation mechanism, so if a
  tenant has a large number of ports, 'list_ports' in validate_networks will
  be very expensive. Port creation also depends, in some cases, on MAC
  addresses only available on the compute manager, so I think it is better to
  remove this check from validate_networks:

  def validate_networks(self, context, requested_networks, num_instances):
      ...
      neutron = get_client(context)
      ports_needed_per_instance = self._ports_needed_per_instance(
          context, neutron, requested_networks)
      if ports_needed_per_instance:
          ports = neutron.list_ports(tenant_id=context.project_id)['ports']
          quotas = neutron.show_quota(tenant_id=context.project_id)['quota']
          if quotas.get('port', -1) == -1:
              # Unlimited Port Quota
              return num_instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443562] Re: /etc/nova/nova.conf missing section: [database]

2015-06-22 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443562

Title:
  /etc/nova/nova.conf missing section: [database]

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  The docs tell you to add the 'connection=...' line after the
  [database] section within /etc/nova/nova.conf.

  I then created and ran the following bash script:
  OLD="^#connection=mysql.*"
  NEW="connection = mysql://nova:$NOVA_DBPASS@controller/nova"
  sed -i "/^\[database\]/,/^\[/{ s~$OLD~$NEW~ }" /etc/nova/nova.conf

  Unlike some of the other OpenStack config files that require a
  '[database]' section, unbeknownst to me there was no '[database]'
  section in /etc/nova/nova.conf, so my script did nothing.

  I later kept getting the following error:
  Access denied for user 'nova'@'localhost' (using password: YES)) None None

  Googling, I found many people who faced the same error, but nothing
  led me to this issue.

  I recommend adding a '[database]' section to /etc/nova/nova.conf.

  cheerz
  kendal

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-06-22 Thread Sergey Vilgelm
heat review: https://review.openstack.org/#/c/194494/

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
   Status: New = In Progress

** Changed in: heat
 Assignee: (unassigned) = Sergey Vilgelm (sergey.vilgelm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.
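
  For most projects the port is largely an import move (a sketch; the
  old module path shown is an assumption and varies per project):

      # Old, from the oslo-incubator copy bundled in each project:
      #     from nova.openstack.common import service
      # New, from the graduated oslo.service library:
      from oslo_service import service  # noqa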

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467504] [NEW] Health monitor Admin state up False inactive

2015-06-22 Thread Alex Syafeyev
Public bug reported:

We have LBaaS v2 running.
I configured a health monitor.
When executing tcpdump on the VM, I see it receives the HTTP traffic.

I executed the following 
[root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-create 
--delay 1 --max-retries 3 --timeout 3 --type PING --pool 
1ac828d0-0064-446e-a7cc-5f4eacaf37de
Created a new healthmonitor:
+++
| Field  | Value  |
+++
| admin_state_up | True   |
| delay  | 1  |
| expected_codes | 200|
| http_method| GET|
| id | 46fd07b4-94b1-4494-8ee2-0a6c9803cf2c   |
| max_retries| 3  |
| pools  | {id: 1ac828d0-0064-446e-a7cc-5f4eacaf37de} |
| tenant_id  | bee13f9e436e4d78b9be72b8ec78d81c   |
| timeout| 3  |
| type   | PING   |
| url_path   | /  |
+++
[root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update
usage: neutron lbaas-healthmonitor-update [-h] [--request-format {json,xml}]
  HEALTHMONITOR
neutron lbaas-healthmonitor-update: error: too few arguments
[root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update 
46fd07b4-94b1-4494-8ee2-0a6c9803cf2c --admin_state_down True
Unrecognized attribute(s) 'admin_state_down'
[root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update 
46fd07b4-94b1-4494-8ee2-0a6c9803cf2c --admin_state_up False
Updated healthmonitor: 46fd07b4-94b1-4494-8ee2-0a6c9803cf2c
[root@puma09 neutron(keystone_redhat)]# 

I executed tcpdump again on the VM and could see the health-monitoring
traffic continuing.

After deleting the health monitor, no traffic was captured on the VM.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467504

Title:
  Health monitor Admin state up False inactive

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have LBaaS v2 running.
  I configured a health monitor.
  When executing tcpdump on the VM, I see it receives the HTTP traffic.

  I executed the following 
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-create 
--delay 1 --max-retries 3 --timeout 3 --type PING --pool 
1ac828d0-0064-446e-a7cc-5f4eacaf37de
  Created a new healthmonitor:
  +++
  | Field  | Value  |
  +++
  | admin_state_up | True   |
  | delay  | 1  |
  | expected_codes | 200|
  | http_method| GET|
  | id | 46fd07b4-94b1-4494-8ee2-0a6c9803cf2c   |
  | max_retries| 3  |
  | pools  | {id: 1ac828d0-0064-446e-a7cc-5f4eacaf37de} |
  | tenant_id  | bee13f9e436e4d78b9be72b8ec78d81c   |
  | timeout| 3  |
  | type   | PING   |
  | url_path   | /  |
  +++
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update
  usage: neutron lbaas-healthmonitor-update [-h] [--request-format {json,xml}]
HEALTHMONITOR
  neutron lbaas-healthmonitor-update: error: too few arguments
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update 
46fd07b4-94b1-4494-8ee2-0a6c9803cf2c --admin_state_down True
  Unrecognized attribute(s) 'admin_state_down'
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update 
46fd07b4-94b1-4494-8ee2-0a6c9803cf2c --admin_state_up False
  Updated healthmonitor: 46fd07b4-94b1-4494-8ee2-0a6c9803cf2c
  [root@puma09 neutron(keystone_redhat)]# 

  I executed tcpdump again on the VM and could see the
  health-monitoring traffic continuing.

  After deleting the health monitor, no traffic was captured on the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467504/+subscriptions


[Yahoo-eng-team] [Bug 1467518] [NEW] neutron --debug port-list --binding:vif_type=binding_failed returns wrong ports

2015-06-22 Thread George Shuklin
Public bug reported:

neutron --debug port-list --binding:vif_type=binding_failed displays
ports with every vif_type, not only those with binding_failed.

vif_type=binding_failed is set when something bad happens on a compute
host during port configuration (no local vlans in ml2 conf, etc)

We intended to monitor for such ports, but the request to neutron
returns some irrelevant ports:

REQ: curl -i -X GET
"https://neutron.lab.internal:9696/v2.0/ports.json?binding%3Avif_type=binding_failed"
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: 52c0c1ee1f764c408977f41c9f3743ca"

RESP BODY: {"ports": [{"status": "ACTIVE", "binding:host_id":
"compute2", "name": "", "admin_state_up": true, "network_id": "5c399fb7
-67ac-431d-9965-9586dbcec1c9", "tenant_id":
"3e6b1fc20da346838f93f124cb894d0f", "extra_dhcp_opts": [],
"binding:vif_details": {"port_filter": false, "ovs_hybrid_plug": false},
"binding:vif_type": "ovs", "device_owner": "network:dhcp",
"mac_address": "fa:16:3e:ad:6f:22", "binding:profile": {},
"binding:vnic_type": "normal", "fixed_ips": [{"subnet_id":
"c10a3520-17e2-4c04-94c6-a4419d79cca9", "ip_address": "192.168.0.3"}],
...

If the request is sent as neutron --debug port-list
--binding:host_id=compute1, filtering works as expected.

Neutron version - 2014.2.4
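
The behaviour is consistent with the server silently dropping filter
keys it does not support (an assumption about the cause; the sketch
below is illustrative, not neutron's actual filtering code):

    def apply_filters(ports, filters, allowed=("binding:host_id",)):
        # Unknown filter keys are silently dropped, mirroring the report.
        known = {k: v for k, v in filters.items() if k in allowed}
        return [p for p in ports
                if all(p.get(k) == v for k, v in known.items())]

    ports = [{"binding:vif_type": "ovs"},
             {"binding:vif_type": "binding_failed"}]
    print(apply_filters(ports, {"binding:vif_type": "binding_failed"}))
    # -> both ports: the unsupported key is ignored, nothing is filtered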

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- neutron --debug port-list --binding:vif_type=binding_failed displays wrong 
ports
+ neutron --debug port-list --binding:vif_type=binding_failed returns wrong 
ports

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467518

Title:
  neutron --debug port-list --binding:vif_type=binding_failed returns
  wrong ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron --debug port-list --binding:vif_type=binding_failed displays
  ports with every vif_type, not only those with binding_failed.

  vif_type=binding_failed is set when something bad happens on a compute
  host during port configuration (no local vlans in ml2 conf, etc)

  We intended to monitor for such ports, but the request to neutron
  returns some irrelevant ports:

  REQ: curl -i -X GET
  "https://neutron.lab.internal:9696/v2.0/ports.json?binding%3Avif_type=binding_failed"
  -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
  "X-Auth-Token: 52c0c1ee1f764c408977f41c9f3743ca"

  RESP BODY: {"ports": [{"status": "ACTIVE", "binding:host_id":
  "compute2", "name": "", "admin_state_up": true, "network_id":
  "5c399fb7-67ac-431d-9965-9586dbcec1c9", "tenant_id":
  "3e6b1fc20da346838f93f124cb894d0f", "extra_dhcp_opts": [],
  "binding:vif_details": {"port_filter": false, "ovs_hybrid_plug":
  false}, "binding:vif_type": "ovs", "device_owner": "network:dhcp",
  "mac_address": "fa:16:3e:ad:6f:22", "binding:profile": {},
  "binding:vnic_type": "normal", "fixed_ips": [{"subnet_id":
  "c10a3520-17e2-4c04-94c6-a4419d79cca9", "ip_address": "192.168.0.3"}],
  ...

  If the request is sent as neutron --debug port-list
  --binding:host_id=compute1, filtering works as expected.

  Neutron version - 2014.2.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436324] Re: Keystone is not HA with memcache as token persistence driver

2015-06-22 Thread David Stanek
memcached is not HA by design. This isn't really something we can fix in
Keystone, other than to remove this backend. memcached should really
only be used in situations where it is a cache and the application can
run even with it shut off.

** Changed in: keystone
   Status: In Progress = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1436324

Title:
  Keystone is not HA with memcache as token persistence driver

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  Keystone becomes extremely slow if one of the memcached servers used
  as the token persistence driver stops working. This happens because
  Keystone re-initializes the memcache client on every call, and the
  memcache client loses information about dead servers and how long
  they should be considered dead.
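
  A toy model of why the per-call re-initialization hurts (a
  simplification, not the actual driver code):

      import time

      DOWN = {"192.168.56.102:11211"}       # pretend this server is off

      def real_get(server, key):
          if server in DOWN:
              time.sleep(3)                 # simulated connect timeout
              raise ConnectionError(server)
          return "value"

      class Client(object):
          """Remembers dead servers so later calls skip them quickly."""
          def __init__(self, servers, retry_after=30):
              self.servers = servers
              self.retry_after = retry_after
              self.dead_until = {}

          def get(self, key):
              for s in self.servers:
                  if self.dead_until.get(s, 0) > time.time():
                      continue              # known dead: skip, no timeout
                  try:
                      return real_get(s, key)
                  except ConnectionError:
                      self.dead_until[s] = time.time() + self.retry_after

      servers = ["192.168.56.102:11211", "192.168.56.101:11211"]

      reused = Client(servers)
      reused.get("token")                   # pays the 3 s timeout once
      reused.get("token")                   # fast: dead server is skipped

      # Re-created on every call, as this report says Keystone does:
      Client(servers).get("token")          # pays the timeout...
      Client(servers).get("token")          # ...every single time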

  To reproduce the issue:
  1. Set up two memcached instances on, say, virtual machines
  2. Set [token]driver=keystone.token.persistence.backends.memcache_pool.Token
  3. Set [memcache]servers=192.168.56.101:11211,192.168.56.102:11211 (change ip 
to your own)
  4. Break down one of the servers. By turning the server off for example
  5. Try signing in to Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1436324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467544] [NEW] Network field update delayed if a few net-id were specified during creation

2015-06-22 Thread George Shuklin
Public bug reported:

Steps to reproduce:

1. Create a few networks. In my case they were shared external networks
of 'vlan' type.

Example:
neutron net-create internet_192.168.16.64/27 --router:external True 
--provider:physical_network internet --provider:network_type  vlan 
--provider:segmentation_id 20 --shared
neutron subnet-create internet_192.168.16.64/27 --enable-dhcp 
--gateway=192.168.16.65 --dns-nameserver=8.8.8.8 --dns-nameserver=77.88.8.8 
192.168.16.64/27


2. Boot instance:

 nova boot --flavor m1.small --image cirros --key-name x220 \
   --nic net-id=25f7440e-5ffd-4407-a83e-0bce6e8c216d \
   --nic net-id=a3af8097-f348-4767-97c3-b9bf75263ef9 myinstance

3. Get instance info after it becomes 'ACTIVE':

nova show 0111cff2-205f-493c-9d37-bf8a550270a2
+--+---+
| Property | Value  
   |
+--+---+
| OS-DCF:diskConfig| MANUAL 
   |
| OS-EXT-AZ:availability_zone  | nova   
   |
| OS-EXT-STS:power_state   | 1  
   |
| OS-EXT-STS:task_state| -  
   |
| OS-EXT-STS:vm_state  | active 
   |
| OS-SRV-USG:launched_at   | 2015-06-22T13:47:10.00 
   |
| OS-SRV-USG:terminated_at | -  
   |
| accessIPv4   |
   |
| accessIPv6   |
   |
| config_drive |
   |
| created  | 2015-06-22T13:47:03Z   
   |
| flavor   | SSD.30 (30)
   |
| hostId   | 
ac01a9c7098d3d6f769fabd7071794ba11cca06d11a15867da898dbc  |
| id   | 0111cff2-205f-493c-9d37-bf8a550270a2   
   |
| image| Debian 8.0 Jessie (x86_64) 
(cc00f340-c927-4309-965e-63f02c94027d) |
| internet_192.168.16.192/27 network   | 192.168.16.205 
   |
| key_name | x220   
   |
| local_private network|
   |
| metadata | {} 
   |
| name | hands  
   |
| os-extended-volumes:volumes_attached | [] 
   |
| progress | 0  
   |
| security_groups  | default
   |
| status   | ACTIVE 
   |
| tenant_id| 1d7f6604ebb54c69820f9d157bcea5f9   
   |
| updated  | 2015-06-22T13:47:10Z   
   |
| user_id  | 51b457fc5dee4b6098093542bd659e8a   
   |
+--+---+

The local_private network field is empty.

Expected: it contains an IP address.

This can be fixed by nova refresh-network, but it requires admin
privileges.

Version: nova 2014.2.4 with neutron network.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Steps to reproduce:
  
- 1. Create a few networks. In my case they were external networks of 'vlan' 
type. 
+ 1. Create a few networks. In my case they were shared external networks of 
'vlan' type.
  2. Boot instance:
  
-  nova boot  --flavor  m1.small --image cirros --key-name x220 --nic net-
+  nova boot  --flavor  m1.small --image cirros --key-name x220 --nic net-
  id=25f7440e-5ffd-4407-a83e-0bce6e8c216d --nic net-
  id=a3af8097-f348-4767-97c3-b9bf75263ef9 myinstance
  
  3. Get instance info after it becomes