[Yahoo-eng-team] [Bug 1414802] Re: volume-attach already attached and already detached are not mutually exclusive

2015-03-16 Thread Vincent Hou
** Changed in: cinder
 Assignee: (unassigned) => Vincent Hou (houshengbo)

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414802

Title:
  volume-attach already attached and already detached are not
  mutually exclusive

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
   nova volume-attach 81eef059-1490-4ec3-b0fc-b1b4e6380ee8 
639e0a87-1190-42d4-b3a1-a385090aec06
  ERROR: Invalid volume: already attached (HTTP 400) (Request-ID: 
req-1c51a40b-1015-4cee-b313-5dda524a3555)
   nova volume-detach 81eef059-1490-4ec3-b0fc-b1b4e6380ee8 
639e0a87-1190-42d4-b3a1-a385090aec06
  ERROR: Invalid volume: already detached (HTTP 400) (Request-ID: 
req-bb99c24e-1ca8-4f02-bb0a-9bb125857637)

  I'm trying to resolve a much larger problem with my openstack
  environment, involving volume exporting, and I ran across this
  situation. Whatever the tests are for these two messages, they should
  be mutually exclusive. Maybe a third message is needed, for some
  intermediate state?
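
The expectation can be sketched as a status check in which, for any single volume status, at most one of the two "already ..." errors can apply (an illustrative sketch only: the status values, helper names, and messages are assumptions for discussion, not Nova's or Cinder's actual code):

```python
# Illustrative sketch only: the status values and error messages are
# assumptions for discussion, not Nova's or Cinder's actual code.
def attach_error(volume_status):
    """Return the error volume-attach should report, or None if OK."""
    if volume_status == "in-use":
        return "already attached"
    if volume_status in ("attaching", "detaching"):
        return "volume is in an intermediate state: " + volume_status
    return None  # "available": attach may proceed

def detach_error(volume_status):
    """Return the error volume-detach should report, or None if OK."""
    if volume_status == "available":
        return "already detached"
    if volume_status in ("attaching", "detaching"):
        return "volume is in an intermediate state: " + volume_status
    return None  # "in-use": detach may proceed
```

With a scheme like this, no single status yields both "already attached" and "already detached", and the intermediate statuses get the third message suggested above.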

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1414802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432883] [NEW] [Launch Instance Fix] source step - finalize dynamic table column headers and data

2015-03-16 Thread Brian Tully
Public bug reported:

The source step of the Launch Instance Wizard contains a transfer table
which is dynamic in nature. The table's column headers and data change
based on the boot source that is selected. However, the current version
of the code does not account for the fact that the column headers and
data mapping change based on the selection. We need to figure out which
columns and data to show for each boot source (based on the latest
Invision mockups) and then figure out if and/or where the corresponding
data is stored in the model.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432883

Title:
  [Launch Instance Fix] source step - finalize dynamic table column
  headers and data

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The source step of the Launch Instance Wizard contains a transfer
  table which is dynamic in nature. The table's column headers and data
  change based on the boot source that is selected. However, the
  current version of the code does not account for the fact that the
  column headers and data mapping change based on the selection. We
  need to figure out which columns and data to show for each boot
  source (based on the latest Invision mockups) and then figure out if
  and/or where the corresponding data is stored in the model.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432883/+subscriptions



[Yahoo-eng-team] [Bug 1432897] [NEW] [Launch Instance Fix] source step - fix donut chart dynamic update - limit instance count

2015-03-16 Thread Brian Tully
Public bug reported:

In the current version of the Launch Instance Wizard Source Step, the
instance count field is not correctly limited based on current instance
usage and nova limits (maxTotalInstances). There is a bug whereby the
user can enter an instance count greater than what is available, and in
addition the donut chart does not update properly when this occurs. We
should correct this so that if a user enters a number greater than what
is available, it is automatically rounded down to the available count.

Example: If I have a quota of 10 max instances and currently have 6
instances in use, I should only be able to specify 4 as my instance
count (which will update the donut chart to 100%). However, there is
nothing in the code that limits me from entering a higher number. We
should add some code that limits the instance count based on the number
remaining, and automatically rounds down to the highest number of
instances that keeps the user from going over 100%.
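
The clamping behaviour described above can be sketched in Python (illustrative only: the real fix belongs in the AngularJS wizard code, and the function and parameter names here are assumptions):

```python
def clamp_instance_count(requested, max_total_instances, in_use):
    """Round a requested instance count down to what quota allows.

    Illustrative sketch of the proposed behaviour; names are
    assumptions, and the real Horizon fix would live in the
    AngularJS Launch Instance wizard, not in Python.
    """
    # Instances still available under the quota; never negative.
    available = max(max_total_instances - in_use, 0)
    # Round the request down to the available count, floor at zero.
    return max(min(requested, available), 0)
```

For the example in the report (quota 10, 6 in use), a request of 7 would be clamped to 4, which corresponds to the donut chart reaching exactly 100%.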

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432897

Title:
  [Launch Instance Fix] source step - fix donut chart dynamic update -
  limit instance count

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the current version of the Launch Instance Wizard Source Step, the
  instance count field is not correctly limited based on current
  instance usage and nova limits (maxTotalInstances). There is a bug
  whereby the user can enter an instance count greater than what is
  available, and in addition the donut chart does not update properly
  when this occurs. We should correct this so that if a user enters a
  number greater than what is available, it is automatically rounded
  down to the available count.

  Example: If I have a quota of 10 max instances and currently have 6
  instances in use, I should only be able to specify 4 as my instance
  count (which will update the donut chart to 100%). However, there is
  nothing in the code that limits me from entering a higher number. We
  should add some code that limits the instance count based on the
  number remaining, and automatically rounds down to the highest number
  of instances that keeps the user from going over 100%.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432897/+subscriptions



[Yahoo-eng-team] [Bug 1432913] [NEW] rescue imageref not pulling image metadata during rescue mode

2015-03-16 Thread Tim Pownall
Public bug reported:

We recently added a new feature to the nova api that allows a user to
specify a rescue_image_ref and a password for the rescue mode action in
nova.  When specifying an alternative image, we're not currently using
the image's vm_mode and auto_disk_config flags; we use the current
instance's flags instead.  For example, if you specify an hvm image to
rescue a pv vm, you'll end up with a pv vm built off a pv image, if you
even get that far: auto_disk_config will traceback first, depending on
whether or not compute can resize the file system during rescue mode,
due to auto_disk_config being turned on.
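
The intended behaviour can be sketched as preferring the rescue image's metadata over the instance's own values (a hypothetical sketch: the property keys mirror the flags mentioned above, but the helper and data structures are illustrative, not Nova's actual code):

```python
def rescue_properties(instance_props, rescue_image_props):
    """Merge instance flags with rescue-image flags, preferring the
    rescue image's vm_mode and auto_disk_config when present.

    Illustrative sketch only; Nova's real rescue path does not use
    these exact structures or this helper.
    """
    merged = dict(instance_props)
    for key in ("vm_mode", "auto_disk_config"):
        if key in rescue_image_props:
            # The rescue image's setting wins over the instance's.
            merged[key] = rescue_image_props[key]
    return merged
```

Under this scheme, rescuing a pv instance with an hvm image would run the rescue with vm_mode "hvm" and the rescue image's auto_disk_config setting, rather than the instance's.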

I have a potential fix for this issue and will be creating a code
review.

** Affects: nova
 Importance: Undecided
 Assignee: Tim Pownall (pownalltim)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Tim Pownall (pownalltim)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432913

Title:
  rescue imageref not pulling image metadata during rescue mode

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  We recently added a new feature to the nova api that allows a user to
  specify a rescue_image_ref and a password for the rescue mode action
  in nova.  When specifying an alternative image, we're not currently
  using the image's vm_mode and auto_disk_config flags; we use the
  current instance's flags instead.  For example, if you specify an hvm
  image to rescue a pv vm, you'll end up with a pv vm built off a pv
  image, if you even get that far: auto_disk_config will traceback
  first, depending on whether or not compute can resize the file system
  during rescue mode, due to auto_disk_config being turned on.

  I have a potential fix for this issue and will be creating a code
  review.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432913/+subscriptions



[Yahoo-eng-team] [Bug 1432892] [NEW] Wrong exception when validating trust scoped tokens with disabled trustor

2015-03-16 Thread Samuel de Medeiros Queiroz
Public bug reported:

When validating a trust scoped token with a disabled trustor, an
exception of type Forbidden with the message 'Trustor is disabled.' is
raised.

However, the exception used when the user (owning the role assignment
for the provided token) is disabled is Unauthorized. This should be
changed in order to make the API consistent.
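
The proposed consistency can be sketched as one exception type for any disabled account involved in token validation (illustrative sketch only: the exception class and helper below are assumptions for discussion, not Keystone's actual code):

```python
class Unauthorized(Exception):
    """Stand-in for an HTTP 401 exception; illustrative only."""


def check_account_enabled(account, role="user"):
    """Raise the same exception type for any disabled account
    (user or trustor) involved in token validation, so the API
    responds consistently with 401 rather than mixing 401 and 403.

    Illustrative sketch; not Keystone's actual validation code.
    """
    if not account.get("enabled", True):
        raise Unauthorized("%s is disabled." % role.capitalize())
```

With a check like this, a disabled trustor and a disabled user both surface as Unauthorized, which is the consistency the report asks for.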

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1432892

Title:
  Wrong exception when validating trust scoped tokens with disabled
  trustor

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When validating a trust scoped token with disabled trustor, an
  exception of type Forbidden with message 'Trustor is disabled.' is
  raised.

  However, the exception used when the user (owning the role assignment for the 
provided token) is disabled is Unauthorized.
  This should be changed in order to make the API consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1432892/+subscriptions



[Yahoo-eng-team] [Bug 1432904] [NEW] Project with empty description is not sorted correctly

2015-03-16 Thread Kahou Lei
Public bug reported:

Steps to reproduce:

1. Identity -> Project
2. In one of the projects, edit the description to be empty.
3. Refresh the page and try to sort; you will notice that the project
list is sorted without including the project with the empty description.
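
A None-safe sort key of the kind the fix needs can be sketched as follows (illustrative only: Horizon's actual table sorting is done client-side, and the helper below is an assumption, not Horizon code):

```python
def sort_projects_by_description(projects):
    """Sort projects by description, treating a missing, None, or
    empty description as the empty string so those rows sort first
    instead of being dropped from the ordering.

    Illustrative sketch; Horizon's real sorting is client-side.
    """
    # `or ""` maps both None and "" to the empty string before
    # lowercasing, so every row gets a comparable key.
    return sorted(projects,
                  key=lambda p: (p.get("description") or "").lower())
```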

** Affects: horizon
 Importance: Undecided
 Assignee: Kahou Lei (kahou82)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kahou Lei (kahou82)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432904

Title:
  Project with empty description is not sorted correctly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1. Identity -> Project
  2. In one of the projects, edit the description to be empty.
  3. Refresh the page and try to sort; you will notice that the project
  list is sorted without including the project with the empty
  description.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432904/+subscriptions



[Yahoo-eng-team] [Bug 1432905] [NEW] Launching an instance fails when using a port with vnic_type=direct

2015-03-16 Thread Itzik Brown
Public bug reported:

After Launching an instance with a port with vnic_type=direct the
instance fails to start.

In the nova compute log I see:
2015-03-16 17:51:34.432 3313 TRACE nova.compute.manager ValueError: Field 
`extra_info[numa_node]' cannot be None 

Version
=======
openstack-nova-compute-2014.2.2-18.el7ost.noarch
python-nova-2014.2.2-18.el7ost.noarch
openstack-nova-common-2014.2.2-18.el7ost.noarch

How to Reproduce
================
# neutron port-create tenant1-net1 --binding:vnic-type direct
# nova boot --flavor m1.small --image rhel7 --nic port-id=port-id vm1

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: pci-passthrough

** Attachment added: Nova compute log
   
https://bugs.launchpad.net/bugs/1432905/+attachment/4347490/+files/instance_fails_sriov.txt

** Tags added: pci-passthrough

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432905

Title:
  Launching an instance fails when using a port with vnic_type=direct

Status in OpenStack Compute (Nova):
  New

Bug description:
  After Launching an instance with a port with vnic_type=direct the
  instance fails to start.

  In the nova compute log I see:
  2015-03-16 17:51:34.432 3313 TRACE nova.compute.manager ValueError: Field 
`extra_info[numa_node]' cannot be None 

  Version
  =======
  openstack-nova-compute-2014.2.2-18.el7ost.noarch
  python-nova-2014.2.2-18.el7ost.noarch
  openstack-nova-common-2014.2.2-18.el7ost.noarch

  How to Reproduce
  ================
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id=port-id vm1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432905/+subscriptions



[Yahoo-eng-team] [Bug 1237162] Re: Use DictOpt for mapping options

2015-03-16 Thread Zhongyue Luo
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237162

Title:
  Use DictOpt for mapping options

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The options below use ListOpt, which does not represent their purpose
  correctly:

  LINUX_BRIDGE.physical_interface_mappings
  ESWITCH.physical_interface_mappings
  OVS.bridge_mappings

  It would be more intuitive to use DictOpt, which is supported in
  oslo.config from version 1.2.0a1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1237162/+subscriptions



[Yahoo-eng-team] [Bug 1373513] Re: Lvm hang during tempest tests

2015-03-16 Thread John Griffith
** Changed in: nova
   Status: New => Invalid

** Changed in: cinder
 Assignee: (unassigned) => John Griffith (john-griffith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373513

Title:
  Lvm hang during tempest tests

Status in Cinder:
  Triaged
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Managed to trigger a hang in lvm create

  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.704164] 
INFO: task lvm:14805 blocked for more than 120 seconds.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705096] 
  Not tainted 3.13.0-35-generic #62-Ubuntu
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705839] 
echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706871] lvm 
D 8801ffd14440 0 14805  14804 0x
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706876]  
880068f9dae0 0082 8801a14bc800 880068f9dfd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706879]  
00014440 00014440 8801a14bc800 8801ffd14cd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706881]  
 88004063c280  8801a14bc800
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706883] 
Call Trace:
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706895]  
[81722a6d] io_schedule+0x9d/0x140
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706914]  
[811fac94] do_blockdev_direct_IO+0x1ce4/0x2910
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706918]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706920]  
[811fb915] __blockdev_direct_IO+0x55/0x60
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706922]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706924]  
[811f61f6] blkdev_direct_IO+0x56/0x60
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706926]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706937]  
[8115106b] generic_file_aio_read+0x69b/0x700
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706947]  
[811cca78] ? path_openat+0x158/0x640
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706953]  
[810f3c92] ? from_kgid_munged+0x12/0x20
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706955]  
[811f667b] blkdev_aio_read+0x4b/0x70
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706958]  
[811bc99a] do_sync_read+0x5a/0x90
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706960]  
[811bd035] vfs_read+0x95/0x160
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706962]  
[811bdb49] SyS_read+0x49/0xa0
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706966]  
[8172ed6d] system_call_fastpath+0x1a/0x1f
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706968] 
INFO: task lvs:14822 blocked for more than 120 seconds.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.707774] 
  Not tainted 3.13.0-35-generic #62-Ubuntu
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.708507] 
echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709535] lvs 
D 8801ffc14440 0 14822  14821 0x
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709537]  
880009ffdae0 0082 8800095e1800 880009ffdfd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709539]  
00014440 00014440 8800095e1800 8801ffc14cd8
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709541]  
 880003d59900  8800095e1800
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709543] 
Call Trace:
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709547]  
[81722a6d] io_schedule+0x9d/0x140
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709549]  
[811fac94] do_blockdev_direct_IO+0x1ce4/0x2910
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709551]  
[811f5b00] ? I_BDEV+0x10/0x10
  Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709554]  
[811fb915] __blockdev_direct_IO+0x55/0x60
  Sep 22 21:19:01 

[Yahoo-eng-team] [Bug 1421453] Re: AttributeError on Neutron API job

2015-03-16 Thread Assaf Muller
This has since been fixed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421453

Title:
  AttributeError on Neutron API job

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The trace:

  http://paste.openstack.org/show/172449/

  The logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6ICdtb2R1bGUnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlICdOb3RGb3VuZCdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzc4MzAyODMxMn0=

  It seems something sneaked in that broke the non-voting job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421453/+subscriptions



[Yahoo-eng-team] [Bug 1427959] Re: Using packstack deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50, encounters an error "ERROR : Error appeared during Puppet run: 10.43.241.186_neu

2015-03-16 Thread Assaf Muller
This is a Packstack or Puppet modules bug, unrelated to the Neutron project. 
Please report bug on Packstack:
https://bugzilla.redhat.com/enter_bug.cgi?product=RDO

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427959

Title:
  Using packstack deploy openstack, when CONFIG_NEUTRON_OVS_BRIDGE_IFACES
  =br-eno50:eno50, encounters an error "ERROR : Error appeared during
  Puppet run: 10.43.241.186_neutron.pp".

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  yum install -y openstack-packstack 
  packstack --gen-answer-file=~/answers.cfg 

  cat ~/answers.cfg | grep CONFIG_KEYSTONE_ADMIN_PW | grep -v ^$ | grep -v ^# 
  sed -i 's/CONFIG_HEAT_INSTALL=n/CONFIG_HEAT_INSTALL=y/' ~/answers.cfg 
  sed -i 
's/CONFIG_KEYSTONE_ADMIN_PW=d6f026eb28bc4ac6/CONFIG_KEYSTONE_ADMIN_PW=osp/' 
~/answers.cfg 
  sed -i 's/CONFIG_CINDER_VOLUMES_CREATE=y/CONFIG_CINDER_VOLUMES_CREATE=n/' 
~/answers.cfg 
  sed -i 
's/CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan/CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan/' 
~/answers.cfg 
  sed -i 
's/CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan/CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan/'
 ~/answers.cfg 
  sed -i 
's/CONFIG_NEUTRON_ML2_VLAN_RANGES=/CONFIG_NEUTRON_ML2_VLAN_RANGES=phvnet:100:200/'
 ~/answers.cfg 
  sed -i 
's/CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=/CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=phvnet:br-eno50/'
 ~/answers.cfg 
  sed -i 
's/CONFIG_NEUTRON_OVS_BRIDGE_IFACES=/CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eno50:eno50/'
 ~/answers.cfg 
  sed -i 's/CONFIG_PROVISION_DEMO=y/CONFIG_PROVISION_DEMO=n/' ~/answers.cfg 

  ...
  10.43.241.186_neutron.pp: [ ERROR ]  
  Applying Puppet manifests [ ERROR ] 

  ERROR : Error appeared during Puppet run: 10.43.241.186_neutron.pp 
  Error: Duplicate declaration: Vs_bridge[br-ens34] is already declared in file 
/var/tmp/packstack/f441c89782684b79848481c0eb3f7b5b/manifests/10.43.241.186_neutron.pp:98;
 cannot redeclare at 
/var/tmp/packstack/f441c89782684b79848481c0eb3f7b5b/modules/neutron/manifests/plugins/ovs/bridge.pp:9
 on node cephm186 
  You will find full trace in log 
/var/tmp/packstack/20150302-142444-EfhkSC/manifests/10.43.241.186_neutron.pp.log
 
  Please check log file 
/var/tmp/packstack/20150302-142444-EfhkSC/openstack-setup.log for more 
information

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427959/+subscriptions



[Yahoo-eng-team] [Bug 1245682] Re: interface-attach: does not prevent attaching multiple ports to same network on same instance

2015-03-16 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245682

Title:
  interface-attach: does not prevent attaching multiple ports to same
  network on same instance

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When booting a vm you are only allocated to have one port on each
  network attached to the instance. Currently, you can add multiple
  interfaces on the same network via interface-attach. This is a
  regression from grizzly where this was working correctly previously.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1245682/+subscriptions



[Yahoo-eng-team] [Bug 1432582] [NEW] OVS agent shows connected to AMQP but neutron server rejects the request.

2015-03-16 Thread Sudipta Biswas
Public bug reported:

In my environment, the neutron-server (on the controller) and the
neutron agent (OVS agent - on the compute host) had a timestamp offset.

The date command shows the following:

Mon Mar 16 02:50:36 UTC 2015  -- On the compute.
Mon Mar 16 09:28:52 UTC 2015 -- On the controller.

The neutron agent in the openvswitch-agent.log says connected to AMQP server on 
IP:5672
However, the neutron server doesn't seem to register the agent.

Upon switching on debug - the server.log on the neutron-server shows:

 Message with invalid timestamp received report_state /usr/lib/python2.7
/site-packages/neutron/db/agents_db.py:232

There's no way to detect this problem from the compute node - since it
gives an illusion that the service is connected to AMQP - however, only
when debugged on the neutron-server, the actual error is determined.

In agents_db.py, report_state should throw back an exception to the
agent, saying the state report failed with the appropriate message.
This can be achieved by bubbling an exception from the report_state
method up to the OVS agent on a timestamp mismatch.
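
The proposed bubbling can be sketched as a drift check whose result is returned to the agent instead of only being logged server-side (illustrative sketch: the threshold, function, and message are assumptions, not Neutron's actual code):

```python
def state_report_error(agent_timestamp, server_timestamp,
                       max_drift_seconds=30):
    """Return an error string that the server could send back to the
    reporting agent when the report_state timestamp drifts too far,
    instead of only logging the rejection server-side.

    Illustrative sketch only; the threshold and names are assumptions,
    not Neutron's agents_db implementation.
    """
    drift = abs(server_timestamp - agent_timestamp)
    if drift > max_drift_seconds:
        return ("report_state rejected: timestamp differs from server "
                "time by %d seconds" % drift)
    return None  # Within tolerance: the state report is accepted.
```

An agent receiving such a message would fail loudly on the compute node, rather than sitting "connected to AMQP" while the server silently drops its reports.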

** Affects: neutron
 Importance: Undecided
 Assignee: Sudipta Biswas (sbiswas7)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sudipta Biswas (sbiswas7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432582

Title:
  OVS agent shows connected to AMQP but neutron server rejects the
  request.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In my environment, the neutron-server (on the controller) and the
  neutron agent (OVS agent - on the compute host) had a timestamp
  offset.

  The date command shows the following:

  Mon Mar 16 02:50:36 UTC 2015  -- On the compute.
  Mon Mar 16 09:28:52 UTC 2015 -- On the controller.

  The neutron agent in the openvswitch-agent.log says connected to AMQP server 
on IP:5672
  However, the neutron server doesn't seem to register the agent.

  Upon switching on debug - the server.log on the neutron-server shows:

   Message with invalid timestamp received report_state
  /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:232

  There's no way to detect this problem from the compute node - since it
  gives an illusion that the service is connected to AMQP - however,
  only when debugged on the neutron-server, the actual error is
  determined.

  In agents_db.py, report_state should throw back an exception to the
  agent, saying the state report failed with the appropriate message.
  This can be achieved by bubbling an exception from the report_state
  method up to the OVS agent on a timestamp mismatch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432582/+subscriptions



[Yahoo-eng-team] [Bug 1233767] Re: Error during ComputeManager.update_available_resource: 'NoneType' object is not iterable

2015-03-16 Thread Davanum Srinivas (DIMS)
*** This bug is a duplicate of bug 1238374 ***
https://bugs.launchpad.net/bugs/1238374

** This bug has been marked a duplicate of bug 1238374
   TypeError in periodic task 'update_available_resource'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233767

Title:
  Error during ComputeManager.update_available_resource: 'NoneType'
  object is not iterable

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  2013-10-01 16:29:18.979 25543 ERROR nova.virt.libvirt.driver [-] [instance: 
f3ed83a9-2e8f-4d6e-a462-ad35e88507b6] During wait destroy, instance disappeared.
  2013-10-01 16:30:04.240 25543 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager.update_available_resource: 'NoneType' object is not 
iterable
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/periodic_task.py, line 180, in 
run_periodic_tasks
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/compute/manager.py, line 4863, in 
update_available_resource
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/lockutils.py, line 246, in 
inner
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/compute/resource_tracker.py, line 318, in 
update_available_resource
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
self.pci_tracker.clean_usage(instances, migrations, orphans)
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/pci/pci_manager.py, line 285, in clean_usage
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
for dev in self.claims.pop(uuid):
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 
TypeError: 'NoneType' object is not iterable
  2013-10-01 16:30:04.240 25543 TRACE nova.openstack.common.periodic_task 

  
  
http://logs.openstack.org/35/43335/12/check/check-tempest-devstack-vm-full/49e7df0/logs/screen-n-cpu.txt.gz?level=TRACE#_2013-10-01_16_29_18_979

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233767/+subscriptions



[Yahoo-eng-team] [Bug 1248936] Re: Information about domain is not set in the Nova context

2015-03-16 Thread Davanum Srinivas (DIMS)
Fixed with latest switch to oslo.log and oslo.context libraries.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248936

Title:
  Information about domain is not set in the Nova context

Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  When a request is made to Keystone, information such as user_id and
  project_id is set in the NovaKeystoneContext class
  (https://github.com/openstack/nova/blob/master/nova/api/auth.py#L79).
  When using the Keystone V3 API, domain information should be passed as
  well, but Nova is not ready to receive it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1248936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334661] Re: rpc.cleanup method is not reachable due to wrong import of rpc module

2015-03-16 Thread Davanum Srinivas (DIMS)
** Changed in: oslo-incubator
   Status: In Progress => Invalid

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334661

Title:
  rpc.cleanup method is not reachable due to wrong import of rpc module

Status in OpenStack Compute (Nova):
  Invalid
Status in The Oslo library incubator:
  Invalid

Bug description:
  In the nova service, the rpc.cleanup method is not getting called
  because 'rpc' is not imported properly.

  rpc = importutils.try_import('nova.openstack.common.rpc')
  It should be:
  rpc = importutils.try_import('nova.rpc')

  The 'rpc' module is not present in the nova/openstack/common package.
  Since it is present in the nova package, it should be imported from there.

  Also, the rpc cleanup method should not be called while restarting the
  service, as cleanup should ideally be done only when exiting the
  service. On a SIGHUP signal, the service gets restarted and tries to
  clean up the rpc.
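The silent failure makes sense given how `try_import` works: it swallows the ImportError and returns None, so nothing flags the wrong path. A simplified stand-in (not the oslo source) behaves roughly like this:

```python
import importlib

def try_import(name, default=None):
    """Simplified stand-in for oslo's importutils.try_import: return the
    imported module, or 'default' (None) when the import fails."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return default

# The mis-spelled path silently becomes None, so any code guarded by
# 'if rpc:' -- including the cleanup call -- never runs:
rpc = try_import('nova.openstack.common.rpc')
print(rpc)  # None on any system without that module installed
```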

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432550] [NEW] Instance Console hangs up at Starting VNC handshake

2015-03-16 Thread Wu Hong Guang
Public bug reported:

Testing steps:

1: set up a latest devstack instance
2: log in as demo and launch an instance for the demo project
3: go to the instance console tab
4: show only the console by clicking "Click here to show only console"
5: the instance console hangs at "Starting VNC handshake"


the n-novnc log reports:

2015-03-16 16:21:03.570 INFO nova.console.websocketproxy [req-bb5cb371-79ed-4d72-adc5-dca791661cea None None]  14: connect info: {u'instance_uuid': u'92c03c13-049c-4098-9e45-6b5d40dd1200', u'internal_access_path': None, u'last_activity_at': 1426494063.149884, u'console_type': u'novnc', u'host': u'127.0.0.1', u'token': u'eca8b2e7-b675-49d9-9b31-e5cb86feb656', u'port': u'5900'}
2015-03-16 16:21:03.571 INFO nova.console.websocketproxy [req-bb5cb371-79ed-4d72-adc5-dca791661cea None None]  14: connecting to: 127.0.0.1:5900
2015-03-16 16:21:23.241 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
2015-03-16 16:21:23.242 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
2015-03-16 16:21:23.241 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
2015-03-16 16:21:23.242 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
2015-03-16 16:21:23.242 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
2015-03-16 16:21:23.245 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 4 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2015-03-16 16:21:23.245 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 5 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2015-03-16 16:21:23.246 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 2 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
2015-03-16 16:26:13.260 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 1 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432550

Title:
  Instance console hangs at "Starting VNC handshake"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Testing steps:

  1: set up a latest devstack instance
  2: log in as demo and launch an instance for the demo project
  3: go to the instance console tab
  4: show only the console by clicking "Click here to show only console"
  5: the instance console hangs at "Starting VNC handshake"

  
  the n-novnc log reports:

  2015-03-16 16:21:03.570 INFO nova.console.websocketproxy [req-bb5cb371-79ed-4d72-adc5-dca791661cea None None]  14: connect info: {u'instance_uuid': u'92c03c13-049c-4098-9e45-6b5d40dd1200', u'internal_access_path': None, u'last_activity_at': 1426494063.149884, u'console_type': u'novnc', u'host': u'127.0.0.1', u'token': u'eca8b2e7-b675-49d9-9b31-e5cb86feb656', u'port': u'5900'}
  2015-03-16 16:21:03.571 INFO nova.console.websocketproxy [req-bb5cb371-79ed-4d72-adc5-dca791661cea None None]  14: connecting to: 127.0.0.1:5900
  2015-03-16 16:21:23.241 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
  2015-03-16 16:21:23.242 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
  2015-03-16 16:21:23.241 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
  2015-03-16 16:21:23.242 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
  2015-03-16 16:21:23.242 INFO nova.console.websocketproxy [-] 172.1.1.105: ignoring empty handshake
  2015-03-16 16:21:23.245 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 4 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
  2015-03-16 16:21:23.245 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 5 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
  2015-03-16 16:21:23.246 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 2 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824
  2015-03-16 16:26:13.260 DEBUG nova.console.websocketproxy [-] Reaing zombies, active child count is 1 from (pid=4238) vmsg /usr/local/lib/python2.7/dist-packages/websockify/websocket.py:824

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432685] [NEW] 2014.1.4 introduces new dependency for oslo.utils

2015-03-16 Thread Corey Bryant
Public bug reported:

In the latest release of 2014.1.4 nova, commit
4b46a86f8a2af096e399df8518f8269f825684e0 introduces a new dependency for
oslo.utils in nova/compute/api.py.

I think it's policy to not allow new dependencies in stable releases so
this may have slipped by.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  In the latest release of 2014.1.4 nova, commit
  4b46a86f8a2af096e399df8518f8269f825684e0 introduces a new dependency for
  oslo.utils in nova/compute/api.py.
+ 
+ I think it's policy to not allow new dependencies in stable releases so
+ this may have slipped by.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432685

Title:
  2014.1.4 introduces new dependency for oslo.utils

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the latest release of 2014.1.4 nova, commit
  4b46a86f8a2af096e399df8518f8269f825684e0 introduces a new dependency
  for oslo.utils in nova/compute/api.py.

  I think it's policy to not allow new dependencies in stable releases
  so this may have slipped by.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432490] Re: TestEncryptedCinderVolumes cryptsetup name is too long

2015-03-16 Thread Mike Perez
Going to take John's suggestion of just passing a uuid instead of the
volume name in the iqn.

** Changed in: cinder
   Status: New => Incomplete

** Changed in: nova
   Status: New => Invalid

** Changed in: cinder
   Status: Incomplete => Invalid

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432490

Title:
  TestEncryptedCinderVolumes cryptsetup name is too long

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  First off, while I understand this is not reproducible with the
  reference implementation (LVM), this seems like an unknown limitation
  today, since we're not enforcing any length on the IQN or recommending
  anything.

  When running Datera storage with Cinder and the following
  TestEncryptedCinderVolumes tests:

  {0} tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup
  {0} tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks

  cryptsetup complains about the name being too long:

  http://paste.openstack.org/show/192537

  Nova uses the device name that's in /dev/disk-by-path, which in this
  case is the returned iqn from the backend:

  ip-172.30.128.2:3260-iscsi-iqn.2013-05.com.daterainc:OpenStack-TestEncryptedCinderVolumes-676292884:01:sn:aef6a6f1cd84768f-lun-0

  I already started talking to Matt Treinish about this on IRC last
  week. Unsure where the fix should actually go.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1432490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432880] Re: [Launch Instance Fix] Add Sorting To Table in Select Source

2015-03-16 Thread Shaoquan Chen
** Description changed:

- In Launch Instance wizard work flow, the columns of the available source
- table of select source step should be sortable.
+ In the new angular-based Launch Instance wizard work flow, the columns
+ of the available source table of select source step should be sortable.

** Changed in: stratagus
   Status: Invalid => In Progress

** Project changed: stratagus = horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432880

Title:
  [Launch Instance Fix] Add Sorting To Table in Select Source

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the new angular-based Launch Instance wizard work flow, the columns
  of the available source table of select source step should be
  sortable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432924] Re: [Launch Instance Fix] Cleaning up unused injected dependencies

2015-03-16 Thread Shaoquan Chen
** Project changed: stratagus = horizon

** Changed in: horizon
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432924

Title:
  [Launch Instance Fix] Cleaning up unused injected dependencies

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There are some unused angular injected dependencies in some
  controllers for Launch Instance work flow.  They should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432880] [NEW] [Launch Instance Fix] Add Sorting To Table in Select Source

2015-03-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In the new angular-based Launch Instance wizard work flow, the columns
of the available source table of select source step should be sortable.

** Affects: horizon
 Importance: Low
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
[Launch Instance Fix] Add Sorting To Table in Select Source
https://bugs.launchpad.net/bugs/1432880
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432924] [NEW] [Launch Instance Fix] Cleaning up unused injected dependencies

2015-03-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

There are some unused angular injected dependencies in some controllers
for Launch Instance work flow.  They should be removed.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: Invalid

-- 
[Launch Instance Fix] Cleaning up unused injected dependencies
https://bugs.launchpad.net/bugs/1432924
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432920] [NEW] [Launch Instance Fix] Removing period from selecting message

2015-03-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In the Launch Instance wizard workflow, the user-facing selection
messages should not end with periods, per the UX design.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: Invalid

-- 
[Launch Instance Fix] Removing period from selecting message
https://bugs.launchpad.net/bugs/1432920
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432873] [NEW] Add FDB bridge entry fails if old entry not removed

2015-03-16 Thread Kevin Stevens
Public bug reported:

Running on Ubuntu 14.04 with Linuxbridge agent and L2pop with vxlan
networks.

In situations where remove_fdb_entries messages are lost/never consumed, 
future add_fdb_bridge_entry attempts will fail with the following example 
error message:
2015-03-16 21:10:08.520 30207 ERROR neutron.agent.linux.utils [req-390ab63a-9d3c-4d0e-b75b-200e9f5b97c6 None]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'bridge', 'fdb', 'add', 'fa:16:3e:a5:15:35', 'dev', 'vxlan-15', 'dst', '172.30.100.60']
Exit code: 2
Stdout: ''
Stderr: 'RTNETLINK answers: File exists\n'

In our case, instances were unable to communicate with their Neutron
router because vxlan traffic was being forwarded to the wrong vxlan
endpoint. This was corrected by either migrating the router to a new
agent or by executing a bridge fdb del for the fdb entry corresponding
with the Neutron router mac address. Once deleted, the LB agent added
the appropriate fdb entry at the next polling event.

If anything is unclear, please let me know.
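One agent-side mitigation (a sketch under assumptions, not the merged Neutron fix) is to make the add idempotent: if `bridge fdb add` exits with "File exists", retry with `bridge fdb replace`, which iproute2 accepts for an existing entry. The command runner is injected here so the logic can be exercised without root or a real bridge:

```python
def ensure_fdb_entry(run_cmd, mac, dev, dst):
    """Add an fdb entry, falling back to 'bridge fdb replace' when a
    stale entry already exists (exit code 2, 'RTNETLINK answers: File
    exists').  run_cmd(cmd_list) -> (exit_code, stderr)."""
    code, err = run_cmd(['bridge', 'fdb', 'add', mac,
                         'dev', dev, 'dst', dst])
    if code == 2 and 'File exists' in err:
        # A stale entry (e.g. from a lost remove_fdb_entries message)
        # is overwritten instead of aborting the whole update.
        code, err = run_cmd(['bridge', 'fdb', 'replace', mac,
                             'dev', dev, 'dst', dst])
    return code
```

In production the runner would invoke the rootwrap-ed `bridge` binary; in tests it can simply be a stub recording the commands it was given.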

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l2-pop lb linuxbridge vxlan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432873

Title:
  Add FDB bridge entry fails if old entry not removed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Running on Ubuntu 14.04 with Linuxbridge agent and L2pop with vxlan
  networks.

  In situations where remove_fdb_entries messages are lost/never consumed, 
future add_fdb_bridge_entry attempts will fail with the following example 
error message:
  2015-03-16 21:10:08.520 30207 ERROR neutron.agent.linux.utils [req-390ab63a-9d3c-4d0e-b75b-200e9f5b97c6 None]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'bridge', 'fdb', 'add', 'fa:16:3e:a5:15:35', 'dev', 'vxlan-15', 'dst', '172.30.100.60']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: File exists\n'

  In our case, instances were unable to communicate with their Neutron
  router because vxlan traffic was being forwarded to the wrong vxlan
  endpoint. This was corrected by either migrating the router to a new
  agent or by executing a bridge fdb del for the fdb entry
  corresponding with the Neutron router mac address. Once deleted, the
  LB agent added the appropriate fdb entry at the next polling event.

  If anything is unclear, please let me know.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432858] [NEW] Suboptimal security groups calculation for nodes

2015-03-16 Thread Aleksandr Shaposhnikov
Public bug reported:

During my testing of OpenStack with Neutron at scale, I found that security group updates for the neutron-ovs-agent are suboptimal. The agent will request the security group rules for all of the ports attached to OVS. The server will then provide the rules for each individual port, which are almost identical for the ports in the same security group. This becomes extremely large if the security group has a lot of members and the OVS agent has a lot of ports.

So here is some math:
If a security group has 2000 VMs spread across 50 compute nodes, the average node will have 40 VMs. If a new VM is launched in the same security group, each compute node will get a notification and request the security group info, which will have 2001 entries for each of the ~40 ports on that node. That's ~80k records that need to be delivered to 50 compute nodes in a short period of time. The only difference between each port's list of rules is that the port's fixed_ips are excluded.

I suggest an approach where there would be only one response per node, containing 2000+1 records. The agent would be responsible for taking the list of rules for the security group and applying it per port by excluding the rule referencing the port's address. Besides avoiding the generation of useless information and a lot of work on the neutron-server side, this would significantly decrease the load on oslo.messaging and neutron-server. Right now in my env (25 computes; 1500 VMs) a security groups response can be up to 32 MB for each compute node/ovs-agent.
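The arithmetic in the report can be reproduced directly (the figures below come from the description above):

```python
vms, nodes = 2000, 50
ports_per_node = vms // nodes      # 40 VMs (= ports) on the average node
entries_per_port = vms + 1         # 2001 members after the new VM launches
records_per_node = ports_per_node * entries_per_port
print(records_per_node)            # 80040 -- the "~80k records" per node
# The suggested single per-node response carries only entries_per_port
# (2001) records; the duplication factor removed is just the number of
# ports on the node:
print(records_per_node // entries_per_port)  # 40
```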

** Affects: mos
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: scale

** Also affects: mos
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432858

Title:
  Suboptimal security groups calculation for nodes

Status in Mirantis OpenStack:
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  During my testing of OpenStack with Neutron at scale, I found that security group updates for the neutron-ovs-agent are suboptimal. The agent will request the security group rules for all of the ports attached to OVS. The server will then provide the rules for each individual port, which are almost identical for the ports in the same security group. This becomes extremely large if the security group has a lot of members and the OVS agent has a lot of ports.

  So here is some math:
  If a security group has 2000 VMs spread across 50 compute nodes, the average node will have 40 VMs. If a new VM is launched in the same security group, each compute node will get a notification and request the security group info, which will have 2001 entries for each of the ~40 ports on that node. That's ~80k records that need to be delivered to 50 compute nodes in a short period of time. The only difference between each port's list of rules is that the port's fixed_ips are excluded.

  I suggest an approach where there would be only one response per node, containing 2000+1 records. The agent would be responsible for taking the list of rules for the security group and applying it per port by excluding the rule referencing the port's address. Besides avoiding the generation of useless information and a lot of work on the neutron-server side, this would significantly decrease the load on oslo.messaging and neutron-server. Right now in my env (25 computes; 1500 VMs) a security groups response can be up to 32 MB for each compute node/ovs-agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1432858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432867] [NEW] New Launch Instance workflow needs validation

2015-03-16 Thread Kelly Domico
Public bug reported:

The new Launch Instance workflow needs to have the Launch Instance
button disabled until all requirements are met. It should check the following:

1. Instance name
2. Source selected
  - if creating a volume: volume size > 0
  - if a volume is selected: volume size >= max(volume image size, volume image root disk)
3. Instance count >= 1 (> 1 allowed only if an image or instance snapshot source type is selected)
4. Valid flavor
  - flavor RAM >= image minimum RAM
  - flavor root disk >= image minimum disk
5. Valid security group
6. Valid network if Neutron is enabled
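The checks above can be collected into a single gating predicate. The sketch below is illustrative pseudologic only: the field names are hypothetical, and Horizon's real implementation is AngularJS form validation, not Python:

```python
# Illustrative only: field names are hypothetical, not Horizon's.
def launch_allowed(f):
    if not f.get('name'):                                   # 1. name set
        return False
    src = f.get('source_type')
    if src is None:                                         # 2. source chosen
        return False
    if f.get('create_volume') and f.get('volume_size', 0) <= 0:
        return False                                        #    new volume > 0
    if src == 'volume' and f['volume_size'] < max(f['image_size'],
                                                  f['image_min_disk']):
        return False                                        #    volume fits image
    count = f.get('count', 0)                               # 3. count rules
    if count < 1 or (count > 1 and src not in ('image', 'snapshot')):
        return False
    if (f['flavor_ram'] < f['image_min_ram'] or             # 4. flavor fits
            f['flavor_disk'] < f['image_min_disk']):
        return False
    if not f.get('security_groups'):                        # 5. security group
        return False
    if f.get('neutron_enabled') and not f.get('networks'):  # 6. network
        return False
    return True
```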

** Affects: horizon
 Importance: Undecided
 Assignee: Kelly Domico (kelly-domico)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kelly Domico (kelly-domico)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432867

Title:
  New Launch Instance workflow needs validation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The new Launch Instance workflow needs to have the Launch Instance
  button disabled until all requirements are met. It should check the following:

  1. Instance name
  2. Source selected
    - if creating a volume: volume size > 0
    - if a volume is selected: volume size >= max(volume image size, volume image root disk)
  3. Instance count >= 1 (> 1 allowed only if an image or instance snapshot source type is selected)
  4. Valid flavor
    - flavor RAM >= image minimum RAM
    - flavor root disk >= image minimum disk
  5. Valid security group
  6. Valid network if Neutron is enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432856] [NEW] Security groups aren’t network topology aware

2015-03-16 Thread Aleksandr Shaposhnikov
Public bug reported:

Security group rules for a host include all hosts that are members of
the security group, even though some can be inaccessible because they
aren't attached to the same router. This introduces two problems. First,
it will create unneeded iptables rules on nodes and additional work on
the neutron-server and agent side. Second, in the case of overlapping
networks, the rules that result from a host on a completely separate
network may end up allowing traffic from an untrusted host on the same
network.

e.g. Security group SG1 has rules to allow traffic from other members of
the same group. Members of SG1 include 10.0.0.2 and 10.0.0.3, which are
on two separate networks with overlapping IPs. The iptables rules on
10.0.0.2 will then permit traffic from 10.0.0.3 even though 10.0.0.3
could be an untrusted node on its own network.

Workaround: use a separate security group per network. This will
significantly decrease the calculation load on neutron-server and also
decrease the number of iptables rules on nodes.

** Affects: mos
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: scale

** Also affects: mos
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432856

Title:
  Security groups aren’t network topology aware

Status in Mirantis OpenStack:
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Security group rules for a host include all hosts that are members of
  the security group, even though some can be inaccessible because they
  aren't attached to the same router. This introduces two problems.
  First, it will create unneeded iptables rules on nodes and additional
  work on the neutron-server and agent side. Second, in the case of
  overlapping networks, the rules that result from a host on a
  completely separate network may end up allowing traffic from an
  untrusted host on the same network.

  e.g. Security group SG1 has rules to allow traffic from other members
  of the same group. Members of SG1 include 10.0.0.2 and 10.0.0.3, which
  are on two separate networks with overlapping IPs. The iptables rules
  on 10.0.0.2 will then permit traffic from 10.0.0.3 even though
  10.0.0.3 could be an untrusted node on its own network.

  Workaround: use a separate security group per network. This will
  significantly decrease the calculation load on neutron-server and
  also decrease the number of iptables rules on nodes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1432856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432505] [NEW] all the status values are not translatable in port table

2015-03-16 Thread Masco Kaliyamoorthy
Public bug reported:

In the port table, the status values are not translatable.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432505

Title:
  all the status values are not translatable in port table

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the port table, the status values are not translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432785] [NEW] L3 HA functional test not cleaning up router

2015-03-16 Thread Assaf Muller
Public bug reported:

neutron.tests.functional.agent.test_l3_agent.L3AgentTestCase.test_ha_router_conf_on_restarted_agent
is not deleting the router it creates. This leaves namespaces on the
test machine.

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432785

Title:
  L3 HA functional test not cleaning up router

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
neutron.tests.functional.agent.test_l3_agent.L3AgentTestCase.test_ha_router_conf_on_restarted_agent
  is not deleting the router it creates. This leaves namespaces on the
  test machine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432769] [NEW] In modal dialog, the first form element should be automatically focused for user

2015-03-16 Thread Shaoquan Chen
Public bug reported:

When popping up a dialog, the first form element should
be automatically focused for the user.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Shaoquan Chen (sean-chen2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432769

Title:
  In modal dialog, the first form element should be automatically
  focused for user

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When popping up a dialog, the first form element should
  be automatically focused for the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390336] Re: Libvirt: Raise wrong exception message when binding vif failed

2015-03-16 Thread Brent Eagles
I removed neutron from this bug after confirming that binding_failed
is a valid state and indicates a potentially transient condition.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1390336

Title:
  Libvirt: Raise wrong exception message when binding vif failed

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Nova raises a NovaException with the wrong message after a failed
  attempt to build an instance on a compute node.

  
  2014-11-07 14:40:54.446 ERROR nova.compute.manager [-] [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6] Instance failed to spawn
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6] Traceback (most recent call last):
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/compute/manager.py", line 2244, in _build_resources
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     yield resources
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/compute/manager.py", line 2114, in _build_and_run_instance
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     block_device_info=block_device_info)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2597, in spawn
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     write_to_disk=True)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4157, in _get_guest_xml
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     context)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4018, in _get_guest_config
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     flavor, virt_type)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/vif.py", line 352, in get_config
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     _("Unexpected vif_type=%s") % vif_type)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6] NovaException: Unexpected vif_type=binding_failed
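
The generic "Unexpected vif_type" error conflates a transient binding failure with a genuinely unknown VIF type. A minimal sketch of the kind of distinction the bug asks for; the function body and messages here are illustrative, not Nova's actual code:

```python
class NovaException(Exception):
    """Stand-in for nova.exception.NovaException."""


VIF_TYPE_BINDING_FAILED = 'binding_failed'


def get_config(vif_type):
    # Distinguish the (possibly transient) Neutron binding failure from
    # a VIF type this driver genuinely does not recognise.
    if vif_type == VIF_TYPE_BINDING_FAILED:
        raise NovaException(
            "VIF binding failed for this port; the failure may be "
            "transient, so the request can be retried")
    raise NovaException("Unexpected vif_type=%s" % vif_type)
```

With such a check, the log would point at the port-binding failure directly instead of the misleading generic message.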

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390336/+subscriptions



[Yahoo-eng-team] [Bug 1432790] [NEW] Action directive does not update angular expressions

2015-03-16 Thread Thai Tran
Public bug reported:

The angular Action directive uses transclude to clone the innerHTML and
prepend it once. However, when the data model is updated, the expression
inside the innerHTML does not get updated.

Example:
<action
  button-type="menu-item"
  action-classes="'btn-default'"
  callback="actions.enable.toggle" item="user">
  {$ user.enabled? 'Enable': 'Disable' $}
</action>

When user.enabled is updated, the 'Enable'/'Disable' label does not change.

** Affects: horizon
 Importance: Undecided
 Assignee: Kelly Domico (kelly-domico)
 Status: New


** Tags: angular ui

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432790

Title:
  Action directive does not update angular expressions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The angular Action directive uses transclude to clone the innerHTML
  and prepend it once. However, when the data model is updated, the
  expression inside the innerHTML does not get updated.

  Example:
  <action
    button-type="menu-item"
    action-classes="'btn-default'"
    callback="actions.enable.toggle" item="user">
    {$ user.enabled? 'Enable': 'Disable' $}
  </action>

  When user.enabled is updated, the 'Enable'/'Disable' label does not
  change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432790/+subscriptions



[Yahoo-eng-team] [Bug 1426324] Re: VFS blkid calls need to handle 0 or 2 return codes

2015-03-16 Thread James Page
** Changed in: cloud-archive
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426324

Title:
  VFS blkid calls need to handle 0 or 2 return codes

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Compute (Nova):
  In Progress
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  kilo-2 introduced blkid calls for fs detection on all new instances; if
  the specified key is not found on the block device, blkid will return
  2 instead of 0, so nova needs to handle this:

  2015-02-27 10:48:51.270 3062 INFO nova.virt.disk.vfs.api [-] Unable to import guestfs, falling back to VFSLocalFS
  2015-02-27 10:48:51.476 3062 ERROR nova.compute.manager [-] [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b] Instance failed to spawn
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b] Traceback (most recent call last):
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2328, in _build_resources
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     yield resources
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2198, in _build_and_run_instance
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     flavor=flavor)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2329, in spawn
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     admin_pass=admin_password)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2728, in _create_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     project_id=instance['project_id'])
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 230, in cache
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     *args, **kwargs)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 507, in create_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     copy_qcow2_image(base, self.path, size)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 431, in inner
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     return f(*args, **kwargs)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 473, in copy_qcow2_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     disk.extend(target, size, use_cow=True)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 183, in extend
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     if not is_image_extendable(image, use_cow):
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 235, in is_image_extendable
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     if fs.get_image_fs() in SUPPORTED_FS_TO_EXTEND:
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File "/usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py", line 167, in get_image_fs
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 1aa12a52-c91b-49b4-9636-63c39f7ba72b]     run_as_root=True)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance:
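
Since blkid legitimately exits with status 2 when the probed key is absent, callers must whitelist that code instead of treating any non-zero status as failure. A minimal sketch of the pattern, analogous to passing `check_exit_code=[0, 2]` to oslo's `execute`; the helper name here is illustrative:

```python
import subprocess


def run_tolerant(cmd, ok_codes=(0, 2)):
    """Run cmd, treating any exit status in ok_codes as success.

    blkid exits with 2 (not an error) when the requested key is not
    found on the block device, so 2 must be accepted alongside 0.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode not in ok_codes:
        raise RuntimeError(
            "%r failed with exit code %d" % (cmd, proc.returncode))
    return proc.stdout, proc.returncode
```

For example, probing an unformatted image with `run_tolerant(['blkid', '-o', 'value', '-s', 'TYPE', path])` then returns empty output with exit code 2 instead of raising.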

[Yahoo-eng-team] [Bug 1432798] [NEW] Action directive uses isolated scope, makes it difficult to pass in translated texts.

2015-03-16 Thread Thai Tran
Public bug reported:

Assume that I have an object containing translated texts.
labels = {
  hi: gettext('hi'),
  ho: gettext('ho')
}

The current architecture does not let me embed translated text into an
angular template directly. We have a separate work item to address this
in the L release. So the current implementation is to use this object in
our template for translated texts.

Currently, the Action directive uses an isolated scope:
scope = {
  actionClasses: '=?',
  callback: '=?',
  disabled: '=?',
  item: '=?'
}

This poses a problem because we can no longer use the translated text in this directive.
Solutions that might work:

1. Use transclude: true for directive
2. Add labels attribute to directive

** Affects: horizon
 Importance: Medium
 Assignee: Kelly Domico (kelly-domico)
 Status: New

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: horizon
 Milestone: None => kilo-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432798

Title:
  Action directive uses isolated scope, makes it difficult to pass in
  translated texts.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Assume that I have an object containing translated texts.
  labels = {
hi: gettext('hi'),
ho: gettext('ho')
  }

  The current architecture does not let me embed translated text into an
  angular template directly. We have a separate work item to address
  this in the L release. So the current implementation is to use this
  object in our template for translated texts.

  Currently, the Action directive uses an isolated scope:
  scope = {
actionClasses: '=?',
callback: '=?',
disabled: '=?',
item: '=?'
  }

  This poses a problem because we can no longer use the translated text in this directive.
  Solutions that might work:

  1. Use transclude: true for directive
  2. Add labels attribute to directive

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432798/+subscriptions



[Yahoo-eng-team] [Bug 1426121] Re: vmw nsx: add/remove interface on dvr is broken

2015-03-16 Thread Jason Niesz
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426121

Title:
  vmw nsx: add/remove interface on dvr is broken

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron juno series:
  Fix Released
Status in VMware NSX:
  Fix Committed

Bug description:
  When the NSX-specific extension was dropped in favour of the community
  one, a side effect unfortunately caused add/remove interface
  operations to fail when they were passed a subnet id.

  This should be fixed soon and backported to Juno.
  Icehouse is not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426121/+subscriptions



[Yahoo-eng-team] [Bug 1432806] [NEW] Primary VIP being deleted from L3 HA routers after agent restart

2015-03-16 Thread Assaf Muller
Public bug reported:

After you restart an L3 agent, the primary VIP is deleted from the
router namespace. It should not have any effect on HEAD / Kilo / Juno,
but I found it while testing bp/report-ha-router-master, where it has a
very nasty effect: it reports the router as moving to standby.

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432806

Title:
  Primary VIP being deleted from L3 HA routers after agent restart

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  After you restart an L3 agent, the primary VIP is deleted from the
  router namespace. It should not have any effect on HEAD / Kilo / Juno,
  but I found it while testing bp/report-ha-router-master, where it has
  a very nasty effect: it reports the router as moving to standby.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432806/+subscriptions



[Yahoo-eng-team] [Bug 1432808] [NEW] variablize fonts

2015-03-16 Thread mattfarina
Public bug reported:

In some cases fonts (family, size, etc.) are specified as variables, and
at other times they are set in the components themselves. We should move
to one consistent method; I propose variables.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432808

Title:
  variablize fonts

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In some cases fonts (family, size, etc.) are specified as variables,
  and at other times they are set in the components themselves. We
  should move to one consistent method; I propose variables.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432808/+subscriptions



[Yahoo-eng-team] [Bug 1431767] Re: Table rows aligned incorrectly

2015-03-16 Thread Rob Cresswell
Seems this was fixed elsewhere.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1431767

Title:
  Table rows aligned incorrectly

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The CSS for table rows contains 'vertical-align: top' causing the cell
  data to align strangely. There is also an additional top border on the
  tables.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1431767/+subscriptions



[Yahoo-eng-team] [Bug 1432810] [NEW] Focus should be trapped in modal dialog box when popping up

2015-03-16 Thread Shaoquan Chen
Public bug reported:

When a modal dialog is shown, by design the user cannot click
buttons and links under the overlay layer. However, when pressing
the [Tab]/[Shift+Tab] keys, the user is still able to move focus
to any focusable element on the page outside the modal dialog and
interact with it via the keyboard, which is not desired. The
`focusTrap` service provides an elegant solution for this.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1432810

Title:
  Focus should be trapped in modal dialog box when popping up

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When a modal dialog is shown, by design the user cannot click
  buttons and links under the overlay layer. However, when pressing
  the [Tab]/[Shift+Tab] keys, the user is still able to move focus
  to any focusable element on the page outside the modal dialog and
  interact with it via the keyboard, which is not desired. The
  `focusTrap` service provides an elegant solution for this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1432810/+subscriptions



[Yahoo-eng-team] [Bug 1432522] [NEW] weakref ReferenceError not handled in callback manager

2015-03-16 Thread Saggi Mizrahi
Public bug reported:

How to reproduce:
1. register a callable 
2. delete it
3. notify

Example output:

2015-03-16 10:39:07.600 ERROR neutron.callbacks.manager [req-0e06ab7e-12f4-4807-bbbe-a05d183a54f5 None None] Error during notification for dragonflow.neutron.services.l3.l3_controller_plugin.ControllerL3ServicePlugin.dvr_vmarp_table_update port, after_update
2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager Traceback (most recent call last):
2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager   File "/opt/stack/neutron/neutron/callbacks/manager.py", line 143, in _notify_loop
2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager     callback(resource, event, trigger, **kwargs)
2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager ReferenceError: weakly-referenced object no longer exists
2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager
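
A minimal sketch of the failure mode and a pruning fix; this is an illustrative manager, not Neutron's actual `neutron.callbacks.manager`. Subscriptions are held as weak references, so once the subscriber is garbage-collected the dead reference must be skipped (or pruned) rather than called:

```python
import weakref


class CallbacksManager:
    def __init__(self):
        self._callbacks = []

    def subscribe(self, callback):
        # Hold a weak reference so the subscription does not keep the
        # callee alive.
        self._callbacks.append(weakref.ref(callback))

    def notify(self, *args, **kwargs):
        results, dead = [], []
        for ref in self._callbacks:
            cb = ref()  # returns None once the target is collected
            if cb is None:
                # Prune instead of letting the call raise ReferenceError.
                dead.append(ref)
                continue
            results.append(cb(*args, **kwargs))
        for ref in dead:
            self._callbacks.remove(ref)
        return results
```

The three repro steps above then become: subscribe a callable, delete it, and notify; the dead reference is silently pruned instead of aborting the notification loop.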

** Affects: neutron
 Importance: Undecided
 Assignee: Saggi Mizrahi (ficoos)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432522

Title:
  weakref ReferenceError not handled in callback manager

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  How to reproduce:
  1. register a callable 
  2. delete it
  3. notify

  Example output:

  2015-03-16 10:39:07.600 ERROR neutron.callbacks.manager [req-0e06ab7e-12f4-4807-bbbe-a05d183a54f5 None None] Error during notification for dragonflow.neutron.services.l3.l3_controller_plugin.ControllerL3ServicePlugin.dvr_vmarp_table_update port, after_update
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager Traceback (most recent call last):
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager   File "/opt/stack/neutron/neutron/callbacks/manager.py", line 143, in _notify_loop
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager     callback(resource, event, trigger, **kwargs)
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager ReferenceError: weakly-referenced object no longer exists
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432522/+subscriptions
