[Yahoo-eng-team] [Bug 1260617] Re: Provide the ability to attach volumes in the read-only mode

2013-12-16 Thread Zhenguo Niu
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Zhenguo Niu (niu-zglinux)

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Zhenguo Niu (niu-zglinux)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260617

Title:
  Provide the ability to attach volumes in the read-only mode

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
  Cinder now supports the ability to attach volumes in read-only mode;
  this should be exposed through Horizon. Read-only mode can be enforced
  by hypervisor configuration during the attachment. Libvirt, Xen,
  VMware and Hyper-V support R/O volumes.
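
  A hedged sketch of the workflow this would expose, using the Python
  clients (the credentials, endpoint and the exact readonly-flag call are
  assumptions; the method may be named differently in older client
  releases):

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    cinder = cinder_client.Client('admin', 'secret', 'admin',
                                  'http://controller:5000/v2.0')
    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://controller:5000/v2.0')

    volume = cinder.volumes.get('VOLUME_ID')
    cinder.volumes.update_readonly_flag(volume, True)   # mark the volume R/O
    nova.volumes.create_server_volume('SERVER_ID', volume.id, '/dev/vdb')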

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261313] [NEW] prevent multiple neutron-netns-cleanup running simultaneously

2013-12-16 Thread Zang MingJie
Public bug reported:

Debian Neutron sets up a cron entry to invoke neutron-netns-cleanup
every hour. On an ordinary system this works fine, but in our heavy
stress test environment I found over 10 neutron-netns-cleanup processes
running.

Can we introduce a lock that prevents spawning a new instance when one
is already running?
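
A minimal sketch of such a guard, assuming a flock-based lock file is
acceptable (the lock path and wrapper are illustrative, not part of the
packaged cron entry):

    import fcntl
    import subprocess
    import sys

    LOCK_PATH = "/var/run/neutron-netns-cleanup.lock"  # illustrative path

    def run_exclusively(cmd):
        # Take an exclusive, non-blocking lock; if another cleanup run
        # already holds it, exit immediately instead of piling up.
        with open(LOCK_PATH, "w") as lock_file:
            try:
                fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except IOError:
                sys.exit(0)  # a cleanup run is already in progress
            return subprocess.call(cmd)

    if __name__ == "__main__":
        run_exclusively(["neutron-netns-cleanup", "--config-file",
                         "/etc/neutron/neutron.conf"])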

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Debian Neutron has set a cron entry to invoke neutron-netns-cleanup
  every hour, in ordinary system, it works fine, but in our heavy stress
  test environment, I found there are over 10 neutron-netns-cleanup
- process running.
+ processes running.
  
  Can we introduce a lock which can prevent spawning new instance when
  there is already one running ?

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261313

Title:
  prevent multiple neutron-netns-cleanup running simultaneously

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Debian Neutron sets up a cron entry to invoke neutron-netns-cleanup
  every hour. On an ordinary system this works fine, but in our heavy
  stress test environment I found over 10 neutron-netns-cleanup
  processes running.

  Can we introduce a lock that prevents spawning a new instance when one
  is already running?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261313/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261334] [NEW] nvp: update network gateway name on backend as well

2013-12-16 Thread Salvatore Orlando
Public bug reported:

when a network gateway name is updated, the plugin currently updates
only the neutron database; it might be useful to propagate the update
to the backend as well.

This breaks a use case in which network gateways created in neutron
then need to be processed by other tools that find them in NVP by name.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: havana-backport-potential nicira

** Tags added: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261334

Title:
  nvp: update network gateway name on backend as well

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  when a network gateway name is updated, the plugin currently updates
  only the neutron database; it might be useful to propagate the update
  to the backend as well.

  This breaks a use case in which network gateways created in neutron
  then need to be processed by other tools that find them in NVP by name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261348] [NEW] manage cmd does not parse positional args

2013-12-16 Thread Dima Shulyak
Public bug reported:

Currently the db commands in glance do not handle positional args,
e.g. --version.

** Affects: glance
 Importance: Undecided
 Assignee: Dima Shulyak (dshulyak)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Dima Shulyak (dshulyak)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1261348

Title:
  manage cmd does not parse positional args

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  Currently the db commands in glance do not handle positional args,
  e.g. --version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1261348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158807] Re: Qpid SSL protocol

2013-12-16 Thread Xavier Queralt
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158807

Title:
  Qpid SSL protocol

Status in Cinder:
  Invalid
Status in Cinder grizzly series:
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  By default, TCP is used as the transport for QPID connections. If you
  would like to enable SSL, there is a flag 'qpid_protocol = ssl' available
  in nova.conf. However, the python-qpid client expects a transport type
  instead of a protocol. It seems to be a bug:

  Solution:
  
(https://github.com/openstack/nova/blob/master/nova/openstack/common/rpc/impl_qpid.py#L323)

  WRONG:   self.connection.protocol  = self.conf.qpid_protocol
  CORRECT: self.connection.transport = self.conf.qpid_protocol
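
  For illustration, a hedged sketch of the python-qpid behaviour the fix
  relies on (the broker address is made up):

    from qpid.messaging import Connection

    conn = Connection('broker.example.com:5671')  # illustrative broker
    conn.transport = 'ssl'    # attribute python-qpid actually honours
    # conn.protocol = 'ssl'   # what the buggy code set; ignored by the client
    conn.open()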

  Regards,
  JuanFra.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1158807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261367] [NEW] Cannot view image with empty name

2013-12-16 Thread Ana Krivokapić
Public bug reported:

After creating an image with an empty name (only space characters in
the name) and attempting to view it, the view crashes.

Related bug: https://bugs.launchpad.net/horizon/+bug/1258349

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1261367

Title:
  Cannot view image with empty name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After creating an image with an empty name (only space characters in
  the name) and attempting to view it, the view crashes.

  Related bug: https://bugs.launchpad.net/horizon/+bug/1258349

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1261367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220980] Re: flavor-manage v3 is core api , should add to list API_V3_CORE_EXTENSIONS

2013-12-16 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: icehouse-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1220980

Title:
  flavor-manage v3 is core api , should add to list
  API_V3_CORE_EXTENSIONS

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  flavor-manage is a core API and should be added to the list
  API_V3_CORE_EXTENSIONS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1220980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1222697] Re: AuthInfo Object in auth/controllers.py doesn't use @dependency.required decorator

2013-12-16 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1222697

Title:
  AuthInfo Object in auth/controllers.py doesn't use
  @dependency.required decorator

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  The AuthInfo object in auth/controllers.py directly instantiates the
  managers that it expects to use (in this case: the trust and identity
  managers) instead of using the dependency decorator.

  A quick sweep of code should be done to identify the remaining cases
  and clean them up.
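
  A hedged sketch of the pattern being asked for, using keystone's
  dependency module (the class body is illustrative, not the real
  AuthInfo):

    from keystone.common import dependency

    @dependency.requires('identity_api', 'trust_api')
    class AuthInfo(object):
        # The decorator injects self.identity_api / self.trust_api from the
        # dependency registry instead of this class constructing the
        # managers itself.
        def managers(self):
            return self.identity_api, self.trust_api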

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1222697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1208577] Re: BaseException.message throws DeprecationWarning

2013-12-16 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1208577

Title:
  BaseException.message throws DeprecationWarning

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  Noticed the following deprecation warnings at console output:

  /openstack/glance/glance/openstack/common/timeutils.py:51: 
DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
raise ValueError(e.message)

  /openstack/glance/glance/store/filesystem.py:179: DeprecationWarning: 
BaseException.message has been deprecated as of Python 2.6
% (CONF.filesystem_store_metadata_file, ioe.message)))

  Fix: change e.message to str(e).
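
  A minimal illustration of the suggested change (the exception text here
  is made up):

    try:
        raise ValueError("year is out of range")
    except ValueError as e:
        # message = e.message   # emits DeprecationWarning on Python 2.6+
        message = str(e)        # preferred: stringify the exception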

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1208577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261253] Re: oslo.messaging 1.2.0a11 is outdated and problematic to install

2013-12-16 Thread Clint Byrum
Adding affects tripleo. The fix there is to manually add d2to1 to the
local pypi-mirror element in tripleo-image-elements. If this is fixed in
glance then the tripleo task can be closed. The CD undercloud has had
d2to1 manually added to the local pypi-mirror for now to unblock image
building.

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Fix Released

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1261253

Title:
  oslo.messaging 1.2.0a11 is outdated and problematic to install

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  oslo.messaging needs to be updated to the latest alpha of 1.3.0 (a3 as
  of this bug's filing) or, if this bug is still open when oslo.messaging
  is released, to the released version.

  1.2.0a11 directly depends on d2to1, which is no longer in global
  requirements and thus presents problems for deployers using that as a
  gate for pypi mirrors (notably: TripleO).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1261253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261424] [NEW] LBaaS DB: update_member catch DBDuplicateEntry

2013-12-16 Thread Avishay Balderman
Public bug reported:

When a Member is updated, the code catches a DBDuplicateEntry exception.
Since we are in an update operation, we assume that the entity already
exists in the DB.

See:
https://github.com/openstack/neutron/blob/master/neutron/db/loadbalancer/loadbalancer_db.py#L695

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261424

Title:
  LBaaS DB: update_member catch DBDuplicateEntry

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a Member is updated, the code catches a DBDuplicateEntry exception.
  Since we are in an update operation, we assume that the entity already
  exists in the DB.

  See:
  
https://github.com/openstack/neutron/blob/master/neutron/db/loadbalancer/loadbalancer_db.py#L695

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261430] [NEW] Port opening in security group doesn't apply to existing VMs

2013-12-16 Thread Andrew Lazarev
Public bug reported:

Steps to repro:
1. Create VM choosing security group without port 8080 open
2. Assign floating IP to VM
3. Try to reach VM's port 8080 from external. As expected: connection refused.
4. Edit security group allowing 1-65535 port range
5. Try to reach VM's port 8080 from external.

Expected behavior: VM can be reached by port 8080.
Observed behavior: connection refused.

6. Create new VM with this security group
7. Assign floating IP to VM
8. Try to reach VM's port 8080 from external. As expected: VM can be reached by 
port 8080

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Steps to repro:
  1. Create VM choosing security group without port 8080 open
  2. Assign floating IP to VM
  3. Try to reach VM's port 8080 from external. As expected: connection refused.
  4. Edit security group allowing 1-65535 port range
- 5. Try to reach VM's port 8080 from external. 
+ 5. Try to reach VM's port 8080 from external.
  
  Expected behavior: VM can be reached by port 8080.
  Observed behavior: connection refused.
  
  6. Create new VM with this security group
- 7. Try to reach VM's port 8080 from external. As expected: VM can be reached 
by port 8080
+ 7. Assign floating IP to VM
+ 8. Try to reach VM's port 8080 from external. As expected: VM can be reached 
by port 8080

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261430

Title:
  Port opening in security group doesn't apply to existing VMs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Steps to repro:
  1. Create VM choosing security group without port 8080 open
  2. Assign floating IP to VM
  3. Try to reach VM's port 8080 from external. As expected: connection refused.
  4. Edit security group allowing 1-65535 port range
  5. Try to reach VM's port 8080 from external.

  Expected behavior: VM can be reached by port 8080.
  Observed behavior: connection refused.

  6. Create new VM with this security group
  7. Assign floating IP to VM
  8. Try to reach VM's port 8080 from external. As expected: VM can be reached 
by port 8080

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261431] [NEW] Horizon: Display Information About Version

2013-12-16 Thread Tanja Roth
Public bug reported:

IMHO, it would be helpful if some kind of information about the
OpenStack version and the Horizon version were displayed at the bottom of
the Horizon Web UI, so the user or admin can be sure what they are
currently looking at.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1261431

Title:
  Horizon: Display Information About Version

Status in Init scripts for use on cloud images:
  New

Bug description:
  IMHO, it would be helpful if some kind of information about the
  OpenStack version and the Horizon version were displayed at the bottom
  of the Horizon Web UI, so the user or admin can be sure what they are
  currently looking at.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1261431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259425] Re: service-create allows 2 services with the same name

2013-12-16 Thread Dolph Mathews
If the migration can't be applied, then it should exit cleanly without
making changes (an inconsistent state is a terrible place to be!).

I think the migration should wait for manual intervention to correct
unexpected data (duplicate service names). Adding devstack to this bug
because it sounds like its behavior needs to be changed prior to
landing a fix in keystone.

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1259425

Title:
  service-create allows 2 services with the same name

Status in devstack - openstack dev environments:
  New
Status in OpenStack Identity (Keystone):
  Triaged

Bug description:
  Meanwhile, `service-get <name>` seems to be confused by the duplicated
  name. The same thing happens with `service-delete <name>`. Of course,
  getting and deleting services by ID is not affected.

  Following http://docs.openstack.org/trunk/install-
  guide/install/apt/content/cinder-controller.html:

  $ keystone service-create --name=cinder --type=volume --description="Cinder Volume Service"
  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |      Cinder Volume Service       |
  |      id     | 54f9153b21b54a908562d85392784992 |
  |     name    |              cinder              |
  |     type    |              volume              |
  +-------------+----------------------------------+
  $ keystone service-create --name=cinder --type=volumev2 --description="Cinder Volume Service V2"
  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |     Cinder Volume Service V2     |
  |      id     | dd425789606b4472b32d78a63942aefc |
  |     name    |              cinder              |
  |     type    |             volumev2             |
  +-------------+----------------------------------+
  $ keystone service-get cinder
  global name 'exc' is not defined

  Debug output is here http://pastebin.com/kVJGUCwA

  I'm not sure whether duplicated names are allowed/recommended, but at
  least one of the following is needed:

    - service-create should check the name and stop when a service with
      the same name exists (see the sketch after this list)
    - service-get and service-delete should be changed so that they work
      with duplicated names, maybe by showing multiple services or
      reporting an appropriate error
    - The help text of service-get and service-delete should not say that
      name is allowed
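
  A hedged client-side illustration of the first suggestion (the real
  guard belongs in keystone itself; the token and endpoint below are
  placeholders):

    from keystoneclient.v2_0 import client

    keystone = client.Client(token='ADMIN_TOKEN',
                             endpoint='http://192.168.122.176:35357/v2.0')

    def create_service_unique(name, service_type, description):
        # Refuse to create a second service with an already-used name.
        if any(s.name == name for s in keystone.services.list()):
            raise ValueError("service named %r already exists" % name)
        return keystone.services.create(name, service_type, description)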

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1259425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261442] [NEW] Spurious error from ComputeManager._run_image_cache_manager_pass

2013-12-16 Thread Thomas Herve
Public bug reported:

I got the following errors in a tempest test at
http://logs.openstack.org/97/62397/1/check/check-tempest-dsvm-
full/3f7b8c3:


2013-12-16 16:07:52.189 27621 DEBUG nova.openstack.common.processutils [-] 
Result was 1 execute 
/opt/stack/new/nova/nova/openstack/common/processutils.py:172
2013-12-16 16:07:52.189 27621 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager._run_image_cache_manager_pass: Unexpected error 
while running command.
Command: env LC_ALL=C LANG=C qemu-img info 
/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk
Exit code: 1
Stdout: ''
Stderr: qemu-img: Could not open 
'/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk': No 
such file or directory\n
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/openstack/common/periodic_task.py, line 180, in 
run_periodic_tasks
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
task(self, context)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/compute/manager.py, line 5210, in 
_run_image_cache_manager_pass
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
self.driver.manage_image_cache(context, filtered_instances)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 4650, in 
manage_image_cache
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
self.image_cache_manager.verify_base_images(context, all_instances)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/imagecache.py, line 603, in 
verify_base_images
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
inuse_backing_images = self._list_backing_images()
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/imagecache.py, line 345, in 
_list_backing_images
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
backing_file = virtutils.get_disk_backing_file(disk_path)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/utils.py, line 442, in 
get_disk_backing_file
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
backing_file = images.qemu_img_info(path).backing_file
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/images.py, line 56, in qemu_img_info
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
'qemu-img', 'info', path)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/utils.py, line 175, in execute
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
return processutils.execute(*cmd, **kwargs)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/openstack/common/processutils.py, line 178, in 
execute
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
cmd=' '.join(cmd))
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
ProcessExecutionError: Unexpected error while running command.
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
Command: env LC_ALL=C LANG=C qemu-img info 
/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task Exit 
code: 1
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task Stdout: 
''
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task Stderr: 
qemu-img: Could not open 
'/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk': No 
such file or directory\n
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task
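
The disk was most likely removed by a concurrent instance delete between
the directory listing and the qemu-img call; a hedged sketch of the kind
of guard that would avoid the noisy periodic-task error (the helper is
illustrative, not the actual nova patch):

    import os

    from nova.virt import images

    def get_backing_file_safely(disk_path):
        # Treat a disk that vanished underneath us as "no backing file"
        # instead of letting the periodic task raise.
        if not os.path.exists(disk_path):
            return None
        try:
            return images.qemu_img_info(disk_path).backing_file
        except Exception:
            # The file can still disappear between the check and the call.
            return None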

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261442

Title:
  Spurious error from ComputeManager._run_image_cache_manager_pass

Status in OpenStack Compute (Nova):
  New

Bug description:
  I got the following errors in a tempest test at
  http://logs.openstack.org/97/62397/1/check/check-tempest-dsvm-
  full/3f7b8c3:


  2013-12-16 16:07:52.189 27621 DEBUG nova.openstack.common.processutils [-] 
Result was 1 execute 
/opt/stack/new/nova/nova/openstack/common/processutils.py:172
  2013-12-16 16:07:52.189 27621 ERROR nova.openstack.common.periodic_task [-] 
Error during 

[Yahoo-eng-team] [Bug 1190619] Re: Improve unit test coverage for Cisco plugin db code

2013-12-16 Thread Kyle Mestery
Moving this to Won't Fix since we plan to deprecate the Cisco plugin
in the future and move all work to ML2.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190619

Title:
  Improve unit test coverage for Cisco plugin db code

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Improve unit test coverage for ...

  quantum/plugins/cisco/db/api                 170  170  0  26  0   0%
  quantum/plugins/cisco/db/l2network_db        217  217  0  14  0   0%
  quantum/plugins/cisco/db/l2network_models     74   31  0   2  0  57%
  quantum/plugins/cisco/db/models               51   21  0   2  0  57%
  quantum/plugins/cisco/db/network_db_v2       221  175  0  14  0  20%
  quantum/plugins/cisco/db/network_models_v2    74   26  0   2  0  63%
  quantum/plugins/cisco/db/nexus_db_v2          77   23  0  12  0  70%
  quantum/plugins/cisco/db/nexus_models_v2      19    1  0   0  0  95%

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1190619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261462] [NEW] When RAM quota is inf value the overview page show 0 bytes.

2013-12-16 Thread Floren
Public bug reported:

If we fill the RAM quota with a representation of an infinite value
[see 1.png], the overview page shows 0 bytes [see 2.png].

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: 1.png
   https://bugs.launchpad.net/bugs/1261462/+attachment/3930426/+files/1.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1261462

Title:
  When RAM quota is inf value the overview page show 0 bytes.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If we fill the RAM quota with a representation of an infinite value
  [see 1.png], the overview page shows 0 bytes [see 2.png].

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1261462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261469] [NEW] Plugin switch issues undocumented

2013-12-16 Thread Agrin Hilmkil
Public bug reported:

Switching plugins may, it seems, currently lead to issues even though
the configuration is correct. The docs should therefore clearly state
that such issues might occur when moving from one plugin to another, to
save customers the headache of trying to figure out the issues.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261469

Title:
  Plugin switch issues undocumented

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Switching plugins may, it seems, currently lead to issues even though
  the configuration is correct. The docs should therefore clearly state
  that such issues might occur when moving from one plugin to another,
  to save customers the headache of trying to figure out the issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260432] Re: nova-compute can't be setting up during install on trusty

2013-12-16 Thread Joe Gordon
This isn't a nova bug; it looks like an issue with the packaging you
are using.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260432

Title:
  nova-compute can't be setting up during install on trusty

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
  1, during install:
  Setting up nova-compute (1:2014.1~b1-0ubuntu2) ...
  start: Job failed to start
  invoke-rc.d: initscript nova-compute, action start failed.
  dpkg: error processing nova-compute (--configure):
   subprocess installed post-installation script returned error exit status 1
  Setting up nova-compute-kvm (1:2014.1~b1-0ubuntu2) ...
  Errors were encountered while processing:
   nova-compute
  E: Sub-process /usr/bin/dpkg returned an error code (1)

  2, the system is latest trusty:
  ming@arm64:~$ sudo apt-get dist-upgrade
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  Calculating upgrade... Done
  The following packages were automatically installed and are no longer 
required:
dnsmasq-utils iputils-arping libboost-system1.53.0 libboost-thread1.53.0
libclass-isa-perl libopts25 libswitch-perl ttf-dejavu-core
  Use 'apt-get autoremove' to remove them.
  The following packages have been kept back:
checkbox-cli
  0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

  3, looks like /usr/bin/nova-compute can't be started:
  ming@arm64:~$ nova-compute 
  2013-12-12 17:57:19.992 13823 ERROR stevedore.extension [-] Could not load 
'file': (WebOb 1.3 (/usr/lib/python2.7/dist-packages), 
Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 ERROR stevedore.extension [-] (WebOb 1.3 
(/usr/lib/python2.7/dist-packages), Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension Traceback (most 
recent call last):
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
/usr/lib/python2.7/dist-packages/stevedore/extension.py, line 134, in 
_load_plugins
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension invoke_kwds,
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
/usr/lib/python2.7/dist-packages/stevedore/extension.py, line 146, in 
_load_one_plugin
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension plugin = ep.load()
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 2107, in load
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension if require: 
self.require(env, installer)
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 2120, in require
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension 
working_set.resolve(self.dist.requires(self.extras),env,installer)))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension   File 
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 580, in resolve
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension raise 
VersionConflict(dist,req) # XXX put more info here
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension VersionConflict: 
(WebOb 1.3 (/usr/lib/python2.7/dist-packages), 
Requirement.parse('WebOb>=1.2.3,<1.3'))
  2013-12-12 17:57:19.993 13823 TRACE stevedore.extension 
  2013-12-12 17:57:20.133 13823 ERROR nova.virt.driver [-] Compute driver 
option required, but not specified

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261468] [NEW] domain-scoped token has None for tenant_id replacement

2013-12-16 Thread Brant Knudson
Public bug reported:


When I get a domain-scoped token, I get back a catalog. The catalog contains a 
bunch of endpoints that aren't valid because the tenant_id replacement has been 
changed to None rather than a valid tenant-id.

Here's an example of the data in the auth request:

{
    "token": {
        "catalog": [
            {
                "endpoints": [
                    {
                        "id": "247c60ab8ce94cac9bd6de51ad3a5da4",
                        "interface": "internal",
                        "legacy_endpoint_id": "677bffa798da42c594fb536f9e549f84",
                        "region": "RegionOne",
                        "url": "http://192.168.122.176:8774/v2/None"
                    },
                    ...
                ],
                "id": "425a93743a7d46708d55f7f099bf1a07",
                "type": "compute"
            },
            ...
}

The compute endpoint in Keystone is like this:

| 677bffa798da42c594fb536f9e549f84 | RegionOne |
http://192.168.122.176:8774/v2/$(tenant_id)s |
http://192.168.122.176:8774/v2/$(tenant_id)s |
http://192.168.122.176:8774/v2/$(tenant_id)s |
425a93743a7d46708d55f7f099bf1a07 |

So it's replacing $(tenant_id)s with None

I don't think this is working as designed. What's the point of providing
a bunch of invalid endpoints?
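
For illustration, a hedged sketch of the substitution that yields these
URLs (it mimics how the catalog templates appear to be formatted; it is
not keystone's exact code):

    template = "http://192.168.122.176:8774/v2/$(tenant_id)s"
    values = {"tenant_id": None}   # domain-scoped token: no project in scope

    # Keystone-style endpoint templates use $(...)s placeholders that are
    # filled by %-substitution; with no project in scope the placeholder
    # renders as None, producing the invalid URLs above.
    print(template.replace("$(", "%(") % values)
    # -> http://192.168.122.176:8774/v2/None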

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1261468

Title:
  domain-scoped token has None for tenant_id replacement

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  When I get a domain-scoped token, I get back a catalog. The catalog contains 
a bunch of endpoints that aren't valid because the tenant_id replacement has 
been changed to None rather than a valid tenant-id.

  Here's an example of the data in the auth request:

  {
      "token": {
          "catalog": [
              {
                  "endpoints": [
                      {
                          "id": "247c60ab8ce94cac9bd6de51ad3a5da4",
                          "interface": "internal",
                          "legacy_endpoint_id": "677bffa798da42c594fb536f9e549f84",
                          "region": "RegionOne",
                          "url": "http://192.168.122.176:8774/v2/None"
                      },
                      ...
                  ],
                  "id": "425a93743a7d46708d55f7f099bf1a07",
                  "type": "compute"
              },
              ...
  }

  The compute endpoint in Keystone is like this:

  | 677bffa798da42c594fb536f9e549f84 | RegionOne |
  http://192.168.122.176:8774/v2/$(tenant_id)s |
  http://192.168.122.176:8774/v2/$(tenant_id)s |
  http://192.168.122.176:8774/v2/$(tenant_id)s |
  425a93743a7d46708d55f7f099bf1a07 |

  So it's replacing $(tenant_id)s with None

  I don't think this is working as designed. What's the point of
  providing a bunch of invalid endpoints?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1261468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261475] [NEW] Nova should disable libguestfs' automatic cleanup

2013-12-16 Thread Daniel Berrange
Public bug reported:

By default libguestfs will register an atexit() handler to cleanup any
open libguestfs handles when the process exits. Since libguestfs does
not provide any mutex locking in its APIs, the atexit handlers are not
safe in multi-threaded processes. If they run they are liable to cause
memory corruption as multiple threads access the same libguestfs handle.
As such, atexit handlers should be disabled in any multi-threaded
program using libguestfs, e.g. by using

  guestfs.GuestFS (close_on_exit = False)

instead of

  guestfs.GuestFS()

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261475

Title:
  Nova should disable libguestfs' automatic cleanup

Status in OpenStack Compute (Nova):
  New

Bug description:
  By default libguestfs will register an atexit() handler to cleanup any
  open libguestfs handles when the process exits. Since libguestfs does
  not provide any mutex locking in its APIs, the atexit handlers are not
  safe in multi-threaded processes. If they run they are liable to cause
  memory corruption as multiple threads access the same libguestfs
  handle. As such, atexit handlers should be disabled in any multi-
  threaded program using libguestfs, e.g. by using

guestfs.GuestFS (close_on_exit = False)

  instead of

guestfs.GuestFS()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261485] [NEW] Error messages do not convey problem or resolution

2013-12-16 Thread Victoria Kouyoumjian
Public bug reported:

Error messages frequently do not indicate why the error occurred or how
the user should address the problem to avoid the error.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Example message
   
https://bugs.launchpad.net/bugs/1261485/+attachment/3930454/+files/DeleteNetwork_message.JPG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1261485

Title:
  Error messages do not convey problem or resolution

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Error messages frequently do not indicate why the error occurred or
  how the user should address the problem to avoid the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1261485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260489] Re: --debug flag not working in neutron

2013-12-16 Thread Aaron Rosen
I'd be okay with adding a --debug option to python-neutronclient that
does the same thing as -v (just so all the clients work similarly).
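
A hedged sketch of what that could look like with an argparse-based
parser (illustrative only, not the actual neutronclient shell code):

    import argparse

    parser = argparse.ArgumentParser(prog='neutron')
    parser.add_argument('-v', '--verbose', '--debug', dest='verbose_level',
                        action='count', default=0,
                        help='Increase verbosity of output. Can be repeated.')

    print(parser.parse_args(['--debug']).verbose_level)   # -> 1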

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260489

Title:
  --debug flag not working in neutron

Status in Python client library for Neutron:
  New

Bug description:
  This is with the neutron master branch, in a single node devstack
  setup. The branch is at commit
  3b4233873539bad62d202025529678a5b0add412.

  If I use the --debug flag in a neutron CLI, for example, port-list, I
  don't see any debug output:

  cloud@controllernode:/opt/stack/neutron$ neutron --debug port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 6c26cdc1-acc1-439c-bb47-d343085b7b78 |  | fa:16:3e:32:2c:eb | 
{subnet_id: 37f15352-e816-4a03-b58c-b4d5c1fa8e2a, ip_address: 10.0.0.2} 
|
  | f09b14b2-3162-4212-9d91-f97b22c95f31 |  | fa:16:3e:99:08:6b | 
{subnet_id: d4717b67-fd64-45ed-b22c-dedbd23afff3, ip_address: 
172.24.4.226} |
  | f0ba4efd-12ca-4d56-8c7d-e879e4150a63 |  | fa:16:3e:02:41:47 | 
{subnet_id: 37f15352-e816-4a03-b58c-b4d5c1fa8e2a, ip_address: 10.0.0.1} 
|
  
+--+--+---+-+
  cloud@controllernode:/opt/stack/neutron$ 

  
  On the other hand, if I use the --debug flag for nova, for example, nova 
list, I see the curl request and response showing up:

  
  cloud@controllernode:/opt/stack/neutron$ nova --debug list

  REQ: curl -i 'http://192.168.52.85:5000/v2.0/tokens' -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "password"}}}'

  RESP: [200] CaseInsensitiveDict({'date': 'Thu, 05 Dec 2013 23:41:07 GMT', 
'vary': 'X-Auth-Token', 'content-length': '8255', 'content-type': 
'application/json'})
  RESP BODY: {access: {token: {issued_at: 2013-12-05T23:41:07.307915, 
expires: 2013-12-06T23:41:07Z, id: 
MIIOkwYJKoZIhvcNAQcCoIIOhDCCDoACAQExCTAHBgUrDgMCGjCCDOkGCSqGSIb3DQEHAaCCDNoEggzWeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMi0wNVQyMzo0MTowNy4zMDc5MTUiLCAiZXhwaXJlcyI6ICIyMDEzLTEyLTA2VDIzOjQxOjA3WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92Mi9hN2IzOTYwYjk3OTI0YmFiOWE1NWE5ZjlmNjg0YTg3MCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAiaWQiOiAiMDQyMzVjMmE1ODNlNDAwZDg1NTBkYTI0NmNiZDI1YWEiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOi
 
AiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojk2OTYvIiwgImlkIjogIjYyNWI1YzM3ZDJlYzQ4ZGRhMTRmZGZmZmMyZjBhMTY0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzYvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIiwgImlkIjogIjNmODVjN2ZmZjNjMzRmNWNiMzlmMTZiMzQ2ZmY1Mjc0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92MyIsICJyZWd
 

[Yahoo-eng-team] [Bug 1261510] [NEW] Instance fails to spawn in tempest tests

2013-12-16 Thread Salvatore Orlando
Public bug reported:

This happened only 3 times in the past 12 hours, so nothing to worry
about so far.

Logstash query for the exact failure in [1] is available at [2].
I am also seeing more "Timeout waiting for thing" errors (not the same
condition as bug 1254890, which affects the large_ops job and is due to
the chatty nova/neutron interface). Logstash query for this at [3] (13
hits in the past 12 hours). I think they might have the same root cause.


[1] 
http://logs.openstack.org/22/62322/2/check/check-tempest-dsvm-neutron-isolated/cce7146
[2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzXCIgQU5EICBcIkN1cnJlbnQgc3RhdHVzOiBCVUlMRFwiIEFORCBcIkN1cnJlbnQgdGFzayBzdGF0ZTogc3Bhd25pbmdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg3MjIzNzQ0Mjk2fQ==
[3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4NzIyMzg2Mjg1MH0=

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261510

Title:
  Instance fails to spawn in tempest tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  This happened only 3 times in the past 12 hours, so nothing to worry
  about so far.

  Logstash query for the exact failure in [1] is available at [2].
  I am also seeing more "Timeout waiting for thing" errors (not the same
  condition as bug 1254890, which affects the large_ops job and is due to
  the chatty nova/neutron interface). Logstash query for this at [3] (13
  hits in the past 12 hours). I think they might have the same root cause.

  
  [1] 
http://logs.openstack.org/22/62322/2/check/check-tempest-dsvm-neutron-isolated/cce7146
  [2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzXCIgQU5EICBcIkN1cnJlbnQgc3RhdHVzOiBCVUlMRFwiIEFORCBcIkN1cnJlbnQgdGFzayBzdGF0ZTogc3Bhd25pbmdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg3MjIzNzQ0Mjk2fQ==
  [3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4NzIyMzg2Mjg1MH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258024] Re: Wrong flavor name on show_server

2013-12-16 Thread Maithem
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258024

Title:
  Wrong flavor name on show_server

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I show a server with nova show <server-id>, it returns a flavor
  which is not the one I used when creating the server.
  I think it is a bug.

  Basically what is happening is:

  1. creating a flavor flavor1, flavorid 10
  2. deleting flavor flavor1
  3. creating a flavor flavor2, flavorid 10
  4. booting a server with flavor2
  5. showing server details.

  This bug occurs because we show flavor details by flavorid in the API,
  but flavorid is not the primary key in the table (another column, id,
  is the PK).
  So I think we should remove flavorid from the table and show flavors
  by id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206081] Re: [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1206081

Title:
  [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Won't Fix
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When doing QA for SUSE on bug 1177830
  I found that the fix is incomplete,
  because it assumed that the cached image would be mostly sparse.

  However, I can easily create non-sparse small compressed qcow2 images
  with

  perl -e 'for(1..11000){print "x" x 1024000}' > img
  qemu-img convert -c -O qcow2 img img.qcow2
  glance image-create --name=11gb --is-public=True --disk-format=qcow2 --container-format=bare < img.qcow2
  nova boot --image 11gb --flavor m1.small testvm

  which (in Grizzly and Essex) results in one (or two in Essex) 11GB large
  files being created in /var/lib/nova/instances/_base/,
  still allowing attackers to fill up the disk space of compute nodes,
  because the size check is only done after the uncompressing / caching.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1206081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1195947] Re: VM re-scheduler mechanism will cause BDM-volumes conflict

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1195947

Title:
  VM re-scheduler mechanism will cause BDM-volumes conflict

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Due to the re-scheduler mechanism, when a user tries to
  create (in error) an instance using a volume
  which is already in use by another instance,
  the error is correctly detected, but the recovery code
  will incorrectly affect the original instance.

  We need to raise an exception directly when the situation above occurs.
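
  A hedged sketch of the suggested behaviour (the helper and check are
  illustrative, not the actual nova change):

    from nova import exception

    def _check_bdm_volume(volume):
        # Fail the boot request outright instead of entering the
        # reschedule/rollback path that detaches the other instance's volume.
        if volume['status'] != 'available':
            raise exception.InvalidVolume(
                reason="volume %s is already attached" % volume['id'])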

  
  
  We can create VM1 with BDM-volumes (for example, one volume, which we
  call “Vol-1”).

  But when the attached volume (Vol-1) is used in the BDM parameters to
  create a new VM2, due to the VM re-scheduler mechanism the volume is
  switched to attach to the new VM2 in Nova & Cinder, instead of raising
  an “InvalidVolume” exception saying “Vol-1 is already attached on VM1”.

  In actuality, Vol-1 is attached to both VM1 and VM2 on the hypervisor.
  But when you operate on Vol-1 from VM1, you can’t see any corresponding
  changes on VM2…

  I reproduced it and wrote in the doc. Please check the attachment for
  details~

  -
  I checked on the Nova codes, the problem is caused by VM re-scheduler 
mechanism:

  Nova now checks the state of BDM-volumes from Cinder [def
  _setup_block_device_mapping() in manager.py]. If any state is “in-use”,
  the request fails and triggers a VM re-schedule.

  According to the existing process in Nova, before re-scheduling it will
  shut down the VM and detach all BDM-volumes in Cinder for rollback
  [def _shutdown_instance() in manager.py]. As a result, the state of
  Vol-1 changes from “in-use” to “available” in Cinder. But there are no
  detach operations on the Nova side…

  Therefore, after re-scheduling, the second attempt to create VM2 passes
  the BDM-volumes check, and all of VM1’s BDM-volumes (Vol-1) are taken
  over by VM2 and recorded in the Nova & Cinder DBs. But Vol-1 is still
  attached to VM1 on the hypervisor, and will also be attached to VM2
  after the VM is created successfully…

  ---

  Moreover, the problem mentioned above will occur when
  “delete_on_termination” of the BDMs is “False”. If the flag is “True”,
  all BDM-volumes will be deleted in Cinder because their states were
  already changed from “in-use” to “available” before
  [def _cleanup_volumes() in manager.py].
  (P.S. Success depends on the specific implementation of Cinder Driver)

  Thanks~

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1195947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177830] Re: [OSSA 2013-012] Unchecked qcow2 root disk sizes

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177830

Title:
  [OSSA 2013-012] Unchecked qcow2 root disk sizes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Currently there's no check on the root disk raw sizes. A user can
  create a qcow2 image of any size, upload it to glance, and spawn
  instances off this file. The raw backing file created on the compute
  node will be small at first, due to it being a sparse file, but will
  grow as data is written to it. This can cause the following issues.

  1. Bypass storage quota restrictions
  2. Overrun compute host disk space

  This was reproduced in Devstack using recent trunk d7e4692.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251152] Re: create instance with ephemeral disk fails

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251152

Title:
  create instance with ephemeral disk fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1037, in _build_instance
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] set_access_ip=set_access_ip)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1410, in _spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] LOG.exception(_('Instance faile
  d to spawn'), instance=instance)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1407, in _spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] block_device_info)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/driver.py, line 2063, in spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] admin_pass=admin_password)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/driver.py, line 2370, in _create_image
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] ephemeral_size=ephemeral_gb)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/imagebackend.py, line 174, in cache
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] *args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/imagebackend.py, line 307, in create_image
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] prepare_template(target=base, m
  ax_size=size, *args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/openst
  ack/common/lockutils.py, line 246, in inner
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] return f(*args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File 
/opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 162, in 
call_if_not_exists
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] fetch_func(target=target, *args, 
**kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] TypeError: _create_ephemeral() got an 
unexpected keyword argument 'max_size'
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] 

  The max_size argument was added in 3cdfe894ab58f7b91bf7fb690fc5bc724e44066f;
  when creating ephemeral disks, the _create_ephemeral method is passed an
  unexpected keyword argument max_size.
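
  A minimal sketch of the shape of the fix (simplified, hypothetical signature
  rather than the real libvirt driver method): let _create_ephemeral accept
  and ignore the extra argument that the image cache now forwards to every
  fetch_func.

  # Illustrative only; the real method lives on the libvirt driver class.
  def _create_ephemeral(target, ephemeral_size, fs_label=None, os_type=None,
                        max_size=None, *args, **kwargs):
      # max_size is accepted purely so callers that forward it (the image
      # cache's prepare_template) no longer trigger a TypeError; it is not
      # needed when creating an empty ephemeral filesystem.
      pass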

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251792] Re: infinite recursion when deleting an instance with no network interfaces

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251792

Title:
  infinite recursion when deleting an instance with no network
  interfaces

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In some situations when an instance has no network information (a
  phrase that I'm using loosely), deleting the instance results in
  infinite recursion. The stack looks like this:

  2013-11-15 18:50:28.995 DEBUG nova.network.neutronv2.api 
[req-28f48294-0877-4f09-bcc1-7595dbd4c15a demo demo]   File 
/usr/lib/python2.7/dist-packages/eventlet/greenpool.py, line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in 
_process_data
  **args)
File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 354, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/exception.py, line 73, in wrapped
  return f(self, context, *args, **kw)
File /opt/stack/nova/nova/compute/manager.py, line 230, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 295, in 
decorated_function
  function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 259, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1984, in 
terminate_instance
  do_terminate_instance(instance, bdms)
File /opt/stack/nova/nova/openstack/common/lockutils.py, line 248, in 
inner
  return f(*args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1976, in 
do_terminate_instance
  reservations=reservations)
File /opt/stack/nova/nova/hooks.py, line 105, in inner
  rv = f(*args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1919, in 
_delete_instance
  self._shutdown_instance(context, db_inst, bdms)
File /opt/stack/nova/nova/compute/manager.py, line 1829, in 
_shutdown_instance
  network_info = self._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/compute/manager.py, line 868, in 
_get_instance_nw_info
  instance)
File /opt/stack/nova/nova/network/neutronv2/api.py, line 449, in 
get_instance_nw_info
  result = self._get_instance_nw_info(context, instance, networks)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)

  RECURSION STARTS HERE

File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)

  ... REPEATS AD NAUSEUM ...

File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)
File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 49, in wrapper
  res = f(self, context, *args, **kwargs)
File /opt/stack/nova/nova/network/neutronv2/api.py, line 459, in 
_get_instance_nw_info
  LOG.debug('%s', ''.join(traceback.format_stack()))

  Here's a step-by-step explanation of how the infinite recursion
  arises:

  1. somebody calls nova.network.neutronv2.api.API.get_instance_nw_info

  2. in the above call, the network info is successfully retrieved as
  result = self._get_instance_nw_info(context, instance, networks)

  3. however, since the instance has no network information, result is
  the empty list (i.e., [])

  4. the result is put in the cache by calling
  nova.network.api.update_instance_cache_with_nw_info

  5. update_instance_cache_with_nw_info is supposed to add the result to
  the cache, but due to a bug in update_instance_cache_with_nw_info, it
  recursively calls api.get_instance_nw_info, which brings us back to
  step 1. The bug is the check before the recursive call:

      if not nw_info:
          nw_info = api._get_instance_nw_info(context, instance)

  which erroneously equates [] and None. Hence the check should be if
  nw_info is None:
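
  A minimal sketch of the corrected check (simplified; not the full helper
  from nova/network/api.py):

  def update_instance_cache_with_nw_info(api, context, instance, nw_info=None):
      # Only fetch when the caller passed nothing at all; an empty list is a
      # valid, cacheable result and must not trigger another lookup (which is
      # what caused the recursion back through the wrapper).
      if nw_info is None:
          nw_info = api._get_instance_nw_info(context, instance)
      # ... store nw_info in the instance's info cache ...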

  I should clarify that the instance _did_ have network information at
  some point (i.e., I booted it normally with a NIC), however, some time
  after I issued a nova delete request, the network information was
  gone (i.e., in 

[Yahoo-eng-team] [Bug 1235450] Re: [OSSA 2013-033] Metadata queries from Neutron to Nova are not restricted by tenant (CVE-2013-6419)

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235450

Title:
  [OSSA 2013-033] Metadata queries from Neutron to Nova are not
  restricted by tenant (CVE-2013-6419)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron grizzly series:
  In Progress
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Committed

Bug description:
  The neutron metadata service works in the following way:

  Instance makes a GET request to http://169.254.169.254/

  This is directed to the metadata-agent which knows which
  router(namespace) he is running on and determines the ip_address from
  the http request he receives.

  Now, the neutron-metadata-agent queries neutron-server using the
  router_id and ip_address from the request to determine the port the
  request came from. Next, the agent takes the device_id (nova-instance-
  id) on the port and passes that to nova as X-Instance-ID.

  The vulnerability is that if someone exposes their instance_id, their
  metadata can be retrieved. In order to exploit this, one would need to
  update the device_id on a port to match the instance_id they want to
  hijack the data from.
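
  A simplified sketch of the trust gap (function and variable names are
  illustrative, not the actual agent code): the instance id forwarded to nova
  is taken straight from the tenant-writable device_id field, so a fix also
  needs to forward the port's tenant so nova can verify ownership.

  import hashlib
  import hmac

  def build_metadata_headers(port, shared_secret):
      instance_id = port['device_id']  # tenant-controlled field on the port
      return {
          'X-Instance-ID': instance_id,
          'X-Instance-ID-Signature': hmac.new(
              shared_secret, instance_id, hashlib.sha256).hexdigest(),
          # Hardening: also pass the port's tenant so nova can check that the
          # instance really belongs to it before returning metadata.
          'X-Tenant-ID': port['tenant_id'],
      }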

  To demonstrate:

  arosen@arosen-desktop:~/devstack$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | 1eb33bf1-6400-483a-9747-e19168b68933 | vm1  | ACTIVE | None   | Running 
| private=10.0.0.4 |
  | eed973e2-58ea-42c4-858d-582ff6ac3a51 | vm2  | ACTIVE | None   | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  
  arosen@arosen-desktop:~/devstack$ neutron port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 3128f195-c41b-4160-9a42-40e024771323 |  | fa:16:3e:7d:a5:df | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.1} 
|
  | 62465157-8494-4fb7-bdce-2b8697f03c12 |  | fa:16:3e:94:62:47 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} 
|
  | 8473fb8d-b649-4281-b03a-06febf61b400 |  | fa:16:3e:4f:a3:b0 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.2} 
|
  | 92c42c1a-efb0-46a6-89eb-a38ae170d76d |  | fa:16:3e:de:9a:39 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.3} 
|
  
+--+--+---+-+

  
  arosen@arosen-desktop:~/devstack$ neutron port-show  
62465157-8494-4fb7-bdce-2b8697f03c12
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 1eb33bf1-6400-483a-9747-e19168b68933
|
  | device_owner  | compute:None
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {subnet_id: 
d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} |
  | id| 62465157-8494-4fb7-bdce-2b8697f03c12
|
  | mac_address   | fa:16:3e:94:62:47   
|
  | name  | 
|
  | network_id| 

[Yahoo-eng-team] [Bug 1247526] Re: libvirt evacuate(shared storage) fails w/ Permission denied on disk.config

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247526

Title:
  libvirt evacuate(shared storage) fails w/ Permission denied on
  disk.config

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When doing an evacuate of an instance on shared storage, the following
  error occurs:

  2013-10-25 01:20:49.843 INFO nova.compute.manager 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] disk on shared storage, recreating using 
existing disk
  2013-10-25 01:20:53.325 INFO nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Creating image
  2013-10-25 01:20:53.413 INFO nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Using config drive
  2013-10-25 01:20:57.812 INFO nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Creating config drive at 
/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config
  2013-10-25 01:20:57.835 ERROR nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Creating config drive failed with error: 
Unexpected error while running command.
  Command: genisoimage -o 
/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config -ldots 
-allow-lowercase -allow-multidot -l -publisher OpenStack Nova 2013.1.3 -quiet 
-J -r -V config-2 /tmp/cd_gen_I3EQUN
  Exit code: 13
  Stdout: ''
  Stderr: Warning: creating filesystem that does not conform to 
ISO-9660.\ngenisoimage: Permission denied. Unable to open disc image file 
'/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config'.\n
  2013-10-25 01:20:57.837 ERROR nova.compute.manager 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Unexpected error while running command.
  Command: genisoimage -o 
/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config -ldots 
-allow-lowercase -allow-multidot -l -publisher OpenStack Nova 2013.1.3 -quiet 
-J -r -V config-2 /tmp/cd_gen_I3EQUN
  Exit code: 13
  Stdout: ''
  Stderr: Warning: creating filesystem that does not conform to 
ISO-9660.\ngenisoimage: Permission denied. Unable to open disc image file 
'/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config'.\n. 
Setting instance vm_state to ERROR
  2013-10-25 01:20:58.693 ERROR nova.openstack.common.rpc.amqp 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] Exception during message handling

  
  The nova version is 2013.1, but after looking at the code, this should also
  affect the latest trunk.

  Allowing the nova user to read/write disk.config should fix this
  issue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1247526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246327] Re: the snapshot of a volume-backed instance cannot be used to boot a new instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246327

Title:
  the snapshot of a volume-backed instance cannot be used to boot a new
  instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  After the changes to block device mappings introduced for Havana,
  if we try to create a snapshot of a volume-backed instance, the
  resulting image cannot be used to boot a new instance due to conflicts
  over the boot index between the block_device_mapping stored in the
  image properties and the image itself.

  The steps to reproduce are:

  $ glance image-create --name f20 --disk-format qcow2 --container-
  format bare --min-disk 2 --is-public True --min-ram 512 --copy-from
  
http://download.fedoraproject.org/pub/fedora/linux/releases/test/20-Alpha/Images/x86_64
  /Fedora-x86_64-20-Alpha-20130918-sda.qcow2

  $ cinder create --image-id uuid of the new image --display-name f20
  2

  $ nova boot --boot-volume uuid of the new volume --flavor m1.tiny
  test-instance

  $ nova image-create test-instance test-snap

  This will create a snapshot of the volume and an image in glance with
  a block_device_mapping containing the snapshot_id and all the other
  values from the original block_device_mapping (id, connection_info,
  instance_uuid, ...):

  | Property 'block_device_mapping' | [{instance_uuid:
  989f03dc-2736-4884-ab66-97360102d804, virtual_name: null,
  no_device: null, connection_info: {\driver_volume_type\:
  \iscsi\, \serial\: \cb6d4406-1c66-4f9a-9fd8-7e246a3b93b7\,
  \data\: {\access_mode\: \rw\, \target_discovered\: false,
  \encrypted\: false, \qos_spec\: null, \device_path\: \/dev/disk
  /by-path/ip-192.168.122.2:3260-iscsi-iqn.2010-10.org.openstack:volume-
  cb6d4406-1c66-4f9a-9fd8-7e246a3b93b7-lun-1\, \target_iqn\:
  \iqn.2010-10.org.openstack:volume-cb6d4406-1c66-4f9a-
  9fd8-7e246a3b93b7\, \target_portal\: \192.168.122.2:3260\,
  \volume_id\: \cb6d4406-1c66-4f9a-9fd8-7e246a3b93b7\,
  \target_lun\: 1, \auth_password\: \wh5bWkAjKv7Dy6Ptt4nY\,
  \auth_username\: \oPbN9FzbEPQ3iFpPhv5d\, \auth_method\:
  \CHAP\}}, created_at: 2013-10-30T13:18:57.00,
  snapshot_id: f6a25cc2-b3af-400b-9ef9-519d28239920, updated_at:
  2013-10-30T13:19:08.00, device_name: /dev/vda, deleted: 0,
  volume_size: null, volume_id: null, id: 3, deleted_at: null,
  delete_on_termination: false}] |

  When we later try to use this image to boot a new instance, the API
  won't let us because both the device in the image bdm and the image
  (which is empty) are considered to be the boot device:

  $ nova boot --image test-snap --flavor m1.nano test-instance2
  ERROR: Block Device Mapping is Invalid: Boot sequence for the instance and 
image/block device mapping combination is not valid. (HTTP 400) (Request-ID: 
req-3e502a29-9cd3-4c0c-8ddc-a28d315d21ea)

  If we check the internal flow we can see that nova considers the image
  to be the boot device even though the image itself doesn't define any
  local disk but only a block_device_mapping pointing to the snapshot.

  To be able to generate proper images from volume-backed instances we should:
   1. copy only the relevant keys from the original block_device_mapping to
prevent duplicates in the DB (see the sketch after this list)
   2. prevent nova from adding a new block device for the image if this one
doesn't define any local disk
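
  A rough sketch of point 1 (the key list is illustrative, not the exact set
  the patch uses): strip DB- and connection-specific fields before copying a
  BDM into the snapshot image's properties.

  RELEVANT_BDM_KEYS = ('device_name', 'snapshot_id', 'volume_id', 'volume_size',
                       'delete_on_termination', 'no_device', 'virtual_name')

  def bdm_for_image_properties(bdm):
      # Drop id, instance_uuid, connection_info, timestamps, etc., so the
      # snapshot image only carries what a new boot actually needs.
      return {k: v for k, v in bdm.items() if k in RELEVANT_BDM_KEYS}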

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247675] Re: [OSSA 2013-036] Insufficient sanitization of Instance Name in Horizon (CVE-2013-6858)

2013-12-16 Thread Jeremy Stanley
** Changed in: ossa
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1247675

Title:
  [OSSA 2013-036] Insufficient sanitization of Instance Name in Horizon
  (CVE-2013-6858)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512

  Hello,

  My name is Chris Chapman, I am an Incident Manager with Cisco PSIRT.

  I would like to report the following XSS issue found in the OpenStack
  WebUI that was reported to Cisco.

  The details are as follows:

  The OpenStack web user interface is vulnerable to XSS:

  While launching (or editing) an instance, injecting script tags in
  the instance name results in the javascript being executed on the
  Volumes and the Network Topology page.  This is a classic Stored
  XSS vulnerability.

  Recommendations:
  - - Sanitize the Instance Name string to prevent XSS.
  - - Sanitize all user input to prevent XSS.
  - - Consider utilizing Content Security Policy (CSP). This can be used
  to prevent inline javascript from executing  only load javascript
  files from approved domains.  This would prevent XSS, even in
  scenarios where user input is not
  properly sanitized.

  
  Please include PSIRT-2070334443 in the subject line for all
  communications on this issue with Cisco going forward.

  If you can also include any case number that this issue is assigned
  that will help us track the issue.

  Thank you,
  Chris

  Chris Chapman | Incident Manager
  Cisco Product Security Incident Response Team - PSIRT
  Security Research and Operations
  Office: (949) 823-3167 | Direct: (562) 208-0043
  Email: chchcha...@cisco.com
  SIO: http://www.cisco.com/security
  PGP: 0x959B3169
  -BEGIN PGP SIGNATURE-
  Version: GnuPG/MacGPG2 v2.0.19 (Darwin)
  Comment: GPGTools - http://gpgtools.org
  Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

  iQEcBAEBCgAGBQJSc8QQAAoJEPMPZe6VmzFpLw8H/1h2ZhqKJs6nxZDGnDpn3N2t
  6S6vwx3UYZGG5O1TTx1wrZkkHxckAg8GzMBJa6HFXPs1Zr0o9nhuLfvdKfShQFUA
  HqWMPOFPKid2LML2FMOGAWAdQAG6YTMknZ9d8JTvHI2BhluOsjxlOa0TBNr/Gm+Z
  iwAOBmAgJqU2nWx1iomiGhUpwX2oaQuqDyaosycpVtv0gQAtYsEf7zYdRNod7kB5
  6CGEXJ8J161Bd04dta99onFAB1swroOpOgUopUoONK4nHDxot/MojnvusDmWe2Fs
  usVLh7d6hB3eDyWpVFhbKwSW+Bkmku1Tl0asCgm1Uy9DkrY23UGZuIqKhFs5A8U=
  =gycf
  -END PGP SIGNATURE-

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1247675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261551] [NEW] LXC volume attach does not work

2013-12-16 Thread Ripal Nathuji
Public bug reported:

According to the older bug 1009701
(https://bugs.launchpad.net/nova/+bug/1009701), LXC volume attach should
begin working with newer versions of libvirt (1.0.1 or 1.0.2). Based on
testing with libvirt version 1.1.x, however, I get the following error:

 libvirtError: Unable to create device /proc/4895/root/dev/sdb:
Permission denied

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: lxc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261551

Title:
  LXC volume attach does not work

Status in OpenStack Compute (Nova):
  New

Bug description:
  According to the older bug 1009701
  (https://bugs.launchpad.net/nova/+bug/1009701), LXC volume attach
  should begin working with newer versions of libvirt (1.0.1 or 1.0.2).
  Based on testing with libvirt version 1.1.x, however, I get the
  following error:

   libvirtError: Unable to create device /proc/4895/root/dev/sdb:
  Permission denied

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when an RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.
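
  For reference, a minimal sketch of the documented contract, assuming the
  standard oslo.messaging API of that era (not taken from the bug's test
  scripts): two servers listen on the same topic under different server
  names, and a request addressed to the bare topic should be dispatched to
  exactly one of them.

  from oslo.config import cfg
  from oslo import messaging

  transport = messaging.get_transport(cfg.CONF)

  # Two servers sharing one topic; topic-only requests are supposed to be
  # dispatched round-robin between them, not broadcast to both.
  for name in ('server-01', 'server-02'):
      target = messaging.Target(topic='my-topic', server=name)
      server = messaging.get_rpc_server(transport, target, endpoints=[],
                                        executor='blocking')
      # server.start() would begin consuming from the topic.

  # A call through a client bound only to the topic should reach one server.
  client = messaging.RPCClient(transport, messaging.Target(topic='my-topic'))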

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229994] Re: VMwareVCDriver: snapshot failure when host in maintenance mode

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229994

Title:
  VMwareVCDriver: snapshot failure when host in maintenance mode

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Image snapshot through the VC cluster driver may fail if, within the
  datacenter containing the cluster managed by the driver, there are one
  or more hosts in maintenance mode with access to the datastore
  containing the disk image snapshot.

  A sign that this situation has occurred is the appearance in the nova
  compute log of an error similar to the following:

  2013-08-02 07:10:30.036 WARNING nova.virt.vmwareapi.driver [-] Task 
[DeleteVirtualDisk_Task] (returnval){
  value = task-228
  _type = Task
  } status: error The operation is not allowed in the current state.

  What this means is that even if all hosts in the cluster are running fine in
  normal mode, a host outside of the cluster going into maintenance mode may
  lead to snapshot failure.

  The root cause of the problem is due to an issue in VC's handler of
  the VirtualDiskManager.DeleteVirtualDisk_Task API, which may
  incorrectly pick a host in maintenance mode to service the disk
  deletion even though such an operation will be rejected by the host
  under maintenance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1229994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252827] Re: VMWARE: Intermittent problem with stats reporting

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252827

Title:
  VMWARE: Intermittent problem with stats reporting

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  I see that sometimes the VMware driver reports 0 stats. Please take a look
  at the following log file for more information:
  http://162.209.83.206/logs/51404/6/screen-n-cpu.txt.gz

  excerpts from log file:
  2013-11-18 15:41:03.994 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for datastore Reason: None
  2013-11-18 15:41:04.029 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for host Reason: None
  2013-11-18 15:41:04.029 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for resourcePool Reason: None
  2013-11-18 15:41:04.029 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: free ram (MB): 0 _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:389
  2013-11-18 15:41:04.029 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: free disk (GB): 0 _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:390
  2013-11-18 15:41:04.030 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: VCPU information unavailable _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:397

  During this time we cannot spawn any server. Look at the
  http://162.209.83.206/logs/51404/6/screen-n-sch.txt.gz

  excerpts from log file:
  2013-11-18 15:41:52.475 DEBUG nova.filters 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] Filter AvailabilityZoneFilter 
returned 1 host(s) get_filtered_objects /opt/stack/nova/nova/filters.py:88
  2013-11-18 15:41:52.476 DEBUG nova.scheduler.filters.ram_filter 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] (Ubuntu1204Server, 
domain-c26(c1)) ram:-576 disk:0 io_ops:0 instances:1 does not have 64 MB usable 
ram, it only has -576.0 MB usable ram. host_passes 
/opt/stack/nova/nova/scheduler/filters/ram_filter.py:60
  2013-11-18 15:41:52.476 INFO nova.filters 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] Filter RamFilter returned 0 
hosts
  2013-11-18 15:41:52.477 WARNING nova.scheduler.driver 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] [instance: 
1a648022-1783-4874-8b41-c3f4c89d8500] Setting instance to ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251920] Re: Tempest failures due to failure to return console logs from an instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251920

Title:
  Tempest failures due to failure to return console logs from an
  instance

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  Logstash search:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpjb25zb2xlLmh0bWwgQU5EIG1lc3NhZ2U6XCJhc3NlcnRpb25lcnJvcjogY29uc29sZSBvdXRwdXQgd2FzIGVtcHR5XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODQ2NDEwNzIxODl9

  An example failure is http://logs.openstack.org/92/55492/8/check
  /check-tempest-devstack-vm-full/ef3a4a4/console.html

  console.html
  ===

  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,775 Request: POST 
http://127.0.0.1:8774/v2/3f6934d9aabf467aa8bc51397ccfa782/servers/10aace14-23c1-4cec-9bfd-2c873df1fbee/action
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': 'Token omitted'}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Body: 
{os-getConsoleOutput: {length: 10}}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:21,000 Response Status: 200
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Nova request id: 
req-7a2ee0ab-c977-4957-abb5-1d84191bf30c
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Headers: 
{'content-length': '14', 'date': 'Sat, 16 Nov 2013 21:41:20 GMT', 
'content-type': 'application/json', 'connection': 'close'}
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Body: {output: 
}
  2013-11-16 21:54:27.999 | }}}
  2013-11-16 21:54:27.999 | 
  2013-11-16 21:54:27.999 | Traceback (most recent call last):
  2013-11-16 21:54:27.999 |   File 
tempest/api/compute/servers/test_server_actions.py, line 281, in 
test_get_console_output
  2013-11-16 21:54:28.000 | self.wait_for(get_output)
  2013-11-16 21:54:28.000 |   File tempest/api/compute/base.py, line 133, in 
wait_for
  2013-11-16 21:54:28.000 | condition()
  2013-11-16 21:54:28.000 |   File 
tempest/api/compute/servers/test_server_actions.py, line 278, in get_output
  2013-11-16 21:54:28.000 | self.assertTrue(output, Console output was 
empty.)
  2013-11-16 21:54:28.000 |   File /usr/lib/python2.7/unittest/case.py, line 
420, in assertTrue
  2013-11-16 21:54:28.000 | raise self.failureException(msg)
  2013-11-16 21:54:28.001 | AssertionError: Console output was empty.

  n-api
  

  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Action: 'action', body: 
{os-getConsoleOutput: {length: 10}} _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:963
  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Calling method bound method 
ConsoleOutputController.get_console_output of 
nova.api.openstack.compute.contrib.console_output.ConsoleOutputController 
object at 0x3c1f990 _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:964
  2013-11-16 21:41:20.865 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Making synchronous call on 
compute.devstack-precise-hpcloud-az2-663635 ... multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:553
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] MSG_ID is 
a93dceabf6a441eb850b5fbb012d661f multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:556
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] UNIQUE_ID is 
706ab69dc066440fbe1bd7766b73d953. _add_unique_id 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:341
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] Closed channel #1 _do_close 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] using channel_id: 1 __init__ 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
  2013-11-16 21:41:20.870 22679 DEBUG amqp [-] Channel open _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  2013-11-16 21:41:20.999 INFO nova.osapi_compute.wsgi.server 

[Yahoo-eng-team] [Bug 1246592] Re: Nova live migration failed due to OLE error

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246592

Title:
  Nova live migration failed due to OLE error

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When migrate vm on hyperV, command fails with the following error:

  2013-10-25 03:35:40.299 12396 ERROR nova.openstack.common.rpc.amqp 
[req-b542e0fd-74f5-4e53-889c-48a3b44e2887 3a75a18c8b60480d9369b25ab06519b3 
0d44e4afd3d448c6acf0089df2dc7658] Exception during message handling
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\amqp.py, line 461, 
in _process_data
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\dispatcher.py, line 
172, in dispatch
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 90, in wrapped
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
payload)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 73, in wrapped
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 4103, in 
live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py, line 118, in 
live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
44, in wrapper
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp return 
function(self, *args, **kwds)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
76, in live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
recover_method(context, instance_ref, dest, block_migration)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
69, in live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp dest)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationutils.py, line 
231, in live_migrate_vm
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
disk_paths = self._get_physical_disk_paths(vm_name)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationutils.py, line 
114, in _get_physical_disk_paths
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
ide_paths = self._vmutils.get_controller_volume_paths(ide_ctrl_path)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py, line 553, in 
get_controller_volume_paths
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
parent: controller_path})
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\wmi.py, line 

[Yahoo-eng-team] [Bug 1239603] Re: Bogus ERROR level debug spew when creating a new instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239603

Title:
  Bogus ERROR level debug spew when creating a new instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Change-Id: Ifd41886b9bc7dff01cdf741a833946bed1bdddc implemented a
  number of items required for auto_disk_config to be more than just
  True or False.

  It appears that a logging statement used for debugging the code has
  been left behind:

  def _check_auto_disk_config(self, instance=None, image=None,
                              **extra_instance_updates):
      auto_disk_config = extra_instance_updates.get("auto_disk_config")
      if auto_disk_config is None:
          return
      if not image and not instance:
          return

      if image:
          image_props = image.get("properties", {})
          LOG.error(image_props)

  
  This needs to be removed as it is causing false positives to be picked up by
our error-tracking software.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244311] Re: notification failure in _sync_power_states

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244311

Title:
  notification failure in _sync_power_states

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The _sync_power_states periodic task pulls instances without
  system_metadata in order to avoid unnecessarily consuming network
  bandwidth.  Most of the time this is fine, but if
  vm_power_state != db_power_state then the instance is updated and
  saved.  As part of saving the instance a notification is sent.  In
  order to send the notification it extracts flavor information from the
  system_metadata on the instance.  But system_metadata isn't loaded,
  and won't be lazy-loaded.  So an exception is raised and the
  notification isn't sent.
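
  A tiny illustration of why the notification blows up (a simplified stand-in
  for nova.compute.flavors.extract_flavor, not the real code):

  def extract_flavor(sys_meta, prefix='instance_type_'):
      # The real helper reads every flavor field from system_metadata, so with
      # the metadata never loaded the very first lookup raises KeyError.
      return {'memory_mb': int(sys_meta[prefix + 'memory_mb'])}

  try:
      extract_flavor({})  # instance pulled without system_metadata
  except KeyError as exc:
      print('notification dropped: %s' % exc)  # 'instance_type_memory_mb'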

  2013-10-23 03:30:35.714 21492 ERROR nova.notifications [-] [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Failed to send state update notification
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Traceback (most recent call last):
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 146, in send_update
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] old_display_name=old_display_name)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 199, in _send_instance_update_notification
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] payload = info_from_instance(context, 
instance, None, None)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 343, in info_from_instance
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] instance_type = 
flavors.extract_flavor(instance_ref)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/compute/flavors.py,
 line 282, in extract_flavor
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] instance_type[key] = 
type_fn(sys_meta[type_key])
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] KeyError: 'instance_type_memory_mb'
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab]
  2013-10-23 03:30:35.718 21492 WARNING nova.compute.manager [-] [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Instance shutdown by itself. Calling the 
stop API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237795] Re: VMware: restarting nova compute reports invalid instances

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237795

Title:
  VMware: restarting nova compute reports invalid instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When nova compute restarts, the running instances on the hypervisor are
  queried. None of the instances would be matched - this would prevent
  the instance states from being in sync with the state in the database. See
  _destroy_evacuated_instances
  (https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L531)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230925] Re: Require new python-cinderclient for Havana

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1230925

Title:
  Require new python-cinderclient for Havana

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Havana Nova needs to require cinderclient 1.0.6, which contains the
  update_snapshot_status() API used by assisted snapshots, as well as
  migrate_volume_completion() for volume migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1230925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229671] Re: Deploy instances failed on Hyper-V with Chinese locale

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229671

Title:
  Deploy instances failed on Hyper-V with Chinese locale

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I am deploying instances on my Hyper-V host but I hit the error below.
  I remember that in the past vmops.py called vhdutils.py, but now it calls
  vhdutilsv2.py. I am not sure whether that is the place that caused this
  issue.  Please help to check.

  
  2013-09-24 18:46:47.079 2304 WARNING nova.network.neutronv2.api [-] 
[instance: 973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] No network configured!
  2013-09-24 18:46:47.734 2304 INFO nova.virt.hyperv.vmops 
[req-474eb715-9048-475f-9734-7b5fdc005a64 b13861ca49f641d7a818e6b8335f2351 
29db386367fa4c4e9ffb3c369a46ee90] [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] Spawning new instance
  2013-09-24 18:46:49.996 2304 ERROR nova.compute.manager 
[req-474eb715-9048-475f-9734-7b5fdc005a64 b13861ca49f641d7a818e6b8335f2351 
29db386367fa4c4e9ffb3c369a46ee90] [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] Instance failed to spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] Traceback (most recent call last):
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 1431, in _spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] block_device_info)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py, line 55, in spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] admin_password, network_info, 
block_device_info)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 90, in wrapper
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] return function(self, *args, **kwds)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 208, in spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] root_vhd_path = 
self._create_root_vhd(context, instance)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 177, in 
_create_root_vhd
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] self._pathutils.remove(root_vhd_path)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 161, in 
_create_root_vhd
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] base_vhd_info = 
self._vhdutils.get_vhd_info(base_vhd_path)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vhdutilsv2.py, line 124, in 
get_vhd_info
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] et = 
ElementTree.fromstring(vhd_info_xml)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\xml\etree\ElementTree.py, line 1301, in XML
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] parser.feed(text)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\xml\etree\ElementTree.py, line 1641, in feed
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1241350] Re: VMware: Detaching a volume from an instance also deletes the volume's backing vmdk (ESXDriver only)

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241350

Title:
  VMware: Detaching a volume from an instance also deletes the volume's
  backing vmdk (ESXDriver only)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I found that when I run:

  % nova volume-detach my_instance c54ad11f-4e51-41a0-97db-7e551776db59

  where the volume with the given id is currently attached to my running
  instance named my_instance, the operation completes successfully.
  Nevertheless, a subsequent attempt to attach the same volume will fail.
  So:

  % nova volume-attach my_instance c54ad11f-4e51-41a0-97db-7e551776db59
  /dev/sdb

  fails with the error that the volume's vmdk file is not found.

  Cause:

  During volume detach a delete_virtual_disk_spec is used to remove the
  device from the running instance. This spec also destroys the
  underlying vmdk file. The offending line is:
  https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vm_util.py#L471

  Possible fix:
  The fileOperation field of the device config should be left unset during
  this reconfigure operation, while the device_config.operation field is
  still set to "remove". This removes the device from the VM without
  deleting the underlying vmdk backing, as sketched below.
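
  A minimal illustrative sketch of a detach spec built along those lines (the
  factory and attribute names follow the usual suds conventions; this is not
  the exact nova.virt.vmwareapi.vm_util code):

    # Remove a virtual disk device from a VM without touching its backing
    # vmdk: set operation to "remove" and leave fileOperation unset.
    def get_detach_disk_spec(client_factory, device):
        device_spec = client_factory.create('ns0:VirtualDeviceConfigSpec')
        device_spec.operation = "remove"
        device_spec.device = device
        # Deliberately no device_spec.fileOperation = "destroy" here; that
        # assignment is what deletes the volume's backing file on detach.

        config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
        config_spec.deviceChange = [device_spec]
        return config_spec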

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253510] Re: Error mispelt in disk api file

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253510

Title:
  Error mispelt in disk api file

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Error is spelt 'errror', which is causing a KeyError. See bug 1253508

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261559] [NEW] Timeouts due to VMs not sending DHCPDISCOVER messages

2013-12-16 Thread Salvatore Orlando
Public bug reported:

In some instances, tempest scenario tests fail with a timeout error
similar to bug 1253896, but unlike other occurrences of that bug, the
failure happens even if all the elements connecting the floating IP to
the VM are properly wired.

Further investigation revealed that a DHCPDISCOVER is apparently not sent from 
the VM.
An instance of this failure can be seen here: 
http://logs.openstack.org/60/58860/2/gate/gate-tempest-dsvm-neutron/b9b25eb

Looking at syslog for this tempest run, only one DHCPDISCOVER is
detected, even though 27 DHCPRELEASE messages are sent (meaning the
notifications were properly handled and the dnsmasq processes were up
and running).

Relevant events from a specific failure (boot_volume_pattern)

Server boot: 15:18:44.972
Fip create: 15:18:45.075
Port wired: 15:18:45.279
Fip wired: 15:18:46.356
Server delete: 15:22:03 (timeout expired)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261559

Title:
  Timeouts due to VMs not sending DHCPDISCOVER messages

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some instances, tempest scenario tests fail with a timeout error
  similar to bug 1253896, but unlike other occurrences of that bug, the
  failure happens even if all the elements connecting the floating IP to
  the VM are properly wired.

  Further investigation revealed that a DHCPDISCOVER is apparently not sent 
from the VM.
  An instance of this failure can be seen here: 
http://logs.openstack.org/60/58860/2/gate/gate-tempest-dsvm-neutron/b9b25eb

  Looking at syslog for this tempest run, only one DHCPDISCOVER is
  detected, even though 27 DHCPRELEASE messages are sent (meaning the
  notifications were properly handled and the dnsmasq processes were up
  and running).

  Relevant events from a specific failure (boot_volume_pattern)

  Server boot: 15:18:44.972
  Fip create: 15:18:45.075
  Port wired: 15:18:45.279
  Fip wired: 15:18:46.356
  Server delete: 15:22:03 (timeout expired)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246412] Re: Unshelving an instance with an attached volume causes the volume to not get attached

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246412

Title:
  Unshelving an instance with an attached volume causes the volume to
  not get attached

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When shelving an instance that has a volume attached, the volume will
  not get re-attached once the instance is unshelved.

  Reproduce by:

  $nova boot --image IMAGE --flavor FLAVOR test
  $nova attach INSTANCE VOLUME #ssh into the instance and make sure the 
volume is there
  $nova shelve INSTANCE #Make sure the instance is done shelving
  $nova unshelve INSTANCE #Log in and see that the volume is not visible any 
more

  It can also be seen that the volume remains attached as per

  $cinder list

  And if you take a look at the generated xml (if you use libvirt) you
  can see that the volume is not there when the instance is done
  unshelving.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246103] Re: encryptors module forces cert and scheduler services to depend on cinderclient

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246103

Title:
  encryptors module forces cert and scheduler services to depend on
  cinderclient

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Packstack:
  Invalid

Bug description:
  When Nova Scheduler is installed via Packstack as the only explicitly
  installed service on a particular node, it will fail to start.  This
  is because it depends on the Python cinderclient library, which is not
  marked as a dependency in the 'nova::scheduler' class in Packstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243260] Re: Nova api doesn't start with a backdoor port set

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243260

Title:
  Nova api doesn't start with a backdoor port set

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  nova-api fails to start properly if a backdoor port is specified.
  Looking at the logs, this traceback is printed repeatedly:

  2013-10-22 14:19:46.822 INFO nova.openstack.common.service [-] Child 1460 
exited with status 1
  2013-10-22 14:19:46.824 INFO nova.openstack.common.service [-] Started child 
1468
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 60684 for process 1467
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 58986 for process 1468
  2013-10-22 14:19:46.837 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 117, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup x.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 49, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/service.py, line 448, in run_service
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/service.py, line 357, in start
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
self.manager.backdoor_port = self.backdoor_port
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.840 TRACE nova   File /usr/local/bin/nova-api, line 10, 
in module
  2013-10-22 14:19:46.840 TRACE nova sys.exit(main())
  2013-10-22 14:19:46.840 TRACE nova   File /opt/stack/nova/nova/cmd/api.py, 
line 53, in main
  2013-10-22 14:19:46.840 TRACE nova launcher.wait()
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 351, in wait
  2013-10-22 14:19:46.840 TRACE nova self._respawn_children()
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 341, in 
_respawn_children
  2013-10-22 14:19:46.840 TRACE nova self._start_child(wrap)
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 287, in _start_child
  2013-10-22 14:19:46.840 TRACE nova os._exit(status)
  2013-10-22 14:19:46.840 TRACE nova TypeError: an integer is required
  2013-10-22 14:19:46.840 TRACE nova
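
  A simplified, hedged sketch of the kind of guard the traceback suggests is
  missing around the backdoor-port assignment (not the exact upstream fix):

    class Service(object):
        """Stand-in for the relevant part of nova.service.Service."""

        def __init__(self, manager=None, backdoor_port=None):
            self.manager = manager
            self.backdoor_port = backdoor_port

        def start(self):
            # nova-api's WSGI services have no manager object, so guard the
            # assignment instead of assuming one is always present.
            if self.manager is not None:
                self.manager.backdoor_port = self.backdoor_port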

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243291] Re: Restarting nova compute has an exception

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243291

Title:
  Restarting nova compute has an exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  (latest havana code - libvirt driver)

  1. launch a nova vm
  2. see that the instance is deployed on the compute node
  3. restart the compute node

  get the following exception:

  2013-10-22 05:46:53.711 30742 INFO nova.openstack.common.rpc.common 
[req-57056535-4ecd-488a-a75e-ff83341afb98 None None] Connected to AMQP server 
on 192.168.10.111:5672
  2013-10-22 05:46:53.737 30742 AUDIT nova.service [-] Starting compute node 
(version 2013.2)
  2013-10-22 05:46:53.814 30742 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
x.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 65, 
in run_service
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 154, in start
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 786, in 
init_host
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 664, in 
_init_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
net_info = compute_utils.get_nw_info_for_instance(instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/utils.py, line 349, in 
get_nw_info_for_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return instance.info_cache.network_info
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup
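
  A hedged sketch of a defensive variant of the helper named in the traceback
  (simplified; the real fix may differ):

    from nova.network import model as network_model

    def get_nw_info_for_instance(instance):
        # An instance can have no info_cache row (e.g. after an interrupted
        # boot); return an empty NetworkInfo instead of raising on restart.
        if instance.info_cache is None:
            return network_model.NetworkInfo.hydrate([])
        return instance.info_cache.network_info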

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240247] Re: API cell always doing local deletes

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240247

Title:
  API cell always doing local deletes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  It appears a regression was introduced in:

  https://review.openstack.org/#/c/36363/

  where the API cell now always does a _local_delete() before telling
  child cells to delete the instance. There are at least a couple of bad
  side effects of this:

  1) The instance disappears immediately from API view, even though the 
instance still exists in the child cell.  The user does not see a 'deleting' 
task state.  And if the delete fails in the child cell, you have a sync issue 
until the instance is 'healed'.
  2) Duplicate delete.start and delete.end notifications are sent: one from 
the API cell, one from the child cell.

  The problem seems to be that _local_delete is being called because the
  service is determined to be down... because the compute service does
  not run in the API cell.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237126] Re: nova-api-{ec2, metadata, os-compute} don't allow SSL to be enabled

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237126

Title:
  nova-api-{ec2,metadata,os-compute} don't allow SSL to be enabled

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Although the script bin/nova-api will read nova.conf to determine
  which API services should have SSL enabled (via 'enabled_ssl_apis'),
  the individual API scripts

  bin/nova-api-ec2
  bin/nova-api-metadata
  bin/nova-api-os-compute

  do not contain similar logic to allow configuration of SSL. For
  installations that want to use SSL but not the nova-api wrapper, there
  should be a similar way to enable SSL for these individual services (a
  sketch of such a check follows below).
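
  A hedged sketch of the per-script check being asked for; enabled_ssl_apis
  is the existing option, everything else here is illustrative:

    from oslo.config import cfg

    CONF = cfg.CONF
    # The option already exists in nova.conf; re-declared here only so the
    # sketch is self-contained.
    CONF.register_opts([cfg.ListOpt('enabled_ssl_apis', default=[])])

    def api_should_use_ssl(api_name):
        """Return True if this API was listed in enabled_ssl_apis."""
        return api_name in CONF.enabled_ssl_apis

    # Hypothetical use in a bin/nova-api-metadata style entry point:
    #   server = service.WSGIService('metadata',
    #                                use_ssl=api_should_use_ssl('metadata'))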

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242855] Re: [OSSA 2013-028] Removing role adds role with LDAP backend

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242855

Title:
  [OSSA 2013-028] Removing role adds role with LDAP backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Using the LDAP assignment backend, if you attempt to remove a role
  from a user on a tenant and the user doesn't have that role on the
  tenant, then the user is actually granted the role on the tenant. This
  only happens if the role had not previously been granted to anyone on
  the tenant.

  To recreate

  0) Start with devstack, configured with LDAP (note especially to set
  KEYSTONE_ASSIGNMENT_BACKEND):

  In localrc,
   enable_service ldap
   KEYSTONE_IDENTITY_BACKEND=ldap
   KEYSTONE_ASSIGNMENT_BACKEND=ldap

  1) set up environment with OS_USERNAME=admin

  export OS_USERNAME=admin
  ...

  2) Create a new user, give admin role, list roles:

  $ keystone user-create --name blktest1 --pass blkpwd
  +--+--+
  | Property |  Value   |
  +--+--+
  |  email   |  |
  | enabled  |   True   |
  |id| 3b71182dc36e45c6be4733d508201694 |
  |   name   | blktest1 |
  +--+--+

  $ keystone user-role-add --user blktest1 --role admin --tenant service
  (no output)

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+---+--+--+
  |id|  name | user_id  
|tenant_id |
  
+--+---+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 
| 5b0af1d5013746b286b0d650da73be57 |
  
+--+---+--+--+

  3) Remove a role from that user that they don't have (using anotherrole
  here since devstack sets it up):

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - Expected to fail with 404, but it doesn't!

  4) List roles as that user:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+-+--+--+
  |id| name| user_id
  |tenant_id |
  
+--+-+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b |admin| 
3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  | afe23e7955704ccfad803b4a104b28a7 | anotherrole | 
3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  
+--+-+--+--+

  - Expected to not include the role that was just removed!

  5) Remove the role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - No errors, which I guess is expected since list just said they had
  the role...

  6) List roles, and now it's gone:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+---+--+--+
  |id|  name | user_id  
|tenant_id |
  
+--+---+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 
| 5b0af1d5013746b286b0d650da73be57 |
  
+--+---+--+--+

  7) Remove role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-remove --user blktest1 --role anotherrole --tenant service
  Could not find user, 3b71182dc36e45c6be4733d508201694. (HTTP 404)

  - Strangely says user not found rather than role not assigned.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1239709] Re: NovaObject does not properly honor VERSION

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239709

Title:
  NovaObject does not properly honor VERSION

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The base object infrastructure has been comparing Object.version
  instead of the Object.VERSION that *all* the objects have been setting
  and incrementing when changes have been made. Since the base object
  defined a .version, and that was what was used to determine the actual
  version of an object, the VERSION defined by each subclass was
  effectively ignored.

  All systems in the wild currently running broken code are sending
  version '1.0' for all of their objects. The fix is to change the base
  object infrastructure to properly examine, compare and send
  Object.VERSION.

  Impact should be minimal at this point, but getting systems patched as
  soon as possible will be important going forward.
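
  A heavily simplified, hedged sketch of the intended behaviour (the real
  NovaObject machinery is far more involved):

    class NovaObject(object):
        VERSION = '1.0'   # each subclass overrides and bumps this

        def obj_to_primitive(self):
            # Serialize the class-level VERSION rather than a base-class
            # 'version' attribute, so subclass bumps are actually sent.
            return {'nova_object.name': type(self).__name__,
                    'nova_object.version': self.VERSION}

    class Instance(NovaObject):
        VERSION = '1.1'   # honoured only if the base class reads VERSION

    assert Instance().obj_to_primitive()['nova_object.version'] == '1.1'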

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242597] Re: [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens (CVE-2013-6391)

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242597

Title:
  [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens
  (CVE-2013-6391)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  So I finally got around to investigating the scenario I mentioned in
  https://review.openstack.org/#/c/40444/, and unfortunately it seems
  that the ec2tokens API does indeed provide a way to circumvent the
  role delegation provided by trusts, and obtain all the roles of the
  trustor user, not just those explicitly delegated.

  Steps to reproduce:
  - Trustor creates a trust delegating a subset of roles
  - Trustee gets a token scoped to that trust
  - Trustee creates an ec2-keypair
  - Trustee makes a request to the ec2tokens API, to validate a signature 
created with the keypair
  - ec2tokens API returns a new token, which is not scoped to the trust and 
enables access to all the trustor's roles.

  I can provide some test code which demonstrates the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238374] Re: TypeError in periodic task 'update_available_resource'

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238374

Title:
  TypeError in periodic task 'update_available_resource'

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  This occurs while creating an instance under my devstack env:

  2013-10-11 02:56:29.374 ERROR nova.openstack.common.periodic_task [-] Error 
during ComputeManager.update_available_resource: 'NoneType' object is not 
iterable
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task Traceback 
(most recent call last):
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/openstack/common/periodic_task.py, line 180, in 
run_periodic_tasks
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/compute/manager.py, line 4859, in 
update_available_resource
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 246, in inner
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task return 
f(*args, **kwargs)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/compute/resource_tracker.py, line 313, in 
update_available_resource
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
self.pci_tracker.clean_usage(instances, migrations, orphans)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/pci/pci_manager.py, line 285, in clean_usage
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task for dev 
in self.claims.pop(uuid):
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task TypeError: 
'NoneType' object is not iterable
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task
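
  A hedged sketch of the defensive change the traceback suggests: the claims
  mapping may hold no entry (or None) for an instance uuid, so normalise the
  value before iterating (free_device is a placeholder for the tracker's real
  cleanup):

    def clean_claims(claims, instance_uuids, free_device):
        for uuid in instance_uuids:
            # pop() may legitimately find nothing claimed for this instance;
            # fall back to an empty list instead of iterating None.
            for dev in claims.pop(uuid, None) or []:
                free_device(dev)

    # A uuid with no recorded claim no longer raises TypeError:
    clean_claims({'uuid-1': None}, ['uuid-1', 'uuid-2'],
                 free_device=lambda dev: None)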

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235435] Re: 'SubnetInUse: Unable to complete operation on subnet UUID. One or more ports have an IP allocation from this subnet.'

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235435

Title:
  'SubnetInUse: Unable to complete operation on subnet UUID. One or more
  ports have an IP allocation from this subnet.'

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  Occasional tempest failure:

  http://logs.openstack.org/86/49086/2/gate/gate-tempest-devstack-vm-
  neutron-isolated/ce14ceb/testr_results.html.gz

  ft3.1: tearDownClass 
(tempest.scenario.test_network_basic_ops.TestNetworkBasicOps)_StringException: 
Traceback (most recent call last):
File tempest/scenario/manager.py, line 239, in tearDownClass
  thing.delete()
File tempest/api/network/common.py, line 71, in delete
  self.client.delete_subnet(self.id)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 112, in with_params
  ret = self.function(instance, *args, **kwargs)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 380, in delete_subnet
  return self.delete(self.subnet_path % (subnet))
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1233, in delete
  headers=headers, params=params)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1222, in retry_request
  headers=headers, params=params)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1165, in do_request
  self._handle_fault_response(status_code, replybody)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 97, in exception_handler_v20
  message=msg)
  NeutronClientException: 409-{u'NeutronError': {u'message': u'Unable to 
complete operation on subnet 9e820b02-bfe2-47e3-b186-21c5644bc9cf. One or more 
ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', 
u'detail': u''}}

  
  logstash query:

  @message:One or more ports have an IP allocation from this subnet
  AND @fields.filename:logs/screen-q-svc.txt and @message:
  SubnetInUse: Unable to complete operation on subnet


  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiBhbmQgQG1lc3NhZ2U6XCIgU3VibmV0SW5Vc2U6IFVuYWJsZSB0byBjb21wbGV0ZSBvcGVyYXRpb24gb24gc3VibmV0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODA5MTY1NDUxODcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226698] Re: flavor pagination incorrectly uses id rather than flavorid

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226698

Title:
  flavor pagination incorrectly uses id rather than flavorid

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The ID in the flavor-list response is really instance_types.flavorid in the
  database, but when using the marker the instance_types.id field is used. The
  test passes as long as instance_types.id begins with 1 and is sequential. If
  it does not begin with 1, or if it does not match instance_types.flavorid,
  the test fails with the following error:

  '''
  Traceback (most recent call last):
    File /Volumes/apple/openstack/tempest/tempest/api/compute/flavors/test_flavors.py, line 91, in test_list_flavors_detailed_using_marker
      resp, flavors = self.client.list_flavors_with_detail(params)
    File /Volumes/apple/openstack/tempest/tempest/services/compute/json/flavors_client.py, line 45, in list_flavors_with_detail
      resp, body = self.get(url)
    File /Volumes/apple/openstack/tempest/tempest/common/rest_client.py, line 263, in get
      return self.request('GET', url, headers)
    File /Volumes/apple/openstack/tempest/tempest/common/rest_client.py, line 394, in request
      resp, resp_body)
    File /Volumes/apple/openstack/tempest/tempest/common/rest_client.py, line 439, in _error_checker
      raise exceptions.NotFound(resp_body)
  NotFound: Object not found
  Details: {itemNotFound: {message: The resource could not be found., code: 404}}

  ==
  FAIL: tempest.api.compute.flavors.test_flavors.FlavorsTestJSON.test_list_flavors_using_marker[gate]
  '''

  Really, it should use flavorid as the marker. The flavor_get_all() method in
  nova.db.sqlalchemy.api should be fixed to use flavorid=marker in the filter,
  as follows (a fuller sketch is given below):

  -filter_by(id=marker).\
  +filter_by(flavorid=marker).\
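
  A hedged, simplified sketch of marker-based pagination keyed on flavorid
  (assuming InstanceTypes is the SQLAlchemy model and FlavorNotFound the
  matching exception; this is not the exact nova.db.sqlalchemy.api code):

    def flavor_get_all(session, marker=None, limit=None):
        query = session.query(InstanceTypes).order_by(InstanceTypes.flavorid)
        if marker is not None:
            # Look the marker up by the user-visible flavorid, not the
            # internal autoincrement id, so pagination works regardless of
            # how the rows happen to be numbered.
            marker_row = query.filter_by(flavorid=marker).first()
            if marker_row is None:
                raise FlavorNotFound(marker)
            query = query.filter(InstanceTypes.flavorid > marker_row.flavorid)
        if limit is not None:
            query = query.limit(limit)
        return query.all()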

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213927] Re: flavor extra spec api fails with XML content type if key contains a colon

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213927

Title:
  flavor extra spec api fails with XML content type if key contains a
  colon

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  The flavor extra spec API  extension (os-extra_specs) fails with HTTP
  500 when content-type application/xml is requested if the extra spec
  key contains a colon.

  For example:

  curl [endpoint]/flavors/[ID]/os-extra_specs -H Accept: application/json -H 
X-Auth-Token: $TOKEN
  {extra_specs: {foo:bar: 999}}

  curl -i [endpoint]/flavors/[ID]/os-extra_specs -H Accept: application/xml 
-H X-Auth-Token: $TOKEN
  {extra_specs: {foo:bar: 999}}
  HTTP/1.1 500 Internal Server Error

  The stack trace shows that the XML parser tries to interpret the :
  in the key as if it were an XML namespace prefix, which fails because
  the namespace is not valid:

  2013-08-19 13:08:14.374 27521 DEBUG nova.api.openstack.wsgi 
[req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Calling method 
bound method FlavorExtraSpecsController.index of 
nova.api.openstack.compute.contrib.flavorextraspecs.FlavorExtraSpecsController 
object at 0x2c01b90 _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:927
  2013-08-19 13:08:14.377 27521 ERROR nova.api.openstack 
[req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Caught error: 
Invalid tag name u'foo:bar'
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 110, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/hp/middleware/cs_auth_token.py, line 160, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
super(CsAuthProtocol, self).__call__(env, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 461, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
self.app(env, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 903, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack content_type, 
body, accept)
  2013-08-19 13:08:14.377 

[Yahoo-eng-team] [Bug 1233837] Re: target_iqn is referenced before assignment after exceptions in hyperv/volumeop.py attch_volume()

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233837

Title:
  target_iqn is referenced before assignment after exceptions in
  hyperv/volumeop.py attch_volume()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  If an exception is encountered in _login_storage_target or
  _get_mounted_disk_from_lun, target_iqn will be referenced in the
  exception handler before it is defined, resulting in the following
  traceback (a sketch of a fix follows the traceback):

  c39117134492490cba81828d080895b5 1a26ee4f153e438c806203607a0d728e] Exception 
during message handling
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\amqp.py, line 461, 
in _process_data
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\dispatcher.py, line 
172, in dispatch
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 90, in wrapped
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 73, in wrapped
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 249, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 235, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 277, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 264, in 
decorated_function
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 3676, in 
attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
context, instance, mountpoint)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 3671, in 
attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
mountpoint, instance)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 3717, in 
_attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
connector)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 3707, in 
_attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
encryption=encryption)
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py, line 72, in 
attach_volume
  2013-10-01 16:06:19.993 5588 TRACE nova.openstack.common.rpc.amqp 
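
  A hedged sketch, with illustrative helper names, of binding target_iqn
  before anything that can raise, so the exception handler never touches an
  unbound name:

    def attach_volume(self, connection_info, instance_name, ebs_root=False):
        data = connection_info['data']
        target_iqn = None
        try:
            target_iqn = data['target_iqn']
            self._login_storage_target(connection_info)
            mounted_disk_path = self._get_mounted_disk_from_lun(
                target_iqn, data['target_lun'])
            self._attach_disk(instance_name, mounted_disk_path, ebs_root)
        except Exception:
            # target_iqn is always bound here, even if the lookup or the
            # login failed, so cleanup and logging are safe.
            if target_iqn:
                self._volutils.logout_storage_target(target_iqn)
            raise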

[Yahoo-eng-team] [Bug 1231263] Re: Clear text password has been print in log by some API call

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1231263

Title:
  Clear text password has been print in log by some API call

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In the current implementation, when performing some API calls, such as
  changing the server password or rescuing a server, the password is
  printed in the nova log, for example:

  2013-09-26 13:48:01.711 DEBUG routes.middleware [-] Match dict: {'action': 
u'action', 'controller': nova.api.openstack.wsgi.Resource object at 
0x46d09d0, 'project_id': u'05004a24b3304cd9b55a0fcad08107b3', 'id': 
u'8c4a1dfa-147a-4f
  f8-8116-010d8c346115'} from (pid=10629) __call__ 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-09-26 13:48:01.711 DEBUG nova.api.openstack.wsgi 
[req-10ebd201-ba52-453f-b1ce-1e41fbef8cdd admin demo] Action: 'action', body: 
{changePassword: {adminPass: 1234567}} from (pid=10629) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:926

  This is not secure; the password should be replaced by '***' (a masking
  sketch follows below).
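
  A hedged sketch of masking credentials before logging a request body (the
  pattern and helper name are illustrative, not the actual nova change):

    import re

    _SANITIZE_RE = re.compile(
        r'("(?:adminPass|admin_password|password)"\s*:\s*)"[^"]*"')

    def sanitize_body_for_log(body):
        # Replace any password-like value with *** before the body is logged.
        return _SANITIZE_RE.sub(r'\1"***"', body)

    # sanitize_body_for_log('{"changePassword": {"adminPass": "1234567"}}')
    # -> '{"changePassword": {"adminPass": "***"}}'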

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1231263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1197041] Re: nova compute crashes if you do not have any hosts in your cluster

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1197041

Title:
  nova compute crashes if you do not have any hosts in your cluster

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I forgot to add a host to my cluster and brought up nova-compute. I got
  the following crash on startup. A controlled exit with a proper warning
  message would have saved me some time.

  
  File /opt/stack/nova/nova/virt/vmwareapi/host.py, line 156, in __init__
    self.update_status()
  File /opt/stack/nova/nova/virt/vmwareapi/host.py, line 169, in update_status
    host_mor = vm_util.get_host_ref(self._session, self._cluster)
  File /opt/stack/nova/nova/virt/vmwareapi/vm_util.py, line 663, in get_host_ref
    if not host_ret.ManagedObjectReference:
  AttributeError: 'Text' object has no attribute 'ManagedObjectReference'
  Removing descriptor: 6

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1197041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188543] Re: NBD mount errors when booting an instance from volume

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188543

Title:
  NBD mount errors when booting an instance from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  My environment:
  - Grizzly OpenStack (installed from Ubuntu repository)
  - Network using Quantum
  - Cinder backed by a Ceph cluster

  I'm able to boot an instance from a volume but it takes a long time
  for the instance to be active. I've got warnings in the logs of the
  nova-compute node (see attached file). The logs show that the problem
  is related to file injection in the disk image which isn't
  required/relevant when booting from a volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199954] Re: VCDriver: Failed to resize instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199954

Title:
  VCDriver: Failed to resize instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Steps to reproduce:
  nova resize UUID 2

  Error:
   ERROR nova.openstack.common.rpc.amqp 
[req-762f3a87-7642-4bd3-a531-2bcc095ec4a5 demo demo] Exception during message 
handling
Traceback (most recent call last):
  File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 421, in 
_process_data
**args)
  File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
result = getattr(proxyobj, method)(ctxt, **kwargs)
  File /opt/stack/nova/nova/exception.py, line 99, in wrapped
temp_level, payload)
  File /opt/stack/nova/nova/exception.py, line 76, in wrapped
return f(self, context, *args, **kw)
  File /opt/stack/nova/nova/compute/manager.py, line 218, in 
decorated_function
pass
  File /opt/stack/nova/nova/compute/manager.py, line 204, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 269, in 
decorated_function
function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 246, in 
decorated_function
e, sys.exc_info())
  File /opt/stack/nova/nova/compute/manager.py, line 233, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 2633, in 
resize_instance
block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 410, in 
migrate_disk_and_power_off
dest, instance_type)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 893, in 
migrate_disk_and_power_off
raise exception.HostNotFound(host=dest)
HostNotFound:

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1199954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193980] Re: Regression: Cinder Volumes unable to find iscsi target for VMware instances

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1193980

Title:
  Regression: Cinder Volumes unable to find iscsi target for VMware
  instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When trying to attach a cinder volume to a VMware-based instance, I am
  seeing the attached error in the nova-compute logs. Cinder does not
  report any problem back to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1193980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211742] Re: notification not available for deleting an instance having no host associated

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211742

Title:
  notification not available for deleting an instance having no host
  associated

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Steps to reproduce issue:
  1. Set the Nova notification_driver (to say log_notifier) and monitor the 
notifications.
  2. Delete an instance which does not have a host associated with it.
  3. Check if any notifications are generated for the instance deletion.

  Expected Result:
  'delete.start' and 'delete.end' notifications should be generated for the 
instance being deleted.

  Actual Result:
  There are no 'delete' notifications being generated in this scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1211742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255519] Re: NVP connection fails because port is a string

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255519

Title:
  NVP connection fails because port is a string

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  On a dev machine I've recently created, I noticed failures at startup when 
Neutron is configured with the NVP plugin.
  I root-caused the failure to the port being explicitly passed to the 
HTTPSConnection constructor as a string rather than an integer.

  This can be easily fixed by ensuring the port is always an integer.

  I am not sure of the severity of this bug as it might be strictly related
  to this specific dev env, but it might be worth applying and
  backporting the fix.
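
  For illustration only (a hypothetical sketch with a made-up helper name,
  not the committed patch), the kind of coercion described above amounts to:

    import httplib

    def make_nvp_connection(host, port):
        # 'port' may arrive as a string when read from nvp.ini; coerce it
        # to an integer before building the connection
        return httplib.HTTPSConnection(host, int(port))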

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1255519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255421] Re: Unittest fails due to unexpected ovs-vsctl calling in ryu plugin test

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255421

Title:
  Unittest fails due to unexpected ovs-vsctl calling in ryu plugin test

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  In the unit tests, the ovs-vsctl command is unexpectedly called during the
  Ryu plugin tests.

  It occurs in the latest master branch 
(4b47717b132336396cdbea9d168acaaa30bd5a02).
  In gating test, the followings hit this issue:
  
http://logs.openstack.org/70/58270/2/check/gate-neutron-python27/79ef6dd/console.html
  
http://logs.openstack.org/25/58125/4/check/gate-neutron-python27/2cc0dc5/console.html#_2013-11-27_01_46_09_003

  According to debugging done by adding print_traceback in 
ovs_lib.OVSBridge.get_vif_port_by_id,
  the following tests fail and the following stack trace is obtained:

  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_update_ip_address_only
  neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_update_ips
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_delete_ip
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_add_additional_ip
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_not_admin
  
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port_update_ip

    File 
/home/ubuntu/neutron/.venv/local/lib/python2.7/site-packages/eventlet/greenthread.py,
 line 194, in main
  result = function(*args, **kwargs)
    File neutron/openstack/common/rpc/impl_fake.py, line 67, in _inner
  namespace, **args)
    File neutron/openstack/common/rpc/dispatcher.py, line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
    File neutron/plugins/openvswitch/agent/ovs_neutron_agent.py, line 296, in 
port_update
  vif_port = self.int_br.get_vif_port_by_id(port['id'])
    File neutron/agent/linux/ovs_lib.py, line 362, in get_vif_port_by_id
  print traceback.print_stack()

  More interestingly, it occurs only when both the OVS plugin tests and the 
Ryu plugin tests are run.
  More precisely, it happens when we run
  - first 
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent, and
  - then neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port

  $ source .venv/bin/activate
  $ OS_DEBUG=1 python setup.py testr --testr-args='--concurrency=4 
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent 
neutron.tests.unit.ryu.test_ryu_plugin.TestRyuPortsV2.test_update_port'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1255421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257523] Re: Neither vpnaas.filters nor debug.filters are referenced in setup.cfg

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257523

Title:
  Neither vpnaas.filters nor debug.filters are referenced in setup.cfg

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Both vpnaas.filters and debug.filters are missing from setup.cfg,
  breaking rootwrap for the corresponding commands.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252284] Re: OVS agent doesn't reclaim local VLAN

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252284

Title:
  OVS agent doesn't reclaim local VLAN

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Locally to an OVS agent, when the last port of a network disappears
  the local VLAN isn't reclaimed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244255] Re: binding_failed because of l2 agent assumed down

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244255

Title:
  binding_failed because of l2 agent assumed down

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest test ServerAddressesTestXML failed on a change that does not
  involve any code modification.

  https://review.openstack.org/53633

  2013-10-24 14:04:29.188 | 
==
  2013-10-24 14:04:29.189 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | 
--
  2013-10-24 14:04:29.189 | _StringException: Traceback (most recent call last):
  2013-10-24 14:04:29.189 |   File 
tempest/api/compute/servers/test_server_addresses.py, line 31, in setUpClass
  2013-10-24 14:04:29.189 | resp, cls.server = 
cls.create_server(wait_until='ACTIVE')
  2013-10-24 14:04:29.189 |   File tempest/api/compute/base.py, line 143, in 
create_server
  2013-10-24 14:04:29.190 | server['id'], kwargs['wait_until'])
  2013-10-24 14:04:29.190 |   File 
tempest/services/compute/xml/servers_client.py, line 356, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | return waiters.wait_for_server_status(self, 
server_id, status)
  2013-10-24 14:04:29.190 |   File tempest/common/waiters.py, line 71, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-10-24 14:04:29.190 | BuildErrorException: Server 
e21d695e-4f15-4215-bc62-8ea645645a26 failed to build and is in ERROR status


  From n-cpu.log (http://logs.openstack.org/33/53633/1/check/check-
  tempest-devstack-vm-
  neutron/4dd98e5/logs/screen-n-cpu.txt.gz#_2013-10-24_13_58_07_532):

   Error: Unexpected vif_type=binding_failed
   Traceback (most recent call last):
   set_access_ip=set_access_ip)
 File /opt/stack/new/nova/nova/compute/manager.py, line 1413, in _spawn
   LOG.exception(_('Instance failed to spawn'), instance=instance)
 File /opt/stack/new/nova/nova/compute/manager.py, line 1410, in _spawn
   block_device_info)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2084, in spawn
   write_to_disk=True)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3064, in 
to_xml
   disk_info, rescue, block_device_info)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2951, in 
get_guest_config
   inst_type)
 File /opt/stack/new/nova/nova/virt/libvirt/vif.py, line 380, in 
get_config
   _(Unexpected vif_type=%s) % vif_type)
   NovaException: Unexpected vif_type=binding_failed
   TRACE nova.compute.manager [instance: e21d695e-4f15-4215-bc62-8ea645645a26]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251086] Re: nvp_cluster_uuid is no longer used in nvp.ini

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251086

Title:
  nvp_cluster_uuid is no longer used in nvp.ini

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  remove it!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244259] Re: error while creating l2 gateway services in nvp

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244259

Title:
  error while creating l2 gateway services in nvp

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  If a conflict occurs while using the L2 Gateway extension, 500 errors
  may mask underlying exceptions. For instance:

  
  2013-10-24 07:42:37.709 ERROR NVPApiHelper [-] Received error code: 409
  2013-10-24 07:42:37.710 ERROR NVPApiHelper [-] Server Error Message: Device 
breth0 on transport node dd2e6fb9-98fe-4306-a679-30e15f0af06a is already in use 
as a gateway in Gateway Service 166ddc25-e617-4cfc-bde5-485a0b622fc6
  2013-10-24 07:42:37.710 ERROR neutron.api.v2.resource [-] create failed
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 84, in resource
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 411, in create
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/plugins/nicira/NeutronPlugin.py, line 1921, in 
create_network_gateway
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource created 
resource:%s) % nvp_res)
  2013-10-24 07:42:37.710 TRACE neutron.api.v2.resource UnboundLocalError: 
local variable 'nvp_res' referenced before assignment

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243862] Re: fix nvp version validation for distributed router creation

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243862

Title:
  fix nvp version validation for distributed router creation

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The current check is not correct, as it prevents the right creation policy
  from being applied for newer versions of NVP whose minor version is 0.
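
  As a hedged illustration of this class of bug (hypothetical code, not the
  plugin's actual check), comparing the version components as a tuple avoids
  treating a newer major release with a minor version of 0 as too old:

    def supports_distributed_router(major, minor):
        # compare as a tuple so e.g. 4.0 is not rejected by a check on the
        # minor version alone; (3, 1) is just an example threshold
        return (major, minor) >= (3, 1)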

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245799] Re: IP lib fails when int name has '@' character and VLAN interfaces

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245799

Title:
  IP lib fails when int name has '@' character and VLAN interfaces

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  IP lib cannot distinguish between interfaces with an '@' in their name and 
VLAN interfaces.
  Moreover, an interface name can contain more than one '@' character.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242715] Re: Wrong parameter in the config file s/qpid_host/qpid_hostname/

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242715

Title:
  Wrong parameter in the config file s/qpid_host/qpid_hostname/

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  The Glance sample config shows `qpid_host` as the parameter to use for
  qpid's host; however, the correct parameter is `qpid_hostname`.

  [0] https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L228
  [1] 
https://github.com/openstack/glance/blob/master/glance/notifier/notify_qpid.py#L34
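
  For illustration, a hedged sketch of how such an option is typically
  declared with oslo.config (not a verbatim copy of notify_qpid.py):

    from oslo.config import cfg

    qpid_opts = [
        cfg.StrOpt('qpid_hostname', default='localhost',
                   help='Qpid broker hostname'),
    ]
    cfg.CONF.register_opts(qpid_opts)

    # corresponding line in glance-api.conf:
    #   qpid_hostname = localhost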

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241198] Re: Keystone tests determine rootdir relative to pwd

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1241198

Title:
  Keystone tests determine rootdir relative to pwd

Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  Fix Released

Bug description:
  keystone/tests/core.py

  contains this code:

ROOTDIR = os.path.dirname(os.path.abspath('..'))

  which is determining the abspath of $PWD/..

  A more reliable way to determine the rootdir is relative to the
  dirname(__file__) of the python module itself.
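
  A minimal sketch of that suggestion (hypothetical; how many dirname() calls
  are needed depends on which root the tests actually want):

    import os

    # anchor the root directory to this module's location instead of the
    # current working directory
    ROOTDIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))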

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1241198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240744] Re: L2 pop sends updates for unrelated networks

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240744

Title:
  L2 pop sends updates for unrelated networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The l2population mechanism driver sends update notifications for
  networks which are not related to the port which is being updated.
  Thus the fdb is populated with some incorrect entries.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243821] Re: Qpid protocol configuration is wrong

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1243821

Title:
  Qpid protocol configuration is wrong

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  notify_qpid.py appears to be suffering from the same issue as
  described in launchpad bug
  https://bugs.launchpad.net/oslo/+bug/1158807.  Instead of setting
  connection.transport, it is attempting to set connection.protocol.
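
  A hedged sketch of the distinction (illustrative only, with a hypothetical
  broker address; not the actual notifier code):

    from qpid import messaging

    connection = messaging.Connection('broker.example.com:5671')
    connection.transport = 'ssl'   # the attribute qpid.messaging actually uses
    # connection.protocol = 'ssl'  # what the buggy code effectively set
    connection.open()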

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1243821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240790] Re: Allow using ipv6 address with omiting zero

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240790

Title:
  Allow using ipv6 address with omiting zero

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Neutron currently supports IPv6 CIDRs in the compressed form, such as
  2001:db8::10:10:10:0/120, but does not accept the same subnet with the
  zero groups written out, such as 2001:db8:0:0:10:10:10:0/120;
  that form causes the exception '2001:db8:0:0:10:10:10:0/120' isn't a 
recognized IP subnet cidr, '2001:db8::10:10:10:0/120' is recommended
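
  For illustration (a sketch using netaddr, not necessarily the committed
  fix), both spellings denote the same subnet and can be normalized before
  validation:

    import netaddr

    def normalized_cidr(cidr):
        # netaddr renders IPv6 in the compact form, so both spellings of
        # the same subnet normalize to an identical string
        return str(netaddr.IPNetwork(cidr).cidr)

    assert (normalized_cidr('2001:db8:0:0:10:10:10:0/120') ==
            normalized_cidr('2001:db8::10:10:10:0/120'))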

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241874] Re: L2 pop mech driver sends notif. even no related port changes

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241874

Title:
  L2 pop mech driver sends notif. even no related port changes

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The L2 population mechanism driver sends add notifications even when there
  are no relevant port changes, e.g. when only the IP changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1241874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241602] Re: AttributeError in plugins/linuxbridge/lb_neutron_plugin.py

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241602

Title:
  AttributeError in plugins/linuxbridge/lb_neutron_plugin.py

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  I'm running Ubuntu 12.04 LTS x64 + OpenStack Havana with the following
  neutron package versions:

  neutron-common 2013.2~rc3-0ubuntu1~cloud0
  neutron-dhcp-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-l3-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-metadata-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-plugin-linuxbridge 2013.2~rc3-0ubuntu1~cloud0
  neutron-plugin-linuxbridge-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-server 2013.2~rc3-0ubuntu1~cloud0
  python-neutron 2013.2~rc3-0ubuntu1~cloud0   
  python-neutronclient 2.3.0-0ubuntu1~cloud0


  When adding a router interface, the following error message appears in
  /var/log/neutron/server.log:

  2013-10-18 15:35:14.862 15675 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py, line 
438, in _process_data
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/common/rpc.py, line 44, in dispatch
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/lb_neutron_plugin.py,
 line 147, in update_device_up
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
port = self.get_port_from_device.get_port(device)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
AttributeError: 'function' object has no attribute 'get_port'
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp
  2013-10-18 15:35:14.862 15675 ERROR neutron.openstack.common.rpc.common [-] 
Returning exception 'function' object has no attribute 'get_port' to caller
  2013-10-18 15:35:14.863 15675 ERROR neutron.openstack.common.rpc.common [-] 
['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py, line 
438, in _process_data\n**args)\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/common/rpc.py, line 44, in 
dispatch\nneutron_ctxt, version, method, namespace, **kwargs)\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch\nresult = getattr(proxyobj, method)(ctxt, 
**kwargs)\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/lb_neutron_plugin.py,
 line 147, in update_device_up\nport = 
self.get_port_from_device.get_port(device)\n', AttributeError: 'function' 
object has no attribute 'get_port'\n]
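
  As a hedged illustration of the mismatch (hypothetical stand-in code, not
  the committed fix): get_port_from_device is itself a function, so the likely
  intent is to call it directly rather than look up a get_port attribute on it:

    def get_port_from_device(device):
        # stand-in for the plugin helper; returns a port dict for a device
        return {'id': device}

    # buggy shape from the traceback: a plain function has no .get_port()
    #     port = get_port_from_device.get_port('dev0')   # AttributeError
    # likely intended call:
    port = get_port_from_device('dev0')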

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1241602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242734] Re: Error message encoding issue when using qpid

2013-12-16 Thread Alan Pevec
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242734

Title:
  Error message encoding issue when using qpid

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  While trying to create a new image to reproduce the storage-full
  exception, I got a 500 error code instead of 413, and I see the trace
  below in the log. It seems we need to call jsonutils.to_primitive so
  that the message can be encoded.
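
  A minimal sketch of that conversion (illustrative only; build_qpid_message
  is a hypothetical helper, and the send call mirrors notify_qpid._send() as
  seen in the trace below):

    from glance.openstack.common import jsonutils
    from qpid import messaging

    def build_qpid_message(payload):
        # convert the payload to plain primitives so unicode and exception
        # objects encode cleanly before the message is handed to qpid
        return messaging.Message(content=jsonutils.to_primitive(payload))

    # in _send():  sender.send(build_qpid_message(message))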

  2013-10-15 05:18:18.623 2430 ERROR glance.api.v1.upload_utils 
[b256bf1b-81e4-41b1-b89a-0a6bcb58b5ab 396ce5f3575a43abb636c489a959bf16 
29db386367fa4c4e9ffb3c369a46ee90] Image storage media is full: There is not 
enough disk space on the image storage media.
  2013-10-15 05:18:18.691 2430 ERROR glance.notifier.notify_qpid 
[b256bf1b-81e4-41b1-b89a-0a6bcb58b5ab 396ce5f3575a43abb636c489a959bf16 
29db386367fa4c4e9ffb3c369a46ee90] Notification error.  Priority: error Message: 
{'event_type': 'image.upload', 'timestamp': '2013-10-15 10:18:18.662667', 
'message_id': 'b74ec17a-06ac-45b8-84c3-37a55af8dfe1', 'priority': 'ERROR', 
'publisher_id': 'yangj228', 'payload': u'Image storage media is full: There is 
not enough disk space on the image storage media.'}
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid Traceback 
(most recent call last):
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/glance/notifier/notify_qpid.py, line 134, in 
_send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
sender.send(qpid_msg)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
string, line 6, in send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 879, in 
send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.sync(timeout=timeout)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
string, line 6, in sync
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 890, in 
sync
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid if not 
self._ewait(lambda: self.acked = mno, timeout=timeout):
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 804, in 
_ewait
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid result = 
self.session._ewait(lambda: self.error or predicate(), timeout)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 571, in 
_ewait
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid result = 
self.connection._ewait(lambda: self.error or predicate(), timeout)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 214, in 
_ewait
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.check_error()
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 207, in 
check_error
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid raise 
self.error
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid InternalError: 
Traceback (most recent call last):
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 497, in 
dispatch
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.engine.dispatch()
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 802, in 
dispatch
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.process(ssn)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 1037, in 
process
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
self.send(snd, msg)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 1248, in send
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid body = 
enc(msg.content)
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/message.py, line 28, in encode
  2013-10-15 05:18:18.691 2430 TRACE glance.notifier.notify_qpid 
sc.write_primitive(type, x)
  2013-10-15 05:18:18.691 2430 TRACE 

[Yahoo-eng-team] [Bug 1240125] Re: Linux IP wrapper cannot handle VLAN interfaces

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240125

Title:
  Linux IP wrapper cannot handle VLAN interfaces

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  VLAN interface names carry an '@' character when the iproute2 utility 
lists them.
  But the usable interface name (for iproute2 commands) is the string before 
the '@' character, so these interfaces need special parsing.

  $ ip link show
  1: wlan0: NO-CARRIER,BROADCAST,MULTICAST,UP mtu 1500 qdisc mq state DOWN 
group default qlen 1000
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
  inet 169.254.10.78/16 brd 169.254.255.255 scope link wlan0:avahi
 valid_lft forever preferred_lft forever
  2: wlan0.10@wlan0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group 
default 
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
  3: vlan100@wlan0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group 
default 
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
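
  A naive sketch of the needed parsing (illustrative only; as bug 1245799
  notes, interface names may themselves contain '@', so the real fix has to
  be more careful):

    def parse_device_name(field):
        # 'ip link show' prints VLAN devices as 'name@parent', e.g.
        # 'vlan100@wlan0:'; the usable device name is the part before '@'
        return field.strip(':').partition('@')[0]

    assert parse_device_name('vlan100@wlan0:') == 'vlan100'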

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240742] Re: linuxbridge agent doesn't remove vxlan interface if no interface mappings

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240742

Title:
  linuxbridge agent doesn't remove vxlan interface if no interface
  mappings

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The LinuxBridge agent doesn't remove vxlan interfaces if
  physical_interface_mappings isn't set in the config file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235450] Re: [OSSA 2013-033] Metadata queries from Neutron to Nova are not restricted by tenant (CVE-2013-6419)

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235450

Title:
  [OSSA 2013-033] Metadata queries from Neutron to Nova are not
  restricted by tenant (CVE-2013-6419)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron grizzly series:
  Fix Committed
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Committed

Bug description:
  The neutron metadata service works in the following way:

  Instance makes a GET request to http://169.254.169.254/

  This is directed to the metadata-agent, which knows which
  router (namespace) it is running on and determines the ip_address from
  the HTTP request it receives.

  Now, the neutron-metadata-agent queries neutron-server using the
  router_id and ip_address from the request to determine the port the
  request came from. Next, the agent takes the device_id (nova-instance-
  id) on the port and passes that to nova as X-Instance-ID.

  The vulnerability is that if someone exposes their instance_id, their
  metadata can be retrieved. In order to exploit this, one would need to
  update the device_id on a port to match the instance_id they want to
  hijack the data from.

  To demonstrate:

  arosen@arosen-desktop:~/devstack$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | 1eb33bf1-6400-483a-9747-e19168b68933 | vm1  | ACTIVE | None   | Running 
| private=10.0.0.4 |
  | eed973e2-58ea-42c4-858d-582ff6ac3a51 | vm2  | ACTIVE | None   | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  
  arosen@arosen-desktop:~/devstack$ neutron port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 3128f195-c41b-4160-9a42-40e024771323 |  | fa:16:3e:7d:a5:df | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.1} 
|
  | 62465157-8494-4fb7-bdce-2b8697f03c12 |  | fa:16:3e:94:62:47 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} 
|
  | 8473fb8d-b649-4281-b03a-06febf61b400 |  | fa:16:3e:4f:a3:b0 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.2} 
|
  | 92c42c1a-efb0-46a6-89eb-a38ae170d76d |  | fa:16:3e:de:9a:39 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.3} 
|
  
+--+--+---+-+

  
  arosen@arosen-desktop:~/devstack$ neutron port-show  
62465157-8494-4fb7-bdce-2b8697f03c12
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 1eb33bf1-6400-483a-9747-e19168b68933
|
  | device_owner  | compute:None
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {subnet_id: 
d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} |
  | id| 62465157-8494-4fb7-bdce-2b8697f03c12
|
  | mac_address   | fa:16:3e:94:62:47   
|
  | name  | 
|
  | network_id| 

[Yahoo-eng-team] [Bug 1235486] Re: Integrity violation on delete network

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235486

Title:
  Integrity violation on delete network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Found while running tests for bug 1224001.
  Full logs here: 
http://logs.openstack.org/24/49424/13/check/check-tempest-devstack-vm-neutron-pg-isolated/405d3b4

  Keeping to medium priority for now.
  Will raise priority if we find more occurrences.

  2013-10-04 21:20:46.888 31438 ERROR neutron.api.v2.resource [-] delete failed
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 84, in resource
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 432, in delete
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 411, in 
delete_network
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource break
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 456, 
in __exit__
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource self.commit()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 368, 
in commit
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
self._prepare_impl()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 347, 
in _prepare_impl
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
self.session.flush()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 542, in _wrap
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource raise 
exception.DBError(e)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource DBError: 
(IntegrityError) update or delete on table networks violates foreign key 
constraint ports_network_id_fkey on table ports
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource DETAIL:  Key 
(id)=(c63057f4-8d8e-497c-95d6-0d93d2cc83f5) is still referenced from table 
ports.
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource  'DELETE FROM 
networks WHERE networks.id = %(id)s' {'id': 
u'c63057f4-8d8e-497c-95d6-0d93d2cc83f5'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239637] Re: internal neutron server error on tempest VolumesActionsTest

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239637

Title:
  internal neutron server error on tempest VolumesActionsTest

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Logstash query:
  @message:DBError: (IntegrityError) null value in column \network_id\ 
violates not-null constraint AND @fields.filename:logs/screen-q-svc.txt

  
http://logs.openstack.org/22/51522/2/check/check-tempest-devstack-vm-neutron-pg-isolated/015b3d9/logs/screen-q-svc.txt.gz#_2013-10-14_10_13_01_431
  
http://logs.openstack.org/22/51522/2/check/check-tempest-devstack-vm-neutron-pg-isolated/015b3d9/console.html

  
  2013-10-14 10:16:28.034 | 
==
  2013-10-14 10:16:28.034 | FAIL: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesActionsTest)
  2013-10-14 10:16:28.035 | tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesActionsTest)
  2013-10-14 10:16:28.035 | 
--
  2013-10-14 10:16:28.035 | _StringException: Traceback (most recent call last):
  2013-10-14 10:16:28.035 |   File 
tempest/api/volume/test_volumes_actions.py, line 55, in tearDownClass
  2013-10-14 10:16:28.036 | super(VolumesActionsTest, cls).tearDownClass()
  2013-10-14 10:16:28.036 |   File tempest/api/volume/base.py, line 72, in 
tearDownClass
  2013-10-14 10:16:28.036 | cls.isolated_creds.clear_isolated_creds()
  2013-10-14 10:16:28.037 |   File tempest/common/isolated_creds.py, line 
453, in clear_isolated_creds
  2013-10-14 10:16:28.037 | self._clear_isolated_net_resources()
  2013-10-14 10:16:28.037 |   File tempest/common/isolated_creds.py, line 
445, in _clear_isolated_net_resources
  2013-10-14 10:16:28.038 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-14 10:16:28.038 |   File tempest/common/isolated_creds.py, line 
399, in _clear_isolated_network
  2013-10-14 10:16:28.038 | net_client.delete_network(network_id)
  2013-10-14 10:16:28.038 |   File 
tempest/services/network/json/network_client.py, line 76, in delete_network
  2013-10-14 10:16:28.039 | resp, body = self.delete(uri, self.headers)
  2013-10-14 10:16:28.039 |   File tempest/common/rest_client.py, line 308, 
in delete
  2013-10-14 10:16:28.039 | return self.request('DELETE', url, headers)
  2013-10-14 10:16:28.040 |   File tempest/common/rest_client.py, line 436, 
in request
  2013-10-14 10:16:28.040 | resp, resp_body)
  2013-10-14 10:16:28.040 |   File tempest/common/rest_client.py, line 522, 
in _error_checker
  2013-10-14 10:16:28.041 | raise exceptions.ComputeFault(message)
  2013-10-14 10:16:28.041 | ComputeFault: Got compute fault
  2013-10-14 10:16:28.041 | Details: {NeutronError: Request Failed: internal 
server error while processing your request.}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1239637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237912] Re: Cannot update IPSec Policy lifetime

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237912

Title:
  Cannot update IPSec Policy lifetime

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  When you try to update IPSec Policy lifetime, you get an error:

  (neutron) vpn-ipsecpolicy-update ipsecpolicy --lifetime 
units=seconds,value=36001
  Request Failed: internal server error while processing your request.

  Meanwhile updating IKE Policy lifetime works well:

  (neutron) vpn-ikepolicy-update ikepolicy --lifetime units=seconds,value=36001
  Updated ikepolicy: ikepolicy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1237912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210236] Re: traceback is suppressed when deploy.loadapp fails

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210236

Title:
  traceback is suppressed when deploy.loadapp fails

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  I saw this error when attempting to start a relatively recent quantum (setup.py 
--version says 2013.2.a782.ga36f237):
   ERROR: Unable to load quantum from configuration file 
/etc/quantum/api-paste.ini.

  After running quantum-server through strace I determined that the
  error was due to missing mysql client libraries:

  ...
  open(/lib64/tls/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No such 
file or directory)
  open(/lib64/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No such file 
or directory)
  open(/usr/lib64/tls/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No 
such file or directory)
  open(/usr/lib64/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No such 
file or directory)
  munmap(0x7ffcd8132000, 34794)   = 0
  munmap(0x7ffccd147000, 2153456) = 0
  close(4)= 0
  close(3)= 0
  write(2, ERROR: Unable to load quantum fr..., 95ERROR: Unable to load 
quantum from configuration file /usr/local/csi/etc/quantum/api-paste.ini.) = 95
  write(2, \n, 1 )   = 1
  rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x3eec80f500}, 
{0x3eef90db70, [], SA_RESTORER, 0x3eec80f500}, 8) = 0
  exit_group(1)

  
  The error message is completely bogus and the lack of traceback made it 
difficult to debug.

  This is a regression from commit 6869821, which was to fix related bug
  1004062.
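
  A hedged sketch of the kind of change that would preserve the traceback
  (hypothetical names for load_paste_app, config_path and app_name; not the
  committed patch):

    import logging

    from paste import deploy

    LOG = logging.getLogger(__name__)

    def load_paste_app(config_path, app_name):
        try:
            return deploy.loadapp('config:%s' % config_path, name=app_name)
        except Exception:
            # log the underlying exception (with its traceback) instead of
            # swallowing it and printing only a generic message
            LOG.exception('Unable to load %s from configuration file %s.',
                          app_name, config_path)
            raise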

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209011] Re: L3 agent can't handle updates that change floating ip id

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1209011

Title:
  L3 agent can't handle updates that change floating ip id

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The problem occurs when a network update comes along where a new
  floating ip id carries the same (reused) IP address as an old floating
  IP.  In short, same address, different floating ip id.  We've seen
  this occur in testing where the floating ip free pool has gotten small
  and creates/deletes come quickly.

  What happens is the agent skips calling ip addr add for the address
  since the address already appears.  It then calls ip addr del to
  remove the address from the qrouter's gateway interface.  It shouldn't
  have done this and the floating ip is left in a non-working state.

  Later, when the floating ip is disassociated from the port, the agent
  attempts to remove the address from the device which results in an
  exception which is caught above.  The exception prevents the iptables
  code from removing the DNAT address for the floating ip.

  2013-07-23 09:20:06.094 3109 DEBUG quantum.agent.linux.utils [-] Running 
command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-2b75022a-3721-443f-af99-ec648819d080', 'ip', '-4', 
'addr', 'del', '15.184.103.155/32', 'dev', 'qg-c847c5a7-62'] execute 
/usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py:42
  2013-07-23 09:20:06.179 3109 DEBUG quantum.agent.linux.utils [-] 
  Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-2b75022a-3721-443f-af99-ec648819d080', 'ip', '-4', 
'addr', 'del', '15.184.103.155/32', 'dev', 'qg-c847c5a7-62']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: Cannot assign requested address\n' execute 
/usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py:59

  The DNAT entries in the iptables stay in a bad state from this point
  on sometimes preventing other floating ip addresses from being
  attached to the same instance.

  I have a fix for this that is currently in testing.  Will submit for
  review when it is ready.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1209011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211915] Re: Connection to neutron failed: Maximum attempts reached

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1211915

Title:
  Connection to neutron failed: Maximum attempts reached

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  http://logs.openstack.org/64/41464/4/check/gate-tempest-devstack-vm-
  neutron/4288a6b/console.html

  Seen testing https://review.openstack.org/#/c/41464/

  2013-08-13 17:34:46.774 | Traceback (most recent call last):
  2013-08-13 17:34:46.774 |   File 
tempest/scenario/test_network_basic_ops.py, line 176, in 
test_003_create_networks
  2013-08-13 17:34:46.774 | router = self._get_router(self.tenant_id)
  2013-08-13 17:34:46.775 |   File 
tempest/scenario/test_network_basic_ops.py, line 141, in _get_router
  2013-08-13 17:34:46.775 | router.add_gateway(network_id)
  2013-08-13 17:34:46.775 |   File tempest/api/network/common.py, line 78, in 
add_gateway
  2013-08-13 17:34:46.776 | self.client.add_gateway_router(self.id, 
body=body)
  2013-08-13 17:34:46.776 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 108, 
in with_params
  2013-08-13 17:34:46.776 | ret = self.function(instance, *args, **kwargs)
  2013-08-13 17:34:46.776 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 396, 
in add_gateway_router
  2013-08-13 17:34:46.777 | body={'router': {'external_gateway_info': 
body}})
  2013-08-13 17:34:46.777 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 987, 
in put
  2013-08-13 17:34:46.777 | headers=headers, params=params)
  2013-08-13 17:34:46.778 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 970, 
in retry_request
  2013-08-13 17:34:46.778 | raise 
exceptions.ConnectionFailed(reason=_(Maximum attempts reached))
  2013-08-13 17:34:46.778 | ConnectionFailed: Connection to neutron failed: 
Maximum attempts reached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1211915/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234857] Re: neutron unittest require minimum 4gb memory

2013-12-16 Thread Alan Pevec
** Changed in: neutron/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1234857

Title:
  neutron unittest require minimum 4gb memory

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron havana series:
  Fix Released

Bug description:
  tox -e py26

  The unit tests hang forever. Each test seems to take around 25 minutes
  to complete. Each test reports the following error even though it
  passes. This looks like a regression caused by the fix for
  https://bugs.launchpad.net/neutron/+bug/1191768.

  
https://github.com/openstack/neutron/commit/06f679df5d025e657b2204151688ffa60c97a3d3

  As per this fix, the default behavior of
  neutron.agent.rpc.report_state() was changed to use cast() to report the
  state in JSON format; the original behavior was to use the call()
  method.

  Using the call() method by default might fix this problem.
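
  For illustration, a hedged sketch of that kind of change (the real
  signatures in the Havana tree may differ): give report_state an opt-in
  flag so the blocking call() path can be used instead of cast().

    from neutron.openstack.common.rpc import proxy

    # Hypothetical variant of PluginReportStateAPI; real signatures and
    # message contents may differ from the Havana code.
    class PluginReportStateAPI(proxy.RpcProxy):
        BASE_RPC_API_VERSION = '1.0'

        def __init__(self, topic):
            super(PluginReportStateAPI, self).__init__(
                topic=topic, default_version=self.BASE_RPC_API_VERSION)

        def report_state(self, context, agent_state, use_call=True):
            msg = self.make_msg('report_state',
                                agent_state={'agent_state': agent_state})
            if use_call:
                # Defaulting to a blocking call(), as suggested above, so
                # the agent sees report failures instead of silently casting.
                return self.call(context, msg, topic=self.topic)
            return self.cast(context, msg, topic=self.topic)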

  ERROR:neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent:Failed 
reporting state!
  Traceback (most recent call last):
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 759, in _report_state
  self.agent_state)
File /home/jenkins/workspace/csi-neutron-upstream/neutron/agent/rpc.py, 
line 74, in report_state
  return self.cast(context, msg, topic=self.topic)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/proxy.py,
 line 171, in cast
  rpc.cast(context, self._get_topic(topic), msg)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/__init__.py,
 line 158, in cast
  return _get_impl().cast(CONF, context, topic, msg)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/impl_fake.py,
 line 166, in cast
  check_serialize(msg)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/impl_fake.py,
 line 131, in check_serialize
  json.dumps(msg)
File /usr/lib64/python2.6/json/__init__.py, line 230, in dumps
  return _default_encoder.encode(obj)
File /usr/lib64/python2.6/json/encoder.py, line 367, in encode
  chunks = list(self.iterencode(o))
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 317, in _iterencode
  for chunk in self._iterencode_default(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 323, in 
_iterencode_default
  newobj = self.default(o)
File /usr/lib64/python2.6/json/encoder.py, line 344, in default
  raise TypeError(repr(o) +  is not JSON serializable)
  TypeError: <MagicMock name='LinuxBridgeManager().local_ip' id='666599248'> is 
not JSON serializable

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1234857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261565] [NEW] nova.compute.utils.EventReporter drops exception messages on the floor

2013-12-16 Thread Nicolas Simonds
Public bug reported:

While reviewing the instance action logs, it was noticed that under error
conditions the instance_actions_events record separates the exception
message from the traceback, but there is no corresponding column in the
model in which to store the message.

This appears to be a simple oversight and/or mistake in the
implementation of the InstanceActionEvent class.
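
For illustration only, a minimal self-contained sketch of the suspected
gap (illustrative names, not the actual Nova code): the record keeps the
formatted traceback but has nowhere to put the exception message itself.

  import traceback

  # Stand-in for an instance_actions_events row; note there is no column
  # for the exception message, only for the traceback text.  Not the
  # actual Nova model.
  class EventRecord(object):
      def __init__(self):
          self.result = None
          self.tb = None

  class EventReporter(object):
      def __init__(self, record):
          self.record = record

      def __enter__(self):
          return self

      def __exit__(self, exc_type, exc_val, exc_tb):
          if exc_val is not None:
              self.record.result = 'Error'
              # exc_val (the message) is dropped on the floor here; only
              # the formatted traceback is persisted.
              self.record.tb = ''.join(
                  traceback.format_exception(exc_type, exc_val, exc_tb))
          return False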

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261565

Title:
  nova.compute.utils.EventReporter drops exception messages on the floor

Status in OpenStack Compute (Nova):
  New

Bug description:
  While reviewing the instance action logs, it was noticed that under
  error conditions the instance_actions_events record separates the
  exception message from the traceback, but there is no corresponding
  column in the model in which to store the message.

  This appears to be a simple oversight and/or mistake in the
  implementation of the InstanceActionEvent class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-16 Thread Alan Pevec
** Changed in: cinder/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when an RPC request is
  made to a given topic and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.
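
  For illustration, a hedged sketch using the current oslo.messaging API
  (module and function names are assumptions and differ from the
  Havana-era RPC code): two servers share one topic, and a plain call to
  that topic should be handled by exactly one of them rather than by both.

    import oslo_messaging
    from oslo_config import cfg

    # Hedged sketch; names assume current oslo.messaging, not the code
    # under discussion.
    class DemoEndpoint(object):
        def do_work(self, ctxt, item):
            return 'handled %s' % item

    transport = oslo_messaging.get_rpc_transport(cfg.CONF)
    for name in ('worker-1', 'worker-2'):
        target = oslo_messaging.Target(topic='demo-topic', server=name)
        oslo_messaging.get_rpc_server(transport, target,
                                      [DemoEndpoint()]).start()

    # Client side: one of the two workers, not both, should service this.
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='demo-topic'))
    client.call({}, 'do_work', item='x')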

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255004] Re: I18n: Localization of the role Member

2013-12-16 Thread Alan Pevec
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1255004

Title:
  I18n: Localization of the role Member

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack I18n  L10n:
  New

Bug description:
  Hi,

  Something very strange happens to the role Member when I set Horizon to
  use my local language.

  In the Domain Groups dialog it is translated, but in the Project Members
  and Project Groups dialogs it is not.

  From my point of view, it would be wonderful if we could localize role
  names; if we cannot, that is acceptable.  Either way, we need to make
  them consistent.

  Hope somebody can take a look at this interesting issue.

  Thanks.
  Daisy

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1255004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

