[Yahoo-eng-team] [Bug 1460528] [NEW] Some useless variables and parameters

2015-06-01 Thread wangxiyuan
Public bug reported:

There are some useless variables and parameters in
nova.api.openstack.compute.server.py and nova.compute.api.py.

nova.api.openstack.compute.server.py line 650: except Exception as error -- except Exception
nova.compute.api.py line 563: remove the context parameter.
   line 1097: considertaion -- consideration
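A sketch of the first fix; the surrounding statements are placeholders, not Nova's actual code:

```python
def do_work():
    raise ValueError("boom")

# Before: the exception is bound to a name that is never used,
# which is exactly the kind of useless variable flagged above.
try:
    do_work()
except Exception as error:  # 'error' is never referenced
    handled_before = True

# After: drop the unused binding.
try:
    do_work()
except Exception:
    handled_after = True
```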

** Affects: nova
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

** Description changed:

- There are some uesless variables and parametesr in
+ There are some uesless variables and parameters in
  nova.api.openstack.compute.server.py and nova.compute.api.py.
- 
  
  nova.api.openstack.compute.server.py lines 650  except Exception as error -- 
except Exception
  nova.compute.api.py. lines 563  remove the context parameter.
-lines 1097  considertaion  --  consideration
+    lines 1097  considertaion  --  consideration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460528

Title:
  Some useless variables and parameters

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are some useless variables and parameters in
  nova.api.openstack.compute.server.py and nova.compute.api.py.

  nova.api.openstack.compute.server.py line 650: except Exception as error -- except Exception
  nova.compute.api.py line 563: remove the context parameter.
     line 1097: considertaion -- consideration

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460536] [NEW] nova rescue do not actual work

2015-06-01 Thread Luo Gangyi
Public bug reported:

nova rescue does not actually work in many situations.

Although nova rescue generates the right libvirt.xml (at least in my
opinion), the virtual machine OS does not use the rescue disk to boot. It
still uses the original disk to boot (I tested this in Icehouse, Juno and Kilo).

I am not sure whether this is a bug in libvirt/qemu or a result of wrong
configuration of the OS inside the VM.

How to reproduce:

1. Download an image (for example,
http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2)
and upload it to glance.

2. Create an instance using the above image.

3. Touch a file in the instance.

4. nova rescue [instance-id]

You can see the file you touched is still there, which indicates the OS of
the VM still booted from the original disk.

If you run df -h, you will find the OS is using /dev/vdb1 as the root file
system.

===
I think the likely reason is that /etc/fstab uses the disk UUID as the block
device name, and all instances created from one image share the same UUID,
which confuses the OS when it has two disks with the same UUID.

If I use /dev/vda1 instead of the UUID, it seems to work correctly.
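To illustrate, the /etc/fstab change described above looks roughly like this (the UUID and filesystem type here are made up):

```
# Original entry: the root filesystem is identified by UUID, and every
# instance cloned from the same image carries the same UUID on its disk.
#UUID=0f3a1c2d-aaaa-bbbb-cccc-1234567890ab  /  xfs  defaults  0 0

# Workaround: identify the root device by its stable virtio name instead.
/dev/vda1  /  xfs  defaults  0 0
```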

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460536

Title:
  nova rescue do not actual work

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova rescue does not actually work in many situations.

  Although nova rescue generates the right libvirt.xml (at least in my
  opinion), the virtual machine OS does not use the rescue disk to boot.
  It still uses the original disk to boot (I tested this in Icehouse,
  Juno and Kilo).

  I am not sure whether this is a bug in libvirt/qemu or a result of
  wrong configuration of the OS inside the VM.

  How to reproduce:

  1. Download an image (for example,
  http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2)
  and upload it to glance.

  2. Create an instance using the above image.

  3. Touch a file in the instance.

  4. nova rescue [instance-id]

  You can see the file you touched is still there, which indicates the OS
  of the VM still booted from the original disk.

  If you run df -h, you will find the OS is using /dev/vdb1 as the root
  file system.

  ===
  I think the likely reason is that /etc/fstab uses the disk UUID as the
  block device name, and all instances created from one image share the
  same UUID, which confuses the OS when it has two disks with the same UUID.

  If I use /dev/vda1 instead of the UUID, it seems to work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460652] [NEW] nova-conductor infinitely reconnets to rabbit

2015-06-01 Thread Michael Kazakov
Public bug reported:

1. Exact version of Nova:
ii  nova-api           1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - API frontend
ii  nova-cert          1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - certificate management
ii  nova-common        1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - common files
ii  nova-conductor     1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - conductor service
ii  nova-console       1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - Console
ii  nova-consoleauth   1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy    1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2014.1.100+git201410062002~trusty-0ubuntu1          all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.17.0.74.g2598714+git201404220131~trusty-0ubuntu1  all  client library for OpenStack Compute API

rabbit configuration in nova.conf:

  rabbit_hosts = m610-2:5672, m610-1:5672
  rabbit_ha_queues =  true


2. Relevant log files:
/var/log/nova/nova-conductor.log

 exchange 'reply_bea18a6133c548f099b85b168fddf83c' in vhost '/'
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 624, in ensure
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     return method(*args, **kwargs)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 729, in _publish
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     publisher = cls(self.conf, self.channel, topic, **kwargs)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 361, in __init__
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     type='direct', **options)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 326, in __init__
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.reconnect(channel)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 334, in reconnect
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     routing_key=self.routing_key)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 82, in __init__
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.revive(self._channel)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 216, in revive
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.declare()
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 102, in declare
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     self.exchange.declare()
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     nowait=nowait, passive=passive,
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 612, in exchange_declare
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     (40, 11),  # Channel.exchange_declare_ok
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 75, in wait
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit     return self.dispatch_method(method_sig, args, content)
2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line

[Yahoo-eng-team] [Bug 1460577] [NEW] If instance was migrated while was in shutdown state, nova disallow start before resize-confirm

2015-06-01 Thread George Shuklin
Public bug reported:

Steps to reproduce:
1. Create instance
2. Shutdown instance
3. Perform resize
4. Try to start instance.

Expected behaviour: instance starts in resize_confirm state
Actual behaviour: ERROR (Conflict): Instance 
d0e9bc6b-0544-410f-ba96-b0b78ce18828 in vm_state resized. Cannot start while 
the instance is in this state. (HTTP 409)

Rationale:

If a tenant resizes a running instance, they can log into the instance after
the reboot and see whether the resize was successful. If a tenant resizes a
stopped instance, they have no chance to check whether the instance resized
successfully before confirming the migration.

Proposed solution: allow starting an instance in the resize_confirm +
stopped state.

(Btw: I'd like to allow stopping/resizing instances in the resize_confirm
state, because a tenant may wish to reboot/stop/start an instance a few times
before deciding that the migration was successful or reverting it.)
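Nova gates API actions on vm_state with a checking decorator; the proposal amounts to adding the resized state to the set that start accepts. A simplified sketch (the names mimic nova.compute.vm_states, but the code is illustrative, not Nova's):

```python
ACTIVE, STOPPED, RESIZED = "active", "stopped", "resized"

def check_vm_state(allowed):
    """Reject an action unless the instance is in an allowed vm_state."""
    def decorator(fn):
        def wrapper(instance, *args, **kwargs):
            if instance["vm_state"] not in allowed:
                raise ValueError("Cannot %s while the instance is in "
                                 "vm_state %s" % (fn.__name__,
                                                  instance["vm_state"]))
            return fn(instance, *args, **kwargs)
        return wrapper
    return decorator

# Proposed: include RESIZED so a stopped-then-resized instance may start.
@check_vm_state(allowed={STOPPED, RESIZED})
def start(instance):
    instance["vm_state"] = ACTIVE
    return instance
```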

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460577

Title:
  If instance was migrated while was in shutdown state, nova disallow
  start before resize-confirm

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:
  1. Create instance
  2. Shutdown instance
  3. Perform resize
  4. Try to start instance.

  Expected behaviour: instance starts in resize_confirm state
  Actual behaviour: ERROR (Conflict): Instance 
d0e9bc6b-0544-410f-ba96-b0b78ce18828 in vm_state resized. Cannot start while 
the instance is in this state. (HTTP 409)

  Rationale:

  If a tenant resizes a running instance, they can log into the instance
  after the reboot and see whether the resize was successful. If a tenant
  resizes a stopped instance, they have no chance to check whether the
  instance resized successfully before confirming the migration.

  Proposed solution: allow starting an instance in the resize_confirm +
  stopped state.

  (Btw: I'd like to allow stopping/resizing instances in the resize_confirm
  state, because a tenant may wish to reboot/stop/start an instance a few
  times before deciding that the migration was successful or reverting it.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460584] [NEW] Add image while launching instance from image(create new volume)

2015-06-01 Thread Lawrance
Public bug reported:

Now, after launching an instance from an image (create new volume), we can't
see the image name in the instances table.

** Affects: horizon
 Importance: Undecided
 Assignee: Lawrance (jing)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460584

Title:
  Add image while launching instance from image(create new volume)

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Now, after launching an instance from an image (create new volume), we
  can't see the image name in the instances table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460632] [NEW] [LBaaS] Create load balancer for each fixed_ip of a port

2015-06-01 Thread venkata anil
Public bug reported:

The spec

http://specs.openstack.org/openstack/neutron-specs/specs/juno-incubator/lbaas-api-and-objmodel-improvement.html

says - If a neutron port is encountered that has many fixed_ips then a load 
balancer should be created for each fixed_ip with each being a deep copy of 
each other.

Currently neutron LBaaS v2 creates only one load balancer even though the
port has many fixed_ips.
Enhance neutron LBaaS v2 to create a load balancer for each fixed_ip of a
port, as specified in the spec.
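The behaviour the spec asks for can be sketched as a loop over the port's fixed_ips; the dict shapes here are illustrative, not the LBaaS v2 plugin's actual objects:

```python
import copy

def loadbalancers_for_port(port, lb_template):
    """Per the spec: one load balancer per fixed_ip, each a deep copy."""
    balancers = []
    for fixed_ip in port["fixed_ips"]:
        lb = copy.deepcopy(lb_template)          # independent copies
        lb["vip_address"] = fixed_ip["ip_address"]
        balancers.append(lb)
    return balancers
```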

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460632

Title:
  [LBaaS] Create load balancer for each fixed_ip of a port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The spec

  
http://specs.openstack.org/openstack/neutron-specs/specs/juno-incubator/lbaas-api-and-objmodel-improvement.html
  
  says - If a neutron port is encountered that has many fixed_ips then a load 
balancer should be created for each fixed_ip with each being a deep copy of 
each other.

  Currently neutron LBaaS v2 creates only one load balancer even though the
  port has many fixed_ips.
  Enhance neutron LBaaS v2 to create a load balancer for each fixed_ip of a
  port, as specified in the spec.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460630] [NEW] nova should not vertify port_security_enabled according the info from network

2015-06-01 Thread zhaobo
Public bug reported:

nova version:
2.25.0

According to the blueprint:
https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity

repro:
1. Create a network with port_security_enabled set to false, and create a
simple subnet.
2. Create a port with port_security_enabled set to true on this network
through neutron.
3. Boot a server based on this port.

expected:
The server should boot fine.

But it hits the error:
SecurityGroupCannotBeApplied: Network requires port_security_enabled and subnet
associated in order to apply security groups.
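A sketch of the check the report argues for: decide from the port's own port_security_enabled and treat the network's value only as a default (the field names are illustrative; Nova's actual code differs):

```python
def effective_port_security(port, network):
    """The port-level setting wins; the network value is only a default."""
    value = port.get("port_security_enabled")
    if value is None:
        # Port did not set it explicitly: fall back to the network.
        value = network.get("port_security_enabled", True)
    return value
```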

** Affects: nova
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460630

Title:
  nova should not vertify port_security_enabled according the info
  from network

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova version:
  2.25.0

  According to the blueprint:
  https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity

  repro:
  1. Create a network with port_security_enabled set to false, and create a
  simple subnet.
  2. Create a port with port_security_enabled set to true on this network
  through neutron.
  3. Boot a server based on this port.

  expected:
  The server should boot fine.

  But it hits the error:
  SecurityGroupCannotBeApplied: Network requires port_security_enabled and
  subnet associated in order to apply security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460609] [NEW] The overview pie chart is so big and ugly

2015-06-01 Thread Lawrance
Public bug reported:

The overview pie chart is too big on the master branch because we use
width: 100%; delete it.

** Affects: horizon
 Importance: Undecided
 Assignee: Lawrance (jing)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460609

Title:
  The overview pie chart is so big and ugly

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The overview pie chart is too big on the master branch because we use
  width: 100%; delete it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459726] Re: api servers hang with 100% CPU if syslog restarted

2015-06-01 Thread James Page
*** This bug is a duplicate of bug 1452312 ***
https://bugs.launchpad.net/bugs/1452312

** This bug has been marked a duplicate of bug 1452312
   glance-registry process spins if rsyslog restarted with syslog logging 
enabled

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459726

Title:
  api servers hang with 100% CPU if syslog restarted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Logging configuration library for OpenStack:
  Invalid
Status in python-eventlet package in Ubuntu:
  Confirmed

Bug description:
  Affected:

  glance-api
  glance-registry
  neutron-server
  nova-api

  If a service was configured to use rsyslog and rsyslog was restarted
  after the API server started, the service hangs on the next log line with
  100% CPU. If the server has a few workers, each worker will eat its own
  100% CPU share.

  Steps to reproduce:
  1. Configure syslog:
  use_syslog=true
  syslog_log_facility=LOG_LOCAL4
  2. restart api service
  3. restart rsyslog

  Execute some command to force logging, e.g.: neutron net-create foo,
  nova boot, etc.

  Expected result: normal operation

  Actual result:
  with some chance (about 30-50%) the api server will hang with 100% CPU
  usage and will not reply to requests.

  Strace on hung service:

  gettimeofday({1432827199, 745141}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating user token __call__ /usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745226}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating user token __call__ /usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745325}, NULL) = 0

  Tested on:
  nova, glance, neutron:  1:2014.2.3, Ubuntu version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1459726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460562] [NEW] ipset can't be destroyed when last sg rule is deleted

2015-06-01 Thread shihanzhang
Public bug reported:

reproduce steps:
1. a VM A is in the default security group
2. the default security group has rules: 1. allow all traffic out; 2. allow
itself as remote_group in
3. first delete rule 1, then delete rule 2

I found that the iptables rules on the compute node where VM A resides were
not reloaded, and the relevant ipset was not destroyed.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New


** Tags: ipset

** Tags added: ipset

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460562

Title:
  ipset can't be destroyed when last sg rule is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  reproduce steps:
  1. a VM A is in the default security group
  2. the default security group has rules: 1. allow all traffic out; 2. allow
  itself as remote_group in
  3. first delete rule 1, then delete rule 2

  I found that the iptables rules on the compute node where VM A resides
  were not reloaded, and the relevant ipset was not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457554] Re: host-evacuate-live doesn't limit number of servers evacuated simultaneously from a host

2015-06-01 Thread Pawel Koniszewski
Nova allows live migrating multiple VMs at a time; there's no limit on
simultaneous live migrations - everything depends on the use case and setup
configuration (particularly network configuration and bandwidth).

host-evacuate-live is implemented in python-novaclient, so there is nothing
to fix in nova.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457554

Title:
  host-evacuate-live doesn't limit number of servers evacuated
  simultaneously from a host

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Attempting to evacuate too many servers from a single host
  simultaneously could result in bandwidth starvation. Instances dirty
  their memory faster than they can be migrated, resulting in instances
  perpetually stuck in the migrating state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1457554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460715] [NEW] MBR disk setup fails in wily because sfdisk no longer accepts M as a valid unit

2015-06-01 Thread Dan Watkins
Public bug reported:

Specifically, we get the following output in
cc_disk_setup.exec_mkpart_mbr:

sfdisk: --Linux option is unnecessary and deprecated
sfdisk: unsupported unit 'M'

and the manpage says:

   -u, --unit S
  Deprecated option.  Only the sector unit is supported.

So we'll need to shift to using sectors.
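Switching to sectors is mostly arithmetic; a minimal sketch, assuming the common 512-byte logical sector size (real code should query the device for it):

```python
SECTOR_SIZE = 512  # bytes; an assumption - read it from the device in practice

def mib_to_sectors(size_mib):
    """Convert a size in MiB to a count of SECTOR_SIZE-byte sectors."""
    return size_mib * 1024 * 1024 // SECTOR_SIZE

print(mib_to_sectors(100))  # 100 MiB = 204800 sectors
```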

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1460715

Title:
  MBR disk setup fails in wily because sfdisk no longer accepts M as a
  valid unit

Status in Init scripts for use on cloud images:
  New

Bug description:
  Specifically, we get the following output in
  cc_disk_setup.exec_mkpart_mbr:

  sfdisk: --Linux option is unnecessary and deprecated
  sfdisk: unsupported unit 'M'

  and the manpage says:

 -u, --unit S
Deprecated option.  Only the sector unit is supported.

  So we'll need to shift to using sectors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1460715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460720] [NEW] Move ipv6_gateway L3 config to CLI

2015-06-01 Thread Abishek Subramanian
Public bug reported:

The ipv6 router BP
(https://blueprints.launchpad.net/neutron/+spec/ipv6-router) added a new
L3 agent config called ipv6_gateway wherein an admin can configure the
IPv6 LLA of the upstream physical router, so that the neutron virtual
router has a default V6 gateway route to the upstream router.

This solution is however not scalable when there are multiple external routers 
per L3 agent. 
Per review comments - 
https://review.openstack.org/#/c/156283/42/etc/l3_agent.ini
It is better to move this config to the CLI.

This change aims to do exactly that: update the router_update API so that
the neutron router-gateway-set CLI gains a new option to set an
ipv6_gateway.

** Affects: neutron
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: New


** Tags: rfe

** Tags added: rfe

** Changed in: neutron
 Assignee: (unassigned) => Abishek Subramanian (absubram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460720

Title:
  Move ipv6_gateway L3 config to CLI

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The ipv6 router BP
  (https://blueprints.launchpad.net/neutron/+spec/ipv6-router) added a
  new L3 agent config called ipv6_gateway wherein an admin can configure
  the IPv6 LLA of the upstream physical router, so that the neutron
  virtual router has a default V6 gateway route to the upstream router.

  This solution is however not scalable when there are multiple external 
routers per L3 agent. 
  Per review comments - 
https://review.openstack.org/#/c/156283/42/etc/l3_agent.ini
  It is better to move this config to the CLI.

  This change aims to do exactly that: update the router_update API so
  that the neutron router-gateway-set CLI gains a new option to set an
  ipv6_gateway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1202785] Re: Authentication is not checked before sending potentially large request bodies

2015-06-01 Thread nikhil komawar
** Changed in: glance
   Status: Triaged => Confirmed

** Changed in: glance
   Status: Confirmed => Won't Fix

** Changed in: glance
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1202785

Title:
  Authentication is not checked before sending potentially large request
  bodies

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix
Status in OpenStack Identity  (Keystone) Middleware:
  Incomplete
Status in OpenStack Security Advisories:
  Won't Fix
Status in Python client library for Keystone:
  Invalid

Bug description:
  When making an HTTP request with a body to an api using the keystone
  auth_token middleware and no request size limiting then an
  unauthorized user can send a very large request that will not fail
  with a 401 until after all of the data is sent. This means that anyone
  who can hit an api could make many requests with large bodies and not
  be denied until after all of that data has been sent, wasting lots/all
  of the resources on the api node essentially bringing it down.

  This issue can be mitigated for apis like nova by having middleware or
  using the webserver to limit the maximum size of a request. In the
  case of the glance-api however, large requests such as image uploads
  need to occur. Perhaps the auth_token middleware should look at
  request headers and perform authN and authZ before accepting all of
  the request body. It's also very inefficient and time consuming to
  wait until all the data is sent before receiving a 401.
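To illustrate the idea only (this is not Glance's or auth_token's actual implementation), a WSGI middleware can refuse a request from its headers alone, before anything is read from wsgi.input; the token check here is a stand-in:

```python
def reject_before_body(app, valid_tokens):
    """Fail authentication from headers only, never touching the
    (potentially huge) request body."""
    def middleware(environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token not in valid_tokens:
            # environ["wsgi.input"] is never read here, so the client's
            # upload is not consumed before the 401 goes out.
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"401 Unauthorized"]
        return app(environ, start_response)
    return middleware
```

In practice the client must also cooperate (e.g. via Expect: 100-continue) for the body never to leave the client at all; the point here is only that the server-side decision needs nothing but headers.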

  I am not sure of the level of impact this could have for most
  deployers and the different APIs.

  Here is an example of requests to glance and devstack with a bad token
  and their times to complete. Nova-api on devstack also accepted large
  bodies before returning a 401.


  1 Meg Image

  [ameade@ameade-dev:~]
  [17:30:16] $ time glance --debug --os-auth-token 'gah' image-create --name 
test 1meg.img 
  curl -i -X POST -H 'Transfer-Encoding: chunked' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-size: 1048576' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: gah' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-name: test' -d '<open file 'stdin', mode 'r' at 0x7f8d762bd150>' http://50.56.173.46:9292/v1/images

  HTTP/1.1 401 Unauthorized
  date: Thu, 18 Jul 2013 17:30:30 GMT
  content-length: 253
  content-type: text/plain; charset=UTF-8

  401 Unauthorized

  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.


  Request returned failure status.
  Invalid OpenStack Identity credentials.

  real    0m0.766s
  user    0m0.312s
  sys     0m0.164s


  100 meg

  
  [ameade@ameade-dev:~]
  [17:31:35] $ time glance --debug --os-auth-token 'gah' image-create --name 
test 100meg.img 
  curl -i -X POST -H 'Transfer-Encoding: chunked' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-size: 104857600' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: gah' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-name: test' -d '<open file 'stdin', mode 'r' at 0x7f6af9768150>' http://50.56.173.46:9292/v1/images

  HTTP/1.1 401 Unauthorized
  date: Thu, 18 Jul 2013 17:31:40 GMT
  content-length: 253
  content-type: text/plain; charset=UTF-8

  401 Unauthorized

  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.


  Request returned failure status.
  Invalid OpenStack Identity credentials.

  real    0m1.441s
  user    0m0.420s
  sys     0m0.344s


  10 gig

  [ameade@ameade-dev:~]
  [17:16:23] 1 $ time glance --debug --os-auth-token 'gah' image-create --name 
test 10g.img
  curl -i -X POST -H 'Transfer-Encoding: chunked' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-size: 100' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: gah' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-name: test' -d 'open file 
'stdin', mode 'r' at 0x7f768c151150' http://50.56.173.46:9292/v1/images

  HTTP/1.1 401 Unauthorized
  date: Thu, 18 Jul 2013 17:16:28 GMT
  content-length: 253
  content-type: text/plain; charset=UTF-8

  401 Unauthorized

  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.


  Request returned failure status.
  Invalid OpenStack Identity credentials.

  real    0m56.082s
  user    0m6.308s
  sys     0m17.669s

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1277104] Re: wrong order of assertEquals args

2015-06-01 Thread Doug Hellmann
** Changed in: python-troveclient
   Status: Fix Committed => Fix Released

** Changed in: python-troveclient
    Milestone: None => 1.2.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277104

Title:
  wrong order of assertEquals args

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Oslo Policy:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Nova:
  Fix Released
Status in OpenStack Command Line Client:
  Fix Released
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in Python client library for Swift:
  In Progress
Status in Trove client binding:
  Fix Released
Status in Rally:
  Confirmed
Status in Openstack Database (Trove):
  Fix Committed

Bug description:
  The arguments of assertEquals calls in ceilometer.tests are arranged in the 
wrong order. As a result, when a test fails it reports the expected and 
observed data incorrectly. This occurs more than 2000 times.
  The right order of arguments is (expected, actual).
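  The convention can be shown with a self-contained snippet (plain unittest
  here, standing in for the testtools-based tests in the affected projects):

```python
import unittest

class ArgOrderExample(unittest.TestCase):
    def test_right_order(self):
        observed = sorted([3, 1, 2])
        # Expected value first, observed value second; a failure then
        # reports "expected != actual" the right way around.
        self.assertEqual([1, 2, 3], observed)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ArgOrderExample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```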

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285478] Re: Enforce alphabetical ordering in requirements file

2015-06-01 Thread Doug Hellmann
** Changed in: python-ironicclient
   Status: Fix Committed => Fix Released

** Changed in: python-ironicclient
    Milestone: None => 0.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285478

Title:
  Enforce alphabetical ordering in requirements file

Status in Blazar:
  Invalid
Status in Cinder:
  Invalid
Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Python client library for Cinder:
  Invalid
Status in Python client library for Glance:
  Invalid
Status in Python client library for Ironic:
  Fix Released
Status in Python client library for Neutron:
  Invalid
Status in Trove client binding:
  Invalid
Status in OpenStack contribution dashboard:
  Fix Released
Status in Storyboard database creator:
  Invalid
Status in Tempest:
  Invalid
Status in Openstack Database (Trove):
  Invalid
Status in Tuskar:
  Fix Released
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Won't Fix

Bug description:
  
  Sorting requirements files in alphabetical order makes them more readable, 
and makes it easy to check whether a specific library is present. Hacking 
doesn't check *.txt files.
  We had enforced this check in oslo-incubator: 
https://review.openstack.org/#/c/66090/.

  This bug is used to track syncing the gating check.

  How to sync this to other projects:

  1. Copy tools/requirements_style_check.sh to project/tools.

  2. Run tools/requirements_style_check.sh requirements.txt 
test-requirements.txt.

  3. Fix the violations.
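  A minimal sketch of such a check (illustrative only, not the project's
  requirements_style_check.sh): compare the package names, lowercased and
  stripped of version specifiers, against their sorted order.

```python
import re

def check_sorted(lines):
    """Return True if the requirement lines are alphabetically sorted.

    Comment and blank lines are ignored; the package name is everything
    before the first version-specifier character.
    """
    names = [re.split(r"[<>=!]", line)[0].strip().lower()
             for line in lines
             if line.strip() and not line.strip().startswith("#")]
    return names == sorted(names)

print(check_sorted(["Babel>=1.3", "pbr>=0.6", "six>=1.7.0"]))  # True
print(check_sorted(["six>=1.7.0", "pbr>=0.6"]))                # False
```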

To manage notifications about this bug go to:
https://bugs.launchpad.net/blazar/+bug/1285478/+subscriptions



[Yahoo-eng-team] [Bug 1460681] [NEW] Neutron DHCP namespaces are not created properly on reboot

2015-06-01 Thread Abhishek Chanda
Public bug reported:

I am running OpenStack Juno in a set of Docker containers. When my
neutron-network container reboots, the neutron DHCP logs contain many
entries like:

2015-05-28 17:49:14.629 475 TRACE neutron.agent.dhcp_agent Stderr:
'RTNETLINK answers: Invalid argument\n' 2015-05-28 17:49:14.629 475
TRACE neutron.agent.dhcp_agent

I noticed that this is because the namespace misbehaves when the
container comes back up:

# ip netns exec qdhcp-474bd6da-e74f-436a-8408-e10fe5925220 ip a
setting the network namespace
qdhcp-474bd6da-e74f-436a-8408-e10fe5925220 failed: Invalid argument
# ls -la /var/run/netns/
total 8
drwxr-xr-x 2 root root 4096 May 29 14:43 .
drwxr-xr-x 9 root root 4096 May 29 14:43 ..
-r--r--r-- 1 root root    0 May 29 14:43
qdhcp-474bd6da-e74f-436a-8408-e10fe5925220

So, the namespace does exist, but the kernel does not seem to recognize
it.

Note that neutron-dhcp is running in a Docker container. Also, the reboot
is a `docker restart`.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460681

Title:
  Neutron DHCP namespaces are not created properly on reboot

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am running OpenStack Juno in a set of Docker containers. When my
  neutron-network container reboots, the neutron DHCP logs contain many
  entries like:

  2015-05-28 17:49:14.629 475 TRACE neutron.agent.dhcp_agent Stderr:
  'RTNETLINK answers: Invalid argument\n' 2015-05-28 17:49:14.629 475
  TRACE neutron.agent.dhcp_agent

  I noticed that this is because the namespace misbehaves when the
  container comes back up:

  # ip netns exec qdhcp-474bd6da-e74f-436a-8408-e10fe5925220 ip a
  setting the network namespace
  qdhcp-474bd6da-e74f-436a-8408-e10fe5925220 failed: Invalid argument
  # ls -la /var/run/netns/
  total 8
  drwxr-xr-x 2 root root 4096 May 29 14:43 .
  drwxr-xr-x 9 root root 4096 May 29 14:43 ..
  -r--r--r-- 1 root root    0 May 29 14:43
  qdhcp-474bd6da-e74f-436a-8408-e10fe5925220

  So, the namespace does exist, but the kernel does not seem to
  recognize it.

  Note that neutron-dhcp is running in a Docker container. Also, the reboot
  is a `docker restart`.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460681/+subscriptions



[Yahoo-eng-team] [Bug 1449260] Re: [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

2015-06-01 Thread Tristan Cacqueray
** Summary changed:

- Sanitation of metadata label (CVE-2015-3988)
+ [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449260

Title:
  [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  1) Start up Horizon
  2) Go to Images
  3) Next to an image, pick Update Metadata
  4) From the dropdown button, select Update Metadata
  5) In the Custom box, enter a value with some HTML like 
'</script><script>alert(1)</script>//', click +
  6) On the right-hand side, give it a value, like ee
  7) Click Save
  8) Pick Update Metadata for the image again, the page will fail to load, 
and the JavaScript console says:

  SyntaxError: invalid property id
  var existing_metadata = {

  An alternative is if you change the URL to update_metadata for the
  image (for example,
  
http://192.168.122.239/admin/images/fa62ba27-e731-4ab9-8487-f31bac355b4c/update_metadata/),
  it will actually display the alert box and a bunch of junk.

  I'm not sure if update_metadata is actually a page, though... can't
  figure out how to get to it other than typing it in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449260/+subscriptions



[Yahoo-eng-team] [Bug 1460673] [NEW] nova-manage flavor convert fails if instance has no flavor in sys_meta

2015-06-01 Thread John Garbutt
Public bug reported:

nova-manage fails if instance has no flavor in sys_meta when trying to
move them all to instance_extra.

In most cases, however, the instance_type table contains the correct
information, so it should be possible to copy it from there.

** Affects: nova
 Importance: Medium
 Assignee: Dan Smith (danms)
 Status: In Progress


** Tags: unified-objects

** Tags added: unified-objects

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Triaged

** Description changed:

  nova-manage fails if instance has no flavor in sys_meta when trying to
- move them all to instance_extra
+ move them all to instance_extra.
+ 
+ But mostly the instance_type table includes the correct information, so
+ it should be possible to copy it from there.

** Summary changed:

- nova-manage fails if instance has no flavor in sys_meta when trying to move 
them all to instance_extra
+ nova-manage flavor convert fails if instance has no flavor in sys_meta

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460673

Title:
  nova-manage flavor convert fails if instance has no flavor in sys_meta

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  nova-manage fails if instance has no flavor in sys_meta when trying to
  move them all to instance_extra.

  In most cases, however, the instance_type table contains the correct
  information, so it should be possible to copy it from there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460673/+subscriptions



[Yahoo-eng-team] [Bug 1460741] [NEW] security groups iptables can block legitimate traffic as INVALID

2015-06-01 Thread Mike Dorman
Public bug reported:

The iptables implementation of security groups includes a default rule
to drop any INVALID packets (according to the Linux connection state
tracking system.)  It looks like this:

-A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

This is placed near the top of the rule stack, before any security group
rules added by the user.  See:

https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

However, there are some cases where you would not want traffic marked as
INVALID to be dropped here.  Specifically, our use case:

We have a load balancing scheme where requests from the LB are tunneled
as IP-in-IP encapsulation between the LB and the VM.  Response traffic
is configured for DSR, so the responses go directly out the default
gateway of the VM.

The results of this are iptables on the hypervisor does not see the
initial SYN from the LB to VM (because it is encapsulated in IP-in-IP),
and thus it does not make it into the connection table.  The response
that comes out of the VM (not encapsulated) hits iptables on the
hypervisor and is dropped as invalid.

I'd like to see a Neutron option to enable/disable the population of
this INVALID state rule, so that operators (such as us) can disable it
if desired.  Obviously it's better in general to keep it in there to
drop invalid packets, but there are cases where you would like to not do
this.
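A sketch of the requested knob, with an invented flag name (drop_invalid) —
Neutron has no such option today, which is the point of this report. The
idea is simply to make the INVALID-state rule conditional when the base
chain is built:

```python
def build_base_rules(drop_invalid=True):
    """Build the per-port base rules, optionally omitting the INVALID drop.

    The flag is hypothetical; rule strings mirror the iptables fragments
    quoted in the report.
    """
    rules = []
    if drop_invalid:
        # The rule the reporter wants to be able to disable: it sits ahead
        # of user-defined security group rules and drops DSR-style replies
        # whose initial SYN was never seen by conntrack.
        rules.append("-m state --state INVALID -j DROP")
    rules.append("-m state --state RELATED,ESTABLISHED -j RETURN")
    return rules

print(build_base_rules(drop_invalid=False))
```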

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460741

Title:
  security groups iptables can block legitimate traffic as INVALID

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The iptables implementation of security groups includes a default rule
  to drop any INVALID packets (according to the Linux connection state
  tracking system.)  It looks like this:

  -A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

  This is placed near the top of the rule stack, before any security
  group rules added by the user.  See:

  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

  However, there are some cases where you would not want traffic marked
  as INVALID to be dropped here.  Specifically, our use case:

  We have a load balancing scheme where requests from the LB are
  tunneled as IP-in-IP encapsulation between the LB and the VM.
  Response traffic is configured for DSR, so the responses go directly
  out the default gateway of the VM.

  The results of this are iptables on the hypervisor does not see the
  initial SYN from the LB to VM (because it is encapsulated in IP-in-
  IP), and thus it does not make it into the connection table.  The
  response that comes out of the VM (not encapsulated) hits iptables on
  the hypervisor and is dropped as invalid.

  I'd like to see a Neutron option to enable/disable the population of
  this INVALID state rule, so that operators (such as us) can disable it
  if desired.  Obviously it's better in general to keep it in there to
  drop invalid packets, but there are cases where you would like to not
  do this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460741/+subscriptions



[Yahoo-eng-team] [Bug 1460793] [NEW] Metadata IP is resolved locally by Windows by default, causing big delay in URL access

2015-06-01 Thread Eugene Nikanorov
Public bug reported:

With a private network plugged into a router and the router serving
metadata:

When Windows accesses the metadata URL, it tries to resolve the MAC address 
of the metadata IP directly, despite the routing table telling it to go to 
the default gateway. That is because 169.254.0.0/16 addresses are link-local 
and are considered local by default.
Such behavior causes a long delay before the connection can be established.
This, in turn, causes many issues during the cloud-init phase: slowness, 
timeouts, etc.

The workaround could be to add an explicit route to the subnet, e.g.
169.254.169.254/32 via the subnet's default gateway.

It makes sense to let DHCP agent inject such route by default via
dnsmasq config.
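The suggested injection can be sketched with DHCP option 121 (classless
static routes), which dnsmasq exposes as option:classless-static-route;
the exact opts-file line Neutron would emit is an assumption here:

```python
METADATA_CIDR = "169.254.169.254/32"

def metadata_route_opt(gateway_ip):
    """Build a dnsmasq opts entry routing the metadata IP via the gateway.

    Illustrative format only; Neutron's real opts file also carries
    per-network tags.
    """
    return "option:classless-static-route,%s,%s" % (METADATA_CIDR, gateway_ip)

print(metadata_route_opt("10.0.0.1"))
# option:classless-static-route,169.254.169.254/32,10.0.0.1
```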

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460793

Title:
  Metadata IP is resolved locally by Windows by default, causing big
  delay in URL access

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With a private network plugged into a router and the router serving
  metadata:

  When Windows accesses the metadata URL, it tries to resolve the MAC address 
of the metadata IP directly, despite the routing table telling it to go to the 
default gateway. That is because 169.254.0.0/16 addresses are link-local and 
are considered local by default.
  Such behavior causes a long delay before the connection can be established.
  This, in turn, causes many issues during the cloud-init phase: slowness, 
timeouts, etc.

  The workaround could be to add an explicit route to the subnet, e.g.
  169.254.169.254/32 via the subnet's default gateway.

  It makes sense to let DHCP agent inject such route by default via
  dnsmasq config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460793/+subscriptions



[Yahoo-eng-team] [Bug 1459828] Re: keystone-all crashes when ca_certs is not defined in conf

2015-06-01 Thread Dolph Mathews
If this can be reproduced against 2014.1 icehouse, I would consider it
to be a critical issue for our core use case (default SSL configuration
w/ apache httpd).

** Changed in: keystone
   Importance: Undecided => Critical

** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459828

Title:
  keystone-all crashes when ca_certs is not defined in conf

Status in OpenStack Identity (Keystone):
  New
Status in Keystone icehouse series:
  New

Bug description:
  When the [ssl] ca_certs parameter is commented out in keystone.conf, the
  ssl module tries to load the default ca_certs file
  (/etc/keystone/ssl/certs/ca.pem) and raises an IOError exception
  because it cannot find the file.

  This happens running on Python 2.7.9.

  I have a keystone cluster running on Python 2.7.7, with the very same
  keystone.conf file, and that crash doesn't happen.

  If any further information is required, don't hesitate to contact me.
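One defensive shape the fix could take (illustrative only, not keystone's
actual code) is to verify the file exists before handing it to the ssl
module, rather than letting the IOError crash keystone-all at startup:

```python
import os

def resolve_ca_certs(configured, default="/etc/keystone/ssl/certs/ca.pem"):
    """Return the first CA bundle path that exists, or None.

    Returning None (no CA bundle) instead of a missing path avoids the
    IOError described in this report; the helper name is invented.
    """
    for path in (configured, default):
        if path and os.path.exists(path):
            return path
    return None

print(resolve_ca_certs(None, default="/nonexistent/ca.pem"))  # None
```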

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459828/+subscriptions



[Yahoo-eng-team] [Bug 1460813] [NEW] JSHint failing on master

2015-06-01 Thread Aaron Sahlin
Public bug reported:

JSHint is throwing an error on master.

horizon/static/horizon/js/horizon.d3linechart.js: line 760, col 11,
'now' is defined but never used.

** Affects: horizon
 Importance: Undecided
 Assignee: Aaron Sahlin (asahlin)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Aaron Sahlin (asahlin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460813

Title:
  JSHint failing on master

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  JSHint is throwing an error on master.

  horizon/static/horizon/js/horizon.d3linechart.js: line 760, col 11,
  'now' is defined but never used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460813/+subscriptions



[Yahoo-eng-team] [Bug 1460804] [NEW] Create project name validation should be handled by Keystone

2015-06-01 Thread Lin Hua Cheng
Public bug reported:

Instead of Horizon validating that the project name is unique, I think
project name validation should be handled by Keystone. We should just
catch the Conflict error and display the appropriate error message.

Related patch: 
https://review.openstack.org/#/c/175096/

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Create project name validation does not perform well
+ Create project name validation should be handled by Keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460804

Title:
  Create project name validation should be handled by Keystone

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Instead of Horizon validating that the project name is unique, I think
  project name validation should be handled by Keystone. We should just
  catch the Conflict error and display the appropriate error message.

  Related patch: 
  https://review.openstack.org/#/c/175096/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460804/+subscriptions



[Yahoo-eng-team] [Bug 1202785] Re: Authentication is not checked before sending potentially large request bodies

2015-06-01 Thread Jeremy Stanley
** Changed in: keystonemiddleware
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1202785

Title:
  Authentication is not checked before sending potentially large request
  bodies

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix
Status in OpenStack Identity  (Keystone) Middleware:
  Invalid
Status in OpenStack Security Advisories:
  Won't Fix
Status in Python client library for Keystone:
  Invalid

Bug description:
  When making an HTTP request with a body to an api using the keystone
  auth_token middleware and no request size limiting then an
  unauthorized user can send a very large request that will not fail
  with a 401 until after all of the data is sent. This means that anyone
  who can hit an api could make many requests with large bodies and not
  be denied until after all of that data has been sent, wasting lots/all
  of the resources on the api node essentially bringing it down.

  This issue can be mitigated for apis like nova by having middleware or
  using the webserver to limit the maximum size of a request. In the
  case of the glance-api however, large requests such as image uploads
  need to occur. Perhaps the auth_token middleware should look at
  request headers and perform authN and authZ before accepting all of
  the request body. It's also very inefficient and time consuming to
  wait until all the data is sent before receiving a 401.
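  The mitigation described above can be sketched as a tiny WSGI middleware
  (illustrative; keystonemiddleware's real implementation differs): validate
  the token from the request headers and answer 401 without ever reading the
  request body.

```python
def auth_first(app, is_token_valid):
    """Wrap `app` so authentication is checked before the body is read."""
    def middleware(environ, start_response):
        if not is_token_valid(environ.get("HTTP_X_AUTH_TOKEN")):
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            # environ["wsgi.input"] is never touched, so a large upload
            # with a bad token is rejected without consuming the body.
            return [b"401 Unauthorized"]
        return app(environ, start_response)
    return middleware
```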

  I am not sure of the level of impact this could have for most
  deployers and the different APIs.

  Here is an example of requests to glance and devstack with a bad token
  and their times to complete. Nova-api on devstack also accepted large
  bodies before returning a 401.


  1 Meg Image

  [ameade@ameade-dev:~]
  [17:30:16] $ time glance --debug --os-auth-token 'gah' image-create --name 
test 1meg.img 
  curl -i -X POST -H 'Transfer-Encoding: chunked' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-size: 1048576' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: gah' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-name: test' -d 'open file 
'stdin', mode 'r' at 0x7f8d762bd150' http://50.56.173.46:9292/v1/images

  HTTP/1.1 401 Unauthorized
  date: Thu, 18 Jul 2013 17:30:30 GMT
  content-length: 253
  content-type: text/plain; charset=UTF-8

  401 Unauthorized

  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.


  Request returned failure status.
  Invalid OpenStack Identity credentials.

  real    0m0.766s
  user    0m0.312s
  sys     0m0.164s


  100 meg

  
  [ameade@ameade-dev:~]
  [17:31:35] $ time glance --debug --os-auth-token 'gah' image-create --name 
test 100meg.img 
  curl -i -X POST -H 'Transfer-Encoding: chunked' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-size: 104857600' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: gah' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-name: test' -d 'open file 
'stdin', mode 'r' at 0x7f6af9768150' http://50.56.173.46:9292/v1/images

  HTTP/1.1 401 Unauthorized
  date: Thu, 18 Jul 2013 17:31:40 GMT
  content-length: 253
  content-type: text/plain; charset=UTF-8

  401 Unauthorized

  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.


  Request returned failure status.
  Invalid OpenStack Identity credentials.

  real    0m1.441s
  user    0m0.420s
  sys     0m0.344s


  10 gig

  [ameade@ameade-dev:~]
  [17:16:23] 1 $ time glance --debug --os-auth-token 'gah' image-create --name 
test 10g.img
  curl -i -X POST -H 'Transfer-Encoding: chunked' -H 'User-Agent: 
python-glanceclient' -H 'x-image-meta-size: 100' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: gah' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-name: test' -d 'open file 
'stdin', mode 'r' at 0x7f768c151150' http://50.56.173.46:9292/v1/images

  HTTP/1.1 401 Unauthorized
  date: Thu, 18 Jul 2013 17:16:28 GMT
  content-length: 253
  content-type: text/plain; charset=UTF-8

  401 Unauthorized

  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.


  Request returned failure status.
  Invalid OpenStack Identity credentials.

  real    0m56.082s
  user    0m6.308s
  sys     0m17.669s

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1202785/+subscriptions


[Yahoo-eng-team] [Bug 1460839] [NEW] bandit: blacklist_functions not a valid plugin

2015-06-01 Thread Eric Brown
Public bug reported:

Keystone currently has the keystone_conservative profile in bandit.yaml
defined as follows:

keystone_conservative:
include:
- blacklist_functions
- blacklist_imports
- request_with_no_cert_validation
- exec_used
- set_bad_file_permissions
- subprocess_popen_with_shell_equals_true
- linux_commands_wildcard_injection
- ssl_with_bad_version

The keystone_conservative profile is the default profile run when using
bandit in the keystone project.  The problem is that blacklist_functions
is not actually a bandit plugin.  There is a plugin called
blacklist_calls, but not blacklist_functions.

To recreate:
- Edit bandit.yaml, comment out - '/tests/' in the exclude_dirs
- Run 'tox -e bandit'
- Notice you get no errors
- Edit bandit.yaml again, search/replace blacklist_functions to blacklist_calls
- Rerun 'tox -e bandit'
- Notice you get an error now:

 Issue: Use of possibly insecure function - consider using safer 
 ast.literal_eval.  
   Severity: Medium   Confidence: High
   Location: keystone/tests/unit/test_wsgi.py:104
103 resp = req.get_response(app)
104 self.assertIn('X-Foo', eval(resp.body))
105 

So, in effect, the blacklist_calls checks never run.
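The failure mode is easy to sketch: a profile entry that matches no plugin
name is silently dropped. The validation helper below is invented for
illustration, and the plugin set is only a subset of bandit's:

```python
# Illustrative subset of bandit plugin names from this report.
KNOWN_PLUGINS = {"blacklist_calls", "blacklist_imports", "exec_used",
                 "ssl_with_bad_version"}

def unknown_entries(includes):
    """Return profile include entries that match no known plugin."""
    return sorted(set(includes) - KNOWN_PLUGINS)

# The typo from keystone's bandit.yaml is flagged; nothing in bandit
# itself performs this check, which is why the error went unnoticed.
print(unknown_entries(["blacklist_functions", "exec_used"]))
# ['blacklist_functions']
```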

** Affects: keystone
 Importance: Low
 Assignee: Eric Brown (ericwb)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Eric Brown (ericwb)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1460839

Title:
  bandit: blacklist_functions not a valid plugin

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Keystone currently has the keystone_conservative profile in
  bandit.yaml defined as follows:

  keystone_conservative:
  include:
  - blacklist_functions
  - blacklist_imports
  - request_with_no_cert_validation
  - exec_used
  - set_bad_file_permissions
  - subprocess_popen_with_shell_equals_true
  - linux_commands_wildcard_injection
  - ssl_with_bad_version

  The keystone_conservative profile is the default profile run when
  using bandit in the keystone project.  The problem is that
  blacklist_functions is not actually a bandit plugin.  There is a
  plugin called blacklist_calls, but not blacklist_functions.

  To recreate:
  - Edit bandit.yaml, comment out - '/tests/' in the exclude_dirs
  - Run 'tox -e bandit'
  - Notice you get no errors
  - Edit bandit.yaml again, search/replace blacklist_functions to 
blacklist_calls
  - Rerun 'tox -e bandit'
  - Notice you get an error now:

   Issue: Use of possibly insecure function - consider using safer 
ast.literal_eval.  
 Severity: Medium   Confidence: High
 Location: keystone/tests/unit/test_wsgi.py:104
  103   resp = req.get_response(app)
  104   self.assertIn('X-Foo', eval(resp.body))
  105   

  So, in effect, the blacklist_calls checks never run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1460839/+subscriptions



[Yahoo-eng-team] [Bug 1460841] [NEW] Removing unused variable from d3linechart.js

2015-06-01 Thread Thai Tran
Public bug reported:

This bug is caused by JSHint complaining about the unused variable 'now'.
The gate is failing because of it; this patch fixes it.

horizon/static/horizon/js/horizon.d3linechart.js: line 760, col 11, 'now' is 
defined but never used.
http://logs.openstack.org/21/187321/8/check/gate-horizon-jshint/3cc537e/console.html

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460841

Title:
  Removing unused variable from d3linechart.js

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  This bug is caused by JSHint complaining about the unused variable 'now'.
  The gate is failing because of it; this patch fixes it.

  horizon/static/horizon/js/horizon.d3linechart.js: line 760, col 11, 'now' is 
defined but never used.
  
http://logs.openstack.org/21/187321/8/check/gate-horizon-jshint/3cc537e/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460875] [NEW] security group creation fails with an internal error if description is not specified

2015-06-01 Thread Ken'ichi Ohmichi
Public bug reported:

If the description parameter is not specified when creating a security
group through the API, an internal error happens like the following:

$ curl [..] -X POST 
http://192.168.11.62:8774/v2/138c5606916a468abec3dd9371e66975/os-security-groups
 -H "Content-Type: application/json" -H "Accept: application/json" -d 
'{"security_group": {"name": "test"}}'
HTTP/1.1 500 Internal Server Error
Content-Length: 128
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-1fbc1833-d87c-4f49-9b73-fc7c4bf894a6
Date: Tue, 02 Jun 2015 00:59:35 GMT

{"computeFault": {"message": "The server has either erred or is incapable of 
performing the requested operation.", "code": 500}}
$

nova-api.log is here:

2015-06-02 00:58:25.817 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 911, in dispatch
2015-06-02 00:58:25.817 TRACE nova.api.openstack return method(req=request, 
**action_args)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/security_groups.py, line 
204, in create
2015-06-02 00:58:25.817 TRACE nova.api.openstack context, group_name, 
group_description)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/network/security_group/neutron_driver.py, line 54, in 
create_security_group
2015-06-02 00:58:25.817 TRACE nova.api.openstack body).get('security_group')
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
102, in with_params
2015-06-02 00:58:25.817 TRACE nova.api.openstack ret = 
self.function(instance, *args, **kwargs)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
716, in create_security_group
2015-06-02 00:58:25.817 TRACE nova.api.openstack return 
self.post(self.security_groups_path, body=body)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
298, in post
2015-06-02 00:58:25.817 TRACE nova.api.openstack headers=headers, 
params=params)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
211, in do_request
2015-06-02 00:58:25.817 TRACE nova.api.openstack 
self._handle_fault_response(status_code, replybody)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
185, in _handle_fault_response
2015-06-02 00:58:25.817 TRACE nova.api.openstack 
exception_handler_v20(status_code, des_error_body)
2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 70, 
in exception_handler_v20
2015-06-02 00:58:25.817 TRACE nova.api.openstack status_code=status_code)
2015-06-02 00:58:25.817 TRACE nova.api.openstack BadRequest: Invalid input for 
description. Reason: 'None' is not a valid string.
2015-06-02 00:58:25.817 TRACE nova.api.openstack
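
The trace shows Nova passing description=None straight through to
python-neutronclient, which Neutron rejects. A minimal sketch of the obvious
guard, defaulting the description before building the request body (the helper
name is hypothetical, not Nova's actual code):

```python
def build_security_group_body(name, description=None):
    # Hypothetical helper: Neutron rejects a None description
    # ("'None' is not a valid string"), so default it to "".
    if description is None:
        description = ''
    return {'security_group': {'name': name, 'description': description}}

body = build_security_group_body('test')
```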

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460875

Title:
  security group creation fails with an internal error if description is
  not specified

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the description parameter is not specified when creating a security
  group through the API, an internal error happens like the following:

  $ curl [..] -X POST 
http://192.168.11.62:8774/v2/138c5606916a468abec3dd9371e66975/os-security-groups
 -H "Content-Type: application/json" -H "Accept: application/json" -d 
'{"security_group": {"name": "test"}}'
  HTTP/1.1 500 Internal Server Error
  Content-Length: 128
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-1fbc1833-d87c-4f49-9b73-fc7c4bf894a6
  Date: Tue, 02 Jun 2015 00:59:35 GMT

  {"computeFault": {"message": "The server has either erred or is incapable of 
performing the requested operation.", "code": 500}}
  $

  nova-api.log is here:

  2015-06-02 00:58:25.817 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 911, in dispatch
  2015-06-02 00:58:25.817 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/security_groups.py, line 
204, in create
  2015-06-02 00:58:25.817 TRACE nova.api.openstack context, group_name, 
group_description)
  2015-06-02 00:58:25.817 TRACE nova.api.openstack   File 

[Yahoo-eng-team] [Bug 1460652] Re: nova-conductor infinitely reconnects to rabbit

2015-06-01 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460652

Title:
  nova-conductor infinitely reconnects to rabbit

Status in OpenStack Compute (Nova):
  New
Status in Messaging API for OpenStack:
  New

Bug description:
  1. Exact version of Nova 
  ii  nova-api
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - API frontend
  ii  nova-cert   
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - certificate management
  ii  nova-common 
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - common files
  ii  nova-conductor  
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - conductor service
  ii  nova-console
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - Console
  ii  nova-consoleauth
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - Console Authenticator
  ii  nova-novncproxy 
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - NoVNC proxy
  ii  nova-scheduler  
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute - virtual machine scheduler
  ii  python-nova 
1:2014.1.100+git201410062002~trusty-0ubuntu1 all  OpenStack 
Compute Python libraries
  ii  python-novaclient   
1:2.17.0.74.g2598714+git201404220131~trusty-0ubuntu1 all  client 
library for OpenStack Compute API

  rabbit configuration in nova.conf:

rabbit_hosts = m610-2:5672, m610-1:5672
rabbit_ha_queues =  true

  
  2. Relevant log files:
  /var/log/nova/nova-conductor.log

   exchange 'reply_bea18a6133c548f099b85b168fddf83c' in vhost '/'
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
Traceback (most recent call last):
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 624, in ensure
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
return method(*args, **kwargs)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 729, in _publish
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
publisher = cls(self.conf, self.channel, topic, **kwargs)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 361, in __init__
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
type='direct', **options)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 326, in __init__
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.reconnect(channel)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py, 
line 334, in reconnect
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
routing_key=self.routing_key)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 82, in __init__
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.revive(self._channel)
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 216, in revive
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.declare()
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/messaging.py, line 102, in declare
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
self.exchange.declare()
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/kombu/entity.py, line 166, in declare
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 
nowait=nowait, passive=passive,
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit   
File /usr/lib/python2.7/dist-packages/amqp/channel.py, line 612, in 
exchange_declare
  2015-06-01 08:23:56.484 16427 TRACE oslo.messaging._drivers.impl_rabbit 

[Yahoo-eng-team] [Bug 1460866] [NEW] Hypervisor stats are incorrect in Horizon

2015-06-01 Thread Kahou Lei
Public bug reported:

From the Admin -> Hypervisor panel, I notice that the hypervisor stats
are converted from megabytes to gibibytes.

However, the calculation seems incorrect.

Here is my observation:

From nova CLI:

vagrant@precise64:~/devstack$ nova hypervisor-stats
+--+---+
| Property | Value |
+--+---+
| count| 1 |
| current_workload | 0 |
| disk_available_least | -49   |
| free_disk_gb | -43   |
| free_ram_mb  | -1236 |
| local_gb | 38|
| local_gb_used| 81|
| memory_mb| 7980  |
| memory_mb_used   | 9216  |
| running_vms  | 3 |
| vcpus| 4 |
| vcpus_used   | 5 |
+--+---+

The memory_mb here is 7980.

On the Horizon side, it is converted to 7.8 GB (see the attached
screenshot; GB here means gibibyte).

However, if you convert the value yourself, it is 7.43 GiB:

https://www.google.com/search?q=megabyte+to+gigabyte&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb#rls=org.mozilla:en-US:official&channel=sb&q=7980+megabyte+to+gibibyte

Note: you will see my hypervisor stats are oversubscribed. Please ignore
that; this bug is only about the calculation problem.
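
The disagreement comes down to which unit memory_mb is in. The two candidate
conversions for 7980 (plain arithmetic for illustration, not Horizon's actual
code):

```python
memory_mb = 7980

# If memory_mb is decimal megabytes (10**6 bytes), the gibibyte
# value is ~7.43, matching the Google conversion:
gib_from_mb = memory_mb * 10**6 / 2**30

# If memory_mb is really mebibytes, dividing by 1024 gives ~7.79,
# which rounds to the 7.8 GB shown in the panel:
gib_from_mib = memory_mb / 1024

print(round(gib_from_mb, 2), round(gib_from_mib, 2))  # 7.43 7.79
```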

** Affects: horizon
 Importance: Undecided
 Assignee: Kahou Lei (kahou82)
 Status: New

** Attachment added: Screen Shot 2015-06-01 at 6.17.35 PM.png
   
https://bugs.launchpad.net/bugs/1460866/+attachment/4408364/+files/Screen%20Shot%202015-06-01%20at%206.17.35%20PM.png

** Changed in: horizon
 Assignee: (unassigned) => Kahou Lei (kahou82)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1460866

Title:
  Hypervisor stats are incorrect in Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From the Admin -> Hypervisor panel, I notice that the hypervisor
  stats are converted from megabytes to gibibytes.

  However, the calculation seems incorrect.

  Here is my observation:

  From nova CLI:

  vagrant@precise64:~/devstack$ nova hypervisor-stats
  +--+---+
  | Property | Value |
  +--+---+
  | count| 1 |
  | current_workload | 0 |
  | disk_available_least | -49   |
  | free_disk_gb | -43   |
  | free_ram_mb  | -1236 |
  | local_gb | 38|
  | local_gb_used| 81|
  | memory_mb| 7980  |
  | memory_mb_used   | 9216  |
  | running_vms  | 3 |
  | vcpus| 4 |
  | vcpus_used   | 5 |
  +--+---+

  The memory_mb here is 7980.

  On the Horizon side, it is converted to 7.8 GB (see the attached
  screenshot; GB here means gibibyte).

  However, if you convert the value yourself, it is 7.43 GiB:

  
https://www.google.com/search?q=megabyte+to+gigabyte&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb#rls=org.mozilla:en-US:official&channel=sb&q=7980+megabyte+to+gibibyte

  Note: you will see my hypervisor stats are oversubscribed. Please
  ignore that; this bug is only about the calculation problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1460866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460887] [NEW] when backing up a VM booted from a volume, the error information is not detailed

2015-06-01 Thread wangxiyuan
Public bug reported:

Now, when backing up a VM booted from a volume, the error information is
just: The request is invalid.

Users don't know why.

So we can provide more information, like:

It's not supported to backup volume backed instance.
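
A minimal sketch of the proposed behavior, rejecting the backup up front with a
specific message (the exception class and flag below are hypothetical, not
Nova's actual code):

```python
class InvalidRequest(Exception):
    """Hypothetical stand-in for Nova's generic 'request is invalid' error."""

def backup_instance(instance_name, is_volume_backed):
    # Fail early with a specific reason instead of the generic
    # "The request is invalid."
    if is_volume_backed:
        raise InvalidRequest(
            "It's not supported to backup volume backed instance: %s"
            % instance_name)
    return 'backup of %s started' % instance_name

try:
    backup_instance('vm1', is_volume_backed=True)
except InvalidRequest as exc:
    message = str(exc)
```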

** Affects: nova
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Description changed:

  Now,when backup a vm booted from volume ,the error information is:The
  request is invalid.
  
- User don't know why.
+ Users don't know why.
  
  So we can support more information like:
  
  It's not supported to backup volume backed instance.

** Changed in: nova
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460887

Title:
  when backup a vm booted from volume ,the error information is not
  detailed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Now, when backing up a VM booted from a volume, the error information
  is just: The request is invalid.

  Users don't know why.

  So we can provide more information, like:

  It's not supported to backup volume backed instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356737] Re: nova backup should not back up VMs that boot from Cinder volumes

2015-06-01 Thread wangxiyuan
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356737

Title:
  nova backup should not back up VMs that boot from Cinder volumes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When backing up a VM that boots from a Cinder volume, Nova first shuts down 
the VM and then does the backup.
  But it fails, and the detailed exception log can be seen in the last part.
  When we need Nova to back up a VM booted from a Cinder volume, we can do it 
using Cinder's "copy volume to image" function.
  I think we should raise an exception when Nova is asked to back up a VM 
booted from a Cinder volume.

  ==Nova-compute exception log===
  Traceback (most recent call last):
File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  incoming.message))
File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File /opt/stack/nova/nova/exception.py, line 88, in wrapped
  payload)
File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/exception.py, line 71, in wrapped
  return f(self, context, *args, **kw)
File /opt/stack/nova/nova/compute/manager.py, line 285, in 
decorated_function
  pass
File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/compute/manager.py, line 271, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 313, in 
decorated_function
  kwargs['instance'], e, sys.exc_info())
File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/compute/manager.py, line 301, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 2741, in 
backup_instance
  self._do_snapshot_instance(context, image_id, instance, rotation)
File /opt/stack/nova/nova/compute/manager.py, line 360, in 
decorated_function
  % image_id, instance=instance)
File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/compute/manager.py, line 351, in 
decorated_function
  *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 2729, in 
_do_snapshot_instance
  task_states.IMAGE_BACKUP)
File /opt/stack/nova/nova/compute/manager.py, line 2807, in 
_snapshot_instance
  update_task_state)
File /opt/stack/nova/nova/virt/libvirt/driver.py, line 1501, in snapshot
  image_type=source_format)
File /opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 716, in 
snapshot
  return backend(path=disk_path)
File /opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 421, in 
__init__
  info = libvirt_utils.logical_volume_info(path)
File /opt/stack/nova/nova/virt/libvirt/utils.py, line 336, in 
logical_volume_info
  '--separator', '|', path, run_as_root=True)
File /opt/stack/nova/nova/virt/libvirt/utils.py, line 54, in execute
  return utils.execute(*args, **kwargs)
File /opt/stack/nova/nova/utils.py, line 164, in execute
  return processutils.execute(*cmd, **kwargs)
File /opt/stack/nova/nova/openstack/common/processutils.py, line 194, in 
execute
  cmd=' '.join(cmd))
  ProcessExecutionError: Unexpected error while running command.
  Command: lvs -o vg_all,lv_all --separator | 
/dev/disk/by-path/ip-10.250.10.193:3260-iscsi-iqn.2010-10.org.openstack:volume-9ba91e05-050b-4dda-ac3c-9f3630c704c0-lun-1
  Exit code: 5
  Stdout: ''
  Stderr: '  
disk/by-path/ip-10.250.10.193:3260-iscsi-iqn.2010-10.org.openstack:volume-9ba91e05-050b-4dda-ac3c-9f3630c704c0-lun-1:
 Invalid path for Logical Volume\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp