[Yahoo-eng-team] [Bug 1478531] [NEW] Functional test for dhcp autoschedule does not check properly

2015-07-27 Thread Darragh O'Reilly
Public bug reported:

https://github.com/openstack/neutron/blob/801dedebbfc7ff4ae6421c793a1154ca0d169e6c/neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py#L394-L396


for hosted_net_id in hosted_net_ids:
    self.assertIn(hosted_net_id, expected_hosted_networks,
                  message=msg + '[%s]' % hosted_net_id)


This will always pass because hosted_net_ids is always a sub-list of
expected_hosted_networks, even if the method under test doesn't do what it's
supposed to.

The test should assert that each expected hosted network is in
hosted_net_ids instead.
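
A minimal sketch of the corrected check, inverting the loop (variable
names as in the snippet above):

for expected_net_id in expected_hosted_networks:
    self.assertIn(expected_net_id, hosted_net_ids,
                  message=msg + '[%s]' % expected_net_id)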

** Affects: neutron
 Importance: Undecided
 Assignee: Darragh O'Reilly (darragh-oreilly)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Darragh O'Reilly (darragh-oreilly)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478531

Title:
  Functional test for dhcp autoschedule does not check properly

Status in neutron:
  In Progress

Bug description:
  
https://github.com/openstack/neutron/blob/801dedebbfc7ff4ae6421c793a1154ca0d169e6c/neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py#L394-L396

  
  for hosted_net_id in hosted_net_ids:
      self.assertIn(hosted_net_id, expected_hosted_networks,
                    message=msg + '[%s]' % hosted_net_id)

  
  This will always pass because hosted_net_ids is always a sub-list of
  expected_hosted_networks, even if the method under test doesn't do what
  it's supposed to.

  The test should assert that each expected hosted network is in
  hosted_net_ids instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478546] [NEW] nova GMR doesn't provide option to specify log_dir path

2015-07-27 Thread Divya K Konoor
Public bug reported:

oslo_report
https://github.com/openstack/oslo.reports/blob/master/oslo_reports/guru_meditation_report.py#L109
provides an option to specify a log directory that can be used to save the
GMR when a user sends a signal to a process. When log_dir is not
specified, the report gets dumped to stderr.

Currently, nova (and other services that support GMR) doesn't have a
provision to specify a log dir. Because of this, the GMR gets dumped to stderr
and often gets lost among thousands of other log statements. As the GMR
is used primarily for debugging, it makes a lot of sense to have a
separate file/directory that captures the report. That also makes it much
easier to share the output of a GMR and keep it archived for later
reference if required. This problem can be fixed by providing a new
section in the respective service conf files, something like the following:

[oslo_report]
log_dir = /home/abc/gmr
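
For illustration, a hedged sketch of how a service could wire this up (the
log_dir keyword is taken from the setup_autorun code linked above; using
nova's version module here mirrors nova/cmd, but is an assumption):

from oslo_reports import guru_meditation_report as gmr
from nova import version

# with log_dir set, each report should be written to a file in that
# directory instead of being dumped to stderr
gmr.TextGuruMeditation.setup_autorun(version, log_dir='/home/abc/gmr')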

** Affects: nova
 Importance: Undecided
 Assignee: Divya K Konoor (dikonoor)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478546

Title:
  nova GMR doesn't provide option to specify log_dir path

Status in OpenStack Compute (nova):
  New

Bug description:
  oslo_report
  https://github.com/openstack/oslo.reports/blob/master/oslo_reports/guru_meditation_report.py#L109
  provides an option to specify a log directory that can be used to
  save the GMR when a user sends a signal to a process. When log_dir is
  not specified, the report gets dumped to stderr.

  Currently, nova (and other services that support GMR) doesn't have a
  provision to specify a log dir. Because of this, the GMR gets dumped to
  stderr and often gets lost among thousands of other log
  statements. As the GMR is used primarily for debugging, it makes a
  lot of sense to have a separate file/directory that captures the
  report. That also makes it much easier to share the output of a GMR
  and keep it archived for later reference if required. This problem can
  be fixed by providing a new section in the respective service conf
  files, something like the following:

  [oslo_report]
  log_dir = /home/abc/gmr

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477576] [NEW] No option to delete(reset to default) specific resources with nova quota delete

2015-07-27 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Hi All,

There is no command in nova (or any other service) to delete a specific resource's quota.
The nova quota-delete command takes only a project ID and resets all the resources
corresponding to the service.

Suppose I have updated ram to , vcpus to 10, cores to 5. Now, after a
few days, I want to reset ram and vcpus to their defaults; then I have to
update ram and vcpus to the defaults using nova quota-update. If I do nova
quota-delete, there is no option to name these resources (it takes only
project_id). It will delete/reset all the resources for that project.

I feel it would be better if nova quota-delete also provided an option to
specify which resource to delete (reset to default).
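
For illustration, a hypothetical invocation (the --resource flag does not
exist today; the current form is shown for contrast, with the flag spelling
as in python-novaclient of the time):

# current behavior: resets ALL quota values for the project
nova quota-delete --tenant <project_id>

# proposed: reset only the named resources to their defaults
nova quota-delete --tenant <project_id> --resource ram --resource vcpus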

Thanks,
Ashish.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
No option to delete(reset to default) specific resources with nova quota delete
https://bugs.launchpad.net/bugs/1477576
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478579] [NEW] When user in AD doesn't have ID field all user handlers error out

2015-07-27 Thread Victor Denisov
Public bug reported:

We have keystone integrated with AD.

'user_id_attribute' is set to 'info'. So, when our users first get
created in AD, they don't always have this field populated. When a user
does not have a populated 'info' attribute, all keystone queries fail,
not just queries for rows containing that user.
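
For reference, the relevant domain-specific LDAP configuration looks
something like this:

[ldap]
user_id_attribute = info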

Jul 7 14:02:12 node-38 keystone-all ID attribute info not found in LDAP
object <AD CN Object here>

Some examples of how I think keystone should behave in this situation:

List all users - list only valid users and ignore invalid ones.

Authenticate an invalid user - this request should not be authenticated.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478579

Title:
  When user in AD doesn't have ID field all user handlers error out

Status in Keystone:
  New

Bug description:
  We have keystone integrated with AD.

  'user_id_attribute' is set to 'info'. So, when our users first get
  created in AD, they don't always have this field populated. When a
  user does not have a populated 'info' attribute, all keystone queries
  fail, not just queries for rows containing that user.

  Jul 7 14:02:12 node-38 keystone-all ID attribute info not found in
  LDAP object <AD CN Object here>

  Some examples of how I think keystone should behave in this situation:

  List all users - list only valid users and ignore invalid ones.

  Authenticate an invalid user - this request should not be authenticated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477576] Re: No option to delete(reset to default) specific resources with nova quota delete

2015-07-27 Thread Ashish Singh
** Project changed: nova-hyper => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477576

Title:
  No option to delete(reset to default) specific resources with nova
  quota delete

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi All,

  There is no command in nova(or any other service) to delete specific resource.
  Nova quota delete command takes only project Id and resets all the resources 
corresponding to the service.

  Suppose I have updated ram to , vcpus to 10, cores to 5. Now after
  few days I want to reset ram and vcpus to default then i have to
  update ram and vcpus to default using nova quota update. If I do nova
  quota delete, there is no option to mention these resources(it takes
  only project_id).  It will delete/reset all the resources for that
  project.

  I feel If nova quota delete also provides option to mention which
  resource to delete(reset to default) then it would be better.

  Thanks,
  Ashish.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478604] [NEW] VPNaaS: openswan process isn't stopped at removing the router from l3 agent

2015-07-27 Thread Hiroyuki Ito
Public bug reported:

When removing a router from an l3 agent, the openswan process for that router isn't
stopped even though the router's network namespace is deleted. I think the process
should be stopped, at least because it leaves abandoned openswan processes behind.

Reproduce procedure:

I found this problem at the following devstack environment:
stack@ubuntu-com1:~/devstack$ git show
commit 9cdde34319feffc7f1e27a4ffea43eae40eb6536

The operation I did is as follows:

1) Create an IPsecSiteConnection resource

The namespace containing the openswan process was as follows:
root@ubuntu-com1:~# ip netns | grep 82174423-af6a-4c0d-b637-d34fa7a6b24b
qrouter-82174423-af6a-4c0d-b637-d34fa7a6b24b
The openswan process on 82174423-af6a-4c0d-b637-d34fa7a6b24b was running like
   the following:
root@ubuntu-com1:~# ps aux | grep ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b
root 5183 0.0 0.0 94072 3992 ? Ss 18:46 0:00 /usr/lib/ipsec/pluto --ctlbase 
/opt/stack/data/neutron/ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b/var/run/p
luto --ipsecdir /opt/stack/data/neutron/ipsec/82174423-af6a-4c0d-b637-d34fa7
a6b24b/etc --use-netkey --uniqueids --nat_traversal --secretsfile /opt/stack
/data/neutron/ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b/etc/ipsec.secrets -
-virtual_private %v4:172.16.200.0/24,%v4:172.16.100.0/24
root 12553 0.0 0.0 11884 2204 pts/18 S+ 23:19 0:00 grep --color=auto ipsec/8
2174423-af6a-4c0d-b637-d34fa7a6b24

2) Remove the router containing the resource from 1) from the l3 agent

I removed 82174423-af6a-4c0d-b637-d34fa7a6b24b from the l3 agent with the neutron
   l3-agent-router-remove CLI.
   The namespaces on the node are as follows:
stack@ubuntu-com1:~$ ip netns | grep 82174423-af6a-4c0d-b637-d34fa7a6b24b
stack@ubuntu-com1:~$

3) Check the processes on the node hosting the l3 agent from 2)

The openswan process was still running like the following:
stack@ubuntu-com1:~$ ps aux | grep 
ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b
root 5183 0.0 0.0 94072 3992 ? Ss 18:46 0:00 /usr/lib/ipsec/pluto --ctlbase 
/opt/stack/data/neutron/ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b/var/run/p
luto --ipsecdir /opt/stack/data/neutron/ipsec/82174423-af6a-4c0d-b637-d34fa7
a6b24b/etc --use-netkey --uniqueids --nat_traversal --secretsfile /opt/stack
/data/neutron/ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b/etc/ipsec.secrets -
-virtual_private %v4:172.16.200.0/24,%v4:172.16.100.0/24
In the vpn agent log, the following error message was output:
2015-07-27 23:20:57.415 ^[[00;32mDEBUG oslo_concurrency.lockutils Releasing 
semaphore iptables-qrouter-82174423-af6a-4c0d-b637-d34fa7a6b24b from (pid=
19216) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutil
s.py:210
2015-07-27 23:20:57.415 ERROR neutron.callbacks.manager Error during notific
ation for neutron_vpnaas.services.vpn.vpn_service.router_removed_actions rou
ter, after_delete
.
2015-07-27 23:20:57.415 TRACE neutron.callbacks.manager Command: ['ip', 'net
ns', 'exec', u'qrouter-82174423-af6a-4c0d-b637-d34fa7a6b24b', 'iptables-save
', '-c']
2015-07-27 23:20:57.415 TRACE neutron.callbacks.manager Exit code: 1
2015-07-27 23:20:57.415 TRACE neutron.callbacks.manager Stdin:
2015-07-27 23:20:57.415 TRACE neutron.callbacks.manager Stdout:
2015-07-27 23:20:57.415 TRACE neutron.callbacks.manager Stderr: Cannot open 
network namespace qrouter-82174423-af6a-4c0d-b637-d34fa7a6b24b: No such fi
le or directory

** Affects: neutron
 Importance: Undecided
 Assignee: Hiroyuki Ito (ito-hiroyuki-01)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hiroyuki Ito (ito-hiroyuki-01)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478604

Title:
  VPNaaS: openswan process isn't stopped at removing the router from l3
  agent

Status in neutron:
  New

Bug description:
  When removing a router from an l3 agent, the openswan process for that router isn't
  stopped even though the router's network namespace is deleted. I think the process
  should be stopped, at least because it leaves abandoned openswan processes behind.

  Reproduce procedure:
  
  I found this problem at the following devstack environment:
  stack@ubuntu-com1:~/devstack$ git show
  commit 9cdde34319feffc7f1e27a4ffea43eae40eb6536

  The operation I did is as follows:

  1) Create an IPsecSiteConnection resource

  The namespace containing the openswan process was as follows:
  root@ubuntu-com1:~# ip netns | grep 82174423-af6a-4c0d-b637-d34fa7a6b24b
  qrouter-82174423-af6a-4c0d-b637-d34fa7a6b24b
  The openswan process on 82174423-af6a-4c0d-b637-d34fa7a6b24b was running like
 the following:
  root@ubuntu-com1:~# ps aux | grep 
ipsec/82174423-af6a-4c0d-b637-d34fa7a6b24b
  root 5183 0.0 0.0 94072 3992 ? Ss 18:46 0:00 

[Yahoo-eng-team] [Bug 1478607] [NEW] libvirt: serial console ports count upper limit needs to be checked

2015-07-27 Thread Markus Zoeller
Public bug reported:

This bug is based on Daniel Berrange's comment on [1].

"There is a limit of 4 serial ports on x86, [...] guest will have
5 consoles and will fail to boot with a QEMU error."

The number of serial ports a guest will have can be influenced by
* the image properties: hw_serial_port_count
* and the flavor extra specs: hw:serial_port_count

The upper limit of 4 serial ports (on x86) is not checked in the code 
but should be. Otherwise it is possible to prevent the boot of the 
guest.
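
A hedged sketch of the kind of guard that could be added (helper name and
exception type are hypothetical; nova would raise one of its own exception
classes):

MAX_ISA_SERIAL_PORTS = 4  # QEMU limit for isa-serial devices on x86

def validate_serial_port_count(num_ports):
    # hypothetical check, e.g. near get_number_of_serial_ports()
    # in nova/virt/hardware.py [3]
    if num_ports > MAX_ISA_SERIAL_PORTS:
        raise ValueError('%d serial ports requested, but the x86 limit '
                         'is %d' % (num_ports, MAX_ISA_SERIAL_PORTS))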

This observation is based on nova master:
183cd889 2015-07-26 Merge remove _rescan_iscsi fr[...]

Steps to reproduce
---
1) setup OpenStack with an libvirt/kvm on x86 compute node
2) set image property hw_serial_port_count to 6
3) launch instance from that image

CLI:

glance image-update cirros-0.3.4-x86_64-uec \
--property hw_serial_port_count=6

nova boot test_serial_port_count --flavor m1.tiny \
--image cirros-0.3.4-x86_64-uec

nova list

Expected result
---
The request fails with an error message that explains that the upper
limit of serial ports on x86 is 4. 

At least the documentation in [3]

- image hw_serial_port_count=6
  VM gets 6 serial ports

can be updated to reflect that limitation.


Actual result
-
The instance gets scheduled on a compute node [2] and ends up in ERROR state.

horizon shows:
No valid host was found. There are not enough hosts available.
Code 500


nova-compute.log shows:
libvirtError: internal error: process exited while connecting to 
monitor: qemu-system-x86_64: -device isa-serial,chardev=charserial4,
id=serial4: Max. supported number of ISA serial ports is 4.

qemu-system-x86_64: -device isa-serial,chardev=charserial4,
id=serial4: Device 'isa-serial' could not be initialized


References
--
[1] https://review.openstack.org/#/c/188058/
[2] Instance's domain XML: http://paste.openstack.org/show/405929/
[3] def get_number_of_serial_ports; 
https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L168

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478607

Title:
  libvirt: serial console ports count upper limit needs to be checked

Status in OpenStack Compute (nova):
  New

Bug description:
  This bug is based on Daniel Berrange's comment on [1].

  "There is a limit of 4 serial ports on x86, [...] guest will have
  5 consoles and will fail to boot with a QEMU error."

  The number of serial ports a guest will have can be influenced by
  * the image properties: hw_serial_port_count
  * and the flavor extra specs: hw:serial_port_count

  The upper limit of 4 serial ports (on x86) is not checked in the code 
  but should be. Otherwise it is possible to prevent the boot of the 
  guest.

  This observation is based on nova master:
  183cd889 2015-07-26 Merge remove _rescan_iscsi fr[...]

  Steps to reproduce
  ---
  1) setup OpenStack with an libvirt/kvm on x86 compute node
  2) set image property hw_serial_port_count to 6
  3) launch instance from that image

  CLI:

  glance image-update cirros-0.3.4-x86_64-uec \
  --property hw_serial_port_count=6

  nova boot test_serial_port_count --flavor m1.tiny \
  --image cirros-0.3.4-x86_64-uec
  
  nova list

  Expected result
  ---
  The request fails with an error message that explains that the upper
  limit of serial ports on x86 is 4. 

  At least the documentation in [3]

  - image hw_serial_port_count=6
VM gets 6 serial ports

  can be updated to reflect that limitation.

  
  Actual result
  -
  The instance gets scheduled on a compute node [2] and ends up in ERROR state.

  horizon shows:
  No valid host was found. There are not enough hosts available.
  Code 500

  
  nova-compute.log shows:
  libvirtError: internal error: process exited while connecting to 
  monitor: qemu-system-x86_64: -device isa-serial,chardev=charserial4,
  id=serial4: Max. supported number of ISA serial ports is 4.
  
  qemu-system-x86_64: -device isa-serial,chardev=charserial4,
  id=serial4: Device 'isa-serial' could not be initialized

  
  References
  --
  [1] https://review.openstack.org/#/c/188058/
  [2] Instance's domain XML: http://paste.openstack.org/show/405929/
  [3] def get_number_of_serial_ports; 
https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L168

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478630] [NEW] nova-compute was forced down due to [Errno 24] too many open files

2015-07-27 Thread JohnsonYi
Public bug reported:

vi /var/log/nova-all.log
<180>Jul 27 10:15:33 node-1 nova-compute Auditing locally available compute
resources
<179>Jul 27 10:15:33 node-1 nova-compute Error during
ComputeManager.update_available_resource: [Errno 24] Too many open files
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/periodic_task.py, line 
198, in run_periodic_tasks
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
task(self, context)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 5963, in 
update_available_resource
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py, line 313, 
in update_available_resource
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
resources = self.driver.get_available_resource(self.nodename)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4939, in 
get_available_resource
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
stats = self.get_host_stats(refresh=True)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 5809, in 
get_host_stats
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
return self.host_state.get_host_stats(refresh=refresh)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 6383, in 
get_host_stats
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
self.update_status()
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 6406, in 
update_status
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
disk_info_dict = self.driver._get_local_gb_info()
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4552, in 
_get_local_gb_info
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
info = LibvirtDriver._get_rbd_driver().get_pool_info()
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py, line 273, in 
get_pool_info
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
with RADOSClient(self) as client:
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py, line 86, in 
__init__
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
self.cluster, self.ioctx = driver._connect_to_rados(pool)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py, line 108, in 
_connect_to_rados
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
conffile=self.ceph_conf)
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/dist-packages/rados.py, line 198, in __init__
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
librados_path = find_library('rados')
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/ctypes/util.py, line 224, in find_library
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
return _findSoname_ldconfig(name) or _get_soname(_findLib_gcc(name))
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task   File 
/usr/lib/python2.7/ctypes/util.py, line 213, in _findSoname_ldconfig
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task f =
os.popen('/sbin/ldconfig -p 2>/dev/null')
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task 
OSError: [Errno 24] Too many open files
2015-07-27 10:15:33.401 12422 TRACE nova.openstack.common.periodic_task
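
As a quick check on an affected node, the descriptor usage of the
nova-compute process can be compared against its limit (pid 12422 in the
log above; standard Linux /proc tooling):

ls /proc/12422/fd | wc -l
grep 'open files' /proc/12422/limits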


Current limit setting:
root@node-1:/tmp# ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386140
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files 

[Yahoo-eng-team] [Bug 1478629] [NEW] test_admin in VersionSingleAppTestCase expects public endpoint in a response

2015-07-27 Thread Alexey Miroshkin
Public bug reported:

In VersionSingleAppTestCase both the test_public and test_admin methods use
a helper method _test_version which expects the public_port config value in
the response. This has no impact on the test result because of bug #1478000:
admin and public endpoints are indistinguishable in test_versions.

** Affects: keystone
 Importance: Undecided
 Assignee: Alexey Miroshkin (amirosh)
 Status: New


** Tags: test-improvement

** Changed in: keystone
 Assignee: (unassigned) => Alexey Miroshkin (amirosh)

** Summary changed:

- test_admin in VersionSingleAppTestCase expects public endpoint
+ test_admin in VersionSingleAppTestCase expects public endpoint in a response

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478629

Title:
  test_admin in VersionSingleAppTestCase expects public endpoint in a
  response

Status in Keystone:
  New

Bug description:
  In VersionSingleAppTestCase both the test_public and test_admin methods
  use a helper method _test_version which expects the public_port config
  value in the response. This has no impact on the test result because of
  bug #1478000: admin and public endpoints are indistinguishable in
  test_versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478652] [NEW] 500 error template does not support custom theme

2015-07-27 Thread Brian Tully
Public bug reported:

The current template for a 500 error does not support a custom
theme/logo. It also contains hardcoded/inline CSS styles and a non-
standard page layout that resembles a broken modal. Ideally this
template should get rewritten to inherit from base.html and be
structured like the 404 and 403 error templates in order to be
consistent as well as support custom theming.
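
A minimal sketch of that structure, modeled on the 404/403 templates (the
exact block names in Horizon's base.html are an assumption here):

{% extends 'base.html' %}
{% load i18n %}
{% block content %}
  <h1>{% trans "500 Internal Server Error" %}</h1>
{% endblock %}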

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: custom-theme error-reporting template

** Attachment added: Current 500 error template
   
https://bugs.launchpad.net/bugs/1478652/+attachment/4434706/+files/500-error-template.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478652

Title:
  500 error template does not support custom theme

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The current template for a 500 error does not support a custom
  theme/logo. It also contains hardcoded/inline CSS styles and a non-
  standard page layout that resembles a broken modal. Ideally this
  template should get rewritten to inherit from base.html and be
  structured like the 404 and 403 error templates in order to be
  consistent as well as support custom theming.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1105488] Re: linuxbridge agent needs ability to use pre-configured physical network bridges (nova-related)

2015-07-27 Thread Li Ma
This feature is related to nova-neutron interaction.

If nova calls the neutron RESTful API, it is difficult to append 'bridge-to-
be-attached' info to the API response. For example, a parameter called
'physical bridge' may be added to the Port object in neutron or to the port-
binding dict.

However, if nova calls neutron over RPC, it is easy to append any info for
nova.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Li Ma (nick-ma-z)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1105488

Title:
  linuxbridge agent needs ability to use pre-configured physical network
  bridges (nova-related)

Status in neutron:
  Triaged
Status in OpenStack Compute (nova):
  New

Bug description:
  The linuxbridge agent currently creates a bridge for each physical
  network used as a flat network, moving any existing IP address from
  the interface to the newly created bridge. This is very helpful in
  some cases, but there are other cases where the ability to use a pre-
  existing bridge is needed. For instance, the same physical network
  might need to be bridged for other purposes, or the agent moving the
  system's IP might not be desired.

  I suggest we add a physical_bridge_mappings configuration variable,
  similar to that used by the openvswitch agent, alongside the current
  physical_interface_mappings variable. When a bridge for a flat network
  is needed, the bridge mappings would be checked first. If a bridge
  mapping for the physical network exists, it would be used. If not, the
  interface mapping would be used and a bridge for the interface would
  be created automatically. Sub-interfaces and bridges for VLAN networks
  would continue to work as they do now, created by the agent using the
  interface mappings.
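
  For illustration, the proposed option next to the existing one (section
  and option names assumed from the linuxbridge agent config of the time):

  [linux_bridge]
  physical_interface_mappings = physnet2:eth2
  # new, consulted first when a bridge for a flat network is needed:
  physical_bridge_mappings = physnet1:br-ex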

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1105488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478656] [NEW] Non-numeric filenames in key_repository will make Keystone explode

2015-07-27 Thread Clint Byrum
Public bug reported:

If one puts any other files in that directory, such as an editor backup,
Keystone will explode on startup or at the next key rotation because it
assumes all filenames will pass int(filename)
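
A hedged sketch of a more tolerant loader (the function shape is
hypothetical; keystone's real code lives in its fernet key utilities):

import os

def _key_files(key_repository):
    # skip anything that is not a purely numeric filename,
    # e.g. editor backups like '0~' or '.0.swp'
    for name in os.listdir(key_repository):
        if name.isdigit():
            yield int(name), os.path.join(key_repository, name)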

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478656

Title:
  Non-numeric filenames in key_repository will make Keystone explode

Status in Keystone:
  New

Bug description:
  If one puts any other files in that directory, such as an editor backup,
  Keystone will explode on startup or at the next key rotation because
  it assumes all filenames will pass int(filename)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461299] Re: Failure on list users when using ldap domain configuration from database

2015-07-27 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.config
   Status: Fix Committed => Fix Released

** Changed in: oslo.config
 Milestone: None => 2.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461299

Title:
  Failure on list users when using ldap domain configuration from
  database

Status in Keystone:
  New
Status in oslo.config:
  Fix Released

Bug description:
  When having a setup with domain_specific_drivers_enabled set to true,
  and a domain configured with an ldap backend and configuration stored in
  the database, the keystone user list API fails with the following
  error:

  openstack user list --domain domainX
  ERROR: openstack An unexpected error prevented the server from fulfilling 
your request: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead. (Disable 
debug mode to suppress these details.) (HTTP 500)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474933] Re: Nova compute interprets rabbitmq passwords

2015-07-27 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.messaging
   Status: Fix Committed => Fix Released

** Changed in: oslo.messaging
 Milestone: None => 2.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474933

Title:
  Nova compute interprets rabbitmq passwords

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Fix Released

Bug description:
  Using the kilo rpms - openstack-nova-compute-2015.1.0-3.el7.noarch

  If the rabbit_password (set in the [Default] section - this is how the
  Ansible role I am using sets it) includes a slash character (/),
  then the service fails to start.

  In the log - /var/log/nova/nova-compute.log
  the following error is seen:-

  CRITICAL nova [req-72c0fe29-f2d6-4164-95de-e9e8f50fa7bc - - - - -]
  ValueError: invalid literal for int() with base 10: 'prefix'

  where prefix is the first part of the password - ie
rabbit_password = 'prefix/suffix'

  Traceback enclosed below.

  If the Rabbit password is changed to not include a / then the service
  starts up OK
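
  One hedged workaround, assuming the password ends up embedded in an AMQP
  URL that kombu parses (which the traceback below suggests), may be to
  percent-encode the slash:

  rabbit_password = prefix%2Fsuffix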

  This could have security implications, but I am not currently flagging
  it as a security issue

  2015-07-15 16:28:50.824 9670 TRACE nova Traceback (most recent call last):
  2015-07-15 16:28:50.824 9670 TRACE nova   File /usr/bin/nova-compute, line 
10, in module
  2015-07-15 16:28:50.824 9670 TRACE nova sys.exit(main())
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/cmd/compute.py, line 72, in main
  2015-07-15 16:28:50.824 9670 TRACE nova 
db_allowed=CONF.conductor.use_local)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 277, in create
  2015-07-15 16:28:50.824 9670 TRACE nova db_allowed=db_allowed)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 157, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova 
self.conductor_api.wait_until_ready(context.get_admin_context())
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/conductor/api.py, line 292, in 
wait_until_ready
  2015-07-15 16:28:50.824 9670 TRACE nova timeout=timeout)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/baserpc.py, line 62, in ping
  2015-07-15 16:28:50.824 9670 TRACE nova return cctxt.call(context, 
'ping', arg=arg_p)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py, line 156, in 
call
  2015-07-15 16:28:50.824 9670 TRACE nova retry=self.retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/transport.py, line 90, in 
_send
  2015-07-15 16:28:50.824 9670 TRACE nova timeout=timeout, retry=retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
350, in send
  2015-07-15 16:28:50.824 9670 TRACE nova retry=retry)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
312, in _send
  2015-07-15 16:28:50.824 9670 TRACE nova msg.update({'_reply_q': 
self._get_reply_q()})
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
283, in _get_reply_q
  2015-07-15 16:28:50.824 9670 TRACE nova conn = 
self._get_connection(rpc_amqp.PURPOSE_LISTEN)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
274, in _get_connection
  2015-07-15 16:28:50.824 9670 TRACE nova purpose=purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py, line 121, 
in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova self.connection = 
connection_pool.create(purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py, line 93, in 
create
  2015-07-15 16:28:50.824 9670 TRACE nova return 
self.connection_cls(self.conf, self.url, purpose)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py, line 
664, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova 
heartbeat=self.driver_conf.heartbeat_timeout_threshold)
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/kombu/connection.py, line 180, in __init__
  2015-07-15 16:28:50.824 9670 TRACE nova params.update(parse_url(hostname))
  2015-07-15 16:28:50.824 9670 TRACE nova   File 
/usr/lib/python2.7/site-packages/kombu/utils/url.py, line 34, in parse_url
  2015-07-15 16:28:50.824 9670 TRACE nova scheme, host, port, user, 
password, path, query = _parse_url(url)
  2015-07-15 

[Yahoo-eng-team] [Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-27 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.log
   Status: Fix Committed = Fix Released

** Changed in: oslo.log
 Milestone: None => 1.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

Status in ubuntu-cloud-archive:
  Confirmed
Status in ubuntu-cloud-archive icehouse series:
  In Progress
Status in ubuntu-cloud-archive juno series:
  In Progress
Status in ubuntu-cloud-archive kilo series:
  Confirmed
Status in ubuntu-cloud-archive liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in oslo.log:
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Committed
Status in python-oslo.log source package in Trusty:
  Invalid
Status in nova source package in Utopic:
  Won't Fix
Status in python-oslo.log source package in Utopic:
  Invalid
Status in nova source package in Vivid:
  Invalid
Status in python-oslo.log source package in Vivid:
  Confirmed
Status in nova source package in Wily:
  Invalid
Status in python-oslo.log source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * If Nova services are configured to log to syslog (use_syslog=True) they
 will currently fail with ECONNREFUSED if they cannot connect to syslog.
 This patch adds support for allowing nova to retry connecting a 
 configurable number of times before printing an error message and
 continuing with startup.

  [Test Case]

   * Configure nova with use_syslog=True in nova.conf, stop the rsyslog service
 and restart the nova services. Check the upstart nova logs to see the
 retries occurring, then start rsyslog and observe the connection succeed
 and nova-compute start up.

  [Regression Potential]

   * None

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478702] [NEW] Unable to clear device ID for port 'None'

2015-07-27 Thread Matt Riedemann
Public bug reported:

I'm seeing this trace in an ironic job but it shows up in other jobs as
well:

http://logs.openstack.org/75/190675/2/check/gate-tempest-dsvm-ironic-
pxe_ssh-full-
nv/2c65f3f/logs/screen-n-cpu.txt.gz#_2015-07-26_00_36_47_257

2015-07-26 00:36:47.257 ERROR nova.network.neutronv2.api 
[req-57d4e9e6-adf1-4774-a27a-63d096fe48e6 tempest-ServersTestJSON-1332826451 
tempest-ServersTestJSON-2014105270] Unable to clear device ID for port 'None'
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api Traceback (most 
recent call last):
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/opt/stack/new/nova/nova/network/neutronv2/api.py, line 365, in _unbind_ports
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
port_client.update_port(port_id, port_req_body)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
102, in with_params
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api ret = 
self.function(instance, *args, **kwargs)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
549, in update_port
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api return 
self.put(self.port_path % (port), body=body)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
302, in put
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
headers=headers, params=params)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
270, in retry_request
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
headers=headers, params=params)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
211, in do_request
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
self._handle_fault_response(status_code, replybody)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 
185, in _handle_fault_response
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
exception_handler_v20(status_code, des_error_body)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py, line 83, 
in exception_handler_v20
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
message=message)
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
NeutronClientException: 404 Not Found
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api The resource 
could not be found.
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api
2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hYmxlIHRvIGNsZWFyIGRldmljZSBJRCBmb3IgcG9ydCAnTm9uZSdcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzODAzMTMwMzgzNX0=

This affects master and stable/kilo, where we added the preserve pre-existing
ports handling in the neutron v2 API in nova.

My guess is this happens in the deallocate_for_instance call and the
port_id in the requested_networks dict is None, but we don't filter
those out properly.
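
A hedged sketch of the kind of filtering meant here (the loop shape is
assumed; the real code is _unbind_ports in nova/network/neutronv2/api.py):

# skip entries whose port_id was never populated, instead of calling
# update_port(None, ...) which 404s as seen above
for port_id in (p for p in ports if p is not None):
    port_client.update_port(port_id, port_req_body)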

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Confirmed


** Tags: kilo-backport-potential network neutron

** Tags added: network neutron

** Tags added: kilo-backport-potential

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478702

Title:
  Unable to clear device ID for port 'None'

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  I'm seeing this trace in an ironic job but it shows up in other jobs
  as well:

  http://logs.openstack.org/75/190675/2/check/gate-tempest-dsvm-ironic-
  pxe_ssh-full-
  nv/2c65f3f/logs/screen-n-cpu.txt.gz#_2015-07-26_00_36_47_257

  2015-07-26 00:36:47.257 ERROR nova.network.neutronv2.api 
[req-57d4e9e6-adf1-4774-a27a-63d096fe48e6 tempest-ServersTestJSON-1332826451 
tempest-ServersTestJSON-2014105270] Unable to clear device ID for port 'None'
  2015-07-26 00:36:47.257 20871 

[Yahoo-eng-team] [Bug 1477142] Re: Remove redundant commas

2015-07-27 Thread Ian Cordasco
This more or less boils down to future-proofing. If we need to expand
the parameters included in these messages, it reduces the visual noise
in the diff.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1477142

Title:
  Remove redundant commas

Status in Glance:
  Invalid

Bug description:
  In the glance source, there is a redundant comma in front of a parenthesis.
  For example, in the code below:

  def _get_sort_key(self, req):
      """Parse a sort key query param from the request object."""
      sort_key = req.params.get('sort_key', 'created_at')
      if sort_key is not None and sort_key not in SUPPORTED_SORT_KEYS:
          _keys = ', '.join(SUPPORTED_SORT_KEYS)
          msg = _("Unsupported sort_key. Acceptable values: %s") % (_keys,)
          raise exc.HTTPBadRequest(explanation=msg)
      return sort_key

  In line six, the comma after _keys is redundant, so we should
  remove it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1477142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478717] [NEW] Collect dashboard app specific code in static/app/

2015-07-27 Thread Tyr Johanson
Public bug reported:

The application openstack_dashboard has code that lives in two main
locations:

1) horizon/openstack_dashboard/dashboards/ - this is code that is
specific to a particular dashboard

2) horizon/openstack_dashboard/static/app/ - this is code specific to
this application, but needed by two or more dashboards

This bug moves the Angular code that semantically belongs to the
application into the openstack_dashboard/static/app/ location.
** Affects: horizon
 Importance: Undecided
 Assignee: Tyr Johanson (tyr-6)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478717

Title:
  Collect dashboard app specific code in static/app/

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The application openstack_dashboard has code that lives in two main
  locations:

  1) horizon/openstack_dashboard/dashboards/ - this is code that is
  specific to a particular dashboard

  2) horizon/openstack_dashboard/static/app/ - this is code specific to
  this application, but needed by two or more dashboards

  This bug moves the Angular code that semantically belongs to the
  application into the openstack_dashboard/static/app/ location.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478466] Re: Apache2 fail to start

2015-07-27 Thread Dolph Mathews
Was something else already running on port 5000, perhaps?
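
One way to check (standard Linux tooling; on Fedora an SELinux denial can
also produce this 'Permission denied' on bind):

# is something already bound to the keystone ports?
ss -tlnp | grep -E ':(5000|35357)'
# recent SELinux denials involving httpd, if any
ausearch -m avc -ts recent | grep httpd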

** Project changed: keystone => packstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478466

Title:
  Apache2 fail to start

Status in Packstack:
  New

Bug description:
  Using OpenStack Juno rdo-release-juno-1.noarch.rpm.
  Installation Method: packstack on fedora 20 x64
  Error: Execution of '/sbin/service httpd start' returned 1
  When checking the httpd/apache2 status, it shows the following error:
  (13)Permission denied: AH00072: make_sock: could not bind to address ...:5000

  Solution: Open /etc/httpd/conf and comment out ports 5000 and 35357.
  Close the file and restart the service. It will start running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/packstack/+bug/1478466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478690] Re: Request ID has a double req- at the start

2015-07-27 Thread Steve McLellan
This appears in other services too - glance --debug image-list, for
instance:

HTTP/1.1 200 OK
date: Mon, 27 Jul 2015 23:17:37 GMT
connection: keep-alive
content-type: application/json; charset=UTF-8
content-length: 1642
x-openstack-request-id: req-req-1de6f809-da0f-4ca3-b262-4f206cd4700d
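
A hedged sketch of the kind of guard that would avoid double-prefixing
(function name hypothetical; the real prefixing happens in the
request-id middleware):

def ensure_request_id(raw_id):
    # some callers already hand over an ID carrying the 'req-' prefix;
    # only add it when missing
    return raw_id if raw_id.startswith('req-') else 'req-' + raw_id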


** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1478690

Title:
  Request ID has a double req- at the start

Status in Glance:
  New
Status in OpenStack Search (Searchlight):
  New

Bug description:
  ➜  vagrant git:(master) http http://192.168.121.242:9393/v1/search
X-Auth-Token:$token query:='{"match_all" : {}}'
  HTTP/1.1 200 OK
  Content-Length: 138
  Content-Type: application/json; charset=UTF-8
  Date: Mon, 27 Jul 2015 20:21:31 GMT
  X-Openstack-Request-Id: req-req-0314bf5b-9c04-4bed-bf86-d2e76d297a34

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1478690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478710] [NEW] two launch instances showing up in network topo map

2015-07-27 Thread Eric Peterson
Public bug reported:

We have disabled old launch instance, and enabled the new launch
instance.

When we access the network topo page, we see two launch instance
buttons.
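
For context, this is with the usual toggles in local_settings.py (setting
names as used by Horizon at the time):

LAUNCH_INSTANCE_LEGACY_ENABLED = False
LAUNCH_INSTANCE_NG_ENABLED = True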

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478710

Title:
  two launch instances showing up in network topo map

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We have disabled old launch instance, and enabled the new launch
  instance.

  When we access the network topo page, we see two launch instance
  buttons.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478714] [NEW] Users table should display whether the user is an admin

2015-07-27 Thread Ishita Mandhan
Public bug reported:

In the horizon UI, right now, in order to find out if a user is an admin,
the user has to go to Projects and click on manage members to
find out who has admin privileges. Adding an admin column to the Users
table would be useful.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478714

Title:
  Users table should display whether the user is an admin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the horizon UI, right now, in order to find out if a user is an
  admin, the user has to go to Projects and click on manage members to
  find out who has admin privileges. Adding an admin column to the
  Users table would be useful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478730] [NEW] Webroot theme fails because it lacks all scss variables

2015-07-27 Thread Diana Whitten
Public bug reported:

When setting the theme to 'webroot', Horizon breaks because webroot
doesn't have any style associated with it.

** Affects: horizon
 Importance: High
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478730

Title:
  Webroot theme fails because it lacks all scss variables

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When setting the theme to 'webroot', Horizon breaks because webroot
  doesn't have any style associated with it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478731] [NEW] magic search test adding URL ?status=shutdown

2015-07-27 Thread Tyr Johanson
Public bug reported:

Running Jasmine tests in browser
http://localhost:8000/jasmine/ServicesTests

URL changes to
http://localhost:8000/jasmine/ServicesTests?status=shutdown

When the page is reloaded using the new URL, some magic search tests now
fail.

Refactor the tests to use a mock $window.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478731

Title:
  magic search test adding URL ?status=shutdown

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Running Jasmine tests in browser
  http://localhost:8000/jasmine/ServicesTests

  URL changes to
  http://localhost:8000/jasmine/ServicesTests?status=shutdown

  When the page is reloaded using the new URL, some magic search tests
  now fail.

  Refactor the tests to use a mock $window.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469260] Re: Custom vendor data causes cloud-init failure on 0.7.5

2015-07-27 Thread Felipe Reyes
Utopic is already EOL, so I'm marking it as Invalid

** Changed in: cloud-init (Ubuntu Utopic)
   Status: New => Invalid

** Tags added: sts

** Tags added: openstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1469260

Title:
  Custom vendor data causes cloud-init failure on 0.7.5

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Utopic:
  Invalid

Bug description:
  I encountered this issue when adding custom vendor data via nova-
  compute. Originally the bug manifested as SSH host key generation
  failing to fire when vendor data was present (example vendor data
  below).

  {"msg": "", "uuid": "4996e2b67d2941818646481453de1efe", "users":
  [{"username": "erhudy", "sshPublicKeys": [], "uuid": "erhudy"}],
  "name": "TestTenant"}

  I launched a volume-backed instance, waited for it to fail, then
  terminated it and mounted its root volume to examine the logs. What I
  found was that cloud-init was failing to process vendor-data into MIME
  multipart (note the absence of the line that indicates that cloud-init
  is writing vendor-data.txt.i):

  2015-06-25 21:41:02,178 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instance/obj.pkl - wb: [256] 9751 bytes
  2015-06-25 21:41:02,178 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/user-data.txt - 
wb: [384] 0 bytes
  2015-06-25 21:41:02,184 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/user-data.txt.i - 
wb: [384] 345 bytes
  2015-06-25 21:41:02,185 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/vendor-data.txt - 
wb: [384] 234 bytes
  2015-06-25 21:41:02,185 - util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)

  After following the call chain all the way down, I found the
  problematic code in user_data.py:

  # Converts a raw string into a mime message
  def convert_string(raw_data, headers=None):
      if not raw_data:
          raw_data = ''
      if not headers:
          headers = {}
      data = util.decomp_gzip(raw_data)
      if "mime-version:" in data[0:4096].lower():
          msg = email.message_from_string(data)
          for (key, val) in headers.iteritems():
              _replace_header(msg, key, val)
      else:
          mtype = headers.get(CONTENT_TYPE, NOT_MULTIPART_TYPE)
          maintype, subtype = mtype.split("/", 1)
          msg = MIMEBase(maintype, subtype, *headers)
          msg.set_payload(data)
      return msg

  raw_data in the failing case is a dictionary rather than the expected
  string, so slicing into data raises a "TypeError: unhashable type"
  exception.

  I think this bug was fixed after a fashion in 0.7.7, where the call to
  util.decomp_gzip() is now wrapped by util.decode_binary(), which
  appears to always return a string.
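
  As a hedged illustration (standalone Python, not cloud-init code; the
  to_text helper below is made up), the failure and the effect of always
  decoding to a string first look like this:

    import json

    vendor_data = {"msg": "", "name": "TestTenant"}
    try:
        # convert_string() effectively does this; hashing the slice
        # object as a dict key raises "TypeError: unhashable type".
        vendor_data[0:4096]
    except TypeError as exc:
        print(exc)

    def to_text(raw_data):
        # Hypothetical coercion, analogous in spirit to the 0.7.7
        # util.decode_binary() change: always return a string.
        if isinstance(raw_data, dict):
            return json.dumps(raw_data)
        if isinstance(raw_data, bytes):
            return raw_data.decode("utf-8")
        return raw_data or ''

    data = to_text(vendor_data)
    print("mime-version:" in data[0:4096].lower())  # False -> non-MIME branch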

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1469260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478503] [NEW] test_admin_version_v3 actually tests public app

2015-07-27 Thread Alexey Miroshkin
Public bug reported:

VersionTestCase.test_admin_version_v3
(keystone/tests/unit/test_versions.py) in fact tests public app:

def test_admin_version_v3(self):
    client = tests.TestClient(self.public_app)

It makes sense only in the case of a V3 eventlet setup where the public
app handles both endpoints, but I believe that should be tested by a
separate test like test_admin_version_v3_eventlets, which will be
introduced as part of the fix for bug #1381961. Also, this behavior was
introduced when the 2-apps setup was used for eventlet.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478503

Title:
  test_admin_version_v3 actually tests public app

Status in Keystone:
  New

Bug description:
  VersionTestCase.test_admin_version_v3
  (keystone/tests/unit/test_versions.py) in fact tests public app:

  def test_admin_version_v3(self):
      client = tests.TestClient(self.public_app)

  It makes sense only in the case of a V3 eventlet setup where the
  public app handles both endpoints, but I believe that should be tested
  by a separate test like test_admin_version_v3_eventlets, which will be
  introduced as part of the fix for bug #1381961. Also, this behavior
  was introduced when the 2-apps setup was used for eventlet.
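
  A minimal sketch of the corrected check, assuming the test base class
  builds an admin application the same way it builds self.public_app
  (the admin_app attribute name follows the existing pattern and is an
  assumption):

    def test_admin_version_v3(self):
        # Exercise the admin app so a regression there can no longer
        # hide behind the public app.
        client = tests.TestClient(self.admin_app)
        resp = client.get('/v3/')
        self.assertEqual(200, resp.status_int)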

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478504] [NEW] test_admin_version_v3 actually tests public app

2015-07-27 Thread Alexey Miroshkin
Public bug reported:

VersionTestCase.test_admin_version_v3
(keystone/tests/unit/test_versions.py) in fact tests public app:

def test_admin_version_v3(self):
    client = tests.TestClient(self.public_app)

It makes sense only in the case of a V3 eventlet setup where the public
app handles both endpoints, but I believe that should be tested by a
separate test like test_admin_version_v3_eventlets, which will be
introduced as part of the fix for bug #1381961. Also, this behavior was
introduced when the 2-apps setup was used for eventlet.

** Affects: keystone
 Importance: Undecided
 Assignee: Alexey Miroshkin (amirosh)
 Status: New


** Tags: test-improvement

** Changed in: keystone
 Assignee: (unassigned) = Alexey Miroshkin (amirosh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478504

Title:
  test_admin_version_v3 actually tests public app

Status in Keystone:
  New

Bug description:
  VersionTestCase.test_admin_version_v3
  (keystone/tests/unit/test_versions.py) in fact tests public app:

  def test_admin_version_v3(self):
      client = tests.TestClient(self.public_app)

  It makes sense only in the case of a V3 eventlet setup where the
  public app handles both endpoints, but I believe that should be tested
  by a separate test like test_admin_version_v3_eventlets, which will be
  introduced as part of the fix for bug #1381961. Also, this behavior
  was introduced when the 2-apps setup was used for eventlet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478778] [NEW] VPNaas: strongswan: cannot add more than one subnet to ipsec

2015-07-27 Thread hanumanth jerbandi
Public bug reported:

I used this patch (VPNaaS: Fedora support for StrongSwan) for vpnaas on
centos, referring to this bug:
https://bugs.launchpad.net/neutron/+bug/1441788

1. I used a single node with 2 routers and created
ike/ipsec/vpn-service/site vpn; the tunnels came up fine.
kilo-vpnaas-centos71


10.10.10.x/24 --- R1 ----- R2 --- 20.20.20.x/24

R1 to R2 on 192.168.122.202, 192.168.122.203.

2. When I added one more interface to r1 and r2 (30.30.30.x and
40.40.40.x respectively) and created ike/ipsec/vpn-service/site-vpn, it
did not create a new conn in the ipsec.conf file; rather, it overwrote
the existing (10.10.10.x) conn.

[root@ceos71 ~]# cat /var/lib/neutron/ipsec/70e88c46-c6b2-4c8d-afad-76ebd77b55cb/etc/strongswan/ipsec.conf

# Configuration for vpn10
config setup

conn %default
    ikelifetime=60m
    keylife=20m
    rekeymargin=3m
    keyingtries=1
    authby=psk
    mobike=no

conn 221c6d37-e7a1-4afc-8d0f-4de32df3818b   ### this is for 10.10.10.x
    keyexchange=ikev2
    left=192.168.122.202
    leftsubnet=10.10.10.0/24
    leftid=192.168.122.202
    leftfirewall=yes
    right=192.168.122.203
    rightsubnet=20.20.20.0/24
    rightid=192.168.122.203
    auto=route

### added 1 more subnet 30.30.30.x

[root@ceos71 ~]# cat /var/lib/neutron/ipsec/70e88c46-c6b2-4c8d-afad-76ebd77b55cb/etc/strongswan/ipsec.conf

# Configuration for vpn30
config setup

conn %default
    ikelifetime=60m
    keylife=20m
    rekeymargin=3m
    keyingtries=1
    authby=psk
    mobike=no

conn 7b57fc83-3581-4e86-a193-e14474eef295   ### this is for 30.30.30.x; it overwrote the 10.10.10.x conn
    keyexchange=ikev2
    left=192.168.122.202
    leftsubnet=30.30.30.0/24
    leftid=192.168.122.202
    leftfirewall=yes
    right=192.168.122.203
    rightsubnet=40.40.40.0/24
    rightid=192.168.122.203
    auto=route

3. My understanding is that it should add a new conn to the ipsec.conf
file rather than overwrite the existing conn. Am I right?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478778

Title:
  VPNaas: strongswan: cannot add more than one subnet to ipsec

Status in neutron:
  New

Bug description:
  I used this patch (VPNaaS: Fedora support for StrongSwan) for vpnaas
  on centos, referring to this bug:
  https://bugs.launchpad.net/neutron/+bug/1441788

  1. I used a single node with 2 routers and created
  ike/ipsec/vpn-service/site vpn; the tunnels came up fine.
  kilo-vpnaas-centos71


  10.10.10.x/24 --- R1 ----- R2 --- 20.20.20.x/24

  R1 to R2 on 192.168.122.202, 192.168.122.203.

  2. When I added one more interface to r1 and r2 (30.30.30.x and
  40.40.40.x respectively) and created ike/ipsec/vpn-service/site-vpn,
  it did not create a new conn in the ipsec.conf file; rather, it
  overwrote the existing (10.10.10.x) conn.

  [root@ceos71 ~]# cat /var/lib/neutron/ipsec/70e88c46-c6b2-4c8d-afad-76ebd77b55cb/etc/strongswan/ipsec.conf

  # Configuration for vpn10
  config setup

  conn %default
      ikelifetime=60m
      keylife=20m
      rekeymargin=3m
      keyingtries=1
      authby=psk
      mobike=no

  conn 221c6d37-e7a1-4afc-8d0f-4de32df3818b   ### this is for 10.10.10.x
      keyexchange=ikev2
      left=192.168.122.202
      leftsubnet=10.10.10.0/24
      leftid=192.168.122.202
      leftfirewall=yes
      right=192.168.122.203
      rightsubnet=20.20.20.0/24
      rightid=192.168.122.203
      auto=route

  ### added 1 more subnet 30.30.30.x

  [root@ceos71 ~]# cat /var/lib/neutron/ipsec/70e88c46-c6b2-4c8d-afad-76ebd77b55cb/etc/strongswan/ipsec.conf

  # Configuration for vpn30
  config setup

  conn %default
      ikelifetime=60m
      keylife=20m
      rekeymargin=3m
      keyingtries=1
      authby=psk
      mobike=no

  conn 7b57fc83-3581-4e86-a193-e14474eef295   ### this is for 30.30.30.x; it overwrote the 10.10.10.x conn
      keyexchange=ikev2
      left=192.168.122.202
      leftsubnet=30.30.30.0/24
      leftid=192.168.122.202
      leftfirewall=yes
      right=192.168.122.203
      rightsubnet=40.40.40.0/24
      rightid=192.168.122.203
      auto=route

  3. My understanding is that it should add a new conn to the ipsec.conf
  file rather than overwrite the existing conn (see the sketch below).
  Am I right?
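
  For clarity, the expected result would presumably keep both conn
  sections side by side in the same ipsec.conf, roughly like this
  (parameters abbreviated to the subnets that differ):

    conn 221c6d37-e7a1-4afc-8d0f-4de32df3818b
        leftsubnet=10.10.10.0/24
        rightsubnet=20.20.20.0/24
        ...

    conn 7b57fc83-3581-4e86-a193-e14474eef295
        leftsubnet=30.30.30.0/24
        rightsubnet=40.40.40.0/24
        ...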

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447945] Re: check-tempest-dsvm-postgres-full fails with mismatch_error

2015-07-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447945

Title:
  check-tempest-dsvm-postgres-full fails with mismatch_error

Status in OpenStack Compute (nova):
  Expired

Bug description:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_terminate_instance
  failed with following stack trace

  
--
  2015-04-23 18:28:42.950 | 
  2015-04-23 18:28:42.950 | Captured traceback:
  2015-04-23 18:28:42.950 | ~~~
  2015-04-23 18:28:42.950 | Traceback (most recent call last):
  2015-04-23 18:28:42.950 |   File 
tempest/thirdparty/boto/test_ec2_instance_run.py, line 216, in 
test_run_terminate_instance
  2015-04-23 18:28:42.950 | self.assertInstanceStateWait(instance, 
'_GONE')
  2015-04-23 18:28:42.950 |   File tempest/thirdparty/boto/test.py, line 
373, in assertInstanceStateWait
  2015-04-23 18:28:42.950 | state = self.waitInstanceState(lfunction, 
wait_for)
  2015-04-23 18:28:42.950 |   File tempest/thirdparty/boto/test.py, line 
358, in waitInstanceState
  2015-04-23 18:28:42.951 | self.valid_instance_state)
  2015-04-23 18:28:42.951 |   File tempest/thirdparty/boto/test.py, line 
349, in state_wait_gone
  2015-04-23 18:28:42.951 | self.assertIn(state, valid_set | 
self.gone_set)
  2015-04-23 18:28:42.951 |   File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 356, in assertIn
  2015-04-23 18:28:42.951 | self.assertThat(haystack, Contains(needle), 
message)
  2015-04-23 18:28:42.951 |   File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  2015-04-23 18:28:42.951 | raise mismatch_error
  2015-04-23 18:28:42.951 | testtools.matchers._impl.MismatchError: 
u'error' not in set(['terminated', 'paused', 'stopped', 'running', 'stopping', 
'shutting-down', 'pending', '_GONE'])

  Logs: http://logs.openstack.org/38/145738/11/check/check-tempest-dsvm-
  postgres-full/fd21577/console.html#_2015-04-23_18_28_42_950

  http://logs.openstack.org/38/145738/11/check/check-tempest-dsvm-
  postgres-full/fd1a680/console.html#_2015-04-23_15_02_51_607

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478450] [NEW] cannot nova reboot vm

2015-07-27 Thread hiyonger-ZTE_TECS
Public bug reported:

I have 10 VMs and run nova reboot on all 10 of them. After 285 reboots,
nova reboot no longer works. The detailed information:

nova list:
+--------------------------------------+-------------------------------------------+--------+----------------+-------------+--------------------+
| ID                                   | Name                                      | Status | Task State     | Power State | Networks           |
+--------------------------------------+-------------------------------------------+--------+----------------+-------------+--------------------+
| 26d74bba-2e9a-4594-b6a1-1b27f337759b | test-26d74bba-2e9a-4594-b6a1-1b27f337759b | REBOOT | reboot_started | Running     | net01=192.168.0.8  |
| 7d129f4d-0af7-4ea8-9de5-3483cdcb7a68 | test-7d129f4d-0af7-4ea8-9de5-3483cdcb7a68 | REBOOT | reboot_started | Running     | net01=192.168.0.13 |
| 8d2e097b-e478-4a7f-b06d-7b578dfaf7c0 | test-8d2e097b-e478-4a7f-b06d-7b578dfaf7c0 | REBOOT | reboot_started | Running     | net01=192.168.0.9  |
| a2d264f0-c1d0-449c-9ba3-32041247490c | test-a2d264f0-c1d0-449c-9ba3-32041247490c | REBOOT | reboot_started | Running     | net01=192.168.0.12 |
| a3d9448d-25d8-4c01-925c-825a19164970 | test-a3d9448d-25d8-4c01-925c-825a19164970 | REBOOT | reboot_started | Running     | net01=192.168.0.11 |
| a993222f-4e41-483e-8e91-300be0d525a4 | test-a993222f-4e41-483e-8e91-300be0d525a4 | REBOOT | reboot_started | Running     | net01=192.168.0.5  |
| ace0736a-0371-4a3d-8a7f-525e299a924b | test-ace0736a-0371-4a3d-8a7f-525e299a924b | REBOOT | reboot_started | Running     | net01=192.168.0.6  |
| e5515b31-8edb-4558-bb9f-f6f8b1142db2 | test-e5515b31-8edb-4558-bb9f-f6f8b1142db2 | REBOOT | reboot_started | Running     | net01=192.168.0.4  |
| f380346a-bc24-4d63-a6e0-e36b5a508a59 | test-f380346a-bc24-4d63-a6e0-e36b5a508a59 | REBOOT | reboot_started | Running     | net01=192.168.0.10 |
| fc396066-7ec6-4fe9-b7bf-70f162e293fe | test-fc396066-7ec6-4fe9-b7bf-70f162e293fe | REBOOT | reboot_started | Running     | net01=192.168.0.7  |
+--------------------------------------+-------------------------------------------+--------+----------------+-------------+--------------------+

The nova-compute.log:

2015-07-22 14:51:43.808 15940 WARNING nova.openstack.common.deadlock_monitor 
[-] service function nova.servicegroup.drivers.db._report_state is deadlocked!
2015-07-22 14:51:43.984 15940 WARNING nova.openstack.common.deadlock_monitor 
[-] deadlock traceback is ['  File 
/usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py, line 
78, in _inner\nself.f(*self.args, **self.kw)\n', '  File 
/usr/lib/python2.7/site-packages/nova/openstack/common/deadlock_monitor.py, 
line 58, in __deco\nret = func(*args, **kwargs)\n', '  File 
/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py, line 116, 
in _report_state\nservice.service_ref, state_catalog)\n', '  File 
/usr/lib/python2.7/site-packages/nova/conductor/api.py, line 218, in 
service_update\nreturn self._manager.service_update(context, service, 
values)\n', '  File 
/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py, line 330, in 
service_update\nservice=service_p, values=values)\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py, line 150, in 
call\nwait_for_reply=True, timeout=timeout)\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/transport.py, line 90, in 
_send\ntimeout=timeout)\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
412, in send\nreturn self._send(target, ctxt, message, wait_for_reply, 
timeout)\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
400, in _send\nconn.topic_send(topic, msg, timeout=timeout)\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqp.py, line 147, 
in __exit__\nself._done()\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqp.py, line 136, 
in _done\nself.connection.reset()\n', '  File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py, line 
656, in reset\nself.channel.close()\n', '  File 
/usr/lib/python2.7/site-packages/amqp/channel.py, line 163, in close\n
(20, 41),  # Channel.close_ok\n', '  File 
/usr/lib/python2.7/site-packages/amqp/abstract_channel.py, line 69, in wait\n 
   self.channel_id, allowed_methods)\n', '  File 
/usr/lib/python2.7/site-packages/amqp/connection.py, line 204, in 
_wait_method\nself.method_reader.read_method()\n', '  File 
/usr/lib/python2.7/site-packages/amqp/method_framing.py, line 189, in 
read_method\nself._next_method()\n', '  File 
/usr/lib/python2.7/site-packages/amqp/method_framing.py, line 112, in 
_next_method\nframe_type, channel, payload = read_frame()\n', '  File 
/usr/lib/python2.7/site-packages/amqp/transport.py, line 147, in read_frame\n 
   frame_type, channel, size = unpack(\'BHI\', read(7, 

[Yahoo-eng-team] [Bug 1324496] Re: shared firewall policies and rules are not displayed in horizon

2015-07-27 Thread Shivakumar M
*** This bug is a duplicate of bug 1294541 ***
https://bugs.launchpad.net/bugs/1294541

** This bug has been marked a duplicate of bug 1294541
   shared firewall policies can't be displayed in horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1324496

Title:
  shared firewall policies and rules are not displayed in horizon

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  This bug is an extension to
  https://bugs.launchpad.net/neutron/+bug/1323322

  As a normal user, shared firewall policies and rules created by the
  admin are listed by the CLI commands, but they are not visible in the
  Horizon UI.

  Steps to reproduce:

  As admin:
  Create a firewall rule and mark it as shared.
  Create a firewall policy and mark it as shared.

  Now, as a normal user:
  Try CLI commands:
  neutron firewall-rule-list
  neutron firewall-policy-list

  It will list all the policies and rules which are shared also.

  Now log in to Horizon as a normal user:
  in the Firewall panel,
  nothing is displayed under firewall policies and firewall rules.

  Expected Results:
  Shared firewall policies and rules should be listed, as in the CLI,
  and modification of them should be disabled for all users other than
  admin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1324496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477912] Re: size still exists when creating an image with an incomplete parameter in api v1

2015-07-27 Thread wangxiyuan
** Changed in: glance
   Status: New = Invalid

** Changed in: glance
 Assignee: wangxiyuan (wangxiyuan) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1477912

Title:
  size still exists when creating an image with an incomplete parameter
  in api v1

Status in Glance:
  Invalid

Bug description:
  version: Glance master

  Reproduce:
  1. glance image-create --file  --name test (in my env, the image file's
  size is 642580480 B)
     Then an error is raised: 400 Bad Request: Disk format is not specified.
     (HTTP 400)

  2. glance image-list
  list information:

  ID  | Name | Disk Format | Container Format |   Size   | Status
  
  xxx | test | |  |642580480 | queued

  expected: 'size' should not be left in the list.

  ID  | Name | Disk Format | Container Format |   Size   | Status
  
  xxx | test | |  |  | queued

  There is no bug in API v2; this only occurs in API v1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1477912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478471] [NEW] neutron can't use internal nova-api url for notification with v3 auth

2015-07-27 Thread Zsolt Krenák
Public bug reported:

Hi!

When the v3 auth plugin is used for notifying nova of changes, the
config parameter nova_url is ignored and neutron uses the public
endpoint automatically. If internalURL and publicURL are not the same,
and publicURL is not accessible to internal services, then the
notification will fail and so will VM creation. I looked for a config
parameter to change this behavior, but couldn't find any.

OS: Ubuntu 14.04
Neutron version: 1:2015.1.0-0ubuntu1~cloud0

my nova endpoints:
+--+-+
| Field| Value   |
+--+-+
| adminurl | http://192.168.56.10:8774/v2/%(tenant_id)s  |
| enabled  | True|
| id   | 3706c3fe985c4219a145a7ec83c14955|
| internalurl  | http://192.168.56.10:8774/v2/%(tenant_id)s  |
| publicurl| https://192.168.55.10:8774/v2/%(tenant_id)s |
| region   | labor   |
| service_id   | ebc286c9356449819d2c7a7a5fbd1c77|
| service_name | nova|
| service_type | compute |
+--+-+

partial log:
2015-07-23 11:31:06.021 24522 DEBUG keystoneclient.auth.identity.v3 [-] Making 
authentication request to http://192.168.56.10:35357/v3/auth/tokens 
get_auth_ref 
/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v3.py:125
2015-07-23 11:31:06.385 24522 DEBUG keystoneclient.session [-] REQ: curl -g -i 
-X POST 
https://192.168.55.10:8774/v2/d239fb491e944da39e430174dc5fd33e/os-server-external-events
 -H User-Agent: python-novaclient -H Content-Type: applicat$
2015-07-23 11:31:06.443 24522 ERROR neutron.notifiers.nova [-] Failed to notify 
nova on events: [{'status': 'completed', 'tag': 
u'2a6ce600-1c57-4f91-a17e-1623adf9e1ee', 'name': 'network-vif-plugged', 
'server_uuid': u'f77b2756-99ed-4259-b$
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova Traceback (most 
recent call last):
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py, line 243, in 
send_events
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova batched_events)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/novaclient/v2/contrib/server_external_events.py,
 line 39, in create
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova return_raw=True)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/novaclient/base.py, line 152, in _create
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova _resp, body = 
self.api.client.post(url, body=body)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py, line 170, in post
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova return 
self.request(url, 'POST', **kwargs)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/novaclient/client.py, line 89, in request
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova **kwargs)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py, line 200, in 
request
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py, line 89, in 
request
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova return 
self.session.request(url, method, **kwargs)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/keystoneclient/utils.py, line 318, in inner
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova return 
func(*args, **kwargs)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/keystoneclient/session.py, line 374, in 
request
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova resp = 
send(**kwargs)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova   File 
/usr/lib/python2.7/dist-packages/keystoneclient/session.py, line 411, in 
_send_request
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova raise 
exceptions.SSLError(msg)
2015-07-23 11:31:06.443 24522 TRACE neutron.notifiers.nova SSLError: SSL 
exception connecting to 
https://192.168.55.10:8774/v2/d239fb491e944da39e430174dc5fd33e/os-server-external-events


The log shows an SSL exception, but that's just because my public
endpoint is SSL protected; Neutron should use the internal endpoint
instead.
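
Whether the Kilo notifier exposes a knob for this is exactly the open
question here. As a hedged illustration only (credentials below are
placeholders), novaclient itself can be told which catalog interface to
use, so a fix could look roughly like this:

    from keystoneclient.auth.identity import v3
    from keystoneclient import session
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='http://192.168.56.10:35357/v3',
                       username='neutron', password='secret',
                       project_name='services',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    # endpoint_type selects which endpoint of the catalog entry is used;
    # 'internalURL' would pick http://192.168.56.10:8774/... from the
    # table above instead of the SSL-protected public URL.
    nova = nova_client.Client('2', session=sess,
                              region_name='labor',
                              endpoint_type='internalURL')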

** Affects: neutron
 Importance: Undecided
  

[Yahoo-eng-team] [Bug 1478181] Re: there is something wrong in the note below function _validate_ip_address

2015-07-27 Thread Kevin Benton
The reason that note is there and the reason that function is written
that way is because those types of address inputs only work on some
systems. We want neutron input to be consistent on all systems.
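
A quick standalone snippet (illustrative inputs only) shows the platform
dependence: netaddr can fall back to inet_aton-style parsing, which
accepts shorthand forms on some systems and rejects them on others:

    import netaddr

    for s in ('4', '192.168.1', '1' * 59):
        try:
            print('%s -> %s' % (s[:12], netaddr.IPAddress(s)))
        except netaddr.core.AddrFormatError as exc:
            print('%s -> %s' % (s[:12], exc))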

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478181

Title:
  there is something wrong in the note below function
  _validate_ip_address

Status in neutron:
  Invalid

Bug description:
  https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py
  the note below the function _validate_ip_address
  on line 199:
  #netaddr.IPAddress('1' * 59)
  does not give the right result
  #   IPAddress('199.28.113.199')
  but raises an error like this:
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/local/lib/python2.7/dist-packages/netaddr/ip/__init__.py", line 306, in __init__
      'address from %r' % addr)
  netaddr.core.AddrFormatError: failed to detect a valid IP address from '111'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478466] [NEW] Apache2 fails to start

2015-07-27 Thread Khayam Gondal
Public bug reported:

Using OpenStack Juno rdo-release-juno-1.noarch.rpm.
Installation method: packstack on Fedora 20 x64
Error: Execution of '/sbin/service httpd start' returned 1
When checking the httpd/apache2 status, it shows the following error:
(13)Permission denied: AH00072: make_sock: could not bind to address ...:5000

Solution: Open /etc/httpd/conf, comment out ports 5000 and 35357, close
the file, and restart the service. It will start running.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: apache2 httpd keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478466

Title:
  Apache2 fails to start

Status in Keystone:
  New

Bug description:
  Using OpenStack Juno rdo-release-juno-1.noarch.rpm.
  Installation method: packstack on Fedora 20 x64
  Error: Execution of '/sbin/service httpd start' returned 1
  When checking the httpd/apache2 status, it shows the following error:
  (13)Permission denied: AH00072: make_sock: could not bind to address ...:5000

  Solution: Open /etc/httpd/conf, comment out ports 5000 and 35357,
  close the file, and restart the service. It will start running.
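
  As a hedged illustration of that workaround, the directives being
  commented out would look like this (which file under /etc/httpd
  declares them varies by packaging, so treat the exact location as an
  assumption):

    #Listen 5000
    #Listen 35357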

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449546] Re: Neutron-LB Health monitor association not listed in Horizon Dashboard

2015-07-27 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1398754 ***
https://bugs.launchpad.net/bugs/1398754

** This bug has been marked a duplicate of bug 1398754
   LBaas v1 Associate Monitor to Pool Fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449546

Title:
  Neutron-LB Health monitor association not listed in Horizon Dashboard

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  In the LB Pool Horizon dashboard,
  LB Pool -> Edit Pool -> Associate Monitor,

  it is expected that all the available health monitors will be listed,
  but the list box is empty.

  Please find the attached screen shot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp