[Yahoo-eng-team] [Bug 1213126] Re: attaching volume to instance fails with IO error

2014-08-26 Thread Kashyap Chamarthy
Closing this bug per comment #3.

Please reopen it (with more verbose details) if you encounter it again.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213126

Title:
  attaching volume to instance fails with IO error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2013-08-16 10:06:05.315 ERROR root [-] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1038, in attach_volume\n    virt_dom.attachDeviceFlags(conf.to_xml(), flags)\n', '  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit\n    result = proxy_call(self._autowrap, f, *args, **kwargs)\n', '  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call\n    rv = execute(f, *args, **kwargs)\n', '  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker\n    rv = meth(*args, **kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 420, in attachDeviceFlags\n    if ret == -1: raise libvirtError(\'virDomainAttachDeviceFlags() failed\', dom=self)\n', 'libvirtError: End of file while reading data: Input/output error\n']
  2013-08-16 10:06:05.316 ERROR nova.compute.manager [req-db59ccfd-b546-40fd-8447-03d546725caa admin demo] [instance: f1f87ce0-0833-4476-8f71-74870b5e7068] Failed to attach volume 079b6295-8433-444f-bf8f-c013d65ae634 at /dev/vdc
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068] Traceback (most recent call last):
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/compute/manager.py", line 3465, in _attach_volume
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     mountpoint)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1051, in attach_volume
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     disk_dev)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 992, in volume_driver_method
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     return method(connection_info, *args, **kwargs)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in inner
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     return f(*args, **kwargs)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/volume.py", line 308, in disconnect_volume
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     devices = self.connection.get_all_block_devices()
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2666, in get_all_block_devices
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     for dom_id in self.list_instance_ids():
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 694, in list_instance_ids
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     if self._conn.numOfDomains() == 0:
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068] AttributeError: 'NoneType' object has no attribute 'numOfDomains'
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]
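The final frame shows that self._conn is None, i.e. the libvirt connection had already been lost before the cleanup path ran. A defensive sketch (hypothetical, not the actual fix) would fail fast with a clear error instead of an AttributeError:

```python
def list_instance_ids(conn):
    """Guard against a dropped libvirt connection (conn may be None)."""
    if conn is None:
        raise RuntimeError("libvirt connection is unavailable; "
                           "cannot list domains")
    if conn.numOfDomains() == 0:
        return []
    return conn.listDomainsID()

try:
    list_instance_ids(None)
except RuntimeError as exc:
    print(exc)  # clear diagnostic instead of AttributeError
```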

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-08-26 Thread Nikhil Manchanda
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
Milestone: None => juno-3

** Changed in: trove
 Assignee: (unassigned) => Nikhil Manchanda (slicknik)

** Changed in: trove
   Importance: Undecided => Critical

** Changed in: trove
   Importance: Critical => High

** Changed in: trove
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  Triaged
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Committed
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Neutron:
  In Progress
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in Openstack Database (Trove):
  Triaged
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
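Hash-seed randomization bites tests that assume a stable iteration order for dicts and sets. A minimal illustration (the test data here is made up, not from any of the affected projects): comparing against a fixed ordering is fragile, while sorting first or comparing as sets is seed-independent.

```python
# Iteration order of a set of strings depends on PYTHONHASHSEED, so a
# test like `assert list(names) == [...]` can pass under one seed and
# fail under another.
names = {"barbican", "cinder", "nova"}

# Seed-independent comparisons: sort first, or compare order-insensitively.
assert sorted(names) == ["barbican", "cinder", "nova"]
assert names == {"nova", "cinder", "barbican"}
```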

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions



[Yahoo-eng-team] [Bug 1361517] [NEW] Nova prompts wrong message when booting from an error-status volume

2014-08-26 Thread zhu zhu
Public bug reported:

1. Create a volume from an existing image:

cinder create 2 --display-name hbvolume-newone --image-id 9769cbfe-2d1a-4f60-9806-16810c666d7f

2. Set the created volume to error status:
cinder reset-state --state error 76f5e521-d45f-4675-851e-48f8e3a3f039

3. Boot a VM from the created volume:
nova boot --flavor 2 --block-device-mapping vda=76f5e521-d45f-4675-851e-48f8e3a3f039:::0 device-mapping-test2 --nic net-id=231eb787-e5bf-4e65-a822-25d37a84eab8

# cinder list
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| 21c50923-7341-49ba-af48-f4a7e2099bfd | available |       None      |  1   |     None    |  false   |             |
| 76f5e521-d45f-4675-851e-48f8e3a3f039 |   error   |    hbvolume-2   |  2   |     None    |   true   |             |
| 92de3c7f-9c56-447a-b06a-a5c3bdfca683 | available | hbvolume-newone |  2   |     None    |   true   |             |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

#RESULTS
It reports that it failed to get the volume:
ERROR (BadRequest): Block Device Mapping is Invalid: failed to get volume 76f5e521-d45f-4675-851e-48f8e3a3f039. (HTTP 400)

#Expected Message:
Report that the status of the volume is not valid for booting a VM.
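A hypothetical shape for the improved check (function name, data shapes, and message are illustrative, not Nova's actual code): validate the volume status up front so the user sees why the boot failed rather than a generic lookup error.

```python
def check_boot_volume(volume):
    """Reject volumes whose status makes them unusable as a boot device."""
    if volume["status"] != "available":
        raise ValueError(
            "volume %s has status '%s'; it must be 'available' to boot from"
            % (volume["id"], volume["status"]))

vol = {"id": "76f5e521-d45f-4675-851e-48f8e3a3f039", "status": "error"}
try:
    check_boot_volume(vol)
except ValueError as exc:
    print(exc)  # message names the volume's actual status
```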

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361517

Title:
  Nova prompts wrong message when booting from an error-status volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Create a volume from an existing image:

  cinder create 2 --display-name hbvolume-newone --image-id 9769cbfe-2d1a-4f60-9806-16810c666d7f

  2. Set the created volume to error status:
  cinder reset-state --state error 76f5e521-d45f-4675-851e-48f8e3a3f039

  3. Boot a VM from the created volume:
  nova boot --flavor 2 --block-device-mapping vda=76f5e521-d45f-4675-851e-48f8e3a3f039:::0 device-mapping-test2 --nic net-id=231eb787-e5bf-4e65-a822-25d37a84eab8

  # cinder list
  +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
  | 21c50923-7341-49ba-af48-f4a7e2099bfd | available |       None      |  1   |     None    |  false   |             |
  | 76f5e521-d45f-4675-851e-48f8e3a3f039 |   error   |    hbvolume-2   |  2   |     None    |   true   |             |
  | 92de3c7f-9c56-447a-b06a-a5c3bdfca683 | available | hbvolume-newone |  2   |     None    |   true   |             |
  +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

  #RESULTS
  It reports that it failed to get the volume:
  ERROR (BadRequest): Block Device Mapping is Invalid: failed to get volume 76f5e521-d45f-4675-851e-48f8e3a3f039. (HTTP 400)

  #Expected Message:
  Report that the status of the volume is not valid for booting a VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361517/+subscriptions



[Yahoo-eng-team] [Bug 1361529] [NEW] Configurable instance detail tabs

2014-08-26 Thread Shuichiro MAKIGAKI
Public bug reported:

For example, VMware environment doesn't support the console log feature.
In that case, usability will improve if the console log tab is hidden.
This should be configurable by local_settings.py:
---
# Available values are 'OverviewTab', 'LogTab', 'ConsoleTab'
HORIZON_CONFIG['instance_detail_tabs'] = ('ConsoleTab', 'OverviewTab')
---

** Affects: horizon
 Importance: Undecided
 Assignee: Shuichiro MAKIGAKI (shuichiro-makigaki)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Shuichiro MAKIGAKI (shuichiro-makigaki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361529

Title:
  Configurable instance detail tabs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For example, VMware environment doesn't support the console log feature.
  In that case, usability will improve if the console log tab is hidden.
  This should be configurable by local_settings.py:
  ---
  # Available values are 'OverviewTab', 'LogTab', 'ConsoleTab'
  HORIZON_CONFIG['instance_detail_tabs'] = ('ConsoleTab', 'OverviewTab')
  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361529/+subscriptions



[Yahoo-eng-team] [Bug 1361540] [NEW] no mechanism driver calls for gateway port removal

2014-08-26 Thread Cédric OLLIVIER
Public bug reported:

MechanismDriver.delete_port_* (gateway port) is not called when the router is being removed.
For instance, the network:router_gateway ports remain in OpenDaylight because its mechanism driver is not called correctly.

To reproduce it:
- create a router and set the gateway
- delete this router (without clearing the gateway)

It works correctly if the gateway is cleared before the router is deleted.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: icehouse-backport-potential l3-ipam-dhcp ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361540

Title:
  no mechanism driver calls for gateway port removal

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  MechanismDriver.delete_port_* (gateway port) is not called when the router is being removed.
  For instance, the network:router_gateway ports remain in OpenDaylight because its mechanism driver is not called correctly.

  To reproduce it:
  - create a router and set the gateway
  - delete this router (without clearing the gateway)

  It works correctly if the gateway is cleared before the router is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361540/+subscriptions



[Yahoo-eng-team] [Bug 1361545] [NEW] dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

2014-08-26 Thread John Schwarz
Public bug reported:

The enable_isolated_metadata = True option tells DHCP agents that a neutron-ns-metadata-proxy process should be spawned for each network under their care, regardless of whether the network is isolated.
This is fine for isolated networks (networks with no routers and no default gateways). But for networks connected to a router (for which the L3 agent spawns a separate neutron-ns-metadata-proxy attached to the router's namespace), two different metadata proxies are spawned. For these networks, the static route that tells each instance where to reach the metadata proxy is not pushed, so the proxy spawned by the DHCP agent is left unused.

The DHCP agent should know if the network it handles is isolated or not,
and for non-isolated networks, no neutron-ns-metadata-proxy processes
should spawn.
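One way to express the isolation check the report asks for (a sketch with simplified, made-up data shapes, not the actual agent code): a network counts as isolated when none of its subnets is attached to a router port.

```python
def is_isolated(subnets, router_ports):
    """Return True when no subnet of the network is reachable via a router.

    `subnets` and `router_ports` are simplified stand-ins for the
    structures the DHCP agent actually sees.
    """
    subnet_ids = {s["id"] for s in subnets}
    routed_ids = {fip["subnet_id"]
                  for p in router_ports
                  for fip in p["fixed_ips"]}
    return not (subnet_ids & routed_ids)

subnets = [{"id": "subnet-1"}]
assert is_isolated(subnets, [])  # no router ports: spawn the proxy
assert not is_isolated(subnets, [{"fixed_ips": [{"subnet_id": "subnet-1"}]}])
```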

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361545

Title:
  dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The enable_isolated_metadata = True option tells DHCP agents that a neutron-ns-metadata-proxy process should be spawned for each network under their care, regardless of whether the network is isolated.
  This is fine for isolated networks (networks with no routers and no default gateways). But for networks connected to a router (for which the L3 agent spawns a separate neutron-ns-metadata-proxy attached to the router's namespace), two different metadata proxies are spawned. For these networks, the static route that tells each instance where to reach the metadata proxy is not pushed, so the proxy spawned by the DHCP agent is left unused.

  The DHCP agent should know if the network it handles is isolated or
  not, and for non-isolated networks, no neutron-ns-metadata-proxy
  processes should spawn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361545/+subscriptions



[Yahoo-eng-team] [Bug 1361554] [NEW] Missing sort_key and sort_dir for server list api

2014-08-26 Thread Liyingjun
Public bug reported:

Since the compute API now supports sort_dir and sort_key [1], we may
need to add these to the REST API.

[1]:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1790
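If the REST layer exposed these parameters, a client request might look like the following (the parameter names match the compute API; the endpoint is a placeholder, and Python 3's urllib is used for brevity):

```python
from urllib.parse import urlencode

# Build the query string a client would send to list servers sorted
# by creation time, newest first.
params = urlencode({"sort_key": "created_at", "sort_dir": "desc"})
url = "http://nova.example.com/v2/servers?" + params
print(url)
```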

** Affects: nova
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361554

Title:
  Missing sort_key and sort_dir for server list api

Status in OpenStack Compute (Nova):
  New

Bug description:
  Since the compute api now supporting sort_dir and sort_key [1], we may
  need to add this to the REST api.

  [1]:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1790

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361554/+subscriptions



[Yahoo-eng-team] [Bug 1361542] [NEW] neutron-l3-agent does not start without IPv6

2014-08-26 Thread Bernhard M. Wiedemann
Public bug reported:

When testing on a one-node-cloud that had ipv6 blacklisted, I found that 
neutron-l3-agent does not start
because it errors out when it tries to access 
/proc/sys/net/ipv6/conf/default/disable_ipv6

2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/service.py", line 269, in create
2014-08-26 10:12:57.987 29609 TRACE neutron     periodic_fuzzy_delay=periodic_fuzzy_delay)
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/service.py", line 202, in __init__
2014-08-26 10:12:57.987 29609 TRACE neutron     self.manager = manager_class(host=host, *args, **kwargs)
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py", line 916, in __init__
2014-08-26 10:12:57.987 29609 TRACE neutron     super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf)
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py", line 230, in __init__
2014-08-26 10:12:57.987 29609 TRACE neutron     self.use_ipv6 = ipv6_utils.is_enabled()
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/common/ipv6_utils.py", line 50, in is_enabled
2014-08-26 10:12:57.987 29609 TRACE neutron     with open(disabled_ipv6_path, 'r') as f:
2014-08-26 10:12:57.987 29609 TRACE neutron IOError: [Errno 2] No such file or directory: '/proc/sys/net/ipv6/conf/default/disable_ipv6'
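A possible defensive rewrite of is_enabled() (a sketch only, not necessarily the patch that eventually merged): treat a missing /proc entry as "IPv6 disabled" instead of letting the IOError propagate. The path parameter is added here purely so the behavior can be demonstrated.

```python
import os

def ipv6_is_enabled(path="/proc/sys/net/ipv6/conf/default/disable_ipv6"):
    """Return False when the kernel exposes no IPv6 support at all."""
    if not os.path.exists(path):
        # IPv6 blacklisted or compiled out: the proc file does not exist.
        return False
    with open(path) as f:
        return f.read().strip() == "0"

# On a host with IPv6 blacklisted the proc file is absent:
assert ipv6_is_enabled("/nonexistent/disable_ipv6") is False
```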

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361542

Title:
  neutron-l3-agent does not start without IPv6

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When testing on a one-node-cloud that had ipv6 blacklisted, I found that 
neutron-l3-agent does not start
  because it errors out when it tries to access 
/proc/sys/net/ipv6/conf/default/disable_ipv6

  2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/service.py", line 269, in create
  2014-08-26 10:12:57.987 29609 TRACE neutron     periodic_fuzzy_delay=periodic_fuzzy_delay)
  2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/service.py", line 202, in __init__
  2014-08-26 10:12:57.987 29609 TRACE neutron     self.manager = manager_class(host=host, *args, **kwargs)
  2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py", line 916, in __init__
  2014-08-26 10:12:57.987 29609 TRACE neutron     super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf)
  2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py", line 230, in __init__
  2014-08-26 10:12:57.987 29609 TRACE neutron     self.use_ipv6 = ipv6_utils.is_enabled()
  2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/common/ipv6_utils.py", line 50, in is_enabled
  2014-08-26 10:12:57.987 29609 TRACE neutron     with open(disabled_ipv6_path, 'r') as f:
  2014-08-26 10:12:57.987 29609 TRACE neutron IOError: [Errno 2] No such file or directory: '/proc/sys/net/ipv6/conf/default/disable_ipv6'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361542/+subscriptions



[Yahoo-eng-team] [Bug 1361573] [NEW] TAP_PREFIX_LEN constant is defined several times

2014-08-26 Thread Rossella Sblendido
Public bug reported:

The TAP_PREFIX_LEN constant is defined several times; it should be defined
only once, preferably in neutron/common/constants.py.

From a coarse grep:

grep -r TAP . | grep 3

./neutron/plugins/brocade/NeutronPlugin.py:TAP_PREFIX_LEN = 3
./neutron/plugins/linuxbridge/lb_neutron_plugin.py:TAP_PREFIX_LEN = 3
./neutron/plugins/ml2/rpc.py:TAP_DEVICE_PREFIX_LENGTH = 3
./neutron/plugins/mlnx/rpc_callbacks.py:TAP_PREFIX_LEN = 3
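The consolidation could be as small as deriving the constant from the device-name prefix in neutron/common/constants.py (illustrative; the final names are up to review):

```python
# neutron/common/constants.py (proposed)
TAP_DEVICE_PREFIX = 'tap'
TAP_PREFIX_LEN = len(TAP_DEVICE_PREFIX)

# Plugins then import it instead of redefining 3 locally, e.g.:
#   from neutron.common import constants
#   device_name[constants.TAP_PREFIX_LEN:]
assert TAP_PREFIX_LEN == 3
```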

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361573

Title:
  TAP_PREFIX_LEN constant is defined several times

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The TAP_PREFIX_LEN constant is defined several times; it should be defined
  only once, preferably in neutron/common/constants.py.

  From a coarse grep:

  grep -r TAP . | grep 3

  ./neutron/plugins/brocade/NeutronPlugin.py:TAP_PREFIX_LEN = 3
  ./neutron/plugins/linuxbridge/lb_neutron_plugin.py:TAP_PREFIX_LEN = 3
  ./neutron/plugins/ml2/rpc.py:TAP_DEVICE_PREFIX_LENGTH = 3
  ./neutron/plugins/mlnx/rpc_callbacks.py:TAP_PREFIX_LEN = 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361573/+subscriptions



[Yahoo-eng-team] [Bug 1361605] [NEW] Use lazy init for L3 plugin reference

2014-08-26 Thread Paul Michali
Public bug reported:

In many L3 plugins, there is a reference needed to the L3 core plugin.
This is typically done as:

plugin =
manager.NeutronManager.get_service_plugins().get(constants.L3_ROUTER_NAT)

Rather than looking up the plugin each time it is needed (e.g.
processing each VPN API request), this bug proposes to do a lazy init of
the plugin, as in:

    @property
    def l3_plugin(self):
        try:
            return self._l3_plugin
        except AttributeError:
            self._l3_plugin = manager.NeutronManager.get_service_plugins().get(
                constants.L3_ROUTER_NAT)
            return self._l3_plugin

In addition, we can look at placing this in a common area (mixin?) or as
a decorator, so that each class that needs it could use the mixin,
rather than repeat this property.
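The memoized property generalizes naturally into a mixin. A minimal stand-alone sketch of that idea (the lookup method here is a stand-in for manager.NeutronManager.get_service_plugins().get(constants.L3_ROUTER_NAT)):

```python
class L3PluginMixin(object):
    """Cache the L3 plugin lookup on first access."""

    def _lookup_l3_plugin(self):
        # Stand-in for the NeutronManager call; subclasses override it.
        raise NotImplementedError

    @property
    def l3_plugin(self):
        try:
            return self._l3_plugin
        except AttributeError:
            self._l3_plugin = self._lookup_l3_plugin()
            return self._l3_plugin


class FakeService(L3PluginMixin):
    lookups = 0

    def _lookup_l3_plugin(self):
        FakeService.lookups += 1
        return "l3-plugin"


svc = FakeService()
assert svc.l3_plugin == svc.l3_plugin == "l3-plugin"
assert FakeService.lookups == 1  # the expensive lookup ran exactly once
```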

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361605

Title:
  Use lazy init for L3 plugin reference

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In many L3 plugins, there is a reference needed to the L3 core plugin.
  This is typically done as:

  plugin =
  manager.NeutronManager.get_service_plugins().get(constants.L3_ROUTER_NAT)

  Rather than looking up the plugin each time it is needed (e.g.
  processing each VPN API request), this bug proposes to do a lazy init
  of the plugin, as in:

      @property
      def l3_plugin(self):
          try:
              return self._l3_plugin
          except AttributeError:
              self._l3_plugin = manager.NeutronManager.get_service_plugins().get(
                  constants.L3_ROUTER_NAT)
              return self._l3_plugin

  In addition, we can look at placing this in a common area (mixin?) or
  as a decorator, so that each class that needs it could use the mixin,
  rather than repeat this property.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361605/+subscriptions



[Yahoo-eng-team] [Bug 1361611] [NEW] console/virt stop returning arbitrary dicts in driver API

2014-08-26 Thread sahid
Public bug reported:

There is a general desire to stop returning / passing arbitrary dicts
in the virt driver API. This report proposes creating typed objects for
consoles that drivers will use to return values to the compute manager.
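By way of illustration only (this is not Nova's eventual design), a typed console object replaces the untyped dict, so the compute manager can rely on named, validated attributes:

```python
class SerialConsole(object):
    """Typed replacement for an ad-hoc {'host': ..., 'port': ...} dict."""

    def __init__(self, host, port):
        self.host = host
        self.port = int(port)  # normalize: drivers may hand back strings

console = SerialConsole("127.0.0.1", "10000")
assert (console.host, console.port) == ("127.0.0.1", 10000)
```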

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: api virt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361611

Title:
  console/virt stop returning arbitrary dicts in driver API

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a general desire to stop returning / passing arbitrary dicts
  in the virt driver API. This report proposes creating typed objects for
  consoles that drivers will use to return values to the compute manager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361611/+subscriptions



[Yahoo-eng-team] [Bug 1361613] [NEW] auth fragments deprecated - sample.conf/authentication.rst doc need updating.

2014-08-26 Thread Andy McCrae
Public bug reported:

The auth_port, auth_protocol and auth_host variables are deprecated in
favour of identity_uri:

2014-08-26 11:13:43.764 8009 WARNING keystonemiddleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.

Sample confs and the authentication.rst doc need to be updated to reflect
this.
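In the sample configs the replacement would look roughly like this (host and port are placeholders):

```
[keystone_authtoken]
# Deprecated fragments, to be dropped from the samples:
# auth_host = 127.0.0.1
# auth_port = 35357
# auth_protocol = http
identity_uri = http://127.0.0.1:35357
```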

** Affects: glance
 Importance: Undecided
 Assignee: Andy McCrae (andrew-mccrae)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Andy McCrae (andrew-mccrae)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361613

Title:
  auth fragments deprecated - sample.conf/authentication.rst doc need
  updating.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The auth_port, auth_protocol and auth_host variables are deprecated in
  favour of identity_uri:

  2014-08-26 11:13:43.764 8009 WARNING keystonemiddleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.

  Sample confs and the authentication.rst doc need to be updated to reflect
  this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1361613/+subscriptions



[Yahoo-eng-team] [Bug 1322597] Re: Unable to update image members

2014-08-26 Thread Santiago Baldassin
** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1322597

Title:
  Unable to update image members

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Glance API let us update the image members, we should expose that
  functionality in Horizon

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1322597/+subscriptions



[Yahoo-eng-team] [Bug 1361631] [NEW] Do not query datetime type field when it is not needed

2014-08-26 Thread Attila Fazekas
Public bug reported:

creating a datetime object is more expensive then any other type used in
the database.

Creating the datetime object is expensive especially for mysql drivers,
because creating the object from a datetime string representation is
expensive.

When listing 4k instances with details without the volumes_extension,
approximately 2 second spent in the mysql driver, which spent 1 second
for parsing the datetime (DateTime_or_None).

The datetime format is only useful when you are intended to present the
time for an end user, for the system the float or integer
representations are more efficient.

* consider changing the store type to float or int
* exclude the datetime fields from the query when it will not be part of an api 
response
* remove the datetime fields from the database where it is is not really needed.
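A rough measurement of the cost difference the report describes (absolute timings vary by machine; the timestamp strings are illustrative):

```python
import timeit

# Cost of parsing a datetime string into a datetime object ...
dt_cost = timeit.timeit(
    "datetime.strptime('2014-08-26 10:06:05', '%Y-%m-%d %H:%M:%S')",
    setup="from datetime import datetime", number=10000)

# ... versus parsing the equivalent epoch value as a float.
float_cost = timeit.timeit("float('1409047565.316')", number=10000)

# datetime parsing is typically far slower than float conversion.
print(dt_cost / float_cost)
```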

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361631

Title:
  Do not query datetime type field when it is not needed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Creating a datetime object is more expensive than any other type used
  in the database.

  Creating the datetime object is especially expensive for MySQL
  drivers, because creating the object from a datetime string
  representation is expensive.

  When listing 4k instances with details without the volumes extension,
  approximately 2 seconds are spent in the MySQL driver, of which 1
  second is spent parsing datetimes (DateTime_or_None).

  The datetime format is only useful when presenting the time to an end
  user; for the system, float or integer representations are more
  efficient.

  * consider changing the store type to float or int
  * exclude the datetime fields from the query when they will not be part of an api response
  * remove the datetime fields from the database where they are not really needed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361666] [NEW] Calendar stays open after date selected

2014-08-26 Thread Randy Bertram
Public bug reported:

Go to Admin > System > Overview.
Click on the first date field; the calendar appears.
Click a date in the calendar. The calendar stays open.
Look in the JavaScript console; there is this exception: 
Uncaught TypeError: Cannot read property 'valueOf' of undefined

This bug occurs when updating to the 1.3.1 version of bootstrap-
datepicker, which provides Bootstrap 3 support. Updating the datepicker
is in https://review.openstack.org/#/c/116866/

** Affects: horizon
 Importance: Undecided
 Assignee: Randy Bertram (rbertram)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Randy Bertram (rbertram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361666

Title:
  Calendar stays open after date selected

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Go to Admin > System > Overview.
  Click on the first date field; the calendar appears.
  Click a date in the calendar. The calendar stays open.
  Look in the JavaScript console; there is this exception: 
  Uncaught TypeError: Cannot read property 'valueOf' of undefined

  This bug occurs when updating to the 1.3.1 version of bootstrap-
  datepicker, which provides Bootstrap 3 support. Updating the
  datepicker is in https://review.openstack.org/#/c/116866/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357453] Re: Resource tracker should create compute node record in constructor

2014-08-26 Thread Sylvain Bauza
** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357453

Title:
  Resource tracker should create compute node record in constructor

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Currently, the resource tracker lazily-creates the compute node record
  in the database (via a call to the conductor's compute_node_create()
  API call) during calls to update_available_resource():

  ```
  @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE)
  def update_available_resource(self, context):
      """Override in-memory calculations of compute node resource usage
      based on data audited from the hypervisor layer.

      Add in resource claims in progress to account for operations that have
      declared a need for resources, but not necessarily retrieved them from
      the hypervisor layer yet.
      """
      LOG.audit(_("Auditing locally available compute resources"))
      resources = self.driver.get_available_resource(self.nodename)

      if not resources:
          # The virt driver does not support this function
          LOG.audit(_("Virt driver does not support "
                      "'get_available_resource'  Compute tracking is disabled."))
          self.compute_node = None
          return
      resources['host_ip'] = CONF.my_ip

      self._verify_resources(resources)

      self._report_hypervisor_resource_view(resources)

      if 'pci_passthrough_devices' in resources:
          if not self.pci_tracker:
              self.pci_tracker = pci_manager.PciDevTracker()
          self.pci_tracker.set_hvdevs(jsonutils.loads(resources.pop(
              'pci_passthrough_devices')))

      # Grab all instances assigned to this node:
      instances = objects.InstanceList.get_by_host_and_node(
          context, self.host, self.nodename)

      # Now calculate usage based on instance utilization:
      self._update_usage_from_instances(resources, instances)

      # Grab all in-progress migrations:
      capi = self.conductor_api
      migrations = capi.migration_get_in_progress_by_host_and_node(context,
          self.host, self.nodename)

      self._update_usage_from_migrations(context, resources,
          migrations)

      # Detect and account for orphaned instances that may exist on the
      # hypervisor, but are not in the DB:
      orphans = self._find_orphaned_instances()
      self._update_usage_from_orphans(resources, orphans)

      # NOTE(yjiang5): Because pci device tracker status is not cleared in
      # this periodic task, and also because the resource tracker is not
      # notified when instances are deleted, we need remove all usages
      # from deleted instances.
      if self.pci_tracker:
          self.pci_tracker.clean_usage(instances, migrations, orphans)
          resources['pci_stats'] = jsonutils.dumps(self.pci_tracker.stats)
      else:
          resources['pci_stats'] = jsonutils.dumps([])

      self._report_final_resource_view(resources)

      metrics = self._get_host_metrics(context, self.nodename)
      resources['metrics'] = jsonutils.dumps(metrics)
      self._sync_compute_node(context, resources)

  def _sync_compute_node(self, context, resources):
      """Create or update the compute node DB record."""
      if not self.compute_node:
          # we need a copy of the ComputeNode record:
          service = self._get_service(context)
          if not service:
              # no service record, disable resource
              return

          compute_node_refs = service['compute_node']
          if compute_node_refs:
              for cn in compute_node_refs:
                  if cn.get('hypervisor_hostname') == self.nodename:
                      self.compute_node = cn
                      if self.pci_tracker:
                          self.pci_tracker.set_compute_node_id(cn['id'])
                      break

      if not self.compute_node:
          # Need to create the ComputeNode record:
          resources['service_id'] = service['id']
          self._create(context, resources)
          if self.pci_tracker:
              self.pci_tracker.set_compute_node_id(self.compute_node['id'])
          LOG.info(_('Compute_service record created for %(host)s:%(node)s')
                   % {'host': self.host, 'node': self.nodename})

      else:
          # just update the record:
          self._update(context, resources)
          LOG.info(_('Compute_service record updated for %(host)s:%(node)s')
                   % {'host': self.host, 'node': self.nodename})

  def 
  ```
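The proposal in the title can be sketched as follows. This is illustrative pseudocode against a fake DB layer (the `get_or_create_compute_node` helper is an assumption, not actual nova code): the compute node record is created or looked up eagerly in the constructor, so later audits never need a lazy sync branch.

```python
class ResourceTracker:
    """Sketch: eager compute-node creation in the constructor."""

    def __init__(self, host, nodename, db):
        self.host = host
        self.nodename = nodename
        self.db = db  # assumed: exposes get_or_create_compute_node()
        # Create/fetch the record up front instead of in _sync_compute_node().
        self.compute_node = db.get_or_create_compute_node(host, nodename)

    def update_available_resource(self, resources):
        # self.compute_node is guaranteed to exist here -- no lazy branch.
        self.compute_node.update(resources)
        return self.compute_node


class FakeDB:
    """Stand-in for the conductor/DB API used above."""
    def get_or_create_compute_node(self, host, nodename):
        return {"host": host, "hypervisor_hostname": nodename}


rt = ResourceTracker("compute1", "node1", FakeDB())
print(rt.compute_node["hypervisor_hostname"])  # → node1
```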

[Yahoo-eng-team] [Bug 1361683] [NEW] Instance pci_devices and security_groups refreshing can break backporting

2014-08-26 Thread Dan Smith
Public bug reported:

In the Instance object, on a remotable operation such as save(), we
refresh the pci_devices and security_groups with the information we get
back from the database. Since this *replaces* the objects currently
attached to the instance object (which might be backlevel) with current
versions, an older client could get a failure upon deserializing the
result.

We need to figure out some way to either backport the results of
remotable methods, or put matching backlevel objects into the instance
during the refresh in the first place.
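The second option can be sketched like this. All names here are hypothetical, loosely modeled on versioned-object backporting, not actual nova code: keep the fresh DB data, but downgrade it to the version the client already holds before attaching it.

```python
class VersionedObject:
    """Minimal stand-in for a versioned child object (e.g. a PCI device list)."""
    VERSION = "1.2"

    def __init__(self, version=None, fields=None):
        self.version = version or self.VERSION
        self.fields = dict(fields or {})

    def obj_make_compatible(self, target_version):
        # Drop fields the older version does not know about.
        compat = VersionedObject(target_version, self.fields)
        if target_version < "1.2":  # string compare is fine for 1.x versions
            compat.fields.pop("request_id", None)  # pretend: added in 1.2
        return compat


def refresh_child(current_child, fresh_from_db):
    """Refresh with DB data, but at the version the client originally held,
    so deserialization on an older client cannot fail."""
    return fresh_from_db.obj_make_compatible(current_child.version)


old = VersionedObject("1.1", {"id": 1})                      # backlevel client copy
new = VersionedObject(fields={"id": 1, "request_id": "abc"})  # fresh from the DB
refreshed = refresh_child(old, new)
print(refreshed.version, sorted(refreshed.fields))  # → 1.1 ['id']
```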

** Affects: nova
 Importance: Medium
 Assignee: Dan Smith (danms)
 Status: Confirmed


** Tags: unified-objects

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361683

Title:
  Instance pci_devices and security_groups refreshing can break
  backporting

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  In the Instance object, on a remotable operation such as save(), we
  refresh the pci_devices and security_groups with the information we
  get back from the database. Since this *replaces* the objects
  currently attached to the instance object (which might be backlevel)
  with current versions, an older client could get a failure upon
  deserializing the result.

  We need to figure out some way to either backport the results of
  remotable methods, or put matching backlevel objects into the
  instance during the refresh in the first place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358297] Re: Port doesn't receive IP SLAAC in subnets with Router advertisements without dnsmasq

2014-08-26 Thread Sean M. Collins
What we probably should do is create a patch so that when
ipv6_address_mode is unset, but the ipv6_ra_mode attribute is set, we do
not allocate an IPv6 address to the port?

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358297

Title:
  Port doesn't receive IP SLAAC in subnets with Router advertisements
  without dnsmasq

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When creating an IPv6 network and subnet with the arguments --ipv6-ra-
  mode=slaac or --ipv6-ra-mode=dhcpv6-stateless and without
  ipv6_address_mode set, the created port doesn't receive a SLAAC IPv6
  address, but one of the fixed IPs.

  For example:
  Let's create network and subnet:
  ~$ neutron net-create net12
  Created a new network:
  ...
  ~$ neutron subnet-create --name=subnet-net12 --ipv6-ra-mode=slaac --ip-version=6 net12 2004::/64
  +-------------------+----------------------------------------------------------+
  | Field             | Value                                                    |
  +-------------------+----------------------------------------------------------+
  | allocation_pools  | {"start": "2004::2", "end": "2004::ffff:ffff:ffff:fffe"} |
  | cidr              | 2004::/64                                                |
  | dns_nameservers   |                                                          |
  | enable_dhcp       | True                                                     |
  | gateway_ip        | 2004::1                                                  |
  | host_routes       |                                                          |
  | id                | 81287be4-0d5a-4185-bdb4-4f3bcc6a0bdc                     |
  | ip_version        | 6                                                        |
  | ipv6_address_mode |                                                          |
  | ipv6_ra_mode      | slaac                                                    |
  | name              | subnet-net12                                             |
  +-------------------+----------------------------------------------------------+
  ...

  ~$ neutron router-create router1
  Created a new router:
  ...
  ~$ neutron router-interface-add 2ca34fba-258f-4395-95b0-4fe32e4f19b7 81287be4-0d5a-4185-bdb4-4f3bcc6a0bdc
  Added interface af8ac8d5-9461-4998-b561-007e3646f53f to router 2ca34fba-258f-4395-95b0-4fe32e4f19b7.

  Now there should be Router Advertisements in the network, which should
  propagate the /64 subnet mask to all hosts.
  The created port should now receive its SLAAC address and mask from the RAs. But:

  localadmin@devstack-server:~$ neutron port-create net12
  Created a new port:
  
  +-----------------------+--------------------------------------------------------------------------------+
  | Field                 | Value                                                                          |
  +-----------------------+--------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                           |
  | allowed_address_pairs |                                                                                |
  | binding:vnic_type     | normal                                                                         |
  | device_id             |                                                                                |
  | device_owner          |                                                                                |
  | fixed_ips             | {"subnet_id": "81287be4-0d5a-4185-bdb4-4f3bcc6a0bdc", "ip_address": "2004::2"} |
  | id                    | 956d0c15-8ba4-473d-9674-a74d4cf19a47                                           |
  | mac_address           | fa:16:3e:4b:14:2b                                                              |
  | name                  |                                                                                |
  | network_id            | b9bfb75a-8908-4fbf-a771-74ec345a0ee4                                           |
  | security_groups       | 5c0d2c99-65a4-497c-a5c4-3cb241f669f4                                           |
  | status                | DOWN                                                                           |
  | tenant_id             | 74267b5732114ca1a11b7e2849156363                                               |
  +-----------------------+--------------------------------------------------------------------------------+

  The port receives an IP from the default fixed IP range, which means SLAAC doesn't work.
  The same thing happens with ra_mode=dhcpv6-stateless and address_mode=None.
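For reference, the SLAAC address the port should have autoconfigured can be computed from its MAC with the standard EUI-64 expansion. The sketch below uses only the standard library and mirrors what RAs would produce on the host; it is not a neutron API:

```python
import ipaddress

def slaac_address(prefix, mac):
    """EUI-64 SLAAC address for a /64 prefix and a MAC address:
    flip the universal/local bit of the first octet and insert ff:fe."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# The port above (MAC fa:16:3e:4b:14:2b) in 2004::/64 would get:
print(slaac_address("2004::/64", "fa:16:3e:4b:14:2b"))
# → 2004::f816:3eff:fe4b:142b
```

which is clearly not the 2004::2 fixed IP the port was actually given.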

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1358297] Re: Port doesn't receive IP SLAAC in subnets with Router advertisements without dnsmasq

2014-08-26 Thread Sean M. Collins
@Sergey - the description in the table of the spec you linked is inaccurate. 
Having ipv6_address_mode unset means that you have some *external* IPAM system, 
which OpenStack does not know about, assigning addresses to 
ports/instances.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358297

Title:
  Port doesn't receive IP SLAAC in subnets with Router advertisements
  without dnsmasq

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When creating an IPv6 network and subnet with the arguments --ipv6-ra-
  mode=slaac or --ipv6-ra-mode=dhcpv6-stateless and without
  ipv6_address_mode set, the created port doesn't receive a SLAAC IPv6
  address, but one of the fixed IPs.

  For example:
  Let's create network and subnet:
  ~$ neutron net-create net12
  Created a new network:
  ...
  ~$ neutron subnet-create --name=subnet-net12 --ipv6-ra-mode=slaac --ip-version=6 net12 2004::/64
  +-------------------+----------------------------------------------------------+
  | Field             | Value                                                    |
  +-------------------+----------------------------------------------------------+
  | allocation_pools  | {"start": "2004::2", "end": "2004::ffff:ffff:ffff:fffe"} |
  | cidr              | 2004::/64                                                |
  | dns_nameservers   |                                                          |
  | enable_dhcp       | True                                                     |
  | gateway_ip        | 2004::1                                                  |
  | host_routes       |                                                          |
  | id                | 81287be4-0d5a-4185-bdb4-4f3bcc6a0bdc                     |
  | ip_version        | 6                                                        |
  | ipv6_address_mode |                                                          |
  | ipv6_ra_mode      | slaac                                                    |
  | name              | subnet-net12                                             |
  +-------------------+----------------------------------------------------------+
  ...

  ~$ neutron router-create router1
  Created a new router:
  ...
  ~$ neutron router-interface-add 2ca34fba-258f-4395-95b0-4fe32e4f19b7 81287be4-0d5a-4185-bdb4-4f3bcc6a0bdc
  Added interface af8ac8d5-9461-4998-b561-007e3646f53f to router 2ca34fba-258f-4395-95b0-4fe32e4f19b7.

  Now there should be Router Advertisements in the network, which should
  propagate the /64 subnet mask to all hosts.
  The created port should now receive its SLAAC address and mask from the RAs. But:

  localadmin@devstack-server:~$ neutron port-create net12
  Created a new port:
  
  +-----------------------+--------------------------------------------------------------------------------+
  | Field                 | Value                                                                          |
  +-----------------------+--------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                           |
  | allowed_address_pairs |                                                                                |
  | binding:vnic_type     | normal                                                                         |
  | device_id             |                                                                                |
  | device_owner          |                                                                                |
  | fixed_ips             | {"subnet_id": "81287be4-0d5a-4185-bdb4-4f3bcc6a0bdc", "ip_address": "2004::2"} |
  | id                    | 956d0c15-8ba4-473d-9674-a74d4cf19a47                                           |
  | mac_address           | fa:16:3e:4b:14:2b                                                              |
  | name                  |                                                                                |
  | network_id            | b9bfb75a-8908-4fbf-a771-74ec345a0ee4                                           |
  | security_groups       | 5c0d2c99-65a4-497c-a5c4-3cb241f669f4                                           |
  | status                | DOWN                                                                           |
  | tenant_id             | 74267b5732114ca1a11b7e2849156363                                               |
  +-----------------------+--------------------------------------------------------------------------------+

  The port receives an IP from the default fixed IP range, which means SLAAC doesn't work.
  The same thing happens with ra_mode=dhcpv6-stateless and address_mode=None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : 

[Yahoo-eng-team] [Bug 1361708] [NEW] Unable to Disassociate Floating IP

2014-08-26 Thread Amogh
Public bug reported:


1. Login to Devstack as admin user
2. Go to Instances page, and create new instance
3. Associate floating ip to the instance
4. Disassociate the floating IP from the instance. Observe that an "Unable to 
Disassociate Floating IP" error is displayed.

Screenshot attached.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Error: Unable to Disassociate Floating IP
   
https://bugs.launchpad.net/bugs/1361708/+attachment/4187410/+files/disassociate%20floating%20ip.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361708

Title:
  Unable to Disassociate Floating IP

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  1. Login to Devstack as admin user
  2. Go to Instances page, and create new instance
  3. Associate floating ip to the instance
  4. Disassociate the floating IP from the instance. Observe that an "Unable to 
Disassociate Floating IP" error is displayed.

  Screenshot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361725] [NEW] Dynamic select widget layout problem when help block displayed

2014-08-26 Thread Justin Pomeroy
Public bug reported:

When the help block is displayed, the dynamic select widget field does
not correctly display the help below the field like the other input
fields.

This is dependent on the fix to show the help block on modals:
https://review.openstack.org/#/c/111315/

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: New

** Attachment added: dynamic-select-help-issue.png
   
https://bugs.launchpad.net/bugs/1361725/+attachment/4187427/+files/dynamic-select-help-issue.png

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361725

Title:
  Dynamic select widget layout problem when help block displayed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the help block is displayed, the dynamic select widget field does
  not correctly display the help below the field like the other input
  fields.

  This is dependent on the fix to show the help block on modals:
  https://review.openstack.org/#/c/111315/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361605] Re: Use lazy init for L3 plugin reference

2014-08-26 Thread Armando Migliaccio
Kevin, I like the patch you posted above, that's very insightful.

Can you post a patch to calculate how much time it takes a developer to
type 'that garbage' over the lifespan of Neutron? :)

I don't have a strong opinion on either approach, but I like the idea of
conciseness, so that we can save developers the trouble of copying and
pasting this all over the place, which makes the code simpler and more
readable. That to me is the real saving!

That said, I'd prefer to introduce the property on an ad-hoc basis
rather than doing a sweep patch as proposed here.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361605

Title:
  Use lazy init for L3 plugin reference

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In many L3 plugins, there is a reference needed to the L3 core plugin.
  This is typically done as:

  plugin =
  manager.NeutronManager.get_service_plugins().get(constants.L3_ROUTER_NAT)

  Rather than looking up the plugin each time it is needed (e.g. while
  processing each VPN API request), this bug proposes to do a lazy init
  of the plugin, as in:

  @property
  def l3_plugin(self):
      try:
          return self._l3_plugin
      except AttributeError:
          self._l3_plugin = manager.NeutronManager.get_service_plugins().get(
              constants.L3_ROUTER_NAT)
          return self._l3_plugin

  In addition, we can look at placing this in a common area (mixin?) or
  as a decorator, so that each class that needs it could use the mixin,
  rather than repeat this property.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1155765] Re: Offline compression enabled but key is missing from offline manifest

2014-08-26 Thread Yves-Gwenael Bourhis
I can reproduce the issue.
Simply add COMPRESS_OFFLINE = True in your local settings, run "python 
manage.py compress", and then run the server.


** Changed in: horizon
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1155765

Title:
  Offline compression enabled but key is missing from offline manifest

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  After enabling debug, I got the following error in the browser on IE10,
  while Firefox and Chrome silently return to a cleared login screen:

  OfflineGenerationError at /admin/
  You have offline compression enabled but key 6c3f4b40154653aaf8dd6e0393186d0a 
is missing from offline manifest. You may need to run "python manage.py compress".
  Request Method: GET 
  Request URL: http://10.10.22.225/horizon/admin/ 
  Django Version: 1.4.1 
  Exception Type: OfflineGenerationError 
  Exception Value: You have offline compression enabled but key 
6c3f4b40154653aaf8dd6e0393186d0a is missing from offline manifest. You may 
need to run "python manage.py compress". 
  Exception Location: 
/usr/lib/python2.7/dist-packages/compressor/templatetags/compress.py in 
render_offline, line 72 
  Python Executable: /usr/bin/python 
  Python Version: 2.7.3 
  Python Path: ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
   '/usr/lib/python2.7',
   '/usr/lib/python2.7/plat-linux2',
   '/usr/lib/python2.7/lib-tk',
   '/usr/lib/python2.7/lib-old',
   '/usr/lib/python2.7/lib-dynload',
   '/usr/local/lib/python2.7/dist-packages',
   '/usr/lib/python2.7/dist-packages',
   '/usr/share/openstack-dashboard/',
   '/usr/share/openstack-dashboard/openstack_dashboard'] 
  Server time: Fri, 15 Mar 2013 19:40:31 + 

  Error during template rendering
  In template 
/usr/lib/python2.7/dist-packages/horizon/templates/horizon/_conf.html, error at 
line 3

  You have offline compression enabled but key 6c3f4b40154653aaf8dd6e0393186d0a 
is missing from offline manifest. You may need to run "python manage.py compress".
  1 {% load compress %}
  2 
  3 {% compress js %}
  4 <script src='{{ STATIC_URL }}horizon/js/horizon.js' type='text/javascript' 
charset='utf-8'></script>
  5 <script src='{{ STATIC_URL }}horizon/js/horizon.conf.js' 
type='text/javascript' charset='utf-8'></script>
  6 <script type='text/javascript' charset='utf-8'>
  7 /* Storage for backend configuration variables which the frontend
  8  * should be aware of.
  9  */
  10 horizon.conf.debug = {{ debug|yesno:"true,false" }};
  11 horizon.conf.static_url = "{{ STATIC_URL }}";
  12 horizon.conf.ajax = {
  13   queue_limit: {{ HORIZON_CONFIG.ajax_queue_limit|default:"null" }}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1155765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266513] Re: Some Python requirements are not hosted on PyPI

2014-08-26 Thread James E. Blair
** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266513

Title:
  Some Python requirements are not hosted on PyPI

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in OpenStack Object Storage (Swift):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Pip 1.5 (released January 2nd, 2014) will by default refuse to
  download packages which are linked from PyPI but not hosted on
  pypi.python.org. The workaround is to whitelist these package names
  individually with both the --allow-external and --allow-insecure
  options.

  These options are new in pip 1.4, so encoding them will break for
  people trying to use pip 1.3.x or earlier. Those earlier versions of
  pip are not secure anyway since they don't connect via HTTPS with host
  certificate validation, so we should be encouraging people to use 1.4
  and later anyway.

  The --allow-insecure option is transitioning to a clearer --allow-
  unverified option name starting with 1.5, but the new form does not
  work with pip before 1.5 so we should use the old version for now to
  allow people to transition gracefully. The --allow-insecure form won't
  be removed until at least pip 1.7 according to comments in the source
  code.

  Virtualenv 1.11 (released the same day) bundles pip 1.5 by default,
  and so requires these workarounds when using requirements external to
  PyPI. Be aware that 1.11 is broken for projects using
  sitepackages=True in their tox.ini. The fix is
  https://github.com/pypa/virtualenv/commit/a6ca6f4 which is slated to
  appear in 1.11.1 (no ETA available). We've worked around it on our
  test infrastructure with https://git.openstack.org/cgit/openstack-
  infra/config/commit/?id=20cd18a for now, but that is hiding the
  external-packages issue since we're currently running all tests with
  pip 1.4.1 as a result.

  This bug will also be invisible in our test infrastructure for
  projects listed as having the PyPI mirror enforced in
  openstack/requirements (except for jobs which bypass the mirror, such
  as those for requirements changes), since our update jobs will pull in
  and mirror external packages and pip sees the mirror as being PyPI
  itself in that situation.

  We'll use this bug to track necessary whitelist updates to tox.ini and
  test scripts.
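  A whitelist entry of the kind described above looks roughly like this in a
  tox.ini ("somepkg" is a placeholder, not an actual OpenStack requirement;
  --allow-external and --allow-insecure are the pip 1.4/1.5 options discussed
  in the report):

  ```ini
  # tox.ini (sketch) -- whitelist one requirement hosted off PyPI
  [testenv]
  install_command = pip install --allow-external somepkg
                                --allow-insecure somepkg
                                {opts} {packages}
  ```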

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1266513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361792] [NEW] pci requests saved as system metadata can be out of bounds

2014-08-26 Thread Baodong (Robert) Li
Public bug reported:

The system metadata table stores key-value pairs whose values are limited
to 255 bytes. PCI requests are saved as a JSON document in the system
metadata table, and the document's size depends on the number of PCI
requests, possibly exceeding 255 bytes. Currently, when the limit is
exceeded, the DB throws an exception and the instance fails to boot. This
needs to be changed to work with any size of PCI requests.
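One possible workaround shape (a hypothetical sketch, not necessarily the fix nova adopted) is to split the JSON blob across several numbered 255-byte system_metadata entries and join them on read:

```python
import json

CHUNK = 255  # system_metadata value size limit from the report

def store_blob(sysmeta, key, obj):
    """Split a JSON blob across sysmeta entries named key.0, key.1, ..."""
    blob = json.dumps(obj)
    for i in range(0, max(len(blob), 1), CHUNK):
        sysmeta["%s.%d" % (key, i // CHUNK)] = blob[i:i + CHUNK]

def load_blob(sysmeta, key):
    """Reassemble the chunks in numeric order and decode the JSON."""
    parts = sorted((k for k in sysmeta if k.startswith(key + ".")),
                   key=lambda k: int(k.rsplit(".", 1)[1]))
    return json.loads("".join(sysmeta[k] for k in parts))

sysmeta = {}
pci_requests = [{"count": 1, "spec": [{"vendor_id": "8086"}]}] * 20
store_blob(sysmeta, "pci_requests", pci_requests)
assert all(len(v) <= CHUNK for v in sysmeta.values())
assert load_blob(sysmeta, "pci_requests") == pci_requests
print(len(sysmeta))  # several small rows instead of one oversized value
```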

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361792

Title:
  pci requests saved as system metadata can be out of bounds

Status in OpenStack Compute (Nova):
  New

Bug description:
  The system metadata table stores key-value pairs whose values are
  limited to 255 bytes. PCI requests are saved as a JSON document in the
  system metadata table, and the document's size depends on the number
  of PCI requests, possibly exceeding 255 bytes. Currently, when the
  limit is exceeded, the DB throws an exception and the instance fails
  to boot. This needs to be changed to work with any size of PCI requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361795] [NEW] context referenced in pci_manager.__init__, but not defined

2014-08-26 Thread Baodong (Robert) Li
Public bug reported:

The variable context is referenced in pci_manager.__init__(), but it is
not passed in as an argument or defined anywhere, so an exception will
be thrown when it is referenced.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361795

Title:
  context referenced in pci_manager.__init__, but not defined

Status in OpenStack Compute (Nova):
  New

Bug description:
  The variable context is referenced in pci_manager.__init__(), but it
  is not passed in as an argument or defined anywhere, so an exception
  will be thrown when it is referenced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361797] [NEW] unused code in pci_manager.get_instance_pci_devs()

2014-08-26 Thread Baodong (Robert) Li
Public bug reported:

def get_instance_pci_devs(inst):
    """Get the devices assigned to the instances."""
    if isinstance(inst, objects.Instance):
        return inst.pci_devices
    else:
        ctxt = context.get_admin_context()
        return objects.PciDeviceList.get_by_instance_uuid(
            ctxt, inst['uuid'])

In the above code, the else branch may not be reached by the normal code
flow. Removing it may break some of the unit tests, so a fix is also
needed in the unit test code that uses it.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361797

Title:
  unused code in pci_manager.get_instance_pci_devs()

Status in OpenStack Compute (Nova):
  New

Bug description:
  def get_instance_pci_devs(inst):
      """Get the devices assigned to the instances."""
      if isinstance(inst, objects.Instance):
          return inst.pci_devices
      else:
          ctxt = context.get_admin_context()
          return objects.PciDeviceList.get_by_instance_uuid(
              ctxt, inst['uuid'])

  In the above code, the else branch may not be reached by the normal
  code flow. Removing it may break some of the unit tests, so a fix is
  also needed in the unit test code that uses it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356736] Re: When executing 'vm resize' command there is no response after a long time if the vm is down

2014-08-26 Thread Clark Boylan
The openstack infra team does not run or care for any code that does vm
resizes. I think this is meant to be a nova bug.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-ci
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356736

Title:
  When executing 'vm resize' command there is no response after a long
  time if the vm is down

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  It seems that if the VM is down and a resize command is sent at the
  same time, the command will hang.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356736/+subscriptions



[Yahoo-eng-team] [Bug 1361806] [NEW] requested_network as a tuple now should be converted to an object

2014-08-26 Thread Baodong (Robert) Li
Public bug reported:

requested_network is a tuple of (net_id, fixed_ip, port_id). Some of the
members can be None depending on the user input, for example, to the
nova boot command.  When the SR-IOV support tries to use it, it needs to
add a pci_request_id into it. Concerns have been raised about the tuple's
expandability and its being error-prone when packing/unpacking.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361806

Title:
  requested_network as a tuple now should be converted to an object

Status in OpenStack Compute (Nova):
  New

Bug description:
  requested_network is a tuple of (net_id, fixed_ip, port_id). Some of
  the members can be None depending on the user input, for example, to
  the nova boot command.  When the SR-IOV support tries to use it, it
  needs to add a pci_request_id into it. Concerns have been raised about
  the tuple's expandability and its being error-prone when
  packing/unpacking.
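The tuple-to-object conversion described above can be sketched as follows. The class and field names are illustrative assumptions, not Nova's actual API:

```python
# Illustrative sketch: wrapping the (net_id, fixed_ip, port_id) tuple in
# an object makes it straightforward to add new fields such as
# pci_request_id without repacking tuples everywhere.
class NetworkRequest:
    def __init__(self, network_id=None, address=None, port_id=None,
                 pci_request_id=None):
        self.network_id = network_id
        self.address = address
        self.port_id = port_id
        self.pci_request_id = pci_request_id

    @classmethod
    def from_tuple(cls, requested_network):
        # Compatibility shim for callers still passing the legacy tuple.
        net_id, fixed_ip, port_id = requested_network
        return cls(network_id=net_id, address=fixed_ip, port_id=port_id)


req = NetworkRequest.from_tuple(('net-1', '10.0.0.5', None))
req.pci_request_id = 'pci-req-1'  # new field added without touching callers
```

New attributes become ordinary assignments instead of error-prone positional unpacking.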

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361806/+subscriptions



[Yahoo-eng-team] [Bug 1361813] [NEW] Adding new service plugins requires updates to constants module

2014-08-26 Thread Sumit Naiksatam
Public bug reported:


Loading a new service plugin requires changes to the:
neutron/plugins/common/constants.py

If this can be made config-driven, it eliminates the need to make code
changes, and is potentially helpful when services/features are separated
into a new repository.

** Affects: neutron
 Importance: Undecided
 Assignee: Sumit Naiksatam (snaiksat)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361813

Title:
  Adding new service plugins requires updates to constants module

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  Loading a new service plugin requires changes to the:
  neutron/plugins/common/constants.py

  If this can be made config-driven, it eliminates the need to make code
  changes, and is potentially helpful when services/features are
  separated into a new repository.
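A config-driven approach could look something like the sketch below. The option format and parser are assumptions for illustration only, not Neutron's actual design:

```python
# Hedged sketch of the idea: derive the service-type mapping from a
# configuration value instead of the hard-coded
# neutron/plugins/common/constants.py module. The "ALIAS:plugin.path"
# format is invented here for illustration.
def parse_service_plugins(opt_value):
    """Parse comma-separated 'ALIAS:plugin.path' pairs into a dict."""
    mapping = {}
    for entry in opt_value.split(','):
        alias, _, path = entry.strip().partition(':')
        mapping[alias] = path
    return mapping


plugins = parse_service_plugins(
    "LOADBALANCER:neutron_lbaas.plugin.LoadBalancerPlugin,"
    "FIREWALL:neutron_fwaas.plugin.FirewallPlugin")
```

Out-of-tree service repositories could then register themselves purely through configuration.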

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361813/+subscriptions



[Yahoo-eng-team] [Bug 1361871] [NEW] sort Create Node Group Template's Flavor List

2014-08-26 Thread Cindy Lu
Public bug reported:

should be sorted and make use of CREATE_INSTANCE_FLAVOR_SORT in
local_settings.py.

Related: https://bugs.launchpad.net/horizon/+bug/1360014

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Description changed:

- should be sorted and make sure of CREATE_INSTANCE_FLAVOR_SORT in
+ should be sorted and make use of CREATE_INSTANCE_FLAVOR_SORT in
  local_settings.py.
  
  Related: https://bugs.launchpad.net/horizon/+bug/1360014

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361871

Title:
  sort Create Node Group Template's Flavor List

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  should be sorted and make use of CREATE_INSTANCE_FLAVOR_SORT in
  local_settings.py.

  Related: https://bugs.launchpad.net/horizon/+bug/1360014

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361871/+subscriptions



[Yahoo-eng-team] [Bug 1361896] [NEW] Test commit - Do not review

2014-08-26 Thread Dane LeBlanc
Public bug reported:

This is  a temporary bug filed to test direct reporting of 3rd party CI
test results.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361896

Title:
  Test commit - Do not review

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is  a temporary bug filed to test direct reporting of 3rd party
  CI test results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361896/+subscriptions



[Yahoo-eng-team] [Bug 1361896] Re: Test commit - Do not review

2014-08-26 Thread Dane LeBlanc
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361896

Title:
  Test commit - Do not review

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This is  a temporary bug filed to test direct reporting of 3rd party
  CI test results.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361896/+subscriptions



[Yahoo-eng-team] [Bug 1361924] [NEW] python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Brant Knudson
Public bug reported:

Several keystone jobs have failed recently, in py26:

http://logs.openstack.org/78/111578/8/check/gate-keystone-
python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

Looks like the new python-subunit 0.0.20 fails to install.

This also failed for me locally:

$ .tox/py27/bin/pip install -U python-subunit==0.0.20
Downloading/unpacking python-subunit==0.0.20
...
copying and adjusting filters/subunit-tags -> build/scripts-2.7

error: file '/opt/stack/keystone/.tox/py27/build/python-
subunit/filters/subunit2cvs' does not exist


So I think it's missing a file.

** Affects: keystone
 Importance: Undecided
 Status: Confirmed

** Affects: subunit
 Importance: Critical
 Assignee: Robert Collins (lifeless)
 Status: Triaged

** Affects: trove
 Importance: Undecided
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in OpenStack Identity (Keystone):
  Confirmed
Status in SubUnit:
  Triaged
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U python-subunit==0.0.20
  Downloading/unpacking python-subunit==0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.
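  Until a fixed release is available (0.0.21 was later published), one
  workaround is to exclude the broken release in a requirements file;
  the specifier below is illustrative:

  # requirements.txt: skip the release that fails to install
  python-subunit!=0.0.20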

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361924/+subscriptions



[Yahoo-eng-team] [Bug 1361924] Re: python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Nikhil Manchanda
** Changed in: keystone
   Status: New => Confirmed

** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in OpenStack Identity (Keystone):
  Confirmed
Status in SubUnit:
  Fix Released
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U python-subunit==0.0.20
  Downloading/unpacking python-subunit==0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361924/+subscriptions



[Yahoo-eng-team] [Bug 1361924] Re: python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Robert Collins
** Also affects: subunit
   Importance: Undecided
   Status: New

** Changed in: subunit
   Status: New => Triaged

** Changed in: subunit
   Importance: Undecided => Critical

** Changed in: subunit
 Assignee: (unassigned) => Robert Collins (lifeless)

** Changed in: subunit
Milestone: None => next

** Changed in: subunit
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in OpenStack Identity (Keystone):
  Invalid
Status in SubUnit:
  Fix Released
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U python-subunit==0.0.20
  Downloading/unpacking python-subunit==0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361924/+subscriptions



[Yahoo-eng-team] [Bug 1361924] Re: python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Brant Knudson
subunit 0.0.21 is up now which should fix this.

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in OpenStack Identity (Keystone):
  Invalid
Status in SubUnit:
  Fix Released
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U python-subunit==0.0.20
  Downloading/unpacking python-subunit==0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361924/+subscriptions



[Yahoo-eng-team] [Bug 1361924] Re: python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Miguel Grinberg
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  Invalid
Status in SubUnit:
  Fix Released
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U python-subunit==0.0.20
  Downloading/unpacking python-subunit==0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1361924/+subscriptions



[Yahoo-eng-team] [Bug 1361924] Re: python-subunit 0.0.20 failing to install causing gate failure

2014-08-26 Thread Nikhil Manchanda
Confirmed fixed in Trove as well. 
Thanks @bknudson and @lifeless!

** Changed in: trove
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361924

Title:
  python-subunit 0.0.20 failing to install causing gate failure

Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  Invalid
Status in SubUnit:
  Fix Released
Status in Openstack Database (Trove):
  Invalid

Bug description:
  Several keystone jobs have failed recently, in py26:

  http://logs.openstack.org/78/111578/8/check/gate-keystone-
  python26/c8ae14c/console.html#_2014-08-27_01_03_03_950

  Looks like the new python-subunit 0.0.20 fails to install.

  This also failed for me locally:

  $ .tox/py27/bin/pip install -U python-subunit==0.0.20
  Downloading/unpacking python-subunit==0.0.20
  ...
  copying and adjusting filters/subunit-tags -> build/scripts-2.7

  error: file '/opt/stack/keystone/.tox/py27/build/python-
  subunit/filters/subunit2cvs' does not exist

  
  So I think it's missing a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1361924/+subscriptions



[Yahoo-eng-team] [Bug 1361963] [NEW] No default control_exchange configuration prompt in glance-api.conf

2014-08-26 Thread Jin Liu
Public bug reported:

In the current default glance-api.conf, the messaging configuration is
as below, but 'rabbit_notification_exchange = glance' and
'qpid_notification_exchange = glance' actually have no effect on topic
consumer queue creation, because oslo.messaging uses 'control_exchange'
(default value 'openstack') to name the exchange.  Other components such
as cinder have written 'control_exchange = cinder' into their conf
files; glance should make the same change.

# Messaging driver used for 'messaging' notifications driver
# rpc_backend = 'rabbit'

# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False

# Configuration options if sending notifications via Qpid (these are
# the defaults)
qpid_notification_exchange = glance
qpid_notification_topic = notifications
qpid_hostname = localhost
qpid_port = 5672
qpid_username =
qpid_password =
qpid_sasl_mechanisms =
qpid_reconnect_timeout = 0
qpid_reconnect_limit = 0
qpid_reconnect_interval_min = 0
qpid_reconnect_interval_max = 0
qpid_reconnect_interval = 0

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361963

Title:
  No default control_exchange configuration prompt in glance-api.conf

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In the current default glance-api.conf, the messaging configuration is
  as below, but 'rabbit_notification_exchange = glance' and
  'qpid_notification_exchange = glance' actually have no effect on topic
  consumer queue creation, because oslo.messaging uses
  'control_exchange' (default value 'openstack') to name the exchange.
  Other components such as cinder have written 'control_exchange =
  cinder' into their conf files; glance should make the same change.

  # Messaging driver used for 'messaging' notifications driver
  # rpc_backend = 'rabbit'

  # Configuration options if sending notifications via rabbitmq (these are
  # the defaults)
  rabbit_host = localhost
  rabbit_port = 5672
  rabbit_use_ssl = false
  rabbit_userid = guest
  rabbit_password = guest
  rabbit_virtual_host = /
  rabbit_notification_exchange = glance
  rabbit_notification_topic = notifications
  rabbit_durable_queues = False

  # Configuration options if sending notifications via Qpid (these are
  # the defaults)
  qpid_notification_exchange = glance
  qpid_notification_topic = notifications
  qpid_hostname = localhost
  qpid_port = 5672
  qpid_username =
  qpid_password =
  qpid_sasl_mechanisms =
  qpid_reconnect_timeout = 0
  qpid_reconnect_limit = 0
  qpid_reconnect_interval_min = 0
  qpid_reconnect_interval_max = 0
  qpid_reconnect_interval = 0
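
  A minimal change along the lines the report suggests would be to add
  an explicit control_exchange setting to glance-api.conf, mirroring
  what cinder does:

  # glance-api.conf
  control_exchange = glance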

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1361963/+subscriptions
