[Yahoo-eng-team] [Bug 1375564] Re: unable to delete correct security rules

2014-10-01 Thread Christopher Yeoh
This is already implemented in Juno as secgroup-delete-group-rule

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375564

Title:
  unable to delete correct security rules

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Description:
  ==========

  Version: Icehouse/stable

  Try to add a security group rule, like:

  stack@ThinkCentre:~$ nova secgroup-add-group-rule default default tcp 121 121

  +-------------+-----------+---------+----------+--------------+
  | IP Protocol | From Port | To Port | IP Range | Source Group |
  +-------------+-----------+---------+----------+--------------+
  | tcp         | 121       | 121     |          | default      |
  +-------------+-----------+---------+----------+--------------+
  ==========
  Now try to delete that group rule :

  stack@ThinkCentre:~$ nova secgroup-delete-group-rule default default tcp 121 121

  ERROR (AttributeError): 'NoneType' object has no attribute 'upper'
  ==========
  Now try to add an invalid group rule :

  stack@tcs-ThinkCentre:~$ nova secgroup-add-group-rule default default tcp -1 -1

  ERROR (BadRequest): Invalid port range -1:-1. Valid TCP ports should be between 1-65535 (HTTP 400) (Request-ID: req-4fb01dfe-c0f6-4309-87fb-e61777e980e2)
  ==========
  Now try to add a group rule with the icmp protocol :

  stack@ThinkCentre:~$ nova secgroup-add-group-rule default default icmp -1 -1

  +-------------+-----------+---------+----------+--------------+
  | IP Protocol | From Port | To Port | IP Range | Source Group |
  +-------------+-----------+---------+----------+--------------+
  | icmp        | -1        | -1      |          | default      |
  +-------------+-----------+---------+----------+--------------+

  This group rule is added because the port range is defined as (-1 to 255) for ICMP.
  ==========
  Now try to add one more group rule :

  stack@ThinkCentre:~$ nova secgroup-add-group-rule default default icmp -2 -2

  ERROR (BadRequest): Invalid port range -2:-2. For ICMP, the type:code must be valid (HTTP 400) (Request-ID: req-24432ef8-ef05-4d6c-bbfd-8c2d199340e0)
  ==========
  Now check the group rule list:

  stack@ThinkCentre-M91P:~$ nova secgroup-list-rules default

  +-------------+-----------+---------+----------+--------------+
  | IP Protocol | From Port | To Port | IP Range | Source Group |
  +-------------+-----------+---------+----------+--------------+
  | tcp         | 12        | 12      |          | default      |
  |             |           |         |          | default      |
  |             |           |         |          | default      |
  | icmp        | -1        | -1      |          | default      |
  |             |           |         |          |              |
  +-------------+-----------+---------+----------+--------------+
  ==========
  Actual results:
  Only valid rules can be created, but they cannot be deleted.

  Expected results:
  There should be a way to delete them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376128] [NEW] Neutron agents won't pick up neutron-ns-metadata-proxy after I-J upgrade

2014-10-01 Thread Miguel Angel Ajo
Public bug reported:

The pid file path changes from %s.pid to %s/pid during Juno, due to this
change:

https://github.com/openstack/neutron/commit/7f8ae630b87392193974dd9cb198c1165cdec93b
#diff-448060f24c6b560b2cbac833da6a143dL68

That means the l3-agent and dhcp-agent (when isolated metadata is enabled)
will respawn a second neutron-ns-metadata-proxy in each namespace/resource
after an Icehouse-to-Juno upgrade and agent restart, due to the inability
to find the old PID file and the external process PID.
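
A minimal sketch of how an agent could cope with both layouts on restart
(illustrative only, under the path convention implied above; this is not
the actual neutron fix):

    import os

    def find_metadata_proxy_pid(uuid, pid_root):
        """Return the proxy PID for a router/network, checking both layouts."""
        candidates = [
            os.path.join(pid_root, '%s.pid' % uuid),  # Icehouse: <uuid>.pid
            os.path.join(pid_root, uuid, 'pid'),      # Juno: <uuid>/pid
        ]
        for path in candidates:
            try:
                with open(path) as f:
                    return int(f.read().strip())
            except (IOError, ValueError):
                continue
        return None  # not found: the agent would respawn a proxy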

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

** Description changed:

  The pid file path changes from %s.pid to %s/pid during juno, due to this
  change:

  https://github.com/openstack/neutron/commit/7f8ae630b87392193974dd9cb198c1165cdec93b
  #diff-448060f24c6b560b2cbac833da6a143dL68

- That means the l3-agent and dhcp-agent (when isolated metadata is enabled) will respawn a second
- neutron-ns-metadata proxy on each namespace/resource after upgrade (I-J) and agent restart
- due the inability to find the old PID file and external process PID.
+ That means the l3-agent and dhcp-agent (when isolated metadata is enabled)
+ will respawn a second neutron-ns-metadata proxy on each namespace/resource
+ after upgrade (I-J) and agent restart due the inability to find the old PID
+ file and external process PID.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376128

Title:
  Neutron agents won't pick up neutron-ns-metadata-proxy after I-J
  upgrade

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The pid file path changes from %s.pid to %s/pid during Juno, due to
  this change:

  https://github.com/openstack/neutron/commit/7f8ae630b87392193974dd9cb198c1165cdec93b
  #diff-448060f24c6b560b2cbac833da6a143dL68

  That means the l3-agent and dhcp-agent (when isolated metadata is enabled)
  will respawn a second neutron-ns-metadata-proxy in each namespace/resource
  after an Icehouse-to-Juno upgrade and agent restart, due to the inability
  to find the old PID file and the external process PID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1318550] Re: Vpnaas: Vpn_agent is not able to handle two vpn service object for the same router.

2014-10-01 Thread vikas
** Also affects: openstack-tempest (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: openstack-tempest (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1318550

Title:
  Vpnaas: Vpn_agent is not able to handle two vpn service object for the
  same router.

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
  neutron net-create ext-net1 --router:external=True
  neutron subnet-create --allocation-pool start=192.142.0.60,end=192.142.0.100 --gateway 192.142.0.1 ext-net1 192.142.0.0/16 --enable_dhcp=False

  step 1:
  neutron net-create net1
  neutron subnet-create net1 10.10.1.0/24 --name sub1
  neutron router-create r1
  neutron router-interface-add r1 sub1
  neutron router-gateway-set r1 ext-net1

  neutron net-create net2
  neutron subnet-create net2 10.10.2.0/24 --name sub2
  neutron router-create r2
  neutron router-interface-add r2 sub2
  neutron router-gateway-set r2 ext-net1

  
  neutron vpn-ikepolicy-create ikepolicy1
  neutron vpn-ipsecpolicy-create ipsecpolicy1
  neutron vpn-service-create --name myvpn1 --description "My vpn service" r1 sub1
  neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn1 --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 192.142.0.61 --peer-id 192.142.0.61 --peer-cidr 10.10.2.0/24 --psk secret


  neutron vpn-ikepolicy-create ikepolicy2
  neutron vpn-ipsecpolicy-create ipsecpolicy2
  neutron vpn-service-create --name myvpn2 --description "My vpn service" r2 sub2

  neutron ipsec-site-connection-create --name vpnconnection2
  --vpnservice-id myvpn2 --ikepolicy-id ikepolicy2 --ipsecpolicy-id
  ipsecpolicy2 --peer-address 192.142.0.60 --peer-id 192.142.0.60
  --peer-cidr 10.10.1.0/24 --psk secret

  
  Create one more network on site1, net3, with subnet 5.5.5.0/24 (sub3).
  Create a network on site2, net4, with subnet 8.8.8.0/24 (sub4).

  Create a vpn service object myvpn3 with r1 and sub3.
  Create a vpn service object myvpn4 with r2 and sub4.

  
  Create the ipsec site connections:

  neutron ipsec-site-connection-create --name vpnconnection3 --vpnservice-id myvpn3 --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 192.142.0.61 --peer-id 192.142.0.61 --peer-cidr 5.5.5.0/24 --psk secret

  neutron ipsec-site-connection-create --name vpnconnection4 --vpnservice-id myvpn2 --ikepolicy-id ikepolicy2 --ipsecpolicy-id ipsecpolicy2 --peer-address 192.142.0.60 --peer-id 192.142.0.60 --peer-cidr 8.8.8.0/24 --psk secret

  The ipsec site connections vpnconnection3 and vpnconnection4 always
  go into the pending create state.

  Basically I am trying to bind two vpn service objects to one
  router id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1318550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368391] Re: sqlalchemy-migrate 0.9.2 is breaking nova unit tests

2014-10-01 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

** Changed in: cinder
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368391

Title:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in Database schema migration for SQLAlchemy:
  New

Bug description:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

  OperationalError: (OperationalError) cannot commit - no transaction is
  active u'COMMIT;' ()

  http://logs.openstack.org/39/117839/18/gate/gate-nova-
  python27/8a7aa8c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364659] Re: [Sahara][HEAT engine] It's impossible to assign 'default' security group to node group

2014-10-01 Thread Thierry Carrez
** Changed in: sahara
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1364659

Title:
  [Sahara][HEAT engine] It's impossible to assign 'default' security
  group to node group

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released

Bug description:
  Steps to repro:
  1. Use HEAT provisioning engine
  2. Login as admin user who has access to several tenants
  3. Create node group template with 'default' security group assigned
  4. Create cluster with this node group

  Expected result: cluster is created
  Observed result: Cluster in error state. Heat stack is in state
  {"stack_status_reason": "Resource CREATE failed: PhysicalResourceNameAmbiguity: Multiple physical resources were found with name (default).", "stack_status": "CREATE_FAILED"}

  Problem investigation:
  Heat searches for the security group name in all tenants accessible to the
  user, not only in the tenant where the stack is going to be created (heat
  bug?).

  Steps to make things better:
  1. We can allow specifying the security group by ID (see the sketch below)
  2. Horizon UI can use IDs instead of names for security groups
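
  An illustrative sketch of suggestion 1 (the client usage and names here
  are assumptions for illustration, not the actual Sahara/Horizon change):
  resolve the name to an ID within the current tenant before handing it to
  Heat, so a name like 'default' is never ambiguous:

      def security_group_ids_for_tenant(nova, names, tenant_id):
          """Map security group names to IDs within a single tenant."""
          return [g.id for g in nova.security_groups.list()
                  if g.name in names and g.tenant_id == tenant_id]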

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1364659/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365901] Re: cinder-api ran into hang loop in python2.6

2014-10-01 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365901

Title:
  cinder-api ran into hang loop in python2.6

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  cinder-api ran into hang loop in python2.6

  #cinder-api
  ...
  ...
  <snip>...
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: "'GreenSocket' object has no attribute 'fd'" in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: "'GreenSocket' object has no attribute 'fd'" in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: "'GreenSocket' object has no attribute 'fd'" in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: "'GreenSocket' object has no attribute 'fd'" in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: "'GreenSocket' object has no attribute 'fd'" in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: "'GreenSocket' object has no attribute 'fd'" in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  ...
  ...
  <snip>...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373950] Re: Serial proxy service and API broken by design

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373950

Title:
  Serial proxy service and API broken by design

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  As part of the blueprint https://blueprints.launchpad.net/nova/+spec
  /serial-ports we introduced an API extension and a websocket proxy
  binary. The problem with the two is that a lot of the code was copied
  verbatim from the novnc-proxy API and service, which relies heavily on
  the internal implementation details of the NoVNC and python-websockify
  libraries.

  We should not ship a service that will proxy websocket traffic if we
  do not actually serve a web-based client for it (in the NoVNC case, it
  has its own HTML5 VNC implementation that works over ws://). No
  similar thing was part of the proposed (and accepted) implementation.
  The websocket proxy based on websockify that we currently have
  actually assumes it will serve static content (which we don't do for
  the serial console case) which will then, when executed in the browser,
  initiate a websocket connection that sends the security token in the
  cookie: field of the request. All of this is specific to the NoVNC
  implementation (see:
  
https://github.com/kanaka/noVNC/blob/e4e9a9b97fec107b25573b29d2e72a6abf8f0a46/vnc_auto.html#L18)
  and does not make any sense for serial console functionality.

  The proxy service was introduced in
  https://review.openstack.org/#/c/113963/

  In a similar manner - the API that was proposed and implemented (in
  https://review.openstack.org/#/c/113966/) that gives us back the URL
  with the security token makes no sense for the same reasons outlined
  above.

  We should revert at least these 2 patches before the final Juno
  release as we do not want to ship a useless service and commit to a
  useless API method.

  We could then look into providing similar functionality through
  possibly something like https://github.com/chjj/term.js which will
  require us to write a different proxy service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371677] Re: Race in resource tracker causes 500 response on deleting during verify_resize state

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371677

Title:
  Race in resource tracker causes 500 response on deleting during
  verify_resize state

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  During a tempest run, the
  tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_verify_resize_state
  test will occasionally fail when it attempts to delete a server in the
  verify_resize state. The failure is caused by a 500 response being
  returned from nova. Looking at the nova-api log, this is caused by an rpc
  call never receiving a response:

  http://logs.openstack.org/10/110110/40/check/check-tempest-dsvm-
  postgres-
  full/4cd8a81/logs/screen-n-api.txt.gz#_2014-09-19_10_24_07_221

  looking at the n-cpu logs for the handling of that rpc call yields:

  http://logs.openstack.org/10/110110/40/check/check-tempest-dsvm-
  postgres-
  full/4cd8a81/logs/screen-n-cpu.txt.gz#_2014-09-19_10_24_31_404

  Which looks like it is coming from attempting to update the resource
  tracker, triggered by the server deletion. However the volume
  from that failure, according to the tempest log, is coming from a
  different test, in the test class ServerRescueNegativeTestJSON. It
  appears the tearDownClass for that test class is running concurrently
  with the failed test, and causing a race in the resource tracker,
  where the volume it expects to be there disappears, so when it goes to
  get the size it fails.

  Full logs for an example run that tripped this is here:
  
http://logs.openstack.org/10/110110/40/check/check-tempest-dsvm-postgres-full/4cd8a81

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369151] Re: TypeError: consume_from_instance() takes exactly 2 arguments

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369151

Title:
  TypeError: consume_from_instance() takes exactly 2 arguments

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of today we are seeing the following scheduler errors when trying
  to schedule Ironic instances:

  Sep 13 16:42:48 ubuntu nova-scheduler: a/local/lib/python2.7/site-
  packages/nova/scheduler/filter_scheduler.py", line 147, in
  select_destinations\n    filter_properties)\n', '  File
  "/opt/stack/venvs/nova/local/lib/python2.7/site-
  packages/nova/scheduler/filter_scheduler.py", line 300, in _schedule\n
  chosen_host.obj.consume_from_instance(context,
  instance_properties)\n', 'TypeError: consume_from_instance() takes
  exactly 2 arguments (3 given)\n']
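
  The mismatch, reconstructed from the traceback above (the signature shown
  is an illustration, not a quote from the nova source):

      class HostState(object):
          def consume_from_instance(self, instance):  # 2 args including self
              pass

      # The filter scheduler passed an extra positional argument:
      #   chosen_host.obj.consume_from_instance(context, instance_properties)
      # -> TypeError: consume_from_instance() takes exactly 2 arguments (3 given)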

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369627] Re: libvirt disk.config will have issues when booting two with different config drive values

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369627

Title:
  libvirt disk.config will have issues when booting two with different
  config drive values

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Currently, in the image creation code for Juno we have:

      if configdrive.required_by(instance):
          LOG.info(_LI('Using config drive'), instance=instance)

          image_type = self._get_configdrive_image_type()
          backend = image('disk.config', image_type)
          backend.cache(fetch_func=self._create_configdrive,
                        filename='disk.config' + suffix,
                        instance=instance,
                        admin_pass=admin_pass,
                        files=files,
                        network_info=network_info)

  The important thing to notice here is that we have
  filename='disk.config' + suffix.  This means that the filename for
  the config drive in the cache directory will be simply 'disk.config'
  followed by any potential suffix (e.g. '.rescue').  This name is not
  unique to the instance whose config drive we are creating.  Therefore,
  when we go to boot another instance with a different config drive, the
  cache function will detect the old config drive, and decide it doesn't
  need to create the new config drive with the appropriate config for
  the new instance.
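
  A minimal sketch of one possible remedy (illustrative only; the merged fix
  may differ): make the cache filename instance-specific so two instances
  can never share a cached config drive.

      # hypothetical variant of the cache() call quoted above
      backend.cache(fetch_func=self._create_configdrive,
                    filename='disk.config' + suffix + '-' + instance['uuid'],
                    instance=instance,
                    admin_pass=admin_pass,
                    files=files,
                    network_info=network_info)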

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368391] Re: sqlalchemy-migrate 0.9.2 is breaking nova unit tests

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368391

Title:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in Database schema migration for SQLAlchemy:
  New

Bug description:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

  OperationalError: (OperationalError) cannot commit - no transaction is
  active u'COMMIT;' ()

  http://logs.openstack.org/39/117839/18/gate/gate-nova-
  python27/8a7aa8c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367007] Re: Ironic driver requires extra_specs

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367007

Title:
  Ironic driver requires extra_specs

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Comments on review https://review.openstack.org/#/c/111429/ suggested
  that the Ironic driver should use

flavor = instance.get_flavor()

  instead of

    flavor = flavor_obj.Flavor.get_by_id(context,
                                         instance['instance_type_id'])

  During the crunch to land things before feature freeze, these were
  integrated in the proposal to the Nova tree prior to being landed in
  the Ironic tree (the only place where they would have been tested).
  These changes actually broke the driver, since it requires access to
  flavor['extra_specs'] -- which is not present in the instance's cached
  copy of the flavor.

  This problem was discovered when attempting to update the devstack
  config and begin testing with the driver from the Nova tree (rather
  than the copy of the driver in the Ironic tree). That patch is here:

  https://review.openstack.org/#/c/119844/

  The error being encountered can be seen both on the devstack patch
  (eg, in the Nova code)

  http://logs.openstack.org/44/119844/2/check/check-tempest-dsvm-
  virtual-ironic-nv/ce443f8/logs/screen-n-cpu.txt.gz

  and in the back-port of the same code to Ironic here:

  http://logs.openstack.org/65/119165/3/check/check-tempest-dsvm-
  virtual-
  ironic/c161a89/logs/screen-n-cpu.txt.gz#_2014-09-08_08_41_06_821

  
  ============
  Proposed fix
  ============

  Fetch flavor['extra_specs'] on demand, when needed by the Ironic
  driver.
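
  A hedged sketch of what "on demand" could look like in the driver
  (illustrative; the guard and names are assumptions, not the merged patch):

      flavor = instance.get_flavor()
      if 'extra_specs' not in flavor:  # the cached copy lacks extra_specs
          flavor = flavor_obj.Flavor.get_by_id(context,
                                               instance['instance_type_id'])
      extra_specs = flavor['extra_specs']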

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366859] Re: Ironic: extra_spec requirement 'amd64' does not match 'x86_64'

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366859

Title:
  Ironic: extra_spec requirement 'amd64' does not match 'x86_64'

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Using the latest Nova Ironic compute drivers (either from Ironic or
  Nova) I'm hitting scheduling ERRORS:

  Sep 08 15:26:45 localhost nova-scheduler[29761]: 2014-09-08
  15:26:45.620 29761 DEBUG
  nova.scheduler.filters.compute_capabilities_filter [req-9e34510e-268c-
  40de-8433-d7b41017b54e None] extra_spec requirement 'amd64' does not
  match 'x86_64' _satisfies_extra_specs
  /opt/stack/venvs/nova/lib/python2.7/site-
  packages/nova/scheduler/filters/compute_capabilities_filter.py:70

  I've gone ahead and patched in
  https://review.openstack.org/#/c/117555/.

  The issue seems to be that ComputeCapabilitiesFilter does not itself
  canonicalize instance_types when comparing them, which breaks
  existing TripleO baremetal clouds using x86_64 (amd64).
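
  Concretely, "does not itself canonicalize" means the filter compares raw
  strings; a sketch of the comparison before and after (the alias table is
  an assumption for illustration, not nova's actual one):

      ALIASES = {'amd64': 'x86_64'}

      def canonicalize(name):
          return ALIASES.get(name.lower(), name.lower())

      # before: 'amd64' != 'x86_64'                             -> host filtered out
      # after:  canonicalize('amd64') == canonicalize('x86_64') -> match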

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1366859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365061] Re: Warn against sorting requirements

2014-10-01 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

** Changed in: cinder
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365061

Title:
  Warn against sorting requirements

Status in Cinder:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Identity (Keystone) Middleware:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  Invalid
Status in OpenStack Object Storage (Swift):
  Fix Committed

Bug description:
  Contrary to bug 1285478, requirements files should not be sorted
  alphabetically. Given that requirements files can contain comments,
  I'd suggest a header in all requirements files along the lines of:

  # The order of packages is significant, because pip processes them in the order
  # of appearance. Changing the order has an impact on the overall integration
  # process, which may cause wedges in the gate later.

  This is the result of a mailing list discussion (thanks, Sean!):

  http://www.mail-archive.com/openstack-d...@lists.openstack.org/msg33927.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362854] Re: Incorrect regex on rootwrap for encrypted volumes ln creation

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362854

Title:
  Incorrect regex on rootwrap for encrypted volumes ln creation

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  While running Tempest tests against my device, the encryption tests
  consistently fail to attach.  Turns out the problem is an attempt to
  create symbolic link for encryption process, however the rootwrap spec
  is restricted to targets with the default openstack.org iqn.

  Error Message from n-cpu:

  Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln
  --symbolic --force /dev/mapper/ip-10.10.8.112:3260-iscsi-
  iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-
  6b4269af9d4f.4710-lun-0 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-
  iqn.2010-01.com.sol

  
  Rootwrap entry currently implemented:

  ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*, /dev/disk/by-path/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-
  python27/2536ea4/console.html

 FAIL:
  nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on /v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File "nova/tests/integrated/test_multiprocess_api.py", line 206, in test_terminate_sigterm
  2014-08-15 13:46:09.156 |     self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File "nova/tests/integrated/test_multiprocess_api.py", line 194, in _terminate_with_signal
  2014-08-15 13:46:09.156 |     self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File "nova/tests/integrated/test_multiprocess_api.py", line 146, in wait_on_process_until_end
  2014-08-15 13:46:09.157 |     time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py", line 31, in sleep
  2014-08-15 13:46:09.157 |     hub.switch()
  2014-08-15 13:46:09.157 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 287, in switch
  2014-08-15 13:46:09.157 |     return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 339, in run
  2014-08-15 13:46:09.158 |     self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 82, in wait
  2014-08-15 13:46:09.158 |     sleep(seconds)
  2014-08-15 13:46:09.158 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
  2014-08-15 13:46:09.158 |     raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361792] Re: pci requests saved as system metadata can be out of bound

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361792

Title:
  pci requests saved as system metadata can be out of bound

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The system metadata table stores key-value pairs with a value size limit
  of 255 bytes. PCI requests are saved as a JSON document in the system
  metadata table, and their size depends on the number of PCI requests,
  possibly more than 255 bytes. Currently, when the value goes out of
  bounds, the DB throws an exception and the instance fails to boot. This
  needs to be changed to work with any size of PCI requests.
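
  An illustrative way around a fixed 255-byte value column (not necessarily
  the approach nova took): split the serialized JSON across several keys
  and rejoin it on read.

      import json

      CHUNK = 255

      def store_chunked(sysmeta, key, obj):
          blob = json.dumps(obj)
          for i in range(0, len(blob), CHUNK):
              sysmeta['%s_%d' % (key, i // CHUNK)] = blob[i:i + CHUNK]

      def load_chunked(sysmeta, key):
          parts, i = [], 0
          while '%s_%d' % (key, i) in sysmeta:
              parts.append(sysmeta['%s_%d' % (key, i)])
              i += 1
          return json.loads(''.join(parts)) if parts else None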

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357055] Re: Race to delete shared subnet in Tempest neutron full jobs

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357055

Title:
  Race to delete shared subnet in Tempest neutron full jobs

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  New

Bug description:
  This seems to show up in several different tests, basically anything
  using neutron.  I noticed it here:

  http://logs.openstack.org/89/112889/1/gate/check-tempest-dsvm-neutron-
  full/21fcf50/console.html#_2014-08-14_17_03_10_330

  That's on a stable/icehouse change, but logstash shows this on master
  mostly.

  I see this in the neutron server logs:

  http://logs.openstack.org/89/112889/1/gate/check-tempest-dsvm-neutron-
  full/21fcf50/logs/screen-q-svc.txt.gz#_2014-08-14_16_45_02_101

  This query shows 82 hits in 10 days:

  message:"delete failed \(client error\)\: Unable to complete operation
  on subnet" AND message:"One or more ports have an IP allocation from
  this subnet" AND tags:"screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZGVsZXRlIGZhaWxlZCBcXChjbGllbnQgZXJyb3JcXClcXDogVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wNy0zMVQxOTo0Mzo0NSswMDowMCIsInRvIjoiMjAxNC0wOC0xNFQxOTo0Mzo0NSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4MDQ1NDY1OTU2fQ==

  Logstash doesn't show this in the gate queue but it does show up in
  the uncategorized bugs list which is in the gate queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343604] Re: Exceptions thrown, and messages logged by execute() may include passwords

2014-10-01 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

** Changed in: cinder
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343604

Title:
  Exceptions thrown, and messages logged by execute() may include
  passwords

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in The Oslo library incubator:
  Fix Released
Status in OpenStack Security Advisories:
  Triaged
Status in Openstack Database (Trove):
  Fix Committed
Status in Trove icehouse series:
  Fix Committed

Bug description:
  Currently when execute() throws a ProcessExecutionError, it returns
  the command without masking passwords. In the one place where it logs
  the command, it correctly masks the password.

  It would be prudent to mask the password in the exception as well so
  that upstream catchers don't have to go through the mask_password()
  motions.

  The same also goes for stdout and stderr information which should be
  sanitized.
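
  A minimal sketch of the suggested hardening (illustrative; it assumes a
  mask_password() helper like the one oslo provides and is not the actual
  patch):

      from oslo_utils import strutils  # assumed available for this sketch

      class ProcessExecutionError(Exception):
          def __init__(self, cmd='', stdout='', stderr='', exit_code=None):
              # sanitize everything that may echo credentials upstream
              self.cmd = strutils.mask_password(cmd)
              self.stdout = strutils.mask_password(stdout)
              self.stderr = strutils.mask_password(stderr)
              self.exit_code = exit_code
              super(ProcessExecutionError, self).__init__(
                  'Command failed: %s' % self.cmd)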

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1343604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349452] Re: apparent deadlock on lock_bridge in n-cpu

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349452

Title:
  apparent deadlock on lock_bridge in n-cpu

Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in Oslo Concurrency Library:
  New

Bug description:
  It's not clear if n-cpu is dying trying to acquire the lock
  "lock_bridge" or if it's just hanging.

  http://logs.openstack.org/08/109108/1/check/check-tempest-dsvm-
  full/4417111/logs/screen-n-cpu.txt.gz

  The logs for n-cpu stop about 15 minutes before the rest of the test
  run, and all tests doing things that require the hypervisor executed
  after that point fail with different errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1100697] Re: libvirt should enable pae setting for Xen or KVM guest

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1100697

Title:
  libvirt should enable pae setting for Xen or KVM guest

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently, nova doesn't enable the pae setting for Xen or KVM guests in
  its libvirt driver. Windows guests (Win7 in my environment) would not
  boot successfully in such a case. This patch adds the pae setting in the
  libvirt driver for Xen or KVM guests, which fixes this problem.
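
  In libvirt guest XML terms the change amounts to emitting a pae feature
  flag under <features>; a hedged sketch of the idea (class and attribute
  names are illustrative, not the exact nova patch):

      # inside the libvirt config builder
      if CONF.libvirt.virt_type in ('xen', 'kvm'):
          guest.features.append(vconfig.LibvirtConfigGuestFeaturePAE())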

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1100697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374140] Re: Need to log the original libvirtError when InterfaceDetachFailed

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374140

Title:
  Need to log the original libvirtError when InterfaceDetachFailed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This is not really useful:

  http://logs.openstack.org/17/123917/2/check/check-tempest-dsvm-
  neutron/4bc2052/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-09-25_17_35_11_635

  2014-09-25 17:35:11.635 ERROR nova.virt.libvirt.driver [req-50afcbfb-203e-454d-a7eb-1549691caf77 TestNetworkBasicOps-985093118 TestNetworkBasicOps-1055683132] [instance: 960ee0b1-9c96-4d5b-b5f5-be76ae19a536] detaching network adapter failed.
  2014-09-25 17:35:11.635 27689 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: <nova.objects.instance.Instance object at 0x422fe90>
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 393, in decorated_function
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 4411, in detach_interface
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher     self.driver.detach_interface(instance, condemned)
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1448, in detach_interface
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher     raise exception.InterfaceDetachFailed(instance)
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher InterfaceDetachFailed: <nova.objects.instance.Instance object at 0x422fe90>
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher

  
  The code is logging that there was an error, but not the error itself:

      try:
          self.vif_driver.unplug(instance, vif)
          flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
          state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
          if state == power_state.RUNNING or state == power_state.PAUSED:
              flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
          virt_dom.detachDeviceFlags(cfg.to_xml(), flags)
      except libvirt.libvirtError as ex:
          error_code = ex.get_error_code()
          if error_code == libvirt.VIR_ERR_NO_DOMAIN:
              LOG.warn(_LW("During detach_interface, "
                           "instance disappeared."),
                       instance=instance)
          else:
              LOG.error(_LE('detaching network adapter failed.'),
                        instance=instance)
              raise exception.InterfaceDetachFailed(
                  instance_uuid=instance['uuid'])

  We should log the original libvirt error.
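
  One possible shape of that logging (an illustrative sketch, not the
  merged change):

          else:
              LOG.error(_LE('detaching network adapter failed: %s'),
                        ex, instance=instance)
              raise exception.InterfaceDetachFailed(
                  instance_uuid=instance['uuid'])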

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374902] Re: missing vcpupin elements in cputune for numa case

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374902

Title:
  missing vcpupin elements in cputune for numa case

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Boot instance with flavor as below:
  os@os2:~$ nova flavor-show 100
  +----------------------------+------------------------+
  | Property                   | Value                  |
  +----------------------------+------------------------+
  | OS-FLV-DISABLED:disabled   | False                  |
  | OS-FLV-EXT-DATA:ephemeral  | 0                      |
  | disk                       | 0                      |
  | extra_specs                | {"hw:numa_nodes": "2"} |
  | id                         | 100                    |
  | name                       | numa.nano              |
  | os-flavor-access:is_public | True                   |
  | ram                        | 512                    |
  | rxtx_factor                | 1.0                    |
  | swap                       |                        |
  | vcpus                      | 8                      |
  +----------------------------+------------------------+

  The result is:

    <cputune>
      <vcpupin vcpu='3' cpuset='0-7,16-23'/>
      <vcpupin vcpu='7' cpuset='8-15,24-31'/>
    </cputune>

  The cputune should be:

    <cputune>
      <vcpupin vcpu='0' cpuset='0-7,16-23'/>
      <vcpupin vcpu='1' cpuset='0-7,16-23'/>
      <vcpupin vcpu='2' cpuset='0-7,16-23'/>
      <vcpupin vcpu='3' cpuset='0-7,16-23'/>
      <vcpupin vcpu='4' cpuset='8-15,24-31'/>
      <vcpupin vcpu='5' cpuset='8-15,24-31'/>
      <vcpupin vcpu='6' cpuset='8-15,24-31'/>
      <vcpupin vcpu='7' cpuset='8-15,24-31'/>
    </cputune>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375432] Re: Duplicate entry in gitignore

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375432

Title:
  Duplicate entry in gitignore

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The .gitignore file for nova contains the line for the sample config
  file, etc/nova/nova.conf.sample, twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374458] Re: test_encrypted_cinder_volumes_luks fails to detach encrypted volume

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374458

Title:
  test_encrypted_cinder_volumes_luks fails to detach encrypted volume

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/98/124198/3/check/check-grenade-dsvm-
  icehouse/c89f18f/console.html#_2014-09-26_03_38_56_940

  2014-09-26 03:38:57.259 | Traceback (most recent call last):
  2014-09-26 03:38:57.259 |   File "tempest/scenario/manager.py", line 142, in delete_wrapper
  2014-09-26 03:38:57.259 |     delete_thing(*args, **kwargs)
  2014-09-26 03:38:57.259 |   File "tempest/services/volume/json/volumes_client.py", line 108, in delete_volume
  2014-09-26 03:38:57.259 |     resp, body = self.delete("volumes/%s" % str(volume_id))
  2014-09-26 03:38:57.259 |   File "tempest/common/rest_client.py", line 225, in delete
  2014-09-26 03:38:57.259 |     return self.request('DELETE', url, extra_headers, headers, body)
  2014-09-26 03:38:57.259 |   File "tempest/common/rest_client.py", line 435, in request
  2014-09-26 03:38:57.259 |     resp, resp_body)
  2014-09-26 03:38:57.259 |   File "tempest/common/rest_client.py", line 484, in _error_checker
  2014-09-26 03:38:57.259 |     raise exceptions.BadRequest(resp_body)
  2014-09-26 03:38:57.259 | BadRequest: Bad request
  2014-09-26 03:38:57.260 | Details: {u'message': u'Invalid volume: Volume status must be available or error, but current status is: in-use', u'code': 400}
  2014-09-26 03:38:57.260 | }}}
  2014-09-26 03:38:57.260 |
  2014-09-26 03:38:57.260 | traceback-2: {{{
  2014-09-26 03:38:57.260 | Traceback (most recent call last):
  2014-09-26 03:38:57.260 |   File "tempest/common/rest_client.py", line 561, in wait_for_resource_deletion
  2014-09-26 03:38:57.260 |     raise exceptions.TimeoutException(message)
  2014-09-26 03:38:57.260 | TimeoutException: Request timed out
  2014-09-26 03:38:57.260 | Details: (TestEncryptedCinderVolumes:_run_cleanups) Failed to delete resource 704461b6-3421-4959-8113-a011e6410ede within the required time (196 s).
  2014-09-26 03:38:57.260 | }}}
  2014-09-26 03:38:57.260 |
  2014-09-26 03:38:57.261 | traceback-3: {{{
  2014-09-26 03:38:57.261 | Traceback (most recent call last):
  2014-09-26 03:38:57.261 |   File "tempest/services/volume/json/admin/volume_types_client.py", line 97, in delete_volume_type
  2014-09-26 03:38:57.261 |     resp, body = self.delete("types/%s" % str(volume_id))
  2014-09-26 03:38:57.261 |   File "tempest/common/rest_client.py", line 225, in delete
  2014-09-26 03:38:57.261 |     return self.request('DELETE', url, extra_headers, headers, body)
  2014-09-26 03:38:57.261 |   File "tempest/common/rest_client.py", line 435, in request
  2014-09-26 03:38:57.261 |     resp, resp_body)
  2014-09-26 03:38:57.261 |   File "tempest/common/rest_client.py", line 484, in _error_checker
  2014-09-26 03:38:57.261 |     raise exceptions.BadRequest(resp_body)
  2014-09-26 03:38:57.261 | BadRequest: Bad request
  2014-09-26 03:38:57.261 | Details: {u'message': u'Target volume type is still in use.', u'code': 400}
  2014-09-26 03:38:57.262 | }}}
  2014-09-26 03:38:57.262 |
  2014-09-26 03:38:57.262 | Traceback (most recent call last):
  2014-09-26 03:38:57.262 |   File "tempest/test.py", line 142, in wrapper
  2014-09-26 03:38:57.262 |     return f(self, *func_args, **func_kwargs)
  2014-09-26 03:38:57.262 |   File "tempest/scenario/test_encrypted_cinder_volumes.py", line 56, in test_encrypted_cinder_volumes_luks
  2014-09-26 03:38:57.262 |     self.attach_detach_volume()
  2014-09-26 03:38:57.262 |   File "tempest/scenario/test_encrypted_cinder_volumes.py", line 49, in attach_detach_volume
  2014-09-26 03:38:57.262 |     self.nova_volume_detach()
  2014-09-26 03:38:57.262 |   File "tempest/scenario/manager.py", line 439, in nova_volume_detach
  2014-09-26 03:38:57.262 |     'available')
  2014-09-26 03:38:57.262 |   File "tempest/services/volume/json/volumes_client.py", line 181, in wait_for_volume_status
  2014-09-26 03:38:57.263 |     raise exceptions.TimeoutException(message)
  2014-09-26 03:38:57.263 | TimeoutException: Request timed out
  2014-09-26 03:38:57.263 | Details: Volume 704461b6-3421-4959-8113-a011e6410ede failed to reach available status within the required time (196 s).

  
  

[Yahoo-eng-team] [Bug 1374666] Re: Nova devref documentation on hooks is incorrect

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374666

Title:
  Nova devref documentation on hooks is incorrect

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The syntax suggested for adding hooks to nova via setup.py entrypoints
  is:

  entry_points = {
      'nova.hooks': [
          'resize_hook': "your_package.hooks.YourHookClass",
      ]
  },

  But this is incorrect.  The class name and module name need to be
  delimited with ':':

  entry_points = {
      'nova.hooks': [
          'resize_hook': "your_package.hooks:YourHookClass",
      ]
  },

  Following the example in the existing documentation will result in hooks
  that never get called.
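
  A quick, illustrative way to verify that a hook entry point resolves
  (pkg_resources ships with setuptools; this snippet is a convenience
  added here, not part of the bug report):

      import pkg_resources

      for ep in pkg_resources.iter_entry_points('nova.hooks'):
          # ep.load() raises if the module:class delimiter is wrong
          print(ep.name, ep.load())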

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374158] Re: Typo in call to LibvirtConfigObject's parse_dom() method

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
   Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374158

Title:
  Typo in call to LibvirtConfigObject's parse_dom() method

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In Juno in nova/virt/libvirt/config.py:

  LibvirtConfigGuestCPUNUMA.parse_dom() calls super with a capital 'D',
  i.e. parse_Dom().

  super(LibvirtConfigGuestCPUNUMA, self).parse_Dom(xmldoc)

  LibvirtConfigObject does not have a 'parse_Dom()' method. It has a
  'parse_dom()' method. This causes the following exception to be
  raised.

  ...
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 1733, in 
parse_dom
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack obj.parse_dom(c)
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 542, in 
parse_dom
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
numa.parse_dom(child)
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 509, in 
parse_dom
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
super(LibvirtConfigGuestCPUNUMA, self).parse_Dom(xmldoc)
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstackAttributeError: 'super' 
object has no attribute 'parse_Dom'
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
  2014-09-25 15:35
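
  The fix is a one-line change to the super() call; a minimal sketch,
  with the rest of the class elided:

  # nova/virt/libvirt/config.py (sketch of the corrected call)
  super(LibvirtConfigGuestCPUNUMA, self).parse_dom(xmldoc)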

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373962] Re: LVM backed VM fails to launch

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373962

Title:
  LVM backed VM fails to launch

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  LVM ephemeral storage backend is broken in the most recent Nova
  (commit 945646e1298a53be6ae284766f5023d754dfe57d)

  To reproduce in Devstack:

  1. Configure Nova to use LVM ephemeral storage by adding to
  create_nova_conf function in lib/nova

  iniset $NOVA_CONF libvirt images_type lvm
  iniset $NOVA_CONF libvirt images_volume_group nova-lvm

  2. Create a backing file for LVM

  truncate -s 5G nova-backing-file

  3. Mount the file via loop device

  sudo losetup /dev/loop0 nova-backing-file

  4. Create nova-lvm volume group

  sudo vgcreate nova-lvm /dev/loop0

  5. Launch Devstack

  6. Alternatively, skipping step 1, /etc/nova/nova.conf can be modified
  after Devstack is launched by adding

  [libvirt]
  images_type = lvm
  images_volume_group = nova-lvm

  and then restarting nova-compute by entering the Devstack screen
  session, going to the n-cpu screen and hitting Ctrl-C, Up-arrow, and
  Enter.

  7. Launch an instance

  nova boot test --flavor 1 --image cirros-0.3.2-x86_64-uec

  Instance fails to launch. Nova compute reports

  2014-09-25 10:11:08.180 ERROR nova.compute.manager 
[req-b7924ad0-5f4b-46eb-a798-571d97c77145 demo demo] [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] Instance failed to spawn
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] Traceback (most recent call last):
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/compute/manager.py, line 2231, in _build_resources
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] yield resources
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/compute/manager.py, line 2101, in _build_and_run_instance
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] block_device_info=block_device_info)
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2617, in spawn
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] block_device_info, 
disk_info=disk_info)
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 4434, in 
_create_domain_and_network
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] domain.destroy()
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 4358, in _create_domain
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] for vif in network_info if 
vif.get('active', True) is False]
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] six.reraise(self.type_, self.value, 
self.tb)
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 4349, in _create_domain
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] raise 
exception.VirtualInterfaceCreateException()
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 183, in doit
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 141, in 
proxy_call
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] rv = execute(f, *args, **kwargs)
  2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1372845] Re: libvirt: Instance NUMA fitting code fails to account for vcpu_pin_set config option properly

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372845

Title:
  libvirt: Instance NUMA fitting code fails to account for vcpu_pin_set
  config option properly

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Looking at this branch of the NUMA fitting code

  
https://github.com/openstack/nova/blob/51de439a4d1fe5e17d59d3aac3fd2c49556e641b/nova/virt/libvirt/driver.py#L3738

  We do not account for allowed cpus when choosing viable cells for the
  given instance. meaning we could chose a NUMA cell that has no viable
  CPUs which we will try to pin to.

  We need to consider allowed_cpus when calculating viable NUMA cells
  for the instance.
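
  A minimal sketch of the kind of check that is missing (host_cells and
  allowed_cpus are illustrative names, not the actual driver code):

  # Sketch: only cells with at least one allowed CPU are viable
  # candidates for pinning.
  def viable_cells(host_cells, allowed_cpus):
      allowed = set(allowed_cpus)
      return [cell for cell in host_cells if set(cell.cpuset) & allowed]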

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373159] Re: NUMA Topology cell memory sent to xml in MiB, but qemu uses KiB

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373159

Title:
  NUMA Topology cell memory sent to xml in MiB, but qemu uses KiB

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently when specifying NUMA cell memory via flavor extra_specs or
  image properties, MiB units are used. According to the libvirt xml
  domain format documentation (http://libvirt.org/formatdomain.html) ,
  cell memory should be specified in KiB.

  In this example, we use the following extra_specs:
  hw:numa_policy: strict, hw:numa_mem.1: 2048, hw:numa_mem.0: 6144, 
hw:numa_nodes: 2, hw:numa_cpus.0: 0,1,2, hw:numa_cpus.1: 3

  The flavor has 8192 MB of ram and 4 vcpus.

  When using qemu 2.1.0, the following will be seen in the n-cpu logs
  when booting a machine with NUMA specs.

  libvirtError: internal error: process exited while connecting to
  monitor: qemu-system-x86_64: total memory for NUMA nodes (8388608)
  should equal RAM size (200000000)

  Please note that the 200000000 is 8388608 KiB in bytes and hex (simply
  an issue with the qemu error message). The error shows that 8192 KiB
  is being requested rather than 8192 MiB. Because the RAM size does not
  equal the total memory size, the machine fails to boot.

  When using versions of qemu lower than 2.1.0 the issue is not obvious,
  as machines with  NUMA specs boot, but only because of a bug (that has
  since been resolved) in qemu. This is because the check to ensure that
  RAM size equals the NUMA node total memory does not happen in versions
  lower than 2.1.0

  In short, we should be using KiB units for NUMA cell memory, or at
  least be converting from MiB to KiB before creating the xml.
  Otherwise, NUMA placement will not behave as intended.

  To be fair, I haven't had the chance to look at the memory placement
  in a guest booted using qemu 2.0.0 or lower, though I suspect the
  memory placement would be incorrect. If anyone has the chance to
  look, it would be greatly appreciated.

  I am currently investigating the appropriate fix for this alongside
  Tiago Mello. We made a quick fix in /nova/virt/libvirt/config.py on
  line 495:

  cell.set("memory", str(self.memory * 1024))

  Multiplying by 1024 allowed the machine to properly boot, but it is
  probably a bit too quick and dirty. Just thought it would be worth
  mentioning.
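
  For reference, a minimal sketch of the conversion itself (illustrative,
  not the final patch):

  KIB_PER_MIB = 1024
  memory_mib = 8192                       # the flavor's RAM from the example
  memory_kib = memory_mib * KIB_PER_MIB   # 8388608 KiB, matching the NUMA
                                          # total in the qemu error above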

  Sys-info:
  x86_64 machine

  Virt-info:
  qemu version 2.1.0
  libvirt version 1.2.2

  Kenerl-info:
  3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64 x86_64 
x86_64 GNU/Linux

  OS-info:
  Distributor ID:   Ubuntu
  Description:  Ubuntu 14.04.1 LTS
  Release:  14.04
  Codename: trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373230] Re: start/stop instance in EC2 API shouldn't return active/stopped status immediately

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373230

Title:
  start/stop instance in EC2 API shouldn't return active/stopped status
  immediately

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Always see this error in the gate:

  http://logs.openstack.org/73/122873/1/gate/gate-tempest-dsvm-neutron-
  full/e5a2bf6/logs/screen-n-cpu.txt.gz?level=ERROR#_2014-09-21_05_18_23_709

  014-09-21 05:18:23.709 ERROR oslo.messaging.rpc.dispatcher [req-
  52e7fee5-65ee-4c4d-abcc-099b29352846 InstanceRunTest-2053569555
  InstanceRunTest-179702724] Exception during message handling:
  Unexpected task state: expecting [u'powering-off'] but the actual
  state is deleting

  Checking the EC2 API test in tempest,

  def test_run_stop_terminate_instance(self):
      # EC2 run, stop and terminate instance
      image_ami = self.ec2_client.get_image(self.images["ami"]["image_id"])
      reservation = image_ami.run(kernel_id=self.images["aki"]["image_id"],
                                  ramdisk_id=self.images["ari"]["image_id"],
                                  instance_type=self.instance_type)
      rcuk = self.addResourceCleanUp(self.destroy_reservation, reservation)

      for instance in reservation.instances:
          LOG.info("state: %s", instance.state)
          if instance.state != "running":
              self.assertInstanceStateWait(instance, "running")

      for instance in reservation.instances:
          instance.stop()
          LOG.info("state: %s", instance.state)
          if instance.state != "stopped":
              self.assertInstanceStateWait(instance, "stopped")

      self._terminate_reservation(reservation, rcuk)

  The test waits for the instance to become "stopped". But checking the
  EC2 API code
  https://github.com/openstack/nova/blob/master/nova/api/ec2/cloud.py#L1075

  shows it always returns the stopped status immediately. The start/stop
  action is actually an asynchronous call.
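
  Until the API reflects the real state, a caller effectively has to
  poll; a minimal sketch (get_state, timeout and interval are
  illustrative):

  import time

  def wait_for_state(get_state, wanted, timeout=196, interval=2):
      # Poll until the asynchronous power state change completes.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if get_state() == wanted:
              return True
          time.sleep(interval)
      return False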

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373535] Re: obj_make_compatible is wrong

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373535

Title:
  obj_make_compatible is wrong

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Commit 7cdfdccf1bb936d559bd3e247094a817bb3c03f4 attempted to make the
  obj_make_compatible calls consistent, but it actually changed them the
  wrong way.

  Change https://review.openstack.org/#/c/121663/ addresses the bug but
  is sitting on top of a change that might be too risky at this point
  for juno-rc1, so this should be a separate fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372829] Re: vcpu_pin_set setting raises exception

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372829

Title:
  vcpu_pin_set setting raises exception

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Once vcpu_pin_set=0-9 is enabled in nova.conf, the following exception
  is raised:

  2014-09-23 11:00:41.603 14427 DEBUG nova.openstack.common.processutils [-] 
Result was 0 execute /opt/stack/nova/nova/openstack/common/processutils.py:195
  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 
455, in fire_timers
  timer()
File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 
58, in __call__
  cb(*args, **kw)
File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 168, 
in _do_send
  waiter.switch(result)
File /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 
212, in main
  result = function(*args, **kwargs)
File /opt/stack/nova/nova/openstack/common/service.py, line 490, in 
run_service
  service.start()
File /opt/stack/nova/nova/service.py, line 181, in start
  self.manager.pre_start_hook()
File /opt/stack/nova/nova/compute/manager.py, line 1152, in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File /opt/stack/nova/nova/compute/manager.py, line 5922, in 
update_available_resource
  nodenames = set(self.driver.get_available_nodes())
File /opt/stack/nova/nova/virt/driver.py, line 1237, in 
get_available_nodes
  stats = self.get_host_stats(refresh=refresh)
File /opt/stack/nova/nova/virt/libvirt/driver.py, line 5760, in 
get_host_stats
  return self.host_state.get_host_stats(refresh=refresh)
File /opt/stack/nova/nova/virt/libvirt/driver.py, line 470, in host_state
  self._host_state = HostState(self)
File /opt/stack/nova/nova/virt/libvirt/driver.py, line 6320, in __init__
  self.update_status()
File /opt/stack/nova/nova/virt/libvirt/driver.py, line 6376, in 
update_status
  numa_topology = self.driver._get_host_numa_topology()
File /opt/stack/nova/nova/virt/libvirt/driver.py, line 4869, in 
_get_host_numa_topology
  cell.cpuset &= allowed_cpus
  TypeError: unsupported operand type(s) for &=: 'set' and 'list'
  2014-09-23 11:00:42.032 14427 ERROR nova.openstack.common.threadgroup [-] 
unsupported operand type(s) for &=: 'set' and 'list'
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 125, in wait
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 47, in wait
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 173, in 
wait
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 121, in wait
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 293, in 
switch
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 212, in 
main
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/service.py, line 490, in run_service
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/service.py, line 181, in start
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/compute/manager.py, line 1152, in pre_start_hook
  2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup 
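
  The failing statement combines a set with a list; a minimal sketch of
  the mismatch and one possible fix (allowed_cpus as a list is inferred
  from the traceback):

  cpuset = {0, 1, 2, 3}
  allowed_cpus = [0, 1]
  # cpuset &= allowed_cpus    # raises the TypeError above: set &= list
  cpuset &= set(allowed_cpus)  # coercing to a set makes &= well-defined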

[Yahoo-eng-team] [Bug 1372218] Re: servers.list, filtering on metadata doesn't work. unicode error

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372218

Title:
  servers.list, filtering on metadata doesn't work. unicode error

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I'm trying to list servers by filtering on system_metadata or
  metadata.

  I should be able to do something like (looking into the code)

  nclient.servers.list(search_opts={'system_metadata': {some_value:
  some_key}, 'all_tenants': 1})

  But this dictionary gets turned into a unicode string. I get a 500
  back from nova with the below stack trace in nova-api.

  The offending code is in exact_filter in the db api. It expects a
  list of dicts or a single dict for the system_metadata or metadata
  keys when searching. It looks like this used to work, but now something
  higher up in the stack is coercing the value into a string.
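
  A minimal sketch of an exact_filter-style match that accepts either a
  single dict or a list of dicts (illustrative, not the actual db api
  code):

  def matches_metadata(instance_metadata, wanted):
      # exact_filter expects a dict or a list of dicts, never a string.
      if isinstance(wanted, dict):
          wanted = [wanted]
      return all(instance_metadata.get(k) == v
                 for item in wanted for k, v in item.items())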

  
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 582, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 131, in 
__call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/wsgi.py, line 917, in __call__
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack content_type, 
body, accept)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/wsgi.py, line 983, in _process_stack
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/wsgi.py, line 1070, in dispatch
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/compute/servers.py, line 520, in detail
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack servers = 
self._get_servers(req, is_detail=True)
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/compute/servers.py, line 603, in _get_servers
  2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack want_objects=True)
  

[Yahoo-eng-team] [Bug 1370536] Re: DB migrations can go unchecked

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370536

Title:
  DB migrations can go unchecked

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently DB migrations can be added to the tree without the
  corresponding migration tests. This is bad and means that we have some
  that are untested in the tree already.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369984] Re: NUMA topology checking will not check if instance can fit properly.

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369984

Title:
  NUMA topology checking will not check if instance can fit properly.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When testing whether the instance can fit into the host topology, the
  code currently does not take into account the number of cells the
  instance has, and will only claim matching cells and pass an instance
  if the matching cells fit.

  So, for example, a 4 NUMA cell instance would pass the claims test on a
  2 NUMA cell host, as long as the first 2 cells fit, without
  considering that the whole instance will not actually fit.
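
  A minimal sketch of the missing precondition (illustrative names):

  def can_fit(instance_cells, host_cells):
      # The whole instance must fit: it cannot have more NUMA cells
      # than the host, no matter how well the first cells match.
      return len(instance_cells) <= len(host_cells)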

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369984/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371072] Re: xenapi: should clean up old snapshots before creating a new one

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371072

Title:
  xenapi: should clean up old snapshots before creating a new one

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When nova-compute gets forcibly restarted, or fails, we get left over
  snapshots.

  We have some clean up code for after nova-compute comes back up, but
  it would be good to clean up older snapshots, and generally try to
  minimize the size of the snapshot that goes to glance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369537] Re: LibvirtConnTestCase.test_create_propagates_exceptions takes 30 seconds to run

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369537

Title:
  LibvirtConnTestCase.test_create_propagates_exceptions takes 30 seconds
  to run

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This is clearly unacceptable for a test case:

  $ .venv/bin/python -m testtools.run 
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions
  Tests running...

  Ran 1 test in 31.816s
  OK

  
  It is caused by a looping sleep in the disk mount code, which shouldn't
  even be run in this test case.

File nova/virt/libvirt/driver.py, line 4412, in 
_create_domain_and_network
  disk_info):
File /usr/lib64/python2.7/contextlib.py, line 17, in __enter__
  return self.gen.next()
File nova/virt/libvirt/driver.py, line 4308, in _lxc_disk_handler
  self._create_domain_setup_lxc(instance, block_device_info, disk_info)
File nova/virt/libvirt/driver.py, line 4260, in 
_create_domain_setup_lxc
  use_cow=use_cow)
File nova/virt/disk/api.py, line 386, in setup_container
  dev = img.mount()
File nova/virt/disk/api.py, line 306, in mount
  if mounter.do_mount():
File nova/virt/disk/mount/api.py, line 218, in do_mount
  status = self.get_dev() and self.map_dev() and self.mnt_dev()
File nova/virt/disk/mount/nbd.py, line 120, in get_dev
  return self._get_dev_retry_helper()
File nova/virt/disk/mount/api.py, line 121, in _get_dev_retry_helper
  time.sleep(2)
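
  One common way to keep such retry loops out of unit-test wall time is
  to stub out the sleep; a minimal sketch using mock (illustrative, not
  the actual fix):

  import time
  import mock

  # Patch time.sleep so the nbd retry helper's waits become no-ops.
  with mock.patch.object(time, 'sleep'):
      time.sleep(2)  # returns immediately; the retry loop costs nothing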

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369502] Re: NUMA topology _get_constraints_auto assumes flavor object

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369502

Title:
  NUMA topology _get_constraints_auto assumes flavor object

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This results in AttributeError: 'dict' object has no attribute 'vcpus'
  if we try to boot with a flavor that makes Nova decide on an automatic
  topology (for example, providing only the number of nodes via the
  hw:numa_nodes extra_spec).
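
  A minimal sketch of tolerating both forms (illustrative, not the
  actual fix):

  def flavor_vcpus(flavor):
      # Accept either a Flavor object or a plain dict.
      if isinstance(flavor, dict):
          return flavor['vcpus']
      return flavor.vcpus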

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370077] Re: Set default vnic_type in neutron.

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370077

Title:
  Set default vnic_type in neutron.

Status in OpenStack Neutron (virtual network service):
  Won't Fix
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  https://review.openstack.org/#/c/72334/ introduced binding:vnic_type
  into the neutron port binding extension. Ml2 plugin has been updated
  to support the vnic_type, but other plugins may not. Nova expects every
  port to have a correct vnic_type. Therefore, neutron should make sure
  each port has this attribute set correctly. By default, the vnic_type
  should be VNIC_TYPE_NORMAL.
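
  A minimal sketch of the defaulting behaviour described above (the
  'normal' value comes from the port binding extension; the helper
  itself is illustrative):

  VNIC_TYPE_NORMAL = 'normal'

  def ensure_vnic_type(port):
      # Guarantee every port carries a vnic_type for Nova to consume.
      port.setdefault('binding:vnic_type', VNIC_TYPE_NORMAL)
      return port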

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369009] Re: network_get_all_by_host query fails to use index

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369009

Title:
  network_get_all_by_host query fails to use index

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The get_networks_by_host query fails to use the existing indexes on
  fixed ips. This means a large number of fixed_ips can lead to the
  database getting bogged down and taking multiple seconds to return.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370578] Re: Ironic Hostmanager does not pass hypervisor_type to filters

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370578

Title:
  Ironic Hostmanager does not pass hypervisor_type to filters

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The Ironic HostManager does not include the compute node hypervisor
  values such as type, version, and hostname.

  Including these values, which are included by the normal HostManager,
  is needed to allow the capabilities filter to work in a combined
  Ironic / virt Nova deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like file deletions or switching branches, can create
  spurious errors. This can be suppressed by setting
  PYTHONDONTWRITEBYTECODE=1 in the tox.ini.
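
  A minimal tox.ini sketch of that setting:

  [testenv]
  setenv = PYTHONDONTWRITEBYTECODE=1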

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369508] Re: Instance with NUMA topology causes exception in the scheduler

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369508

Title:
  Instance with NUMA topology causes exception in the scheduler

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This was reported by Michael Turek as he was testing this while the
  patches were still in flight. See:
  https://review.openstack.org/#/c/114938/26/nova/virt/hardware.py

  As described there, the code makes a bad assumption about
  the format in which it will get the data in the scheduler, which
  results in:

  2014-09-15 10:45:44.906 ERROR oslo.messaging.rpc.dispatcher 
[req-f29a469e-268d-49bf-abfa-0ccb228d768c admin admin] Exception during message 
handling: An object of type InstanceNUMACell is required here
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 134, 
in _dispatch_and_reply
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 177, 
in _dispatch
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 123, 
in _do_dispatch
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 139, in 
inner
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/manager.py, line 175, in select_destinations
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/filter_scheduler.py, line 147, in 
select_destinations
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/filter_scheduler.py, line 300, in _schedule
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher 
chosen_host.obj.consume_from_instance(context, instance_properties)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/scheduler/host_manager.py, line 252, in 
consume_from_instance
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher self, 
instance)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/virt/hardware.py, line 978, in 
get_host_numa_usage_from_instance
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher 
instance_numa_topology = instance_topology_from_instance(instance)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/virt/hardware.py, line 949, in 
instance_topology_from_instance
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher cells=cells)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/objects/base.py, line 242, in __init__
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher self[key] = 
kwargs[key]
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/objects/base.py, line 474, in __setitem__
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher setattr(self, 
name, value)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/objects/base.py, line 75, in setter
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher field_value = 
field.coerce(self, name, value)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/objects/fields.py, line 189, in coerce
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher return 
self._type.coerce(obj, attr, value)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/objects/fields.py, line 388, in coerce
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher obj, '%s[%i]' 
% (attr, index), element)
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/objects/fields.py, line 189, in coerce
  2014-09-15 10:45:44.906 TRACE oslo.messaging.rpc.dispatcher return 
self._type.coerce(obj, 

[Yahoo-eng-team] [Bug 1368910] Re: intersphinx requires network access which sometimes fails

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368910

Title:
  intersphinx requires network access which sometimes fails

Status in Cinder:
  In Progress
Status in Manila:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in python-manilaclient:
  Fix Committed

Bug description:
  The intersphinx module requires internet access, and periodically
  causes docs jobs to fail.

  This module also prevents docs from being built without internet
  access.

  Since we don't actually use intersphinx for much (if anything), let's
  just remove it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368404] Re: Uncaught 'libvirtError: Domain not found' errors during destroy

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368404

Title:
  Uncaught 'libvirtError: Domain not found' errors during destroy

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Some uncaught libvirt errors may result in instances being set to the
  ERROR state and are causing sporadic gate failures. This can happen for
  any of the code paths that use _destroy(). Here is a recent example
  of a failed resize:

  [req-06dd4908-382e-455e-854e-e4d42a4bf62b TestServerAdvancedOps-724416891 
TestServerAdvancedOps-711228572] [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] Setting instance vm_state to ERROR
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] Traceback (most recent call last):
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 5902, in 
_error_out_instance_on_exception
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] yield
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3658, in resize_instance
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] timeout, retry_interval)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 5468, in 
migrate_disk_and_power_off
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] self.power_off(instance, timeout, 
retry_interval)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2400, in power_off
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] self._destroy(instance)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 998, in _destroy
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] timer.start(interval=0.5).wait()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 121, in wait
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] return hubs.get_hub().switch()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 293, in 
switch
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] return self.greenlet.switch()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/openstack/common/loopingcall.py, line 81, in _inner
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] self.f(*self.args, **self.kw)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 971, in 
_wait_for_destroy
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] dom_info = self.get_info(instance)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3922, in get_info
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] dom_info = virt_dom.info()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 183, in doit
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 141, in 
proxy_call
  2014-09-05 01:08:37.123 26984 
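
  A minimal sketch of the kind of guard such lookups need (error-code
  name per the libvirt Python bindings; virt_dom is the domain handle
  from the traceback above):

  import libvirt

  try:
      dom_info = virt_dom.info()
  except libvirt.libvirtError as e:
      if e.get_error_code() != libvirt.VIR_ERR_NO_DOMAIN:
          raise
      # The domain is already gone, which is what destroy wanted anyway.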

[Yahoo-eng-team] [Bug 1369858] Re: There is no migration test for migration #254

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369858

Title:
  There is no migration test for migration #254

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In change request  https://review.openstack.org/#/c/114286/ migration
  254 was added. But we have no migration test for it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369487] Re: NIST: increase RSA key length to 2048 bit

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369487

Title:
  NIST: increase RSA key length to 2048 bit

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  According to NIST 800-131A, the RSA key length for digital signatures
  must be >= 2048 bits.

  In crypto.py, we use 1024 bits as the default key length to generate
  the cert file, and do not specify any larger number to override the
  default value when utilizing it.

  def generate_x509_cert(user_id, project_id, bits=1024):

  Need to increase the default key length to 2048 bits.
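
  A minimal sketch of the corresponding one-line change (only the
  default changes):

  def generate_x509_cert(user_id, project_id, bits=2048):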

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370782] Re: SecurityGroupExists error when booting multiple instances concurrently

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370782

Title:
  SecurityGroupExists error when booting multiple instances concurrently

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If the default security group doesn't exist for some particular
  tenant, booting a few instances concurrently may lead to a
  SecurityGroupExists error, as one thread will win the race and create
  the security group while the others fail.

  This is easily reproduced by running Rally jobs in the gate.
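
  A minimal sketch of the usual fix for such create races
  (SecurityGroupExists is the nova exception named in the title; the
  create and get helpers are illustrative):

  def ensure_default_group(create, get):
      try:
          return create('default')
      except SecurityGroupExists:
          # Another request won the race; fall back to the existing group.
          return get('default')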

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370191] Re: db deadlock on service_update()

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370191

Title:
  db deadlock on service_update()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  Several methods in nova.db.sqlalchemy.api are decorated with
  @_retry_on_deadlock. service_update() is not currently one of them,
  but, based on the following backtrace, it should be:

  2014-09-15 15:40:22.574 34384 ERROR nova.servicegroup.drivers.db [-] model
  server went away
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db Traceback
  (most recent call last):
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py, line
  95, in _report_state
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  service.service_ref, state_catalog)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/conductor/api.py, line 218, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return self._manager.service_update(context, service, values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/utils.py, line 967, in wrapper
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return func(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 139,
  in inner
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return func(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 491, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db svc =
  self.db.service_update(context, service['id'], values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/db/api.py, line 148, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return IMPL.service_update(context, service_id, values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 146, in
  wrapper
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return f(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 533, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  service_ref.update(values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 447,
  in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.rollback()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, line
  58, in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  compat.reraise(exc_type, exc_value, exc_tb)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 444,
  in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 358,
  in commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  t[1].commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 1195,
  in commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self._do_commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 1226,
  in _do_commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.connection._commit_impl()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 491,
  in _commit_impl
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self._handle_dbapi_exception(e, None, None, None, None)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  

[Yahoo-eng-team] [Bug 1370068] Re: Numa filter fails to get instance_properties

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370068

Title:
  Numa filter fails to get instance_properties

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  NUMATopologyFilter tries to get instance_properties from
  filter_properties. But in fact, instance_properties lives in another
  dictionary (request_spec) that is embedded in filter_properties.
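
  A minimal sketch of reading the properties from the right level (key
  names as described above):

  # instance_properties lives inside request_spec, not at the top
  # of filter_properties.
  request_spec = filter_properties.get('request_spec', {})
  instance_properties = request_spec.get('instance_properties', {})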

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365901] Re: cinder-api ran into hang loop in python2.6

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365901

Title:
  cinder-api ran into hang loop in python2.6

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  cinder-api ran into a hang loop in python2.6:

  #cinder-api
  ...
  ...
  snip...
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in __subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in <bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 0x4e052d0>> ignored
  ...
  ...
  snip...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365637] Re: chunk sender does not send terminator on subprocess exception

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365637

Title:
  chunk sender does not send terminator on subprocess exception

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If a subprocess exception occurs when uploading chunks via glance,
  eventlet.wsgi.py will result in the following exception:  ValueError:
  invalid literal for int() with base 16: ''. This happens because the
  chunk sender does not send the terminator and the server reads an EOF
  on client connection close instead of a properly formatted chunk.
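
  For illustration, a sketch of the contract involved (not glance's actual
  code): a chunked HTTP body must be closed with a zero-length chunk even
  when the data source dies mid-stream, e.g. by sending the terminator in
  a finally block:

    def send_chunked(sock, chunks):
        try:
            for chunk in chunks:  # may raise mid-upload (e.g. subprocess error)
                sock.sendall('%x\r\n%s\r\n' % (len(chunk), chunk))
        finally:
            # the zero-length terminating chunk; without it the server
            # reads EOF and fails with int('', 16) -> ValueError
            sock.sendall('0\r\n\r\n')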

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1365637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366548] Re: libvirt: spawning an instance may have an additional 4 db writes

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366548

Title:
  libvirt: spawning an instance may have an additional 4 db writes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  instance.save() is used by the driver when it does not need to be; each
  instance.save() call invokes a DB access. After the spawn method is
  called the instance is updated anyway, so there is no need for the save.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366594] Re: config generator using keystoneclient rather than middleware

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366594

Title:
  config generator using keystoneclient rather than middleware

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The auth_token middleware was moved from keystoneclient.middleware to
  keystonemiddleware.
  
http://git.openstack.org/cgit/openstack/nova/tree/tools/config/oslo.config.generator.rc
  should be updated to use keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362233] Re: instance_create() DB API method implicitly creates additional DB transactions

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362233

Title:
  instance_create() DB API method implicitly creates additional DB
  transactions

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In DB API code we have a notion of 'public' and 'private' methods. The
  former are conceptually executed within a *single* DB transaction and
  the latter can either create a new transaction or participate in the
  existing one. The whole point is to be able to roll back the results
  of DB API methods easily and be able to retry method calls on
  connection failures. We had a bp
  (https://blueprints.launchpad.net/nova/+spec/db-session-cleanup) in
  which all DB API have been re-factored to maintain these properties.

  instance_create() is one of the methods that currently violates the
  rules of 'public' DB API methods and creates a concurrent transaction
  implicitly.
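
  A minimal sketch of the convention at stake (simplified names; the real
  code lives in nova/db/sqlalchemy/api.py):

    def instance_create(context, values):
        # 'public' method: owns exactly one transaction
        session = get_session()
        with session.begin():
            instance_ref = models.Instance()
            instance_ref.update(values)
            session.add(instance_ref)
            # 'private' helper joins the caller's transaction via session=
            _security_group_ensure_default(context, session=session)

    def _security_group_ensure_default(context, session=None):
        if session is None:
            session = get_session()  # standalone use only
        ...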

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362929] Re: libvirt: KVM live migration failed due to VIR_DOMAIN_XML_MIGRATABLE flag

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362929

Title:
  libvirt: KVM live migration failed due to VIR_DOMAIN_XML_MIGRATABLE
  flag

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  OS version: RHEL 6.5
  libvirt version:  libvirt-0.10.2-29.el6_5.9.x86_64

  When I attempt to live migrate my KVM instance using latest Juno code
  on RHEL 6.5, I notice nova-compute error on source compute node:

  2014-08-27 09:24:41.836 26638 ERROR nova.virt.libvirt.driver [-]
  [instance: 1b1618fa-ddbd-4fce-aa04-720a72ec7dfe] Live Migration
  failure: unsupported configuration: Target CPU model SandyBridge does
  not match source (null)

  And this libvirt error on source compute node:

  2014-08-27 09:32:24.955+: 17721: error : virCPUDefIsEqual:753 :
  unsupported configuration: Target CPU model SandyBridge does not match
  source (null)

  After looking into the code, I notice that 
https://review.openstack.org/#/c/73428/ adds VIR_DOMAIN_XML_MIGRATABLE flag to 
dump instance xml. With this flag, the KVM instance xml will include full CPU 
information like this:
<cpu mode='host-model' match='exact'>
  <model fallback='allow'>SandyBridge</model>
  <vendor>Intel</vendor>

  Without this flag, the xml will not have that CPU information:
<cpu mode='host-model'>
  <model fallback='allow'/>
  <topology sockets='1' cores='1' threads='1'/>
</cpu>

  The CPU model of my source and destination server are exactly
  identical. So I suspect it is a side effect of
  https://review.openstack.org/#/c/73428/. When libvirtd doing
  virDomainDefCheckABIStability(), its src domain xml does not include
  CPU model info, so that the checking fails.

  After I remove the code change of
  https://review.openstack.org/#/c/73428/ from my compute node, this
  libvirt checking error does not occur anymore.
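
  For reference, a short sketch of how the flag changes the XML that is
  dumped (using the libvirt-python bindings; the domain name is
  hypothetical):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    plain = dom.XMLDesc(0)                       # compact <cpu> element
    migratable = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_MIGRATABLE)
    # 'migratable' carries the expanded model/vendor elements shown
    # above, which is what the ABI stability check then compares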

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362405] Re: 'Force' option broken for quota update

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362405

Title:
  'Force' option broken for quota update

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This change broke the ability to force quotas below the current in-use
  value by adding new validation checks:

  https://review.openstack.org/#/c/28232/

  
  $ nova quota-update --force --cores 0 132
  ERROR (BadRequest): Quota limit must be greater than 1. (HTTP 400) 
(Request-ID: req-ff0751a9-9e87-443e-9965-a30768f91d9f)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366181] Re: Instance reschedule elevates to admin context

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366181

Title:
  Instance reschedule elevates to admin context

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Some resources should not be available when launching an instance
  (networks that belong to other projects for example). For this reason
  the original context should be maintained when rescheduling an
  instance. Currently the second launch occurs with an elevated context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363324] Re: a bug in quota check

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363324

Title:
  a bug in quota check

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  \nova\db\sqlalchemy\api.py   quota_reserve()

  when decided whether to refresh the user_usages[resource], one rule is
  that if the last refresh was too long time ago, we need refresh
  user_usages[resource].

   elif max_age and (user_usages[resource].updated_at -
                     timeutils.utcnow()).seconds >= max_age:

  Using the last update time minus the current time produces a negative
  timedelta whose .seconds attribute wraps around to a large positive
  value, so the refresh action is always executed (max_age will never be
  that large a number).
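
  A quick demonstration of the wrap-around (plain Python, runnable as-is):

    from datetime import datetime, timedelta

    updated_at = datetime.utcnow()
    now = updated_at + timedelta(seconds=1)  # pretend one second has passed

    delta = updated_at - now                 # a negative timedelta
    print(delta.seconds)                     # 86399, not -1: .seconds is never negative
    print((now - updated_at).seconds)        # 1, with the operands swapped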

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362221] Re: VMs fail to start when Ceph is used as a backend for ephemeral drives

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362221

Title:
  VMs fail to start when Ceph is used as a backend for ephemeral drives

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  Ceph has been chosen as the backend for the VMs' drive placement
  (libvirt.images_type == 'rbd').

  When a user creates a flavor and specifies:
     - root drive size 0
     - ephemeral drive size 0 (important)

  and tries to boot a VM, they get 'no valid host was found' in the
  scheduler log:

  Error from last host: node-3.int.host.com (node node-3.int.host.com):
  Traceback (most recent call last):
    File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1305, in _build_instance
      set_access_ip=set_access_ip)
    File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 393, in decorated_function
      return function(self, context, *args, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1717, in _spawn
      LOG.exception(_('Instance failed to spawn'), instance=instance)
    File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1714, in _spawn
      block_device_info)
    File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 2259, in spawn
      admin_pass=admin_password)
    File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 2648, in _create_image
      ephemeral_size=ephemeral_gb)
    File /usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py, line 186, in cache
      *args, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py, line 587, in create_image
      prepare_template(target=base, max_size=size, *args, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py, line 249, in inner
      return f(*args, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py, line 176, in fetch_func_sync
      fetch_func(target=target, *args, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 2458, in _create_ephemeral
      disk.mkfs(os_type, fs_label, target, run_as_root=is_block_dev)
    File /usr/lib/python2.6/site-packages/nova/virt/disk/api.py, line 117, in mkfs
      utils.mkfs(default_fs, target, fs_label, run_as_root=run_as_root)
    File /usr/lib/python2.6/site-packages/nova/utils.py, line 856, in mkfs
      execute(*args, run_as_root=run_as_root)
    File /usr/lib/python2.6/site-packages/nova/utils.py, line 165, in execute
      return processutils.execute(*cmd, **kwargs)
    File /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py, line 193, in execute
      cmd=' '.join(cmd))
  ProcessExecutionError: Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mkfs -t ext3 -F -L ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_default
  Exit code: 1
  Stdout: ''
  Stderr: 'mke2fs 1.41.12 (17-May-2010)\nmkfs.ext3: No such file or directory while trying to determine filesystem size\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366758] Re: notifications should include progress info and cell name

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366758

Title:
  notifications should include progress info and cell name

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The notifications are quite out of sync with some of the instance object 
changes, in particular these very useful details are not included:
  * progress
  * cell_name

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362699] Re: amd64 is not a valid arch

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362699

Title:
  amd64 is not a valid arch

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  'amd64' is not in the list of valid architectures, this should be
  canonicalized to 'x86_64'.
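
  A minimal sketch of the canonicalization being asked for (hypothetical
  helper and alias table, for illustration only):

    _ALIASES = {'amd64': 'x86_64', 'x64': 'x86_64'}

    def canonicalize(arch):
        arch = arch.lower()
        return _ALIASES.get(arch, arch)

    assert canonicalize('amd64') == 'x86_64'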

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363899] Re: HyperV Vm Console Log issues

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363899

Title:
  HyperV Vm Console Log issues

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The size of the console log can grow bigger than expected because of a small
  nit in the check of the existing log file size, as well as a wrong size
  constant.

  The method which gets the serial port pipe at the moment returns a
  list which contains at most one element being the actual pipe path. In
  order to avoid confusion this should return the pipe path or None
  instead of a list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365751] Re: Use of assert_called_once() instead of assert_called_once_with()

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365751

Title:
  Use of assert_called_once() instead of assert_called_once_with()

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in Command Line Interface Formulation Framework:
  Fix Released

Bug description:
  mock.assert_called_once() is a noop, it doesn't test anything.

  Instead it should be mock.assert_called_once_with()
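
  A quick demonstration (with the mock library of that era) of why the
  first form silently passes:

    import mock

    m = mock.Mock()
    # attribute access on a Mock returns a child mock, so this "assertion"
    # is just a no-op call that always succeeds:
    m.assert_called_once()         # passes even though m was never called

    # the real assertion fails as it should:
    m.assert_called_once_with()    # raises AssertionError: never called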

  This occurs in the following places:
    Nova
      nova/tests/virt/hyperv/test_ioutils.py
      nova/tests/virt/libvirt/test_driver.py
    Cliff
  cliff/tests/test_app.py
Neutron
  neutron/tests/unit/services/l3_router/test_l3_apic_plugin.py
  
neutron/tests/unit/services/loadbalancer/drivers/radware/test_plugin_driver.py
  neutron/tests/unit/test_l3_agent.py
  neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_sync.py
  
neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_mechanism_driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363349] Re: VMware: test_driver_api...test_snapshot_delete_vm_snapshot* need to be fixed

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363349

Title:
  VMware: test_driver_api...test_snapshot_delete_vm_snapshot* need to be
  fixed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Once converted to use oslo.vmware,
  these test cases, test_driver_api.VMwareAPIVMTestCase.test_snapshot_delete_vm_snapshot*,
  are failing:
  
http://logs.openstack.org/75/70175/43/check/gate-nova-python27/c714fde/console.html

  This is most likely an unintended consequence of mocking time.sleep.
  These tests are currently proposed to be skipped, but we should look to
  provide a fix for the test cases as soon as possible.

  A separate patch was posted to demonstrate the potential cause. See the lone
  diff between patch set 1 (which fails the above-mentioned tests) and patch
  set 3 (which doesn't):
  
https://review.openstack.org/#/c/117897/1..3/nova/tests/virt/vmwareapi/test_driver_api.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366832] Re: serial console, ports are not released

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366832

Title:
  serial console, ports are not released

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When booting an instance with the serial console activated, port(s) are
  allocated but never released, since the code responsible for freeing the
  port(s) is called after the domain is undefined from libvirt.
  Also, since the domain is already undefined, calling the method
  '_lookup_by_name' raises a DomainNotFound exception, which makes it
  impossible to correctly finish the delete process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362916] Re: _rescan_multipath construct wrong parameter for “multipath -r”

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362916

Title:
  _rescan_multipath construct wrong parameter for “multipath -r”

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  At
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L590,
  the purpose of self._run_multipath('-r', check_exit_code=[0, 1, 21]) is to
  set up a command to reconstruct multipath devices.
  But the result of it is multipath - r, not the right format multipath -r.

  I think the brackets are missing around '-r'; it should be modified to
  self._run_multipath(['-r'], check_exit_code=[0, 1, 21])
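
  Why the string form ends up as multipath - r: the helper expands its
  argument into individual command arguments, and expanding a string
  yields its characters (plain Python, runnable as-is):

    args = '-r'
    print(['multipath'] + list(args))  # ['multipath', '-', 'r'] -> multipath - r
    args = ['-r']
    print(['multipath'] + list(args))  # ['multipath', '-r']     -> multipath -r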

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363231] Re: Periodic thread lockup

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363231

Title:
  Periodic thread lockup

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The instance locking introduced in
  cc5388bbe81aba635fb757e202d860aeed98f3e8 keeps the power state sane
  between stop and the periodic task power sync.  However, locking on
  an instance in the periodic task thread can potentially lock
  that thread for a long time.

  Example:
  1) User boots an instance.  The instance gets locked by uuid.
  2) Driver spawn begins and the image starts downloading from glance.
  3) During spawn, periodic tasks run.  Sync power states tries to grab
  the same instance lock by uuid.
  4) Periodic task thread hangs until the driver spawn completes in
  another greenthread.

  This scenario results in nova-compute appearing unresponsive for
  a long time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364849] Re: VMware driver doesn't return typed console

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364849

Title:
  VMware driver doesn't return typed console

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Change I8f6a857b88659ee30b4aa1a25ac52d7e01156a68 added typed consoles,
  and updated drivers to use them. However, when it touched the VMware
  driver, it modified get_vnc_console in VMwareVMOps, but not in
  VMwareVCVMOps, which is the one which is actually used.

  Incidentally, VMwareVMOps has now been removed, so this type of
  confusion should not happen again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362799] Re: Hard reboot escalation regression

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362799

Title:
  Hard reboot escalation regression

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova used to allow a hard reboot when an instance is already being
  soft rebooted. However, with commit
  cc0be157d005c5588fe5db779fc30fefbf22b44d, this is no longer allowed.

  This is because two new task states were introduced, REBOOT_PENDING
  and REBOOT_STARTED (and corresponding values for hard reboots). A soft
  reboot now spends most of it's time in REBOOT_STARTED instead of
  REBOOTING.

  REBOOT_PENDING and REBOOT_STARTED were not added to the
  @check_instance_state decorator. As a result, an attempt to hard
  reboot an instance which is stuck trying to do a soft reboot will now
  fail with an InstanceInvalidState exception.

  This provides a poor user experience since a reboot is often attempted
  for instances that aren't responsive. A soft reboot is not guaranteed
  to work even if the system is responsive. The soft reboot prevents a
  hard reboot from being performed.
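
  A sketch of the kind of fix this implies (illustrative, modeled on
  nova's @check_instance_state decorator; the committed change may
  differ):

    @check_instance_state(task_state=[None,
                                      task_states.REBOOTING,
                                      task_states.REBOOT_PENDING,
                                      task_states.REBOOT_STARTED])
    def reboot(self, context, instance, reboot_type):
        # a hard reboot may now interrupt a soft reboot that is stuck
        # in REBOOT_PENDING or REBOOT_STARTED
        ...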

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364986] Re: oslo.db now wraps all DB exceptions

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364986

Title:
  oslo.db now wraps all DB exceptions

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  tl;dr

  In a few versions of oslo.db (maybe when we release 1.0.0?), every
  project using oslo.db should inspect their code and remove usages of
  'raw' DB exceptions like IntegrityError/OperationalError/etc from
  except clauses and replace them with the corresponding custom
  exceptions from oslo.db (at least a base one - DBError).

  Full version

  A recent commit to oslo.db changed the way the 'raw' DB exceptions are
  wrapped (e.g. IntegrityError, OperationalError, etc). Previously, we
  used decorators on Session methods and wrapped those exceptions with
  oslo.db custom ones. This is mostly useful for handling them later
  (e.g. to retry DB API methods on deadlocks).

  The problem with Session decorators was that it wasn't possible to
  catch and wrap all possible exceptions. E.g. SA Core exceptions and
  exceptions raised in Query.all() calls were ignored. Now we are using
  a low level SQLAlchemy event to catch all possible DB exceptions. This
  means that if consuming projects had workarounds for those cases and
  expected 'raw' exceptions instead of oslo.db ones, they would be
  broken. That's why we *temporarily* added both 'raw' exceptions and
  new ones to expect clauses in consuming projects code when they were
  ported to using of oslo.db to make the transition smooth and allow
  them to work with different oslo.db versions.

  On the positive side, we now have a solution for problems like
  https://bugs.launchpad.net/nova/+bug/1283987 when exceptions in Query
  methods calls weren't handled properly.

  In a few releases of oslo.db we can safely remove 'raw' DB exceptions
  like IntegrityError/OperationalError/etc from projects code and except
  only oslo.db specific ones like
  DBDuplicateError/DBReferenceError/DBDeadLockError/etc (at least, we
  wrap all the DB exceptions with our base exception DBError, if we
  haven't found a better match).

  oslo.db exceptions and their description:
  https://github.com/openstack/oslo.db/blob/master/oslo/db/exception.py
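
  For illustration, a minimal sketch of the migration described above
  (handle_duplicate and retry_on_deadlock are hypothetical helpers):

    from oslo.db import exception as db_exc  # oslo.db namespace of that era

    try:
        session.add(row)
        session.flush()
    except db_exc.DBDuplicateEntry:
        # was: except sqlalchemy.exc.IntegrityError
        handle_duplicate(row)
    except db_exc.DBDeadlock:
        # was: inspecting OperationalError error codes by hand
        retry_on_deadlock()
    except db_exc.DBError:
        # base wrapper: anything oslo.db could not classify more precisely
        raise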

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362595] Re: move_vhds_into_sr - invalid cookie

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362595

Title:
  move_vhds_into_sr - invalid cookie

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When moving VHDs on the filesystem a coalesce may be in progress.  The
  result of this is that the VHD file is not valid when it is copied as
  it is being actively changed - and the VHD cookie is invalid.

  Seen in XenServer CI: http://dd6b71949550285df7dc-
  
dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/36/109836/4/23874/run_tests.log

  2014-08-28 12:26:37.538 | Traceback (most recent call last):
  2014-08-28 12:26:37.543 |   File tempest/api/compute/servers/test_server_actions.py, line 251, in test_resize_server_revert
  2014-08-28 12:26:37.550 |     self.client.wait_for_server_status(self.server_id, 'VERIFY_RESIZE')
  2014-08-28 12:26:37.556 |   File tempest/services/compute/json/servers_client.py, line 179, in wait_for_server_status
  2014-08-28 12:26:37.563 |     raise_on_error=raise_on_error)
  2014-08-28 12:26:37.570 |   File tempest/common/waiters.py, line 77, in wait_for_server_status
  2014-08-28 12:26:37.577 |     server_id=server_id)
  2014-08-28 12:26:37.583 | BuildErrorException: Server e58677ac-dd72-4f10-9615-cb6763f34f50 failed to build and is in ERROR status
  2014-08-28 12:26:37.589 | Details: {u'message': u'[\'XENAPI_PLUGIN_FAILURE\', \'move_vhds_into_sr\', \'Exception\', VDI \'/var/run/sr-mount/16f5c980-eeb6-0fd3-e9b1-dec616309984/os-images/instancee58677ac-dd72-4f10-9615-cb6763f34f50/535cd7f2-80a5-463a-935c-9c4f52ba0ecf.vhd\' has an invalid footer: \' invalid cook', u'code': 500, u'created': u'2014-08-28T11:57:01Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364344] Re: Nova client produces a wrong exception when user tries to boot an instance without specific network UUID

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364344

Title:
  Nova client produces a wrong exception when user tries to boot an
  instance without specific network UUID

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Description of problem:
  ===
  python-novaclient produces a wrong exception when a user tries to boot an
  instance without a specific network UUID.
  The issue will only reproduce when an external network is shared with the 
tenant, but not created from within it (I created it in admin tenant).

  Version-Release:
  
  python-novaclient-2.17.0-2

  How reproducible:
  =
  Always

  Steps to Reproduce:
  ===
  1. Have 2 tenants (admin + additional tenant would do).
  2. In tenant A (admin), Create a network and mark it as both shared and 
external.
  3. In tenant B, Create a network which is not shared or external.
  4. Boot an instance within tenant B (I tested this via CLI), do not use the 
--nic option.

  Actual results:
  ===
  DEBUG (shell:783) It is not allowed to create an interface on external network 49d0cb8a-2631-4308-89c4-cac502ef0bad (HTTP 403) (Request-ID: req-caacfa72-82f8-492a-8ce2-9476be8f3e0c)
  Traceback (most recent call last):
    File /usr/lib/python2.7/site-packages/novaclient/shell.py, line 780, in main
      OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
    File /usr/lib/python2.7/site-packages/novaclient/shell.py, line 716, in main
      args.func(self.cs, args)
    File /usr/lib/python2.7/site-packages/novaclient/v1_1/shell.py, line 433, in do_boot
      server = cs.servers.create(*boot_args, **boot_kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py, line 871, in create
      **boot_kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py, line 534, in _boot
      return_raw=return_raw, **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/base.py, line 152, in _create
      _resp, body = self.api.client.post(url, body=body)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 312, in post
      return self._cs_request(url, 'POST', **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 286, in _cs_request
      **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 268, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 262, in request
      raise exceptions.from_response(resp, body, url, method)
  Forbidden: It is not allowed to create an interface on external network 49d0cb8a-2631-4308-89c4-cac502ef0bad (HTTP 403) (Request-ID: req-afce2569-6902-4b25-a9b8-9ebf1a6ce1b9)
  ERROR: It is not allowed to create an interface on external network 49d0cb8a-2631-4308-89c4-cac502ef0bad (HTTP 403) (Request-ID: req-afce2569-6902-4b25-a9b8-9ebf1a6ce1b9)

  Expected results:
  =
  This is what happens if:
  1. The shared network is no longer marked as external.
  2. The tenant itself has two networks.

  (+ no network UUID is speficied in the 'nova boot' command)

  
  DEBUG (shell:783) Multiple possible networks found, use a Network ID to be more specific. (HTTP 400) (Request-ID: req-a4e90abd-2ad7-4342-aa3c-1a9aa9f5e2a0)
  Traceback (most recent call last):
    File /usr/lib/python2.7/site-packages/novaclient/shell.py, line 780, in main
      OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
    File /usr/lib/python2.7/site-packages/novaclient/shell.py, line 716, in main
      args.func(self.cs, args)
    File /usr/lib/python2.7/site-packages/novaclient/v1_1/shell.py, line 433, in do_boot
      server = cs.servers.create(*boot_args, **boot_kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py, line 871, in create
      **boot_kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py, line 534, in _boot
      return_raw=return_raw, **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/base.py, line 152, in _create
      _resp, body = self.api.client.post(url, body=body)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 312, in post
      return self._cs_request(url, 'POST', **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 286, in _cs_request
      **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 268, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File /usr/lib/python2.7/site-packages/novaclient/client.py, line 262, in request
      raise exceptions.from_response(resp, body, url, method)
  

[Yahoo-eng-team] [Bug 1363955] Re: Broken links in doc/source/devref/filter_scheduler.rst

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363955

Title:
  Broken links in doc/source/devref/filter_scheduler.rst

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When you browse directly to
  http://docs.openstack.org/developer/nova/devref/filter_scheduler.html,
  there are broken links for the following classes:
  'AggregateNumInstancesFilter', 'RamWeigher'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363558] Re: check the value of the configuration item retries

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363558

Title:
  check the value of the configuration item retries

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  We need to check the value of the configuration item
  block_device_retries in the code, in order to ensure that
  block_device_retries is equal to or greater than 1, as is already done
  for the configuration item network_allocate_retries.
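
  A minimal sketch of the guard being asked for (hypothetical option name,
  clamp policy and helper; the committed fix may differ):

    retries = CONF.block_device_retries
    attempts = retries if retries >= 1 else 1  # never fewer than one attempt
    for attempt in range(1, attempts + 1):
        if detach_succeeded():                 # hypothetical helper
            break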

  =
  In ceilometer there are similar issues; there is no check for the value of
  retries in:
  ceilometer.storage.mongo.utils.ConnectionPool#_mongo_connect
  and:
  ceilometer.ipmi.platform.intel_node_manager.NodeManager#init_node_manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1363558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367342] Re: call to _set_instance_error_state is incorrect in do_build_and_run_instance

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367342

Title:
  call to _set_instance_error_state is incorrect in
  do_build_and_run_instance

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova/compute/manager.py  in do_build_and_run_instance
  Under  except exception.RescheduledException as e:
  ...
  self._set_instance_error_state(context, instance.uuid)

  This should be passing instance, not instance.uuid.
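
  The one-line fix this implies (a sketch):

    self._set_instance_error_state(context, instance)  # was: instance.uuid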

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366549] Re: libvirt: some unit tests take more than 4 seconds

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366549

Title:
  libvirt: some unit tests take more than 4 seconds

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Ran 786 tests in 21.128s (-0.051s)
  PASSED (id=17)
  Slowest Tests
  Test id                                                                                  Runtime (s)
  ---------------------------------------------------------------------------------------  -----------
  nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_clean_shutdown_failure       5.091
  nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_clean_shutdown_with_retry    4.093

  This is a sad waste of resources...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366549/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368202] Re: Nova console VMRCConsole will break with ESX driver

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368202

Title:
  Nova console VMRCConsole will break with ESX driver

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The VMware ESX driver was deprecated in Juno. This console class will break.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363575] Re: db deadlock in nova-conductor

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363575

Title:
  db deadlock in nova-conductor

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Log ERROR or TRACE messages are not allowed in nova-conductor. This
  one has been observed in this gate job:

  
  
http://logs.openstack.org/11/89211/22/gate/gate-tempest-dsvm-neutron-full/dfa4541/logs/screen-n-cond.txt.gz?level=TRACE

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-
  large-
  ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  
  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE reservations.deleted = %s AND reservations.expire < %s' (datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/openstack/common/periodic_task.py, line 198, in run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/scheduler/manager.py, line 157, in _expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/quota.py, line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/quota.py, line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/db/api.py, line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 3394, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2690, in update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     _raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 427, in _raise_if_deadlock_error
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     raise exception.DBDeadlock(operational_error)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task DBDeadlock:

[Yahoo-eng-team] [Bug 1369973] Re: libvirt test cases fail due to lockutils error when run via testtools

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369973

Title:
  libvirt test cases fail due to lockutils error when run via testtools

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When running libvirt tests via testtools.run, there are a number of
  cases which fail due to lockutils setup

  $ .venv/bin/python -m testtools.run nova.tests.virt.libvirt.test_driver
  ..snip...
  ==
  ERROR: 
nova.tests.virt.libvirt.test_driver.IptablesFirewallTestCase.test_multinic_iptables
  --
  pythonlogging:'': {{{INFO [nova.network.driver] Loading network driver 
'nova.network.linux_net'}}}

  Traceback (most recent call last):
File nova/tests/virt/libvirt/test_driver.py, line 10182, in 
test_multinic_iptables
  self.fw.prepare_instance_filter(instance_ref, network_info)
File nova/virt/firewall.py, line 184, in prepare_instance_filter
  self.refresh_provider_fw_rules()
File nova/virt/firewall.py, line 474, in refresh_provider_fw_rules
  self._do_refresh_provider_fw_rules()
File nova/openstack/common/lockutils.py, line 267, in inner
  with lock(name, lock_file_prefix, external, lock_path):
File /usr/lib64/python2.7/contextlib.py, line 17, in __enter__
  return self.gen.next()
File nova/openstack/common/lockutils.py, line 231, in lock
  ext_lock = external_lock(name, lock_file_prefix, lock_path)
File nova/openstack/common/lockutils.py, line 180, in external_lock
  lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path)
File nova/openstack/common/lockutils.py, line 171, in _get_lock_path
  raise cfg.RequiredOptError('lock_path')
  RequiredOptError: value required for option: lock_path

  The tox.ini / run_tests.sh work around this problem by using -m
  nova.openstack.common.lockutils but this is somewhat tedious to
  remember to add. A simple mock addition to the tests in question can
  avoid the issue in the first place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179816] Re: ec2_error_code mismatch

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

** Changed in: nova
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1179816

Title:
  ec2_error_code mismatch

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  It is reporting InstanceNotFound instead of InvalidAssociationID[.]NotFound
  in 
  tests/boto/test_ec2_network.py 

  self.assertBotoError(ec2_codes.client.InvalidAssociationID.NotFound,
   address.disassociate)

  
  AssertionError: Error code (InstanceNotFound) does not match the expected re
  pattern InvalidAssociationID[.]NotFound

  boto: ERROR: 400 Bad Request
  boto: ERROR: <?xml version="1.0"?>
  <Response><Errors><Error><Code>InstanceNotFound</Code><Message>Instance None could not be found.</Message></Error></Errors><RequestID>req-05235a67-0a70-46b1-a503-91444ab2b88d</RequestID></Response>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1179816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348629] Re: Baremetal driver reports bogus vm_mode of 'baremetal'

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348629

Title:
  Baremetal driver reports bogus vm_mode of 'baremetal'

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The Baremetal driver reports a 'vm_mode' of 'baremetal' for supported
  instance types. This is bogus because the baremetal driver runs the OS
  using the native machine ABI, which is represented by vm_mode.HVM (a
  sketch of the corrected reporting follows).
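
  A minimal sketch of the corrected reporting, assuming the Juno-era
  nova.compute.vm_mode constants (the architecture tuple shown is
  illustrative):

      from nova.compute import vm_mode

      # Advertise the native machine ABI rather than a driver-specific
      # mode, so schedulers compare like with like across drivers.
      supported_instances = [('x86_64', 'baremetal', vm_mode.HVM)]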

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363014] Re: NoopQuotasDriver.get_settable_quotas() method always fail with KeyError

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363014

Title:
  NoopQuotasDriver.get_settable_quotas() method always fail with
  KeyError

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  NoopQuotasDriver.get_settable_quotas() tries to call update() on a
  non-existent dictionary entry. While NoopQuotasDriver is not really
  useful, we still want it to work (a sketch of the defensive fix
  follows).
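
  A minimal sketch of the defensive fix, assuming the driver builds its
  result dict keyed by resource name (the limit values shown are
  illustrative):

      def get_settable_quotas(self, context, resources, project_id,
                              user_id=None):
          quotas = {}
          for resource in resources.values():
              # Create the entry before calling update() on it; the
              # buggy version assumed the key already existed and so
              # raised KeyError.
              quotas[resource.name] = {}
              quotas[resource.name].update(minimum=0, maximum=-1)
          return quotas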

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 890906] Re: os-virtual-interfaces extension should take a uuid instead of an instance id

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/890906

Title:
  os-virtual-interfaces extension should take a uuid instead of an
  instance id

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The other nova commands seem to be moving towards taking a uuid as a
  parameter instead of the instance id, so the os-virtual-interfaces
  extension should follow suit.  This will allow us to add a vif-list
  command (listing the vifs of a given VM) that is similar to the other
  nova commands (i.e. nova vif-list <uuid>).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/890906/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348624] Re: XenAPI driver uses a bogus architecture type for i686 platforms

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348624

Title:
  XenAPI driver uses a bogus architecture type for i686 platforms

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The XenAPI driver simply parses the Xen hypervisor capabilities to
  report the architecture type in the supported instances list.
  Unfortunately the Xen hypervisor uses an architecture name of 'x86_32'
  for i686 platforms, which means it won't match the standard OS 'uname'
  reported architecture used by other drivers (a sketch of the needed
  canonicalisation follows).
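
  A minimal sketch of the canonicalisation, assuming the driver
  normalises the hypervisor-reported string before publishing it (the
  alias table is illustrative):

      # Xen reports 'x86_32' where uname-based drivers report 'i686';
      # translate so the two naming schemes line up.
      _ARCH_ALIASES = {'x86_32': 'i686'}

      def _canonical_arch(xen_arch):
          return _ARCH_ALIASES.get(xen_arch, xen_arch)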

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364692] Re: Error msg says No valid host found for cold migrate when resizing VM

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364692

Title:
  Error msg says No valid host found for cold migrate when resizing VM

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Because resize and cold migrate share common code, some of the migrate
  code shows through in the error handling: when a valid host can't be
  found while performing a resize operation, the user sees:

  No valid host found for cold migrate

  This can be confusing, especially when a number of different actions
  can be going on in parallel (a sketch of a fix follows).
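
  A minimal sketch of the fix direction, assuming the shared conductor
  path can tell the two operations apart (the flavor comparison is
  illustrative, not the actual patch):

      # Resize and cold migrate share this code path; name the
      # operation the user actually requested in the error message.
      operation = ('resize' if flavor.id != instance.instance_type_id
                   else 'cold migrate')
      raise exception.NoValidHost(
          reason=_("No valid host found for %s") % operation)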

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1204169] Re: compute instance.update messages sometimes have the wrong values for instance state

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1204169

Title:
  compute instance.update messages sometimes have the wrong values for
  instance state

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Compute instance.update messages that are not triggered by a state
  change (e.g. setting the host in the resource tracker) have default
  (None) values for task_state, old_vm_state and old_task_state.

  This can make the instance state sequence look wrong to anything
  consuming the messages (e.g. stacktach); a sketch of the fix direction
  follows the sequence below:

   compute.instance.update  None(None) -> Building(none)
   scheduler.run_instance.scheduled
   compute.instance.update  building(None) -> building(scheduling)
   compute.instance.create.start
   compute.instance.update  building(None) -> building(None)
   compute.instance.update  building(None) -> building(networking)
   compute.instance.update  building(networking) -> building(block_device_mapping)
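
  A minimal sketch of the fix direction, assuming the Juno-era
  nova.notifications helper (the call shape is illustrative):

      from nova import notifications

      # When no state transition actually happened, send the current
      # states on both sides instead of letting them default to None,
      # so consumers such as stacktach see a consistent sequence.
      notifications.send_update_with_states(
          context, instance,
          old_vm_state=instance.vm_state,
          new_vm_state=instance.vm_state,
          old_task_state=instance.task_state,
          new_task_state=instance.task_state,
          service='compute')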

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1204169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349147] Re: test_db_api unit tests fail with: UnexpectedMethodCallError: Unexpected method call get_session.__call__(use_slave=False) -> None

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349147

Title:
  test_db_api unit tests fail with: UnexpectedMethodCallError:
  Unexpected method call get_session.__call__(use_slave=False) -> None

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/62/104262/7/gate/gate-nova-python27/3adf0e2/console.html

  2014-07-25 16:27:18.188 | Traceback (most recent call last):
  2014-07-25 16:27:18.188 |   File "nova/tests/db/test_db_api.py", line 1236, in test_security_group_get_no_instances
  2014-07-25 16:27:18.188 |     security_group = db.security_group_get(self.ctxt, sid)
  2014-07-25 16:27:18.188 |   File "nova/db/api.py", line 1269, in security_group_get
  2014-07-25 16:27:18.188 |     columns_to_join)
  2014-07-25 16:27:18.188 |   File "nova/db/sqlalchemy/api.py", line 167, in wrapper
  2014-07-25 16:27:18.188 |     return f(*args, **kwargs)
  2014-07-25 16:27:18.188 |   File "nova/db/sqlalchemy/api.py", line 3668, in security_group_get
  2014-07-25 16:27:18.188 |     query = _security_group_get_query(context, project_only=True).\
  2014-07-25 16:27:18.188 |   File "nova/db/sqlalchemy/api.py", line 3635, in _security_group_get_query
  2014-07-25 16:27:18.188 |     read_deleted=read_deleted, project_only=project_only)
  2014-07-25 16:27:18.189 |   File "nova/db/sqlalchemy/api.py", line 237, in model_query
  2014-07-25 16:27:18.189 |     session = kwargs.get('session') or get_session(use_slave=use_slave)
  2014-07-25 16:27:18.189 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 765, in __call__
  2014-07-25 16:27:18.189 |     return mock_method(*params, **named_params)
  2014-07-25 16:27:18.189 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 1002, in __call__
  2014-07-25 16:27:18.189 |     expected_method = self._VerifyMethodCall()
  2014-07-25 16:27:18.189 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 1049, in _VerifyMethodCall
  2014-07-25 16:27:18.189 |     expected = self._PopNextMethod()
  2014-07-25 16:27:18.189 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 1035, in _PopNextMethod
  2014-07-25 16:27:18.189 |     raise UnexpectedMethodCallError(self, None)
  2014-07-25 16:27:18.189 | UnexpectedMethodCallError: Unexpected method call get_session.__call__(use_slave=False) -> None

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5leHBlY3RlZE1ldGhvZENhbGxFcnJvcjogVW5leHBlY3RlZCBtZXRob2QgY2FsbCBnZXRfc2Vzc2lvbi5fX2NhbGxfXyh1c2Vfc2xhdmU9RmFsc2UpIC0+IE5vbmVcIiBBTkQgcHJvamVjdDpcIm9wZW5zdGFjay9ub3ZhXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wNy0xM1QxNjo0MDo1NiswMDowMCIsInRvIjoiMjAxNC0wNy0yN1QxNjo0MDo1NiswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA2NDc5MzkzMjc0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  8 hits in 2 weeks, check and gate, all failures; it looks like it
  started around 7/21. A sketch of the test-side fix follows.
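
  A minimal sketch of the test-side fix, assuming the mox expectation
  simply needs to record the new keyword argument (names are
  illustrative):

      # The production code now calls get_session(use_slave=False), so
      # the recorded expectation must include the kwarg, otherwise mox
      # raises UnexpectedMethodCallError.
      self.mox.StubOutWithMock(sqlalchemy_api, 'get_session')
      sqlalchemy_api.get_session(use_slave=False).AndReturn(self.session)
      self.mox.ReplayAll()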

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351001] Re: nova force-delete does not delete BUILD state instances

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351001

Title:
  nova force-delete does not delete BUILD state instances

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Description of problem:

  Using "nova force-delete $instance-id" fails when an instance is in
  status BUILD and OS-EXT-STS:task_state "deleting".  However, "nova
  delete" does seem to work after several tries.

  Version-Release number of selected component (if applicable):

  2013.2 (Havana)

  How reproducible:

  
  Steps to Reproduce:
  1. find a seemingly hung instance
  2. fire off nova-delete
  3. watch it complain

  Actual results:

  [root@host02 ~(keystone_admin)]$ nova force-delete 3a83b712-4667-44c1-a83d-ada164ff78d1
  ERROR: Cannot 'forceDelete' while instance is in vm_state building (HTTP 409) (Request-ID: req-22737c83-32f4-4c6d-ae9c-09a542556907)

  
  Expected results:

  The instance should be deleted.

  
  Additional info:

  Here are some logs obtained from this behavior, this is on RHOS4 /
  RHEL6.5:

  --snip--

  [root@host02 ~(keystone_admin)]$ nova force-delete 3a83b712-4667-44c1-a83d-ada164ff78d1
  ERROR: Cannot 'forceDelete' while instance is in vm_state building (HTTP 409) (Request-ID: req-22737c83-32f4-4c6d-ae9c-09a542556907)
  [root@host02 ~(keystone_admin)]$ nova list --all-tenants | grep 3a83b712-4667-44c1-a83d-ada164ff78d1
  | 3a83b712-4667-44c1-a83d-ada164ff78d1 | bcrochet-foreman | BUILD | deleting | NOSTATE | default=192.168.87.7; foreman_int=192.168.200.6; foreman_ext=192.168.201.6 |

  [root@host02 ~(keystone_admin)]$ nova show 3a83b712-4667-44c1-a83d-ada164ff78d1
  +--------------------------------------+-----------------------------------------------------------------------------+
  | Property                             | Value                                                                       |
  +--------------------------------------+-----------------------------------------------------------------------------+
  | status                               | BUILD                                                                       |
  | updated                              | 2014-04-16T20:56:44Z                                                        |
  | OS-EXT-STS:task_state                | deleting                                                                    |
  | OS-EXT-SRV-ATTR:host                 | host08.oslab.priv                                                           |
  | foreman_ext network                  | 192.168.201.6                                                               |
  | key_name                             | foreman-ci                                                                  |
  | image                                | rhel-guest-image-6-6.5-20140116.1-1 (253354e7-8d65-4d95-b134-6b423d125579) |
  | hostId                               | 4b98ba395063916c15f5b96a791683fa5d116109987c6a6b0b8de2f1                    |
  | OS-EXT-STS:vm_state                  | building                                                                    |
  | OS-EXT-SRV-ATTR:instance_name        | instance-99e9                                                               |
  | foreman_int network                  | 192.168.200.6                                                               |
  | OS-SRV-USG:launched_at               | None                                                                        |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | host08.oslab.priv                                                           |
  | flavor                               | m1.large (4)                                                                |
  | id                                   | 3a83b712-4667-44c1-a83d-ada164ff78d1                                        |
  | security_groups                      | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}]       |
  | OS-SRV-USG:terminated_at             | None                                                                        |
  | user_id                              | 13090770bacc46ccb8fb7f5e13e5de98                                            |
  | name                                 | bcrochet-foreman                                                            |
  | created                              | 2014-04-16T20:27:51Z                                                        |
  | tenant_id                            | f8e6ba11caa94ea98d24ec819eb746fd                                            |
  | OS-DCF:diskConfig                    | MANUAL                                                                      |
  | metadata                             | {}                                                                          |
  | 

[Yahoo-eng-team] [Bug 1349888] Re: Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349888

Title:
  Attempting to attach the same volume multiple times can cause bdm
  record for existing attachment to be deleted.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova assumes there is only ever one bdm per volume. When an attach is
  initiated a new bdm is created; if the attach fails, a bdm for the
  volume is deleted, but it is not necessarily the one that was just
  created. The following steps show how a volume can get stuck detaching
  because of this (a sketch of the fix direction follows).
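
  A minimal sketch of the fix direction, assuming the attach path keeps
  a handle on the record it just created (the object API shown is
  illustrative):

      bdm = objects.BlockDeviceMapping(context=context,
                                       volume_id=volume_id,
                                       instance_uuid=instance.uuid)
      bdm.create()
      try:
          self._attach_volume(context, instance, bdm)
      except Exception:
          with excutils.save_and_reraise_exception():
              # Destroy exactly the record created above; looking the
              # bdm up by volume_id again can delete a different,
              # healthy attachment.
              bdm.destroy()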

  
  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 |  1   |    lvm1     |  false   |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+

  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 |  1   |    lvm1     |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)

  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | detaching | test10 |  1   |    lvm1     |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+


  
  2014-07-29 14:47:13.952 ERROR oslo.messaging.rpc.dispatcher [req-134dfd17-14da-4de0-93fc-5d8d7bbf65a5 admin admin] Exception during message handling: <type 'NoneType'> can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1157922] Re: Instance state is not set to ERROR when guestfs file-injection fails

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1157922

Title:
  Instance state is not set to ERROR when guestfs file-injection fails

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  
  Instances have been observed to remain stuck forever in BUILD state,
  with no errors surfaced to `nova show`, after nova boot fails with the
  following guestfs error (a sketch of the missing error handling
  follows the log).


  2013-03-20 18:49:08,590.590 ERROR nova.compute.manager [req-f85ccdcd-74f1-4f50-98eb-b68fb8dc8e1a dba071d520c9438ab9fb91077b6f3248 1ba6328ea66c4041bfab7cfcbc2305cf] [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db] Instance failed to spawn
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db] Traceback (most recent call last):
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/local/lib/python2.7/dist-packages/nova/compute/manager.py", line 1055, in _spawn
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     block_device_info)
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1517, in spawn
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     admin_pass=admin_password)
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1913, in _create_image
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     instance=instance)
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     self.gen.next()
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1908, in _create_image
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     mandatory=('files',))
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/local/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 304, in inject_data
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     fs.setup()
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]   File "/usr/local/lib/python2.7/dist-packages/nova/virt/disk/vfs/guestfs.py", line 114, in setup
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db]     {'imgfile': self.imgfile, 'e': e})
  2013-03-20 18:49:08,590.590 14139 TRACE nova.compute.manager [instance: 5f3fe8ba-a148-48e5-8e19-d2f65968b2db] NovaException: Error mounting /var/lib/nova/instances/5f3fe8ba-a148-48e5-8e19-d2f65968b2db/disk with libguestfs (cannot find any suitable libguestfs supermin, fixed or old-style appliance on LIBGUESTFS_PATH (search path: /usr/lib/guestfs))
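
  A minimal sketch of the missing handling, assuming the compute
  manager's existing fault helpers (the exact helper name follows the
  Juno-era code but is illustrative here):

      from nova.openstack.common import excutils

      try:
          self.driver.spawn(context, instance, image_meta,
                            injected_files, admin_password,
                            network_info=network_info,
                            block_device_info=block_device_info)
      except Exception:
          with excutils.save_and_reraise_exception():
              # Surface the failure instead of leaving the instance
              # parked in BUILD forever.
              self._set_instance_error_state(context, instance)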

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1157922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239864] Re: nova-api fails to query ServiceGroup status from Zookeeper

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239864

Title:
  nova-api fails to query ServiceGroup status from Zookeeper

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I am running with the ZooKeeper servicegroup driver on CentOS 6.4
  (Python 2.6) with the RDO distro of Grizzly.

  All nova services are successfully connecting to ZooKeeper, which I've
  verified using zkCli.

  However, when I run `nova service-list` I get an HTTP 500 error from
  nova-api.  The nova-api log (/var/log/nova/api.log) shows:

  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/servicegroup/api.py", line 93, in service_is_up
  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack     return self._driver.is_up(member)
  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/zk.py", line 116, in is_up
  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack     all_members = self.get_all(group_id)
  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/zk.py", line 141, in get_all
  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack     raise exception.ServiceGroupUnavailable(driver="ZooKeeperDriver")
  2013-10-14 16:33:15.110 6748 TRACE nova.api.openstack ServiceGroupUnavailable: The service from servicegroup driver ZooKeeperDriver is temporarily unavailable.

  The problem seems to be around evzookeeper (using version 0.4.0).

  To isolate the problem, I added some evzookeeper.ZKSession synchronous
  get() calls to test the roundtrip communication to ZooKeeper.  When I
  do a `self._session.get(CONF.zookeeper.sg_prefix)` in the zk.py
  ZooKeeperDriver __init__() method it works fine.  The logs show that
  this is immediately before the wsgi server starts up.

  When I do the get() operation from within the ZooKeeperDriver
  get_all() method, the web request hangs indefinitely.  However, if I
  recreate the evzookeeper.ZKSession within the get_all() method (after
  the wsgi server has started) the nova-api request is successful.

  diff --git a/nova/servicegroup/drivers/zk.py b/nova/servicegroup/drivers/zk.py
  index 2a3edae..7de2488 100644
  --- a/nova/servicegroup/drivers/zk.py
  +++ b/nova/servicegroup/drivers/zk.py
  @@ -122,7 +122,14 @@ class ZooKeeperDriver(api.ServiceGroupDriver):
           monitor = self._monitors.get(group_id, None)
           if monitor is None:
               path = "%s/%s" % (CONF.zookeeper.sg_prefix, group_id)
  -            monitor = membership.MembershipMonitor(self._session, path)
  +
  +            null = open(os.devnull, "w")
  +            local_session = evzookeeper.ZKSession(CONF.zookeeper.address,
  +                                                  recv_timeout=
  +                                                  CONF.zookeeper.recv_timeout,
  +                                                  zklog_fd=null)
  +
  +            monitor = membership.MembershipMonitor(local_session, path)
               self._monitors[group_id] = monitor
               # Note(maoy): When initialized for the first time, it takes a
               # while to retrieve all members from zookeeper. To prevent

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in oslo-incubator havana series:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of connection-lost errors.
  This causes the reconnect loop to raise this error and stop retrying
  (a sketch of the fix follows the log below).

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  == scheduler.log ==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 0") None None
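
  A minimal sketch of the fix, assuming the oslo-incubator session code
  matches disconnects by the MySQL error number embedded in the
  stringified exception (the helper shown is abbreviated):

      # Treat 2013 ("Lost connection to MySQL server") like the other
      # connection-lost codes so max_retries/retry_interval keep
      # retrying instead of crashing the service.
      conn_err_codes = ('2002', '2003', '2006', '2013')

      def _is_db_connection_error(args):
          return any(code in args for code in conn_err_codes)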

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266611] Re: test_create_image_with_reboot fails with InstanceInvalidState in gate-nova-python*

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266611

Title:
  test_create_image_with_reboot fails with InstanceInvalidState in gate-nova-python*

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Looks like an intermittent failure:

  http://logs.openstack.org/25/64725/4/check/gate-nova-python27/e603e9e/testr_results.html.gz

  2014-01-06 21:49:45.870 | Traceback (most recent call last):
  2014-01-06 21:49:45.870 |   File "nova/tests/api/ec2/test_cloud.py", line 2343, in test_create_image_with_reboot
  2014-01-06 21:49:45.870 |     self._do_test_create_image(False)
  2014-01-06 21:49:45.871 |   File "nova/tests/api/ec2/test_cloud.py", line 2316, in _do_test_create_image
  2014-01-06 21:49:45.871 |     no_reboot=no_reboot)
  2014-01-06 21:49:45.871 |   File "nova/api/ec2/cloud.py", line 1709, in create_image
  2014-01-06 21:49:45.872 |     name)
  2014-01-06 21:49:45.872 |   File "nova/compute/api.py", line 161, in inner
  2014-01-06 21:49:45.872 |     method=f.__name__)
  2014-01-06 21:49:45.873 | InstanceInvalidState: Instance b1d4d924-069c-409c-bbdb-4f0478526057 in task_state powering-off. Cannot snapshot_volume_backed while the instance is in this state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1168318] Re: qemu_image_info should not return empty QemuImgInfo objects

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1168318

Title:
  qemu_image_info should not return empty QemuImgInfo objects

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova/virt/images.py, if the qemu-img command fails or the image is
  missing, instead of returning an empty QemuImgInfo it should probably
  throw some exception to inform the caller of the situation, rather
  than hiding the problem like it does today (a sketch follows).
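
  A minimal sketch of the suggested behaviour, assuming the Juno-era
  helpers in nova/virt/images.py (the exception class chosen is
  illustrative):

      import os

      from nova import exception
      from nova.openstack.common import imageutils
      from nova import utils

      def qemu_img_info(path):
          if not os.path.exists(path):
              # Tell the caller instead of handing back an empty object.
              raise exception.InvalidDiskInfo(
                  reason="disk image %s does not exist" % path)
          out, err = utils.execute('env', 'LC_ALL=C', 'LANG=C',
                                   'qemu-img', 'info', path)
          if not out:
              raise exception.InvalidDiskInfo(
                  reason="qemu-img info produced no output for %s" % path)
          return imageutils.QemuImgInfo(out)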

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1168318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368035] Re: 'os-interface' resource name is wrong for Nova V2.1

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368035

Title:
  'os-interface' resource name is wrong for Nova V2.1

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The 'os-interface' resource name needs to be fixed in the V3 code base
  so that it works for the V2.1 API.

  The V2 and V3 names for this resource differ as below:

  V2 - '/servers/os-interface'
  V3 - '/servers/os-attach-interfaces'

  The V3 resource name needs to be changed back so that V2.1 matches V2.

  This needs to be fixed to make V2.1 backward compatible with the V2
  API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368035/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353949] Re: nova.exception.ExternalNetworkAttachForbidden is not handled in V3

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353949

Title:
  nova.exception.ExternalNetworkAttachForbidden is not handled in V3

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When creating an instance and assigning a public network to it without
  admin authority, ExternalNetworkAttachForbidden will be raised, but
  this exception is not handled in the V3 API (a sketch of the missing
  handling follows the traceback).

  2014-08-07 19:40:55.032 ERROR nova.api.openstack.extensions [req-a3a824a2-d477-4720-98c7-d3161de268ba demo demo] Unexpected exception in API method
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 473, in wrapped
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 39, in wrapper
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/plugins/v3/servers.py", line 507, in create
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     **create_kwargs)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/hooks.py", line 131, in inner
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     rv = f(*args, **kwargs)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 1351, in create
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     legacy_bdm=legacy_bdm)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 967, in _create_instance
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     max_count)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 734, in _validate_and_build_base_options
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     requested_networks, max_count)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 447, in _check_requested_networks
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     max_count)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 709, in validate_networks
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     neutron=neutron)
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 169, in _get_available_networks
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions     network_uuid=net['id'])
  2014-08-07 19:40:55.032 TRACE nova.api.openstack.extensions ExternalNetworkAttachForbidden: It is not allowed to create an interface on external network 447d82c5-bf58-4f39-ac2f-a30227a464e2
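
  A minimal sketch of the missing handling in the V3/V2.1 servers
  plugin, mirroring how the V2 API maps this exception (the exact
  placement inside create() is illustrative):

      import webob.exc as exc

      try:
          instances = self.compute_api.create(context, inst_type,
                                              image_uuid,
                                              **create_kwargs)
      except exception.ExternalNetworkAttachForbidden as error:
          # Map the forbidden attach to a 403 instead of letting it
          # escape as an unexpected 500.
          raise exc.HTTPForbidden(explanation=error.format_message())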

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367575] Re: os-start/os-stop server actions does not work for v2.1 API

2014-10-01 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
 Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367575

Title:
  os-start/os-stop server actions does not work for v2.1 API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The os-start/os-stop server actions do not work with the V2.1 API.

  They need to be ported to V2.1 from the V3 code base.

  This needs to be fixed to make V2.1 backward compatible with the V2
  API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

