[Yahoo-eng-team] [Bug 1212939] Re: periodic-keystone-python27-stable-grizzly fails due to No module named netaddr

2013-10-18 Thread Alan Pevec
** Changed in: keystone/grizzly
 Assignee: (unassigned) => Jamie Lennox (jamielennox)

** Changed in: keystone/grizzly
   Importance: Undecided => Medium

** Changed in: keystone
 Assignee: Jamie Lennox (jamielennox) => (unassigned)

** Tags removed: in-stable-grizzly

** Changed in: keystone/grizzly
Milestone: None => 2013.1.4

** Changed in: keystone/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1212939

Title:
  periodic-keystone-python27-stable-grizzly fails due to No module named
  netaddr

Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone grizzly series:
  Fix Released

Bug description:
  Keystone on stable/grizzly fails this test:

  ======================================================================
  ERROR: test_authenticate_invalid_tenant_id (test_keystoneclient.KcMasterTestCase)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/home/jenkins/workspace/periodic-keystone-python27-stable-grizzly/tests/test_keystoneclient.py", line 117, in test_authenticate_invalid_tenant_id
      from keystoneclient import exceptions as client_exceptions
    File "/home/jenkins/workspace/periodic-keystone-python27-stable-grizzly/vendor/python-keystoneclient-master/keystoneclient/exceptions.py", line 7, in <module>
      from keystoneclient.openstack.common import jsonutils
    File "/home/jenkins/workspace/periodic-keystone-python27-stable-grizzly/vendor/python-keystoneclient-master/keystoneclient/openstack/common/jsonutils.py", line 44, in <module>
      import netaddr
  ImportError: No module named netaddr

  netaddr was not installed by tox for the test case.
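
  A minimal way to confirm the missing dependency in isolation (this check
  script is illustrative and not part of the gate job):

    # check_netaddr.py - reproduce the failing import outside of tox
    try:
        import netaddr  # pulled in transitively by keystoneclient's jsonutils
    except ImportError:
        raise SystemExit("netaddr is missing from the virtualenv; add it to the "
                         "test requirements (or pip install netaddr) and rerun")
    print("netaddr imports fine")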

  Complete log here: http://logs.openstack.org/periodic/periodic-
  keystone-python27-stable-grizzly/precise5/5/console.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1212939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253905] Re: Keystone doesn't handle UTF8 in exceptions

2013-12-05 Thread Alan Pevec
** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: keystone/havana
   Importance: Undecided => High

** Changed in: keystone/havana
   Status: New => Invalid

** Changed in: keystone/havana
 Assignee: (unassigned) => Jamie Lennox (jamielennox)

** Changed in: keystone/havana
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1253905

Title:
  Keystone doesn't handle UTF8 in exceptions

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress

Bug description:
  Originally reported:
  https://bugzilla.redhat.com/show_bug.cgi?id=1033190

  Description of problem:

  [root@public-control1 ~]# keystone tenant-create --name "Consulting – Middleware Delivery"
  Unable to communicate with identity service: {"error": {"message": "An unexpected error prevented the server from fulfilling your request. 'ascii' codec can't encode character u'\\u2013' in position 11: ordinal not in range(128)", "code": 500, "title": "Internal Server Error"}}. (HTTP 500)

  
  NB: the dash in the name is not an ASCII dash; it is an en dash (U+2013).
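
  A short Python 2 sketch of the failure mode (illustrative only, not
  Keystone's code): the en dash forces an implicit ASCII encode, which is
  exactly what a UTF-8-aware exception formatter has to avoid.

    # -*- coding: utf-8 -*-
    name = u'Consulting \u2013 Middleware Delivery'  # u'\u2013' is the en dash
    try:
        str(name)                    # implicit ASCII encode of a unicode string
    except UnicodeEncodeError as exc:
        print(exc)                   # 'ascii' codec can't encode character u'\u2013' ...
    safe = name.encode('utf-8')      # what the error path should do instead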

  Version-Release number of selected component (if applicable):

  openstack-keystone-2013.1.3-2.el6ost.noarch

  How reproducible:

  Every

  
  Additional info:

  Performing the same command on a Folsom cloud works just fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1253905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-06 Thread Alan Pevec
** Changed in: cinder
   Importance: Undecided => High

** Changed in: keystone
   Importance: Undecided => High

** Changed in: nova
   Importance: Undecided => High

** Changed in: heat
   Importance: Undecided => High

** Changed in: neutron
   Importance: Undecided => High

** Changed in: oslo
   Importance: Undecided => High

** Changed in: keystone
 Assignee: (unassigned) => Alan Pevec (apevec)

** Also affects: ceilometer/havana
   Importance: Undecided
   Status: New

** Changed in: ceilometer/havana
   Importance: Undecided => High

** Also affects: cinder/havana
   Importance: Undecided
   Status: New

** Changed in: cinder/havana
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  New
Status in Cinder:
  In Progress
Status in Cinder havana series:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26
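
  For reference, a minimal sketch of the server-side pattern those docs
  describe (topic/server names and the endpoint class are illustrative and
  not taken from any of the affected projects):

    from oslo.config import cfg
    from oslo import messaging

    class TestEndpoint(object):
        def ping(self, ctxt, arg):
            return arg

    transport = messaging.get_transport(cfg.CONF)
    # A second process would use server='server-02' with the same topic; per
    # the documentation quoted above, a call addressed to the topic should be
    # dispatched to only one of the two servers.
    target = messaging.Target(topic='my-topic', server='server-01')
    server = messaging.get_rpc_server(transport, target, [TestEndpoint()])
    server.start()
    server.wait()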

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-06 Thread Alan Pevec
** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Also affects: heat/havana
   Importance: Undecided
   Status: New

** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  New
Status in Orchestration API (Heat):
  New
Status in heat havana series:
  New
Status in OpenStack Identity (Keystone):
  New
Status in Keystone havana series:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in neutron havana series:
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Compute (nova) havana series:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-06 Thread Alan Pevec
** Also affects: oslo/havana
   Importance: Undecided
   Status: New

** Changed in: oslo/havana
   Importance: Undecided => High

** Changed in: oslo/havana
   Status: New => Fix Committed

** Changed in: neutron/havana
   Importance: Undecided => High

** Changed in: heat/havana
   Importance: Undecided => High

** Changed in: nova/havana
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  New
Status in Orchestration API (Heat):
  New
Status in heat havana series:
  New
Status in OpenStack Identity (Keystone):
  New
Status in Keystone havana series:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in neutron havana series:
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Compute (nova) havana series:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-06 Thread Alan Pevec
** Also affects: oslo/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  In Progress
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  New
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)
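
  In other words (an illustrative comparison built from the addresses shown
  above, not from the oslo code itself):

    topic = 'my-topic'
    expected_fanout_v2 = 'amq.topic/fanout/' + topic  # what reconnect should use
    observed_fanout_v2 = 'amq.topic/fanout/'          # what it actually uses: topic lost
    assert expected_fanout_v2 != observed_fanout_v2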

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-06 Thread Alan Pevec
** Also affects: ceilometer/havana
   Importance: Undecided
   Status: New

** Changed in: ceilometer/havana
   Status: New => In Progress

** Changed in: ceilometer/havana
   Importance: Undecided => High

** Changed in: cinder
   Importance: Undecided => High

** Also affects: cinder/havana
   Importance: Undecided
   Status: New

** Changed in: cinder/havana
   Importance: Undecided => High

** Changed in: cinder/havana
   Status: New => In Progress

** Changed in: ceilometer/havana
Milestone: None => 2013.2.1

** Changed in: cinder/havana
Milestone: None => 2013.2.1

** Changed in: cinder/havana
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Changed in: ceilometer/havana
 Assignee: (unassigned) => Eoghan Glynn (eglynn)

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided => High

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Changed in: nova/havana
Milestone: None => 2013.2.1

** Also affects: heat/havana
   Importance: Undecided
   Status: New

** Changed in: heat/havana
   Importance: Undecided => High

** Changed in: heat/havana
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  In Progress
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in neutron havana series:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  New
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: 

[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-06 Thread Alan Pevec
** Changed in: cinder
Milestone: None => icehouse-2

** Changed in: heat/havana
Milestone: None => 2013.2.1

** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron/havana
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  In Progress
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in neutron havana series:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  New
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-06 Thread Alan Pevec
** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: oslo/havana
   Importance: Undecided => High

** Changed in: oslo/havana
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer havana series:
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  In Progress
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  In Progress
Status in OpenStack Identity (Keystone):
  New
Status in Keystone havana series:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in neutron havana series:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post 

[Yahoo-eng-team] [Bug 1178375] Re: Orphan exchanges in Qpid and lack of option for making queues [un]durable

2013-12-06 Thread Alan Pevec
** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: keystone/havana
   Importance: Undecided => Medium

** Changed in: keystone/havana
Milestone: None => 2013.2.1

** Changed in: keystone/havana
 Assignee: (unassigned) => Alan Pevec (apevec)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1178375

Title:
  Orphan exchanges in Qpid and lack of option for making queues
  [un]durable

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  New
Status in Keystone havana series:
  New
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Start qpid, nova-api, nova-scheduler, and nova-conductor, and nova-
  compute.

  There are orphan direct exchanges in qpid. Checked using qpid-config
  exchanges. The exchanges continue to grow, presumably, whenever nova-
  compute does a periodic update over AMQP.

  Moreover, the direct and topic exchanges are by default durable which
  is a problem. We want the ability to turn on/off the durable option
  just like Rabbit options.
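
  A hedged sketch of the kind of switch being asked for, using oslo.config
  (the option name amqp_durable_queues is an assumption here, not necessarily
  what oslo ended up exposing):

    from oslo.config import cfg

    qpid_opts = [
        cfg.BoolOpt('amqp_durable_queues', default=False,
                    help='Declare Qpid exchanges and queues as durable.'),
    ]
    cfg.CONF.register_opts(qpid_opts)

    # The exchange/queue declarations would then honour the flag instead of
    # hard-coding durable=True:
    durable = cfg.CONF.amqp_durable_queues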

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1178375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178375] Re: Orphan exchanges in Qpid and lack of option for making queues [un]durable

2013-12-06 Thread Alan Pevec
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1178375

Title:
  Orphan exchanges in Qpid and lack of option for making queues
  [un]durable

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Start qpid, nova-api, nova-scheduler, and nova-conductor, and nova-
  compute.

  There are orphan direct exchanges in qpid. Checked using qpid-config
  exchanges. The exchanges continue to grow, presumably, whenever nova-
  compute does a periodic update over AMQP.

  Moreover, the direct and topic exchanges are by default durable which
  is a problem. We want the ability to turn on/off the durable option
  just like Rabbit options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1178375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240790] Re: Allow using ipv6 address with omitting zero

2013-12-06 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Tags removed: havana-backport-potential

** Changed in: neutron/havana
   Importance: Undecided => Low

** Changed in: neutron/havana
   Status: New => In Progress

** Changed in: neutron/havana
Milestone: None => 2013.2.1

** Changed in: neutron/havana
 Assignee: (unassigned) => Hua Zhang (zhhuabj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240790

Title:
  Allow using ipv6 address with omitting zero

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress

Bug description:
  Now neutron support ipv6 address like 2001:db8::10:10:10:0/120,
  but don't support ipv6 address with omiting zero like 
2001:db8:0:0:10:10:10:0/120
  that will cause the exception '2001:db8:0:0:10:10:10:0/120' isn't a 
recognized IP subnet cidr, '2001:db8::10:10:10:0/120' is recommended
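
  Both spellings denote the same subnet, as a quick netaddr check shows (a
  small illustrative snippet, not Neutron's validation code):

    import netaddr

    expanded = netaddr.IPNetwork('2001:db8:0:0:10:10:10:0/120')
    compressed = netaddr.IPNetwork('2001:db8::10:10:10:0/120')
    assert expanded == compressed  # same network, different spelling
    print(compressed)              # 2001:db8::10:10:10:0/120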

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255519] Re: NVP connection fails because port is a string

2013-12-06 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Importance: Undecided => Medium

** Changed in: neutron/havana
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron/havana
Milestone: None => 2013.2.1

** Changed in: neutron/havana
   Status: New => In Progress

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255519

Title:
  NVP connection fails because port is a string

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  In Progress

Bug description:
  On a dev machine I recently created, I noticed failures at startup when
  Neutron is configured with the NVP plugin. I root-caused the failure to the
  port being explicitly passed to the HTTPSConnection constructor as a string
  rather than an integer.

  This can be easily fixed by ensuring the port is always an integer, as
  sketched below.

  I am not sure of the severity of this bug, as it might be strictly related
  to this specific dev env, but it might be worth applying and backporting
  the fix.
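
  A minimal sketch of the fix described above (the hostname and config value
  are illustrative; this is not the plugin's actual code):

    import httplib

    nvp_host = 'nvp-controller.example.org'
    nvp_port = '443'  # typically read from configuration as a string
    conn = httplib.HTTPSConnection(nvp_host, int(nvp_port))  # cast before use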

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1255519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229994] Re: VMwareVCDriver: snapshot failure when host in maintenance mode

2013-12-06 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
   Importance: Undecided => High

** Changed in: nova/havana
 Assignee: (unassigned) => Yaguang Tang (heut2008)

** Changed in: nova/havana
Milestone: None => 2013.2.1

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229994

Title:
  VMwareVCDriver: snapshot failure when host in maintenance mode

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Image snapshot through the VC cluster driver may fail if, within the
  datacenter containing the cluster managed by the driver, there are one
  or more hosts in maintenance mode with access to the datastore
  containing the disk image snapshot.

  A sign that this situation has occurred is the appearance in the nova
  compute log of an error similar to the following:

  2013-08-02 07:10:30.036 WARNING nova.virt.vmwareapi.driver [-] Task 
[DeleteVirtualDisk_Task] (returnval){
  value = task-228
  _type = Task
  } status: error The operation is not allowed in the current state.

  What this means is that even if all hosts in the cluster are running fine
  in normal mode, a host outside of the cluster going into maintenance mode
  may lead to snapshot failure.

  The root cause of the problem is due to an issue in VC's handler of
  the VirtualDiskManager.DeleteVirtualDisk_Task API, which may
  incorrectly pick a host in maintenance mode to service the disk
  deletion even though such an operation will be rejected by the host
  under maintenance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1229994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241615] Re: rebuild with volume attached leaves instance without the volume and in an inconsistent state

2013-12-06 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided => High

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
 Assignee: (unassigned) => Nikola Đipanov (ndipanov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241615

Title:
  rebuild with volume attached leaves instance without the volume and in
  an inconsistent state

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  I created a backup of an instance which had no volume attached.

  I then attached a volume and rebuilt the instance from the backup.

  It appears that the volume is no longer attached after the rebuild, but
  if we try to attach it to the same device we get an error that a device
  is already attached:

  2013-10-18 16:54:36.632 2478 DEBUG qpid.messaging [-] RETR[2fca830]: 
Message(properties={'x-amqp-0-10.routing-key': 
u'reply_70fb16e321724b38b3d3face4e83f363'}, content={u'oslo.message': 
u'{_unique_id: dd2a85b63c56498c8f2835f9b96e9bb9
  , failure: null, _msg_id: 7ce524cebbb34aecab9d608a48103a1c, result: 
null, ending: true}', u'oslo.version': u'2.0'}) _get 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py:654
  2013-10-18 16:54:36.633 2478 DEBUG qpid.messaging.io.ops [-] SENT[2fa6cb0]: 
MessageFlow(destination='0', unit=0, value=1L, id=serial(5206)) write_op 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
  2013-10-18 16:54:36.634 2478 DEBUG qpid.messaging.io.ops [-] SENT[2fa6cb0]: 
SessionCompleted(commands=[0-5199]) write_op 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
  2013-10-18 16:54:36.639 2478 ERROR nova.openstack.common.rpc.amqp 
[req-4a884bb4-5ba6-403d-8be7-df1eeebc1324 bbce236d5aac4d1dbc086a8835ed0ebc 
d09f3bf0f9224affa92ab97010b37270] Exception during message handling
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 90, in wrapped
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 73, in wrapped
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 243, in 
decorated_function
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 229, in 
decorated_function
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 271, in 
decorated_function
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 258, in 
decorated_function
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 3624, in 
reserve_block_device_name
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp return 
do_reserve()
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py, line 
246, in inner
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2013-10-18 16:54:36.639 2478 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 3613, in 
do_reserve
  2013-10-18 16:54:36.639 2478 TRACE 

[Yahoo-eng-team] [Bug 1252827] Re: VMWARE: Intermittent problem with stats reporting

2013-12-07 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided => High

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova/havana
Milestone: None => 2013.2.1

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252827

Title:
  VMWARE: Intermittent problem with stats reporting

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  I see that the VMware driver sometimes reports 0 stats. Please take a look
  at the following log file for more information:
  http://162.209.83.206/logs/51404/6/screen-n-cpu.txt.gz

  excerpts from log file:
  2013-11-18 15:41:03.994 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for datastore Reason: None
  2013-11-18 15:41:04.029 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for host Reason: None
  2013-11-18 15:41:04.029 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for resourcePool Reason: None
  2013-11-18 15:41:04.029 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: free ram (MB): 0 _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:389
  2013-11-18 15:41:04.029 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: free disk (GB): 0 _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:390
  2013-11-18 15:41:04.030 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: VCPU information unavailable _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:397

  During this time we cannot spawn any server. Look at the
  http://162.209.83.206/logs/51404/6/screen-n-sch.txt.gz

  excerpts from log file:
  2013-11-18 15:41:52.475 DEBUG nova.filters 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] Filter AvailabilityZoneFilter 
returned 1 host(s) get_filtered_objects /opt/stack/nova/nova/filters.py:88
  2013-11-18 15:41:52.476 DEBUG nova.scheduler.filters.ram_filter 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] (Ubuntu1204Server, 
domain-c26(c1)) ram:-576 disk:0 io_ops:0 instances:1 does not have 64 MB usable 
ram, it only has -576.0 MB usable ram. host_passes 
/opt/stack/nova/nova/scheduler/filters/ram_filter.py:60
  2013-11-18 15:41:52.476 INFO nova.filters 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] Filter RamFilter returned 0 
hosts
  2013-11-18 15:41:52.477 WARNING nova.scheduler.driver 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] [instance: 
1a648022-1783-4874-8b41-c3f4c89d8500] Setting instance to ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250032] Re: libvirt's _create_ephemeral must accept max_size

2013-12-08 Thread Alan Pevec
*** This bug is a duplicate of bug 1251152 ***
https://bugs.launchpad.net/bugs/1251152

** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250032

Title:
  libvirt's _create_ephemeral must accept max_size

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Running libvirt / KVM with QCow2, local ephemeral storage.

  With revision f6810be4, Change-Id:
  I3d47adaa2ad07434853f447feb27d7aae0e2e717, a max_size parameter was
  introduced to calls to prepare_template. For ephemeral images, this
  introduced a regression.

  prepare_template triggers a call to _create_ephemeral in the case
  where an ephemeral backing file (of some size) is required, which
  doesn't yet exist on the host.

  This causes an error since nova/virt/libvirt/driver.py's
  LibvirtDriver._create_ephemeral doesn't recognise that kwarg.

  The fix ought to be trivial - teaching _create_ephemeral to understand
  the max_size kwarg (even if it ignores it).
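
  A hedged sketch of that trivial fix; the function body is stubbed out and
  the signature is illustrative rather than copied from nova:

    def _create_ephemeral(target, ephemeral_size, fs_label, os_type,
                          max_size=None):
        # max_size is accepted (and ignored) so that callers which now pass
        # it, such as prepare_template, no longer fail with an unexpected
        # keyword argument.
        print('would mkfs a %s ephemeral disk at %s (%s GB)'
              % (os_type, target, ephemeral_size))

    # Both call styles now work:
    _create_ephemeral('/tmp/eph0', 1, 'ephemeral0', 'linux')
    _create_ephemeral('/tmp/eph0', 1, 'ephemeral0', 'linux', max_size=1)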

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251920] Re: Tempest failures due to failure to return console logs from an instance

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251920

Title:
  Tempest failures due to failure to return console logs from an
  instance

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  New
Status in Tempest:
  Fix Committed

Bug description:
  Logstash search:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpjb25zb2xlLmh0bWwgQU5EIG1lc3NhZ2U6XCJhc3NlcnRpb25lcnJvcjogY29uc29sZSBvdXRwdXQgd2FzIGVtcHR5XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODQ2NDEwNzIxODl9

  An example failure is http://logs.openstack.org/92/55492/8/check
  /check-tempest-devstack-vm-full/ef3a4a4/console.html

  console.html
  ===

  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,775 Request: POST 
http://127.0.0.1:8774/v2/3f6934d9aabf467aa8bc51397ccfa782/servers/10aace14-23c1-4cec-9bfd-2c873df1fbee/action
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': 'Token omitted'}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Body: {"os-getConsoleOutput": {"length": 10}}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:21,000 Response Status: 200
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Nova request id: 
req-7a2ee0ab-c977-4957-abb5-1d84191bf30c
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Headers: 
{'content-length': '14', 'date': 'Sat, 16 Nov 2013 21:41:20 GMT', 
'content-type': 'application/json', 'connection': 'close'}
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Body: {"output": ""}
  2013-11-16 21:54:27.999 | }}}
  2013-11-16 21:54:27.999 | 
  2013-11-16 21:54:27.999 | Traceback (most recent call last):
  2013-11-16 21:54:27.999 |   File "tempest/api/compute/servers/test_server_actions.py", line 281, in test_get_console_output
  2013-11-16 21:54:28.000 |     self.wait_for(get_output)
  2013-11-16 21:54:28.000 |   File "tempest/api/compute/base.py", line 133, in wait_for
  2013-11-16 21:54:28.000 |     condition()
  2013-11-16 21:54:28.000 |   File "tempest/api/compute/servers/test_server_actions.py", line 278, in get_output
  2013-11-16 21:54:28.000 |     self.assertTrue(output, "Console output was empty.")
  2013-11-16 21:54:28.000 |   File "/usr/lib/python2.7/unittest/case.py", line 420, in assertTrue
  2013-11-16 21:54:28.000 |     raise self.failureException(msg)
  2013-11-16 21:54:28.001 | AssertionError: Console output was empty.

  n-api
  

  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Action: 'action', body: 
{os-getConsoleOutput: {length: 10}} _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:963
  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Calling method bound method 
ConsoleOutputController.get_console_output of 
nova.api.openstack.compute.contrib.console_output.ConsoleOutputController 
object at 0x3c1f990 _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:964
  2013-11-16 21:41:20.865 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Making synchronous call on 
compute.devstack-precise-hpcloud-az2-663635 ... multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:553
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] MSG_ID is 
a93dceabf6a441eb850b5fbb012d661f multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:556
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] UNIQUE_ID is 
706ab69dc066440fbe1bd7766b73d953. _add_unique_id 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:341
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] Closed channel #1 _do_close 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] using channel_id: 1 __init__ 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
  2013-11-16 21:41:20.870 22679 DEBUG amqp [-] Channel open _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  2013-11-16 21:41:20.999 INFO nova.osapi_compute.wsgi.server 

[Yahoo-eng-team] [Bug 1239603] Re: Bogus ERROR level debug spew when creating a new instance

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239603

Title:
  Bogus ERROR level debug spew when creating a new instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Change-Id: Ifd41886b9bc7dff01cdf741a833946bed1bdddc implemented a
  number of items required for auto_disk_config to be more than just
  True or False.

  It appears that a logging statement used for debugging of the code has
  been left behind:

    def _check_auto_disk_config(self, instance=None, image=None,
                                **extra_instance_updates):
        auto_disk_config = extra_instance_updates.get("auto_disk_config")
        if auto_disk_config is None:
            return
        if not image and not instance:
            return

        if image:
            image_props = image.get("properties", {})
            LOG.error(image_props)

  
  This needs to be removed as it is causing false-positives to be picked up by 
our error-tracking software
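
  A minimal sketch of the kind of change this implies, assuming the call is
  simply dropped or downgraded rather than the surrounding logic changing
  (not the actual patch):

    import logging

    LOG = logging.getLogger(__name__)


    def _check_auto_disk_config(instance=None, image=None,
                                **extra_instance_updates):
        # Sketch only: same structure as the snippet above, with the stray
        # LOG.error() removed; downgrading it to LOG.debug() would be an
        # equally plausible fix if the value is still useful for tracing.
        auto_disk_config = extra_instance_updates.get("auto_disk_config")
        if auto_disk_config is None:
            return
        if not image and not instance:
            return
        if image:
            image_props = image.get("properties", {})
            LOG.debug("image properties: %s", image_props)
            # ... remainder of the auto_disk_config validation ...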

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246592] Re: Nova live migration failed due to OLE error

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246592

Title:
  Nova live migration failed due to OLE error

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  When migrate vm on hyperV, command fails with the following error:

  2013-10-25 03:35:40.299 12396 ERROR nova.openstack.common.rpc.amqp 
[req-b542e0fd-74f5-4e53-889c-48a3b44e2887 3a75a18c8b60480d9369b25ab06519b3 
0d44e4afd3d448c6acf0089df2dc7658] Exception during message handling
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\amqp.py, line 461, 
in _process_data
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\dispatcher.py, line 
172, in dispatch
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 90, in wrapped
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
payload)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 73, in wrapped
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 4103, in 
live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py, line 118, in 
live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
44, in wrapper
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp return 
function(self, *args, **kwds)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
76, in live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
recover_method(context, instance_ref, dest, block_migration)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
69, in live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp dest)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationutils.py, line 
231, in live_migrate_vm
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
disk_paths = self._get_physical_disk_paths(vm_name)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationutils.py, line 
114, in _get_physical_disk_paths
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
ide_paths = self._vmutils.get_controller_volume_paths(ide_ctrl_path)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py, line 553, in 
get_controller_volume_paths
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
parent: controller_path})
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\wmi.py, line 

[Yahoo-eng-team] [Bug 1239709] Re: NovaObject does not properly honor VERSION

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239709

Title:
  NovaObject does not properly honor VERSION

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  The base object infrastructure has been comparing Object.version
  instead of the Object.VERSION that *all* the objects have been setting
  and incrementing when changes have been made. Since the base object
  defined a .version, and that was used to determine the actual version
  of an object, all objects defining a different VERSION were ignored.

  All systems in the wild currently running broken code are sending
  version '1.0' for all of their objects. The fix is to change the base
  object infrastructure to properly examine, compare and send
  Object.VERSION.

  Impact should be minimal at this point, but getting systems patched as
  soon as possible will be important going forward.
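
  A toy illustration of the attribute mix-up described above (BaseObject
  stands in for NovaObject; this is not Nova code and the names are
  simplified):

    class BaseObject(object):
        # The base class historically exposed a lowercase "version" that was
        # compared and sent, while every subclass bumped "VERSION".
        VERSION = '1.0'

        @classmethod
        def obj_version(cls):
            # Fixed behaviour: honour the attribute subclasses actually set.
            return cls.VERSION


    class Instance(BaseObject):
        VERSION = '1.5'   # the broken code ignored this and always saw '1.0'


    assert Instance.obj_version() == '1.5'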

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244311] Re: notification failure in _sync_power_states

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244311

Title:
  notification failure in _sync_power_states

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  The _sync_power_states periodic task pulls instances without
  system_metadata in order to reduce network bandwidth being
  unnecessarily consumed.  Most of the time this is fine, but if
  vm_power_state != db_power_state then the instance is updated and
  saved.  As part of saving the instance a notification is sent.  In
  order to send the notification it extracts flavor information from the
  system_metadata on the instance.  But system_metadata isn't loaded,
  and won't be lazy loaded.  So an exception is raised and the
  notification isn't sent.

  2013-10-23 03:30:35.714 21492 ERROR nova.notifications [-] [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Failed to send state update notification
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Traceback (most recent call last):
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 146, in send_update
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] old_display_name=old_display_name)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 199, in _send_instance_update_notification
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] payload = info_from_instance(context, 
instance, None, None)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 343, in info_from_instance
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] instance_type = 
flavors.extract_flavor(instance_ref)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/compute/flavors.py,
 line 282, in extract_flavor
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] instance_type[key] = 
type_fn(sys_meta[type_key])
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] KeyError: 'instance_type_memory_mb'
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab]
  2013-10-23 03:30:35.718 21492 WARNING nova.compute.manager [-] [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Instance shutdown by itself. Calling the 
stop API.
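
  The failure is easy to reproduce in isolation: flavor extraction needs keys
  that only live in system_metadata, so an instance fetched without them blows
  up when the notification payload is built. A hedged sketch with plain dicts
  (not Nova objects):

    def extract_flavor(instance):
        # Simplified stand-in for nova.compute.flavors.extract_flavor(): it
        # reads 'instance_type_*' keys out of the instance's system_metadata.
        sys_meta = instance['system_metadata']
        return {'memory_mb': int(sys_meta['instance_type_memory_mb'])}


    # An instance pulled without system_metadata behaves like this:
    lean_instance = {'uuid': 'fa0cee4b', 'system_metadata': {}}
    try:
        extract_flavor(lean_instance)
    except KeyError as exc:
        print('notification would fail: %s' % exc)

  One way out, suggested by the traceback, is to fetch system_metadata up
  front for any instance the periodic task might save (and therefore notify
  about).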

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243260] Re: Nova api doesn't start with a backdoor port set

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243260

Title:
  Nova api doesn't start with a backdoor port set

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  nova api fails to start properly if a backdoor port is specified.
  Looking at the logs this traceback is repeatedly printed:

  2013-10-22 14:19:46.822 INFO nova.openstack.common.service [-] Child 1460 
exited with status 1
  2013-10-22 14:19:46.824 INFO nova.openstack.common.service [-] Started child 
1468
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 60684 for process 1467
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 58986 for process 1468
  2013-10-22 14:19:46.837 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 117, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup x.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 49, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/service.py, line 448, in run_service
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/service.py, line 357, in start
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
self.manager.backdoor_port = self.backdoor_port
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.840 TRACE nova   File /usr/local/bin/nova-api, line 10, 
in module
  2013-10-22 14:19:46.840 TRACE nova sys.exit(main())
  2013-10-22 14:19:46.840 TRACE nova   File /opt/stack/nova/nova/cmd/api.py, 
line 53, in main
  2013-10-22 14:19:46.840 TRACE nova launcher.wait()
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 351, in wait
  2013-10-22 14:19:46.840 TRACE nova self._respawn_children()
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 341, in 
_respawn_children
  2013-10-22 14:19:46.840 TRACE nova self._start_child(wrap)
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 287, in _start_child
  2013-10-22 14:19:46.840 TRACE nova os._exit(status)
  2013-10-22 14:19:46.840 TRACE nova TypeError: an integer is required
  2013-10-22 14:19:46.840 TRACE nova
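
  A hedged sketch of the failing pattern: the WSGI service has no manager
  object, so the backdoor port assignment needs a guard (class and attribute
  names simplified; not the actual patch):

    class WSGIService(object):
        def __init__(self, name, manager=None, backdoor_port=None):
            self.name = name
            self.manager = manager            # None for pure API services
            self.backdoor_port = backdoor_port

        def start(self):
            # The broken code assigned self.manager.backdoor_port
            # unconditionally, which raises AttributeError when manager is
            # None and takes the child process down, as in the trace above.
            if self.manager is not None:
                self.manager.backdoor_port = self.backdoor_port
            # ... bind sockets, spawn workers ...


    WSGIService('osapi_compute', backdoor_port=60684).start()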

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243291] Re: Restarting nova compute has an exception

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243291

Title:
  Restarting nova compute has an exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  (latest havana code - libvirt driver)

  1. launch a nova vm
  2. see that the instance is deployed on the compute node
  3. restart the compute node

  get the following exception:

  2013-10-22 05:46:53.711 30742 INFO nova.openstack.common.rpc.common 
[req-57056535-4ecd-488a-a75e-ff83341afb98 None None] Connected to AMQP server 
on 192.168.10.111:5672
  2013-10-22 05:46:53.737 30742 AUDIT nova.service [-] Starting compute node 
(version 2013.2)
  2013-10-22 05:46:53.814 30742 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
x.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 65, 
in run_service
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 154, in start
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 786, in 
init_host
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 664, in 
_init_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
net_info = compute_utils.get_nw_info_for_instance(instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/utils.py, line 349, in 
get_nw_info_for_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return instance.info_cache.network_info
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup
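
  The crash comes from dereferencing instance.info_cache while it is None. A
  minimal defensive sketch with plain objects (the real fix may instead
  rebuild or reload the info cache rather than return an empty list):

    def get_nw_info_for_instance(instance):
        # Simplified version of nova.compute.utils.get_nw_info_for_instance()
        # with a guard for instances whose info_cache is missing.
        info_cache = getattr(instance, 'info_cache', None)
        if info_cache is None:
            return []
        return info_cache.network_info


    class FakeInstance(object):
        info_cache = None


    assert get_nw_info_for_instance(FakeInstance()) == []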

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246412] Re: Unshelving an instance with an attached volume causes the volume to not get attached

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246412

Title:
  Unshelving an instance with an attached volume causes the volume to
  not get attached

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  When shelving an instance that has a volume attached, the volume will
  not get re-attached once the instance is unshelved.

  Reproduce by:

  $nova boot --image IMAGE --flavor FLAVOR test
  $nova attach INSTANCE VOLUME #ssh into the instance and make sure the 
volume is there
  $nova shelve INSTANCE #Make sure the instance is done shelving
  $nova unshelve INSTANCE #Log in and see that the volume is not visible any 
more

  It can also be seen that the volume remains attached as per

  $cinder list

  And if you take a look at the generated xml (if you use libvirt) you
  can see that the volume is not there when the instance is done
  unshelving.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240247] Re: API cell always doing local deletes

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240247

Title:
  API cell always doing local deletes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  It appears a regression was introduced in:

  https://review.openstack.org/#/c/36363/

  where the API cell now always does a _local_delete() before
  telling child cells to delete the instance.  There are at least a couple
  of bad side effects of this:

  1) The instance disappears immediately from API view, even though the 
instance still exists in the child cell.  The user does not see a 'deleting' 
task state.  And if the delete fails in the child cell, you have a sync issue 
until the instance is 'healed'.
  2) Double delete.start and delete.end notifications are sent.  1 from API 
cell, 1 from child cell.

  The problem seems to be that _local_delete is being called because the
  service is determined to be down... because the compute service does
  not run in the API cell.
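
  A hedged sketch of the control flow the description implies: in the API
  cell the "service looks down, do a local delete" shortcut should not apply,
  and the delete should simply be passed down (every name here is made up):

    def delete_instance(context, instance, is_api_cell, compute_service_up,
                        tell_cell_to_delete, cast_delete_to_compute,
                        local_delete):
        # Illustrative control flow only, not the actual compute/api code.
        if is_api_cell:
            # The compute service never runs in the API cell, so a "down"
            # service is the wrong reason to local-delete; let the child
            # cell own the delete (and the notifications).
            tell_cell_to_delete(context, instance)
            return
        if compute_service_up:
            cast_delete_to_compute(context, instance)
        else:
            local_delete(context, instance)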

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234759] Re: Hyper-V fails to spawn snapshots

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1234759

Title:
  Hyper-V fails to spawn snapshots

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  Creating a snapshot of an instance and then trying to boot from it
  will result in the following Hyper-V exception: HyperVException: WMI job
  failed with status 10. Here is the trace:
  http://paste.openstack.org/show/47904/ .

  The idea is that Hyper-V fails to expand the image, as it gets the
  request to resize it to its actual size, which leads to an error.
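
  A hedged sketch of the guard that behaviour implies: only ask Hyper-V to
  resize the VHD when the requested size is strictly larger than the current
  one (purely illustrative; the real code compares sizes reported by the
  Hyper-V VHD utilities rather than bare numbers):

    def resize_root_vhd_if_needed(vhd_path, current_size, requested_size,
                                  resize_vhd):
        # Illustrative only, not the shipped patch.
        if requested_size < current_size:
            raise ValueError('cannot shrink %s' % vhd_path)
        if requested_size == current_size:
            # Asking Hyper-V to "expand" to the same size is the error path.
            return
        resize_vhd(vhd_path, requested_size)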

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1234759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233026] Re: exception.InstanceIsLocked is not caught in start and stop server api

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233026

Title:
  exception.InstanceIsLocked is not caught in start and stop server api

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  When porting the nova v3 test
  test_server_actions.ServerActionsV3TestXML.test_lock_unlock_server, we
  found that exception.InstanceIsLocked is not caught in the start and stop
  server APIs.

  
  the following is the nova log:

  2013-09-30 15:03:29.306 ^[[00;32mDEBUG nova.api.openstack.wsgi 
[^[[01;36mreq-d791baac-2015-4e65-8d02-720b0944e824 ^[[00;36mdemo demo^[[00;32m] 
^[[01;35m^[[00;32mAction: 'action', body: ?xml version=1.0 encoding=UTF-8?
  stop xmlns=http://docs.openstack.org/compute/api/v1.1/^[[00m 
^[[00;33mfrom (pid=23798) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:935^[[00m
  2013-09-30 15:03:29.307 ^[[00;32mDEBUG nova.api.openstack.wsgi 
[^[[01;36mreq-d791baac-2015-4e65-8d02-720b0944e824 ^[[00;36mdemo demo^[[00;32m] 
^[[01;35m^[[00;32mCalling method bound method ServersController._stop_server 
of nova.api.openstack.compute.plugins.v3.servers.ServersController object at 
0x577c250^[[00m ^[[00;33mfrom (pid=23798) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:936^[[00m
  2013-09-30 15:03:29.339 ^[[00;32mDEBUG 
nova.api.openstack.compute.plugins.v3.servers 
[^[[01;36mreq-d791baac-2015-4e65-8d02-720b0944e824 ^[[00;36mdemo demo^[[00;32m] 
^[[01;35m[instance: cd4fec81-d2e8-43cd-ab5d-47da72dd90fa] ^[[00;32mstop 
instance^[[00m ^[[00;33mfrom (pid=23798) _stop_server 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/servers.py:1372^[[00m
  2013-09-30 15:03:29.340 ^[[01;31mERROR nova.api.openstack.extensions 
[^[[01;36mreq-d791baac-2015-4e65-8d02-720b0944e824 ^[[00;36mdemo demo^[[01;31m] 
^[[01;35m^[[01;31mUnexpected exception in API method^[[00m
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00mTraceback (most recent call last):
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00m  File /opt/stack/nova/nova/api/openstack/extensions.py, line 
469, in wrapped
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00mreturn f(*args, **kwargs)
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00m  File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/servers.py, line 1374, 
in _stop_server
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00mself.compute_api.stop(context, instance)
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00m  File /opt/stack/nova/nova/compute/api.py, line 198, in 
wrapped
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00mreturn func(self, context, target, *args, **kwargs)
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00m  File /opt/stack/nova/nova/compute/api.py, line 187, in inner
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00mraise 
exception.InstanceIsLocked(instance_uuid=instance['uuid'])
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00mInstanceIsLocked: Instance cd4fec81-d2e8-43cd-ab5d-47da72dd90fa 
is locked
  ^[[01;31m2013-09-30 15:03:29.340 TRACE nova.api.openstack.extensions 
^[[01;35m^[[00m
  2013-09-30 15:03:29.341 ^[[00;36mINFO nova.api.openstack.wsgi 
[^[[01;36mreq-d791baac-2015-4e65-8d02-720b0944e824 ^[[00;36mdemo demo^[[00;36m] 
^[[01;35m^[[00;36mHTTP exception thrown: Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  class 'nova.exception.InstanceIsLocked'^[[00m
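
  Given the log above, the missing piece is a handler around the v3
  start/stop actions that turns the locked-instance error into a 409 instead
  of an unexpected 500. A hedged sketch (the InstanceIsLocked stand-in and
  handler shape are illustrative; webob is used because the API layer already
  uses it):

    import webob.exc


    class InstanceIsLocked(Exception):
        """Stand-in for nova.exception.InstanceIsLocked."""


    def _stop_server(compute_api, context, instance):
        # Sketch of the handler the v3 start/stop actions were missing.
        try:
            compute_api.stop(context, instance)
        except InstanceIsLocked as exc:
            # 409 Conflict, rather than letting it surface as an
            # "Unexpected API Error" 500 as in the log above.
            raise webob.exc.HTTPConflict(explanation=str(exc))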

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235435] Re: 'SubnetInUse: Unable to complete operation on subnet UUID. One or more ports have an IP allocation from this subnet.'

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235435

Title:
  'SubnetInUse: Unable to complete operation on subnet UUID. One or more
  ports have an IP allocation from this subnet.'

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Tempest:
  Invalid

Bug description:
  Occasional tempest failure:

  http://logs.openstack.org/86/49086/2/gate/gate-tempest-devstack-vm-
  neutron-isolated/ce14ceb/testr_results.html.gz

  ft3.1: tearDownClass 
(tempest.scenario.test_network_basic_ops.TestNetworkBasicOps)_StringException: 
Traceback (most recent call last):
File tempest/scenario/manager.py, line 239, in tearDownClass
  thing.delete()
File tempest/api/network/common.py, line 71, in delete
  self.client.delete_subnet(self.id)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 112, in with_params
  ret = self.function(instance, *args, **kwargs)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 380, in delete_subnet
  return self.delete(self.subnet_path % (subnet))
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1233, in delete
  headers=headers, params=params)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1222, in retry_request
  headers=headers, params=params)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1165, in do_request
  self._handle_fault_response(status_code, replybody)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 97, in exception_handler_v20
  message=msg)
  NeutronClientException: 409-{u'NeutronError': {u'message': u'Unable to 
complete operation on subnet 9e820b02-bfe2-47e3-b186-21c5644bc9cf. One or more 
ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', 
u'detail': u''}}

  
  logstash query:

  @message:One or more ports have an IP allocation from this subnet
  AND @fields.filename:logs/screen-q-svc.txt and @message:
  SubnetInUse: Unable to complete operation on subnet


  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiBhbmQgQG1lc3NhZ2U6XCIgU3VibmV0SW5Vc2U6IFVuYWJsZSB0byBjb21wbGV0ZSBvcGVyYXRpb24gb24gc3VibmV0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODA5MTY1NDUxODcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235022] Re: VMware: errors booting from volume via Horizon

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1235022

Title:
  VMware: errors booting from volume via Horizon

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  When using VMwareVC nova driver and VMwareVcVMDK cinder driver,
  booting from volume via the Horizon UI fails. The instance boots with
  ERROR status and the log shows Image  could not be found. In
  addition, the user is unable to access the instances index page in
  Horizon due to an error 500 (other pages work, however). Steps to
  reproduce:

  (Using horizon)
  1. Create a volume from an image
  2. Boot an instance from the volume 

  Expected result: 
  1. An instance is booted from the volume successfully
  2. User is redirected to the instances index page in Horizon

  Actual result:
  1. Instance fails to boot with status ERROR
  2. User is redirected to instances index page but page fails with 500 error. 
In debug mode, user sees TypeError at /project/instances: string indices must 
be integers (see link to trace below)

  Nova log error:

   Traceback (most recent call last):
 File /opt/stack/nova/nova/compute/manager.py, line 1037, in 
_build_instance
   set_access_ip=set_access_ip)
 File /opt/stack/nova/nova/compute/manager.py, line 1410, in _spawn
   LOG.exception(_('Instance failed to spawn'), instance=instance)
 File /opt/stack/nova/nova/compute/manager.py, line 1407, in _spawn
   block_device_info)
 File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 623, in spawn
   admin_password, network_info, block_device_info)
 File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 208, in spawn
   disk_type, vif_model, image_linked_clone) = _get_image_properties()
 File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 187, in 
   instance)
 File /opt/stack/nova/nova/virt/vmwareapi/vmware_images.py, line 184, in 
   meta_data = image_service.show(context, image_id)
 File /opt/stack/nova/nova/image/glance.py, line 290, in show
   _reraise_translated_image_exception(image_id)
 File /opt/stack/nova/nova/image/glance.py, line 288, in show
   image = self._client.call(context, 1, 'get', image_id)
 File /opt/stack/nova/nova/image/glance.py, line 212, in call
   return getattr(client.images, method)(*args, **kwargs)
 File /opt/stack/python-glanceclient/glanceclient/v1/images.py, line 114, 
in 
   % urllib.quote(str(image_id)))
 File /opt/stack/python-glanceclient/glanceclient/common/http.py, line 
272, 
   return self._http_request(url, method, **kwargs)
 File /opt/stack/python-glanceclient/glanceclient/common/http.py, line 
233, 
   raise exc.from_response(resp, body_str)
   ImageNotFound: Image  could not be found.
   
  Horizon error:

  Request Method:   GET
  Request URL:  http://10.20.72.218/project/instances/
  Django Version:   1.5.4
  Exception Type:   TypeError
  Exception Value:  
  string indices must be integers
  Exception Location:   
/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/
dashboards/project/instances/views.py in get_data, line 92
  Python Executable:/usr/bin/python
  Python Version:   2.7.3
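
  A hedged guess at the shape of a fix, based only on the trace above: an
  instance booted from a volume has an empty image ref, so the driver should
  skip the Glance metadata lookup in that case (function and parameter names
  are made up):

    def get_image_properties(image_service, context, image_ref):
        # Illustrative only: an instance booted from a volume has no image,
        # which is what produced the "Image  could not be found" error above.
        if not image_ref:
            return {}
        return image_service.show(context, image_ref).get('properties', {})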

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1235022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230925] Re: Require new python-cinderclient for Havana

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1230925

Title:
  Require new python-cinderclient for Havana

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  Havana Nova needs to require cinderclient 1.0.6, which contains the
  update_snapshot_status() API used by assisted snapshots, as well as
  migrate_volume_completion() for volume migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1230925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237795] Re: VMware: restarting nova compute reports invalid instances

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237795

Title:
  VMware: restarting nova compute reports invalid instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  When nova compute restarts, the running instances on the hypervisor are
  queried. None of the instances would be matched - this would prevent
  the instance states from being kept in sync with the state in the database.
  See _destroy_evacuated_instances
  (https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L531)
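
  _destroy_evacuated_instances() lines the driver's view of running instances
  up against the database, so the match only works if both sides use the same
  identifier. A toy illustration of the mismatch (not VMware driver code; the
  identifiers are made up):

    db_instances = [{'uuid': 'uuid-1', 'name': 'instance-00000001'}]

    # Broken: the driver reports hypervisor-side VM names that never match
    # what the manager compares against, so nothing can be reconciled.
    driver_view_broken = ['some-vcenter-vm-name']

    # Wanted: identifiers the manager can line up with the database records.
    driver_view_fixed = ['instance-00000001']

    assert not [i for i in db_instances if i['name'] in driver_view_broken]
    assert [i for i in db_instances if i['name'] in driver_view_fixed]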

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226698] Re: flavor pagination incorrectly uses id rather than flavorid

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226698

Title:
  flavor pagination incorrectly uses id rather than flavorid

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  The ID in the flavor-list response is really instance_types.flavorid in the
database. When using the marker, the code uses the instance_types.id field. The
test passes as long as instance_types.id begins with 1 and is sequential. If it
does not begin with 1, or if it does not match instance_types.flavorid, the
test fails with the following error:


  
  '''   
  
  Traceback (most recent call last):
  
File 
/Volumes/apple/openstack/tempest/tempest/api/compute/flavors/test_flavors.py, 
line 91, in test_list_flavors_detailed_using_marker 
  resp, flavors = self.client.list_flavors_with_detail(params)  
  
File 
/Volumes/apple/openstack/tempest/tempest/services/compute/json/flavors_client.py,
 line 45, in list_flavors_with_detail   
 
  resp, body = self.get(url)
  
File /Volumes/apple/openstack/tempest/tempest/common/rest_client.py, line 
263, in get
  return self.request('GET', url, headers)  
  
File /Volumes/apple/openstack/tempest/tempest/common/rest_client.py, line 
394, in request
  resp, resp_body)  
  
File /Volumes/apple/openstack/tempest/tempest/common/rest_client.py, line 
439, in _error_checker
  raise exceptions.NotFound(resp_body)  
  
  NotFound: Object not found
  
  Details: {itemNotFound: {message: The resource could not be found., 
code: 404}}

  
  ==
  
  FAIL: 
tempest.api.compute.flavors.test_flavors.FlavorsTestJSON.test_list_flavors_using_marker[gate]
  '''   
  

  
  Really, it should use flavorid for the marker. The flavor_get_all() method in
nova.db.sqlalchemy.api should be fixed to use flavorid=marker in the filter, as
follows:

  -filter_by(id=marker).\
  +filter_by(flavorid=marker).\
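
  A minimal, hedged sketch of the same idea in plain Python (a stand-in for
  the SQLAlchemy query; names simplified):

    def list_flavors(flavors, marker=None, limit=None):
        # 'flavors' carries both the internal primary key 'id' and the
        # user-visible 'flavorid'; the marker sent by the API client is a
        # flavorid, so that is the field it must be matched against.
        if marker is not None:
            idx = next((i for i, f in enumerate(flavors)
                        if f['flavorid'] == marker), None)
            if idx is None:
                raise LookupError('marker %s not found' % marker)
            flavors = flavors[idx + 1:]
        return flavors[:limit] if limit else flavors


    flavors = [{'id': 7, 'flavorid': '42'}, {'id': 8, 'flavorid': '84'}]
    assert list_flavors(flavors, marker='42') == [{'id': 8, 'flavorid': '84'}]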

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237126] Re: nova-api-{ec2, metadata, os-compute} don't allow SSL to be enabled

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237126

Title:
  nova-api-{ec2,metadata,os-compute} don't allow SSL to be enabled

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Although the script bin/nova-api will read nova.conf to determine
  which API services should have SSL enabled (via 'enabled_ssl_apis'),
  the individual API scripts

  bin/nova-api-ec2
  bin/nova-api-metadata
  bin/nova-api-os-compute

  do not contain similar logic to allow configuration of SSL. For
  installations that want to use SSL but not the nova-api wrapper, there
  should be a similar way to enable the former.
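
  A hedged sketch of the per-binary pattern being asked for, mirroring what
  the nova-api wrapper does with enabled_ssl_apis (the option name comes from
  the report; everything else here is simplified and made up):

    def main(service_name, conf, create_wsgi_service, launch):
        # Hypothetical wiring; CONF and service APIs are simplified.
        # nova-api consults enabled_ssl_apis; the standalone binaries
        # (nova-api-ec2, nova-api-metadata, nova-api-os-compute) would need
        # the same check instead of always serving plain HTTP.
        use_ssl = service_name in conf.enabled_ssl_apis
        launch(create_wsgi_service(service_name, use_ssl=use_ssl))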

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231263] Re: Clear text password has been print in log by some API call

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1231263

Title:
  Clear text password has been print in log by some API call

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  In the current implementation, when performing some API calls, such as change
server password or rescue server, the password is printed in the nova log.
  i.e.:

  2013-09-26 13:48:01.711 DEBUG routes.middleware [-] Match dict: {'action': 
u'action', 'controller': nova.api.openstack.wsgi.Resource object at 
0x46d09d0, 'project_id': u'05004a24b3304cd9b55a0fcad08107b3', 'id': 
u'8c4a1dfa-147a-4f
  f8-8116-010d8c346115'} from (pid=10629) __call__ 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-09-26 13:48:01.711 DEBUG nova.api.openstack.wsgi 
[req-10ebd201-ba52-453f-b1ce-1e41fbef8cdd admin demo] Action: 'action', body: 
{changePassword: {adminPass: 1234567}} from (pid=10629) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:926

  This is not secure; the password should be replaced by ***.
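
  A hedged sketch of the masking idea, in the spirit of the sanitisation
  helpers used elsewhere in OpenStack (the key list and regex are
  illustrative, not the actual patch):

    import re

    _SANITIZE_KEYS = ('adminPass', 'admin_pass', 'password')
    _PATTERNS = [re.compile(r'(%s"\s*:\s*")[^"]*(")' % k)
                 for k in _SANITIZE_KEYS]


    def mask_password(message, secret='***'):
        # Replace the value of any known password key in a JSON-ish string.
        for pattern in _PATTERNS:
            message = pattern.sub(r'\g<1>%s\g<2>' % secret, message)
        return message


    body = '{"changePassword": {"adminPass": "1234567"}}'
    assert mask_password(body) == '{"changePassword": {"adminPass": "***"}}'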

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1231263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213927] Re: flavor extra spec api fails with XML content type if key contains a colon

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213927

Title:
  flavor extra spec api fails with XML content type if key contains a
  colon

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Tempest:
  Invalid

Bug description:
  The flavor extra spec API  extension (os-extra_specs) fails with HTTP
  500 when content-type application/xml is requested if the extra spec
  key contains a colon.

  For example:

  curl [endpoint]/flavors/[ID]/os-extra_specs -H "Accept: application/json" -H "X-Auth-Token: $TOKEN"
  {"extra_specs": {"foo:bar": "999"}}

  curl -i [endpoint]/flavors/[ID]/os-extra_specs -H "Accept: application/xml" -H "X-Auth-Token: $TOKEN"
  {"extra_specs": {"foo:bar": "999"}}
  HTTP/1.1 500 Internal Server Error

  The stack trace shows that the XML parser tries to interpret the ":"
  in the key as if it were an XML namespace prefix, which fails, as the
  namespace is not valid:

  2013-08-19 13:08:14.374 27521 DEBUG nova.api.openstack.wsgi 
[req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Calling method 
bound method FlavorExtraSpecsController.index of 
nova.api.openstack.compute.contrib.flavorextraspecs.FlavorExtraSpecsController 
object at 0x2c01b90 _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:927
  2013-08-19 13:08:14.377 27521 ERROR nova.api.openstack 
[req-afe0c3c8-e7d6-48c5-84f1-782260850e6b redacted redacted] Caught error: 
Invalid tag name u'foo:bar'
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 110, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/hp/middleware/cs_auth_token.py, line 160, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
super(CsAuthProtocol, self).__call__(env, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 461, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
self.app(env, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py, line 903, in 
__call__
  2013-08-19 13:08:14.377 27521 TRACE nova.api.openstack content_type, 
body, accept)
  2013-08-19 13:08:14.377 
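
  The trace is cut off here, but the "Invalid tag name" wording matches what
  lxml's etree raises for a tag containing an unbound namespace prefix. A tiny
  standalone illustration (assuming lxml is indeed the serializer underneath):

    from lxml import etree   # assumption: lxml backs the XML serializer

    try:
        etree.Element('foo:bar')        # the colon reads as a namespace prefix
    except ValueError as exc:
        print(exc)                      # Invalid tag name 'foo:bar'

    # With a properly declared namespace the same local name is fine:
    etree.Element('{http://example.org/ns}bar')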

[Yahoo-eng-team] [Bug 1199954] Re: VCDriver: Failed to resize instance

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199954

Title:
  VCDriver: Failed to resize instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Steps to reproduce:
  nova resize UUID 2

  Error:
   ERROR nova.openstack.common.rpc.amqp 
[req-762f3a87-7642-4bd3-a531-2bcc095ec4a5 demo demo] Exception during message 
handling
Traceback (most recent call last):
  File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 421, in 
_process_data
**args)
  File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
result = getattr(proxyobj, method)(ctxt, **kwargs)
  File /opt/stack/nova/nova/exception.py, line 99, in wrapped
temp_level, payload)
  File /opt/stack/nova/nova/exception.py, line 76, in wrapped
return f(self, context, *args, **kw)
  File /opt/stack/nova/nova/compute/manager.py, line 218, in 
decorated_function
pass
  File /opt/stack/nova/nova/compute/manager.py, line 204, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 269, in 
decorated_function
function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 246, in 
decorated_function
e, sys.exc_info())
  File /opt/stack/nova/nova/compute/manager.py, line 233, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 2633, in 
resize_instance
block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 410, in 
migrate_disk_and_power_off
dest, instance_type)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 893, in 
migrate_disk_and_power_off
raise exception.HostNotFound(host=dest)
HostNotFound:

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1199954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193980] Re: Regression: Cinder Volumes unable to find iscsi target for VMware instances

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1193980

Title:
  Regression: Cinder Volumes unable to find iscsi target for VMware
  instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  When trying to attach a cinder volume to a VMware based instance I am
  seeing the attached error in the nova-compute logs. Cinder does not
  report back any problem to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1193980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211742] Re: notification not available for deleting an instance having no host associated

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211742

Title:
  notification not available for deleting an instance having no host
  associated

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  Steps to reproduce issue:
  1. Set the Nova notification_driver (to say log_notifier) and monitor the 
notifications.
  2. Delete an instance which does not have a host associated with it.
  3. Check if any notifications are generated for the instance deletion.

  Expected Result:
  'delete.start' and 'delete.end' notifications should be generated for the 
instance being deleted.

  Actual Result:
  There are no 'delete' notifications being generated in this scenario.
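
  A hedged sketch of the expected behaviour: the local-delete path taken for
  host-less instances should emit the same notification pair the compute
  manager does (the notifier call shape here is illustrative):

    def local_delete(context, instance, notify, destroy_in_db):
        # Illustrative only: mirror what the compute manager emits, so
        # consumers listening for delete.start / delete.end still see the
        # full lifecycle even when the instance never had a host.
        notify(context, instance, 'delete.start')
        destroy_in_db(context, instance)
        notify(context, instance, 'delete.end')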

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1211742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177830] Re: [OSSA 2013-012] Unchecked qcow2 root disk sizes

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177830

Title:
  [OSSA 2013-012] Unchecked qcow2 root disk sizes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Currently there's no check on the root disk raw sizes. A user can
  create a qcow2 image of any size, upload it to glance, and spawn
  instances off this file. The raw backing file created on the compute
  node will be small at first due to it being a sparse file, but will
  grow as data is written to it. This can cause the following issues.

  1. Bypass storage quota restrictions
  2. Overrun compute host disk space

  This was reproduced in Devstack using recent trunk d7e4692.
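
  The core of the mitigation is to look at the qcow2 virtual size before
  using the image as a root disk, since the uploaded file itself can be tiny.
  A hedged standalone sketch using qemu-img (assumes a qemu-img new enough
  for --output=json; the flavor plumbing is simplified):

    import json
    import subprocess


    def check_root_disk_fits(image_path, root_gb):
        # 'qemu-img info' reports the virtual size the sparse file can grow
        # to, which is what matters for quota and host disk space.
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', image_path])
        virtual_size = json.loads(out.decode())['virtual-size']   # bytes
        if root_gb and virtual_size > root_gb * 1024 ** 3:
            raise ValueError('image virtual size %d exceeds flavor root '
                             'disk of %d GB' % (virtual_size, root_gb))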

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188543] Re: NBD mount errors when booting an instance from volume

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188543

Title:
  NBD mount errors when booting an instance from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  My environment:
  - Grizzly OpenStack (installed from Ubuntu repository)
  - Network using Quantum
  - Cinder backed up by a Ceph cluster

  I'm able to boot an instance from a volume but it takes a long time
  for the instance to be active. I've got warnings in the logs of the
  nova-compute node (see attached file). The logs show that the problem
  is related to file injection in the disk image which isn't
  required/relevant when booting from a volume.
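
  The warnings come from nova trying to NBD-mount a disk image that does not
  exist for a volume-backed instance. A minimal sketch of the guard the report
  implies, assuming an empty image_ref marks a volume-backed instance (a
  simplification, not the actual nova change):

    def should_inject_files(instance, injected_files, admin_password):
        # Volume-backed instances have no local image to NBD-mount, so skip
        # the injection step (and its slow mount retries) entirely for them.
        booted_from_volume = not instance.get("image_ref")
        wants_injection = bool(injected_files) or bool(admin_password)
        return wants_injection and not booted_from_volume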

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188543] Re: NBD mount errors when booting an instance from volume

2013-12-08 Thread Alan Pevec
** Tags removed: havana-backport-potential in-stable-havana

** Changed in: nova/havana
   Importance: Undecided = Low

** Changed in: nova/havana
 Assignee: (unassigned) = Michael Davies (mrda)

** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Importance: Undecided = Low

** Changed in: nova/grizzly
   Status: New = In Progress

** Changed in: nova/grizzly
 Assignee: (unassigned) = Michael Davies (mrda)

** Tags removed: grizzly-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188543

Title:
  NBD mount errors when booting an instance from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  My environment:
  - Grizzly OpenStack (installed from Ubuntu repository)
  - Network using Quantum
  - Cinder backed up by a Ceph cluster

  I'm able to boot an instance from a volume but it takes a long time
  for the instance to be active. I've got warnings in the logs of the
  nova-compute node (see attached file). The logs show that the problem
  is related to file injection in the disk image which isn't
  required/relevant when booting from a volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206081] Re: [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

2013-12-08 Thread Alan Pevec
** Changed in: nova/havana
 Assignee: (unassigned) = Pádraig Brady (p-draigbrady)

** Changed in: nova/folsom
   Status: In Progress = Won't Fix

** Tags removed: in-stable-havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1206081

Title:
  [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Won't Fix
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When doing QA for SUSE on bug 1177830
  I found that the fix is incomplete,
  because it assumed that the cached image would be mostly sparse.

  However, I can easily create non-sparse small compressed qcow2 images
  with

  perl -e 'for(1..11000){print "x" x 1024000}' > img
  qemu-img convert -c -O qcow2 img img.qcow2
  glance image-create --name=11gb --is-public=True --disk-format=qcow2 
--container-format=bare < img.qcow2
  nova boot --image 11gb --flavor m1.small testvm

  which (in Grizzly and Essex) results in one (or two in Essex) 11GB large 
files being created in /var/lib/nova/instances/_base/
  still allowing attackers to fill up disk space of compute nodes
  because the size check is only done after the uncompressing / caching

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1206081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230925] Re: Require new python-cinderclient for Havana

2013-12-08 Thread Alan Pevec
** Tags removed: havana-backport-potential in-stable-havana

** Changed in: nova
   Status: Confirmed = Fix Released

** Changed in: nova/havana
 Assignee: (unassigned) = Eric Harney (eharney)

** Changed in: nova
   Importance: Undecided = High

** Changed in: nova/havana
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1230925

Title:
  Require new python-cinderclient for Havana

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed

Bug description:
  Havana Nova needs to require cinderclient 1.0.6, which contains the
  update_snapshot_status() API used by assisted snapshots, as well as
  migrate_volume_completion() for volume migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1230925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223859] Re: Network cache not correctly updated during interface-attach

2013-12-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided = High

** Changed in: nova/havana
 Assignee: (unassigned) = Solly Ross (sross-7)

** Changed in: nova/havana
   Status: New = In Progress

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223859

Title:
  Network cache not correctly updated during interface-attach

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  The network cache is not correctly updated when running nova
  interface-attach: only the latest allocated IP is used. See this log:

  http://paste.openstack.org/show/46643/

  Nevermind the error reported when running nova interface-attach: I
  believe it is an unrelated issue, and I'll write another bug report
  for it.

  I noticed this issue a few months ago, but haven't had time to work on
  it. I'll try and submit a patch ASAP. See my analysis of the issue
  here: https://bugs.launchpad.net/nova/+bug/1197192/comments/3

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1223859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252284] Re: OVS agent doesn't reclaim local VLAN

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252284

Title:
  OVS agent doesn't reclaim local VLAN

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  Locally to an OVS agent, when the last port of a network disappears,
  the local VLAN isn't reclaimed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243862] Re: fix nvp version validation for distributed router creation

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243862

Title:
  fix nvp version validation for distributed router creation

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  The current test is not correct, as it prevents the right creation policy
  from being applied for newer versions of NVP whose minor version is 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241874] Re: L2 pop mech driver sends notif. even no related port changes

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241874

Title:
  L2 pop mech driver sends notif. even no related port changes

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  The L2 population mechanism driver sends add notifications even if there
  are no related port changes (e.g., IP changes).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1241874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240744] Re: L2 pop sends updates for unrelated networks

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240744

Title:
  L2 pop sends updates for unrelated networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  The l2population mechanism driver sends update notifications for
  networks which are not related to the port which is being updated.
  Thus the fdb is populated with some incorrect entries.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244255] Re: binding_failed because of l2 agent assumed down

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244255

Title:
  binding_failed because of l2 agent assumed down

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in neutron havana series:
  Fix Committed

Bug description:
  Tempest test ServerAddressesTestXML failed on a change that does not
  involve any code modification.

  https://review.openstack.org/53633

  2013-10-24 14:04:29.188 | 
==
  2013-10-24 14:04:29.189 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | setUpClass 
(tempest.api.compute.servers.test_server_addresses.ServerAddressesTestXML)
  2013-10-24 14:04:29.189 | 
--
  2013-10-24 14:04:29.189 | _StringException: Traceback (most recent call last):
  2013-10-24 14:04:29.189 |   File 
tempest/api/compute/servers/test_server_addresses.py, line 31, in setUpClass
  2013-10-24 14:04:29.189 | resp, cls.server = 
cls.create_server(wait_until='ACTIVE')
  2013-10-24 14:04:29.189 |   File tempest/api/compute/base.py, line 143, in 
create_server
  2013-10-24 14:04:29.190 | server['id'], kwargs['wait_until'])
  2013-10-24 14:04:29.190 |   File 
tempest/services/compute/xml/servers_client.py, line 356, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | return waiters.wait_for_server_status(self, 
server_id, status)
  2013-10-24 14:04:29.190 |   File tempest/common/waiters.py, line 71, in 
wait_for_server_status
  2013-10-24 14:04:29.190 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-10-24 14:04:29.190 | BuildErrorException: Server 
e21d695e-4f15-4215-bc62-8ea645645a26 failed to build and is in ERROR status


  From n-cpu.log (http://logs.openstack.org/33/53633/1/check/check-
  tempest-devstack-vm-
  neutron/4dd98e5/logs/screen-n-cpu.txt.gz#_2013-10-24_13_58_07_532):

   Error: Unexpected vif_type=binding_failed
   Traceback (most recent call last):
   set_access_ip=set_access_ip)
 File /opt/stack/new/nova/nova/compute/manager.py, line 1413, in _spawn
   LOG.exception(_('Instance failed to spawn'), instance=instance)
 File /opt/stack/new/nova/nova/compute/manager.py, line 1410, in _spawn
   block_device_info)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2084, in spawn
   write_to_disk=True)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3064, in 
to_xml
   disk_info, rescue, block_device_info)
 File /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2951, in 
get_guest_config
   inst_type)
 File /opt/stack/new/nova/nova/virt/libvirt/vif.py, line 380, in 
get_config
   _(Unexpected vif_type=%s) % vif_type)
   NovaException: Unexpected vif_type=binding_failed
   TRACE nova.compute.manager [instance: e21d695e-4f15-4215-bc62-8ea645645a26]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241602] Re: AttributeError in plugins/linuxbridge/lb_neutron_plugin.py

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241602

Title:
  AttributeError in plugins/linuxbridge/lb_neutron_plugin.py

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  I'm running Ubuntu 12.04 LTS x64 + OpenStack Havana with the following
  neutron package versions:

  neutron-common 2013.2~rc3-0ubuntu1~cloud0
  neutron-dhcp-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-l3-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-metadata-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-plugin-linuxbridge 2013.2~rc3-0ubuntu1~cloud0
  neutron-plugin-linuxbridge-agent 2013.2~rc3-0ubuntu1~cloud0
  neutron-server 2013.2~rc3-0ubuntu1~cloud0
  python-neutron 2013.2~rc3-0ubuntu1~cloud0   
  python-neutronclient 2.3.0-0ubuntu1~cloud0


  When adding a router interface the following error message in
  /var/log/neutron/server.log:

  2013-10-18 15:35:14.862 15675 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py, line 
438, in _process_data
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/common/rpc.py, line 44, in dispatch
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/lb_neutron_plugin.py,
 line 147, in update_device_up
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
port = self.get_port_from_device.get_port(device)
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp 
AttributeError: 'function' object has no attribute 'get_port'
  2013-10-18 15:35:14.862 15675 TRACE neutron.openstack.common.rpc.amqp
  2013-10-18 15:35:14.862 15675 ERROR neutron.openstack.common.rpc.common [-] 
Returning exception 'function' object has no attribute 'get_port' to caller
  2013-10-18 15:35:14.863 15675 ERROR neutron.openstack.common.rpc.common [-] 
['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py, line 
438, in _process_data\n**args)\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/common/rpc.py, line 44, in 
dispatch\nneutron_ctxt, version, method, namespace, **kwargs)\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch\nresult = getattr(proxyobj, method)(ctxt, 
**kwargs)\n', '  File 
/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/lb_neutron_plugin.py,
 line 147, in update_device_up\nport = 
self.get_port_from_device.get_port(device)\n', AttributeError: 'function' 
object has no attribute 'get_port'\n]
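
  The failure is easy to reproduce outside neutron: get_port_from_device is a
  plain function on the plugin, so treating it as an object with a get_port()
  method raises exactly this AttributeError. A standalone illustration (not
  the neutron code itself):

    def get_port_from_device(device):
        # Stand-in for the plugin callable that already returns the port dict.
        return {"id": device, "admin_state_up": True}

    try:
        # Buggy pattern from the traceback: the function object has no methods.
        get_port_from_device.get_port("tap1234")
    except AttributeError as exc:
        print(exc)   # 'function' object has no attribute 'get_port'

    port = get_port_from_device("tap1234")   # working pattern: just call it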

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1241602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246737] Re: ML2 plugin deletes port even if associated with multiple subnets on subnet deletion

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1246737

Title:
  ML2 plugin deletes port even if associated with multiple subnets on
  subnet deletion

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron havana series:
  Fix Committed

Bug description:
  On subnet deletion the ML2 plugin deletes all the ports associated with this 
subnet and does not check whether a port is also associated with other subnets.
  Steps to reproduce:
  1) create a network with two subnets
  2) create a DHCP port for the network; the port is associated with both subnets
  3) delete one of the subnets
  4) the DHCP port gets deleted

  Though a new DHCP port is created shortly afterwards, I think it's not OK to
  delete the existing DHCP port.
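
  The missing check is essentially: only delete a port when the subnet being
  removed holds its last fixed IP. A minimal sketch of that decision on the
  standard port dict shape, not the actual ML2 patch:

    def port_survives_subnet_delete(port, subnet_id):
        # Keep the port (e.g. the DHCP port) if it still has addresses on
        # other subnets; only the fixed IP on the deleted subnet goes away.
        remaining_ips = [ip for ip in port["fixed_ips"]
                         if ip["subnet_id"] != subnet_id]
        return bool(remaining_ips)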

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1246737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251086] Re: nvp_cluster_uuid is no longer used in nvp.ini

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251086

Title:
  nvp_cluster_uuid is no longer used in nvp.ini

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  remove it!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240125] Re: Linux IP wrapper cannot handle VLAN interfaces

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240125

Title:
  Linux IP wrapper cannot handle VLAN interfaces

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  VLAN interfaces have a '@' character in their names when the iproute2 utility 
lists them.
  But the usable interface name (for iproute2 commands) is the string before 
the '@' character, so these interface names need special parsing.

  $ ip link show
  1: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN 
group default qlen 1000
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
  inet 169.254.10.78/16 brd 169.254.255.255 scope link wlan0:avahi
 valid_lft forever preferred_lft forever
  2: wlan0.10@wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group 
default 
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
  3: vlan100@wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group 
default 
  link/ether 6c:88:14:b7:fe:80 brd ff:ff:ff:ff:ff:ff
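
  Extracting the usable name is a small parsing step once the '@' suffix is
  taken into account. A minimal sketch of how a wrapper could pull it out of
  such output (illustrative only, not the neutron IP wrapper code):

    def parse_device_name(first_line):
        # "2: wlan0.10@wlan0: <BROADCAST,MULTICAST> ..." -> "wlan0.10"
        name = first_line.split(":", 2)[1].strip()
        return name.split("@", 1)[0]

    print(parse_device_name(
        "2: wlan0.10@wlan0: <BROADCAST,MULTICAST> mtu 1500"))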

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240720] Re: Nicira plugin: 500 when removing a router port desynchronized from the backend

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240720

Title:
  Nicira plugin: 500 when removing a router port desynchronized from the
  backend

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  If the logical switch port backing a neutron router interface port
  (device_owner=network:router_interface) is removed, then the port goes
  into the ERROR state. However, the interface removal process still tries
  to retrieve that port from the NVP backend, causing a 500 error.

  Different tracebacks can be generated according to the conditions
  which led to the switch port (or the peer router port) to be removed
  from the backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240742] Re: linuxbridge agent doesn't remove vxlan interface if no interface mappings

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1240742

Title:
  linuxbridge agent doesn't remove vxlan interface if no interface
  mappings

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  The LinuxBridge agent doesn't remove vxlan interfaces if
  physical_interface_mappings isn't set in the config file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1240742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239637] Re: internal neutron server error on tempest VolumesActionsTest

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239637

Title:
  internal neutron server error on tempest VolumesActionsTest

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  Logstash query:
  @message:DBError: (IntegrityError) null value in column \network_id\ 
violates not-null constraint AND @fields.filename:logs/screen-q-svc.txt

  
http://logs.openstack.org/22/51522/2/check/check-tempest-devstack-vm-neutron-pg-isolated/015b3d9/logs/screen-q-svc.txt.gz#_2013-10-14_10_13_01_431
  
http://logs.openstack.org/22/51522/2/check/check-tempest-devstack-vm-neutron-pg-isolated/015b3d9/console.html

  
  2013-10-14 10:16:28.034 | 
==
  2013-10-14 10:16:28.034 | FAIL: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesActionsTest)
  2013-10-14 10:16:28.035 | tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesActionsTest)
  2013-10-14 10:16:28.035 | 
--
  2013-10-14 10:16:28.035 | _StringException: Traceback (most recent call last):
  2013-10-14 10:16:28.035 |   File 
tempest/api/volume/test_volumes_actions.py, line 55, in tearDownClass
  2013-10-14 10:16:28.036 | super(VolumesActionsTest, cls).tearDownClass()
  2013-10-14 10:16:28.036 |   File tempest/api/volume/base.py, line 72, in 
tearDownClass
  2013-10-14 10:16:28.036 | cls.isolated_creds.clear_isolated_creds()
  2013-10-14 10:16:28.037 |   File tempest/common/isolated_creds.py, line 
453, in clear_isolated_creds
  2013-10-14 10:16:28.037 | self._clear_isolated_net_resources()
  2013-10-14 10:16:28.037 |   File tempest/common/isolated_creds.py, line 
445, in _clear_isolated_net_resources
  2013-10-14 10:16:28.038 | self._clear_isolated_network(network['id'], 
network['name'])
  2013-10-14 10:16:28.038 |   File tempest/common/isolated_creds.py, line 
399, in _clear_isolated_network
  2013-10-14 10:16:28.038 | net_client.delete_network(network_id)
  2013-10-14 10:16:28.038 |   File 
tempest/services/network/json/network_client.py, line 76, in delete_network
  2013-10-14 10:16:28.039 | resp, body = self.delete(uri, self.headers)
  2013-10-14 10:16:28.039 |   File tempest/common/rest_client.py, line 308, 
in delete
  2013-10-14 10:16:28.039 | return self.request('DELETE', url, headers)
  2013-10-14 10:16:28.040 |   File tempest/common/rest_client.py, line 436, 
in request
  2013-10-14 10:16:28.040 | resp, resp_body)
  2013-10-14 10:16:28.040 |   File tempest/common/rest_client.py, line 522, 
in _error_checker
  2013-10-14 10:16:28.041 | raise exceptions.ComputeFault(message)
  2013-10-14 10:16:28.041 | ComputeFault: Got compute fault
  2013-10-14 10:16:28.041 | Details: {NeutronError: Request Failed: internal 
server error while processing your request.}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1239637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234857] Re: neutron unittest require minimum 4gb memory

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1234857

Title:
  neutron unittest require minimum 4gb memory

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron havana series:
  Fix Committed

Bug description:
  tox -e py26

  The unit tests hang forever. Each test seems to take around 25 minutes to
  complete. Each test reports the following error, though it passes. It
  sounds like a regression caused by the fix for
  https://bugs.launchpad.net/neutron/+bug/1191768.

  
https://github.com/openstack/neutron/commit/06f679df5d025e657b2204151688ffa60c97a3d3

  As per this fix, the default behavior of
  neutron.agent.rpc.report_state() was modified to use cast() to report
  back the state in JSON format. The original behavior was to use the
  call() method.

  Using the call() method by default might fix this problem.

  ERROR:neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent:Failed 
reporting state!
  Traceback (most recent call last):
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 759, in _report_state
  self.agent_state)
File /home/jenkins/workspace/csi-neutron-upstream/neutron/agent/rpc.py, 
line 74, in report_state
  return self.cast(context, msg, topic=self.topic)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/proxy.py,
 line 171, in cast
  rpc.cast(context, self._get_topic(topic), msg)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/__init__.py,
 line 158, in cast
  return _get_impl().cast(CONF, context, topic, msg)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/impl_fake.py,
 line 166, in cast
  check_serialize(msg)
File 
/home/jenkins/workspace/csi-neutron-upstream/neutron/openstack/common/rpc/impl_fake.py,
 line 131, in check_serialize
  json.dumps(msg)
File /usr/lib64/python2.6/json/__init__.py, line 230, in dumps
  return _default_encoder.encode(obj)
File /usr/lib64/python2.6/json/encoder.py, line 367, in encode
  chunks = list(self.iterencode(o))
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 309, in _iterencode
  for chunk in self._iterencode_dict(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 275, in _iterencode_dict
  for chunk in self._iterencode(value, markers):
File /usr/lib64/python2.6/json/encoder.py, line 317, in _iterencode
  for chunk in self._iterencode_default(o, markers):
File /usr/lib64/python2.6/json/encoder.py, line 323, in 
_iterencode_default
  newobj = self.default(o)
File /usr/lib64/python2.6/json/encoder.py, line 344, in default
  raise TypeError(repr(o) + " is not JSON serializable")
  TypeError: <MagicMock name='LinuxBridgeManager().local_ip' id='666599248'> is 
not JSON serializable
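
  The TypeError at the bottom can be reproduced with nothing but json and
  mock: any attribute of a MagicMock is itself a MagicMock, which json.dumps()
  cannot serialize. A small standalone reproduction, independent of the
  neutron agent code:

    import json
    from unittest import mock   # the standalone 'mock' package on Python 2.6

    agent = mock.MagicMock(name="LinuxBridgeManager()")
    state = {"agent_state": {"local_ip": agent.local_ip}}  # still a MagicMock

    try:
        json.dumps(state)
    except TypeError as exc:
        print(exc)   # <MagicMock ...> is not JSON serializable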

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1234857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210236] Re: traceback is suppressed when deploy.loadapp fails

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210236

Title:
  traceback is suppressed when deploy.loadapp fails

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  I saw this error when attempting to start a relatively recent quantum (setup.py 
--version says 2013.2.a782.ga36f237):
   ERROR: Unable to load quantum from configuration file 
/etc/quantum/api-paste.ini.

  After running quantum-server through strace I determined that the
  error was due to missing mysql client libraries:

  ...
  open(/lib64/tls/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No such 
file or directory)
  open(/lib64/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No such file 
or directory)
  open(/usr/lib64/tls/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No 
such file or directory)
  open(/usr/lib64/libmysqlclient.so.18, O_RDONLY) = -1 ENOENT (No such 
file or directory)
  munmap(0x7ffcd8132000, 34794)   = 0
  munmap(0x7ffccd147000, 2153456) = 0
  close(4)= 0
  close(3)= 0
  write(2, ERROR: Unable to load quantum fr..., 95ERROR: Unable to load 
quantum from configuration file /usr/local/csi/etc/quantum/api-paste.ini.) = 95
  write(2, \n, 1 )   = 1
  rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x3eec80f500}, 
{0x3eef90db70, [], SA_RESTORER, 0x3eec80f500}, 8) = 0
  exit_group(1)

  
  The error message is completely bogus and the lack of traceback made it 
difficult to debug.

  This is a regression from commit 6869821 which was to fix related bug
  1004062
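
  What the report asks for is to keep the underlying traceback visible when
  paste deployment fails. A minimal sketch of that handling, using the real
  paste.deploy entry point but otherwise simplified (not the actual neutron
  fix):

    import logging

    from paste import deploy

    LOG = logging.getLogger(__name__)

    def load_paste_app(app_name, config_path):
        try:
            return deploy.loadapp("config:%s" % config_path, name=app_name)
        except Exception:
            # Log the real traceback (e.g. the missing libmysqlclient import)
            # before surfacing the generic "Unable to load ..." error.
            LOG.exception("Unable to load %s from configuration file %s",
                          app_name, config_path)
            raise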

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237912] Re: Cannot update IPSec Policy lifetime

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1237912

Title:
  Cannot update IPSec Policy lifetime

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  When you try to update IPSec Policy lifetime, you get an error:

  (neutron) vpn-ipsecpolicy-update ipsecpolicy --lifetime 
units=seconds,value=36001
  Request Failed: internal server error while processing your request.

  Meanwhile updating IKE Policy lifetime works well:

  (neutron) vpn-ikepolicy-update ikepolicy --lifetime units=seconds,value=36001
  Updated ikepolicy: ikepolicy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1237912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1184696] Re: GRE tunneling is broken if hosts are on multiple subnets (multi-homed)

2013-12-08 Thread Alan Pevec
** Changed in: neutron
   Status: Fix Committed = Fix Released

** Changed in: neutron
 Assignee: (unassigned) = Adin Scannell (amscanne)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1184696

Title:
  GRE tunneling is broken if hosts are on multiple subnets (multi-homed)

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Basic setup:
  * Bunch of hosts on subnet X
  * Host on subnet X and subnet Y (controller)
  * Bunch of hosts on subnet Y

  If local_ip for controller is from subnet X, then GRE tunnels are
  broken from controller to subnet Y.

  -- more detail --

  Because you can only specify a single local_ip when using GRE
  tunneling in openvswitch and this information is propagated to all
  hosts regardless of their subnet -- allowing GRE to choose the
  local_ip for tunnels results in one-directional flows because the IP
  won't be recognized (as one or more hosts may be sending traffic on an
  IP that is not their recognized local_ip).

  There is a pretty straight-forward fix -- the local_ip should be
  specified for all GRE tunnels, that way all traffic will originate
  from the IP that the hosts in the cluster are aware of. The local_ip
  needs to be routable from all hosts, but this is no different than
  before. There are more complex ways of dealing with this problem, but
  I think that this is the right fix and keeps it simple.

  I will be submitting a fix shortly via Gerrit.
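
  The proposed fix boils down to always passing an explicit local_ip when the
  tunnel port is created, so every host sources GRE traffic from the address
  its peers were told about. A rough sketch at the OVS level, shelling out to
  ovs-vsctl; the helper name and error handling are illustrative, not the
  plugin code:

    import subprocess

    def add_gre_port(bridge, port_name, remote_ip, local_ip):
        # Pinning options:local_ip keeps a multi-homed host from sourcing GRE
        # traffic from an address its peers were never told about.
        subprocess.check_call([
            "ovs-vsctl", "add-port", bridge, port_name, "--",
            "set", "interface", port_name, "type=gre",
            "options:remote_ip=%s" % remote_ip,
            "options:local_ip=%s" % local_ip,
        ])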

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1184696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235486] Re: Integrity violation on delete network

2013-12-08 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Status: New = Fix Committed

** Changed in: neutron/havana
   Importance: Undecided = High

** Changed in: neutron/havana
 Assignee: (unassigned) = Armando Migliaccio (armando-migliaccio)

** Changed in: neutron
 Assignee: Robert Kukura (rkukura) = Armando Migliaccio 
(armando-migliaccio)

** Changed in: neutron
Milestone: 2013.2.1 = icehouse-2

** Changed in: neutron/havana
Milestone: None = 2013.2.1

** Changed in: neutron
Milestone: icehouse-2 = icehouse-1

** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235486

Title:
  Integrity violation on delete network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Committed

Bug description:
  Found while running tests for bug 1224001.
  Full logs here: 
http://logs.openstack.org/24/49424/13/check/check-tempest-devstack-vm-neutron-pg-isolated/405d3b4

  Keeping it at medium priority for now.
  Will raise the priority if we find more occurrences.

  2013-10-04 21:20:46.888 31438 ERROR neutron.api.v2.resource [-] delete failed
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 84, in resource
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 432, in delete
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 411, in 
delete_network
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource break
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 456, 
in __exit__
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource self.commit()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 368, 
in commit
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
self._prepare_impl()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 347, 
in _prepare_impl
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource 
self.session.flush()
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 542, in _wrap
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource raise 
exception.DBError(e)
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource DBError: 
(IntegrityError) update or delete on table networks violates foreign key 
constraint ports_network_id_fkey on table ports
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource DETAIL:  Key 
(id)=(c63057f4-8d8e-497c-95d6-0d93d2cc83f5) is still referenced from table 
ports.
  2013-10-04 21:20:46.888 31438 TRACE neutron.api.v2.resource  'DELETE FROM 
networks WHERE networks.id = %(id)s' {'id': 
u'c63057f4-8d8e-497c-95d6-0d93d2cc83f5'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223754] Re: Multiple Neutron operations using a script fails on Brocade Plugin

2013-12-08 Thread Alan Pevec
Mark, commit 51b44251ba705a3bedc3d146590613d9ff9c0690 landed on master
after RC was branched and has not been backported to Havana yet.

** Changed in: neutron
Milestone: 2013.2.1 = None

** Tags added: havana-backport-potential

** Changed in: neutron
Milestone: None = icehouse-1

** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223754

Title:
  Multiple  Neutron operations  using a script fails on Brocade Plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released

Bug description:
  Multiple Neutron operations using a script fail on the Brocade plugin; 
however, the same operations pass when executed using the CLI or dashboard.

  It was also observed that netconf connections to VDX devices fail after a few
  operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1223754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235358] Re: invalid volume when source image virtual size is bigger than the requested size

2013-12-08 Thread Alan Pevec
** Also affects: cinder/havana
   Importance: Undecided
   Status: New

** Changed in: cinder/havana
   Status: New = Fix Committed

** Changed in: cinder/havana
Milestone: None = 2013.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1235358

Title:
  invalid volume when source image virtual size is bigger than the
  requested size

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I created a volume from an image and booted an instance from it.
  When the instance boots I get this: 'selected cylinder exceeds maximum 
supported by bios'.
  If I boot an instance directly from the same image it boots with no issues, 
so the problem is only with booting from the volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1235358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255419] Re: jenkins tests fails for neutron/grizzly duo to iso8601 version requirement conflict

2013-12-08 Thread Alan Pevec
So let's try C. Although I don't think bumping required versions of
dependencies on stable is a good idea in general, we need to unblock
stable/grizzly somehow.

I've restored https://review.openstack.org/55939 and tests pass now, but
I'm not sure if that's enough to declare grizzly Glance fully working
with iso8601 - I'll add more Glance folks as reviewers.

** Changed in: tempest/grizzly
   Status: Confirmed = Invalid

** Changed in: glance/grizzly
   Status: Opinion = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1255419

Title:
  jenkins tests fails for neutron/grizzly duo to iso8601 version
  requirement conflict

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Glance grizzly series:
  In Progress
Status in OpenStack Core Infrastructure:
  Invalid
Status in Tempest:
  Invalid
Status in tempest grizzly series:
  Invalid

Bug description:
  2013-11-27 02:51:09.989 | 2013-11-27 02:51:09 Installed /opt/stack/new/neutron
  2013-11-27 02:51:09.990 | 2013-11-27 02:51:09 Processing dependencies for 
quantum==2013.1.5.a1.g666826a
  2013-11-27 02:51:09.991 | 2013-11-27 02:51:09 error: Installed distribution 
iso8601 0.1.4 conflicts with requirement iso8601>=0.1.8
  2013-11-27 02:51:09.991 | 2013-11-27 02:51:09 ++ failed
  2013-11-27 02:51:09.993 | 2013-11-27 02:51:09 ++ local r=1
  2013-11-27 02:51:09.993 | 2013-11-27 02:51:09 +++ jobs -p
  2013-11-27 02:51:09.994 | 2013-11-27 02:51:09 ++ kill
  2013-11-27 02:51:09.994 | 2013-11-27 02:51:09 ++ set +o xtrace
  2013-11-27 02:51:09.995 | 2013-11-27 02:51:09 stack.sh failed: full log in 
/opt/stack/new/devstacklog.txt.2013-11-27-024805

  full log https://jenkins02.openstack.org/job/periodic-tempest-
  devstack-vm-neutron-stable-grizzly/43/console

  the root cause is that iso8601 was recently updated to >=0.1.8 and python-
  novaclient was updated to catch this, but the stable/glance requirement is
  iso8601>=0.1.4.
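
  The conflict itself can be checked without running devstack: setuptools
  simply rejects an installed version that falls outside the declared
  specifier. A tiny illustration with pkg_resources, independent of the gate
  job:

    import pkg_resources

    # python-novaclient now pulls in iso8601>=0.1.8, but the node has 0.1.4.
    req = pkg_resources.Requirement.parse("iso8601>=0.1.8")
    print("0.1.4" in req)   # False -> setuptools reports a version conflict
    print("0.1.8" in req)   # True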

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1255419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242501] Re: Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

2013-12-13 Thread Alan Pevec
** Changed in: glance/folsom
   Status: In Progress = Won't Fix

** Changed in: glance/folsom
 Assignee: Alan Pevec (apevec) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242501

Title:
  Jenkins failed due to
  TestGlanceAPI.test_get_details_filter_changes_since

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance folsom series:
  Won't Fix
Status in Glance grizzly series:
  In Progress
Status in Glance havana series:
  Fix Committed

Bug description:
  Now we're running into a Jenkins failure due to the test case failure
  below:

  2013-10-20 06:12:31.930 | 
==
  2013-10-20 06:12:31.930 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_get_details_filter_changes_since
  2013-10-20 06:12:31.930 | 
--
  2013-10-20 06:12:31.930 | _StringException: Traceback (most recent call last):
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v1/test_api.py,
 line 1358, in test_get_details_filter_changes_since
  2013-10-20 06:12:31.931 | self.assertEquals(res.status_int, 400)
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 322, in assertEqual
  2013-10-20 06:12:31.931 | self.assertThat(observed, matcher, message)
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 417, in assertThat
  2013-10-20 06:12:31.931 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-20 06:12:31.931 | MismatchError: 200 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255419] Re: jenkins tests fails for neutron/grizzly duo to iso8601 version requirement conflict

2013-12-13 Thread Alan Pevec
*** This bug is a duplicate of bug 1242501 ***
https://bugs.launchpad.net/bugs/1242501

** This bug has been marked a duplicate of bug 1242501
   Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1255419

Title:
  jenkins tests fails for neutron/grizzly duo to iso8601 version
  requirement conflict

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Glance grizzly series:
  In Progress
Status in OpenStack Core Infrastructure:
  Invalid
Status in Tempest:
  Invalid
Status in tempest grizzly series:
  Invalid

Bug description:
  2013-11-27 02:51:09.989 | 2013-11-27 02:51:09 Installed /opt/stack/new/neutron
  2013-11-27 02:51:09.990 | 2013-11-27 02:51:09 Processing dependencies for 
quantum==2013.1.5.a1.g666826a
  2013-11-27 02:51:09.991 | 2013-11-27 02:51:09 error: Installed distribution 
iso8601 0.1.4 conflicts with requirement iso8601>=0.1.8
  2013-11-27 02:51:09.991 | 2013-11-27 02:51:09 ++ failed
  2013-11-27 02:51:09.993 | 2013-11-27 02:51:09 ++ local r=1
  2013-11-27 02:51:09.993 | 2013-11-27 02:51:09 +++ jobs -p
  2013-11-27 02:51:09.994 | 2013-11-27 02:51:09 ++ kill
  2013-11-27 02:51:09.994 | 2013-11-27 02:51:09 ++ set +o xtrace
  2013-11-27 02:51:09.995 | 2013-11-27 02:51:09 stack.sh failed: full log in 
/opt/stack/new/devstacklog.txt.2013-11-27-024805

  full log https://jenkins02.openstack.org/job/periodic-tempest-
  devstack-vm-neutron-stable-grizzly/43/console

  the root cause is that iso8601 was recently updated to >=0.1.8 and python-
  novaclient was updated to catch this, but the stable/glance requirement is
  iso8601>=0.1.4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1255419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242501] Re: Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

2013-12-13 Thread Alan Pevec
** No longer affects: glance/folsom

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242501

Title:
  Jenkins failed due to
  TestGlanceAPI.test_get_details_filter_changes_since

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance grizzly series:
  In Progress
Status in Glance havana series:
  Fix Committed

Bug description:
  Now we're running into a Jenkins failure due to the test case failure
  below:

  2013-10-20 06:12:31.930 | 
==
  2013-10-20 06:12:31.930 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_get_details_filter_changes_since
  2013-10-20 06:12:31.930 | 
--
  2013-10-20 06:12:31.930 | _StringException: Traceback (most recent call last):
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v1/test_api.py,
 line 1358, in test_get_details_filter_changes_since
  2013-10-20 06:12:31.931 | self.assertEquals(res.status_int, 400)
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 322, in assertEqual
  2013-10-20 06:12:31.931 | self.assertThat(observed, matcher, message)
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 417, in assertThat
  2013-10-20 06:12:31.931 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-20 06:12:31.931 | MismatchError: 200 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244055] Re: six has no attribute 'add_metaclass'

2013-12-14 Thread Alan Pevec
** Also affects: tempest/havana
   Importance: Critical
   Status: Fix Released

** Changed in: tempest/havana
 Assignee: (unassigned) = Joe Gordon (jogo)

** Tags removed: in-stable-havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244055

Title:
  six has no attribute 'add_metaclass'

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released
Status in tempest havana series:
  Fix Released

Bug description:
  I have a patch failing in gate with traces containing the following:

  2013-10-23 22:27:54.336 |   File 
/opt/stack/new/python-novaclient/novaclient/base.py, line 166, in module
  2013-10-23 22:27:54.336 | @six.add_metaclass(abc.ABCMeta)
  2013-10-23 22:27:54.337 | AttributeError: 'module' object has no attribute 
'add_metaclass'

  For full logs, see the failing patch:
  https://review.openstack.org/#/c/52876/

  It looks like this was caused by this recent commit:
  https://review.openstack.org/#/c/52255/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206081] Re: [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1206081

Title:
  [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Won't Fix
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When doing QA for SUSE on bug 1177830
  I found that the fix is incomplete,
  because it assumed that the cached image would be mostly sparse.

  However, I can easily create non-sparse small compressed qcow2 images
  with

  perl -e 'for(1..11000){print "x" x 1024000}' > img
  qemu-img convert -c -O qcow2 img img.qcow2
  glance image-create --name=11gb --is-public=True --disk-format=qcow2 
--container-format=bare < img.qcow2
  nova boot --image 11gb --flavor m1.small testvm

  which (in Grizzly and Essex) results in one (or two in Essex) 11 GB files
  being created in /var/lib/nova/instances/_base/, still allowing attackers
  to fill up the disk space of compute nodes, because the size check is only
  done after the uncompressing/caching.
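
  As a rough illustration of the missing check, a minimal sketch that rejects
  a qcow2 whose virtual size exceeds the flavor's root disk before it is ever
  expanded into the cache (paths, names and the flavor argument are
  assumptions for illustration, not the actual Nova code):

  import json
  import subprocess

  def virtual_size_bytes(path):
      # qemu-img reports the fully expanded (virtual) size, not the file size
      out = subprocess.check_output(
          ["qemu-img", "info", "--output=json", path])
      return json.loads(out)["virtual-size"]

  def check_root_disk(path, flavor_root_gb):
      if virtual_size_bytes(path) > flavor_root_gb * 1024 ** 3:
          raise ValueError("image virtual size exceeds flavor root disk")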

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1206081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1195947] Re: VM re-scheduler mechanism will cause BDM-volumes conflict

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1195947

Title:
  VM re-scheduler mechanism will cause BDM-volumes conflict

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Due to the re-scheduler mechanism, when a user tries (in error) to create
  an instance using a volume which is already in use by another instance,
  the error is correctly detected, but the recovery code will incorrectly
  affect the original instance.

  An exception should be raised directly when this situation occurs.
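
  A minimal, self-contained sketch of the guard being asked for here (a model
  of the idea, not the actual Nova/Cinder code):

  class InvalidVolume(Exception):
      pass

  def check_attach(volume):
      # refuse to build a BDM around a volume that is not available
      if volume["status"] != "available":
          raise InvalidVolume("volume %s is %s, not available"
                              % (volume["id"], volume["status"]))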

  
  
  We can create VM1 with BDM volumes (for example, one volume that we will
  call “Vol-1”).

  But when that already-attached volume (Vol-1) is passed in the BDM
  parameters to create a new VM2, the re-scheduler mechanism causes the
  volume to be recorded as attached to the new VM2 in Nova and Cinder,
  instead of raising an “InvalidVolume” exception saying that Vol-1 is
  already attached to VM1.

  In reality, Vol-1 ends up attached to both VM1 and VM2 on the hypervisor.
  But when you operate on Vol-1 from VM1, you cannot see any corresponding
  changes on VM2…

  I reproduced it and documented it; please check the attachment for
  details.

  -
  I checked the Nova code; the problem is caused by the VM re-scheduler
  mechanism:

  Nova checks the state of BDM volumes in Cinder [def
  _setup_block_device_mapping() in manager.py]. If any state is “in-use”,
  the request fails and triggers the VM re-scheduler.

  According to the existing flow in Nova, before re-scheduling it shuts down
  the VM and detaches all BDM volumes in Cinder for rollback [def
  _shutdown_instance() in manager.py]. As a result, the state of Vol-1
  changes from “in-use” to “available” in Cinder, but no detach operation
  happens on the Nova side…

  Therefore, after re-scheduling, the BDM volume check passes when creating
  VM2 the second time, and all of VM1’s BDM volumes (Vol-1) are taken over
  by VM2 and recorded in the Nova and Cinder DBs. But Vol-1 is still
  attached to VM1 on the hypervisor, and will also be attached to VM2 after
  the VM creation succeeds…

  ---

  Moreover, the problem mentioned above occurs when “delete_on_termination”
  of the BDMs is “False”. If the flag is “True”, all BDM volumes are deleted
  in Cinder, because their states have already changed from “in-use” to
  “available” beforehand [def _cleanup_volumes() in manager.py].
  (P.S. Success depends on the specific implementation of the Cinder driver.)

  Thanks~

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1195947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177830] Re: [OSSA 2013-012] Unchecked qcow2 root disk sizes

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177830

Title:
  [OSSA 2013-012] Unchecked qcow2 root disk sizes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Currently there's no check on the root disk raw sizes. A user can
  create qcow2 images with any size and upload it to glance and spawn
  instances off this file. The raw backing file created in the compute
  node will be small at first due to it being a sparse file, but will
  grow as data is written to it. This can cause the following issues.

  1. Bypass storage quota restrictions
  2. Overrun compute host disk space

  This was reproduced in Devstack using recent trunk d7e4692.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251152] Re: create instance with ephemeral disk fails

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251152

Title:
  create instance with ephemeral disk fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1037, in _build_instance
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] set_access_ip=set_access_ip)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1410, in _spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] LOG.exception(_('Instance faile
  d to spawn'), instance=instance)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1407, in _spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] block_device_info)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/driver.py, line 2063, in spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] admin_pass=admin_password)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/driver.py, line 2370, in _create_image
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] ephemeral_size=ephemeral_gb)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/imagebackend.py, line 174, in cache
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] *args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/imagebackend.py, line 307, in create_image
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] prepare_template(target=base, m
  ax_size=size, *args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/openst
  ack/common/lockutils.py, line 246, in inner
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] return f(*args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File 
/opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 162, in 
call_if_not_exists
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] fetch_func(target=target, *args, 
**kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] TypeError: _create_ephemeral() got an 
unexpected keyword argument 'max_size'
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] 

  max_size argument was add in 3cdfe894ab58f7b91bf7fb690fc5bc724e44066f,
  when creating ephemeral disks , _create_ephemeral method will get an
  unexpected keyword argument  max_size

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251792] Re: infinite recursion when deleting an instance with no network interfaces

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251792

Title:
  infinite recursion when deleting an instance with no network
  interfaces

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In some situations when an instance has no network information (a
  phrase that I'm using loosely), deleting the instance results in
  infinite recursion. The stack looks like this:

  2013-11-15 18:50:28.995 DEBUG nova.network.neutronv2.api 
[req-28f48294-0877-4f09-bcc1-7595dbd4c15a demo demo]   File 
/usr/lib/python2.7/dist-packages/eventlet/greenpool.py, line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in 
_process_data
  **args)
File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 354, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/exception.py, line 73, in wrapped
  return f(self, context, *args, **kw)
File /opt/stack/nova/nova/compute/manager.py, line 230, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 295, in 
decorated_function
  function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 259, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1984, in 
terminate_instance
  do_terminate_instance(instance, bdms)
File /opt/stack/nova/nova/openstack/common/lockutils.py, line 248, in 
inner
  return f(*args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1976, in 
do_terminate_instance
  reservations=reservations)
File /opt/stack/nova/nova/hooks.py, line 105, in inner
  rv = f(*args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1919, in 
_delete_instance
  self._shutdown_instance(context, db_inst, bdms)
File /opt/stack/nova/nova/compute/manager.py, line 1829, in 
_shutdown_instance
  network_info = self._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/compute/manager.py, line 868, in 
_get_instance_nw_info
  instance)
File /opt/stack/nova/nova/network/neutronv2/api.py, line 449, in 
get_instance_nw_info
  result = self._get_instance_nw_info(context, instance, networks)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)

  RECURSION STARTS HERE

File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)

  ... REPEATS AD NAUSEUM ...

File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)
File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 49, in wrapper
  res = f(self, context, *args, **kwargs)
File /opt/stack/nova/nova/network/neutronv2/api.py, line 459, in 
_get_instance_nw_info
  LOG.debug('%s', ''.join(traceback.format_stack()))

  Here's a step-by-step explanation of how the infinite recursion
  arises:

  1. somebody calls nova.network.neutronv2.api.API.get_instance_nw_info

  2. in the above call, the network info is successfully retrieved as
  result = self._get_instance_nw_info(context, instance, networks)

  3. however, since the instance has no network information, result is
  the empty list (i.e., [])

  4. the result is put in the cache by calling
  nova.network.api.update_instance_cache_with_nw_info

  5. update_instance_cache_with_nw_info is supposed to add the result to
  the cache, but due to a bug in update_instance_cache_with_nw_info, it
  recursively calls api.get_instance_nw_info, which brings us back to
  step 1. The bug is the check before the recursive call:

  if not nw_info:
  nw_info = api._get_instance_nw_info(context, instance)

  which erroneously equates [] and None. Hence the check should be if
  nw_info is None:
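
  A self-contained illustration of why the two checks differ (an empty
  network_info list is a valid cached value, while None means nothing was
  passed in):

  for nw_info in (None, []):
      refetch_if_falsy = not nw_info        # True for both None and []
      refetch_if_missing = nw_info is None  # True only for None
      print("%r -> not: %s, is None: %s"
            % (nw_info, refetch_if_falsy, refetch_if_missing))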

  I should clarify that the instance _did_ have network information at
  some point (i.e., I booted it normally with a NIC), however, some time
  after I issued a nova delete request, the network information was
  gone (i.e., in 

[Yahoo-eng-team] [Bug 1235450] Re: [OSSA 2013-033] Metadata queries from Neutron to Nova are not restricted by tenant (CVE-2013-6419)

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235450

Title:
  [OSSA 2013-033] Metadata queries from Neutron to Nova are not
  restricted by tenant (CVE-2013-6419)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron grizzly series:
  In Progress
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Committed

Bug description:
  The neutron metadata service works in the following way:

  Instance makes a GET request to http://169.254.169.254/

  This is directed to the metadata-agent which knows which
  router(namespace) he is running on and determines the ip_address from
  the http request he receives.

  Now, the neturon-metadata-agent queries neutron-server  using the
  router_id and ip_address from the request to determine the port the
  request came from. Next, the agent takes the device_id (nova-instance-
  id) on the port and passes that to nova as X-Instance-ID.

  The vulnerability is that if someone exposes their instance_id their
  metadata can be retrieved. In order to exploit this, one would need to
  update the device_id  on a port to match the instance_id they want to
  hijack the data from.
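
  For context, the usual mitigation is for the metadata agent to sign the
  instance id with a secret shared with Nova, so a caller gains nothing from
  merely knowing (or forging) an instance id; a minimal sketch of that idea,
  with header and option names treated as assumptions:

  import hashlib
  import hmac

  def sign_instance_id(shared_secret, instance_id):
      # sent alongside X-Instance-ID; the receiving side recomputes the HMAC
      # with the same shared secret and compares before trusting the id
      return hmac.new(shared_secret, instance_id, hashlib.sha256).hexdigest()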

  To demonstrate:

  arosen@arosen-desktop:~/devstack$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | 1eb33bf1-6400-483a-9747-e19168b68933 | vm1  | ACTIVE | None   | Running 
| private=10.0.0.4 |
  | eed973e2-58ea-42c4-858d-582ff6ac3a51 | vm2  | ACTIVE | None   | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  
  arosen@arosen-desktop:~/devstack$ neutron port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 3128f195-c41b-4160-9a42-40e024771323 |  | fa:16:3e:7d:a5:df | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.1} 
|
  | 62465157-8494-4fb7-bdce-2b8697f03c12 |  | fa:16:3e:94:62:47 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} 
|
  | 8473fb8d-b649-4281-b03a-06febf61b400 |  | fa:16:3e:4f:a3:b0 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.2} 
|
  | 92c42c1a-efb0-46a6-89eb-a38ae170d76d |  | fa:16:3e:de:9a:39 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.3} 
|
  
+--+--+---+-+

  
  arosen@arosen-desktop:~/devstack$ neutron port-show  
62465157-8494-4fb7-bdce-2b8697f03c12
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 1eb33bf1-6400-483a-9747-e19168b68933
|
  | device_owner  | compute:None
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {subnet_id: 
d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} |
  | id| 62465157-8494-4fb7-bdce-2b8697f03c12
|
  | mac_address   | fa:16:3e:94:62:47   
|
  | name  | 
|
  | network_id| 

[Yahoo-eng-team] [Bug 1247526] Re: libvirt evacuate(shared storage) fails w/ Permission denied on disk.config

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247526

Title:
  libvirt evacuate(shared storage) fails w/ Permission denied on
  disk.config

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When doing an evacuate for an instance on shared storage, the following
  error occurs:

  2013-10-25 01:20:49.843 INFO nova.compute.manager 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] disk on shared storage, recreating using 
existing disk
  2013-10-25 01:20:53.325 INFO nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Creating image
  2013-10-25 01:20:53.413 INFO nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Using config drive
  2013-10-25 01:20:57.812 INFO nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Creating config drive at 
/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config
  2013-10-25 01:20:57.835 ERROR nova.virt.libvirt.driver 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Creating config drive failed with error: 
Unexpected error while running command.
  Command: genisoimage -o 
/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config -ldots 
-allow-lowercase -allow-multidot -l -publisher OpenStack Nova 2013.1.3 -quiet 
-J -r -V config-2 /tmp/cd_gen_I3EQUN
  Exit code: 13
  Stdout: ''
  Stderr: Warning: creating filesystem that does not conform to 
ISO-9660.\ngenisoimage: Permission denied. Unable to open disc image file 
'/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config'.\n
  2013-10-25 01:20:57.837 ERROR nova.compute.manager 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] [instance: 
dc2c583c-7488-4a32-81f1-2282097eb358] Unexpected error while running command.
  Command: genisoimage -o 
/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config -ldots 
-allow-lowercase -allow-multidot -l -publisher OpenStack Nova 2013.1.3 -quiet 
-J -r -V config-2 /tmp/cd_gen_I3EQUN
  Exit code: 13
  Stdout: ''
  Stderr: Warning: creating filesystem that does not conform to 
ISO-9660.\ngenisoimage: Permission denied. Unable to open disc image file 
'/var/lib/nova/instances/dc2c583c-7488-4a32-81f1-2282097eb358/disk.config'.\n. 
Setting instance vm_state to ERROR
  2013-10-25 01:20:58.693 ERROR nova.openstack.common.rpc.amqp 
[req-3ecfa138-b688-447c-b352-18cca32b1a1d 8394996e48204a8d878bac7bf6a88db0 
d05694d82d984dd495d25bfdf08ba049] Exception during message handling

  
  The Nova version is 2013.1, but after looking at the code, this should
  also affect the latest trunk.

  Allowing the nova user to read/write disk.config should fix this
  issue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1247526/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246327] Re: the snapshot of a volume-backed instance cannot be used to boot a new instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246327

Title:
  the snapshot of a volume-backed instance cannot be used to boot a new
  instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  After the changes to block device mappings introduced for Havana, if we
  try to create a snapshot of a volume-backed instance, the resulting image
  cannot be used to boot a new instance, due to conflicts with the boot
  index between the block_device_mapping stored in the image properties and
  the current image.

  The steps to reproduce are:

  $ glance image-create --name f20 --disk-format qcow2 --container-
  format bare --min-disk 2 --is-public True --min-ram 512 --copy-from
  
http://download.fedoraproject.org/pub/fedora/linux/releases/test/20-Alpha/Images/x86_64
  /Fedora-x86_64-20-Alpha-20130918-sda.qcow2

  $ cinder create --image-id uuid of the new image --display-name f20
  2

  $ nova boot --boot-volume uuid of the new volume --flavor m1.tiny
  test-instance

  $ nova image-create test-instance test-snap

  This will create a snapshot of the volume and an image in glance with
  a block_device_mapping containing the snapshot_id and all the other
  values from the original block_device_mapping (id, connection_info,
  instance_uuid, ...):

  | Property 'block_device_mapping' | [{instance_uuid:
  989f03dc-2736-4884-ab66-97360102d804, virtual_name: null,
  no_device: null, connection_info: {\driver_volume_type\:
  \iscsi\, \serial\: \cb6d4406-1c66-4f9a-9fd8-7e246a3b93b7\,
  \data\: {\access_mode\: \rw\, \target_discovered\: false,
  \encrypted\: false, \qos_spec\: null, \device_path\: \/dev/disk
  /by-path/ip-192.168.122.2:3260-iscsi-iqn.2010-10.org.openstack:volume-
  cb6d4406-1c66-4f9a-9fd8-7e246a3b93b7-lun-1\, \target_iqn\:
  \iqn.2010-10.org.openstack:volume-cb6d4406-1c66-4f9a-
  9fd8-7e246a3b93b7\, \target_portal\: \192.168.122.2:3260\,
  \volume_id\: \cb6d4406-1c66-4f9a-9fd8-7e246a3b93b7\,
  \target_lun\: 1, \auth_password\: \wh5bWkAjKv7Dy6Ptt4nY\,
  \auth_username\: \oPbN9FzbEPQ3iFpPhv5d\, \auth_method\:
  \CHAP\}}, created_at: 2013-10-30T13:18:57.00,
  snapshot_id: f6a25cc2-b3af-400b-9ef9-519d28239920, updated_at:
  2013-10-30T13:19:08.00, device_name: /dev/vda, deleted: 0,
  volume_size: null, volume_id: null, id: 3, deleted_at: null,
  delete_on_termination: false}] |

  When we later try to use this image to boot a new instance, the API
  won't let us, because both the device in the image bdm and the image
  itself (which is empty) are considered to be the boot device:

  $ nova boot --image test-snap --flavor m1.nano test-instance2
  ERROR: Block Device Mapping is Invalid: Boot sequence for the instance and 
image/block device mapping combination is not valid. (HTTP 400) (Request-ID: 
req-3e502a29-9cd3-4c0c-8ddc-a28d315d21ea)

  If we check the internal flow, we can see that nova considers the image
  to be the boot device even though the image itself doesn't define any
  local disk, only a block_device_mapping pointing to the snapshot.

  To be able to generate proper images from volume-backed instances we
  should (a short sketch of step 1 follows after the list):
   1. copy only the relevant keys from the original block_device_mapping, to
  prevent duplicates in the DB
   2. prevent nova from adding a new block device for the image if the image
  doesn't define any local disk
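
  A minimal sketch of step 1 above: copying only the fields that make sense
  as image properties and dropping the instance-specific ones (the key list
  here is an assumption for illustration):

  IMAGE_BDM_KEYS = ("device_name", "snapshot_id", "volume_size",
                    "delete_on_termination")

  def bdm_for_image(bdm):
      # drop id, instance_uuid, connection_info and other per-instance fields
      return dict((k, v) for k, v in bdm.items() if k in IMAGE_BDM_KEYS)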

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257293

Title:
  [messaging] QPID broadcast RPC requests to all servers for a given
  topic

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed

Bug description:
  According to the oslo.messaging documentation, when a RPC request is
  made to a given topic, and there are multiple servers for that topic,
  only _one_ server should service that RPC request.  See
  http://docs.openstack.org/developer/oslo.messaging/target.html

  topic (str) – A name which identifies the set of interfaces exposed
  by a server. Multiple servers may listen on a topic and messages will
  be dispatched to one of the servers in a round-robin fashion.

  In the case of a QPID-based deployment using topology version 2, this
  is not the case.  Instead, each listening server gets a copy of the
  RPC and will process it.
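
  A self-contained model of the documented semantics, for contrast with the
  broadcast behaviour seen on QPID (plain Python, not oslo.messaging itself):

  import itertools

  # several servers listen on one topic; each request should be dispatched
  # to exactly one of them, round-robin, rather than copied to all of them
  servers_on_topic = ["server-01", "server-02"]
  dispatcher = itertools.cycle(servers_on_topic)

  for request in ("req-1", "req-2", "req-3", "req-4"):
      print("%s handled by %s" % (request, next(dispatcher)))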

  For more detail, see

  https://bugs.launchpad.net/oslo/+bug/1178375/comments/26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_489a3178fc704123b0e5e2fbee125247}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  Recevr my-topic_fanout ; {node: {x-declare: {auto-delete: true, 
durable: false, type: fanout}, type: topic}, create: always, 
link: {x-declare: {auto-delete: true, exclusive: true, durable: 
false}, durable: true, name: 
my-topic_fanout_b40001afd9d946a582ead3b7b858b588}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {node: {x-declare: {auto-delete: 
true, durable: true}, type: topic}, create: always, link: 
{x-declare: {auto-delete: true, exclusive: false, durable: false}, 
durable: true, name: my-topic.server-02}}
  Recevr openstack/my-topic ; {node: {x-declare: {auto-delete: true, 
durable: true}, type: topic}, create: always, link: {x-declare: 
{auto-delete: true, exclusive: false, durable: false}, durable: true, 
name: my-topic}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {link: {x-declare: 
{auto-delete: true, durable: false}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/fanout/ ; {link: {x-declare: {auto-delete: true, 
exclusive: true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {link: {x-declare: 
{auto-delete: true, durable: false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229994] Re: VMwareVCDriver: snapshot failure when host in maintenance mode

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229994

Title:
  VMwareVCDriver: snapshot failure when host in maintenance mode

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  Image snapshot through the VC cluster driver may fail if, within the
  datacenter containing the cluster managed by the driver, there are one
  or more hosts in maintenance mode with access to the datastore
  containing the disk image snapshot.

  A sign that this situation has occurred is the appearance in the nova
  compute log of an error similar to the following:

  2013-08-02 07:10:30.036 WARNING nova.virt.vmwareapi.driver [-] Task 
[DeleteVirtualDisk_Task] (returnval){
  value = task-228
  _type = Task
  } status: error The operation is not allowed in the current state.

  What this means is that even if all hosts in the cluster are running fine
  in normal mode, a host outside of the cluster going into maintenance mode
  may lead to a snapshot failure.

  The root cause of the problem is due to an issue in VC's handler of
  the VirtualDiskManager.DeleteVirtualDisk_Task API, which may
  incorrectly pick a host in maintenance mode to service the disk
  deletion even though such an operation will be rejected by the host
  under maintenance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1229994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252827] Re: VMWARE: Intermittent problem with stats reporting

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252827

Title:
  VMWARE: Intermittent problem with stats reporting

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  I see that sometimes the VMware driver reports 0 for its stats. Please
  take a look at the following log file for more information:
  http://162.209.83.206/logs/51404/6/screen-n-cpu.txt.gz

  excerpts from log file:
  2013-11-18 15:41:03.994 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for datastore Reason: None
  2013-11-18 15:41:04.029 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for host Reason: None
  2013-11-18 15:41:04.029 20162 WARNING nova.virt.vmwareapi.vim_util [-] Unable 
to retrieve value for resourcePool Reason: None
  2013-11-18 15:41:04.029 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: free ram (MB): 0 _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:389
  2013-11-18 15:41:04.029 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: free disk (GB): 0 _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:390
  2013-11-18 15:41:04.030 20162 DEBUG nova.compute.resource_tracker [-] 
Hypervisor: VCPU information unavailable _report_hypervisor_resource_view 
/opt/stack/nova/nova/compute/resource_tracker.py:397

  During this time we cannot spawn any server. Look at the
  http://162.209.83.206/logs/51404/6/screen-n-sch.txt.gz

  excerpts from log file:
  2013-11-18 15:41:52.475 DEBUG nova.filters 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] Filter AvailabilityZoneFilter 
returned 1 host(s) get_filtered_objects /opt/stack/nova/nova/filters.py:88
  2013-11-18 15:41:52.476 DEBUG nova.scheduler.filters.ram_filter 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] (Ubuntu1204Server, 
domain-c26(c1)) ram:-576 disk:0 io_ops:0 instances:1 does not have 64 MB usable 
ram, it only has -576.0 MB usable ram. host_passes 
/opt/stack/nova/nova/scheduler/filters/ram_filter.py:60
  2013-11-18 15:41:52.476 INFO nova.filters 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] Filter RamFilter returned 0 
hosts
  2013-11-18 15:41:52.477 WARNING nova.scheduler.driver 
[req-dc82a954-3cc5-4627-ae01-b3d1ec2155af 
InstanceActionsTestXML-tempest-716947327-user 
InstanceActionsTestXML-tempest-716947327-tenant] [instance: 
1a648022-1783-4874-8b41-c3f4c89d8500] Setting instance to ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251920] Re: Tempest failures due to failure to return console logs from an instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251920

Title:
  Tempest failures due to failure to return console logs from an
  instance

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  Logstash search:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpjb25zb2xlLmh0bWwgQU5EIG1lc3NhZ2U6XCJhc3NlcnRpb25lcnJvcjogY29uc29sZSBvdXRwdXQgd2FzIGVtcHR5XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODQ2NDEwNzIxODl9

  An example failure is http://logs.openstack.org/92/55492/8/check
  /check-tempest-devstack-vm-full/ef3a4a4/console.html

  console.html
  ===

  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,775 Request: POST 
http://127.0.0.1:8774/v2/3f6934d9aabf467aa8bc51397ccfa782/servers/10aace14-23c1-4cec-9bfd-2c873df1fbee/action
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': 'Token omitted'}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:20,776 Request Body: 
{os-getConsoleOutput: {length: 10}}
  2013-11-16 21:54:27.998 | 2013-11-16 21:41:21,000 Response Status: 200
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Nova request id: 
req-7a2ee0ab-c977-4957-abb5-1d84191bf30c
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Headers: 
{'content-length': '14', 'date': 'Sat, 16 Nov 2013 21:41:20 GMT', 
'content-type': 'application/json', 'connection': 'close'}
  2013-11-16 21:54:27.999 | 2013-11-16 21:41:21,001 Response Body: {output: 
}
  2013-11-16 21:54:27.999 | }}}
  2013-11-16 21:54:27.999 | 
  2013-11-16 21:54:27.999 | Traceback (most recent call last):
  2013-11-16 21:54:27.999 |   File 
tempest/api/compute/servers/test_server_actions.py, line 281, in 
test_get_console_output
  2013-11-16 21:54:28.000 | self.wait_for(get_output)
  2013-11-16 21:54:28.000 |   File tempest/api/compute/base.py, line 133, in 
wait_for
  2013-11-16 21:54:28.000 | condition()
  2013-11-16 21:54:28.000 |   File 
tempest/api/compute/servers/test_server_actions.py, line 278, in get_output
  2013-11-16 21:54:28.000 | self.assertTrue(output, Console output was 
empty.)
  2013-11-16 21:54:28.000 |   File /usr/lib/python2.7/unittest/case.py, line 
420, in assertTrue
  2013-11-16 21:54:28.000 | raise self.failureException(msg)
  2013-11-16 21:54:28.001 | AssertionError: Console output was empty.

  n-api
  

  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Action: 'action', body: 
{os-getConsoleOutput: {length: 10}} _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:963
  2013-11-16 21:41:20.782 DEBUG nova.api.openstack.wsgi 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Calling method bound method 
ConsoleOutputController.get_console_output of 
nova.api.openstack.compute.contrib.console_output.ConsoleOutputController 
object at 0x3c1f990 _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:964
  2013-11-16 21:41:20.865 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] Making synchronous call on 
compute.devstack-precise-hpcloud-az2-663635 ... multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:553
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] MSG_ID is 
a93dceabf6a441eb850b5fbb012d661f multicall 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:556
  2013-11-16 21:41:20.866 DEBUG nova.openstack.common.rpc.amqp 
[req-7a2ee0ab-c977-4957-abb5-1d84191bf30c 
ServerActionsTestJSON-tempest-2102529866-user 
ServerActionsTestJSON-tempest-2102529866-tenant] UNIQUE_ID is 
706ab69dc066440fbe1bd7766b73d953. _add_unique_id 
/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:341
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] Closed channel #1 _do_close 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
  2013-11-16 21:41:20.869 22679 DEBUG amqp [-] using channel_id: 1 __init__ 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
  2013-11-16 21:41:20.870 22679 DEBUG amqp [-] Channel open _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  2013-11-16 21:41:20.999 INFO nova.osapi_compute.wsgi.server 

[Yahoo-eng-team] [Bug 1246592] Re: Nova live migration failed due to OLE error

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246592

Title:
  Nova live migration failed due to OLE error

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When migrating a VM on Hyper-V, the command fails with the following error:

  2013-10-25 03:35:40.299 12396 ERROR nova.openstack.common.rpc.amqp 
[req-b542e0fd-74f5-4e53-889c-48a3b44e2887 3a75a18c8b60480d9369b25ab06519b3 
0d44e4afd3d448c6acf0089df2dc7658] Exception during message handling
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\amqp.py, line 461, 
in _process_data
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\openstack\common\rpc\dispatcher.py, line 
172, in dispatch
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 90, in wrapped
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
payload)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\exception.py, line 73, in wrapped
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 4103, in 
live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py, line 118, in 
live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
block_migration, migrate_data)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
44, in wrapper
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp return 
function(self, *args, **kwds)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
76, in live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
recover_method(context, instance_ref, dest, block_migration)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationops.py, line 
69, in live_migration
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp dest)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationutils.py, line 
231, in live_migrate_vm
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
disk_paths = self._get_physical_disk_paths(vm_name)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\livemigrationutils.py, line 
114, in _get_physical_disk_paths
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
ide_paths = self._vmutils.get_controller_volume_paths(ide_ctrl_path)
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py, line 553, in 
get_controller_volume_paths
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp 
parent: controller_path})
  2013-10-25 03:35:40.299 12396 TRACE nova.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\wmi.py, line 

[Yahoo-eng-team] [Bug 1239603] Re: Bogus ERROR level debug spew when creating a new instance

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239603

Title:
  Bogus ERROR level debug spew when creating a new instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Change-Id: Ifd41886b9bc7dff01cdf741a833946bed1bdddc implemented a
  number of items required for auto_disk_config to be more than just
  True or False.

  It appears that a logging statement used for debugging has been left
  behind:

  1256 def _check_auto_disk_config(self, instance=None, image=None,
  1257 **extra_instance_updates):
  1258 auto_disk_config = extra_instance_updates.get(auto_disk_config)
  1259 if auto_disk_config is None:
  1260 return
  1261 if not image and not instance:
  1262 return
  1263 
  1264 if image:
  1265 image_props = image.get(properties, {})
  1266 LOG.error(image_props)

  
  This needs to be removed, as it is causing false positives to be picked up
  by our error-tracking software.
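
  A minimal sketch of the intended cleanup, either dropping the statement or
  demoting it to debug level (shown as a stand-alone sketch, not the actual
  patch):

  import logging

  LOG = logging.getLogger(__name__)

  def _log_image_props(image):
      image_props = image.get("properties", {})
      # debug level keeps the information without tripping error monitoring
      LOG.debug("image properties: %s", image_props)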

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244311] Re: notification failure in _sync_power_states

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244311

Title:
  notification failure in _sync_power_states

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The _sync_power_states periodic task pulls instances without
  system_metadata in order to reduce network bandwidth being
  unnecessarily consumed.  Most of the time this is fine, but if
  vm_power_state != db_power_state then the instance is updated and
  saved.  As part of saving the instance a notification is sent.  In
  order to send the notification it extracts flavor information from the
  system_metadata on the instance.  But system_metadata isn't loaded,
  and won't be lazy loaded.  So an exception is raised and the
  notification isn't sent.
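
  A self-contained model of the failure, where the flavor payload is built
  from system_metadata keys that were never loaded (names shortened for
  illustration):

  def extract_flavor(sys_meta):
      # mirrors the lookup that raises inside nova.compute.flavors
      return {"memory_mb": int(sys_meta["instance_type_memory_mb"])}

  instance = {"uuid": "fa0cee4b", "system_metadata": {}}  # sysmeta not loaded
  try:
      extract_flavor(instance["system_metadata"])
  except KeyError as missing:
      print("notification payload failed, missing %s" % missing)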

  2013-10-23 03:30:35.714 21492 ERROR nova.notifications [-] [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Failed to send state update notification
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Traceback (most recent call last):
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 146, in send_update
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] old_display_name=old_display_name)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 199, in _send_instance_update_notification
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] payload = info_from_instance(context, 
instance, None, None)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/notifications.py, 
line 343, in info_from_instance
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] instance_type = 
flavors.extract_flavor(instance_ref)
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] File 
/opt/rackstack/472.23/nova/lib/python2.6/site-packages/nova/compute/flavors.py,
 line 282, in extract_flavor
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] instance_type[key] = 
type_fn(sys_meta[type_key])
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] KeyError: 'instance_type_memory_mb'
  2013-10-23 03:30:35.714 21492 TRACE nova.notifications [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab]
  2013-10-23 03:30:35.718 21492 WARNING nova.compute.manager [-] [instance: 
fa0cee4b-6825-47af-bf6f-64491326feab] Instance shutdown by itself. Calling the 
stop API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237795] Re: VMware: restarting nova compute reports invalid instances

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237795

Title:
  VMware: restarting nova compute reports invalid instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When nova compute restarts, the running instances on the hypervisor are
  queried. None of the instances would be matched, which prevents the
  instance states from being kept in sync with the state in the database. See
  _destroy_evacuated_instances
  (https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L531)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230925] Re: Require new python-cinderclient for Havana

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1230925

Title:
  Require new python-cinderclient for Havana

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Havana Nova needs to require cinderclient 1.0.6, which contains the
  update_snapshot_status() API used by assisted snapshots, as well as
  migrate_volume_completion() for volume migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1230925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229671] Re: Deploy instances failed on Hyper-V with Chinese locale

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229671

Title:
  Deploy instances failed on Hyper-V with Chinese locale

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I am deploying instances on my Hyper-V host but I met the below error.
  I remember in the past, vmops.py calls vhdutils.py but now it calls
  vhdutilsv2.py. Not sure if that is the correct place that caused this
  issue.  Please help to check.

  
  2013-09-24 18:46:47.079 2304 WARNING nova.network.neutronv2.api [-] 
[instance: 973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] No network configured!
  2013-09-24 18:46:47.734 2304 INFO nova.virt.hyperv.vmops 
[req-474eb715-9048-475f-9734-7b5fdc005a64 b13861ca49f641d7a818e6b8335f2351 
29db386367fa4c4e9ffb3c369a46ee90] [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] Spawning new instance
  2013-09-24 18:46:49.996 2304 ERROR nova.compute.manager 
[req-474eb715-9048-475f-9734-7b5fdc005a64 b13861ca49f641d7a818e6b8335f2351 
29db386367fa4c4e9ffb3c369a46ee90] [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] Instance failed to spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] Traceback (most recent call last):
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\compute\manager.py, line 1431, in _spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] block_device_info)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\driver.py, line 55, in spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] admin_password, network_info, 
block_device_info)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 90, in wrapper
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] return function(self, *args, **kwds)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 208, in spawn
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] root_vhd_path = 
self._create_root_vhd(context, instance)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 177, in 
_create_root_vhd
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] self._pathutils.remove(root_vhd_path)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 161, in 
_create_root_vhd
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] base_vhd_info = 
self._vhdutils.get_vhd_info(base_vhd_path)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vhdutilsv2.py, line 124, in 
get_vhd_info
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] et = 
ElementTree.fromstring(vhd_info_xml)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\xml\etree\ElementTree.py, line 1301, in XML
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f] parser.feed(text)
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager [instance: 
973ff9d0-fb57-4ca6-a3ba-7b08783bcb9f]   File C:\Program Files 
(x86)\IBM\SmartCloud Entry\Hyper-V 
Agent\Python27\lib\xml\etree\ElementTree.py, line 1641, in feed
  2013-09-24 18:46:49.996 2304 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1241350] Re: VMware: Detaching a volume from an instance also deletes the volume's backing vmdk (ESXDriver only)

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241350

Title:
  VMware: Detaching a volume from an instance also deletes the volume's
  backing vmdk (ESXDriver only)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I found that when I run:

  % nova volume-detach my_instance c54ad11f-4e51-41a0-97db-7e551776db59

  where the volume with the given id is currently attached to my running
  instance named my_instance, the operation completes successfully.
  Nevertheless, a subsequent attempt to attach the same volume will fail.
  So:

  % nova volume-attach my_instance c54ad11f-4e51-41a0-97db-7e551776db59
  /dev/sdb

  fails with the error that the volume's vmdk file is not found.

  Cause:

  During volume detach a delete_virtual_disk_spec is used to remove the
  device from the running instance. This spec also destroys the
  underlying vmdk file. The offending line is :
  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vm_util.py#L471

  Possible fix:
  The fileOperation field of the device config should be left unset during this
reconfigure operation. We should continue setting the device_config.operation
field to "remove". This will remove the device from the VM without deleting the
underlying vmdk backing.
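
  An illustrative sketch (assumed vSphere binding names, modeled loosely on
  nova.virt.vmwareapi.vm_util) of the distinction the fix relies on: setting
  operation = 'remove' detaches the device, while additionally setting
  fileOperation = 'destroy' is what deletes the backing vmdk.

    # Illustrative sketch (assumed binding names) of the reconfigure spec.
    def detach_disk_spec(client_factory, device, destroy_disk=False):
        device_spec = client_factory.create('ns0:VirtualDeviceConfigSpec')
        device_spec.operation = 'remove'            # detach the device from the VM
        if destroy_disk:
            # Only set for a true disk deletion; leaving this unset on a plain
            # volume detach preserves the backing vmdk.
            device_spec.fileOperation = 'destroy'
        device_spec.device = device
        config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
        config_spec.deviceChange = [device_spec]
        return config_spec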

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253510] Re: Error mispelt in disk api file

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253510

Title:
  Error mispelt in disk api file

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Error is spelt "errror", which is causing a KeyError. See bug 1253508.
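
  A tiny sketch (hypothetical values) of the failure mode: the dict is written
  under the misspelled key, so code that later reads 'error' raises KeyError.

    status = {'errror': 'resize failed'}   # misspelled when written
    try:
        status['error']
    except KeyError as exc:
        print('KeyError:', exc)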

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246412] Re: Unshelving an instance with an attached volume causes the volume to not get attached

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246412

Title:
  Unshelving an instance with an attached volume causes the volume to
  not get attached

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When shelving an instance that has a volume attached, the volume will not
  get re-attached once the instance is unshelved.

  Reproduce by:

  $nova boot --image IMAGE --flavor FLAVOR test
  $nova volume-attach INSTANCE VOLUME #ssh into the instance and make sure the 
volume is there
  $nova shelve INSTANCE #Make sure the instance is done shelving
  $nova unshelve INSTANCE #Log in and see that the volume is not visible any 
more

  It can also be seen that the volume remains attached as per

  $cinder list

  And if you take a look at the generated xml (if you use libvirt) you
  can see that the volume is not there when the instance is done
  unshelving.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246103] Re: encryptors module forces cert and scheduler services to depend on cinderclient

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246103

Title:
  encryptors module forces cert and scheduler services to depend on
  cinderclient

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Packstack:
  Invalid

Bug description:
  When Nova Scheduler is installed via packstack as the only explicitly
  installed service on a particular node, it will fail to start.  This
  is because it depends on the Python cinderclient library, which is not
  marked as a dependency in the 'nova::scheduler' class in Packstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243260] Re: Nova api doesn't start with a backdoor port set

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243260

Title:
  Nova api doesn't start with a backdoor port set

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  nova-api fails to start properly if a backdoor port is specified.
  Looking at the logs, this traceback is printed repeatedly:

  2013-10-22 14:19:46.822 INFO nova.openstack.common.service [-] Child 1460 
exited with status 1
  2013-10-22 14:19:46.824 INFO nova.openstack.common.service [-] Started child 
1468
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 60684 for process 1467
  2013-10-22 14:19:46.833 INFO nova.openstack.common.eventlet_backdoor [-] 
Eventlet backdoor listening on 58986 for process 1468
  2013-10-22 14:19:46.837 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 117, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup x.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 49, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/service.py, line 448, in run_service
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/service.py, line 357, in start
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
self.manager.backdoor_port = self.backdoor_port
  2013-10-22 14:19:46.837 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'backdoor_port'
  2013-10-22 14:19:46.840 TRACE nova   File /usr/local/bin/nova-api, line 10, 
in module
  2013-10-22 14:19:46.840 TRACE nova sys.exit(main())
  2013-10-22 14:19:46.840 TRACE nova   File /opt/stack/nova/nova/cmd/api.py, 
line 53, in main
  2013-10-22 14:19:46.840 TRACE nova launcher.wait()
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 351, in wait
  2013-10-22 14:19:46.840 TRACE nova self._respawn_children()
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 341, in 
_respawn_children
  2013-10-22 14:19:46.840 TRACE nova self._start_child(wrap)
  2013-10-22 14:19:46.840 TRACE nova   File 
/opt/stack/nova/nova/openstack/common/service.py, line 287, in _start_child
  2013-10-22 14:19:46.840 TRACE nova os._exit(status)
  2013-10-22 14:19:46.840 TRACE nova TypeError: an integer is required
  2013-10-22 14:19:46.840 TRACE nova
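
  A minimal sketch (hypothetical classes, not nova's service.py) of the failure
  and the obvious guard: WSGI-based API services have no manager, so assigning
  manager.backdoor_port unconditionally raises the AttributeError above.

    class Service(object):
        def __init__(self, manager=None, backdoor_port=None):
            self.manager = manager
            self.backdoor_port = backdoor_port

        def start(self):
            # Only propagate the backdoor port when a manager actually exists.
            if self.manager is not None:
                self.manager.backdoor_port = self.backdoor_port

    Service(manager=None, backdoor_port=60684).start()  # no longer blows up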

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243291] Re: Restarting nova compute has an exception

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243291

Title:
  Restarting nova compute has an exception

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  (latest havana code - libvirt driver)

  1. launch a nova vm
  2. see that the instance is deployed on the compute node
  3. restart the compute node

  get the following exception:

  2013-10-22 05:46:53.711 30742 INFO nova.openstack.common.rpc.common 
[req-57056535-4ecd-488a-a75e-ff83341afb98 None None] Connected to AMQP server 
on 192.168.10.111:5672
  2013-10-22 05:46:53.737 30742 AUDIT nova.service [-] Starting compute node 
(version 2013.2)
  2013-10-22 05:46:53.814 30742 ERROR nova.openstack.common.threadgroup [-] 
'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
x.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 65, 
in run_service
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 154, in start
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 786, in 
init_host
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 664, in 
_init_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
net_info = compute_utils.get_nw_info_for_instance(instance)
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/utils.py, line 349, in 
get_nw_info_for_instance
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
return instance.info_cache.network_info
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup 
AttributeError: 'NoneType' object has no attribute 'network_info'
  2013-10-22 05:46:53.814 30742 TRACE nova.openstack.common.threadgroup
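
  A minimal sketch (not the actual upstream fix) of guarding the None
  info_cache that _init_instance trips over after a compute restart.

    def get_nw_info_for_instance(instance):
        # info_cache can be None for instances whose cache row was never
        # created or was deleted; treat that as "no network info" instead of
        # raising AttributeError.
        if instance.info_cache is None or instance.info_cache.network_info is None:
            return []   # stand-in for an empty NetworkInfo
        return instance.info_cache.network_info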

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240247] Re: API cell always doing local deletes

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240247

Title:
  API cell always doing local deletes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  It appears a regression was introduced in:

  https://review.openstack.org/#/c/36363/

  Where the API cell is now always doing a _local_delete()... before
  telling child cells to delete the instance.  There's at least a couple
  of bad side effects of this:

  1) The instance disappears immediately from API view, even though the 
instance still exists in the child cell.  The user does not see a 'deleting' 
task state.  And if the delete fails in the child cell, you have a sync issue 
until the instance is 'healed'.
  2) Double delete.start and delete.end notifications are sent.  1 from API 
cell, 1 from child cell.

  The problem seems to be that _local_delete is being called because the
  service is determined to be down... because the compute service does
  not run in the API cell.
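
  An illustrative sketch (assumed helper names, not nova's actual code) of the
  decision described above: in the API cell there is no local nova-compute, so
  "service looks down" must not by itself trigger a _local_delete when cells
  are enabled.

    def should_local_delete(instance_host, service_is_up, cells_enabled):
        if cells_enabled:
            # Let the child cell that actually hosts the instance perform the
            # delete; the API cell only forwards the request down.
            return False
        return instance_host is None or not service_is_up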

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237126] Re: nova-api-{ec2, metadata, os-compute} don't allow SSL to be enabled

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237126

Title:
  nova-api-{ec2,metadata,os-compute} don't allow SSL to be enabled

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Although the script bin/nova-api will read nova.conf to determine
  which API services should have SSL enabled (via 'enabled_ssl_apis'),
  the individual API scripts

  bin/nova-api-ec2
  bin/nova-api-metadata
  bin/nova-api-os-compute

  do not contain similar logic to allow SSL to be configured. For
  installations that want to use SSL but not the nova-api wrapper, there
  should be a similar way to enable SSL for these services.
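
  An illustrative sketch of what each standalone script would need, following
  the pattern used by bin/nova-api; the option and helper names are assumed
  from Havana-era nova and are not taken from the bug report itself.

    import sys

    from oslo.config import cfg

    from nova import config
    from nova import service

    def main(api_name='osapi_compute'):
        config.parse_args(sys.argv)
        # Consult enabled_ssl_apis and pass use_ssl through to the WSGI service.
        should_use_ssl = api_name in cfg.CONF.enabled_ssl_apis
        server = service.WSGIService(api_name, use_ssl=should_use_ssl)
        service.serve(server, workers=server.workers)
        service.wait()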

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1237126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242855] Re: [OSSA 2013-028] Removing role adds role with LDAP backend

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242855

Title:
  [OSSA 2013-028] Removing role adds role with LDAP backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Using the LDAP assignment backend, if you attempt to remove a role
  from a user on a tenant and the user doesn't have that role on the
  tenant then the user is actually granted the role on the tenant. Also,
  the role must not have been granted to anyone on the tenant before.

  To recreate

  0) Start with devstack, configured with LDAP (note especially to set
  KEYSTONE_ASSIGNMENT_BACKEND):

  In localrc,
   enable_service ldap
   KEYSTONE_IDENTITY_BACKEND=ldap
   KEYSTONE_ASSIGNMENT_BACKEND=ldap

  1) set up environment with OS_USERNAME=admin

  export OS_USERNAME=admin
  ...

  2) Create a new user, give admin role, list roles:

  $ keystone user-create --name blktest1 --pass blkpwd
  +--+--+
  | Property |  Value   |
  +--+--+
  |  email   |  |
  | enabled  |   True   |
  |id| 3b71182dc36e45c6be4733d508201694 |
  |   name   | blktest1 |
  +--+--+

  $ keystone user-role-add --user blktest1 --role admin --tenant service
  (no output)

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+---+--+--+
  |id|  name | user_id  
|tenant_id |
  
+--+---+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 
| 5b0af1d5013746b286b0d650da73be57 |
  
+--+---+--+--+

  3) Remove a role from that user that they don't have (using anotherrole
  here since devstack sets it up):

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - Expected to fail with 404, but it doesn't!

  4) List roles as that user:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+-+--+--+
  |id| name| user_id
  |tenant_id |
  
+--+-+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b |admin| 
3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  | afe23e7955704ccfad803b4a104b28a7 | anotherrole | 
3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  
+--+-+--+--+

  - Expected to not include the role that was just removed!

  5) Remove the role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - No errors, which I guess is expected since list just said they had
  the role...

  6) List roles, and now it's gone:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+---+--+--+
  |id|  name | user_id  
|tenant_id |
  
+--+---+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 
| 5b0af1d5013746b286b0d650da73be57 |
  
+--+---+--+--+

  7) Remove role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-remove --user blktest1 --role anotherrole --tenant service
  Could not find user, 3b71182dc36e45c6be4733d508201694. (HTTP 404)

  - Strangely says user not found rather than role not assigned.
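
  A generic python-ldap sketch (not keystone's actual assignment backend) of
  the defensive behaviour the bug calls for: verify the role value is present
  on the tenant entry before deleting it, and never fall through to an "add".
  The attribute and helper names here are assumptions for illustration.

    import ldap

    def remove_role_assignment(conn, tenant_dn, role_attr, role_value):
        entry = conn.search_s(tenant_dn, ldap.SCOPE_BASE, attrlist=[role_attr])
        current_values = entry[0][1].get(role_attr, [])
        if role_value not in current_values:
            # Surface "role not assigned" instead of silently (re)creating it.
            raise LookupError('%s is not assigned on %s' % (role_value, tenant_dn))
        conn.modify_s(tenant_dn, [(ldap.MOD_DELETE, role_attr, role_value)])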

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242855/+subscriptions

[Yahoo-eng-team] [Bug 1239709] Re: NovaObject does not properly honor VERSION

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239709

Title:
  NovaObject does not properly honor VERSION

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The base object infrastructure has been comparing Object.version
  instead of the Object.VERSION that *all* the objects have been setting
  and incrementing when changes have been made. Since the base object
  defined a .version, and that was used to determine the actual version
  of an object, the VERSION defined by each object was effectively ignored.

  All systems in the wild currently running broken code are sending
  version '1.0' for all of their objects. The fix is to change the base
  object infrastructure to properly examine, compare and send
  Object.VERSION.

  Impact should be minimal at this point, but getting systems patched as
  soon as possible will be important going forward.
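
  A simplified sketch (not the real nova object registry) of the bug: the base
  infrastructure serialized a stale base-class "version" attribute instead of
  the VERSION that each subclass bumps, so every object went out as '1.0'. The
  primitive keys shown are the usual nova_object.* fields.

    class NovaObject(object):
        VERSION = '1.0'   # every subclass overrides and increments this

        def obj_to_primitive(self):
            return {'nova_object.name': self.__class__.__name__,
                    # Correct behaviour: send the subclass's VERSION; sending a
                    # base-class .version here is what pinned everything to 1.0.
                    'nova_object.version': self.VERSION}

    class Instance(NovaObject):
        VERSION = '1.1'

    assert Instance().obj_to_primitive()['nova_object.version'] == '1.1'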

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242597] Re: [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens (CVE-2013-6391)

2013-12-16 Thread Alan Pevec
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242597

Title:
  [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens
  (CVE-2013-6391)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  So I finally got around to investigating the scenario I mentioned in
  https://review.openstack.org/#/c/40444/, and unfortunately it seems
  that the ec2tokens API does indeed provide a way to circumvent the
  role delegation provided by trusts, and obtain all the roles of the
  trustor user, not just those explicitly delegated.

  Steps to reproduce:
  - Trustor creates a trust delegating a subset of roles
  - Trustee gets a token scoped to that trust
  - Trustee creates an ec2-keypair
  - Trustee makes a request to the ec2tokens API, to validate a signature 
created with the keypair
  - ec2tokens API returns a new token, which is not scoped to the trust and 
enables access to all the trustor's roles.

  I can provide some test code which demonstrates the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238374] Re: TypeError in periodic task 'update_available_resource'

2013-12-16 Thread Alan Pevec
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238374

Title:
  TypeError in periodic task 'update_available_resource'

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  This occurs while I am creating an instance under my devstack env:

  2013-10-11 02:56:29.374 ERROR nova.openstack.common.periodic_task [-] Error 
during ComputeManager.update_available_resource: 'NoneType' object is not 
iterable
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task Traceback 
(most recent call last):
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/openstack/common/periodic_task.py, line 180, in 
run_periodic_tasks
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/compute/manager.py, line 4859, in 
update_available_resource
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 246, in inner
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task return 
f(*args, **kwargs)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/compute/resource_tracker.py, line 313, in 
update_available_resource
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task 
self.pci_tracker.clean_usage(instances, migrations, orphans)
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/nova/nova/pci/pci_manager.py, line 285, in clean_usage
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task for dev 
in self.claims.pop(uuid):
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task TypeError: 
'NoneType' object is not iterable
  2013-10-11 02:56:29.374 TRACE nova.openstack.common.periodic_task
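
  A minimal sketch (assumed data shape) of the TypeError: an instance with no
  PCI devices has a None (or missing) claims entry, and iterating over the
  result of pop() then fails; a safe default avoids it.

    def clean_usage_sketch(claims, instance_uuid):
        freed = []
        # Treat a missing or None entry as "no devices claimed" instead of
        # iterating over None (the TypeError in the traceback above).
        for dev in (claims.pop(instance_uuid, None) or []):
            freed.append(dev)   # stand-in for releasing the PCI device
        return freed

    print(clean_usage_sketch({'fa0cee4b': None}, 'fa0cee4b'))   # [] instead of TypeError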

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

