[Yahoo-eng-team] [Bug 1373756] [NEW] Unique check in allowed address pair's extension does not work well

2014-09-25 Thread Wei Wang
Public bug reported:

Test this case:

Assume a port's mac_address is 12:34:56:78:aa:bb

Then put these in the allowed address pairs:
[{ip_address: 10.0.0.1},
 {ip_address: 10.0.0.2,
   mac_address: 12:34:56:78:aa:bb}]

This passes the extension's validator, but causes an error in the db, because
mac_address is None at the extension layer but is converted to the
port's real mac_address in the db.


Unit test code:

def test_update_add_none_and_own_mac_address_pairs(self):
    with self.network() as net:
        res = self._create_port(self.fmt, net['network']['id'])
        port = self.deserialize(self.fmt, res)
        mac_address = port['port']['mac_address']
        address_pairs = [{'ip_address': '10.0.0.1'},
                         {'mac_address': mac_address,
                          'ip_address': '10.0.0.1'}]
        update_port = {'port': {addr_pair.ADDRESS_PAIRS:
                                address_pairs}}
        req = self.new_update_request('ports', update_port,
                                      port['port']['id'])
        res = req.get_response(self.api)
        self.assertEqual(res.status_int, 400)
        self._delete('ports', port['port']['id'])
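
A possible fix is to resolve the implicit MAC before the uniqueness check, so the validator sees the same pairs the db will store. This is only a sketch; `resolve_address_pairs` is a hypothetical helper, not Neutron's actual code:

```python
def resolve_address_pairs(pairs, port_mac_address):
    """Substitute the port's real MAC for pairs that omit mac_address,
    then reject duplicates, mirroring what the db layer will store."""
    seen = set()
    resolved = []
    for pair in pairs:
        mac = pair.get('mac_address') or port_mac_address
        key = (mac, pair['ip_address'])
        if key in seen:
            # Fail at validation time (a 400 to the caller) instead of
            # hitting the db unique constraint later.
            raise ValueError("duplicate address pair: %s %s" % key)
        seen.add(key)
        resolved.append({'mac_address': mac,
                         'ip_address': pair['ip_address']})
    return resolved
```

With this normalization, the pair list from the unit test above would be rejected by the extension instead of erroring in the db.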

** Affects: neutron
 Importance: Undecided
 Assignee: Wei Wang (damon-devops)
 Status: New

** Description changed:

  Test this case:
  
  Assume a port's mac_address is 12:34:56:78:aa:bb
+ 
  Then put these to allowed address pair:
  [{ip_address: 10.0.0.1},
-  {ip_address: 10.0.0.2,
-mac_address: 12:34:56:78:aa:bb}]
+  {ip_address: 10.0.0.2,
+    mac_address: 12:34:56:78:aa:bb}]
  
  This can pass in extension's validator, but will cause error in db, for 
mac_address is None in extension, but conver to
  port's real mac_address in db.
  
+ 
  Unit test code:
  
- def test_update_add_none_and_own_mac_address_pairs(self):
- with self.network() as net:
- res = self._create_port(self.fmt, net['network']['id'])
- port = self.deserialize(self.fmt, res)
- mac_address = port['port']['mac_address']
- address_pairs = [{'ip_address': '10.0.0.1'},
-  {'mac_address': mac_address,
-   'ip_address': '10.0.0.1'}]
- update_port = {'port': {addr_pair.ADDRESS_PAIRS:
- address_pairs}}
- req = self.new_update_request('ports', update_port,
-   port['port']['id'])
- res = req.get_response(self.api)
- self.assertEqual(res.status_int, 400)
- self._delete('ports', port['port']['id'])
+ def test_update_add_none_and_own_mac_address_pairs(self):
+ with self.network() as net:
+ res = self._create_port(self.fmt, net['network']['id'])
+ port = self.deserialize(self.fmt, res)
+ mac_address = port['port']['mac_address']
+ address_pairs = [{'ip_address': '10.0.0.1'},
+  {'mac_address': mac_address,
+   'ip_address': '10.0.0.1'}]
+ update_port = {'port': {addr_pair.ADDRESS_PAIRS:
+ address_pairs}}
+ req = self.new_update_request('ports', update_port,
+   port['port']['id'])
+ res = req.get_response(self.api)
+ self.assertEqual(res.status_int, 400)
+ self._delete('ports', port['port']['id'])

** Changed in: neutron
 Assignee: (unassigned) => Wei Wang (damon-devops)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373756

Title:
  Unique check in allowed address pair's extension does not work well

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Test this case:

  Assume a port's mac_address is 12:34:56:78:aa:bb

  Then put these in the allowed address pairs:
  [{ip_address: 10.0.0.1},
   {ip_address: 10.0.0.2,
     mac_address: 12:34:56:78:aa:bb}]

  This passes the extension's validator, but causes an error in the db,
  because mac_address is None at the extension layer but is converted to the
  port's real mac_address in the db.

  
  Unit test code:

  def test_update_add_none_and_own_mac_address_pairs(self):
      with self.network() as net:
          res = self._create_port(self.fmt, net['network']['id'])
          port = self.deserialize(self.fmt, res)
          mac_address = port['port']['mac_address']
          address_pairs = [{'ip_address': '10.0.0.1'},
                           {'mac_address': mac_address,
                            'ip_address': '10.0.0.1'}]
          update_port = {'port': {addr_pair.ADDRESS_PAIRS:
                                  address_pairs}}
          req = self.new_update_request('ports', update_port,
    

[Yahoo-eng-team] [Bug 1373761] [NEW] Better error message for attach/detach interface failed

2014-09-25 Thread Alex Xu
Public bug reported:

Sometimes we see attach/detach interface fail, but we don't log the
detailed info, which makes it hard to debug.

for example:
http://logs.openstack.org/02/111802/1/gate/gate-tempest-dsvm-neutron/eff16a6/logs/screen-n-cpu.txt.gz?#_2014-09-24_07_54_12_206

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373761

Title:
  Better error message for attach/detach interface failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Sometimes we see attach/detach interface fail, but we don't log the
  detailed info, which makes it hard to debug.

  for example:
  
http://logs.openstack.org/02/111802/1/gate/gate-tempest-dsvm-neutron/eff16a6/logs/screen-n-cpu.txt.gz?#_2014-09-24_07_54_12_206

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373777] [NEW] race condition when create Nova default security group for same tenant

2014-09-25 Thread ChangBo Guo(gcb)
Public bug reported:

Booting an instance with security_group_api=nova will first check for Nova's
default security group.

There is a race condition when booting several instances for the same tenant
for the first time. With the config option security_group_api=nova, the
current logic is:

1. get the default security group
2. if it is not found, create one.

Two threads may both reach step 2, and then an exception.SecurityGroupExists
is raised.

Note: this only occurs randomly when booting several instances for the same
tenant for the first time.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373777

Title:
  race condition when create Nova default security group for same tenant

Status in OpenStack Compute (Nova):
  New

Bug description:
  Boot one instance with security_group_api=nova , it will check  Nova's
  default security  group firstly,

  There is race condition that boot some instances at the first time for
  same tenant.  with config option  security_group_api=nova  , current
  logic is :

  1. get default security group
  2. if not get it , create one.

  two threads may in step2 , than a exception.SecurityGroupExists is
  raised.

  Note: this only occure  randomly  when boot some instances at the
  first time for same tenant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373774] [NEW] security groups are not attached to an instance if port-id is specified during boot

2014-09-25 Thread Oleg Bondarev
Public bug reported:

Creation of a server with the command
‘nova boot --image image --flavor m1.medium --nic port-id=port-id
--security-groups sec_grp name’
fails to attach the security group to the port/instance. The response payload
has the security group added, but only the default security group is attached
to the instance.
A separate action has to be performed on the instance to add sec_grp, and that
succeeds. Supplying the same with ‘--nic net-id=net-id’ works as expected.

** Affects: nova
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373774

Title:
  security groups are not attached to an instance if port-id is
  specified during boot

Status in OpenStack Compute (Nova):
  New

Bug description:
  Creation of a server with the command
  ‘nova boot --image image --flavor m1.medium --nic port-id=port-id
  --security-groups sec_grp name’
  fails to attach the security group to the port/instance. The response
  payload has the security group added, but only the default security group is
  attached to the instance.
  A separate action has to be performed on the instance to add sec_grp, and
  that succeeds. Supplying the same with ‘--nic net-id=net-id’ works as
  expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233365] Re: LDAP backend fails when connecting to Active Directory root DN

2014-09-25 Thread Morgan Fainberg
At this time, it is outside the window to add this to Havana.

** Changed in: keystone/havana
   Status: In Progress => Won't Fix

** Changed in: keystone/havana
 Assignee: Adam Young (ayoung) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1233365

Title:
  LDAP backend fails when connecting to Active Directory root DN

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Won't Fix

Bug description:
  When using the LDAP backend and connecting to Active Directory, trying
  to use the root DN (dc=example,dc=com) as the user_tree_dn (or
  tenant/role_tree_dn) fails with Authorization Failed: Unable to
  communicate with identity service: {error: {message: An
  unexpected error prevented the server from fulfilling your request.
  {'info': '04DC: LdapErr: DSID-0C0906E8, comment: In order to
  perform this operation a successful bind must be completed on the
  connection., data 0, v1db1', 'desc': 'Operations error'}, code:
  500, title: Internal Server Error}}. (HTTP 500).

  This is because python-ldap chases all referrals with anonymous
  access, which is disabled by default in AD for security reasons.
  Adding a line in core.py under ldap.initialize to not chase referrals
  (self.conn.set_option(ldap.OPT_REFERRALS, 0)) gets around this error,
  but then we get AttributeError: 'list' object has no attribute
  'iteritems' in search_s. This is because while the referrals aren't
  chased, they still show up in the results list. The keystone code
  can't seem to handle the format the referrals come in. I was able to
  work around this by adding an if statement before o.append to ignore
  the referral results (if type(dn) is not NoneType). I also added from
  types import * in the beginning of core.py.

  I'm sure this isn't the best workaround for everybody, but in general
  I think there should be an option in keystone.conf to enable or
  disable chasing of referrals. If it is disabled, then the previous
  ldap option should be set and something should be done to remove the
  referrals from the results list.
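
The referral-stripping half of that workaround needs no `from types import *`: entries returned by python-ldap's `search_s` are `(dn, attrs)` tuples, and referrals arrive with `dn` set to `None`. A sketch of the filtering (pure Python; the `set_option` call in the docstring is the python-ldap API mentioned above):

```python
def strip_referrals(results):
    """Drop referral entries from an LDAP search result list.

    When referral chasing is disabled via
    conn.set_option(ldap.OPT_REFERRALS, 0), python-ldap still returns
    referrals in the result list as (None, [url]) tuples; keeping only
    entries with a real DN avoids the "'list' object has no attribute
    'iteritems'" failure described above.
    """
    return [(dn, attrs) for dn, attrs in results if dn is not None]
```

This keeps the type check to a simple `dn is not None` instead of comparing against NoneType.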

  Edit: I'm using the Grizzly packages from the Ubuntu Cloud Archive on
  Ubuntu 12.04.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1233365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373794] [NEW] Run integration_tests cause Internal Server Error

2014-09-25 Thread Hong-Guang
Public bug reported:

Reproduce steps
1: git clone git://git.openstack.org/openstack/horizon.git
2: cd horizon && bash -xxx ./run_tests.sh --integration 
openstack_dashboard.test.integration_tests.tests.test_user_settings > a.out 
2>&1

3:Internal Server Error

The server encountered an internal error or misconfiguration and was
unable to complete your request.

Please contact the server administrator at [no address given] to inform
them of the time this error occurred, and the actions you performed just
before this error.

More information about this error may be available in the server error
log.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Testing log output
   https://bugs.launchpad.net/bugs/1373794/+attachment/4214530/+files/a.out

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373794

Title:
  Run integration_tests causes Internal Server Error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Reproduce steps
  1: git clone git://git.openstack.org/openstack/horizon.git
  2: cd horizon && bash -xxx ./run_tests.sh --integration 
openstack_dashboard.test.integration_tests.tests.test_user_settings > a.out 
2>&1

  3:Internal Server Error

  The server encountered an internal error or misconfiguration and was
  unable to complete your request.

  Please contact the server administrator at [no address given] to
  inform them of the time this error occurred, and the actions you
  performed just before this error.

  More information about this error may be available in the server error
  log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373816] [NEW] _get_security_groups_on_port tries to get [0] on a set type

2014-09-25 Thread Jacek Świderski
Public bug reported:

_get_security_groups_on_port first checks that all security groups on the port 
belong to the tenant - and if any do not fulfill this requirement, it tries to 
raise SecurityGroupNotFound but fails with:
TypeError: 'set' object does not support indexing

    port_sg_missing = requested_groups - valid_groups
    if port_sg_missing:
        raise ext_sg.SecurityGroupNotFound(id=str(port_sg_missing[0]))

One issue is the failure itself - but besides that, I think that message = 
_("Security group %(id)s does not exist"), where id would be a randomly chosen 
missing id, isn't really clear in this context, and a new exception should be 
created for this case.
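
Python sets are unordered and do not support indexing, so `port_sg_missing[0]` raises the TypeError above; `next(iter(...))` picks an arbitrary element instead. A small sketch of the corrected selection (illustrative helper, not the actual Neutron patch):

```python
def first_missing_group(requested_groups, valid_groups):
    """Return an arbitrary missing group id, or None if all are valid."""
    port_sg_missing = set(requested_groups) - set(valid_groups)
    if port_sg_missing:
        # next(iter(...)) works on a set; subscripting with [0] does not.
        return next(iter(port_sg_missing))
    return None
```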

** Affects: neutron
 Importance: Undecided
 Assignee: Jacek Świderski (jacek-swiderski)
 Status: New


** Tags: sg-fw

** Changed in: neutron
 Assignee: (unassigned) => Jacek Świderski (jacek-swiderski)

** Tags added: sg-fw

** Description changed:

  _get_security_groups_on_port checks before that all security groups on port 
belong to tenant - and if there are any that don't fulfill this requirement it 
tries to raise SecurityGroupNotFound but fails with :
  TypeError: 'set' object does not support indexing
  
- One thing is the fail itself - but beside I think that message =
- _(Security group %(id)s does not exist), where id would be a randomly
- chosen missing id isn't really clear in this context and new exception
- should be created for this case.
+ port_sg_missing = requested_groups - valid_groups
+ if port_sg_missing:
+ raise ext_sg.SecurityGroupNotFound(id=str(port_sg_missing[0]))
+ 
+ 
+ One thing is the fail itself - but beside I think that message = _(Security 
group %(id)s does not exist), where id would be a randomly chosen missing id 
isn't really clear in this context and new exception should be created for this 
case.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373816

Title:
  _get_security_groups_on_port tries to get [0] on a set type

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  _get_security_groups_on_port first checks that all security groups on the
  port belong to the tenant - and if any do not fulfill this requirement, it
  tries to raise SecurityGroupNotFound but fails with:
  TypeError: 'set' object does not support indexing

      port_sg_missing = requested_groups - valid_groups
      if port_sg_missing:
          raise ext_sg.SecurityGroupNotFound(id=str(port_sg_missing[0]))

  One issue is the failure itself - but besides that, I think that message =
  _("Security group %(id)s does not exist"), where id would be a randomly
  chosen missing id, isn't really clear in this context, and a new exception
  should be created for this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373817] [NEW] ODL MD should not reraise ODL exceptions as they are

2014-09-25 Thread Cédric OLLIVIER
Public bug reported:

The ODL MD re-raises errors returned by ODL in single operations and in full 
synchronization.
In both cases, the ODL MD could raise more appropriate errors (i.e. server 
errors instead of client errors).

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: icehouse-backport-potential opendaylight

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373817

Title:
  ODL MD should not reraise ODL exceptions as they are

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The ODL MD re-raises errors returned by ODL in single operations and in full
  synchronization.
  In both cases, the ODL MD could raise more appropriate errors (i.e. server
  errors instead of client errors).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373820] [NEW] --sort-key option for neutron cli does not always work

2014-09-25 Thread Deepak Jadiya
Public bug reported:

$ neutron security-group-list --sort-key name
+--------------------------------------+-----------------+-----------------------+
| id                                   | name            | description           |
+--------------------------------------+-----------------+-----------------------+
| 3d2b51cf-8f30-4be8-b720-364e62c0ca45 | als-Core-Router | Inbound User Traffic  |
| 9d9598da-27f8-46fc-9f5a-72e0968a2e2c | als-Core-Router | Inbound User Traffic  |
| eb39ab0f-3974-4fa5-a7ad-6e94caec29c7 | als-Internal    | Intra-Cluster Traffic |
+--------------------------------------+-----------------+-----------------------+

However, it does not work with other neutron commands, although the help
for them describes it.

For example, --sort-key on neutron net-list or floatingip-list does not work:

$ neutron net-list --sort-key name
+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 0d520976-480d-4e56-8dc9-f550eab660ee | SVC     | f06cdf57-dff4-4a93-823f-39fa534f2409 10.9.236.192/26 |
| 123f7ac9-f357-407a-be27-cacde4f62476 | umanet  | 16232cfa-520c-4f33-8db2-a6754729dbe2 198.51.100.0/24 |
| 18f732b3-1242-46ce-beb7-875703c10c3d | Mnet    | 4091477b-6961-4cbe-b08a-e22a0ac6ab25 10.0.6.0/24     |
| 20d9167b-a5a2-49c9-adb8-b12cbf9ca73c | ext-net | 21f4bc85-3ec8-4c16-86f6-0a22a8d4b6ef 10.9.236.0/26   |
+--------------------------------------+---------+------------------------------------------------------+

$ neutron floatingip-list --sort-key floating_ip_address
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 0541b567-40ba-451b-93d5-27886eb4     | 172.17.0.25      | 10.9.236.31         | 6e3ae31b-1ebd-42d9-8f8e-c3e057f5736f |
| 08bf0e75-7307-4e23-95bf-6ca1705c406d | 198.51.100.5     | 10.9.236.4          | a70cd4b7-2657-4cbc-8601-5e936cfecfae |
| 16d60b41-e113-4bf2-8f46-1f06f2545639 | 10.0.7.2         | 10.9.236.56         | 7fe623b2-ac6e-465d-a916-75febb32e1a9 |
| 251ff272-308c-4fc3-826e-d08d9ab68495 | 172.17.0.10      | 10.9.236.8          | 0b42886f-7c50-4feb-9d31-b2e10a1a4f5d |
| 2bca19f3-b991-486e-99c7-771994a1347e |                  | 10.9.236.10         |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+
** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373820

Title:
  --sort-key option for neutron cli does not always work

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  $ neutron security-group-list --sort-key name
  +--------------------------------------+-----------------+-----------------------+
  | id                                   | name            | description           |
  +--------------------------------------+-----------------+-----------------------+
  | 3d2b51cf-8f30-4be8-b720-364e62c0ca45 | als-Core-Router | Inbound User Traffic  |
  | 9d9598da-27f8-46fc-9f5a-72e0968a2e2c | als-Core-Router | Inbound User Traffic  |
  | eb39ab0f-3974-4fa5-a7ad-6e94caec29c7 | als-Internal    | Intra-Cluster Traffic |
  +--------------------------------------+-----------------+-----------------------+

  However, it does not work with other neutron commands, although the help
  for them describes it.

  For example, --sort-key on neutron net-list or floatingip-list does not
  work:

  $ neutron net-list --sort-key name
  +--------------------------------------+---------+------------------------------------------------------+
  | id                                   | name    | subnets                                              |
  +--------------------------------------+---------+------------------------------------------------------+
  | 0d520976-480d-4e56-8dc9-f550eab660ee | SVC     | f06cdf57-dff4-4a93-823f-39fa534f2409 10.9.236.192/26 |
  | 123f7ac9-f357-407a-be27-cacde4f62476 | umanet  | 16232cfa-520c-4f33-8db2-a6754729dbe2 198.51.100.0/24 |
  | 18f732b3-1242-46ce-beb7-875703c10c3d | Mnet    | 4091477b-6961-4cbe-b08a-e22a0ac6ab25 10.0.6.0/24     |
  | 20d9167b-a5a2-49c9-adb8-b12cbf9ca73c | ext-net | 21f4bc85-3ec8-4c16-86f6-0a22a8d4b6ef 10.9.236.0/26   |
  +--------------------------------------+---------+------------------------------------------------------+

  $ neutron floatingip-list --sort-key floating_ip_address
  +--------------------------------------+------------------+---------------------+--------------------------------------+
  | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
  +--------------------------------------+------------------+---------------------+--------------------------------------+
  | 0541b567-40ba-451b-93d5-27886eb4     | 172.17.0.25      | 10.9.236.31         | 6e3ae31b-1ebd-42d9-8f8e-c3e057f5736f |
  | 08bf0e75-7307-4e23-95bf-6ca1705c406d | 198.51.100.5     | 10.9.236.4          | a70cd4b7-2657-4cbc-8601-5e936cfecfae |
  

[Yahoo-eng-team] [Bug 1373832] [NEW] The source group of security group does not work

2014-09-25 Thread Ken'ichi Ohmichi
Public bug reported:

I created a security group with the other security group as the source
group, and booted a server with the security group:

$ nova secgroup-create source-any "secgroup for any sources"
$ nova secgroup-add-rule source-any tcp 1 65535 0.0.0.0/0
$
$ nova secgroup-create accept-ssh "secgroup for ssh"
$ nova secgroup-add-group-rule accept-ssh source-any tcp 22 22
$
$ nova boot --flavor m1.nano --security-groups accept-ssh --image 
cirros-0.3.2-x86_64-uec vm01

but I could not access the server with SSH.

According to 
http://docs.openstack.org/developer/nova/nova.concepts.html#concept-security-groups
 , the source group is
considered as CIDR of acceptable source addresses and we can reuse it for new 
security groups.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373832

Title:
  The source group of security group does not work

Status in OpenStack Compute (Nova):
  New

Bug description:
  I created a security group with the other security group as the source
  group, and booted a server with the security group:

  $ nova secgroup-create source-any "secgroup for any sources"
  $ nova secgroup-add-rule source-any tcp 1 65535 0.0.0.0/0
  $
  $ nova secgroup-create accept-ssh "secgroup for ssh"
  $ nova secgroup-add-group-rule accept-ssh source-any tcp 22 22
  $
  $ nova boot --flavor m1.nano --security-groups accept-ssh --image 
cirros-0.3.2-x86_64-uec vm01

  but I could not access the server with SSH.

  According to 
http://docs.openstack.org/developer/nova/nova.concepts.html#concept-security-groups
 , the source group is
  considered as CIDR of acceptable source addresses and we can reuse it for new 
security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373852] [NEW] unable to boot nova instance from boot volume id

2014-09-25 Thread satyadev svn
Public bug reported:

Test steps:
1) create a volume from an image
2) boot an instance from the above volume


ssatya@juno:~/juno/devstack$ nova image-list
+--------------------------------------+------------------------+--------+--------+
| ID                                   | Name                   | Status | Server |
+--------------------------------------+------------------------+--------+--------+
| b99f9093-cc69-4a2a-a130-49005c31fd1f | cirros-0.3.2-i386-disk | ACTIVE |        |
+--------------------------------------+------------------------+--------+--------+
ssatya@juno:~/juno/devstack$ cinder create --image-id 
b99f9093-cc69-4a2a-a130-49005c31fd1f 1 


ssatya@juno:~/juno/devstack$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 664c8014-9863-488c-9a9f-9f60f19ac609 | available | None |  1   |     None    |   true   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
ssatya@juno:~/juno/devstack$ nova boot --boot-volume 
664c8014-9863-488c-9a9f-9f60f19ac609 testboot3 --flavor 1
ssatya@juno:~/juno/devstack$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks         |
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| d9f121e0-f76d-4a94-83e9-d9ccf7168c9b | testboot3 | ERROR  | -          | NOSTATE     | private=10.0.0.4 |
+--------------------------------------+-----------+--------+------------+-------------+------------------+


2014-09-25 15:05:53.073 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock update_usage from (pid=12624) inner /opt/stack/nova/nova/openstack/common/lockutils.py:271
2014-09-25 15:05:53.126 INFO nova.scheduler.client.report [-] Compute_service record updated for ('juno', domain-c7(cls))
2014-09-25 15:05:53.126 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore compute_resources from (pid=12624) lock /opt/stack/nova/nova/openstack/common/lockutils.py:238
2014-09-25 15:05:53.126 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released update_usage from (pid=12624) inner /opt/stack/nova/nova/openstack/common/lockutils.py:275
2014-09-25 15:05:56.197 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage from (pid=12624) run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:193
2014-09-25 15:05:56.198 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call bound method Service.periodic_tasks of nova.service.Service object at 0x2aa4e90 sleeping for 1.97 seconds from (pid=12624) _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:132
2014-09-25 15:05:56.294 DEBUG nova.volume.cinder [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] Cinderclient connection created using URL: http://10.112.185.114:8776/v1/30ea152a107248bba878a1c8d31467b6 from (pid=12624) get_cinder_client_version /opt/stack/nova/nova/volume/cinder.py:255
2014-09-25 15:05:56.955 ERROR nova.compute.manager [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Instance failed to spawn
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Traceback (most recent call last):
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File /opt/stack/nova/nova/compute/manager.py, line , in _build_resources
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     yield resources
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File /opt/stack/nova/nova/compute/manager.py, line 2101, in _build_and_run_instance
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     block_device_info=block_device_info)
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 447, in spawn
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     admin_password, network_info, block_device_info)
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 437, in spawn
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     vi = 

[Yahoo-eng-team] [Bug 1373851] [NEW] security groups db queries load excessive data

2014-09-25 Thread Kevin Benton
Public bug reported:

The security groups db queries are loading extra data from the ports
table that is unnecessarily hindering performance.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373851

Title:
  security groups db queries load excessive data

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The security groups db queries are loading extra data from the ports
  table that is unnecessarily hindering performance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373478] Re: filter scheduler makes invalid assumption of monotonicity

2014-09-25 Thread Sean Dague
So this is an interesting possible feature, but that means it should
come in via a spec, not really via launchpad.

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373478

Title:
  filter scheduler makes invalid assumption of monotonicity

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The current filter scheduler handles the scheduling of a homogeneous
  batch of N instances with a loop that assumes that a host ruled out in
  one iteration cannot be desirable in a later iteration --- but that
  is a false assumption.

  Consider the case of a filter whose purpose is to achieve balance
  across some sort of areas.  These might be AZs, host aggregates,
  racks, whatever.  Consider a request to schedule 4 identical
  instances; suppose that there are two hosts, one in each of two
  different areas, initially hosting nothing.  For the first iteration,
  both hosts pass this filter.  One gets picked, call it host A.  On the
  second iteration, only the other host (call it B) passes the filter.
  So the second instance goes on B.  On the third iteration, both hosts
  would pass the filter but the filter is only asked about host B.  So
  the third instance goes on B.  On the fourth iteration, host B is
  unacceptable but that is the only host about which the filter is
  asked.  So the scheduling fails with a complaint about no acceptable
  host found.
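
  The failure mode can be reproduced with a toy model of the loop
  (illustrative only, not Nova's actual scheduler code): hosts filtered out
  in one iteration are never reconsidered, so a balance filter over two
  areas fails on the fourth instance exactly as described.

```python
def schedule(hosts, area_of, num_instances):
    """Toy filter-scheduler loop with the monotonicity assumption."""
    counts = {area: 0 for area in set(area_of.values())}
    candidates = list(hosts)
    placements = []
    for _ in range(num_instances):
        # Balance filter: a host passes only if its area currently has
        # the fewest scheduled instances.
        passing = [h for h in candidates
                   if counts[area_of[h]] == min(counts.values())]
        if not passing:
            raise RuntimeError("no acceptable host found")
        host = passing[0]
        placements.append(host)
        counts[area_of[host]] += 1
        # The flawed assumption: hosts ruled out in this iteration are
        # never reconsidered for later instances.
        candidates = passing
    return placements
```

  With hosts A and B in two different areas, scheduling 3 instances yields
  ['A', 'B', 'B'], and asking for 4 raises "no acceptable host found" even
  though placing the fourth instance on A would restore balance.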

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373865] [NEW] Refactor domain usage in test_backend

2014-09-25 Thread Henry Nash
Public bug reported:

The way test_backend uses domains leads to either many of the tests
being overridden in test_backend_ldap, or just skipped (leading to
a risk that we are not sufficiently testing certain functionality - see
bug 1373113 as an example).

There is already a construct for getting the default domain in
backend_ldap, but we should use a more flexible scheme for getting test
domains so that tests can run whether there is one domain, multiple
domains, read-only domains etc.

** Affects: keystone
 Importance: Wishlist
 Status: New


** Tags: test-improvement

** Changed in: keystone
   Importance: Undecided => Wishlist

** Tags added: test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373865

Title:
  Refactor domain usage in test_backend

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The way test_backend uses domains leads to either many of the tests
  being overridden in test_backend_ldap, or just skipped (leading
  to a risk that we are not sufficiently testing certain functionality -
  see bug 1373113 as an example).

  There is already a construct for getting the default domain in
  backend_ldap, but we should use a more flexible scheme for getting
  test domains so that tests can run whether there is one domain,
  multiple domains, read-only domains etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373865/+subscriptions



[Yahoo-eng-team] [Bug 1373868] [NEW] Should we allow all networks to use allowed address pairs?

2014-09-25 Thread Wei Wang
Public bug reported:

Now we can add an allowed address pair to every net's port if allowed
address pairs are enabled.

This will cause a security problem in a shared network, I think.

So we should add a limit for shared nets or add a config entry in neutron.conf,
so the administrator
can disable some nets' ports' allowed address pairs.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373868

Title:
  Should we allow all networks to use allowed address pairs?

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now we can add an allowed address pair to every net's port if allowed
  address pairs are enabled.

  This will cause a security problem in a shared network, I think.

  So we should add a limit for shared nets or add a config entry in
  neutron.conf, so the administrator can disable some nets' ports'
  allowed address pairs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373868/+subscriptions



[Yahoo-eng-team] [Bug 1373872] [NEW] OpenContrail neutron plugin doesn't support portbinding.vnic_type

2014-09-25 Thread Numan Siddique
Public bug reported:

OpenContrail neutron plugin is not supporting portbinding.vnic_type
during port creation. Nova expects portbindings.vnic_type.

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373872

Title:
  OpenContrail neutron plugin doesn't support portbinding.vnic_type

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  OpenContrail neutron plugin is not supporting portbinding.vnic_type
  during port creation. Nova expects portbindings.vnic_type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373872/+subscriptions



[Yahoo-eng-team] [Bug 1373832] Re: The source group of security group does not work

2014-09-25 Thread Ken'ichi Ohmichi
I'm not sure now what the right usage of this feature is.
Hopefully, fixing some typos in the documentation would be nice.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373832

Title:
  The source group of security group does not work

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Manuals:
  New

Bug description:
  I created a security group with the other security group as the source
  group, and booted a server with the security group:

  $ nova secgroup-create source-any "secgroup for any sources"
  $ nova secgroup-add-rule source-any tcp 1 65535 0.0.0.0/0
  $
  $ nova secgroup-create accept-ssh "secgroup for ssh"
  $ nova secgroup-add-group-rule accept-ssh source-any tcp 22 22
  $
  $ nova boot --flavor m1.nano --security-groups accept-ssh --image 
cirros-0.3.2-x86_64-uec vm01

  but I could not access the server with SSH.

  According to 
http://docs.openstack.org/developer/nova/nova.concepts.html#concept-security-groups
 , the source group is
  considered as CIDR of acceptable source addresses and we can reuse it for new 
security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373832/+subscriptions



[Yahoo-eng-team] [Bug 1373886] [NEW] create a simple way to add/remove policies to new role

2014-09-25 Thread Dafna Ron
Public bug reported:

I wanted to create a unique user role and add some built-in policies to it.
I can create a new role, but then discovered that instead of being able to add 
storage permissions or network permissions for a user (so specific system 
functionality) I have to build my own policies. 
I opened a bug against Horizon, but I think that for them to implement such a 
change in the UX they need keystone to do some work as well. 
What I am suggesting is that we build some default policies that would allow us 
to add a storage admin, a network admin, an instance admin and so on to a newly 
created role without asking the user to edit /etc/keystone/policy.json 
manually. 

I think adding this functionality would not only improve keystone and
make it more agile and easy to use but improve horizon as well.

*Before someone marks this as invalid I will add that I am not a coder,
and based on the community's decision to require a technical design for
any opened blueprint, I cannot open a blueprint myself :) *

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373886

Title:
  create a simple way to add/remove policies to new role

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I wanted to create a unique user role and add some built-in policies to it.
  I can create a new role, but then discovered that instead of being able to 
add storage permissions or network permissions for a user (so specific system 
functionality) I have to build my own policies. 
  I opened a bug against Horizon, but I think that for them to implement such a 
change in the UX they need keystone to do some work as well. 
  What I am suggesting is that we build some default policies that would allow 
us to add a storage admin, a network admin, an instance admin and so on to a 
newly created role without asking the user to edit /etc/keystone/policy.json 
manually. 

  I think adding this functionality would not only improve keystone and
  make it more agile and easy to use but improve horizon as well.

  *Before someone marks this as invalid I will add that I am not a coder,
  and based on the community's decision to require a technical design for
  any opened blueprint, I cannot open a blueprint myself :) *

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373886/+subscriptions



[Yahoo-eng-team] [Bug 1373927] [NEW] Inconsistent usage of Status and State in System Info panel makes them hard to translate

2014-09-25 Thread Akihiro Motoki
Public bug reported:

There are columns named State and Status in the Compute Services, Volume 
Services and Network Agents tabs in the System Information panel; however, the 
meanings of these two words are inconsistent.
This makes them hard to translate appropriately, since translations need to 
assign the same/similar words.

State in Compute Services and Volume Services and Status in Network 
Agents have the same meaning.
Status in Compute Services and Volume Services and State in Network 
Agents have the same meaning.
It is very confusing.

At least we need to use consistent terms in System Information Panel.
I would suggest to swap Status and State in Network Agents tab.


This inconsistency comes from inconsistent wording in the back-end APIs 
(nova/cinder and neutron).
nova and cinder use status to indicate the administrative mode, which takes 
Enabled/Disabled, but the Neutron API uses admin_state_up for this.
As a long-term solution, we need to resolve this kind of API inconsistency 
among projects.
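Until the back-end APIs converge, the panel essentially has to normalize the two representations itself. A minimal sketch (field names taken from the nova/cinder service API and the neutron agent API as described above; the function name is illustrative):

```python
# Map both back-end representations onto one (admin, health) pair so the
# panel can render consistent Status/State columns. nova/cinder services
# report status=enabled/disabled and state=up/down; neutron agents report
# a boolean admin_state_up and a boolean alive.

def normalize(record):
    if "admin_state_up" in record:  # neutron agent
        admin = "Enabled" if record["admin_state_up"] else "Disabled"
        health = "Up" if record.get("alive") else "Down"
    else:  # nova/cinder service
        admin = record["status"].capitalize()
        health = record["state"].capitalize()
    return admin, health
```

Both kinds of records then render with the same column semantics regardless of which API they came from.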

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

** Changed in: horizon
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

** Tags added: i18n

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373927

Title:
  Inconsistent usage of Status and State in System Info panel makes
  them hard to translate

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are columns named State and Status in the Compute Services, Volume 
Services and Network Agents tabs in the System Information panel; however, the 
meanings of these two words are inconsistent.
  This makes them hard to translate appropriately, since translations need to 
assign the same/similar words.

  State in Compute Services and Volume Services and Status in Network 
Agents have the same meaning.
  Status in Compute Services and Volume Services and State in Network 
Agents have the same meaning.
  It is very confusing.

  At least we need to use consistent terms in System Information Panel.
  I would suggest to swap Status and State in Network Agents tab.

  
  This inconsistency comes from inconsistent wording in the back-end APIs 
(nova/cinder and neutron).
  nova and cinder use status to indicate the administrative mode, which takes 
Enabled/Disabled, but the Neutron API uses admin_state_up for this.
  As a long-term solution, we need to resolve this kind of API inconsistency 
among projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373927/+subscriptions



[Yahoo-eng-team] [Bug 1373936] [NEW] Functional tests for metadata seeding for glance-manage

2014-09-25 Thread Bartosz Fic
Public bug reported:

Currently there are no functional tests for metadata seeding for glance-
manage. These tests should cover a specific lifecycle of metadata using
three API methods:

db_load_metadefs
db_unload_metadefs
db_export_metadefs

These tests concern the JSON files stored in /glance/etc/metadefs

** Affects: glance
 Importance: Undecided
 Assignee: Bartosz Fic (bartosz-fic)
 Status: New


** Tags: api metadef

** Changed in: glance
 Assignee: (unassigned) => Bartosz Fic (bartosz-fic)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1373936

Title:
  Functional tests for metadata seeding for glance-manage

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Currently there are no functional tests for metadata seeding for
  glance-manage. These tests should cover a specific lifecycle of
  metadata using three API methods:

  db_load_metadefs
  db_unload_metadefs
  db_export_metadefs

  These tests concern the JSON files stored in /glance/etc/metadefs

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1373936/+subscriptions



[Yahoo-eng-team] [Bug 1348204] Re: test_encrypted_cinder_volumes_cryptsetup times out waiting for volume to be available

2014-09-25 Thread Sean Dague
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348204

Title:
  test_encrypted_cinder_volumes_cryptsetup times out waiting for volume
  to be available

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-
  full/168a5dd/console.html#_2014-07-24_01_07_09_115

  2014-07-24 01:07:09.116 | 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,image,volume]
  2014-07-24 01:07:09.116 | 

  2014-07-24 01:07:09.116 | 
  2014-07-24 01:07:09.116 | Captured traceback:
  2014-07-24 01:07:09.117 | ~~~
  2014-07-24 01:07:09.117 | Traceback (most recent call last):
  2014-07-24 01:07:09.117 |   File tempest/test.py, line 128, in wrapper
  2014-07-24 01:07:09.117 | return f(self, *func_args, **func_kwargs)
  2014-07-24 01:07:09.117 |   File 
tempest/scenario/test_encrypted_cinder_volumes.py, line 63, in 
test_encrypted_cinder_volumes_cryptsetup
  2014-07-24 01:07:09.117 | self.attach_detach_volume()
  2014-07-24 01:07:09.117 |   File 
tempest/scenario/test_encrypted_cinder_volumes.py, line 49, in 
attach_detach_volume
  2014-07-24 01:07:09.117 | self.nova_volume_detach()
  2014-07-24 01:07:09.117 |   File tempest/scenario/manager.py, line 757, 
in nova_volume_detach
  2014-07-24 01:07:09.117 | self._wait_for_volume_status('available')
  2014-07-24 01:07:09.117 |   File tempest/scenario/manager.py, line 710, 
in _wait_for_volume_status
  2014-07-24 01:07:09.117 | self.volume_client.volumes, self.volume.id, 
status)
  2014-07-24 01:07:09.118 |   File tempest/scenario/manager.py, line 230, 
in status_timeout
  2014-07-24 01:07:09.118 | not_found_exception=not_found_exception)
  2014-07-24 01:07:09.118 |   File tempest/scenario/manager.py, line 296, 
in _status_timeout
  2014-07-24 01:07:09.118 | raise exceptions.TimeoutException(message)
  2014-07-24 01:07:09.118 | TimeoutException: Request timed out
  2014-07-24 01:07:09.118 | Details: Timed out waiting for thing 
4ef6a14a-3fce-417f-aa13-5aab1789436e to become available

  I've actually been seeing this out of tree in our internal CI also but
  thought it was just us or our slow VMs, this is the first I've seen it
  upstream.

  From the traceback in the console log, it looks like the volume does
  get to available status because it doesn't get out of that state when
  tempest is trying to delete the volume on tear down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348204/+subscriptions



[Yahoo-eng-team] [Bug 1368910] Re: intersphinx requires network access which sometimes fails

2014-09-25 Thread Ben Swartzlander
** Changed in: manila
   Status: Fix Committed => Fix Released

** Changed in: manila
Milestone: None => juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368910

Title:
  intersphinx requires network access which sometimes fails

Status in Cinder:
  In Progress
Status in Manila:
  Fix Released
Status in OpenStack Compute (Nova):
  In Progress
Status in The Oslo library incubator:
  Fix Released
Status in python-manilaclient:
  Fix Committed

Bug description:
  The intersphinx module requires internet access, and periodically
  causes docs jobs to fail.

  This module also prevents docs from being built without internet
  access.

  Since we don't actually use intersphinx for much (if anything), lets
  just remove it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368910/+subscriptions



[Yahoo-eng-team] [Bug 1373927] Re: Inconsistent usage of Status and State in System Info panel makes them hard to translate

2014-09-25 Thread Akihiro Motoki
Hmm, my previous comment seems wrong.

Anyway, consistency is important, and the majority of tabs
use a Status column with Enabled/Disabled.
I will swap Status and State in Network Agents.

** Changed in: horizon
   Importance: Medium => Undecided

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
 Assignee: Akihiro Motoki (amotoki) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373927

Title:
  Inconsistent usage of Status and State in System Info panel makes
  them hard to translate

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  There are columns named State and Status in the Compute Services, Volume 
Services and Network Agents tabs in the System Information panel; however, the 
meanings of these two words are inconsistent.
  This makes them hard to translate appropriately, since translations need to 
assign the same/similar words.

  State in Compute Services and Volume Services and Status in Network 
Agents have the same meaning.
  Status in Compute Services and Volume Services and State in Network 
Agents have the same meaning.
  It is very confusing.

  At least we need to use consistent terms in System Information Panel.
  I would suggest to swap Status and State in Network Agents tab.

  
  This inconsistency comes from inconsistent wording in the back-end APIs 
(nova/cinder and neutron).
  nova and cinder use status to indicate the administrative mode, which takes 
Enabled/Disabled, but the Neutron API uses admin_state_up for this.
  As a long-term solution, we need to resolve this kind of API inconsistency 
among projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373927/+subscriptions



[Yahoo-eng-team] [Bug 1373949] [NEW] live-migration fails because of CPU feature invtsc

2014-09-25 Thread Daniel Lundqvist
Public bug reported:

Hi!

I'm running the latest git nova code from branch stable/icehouse and have 
patched it to get rid of the duplicate feature bug
(commit 0f28fbef8bedeafca0bf488b84f783568fefc960).
I'm running libvirt 1.2.8 and qemu 2.0.2.

When I issue the command to do a live migration it fails with this stack
trace:

2014-09-25 13:51:46.837 16995 ERROR nova.virt.libvirt.driver [-] [instance: 
3b8dbddc-ba24-4ec6-bb3b-be227b5fb689] Live Migration failure: Requested 
operation is not valid: domain has CPU feature: invtsc
Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/eventlet/hubs/poll.py, line 97, in wait
readers.get(fileno, noop).cb(fileno)
File /usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 194, in 
main
result = function(*args, **kwargs)
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4595, 
in _live_migration
recover_method(context, instance, dest, block_migration)
File /usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 
68, in __exit__
six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4589, 
in _live_migration
CONF.libvirt.live_migration_bandwidth)
File /usr/lib/python2.7/site-packages/eventlet/tpool.py, line 179, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
File /usr/lib/python2.7/site-packages/eventlet/tpool.py, line 139, in 
proxy_call
rv = execute(f,*args,**kwargs)
File /usr/lib/python2.7/site-packages/eventlet/tpool.py, line 77, in tworker
rv = meth(*args,**kwargs)
File /usr/lib/python2.7/site-packages/libvirt.py, line 1590, in migrateToURI
if ret == -1: raise libvirtError ('virDomainMigrateToURI() failed', dom=self)
libvirtError: Requested operation is not valid: domain has CPU feature: invtsc

When googling for invtsc, it seems to be a fairly new feature in libvirt
(http://www.redhat.com/archives/libvir-list/2014-May/msg00214.html),
which might be the reason this has not shown up for other people that
use Ubuntu, for example.

Regards
Daniel Lundqvist

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt live-migration

** Description changed:

  Hi!
  
  I'm running the latest git nova code from brach stable/icehouse and have 
patched it to get rid of the duplicate feature bug.
  I'm running libvirt 1.2.8 and qemu 2.0.2.
  
- When I issue the command to do a live migration gets stuck with this
- stack trace:
+ When I issue the command to do a live migration it fails with this stack
+ trace:
  
  2014-09-25 13:51:46.837 16995 ERROR nova.virt.libvirt.driver [-] [instance: 
3b8dbddc-ba24-4ec6-bb3b-be227b5fb689] Live Migration failure: Requested 
operation is not valid: domain has CPU feature: invtsc
  Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/eventlet/hubs/poll.py, line 97, in 
wait
  readers.get(fileno, noop).cb(fileno)
  File /usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 194, in 
main
  result = function(*args, **kwargs)
  File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 
4595, in _live_migration
  recover_method(context, instance, dest, block_migration)
  File /usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
  File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 
4589, in _live_migration
  CONF.libvirt.live_migration_bandwidth)
  File /usr/lib/python2.7/site-packages/eventlet/tpool.py, line 179, in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/eventlet/tpool.py, line 139, in 
proxy_call
  rv = execute(f,*args,**kwargs)
  File /usr/lib/python2.7/site-packages/eventlet/tpool.py, line 77, in tworker
  rv = meth(*args,**kwargs)
  File /usr/lib/python2.7/site-packages/libvirt.py, line 1590, in migrateToURI
  if ret == -1: raise libvirtError ('virDomainMigrateToURI() failed', dom=self)
  libvirtError: Requested operation is not valid: domain has CPU feature: invtsc
  
- 
- when googling for invtsc it seems to be fairly new feature in libvirt 
(http://www.redhat.com/archives/libvir-list/2014-May/msg00214.html), which 
might be the reason this has not showed up for other people that use ubuntu for 
example. 
+ when googling for invtsc it seems to be fairly new feature in libvirt
+ (http://www.redhat.com/archives/libvir-list/2014-May/msg00214.html),
+ which might be the reason this has not showed up for other people that
+ use ubuntu for example.
  
  Regards
  Daniel Lundqvist

** Description changed:

  Hi!
  
- I'm running the latest git nova code from brach stable/icehouse and have 
patched it to get rid of the duplicate feature bug.
+ I'm running the latest git nova code from branch stable/icehouse and have 
patched it to get rid of the duplicate feature bug.
  I'm running libvirt 1.2.8 and qemu 2.0.2.
  
  When I issue the command to do a live migration it 

[Yahoo-eng-team] [Bug 1373950] [NEW] Serial proxy service and API broken by design

2014-09-25 Thread Nikola Đipanov
Public bug reported:

As part of the blueprint https://blueprints.launchpad.net/nova/+spec
/serial-ports we introduced an API extension and a websocket proxy
binary. The problem with the two is that a lot of the stuff was copied
verbatim from the novnc-proxy API and service, which relies heavily on
the internal implementation details of the NoVNC and python-websockify
libraries.

We should not ship a service that will proxy websocket traffic if we do
not actually serve a web-based client for it (in the NoVNC case, it has
its own HTML5 VNC implementation that works over ws://). No similar
thing was part of the proposed (and accepted) implementation. The
websocket proxy based on websockify that we currently have actually
assumes it will serve static content (which we don't do for the serial
console case) which, when executed in the browser, initiates a
websocket connection that sends the security token in the cookie: field
of the request. All of this is specific to the NoVNC implementation
(see:
https://github.com/kanaka/noVNC/blob/e4e9a9b97fec107b25573b29d2e72a6abf8f0a46/vnc_auto.html#L18)
and does not make any sense for serial console functionality.

The proxy service was introduced in
https://review.openstack.org/#/c/113963/

In a similar manner - the API that was proposed and implemented (in
https://review.openstack.org/#/c/113966/) that gives us back the URL
with the security token makes no sense for the same reasons outlined
above.

We should revert at least these 2 patches before the final Juno release,
as we do not want to ship a useless service and commit to a useless API
method.

We could then look into providing similar functionality through possibly
something like https://github.com/chjj/term.js which will require us to
write a different proxy service.

** Affects: nova
 Importance: Critical
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373950

Title:
  Serial proxy service  and API broken by design

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  As part of the blueprint https://blueprints.launchpad.net/nova/+spec
  /serial-ports we introduced an API extension and a websocket proxy
  binary. The problem with the two is that a lot of the stuff was copied
  verbatim from the novnc-proxy API and service, which relies heavily on
  the internal implementation details of the NoVNC and python-websockify
  libraries.

  We should not ship a service that will proxy websocket traffic if we
  do not actually serve a web-based client for it (in the NoVNC case, it
  has its own HTML5 VNC implementation that works over ws://). No
  similar thing was part of the proposed (and accepted) implementation.
  The websocket proxy based on websockify that we currently have
  actually assumes it will serve static content (which we don't do for
  the serial console case) which, when executed in the browser,
  initiates a websocket connection that sends the security token in the
  cookie: field of the request. All of this is specific to the NoVNC
  implementation (see:
  
https://github.com/kanaka/noVNC/blob/e4e9a9b97fec107b25573b29d2e72a6abf8f0a46/vnc_auto.html#L18)
  and does not make any sense for serial console functionality.

  The proxy service was introduced in
  https://review.openstack.org/#/c/113963/

  In a similar manner - the API that was proposed and implemented (in
  https://review.openstack.org/#/c/113966/) that gives us back the URL
  with the security token makes no sense for the same reasons outlined
  above.

  We should revert at least these 2 patches before the final Juno
  release, as we do not want to ship a useless service and commit to a
  useless API method.

  We could then look into providing similar functionality through
  possibly something like https://github.com/chjj/term.js which will
  require us to write a different proxy service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373950/+subscriptions



[Yahoo-eng-team] [Bug 1373961] [NEW] Missing version attribute while generating K2K SAML assertion

2014-09-25 Thread Marek Denis
Public bug reported:

In Keystone2Keystone federation, the Assertion XML object is missing
the attribute 'version', which makes Shibboleth Service Providers
complain badly.

This parameter should be statically set to the string value '2.0'
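For illustration, a minimal stdlib-ElementTree sketch of an Assertion carrying the attribute in question (keystone's actual generation code uses pysaml2 objects; the function and values here are illustrative). In the SAML 2.0 schema the attribute is spelled Version and must equal "2.0":

```python
# Illustrative only: shows the serialized shape, not keystone's saml2 code.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def make_assertion(assertion_id, issuer_url):
    assertion = ET.Element(
        "{%s}Assertion" % SAML_NS,
        attrib={
            "ID": assertion_id,
            "Version": "2.0",  # the attribute this bug reports as missing
            "IssueInstant": "2014-09-25T00:00:00Z",
        },
    )
    issuer = ET.SubElement(assertion, "{%s}Issuer" % SAML_NS)
    issuer.text = issuer_url
    return assertion
```

Per the report above, assertions serialized without this attribute make Shibboleth SPs complain.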

** Affects: keystone
 Importance: Undecided
 Assignee: Marek Denis (marek-denis)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Marek Denis (marek-denis)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373961

Title:
  Missing version attribute while generating K2K SAML assertion

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In Keystone2Keystone federation, the Assertion XML object is missing
  the attribute 'version', which makes Shibboleth Service Providers
  complain badly.

  This parameter should be statically set to the string value '2.0'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373961/+subscriptions



[Yahoo-eng-team] [Bug 1373962] [NEW] LVM backed VM fails to launch

2014-09-25 Thread Dan Genin
Public bug reported:

LVM ephemeral storage backend is broken in the most recent Nova (commit
945646e1298a53be6ae284766f5023d754dfe57d)

To reproduce in Devstack:

1. Configure Nova to use LVM ephemeral storage by adding to
create_nova_conf function in lib/nova

iniset $NOVA_CONF libvirt images_type lvm
iniset $NOVA_CONF libvirt images_volume_group nova-lvm

2. Create a backing file for LVM

truncate -s 5G nova-backing-file

3. Mount the file via loop device

sudo losetup /dev/loop0 nova-backing-file

4. Create nova-lvm volume group

sudo vgcreate nova-lvm /dev/loop0

5. Launch Devstack

6. Alternatively, skipping step 1, /etc/nova/nova.conf can be modified
after Devstack is launched by adding

[libvirt]
images_type = lvm
images_volume_group = nova-lvm

and then restarting nova-compute by entering the Devstack screen
session, going to the n-cpu screen and hitting Ctrl-C, Up-arrow, and
Enter.

7. Launch an instance

nova boot test --flavor 1 --image cirros-0.3.2-x86_64-uec

Instance fails to launch. Nova compute reports

2014-09-25 10:11:08.180 ERROR nova.compute.manager 
[req-b7924ad0-5f4b-46eb-a798-571d97c77145 demo demo] [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] Instance failed to spawn
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] Traceback (most recent call last):
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/compute/manager.py, line 2231, in _build_resources
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] yield resources
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/compute/manager.py, line 2101, in _build_and_run_instance
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] block_device_info=block_device_info)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2617, in spawn
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] block_device_info, 
disk_info=disk_info)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 4434, in 
_create_domain_and_network
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] domain.destroy()
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 4358, in _create_domain
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] for vif in network_info if 
vif.get('active', True) is False]
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] six.reraise(self.type_, self.value, 
self.tb)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 4349, in _create_domain
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] raise 
exception.VirtualInterfaceCreateException()
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 183, in doit
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 141, in 
proxy_call
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] rv = execute(f, *args, **kwargs)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 122, in execute
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c] six.reraise(c, e, tb)
2014-09-25 10:11:08.180 29649 TRACE nova.compute.manager [instance: 
8d69f0de-253a-403e-a137-0da77b0d415c]   File 

[Yahoo-eng-team] [Bug 1328067] Re: Token with placeholder ID issued

2014-09-25 Thread Dolph Mathews
** Changed in: keystonemiddleware
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328067

Title:
  Token with placeholder ID issued

Status in OpenStack Identity (Keystone):
  Won't Fix
Status in OpenStack Identity  (Keystone) Middleware:
  Fix Released
Status in Python client library for Keystone:
  Fix Released

Bug description:
  We're seeing test failures, where it seems that an invalid token is
  issued, with the ID of placeholder

  http://logs.openstack.org/69/97569/2/check/check-tempest-dsvm-
  full/565d328/logs/screen-h-eng.txt.gz

  See context_auth_token_info which is being passed using the auth_token
  keystone.token_info request environment variable (ref
  https://review.openstack.org/#/c/97568/ which is the previous patch in
  the chain from the log referenced above).

  It seems like auth_token is getting a token, but there's some sort of
  race in the backend which prevents an actual token being stored?
  Trying to use placeholder as a token ID doesn't work, so it seems
  like this default assigned in the controller is passed back to
  auth_token, which treats it as a valid token, even though it's not.

  
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L121

  I'm not sure how to debug this further, as I can't reproduce this
  problem locally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368942] Re: lxc test failure under osx

2014-09-25 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368942

Title:
  lxc test failure under osx

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Here's the stack trace from the following test:
  
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions

  Traceback (most recent call last):
File nova/tests/virt/libvirt/test_driver.py, line 9231, in 
test_create_propagates_exceptions
  instance, None)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 420, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 431, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 481, in _matchHelper
  mismatch = matcher.match(matchee)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
  mismatch = matcher.match(matchee)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 412, in match
  reraise(*matchee)
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
  result = matchee()
File 
/Users/dims/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 955, in __call__
  return self._callable_object(*self._args, **self._kwargs)
File nova/virt/libvirt/driver.py, line 4229, in _create_domain_and_network
  disk_info):
File 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py,
 line 17, in __enter__
  return self.gen.next()
File nova/virt/libvirt/driver.py, line 4125, in _lxc_disk_handler
  self._create_domain_setup_lxc(instance, block_device_info, disk_info)
File nova/virt/libvirt/driver.py, line 4077, in _create_domain_setup_lxc
  use_cow=use_cow)
File nova/virt/disk/api.py, line 385, in setup_container
  img = _DiskImage(image=image, use_cow=use_cow, mount_dir=container_dir)
File nova/virt/disk/api.py, line 252, in __init__
  device = self._device_for_path(mount_dir)
File nova/virt/disk/api.py, line 260, in _device_for_path
  with open(/proc/mounts, 'r') as ifp:
  IOError: [Errno 2] No such file or directory: '/proc/mounts'
  Ran 1 tests in 1.172s (-30.247s)
  FAILED (id=12, failures=1 (+1))
  error: testr failed (1)
  ERROR: InvocationError: '/Users/dims/openstack/nova/.tox/py27/bin/python -m 
nova.openstack.common.lockutils python setup.py test --slowest 
--testr-args=nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_propagates_exceptions'
  
__
 summary 
___
  ERROR:   py27: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330985] Re: test_authorize_revoke_security_group_cidr_v6 failed: Security group name is not a string or unicode

2014-09-25 Thread Mauro Sergio Martins Rodrigues
This is a unit test, not a tempest test.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330985

Title:
  test_authorize_revoke_security_group_cidr_v6 failed: Security group
  name is not a string or unicode

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  For python 2.6 test:

  http://logs.openstack.org/40/98340/4/check/gate-nova-
  python26/030786a/testr_results.html.gz

  ft1.3: 
nova.tests.api.ec2.test_api.ApiEc2TestCase.test_authorize_revoke_security_group_cidr_v6_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [migrate.versioning.api] 215 -> 216...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 216 -> 217...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 217 -> 218...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 218 -> 219...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 219 -> 220...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 220 -> 221...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 221 -> 222...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 222 -> 223...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 223 -> 224...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 224 -> 225...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 225 -> 226...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 226 -> 227...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 227 -> 228...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 228 -> 229...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 229 -> 230...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 230 -> 231...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 231 -> 232...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 232 -> 233...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 233 -> 234...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 234 -> 235...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 235 -> 236...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 236 -> 237...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 237 -> 238...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 238 -> 239...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 239 -> 240...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 240 -> 241...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 241 -> 242...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 242 -> 243...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 243 -> 244...
  INFO [migrate.versioning.api] done
  INFO [nova.api.ec2] 0.3434s None POST /services/Cloud/ 
CloudController:CreateSecurityGroup 400 [Boto/2.29.1 Python/2.6.6 
Linux/2.6.32-431.17.1.el6.x86_64] application/x-www-form-urlencoded text/xml
  ERROR [boto] 400 Bad Request
  ERROR [boto] <?xml version="1.0"?>
  <Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Security
group name is not a string or
unicode</Message></Error></Errors><RequestID>req-ee36126d-6536-4edd-810b-a03a95f80ec9</RequestID></Response>
  }}}

  pythonlogging:'boto': {{{
  400 Bad Request
  <?xml version="1.0"?>
  <Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Security
group name is not a string or
unicode</Message></Error></Errors><RequestID>req-ee36126d-6536-4edd-810b-a03a95f80ec9</RequestID></Response>
  }}}

  Traceback (most recent call last):
File nova/tests/api/ec2/test_api.py, line 553, in 
test_authorize_revoke_security_group_cidr_v6
  'test group')
File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/boto/ec2/connection.py,
 line 2970, in create_security_group
  SecurityGroup, verb='POST')
File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/boto/connection.py,
 line 1177, in get_object
  raise self.ResponseError(response.status, response.reason, body)
  EC2ResponseError: EC2ResponseError: 400 Bad Request
  <?xml version="1.0"?>
  <Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Security
group name is not a string or
unicode</Message></Error></Errors><RequestID>req-ee36126d-6536-4edd-810b-a03a95f80ec9</RequestID></Response>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : 

[Yahoo-eng-team] [Bug 1360446] Re: client connection leak to memcached under eventlet due to threadlocal

2014-09-25 Thread Dolph Mathews
** Changed in: keystonemiddleware
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1360446

Title:
  client connection leak to memcached under eventlet due to threadlocal

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Triaged
Status in OpenStack Identity  (Keystone) Middleware:
  Fix Released

Bug description:
  When Keystone is configured with memcached as its token storage
  backend, keystone does not reuse connections to it and starts to fail
  once there are more than 500 connections to memcached.

  Steps to reproduce:

  1. Configure keystone with memcached as backend.
  2. Create a reasonably heavy load on keystone (creating VMs opens a lot of 
connections) and watch the connections to memcached using netstat, e.g. netstat 
-an | grep -c :11211

  Expected behavior:
  the number of connections should be reasonable and not exceed the
number of connections to keystone (ideally :)

  Observed behavior:
  The number of connections keeps growing, and it seems that:
  1. They are not reused at all.
  2. The lifetime of some connections is 600 seconds.
  3. It looks like not all the connections stay for 600 seconds.

  UPDATE from MorganFainberg
  This is specific to deploying under eventlet and the python-memcached
library and its explicit/unavoidable use of threadlocal. Use of
threadlocal under eventlet causes the client connections to leak until
the GC / kernel cleans up the connections. This was confirmed to only
affect eventlet with patched threading.

  Keystone deployed under apache is not affected.

  All services deployed with keystonemiddleware that utilize eventlet
  and memcache for token cache are also affected.
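A minimal, self-contained sketch of the leak mechanism (FakeMemcacheClient is a hypothetical stand-in for the python-memcached client): because the client is cached in a threading.local slot, every "thread" gets its own connection, and under eventlet's patched threading every short-lived green thread counts as a new thread, so connections pile up instead of being reused.

```python
import threading

class FakeMemcacheClient(object):
    """Hypothetical stand-in for a python-memcached client."""
    opened = 0

    def __init__(self):
        # Each client instance would hold its own TCP connection.
        FakeMemcacheClient.opened += 1

_local = threading.local()

def get_client():
    # python-memcached style: cache one client per thread in threadlocal.
    # With eventlet's patched threading, each green thread is a distinct
    # "thread", so nothing is ever shared, reused, or closed promptly.
    if not hasattr(_local, 'client'):
        _local.client = FakeMemcacheClient()
    return _local.client

for _ in range(5):            # five short-lived "request handlers"
    t = threading.Thread(target=get_client)
    t.start()
    t.join()

print(FakeMemcacheClient.opened)  # 5: one connection per thread, none reused
```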

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1360446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332058] Re: keystone behavior when one memcache backend is down

2014-09-25 Thread Dolph Mathews
** Changed in: keystonemiddleware
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1332058

Title:
  keystone behavior when one memcache backend is down

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Identity  (Keystone) Middleware:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed

Bug description:
  Hi,

  Our implementation uses dogpile.cache.memcached as a backend for
  tokens. Recently, I found interesting behavior when one of the
  memcache regions went down: there is a 3-6 second delay when I try to
  get a token, and with 2 such backends the delay is 6-12 seconds. It's
  very easy to test.

  Test connection using

  for i in {1..20}; do (time keystone token-get > log2) 2>&1 | grep
  real | awk '{print $2}'; done

  Block one memcache backend using

  iptables -I INPUT -p tcp --dport 11211 -j DROP  (simulating a power
  outage of the node)

  Test the speed using

  for i in {1..20}; do (time keystone token-get > log2) 2>&1 | grep
  real | awk '{print $2}'; done

  Also I straced keystone process with

  strace -tt -s 512 -o /root/log1 -f -p PID

  and got

  26872 connect(9, {sa_family=AF_INET, sin_port=htons(11211),
  sin_addr=inet_addr("10.108.2.3")}, 16) = -1 EINPROGRESS (Operation now
  in progress)

  though this IP is down

  Also I checked the code

  
https://github.com/openstack/keystone/blob/master/keystone/common/kvs/core.py#L210-L237
  
https://github.com/openstack/keystone/blob/master/keystone/common/kvs/core.py#L285-L289
   
https://github.com/openstack/keystone/blob/master/keystone/common/kvs/backends/memcached.py#L96

  and was not able to find any details of how keystone treats a backend
  when it is down.

  There should be logic which temporarily blocks a backend when it is
  not accessible. After a timeout period, the backend should be probed
  (without blocking get/set operations on the current backends) and, if
  the connection succeeds, it should be added back into operation. Here
  is a sample of how it could be implemented

  http://dogpilecache.readthedocs.org/en/latest/usage.html#changing-
  backend-behavior
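A minimal circuit-breaker sketch of the suggested logic (class and parameter names are hypothetical; a production fix would hook dogpile.cache's backend-wrapping mechanism linked above):

```python
import time

class DeadBackendBreaker(object):
    """Wrap a cache backend so that, after a connection failure, calls
    fail fast instead of stalling 3-6s on every get until retry_after
    seconds have elapsed, at which point the backend is probed again."""

    def __init__(self, backend, retry_after=30.0, clock=time.time):
        self.backend = backend
        self.retry_after = retry_after
        self.clock = clock          # injectable for testing
        self._dead_until = 0.0

    def get(self, key):
        if self.clock() < self._dead_until:
            return None             # backend blocked: no slow connect attempt
        try:
            return self.backend.get(key)
        except IOError:
            # Mark the backend dead; the next call after retry_after
            # acts as the probe that can bring it back into operation.
            self._dead_until = self.clock() + self.retry_after
            return None
```

The same guard would wrap set/delete; a successful probe after retry_after lets the backend rejoin without blocking the healthy ones.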

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1332058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370515] Re: allow edit of user role

2014-09-25 Thread Gary W. Smith
I missed this too, as I also was using Keystone v2. Thanks, Julie

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370515

Title:
  allow edit of user role

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I think it would be helpful to allow changing and updating a user's
  role from horizon

  [root@tigris01 ~(keystone_admin)]# keystone help | grep role
  role-create Create new role.
  role-delete Delete role.
  role-getDisplay role details.
  role-list   List all roles.
  user-role-add   Add role to user.
  user-role-list  List roles granted to a user.
  user-role-removeRemove role from user.
  bootstrap   Grants a new role to a new user on a new tenant, after
  [root@tigris01 ~(keystone_admin)]# keystone help user-role-add
  usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]

  Add role to user.

  we can actually use role-delete + role-create, or role-create with
  --role <role>, --role-id <role>, or --role_id <role>

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373992] [NEW] EC2 keystone auth token is using unsafe SSL connection

2014-09-25 Thread Sean Dague
Public bug reported:

EC2KeystoneAuth uses httplib.HTTPSConnection objects. In Python 2.x
those do not perform CA checks so client connections are vulnerable to
MiM attacks.

This should use requests instead, and pick up the local cacert params if
needed.

** Affects: nova
 Importance: Critical
 Status: Triaged


** Tags: ec2

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Critical

** Tags added: ec2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373992

Title:
  EC2 keystone auth token is using unsafe SSL connection

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  EC2KeystoneAuth uses httplib.HTTPSConnection objects. In Python 2.x
  those do not perform CA checks so client connections are vulnerable to
  MiM attacks.

  This should use requests instead, and pick up the local cacert params
  if needed.
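A sketch of the verifying behaviour the fix needs. The bug proposes the requests library (whose verify=cacert parameter does this); the stdlib ssl module below illustrates the same point, and the cacert parameter is an assumption standing in for whatever config option carries the CA bundle path:

```python
import ssl

def make_verified_context(cacert=None):
    # Unlike httplib.HTTPSConnection on Python 2.x, a default ssl
    # context validates the server's certificate chain and hostname,
    # which is what closes the MiM hole described in the bug.
    # cacert: optional path to a CA bundle (hypothetical config option).
    return ssl.create_default_context(cafile=cacert)

ctx = make_verified_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```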

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373993] [NEW] Trusted Filter uses unsafe SSL connection

2014-09-25 Thread Sean Dague
Public bug reported:

HTTPSClientAuthConnection uses httplib.HTTPSConnection objects. In
Python 2.x those do not perform CA checks so client connections are
vulnerable to MiM attacks.

This should be changed to use the requests lib.

** Affects: nova
 Importance: Critical
 Status: Triaged


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373993

Title:
  Trusted Filter uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  HTTPSClientAuthConnection uses httplib.HTTPSConnection objects. In
  Python 2.x those do not perform CA checks so client connections are
  vulnerable to MiM attacks.

  This should be changed to use the requests lib.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374000] [NEW] VMWare: file writer class uses unsafe SSL connection

2014-09-25 Thread Sean Dague
Public bug reported:

VMwareHTTPWriteFile uses httplib.HTTPSConnection objects. In Python 2.x
those do not perform CA checks so client connections are vulnerable to
MiM attacks.

This is the specific version of
https://bugs.launchpad.net/nova/+bug/1188189

** Affects: nova
 Importance: Critical
 Status: Triaged


** Tags: vmware

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Critical

** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374000

Title:
  VMWare: file writer class uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  VMwareHTTPWriteFile uses httplib.HTTPSConnection objects. In Python
  2.x those do not perform CA checks so client connections are
  vulnerable to MiM attacks.

  This is the specific version of
  https://bugs.launchpad.net/nova/+bug/1188189

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374001] [NEW] Xenserver glance plugin uses unsafe SSL connection

2014-09-25 Thread Sean Dague
Public bug reported:

plugins/xenserver/xenapi/etc/xapi.d/plugins/glance _upload_tarball uses
httplib.HTTPSConnection objects. In Python 2.x those do not perform CA
checks so client connections are vulnerable to MiM attacks.

This is the specific version of
https://bugs.launchpad.net/nova/+bug/1188189.

** Affects: nova
 Importance: Critical
 Status: Triaged


** Tags: xenserver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374001

Title:
  Xenserver glance plugin uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  plugins/xenserver/xenapi/etc/xapi.d/plugins/glance _upload_tarball
  uses httplib.HTTPSConnection objects. In Python 2.x those do not
  perform CA checks so client connections are vulnerable to MiM attacks.

  This is the specific version of
  https://bugs.launchpad.net/nova/+bug/1188189.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374001/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353008] Re: MAAS Provider: LXC did not get DHCP address, stuck in pending

2014-09-25 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.2

---
cloud-init (0.7.5-0ubuntu1.2) trusty-proposed; urgency=medium

  * d/patches/lp-1353008-cloud-init-local-needs-run.conf:
backport change to cloud-init-local.conf to depend on /run being
mounted (LP: #1353008)
 -- Scott Moser smo...@ubuntu.com   Wed, 17 Sep 2014 09:15:54 -0400

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1353008

Title:
  MAAS Provider: LXC did not get DHCP address, stuck in pending

Status in Init scripts for use on cloud images:
  Fix Committed
Status in juju-core:
  Triaged
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Trusty:
  Fix Released

Bug description:
  === Begin SRU Information ===
  This bug causes lxc containers created by the ubuntu-cloud template 
(lxc-create -t ubuntu-cloud) to sometimes not obtain an IP address, and thus 
not correctly boot to completion.

  The bug is in an assumption by cloud-init that /run is mounted before
  the cloud-init-local job is run.  The fix is simply to guarantee that
  it is, via a modification to its upstart 'start on' stanza.

  When booting with an initramfs /run will be mounted before /, so the
  race condition is not possible.  Thus, the failure case is only either
  in non-initramfs boot (which is very unlikely) or in lxc boot.  The
  lxc case seems only to occur very rarely, somewhere well under one
  percent of the time.

  [Test Case]
  A test case is written at [1] that launches many instances in an
attempt to brute-force the error.  However, I've not been able to make
it fail.

  The original bug reporter has been running with the 'start on' change
  and has seen no errors since.

  We will request the original bug reporter to apply the uploaded
  changes and run through their battery.

  [Regression Potential]
  The possibility for regression here is in the second boot of an instance.  
The following scenario is a change of behavior:
   * the user boots an instance with NoCloud or ConfigDrive in ds=local mode
   * user changes /etc/network/interfaces in a way that would cause
     static-networking to not be emitted on subsequent boot
   * user reboots
  Now, instead of a quick boot, the user may see cloud-init-nonet blocking on 
network coming up.

  This would be a uncommon scenario, and the broken-etc-network-
  interfaces scenario is already one that causes timeouts on boot.

  --
  [1] 
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/cloud-init-test/view/head:/tests/lxc-test-new-instance

  === End  SRU Information ===

  Note that after I went onto the system, it *did* have an IP address.

    0/lxc/3:
  agent-state: pending
  instance-id: juju-machine-0-lxc-3
  series: trusty
  hardware: arch=amd64

  cloud-init-output.log snip:

  Cloud-init v. 0.7.5 running 'init' at Mon, 04 Aug 2014 23:57:12 +. Up 
572.29 seconds.
  ci-info: +++Net device info+++
  ci-info: ++--+---+---+---+
  ci-info: | Device |  Up  |  Address  |Mask   | Hw-Address|
  ci-info: ++--+---+---+---+
  ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 | . |
  ci-info: |  eth0  | True | . | . | 00:16:3e:34:aa:57 |
  ci-info: ++--+---+---+---+
  ci-info: !!!Route info failed
  Cloud-init v. 0.7.5 running 'modules:config' at Mon, 04 Aug 2014 23:57:12 
+. Up 572.99 seconds.
  Cloud-init v. 0.7.5 running 'modules:final' at Mon, 04 Aug 2014 23:57:14 
+. Up 574.42 seconds.
  Cloud-init v. 0.7.5 finished at Mon, 04 Aug 2014 23:57:14 +. Datasource 
DataSourceNoCloudNet [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net].  Up 
574.54 seconds

  syslog on system, showing DHCPACK 1 second later:

  root@juju-machine-0-lxc-3:/home/ubuntu# grep DHCP /var/log/syslog
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 255.255.255.255 port 67 (xid=0x1687c544)
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPOFFER of 10.96.3.173 from 
10.96.0.10
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10
  Aug  5 05:28:15 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 10.96.0.10 port 67 (xid=0x1687c544)
  Aug  5 05:28:15 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10
  Aug  5 11:15:00 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 10.96.0.10 port 67 (xid=0x1687c544)
  Aug  5 11:15:00 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10

  It appears in every 

[Yahoo-eng-team] [Bug 1374033] [NEW] wsgi generating wrong entity_id values when issuing saml assertions.

2014-09-25 Thread Marek Denis
Public bug reported:

Attribute issuer should always be set to CONF.saml.idp_entity_id;
otherwise the entityID from the IdP metadata and the one in the
generated assertion can differ, making the Service Provider reject the
assertion.
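A sketch of the intended behaviour (Conf is a hypothetical stand-in for keystone's CONF.saml section):

```python
class Conf(object):
    """Hypothetical stand-in for keystone's CONF.saml section."""
    idp_entity_id = 'https://idp.example.org/v3/OS-FEDERATION/saml2/idp'

def assertion_issuer(conf, request_host_url):
    # The issuer must come from configuration, not from the WSGI request
    # URL: request_host_url can differ from the entityID published in
    # the IdP metadata (proxy, alternate hostname, ...), and any
    # mismatch makes the Service Provider reject the assertion.
    return conf.idp_entity_id

print(assertion_issuer(Conf, 'https://internal-host:5000'))
```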

** Affects: keystone
 Importance: Undecided
 Assignee: Marek Denis (marek-denis)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Marek Denis (marek-denis)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1374033

Title:
  wsgi generating wrong entity_id values when issuing saml assertions.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Attribute issuer should always be set to CONF.saml.idp_entity_id;
  otherwise the entityID from the IdP metadata and the one in the
  generated assertion can differ, making the Service Provider reject the
  assertion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1374033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188189] Re: Some server-side 'SSL' communication fails to check certificates (use of HTTPSConnection)

2014-09-25 Thread Sean Dague
Nova bugs will be tracked in the separate bugs listed above, so removing
Nova from this bug.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1188189

Title:
  Some server-side 'SSL' communication fails to check certificates (use
  of HTTPSConnection)

Status in Cinder:
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  New
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in OpenStack Object Storage (Swift):
  Invalid

Bug description:
  Grant Murphy from Red Hat reported usage of httplib.HTTPSConnection
  objects. In Python 2.x those do not perform CA checks, so client
  connections are vulnerable to man-in-the-middle (MitM) attacks.

  
  The following files use httplib.HTTPSConnection :
  keystone/middleware/s3_token.py
  keystone/middleware/ec2_token.py
  keystone/common/bufferedhttp.py
  vendor/python-keystoneclient-master/keystoneclient/middleware/auth_token.py

  AFAICT HTTPSConnection does not validate server certificates and
  should be avoided. This is fixed in Python 3, however in 2.X no
  validation occurs. I suspect this is also applicable to most OpenStack
  modules that make HTTPS client calls.
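On modern Python (3.x, or 2.7.9+ via the `ssl` module) the missing validation can be switched on explicitly; a minimal sketch:

```python
import ssl
import http.client

# ssl.create_default_context() yields a context that verifies the server
# certificate against the system CA bundle and checks the hostname --
# exactly the checks plain httplib.HTTPSConnection skipped on Python 2.x.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Passing the context makes the connection refuse untrusted certificates.
# (No request is made here; this only constructs the connection object.)
conn = http.client.HTTPSConnection("example.com", context=ctx)
```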

  Similar problems were found in ovirt:
  https://bugzilla.redhat.com/show_bug.cgi?id=851672 (CVE-2012-3533)

  With solutions for ovirt:
  http://gerrit.ovirt.org/#/c/7209/
  http://gerrit.ovirt.org/#/c/7249/
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1188189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310815] Re: bad django conf example

2014-09-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/119980
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=d5a92e28ef8f85012eced9d08d0c2592771da4bc
Submitter: Jenkins
Branch:master

commit d5a92e28ef8f85012eced9d08d0c2592771da4bc
Author: darrenchan dazzac...@yahoo.com.au
Date:   Tue Sep 9 13:30:24 2014 +1000

Minor fix to Django settings in dashboard database session section

Minor fix to Django settings in the local_settings file

Change-Id: Ie38e154f20c2ad77e5860fef7c0e7688d3c75809
backport: havana
Closes-Bug: #1310815


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1310815

Title:
  bad django conf example

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Manuals:
  Fix Released

Bug description:
  
  With Django 1.6, the setting is wrong. You need to use

  SESSION_ENGINE = 'django.contrib.sessions.backends.db'

  as described on:

  https://docs.djangoproject.com/en/1.6/ref/settings/#std:setting-
  SESSION_ENGINE

  If I use the one described on this page (SESSION_ENGINE =
  'django.core.cache.backends.db.DatabaseCache'), I get a 500 error and
  this in the logs:

File ".../django-1.6/django/core/handlers/base.py", line 90, in get_response
  response = middleware_method(request)

File ".../django-1.6/django/contrib/sessions/middleware.py", line 10, in process_request
  engine = import_module(settings.SESSION_ENGINE)

File ".../django-1.6/django/utils/importlib.py", line 40, in import_module
  __import__(name)

  ImportError: No module named DatabaseCache

  greetings,
  Thomas

  ---
  Built: 2014-04-07T07:45:00 00:00
  git SHA: b7557a0bb682410c86f8022eb07980840d82c8cf
  URL: 
http://docs.openstack.org/havana/install-guide/install/apt/content/dashboard-session-database.html
  source File: 
file:/home/jenkins/workspace/openstack-install-deploy-guide-ubuntu/doc/common/section_dashboard_sessions.xml
  xml:id: dashboard-session-database

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1310815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324450] Re: add delete operations for the ODL MechanismDriver

2014-09-25 Thread Alan Pevec
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => In Progress

** Changed in: neutron/icehouse
 Assignee: (unassigned) => Cédric OLLIVIER (m.col)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324450

Title:
  add delete operations for the ODL MechanismDriver

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  In Progress

Bug description:
  The delete operations (networks, subnets and ports) haven't been managed 
since the 12th review of the initial support.
  It seems sync_single_resource only implements create and update operations.
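Illustrative only: the real `sync_single_resource` has a different signature and talks to the ODL REST API. The point is that the operation mapping must cover delete as well as create and update:

```python
# Hypothetical dispatcher: names and URLs are illustrative, not taken
# from the ODL mechanism driver itself.
OPERATION_TO_HTTP = {
    "create": "POST",
    "update": "PUT",
    "delete": "DELETE",
}


def sync_single_resource(operation, collection, resource_id):
    method = OPERATION_TO_HTTP[operation]
    # DELETE addresses the individual resource, not the collection.
    url = "/%s/%s" % (collection, resource_id)
    return method, url


assert sync_single_resource("delete", "networks", "abc") == ("DELETE", "/networks/abc")
```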

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374045] [NEW] Add v3 endpoint for identity in catalog

2014-09-25 Thread Haneef Ali
Public bug reported:

This is a wish list.

Since we are moving to v3, it is better to add a v3 endpoint in
sample_data.sh.  We still have only a v2.0 endpoint. I don't think
keystoneclient will be affected, since it doesn't use the endpoint from
the catalog but relies on version discovery.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1374045

Title:
  Add v3 endpoint for identity in catalog

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is a wish list.

  Since we are moving to v3, it is better to add a v3 endpoint in
  sample_data.sh.  We still have only a v2.0 endpoint. I don't think
  keystoneclient will be affected, since it doesn't use the endpoint from
  the catalog but relies on version discovery.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1374045/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330985] Re: test_authorize_revoke_security_group_cidr_v6 failed: Security group name is not a string or unicode

2014-09-25 Thread Sean Dague
This should be fixed now upstream

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330985

Title:
  test_authorize_revoke_security_group_cidr_v6 failed: Security group
  name is not a string or unicode

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  For python 2.6 test:

  http://logs.openstack.org/40/98340/4/check/gate-nova-
  python26/030786a/testr_results.html.gz

  ft1.3: 
nova.tests.api.ec2.test_api.ApiEc2TestCase.test_authorize_revoke_security_group_cidr_v6_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [migrate.versioning.api] 215 -> 216...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 216 -> 217...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 217 -> 218...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 218 -> 219...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 219 -> 220...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 220 -> 221...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 221 -> 222...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 222 -> 223...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 223 -> 224...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 224 -> 225...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 225 -> 226...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 226 -> 227...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 227 -> 228...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 228 -> 229...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 229 -> 230...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 230 -> 231...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 231 -> 232...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 232 -> 233...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 233 -> 234...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 234 -> 235...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 235 -> 236...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 236 -> 237...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 237 -> 238...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 238 -> 239...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 239 -> 240...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 240 -> 241...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 241 -> 242...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 242 -> 243...
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 243 -> 244...
  INFO [migrate.versioning.api] done
  INFO [nova.api.ec2] 0.3434s None POST /services/Cloud/ 
CloudController:CreateSecurityGroup 400 [Boto/2.29.1 Python/2.6.6 
Linux/2.6.32-431.17.1.el6.x86_64] application/x-www-form-urlencoded text/xml
  ERROR [boto] 400 Bad Request
  ERROR [boto] <?xml version="1.0"?>
  <Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Security group name is not a string or unicode</Message></Error></Errors><RequestID>req-ee36126d-6536-4edd-810b-a03a95f80ec9</RequestID></Response>
  }}}

  pythonlogging:'boto': {{{
  400 Bad Request
  <?xml version="1.0"?>
  <Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Security group name is not a string or unicode</Message></Error></Errors><RequestID>req-ee36126d-6536-4edd-810b-a03a95f80ec9</RequestID></Response>
  }}}

  Traceback (most recent call last):
File nova/tests/api/ec2/test_api.py, line 553, in 
test_authorize_revoke_security_group_cidr_v6
  'test group')
File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/boto/ec2/connection.py,
 line 2970, in create_security_group
  SecurityGroup, verb='POST')
File 
/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/boto/connection.py,
 line 1177, in get_object
  raise self.ResponseError(response.status, response.reason, body)
  EC2ResponseError: EC2ResponseError: 400 Bad Request
  <?xml version="1.0"?>
  <Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Security group name is not a string or unicode</Message></Error></Errors><RequestID>req-ee36126d-6536-4edd-810b-a03a95f80ec9</RequestID></Response>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1373949] Re: live-migration fails because of CPU feature invtsc

2014-09-25 Thread Sean Dague
This is actually libvirt itself failing to do live migration, not
anything in Nova. So this really needs to be taken upstream.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373949

Title:
  live-migration fails because of CPU feature invtsc

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hi!

  I'm running the latest git nova code from branch stable/icehouse and have 
patched it to get rid of the duplicate feature bug
  (commit 0f28fbef8bedeafca0bf488b84f783568fefc960).
  I'm running libvirt 1.2.8 and qemu 2.0.2.

  When I issue the command to do a live migration nova-compute fails
  with this stack trace:

  2014-09-25 13:51:46.837 16995 ERROR nova.virt.libvirt.driver [-] [instance: 
3b8dbddc-ba24-4ec6-bb3b-be227b5fb689] Live Migration failure: Requested 
operation is not valid: domain has CPU feature: invtsc
  Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 97, in wait
  readers.get(fileno, noop).cb(fileno)
  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 194, in main
  result = function(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4595, in _live_migration
  recover_method(context, instance, dest, block_migration)
  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4589, in _live_migration
  CONF.libvirt.live_migration_bandwidth)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 179, in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 139, in proxy_call
  rv = execute(f,*args,**kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 77, in tworker
  rv = meth(*args,**kwargs)
  File "/usr/lib/python2.7/site-packages/libvirt.py", line 1590, in migrateToURI
  if ret == -1: raise libvirtError ('virDomainMigrateToURI() failed', dom=self)
  libvirtError: Requested operation is not valid: domain has CPU feature: invtsc

  when googling for invtsc it seems to be fairly new feature in libvirt
  (http://www.redhat.com/archives/libvir-list/2014-May/msg00214.html),
  which might be the reason this has not showed up for other people that
  use ubuntu for example.
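A workaround sometimes used for cases like this is to drop the host-specific feature from the guest's CPU definition before migrating. A sketch using plain XML manipulation (this is not Nova code, and the CPU XML below is illustrative):

```python
import xml.etree.ElementTree as ET

# Workaround sketch: strip the host-specific invtsc feature from the
# guest's <cpu> definition so libvirt no longer refuses the migration.
cpu_xml = """
<cpu mode='custom' match='exact'>
  <model>SandyBridge</model>
  <feature policy='require' name='invtsc'/>
  <feature policy='require' name='vme'/>
</cpu>
"""

cpu = ET.fromstring(cpu_xml)
for feature in cpu.findall("feature"):
    if feature.get("name") == "invtsc":
        cpu.remove(feature)

names = [f.get("name") for f in cpu.findall("feature")]
assert names == ["vme"]
```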

  Regards
  Daniel Lundqvist

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372311] Re: pre-populate gateway in create network dialog

2014-09-25 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/124125

** Changed in: horizon
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372311

Title:
  pre-populate gateway in create network dialog

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  
  On the second step of the Create Network dialog, the Gateway IP field acts 
oddly. According to the help text, to use the default value, leave it blank. If 
you want to not use a gateway, check the box underneath it that says "Disable 
Gateway".

  If most people just want to use the default value as I'd expect they
  would, it would be nice to explicitly show that value and populate the
  field after the user enters a Network Address.
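The suggested behaviour, deriving the default gateway (the first host address of the subnet) from the network address the user types, is a one-liner with the stdlib; a sketch:

```python
import ipaddress

# Sketch of the suggested UX: once the user types a network address,
# show the gateway that would be picked by default -- the first host
# address of the subnet.
def default_gateway(cidr):
    net = ipaddress.ip_network(cidr)
    return str(next(net.hosts()))


assert default_gateway("10.1.0.0/24") == "10.1.0.1"
```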

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357379] Re: policy admin_only rules not enforced when changing value to default (CVE-2014-6414)

2014-09-25 Thread Elena Ezhova
** Changed in: neutron/havana
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357379

Title:
  policy admin_only rules not enforced when changing value to default
  (CVE-2014-6414)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Invalid
Status in neutron icehouse series:
  In Progress
Status in OpenStack Security Advisories:
  In Progress

Bug description:
  If a non-admin user tries to update an attribute, which should be
  updated only by admin, from a non-default value to default,  the
  update is successfully performed and PolicyNotAuthorized exception is
  not raised.

  The reason is that when a rule to match for a given action is built
  there is a verification that each attribute in a body of the resource
  is present and has a non-default value. Thus, if we try to change some
  attribute's value to default, it is not considered to be explicitly
  set and a corresponding rule is not enforced.
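A stripped-down illustration of the flaw (this is not Neutron's actual policy engine):

```python
# Treating "value equals the default" as "attribute not set" means the
# admin_only rule is never matched for that update.
DEFAULT_SHARED = False


def is_attribute_explicitly_set(body, attr, default):
    # The buggy check described above: resetting an attribute to its
    # default looks identical to never having supplied it at all.
    return attr in body and body[attr] != default


# A non-admin resetting 'shared' from True back to its default False
# slips past the rule builder:
request_body = {"shared": DEFAULT_SHARED}
assert is_attribute_explicitly_set(request_body, "shared", DEFAULT_SHARED) is False
```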

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374108] [NEW] Hyper-V agent cannot disconnect orphaned switch ports

2014-09-25 Thread Claudiu Belu
Public bug reported:

On Windows / Hyper-V Server 2008 R2, when a switch port has to be disconnected 
because the VM using it was removed,
DisconnectSwitchPort fails, returning an error code, and a HyperVException 
is raised. When the exception is raised, the switch port is not removed, which 
makes subsequent WMI operations more expensive.

If the VM's VNIC has been removed, disconnecting the switch port is no
longer necessary and it should be removed.

Trace:
http://paste.openstack.org/show/115297/
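One possible shape of the fix, with hypothetical helper names (the real agent uses WMI calls; nothing below is taken from its code):

```python
# If the VNIC backing the port is already gone, skip DisconnectSwitchPort
# and just remove the orphaned port instead of raising.
def remove_switch_port(port, vnic_exists, disconnect, remove):
    if vnic_exists(port):
        # Disconnecting is only meaningful while the VNIC is attached.
        disconnect(port)
    remove(port)


def failing_disconnect(port):
    raise RuntimeError("DisconnectSwitchPort failed")


removed = []
remove_switch_port("port0",
                   vnic_exists=lambda p: False,   # VM/VNIC already deleted
                   disconnect=failing_disconnect,
                   remove=removed.append)
assert removed == ["port0"]
```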

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374108

Title:
  Hyper-V agent cannot disconnect orphaned switch ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On Windows / Hyper-V Server 2008 R2, when a switch port has to be 
disconnected because the VM using it was removed,
  DisconnectSwitchPort fails, returning an error code, and a HyperVException 
is raised. When the exception is raised, the switch port is not removed, which 
makes subsequent WMI operations more expensive.

  If the VM's VNIC has been removed, disconnecting the switch port is no
  longer necessary and it should be removed.

  Trace:
  http://paste.openstack.org/show/115297/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374132] [NEW] Nova libvirt driver conversion error?

2014-09-25 Thread David Hill
Public bug reported:

Hi guys,

We've noticed a weird behavior with nova compute reporting the wrong
free memory size in Grizzly and Ubuntu.   The version of libvirt used is
1.0.2 and according to the documentation of libvirt, the memory is
returned in KB but in the code, it says MB?  Did I miss something?


Dave

Doc:
http://libvirt.org/guide/html/ch03s04s04.html
http://libvirt.org/html/libvirt-libvirt.html#virNodeInfo

The chunk of code in question:
def get_memory_mb_total(self):
Get the total memory size(MB) of physical computer.

:returns: the total amount of memory(MB).



return self._conn.getInfo()[1]
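At the C level, virNodeInfo.memory is documented in KiB, and whether getInfo()[1] has already been converted is exactly what the report is asking. If the value a caller sees is KiB, converting to MB is a plain integer division:

```python
# Units sketch only -- not Nova code. A KiB value is turned into MiB
# by dividing by 1024.
def kib_to_mib(memory_kib):
    return memory_kib // 1024


assert kib_to_mib(16777216) == 16384   # a 16 GiB host reported in KiB
```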

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute memory nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374132

Title:
  Nova libvirt driver conversion error?

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi guys,

  We've noticed a weird behavior with nova compute reporting the
  wrong free memory size in Grizzly and Ubuntu.   The version of libvirt
  used is 1.0.2 and according to the documentation of libvirt, the
  memory is returned in KB but in the code, it says MB?  Did I miss
  something?

  
  Dave

  Doc:
  http://libvirt.org/guide/html/ch03s04s04.html
  http://libvirt.org/html/libvirt-libvirt.html#virNodeInfo

  The chunk of code in question:
  def get_memory_mb_total(self):
  Get the total memory size(MB) of physical computer.

  :returns: the total amount of memory(MB).

  

  return self._conn.getInfo()[1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374140] [NEW] Need to log the original libvirtError when InterfaceDetachFailed

2014-09-25 Thread Matt Riedemann
Public bug reported:

This is not really useful:

http://logs.openstack.org/17/123917/2/check/check-tempest-dsvm-
neutron/4bc2052/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-09-25_17_35_11_635

2014-09-25 17:35:11.635 ERROR nova.virt.libvirt.driver 
[req-50afcbfb-203e-454d-a7eb-1549691caf77 TestNetworkBasicOps-985093118 
TestNetworkBasicOps-1055683132] [instance: 
960ee0b1-9c96-4d5b-b5f5-be76ae19a536] detaching network adapter failed.
2014-09-25 17:35:11.635 27689 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: nova.objects.instance.Instance object at 0x422fe90
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
133, in _dispatch_and_reply
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
176, in _dispatch
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
122, in _do_dispatch
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 393, in decorated_function
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 4411, in detach_interface
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher 
self.driver.detach_interface(instance, condemned)
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 1448, in 
detach_interface
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher raise 
exception.InterfaceDetachFailed(instance)
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher 
InterfaceDetachFailed: nova.objects.instance.Instance object at 0x422fe90
2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher 


The code is logging that there was an error, but not the error itself:

try:
    self.vif_driver.unplug(instance, vif)
    flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
    state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
    if state == power_state.RUNNING or state == power_state.PAUSED:
        flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
    virt_dom.detachDeviceFlags(cfg.to_xml(), flags)
except libvirt.libvirtError as ex:
    error_code = ex.get_error_code()
    if error_code == libvirt.VIR_ERR_NO_DOMAIN:
        LOG.warn(_LW("During detach_interface, "
                     "instance disappeared."),
                 instance=instance)
    else:
        LOG.error(_LE('detaching network adapter failed.'),
                  instance=instance)
        raise exception.InterfaceDetachFailed(
            instance_uuid=instance['uuid'])

We should log the original libvirt error.
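A minimal sketch of the requested change, using plain logging rather than Nova's _LE / LOG.exception machinery (the helper name and uuid are illustrative):

```python
import logging

LOG = logging.getLogger(__name__)


# Carry the original libvirtError text into the log record instead of
# the bare "detaching network adapter failed." message.
def log_detach_failure(ex, instance_uuid):
    msg = ("detaching network adapter failed for instance %s: %s"
           % (instance_uuid, ex))
    LOG.error(msg)
    return msg


out = log_detach_failure(ValueError("Requested operation is not valid"),
                         "960ee0b1")
assert "Requested operation is not valid" in out
```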

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: libvirt logging

** Tags added: libvirt

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374140

Title:
  Need to log the original libvirtError when InterfaceDetachFailed

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  This is not really useful:

  http://logs.openstack.org/17/123917/2/check/check-tempest-dsvm-
  neutron/4bc2052/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-09-25_17_35_11_635

  2014-09-25 17:35:11.635 ERROR nova.virt.libvirt.driver 
[req-50afcbfb-203e-454d-a7eb-1549691caf77 TestNetworkBasicOps-985093118 
TestNetworkBasicOps-1055683132] [instance: 
960ee0b1-9c96-4d5b-b5f5-be76ae19a536] detaching network adapter failed.
  2014-09-25 17:35:11.635 27689 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: nova.objects.instance.Instance object at 
0x422fe90
  2014-09-25 17:35:11.635 27689 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-09-25 17:35:11.635 27689 TRACE 

[Yahoo-eng-team] [Bug 1374158] [NEW] Typo in call to LibvirtConfigObject's parse_dom() method

2014-09-25 Thread Jennifer Mulsow
Public bug reported:

In Juno in nova/virt/libvirt/config.py:

LibvirtConfigGuestCPUNUMA.parse_dom() calls super with a capital 'D', as
parse_Dom():

super(LibvirtConfigGuestCPUNUMA, self).parse_Dom(xmldoc)

LibvirtConfigObject does not have a 'parse_Dom()' method. It has a
'parse_dom()' method. This causes the following exception to be raised.

...
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 1733, in 
parse_dom
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack obj.parse_dom(c)
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 542, in 
parse_dom
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack numa.parse_dom(child)
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 509, in 
parse_dom
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
super(LibvirtConfigGuestCPUNUMA, self).parse_Dom(xmldoc)
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstackAttributeError: 'super' 
object has no attribute 'parse_Dom'
2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
2014-09-25 15:35
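The failure mode is easy to reproduce in isolation, since Python only resolves the misspelled name when the call executes (the classes below are minimal stand-ins, not Nova's real config objects):

```python
# Minimal reproduction: attribute names on super() are resolved at call
# time, so the misspelled parse_Dom only blows up when the branch runs.
class LibvirtConfigObject(object):
    def parse_dom(self, xmldoc):
        return "parsed"


class Broken(LibvirtConfigObject):
    def parse_dom(self, xmldoc):
        # The typo from the report: capital 'D'.
        return super(Broken, self).parse_Dom(xmldoc)


class Fixed(LibvirtConfigObject):
    def parse_dom(self, xmldoc):
        return super(Fixed, self).parse_dom(xmldoc)


assert Fixed().parse_dom(None) == "parsed"
try:
    Broken().parse_dom(None)
except AttributeError as e:
    assert "parse_Dom" in str(e)
```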

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374158

Title:
  Typo in call to LibvirtConfigObject's parse_dom() method

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Juno in nova/virt/libvirt/config.py:

  LibvirtConfigGuestCPUNUMA.parse_dom() calls super with a capital 'D',
  as parse_Dom():

  super(LibvirtConfigGuestCPUNUMA, self).parse_Dom(xmldoc)

  LibvirtConfigObject does not have a 'parse_Dom()' method. It has a
  'parse_dom()' method. This causes the following exception to be
  raised.

  ...
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 1733, in 
parse_dom
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack obj.parse_dom(c)
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 542, in 
parse_dom
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
numa.parse_dom(child)
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py, line 509, in 
parse_dom
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
super(LibvirtConfigGuestCPUNUMA, self).parse_Dom(xmldoc)
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstackAttributeError: 'super' 
object has no attribute 'parse_Dom'
  2014-09-25 15:35:21.546 14344 TRACE nova.api.openstack 
  2014-09-25 15:35

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374199] [NEW] Remove unneeded method _convert_to_nsx_transport_zones

2014-09-25 Thread Aaron Rosen
Public bug reported:

This patch removes the method _convert_to_nsx_transport_zones
and instead calls the one in nsx_utils directly.

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374199

Title:
  Remove unneeded method _convert_to_nsx_transport_zones

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  This patch removes the method _convert_to_nsx_transport_zones
  and instead calls the one in nsx_utils directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374199/+subscriptions



[Yahoo-eng-team] [Bug 1310135] Re: Stopping an instance via the Nova API when using the Nova Ironic driver incorrectly reports powerstate

2014-09-25 Thread Devananda van der Veen
Digging further after proposing a fix to the Nova driver, there is
*also* a race inside of ironic/conductor/manager.py and
ironic/conductor/utils.py -- I am posting a fix for those now.

** Changed in: ironic
   Status: Invalid => In Progress

** Changed in: ironic
 Assignee: Rakesh H S (rh-s) => Devananda van der Veen (devananda)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310135

Title:
  Stopping an instance via the Nova API when using the Nova Ironic
  driver incorrectly reports powerstate

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When using the Ironic Nova driver, a stopped server is still presented
  as Running even after it has been stopped. Checking via the Ironic API
  correctly shows the instance as powered down:

  stack@ironic:~/logs/screen$ nova list
  
+--+-+++-+---+
  | ID   | Name| Status | Task State | 
Power State | Networks  |
  
+--+-+++-+---+
  | 5b43d631-91e1-4384-9b87-93283b3ae958 | testing | ACTIVE | -  | 
Running | private=10.1.0.10 |
  
+--+-+++-+---+
  stack@ironic:~/logs/screen$ nova stop 5b43d631-91e1-4384-9b87-93283b3ae958
  stack@ironic:~/logs/screen$ nova list
  
+--+-+-++-+---+
  | ID   | Name| Status  | Task State | 
Power State | Networks  |
  
+--+-+-++-+---+
  | 5b43d631-91e1-4384-9b87-93283b3ae958 | testing | SHUTOFF | -  | 
Running | private=10.1.0.10 |
  
+--+-+-++-+---+
  stack@ironic:~/logs/screen$ ping 10.1.0.10
  PING 10.1.0.10 (10.1.0.10) 56(84) bytes of data.
  From 172.24.4.2 icmp_seq=1 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=5 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=6 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=7 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=8 Destination Host Unreachable
  --- 10.1.0.10 ping statistics ---
  9 packets transmitted, 0 received, +5 errors, 100% packet loss, time 8000ms
  stack@ironic:~/logs/screen$ ironic node-list
  
+--+--+-++-+
  | UUID | Instance UUID
| Power State | Provisioning State | Maintenance |
  
+--+--+-++-+
  | 91e81c38-4dce-412b-8a1b-a914d28943e4 | 5b43d631-91e1-4384-9b87-93283b3ae958 
| power off   | active | False   |
  
+--+--+-++-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1310135/+subscriptions



[Yahoo-eng-team] [Bug 1374210] [NEW] VimExceptions need to support i18n objects

2014-09-25 Thread Davanum Srinivas (DIMS)
Public bug reported:

When lazy translation is enabled, the i18n Message object does not support
str(), which causes failures like:
  UnicodeError: Message objects do not support str() because they may
  contain non-ascii characters. Please use unicode() or translate()
  instead.
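
The failure mode and the fix can be sketched in miniature (these are
illustrative stand-ins, not the actual oslo.i18n or oslo.vmware classes): a
lazy message refuses str(), so the exception must stringify it via
translate() instead of passing it straight to str():

```python
class Message(object):
    """Stand-in for a lazy i18n message object (illustrative only)."""

    def __init__(self, msgid):
        self.msgid = msgid

    def translate(self, desired_locale=None):
        # A real implementation would consult the catalog for the locale.
        return self.msgid

    def __str__(self):
        raise UnicodeError("Message objects do not support str(); "
                           "use unicode() or translate() instead.")


class VimException(Exception):
    """Sketch of an exception that handles lazy messages safely."""

    def __init__(self, message):
        self.message = message
        # Translate before handing the text to Exception, so that
        # str(exc) never calls Message.__str__.
        if isinstance(message, Message):
            message = message.translate()
        super(VimException, self).__init__(message)


exc = VimException(Message("fault occurred"))
print(exc)  # -> fault occurred, with no UnicodeError
```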

** Affects: nova
 Importance: Medium
 Assignee: James Carey (jecarey)
 Status: Confirmed

** Affects: oslo.vmware
 Importance: High
 Assignee: James Carey (jecarey)
 Status: Confirmed


** Tags: vmware

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: vmware

** Changed in: nova
   Status: New => Confirmed

** Changed in: oslo.vmware
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: oslo.vmware
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => James Carey (jecarey)

** Changed in: oslo.vmware
 Assignee: (unassigned) => James Carey (jecarey)

** Changed in: oslo.vmware
Milestone: None => next-kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374210

Title:
  VimExceptions need to support i18n objects

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Confirmed

Bug description:
  When lazy translation is enabled, the i18n Message object does not support
  str(), which causes failures like:
UnicodeError: Message objects do not support str() because they may
contain non-ascii characters. Please use unicode() or translate()
instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374210/+subscriptions



[Yahoo-eng-team] [Bug 1362847] Re: Spell Errors in Keystone core.py

2014-09-25 Thread Dolph Mathews
I'm forced into assuming this has been fixed without being tracked,
since there are no actual spelling errors cited here to confirm that
assumption against.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362847

Title:
  Spell Errors in Keystone core.py

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  There are a few spelling errors that I observed in Keystone's core.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362847/+subscriptions



[Yahoo-eng-team] [Bug 1374000] Re: VMWare: file writer class uses unsafe SSL connection

2014-09-25 Thread Davanum Srinivas (DIMS)
Same code is also in oslo/vmware/rw_handles.py


** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374000

Title:
  VMWare: file writer class uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  VMwareHTTPWriteFile uses httplib.HTTPSConnection objects. In Python
  2.x those do not perform CA certificate checks, so client connections
  are vulnerable to man-in-the-middle (MITM) attacks.

  This is the specific version of
  https://bugs.launchpad.net/nova/+bug/1188189
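
The safer pattern can be sketched as follows (the host name is a
placeholder, and this is only an illustration of verified TLS, not the
patch under review): build an SSL context that loads the system CA bundle
and verifies both certificate and host name, instead of relying on the
unverified Python 2.x HTTPSConnection default:

```python
import ssl
import http.client  # httplib on Python 2

# create_default_context() loads the system CA bundle and enables
# certificate and host-name verification by default.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # -> True
print(context.check_hostname)                    # -> True

# The connection object is created lazily; the TLS handshake (and the
# certificate check) happens on the first request. "vcenter.example.org"
# is a placeholder host.
conn = http.client.HTTPSConnection("vcenter.example.org", context=context)
```

With this context, a forged certificate presented by a MITM fails the
handshake rather than being silently accepted.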

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374000/+subscriptions



[Yahoo-eng-team] [Bug 1274034] Re: Neutron firewall anti-spoofing does not prevent ARP poisoning

2014-09-25 Thread Nathan Kinder
This was published as OSSN-0027:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0027

** Changed in: ossn
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274034

Title:
  Neutron firewall anti-spoofing does not prevent ARP poisoning

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Security Advisories:
  Invalid
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The neutron firewall driver 'iptables_firewall' does not prevent ARP
  cache poisoning.
  When anti-spoofing rules were handled by Nova, a list of rules was added
  through the libvirt network filter feature:
  - no-mac-spoofing
  - no-ip-spoofing
  - no-arp-spoofing
  - nova-no-nd-reflection
  - allow-dhcp-server

  Currently, the neutron firewall driver 'iptables_firewall' handles only
  MAC and IP anti-spoofing rules.

  This is a security vulnerability, especially on shared networks.

  To reproduce ARP cache poisoning and a man-in-the-middle attack:
  - Create a private network/subnet 10.0.0.0/24
  - Start 2 VMs attached to that private network (VM1: IP 10.0.0.3, VM2:
    10.0.0.4)
  - Log on to VM1 and install ettercap [1]
  - Launch the command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
  - Log on to VM2 as well (via the VNC/spice console) and ping google.fr
    => the ping succeeds
  - Go back to VM1 and observe VM2's ping to google.fr passing through VM1
    instead of being sent directly to the network gateway; VM1 forwards it
    on to the gateway. The ICMP capture looks something like [2]
  - Go back to VM2 and check the ARP table => the MAC address associated
    with the GW is VM1's MAC address

  [1] http://ettercap.github.io/ettercap/
  [2] http://paste.openstack.org/show/62112/
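
The missing class of rule can be sketched with ebtables (the interface
name, IP, and MAC below are placeholders, and this is only an illustration
of per-port ARP filtering, not the rule set Neutron eventually adopted):

```shell
# Illustrative only: drop ARP frames arriving from the VM's tap interface
# whose ARP source IP or source MAC is not the one assigned to the port.
# tapXXX, 10.0.0.3 and fa:16:3e:00:00:01 are placeholders.
ebtables -A FORWARD -i tapXXX -p ARP ! --arp-ip-src 10.0.0.3 -j DROP
ebtables -A FORWARD -i tapXXX -p ARP ! --arp-mac-src fa:16:3e:00:00:01 -j DROP
```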

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions



[Yahoo-eng-team] [Bug 1004114] Re: Password logging

2014-09-25 Thread Nathan Kinder
This was published as OSSN-0024:

https://wiki.openstack.org/wiki/OSSN/OSSN-0024

** Changed in: ossn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1004114

Title:
  Password logging

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Security Notes:
  Fix Released
Status in Python client library for Keystone:
  Fix Released

Bug description:
  When the log level is set to DEBUG, keystoneclient's full-request
  logging mechanism kicks in, exposing plaintext passwords, etc.

  This bug is mostly out of the scope of Horizon, however Horizon can
  also be more secure in this regard. We should make sure that wherever
  we *are* handling sensitive data we use Django's error report
  filtering mechanisms so they don't appear in tracebacks, etc.
  (https://docs.djangoproject.com/en/dev/howto/error-reporting
  /#filtering-error-reports)

  Keystone may also want to look at respecting such annotations in their
  logging mechanism, i.e. if Django were properly annotating these data
  objects, keystoneclient could check for those annotations and properly
  sanitize the log output.

  If not this exact mechanism, then something similar would be wise.

  For the time being, it's also worth documenting in both projects that
  a log level of DEBUG will log passwords in plain text.
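
One generic mitigation, independent of Django's error-report filters, is to
scrub known-sensitive keys before anything reaches a DEBUG log. The key
list and helper name below are illustrative, not the actual Horizon or
keystoneclient implementation:

```python
import re

# Keys whose values must never appear in logs (illustrative list).
_SENSITIVE_KEYS = ('password', 'auth_token', 'secret')

# Match key, separator ("=" or ":", optionally quoted), then the value.
_PATTERN = re.compile(
    r"(%s)(['\"]?\s*[=:]\s*['\"]?)([^\s,}'\"]+)" % '|'.join(_SENSITIVE_KEYS),
    re.IGNORECASE)


def mask_sensitive(message):
    """Replace the value following any sensitive key with ***."""
    return _PATTERN.sub(r'\1\2***', message)


print(mask_sensitive('request body: {"user": "demo", "password": "s3cret"}'))
# -> request body: {"user": "demo", "password": "***"}
```

A sanitizer like this would run on the request/response text just before
the DEBUG-level logging call.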

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1004114/+subscriptions



[Yahoo-eng-team] [Bug 1374257] [NEW] LBaaS API accepts invalid parameters

2014-09-25 Thread Xurong Yang
Public bug reported:

The LBaaS API doesn't check the validity of input parameters. Creating a
pool with an invalid subnet_id, and updating a pool with invalid
health_monitors, can both succeed. The API should return a BadRequest
response instead.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374257

Title:
  LBaaS API accepts invalid parameters

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The LBaaS API doesn't check the validity of input parameters. Creating
  a pool with an invalid subnet_id, and updating a pool with invalid
  health_monitors, can both succeed. The API should return a BadRequest
  response instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374257/+subscriptions



[Yahoo-eng-team] [Bug 1374260] [NEW] HTTPBadRequest is raised when creating floating_ip_bulk which already exists

2014-09-25 Thread Haiwei Xu
Public bug reported:

When creating a floating_ip_bulk which already exists,
HTTPBadRequest(400) is returned, which should be changed to
HTTPConflict(409).

$ nova  floating-ip-bulk-create 192.0.20.0/28 --pool private
ERROR (BadRequest): Floating ip 192.0.20.1 already exists. (HTTP 400) 
(Request-ID: req-cf6ba91a-8a5f-4772-91b5-a159d5c06719)
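
The intended mapping can be sketched as follows (the exception and helper
names are illustrative, not Nova's actual classes): a resource that already
exists is a conflict (409), not malformed input (400):

```python
from http import HTTPStatus


class FloatingIpExists(Exception):
    """Raised when a floating IP in the requested range already exists."""


def status_for(exc):
    """Map an API-layer exception to an HTTP status code."""
    if isinstance(exc, FloatingIpExists):
        return HTTPStatus.CONFLICT       # 409: resource already exists
    return HTTPStatus.BAD_REQUEST        # 400: malformed input


print(int(status_for(FloatingIpExists())))  # -> 409
```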

** Affects: nova
 Importance: Undecided
 Assignee: Haiwei Xu (xu-haiwei)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374260

Title:
  HTTPBadRequest is raised when creating floating_ip_bulk which already
  exists

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating a floating_ip_bulk which already exists,
  HTTPBadRequest(400) is returned, which should be changed to
  HTTPConflict(409).

  $ nova  floating-ip-bulk-create 192.0.20.0/28 --pool private
  ERROR (BadRequest): Floating ip 192.0.20.1 already exists. (HTTP 400) 
(Request-ID: req-cf6ba91a-8a5f-4772-91b5-a159d5c06719)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374260/+subscriptions



[Yahoo-eng-team] [Bug 1374261] [NEW] BSN consistency hash not multi-server safe

2014-09-25 Thread Kevin Benton
Public bug reported:

Multiple neutron servers may read from the consistency hash table in the
big switch plugin simultaneously, which will cause the one with a later
request to receive an inconsistency error.

This is an issue with RPC induced backend requests (port update) or
active-active deployments.
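
The usual fix for this class of race is a compare-and-swap update: the
UPDATE only succeeds if the hash is still the value the server read, so a
server that lost the race can re-read and retry instead of erroring out.
The sketch below uses sqlite3 with an illustrative schema, not the Big
Switch plugin's actual table:

```python
import sqlite3

# Illustrative schema: a single row holding the current consistency hash.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE consistency_hashes (id INTEGER PRIMARY KEY, hash TEXT)")
db.execute("INSERT INTO consistency_hashes VALUES (1, 'aaa')")


def swap_hash(conn, expected, new):
    """Atomically replace `expected` with `new`; False means we lost the race."""
    cur = conn.execute(
        "UPDATE consistency_hashes SET hash = ? WHERE id = 1 AND hash = ?",
        (new, expected))
    conn.commit()
    # rowcount is 1 only if the hash was still `expected` when we updated.
    return cur.rowcount == 1


print(swap_hash(db, "aaa", "bbb"))  # -> True  (we won the race)
print(swap_hash(db, "aaa", "ccc"))  # -> False (another writer got there first)
```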

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress


** Tags: icehouse-backport-potential

** Tags added: folsom-backport-potential

** Tags removed: folsom-backport-potential
** Tags added: icehouse-backport-potential

** Changed in: neutron
 Assignee: (unassigned) = Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374261

Title:
  BSN consistency hash not multi-server safe

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Multiple neutron servers may read from the consistency hash table in
  the big switch plugin simultaneously, which will cause the one with a
  later request to receive an inconsistency error.

  This is an issue with RPC induced backend requests (port update) or
  active-active deployments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374261/+subscriptions



[Yahoo-eng-team] [Bug 1208743] Re: network uuid hasn't been checked in create server

2014-09-25 Thread Christopher Yeoh
** Changed in: nova
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1208743

Title:
  network uuid hasn't been checked in create server

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I ported a negative tempest test to v3, it could not pass, though
  it passes with the nova v2 API. I think there is no validation for
  networks.

  @attr(type=['negative', 'gate'])
  def test_create_with_invalid_network_uuid(self):
      # Pass an invalid network uuid while creating a server
      networks = [{'fixed_ip': '10.0.1.1',
                   'uuid': 'a-b-c-d-e-f-g-h-i-j'}]
      self.assertRaises(exceptions.BadRequest,
                        self.create_server,
                        networks=networks)

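
The missing check can be sketched with the standard uuid module (the
helper name is illustrative):

```python
import uuid


def is_uuid_like(value):
    """Return True only if `value` parses as a canonical UUID string."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (TypeError, ValueError, AttributeError):
        return False


print(is_uuid_like('a-b-c-d-e-f-g-h-i-j'))                   # -> False
print(is_uuid_like('d91862ce-d80d-44d0-957c-8b28370dd460'))  # -> True
```

A server-create request whose network uuid fails this check would then be
rejected with a 400 instead of being accepted.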
  The following is the log: 

  ==
  FAIL: 
tempest.api.compute.servers.v3.test_servers_negative.ServersNegativeV3TestJSON.test_create_with_invalid_network_uuid[gate,negative]
  --
  _StringException: Traceback (most recent call last):
File 
/opt/stack/tempest/tempest/api/compute/servers/v3/test_servers_negative.py, 
line 153, in test_create_with_invalid_network_uuid
  networks=networks)
File 
/opt/stack/tempest/.venv/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 394, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/opt/stack/tempest/.venv/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 417, in assertThat
  raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: bound method type.create_server of class 
'tempest.api.compute.servers.v3.test_servers_negative.ServersNegativeV3TestJSON'
 returned ({'status': '202', 'content-length': '345', 'x-compute-request-id': 
'req-0d34c7cf-5047-4c75-848b-d30df693ead6', 'location': 
'http://192.168.1.101:8774/v3/servers/d91862ce-d80d-44d0-957c-8b28370dd460', 
'date': 'Tue, 06 Aug 2013 09:03:28 GMT', 'content-type': 'application/json'}, 
{u'links': [{u'href': 
u'http://192.168.1.101:8774/v3/servers/d91862ce-d80d-44d0-957c-8b28370dd460', 
u'rel': u'self'}, {u'href': 
u'http://192.168.1.101:8774/servers/d91862ce-d80d-44d0-957c-8b28370dd460', 
u'rel': u'bookmark'}], u'id': u'd91862ce-d80d-44d0-957c-8b28370dd460', 
u'security_groups': [{u'name': u'default'}], u'adminPass': u'r8vSmWK5W8rC'})

    begin captured logging  
  tempest.common.rest_client: INFO: Request: POST 
http://192.168.1.101:8774/v3/servers
  tempest.common.rest_client: DEBUG: Request Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': 'Token 
omitted'}
  tempest.common.rest_client: DEBUG: Request Body: {server: {flavorRef: 42, 
name: ServersNegativeV3TestJSON-instance1742202142, imageRef: 
cade0819-2939-484b-a52f-600d039aefc1, networks: [{fixed_ip: 10.0.1.1, 
uuid: a-b-c-d-e-f-g-h-i-j}]}}
  tempest.common.rest_client: INFO: Response Status: 202
  tempest.common.rest_client: DEBUG: Response Headers: {'content-length': 
'345', 'location': 
'http://192.168.1.101:8774/v3/servers/d91862ce-d80d-44d0-957c-8b28370dd460', 
'date': 'Tue, 06 Aug 2013 09:03:28 GMT', 'x-compute-request-id': 
'req-0d34c7cf-5047-4c75-848b-d30df693ead6', 'content-type': 'application/json'}
  tempest.common.rest_client: DEBUG: Response Body: {server: 
{security_groups: [{name: default}], id: 
d91862ce-d80d-44d0-957c-8b28370dd460, links: [{href: 
http://192.168.1.101:8774/v3/servers/d91862ce-d80d-44d0-957c-8b28370dd460;, 
rel: self}, {href: 
http://192.168.1.101:8774/servers/d91862ce-d80d-44d0-957c-8b28370dd460;, 
rel: bookmark}], adminPass: r8vSmWK5W8rC}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1208743/+subscriptions
