[Yahoo-eng-team] [Bug 1338735] Re: Live-migration with volumes creates orphan access records

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338735

Title:
  Live-migration with volumes creates orphan access records

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When live migration is performed on an instance with a volume
  attached, nova sends two initialize-connection calls but only one
  terminate-connection call. This leaves orphan access records on some
  storage arrays (tested with the Dell EqualLogic driver).

  Steps to reproduce:
  1. Set up one controller and two compute nodes, with cinder volumes backed by iSCSI or a storage array.
  2. Create an instance.
  3. Create a volume and attach it to the instance.
  4. Check which compute node hosts the instance (computenode1 or 2):
  nova show instance1
  5. Live-migrate the instance to the second compute node:
  nova live-migration instance1 computenode2
  6. Check the cinder API log (c-api):
  There will be two os-initialize_connection calls but only one os-terminate_connection call; there should be exactly one of each.
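
  A quick way to confirm the imbalance is to count the two API actions in the c-api log; a minimal sketch (the log path is an assumption, adjust for your deployment):

      # Count initialize/terminate connection calls in the Cinder API log.
      # Log path is deployment-specific.
      from collections import Counter

      counts = Counter()
      with open('/var/log/cinder/c-api.log') as log:
          for line in log:
              if 'os-initialize_connection' in line:
                  counts['initialize'] += 1
              elif 'os-terminate_connection' in line:
                  counts['terminate'] += 1

      # A clean attach/migrate/detach cycle should leave these equal.
      print(counts)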

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1338735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512199] [NEW] changing a VM's fixed IPs leaves it unreachable from VMs in other networks

2015-11-01 Thread yujie
Public bug reported:

I use Kilo with DVR and VXLAN. The environment looks like:

   compute1 ---vxlan--- compute2
    /     \                |
 vm2-2   vm3-1           vm2-1

         router1
         /     \
      net2     net3
      /  \        |
  vm2-1  vm2-2  vm3-1

vm2-1 (192.168.2.3) and vm2-2 (192.168.2.4) are in the same network
(net2, 192.168.2.0/24) but are hosted on different compute nodes. vm3-1
is in net3 (192.168.3.0/24). net2 and net3 are connected by router1.
All three VMs are in the default security group; no firewall is used.

1. Change the IPs of vm2-1 with the command below:
neutron port-update portID --fixed-ip subnet_id=subnetID,ip_address=192.168.2.10 --fixed-ip subnet_id=subnetID,ip_address=192.168.2.20
Inside vm2-1 (a CirrOS guest), running "sudo udhcpc" produced a correct DHCP exchange, but the IP did not change.
After rebooting vm2-1, its IP became 192.168.2.20.

2. vm2-2 can ping 192.168.2.20 successfully, but vm3-1 cannot.

From packet captures and related reading, the cause appears to be:
1. The new IP (192.168.2.20) and the MAC of vm2-1 were not written to the ARP cache of the router1 namespace on the compute1 node.
2. In DVR mode, the ARP request from the gateway port (192.168.2.1) on compute1 to vm2-1 is dropped by the flow tables on compute2, so the ARP request (192.168.2.1 -> 192.168.2.20) never reaches vm2-1.
3. The ARP request from vm2-2 (192.168.2.4 -> 192.168.2.20) is not dropped, so vm2-2 can still reach vm2-1.

In my opinion, if both new fixed IPs of vm2-1 (192.168.2.10 and
192.168.2.20) and its MAC were written to the ARP cache of the router1
namespace on the compute1 node, the problem would be resolved. Currently
only one IP (192.168.2.10) and the MAC are written.

BTW, with only one fixed IP assigned to vm2-1 everything works fine;
with two fixed IPs the problem above almost always occurs.
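
To check which entries DVR actually programmed, one can dump the neighbour table inside the router namespace on compute1; a minimal sketch (the router UUID is a placeholder):

    # Dump ARP entries for the two new fixed IPs inside the DVR router
    # namespace on compute1. Requires root; the router UUID is hypothetical.
    import subprocess

    ns = 'qrouter-ROUTER-UUID'  # substitute the real router UUID
    out = subprocess.check_output(
        ['ip', 'netns', 'exec', ns, 'ip', 'neigh', 'show'])
    for line in out.decode().splitlines():
        if line.startswith(('192.168.2.10', '192.168.2.20')):
            print(line)  # both IPs should appear as PERMANENT entries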

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1503055] Re: Use AssertIsNone

2015-11-01 Thread Goutham Pacha Ravi
** Also affects: manila
   Importance: Undecided
   Status: New

** No longer affects: manila

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503055

Title:
  Use AssertIsNone

Status in neutron:
  Confirmed
Status in senlin:
  Fix Committed

Bug description:
  Neutron should use the specific assertion:

    self.assertIs(Not)None(observed)

  instead of the generic assertion:

    self.assert(Not)Equal(None, observed)
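
  Spelled out, the two forms look like this (a minimal sketch using the standard unittest API):

      import unittest

      class ExampleTest(unittest.TestCase):
          def test_none_assertions(self):
              observed = None
              self.assertIsNone(observed)       # preferred: specific assertion
              self.assertIsNotNone('value')     # preferred negative form
              self.assertEqual(None, observed)  # discouraged: generic form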

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512207] [NEW] Fix usage of assertions in Manila unit tests

2015-11-01 Thread yapeng Yang
Public bug reported:

Manila should use the specific assertions:

  self.assertTrue(observed) / self.assertFalse(observed)

instead of the generic assertion:

  self.assertEqual(True/False, observed)
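
A side-by-side illustration (a minimal sketch using the standard unittest API):

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_boolean_assertions(self):
            observed = True
            self.assertTrue(observed)         # preferred: specific assertion
            self.assertFalse(not observed)    # preferred negative form
            self.assertEqual(True, observed)  # discouraged: generic form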

** Affects: manila
 Importance: Undecided
 Assignee: yapeng Yang (yang-yapeng)
 Status: In Progress

** Project changed: neutron => manila

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512207

Title:
  Fix usage of assertions in Manila unit tests

Status in Manila:
  In Progress

Bug description:
  Manila should use the specific assertions:

self.assertTrue(observed) / self.assertFalse(observed)

  instead of the generic assertion:

self.assertEqual(True/False, observed)

To manage notifications about this bug go to:
https://bugs.launchpad.net/manila/+bug/1512207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490842] Re: UnexpectedTaskStateError_Remote: Unexpected task state: expecting (u'resize_migrating',) but the actual state is None

2015-11-01 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490842

Title:
  UnexpectedTaskStateError_Remote: Unexpected task state: expecting
  (u'resize_migrating',) but the actual state is None

Status in OpenStack Compute (nova):
  Expired

Bug description:
  [req-7a72cf1e-b163-4863-9330-f2b60bd15a6e None] [instance: 
5dbb0778-e7d2-42bd-8427-b727301972cb] Setting instance vm_state to ERROR
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] Traceback (most recent call 
last):
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6705, in 
_error_out_instance_on_exception
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] yield
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3841, in 
resize_instance
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
instance.save(expected_task_state=task_states.RESIZE_MIGRATING)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 189, in wrapper
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] ctxt, self, fn.__name__, 
args, kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 351, in 
object_action
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] objmethod=objmethod, 
args=args, kwargs=kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in 
call
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] retry=self.retry)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] timeout=timeout, 
retry=retry)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
408, in send
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] retry=retry)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
399, in _send
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] raise result
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
UnexpectedTaskStateError_Remote: Unexpected task state: expecting 
(u'resize_migrating',) but the actual state is None
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] Traceback (most recent call 
last):
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in 
_object_dispatch
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] return getattr(target, 
method)(context, *args, **kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 204, in wrapper
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] return fn(self, ctxt, 
*args, **kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 500, in save
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
columns_to_join=_expected_cols(expected_attrs))
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/api.py", line 766, in 
instance_update_and_get_original
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
columns_to_join=columns_to_join)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 143, in 
wrapper
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] return f(*args, **kwargs)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2318, in 
instance_update_and_get_original
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
columns_to_join=columns_to_join)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb]   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2369, in 
_instance_update
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] actual=actual_state, 
expected=expected)
  [instance: 5dbb0778-e7d2-42bd-8427-b727301972cb] 
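
  The failing call is nova's compare-and-swap guard: instance.save(expected_task_state=...) aborts if another actor already changed the task state. A simplified sketch of the pattern (illustrative only, not nova's actual implementation):

      # Simplified illustration of the expected_task_state guard.
      class UnexpectedTaskStateError(Exception):
          pass

      class Instance(object):
          def __init__(self):
              # e.g. another worker or an error handler already cleared it
              self.task_state = None

          def save(self, expected_task_state=None):
              if (expected_task_state is not None
                      and self.task_state != expected_task_state):
                  raise UnexpectedTaskStateError(
                      'expecting %r but the actual state is %r'
                      % (expected_task_state, self.task_state))
              # ... persist the instance record ...

      # Reproduces the error in the log above:
      Instance().save(expected_task_state='resize_migrating')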

[Yahoo-eng-team] [Bug 1362672] Re: Volume stuck in "deleting" state cannot be deleted.

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362672

Title:
  Volume stuck in "deleting" state cannot be deleted.

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  [root@ctrl01 test_cloud]# nova volume-snapshot-list
  +--------------------------------------+--------------------------------------+----------+---------------------------------------------------+------+
  | ID                                   | Volume ID                            | Status   | Display Name                                      | Size |
  +--------------------------------------+--------------------------------------+----------+---------------------------------------------------+------+
  | 89053e9b-d35a-47d2-98dd-4b031ce4c6b4 | 505fd31d-7b33-4afa-ad0d-e6fb1475a994 | deleting | FuncTests_Python_XCloudAPI_VolumeSnapshot_IQIG8P  | 2    |
  | 9925ed11-8c0e-4979-be61-1f156ed4ba2c | 5f51859e-001d-4ceb-9e70-3a371233615b | deleting | FuncTests_Python_XCloudAPI_VolumeSnapshot_IQIG8P  | 2    |
  +--------------------------------------+--------------------------------------+----------+---------------------------------------------------+------+
  [root@pdc-ostck-ctrl01 test_cloud]# nova volume-snapshot-delete 89053e9b-d35a-47d2-98dd-4b031ce4c6b4
  ERROR: Invalid snapshot: Volume Snapshot status must be available or error (HTTP 400) (Request-ID: req-735926a0-e306-4635-9e51-49d7b27614cf)
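
  A common operator workaround for snapshots wedged in a transient state is the admin-only reset-state call (an assumption about the recovery path here; it only rewrites the DB record, so use with care):

  cinder snapshot-reset-state --state error 89053e9b-d35a-47d2-98dd-4b031ce4c6b4

  after which the snapshot can be deleted normally.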

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359407] Re: tempest errors in logs

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359407

Title:
  tempest errors in logs

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Invalid
Status in Glance:
  New
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  
  This is regarding a tempest run on a Keystone change, here's the log: 
http://logs.openstack.org/73/111573/3/check/check-tempest-dsvm-full/f4e3313/console.html

  All the tempest tests ran successfully. Then it runs the log checker
  and there are several errors in the logs.

  - Log File Has Errors: n-cond

  nova.quota - Failed to commit reservations ...

  - Log File Has Errors: n-cpu

  There are several errors here:

  glanceclient.common.http -- Request returned failure status 404.
  (there's several of these)

  oslo.messaging.rpc.dispatcher -- Exception during message handling: 
Unexpected task state: expecting (u'powering-off',) but the actual state is None
  (this generates a lot of logs and there are several of them)

  - Log File Has Errors: n-api

  glanceclient.common.http - Request returned failure status 404.
  (there's several of these)

  - Log File Has Errors: g-api

  glance.store.sheepdog [-] Error in store configuration: [Errno 2] No such 
file or directory
  swiftclient [-] Container HEAD failed: 
http://127.0.0.1:8080/v1/AUTH_3c05c27e027f451b9837e04c9d8ae1e5/glance 404 Not 
Found

  - Log File Has Errors: c-api

  ERROR cinder.volume.api Volume status must be available to reserve
  (There's 4 of these)

  - Log File Has Errors: ceilometer-alarm-evaluator

  ceilometer.alarm.service [-] alarm evaluation cycle failed
  (several of these)

  - Log File Has Errors: ceilometer-acentral

  ERROR ceilometer.neutron_client [-] publicURL endpoint for network service 
not found
  (there's several errors related to endpoints)

  - Log File Has Errors: ceilometer-acompute

  ceilometer.compute.pollsters.disk [-] Ignoring instance instance-0087: 
internal error: cannot find statistics for device 'virtio-disk1'
  (there's a few of these)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1359407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375519] Re: Cisco N1kv: Enable quota support in stable/icehouse

2015-11-01 Thread Cedric Brandily
** Changed in: neutron/icehouse
   Status: New => Fix Released

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375519

Title:
  Cisco N1kv: Enable quota support in stable/icehouse

Status in networking-cisco:
  New
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  With the quotas table being populated in stable/icehouse, the N1kv
  plugin should be able to support quotas. Otherwise VMs end up in error
  state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1375519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218123] Re: shared filesystem drivers never disconnect

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218123

Title:
  shared filesystem drivers never disconnect

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  While refactoring the NFS code and moving the higher level functions
  in to brick it was noticed that the NFS driver doesn't actually "do
  anything" on a disconnect_volume.  Understood that disconnecting NFS
  mounts is tricky, but leaving stale connections around in a large
  scale env like OpenStack seems like pretty bad practice.

  There should probably be some tracking here, eharney had a good
  suggestion of using something like a ref counter of attaches and
  perhaps an audit process could kill them all when none are in use
  anymore (I winged that a bit, he may have a better idea/description
  here).

  Also note that this issue is present in the existing Nova code base as
  well; it was just copied over into the cinder/brick modules.
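
  A minimal sketch of that ref-counting idea (hypothetical, not cinder/brick code):

      # Count attaches per mount point; only unmount on the last
      # disconnect. An audit process could sweep zero-ref entries.
      import collections
      import subprocess

      class MountTracker(object):
          def __init__(self):
              self._refs = collections.Counter()

          def connect_volume(self, mountpoint):
              self._refs[mountpoint] += 1

          def disconnect_volume(self, mountpoint):
              self._refs[mountpoint] -= 1
              if self._refs[mountpoint] <= 0:
                  del self._refs[mountpoint]
                  subprocess.call(['umount', mountpoint])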

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1218123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206396] Re: Name validations for compute resources

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1206396

Title:
  Name validations for compute resources

Status in Cinder:
  Invalid
Status in OpenStack Identity (keystone):
  Confirmed
Status in OpenStack Compute (nova):
  Opinion
Status in oslo-incubator:
  Invalid

Bug description:
  There is no consistent validation for the 'name' parameter across
  compute resources. The following characters need to be validated in
  the input:

  1. Whitespace-only strings (like ' ' or '')
  2. Leading or trailing whitespace (like '   test123 ')

  Currently flavor name, volume name, role name, group name, security
  group name and keypair name accept input in each of the two cases (no
  validation).

  Adding the two cases above to name parameter validation would be useful.
  It makes sense to move this validation code to a common utility that can be used across all resource-creation methods.
  Although the 'name' is not as significant as the resource's ID, it does act as a label for the resource and should be validated properly.

  For example, from the dashboard, a role with a blank name (i.e. a
  single-whitespace string like ' ') can be created. This gets stored in
  the keystone db as NULL and appears in the dashboard roles drop-down
  during Create User as None. This behavior should be fixed.

  Refer:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L3086
  _validate_new_keypair() can be moved to a common utility to provide
  'name' field validations.
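
  A minimal sketch of such a shared validator (hypothetical; names are illustrative, not the actual nova/keystone helpers):

      def validate_name(name):
          """Reject empty, whitespace-only, or padded resource names."""
          if name is None or not name.strip():
              raise ValueError('name must not be empty or whitespace-only')
          if name != name.strip():
              raise ValueError('name must not have leading or trailing '
                               'whitespace')
          return name

      validate_name('test123')          # ok
      try:
          validate_name('   test123 ')  # case 2 from the report
      except ValueError as exc:
          print(exc)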

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1206396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274523] Re: connection_trace does not work with DB2 backend

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  Invalid
Status in Glance:
  Triaged
Status in heat:
  Triaged
Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo.db:
  Fix Released

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).
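
  For reference, the option is set in the service configuration; a sketch (the section name may vary by release):

      [database]
      connection_trace = True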

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services."binary" AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1274523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244092] Re: db connection retrying doesn't work against db2

2015-11-01 Thread Sean McGinnis
Closing stale bug. If this is still an issue please reopen.

** Changed in: cinder
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244092

Title:
  db connection retrying doesn't work against db2

Status in Cinder:
  Invalid
Status in Cinder havana series:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in oslo-incubator:
  Fix Released

Bug description:
  When I start OpenStack in the following order, the services cannot be started without a db2 connection:
  1. start OpenStack services;
  2. start the db2 service.

  I checked the code in session.py under
  nova/openstack/common/db/sqlalchemy. The root cause is that the db2
  connection error code "-30081" is not in conn_err_codes in the
  _is_db_connection_error function, so the connection-retry logic is
  skipped for db2. To enable connection retries against db2, we need to
  add db2 support to _is_db_connection_error.
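
  A minimal sketch of the proposed change (the real function lives in the old oslo-incubator session.py; the codes other than -30081 are the existing MySQL ones):

      # Sketch only, not the upstream patch. 'args' is the stringified
      # DBAPI error; the MySQL codes were already handled, -30081 is DB2's.
      def _is_db_connection_error(args):
          conn_err_codes = ('2002', '2003', '2006', '-30081')
          for err_code in conn_err_codes:
              if args.find(err_code) != -1:
                  return True
          return False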

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1244092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512239] [NEW] Typo in doc about "git co {branch_name}"

2015-11-01 Thread zouyee
Public bug reported:

Typo in doc about "git co {branch_name}" in
http://docs.openstack.org/infra/publications/ci-automation/#%2814%29

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1512239

Title:
  Typo in doc about "git co {branch_name}"

Status in Glance:
  New

Bug description:
  Typo in doc about "git co {branch_name}" in
  http://docs.openstack.org/infra/publications/ci-automation/#%2814%29

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1512239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512231] [NEW] Glance doesn't return direct_url if location is None.

2015-11-01 Thread wangxiyuan
Public bug reported:

When the show_image_direct_url option is turned on, image-show should
return the image's direct_url. If the image has no locations, it should
return []. But currently Glance returns nothing when showing a queued
image.

We should make the behavior the same as that of the
show_multiple_locations option.
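
A minimal sketch of the intended behaviour (hypothetical, not glance's actual code):

    # Always emit direct_url when the option is on, mirroring how
    # show_multiple_locations always emits 'locations'.
    def add_direct_url(image_view, image, show_image_direct_url):
        if show_image_direct_url:
            locations = image.get('locations') or []
            # Queued image with no locations: return [] instead of
            # omitting the key entirely.
            image_view['direct_url'] = (locations[0]['url'] if locations
                                        else [])
        return image_view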

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1512231

Title:
  Glance doesn't return direct_url if location is None.

Status in Glance:
  New

Bug description:
  When turn on the show_image_direct_url option, the image-show should
  return the image's direct_url. If the image has no locations, it
  should return []. But now Glance return nothing when show a queued
  image.

  We should make the behavior same as the show_multiple_locations option
  does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1512231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512233] [NEW] The explanation of the Remote field in Add Rule is partially not displayed (Japanese only)

2015-11-01 Thread Kenji Ishii
Public bug reported:

The explanation of the Remote field in Add Rule is partially not
displayed (Japanese only).

We need to show it as below (ja/django.po):
  許可 IP 範囲を指定するには、\"CIDR\" を選択してください。他のセキュリティーグループのすべてのメンバーからアクセスを許可するには、\"セキュリティーグループ\" を選択してください。
  (Roughly: 'To specify an allowed IP range, select "CIDR". To allow access from all members of another security group, select "Security Group".')

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Attachment added: "bug_image.png"
   
https://bugs.launchpad.net/bugs/1512233/+attachment/4510975/+files/bug_image.png

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1512233

Title:
  The explanation of the Remote field in Add Rule is partially not
  displayed (Japanese only).

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The explanation of the Remote field in Add Rule is partially not
  displayed (Japanese only).

  We need to show it as below (ja/django.po):
許可 IP 範囲を指定するには、\"CIDR\" を選択してください。他のセキュリティーグループのすべてのメンバーからアクセスを許可するには、\"セキュリティーグループ\" を選択してください。
(Roughly: 'To specify an allowed IP range, select "CIDR". To allow access from all members of another security group, select "Security Group".')

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1512233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp