I have now also tried:

--volume_scheduler_driver=nova.scheduler.simple.SimpleScheduler

And I get the exact same bad behavior when it hits the second server.
Could this be a LUN number conflict? They both default to LUN 0.
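
One way I can check that on the compute node (assuming open-iscsi on the
initiator side) is to dump the iSCSI session detail, which lists the LUN
behind each attached target:

iscsiadm -m session -P 3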

** Description changed:

  OS: Ubuntu 12.04
  Arch: amd64
- Nova Version:  2012.1+stable~20120612-3ee026e-0ubuntu1.2  OpenStack Compute - storage
+ Nova Version:  2012.1+stable~20120612-3ee026e-0ubuntu1.2
  Storage Driver: Nexenta
  
  Using a single Nexenta server with one nova-volume service works fine.
  When I add a second nova-volume service pointing at a second Nexenta
  server, volume creation fails on the second server. Volumes on the
  first server still work.
  
  (Both services use the same rabbit settings and talk to the API OK.)
  
  scheduler conf:
  --scheduler_driver=nova.scheduler.multi.MultiScheduler
  --volume_scheduler_driver=nova.scheduler.chance.ChanceScheduler
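  
  (For context: ChanceScheduler just picks a random live host for the
  topic, so each create should land on either volume service with roughly
  equal probability. A rough paraphrase of the Essex source, not a
  verbatim copy:)
  
  # rough paraphrase of nova.scheduler.chance.ChanceScheduler: any
  # "volume" service that is up is a valid target, chosen at random
  import random
  
  def schedule(hosts_up):
      # hosts_up: hostnames whose nova-volume service reports alive
      if not hosts_up:
          raise Exception('Is the appropriate service running?')
      return random.choice(hosts_up)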
  
  nova-volume1 conf:
  
  # VOLUME
  --volume_name_template=volume-nova-volumes-1-14-2%08x
  --volume_group=nova-volume-1-14-2
  --quota_gigabytes=5000
  # Nexenta Storage Driver
  --volume_driver=nexenta.volume.NexentaDriver
  --use_local_volumes=False
  --nexenta_host=172.16.14.3
  --nexenta_volume=nova-volume-1-14-1
  --nexenta_user=<SANITIZED>
  --nexenta_password=<SANITIZED>
  --nexenta_rest_port=2000
  --nexenta_rest_protocol=http
  
  nova-volume2 conf:
  
  # VOLUME
  --volume_name_template=volume-nova-volumes-1-14-1%08x
  --volume_group=nova-volumes-1-14-1
  --quota_gigabytes=5000
  # Nexenta Storage Driver
  --volume_driver=nexenta.volume.NexentaDriver
  --use_local_volumes=False
  --nexenta_host=10.50.0.254
  --nexenta_volume=nova-volume-1-14-1
  --nexenta_user=<SANITIZED>
  --nexenta_password=<SANITIZED>
  --nexenta_rest_port=2000
  --nexenta_rest_protocol=http
  
  I start the second service with a second upstart script which points to
  the second nova-volume conf; a minimal sketch of that job is shown
  below. There are no errors on startup, and authentication succeeds.
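  
  A minimal sketch of that upstart job (the binary and conf paths are
  from my setup and may differ on yours):
  
  # /etc/init/nova-volume2.conf
  description "nova-volume (second Nexenta backend)"
  start on runlevel [2345]
  stop on runlevel [!2345]
  respawn
  exec /usr/bin/nova-volume --flagfile=/etc/nova/nova-volume2.conf
  
  Error shown in the second nova-volume log: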
  
  nova.service: AUDIT: Starting volume node (version 2012.1-LOCALBRANCH:LOCALREVISION)
  nova.volume.nexenta.jsonrpc: DEBUG: [req-62f2b7fe-1d8b-4278-b413-5c18ae3e9b44 None None] Sending JSON data: {"object": "volume", "params": ["nova-volume-1-14-1"], "method": "object_exists"} from (pid=87464) __call__ /usr/lib/python2.7/dist-packages/nova/volume/nexenta/jsonrpc.py:64
  nova.volume.nexenta.jsonrpc: DEBUG: [req-62f2b7fe-1d8b-4278-b413-5c18ae3e9b44 None None] Got response: {"tg_flash": null, "result": 1, "error": null} from (pid=87464) __call__ /usr/lib/python2.7/dist-packages/nova/volume/nexenta/jsonrpc.py:79
  nova.utils: DEBUG: [req-62f2b7fe-1d8b-4278-b413-5c18ae3e9b44 None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from (pid=87464) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
  nova.volume.manager: DEBUG: [req-62f2b7fe-1d8b-4278-b413-5c18ae3e9b44 None None] Re-exporting 0 volumes from (pid=87464) init_host /usr/lib/python2.7/dist-packages/nova/volume/manager.py:96
  nova.rpc.common: INFO: Connected to AMQP server on 172.16.14.1:5672
  nova.service: DEBUG: Creating Consumer connection for Service volume from (pid=87464) start /usr/lib/python2.7/dist-packages/nova/service.py:178
  nova.rpc.amqp: DEBUG: received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-14847c59-101c-4c1a-a7e5-a5f7c9936a15', u'_context_read_deleted': u'no', u'args': {u'volume_id': 21, u'snapshot_id': None}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': u'7f44c421ea134d8e9d33ef28e1ded1ba', u'_context_timestamp': u'2012-08-21T21:19:35.026564', u'_context_user_id': u'8a90a4d3fe9a476c8cbcc43dc6534d4d', u'method': u'create_volume', u'_context_remote_address': u'127.0.0.1'} from (pid=87464) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
  nova.rpc.amqp: DEBUG: [req-14847c59-101c-4c1a-a7e5-a5f7c9936a15 8a90a4d3fe9a476c8cbcc43dc6534d4d 7f44c421ea134d8e9d33ef28e1ded1ba] unpacked context: {'user_id': u'8a90a4d3fe9a476c8cbcc43dc6534d4d', 'roles': [u'admin'], 'timestamp': '2012-08-21T21:19:35.026564', 'auth_token': '<SANITIZED>', 'remote_address': u'127.0.0.1', 'is_admin': True, 'request_id': u'req-14847c59-101c-4c1a-a7e5-a5f7c9936a15', 'project_id': u'7f44c421ea134d8e9d33ef28e1ded1ba', 'read_deleted': u'no'} from (pid=87464) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
  nova.rpc.amqp: ERROR: [req-14847c59-101c-4c1a-a7e5-a5f7c9936a15 8a90a4d3fe9a476c8cbcc43dc6534d4d 7f44c421ea134d8e9d33ef28e1ded1ba] Exception during message handling
  TRACE nova.rpc.amqp Traceback (most recent call last):
  TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
  TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 106, in create_volume
  TRACE nova.rpc.amqp     volume_ref = self.db.volume_get(context, volume_id)
  TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 948, in volume_get
  TRACE nova.rpc.amqp     return IMPL.volume_get(context, volume_id)
  TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 120, in wrapper
  TRACE nova.rpc.amqp     return f(*args, **kwargs)
  TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 2403, in volume_get
  TRACE nova.rpc.amqp     raise exception.VolumeNotFound(volume_id=volume_id)
  TRACE nova.rpc.amqp VolumeNotFound: Volume 21 could not be found.
  TRACE nova.rpc.amqp
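  
  Since VolumeNotFound comes from the plain DB lookup in volume_get, it
  may be worth confirming that the second service reads the same database
  as the API (i.e. the same --sql_connection). A quick hypothetical check
  with SQLAlchemy (substitute the real --sql_connection URL):
  
  # does the database nova-volume2 points at actually contain volume 21?
  from sqlalchemy import create_engine
  
  engine = create_engine('mysql://nova:<SANITIZED>@<db-host>/nova')
  rows = engine.execute('SELECT id, host, status FROM volumes '
                        'WHERE id = 21').fetchall()
  print rows  # an empty list here would explain the VolumeNotFound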
  
  ---------------------------------------------------

-- 
https://bugs.launchpad.net/bugs/1039763
Title:
  Multiple nova-volume services fail to create volume on second storage
  server when using Nexenta driver
