Doh.
Now I feel stupid.

It is getting much farther now.

Now I am seeing the following but I expect it may be because of all the other stuff I broke trying to figure out my previous problem.


 Error: 'volume_id'
 Traceback (most recent call last):
   File "/opt/stack/nova/nova/compute/manager.py", line 1030, in _build_instance
     set_access_ip=set_access_ip)
   File "/opt/stack/nova/nova/compute/manager.py", line 1439, in _spawn
     LOG.exception(_('Instance failed to spawn'), instance=instance)
   File "/opt/stack/nova/nova/compute/manager.py", line 1436, in _spawn
     block_device_info)
   File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 219, in spawn
     admin_password, network_info, block_device_info)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 351, in spawn
     network_info, block_device_info, name_label, rescue)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 499, in _spawn
     undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
   File "/opt/stack/nova/nova/utils.py", line 823, in rollback_and_reraise
     self._rollback()
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 477, in _spawn
     name_label)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 139, in inner
     rv = f(*args, **kwargs)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 339, in create_disks_step
     disk_image_type, block_device_info=block_device_info)
   File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 535, in get_vdis_for_instance
     vdi_uuid = get_vdi_uuid_for_volume(session, connection_data)
   File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 488, in get_vdi_uuid_for_volume
     sr_uuid, label, sr_params = volume_utils.parse_sr_info(connection_data)
   File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 213, in parse_sr_info
     params = parse_volume_info(connection_data)
   File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 232, in parse_volume_info
     volume_id = connection_data['volume_id']
 KeyError: 'volume_id'
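The failing lookup is the last frame: parse_volume_info assumes the connection_data dict it gets from Cinder carries a 'volume_id' key, which a Ceph/libvirt-backed SR apparently does not provide. A minimal sketch of that assumption (the dict contents below are illustrative, not the actual driver data):

```python
def get_volume_id(connection_data):
    """Mimic the failing line, but fail with a diagnostic message.

    parse_volume_info does: volume_id = connection_data['volume_id'],
    so any connector that omits that key raises a bare KeyError.
    """
    try:
        return connection_data['volume_id']
    except KeyError:
        raise ValueError("connection_data has no 'volume_id'; keys present: %s"
                         % sorted(connection_data))

# A hypothetical iSCSI-style connector includes the key; a Ceph/RBD-style
# one (pool/image name only) does not, reproducing the crash above.
print(get_volume_id({'volume_id': 'vol-1', 'target_iqn': 'iqn.2013-11.test'}))
# prints: vol-1
```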


A partly off topic question.
Why not use rbd-fuse and mount the Ceph blobs as files instead of going through libvirt?

On 11/22/2013 09:41 AM, Bob Ball wrote:
Could you provide the full error log with nova crashing?

Thanks,

Bob
------------------------------------------------------------------------
*From:* Alvin Starr [[email protected]]
*Sent:* 22 November 2013 14:31
*To:* Bob Ball; [email protected]
*Subject:* Re: [Openstack] Openstack and xen issues.

I have put OpenStack on a separate machine to try to isolate the various components I need to work with, in the interest of making my debugging easier.
This in retrospect may not have been the best idea.

I have had a very long history with Xen, and that may be more of an impediment because I think I know things about it that are no longer true.

I am using the default devstack scripts from a few weeks ago, so it should be grabbing the latest version of OpenStack, or at least that is my belief.

Here is my sr-param-list.

uuid ( RO)                    : 7d56f548-174b-d42b-12f2-e0849588e503
              name-label ( RW): Ceph Storage
        name-description ( RW):
                    host ( RO): localhost
      allowed-operations (SRO): unplug; plug; PBD.create; PBD.destroy; VDI.clone; scan; VDI.create; VDI.destroy
      current-operations (SRO):
                    VDIs (SRO):
                    PBDs (SRO): 40dd29a3-154a-e841-ce52-4547c817d856
      virtual-allocation ( RO): 348064577384
    physical-utilisation ( RO): 342363992064
           physical-size ( RO): 18986006446080
                    type ( RO): libvirt
            content-type ( RO):
                  shared ( RW): true
           introduced-by ( RO): <not in database>
            other-config (MRW): ceph_sr: true
               sm-config (MRO):
                   blobs ( RO):
     local-cache-enabled ( RO): false
                    tags (SRW):


I started tracing the XenAPI transactions over the network and could see the pool.get_all and pool.get_default calls when the sr_filter was not set, but once I set it, nova would crash complaining about no repository. I checked the TCP transactions and did not see any SR.get_all, while some debugging prints assured me that the code was being exercised.



On 11/22/2013 04:40 AM, Bob Ball wrote:

Hi Alvin,

Yes, we typically do expect Nova to be running in a DomU. It’s worth checking out http://docs.openstack.org/trunk/openstack-compute/install/yum/content/introduction-to-xen.html just to make sure you’ve got everything covered there.

I say typically because in some configurations (notably using xenserver-core) it may be possible to run Nova in dom0 by setting the connection URL to “unix://local”. This is an experimental configuration and was added near the end of Havana – see https://blueprints.launchpad.net/nova/+spec/xenserver-core.

In terms of sr_matching_filter, check that you’re setting it in the right group. If you’re using the latest builds of Icehouse then it should be in the xenserver group. I’m also assuming that the other-config for the SR does indeed contain ceph-sr=true?
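For reference, a sketch of the relevant nova.conf fragment under those assumptions (group name per the note about recent Icehouse builds; older builds read the option from [DEFAULT], and the host/credential values here are placeholders):

```ini
[xenserver]
connection_url = http://<your-xenserver>
connection_username = root
connection_password = <password>
# The key after "other-config:" must match the SR's other-config map
# exactly; the sr-param-list earlier in the thread shows "ceph_sr: true"
# (underscore), so a filter written with a hyphen would never match.
sr_matching_filter = other-config:ceph_sr=true
```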

Is the SR that is used for VMs still the default-SR?

Thanks,

Bob

*From:* Alvin Starr [mailto:[email protected]]
*Sent:* 22 November 2013 01:32
*To:* [email protected]
*Subject:* [Openstack] Openstack and xen issues.


I am trying to use Xen with Ceph and OpenStack using the devstack package.
I am slowly whacking my way through things and have noticed a few issues.

 1. OpenStack expects to be running in a domU and generates error
    messages even if xenapi_check_host is false. I am not sure if
    this causes other side effects. The tests for the local dom0
    should be completely bypassed if the check is disabled.
 2. OpenStack tries to read the Xen SRs and checks the default one,
    which ends up being the Xen local storage and not any other SR.
    If I set sr_matching_filter = other-config:ceph-sr=true, there
    should be a XAPI SR.get_all request generated, but it looks like
    it is not generated at all. I have tracked the HTTP traffic and
    no output is generated even though the appropriate code is being
    called.
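For what it's worth, the matching I would expect sr_matching_filter to trigger can be sketched in plain Python (simplified; the record layout mirrors an SR.get_all_records-style result, but this is not the actual Nova code, and the SR refs are made up). Note that XenAPI spells the record field other_config with an underscore even though the filter syntax uses other-config.

```python
def parse_filter(flt):
    # "other-config:ceph_sr=true" -> ("ceph_sr", "true")
    prefix, _, rest = flt.partition(':')
    if prefix != 'other-config':
        raise ValueError('unsupported filter: %s' % flt)
    key, _, value = rest.partition('=')
    return key, value

def find_matching_srs(sr_records, flt):
    """Return refs of SRs whose other-config map contains the key=value pair."""
    key, value = parse_filter(flt)
    return [ref for ref, rec in sr_records.items()
            if rec.get('other_config', {}).get(key) == value]

# Hypothetical records for a pool like the one in this thread.
srs = {
    'OpaqueRef:aaaa': {'name_label': 'Local storage', 'other_config': {}},
    'OpaqueRef:bbbb': {'name_label': 'Ceph Storage',
                       'other_config': {'ceph_sr': 'true'}},
}
print(find_matching_srs(srs, 'other-config:ceph_sr=true'))
# -> ['OpaqueRef:bbbb']; a hyphenated 'ceph-sr' key would match nothing.
```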



--
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
[email protected]  <mailto:[email protected]>               ||





_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
