Well, some progress.
I figured out the problem with the pools on xenserver/libvirt, but I am back to the same problem with a missing volume_id.

It looks like the code relates to iSCSI devices, and I noticed a warning in the log files that has me wondering:

2013-11-26 22:19:26.017 WARNING nova.virt.xenapi.driver [req-65e019f3-47d3-4e21-8717-8acb2b4b1ef5 admin demo] [instance: 26d231b2-4199-46b0-aacf-b5cb43de7993] Could not determine key: 'iscsi_iqn'

Why would OpenStack be trying to use iSCSI at this point, since the volume management is through Ceph?
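For what it's worth, the two paths expect differently shaped connection data. The sketch below is only an illustration of the shapes involved, not the actual dicts Cinder produced here; the field names follow common driver conventions and the host/portal values are hypothetical:

```python
# Hedged sketch: illustrative shapes of the connection info a Cinder
# backend hands to Nova for two volume driver types. Exact contents
# vary by driver and release; treat these as assumptions.

rbd_connection_info = {
    "driver_volume_type": "rbd",
    "data": {
        # RBD volumes are addressed by pool/image name, not by iSCSI target.
        "name": "volumes/volume-f066424b-1163-4e79-9f11-c5a12919d986",
        "hosts": ["ceph-mon.example.com"],  # hypothetical monitor address
        "ports": ["6789"],
        "auth_enabled": False,
    },
}

iscsi_connection_info = {
    "driver_volume_type": "iscsi",
    "data": {
        "volume_id": "f066424b-1163-4e79-9f11-c5a12919d986",
        "target_iqn": "iqn.2010-10.org.openstack:volume-f066424b",  # hypothetical
        "target_portal": "192.168.0.10:3260",  # hypothetical portal
        "target_lun": 1,
    },
}

# A code path written for iSCSI will look for keys like 'volume_id' or
# 'iscsi_iqn' and warn (or fail) when handed the RBD shape above.
assert "volume_id" not in rbd_connection_info["data"]
assert "volume_id" in iscsi_connection_info["data"]
```

That would explain a warning about 'iscsi_iqn' appearing even when the volume itself lives in Ceph: the XenAPI driver is probing for keys that only the iSCSI shape carries.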


On 11/26/2013 02:11 AM, Bob Ball wrote:
Hi,

I'll admit I really don't know the answer.

Can you send the question to xs-devel?  That's where xenserver-core and the 
tech preview are typically developed and discussed.

Bob

Alvin Starr <[email protected]> wrote:

Should I be able to get libvirt to talk with Ceph under the XenServer tech preview?

I start with a clean 6.4 install, then add:
yum -y install xenserver-core
I run xenserver-install-wizard
yum -y install ceph ceph-radosgw  rbd-fuse python-argparse
rpm -ihv http://xenbits.xen.org/djs/xenserver-tech-preview-release-0.3.0-0.x86_64.rpm
yum -y update
reboot

I can access my rbd pools using the Ceph tools directly, and through libvirt I can see the volumes, but I get the following error when I try to use libvirt to do anything real with them:

2013-11-26 04:24:16.207+0000: 7357: error : virFDStreamOpenFileInternal:592 : Unable to open stream for 'volumes/volume-f066424b-1163-4e79-9f11-c5a12919d986': No such file or directory

Am I missing something fundamental?

On 11/22/2013 11:29 AM, Bob Ball wrote:

The volume_id missing from the connection_details is highly suspicious, yes. I’ve not seen that before, and don’t know what could cause it.

Hopefully someone else on the list will be able to assist. If not, I may be able to have another look on Monday.

I’m not sure why we didn’t use rbd-fuse – that’s a question best asked on the xs-devel mailing list.

Bob

*From:* Alvin Starr [mailto:[email protected]]
*Sent:* 22 November 2013 15:40
*To:* Bob Ball; [email protected]
*Subject:* Re: [Openstack] Openstack and xen issues.

Doh.
Now I feel stupid.

It is getting much farther.

I am now seeing the following, but I expect it may be because of all the other stuff I broke while trying to figure out my previous problem.


 Error: 'volume_id'
 Traceback (most recent call last):
   File "/opt/stack/nova/nova/compute/manager.py", line 1030, in _build_instance
     set_access_ip=set_access_ip)
   File "/opt/stack/nova/nova/compute/manager.py", line 1439, in _spawn
     LOG.exception(_('Instance failed to spawn'), instance=instance)
   File "/opt/stack/nova/nova/compute/manager.py", line 1436, in _spawn
     block_device_info)
   File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 219, in spawn
     admin_password, network_info, block_device_info)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 351, in spawn
     network_info, block_device_info, name_label, rescue)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 499, in _spawn
     undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
   File "/opt/stack/nova/nova/utils.py", line 823, in rollback_and_reraise
     self._rollback()
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 477, in _spawn
     name_label)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 139, in inner
     rv = f(*args, **kwargs)
   File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 339, in create_disks_step
     disk_image_type, block_device_info=block_device_info)
   File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 535, in get_vdis_for_instance
     vdi_uuid = get_vdi_uuid_for_volume(session, connection_data)
   File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 488, in get_vdi_uuid_for_volume
     sr_uuid, label, sr_params = volume_utils.parse_sr_info(connection_data)
   File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 213, in parse_sr_info
     params = parse_volume_info(connection_data)
   File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 232, in parse_volume_info
     volume_id = connection_data['volume_id']
 KeyError: 'volume_id'
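The last two frames make the failure easy to reproduce in isolation. The following is a hedged paraphrase, not the real Nova source: the XenAPI volume path assumes iSCSI-style connection data, so RBD-style data that carries no volume_id key raises exactly this KeyError:

```python
# Hedged paraphrase of the failing step in volume_utils.parse_volume_info
# (simplified stand-in, not the actual Nova code): the parser indexes
# keys that an RBD-backed connection never provides.

def parse_volume_info(connection_data):
    """Raises KeyError for RBD-style data, just like the traceback above."""
    volume_id = connection_data["volume_id"]  # KeyError when key is absent
    return volume_id

# RBD-style connection data identifies the volume by pool/image name only.
rbd_style_data = {"name": "volumes/volume-f066424b-1163-4e79-9f11-c5a12919d986"}

try:
    parse_volume_info(rbd_style_data)
except KeyError as exc:
    print("Reproduced:", exc)  # Reproduced: 'volume_id'
```

So the KeyError is consistent with the iSCSI warning above: both come from code that expects iSCSI-shaped connection data.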


A partly off topic question.
Why not use rbd-fuse and mount the Ceph blobs as files instead of going through libvirt?

On 11/22/2013 09:41 AM, Bob Ball wrote:

    Could you provide the full error log with nova crashing?

    Thanks,

    Bob

    ------------------------------------------------------------------------

    *From:*Alvin Starr [[email protected] <mailto:[email protected]>]
    *Sent:* 22 November 2013 14:31
    *To:* Bob Ball; [email protected]
    <mailto:[email protected]>
    *Subject:* Re: [Openstack] Openstack and xen issues.

    I have put OpenStack on a separate machine to try to separate
    and isolate the various components I need to work with, in the
    interests of making my debugging easier.
    In retrospect, this may not have been the best idea.

    I have had a very long history with xen and that may be more of
    an impediment because I think I know things about it that are no
    longer true.

    I am using the default devstack scripts as of a few weeks ago, so
    it should be grabbing the latest version of OpenStack, or at least
    that is my belief.

    Here is my sr-param-list.

                        uuid ( RO): 7d56f548-174b-d42b-12f2-e0849588e503
                  name-label ( RW): Ceph Storage
            name-description ( RW):
                        host ( RO): localhost
          allowed-operations (SRO): unplug; plug; PBD.create;
    PBD.destroy; VDI.clone; scan; VDI.create; VDI.destroy
          current-operations (SRO):
                        VDIs (SRO):
                        PBDs (SRO): 40dd29a3-154a-e841-ce52-4547c817d856
          virtual-allocation ( RO): 348064577384
        physical-utilisation ( RO): 342363992064
               physical-size ( RO): 18986006446080
                        type ( RO): libvirt
                content-type ( RO):
                      shared ( RW): true
               introduced-by ( RO): <not in database>
                other-config (MRW): ceph_sr: true
                   sm-config (MRO):
                       blobs ( RO):
         local-cache-enabled ( RO): false
                        tags (SRW):


    I started tracing the XenAPI transactions over the network and
    could see the pool.get_all and pool.get_default calls when the
    sr_filter was not set, but once I set it, nova would crash,
    complaining about no repository.
    I checked the TCP transactions and did not see any SR.get_all,
    while some debugging prints assured me that the code was being
    exercised.



    On 11/22/2013 04:40 AM, Bob Ball wrote:

        Hi Alvin,

        Yes, we typically do expect Nova to be running in a DomU.
        It’s worth checking out
        http://docs.openstack.org/trunk/openstack-compute/install/yum/content/introduction-to-xen.html
        just to make sure you’ve got everything covered there.

        I say typically because in some configurations (notably using
        xenserver-core) it may be possible to run Nova in dom0 by
        setting the connection URL to “unix://local”.  This is an
        experimental configuration and was added near the end of
        Havana – see
        https://blueprints.launchpad.net/nova/+spec/xenserver-core.

        In terms of sr_matching_filter, check that you’re setting it
        in the right group.  If you’re using the latest builds of
        Icehouse then it should be in the xenserver group.  I’m also
        assuming that the other-config for the SR does indeed contain
        ceph-sr=true?
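As a point of reference, the Icehouse-era layout Bob describes would look something like the fragment below. This is a sketch under stated assumptions: the group name follows his description, and the key after other-config: must match the SR's other-config key character for character, so ceph-sr and ceph_sr are two different keys.

```ini
[xenserver]
# The filter key must be byte-for-byte identical to the key shown by
# xe sr-param-list under other-config (the SR dump earlier in this
# thread shows "ceph_sr: true", with an underscore).
sr_matching_filter = other-config:ceph_sr=true
```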

        Is the SR that is used for VMs still the default-SR?

        Thanks,

        Bob

        *From:* Alvin Starr [mailto:[email protected]]
        *Sent:* 22 November 2013 01:32
        *To:* [email protected]
        *Subject:* [Openstack] Openstack and xen issues.


        I am trying to use Xen with Ceph and OpenStack using the
        devstack package.
        I am slowly whacking my way through things and have noticed a
        few issues.

         1. OpenStack expects to be running in a domU and generates
            error messages even if xenapi_check_host is false. I am
            not sure if this causes other side effects. The tests for
            the local dom0 should be completely bypassed if the check
            is disabled.
         2. OpenStack tries to read the Xen SRs and checks the
            default one, which ends up being the Xen local storage and
            not any other SR. If I set sr_matching_filter =
            other-config:ceph-sr=true there should be a xapi
            SR.get_all request generated, but it looks like it is not
            generated at all. I have tracked the HTTP traffic and no
            output is generated even though the appropriate code is
            being called.
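To make the expected behaviour in point 2 concrete, here is a minimal sketch (an illustration, not Nova's actual matching code) of how an other-config:<key>=<value> filter would select SRs from an SR.get_all_records-style result, and why a hyphen/underscore mismatch in the key silently matches nothing:

```python
# Hedged sketch of "other-config:<key>=<value>" SR filtering against a
# XenAPI SR.get_all_records()-style mapping. Field names and the second
# SR UUID below are illustrative assumptions.

def match_srs(sr_records, filter_str):
    """Return UUIDs of SRs whose other_config contains the key=value pair."""
    prefix = "other-config:"
    key, _, value = filter_str[len(prefix):].partition("=")
    return [
        uuid
        for uuid, rec in sr_records.items()
        if rec.get("other_config", {}).get(key) == value
    ]

srs = {
    "7d56f548-174b-d42b-12f2-e0849588e503": {"other_config": {"ceph_sr": "true"}},
    "11111111-2222-3333-4444-555555555555": {"other_config": {}},  # local storage
}

print(match_srs(srs, "other-config:ceph_sr=true"))   # one match
print(match_srs(srs, "other-config:ceph-sr=true"))   # empty: key differs
```

Under this reading, a filter written with ceph-sr while the SR carries ceph_sr would match no SR at all, which fits the "no repository" crash.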

        --
        Alvin Starr                   ||   voice: (905)513-7688
        Netvel Inc.                   ||   Cell:  (416)806-0133
        [email protected]               ||




    --
    Alvin Starr                   ||   voice: (905)513-7688
    Netvel Inc.                   ||   Cell:  (416)806-0133
    [email protected]               ||




--
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
[email protected]  <mailto:[email protected]>               ||


--
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
[email protected]               ||


--
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
[email protected]              ||

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
