Thank you for your reply, Yannick.

On my host, the tgtd service and the configuration in /etc/cinder/cinder.conf 
are both normal.

With the kind help of Wanghao, I have found the root cause.

The image on my server is a livecd ISO. The 'root_device_name' of the 
instances created from this image is '/dev/hda':
mysql> select id, uuid, display_name, root_device_name from instances;
+----+--------------------------------------+--------------+------------------+
| id | uuid                                 | display_name | root_device_name |
+----+--------------------------------------+--------------+------------------+
|  7 | 6ffed2fc-a9e6-4c0d-b958-bf302b01dbb1 | test         | /dev/hda         |
+----+--------------------------------------+--------------+------------------+


The instance gets its root device name, which later determines the disk bus 
type, in nova/block_device.py:

def properties_root_device_name(properties):
    """get root device name from image meta data.
    If it isn't specified, return None.
    """
    root_device_name = None

    # NOTE(yamahata): see image_service.s3.s3create()
    for bdm in properties.get('mappings', []):
        if bdm['virtual'] == 'root':
            root_device_name = bdm['device']

    # NOTE(yamahata): register_image's command line can override
    #                 <machine>.manifest.xml
    if 'root_device_name' in properties:
        root_device_name = properties['root_device_name']

    return root_device_name
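
Downstream, nova's libvirt driver infers the disk bus from the device name 
prefix, so a root device named '/dev/hda' lands on the 'ide' bus, which 
libvirt refuses to hotplug. A minimal sketch of that prefix convention (a 
hypothetical helper for illustration; the driver's real logic takes more 
parameters):

def disk_bus_from_device_name(device_name):
    """Guess the libvirt disk bus from a device name such as '/dev/hda'.

    Hypothetical sketch of the naming convention, not nova's actual code.
    """
    name = device_name.split('/')[-1]
    if name.startswith('hd'):
        return 'ide'      # e.g. /dev/hda -- cannot be hotplugged
    if name.startswith('sd'):
        return 'scsi'     # e.g. /dev/sda
    if name.startswith('vd'):
        return 'virtio'   # e.g. /dev/vda -- hotpluggable
    return None

assert disk_bus_from_device_name('/dev/hda') == 'ide'
assert disk_bus_from_device_name('/dev/vda') == 'virtio'

Given the image-property override in properties_root_device_name() above, one 
possible workaround is to set the property on the image so that new instances 
get a virtio-style root device (assuming the glance CLI syntax of this 
release):

glance image-update --property root_device_name=/dev/vda <image-id>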

Best Regards,

Qi

From: Yannick Foeillet [mailto:[email protected]]
Sent: Wednesday, September 18, 2013 10:05 PM
To: [email protected]
Subject: Re: [Openstack] [Cinder] Attach the volume in Local Disk failed. The 
log in nova/compute.log said "libvirtError: unsupported configuration: disk bus 
'ide' cannot be hotplugged"

Hi,

In my experiment, OpenStack was running on a single server. The volume group 
named 'cinder-volumes' was built on a local disk partition, '/dev/sda2'.

A volume was created in the dashboard, and I tried to attach it to a running 
instance. However, the operation failed.

The exception in nova/compute.log said that the disk bus ide could not be 
hotplugged.

As is well known, an IDE device cannot be hotplugged into a running instance.

It seems that the current version uses 'ide' as the default bus type.

In this case, none of the volumes created in 'cinder-volumes' can be 
hotplugged into the running VM. You also can never attach a volume to a 
suspended instance in the dashboard; there is no option for this in the 
portal.
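
For reference, the hotplug only fails because of the target bus; a disk 
element targeting 'virtio', like the sketch below (the volume path is a 
placeholder), is the kind of configuration libvirt accepts for hotplug:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/cinder-volumes/volume-XXXX'/>
  <target dev='vdb' bus='virtio'/>
</disk>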

How can I choose the 'virtio' bus type when attaching a volume?

[root@localhost nova]# nova --version
2.13.0
[root@localhost nova]# uname -a
Linux localhost 3.9.4-200.fc18.x86_64 #1 SMP Fri May 24 20:10:49 UTC 2013 
x86_64 x86_64 x86_64 GNU/Linux
[root@localhost nova]#

I had the same issue on my storage node when the iet and tgt processes were 
running at the same time.

For example, in my cinder.conf:

iscsi_helper = ietadm

So I stopped the tgt service (service tgt stop) and restarted the iscsitarget 
and cinder services.
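
In other words, something like this (the cinder service name below is an 
assumption and varies by distribution):

service tgt stop                        # stop the conflicting tgt daemon
service iscsitarget restart             # restart iet
service openstack-cinder-volume restart # restart the cinder volume service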

After that, everything was OK; perhaps you should try this method.

I hope it will help you.

--
Yannick Foeillet
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
