Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy <sergey.levits...@autodesk.com> wrote:

> Actually, minor correction: when adding the detail to VMs/templates, its
> name is rootDiskController for the root disk and dataDiskController for
> additional disks.
> Also, if you want to make the change on a global scale, it needs to go
> into the vm_template_details and user_vm_details tables respectively.

Thanks!  Very helpful.


Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
Actually, minor correction: when adding the detail to VMs/templates, its name
is rootDiskController for the root disk and dataDiskController for additional
disks.
Also, if you want to make the change on a global scale, it needs to go into
the vm_template_details and user_vm_details tables respectively.

On 2/21/17, 8:03 PM, "Sergey Levitskiy" <sergey.levits...@autodesk.com> wrote:

Here is the logic:
1. The default value is taken from the global configuration
vmware.root.disk.controller.
2. To override it, add the same setting to the template or VM (starting with
4.10 the UI allows adding advanced settings to templates and/or VMs). If added
to a template, all VMs deployed from it will inherit this value. If added to a
VM from which a template is later created, the template will also inherit all
advanced settings.





Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy <sergey.levits...@autodesk.com> wrote:

> Here is the logic:
> 1. The default value is taken from the global configuration
> vmware.root.disk.controller.
> 2. To override it, add the same setting to the template or VM (starting
> with 4.10 the UI allows adding advanced settings to templates and/or VMs).
> If added to a template, all VMs deployed from it will inherit this value.
> If added to a VM from which a template is later created, the template will
> also inherit all advanced settings.
>

Excellent, thanks.  Do you happen to know where this is stored in the  
database?

Thanks again!


Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
Here is the logic:
1. The default value is taken from the global configuration
vmware.root.disk.controller.
2. To override it, add the same setting to the template or VM (starting with
4.10 the UI allows adding advanced settings to templates and/or VMs). If added
to a template, all VMs deployed from it will inherit this value. If added to a
VM from which a template is later created, the template will also inherit all
advanced settings.
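The override order described above can be sketched roughly as follows. This is an illustrative model only, not actual CloudStack code; the assumption that a detail set directly on the VM wins over one inherited from the template is inferred from the behavior described in this thread.

```python
# Illustrative sketch (not CloudStack code) of the override order described
# above: a rootDiskController detail on the VM wins over one on its template,
# and the vmware.root.disk.controller global setting is the fallback.

GLOBAL_CONFIG = {"vmware.root.disk.controller": "ide"}

def resolve_root_disk_controller(vm_details, template_details,
                                 global_config=GLOBAL_CONFIG):
    """Pick the root disk controller to send to the hypervisor."""
    for details in (vm_details, template_details):
        if "rootDiskController" in details:
            return details["rootDiskController"]
    return global_config["vmware.root.disk.controller"]
```

So a template detail of lsilogic would apply to every VM deployed from that template, while a detail set on an individual VM would override it for that VM alone.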




On 2/21/17, 7:06 PM, "Nathan Johnson" <njohn...@ena.com> wrote:


Thanks Sergey!  So do you happen to know where on the management server  
side the determination is made as to which rootDiskController parameter to  
pass?





Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy <sergey.levits...@autodesk.com> wrote:

> On VMware, rootDiskController is passed over to the hypervisor in the VM
> start command. I know for a fact that the following rootDiskController
> options specified in template/VM details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any SCSI controller option that VMware recognizes should work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server  
side the determination is made as to which rootDiskController parameter to  
pass?




Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
On VMware, rootDiskController is passed over to the hypervisor in the VM start
command. I know for a fact that the following rootDiskController options
specified in template/VM details work fine:
ide
scsi
lsilogic
lsilogic1068

In general, any SCSI controller option that VMware recognizes should work.

Thanks,
Sergey


On 2/21/17, 6:13 PM, "Nathan Johnson" <njohn...@ena.com> wrote:


So one thing I noticed, there is a possibility of a rootDiskController  
parameter passed to the Start Command.  So this means that the Management  
server could control whether to use scsi or virtio, assuming I’m reading  
this correctly, and we shouldn’t necessarily have to rely on the os type  
name inside the agent code.  From a quick glance at the vmware code, it  
looks like maybe they already use this parameter?  It would be great if  
someone familiar with the vmware code could chime in here.

Thanks,

Nathan




Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Wido den Hollander <w...@widodh.nl> wrote:

>
>> On 25 January 2017 at 4:44, Simon Weller <swel...@ena.com> wrote:
>>
>>
>> Maybe this is a good opportunity to discuss modernizing the OS  
>> selections so that drivers (and other features) could be selectable per  
>> OS.
>
> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3  
> then for example it will give you a VirtIO SCSI disk on KVM, anything  
> previous to that will get VirtIO-blk.

So one thing I noticed, there is a possibility of a rootDiskController  
parameter passed to the Start Command.  So this means that the Management  
server could control whether to use scsi or virtio, assuming I’m reading  
this correctly, and we shouldn’t necessarily have to rely on the os type  
name inside the agent code.  From a quick glance at the vmware code, it  
looks like maybe they already use this parameter?  It would be great if  
someone familiar with the vmware code could chime in here.

Thanks,

Nathan
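A minimal sketch of the idea in the quoted exchange: choose the disk bus on the management-server side from the guest OS type name instead of inside the agent. The version cutoffs below are only the Ubuntu 16.04 / CentOS 7.3 examples mentioned above, and none of this is actual CloudStack code.

```python
# Hypothetical sketch: derive the disk bus from the guest OS type on the
# management server rather than in the agent. The cutoffs are just the
# examples from this thread (Ubuntu 16.04, CentOS 7.3).

VIRTIO_SCSI_MINIMUM = {
    "ubuntu": (16, 4),
    "centos": (7, 3),
}

def choose_disk_bus(os_name, version):
    """Return 'virtio-scsi' for new enough guests, else 'virtio-blk'."""
    minimum = VIRTIO_SCSI_MINIMUM.get(os_name.lower())
    if minimum is not None and version >= minimum:
        return "virtio-scsi"
    return "virtio-blk"  # older or unknown guests keep the existing default
```

The result could then be passed along as the rootDiskController parameter of the Start command, keeping the agent code free of OS-name knowledge.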




Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-24 Thread Wido den Hollander

> On 25 January 2017 at 4:44, Simon Weller <swel...@ena.com> wrote:
> 
> 
> Maybe this is a good opportunity to discuss modernizing the OS selections so 
> that drivers (and other features) could be selectable per OS.
> 

That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3 then for 
example it will give you a VirtIO SCSI disk on KVM, anything previous to that 
will get VirtIO-blk.

Wido
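For concreteness, here is a sketch of the libvirt disk XML behind the two choices discussed in this thread. It is a hypothetical illustration assuming a file-backed disk, not CloudStack's actual XML generation: with bus="virtio" the guest sees /dev/vdX devices, while a virtio-scsi controller plus bus="scsi" gives /dev/sdX.

```python
# Hypothetical illustration of the libvirt XML behind the two buses; not
# CloudStack's actual generator. bus="virtio" yields /dev/vdX in the guest,
# while a virtio-scsi controller plus bus="scsi" yields /dev/sdX.

def disk_xml(source_file, use_virtio_scsi):
    disk = (
        '<disk type="file" device="disk">'
        f'<source file="{source_file}"/>'
    )
    if use_virtio_scsi:
        # The virtio-scsi HBA is a separate device; disks attach to it.
        return (
            '<controller type="scsi" model="virtio-scsi"/>\n'
            + disk + '<target dev="sda" bus="scsi"/></disk>'
        )
    return disk + '<target dev="vda" bus="virtio"/></disk>'
```

The device-name change (vdX to sdX) is why switching existing instances can break guests that mount by device path rather than by label or UUID, as noted below.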

> 
> Thoughts?
> 
> 
> 
> From: Syed Ahmed <sah...@cloudops.com>
> Sent: Tuesday, January 24, 2017 10:46 AM
> To: dev@cloudstack.apache.org
> Cc: Simon Weller
> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
> 
> To maintain backward compatibility we would have to add a config option here 
> unfortunately. I do like the idea however. We can make the default VirtIO 
> SCSI and keep VirtIO-blk as an alternative for existing installations.
> 
> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander <w...@widodh.nl> wrote:
> 
> > On 21 January 2017 at 23:50, Wido den Hollander <w...@widodh.nl> wrote:
> >
> >
> >
> >
> > > On 21 Jan 2017 at 22:59, Syed Ahmed <sah...@cloudops.com> wrote:
> > > Exposing this via an API would be tricky but it can definitely be added as
> > > a cluster-wide or a global setting in my opinion. By enabling that, all 
> > > the
> > > instances would be using VirtIO SCSI. Is there a reason you'd want some
> > > instances to use VirtIO and others to use VirtIO SCSI?
> > >
> >
> > Even a global setting would be a bit of work and hacky as well.
> >
> > I do not see any reason to keep VirtIO, it is just that devices will be 
> > named sdX instead of vdX in the guest.
> 
> To add, the Qemu wiki [0] says:
> 
> "A virtio storage interface for efficient I/O that overcomes virtio-blk 
> limitations and supports advanced SCSI hardware."
> 
> At OpenStack [1] they also say:
> 
> "It has been designed to replace virtio-blk, increase it's performance and 
> improve scalability."
> 
> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO SCSI 
> at version 5.X? :)
> 
> Wido
> 
> [0]: http://wiki.qemu.org/Features/VirtioSCSI
> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
> 
> >
> > That might break existing Instances when not using labels or UUIDs in the 
> > Instance when mounting.
> >
> > Wido
> >
> > >
> > >> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller 
> > >> <swel...@ena.com<mailto:swel...@ena.com>> wrote:
> > >>
> > >> For the record, we've been looking into this as well.
> > >> Has anyone tried it with Windows VMs before? The standard virtio driver
> > >> doesn't support spanned disks and that's something we'd really like to
> > >> enable for our customers.
> > >>
> > >>
> > >>
> > >> Simon Weller / 615-312-6068
> > >>
> > >>
> > >> -Original Message-
> > >> *From:* Wido den Hollander [w...@widodh.nl]
> > >> *Received:* Saturday, 21 Jan 2017, 2:56PM
> > >> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org
> > >> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
> > >>
> > >>
> > >>> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
> > >>>
> > >>>
> > >>> Wido,
> > >>>
> > >>> Were you thinking of adding this as a global setting? I can see why it
> > >> will
> > >>> be useful. I'm happy to review any ideas you might have around this.
> > >>>
> > >>
> > >> Well, not really. We don't have any structure for this in place right now
> > >> to define what type of driver/disk we present to a guest.
> > >>
> > >> See my answer below.
> > >>
> > >>> Thanks,
> > >>> -Syed
> > >>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak 
> > >>> <laszlo.horn...@gmail.com>

Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-24 Thread Simon Weller
Maybe this is a good opportunity to discuss modernizing the OS selections so 
that drivers (and other features) could be selectable per OS.


Thoughts?



From: Syed Ahmed <sah...@cloudops.com>
Sent: Tuesday, January 24, 2017 10:46 AM
To: dev@cloudstack.apache.org
Cc: Simon Weller
Subject: Re: Adding VirtIO SCSI to KVM hypervisors

To maintain backward compatibility we would have to add a config option here 
unfortunately. I do like the idea however. We can make the default VirtIO ISCSI 
and keep the VirtIO-blk as an alternative for existing installations.

On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander 
<w...@widodh.nl<mailto:w...@widodh.nl>> wrote:

> Op 21 januari 2017 om 23:50 schreef Wido den Hollander 
> <w...@widodh.nl<mailto:w...@widodh.nl>>:
>
>
>
>
> > Op 21 jan. 2017 om 22:59 heeft Syed Ahmed 
> > <sah...@cloudops.com<mailto:sah...@cloudops.com>> het volgende geschreven:
> >
> > Exposing this via an API would be tricky but it can definitely be added as
> > a cluster-wide or a global setting in my opinion. By enabling that, all the
> > instances would be using VirtIO SCSI. Is there a reason you'd want some
> > instances to use VirtIO and others to use VirtIO SCSI?
> >
>
> Even a global setting would be a bit of work and hacky as well.
>
> I do not see any reason to keep VirtIO, it is just that devices will be named
> sdX instead of vdX in the guest.

To add, the Qemu wiki [0] says:

"A virtio storage interface for efficient I/O that overcomes virtio-blk 
limitations and supports advanced SCSI hardware."

At OpenStack [1] they also say:

"It has been designed to replace virtio-blk, increase its performance and 
improve scalability."

So it seems that virtio-blk is meant to be replaced. I'd say switch to VirtIO 
SCSI at version 5.X? :)

Wido

[0]: http://wiki.qemu.org/Features/VirtioSCSI
[1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi

>
> That might break existing Instances when not using labels or UUIDs in the 
> Instance when mounting.
>
> Wido
>
> >
> >> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller 
> >> <swel...@ena.com> wrote:
> >>
> >> For the record, we've been looking into this as well.
> >> Has anyone tried it with Windows VMs before? The standard virtio driver
> >> doesn't support spanned disks and that's something we'd really like to
> >> enable for our customers.
> >>
> >>
> >>
> >> Simon Weller / 615-312-6068
> >>
> >>
> >> -Original Message-
> >> *From:* Wido den Hollander [w...@widodh.nl]
> >> *Received:* Saturday, 21 Jan 2017, 2:56PM
> >> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org
> >> [dev@cloudstack.apache.org]
> >> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
> >>
> >>
> >>> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
> >>>
> >>>
> >>> Wido,
> >>>
> >>> Were you thinking of adding this as a global setting? I can see why it
> >> will
> >>> be useful. I'm happy to review any ideas you might have around this.
> >>>
> >>
> >> Well, not really. We don't have any structure for this in place right now
> >> to define what type of driver/disk we present to a guest.
> >>
> >> See my answer below.
> >>
> >>> Thanks,
> >>> -Syed
> >>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak 
> >>> <laszlo.horn...@gmail.com>
> >>> wrote:
> >>>
> >>>> Hi Wido,
> >>>>
> >>>> If I understand correctly from the documentation and your examples,
> >>>> virtio provides a virtio interface to the guest while virtio-scsi
> >>>> provides a SCSI interface, so an IaaS service should not replace one
> >>>> with the other without user request / approval. It would probably be
> >>>> better to let the user set what kind of I/O interface the VM needs.
> >>>>
> >>
> >> You'd say that, but we already do this. Some Operating Systems get an IDE
> >> disk, others a SCSI disk, and when a Linux guest supports it according to
> >> our database we use VirtIO.

Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-24 Thread Syed Ahmed
To maintain backward compatibility we would have to add a config option
here, unfortunately. I do like the idea, however. We can make the default
VirtIO SCSI and keep virtio-blk as an alternative for existing
installations.

On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander <w...@widodh.nl> wrote:

>
> > On 21 January 2017 at 23:50, Wido den Hollander <w...@widodh.nl> wrote:
> >
> >
> >
> >
> > > On 21 Jan 2017 at 22:59, Syed Ahmed <sah...@cloudops.com> wrote:
> > >
> > > Exposing this via an API would be tricky but it can definitely be
> added as
> > > a cluster-wide or a global setting in my opinion. By enabling that,
> all the
> > > instances would be using VirtIO SCSI. Is there a reason you'd want some
> > > instances to use VirtIO and others to use VirtIO SCSI?
> > >
> >
> > Even a global setting would be a bit of work and hacky as well.
> >
> > I do not see any reason to keep VirtIO, it is just that devices will be
> > named sdX instead of vdX in the guest.
>
> To add, the Qemu wiki [0] says:
>
> "A virtio storage interface for efficient I/O that overcomes virtio-blk
> limitations and supports advanced SCSI hardware."
>
> At OpenStack [1] they also say:
>
> "It has been designed to replace virtio-blk, increase its performance and
> improve scalability."
>
> So it seems that virtio-blk is meant to be replaced. I'd say switch to VirtIO
> SCSI at version 5.X? :)
>
> Wido
>
> [0]: http://wiki.qemu.org/Features/VirtioSCSI
> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>
> >
> > That might break existing Instances when not using labels or UUIDs in
> the Instance when mounting.
> >
> > Wido
> >
> > >
> > >> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller <swel...@ena.com>
> wrote:
> > >>
> > >> For the record, we've been looking into this as well.
> > >> Has anyone tried it with Windows VMs before? The standard virtio
> driver
> > >> doesn't support spanned disks and that's something we'd really like to
> > >> enable for our customers.
> > >>
> > >>
> > >>
> > >> Simon Weller / 615-312-6068
> > >>
> > >>
> > >> -Original Message-
> > >> *From:* Wido den Hollander [w...@widodh.nl]
> > >> *Received:* Saturday, 21 Jan 2017, 2:56PM
> > >> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org [
> > >> dev@cloudstack.apache.org]
> > >> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
> > >>
> > >>
> > >>> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
> > >>>
> > >>>
> > >>> Wido,
> > >>>
> > >>> Were you thinking of adding this as a global setting? I can see why
> it
> > >> will
> > >>> be useful. I'm happy to review any ideas you might have around this.
> > >>>
> > >>
> > >> Well, not really. We don't have any structure for this in place right
> now
> > >> to define what type of driver/disk we present to a guest.
> > >>
> > >> See my answer below.
> > >>
> > >>> Thanks,
> > >>> -Syed
> > >>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak <laszlo.horn...@gmail.com>
> > >>> wrote:
> > >>>
> > >>>> Hi Wido,
> > >>>>
> > >>>> If I understand correctly from the documentation and your examples,
> > >>>> virtio provides a virtio interface to the guest while virtio-scsi
> > >>>> provides a SCSI interface, so an IaaS service should not replace one
> > >>>> with the other without user request / approval. It would probably be
> > >>>> better to let the user set what kind of I/O interface the VM needs.
> > >>>>
> > >>
> > >> You'd say that, but we already do this. Some Operating Systems get an
> > >> IDE disk, others a SCSI disk, and when a Linux guest supports it
> > >> according to our database we use VirtIO.
> > >>
> > >> CloudStack has no way of telling how to present a volume to a guest. I
> > >> think it would be a bit too much to just make that configurable. That
> > >> would mean extra database entries, API calls. A bit overkill imho in
> > >> this case.

Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-23 Thread Wido den Hollander

> On 21 January 2017 at 23:50, Wido den Hollander <w...@widodh.nl> wrote:
> 
> 
> 
> 
> > On 21 Jan 2017 at 22:59, Syed Ahmed <sah...@cloudops.com> wrote:
> > 
> > Exposing this via an API would be tricky but it can definitely be added as
> > a cluster-wide or a global setting in my opinion. By enabling that, all the
> > instances would be using VirtIO SCSI. Is there a reason you'd want some
> > instances to use VirtIO and others to use VirtIO SCSI?
> > 
> 
> Even a global setting would be a bit of work and hacky as well.
> 
> I do not see any reason to keep VirtIO, it is just that devices will be named
> sdX instead of vdX in the guest.

To add, the Qemu wiki [0] says:

"A virtio storage interface for efficient I/O that overcomes virtio-blk 
limitations and supports advanced SCSI hardware."

At OpenStack [1] they also say:

"It has been designed to replace virtio-blk, increase its performance and 
improve scalability."

So it seems that virtio-blk is meant to be replaced. I'd say switch to VirtIO 
SCSI at version 5.X? :)

Wido

[0]: http://wiki.qemu.org/Features/VirtioSCSI
[1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi

> 
> That might break existing Instances when not using labels or UUIDs in the 
> Instance when mounting.
> 
> Wido
> 
> > 
> >> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller <swel...@ena.com> wrote:
> >> 
> >> For the record, we've been looking into this as well.
> >> Has anyone tried it with Windows VMs before? The standard virtio driver
> >> doesn't support spanned disks and that's something we'd really like to
> >> enable for our customers.
> >> 
> >> 
> >> 
> >> Simon Weller / 615-312-6068
> >> 
> >> 
> >> -----Original Message-
> >> *From:* Wido den Hollander [w...@widodh.nl]
> >> *Received:* Saturday, 21 Jan 2017, 2:56PM
> >> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org [
> >> dev@cloudstack.apache.org]
> >> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
> >> 
> >> 
> >>> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
> >>> 
> >>> 
> >>> Wido,
> >>> 
> >>> Were you thinking of adding this as a global setting? I can see why it
> >> will
> >>> be useful. I'm happy to review any ideas you might have around this.
> >>> 
> >> 
> >> Well, not really. We don't have any structure for this in place right now
> >> to define what type of driver/disk we present to a guest.
> >> 
> >> See my answer below.
> >> 
> >>> Thanks,
> >>> -Syed
> >>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak <laszlo.horn...@gmail.com>
> >>> wrote:
> >>> 
> >>>> Hi Wido,
> >>>> 
> >>>> If I understand correctly from the documentation and your examples,
> >>>> virtio provides a virtio interface to the guest while virtio-scsi
> >>>> provides a SCSI interface, so an IaaS service should not replace one
> >>>> with the other without user request / approval. It would probably be
> >>>> better to let the user set what kind of I/O interface the VM needs.
> >>>> 
> >> 
> >> You'd say that, but we already do this. Some Operating Systems get an IDE
> >> disk, others a SCSI disk, and when a Linux guest supports it according to
> >> our database we use VirtIO.
> >> 
> >> CloudStack has no way of telling how to present a volume to a guest. I
> >> think it would be a bit too much to just make that configurable. That would
> >> mean extra database entries, API calls. A bit overkill imho in this case.
> >> 
> >> VirtIO SCSI is supported by all Linux distributions for a very long time.
> >> 
> >> Wido
> >> 
> >>>> Best regards,
> >>>> Laszlo
> >>>> 
> >>>> On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander <w...@widodh.nl>
> >>>> wrote:
> >>>> 
> >>>>> Hi,
> >>>>> 
> >>>>> VirtIO SCSI [0] has been supported a while now by Linux and all
> >> kernels,
> >>>>> but inside CloudStack we are not using it. There is an issue for this
> >> [1].
> >>>>> 
> >>>>> It would bring more (theoretical) performance to VMs, but one of the
> >>>>> motivators (for me) is that we can support TRIM/DISCARD [2].

Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Wido den Hollander


> On 21 Jan 2017 at 22:59, Syed Ahmed <sah...@cloudops.com> wrote:
> 
> Exposing this via an API would be tricky but it can definitely be added as
> a cluster-wide or a global setting in my opinion. By enabling that, all the
> instances would be using VirtIO SCSI. Is there a reason you'd want some
> instances to use VirtIO and others to use VirtIO SCSI?
> 

Even a global setting would be a bit of work and hacky as well.

I do not see any reason to keep VirtIO, it is just that devices will be named 
sdX instead of vdX in the guest.

That might break existing Instances when not using labels or UUIDs in the 
Instance when mounting.

Wido

> 
>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller <swel...@ena.com> wrote:
>> 
>> For the record, we've been looking into this as well.
>> Has anyone tried it with Windows VMs before? The standard virtio driver
>> doesn't support spanned disks and that's something we'd really like to
>> enable for our customers.
>> 
>> 
>> 
>> Simon Weller / 615-312-6068
>> 
>> 
>> -Original Message-
>> *From:* Wido den Hollander [w...@widodh.nl]
>> *Received:* Saturday, 21 Jan 2017, 2:56PM
>> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org [
>> dev@cloudstack.apache.org]
>> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>> 
>> 
>>> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
>>> 
>>> 
>>> Wido,
>>> 
>>> Were you thinking of adding this as a global setting? I can see why it
>> will
>>> be useful. I'm happy to review any ideas you might have around this.
>>> 
>> 
>> Well, not really. We don't have any structure for this in place right now
>> to define what type of driver/disk we present to a guest.
>> 
>> See my answer below.
>> 
>>> Thanks,
>>> -Syed
>>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak <laszlo.horn...@gmail.com>
>>> wrote:
>>> 
>>>> Hi Wido,
>>>> 
>>>> If I understand correctly from the documentation and your examples,
>>>> virtio provides a virtio interface to the guest while virtio-scsi
>>>> provides a SCSI interface, so an IaaS service should not replace one
>>>> with the other without user request / approval. It would probably be
>>>> better to let the user set what kind of I/O interface the VM needs.
>>>> 
>> 
>> You'd say that, but we already do this. Some Operating Systems get an IDE
>> disk, others a SCSI disk, and when a Linux guest supports it according to
>> our database we use VirtIO.
>> 
>> CloudStack has no way of telling how to present a volume to a guest. I
>> think it would be a bit too much to just make that configurable. That would
>> mean extra database entries, API calls. A bit overkill imho in this case.
>> 
>> VirtIO SCSI is supported by all Linux distributions for a very long time.
>> 
>> Wido
>> 
>>>> Best regards,
>>>> Laszlo
>>>> 
>>>> On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander <w...@widodh.nl>
>>>> wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> VirtIO SCSI [0] has been supported a while now by Linux and all
>> kernels,
>>>>> but inside CloudStack we are not using it. There is an issue for this
>> [1].
>>>>> 
>>>>> It would bring more (theoretical) performance to VMs, but one of the
>>>>> motivators (for me) is that we can support TRIM/DISCARD [2].
>>>>> 
>>>>> This would allow RBD images on Ceph to shrink, but it can also give
>>>>> back free space on QCOW2 images if guests run fstrim. Something all
>>>>> modern distributions do weekly via cron.
>>>>> 
>>>>> Now, it is simple to swap VirtIO for VirtIO SCSI. This would however
>> mean
>>>>> that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
>>>>> 
>>>>> For GRUB and such this is no problem. This usually works on UUIDs
>>>>> and/or labels, but for static mounts on /dev/vdb1, for example, things
>>>>> break.
>>>>> 
>>>>> We currently don't have any configuration method on how we want to
>>>> present
>>>>> a disk to a guest, so when attaching a volume we can't say that we
>> want
>>>> to
>>>>> use a different driver. If we think that an Operating System supports
>>>> VirtIO
>>>>> we use that driver in KVM.
>>>>> 
>>>>> Any suggestion on how to add VirtIO SCSI support?
>>>>> 
>>>>> Wido
>>>>> 
>>>>> 
>>>>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>>>>> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
>>>>> [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> 
>>>> EOF
>>>> 
>> 
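
On the TRIM/DISCARD motivation quoted above: from inside a guest you can check whether a block device actually advertises discard support with `lsblk -o NAME,DISC-GRAN,DISC-MAX` — non-zero discard granularity means fstrim can do real work. A small illustrative parser for that output; the sample text below is made up, though DISC-GRAN and DISC-MAX are real lsblk columns:

```python
def discard_capable(lsblk_output):
    """Parse `lsblk -o NAME,DISC-GRAN,DISC-MAX` text and return the set
    of devices that advertise discard support (non-zero granularity)."""
    capable = set()
    for line in lsblk_output.strip().splitlines()[1:]:  # skip the header row
        name, gran, _max = line.split()
        if gran != "0B":
            capable.add(name)
    return capable

# Made-up sample: sda (virtio-scsi with discard enabled) supports TRIM,
# vda (virtio-blk here) does not.
sample = """\
NAME DISC-GRAN DISC-MAX
sda  4K        1G
vda  0B        0B
"""
print(discard_capable(sample))  # {'sda'}
```

This is the guest-side counterpart of Wido's point: switching the bus only pays off for Ceph/QCOW2 space reclamation if the device actually reports a discard granularity.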


RE: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Simon Weller
Personally, I'd doubt it. The SCSI driver is the replacement for the blk driver.

Simon Weller/615-312-6068

-Original Message-
From: Syed Ahmed [sah...@cloudops.com]
Received: Saturday, 21 Jan 2017, 3:59PM
To: Simon Weller [swel...@ena.com]
CC: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: Adding VirtIO SCSI to KVM hypervisors

Exposing this via an API would be tricky but it can definitely be added as a 
cluster-wide or a global setting in my opinion. By enabling that, all the 
instances would be using VirtIO SCSI. Is there a reason you'd want some 
instances to use VirtIO and others to use VirtIO SCSI?



On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller 
<swel...@ena.com> wrote:
For the record, we've been looking into this as well.
Has anyone tried it with Windows VMs before? The standard virtio driver doesn't 
support spanned disks and that's something we'd really like to enable for our 
customers.



Simon Weller / 615-312-6068


-Original Message-
From: Wido den Hollander [w...@widodh.nl]
Received: Saturday, 21 Jan 2017, 2:56PM
To: Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org
[dev@cloudstack.apache.org]
Subject: Re: Adding VirtIO SCSI to KVM hypervisors


> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
>
>
> Wido,
>
> Were you thinking of adding this as a global setting? I can see why it will
> be useful. I'm happy to review any ideas you might have around this.
>

Well, not really. We don't have any structure for this in place right now to 
define what type of driver/disk we present to a guest.

See my answer below.

> Thanks,
> -Syed
> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak
> <laszlo.horn...@gmail.com>
> wrote:
>
> > Hi Wido,
> >
> > If I understand correctly from the documentation and your examples, virtio
> > provides a virtio interface to the guest while virtio-scsi provides a SCSI
> > interface, so an IaaS service should not replace one with the other without
> > user request / approval. It would probably be better to let the user set
> > what kind of I/O interface the VM needs.
> >

You'd say that, but we already do this. Some Operating Systems get an IDE disk, 
others a SCSI disk, and when a Linux guest supports it according to our database 
we use VirtIO.

CloudStack has no way of telling how to present a volume to a guest. I think it 
would be a bit too much to just make that configurable. That would mean extra 
database entries, API calls. A bit overkill imho in this case.

VirtIO SCSI is supported by all Linux distributions for a very long time.

Wido

> > Best regards,
> > Laszlo
> >
> > On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander
> > <w...@widodh.nl>
> > wrote:
> >
> > > Hi,
> > >
> > > VirtIO SCSI [0] has been supported a while now by Linux and all kernels,
> > > but inside CloudStack we are not using it. There is an issue for this [1].
> > >
> > > It would bring more (theoretical) performance to VMs, but one of the
> > > motivators (for me) is that we can support TRIM/DISCARD [2].
> > >
> > > This would allow RBD images on Ceph to shrink, but it can also give
> > > back free space on QCOW2 images if guests run fstrim. Something all
> > > modern distributions do weekly via cron.
> > >
> > > Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> > > that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
> > >
> > > For GRUB and such this is no problem. This usually works on UUIDs and/or
> > > labels, but for static mounts on /dev/vdb1, for example, things break.
> > >
> > > We currently don't have any configuration method on how we want to
> > present
> > > a disk to a guest, so when attaching a volume we can't say that we want
> > to
> > > use a different driver. If we think that an Operating System supports
> > VirtIO
> > > we use that driver in KVM.
> > >
> > > Any suggestion on how to add VirtIO SCSI support?
> > >
> > > Wido
> > >
> > >
> > > [0]: http://wiki.qemu.org/Features/VirtioSCSI
> > > [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> > > [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> > >
> >
> >
> >
> > --
> >
> > EOF
> >



Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Syed Ahmed
Exposing this via an API would be tricky but it can definitely be added as
a cluster-wide or a global setting in my opinion. By enabling that, all the
instances would be using VirtIO SCSI. Is there a reason you'd want some
instances to use VirtIO and others to use VirtIO SCSI?



On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller <swel...@ena.com> wrote:

> For the record, we've been looking into this as well.
> Has anyone tried it with Windows VMs before? The standard virtio driver
> doesn't support spanned disks and that's something we'd really like to
> enable for our customers.
>
>
>
> Simon Weller / 615-312-6068
>
>
> -Original Message-
> *From:* Wido den Hollander [w...@widodh.nl]
> *Received:* Saturday, 21 Jan 2017, 2:56PM
> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org [
> dev@cloudstack.apache.org]
> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>
>
> > On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
> >
> >
> > Wido,
> >
> > Were you thinking of adding this as a global setting? I can see why it
> will
> > be useful. I'm happy to review any ideas you might have around this.
> >
>
> Well, not really. We don't have any structure for this in place right now
> to define what type of driver/disk we present to a guest.
>
> See my answer below.
>
> > Thanks,
> > -Syed
> > On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak <laszlo.horn...@gmail.com>
> > wrote:
> >
> > > Hi Wido,
> > >
> > > If I understand correctly from the documentation and your examples,
> > > virtio provides a virtio interface to the guest while virtio-scsi
> > > provides a SCSI interface, so an IaaS service should not replace one
> > > with the other without user request / approval. It would probably be
> > > better to let the user set what kind of I/O interface the VM needs.
> > >
>
> You'd say that, but we already do this. Some Operating Systems get an IDE
> disk, others a SCSI disk, and when a Linux guest supports it according to
> our database we use VirtIO.
>
> CloudStack has no way of telling how to present a volume to a guest. I
> think it would be a bit too much to just make that configurable. That would
> mean extra database entries, API calls. A bit overkill imho in this case.
>
> VirtIO SCSI is supported by all Linux distributions for a very long time.
>
> Wido
>
> > > Best regards,
> > > Laszlo
> > >
> > > On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander <w...@widodh.nl>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > VirtIO SCSI [0] has been supported a while now by Linux and all
> kernels,
> > > > but inside CloudStack we are not using it. There is an issue for this
> [1].
> > > >
> > > > It would bring more (theoretical) performance to VMs, but one of the
> > > > motivators (for me) is that we can support TRIM/DISCARD [2].
> > > >
> > > > This would allow RBD images on Ceph to shrink, but it can also give
> > > > back free space on QCOW2 images if guests run fstrim. Something all
> > > > modern distributions do weekly via cron.
> > > >
> > > > Now, it is simple to swap VirtIO for VirtIO SCSI. This would however
> mean
> > > > that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
> > > >
> > > > For GRUB and such this is no problem. This usually works on UUIDs
> > > > and/or labels, but for static mounts on /dev/vdb1, for example, things
> > > > break.
> > > >
> > > > We currently don't have any configuration method on how we want to
> > > present
> > > > a disk to a guest, so when attaching a volume we can't say that we
> want
> > > to
> > > > use a different driver. If we think that an Operating System supports
> > > VirtIO
> > > > we use that driver in KVM.
> > > >
> > > > Any suggestion on how to add VirtIO SCSI support?
> > > >
> > > > Wido
> > > >
> > > >
> > > > [0]: http://wiki.qemu.org/Features/VirtioSCSI
> > > > [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> > > > [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > EOF
> > >
>


RE: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Simon Weller
For the record, we've been looking into this as well.
Has anyone tried it with Windows VMs before? The standard virtio driver doesn't 
support spanned disks and that's something we'd really like to enable for our 
customers.



Simon Weller/615-312-6068

-Original Message-
From: Wido den Hollander [w...@widodh.nl]
Received: Saturday, 21 Jan 2017, 2:56PM
To: Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org 
[dev@cloudstack.apache.org]
Subject: Re: Adding VirtIO SCSI to KVM hypervisors


> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
>
>
> Wido,
>
> Were you thinking of adding this as a global setting? I can see why it will
> be useful. I'm happy to review any ideas you might have around this.
>

Well, not really. We don't have any structure for this in place right now to 
define what type of driver/disk we present to a guest.

See my answer below.

> Thanks,
> -Syed
> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak <laszlo.horn...@gmail.com>
> wrote:
>
> > Hi Wido,
> >
> > If I understand correctly from the documentation and your examples, virtio
> > provides a virtio interface to the guest while virtio-scsi provides a SCSI
> > interface, so an IaaS service should not replace one with the other without
> > user request / approval. It would probably be better to let the user set
> > what kind of I/O interface the VM needs.
> >

You'd say that, but we already do this. Some Operating Systems get an IDE disk, 
others a SCSI disk, and when a Linux guest supports it according to our database 
we use VirtIO.

CloudStack has no way of telling how to present a volume to a guest. I think it 
would be a bit too much to just make that configurable. That would mean extra 
database entries, API calls. A bit overkill imho in this case.

VirtIO SCSI is supported by all Linux distributions for a very long time.

Wido

> > Best regards,
> > Laszlo
> >
> > On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander <w...@widodh.nl>
> > wrote:
> >
> > > Hi,
> > >
> > > VirtIO SCSI [0] has been supported a while now by Linux and all kernels,
> > > but inside CloudStack we are not using it. There is an issue for this [1].
> > >
> > > It would bring more (theoretical) performance to VMs, but one of the
> > > motivators (for me) is that we can support TRIM/DISCARD [2].
> > >
> > > This would allow RBD images on Ceph to shrink, but it can also give
> > > back free space on QCOW2 images if guests run fstrim. Something all
> > > modern distributions do weekly via cron.
> > >
> > > Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> > > that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
> > >
> > > For GRUB and such this is no problem. This usually works on UUIDs and/or
> > > labels, but for static mounts on /dev/vdb1, for example, things break.
> > >
> > > We currently don't have any configuration method on how we want to
> > present
> > > a disk to a guest, so when attaching a volume we can't say that we want
> > to
> > > use a different driver. If we think that an Operating System supports
> > VirtIO
> > > we use that driver in KVM.
> > >
> > > Any suggestion on how to add VirtIO SCSI support?
> > >
> > > Wido
> > >
> > >
> > > [0]: http://wiki.qemu.org/Features/VirtioSCSI
> > > [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> > > [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> > >
> >
> >
> >
> > --
> >
> > EOF
> >


Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Wido den Hollander

> On 21 January 2017 at 16:15, Syed Ahmed wrote:
> 
> 
> Wido,
> 
> Were you thinking of adding this as a global setting? I can see why it will
> be useful. I'm happy to review any ideas you might have around this.
> 

Well, not really. We don't have any structure for this in place right now to 
define what type of driver/disk we present to a guest.

See my answer below.

> Thanks,
> -Syed
> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak 
> wrote:
> 
> > Hi Wido,
> >
> > If I understand correctly from the documentation and your examples, virtio
> > provides a virtio interface to the guest while virtio-scsi provides a SCSI
> > interface, so an IaaS service should not replace one with the other without
> > user request / approval. It would probably be better to let the user set
> > what kind of I/O interface the VM needs.
> >

You'd say that, but we already do this. Some Operating Systems get an IDE disk, 
others a SCSI disk, and when a Linux guest supports it according to our database 
we use VirtIO.

CloudStack has no way of telling how to present a volume to a guest. I think it 
would be a bit too much to just make that configurable. That would mean extra 
database entries, API calls. A bit overkill imho in this case.

VirtIO SCSI is supported by all Linux distributions for a very long time.

Wido

> > Best regards,
> > Laszlo
> >
> > On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander 
> > wrote:
> >
> > > Hi,
> > >
> > > VirtIO SCSI [0] has been supported a while now by Linux and all kernels,
> > > but inside CloudStack we are not using it. There is an issue for this [1].
> > >
> > > It would bring more (theoretical) performance to VMs, but one of the
> > > motivators (for me) is that we can support TRIM/DISCARD [2].
> > >
> > > This would allow RBD images on Ceph to shrink, but it can also give
> > > back free space on QCOW2 images if guests run fstrim. Something all
> > > modern distributions do weekly via cron.
> > >
> > > Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> > > that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
> > >
> > > For GRUB and such this is no problem. This usually works on UUIDs
> > > and/or labels, but for static mounts on /dev/vdb1, for example, things
> > > break.
> > > We currently don't have any configuration method on how we want to
> > present
> > > a disk to a guest, so when attaching a volume we can't say that we want
> > to
> > > use a different driver. If we think that an Operating System supports
> > VirtIO
> > > we use that driver in KVM.
> > >
> > > Any suggestion on how to add VirtIO SCSI support?
> > >
> > > Wido
> > >
> > >
> > > [0]: http://wiki.qemu.org/Features/VirtioSCSI
> > > [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> > > [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> > >
> >
> >
> >
> > --
> >
> > EOF
> >


Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Syed Ahmed
Wido,

Were you thinking of adding this as a global setting? I can see why it will
be useful. I'm happy to review any ideas you might have around this.

Thanks,
-Syed
On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak 
wrote:

> Hi Wido,
>
> If I understand correctly from the documentation and your examples, virtio
> provides a virtio interface to the guest while virtio-scsi provides a SCSI
> interface, so an IaaS service should not replace one with the other without
> user request / approval. It would probably be better to let the user set
> what kind of I/O interface the VM needs.
>
> Best regards,
> Laszlo
>
> On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander 
> wrote:
>
> > Hi,
> >
> > VirtIO SCSI [0] has been supported a while now by Linux and all kernels,
> > but inside CloudStack we are not using it. There is an issue for this [1].
> >
> > It would bring more (theoretical) performance to VMs, but one of the
> > motivators (for me) is that we can support TRIM/DISCARD [2].
> >
> > This would allow RBD images on Ceph to shrink, but it can also give
> > back free space on QCOW2 images if guests run fstrim. Something all
> > modern distributions do weekly via cron.
> >
> > Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> > that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
> >
> > For GRUB and such this is no problem, since it usually works on UUIDs
> > and/or labels, but static mounts on /dev/vdb1, for example, break.
> >
> > We currently don't have any configuration method for how we want to
> > present a disk to a guest, so when attaching a volume we can't say that
> > we want to use a different driver. If we think that an Operating System
> > supports VirtIO, we use that driver in KVM.
> >
> > Any suggestion on how to add VirtIO SCSI support?
> >
> > Wido
> >
> >
> > [0]: http://wiki.qemu.org/Features/VirtioSCSI
> > [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> > [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> >
>
>
>
> --
>
> EOF
>


Re: Adding VirtIO SCSI to KVM hypervisors

2017-01-21 Thread Laszlo Hornyak
Hi Wido,

If I understand correctly from the documentation and your examples, virtio
provides a virtio interface to the guest while virtio-scsi provides a SCSI
interface, therefore an IaaS service should not replace it without user
request / approval. It would probably be better to let the user set what
kind of I/O interface the VM needs.

Best regards,
Laszlo
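
Laszlo's concern is concrete: the bus choice changes the device names the
guest sees, so a swap behind the user's back breaks any fstab pinned to a
virtio-blk node. A small sketch of that failure mode; the fstab content and
the helper function are illustrative, only the /dev/vdX-vs-/dev/sdX naming
and the UUID=/LABEL= robustness come from the thread:

```python
# Sketch: why silently swapping virtio-blk for virtio-scsi breaks guests.
# A static fstab entry pinned to /dev/vdb1 stops resolving once the disk
# moves to the SCSI bus (where it appears as /dev/sdb1); UUID= and LABEL=
# entries keep working on either bus.
FSTAB = """\
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 /     ext4 defaults 0 1
/dev/vdb1                                 /data ext4 defaults 0 2
"""


def fragile_mounts(fstab: str) -> list:
    """Return mount sources that name a virtio-blk device node directly."""
    fragile = []
    for line in fstab.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("/dev/vd"):
            fragile.append(fields[0])
    return fragile


print(fragile_mounts(FSTAB))
```

Only the /data mount is flagged: the root filesystem is mounted by UUID and
survives the bus change, which is why asking the user before switching (or
making the interface selectable per VM) matters.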

On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander wrote:

> Hi,
>
> VirtIO SCSI [0] has been supported for a while now by the Linux kernel,
> but inside CloudStack we are not using it. There is an issue for this [1].
>
> It would bring more (theoretical) performance to VMs, but one of the
> motivators (for me) is that we can support TRIM/DISCARD [2].
>
> This would allow RBD images on Ceph to shrink, but it can also give
> back free space on QCOW2 images if guests run fstrim, something all modern
> distributions do weekly via a cron job.
>
> Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
>
> For GRUB and such this is no problem, since it usually works on UUIDs
> and/or labels, but static mounts on /dev/vdb1, for example, break.
>
> We currently don't have any configuration method for how we want to present
> a disk to a guest, so when attaching a volume we can't say that we want to
> use a different driver. If we think that an Operating System supports
> VirtIO, we use that driver in KVM.
>
> Any suggestion on how to add VirtIO SCSI support?
>
> Wido
>
>
> [0]: http://wiki.qemu.org/Features/VirtioSCSI
> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
>



-- 

EOF


Adding VirtIO SCSI to KVM hypervisors

2017-01-20 Thread Wido den Hollander
Hi,

VirtIO SCSI [0] has been supported for a while now by the Linux kernel, but
inside CloudStack we are not using it. There is an issue for this [1].

It would bring more (theoretical) performance to VMs, but one of the motivators 
(for me) is that we can support TRIM/DISCARD [2].

This would allow RBD images on Ceph to shrink, but it can also give back
free space on QCOW2 images if guests run fstrim, something all modern
distributions do weekly via a cron job.

Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean that 
disks inside VMs are then called /dev/sdX instead of /dev/vdX.

For GRUB and such this is no problem, since it usually works on UUIDs and/or
labels, but static mounts on /dev/vdb1, for example, break.

We currently don't have any configuration method for how we want to present a
disk to a guest, so when attaching a volume we can't say that we want to use a
different driver. If we think that an Operating System supports VirtIO, we use
that driver in KVM.

Any suggestion on how to add VirtIO SCSI support?

Wido


[0]: http://wiki.qemu.org/Features/VirtioSCSI
[1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
[2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104