Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-18 Thread Marc Zyngier
On 15/01/16 22:47, Sinan Kaya wrote:
> On 1/15/2016 12:32 PM, Marc Zyngier wrote:
 Do you have a link to that? Seeing it would help to ease my concerns.

 The QEMU driver has not been posted yet. As far as I know, it just 
 discovers the memory
 resources on the platform object and creates mappings for the guest 
 machine only. 

 Shanker Donthineni and Vikram Sethi will post the QEMU patch later.
>> Then may I suggest you both synchronize your submissions? I'd really
>> like to hear from the QEMU maintainers that they are satisfied with that
>> side of the story as well.
> 
> The HIDMA QEMU driver is also based on VFIO platform driver in QEMU. It is 
> not a new concept
> or new framework. All tried and tested solutions. 
> 
> The driver below is already using this feature. HIDMA is no exception. 
> I have verified functionality of HIDMA linux driver with HIDMA QEMU driver 
> already.

That you have tested what you propose is the minimum you can do.

> https://lxr.missinglinkelectronics.com/qemu/hw/arm/sysbus-fdt.c#L67
> https://lxr.missinglinkelectronics.com/qemu/hw/vfio/calxeda-xgmac.c#L18
> https://lxr.missinglinkelectronics.com/qemu/include/hw/vfio/vfio-calxeda-xgmac.h
>  

None of which warrants that what you're doing is the right thing. Since
nobody has seen your QEMU code, I'm not going to take any bet.

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Mark Rutland
Hi,

[adding KVM people, given this is meant for virtualization]

On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
> The Qualcomm Technologies HIDMA device has been designed to support
> virtualization technology. The driver has been divided into two to follow
> the hardware design.
> 
> 1. HIDMA Management driver
> 2. HIDMA Channel driver
> 
> Each HIDMA HW consists of multiple channels. These channels share some set
> of common parameters. These parameters are initialized by the management
> driver during power up. Same management driver is used for monitoring the
> execution of the channels. Management driver can change the performance
> behavior dynamically such as bandwidth allocation and prioritization.
> 
> The management driver is executed in hypervisor context and is the main
> management entity for all channels provided by the device.

You mention repeatedly that this is designed for virtualization, but
looking at the series as it stands today I can't see how this operates
from the host side.

This doesn't seem to tie into KVM or VFIO, and as far as I can tell
there's no mechanism for associating channels with a particular virtual
address space (i.e. no configuration of an external or internal IOMMU),
nor pinning of guest pages to allow for DMA to occur safely.

Given that, I'm at a loss as to how this would be used in a hypervisor
context. What am I missing?

Are there additional patches, or do you have some userspace that works
with this in some limited configuration?

Thanks,
Mark.

> Signed-off-by: Sinan Kaya 
> Reviewed-by: Andy Shevchenko 
> ---
>  .../ABI/testing/sysfs-platform-hidma-mgmt  |  97 +++
>  drivers/dma/qcom/Kconfig   |  11 +
>  drivers/dma/qcom/Makefile  |   2 +
>  drivers/dma/qcom/hidma_mgmt.c  | 302 +
>  drivers/dma/qcom/hidma_mgmt.h  |  39 +++
>  drivers/dma/qcom/hidma_mgmt_sys.c  | 295 
>  6 files changed, 746 insertions(+)
>  create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
>  create mode 100644 drivers/dma/qcom/hidma_mgmt.c
>  create mode 100644 drivers/dma/qcom/hidma_mgmt.h
>  create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c
> 
> diff --git a/Documentation/ABI/testing/sysfs-platform-hidma-mgmt 
> b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
> new file mode 100644
> index 000..c2fb5d0
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
> @@ -0,0 +1,97 @@
> +What:/sys/devices/platform/hidma-mgmt*/chanops/chan*/priority
> + /sys/devices/platform/QCOM8060:*/chanops/chan*/priority
> +Date:Nov 2015
> +KernelVersion:   4.4
> +Contact: "Sinan Kaya "
> +Description:
> + Contains either 0 or 1 and indicates if the DMA channel is a
> + low priority (0) or high priority (1) channel.
> +
> +What:/sys/devices/platform/hidma-mgmt*/chanops/chan*/weight
> + /sys/devices/platform/QCOM8060:*/chanops/chan*/weight
> +Date:Nov 2015
> +KernelVersion:   4.4
> +Contact: "Sinan Kaya "
> +Description:
> + Contains 0..15 and indicates the weight of the channel among
> + equal priority channels during round robin scheduling.
> +
> +What:/sys/devices/platform/hidma-mgmt*/chreset_timeout_cycles
> + /sys/devices/platform/QCOM8060:*/chreset_timeout_cycles
> +Date:Nov 2015
> +KernelVersion:   4.4
> +Contact: "Sinan Kaya "
> +Description:
> + Contains the platform specific cycle value to wait after a
> + reset command is issued. If the value is chosen too short,
> + then the HW will issue a reset failure interrupt. The value
> + is platform specific and should not be changed without
> + consultation.
> +
> +What:/sys/devices/platform/hidma-mgmt*/dma_channels
> + /sys/devices/platform/QCOM8060:*/dma_channels
> +Date:Nov 2015
> +KernelVersion:   4.4
> +Contact: "Sinan Kaya "
> +Description:
> + Contains the number of dma channels supported by one instance
> + of HIDMA hardware. The value may change from chip to chip.
> +
> +What:/sys/devices/platform/hidma-mgmt*/hw_version_major
> + /sys/devices/platform/QCOM8060:*/hw_version_major
> +Date:Nov 2015
> +KernelVersion:   4.4
> +Contact: "Sinan Kaya "
> +Description:
> + Version number major for the hardware.
> +
> +What:/sys/devices/platform/hidma-mgmt*/hw_version_minor
> + /sys/devices/platform/QCOM8060:*/hw_version_minor
> +Date:Nov 

Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Mark Rutland
On Fri, Jan 15, 2016 at 10:12:00AM -0500, Sinan Kaya wrote:
> Hi Mark,
> 
> On 1/15/2016 9:56 AM, Mark Rutland wrote:
> > Hi,
> > 
> > [adding KVM people, given this is meant for virtualization]
> > 
> > On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
> >> The Qualcomm Technologies HIDMA device has been designed to support
> >> virtualization technology. The driver has been divided into two to follow
> >> the hardware design.
> >>
> >> 1. HIDMA Management driver
> >> 2. HIDMA Channel driver
> >>
> >> Each HIDMA HW consists of multiple channels. These channels share some set
> >> of common parameters. These parameters are initialized by the management
> >> driver during power up. Same management driver is used for monitoring the
> >> execution of the channels. Management driver can change the performance
> >> behavior dynamically such as bandwidth allocation and prioritization.
> >>
> >> The management driver is executed in hypervisor context and is the main
> >> management entity for all channels provided by the device.
> > 
> > You mention repeatedly that this is designed for virtualization, but
> > looking at the series as it stands today I can't see how this operates
> > from the host side.
> > 
> > This doesn't seem to tie into KVM or VFIO, and as far as I can tell
> > there's no mechanism for associating channels with a particular virtual
> > address space (i.e. no configuration of an external or internal IOMMU),
> > nor pinning of guest pages to allow for DMA to occur safely.
> 
> I'm using VFIO platform driver for this purpose. VFIO platform driver is 
> capable of assigning any platform device to a guest machine with this driver. 

Typically VFIO-platform also comes with a corresponding reset driver.
You don't need one?

> You just unbind the HIDMA channel driver from the hypervisor and bind to vfio
> driver using the very same approach you'd use with PCIe. 
> 
> Of course, this all assumes the presence of an IOMMU driver on the system. 
> VFIO
> driver uses the IOMMU driver to create the mappings. 

No IOMMU was described in the DT binding. It sounds like you'd need an
optional (not present in the guest) iommus property per-channel.

> The mechanism used here is not different from VFIO PCI from user perspective.
> 
> > 
> > Given that, I'm at a loss as to how this would be used in a hypervisor
> > context. What am I missing?
> > 
> > Are there additional patches, or do you have some userspace that works
> > with this in some limited configuration?
> 
> No, these are the only patches. We have one patch for the QEMU but from kernel
> perspective this is it. 

Do you have a link to that? Seeing it would help to ease my concerns.

Thanks,
Mark.


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Mark Rutland
On Fri, Jan 15, 2016 at 03:14:28PM +, Marc Zyngier wrote:
> On 15/01/16 14:56, Mark Rutland wrote:
> > Hi,
> > 
> > [adding KVM people, given this is meant for virtualization]
> > 
> > On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
> >> The Qualcomm Technologies HIDMA device has been designed to support
> >> virtualization technology. The driver has been divided into two to follow
> >> the hardware design.
> >>
> >> 1. HIDMA Management driver
> >> 2. HIDMA Channel driver
> >>
> >> Each HIDMA HW consists of multiple channels. These channels share some set
> >> of common parameters. These parameters are initialized by the management
> >> driver during power up. Same management driver is used for monitoring the
> >> execution of the channels. Management driver can change the performance
> >> behavior dynamically such as bandwidth allocation and prioritization.
> >>
> >> The management driver is executed in hypervisor context and is the main
> >> management entity for all channels provided by the device.
> > 
> > You mention repeatedly that this is designed for virtualization, but
> > looking at the series as it stands today I can't see how this operates
> > from the host side.
> 
> Nor the guest's, TBH. How do host and guest communicate, what is the
> infrastructure, how is it meant to be used? A lot of questions, and no
> answer whatsoever in this series.

I think the guest's PoV is fairly simple and understood. The DMA channel
is passed in, as with the passthrough of any other platform device.

No communication with the host is necessary -- an isolated channel is
usable.

The larger concern is isolation, given the lack of IOMMU, or anything
obvious w.r.t. pinning of pages.

> > This doesn't seem to tie into KVM or VFIO, and as far as I can tell
> > there's no mechanism for associating channels with a particular virtual
> > address space (i.e. no configuration of an external or internal IOMMU),
> > nor pinning of guest pages to allow for DMA to occur safely.
> > 
> > Given that, I'm at a loss as to how this would be used in a hypervisor
> > context. What am I missing?
> > 
> > Are there additional patches, or do you have some userspace that works
> > with this in some limited configuration?
> 
> Well, this looks so far like a code dumping exercise. I'd very much
> appreciate a HIDMA101 crash course:
> 
> - How do host and guest communicate?
> - How is the integration performed in the hypervisor?
> - Does the HYP side requires any context switch (and how is that done)?

I don't believe this requires any context switch -- it's the same as
assigning any other platform device, apart from the additional properties
being controlled in the management interface.

> - What makes it safe?

I'm concerned with how this is safe, and with the userspace interface.
e.g. if the user wants to up the QoS for a VM, how do they find the
right channel in sysfs to alter?

> Without any of this information (and pointer to the code to back it up),
> I'm very reluctant to take any of this.

Likewise.

Thanks,
Mark.


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Sinan Kaya
>>> This doesn't seem to tie into KVM or VFIO, and as far as I can tell
>>> there's no mechanism for associating channels with a particular virtual
>>> address space (i.e. no configuration of an external or internal IOMMU),
>>> nor pinning of guest pages to allow for DMA to occur safely.
>>
>> I'm using VFIO platform driver for this purpose. VFIO platform driver is 
>> capable of assigning any platform device to a guest machine with this 
>> driver. 
> 
> Typically VFIO-platform also comes with a corresponding reset driver.
> You don't need one?

The HIDMA channel driver resets the channel before using it. That's why, I never
bothered with writing a reset driver on the hypervisor.

> 
>> You just unbind the HIDMA channel driver from the hypervisor and bind to vfio
>> driver using the very same approach you'd use with PCIe. 
>>
>> Of course, this all assumes the presence of an IOMMU driver on the system. 
>> VFIO
>> driver uses the IOMMU driver to create the mappings. 
> 
> No IOMMU was described in the DT binding. It sounds like you'd need an
> optional (not present in the guest) iommus property per-channel.

You are right. I missed that part. I'll update the device-tree binding 
documentation.

> 
>> The mechanism used here is not different from VFIO PCI from user perspective.
>>
>>>
>>> Given that, I'm at a loss as to how this would be used in a hypervisor
>>> context. What am I missing?
>>>
>>> Are there additional patches, or do you have some userspace that works
>>> with this in some limited configuration?
>>
>> No, these are the only patches. We have one patch for the QEMU but from 
>> kernel
>> perspective this is it. 
> 
> Do you have a link to that? Seeing it would help to ease my concerns.

The QEMU driver has not been posted yet. As far as I know, it just discovers 
the memory
resources on the platform object and creates mappings for the guest machine 
only. 

Shanker Donthineni and Vikram Sethi will post the QEMU patch later.

> 
> Thanks,
> Mark.
> 


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Marc Zyngier
On 15/01/16 15:40, Sinan Kaya wrote:
> On 1/15/2016 10:14 AM, Marc Zyngier wrote:
>> On 15/01/16 14:56, Mark Rutland wrote:
>>> Hi,
>>>
>>> [adding KVM people, given this is meant for virtualization]
>>>
>>> On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
 The Qualcomm Technologies HIDMA device has been designed to support
 virtualization technology. The driver has been divided into two to follow
 the hardware design.

 1. HIDMA Management driver
 2. HIDMA Channel driver

 Each HIDMA HW consists of multiple channels. These channels share some set
 of common parameters. These parameters are initialized by the management
 driver during power up. Same management driver is used for monitoring the
 execution of the channels. Management driver can change the performance
 behavior dynamically such as bandwidth allocation and prioritization.

 The management driver is executed in hypervisor context and is the main
 management entity for all channels provided by the device.
>>>
>>> You mention repeatedly that this is designed for virtualization, but
>>> looking at the series as it stands today I can't see how this operates
>>> from the host side.
>>
>> Nor the guest's, TBH. How do host and guest communicate, what is the
>> infrastructure, how is it meant to be used? A lot of questions, and no
>> answer whatsoever in this series.
> 
> I always make an analogy of HIDMA channel driver to a PCI endpoint device 
> driver (8139too for example)
> running on the guest machine.
> 
> Both HIDMA and PCI use the device pass-through approach.
> 
> I don't have an infrastructure for host and guest to communicate as I don't 
> need to.
> A HIDMA channel is assigned to a guest machine after an unbind from the host 
> machine. 
> 
> Guest machine uses HIDMA channel driver to offload DMA operations. The guest 
> machine owns the
> HW registers for the channel. It doesn't need to trap to host for register 
> read/writes etc.
> 
> All guest machine pages used are assumed to be pinned similar to VFIO PCI. 
> The reason is performance. The IOMMU takes care of the address translation 
> for me.
> 
>>
>>>
>>> This doesn't seem to tie into KVM or VFIO, and as far as I can tell
>>> there's no mechanism for associating channels with a particular virtual
>>> address space (i.e. no configuration of an external or internal IOMMU),
>>> nor pinning of guest pages to allow for DMA to occur safely.
>>>
>>> Given that, I'm at a loss as to how this would be used in a hypervisor
>>> context. What am I missing?
>>>
>>> Are there additional patches, or do you have some userspace that works
>>> with this in some limited configuration?
>>
>> Well, this looks so far like a code dumping exercise. I'd very much
>> appreciate a HIDMA101 crash course:
> 
> Sure, I'm ready to answer any questions. This is really a VFIO platform 
> course. Not
> a HIDMA driver course. The approach is not different if you assign a platform 
> SATA (AHCI) or SDHC driver to a guest machine.

I happen to have an idea of how VFIO works...

> 
> The summary is that:
> - IOMMU takes care of the mappings via VFIO driver.
> - Guest machine owns the HW. No hypervisor interaction.

Then it might be worth mentioning all of this.

> 
>>
>> - How do host and guest communicate?
> They don't.
> 
>> - How is the integration performed in the hypervisor?
> Hypervisor has a bunch of channel resources. For each guest machine, the 
> channel gets
> unbound from the hypervisor. Channels get bound to each VFIO platform device 
> and then
> control is given to the guest machine.

And what does the hypervisor do with those in the meantime? Above, you
say "Guest machine owns the HW". So what is that hypervisor code used
for? Is that your reset driver?

You may want to drop the "hypervisor" designation, BTW, because this has
no real connection to virtualisation.

> 
> Once the guest machine is shutdown, VFIO driver still owns the channel 
> device. It can
> assign the device to another guest machine.
> 
>> - Does the HYP side requires any context switch (and how is that done)?
> No communication is needed.
> 
>> - What makes it safe?
> No communication is needed.
> 
>>
>> Without any of this information (and pointer to the code to back it up),
>> I'm very reluctant to take any of this.
> 
> Please let me know what exactly is not clear. 
> 
> You don't write a virtualization driver for 8139too driver. The driver works 
> whether it is running in the 
> guest machine or the hypervisor. 

Exactly. No hypervisor code needed whatsoever. So please get rid of this
hypervisor nonsense! ;-)

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Marc Zyngier
On 15/01/16 17:16, Sinan Kaya wrote:
 This doesn't seem to tie into KVM or VFIO, and as far as I can tell
 there's no mechanism for associating channels with a particular virtual
 address space (i.e. no configuration of an external or internal IOMMU),
 nor pinning of guest pages to allow for DMA to occur safely.
>>>
>>> I'm using VFIO platform driver for this purpose. VFIO platform driver is 
>>> capable of assigning any platform device to a guest machine with this 
>>> driver. 
>>
>> Typically VFIO-platform also comes with a corresponding reset driver.
>> You don't need one?
> 
> The HIDMA channel driver resets the channel before using it. That's why, I 
> never
> bothered with writing a reset driver on the hypervisor.
> 
>>
>>> You just unbind the HIDMA channel driver from the hypervisor and bind to 
>>> vfio
>>> driver using the very same approach you'd use with PCIe. 
>>>
>>> Of course, this all assumes the presence of an IOMMU driver on the system. 
>>> VFIO
>>> driver uses the IOMMU driver to create the mappings. 
>>
>> No IOMMU was described in the DT binding. It sounds like you'd need an
>> optional (not present in the guest) iommus property per-channel.
> 
> You are right. I missed that part. I'll update the device-tree binding 
> documentation.
> 
>>
>>> The mechanism used here is not different from VFIO PCI from user 
>>> perspective.
>>>

 Given that, I'm at a loss as to how this would be used in a hypervisor
 context. What am I missing?

 Are there additional patches, or do you have some userspace that works
 with this in some limited configuration?
>>>
>>> No, these are the only patches. We have one patch for the QEMU but from 
>>> kernel
>>> perspective this is it. 
>>
>> Do you have a link to that? Seeing it would help to ease my concerns.
> 
> The QEMU driver has not been posted yet. As far as I know, it just discovers 
> the memory
> resources on the platform object and creates mappings for the guest machine 
> only. 
> 
> Shanker Donthineni and Vikram Sethi will post the QEMU patch later.

Then may I suggest you both synchronize your submissions? I'd really
like to hear from the QEMU maintainers that they are satisfied with that
side of the story as well.

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Sinan Kaya
On 1/15/2016 10:14 AM, Marc Zyngier wrote:
> On 15/01/16 14:56, Mark Rutland wrote:
>> Hi,
>>
>> [adding KVM people, given this is meant for virtualization]
>>
>> On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
>>> The Qualcomm Technologies HIDMA device has been designed to support
>>> virtualization technology. The driver has been divided into two to follow
>>> the hardware design.
>>>
>>> 1. HIDMA Management driver
>>> 2. HIDMA Channel driver
>>>
>>> Each HIDMA HW consists of multiple channels. These channels share some set
>>> of common parameters. These parameters are initialized by the management
>>> driver during power up. Same management driver is used for monitoring the
>>> execution of the channels. Management driver can change the performance
>>> behavior dynamically such as bandwidth allocation and prioritization.
>>>
>>> The management driver is executed in hypervisor context and is the main
>>> management entity for all channels provided by the device.
>>
>> You mention repeatedly that this is designed for virtualization, but
>> looking at the series as it stands today I can't see how this operates
>> from the host side.
> 
> Nor the guest's, TBH. How do host and guest communicate, what is the
> infrastructure, how is it meant to be used? A lot of questions, and no
> answer whatsoever in this series.

I always make an analogy of HIDMA channel driver to a PCI endpoint device 
driver (8139too for example)
running on the guest machine.

Both HIDMA and PCI use the device pass-through approach.

I don't have an infrastructure for host and guest to communicate as I don't 
need to.
A HIDMA channel is assigned to a guest machine after an unbind from the host 
machine. 

Guest machine uses HIDMA channel driver to offload DMA operations. The guest 
machine owns the
HW registers for the channel. It doesn't need to trap to host for register 
read/writes etc.

All guest machine pages used are assumed to be pinned similar to VFIO PCI. 
The reason is performance. The IOMMU takes care of the address translation for 
me.

> 
>>
>> This doesn't seem to tie into KVM or VFIO, and as far as I can tell
>> there's no mechanism for associating channels with a particular virtual
>> address space (i.e. no configuration of an external or internal IOMMU),
>> nor pinning of guest pages to allow for DMA to occur safely.
>>
>> Given that, I'm at a loss as to how this would be used in a hypervisor
>> context. What am I missing?
>>
>> Are there additional patches, or do you have some userspace that works
>> with this in some limited configuration?
> 
> Well, this looks so far like a code dumping exercise. I'd very much
> appreciate a HIDMA101 crash course:

Sure, I'm ready to answer any questions. This is really a VFIO platform course. 
Not
a HIDMA driver course. The approach is not different if you assign a platform 
SATA (AHCI) or SDHC driver to a guest machine.

The summary is that:
- IOMMU takes care of the mappings via VFIO driver.
- Guest machine owns the HW. No hypervisor interaction.

> 
> - How do host and guest communicate?
They don't.

> - How is the integration performed in the hypervisor?
Hypervisor has a bunch of channel resources. For each guest machine, the 
channel gets
unbound from the hypervisor. Channels get bound to each VFIO platform device and 
then
control is given to the guest machine.

Once the guest machine is shutdown, VFIO driver still owns the channel device. 
It can
assign the device to another guest machine.

> - Does the HYP side requires any context switch (and how is that done)?
No communication is needed.

> - What makes it safe?
No communication is needed.

> 
> Without any of this information (and pointer to the code to back it up),
> I'm very reluctant to take any of this.

Please let me know what exactly is not clear. 

You don't write a virtualization driver for 8139too driver. The driver works 
whether it is running in the 
guest machine or the hypervisor. 

The 8139too driver does not trap to the hypervisor for functionality when used 
in device
pass-through mode.

No difference here.

> 
> Thanks,
> 
>   M.
> 


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Sinan Kaya
On 1/15/2016 10:36 AM, Mark Rutland wrote:
> On Fri, Jan 15, 2016 at 03:14:28PM +, Marc Zyngier wrote:
>> On 15/01/16 14:56, Mark Rutland wrote:
>>> Hi,
>>>
>>> [adding KVM people, given this is meant for virtualization]
>>>
>>> On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
 The Qualcomm Technologies HIDMA device has been designed to support
 virtualization technology. The driver has been divided into two to follow
 the hardware design.

 1. HIDMA Management driver
 2. HIDMA Channel driver

 Each HIDMA HW consists of multiple channels. These channels share some set
 of common parameters. These parameters are initialized by the management
 driver during power up. Same management driver is used for monitoring the
 execution of the channels. Management driver can change the performance
 behavior dynamically such as bandwidth allocation and prioritization.

 The management driver is executed in hypervisor context and is the main
 management entity for all channels provided by the device.
>>>
>>> You mention repeatedly that this is designed for virtualization, but
>>> looking at the series as it stands today I can't see how this operates
>>> from the host side.
>>
>> Nor the guest's, TBH. How do host and guest communicate, what is the
>> infrastructure, how is it meant to be used? A lot of questions, and no
>> answer whatsoever in this series.
> 
> I think the guest's PoV is fairly simple and understood. The DMA channel
> is passed in, as with the passthrough of any other platform device.
> 
> No communication with the host is necessary -- an isolated channel is
> usable.
> 

Correct, I'm behind on emails. I'm following you. 

> The larger concern is isolation, given the lack of IOMMU, or anything
> obvious w.r.t. pinning of pages.
> 
I assume the presence of an IOMMU when the channel is used in a guest machine.
I wonder if I can place a check and make the driver fail if an IOMMU driver is
not present.

Any ideas?
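For what it's worth, one way to sketch such a presence check from userspace is
to look for a registered IOMMU under /sys/class/iommu (an in-kernel check in
the probe path would be the real fix; this is only a hedged illustration, and
the scratch tree below stands in for a live /sys):

```shell
#!/bin/sh
# Hedged sketch: succeed only if at least one IOMMU driver has registered
# under the given sysfs root's class/iommu directory.
iommu_present_in() {
    [ -d "$1/class/iommu" ] && [ -n "$(ls -A "$1/class/iommu" 2>/dev/null)" ]
}

# Demonstrate against a scratch tree rather than the live /sys;
# "smmu0" is a made-up IOMMU instance name.
root=$(mktemp -d)
mkdir -p "$root/class/iommu/smmu0"
if iommu_present_in "$root"; then
    echo "iommu present"
else
    echo "no iommu: refusing device assignment"
fi
```

On a real host you would call `iommu_present_in /sys` before attempting the
VFIO assignment.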

>>> This doesn't seem to tie into KVM or VFIO, and as far as I can tell
>>> there's no mechanism for associating channels with a particular virtual
>>> address space (i.e. no configuration of an external or internal IOMMU),
>>> nor pinning of guest pages to allow for DMA to occur safely.
>>>
>>> Given that, I'm at a loss as to how this would be used in a hypervisor
>>> context. What am I missing?
>>>
>>> Are there additional patches, or do you have some userspace that works
>>> with this in some limited configuration?

I forgot to mention that these are the only kernel patches. A userspace 
application
is being built as we speak by another team.

The userspace application will use sysfs to communicate to the management 
driver. 
The management driver knows how to change runtime characteristics like priority 
and
weight.



>>
>> Well, this looks so far like a code dumping exercise. I'd very much
>> appreciate a HIDMA101 crash course:
>>
>> - How do host and guest communicate?
>> - How is the integration performed in the hypervisor?
>> - Does the HYP side requires any context switch (and how is that done)?
> 
> I don't believe this requires any context switch -- it's the same as
> assigning any other platform device, apart from the additional properties
> being controlled in the management interface.

Agreed.

> 
>> - What makes it safe?
> 
> I'm concerned with how this is safe, and with the userspace interface.
> e.g. if the user wants to up the QoS for a VM, how do they find the
> right channel in sysfs to alter?

The HW supports changing the QoS values on the fly. In order to locate the
object, I'm exporting a sysfs attribute (chid) on each channel.

I tried to address your concern in the v10 series. Here is a brief summary.

Each channel device has a sysfs entry named chid.
What:   /sys/devices/platform/hidma-*/chid
+   /sys/devices/platform/QCOM8061:*/chid


Each management object has one priority and weight file per channel.
+What:  /sys/devices/platform/hidma-mgmt*/chanops/chan*/priority
+   /sys/devices/platform/QCOM8060:*/chanops/chan*/priority

Suppose you want to change the priority of a channel you assigned to a guest:
the userspace application first reads the chid value of the channel.

Then it goes to the chanops/chan<chid>/ directory and changes the priority and
weight parameters there.

Here is what the directory looks like. QCOM8060:00 is the management object.
QCOM8061:0x are the channel objects.

/sys/devices/platform/QCOM8060:00# ls
QCOM8061:00
QCOM8061:01
QCOM8061:02
QCOM8061:03
QCOM8061:04
QCOM8061:05
chanops
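The flow above can be sketched as a few sysfs reads and writes. The tree
created below is a scratch stand-in for the layout in the ABI document (so the
sketch runs without the hardware); on a real host the paths live under
/sys/devices/platform/, and the channel/management names come from enumeration:

```shell
#!/bin/sh
# Sketch of the QoS flow: read a channel's chid, then adjust the matching
# chanops entry on the management object.
mgmt=$(mktemp -d)                      # stands in for .../QCOM8060:00
mkdir -p "$mgmt/QCOM8061:02" "$mgmt/chanops/chan2"
echo 2 > "$mgmt/QCOM8061:02/chid"      # channel advertises its index

# 1. Discover which management slot the assigned channel occupies.
chid=$(cat "$mgmt/QCOM8061:02/chid")

# 2. QoS is changed through the management object, not the channel itself.
echo 1  > "$mgmt/chanops/chan$chid/priority"   # 1 = high priority
echo 15 > "$mgmt/chanops/chan$chid/weight"     # 0..15 round-robin weight

echo "chan$chid priority=$(cat "$mgmt/chanops/chan$chid/priority")"
```

The key point the sketch illustrates is the indirection: the chid read on the
channel device is what ties a guest-assigned channel back to its chanops
directory on the host-side management object.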





> 
>> Without any of this information (and pointer to the code to back it up),
>> I'm very reluctant to take any of this.
> 
> Likewise.
> 
> Thanks,
> Mark.
> 


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project

Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Sinan Kaya
Hi Mark,

On 1/15/2016 9:56 AM, Mark Rutland wrote:
> Hi,
> 
> [adding KVM people, given this is meant for virtualization]
> 
> On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
>> The Qualcomm Technologies HIDMA device has been designed to support
>> virtualization technology. The driver has been divided into two to follow
>> the hardware design.
>>
>> 1. HIDMA Management driver
>> 2. HIDMA Channel driver
>>
>> Each HIDMA HW consists of multiple channels. These channels share some set
>> of common parameters. These parameters are initialized by the management
>> driver during power up. Same management driver is used for monitoring the
>> execution of the channels. Management driver can change the performance
>> behavior dynamically such as bandwidth allocation and prioritization.
>>
>> The management driver is executed in hypervisor context and is the main
>> management entity for all channels provided by the device.
> 
> You mention repeatedly that this is designed for virtualization, but
> looking at the series as it stands today I can't see how this operates
> from the host side.
> 
> This doesn't seem to tie into KVM or VFIO, and as far as I can tell
> there's no mechanism for associating channels with a particular virtual
> address space (i.e. no configuration of an external or internal IOMMU),
> nor pinning of guest pages to allow for DMA to occur safely.

I'm using VFIO platform driver for this purpose. VFIO platform driver is 
capable of assigning any platform device to a guest machine with this driver. 

You just unbind the HIDMA channel driver from the hypervisor and bind to vfio
driver using the very same approach you'd use with PCIe. 

Of course, this all assumes the presence of an IOMMU driver on the system. VFIO
driver uses the IOMMU driver to create the mappings. 

The mechanism used here is not different from VFIO PCI from user perspective.

> 
> Given that, I'm at a loss as to how this would be used in a hypervisor
> context. What am I missing?
> 
> Are there additional patches, or do you have some userspace that works
> with this in some limited configuration?

No, these are the only patches. We have one patch for the QEMU but from kernel
perspective this is it. 

I just rely on the platform VFIO driver to do the work. 
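The unbind/bind sequence described above can be sketched as follows. The
channel name (QCOM8061:00) is an example, and the scratch tree stands in for
/sys so the sketch can run without the hardware; on a real host you would
operate on /sys directly:

```shell
#!/bin/sh
# Sketch of handing one HIDMA channel to vfio-platform.
SYSFS=$(mktemp -d)                  # scratch stand-in for /sys
CHAN="QCOM8061:00"
dev="$SYSFS/bus/platform/devices/$CHAN"

# Scratch stand-ins for the sysfs files the real flow writes to.
mkdir -p "$dev/driver"
: > "$dev/driver/unbind"
: > "$dev/driver_override"
: > "$SYSFS/bus/platform/drivers_probe"

# 1. Detach the channel from the host-side HIDMA channel driver.
printf '%s' "$CHAN" > "$dev/driver/unbind"

# 2. Steer the device to vfio-platform and trigger a re-probe.
printf '%s' vfio-platform > "$dev/driver_override"
printf '%s' "$CHAN" > "$SYSFS/bus/platform/drivers_probe"

cat "$dev/driver_override"
```

After this, QEMU opens the device through the VFIO platform API and the IOMMU
mappings for the guest are set up by the VFIO/IOMMU layer, as described above.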

> 
> Thanks,
> Mark.
> 

Sinan


Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver

2016-01-15 Thread Sinan Kaya
On 1/15/2016 12:32 PM, Marc Zyngier wrote:
>>> Do you have a link to that? Seeing it would help to ease my concerns.
>> > 
>> > The QEMU driver has not been posted yet. As far as I know, it just 
>> > discovers the memory
>> > resources on the platform object and creates mappings for the guest 
>> > machine only. 
>> > 
>> > Shanker Donthineni and Vikram Sethi will post the QEMU patch later.
> Then may I suggest you both synchronize your submissions? I'd really
> like to hear from the QEMU maintainers that they are satisfied with that
> side of the story as well.

The HIDMA QEMU driver is also based on VFIO platform driver in QEMU. It is not 
a new concept
or new framework. All tried and tested solutions. 

The driver below is already using this feature. HIDMA is no exception. 
I have verified functionality of HIDMA linux driver with HIDMA QEMU driver 
already.

https://lxr.missinglinkelectronics.com/qemu/hw/arm/sysbus-fdt.c#L67
https://lxr.missinglinkelectronics.com/qemu/hw/vfio/calxeda-xgmac.c#L18
https://lxr.missinglinkelectronics.com/qemu/include/hw/vfio/vfio-calxeda-xgmac.h
 



-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project