Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Mon, 27 Oct 2014 18:58:35 +0100 Joerg Roedel wrote:
> On Mon, Oct 27, 2014 at 06:02:19PM +0100, Gerald Schaefer wrote:
> > On Mon, 27 Oct 2014 17:25:02 +0100 Joerg Roedel wrote:
> > > Is there some hardware reason for this, or is that just an
> > > implementation detail that can be changed? In other words, does
> > > the hardware allow using the same DMA table for multiple devices?
> >
> > Yes, the HW would allow shared DMA tables, but the implementation
> > would need some non-trivial changes. For example, we have a
> > per-device spin_lock for DMA table manipulations, and the code in
> > arch/s390/pci/pci_dma.c knows nothing about IOMMU domains or shared
> > DMA tables; it just implements a set of dma_map_ops.
>
> I think it would make sense to move the DMA table handling code and
> the dma_map_ops implementation to the IOMMU driver too. This is also
> how some other IOMMU drivers implement it.

Yes, I feared that this would come up, but I agree that it looks like
the best solution, at least if we really want/need the IOMMU API for
s390 now. I'll need to discuss this with Frank; he seems to be on
vacation this week. Thanks for your feedback and explanations!

> The plan is to consolidate the dma_ops implementations someday and
> have a common implementation that works with all IOMMU drivers across
> architectures. This would benefit s390 as well and obsolete the
> driver-specific dma_ops implementation.
>
> > Of course this would also go horribly wrong if a device was already
> > in use (via the current dma_map_ops), but I guess using devices
> > through the IOMMU_API prevents using them otherwise?
>
> This is taken care of by the device drivers. A driver for a device
> either uses the DMA-API or does its own management of DMA mappings
> using the IOMMU-API. VFIO is an example of the latter case.
>
> > > I think it is much easier to use the same DMA table for all
> > > devices in a domain, if the hardware allows that.
> >
> > Yes, in this case, having one DMA table per domain and sharing it
> > between all devices in that domain sounds like a good idea. However,
> > I can't think of any use case for this, and Frank probably had a
> > very special use case in mind where this scenario doesn't appear,
> > hence the "one device per domain" restriction.
>
> One use case is device access from user space via VFIO. A userspace
> process might want to access multiple devices at the same time, and
> VFIO would implement this by assigning all of these devices to the
> same IOMMU domain.
>
> This requirement also comes from the IOMMU-API itself. The intention
> of the API is to make different IOMMUs look the same through the API,
> and this is violated when drivers implement a 1-1 domain->device
> mapping.
>
> > So, if having multiple devices per domain is a must, then we
> > probably need a thorough rewrite of the arch/s390/pci/pci_dma.c
> > code.
>
> Yes, this is a requirement for new IOMMU drivers. We already have
> drivers implementing the same 1-1 relation and we are about to fix
> them. But I don't want to add new drivers doing the same.
>
> 	Joerg
> --
> To unsubscribe from this list: send the line "unsubscribe linux-s390"
> in the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
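The shared-table design the thread converges on can be sketched abstractly: the domain owns a single translation table, attached devices merely point at it, and one `map` call is therefore visible to every device at once. This is a hypothetical Python model of the concept, not the kernel API; all names are illustrative:

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.table = None          # set when attached to a domain

    def translate(self, iova):
        return self.table[iova]    # the device walks whatever table it points at

class Domain:
    """One IOMMU domain: a single shared DMA table for all attached devices."""
    def __init__(self):
        self.table = {}            # iova -> physical address
        self.devices = []

    def attach(self, dev):
        dev.table = self.table     # share the one table; no copying needed
        self.devices.append(dev)

    def map(self, iova, phys):
        # One update suffices: every attached device sees it immediately.
        self.table[iova] = phys

dom = Domain()
a, b = Device("pdev0"), Device("pdev1")
dom.attach(a)
dom.attach(b)
dom.map(0x10000, 0xCAFE000)
assert a.translate(0x10000) == b.translate(0x10000) == 0xCAFE000
```

Because attach is just a pointer assignment, a device attached after mappings already exist inherits them for free, which is exactly the property Joerg argues for.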
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Mon, Oct 27, 2014 at 06:02:19PM +0100, Gerald Schaefer wrote:
> On Mon, 27 Oct 2014 17:25:02 +0100 Joerg Roedel wrote:
> > Is there some hardware reason for this, or is that just an
> > implementation detail that can be changed? In other words, does the
> > hardware allow using the same DMA table for multiple devices?
>
> Yes, the HW would allow shared DMA tables, but the implementation would
> need some non-trivial changes. For example, we have a per-device
> spin_lock for DMA table manipulations, and the code in
> arch/s390/pci/pci_dma.c knows nothing about IOMMU domains or shared DMA
> tables; it just implements a set of dma_map_ops.

I think it would make sense to move the DMA table handling code and the
dma_map_ops implementation to the IOMMU driver too. This is also how
some other IOMMU drivers implement it.

The plan is to consolidate the dma_ops implementations someday and have
a common implementation that works with all IOMMU drivers across
architectures. This would benefit s390 as well and obsolete the
driver-specific dma_ops implementation.

> Of course this would also go horribly wrong if a device was already
> in use (via the current dma_map_ops), but I guess using devices through
> the IOMMU_API prevents using them otherwise?

This is taken care of by the device drivers. A driver for a device
either uses the DMA-API or does its own management of DMA mappings
using the IOMMU-API. VFIO is an example of the latter case.

> > I think it is much easier to use the same DMA table for all devices
> > in a domain, if the hardware allows that.
>
> Yes, in this case, having one DMA table per domain and sharing it
> between all devices in that domain sounds like a good idea. However,
> I can't think of any use case for this, and Frank probably had a very
> special use case in mind where this scenario doesn't appear, hence the
> "one device per domain" restriction.

One use case is device access from user space via VFIO. A userspace
process might want to access multiple devices at the same time, and
VFIO would implement this by assigning all of these devices to the same
IOMMU domain.

This requirement also comes from the IOMMU-API itself. The intention of
the API is to make different IOMMUs look the same through the API, and
this is violated when drivers implement a 1-1 domain->device mapping.

> So, if having multiple devices per domain is a must, then we probably
> need a thorough rewrite of the arch/s390/pci/pci_dma.c code.

Yes, this is a requirement for new IOMMU drivers. We already have
drivers implementing the same 1-1 relation and we are about to fix
them. But I don't want to add new drivers doing the same.

	Joerg
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Mon, 27 Oct 2014 17:25:02 +0100 Joerg Roedel wrote:
> On Mon, Oct 27, 2014 at 03:32:01PM +0100, Gerald Schaefer wrote:
> > Not sure if I understood the concept of IOMMU domains right. But if
> > this is about having multiple devices in the same domain, so that
> > iommu_ops->map will establish the _same_ DMA mapping on _all_
> > registered devices, then this should be possible.
>
> Yes, this is what domains are about. A domain describes a set of DMA
> mappings which can be assigned to multiple devices in parallel.
>
> > We cannot have shared DMA tables because each device gets its own
> > DMA table allocated during device initialization.
>
> Is there some hardware reason for this, or is that just an
> implementation detail that can be changed? In other words, does the
> hardware allow using the same DMA table for multiple devices?

Yes, the HW would allow shared DMA tables, but the implementation would
need some non-trivial changes. For example, we have a per-device
spin_lock for DMA table manipulations, and the code in
arch/s390/pci/pci_dma.c knows nothing about IOMMU domains or shared DMA
tables; it just implements a set of dma_map_ops.

Of course this would also go horribly wrong if a device was already in
use (via the current dma_map_ops), but I guess using devices through
the IOMMU_API prevents using them otherwise?

> > But we could just keep all devices from one domain in a list and
> > then call dma_update_trans() for all devices during
> > iommu_ops->map/unmap.
>
> This sounds complicated. Note that a device can be assigned to a
> domain that already has existing mappings. In this case you need to
> make sure that the new device inherits these mappings (and destroy
> all old mappings for the device that possibly exist).
>
> I think it is much easier to use the same DMA table for all devices
> in a domain, if the hardware allows that.

Yes, in this case, having one DMA table per domain and sharing it
between all devices in that domain sounds like a good idea. However, I
can't think of any use case for this, and Frank probably had a very
special use case in mind where this scenario doesn't appear, hence the
"one device per domain" restriction.

So, if having multiple devices per domain is a must, then we probably
need a thorough rewrite of the arch/s390/pci/pci_dma.c code.

Gerald
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Mon, Oct 27, 2014 at 03:32:01PM +0100, Gerald Schaefer wrote:
> Not sure if I understood the concept of IOMMU domains right. But if
> this is about having multiple devices in the same domain, so that
> iommu_ops->map will establish the _same_ DMA mapping on _all_
> registered devices, then this should be possible.

Yes, this is what domains are about. A domain describes a set of DMA
mappings which can be assigned to multiple devices in parallel.

> We cannot have shared DMA tables because each device gets its own DMA
> table allocated during device initialization.

Is there some hardware reason for this, or is that just an
implementation detail that can be changed? In other words, does the
hardware allow using the same DMA table for multiple devices?

> But we could just keep all devices from one domain in a list and then
> call dma_update_trans() for all devices during iommu_ops->map/unmap.

This sounds complicated. Note that a device can be assigned to a domain
that already has existing mappings. In this case you need to make sure
that the new device inherits these mappings (and destroy all old
mappings for the device that possibly exist).

I think it is much easier to use the same DMA table for all devices in
a domain, if the hardware allows that.

	Joerg
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Thu, 23 Oct 2014 16:04:37 +0200 Frank Blaschka wrote:
> On Thu, Oct 23, 2014 at 02:41:15PM +0200, Joerg Roedel wrote:
> > On Wed, Oct 22, 2014 at 05:43:20PM +0200, Frank Blaschka wrote:
> > > Basically there are no limitations. Depending on the s390 machine
> > > generation, a device starts its IOVA at a specific address
> > > (announced by the HW). But as I already told, each device starts
> > > at the same address. I think this prevents having multiple
> > > devices in the same IOMMU domain.
> >
> > Why? Each device has its own IOVA address space, so IOVA A could
> > map to physical address X for one device and to Y for another, no?
> > And if you point multiple devices to the same dma_table, they share
> > the mappings (and thus the address space). Or am I getting
> > something wrong?
> >
> > > yes, you are absolutely right. There is a per-device dma_table.
> > > There is no general IOMMU device, but each pci device has its own
> > > IOMMU translation capability.
> >
> > I see, in this way it is similar to ARM, where there is often also
> > one IOMMU per master device.
> >
> > > Is there a possibility the IOMMU domain can support e.g.
> > > something like
> > >
> > > IOVA 0x1 -> pci device 1
> > > IOVA 0x1 -> pci device 2
> >
> > A domain is basically an abstraction for a DMA page table (or a
> > dma_table, as you call it on s390). So you can easily create
> > similar mappings for more than one device with it.
>
> ok, maybe I was too close to the existing s390 dma implementation or
> simply wrong, maybe Sebastian or Gerald can give more background

Not sure if I understood the concept of IOMMU domains right. But if
this is about having multiple devices in the same domain, so that
iommu_ops->map will establish the _same_ DMA mapping on _all_
registered devices, then this should be possible.

We cannot have shared DMA tables because each device gets its own DMA
table allocated during device initialization. But we could just keep
all devices from one domain in a list and then call dma_update_trans()
for all devices during iommu_ops->map/unmap.

Gerald
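Gerald's alternative, per-device tables kept in sync by iterating a device list, can be modeled as below. The names (`dma_update_trans`, `Domain.attach`) only mirror the discussion; this is an illustrative sketch, not the s390 code. Note the extra work `attach` must do to replay existing mappings into a late-attached device, which is precisely the complication raised elsewhere in the thread:

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.table = {}                 # per-device DMA table, as on s390

def dma_update_trans(dev, iova, phys):
    """Illustrative stand-in for the s390 per-device table update."""
    if phys is None:
        dev.table.pop(iova, None)       # invalidate the entry
    else:
        dev.table[iova] = phys

class Domain:
    def __init__(self):
        self.mappings = {}              # master copy of the domain's mappings
        self.devices = []

    def attach(self, dev):
        dev.table.clear()               # destroy any stale per-device mappings
        for iova, phys in self.mappings.items():
            dma_update_trans(dev, iova, phys)   # replay existing mappings
        self.devices.append(dev)

    def map(self, iova, phys):
        self.mappings[iova] = phys
        for dev in self.devices:        # every table updated separately
            dma_update_trans(dev, iova, phys)

dom = Domain()
d1 = Device("pdev0")
dom.attach(d1)
dom.map(0x10000, 0x2000)
d2 = Device("pdev1")
dom.attach(d2)                          # late attach must inherit 0x10000
assert d2.table[0x10000] == 0x2000
```

Compared to one shared table per domain, every map/unmap here costs a walk over all attached devices, and attach must copy state instead of assigning a pointer.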
Re: [PATCH linux-next] iommu: add iommu for s390 platform
Hi Frank,

On Thu, Oct 23, 2014 at 04:04:37PM +0200, Frank Blaschka wrote:
> > A domain is basically an abstraction for a DMA page table (or a
> > dma_table, as you call it on s390). So you can easily create similar
> > mappings for more than one device with it.
>
> the clp instruction reports a start/end dma address for the pci
> device. on my system all devices report:
>
> sdma = 0x1;
> edma = 0x1ff;

These values need to be reported through the IOMMU-API, so that the
users know which address ranges they can map.

> dma mappings are created for each device separately, starting from 0x1
> and filling the IOVA space for this device (until 0x1ff)
>
> If we would like to have more than one device per domain, I think:
>
> we would have to slice the IOVA address space (0x1 - 0x1ff) of the
> domain and report only a slice to the pci device (clp)
> The iommu code would have to find the device by the dma (IOVA) address
> and then program the entry to the table of the particular device (and
> only this device).

Why do you need to slice an address space when more than one device is
assigned to it? Does that come from the hardware?

Usually it's not problematic when devices share an address space. The
partitioning of that address space between devices is done by an
address allocator which works on small chunks of memory (io-page-size
granularity). But such an address allocator is part of the DMA-API; the
IOMMU-API which you implement here only cares about the mappings
themselves, not about address allocation.

	Joerg
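Joerg's split of responsibilities, aperture reporting via the IOMMU-API but address allocation in the DMA-API, can be illustrated with a toy first-fit IOVA allocator working in IO-page-size chunks over a reported sdma/edma range. The aperture values and all names below are made up for the sketch:

```python
IO_PAGE_SIZE = 4096

class IovaAllocator:
    """Toy first-fit allocator over an aperture reported by an IOMMU driver."""
    def __init__(self, sdma, edma):
        self.sdma, self.edma = sdma, edma
        self.used = set()               # allocated page-aligned IOVAs

    def alloc(self, size):
        npages = -(-size // IO_PAGE_SIZE)            # round up to whole pages
        iova = self.sdma
        while iova + npages * IO_PAGE_SIZE - 1 <= self.edma:
            span = [iova + i * IO_PAGE_SIZE for i in range(npages)]
            if not any(p in self.used for p in span):
                self.used.update(span)               # first fit found
                return iova
            iova += IO_PAGE_SIZE
        raise MemoryError("IOVA aperture exhausted")

    def free(self, iova, size):
        npages = -(-size // IO_PAGE_SIZE)
        for i in range(npages):
            self.used.discard(iova + i * IO_PAGE_SIZE)

# Hypothetical aperture, standing in for the elided clp-reported values.
alloc = IovaAllocator(sdma=0x100000000, edma=0x1FFFFFFFFFF)
a = alloc.alloc(8192)      # two pages
b = alloc.alloc(4096)
assert a == 0x100000000 and b == 0x100002000
```

The point of the model is that this layer sits above `map`/`unmap`: an IOMMU driver only installs translations for IOVAs that something like this allocator has handed out.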
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Thu, Oct 23, 2014 at 02:41:15PM +0200, Joerg Roedel wrote:
> On Wed, Oct 22, 2014 at 05:43:20PM +0200, Frank Blaschka wrote:
> > Basically there are no limitations. Depending on the s390 machine
> > generation, a device starts its IOVA at a specific address (announced
> > by the HW). But as I already told, each device starts at the same
> > address. I think this prevents having multiple devices in the same
> > IOMMU domain.
>
> Why? Each device has its own IOVA address space, so IOVA A could map to
> physical address X for one device and to Y for another, no? And if you
> point multiple devices to the same dma_table, they share the mappings
> (and thus the address space). Or am I getting something wrong?
>
> > yes, you are absolutely right. There is a per-device dma_table.
> > There is no general IOMMU device, but each pci device has its own
> > IOMMU translation capability.
>
> I see, in this way it is similar to ARM, where there is often also one
> IOMMU per master device.
>
> > Is there a possibility the IOMMU domain can support e.g. something
> > like
> >
> > IOVA 0x1 -> pci device 1
> > IOVA 0x1 -> pci device 2
>
> A domain is basically an abstraction for a DMA page table (or a
> dma_table, as you call it on s390). So you can easily create similar
> mappings for more than one device with it.

ok, maybe I was too close to the existing s390 dma implementation or
simply wrong, maybe Sebastian or Gerald can give more background
information. Here is my understanding so far:

the clp instruction reports a start/end dma address for the pci device.
on my system all devices report:

sdma = 0x1;
edma = 0x1ff;

dma mappings are created for each device separately, starting from 0x1
and filling the IOVA space for this device (until 0x1ff).

If we would like to have more than one device per domain, I think:

we would have to slice the IOVA address space (0x1 - 0x1ff) of the
domain and report only a slice to the pci device (clp).
The iommu code would have to find the device by the dma (IOVA) address
and then program the entry to the table of the particular device (and
only this device).

Is this understanding more appropriate?

Thx Frank
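Frank's slicing idea, partitioning the domain's IOVA range and routing each mapping to the device that owns the slice, could be modeled as below. This is entirely hypothetical (the thread ultimately prefers a shared table instead), and the slice size and addresses are invented for the sketch:

```python
class Device:
    def __init__(self, name):
        self.name, self.table = name, {}

class SlicedDomain:
    """Each attached device owns a disjoint slice of the domain's IOVA range."""
    def __init__(self, sdma, slice_size):
        self.sdma, self.slice_size = sdma, slice_size
        self.slices = []                # slice index -> owning device

    def attach(self, dev):
        self.slices.append(dev)         # device takes ownership of next slice

    def owner(self, iova):
        idx = (iova - self.sdma) // self.slice_size
        return self.slices[idx]         # route by IOVA to the owning device

    def map(self, iova, phys):
        # Only the owning device's table is programmed, as Frank describes.
        self.owner(iova).table[iova] = phys

dom = SlicedDomain(sdma=0x10000, slice_size=0x20000)
d1, d2 = Device("pdev0"), Device("pdev1")
dom.attach(d1)
dom.attach(d2)
dom.map(0x10000, 0xAAAA000)     # falls in d1's slice
dom.map(0x30000, 0xBBBB000)     # falls in d2's slice
assert 0x10000 in d1.table and 0x30000 in d2.table
```

The model makes the cost visible: each device sees only a fraction of the aperture, and every map must first resolve which device owns the IOVA, which is the complexity Joerg's reply questions.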
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Wed, Oct 22, 2014 at 05:43:20PM +0200, Frank Blaschka wrote:
> Basically there are no limitations. Depending on the s390 machine
> generation a device starts its IOVA at a specific address (announced by
> the HW). But as I already told each device starts at the same address.
> I think this prevents having multiple devices on the same IOMMU domain.

Why, each device has its own IOVA address space, so IOVA A could map to
physical address X for one device and to Y for another, no? And if you
point multiple devices to the same dma_table they share the mappings
(and thus the address space). Or am I getting something wrong?

> yes, you are absolutely right. There is a per-device dma_table.
> There is no general IOMMU device but each pci device has its own IOMMU
> translation capability.

I see, in this way it is similar to ARM where there is often also one
IOMMU per master device.

> Is there a possibility the IOMMU domain can support e.g. something like
>
> IOVA 0x1 -> pci device 1
> IOVA 0x1 -> pci device 2

A domain is basically an abstraction for a DMA page table (or a
dma_table, as you call it on s390). So you can easily create similar
mappings for more than one device with it.

	Joerg
Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Wed, Oct 22, 2014 at 04:17:29PM +0200, Joerg Roedel wrote:
> Hi Frank,
>
> On Tue, Oct 21, 2014 at 01:57:25PM +0200, Frank Blaschka wrote:
> > Add a basic iommu for the s390 platform. The code is pretty simple
> > since on s390 each PCI device has its own virtual io address space
> > starting at the same vio address.
>
> Are there any limitations on IOVA address space for the devices or can
> really any system physical address be mapped starting from 0 to 2^64?

Hi Joerg,

Basically there are no limitations. Depending on the s390 machine
generation a device starts its IOVA at a specific address (announced by
the HW). But as I already told each device starts at the same address.
I think this prevents having multiple devices on the same IOMMU domain.

> > For this a domain could hold only one pci device.
>
> This bothers me, as it is not compatible with the IOMMU-API. I looked a
> little bit into how the mappings are created, and it seems there is a
> per-device dma_table.

yes, you are absolutely right. There is a per-device dma_table.
There is no general IOMMU device but each pci device has its own IOMMU
translation capability.

> Is there any reason a dma_table can't be per IOMMU domain and assigned
> to multiple devices at the same time?

Is there a possibility the IOMMU domain can support e.g. something like

IOVA 0x1 -> pci device 1
IOVA 0x1 -> pci device 2

> Otherwise the code looks quite simple and straight forward.

Thx for your review and help

Frank

> Joerg
Re: [PATCH linux-next] iommu: add iommu for s390 platform
Hi Frank,

On Tue, Oct 21, 2014 at 01:57:25PM +0200, Frank Blaschka wrote:
> Add a basic iommu for the s390 platform. The code is pretty simple
> since on s390 each PCI device has its own virtual io address space
> starting at the same vio address.

Are there any limitations on IOVA address space for the devices or can
really any system physical address be mapped starting from 0 to 2^64?

> For this a domain could hold only one pci device.

This bothers me, as it is not compatible with the IOMMU-API. I looked a
little bit into how the mappings are created, and it seems there is a
per-device dma_table.

Is there any reason a dma_table can't be per IOMMU domain and assigned
to multiple devices at the same time?

Otherwise the code looks quite simple and straightforward.

	Joerg