On 27/04/2019 09:38, Auger Eric wrote:
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -5153,7 +5153,7 @@ static void auxiliary_unlink_device(struct dmar_domain *domain,
 	domain->auxd_refcnt--;
 	if (!domain->auxd_refcnt &&
On 21/04/2019 02:17, Lu Baolu wrote:
This makes it possible for other modules to know the minimal
page size supported by a domain without the knowledge of the
structure details.
Signed-off-by: Lu Baolu
---
include/linux/iommu.h | 13 +
1 file changed, 13 insertions(+)
diff
On Mon, Apr 29, 2019 at 12:06:52PM +0100, Robin Murphy wrote:
>
> From the reply up-thread I guess you're trying to include an optimisation
> to only copy the head and tail of the buffer if it spans multiple pages,
> and directly map the ones in the middle, but AFAICS that's going to tie you
>
On 29/04/2019 06:10, Lu Baolu wrote:
Hi Christoph,
On 4/26/19 11:04 PM, Christoph Hellwig wrote:
On Thu, Apr 25, 2019 at 10:07:19AM +0800, Lu Baolu wrote:
This is not VT-d specific. It's just how generic IOMMU works.
Normally, IOMMU works in paging mode. So if a driver issues DMA with
IOVA
On 22/04/2019 18:59, Christoph Hellwig wrote:
There is nothing really arm64 specific in the iommu_dma_ops
implementation, so move it to dma-iommu.c and keep a lot of symbols
self-contained. Note the implementation does depend on the
DMA_DIRECT_REMAP infrastructure for now, so we'll have to make
On 22/04/2019 18:59, Christoph Hellwig wrote:
Move the call to dma_common_pages_remap into __iommu_dma_alloc and
rename it to iommu_dma_alloc_remap. This creates a self-contained
helper for remapped pages allocation and mapping.
Reviewed-by: Robin Murphy
Signed-off-by: Christoph Hellwig
On 22/04/2019 18:59, Christoph Hellwig wrote:
From: Robin Murphy
The freeing logic was made particularly horrible by part of it being
opaque to the arch wrapper, which led to a lot of convoluted repetition
to ensure each path did everything in the right order. Now that it's
all private, we can
On 22/04/2019 18:59, Christoph Hellwig wrote:
Inline __iommu_dma_get_sgtable_page into the main function, and use the
fact that __iommu_dma_get_pages returns NULL for remapped contiguous
allocations to simplify the code flow a bit.
Yeah, even I was a bit dubious about the readability of "if
On Fri, 26 Apr 2019 18:15:27 +0200
Auger Eric wrote:
> Hi Jacob,
>
> On 4/24/19 1:31 AM, Jacob Pan wrote:
> > When supporting guest SVA with emulated IOMMU, the guest PASID
> > table is shadowed in VMM. Updates to guest vIOMMU PASID table
> > will result in PASID cache flush which will be
On 22/04/2019 18:59, Christoph Hellwig wrote:
With most of the previous functionality now elsewhere a lot of the
headers included in this file are not needed.
Signed-off-by: Christoph Hellwig
---
arch/arm64/mm/dma-mapping.c | 11 ---
1 file changed, 11 deletions(-)
diff --git
On 29/04/2019 15:44, Julien Grall wrote:
A recent change split iommu_dma_map_msi_msg() into two new functions. The
old function was still implemented to avoid modifying all the callers at
once.
Now that all the callers have been reworked, iommu_dma_map_msi_msg() can
be removed.
Yay! The end of my
On 29/04/2019 15:44, Julien Grall wrote:
When an MSI doorbell is located downstream of an IOMMU, it is required
to swizzle the physical address with an appropriately-mapped IOVA for any
device attached to one of our DMA ops domains.
At the moment, the allocation of the mapping may be done when
On Mon, Apr 29, 2019 at 12:59 PM Christoph Hellwig wrote:
>
> On Sat, Apr 27, 2019 at 03:20:35PM +0100, Tom Murphy wrote:
> > I am working on another patch to improve the intel iotlb flushing in
> > the iommu ops patch which should cover this too.
>
> So are you looking into converting the
On 22/04/2019 18:59, Christoph Hellwig wrote:
The nr_pages checks should be done for all mmap requests, not just those
using remap_pfn_range.
I think it probably makes sense now to just squash this with #22 one way
or the other, but if you really really still want to keep it as a
separate
On 22/04/2019 18:59, Christoph Hellwig wrote:
From: Robin Murphy
Since we duplicate the find_vm_area() logic a few times in places where
we only care about the pages, factor out a helper to abstract it.
Signed-off-by: Robin Murphy
[hch: don't warn when not finding a region, as we'll rely on
On 22/04/2019 18:59, Christoph Hellwig wrote:
From: Robin Murphy
Most importantly clear up the size / iosize confusion. Also rename addr
to cpu_addr to match the surrounding code and make the intention a little
more clear.
Signed-off-by: Robin Murphy
[hch: split from a larger patch]
I
On Tue, Apr 23, 2019 at 11:01:44AM +0100, Robin Murphy wrote:
> Wouldn't this suffice? Since we also use alloc_pages() in the coherent
> atomic case, the free path should already be able to deal with it.
>
> Let me take a proper look at v3 and see how it all looks in context.
Any comments on v3?
On 29/04/2019 12:49, Christoph Hellwig wrote:
On Tue, Apr 23, 2019 at 11:01:44AM +0100, Robin Murphy wrote:
Wouldn't this suffice? Since we also use alloc_pages() in the coherent
atomic case, the free path should already be able to deal with it.
Let me take a proper look at v3 and see how it
On 22/04/2019 18:59, Christoph Hellwig wrote:
We only have a single caller of this function left, so open code it there.
Heh, I even caught myself out for a moment thinking this looked
redundant with #18 now, but no :)
Reviewed-by: Robin Murphy
Signed-off-by: Christoph Hellwig
---
On 22/04/2019 18:59, Christoph Hellwig wrote:
Inline __iommu_dma_mmap_pfn into the main function, and use the
fact that __iommu_dma_get_pages returns NULL for remapped contiguous
allocations to simplify the code flow a bit.
...and later we can squash __iommu_dma_mmap() once the dust settles on
Hi all,
On RT, the function iommu_dma_map_msi_msg expects to be called from preemptible
context. However, this is not always the case, resulting in a splat with
!CONFIG_DEBUG_ATOMIC_SLEEP:
[ 48.875777] BUG: sleeping function called from invalid context at
kernel/locking/rtmutex.c:974
[
The functions mbi_compose_m{b, s}i_msg may be called from non-preemptible
context. However, on RT, iommu_dma_map_msi_msg() must be called from a
preemptible context.
A recent patch split iommu_dma_map_msi_msg() into two new functions:
one that should be called in preemptible context, the other
On RT, iommu_dma_map_msi_msg() may be called from non-preemptible
context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP as
the function uses spin_locks (which can sleep on RT).
iommu_dma_map_msi_msg() is used to map the MSI page in the IOMMU PT
and update the MSI message with the
its_irq_compose_msi_msg() may be called from non-preemptible context.
However, on RT, iommu_dma_map_msi_msg() must be called from a
preemptible context.
A recent change split iommu_dma_map_msi_msg() into two new functions:
one that should be called in preemptible context, the other does
not
When an MSI doorbell is located downstream of an IOMMU, it is required
to swizzle the physical address with an appropriately-mapped IOVA for any
device attached to one of our DMA ops domains.
At the moment, the allocation of the mapping may be done when composing
the message. However, the
gicv2m_compose_msi_msg() may be called from non-preemptible context.
However, on RT, iommu_dma_map_msi_msg() must be called from a
preemptible context.
A recent change split iommu_dma_map_msi_msg() into two new functions:
one that should be called in preemptible context, the other does
not
On 29/04/2019 15:44, Julien Grall wrote:
On RT, iommu_dma_map_msi_msg() may be called from non-preemptible
context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP as
the function uses spin_locks (which can sleep on RT).
iommu_dma_map_msi_msg() is used to map the MSI page in the
On 29/04/2019 15:44, Julien Grall wrote:
> On RT, iommu_dma_map_msi_msg() may be called from non-preemptible
> context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP as
> the function uses spin_locks (which can sleep on RT).
>
> iommu_dma_map_msi_msg() is used to map the MSI page in
On Sat, Apr 27, 2019 at 03:20:35PM +0100, Tom Murphy wrote:
> I am working on another patch to improve the intel iotlb flushing in
> the iommu ops patch which should cover this too.
So are you looking into converting the intel-iommu driver to use
dma-iommu as well? That would be great!
Hi Marc,
On 23/04/2019 11:54, Marc Zyngier wrote:
On 18/04/2019 18:26, Julien Grall wrote:
On RT, the function iommu_dma_map_msi_msg may be called from
non-preemptible context. This will lead to a splat with
CONFIG_DEBUG_ATOMIC_SLEEP as the function uses spin_locks
(which can sleep on RT).
ls_scfg_msi_compose_msg() may be called from non-preemptible context.
However, on RT, iommu_dma_map_msi_msg() must be called from a
preemptible context.
A recent patch split iommu_dma_map_msi_msg() into two new functions:
one that should be called in preemptible context, the other does
not
On 22/04/2019 18:59, Christoph Hellwig wrote:
For entirely dma coherent architectures there is no requirement to ever
remap dma coherent allocation. Move all the remap and pool code under
IS_ENABLED() checks and drop the Kconfig dependency.
Reviewed-by: Robin Murphy
Signed-off-by:
A recent change split iommu_dma_map_msi_msg() into two new functions. The
old function was still implemented to avoid modifying all the callers at
once.
Now that all the callers have been reworked, iommu_dma_map_msi_msg() can
be removed.
Signed-off-by: Julien Grall
---
Changes in v2:
-
On 22/04/2019 18:59, Christoph Hellwig wrote:
From: Robin Murphy
Honestly, I don't think anything is left of my patch here...
Apart from the iommu_dma_alloc_remap() case which remains sufficiently
different that it's better off being self-contained, the rest of the
logic can now be consolidated
On 22/04/2019 18:59, Christoph Hellwig wrote:
Hi Robin,
please take a look at this series, which implements a completely generic
set of dma_map_ops for IOMMU drivers. This is done by taking the
existing arm64 code, moving it to drivers/iommu and then massaging it
so that it can also work for
On Mon, Apr 29, 2019 at 01:17:44PM +0100, Tom Murphy wrote:
> Yes. My patches depend on the "iommu/vt-d: Delegate DMA domain to
> generic iommu" patch which is currently being reviewed.
Nice!
___
iommu mailing list
iommu@lists.linux-foundation.org
On Fri, Apr 26, 2019 at 11:55:12AM -0400, Qian Cai wrote:
> https://git.sr.ht/~cai/linux-debug/blob/master/dmesg
Thanks, I can't see any definitions for unity ranges or exclusion ranges
in the IVRS table dump, which makes it even more weird.
Can you please send me the output of
for f in
On Mon, Apr 29, 2019 at 02:59:43PM +0100, Robin Murphy wrote:
> Hmm, I do still prefer my original flow with the dma_common_free_remap()
> call right out of the way at the end rather than being a special case in
> the middle of all the page-freeing (which is the kind of existing
> complexity I
On Mon, Apr 29, 2019 at 01:35:46PM +0100, Robin Murphy wrote:
> On 22/04/2019 18:59, Christoph Hellwig wrote:
>> The nr_pages checks should be done for all mmap requests, not just those
>> using remap_pfn_range.
>
> I think it probably makes sense now to just squash this with #22 one way or
> the
On Mon, Apr 29, 2019 at 09:03:48PM +0200, Christoph Hellwig wrote:
> On Mon, Apr 29, 2019 at 02:59:43PM +0100, Robin Murphy wrote:
> > Hmm, I do still prefer my original flow with the dma_common_free_remap()
> > call right out of the way at the end rather than being a special case in
> > the
On 12/04/2019 04:13, Srinath Mannam wrote:
The dma_ranges field of the PCI host bridge structure has resource entries,
in sorted order, for the address ranges given through the dma-ranges DT
property. This list is the accessible DMA address range, so this resource
list will be processed to reserve IOVA
Ok,
I attach the log from today's test on Ubuntu 19.04 with the 5.0.9-050009-generic kernel.
Full kern.log: https://paste.ee/p/yF3Qi#section0
Dmesg log: https://paste.ee/p/yF3Qi#section1
Summary:
- motherboard with AMD A320M chipset, CPU AMD Athlon 200GE
- Ubuntu 19.04 default installation, kernel
On Fri, Apr 26, 2019 at 03:47:15PM +0200, starost...@gmail.com wrote:
> Hello all,
> we are a development and manufacturing company that has used your FT232R
> serial converter for a couple of years with our software. We consume about
> a hundred pcs of FT232R per year. We use FT232R as a USB serial
On Mon, Apr 29, 2019 at 12:51:20PM +0200, starost...@gmail.com wrote:
> Hello,
> sorry for more questions, but I am new to this list:
> Is Ubuntu server 19.04 with "kernel 5.0.9-050009-generic" good for this
> test?
Yes, that might do depending on what else debian put in that kernel.
> Can I
Hello,
sorry for more questions, but I am new to this list:
Is Ubuntu server 19.04 with "kernel 5.0.9-050009-generic" good for this
test?
Can I add attachments to this list?
And who are the xhci and iommu maintainers? Are they CC'd on this mail?
starosta
On 29.4.2019 at 11:48, Johan Hovold
On Mon, Apr 29, 2019 at 02:05:46PM +0100, Robin Murphy wrote:
> On 22/04/2019 18:59, Christoph Hellwig wrote:
>> From: Robin Murphy
>>
>> Since we duplicate the find_vm_area() logic a few times in places where
>> we only care about the pages, factor out a helper to abstract it.
>>
>>
On Fri, 26 Apr 2019 18:22:46 +0200
Auger Eric wrote:
> Hi Jacob,
>
> On 4/24/19 1:31 AM, Jacob Pan wrote:
> > To convert to/from cache types and granularities between generic and
> > VT-d specific counterparts, a 2D array is used. Introduce the limits
> > array to help define the
Hi Julien,
On 29/04/2019 15:44, Julien Grall wrote:
> Hi all,
>
> On RT, the function iommu_dma_map_msi_msg expects to be called from
> preemptible
> context. However, this is not always the case, resulting in a splat with
> !CONFIG_DEBUG_ATOMIC_SLEEP:
>
> [ 48.875777] BUG: sleeping function
On Sat, 27 Apr 2019 11:04:04 +0200
Auger Eric wrote:
> Hi Jacob,
>
> On 4/24/19 1:31 AM, Jacob Pan wrote:
> > When Shared Virtual Memory is exposed to a guest via vIOMMU,
> > extended IOTLB invalidation may be passed down from outside IOMMU
> > subsystems. This patch adds invalidation functions
On Fri, 26 Apr 2019 19:23:03 +0200
Auger Eric wrote:
> Hi Jacob,
> On 4/24/19 1:31 AM, Jacob Pan wrote:
> > When Shared Virtual Address (SVA) is enabled for a guest OS via
> > vIOMMU, we need to provide invalidation support at IOMMU API and
> > driver level. This patch adds Intel VT-d specific
On Wed, Apr 24, 2019 at 09:10:21PM +0200, Heiner Kallweit wrote:
> In several places in the kernel we find PCI_DEVID used like this:
> PCI_DEVID(dev->bus->number, dev->devfn) Therefore create a helper
> for it.
>
> v2:
> - apply the change to all affected places in the kernel
>
> Heiner Kallweit
On Mon, Apr 29, 2019 at 04:57:20PM +0100, Marc Zyngier wrote:
> Thanks for having reworked this. I'm quite happy with the way this looks
> now (modulo the couple of nits Robin and I mentioned, which I'm to
> address myself).
>
> Joerg: are you OK with this going via the irq tree?
As-is this has a
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Signed-off-by: Tom Murphy
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 680 --
2 files changed, 70
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.
The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread:
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api.
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
v2:
-Rebase on top of this series:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-iommu-ops.3
-Add a gfp_t parameter to the
Handle devices which defer their attach to the iommu in the dma-iommu api
Signed-off-by: Tom Murphy
---
drivers/iommu/dma-iommu.c | 30 ++
1 file changed, 30 insertions(+)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index
Hi Jacob,
On 4/29/19 11:29 PM, Jacob Pan wrote:
> On Sat, 27 Apr 2019 11:04:04 +0200
> Auger Eric wrote:
>
>> Hi Jacob,
>>
>> On 4/24/19 1:31 AM, Jacob Pan wrote:
>>> When Shared Virtual Memory is exposed to a guest via vIOMMU,
>>> extended IOTLB invalidation may be passed down from outside
Hi Robin,
On 4/29/19 7:06 PM, Robin Murphy wrote:
On 29/04/2019 06:10, Lu Baolu wrote:
Hi Christoph,
On 4/26/19 11:04 PM, Christoph Hellwig wrote:
On Thu, Apr 25, 2019 at 10:07:19AM +0800, Lu Baolu wrote:
This is not VT-d specific. It's just how generic IOMMU works.
Normally, IOMMU works
Hi Jacob,
On 4/29/19 6:17 PM, Jacob Pan wrote:
> On Fri, 26 Apr 2019 18:22:46 +0200
> Auger Eric wrote:
>
>> Hi Jacob,
>>
>> On 4/24/19 1:31 AM, Jacob Pan wrote:
>>> To convert to/from cache types and granularities between generic and
>>> VT-d specific counterparts, a 2D array is used.
This series of patches tries to optimize dma_*_from_contiguous calls:
PATCH-1 does some abstraction and cleanup.
PATCH-2 saves single pages and reduces fragmentation of the CMA area.
Both patches may impact the source of pages (CMA or normal)
depending on the use case, so they are being tagged with
Hi Robin,
On 4/29/19 6:55 PM, Robin Murphy wrote:
On 21/04/2019 02:17, Lu Baolu wrote:
This makes it possible for other modules to know the minimal
page size supported by a domain without the knowledge of the
structure details.
Signed-off-by: Lu Baolu
---
include/linux/iommu.h | 13
Hi Christoph,
On 4/30/19 4:03 AM, Christoph Hellwig wrote:
@@ -3631,35 +3607,30 @@ static int iommu_no_mapping(struct device *dev)
 	if (iommu_dummy(dev))
 		return 1;
-	if (!iommu_identity_mapping)
-		return 0;
-
FYI, iommu_no_mapping has been refactored