[PATCH] kernel/dma/direct: Do not include SME mask in the DMA supported check

2018-12-13 Thread Lendacky, Thomas
The dma_direct_supported() function intends to check the DMA mask against
specific values. However, the phys_to_dma() function includes the SME
encryption mask, which defeats the intended purpose of the check. This
prevents drivers that support less than 48-bit DMA (the SME encryption
mask is bit 47) from setting the DMA mask successfully when SME is
active, causing the driver to fail to initialize.

Change the function used to check the mask from phys_to_dma() to
__phys_to_dma() so that the SME encryption mask is not part of the check.

Fixes: c1d0af1a1d5d ("kernel/dma/direct: take DMA offset into account in dma_direct_supported")
Signed-off-by: Tom Lendacky 
---
 kernel/dma/direct.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 22a12ab..375c77e 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -309,7 +309,12 @@ int dma_direct_supported(struct device *dev, u64 mask)
 
min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT);
 
-   return mask >= phys_to_dma(dev, min_mask);
+   /*
+* This check needs to be against the actual bit mask value, so
+* use __phys_to_dma() here so that the SME encryption mask isn't
+* part of the check.
+*/
+   return mask >= __phys_to_dma(dev, min_mask);
 }
 
 int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr)

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christian Zigotzky

On 13 December 2018 at 6:48PM, Christian Zigotzky wrote:

On 13 December 2018 at 2:34PM, Christian Zigotzky wrote:

On 13 December 2018 at 12:25PM, Christoph Hellwig wrote:

On Thu, Dec 13, 2018 at 12:19:26PM +0100, Christian Zigotzky wrote:

I tried it again but I get the following error message:

MODPOST vmlinux.o
arch/powerpc/kernel/dma-iommu.o: In function `.dma_iommu_get_required_mask':
(.text+0x274): undefined reference to `.dma_direct_get_required_mask'
make: *** [vmlinux] Error 1

Sorry, you need this one liner before all the patches posted last time:

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d8819e3a1eb1..7e78c2798f2f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -154,6 +154,7 @@ config PPC
  select CLONE_BACKWARDS
  select DCACHE_WORD_ACCESS    if PPC64 && CPU_LITTLE_ENDIAN
  select DYNAMIC_FTRACE    if FUNCTION_TRACER
+    select DMA_DIRECT_OPS
  select EDAC_ATOMIC_SCRUB
  select EDAC_SUPPORT
  select GENERIC_ATOMIC64    if PPC32

Thanks. Result: PASEMI onboard ethernet works and the X5000 (P5020 
board) boots with the patch '0001-get_required_mask.patch'.


-- Christian


Next patch: '0002-swiotlb-dma_supported.patch' for the last good 
commit (977706f9755d2d697aa6f45b4f9f0e07516efeda).


The PASEMI onboard ethernet works and the X5000 (P5020 board) boots.

-- Christian



Next patch: '0003-nommu-dma_supported.patch'

No problems with the PASEMI onboard ethernet and the P5020 board boots.

-- Christian


[PATCH] iommu/ipmmu-vmsa: Hook up r8a774c0 DT matching code

2018-12-13 Thread Fabrizio Castro
Support RZ/G2E (a.k.a. R8A774C0) IPMMU.

Signed-off-by: Fabrizio Castro 
---
 drivers/iommu/ipmmu-vmsa.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 32e572b..8074dec 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -757,6 +757,7 @@ static int ipmmu_init_platform_device(struct device *dev,
 
 static const struct soc_device_attribute soc_rcar_gen3[] = {
{ .soc_id = "r8a774a1", },
+   { .soc_id = "r8a774c0", },
{ .soc_id = "r8a7795", },
{ .soc_id = "r8a7796", },
{ .soc_id = "r8a77965", },
@@ -767,6 +768,7 @@ static const struct soc_device_attribute soc_rcar_gen3[] = {
 };
 
 static const struct soc_device_attribute soc_rcar_gen3_whitelist[] = {
+   { .soc_id = "r8a774c0", },
{ .soc_id = "r8a7795", .revision = "ES3.*" },
{ .soc_id = "r8a77965", },
{ .soc_id = "r8a77990", },
@@ -976,6 +978,9 @@ static const struct of_device_id ipmmu_of_ids[] = {
.compatible = "renesas,ipmmu-r8a774a1",
	.data = &ipmmu_features_rcar_gen3,
}, {
+   .compatible = "renesas,ipmmu-r8a774c0",
+	.data = &ipmmu_features_rcar_gen3,
+   }, {
.compatible = "renesas,ipmmu-r8a7795",
	.data = &ipmmu_features_rcar_gen3,
}, {
-- 
2.7.4



[PATCH] dt-bindings: iommu: ipmmu-vmsa: Add r8a774c0 support

2018-12-13 Thread Fabrizio Castro
Document RZ/G2E (R8A774C0) SoC bindings.

Signed-off-by: Fabrizio Castro 
---
 Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt b/Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt
index e285c8a..b6bfbec 100644
--- a/Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt
+++ b/Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt
@@ -15,6 +15,7 @@ Required Properties:
 - "renesas,ipmmu-r8a7744" for the R8A7744 (RZ/G1N) IPMMU.
 - "renesas,ipmmu-r8a7745" for the R8A7745 (RZ/G1E) IPMMU.
 - "renesas,ipmmu-r8a774a1" for the R8A774A1 (RZ/G2M) IPMMU.
+- "renesas,ipmmu-r8a774c0" for the R8A774C0 (RZ/G2E) IPMMU.
 - "renesas,ipmmu-r8a7790" for the R8A7790 (R-Car H2) IPMMU.
 - "renesas,ipmmu-r8a7791" for the R8A7791 (R-Car M2-W) IPMMU.
 - "renesas,ipmmu-r8a7793" for the R8A7793 (R-Car M2-N) IPMMU.
-- 
2.7.4



Re: [RFC] avoid indirect calls for DMA direct mappings v2

2018-12-13 Thread Christoph Hellwig
I've pulled v2 with the ia64 into dma-mapping for-next.  This should
give us a little more than a week in linux-next to sort out any
issues.


Re: [PATCH] dma-direct: Fix return value of dma_direct_supported

2018-12-13 Thread Christoph Hellwig
On Thu, Dec 13, 2018 at 07:45:57PM +, Lendacky, Thomas wrote:
> So I think this needs to be __phys_to_dma() here. I only recently got a
> system that had a device where the driver only supported 32-bit DMA and
> found that when SME is active this returns 0 and causes the driver to fail
> to initialize. This is because the SME encryption bit (bit 47) is part of
> the check when using phys_to_dma(). During actual DMA when SME is active,
> bounce buffers will be used for anything that can't meet the 48-bit
> requirement. But for this test, using __phys_to_dma() should give the
> desired results, right?
> 
> If you agree with this, I'll submit a patch to make the change. I missed
> this in 4.19, so I'll need to submit something to stable, too. The only
> issue there is the 4.20 fix won't apply cleanly to 4.19.

Yes, please send a patch.  Please make sure it includes a code comment
that explains why the __-prefixed version is used.


Re: [PATCH] dma-direct: Fix return value of dma_direct_supported

2018-12-13 Thread Lendacky, Thomas
On 10/04/2018 10:13 AM, Alexander Duyck wrote:
> On Thu, Oct 4, 2018 at 4:25 AM Robin Murphy  wrote:
>>
>> On 04/10/18 00:48, Alexander Duyck wrote:
>>> It appears that in commit 9d7a224b463e ("dma-direct: always allow dma mask
>>> <= physiscal memory size") the logic of the test was changed from a "<" to
>>> a ">=" however I don't see any reason for that change. I am assuming that
>>> there was some additional change planned, specifically I suspect the logic
>>> was intended to be reversed and possibly used for a return. Since that is
>>> the case I have gone ahead and done that.
>>
>> Bah, seems I got hung up on the min_mask code above it and totally
>> overlooked that the condition itself got flipped. It probably also can't
>> help that it's an int return type, but treated as a bool by callers
>> rather than "0 for success" as int tends to imply in isolation.
>>
>> Anyway, paying a bit more attention this time, I think this looks like
>> the right fix - cheers Alex.
>>
>> Robin.
> 
> Thanks for the review.
> 
> - Alex
> 
> P.S. It looks like I forgot to add Christoph to the original mail
> since I had just copied the To and Cc from the original submission, so
> I added him to the Cc for this.
> 
>>> This addresses issues I had on my system that prevented me from booting
>>> with the above mentioned commit applied on an x86_64 system w/ Intel IOMMU.
>>>
>>> Fixes: 9d7a224b463e ("dma-direct: always allow dma mask <= physiscal memory size")
>>> Signed-off-by: Alexander Duyck 
>>> ---
>>>   kernel/dma/direct.c |4 +---
>>>   1 file changed, 1 insertion(+), 3 deletions(-)
>>>
>>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>>> index 5a0806b5351b..65872f6c2e93 100644
>>> --- a/kernel/dma/direct.c
>>> +++ b/kernel/dma/direct.c
>>> @@ -301,9 +301,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
>>>
>>>   min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT);
>>>
>>> - if (mask >= phys_to_dma(dev, min_mask))
>>> - return 0;
>>> - return 1;
>>> + return mask >= phys_to_dma(dev, min_mask);

So I think this needs to be __phys_to_dma() here. I only recently got a
system that had a device where the driver only supported 32-bit DMA and
found that when SME is active this returns 0 and causes the driver to fail
to initialize. This is because the SME encryption bit (bit 47) is part of
the check when using phys_to_dma(). During actual DMA when SME is active,
bounce buffers will be used for anything that can't meet the 48-bit
requirement. But for this test, using __phys_to_dma() should give the
desired results, right?

If you agree with this, I'll submit a patch to make the change. I missed
this in 4.19, so I'll need to submit something to stable, too. The only
issue there is the 4.20 fix won't apply cleanly to 4.19.

Thanks,
Tom

>>>   }
>>>
>>>   int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr)
>>>


Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christian Zigotzky

On 13 December 2018 at 2:34PM, Christian Zigotzky wrote:

On 13 December 2018 at 12:25PM, Christoph Hellwig wrote:

On Thu, Dec 13, 2018 at 12:19:26PM +0100, Christian Zigotzky wrote:

I tried it again but I get the following error message:

MODPOST vmlinux.o
arch/powerpc/kernel/dma-iommu.o: In function `.dma_iommu_get_required_mask':
(.text+0x274): undefined reference to `.dma_direct_get_required_mask'
make: *** [vmlinux] Error 1

Sorry, you need this one liner before all the patches posted last time:

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d8819e3a1eb1..7e78c2798f2f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -154,6 +154,7 @@ config PPC
  select CLONE_BACKWARDS
  select DCACHE_WORD_ACCESS    if PPC64 && CPU_LITTLE_ENDIAN
  select DYNAMIC_FTRACE    if FUNCTION_TRACER
+    select DMA_DIRECT_OPS
  select EDAC_ATOMIC_SCRUB
  select EDAC_SUPPORT
  select GENERIC_ATOMIC64    if PPC32

Thanks. Result: PASEMI onboard ethernet works and the X5000 (P5020 
board) boots with the patch '0001-get_required_mask.patch'.


-- Christian


Next patch: '0002-swiotlb-dma_supported.patch' for the last good commit 
(977706f9755d2d697aa6f45b4f9f0e07516efeda).


The PASEMI onboard ethernet works and the X5000 (P5020 board) boots.

-- Christian


Re: [PATCH v6 0/7] Add virtio-iommu driver

2018-12-13 Thread Christoph Hellwig
On Thu, Dec 13, 2018 at 12:50:29PM +, Jean-Philippe Brucker wrote:
> * DMA ops for x86 (see "HACK" commit). I'd like to use dma-iommu but I'm
> not sure how to implement the glue that sets dma_ops properly.

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-iommu-ops


Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christian Zigotzky

On 13 December 2018 at 12:25PM, Christoph Hellwig wrote:

On Thu, Dec 13, 2018 at 12:19:26PM +0100, Christian Zigotzky wrote:

I tried it again but I get the following error message:

MODPOST vmlinux.o
arch/powerpc/kernel/dma-iommu.o: In function `.dma_iommu_get_required_mask':
(.text+0x274): undefined reference to `.dma_direct_get_required_mask'
make: *** [vmlinux] Error 1

Sorry, you need this one liner before all the patches posted last time:

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d8819e3a1eb1..7e78c2798f2f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -154,6 +154,7 @@ config PPC
select CLONE_BACKWARDS
select DCACHE_WORD_ACCESS   if PPC64 && CPU_LITTLE_ENDIAN
select DYNAMIC_FTRACE   if FUNCTION_TRACER
+   select DMA_DIRECT_OPS
select EDAC_ATOMIC_SCRUB
select EDAC_SUPPORT
select GENERIC_ATOMIC64 if PPC32

Thanks. Result: PASEMI onboard ethernet works and the X5000 (P5020 
board) boots with the patch '0001-get_required_mask.patch'.


-- Christian



Re: [GIT PULL] iommu/arm-smmu: Updates for 4.21

2018-12-13 Thread Will Deacon
On Wed, Dec 12, 2018 at 10:17:02AM +0100, Joerg Roedel wrote:
> On Tue, Dec 11, 2018 at 08:08:48PM +, Will Deacon wrote:
> > The following changes since commit 9ff01193a20d391e8dbce4403dd5ef87c7eaaca6:
> > 
> >   Linux 4.20-rc3 (2018-11-18 13:33:44 -0800)
> > 
> > are available in the git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git 
> > for-joerg/arm-smmu/updates
> 
> Pulled, thanks Will.
> 
> Btw, there was a merge conflict with the patches to unmodularize the
> IOMMU drivers in drivers/iommu/arm-smmu.c. I think I fixed it up, but it
> would be good if you can check it later when I pushed it out.

Looks fine to me, thanks Joerg.

Will


Re: [PATCH v6 0/7] Add virtio-iommu driver

2018-12-13 Thread Jean-Philippe Brucker
Hi Joerg,

On 12/12/2018 10:35, Joerg Roedel wrote:
> Hi,
> 
> to make progress on this, we should first agree on the protocol used
> between guest and host. I have a few points to discuss on the protocol
> first.
> 
> On Tue, Dec 11, 2018 at 06:20:57PM +, Jean-Philippe Brucker wrote:
>> [1] Virtio-iommu specification v0.9, sources and pdf
>> git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.9
>> http://jpbrucker.net/virtio-iommu/spec/v0.9/virtio-iommu-v0.9.pdf
> 
> Looking at this I wonder why it doesn't make the IOTLB visible to the
> guest. The UNMAP requests seem to require that the TLB is already
> flushed to make the unmap visible.
> 
> I think that will cost significant performance for both the vfio and
> dma-iommu use-cases, which both do deferred flushing (vfio at least to
> some degree).

We already do deferred flush: UNMAP requests are added to the queue by
iommu_unmap(), and then flushed out by iotlb_sync(). So we switch to the
host only on iotlb_sync(), or when the request queue is full.

> I also wonder whether the protocol should implement a
> protocol version handshake and iommu-feature set queries.

With the virtio transport there is a handshake when the device (IOMMU)
is initialized, through feature bits and global config fields. Feature
bits are made of both transport-specific features, including the version
number, and device-specific features defined in section 2.3 of the above
document (the transport is described in the virtio 1.0 specification).
The device presents features that it supports in a register, and the
driver masks out the feature bits that it doesn't support. Then the
driver sets the global status to FEATURES_OK and initialization continues.

In addition virtio-iommu has per-endpoint features through the PROBE
request, since the vIOMMU may manage hardware (VFIO) and software
(virtio) endpoints at the same time, which don't have the same DMA
capabilities (different IOVA ranges, page granularity, reserved ranges,
pgtable sharing, etc). At the moment this is a one-way probe, not a
handshake. The device simply fills the properties of each endpoint, but
the driver doesn't have to ack them. Initially there was a way to
negotiate each PROBE property but it was deemed unnecessary during
review. By leaving a few spare bits in the property headers I made sure
it can be added back with a feature bit if we ever need it.

>> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.9.1
>> git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.9
> 
> Unfortunately gitweb seems to be broken on linux-arm.org. What is missing
> in this patch-set to make this work on x86?

You should be able to access it here:
http://www.linux-arm.org/git?p=linux-jpb.git;a=shortlog;h=refs/heads/virtio-iommu/devel

That branch contains missing bits for x86 support:

* ACPI support. We have the code but it's waiting for an IORT spec
update, to reserve the IORT node ID. I expect it to take a while, given
that I'm alone requesting a change for something that's not upstream or
in hardware.

* DMA ops for x86 (see "HACK" commit). I'd like to use dma-iommu but I'm
not sure how to implement the glue that sets dma_ops properly.

Thanks,
Jean


Re: [virtio-dev] Re: [PATCH v5 5/7] iommu: Add virtio-iommu driver

2018-12-13 Thread Robin Murphy

On 2018-12-12 3:27 pm, Auger Eric wrote:

Hi,

On 12/12/18 3:56 PM, Michael S. Tsirkin wrote:

On Fri, Dec 07, 2018 at 06:52:31PM +, Jean-Philippe Brucker wrote:

Sorry for the delay, I wanted to do a little more performance analysis
before continuing.

On 27/11/2018 18:10, Michael S. Tsirkin wrote:

On Tue, Nov 27, 2018 at 05:55:20PM +, Jean-Philippe Brucker wrote:

+   if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1) ||
+   !virtio_has_feature(vdev, VIRTIO_IOMMU_F_MAP_UNMAP))


Why bother with a feature bit for this then btw?


We'll need a new feature bit for sharing page tables with the hardware,
because they require different requests (attach_table/invalidate instead
of map/unmap.) A future device supporting page table sharing won't
necessarily need to support map/unmap.


I don't see virtio iommu being extended to support ARM specific
requests. This just won't scale, too many different
descriptor formats out there.


They aren't really ARM specific requests. The two new requests are
ATTACH_TABLE and INVALIDATE, which would be used by x86 IOMMUs as well.

Sharing CPU address space with the HW IOMMU (SVM) has been in the scope
of virtio-iommu since the first RFC, and I've been working with that
extension in mind since the beginning. As an example you can have a look
at my current draft for this [1], which is inspired from the VFIO work
we've been doing with Intel.

The negotiation phase inevitably requires vendor-specific fields in the
descriptors - host tells which formats are supported, guest chooses a
format and attaches page tables. But invalidation and fault reporting
descriptors are fairly generic.


We need to tread carefully here.  People expect it that if user does
lspci and sees a virtio device then it's reasonably portable.


If you want to go that way down the road, you should avoid
virtio iommu, instead emulate and share code with the ARM SMMU (probably
with a different vendor id so you can implement the
report on map for devices without PRI).


vSMMU has to stay in userspace though. The main reason we're proposing
virtio-iommu is that emulating every possible vIOMMU model in the kernel
would be unmaintainable. With virtio-iommu we can process the fast path
in the host kernel, through vhost-iommu, and do the heavy lifting in
userspace.


Interesting.


As said above, I'm trying to keep the fast path for
virtio-iommu generic.

More notes on what I consider to be the fast path, and comparison with
vSMMU:

(1) The primary use-case we have in mind for vIOMMU is something like
DPDK in the guest, assigning a hardware device to guest userspace. DPDK
maps a large amount of memory statically, to be used by a pass-through
device. For this case I don't think we care about vIOMMU performance.
Setup and teardown need to be reasonably fast, sure, but the MAP/UNMAP
requests don't have to be optimal.


(2) If the assigned device is owned by the guest kernel, then mappings
are dynamic and require dma_map/unmap() to be fast, but there generally
is no need for a vIOMMU, since device and drivers are trusted by the
guest kernel. Even when the user does enable a vIOMMU for this case
(allowing to over-commit guest memory, which needs to be pinned
otherwise),


BTW, that's in theory; in practice it doesn't really work.


we generally play tricks like lazy TLBI (non-strict mode) to
make it faster.


Simple lazy TLB for guest/userspace drivers would be a big no no.
You need something smarter.


Here device and drivers are trusted, therefore the
vulnerability window of lazy mode isn't a concern.

If the reason to enable the vIOMMU is over-comitting guest memory
however, you can't use nested translation because it requires pinning
the second-level tables. For this case performance matters a bit,
because your invalidate-on-map needs to be fast, even if you enable lazy
mode and only receive inval-on-unmap every 10ms. It won't ever be as
fast as nested translation, though. For this case I think vSMMU+Caching
Mode and userspace virtio-iommu with MAP/UNMAP would perform similarly
(given page-sized payloads), because the pagetable walk doesn't add a
lot of overhead compared to the context switch. But given the results
below, vhost-iommu would be faster than vSMMU+CM.


(3) Then there is SVM. For SVM, any destructive change to the process
address space requires a synchronous invalidation command to the
hardware (at least when using PCI ATS). Given that SVM is based on page
faults, fault reporting from host to guest also needs to be fast, as
well as fault response from guest to host.

I think this is where performance matters the most. To get a feel of the
advantage we get with virtio-iommu, I compared the vSMMU page-table
sharing implementation [2] and vhost-iommu + VFIO with page table
sharing (based on Tomasz Nowicki's vhost-iommu prototype). That's on a
ThunderX2 with a 10Gb NIC assigned to the guest kernel, which
corresponds to case (2) above, with nesting page tables and without the
lazy mode. The 

Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christoph Hellwig
On Thu, Dec 13, 2018 at 12:19:26PM +0100, Christian Zigotzky wrote:
> I tried it again but I get the following error message:
>
> MODPOST vmlinux.o
> arch/powerpc/kernel/dma-iommu.o: In function `.dma_iommu_get_required_mask':
> (.text+0x274): undefined reference to `.dma_direct_get_required_mask'
> make: *** [vmlinux] Error 1

Sorry, you need this one liner before all the patches posted last time:

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d8819e3a1eb1..7e78c2798f2f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -154,6 +154,7 @@ config PPC
select CLONE_BACKWARDS
select DCACHE_WORD_ACCESS   if PPC64 && CPU_LITTLE_ENDIAN
select DYNAMIC_FTRACE   if FUNCTION_TRACER
+   select DMA_DIRECT_OPS
select EDAC_ATOMIC_SCRUB
select EDAC_SUPPORT
select GENERIC_ATOMIC64 if PPC32


Re: [PATCH v2 0/3] PCIe Host request to reserve IOVA

2018-12-13 Thread poza

On 2018-12-13 16:02, Srinath Mannam wrote:

A few SoCs have the limitation that their PCIe host can't accept certain
inbound address ranges. The allowed inbound address ranges are listed in
the dma-ranges DT property, and these ranges are required to do the IOVA
mapping; the remaining address ranges have to be reserved in the IOVA
mapping.

The PCIe host driver of those SoCs has to list all address ranges whose
IOVAs must be reserved in the PCIe host bridge resource entry list. The
IOMMU framework will reserve these IOVAs while initializing the IOMMU
domain.


This patch set is based on Linux-4.19-rc1.

Changes from v1:
  - Addressed Oza review comments.

Srinath Mannam (3):
  PCI: Add dma-resv window list
  iommu/dma: IOVA reserve for PCI host reserve address list
  PCI: iproc: Add dma reserve resources to host

 drivers/iommu/dma-iommu.c           |  8 ++++++++
 drivers/pci/controller/pcie-iproc.c | 51 ++++++++++++++++++++++++++++++-
 drivers/pci/probe.c                 |  3 +++
 include/linux/pci.h                 |  1 +
 4 files changed, 62 insertions(+), 1 deletion(-)


Looks good to me.

Reviewed-by: Oza Pawandeep 


Re: [RESEND PATCH v4 1/1] dt-bindings: arm-smmu: Add binding doc for Qcom smmu-500

2018-12-13 Thread Will Deacon
On Thu, Dec 13, 2018 at 02:35:07PM +0530, Vivek Gautam wrote:
> Qcom's implementation of arm,mmu-500 works well with current
> arm-smmu driver implementation. Adding a soc specific compatible
> along with arm,mmu-500 makes the bindings future safe.
> 
> Signed-off-by: Vivek Gautam 
> Reviewed-by: Rob Herring 
> Cc: Will Deacon 
> ---
> 
> Hi Joerg,
> I am picking this out separately from the sdm845 smmu support
> series [1], so that this can go through iommu tree.
> The dt patch from the series [1] can be taken through arm-soc tree.
> 
> Hi Will,
> As asked [2], here's the resend version of dt binding patch for sdm845.
> Kindly ack this so that Joerg can pull this in.

Acked-by: Will Deacon 

Joerg -- please can you take this on top of the pull request I sent already?
Vivek included it as part of a separate series which I thought was going
via arm-soc, but actually it needs to go with the other arm-smmu patches
in order to avoid conflicts.

Cheers,

Will

>  Documentation/devicetree/bindings/iommu/arm,smmu.txt | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/iommu/arm,smmu.txt b/Documentation/devicetree/bindings/iommu/arm,smmu.txt
> index a6504b37cc21..3133f3ba7567 100644
> --- a/Documentation/devicetree/bindings/iommu/arm,smmu.txt
> +++ b/Documentation/devicetree/bindings/iommu/arm,smmu.txt
> @@ -27,6 +27,10 @@ conditions.
>"qcom,msm8996-smmu-v2", "qcom,smmu-v2",
>"qcom,sdm845-smmu-v2", "qcom,smmu-v2".
>  
> +  Qcom SoCs implementing "arm,mmu-500" must also include,
> +  as below, SoC-specific compatibles:
> +  "qcom,sdm845-smmu-500", "arm,mmu-500"
> +
>  - reg   : Base address and size of the SMMU.
>  
>  - #global-interrupts : The number of global interrupts exposed by the
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
> of Code Aurora Forum, hosted by The Linux Foundation
> 


[PATCH v2 3/3] PCI: iproc: Add dma reserve resources to host

2018-12-13 Thread Srinath Mannam via iommu
The IPROC host has the limitation that it can use only the address ranges
given by the dma-ranges property as inbound addresses, so the memory
address holes between those ranges must be reserved and never handed out
as DMA addresses.

Inbound addresses of the host accessed by PCIe devices are not translated
before they reach the IOMMU or, directly, the PE. But accesses to a few
address ranges are ignored by this host, so the IOVA ranges covering those
addresses have to be reserved.

All such address ranges are created as resource entries by parsing the
dma-ranges DT property and added to the dma_resv list of the PCI host
bridge.

Ex:
dma-ranges = < \
  0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
  0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
  0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>

In the above example of dma-ranges, memory addresses from
0x0 - 0x80000000,
0x100000000 - 0x800000000,
0x1000000000 - 0x8000000000 and
0x10000000000 - 0xffffffffffffffff
are not allowed to be used as inbound addresses. These address ranges
therefore need to be added to the dma_resv list to reserve the
corresponding IOVA address ranges.

Signed-off-by: Srinath Mannam 
Based-on-patch-by: Oza Pawandeep 
---
 drivers/pci/controller/pcie-iproc.c | 51 ++++++++++++++++++++++++++++++-
 1 file changed, 50 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
index 3160e93..636e92d 100644
--- a/drivers/pci/controller/pcie-iproc.c
+++ b/drivers/pci/controller/pcie-iproc.c
@@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct iproc_pcie *pcie,
return ret;
 }
 
+static int
+iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head *resources,
+ uint64_t start, uint64_t end)
+{
+   struct resource *res;
+
+   res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
+   if (!res)
+   return -ENOMEM;
+
+   res->start = (resource_size_t)start;
+   res->end = (resource_size_t)end;
+   pci_add_resource_offset(resources, res, 0);
+
+   return 0;
+}
+
 static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
 {
+   struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct of_pci_range range;
struct of_pci_range_parser parser;
int ret;
+   uint64_t start, end;
+   LIST_HEAD(resources);
 
/* Get the dma-ranges from DT */
	ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
if (ret)
return ret;
 
+   start = 0;
	for_each_of_pci_range(&parser, &range) {
+   end = range.pci_addr;
+   /* dma-ranges list expected in sorted order */
+   if (end < start) {
+   ret = -EINVAL;
+   goto out;
+   }
/* Each range entry corresponds to an inbound mapping region */
		ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
if (ret)
-   return ret;
+   goto out;
+
+   if (end - start) {
+   ret = iproc_pcie_add_dma_resv_range(pcie->dev,
+					&resources,
+   start, end);
+   if (ret)
+   goto out;
+   }
+   start = range.pci_addr + range.size;
}
 
+   end = DMA_BIT_MASK(sizeof(dma_addr_t) * BITS_PER_BYTE);
+   if (end - start) {
+		ret = iproc_pcie_add_dma_resv_range(pcie->dev, &resources,
+   start, end);
+   if (ret)
+   goto out;
+   }
+
+	list_splice_init(&resources, &host->dma_resv);
+
return 0;
+out:
+	pci_free_resource_list(&resources);
+   return ret;
 }
 
 static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
-- 
2.7.4



[PATCH v2 1/3] PCI: Add dma-resv window list

2018-12-13 Thread Srinath Mannam via iommu
Add a dma_resv list to the PCI host bridge structure to hold resource
entries for memory regions whose IOVAs have to be reserved.

The PCIe host driver adds resource entries to this list based on its
requirements. Some PCIe hosts cannot accept certain inbound address
ranges, so those ranges are added to this list to keep them out of the
IOMMU mapping.

While initializing the IOMMU domain of the PCI EPs connected to that host
bridge, IOVAs for the address ranges on this list will be reserved.

Signed-off-by: Srinath Mannam 
Based-on-patch-by: Oza Pawandeep 
---
 drivers/pci/probe.c | 3 +++
 include/linux/pci.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index ec78400..bbed0e7 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -544,6 +544,7 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
return NULL;
 
	INIT_LIST_HEAD(&bridge->windows);
+	INIT_LIST_HEAD(&bridge->dma_resv);
bridge->dev.release = pci_release_host_bridge_dev;
 
/*
@@ -572,6 +573,7 @@ struct pci_host_bridge *devm_pci_alloc_host_bridge(struct 
device *dev,
return NULL;
 
INIT_LIST_HEAD(&bridge->windows);
+   INIT_LIST_HEAD(&bridge->dma_resv);
bridge->dev.release = devm_pci_release_host_bridge_dev;
 
return bridge;
@@ -581,6 +583,7 @@ EXPORT_SYMBOL(devm_pci_alloc_host_bridge);
 void pci_free_host_bridge(struct pci_host_bridge *bridge)
 {
pci_free_resource_list(&bridge->windows);
+   pci_free_resource_list(&bridge->dma_resv);
 
kfree(bridge);
 }
diff --git a/include/linux/pci.h b/include/linux/pci.h
index e72ca8d..1f0a32a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -472,6 +472,7 @@ struct pci_host_bridge {
void*sysdata;
int busnr;
struct list_head windows;   /* resource_entry */
+   struct list_head dma_resv;  /* reserv dma ranges */
u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
int (*map_irq)(const struct pci_dev *, u8, u8);
void (*release_fn)(struct pci_host_bridge *);
-- 
2.7.4



[PATCH v2 2/3] iommu/dma: IOVA reserve for PCI host reserve address list

2018-12-13 Thread Srinath Mannam via iommu
The PCI host bridge has a list of resource entries containing address
ranges for which IOVA address mappings have to be reserved.
These address ranges are the address holes in the dma-ranges DT property.

This is similar to reserving the PCI IO resource address ranges in the
IOMMU for each EP connected to the host bridge.

Signed-off-by: Srinath Mannam 
Based-on-patch-by: Oza Pawandeep 
---
 drivers/iommu/dma-iommu.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 511ff9a..346da81 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -220,6 +220,14 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
hi = iova_pfn(iovad, window->res->end - window->offset);
reserve_iova(iovad, lo, hi);
}
+
+   /* Get reserved DMA windows from host bridge */
+   resource_list_for_each_entry(window, &bridge->dma_resv) {
+
+   lo = iova_pfn(iovad, window->res->start - window->offset);
+   hi = iova_pfn(iovad, window->res->end - window->offset);
+   reserve_iova(iovad, lo, hi);
+   }
 }
 
 static int iova_reserve_iommu_regions(struct device *dev,
-- 
2.7.4



[PATCH v2 0/3] PCIe Host request to reserve IOVA

2018-12-13 Thread Srinath Mannam via iommu
A few SoCs have the limitation that their PCIe host cannot allow certain
inbound address ranges.
The allowed inbound address ranges are listed in the dma-ranges DT
property, and only these address ranges may be used for IOVA mapping.
The remaining address ranges have to be reserved in the IOVA allocator.

The PCIe host driver of those SoCs has to add all address ranges whose
IOVAs must be reserved to the PCIe host bridge resource entry list.
The IOMMU framework will then reserve these IOVAs while initializing the
IOMMU domain.

This patch set is based on Linux-4.19-rc1.

Changes from v1:
  - Addressed Oza review comments.

Srinath Mannam (3):
  PCI: Add dma-resv window list
  iommu/dma: IOVA reserve for PCI host reserve address list
  PCI: iproc: Add dma reserve resources to host

 drivers/iommu/dma-iommu.c   |  8 ++
 drivers/pci/controller/pcie-iproc.c | 51 -
 drivers/pci/probe.c |  3 +++
 include/linux/pci.h |  1 +
 4 files changed, 62 insertions(+), 1 deletion(-)

-- 
2.7.4



Re: [PATCH] Revert "iommu/io-pgtable-arm: Check for v7s-incapable systems"

2018-12-13 Thread Robin Murphy

On 2018-12-13 9:19 am, Yong Wu wrote:

This reverts commit 82db33dc5e49fb625262d81125625d07a0d6184e.

After commit 29859aeb8a6e ("iommu/io-pgtable-arm-v7s: Abort allocation
when table address overflows the PTE"), v7s will return failure if the
page table allocation isn't as expected, so this PHYS_OFFSET check is
unnecessary now.

And this check may cause failures. For example, if CONFIG_RANDOMIZE_BASE
is enabled, "memstart_addr" will be updated randomly, and then
PHYS_OFFSET may be random.


Reviewed-by: Robin Murphy 

Joerg, if you have any more fixes to send for 4.19, please consider 
picking this up directly.


Thanks,
Robin.


Reported-by: CK Hu 
Signed-off-by: Yong Wu 
---
  drivers/iommu/io-pgtable-arm-v7s.c | 4 
  1 file changed, 4 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c 
b/drivers/iommu/io-pgtable-arm-v7s.c
index 445c3bd..cec29bf 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -709,10 +709,6 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct 
io_pgtable_cfg *cfg,
  {
struct arm_v7s_io_pgtable *data;
  
-#ifdef PHYS_OFFSET
-   if (upper_32_bits(PHYS_OFFSET))
-   return NULL;
-#endif
if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
return NULL;
  




Re: [RFC PATCH 3/3] PCI: iproc: Add dma reserve resources to host

2018-12-13 Thread poza

On 2018-12-13 14:47, Srinath Mannam wrote:

Hi Oza,

Thank you for the review.
Please find my comments inlined.

On Thu, Dec 13, 2018 at 11:33 AM  wrote:


On 2018-12-12 11:16, Srinath Mannam wrote:
> IPROC host has the limitation that it can use
> only those address ranges given by dma-ranges
> property as inbound address.
> So that the memory address holes in dma-ranges
> should be reserved to allocate as DMA address.
>
> All such reserved addresses are created as resource
> entries and add to dma_resv list of pci host bridge.
>
> These dma reserve resources created by parsing
> dma-ranges parameter.
>
> Ex:
> dma-ranges = < \
>   0x4300 0x00 0x8000 0x00 0x8000 0x00 0x8000 \
>   0x4300 0x08 0x 0x08 0x 0x08 0x \
>   0x4300 0x80 0x 0x80 0x 0x40 0x>
>
> In the above example of dma-ranges, memory address from
> 0x0 - 0x8000,
> 0x1 - 0x8,
> 0x10 - 0x80 and
> 0x100 - 0x.
> are not allowed to use as inbound addresses.
> So that we need to add these address range to dma_resv
> list to reserve their IOVA address ranges.
>
> Signed-off-by: Srinath Mannam 
> ---
>  drivers/pci/controller/pcie-iproc.c | 49
> +
>  1 file changed, 49 insertions(+)
>
> diff --git a/drivers/pci/controller/pcie-iproc.c
> b/drivers/pci/controller/pcie-iproc.c
> index 3160e93..43e465a 100644
> --- a/drivers/pci/controller/pcie-iproc.c
> +++ b/drivers/pci/controller/pcie-iproc.c
> @@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct
> iproc_pcie *pcie,
>   return ret;
>  }
>
> +static int
> +iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head
> *resources,
> +   uint64_t start, uint64_t end)
> +{
> + struct resource *res;
> +
> + res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + res->start = (resource_size_t)start;
> + res->end = (resource_size_t)end;
> + pci_add_resource_offset(resources, res, 0);
> +
> + return 0;
> +}
> +
>  static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
>  {
> + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
>   struct of_pci_range range;
>   struct of_pci_range_parser parser;
>   int ret;
> + uint64_t start, end;
> + LIST_HEAD(resources);
>
>   /* Get the dma-ranges from DT */
>   ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
>   if (ret)
>   return ret;
>
> + start = 0;
>   for_each_of_pci_range(&parser, &range) {
> + end = range.pci_addr;
> + /* dma-ranges list expected in sorted order */
> + if (end < start) {
> + ret = -EINVAL;
> + goto out;
> + }
>   /* Each range entry corresponds to an inbound mapping region */
>   ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
>   if (ret)
>   return ret;
> +
> + if (end - start) {
> + ret = iproc_pcie_add_dma_resv_range(pcie->dev,
> + &resources,
> + start, end);
> + if (ret)
> + goto out;
> + }
> + start = range.pci_addr + range.size;
>   }
>
> + end = ~0;
Hi Srinath,

this series is based on the following patch sets.

https://lkml.org/lkml/2017/5/16/19
https://lkml.org/lkml/2017/5/16/23
https://lkml.org/lkml/2017/5/16/21,


Yes, this patch series is done based on the inputs of the patches you
sent earlier.


some comments to be adapted from the patch-set I did.

end = ~0;
you should consider DMA_MASK, to see whether the iproc controller is in a
32-bit or 64-bit system.
please check the following code snippet.

if (tmp_dma_addr < DMA_BIT_MASK(sizeof(dma_addr_t) * 8)) {
+   lo = iova_pfn(iovad, tmp_dma_addr);
+   hi = iova_pfn(iovad,
+ DMA_BIT_MASK(sizeof(dma_addr_t) * 8) - 1);
+   reserve_iova(iovad, lo, hi);
+   }

Also, if this controller is integrated into a 64-bit platform but decides
to restrict DMA to 32 bits for some reason, the code should address such
scenarios.
So it is always safe to do:

#define BITS_PER_BYTE 8
DMA_BIT_MASK(sizeof(dma_addr_t) * BITS_PER_BYTE)

So please use the kernel macro to find the end of the DMA region.


This change was done with the assumption that end_address is the max bus
address (~0) instead of the PCIe RC DMA mask.
Even if dma-ranges has a 64-bit size, the DMA mask of the PCIe host is
forced to 32-bit:

// in the of_dma_configure function
dev->coherent_dma_mask = DMA_BIT_MASK(32);

And the DMA mask of the endpoints is set to 64-bit in their drivers; also
the SMMU supported DMA mask is 48-bit.
But here requirement is all address ranges except 

Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christian Zigotzky

On 13 December 2018 at 10:10AM, Christoph Hellwig wrote:

On Thu, Dec 13, 2018 at 09:41:50AM +0100, Christian Zigotzky wrote:

Today I tried the first patch (0001-get_required_mask.patch) with the last
good commit (977706f9755d2d697aa6f45b4f9f0e07516efeda). Unfortunately this
patch is already included in the last good commit
(977706f9755d2d697aa6f45b4f9f0e07516efeda). I will try the next patch.

Hmm, I don't think this is the case.  This is my local git log output:

commit 83a4b87de6bc6a75b500c9959de88e2157fbcd7c
Author: Christoph Hellwig 
Date:   Wed Dec 12 15:07:49 2018 +0100

 get_required_mask

commit 977706f9755d2d697aa6f45b4f9f0e07516efeda
Author: Christoph Hellwig 
Date:   Sat Nov 10 22:34:27 2018 +0100

 powerpc/dma: remove dma_nommu_mmap_coherent

I've also pushed a git branch with these out to:

 git://git.infradead.org/users/hch/misc.git powerpc-dma.5-debug

Sorry Christoph, I was wrong. The first patch isn't included in the 
last good commit. I will try it again. I can only test alongside my main 
work, so it takes longer.


-- Christian



[PATCH] Revert "iommu/io-pgtable-arm: Check for v7s-incapable systems"

2018-12-13 Thread Yong Wu
This reverts commit 82db33dc5e49fb625262d81125625d07a0d6184e.

After commit 29859aeb8a6e ("iommu/io-pgtable-arm-v7s: Abort allocation
when table address overflows the PTE"), v7s will return failure if the
page table allocation isn't as expected, so this PHYS_OFFSET check is
unnecessary now.

And this check may cause failures. For example, if CONFIG_RANDOMIZE_BASE
is enabled, "memstart_addr" will be updated randomly, and then
PHYS_OFFSET may be random.

Reported-by: CK Hu 
Signed-off-by: Yong Wu 
---
 drivers/iommu/io-pgtable-arm-v7s.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c 
b/drivers/iommu/io-pgtable-arm-v7s.c
index 445c3bd..cec29bf 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -709,10 +709,6 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct 
io_pgtable_cfg *cfg,
 {
struct arm_v7s_io_pgtable *data;
 
-#ifdef PHYS_OFFSET
-   if (upper_32_bits(PHYS_OFFSET))
-   return NULL;
-#endif
if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
return NULL;
 
-- 
1.9.1



Re: [RFC PATCH 3/3] PCI: iproc: Add dma reserve resources to host

2018-12-13 Thread Srinath Mannam via iommu
Hi Oza,

Thank you for the review.
Please find my comments inlined.

On Thu, Dec 13, 2018 at 11:33 AM  wrote:
>
> On 2018-12-12 11:16, Srinath Mannam wrote:
> > IPROC host has the limitation that it can use
> > only those address ranges given by dma-ranges
> > property as inbound address.
> > So that the memory address holes in dma-ranges
> > should be reserved to allocate as DMA address.
> >
> > All such reserved addresses are created as resource
> > entries and add to dma_resv list of pci host bridge.
> >
> > These dma reserve resources created by parsing
> > dma-ranges parameter.
> >
> > Ex:
> > dma-ranges = < \
> >   0x4300 0x00 0x8000 0x00 0x8000 0x00 0x8000 \
> >   0x4300 0x08 0x 0x08 0x 0x08 0x \
> >   0x4300 0x80 0x 0x80 0x 0x40 0x>
> >
> > In the above example of dma-ranges, memory address from
> > 0x0 - 0x8000,
> > 0x1 - 0x8,
> > 0x10 - 0x80 and
> > 0x100 - 0x.
> > are not allowed to use as inbound addresses.
> > So that we need to add these address range to dma_resv
> > list to reserve their IOVA address ranges.
> >
> > Signed-off-by: Srinath Mannam 
> > ---
> >  drivers/pci/controller/pcie-iproc.c | 49
> > +
> >  1 file changed, 49 insertions(+)
> >
> > diff --git a/drivers/pci/controller/pcie-iproc.c
> > b/drivers/pci/controller/pcie-iproc.c
> > index 3160e93..43e465a 100644
> > --- a/drivers/pci/controller/pcie-iproc.c
> > +++ b/drivers/pci/controller/pcie-iproc.c
> > @@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct
> > iproc_pcie *pcie,
> >   return ret;
> >  }
> >
> > +static int
> > +iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head
> > *resources,
> > +   uint64_t start, uint64_t end)
> > +{
> > + struct resource *res;
> > +
> > + res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
> > + if (!res)
> > + return -ENOMEM;
> > +
> > + res->start = (resource_size_t)start;
> > + res->end = (resource_size_t)end;
> > + pci_add_resource_offset(resources, res, 0);
> > +
> > + return 0;
> > +}
> > +
> >  static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
> >  {
> > + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
> >   struct of_pci_range range;
> >   struct of_pci_range_parser parser;
> >   int ret;
> > + uint64_t start, end;
> > + LIST_HEAD(resources);
> >
> >   /* Get the dma-ranges from DT */
> >   ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
> >   if (ret)
> >   return ret;
> >
> > + start = 0;
> >   for_each_of_pci_range(&parser, &range) {
> > + end = range.pci_addr;
> > + /* dma-ranges list expected in sorted order */
> > + if (end < start) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> >   /* Each range entry corresponds to an inbound mapping region */
> >   ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
> >   if (ret)
> >   return ret;
> > +
> > + if (end - start) {
> > + ret = iproc_pcie_add_dma_resv_range(pcie->dev,
> > + &resources,
> > + start, end);
> > + if (ret)
> > + goto out;
> > + }
> > + start = range.pci_addr + range.size;
> >   }
> >
> > + end = ~0;
> Hi Srinath,
>
> this series is based on following patch sets.
>
> https://lkml.org/lkml/2017/5/16/19
> https://lkml.org/lkml/2017/5/16/23
> https://lkml.org/lkml/2017/5/16/21,
>
Yes, this patch series is done based on the inputs of the patches you
sent earlier.

> some comments to be adapted from the patch-set I did.
>
> end = ~0;
> you should consider DMA_MASK, to see iproc controller is in 32 bit or 64
> bit system.
> please check following code snippet.
>
> if (tmp_dma_addr < DMA_BIT_MASK(sizeof(dma_addr_t) * 8)) {
> +   lo = iova_pfn(iovad, tmp_dma_addr);
> +   hi = iova_pfn(iovad,
> + DMA_BIT_MASK(sizeof(dma_addr_t) * 8) - 
> 1);
> +   reserve_iova(iovad, lo, hi);
> +   }
>
> Also if this controller is integrated to 64bit platform, but decide to
> restrict DMA to 32 bit for some reason, the code should address such
> scenarios.
> so it is always safe to do
>
> #define BITS_PER_BYTE 8
> DMA_BIT_MASK(sizeof(dma_addr_t) * BITS_PER_BYTE)
> so please use the kernel macro to find the end of the DMA region.
>
This change was done with the assumption that end_address is the max bus
address (~0) instead of the PCIe RC DMA mask.
Even if dma-ranges has a 64-bit size, the DMA mask of the PCIe host is
forced to 32-bit.
// in of_dma_configure 

Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christoph Hellwig
On Thu, Dec 13, 2018 at 09:41:50AM +0100, Christian Zigotzky wrote:
> Today I tried the first patch (0001-get_required_mask.patch) with the last 
> good commit (977706f9755d2d697aa6f45b4f9f0e07516efeda). Unfortunately this 
> patch is already included in the last good commit 
> (977706f9755d2d697aa6f45b4f9f0e07516efeda). I will try the next patch.

Hmm, I don't think this is the case.  This is my local git log output:

commit 83a4b87de6bc6a75b500c9959de88e2157fbcd7c
Author: Christoph Hellwig 
Date:   Wed Dec 12 15:07:49 2018 +0100

get_required_mask

commit 977706f9755d2d697aa6f45b4f9f0e07516efeda
Author: Christoph Hellwig 
Date:   Sat Nov 10 22:34:27 2018 +0100

powerpc/dma: remove dma_nommu_mmap_coherent

I've also pushed a git branch with these out to:

git://git.infradead.org/users/hch/misc.git powerpc-dma.5-debug


[RESEND PATCH v4 1/1] dt-bindings: arm-smmu: Add binding doc for Qcom smmu-500

2018-12-13 Thread Vivek Gautam
Qcom's implementation of arm,mmu-500 works well with the current
arm-smmu driver implementation. Adding an SoC-specific compatible
along with arm,mmu-500 makes the bindings future-proof.

Signed-off-by: Vivek Gautam 
Reviewed-by: Rob Herring 
Cc: Will Deacon 
---

Hi Joerg,
I am picking this out separately from the sdm845 smmu support
series [1], so that this can go through iommu tree.
The dt patch from the series [1] can be taken through arm-soc tree.

Hi Will,
As asked [2], here's the resend version of dt binding patch for sdm845.
Kindly ack this so that Joerg can pull this in.

Thanks
Vivek

[1] https://patchwork.kernel.org/cover/10636359/
[2] https://patchwork.kernel.org/patch/10636363/
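For illustration, a device tree node using the new compatible might look like the following; the unit address, reg, interrupt and cell values are placeholders, not copied from the real sdm845 device tree:

```dts
/* Illustrative only: unit address, reg, cells and interrupt values are
 * placeholders, not taken from the actual SDM845 device tree. */
apps_smmu: iommu@15000000 {
	compatible = "qcom,sdm845-smmu-500", "arm,mmu-500";
	reg = <0x15000000 0x80000>;
	#global-interrupts = <1>;
	#iommu-cells = <2>;
	interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
};
```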

 Documentation/devicetree/bindings/iommu/arm,smmu.txt | 4 
 1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/iommu/arm,smmu.txt 
b/Documentation/devicetree/bindings/iommu/arm,smmu.txt
index a6504b37cc21..3133f3ba7567 100644
--- a/Documentation/devicetree/bindings/iommu/arm,smmu.txt
+++ b/Documentation/devicetree/bindings/iommu/arm,smmu.txt
@@ -27,6 +27,10 @@ conditions.
   "qcom,msm8996-smmu-v2", "qcom,smmu-v2",
   "qcom,sdm845-smmu-v2", "qcom,smmu-v2".
 
+  Qcom SoCs implementing "arm,mmu-500" must also include,
+  as below, SoC-specific compatibles:
+  "qcom,sdm845-smmu-500", "arm,mmu-500"
+
 - reg   : Base address and size of the SMMU.
 
 - #global-interrupts : The number of global interrupts exposed by the
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation



Re: use generic DMA mapping code in powerpc V4

2018-12-13 Thread Christian Zigotzky

On 12 December 2018 at 3:39PM, Christian Zigotzky wrote:

Hi Christoph,

Thanks a lot for your reply. I will test your patches tomorrow.

Cheers,
Christian

Sent from my iPhone


On 12. Dec 2018, at 15:15, Christoph Hellwig  wrote:

Thanks for bisecting.  I've spent some time going over the conversion
but can't really pinpoint it.  I have three little patches that switch
parts of the code to the generic version.  This is on top of the
last good commit (977706f9755d2d697aa6f45b4f9f0e07516efeda).

Can you check with which one things stop working?


<0001-get_required_mask.patch>
<0002-swiotlb-dma_supported.patch>
<0003-nommu-dma_supported.patch>
<0004-alloc-free.patch>


Today I tried the first patch (0001-get_required_mask.patch) with the 
last good commit (977706f9755d2d697aa6f45b4f9f0e07516efeda). 
Unfortunately this patch is already included in the last good commit 
(977706f9755d2d697aa6f45b4f9f0e07516efeda). I will try the next patch.


-- Christian

