[PATCH 1/2] iommu/vt-d: Check capability before disabling protected memory

2019-03-19 Thread Lu Baolu
The spec states in 10.4.16 that the Protected Memory Enable
Register should be treated as read-only for implementations
not supporting protected memory regions (PLMR and PHMR fields
reported as Clear in the Capability register).

Cc: Jacob Pan 
Cc: mark gross 
Suggested-by: Ashok Raj 
Fixes: f8bab73515ca5 ("intel-iommu: PMEN support")
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel-iommu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 87274b54febd..f002d47d2f27 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1538,6 +1538,9 @@ static void iommu_disable_protect_mem_regions(struct intel_iommu *iommu)
 	u32 pmen;
 	unsigned long flags;
 
+	if (!cap_plmr(iommu->cap) && !cap_phmr(iommu->cap))
+		return;
+
 	raw_spin_lock_irqsave(&iommu->register_lock, flags);
 	pmen = readl(iommu->reg + DMAR_PMEN_REG);
 	pmen &= ~DMA_PMEN_EPM;
-- 
2.17.1
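
For reference, a hedged userspace sketch of the check the patch adds. The bit
positions mirror the kernel's cap_plmr()/cap_phmr() helpers (PLMR = bit 5,
PHMR = bit 6 of the Capability register); the sample register value below is
invented:

#include <stdint.h>
#include <stdio.h>

#define CAP_PLMR(c) (((c) >> 5) & 1ULL)	/* protected low-memory region */
#define CAP_PHMR(c) (((c) >> 6) & 1ULL)	/* protected high-memory region */

int main(void)
{
	uint64_t cap = 0x00c0000c40660462ULL;	/* made-up sample value */

	if (!CAP_PLMR(cap) && !CAP_PHMR(cap))
		printf("PMEN is read-only; skip disabling protected memory\n");
	else
		printf("protected memory regions supported; safe to clear EPM\n");
	return 0;
}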



[PATCH 2/2] iommu/vt-d: Save the right domain ID used by hardware

2019-03-19 Thread Lu Baolu
The driver sets a default domain id (FLPT_DEFAULT_DID) in the
first-level-only PASID entry, but saves a different domain id
in @sdev->did. The value saved in @sdev->did is later used to
invalidate the translation caches, so the driver may end up
invalidating the caches with the wrong domain id.

Cc: Ashok Raj 
Cc: Jacob Pan 
Fixes: 1c4f88b7f1f92 ("iommu/vt-d: Shared virtual address in scalable mode")
Signed-off-by: Liu Yi L 
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index f002d47d2f27..28cb713d728c 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -5335,7 +5335,7 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct intel_svm_dev *sdev)
 
 	ctx_lo = context[0].lo;
 
-	sdev->did = domain->iommu_did[iommu->seq_id];
+	sdev->did = FLPT_DEFAULT_DID;
 	sdev->sid = PCI_DEVID(info->bus, info->devfn);
 
 	if (!(ctx_lo & CONTEXT_PASIDE)) {
-- 
2.17.1
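
A hedged model (plain C, not driver code) of why the mismatch matters: if
hardware tags its cached entries with the DID written into the PASID entry,
a flush keyed on any other DID leaves those entries behind. The DID values
here are invented:

#include <stdio.h>

#define FLPT_DEFAULT_DID 1	/* name from the patch; value assumed */

struct cache_entry { int did; int valid; };

/* Hardware-style flush: only entries tagged with 'did' are dropped. */
static void flush_by_did(struct cache_entry *e, int did)
{
	if (e->did == did)
		e->valid = 0;
}

int main(void)
{
	struct cache_entry e = { .did = FLPT_DEFAULT_DID, .valid = 1 };

	flush_by_did(&e, 42);			/* stale DID from sdev->did */
	printf("after wrong-DID flush: valid=%d (stale)\n", e.valid);

	flush_by_did(&e, FLPT_DEFAULT_DID);	/* the fix: use the DID hw saw */
	printf("after correct flush:  valid=%d\n", e.valid);
	return 0;
}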



[PATCH 1/1] iommu: Remove iommu_callback_data

2019-03-19 Thread Lu Baolu
The iommu_callback_data structure is not used anywhere; remove
it to make the code more concise.

Signed-off-by: Lu Baolu 
---
 drivers/iommu/iommu.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 33a982e33716..1164b9926a2b 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -45,10 +45,6 @@ static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;
 #endif
 static bool iommu_dma_strict __read_mostly = true;
 
-struct iommu_callback_data {
-	const struct iommu_ops *ops;
-};
-
 struct iommu_group {
 	struct kobject kobj;
 	struct kobject *devices_kobj;
@@ -1215,9 +1211,6 @@ static int iommu_bus_init(struct bus_type *bus, const struct iommu_ops *ops)
 {
 	int err;
 	struct notifier_block *nb;
-	struct iommu_callback_data cb = {
-		.ops = ops,
-	};
 
 	nb = kzalloc(sizeof(struct notifier_block), GFP_KERNEL);
 	if (!nb)
@@ -1229,7 +1222,7 @@ static int iommu_bus_init(struct bus_type *bus, const struct iommu_ops *ops)
 	if (err)
 		goto out_free;
 
-	err = bus_for_each_dev(bus, NULL, &cb, add_iommu_group);
+	err = bus_for_each_dev(bus, NULL, NULL, add_iommu_group);
 	if (err)
 		goto out_err;
 
@@ -1238,7 +1231,7 @@ static int iommu_bus_init(struct bus_type *bus, const struct iommu_ops *ops)
 
 out_err:
 	/* Clean up */
-	bus_for_each_dev(bus, NULL, &cb, remove_iommu_group);
+	bus_for_each_dev(bus, NULL, NULL, remove_iommu_group);
 	bus_unregister_notifier(bus, nb);
 
 out_free:
-- 
2.17.1
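
Passing NULL is safe because the callbacks never dereference the opaque data
argument; roughly (a simplified sketch, not the exact code at this commit):

/* bus_for_each_dev() forwards 'data' untouched to the callback. */
static int add_iommu_group(struct device *dev, void *data)
{
	int ret = iommu_probe_device(dev);

	/* -ENODEV only means the device is not translated by an IOMMU */
	return ret == -ENODEV ? 0 : ret;
}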



Re: [PATCH v2 0/7] iommu/vt-d: Fix-up device-domain relationship by refactoring to use iommu group default domain.

2019-03-19 Thread Lu Baolu

Hi James,

On 3/19/19 9:35 PM, James Sewart wrote:
> Hey Lu,
>
>> On 15 Mar 2019, at 03:13, Lu Baolu  wrote:
>>
>> Hi James,
>>
>> On 3/14/19 7:56 PM, James Sewart wrote:
>>> Patches 1 and 2 are the same as v1.
>>> v1-v2:
>>>    Refactored ISA direct mappings to be returned by iommu_get_resv_regions.
>>>    Integrated patch by Lu to defer turning on DMAR until iommu.c has mapped
>>> reserved regions.
>>>    Integrated patches by Lu to remove more unused code in cleanup.
>>> Lu: I didn't integrate your patch to set the default domain type as it
>>> isn't directly related to the aim of this patchset. Instead patch 4
>>
>> Without those patches, the user experience will be affected and some
>> devices will no longer work on Intel platforms.
>>
>> For a long time, the Intel IOMMU driver has had its own logic to determine
>> whether a device requires an identity domain. For example, when the user
>> specifies "iommu=pt" on the kernel command line, all devices will be
>> attached to the identity domain. Furthermore, some quirky devices require
>> an identity domain to be used before enabling DMA remapping, otherwise
>> they will not work. This was done by adding quirk bits in the Intel IOMMU
>> driver.
>>
>> So from my point of view, one way is porting all those quirks and kernel
>> parameters into the IOMMU generic layer, or opening a door for the vendor
>> IOMMU driver to determine the default domain type on its own. I prefer the
>> latter option since it will not impact any behaviors on other
>> architectures.
>
> I see your point. I’m not confident that using the proposed door to set a
> group’s default domain has the desired behaviour. As discussed before, the
> default domain type will be set based on the desired type for only the
> first device attached to a group. I think to change the default domain
> type you would need a slightly different door that wasn’t conditioned on
> the device.

I think this is another problem. Just a summary for the ease of
discussion. We saw two problems:

1. When allocating a new group for a device, how should we determine the
type of the default domain? This is what my proposed patches try to
address.

2. If we need to put a device into an existing group which uses a
different type of domain from what the device desires to use, we might
break the functionality of the device. For this problem I'd second your
proposal below if I get your point correctly.

> For situations where individual devices require an identity domain because
> of quirks then maybe calling is_identity_map per device in
> iommu_group_get_for_dev is a better solution than the one I proposed.

Do you mean that if we see a quirky device which requires a domain type
other than the default domain type of the group, we will assign a new
group to it? That looks good to me as far as I can see. I suppose this
should be done in vt-d's ops callback.
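
A rough sketch of the kind of per-device hook I mean, assuming a
hypothetical op and quirk helper (neither exists in iommu_ops today):

static int intel_iommu_def_domain_type(struct device *dev)
{
	/* hypothetical helper wrapping the driver's identity-map quirks */
	if (device_requires_identity_quirk(dev))
		return IOMMU_DOMAIN_IDENTITY;

	return IOMMU_DOMAIN_DMA;
}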

Best regards,
Lu Baolu

Re: [PATCH v6 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables

2019-03-19 Thread Nicolas Boichat
On Wed, Mar 20, 2019 at 1:56 AM Andrew Morton  wrote:
>
> On Tue, 19 Mar 2019 15:41:43 +0800 Nicolas Boichat  
> wrote:
>
> > On Mon, Feb 25, 2019 at 8:23 AM Nicolas Boichat  
> > wrote:
> > >
> > > On Thu, Feb 14, 2019 at 1:12 AM Vlastimil Babka  wrote:
> > > >
> > > > On 1/22/19 11:51 PM, Nicolas Boichat wrote:
> > > > > Hi Andrew,
> > > > >
> > > > > On Fri, Jan 11, 2019 at 6:21 PM Joerg Roedel  wrote:
> > > > >>
> > > > >> On Wed, Jan 02, 2019 at 01:51:45PM +0800, Nicolas Boichat wrote:
> > > > >> > Does anyone have any further comment on this series? If not, which
> > > > >> > maintainer is going to pick this up? I assume Andrew Morton?
> > > > >>
> > > > >> Probably, yes. I don't like to carry the mm-changes in iommu-tree, so
> > > > >> this should go through mm.
> > > > >
> > > > > Gentle ping on this series, it seems like it's better if it goes
> > > > > through your tree.
> > > > >
> > > > > Series still applies cleanly on linux-next, but I'm happy to resend if
> > > > > that helps.
> > > >
> > > > Ping, Andrew?
> > >
> > > Another gentle ping, I still don't see these patches in mmot[ms]. Thanks.
> >
> > Andrew: AFAICT this still applies cleanly on linux-next/master, so I
> > don't plan to resend... are there any other issues with this series?
> >
> > This is a regression, so it'd be nice to have it fixed in mainline, 
> > eventually.
>
> Sorry, seeing "iommu" and "arm" made these escape my gimlet eye.

Thanks for picking them up!

> I'm only seeing acks on [1/3].  What's the review status of [2/3] and [3/3]?

Replied on the notification: [2/3] had an Ack, [3/3] is somewhat controversial.


Re: [PATCH v2 RFC/RFT] dma-contiguous: Get normal pages for single-page allocations

2019-03-19 Thread Catalin Marinas
On Tue, Mar 05, 2019 at 10:32:02AM -0800, Nicolin Chen wrote:
> The addresses within a single page are always contiguous, so it's
> not really necessary to satisfy single-page allocations from the CMA
> area. Since the CMA area has a limited predefined size, it may run
> out of space in heavy use cases where quite a lot of CMA pages are
> being allocated for single pages.
> 
> However, there is also a concern that a device might care where a
> page comes from -- it might expect the page from the CMA area and act
> differently if the page doesn't.
> 
> This patch tries to get normal pages for single-page allocations
> unless the device has its own CMA area. This would save resources
> from the CMA area for more CMA allocations, and it'd also reduce
> the CMA fragmentation resulting from trivial allocations.

This is not sufficient. Some architectures/platforms declare limits on
the CMA range so that DMA is possible with all expected devices. For
example, on arm64 we keep the CMA in the lower 4GB of the address range,
though with this patch you only covered the iommu ops allocation.

Do you have any numbers to back this up? You don't seem to address
dma_direct_alloc() either but, as I said above, it's not trivial since
some platforms expect certain physical range for DMA allocations.
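
For the record, a minimal sketch of the proposed policy, assuming a
hypothetical dev_has_own_cma() predicate; note that the concern above still
applies, since a plain alloc_page() may return memory outside a device's
DMA range:

static struct page *alloc_single_or_cma(struct device *dev, size_t count,
					gfp_t gfp)
{
	/* trivial single-page case: spare the shared CMA pool */
	if (count == 1 && !dev_has_own_cma(dev))
		return alloc_page(gfp);

	return cma_alloc(dev_get_cma_area(dev), count, 0,
			 gfp & __GFP_NOWARN);
}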

-- 
Catalin


Re: [PATCH v2 0/7] iommu/vt-d: Fix-up device-domain relationship by refactoring to use iommu group default domain.

2019-03-19 Thread James Sewart via iommu
Hey Lu,

> On 15 Mar 2019, at 03:13, Lu Baolu  wrote:
> 
> Hi James,
> 
> On 3/14/19 7:56 PM, James Sewart wrote:
>> Patches 1 and 2 are the same as v1.
>> v1-v2:
>>   Refactored ISA direct mappings to be returned by iommu_get_resv_regions.
>>   Integrated patch by Lu to defer turning on DMAR until iommu.c has mapped
>> reserved regions.
>>   Integrated patches by Lu to remove more unused code in cleanup.
>> Lu: I didn't integrate your patch to set the default domain type as it
>> isn't directly related to the aim of this patchset. Instead patch 4
> 
> Without those patches, the user experience will be affected and some
> devices will no longer work on Intel platforms.
> 
> For a long time, the Intel IOMMU driver has had its own logic to determine
> whether a device requires an identity domain. For example, when the user
> specifies "iommu=pt" on the kernel command line, all devices will be
> attached to the identity domain. Furthermore, some quirky devices require
> an identity domain to be used before enabling DMA remapping, otherwise
> they will not work. This was done by adding quirk bits in the Intel IOMMU
> driver.
> 
> So from my point of view, one way is porting all those quirks and kernel
> parameters into the IOMMU generic layer, or opening a door for the vendor
> IOMMU driver to determine the default domain type on its own. I prefer the
> latter option since it will not impact any behaviors on other
> architectures.

I see your point. I’m not confident that using the proposed door to set a
group’s default domain has the desired behaviour. As discussed before, the
default domain type will be set based on the desired type for only the
first device attached to a group. I think to change the default domain
type you would need a slightly different door that wasn’t conditioned on
the device.

For situations where individual devices require an identity domain because 
of quirks then maybe calling is_identity_map per device in 
iommu_group_get_for_dev is a better solution than the one I proposed.

> 
>> addresses the issue of a device requiring an identity domain by ignoring
>> the domain param in attach_device and printing a warning.
> 
> This will not work as I commented in that thread.
> 
>> I booted some of our devices with this patchset and haven't seen any
>> issues. It doesn't look like we have any devices with RMRRs though, so
>> those codepaths aren't tested.
>> 
>> James Sewart (7):
>>   iommu: Move iommu_group_create_direct_mappings to after device_attach
>>   iommu/vt-d: Implement apply_resv_region for reserving IOVA ranges
>>   iommu/vt-d: Expose ISA direct mapping region via
>> iommu_get_resv_regions
>>   iommu/vt-d: Ignore domain parameter in attach_device if device
>> requires identity map
>>   iommu/vt-d: Allow IOMMU_DOMAIN_DMA to be allocated by iommu_ops
>>   iommu/vt-d: Remove lazy allocation of domains
>> Lu Baolu (1):
>>   iommu/vt-d: Enable DMA remapping after rmrr mapped
>>  drivers/iommu/intel-iommu.c | 444 +++-
>>  drivers/iommu/iommu.c   |   4 +-
>>  2 files changed, 131 insertions(+), 317 deletions(-)
> 
> Best regards,
> Lu Baolu

Cheers,
James.
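
As an aside, a minimal sketch (not the actual patch) of how a driver can
expose a direct-mapped range through iommu_get_resv_regions; the ISA window
(first 16 MiB) and prot flags here are illustrative:

static void example_get_resv_regions(struct device *dev,
				     struct list_head *head)
{
	struct iommu_resv_region *reg;

	/* direct-mapped (identity) window, e.g. the ISA range */
	reg = iommu_alloc_resv_region(0, 16 << 20,
				      IOMMU_READ | IOMMU_WRITE,
				      IOMMU_RESV_DIRECT);
	if (reg)
		list_add_tail(&reg->list, head);
}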


Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-19 Thread Robin Murphy

On 19/03/2019 07:59, Lu Baolu wrote:
> Hi Christoph,
>
> On 3/14/19 12:10 AM, Christoph Hellwig wrote:
>> On Wed, Mar 13, 2019 at 10:31:52AM +0800, Lu Baolu wrote:
>>> Hi again,
>>>
>>> On 3/13/19 10:04 AM, Lu Baolu wrote:
>>>> Hi,
>>>>
>>>> On 3/13/19 12:38 AM, Christoph Hellwig wrote:
>>>>> On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:
>>>>>> This adds the APIs for bounce buffer specified domain
>>>>>> map() and unmap(). The start and end partial pages will
>>>>>> be mapped with bounce buffered pages instead. This will
>>>>>> enhance the security of DMA buffer by isolating the DMA
>>>>>> attacks from malicious devices.
>>>>>
>>>>> Please reuse the swiotlb code instead of reinventing it.
>>>
>>> Just looked into the code again. At least we could reuse below
>>> functions:
>>>
>>> swiotlb_tbl_map_single()
>>> swiotlb_tbl_unmap_single()
>>> swiotlb_tbl_sync_single()
>>>
>>> Anything else?
>>
>> Yes, that is probably about the level you want to reuse, given that the
>> next higher layer already has hooks into the direct mapping code.
>
> I am trying to change my code to reuse swiotlb. But I found that swiotlb
> might not be suitable for my case.
>
> Below is what I got with swiotlb_map():
>
> phy_addr        size    tlb_addr
>
> 0x167eec330     0x8     0x85dc6000
> 0x167eef5c0     0x40    0x85dc6800
> 0x167eec330     0x8     0x85dc7000
> 0x167eef5c0     0x40    0x85dc7800
>
> But what I expected to get is:
>
> phy_addr        size    tlb_addr
>
> 0x167eec330     0x8     0xA330
> 0x167eef5c0     0x40    0xB5c0
> 0x167eec330     0x8     0xC330
> 0x167eef5c0     0x40    0xD5c0
>
> where 0xXX000 is the physical address of a bounced page.
>
> Basically, I want a bounce page to replace a leaf page in the vt-d page
> table, which maps a buffer with size less than a PAGE_SIZE.

I'd imagine the thing to do would be to factor out the slot allocation
in swiotlb_tbl_map_single() so that an IOMMU page pool/allocator can be
hooked in as an alternative.

However we implement it, though, this should absolutely be a common
IOMMU thing that all relevant DMA backends can opt into, and not
specific to VT-d. I mean, it's already more or less the same concept as
the PowerPC secure VM thing.
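
Something like the following, say -- a sketch of the behaviour Lu describes,
assuming a page-grained bounce pool rather than any existing swiotlb API:
allocate a whole bounce page and keep the buffer's offset within it, so the
result can replace a leaf PTE:

static phys_addr_t bounce_page_map(phys_addr_t orig_addr, size_t size)
{
	struct page *page = alloc_page(GFP_ATOMIC);
	phys_addr_t tlb_addr;

	if (!page)
		return 0;	/* error marker for this sketch */

	/* preserve the intra-page offset, unlike slot-based swiotlb */
	tlb_addr = page_to_phys(page) + offset_in_page(orig_addr);
	memcpy(phys_to_virt(tlb_addr), phys_to_virt(orig_addr), size);
	return tlb_addr;	/* e.g. 0x...330 for orig 0x167eec330 */
}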


Robin.

5.1-rc1: mpt init crash in scsi_map_dma, dma_4v_map_sg on sparc64

2019-03-19 Thread Meelis Roos

Tried 5.1-rc1 on a bunch of sparcs; this hits all my sparcs with sun4v and
mpt scsi.

[2.733263] Fusion MPT base driver 3.04.20
[2.742995] Copyright (c) 1999-2008 LSI Corporation
[2.743052] Fusion MPT SAS Host driver 3.04.20
[2.743881] mptbase: ioc0: Initiating bringup
[3.737822] ioc0: LSISAS1064 A3: Capabilities={Initiator}
[   17.566584] scsi host0: ioc0: LSISAS1064 A3, FwRev=010ah, Ports=1, 
MaxQ=511, IRQ=27
[   17.595897] mptsas: ioc0: attaching ssp device: fw_channel 0, fw_id 0, phy 
0, sas_addr 0x5000c5001799a45d
[   17.598465] Unable to handle kernel NULL pointer dereference
[   17.598623] tsk->{mm,active_mm}->context = 
[   17.598723] tsk->{mm,active_mm}->pgd = 88802000
[   17.598774]   \|/  \|/
[   17.598774]   "@'/ .. \`@"
[   17.598774]   /_| \__/ |_\
[   17.598774]  \__U_/
[   17.598894] swapper/0(1): Oops [#1]
[   17.598937] CPU: 12 PID: 1 Comm: swapper/0 Not tainted 5.1.0-rc1 #118
[   17.598994] TSTATE: 80e01601 TPC: 004483a8 TNPC: 
004483ac Y: Not tainted
[   17.599086] TPC: 
[   17.599127] g0: 886d1d51 g1:  g2: 0001 
g3: 886b8000
[   17.599197] g4: 886c g5: 8001fef78000 g6: 886d 
g7: 
[   17.599267] o0: 8001f526bc90 o1: 01e2 o2: 8001f4fc2000 
o3: 8001f4fc2000
[   17.599337] o4: 8001f4fc1144 o5: 8001f5002800 sp: 886d1db1 
ret_pc: 00740720
[   17.599415] RPC: 
[   17.599456] l0: 2400 l1: ff00 l2: 0008 
l3: 0001
[   17.599526] l4: 8001f5002830 l5: 00ff l6: 8001f46c7e10 
l7: 8001f4fc1000
[   17.599596] i0: 8001f4b350b0 i1: 8001f526be28 i2: 0001 
i3: 0002
[   17.599665] i4: 0010 i5:  i6: 886d1f01 
i7: 00725570
[   17.599745] I7: 
[   17.599781] Call Trace:
[   17.599824]  [00725570] scsi_dma_map+0x50/0xc0
[   17.599881]  [00740720] mptscsih_qcmd+0x280/0x660
[   17.599940]  [00723dec] scsi_queue_rq+0x6ac/0x880
[   17.65]  [00680198] blk_mq_dispatch_rq_list+0x138/0x540
[   17.600065]  [00685154] blk_mq_do_dispatch_sched+0x54/0x100
[   17.600124]  [0068560c] blk_mq_sched_dispatch_requests+0xec/0x160
[   17.600186]  [0067e83c] __blk_mq_run_hw_queue+0x9c/0x180
[   17.600246]  [0067eaa8] __blk_mq_delay_run_hw_queue+0x188/0x1e0
[   17.600307]  [0067ff74] blk_mq_run_hw_queue+0x54/0x140
[   17.600365]  [00685be0] blk_mq_sched_insert_request+0x120/0x180
[   17.600424]  [0067a394] blk_execute_rq+0x34/0x60
[   17.600483]  [007218cc] __scsi_execute+0xcc/0x1a0
[   17.600543]  [00725f40] scsi_probe_and_add_lun+0x1e0/0xec0
[   17.600603]  [00726e98] __scsi_scan_target+0xb8/0x680
[   17.600663]  [0072757c] scsi_scan_target+0x11c/0x140
[   17.600727]  [0072e9b8] sas_rphy_add+0x138/0x1c0
[   17.600777] Disabling lock debugging due to kernel taint
[   17.600837] Caller[00725570]: scsi_dma_map+0x50/0xc0
[   17.600896] Caller[00740720]: mptscsih_qcmd+0x280/0x660
[   17.600956] Caller[00723dec]: scsi_queue_rq+0x6ac/0x880
[   17.601018] Caller[00680198]: blk_mq_dispatch_rq_list+0x138/0x540
[   17.601078] Caller[00685154]: blk_mq_do_dispatch_sched+0x54/0x100
[   17.601138] Caller[0068560c]: 
blk_mq_sched_dispatch_requests+0xec/0x160
[   17.601210] Caller[0067e83c]: __blk_mq_run_hw_queue+0x9c/0x180
[   17.601271] Caller[0067eaa8]: __blk_mq_delay_run_hw_queue+0x188/0x1e0
[   17.601333] Caller[0067ff74]: blk_mq_run_hw_queue+0x54/0x140
[   17.601392] Caller[00685be0]: blk_mq_sched_insert_request+0x120/0x180
[   17.601453] Caller[0067a394]: blk_execute_rq+0x34/0x60
[   17.601513] Caller[007218cc]: __scsi_execute+0xcc/0x1a0
[   17.601574] Caller[00725f40]: scsi_probe_and_add_lun+0x1e0/0xec0
[   17.601635] Caller[00726e98]: __scsi_scan_target+0xb8/0x680
[   17.601696] Caller[0072757c]: scsi_scan_target+0x11c/0x140
[   17.601758] Caller[0072e9b8]: sas_rphy_add+0x138/0x1c0
[   17.601819] Caller[00743b64]: mptsas_add_end_device+0xc4/0x100
[   17.601882] Caller[00746964]: mptsas_scan_sas_topology+0x164/0x300
[   17.601943] Caller[00749094]: mptsas_probe+0x2d4/0x440
[   17.602004] Caller[006bf948]: pci_device_probe+0xc8/0x160
[   17.602066] Caller[0070dab0]: really_probe+0x1b0/0x2e0
[   17.602126] Caller[0070de10]: driver_probe_device+0x50/0x100
[   17.602186] Caller[0070e0a8]: device_driver_attach+0x48/0x60
[   17.602245] Caller[0070e140]: __driver_attach+0x80/0xe0
[   17.602302] Caller[0070c484]: bus_for_each_dev+0x44/0x80
[   17.602360] Caller[0070ca74]: 

Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-19 Thread Lu Baolu

Hi Christoph,

On 3/14/19 12:10 AM, Christoph Hellwig wrote:
> On Wed, Mar 13, 2019 at 10:31:52AM +0800, Lu Baolu wrote:
>> Hi again,
>>
>> On 3/13/19 10:04 AM, Lu Baolu wrote:
>>> Hi,
>>>
>>> On 3/13/19 12:38 AM, Christoph Hellwig wrote:
>>>> On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:
>>>>> This adds the APIs for bounce buffer specified domain
>>>>> map() and unmap(). The start and end partial pages will
>>>>> be mapped with bounce buffered pages instead. This will
>>>>> enhance the security of DMA buffer by isolating the DMA
>>>>> attacks from malicious devices.
>>>>
>>>> Please reuse the swiotlb code instead of reinventing it.
>>
>> Just looked into the code again. At least we could reuse below
>> functions:
>>
>> swiotlb_tbl_map_single()
>> swiotlb_tbl_unmap_single()
>> swiotlb_tbl_sync_single()
>>
>> Anything else?
>
> Yes, that is probably about the level you want to reuse, given that the
> next higher layer already has hooks into the direct mapping code.

I am trying to change my code to reuse swiotlb. But I found that swiotlb
might not be suitable for my case.

Below is what I got with swiotlb_map():

phy_addr        size    tlb_addr

0x167eec330     0x8     0x85dc6000
0x167eef5c0     0x40    0x85dc6800
0x167eec330     0x8     0x85dc7000
0x167eef5c0     0x40    0x85dc7800

But what I expected to get is:

phy_addr        size    tlb_addr

0x167eec330     0x8     0xA330
0x167eef5c0     0x40    0xB5c0
0x167eec330     0x8     0xC330
0x167eef5c0     0x40    0xD5c0

where 0xXX000 is the physical address of a bounced page.

Basically, I want a bounce page to replace a leaf page in the vt-d page
table, which maps a buffer with size less than a PAGE_SIZE.
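
For what it's worth, the 2 KiB-aligned addresses above fall out of swiotlb's
slab layout: the pool is carved into IO_TLB_SHIFT-sized slabs and each
mapping starts on a slab boundary, so the original offset within the page
(0x330, 0x5c0) is not preserved. A simplified model:

#define IO_TLB_SHIFT	11	/* 2 KiB slabs, as in the kernel's swiotlb */

static phys_addr_t slot_to_tlb_addr(phys_addr_t io_tlb_start,
				    unsigned int index)
{
	/* mappings always begin on a slab boundary: offset bits are lost */
	return io_tlb_start + ((phys_addr_t)index << IO_TLB_SHIFT);
}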

Best regards,
Lu Baolu


Re: [PATCH v6 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables

2019-03-19 Thread Nicolas Boichat
On Mon, Feb 25, 2019 at 8:23 AM Nicolas Boichat  wrote:
>
> On Thu, Feb 14, 2019 at 1:12 AM Vlastimil Babka  wrote:
> >
> > On 1/22/19 11:51 PM, Nicolas Boichat wrote:
> > > Hi Andrew,
> > >
> > > On Fri, Jan 11, 2019 at 6:21 PM Joerg Roedel  wrote:
> > >>
> > >> On Wed, Jan 02, 2019 at 01:51:45PM +0800, Nicolas Boichat wrote:
> > >> > Does anyone have any further comment on this series? If not, which
> > >> > maintainer is going to pick this up? I assume Andrew Morton?
> > >>
> > >> Probably, yes. I don't like to carry the mm-changes in iommu-tree, so
> > >> this should go through mm.
> > >
> > > Gentle ping on this series, it seems like it's better if it goes
> > > through your tree.
> > >
> > > Series still applies cleanly on linux-next, but I'm happy to resend if
> > > that helps.
> >
> > Ping, Andrew?
>
> Another gentle ping, I still don't see these patches in mmot[ms]. Thanks.

Andrew: AFAICT this still applies cleanly on linux-next/master, so I
don't plan to resend... are there any other issues with this series?

This is a regression, so it'd be nice to have it fixed in mainline, eventually.

Thanks,

> > > Thanks!
> > >
> > >> Regards,
> > >>
> > >> Joerg
> > >
> >