The PASID support and enable bits in the context entry aren't the right
indicator of the table type (legacy or scalable mode). Check the
DMA_RTADDR_SMT bit in the root table address register instead.
Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Cc: Sai Praneeth
Fixes: dd5142ca5d24b ("iommu/vt-d: Add deb
Hi Alex,
On 7/19/19 11:19 PM, Alex Williamson wrote:
On Fri, 19 Jul 2019 16:27:04 +0800
Lu Baolu wrote:
Hi Alex,
On 7/19/19 7:16 AM, Alex Williamson wrote:
On Wed, 12 Jun 2019 08:28:51 +0800
Lu Baolu wrote:
The domain_init() and md_domain_init() do almost the same job.
Consolidate the
kbuild test robot writes:
> Hi Thiago,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [cannot apply to v5.2 next-20190718]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve the system]
>
> url:
>
Hello Lianbo,
lijiang writes:
> On 2019-07-19 01:47, Lendacky, Thomas wrote:
>> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>>> Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
>>> appear in generic kernel code because it forces non-x86 architectures to
>>> define
Lendacky, Thomas writes:
> On 7/18/19 2:44 PM, Thiago Jung Bauermann wrote:
>>
>> Lendacky, Thomas writes:
>>
>>> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
Hello,
This version is mostly about splitting up patch 2/3 into three separate
patches, as suggested by Chri
On Fri, 19 Jul 2019 17:04:26 +0800
Lu Baolu wrote:
> Hi Alex,
>
> On 7/18/19 11:12 AM, Alex Williamson wrote:
> > On Sat, 25 May 2019 13:41:33 +0800
> > Lu Baolu wrote:
> >
> >> Previously, get_valid_domain_for_dev() is used to retrieve the
> >> DMA domain which has been attached to the devi
On Fri, 19 Jul 2019 16:27:04 +0800
Lu Baolu wrote:
> Hi Alex,
>
> On 7/19/19 7:16 AM, Alex Williamson wrote:
> > On Wed, 12 Jun 2019 08:28:51 +0800
> > Lu Baolu wrote:
> >
> >> The domain_init() and md_domain_init() do almost the same job.
> >> Consolidate them to avoid duplication.
> >>
> >
The patch sent in the below thread fixed the problem in kernel v5.2 on
my system.
A sincere Thank You to everyone who jumped in to help, using their
valuable time on this obscure issue.
Best Regards,
Alfred Farleigh
Re: [PATCH dma 1/1] dma-direct: correct the physical addr in
dma_dire
On 7/18/19 2:44 PM, Thiago Jung Bauermann wrote:
>
> Lendacky, Thomas writes:
>
>> On 7/17/19 10:28 PM, Thiago Jung Bauermann wrote:
>>> Hello,
>>>
>>> This version is mostly about splitting up patch 2/3 into three separate
>>> patches, as suggested by Christoph Hellwig. Two other changes are a
On Thu, 2019-07-18 at 13:18 +0200, Nicolas Saenz Julienne wrote:
> On Thu, 2019-07-18 at 11:15 +0200, Christoph Hellwig wrote:
> > On Wed, Jul 17, 2019 at 05:31:34PM +0200, Nicolas Saenz Julienne wrote:
> > > Historically devices with ZONE_DMA32 have been assumed to be able to
> > > address at leas
On Wed, Jul 17, 2019 at 06:51:46PM +0530, Vignesh Raghavendra wrote:
> > This series adds swiotlb support to the 32-bit arm port to ensure
> > platforms with LPAE support can support DMA mapping for all devices
> > using 32-bit dma masks, just like we do on other ports that support
> >> 32-bit phys
Thanks,
applied to the dma-mapping tree and I'll send it to Linus this
weekend.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Thu, Jul 18, 2019 at 11:52 AM Christoph Hellwig wrote:
>
> On Thu, Jul 18, 2019 at 10:49:34AM +0200, Christoph Hellwig wrote:
> > On Thu, Jul 18, 2019 at 01:45:16PM +1000, Oliver O'Halloran wrote:
> > > > Other than m68k, mips, and arm64, everybody else that doesn't have
> > > > ARCH_NO_COHEREN
+static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
+ u64 *cmds, int n, bool sync)
+{
+ u64 cmd_sync[CMDQ_ENT_DWORDS];
+ u32 prod;
unsigned long flags;
- bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
- stru
On 19/07/2019 10:26, fugang.d...@nxp.com wrote:
From: Fugang Duan
dma_map_sg() may use a swiotlb bounce buffer when the kernel parameters
include "swiotlb=force" or the dma_addr is outside the dev->dma_mask
range. After the DMA transfer from the device to memory completes, the
user calls dma_sync_sg_for_cpu() to
From: Fugang Duan
dma_map_sg() may use a swiotlb bounce buffer when the kernel parameters
include "swiotlb=force" or the dma_addr is outside the dev->dma_mask
range. After the DMA transfer from the device to memory completes, the
user calls dma_sync_sg_for_cpu() to sync with the DMA buffer and copy
the original virtua
Hi,
On 7/17/19 5:38 AM, Dmitry Safonov wrote:
The Intel VT-d driver was reworked to use the common deferred-flushing
implementation. Previously there was one global per-cpu flush queue;
afterwards, one per domain.
Before a flush is deferred, the queue should be allocated and initialized.
Currently only
Hi,
On 7/17/19 5:38 AM, Dmitry Safonov wrote:
There are a couple of places where domain_exit() is called on
domain_init() failure, while currently domain_init() can fail only if
alloc_pgtable_page() has failed.
Make domain_exit() check whether domain->pgd is present before calling
domain_unmap(), as it
Hi Thiago,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[cannot apply to v5.2 next-20190718]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Thiago-
Hi Alex,
On 7/18/19 11:12 AM, Alex Williamson wrote:
On Sat, 25 May 2019 13:41:33 +0800
Lu Baolu wrote:
Previously, get_valid_domain_for_dev() is used to retrieve the
DMA domain which has been attached to the device or allocate one
if no domain has been attached yet. As we have delegated the
Hi Alex,
On 7/19/19 7:16 AM, Alex Williamson wrote:
On Wed, 12 Jun 2019 08:28:51 +0800
Lu Baolu wrote:
The domain_init() and md_domain_init() do almost the same job.
Consolidate them to avoid duplication.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-iommu.c | 123 +++---
On 7/19/19 2:06 AM, Christoph Hellwig wrote:
> What is inherently architecture specific here over the fact that
> the pgprot_* expand to architecture specific bits?
What I meant is that different architectures seem to have different
criteria for setting the different pgprot_ bits. i.e. ppc checks
On Thu, Jul 18, 2019 at 02:46:00PM -0500, Shawn Anastasio wrote:
> Personally, I'm not a huge fan of an implicit default for something
> inherently architecture-dependent like this at all.
What is inherently architecture specific here over the fact that
the pgprot_* expand to architecture specific