Module Version : PiccasoCpu 10
AGESA Version : PiccasoPI 100A
I did not try to enter the system in any other way (like via ssh) than via
Desktop.
-Original Message-
From: Huang Rui
Sent: Tuesday, 24 November 2020 07:43
To: Kuehling, Felix
Cc: Will Deacon ; Deucher, Alexander
;
On Tue, Nov 24, 2020 at 06:51:11AM +0800, Kuehling, Felix wrote:
> On 2020-11-23 5:33 p.m., Will Deacon wrote:
> > On Mon, Nov 23, 2020 at 09:04:14PM +, Deucher, Alexander wrote:
> >> [AMD Public Use]
> >>
> >>> -Original Message-
> >>> From: Will Deacon
> >>> Sent: Monday, November
This is a board developed by my company.
Subsystem-ID is ea50:0c19 or ea50:cc10 (depending on which particular carrier
board the compute module is attached to); however, we haven't yet managed to
enter this Subsystem-ID into every PCI device in the system, because of missing
means to do that by
On 2020-11-24 00:52, Rob Clark wrote:
On Mon, Nov 23, 2020 at 9:01 AM Sai Prakash Ranjan
wrote:
On 2020-11-23 20:51, Will Deacon wrote:
> On Tue, Nov 17, 2020 at 08:00:39PM +0530, Sai Prakash Ranjan wrote:
>> Some hardware variants contain a system cache or the last level
> >> cache (LLC). This
The NVMe driver and other applications may depend on the data offset
to operate correctly. Currently, when unaligned data is mapped via
SWIOTLB, the data is mapped slab-aligned within the SWIOTLB. When
booting with the swiotlb=force option and using NVMe as the interface,
running mkfs.xfs on RHEL fails
On 2020-11-23 5:33 p.m., Will Deacon wrote:
On Mon, Nov 23, 2020 at 09:04:14PM +, Deucher, Alexander wrote:
[AMD Public Use]
-Original Message-
From: Will Deacon
Sent: Monday, November 23, 2020 8:44 AM
To: linux-ker...@vger.kernel.org
Cc: linux-...@vger.kernel.org;
Hello Konrad,
On Mon, Nov 23, 2020 at 12:56:32PM -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 23, 2020 at 06:06:47PM +0100, Borislav Petkov wrote:
> > On Thu, Nov 19, 2020 at 09:42:05PM +, Ashish Kalra wrote:
> > > From: Ashish Kalra
> > >
> > > For SEV, all DMA to and from guest has
On Mon, Nov 23, 2020 at 09:04:14PM +, Deucher, Alexander wrote:
> [AMD Public Use]
>
> > -Original Message-
> > From: Will Deacon
> > Sent: Monday, November 23, 2020 8:44 AM
> > To: linux-ker...@vger.kernel.org
> > Cc: linux-...@vger.kernel.org; iommu@lists.linux-foundation.org; Will
[AMD Public Use]
> -Original Message-
> From: Will Deacon
> Sent: Monday, November 23, 2020 8:44 AM
> To: linux-ker...@vger.kernel.org
> Cc: linux-...@vger.kernel.org; iommu@lists.linux-foundation.org; Will
> Deacon ; Bjorn Helgaas ;
> Deucher, Alexander ; Edgar Merger
> ; Joerg Roedel
On Mon, Nov 23, 2020 at 9:01 AM Sai Prakash Ranjan
wrote:
>
> On 2020-11-23 20:51, Will Deacon wrote:
> > On Tue, Nov 17, 2020 at 08:00:39PM +0530, Sai Prakash Ranjan wrote:
> >> Some hardware variants contain a system cache or the last level
> >> cache (LLC). This cache is typically a large block
On Mon, Nov 23, 2020 at 07:02:15PM +0100, Borislav Petkov wrote:
> On Mon, Nov 23, 2020 at 12:56:32PM -0500, Konrad Rzeszutek Wilk wrote:
> > This is not going to work for TDX. I think having a registration
> > to SWIOTLB to have this function would be better going forward.
> >
> > As in there
On Mon, Nov 23, 2020 at 12:56:32PM -0500, Konrad Rzeszutek Wilk wrote:
> This is not going to work for TDX. I think having a registration
> to SWIOTLB to have this function would be better going forward.
>
> As in there will be a swiotlb_register_adjuster() which AMD SEV
> code can call at start,
On Mon, Nov 23, 2020 at 06:06:47PM +0100, Borislav Petkov wrote:
> On Thu, Nov 19, 2020 at 09:42:05PM +, Ashish Kalra wrote:
> > From: Ashish Kalra
> >
> > For SEV, all DMA to and from guest has to use shared (un-encrypted) pages.
> > SEV uses SWIOTLB to make this happen without requiring
Fix the checkpatch warning for space required before the open
parenthesis.
Signed-off-by: Sai Prakash Ranjan
Acked-by: Will Deacon
---
drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
From: Jordan Crouse
GPU targets with an MMU-500 attached have a slightly different process for
enabling system cache. Use the compatible string on the IOMMU phandle
to see if an MMU-500 is attached and modify the programming sequence
accordingly.
Signed-off-by: Jordan Crouse
Signed-off-by: Sai
Use table and of_match_node() to match qcom implementation
instead of multiple of_device_compatible() calls for each
QCOM SMMU implementation.
Signed-off-by: Sai Prakash Ranjan
Acked-by: Will Deacon
---
drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 9 +
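The table-driven matching described above can be modeled in a few lines of plain C. This is a userspace sketch only: the real patch uses `struct of_device_id` and `of_match_node()`, while here a sentinel-terminated table and a string compare stand in for the devicetree machinery, and the names (`impl_match`, `match_impl`) are illustrative, not the kernel's.

```c
#include <string.h>

/* Userspace sketch (assumption: real code uses struct of_device_id and
 * of_match_node(); a plain table models the same idea). Each entry
 * pairs a compatible string with implementation-specific data. */
struct impl_match {
	const char *compatible;
	const char *impl;	/* stands in for the qcom impl data pointer */
};

static const struct impl_match match_table[] = {
	{ "qcom,sdm845-smmu-500", "qcom-impl" },
	{ "qcom,sc7180-smmu-500", "qcom-impl" },
	{ NULL, NULL },		/* sentinel terminates the table */
};

/* One table lookup replaces a chain of of_device_is_compatible() calls. */
static const char *match_impl(const char *compatible)
{
	const struct impl_match *m;

	for (m = match_table; m->compatible; m++)
		if (!strcmp(m->compatible, compatible))
			return m->impl;
	return NULL;
}
```

Adding support for a new SoC then means adding one table row instead of another `if` branch.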
From: Sharat Masetty
The last level system cache can be partitioned into 32 different
slices, of which the GPU has two preallocated. One slice is
used for caching GPU buffers and the other slice is used for
caching the GPU SMMU pagetables. This talks to the core system
cache driver to acquire
From: Sharat Masetty
The register read-modify-write construct is generic enough
that it can be used by other subsystems as needed, so create
a more generic rmw() function and have gpu_rmw() use
this new function.
Signed-off-by: Sharat Masetty
Reviewed-by: Jordan Crouse
Signed-off-by: Sai
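The refactor above can be sketched in userspace C. Assumption: the real kernel code goes through MMIO accessors on device registers; here a plain array stands in for the register file, and the helper names mirror but are not the kernel's.

```c
#include <stdint.h>

/* Generic read-modify-write: clear the bits in 'mask', then OR in 'set'.
 * (Sketch only; plain memory models MMIO registers.) */
static inline void rmw(uint32_t *regs, unsigned int reg,
		       uint32_t mask, uint32_t set)
{
	uint32_t val = regs[reg];

	regs[reg] = (val & ~mask) | set;
}

/* gpu_rmw() becomes a thin wrapper over the generic helper, so other
 * subsystems can share the same construct. */
static inline void gpu_rmw(uint32_t *gpu_regs, unsigned int reg,
			   uint32_t mask, uint32_t set)
{
	rmw(gpu_regs, reg, mask, set);
}
```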
On Thu, Nov 19, 2020 at 09:42:05PM +, Ashish Kalra wrote:
> From: Ashish Kalra
>
> For SEV, all DMA to and from guest has to use shared (un-encrypted) pages.
> SEV uses SWIOTLB to make this happen without requiring changes to device
> drivers. However, depending on workload being run, the
Now that we have a struct io_pgtable_domain_attr with quirks,
use that for non_strict mode as well thereby removing the need
for more members of arm_smmu_domain in the future.
Signed-off-by: Sai Prakash Ranjan
---
drivers/iommu/arm/arm-smmu/arm-smmu.c | 8 +++-
Some hardware variants contain a system cache or the last level
cache (LLC). This cache is typically a large block which is shared
by multiple clients on the SoC. The GPU uses the system cache to cache
both the GPU data buffers (like textures) as well as the SMMU pagetables.
This helps with improved render
Add iommu domain attribute for pagetable configuration which
initially will be used to set quirks like for system cache aka
last level cache to be used by client drivers like GPU to set
right attributes for caching the hardware pagetables into the
system cache and later can be extended to include
Add a quirk IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to override
the outer-cacheability attributes set in the TCR for a
non-coherent page table walker when using system cache.
Signed-off-by: Sai Prakash Ranjan
---
drivers/iommu/io-pgtable-arm.c | 10 --
include/linux/io-pgtable.h | 4
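The quirk's effect on the TCR can be illustrated with a small sketch. Assumptions are flagged: the field position and encoding follow the LPAE stage-1 TCR layout (ORGN0 at bits [11:10], encoding 1 = outer write-back write-allocate); the quirk bit value and the helper name are illustrative, not the kernel's.

```c
#include <stdint.h>

/* Illustrative sketch: a non-coherent page table walker normally runs
 * with outer non-cacheable walks (ORGN0 = 0); the quirk overrides that
 * to write-back write-allocate so walks can hit the system cache.
 * Field position per the LPAE TCR layout; quirk bit value is made up. */
#define TCR_ORGN0_SHIFT		10
#define TCR_ORGN0_MASK		(0x3u << TCR_ORGN0_SHIFT)
#define TCR_RGN_NC		0u	/* outer non-cacheable */
#define TCR_RGN_WBWA		1u	/* outer write-back, write-allocate */

#define QUIRK_ARM_OUTER_WBWA	(1u << 0)	/* stand-in for the flag */

static uint32_t build_tcr_outer(uint32_t tcr, unsigned long quirks)
{
	uint32_t rgn = (quirks & QUIRK_ARM_OUTER_WBWA) ? TCR_RGN_WBWA
						       : TCR_RGN_NC;

	return (tcr & ~TCR_ORGN0_MASK) | (rgn << TCR_ORGN0_SHIFT);
}
```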
On 2020-11-23 20:51, Will Deacon wrote:
On Tue, Nov 17, 2020 at 08:00:39PM +0530, Sai Prakash Ranjan wrote:
Some hardware variants contain a system cache or the last level
cache (LLC). This cache is typically a large block which is shared
by multiple clients on the SoC. The GPU uses the system cache
On 2020-11-23 20:49, Will Deacon wrote:
On Tue, Nov 17, 2020 at 08:00:42PM +0530, Sai Prakash Ranjan wrote:
Now that we have a struct domain_attr_io_pgtbl_cfg with quirks,
use that for non_strict mode as well thereby removing the need
for more members of arm_smmu_domain in the future.
On 2020-11-23 20:48, Will Deacon wrote:
On Tue, Nov 17, 2020 at 08:00:41PM +0530, Sai Prakash Ranjan wrote:
Add iommu domain attribute for pagetable configuration which
initially will be used to set quirks like for system cache aka
last level cache to be used by client drivers like GPU to set
On 2020-11-23 20:36, Will Deacon wrote:
On Tue, Nov 17, 2020 at 08:00:40PM +0530, Sai Prakash Ranjan wrote:
Add a quirk IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to override
the attributes set in TCR for the page table walker when
using system cache.
Signed-off-by: Sai Prakash Ranjan
---
On Thu, 12 Nov 2020 22:05:19 +, John Stultz wrote:
> Robin Murphy pointed out that if the arm-smmu driver probes before
> the qcom_scm driver, we may call qcom_scm_qsmmu500_wait_safe_toggle()
> before the __scm is initialized.
>
> Now, getting this to happen is a bit contrived, as in my
On Thu, 19 Nov 2020 16:58:46 +, Shameer Kolothum wrote:
> Currently iommu_create_device_direct_mappings() is called
> without checking the return of __iommu_attach_device(). This
> may result in failures in the iommu driver if the device attach
> returns an error.
Applied to arm64 (for-next/iommu/fixes),
On Fri, 6 Nov 2020 16:50:46 +0100, Jean-Philippe Brucker wrote:
> These are the remaining bits implementing iommu_sva_bind_device() for
> SMMUv3. They didn't make it into v5.10 because an Ack was missing for
> adding the PASID field to mm_struct. That is now upstream, in commit
> 52ad9bc64c74
Hi Ashish, non-technical comment: in the subject, you might want to
s/SWIOTBL/SWIOTLB/.
cheers,
Guilherme
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Tue, Nov 17, 2020 at 08:00:39PM +0530, Sai Prakash Ranjan wrote:
> Some hardware variants contain a system cache or the last level
> cache (LLC). This cache is typically a large block which is shared
> by multiple clients on the SoC. The GPU uses the system cache to cache
> both the GPU data
On Tue, Nov 17, 2020 at 08:00:42PM +0530, Sai Prakash Ranjan wrote:
> Now that we have a struct domain_attr_io_pgtbl_cfg with quirks,
> use that for non_strict mode as well thereby removing the need
> for more members of arm_smmu_domain in the future.
>
> Signed-off-by: Sai Prakash Ranjan
> ---
On Tue, Nov 17, 2020 at 08:00:41PM +0530, Sai Prakash Ranjan wrote:
> Add iommu domain attribute for pagetable configuration which
> initially will be used to set quirks like for system cache aka
> last level cache to be used by client drivers like GPU to set
> right attributes for caching the
On Tue, Nov 17, 2020 at 08:00:40PM +0530, Sai Prakash Ranjan wrote:
> Add a quirk IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to override
> the attributes set in TCR for the page table walker when
> using system cache.
>
> Signed-off-by: Sai Prakash Ranjan
> ---
> drivers/iommu/io-pgtable-arm.c | 10
On Thu, Nov 19, 2020 at 10:19:01PM -0600, Brijesh Singh wrote:
> On 11/19/20 8:30 PM, Suravee Suthikulpanit wrote:
> > On 11/18/20 5:57 AM, Will Deacon wrote:
> > > I think I'm missing something here. set_memory_4k() will break the
> > > kernel linear mapping up into page granular mappings,
Sorry for the empty message. I would like to add Joerg Roedel and
IOMMU related maillist in case they can help on this issue.
Mon, 23 Nov 2020 at 16:32, Matwey V. Kornilov :
>
> Mon, 23 Nov 2020 at 14:19, Thimo Emmerich :
> >
> > Hi,
> >
> > thanks for your prompt answer. Unfortunately I
Mon, 23 Nov 2020 at 14:19, Thimo Emmerich :
>
> Hi,
>
> thanks for your prompt answer. Unfortunately I was out of office on Friday.
>
> Here the updates:
> - Tracing showed the lines you mentioned.
> - Updating the kernel to the latest stable (v5.9.10) did not change
> anything.
> -
On Mon, Nov 23, 2020 at 09:54:49PM +0800, Lu Baolu wrote:
> Hi Will,
>
> On 2020/11/23 21:03, Will Deacon wrote:
> > Hi Baolu,
> >
> > On Mon, Nov 23, 2020 at 08:55:17PM +0800, Lu Baolu wrote:
> > > On 2020/11/23 20:04, Will Deacon wrote:
> > > > On Sat, Nov 21, 2020 at 09:56:17PM +0800, Lu
Hi Will,
On 2020/11/23 21:03, Will Deacon wrote:
Hi Baolu,
On Mon, Nov 23, 2020 at 08:55:17PM +0800, Lu Baolu wrote:
On 2020/11/23 20:04, Will Deacon wrote:
On Sat, Nov 21, 2020 at 09:56:17PM +0800, Lu Baolu wrote:
@@ -1645,13 +1655,10 @@ struct __group_domain_type {
static int
Edgar Merger reports that the AMD Raven GPU does not work reliably on
his system when the IOMMU is enabled:
| [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx timeout, signaled
seq=1, emitted seq=3
| [...]
| amdgpu :0b:00.0: GPU reset begin!
| AMD-Vi: Completion-Wait loop timed
On Mon, Nov 23, 2020 at 01:35:14PM +0100, Linus Walleij wrote:
> On Mon, Nov 16, 2020 at 5:36 PM Will Deacon wrote:
>
> > Linus -- please can you drop this one (patch 3/3) for now, given that it's
> > causing problems?
>
> Reverted now, sorry for missing to do this earlier.
Cheers, Linus!
Hi Baolu,
On Mon, Nov 23, 2020 at 08:55:17PM +0800, Lu Baolu wrote:
> On 2020/11/23 20:04, Will Deacon wrote:
> > On Sat, Nov 21, 2020 at 09:56:17PM +0800, Lu Baolu wrote:
> > > @@ -1645,13 +1655,10 @@ struct __group_domain_type {
> > > static int probe_get_default_domain_type(struct device
Hi Will,
On 2020/11/23 20:04, Will Deacon wrote:
On Sat, Nov 21, 2020 at 09:56:17PM +0800, Lu Baolu wrote:
So that the vendor iommu drivers are no longer required to provide the
def_domain_type callback to always isolate the untrusted devices.
Link:
On Mon, Nov 16, 2020 at 5:36 PM Will Deacon wrote:
> Linus -- please can you drop this one (patch 3/3) for now, given that it's
> causing problems?
Reverted now, sorry for missing to do this earlier.
Yours,
Linus Walleij
On Fri, Nov 20, 2020 at 05:06:28PM +0800, Yong Wu wrote:
> Currently direct_mapping always uses the smallest pgsize, which is
> normally SZ_4K, for mapping. This is unnecessary. We could gather the
> size and then call iommu_map, which could decide how to map better with
> just the right pgsize.
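The gathering idea above can be sketched in plain C. This is a model only: the region type and helper name are illustrative, not the kernel's iommu API; the point is that merging physically contiguous runs first lets a single map call pick larger page sizes.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative region type; in the kernel these come from reserved
 * regions handed to direct mapping. */
struct region {
	uint64_t start;
	uint64_t end;
};

/* Merge adjacent regions in place so each contiguous run is mapped with
 * one call (which can then use SZ_2M etc.) instead of one 4K map per
 * page. Returns the number of merged runs. */
static size_t gather_regions(struct region *r, size_t n)
{
	size_t out = 0, i;

	for (i = 0; i < n; i++) {
		if (out && r[out - 1].end == r[i].start)
			r[out - 1].end = r[i].end;	/* extend the run */
		else
			r[out++] = r[i];		/* start a new run */
	}
	return out;
}
```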
On Sat, Nov 21, 2020 at 09:56:17PM +0800, Lu Baolu wrote:
> So that the vendor iommu drivers are no longer required to provide the
> def_domain_type callback to always isolate the untrusted devices.
>
> Link:
> https://lore.kernel.org/linux-iommu/243ce89c33fe4b9da4c56ba35aceb...@huawei.com/
> Cc:
Hi Will,
On 2020/11/23 19:47, Will Deacon wrote:
On Mon, Nov 23, 2020 at 07:40:57PM +0800, Lu Baolu wrote:
On 2020/11/23 18:08, Christoph Hellwig wrote:
+	/*
+	 * If both the physical buffer start address and size are
+	 * page aligned, we don't need to use a bounce page.
+
On Mon, Nov 23, 2020 at 07:40:57PM +0800, Lu Baolu wrote:
> On 2020/11/23 18:08, Christoph Hellwig wrote:
> > > + /*
> > > + * If both the physical buffer start address and size are
> > > + * page aligned, we don't need to use a bounce page.
> > > + */
> > > + if (IS_ENABLED(CONFIG_SWIOTLB) &&
Hi Christoph,
On 2020/11/23 18:08, Christoph Hellwig wrote:
+	/*
+	 * If both the physical buffer start address and size are
+	 * page aligned, we don't need to use a bounce page.
+	 */
+	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
+
> +	/*
> +	 * If both the physical buffer start address and size are
> +	 * page aligned, we don't need to use a bounce page.
> +	 */
> +	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
> +	    iova_offset(iovad, phys | org_size)) {
> +		aligned_size =
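The `iova_offset(iovad, phys | org_size)` test in the quoted hunk works because OR-ing the start address with the size yields a value with a nonzero in-granule offset if and only if at least one of the two is unaligned. A standalone sketch of that predicate (helper name is illustrative; `granule` is assumed to be a power of two):

```c
#include <stddef.h>
#include <stdint.h>

/* Returns nonzero when a bounce page is needed: i.e. when either the
 * physical start address or the size is not granule-aligned. phys | size
 * has low bits set iff at least one operand does, so one mask test
 * covers both. (granule must be a power of two.) */
static int needs_bounce(uint64_t phys, size_t size, uint64_t granule)
{
	return ((phys | (uint64_t)size) & (granule - 1)) != 0;
}
```

So a fully page-aligned buffer skips bouncing, while an unaligned start or a partial-page size forces the copy through the bounce page.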