On 6/18/19 5:07 AM, Tero Kristo wrote:
> On 07/06/2019 22:35, Andrew F. Davis wrote:
>> This patch adds a driver for the Page-based Address Translator (PAT)
>> present on various TI SoCs. A PAT device performs address translation
>> using tables stored in an internal SRAM. Each PAT supports a set
By the time we call zone_sizes_init(), arm64_dma_phys_limit already
contains the result of max_zone_dma_phys(). Use the variable instead
of calling the function directly to save some precious CPU time.
Signed-off-by: Nicolas Saenz Julienne
---
arch/arm64/mm/init.c | 2 +-
1 file changed, 1
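The idea behind the change can be sketched in miniature (simplified, hypothetical names; not the actual kernel code): compute the limit once during early init and cache it, then have zone_sizes_init() reuse the cached value instead of re-deriving it.

```c
#include <stdint.h>

typedef uint64_t phys_addr_t;

/* Hypothetical stand-in for the memory base discovered at boot. */
static phys_addr_t memstart = 0x80000000ULL;

/* Hypothetical stand-in for max_zone_dma_phys(): the highest address
 * reachable through a 32-bit DMA mask (simplified to base + 4 GiB). */
static phys_addr_t max_zone_dma_phys(void)
{
    return memstart + (1ULL << 32);
}

/* Computed once at early init and cached, as the patch does with
 * arm64_dma_phys_limit. */
static phys_addr_t dma_phys_limit;

static void early_init(void)
{
    dma_phys_limit = max_zone_dma_phys();
}

/* Later callers read the cached value rather than recomputing it. */
static phys_addr_t zone_sizes_limit(void)
{
    return dma_phys_limit;
}
```

The saving is tiny per call, but it also keeps a single authoritative value around for every later consumer of the limit.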
With the introduction of ZONE_DMA in arm64, devices are no longer
forced to support 32-bit DMA masks. We have to inform dma-direct of
this limitation whenever it happens.
Signed-off-by: Nicolas Saenz Julienne
---
arch/arm64/mm/init.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
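What dma-direct needs to learn from the arch can be sketched as follows (a minimal user-space model with hypothetical names, not the kernel's actual dma-direct code): an address is only usable if it falls below the most restrictive of the device mask and the SoC-wide DMA limit.

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;

/* Hypothetical SoC-wide limit reported by the arch: highest physical
 * address every peripheral can reach; 0 means "no known restriction". */
static u64 zone_dma_limit;

/* Sketch of the capability check: clamp the device's mask by the
 * arch-reported limit instead of assuming 32 bits is always safe. */
static bool dma_addr_ok(u64 addr, u64 size, u64 dev_mask)
{
    u64 limit = dev_mask;

    if (zone_dma_limit && zone_dma_limit - 1 < limit)
        limit = zone_dma_limit - 1;

    return addr + size - 1 <= limit;
}
```

With a 1 GiB limit (the Raspberry Pi 4 case), a buffer above 0x40000000 fails the check even for a device advertising a full 32-bit mask.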
Let the name indicate that they are used to calculate ZONE_DMA32's size
as opposed to ZONE_DMA.
Signed-off-by: Nicolas Saenz Julienne
---
arch/arm64/mm/init.c | 30 +++---
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/mm/init.c
Some SoCs might have multiple interconnects, each with their own DMA
addressing limitations. This function parses the 'dma-ranges' property
on each of them and tries to guess the maximum SoC-wide DMA-addressable
memory size.
This is especially useful for arch code in order to properly set up CMA
and memory
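The guessing logic above can be sketched like this (a simplified model with hypothetical types; the real code walks the devicetree and parses 'dma-ranges' cells): take the most restrictive addressable size across all buses, with 0 meaning no restriction was found.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t u64;

/* Hypothetical flattened view of one interconnect's 'dma-ranges':
 * just the amount of CPU memory it can address starting from 0. */
struct bus_dma_range {
    u64 addressable_size;
};

/* Walk every bus and keep the smallest (most restrictive) addressable
 * size. Returning 0 means no 'dma-ranges' was found anywhere, i.e. no
 * known SoC-wide restriction. */
static u64 max_dma_addressable(const struct bus_dma_range *r, size_t n)
{
    u64 min = 0;

    for (size_t i = 0; i < n; i++)
        if (min == 0 || r[i].addressable_size < min)
            min = r[i].addressable_size;

    return min;
}
```

Taking the minimum is deliberately conservative: memory that every interconnect can reach is safe for CMA and default DMA allocations, even if some buses could address more.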
Some architectures, notably arm64, are interested in tweaking this
depending on their runtime dma addressing limitations.
Signed-off-by: Nicolas Saenz Julienne
---
arch/powerpc/include/asm/page.h | 9 -
arch/powerpc/mm/mem.c | 14 --
arch/s390/include/asm/page.h
arm64 uses both ZONE_DMA and ZONE_DMA32 for the same reasons x86_64
does: peripherals with different DMA addressing limitations. This
updates the comments on both DMA zones to document this usage.
Signed-off-by: Nicolas Saenz Julienne
---
include/linux/mmzone.h | 21 +++--
1 file
So far all arm64 devices have supported 32-bit DMA masks for their
peripherals. This is no longer true for the Raspberry Pi 4: most of
its peripherals can only address the first GB of memory out of a total
of up to 4 GB.
This goes against ZONE_DMA32's original intent, and breaks other
subsystems
Hi all,
this series attempts to address some issues we found while bringing up
the new Raspberry Pi 4 in arm64, and it's intended to serve as a
follow-up to this discussion:
https://lkml.org/lkml/2019/7/17/476
The new Raspberry Pi 4 has up to 4GB of memory but most peripherals can
only address the
Hi all,
I'm currently looking at an issue with an NVMe device, which isn't
working properly under some specific conditions.
The issue comes down to my platform having DMA addressing restrictions,
with only 3 of the total 4GiB of RAM being device addressable, which
means a bunch of DMA mappings
Some devices might have multiple interconnects with different DMA
addressing limitations. This function provides the highest physical
address accessible by all peripherals on the SoC. If no such limitation
exists, it returns 0.
Signed-off-by: Nicolas Saenz Julienne
---
arch/arm64/mm/init.c
Hi,
On 7/30/19 7:22 PM, Robin Murphy wrote:
On 30/07/2019 05:28, Lu Baolu wrote:
Hi,
On 7/29/19 6:05 PM, Vlad Buslov wrote:
On Sat 27 Jul 2019 at 05:15, Lu Baolu wrote:
Hi Vlad,
On 7/27/19 12:30 AM, Vlad Buslov wrote:
Hi Lu Baolu,
Our mlx5 driver fails to recreate VFs when cmdline
On Wed 31 Jul 2019 at 10:29, Lu Baolu wrote:
> Hi,
>
> On 7/30/19 7:22 PM, Robin Murphy wrote:
>> On 30/07/2019 05:28, Lu Baolu wrote:
>>> Hi,
>>>
>>> On 7/29/19 6:05 PM, Vlad Buslov wrote:
On Sat 27 Jul 2019 at 05:15, Lu Baolu wrote:
> Hi Vlad,
>
> On 7/27/19 12:30 AM, Vlad
When removing a device from an iommu group, the domain should
be detached from the device. Otherwise, the stale domain info
will still be cached by the driver and the driver will refuse
to attach any domain to the device again.
Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Fixes: b7297783c2bb6
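The failure mode described above can be modelled in a few lines (a hypothetical miniature of the driver's bookkeeping, not the actual VT-d code): the device caches a pointer to its attached domain, and attach refuses to proceed while stale info is present, so removal must clear it.

```c
#include <stddef.h>

/* Hypothetical miniature of the driver's per-device state. */
struct dmar_domain {
    int id;
};

struct device_info {
    struct dmar_domain *domain; /* cached attached domain */
};

/* Attach refuses while stale domain info is still cached. */
static int attach_domain(struct device_info *info, struct dmar_domain *d)
{
    if (info->domain)
        return -1; /* already (or still) attached */
    info->domain = d;
    return 0;
}

/* The fix: when the device leaves the iommu group, detach so the
 * cached pointer is cleared and a later attach can succeed. */
static void remove_device(struct device_info *info)
{
    info->domain = NULL; /* detach in the real driver */
}
```

Without the detach in remove_device(), the second attach below would fail forever, which matches the "refuse to attach any domain again" symptom.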
Hi,
On 7/31/19 7:19 PM, Vlad Buslov wrote:
On Wed 31 Jul 2019 at 10:29, Lu Baolu wrote:
Hi,
On 7/30/19 7:22 PM, Robin Murphy wrote:
On 30/07/2019 05:28, Lu Baolu wrote:
Hi,
On 7/29/19 6:05 PM, Vlad Buslov wrote:
On Sat 27 Jul 2019 at 05:15, Lu Baolu wrote:
Hi Vlad,
On 7/27/19 12:30
Backport commits from master that fix boot failure on some Intel
machines.
I have only boot tested this in a VM. Functional testing for v4.14 is
out of my scope as patches differ only on a trivial conflict from v4.19,
where I discovered/debugged the issue. While testing v4.14 stable on
affected
[ Upstream commit effa467870c7612012885df4e246bdb8ffd8e44c ]
The Intel VT-d driver was reworked to use the common deferred flushing
implementation. Previously there was one global per-CPU flush queue;
afterwards, one per domain.
Before deferring a flush, the queue should be allocated and initialized.
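The guard this fix adds can be sketched in a few lines (a hypothetical miniature, not the actual iova code): with per-domain queues, a domain may not have its flush queue allocated yet when an unmap happens, so queue_iova() must bail out and let the caller flush directly.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniature of the per-domain deferred-flush state. */
struct flush_queue {
    int pending;
};

struct domain {
    struct flush_queue *fq; /* NULL until allocated and initialized */
};

/* Only defer if the domain actually has a queue; otherwise report
 * failure so the caller falls back to an immediate flush instead of
 * dereferencing a NULL queue at boot. */
static bool queue_iova(struct domain *dom)
{
    if (!dom->fq)
        return false;
    dom->fq->pending++;
    return true;
}
```

The boot failures came from exactly the NULL-queue window: flushes deferred before the per-domain queue existed.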
Backport commits from master that fix boot failure on some Intel
machines.
Cc: David Woodhouse
Cc: Joerg Roedel
Cc: Joerg Roedel
Cc: Lu Baolu
Dmitry Safonov (1):
iommu/vt-d: Don't queue_iova() if there is no flush queue
Joerg Roedel (1):
iommu/iova: Fix compilation error with
On Wed, Jul 31, 2019 at 05:22:18PM +0100, Dmitry Safonov wrote:
> Backport commits from master that fix boot failure on some intel
> machines.
>
> Cc: David Woodhouse
> Cc: Joerg Roedel
> Cc: Joerg Roedel
> Cc: Lu Baolu
Thanks for the backports, 4.19.y and 4.14.y patches now queued up.
greg
On Wed, Jul 31, 2019 at 05:47:48PM +0200, Nicolas Saenz Julienne wrote:
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1c4ffabbe1cb..f5279ef85756 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -50,6 +50,13 @@
> s64 memstart_addr __ro_after_init = -1;
>
This is a note to let you know that I've just added the patch titled
iommu/vt-d: Don't queue_iova() if there is no flush queue
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
This is a note to let you know that I've just added the patch titled
iommu/vt-d: Don't queue_iova() if there is no flush queue
to the 4.19-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is: