M_BITS -
page_shift) where it matters (phyp).
Reviewed-by: Alexey Kardashevskiy
Fixes: bf6e2d562bbc4 ("powerpc/dma: Fallback to dma_ops when persistent memory present")
Signed-off-by: Leonardo Bras
---
arch/powerpc/platforms/pseries/iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
On 08/04/2021 19:04, Michael Ellerman wrote:
Alexey Kardashevskiy writes:
On 08/04/2021 15:37, Michael Ellerman wrote:
Leonardo Bras writes:
According to LoPAR, ibm,query-pe-dma-window output named "IO Page Sizes"
will let the OS know all possible pagesizes that can be used fo
On 08/04/2021 15:37, Michael Ellerman wrote:
Leonardo Bras writes:
According to LoPAR, ibm,query-pe-dma-window output named "IO Page Sizes"
will let the OS know all possible pagesizes that can be used for creating a
new DDW.
Currently Linux will only try using 3 of the 8 available options:
On 24/03/2021 06:32, Jason Gunthorpe wrote:
For NVIDIA GPU Max checked internally and we saw it looks very much
like how Intel GPU works. Only some PCI IDs trigger checking on the
feature the firmware thing is linked to.
And as Alexey noted, the table came up incomplete. But also those
On 23/03/2021 06:09, Leonardo Bras wrote:
According to LoPAR, ibm,query-pe-dma-window output named "IO Page Sizes"
will let the OS know all possible pagesizes that can be used for creating a
new DDW.
Currently Linux will only try using 3 of the 8 available options:
4K, 64K and 16M. According
On 11/03/2021 13:00, Jason Gunthorpe wrote:
On Thu, Mar 11, 2021 at 12:42:56PM +1100, Alexey Kardashevskiy wrote:
btw can the id list have only vendor ids and not have device ids?
The PCI matcher is quite flexible, see the other patch from Max for
the igd
ah cool, do this for NVIDIA
On 11/03/2021 12:34, Jason Gunthorpe wrote:
On Thu, Mar 11, 2021 at 12:20:33PM +1100, Alexey Kardashevskiy wrote:
It is supposed to match exactly the same match table as the pci_driver
above. We *don't* want different behavior from what the standard PCI
driver matcher will do
On 11/03/2021 06:40, Jason Gunthorpe wrote:
On Thu, Mar 11, 2021 at 01:24:47AM +1100, Alexey Kardashevskiy wrote:
On 11/03/2021 00:02, Jason Gunthorpe wrote:
On Wed, Mar 10, 2021 at 02:57:57PM +0200, Max Gurtovoy wrote:
+ .err_handler = _pci_core_err_handlers,
+};
+
+#ifdef
On 11/03/2021 00:02, Jason Gunthorpe wrote:
On Wed, Mar 10, 2021 at 02:57:57PM +0200, Max Gurtovoy wrote:
+ .err_handler = _pci_core_err_handlers,
+};
+
+#ifdef CONFIG_VFIO_PCI_DRIVER_COMPAT
+struct pci_driver *get_nvlink2gpu_vfio_pci_driver(struct pci_dev *pdev)
+{
+ if
On 10/03/2021 23:57, Max Gurtovoy wrote:
On 3/10/2021 8:39 AM, Alexey Kardashevskiy wrote:
On 09/03/2021 19:33, Max Gurtovoy wrote:
The new drivers introduced are nvlink2gpu_vfio_pci.ko and
npu2_vfio_pci.ko.
The first will be responsible for providing special extensions for
NVIDIA GPUs
ot; fmt
+
+#include
#include
#include
#include
#include
+#include
#include
#include
#include
#include "vfio_pci_core.h"
+#include "npu2_vfio_pci.h"
#define CREATE_TRACE_POINTS
#include "npu2_trace.h"
+#define DRIVER_VERSION &quo
On 08/02/2021 23:44, Max Gurtovoy wrote:
On 2/5/2021 2:42 AM, Alexey Kardashevskiy wrote:
On 04/02/2021 23:51, Jason Gunthorpe wrote:
On Thu, Feb 04, 2021 at 12:05:22PM +1100, Alexey Kardashevskiy wrote:
It is system firmware (==bios) which puts stuff in the device tree. The
stuff
On 09/02/2021 05:13, Jason Gunthorpe wrote:
On Fri, Feb 05, 2021 at 11:42:11AM +1100, Alexey Kardashevskiy wrote:
A real nvswitch function?
What do you mean by this exactly? The cpu side of nvlink is "emulated pci
devices", the gpu side is not in pci space at all, the nvidia driv
On 04/02/2021 23:51, Jason Gunthorpe wrote:
On Thu, Feb 04, 2021 at 12:05:22PM +1100, Alexey Kardashevskiy wrote:
It is system firmware (==bios) which puts stuff in the device tree. The
stuff is:
1. emulated pci devices (custom pci bridges), one per nvlink, emulated by
the firmware
On 02/02/2021 04:10, Steven Rostedt wrote:
On Mon, 1 Feb 2021 12:18:34 +1100
Alexey Kardashevskiy wrote:
Just curious, does the following patch fix it for v5?
Yes it does!
Thanks for verifying.
-- Steve
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 7261fa0f5e3c
.
This adds a check before referencing the pointer in tracepoint_ptr_deref.
Fixes: d25e37d89dd2f ("tracepoint: Optimize using static_call()")
Signed-off-by: Alexey Kardashevskiy
---
This is in reply to https://lkml.org/lkml/2021/2/1/868
Feel free to change the commit log. Thanks!
Fixing i
On 31/01/2021 01:42, Steven Rostedt wrote:
On Sat, 30 Jan 2021 09:36:26 -0500
Steven Rostedt wrote:
Do you still have the same crash with v3 (that's the one I'm going to
go with for now.)
https://lore.kernel.org/r/20201118093405.7a6d2...@gandalf.local.home
Just curious, does the
On 28/01/2021 09:07, Steven Rostedt wrote:
From: "Steven Rostedt (VMware)"
The list of tracepoint callbacks is managed by an array that is protected
by RCU. To update this array, a new array is allocated, the updates are
copied over to the new array, and then the list of functions for the
On 18/11/2020 23:46, Steven Rostedt wrote:
On Tue, 17 Nov 2020 20:54:24 -0800
Alexei Starovoitov wrote:
extern int
@@ -310,7 +312,12 @@ static inline struct tracepoint
*tracepoint_ptr_deref(tracepoint_ptr_t *p)
do {\
On 23/01/2021 21:29, Tetsuo Handa wrote:
On 2021/01/23 15:35, Alexey Kardashevskiy wrote:
this behaves quite different but still produces the message (i have
show_workqueue_state() right after the bug message):
[ 85.803991] BUG: MAX_LOCKDEP_KEYS too low!
[ 85.804338] turning off
On 23/01/2021 17:01, Hillf Danton wrote:
On Sat, 23 Jan 2021 09:53:42 +1100 Alexey Kardashevskiy wrote:
On 23/01/2021 02:30, Tetsuo Handa wrote:
On 2021/01/22 22:28, Tetsuo Handa wrote:
On 2021/01/22 21:10, Dmitry Vyukov wrote:
On Fri, Jan 22, 2021 at 1:03 PM Alexey Kardashevskiy wrote
On 23/01/2021 02:30, Tetsuo Handa wrote:
On 2021/01/22 22:28, Tetsuo Handa wrote:
On 2021/01/22 21:10, Dmitry Vyukov wrote:
On Fri, Jan 22, 2021 at 1:03 PM Alexey Kardashevskiy wrote:
On 22/01/2021 21:30, Tetsuo Handa wrote:
On 2021/01/22 18:16, Dmitry Vyukov wrote:
The reproducer
Hi!
Syzkaller found this bug and it has a repro (below). I googled a similar
bug from 2019 which was fixed, so this seems new.
The repro takes about half a minute to produce the message, "grep
lock-classes /proc/lockdep_stats" reports 8177 of 8192, before running
the repro it is 702. It is
On 07/01/2021 18:48, Christoph Hellwig wrote:
On Thu, Jan 07, 2021 at 10:58:39AM +1100, Alexey Kardashevskiy wrote:
And AFAICT the root inode on
bdev superblock can get only to bdev_evict_inode() and bdev_free_inode().
Looking at bdev_evict_inode() the only thing that's used there from
On 06/01/2021 21:41, Jan Kara wrote:
On Wed 06-01-21 20:29:00, Alexey Kardashevskiy wrote:
This is a workaround to fix a null dereference crash:
[cb01f840] cb01f880 (unreliable)
[cb01f880] c0769a3c bdev_evict_inode+0x21c/0x370
[cb01f8c0
This is a workaround to fix a null dereference crash:
[cb01f840] cb01f880 (unreliable)
[cb01f880] c0769a3c bdev_evict_inode+0x21c/0x370
[cb01f8c0] c070bacc evict+0x11c/0x230
[cb01f900] c070c138 iput+0x2a8/0x4a0
[cb01f970]
On 04/12/2020 12:25, Michael Ellerman wrote:
Dmitry Vyukov writes:
On Thu, Dec 3, 2020 at 10:19 AM Dmitry Vyukov wrote:
On Thu, Dec 3, 2020 at 10:10 AM Alexey Kardashevskiy wrote:
Hi!
Syzkaller triggered WARN_ON_ONCE at
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds
tty_port_initialized() to uart_set_ldisc() to prevent the crash.
Found by syzkaller.
Signed-off-by: Alexey Kardashevskiy
---
Changes:
v2:
* changed to tty_port_initialized() as suggested in
https://www.spinics.net/lists/linux-serial/msg39942.html (sorry for the delay)
---
The example of crash on PPC64/pseries
On 01/12/2020 03:34, Pavel Begunkov wrote:
On 30/11/2020 02:00, Alexey Kardashevskiy wrote:
There are a few potential deadlocks reported by lockdep and triggered by
syzkaller (a syscall fuzzer). These are reported as timer interrupts can
execute softirq handlers and if we were executing
.
Signed-off-by: Alexey Kardashevskiy
---
There are 2 reports.
Warning#1:
WARNING: inconsistent lock state
5.10.0-rc5_irqs_a+fstn1 #5 Not tainted
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/14/0 [HC0[0]:SC1[1]:HE0:
On 11/24/20 8:19 PM, Andy Shevchenko wrote:
On Tue, Nov 24, 2020 at 8:20 AM Alexey Kardashevskiy wrote:
There are 10 users of __irq_domain_alloc_irqs() and only one - IOAPIC -
passes realloc==true. There is no obvious reason for handling this
specific case in the generic code.
This splits
by adding the corresponding unmap operation when
the device is removed. There's no pcibios_* hook for the remove case, but
the same effect can be achieved using a bus notifier.
Signed-off-by: Oliver O'Halloran
Reviewed-by: Cédric Le Goater
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/kernel/pci-com
the existing
users; however they seem to do the right thing and call dispose once
per mapping.
Signed-off-by: Alexey Kardashevskiy
---
include/linux/irqdesc.h    | 1 +
include/linux/irqdomain.h  | 2 --
include/linux/irqhandler.h | 1 +
kernel/irq/irqdesc.c       | 3 +++
kernel/irq/irqdomain.c
into the kobject_release hook.
As a bonus, we do not need irq_sysfs_del() as kobj removes itself from
sysfs if it knows that it was added.
This should cause no behavioral change.
Signed-off-by: Alexey Kardashevskiy
---
kernel/irq/irqdesc.c | 42 --
1 file changed
ing(). Most (all?) users do not bother with
disposing though so it is not very likely to break many things.
Signed-off-by: Alexey Kardashevskiy
---
kernel/irq/irqdomain.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
in
__irq_domain_alloc_irqs() can already handle virq==-1 and free
descriptors if it failed allocating hardware interrupts so let's skip
this extra step.
Signed-off-by: Alexey Kardashevskiy
---
kernel/irq/ipi.c | 16 +++-
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git
This moves hierarchical domain's irqs cleanup into the kobject release
hook to make irq_domain_free_irqs() as simple as kobject_put.
Signed-off-by: Alexey Kardashevskiy
---
kernel/irq/irqdomain.c | 43 +-
1 file changed, 22 insertions(+), 21 deletions
pushed out to
https://github.com/aik/linux/commits/irqs
sha1 3955f97c448242f6a
Please comment. Thanks.
Alexey Kardashevskiy (7):
genirq/ipi: Simplify irq_reserve_ipi
genirq/irqdomain: Clean legacy IRQ allocation
genirq/irqdomain: Drop unused realloc parameter from
__irq_domain_alloc_irq
() cleaner.
This should cause no behavioral change.
Signed-off-by: Alexey Kardashevskiy
---
include/linux/irqdomain.h | 3 ++
arch/x86/kernel/apic/io_apic.c | 13 +++--
kernel/irq/irqdomain.c | 89 --
3 files changed, 65 insertions(+), 40 deletions(-)
diff
The two previous patches made @realloc obsolete. This finishes removing it.
Signed-off-by: Alexey Kardashevskiy
---
include/linux/irqdomain.h | 4 +---
arch/x86/kernel/apic/io_apic.c | 2 +-
drivers/gpio/gpiolib.c | 1 -
drivers/irqchip/irq-armada-370-xp.c | 2
On 14/11/2020 05:19, Cédric Le Goater wrote:
On 11/9/20 10:46 AM, Alexey Kardashevskiy wrote:
PCI devices share 4 legacy INTx interrupts from the same PCI host bridge.
Device drivers map/unmap hardware interrupts via irq_create_mapping()/
irq_dispose_mapping(). The problem
On 14/11/2020 05:34, Marc Zyngier wrote:
Hi Alexey,
On 2020-11-09 09:46, Alexey Kardashevskiy wrote:
PCI devices share 4 legacy INTx interrupts from the same PCI host bridge.
Device drivers map/unmap hardware interrupts via irq_create_mapping()/
irq_dispose_mapping(). The problem
Fixed already
https://ozlabs.org/~akpm/mmots/broken-out/panic-dont-dump-stack-twice-on-warn.patch
Sorry for breaking this :(
On 13/11/2020 16:47, Kefeng Wang wrote:
stacktrace will be dumped twice on ARM64 after commit 3f388f28639f
("panic: dump registers on panic_on_warn"), will not
which needs https://lkml.org/lkml/2020/10/27/259
Cc: Cédric Le Goater
Cc: Marc Zyngier
Cc: Michael Ellerman
Cc: Qian Cai
Cc: Rob Herring
Cc: Frederic Barrat
Cc: Michal Suchánek
Cc: Thomas Gleixner
Signed-off-by: Alexey Kardashevskiy
---
This is what it is fixing for powerpc
Hi,
This one seems to be broken in the domain associating part so please
ignore it, I'll post v3 soon. Thanks,
On 29/10/2020 22:01, Alexey Kardashevskiy wrote:
PCI devices share 4 legacy INTx interrupts from the same PCI host bridge.
Device drivers map/unmap hardware interrupts via
still work and
we invoke direct DMA API. The following patch checks the bus limit
on POWERPC to allow or disallow direct mapping.
This adds a CONFIG_ARCH_HAS_DMA_SET_MASK config option to make arch_
hooks no-op by default.
Signed-off-by: Alexey Kardashevskiy
---
kernel/dma/mapping.c | 24
emove incorrect sparse #ifdef".
Please comment. Thanks.
Alexey Kardashevskiy (2):
dma: Allow mixing bypass and mapped DMA operation
powerpc/dma: Fallback to dma_ops when persistent memory present
arch/powerpc/kernel/dma-iommu.c | 70 +-
arch/powerpc/platfor
apping only for RAM and
sets the dev->bus_dma_limit to let the generic code decide whether to
call into the direct DMA or the indirect DMA ops.
This should not change the existing behaviour when no persistent memory is
present, as dev->dma_ops_bypass is expected to be set.
Signed-off-by: Alexey Kardashevskiy
(at least) PPC/pseries
which needs https://lkml.org/lkml/2020/10/27/259
Signed-off-by: Alexey Kardashevskiy
---
What is the easiest way to get irq-hierarchical hardware?
I have a bunch of powerpc boxes (no good) but also a raspberry pi,
a bunch of 32/64bit orange pi's, an "armada" arm box,
On 28/10/2020 03:09, Marc Zyngier wrote:
Hi Alexey,
On 2020-10-27 09:06, Alexey Kardashevskiy wrote:
PCI devices share 4 legacy INTx interrupts from the same PCI host bridge.
Device drivers map/unmap hardware interrupts via irq_create_mapping()/
irq_dispose_mapping(). The problem
4525c8781ec0 Linus Torvalds "scsi: qla2xxx: remove incorrect sparse #ifdef".
Please comment. Thanks.
Alexey Kardashevskiy (2):
dma: Allow mixing bypass and mapped DMA operation
powerpc/dma: Fallback to dma_ops when persistent memory present
arch/powerpc/kernel/dma-iommu.c
still work and
we invoke direct DMA API. The following patch checks the bus limit
on POWERPC to allow or disallow direct mapping.
This adds a CONFIG_ARCH_HAS_DMA_SET_MASK config option to make arch_
hooks no-op by default.
Signed-off-by: Alexey Kardashevskiy
---
Changes:
v4:
* wrapped long lines
apping only for RAM and
sets the dev->bus_dma_limit to let the generic code decide whether to
call into the direct DMA or the indirect DMA ops.
This should not change the existing behaviour when no persistent memory is
present, as dev->dma_ops_bypass is expected to be set.
Signed-off-by: Alexey Kardashevskiy
On 28/10/2020 03:48, Christoph Hellwig wrote:
+static inline bool dma_handle_direct(struct device *dev, dma_addr_t dma_handle)
+{
+ return dma_handle >= dev->archdata.dma_offset;
+}
This won't compile except for powerpc, and directly accesing arch members
in common code is a bad idea.
On 29/10/2020 11:40, Michael Ellerman wrote:
Alexey Kardashevskiy writes:
diff --git a/arch/powerpc/platforms/pseries/iommu.c
b/arch/powerpc/platforms/pseries/iommu.c
index e4198700ed1a..91112e748491 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries
On 29/10/2020 04:21, Christoph Hellwig wrote:
On Wed, Oct 28, 2020 at 05:55:23PM +1100, Alexey Kardashevskiy wrote:
It is passing an address of the end of the mapped area so passing a page
struct means passing page and offset which is an extra parameter and we do
not want to do anything
On 29/10/2020 04:22, Christoph Hellwig wrote:
On Wed, Oct 28, 2020 at 06:00:29PM +1100, Alexey Kardashevskiy wrote:
At the moment we allow bypassing DMA ops only when we can do this for
the entire RAM. However there are configs with mixed type memory
where we could still allow bypassing
ifdef".
Please comment. Thanks.
Alexey Kardashevskiy (2):
dma: Allow mixing bypass and normal IOMMU operation
powerpc/dma: Fallback to dma_ops when persistent memory present
arch/powerpc/kernel/dma-iommu.c | 12 -
arch/powerpc/platforms/pseries/iommu.c | 44 ++---
where bypass
can still work and we invoke direct DMA API; when DMA handle is outside
that limit, we fall back to DMA ops.
This adds a CONFIG_DMA_OPS_BYPASS_BUS_LIMIT config option which is off
by default and will be enabled for PPC_PSERIES in the following patch.
Signed-off-by: Alexey Kardashevskiy
DMA ops.
This should not change the existing behaviour when no persistent memory is
present, as dev->dma_ops_bypass is expected to be set.
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/kernel/dma-iommu.c | 12 +--
arch/powerpc/platforms/pseries/iommu.c | 44 ---
and let
the release callback do the cleanup.
If some driver or platform does its own reference counting, this expects
those parties to call irq_find_mapping() and call irq_dispose_mapping()
for every irq_create_fwspec_mapping()/irq_create_mapping().
Signed-off-by: Alexey Kardashevskiy
---
kernel
by adding the corresponding unmap operation when
the device is removed. There's no pcibios_* hook for the remove case, but
the same effect can be achieved using a bus notifier.
Cc: Cédric Le Goater
Cc: Michael Ellerman
Signed-off-by: Oliver O'Halloran
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/
.
This is based on sha1
4525c8781ec0 Linus Torvalds "scsi: qla2xxx: remove incorrect sparse #ifdef".
Please comment. Thanks.
Alexey Kardashevskiy (1):
irq: Add reference counting to IRQ mappings
Oliver O'Halloran (1):
powerpc/pci: Remove LSI mappings on device teardown
ar
-5.10-rc1' of
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest".
Please comment. Thanks.
Alexey Kardashevskiy (2):
Revert "dma-mapping: move large parts of <linux/dma-direct.h> to
kernel/dma"
powerpc/dma: Fallback to dma_ops when persistent memory present
includ
ur when no persistent memory.
Signed-off-by: Alexey Kardashevskiy
---
Without reverting 19c65c3d30bb5a97170, I could have added
I can repost if this is preferable. Thanks.
---
Changelog:
v2:
* rebased on current upstream with the device::bypass added and DMA
direct code movement reverted
---
arc
patch.
Signed-off-by: Alexey Kardashevskiy
---
include/linux/dma-direct.h | 106 +
kernel/dma/direct.h | 119 -
kernel/dma/direct.c| 2 +-
kernel/dma/mapping.c | 2 +-
4 files changed, 108 insertions
On 30/09/2020 18:55, Christoph Hellwig wrote:
Most of the dma_direct symbols should only be used by direct.c and
mapping.c, so move them to kernel/dma. In fact more of dma-direct.h
should eventually move, but that will require more coordination with
other subsystems.
Because of this
gacy INTx interrupts,
we can not restrict the size of the mapping array to PCI_NUM_INTX. The
number of interrupt mappings is computed from the "interrupt-map"
property and the mapping array is allocated accordingly.
Cc: "Oliver O'Halloran"
Cc: Alexey Kardashevskiy
Signed-off-by:
>>> we can not restrict the size of the mapping array to PCI_NUM_INTX. The
>>> number of interrupt mappings is computed from the "interrupt-map"
>>> property and the mapping array is allocated accordingly.
>>>
>>> Cc: "Oliver O'Halloran"
>
On 17/09/2020 02:12, Paul E. McKenney wrote:
> On Fri, Sep 11, 2020 at 06:52:08AM -0700, Paul E. McKenney wrote:
>> On Fri, Sep 11, 2020 at 03:09:41PM +1000, Alexey Kardashevskiy wrote:
>>> On 11/09/2020 04:53, Paul E. McKenney wrote:
>>>> On Wed, Sep 09, 20
On 11/09/2020 04:53, Paul E. McKenney wrote:
> On Wed, Sep 09, 2020 at 10:31:03PM +1000, Alexey Kardashevskiy wrote:
>>
>>
>> On 09/09/2020 21:50, Paul E. McKenney wrote:
>>> On Wed, Sep 09, 2020 at 07:24:11PM +1000, Alexey Kardashevskiy wrote:
>>>
On 09/09/2020 21:50, Paul E. McKenney wrote:
> On Wed, Sep 09, 2020 at 07:24:11PM +1000, Alexey Kardashevskiy wrote:
>>
>>
>> On 09/09/2020 00:43, Alexey Kardashevskiy wrote:
>>> init_srcu_struct_nodes() is called with is_static==true only internally
>>
On 09/09/2020 00:43, Alexey Kardashevskiy wrote:
> init_srcu_struct_nodes() is called with is_static==true only internally
> and when this happens, the srcu->sda is not initialized in
> init_srcu_struct_fields() and we crash on dereferencing @sdp.
>
> This fixes t
s useful work for is_static=false case anyway.
Found by syzkaller.
Signed-off-by: Alexey Kardashevskiy
---
kernel/rcu/srcutree.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index c100acf332ed..49b54a50bde8 100644
--- a/kernel/rcu/src
On 04/09/2020 16:04, Leonardo Bras wrote:
On Thu, 2020-09-03 at 14:41 +1000, Alexey Kardashevskiy wrote:
I am new to this, so I am trying to understand how a memory page mapped
as DMA, and used for something else could be a problem.
From the device perspective, there is PCI space
On 31/08/2020 16:40, Christoph Hellwig wrote:
On Sun, Aug 30, 2020 at 11:04:21AM +0200, Cédric Le Goater wrote:
Hello,
On 7/8/20 5:24 PM, Christoph Hellwig wrote:
Use the DMA API bypass mechanism for direct window mappings. This uses
common code and speed up the direct mapping case by
On 02/09/2020 16:11, Leonardo Bras wrote:
On Mon, 2020-08-31 at 14:35 +1000, Alexey Kardashevskiy wrote:
On 29/08/2020 04:36, Leonardo Bras wrote:
On Mon, 2020-08-24 at 15:17 +1000, Alexey Kardashevskiy wrote:
On 18/08/2020 09:40, Leonardo Bras wrote:
As of today, if the biggest DDW
On 02/09/2020 08:34, Leonardo Bras wrote:
On Mon, 2020-08-31 at 10:47 +1000, Alexey Kardashevskiy wrote:
Maybe testing with host 64k pagesize and IOMMU 16MB pagesize in qemu
should be enough, is there any chance to get indirect mapping in qemu
like this? (DDW but with smaller DMA window
On 02/09/2020 07:38, Leonardo Bras wrote:
On Mon, 2020-08-31 at 13:48 +1000, Alexey Kardashevskiy wrote:
Well, I created this TCE_RPN_BITS = 52 because the previous mask was a
hardcoded 40-bit mask (0xfful), for hard-coded 12-bit (4k)
pagesize, and on PAPR+/LoPAR also defines TCE
On 29/08/2020 04:36, Leonardo Bras wrote:
> On Mon, 2020-08-24 at 15:17 +1000, Alexey Kardashevskiy wrote:
>>
>> On 18/08/2020 09:40, Leonardo Bras wrote:
>>> As of today, if the biggest DDW that can be created can't map the whole
>>> partition, its creati
On 29/08/2020 01:25, Leonardo Bras wrote:
> On Mon, 2020-08-24 at 15:07 +1000, Alexey Kardashevskiy wrote:
>>
>> On 18/08/2020 09:40, Leonardo Bras wrote:
>>> Code used to create a ddw property that was previously scattered in
>>> enable_ddw() is now gathere
On 31/08/2020 11:41, Oliver O'Halloran wrote:
> On Mon, Aug 31, 2020 at 10:08 AM Alexey Kardashevskiy wrote:
>>
>> On 29/08/2020 05:55, Leonardo Bras wrote:
>>> On Fri, 2020-08-28 at 12:27 +1000, Alexey Kardashevskiy wrote:
>>>>
>>>> On 28/08/202
On 29/08/2020 00:04, Leonardo Bras wrote:
> On Mon, 2020-08-24 at 13:44 +1000, Alexey Kardashevskiy wrote:
>>
>>> On 18/08/2020 09:40, Leonardo Bras wrote:
>>> enable_ddw() currently returns the address of the DMA window, which is
>>> considered invalid
On 29/08/2020 06:41, Leonardo Bras wrote:
> On Fri, 2020-08-28 at 11:40 +1000, Alexey Kardashevskiy wrote:
>>> I think it would be better to keep the code as much generic as possible
>>> regarding page sizes.
>>
>> Then you need to test it. Does 4K guest
On 29/08/2020 05:55, Leonardo Bras wrote:
> On Fri, 2020-08-28 at 12:27 +1000, Alexey Kardashevskiy wrote:
>>
>> On 28/08/2020 01:32, Leonardo Bras wrote:
>>> Hello Alexey, thank you for this feedback!
>>>
>>> On Sat, 2020-08-22 at 19:33 +1000,
On 28/08/2020 01:32, Leonardo Bras wrote:
> Hello Alexey, thank you for this feedback!
>
> On Sat, 2020-08-22 at 19:33 +1000, Alexey Kardashevskiy wrote:
>>> +#define TCE_RPN_BITS 52 /* Bits 0-51 represent
>>> RPN on TCE */
&
On 28/08/2020 08:11, Leonardo Bras wrote:
> On Mon, 2020-08-24 at 13:46 +1000, Alexey Kardashevskiy wrote:
>>> static int find_existing_ddw_windows(void)
>>> {
>>> int len;
>>> @@ -887,18 +905,11 @@ static int find_existing_ddw_wind
On 28/08/2020 04:34, Leonardo Bras wrote:
> On Sat, 2020-08-22 at 20:34 +1000, Alexey Kardashevskiy wrote:
>>> +
>>> + /*ignore reserved bit0*/
>>
>> s/ignore reserved bit0/ ignore reserved bit0 / (add spaces)
>
> Fixed
>
>>> + if
On 28/08/2020 02:51, Leonardo Bras wrote:
> On Sat, 2020-08-22 at 20:07 +1000, Alexey Kardashevskiy wrote:
>>
>> On 18/08/2020 09:40, Leonardo Bras wrote:
>>> Both iommu_alloc_coherent() and iommu_free_coherent() assume that once
>>> size is aligne
On 18/08/2020 09:40, Leonardo Bras wrote:
> As of today, if the biggest DDW that can be created can't map the whole
> partition, its creation is skipped and the default DMA window
> "ibm,dma-window" is used instead.
>
> DDW is 16x bigger than the default DMA window,
16x only under very
On 18/08/2020 09:40, Leonardo Bras wrote:
> Code used to create a ddw property that was previously scattered in
> enable_ddw() is now gathered in ddw_property_create(), which deals with
> allocation and filling the property, letting it ready for
> of_property_add(), which now occurs in
On 18/08/2020 09:40, Leonardo Bras wrote:
> There are two functions adding DDW to the direct_window_list in a
> similar way, so create a ddw_list_add() to avoid duplicity and
> simplify those functions.
>
> Also, on enable_ddw(), add list_del() on out_free_window to allow
> removing the window
On 18/08/2020 09:40, Leonardo Bras wrote:
> enable_ddw() currently returns the address of the DMA window, which is
> considered invalid if it has the value 0x00.
>
> Also, it only considers valid an address returned from find_existing_ddw
> if it's not 0x00.
>
> Changing this behavior makes
roup), GFP_KERNEL, node);
I'd prefer you did not make unrelated changes (sizeof(struct
iommu_table_group) -> sizeof(*table_group)) so the diff stays shorter
and easier to follow. You changed sizeof(struct iommu_table_group) but
not sizeof(struct iommu_table) and this c
On 18/08/2020 09:40, Leonardo Bras wrote:
> Having a function to check if the iommu table has any allocation helps
> deciding if a tbl can be reset for using a new DMA window.
>
> It should be enough to replace all instances of !bitmap_empty(tbl...).
>
> iommu_table_in_use() skips reserved
lock);
> + pool = &tbl->large_pool;
> + spin_lock(&pool->lock);
> + pool->hint = pool->start;
> + pass++;
> + goto again;
> +
A nit: unnecessary new line.
Reviewed-by: Alexey Kardashevskiy
>
On 18/08/2020 09:40, Leonardo Bras wrote:
> Both iommu_alloc_coherent() and iommu_free_coherent() assume that once
> size is aligned to PAGE_SIZE it will be aligned to IOMMU_PAGE_SIZE.
The only case when it is not aligned is when IOMMU_PAGE_SIZE > PAGE_SIZE
which is unlikely but not
On 18/08/2020 09:40, Leonardo Bras wrote:
> Some functions assume IOMMU page size can only be 4K (pageshift == 12).
> Update them to accept any page size passed, so we can use 64K pages.
>
> In the process, some defines like TCE_SHIFT were made obsolete, and then
> removed. TCE_RPN_MASK was
On 19/08/2020 09:54, Nicholas Piggin wrote:
> Excerpts from pet...@infradead.org's message of August 19, 2020 1:41 am:
>> On Tue, Aug 18, 2020 at 05:22:33PM +1000, Nicholas Piggin wrote:
>>> Excerpts from pet...@infradead.org's message of August 12, 2020 8:35 pm:
On Wed, Aug 12, 2020 at
> the default DMA window for a device, if it has been deleted.
>
> It does so by resetting the TCE table allocation for the PE to its
> boot time value, available in "ibm,dma-window" device tree node.
>
> Signed-off-by: Leonardo Bras
> Tested-by: David Dai
R