On 27/08/2020 at 15:19, Michael Ellerman wrote:
Christophe Leroy writes:
On 08/26/2020 02:58 PM, Michael Ellerman wrote:
Christophe Leroy writes:
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index daef14a284a3..bbb69832fd46 100644
---
On 28/08/2020 at 07:40, Christophe Leroy wrote:
On 27/08/2020 at 15:19, Michael Ellerman wrote:
Christophe Leroy writes:
On 08/26/2020 02:58 PM, Michael Ellerman wrote:
Christophe Leroy writes:
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index
ppc64 supports huge vmap only with radix translation. Hence use arch helper
to determine the huge vmap support.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c
set_pte_at() should not be used to set a pte entry at locations that
already holds a valid pte entry. Architectures like ppc64 don't do TLB
invalidate in set_pte_at() and hence expect it to be used to set locations
that are not a valid PTE.
Signed-off-by: Aneesh Kumar K.V
---
pmd_clear() should not be used to clear pmd level pte entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 0a6e771ebd13..a188b6e4e37e 100644
---
On Thu, 27/08/2020 at 09.46 +0200, Giuseppe Sacco wrote:
> On Thu, 27/08/2020 at 00.28 +0200, Giuseppe Sacco wrote:
> > Hello Christophe,
> >
> > On Wed, 26/08/2020 at 15.53 +0200, Christophe Leroy wrote:
> > [...]
> > > If there is no warning, then
On 10/08/2020 at 10:52, Pingfan Liu wrote:
This patch prepares for the incoming patch which swaps the order of the
KOBJ_ADD/REMOVE uevents and the dt update.
The dt update should come after lmb operations, and before
__remove_memory()/__add_memory(). Accordingly, grouping all lmb operations
Make sure we call pte accessors with correct lock held.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 34 --
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
This will help in adding proper locks in a later patch
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 52 ---
1 file changed, 29 insertions(+), 23 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
On Thu, 27/08/2020 at 00.28 +0200, Giuseppe Sacco wrote:
> Hello Christophe,
>
> On Wed, 26/08/2020 at 15.53 +0200, Christophe Leroy wrote:
> [...]
> > If there is no warning, then the issue is something else, bad luck.
> >
> > Could you increase the loglevel and
On Mon, Aug 17, 2020 at 08:23:11AM +, David Laight wrote:
> From: Christoph Hellwig
> > Sent: 17 August 2020 08:32
> >
> > Stop providing the possibility to override the address space using
> > set_fs() now that there is no need for that any more. To properly
> > handle the TASK_SIZE_MAX
With the hash page table, the kernel should not use pmd_clear for clearing
huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++
1 file changed, 14 insertions(+)
diff --git
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit
in the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
Architectures like ppc64 use a deposited page table while updating the huge pte
entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
pte_clear_tests operate on an existing pte entry. Make sure that is not a none
pte entry.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
Hi "Christopher",
Thank you for the patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on char-misc/char-misc-testing tip/x86/core v5.9-rc2
next-20200827]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submi
Hi,
On 27/08/2020 at 10:28, Giuseppe Sacco wrote:
On Thu, 27/08/2020 at 09.46 +0200, Giuseppe Sacco wrote:
On Thu, 27/08/2020 at 00.28 +0200, Giuseppe Sacco wrote:
Hello Christophe,
On Wed, 26/08/2020 at 15.53 +0200, Christophe Leroy wrote:
[...]
Saved write support was added to track the write bit of a pte after marking the
pte protnone. This was done so that AUTONUMA can convert a write pte to protnone
and still track the old write bit. When converting it back we set the pte write
bit correctly thereby avoiding a write fault again. Hence
This seems to be missing quite a lot of details w.r.t. allocating
the correct pgtable_t page (huge_pte_alloc()), holding the right
lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA.
ppc64 does have runtime checks within CONFIG_DEBUG_VM for most of these.
Hence disable the test on
On 20-08-25 20:05, Yu Kuai wrote:
> if of_find_device_by_node() succeed, imx_es8328_probe() doesn't have
> a corresponding put_device().
Why do we need the ssi_pdev reference here at all?
Regards,
Marco
powerpc used to set the pte specific flags in set_pte_at(). This is different
from other architectures. To be consistent with other architecture update
pfn_pte to set _PAGE_PTE on ppc64. Also, drop now unused pte_mkpte.
We add a VM_WARN_ON() to catch the usage of calling set_pte_at() without
kernel expects entries to be marked huge before we use
set_pmd_at()/set_pud_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 21 -
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
Hi "Aneesh",
I love your patch! Perhaps something to improve:
[auto build test WARNING on hnaz-linux-mm/master]
[also build test WARNING on powerpc/next v5.9-rc2 next-20200827]
[cannot apply to mmotm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And
On Thu, 27/08/2020 at 12.39 +0200, Christophe Leroy wrote:
> Hi,
>
> On 27/08/2020 at 10:28, Giuseppe Sacco wrote:
[...]
> > Sorry, I made a mistake. The real problem is down, on the same
> > function, when it calls low_sleep_handler(). This is where the problem
> > probably is.
>
Christophe Leroy writes:
> On 08/26/2020 02:58 PM, Michael Ellerman wrote:
>> Christophe Leroy writes:
>>> diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
>>> index daef14a284a3..bbb69832fd46 100644
>>> --- a/arch/powerpc/kernel/vdso.c
>>> +++ b/arch/powerpc/kernel/vdso.c
Currently, code patching a STRICT_KERNEL_RWX exposes the temporary
mappings to other CPUs. These mappings should be kept local to the CPU
doing the patching. Use the pre-initialized temporary mm and patching
address for this purpose. Also add a check after patching to ensure the
patch succeeded.
When live patching a STRICT_RWX kernel, a mapping is installed at a
"patching address" with temporary write permissions. Provide a
LKDTM-only accessor function for this address in preparation for a LKDTM
test which attempts to "hijack" this mapping by writing to it from
another CPU.
When compiled with CONFIG_STRICT_KERNEL_RWX, the kernel must create
temporary mappings when patching itself. These mappings temporarily
override the strict RWX text protections to permit a write. Currently,
powerpc allocates a per-CPU VM area for patching. Patching occurs as
follows:
1.
On Tue, 25 Aug 2020 19:34:24 +1000, Michael Ellerman wrote:
> The recent commit 01eb01877f33 ("powerpc/64s: Fix restore_math
> unnecessarily changing MSR") changed some of the handling of floating
> point/vector restore.
>
> In particular it caused current->thread.fpexc_mode to be copied into
>
On Fri, 21 Aug 2020 07:15:25 + (UTC), Christophe Leroy wrote:
> In is_module_segment(), when VMALLOC_END is over 0xf0000000,
> ALIGN(VMALLOC_END, SZ_256M) has value 0.
>
> In that case, addr >= ALIGN(VMALLOC_END, SZ_256M) is always
> true then is_module_segment() always returns false.
>
>
Hello guys. Do you have further comments on this version?
Thanks,
Pingfan
On Mon, Aug 10, 2020 at 4:53 PM Pingfan Liu wrote:
>
> A bug is observed on pseries by taking the following steps on rhel:
> -1. drmgr -c mem -r -q 5
> -2. echo c > /proc/sysrq-trigger
>
> And then, the failure looks
x86 supports the notion of a temporary mm which restricts access to
temporary PTEs to a single CPU. A temporary mm is useful for situations
where a CPU needs to perform sensitive operations (such as patching a
STRICT_KERNEL_RWX kernel) requiring temporary mappings without exposing
said mappings to
Excerpts from Heiner Kallweit's message of August 26, 2020 4:38 pm:
> On 26.08.2020 08:07, Chris Packham wrote:
>>
>> On 26/08/20 1:48 pm, Chris Packham wrote:
>>>
>>> On 26/08/20 10:22 am, Chris Packham wrote:
On 25/08/20 7:22 pm, Heiner Kallweit wrote:
> I've been staring at
On Wed, 26 Aug 2020 13:59:18 +0530, Pratik Rajesh Sampat wrote:
> Cpuidle stop state implementation has minor optimizations for P10
> where hardware preserves more SPR registers compared to P9.
> The current P9 driver works for P10, although it does a few extra
> save-restores. The P9 driver can provide
On 10/08/2020 at 10:52, Pingfan Liu wrote:
A bug is observed on pseries by taking the following steps on rhel:
-1. drmgr -c mem -r -q 5
-2. echo c > /proc/sysrq-trigger
And then, the failure looks like:
kdump: saving to /sysroot//var/crash/127.0.0.1-2020-01-16-02:06:14/
kdump: saving
Jordan Niethe writes:
> Update the CPU to ISA Version Mapping document to include Power10 and
> ISA v3.1.
>
> Signed-off-by: Jordan Niethe
> ---
> v2: Transactional Memory = No
> ---
> Documentation/powerpc/isa-versions.rst | 4
> 1 file changed, 4 insertions(+)
>
> diff --git
When live patching with STRICT_KERNEL_RWX, the CPU doing the patching
must temporarily remap the page(s) containing the patch site with +W
permissions. While this temporary mapping is in use another CPU could
write to the same mapping and maliciously alter kernel text. Implement a
LKDTM test to
On Fri, 21 Aug 2020 13:36:10 +0530, Kajol Jain wrote:
> Commit 792f73f747b8 ("powerpc/hv-24x7: Add sysfs files inside hv-24x7
> device to show cpumask") added cpumask file as part of hv-24x7 driver
> inside the interface folder. The cpumask file is supposed to be in the top
> folder of the pmu driver
On 08/26/2020 02:58 PM, Michael Ellerman wrote:
Christophe Leroy writes:
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index daef14a284a3..bbb69832fd46 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -718,16 +710,14 @@ static int __init
On Tue, 2 Jun 2020 12:56:12 +1000, Alexey Kardashevskiy wrote:
> The bhrb_filter_map ("The Branch History Rolling Buffer") callback is
> only defined in raw CPUs' power_pmu structs. The "architected" CPUs use
> generic_compat_pmu which does not have this callback, and crashes occur.
>
> This
On Tue, 25 Aug 2020 17:53:09 +1000, Nicholas Piggin wrote:
> Kernel entry sets PPR to HMT_MEDIUM by convention. The scv entry
> path missed this.
Applied to powerpc/fixes.
[1/1] powerpc/64s: scv entry should set PPR
https://git.kernel.org/powerpc/c/e5fe56092e753c50093c60e757561984abff335e
On Sun, 23 Aug 2020 17:31:16 -0700, Randy Dunlap wrote:
> Fix malformed table warning in powerpc/syscall64-abi.rst by making
> two tables and moving the headings.
>
> Documentation/powerpc/syscall64-abi.rst:53: WARNING: Malformed table.
> Text in column margin in table line 2.
>
> ===
On Wed, 26 Aug 2020 02:40:29 -0400, Athira Rajeev wrote:
> IMC trace-mode uses MSR[HV PR] bits to set the cpumode
> for the instruction pointer captured in each sample.
> The bits are fetched from third DW of the trace record.
> Reading third DW from IMC trace record should use be64_to_cpu
> along
On Fri, 21 Aug 2020 20:49:10 +1000, Michael Ellerman wrote:
> The build is currently broken, if COMPILE_TEST=y and PPC_PMAC=n:
>
> linux/drivers/video/fbdev/controlfb.c: In function
> 'control_set_hardware':
> linux/drivers/video/fbdev/controlfb.c:276:2: error: implicit declaration of
>
On Fri, 21 Aug 2020 13:55:55 -0500, Shawn Anastasio wrote:
> Changes in v2:
> - Update prot_sao selftest to skip ISA 3.1
>
> This set re-introduces the PROT_SAO prot flag removed in
> Commit 5c9fa16e8abd ("powerpc/64s: Remove PROT_SAO support").
>
> To address concerns regarding live
Pratik Sampat writes:
> On 26/08/20 2:07 pm, Christophe Leroy wrote:
>> On 26/08/2020 at 10:29, Pratik Rajesh Sampat wrote:
>>> Cpuidle stop state implementation has minor optimizations for P10
>>> where hardware preserves more SPR registers compared to P9.
>>> The current P9 driver works for
This patch series includes fixes for debug_vm_pgtable test code so that
they follow page table updates rules correctly. The first two patches introduce
changes w.r.t ppc64. The patches are included in this series for completeness.
We can
merge them via ppc64 tree if required.
Hugetlb test is
When code patching a STRICT_KERNEL_RWX kernel the page containing the
address to be patched is temporarily mapped with permissive memory
protections. Currently, a per-cpu vmalloc patch area is used for this
purpose. While the patch area is per-cpu, the temporary page mapping is
inserted into the
Hi all,
this series removes the last set_fs() used to force a kernel address
space for the uaccess code in the kernel read/write/splice code, and then
stops implementing the address space overrides entirely for x86 and
powerpc.
The file system part has been posted a few times, and the read/write
We can't run the tests for userspace bitmap parsing if set_fs() doesn't
exist.
Signed-off-by: Christoph Hellwig
Reviewed-by: Kees Cook
---
lib/test_bitmap.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index
Stop providing the possibility to override the address space using
set_fs() now that there is no need for that any more. To properly
handle the TASK_SIZE_MAX checking for 4 vs 5-level page tables on
x86 a new alternative is introduced, which just like the one in
entry_64.S has to use the
> Diffstat:
Actually no diffstat here as David Howells pointed out. Here we go:
arch/Kconfig       |  3
arch/alpha/Kconfig |  1
arch/arc/Kconfig   |  1
arch/arm/Kconfig   |  1
arch/arm64/Kconfig
DBG() is defined as void when DEBUG is not defined,
and DEBUG is explicitly undefined.
It means there is no other way than modifying source code
to get the messages printed.
It was most likely useful in the first days of VDSO, but
today the only 3 DBG() calls don't deserve a special
handling.
default_file_splice_write is the last piece of generic code that uses
set_fs to make the uaccess routines operate on kernel pointers. It
implements a "fallback loop" for splicing from files that do not actually
provide a proper splice_read method. The usual file systems and other
high bandwith
Hello Alexey, thank you for this feedback!
On Sat, 2020-08-22 at 19:33 +1000, Alexey Kardashevskiy wrote:
> > +#define TCE_RPN_BITS 52 /* Bits 0-51 represent
> > RPN on TCE */
>
> Ditch this one and use MAX_PHYSMEM_BITS instead? I am pretty sure this
> is the actual
Once we can't manipulate the address limit, we also can't test what
happens when the manipulation is abused.
Signed-off-by: Christoph Hellwig
---
drivers/misc/lkdtm/bugs.c | 4
drivers/misc/lkdtm/usercopy.c | 4
2 files changed, 8 insertions(+)
diff --git
At least for 64-bit this moves them closer to some of the defines
they are based on, and it prepares for using the TASK_SIZE_MAX
definition from assembly.
Signed-off-by: Christoph Hellwig
Reviewed-by: Kees Cook
---
arch/x86/include/asm/page_32_types.h | 11 +++
Stop providing the possibility to override the address space using
set_fs() now that there is no need for that any more.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/Kconfig | 1 -
arch/powerpc/include/asm/processor.h | 7 ---
arch/powerpc/include/asm/thread_info.h |
If vdso initialisation failed, vdso_ready is not set.
Otherwise, vdso_pages is only 0 when it is a 32 bits task
and CONFIG_VDSO32 is not selected.
As arch_setup_additional_pages() now bails out directly in
that case, we don't need to set vdso_pages to 0.
Signed-off-by: Christophe Leroy
---
Initialise vdso32_kbase at compile time like vdso64_kbase.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index c173c70ca7d2..6390a37dacea 100644
From: Christoph Hellwig
> Sent: 27 August 2020 16:00
>
> Don't allow calling ->read or ->write with set_fs as a preparation for
> killing off set_fs. All the instances that we use kernel_read/write on
> are using the iter ops already.
>
> If a file has both the regular ->read/->write methods
Don't allow calling ->read or ->write with set_fs as a preparation for
killing off set_fs. All the instances that we use kernel_read/write on
are using the iter ops already.
If a file has both the regular ->read/->write methods and the iter
variants those could have different semantics for
Add a CONFIG_SET_FS option that is selected by architectures that
implement set_fs, which is all of them initially. If the option is not
set stubs for routines related to overriding the address space are
provided so that architectures can start to opt out of providing set_fs.
Signed-off-by:
On 27/08/2020 at 11:07, kernel test robot wrote:
Hi Christophe,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on linus/master v5.9-rc2 next-20200827]
[If your patch is applied to the wrong git tree, kindly drop us a note
On Sat, 2020-08-22 at 20:07 +1000, Alexey Kardashevskiy wrote:
>
> On 18/08/2020 09:40, Leonardo Bras wrote:
> > Both iommu_alloc_coherent() and iommu_free_coherent() assume that once
> > size is aligned to PAGE_SIZE it will be aligned to IOMMU_PAGE_SIZE.
>
> The only case when it is not aligned
On Sat, 2020-08-22 at 20:09 +1000, Alexey Kardashevskiy wrote:
> > + goto again;
> > +
>
> A nit: unnecessary new line.
I was following the pattern used above. There is a newline after every
"goto again" in this 'if'.
> Reviewed-by: Alexey Kardashevskiy
Thank you!
For 64-bit the only thing missing was a strategic _AC, and for 32-bit we
need to use __PAGE_OFFSET instead of PAGE_OFFSET in the TASK_SIZE
definition to escape the explicit unsigned long cast. This just works
because __PAGE_OFFSET is defined using _AC itself and thus never needs
the cast anyway.
Provide __get_kernel_nofault and __put_kernel_nofault routines to
implement the maccess routines without messing with set_fs and without
opening up access to user space.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/include/asm/uaccess.h | 16
1 file changed, 16
When CONFIG_VDSO32 is not selected, just don't reference the static
vdso32 variables and functions.
This allows the compiler to optimise them out, and allows to
drop an #ifdef CONFIG_VDSO32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso.c | 14 +++---
1 file changed, 7
To avoid any risk of modification of vital VDSO variables,
declare them __ro_after_init.
vdso32_kbase and vdso64_kbase could be made 'const', but it would
have high impact on all functions using them as the compiler doesn't
expect const property to be discarded.
Signed-off-by: Christophe Leroy
vdso_patches[] table is used only at init time.
Mark it __initdata.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 600df1164a0b..efaaee94f273 100644
On Thu, Aug 27, 2020 at 8:00 AM Christoph Hellwig wrote:
>
> SYM_FUNC_START(__get_user_2)
> add $1,%_ASM_AX
> jc bad_get_user
This no longer makes sense, and
> - mov PER_CPU_VAR(current_task), %_ASM_DX
> - cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
> +
low_sleep_handler() can't restore the context from virtual
stack because the stack can hardly be accessed with MMU OFF.
For now, disable VMAP stack when CONFIG_ADB_PMU is selected.
Reported-by: Giuseppe Sacco
Fixes: cd08f109e262 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
Signed-off-by:
On 27/08/2020 at 16:37, Giuseppe Sacco wrote:
On Thu, 27/08/2020 at 12.39 +0200, Christophe Leroy wrote:
Hi,
On 27/08/2020 at 10:28, Giuseppe Sacco wrote:
[...]
Sorry, I made a mistake. The real problem is down, on the same
function, when it calls low_sleep_handler().
On Sat, 2020-08-22 at 20:34 +1000, Alexey Kardashevskiy wrote:
> > +
> > + /*ignore reserved bit0*/
>
> s/ignore reserved bit0/ ignore reserved bit0 / (add spaces)
Fixed
> > + if (tbl->it_offset == 0)
> > + p1_start = 1;
> > +
> > + /* Check if reserved memory is valid*/
>
> A
On Thu, Aug 27, 2020 at 8:00 AM Christoph Hellwig wrote:
>
> Once we can't manipulate the address limit, we also can't test what
> happens when the manipulation is abused.
Just remove these tests entirely.
Once set_fs() doesn't exist on x86, the tests no longer make any sense
what-so-ever,
On 28/08/2020 02:51, Leonardo Bras wrote:
> On Sat, 2020-08-22 at 20:07 +1000, Alexey Kardashevskiy wrote:
>>
>> On 18/08/2020 09:40, Leonardo Bras wrote:
>>> Both iommu_alloc_coherent() and iommu_free_coherent() assume that once
>>> size is aligned to PAGE_SIZE it will be aligned to
On 28/08/2020 04:34, Leonardo Bras wrote:
> On Sat, 2020-08-22 at 20:34 +1000, Alexey Kardashevskiy wrote:
>>> +
>>> + /*ignore reserved bit0*/
>>
>> s/ignore reserved bit0/ ignore reserved bit0 / (add spaces)
>
> Fixed
>
>>> + if (tbl->it_offset == 0)
>>> + p1_start = 1;
>>> +
On 28/08/2020 08:11, Leonardo Bras wrote:
> On Mon, 2020-08-24 at 13:46 +1000, Alexey Kardashevskiy wrote:
>>> static int find_existing_ddw_windows(void)
>>> {
>>> int len;
>>> @@ -887,18 +905,11 @@ static int find_existing_ddw_windows(void)
>>> if (!direct64)
>>>
Hello,
On Wed, 26 Aug 2020 at 15:39, Michael Ellerman wrote:
> Christophe Leroy writes:
[..]
> > arch_remap() gets replaced by vdso_remap()
> >
> > For arch_unmap(), I'm wondering how/what other architectures do, because
> > powerpc seems to be the only one to erase the vdso context pointer
On 27/08/20 7:12 pm, Nicholas Piggin wrote:
> Excerpts from Heiner Kallweit's message of August 26, 2020 4:38 pm:
>> On 26.08.2020 08:07, Chris Packham wrote:
>>> On 26/08/20 1:48 pm, Chris Packham wrote:
On 26/08/20 10:22 am, Chris Packham wrote:
> On 25/08/20 7:22 pm, Heiner Kallweit
On Mon, 2020-08-24 at 13:46 +1000, Alexey Kardashevskiy wrote:
> > static int find_existing_ddw_windows(void)
> > {
> > int len;
> > @@ -887,18 +905,11 @@ static int find_existing_ddw_windows(void)
> > if (!direct64)
> > continue;
> >
> > - window
On Mon, 2020-08-24 at 10:38 +1000, Alexey Kardashevskiy wrote:
>
> On 18/08/2020 09:40, Leonardo Bras wrote:
> > Creates a helper to allow allocating a new iommu_table without the need
> > to reallocate the iommu_group.
> >
> > This will be helpful for replacing the iommu_table for the new DMA
As of commit bdc48fa11e46, scripts/checkpatch.pl now has a default line
length warning of 100 characters. The powerpc wrapper script was using
a length of 90 instead of 80 in order to make checkpatch less
restrictive, but now it's making it more restrictive instead.
I think it makes sense to
Dmitry Safonov <0x7f454...@gmail.com> writes:
> Hello,
>
> On Wed, 26 Aug 2020 at 15:39, Michael Ellerman wrote:
>> Christophe Leroy writes:
> [..]
>> > arch_remap() gets replaced by vdso_remap()
>> >
>> > For arch_unmap(), I'm wondering how/what other architectures do, because
>> > powerpc
On 28/08/2020 01:32, Leonardo Bras wrote:
> Hello Alexey, thank you for this feedback!
>
> On Sat, 2020-08-22 at 19:33 +1000, Alexey Kardashevskiy wrote:
>>> +#define TCE_RPN_BITS 52 /* Bits 0-51 represent
>>> RPN on TCE */
>>
>> Ditch this one and use