On Wed, Apr 24, 2024 at 1:53 PM Ard Biesheuvel wrote:
>
> Hi Brian,
>
> Thanks for taking a look.
>
> On Wed, 24 Apr 2024 at 19:39, Brian Gerst wrote:
> >
> > On Wed, Apr 24, 2024 at 12:06 PM Ard Biesheuvel wrote:
> > >
> > > From: Ard Biesheuvel
SYM_DATA_START_LOCAL(gdt)
> * 0x08 unused
> * so use them as gdt ptr
This comment is obsolete.
> */
> - .word gdt_end - gdt - 1
> - .quad gdt
> + .word 0
> + .quad 0
> .word 0, 0, 0
This can be condensed down to:
.quad 0, 0
>
>
On Thu, Apr 11, 2024 at 11:26 AM Jason Andryuk wrote:
>
> On 2024-04-10 17:00, Brian Gerst wrote:
> > On Wed, Apr 10, 2024 at 3:50 PM Jason Andryuk wrote:
>
> >> /* 64-bit entry point. */
> >> .code64
> >> 1:
> >> + U
lea rva(pvh_bootparams)(%ebp), %rsi
> + lea rva(startup_64)(%ebp), %rax
RIP-relative here too.
> ANNOTATE_RETPOLINE_SAFE
> jmp *%rax
>
> @@ -143,7 +167,7 @@ SYM_CODE_END(pvh_start_xen)
> .balign 8
> SYM_DATA_START_LOCAL(gdt)
> .word gdt_end - gdt_start
> - .long _pa(gdt_start)
> + .long _pa(gdt_start) /* x86-64 will overwrite if relocated. */
> .word 0
> SYM_DATA_END(gdt)
> SYM_DATA_START_LOCAL(gdt_start)
> --
> 2.44.0
>
>
Brian Gerst
orld?
>
> First, I agree with you because it makes things simple and neat.
>
> However, neither the latest SDM nor the FRED 5.0 spec disallows it, so it
> becomes an OS implementation choice.
>
> >
> > Is there anything (other than perhaps the selftests) which would even
> > notice?
>
> I'm just conservative :)
>
> If a user app can do it with the IDT, we should still allow it when FRED is
> enabled. But if all key stakeholders don't care about whatever gets broken
> due to the change and agree, we can change it.
One case to consider is Windows software running under Wine.
Anti-tampering code has been known to do some non-standard things,
like using ICEBP or using SYSCALL directly instead of through system
DLLs. Keeping the status quo should be preferred, especially if
Microsoft does the same.
Brian Gerst
x, PER_CPU_VAR(0(%esi))
> - movl %ecx, PER_CPU_VAR(4(%esi))
> + movl %ebx, %fs:(%esi)
> + movl %ecx, %fs:4(%esi)
>
> orl $X86_EFLAGS_ZF, (%esp)
>
> @@ -72,8 +84,8 @@ SYM_FUNC_START(this_cpu_cmpxchg8b_emu)
> RET
>
> .Lnot_same2:
> - movl PER_CPU_VAR(0(%esi)), %eax
> - movl PER_CPU_VAR(4(%esi)), %edx
> + movl %fs:(%esi), %eax
> + movl %fs:4(%esi), %edx
>
> andl $(~X86_EFLAGS_ZF), (%esp)
>
> --
> 2.41.0
>
This will break on !SMP builds, where per-cpu variables are just
regular data and not accessed with a segment prefix.
Brian Gerst
initial_gs in common_cpu_up() ]
> [ Oleksandr Natalenko: reported suspend/resume issue fixed in
> x86_acpi_suspend_lowlevel ]
>
> Co-developed-by: Thomas Gleixner
> Signed-off-by: Thomas Gleixner
> Co-developed-by: Brian Gerst
> Signed-off-by: Brian Gerst
> Signed-off-by
+721,10 @@ static void impress_friends(void)
> * Allow the user to impress friends.
> */
> pr_debug("Before bogomips\n");
> - for_each_possible_cpu(cpu)
> - if (cpumask_test_cpu(cpu, cpu_callout_mask))
> + for_each_possible_cpu(cpu) {
> + if (cpumask_test_cpu(cpu, cpu_online_mask))
> bogosum += cpu_data(cpu).loops_per_jiffy;
This should be the same as for_each_online_cpu().
--
Brian Gerst
On Xen PV, the GDT must be read-only because the hypervisor
> * requires it.
> */
> - pgprot_t gdt_prot = boot_cpu_has(X86_FEATURE_XENPV) ?
> + pgprot_t gdt_prot = cpu_feature_enabled(X86_FEATURE_XENPV) ?
> PAGE_KERNEL_RO : PAGE_KERNEL;
> pgprot_t tss_prot = PAGE_KERNEL;
> #endif
This is another case that can be removed because it's for 32-bit.
--
Brian Gerst
_fn)(void *),
> +void *kernel_thread_arg,
> +struct pt_regs *user_regs)
> +{
> + instrumentation_begin();
> +
> + schedule_tail(prev);
> +
> + if (kernel_thread_fn) {
This should have an unlikely(), as kernel threads should be the rare case.
--
Brian Gerst
/* pt_regs->ip = 0 (placeholder) */
> - pushl %eax /* pt_regs->orig_ax */
> + pushl (%eax) /* pt_regs->orig_ax */
Add an %ss: override here too.
> SAVE_ALL pt_regs_ax=$-ENOSYS /* save rest, stack already switched
> */
>
> /*
> --
> 2.19.1.6.gb485710b
>
--
Brian Gerst
vm86regs.pt.bx = v.regs.ebx;
> @@ -370,9 +336,6 @@ static long do_sys_vm86(struct vm86plus_struct __user
> *user_vm86, bool plus)
> update_task_stack(tsk);
> preempt_enable();
>
> - if (vm86->flags & VM86_SCREEN_BITMAP)
> - mark_screen_rdonly(tsk->mm);
> -
> memcpy((struct kernel_vm86_regs *)regs, &vm86regs, sizeof(vm86regs));
> return regs->ax;
> }
You can also remove screen_bitmap from struct vm86 (the kernel
internal structure), and the check_v8086_mode() function.
--
Brian Gerst
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 2ca408d9c749c32288bc28725f9f12ba30299e8f
Gitweb: https://git.kernel.org/tip/2ca408d9c749c32288bc28725f9f12ba30299e8f
Author: Brian Gerst
AuthorDate: Mon, 30 Nov 2020 17:30:59 -05:00
Committer
On Tue, Dec 1, 2020 at 12:34 PM Andy Lutomirski wrote:
>
> On Tue, Dec 1, 2020 at 9:23 AM Andy Lutomirski wrote:
> >
> > On Mon, Nov 30, 2020 at 2:31 PM Brian Gerst wrote:
> > >
> > > Commit 121b32a58a3a converted native x86-32 syscalls which take 64-bit arguments
args for native 32-bit.
Reported-by: Paweł Jasiak
Fixes: 121b32a58a3a ("x86/entry/32: Use IA32-specific wrappers for syscalls
taking 64-bit arguments")
Signed-off-by: Brian Gerst
---
arch/Kconfig | 6 ++
arch/x86/Kconfig | 1 +
fs/notif
est fix
would be to define __KERNEL_PERCPU when either SMP or STACKPROTECTOR
are enabled.
--
Brian Gerst
l that would need to be done
is to remove the zero-base of the percpu segment (which would simplify
alot of other code).
--
Brian Gerst
i_is_native(void)
> {
> - if (!IS_ENABLED(CONFIG_X86_64))
> - return true;
> return efi_is_64bit();
> }
This would then return false for native 32-bit.
--
Brian Gerst
h,
> but it shouldn't be all that costly. Famous last words, of course...
>
> Does anybody see fundamental problems with that?
I think this would be a good idea. I have been working on a patchset
to clean up the conditional syscall handling (sys_ni.c), and conflicts
with the prototypes in syscalls.h have been getting in the way.
Having the prototypes use SYSCALL_DECLAREx(...) would solve that
issue.
--
Brian Gerst
An alternative to the patch I proposed earlier would be to use aliases
with the __x32_ prefix for the common syscalls.
--
Brian Gerst
On Sat, Sep 19, 2020 at 1:14 PM wrote:
>
> On September 19, 2020 9:23:22 AM PDT, Andy Lutomirski wrote:
> >On Fri, Sep 18, 2020 at 10:35 PM Chris
On Wed, Sep 2, 2020 at 12:31 PM wrote:
>
> On Wed, Sep 02, 2020 at 06:24:27PM +0200, Jürgen Groß wrote:
> > On 02.09.20 17:58, Brian Gerst wrote:
> > > On Wed, Sep 2, 2020 at 9:38 AM Peter Zijlstra
> > > wrote:
> > > >
> > > > From: Pe
(IS_ENABLED(CONFIG_64_BIT) &&
> boot_cpu_has(X86_FEATURE_XENPV)))
> + mask |= X86_EFLAGS_AC;
Is the explicit Xen check necessary? IIRC the Xen hypervisor will
filter out the SMAP bit in the cpuid pvop.
--
Brian Gerst
USR1
> [RUN]    Step again
> [OK]     pause(2) restarted correctly
Bisected to commit 0b085e68f407 ("x86/entry: Consolidate 32/64 bit
syscall entry").
It looks like it is because syscall_enter_from_user_mode() is called
before reading the 6th argument from the user stack.
--
Brian Gerst
call interrupt_entry
> UNWIND_HINT_REGS indirect=1
> movq ORIG_RAX(%rdi), %rsi /* get vector from stack */
> - movq $-1, ORIG_RAX(%rdi) /* no syscall to restart */
> call smp_spurious_interrupt /* rdi points to pt_regs */
> jmp ret_from_intr
> SYM_CODE_END(common_spurious)
> @@ -746,7 +745,6 @@ SYM_CODE_START_LOCAL(common_interrupt)
> call interrupt_entry
> UNWIND_HINT_REGS indirect=1
> movq ORIG_RAX(%rdi), %rsi /* get vector from stack */
> - movq $-1, ORIG_RAX(%rdi) /* no syscall to restart */
> call do_IRQ /* rdi points to pt_regs */
> /* 0(%rsp): old RSP */
> ret_from_intr:
> diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
> index 67768e5443..5b6f74e 100644
> --- a/arch/x86/kernel/apic/vector.c
> +++ b/arch/x86/kernel/apic/vector.c
> @@ -934,7 +934,7 @@ static void __irq_complete_move(struct irq_cfg *cfg,
> unsigned vector)
>
> void irq_complete_move(struct irq_cfg *cfg)
> {
> - __irq_complete_move(cfg, ~get_irq_regs()->orig_ax);
> + __irq_complete_move(cfg, get_irq_regs()->orig_ax);
> }
I think you need to also truncate the vector to 8-bits, since it now
gets sign-extended when pushed into the orig_ax slot.
--
Brian Gerst
On Tue, Aug 25, 2020 at 6:44 AM Thomas Gleixner wrote:
>
> On Fri, Aug 21 2020 at 11:35, Brian Gerst wrote:
> > On Fri, Aug 21, 2020 at 10:22 AM Sean Christopherson
> >> > .macro GET_PERCPU_BASE reg:req
> >> > - ALTERNATIVE \
> >> >
CPU returns to userspace.
> > + * Thus the kernel would consume a guest's TSC_AUX if an NMI arrives
> > + * while running KVM's run loop.
> > */
> > .macro GET_PERCPU_BASE reg:req
> > - ALTERNATIVE \
> > - "LOAD_CPU_AND_NODE_SEG_LIMIT \reg", \
> > - "RDPID \reg", \
>
> This was the only user of the RDPID macro, I assume we want to yank that out
> as well?
No. That one should be kept until the minimum binutils version is
raised to one that supports the RDPID opcode.
--
Brian Gerst
\
> - X86_FEATURE_RDPID
> + LOAD_CPU_AND_NODE_SEG_LIMIT \reg
> andq $VDSO_CPUNODE_MASK, \reg
> movq __per_cpu_offset(, \reg, 8), \reg
> .endm
LOAD_CPU_AND_NODE_SEG_LIMIT can be merged into this, as its only
purpose was to work around using CPP macros in an alternative.
--
Brian Gerst
* an oops.
> +*/
> + dr6 &= ~DR_STEP;
> + set_thread_flag(TIF_SINGLESTEP);
> + regs->flags &= ~X86_EFLAGS_TF;
> + }
> +
> handle_debug(regs, dr6, false);
>
> out:
Can this be removed or changed to a BUG()? The warning has been there
since 2016 and nobody has apparently complained about it.
--
Brian Gerst
enter_from_user_mode(regs);
> - instrumentation_begin();
> + unsigned int nr = syscall_32_enter(regs);
>
> - local_irq_enable();
> - do_syscall_32_irqs_on(regs);
> -
> - instrumentation_end();
> - exit_to_user_mode();
> + do_syscall_32_irqs_on(regs, nr);
> + syscall_return_slowpath(regs);
> }
>
> -static bool __do_fast_syscall_32(struct pt_regs *regs)
> +static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
Can __do_fast_syscall_32() be merged back into do_fast_syscall_32()
now that both are marked noinstr?
--
Brian Gerst
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: 4719ffecbb0659faf1fd39f4b8eb2674f0042890
Gitweb: https://git.kernel.org/tip/4719ffecbb0659faf1fd39f4b8eb2674f0042890
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:24 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: c94055fe93c8d00bfa23fa2cb9af080f7fc53aa0
Gitweb: https://git.kernel.org/tip/c94055fe93c8d00bfa23fa2cb9af080f7fc53aa0
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:23 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: ebcd580bed4a357ea894e6878d9099b3919f727f
Gitweb: https://git.kernel.org/tip/ebcd580bed4a357ea894e6878d9099b3919f727f
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:22 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: bbff583b84a130d4d1234d68906c41690575be36
Gitweb: https://git.kernel.org/tip/bbff583b84a130d4d1234d68906c41690575be36
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:20 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: 33e5614a435ff8047d768e6501454ae1cc7f131f
Gitweb: https://git.kernel.org/tip/33e5614a435ff8047d768e6501454ae1cc7f131f
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:18 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: bb631e3002840706362a7d76e3ebb3604cce91a7
Gitweb: https://git.kernel.org/tip/bb631e3002840706362a7d76e3ebb3604cce91a7
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:17 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: c175acc14719e69ecec4dafbb642a7f38c76c064
Gitweb: https://git.kernel.org/tip/c175acc14719e69ecec4dafbb642a7f38c76c064
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:16 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: 73ca542fbabb68deaa90130a8153cab1fa8288fe
Gitweb: https://git.kernel.org/tip/73ca542fbabb68deaa90130a8153cab1fa8288fe
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:21 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: 6865dc3ae93b9acb336ca48bd7b2db3446d89370
Gitweb: https://git.kernel.org/tip/6865dc3ae93b9acb336ca48bd7b2db3446d89370
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:15 -07:00
Committer
The following commit has been merged into the x86/asm branch of tip:
Commit-ID: e4d16defbbde028aeab2026995f0ced4240df6d6
Gitweb: https://git.kernel.org/tip/e4d16defbbde028aeab2026995f0ced4240df6d6
Author: Brian Gerst
AuthorDate: Mon, 20 Jul 2020 13:49:19 -07:00
Committer
Reorganize the tests for SYSEXITS/SYSRETL, cleaning up comments and merging
native and compat code.
Signed-off-by: Brian Gerst
---
arch/x86/entry/common.c | 85 ++--
arch/x86/entry/entry_32.S | 6 +--
arch/x86/entry/entry_64_compat.S | 13 ++---
arch
Signed-off-by: Brian Gerst
---
arch/x86/entry/calling.h | 10 +
arch/x86/entry/common.c | 56 ++-
arch/x86/entry/entry_64.S | 71 ++
arch/x86/include/asm/syscall.h | 2 +-
4 files changed, 60 insertions(+), 79
This series cleans up the tests for using the SYSRET or SYSEXIT
instructions on exit from a syscall, moving some code from assembly to C
and merging native and compat tests.
Brian Gerst (3):
x86-64: Move SYSRET validation code to C
x86-32: Remove SEP test for SYSEXIT
x86: Clean up
SEP must be present in order for do_fast_syscall_32() to be called on native
32-bit. Therefore the check here is redundant.
Signed-off-by: Brian Gerst
---
arch/x86/entry/common.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/entry/common.c b/arch/x86/entry
On Tue, Jul 14, 2020 at 2:40 AM Christoph Hellwig wrote:
>
> On Tue, Jun 16, 2020 at 10:23:13AM -0400, Brian Gerst wrote:
> > Christoph Hellwig uncovered an issue with how we currently handle X32
> > syscalls. Currently, we can only use COMPAT_SYS_DEFINEx() for X32
> > s
RST_SYSTEM_VECTOR + i - 1
> jmp asm_spurious_interrupt
> nop
> /* Ensure that the above is 8 bytes max */
> - . = pos + 8
> -pos=pos+8
> -vector=vector+1
> + . = pos2 + 8 * i
> + i = i + 1
> .endr
> SYM_CODE_END(spurious_entries_start)
> #endif
--
Brian Gerst
On Thu, Jul 9, 2020 at 6:30 AM Peter Zijlstra wrote:
>
> On Sat, May 30, 2020 at 06:11:19PM -0400, Brian Gerst wrote:
> > + if (0) {\
> > + typeof(_var) pto_tmp__; \
> >
= -EBADF;
> struct file *file = fget_raw(fildes);
> @@ -1000,11 +1000,6 @@ int ksys_dup(unsigned int fildes)
> return ret;
> }
>
> -SYSCALL_DEFINE1(dup, unsigned int, fildes)
> -{
> - return ksys_dup(fildes);
> -}
> -
Please split the removal of the now unused ksys_*() functions into a
separate patch.
--
Brian Gerst
One thing that you missed is removing VDSO_NOTE_NONEGSEG_BIT from
vdso32/note.S. With that removed there is no difference from the
64-bit version.
Otherwise this series looks good to me.
--
Brian Gerst
ther HVM or PVH, or they can use a 64 bit kernel.
>
> Remove the 32-bit Xen PV support from the kernel.
If you send a v3, it would be better to split the move of the 64-bit
code into xen-asm.S into a separate patch.
--
Brian Gerst
ER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
This skips over the section that truncates the syscall number to
32-bits. The comments present some doubt that it is actually
necessary, but the Xen path shouldn't differ from native. That code
should be moved after this new label.
--
Brian Gerst
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: c9a1ff316bc9b1d1806a4366d0aef6e18833ba52
Gitweb: https://git.kernel.org/tip/c9a1ff316bc9b1d1806a4366d0aef6e18833ba52
Author: Brian Gerst
AuthorDate: Wed, 17 Jun 2020 18:56:24 -04:00
Committer
The idle tasks created for each secondary CPU already have a random stack
canary generated by fork(). Copy the canary to the percpu variable before
starting the secondary CPU which removes the need to call
boot_init_stack_canary().
Signed-off-by: Brian Gerst
---
V2: Fixed stack protector
On Tue, Jun 16, 2020 at 12:49 PM Andy Lutomirski wrote:
>
> On Tue, Jun 16, 2020 at 7:23 AM Brian Gerst wrote:
> >
> > The ABI prefix for syscalls specifies the argument register mapping, so
> > there is no specific reason to continue using the __x32 prefix for the
&g
x32_rt_sigreturn doesn't need to be a compat syscall because there aren't two
versions.
Signed-off-by: Brian Gerst
---
arch/x86/entry/syscalls/syscall_64.tbl | 2 +-
arch/x86/kernel/signal.c | 2 +-
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
Christoph Hellwig uncovered an issue with how we currently handle X32
syscalls. Currently, we can only use COMPAT_SYS_DEFINEx() for X32
specific syscalls. These changes remove that restriction and allow
native syscalls.
Brian Gerst (2):
x86/x32: Use __x64 prefix for X32 compat syscalls
x86
The ABI prefix for syscalls specifies the argument register mapping, so
there is no specific reason to continue using the __x32 prefix for the
compat syscalls. This change will allow using native syscalls in the X32
specific portion of the syscall table.
Signed-off-by: Brian Gerst
---
arch/x86
On Mon, Jun 15, 2020 at 2:47 PM Arnd Bergmann wrote:
>
> On Mon, Jun 15, 2020 at 4:48 PM Brian Gerst wrote:
> > On Mon, Jun 15, 2020 at 10:13 AM Christoph Hellwig wrote:
> > > On Mon, Jun 15, 2020 at 03:31:35PM +0200, Arnd Bergmann wrote:
>
> > >
> > >
ry] Error 2
> make[1]: *** Waiting for unfinished jobs
> kernel/exit.o: warning: objtool: __x64_sys_exit_group()+0x14: unreachable
> instruction
> make: *** [Makefile:1764: arch/x86] Error 2
> make: *** Waiting for unfinished jobs
If you move those aliases above all the __SYSCALL_* defines it will
work, since that will get the forward declaration too. This would be
the simplest workaround.
--
Brian Gerst
ariants through copy and paste.
> smart compiler to d
>
> > I don't really understand
> > the comment, why can't this just use this?
>
> That errors out with:
>
> ld: arch/x86/entry/syscall_x32.o:(.rodata+0x1040): undefined reference to
> `__x32_sys_execve'
> ld: arch/x86/entry/syscall_x32.o:(.rodata+0x1108): undefined reference to
> `__x32_sys_execveat'
> make: *** [Makefile:1139: vmlinux] Error 1
I think I have a fix for this, by modifying the syscall wrappers to
add an alias for the __x32 variant to the native __x64_sys_foo().
I'll get back to you with a patch.
--
Brian Gerst
off-by: Jiri Slaby
> Fixes: 121b32a58a3a (x86/entry/32: Use IA32-specific wrappers for syscalls
> taking 64-bit arguments)
> Cc: Brian Gerst
> Cc: Thomas Gleixner
> Cc: Dominik Brodowski
> ---
> include/linux/syscalls.h | 2 +-
> 1 file changed, 1 insertion(+), 1
The idle tasks created for each secondary CPU already have a random stack
canary generated by fork(). Copy the canary to the percpu variable before
starting the secondary CPU which removes the need to call
boot_init_stack_canary().
Signed-off-by: Brian Gerst
---
arch/x86/include/asm
On Wed, Jun 3, 2020 at 11:18 AM Joerg Roedel wrote:
>
> On Tue, May 19, 2020 at 09:58:18AM -0400, Brian Gerst wrote:
> > On Tue, Apr 28, 2020 at 11:28 AM Joerg Roedel wrote:
>
> > The proper fix would be to initialize MSR_GS_BASE earlier.
>
> That'll mean to ini
On Mon, Jun 1, 2020 at 4:43 PM Nick Desaulniers wrote:
>
> On Sat, May 30, 2020 at 3:11 PM Brian Gerst wrote:
> >
> > Use __pcpu_size_call_return() to simplify this_cpu_read_stable().
>
> Clever! As in this_cpu_read() in include/linux/percpu-defs.h. Could
> be its
The core percpu macros already have a switch on the data size, so the switch
in the x86 code is redundant and produces more dead code.
Also use appropriate types for the width of the instructions. This avoids
errors when compiling with Clang.
Signed-off-by: Brian Gerst
Reviewed-by: Nick
Also remove now unused __percpu_mov_op.
Signed-off-by: Brian Gerst
---
arch/x86/include/asm/percpu.h | 18 --
1 file changed, 18 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index cf2b9c2a241e..a3c33b79fb86 100644
--- a/arch/x86
Use __pcpu_size_call_return() to simplify this_cpu_read_stable().
Also remove __bad_percpu_size() which is now unused.
Signed-off-by: Brian Gerst
---
arch/x86/include/asm/percpu.h | 41 ++-
1 file changed, 12 insertions(+), 29 deletions(-)
diff --git a/arch/x86
The "e" constraint represents a constant, but the XADD instruction doesn't
accept immediate operands.
Signed-off-by: Brian Gerst
---
arch/x86/include/asm/percpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/inclu
operands, and to cast variables to the width used in
the assembly to make Clang happy.
Changes from v1:
- Add separate patch for XADD constraint fix
- Fixed sparse truncation warning
- Add cleanup of percpu_stable_op()
- Add patch to Remove PER_CPU()
Brian Gerst (10):
x86/percpu: Introduce size
The core percpu macros already have a switch on the data size, so the switch
in the x86 code is redundant and produces more dead code.
Also use appropriate types for the width of the instructions. This avoids
errors when compiling with Clang.
Signed-off-by: Brian Gerst
---
arch/x86/include
In preparation for cleaning up the percpu operations, define macros for
abstraction based on the width of the operation.
Signed-off-by: Brian Gerst
---
arch/x86/include/asm/percpu.h | 30 ++
1 file changed, 30 insertions(+)
diff --git a/arch/x86/include/asm/percpu.h
d, from the other thread [1] in case you missed it -- the plain
> hidden.h fails to build in-tree. We need something like
> KBUILD_CFLAGS += -include $(srctree)/$(src)/hidden.h
> instead.
>
> [1] https://lore.kernel.org/lkml/20200526153104.gc2190...@rani.riverdale.lan/
How about using -fvisibility=hidden instead of including this header?
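For illustration, the flag-based alternative would look something like this (a sketch; file names and the exact flag placement are assumed, not taken from the thread):

```shell
# Instead of force-including a header of visibility annotations, mark all
# symbols hidden with a single compiler flag when building the unit:
gcc -fvisibility=hidden -c misc.c -o misc.o
```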
--
Brian Gerst
.ld in the same directory.
If the compiler is making assumptions based on the function name
"main" wouldn't it be simpler just to rename the function?
--
Brian Gerst
On Wed, May 20, 2020 at 1:26 PM Nick Desaulniers
wrote:
>
> On Mon, May 18, 2020 at 8:38 PM Brian Gerst wrote:
> >
> > On Mon, May 18, 2020 at 5:15 PM Nick Desaulniers
> > wrote:
> > >
> > > On Sun, May 17, 2020 at 8:29 AM Brian Gerst wrote:
> >
otector)
> +CFLAGS_head64.o := $(nostackp)
> +
> # If instrumentation of this dir is enabled, boot hangs during first second.
> # Probably could be more selective here, but note that files related to irqs,
> # boot, dumpstack/stacktrace, etc are either non-interesting
On Mon, May 18, 2020 at 5:15 PM Nick Desaulniers
wrote:
>
> On Sun, May 17, 2020 at 8:29 AM Brian Gerst wrote:
> >
> > The core percpu macros already have a switch on the data size, so the switch
> > in the x86 code is redundant and produces more dead code.
> >
>
On Mon, May 18, 2020 at 6:46 PM Nick Desaulniers
wrote:
>
> On Sun, May 17, 2020 at 8:29 AM Brian Gerst wrote:
> >
> > The core percpu macros already have a switch on the data size, so the switch
> > in the x86 code is redundant and produces more dead code.
> >
>
operands, and to cast variables to the width used in
the assembly to make Clang happy.
Brian Gerst (7):
x86/percpu: Introduce size abstraction macros
x86/percpu: Clean up percpu_to_op()
x86/percpu: Clean up percpu_from_op()
x86/percpu: Clean up percpu_add_op()
x86/percpu: Clean up