On Fri, Oct 2, 2020 at 11:28 PM Marco Elver wrote:
> On Fri, 2 Oct 2020 at 21:32, Jann Horn wrote:
> > > That's another check; we don't want to make this more expensive.
> >
> > Ah, right, I missed that this is the one piece of KFENCE that is
> > actually really
On Fri, Oct 2, 2020 at 7:20 PM Marco Elver wrote:
> On Fri, Oct 02, 2020 at 08:33AM +0200, Jann Horn wrote:
> > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > > This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> > > low-overhead samplin
On Fri, Oct 2, 2020 at 4:23 PM Dmitry Vyukov wrote:
> On Fri, Oct 2, 2020 at 9:54 AM Jann Horn wrote:
> > On Fri, Oct 2, 2020 at 8:33 AM Jann Horn wrote:
> > > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > > > This adds the Kernel Electric-Fence (
On Fri, Oct 2, 2020 at 11:18 AM Michel Lespinasse wrote:
> On Thu, Oct 1, 2020 at 6:25 PM Jann Horn wrote:
> > Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
> > of the old mm (in dup_mmap() and in UML's activate_mm()).
> > A following patch will
On Fri, Oct 2, 2020 at 4:19 PM Marco Elver wrote:
>
> On Fri, 2 Oct 2020 at 08:48, Jann Horn wrote:
> >
> > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > > Add architecture specific implementation details for KFENCE and enable
> > > KFENCE for t
On Fri, Oct 2, 2020 at 8:33 AM Jann Horn wrote:
> On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> > low-overhead sampling-based memory safety error detector of heap
> > use-after-free,
On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> Inserts KFENCE hooks into the SLUB allocator.
[...]
> diff --git a/mm/slub.c b/mm/slub.c
[...]
> @@ -3290,8 +3314,14 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t
> flags, size_t size,
> c = this_cpu_ptr(s->cpu_slab);
>
>
On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the arm64 architecture. In particular, this implements the
> required interface in <asm/kfence.h>. Currently, the arm64 version does
> not yet use a statically allocated
On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> low-overhead sampling-based memory safety error detector of heap
> use-after-free, invalid-free, and out-of-bounds access errors.
>
> KFENCE is designed to be enabled in
On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the x86 architecture. In particular, this implements the
> required interface in <asm/kfence.h> for setting up the pool and
> providing helper functions for protecting and
On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the x86 architecture. In particular, this implements the
> required interface in <asm/kfence.h> for setting up the pool and
> providing helper functions for protecting and
rn code to trap to userspace")
Reviewed-by: Tycho Andersen
Signed-off-by: Jann Horn
---
NOTE: After Tycho gave his Reviewed-by, I've had to adjust the errno
to -EBUSY (my original patch broke UAPI, good thing we have selftests),
remove the unused "cur" from init_listener(), and re
p code.
Suggested-by: Jason Gunthorpe
Suggested-by: Michel Lespinasse
Signed-off-by: Jann Horn
---
fs/exec.c | 68 -
include/linux/binfmts.h | 2 +-
2 files changed, 35 insertions(+), 35 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 229dbc
in exec_mmap() are temporary; the following patch
will move the locking out of exec_mmap().
Signed-off-by: Jann Horn
---
arch/um/include/asm/mmu_context.h | 3 +--
fs/exec.c | 4
include/linux/mmap_lock.h | 23 +--
kernel/fork.c
.kernel.org/r/cag48ez03yjg9ju_6tgimcavjutyre_o4leq7901b5zocnna...@mail.gmail.com
Jann Horn (2):
mmap locking API: Order lock of nascent mm outside lock of live mm
exec: Broadly lock nascent mm until setup_arg_pages()
arch/um/include/asm/mmu_context.h | 3 +-
fs/exec.c
On Fri, Oct 2, 2020 at 1:41 AM Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 10:16:35PM +0200, Jann Horn wrote:
> > > A subclass isn't right, it has to be a _nested annotation.
> > >
> > > nested locking is a pretty good reason to not be able to do this, this
&
On Thu, Oct 1, 2020 at 1:28 PM YiFei Zhu wrote:
> On Wed, Sep 30, 2020 at 5:24 PM Jann Horn wrote:
> > If you did the architecture enablement for X86 later in the series,
> > you could move this part over into that patch, that'd be cleaner.
>
> As in, patch 1: bitmap
On Thu, Oct 1, 2020 at 9:15 PM Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 01:51:33AM +0200, Jann Horn wrote:
> > On Thu, Oct 1, 2020 at 1:26 AM Jason Gunthorpe wrote:
> > > On Wed, Sep 30, 2020 at 10:14:57PM +0200, Jann Horn wrote:
> > > > On Wed, Sep 30, 20
On Thu, Oct 1, 2020 at 6:58 PM Tycho Andersen wrote:
> On Thu, Oct 01, 2020 at 05:47:54PM +0200, Jann Horn via Containers wrote:
> > On Thu, Oct 1, 2020 at 2:54 PM Christian Brauner
> > wrote:
> > > On Wed, Sep 30, 2020 at 05:53:46PM +0200, Jann Horn via Containers wrote
On Thu, Oct 1, 2020 at 2:06 PM YiFei Zhu wrote:
> On Wed, Sep 30, 2020 at 5:01 PM Jann Horn wrote:
> > Hmm, this won't work, because the task could be exiting, and seccomp
> > filters are detached in release_task() (using
> > seccomp_filter_release()). And at the moment, s
On Thu, Oct 1, 2020 at 2:54 PM Christian Brauner
wrote:
> On Wed, Sep 30, 2020 at 05:53:46PM +0200, Jann Horn via Containers wrote:
> > On Wed, Sep 30, 2020 at 1:07 PM Michael Kerrisk (man-pages)
> > wrote:
> > > NOTES
> > >The file descriptor ret
On Thu, Oct 1, 2020 at 3:52 AM Jann Horn wrote:
> On Thu, Oct 1, 2020 at 1:25 AM Tycho Andersen wrote:
> > On Thu, Oct 01, 2020 at 01:11:33AM +0200, Jann Horn wrote:
> > > On Thu, Oct 1, 2020 at 1:03 AM Tycho Andersen wrote:
> > > > On Wed, Sep 30, 2020 at 10:34:51
On Thu, Oct 1, 2020 at 1:25 AM Tycho Andersen wrote:
> On Thu, Oct 01, 2020 at 01:11:33AM +0200, Jann Horn wrote:
> > On Thu, Oct 1, 2020 at 1:03 AM Tycho Andersen wrote:
> > > On Wed, Sep 30, 2020 at 10:34:51PM +0200, Michael Kerrisk (man-pages)
> > > wrote:
> &
On Thu, Oct 1, 2020 at 1:26 AM Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 10:14:57PM +0200, Jann Horn wrote:
> > On Wed, Sep 30, 2020 at 2:50 PM Jann Horn wrote:
> > > On Wed, Sep 30, 2020 at 2:30 PM Jason Gunthorpe wrote:
> > > > On Tue, Sep 29, 2020 at 06:
On Thu, Oct 1, 2020 at 12:53 AM Kees Cook wrote:
>
> On Wed, Sep 30, 2020 at 11:33:15PM +0200, Jann Horn wrote:
> > On Wed, Sep 30, 2020 at 11:21 PM Kees Cook wrote:
> > > On Wed, Sep 30, 2020 at 10:19:12AM -0500, YiFei Zhu wrote:
> > > > From: Kees Cook
On Thu, Oct 1, 2020 at 1:03 AM Tycho Andersen wrote:
> On Wed, Sep 30, 2020 at 10:34:51PM +0200, Michael Kerrisk (man-pages) wrote:
> > On 9/30/20 5:03 PM, Tycho Andersen wrote:
> > > On Wed, Sep 30, 2020 at 01:07:38PM +0200, Michael Kerrisk (man-pages)
> > > wrote:
> > >>
[adding x86 folks to enhance bikeshedding]
On Thu, Oct 1, 2020 at 12:59 AM Kees Cook wrote:
> On Wed, Sep 30, 2020 at 10:19:16AM -0500, YiFei Zhu wrote:
> > From: YiFei Zhu
> >
> > Currently the kernel does not provide an infrastructure to translate
> > architecture numbers to a human-readable
On Wed, Sep 30, 2020 at 11:30 PM Gustavo A. R. Silva
wrote:
> On Wed, Sep 30, 2020 at 11:10:43PM +0200, Jann Horn wrote:
> > On Wed, Sep 30, 2020 at 11:02 PM Gustavo A. R. Silva
> > wrote:
> > > There is a regular need in the kernel to provide a way to declare having
&
On Wed, Sep 30, 2020 at 5:20 PM YiFei Zhu wrote:
> SECCOMP_CACHE_NR_ONLY will only operate on syscalls that do not
> access any syscall arguments or instruction pointer. To facilitate
> this we need a static analyser to know whether a filter will
> return allow regardless of syscall arguments for
5 FILTER
> x86_64 136 FILTER
> x86_64 137 ALLOW
> x86_64 138 ALLOW
> x86_64 139 FILTER
> x86_64 140 ALLOW
> x86_64 141 ALLOW
> [...]
Oooh, neat! :) Thanks!
> Suggested-by: Jann Horn
> Link:
> https://lore.kernel.org/lkml/CAG48ez3Ofqp4crXGksLmZY6=fgrf_twyucg7pbkaetv
On Wed, Sep 30, 2020 at 11:21 PM Kees Cook wrote:
> On Wed, Sep 30, 2020 at 10:19:12AM -0500, YiFei Zhu wrote:
> > From: Kees Cook
> >
> > Provide seccomp internals with the details to calculate which syscall
> > table the running kernel is expecting to deal with. This allows for
> > efficient
On Wed, Sep 30, 2020 at 11:02 PM Gustavo A. R. Silva
wrote:
> There is a regular need in the kernel to provide a way to declare having
> a dynamically sized set of trailing elements in a structure. Kernel code
> should always use “flexible array members”[1] for these cases. The older
> style of
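The quoted changelog argues for C99 flexible array members over the older zero-length or one-element trailing-array idioms. A minimal userspace sketch of the allocation pattern (struct and function names here are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* C99 flexible array member: the trailing array contributes no size of
 * its own, so the allocation is sizeof(header) + n * sizeof(element). */
struct entry_list {
	size_t count;
	int values[];	/* replaces the older "int values[0];" style */
};

static struct entry_list *entry_list_alloc(size_t n)
{
	struct entry_list *list;

	list = malloc(sizeof(*list) + n * sizeof(list->values[0]));
	if (!list)
		return NULL;
	list->count = n;
	memset(list->values, 0, n * sizeof(list->values[0]));
	return list;
}
```

In kernel code the size computation would normally go through the struct_size() helper from <linux/overflow.h>, which additionally guards against multiplication overflow.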
On Wed, Sep 30, 2020 at 2:50 PM Jann Horn wrote:
> On Wed, Sep 30, 2020 at 2:30 PM Jason Gunthorpe wrote:
> > On Tue, Sep 29, 2020 at 06:20:00PM -0700, Jann Horn wrote:
> > > In preparation for adding a mmap_assert_locked() check in
> > > __get_user_pages(), te
On Wed, Sep 30, 2020 at 1:07 PM Michael Kerrisk (man-pages)
wrote:
> I knew it would be a big ask, but below is kind of the manual page
> I was hoping you might write [1] for the seccomp user-space notification
> mechanism. Since you didn't (and because 5.9 adds various new pieces
> such as
On Wed, Sep 30, 2020 at 2:30 PM Jason Gunthorpe wrote:
> On Tue, Sep 29, 2020 at 06:20:00PM -0700, Jann Horn wrote:
> > In preparation for adding a mmap_assert_locked() check in
> > __get_user_pages(), teach the mmap_assert_*locked() helpers that it's fine
> > to operate on
On Wed, Sep 30, 2020 at 3:19 AM Jann Horn wrote:
> To be safe against concurrent changes to the VMA tree, we must take the
> mmap lock around GUP operations (excluding the GUP-fast family of
> operations, which will take the mmap lock by themselves if necessary).
(Sorry, my mail setup
-by: Jann Horn
---
fs/binfmt_elf.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 40ec0b9b4b4f..cd7c574a91a4 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -309,7 +309,10 @@ create_elf_tables(struct linux_binprm *bprm,
const struct elfhdr *exec
callgraph):
get_user_pages_remote
get_arg_page
copy_strings
copy_string_kernel
remove_arg_zero
tomoyo_dump_page
tomoyo_print_bprm
tomoyo_scan_bprm
tomoyo_environ
Signed-off-by: Jann Horn
---
fs/exec.c | 8
include/linux
going forward.
[1]
https://lore.kernel.org/lkml/CAG48ez3tZAb9JVhw4T5e-i=h2_duzxfnrtdsagsrcvaznxx...@mail.gmail.com/
Signed-off-by: Jann Horn
---
mm/gup.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/gup.c b/mm/gup.c
index f11d39867cf5..3e5d843215b9 100644
--- a/mm/gup.c
+++ b/mm
this doesn't really have any impact; however, if we want to add
lockdep asserts into the GUP path, we need to have clean locking here.
Signed-off-by: Jann Horn
---
This series should go on top of the coredump locking series (in
particular "mm/gup: Take mmap_lock in get_dump_page()"), which
static_key_count(), which uses
atomic_read(), which calls instrument_atomic_read(), which uses
kasan_check_read(), which is __kasan_check_read().
Let's permit these KASAN helpers in UACCESS regions - static keys should
probably work under UACCESS, I think.
Signed-off-by: Jann Horn
---
Calling
From what I can tell from looking at the code:
SPARC's arch_validate_prot() looks up the VMA and peeks at it; that's
not permitted though. do_mprotect_pkey() calls arch_validate_prot()
before taking the mmap lock, so we can hit use-after-free reads if
someone concurrently deletes a VMA we're
We need to take the mmap lock around find_vma() and subsequent use of the
VMA. Otherwise, we can race with concurrent operations like munmap(), which
can lead to use-after-free accesses to freed VMAs.
Fixes: 1000197d8013 ("nios2: System calls handling")
Signed-off-by:
We need to take the mmap lock around find_vma() and subsequent use of the
VMA. Otherwise, we can race with concurrent operations like munmap(), which
can lead to use-after-free accesses to freed VMAs.
Fixes: 1932fbe36e02 ("nds32: System calls handling")
Signed-off-by:
On Fri, Sep 25, 2020 at 2:18 AM Al Viro wrote:
> On Fri, Sep 25, 2020 at 02:15:50AM +0200, Jann Horn wrote:
> > On Fri, Sep 25, 2020 at 2:01 AM Kees Cook wrote:
> > > 2) seccomp needs to handle "multiplexed" tables like x86_x32 (distros
> > >haven't
On Fri, Sep 25, 2020 at 2:01 AM Kees Cook wrote:
> 2) seccomp needs to handle "multiplexed" tables like x86_x32 (distros
>haven't removed CONFIG_X86_X32 widely yet, so it is a reality that
>it must be dealt with), which means seccomp's idea of the arch
>"number" can't be the same as
vative BPF emulation
> innovation that obsoleted your TLB magic of June:
>
> https://lists.linuxfoundation.org/pipermail/containers/2020-September/042153.html
>
> And on Sep 23 instead of collaborating and helping YiFei Zhu to
> improve his BPF emulator, you posted the same techn
On Thu, Sep 24, 2020 at 3:40 PM Rasmus Villemoes
wrote:
> On 24/09/2020 01.29, Kees Cook wrote:
> > rfc:
> > https://lore.kernel.org/lkml/20200616074934.1600036-1-keesc...@chromium.org/
> > alternative:
> > https://lore.kernel.org/containers/cover.1600661418.git.yifei...@illinois.edu/
> > v1:
>
On Thu, Sep 24, 2020 at 2:37 PM David Laight wrote:
> From: Jann Horn
> > Sent: 24 September 2020 13:29
> ...
> > I think our goal here should be that if a syscall is always allowed,
> > seccomp should execute the smallest amount of instructions we can get
> > awa
On Thu, Sep 24, 2020 at 9:37 AM Kees Cook wrote:
> On Thu, Sep 24, 2020 at 02:25:03AM +0200, Jann Horn wrote:
> > On Thu, Sep 24, 2020 at 1:29 AM Kees Cook wrote:
[...]
> > (However, a "which syscalls have a fixed result" bitmap might make
> > sense if we want
On Thu, Sep 24, 2020 at 1:29 AM Kees Cook wrote:
> Provide seccomp internals with the details to calculate which syscall
> table the running kernel is expecting to deal with. This allows for
> efficient architecture pinning and paves the way for constant-action
> bitmaps.
[...]
> diff --git
On Thu, Sep 24, 2020 at 1:29 AM Kees Cook wrote:
> For systems that provide multiple syscall maps based on audit
> architectures (e.g. AUDIT_ARCH_X86_64 and AUDIT_ARCH_I386 via
> CONFIG_COMPAT) or via syscall masks (e.g. x86_x32), allow a fast way
> to pin the process to a specific syscall table,
On Thu, Sep 24, 2020 at 1:29 AM Kees Cook wrote:
> One of the most common pain points with seccomp filters has been dealing
> with the overhead of processing the filters, especially for "always allow"
> or "always reject" cases.
The "always reject" cases don't need to be fast, in particular not
> Not yet implemented are:
>
> BPF_ALU | BPF_AND (generated by libseccomp and Chrome)
BPF_AND is normally only used on syscall arguments, not on the syscall
number or the architecture, right? And when a syscall argument is
loaded, we abort execution anyway. So I think there is no ne
On Wed, Sep 23, 2020 at 9:20 PM Kees Cook wrote:
> On Tue, Sep 22, 2020 at 09:58:52AM +0200, Thomas Gleixner wrote:
> > -void asm_call_on_stack(void *sp, void *func, void *arg);
> > +void asm_call_on_stack(void *sp, void (*func)(void), void *arg);
> > +void asm_call_sysvec_on_stack(void *sp, void
I noticed this code in alloc_user_pages() in
drivers/staging/media/atomisp/pci/hmm/hmm_bo.c:
/*
* Convert user space virtual address into pages list
*/
static int alloc_user_pages(struct hmm_buffer_object *bo,
const void __user *userptr, bool cached)
{
int
On Tue, Sep 22, 2020 at 1:44 AM YiFei Zhu wrote:
> On Mon, Sep 21, 2020 at 12:47 PM Jann Horn wrote:
> > > + depends on SECCOMP
> > > + depends on SECCOMP_FILTER
> >
> > SECCOMP_FILTER already depends on SECCOMP, so the "depends on SECCOMP"
On Tue, Sep 22, 2020 at 12:51 AM YiFei Zhu wrote:
> On Mon, Sep 21, 2020 at 1:09 PM Jann Horn wrote:
> >
> > On Mon, Sep 21, 2020 at 7:35 AM YiFei Zhu wrote:
> > [...]
> > > We do this by creating a per-task bitmap of permitted syscalls.
> > >
On Tue, Sep 22, 2020 at 12:30 AM Peter Xu wrote:
> On Mon, Sep 21, 2020 at 11:43:38PM +0200, Jann Horn wrote:
> > On Mon, Sep 21, 2020 at 11:17 PM Peter Xu wrote:
> > > (Commit message collected from Jason Gunthorpe)
> > >
> > > Reduce the chance of false
On Tue, Sep 22, 2020 at 12:18 AM John Hubbard wrote:
> On 9/21/20 2:55 PM, Jann Horn wrote:
> > On Mon, Sep 21, 2020 at 11:20 PM Peter Xu wrote:
> ...
> > I dislike the whole pin_user_pages() concept because (as far as I
> > understand) it fundamentally tries to fix
On Mon, Sep 21, 2020 at 11:20 PM Peter Xu wrote:
> This patch is greatly inspired by the discussions on the list from Linus,
> Jason
> Gunthorpe and others [1].
>
> It allows copy_pte_range() to do early cow if the pages were pinned on the
> source mm. Currently we don't have an accurate way to
On Mon, Sep 21, 2020 at 11:17 PM Peter Xu wrote:
>
> (Commit message collected from Jason Gunthorpe)
>
> Reduce the chance of false positive from page_maybe_dma_pinned() by keeping
> track if the mm_struct has ever been used with pin_user_pages(). mm_structs
> that have never been passed to
On Mon, Sep 21, 2020 at 9:35 PM Hubertus Franke wrote:
> I suggest we first bring it down to the minimal features we want and
> successively build the functions as these ideas evolve.
> We asked YiFei to prepare a minimal set that brings home the basic features.
> Might not be 100% optimal but
On Mon, Sep 21, 2020 at 7:35 AM YiFei Zhu wrote:
> This series adds a bitmap to cache seccomp filter results if the
> result permits a syscall and is independent of syscall arguments.
> This visibly decreases seccomp overhead for most common seccomp
> filters with very little memory footprint.
It
On Mon, Sep 21, 2020 at 7:47 PM Jann Horn wrote:
> On Mon, Sep 21, 2020 at 7:35 AM YiFei Zhu wrote:
> > SECCOMP_CACHE_NR_ONLY will only operate on syscalls that do not
> > access any syscall arguments or instruction pointer. To facilitate
> > this we need a static ana
On Mon, Sep 21, 2020 at 7:35 AM YiFei Zhu wrote:
[...]
> We do this by creating a per-task bitmap of permitted syscalls.
> If seccomp filter is invoked we check if it is cached and if so
> directly return allow. Else we call into the cBPF filter, and if
> the result is an allow then we cache the
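The caching scheme the cover letter describes can be sketched as follows. This is a simplified userspace illustration (names and sizes are hypothetical, not the kernel's actual seccomp internals, which track the cache per filter and per architecture): a bitmap with one bit per syscall number, where a set bit means the filter was already observed to return ALLOW for that syscall independent of its arguments.

```c
#include <assert.h>
#include <limits.h>

#define NR_SYSCALLS   512
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

struct filter_cache {
	unsigned long allow[NR_SYSCALLS / BITS_PER_LONG];
};

static int cache_check(const struct filter_cache *c, unsigned int nr)
{
	if (nr >= NR_SYSCALLS)
		return 0;
	return (c->allow[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

static void cache_mark_allow(struct filter_cache *c, unsigned int nr)
{
	if (nr < NR_SYSCALLS)
		c->allow[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Fast path: a bitmap hit skips running the cBPF program entirely. */
static int seccomp_run(struct filter_cache *c, unsigned int nr,
		       int (*run_filter)(unsigned int nr))
{
	int allow;

	if (cache_check(c, nr))
		return 1;		/* cached ALLOW */
	allow = run_filter(nr);		/* slow path: run the cBPF filter */
	if (allow)
		cache_mark_allow(c, nr);	/* argument-independent by assumption */
	return allow;
}

/* Hypothetical cBPF stand-in for demonstration: deny syscall 3, allow
 * the rest, and count how often the slow path actually runs. */
static int demo_filter_calls;
static int demo_filter(unsigned int nr)
{
	demo_filter_calls++;
	return nr != 3;
}
```

Note the correctness requirement discussed in the thread: a result may only be cached when static analysis has shown the filter never reads the syscall arguments or instruction pointer for that syscall number.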
On Mon, Sep 21, 2020 at 7:35 AM YiFei Zhu wrote:
> SECCOMP_CACHE_NR_ONLY will only operate on syscalls that do not
> access any syscall arguments or instruction pointer. To facilitate
> this we need a static analyser to know whether a filter will
> access. This is implemented here with a
On Sun, Sep 13, 2020 at 7:55 PM John Wood wrote:
> On Thu, Sep 10, 2020 at 11:10:38PM +0200, Jann Horn wrote:
> > On Thu, Sep 10, 2020 at 10:22 PM Kees Cook wrote:
> > > To detect a fork brute force attack it is necessary to compute the
> > > crashing rate of the ap
On Sun, Sep 13, 2020 at 6:56 PM John Wood wrote:
> On Fri, Sep 11, 2020 at 02:01:56AM +0200, Jann Horn wrote:
> > On Fri, Sep 11, 2020 at 1:49 AM Kees Cook wrote:
> > > On Thu, Sep 10, 2020 at 01:21:06PM -0700, Kees Cook wrote:
> > > > diff --git a/fs/coredump.c
On Mon, Aug 17, 2020 at 8:23 AM Lai Jiangshan wrote:
> 7f2590a110b8("x86/entry/64: Use a per-CPU trampoline stack for IDT entries")
> made a change that when any exception happens on userspace, the
> entry code will save the pt_regs on the sp0 stack, and then copy it
> to the thread stack via
On Fri, Sep 11, 2020 at 5:15 PM Elena Petrova wrote:
> On Thu, 10 Sep 2020 at 20:35, Jann Horn wrote:
> > On Thu, Sep 10, 2020 at 3:48 PM Elena Petrova wrote:
> > > in_ubsan field of task_struct is only used in lib/ubsan.c, which in its
> > > turn is used only `if
On Fri, Sep 11, 2020 at 1:56 AM Kees Cook wrote:
> On Thu, Sep 10, 2020 at 01:21:07PM -0700, Kees Cook wrote:
> > From: John Wood
> >
> > In order to mitigate a fork brute force attack it is necessary to kill
> > all the offending tasks. This tasks are all the ones that share the
> > statistical
On Fri, Sep 11, 2020 at 1:49 AM Kees Cook wrote:
> On Thu, Sep 10, 2020 at 01:21:06PM -0700, Kees Cook wrote:
> > From: John Wood
> >
> > To detect a fork brute force attack it is necessary to compute the
> > crashing rate of the application. This calculation is performed in each
> > fatal fail
On Thu, Sep 10, 2020 at 10:21 PM Kees Cook wrote:
> From: John Wood
>
> Add a menu entry under "Security options" to enable the "Fork brute
> force attack mitigation" feature.
[...]
> +config FBFAM
Please give this a more descriptive name than FBFAM. Some name where,
if a random kernel
On Thu, Sep 10, 2020 at 10:22 PM Kees Cook wrote:
> To detect a fork brute force attack it is necessary to compute the
> crashing rate of the application. This calculation is performed in each
> fatal fail of a task, or in other words, when a core dump is triggered.
> If this rate shows that the
On Thu, Sep 10, 2020 at 10:22 PM Kees Cook wrote:
> In order to mitigate a fork brute force attack it is necessary to kill
> all the offending tasks. These tasks are all the ones that share the
> statistical data with the current task (the task that has crashed).
>
> Since the attack detection is
On Thu, Sep 10, 2020 at 10:21 PM Kees Cook wrote:
> Use the previous defined api to manage statistics calling it accordingly
> when a task forks, calls execve or exits.
You defined functions that return error codes in the previous patch,
but here you ignore the return values. That's a bad idea.
On Thu, Sep 10, 2020 at 10:21 PM Kees Cook wrote:
> [kees: re-sending this series on behalf of John Wood
> also visible at https://github.com/johwood/linux fbfam]
[...]
> The goal of this patch serie is to detect and mitigate a fork brute force
> attack.
>
> Attacks with the purpose to break
On Thu, Sep 10, 2020 at 3:48 PM Elena Petrova wrote:
> in_ubsan field of task_struct is only used in lib/ubsan.c, which in its
> turn is used only `ifneq ($(CONFIG_UBSAN_TRAP),y)`.
>
> Removing unnecessary field from a task_struct will help preserve the
> ABI between vanilla and
On Thu, Sep 10, 2020 at 10:24 AM Chunfeng Yun wrote:
> Use readl_poll_timeout_atomic() to simplify code
>
> Cc: Lu Baolu
> Cc: Mathias Nyman
> Signed-off-by: Chunfeng Yun
Reviewed-by: Jann Horn
On Thu, Sep 3, 2020 at 12:13 AM Yu, Yu-cheng wrote:
> On 9/2/2020 1:03 PM, Jann Horn wrote:
> > On Tue, Aug 25, 2020 at 2:30 AM Yu-cheng Yu wrote:
> >> Add REGSET_CET64/REGSET_CET32 to get/set CET MSRs:
> >>
> >> IA32_U_CET (user-mode CET settings) a
On Tue, Aug 25, 2020 at 2:30 AM Yu-cheng Yu wrote:
> Add REGSET_CET64/REGSET_CET32 to get/set CET MSRs:
>
> IA32_U_CET (user-mode CET settings) and
> IA32_PL3_SSP (user-mode Shadow Stack)
[...]
> diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
[...]
> +int
ck should be on
mm->mm_users. Fix it up accordingly.
Fixes: 99cb252f5e68 ("mm/mmu_notifier: add an interval tree notifier")
Signed-off-by: Jann Horn
---
Can someone please double-check this? I'm like 90% sure that I fixed
this the right way around, but it'd be good if someone
Since pwritev2 checks flag
> validity (in kiocb_set_rw_flags) and reports unknown ones with
> EOPNOTSUPP, callers will not get wrong behavior on old kernels that
> don't support the new flag; the error is reported and the caller can
> decide how to handle it.
>
> Sig
On Mon, Aug 31, 2020 at 2:57 PM Rich Felker wrote:
> On Mon, Aug 31, 2020 at 11:15:57AM +0200, Jann Horn wrote:
> > On Mon, Aug 31, 2020 at 3:46 AM Rich Felker wrote:
> > > On Mon, Aug 31, 2020 at 03:15:04AM +0200, Jann Horn wrote:
> > > > On Sun, Aug 30, 2020
On Mon, Aug 31, 2020 at 8:07 AM Hugh Dickins wrote:
> On Thu, 27 Aug 2020, Jann Horn wrote:
>
> > The preceding patches have ensured that core dumping properly takes the
> > mmap_lock. Thanks to that, we can now remove mmget_still_valid() and all
> > its users.
>
>
On Mon, Aug 31, 2020 at 3:46 AM Rich Felker wrote:
> On Mon, Aug 31, 2020 at 03:15:04AM +0200, Jann Horn wrote:
> > On Sun, Aug 30, 2020 at 10:00 PM Rich Felker wrote:
> > > On Sun, Aug 30, 2020 at 09:02:31PM +0200, Jann Horn wrote:
> > > > On Sun, Aug 30, 2020
On Sun, Aug 30, 2020 at 10:00 PM Rich Felker wrote:
> On Sun, Aug 30, 2020 at 09:02:31PM +0200, Jann Horn wrote:
> > On Sun, Aug 30, 2020 at 8:43 PM Rich Felker wrote:
> > > On Sun, Aug 30, 2020 at 08:31:36PM +0200, Jann Horn wrote:
> > > > On Sun, Aug 30, 2020
On Sun, Aug 30, 2020 at 8:43 PM Rich Felker wrote:
> On Sun, Aug 30, 2020 at 08:31:36PM +0200, Jann Horn wrote:
> > On Sun, Aug 30, 2020 at 6:36 PM Rich Felker wrote:
> > > On Sun, Aug 30, 2020 at 05:05:45PM +0200, Jann Horn wrote:
> > > > On Sat, Aug 29, 2020
On Sun, Aug 30, 2020 at 6:36 PM Rich Felker wrote:
> On Sun, Aug 30, 2020 at 05:05:45PM +0200, Jann Horn wrote:
> > On Sat, Aug 29, 2020 at 4:00 AM Rich Felker wrote:
> > > The pwrite function, originally defined by POSIX (thus the "p"), is
> > >
On Sat, Aug 29, 2020 at 4:00 AM Rich Felker wrote:
> The pwrite function, originally defined by POSIX (thus the "p"), is
> defined to ignore O_APPEND and write at the offset passed as its
> argument. However, historically Linux honored O_APPEND if set and
> ignored the offset. This cannot be
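The deviation described above is documented in the Linux pwrite(2) man page (BUGS section): with O_APPEND set, Linux's pwrite() appends at end-of-file instead of writing at the passed offset. A small demonstration of the current Linux behavior (file path and function name are just for illustration):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns the file length after a pwrite() at offset 0 on an O_APPEND
 * descriptor. POSIX says the 3 written bytes should overwrite bytes
 * 0..2 (final length 3); on Linux they land at EOF (final length 6). */
static off_t pwrite_append_demo(const char *path)
{
	off_t end;
	int fd;

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
	assert(fd >= 0);
	assert(write(fd, "abc", 3) == 3);
	close(fd);

	fd = open(path, O_WRONLY | O_APPEND);
	assert(fd >= 0);
	/* POSIX: write at offset 0; Linux: append regardless of offset. */
	assert(pwrite(fd, "XYZ", 3, 0) == 3);
	end = lseek(fd, 0, SEEK_END);
	close(fd);
	unlink(path);
	return end;
}
```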
oked around in the hexdump and it looked decent.)
Jann Horn (7):
binfmt_elf_fdpic: Stop using dump_emit() on user pointers on !MMU
coredump: Let dump_emit() bail out on short writes
coredump: Refactor page range dumping into common helper
coredump: Rework elf/elf_fdpic vma_dump_size() into
. (And arguably it looks nicer and makes more
sense in generic code.)
Adjust a little bit based on the binfmt_elf_fdpic version:
->anon_vma is only meaningful under CONFIG_MMU, otherwise we have to assume
that the VMA has been written to.
Suggested-by: Linus Torvalds
Signed-off-by: Jann Horn
---
-by: Jann Horn
---
fs/binfmt_elf_fdpic.c | 8 --
mm/gup.c | 57 +--
2 files changed, 28 insertions(+), 37 deletions(-)
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index 50f845702b92..a53f83830986 100644
--- a/fs/binfmt_elf_fdpic.c
Both fs/binfmt_elf.c and fs/binfmt_elf_fdpic.c need to dump ranges of pages
into the coredump file. Extract that logic into a common helper.
Signed-off-by: Jann Horn
---
fs/binfmt_elf.c | 22 ++
fs/binfmt_elf_fdpic.c| 18 +++---
fs/coredump.c
t do anything about the existing data races with stack
expansion in other mm code.)
Signed-off-by: Jann Horn
---
fs/binfmt_elf.c | 100 +--
fs/binfmt_elf_fdpic.c| 67 +++---
fs/coredump.c| 81 +
-by: Jann Horn
---
mm/gup.c | 16 ++--
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 92519e5a44b3..bd0f7311c5c6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1552,19 +1552,23 @@ static long __get_user_pages_locked(struct mm_struct
*mm, unsigned
dump_emit() has a retry loop, but there seems to be no way for that retry
logic to actually be used; and it was also buggy, writing the same data
repeatedly after a short write.
Let's just bail out on a short write.
Suggested-by: Linus Torvalds
Signed-off-by: Jann Horn
---
fs/coredump.c | 22
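The fixed behavior the changelog describes (fail on a short write rather than retry, since the buggy loop re-wrote the same buffer from the start) can be sketched in userspace like this. The function name mirrors the kernel's dump_emit() but this is an illustrative analogue, not the kernel code:

```c
#include <assert.h>
#include <unistd.h>

/* Emit a buffer to the dump file descriptor; treat any error or short
 * write as failure instead of looping. Returns 1 on success, 0 on
 * failure, matching dump_emit()'s convention. */
static int dump_emit_sketch(int fd, const void *buf, size_t nr)
{
	ssize_t n = write(fd, buf, nr);

	return n == (ssize_t)nr;
}
```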
The preceding patches have ensured that core dumping properly takes the
mmap_lock. Thanks to that, we can now remove mmget_still_valid() and all
its users.
Signed-off-by: Jann Horn
---
drivers/infiniband/core/uverbs_main.c | 3 ---
drivers/vfio/pci/vfio_pci.c | 38
Both fs/binfmt_elf.c and fs/binfmt_elf_fdpic.c need to dump ranges of pages
into the coredump file. Extract that logic into a common helper.
Signed-off-by: Jann Horn
---
fs/binfmt_elf.c | 22 ++
fs/binfmt_elf_fdpic.c| 18 +++---
fs/coredump.c