Re: [PATCH] bpf: avoid old-style declaration warnings

2021-03-22 Thread KP Singh
On Mon, Mar 22, 2021 at 10:52 PM Arnd Bergmann  wrote:
>
> From: Arnd Bergmann 
>
> gcc -Wextra wants type modifiers in the normal order:
>
> kernel/bpf/bpf_lsm.c:70:1: error: 'static' is not at beginning of declaration 
> [-Werror=old-style-declaration]
>70 | const static struct bpf_func_proto bpf_bprm_opts_set_proto = {
>   | ^
> kernel/bpf/bpf_lsm.c:91:1: error: 'static' is not at beginning of declaration 
> [-Werror=old-style-declaration]
>91 | const static struct bpf_func_proto bpf_ima_inode_hash_proto = {
>   | ^
>
> Fixes: 3f6719c7b62f ("bpf: Add bpf_bprm_opts_set helper")
> Fixes: 27672f0d280a ("bpf: Add a BPF helper for getting the IMA hash of an 
> inode")
> Signed-off-by: Arnd Bergmann 

Thanks for fixing!

Acked-by: KP Singh 


Re: [PATCH] bpf: fix a warning message in mark_ptr_not_null_reg()

2021-02-16 Thread KP Singh
On Tue, Feb 16, 2021 at 8:37 PM Dan Carpenter  wrote:
>
> The WARN_ON() argument is a condition, and it generates a stack trace
> but it doesn't print the warning.
>
> Fixes: 4ddb74165ae5 ("bpf: Extract nullable reg type conversion into a helper 
> function")
> Signed-off-by: Dan Carpenter 
> ---
>  kernel/bpf/verifier.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 056df6be3e30..bd4d1dfca73c 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1120,7 +1120,7 @@ static void mark_ptr_not_null_reg(struct bpf_reg_state 
> *reg)
> reg->type = PTR_TO_RDWR_BUF;
> break;
> default:
> -   WARN_ON("unknown nullable register type");
> +   WARN(1, "unknown nullable register type");

Should we use WARN_ONCE here? Also, I think the fix should be targeted
at bpf-next, as the patch that introduced this hasn't made it to bpf yet.
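Something along these lines is what I had in mind (just a sketch; printing
reg->type in the message is purely illustrative):

	default:
		WARN_ONCE(1, "unknown nullable register type %d\n", reg->type);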

[...]


Re: [PATCH v3 bpf-next 1/4] bpf: enable task local storage for tracing programs

2021-01-31 Thread KP Singh
On Thu, Jan 28, 2021 at 1:20 AM Song Liu  wrote:
>
> To access per-task data, BPF programs usually creates a hash table with
> pid as the key. This is not ideal because:
>  1. The user need to estimate the proper size of the hash table, which may
> be inaccurate;
>  2. Big hash tables are slow;
>  3. To clean up the data properly during task terminations, the user need
> to write extra logic.
>
> Task local storage overcomes these issues and offers a better option for
> these per-task data. Task local storage is only available to BPF_LSM. Now
> enable it for tracing programs.
>
> Unlike LSM progreams, tracing programs can be called in IRQ contexts.

nit: typo *programs

> Helpers that accesses task local storage are updated to use

nit: Helpers that access..

> raw_spin_lock_irqsave() instead of raw_spin_lock_bh().
>
> Tracing programs can attach to functions on the task free path, e.g.
> exit_creds(). To avoid allocating task local storage after
> bpf_task_storage_free(). bpf_task_storage_get() is updated to not allocate
> new storage when the task is not refcounted (task->usage == 0).
>
> Signed-off-by: Song Liu 

Acked-by: KP Singh 

Thanks for adding better commit descriptions :)

I think checking the usage before adding storage should work for the
task exit path (I could not think of cases where it would break).
Would also be nice to check with Martin and Hao about this.


Re: [PATCH v2] bpf: Drop disabled LSM hooks from the sleepable set

2021-01-25 Thread KP Singh
On Mon, Jan 25, 2021 at 7:39 AM Mikko Ylinen
 wrote:
>
> Some networking and keys LSM hooks are conditionally enabled
> and when building the new sleepable BPF LSM hooks with those
> LSM hooks disabled, the following build error occurs:
>
> BTFIDS  vmlinux
> FAILED unresolved symbol bpf_lsm_socket_socketpair
>
> To fix the error, conditionally add the relevant networking/keys
> LSM hooks to the sleepable set.
>
> Fixes: 423f16108c9d8 ("bpf: Augment the set of sleepable LSM hooks")
> Signed-off-by: Mikko Ylinen 

Acked-by: KP Singh 


Re: [PATCH] bpf: Drop disabled LSM hooks from the sleepable set

2021-01-25 Thread KP Singh
On Mon, Jan 25, 2021 at 7:55 AM Mikko Ylinen
 wrote:
>
> On Sat, Jan 23, 2021 at 12:50:21AM +0100, KP Singh wrote:
> > On Fri, Jan 22, 2021 at 11:33 PM KP Singh  wrote:
> > >
> > > On Fri, Jan 22, 2021 at 1:32 PM Mikko Ylinen
> > >  wrote:
> > > >
> > > > Networking LSM hooks are conditionally enabled and when building the new
> > > > sleepable BPF LSM hooks with the networking LSM hooks disabled, the
> > > > following build error occurs:
> > > >
> > > > BTFIDS  vmlinux
> > > > FAILED unresolved symbol bpf_lsm_socket_socketpair
> > > >

[...]

>
> Agree, a way to get the set automatically created makes sense. But the
> extra parameter to LSM_HOOK macro would be BPF specific, right?
>

The information about whether the hook "must not sleep" has been
mentioned sporadically in comments, e.g.:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/lsm_hooks.h#n920
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/lsm_hooks.h#n594

I think it would be generally useful for the framework to actually provide this
in the definition of the hook and then enforce it (by calling might_sleep() for
hooks that can sleep).

- KP

> -- Regards, Mikko


Re: [PATCH] bpf: Drop disabled LSM hooks from the sleepable set

2021-01-22 Thread KP Singh
On Fri, Jan 22, 2021 at 11:33 PM KP Singh  wrote:
>
> On Fri, Jan 22, 2021 at 1:32 PM Mikko Ylinen
>  wrote:
> >
> > Networking LSM hooks are conditionally enabled and when building the new
> > sleepable BPF LSM hooks with the networking LSM hooks disabled, the
> > following build error occurs:
> >
> > BTFIDS  vmlinux
> > FAILED unresolved symbol bpf_lsm_socket_socketpair
> >
> > To fix the error, conditionally add the networking LSM hooks to the
> > sleepable set.
> >
> > Fixes: 423f16108c9d8 ("bpf: Augment the set of sleepable LSM hooks")
> > Signed-off-by: Mikko Ylinen 
>
> Thanks!
>
> Acked-by: KP Singh 

Btw, I was noticing that there's another hook that is surrounded by ifdefs:

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 70e5e0b6d69d..f7f7754e938d 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -166,7 +166,11 @@ BTF_ID(func, bpf_lsm_inode_symlink)
 BTF_ID(func, bpf_lsm_inode_unlink)
 BTF_ID(func, bpf_lsm_kernel_module_request)
 BTF_ID(func, bpf_lsm_kernfs_init_security)
+
+#ifdef CONFIG_KEYS
 BTF_ID(func, bpf_lsm_key_free)
+#endif
+
 BTF_ID(func, bpf_lsm_mmap_file)
 BTF_ID(func, bpf_lsm_netlink_send)
 BTF_ID(func, bpf_lsm_path_notify)

It would be great if you can also add this to your patch :)

I guess the cleanest solution to never let this happen would be to incorporate
this in lsm_hook_defs.h and mark hooks as SLEEPABLE and NON_SLEEPABLE with an
extra parameter to the LSM_HOOK macro, and then only generate the BTF IDs
based on this macro parameter.
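To make that concrete, a rough sketch of what the entries could look like (the
extra SLEEPABLE/NON_SLEEPABLE argument is hypothetical, it does not exist
today):

	/* include/linux/lsm_hook_defs.h */
	LSM_HOOK(int, 0, SLEEPABLE, bprm_check_security, struct linux_binprm *bprm)
	LSM_HOOK(void, LSM_RET_VOID, NON_SLEEPABLE, task_free, struct task_struct *task)

kernel/bpf/bpf_lsm.c could then define LSM_HOOK so that BTF_ID(func,
bpf_lsm_##NAME) is only emitted for the entries marked SLEEPABLE.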


Re: [PATCH] bpf: Drop disabled LSM hooks from the sleepable set

2021-01-22 Thread KP Singh
On Fri, Jan 22, 2021 at 1:32 PM Mikko Ylinen
 wrote:
>
> Networking LSM hooks are conditionally enabled and when building the new
> sleepable BPF LSM hooks with the networking LSM hooks disabled, the
> following build error occurs:
>
> BTFIDS  vmlinux
> FAILED unresolved symbol bpf_lsm_socket_socketpair
>
> To fix the error, conditionally add the networking LSM hooks to the
> sleepable set.
>
> Fixes: 423f16108c9d8 ("bpf: Augment the set of sleepable LSM hooks")
> Signed-off-by: Mikko Ylinen 

Thanks!

Acked-by: KP Singh 


Re: [PATCH] bpf: put file handler if no storage found

2021-01-20 Thread KP Singh
On Wed, Jan 20, 2021 at 8:23 PM Alexei Starovoitov
 wrote:
>
> On Tue, Jan 19, 2021 at 4:03 AM Pan Bian  wrote:
> >
> > Put file f if inode_storage_ptr() returns NULL.
> >
> > Signed-off-by: Pan Bian 

Thanks for fixing this! (You can add my ack with the fixes tag when
you resubmit)

Fixes: 8ea636848aca ("bpf: Implement bpf_local_storage for inodes")
Acked-by: KP Singh 

> > ---
> >  kernel/bpf/bpf_inode_storage.c | 6 +-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
> > index 6edff97ad594..089d5071d4fc 100644
> > --- a/kernel/bpf/bpf_inode_storage.c
> > +++ b/kernel/bpf/bpf_inode_storage.c
> > @@ -125,8 +125,12 @@ static int bpf_fd_inode_storage_update_elem(struct 
> > bpf_map *map, void *key,
> >
> > fd = *(int *)key;
> > f = fget_raw(fd);
> > -   if (!f || !inode_storage_ptr(f->f_inode))
> > +   if (!f)
> > +   return -EBADF;
> > +   if (!inode_storage_ptr(f->f_inode)) {
> > +   fput(f);
> > return -EBADF;
> > +   }
>
> Good catch.
> Somehow the patch is not in patchwork.
> Could you please resubmit with Fixes tag and reduce cc list?
> I guess it's hitting some spam filters in vger.


Re: [PATCH bpf-next v5 4/4] selftests/bpf: Add a selftest for the tracing bpf_get_socket_cookie

2021-01-20 Thread KP Singh
On Tue, Jan 19, 2021 at 5:00 PM Florent Revest  wrote:
>
> This builds up on the existing socket cookie test which checks whether
> the bpf_get_socket_cookie helpers provide the same value in
> cgroup/connect6 and sockops programs for a socket created by the
> userspace part of the test.
>
> Adding a tracing program to the existing objects requires a different
> attachment strategy and different headers.
>
> Signed-off-by: Florent Revest 

Acked-by: KP Singh 

(one minor note, doesn't really need fixing as a part of this though)

> ---
>  .../selftests/bpf/prog_tests/socket_cookie.c  | 24 +++
>  .../selftests/bpf/progs/socket_cookie_prog.c  | 41 ---
>  2 files changed, 52 insertions(+), 13 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/socket_cookie.c 
> b/tools/testing/selftests/bpf/prog_tests/socket_cookie.c
> index 53d0c44e7907..e5c5e2ea1deb 100644
> --- a/tools/testing/selftests/bpf/prog_tests/socket_cookie.c
> +++ b/tools/testing/selftests/bpf/prog_tests/socket_cookie.c
> @@ -15,8 +15,8 @@ struct socket_cookie {
>
>  void test_socket_cookie(void)
>  {
> +   struct bpf_link *set_link, *update_sockops_link, *update_tracing_link;
> socklen_t addr_len = sizeof(struct sockaddr_in6);
> -   struct bpf_link *set_link, *update_link;
> int server_fd, client_fd, cgroup_fd;
> struct socket_cookie_prog *skel;
> __u32 cookie_expected_value;
> @@ -39,15 +39,21 @@ void test_socket_cookie(void)
>   PTR_ERR(set_link)))
> goto close_cgroup_fd;
>
> -   update_link = bpf_program__attach_cgroup(skel->progs.update_cookie,
> -cgroup_fd);
> -   if (CHECK(IS_ERR(update_link), "update-link-cg-attach", "err %ld\n",
> - PTR_ERR(update_link)))
> +   update_sockops_link = bpf_program__attach_cgroup(
> +   skel->progs.update_cookie_sockops, cgroup_fd);
> +   if (CHECK(IS_ERR(update_sockops_link), 
> "update-sockops-link-cg-attach",
> + "err %ld\n", PTR_ERR(update_sockops_link)))
> goto free_set_link;
>
> +   update_tracing_link = bpf_program__attach(
> +   skel->progs.update_cookie_tracing);
> +   if (CHECK(IS_ERR(update_tracing_link), "update-tracing-link-attach",
> + "err %ld\n", PTR_ERR(update_tracing_link)))
> +   goto free_update_sockops_link;
> +
> server_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0, 0);
> if (CHECK(server_fd < 0, "start_server", "errno %d\n", errno))
> -   goto free_update_link;
> +   goto free_update_tracing_link;
>
> client_fd = connect_to_fd(server_fd, 0);
> if (CHECK(client_fd < 0, "connect_to_fd", "errno %d\n", errno))
> @@ -71,8 +77,10 @@ void test_socket_cookie(void)
> close(client_fd);
>  close_server_fd:
> close(server_fd);
> -free_update_link:
> -   bpf_link__destroy(update_link);
> +free_update_tracing_link:
> +   bpf_link__destroy(update_tracing_link);

I don't think this needs to block submission unless there are other issues,
but bpf_link__destroy can just be called in a single cleanup label because
it handles NULL or erroneous inputs:

int bpf_link__destroy(struct bpf_link *link)
{
int err = 0;

if (IS_ERR_OR_NULL(link))
 return 0;
[...]
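i.e. the tail of the test could collapse into something like (sketch; fd
cleanup elided, and the links would need to be initialized to NULL so the
early goto paths stay safe):

	cleanup:
		bpf_link__destroy(update_tracing_link);
		bpf_link__destroy(update_sockops_link);
		bpf_link__destroy(set_link);
		socket_cookie_prog__destroy(skel);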


Re: [PATCH bpf-next v5 3/4] selftests/bpf: Integrate the socket_cookie test to test_progs

2021-01-20 Thread KP Singh
On Tue, Jan 19, 2021 at 5:00 PM Florent Revest  wrote:
>
> Currently, the selftest for the BPF socket_cookie helpers is built and
> run independently from test_progs. It's easy to forget and hard to
> maintain.
>
> This patch moves the socket cookies test into prog_tests/ and vastly
> simplifies its logic by:
> - rewriting the loading code with BPF skeletons
> - rewriting the server/client code with network helpers
> - rewriting the cgroup code with test__join_cgroup
> - rewriting the error handling code with CHECKs
>
> Signed-off-by: Florent Revest 

Acked-by: KP Singh 


Re: [PATCH bpf-next v5 2/4] bpf: Expose bpf_get_socket_cookie to tracing programs

2021-01-20 Thread KP Singh
On Tue, Jan 19, 2021 at 5:00 PM Florent Revest  wrote:
>
> This needs a new helper that:
> - can work in a sleepable context (using sock_gen_cookie)
> - takes a struct sock pointer and checks that it's not NULL
>
> Signed-off-by: Florent Revest 

Acked-by: KP Singh 


Re: [PATCH bpf-next v5 1/4] bpf: Be less specific about socket cookies guarantees

2021-01-20 Thread KP Singh
On Tue, Jan 19, 2021 at 5:00 PM Florent Revest  wrote:
>
> Since "92acdc58ab11 bpf, net: Rework cookie generator as per-cpu one"
> socket cookies are not guaranteed to be non-decreasing. The
> bpf_get_socket_cookie helper descriptions are currently specifying that
> cookies are non-decreasing but we don't want users to rely on that.
>
> Reported-by: Daniel Borkmann 
> Signed-off-by: Florent Revest 

Acked-by: KP Singh 


Re: [PATCH bpf v2 2/2] selftests/bpf: add verifier test for PTR_TO_MEM spill

2021-01-13 Thread KP Singh
On Wed, Jan 13, 2021 at 5:05 PM Yonghong Song  wrote:
>
>
>
> On 1/12/21 9:38 PM, Gilad Reti wrote:
> > Add a test to check that the verifier is able to recognize spilling of
> > PTR_TO_MEM registers, by reserving a ringbuf buffer, forcing the spill
> > of a pointer holding the buffer address to the stack, filling it back
> > in from the stack and writing to the memory area pointed by it.
> >
> > The patch was partially contributed by CyberArk Software, Inc.
> >
> > Signed-off-by: Gilad Reti 
>
> I didn't verify result_unpriv = ACCEPT part. I think it is correct
> by checking code.
>
> Acked-by: Yonghong Song 

Thanks for the description!

Acked-by: KP Singh 


Re: [PATCH bpf v2 1/2] bpf: support PTR_TO_MEM{,_OR_NULL} register spilling

2021-01-13 Thread KP Singh
On Wed, Jan 13, 2021 at 6:38 AM Gilad Reti  wrote:
>
> Add support for pointer to mem register spilling, to allow the verifier
> to track pointers to valid memory addresses. Such pointers are returned
> for example by a successful call of the bpf_ringbuf_reserve helper.
>
> The patch was partially contributed by CyberArk Software, Inc.
>
> Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier support for 
> it")
> Suggested-by: Yonghong Song 
> Signed-off-by: Gilad Reti 

Acked-by: KP Singh 


Re: [PATCH bpf-next 1/4] bpf: enable task local storage for tracing programs

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 5:32 PM Yonghong Song  wrote:
>
>
>
> On 1/11/21 3:45 PM, Song Liu wrote:
> >
> >
> >> On Jan 11, 2021, at 1:58 PM, Martin Lau  wrote:
> >>
> >> On Mon, Jan 11, 2021 at 10:35:43PM +0100, KP Singh wrote:
> >>> On Mon, Jan 11, 2021 at 7:57 PM Martin KaFai Lau  wrote:
> >>>>
> >>>> On Fri, Jan 08, 2021 at 03:19:47PM -0800, Song Liu wrote:
> >>>>
> >>>> [ ... ]
> >>>>
> >>>>> diff --git a/kernel/bpf/bpf_local_storage.c 
> >>>>> b/kernel/bpf/bpf_local_storage.c
> >>>>> index dd5aedee99e73..9bd47ad2b26f1 100644
> >>>>> --- a/kernel/bpf/bpf_local_storage.c
> >>>>> +++ b/kernel/bpf/bpf_local_storage.c

[...]

> >>>>> +#include 
> >>>>>
> >>>>> #include 
> >>>>> #include 
> >>>>> @@ -734,6 +735,7 @@ void __put_task_struct(struct task_struct *tsk)
> >>>>>   cgroup_free(tsk);
> >>>>>   task_numa_free(tsk, true);
> >>>>>   security_task_free(tsk);
> >>>>> + bpf_task_storage_free(tsk);
> >>>>>   exit_creds(tsk);
> >>>> If exit_creds() is traced by a bpf and this bpf is doing
> >>>> bpf_task_storage_get(..., BPF_LOCAL_STORAGE_GET_F_CREATE),
> >>>> new task storage will be created after bpf_task_storage_free().
> >>>>
> >>>> I recalled there was an earlier discussion with KP and KP mentioned
> >>>> BPF_LSM will not be called with a task that is going away.
> >>>> It seems enabling bpf task storage in bpf tracing will break
> >>>> this assumption and needs to be addressed?
> >>>
> >>> For tracing programs, I think we will need an allow list where
> >>> task local storage can be used.
> >> Instead of whitelist, can refcount_inc_not_zero(&task->usage) be used?
> >
> > I think we can put refcount_inc_not_zero() in bpf_task_storage_get, like:
> >
> > diff --git i/kernel/bpf/bpf_task_storage.c w/kernel/bpf/bpf_task_storage.c
> > index f654b56907b69..93d01b0a010e6 100644
> > --- i/kernel/bpf/bpf_task_storage.c
> > +++ w/kernel/bpf/bpf_task_storage.c
> > @@ -216,6 +216,9 @@ BPF_CALL_4(bpf_task_storage_get, struct bpf_map *, map, 
> > struct task_struct *,
> >   * by an RCU read-side critical section.
> >   */
> >  if (flags & BPF_LOCAL_STORAGE_GET_F_CREATE) {
> > +   if (!refcount_inc_not_zero(&task->usage))
> > +   return -EBUSY;
> > +
> >  sdata = bpf_local_storage_update(
> >  task, (struct bpf_local_storage_map *)map, value,
> >  BPF_NOEXIST);
> >
> > But where shall we add the refcount_dec()? IIUC, we cannot add it to
> > __put_task_struct().
>
> Maybe put_task_struct()?

Yeah, something like this, or if you find a more elegant alternative :)

--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -107,13 +107,20 @@ extern void __put_task_struct(struct task_struct *t);

 static inline void put_task_struct(struct task_struct *t)
 {
-   if (refcount_dec_and_test(&t->usage))
+
+   if (rcu_access_pointer(t->bpf_storage)) {
+   if (refcount_sub_and_test(2, &t->usage))
+   __put_task_struct(t);
+   } else if (refcount_dec_and_test(&t->usage))
__put_task_struct(t);
 }

 static inline void put_task_struct_many(struct task_struct *t, int nr)
 {
-   if (refcount_sub_and_test(nr, &t->usage))
+   if (rcu_access_pointer(t->bpf_storage)) {
+   if (refcount_sub_and_test(nr + 1, &t->usage))
+   __put_task_struct(t);
+   } else if (refcount_sub_and_test(nr, &t->usage))
__put_task_struct(t);
 }


I may be missing something but shouldn't bpf_storage be an __rcu
member like we have for sk_bpf_storage?

#ifdef CONFIG_BPF_SYSCALL
struct bpf_local_storage __rcu *sk_bpf_storage;
#endif


>
> > Thanks,
> > Song
> >


Re: [PATCH 2/2] selftests/bpf: add verifier test for PTR_TO_MEM spill

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 4:43 PM Daniel Borkmann  wrote:
>
> On 1/12/21 4:35 PM, Gilad Reti wrote:
> > On Tue, Jan 12, 2021 at 4:56 PM KP Singh  wrote:
> >> On Tue, Jan 12, 2021 at 10:16 AM Gilad Reti  wrote:
> >>>
> >>> Add test to check that the verifier is able to recognize spilling of
> >>> PTR_TO_MEM registers.
> >>
> >> It would be nice to have some explanation of what the test does to
> >> recognize the spilling of the PTR_TO_MEM registers in the commit
> >> log as well.
> >>
> >> Would it be possible to augment an existing test_progs
> >> program like tools/testing/selftests/bpf/progs/test_ringbuf.c to test
> >> this functionality?
>
> How would you guarantee that LLVM generates the spill/fill, via inline asm?

Yeah, I guess there is no sure-shot way to do it, and adding inline asm would
just be doing the same thing as this verifier test. You can ignore me on this
one :)

It would, however, be nice to have a better description of what the test is
actually doing.


>
> > It may be possible, but from what I understood from Daniel's comment here
> >
> > https://lore.kernel.org/bpf/17629073-4fab-a922-ecc3-25b019960...@iogearbox.net/
> >
> > the test should be a part of the verifier tests (which is reasonable
> > to me since it is
> > a verifier bugfix)
>
> Yeah, the test_verifier case as you have is definitely the most straight
> forward way to add coverage in this case.


Re: [PATCH bpf 1/2] bpf: support PTR_TO_MEM{,_OR_NULL} register spilling

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 3:24 PM Gilad Reti  wrote:
>
> On Tue, Jan 12, 2021 at 3:57 PM KP Singh  wrote:
> >
> > On Tue, Jan 12, 2021 at 10:14 AM Gilad Reti  wrote:
> > >
> > > Add support for pointer to mem register spilling, to allow the verifier
> > > to track pointer to valid memory addresses. Such pointers are returned
> >
> > nit: pointers
>
> Thanks
>
> >
> > > for example by a successful call of the bpf_ringbuf_reserve helper.
> > >
> > > This patch was suggested as a solution by Yonghong Song.
> >
> > You can use the "Suggested-by:" tag for this.
>
> Thanks
>
> >
> > >
> > > The patch was partially contibuted by CyberArk Software, Inc.
> >
> > nit: typo *contributed
>
> Thanks. Should I submit a v2 of the patch to correct all of those?

I think it would be nice to do another revision
which also addresses the comments on the other patch.


>
> >
> > Also, I was wondering if "partially" here means someone collaborated with 
> > you
> > on the patch? And, in that case:
> >
> > "Co-developed-by:" would be a better tag here.
>
> No, I did it alone. I mentioned CyberArk since I work there and did some of 
> the
> coding during my daily work, so they deserve credit.
>
> >
> > Acked-by: KP Singh 
> >
> >
> > >
> > > Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier
> > > support for it")
> > > Signed-off-by: Gilad Reti 
> > > ---
> > >  kernel/bpf/verifier.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 17270b8404f1..36af69fac591 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -2217,6 +2217,8 @@ static bool is_spillable_regtype(enum bpf_reg_type 
> > > type)
> > > case PTR_TO_RDWR_BUF:
> > > case PTR_TO_RDWR_BUF_OR_NULL:
> > > case PTR_TO_PERCPU_BTF_ID:
> > > +   case PTR_TO_MEM:
> > > +   case PTR_TO_MEM_OR_NULL:
> > > return true;
> > > default:
> > > return false;
> > > --
> > > 2.27.0
> > >


Re: [PATCH 2/2] selftests/bpf: add verifier test for PTR_TO_MEM spill

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 10:16 AM Gilad Reti  wrote:
>
> Add test to check that the verifier is able to recognize spilling of
> PTR_TO_MEM registers.
>

It would be nice to have some explanation of what the test does to
recognize the spilling of the PTR_TO_MEM registers in the commit
log as well.

Would it be possible to augment an existing test_progs
program like tools/testing/selftests/bpf/progs/test_ringbuf.c to test
this functionality?



> The patch was partially contibuted by CyberArk Software, Inc.
>
> Signed-off-by: Gilad Reti 
> ---
>  tools/testing/selftests/bpf/test_verifier.c   | 12 +++-
>  .../selftests/bpf/verifier/spill_fill.c   | 30 +++
>  2 files changed, 41 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/bpf/test_verifier.c 
> b/tools/testing/selftests/bpf/test_verifier.c
> index 777a81404fdb..f8569f04064b 100644
> --- a/tools/testing/selftests/bpf/test_verifier.c
> +++ b/tools/testing/selftests/bpf/test_verifier.c
> @@ -50,7 +50,7 @@
>  #define MAX_INSNS  BPF_MAXINSNS
>  #define MAX_TEST_INSNS 100
>  #define MAX_FIXUPS 8
> -#define MAX_NR_MAPS20
> +#define MAX_NR_MAPS21
>  #define MAX_TEST_RUNS  8
>  #define POINTER_VALUE  0xcafe4all
>  #define TEST_DATA_LEN  64
> @@ -87,6 +87,7 @@ struct bpf_test {
> int fixup_sk_storage_map[MAX_FIXUPS];
> int fixup_map_event_output[MAX_FIXUPS];
> int fixup_map_reuseport_array[MAX_FIXUPS];
> +   int fixup_map_ringbuf[MAX_FIXUPS];
> const char *errstr;
> const char *errstr_unpriv;
> uint32_t insn_processed;
> @@ -640,6 +641,7 @@ static void do_test_fixup(struct bpf_test *test, enum 
> bpf_prog_type prog_type,
> int *fixup_sk_storage_map = test->fixup_sk_storage_map;
> int *fixup_map_event_output = test->fixup_map_event_output;
> int *fixup_map_reuseport_array = test->fixup_map_reuseport_array;
> +   int *fixup_map_ringbuf = test->fixup_map_ringbuf;
>
> if (test->fill_helper) {
> test->fill_insns = calloc(MAX_TEST_INSNS, sizeof(struct 
> bpf_insn));
> @@ -817,6 +819,14 @@ static void do_test_fixup(struct bpf_test *test, enum 
> bpf_prog_type prog_type,
> fixup_map_reuseport_array++;
> } while (*fixup_map_reuseport_array);
> }
> +   if (*fixup_map_ringbuf) {
> +   map_fds[20] = create_map(BPF_MAP_TYPE_RINGBUF, 0,
> +  0, 4096);
> +   do {
> +   prog[*fixup_map_ringbuf].imm = map_fds[20];
> +   fixup_map_ringbuf++;
> +   } while (*fixup_map_ringbuf);
> +   }
>  }
>
>  struct libcap {
> diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c 
> b/tools/testing/selftests/bpf/verifier/spill_fill.c
> index 45d43bf82f26..1833b6c730dd 100644
> --- a/tools/testing/selftests/bpf/verifier/spill_fill.c
> +++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
> @@ -28,6 +28,36 @@
> .result = ACCEPT,
> .result_unpriv = ACCEPT,
>  },
> +{
> +   "check valid spill/fill, ptr to mem",
> +   .insns = {
> +   /* reserve 8 byte ringbuf memory */
> +   BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
> +   BPF_LD_MAP_FD(BPF_REG_1, 0),
> +   BPF_MOV64_IMM(BPF_REG_2, 8),
> +   BPF_MOV64_IMM(BPF_REG_3, 0),
> +   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
> +   /* store a pointer to the reserved memory in R6 */
> +   BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
> +   /* check whether the reservation was successful */
> +   BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
> +   /* spill R6(mem) into the stack */
> +   BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
> +   /* fill it back in R7 */
> +   BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8),
> +   /* should be able to access *(R7) = 0 */
> +   BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0),
> +   /* submit the reserved rungbuf memory */
> +   BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
> +   BPF_MOV64_IMM(BPF_REG_2, 0),
> +   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
> +   BPF_MOV64_IMM(BPF_REG_0, 0),
> +   BPF_EXIT_INSN(),
> +   },
> +   .fixup_map_ringbuf = { 1 },
> +   .result = ACCEPT,
> +   .result_unpriv = ACCEPT,
> +},
>  {
> "check corrupted spill/fill",
> .insns = {
> --
> 2.27.0
>


Re: [PATCH bpf 1/2] bpf: support PTR_TO_MEM{,_OR_NULL} register spilling

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 10:14 AM Gilad Reti  wrote:
>
> Add support for pointer to mem register spilling, to allow the verifier
> to track pointer to valid memory addresses. Such pointers are returned

nit: pointers

> for example by a successful call of the bpf_ringbuf_reserve helper.
>
> This patch was suggested as a solution by Yonghong Song.

You can use the "Suggested-by:" tag for this.

>
> The patch was partially contibuted by CyberArk Software, Inc.

nit: typo *contributed

Also, I was wondering if "partially" here means someone collaborated with you
on the patch? And, in that case:

"Co-developed-by:" would be a better tag here.

Acked-by: KP Singh 


>
> Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier
> support for it")
> Signed-off-by: Gilad Reti 
> ---
>  kernel/bpf/verifier.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 17270b8404f1..36af69fac591 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2217,6 +2217,8 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
> case PTR_TO_RDWR_BUF:
> case PTR_TO_RDWR_BUF_OR_NULL:
> case PTR_TO_PERCPU_BTF_ID:
> +   case PTR_TO_MEM:
> +   case PTR_TO_MEM_OR_NULL:
> return true;
> default:
> return false;
> --
> 2.27.0
>


Re: [PATCH bpf-next] bpf: Clarify return value of probe str helpers

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 1:34 PM Brendan Jackman  wrote:
>
> When the buffer is too small to contain the input string, these
> helpers return the length of the buffer, not the length of the
> original string. This tries to make the docs totally clear about
> that, since "the length of the [copied ]string" could also refer to
> the length of the input.
>
> Signed-off-by: Brendan Jackman 

Acked-by: KP Singh 


Re: [PATCH bpf-next] bpf: Fix a verifier message for alloc size helper arg

2021-01-12 Thread KP Singh
On Tue, Jan 12, 2021 at 1:39 PM Brendan Jackman  wrote:
>
> The error message here is misleading, the argument will be rejected
> unless it is a known constant.
>
> Signed-off-by: Brendan Jackman 
> ---
>  kernel/bpf/verifier.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 17270b8404f1..5534e667bdb1 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4319,7 +4319,7 @@ static int check_func_arg(struct bpf_verifier_env *env, 
> u32 arg,
> err = mark_chain_precision(env, regno);
> } else if (arg_type_is_alloc_size(arg_type)) {
> if (!tnum_is_const(reg->var_off)) {
> -   verbose(env, "R%d unbounded size, use 'var &= const' 
> or 'if (var < const)'\n",

Can you check if:

int var = 1000;
var += 1;

if (var < 2000)
   // call helper

and then using var in the argument works? If so, the existing error
message would be correct.
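i.e. something like this in a BPF program (sketch; assumes a
BPF_MAP_TYPE_RINGBUF map named ringbuf, whose reserve size argument goes
through the alloc size check):

	__u32 size = bpf_get_prandom_u32();
	void *buf;

	if (size < 2000) {
		/* size is bounded here, but still not a known constant */
		buf = bpf_ringbuf_reserve(&ringbuf, size, 0);
		if (buf)
			bpf_ringbuf_discard(buf, 0);
	}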


> +   verbose(env, "R%d is not a known constant'\n",
> regno);
> return -EACCES;
> }
>
> base-commit: e22d7f05e445165e58feddb4e40cc9c0f94453bc
> --
> 2.30.0.284.gd98b1dd5eaa7-goog
>


Re: [PATCH bpf-next 1/4] bpf: enable task local storage for tracing programs

2021-01-11 Thread KP Singh
On Mon, Jan 11, 2021 at 7:57 PM Martin KaFai Lau  wrote:
>
> On Fri, Jan 08, 2021 at 03:19:47PM -0800, Song Liu wrote:
>
> [ ... ]
>
> > diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> > index dd5aedee99e73..9bd47ad2b26f1 100644
> > --- a/kernel/bpf/bpf_local_storage.c
> > +++ b/kernel/bpf/bpf_local_storage.c
> > @@ -140,17 +140,18 @@ static void __bpf_selem_unlink_storage(struct 
> > bpf_local_storage_elem *selem)
> >  {
> >   struct bpf_local_storage *local_storage;
> >   bool free_local_storage = false;
> > + unsigned long flags;
> >
> >   if (unlikely(!selem_linked_to_storage(selem)))
> >   /* selem has already been unlinked from sk */
> >   return;
> >
> >   local_storage = rcu_dereference(selem->local_storage);
> > - raw_spin_lock_bh(&local_storage->lock);
> > + raw_spin_lock_irqsave(&local_storage->lock, flags);
> It will be useful to have a few words in commit message on this change
> for future reference purpose.
>
> Please also remove the in_irq() check from bpf_sk_storage.c
> to avoid confusion in the future.  It probably should
> be in a separate patch.
>
> [ ... ]
>
> > diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> > index 4ef1959a78f27..f654b56907b69 100644
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index 7425b3224891d..3d65c8ebfd594 100644
> [ ... ]
>
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -96,6 +96,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >
> >  #include 
> >  #include 
> > @@ -734,6 +735,7 @@ void __put_task_struct(struct task_struct *tsk)
> >   cgroup_free(tsk);
> >   task_numa_free(tsk, true);
> >   security_task_free(tsk);
> > + bpf_task_storage_free(tsk);
> >   exit_creds(tsk);
> If exit_creds() is traced by a bpf and this bpf is doing
> bpf_task_storage_get(..., BPF_LOCAL_STORAGE_GET_F_CREATE),
> new task storage will be created after bpf_task_storage_free().
>
> I recalled there was an earlier discussion with KP and KP mentioned
> BPF_LSM will not be called with a task that is going away.
> It seems enabling bpf task storage in bpf tracing will break
> this assumption and needs to be addressed?

For tracing programs, I think we will need an allow list where
task local storage can be used.


Re: [PATCH bpf-next 2/4] selftests/bpf: add non-BPF_LSM test for task local storage

2021-01-11 Thread KP Singh
On Mon, Jan 11, 2021 at 6:31 PM Yonghong Song  wrote:
>
>
>
> On 1/8/21 3:19 PM, Song Liu wrote:
> > Task local storage is enabled for tracing programs. Add a test for it
> > without CONFIG_BPF_LSM.

Can you also explain what the test does in the commit log?

It would also be nicer to have a somewhat more realistic selftest which
represents a simple tracing + task local storage use case.

> >
> > Signed-off-by: Song Liu 
> > ---
> >   .../bpf/prog_tests/test_task_local_storage.c  | 34 +
> >   .../selftests/bpf/progs/task_local_storage.c  | 37 +++
> >   2 files changed, 71 insertions(+)
> >   create mode 100644 
> > tools/testing/selftests/bpf/prog_tests/test_task_local_storage.c
> >   create mode 100644 tools/testing/selftests/bpf/progs/task_local_storage.c
> >
> > diff --git 
> > a/tools/testing/selftests/bpf/prog_tests/test_task_local_storage.c 
> > b/tools/testing/selftests/bpf/prog_tests/test_task_local_storage.c
> > new file mode 100644
> > index 0..7de7a154ebbe6
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/test_task_local_storage.c
> > @@ -0,0 +1,34 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Copyright (c) 2020 Facebook */
>
> 2020 -> 2021
>
> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include "task_local_storage.skel.h"
> > +
> > +static unsigned int duration;
> > +
> > +void test_test_task_local_storage(void)
> > +{
> > + struct task_local_storage *skel;
> > + const int count = 10;
> > + int i, err;
> > +
> > + skel = task_local_storage__open_and_load();
> > +
>
> Extra line is unnecessary here.
>
> > + if (CHECK(!skel, "skel_open_and_load", "skeleton open and load 
> > failed\n"))
> > + return;
> > +
> > + err = task_local_storage__attach(skel);
> > +
>
> ditto.
>
> > + if (CHECK(err, "skel_attach", "skeleton attach failed\n"))
> > + goto out;
> > +
> > + for (i = 0; i < count; i++)
> > + usleep(1000);
>
> Does a smaller usleep value will work? If it is, recommend to have a
> smaller value here to reduce test_progs running time.
>
> > + CHECK(skel->bss->value < count, "task_local_storage_value",
> > +   "task local value too small\n");

[...]

> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Copyright (c) 2020 Facebook */
>
> 2020 -> 2021
>
> > +
> > +#include "vmlinux.h"
> > +#include 
> > +#include 
> > +
> > +char _license[] SEC("license") = "GPL";

[...]

> > +{
> > + struct local_data *storage;
>
> If it possible that we do some filtering based on test_progs pid
> so below bpf_task_storage_get is only called for test_progs process?
> This is more targeted and can avoid counter contributions from
> other unrelated processes and make test_task_local_storage.c result
> comparison more meaningful.

Indeed, have a look at the monitored_pid approach that some of the LSM
selftest programs use.
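The pattern is roughly (sketch, names illustrative):

	int monitored_pid = 0;

	/* in the program: only account for the test_progs process */
	__u32 pid = bpf_get_current_pid_tgid() >> 32;

	if (!monitored_pid || pid != monitored_pid)
		return 0;

with the userspace part of the test doing skel->bss->monitored_pid = getpid()
before attaching.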

>
> > +
> > + storage = bpf_task_storage_get(_storage_map,
> > +next, 0,
> > +BPF_LOCAL_STORAGE_GET_F_CREATE);
> > + if (storage) {
> > + storage->val++;
> > + value = storage->val;
> > + }
> > + return 0;
> > +}
> >


Re: [PATCH bpf-next 1/4] bpf: enable task local storage for tracing programs

2021-01-11 Thread KP Singh
On Mon, Jan 11, 2021 at 7:27 AM Yonghong Song  wrote:
>
>
>
> On 1/8/21 3:19 PM, Song Liu wrote:
> > To access per-task data, BPF program typically creates a hash table with
> > pid as the key. This is not ideal because:
> >   1. The use need to estimate requires size of the hash table, with may be
> >  inaccurate;
> >   2. Big hash tables are slow;
> >   3. To clean up the data properly during task terminations, the user need
> >  to write code.
> >
> > Task local storage overcomes these issues and becomes a better option for
> > these per-task data. Task local storage is only available to BPF_LSM. Now
> > enable it for tracing programs.
> >
> > Reported-by: kernel test robot 
> > Signed-off-by: Song Liu 
> > ---

[...]

> >   struct cfs_rq;
> >   struct fs_struct;
> > @@ -1348,6 +1349,10 @@ struct task_struct {
> >   /* Used by LSM modules for access restriction: */
> >   void*security;
> >   #endif
> > +#ifdef CONFIG_BPF_SYSCALL
> > + /* Used by BPF task local storage */
> > + struct bpf_local_storage*bpf_storage;
> > +#endif
>
> I remembered there is a discussion where KP initially wanted to put
> bpf_local_storage in task_struct, but later on changed to
> use in lsm as his use case mostly for lsm. Did anybody
> remember the details of the discussion? Just want to be
> sure what is the concern people has with putting bpf_local_storage
> in task_struct and whether the use case presented by
> Song will justify it.
>

If I recall correctly, the discussion was about inode local storage and
it was decided to use the security blob since the use-case was only LSM
programs. Since we now plan to use it in tracing, detangling the
dependency from CONFIG_BPF_LSM sounds logical to me.


> >
> >   #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
> >   unsigned long   lowest_stack;
> > diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> > index d1249340fd6ba..ca995fdfa45e7 100644
> > --- a/kernel/bpf/Makefile
> > +++ b/kernel/bpf/Makefile
> > @@ -8,9 +8,8 @@ CFLAGS_core.o += $(call cc-disable-warning, override-init) 
> > $(cflags-nogcse-yy)
> >
> >   obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o 
> > tnum.o bpf_iter.o map_iter.o task_iter.o prog_iter.o
> >   obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o 
> > bpf_lru_list.o lpm_trie.o map_in_map.o
> > -obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
> > +obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o 
> > bpf_task_storage.o
> >   obj-${CONFIG_BPF_LSM} += bpf_inode_storage.o
> > -obj-${CONFIG_BPF_LSM}  += bpf_task_storage.o
> >   obj-$(CONFIG_BPF_SYSCALL) += disasm.o
> >   obj-$(CONFIG_BPF_JIT) += trampoline.o
> >   obj-$(CONFIG_BPF_SYSCALL) += btf.o
> [...]


Re: [PATCH bpf-next 1/4] bpf: enable task local storage for tracing programs

2021-01-11 Thread KP Singh
On Sat, Jan 9, 2021 at 12:35 AM Song Liu  wrote:
>
> To access per-task data, BPF program typically creates a hash table with
> pid as the key. This is not ideal because:
>  1. The use need to estimate requires size of the hash table, with may be
> inaccurate;
>  2. Big hash tables are slow;
>  3. To clean up the data properly during task terminations, the user need
> to write code.
>
> Task local storage overcomes these issues and becomes a better option for
> these per-task data. Task local storage is only available to BPF_LSM. Now
> enable it for tracing programs.

Also mention here that you change the pointer from being a security blob to a
dedicated member in the task struct. I assume this is because you want to
use it without CONFIG_BPF_LSM?

>

Can you also mention the reasons for changing the
raw_spin_lock_bh to raw_spin_lock_irqsave in the commit log?


> Reported-by: kernel test robot 
> Signed-off-by: Song Liu 
> ---
>  include/linux/bpf.h|  7 +++
>  include/linux/bpf_lsm.h| 22 --
>  include/linux/bpf_types.h  |  2 +-
>  include/linux/sched.h  |  5 +
>  kernel/bpf/Makefile|  3 +--
>  kernel/bpf/bpf_local_storage.c | 28 +---
>  kernel/bpf/bpf_lsm.c   |  4 
>  kernel/bpf/bpf_task_storage.c  | 26 ++
>  kernel/fork.c  |  5 +
>  kernel/trace/bpf_trace.c   |  4 
>  10 files changed, 46 insertions(+), 60 deletions(-)
>

[...]


Re: [PATCH bpf-next v4 2/4] bpf: Expose bpf_get_socket_cookie to tracing programs

2020-12-09 Thread KP Singh
On Wed, Dec 9, 2020 at 2:29 PM Florent Revest  wrote:
>
> This needs two new helpers, one that works in a sleepable context (using
> sock_gen_cookie which disables/enables preemption) and one that does not
> (for performance reasons). Both take a struct sock pointer and need to
> check it for NULLness.
>
> This helper could also be useful to other BPF program types such as LSM.
>
> Signed-off-by: Florent Revest 

Acked-by: KP Singh 


Re: [PATCH bpf-next v3 2/4] bpf: Expose bpf_get_socket_cookie to tracing programs

2020-12-08 Thread KP Singh
On Tue, Dec 8, 2020 at 9:20 PM Florent Revest  wrote:
>
> This needs two new helpers, one that works in a sleepable context (using
> sock_gen_cookie which disables/enables preemption) and one that does not
> (for performance reasons). Both take a struct sock pointer and need to
> check it for NULLness.
>
> This helper could also be useful to other BPF program types such as LSM.
>
> Signed-off-by: Florent Revest 
> ---
>  include/linux/bpf.h|  2 ++
>  include/uapi/linux/bpf.h   |  7 +++
>  kernel/trace/bpf_trace.c   |  4 
>  net/core/filter.c  | 24 
>  tools/include/uapi/linux/bpf.h |  7 +++
>  5 files changed, 44 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index d05e75ed8c1b..2ecda549b773 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1859,6 +1859,8 @@ extern const struct bpf_func_proto 
> bpf_snprintf_btf_proto;
>  extern const struct bpf_func_proto bpf_per_cpu_ptr_proto;
>  extern const struct bpf_func_proto bpf_this_cpu_ptr_proto;
>  extern const struct bpf_func_proto bpf_ktime_get_coarse_ns_proto;
> +extern const struct bpf_func_proto bpf_get_socket_ptr_cookie_sleepable_proto;
> +extern const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto;
>
>  const struct bpf_func_proto *bpf_tracing_func_proto(
> enum bpf_func_id func_id, const struct bpf_prog *prog);
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index ba59309f4d18..9ac66cf25959 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1667,6 +1667,13 @@ union bpf_attr {
>   * Return
>   * A 8-byte long unique number.
>   *
> + * u64 bpf_get_socket_cookie(void *sk)
> + * Description
> + * Equivalent to **bpf_get_socket_cookie**\ () helper that 
> accepts
> + * *sk*, but gets socket from a BTF **struct sock**.
> + * Return
> + * A 8-byte long unique number.
> + *
>   * u32 bpf_get_socket_uid(struct sk_buff *skb)
>   * Return
>   * The owner UID of the socket associated to *skb*. If the socket
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 0cf0a6331482..99accc2146bc 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1778,6 +1778,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, 
> const struct bpf_prog *prog)
> return &bpf_sk_storage_get_tracing_proto;
> case BPF_FUNC_sk_storage_delete:
> return &bpf_sk_storage_delete_tracing_proto;
> +   case BPF_FUNC_get_socket_cookie:
> +   return prog->aux->sleepable ?
> +  &bpf_get_socket_ptr_cookie_sleepable_proto :
> +  &bpf_get_socket_ptr_cookie_proto;
>  #endif
> case BPF_FUNC_seq_printf:
> return prog->expected_attach_type == BPF_TRACE_ITER ?
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 77001a35768f..34877796ab5b 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -4631,6 +4631,30 @@ static const struct bpf_func_proto 
> bpf_get_socket_cookie_sock_proto = {
> .arg1_type  = ARG_PTR_TO_CTX,
>  };
>
> +BPF_CALL_1(bpf_get_socket_ptr_cookie_sleepable, struct sock *, sk)
> +{
> +   return sk ? sock_gen_cookie(sk) : 0;

My understanding is you can simply always call sock_gen_cookie and not
have two protos.

This will disable preemption in sleepable programs and not have any effect
in non-sleepable programs since preemption will already be disabled.
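i.e. a single helper along the lines of (sketch):

	BPF_CALL_1(bpf_get_socket_ptr_cookie, struct sock *, sk)
	{
		return sk ? sock_gen_cookie(sk) : 0;
	}

and tracing_prog_func_proto() can then return the one proto for
BPF_FUNC_get_socket_cookie regardless of prog->aux->sleepable.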


Re: [PATCH bpf-next v3] bpf: Only provide bpf_sock_from_file with CONFIG_NET

2020-12-08 Thread KP Singh
On Tue, Dec 8, 2020 at 9:56 PM Martin KaFai Lau  wrote:
>
> On Tue, Dec 08, 2020 at 06:36:23PM +0100, Florent Revest wrote:
> > This moves the bpf_sock_from_file definition into net/core/filter.c
> > which only gets compiled with CONFIG_NET and also moves the helper proto
> > usage next to other tracing helpers that are conditional on CONFIG_NET.
> >
> > This avoids
> >   ld: kernel/trace/bpf_trace.o: in function `bpf_sock_from_file':
> >   bpf_trace.c:(.text+0xe23): undefined reference to `sock_from_file'
> > When compiling a kernel with BPF and without NET.
> Acked-by: Martin KaFai Lau 

Acked-by: KP Singh 


Re: [PATCH bpf-next v3 3/3] bpf: Add a selftest for bpf_ima_inode_hash

2020-11-27 Thread KP Singh
On Fri, Nov 27, 2020 at 5:29 AM Andrii Nakryiko
 wrote:
>
> On Tue, Nov 24, 2020 at 7:16 AM KP Singh  wrote:
> >
> > From: KP Singh 
> >

[...]

>
> > +cleanup() {
> > +local tmp_dir="$1"
> > +local mount_img="${tmp_dir}/test.img"
> > +local mount_dir="${tmp_dir}/mnt"
> > +
> > +local loop_devices=$(losetup -j ${mount_img} -O NAME --noheadings)
>
> libbpf and kernel-patches CIs are using BusyBox environment which has
> losetup that doesn't support -j option. Is there some way to work
> around that? What we have is this:
>
> BusyBox v1.31.1 () multi-call binary.
>
> Usage: losetup [-rP] [-o OFS] {-f|LOOPDEV} FILE: associate loop devices
>
> losetup -c LOOPDEV: reread file size
>
> losetup -d LOOPDEV: disassociate
>
> losetup -a: show status

I can try to grep and parse the status output as a fallback. Will send another
fix.
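Something like this should work as a fallback (untested sketch):

    # `losetup -a` prints one "/dev/loopN: ..." line per device, so filter
    # on the backing image path and keep only the device name.
    local loop_devices=$(losetup -a | grep "${mount_img}" | cut -d: -f1)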

- KP

>
> losetup -f: show next free loop device
>
> -o OFSStart OFS bytes into FILE
>
> -PScan for partitions
>
> -rRead-only
>
> -fShow/use next free loop device
>
>
> > +for loop_dev in "${loop_devices}"; do

[...]


Re: [PATCH bpf-next 1/2] bpf: Add a bpf_kallsyms_lookup helper

2020-11-27 Thread KP Singh
On Fri, Nov 27, 2020 at 8:35 AM Yonghong Song  wrote:
>
>
>
> On 11/26/20 8:57 AM, Florent Revest wrote:
> > This helper exposes the kallsyms_lookup function to eBPF tracing
> > programs. This can be used to retrieve the name of the symbol at an
> > address. For example, when hooking into nf_register_net_hook, one can
> > audit the name of the registered netfilter hook and potentially also
> > the name of the module in which the symbol is located.
> >
> > Signed-off-by: Florent Revest 
> > ---
> >   include/uapi/linux/bpf.h   | 16 +
> >   kernel/trace/bpf_trace.c   | 41 ++
> >   tools/include/uapi/linux/bpf.h | 16 +
> >   3 files changed, 73 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index c3458ec1f30a..670998635eac 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -3817,6 +3817,21 @@ union bpf_attr {
> >*  The **hash_algo** is returned on success,
> >*  **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
> >*  invalid arguments are passed.
> > + *
> > + * long bpf_kallsyms_lookup(u64 address, char *symbol, u32 symbol_size, 
> > char *module, u32 module_size)
> > + *   Description
> > + *   Uses kallsyms to write the name of the symbol at *address*
> > + *   into *symbol* of size *symbol_sz*. This is guaranteed to be
> > + *   zero terminated.
> > + *   If the symbol is in a module, up to *module_size* bytes of
> > + *   the module name is written in *module*. This is also
> > + *   guaranteed to be zero-terminated. Note: a module name
> > + *   is always shorter than 64 bytes.
> > + *   Return
> > + *   On success, the strictly positive length of the full symbol
> > + *   name, If this is greater than *symbol_size*, the written
> > + *   symbol is truncated.
> > + *   On error, a negative value.
> >*/
> >   #define __BPF_FUNC_MAPPER(FN)   \
> >   FN(unspec), \
> > @@ -3981,6 +3996,7 @@ union bpf_attr {
> >   FN(bprm_opts_set),  \
> >   FN(ktime_get_coarse_ns),\
> >   FN(ima_inode_hash), \
> > + FN(kallsyms_lookup),\
> >   /* */
> >
> >   /* integer value in 'imm' field of BPF_CALL instruction selects which 
> > helper
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index d255bc9b2bfa..9d86e20c2b13 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -17,6 +17,7 @@
> >   #include 
> >   #include 
> >   #include 
> > +#include 
> >
> >   #include 
> >
> > @@ -1260,6 +1261,44 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = 
> > {
> >   .arg5_type  = ARG_ANYTHING,
> >   };
> >
> > +BPF_CALL_5(bpf_kallsyms_lookup, u64, address, char *, symbol, u32, 
> > symbol_size,
> > +char *, module, u32, module_size)
> > +{
> > + char buffer[KSYM_SYMBOL_LEN];
> > + unsigned long offset, size;
> > + const char *name;
> > + char *modname;
> > + long ret;
> > +
> > + name = kallsyms_lookup(address, &size, &offset, &modname, buffer);
> > + if (!name)
> > + return -EINVAL;
> > +
> > + ret = strlen(name) + 1;
> > + if (symbol_size) {
> > + strncpy(symbol, name, symbol_size);
> > + symbol[symbol_size - 1] = '\0';
> > + }
> > +
> > + if (modname && module_size) {
> > + strncpy(module, modname, module_size);
> > + module[module_size - 1] = '\0';
>
> In this case, module name may be truncated and user did not get any
> indication from return value. In the helper description, it is mentioned
> that module name currently is most 64 bytes. But from UAPI perspective,
> it may be still good to return something to let user know the name
> is truncated.
>
> I do not know what is the best way to do this. One suggestion is
> to break it into two helpers, one for symbol name and another

I think it would be slightly preferable to have one helper though, maybe
something like bpf_get_symbol_info (better names anyone? :)) with flags to
get the module name or the symbol name depending on the flag?
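Roughly (purely hypothetical, nothing like this exists today):

	/* flags for a hypothetical bpf_get_symbol_info() */
	#define BPF_SYMBOL_NAME		0
	#define BPF_SYMBOL_MODULE_NAME	1

	long bpf_get_symbol_info(u64 address, char *buf, u32 buf_size, u64 flags);

where the flag selects whether the symbol name or the module name is written
into *buf*, and the return value can then indicate truncation for whichever
one was requested.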

> for module name. What is the use cases people want to get both
> symbol name and module name and is it common?

The use case would be to disambiguate symbols in the
kernel from the ones from a kernel module. Similar to what
/proc/kallsyms does:

T cpufreq_gov_powersave_init [cpufreq_powersave]

>
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +const struct bpf_func_proto bpf_kallsyms_lookup_proto = {
> > + .func   = bpf_kallsyms_lookup,
> > + .gpl_only   = false,
> > + .ret_type   = RET_INTEGER,
> > + .arg1_type  = ARG_ANYTHING,
> > + .arg2_type  = ARG_PTR_TO_MEM,
> ARG_PTR_TO_UNINIT_MEM?
>
> > + .arg3_type  = ARG_CONST_SIZE,
> ARG_CONST_SIZE_OR_ZERO? This is 

Re: [PATCH bpf-next 1/2] bpf: Add a bpf_kallsyms_lookup helper

2020-11-26 Thread KP Singh
[...]

> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index c3458ec1f30a..670998635eac 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -3817,6 +3817,21 @@ union bpf_attr {
>   * The **hash_algo** is returned on success,
>   * **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
>   * invalid arguments are passed.
> + *
> + * long bpf_kallsyms_lookup(u64 address, char *symbol, u32 symbol_size, char 
> *module, u32 module_size)
> + * Description
> + * Uses kallsyms to write the name of the symbol at *address*
> + * into *symbol* of size *symbol_sz*. This is guaranteed to be
> + * zero terminated.
> + * If the symbol is in a module, up to *module_size* bytes of
> + * the module name is written in *module*. This is also
> + * guaranteed to be zero-terminated. Note: a module name
> + * is always shorter than 64 bytes.
> + * Return
> + * On success, the strictly positive length of the full symbol
> + * name, If this is greater than *symbol_size*, the written
> + * symbol is truncated.
> + * On error, a negative value.
>   */
>  #define __BPF_FUNC_MAPPER(FN)  \
> FN(unspec), \
> @@ -3981,6 +3996,7 @@ union bpf_attr {
> FN(bprm_opts_set),  \
> FN(ktime_get_coarse_ns),\
> FN(ima_inode_hash), \
> +   FN(kallsyms_lookup),\
> /* */
>
>  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index d255bc9b2bfa..9d86e20c2b13 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -17,6 +17,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>
>  #include 
>
> @@ -1260,6 +1261,44 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
> .arg5_type  = ARG_ANYTHING,
>  };
>
> +BPF_CALL_5(bpf_kallsyms_lookup, u64, address, char *, symbol, u32, 
> symbol_size,
> +  char *, module, u32, module_size)
> +{
> +   char buffer[KSYM_SYMBOL_LEN];
> +   unsigned long offset, size;
> +   const char *name;
> +   char *modname;
> +   long ret;
> +
> +   name = kallsyms_lookup(address, &size, &offset, &modname, buffer);
> +   if (!name)
> +   return -EINVAL;
> +
> +   ret = strlen(name) + 1;
> +   if (symbol_size) {
> +   strncpy(symbol, name, symbol_size);
> +   symbol[symbol_size - 1] = '\0';
> +   }
> +
> +   if (modname && module_size) {
> +   strncpy(module, modname, module_size);

The return value does not seem to be impacted by the truncation of the
module name; I wonder if it is better to just use a single buffer.

For example, the proc kallsyms shows symbols as:

<symbol_name> [module_name]

https://github.com/torvalds/linux/blob/master/kernel/kallsyms.c#L648

The square brackets do seem to be a waste here, so maybe we could use
a single character as a separator?

> +   module[module_size - 1] = '\0';
> +   }
> +
> +   return ret;
> +}
> +
> +const struct bpf_func_proto bpf_kallsyms_lookup_proto = {
> +   .func   = bpf_kallsyms_lookup,
> +   .gpl_only   = false,
> +   .ret_type   = RET_INTEGER,
> +   .arg1_type  = ARG_ANYTHING,
> +   .arg2_type  = ARG_PTR_TO_MEM,
> +   .arg3_type  = ARG_CONST_SIZE,
> +   .arg4_type  = ARG_PTR_TO_MEM,
> +   .arg5_type  = ARG_CONST_SIZE,
> +};
> +

[...]


Re: [PATCH bpf-next v3 3/6] bpf: Expose bpf_sk_storage_* to iterator programs

2020-11-26 Thread KP Singh
On Thu, Nov 26, 2020 at 5:45 PM Florent Revest  wrote:
>
> Iterators are currently used to expose kernel information to userspace
> over fast procfs-like files but iterators could also be used to
> manipulate local storage. For example, the task_file iterator could be
> used to initialize a socket local storage with associations between
> processes and sockets or to selectively delete local storage values.
>
> Signed-off-by: Florent Revest 
> Acked-by: Martin KaFai Lau 

Acked-by: KP Singh 


Re: [PATCH bpf-next v3 1/6] net: Remove the err argument from sock_from_file

2020-11-26 Thread KP Singh
On Thu, Nov 26, 2020 at 5:45 PM Florent Revest  wrote:
>
> Currently, the sock_from_file prototype takes an "err" pointer that is
> either not set or set to -ENOTSOCK IFF the returned socket is NULL. This
> makes the error redundant and it is ignored by a few callers.
>
> This patch simplifies the API by letting callers deduce the error based
> on whether the returned socket is NULL or not.
>
> Suggested-by: Al Viro 
> Signed-off-by: Florent Revest 

Reviewed-by: KP Singh 


Re: [PATCH bpf-next v3 3/3] bpf: Add a selftest for bpf_ima_inode_hash

2020-11-26 Thread KP Singh
[...]

> > + exit(errno);
>
> Running test_progs-no-alu32, the test failed as:
>
> root@arch-fb-vm1:~/net-next/net-next/tools/testing/selftests/bpf
> ./test_progs-no_alu32 -t test_ima

Note to self: Also start testing test_progs-no_alu32

>
> sh: ./ima_setup.sh: No such file or directory
>
> sh: ./ima_setup.sh: No such file or directory
>
> test_test_ima:PASS:skel_load 0 nsec
>
> test_test_ima:PASS:attach 0 nsec
>
> test_test_ima:PASS:mkdtemp 0 nsec
>
> test_test_ima:FAIL:56
>
> test_test_ima:FAIL:71
>
> #114 test_ima:FAIL
>
> Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED
>
> Although the file is indeed in this directory:
> root@arch-fb-vm1:~/net-next/net-next/tools/testing/selftests/bpf ls
> ima_setup.sh
> ima_setup.sh
>
> I think the execution actually tries to get file from
> no_alu32 directory to avoid reusing the same files in
> .../testing/selftests/bpf for -mcpu=v3 purpose.
>
> The following change, which copies ima_setup.sh to
> no_alu32 directory, seems fixing the issue:

Thanks!

>
> TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c
>  \
>   network_helpers.c testing_helpers.c\
>   btf_helpers.c  flow_dissector_load.h
>   TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read  \
> +  ima_setup.sh \
> $(wildcard progs/btf_dump_test_case_*.c)
>   TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
>   TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
>
> Could you do a followup on this?

Yes, I will send out a fix today.

- KP


Re: [PATCH bpf-next v3 1/3] ima: Implement ima_inode_hash

2020-11-25 Thread KP Singh
On Tue, Nov 24, 2020 at 6:35 PM Yonghong Song  wrote:
>
>
>
> On 11/24/20 7:12 AM, KP Singh wrote:
> > From: KP Singh 
> >
> > This is in preparation to add a helper for BPF LSM programs to use
> > IMA hashes when attached to LSM hooks. There are LSM hooks like
> > inode_unlink which do not have a struct file * argument and cannot
> > use the existing ima_file_hash API.
> >
> > An inode based API is, therefore, useful in LSM based detections like an
> > executable trying to delete itself which rely on the inode_unlink LSM
> > hook.
> >
> > Moreover, the ima_file_hash function does nothing with the struct file
> > pointer apart from calling file_inode on it and converting it to an
> > inode.
> >
> > Signed-off-by: KP Singh 
>
> There is no change for this patch compared to previous version,
> so you can carry my Ack.
>
> Acked-by: Yonghong Song 

I am guessing:

* We need an Ack from Mimi/James.
* As for which tree, I guess bpf-next would be better since the BPF helper
  and the selftest depend on it.


Re: [PATCH bpf-next v3 3/3] bpf: Add a selftest for bpf_ima_inode_hash

2020-11-24 Thread KP Singh
On Wed, Nov 25, 2020 at 3:20 AM Mimi Zohar  wrote:
>
> On Tue, 2020-11-24 at 15:12 +0000, KP Singh wrote:
> > diff --git a/tools/testing/selftests/bpf/ima_setup.sh 
> > b/tools/testing/selftests/bpf/ima_setup.sh
> > new file mode 100644
> > index ..15490ccc5e55
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/ima_setup.sh
> > @@ -0,0 +1,80 @@
> > +#!/bin/bash
> > +# SPDX-License-Identifier: GPL-2.0
> > +
> > +set -e
> > +set -u
> > +
> > +IMA_POLICY_FILE="/sys/kernel/security/ima/policy"
> > +TEST_BINARY="/bin/true"
> > +
> > +usage()
> > +{
> > +echo "Usage: $0  "
> > +exit 1
> > +}
> > +
> > +setup()
> > +{
> > +local tmp_dir="$1"
> > +local mount_img="${tmp_dir}/test.img"
> > +local mount_dir="${tmp_dir}/mnt"
> > +local copied_bin_path="${mount_dir}/$(basename ${TEST_BINARY})"
> > +mkdir -p ${mount_dir}
> > +
> > +dd if=/dev/zero of="${mount_img}" bs=1M count=10
> > +
> > +local loop_device="$(losetup --find --show ${mount_img})"
> > +
> > +mkfs.ext4 "${loop_device}"
> > +mount "${loop_device}" "${mount_dir}"
> > +
> > +cp "${TEST_BINARY}" "${mount_dir}"
> > +local mount_uuid="$(blkid -s UUID -o value ${loop_device})"
> > +echo "measure func=BPRM_CHECK fsuuid=${mount_uuid}" > 
> > ${IMA_POLICY_FILE}
>
> Anyone using IMA, normally define policy rules requiring the policy
> itself to be signed.   Instead of writing the policy rules, write the

The goal of this selftest is not to fully exercise IMA functionality but to
check that the BPF helper works and returns a hash with the minimal possible
IMA config dependencies. And it seems like we can accomplish this by simply
writing the policy to securityfs directly.

From what I noticed, IMA_APPRAISE_REQUIRE_POLICY_SIGS
requires configuring a lot of other kernel options
(IMA_APPRAISE, ASYMMETRIC_KEYS etc.) that seem
like too much for bpf self tests to depend on.

I guess we can independently add selftests for IMA which represent
a more realistic IMA configuration. Hope this sounds reasonable?

> signed policy file pathname.  Refer to dracut commit 479b5cd9
> ("98integrity: support validating the IMA policy file signature").
>
> Both enabling IMA_APPRAISE_REQUIRE_POLICY_SIGS and the builtin
> "appraise_tcb" policy require loading a signed policy.

Thanks for the pointers.

- KP
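
For reference, a custom policy is loaded by writing to the same securityfs
node either the rules themselves or the pathname of a policy file; a minimal
sketch, assuming a signed policy at /etc/ima/ima-policy (the path is an
assumption):

    # append rules directly (what the selftest does)
    echo "measure func=BPRM_CHECK" > /sys/kernel/security/ima/policy

    # or load a (signed) policy file by writing its absolute pathname
    echo "/etc/ima/ima-policy" > /sys/kernel/security/ima/policy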

>



> Mimi
>


[PATCH bpf-next v3 0/3] Implement bpf_ima_inode_hash

2020-11-24 Thread KP Singh
From: KP Singh 

# v2 -> v3

- Fixed an issue pointed out by Alexei, the helper should only be
  exposed to sleepable hooks.
- Update the selftests to constrain the IMA policy update to a loopback
  filesystem specifically created for the test. Also, split this out
  from the LSM test. I dropped the Ack from this last patch since this
  is a re-write.

KP Singh (3):
  ima: Implement ima_inode_hash
  bpf: Add a BPF helper for getting the IMA hash of an inode
  bpf: Add a selftest for bpf_ima_inode_hash

 include/linux/ima.h   |  6 ++
 include/uapi/linux/bpf.h  | 11 +++
 kernel/bpf/bpf_lsm.c  | 26 ++
 scripts/bpf_helpers_doc.py|  2 +
 security/integrity/ima/ima_main.c | 78 --
 tools/include/uapi/linux/bpf.h| 11 +++
 tools/testing/selftests/bpf/config|  4 +
 tools/testing/selftests/bpf/ima_setup.sh  | 80 +++
 .../selftests/bpf/prog_tests/test_ima.c   | 74 +
 tools/testing/selftests/bpf/progs/ima.c   | 28 +++
 10 files changed, 296 insertions(+), 24 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/ima_setup.sh
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_ima.c
 create mode 100644 tools/testing/selftests/bpf/progs/ima.c

-- 
2.29.2.454.gaff20da3a2-goog



[PATCH bpf-next v3 2/3] bpf: Add a BPF helper for getting the IMA hash of an inode

2020-11-24 Thread KP Singh
From: KP Singh 

Provide a wrapper function to get the IMA hash of an inode. This helper
is useful in fingerprinting files (e.g executables on execution) and
using these fingerprints in detections like an executable unlinking
itself.

Since the ima_inode_hash can sleep, it's only allowed for sleepable
LSM hooks.

Signed-off-by: KP Singh 
---
 include/uapi/linux/bpf.h   | 11 +++
 kernel/bpf/bpf_lsm.c   | 26 ++
 scripts/bpf_helpers_doc.py |  2 ++
 tools/include/uapi/linux/bpf.h | 11 +++
 4 files changed, 50 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 3ca6146f001a..c3458ec1f30a 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3807,6 +3807,16 @@ union bpf_attr {
  * See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
  * Return
  * Current *ktime*.
+ *
+ * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
+ * Description
+ * Returns the stored IMA hash of the *inode* (if it's available).
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ * Return
+ * The **hash_algo** is returned on success,
+ * **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
+ * invalid arguments are passed.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3970,6 +3980,7 @@ union bpf_attr {
FN(get_current_task_btf),   \
FN(bprm_opts_set),  \
FN(ktime_get_coarse_ns),\
+   FN(ima_inode_hash), \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index b4f27a874092..70e5e0b6d69d 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -75,6 +76,29 @@ const static struct bpf_func_proto bpf_bprm_opts_set_proto = 
{
.arg2_type  = ARG_ANYTHING,
 };
 
+BPF_CALL_3(bpf_ima_inode_hash, struct inode *, inode, void *, dst, u32, size)
+{
+   return ima_inode_hash(inode, dst, size);
+}
+
+static bool bpf_ima_inode_hash_allowed(const struct bpf_prog *prog)
+{
+   return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
+}
+
+BTF_ID_LIST_SINGLE(bpf_ima_inode_hash_btf_ids, struct, inode)
+
+const static struct bpf_func_proto bpf_ima_inode_hash_proto = {
+   .func   = bpf_ima_inode_hash,
+   .gpl_only   = false,
+   .ret_type   = RET_INTEGER,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_ima_inode_hash_btf_ids[0],
+   .arg2_type  = ARG_PTR_TO_UNINIT_MEM,
+   .arg3_type  = ARG_CONST_SIZE,
+   .allowed= bpf_ima_inode_hash_allowed,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -97,6 +121,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_delete_proto;
case BPF_FUNC_bprm_opts_set:
return &bpf_bprm_opts_set_proto;
+   case BPF_FUNC_ima_inode_hash:
+   return prog->aux->sleepable ? &bpf_ima_inode_hash_proto : NULL;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index c5bc947a70ad..8b829748d488 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -436,6 +436,7 @@ class PrinterHelpers(Printer):
 'struct xdp_md',
 'struct path',
 'struct btf_ptr',
+'struct inode',
 ]
 known_types = {
 '...',
@@ -480,6 +481,7 @@ class PrinterHelpers(Printer):
 'struct task_struct',
 'struct path',
 'struct btf_ptr',
+'struct inode',
 }
 mapped_types = {
 'u8': '__u8',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 3ca6146f001a..c3458ec1f30a 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3807,6 +3807,16 @@ union bpf_attr {
  * See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
  * Return
  * Current *ktime*.
+ *
+ * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
+ * Description
+ * Returns the stored IMA hash of the *inode* (if it's available).
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ * Return
+ * The **hash_algo** is returned on success,
+ * **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
+ *

[PATCH bpf-next v3 3/3] bpf: Add a selftest for bpf_ima_inode_hash

2020-11-24 Thread KP Singh
From: KP Singh 

The test does the following:

- Mounts a loopback filesystem and appends the IMA policy to measure
  executions only on this file-system. Restricting the IMA policy to a
  particular filesystem prevents a system-wide IMA policy change.
- Executes an executable copied to this loopback filesystem.
- Calls the bpf_ima_inode_hash in the bprm_committed_creds hook and
  checks if the call succeeded and checks if a hash was calculated.

The test shells out to the added ima_setup.sh script, as the setup is
better handled in a shell script and would be more complicated to do in
the test program itself or by shelling out individual commands from C.

The list of required configs (i.e. IMA, SECURITYFS,
IMA_{WRITE,READ}_POLICY) for running this test are also updated.

Signed-off-by: KP Singh 
---
 tools/testing/selftests/bpf/config|  4 +
 tools/testing/selftests/bpf/ima_setup.sh  | 80 +++
 .../selftests/bpf/prog_tests/test_ima.c   | 74 +
 tools/testing/selftests/bpf/progs/ima.c   | 28 +++
 4 files changed, 186 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/ima_setup.sh
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_ima.c
 create mode 100644 tools/testing/selftests/bpf/progs/ima.c

diff --git a/tools/testing/selftests/bpf/config 
b/tools/testing/selftests/bpf/config
index 2118e23ac07a..365bf9771b07 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -39,3 +39,7 @@ CONFIG_BPF_JIT=y
 CONFIG_BPF_LSM=y
 CONFIG_SECURITY=y
 CONFIG_LIRC=y
+CONFIG_IMA=y
+CONFIG_SECURITYFS=y
+CONFIG_IMA_WRITE_POLICY=y
+CONFIG_IMA_READ_POLICY=y
diff --git a/tools/testing/selftests/bpf/ima_setup.sh 
b/tools/testing/selftests/bpf/ima_setup.sh
new file mode 100644
index ..15490ccc5e55
--- /dev/null
+++ b/tools/testing/selftests/bpf/ima_setup.sh
@@ -0,0 +1,80 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+set -e
+set -u
+
+IMA_POLICY_FILE="/sys/kernel/security/ima/policy"
+TEST_BINARY="/bin/true"
+
+usage()
+{
+echo "Usage: $0  "
+exit 1
+}
+
+setup()
+{
+local tmp_dir="$1"
+local mount_img="${tmp_dir}/test.img"
+local mount_dir="${tmp_dir}/mnt"
+local copied_bin_path="${mount_dir}/$(basename ${TEST_BINARY})"
+mkdir -p ${mount_dir}
+
+dd if=/dev/zero of="${mount_img}" bs=1M count=10
+
+local loop_device="$(losetup --find --show ${mount_img})"
+
+mkfs.ext4 "${loop_device}"
+mount "${loop_device}" "${mount_dir}"
+
+cp "${TEST_BINARY}" "${mount_dir}"
+local mount_uuid="$(blkid -s UUID -o value ${loop_device})"
+echo "measure func=BPRM_CHECK fsuuid=${mount_uuid}" > 
${IMA_POLICY_FILE}
+}
+
+cleanup() {
+local tmp_dir="$1"
+local mount_img="${tmp_dir}/test.img"
+local mount_dir="${tmp_dir}/mnt"
+
+local loop_devices=$(losetup -j ${mount_img} -O NAME --noheadings)
+for loop_dev in "${loop_devices}"; do
+losetup -d $loop_dev
+done
+
+umount ${mount_dir}
+rm -rf ${tmp_dir}
+}
+
+run()
+{
+local tmp_dir="$1"
+local mount_dir="${tmp_dir}/mnt"
+local copied_bin_path="${mount_dir}/$(basename ${TEST_BINARY})"
+
+exec "${copied_bin_path}"
+}
+
+main()
+{
+[[ $# -ne 2 ]] && usage
+
+local action="$1"
+local tmp_dir="$2"
+
+[[ ! -d "${tmp_dir}" ]] && echo "Directory ${tmp_dir} doesn't exist" 
&& exit 1
+
+if [[ "${action}" == "setup" ]]; then
+setup "${tmp_dir}"
+elif [[ "${action}" == "cleanup" ]]; then
+cleanup "${tmp_dir}"
+elif [[ "${action}" == "run" ]]; then
+run "${tmp_dir}"
+else
+echo "Unknown action: ${action}"
+exit 1
+fi
+}
+
+main "$@"
diff --git a/tools/testing/selftests/bpf/prog_tests/test_ima.c 
b/tools/testing/selftests/bpf/prog_tests/test_ima.c
new file mode 100644
index ..61fca681d524
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_ima.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ima.skel.h"
+
+static int run_measured_process(const char *measured_dir, u32 *monitored_pid)
+{
+   int child_pid, child_status;
+
+   child_pid = fork();
+   if (child_pid == 0) {
+   *monitored_pid = getpid();
+   execlp("./ima_set

[PATCH bpf-next v3 1/3] ima: Implement ima_inode_hash

2020-11-24 Thread KP Singh
From: KP Singh 

This is in preparation to add a helper for BPF LSM programs to use
IMA hashes when attached to LSM hooks. There are LSM hooks like
inode_unlink which do not have a struct file * argument and cannot
use the existing ima_file_hash API.

An inode based API is, therefore, useful in LSM based detections like an
executable trying to delete itself which rely on the inode_unlink LSM
hook.

Moreover, the ima_file_hash function does nothing with the struct file
pointer apart from calling file_inode on it and converting it to an
inode.

Signed-off-by: KP Singh 
---
 include/linux/ima.h   |  6 +++
 security/integrity/ima/ima_main.c | 78 +--
 2 files changed, 60 insertions(+), 24 deletions(-)

diff --git a/include/linux/ima.h b/include/linux/ima.h
index 8fa7bcfb2da2..7233a2751754 100644
--- a/include/linux/ima.h
+++ b/include/linux/ima.h
@@ -29,6 +29,7 @@ extern int ima_post_read_file(struct file *file, void *buf, 
loff_t size,
  enum kernel_read_file_id id);
 extern void ima_post_path_mknod(struct dentry *dentry);
 extern int ima_file_hash(struct file *file, char *buf, size_t buf_size);
+extern int ima_inode_hash(struct inode *inode, char *buf, size_t buf_size);
 extern void ima_kexec_cmdline(int kernel_fd, const void *buf, int size);
 
 #ifdef CONFIG_IMA_KEXEC
@@ -115,6 +116,11 @@ static inline int ima_file_hash(struct file *file, char 
*buf, size_t buf_size)
return -EOPNOTSUPP;
 }
 
+static inline int ima_inode_hash(struct inode *inode, char *buf, size_t 
buf_size)
+{
+   return -EOPNOTSUPP;
+}
+
 static inline void ima_kexec_cmdline(int kernel_fd, const void *buf, int size) 
{}
 #endif /* CONFIG_IMA */
 
diff --git a/security/integrity/ima/ima_main.c 
b/security/integrity/ima/ima_main.c
index 2d1af8899cab..cb2deaa188e7 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -501,37 +501,14 @@ int ima_file_check(struct file *file, int mask)
 }
 EXPORT_SYMBOL_GPL(ima_file_check);
 
-/**
- * ima_file_hash - return the stored measurement if a file has been hashed and
- * is in the iint cache.
- * @file: pointer to the file
- * @buf: buffer in which to store the hash
- * @buf_size: length of the buffer
- *
- * On success, return the hash algorithm (as defined in the enum hash_algo).
- * If buf is not NULL, this function also outputs the hash into buf.
- * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
- * It generally just makes sense to pass a buffer capable of holding the 
largest
- * possible hash: IMA_MAX_DIGEST_SIZE.
- * The file hash returned is based on the entire file, including the appended
- * signature.
- *
- * If IMA is disabled or if no measurement is available, return -EOPNOTSUPP.
- * If the parameters are incorrect, return -EINVAL.
- */
-int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+static int __ima_inode_hash(struct inode *inode, char *buf, size_t buf_size)
 {
-   struct inode *inode;
struct integrity_iint_cache *iint;
int hash_algo;
 
-   if (!file)
-   return -EINVAL;
-
if (!ima_policy_flag)
return -EOPNOTSUPP;
 
-   inode = file_inode(file);
iint = integrity_iint_find(inode);
if (!iint)
return -EOPNOTSUPP;
@@ -558,8 +535,61 @@ int ima_file_hash(struct file *file, char *buf, size_t 
buf_size)
 
return hash_algo;
 }
+
+/**
+ * ima_file_hash - return the stored measurement if a file has been hashed and
+ * is in the iint cache.
+ * @file: pointer to the file
+ * @buf: buffer in which to store the hash
+ * @buf_size: length of the buffer
+ *
+ * On success, return the hash algorithm (as defined in the enum hash_algo).
+ * If buf is not NULL, this function also outputs the hash into buf.
+ * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
+ * It generally just makes sense to pass a buffer capable of holding the 
largest
+ * possible hash: IMA_MAX_DIGEST_SIZE.
+ * The file hash returned is based on the entire file, including the appended
+ * signature.
+ *
+ * If IMA is disabled or if no measurement is available, return -EOPNOTSUPP.
+ * If the parameters are incorrect, return -EINVAL.
+ */
+int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+{
+   if (!file)
+   return -EINVAL;
+
+   return __ima_inode_hash(file_inode(file), buf, buf_size);
+}
 EXPORT_SYMBOL_GPL(ima_file_hash);
 
+/**
+ * ima_inode_hash - return the stored measurement if the inode has been hashed
+ * and is in the iint cache.
+ * @inode: pointer to the inode
+ * @buf: buffer in which to store the hash
+ * @buf_size: length of the buffer
+ *
+ * On success, return the hash algorithm (as defined in the enum hash_algo).
+ * If buf is not NULL, this function also outputs the hash into buf.
+ * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
+ * It generally just makes

Re: [PATCH bpf-next 2/3] bpf: Add a BPF helper for getting the IMA hash of an inode

2020-11-24 Thread KP Singh
On Tue, Nov 24, 2020 at 12:04 PM KP Singh  wrote:
>
> On Tue, Nov 24, 2020 at 5:02 AM Alexei Starovoitov
>  wrote:
> >
> > On Fri, Nov 20, 2020 at 01:17:07PM +, KP Singh wrote:
> > > +
> > > +static bool bpf_ima_inode_hash_allowed(const struct bpf_prog *prog)
> > > +{
> > > + return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
> > > +}
> > > +
> > > +BTF_ID_LIST_SINGLE(bpf_ima_inode_hash_btf_ids, struct, inode)
> > > +
> > > +const static struct bpf_func_proto bpf_ima_inode_hash_proto = {
> > > + .func   = bpf_ima_inode_hash,
> > > + .gpl_only   = false,
> > > + .ret_type   = RET_INTEGER,
> > > + .arg1_type  = ARG_PTR_TO_BTF_ID,
> > > + .arg1_btf_id= &bpf_ima_inode_hash_btf_ids[0],
> > > + .arg2_type  = ARG_PTR_TO_UNINIT_MEM,
> > > + .arg3_type  = ARG_CONST_SIZE_OR_ZERO,
> > > + .allowed= bpf_ima_inode_hash_allowed,
> > > +};
> > > +
> > >  static const struct bpf_func_proto *
> > >  bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> > >  {
> > > @@ -97,6 +121,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const 
> > > struct bpf_prog *prog)
> > >   return &bpf_task_storage_delete_proto;
> > >   case BPF_FUNC_bprm_opts_set:
> > >   return &bpf_bprm_opts_set_proto;
> > > + case BPF_FUNC_ima_inode_hash:
> > > + return &bpf_ima_inode_hash_proto;
> >
> > That's not enough for correctness.
> > Not only hook has to sleepable, but the program has to be sleepable too.
> > The patch 3 should be causing all sort of kernel warnings
> > for calling mutex from preempt disabled.
> > There it calls bpf_ima_inode_hash() from SEC("lsm/file_mprotect") program.

Okay, I dug into why I did not get any warnings. I do have
CONFIG_DEBUG_ATOMIC_SLEEP
and friends enabled and I do look at dmesg, and... I think you misread
the diff of my patch :)

it's indeed attaching to "lsm.s/bprm_committed_creds":

[https://lore.kernel.org/bpf/CACYkzJ7Oi8wXf=9a-e=fFHJirRbD=u47z+3+m2crtcy_1fw...@mail.gmail.com/T/#m8d55bf0cdda614338cecd7154476497628612f6a]

 SEC("lsm/file_mprotect")
 int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
@@ -65,8 +67,11 @@ int BPF_PROG(test_void_hook, struct linux_binprm *bprm)
  __u32 key = 0;
  __u64 *value;

- if (monitored_pid == pid)
+ if (monitored_pid == pid) {
  bprm_count++;
+ ima_hash_ret = bpf_ima_inode_hash(bprm->file->f_inode,
+  &ima_hash, sizeof(ima_hash));
+ }

  bpf_copy_from_user(args, sizeof(args), (void *)bprm->vma->vm_mm->arg_start);
  bpf_copy_from_user(args, sizeof(args), (void *)bprm->mm->arg_start);
-- 

The diff makes it look like it is attaching to "lsm/file_mprotect" but
it's actually attaching to
"lsm.s/bprm_committed_creds".

Now we can either check for prog->aux->sleepable in
bpf_ima_inode_hash_allowed or
just not expose the helper to non-sleepable hooks. I went with the
latter as this is what
we do for bpf_copy_from_user.

- KP

>
> I did actually mean to use SEC("lsm.s/bprm_committed_creds"), my bad.
>
> > "lsm/" is non-sleepable. "lsm.s/" is.
> > please enable CONFIG_DEBUG_ATOMIC_SLEEP=y in your config.
>
> Oops, yes I did notice that during recent work on the test cases.
>
> Since we need a stronger check than just warnings, I am doing
> something similar to
> what we do for bpf_copy_from_user i.e.
>
>  return prog->aux->sleepable ? &bpf_ima_inode_hash_proto : NULL;
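
To make the "lsm/" vs "lsm.s/" distinction concrete, a minimal sketch (not
the selftest itself; it assumes the same includes and license declaration as
progs/lsm.c, and the program names are made up):

    /* Non-sleepable: "lsm/" section prefix, bpf_ima_inode_hash() is not
     * available here.
     */
    SEC("lsm/file_mprotect")
    int BPF_PROG(mprotect_hook, struct vm_area_struct *vma,
                 unsigned long reqprot, unsigned long prot, int ret)
    {
            return 0;
    }

    /* Sleepable: "lsm.s/" section prefix, so the helper may be called.
     * It returns the hash_algo on success or a negative error.
     */
    SEC("lsm.s/bprm_committed_creds")
    int BPF_PROG(exec_hook, struct linux_binprm *bprm)
    {
            u64 digest = 0;

            bpf_ima_inode_hash(bprm->file->f_inode, &digest, sizeof(digest));
            return 0;
    }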


Re: [PATCH bpf-next 2/3] bpf: Add a BPF helper for getting the IMA hash of an inode

2020-11-24 Thread KP Singh
On Tue, Nov 24, 2020 at 5:02 AM Alexei Starovoitov
 wrote:
>
> On Fri, Nov 20, 2020 at 01:17:07PM +0000, KP Singh wrote:
> > +
> > +static bool bpf_ima_inode_hash_allowed(const struct bpf_prog *prog)
> > +{
> > + return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
> > +}
> > +
> > +BTF_ID_LIST_SINGLE(bpf_ima_inode_hash_btf_ids, struct, inode)
> > +
> > +const static struct bpf_func_proto bpf_ima_inode_hash_proto = {
> > + .func   = bpf_ima_inode_hash,
> > + .gpl_only   = false,
> > + .ret_type   = RET_INTEGER,
> > + .arg1_type  = ARG_PTR_TO_BTF_ID,
> > + .arg1_btf_id= &bpf_ima_inode_hash_btf_ids[0],
> > + .arg2_type  = ARG_PTR_TO_UNINIT_MEM,
> > + .arg3_type  = ARG_CONST_SIZE_OR_ZERO,
> > + .allowed= bpf_ima_inode_hash_allowed,
> > +};
> > +
> >  static const struct bpf_func_proto *
> >  bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  {
> > @@ -97,6 +121,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const 
> > struct bpf_prog *prog)
> >   return &bpf_task_storage_delete_proto;
> >   case BPF_FUNC_bprm_opts_set:
> >   return &bpf_bprm_opts_set_proto;
> > + case BPF_FUNC_ima_inode_hash:
> > + return &bpf_ima_inode_hash_proto;
>
> That's not enough for correctness.
> Not only hook has to sleepable, but the program has to be sleepable too.
> The patch 3 should be causing all sort of kernel warnings
> for calling mutex from preempt disabled.
> There it calls bpf_ima_inode_hash() from SEC("lsm/file_mprotect") program.

I did actually mean to use SEC("lsm.s/bprm_committed_creds"), my bad.

> "lsm/" is non-sleepable. "lsm.s/" is.
> please enable CONFIG_DEBUG_ATOMIC_SLEEP=y in your config.

Oops, yes I did notice that during recent work on the test cases.

Since we need a stronger check than just warnings, I am doing
something similar to
what we do for bpf_copy_from_user i.e.

 return prog->aux->sleepable ? &bpf_ima_inode_hash_proto : NULL;


Re: [PATCH bpf-next v2 3/3] bpf: Update LSM selftests for bpf_ima_inode_hash

2020-11-23 Thread KP Singh
On Mon, Nov 23, 2020 at 7:36 PM Yonghong Song  wrote:
>
>
>
> On 11/23/20 10:27 AM, KP Singh wrote:
> > [...]
> >
> >>>>
> >>>> Even if a custom policy has been loaded, potentially additional
> >>>> measurements unrelated to this test would be included the measurement
> >>>> list.  One way of limiting a rule to a specific test is by loopback
> >>>> mounting a file system and defining a policy rule based on the loopback
> >>>> mount unique uuid.
> >>>
> >>> Thanks Mimi!
> >>>
> >>> I wonder if we simply limit this to policy to /tmp and run an executable
> >>> from /tmp (like test_local_storage.c does).
> >>>
> >>> The only side effect would be of extra hashes being calculated on
> >>> binaries run from /tmp which is not too bad I guess?
> >>
> >> The builtin measurement policy (ima_policy=tcb") explicitly defines a
> >> rule to not measure /tmp files.  Measuring /tmp results in a lot of
> >> measurements.
> >>
> >> {.action = DONT_MEASURE, .fsmagic = TMPFS_MAGIC, .flags = IMA_FSMAGIC},
> >>
> >>>
> >>> We could do the loop mount too, but I am guessing the most clean way
> >>> would be to shell out to mount from the test? Are there some other 
> >>> examples
> >>> of IMA we could look at?
> >>
> >> LTP loopback mounts a filesystem, since /tmp is not being measured with
> >> the builtin "tcb" policy.  Defining new policy rules should be limited
> >> to the loopback mount.  This would pave the way for defining IMA-
> >> appraisal signature verification policy rules, without impacting the
> >> running system.
> >
> > +Andrii
> >
> > Do you think we can split the IMA test out,
> > have a little shell script that does the loopback mount, gets the
> > FS UUID, updates the IMA policy and then runs a C program?
> >
> > This would also allow "test_progs" to be independent of CONFIG_IMA.
> >
> > I am guessing the structure would be something similar
> > to test_xdp_redirect.sh
>
> Look at sk_assign test.
>
> sk_assign.c:if (CHECK_FAIL(system("ip link set dev lo up")))
> sk_assign.c:if (CHECK_FAIL(system("ip route add local default dev lo")))
> sk_assign.c:if (CHECK_FAIL(system("ip -6 route add local default dev
> lo")))
> sk_assign.c:if (CHECK_FAIL(system("tc qdisc add dev lo clsact")))
> sk_assign.c:if (CHECK(system(tc_cmd), "BPF load failed;"
>
> You can use "system" to invoke some bash commands to simulate a script
> in the tests.

Heh, that's what I was trying to avoid: I need to parse the output to get
the name of the loop device that was assigned and then call a command like:

# blkid /dev/loop0
/dev/loop0: UUID="607ed7ce-3fad-4236-8faf-8ab744f23e01" TYPE="ext3"

Running simple commands with "system" seems okay but parsing output
is a bit too much :)

I read about:

https://man7.org/linux/man-pages/man4/loop.4.html

But I still need to create a backing file, format it and then get the UUID.

Any simple trick that I may be missing?

- KP
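
For reference, a minimal sketch of the losetup/blkid flow (essentially what
the ima_setup.sh script in this series ends up doing; the image path is
illustrative):

    img=/tmp/ima-test.img
    dd if=/dev/zero of="${img}" bs=1M count=10
    loop_dev="$(losetup --find --show ${img})"        # prints e.g. /dev/loop0
    mkfs.ext4 "${loop_dev}"
    uuid="$(blkid -s UUID -o value ${loop_dev})"
    echo "measure func=BPRM_CHECK fsuuid=${uuid}" > /sys/kernel/security/ima/policy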

>
> >
> > - KP
> >
> >>
> >> Mimi
> >>


Re: [PATCH bpf-next v2 3/3] bpf: Update LSM selftests for bpf_ima_inode_hash

2020-11-23 Thread KP Singh
[...]

> > >
> > > Even if a custom policy has been loaded, potentially additional
> > > measurements unrelated to this test would be included the measurement
> > > list.  One way of limiting a rule to a specific test is by loopback
> > > mounting a file system and defining a policy rule based on the loopback
> > > mount unique uuid.
> >
> > Thanks Mimi!
> >
> > I wonder if we simply limit this to policy to /tmp and run an executable
> > from /tmp (like test_local_storage.c does).
> >
> > The only side effect would be of extra hashes being calculated on
> > binaries run from /tmp which is not too bad I guess?
>
> The builtin measurement policy (ima_policy=tcb") explicitly defines a
> rule to not measure /tmp files.  Measuring /tmp results in a lot of
> measurements.
>
> {.action = DONT_MEASURE, .fsmagic = TMPFS_MAGIC, .flags = IMA_FSMAGIC},
>
> >
> > We could do the loop mount too, but I am guessing the most clean way
> > would be to shell out to mount from the test? Are there some other examples
> > of IMA we could look at?
>
> LTP loopback mounts a filesystem, since /tmp is not being measured with
> the builtin "tcb" policy.  Defining new policy rules should be limited
> to the loopback mount.  This would pave the way for defining IMA-
> appraisal signature verification policy rules, without impacting the
> running system.

+Andrii

Do you think we can split the IMA test out,
have a little shell script that does the loopback mount, gets the
FS UUID, updates the IMA policy and then runs a C program?

This would also allow "test_progs" to be independent of CONFIG_IMA.

I am guessing the structure would be something similar
to test_xdp_redirect.sh

- KP

>
> Mimi
>


Re: [PATCH bpf-next v2 3/3] bpf: Update LSM selftests for bpf_ima_inode_hash

2020-11-23 Thread KP Singh
On Mon, Nov 23, 2020 at 2:24 PM Mimi Zohar  wrote:
>
> On Sat, 2020-11-21 at 00:50 +0000, KP Singh wrote:
> > From: KP Singh 
> >
> > - Update the IMA policy before executing the test binary (this is not an
> >   override of the policy, just an append that ensures that hashes are
> >   calculated on executions).
>
> Assuming the builtin policy has been replaced with a custom policy and
> CONFIG_IMA_WRITE_POLICY is enabled, then yes the rule is appended.   If
> a custom policy has not yet been loaded, loading this rule becomes the
> defacto custom policy.
>
> Even if a custom policy has been loaded, potentially additional
> measurements unrelated to this test would be included the measurement
> list.  One way of limiting a rule to a specific test is by loopback
> mounting a file system and defining a policy rule based on the loopback
> mount unique uuid.

Thanks Mimi!

I wonder if we simply limit this policy to /tmp and run an executable
from /tmp (like test_local_storage.c does).

The only side effect would be extra hashes being calculated on
binaries run from /tmp, which is not too bad I guess?

We could do the loop mount too, but I am guessing the cleanest way
would be to shell out to mount from the test? Are there some other examples
of IMA we could look at?

- KP

>
> Mimi
>


[PATCH bpf-next v2 3/3] bpf: Update LSM selftests for bpf_ima_inode_hash

2020-11-20 Thread KP Singh
From: KP Singh 

- Update the IMA policy before executing the test binary (this is not an
  override of the policy, just an append that ensures that hashes are
  calculated on executions).

- Call the bpf_ima_inode_hash in the bprm_committed_creds hook and check
  if the call succeeded and a hash was calculated.

Acked-by: Yonghong Song 
Signed-off-by: KP Singh 
---
 tools/testing/selftests/bpf/config|  3 ++
 .../selftests/bpf/prog_tests/test_lsm.c   | 32 +++
 tools/testing/selftests/bpf/progs/lsm.c   |  7 +++-
 3 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/config 
b/tools/testing/selftests/bpf/config
index 2118e23ac07a..4b5764031368 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -39,3 +39,6 @@ CONFIG_BPF_JIT=y
 CONFIG_BPF_LSM=y
 CONFIG_SECURITY=y
 CONFIG_LIRC=y
+CONFIG_IMA=y
+CONFIG_IMA_WRITE_POLICY=y
+CONFIG_IMA_READ_POLICY=y
diff --git a/tools/testing/selftests/bpf/prog_tests/test_lsm.c 
b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
index 6ab29226c99b..bcb050a296a4 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_lsm.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
@@ -52,6 +52,28 @@ int exec_cmd(int *monitored_pid)
return -EINVAL;
 }
 
+#define IMA_POLICY "measure func=BPRM_CHECK"
+
+/* This does not override the policy, IMA policy updates are
+ * append only, so this just ensures that "measure func=BPRM_CHECK"
+ * is in the policy. IMA does not allow us to remove this line once
+ * it is added.
+ */
+static int update_ima_policy(void)
+{
+   int fd, ret = 0;
+
+   fd = open("/sys/kernel/security/ima/policy", O_WRONLY);
+   if (fd < 0)
+   return -errno;
+
+   if (write(fd, IMA_POLICY, sizeof(IMA_POLICY)) == -1)
+   ret = -errno;
+
+   close(fd);
+   return ret;
+}
+
 void test_test_lsm(void)
 {
struct lsm *skel = NULL;
@@ -66,6 +88,10 @@ void test_test_lsm(void)
if (CHECK(err, "attach", "lsm attach failed: %d\n", err))
goto close_prog;
 
+   err = update_ima_policy();
+   if (CHECK(err, "update_ima_policy", "err %d\n", err))
+   goto close_prog;
+
err = exec_cmd(&skel->bss->monitored_pid);
if (CHECK(err < 0, "exec_cmd", "err %d errno %d\n", err, errno))
goto close_prog;
@@ -83,6 +109,12 @@ void test_test_lsm(void)
CHECK(skel->bss->mprotect_count != 1, "mprotect_count",
  "mprotect_count = %d\n", skel->bss->mprotect_count);
 
+   CHECK(skel->data->ima_hash_ret < 0, "ima_hash_ret",
+ "ima_hash_ret = %ld\n", skel->data->ima_hash_ret);
+
+   CHECK(skel->bss->ima_hash == 0, "ima_hash",
+ "ima_hash = %lu\n", skel->bss->ima_hash);
+
syscall(__NR_setdomainname, , -2L);
syscall(__NR_setdomainname, 0, -3L);
syscall(__NR_setdomainname, ~0L, -4L);
diff --git a/tools/testing/selftests/bpf/progs/lsm.c 
b/tools/testing/selftests/bpf/progs/lsm.c
index ff4d343b94b5..5adc193e414d 100644
--- a/tools/testing/selftests/bpf/progs/lsm.c
+++ b/tools/testing/selftests/bpf/progs/lsm.c
@@ -35,6 +35,8 @@ char _license[] SEC("license") = "GPL";
 int monitored_pid = 0;
 int mprotect_count = 0;
 int bprm_count = 0;
+long ima_hash_ret = -1;
+u64 ima_hash = 0;
 
 SEC("lsm/file_mprotect")
 int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
@@ -65,8 +67,11 @@ int BPF_PROG(test_void_hook, struct linux_binprm *bprm)
__u32 key = 0;
__u64 *value;
 
-   if (monitored_pid == pid)
+   if (monitored_pid == pid) {
bprm_count++;
+   ima_hash_ret = bpf_ima_inode_hash(bprm->file->f_inode,
+ &ima_hash, sizeof(ima_hash));
+   }
 
bpf_copy_from_user(args, sizeof(args), (void 
*)bprm->vma->vm_mm->arg_start);
bpf_copy_from_user(args, sizeof(args), (void *)bprm->mm->arg_start);
-- 
2.29.2.454.gaff20da3a2-goog



[PATCH bpf-next v2 1/3] ima: Implement ima_inode_hash

2020-11-20 Thread KP Singh
From: KP Singh 

This is in preparation to add a helper for BPF LSM programs to use
IMA hashes when attached to LSM hooks. There are LSM hooks like
inode_unlink which do not have a struct file * argument and cannot
use the existing ima_file_hash API.

An inode based API is, therefore, useful in LSM based detections like an
executable trying to delete itself which rely on the inode_unlink LSM
hook.

Moreover, the ima_file_hash function does nothing with the struct file
pointer apart from calling file_inode on it and converting it to an
inode.

Signed-off-by: KP Singh 
---
 include/linux/ima.h   |  6 +++
 security/integrity/ima/ima_main.c | 78 +--
 2 files changed, 60 insertions(+), 24 deletions(-)

diff --git a/include/linux/ima.h b/include/linux/ima.h
index 8fa7bcfb2da2..7233a2751754 100644
--- a/include/linux/ima.h
+++ b/include/linux/ima.h
@@ -29,6 +29,7 @@ extern int ima_post_read_file(struct file *file, void *buf, 
loff_t size,
  enum kernel_read_file_id id);
 extern void ima_post_path_mknod(struct dentry *dentry);
 extern int ima_file_hash(struct file *file, char *buf, size_t buf_size);
+extern int ima_inode_hash(struct inode *inode, char *buf, size_t buf_size);
 extern void ima_kexec_cmdline(int kernel_fd, const void *buf, int size);
 
 #ifdef CONFIG_IMA_KEXEC
@@ -115,6 +116,11 @@ static inline int ima_file_hash(struct file *file, char 
*buf, size_t buf_size)
return -EOPNOTSUPP;
 }
 
+static inline int ima_inode_hash(struct inode *inode, char *buf, size_t 
buf_size)
+{
+   return -EOPNOTSUPP;
+}
+
 static inline void ima_kexec_cmdline(int kernel_fd, const void *buf, int size) 
{}
 #endif /* CONFIG_IMA */
 
diff --git a/security/integrity/ima/ima_main.c 
b/security/integrity/ima/ima_main.c
index 2d1af8899cab..cb2deaa188e7 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -501,37 +501,14 @@ int ima_file_check(struct file *file, int mask)
 }
 EXPORT_SYMBOL_GPL(ima_file_check);
 
-/**
- * ima_file_hash - return the stored measurement if a file has been hashed and
- * is in the iint cache.
- * @file: pointer to the file
- * @buf: buffer in which to store the hash
- * @buf_size: length of the buffer
- *
- * On success, return the hash algorithm (as defined in the enum hash_algo).
- * If buf is not NULL, this function also outputs the hash into buf.
- * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
- * It generally just makes sense to pass a buffer capable of holding the 
largest
- * possible hash: IMA_MAX_DIGEST_SIZE.
- * The file hash returned is based on the entire file, including the appended
- * signature.
- *
- * If IMA is disabled or if no measurement is available, return -EOPNOTSUPP.
- * If the parameters are incorrect, return -EINVAL.
- */
-int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+static int __ima_inode_hash(struct inode *inode, char *buf, size_t buf_size)
 {
-   struct inode *inode;
struct integrity_iint_cache *iint;
int hash_algo;
 
-   if (!file)
-   return -EINVAL;
-
if (!ima_policy_flag)
return -EOPNOTSUPP;
 
-   inode = file_inode(file);
iint = integrity_iint_find(inode);
if (!iint)
return -EOPNOTSUPP;
@@ -558,8 +535,61 @@ int ima_file_hash(struct file *file, char *buf, size_t 
buf_size)
 
return hash_algo;
 }
+
+/**
+ * ima_file_hash - return the stored measurement if a file has been hashed and
+ * is in the iint cache.
+ * @file: pointer to the file
+ * @buf: buffer in which to store the hash
+ * @buf_size: length of the buffer
+ *
+ * On success, return the hash algorithm (as defined in the enum hash_algo).
+ * If buf is not NULL, this function also outputs the hash into buf.
+ * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
+ * It generally just makes sense to pass a buffer capable of holding the 
largest
+ * possible hash: IMA_MAX_DIGEST_SIZE.
+ * The file hash returned is based on the entire file, including the appended
+ * signature.
+ *
+ * If IMA is disabled or if no measurement is available, return -EOPNOTSUPP.
+ * If the parameters are incorrect, return -EINVAL.
+ */
+int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+{
+   if (!file)
+   return -EINVAL;
+
+   return __ima_inode_hash(file_inode(file), buf, buf_size);
+}
 EXPORT_SYMBOL_GPL(ima_file_hash);
 
+/**
+ * ima_inode_hash - return the stored measurement if the inode has been hashed
+ * and is in the iint cache.
+ * @inode: pointer to the inode
+ * @buf: buffer in which to store the hash
+ * @buf_size: length of the buffer
+ *
+ * On success, return the hash algorithm (as defined in the enum hash_algo).
+ * If buf is not NULL, this function also outputs the hash into buf.
+ * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
+ * It generally just makes

[PATCH bpf-next v2 2/3] bpf: Add a BPF helper for getting the IMA hash of an inode

2020-11-20 Thread KP Singh
From: KP Singh 

Provide a wrapper function to get the IMA hash of an inode. This helper
is useful in fingerprinting files (e.g executables on execution) and
using these fingerprints in detections like an executable unlinking
itself.

Since the ima_inode_hash can sleep, it's only allowed for sleepable
LSM hooks.

Signed-off-by: KP Singh 
---
 include/uapi/linux/bpf.h   | 11 +++
 kernel/bpf/bpf_lsm.c   | 26 ++
 scripts/bpf_helpers_doc.py |  2 ++
 tools/include/uapi/linux/bpf.h | 11 +++
 4 files changed, 50 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 3ca6146f001a..c3458ec1f30a 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3807,6 +3807,16 @@ union bpf_attr {
  * See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
  * Return
  * Current *ktime*.
+ *
+ * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
+ * Description
+ * Returns the stored IMA hash of the *inode* (if it's available).
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ * Return
+ * The **hash_algo** is returned on success,
+ * **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
+ * invalid arguments are passed.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3970,6 +3980,7 @@ union bpf_attr {
FN(get_current_task_btf),   \
FN(bprm_opts_set),  \
FN(ktime_get_coarse_ns),\
+   FN(ima_inode_hash), \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index b4f27a874092..bec1f164ba58 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -75,6 +76,29 @@ const static struct bpf_func_proto bpf_bprm_opts_set_proto = 
{
.arg2_type  = ARG_ANYTHING,
 };
 
+BPF_CALL_3(bpf_ima_inode_hash, struct inode *, inode, void *, dst, u32, size)
+{
+   return ima_inode_hash(inode, dst, size);
+}
+
+static bool bpf_ima_inode_hash_allowed(const struct bpf_prog *prog)
+{
+   return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
+}
+
+BTF_ID_LIST_SINGLE(bpf_ima_inode_hash_btf_ids, struct, inode)
+
+const static struct bpf_func_proto bpf_ima_inode_hash_proto = {
+   .func   = bpf_ima_inode_hash,
+   .gpl_only   = false,
+   .ret_type   = RET_INTEGER,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_ima_inode_hash_btf_ids[0],
+   .arg2_type  = ARG_PTR_TO_UNINIT_MEM,
+   .arg3_type  = ARG_CONST_SIZE,
+   .allowed= bpf_ima_inode_hash_allowed,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -97,6 +121,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_delete_proto;
case BPF_FUNC_bprm_opts_set:
return &bpf_bprm_opts_set_proto;
+   case BPF_FUNC_ima_inode_hash:
+   return &bpf_ima_inode_hash_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index c5bc947a70ad..8b829748d488 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -436,6 +436,7 @@ class PrinterHelpers(Printer):
 'struct xdp_md',
 'struct path',
 'struct btf_ptr',
+'struct inode',
 ]
 known_types = {
 '...',
@@ -480,6 +481,7 @@ class PrinterHelpers(Printer):
 'struct task_struct',
 'struct path',
 'struct btf_ptr',
+'struct inode',
 }
 mapped_types = {
 'u8': '__u8',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 3ca6146f001a..c3458ec1f30a 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3807,6 +3807,16 @@ union bpf_attr {
  * See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
  * Return
  * Current *ktime*.
+ *
+ * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
+ * Description
+ * Returns the stored IMA hash of the *inode* (if it's available).
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ * Return
+ * The **hash_algo** is returned on success,
+ * **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
+ * invalid arguments are passed.
  */
 #

Re: [PATCH bpf-next 3/3] bpf: Update LSM selftests for bpf_ima_inode_hash

2020-11-20 Thread KP Singh
On Fri, Nov 20, 2020 at 7:11 PM Yonghong Song  wrote:
>
>
>
> On 11/20/20 5:17 AM, KP Singh wrote:
> > From: KP Singh 
> >
> > - Update the IMA policy before executing the test binary (this is not an
> >override of the policy, just an append that ensures that hashes are
> >calculated on executions).
> >
> > - Call the bpf_ima_inode_hash in the bprm_committed_creds hook and check
> >if the call succeeded and a hash was calculated.
> >
> > Signed-off-by: KP Singh 
>
> LGTM with a few nits below.
>
> Acked-by: Yonghong Song 
>
> > ---
> >   tools/testing/selftests/bpf/config|  3 ++

[...]

> >   }
> >
> [...]
> > +
> >   void test_test_lsm(void)
> >   {
> >   struct lsm *skel = NULL;
> > @@ -66,6 +88,10 @@ void test_test_lsm(void)
> >   if (CHECK(err, "attach", "lsm attach failed: %d\n", err))
> >   goto close_prog;
> >
> > + err = update_ima_policy();
> > + if (CHECK(err != 0, "update_ima_policy", "error = %d\n", err))
> > + goto close_prog;
>
> "err != 0" => err?
> "error = %d" => "err %d" for consistency with other usage in this function.

Done.

>
> > +
> >   err = exec_cmd(&skel->bss->monitored_pid);
> >   if (CHECK(err < 0, "exec_cmd", "err %d errno %d\n", err, errno))
> >   goto close_prog;
> > @@ -83,6 +109,12 @@ void test_test_lsm(void)

[...]

> >   int mprotect_count = 0;
> >   int bprm_count = 0;
> > +int ima_hash_ret = -1;
>
> The helper returns type "long", but "int" type here should be fine too.

Changed it to long for correctness.


Re: [PATCH bpf-next 2/3] bpf: Add a BPF helper for getting the IMA hash of an inode

2020-11-20 Thread KP Singh
[...]

> > + * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
> > + *   Description
> > + *   Returns the stored IMA hash of the *inode* (if it's 
> > avaialable).
> > + *   If the hash is larger than *size*, then only *size*
> > + *   bytes will be copied to *dst*
> > + *   Return > + *The **hash_algo** of is returned on success,
>
> of => if?

Just changed it to:

"The **hash_algo** is returned on success"

>
> > + *   **-EOPNOTSUP** if IMA is disabled and **-EINVAL** if
>
> and => or

Done. (and the same for tools/)

>

[...]

> > + .gpl_only   = false,
> > + .ret_type   = RET_INTEGER,
> > + .arg1_type  = ARG_PTR_TO_BTF_ID,
> > + .arg1_btf_id= &bpf_ima_inode_hash_btf_ids[0],
> > + .arg2_type  = ARG_PTR_TO_UNINIT_MEM,
> > + .arg3_type  = ARG_CONST_SIZE_OR_ZERO,
>
> I know ARG_CONST_SIZE_OR_ZERO provides some flexibility and may
> make verifier easier to verify programs. But beyond that did
> you see any real use case user will pass a zero size buf to
> get hash value?
>

I agree, in this case it makes more sense to use ARG_CONST_SIZE.

> > + .allowed= bpf_ima_inode_hash_allowed,
> > +};

[...]


Re: [PATCH bpf-next 1/3] ima: Implement ima_inode_hash

2020-11-20 Thread KP Singh
[...]

> >
> > diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
> > index c5bc947a70ad..add7fcb32dcd 100755
> > --- a/scripts/bpf_helpers_doc.py
> > +++ b/scripts/bpf_helpers_doc.py
> > @@ -478,6 +478,7 @@ class PrinterHelpers(Printer):
> >   'struct tcp_request_sock',
> >   'struct udp6_sock',
> >   'struct task_struct',
> > +'struct inode',
>
> This change (bpf_helpers_doc.py) belongs to patch #2.

Indeed, I missed it during a rebase. Thanks!


>
> >   'struct path',
> >   'struct btf_ptr',
> >   }
> > diff --git a/security/integrity/ima/ima_main.c 
> > b/security/integrity/ima/ima_main.c
> > index 2d1af8899cab..1dd2123b5b43 100644
> > --- a/security/integrity/ima/ima_main.c
> > +++ b/security/integrity/ima/ima_main.c
> > @@ -501,37 +501,17 @@ int ima_file_check(struct file *file, int mask)
> >   }
> >   EXPORT_SYMBOL_GPL(ima_file_check);
> >
> > -/**
> > - * ima_file_hash - return the stored measurement if a file has been hashed 
> > and
> > - * is in the iint cache.
> > - * @file: pointer to the file
> > - * @buf: buffer in which to store the hash
> > - * @buf_size: length of the buffer
> > - *
> > - * On success, return the hash algorithm (as defined in the enum 
> > hash_algo).
> > - * If buf is not NULL, this function also outputs the hash into buf.
> > - * If the hash is larger than buf_size, then only buf_size bytes will be 
> > copied.
> > - * It generally just makes sense to pass a buffer capable of holding the 
> > largest
> > - * possible hash: IMA_MAX_DIGEST_SIZE.
> > - * The file hash returned is based on the entire file, including the 
> > appended
> > - * signature.
> > - *
> > - * If IMA is disabled or if no measurement is available, return 
> > -EOPNOTSUPP.
> > - * If the parameters are incorrect, return -EINVAL.
> > - */
> > -int ima_file_hash(struct file *file, char *buf, size_t buf_size)
> > +static int __ima_inode_hash(struct inode *inode, char *buf, size_t 
> > buf_size)
> >   {
> > - struct inode *inode;
> >   struct integrity_iint_cache *iint;
> >   int hash_algo;
> >
> > - if (!file)
> > + if (!inode)
> >   return -EINVAL;
>
> Based on original code, for ima_file_hash(), inode cannot be NULL,
> so I prefer to remove this change here and add !inode test in
> ima_inode_hash.

Makes sense. Thanks!

>
>
> >

[...]


> > + * If the parameters are incorrect, return -EINVAL.
> > + */
> > +int ima_inode_hash(struct inode *inode, char *buf, size_t buf_size)
> > +{
>
> Add
> if (!inode)
> return -EINVAL;

Done.

>
>
> > + return __ima_inode_hash(inode, buf, buf_size);
> > +}
> > +EXPORT_SYMBOL_GPL(ima_inode_hash);
> > +
> >   /**
> >* ima_post_create_tmpfile - mark newly created tmpfile as new
> >* @file : newly created tmpfile
> >


[PATCH bpf-next 1/3] ima: Implement ima_inode_hash

2020-11-20 Thread KP Singh
From: KP Singh 

This is in preparation to add a helper for BPF LSM programs to use
IMA hashes when attached to LSM hooks. There are LSM hooks like
inode_unlink which do not have a struct file * argument and cannot
use the existing ima_file_hash API.

An inode based API is, therefore, useful in LSM based detections like an
executable trying to delete itself which rely on the inode_unlink LSM
hook.

Moreover, the ima_file_hash function does nothing with the struct file
pointer apart from calling file_inode on it and converting it to an
inode.

Signed-off-by: KP Singh 
---
 include/linux/ima.h   |  6 +++
 scripts/bpf_helpers_doc.py|  1 +
 security/integrity/ima/ima_main.c | 74 ++-
 3 files changed, 59 insertions(+), 22 deletions(-)

diff --git a/include/linux/ima.h b/include/linux/ima.h
index 8fa7bcfb2da2..7233a2751754 100644
--- a/include/linux/ima.h
+++ b/include/linux/ima.h
@@ -29,6 +29,7 @@ extern int ima_post_read_file(struct file *file, void *buf, 
loff_t size,
  enum kernel_read_file_id id);
 extern void ima_post_path_mknod(struct dentry *dentry);
 extern int ima_file_hash(struct file *file, char *buf, size_t buf_size);
+extern int ima_inode_hash(struct inode *inode, char *buf, size_t buf_size);
 extern void ima_kexec_cmdline(int kernel_fd, const void *buf, int size);
 
 #ifdef CONFIG_IMA_KEXEC
@@ -115,6 +116,11 @@ static inline int ima_file_hash(struct file *file, char 
*buf, size_t buf_size)
return -EOPNOTSUPP;
 }
 
+static inline int ima_inode_hash(struct inode *inode, char *buf, size_t 
buf_size)
+{
+   return -EOPNOTSUPP;
+}
+
 static inline void ima_kexec_cmdline(int kernel_fd, const void *buf, int size) 
{}
 #endif /* CONFIG_IMA */
 
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index c5bc947a70ad..add7fcb32dcd 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -478,6 +478,7 @@ class PrinterHelpers(Printer):
 'struct tcp_request_sock',
 'struct udp6_sock',
 'struct task_struct',
+'struct inode',
 'struct path',
 'struct btf_ptr',
 }
diff --git a/security/integrity/ima/ima_main.c 
b/security/integrity/ima/ima_main.c
index 2d1af8899cab..1dd2123b5b43 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -501,37 +501,17 @@ int ima_file_check(struct file *file, int mask)
 }
 EXPORT_SYMBOL_GPL(ima_file_check);
 
-/**
- * ima_file_hash - return the stored measurement if a file has been hashed and
- * is in the iint cache.
- * @file: pointer to the file
- * @buf: buffer in which to store the hash
- * @buf_size: length of the buffer
- *
- * On success, return the hash algorithm (as defined in the enum hash_algo).
- * If buf is not NULL, this function also outputs the hash into buf.
- * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
- * It generally just makes sense to pass a buffer capable of holding the 
largest
- * possible hash: IMA_MAX_DIGEST_SIZE.
- * The file hash returned is based on the entire file, including the appended
- * signature.
- *
- * If IMA is disabled or if no measurement is available, return -EOPNOTSUPP.
- * If the parameters are incorrect, return -EINVAL.
- */
-int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+static int __ima_inode_hash(struct inode *inode, char *buf, size_t buf_size)
 {
-   struct inode *inode;
struct integrity_iint_cache *iint;
int hash_algo;
 
-   if (!file)
+   if (!inode)
return -EINVAL;
 
if (!ima_policy_flag)
return -EOPNOTSUPP;
 
-   inode = file_inode(file);
iint = integrity_iint_find(inode);
if (!iint)
return -EOPNOTSUPP;
@@ -558,8 +538,58 @@ int ima_file_hash(struct file *file, char *buf, size_t 
buf_size)
 
return hash_algo;
 }
+
+/**
+ * ima_file_hash - return the stored measurement if a file has been hashed and
+ * is in the iint cache.
+ * @file: pointer to the file
+ * @buf: buffer in which to store the hash
+ * @buf_size: length of the buffer
+ *
+ * On success, return the hash algorithm (as defined in the enum hash_algo).
+ * If buf is not NULL, this function also outputs the hash into buf.
+ * If the hash is larger than buf_size, then only buf_size bytes will be 
copied.
+ * It generally just makes sense to pass a buffer capable of holding the 
largest
+ * possible hash: IMA_MAX_DIGEST_SIZE.
+ * The file hash returned is based on the entire file, including the appended
+ * signature.
+ *
+ * If IMA is disabled or if no measurement is available, return -EOPNOTSUPP.
+ * If the parameters are incorrect, return -EINVAL.
+ */
+int ima_file_hash(struct file *file, char *buf, size_t buf_size)
+{
+   if (!file)
+   return -EINVAL;
+
+   return __ima_inode_hash(file_inode(file), buf, buf_size);
+}
 EXPORT_SYMBOL_GPL(ima_file_hash

[PATCH bpf-next 3/3] bpf: Update LSM selftests for bpf_ima_inode_hash

2020-11-20 Thread KP Singh
From: KP Singh 

- Update the IMA policy before executing the test binary (this is not an
  override of the policy, just an append that ensures that hashes are
  calculated on executions).

- Call the bpf_ima_inode_hash in the bprm_committed_creds hook and check
  if the call succeeded and a hash was calculated.

Signed-off-by: KP Singh 
---
 tools/testing/selftests/bpf/config|  3 ++
 .../selftests/bpf/prog_tests/test_lsm.c   | 32 +++
 tools/testing/selftests/bpf/progs/lsm.c   |  7 +++-
 3 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/config 
b/tools/testing/selftests/bpf/config
index 2118e23ac07a..4b5764031368 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -39,3 +39,6 @@ CONFIG_BPF_JIT=y
 CONFIG_BPF_LSM=y
 CONFIG_SECURITY=y
 CONFIG_LIRC=y
+CONFIG_IMA=y
+CONFIG_IMA_WRITE_POLICY=y
+CONFIG_IMA_READ_POLICY=y
diff --git a/tools/testing/selftests/bpf/prog_tests/test_lsm.c 
b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
index 6ab29226c99b..3f5d64adb233 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_lsm.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
@@ -52,6 +52,28 @@ int exec_cmd(int *monitored_pid)
return -EINVAL;
 }
 
+#define IMA_POLICY "measure func=BPRM_CHECK"
+
+/* This does not override the policy, IMA policy updates are
+ * append only, so this just ensures that "measure func=BPRM_CHECK"
+ * is in the policy. IMA does not allow us to remove this line once
+ * it is added.
+ */
+static int update_ima_policy(void)
+{
+   int fd, ret = 0;
+
+   fd = open("/sys/kernel/security/ima/policy", O_WRONLY);
+   if (fd < 0)
+   return -errno;
+
+   if (write(fd, IMA_POLICY, sizeof(IMA_POLICY)) == -1)
+   ret = -errno;
+
+   close(fd);
+   return ret;
+}
+
 void test_test_lsm(void)
 {
struct lsm *skel = NULL;
@@ -66,6 +88,10 @@ void test_test_lsm(void)
if (CHECK(err, "attach", "lsm attach failed: %d\n", err))
goto close_prog;
 
+   err = update_ima_policy();
+   if (CHECK(err != 0, "update_ima_policy", "error = %d\n", err))
+   goto close_prog;
+
err = exec_cmd(&skel->bss->monitored_pid);
if (CHECK(err < 0, "exec_cmd", "err %d errno %d\n", err, errno))
goto close_prog;
@@ -83,6 +109,12 @@ void test_test_lsm(void)
CHECK(skel->bss->mprotect_count != 1, "mprotect_count",
  "mprotect_count = %d\n", skel->bss->mprotect_count);
 
+   CHECK(skel->data->ima_hash_ret < 0, "ima_hash_ret",
+ "ima_hash_ret = %d\n", skel->data->ima_hash_ret);
+
+   CHECK(skel->bss->ima_hash == 0, "ima_hash",
+ "ima_hash = %lu\n", skel->bss->ima_hash);
+
syscall(__NR_setdomainname, , -2L);
syscall(__NR_setdomainname, 0, -3L);
syscall(__NR_setdomainname, ~0L, -4L);
diff --git a/tools/testing/selftests/bpf/progs/lsm.c 
b/tools/testing/selftests/bpf/progs/lsm.c
index ff4d343b94b5..b0f9639e4b0a 100644
--- a/tools/testing/selftests/bpf/progs/lsm.c
+++ b/tools/testing/selftests/bpf/progs/lsm.c
@@ -35,6 +35,8 @@ char _license[] SEC("license") = "GPL";
 int monitored_pid = 0;
 int mprotect_count = 0;
 int bprm_count = 0;
+int ima_hash_ret = -1;
+u64 ima_hash = 0;
 
 SEC("lsm/file_mprotect")
 int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
@@ -65,8 +67,11 @@ int BPF_PROG(test_void_hook, struct linux_binprm *bprm)
__u32 key = 0;
__u64 *value;
 
-   if (monitored_pid == pid)
+   if (monitored_pid == pid) {
bprm_count++;
+   ima_hash_ret = bpf_ima_inode_hash(bprm->file->f_inode,
+ &ima_hash, sizeof(ima_hash));
+   }
 
bpf_copy_from_user(args, sizeof(args), (void 
*)bprm->vma->vm_mm->arg_start);
bpf_copy_from_user(args, sizeof(args), (void *)bprm->mm->arg_start);
-- 
2.29.2.454.gaff20da3a2-goog



[PATCH bpf-next 2/3] bpf: Add a BPF helper for getting the IMA hash of an inode

2020-11-20 Thread KP Singh
From: KP Singh 

Provide a wrapper function to get the IMA hash of an inode. This helper
is useful in fingerprinting files (e.g executables on execution) and
using these fingerprints in detections like an executable unlinking
itself.

Since the ima_inode_hash can sleep, it's only allowed for sleepable
LSM hooks.

Signed-off-by: KP Singh 
---
 include/uapi/linux/bpf.h   | 11 +++
 kernel/bpf/bpf_lsm.c   | 26 ++
 scripts/bpf_helpers_doc.py |  1 +
 tools/include/uapi/linux/bpf.h | 11 +++
 4 files changed, 49 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 3ca6146f001a..dd5b8622bb89 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3807,6 +3807,16 @@ union bpf_attr {
  * See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
  * Return
  * Current *ktime*.
+ *
+ * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
+ * Description
+ * Returns the stored IMA hash of the *inode* (if it's available).
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ * Return
+ * The **hash_algo** of is returned on success,
+ * **-EOPNOTSUP** if IMA is disabled and **-EINVAL** if
+ * invalid arguments are passed.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3970,6 +3980,7 @@ union bpf_attr {
FN(get_current_task_btf),   \
FN(bprm_opts_set),  \
FN(ktime_get_coarse_ns),\
+   FN(ima_inode_hash), \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index b4f27a874092..51c36f61339e 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -75,6 +76,29 @@ const static struct bpf_func_proto bpf_bprm_opts_set_proto = 
{
.arg2_type  = ARG_ANYTHING,
 };
 
+BPF_CALL_3(bpf_ima_inode_hash, struct inode *, inode, void *, dst, u32, size)
+{
+   return ima_inode_hash(inode, dst, size);
+}
+
+static bool bpf_ima_inode_hash_allowed(const struct bpf_prog *prog)
+{
+   return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
+}
+
+BTF_ID_LIST_SINGLE(bpf_ima_inode_hash_btf_ids, struct, inode)
+
+const static struct bpf_func_proto bpf_ima_inode_hash_proto = {
+   .func   = bpf_ima_inode_hash,
+   .gpl_only   = false,
+   .ret_type   = RET_INTEGER,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_ima_inode_hash_btf_ids[0],
+   .arg2_type  = ARG_PTR_TO_UNINIT_MEM,
+   .arg3_type  = ARG_CONST_SIZE_OR_ZERO,
+   .allowed= bpf_ima_inode_hash_allowed,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -97,6 +121,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_delete_proto;
case BPF_FUNC_bprm_opts_set:
return &bpf_bprm_opts_set_proto;
+   case BPF_FUNC_ima_inode_hash:
+   return &bpf_ima_inode_hash_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index add7fcb32dcd..cb16687acb66 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -430,6 +430,7 @@ class PrinterHelpers(Printer):
 'struct tcp_request_sock',
 'struct udp6_sock',
 'struct task_struct',
+'struct inode',
 
 'struct __sk_buff',
 'struct sk_msg_md',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 3ca6146f001a..dd5b8622bb89 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3807,6 +3807,16 @@ union bpf_attr {
  * See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
  * Return
  * Current *ktime*.
+ *
+ * long bpf_ima_inode_hash(struct inode *inode, void *dst, u32 size)
+ * Description
+ * Returns the stored IMA hash of the *inode* (if it's available).
+ * If the hash is larger than *size*, then only *size*
+ * bytes will be copied to *dst*
+ * Return
+ * The **hash_algo** is returned on success,
+ * **-EOPNOTSUP** if IMA is disabled and **-EINVAL** if
+ * invalid arguments are passed.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3970,6 +3980,7 @@ union bpf_attr {
FN(get_current_task_btf),   \

Re: [PATCH v2 5/5] bpf: Add an iterator selftest for bpf_sk_storage_get

2020-11-19 Thread KP Singh
On Fri, Nov 20, 2020 at 1:32 AM Martin KaFai Lau  wrote:
>
> On Thu, Nov 19, 2020 at 05:26:54PM +0100, Florent Revest wrote:
> > From: Florent Revest 
> >
> > The eBPF program iterates over all files and tasks. For all socket
> > files, it stores the tgid of the last task it encountered with a handle
> > to that socket. This is a heuristic for finding the "owner" of a socket
> > similar to what's done by lsof, ss, netstat or fuser. Potentially, this
> > information could be used from a cgroup_skb/*gress hook to try to
> > associate network traffic with processes.
> >
> > The test makes sure that a socket it created is tagged with prog_tests's
> > pid.
> >
> > Signed-off-by: Florent Revest 
> > ---
> >  .../selftests/bpf/prog_tests/bpf_iter.c   | 35 +++
> >  .../progs/bpf_iter_bpf_sk_storage_helpers.c   | 26 ++
> >  2 files changed, 61 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c 
> > b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
> > index bb4a638f2e6f..4d0626003c03 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
> > @@ -975,6 +975,39 @@ static void test_bpf_sk_storage_delete(void)
> >   bpf_iter_bpf_sk_storage_helpers__destroy(skel);
> >  }
> >
> > +/* The BPF program stores in every socket the tgid of a task owning a 
> > handle to
> > + * it. The test verifies that a locally-created socket is tagged with its 
> > pid
> > + */
> > +static void test_bpf_sk_storage_get(void)
> > +{
> > + struct bpf_iter_bpf_sk_storage_helpers *skel;
> > + int err, map_fd, val = -1;
> > + int sock_fd = -1;
> > +
> > + skel = bpf_iter_bpf_sk_storage_helpers__open_and_load();
> > + if (CHECK(!skel, "bpf_iter_bpf_sk_storage_helpers__open_and_load",
> > +   "skeleton open_and_load failed\n"))
> > + return;
> > +
> > + sock_fd = socket(AF_INET6, SOCK_STREAM, 0);
> > + if (CHECK(sock_fd < 0, "socket", "errno: %d\n", errno))
> > + goto out;
> > +
> > + do_dummy_read(skel->progs.fill_socket_owners);
> > +
> > + map_fd = bpf_map__fd(skel->maps.sk_stg_map);
> > +
> > + err = bpf_map_lookup_elem(map_fd, &sock_fd, &val);
> > + CHECK(err || val != getpid(), "bpf_map_lookup_elem",
> > +   "map value wasn't set correctly (expected %d, got %d, 
> > err=%d)\n",
> > +   getpid(), val, err);
> > +
> > + if (sock_fd >= 0)
> > + close(sock_fd);
> > +out:
> > + bpf_iter_bpf_sk_storage_helpers__destroy(skel);
> > +}
> > +
> >  static void test_bpf_sk_storage_map(void)
> >  {
> >   DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
> > @@ -1131,6 +1164,8 @@ void test_bpf_iter(void)
> >   test_bpf_sk_storage_map();
> >   if (test__start_subtest("bpf_sk_storage_delete"))
> >   test_bpf_sk_storage_delete();
> > + if (test__start_subtest("bpf_sk_storage_get"))
> > + test_bpf_sk_storage_get();
> >   if (test__start_subtest("rdonly-buf-out-of-bound"))
> >   test_rdonly_buf_out_of_bound();
> >   if (test__start_subtest("buf-neg-offset"))
> > diff --git 
> > a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_helpers.c 
> > b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_helpers.c
> > index 01ff3235e413..7206fd6f09ab 100644
> > --- a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_helpers.c
> > +++ b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_helpers.c
> > @@ -21,3 +21,29 @@ int delete_bpf_sk_storage_map(struct 
> > bpf_iter__bpf_sk_storage_map *ctx)
> >
> >   return 0;
> >  }
> > +
> > +SEC("iter/task_file")
> > +int fill_socket_owners(struct bpf_iter__task_file *ctx)
> > +{
> > + struct task_struct *task = ctx->task;
> > + struct file *file = ctx->file;
> > + struct socket *sock;
> > + int *sock_tgid;
> > +
> > + if (!task || !file || task->tgid != task->pid)
> > + return 0;
> > +
> > + sock = bpf_sock_from_file(file);
> > + if (!sock)
> > + return 0;
> > +
> > + sock_tgid = bpf_sk_storage_get(&sk_stg_map, sock->sk, 0,
> > +BPF_SK_STORAGE_GET_F_CREATE);
> Does it affect all sk(s) in the system?  Can it be limited to
> the sk that the test is testing?

Yeah, one such way would be to set the socket storage on the socket
from userspace and then "search" for the socket in the iterator and
mark it as found in a shared global variable.
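
Something along these lines, perhaps (untested sketch that reuses the
existing sk_stg_map; the found_test_sock global and the tag value are
made up for illustration):

int found_test_sock = 0;        /* set to 1 when the tagged socket is seen */

SEC("iter/task_file")
int find_tagged_socket(struct bpf_iter__task_file *ctx)
{
        struct file *file = ctx->file;
        struct socket *sock;
        int *val;

        if (!ctx->task || !file)
                return 0;

        sock = bpf_sock_from_file(file);
        if (!sock)
                return 0;

        /* Only react to the socket the test tagged from userspace. */
        val = bpf_sk_storage_get(&sk_stg_map, sock->sk, 0, 0);
        if (val && *val == 0xeB9F)
                found_test_sock = 1;

        return 0;
}

The userspace side would tag its own socket with bpf_map_update_elem()
on the sk_storage map before doing the dummy read of the iterator.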


Re: [PATCH v2 3/5] bpf: Expose bpf_sk_storage_* to iterator programs

2020-11-19 Thread KP Singh
On Thu, Nov 19, 2020 at 5:27 PM Florent Revest  wrote:
>
> From: Florent Revest 
>
> Iterators are currently used to expose kernel information to userspace
> over fast procfs-like files but iterators could also be used to
> manipulate local storage. For example, the task_file iterator could be
> used to initialize a socket local storage with associations between
> processes and sockets or to selectively delete local storage values.
>
> This exposes both socket local storage helpers to all iterators.
> Alternatively we could expose it to only certain iterators with strcmps
> on prog->aux->attach_func_name.

Since you mentioned the alternative here, maybe you can also
explain why you chose the current approach.
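
For reference, the strcmp-based alternative would look roughly like
this (sketch only; the function name and the attach name being
compared are approximate):

static const struct bpf_func_proto *
iter_task_file_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
        /* Only hand the sk_storage helpers to iterators attached to
         * the task_file target.
         */
        if (strcmp(prog->aux->attach_func_name, "bpf_iter_task_file"))
                return NULL;

        switch (func_id) {
        case BPF_FUNC_sk_storage_get:
                return &bpf_sk_storage_get_proto;
        case BPF_FUNC_sk_storage_delete:
                return &bpf_sk_storage_delete_proto;
        default:
                return NULL;
        }
}

The trade-off is that every new iterator target that wants these
helpers needs another strcmp, so exposing them to all iterators is
simpler; it would be good to have that reasoning in the commit message.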


Re: [PATCH v2 2/5] bpf: Add a bpf_sock_from_file helper

2020-11-19 Thread KP Singh
On Thu, Nov 19, 2020 at 5:27 PM Florent Revest  wrote:
>
> From: Florent Revest 
>
> While eBPF programs can check whether a file is a socket by file->f_op
> == _file_ops, they cannot convert the void private_data pointer
> to a struct socket BTF pointer. In order to do this a new helper
> wrapping sock_from_file is added.
>
> This is useful to tracing programs but also other program types
> inheriting this set of helpers such as iterators or LSM programs.
>
> Signed-off-by: Florent Revest 

Acked-by: KP Singh 

Some minor comments.

> ---
>  include/uapi/linux/bpf.h   |  7 +++
>  kernel/trace/bpf_trace.c   | 20 
>  scripts/bpf_helpers_doc.py |  4 
>  tools/include/uapi/linux/bpf.h |  7 +++
>  4 files changed, 38 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 162999b12790..7d598f161dc0 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -3787,6 +3787,12 @@ union bpf_attr {
>   * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
>   * Return
>   * Pointer to the current task.
> + *
> + * struct socket *bpf_sock_from_file(struct file *file)
> + * Description
> + * If the given file contains a socket, returns the associated 
> socket.

"If the given file is a socket" or "represents a socket" would fit better here.

> + * Return
> + * A pointer to a struct socket on success or NULL on failure.

NULL if the file is not a socket.
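
For context, a minimal usage sketch from an LSM hook that already has
a struct file in hand (the hook choice and the counter are only
illustrative):

int received_socket_count = 0;  /* hypothetical counter */

SEC("lsm/file_receive")
int BPF_PROG(count_received_sockets, struct file *file)
{
        struct socket *sock;

        sock = bpf_sock_from_file(file);
        if (sock)
                received_socket_count++;

        return 0;
}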


Re: [PATCH v2 1/5] net: Remove the err argument from sock_from_file

2020-11-19 Thread KP Singh
I think you meant to send these as [PATCH bpf-next] for bpf-next.

I guess we can do a round of reviews and update the next revision (if
any) with the correct prefixes.

On Thu, Nov 19, 2020 at 5:27 PM Florent Revest  wrote:
>
> From: Florent Revest 
>
> Currently, the sock_from_file prototype takes an "err" pointer that is
> either not set or set to -ENOTSOCK IFF the returned socket is NULL. This
> makes the error redundant and it is ignored by a few callers.
>
> This patch simplifies the API by letting callers deduce the error based
> on whether the returned socket is NULL or not.
>
> Suggested-by: Al Viro 
> Signed-off-by: Florent Revest 
> ---
>  fs/eventpoll.c   |  3 +--
>  fs/io_uring.c| 16 
>  include/linux/net.h  |  2 +-
>  net/core/netclassid_cgroup.c |  3 +--
>  net/core/netprio_cgroup.c|  3 +--
>  net/core/sock.c  |  8 +---
>  net/socket.c | 27 ---
>  7 files changed, 29 insertions(+), 33 deletions(-)
>
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 4df61129566d..c764d8d5a76a 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -415,12 +415,11 @@ static inline void ep_set_busy_poll_napi_id(struct 
> epitem *epi)
> unsigned int napi_id;
> struct socket *sock;
> struct sock *sk;
> -   int err;
>
> if (!net_busy_loop_on())
> return;
>
> -   sock = sock_from_file(epi->ffd.file, &err);
> +   sock = sock_from_file(epi->ffd.file);
> if (!sock)
> return;
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 8018c7076b25..ace99b15cbd3 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -4341,9 +4341,9 @@ static int io_sendmsg(struct io_kiocb *req, bool 
> force_nonblock,
> unsigned flags;
> int ret;
>
> -   sock = sock_from_file(req->file, &ret);
> +   sock = sock_from_file(req->file);
> if (unlikely(!sock))
> -   return ret;
> +   return -ENOTSOCK;
>
> if (req->async_data) {
> kmsg = req->async_data;
> @@ -4390,9 +4390,9 @@ static int io_send(struct io_kiocb *req, bool 
> force_nonblock,
> unsigned flags;
> int ret;
>
> -   sock = sock_from_file(req->file, &ret);
> +   sock = sock_from_file(req->file);
> if (unlikely(!sock))
> -   return ret;
> +   return -ENOTSOCK;
>
> ret = import_single_range(WRITE, sr->buf, sr->len, , 
> _iter);
> if (unlikely(ret))
> @@ -4569,9 +4569,9 @@ static int io_recvmsg(struct io_kiocb *req, bool 
> force_nonblock,
> unsigned flags;
> int ret, cflags = 0;
>
> -   sock = sock_from_file(req->file, &ret);
> +   sock = sock_from_file(req->file);
> if (unlikely(!sock))
> -   return ret;
> +   return -ENOTSOCK;
>
> if (req->async_data) {
> kmsg = req->async_data;
> @@ -4632,9 +4632,9 @@ static int io_recv(struct io_kiocb *req, bool 
> force_nonblock,
> unsigned flags;
> int ret, cflags = 0;
>
> -   sock = sock_from_file(req->file, &ret);
> +   sock = sock_from_file(req->file);
> if (unlikely(!sock))
> -   return ret;
> +   return -ENOTSOCK;
>
> if (req->flags & REQ_F_BUFFER_SELECT) {
> kbuf = io_recv_buffer_select(req, !force_nonblock);
> diff --git a/include/linux/net.h b/include/linux/net.h
> index 0dcd51feef02..9e2324efc26a 100644
> --- a/include/linux/net.h
> +++ b/include/linux/net.h
> @@ -240,7 +240,7 @@ int sock_sendmsg(struct socket *sock, struct msghdr *msg);
>  int sock_recvmsg(struct socket *sock, struct msghdr *msg, int flags);
>  struct file *sock_alloc_file(struct socket *sock, int flags, const char 
> *dname);
>  struct socket *sockfd_lookup(int fd, int *err);
> -struct socket *sock_from_file(struct file *file, int *err);
> +struct socket *sock_from_file(struct file *file);
>  #define sockfd_put(sock) fput(sock->file)
>  int net_ratelimit(void);
>
> diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
> index 41b24cd31562..b49c57d35a88 100644
> --- a/net/core/netclassid_cgroup.c
> +++ b/net/core/netclassid_cgroup.c
> @@ -68,9 +68,8 @@ struct update_classid_context {
>
>  static int update_classid_sock(const void *v, struct file *file, unsigned n)
>  {
> -   int err;
> struct update_classid_context *ctx = (void *)v;
> -   struct socket *sock = sock_from_file(file, );
> -   struct socket *sock = sock_from_file(file, &err);
>
> if (sock) {
> spin_lock(_sk_update_lock);
> diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
> index 9bd4cab7d510..99a431c56f23 100644
> --- a/net/core/netprio_cgroup.c
> +++ b/net/core/netprio_cgroup.c
> @@ -220,8 +220,7 @@ static ssize_t write_priomap(struct kernfs_open_file *of,
>
>  static int update_netprio(const void *v, struct file *file, unsigned 

Re: [PATCH bpf-next v3 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-17 Thread KP Singh
On Tue, Nov 17, 2020 at 11:41 PM Daniel Borkmann  wrote:
>
> On 11/17/20 3:13 AM, KP Singh wrote:
> > From: KP Singh 
> >
> > The helper allows modification of certain bits on the linux_binprm
> > struct starting with the secureexec bit which can be updated using the
> > BPF_LSM_F_BPRM_SECUREEXEC flag.
> >
> > secureexec can be set by the LSM for privilege gaining executions to set
> > the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
> > use of certain environment variables (like LD_PRELOAD).
> >
> > Signed-off-by: KP Singh 
> > ---
> >   include/uapi/linux/bpf.h   | 18 ++
> >   kernel/bpf/bpf_lsm.c   | 27 +++
> >   scripts/bpf_helpers_doc.py |  2 ++
> >   tools/include/uapi/linux/bpf.h | 18 ++
> >   4 files changed, 65 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 162999b12790..bfa79054d106 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -3787,6 +3787,18 @@ union bpf_attr {
> >*  *ARG_PTR_TO_BTF_ID* of type *task_struct*.
> >*  Return
> >*  Pointer to the current task.
> > + *
> > + * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
> > + *
>
> small nit: should have no extra newline (same for the tools/ copy)
>
> > + *   Description
> > + *   Set or clear certain options on *bprm*:
> > + *
> > + *   **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
> > + *   which sets the **AT_SECURE** auxv for glibc. The bit
> > + *   is cleared if the flag is not specified.
> > + *   Return
> > + *   **-EINVAL** if invalid *flags* are passed.
> > + *
> >*/
> >   #define __BPF_FUNC_MAPPER(FN)   \
> >   FN(unspec), \
> > @@ -3948,6 +3960,7 @@ union bpf_attr {
> >   FN(task_storage_get),   \
> >   FN(task_storage_delete),\
> >   FN(get_current_task_btf),   \
> > + FN(lsm_set_bprm_opts),  \
> >   /* */
> >
> >   /* integer value in 'imm' field of BPF_CALL instruction selects which 
> > helper
> > @@ -4119,6 +4132,11 @@ enum bpf_lwt_encap_mode {
> >   BPF_LWT_ENCAP_IP,
> >   };
> >
> > +/* Flags for LSM helpers */
> > +enum {
> > + BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
> > +};
> > +
> >   #define __bpf_md_ptr(type, name)\
> >   union { \
> >   type name;  \
> > diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> > index 553107f4706a..cd85482228a0 100644
> > --- a/kernel/bpf/bpf_lsm.c
> > +++ b/kernel/bpf/bpf_lsm.c
> > @@ -7,6 +7,7 @@
> >   #include 
> >   #include 
> >   #include 
> > +#include 
> >   #include 
> >   #include 
> >   #include 
> > @@ -51,6 +52,30 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
> >   return 0;
> >   }
> >
> > +/* Mask for all the currently supported BPRM option flags */
> > +#define BPF_LSM_F_BRPM_OPTS_MASK BPF_LSM_F_BPRM_SECUREEXEC
> > +
> > +BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
> > +{
> > +
>
> ditto
>
> Would have fixed up these things on the fly while applying, but one small item
> I wanted to bring up here given uapi which will then freeze: it would be 
> cleaner
> to call the helper just bpf_bprm_opts_set() or so given it's implied that we
> attach to lsm here and we don't use _lsm in the naming for the others either.
> Similarly, I'd drop the _LSM from the flag/mask.
>

Thanks Daniel, this makes sense and is more future proof, I respun this and
sent out another version with some minor fixes and the rename. Also added
Martin's acks.

- KP


[PATCH bpf-next v4 2/2] bpf: Add tests for bpf_bprm_opts_set helper

2020-11-17 Thread KP Singh
From: KP Singh 

The test forks a child process, updates the local storage to set/unset
the secureexec bit.

The BPF program in the test attaches to bprm_creds_for_exec which checks
the local storage of the current task to set the secureexec bit on the
binary parameters (bprm).

The child then execs a bash command with the environment variable
TMPDIR set in the envp.  The bash command returns a different exit code
based on its observed value of the TMPDIR variable.

Since TMPDIR is one of the variables that is ignored by the dynamic
loader when the secureexec bit is set, one should expect the
child execution to not see this value when the secureexec bit is set.

Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
---
 .../selftests/bpf/prog_tests/test_bprm_opts.c | 116 ++
 tools/testing/selftests/bpf/progs/bprm_opts.c |  34 +
 2 files changed, 150 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
 create mode 100644 tools/testing/selftests/bpf/progs/bprm_opts.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c 
b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
new file mode 100644
index ..2559bb775762
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
@@ -0,0 +1,116 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#include 
+#include 
+
+#include "bprm_opts.skel.h"
+#include "network_helpers.h"
+
+#ifndef __NR_pidfd_open
+#define __NR_pidfd_open 434
+#endif
+
+static const char * const bash_envp[] = { "TMPDIR=shouldnotbeset", NULL };
+
+static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+{
+   return syscall(__NR_pidfd_open, pid, flags);
+}
+
+static int update_storage(int map_fd, int secureexec)
+{
+   int task_fd, ret = 0;
+
+   task_fd = sys_pidfd_open(getpid(), 0);
+   if (task_fd < 0)
+   return errno;
+
+   ret = bpf_map_update_elem(map_fd, &task_fd, &secureexec, BPF_NOEXIST);
+   if (ret)
+   ret = errno;
+
+   close(task_fd);
+   return ret;
+}
+
+static int run_set_secureexec(int map_fd, int secureexec)
+{
+   int child_pid, child_status, ret, null_fd;
+
+   child_pid = fork();
+   if (child_pid == 0) {
+   null_fd = open("/dev/null", O_WRONLY);
+   if (null_fd == -1)
+   exit(errno);
+   dup2(null_fd, STDOUT_FILENO);
+   dup2(null_fd, STDERR_FILENO);
+   close(null_fd);
+
+   /* Ensure that all executions from hereon are
+* secure by setting a local storage which is read by
+* the bprm_creds_for_exec hook and sets bprm->secureexec.
+*/
+   ret = update_storage(map_fd, secureexec);
+   if (ret)
+   exit(ret);
+
+   /* If the binary is executed with secureexec=1, the dynamic
+* loader ignores and unsets certain variables like LD_PRELOAD,
+* TMPDIR etc. TMPDIR is used here to simplify the example, as
+* LD_PRELOAD requires a real .so file.
+*
+* If the value of TMPDIR is set, the bash command returns 10
+* and if the value is unset, it returns 20.
+*/
+   execle("/bin/bash", "bash", "-c",
+  "[[ -z \"${TMPDIR}\" ]] || exit 10 && exit 20", NULL,
+  bash_envp);
+   exit(errno);
+   } else if (child_pid > 0) {
+   waitpid(child_pid, &child_status, 0);
+   ret = WEXITSTATUS(child_status);
+
+   /* If a secureexec occurred, the exit status should be 20 */
+   if (secureexec && ret == 20)
+   return 0;
+
+   /* If normal execution happened, the exit code should be 10 */
+   if (!secureexec && ret == 10)
+   return 0;
+   }
+
+   return -EINVAL;
+}
+
+void test_test_bprm_opts(void)
+{
+   int err, duration = 0;
+   struct bprm_opts *skel = NULL;
+
+   skel = bprm_opts__open_and_load();
+   if (CHECK(!skel, "skel_load", "skeleton failed\n"))
+   goto close_prog;
+
+   err = bprm_opts__attach(skel);
+   if (CHECK(err, "attach", "attach failed: %d\n", err))
+   goto close_prog;
+
+   /* Run the test with the secureexec bit unset */
+   err = run_set_secureexec(bpf_map__fd(skel->maps.secure_exec_task_map),
+0 /* secureexec */);
+   if (CHECK(err, "run_set_secureexec:0", "err = %d\n", err))
+   goto close_prog;
+
+   /* Run the test with the secureexec bit set */
+   err = run_set_secureexec(bpf_map__fd(

[PATCH bpf-next v4 1/2] bpf: Add bpf_bprm_opts_set helper

2020-11-17 Thread KP Singh
From: KP Singh 

The helper allows modification of certain bits on the linux_binprm
struct starting with the secureexec bit which can be updated using the
BPF_F_BPRM_SECUREEXEC flag.

secureexec can be set by the LSM for privilege gaining executions to set
the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
use of certain environment variables (like LD_PRELOAD).

Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
---
 include/uapi/linux/bpf.h   | 16 
 kernel/bpf/bpf_lsm.c   | 26 ++
 scripts/bpf_helpers_doc.py |  2 ++
 tools/include/uapi/linux/bpf.h | 16 
 4 files changed, 60 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 162999b12790..a52299b80b9d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3787,6 +3787,16 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * long bpf_bprm_opts_set(struct linux_binprm *bprm, u64 flags)
+ * Description
+ * Set or clear certain options on *bprm*:
+ *
+ * **BPF_F_BPRM_SECUREEXEC** Set the secureexec bit
+ * which sets the **AT_SECURE** auxv for glibc. The bit
+ * is cleared if the flag is not specified.
+ * Return
+ * **-EINVAL** if invalid *flags* are passed, zero otherwise.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3948,6 +3958,7 @@ union bpf_attr {
FN(task_storage_get),   \
FN(task_storage_delete),\
FN(get_current_task_btf),   \
+   FN(bprm_opts_set),  \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -4119,6 +4130,11 @@ enum bpf_lwt_encap_mode {
BPF_LWT_ENCAP_IP,
 };
 
+/* Flags for bpf_bprm_opts_set helper */
+enum {
+   BPF_F_BPRM_SECUREEXEC   = (1ULL << 0),
+};
+
 #define __bpf_md_ptr(type, name)   \
 union {\
type name;  \
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 553107f4706a..b4f27a874092 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -51,6 +52,29 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
return 0;
 }
 
+/* Mask for all the currently supported BPRM option flags */
+#define BPF_F_BRPM_OPTS_MASK   BPF_F_BPRM_SECUREEXEC
+
+BPF_CALL_2(bpf_bprm_opts_set, struct linux_binprm *, bprm, u64, flags)
+{
+   if (flags & ~BPF_F_BRPM_OPTS_MASK)
+   return -EINVAL;
+
+   bprm->secureexec = (flags & BPF_F_BPRM_SECUREEXEC);
+   return 0;
+}
+
+BTF_ID_LIST_SINGLE(bpf_bprm_opts_set_btf_ids, struct, linux_binprm)
+
+const static struct bpf_func_proto bpf_bprm_opts_set_proto = {
+   .func   = bpf_bprm_opts_set,
+   .gpl_only   = false,
+   .ret_type   = RET_INTEGER,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_bprm_opts_set_btf_ids[0],
+   .arg2_type  = ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -71,6 +95,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_get_proto;
case BPF_FUNC_task_storage_delete:
return &bpf_task_storage_delete_proto;
+   case BPF_FUNC_bprm_opts_set:
+   return &bpf_bprm_opts_set_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index 31484377b8b1..c5bc947a70ad 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -418,6 +418,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
@@ -465,6 +466,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 162999b12790..a52299b80b9d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3787,6 +3787,16 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * long bpf_bprm_opts_set(struct linux_binprm *bprm, u64 flags)
+ 

[PATCH bpf-next v3 2/2] bpf: Add tests for bpf_lsm_set_bprm_opts

2020-11-16 Thread KP Singh
From: KP Singh 

The test forks a child process, updates the local storage to set/unset
the secureexec bit.

The BPF program in the test attaches to bprm_creds_for_exec which checks
the local storage of the current task to set the secureexec bit on the
binary parameters (bprm).

The child then execs a bash command with the environment variable
TMPDIR set in the envp.  The bash command returns a different exit code
based on its observed value of the TMPDIR variable.

Since TMPDIR is one of the variables that is ignored by the dynamic
loader when the secureexec bit is set, one should expect the
child execution to not see this value when the secureexec bit is set.

Signed-off-by: KP Singh 
---
 .../selftests/bpf/prog_tests/test_bprm_opts.c | 121 ++
 tools/testing/selftests/bpf/progs/bprm_opts.c |  34 +
 2 files changed, 155 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
 create mode 100644 tools/testing/selftests/bpf/progs/bprm_opts.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c 
b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
new file mode 100644
index ..0d0954adad73
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#include 
+#include 
+#include 
+
+#include "bprm_opts.skel.h"
+#include "network_helpers.h"
+
+#ifndef __NR_pidfd_open
+#define __NR_pidfd_open 434
+#endif
+
+static const char * const bash_envp[] = { "TMPDIR=shouldnotbeset", NULL };
+
+static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+{
+   return syscall(__NR_pidfd_open, pid, flags);
+}
+
+static int update_storage(int map_fd, int secureexec)
+{
+   int task_fd, ret = 0;
+
+   task_fd = sys_pidfd_open(getpid(), 0);
+   if (task_fd < 0)
+   return errno;
+
+   ret = bpf_map_update_elem(map_fd, &task_fd, &secureexec, BPF_NOEXIST);
+   if (ret)
+   ret = errno;
+
+   close(task_fd);
+   return ret;
+}
+
+static int run_set_secureexec(int map_fd, int secureexec)
+{
+
+   int child_pid, child_status, ret, null_fd;
+
+   child_pid = fork();
+   if (child_pid == 0) {
+   null_fd = open("/dev/null", O_WRONLY);
+   if (null_fd == -1)
+   exit(errno);
+   dup2(null_fd, STDOUT_FILENO);
+   dup2(null_fd, STDERR_FILENO);
+   close(null_fd);
+
+   /* Ensure that all executions from hereon are
+* secure by setting a local storage which is read by
+* the bprm_creds_for_exec hook and sets bprm->secureexec.
+*/
+   ret = update_storage(map_fd, secureexec);
+   if (ret)
+   exit(ret);
+
+   /* If the binary is executed with secureexec=1, the dynamic
+* loader ignores and unsets certain variables like LD_PRELOAD,
+* TMPDIR etc. TMPDIR is used here to simplify the example, as
+* LD_PRELOAD requires a real .so file.
+*
+* If the value of TMPDIR is set, the bash command returns 10
+* and if the value is unset, it returns 20.
+*/
+   execle("/bin/bash", "bash", "-c",
+  "[[ -z \"${TMPDIR}\" ]] || exit 10 && exit 20", NULL,
+  bash_envp);
+   exit(errno);
+   } else if (child_pid > 0) {
+   waitpid(child_pid, &child_status, 0);
+   ret = WEXITSTATUS(child_status);
+
+   /* If a secureexec occurred, the exit status should be 20.
+*/
+   if (secureexec && ret == 20)
+   return 0;
+
+   /* If normal execution happened the exit code should be 10.
+*/
+   if (!secureexec && ret == 10)
+   return 0;
+
+   }
+
+   return -EINVAL;
+}
+
+void test_test_bprm_opts(void)
+{
+   int err, duration = 0;
+   struct bprm_opts *skel = NULL;
+
+   skel = bprm_opts__open_and_load();
+   if (CHECK(!skel, "skel_load", "skeleton failed\n"))
+   goto close_prog;
+
+   err = bprm_opts__attach(skel);
+   if (CHECK(err, "attach", "attach failed: %d\n", err))
+   goto close_prog;
+
+   /* Run the test with the secureexec bit unset */
+   err = run_set_secureexec(bpf_map__fd(skel->maps.secure_exec_task_map),
+0 /* secureexec */);
+   if (CHECK(err, "run_set_secureexec:0", "err = %d\n", err))
+   goto close_prog;
+
+   /* Run the test with the secureexec bit set */
+   err

[PATCH bpf-next v3 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
From: KP Singh 

The helper allows modification of certain bits on the linux_binprm
struct starting with the secureexec bit which can be updated using the
BPF_LSM_F_BPRM_SECUREEXEC flag.

secureexec can be set by the LSM for privilege gaining executions to set
the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
use of certain environment variables (like LD_PRELOAD).

Signed-off-by: KP Singh 
---
 include/uapi/linux/bpf.h   | 18 ++
 kernel/bpf/bpf_lsm.c   | 27 +++
 scripts/bpf_helpers_doc.py |  2 ++
 tools/include/uapi/linux/bpf.h | 18 ++
 4 files changed, 65 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 162999b12790..bfa79054d106 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3787,6 +3787,18 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
+ *
+ * Description
+ * Set or clear certain options on *bprm*:
+ *
+ * **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
+ * which sets the **AT_SECURE** auxv for glibc. The bit
+ * is cleared if the flag is not specified.
+ * Return
+ * **-EINVAL** if invalid *flags* are passed.
+ *
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3948,6 +3960,7 @@ union bpf_attr {
FN(task_storage_get),   \
FN(task_storage_delete),\
FN(get_current_task_btf),   \
+   FN(lsm_set_bprm_opts),  \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -4119,6 +4132,11 @@ enum bpf_lwt_encap_mode {
BPF_LWT_ENCAP_IP,
 };
 
+/* Flags for LSM helpers */
+enum {
+   BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
+};
+
 #define __bpf_md_ptr(type, name)   \
 union {\
type name;  \
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 553107f4706a..cd85482228a0 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -51,6 +52,30 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
return 0;
 }
 
+/* Mask for all the currently supported BPRM option flags */
+#define BPF_LSM_F_BRPM_OPTS_MASK   BPF_LSM_F_BPRM_SECUREEXEC
+
+BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
+{
+
+   if (flags & ~BPF_LSM_F_BRPM_OPTS_MASK)
+   return -EINVAL;
+
+   bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
+   return 0;
+}
+
+BTF_ID_LIST_SINGLE(bpf_lsm_set_bprm_opts_btf_ids, struct, linux_binprm)
+
+const static struct bpf_func_proto bpf_lsm_set_bprm_opts_proto = {
+   .func   = bpf_lsm_set_bprm_opts,
+   .gpl_only   = false,
+   .ret_type   = RET_INTEGER,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_lsm_set_bprm_opts_btf_ids[0],
+   .arg2_type  = ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -71,6 +96,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_get_proto;
case BPF_FUNC_task_storage_delete:
return &bpf_task_storage_delete_proto;
+   case BPF_FUNC_lsm_set_bprm_opts:
+   return &bpf_lsm_set_bprm_opts_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index 31484377b8b1..c5bc947a70ad 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -418,6 +418,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
@@ -465,6 +466,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 162999b12790..bfa79054d106 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3787,6 +3787,18 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * long bpf_lsm_set_bprm_opts(struct linux_binprm

Re: [PATCH bpf-next v2 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
On Tue, Nov 17, 2020 at 3:03 AM KP Singh  wrote:
>
> On Tue, Nov 17, 2020 at 1:11 AM Martin KaFai Lau  wrote:
> >
> > On Mon, Nov 16, 2020 at 11:25:35PM +, KP Singh wrote:
> > > From: KP Singh 
> > >
> > > The helper allows modification of certain bits on the linux_binprm
> > > struct starting with the secureexec bit which can be updated using the
> > > BPF_LSM_F_BPRM_SECUREEXEC flag.
> > >
> > > secureexec can be set by the LSM for privilege gaining executions to set
> > > the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
> > > use of certain environment variables (like LD_PRELOAD).
> > >
> > > Signed-off-by: KP Singh 
> > > ---
> > >  include/uapi/linux/bpf.h   | 14 ++
> > >  kernel/bpf/bpf_lsm.c   | 27 +++
> > >  scripts/bpf_helpers_doc.py |  2 ++
> > >  tools/include/uapi/linux/bpf.h | 14 ++
> > >  4 files changed, 57 insertions(+)
> > >
> > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > index 162999b12790..7f1b6ba8246c 100644
> > > --- a/include/uapi/linux/bpf.h
> > > +++ b/include/uapi/linux/bpf.h
> > > @@ -3787,6 +3787,14 @@ union bpf_attr {
> > >   *   *ARG_PTR_TO_BTF_ID* of type *task_struct*.
> > >   *   Return
> > >   *   Pointer to the current task.
> > > + *
> > > + * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
> > > + *
> > > + *   Description
> > > + *   Sets certain options on the *bprm*:
> > > + *
> > > + *   **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
> > > + *   which sets the **AT_SECURE** auxv for glibc.
> > The return value needs to be documented also.
>
> Done.
>
> >
> > >   */
> > >  #define __BPF_FUNC_MAPPER(FN)\
> > >   FN(unspec), \
> > > @@ -3948,6 +3956,7 @@ union bpf_attr {
> > >   FN(task_storage_get),   \
> > >   FN(task_storage_delete),\
> > >   FN(get_current_task_btf),   \
> > > + FN(lsm_set_bprm_opts),  \
> > >   /* */
> > >
> > >  /* integer value in 'imm' field of BPF_CALL instruction selects which 
> > > helper
> > > @@ -4119,6 +4128,11 @@ enum bpf_lwt_encap_mode {
> > >   BPF_LWT_ENCAP_IP,
> > >  };
> > >
> > > +/* Flags for LSM helpers */
> > > +enum {
> > > + BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
> > > +};
> > > +
> > >  #define __bpf_md_ptr(type, name) \
> > >  union {  \
> > >   type name;  \
> > > diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> > > index 553107f4706a..31f85474a0ef 100644
> > > --- a/kernel/bpf/bpf_lsm.c
> > > +++ b/kernel/bpf/bpf_lsm.c
> > > @@ -7,6 +7,7 @@
> > >  #include 
> > >  #include 
> > >  #include 
> > > +#include 
> > >  #include 
> > >  #include 
> > >  #include 
> > > @@ -51,6 +52,30 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
> > >   return 0;
> > >  }
> > >
> > > +/* Mask for all the currently supported BPRM option flags */
> > > +#define BPF_LSM_F_BRPM_OPTS_MASK 0x1ULL
> > If there is a need to have v3, it will be better to use
> > BPF_LSM_F_BPRM_SECUREEXEC instead of 0x1ULL.
>
> Done.
>
> >
> > > +
> > > +BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, 
> > > flags)
> > > +{
> > > +
> > > + if (flags & ~BPF_LSM_F_BRPM_OPTS_MASK)
> > > + return -EINVAL;
> > > +
> > > + bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
> > The intention of this helper is to set "or clear" a bit?
> > It may be useful to clarify the "clear" part in the doc also.
>
> Updated the docs:
>
>  * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
>  *
>  *  Description
>  *  Set or clear certain options on *bprm*:
>  *
>  *  **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
>  *  which sets the **AT_SECURE** auxv for glibc. The bit is
>  *  is cleared if the flag is not specified.

(-is) = cleared if the flag is not specified. (Thanks checkpatch!)

>  *  Return
>  *  **-EINVAL** if invalid *flags* are passed.
>
> >
> > > + return 0;
> > > +}
> > > +


Re: [PATCH bpf-next v2 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
On Tue, Nov 17, 2020 at 1:11 AM Martin KaFai Lau  wrote:
>
> On Mon, Nov 16, 2020 at 11:25:35PM +0000, KP Singh wrote:
> > From: KP Singh 
> >
> > The helper allows modification of certain bits on the linux_binprm
> > struct starting with the secureexec bit which can be updated using the
> > BPF_LSM_F_BPRM_SECUREEXEC flag.
> >
> > secureexec can be set by the LSM for privilege gaining executions to set
> > the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
> > use of certain environment variables (like LD_PRELOAD).
> >
> > Signed-off-by: KP Singh 
> > ---
> >  include/uapi/linux/bpf.h   | 14 ++
> >  kernel/bpf/bpf_lsm.c   | 27 +++
> >  scripts/bpf_helpers_doc.py |  2 ++
> >  tools/include/uapi/linux/bpf.h | 14 ++
> >  4 files changed, 57 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 162999b12790..7f1b6ba8246c 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -3787,6 +3787,14 @@ union bpf_attr {
> >   *   *ARG_PTR_TO_BTF_ID* of type *task_struct*.
> >   *   Return
> >   *   Pointer to the current task.
> > + *
> > + * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
> > + *
> > + *   Description
> > + *   Sets certain options on the *bprm*:
> > + *
> > + *   **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
> > + *   which sets the **AT_SECURE** auxv for glibc.
> The return value needs to be documented also.

Done.

>
> >   */
> >  #define __BPF_FUNC_MAPPER(FN)\
> >   FN(unspec), \
> > @@ -3948,6 +3956,7 @@ union bpf_attr {
> >   FN(task_storage_get),   \
> >   FN(task_storage_delete),\
> >   FN(get_current_task_btf),   \
> > + FN(lsm_set_bprm_opts),  \
> >   /* */
> >
> >  /* integer value in 'imm' field of BPF_CALL instruction selects which 
> > helper
> > @@ -4119,6 +4128,11 @@ enum bpf_lwt_encap_mode {
> >   BPF_LWT_ENCAP_IP,
> >  };
> >
> > +/* Flags for LSM helpers */
> > +enum {
> > + BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
> > +};
> > +
> >  #define __bpf_md_ptr(type, name) \
> >  union {  \
> >   type name;  \
> > diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> > index 553107f4706a..31f85474a0ef 100644
> > --- a/kernel/bpf/bpf_lsm.c
> > +++ b/kernel/bpf/bpf_lsm.c
> > @@ -7,6 +7,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  #include 
> > @@ -51,6 +52,30 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
> >   return 0;
> >  }
> >
> > +/* Mask for all the currently supported BPRM option flags */
> > +#define BPF_LSM_F_BRPM_OPTS_MASK 0x1ULL
> If there is a need to have v3, it will be better to use
> BPF_LSM_F_BPRM_SECUREEXEC instead of 0x1ULL.

Done.

>
> > +
> > +BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
> > +{
> > +
> > + if (flags & ~BPF_LSM_F_BRPM_OPTS_MASK)
> > + return -EINVAL;
> > +
> > + bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
> The intention of this helper is to set "or clear" a bit?
> It may be useful to clarify the "clear" part in the doc also.

Updated the docs:

 * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
 *
 *  Description
 *  Set or clear certain options on *bprm*:
 *
 *  **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
 *  which sets the **AT_SECURE** auxv for glibc. The bit is
 *  is cleared if the flag is not specified.
 *  Return
 *  **-EINVAL** if invalid *flags* are passed.

>
> > + return 0;
> > +}
> > +


Re: [PATCH bpf-next v2 2/2] bpf: Add tests for bpf_lsm_set_bprm_opts

2020-11-16 Thread KP Singh
On Tue, Nov 17, 2020 at 1:43 AM Martin KaFai Lau  wrote:
>
> On Mon, Nov 16, 2020 at 11:25:36PM +0000, KP Singh wrote:
> > From: KP Singh 
> >
> > The test forks a child process, updates the local storage to set/unset
> > the securexec bit.
> >
> > The BPF program in the test attaches to bprm_creds_for_exec which checks
> > the local storage of the current task to set the secureexec bit on the
> > binary parameters (bprm).
> >
> > The child then execs a bash command with the environment variable
> > TMPDIR set in the envp.  The bash command returns a different exit code
> > based on its observed value of the TMPDIR variable.
> >
> > Since TMPDIR is one of the variables that is ignored by the dynamic
> > loader when the secureexec bit is set, one should expect the
> > child execution to not see this value when the secureexec bit is set.
> >
> > Signed-off-by: KP Singh 
> > ---
> >  .../selftests/bpf/prog_tests/test_bprm_opts.c | 124 ++
> >  tools/testing/selftests/bpf/progs/bprm_opts.c |  34 +
> >  2 files changed, 158 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/bprm_opts.c
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c 
> > b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
> > new file mode 100644
> > index ..cba1ef3dc8b4
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
> > @@ -0,0 +1,124 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * Copyright (C) 2020 Google LLC.
> > + */
> > +
> > +#include 
> > +#include 
> Is it needed?

No, Good catch, removed.

>
> > +#include 

[...]

> > +  * If the value of TMPDIR is set, the bash command returns 10
> > +  * and if the value is unset, it returns 20.
> > +  */
> > + ret = execle("/bin/bash", "bash", "-c",
> > +  "[[ -z \"${TMPDIR}\" ]] || exit 10 && exit 20",
> > +  NULL, bash_envp);
> > + if (ret)

> It should never reach here?  May be just exit() unconditionally
> instead of having a chance to fall-through and then return -EINVAL.

Agreed. changed it to exit(errno); here.

>
> > + exit(errno);
> > + } else if (child_pid > 0) {
> > + waitpid(child_pid, _status, 0);
> > + ret = WEXITSTATUS(child_status);
> > +
> > + /* If a secureexec occured, the exit status should be 20.
> > +  */
> > + if (secureexec && ret == 20)
> > + return 0;
> > +
> > + /* If normal execution happened the exit code should be 10.
> > +  */
> > + if (!secureexec && ret == 10)
> > + return 0;
> > +
> > + return ret;
> Any chance that ret may be 0?

I think it's safer to just let it fall through and return -EINVAL, so
I removed the return ret here.

>
> > + }

[...]

> > +  0 /* secureexec */);
> > + if (CHECK(err, "run_set_secureexec:0", "err = %d", err))
> nit. err = %d"\n"

Fixed.

>
> > + goto close_prog;
> > +
> > + /* Run the test with the secureexec bit set */
> > + err = run_set_secureexec(bpf_map__fd(skel->maps.secure_exec_task_map),
> > +  1 /* secureexec */);
> > + if (CHECK(err, "run_set_secureexec:1", "err = %d", err))
> Same here.

Fixed.

- KP

>
> Others LGTM.


[PATCH bpf-next v2 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
From: KP Singh 

The helper allows modification of certain bits on the linux_binprm
struct starting with the secureexec bit which can be updated using the
BPF_LSM_F_BPRM_SECUREEXEC flag.

secureexec can be set by the LSM for privilege gaining executions to set
the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
use of certain environment variables (like LD_PRELOAD).

Signed-off-by: KP Singh 
---
 include/uapi/linux/bpf.h   | 14 ++
 kernel/bpf/bpf_lsm.c   | 27 +++
 scripts/bpf_helpers_doc.py |  2 ++
 tools/include/uapi/linux/bpf.h | 14 ++
 4 files changed, 57 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 162999b12790..7f1b6ba8246c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3787,6 +3787,14 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
+ *
+ * Description
+ * Sets certain options on the *bprm*:
+ *
+ * **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
+ * which sets the **AT_SECURE** auxv for glibc.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3948,6 +3956,7 @@ union bpf_attr {
FN(task_storage_get),   \
FN(task_storage_delete),\
FN(get_current_task_btf),   \
+   FN(lsm_set_bprm_opts),  \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -4119,6 +4128,11 @@ enum bpf_lwt_encap_mode {
BPF_LWT_ENCAP_IP,
 };
 
+/* Flags for LSM helpers */
+enum {
+   BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
+};
+
 #define __bpf_md_ptr(type, name)   \
 union {\
type name;  \
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 553107f4706a..31f85474a0ef 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -51,6 +52,30 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
return 0;
 }
 
+/* Mask for all the currently supported BPRM option flags */
+#define BPF_LSM_F_BRPM_OPTS_MASK   0x1ULL
+
+BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
+{
+
+   if (flags & ~BPF_LSM_F_BRPM_OPTS_MASK)
+   return -EINVAL;
+
+   bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
+   return 0;
+}
+
+BTF_ID_LIST_SINGLE(bpf_lsm_set_bprm_opts_btf_ids, struct, linux_binprm)
+
+const static struct bpf_func_proto bpf_lsm_set_bprm_opts_proto = {
+   .func   = bpf_lsm_set_bprm_opts,
+   .gpl_only   = false,
+   .ret_type   = RET_INTEGER,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_lsm_set_bprm_opts_btf_ids[0],
+   .arg2_type  = ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -71,6 +96,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_get_proto;
case BPF_FUNC_task_storage_delete:
return &bpf_task_storage_delete_proto;
+   case BPF_FUNC_lsm_set_bprm_opts:
+   return &bpf_lsm_set_bprm_opts_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index 31484377b8b1..c5bc947a70ad 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -418,6 +418,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
@@ -465,6 +466,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 162999b12790..7f1b6ba8246c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3787,6 +3787,14 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * long bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
+ *
+ * Description
+ * Sets certain options on the *bprm*:
+ *
+ * **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
+ *  

[PATCH bpf-next v2 2/2] bpf: Add tests for bpf_lsm_set_bprm_opts

2020-11-16 Thread KP Singh
From: KP Singh 

The test forks a child process, updates the local storage to set/unset
the secureexec bit.

The BPF program in the test attaches to bprm_creds_for_exec which checks
the local storage of the current task to set the secureexec bit on the
binary parameters (bprm).

The child then execs a bash command with the environment variable
TMPDIR set in the envp.  The bash command returns a different exit code
based on its observed value of the TMPDIR variable.

Since TMPDIR is one of the variables that is ignored by the dynamic
loader when the secureexec bit is set, one should expect the
child execution to not see this value when the secureexec bit is set.

Signed-off-by: KP Singh 
---
 .../selftests/bpf/prog_tests/test_bprm_opts.c | 124 ++
 tools/testing/selftests/bpf/progs/bprm_opts.c |  34 +
 2 files changed, 158 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
 create mode 100644 tools/testing/selftests/bpf/progs/bprm_opts.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c 
b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
new file mode 100644
index ..cba1ef3dc8b4
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include "bprm_opts.skel.h"
+#include "network_helpers.h"
+
+#ifndef __NR_pidfd_open
+#define __NR_pidfd_open 434
+#endif
+
+static const char * const bash_envp[] = { "TMPDIR=shouldnotbeset", NULL };
+
+static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+{
+   return syscall(__NR_pidfd_open, pid, flags);
+}
+
+static int update_storage(int map_fd, int secureexec)
+{
+   int task_fd, ret = 0;
+
+   task_fd = sys_pidfd_open(getpid(), 0);
+   if (task_fd < 0)
+   return errno;
+
+   ret = bpf_map_update_elem(map_fd, &task_fd, &secureexec, BPF_NOEXIST);
+   if (ret)
+   ret = errno;
+
+   close(task_fd);
+   return ret;
+}
+
+static int run_set_secureexec(int map_fd, int secureexec)
+{
+
+   int child_pid, child_status, ret, null_fd;
+
+   child_pid = fork();
+   if (child_pid == 0) {
+   null_fd = open("/dev/null", O_WRONLY);
+   if (null_fd == -1)
+   exit(errno);
+   dup2(null_fd, STDOUT_FILENO);
+   dup2(null_fd, STDERR_FILENO);
+   close(null_fd);
+
+   /* Ensure that all executions from hereon are
+* secure by setting a local storage which is read by
+* the bprm_creds_for_exec hook and sets bprm->secureexec.
+*/
+   ret = update_storage(map_fd, secureexec);
+   if (ret)
+   exit(ret);
+
+   /* If the binary is executed with secureexec=1, the dynamic
+* loader ignores and unsets certain variables like LD_PRELOAD,
+* TMPDIR etc. TMPDIR is used here to simplify the example, as
+* LD_PRELOAD requires a real .so file.
+*
+* If the value of TMPDIR is set, the bash command returns 10
+* and if the value is unset, it returns 20.
+*/
+   ret = execle("/bin/bash", "bash", "-c",
+"[[ -z \"${TMPDIR}\" ]] || exit 10 && exit 20",
+NULL, bash_envp);
+   if (ret)
+   exit(errno);
+   } else if (child_pid > 0) {
+   waitpid(child_pid, &child_status, 0);
+   ret = WEXITSTATUS(child_status);
+
+   /* If a secureexec occurred, the exit status should be 20.
+*/
+   if (secureexec && ret == 20)
+   return 0;
+
+   /* If normal execution happened the exit code should be 10.
+*/
+   if (!secureexec && ret == 10)
+   return 0;
+
+   return ret;
+   }
+
+   return -EINVAL;
+}
+
+void test_test_bprm_opts(void)
+{
+   int err, duration = 0;
+   struct bprm_opts *skel = NULL;
+
+   skel = bprm_opts__open_and_load();
+   if (CHECK(!skel, "skel_load", "skeleton failed\n"))
+   goto close_prog;
+
+   err = bprm_opts__attach(skel);
+   if (CHECK(err, "attach", "attach failed: %d\n", err))
+   goto close_prog;
+
+   /* Run the test with the secureexec bit unset */
+   err = run_set_secureexec(bpf_map__fd(skel->maps.secure_exec_task_map),
+0 /* secureexec */);
+   if (CHECK(err, "run_set_secureexec:0", "err = %d", err))
+   goto close_prog

Re: [PATCH bpf-next 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
On Mon, Nov 16, 2020 at 11:48 PM KP Singh  wrote:
>
> [...]
>
> > >
> > > +BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, 
> > > flags)
> > > +{
> >
> > This should also reject invalid flags. I'd rather change this helper from 
> > RET_VOID
> > to RET_INTEGER and throw -EINVAL for everything other than 
> > BPF_LSM_F_BPRM_SECUREEXEC
> > passed in here including zero so it can be extended in future.
>
> Sounds good, I added:
>
>  enum {
> BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
> +   /* Mask for all the currently supported BPRM options */
> +   BPF_LSM_F_BRPM_OPTS_MASK= 0x1ULL,
>  };
>
> changed the return type to RET_INTEGER as suggested checking for
> invalid flags as:
>
>  BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
>  {
> +
> +   if (flags & !BPF_LSM_F_BRPM_OPTS_MASK)
> +   return -EINVAL;
>
> Do let me know if this is okay and I can spin up a v2 with these changes.

Oops this should have been:

  if (flags & ~BPF_LSM_F_BRPM_OPTS_MASK)
   return -EINVAL;
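
The tilde is the important part here: !BPF_LSM_F_BRPM_OPTS_MASK is a
logical negation and evaluates to 0 for any non-zero mask, so ANDing
with it can never catch an unknown bit, while ~ keeps exactly the bits
outside the mask. A tiny illustration with made-up values:

/* Illustration only, not part of the patch. */
#define OPTS_MASK 0x1ULL                /* only bit 0 is a valid flag */

static bool rejects_unknown_bits(__u64 flags)
{
        /* With flags == 0x2: flags & !OPTS_MASK == 0 (never rejects),
         * while flags & ~OPTS_MASK == 0x2 (correctly rejected).
         */
        return flags & ~OPTS_MASK;
}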

>
> - KP
>
> >
> > > + bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
> > > + return 0;
> > > +}
> > > +
> > > +BTF_ID_LIST_SINGLE(bpf_lsm_set_bprm_opts_btf_ids, struct, linux_binprm)
> > > +
> > > +const static struct bpf_func_proto bpf_lsm_set_bprm_opts_proto = {
> > > + .func   = bpf_lsm_set_bprm_opts,
> > > + .gpl_only   = false,
> > > + .ret_type   = RET_VOID,
> > > + .arg1_type  = ARG_PTR_TO_BTF_ID,
> > > + .arg1_btf_id= &bpf_lsm_set_bprm_opts_btf_ids[0],
> > > + .arg2_type  = ARG_ANYTHING,
> > > +};
> > > +


Re: [PATCH bpf-next 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
[...]

> >
> > +BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
> > +{
>
> This should also reject invalid flags. I'd rather change this helper from 
> RET_VOID
> to RET_INTEGER and throw -EINVAL for everything other than 
> BPF_LSM_F_BPRM_SECUREEXEC
> passed in here including zero so it can be extended in future.

Sounds good, I added:

 enum {
BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
+   /* Mask for all the currently supported BPRM options */
+   BPF_LSM_F_BRPM_OPTS_MASK= 0x1ULL,
 };

changed the return type to RET_INTEGER as suggested checking for
invalid flags as:

 BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
 {
+
+   if (flags & !BPF_LSM_F_BRPM_OPTS_MASK)
+   return -EINVAL;

Do let me know if this is okay and I can spin up a v2 with these changes.

- KP

>
> > + bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
> > + return 0;
> > +}
> > +
> > +BTF_ID_LIST_SINGLE(bpf_lsm_set_bprm_opts_btf_ids, struct, linux_binprm)
> > +
> > +const static struct bpf_func_proto bpf_lsm_set_bprm_opts_proto = {
> > + .func   = bpf_lsm_set_bprm_opts,
> > + .gpl_only   = false,
> > + .ret_type   = RET_VOID,
> > + .arg1_type  = ARG_PTR_TO_BTF_ID,
> > + .arg1_btf_id= _lsm_set_bprm_opts_btf_ids[0],
> > + .arg2_type  = ARG_ANYTHING,
> > +};
> > +


Re: [PATCH bpf-next 2/2] bpf: Add tests for bpf_lsm_set_bprm_opts

2020-11-16 Thread KP Singh
[...]

> +
> +#include "vmlinux.h"
> +#include 
> +#include 
> +#include 
> +
> +char _license[] SEC("license") = "GPL";
> +
> +struct {
> +   __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
> +   __uint(map_flags, BPF_F_NO_PREALLOC);
> +   __type(key, int);
> +   __type(value, int);
> +} secure_exec_task_map SEC(".maps");
> +
> +SEC("lsm/bprm_creds_for_exec")
> +int BPF_PROG(secure_exec, struct linux_binprm *bprm)
> +{
> +   int *secureexec;
> +
> +   secureexec = bpf_task_storage_get(&secure_exec_task_map,
> +  bpf_get_current_task_btf(), 0,
> +  BPF_LOCAL_STORAGE_GET_F_CREATE);
> +   if (!secureexec)
> +   return 0;
> +
> +   if (*secureexec)
> +   bpf_lsm_set_bprm_opts(bprm, BPF_LSM_F_BPRM_SECUREEXEC);

This can just be:

   if (secureexec && *secureexec)
  bpf_lsm_set_bprm_opts(bprm, BPF_LSM_F_BPRM_SECUREEXEC);


> +   return 0;
> +}
> --
> 2.29.2.299.gdc1121823c-goog
>


[PATCH bpf-next 1/2] bpf: Add bpf_lsm_set_bprm_opts helper

2020-11-16 Thread KP Singh
From: KP Singh 

The helper allows modification of certain bits on the linux_binprm
struct starting with the secureexec bit which can be updated using the
BPF_LSM_F_BPRM_SECUREEXEC flag.

secureexec can be set by the LSM for privilege gaining executions to set
the AT_SECURE auxv for glibc.  When set, the dynamic linker disables the
use of certain environment variables (like LD_PRELOAD).

Signed-off-by: KP Singh 
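
As an aside, a minimal sketch of a program using the helper as proposed
here (the program name is made up; it assumes vmlinux.h and libbpf's
bpf_helpers.h/bpf_tracing.h) could look like:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("lsm/bprm_creds_for_exec")
int BPF_PROG(secure_all_execs, struct linux_binprm *bprm)
{
	/* Unconditionally request AT_SECURE for every exec; a real
	 * policy would gate this, e.g. on task local storage as in
	 * the selftest in patch 2/2.
	 */
	bpf_lsm_set_bprm_opts(bprm, BPF_LSM_F_BPRM_SECUREEXEC);
	return 0;
}
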
---
 include/uapi/linux/bpf.h   | 14 ++
 kernel/bpf/bpf_lsm.c   | 20 
 scripts/bpf_helpers_doc.py |  2 ++
 tools/include/uapi/linux/bpf.h | 14 ++
 4 files changed, 50 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 162999b12790..ed4f575be3d3 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3787,6 +3787,14 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * void bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
+ *
+ * Description
+ * Sets certain options on the *bprm*:
+ *
+ * **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
+ * which sets the **AT_SECURE** auxv for glibc.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3948,6 +3956,7 @@ union bpf_attr {
FN(task_storage_get),   \
FN(task_storage_delete),\
FN(get_current_task_btf),   \
+   FN(lsm_set_bprm_opts),  \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -4119,6 +4128,11 @@ enum bpf_lwt_encap_mode {
BPF_LWT_ENCAP_IP,
 };
 
+/* Flags for LSM helpers */
+enum {
+   BPF_LSM_F_BPRM_SECUREEXEC   = (1ULL << 0),
+};
+
 #define __bpf_md_ptr(type, name)   \
 union {\
type name;  \
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 553107f4706a..4d04fc490a14 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -51,6 +52,23 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
return 0;
 }
 
+BPF_CALL_2(bpf_lsm_set_bprm_opts, struct linux_binprm *, bprm, u64, flags)
+{
+   bprm->secureexec = (flags & BPF_LSM_F_BPRM_SECUREEXEC);
+   return 0;
+}
+
+BTF_ID_LIST_SINGLE(bpf_lsm_set_bprm_opts_btf_ids, struct, linux_binprm)
+
+const static struct bpf_func_proto bpf_lsm_set_bprm_opts_proto = {
+   .func   = bpf_lsm_set_bprm_opts,
+   .gpl_only   = false,
+   .ret_type   = RET_VOID,
+   .arg1_type  = ARG_PTR_TO_BTF_ID,
+   .arg1_btf_id= &bpf_lsm_set_bprm_opts_btf_ids[0],
+   .arg2_type  = ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -71,6 +89,8 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_task_storage_get_proto;
case BPF_FUNC_task_storage_delete:
return &bpf_task_storage_delete_proto;
+   case BPF_FUNC_lsm_set_bprm_opts:
+   return &bpf_lsm_set_bprm_opts_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index 31484377b8b1..c5bc947a70ad 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -418,6 +418,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
@@ -465,6 +466,7 @@ class PrinterHelpers(Printer):
 'struct bpf_tcp_sock',
 'struct bpf_tunnel_key',
 'struct bpf_xfrm_state',
+'struct linux_binprm',
 'struct pt_regs',
 'struct sk_reuseport_md',
 'struct sockaddr',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 162999b12790..ed4f575be3d3 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3787,6 +3787,14 @@ union bpf_attr {
  * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
  * Return
  * Pointer to the current task.
+ *
+ * void bpf_lsm_set_bprm_opts(struct linux_binprm *bprm, u64 flags)
+ *
+ * Description
+ * Sets certain options on the *bprm*:
+ *
+ * **BPF_LSM_F_BPRM_SECUREEXEC** Set the secureexec bit
+ * which sets the **AT_SECURE** auxv for glibc.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3948,6 +3956,7 @@ union bpf_attr {
FN(tas

[PATCH bpf-next 2/2] bpf: Add tests for bpf_lsm_set_bprm_opts

2020-11-16 Thread KP Singh
From: KP Singh 

The test forks a child process, updates the local storage to set/unset
the secureexec bit.

The BPF program in the test attaches to bprm_creds_for_exec which checks
the local storage of the current task to set the secureexec bit on the
binary parameters (bprm).

The child then execs a bash command with the environment variable
TMPDIR set in the envp.  The bash command returns a different exit code
based on its observed value of the TMPDIR variable.

Since TMPDIR is one of the variables that is ignored by the dynamic
loader when the secureexec bit is set, one should expect the
child execution to not see this value when the secureexec bit is set.

Signed-off-by: KP Singh 
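
For reference, the AT_SECURE bit that the test relies on can also be
inspected directly; a small illustrative program (not part of this
patch) that prints it:

#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* When bprm->secureexec is set, the new image sees AT_SECURE=1
	 * and glibc scrubs LD_PRELOAD, TMPDIR and similar variables.
	 */
	printf("AT_SECURE=%lu\n", getauxval(AT_SECURE));
	return 0;
}
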
---
 .../selftests/bpf/prog_tests/test_bprm_opts.c | 124 ++
 tools/testing/selftests/bpf/progs/bprm_opts.c |  35 +
 2 files changed, 159 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
 create mode 100644 tools/testing/selftests/bpf/progs/bprm_opts.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c 
b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
new file mode 100644
index ..cba1ef3dc8b4
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include "bprm_opts.skel.h"
+#include "network_helpers.h"
+
+#ifndef __NR_pidfd_open
+#define __NR_pidfd_open 434
+#endif
+
+static const char * const bash_envp[] = { "TMPDIR=shouldnotbeset", NULL };
+
+static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
+{
+   return syscall(__NR_pidfd_open, pid, flags);
+}
+
+static int update_storage(int map_fd, int secureexec)
+{
+   int task_fd, ret = 0;
+
+   task_fd = sys_pidfd_open(getpid(), 0);
+   if (task_fd < 0)
+   return errno;
+
+   ret = bpf_map_update_elem(map_fd, &task_fd, &secureexec, BPF_NOEXIST);
+   if (ret)
+   ret = errno;
+
+   close(task_fd);
+   return ret;
+}
+
+static int run_set_secureexec(int map_fd, int secureexec)
+{
+
+   int child_pid, child_status, ret, null_fd;
+
+   child_pid = fork();
+   if (child_pid == 0) {
+   null_fd = open("/dev/null", O_WRONLY);
+   if (null_fd == -1)
+   exit(errno);
+   dup2(null_fd, STDOUT_FILENO);
+   dup2(null_fd, STDERR_FILENO);
+   close(null_fd);
+
+   /* Ensure that all executions from hereon are
+* secure by setting a local storage which is read by
+* the bprm_creds_for_exec hook and sets bprm->secureexec.
+*/
+   ret = update_storage(map_fd, secureexec);
+   if (ret)
+   exit(ret);
+
+   /* If the binary is executed with secureexec=1, the dynamic
+* loader ignores and unsets certain variables like LD_PRELOAD,
+* TMPDIR etc. TMPDIR is used here to simplify the example, as
+* LD_PRELOAD requires a real .so file.
+*
+* If the value of TMPDIR is set, the bash command returns 10
+* and if the value is unset, it returns 20.
+*/
+   ret = execle("/bin/bash", "bash", "-c",
+"[[ -z \"${TMPDIR}\" ]] || exit 10 && exit 20",
+NULL, bash_envp);
+   if (ret)
+   exit(errno);
+   } else if (child_pid > 0) {
+   waitpid(child_pid, &child_status, 0);
+   ret = WEXITSTATUS(child_status);
+
+   /* If a secureexec occurred, the exit status should be 20.
+*/
+   if (secureexec && ret == 20)
+   return 0;
+
+   /* If normal execution happened the exit code should be 10.
+*/
+   if (!secureexec && ret == 10)
+   return 0;
+
+   return ret;
+   }
+
+   return -EINVAL;
+}
+
+void test_test_bprm_opts(void)
+{
+   int err, duration = 0;
+   struct bprm_opts *skel = NULL;
+
+   skel = bprm_opts__open_and_load();
+   if (CHECK(!skel, "skel_load", "skeleton failed\n"))
+   goto close_prog;
+
+   err = bprm_opts__attach(skel);
+   if (CHECK(err, "attach", "attach failed: %d\n", err))
+   goto close_prog;
+
+   /* Run the test with the secureexec bit unset */
+   err = run_set_secureexec(bpf_map__fd(skel->maps.secure_exec_task_map),
+0 /* secureexec */);
+   if (CHECK(err, "run_set_secureexec:0", "err = %d", err))
+   goto close_prog

[PATCH bpf-next v3 1/2] bpf: Augment the set of sleepable LSM hooks

2020-11-12 Thread KP Singh
From: KP Singh 

Update the set of sleepable hooks with the ones that do not trigger
a warning with might_fault() when exercised with the correct kernel
config options enabled, i.e.

DEBUG_ATOMIC_SLEEP=y
LOCKDEP=y
PROVE_LOCKING=y

This means that a sleepable LSM eBPF program can be attached to these
LSM hooks. A new helper method bpf_lsm_is_sleepable_hook is added and
the set is maintained locally in bpf_lsm.c

Signed-off-by: KP Singh 
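
For illustration (not part of this patch), attaching a sleepable
program to one of the hooks in this set only needs the "lsm.s/"
section prefix; a minimal sketch assuming vmlinux.h:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* file_open is in the sleepable set, so this program is loaded with
 * BPF_F_SLEEPABLE and may use helpers that can fault, e.g.
 * bpf_copy_from_user().
 */
SEC("lsm.s/file_open")
int BPF_PROG(sleepable_open, struct file *file)
{
	return 0;
}
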
---
 include/linux/bpf_lsm.h |  7 
 kernel/bpf/bpf_lsm.c| 81 +
 kernel/bpf/verifier.c   | 16 +---
 3 files changed, 89 insertions(+), 15 deletions(-)

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 73226181b744..0d1c33ace398 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -27,6 +27,8 @@ extern struct lsm_blob_sizes bpf_lsm_blob_sizes;
 int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
const struct bpf_prog *prog);
 
+bool bpf_lsm_is_sleepable_hook(u32 btf_id);
+
 static inline struct bpf_storage_blob *bpf_inode(
const struct inode *inode)
 {
@@ -54,6 +56,11 @@ void bpf_task_storage_free(struct task_struct *task);
 
 #else /* !CONFIG_BPF_LSM */
 
+static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
+{
+   return false;
+}
+
 static inline int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
  const struct bpf_prog *prog)
 {
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index e92c51bebb47..aed74b853415 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -72,6 +73,86 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
}
 }
 
+/* The set of hooks which are called without pagefaults disabled and are 
allowed
+ * to "sleep" and thus can be used for sleepable BPF programs.
+ */
+BTF_SET_START(sleepable_lsm_hooks)
+BTF_ID(func, bpf_lsm_bpf)
+BTF_ID(func, bpf_lsm_bpf_map)
+BTF_ID(func, bpf_lsm_bpf_map_alloc_security)
+BTF_ID(func, bpf_lsm_bpf_map_free_security)
+BTF_ID(func, bpf_lsm_bpf_prog)
+BTF_ID(func, bpf_lsm_bprm_check_security)
+BTF_ID(func, bpf_lsm_bprm_committed_creds)
+BTF_ID(func, bpf_lsm_bprm_committing_creds)
+BTF_ID(func, bpf_lsm_bprm_creds_for_exec)
+BTF_ID(func, bpf_lsm_bprm_creds_from_file)
+BTF_ID(func, bpf_lsm_capget)
+BTF_ID(func, bpf_lsm_capset)
+BTF_ID(func, bpf_lsm_cred_prepare)
+BTF_ID(func, bpf_lsm_file_ioctl)
+BTF_ID(func, bpf_lsm_file_lock)
+BTF_ID(func, bpf_lsm_file_open)
+BTF_ID(func, bpf_lsm_file_receive)
+BTF_ID(func, bpf_lsm_inet_conn_established)
+BTF_ID(func, bpf_lsm_inode_create)
+BTF_ID(func, bpf_lsm_inode_free_security)
+BTF_ID(func, bpf_lsm_inode_getattr)
+BTF_ID(func, bpf_lsm_inode_getxattr)
+BTF_ID(func, bpf_lsm_inode_mknod)
+BTF_ID(func, bpf_lsm_inode_need_killpriv)
+BTF_ID(func, bpf_lsm_inode_post_setxattr)
+BTF_ID(func, bpf_lsm_inode_readlink)
+BTF_ID(func, bpf_lsm_inode_rename)
+BTF_ID(func, bpf_lsm_inode_rmdir)
+BTF_ID(func, bpf_lsm_inode_setattr)
+BTF_ID(func, bpf_lsm_inode_setxattr)
+BTF_ID(func, bpf_lsm_inode_symlink)
+BTF_ID(func, bpf_lsm_inode_unlink)
+BTF_ID(func, bpf_lsm_kernel_module_request)
+BTF_ID(func, bpf_lsm_kernfs_init_security)
+BTF_ID(func, bpf_lsm_key_free)
+BTF_ID(func, bpf_lsm_mmap_file)
+BTF_ID(func, bpf_lsm_netlink_send)
+BTF_ID(func, bpf_lsm_path_notify)
+BTF_ID(func, bpf_lsm_release_secctx)
+BTF_ID(func, bpf_lsm_sb_alloc_security)
+BTF_ID(func, bpf_lsm_sb_eat_lsm_opts)
+BTF_ID(func, bpf_lsm_sb_kern_mount)
+BTF_ID(func, bpf_lsm_sb_mount)
+BTF_ID(func, bpf_lsm_sb_remount)
+BTF_ID(func, bpf_lsm_sb_set_mnt_opts)
+BTF_ID(func, bpf_lsm_sb_show_options)
+BTF_ID(func, bpf_lsm_sb_statfs)
+BTF_ID(func, bpf_lsm_sb_umount)
+BTF_ID(func, bpf_lsm_settime)
+BTF_ID(func, bpf_lsm_socket_accept)
+BTF_ID(func, bpf_lsm_socket_bind)
+BTF_ID(func, bpf_lsm_socket_connect)
+BTF_ID(func, bpf_lsm_socket_create)
+BTF_ID(func, bpf_lsm_socket_getpeername)
+BTF_ID(func, bpf_lsm_socket_getpeersec_dgram)
+BTF_ID(func, bpf_lsm_socket_getsockname)
+BTF_ID(func, bpf_lsm_socket_getsockopt)
+BTF_ID(func, bpf_lsm_socket_listen)
+BTF_ID(func, bpf_lsm_socket_post_create)
+BTF_ID(func, bpf_lsm_socket_recvmsg)
+BTF_ID(func, bpf_lsm_socket_sendmsg)
+BTF_ID(func, bpf_lsm_socket_shutdown)
+BTF_ID(func, bpf_lsm_socket_socketpair)
+BTF_ID(func, bpf_lsm_syslog)
+BTF_ID(func, bpf_lsm_task_alloc)
+BTF_ID(func, bpf_lsm_task_getsecid)
+BTF_ID(func, bpf_lsm_task_prctl)
+BTF_ID(func, bpf_lsm_task_setscheduler)
+BTF_ID(func, bpf_lsm_task_to_inode)
+BTF_SET_END(sleepable_lsm_hooks)
+
+bool bpf_lsm_is_sleepable_hook(u32 btf_id)
+{
+   return btf_id_set_contains(&sleepable_lsm_hooks, btf_id);
+}
+
 const struct bpf_prog_ops lsm_prog_ops = {
 };
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 10

[PATCH bpf-next v3 0/2] Sleepable LSM Hooks

2020-11-12 Thread KP Singh
From: KP Singh 

# v2 -> v3

  * Remove the list of non-sleepable hooks, will send a separate patch
to the lsm list based on the discussion with Daniel.
  * Add Andrii's ack for real

# v1 -> v2

  * Fixed typos and formatting errors.
  * Added Andrii's ack.

KP Singh (2):
  bpf: Augment the set of sleepable LSM hooks
  bpf: Expose bpf_d_path helper to sleepable LSM hooks

 include/linux/bpf_lsm.h  |  7 
 kernel/bpf/bpf_lsm.c | 81 
 kernel/bpf/verifier.c| 16 +---
 kernel/trace/bpf_trace.c |  7 +++-
 4 files changed, 95 insertions(+), 16 deletions(-)

-- 
2.29.2.299.gdc1121823c-goog



[PATCH bpf-next v3 2/2] bpf: Expose bpf_d_path helper to sleepable LSM hooks

2020-11-12 Thread KP Singh
From: KP Singh 

Sleepable hooks are never called from an NMI/interrupt context, so it is
safe to use the bpf_d_path helper in LSM programs attaching to these
hooks.

The helper is not restricted to sleepable programs and merely uses the
list of sleepable hooks as the initial subset of LSM hooks where it can
be used.

Acked-by: Andrii Nakryiko 

Signed-off-by: KP Singh 
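
A minimal sketch (not part of the patch, assuming vmlinux.h) of what
this enables, resolving the path of an opened file from a sleepable
hook:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("lsm.s/file_open")
int BPF_PROG(log_open, struct file *file)
{
	char path[64];

	/* file_open is in the sleepable set, so bpf_d_path() is now
	 * allowed here.
	 */
	if (bpf_d_path(&file->f_path, path, sizeof(path)) < 0)
		return 0;

	bpf_printk("open: %s", path);
	return 0;
}
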
---
 kernel/trace/bpf_trace.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e4515b0f62a8..eab1af02c90d 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1178,7 +1179,11 @@ BTF_SET_END(btf_allowlist_d_path)
 
 static bool bpf_d_path_allowed(const struct bpf_prog *prog)
 {
-   return btf_id_set_contains(&btf_allowlist_d_path, 
prog->aux->attach_btf_id);
+   if (prog->type == BPF_PROG_TYPE_LSM)
+   return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
+
+   return btf_id_set_contains(&btf_allowlist_d_path,
+  prog->aux->attach_btf_id);
 }
 
 BTF_ID_LIST_SINGLE(bpf_d_path_btf_ids, struct, path)
-- 
2.29.2.299.gdc1121823c-goog



Re: [PATCH bpf-next v2 1/2] bpf: Augment the set of sleepable LSM hooks

2020-11-12 Thread KP Singh
On Thu, Nov 12, 2020 at 11:35 PM Daniel Borkmann  wrote:
>
> On 11/12/20 9:03 PM, KP Singh wrote:
> > From: KP Singh 
> >
> > Update the set of sleepable hooks with the ones that do not trigger
> > a warning with might_fault() when exercised with the correct kernel
> > config options enabled, i.e.

[...]

>
> I think this is very useful info. I was wondering whether it would make sense
> to annotate these more closely to the code so there's less chance this info
> becomes stale? Maybe something like below, not sure ... issue is if you would
> just place a cant_sleep() in there it might be wrong since this should just
> document that it can be invoked from non-sleepable context but it might not
> have to.

Indeed, this is why I did not make an explicit cant_sleep() call for these hooks
in __bpf_prog_enter (with a change in the signature to pass struct *prog).

> diff --git a/security/security.c b/security/security.c
> index a28045dc9e7f..7899bf32cdaa 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -94,6 +94,11 @@ static __initdata bool debug;
>  pr_info(__VA_ARGS__);   \
>  } while (0)
>
> +/*
> + * Placeholder for now to document that hook implementation cannot sleep
> + * since it could potentially be called from non-sleepable context, too.
> + */
> +#define hook_cant_sleep()  do { } while (0)

Good idea!

At the very least, we can update the comments in lsm_hooks.h
which already mention some of the LSM hooks as being called from
non-sleepable contexts.

I will remove this comment, send a separate patch to security folks
and respin these patches.

-KP

> +
>   static bool __init is_enabled(struct lsm_info *lsm)
>   {
>  if (!lsm->enabled)
> @@ -2522,6 +2527,7 @@ void security_bpf_map_free(struct bpf_map *map)
>   }
>   void security_bpf_prog_free(struct bpf_prog_aux *aux)
>   {
> +   hook_cant_sleep();
>  call_void_hook(bpf_prog_free_security, aux);
>   }
>   #endif /* CONFIG_BPF_SYSCALL */


Re: [PATCH bpf-next v2 0/2] Sleepable LSM Hooks

2020-11-12 Thread KP Singh
On Thu, Nov 12, 2020 at 9:03 PM KP Singh  wrote:
>
> From: KP Singh 
>
> # v1 -> v2
>
>   * Fixed typos and formatting errors.
>   * Added Andrii's ack.

Oops, I sent an older patch file which does not have Andrii's ack.


[PATCH bpf-next v2 0/2] Sleepable LSM Hooks

2020-11-12 Thread KP Singh
From: KP Singh 

# v1 -> v2

  * Fixed typos and formatting errors.
  * Added Andrii's ack.

KP Singh (2):
  bpf: Augment the set of sleepable LSM hooks
  bpf: Expose bpf_d_path helper to sleepable LSM hooks

 include/linux/bpf_lsm.h  |   7 +++
 kernel/bpf/bpf_lsm.c | 121 +++
 kernel/bpf/verifier.c|  16 +-
 kernel/trace/bpf_trace.c |   7 ++-
 4 files changed, 135 insertions(+), 16 deletions(-)

-- 
2.29.2.222.g5d2a92d10f8-goog



[PATCH bpf-next v2 2/2] bpf: Expose bpf_d_path helper to sleepable LSM hooks

2020-11-12 Thread KP Singh
From: KP Singh 

Sleepable hooks are never called from an NMI/interrupt context, so it is
safe to use the bpf_d_path helper in LSM programs attaching to these
hooks.

The helper is not restricted to sleepable programs and merely uses the
list of sleepable hooks as the initial subset of LSM hooks where it can
be used.

Signed-off-by: KP Singh 
---
 kernel/trace/bpf_trace.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e4515b0f62a8..eab1af02c90d 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1178,7 +1179,11 @@ BTF_SET_END(btf_allowlist_d_path)
 
 static bool bpf_d_path_allowed(const struct bpf_prog *prog)
 {
-   return btf_id_set_contains(&btf_allowlist_d_path, 
prog->aux->attach_btf_id);
+   if (prog->type == BPF_PROG_TYPE_LSM)
+   return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
+
+   return btf_id_set_contains(&btf_allowlist_d_path,
+  prog->aux->attach_btf_id);
 }
 
 BTF_ID_LIST_SINGLE(bpf_d_path_btf_ids, struct, path)
-- 
2.29.2.222.g5d2a92d10f8-goog



[PATCH bpf-next v2 1/2] bpf: Augment the set of sleepable LSM hooks

2020-11-12 Thread KP Singh
From: KP Singh 

Update the set of sleepable hooks with the ones that do not trigger
a warning with might_fault() when exercised with the correct kernel
config options enabled, i.e.

DEBUG_ATOMIC_SLEEP=y
LOCKDEP=y
PROVE_LOCKING=y

This means that a sleepable LSM eBPF program can be attached to these
LSM hooks. A new helper method bpf_lsm_is_sleepable_hook is added and
the set is maintained locally in bpf_lsm.c

A comment is added about the list of LSM hooks that have been observed
to be called from softirqs, atomic contexts, or the ones that can
trigger pagefaults and thus should not be added to this list.

Signed-off-by: KP Singh 
---
 include/linux/bpf_lsm.h |   7 +++
 kernel/bpf/bpf_lsm.c| 121 
 kernel/bpf/verifier.c   |  16 +-
 3 files changed, 129 insertions(+), 15 deletions(-)

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 73226181b744..0d1c33ace398 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -27,6 +27,8 @@ extern struct lsm_blob_sizes bpf_lsm_blob_sizes;
 int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
const struct bpf_prog *prog);
 
+bool bpf_lsm_is_sleepable_hook(u32 btf_id);
+
 static inline struct bpf_storage_blob *bpf_inode(
const struct inode *inode)
 {
@@ -54,6 +56,11 @@ void bpf_task_storage_free(struct task_struct *task);
 
 #else /* !CONFIG_BPF_LSM */
 
+static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
+{
+   return false;
+}
+
 static inline int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
  const struct bpf_prog *prog)
 {
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index e92c51bebb47..47e25da9e8bb 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -72,6 +73,126 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
}
 }
 
+/* The set of hooks which are called without pagefaults disabled and are 
allowed
+ * to "sleep" and thus can be used for sleepable BPF programs.
+ *
+ * There are some hooks which have been observed to be called from a
+ * non-sleepable context and should not be added to this set:
+ *
+ *  bpf_lsm_bpf_prog_free_security
+ *  bpf_lsm_capable
+ *  bpf_lsm_cred_free
+ *  bpf_lsm_d_instantiate
+ *  bpf_lsm_file_alloc_security
+ *  bpf_lsm_file_mprotect
+ *  bpf_lsm_file_send_sigiotask
+ *  bpf_lsm_inet_conn_request
+ *  bpf_lsm_inet_csk_clone
+ *  bpf_lsm_inode_alloc_security
+ *  bpf_lsm_inode_follow_link
+ *  bpf_lsm_inode_permission
+ *  bpf_lsm_key_permission
+ *  bpf_lsm_locked_down
+ *  bpf_lsm_mmap_addr
+ *  bpf_lsm_perf_event_read
+ *  bpf_lsm_ptrace_access_check
+ *  bpf_lsm_req_classify_flow
+ *  bpf_lsm_sb_free_security
+ *  bpf_lsm_sk_alloc_security
+ *  bpf_lsm_sk_clone_security
+ *  bpf_lsm_sk_free_security
+ *  bpf_lsm_sk_getsecid
+ *  bpf_lsm_socket_sock_rcv_skb
+ *  bpf_lsm_sock_graft
+ *  bpf_lsm_task_free
+ *  bpf_lsm_task_getioprio
+ *  bpf_lsm_task_getscheduler
+ *  bpf_lsm_task_kill
+ *  bpf_lsm_task_setioprio
+ *  bpf_lsm_task_setnice
+ *  bpf_lsm_task_setpgid
+ *  bpf_lsm_task_setrlimit
+ *  bpf_lsm_unix_may_send
+ *  bpf_lsm_unix_stream_connect
+ *  bpf_lsm_vm_enough_memory
+ */
+BTF_SET_START(sleepable_lsm_hooks)
+BTF_ID(func, bpf_lsm_bpf)
+BTF_ID(func, bpf_lsm_bpf_map)
+BTF_ID(func, bpf_lsm_bpf_map_alloc_security)
+BTF_ID(func, bpf_lsm_bpf_map_free_security)
+BTF_ID(func, bpf_lsm_bpf_prog)
+BTF_ID(func, bpf_lsm_bprm_check_security)
+BTF_ID(func, bpf_lsm_bprm_committed_creds)
+BTF_ID(func, bpf_lsm_bprm_committing_creds)
+BTF_ID(func, bpf_lsm_bprm_creds_for_exec)
+BTF_ID(func, bpf_lsm_bprm_creds_from_file)
+BTF_ID(func, bpf_lsm_capget)
+BTF_ID(func, bpf_lsm_capset)
+BTF_ID(func, bpf_lsm_cred_prepare)
+BTF_ID(func, bpf_lsm_file_ioctl)
+BTF_ID(func, bpf_lsm_file_lock)
+BTF_ID(func, bpf_lsm_file_open)
+BTF_ID(func, bpf_lsm_file_receive)
+BTF_ID(func, bpf_lsm_inet_conn_established)
+BTF_ID(func, bpf_lsm_inode_create)
+BTF_ID(func, bpf_lsm_inode_free_security)
+BTF_ID(func, bpf_lsm_inode_getattr)
+BTF_ID(func, bpf_lsm_inode_getxattr)
+BTF_ID(func, bpf_lsm_inode_mknod)
+BTF_ID(func, bpf_lsm_inode_need_killpriv)
+BTF_ID(func, bpf_lsm_inode_post_setxattr)
+BTF_ID(func, bpf_lsm_inode_readlink)
+BTF_ID(func, bpf_lsm_inode_rename)
+BTF_ID(func, bpf_lsm_inode_rmdir)
+BTF_ID(func, bpf_lsm_inode_setattr)
+BTF_ID(func, bpf_lsm_inode_setxattr)
+BTF_ID(func, bpf_lsm_inode_symlink)
+BTF_ID(func, bpf_lsm_inode_unlink)
+BTF_ID(func, bpf_lsm_kernel_module_request)
+BTF_ID(func, bpf_lsm_kernfs_init_security)
+BTF_ID(func, bpf_lsm_key_free)
+BTF_ID(func, bpf_lsm_mmap_file)
+BTF_ID(func, bpf_lsm_netlink_send)
+BTF_ID(func, bpf_lsm_path_notify)
+BTF_ID(func, bpf_lsm_release_secctx)
+B

Re: [PATCH bpf-next 1/2] bpf: Augment the set of sleepable LSM hooks

2020-11-12 Thread KP Singh
On Thu, Nov 12, 2020 at 7:48 PM Andrii Nakryiko
 wrote:
>
> On Thu, Nov 12, 2020 at 9:20 AM KP Singh  wrote:
> >
> > From: KP Singh 
> >
> > Update the set of sleepable hooks with the ones that do not trigger
> > a warning with might_fault() when exercised with the correct kernel
> > config options enabled, i.e.
> >
> > DEBUG_ATOMIC_SLEEP=y
> > LOCKDEP=y
> > PROVE_LOCKING=y
> >
> > This means that a sleepable LSM eBPF prorgam can be attached to these
>
> typo: program

Fixed.

>
> > LSM hooks. A new helper method bpf_lsm_is_sleepable_hook is added and
> > the set is maintained locally in bpf_lsm.c
> >
> > A comment is added about the list of LSM hooks that have been observed
> > to be called from softirqs, atomic contexts, or the ones that can
> > trigger pagefaults and thus should not be added to this list.
> >
> > Signed-off-by: KP Singh 
> > ---
> >  include/linux/bpf_lsm.h |   7 +++
> >  kernel/bpf/bpf_lsm.c| 120 
> >  kernel/bpf/verifier.c   |  16 +-
> >  3 files changed, 128 insertions(+), 15 deletions(-)
> >
> > diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> > index 73226181b744..0d1c33ace398 100644
> > --- a/include/linux/bpf_lsm.h
> > +++ b/include/linux/bpf_lsm.h
> > @@ -27,6 +27,8 @@ extern struct lsm_blob_sizes bpf_lsm_blob_sizes;
> >  int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
> > const struct bpf_prog *prog);
> >
> > +bool bpf_lsm_is_sleepable_hook(u32 btf_id);
> > +
> >  static inline struct bpf_storage_blob *bpf_inode(
> > const struct inode *inode)
> >  {
> > @@ -54,6 +56,11 @@ void bpf_task_storage_free(struct task_struct *task);
> >
> >  #else /* !CONFIG_BPF_LSM */
> >
> > +static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
> > +{
> > +   return false;
> > +}
> > +
> >  static inline int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
> >   const struct bpf_prog *prog)
> >  {
> > diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> > index e92c51bebb47..3a6e927485c2 100644
> > --- a/kernel/bpf/bpf_lsm.c
> > +++ b/kernel/bpf/bpf_lsm.c
> > @@ -13,6 +13,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >
> >  /* For every LSM hook that allows attachment of BPF programs, declare a nop
> >   * function where a BPF program can be attached.
> > @@ -72,6 +73,125 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const 
> > struct bpf_prog *prog)
> > }
> >  }
> >
> > +/* The set of hooks which are called without pagefaults disabled and are 
> > allowed
> > + * to "sleep and thus can be used for sleeable BPF programs.
>
> typo: "sleep" (both quotes) or no quotes at all?

Fixed.

>
> > + *
> > + * There are some hooks which have been observed to be called from a
> > + * non-sleepable context and should not be added to this set:
> > + *
> > + *  bpf_lsm_bpf_prog_free_security
> > + *  bpf_lsm_capable
> > + *  bpf_lsm_cred_free
> > + *  bpf_lsm_d_instantiate
> > + *  bpf_lsm_file_alloc_security
> > + *  bpf_lsm_file_mprotect
> > + *  bpf_lsm_file_send_sigiotask
> > + *  bpf_lsm_inet_conn_request
> > + *  bpf_lsm_inet_csk_clone
> > + *  bpf_lsm_inode_alloc_security
> > + *  bpf_lsm_inode_follow_link
> > + *  bpf_lsm_inode_permission
> > + *  bpf_lsm_key_permission
> > + *  bpf_lsm_locked_down
> > + *  bpf_lsm_mmap_addr
> > + *  bpf_lsm_perf_event_read
> > + *  bpf_lsm_ptrace_access_check
> > + *  bpf_lsm_req_classify_flow
> > + *  bpf_lsm_sb_free_security
> > + *  bpf_lsm_sk_alloc_security
> > + *  bpf_lsm_sk_clone_security
> > + *  bpf_lsm_sk_free_security
> > + *  bpf_lsm_sk_getsecid
> > + *  bpf_lsm_socket_sock_rcv_skb
> > + *  bpf_lsm_sock_graft
> > + *  bpf_lsm_task_free
> > + *  bpf_lsm_task_getioprio
> > + *  bpf_lsm_task_getscheduler
> > + *  bpf_lsm_task_kill
> > + *  bpf_lsm_task_setioprio
> > + *  bpf_lsm_task_setnice
> > + *  bpf_lsm_task_setpgid
> > + *  bpf_lsm_task_setrlimit
> > + *  bpf_lsm_unix_may_send
> > + *  bpf_lsm_unix_stream_connect
> > + *  bpf_lsm_vm_enough_memory
> > + */
> > +BTF_SET_START(sleepable_lsm_hooks)BTF_ID(func, bpf_lsm_bpf)
>
> something is off here

Oops. Fixed.

>
> > +BTF_ID(func, bpf_lsm_bpf_map)
> > +BTF_ID(func, bpf_lsm_bpf_map_alloc_security)
> > +BTF_ID(func, bpf_lsm_bpf_map_free_security)
> > +BTF_ID(func, bpf_lsm_bpf_prog)
>
> [...]


[PATCH bpf-next 2/2] bpf: Expose bpf_d_path helper to sleepable LSM hooks

2020-11-12 Thread KP Singh
From: KP Singh 

Sleepable hooks are never called from an NMI/interrupt context, so it is
safe to use the bpf_d_path helper in LSM programs attaching to these
hooks.

The helper is not restricted to sleepable programs and merely uses the
list of sleepable hooks as the initial subset of LSM hooks where it can
be used.

Signed-off-by: KP Singh 
---
 kernel/trace/bpf_trace.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e4515b0f62a8..eab1af02c90d 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1178,7 +1179,11 @@ BTF_SET_END(btf_allowlist_d_path)
 
 static bool bpf_d_path_allowed(const struct bpf_prog *prog)
 {
-   return btf_id_set_contains(&btf_allowlist_d_path, 
prog->aux->attach_btf_id);
+   if (prog->type == BPF_PROG_TYPE_LSM)
+   return bpf_lsm_is_sleepable_hook(prog->aux->attach_btf_id);
+
+   return btf_id_set_contains(&btf_allowlist_d_path,
+  prog->aux->attach_btf_id);
 }
 
 BTF_ID_LIST_SINGLE(bpf_d_path_btf_ids, struct, path)
-- 
2.29.2.222.g5d2a92d10f8-goog



[PATCH bpf-next 1/2] bpf: Augment the set of sleepable LSM hooks

2020-11-12 Thread KP Singh
From: KP Singh 

Update the set of sleepable hooks with the ones that do not trigger
a warning with might_fault() when exercised with the correct kernel
config options enabled, i.e.

DEBUG_ATOMIC_SLEEP=y
LOCKDEP=y
PROVE_LOCKING=y

This means that a sleepable LSM eBPF prorgam can be attached to these
LSM hooks. A new helper method bpf_lsm_is_sleepable_hook is added and
the set is maintained locally in bpf_lsm.c

A comment is added about the list of LSM hooks that have been observed
to be called from softirqs, atomic contexts, or the ones that can
trigger pagefaults and thus should not be added to this list.

Signed-off-by: KP Singh 
---
 include/linux/bpf_lsm.h |   7 +++
 kernel/bpf/bpf_lsm.c| 120 
 kernel/bpf/verifier.c   |  16 +-
 3 files changed, 128 insertions(+), 15 deletions(-)

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 73226181b744..0d1c33ace398 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -27,6 +27,8 @@ extern struct lsm_blob_sizes bpf_lsm_blob_sizes;
 int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
const struct bpf_prog *prog);
 
+bool bpf_lsm_is_sleepable_hook(u32 btf_id);
+
 static inline struct bpf_storage_blob *bpf_inode(
const struct inode *inode)
 {
@@ -54,6 +56,11 @@ void bpf_task_storage_free(struct task_struct *task);
 
 #else /* !CONFIG_BPF_LSM */
 
+static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
+{
+   return false;
+}
+
 static inline int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
  const struct bpf_prog *prog)
 {
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index e92c51bebb47..3a6e927485c2 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -72,6 +73,125 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
}
 }
 
+/* The set of hooks which are called without pagefaults disabled and are 
allowed
+ * to "sleep and thus can be used for sleeable BPF programs.
+ *
+ * There are some hooks which have been observed to be called from a
+ * non-sleepable context and should not be added to this set:
+ *
+ *  bpf_lsm_bpf_prog_free_security
+ *  bpf_lsm_capable
+ *  bpf_lsm_cred_free
+ *  bpf_lsm_d_instantiate
+ *  bpf_lsm_file_alloc_security
+ *  bpf_lsm_file_mprotect
+ *  bpf_lsm_file_send_sigiotask
+ *  bpf_lsm_inet_conn_request
+ *  bpf_lsm_inet_csk_clone
+ *  bpf_lsm_inode_alloc_security
+ *  bpf_lsm_inode_follow_link
+ *  bpf_lsm_inode_permission
+ *  bpf_lsm_key_permission
+ *  bpf_lsm_locked_down
+ *  bpf_lsm_mmap_addr
+ *  bpf_lsm_perf_event_read
+ *  bpf_lsm_ptrace_access_check
+ *  bpf_lsm_req_classify_flow
+ *  bpf_lsm_sb_free_security
+ *  bpf_lsm_sk_alloc_security
+ *  bpf_lsm_sk_clone_security
+ *  bpf_lsm_sk_free_security
+ *  bpf_lsm_sk_getsecid
+ *  bpf_lsm_socket_sock_rcv_skb
+ *  bpf_lsm_sock_graft
+ *  bpf_lsm_task_free
+ *  bpf_lsm_task_getioprio
+ *  bpf_lsm_task_getscheduler
+ *  bpf_lsm_task_kill
+ *  bpf_lsm_task_setioprio
+ *  bpf_lsm_task_setnice
+ *  bpf_lsm_task_setpgid
+ *  bpf_lsm_task_setrlimit
+ *  bpf_lsm_unix_may_send
+ *  bpf_lsm_unix_stream_connect
+ *  bpf_lsm_vm_enough_memory
+ */
+BTF_SET_START(sleepable_lsm_hooks)BTF_ID(func, bpf_lsm_bpf)
+BTF_ID(func, bpf_lsm_bpf_map)
+BTF_ID(func, bpf_lsm_bpf_map_alloc_security)
+BTF_ID(func, bpf_lsm_bpf_map_free_security)
+BTF_ID(func, bpf_lsm_bpf_prog)
+BTF_ID(func, bpf_lsm_bprm_check_security)
+BTF_ID(func, bpf_lsm_bprm_committed_creds)
+BTF_ID(func, bpf_lsm_bprm_committing_creds)
+BTF_ID(func, bpf_lsm_bprm_creds_for_exec)
+BTF_ID(func, bpf_lsm_bprm_creds_from_file)
+BTF_ID(func, bpf_lsm_capget)
+BTF_ID(func, bpf_lsm_capset)
+BTF_ID(func, bpf_lsm_cred_prepare)
+BTF_ID(func, bpf_lsm_file_ioctl)
+BTF_ID(func, bpf_lsm_file_lock)
+BTF_ID(func, bpf_lsm_file_open)
+BTF_ID(func, bpf_lsm_file_receive)
+BTF_ID(func, bpf_lsm_inet_conn_established)
+BTF_ID(func, bpf_lsm_inode_create)
+BTF_ID(func, bpf_lsm_inode_free_security)
+BTF_ID(func, bpf_lsm_inode_getattr)
+BTF_ID(func, bpf_lsm_inode_getxattr)
+BTF_ID(func, bpf_lsm_inode_mknod)
+BTF_ID(func, bpf_lsm_inode_need_killpriv)
+BTF_ID(func, bpf_lsm_inode_post_setxattr)
+BTF_ID(func, bpf_lsm_inode_readlink)
+BTF_ID(func, bpf_lsm_inode_rename)
+BTF_ID(func, bpf_lsm_inode_rmdir)
+BTF_ID(func, bpf_lsm_inode_setattr)
+BTF_ID(func, bpf_lsm_inode_setxattr)
+BTF_ID(func, bpf_lsm_inode_symlink)
+BTF_ID(func, bpf_lsm_inode_unlink)
+BTF_ID(func, bpf_lsm_kernel_module_request)
+BTF_ID(func, bpf_lsm_kernfs_init_security)
+BTF_ID(func, bpf_lsm_key_free)
+BTF_ID(func, bpf_lsm_mmap_file)
+BTF_ID(func, bpf_lsm_netlink_send)
+BTF_ID(func, bpf_lsm_path_notify)
+BTF_ID(func, bpf_lsm_release_secctx)
+BTF_ID(func, bpf_lsm_sb_alloc_sec

Re: [PATCH bpf-next v5 8/9] bpf: Add tests for task_local_storage

2020-11-06 Thread KP Singh
On Fri, Nov 6, 2020 at 3:14 AM Alexei Starovoitov
 wrote:
>
> On Thu, Nov 05, 2020 at 10:58:26PM +0000, KP Singh wrote:
> > +
> > + ret = copy_file_range(fd_in, NULL, fd_out, NULL, stat.st_size, 0);
>
> centos7 glibc doesn't have it.
>
> /prog_tests/test_local_storage.c:59:8: warning: implicit declaration of 
> function ‘copy_file_range’; did you mean ‘sync_file_range’? 
> [-Wimplicit-function-declaration]
>59 |  ret = copy_file_range(fd_in, NULL, fd_out, NULL, stat.st_size, 0);
>   |^~~
>   |sync_file_range
>   BINARY   test_progs
>   BINARY   test_progs-no_alu32
> ld: test_local_storage.test.o: in function `copy_rm':
> test_local_storage.c:59: undefined reference to `copy_file_range'
>
> Could you use something else or wrap it similar to pidfd_open ?

Sure, I created a wrapper similar to pidfd_open and sent out a v6.


[PATCH bpf-next v6 2/9] bpf: Implement task local storage

2020-11-06 Thread KP Singh
From: KP Singh 

Similar to bpf_local_storage for sockets and inodes add local storage
for task_struct.

The life-cycle of storage is managed with the life-cycle of the
task_struct.  i.e. the storage is destroyed along with the owning task
with a callback to the bpf_task_storage_free from the task_free LSM
hook.

The BPF LSM allocates an __rcu pointer to the bpf_local_storage in
the security blob, which is now stackable and can co-exist with other
LSMs.

The userspace map operations can be done by using a pid fd as a key
passed to the lookup, update and delete operations.

Acked-by: Song Liu 
Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
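
To illustrate the userspace side (not part of the patch; the map fd is
assumed to come from a loaded skeleton or a pinned path), updating the
storage of the current task through its pidfd could look like:

#include <unistd.h>
#include <sys/syscall.h>
#include <bpf/bpf.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434
#endif

static int set_current_task_value(int map_fd, int value)
{
	/* Task local storage elements are keyed by a pidfd from
	 * userspace.
	 */
	int err, pidfd = syscall(__NR_pidfd_open, getpid(), 0);

	if (pidfd < 0)
		return -1;

	err = bpf_map_update_elem(map_fd, &pidfd, &value, BPF_ANY);
	close(pidfd);
	return err;
}
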
---
 include/linux/bpf_lsm.h|  23 +++
 include/linux/bpf_types.h  |   1 +
 include/uapi/linux/bpf.h   |  39 
 kernel/bpf/Makefile|   1 +
 kernel/bpf/bpf_lsm.c   |   4 +
 kernel/bpf/bpf_task_storage.c  | 315 +
 kernel/bpf/syscall.c   |   3 +-
 kernel/bpf/verifier.c  |  10 ++
 security/bpf/hooks.c   |   2 +
 tools/include/uapi/linux/bpf.h |  39 
 10 files changed, 436 insertions(+), 1 deletion(-)
 create mode 100644 kernel/bpf/bpf_task_storage.c

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index aaacb6aafc87..73226181b744 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -7,6 +7,7 @@
 #ifndef _LINUX_BPF_LSM_H
 #define _LINUX_BPF_LSM_H
 
+#include 
 #include 
 #include 
 
@@ -35,9 +36,21 @@ static inline struct bpf_storage_blob *bpf_inode(
return inode->i_security + bpf_lsm_blob_sizes.lbs_inode;
 }
 
+static inline struct bpf_storage_blob *bpf_task(
+   const struct task_struct *task)
+{
+   if (unlikely(!task->security))
+   return NULL;
+
+   return task->security + bpf_lsm_blob_sizes.lbs_task;
+}
+
 extern const struct bpf_func_proto bpf_inode_storage_get_proto;
 extern const struct bpf_func_proto bpf_inode_storage_delete_proto;
+extern const struct bpf_func_proto bpf_task_storage_get_proto;
+extern const struct bpf_func_proto bpf_task_storage_delete_proto;
 void bpf_inode_storage_free(struct inode *inode);
+void bpf_task_storage_free(struct task_struct *task);
 
 #else /* !CONFIG_BPF_LSM */
 
@@ -53,10 +66,20 @@ static inline struct bpf_storage_blob *bpf_inode(
return NULL;
 }
 
+static inline struct bpf_storage_blob *bpf_task(
+   const struct task_struct *task)
+{
+   return NULL;
+}
+
 static inline void bpf_inode_storage_free(struct inode *inode)
 {
 }
 
+static inline void bpf_task_storage_free(struct task_struct *task)
+{
+}
+
 #endif /* CONFIG_BPF_LSM */
 
 #endif /* _LINUX_BPF_LSM_H */
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 2e6f568377f1..99f7fd657d87 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -109,6 +109,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_SOCKHASH, sock_hash_ops)
 #endif
 #ifdef CONFIG_BPF_LSM
 BPF_MAP_TYPE(BPF_MAP_TYPE_INODE_STORAGE, inode_storage_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_TASK_STORAGE, task_storage_map_ops)
 #endif
 BPF_MAP_TYPE(BPF_MAP_TYPE_CPUMAP, cpu_map_ops)
 #if defined(CONFIG_XDP_SOCKETS)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index e6ceac3f7d62..f4037b2161a6 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -157,6 +157,7 @@ enum bpf_map_type {
BPF_MAP_TYPE_STRUCT_OPS,
BPF_MAP_TYPE_RINGBUF,
BPF_MAP_TYPE_INODE_STORAGE,
+   BPF_MAP_TYPE_TASK_STORAGE,
 };
 
 /* Note that tracing related programs such as
@@ -3742,6 +3743,42 @@ union bpf_attr {
  * Return
  * The helper returns **TC_ACT_REDIRECT** on success or
  * **TC_ACT_SHOT** on error.
+ *
+ * void *bpf_task_storage_get(struct bpf_map *map, struct task_struct *task, 
void *value, u64 flags)
+ * Description
+ * Get a bpf_local_storage from the *task*.
+ *
+ * Logically, it could be thought of as getting the value from
+ * a *map* with *task* as the **key**.  From this
+ * perspective,  the usage is not much different from
+ * **bpf_map_lookup_elem**\ (*map*, **&**\ *task*) except this
+ * helper enforces the key must be an task_struct and the map must 
also
+ * be a **BPF_MAP_TYPE_TASK_STORAGE**.
+ *
+ * Underneath, the value is stored locally at *task* instead of
+ * the *map*.  The *map* is used as the bpf-local-storage
+ * "type". The bpf-local-storage "type" (i.e. the *map*) is
+ * searched against all bpf_local_storage residing at *task*.
+ *
+ * An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
+ * used such that a new bpf_local_storage will be
+ * created if one does not exist.  *value* can be used
+ * together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
+ * the initial value o

[PATCH bpf-next v6 3/9] libbpf: Add support for task local storage

2020-11-06 Thread KP Singh
From: KP Singh 

Updates the bpf_probe_map_type API to also support
BPF_MAP_TYPE_TASK_STORAGE similar to other local storage maps.

Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
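
An illustrative (hypothetical) userspace check built on this probe,
assuming uapi headers that already define BPF_MAP_TYPE_TASK_STORAGE:

#include <stdbool.h>
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	/* bpf_probe_map_type() tries to create a minimal map of the
	 * given type and reports whether the kernel accepted it.
	 */
	bool ok = bpf_probe_map_type(BPF_MAP_TYPE_TASK_STORAGE, 0);

	printf("task_storage maps: %s\n", ok ? "supported" : "not supported");
	return 0;
}
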
---
 tools/lib/bpf/libbpf_probes.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index 5482a9b7ae2d..ecaae2927ab8 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -230,6 +230,7 @@ bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 
ifindex)
break;
case BPF_MAP_TYPE_SK_STORAGE:
case BPF_MAP_TYPE_INODE_STORAGE:
+   case BPF_MAP_TYPE_TASK_STORAGE:
btf_key_type_id = 1;
btf_value_type_id = 3;
value_size = 8;
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v6 8/9] bpf: Add tests for task_local_storage

2020-11-06 Thread KP Singh
From: KP Singh 

The test exercises the syscall based map operations by creating a pidfd
for the current process.

For verifying kernel / LSM functionality, the test implements a simple
MAC policy which denies an executable from unlinking itself. The LSM
program bprm_committed_creds sets a task_local_storage with a pointer to
the inode. This is then used to detect if the task is trying to unlink
itself in the inode_unlink LSM hook.

The test copies /bin/rm to /tmp and executes it in a child thread with
the intention of deleting itself. A successful test should prevent the
the running executable from deleting itself.

The bpf programs are also updated to call bpf_spin_{lock, unlock} to
trigger the verifier checks for spin locks.

The temporary file is cleaned up later in the test.

Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
---
 .../bpf/prog_tests/test_local_storage.c   | 185 --
 .../selftests/bpf/progs/local_storage.c   |  61 +-
 2 files changed, 226 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c 
b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
index 91cd6f357246..4e7f6a4965f2 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
@@ -4,30 +4,161 @@
  * Copyright (C) 2020 Google LLC.
  */
 
+#include 
+#include 
 #include 
 #include 
 
 #include "local_storage.skel.h"
 #include "network_helpers.h"
 
-int create_and_unlink_file(void)
+static inline int sys_pidfd_open(pid_t pid, unsigned int flags)
 {
-   char fname[PATH_MAX] = "/tmp/fileXX";
-   int fd;
+   return syscall(__NR_pidfd_open, pid, flags);
+}
+
+static inline ssize_t copy_file_range(int fd_in, loff_t *off_in, int fd_out,
+ loff_t *off_out, size_t len,
+ unsigned int flags)
+{
+   return syscall(__NR_copy_file_range, fd_in, off_in, fd_out, off_out,
+  len, flags);
+}
+
+static unsigned int duration;
+
+#define TEST_STORAGE_VALUE 0xbeefdead
 
-   fd = mkstemp(fname);
-   if (fd < 0)
-   return fd;
+struct storage {
+   void *inode;
+   unsigned int value;
+   /* Lock ensures that spin locked versions of local storage operations
+* also work, most operations in this test are still single threaded
+*/
+   struct bpf_spin_lock lock;
+};
+
+/* Copies an rm binary to a temp file. dest is a mkstemp template */
+static int copy_rm(char *dest)
+{
+   int fd_in, fd_out = -1, ret = 0;
+   struct stat stat;
+
+   fd_in = open("/bin/rm", O_RDONLY);
+   if (fd_in < 0)
+   return -errno;
+
+   fd_out = mkstemp(dest);
+   if (fd_out < 0) {
+   ret = -errno;
+   goto out;
+   }
+
+   ret = fstat(fd_in, &stat);
+   if (ret == -1) {
+   ret = -errno;
+   goto out;
+   }
+
+   ret = copy_file_range(fd_in, NULL, fd_out, NULL, stat.st_size, 0);
+   if (ret == -1) {
+   ret = -errno;
+   goto out;
+   }
+
+   /* Set executable permission on the copied file */
+   ret = chmod(dest, 0100);
+   if (ret == -1)
+   ret = -errno;
+
+out:
+   close(fd_in);
+   close(fd_out);
+   return ret;
+}
+
+/* Fork and exec the provided rm binary and return the exit code of the
+ * forked process and its pid.
+ */
+static int run_self_unlink(int *monitored_pid, const char *rm_path)
+{
+   int child_pid, child_status, ret;
+   int null_fd;
+
+   child_pid = fork();
+   if (child_pid == 0) {
+   null_fd = open("/dev/null", O_WRONLY);
+   dup2(null_fd, STDOUT_FILENO);
+   dup2(null_fd, STDERR_FILENO);
+   close(null_fd);
+
+   *monitored_pid = getpid();
+   /* Use the copied /usr/bin/rm to delete itself
+* /tmp/copy_of_rm /tmp/copy_of_rm.
+*/
+   ret = execlp(rm_path, rm_path, rm_path, NULL);
+   if (ret)
+   exit(errno);
+   } else if (child_pid > 0) {
+   waitpid(child_pid, _status, 0);
+   return WEXITSTATUS(child_status);
+   }
+
+   return -EINVAL;
+}
 
-   close(fd);
-   unlink(fname);
-   return 0;
+static bool check_syscall_operations(int map_fd, int obj_fd)
+{
+   struct storage val = { .value = TEST_STORAGE_VALUE, .lock = { 0 } },
+  lookup_val = { .value = 0, .lock = { 0 } };
+   int err;
+
+   /* Looking up an existing element should fail initially */
+   err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val,
+   BPF_F_LOCK);
+   if (CHECK(!err || errno != ENOENT, "bpf_map_lookup_elem",
+   

[PATCH bpf-next v6 7/9] bpf: Update selftests for local_storage to use vmlinux.h

2020-11-06 Thread KP Singh
From: KP Singh 

With BTF pruning of embedded types now fixed, the test can be
simplified to use vmlinux.h.

Acked-by: Song Liu 
Signed-off-by: KP Singh 
---
 .../selftests/bpf/progs/local_storage.c   | 20 +--
 1 file changed, 1 insertion(+), 19 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/local_storage.c 
b/tools/testing/selftests/bpf/progs/local_storage.c
index 09529e33be98..ef3822bc7542 100644
--- a/tools/testing/selftests/bpf/progs/local_storage.c
+++ b/tools/testing/selftests/bpf/progs/local_storage.c
@@ -4,9 +4,8 @@
  * Copyright 2020 Google LLC.
  */
 
+#include "vmlinux.h"
 #include 
-#include 
-#include 
 #include 
 #include 
 
@@ -36,23 +35,6 @@ struct {
__type(value, struct dummy_storage);
 } sk_storage_map SEC(".maps");
 
-/* TODO Use vmlinux.h once BTF pruning for embedded types is fixed.
- */
-struct sock {} __attribute__((preserve_access_index));
-struct sockaddr {} __attribute__((preserve_access_index));
-struct socket {
-   struct sock *sk;
-} __attribute__((preserve_access_index));
-
-struct inode {} __attribute__((preserve_access_index));
-struct dentry {
-   struct inode *d_inode;
-} __attribute__((preserve_access_index));
-struct file {
-   struct inode *f_inode;
-} __attribute__((preserve_access_index));
-
-
 SEC("lsm/inode_unlink")
 int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
 {
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v6 4/9] bpftool: Add support for task local storage

2020-11-06 Thread KP Singh
From: KP Singh 

Updates the binary to handle the BPF_MAP_TYPE_TASK_STORAGE as
"task_storage" for printing and parsing. Also updates the documentation
and bash completion

Acked-by: Song Liu 
Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
---
 tools/bpf/bpftool/Documentation/bpftool-map.rst | 3 ++-
 tools/bpf/bpftool/bash-completion/bpftool   | 2 +-
 tools/bpf/bpftool/map.c | 4 +++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/bpf/bpftool/Documentation/bpftool-map.rst 
b/tools/bpf/bpftool/Documentation/bpftool-map.rst
index dade10cdf295..3d52256ba75f 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-map.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-map.rst
@@ -50,7 +50,8 @@ MAP COMMANDS
 |  | **lru_percpu_hash** | **lpm_trie** | **array_of_maps** | 
**hash_of_maps**
 |  | **devmap** | **devmap_hash** | **sockmap** | **cpumap** | 
**xskmap** | **sockhash**
 |  | **cgroup_storage** | **reuseport_sockarray** | 
**percpu_cgroup_storage**
-|  | **queue** | **stack** | **sk_storage** | **struct_ops** | 
**ringbuf** | **inode_storage** }
+|  | **queue** | **stack** | **sk_storage** | **struct_ops** | 
**ringbuf** | **inode_storage**
+   | **task_storage** }
 
 DESCRIPTION
 ===
diff --git a/tools/bpf/bpftool/bash-completion/bpftool 
b/tools/bpf/bpftool/bash-completion/bpftool
index 3f1da30c4da6..fdffbc64c65c 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -705,7 +705,7 @@ _bpftool()
 hash_of_maps devmap devmap_hash sockmap cpumap 
\
 xskmap sockhash cgroup_storage 
reuseport_sockarray \
 percpu_cgroup_storage queue stack sk_storage \
-struct_ops inode_storage' -- \
+struct_ops inode_storage task_storage' -- \
"$cur" ) )
 return 0
 ;;
diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
index a7efbd84fbcc..b400364ee054 100644
--- a/tools/bpf/bpftool/map.c
+++ b/tools/bpf/bpftool/map.c
@@ -51,6 +51,7 @@ const char * const map_type_name[] = {
[BPF_MAP_TYPE_STRUCT_OPS]   = "struct_ops",
[BPF_MAP_TYPE_RINGBUF]  = "ringbuf",
[BPF_MAP_TYPE_INODE_STORAGE]= "inode_storage",
+   [BPF_MAP_TYPE_TASK_STORAGE] = "task_storage",
 };
 
 const size_t map_type_name_size = ARRAY_SIZE(map_type_name);
@@ -1464,7 +1465,8 @@ static int do_help(int argc, char **argv)
" lru_percpu_hash | lpm_trie | array_of_maps | 
hash_of_maps |\n"
" devmap | devmap_hash | sockmap | cpumap | 
xskmap | sockhash |\n"
" cgroup_storage | reuseport_sockarray | 
percpu_cgroup_storage |\n"
-   " queue | stack | sk_storage | struct_ops | 
ringbuf | inode_storage }\n"
+   " queue | stack | sk_storage | struct_ops | 
ringbuf | inode_storage |\n"
+   " task_storage }\n"
"   " HELP_SPEC_OPTIONS "\n"
"",
bin_name, argv[-2]);
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v6 1/9] bpf: Allow LSM programs to use bpf spin locks

2020-11-06 Thread KP Singh
From: KP Singh 

Usage of spin locks was not allowed for tracing programs due to
insufficient preemption checks. The verifier does not currently prevent
LSM programs from using spin locks, but the helpers are not exposed
via bpf_lsm_func_proto.

Based on the discussion in [1], non-sleepable LSM programs should be
able to use bpf_spin_{lock, unlock}.

Sleepable LSM programs can be preempted which means that allowng spin
locks will need more work (disabling preemption and the verifier
ensuring that no sleepable helpers are called when a spin lock is held).

[1]: 
https://lore.kernel.org/bpf/20201103153132.2717326-1-kpsi...@chromium.org/T/#md601a053229287659071600d3483523f752cd2fb

Acked-by: Song Liu 
Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
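
For illustration (not part of the patch), a non-sleepable LSM program
can now serialize updates to a shared map value with a spin lock; a
minimal sketch assuming vmlinux.h:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct open_stats {
	int count;
	struct bpf_spin_lock lock;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct open_stats);
} stats SEC(".maps");

/* lsm/ (not lsm.s/): spin locks stay restricted to non-sleepable
 * programs by this patch.
 */
SEC("lsm/file_open")
int BPF_PROG(count_opens, struct file *file)
{
	__u32 key = 0;
	struct open_stats *s = bpf_map_lookup_elem(&stats, &key);

	if (!s)
		return 0;

	bpf_spin_lock(&s->lock);
	s->count++;
	bpf_spin_unlock(&s->lock);
	return 0;
}
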
---
 kernel/bpf/bpf_lsm.c  |  4 
 kernel/bpf/verifier.c | 20 +++-
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 78ea8a7bd27f..cd8a617f2109 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -59,6 +59,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct 
bpf_prog *prog)
return &bpf_sk_storage_get_proto;
case BPF_FUNC_sk_storage_delete:
return &bpf_sk_storage_delete_proto;
+   case BPF_FUNC_spin_lock:
+   return &bpf_spin_lock_proto;
+   case BPF_FUNC_spin_unlock:
+   return &bpf_spin_unlock_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6200519582a6..f863aa84d0a2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9719,11 +9719,21 @@ static int check_map_prog_compatibility(struct 
bpf_verifier_env *env,
verbose(env, "trace type programs with run-time allocated hash 
maps are unsafe. Switch to preallocated hash maps.\n");
}
 
-   if ((is_tracing_prog_type(prog_type) ||
-prog_type == BPF_PROG_TYPE_SOCKET_FILTER) &&
-   map_value_has_spin_lock(map)) {
-   verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
-   return -EINVAL;
+   if (map_value_has_spin_lock(map)) {
+   if (prog_type == BPF_PROG_TYPE_SOCKET_FILTER) {
+   verbose(env, "socket filter progs cannot use 
bpf_spin_lock yet\n");
+   return -EINVAL;
+   }
+
+   if (is_tracing_prog_type(prog_type)) {
+   verbose(env, "tracing progs cannot use bpf_spin_lock 
yet\n");
+   return -EINVAL;
+   }
+
+   if (prog->aux->sleepable) {
+   verbose(env, "sleepable progs cannot use bpf_spin_lock 
yet\n");
+   return -EINVAL;
+   }
}
 
if ((bpf_prog_is_dev_bound(prog->aux) || bpf_map_is_dev_bound(map)) &&
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v6 5/9] bpf: Implement get_current_task_btf and RET_PTR_TO_BTF_ID

2020-11-06 Thread KP Singh
From: KP Singh 

The currently available bpf_get_current_task returns an unsigned integer
which can be used along with BPF_CORE_READ to read data from
the task_struct but still cannot be used as an input argument to a
helper that accepts an ARG_PTR_TO_BTF_ID of type task_struct.

In order to implement this helper a new return type, RET_PTR_TO_BTF_ID,
is added. This is similar to RET_PTR_TO_BTF_ID_OR_NULL but does not
require checking the nullness of returned pointer.

Acked-by: Song Liu 
Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
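
A short sketch (not part of the patch; map and program names made up)
combining the new helper with task local storage:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, int);
} exec_cnt SEC(".maps");

SEC("lsm/bprm_committed_creds")
int BPF_PROG(count_execs, struct linux_binprm *bprm)
{
	struct task_struct *task = bpf_get_current_task_btf();
	int *cnt;

	/* The returned BTF pointer can be passed directly to helpers
	 * that take an ARG_PTR_TO_BTF_ID of task_struct.
	 */
	cnt = bpf_task_storage_get(&exec_cnt, task, 0,
				   BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (cnt)
		(*cnt)++;
	return 0;
}
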
---
 include/linux/bpf.h|  1 +
 include/uapi/linux/bpf.h   |  9 +
 kernel/bpf/verifier.c  |  7 +--
 kernel/trace/bpf_trace.c   | 16 
 tools/include/uapi/linux/bpf.h |  9 +
 5 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2fffd30e13ac..73d5381a5d5c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -310,6 +310,7 @@ enum bpf_return_type {
RET_PTR_TO_BTF_ID_OR_NULL,  /* returns a pointer to a btf_id or 
NULL */
RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL, /* returns a pointer to a valid 
memory or a btf_id or NULL */
RET_PTR_TO_MEM_OR_BTF_ID,   /* returns a pointer to a valid memory 
or a btf_id */
+   RET_PTR_TO_BTF_ID,  /* returns a pointer to a btf_id */
 };
 
 /* eBPF function prototype used by verifier to allow BPF_CALLs from eBPF 
programs
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f4037b2161a6..9879d6793e90 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3779,6 +3779,14 @@ union bpf_attr {
  * 0 on success.
  *
  * **-ENOENT** if the bpf_local_storage cannot be found.
+ *
+ * struct task_struct *bpf_get_current_task_btf(void)
+ * Description
+ * Return a BTF pointer to the "current" task.
+ * This pointer can also be used in helpers that accept an
+ * *ARG_PTR_TO_BTF_ID* of type *task_struct*.
+ * Return
+ * Pointer to the current task.
  */
 #define __BPF_FUNC_MAPPER(FN)  \
FN(unspec), \
@@ -3939,6 +3947,7 @@ union bpf_attr {
FN(redirect_peer),  \
FN(task_storage_get),   \
FN(task_storage_delete),\
+   FN(get_current_task_btf),   \
/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 00960f6a83ec..10da26e55130 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5186,11 +5186,14 @@ static int check_helper_call(struct bpf_verifier_env 
*env, int func_id, int insn
PTR_TO_BTF_ID : PTR_TO_BTF_ID_OR_NULL;
regs[BPF_REG_0].btf_id = meta.ret_btf_id;
}
-   } else if (fn->ret_type == RET_PTR_TO_BTF_ID_OR_NULL) {
+   } else if (fn->ret_type == RET_PTR_TO_BTF_ID_OR_NULL ||
+  fn->ret_type == RET_PTR_TO_BTF_ID) {
int ret_btf_id;
 
mark_reg_known_zero(env, regs, BPF_REG_0);
-   regs[BPF_REG_0].type = PTR_TO_BTF_ID_OR_NULL;
+   regs[BPF_REG_0].type = fn->ret_type == RET_PTR_TO_BTF_ID ?
+PTR_TO_BTF_ID :
+PTR_TO_BTF_ID_OR_NULL;
ret_btf_id = *fn->ret_btf_id;
if (ret_btf_id == 0) {
verbose(env, "invalid return type %d of func %s#%d\n",
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4517c8b66518..e4515b0f62a8 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1022,6 +1022,20 @@ const struct bpf_func_proto bpf_get_current_task_proto = 
{
.ret_type   = RET_INTEGER,
 };
 
+BPF_CALL_0(bpf_get_current_task_btf)
+{
+   return (unsigned long) current;
+}
+
+BTF_ID_LIST_SINGLE(bpf_get_current_btf_ids, struct, task_struct)
+
+static const struct bpf_func_proto bpf_get_current_task_btf_proto = {
+   .func   = bpf_get_current_task_btf,
+   .gpl_only   = true,
+   .ret_type   = RET_PTR_TO_BTF_ID,
+   .ret_btf_id = &bpf_get_current_btf_ids[0],
+};
+
 BPF_CALL_2(bpf_current_task_under_cgroup, struct bpf_map *, map, u32, idx)
 {
struct bpf_array *array = container_of(map, struct bpf_array, map);
@@ -1265,6 +1279,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const 
struct bpf_prog *prog)
return &bpf_get_current_pid_tgid_proto;
case BPF_FUNC_get_current_task:
return &bpf_get_current_task_proto;
+   case BPF_FUNC_get_current_task_btf:
+   return &bpf_get_current_task_btf_proto;
case BPF_FUNC_get_current_uid_gid:
return &bpf_get_current_uid_gid_proto;
case BPF_FUNC_g

[PATCH bpf-next v6 0/9] Implement task_local_storage

2020-11-06 Thread KP Singh
From: KP Singh 

# v5 -> v6

- Using a wrapper for copy_file_range in selftests since it's missing
  in older libcs.
- Added Martin's acks.

# v4 -> v5

- Fixes to selftests as suggested by Martin.
- Added Martin's acks.

# v3 -> v4

- Move the patch that exposes spin lock helpers to LSM programs as the
  first patch as some of the changes in the implementation are actually
  for spin locks.
- Clarify the comment in the bpf_task_storage_{get, delete} helper as
  discussed with Martin.
- Added Martin's ack and rebased.

# v2 -> v3

- Added bpf_spin_locks to the selftests for local storage, found that
  these are not available for LSM programs.
- Made spin lock helpers available for LSM programs (except sleepable
  programs which need more work).
- Minor fixes for includes and added short commit messages for patches
  that were split up for libbpf and bpftool.
- Added Song's acks.

# v1 -> v2

- Updated the refcounting for task_struct and simplified conversion
  of fd -> struct pid.
- Some fixes suggested by Martin and Andrii, notably:
   * long return type for the bpf_task_storage_delete helper (update
 for bpf_inode_storage_delete will be sent separately).
   * Remove extra nullness check to task_storage_ptr in map syscall
 ops.
   * Changed the argument signature of the BPF helpers to use
 task_struct pointer in uapi headers.
   * Remove unnecessary verifier logic for the bpf_get_current_task_btf
 helper.
   * Split the changes for bpftool and libbpf.
- Exercised syscall operations for local storage (kept a simpler version
  in test_local_storage.c; the eventual goal will be to update
  sk_storage_map.c for all local storage types).
- Formatting fixes + Rebase.

We already have socket and inode local storage since [1]

This patch series:

* Implements bpf_local_storage for task_struct.
* Implements the bpf_get_current_task_btf helper which returns a BTF
  pointer to the current task. Not only is this generally cleaner
  (reading from the task_struct currently requires BPF_CORE_READ), it
  also allows the BTF pointer to be used in task_local_storage helpers.
* In order to implement this helper, a RET_PTR_TO_BTF_ID is introduced
  which works similar to RET_PTR_TO_BTF_ID_OR_NULL but does not require
  a nullness check.
* Implements a detection in selftests which uses the
  task local storage to deny a running executable from unlinking itself.

[1]: 
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=f836a56e84ffc9f1a1cd73f77e10404ca46a4616
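
For a rough idea of what the new map type looks like from a BPF
program's point of view, here is a minimal sketch (not taken from the
selftests; the map, hook and program names are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} task_counter SEC(".maps");

SEC("lsm/file_open")
int BPF_PROG(count_opens, struct file *file)
{
	struct task_struct *task = bpf_get_current_task_btf();
	__u64 *cnt;

	/* Storage is created on demand for the current task and freed
	 * automatically when the task is freed, so no pid-keyed hash
	 * map and no manual cleanup are needed.
	 */
	cnt = bpf_task_storage_get(&task_counter, task, 0,
				   BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (cnt)
		(*cnt)++;

	return 0;
}

char _license[] SEC("license") = "GPL";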

KP Singh (9):
  bpf: Allow LSM programs to use bpf spin locks
  bpf: Implement task local storage
  libbpf: Add support for task local storage
  bpftool: Add support for task local storage
  bpf: Implement get_current_task_btf and RET_PTR_TO_BTF_ID
  bpf: Fix tests for local_storage
  bpf: Update selftests for local_storage to use vmlinux.h
  bpf: Add tests for task_local_storage
  bpf: Exercise syscall operations for inode and sk storage

 include/linux/bpf.h   |   1 +
 include/linux/bpf_lsm.h   |  23 ++
 include/linux/bpf_types.h |   1 +
 include/uapi/linux/bpf.h  |  48 +++
 kernel/bpf/Makefile   |   1 +
 kernel/bpf/bpf_lsm.c  |   8 +
 kernel/bpf/bpf_task_storage.c | 315 ++
 kernel/bpf/syscall.c  |   3 +-
 kernel/bpf/verifier.c |  37 +-
 kernel/trace/bpf_trace.c  |  16 +
 security/bpf/hooks.c  |   2 +
 .../bpf/bpftool/Documentation/bpftool-map.rst |   3 +-
 tools/bpf/bpftool/bash-completion/bpftool |   2 +-
 tools/bpf/bpftool/map.c   |   4 +-
 tools/include/uapi/linux/bpf.h|  48 +++
 tools/lib/bpf/libbpf_probes.c |   1 +
 .../bpf/prog_tests/test_local_storage.c   | 200 ++-
 .../selftests/bpf/progs/local_storage.c   | 103 --
 18 files changed, 757 insertions(+), 59 deletions(-)
 create mode 100644 kernel/bpf/bpf_task_storage.c

-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v6 6/9] bpf: Fix tests for local_storage

2020-11-06 Thread KP Singh
From: KP Singh 

The {inode,sk}_storage_result variables, which check whether the correct
value was retrieved, were being clobbered unconditionally by the return
value of the bpf_{inode,sk}_storage_delete call.

Also, consistently use the newly added BPF_LOCAL_STORAGE_GET_F_CREATE
flag.

Acked-by: Song Liu 
Fixes: cd324d7abb3d ("bpf: Add selftests for local_storage")
Signed-off-by: KP Singh 
---
 .../selftests/bpf/progs/local_storage.c   | 24 ---
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/local_storage.c 
b/tools/testing/selftests/bpf/progs/local_storage.c
index 0758ba229ae0..09529e33be98 100644
--- a/tools/testing/selftests/bpf/progs/local_storage.c
+++ b/tools/testing/selftests/bpf/progs/local_storage.c
@@ -58,20 +58,22 @@ int BPF_PROG(unlink_hook, struct inode *dir, struct dentry 
*victim)
 {
__u32 pid = bpf_get_current_pid_tgid() >> 32;
struct dummy_storage *storage;
+   int err;
 
if (pid != monitored_pid)
return 0;
 
storage = bpf_inode_storage_get(&inode_storage_map, victim->d_inode, 0,
-BPF_SK_STORAGE_GET_F_CREATE);
+   BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;
 
-   if (storage->value == DUMMY_STORAGE_VALUE)
+   if (storage->value != DUMMY_STORAGE_VALUE)
inode_storage_result = -1;
 
-   inode_storage_result =
-   bpf_inode_storage_delete(&inode_storage_map, victim->d_inode);
+   err = bpf_inode_storage_delete(&inode_storage_map, victim->d_inode);
+   if (!err)
+   inode_storage_result = err;
 
return 0;
 }
@@ -82,19 +84,23 @@ int BPF_PROG(socket_bind, struct socket *sock, struct 
sockaddr *address,
 {
__u32 pid = bpf_get_current_pid_tgid() >> 32;
struct dummy_storage *storage;
+   int err;
 
if (pid != monitored_pid)
return 0;
 
storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0,
-BPF_SK_STORAGE_GET_F_CREATE);
+BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;
 
-   if (storage->value == DUMMY_STORAGE_VALUE)
+   if (storage->value != DUMMY_STORAGE_VALUE)
sk_storage_result = -1;
 
-   sk_storage_result = bpf_sk_storage_delete(&sk_storage_map, sock->sk);
+   err = bpf_sk_storage_delete(&sk_storage_map, sock->sk);
+   if (!err)
+   sk_storage_result = err;
+
return 0;
 }
 
@@ -109,7 +115,7 @@ int BPF_PROG(socket_post_create, struct socket *sock, int 
family, int type,
return 0;
 
storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0,
-BPF_SK_STORAGE_GET_F_CREATE);
+BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;
 
@@ -131,7 +137,7 @@ int BPF_PROG(file_open, struct file *file)
return 0;
 
storage = bpf_inode_storage_get(&inode_storage_map, file->f_inode, 0,
-BPF_LOCAL_STORAGE_GET_F_CREATE);
+   BPF_LOCAL_STORAGE_GET_F_CREATE);
if (!storage)
return 0;
 
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v6 9/9] bpf: Exercise syscall operations for inode and sk storage

2020-11-06 Thread KP Singh
From: KP Singh 

Use the check_syscall_operations added for task_local_storage to
exercise syscall operations for other local storage maps:

* Check the absence of an element for the given fd.
* Create a new element, retrieve and compare its value.
* Delete the element and check again for absence.
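
In userspace terms the sequence amounts to the following (an
illustrative sketch with a made-up helper name and an assumed plain
integer value type, not the selftest code itself):

#include <errno.h>
#include <bpf/bpf.h>

static int exercise_storage_map(int map_fd, int obj_fd)
{
	long value = 0xbeef, lookup = 0;

	/* 1. No element should exist for this fd yet. */
	if (!bpf_map_lookup_elem(map_fd, &obj_fd, &lookup) ||
	    errno != ENOENT)
		return -1;

	/* 2. Create a new element, read it back and compare. */
	if (bpf_map_update_elem(map_fd, &obj_fd, &value, BPF_NOEXIST))
		return -1;
	if (bpf_map_lookup_elem(map_fd, &obj_fd, &lookup) ||
	    lookup != value)
		return -1;

	/* 3. Delete the element and check for absence again. */
	if (bpf_map_delete_elem(map_fd, &obj_fd))
		return -1;
	if (!bpf_map_lookup_elem(map_fd, &obj_fd, &lookup) ||
	    errno != ENOENT)
		return -1;

	return 0;
}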

Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
---
 .../bpf/prog_tests/test_local_storage.c | 17 +++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c 
b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
index 4e7f6a4965f2..5fda45982be0 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c
@@ -157,7 +157,7 @@ static bool check_syscall_operations(int map_fd, int obj_fd)
 void test_test_local_storage(void)
 {
char tmp_exec_path[PATH_MAX] = "/tmp/copy_of_rmXX";
-   int err, serv_sk = -1, task_fd = -1;
+   int err, serv_sk = -1, task_fd = -1, rm_fd = -1;
struct local_storage *skel = NULL;
 
skel = local_storage__open_and_load();
@@ -181,6 +181,15 @@ void test_test_local_storage(void)
if (CHECK(err < 0, "copy_rm", "err %d errno %d\n", err, errno))
goto close_prog;
 
+   rm_fd = open(tmp_exec_path, O_RDONLY);
+   if (CHECK(rm_fd < 0, "open", "failed to open %s err:%d, errno:%d",
+ tmp_exec_path, rm_fd, errno))
+   goto close_prog;
+
+   if (!check_syscall_operations(bpf_map__fd(skel->maps.inode_storage_map),
+ rm_fd))
+   goto close_prog;
+
/* Sets skel->bss->monitored_pid to the pid of the forked child
 * forks a child process that executes tmp_exec_path and tries to
 * unlink its executable. This operation should be denied by the loaded
@@ -209,11 +218,15 @@ void test_test_local_storage(void)
CHECK(skel->data->sk_storage_result != 0, "sk_storage_result",
  "sk_local_storage not set\n");
 
-   close(serv_sk);
+   if (!check_syscall_operations(bpf_map__fd(skel->maps.sk_storage_map),
+ serv_sk))
+   goto close_prog;
 
 close_prog_unlink:
unlink(tmp_exec_path);
 close_prog:
+   close(serv_sk);
+   close(rm_fd);
close(task_fd);
local_storage__destroy(skel);
 }
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v2] bpf: Update verification logic for LSM programs

2020-11-05 Thread KP Singh
From: KP Singh 

The current logic checks if the name of the BTF type passed in
attach_btf_id starts with "bpf_lsm_". This is not sufficient, as it also
allows attachment to non-LSM hooks like the very function that performs
this check, i.e. bpf_lsm_verify_prog.

In order to ensure that this verification logic allows attachment to
only LSM hooks, the LSM_HOOK definitions in lsm_hook_defs.h are used to
generate a BTF_ID set. Upon verification, the attach_btf_id of the
program being attached is checked for presence in this set.
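
Conceptually, the membership test is a binary search over the sorted
array of BTF IDs that resolve_btfids fills in at build time; roughly
(this is a sketch of the idea, not the kernel's exact implementation):

#include <linux/types.h>

/* Sketch of what a btf_id_set_contains()-style lookup boils down to. */
static bool id_set_contains(const u32 *ids, u32 cnt, u32 id)
{
	u32 lo = 0, hi = cnt;

	while (lo < hi) {
		u32 mid = lo + (hi - lo) / 2;

		if (ids[mid] < id)
			lo = mid + 1;
		else if (ids[mid] > id)
			hi = mid;
		else
			return true;
	}
	return false;
}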

Signed-off-by: KP Singh 
---
 kernel/bpf/bpf_lsm.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 78ea8a7bd27f..56cc5a915f67 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include <linux/btf_ids.h>
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -26,7 +27,11 @@ noinline RET bpf_lsm_##NAME(__VA_ARGS__) \
 #include <linux/lsm_hook_defs.h>
 #undef LSM_HOOK
 
-#define BPF_LSM_SYM_PREFX  "bpf_lsm_"
+#define LSM_HOOK(RET, DEFAULT, NAME, ...) BTF_ID(func, bpf_lsm_##NAME)
+BTF_SET_START(bpf_lsm_hooks)
+#include <linux/lsm_hook_defs.h>
+#undef LSM_HOOK
+BTF_SET_END(bpf_lsm_hooks)
 
 int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
const struct bpf_prog *prog)
@@ -37,8 +42,7 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
return -EINVAL;
}
 
-   if (strncmp(BPF_LSM_SYM_PREFX, prog->aux->attach_func_name,
-   sizeof(BPF_LSM_SYM_PREFX) - 1)) {
+   if (!btf_id_set_contains(&bpf_lsm_hooks, prog->aux->attach_btf_id)) {
bpf_log(vlog, "attach_btf_id %u points to wrong type name %s\n",
prog->aux->attach_btf_id, prog->aux->attach_func_name);
return -EINVAL;
-- 
2.29.1.341.ge80a0c044ae-goog



Re: [PATCH bpf-next] bpf: Update verification logic for LSM programs

2020-11-05 Thread KP Singh
On Fri, Nov 6, 2020 at 12:02 AM KP Singh  wrote:
>
> From: KP Singh 
>
> The current logic checks if the name of the BTF type passed in
> attach_btf_id starts with "bpf_lsm_", this is not sufficient as it also
> allows attachment to non-LSM hooks like the very function that performs
> this check, i.e. bpf_lsm_verify_prog.
>
> In order to ensure that this verification logic allows attachment to
> only LSM hooks, the LSM_HOOK definitions in lsm_hook_defs.h are used to
> generate a BTD id set. The attach_btf_id of the program being attached

Fixing typo (BTD -> BTF) and resending.


[PATCH bpf-next] bpf: Update verification logic for LSM programs

2020-11-05 Thread KP Singh
From: KP Singh 

The current logic checks if the name of the BTF type passed in
attach_btf_id starts with "bpf_lsm_", this is not sufficient as it also
allows attachment to non-LSM hooks like the very function that performs
this check, i.e. bpf_lsm_verify_prog.

In order to ensure that this verification logic allows attachment to
only LSM hooks, the LSM_HOOK definitions in lsm_hook_defs.h are used to
generate a BTD id set. The attach_btf_id of the program being attached
is then checked for presence in this set.

Signed-off-by: KP Singh 
---
 kernel/bpf/bpf_lsm.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 78ea8a7bd27f..56cc5a915f67 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include <linux/btf_ids.h>
 
 /* For every LSM hook that allows attachment of BPF programs, declare a nop
  * function where a BPF program can be attached.
@@ -26,7 +27,11 @@ noinline RET bpf_lsm_##NAME(__VA_ARGS__) \
 #include <linux/lsm_hook_defs.h>
 #undef LSM_HOOK
 
-#define BPF_LSM_SYM_PREFX  "bpf_lsm_"
+#define LSM_HOOK(RET, DEFAULT, NAME, ...) BTF_ID(func, bpf_lsm_##NAME)
+BTF_SET_START(bpf_lsm_hooks)
+#include <linux/lsm_hook_defs.h>
+#undef LSM_HOOK
+BTF_SET_END(bpf_lsm_hooks)
 
 int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
const struct bpf_prog *prog)
@@ -37,8 +42,7 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
return -EINVAL;
}
 
-   if (strncmp(BPF_LSM_SYM_PREFX, prog->aux->attach_func_name,
-   sizeof(BPF_LSM_SYM_PREFX) - 1)) {
+   if (!btf_id_set_contains(&bpf_lsm_hooks, prog->aux->attach_btf_id)) {
bpf_log(vlog, "attach_btf_id %u points to wrong type name %s\n",
prog->aux->attach_btf_id, prog->aux->attach_func_name);
return -EINVAL;
-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v5 0/9] Implement task_local_storage

2020-11-05 Thread KP Singh
From: KP Singh 

# v4 -> v5

- Fixes to selftests as suggested by Martin.
- Added Martin's acks.

# v3 -> v4

- Move the patch that exposes spin lock helpers to LSM programs as the
  first patch as some of the changes in the implementation are actually
  for spin locks.
- Clarify the comment in the bpf_task_storage_{get, delete} helper as
  discussed with Martin.
- Added Martin's ack and rebased.

# v2 -> v3

- Added bpf_spin_locks to the selftests for local storage, found that
  these are not available for LSM programs.
- Made spin lock helpers available for LSM programs (except sleepable
  programs which need more work).
- Minor fixes for includes and added short commit messages for patches
  that were split up for libbpf and bpftool.
- Added Song's acks.

# v1 -> v2

- Updated the refcounting for task_struct and simplified conversion
  of fd -> struct pid.
- Some fixes suggested by Martin and Andrii, notably:
   * long return type for the bpf_task_storage_delete helper (update
 for bpf_inode_storage_delete will be sent separately).
   * Remove extra nullness check to task_storage_ptr in map syscall
 ops.
   * Changed the argument signature of the BPF helpers to use
 task_struct pointer in uapi headers.
   * Remove unnecessary verifier logic for the bpf_get_current_task_btf
 helper.
   * Split the changes for bpftool and libbpf.
- Exercised syscall operations for local storage (kept a simpler version
  in test_local_storage.c; the eventual goal will be to update
  sk_storage_map.c for all local storage types).
- Formatting fixes + Rebase.

We already have socket and inode local storage since [1]

This patch series:

* Implements bpf_local_storage for task_struct.
* Implements the bpf_get_current_task_btf helper which returns a BTF
  pointer to the current task. Not only is this generally cleaner
  (reading from the task_struct currently requires BPF_CORE_READ), it
  also allows the BTF pointer to be used in task_local_storage helpers.
* In order to implement this helper, a RET_PTR_TO_BTF_ID is introduced
  which works similar to RET_PTR_TO_BTF_ID_OR_NULL but does not require
  a nullness check.
* Implements a detection in selftests which uses the
  task local storage to deny a running executable from unlinking itself.

[1]: 
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=f836a56e84ffc9f1a1cd73f77e10404ca46a4616
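
To sketch the unlink-denial idea from the last bullet above (this only
illustrates the approach; the map, hooks and the "which binary" check
are simplified and are not the selftest program itself):

#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, int);
} exec_marker SEC(".maps");

/* Mark the task when it execs the binary of interest (the check for
 * which binary is being exec'd is elided here).
 */
SEC("lsm/bprm_committed_creds")
int BPF_PROG(mark_exec, struct linux_binprm *bprm)
{
	struct task_struct *task = bpf_get_current_task_btf();
	int *marked;

	marked = bpf_task_storage_get(&exec_marker, task, 0,
				      BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (marked)
		*marked = 1;
	return 0;
}

/* Deny unlink attempts made by the marked task. */
SEC("lsm/inode_unlink")
int BPF_PROG(deny_unlink, struct inode *dir, struct dentry *victim)
{
	struct task_struct *task = bpf_get_current_task_btf();
	int *marked;

	marked = bpf_task_storage_get(&exec_marker, task, 0, 0);
	if (marked && *marked)
		return -EPERM;
	return 0;
}

char _license[] SEC("license") = "GPL";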


KP Singh (9):
  bpf: Allow LSM programs to use bpf spin locks
  bpf: Implement task local storage
  libbpf: Add support for task local storage
  bpftool: Add support for task local storage
  bpf: Implement get_current_task_btf and RET_PTR_TO_BTF_ID
  bpf: Fix tests for local_storage
  bpf: Update selftests for local_storage to use vmlinux.h
  bpf: Add tests for task_local_storage
  bpf: Exercise syscall operations for inode and sk storage

 include/linux/bpf.h   |   1 +
 include/linux/bpf_lsm.h   |  23 ++
 include/linux/bpf_types.h |   1 +
 include/uapi/linux/bpf.h  |  48 +++
 kernel/bpf/Makefile   |   1 +
 kernel/bpf/bpf_lsm.c  |   8 +
 kernel/bpf/bpf_task_storage.c | 315 ++
 kernel/bpf/syscall.c  |   3 +-
 kernel/bpf/verifier.c |  37 +-
 kernel/trace/bpf_trace.c  |  16 +
 security/bpf/hooks.c  |   2 +
 .../bpf/bpftool/Documentation/bpftool-map.rst |   3 +-
 tools/bpf/bpftool/bash-completion/bpftool |   2 +-
 tools/bpf/bpftool/map.c   |   4 +-
 tools/include/uapi/linux/bpf.h|  48 +++
 tools/lib/bpf/libbpf_probes.c |   1 +
 .../bpf/prog_tests/test_local_storage.c   | 195 ++-
 .../selftests/bpf/progs/local_storage.c   | 103 --
 18 files changed, 752 insertions(+), 59 deletions(-)
 create mode 100644 kernel/bpf/bpf_task_storage.c

-- 
2.29.1.341.ge80a0c044ae-goog



[PATCH bpf-next v5 4/9] bpftool: Add support for task local storage

2020-11-05 Thread KP Singh
From: KP Singh 

Updates the binary to handle the BPF_MAP_TYPE_TASK_STORAGE as
"task_storage" for printing and parsing. Also updates the documentation
and bash completion.

Acked-by: Song Liu 
Acked-by: Martin KaFai Lau 
Signed-off-by: KP Singh 
---
 tools/bpf/bpftool/Documentation/bpftool-map.rst | 3 ++-
 tools/bpf/bpftool/bash-completion/bpftool   | 2 +-
 tools/bpf/bpftool/map.c | 4 +++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/bpf/bpftool/Documentation/bpftool-map.rst 
b/tools/bpf/bpftool/Documentation/bpftool-map.rst
index dade10cdf295..3d52256ba75f 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-map.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-map.rst
@@ -50,7 +50,8 @@ MAP COMMANDS
 |  | **lru_percpu_hash** | **lpm_trie** | **array_of_maps** | 
**hash_of_maps**
 |  | **devmap** | **devmap_hash** | **sockmap** | **cpumap** | 
**xskmap** | **sockhash**
 |  | **cgroup_storage** | **reuseport_sockarray** | 
**percpu_cgroup_storage**
-|  | **queue** | **stack** | **sk_storage** | **struct_ops** | 
**ringbuf** | **inode_storage** }
+|  | **queue** | **stack** | **sk_storage** | **struct_ops** | 
**ringbuf** | **inode_storage**
+   | **task_storage** }
 
 DESCRIPTION
 ===
diff --git a/tools/bpf/bpftool/bash-completion/bpftool 
b/tools/bpf/bpftool/bash-completion/bpftool
index 3f1da30c4da6..fdffbc64c65c 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -705,7 +705,7 @@ _bpftool()
 hash_of_maps devmap devmap_hash sockmap cpumap 
\
 xskmap sockhash cgroup_storage 
reuseport_sockarray \
 percpu_cgroup_storage queue stack sk_storage \
-struct_ops inode_storage' -- \
+struct_ops inode_storage task_storage' -- \
"$cur" ) )
 return 0
 ;;
diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
index a7efbd84fbcc..b400364ee054 100644
--- a/tools/bpf/bpftool/map.c
+++ b/tools/bpf/bpftool/map.c
@@ -51,6 +51,7 @@ const char * const map_type_name[] = {
[BPF_MAP_TYPE_STRUCT_OPS]   = "struct_ops",
[BPF_MAP_TYPE_RINGBUF]  = "ringbuf",
[BPF_MAP_TYPE_INODE_STORAGE]= "inode_storage",
+   [BPF_MAP_TYPE_TASK_STORAGE] = "task_storage",
 };
 
 const size_t map_type_name_size = ARRAY_SIZE(map_type_name);
@@ -1464,7 +1465,8 @@ static int do_help(int argc, char **argv)
" lru_percpu_hash | lpm_trie | array_of_maps | 
hash_of_maps |\n"
" devmap | devmap_hash | sockmap | cpumap | 
xskmap | sockhash |\n"
" cgroup_storage | reuseport_sockarray | 
percpu_cgroup_storage |\n"
-   " queue | stack | sk_storage | struct_ops | 
ringbuf | inode_storage }\n"
+   " queue | stack | sk_storage | struct_ops | 
ringbuf | inode_storage |\n"
+   " task_storage }\n"
"   " HELP_SPEC_OPTIONS "\n"
"",
bin_name, argv[-2]);
-- 
2.29.1.341.ge80a0c044ae-goog


