On 1/8/21 3:19 PM, Song Liu wrote:
To access per-task data, a BPF program typically creates a hash table with
the pid as the key. This is not ideal because:
  1. The user needs to estimate the required size of the hash table, which
     may be inaccurate;
  2. Big hash tables are slow;
  3. To clean up the data properly on task termination, the user needs
     to write extra code.

Task local storage overcomes these issues and is a better option for such
per-task data. Task local storage is currently only available to BPF_LSM
programs. Enable it for tracing programs as well.
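For context, with this change a tracing program could use a task storage map
roughly as in the sketch below (map name, program name, and the counting
logic are illustrative, not taken from this patch; headers assume a typical
libbpf skeleton build):

```c
/* Hypothetical sketch of BPF_MAP_TYPE_TASK_STORAGE used from a tracing
 * program, which this patch enables. Requires vmlinux.h from the target
 * kernel plus libbpf's helper headers.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} switch_count SEC(".maps");

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next)
{
	__u64 *cnt;

	/* No pid-keyed hash table to size, and no cleanup code: the
	 * storage is freed automatically when the task exits
	 * (bpf_task_storage_free() from the fork.c hunk below).
	 */
	cnt = bpf_task_storage_get(&switch_count, next, 0,
				   BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (cnt)
		(*cnt)++;
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```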

Reported-by: kernel test robot <l...@intel.com>

The whole patch was not reported by the kernel test robot; I think this tag
should be dropped.

Signed-off-by: Song Liu <songliubrav...@fb.com>
---
  include/linux/bpf.h            |  7 +++++++
  include/linux/bpf_lsm.h        | 22 ----------------------
  include/linux/bpf_types.h      |  2 +-
  include/linux/sched.h          |  5 +++++
  kernel/bpf/Makefile            |  3 +--
  kernel/bpf/bpf_local_storage.c | 28 +++++++++++++++++-----------
  kernel/bpf/bpf_lsm.c           |  4 ----
  kernel/bpf/bpf_task_storage.c  | 26 ++++++--------------------
  kernel/fork.c                  |  5 +++++
  kernel/trace/bpf_trace.c       |  4 ++++
  10 files changed, 46 insertions(+), 60 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 07cb5d15e7439..cf16548f28f7b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1480,6 +1480,7 @@ struct bpf_prog *bpf_prog_by_id(u32 id);
 struct bpf_link *bpf_link_by_id(u32 id);
 const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
+void bpf_task_storage_free(struct task_struct *task);
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
@@ -1665,6 +1666,10 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 {
	return NULL;
 }
+
+static inline void bpf_task_storage_free(struct task_struct *task)
+{
+}
 #endif /* CONFIG_BPF_SYSCALL */
[...]
