There is a memory leak in the lpm_trie map_free callback
function trie_free: the trie structure itself never gets freed.

Also, trie_free does not call synchronize_rcu before freeing
various data structures. This is incorrect, as some rcu_read_lock
regions for lookup, update, delete or get_next_key may not have completed yet.
The fix is to call synchronize_rcu at the beginning of trie_free.
The now-useless spin_lock is removed from this function as well.
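The teardown that trie_free performs after the grace period can be sketched as a minimal userspace analogue (plain C, with malloc/free standing in for kernel allocations; the `freed_nodes` counter is illustrative and not part of the kernel code):

```c
#include <stdlib.h>

/* Userspace sketch of trie_free's iterative teardown: repeatedly walk
 * from the root down to a node with no children, free it, and nullify
 * its slot in the parent.  In the kernel, synchronize_rcu() runs first
 * so no reader can still be traversing the nodes being freed. */

struct node {
	struct node *child[2];
};

struct trie {
	struct node *root;
};

static size_t freed_nodes;	/* illustrative: counts freed nodes */

static void trie_free(struct trie *trie)
{
	struct node **slot;
	struct node *node;

	for (;;) {
		slot = &trie->root;

		for (;;) {
			node = *slot;
			if (!node)
				goto out;

			if (node->child[0]) {
				slot = &node->child[0];
				continue;
			}
			if (node->child[1]) {
				slot = &node->child[1];
				continue;
			}

			free(node);
			freed_nodes++;
			*slot = NULL;	/* nullify reference in parent */
			break;
		}
	}
out:
	free(trie);	/* the leak fix: free the container itself */
}
```

Dropping the final free(trie) while still freeing all the nodes reproduces exactly the leak this patch fixes.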

Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation")
Reported-by: Mathieu Malaterre <>
Reported-by: Alexei Starovoitov <>
Tested-by: Mathieu Malaterre <>
Signed-off-by: Yonghong Song <>
---
 kernel/bpf/lpm_trie.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

  Make the comments more precise and the label name more appropriate,
  as suggested by Daniel

diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index 7b469d1..a75e02c 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -555,7 +555,10 @@ static void trie_free(struct bpf_map *map)
 	struct lpm_trie_node __rcu **slot;
 	struct lpm_trie_node *node;
 
-	raw_spin_lock(&trie->lock);
+	/* Wait for outstanding programs to complete
+	 * update/lookup/delete/get_next_key and free the trie.
+	 */
+	synchronize_rcu();
 
 	/* Always start at the root and walk down to a node that has no
 	 * children. Then free that node, nullify its reference in the parent
@@ -569,7 +572,7 @@ static void trie_free(struct bpf_map *map)
 			node = rcu_dereference_protected(*slot,
 					lockdep_is_held(&trie->lock));
 			if (!node)
-				goto unlock;
+				goto out;
 
 			if (rcu_access_pointer(node->child[0])) {
 				slot = &node->child[0];
@@ -587,8 +590,8 @@ static void trie_free(struct bpf_map *map)
 		}
 	}
 
-unlock:
-	raw_spin_unlock(&trie->lock);
+out:
+	kfree(trie);
 }
 
 static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
