This is a note to let you know that I've just added the patch titled
kobject: fix kset_find_obj() race with concurrent last kobject_put()
to the 3.8-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch
and it can be found in the queue-3.8 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.
From a49b7e82cab0f9b41f483359be83f44fbb6b4979 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <[email protected]>
Date: Sat, 13 Apr 2013 15:15:30 -0700
Subject: kobject: fix kset_find_obj() race with concurrent last kobject_put()
From: Linus Torvalds <[email protected]>
commit a49b7e82cab0f9b41f483359be83f44fbb6b4979 upstream.
Anatol Pomozov identified a race condition that hits module unloading
and re-loading. To quote Anatol:
"This is a race codition that exists between kset_find_obj() and
kobject_put(). kset_find_obj() might return kobject that has refcount
equal to 0 if this kobject is freeing by kobject_put() in other
thread.
Here is timeline for the crash in case if kset_find_obj() searches for
an object tht nobody holds and other thread is doing kobject_put() on
the same kobject:
THREAD A (calls kset_find_obj())     THREAD B (calls kobject_put())

spin_lock()
                                     atomic_dec_return(kobj->kref), counter gets zero here
                                     ... starts kobject cleanup ....
                                     spin_lock() // WAIT thread A in kobj_kset_leave()
iterate over kset->list
atomic_inc(kobj->kref) (counter becomes 1)
spin_unlock()
                                     spin_lock() // taken
                                     // it does not know that thread A increased counter so it
                                     removes obj from list
                                     spin_unlock()
                                     vfree(module) // frees module object with containing kobj

// kobj points to freed memory area!!
kobject_put(kobj) // OOPS!!!!
The race above happens because module.c tries to use kset_find_obj()
when somebody unloads a module.  The module.c code was introduced in
commit 6494a93d55fa"
Anatol supplied a patch specific for module.c that worked around the
problem by simply not using kset_find_obj() at all, but rather than make
a local band-aid, this just fixes kset_find_obj() to be thread-safe
using the proper model of refusing to get a new reference if the
refcount has already dropped to zero.
See examples of this proper refcount handling not only in the kref
documentation, but in various other equivalent uses of this pattern by
grepping for atomic_inc_not_zero().
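
For reference, the pattern boils down to a compare-and-swap loop that refuses
to move the count from 0 to 1.  A minimal sketch in userspace C11, assuming a
plain atomic_int refcount rather than the kernel's kref/atomic_t types:

#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the count is not already zero.  This is the
 * same idea as atomic_inc_not_zero()/kref_get_unless_zero(): a lookup
 * that finds an object whose last reference is already gone must fail
 * instead of resurrecting it. */
static bool ref_get_unless_zero(atomic_int *ref)
{
	int old = atomic_load(ref);

	while (old != 0) {
		/* On failure the CAS reloads 'old' with the current value. */
		if (atomic_compare_exchange_weak(ref, &old, old + 1))
			return true;	/* got a new reference */
	}
	return false;	/* already zero: the object is being torn down */
}

With such a helper, the lookup in the earlier sketch would simply skip any
object whose count is already zero, which is exactly what the patch below
makes kset_find_obj() do.
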
[ Side note: the module race does indicate that module loading and
unloading is not properly serialized wrt sysfs information using the
module mutex. That may require further thought, but this is the
correct fix at the kobject layer regardless. ]
Reported-analyzed-and-tested-by: Anatol Pomozov <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
lib/kobject.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
--- a/lib/kobject.c
+++ b/lib/kobject.c
@@ -529,6 +529,13 @@ struct kobject *kobject_get(struct kobje
 	return kobj;
 }
 
+static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
+{
+	if (!kref_get_unless_zero(&kobj->kref))
+		kobj = NULL;
+	return kobj;
+}
+
 /*
  * kobject_cleanup - free kobject resources.
  * @kobj: object to cleanup
@@ -751,7 +758,7 @@ struct kobject *kset_find_obj(struct kse
 
 	list_for_each_entry(k, &kset->list, entry) {
 		if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
-			ret = kobject_get(k);
+			ret = kobject_get_unless_zero(k);
 			break;
 		}
 	}
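
For completeness, the caller-visible contract is unchanged by this patch:
kset_find_obj() still returns either a kobject with a freshly taken reference
or NULL, and the caller drops the reference with kobject_put().  A
hypothetical caller, just to show that contract (the kset variable and the
"example-name" string are made up):

#include <linux/errno.h>
#include <linux/kobject.h>
#include <linux/printk.h>

static int example_lookup(struct kset *example_kset)
{
	struct kobject *kobj;

	kobj = kset_find_obj(example_kset, "example-name");
	if (!kobj)
		return -ENOENT;	/* not present, or already being torn down */

	pr_info("found %s\n", kobject_name(kobj));	/* reference held here */
	kobject_put(kobj);				/* drop it when done */
	return 0;
}
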
Patches currently in stable-queue which might be from
[email protected] are
queue-3.8/kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch
queue-3.8/vfs-revert-spurious-fix-to-spinning-prevention-in-prune_icache_sb.patch
queue-3.8/ipc-set-msg-back-to-eagain-if-copy-wasn-t-performed.patch
queue-3.8/x86-32-fix-possible-incomplete-tlb-invalidate-with-pae-pagetables.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html