From: Nadav Har'El <[email protected]>
Committer: Nadav Har'El <[email protected]>
Branch: master

sched: add _pinned flag for threads

Threads currently have a _migration_lock_counter which, when positive,
prevents the thread from being migrated to a different CPU. Using
sched::thread::pin() can make this counter permanently positive, but
we cannot distinguish that from a temporary increase due to migrate_disable().
In the future, to support pinning and re-pinning of other threads, we
will want to make that distinction.

So this patch adds a _pinned flag to each thread. When true, the thread was
permanently pinned with pin(). A true _pinned also accounts for an
increase of 1 in _migration_lock_counter.

In the future, the "_pinned" boolean flag should be replaced by a bitmask
of CPUs which this thread is allowed to be on, so that a thread can be
pinned to a set of CPUs instead of just one.

Signed-off-by: Nadav Har'El <[email protected]>
Message-Id: <[email protected]>

---
diff --git a/core/sched.cc b/core/sched.cc
--- a/core/sched.cc
+++ b/core/sched.cc
@@ -483,18 +483,20 @@ unsigned cpu::load()
 void thread::pin(cpu *target_cpu)
 {
     thread &t = *current();
-    // We want to wake this thread on the target CPU, but can't do this while
-    // it is still running on this CPU. So we need a different thread to
-    // complete the wakeup. We could re-used an existing thread (e.g., the
-    // load balancer thread) but a "good-enough" dirty solution is to
-    // temporarily create a new ad-hoc thread, "wakeme"
-    if (!t._migration_lock_counter) {
+    if (!t._pinned) {
+        // _pinned comes with a +1 increase to _migration_lock_counter.
         migrate_disable();
+        t._pinned = true;
     }
     cpu *source_cpu = cpu::current();
     if (source_cpu == target_cpu) {
         return;
     }
+    // We want to wake this thread on the target CPU, but can't do this while
+    // it is still running on this CPU. So we need a different thread to
+    // complete the wakeup. We could re-use an existing thread (e.g., the
+    // load balancer thread) but a "good-enough" dirty solution is to
+    // temporarily create a new ad-hoc thread, "wakeme".
     bool do_wakeme = false;
     thread wakeme([&] () {
         wait_until([&] { return do_wakeme; });
@@ -765,6 +767,7 @@ thread::thread(std::function<void ()> func, attr attr, bool main, bool app)
     , _detached_state(new detached_state(this))
     , _attr(attr)
     , _migration_lock_counter(0)
+    , _pinned(false)
     , _id(0)
     , _cleanup([this] { delete this; })
     , _app(app)
@@ -832,6 +835,7 @@ thread::thread(std::function<void ()> func, attr attr, bool main, bool app)

     if (_attr._pinned_cpu) {
         ++_migration_lock_counter;
+        _pinned = true;
     }

     if (main) {
diff --git a/include/osv/sched.hh b/include/osv/sched.hh
--- a/include/osv/sched.hh
+++ b/include/osv/sched.hh
@@ -630,6 +630,13 @@ private:
     std::unique_ptr<detached_state> _detached_state;
     attr _attr;
     int _migration_lock_counter;
+    // _migration_lock_counter being set may be temporary, but if _pinned
+    // is true, it was permanently incremented by 1 by sched::thread::pin().
+    // In the future, we should replace this boolean _pinned by a bitmask
+    // of allowed cpus for this thread (for full support of
+    // sched_setaffinity()), and the load balancer should consult this bitmask
+    // to decide to which cpus a thread may migrate.
+    bool _pinned;
     arch_thread _arch;
     unsigned int _id;
     std::atomic<bool> _interrupted;

--