Hello Joel,

Please have a look at the attached patch.

On 2014-05-20 23:24, Joel Sherrill wrote:
On 5/19/2014 1:34 AM, Sebastian Huber wrote:
>On 2014-05-16 19:48, Joel Sherrill wrote:
>>Hi
>>
>>Questions first:
>>
>>+ The affinity mask must be non-empty. This means a thread must
>>be able to be scheduled on some processor.  With cluster scheduling,
>>a scheduler implementation would only be associated with a subset
>>of processors. The affinity can't be set such that the thread can't be
>>scheduled by this scheduler instance.
>Ok.
>
>>How can a scheduler instance
>>know which processors it is associated with?
>You can store this information in the scheduler context if it needs this
>information quickly, or you have to iterate through the scheduler assignments:
I started to implement some validation but then looked back at
_Scheduler_Set_affinity(), and it only invokes the scheduler-specific
set affinity operation if the CPU is in the CPU set. So it is checked
before you get to the operation.

But, I think blindly invoking the first scheduler instance that
the thread has affinity for is dangerous and incorrect.

Since we don't allow a CPU set to overlap two or more scheduler instances, this is correct with the current default operation. In the patch I moved some checks from the default operation to _Scheduler_Set_affinity() to make this clearer.


I do not think that _Scheduler_Set_affinity() properly
addresses the case where the old affinity set is on one scheduler
instance and the new set moves it to another. The source scheduler
doesn't appear to get a chance to remove it from its bookkeeping.

_Scheduler_default_Set_affinity() calls _Scheduler_Set() after it has validated the CPU set. _Scheduler_Set() does all the bookkeeping. You probably have to split up the _Scheduler_Set() sequence to also account for the CPU set.


It is pretty clear that _Scheduler_Set_affinity() doesn't address this
because it blindly invokes the first scheduler instance the new
affinity matches without concern over whether that was the previous
scheduler instance.

The scheduler specific affinity routine can't address this because it
won't see it leaving.

I repeat that I really don't think changing affinity should move a
thread from one scheduler instance to another.  It is non-obvious
behavior and has some subtle side-effects that need to be managed.
_Scheduler_Set_affinity() could enforce this.

pthread_setaffinity_np() is the only way to set the scheduler via the non-portable POSIX API, so I would keep the ability to change the scheduler with it.


Moving schedulers and changing affinity should be rare and very
explicit operations.


--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP     : Public key available on request.

This message is not a business letter within the meaning of the EHUG.
>From 742119c8b7be77afaed2a08150ca64fe8df49a21 Mon Sep 17 00:00:00 2001
From: Sebastian Huber <sebastian.hu...@embedded-brains.de>
Date: Wed, 21 May 2014 09:07:51 +0200
Subject: [PATCH] score: _Scheduler_Set_affinity()

Check that the CPU set doesn't overlap two or more scheduler instances
in _Scheduler_Set_affinity() and remove this check in
_Scheduler_default_Set_affinity().

Allow that CPUs are set outside of the processor set of the system in
_Scheduler_default_Set_affinity().  This avoids errors in case an
application is used on systems with different processor sets.
---
 cpukit/score/include/rtems/score/schedulerimpl.h |   28 ++++------
 cpukit/score/src/schedulersetaffinity.c          |   61 +++++++++++++---------
 2 files changed, 48 insertions(+), 41 deletions(-)

diff --git a/cpukit/score/include/rtems/score/schedulerimpl.h b/cpukit/score/include/rtems/score/schedulerimpl.h
index 0f19833..46292f5 100644
--- a/cpukit/score/include/rtems/score/schedulerimpl.h
+++ b/cpukit/score/include/rtems/score/schedulerimpl.h
@@ -419,35 +419,31 @@ RTEMS_INLINE_ROUTINE bool _Scheduler_default_Set_affinity_body(
   const cpu_set_t         *cpuset
 )
 {
-  size_t   cpu_max   = _CPU_set_Maximum_CPU_count( cpusetsize );
+#if defined(RTEMS_SMP)
   uint32_t cpu_count = _SMP_Get_processor_count();
   uint32_t cpu_index;
   bool     ok = true;
 
+  /* Check that the CPU set covers all processors of this scheduler instance */
   for ( cpu_index = 0 ; cpu_index < cpu_count ; ++cpu_index ) {
-#if defined(RTEMS_SMP)
     const Scheduler_Control *scheduler_of_cpu =
       _Scheduler_Get_by_CPU_index( cpu_index );
 
-    ok = ok
-      && ( ( CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset )
-          && scheduler == scheduler_of_cpu )
-        || ( !CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset )
-          && scheduler != scheduler_of_cpu ) );
-#else
-    (void) scheduler;
-
-    ok = ok && CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset );
-#endif
+    if ( !CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset ) ) {
+      ok = ok && scheduler != scheduler_of_cpu;
+    }
   }
 
-  for ( ; cpu_index < cpu_max ; ++cpu_index ) {
-    ok = ok && !CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset );
+  if ( ok ) {
+    _Scheduler_Set( scheduler, the_thread );
   }
 
-  _Scheduler_Set( scheduler, the_thread );
-
   return ok;
+#else
+  (void) scheduler;
+
+  return CPU_ISSET_S( 0, cpusetsize, cpuset );
+#endif
 }
 
 bool _Scheduler_Set_affinity(
diff --git a/cpukit/score/src/schedulersetaffinity.c b/cpukit/score/src/schedulersetaffinity.c
index a20888b..b4fea3c 100644
--- a/cpukit/score/src/schedulersetaffinity.c
+++ b/cpukit/score/src/schedulersetaffinity.c
@@ -26,45 +26,56 @@ bool _Scheduler_Set_affinity(
   const cpu_set_t         *cpuset
 )
 {
-  bool ok;
-
-  if ( _CPU_set_Is_large_enough( cpusetsize ) ) {
 #if defined(RTEMS_SMP)
-    uint32_t cpu_count = _SMP_Get_processor_count();
-    uint32_t cpu_index;
+  uint32_t                 cpu_count = _SMP_Get_processor_count();
+  uint32_t                 cpu_index;
+  const Scheduler_Control *scheduler_of_cpuset = NULL;
+  bool                     ok = false;
+#endif
 
-    ok = false;
+  if ( !_CPU_set_Is_large_enough( cpusetsize ) ) {
+    return false;
+  }
 
-    for ( cpu_index = 0 ; cpu_index < cpu_count ; ++cpu_index ) {
-      if ( CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset ) ) {
-        const Scheduler_Control *scheduler_of_cpu =
-          _Scheduler_Get_by_CPU_index( cpu_index );
+#if defined(RTEMS_SMP)
+  /*
+   * Check that the CPU set doesn't overlap with two or more scheduler
+   * instances.
+   */
+  for ( cpu_index = 0 ; cpu_index < cpu_count ; ++cpu_index ) {
+    if ( CPU_ISSET_S( (int) cpu_index, cpusetsize, cpuset ) ) {
+      const Scheduler_Control *scheduler_of_cpu =
+        _Scheduler_Get_by_CPU_index( cpu_index );
 
-        if ( scheduler_of_cpu != NULL ) {
-          ok = ( *scheduler_of_cpu->Operations.set_affinity )(
-            scheduler_of_cpu,
-            the_thread,
-            cpusetsize,
-            cpuset
-          );
+      if ( scheduler_of_cpu != NULL ) {
+        if ( scheduler_of_cpuset == NULL ) {
+          scheduler_of_cpuset = scheduler_of_cpu;
+          ok = true;
+        } else {
+          ok = ok && scheduler_of_cpu == scheduler_of_cpuset;
         }
-
-        break;
       }
     }
-#else
-    ok = _Scheduler_default_Set_affinity_body(
-      _Scheduler_Get( the_thread ),
+  }
+
+  if ( ok ) {
+    ok = ( *scheduler_of_cpuset->Operations.set_affinity )(
+      scheduler_of_cpuset,
       the_thread,
       cpusetsize,
       cpuset
     );
-#endif
-  } else {
-    ok = false;
   }
 
   return ok;
+#else
+  return _Scheduler_default_Set_affinity_body(
+    _Scheduler_Get( the_thread ),
+    the_thread,
+    cpusetsize,
+    cpuset
+  );
+#endif
 }
 
 #endif /* defined(__RTEMS_HAVE_SYS_CPUSET_H__) */
-- 
1.7.7

_______________________________________________
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
