Joel,

On Wed, Mar 5, 2014 at 10:38 AM, Joel Sherrill <joel.sherr...@oarcorp.com> wrote:
>
> On 3/5/2014 9:05 AM, Sebastian Huber wrote:
>> On 2014-03-05 15:58, Joel Sherrill wrote:
>>> On Mar 5, 2014 8:33 AM, Sebastian Huber
>>> <sebastian.hu...@embedded-brains.de> wrote:
>>> >
>>> > On 2014-03-05 15:21, Joel Sherrill wrote:
>>> > > We discussed this privately when you raised this to me. It should
>>> > > not be part of the scheduler per thread data. If you do then the
>>> > > data just disappears from the API perspective when the scheduler
>>> > > doesn't support affinity. It will create a new class of odd errors
>>> > > at the API level which no other implementation of this type of API
>>> > > has. From an API perspective, it is the same as the preempt and
>>> > > timeslicing flags.
>>> > Oops, I forgot about our chat. I do think we need to revisit the issue
>>> > in the future though, and probably add calls to the scheduler from the
>>> > score affinity implementation.
>>> > It is the responsibility of the scheduler set/get thread affinity
>>> > operation to validate/return the affinity sets. Storing this stuff in
>>> > an arbitrary format in the Thread_Control structure makes no sense
>>> > since the thread affinity support is entirely scheduler dependent.
>>> >
>>> > >
>>> > > It is disabled in non-SMP configurations, so doesn't impact minimum
>>> > > footprint.
>>> > >
>>> > > I seriously considered this and decided it was a bad idea. I don't
>>> > > know if I have veto power :) but if I did, I would invoke it here.
>>> > >
>>> > > We can revisit this when we have a scheduler with affinity. As it
>>> > > stands now, there is absolutely no way to test any of this code if
>>> > > we move the affinity data to the scheduler.
>>> > >
>>> >
>>> > You can test the code if you add the scheduler set/get affinity
>>> > operations. The default implementation is trivial. We have only
>>> > global schedulers, so simply check that the affinity set is the set
>>> > of all processors in the system. For the get simply return the set of
>>> > all processors.
>>>
>>> I don't disagree with this statement but you are being short-sighted
>>> for iterative development.
>>>
>>> If we do this, then the tests can't set an arbitrary affinity and
>>> return it. We can commit this set with full passing tests.
>>>
>>> We will move the set when we touch the schedulers. We were very open
>>> and public about getting the APIs in and tested with data consistency
>>> and no scheduler changes as step one. Step two moves the information
>>> into the schedulers.
>>>
>>> If we make this change, we will commit nearly all broken tests.
>>>
>> It's fine if you move the information into the schedulers later. This
>> wasn't obvious from the current patch set.
>>
> Great! Thanks. I just started hacking on the scheduler part and it would
> be better to commit what we have and make sure it works before doing
> more.
>
> Unless someone has something else, Jennifer is going to commit and
> verify that the resulting master matches her results.
>

OK with me. I would like you to add a note to the SMP wiki page regarding
this issue about affinity in the TCB vs the scheduler. It seems to me that
any SMP scheduler must deal with the affinity API whether or not it
supports thread affinity; the scheduler can return an error if an
application tries to use affinity calls that it does not support.

-Gedare
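As a rough illustration of the default set/get affinity operations
described above (a global scheduler accepts only the set of all processors
on a set, and reports the set of all processors on a get), here is a
minimal standalone sketch. The names (scheduler_default_set_affinity,
PROCESSOR_COUNT, and so on) and the use of the glibc cpu_set_t macros are
assumptions for illustration only, not the actual RTEMS score API:

/*
 * Minimal sketch of default affinity operations for a global scheduler:
 * "set" only accepts the set of all processors, "get" reports the set of
 * all processors.  Illustrative names, not the RTEMS score API.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROCESSOR_COUNT 4   /* assumed number of processors in the system */

/* Build the set of all processors in the system. */
static void all_processors( cpu_set_t *set )
{
  CPU_ZERO( set );
  for ( uint32_t cpu = 0; cpu < PROCESSOR_COUNT; ++cpu ) {
    CPU_SET( cpu, set );
  }
}

/* Default "set affinity": valid only if the request is all processors. */
static bool scheduler_default_set_affinity( const cpu_set_t *requested )
{
  cpu_set_t all;

  all_processors( &all );
  return CPU_EQUAL( requested, &all );
}

/* Default "get affinity": a global scheduler may use every processor. */
static void scheduler_default_get_affinity( cpu_set_t *out )
{
  all_processors( out );
}

int main( void )
{
  cpu_set_t set;

  /* Requesting all processors is accepted. */
  all_processors( &set );
  printf( "all processors: %s\n",
          scheduler_default_set_affinity( &set ) ? "accepted" : "rejected" );

  /* Requesting a strict subset is rejected by the default operation. */
  CPU_ZERO( &set );
  CPU_SET( 0, &set );
  printf( "processor 0 only: %s\n",
          scheduler_default_set_affinity( &set ) ? "accepted" : "rejected" );

  /* The default "get" simply reports the set of all processors. */
  scheduler_default_get_affinity( &set );
  printf( "get affinity reports %d processors\n", CPU_COUNT( &set ) );

  return 0;
}

With operations of this shape in each scheduler's operations table, the
API-level set/get affinity calls can always be exercised, and a scheduler
without real affinity support simply rejects anything other than the full
processor set, which is the behavior described above.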