On Fri, 13 Apr 2018 at 13:56, Benoit St-Jean via Pharo-users <
pharo-us...@lists.pharo.org> wrote:

> Do we really need 8 delay schedulers (DelayMicrosecondScheduler,
> DelayMillisecondScheduler, DelayNullScheduler,
> DelayExperimentalSpinScheduler, DelaySpinScheduler, DelayTicklessScheduler,
> DelayExperimentalCourageousScheduler, DelayExperimentalSemaphoreScheduler)
> ?
>

I've cleaned up the delay scheduling subsystem [1] to condense these
alternatives and separate orthogonal functionality.  After a couple of
reviews this was integrated and made active in last week's Build 1273.
Could anyone with test scenarios from previous delay issues please have a
bash at stressing the latest build?

The old (existing) DelaySpinScheduler remains in the system so it can be
activated as a point of comparison (System > Settings > System > Delay
Scheduler). Pending any adverse reports, the final step will be to remove
the old hierarchy next week.

There remain separate mutex- and semaphore-based schedulers since:
* they differ by only a couple of overridden methods
* their slightly different implementations help highlight the core algorithm
* they provide a simple in-Image example of different synchronisation
mechanisms (see the sketch after this list)
* when isolating edge cases it's useful to be able to compare results between
implementations
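
To give a feel for the last two points, here is a minimal workspace sketch
(my own example, not the scheduler code itself) of the two synchronisation
mechanisms side by side:

    "Minimal workspace sketch (not the scheduler code itself): the same
     critical section protected by a Mutex and by a Semaphore."
    | mutex semaphore counter |
    counter := 0.

    "Mutex-based: #critical: takes care of acquiring and releasing."
    mutex := Mutex new.
    mutex critical: [ counter := counter + 1 ].

    "Semaphore-based: explicit wait/signal, with ensure: guarding the signal."
    semaphore := Semaphore forMutualExclusion.
    semaphore wait.
    [ counter := counter + 1 ] ensure: [ semaphore signal ].

    counter   "==> 2"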

I've retained both microsecond and millisecond operation, but extracted the
time base into "ticker" classes orthogonal to the "scheduling" classes
since:
* it makes the core scheduling algorithm independent of the time base
* a millisecond (or other custom) time base might(?) be more efficient on
smaller 32-bit embedded systems
* tests can now simulate ticker time (sketched below), so they avoid
interfering with the VM interaction of the system's active scheduler (which
may be the source of some random CI failures)
* delay scheduler tests are now independent of real time (which may be the
source of some random CI failures where delays are affected by varying CI
server loads)
* when multi-threaded FFI callbacks become available, this may facilitate
experimenting with:
     * wake-up from native timers
     * wake-up of embedded Pharo from the encompassing system
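
To sketch the ticker idea (the names below are mine, not the actual ticker
classes), the scheduler only needs something that answers the current tick,
and a test can substitute a value it controls for the real clock:

    "Rough workspace sketch of the ticker idea; a real time base versus one
     a test controls. The variable names are just for illustration."
    | realTicker simulatedNow simulatedTicker |
    realTicker := [ Time millisecondClockValue ].

    simulatedNow := 0.
    simulatedTicker := [ simulatedNow ].

    "A test can advance simulated time instantly instead of waiting."
    simulatedNow := simulatedNow + 500.
    simulatedTicker value   "==> 500"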


Other refactors:
* during system snapshot, save/restore of resumption times was happening at
user priority, which risked the race condition reported at [2]. *All*
modification of resumption times now occurs at timing (highest) priority, as
in the sketch below.
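
For example, a hedged sketch (not the integrated code) of evaluating a block
ahead of user-priority processes:

    "Sketch only: evaluate a block at timing priority so user-priority code
     cannot interleave with it. The block body is just a placeholder."
    | done |
    done := Semaphore new.
    [ "placeholder for adjusting resumption times"
      done signal ] forkAt: Processor timingPriority.
    done wait.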


[1]
https://pharo.manuscript.com/f/cases/22477/DelayScheduler-cleanup-and-refactoring
[2] https://pharo.manuscript.com/f/cases/18359/
