Mon Sep 15 06:28:46 PDT 2008  [EMAIL PROTECTED]
  * Work stealing for sparks
  
     Spark stealing support for the PARALLEL_HASKELL and THREADED_RTS versions
     of the RTS.
    
    Spark pools are per capability, separately allocated and held in the
    Capability structure. The implementation uses a double-ended queue
    (deque) with CAS-protected access.
    
    The write end of the queue (position bottom) can only be used with
    mutual exclusion, i.e. by exactly one caller at a time.
    Multiple readers can steal()/findSpark() from the read end
    (position top), and are synchronised without a lock by a CAS
    on the top position. One reader wins; the others return NULL
    to signal failure.
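    The steal protocol above can be sketched in C11 atomics. This is a
    minimal illustration, not GHC's actual Sparks.c code: the names
    (SparkPool, pushBottom, steal), the fixed-size circular buffer, and
    the particular memory orderings are all assumptions made for the sketch.

```c
#include <stdatomic.h>
#include <stddef.h>

#define POOL_SIZE 256

typedef struct {
    atomic_long top;               /* read end: thieves CAS this */
    atomic_long bottom;            /* write end: owner-only writes */
    void *elements[POOL_SIZE];
} SparkPool;

/* Owner-only (mutual exclusion assumed): push a spark at the bottom. */
static int pushBottom(SparkPool *p, void *spark) {
    long b = atomic_load_explicit(&p->bottom, memory_order_relaxed);
    long t = atomic_load_explicit(&p->top, memory_order_acquire);
    if (b - t >= POOL_SIZE)
        return 0;                                  /* pool is full */
    p->elements[b % POOL_SIZE] = spark;
    atomic_store_explicit(&p->bottom, b + 1, memory_order_release);
    return 1;
}

/* Any capability may call this: steal from the top.  Concurrent
   thieves race on a CAS of top; exactly one wins, the rest get NULL. */
static void *steal(SparkPool *p) {
    long t = atomic_load_explicit(&p->top, memory_order_acquire);
    long b = atomic_load_explicit(&p->bottom, memory_order_acquire);
    if (t >= b)
        return NULL;                               /* pool is empty */
    void *spark = p->elements[t % POOL_SIZE];
    if (!atomic_compare_exchange_strong(&p->top, &t, t + 1))
        return NULL;                               /* lost the race */
    return spark;
}
```

    Because only the owner advances bottom and thieves only advance top via
    CAS, a failed CAS simply means another thief took that slot first.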
    
    Work stealing is invoked when a Capability finds no other work
    (inside yieldCapability); it tries all capabilities 0..n-1 twice,
    stopping early if a theft succeeds.
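    The two-sweep loop described above might look like the following sketch.
    The names (findSpark, tryStealFrom) and the stand-in spark counts are
    hypothetical; the real loop in yieldCapability calls the CAS-based steal
    on each capability's pool.

```c
/* Hypothetical per-capability spark counts standing in for real pools. */
static int sparkCounts[4] = { 0, 0, 3, 0 };

/* Stand-in for the CAS-based steal(): succeed while sparks remain. */
static int tryStealFrom(int cap) {
    if (sparkCounts[cap] > 0) { sparkCounts[cap]--; return 1; }
    return 0;
}

/* Each pass visits capabilities 0..n-1; the whole sweep is done
   twice unless a theft succeeds earlier.  Returns the index of the
   capability stolen from, or -1 if no work was found anywhere. */
static int findSpark(int myCap, int nCaps) {
    for (int attempt = 0; attempt < 2; attempt++) {
        for (int i = 0; i < nCaps; i++) {
            if (i == myCap) continue;          /* skip our own pool */
            if (tryStealFrom(i)) return i;     /* theft succeeded */
        }
    }
    return -1;
}
```

    The second sweep covers the race where a victim's pool was momentarily
    empty (or a CAS was lost) during the first pass.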
    
    Inside schedulePushWork, all considered capabilities (those which were
    idle and could be grabbed) are woken up. Future versions should wake
    capabilities immediately when a new spark is put into the local pool,
    from newSpark().
  
  The patch has been re-recorded due to conflicting bugfixes in Sparks.c,
  also fixing a (strange) conflict in the scheduler.
  

    M ./includes/Regs.h -95 +1
    M ./includes/RtsTypes.h +34
    M ./rts/Capability.c -3 +54
    M ./rts/Capability.h +4
    M ./rts/Schedule.c -37 +73
    M ./rts/Sparks.c -108 +409
    M ./rts/Sparks.h -27 +42

View patch online:
http://darcs.haskell.org/ghc/_darcs/patches/20080915132846-54c9c-ef835837a05ebcfa503fe4e3bcfa25277a18a7dd.gz

_______________________________________________
Cvs-ghc mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/cvs-ghc
