On Mon, Mar 23, 2009 at 4:49 PM, Michael van der Gulik <[email protected]> wrote:
>
> On Mon, Mar 23, 2009 at 2:21 PM, Michael van der Gulik
> <[email protected]> wrote:
>
>> On Mon, Mar 23, 2009 at 12:34 PM, Igor Stasenko <[email protected]> wrote:
>>
>>> Now consider the overhead of creating a fork vs the actual useful code
>>> which is running within a block.
>>> I presume this code will run 10x slower on a single-core processor
>>> compared to one without forks. So you would need 10 cores just to match
>>> the computation time of a single core.
>>>
>>> I think it's not wise to introduce parallelism at such a low level (in
>>> Concurrent.Collections.Array>>withIndexDo:). It's like hammering nails
>>> with a microscope :)
>>>
>>> That's why I'm saying it's too good to be true.
>>> Introducing parallelism at such low levels would be a waste. I'm leaning
>>> toward the island model. That is a middle point between no sharing, as in
>>> Hydra, and sharing everything, as in what you're proposing.
>>
>> 10 times slower? Sounds like a made-up number to me...
>>
>> " Using 101 threads: "
>> c := ConcurrentArray new: 1000001.
>> Time millisecondsToRun: [c withIndexDo: [ :each :i | c at: i put: i asString. ]].
>> 5711
>> 5626
>> 6074
>>
>> " Using 11 threads: "
>> c := ConcurrentArray new: 1000001.
>> Time millisecondsToRun: [c withIndexDo: [ :each :i | c at: i put: i asString. ]].
>> 3086
>> 3406
>> 3256
>>
>> " Using 1 thread: "
>> d := Array new: 1000001.
>> Time millisecondsToRun: [d withIndexDo: [ :each :i | d at: i put: i asString]].
>> 2426
>> 2610
>> 2599
>>
>> My implementation is 1/2 to 1/3 the speed of the single-threaded Array.
>> If the blocks did more work, the forking overhead would be proportionally
>> smaller and some benefit would be gained from using multiple cores.
>>
>> I don't have a good idea of where the overhead is going - maybe it's being
>> lost in the block copying that is needed to work around Squeak's
>> deficiencies? Or maybe it's waiting for the scheduler to do its stuff?
>>
>> Implementation attached, MIT license if you're picky.
>
> I just tried it on VisualWorks as well. I removed the block copying and
> renamed the method to "keysDo:" (I never thought of an array like that
> before... keys and values).
>
> " Using 1 thread: "
> d := Array new: 1000001.
> Time millisecondsToRun: [d keysDo: [ :i | d at: i put: i printString]].
> 1180
> 982
> 1008
>
> " Using 101 threads "
> c := ConcurrentArray new: 1000001.
> Time millisecondsToRun: [c keysDo: [ :i | c at: i put: i printString. ]].
> 1072
> 1120
> 962
>
> At this stage, I'm suspicious about the results :-).

I tried it on SmalltalkMT on a dual-core system, but failed. I couldn't work
out how to use its Semaphores; they don't behave like those in other
Smalltalks and seem to be some MS Windows concoction. In fact, the whole
environment feels like C++ with the Windows API. I also failed to get
SmalltalkMT to use 100% of both cores on my machine, despite forking lots of
busy blocks.

Huemul Smalltalk crashed when I tried to fork something.

So... I remain saddened. It appears that Hydra and GemStone are the nearest
things we have to a multi-core Smalltalk, and they only do coarse-grained
parallelism. This is a sad state of affairs.

Gulik.

--
http://gulik.pbwiki.com/
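For readers without the attachment: a minimal sketch of how a parallel
withIndexDo: along these lines might be written in Squeak/Pharo, splitting
the index range across a handful of forked Processes and joining on a
Semaphore. The chunk count and the method body are assumptions for
illustration, not the attached implementation.

"Assumed: ConcurrentArray is a subclass of Array; this is a sketch only."
withIndexDo: aBlock
	| chunks chunkSize done |
	chunks := 10.	"number of forked Processes - an arbitrary assumption"
	chunkSize := (self size / chunks) ceiling.
	done := Semaphore new.
	1 to: chunks do: [ :c |
		| start stop |
		start := ((c - 1) * chunkSize) + 1.
		stop := (c * chunkSize) min: self size.
		"On pre-closure Squeak images the temporaries captured here would
		 need the block-copying workaround mentioned above."
		[ start to: stop do: [ :i | aBlock value: (self at: i) value: i ].
		  done signal ] fork ].
	chunks timesRepeat: [ done wait ]

The timing snippets above would exercise such a method unchanged.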
