2009/3/22 Igor Stasenko <[email protected]>:
> 2009/3/22 Michael van der Gulik <[email protected]>:
>>
>>
>> On Sat, Mar 21, 2009 at 8:38 AM, Janko Mivšek <[email protected]>
>> wrote:
>>>
>>> Philippe Marschall pravi:
>>> >> Michael van der Gulik wrote:
>>>
>>> >> So now it seems that Gemstone is the only multi-core capable Smalltalk
>>> >> VM :-(.
>>>
>>> > AFAIK Gemstone isn't multi-core capable either. You can just run
>>> > multiple gems and they share the same persistent memory. Which is
>>> > similar, but different.
>>>
>>> Well, Gemstone can for sure be considered multi-core capable. Every
>>> gem runs in its own OS process and therefore can run on its own CPU
>>> core. All gems then share a Shared Memory Cache. So, a typical
>>> multi-core scenario.
>>>
>> By multi-core, I mean that the following code would spread CPU usage over at
>> least two cores of a CPU or computer for a while:
>>
>> | sum1 sum2 |
>>
>> sum1 := 0. sum2 := 0.
>>
>> [ 1 to: 10000000 do: [ :i | sum1 := sum1 + 1 ] ] fork.
>>
>> [ 1 to: 10000000 do: [ :i | sum2 := sum2 + 1 ] ] fork.
>>
>> (I didn't try the above so there might be obvious bugs)
>>
>> If a VM can't distribute the load for the above over two or more CPU cores,
>> I consider its multi-core capabilities a hack. No offense intended to the
>> Hydra VM.
>>
>
> Michael, that would be too good to be true, especially for Smalltalk.
>
> Consider the following:
>
> | array |
>
> array := Array new: 10.
>
> [ 1 to: 10000000 do: [ :i | array at: 10 atRandom put: (Array new: 10) ] ] fork.
> [ 1 to: 10000000 do: [ :i | array at: 10 atRandom put: (Array new: 10) ] ] fork.
> 1 to: 10000000 do: [ :i | array at: 10 atRandom put: (Array new: 10) ].
>
> This code reveals the following problems:
> - concurrent access to the same object
> - heavy memory allocation across three running processes, which at
> some point triggers a GC.
> While the first is more or less in the hands of the developer (write
> proper code to avoid such things), the second is a problem you need
> to solve in order to collect garbage in real time while multiple
> threads are producing it.
>
> Another problem, which will force you to rewrite many things in the
> Smalltalk code base, is concurrent access to complex collections such
> as OrderedCollections, Streams and Dictionaries:
> | dict |
> dict := Dictionary new.
> [ 10000 timesRepeat: [ dict at: 1000000 atRandom put: 1 ]] fork.
> [ 10000 timesRepeat: [ dict at: 1000000 atRandom put: 1 ]] fork.
>
> At some point the dictionary will require rehashing while another
> thread is still constantly putting new values into it. Obviously,
> access to the dictionary must be synchronized to avoid conflicts.
> And synchronized access to collections ( semaphore critical: [] )
> makes them really slow, which scales very poorly across multiple
> cores. The code above will take more time to complete than the same
> code running in a single native thread (green threading model),
> because there you don't have to deal with synchronization.
>

Sorry, my fault. You need to sync in both cases (even with green threads).
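
For illustration, a minimal synchronized variant of the dictionary
example might look like this (a sketch; it assumes Squeak's standard
Semaphore class>>forMutualExclusion and Integer>>atRandom):

```smalltalk
| dict lock |
dict := Dictionary new.
lock := Semaphore forMutualExclusion.

"Both writers serialize their at:put: through the same lock,
 so a rehash can never run while the other process is mid-update."
[ 10000 timesRepeat: [
    lock critical: [ dict at: 1000000 atRandom put: 1 ] ] ] fork.
[ 10000 timesRepeat: [
    lock critical: [ dict at: 1000000 atRandom put: 1 ] ] ] fork.
```

Note that the lock now serializes every single write, which is exactly
the contention cost discussed above.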

But what strikes me is that there is a lot of code which never cares
about this; for instance, see
Symbol class>>intern:
In some magical fashion it works without problems under green
threading. I'm not sure it will keep working once you enable multiple
native threads.
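
To make interning safe under native threads, something along these
lines would be needed (a sketch only; SymbolTableLock, lookup:ifAbsent:
and basicIntern: are hypothetical names, not the actual
Symbol class>>intern: implementation):

```smalltalk
intern: aString
    "Hypothetical thread-safe variant: guard the lookup-or-create step
     with a class-side lock, so two native threads cannot both miss the
     lookup and create two distinct symbols for the same string."
    ^ SymbolTableLock critical: [
        self lookup: aString asString
             ifAbsent: [ self basicIntern: aString asString ] ]
```

Under green threads the unguarded version survives only because the
scheduler never preempts inside the critical window.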

There is another problem: Squeak Processes are cheap (a few bytes in
object memory), while allocating a native thread consumes a
considerable amount of memory and address space. So, if you map
Processes to native threads, you lose the ability to have millions of
them; instead you will be limited to thousands.
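
To illustrate how cheap they are, spawning tens of thousands of green
Processes is unremarkable in a stock Squeak image (a sketch using only
standard Semaphore and BlockClosure>>fork):

```smalltalk
| done n |
n := 50000.
done := Semaphore new.

"Each forked Process is just a small heap object plus its context
 chain -- no per-process OS thread stack is reserved."
n timesRepeat: [ [ done signal ] fork ].
n timesRepeat: [ done wait ].
```

Mapping each of those onto a native thread (often with a megabyte or
more of reserved stack each) would exhaust address space long before n
reached millions.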

>
>> I'm feeling a bit disheartened by the fact that there aren't any Smalltalk
>> VMs, commercial or not, that can do fine-grained parallelism.
>>
>> Gulik.
>>
>> --
>> http://gulik.pbwiki.com/
>>
>> _______________________________________________
>> Pharo-project mailing list
>> [email protected]
>> http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
>>
>
>
>
> --
> Best regards,
> Igor Stasenko AKA sig.
>



-- 
Best regards,
Igor Stasenko AKA sig.
