Just to show what I am talking about (credits to Queen for the lyrics):

| str |
str := 'Don''t stop me now.'.

[ 2 seconds asDelay wait.
  str := str , ' (oops, I did)' ] forkAt: Processor highestPriority.

[ 4 seconds asDelay wait.
  str := str , ' I''m having such a good time, I''m having a ball' ] valueUnpreemptively.

str
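For comparison, here is a minimal sketch (untested) of the same mutation serialized with Semaphore>>critical: instead of relying on priority tricks; `mutex` is my name, everything else is standard Pharo:

```smalltalk
"The same string mutation, but guarded by a mutex
 (Semaphore forMutualExclusion), so concurrent appends are
 serialized regardless of process priorities."
| str mutex |
str := 'Don''t stop me now.'.
mutex := Semaphore forMutualExclusion.

[ 2 seconds asDelay wait.
  mutex critical: [ str := str , ' (oops, I did)' ] ]
    forkAt: Processor highestPriority.

[ 4 seconds asDelay wait.
  mutex critical: [ str := str , ' I''m having such a good time' ] ] fork.

5 seconds asDelay wait.
str
```

With the mutex each append runs to completion before the other can touch `str`, so the result no longer depends on which process happens to be scheduled first.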



On 25 March 2014 18:11, Igor Stasenko <[email protected]> wrote:

>
>
>
> On 25 March 2014 17:31, Eliot Miranda <[email protected]> wrote:
>
>> Hi Igor,
>>
>>
>> On Tue, Mar 25, 2014 at 5:05 AM, Igor Stasenko <[email protected]> wrote:
>>
>>>
>>>
>>>
>>> On 24 March 2014 22:54, [email protected] <[email protected]> wrote:
>>>
>>>> On Mon, Mar 24, 2014 at 8:23 PM, Alexandre Bergel <
>>>> [email protected]> wrote:
>>>>
>>>>> >> I am working on a memory model for expandable collections in Pharo.
>>>>> Currently, OrderedCollection, Dictionary, and other expandable collections
>>>>> use an internal array to store their data. My new collection library
>>>>> recycles these arrays instead of letting the garbage collector dispose of
>>>>> them. I simply insert the arrays into an ordered collection when an array
>>>>> is no longer needed, and I remove one when I need one.
>>>>> >
>>>>> > Hm, is that really going to be worth the trouble?
>>>>>
>>>>> This technique reduces memory consumption by about 15%.
>>>>>
>>>>> >> In the end, #add: and #remove: are performed on these pools of
>>>>> arrays. I haven't been able to spot any problem regarding concurrency,
>>>>> and I made no effort to prevent any. I have a simple global collection,
>>>>> and each call site of "OrderedCollection new" can pick an element from
>>>>> my global collection.
>>>>> >>
>>>>> >> I have the impression that I simply need to guard access to the
>>>>> global pool, which basically means guarding #add:, #remove:, and #includes:
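If guarding the pool does turn out to be necessary, a minimal sketch (my class and selector names are hypothetical, untested) of a Semaphore-protected pool could look like:

```smalltalk
"Hypothetical ArrayPool: a global pool of recycled arrays, with all
 access serialized through a mutex (Semaphore forMutualExclusion)."
Object subclass: #ArrayPool
    instanceVariableNames: 'arrays mutex'
    classVariableNames: ''
    package: 'Collections-Pooling'.

ArrayPool >> initialize
    arrays := OrderedCollection new.
    mutex := Semaphore forMutualExclusion.

"Return a recycled array of the requested size, or a fresh one."
ArrayPool >> takeOfSize: anInteger
    ^ mutex critical: [
        arrays
            detect: [ :each | each size = anInteger ]
            ifFound: [ :found | arrays remove: found. found ]
            ifNone: [ Array new: anInteger ] ]

"Give an array back to the pool for later reuse."
ArrayPool >> recycle: anArray
    mutex critical: [ arrays add: anArray ]
```

Wrapping every pool operation in the same `critical:` block is the whole trick: racy interleavings of #add:/#remove: on the pool's internal collection become impossible, at the cost of one semaphore wait per pool access.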
>>>>> >
>>>>> > One of the AtomicCollections might be the right things for you?
>>>>>
>>>>> I will have a look at it.
>>>>>
>>>>> >> What is funny is that I did not care at all about multi-threading
>>>>> and concurrency, and I have not spotted any problem so far.
>>>>> >
>>>>> > There isn't any 'multi-threading' like in Java; you get a much more
>>>>> controlled model: cooperative at the same priority, preemptive between
>>>>> priorities.
>>>>> > So I am not surprised. These operations are likely not to be
>>>>> problematic when they are racy, except when the underlying data
>>>>> structure itself could get into an inconsistent state. The overall
>>>>> operations (adding/removing/searching) are racy at the application
>>>>> level anyway.
>>>>> >
>>>>> > However, it would be much more interesting to know what kind of
>>>>> benefit you see from such reuse.
>>>>> > And especially, with Spur around the corner, will it still pay off
>>>>> then? Or is it an application-specific optimization?
>>>>>
>>>>> I am exploring a new design for the collection library of Pharo. Not
>>>>> all the (academic) ideas will be worth porting into mainstream Pharo,
>>>>> but some of them will be.
>>>>>
>>>>> Thanks for all your help guys! You’re great!
>>>>>
>>>>> Cheers,
>>>>> Alexandre
>>>>>
>>>>> --
>>>>> _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
>>>>> Alexandre Bergel  http://www.bergel.eu
>>>>> ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
>>>>>
>>>>>
>>>>>
>>>>>
>>>> An interesting method I stumbled upon which may help in understanding
>>>> how these things work (the quoted listing was truncated; the tail below
>>>> is restored from the Pharo image):
>>>>
>>>> BlockClosure>>valueUnpreemptively
>>>>     "Evaluate the receiver (block), without the possibility of preemption
>>>>     by higher priority processes. Use this facility VERY sparingly!"
>>>>     "Think about using Block>>valueUninterruptably first, and think about
>>>>     using Semaphore>>critical: before that, and think about redesigning
>>>>     your application even before that!
>>>>     After you've done all that thinking, go right ahead and use it..."
>>>>     | activeProcess oldPriority result semaphore |
>>>>     activeProcess := Processor activeProcess.
>>>>     oldPriority := activeProcess priority.
>>>>     activeProcess priority: Processor highestPriority.
>>>>     result := self ensure: [activeProcess priority: oldPriority].
>>>>     "Yield to give processes at the restored priority a chance to run."
>>>>     semaphore := Semaphore new.
>>>>     [semaphore signal] fork.
>>>>     semaphore wait.
>>>>     ^result
>>>>
>>>>
>>> I would not recommend using this method for anything.
>>> It relies heavily on how the process scheduler works, and in case of
>>> any changes, it may break everything.
>>> For the sake of good programming, one should never assume there is a
>>> way to "stop the world while I am busy doing something".
>>>
>>
>> Really?  Surely any system as interactive as Smalltalk can benefit from a
>> stop-the-rest-of-the-world scheduling facility, and surely packaging it as
>> BlockClosure>>valueUnpreemptively would be a convenient way of doing so.
>>  Surely the right attitude for an implementor of a threading system for
>> Smalltalk would be "Sure, I can implement that, even in a truly concurrent,
>> multi-processor environment".  It may take some doing but it's an important
>> facility to have.  It shouldn't be abused, but when you need it, you need
>> it.
>>
>>
> There would have to be hard guarantees from the VM to do it. Right now
> there are none. That's my point.
> For instance, special primitive(s) for disabling interrupts/scheduling
> and enabling them again.
> Let us be realistic: the above implementation is based on insider
> knowledge of how scheduling works, lacking any notion of a contract
> between the VM and the image.
> Right now it is based on an implementation detail rather than on
> guaranteed and well-defined semantics.
>
> There is no doubt that sometimes you may need such a hammer to stop the
> world.
> And there is no doubt (to me) that one should avoid using it unless it
> is impossible to do otherwise.
>
>
>
>> --
>> best,
>> Eliot
>>
>
>
>
> --
> Best regards,
> Igor Stasenko.
>



-- 
Best regards,
Igor Stasenko.
