2010/10/13 Levente Uzonyi <[email protected]>:
> On Tue, 12 Oct 2010, Igor Stasenko wrote:
>
>> On 12 October 2010 16:51, Levente Uzonyi <[email protected]> wrote:
>>>
>>> On Tue, 12 Oct 2010, Igor Stasenko wrote:
>>>
>>>> Hello, I just thought that it would be cool to have a special bytecode,
>>>> which guarantees atomicity for swapping values between two variables.
>>>>
>>>> To swap two values, you usually do:
>>>>
>>>> | var1 var2 temp |
>>>>
>>>> temp := var1.
>>>> var1 := var2.
>>>> var2 := temp.
>>>>
>>>> But since it's non-atomic, a process can be interrupted, and such an
>>>> operation is not thread-safe.
>>>>
>>>> In order to make it thread safe, you must add even more boilerplate:
>>>>
>>>> | var1 var2 temp |
>>>>
>>>> semaphore critical: [
>>>>  temp := var1.
>>>>  var1 := var2.
>>>>  var2 := temp.
>>>> ]
>>>
>>> An alternative solution:
>>>
>>> | a b |
>>> a := 1.
>>> b := 2.
>>> [
>>>        | tmp |
>>>        tmp := a.
>>>        a := b.
>>>        b := tmp ] valueUnpreemptively
>>>
>>
>> Yeah, more boilerplate under the hood, and also highly dependent on
>> scheduling nuances :)
>
> I don't get the "dependency on scheduling nuances" part, but here's my
> idea:

I don't like code which assumes that scheduling works in some specific way,
or that some piece of code can't be interrupted in the middle.

See Semaphore>>critical: as an exercise.

critical: mutuallyExcludedBlock
        | caught |
        caught := false.
        ^[
                caught := true.
                self wait.
                mutuallyExcludedBlock value
        ] ensure: [ caught ifTrue: [self signal] ]

This code assumes that between
                caught := true.
and
                self wait.

no interrupt is possible.
But if one is, then the above implementation is incorrect: the ensure: block
would still run 'self signal' (since caught is true) even though 'self wait'
never consumed a signal, leaving the semaphore with an extra signal.
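
To make the consequence concrete, here is a rough sketch (the stray signal is
simulated by hand, the setup is purely illustrative): one signal without a
matching wait is all it takes to break mutual exclusion:

| mutex insideCount maxInside |
mutex := Semaphore forMutualExclusion.
mutex signal. "the stray signal an unwound critical: would leave behind"
insideCount := 0.
maxInside := 0.
2 timesRepeat: [
        [ mutex critical: [
                insideCount := insideCount + 1.
                maxInside := maxInside max: insideCount.
                (Delay forMilliseconds: 50) wait.
                insideCount := insideCount - 1 ] ] fork ].
(Delay forMilliseconds: 200) wait.
maxInside "-> 2: both processes were inside the 'critical' section at once"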


>
> Add the compiler changes to support :=: as atomic swap. We don't really need
> a bytecode for now, since a sequence of assignments is currently atomic. So
> the compiler could compile :=: as three assignments using a hidden temporary
> variable. On other systems, :=: can be compiled differently.
>

For the same reason, there are no guarantees from the VM side that three
assignments in a row will be atomic: storing a pointer could trigger a root
check, and if the roots table is full, that could trigger a GC, and after the
GC the finalization semaphore may be signaled, which could switch the active
process immediately.

The VM is evolving and subject to change. Having a clear rule which guarantees
a specific behavior is far more beneficial than relying on intimate knowledge
of how the VM works (now).
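
Just to make this concrete: the proposed :=: (hypothetical syntax, nothing the
compiler supports yet) would presumably expand to exactly the kind of sequence
whose atomicity is being questioned here:

| var1 var2 hiddenTemp |
var1 := 1.
var2 := 2.
"what 'var1 :=: var2' could expand to, via a compiler-generated temporary"
hiddenTemp := var1.
var1 := var2.
var2 := hiddenTemp.
"afterwards var1 = 2 and var2 = 1, but only if nothing preempts the three stores"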


>
> Levente
>
>>
>> valueUnpreemptively
>>        "Evaluate the receiver (block), without the possibility of
>> preemption
>> by higher priority processes. Use this facility VERY sparingly!"
>>        "Think about using Block>>valueUninterruptably first, and think
>> about
>> using Semaphore>>critical: before that, and think about redesigning
>> your application even before that!
>>        After you've done all that thinking, go right ahead and use it..."
>>        | activeProcess oldPriority result |
>>        activeProcess := Processor activeProcess.
>>        oldPriority := activeProcess priority.
>>        activeProcess priority: Processor highestPriority.
>>        result := self ensure: [activeProcess priority: oldPriority].
>>        "Yield after restoring priority to give the preempted processes a
>> chance to run"
>>        Processor yield.
>>        ^result
>>>
>>> Levente
>>>
>>
>>
>>
>> --
>> Best regards,
>> Igor Stasenko AKA sig.
>>
>
>
>
>



-- 
Best regards,
Igor Stasenko AKA sig.
