They do not behave differently. The problem is that you cannot assume that
just by looking at the message send.

They could behave differently. They do not, but you don't know that until you
look at the method implementation. You cannot assume that #and:and:and: will
or won't evaluate the second block and then compare the results. In fact, the
declaration almost suggests that it will, by putting all the blocks at the
same logical level instead of nesting them. That is a lack of expressiveness,
and that is why the second option is preferable.
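
To make that concrete (a made-up reading example; a, b and c are plain
booleans here, and #and:and: is assumed to be present in the image, which is
what the whole thread is about):

    | a b c |
    a := true. b := false. c := true.

    "Flat form: nothing in the syntax tells you whether [ c ] is skipped
     once [ b ] answers false; you have to trust the implementation."
    a and: [ b ] and: [ c ].

    "Nested form: [ c ] can only be reached through the [ b ... ] block,
     so the short-circuit is visible in the expression itself."
    a and: [ b and: [ c ] ].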

Besides, the #and:and:and: implementation is horrible :)
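
For readers who have not opened it: a short-circuit #and:and:and: can be
written roughly like this (a sketch of the intended semantics only, not the
code that actually ships in the image):

    Boolean >> and: block1 and: block2 and: block3
        "Sketch only: evaluate each block lazily, stopping at the first
         false. Not the real image code."
        self ifFalse: [ ^ false ].
        block1 value ifFalse: [ ^ false ].
        block2 value ifFalse: [ ^ false ].
        ^ block3 value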

Cheers,

Mariano.


On Wed, May 26, 2010 at 4:18 PM, Levente Uzonyi <[email protected]> wrote:

> On Wed, 26 May 2010, Igor Stasenko wrote:
>
>  On 26 May 2010 20:47, Levente Uzonyi <[email protected]> wrote:
>>
>>> On Wed, 26 May 2010, Lukas Renggli wrote:
>>>
>>>  - The exact semantics of #and:and:and: is not clear without knowing
>>>>>> how it is implemented.
>>>>>>
>>>>>> - There are subtle semantic differences between "a and: [ b ] and: [ c
>>>>>> ] and: [ d ]" and "a and: [ b and: [ c and: [ d ] ] ]" if the
>>>>>> conditions have side-effects.
>>>>>>
>>>>>
>>>>> That's not true. Both #and: and #and:and:and: (and friends) are
>>>>> short-circuit, so they'll cause exactly the same side effects.
>>>>>
>>>>
>>>> I know that and this is *not* what I am talking about. What you point
>>>> out is already discussed above: it is absolutely unclear what
>>>> #and:and:and: does without looking at the implementation.
>>>>
>>>> The point is that blocks that are lexically nested "a and: [ b and: [
>>>> c ] ]" and blocks that are in a lexical sequence "a and: [ b ] and: [
>>>> c ]" do not have the same expressive power (temps, state) and do not
>>>> necessarily behave the same.
>>>>
>>>
>>> Maybe I'm just narrow-minded, but I can't see the difference except for
>>> the scope of temporaries defined inside blocks.
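
That scope difference is exactly the expressiveness point. A made-up example
(the temp name t is arbitrary):

    | a |
    a := true.

    "Nested: a temp declared in the outer block is visible to the inner one."
    a and: [ | t | t := 3 + 4. t > 0 and: [ t odd ] ].

    "Sequential: the argument blocks are siblings, so the second block cannot
     see t; any shared state has to be hoisted out of the blocks."
    a and: [ | t | t := 3 + 4. t > 0 ] and: [ true "t is not visible here" ].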
>>>
>>>
>> To push a closure onto the stack, the interpreter makes a copy of the
>> closure literal from the method's literal frame.
>> For an expression like:
>> a and: [ b ] and: [ c ]
>>
>> it has to prepare and push two closures.
>>
>> and for an expression like:
>>
>> a and: [ b and: [ c ] ]
>>
>> just one.
>> So it is faster in the cases where the outer block is not activated
>> because a is false.
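
A rough way to check that speed claim (assuming the usual #timeToRun and that
#and:and: is loaded; the exact numbers depend on how much the compiler
inlines):

    | a |
    a := false.
    [ 1000000 timesRepeat: [ a and: [ 1 > 0 ] and: [ 2 > 0 ] ] ] timeToRun.
    [ 1000000 timesRepeat: [ a and: [ 1 > 0 and: [ 2 > 0 ] ] ] ] timeToRun.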
>>
>
> I understand this. And I know that the current compiler will inline the
> blocks in the second case, which gives the real difference in speed.
>
> What I don't understand is how these two expressions can behave
> differently. Can you provide a, b and c which will behave differently when
> evaluated as:
> 1) a and: [ b ] and: [ c ]
> and
> 2) a and: [ b and: [ c ] ]
> ?
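
For what it's worth, a side-effect probe like the following (a made-up
snippet using Transcript) prints the same trace for both forms, which is the
answer at the top of this mail: as long as #and:and: short-circuits, the side
effects are identical.

    | a |
    a := true.

    "Form 1: prints only 'b ', because the second argument block is
     evaluated only when the first one answers true."
    a and: [ Transcript show: 'b '. false ] and: [ Transcript show: 'c '. true ].
    Transcript cr.

    "Form 2: prints exactly the same 'b ', for the same reason."
    a and: [ Transcript show: 'b '.
             false and: [ Transcript show: 'c '. true ] ].
    Transcript cr.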
>
>
> Levente
>
>
>
>>
>>> Levente
>>>
>>>
>>>> Lukas
>>>>
>>>> --
>>>> Lukas Renggli
>>>> www.lukas-renggli.ch
>>>>
>>
>>
>> --
>> Best regards,
>> Igor Stasenko AKA sig.