> On 11 Jan 2019, at 20:28, Eliot Miranda <[email protected]> wrote:
>
> Hi Thomas,
>
> forgive me, my first response was too terse. Having thought about it in the
> shower it becomes clear :-)
>
>> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez <[email protected]>
>> wrote:
>>
>> Hi,
>>
>> Yes, my question was just of the form: "Hey there's this method in
>> DebugSession. What is it doing? What's the intention behind it? Does someone
>> know?". There was no hidden agenda behind it.
>>
>> @Eliot
>>
>> After taking another look at this method, there's something I don't
>> understand:
>>
>> activePC: aContext
>> 	^ (self isLatestContext: aContext)
>> 		ifTrue: [ interruptedContext pc ]
>> 		ifFalse: [ self previousPC: aContext ]
>>
>> isLatestContext: checks whether its argument is the suspended context (the
>> context at the top of the stack of the interrupted process). And if that's
>> true, activePC: returns the pc of **interruptedContext**, not of the
>> suspended context. These two contexts are different when the debugger opens
>> on an exception, so this method is potentially returning a pc for a
>> context other than its argument...
>>
>> Another question I have, to improve the comment for this method: what's
>> the high-level meaning of this concept of "activePC"? You gave the formal
>> definition, but what's the point of defining it, so to speak? What makes
>> this concept interesting enough to warrant defining it and giving it a name?
>
> There are two “modes” where a pc is mapped to a source range. One is when
> stepping a context in the debugger (the context is on top and is actively
> executing bytecodes). Here the debugger stops immediately before a send,
> assignment or return, so that for sends we can step into or over, and for
> assignments or returns check the stack top to see what will be assigned or
> returned. In this mode we want the pc of the send, assign or return to map
> to the source range for the send, or the expression being assigned or
> returned. Since this is the “common case”, and since this is the only choice
> that makes sense for assignments and returns, the bytecode compiler
> constructs its pc-to-source-range map in terms of the pc of the first byte
> of the send, assign or return bytecode.
>
> The second “mode” is when selecting a context below the top context. The pc
> for any context below the top context will be the return pc for a send,
> because the send has already happened. The compiler could choose to map this
> pc to the send, but it would not match what works for the common case.
> Another choice would appear to be to have two map entries, one for the send
> and one for the return pc, both mapping to the same source range. But this
> wouldn’t work because the result of a send might be assigned or returned,
> and so there is a potential conflict. Instead the reasonable solution is to
> select the previous pc for contexts below the top context, which will be the
> pc for the start of the send bytecode.
>
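To make the two “modes” concrete, here is a minimal workspace sketch (plain
Pharo code; #pc and #sender are standard Context messages): the active
context’s pc sits on the bytecode that is about to execute, while a sender’s
pc is already the return pc of the send it is waiting on.

	| active sender |
	active := thisContext.
	sender := active sender.
	Transcript
		show: 'active pc: ', active pc printString; cr;
		show: 'sender pc (a return pc): ', sender pc printString; cr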
I checked with Thomas:
-> For source mapping, we use the API of the method map. The map does the “get
the mapping for the instruction before” itself; it just needs to be told
whether we are asking for the range of an active context:
#rangeForPC:contextIsActiveContext:
It is called like this:

	^ aContext debuggerMap
		rangeForPC: aContext pc
		contextIsActiveContext: (self isLatestContext: aContext)
So the logic was moved from the debugger to the map (I think this is even your
design?), and thus the logic inside the debugger is not needed anymore.
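A hedged usage sketch (assuming a Pharo image, with #debuggerMap and
#rangeForPC:contextIsActiveContext: as quoted above, and assuming the answer
is an interval of indexes into the method source):

	| ctx range src |
	ctx := thisContext.
	range := ctx debuggerMap
		rangeForPC: ctx pc
		contextIsActiveContext: true.
	src := ctx method sourceCode.
	src copyFrom: range first to: range last

Passing false for a context below the top makes the map apply the
previous-pc correction itself, so the debugger no longer has to.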
-> For the question why the AST node of the block has no simple method to
check if it is an “optimized” block (e.g. in an ifTrue:): yes, that might be
nice. The reason why it is not there is that the compiler internally uses
OCOptimizedBlockScope, and a check on the AST level was never needed. I would
have added it as soon as it was needed (quite easy to do).
On the AST level there is already a check, though, for whether a *send* is
optimized, which is used for code generation (#isInlined).
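A hypothetical sketch of what such a block-side check could look like (the
selector is made up; #parent, #isMessage and the #isInlined send check exist
as described above):

	RBBlockNode >> isUsedAsOptimizedBlock
		"Answer whether this block is compiled inline, that is, whether it
		is an argument of an inlined send such as ifTrue: or whileTrue:."
		^ parent notNil
			and: [ parent isMessage and: [ parent isInlined ] ]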
The reason why we did not add more compatibility layers to the old AST is that
it is quite different... so even with some methods here and there, 99% of
clients need changes, so I did not investigate it too much, with the idea that
if we end up in a situation where we need just a method, we can add it as
needed.
In general, I have no strong feelings about the details of the implementation
of Opal. It’s just what was possible with the resources and the understanding
that I had at the point it was done. There are for sure even mistakes. I always
make mistakes.
In addition, it is based on a prior compiler which was done for a different
closure model, whose design choices influenced it too much, I think; sometimes
good, sometimes not.
But I like the “high level”: using a shared AST between the compiler and the
tools *and* having a mapping BC -> AST -> Text.
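A small sketch of that chain (assuming a Pharo image; #sourceNodeExecuted is
the selector used by the Deprecation example below, and #sourceInterval is the
standard node-to-text mapping on RB nodes):

	| ctx node |
	ctx := thisContext.
	node := ctx sourceNodeExecuted.
	{ node . node sourceInterval }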
Of course I understand that the choice to use the RB AST for the compiler is
not a “traditional” one... but it turned out to work very well *and* it brings
some amazing power, as we now have not only a mapping bc -> text offset, but a
mapping bc -> AST, too. This (just as a simple example) makes the magic of the
runtime-transforming deprecations possible. See #transform on the class
Deprecation, and its use of #sourceNodeExecuted:
transform
	| node rewriteRule aMethod |
	self shouldTransform ifFalse: [ ^ self ].
	self rewriterClass ifNil: [ ^ self signal ].
	aMethod := self contextOfSender method.
	aMethod isDoIt ifTrue: [ ^ self ]. "no need to transform doits"
	node := self contextOfSender sourceNodeExecuted.
	rewriteRule := self rewriterClass new
		replace: rule key with: rule value.
	(rewriteRule executeTree: node)
		ifFalse: [ ^ self ].
	node replaceWith: rewriteRule tree.
	Author
		useAuthor: 'AutoDeprecationRefactoring'
		during: [ aMethod origin
				compile: aMethod ast formattedCode
				classified: aMethod protocol ]
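A usage sketch (assuming the #deprecated:transformWith: helper present in
recent Pharo images; #oldName and #newName are made-up selectors): when the
deprecated method actually runs, the sending method is rewritten in place and
recompiled.

	oldName
		self
			deprecated: 'Use #newName instead.'
			transformWith: '`@receiver oldName' -> '`@receiver newName'.
		^ self newName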
Marcus