Hi Marcus,

On Fri, Jan 18, 2019 at 5:15 AM Marcus Denker via Pharo-dev <[email protected]> wrote:
> > On 11 Jan 2019, at 20:28, Eliot Miranda <[email protected]> wrote:
> >
> > Hi Thomas,
> >
> > forgive me, my first response was too terse. Having thought about it in the shower it becomes clear :-)
> >
> >> On Jan 11, 2019, at 6:49 AM, Thomas Dupriez <[email protected]> wrote:
> >>
> >> Hi,
> >>
> >> Yes, my question was just of the form: "Hey there's this method in DebugSession. What is it doing? What's the intention behind it? Does someone know?". There was no hidden agenda behind it.
> >>
> >> @Eliot
> >>
> >> After taking another look at this method, there's something I don't understand:
> >>
> >> activePC: aContext
> >>     ^ (self isLatestContext: aContext)
> >>           ifTrue: [ interruptedContext pc ]
> >>           ifFalse: [ self previousPC: aContext ]
> >>
> >> isLatestContext: checks whether its argument is the suspended context (the context at the top of the stack of the interrupted process). And if that's true, activePC: returns the pc of **interruptedContext**, not of the suspended context. These two contexts are different when the debugger opens on an exception, so this method is potentially returning a pc for another context than its argument...
> >>
> >> Another question I have, to improve the comment for this method, is: what's the high-level meaning of this concept of "activePC"? You gave the formal definition, but what's the point of defining this, so to speak? What makes this concept interesting enough to warrant defining it and giving it a name?
> >
> > There are two "modes" where a pc is mapped to a source range. One is when stepping a context in the debugger (the context is on top and is actively executing bytecodes). Here the debugger stops immediately before a send, assignment or return, so that for sends we can step into or over, and for assignments or returns we can check the stack top to see what will be assigned or returned. In this mode we want the pc of the send, assign or return to map to the source range for the send, or for the expression being assigned or returned. Since this is the "common case", and since this is the only choice that makes sense for assignments and returns, the bytecode compiler constructs its pc-to-source-range map in terms of the pc of the first byte of the send, assign or return bytecode.
> >
> > The second "mode" is when selecting a context below the top context. The pc for any context below the top context will be the return pc of a send, because the send has already happened. The compiler could choose to map this pc to the send, but it would not match what works for the common case. Another choice would appear to be to have two map entries, one for the send and one for the return pc, both mapping to the same source range. But this wouldn't work because the result of a send might be assigned or returned, so there is a potential conflict. Instead the reasonable solution is to select the previous pc for contexts below the top context, which will be the pc for the start of the send bytecode.
>
> I checked with Thomas.
>
> -> For source mapping, we use the API of the method map. The map does the "get the mapping for the instruction before"; it just needs to be told whether we ask for the range of an active context:
>
> #rangeForPC:contextIsActiveContext:
>
> It is called like this:
>
>     ^ aContext debuggerMap
>           rangeForPC: aContext pc
>           contextIsActiveContext: (self isLatestContext: aContext) ]
>
> So the logic was moved from the debugger to the Map (I think this is even your design?), and thus the logic inside the debugger is not needed anymore.
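To make the two cases concrete, here is a rough sketch of how a client of the map uses that API (illustrative only, pieced together from the calls quoted above; aProcess just stands for the interrupted process, and this is not verbatim DebugSession code):

    | suspended below |
    suspended := aProcess suspendedContext.    "top of the interrupted process' stack"

    "Case 1: the top context is actively executing, so its pc already points at the
     send/assign/return bytecode about to execute and maps directly to a range."
    suspended debuggerMap
        rangeForPC: suspended pc
        contextIsActiveContext: true.

    "Case 2: a context below the top is suspended inside a send, so its pc is the
     return pc; told it is not the active context, the map steps back to the
     previous pc (the first byte of the send bytecode) before answering a range."
    below := suspended sender.
    below debuggerMap
        rangeForPC: below pc
        contextIsActiveContext: false.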
"Design" is giving my code a little too much respect. I was desperately trying to get something to work to be able to deploy Cog with the new closure model. I happily admit that DebuggerMethodMap in Squeak is ugly code. It had to be extended recently to handle full blocks. But it would be great to rewrite it. I dream of a harmonisation of the Squeak/Pharo/Cuis execution classes such that we have the same Context, CompiledCode, CompiledBlock, CompiledMethod, debuggerMap and BytecodeEncoder (which is effectively the back end of the compiler that generates bytecode, and the interface to the debugger when bytecode is analysed or executed in the debugger). That would make my life easier maintaining the VM and execution classes, especially as we introduce Sista. I think Opal as a separate compiler is great; the work you've done with mustBeBooleanMagic is superb. At the same time the Squeak compiler is fine for its job, and I still find it easier to understand than Opal (probably because I'm not spending enough time with it).

One major reason for the harmonization is bug fixes. Right now I use SistaV1 and 64 bits as my default work environment, in a 64-bit Squeak image that was compiled with V3PlusClosures, so that half the code base (the base image) is the old bytecode set, but all of VMMaker is the new bytecode set and full blocks. In the past few months I've fixed a few issues with pc mapping, debugging in the different bytecode sets, searching for literals with full blocks, fixing Slang for full blocks, etc., and I'd like these bug fixes to make it into Pharo too. If this speaks to you, Marcus, maybe we can try and figure out a path that doesn't ruffle too many feathers.

> -> For the question why the AST node of the Block has no simple method to check if it is an "isOptimized" block (e.g. in an ifTrue:): Yes, that might be nice. The reason why it is not there is that the compiler internally uses OCOptimizedBlockScope, and a check on the AST level was never needed. I would have added it as soon as it was needed (quite easy to do).

OK. isOptimized is something I use a lot when analyzing parse trees, looking at the relationship between bytecode and JIT code. For example it is key to the work I'm doing right now on register allocation for Sista.

> On the AST level there is already a check, though, if a *send* is optimized, which is used for code generation (#isInlined).

The Squeak versions here are MessageNode>>isOptimized, isOptimizedLoop & isOptimizedWhileLoop. (A rough sketch of what the block-side check could look like in Pharo is below.)

> The question why we did not add more compatibility layers to the old AST is that it is quite different, so even with some methods here and there 99% of clients need changes, so I did not investigate it too much; with the idea that if we end up in a situation where we need just a method, we can add it as needed.

Reasonable. I simply have to maintain compatibility for the two parse trees.

> In general, I have no strong feelings about the details of the implementation of Opal. It's just what was possible, with the resources and the understanding that I had at the point it was done. There are for sure even mistakes. I always make mistakes.

I can join you in all of these for the Squeak compiler. It was not possible to have someone to pair with when modifying the Squeak compiler to conform to my closure design; I was in a tiny startup and had to do all the execution technology work. And I always make mistakes too :-)
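Coming back to the optimized-block question: a rough sketch of what such an AST-level check could look like on the Pharo side (hypothetical, not existing Opal or RB code; the method name and the selector list are illustrative, not exhaustive):

    RBBlockNode >> isUsedAsOptimizedBlock
        "Answer whether the receiver is the block of a send that the bytecode
         compiler inlines, so that no BlockClosure is created for it at runtime.
         Hypothetical sketch; the selector list below is illustrative only."
        | inlinedSelectors |
        inlinedSelectors := #(ifTrue: ifFalse: ifTrue:ifFalse: ifFalse:ifTrue:
            and: or: whileTrue: whileFalse: whileTrue whileFalse
            to:do: to:by:do: ifNil: ifNotNil: timesRepeat:).
        ^ self parent notNil
            and: [ self parent isMessage
                and: [ inlinedSelectors includes: self parent selector ] ]

A simpler variant could just delegate to the send-side check you mention, something like: ^ self parent notNil and: [ self parent isMessage and: [ self parent isInlined ] ].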
> In addition, it is based on a prior compiler which was done for a different closure model, whose design choices influenced it too much, I think; sometimes good, sometimes not.
>
> But I like the "high level": using a shared AST between the compiler and the tools *and* having a mapping BC -> AST -> Text.

Yes, this is great.

> Of course I understand that the choice to use the RB AST for the compiler is not a "traditional" one, but it turned out to work very well *and* it brings some amazing power, as we now have not only a mapping bc -> text offset, but a mapping bc -> AST, too. This (just a simple example) makes the magic of the runtime-transforming deprecations possible. See #transform on class Deprecation (note the #sourceNodeExecuted send):
>
> transform
>     | node rewriteRule aMethod |
>     self shouldTransform ifFalse: [ ^ self ].
>     self rewriterClass ifNil: [ ^ self signal ].
>     aMethod := self contextOfSender method.
>     aMethod isDoIt ifTrue: [ ^ self ]. "no need to transform doits"
>     node := self contextOfSender sourceNodeExecuted.
>     rewriteRule := self rewriterClass new
>         replace: rule key with: rule value.
>     (rewriteRule executeTree: node)
>         ifFalse: [ ^ self ].
>     node replaceWith: rewriteRule tree.
>     Author
>         useAuthor: 'AutoDeprecationRefactoring'
>         during: [ aMethod origin compile: aMethod ast formattedCode classified: aMethod protocol ]

+10

> Marcus

_,,,^..^,,,_
best, Eliot
