> On Monday 24 September 2001 11:54 am, Dan Sugalski wrote:
> > Odds are you'll get per-op event checking if you enable debugging, since
> > the debugging oploop will really be a generic "check event every op"
> > loop that happens to have the "pending debugging event" bit permanently
> > set. Dunno whether we want to force this at compile time or consider
> > some way to set it at runtime. I'd really like to be able to switch
> > oploops dynamically, but I can't think of a good way to do that
> > efficiently.

On a side note, back when I was doing some of my initial benchmarking, I 
came up with this solution to the opcode loop / event check conundrum: 
eventless events.  (An attempt to integrate opcodes, events, and priorities.)
For those who want the executive summary: it worked, but was so slow (slow 
as in measured-in-multiples-rather-than-percentages slow) that I never 
pursued it further.  (Particularly because checking a flag is, by 
comparison, so inexpensive.)

Currently, the DO_OP loop is essentially a 1x1 table for opcode dispatch. 
(By 1x1, I mean one priority level, one pending opcode deep.)  Events are a 
completely separate beast.  

So I elected to abstract an event as "a set series of opcodes" that run at a 
given priority, referenced (basically) by the head of that particular branch 
of the opcode tree.  I assigned an arbitrary number of priorities (and 
meanings to them), from signals to async I/O to user-defined callbacks.

To remove the last vestige of distinction between regular opcodes and 
events, I abstracted "regular code" as a single event that ran at the lowest 
priority.  (Or the next-to-lowest.  I was contemplating, at one point, 
having BEGIN, INIT, CHECK, and END blocks implemented in terms of priority.) 
So now every opcode stream is an event, or every event is an opcode stream; 
depending on how you care to look at it.

So now you have a 'p' x 1 table for opcode dispatch, where 'p' is the number 
of possible run-levels within the interpreter, with one pending opcode 
(branch head) per runlevel.

But, of course, you can have pending events.  Given our (Uri, Dan, Simon, 
and I - way back at Uri's BOF at the OSCon) previous agreement that 
events at a given priority shouldn't preempt an already scheduled event at 
that priority, we needed a way to queue events so that they weren't lost, 
but would still be processed at the correct time (according to our 
scheduler).  So I widened the table to handle 'e' events.

I now have a 'p' x 'e' table.  (Implemented as an array ['p'] of linked 
lists ['e'].)  Now to offload the event overhead onto the events themselves.

Each interpreter has its current priority available.  The DO_OP loop uses 
that priority as the offset into the dispatch table (up the 'p' axis).  The 
first opcode in the list is what gets executed.  That opcode, in turn, then 
updates itself (the table entry) to point to the next opcode within the 
particular event.

When a new event arrives, it appends its branch head to the priority list, 
and repoints the interpreter's current priority if it is now the highest.  
(This, in effect, suspends the current opcode stream, and the DO-OP loop 
begins processing the higher-level code immediately.  When regular 
processing resumes, it picks up more or less exactly from where it left off.)

When the event "exits", it deletes its own node in the linked list, and, if 
it was the last branch at that priority, repoints the current priority to 
the next highest priority that needs to be processed.  It took a 
while to come up with the necessary incantations to Do The Right Thing when 
the priority switchers were themselves interrupted by an event at a higher, 
lower, or identical priority to the one that was just leaving.

Sure, events were a lot hairier themselves than they currently look, but 
events and priorities are still rather non-existent on paper - who knows how 
hairy they may become to work properly.  Besides, cleaning up the opcode 
dispatch itself was supposed to make up the difference.

For those of you playing along at home, I'm sure you see why *that's* not 
the case.  Testing equality is one of the more efficient processor 
instructions; more so when testing for non-zero (on machines that have a 
zero register, or support a test for non-zero).  Which is all a check 
against an event flag would do.  Instead, I replaced it with doubly 
indirected pointer dereferencing, which is not only damn inefficient (from a 
memory, cache, and paging perspective), but also can't be optimized into 
something less heinous.

An oft-mentioned (most recently by Simon on language-dev) lament WRT Perl 6 
is the plethora of uninformedness among contributors.  So I am now informed. 
And so are you, if you weren't already.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]
