"Miller, Raul D" <[EMAIL PROTECTED]> wrote:
> Simon Ampleman wrote:
> >    - Is there a J solution ? What other programmer does with the same 
> > context? Another language? Extreme optimization?
>
> I think the slowest operation is going to be updating memory.
>
> If you represent memory using a 65536 element vector, you're going to
> wind up making a copy of that every time you update it.  I think memr/memw
> on a region of system memory will be much faster than using a variable for
> the kinds of operations you want to perform.
>
> Other than that, your big issue is going to be J overhead.  Each J
> operation has to check type and shape, and each J result requires
> allocating a new J array.  So minimizing the number of J operations you
> perform for each 6502 operation will tend to reduce this overhead.

This is not necessarily true. While you DO have the overhead of parsing
and type-checking, you don't generally have the overhead of copying.
Sentences like:
  register =: address { memory
and
  memory =: register address } memory
do not actually copy the entire memory array: the first as a natural
consequence of J referencing variables without copying them each time,
and the second as a result of special code for in-place amend.
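For comparison, here is the same fetch-and-store pair written against a
mutable buffer in Python (a sketch, not J; the function names are made up
for illustration). Like the J sentences above, neither operation copies
the 64 KB memory image:

```python
memory = bytearray(65536)  # 64 KB address space, mutated in place

def fetch(address):
    # analogue of:  register =: address { memory
    return memory[address]

def store(register, address):
    # analogue of:  memory =: register address } memory
    # (relies on in-place update; no copy of the 64 KB buffer is made)
    memory[address] = register

store(0x42, 0x1234)
assert fetch(0x1234) == 0x42
```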

Still, one thing that could improve an interpreter's speed by orders of
magnitude or more is a higher-level analysis of the code. A year or two
ago, I read about an interpreter that could run code from one machine
on another (like PC->Mac and vice versa), and amazingly do it at or
near full speed. It did this by relying on the observation that
in most code, the CPU spends 95% of the time in 5% of the code,
usually in tight loops performing simple but intensive tasks like
block moves, compares, etc. So the interpreter did local code
analysis to determine which algorithm was being performed, and
if it found a common idiom, it would then emulate the idiom rather
than the instructions invoking it. For some code, this could end up
with emulated code running faster than the original. The speed it lost
analyzing the 95% of the code that rarely ran was more than made up
for by the gains on the 5% where the CPU normally
spent 95% of the time. (This is also where J shines; while operations
on atoms are abysmally slow compared to other languages,
operations on large arrays often outrun even hand-coded assembler).
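To make the idiom-recognition idea concrete, here is a minimal Python
sketch (hypothetical names, not from any real emulator): once the analyzer
has recognized that a tight 6502 loop is a block move, the emulator can
perform the whole move as one bulk slice copy instead of stepping through
`count` fetch/decode/execute cycles per byte:

```python
def run_block_move(memory, src, dst, count):
    """Emulate a recognized block-move idiom in bulk.

    One slice assignment replaces `count` emulated load/store
    instruction pairs. Slicing the source first also makes an
    overlapping move safe, since it takes a copy of the source bytes.
    """
    memory[dst:dst + count] = memory[src:src + count]
    return memory

mem = bytearray(65536)
mem[0x1000:0x1004] = b"ABCD"
run_block_move(mem, 0x1000, 0x2000, 4)
# mem[0x2000:0x2004] now holds the same four bytes as the source
```

The same approach applies to compares, fills, and other tight loops: the
per-instruction overhead is paid once for the idiom rather than once per
emulated instruction, which is exactly where the large-array operations
of a language like J pay off.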

Unfortunately, such a project would likely be very ambitious,
and much harder than a brute-force one-instruction-at-a-time
emulator.

-- Mark D. Niemiec <[EMAIL PROTECTED]>

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
