I have two major propositions.

First: recently, to my chagrin, I demonstrated to a friend that '>: i.1e7' takes almost twice as long as 'i.1e7'. I expected both to execute instantly, not after a full second. My suggestion: 'i.' should return a 'range' (or 'i.') object containing three variables: 'start end step'. That way '+ - * %', and indeed any linear combination of linear operations, can be executed on only 3 variables rather than on all #y items. Beyond the immediate speed and memory improvements, other operations on such i. objects, like '+/ */ e. i.' etc., can be computed by direct calculation without ever spawning a physical array. Another fascinating possibility becomes available: 'i._'. Something like '*/ x * y ^ - i. _' could then return the result of an infinite geometric series. In general it may be very profitable to use only virtual arrays, unless forced otherwise. Another concrete example: when searching for the first number satisfying a certain property, one could use 'i.@u seq i. _' rather than some likely inefficient variation of '^:' or 'while.'. Perhaps this 'array only when forced' approach could even void the need for special combinations, a concept which has always felt suspicious to me.
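To make the proposal concrete, here is a minimal sketch (in Python, purely for illustration; the class name and method names are my own invention, not J's) of such a three-variable range object. Affine maps, sums, and membership tests all run in O(1), regardless of how many elements the range nominally holds:

```python
# Hypothetical sketch of the proposed 'i.' object: three fields
# (start, step, count) replace a materialized array.
from dataclasses import dataclass

@dataclass(frozen=True)
class LazyRange:
    start: int   # value of element 0
    step: int    # common difference
    count: int   # number of elements (#y)

    def add(self, k):       # analogue of k + i.n, in O(1)
        return LazyRange(self.start + k, self.step, self.count)

    def mul(self, k):       # analogue of k * i.n, in O(1)
        return LazyRange(self.start * k, self.step * k, self.count)

    def total(self):        # analogue of +/, by the arithmetic-series formula
        n = self.count
        return n * self.start + self.step * (n * (n - 1) // 2)

    def contains(self, v):  # analogue of e., without scanning
        if self.step == 0:
            return self.count > 0 and v == self.start
        q, r = divmod(v - self.start, self.step)
        return r == 0 and 0 <= q < self.count

iota = LazyRange(0, 1, 10_000_000)    # i.1e7, but only three variables
succ = iota.add(1)                    # >: i.1e7 costs one addition
print(succ.total())                   # +/ >: i.1e7 -> 50000005000000
print(iota.mul(3).contains(2999997))  # membership by direct calculation
```

The same trick extends to any composition of linear operations, which is exactly the author's point: the physical array need never exist until a non-linear operation forces it.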
Secondly: operations on extended precision numbers are unbelievably slow. The most direct example: '! 100000x' takes 50 (!) times longer than Python 3's math.factorial(100000). It would be well worth looking into LLVM's APInt and APFloat (http://llvm.org/doxygen/classllvm_1_1APInt.html), or perhaps CPython's bigints (https://github.com/python/cpython/blob/65d4639677d60ec503bb2ccd2a196e5347065f27/Objects/longobject.c); I don't think it's necessary to write another custom library.

James Faure

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
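A postscript with a reproducible baseline for the timing claim above (absolute numbers will of course vary by machine; the point is only that CPython's stdlib bignum factorial finishes quickly, giving a yardstick for '! 100000x'):

```python
# Time CPython's extended-precision factorial as the comparison baseline.
import math
import time

t0 = time.perf_counter()
f = math.factorial(100_000)
elapsed = time.perf_counter() - t0

# Report the wall-clock time and the size of the result in bits.
print(f"math.factorial(100000): {elapsed:.3f}s, {f.bit_length()} bits")
```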