>  Secondly: operations on extended precision numbers are unbelievably
> slow... The most direct example: '! 100000x' takes 50 (!) times longer
> than python3's math.factorial(100000). It should be well worth looking
> into LLVM's APInt and APFloat <http://llvm.org/doxygen/classllvm_1_1APInt.html>,
> or perhaps CPython's bigints; I wouldn't think it necessary to write
> another custom library.

My $0.02...

J might be a "general-purpose, high-performance programming language," but
it is low-performance for arbitrary-precision arithmetic: operations on
extended integers and rational numbers are slow, and it lacks
arbitrary-precision floating point entirely.  This is hardly breaking news
(e.g., [0, 1]), and a potential solution, using GMP [2], has also been
apparent for years [0, 1].

Other languages are using GMP: "The basic interface is for C but wrappers
exist for other languages including Ada, C++, C#, Julia, .NET, OCaml, Perl,
PHP, Python, R, Ruby and the Wolfram Language."
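For reference, the CPython "bigints" mentioned in the quoted message are
just Python's built-in int type, which is arbitrary precision out of the
box; math.factorial is implemented in C on top of it.  A minimal sketch of
what the '! 100000x' comparison exercises:

```python
import math

# CPython's built-in int is an arbitrary-precision integer;
# math.factorial computes exactly with it, no special syntax needed.
n = math.factorial(100000)

# 100000! is a number of roughly 1.5 million bits (~456,000 decimal digits).
print(n.bit_length())
```

Any GMP-backed implementation (e.g., via one of the wrappers listed above)
would be expected to land in the same performance neighborhood for this
kind of workload.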

Why is J not using GMP?  The reasons could be economic (allocation of
resources), technical (compatibility), legal, etc.  I have no idea.

[0] [Jforum] Mathematica v APL
    http://www.jsoftware.com/pipermail/general/2005-July/023651.html

[1] [Jprogramming] Comparing J speed
    http://www.jsoftware.com/pipermail/programming/2015-September/042728.html

[2] GNU Multiple Precision Arithmetic Library (Wikipedia)
    https://en.wikipedia.org/wiki/GNU_Multiple_Precision_Arithmetic_Library



On Mon, Feb 26, 2018 at 12:48 PM, james faure <[email protected]>
wrote:

> I have 2 major propositions:
>
> Recently, I (to my chagrin) demonstrated to a friend that '>: i.1e7' takes
> almost twice as long as 'i.1e7'. Of course I expected them both to execute
> instantly, not after a full second. So my suggestion: i. should return a
> 'range' (or 'i.') object containing three vars: 'start end step'. In this
> way, '+ - * %' and indeed any linear combination of linear operations can
> be executed on only 3 variables rather than #y . besides the immediate
> speed and memory improvements here, other operations (on i. objects), like
> '+/ */ e. i.' etc.. can now be found by direct calculation, without ever
> spawning a physical array! Another fascinating possibility becomes
> available: 'i._'. Thus something like '*/ x * y ^ - i. _' is now able to
> return the result of the infinite geometric series. In fact in general it
> may be very profitable to use virtual arrays only, unless forced otherwise.
> Another concrete example: when searching for the first number to satisfy a
> certain property, one could use 'i.@u seq i. _' rather than some likely
> inefficient variation of ^: or while. . Perhaps this 'array only when
> forced' approach may even void the need for special combinations, a concept
> which feels suspicious to me.
>
> Secondly: operations on extended precision numbers are unbelievably
> slow... The most direct example: '! 100000x' takes 50 (!) times longer than
> python3's math.factorial(100000). It should be well worth looking into
> LLVM's APInt and APFloat http://llvm.org/doxygen/classllvm_1_1APInt.html,
> or perhaps CPython's bigints https://github.com/python/cpython/blob/
> 65d4639677d60ec503bb2ccd2a196e5347065f27/Objects/longobject.c, I wouldn't
> think it necessary to write another custom library.
>
> James Faure
> ----------------------------------------------------------------------
> For information about J forums see http://www.jsoftware.com/forums.htm
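The lazy-range proposal in the quoted message can be sketched in a few
lines of Python (all names here are illustrative, not an actual J
implementation): a range is held as (start, count, step), linear operations
rewrite those three fields, and reductions like sum use a closed form
instead of materializing #y elements.

```python
from dataclasses import dataclass

@dataclass
class LazyRange:
    # Represents start, start+step, ..., start+(count-1)*step
    # without ever allocating the array.
    start: int
    count: int
    step: int

    def __add__(self, c):            # x + c shifts the start only
        return LazyRange(self.start + c, self.count, self.step)

    def __mul__(self, c):            # x * c scales start and step
        return LazyRange(self.start * c, self.count, self.step * c)

    def sum(self):                   # arithmetic-series closed form
        n = self.count
        return n * self.start + self.step * n * (n - 1) // 2

    def materialize(self):           # only when an actual array is forced
        return [self.start + i * self.step for i in range(self.count)]

# Like 1 + 3 * i.10 in J: three field updates, no 10-element array.
r = (LazyRange(0, 10, 1) * 3) + 1
assert r.sum() == sum(r.materialize())
```

The same idea extends to other reductions (product, membership via a
divisibility test, etc.); whether it pays off in J would depend on how
often ranges survive unmaterialized through real programs.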