On Wed, Feb 28, 2018 at 10:21 AM, james faure <james.fa...@epitech.eu> wrote:
> For J, where so much more is happening per statement, we could transparently 
> replace all intermediate results by generators, except where a full noun is 
> required.

Worth considering in this context:

   $
   {
   |.
   %.
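(Presumably the relevance: each of these needs its whole argument, or
random access into it, before it can produce its first result. A quick
sketch of that point - illustrative, not quoted from jsource:)

   $ i. 2 3           NB. shape: the full extent must be known up front
   2 { 10 20 30       NB. from: random access into the argument
   |. i. 5            NB. reverse: the last item comes out first
   %. 2 2 $ 1 2 3 4   NB. matrix inverse: every element participates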

> To translate the Python statement to J, one would write (<./ 1 2 3 4 , 0 10 0 
> 10); however, we lose a lot of time to Python here due to calculating the 
> intermediate catenation (unless we are fortunate enough that the interpreter 
> has a Special Combination (SC) for this).

How much is "a lot"?
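For scale, one way to measure it (a sketch - timings are machine- and
version-dependent, and ts here is just the usual time-and-space
utility):

   ts =: 6!:2 , 7!:2@]      NB. seconds and bytes to run a sentence
   a =: ? 1e6 $ 1e6
   b =: ? 1e6 $ 1e6
   ts '<./ a , b'           NB. pays for the intermediate catenation
   ts '(<./ a) <. <./ b'    NB. same result, no catenation built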

> For everyone who may be of the opinion that this is not worth it, I would 
> like to direct your attention to the existing jsource, where the vast amount 
> of effort spent on far smaller optimizations is apparent.

As a general rule, J only accepts optimizations where there's a useful
problem for which the optimization produces at least a factor-of-2
improvement. (Often enough it's more like a factor of 1000.)

> For example p.c, where the parser itself tries to emplace arrays and avoid 
> unnecessary copying. It is still unable to do this consistently, however, as 
> simple memory tests like 7!:2 'i.1000x' ,: '>: i.1000x' prove. (And yes, I 
> like using x for my benchmarks because of how much slower extended-precision 
> computations are.)

I am not sure what you're talking about here - I took a quick look at
p.c but most of what I saw was implicit in the definitions. I'm not
comfortable calling that "optimization".
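(As written, that test laminates the two strings into a single
character table before 7!:2 ever sees them; to get a space figure per
sentence, something closer to this sketch is needed:)

   7!:2&.> 'i.1000x' ; '>: i.1000x'   NB. bytes to run each sentence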

> Another example is jt.h, where a JFE's state information structure has been 
> lovingly crafted to optimize its use of the CPU cache.
> Or again, the countless optimizations in the primitives, like the n&i. hash 
> tables, the immensely flexible /:, etc.

The hash tables aren't so good on large arrays, unfortunately - so
that's something that could maybe use some good insights.
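(For context, the feature being referred to: fixing the left argument
of i. lets the interpreter build the lookup's hash table once and
reuse it across calls - a sketch, with the exact caching behavior
depending on the J version:)

   keys =: ? 1e5 $ 1e9   NB. a fixed key set
   find =: keys&i.       NB. bound i. - the hash table can be cached
   find ? 10 $ 1e9       NB. later lookups can skip rebuilding it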

But the thing that worries me is how many things would break if you
tried to throw generators at them. That winds up being the cases where
generators would slow things down: you have the work of building the
generator and then interpreting it, and you pay in code bloat, cache
pressure, CPU pipeline breakage, and of course any extra tests to keep
things from breaking.

Put differently: arrays are not "an optimization" in J. They are J's
fundamental data structure.

Or, for a similar perspective:

  GNU GMP's bignum library looks like a nice-to-have.

  The LAPACK library is a nice-to-have.

  But integrating both of them would not solve the hard issues
associated with using them together.

Also, thinking back to infinities: one useful tactic for implementing
math which involves infinities is to work in the reciprocal domain.
But - hypothetically speaking - there are potentially an infinite
number of different ways to work with infinities.
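A toy illustration of that tactic, assuming strictly positive values
(_ is J's infinity, and % _ is 0):

   vals =. 3 _ 5
   <./ vals       NB. 3 - direct minimum
   % >./ % vals   NB. 3 - same answer via the reciprocal domain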

Thanks,

-- 
Raul