Payoff could mean improved speed, features, or code quality.

In these examples it mostly means speed, and sometimes code quality where indicated.

The targets for work were chosen by analysis of a task that processed a LaTeX file.

I have been comparing results using a benchmark that runs lint on lint's own source: a parsing-heavy task that takes a couple of seconds.


The reference-count work is intended to address an issue we have all seen: arrays containing many boxes incur considerable overhead even when the array is merely referred to.

The reference-header work is intended to make (,) in sentences like (x { , y) take small constant time.


I feel pretty certain that both of those changes have a payoff that is worth the effort of implementation.

For anything beyond that, we will need to be guided by analysis of benchmarks to make sure we don't make changes that aren't worthwhile.

I would really like to develop a rich benchmark suite to use for measurement. Suggestions welcomed.

Henry Rich


On 5/12/2016 5:39 AM, Raul Miller wrote:
I am curious how payoff is measured (I see the estimates, I mean - are
the payoffs decreased memory use? increased speed? on either of those
do we have decent benchmarks? are they increased code stability?
opportunities for better documentation? something else?)

I have seen many projects adopt "improved efficiency" measures which
instead made the system slower and more fragile. I hope we can avoid
that mistake here.

(I also hope I am not sticking my nose somewhere where it causes problems.)

Thanks,


----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm