On Tuesday, 24 January 2017 18:47:49 UTC+7, Robin Heggelund Hansen wrote:
>
>> The reason is that BuckleScript proves that Arrays are faster than 
>> "string" tagged objects, and I have tried benchmarking it myself.  In 
>> fact, I have gone further and manually substituted Arrays for the string 
>> tagged objects in the generated Elm code to show that this is the reason.  
>> The problem isn't so much the use of Arrays versus objects/records, but 
>> the string tags: since the Elm JS output doesn't preserve type information 
>> except through these tags, it continually requires string processing to 
>> determine the type of an object at run time.  Eliminating these strings by 
>> using the type information the compiler already has would greatly speed 
>> things up even if objects were used, with the further advantage of Arrays 
>> being that their indices are numeric for slightly less processing 
>> (depending on the browser engine used).
>> This becomes readily apparent the more functional the code is, with the 
>> use of tuples (tagged objects), lots of records (tagged objects), lists 
>> (nested tagged objects) and so on, and these passed as (essentially 
>> untyped) arguments across functions.
>
>  
>
> This goes against my own benchmarks, where replacing current Elm code with 
> arrays proved slower.  Accessing an element on a record was faster than on 
> an array.  Did you try against more than just one browser?
>

I did, but I got confused when comparing results across the different 
browsers.  It turns out that, for running my benchmarks, the Edge browser is 
almost three times slower than Chrome.  For the functional primes benchmark, 
Elm is now about 33% slower than BuckleScript, and when I manually eliminate 
the calls to the eq/cmp functions (the issue you raised with Evan), the gap 
narrows to only about 10-15% (both on Chrome) for this benchmark.  The 
remaining difference is likely that Elm wraps the underlying data structure 
in a tagged wrapper, with the extra overhead of wrapping and unwrapping it, 
whereas BuckleScript uses the known type information to avoid the wrapper in 
this case.  It seems that OCaml/BuckleScript does not need a (numeric) tag 
for ADTs when there is only one union case, with the tag automatically 
erased by the compiler.
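
To make the representation difference concrete, here is a rough TypeScript 
sketch of the two styles of generated value I am comparing.  The field names 
and shapes are illustrative of what I see in the output, not an exact copy 
of either compiler's code, and the single-constructor case is my reading of 
why BuckleScript can skip the wrapper entirely:

```typescript
// Illustrative shapes only -- approximations of the generated runtime values.

// Elm-style: every constructor carries a string tag, so deciding which case
// a value is requires a string comparison at run time.
type ElmMaybe = { ctor: "Just"; _0: number } | { ctor: "Nothing" };

function elmWithDefault(def: number, m: ElmMaybe): number {
  return m.ctor === "Just" ? m._0 : def; // string compare on every dispatch
}

// BuckleScript-style: constructors become arrays (blocks) with numeric tags,
// and a type with a single constructor such as `Meters 1.5` can drop the
// wrapper and be represented as the bare number 1.5.
type BsMaybe = number[] /* Just x ~ [x] */ | 0 /* Nothing ~ 0 */;

function bsWithDefault(def: number, m: BsMaybe): number {
  return m === 0 ? def : m[0]; // numeric check, no string processing
}
```

In a tight loop, the string comparisons and the extra wrap/unwrap are 
exactly the kind of work that shows up in the profiles.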

> I also fail to understand what you mean by this: "... and these passed as 
> (essentially untyped) arguments across functions."
> To the JavaScript runtime, everything is untyped anyway.
>

I meant that Elm uses the A2, F2 wrapping functions as its way of 
implementing currying for various numbers of arguments, which requires one 
extra level of function call.  BuckleScript also uses currying wrapper 
functions, but there they are optional: in some cases they get optimized 
away, and one can specify uncurried calls with annotations.
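
For anyone following along, this is roughly how I understand the A2/F2 
machinery to work, paraphrased from memory rather than copied from the 
compiler output, and written in TypeScript for readability:

```typescript
// Paraphrase of the currying helpers in Elm's JS output; details may differ.

type Curried2<A, B, R> = ((a: A) => (b: B) => R) & {
  arity?: number;
  func?: (a: A, b: B) => R;
};

// F2 wraps a plain 2-argument function so it can also be applied one
// argument at a time, while remembering the original function for a fast path.
function F2<A, B, R>(fun: (a: A, b: B) => R): Curried2<A, B, R> {
  const wrapper = ((a: A) => (b: B) => fun(a, b)) as Curried2<A, B, R>;
  wrapper.arity = 2;
  wrapper.func = fun;
  return wrapper;
}

// A2 applies a possibly-curried function to two arguments at once: the fast
// path calls the underlying function directly, the slow path makes the two
// nested calls -- the "one extra level of function call" I mentioned.
function A2<A, B, R>(fun: Curried2<A, B, R>, a: A, b: B): R {
  return fun.arity === 2 && fun.func ? fun.func(a, b) : fun(a)(b);
}

const add = F2((x: number, y: number) => x + y);
const three = A2(add, 1, 2); // fast path: calls the 2-arg function directly
```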

Some other benchmarks show a higher ratio of slowness compared to 
BuckleScript, with the tighter the loop, the greater the ratio, and with the 
biggest single effect being the calls to eq/cmp that you identified.  To be 
fair, for a language that is still at quite an early stage of development 
compared to OCaml, for which BuckleScript serves as a back end, Elm is not 
doing badly, and I'm sure Evan will eventually get around to fixing the 
eq/cmp issue.
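
To illustrate the eq/cmp cost in my own words (this is a sketch of the idea, 
not the actual Elm runtime code): a generic structural equality is an 
out-of-line call that has to inspect its arguments every time, whereas with 
the type known the comparison could be emitted inline:

```typescript
// Sketch only -- not Elm's Native.Utils code.  A generic structural equality
// must discover what it is comparing before it can compare it.
function structuralEq(a: unknown, b: unknown): boolean {
  if (a === b) return true; // fast path for identical primitives/references
  if (typeof a !== "object" || a === null ||
      typeof b !== "object" || b === null) return false;
  const ka = Object.keys(a as object);
  const kb = Object.keys(b as object);
  return ka.length === kb.length &&
    ka.every(k => structuralEq((a as any)[k], (b as any)[k]));
}

// The difference shows up when the call happens inside a tight inner loop.
function countMultiplesGeneric(limit: number, d: number): number {
  let count = 0;
  for (let i = 0; i < limit; i++) {
    if (structuralEq(i % d, 0)) count++; // a function call per iteration
  }
  return count;
}

function countMultiplesDirect(limit: number, d: number): number {
  let count = 0;
  for (let i = 0; i < limit; i++) {
    if (i % d === 0) count++; // inline numeric comparison, no call
  }
  return count;
}
```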

Pure functional code in JS is currently often throttled by the speed of the 
many small memory allocations and de-allocations required by functional 
forms (a problem shared when one writes functional code in C++).  This 
memory bottleneck should ease as asm.js and wasm become more widely 
implemented, since, as I understand it, they take over memory allocation 
rather than delegating it to the normal C runtime as browsers currently do.  
Then what are currently small differences will become much larger ones, as 
the remaining inefficiencies make up a bigger percentage of the execution 
time.  But the language is evolving, and hopefully there will be time to add 
more optimization in the next year or two.
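
As a small sketch of where those allocations come from (my own illustration 
in TypeScript, not code from any of the compilers): the functional shape 
allocates a fresh cons cell per element, while the imperative shape 
allocates nothing in its inner loop, so only the former is gated by the 
allocator and GC.

```typescript
type Cons<T> = { head: T; tail: Cons<T> | null };

// Functional shape: build an intermediate list of squares, then fold it.
// Every iteration allocates a small, short-lived object.
function sumSquaresViaList(n: number): number {
  let list: Cons<number> | null = null;
  for (let i = 0; i < n; i++) {
    list = { head: i * i, tail: list }; // one small allocation per element
  }
  let total = 0;
  for (let c = list; c !== null; c = c.tail) total += c.head;
  return total;
}

// Imperative shape: same result with no per-element allocation at all.
function sumSquaresDirect(n: number): number {
  let total = 0;
  for (let i = 0; i < n; i++) total += i * i;
  return total;
}
```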

Currently, I still see slowdowns of a factor of two or more when doing 
intensive math operations in tight loops, but I do have the option of 
writing those in BuckleScript and importing them into Elm as Native 
libraries.  I accept that the majority of users will never see the effects 
that I do.  For instance, many may never need an IArray linear array library 
like Haskell's, which is immutable in its interface but has efficient update 
functions that use mutability under the covers to build the modified (and 
thereafter immutable) result.  This has uses such as bit-blitting graphics 
and matrix calculations, yet does not break immutability in the language and 
doesn't require the complexity of a state monad threaded through the array 
modifications.
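
As a sketch of the kind of IArray interface I mean (my own illustration in 
TypeScript, not an existing Elm or Haskell API): the array is immutable as 
far as callers can tell, but a batch of updates copies once, mutates the 
copy in place, and freezes the result.

```typescript
class IArray {
  private constructor(private readonly data: readonly number[]) {}

  static fromArray(xs: number[]): IArray {
    return new IArray(Object.freeze([...xs]));
  }

  get(i: number): number {
    return this.data[i];
  }

  get length(): number {
    return this.data.length;
  }

  // One copy for the whole batch, cheap in-place writes on the copy, then the
  // result is frozen again, so callers never observe any mutation.
  bulkUpdate(updates: Array<[number, number]>): IArray {
    const copy = [...this.data];
    for (const [i, v] of updates) copy[i] = v;
    return new IArray(Object.freeze(copy));
  }
}

// Usage: updating two slots leaves the original untouched.
const a = IArray.fromArray([0, 0, 0, 0]);
const b = a.bulkUpdate([[1, 10], [3, 30]]);
// a.get(1) === 0, b.get(1) === 10
```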

So with your help, we have determined that Elm code is actually quite fast, 
other than for the function calls necessitated by type information not being 
available to the code generator (your issue), an effect that is amplified in 
my case by the tightness of the loops I am trying to write.

Thank you for illuminating this for me.
