I have a benchmark that inserts at the front of a vector 1,000 times in a 
loop, so the vector grows continuously. 

This was pretty slow with the Fastcomp-generated Wasm binary (~80 ms), and 
sure enough, profiling showed that the hot path was memmove. 
I have now switched to the new LLVM upstream backend; it's much faster 
(~20 ms), but still not as fast as I expected, and memmove still shows up 
as the most time-consuming function. 

I have inspected the generated wasm binary and don't see any of the new 
bulk memory operations there, so my questions are:

- Does the new LLVM upstream backend support bulk memory operations?
- If not, then why am I seeing this speedup by switching to the LLVM 
backend? 

The benchmark code can be found here: 
https://github.com/ldarbi/wasm-scratchpad/tree/master/memmove

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
