A memory.copy instruction should be emitted if you pass -mbulk-memory while 
using the LLVM backend. I’d be very interested in how that affects your 
benchmark.
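For reference, a sketch of the kind of invocation I mean (the source filename here is hypothetical, and flag spellings may differ slightly depending on your Emscripten version; the runtime you test in also needs to support the bulk memory proposal):

```shell
# Build with the upstream LLVM backend and enable bulk memory operations.
# -mbulk-memory lets LLVM lower memcpy/memmove to the wasm memory.copy
# instruction instead of a byte-by-byte (or word-by-word) copy loop.
emcc -O2 -mbulk-memory memmove_benchmark.cpp -o benchmark.js
```

You can check whether it took effect by disassembling the output (e.g. with wasm-dis from Binaryen) and grepping for memory.copy.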

> On Jul 12, 2019, at 09:01, Lilit Darbinyan <[email protected]> wrote:
> 
> I have a benchmark where I insert at the front of a vector 1k times in a 
> loop, which causes the vector to grow continuously. 
> 
> This was pretty slow with the Fastcomp-generated Wasm binary (~80ms), and 
> sure enough, profiling showed that the hot path was memmove. 
> I have now switched to the new LLVM upstream backend, and it's much faster 
> (~20ms), but still not as fast as I expected, and memmove is still the most 
> time-consuming function in the profile. 
> 
> I have inspected the generated wasm binary and don't see any of the new bulk 
> memory operations there, so my questions are:
> 
> - Does the new LLVM upstream backend support bulk memory operations?
> - If not, then why am I seeing this speedup by switching to the LLVM backend? 
> 
> The benchmark code can be found here: 
> https://github.com/ldarbi/wasm-scratchpad/tree/master/memmove
> -- 
> You received this message because you are subscribed to the Google Groups 
> "emscripten-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/emscripten-discuss/eb5441fc-02d2-4894-8559-6d7ea5ea1e61%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
