Ok, it now tries to use the native optimizer, but I'm getting a compile 
error on OSX 10.10: the <unordered_set> header can't be found when it 
compiles the native optimizer. I filed a ticket 
here: https://github.com/kripken/emscripten/issues/2997

Let me know if/how I can help :)

Cheers,
-Floh.

On Saturday, November 15, 2014 at 01:58:20 UTC+1, Alon Zakai wrote:
>
> Ah, oops, the native optimizer wasn't working on minified output (the 
> default in -O2+). Fixed now on incoming. You might currently need to run 
> emcc --clear-cache to rebuild the optimizer.
>
> - Alon
>
>
> On Fri, Nov 14, 2014 at 4:00 PM, Floh <[email protected]> wrote:
>
>> Strange, I can't seem to get it to use the native optimizer; at least 
>> I'm not seeing any 'building native optimizer' or 'js optimizer using 
>> native' messages. I'm using the emscripten-sdk with incoming, and I can 
>> see EMCC_NATIVE_OPTIMIZER in tools/js_optimizer.py, so what am I doing 
>> wrong? :) 
>>
>> Here's the complete console dump:
>>
>> FlohOfWoe:oryol floh$ EMCC_NATIVE_OPTIMIZER=1 
>> sdks/osx/emsdk_portable/emscripten/incoming/emcc -O2 hello.c 
>> DEBUG    root: PYTHON not defined in ~/.emscripten, using 
>> "/usr/bin/python2"
>> DEBUG    root: JAVA not defined in ~/.emscripten, using "java"
>> WARNING  root: invocation: 
>> sdks/osx/emsdk_portable/emscripten/incoming/emcc -O2 hello.c  (in 
>> /Users/floh/projects/oryol)
>> INFO     root: (Emscripten: Running sanity checks)
>> DEBUG    root: compiling to bitcode
>> DEBUG    root: emcc step "parse arguments and setup" took 0.00 seconds
>> DEBUG    root: compiling source file: hello.c
>> DEBUG    root: running: 
>> /Users/floh/projects/oryol/sdks/osx/emsdk_portable/clang/fastcomp/build_incoming_64/bin/clang
>>  
>> -target asmjs-unknown-emscripten -D__EMSCRIPTEN_major__=1 
>> -D__EMSCRIPTEN_minor__=26 -D__EMSCRIPTEN_tiny__=1 
>> -Werror=implicit-function-declaration -nostdinc -Xclang -nobuiltininc 
>> -Xclang -nostdsysteminc -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/local/include
>>  
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/include/compat
>>  
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/include
>>  
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/include/emscripten
>>  
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/include/libc
>>  
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/lib/libc/musl/arch/js
>>  
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/include/libcxx
>>  
>> -O2 -mllvm -disable-llvm-optzns -emit-llvm -c hello.c -o 
>> /var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/tmp9Plu1v/hello_0.o 
>> -Xclang 
>> -isystem/Users/floh/projects/oryol/sdks/osx/emsdk_portable/emscripten/incoming/system/include/SDL
>> DEBUG    root: emcc step "bitcodeize inputs" took 0.01 seconds
>> DEBUG    root: optimizing hello.c
>> DEBUG    root: emcc: LLVM opts: -O3
>> DEBUG    root: emcc step "process inputs" took 0.29 seconds
>> DEBUG    root: will generate JavaScript
>> DEBUG    root: including libc
>> DEBUG    root: emcc step "calculate system libraries" took 0.01 seconds
>> DEBUG    root: linking: 
>> ['/var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/tmp9Plu1v/hello_0_1.o', 
>> '/Users/floh/.emscripten_cache/libc.bc']
>> DEBUG    root: emcc: llvm-linking: 
>> ['/var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/tmp9Plu1v/hello_0_1.o', 
>> '/Users/floh/.emscripten_cache/libc.bc'] to 
>> /var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/tmp9Plu1v/a.out.bc
>> DEBUG    root: emcc step "link" took 0.02 seconds
>> DEBUG    root: saving intermediate processing steps to 
>> /var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/emscripten_temp
>> DEBUG    root: emcc: LLVM opts: -strip-debug -internalize 
>> -internalize-public-api-list=main,malloc,free -globaldce 
>> -pnacl-abi-simplify-preopt -pnacl-abi-simplify-postopt
>> DEBUG    root: emcc step "post-link" took 0.02 seconds
>> DEBUG    root: LLVM => JS
>> DEBUG    root: PYTHON not defined in ~/.emscripten, using 
>> "/usr/bin/python2"
>> DEBUG    root: JAVA not defined in ~/.emscripten, using "java"
>> DEBUG    root: not building relooper to js, using it in c++ backend
>> DEBUG    root: emscript: llvm backend: 
>> /Users/floh/projects/oryol/sdks/osx/emsdk_portable/clang/fastcomp/build_incoming_64/bin/llc
>>  
>> /var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/tmp9Plu1v/a.out.bc 
>> -march=js -filetype=asm -o 
>> /var/folders/t5/y4k268cs1fl4hsygfbrtjzl00000gn/T/emscripten_temp/tmpKKukJK.4.js
>>  
>> -O2 -emscripten-max-setjmps=20
>> DEBUG    root:   emscript: llvm backend took 0.0165209770203 seconds
>> DEBUG    root: emscript: js compiler glue
>> DEBUG    root:   emscript: glue took 0.216127157211 seconds
>> DEBUG    root: asm text sizes[[105107, 1849, 25], 0, 130, 1180, 0, 0, 29, 
>> 274, 234, 663, 348]
>> DEBUG    root:   emscript: final python processing took 0.00248885154724 
>> seconds
>> DEBUG    root: emcc step "emscript (llvm=>js)" took 0.40 seconds
>> DEBUG    root: wrote memory initialization to a.out.js.mem
>> DEBUG    root: emcc step "source transforms" took 0.01 seconds
>> DEBUG    root: running js post-opts
>> DEBUG    root: applying js optimization passes: ['asm', 'emitJSON', 
>> 'minifyWhitespace']
>> chunkification: num funcs: 7 actual num chunks: 1 chunk size range: 
>> 108251 - 108251
>> .
>> DEBUG    root: applying js optimization passes: ['asm', 'receiveJSON', 
>> 'eliminate', 'simplifyExpressions', 'simplifyIfs', 'registerize', 
>> 'minifyNames', 'emitJSON', 'minifyWhitespace']
>> chunkification: num funcs: 3 actual num chunks: 1 chunk size range: 
>> 307159 - 307159
>> .
>> DEBUG    root: applying js optimization passes: ['asm', 'receiveJSON', 
>> 'cleanup', 'asmLastOpts', 'last', 'minifyWhitespace']
>> chunkification: num funcs: 3 actual num chunks: 1 chunk size range: 
>> 162909 - 162909
>> .
>> running cleanup on shell code
>> DEBUG    root: emcc step "js opts" took 0.96 seconds
>> DEBUG    root: emcc step "final emitting" took 0.00 seconds
>> DEBUG    root: total time: 1.72 seconds
>>
>>
>>
>> On Friday, November 14, 2014 at 02:19:47 UTC+1, Alon Zakai wrote:
>>
>>> Early this year the fastcomp project replaced the core compiler, which 
>>> was written in JS, with an LLVM backend written in C++, and that brought 
>>> large compilation speedups. However, the late JS optimization passes were 
>>> still run in JS, which meant optimized builds could be slow (unoptimized 
>>> builds typically skip those JS optimizations). Especially in very large 
>>> projects, this could be annoying.
>>>
>>> Progress towards speeding up those JS optimization passes just landed, 
>>> turned off, on incoming. This is not yet stable or ready, so it is *not* 
>>> enabled by default. Feel free to test it though and report bugs. To use it, 
>>> build with
>>>
>>> EMCC_NATIVE_OPTIMIZER=1
>>>
>>> in the environment, e.g.
>>>
>>> EMCC_NATIVE_OPTIMIZER=1 emcc -O2 tests/hello_world.c
>>>
>>> It only matters when compiling to JS (not when compiling C++ to 
>>> object files/bitcode). With EMCC_DEBUG=1, you should see a message that 
>>> the native optimizer is being used. The first time you use it, it will 
>>> also report that it is compiling the optimizer, which can take several 
>>> seconds.
>>>
>>> The native optimizer is basically a port of the JS optimizer passes from 
>>> JS into C++11. C++11 features like lambdas made this much easier than it 
>>> would otherwise have been, as the JS code uses lots of lambdas. The ported 
>>> code uses the same JSON-based AST, implemented in C++.
>>>
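>>> To illustrate the array-based JSON AST idea, here is a minimal sketch in 
>>> Python. The node shapes used here ("binary", "name", "num") are 
>>> hypothetical, purely for demonstration - the real node layouts are 
>>> defined by the optimizer source:

```python
import json

def walk(node, visit):
    """Depth-first traversal of an array-based AST, where each node is a
    plain list whose first element names the node type."""
    visit(node)
    for child in node[1:]:
        if isinstance(child, list):
            walk(child, visit)

# Hypothetical AST for the expression  x + 1
ast = ["binary", "+", ["name", "x"], ["num", 1]]

types = []
walk(ast, lambda n: types.append(n[0]))
print(types)  # -> ['binary', 'name', 'num']

# Because nodes are plain lists, the whole AST serializes directly to
# JSON, which is what lets the JS and C++ sides exchange it.
print(json.dumps(ast))
```

>>>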
>>> Using C++11 is a little risky. We build the code natively, using clang 
>>> from fastcomp, but we use the system C++ standard library. In principle, 
>>> if that library is not C++11-friendly, problems could happen. It has 
>>> worked fine everywhere I have tested so far.
>>>
>>> Not all passes have been converted, but the main time-consuming passes 
>>> in -O2 have been (eliminate, simplifyExpressions, registerize). (Note that 
>>> in -O3 the registerizeHarder pass has *not* yet been converted.) The 
>>> toolchain can run some passes in JS and some natively, using JSON to 
>>> serialize the AST between them.
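>>> As a rough sketch of that mixed execution (the pass names and 
>>> implementations below are stand-ins, not emscripten's actual pass 
>>> tables), a driver could dispatch each pass to whichever side implements 
>>> it, paying a JSON round-trip at the boundary:

```python
import json

def eliminate(ast):
    # Stand-in for a natively ported pass: drop hypothetical "dead" nodes.
    return [n for n in ast if n[0] != "dead"]

def registerize_harder(ast):
    # Stand-in for a pass still implemented in JS (identity here).
    return ast

NATIVE = {"eliminate": eliminate}               # hypothetical native table
JS = {"registerizeHarder": registerize_harder}  # hypothetical JS-side table

def run_passes(ast, passes):
    for name in passes:
        impl = NATIVE.get(name) or JS[name]
        # Each native/JS boundary crossing costs a JSON round-trip:
        ast = json.loads(json.dumps(impl(ast)))
    return ast

ast = [["func", "f"], ["dead", "g"]]
print(run_passes(ast, ["eliminate", "registerizeHarder"]))  # -> [['func', 'f']]
```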
>>>
>>> Potentially this approach can speed us up very significantly, but it 
>>> isn't quite there yet. JSON parsing/unparsing and the passes themselves 
>>> can run natively, and in tests they run about 4x faster and use roughly 
>>> half as much memory. However, there is overhead from serializing JSON 
>>> between native code and JS, which will remain until 100% of the passes 
>>> you use are native. Also, and more significantly, we do not yet have a 
>>> parser from JS - the output of fastcomp - to the JSON AST. That means we 
>>> send fastcomp's output into JS to be parsed, JS emits JSON, and the 
>>> native optimizer reads that in.
>>>
>>> For those reasons, the current speedup is not dramatic. I see around a 
>>> 10% improvement, far from what we could eventually reach.
>>>
>>> Further speedups will come as the remaining passes are converted. The 
>>> bigger task is to write a JS parser in C++ for this. That is not trivial, 
>>> since parsing JS involves corner cases and ambiguities. I'm looking into 
>>> existing code for this, but I'm not sure there is anything we can easily 
>>> reuse - JS engine parsers are written in C++ but tend to be hard to 
>>> detach. If anyone has good ideas here, that would be useful.
>>>
>>> - Alon
>>>
>>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "emscripten-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

