Thanks! About the sanity checks: in general I don't recommend running them - they are the least portable part of the test suite (e.g. they don't work on Windows), and it's hard to test the things they check "properly". So I'd just ignore those. Although the first issue (about the vanilla/wasm backend) does look like a recent regression; I commented on https://github.com/kripken/emscripten/commit/8737d35b3c356a87741844f1d41e53450df22052 (we don't run the sanity tests on the bots, so this was missed).
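For context on that regression: the sanity checks assert that EMCC_WASM_BACKEND is not forced, so any code reading the variable has to tolerate it being unset. A minimal sketch of the difference between the two access patterns discussed in this thread (`os.environ[...]` vs `os.getenv(...)` with a default); the variable name is the real one, but the surrounding code is illustrative, not Emscripten's actual source:

```python
import os

# Simulate a run where the vanilla/wasm-backend check never set the variable:
os.environ.pop('EMCC_WASM_BACKEND', None)

# os.environ[...] raises KeyError for an unset variable:
try:
    flag = 'EMCC_WASM_BACKEND=' + os.environ['EMCC_WASM_BACKEND']
except KeyError:
    flag = None
print(flag)  # -> None

# os.getenv(...) with a default never raises, which is why the reported
# workaround switches to it:
flag = 'EMCC_WASM_BACKEND=' + os.getenv('EMCC_WASM_BACKEND', '0')
print(flag)  # -> EMCC_WASM_BACKEND=0
```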
The first main test suite error is strange. I can't reproduce it locally, and I can see js_outfile is a param to build() - do you not see that as well, in tests/runner.py:285? No local changes on your system?

I can verify the second main test suite error; it was a regression from a28993402158a57d771abd4b4894a053b9eef644, which I commented on: https://github.com/kripken/emscripten/commit/a28993402158a57d771abd4b4894a053b9eef644 This is from a day or two ago; I guess the committer hasn't noticed the error on the bots yet.

SpiderMonkey nightly should be fine. I use it, although I only get around to updating it once a week or so.

On Wed, Feb 8, 2017 at 5:53 PM, Qwerty Uiop <[email protected]> wrote:
>
> On Wednesday, February 8, 2017 at 3:24:25 AM UTC+2, Alon Zakai wrote:
>>
>> 1. Yes, incoming is where new development happens. It is common to have
>> local failures, though, for various reasons; for example, some tests use
>> optional features like closure compiler (needs Java) or crunch (needs
>> crunch), so without installing those, they will fail. But those are a small
>> fraction of the tests - if you see lots of failures, there might be a
>> problem with the local install (e.g., an old node.js version can cause that).
>>
>> What errors do you see?
>
> First of all, the sanity test immediately fails with the following error:
>
> ======================================================================
> ERROR: setUpClass (test_sanity.sanity)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/brd/work/src/emscripten/tests/test_sanity.py", line 33, in setUpClass
>     assert not os.environ.get('EMCC_WASM_BACKEND'), 'do not force wasm backend either way in sanity checks!'
> AssertionError: do not force wasm backend either way in sanity checks!
>
> Quick investigation shows that the culprit is line #898 in tools/shared.py:
>
>   os.environ['EMCC_WASM_BACKEND'] = str(is_vanilla)
>
> However, commenting it out is not enough, since the runner would fail later, on line 1261:
>
>   'EMCC_WASM_BACKEND='+os.environ['EMCC_WASM_BACKEND']
>
> since the env var EMCC_WASM_BACKEND is now undefined. So I have to change it
> to the following for the runner to work:
>
>   'EMCC_WASM_BACKEND='+os.getenv('EMCC_WASM_BACKEND', '0')
>
> Then:
>
> 1) Several other tests in the sanity suite were failing because
> EMSCRIPTEN_NATIVE_OPTIMIZER was set in my .emscripten to a prebuilt binary.
> Once I removed it from there, the tests passed.
>
> 2) sanity.test_firstrun fails because it seems to be written incorrectly:
> on my system nodejs is located inside $HOME (namely, in $HOME/subst), the
> corresponding directory is first in PATH, and Emscripten correctly finds the
> executable on the first run. But the test expects that Emscripten always
> looks for 'node' in PATH or for 'nodejs' in /usr/bin.
>
> 3) sanity.test_emcc_caching fails with the following message:
>
> ======================================================================
> FAIL: test_emcc_caching (test_sanity.sanity)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/brd/work/src/emscripten/tests/test_sanity.py", line 487, in test_emcc_caching
>     assert ERASING_MESSAGE in output
> AssertionError
>
> Also, some tests in the sanity suite were failing because I specified a
> wrong path to d8.
>
> Here are examples of errors in the main suite:
>
> test_dylink_basics (test_core.asm1) ... ERROR
> test_dylink_class (test_core.asm1) ... ERROR
> test_dylink_dot_a (test_core.asm1) ... [was asm.js'ified]
> [was asm.js'ified]
> (test did not pass in JS engine: ['/home/brd/soft/SpiderMonkeyShell-nightly-2017-02-07/js', '-w'])
> ERROR
> test_dylink_dynamic_cast (test_core.asm1) ...
> ERROR
> test_dylink_floats (test_core.asm1) ... ERROR
> test_dylink_funcpointer (test_core.asm1) ... ERROR
> test_dylink_funcpointers (test_core.asm1) ... ERROR
> test_dylink_funcpointers2 (test_core.asm1) ... ERROR
> test_dylink_funcpointers_float (test_core.asm1) ... ERROR
> test_dylink_funcpointers_wrapper (test_core.asm1) ... ERROR
> test_dylink_global_init (test_core.asm1) ... ERROR
> test_dylink_global_inits (test_core.asm1) ... ERROR
> test_dylink_global_var (test_core.asm1) ... ERROR
> test_dylink_global_var_jslib (test_core.asm1) ... ERROR
> test_dylink_global_var_modded (test_core.asm1) ... ERROR
> test_dylink_hyper_dupe (test_core.asm1) ... ERROR
> test_dylink_i64 (test_core.asm1) ... ERROR
> test_dylink_i64_b (test_core.asm1) ... ERROR
> test_dylink_iostream (test_core.asm1) ... ERROR
> test_dylink_jslib (test_core.asm1) ... ERROR
> test_dylink_mallocs (test_core.asm1) ... ERROR
> test_dylink_many_postSets (test_core.asm1) ... ERROR
> test_dylink_printfs (test_core.asm1) ... ERROR
> test_dylink_spaghetti (test_core.asm1) ... ERROR
> test_dylink_syslibs (test_core.asm1) ...
> syslibs libcxx 0
> ERROR
>
> ======================================================================
> ERROR: test_dylink_dynamic_cast (test_core.default)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/brd/work/src/emscripten/tests/test_core.py", line 3619, in test_dylink_dynamic_cast
>     ''', expected=['starting main\nBase\nDerived\nOK'])
>   File "/home/brd/work/src/emscripten/tests/test_core.py", line 3048, in dylink_test
>     self.build(side, self.get_dir(), base, js_outfile=(side_suffix == 'js'))
> TypeError: build() got an unexpected keyword argument 'js_outfile'
>
> ======================================================================
> ERROR: test_dlfcn_self (test_core.default)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/brd/work/src/emscripten/tests/test_core.py", line 27, in decorated
>     return func(self)
>   File "/home/brd/work/src/emscripten/tests/test_core.py", line 2669, in test_dlfcn_self
>     self.do_run_from_file(src, output, post_build=(None, post))
>   File "/home/brd/work/src/emscripten/tests/runner.py", line 623, in do_run_from_file
>     includes, force_c, build_ll_hook, extra_emscripten_args)
>   File "/home/brd/work/src/emscripten/tests/runner.py", line 655, in do_run
>     raise e
> Exception: Expected to find '123
> 123' in '/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:16982:6 warning: unreachable code after return statement
> /tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:14628:11 TypeError: GL.tempFixedLengthArray is undefined
> Stack:
> init@/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:14628:11
> @/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:26452:12
> ', diff:
>
> --- expected
> +++ actual
> @@ -1,2 +1,6 @@
> -123
> -123
> +/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:16982:6 warning: unreachable code after return statement
> +/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:14628:11 TypeError: GL.tempFixedLengthArray is undefined
> +Stack:
> +  init@/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:14628:11
> +  @/tmp/emscripten_test_default_RSeJ40/src.cpp.o.js:26452:12
> +
>
> 311 errors in total.
>
> BTW, is it OK to use a nightly build of SpiderMonkey shell? I.e., is it
> stable enough?
>
>> 2. Yes, failures on master should not happen, definitely. Assuming those
>> aren't due to lack of Java/crunch/etc. as just mentioned, it does suggest a
>> local install problem, as the master branch is known to pass tests on both
>> the bots and a dev's local machine. Not a perfect guarantee but pretty good
>> ;)
>>
>> I think the python test runner has FAIL for tests that fail a check, as
>> in "this test verifies the output is X, and the output was Y", while
>> ERROR is the state of tests that fail to complete due to a Python exception
>> (e.g., due to out of memory, or the test trying to access a file that
>> doesn't exist, etc.).
>>
>> On Tue, Feb 7, 2017 at 3:41 PM, Qwerty Uiop <[email protected]> wrote:
>>
>>> Hi,
>>> I have a couple of questions:
>>>
>>> 1) Which branch should I use as a base - incoming or master? On one
>>> hand, the dev's guide instructs to use "incoming". On the other hand, it
>>> also instructs to make sure that all tests pass before submitting a patch,
>>> but the "incoming" branch already has a lot of failing tests...
>>>
>>> 2) I also have some failing tests on the "master" branch. Am I right
>>> that this shouldn't happen, and if it does, it should be reported? (Or,
>>> perhaps, my build environment is broken in some way.)
>>>
>>> Btw, the test runner reports "failures" and "errors" (e.g. "FAILED
>>> (failures=3, errors=1)") - what's the difference between them?
>>>
>>> Thanks!
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "emscripten-discuss" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> For more options, visit https://groups.google.com/d/optout.
