On Tue, 12 Oct 2010, Levente Uzonyi wrote:
On Tue, 12 Oct 2010, Nicolas Cellier wrote:
I just realized that sometimes the 100 ms timeout is not enough even on Cog,
and the test fails; 200 ms works well. It typically happens when running
#testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or more, to
make sure it doesn't fail with SqueakVM or on slower machines.
Levente
Hmm, I increased it to 1000 ms, but I still get some random failures...
Nicolas
Yes, it's a bit strange. If I change it to 200 ms, every test passes with no
random failures. If I increase it to 1000 ms, I get random failures.
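For reference, the timeout in these tests is essentially a wait with a
deadline. A minimal sketch of that pattern, assuming a Semaphore that the
producer signals when output is ready (this is not the actual test code):

    | ready |
    ready := Semaphore new.
    "The producer is expected to signal 'ready' when output is
    available. #waitTimeoutMSecs: answers true if the wait timed out."
    (ready waitTimeoutMSecs: 1000)
        ifTrue: [ self error: 'timed out waiting for the producer' ]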
Okay, I have really tracked down the cause of the problem now. The tests
perform simple producer-consumer scenarios, but the consumers don't wait at
all: if there's nothing to consume, the test fails. The randomness comes from
the scheduler. The server process is started first, the client second. If the
server process can produce enough input for the client process, the test will
pass.
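To make the race concrete, here is a hypothetical reduction using a
SharedQueue (not the actual test code or the classes under test):

    | queue |
    queue := SharedQueue new.
    "Server: forked first, produces the input."
    [ 1 to: 100 do: [ :each | queue nextPut: each ] ] fork.
    "Client: forked second at the same priority. It polls instead of
    waiting, so it only succeeds if the scheduler already let the
    server run far enough to produce something."
    [ queue isEmpty
        ifTrue: [ Transcript show: 'nothing to consume, the test fails'; cr ] ]
            fork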
If you decrease the priority of the client process by one in
#timeout:server:client:, the randomness is gone and the tests pass reliably,
because the client can't starve the server anymore. To avoid false timeouts I
had to increase the timeout value to 2000 milliseconds when using SqueakVM.
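In code, the relevant part of #timeout:server:client: becomes something like
this (a sketch only; the real method also has to wait for the processes and
fail the test on timeout, and the parameter names are my own):

    timeout: milliseconds server: serverBlock client: clientBlock
        "Fork the client one priority level below the server, so the
        client cannot starve the server of CPU time, then give both
        processes at most the given number of milliseconds."
        | basePriority |
        basePriority := Processor activePriority - 1.
        serverBlock forkAt: basePriority.
        clientBlock forkAt: basePriority - 1.
        (Delay forMilliseconds: milliseconds) wait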
I also found an issue: the process in XTTransformWriteStream doesn't
terminate. If you run the tests, you'll get 12 lingering processes.
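The fix would be a cleanup along these lines, assuming the stream keeps the
forked process in a 'process' instance variable (the variable name and the
#close override are guesses on my part):

    close
        "Terminate the background transform process so it doesn't
        linger after the stream is closed."
        process ifNotNil:
            [ process terminate.
              process := nil ].
        super close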
Levente
_______________________________________________
Pharo-project mailing list
[email protected]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project