On 03.12.2011 20:48, Aaron Meurer wrote:
Right now, Stefan's script works only for his machine, if I were to
run the same script, we would just get duplicate reports (unless we
timed them to be offset or something).  So we really need to get some
polling mechanism implemented,

This sounds worrying.

You see, what started as a simple buildbot now needs to coordinate work across servers, any of which may fail, hang, or disconnect, which complicates the picture considerably.
There are also issues with identifying units of work, making sure each runs exactly once, and avoiding useless duplication. In particular, you need to detect whether a run failed because the test itself failed or because of some server mishap: in the former case you want to report the test as failed; in the latter, you need to schedule the test on another machine. You may even want to supersede a server-failed test entirely, because the module change it was supposed to test has since been reverted or reworked.
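To make the distinction concrete, here is a minimal sketch of that scheduling logic, in Python. All names here (`WorkUnit`, `run_on_server`, `dispatch`) are hypothetical illustrations, not part of any existing buildbot script: a test failure is a final result, while a server failure only requeues the unit for another machine.

```python
import queue
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    PASSED = auto()
    TEST_FAILED = auto()    # the test itself failed: report it as a result
    SERVER_FAILED = auto()  # infrastructure mishap: reschedule elsewhere

@dataclass
class WorkUnit:
    commit: str      # the module change this unit is supposed to test
    attempts: int = 0

def dispatch(units, run_on_server, max_attempts=3):
    """Polling-style dispatcher: each unit is reported exactly once.
    Only a SERVER_FAILED outcome requeues the unit (up to max_attempts);
    PASSED and TEST_FAILED are final results."""
    pending = queue.Queue()
    for u in units:
        pending.put(u)
    reports = {}
    while not pending.empty():
        unit = pending.get()
        unit.attempts += 1
        outcome = run_on_server(unit)  # caller supplies the actual runner
        if outcome is Outcome.SERVER_FAILED and unit.attempts < max_attempts:
            pending.put(unit)          # server mishap: try another machine
        else:
            reports[unit.commit] = outcome  # final: passed, failed, or gave up
    return reports
```

Superseding a unit whose change was reverted would amount to dropping it from the queue before it is requeued; that bookkeeping is omitted here for brevity.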

I'm a bit worried that we'll end up working more on the testing framework than on SymPy itself. I agree that UI quality is relevant; however, since I see lots of projects using Hudson/Jenkins, I suspect other factors have eclipsed any UI problems.

None of this means that Jenkins is the way to go; I have too little experience with automated testing to judge anything here.
I'm just wondering whether all the relevant factors have been considered.

Regards,
Jo

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/sympy?hl=en.
