I'd like to get some comments on what I have so far and stuff I plan on doing.
--- What I Have ---

If you go to http://faculty.cs.byu.edu/~jay/tmp/14986/ you'll see the interface for a sample run of my tester on revision 14986. [No particular reason, Robby]

The essential details about the revision are listed at the top (notice the links into the testing logs), followed by a table summarizing the run. This table is shown whenever the path is a directory. Each entry in the table depends on whether its path is a file or a directory. If it is a file, you have (1) how long it took to "mred-text -t <path>" the path; (2) whether the execution timed out (the current timeout for everything is 10 minutes); (3) whether mred-text exited cleanly (meaning with status code 0); and (4) whether there was output on stderr. If it is a directory, you see the same information, summed over the contents of the directory. At the bottom is the entry for the entire directory. The path name on the left is a link to the page for that directory or file.

As you browse, the breadcrumbs at the top of the page accumulate. Each sub-path is a link to the corresponding page, as you'd expect.

If you go to the page for a file (http://faculty.cs.byu.edu/~jay/tmp/14986/src/build/make.html or http://faculty.cs.byu.edu/~jay/tmp/14986/collects/frtime/gui/mod-mrpanel_ss.html are decent choices for this demo), you'll see more information about the run, including the log. The stdout output is black; the stderr output is red.

--- What I'd Like From You ---

1) Comments on the interface and on what information you would want displayed
2) A suggestion for a name [I'm thinking pis: the PLT Integration Server =P]
3) Comments on the ideas below

--- What I Plan On Doing ---

Here are some things I know I am planning on doing:

* Determining if a file tests differently than it did in the previous revision. Combining this with the saved terminal output is, IMHO, a fairly robust way to locate errors while staying testing-suite agnostic. Basically, there will be another column, "Changed?", indicating whether the output has changed. This will be the basis of the "nag" emails, with some heuristics to avoid unnecessary naggery. For example, if a file you just edited in the commit displays something different, that alone won't be considered an error if the file always displayed something in that way: if X.ss never printed anything on stderr, then it will nag when it starts to, even though you just edited it; but if X.ss always printed on stderr, it won't nag just because it prints something different. Obviously these heuristics will be very fluid. (There is a rough sketch of this check after this list.)

* Using Subversion properties to set the timeout on a per-file basis. This will keep the build from waiting forever on DrScheme, since it will never complete. By putting the timeout in Subversion, it is versioned with the file, so the metadata is not in a magical place on the server.

* Using Subversion properties to set the command-line options and execution program. Most of the files can be run in mzscheme, but about 1000 need to be run in mred. Also, many files (particularly in collects/tests/mzscheme/benchmarks) need command-line arguments to run properly. This will include an option to ignore files. The default will be that if a file ends in .ss, .scm, or .scrbl, then mzscheme -t will do it. Again, if these are in Subversion, then it is more transparent and trackable what the test server should be doing. (The second sketch after this list shows how the runner might use these properties.) [I will set the initial versions of these properties; no need to worry about it.]

* Jump to output changes on the first page

* Emitting the status of a build on Twitter

* Nag emails to the committer and plt:responsible

* Client-side sorting of directory listings

* Eventually I'd like to do two different kinds of builds: a "fast" build that reuses the previous slow build, but updates to the next version and perhaps uses some sort of dependency heuristic to avoid running everything; and a "clean" build corresponding to the current full run. The goal would be for the fast build to finish within 30 minutes, while the clean build would be available in a few hours.
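To make the stderr heuristic above concrete, here is a minimal sketch in PLT Scheme. The struct and names are purely illustrative, not the tester's actual data structures:

  ;; Minimal sketch of the stderr nag heuristic. The struct and names are
  ;; illustrative only; the real tester records more than just stderr.
  #lang scheme

  ;; One run of one file: the text it printed to stderr ("" if nothing).
  (define-struct result (stderr-output) #:transparent)

  ;; Nag iff the file used to be quiet on stderr and no longer is. A file
  ;; that has always been noisy is not nagged for being noisy differently.
  (define (nag? previous current)
    (define was-quiet? (string=? "" (result-stderr-output previous)))
    (define is-quiet?  (string=? "" (result-stderr-output current)))
    (and was-quiet? (not is-quiet?)))

  ;; (nag? (make-result "") (make-result "boom"))               => #t
  ;; (nag? (make-result "warning A") (make-result "warning B")) => #f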
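And here is a similarly rough sketch of how the runner could consult those Subversion properties. The property names (plt:timeout and plt:command-line) are placeholders I haven't settled on, and error handling and stdout/stderr capture are elided:

  ;; Rough sketch of a property-driven runner. plt:timeout and
  ;; plt:command-line are placeholder property names.
  #lang scheme
  (require scheme/system scheme/port)

  ;; Ask svn for a property on a path; returns #f if it is unset or empty.
  (define (svn-propget prop path)
    (define-values (sub out in err)
      (subprocess #f #f #f (find-executable-path "svn") "propget" prop path))
    (define val (regexp-replace #rx"[ \t\r\n]+$" (port->string out) ""))
    (subprocess-wait sub)
    (close-input-port out) (close-output-port in) (close-input-port err)
    (and (positive? (string-length val)) val))

  ;; Default: "mzscheme -t" handles .ss, .scm, and .scrbl files; a
  ;; plt:command-line property overrides the default (e.g. to use mred).
  (define (command-for path)
    (or (svn-propget "plt:command-line" path)
        (and (regexp-match #rx"\\.(ss|scm|scrbl)$" path)
             (format "mzscheme -t ~a" path))))

  ;; Run the command for a path, giving up after its timeout (default 600s).
  (define (test-one path)
    (define timeout
      (cond [(svn-propget "plt:timeout" path) => string->number]
            [else 600]))
    (define cmd (command-for path))
    (if (not cmd)
        'skipped
        (let* ([ctrl   (list-ref (process cmd) 4)]
               [waiter (thread (lambda () (ctrl 'wait)))])
          (cond [(sync/timeout timeout waiter) (ctrl 'exit-code)]
                [else (ctrl 'kill) 'timed-out]))))

The properties themselves would be set with plain svn propset (e.g. "svn propset plt:timeout 60 <path>"), so they travel with the file in the repository.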
--- Some Data ---

It took 7.34 hours to do the build on my MacBook in power-save mode. At the average of 8 commits per day, this is too slow to keep up with the edge, but perhaps a better machine will do better. I currently don't even run tests on two files at once (I could, because I have dual CPUs on the laptop). Plus, with better timeouts I could shave off almost 5 hours.

It took 500 MB of space for the source & compiled source, 18 MB for the output logs, and 20 MB for the UI.

There were 20 files created in `pwd`, most of them with bizarre names. There were 20 PLaneT packages installed.

--
Jay McCarthy <j...@cs.byu.edu>
Assistant Professor / Brigham Young University
http://teammccarthy.org/jay

"The glory of God is Intelligence" - D&C 93