> > there is a new extension to the syntax to add examples. Essentially
> > any line that begins with ++E will be output as Example documentation
> > to the display command. So the coerce from Tuple now reads:
> >
> >   coerce: PrimitiveArray S -> %
> >     ++ coerce(a) makes a tuple from primitive array a
> >     ++
> >     ++E t1:PrimitiveArray(Integer):= [i for i in 1..10]
> >     ++E t2:=coerce(t1)$Tuple(Integer)
>
> That seems to be common nowadays. But what if for some reason the
> output is wrong? Then the user sees that in the documentation. So,
> personally, I would rather add another tag, like ++R for example, that
> lists the explicit (expected) output.
>
> Why I think ++R is a good idea is that the ++E and ++R could be used
> for the TestSuite.
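(Under this ++R proposal, the coerce entry above might read as follows. This is a sketch, not adopted syntax: the ++R text shown is purely illustrative placeholder output; in practice it would be whatever the interpreter actually prints for each ++E command, captured verbatim.)

```spad
coerce: PrimitiveArray S -> %
  ++ coerce(a) makes a tuple from primitive array a
  ++
  ++E t1:PrimitiveArray(Integer):= [i for i in 1..10]
  ++R [1,2,3,4,5,6,7,8,9,10]
  ++E t2:=coerce(t1)$Tuple(Integer)
  ++R (1,2,3,4,5,6,7,8,9,10)
```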
An EXCELLENT idea. I had planned to sweep up all of the ++E expressions
into algebra test suites, since these explicitly test each of the
exported functions. I have already uncovered some bugs from the
documentation process. I think it is vital that we test the exposed
functions at build time, as they are the first line of defense against
breakage. Your idea makes it MUCH easier to integrate the tests into
the existing regression test framework. I'll see if I can automate this.

> You could also move some checking time to the user's machine. At the
> time the documentation is rendered, the commands in ++E are executed
> and compared internally with the ++R stuff. Since execution might take
> some time, first show the ++R output and spawn a background process
> that compares the output of ++E with ++R. If they are equal,
> everything is fine. If they differ, that is a bug. So notify the user
> with all the information (about his system) that s/he should send to
> a bugtracker.

This could potentially introduce a lot of overhead into the execution
of the )display operation command. The "map" function, for instance,
has about 80 instances, which would spawn 80 background executions. If
we could run them in parallel (since each test is independent), this
might make sense on a multicore machine. But we can certainly test at
build time.

As for bug tracking, I plan to have a bug script that the user can run
to collect information about a bug and format a "standard" report that
the user can then modify with details. I think this would be useful for
the panaxiom systems also. I have tried wrapping the breakpoint with a
"dump state" function so I can capture a backtrace in a file, but it
has not hit the trunk code. The main thing that keeps pushing it down
the list is my relative weakness in writing shell scripts. Ideally the
user just runs "bugreport" and everything gets sent to us. Axiom used
to have such a shell script when I worked at IBM. I'll see if I can
find a copy.
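The sweep of ++E expressions into test suites, combined with the ++R
proposal, could be prototyped with a small extractor. Below is a
minimal sketch in Python, under stated assumptions: the names
`extract_pairs` and `check` and the `DOC` sample are hypothetical, the
++R text in the sample is illustrative rather than verified interpreter
output, and the `run` callable is a stand-in for a real connection to
the Axiom interpreter (none is invoked here). It assumes ++R lines
immediately follow the ++E command they record.

```python
def extract_pairs(spad_doc):
    """Return (command, expected_output) pairs from ++E/++R doc lines."""
    pairs = []
    current = None
    for raw in spad_doc.splitlines():
        line = raw.strip()
        if line.startswith("++E"):               # a new input command
            current = [line[3:].strip(), []]
            pairs.append(current)
        elif line.startswith("++R") and current is not None:
            current[1].append(line[3:].strip())  # recorded output
        elif line.startswith("++"):              # plain doc text ends a pair
            current = None
    return [(cmd, "\n".join(out)) for cmd, out in pairs]

def check(pairs, run):
    """Re-run each command via `run` and report mismatches against ++R."""
    failures = []
    for cmd, expected in pairs:
        actual = run(cmd)
        if actual.strip() != expected.strip():
            failures.append((cmd, expected, actual))
    return failures

# Hypothetical sample input; the ++R text is illustrative only.
DOC = """
    coerce: PrimitiveArray S -> %
      ++ coerce(a) makes a tuple from primitive array a
      ++
      ++E t1:PrimitiveArray(Integer):= [i for i in 1..10]
      ++R [1,2,3,4,5,6,7,8,9,10]
      ++E t2:=coerce(t1)$Tuple(Integer)
"""
```

Since each (command, expected) pair is independent, a build-time
harness could hand the pairs to parallel workers, which addresses the
overhead concern for heavily documented operations like "map".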
Tim

_______________________________________________
Axiom-developer mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/axiom-developer
