[ https://issues.apache.org/jira/browse/COLLECTIONS-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13015165#comment-13015165 ]
Sai Zhang commented on COLLECTIONS-374:
---------------------------------------

Thank you very much, Sebb, for all your good suggestions! We really appreciate your response.

- Thanks. We should implement this feature.
- Thanks. We should improve the readability of the class name.

Actually, everything you mentioned above reflects the fact that automatically generated tests, though they can reveal previously unknown bugs, are hard to interpret. From the viewpoint of developing new fully automatic testing techniques, this is an inherent problem: to reveal bugs, the generated tests need to be behaviorally diverse (e.g., covering as many program states as possible). Therefore, our tool uses several heuristic and randomized algorithms to achieve this, since an exhaustive program-state search is infeasible given the huge space of possible method invocations.

The comments the tool generates aim to alleviate (we cannot say solve) the above problem of poor readability. As you may have found, a generated test is long and often has many unused variables. Even developers who are already familiar with the code cannot easily decide which part of the test they should inspect. The comments provide an alternative way to "correct" a failed test, which we hope gives additional debugging clues.

We added this "comment" feature based on our own (limited) experience: when given a long, hard-to-read failing test, a common way for programmers to start debugging is to make some minimal edit that turns the failing test into a passing one, and then observe the difference between the failing and passing executions.

We agree that the tool itself is still far from perfect (due to the randomized algorithms it uses). Compared with the long test without comments, do you think the test with comments gives useful debugging clues and helps guide programmers to inspect the right place more efficiently? (We know an automatically generated test is still much worse than a human-written one.)

Thanks a lot.
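To make the idea concrete, here is a hypothetical sketch of what such a tool-generated "documented" test could look like. The class name, variable names, and the scenario (an unmodifiable list, not the actual BeanMap/TransformedBuffer case from the attachment) are invented for illustration only:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of a tool-generated "documented" test.
// The scenario is invented for illustration; it is NOT the test
// from the ApacheCommons_Documented_Test.java attachment.
public class DocumentedTestSketch {

    // Mimics a generated test: machine-named variables, one failing
    // statement, plus an inferred comment suggesting a minimal edit
    // that would make the test pass.
    static void testAddToList() {
        List<String> var0 = new ArrayList<String>();
        var0.add("a");
        List<String> var1 = Collections.unmodifiableList(var0);
        // Tests pass if var1 is replaced by var0 on the next line
        var1.add("b"); // fails: UnsupportedOperationException
    }

    public static void main(String[] args) {
        try {
            testAddToList();
            System.out.println("test passed");
        } catch (UnsupportedOperationException e) {
            System.out.println("test failed: " + e.getClass().getSimpleName());
        }
    }
}
```

The point of the comment is that it names the smallest edit (swap `var1` for `var0`) that separates the failing execution from a passing one, giving the developer a concrete place to start the diff.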
-Sai

> Bug in class#BeanMap and TransformedBuffer with reproducible JUnit test
> -----------------------------------------------------------------------
>
>                 Key: COLLECTIONS-374
>                 URL: https://issues.apache.org/jira/browse/COLLECTIONS-374
>             Project: Commons Collections
>          Issue Type: Bug
>    Affects Versions: 3.2
>         Environment: jdk 1.6.0
>            Reporter: Sai Zhang
>         Attachments: ApacheCommons_Documented_Test.java
>
> Hi all:
> (As in the previous post,) I am writing an automated bug-finding tool, and I am using Apache Commons Collections as an experimental subject for evaluation.
> The tool creates executable JUnit tests as well as explanatory code comments. I attached one bug-revealing test. Could you please kindly check it to see whether it is a real bug?
> Also, it would be tremendously helpful if you could give some feedback and suggestions on the generated code comments. From the perspective of developers who are relatively familiar with the code: is the automatically inferred comment useful for understanding the generated test? Is the comment helpful in bug fixing? Your suggestions will help us improve the tool.
> Please see the attachment for the failing test. The comment appears in the form:
> // Tests pass if ... (it describes some small change to the test that makes the failing test pass)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira