Well, the bad news is that Hackystat wasn't accepted to the formal demonstrations at ICSE.
The good news is that the reviews about Hackystat were great! As far as I can tell, the only reason it wasn't accepted is that I didn't do a very good job of explaining what I would do in the demo. Oh well.
How about this for a sound bite: Hackystat "is one of the major outcomes of the metrics community"!
Cheers, Philip
------------ Forwarded Message ------------ Date: Monday, January 10, 2005 1:17 PM -0500 Subject: ICSE 2005 Research Demonstrations Notification
Dear Philip,
Thank you for your submission to ICSE-Demonstrations 2005. We regret to inform you that your submission, entitled:
"Hackystat: A Framework for Automated Collection and Analysis of Software Product and Process Measures"
has not been accepted for presentation at ICSE-Demonstrations 2005. Of 41 submissions, only 8 were accepted for publication. Because of the low acceptance rate, and the high quality of the submissions, the competition was very tight.
The review process was extremely thorough. Below we have included anonymous, verbatim copies of the reviews of your submission. We hope you find them useful in revising your paper for submission to another forum.
There are several other opportunities for you to participate in the conference---numerous workshops and tutorials---and we encourage you to take advantage of them.
Please check our website at http://www.cs.wustl.edu/icse05/Home/index.html for details.
We hope to see you at the conference in May, and we encourage you to submit a paper next year to ICSE 2006.
Best Regards,
Premkumar Devanbu and Cecilia Mascolo ICSE 2005 Demonstrations Co-Chairs *=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=
First reviewer's review:
>>> Summary of the submission <<<
Hackystat is a framework for collecting software metrics. It uses a metaphor of telemetry and sensors. Sensors are placed into various software tools, for example CVS and Ant, and they feed data back to a server. This server displays the data as collection of graphs and reports. The demo will cover the general problem, show data collected using the tool, and experiences using the tool in classrooms and on projects.
>>> Evaluation <<<
The Hackystat framework presents an elegant method for piggybacking data collection onto existing tools and infrastructure.
While this work is interesting and laudable (as evident in publications), this formal demonstration proposal is somewhat thin on detail. There needs to be much more information on the content of the demonstration and on how Hackystat works. Also, it would have been nice to have more on the usefulness of Hackystat in both the classroom and at JPL.
*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*
Second reviewer's review:
>>> Summary of the submission <<<
The paper describes Hackystat, a framework to support automatic data collection and analysis during the development process. The unique aspect of this toolset is its ability to perform transparent data collection with no or minimal impact on the developer.
Hackystat has been around for a while and many groups have used or are using it. Looking at the tool's web site and the list of publications supporting the tool's development, this clearly is a mature enterprise.
Nonetheless, the level of activity and evolution of these tools is still high, and it would be beneficial for the research community attending ICSE (and not familiar with Hackystat) to be exposed to it since it is one of the major outcomes of the metrics community.
I would like to see more details on the Hackystat architecture (there is space to add at least some details on how the framework is structured). Also, please clarify what the difference is between Hackystat and the telemetry project. I am assuming there is some analysis added when telemetry is involved, but it would be helpful to explain what type of analysis can be performed.
Minor suggestion for paper improvement: either refine the titles of sections 2 and 3 ("Applications" and "Other Applications" are just too coarse) or consider integrating both into a single section.
>>> Evaluation <<<
In favour - interesting/mature/open-source toolset coming from academia with relevant impact
Against - plenty of space to add more details about Hackystat, its architecture, ... - not clear what the added value of telemetry is and how it is different from just using Hackystat. (needs to clarify section 2)
*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*
Third reviewer's review:
>>> Summary of the submission <<<
A general purpose data gathering and visualization tool for software metrics. Retargetable to different operating environments.
>>> Evaluation <<<
This should be a good demo. The toolset is quite mature, and I think it would attract broad interest at ICSE. Several papers have already been published, and the work appears to have some traction.
It would have been good to include some details on what is involved in writing the "sensors" that pull data, e.g., from Eclipse. How hard is that? What queries are allowed on the data? How hard is it to generate graphs?
*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*
---------- End Forwarded Message ----------
