* I would like to track issues from the automated code inspection tools (PMD, FindBugs, Checkstyle, etc.). I don't see a sensor or SDT related to this type of data. Am I missing something here?

Other than the sensors for these tools? :-) You're right, there are no sensors yet for PMD, FindBugs, or Checkstyle. As for the sensor data type, one could try to use Issue or ReviewIssue, or else come up with a new one entirely.

I would like to track these issues too, so I'm planning on working on an Ant-based Checkstyle sensor. I also plan on making a new SDT called CodeIssue. I don't think the Issue or ReviewIssue SDTs are general enough to support what I have in mind for CodeIssue. Basically, it will look almost like the Checkstyle failureType from the Ant build sensor.
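
To make that concrete, here is a rough sketch of the fields I have in mind for a CodeIssue entry, roughly mirroring what Checkstyle's failureType reports. The class and field names are just illustrative, not an actual Hackystat SDT API:

public class CodeIssue {
  // Illustrative sketch only, not a real Hackystat SDT class.
  private final String tool;      // "Checkstyle", "PMD", "FindBugs", ...
  private final String file;      // source file the issue was reported against
  private final int line;         // line number within the file
  private final String severity;  // e.g. "error", "warning", "info"
  private final String rule;      // the check or rule that fired
  private final String message;   // human-readable description

  public CodeIssue(String tool, String file, int line,
                   String severity, String rule, String message) {
    this.tool = tool;
    this.file = file;
    this.line = line;
    this.severity = severity;
    this.rule = rule;
    this.message = message;
  }
}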

For now, I guess I will hack this in a new module called hackyCodeIssue. I'm planning on just keeping this local (i.e., not in CVS) until the next release is done.

* We didn't like the coverage analysis of JBlanket -- it only looks at method-level coverage. We are using Emma since it looks at lines and logical blocks in addition to method calls. I have a Hackystat sensor that takes Emma output and sends it to Hackystat using block-level coverage detail.
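
The gist of the sensor side is just pulling the block counts out of Emma's XML report and shipping them off per source file. Here is a stripped-down sketch of that parsing step; the element and attribute names ("srcfile", type="block, %", value like "88% (582/661)") are from memory of Emma's report format, so double-check them against your own coverage.xml, and the actual send-to-Hackystat step is omitted:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class EmmaBlockCoverage {
  public static void main(String[] args) throws Exception {
    // args[0] is the path to Emma's XML report (e.g. coverage.xml).
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder().parse(new File(args[0]));
    NodeList srcfiles = doc.getElementsByTagName("srcfile");
    for (int i = 0; i < srcfiles.getLength(); i++) {
      Element srcfile = (Element) srcfiles.item(i);
      // Look only at the srcfile's direct <coverage> children.
      for (Node child = srcfile.getFirstChild(); child != null;
           child = child.getNextSibling()) {
        if (!(child instanceof Element)
            || !"coverage".equals(child.getNodeName())) {
          continue;
        }
        Element coverage = (Element) child;
        if (coverage.getAttribute("type").startsWith("block")) {
          // value looks like "88%  (582/661)": covered blocks / total blocks.
          String value = coverage.getAttribute("value");
          String counts =
              value.substring(value.indexOf('(') + 1, value.indexOf(')'));
          System.out.println(srcfile.getAttribute("name") + ": " + counts);
        }
      }
    }
  }
}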

That's great! I hadn't heard of Emma, and it seems like a neat open source alternative to JBlanket. If you would like at some point to contribute your sensor code to the project so that everyone can benefit, let me know; we'd love to have it.

Tonight, I tried out Emma on StackyHack. It seems to have worked fine for this small project. The only major difference from JBlanket is the handling of one-line methods. I'm going to hack some build scripts to try it out on some bigger projects. If it works out OK, I'm going to make an Ant-based sensor for Emma.

I'm wondering how Hackystat will handle line and block coverage. I just looked at DailyProjectCoverage, and JavaCoverageReducer refers only to methods. Unfortunately, I think the granularity is ignored in DailyProjectCoverage, so coverage of all granularities within a workspace will be aggregated together.
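
Just to illustrate what I mean, here's a toy example (not actual Hackystat code) of keeping the covered/total counts keyed by granularity instead of lumping them together:

import java.util.HashMap;
import java.util.Map;

public class CoverageTotals {
  // Toy illustration only: granularity ("method", "line", "block")
  // maps to a two-element array of { covered, total }.
  private final Map<String, int[]> totals = new HashMap<String, int[]>();

  public void add(String granularity, int covered, int total) {
    int[] counts = totals.get(granularity);
    if (counts == null) {
      counts = new int[2];
      totals.put(granularity, counts);
    }
    counts[0] += covered;
    counts[1] += total;
  }

  public double percent(String granularity) {
    int[] counts = totals.get(granularity);
    return (counts == null || counts[1] == 0)
        ? 0.0 : 100.0 * counts[0] / counts[1];
  }
}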

For now, I guess I will hack this in a new module called hackyEmma. I'm planning on just keeping this local (i.e., not in CVS) until the next release is done.

* I would like to have unit test time information so that I can graph elapsed time much like I watch test failures. Joined with unit test count, this would give an early, but not reliable, indication of negative performance impact. Does this sound like a worthy metric to chase?

Absolutely. We have an explicit "Perf" sensor data type and a sensor for our in-house load testing framework, but I agree that simple unit test time data could serve as an early warning signal for performance degradation.

One way to proceed is to create a reduction function where you can specify one or more unit tests whose elapsed time you want to monitor (the canaries). The reduction function would generate a set of telemetry streams, one per unit test, letting you see whether things are changing (positively or negatively).
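
Something along these lines is what I have in mind; the class and method names are made up, not the actual telemetry reducer API. Given the set of canary test names and a per-day map of test name to elapsed milliseconds, it produces one stream of daily values per canary:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CanaryTestTimeReducer {
  // Made-up sketch, not the actual Hackystat reducer API.
  public Map<String, List<Long>> reduce(List<String> canaries,
                                        List<Map<String, Long>> dailyTimes) {
    Map<String, List<Long>> streams = new LinkedHashMap<String, List<Long>>();
    for (String test : canaries) {
      List<Long> stream = new ArrayList<Long>();
      for (Map<String, Long> day : dailyTimes) {
        // A null entry means the test did not run that day; keep the gap.
        stream.add(day.get(test));
      }
      streams.put(test, stream);
    }
    return streams;
  }
}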

Cedric: want to take a quick look at this and see if you could write this reduction function in a day or two? Seems like a generally useful facility.

This looks like something I need too. I can't seem to find the Jira issue associated with this improvement on HackyDev. Cedric, did this get lost in the noise?

thanks, aaron
