I think this is all pretty good in principle, but I have some concerns
about specific points, which I'll summarize below. I agree that we need
to raise our minimum quality at the release level, and even at the
individual commit level. However, I'm hesitant to add too much burden to
patch contributors, since I'd hate to discourage such involvement.
-john
Brett Porter wrote:
So, I've committed r383963 which emphasises these requirements on patch
contributions:
* Whether it works and does what is intended.
* Whether it fits the spirit of the project.
This seems like it would be hard for a user to assess; I'm not sure how a
patch contributor is supposed to handle it.
* Whether it contains tests. It is expected that any patches relating to
functionality will be accompanied by unit tests and/or integration
tests. It is strongly desired (and will be requested) for bug fixes too,
but will not be the basis for not applying it. At a bare minimum, the
change should not decrease the amount of automated test coverage.
What I wind up doing in cases where I'm applying a patch is to create tests
to go along with it if there are none. However, I'd like to suggest
another guideline, for submitting a bug report:
Steps to reproduce.
This is fairly simple in most cases, and if you cannot reliably tell
someone else how you happened on the bug, I'd say you can't be certain
you have a bug at all. If you have a failing test case you can attach to
the issue, even better, since this will often speak much louder than the
issue description for describing the bug.
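To make the "failing test case" suggestion concrete, here's a minimal sketch of the shape such an attachment could take. All the names here (the class, the version-range parsing) are invented for illustration and don't come from any actual Maven code; the point is the structure: set up the input that provokes the bug, run the operation, and assert the expected outcome so a failure is unambiguous. The stand-in code below is correct, so this sketch passes; attached to a real bug report, the same shape would fail loudly against the buggy code.

```java
// Hypothetical repro sketch -- names and behaviour are invented,
// not taken from Maven's real APIs.
public class VersionRangeReproTest {

    public static void main(String[] args) {
        // Step 1: the input that provokes the (hypothetical) bug.
        String spec = "[1.0,2.0)";

        // Step 2: the operation that misbehaves.
        String upper = parseUpperBound(spec);

        // Step 3: assert the expected outcome so the failure is obvious
        // to whoever picks up the issue.
        if (!"2.0".equals(upper)) {
            throw new AssertionError("expected upper bound 2.0 but got: " + upper);
        }
        System.out.println("OK");
    }

    // Stand-in for the code under test. Extracts the upper bound from a
    // version range spec like "[1.0,2.0)".
    static String parseUpperBound(String spec) {
        int comma = spec.indexOf(',');
        return spec.substring(comma + 1, spec.length() - 1);
    }
}
```

A test like this speaks louder than prose in the issue description, and it converts directly into a regression test when the fix is applied.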
* Whether it contains documentation. All new functionality needs to be
documented for users, even if it is only rough documentation for someone to
expand on later. While rough is acceptable, incomplete is not.
I agree that documentation is important, but making it a basis for
rejecting a patch isn't necessarily the way to go IMO. In many cases,
patches will be small enough that their documentation won't serve to
improve the lives of users. It's still a good idea to have doco in code
comments for developers to use, but if you're going to apply a patch,
you should be able to write a one- or two-line code comment describing a
particular change...otherwise, you probably shouldn't be applying the patch.
In the following, I'll discuss a technical veto. That's a -1 that means
that the issue needs to be resolved before a release can occur. It
doesn't mean someone rolls it back immediately (though a release manager
may if it blocks a branch being released). It also doesn't mean anything
negative about the committer in question, and it's important we keep it
that way so that people aren't afraid to contribute or to call people on
these things.
So my questions are, can we institute the following:
* new functionality without automated test coverage can and should be
technically vetoed. We can measure this with code coverage tools, and
the measure will be that the % does not decline. It will actually break
the build, being an automatic technical veto :)
+1000000
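For what the "automatic technical veto" could look like in practice, here's a sketch of a POM fragment using the cobertura-maven-plugin's check goal, which fails the build when coverage drops below configured rates. The plugin coordinates are from the Mojo project; the threshold numbers below are pure placeholders, and the real baseline would have to come from the current build's measured coverage.

```xml
<!-- Hypothetical sketch: fail the build when coverage falls below a
     baseline, via cobertura:check. The rates are placeholders. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <check>
      <totalLineRate>85</totalLineRate>
      <totalBranchRate>75</totalBranchRate>
    </check>
  </configuration>
</plugin>
```

Enforcing "the % does not decline" exactly (rather than a fixed floor) would likely need the thresholds ratcheted up by hand or by CI as coverage improves.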
* new functionality without documentation can and should be technically
vetoed. We can't really measure this with tools, so need to be vigilant
on it ourselves. We also need to be understanding that docs may follow
the code once it is finished, so we should use JIRA to track it (keep
the original ticket open until documented, or create a new, blocking issue).
+1. We just need to watch commit messages, and I *really* like the idea
of adding a workflow step to JIRA called something like "Awaiting
Documentation"...Tim's idea, I believe.
* new code without javadoc/code documentation can be technically vetoed.
I'm less tied to this one, though for public-facing APIs I think Javadoc
is imperative. It may be that we can use Checkstyle to enforce this to a
degree.
+1, though I'd say that Maven's componentized design means nearly any
method marked "public" or "protected" should be considered part of the
public API...either that, or we need to clearly delineate what
constitutes public vs. private APIs. For maven-artifact-* stuff, I'd
expect that none of it would be private API, since it can be reused in
other places, and since the container composes the component
relationships at runtime...
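On using Checkstyle to enforce this: a sketch of a rule set requiring Javadoc on everything down to protected scope, using Checkstyle's standard JavadocType and JavadocMethod checks. The exact scope setting is an assumption to be tuned once we decide where the public/private API line actually sits.

```xml
<!-- Sketch: require Javadoc on public and protected types/methods.
     Module names are Checkstyle's standard checks; the scope value
     is an assumption pending a decision on API boundaries. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="JavadocType">
      <property name="scope" value="protected"/>
    </module>
    <module name="JavadocMethod">
      <property name="scope" value="protected"/>
    </module>
  </module>
</module>
```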
* code that doesn't meet current metrics can and should be technically
vetoed. Again, we should set these up to fail the build, using
Checkstyle, PMD and Clirr.
I think we need to define what those metrics are in some way that we all
can reference, and *then* make sure we have the proper plugins in place
to break the build when the metrics are violated. We may want to run
these assessments in CI if they become too heavy, as the bootstrap takes
quite a while already. However, it would be nice to make them required
before code can be committed, just like it would be nice to require that
all ITs pass before committing.
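Once the metrics themselves are agreed, wiring them in might look something like the following: binding the check goals of the reporting plugins into the build lifecycle so a violation fails the build. The maven-pmd-plugin coordinates are standard; the Clirr plugin coordinates are an assumption (Mojo-hosted) and would need verifying.

```xml
<!-- Sketch: bind metric checks into the build so a violation is an
     automatic veto. Clirr plugin coordinates are an assumption. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>clirr-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

If these prove too slow for every local build, the same executions could live in a CI-only profile instead.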
Of course, I'll incorporate this into the dev process after some discussion.
Thoughts?
- Brett
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]