This is a very important question and it really showcases the biggest
problem facing Apache Geode in my opinion.

If existing tests don't fail when you change the code, then that's a sign
of a serious problem with our tests. Most likely the test isn't actually
covering the code and all of its branches or it's missing validation. The
author of such a test may have been validating results manually (possibly by
reading System.out.printlns) or may not have understood how or what to
assert. Writing Unit Tests with TDD requires you to see the test fail before
writing or fixing the code to make the test go green. This prevents you from
ending up with tests that lack appropriate validation or don't provide real
coverage.

This also brings up Unit Test vs Integration Test (here I'm using the term
Integration Test to mean any non-Unit Test including DUnit tests). There is
a lot of confusion about why someone would want Unit Test AND Integration
Test coverage for the same code, but we do need both. I copied some of the
following from various sources, but together they spell out the differences
very nicely.

Unit Tests:
* provide highly transparent white box testing
* promote internal code quality: Maintainability, Flexibility, Portability,
Re-usability, Readability, Testability, and Understandability
* provide test coverage of a single isolated piece of code (aka a class),
covering all code branches within it including boundary conditions, failure
conditions, corner cases, and interactions with mocked dependencies (see the
sketch after this list)
* should be deterministic, isolated, idempotent, and run very fast (you
should be able to run all Unit Tests within a few seconds)
* should be Repeatable, Consistent, In Memory, Fast
* should test one single concern or use case
* should have only one reason to fail
* must NOT access the file system, communicate with a database, communicate
across the network, prevent running of other unit tests in parallel,
require anything special in the environment
* should ideally cover 100% of the code
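
To make the list above concrete, here is a minimal sketch of such a Unit
Test, assuming JUnit 4 and Mockito (GreetingService and UserRepository are
hypothetical names, not Geode classes): one concern, one reason to fail, no
file system or network, and the only dependency is a mock.

  import static org.junit.Assert.assertEquals;
  import static org.mockito.Mockito.mock;
  import static org.mockito.Mockito.when;

  import org.junit.Test;

  public class GreetingServiceTest {

    /** Hypothetical collaborator; the test replaces it with a mock. */
    public interface UserRepository {
      String findName(int userId);
    }

    /** Hypothetical class under test; its dependency is injected. */
    public static class GreetingService {
      private final UserRepository repository;

      public GreetingService(UserRepository repository) {
        this.repository = repository;
      }

      public String greet(int userId) {
        return "Hello, " + repository.findName(userId);
      }
    }

    @Test
    public void greetUsesTheNameFromTheRepository() {
      // the mocked dependency keeps the test in memory, deterministic, fast
      UserRepository repository = mock(UserRepository.class);
      when(repository.findName(42)).thenReturn("Kirk");

      // one concern, one reason to fail
      assertEquals("Hello, Kirk", new GreetingService(repository).greet(42));
    }
  }

Note that the only reason the mock can be handed in at all is that the
dependency arrives through the constructor as an interface (more on that
further down).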

Integration Tests:
* provide black box testing
* promote external code quality: Correctness, Usability, Efficiency,
Reliability, Integrity, Adaptability, Accuracy, and Robustness
* provide test coverage of real classes and components working together (see
the sketch after this list)
* cannot practically cover all code branches, boundary conditions, or
corner cases
* may access the file system, communicate with a database or communicate
across the network
* should not depend on test hooks in the product code (write a Unit Test to
exercise such failure conditions instead of writing an Integration Test
that requires a test hook)
* should realistically cover much less than 100% of the code (this can be
an ideal goal but it probably cannot be achieved)
* may depend on environment and external systems, be much slower than Unit
Tests and may test multiple things in one test case
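
For contrast, here is a minimal sketch of what I mean by an Integration Test
(the test class and region name are made up, but the CacheFactory and Region
calls are the real Geode API): real components wired together, no mocks, and
noticeably slower than a Unit Test.

  import static org.junit.Assert.assertEquals;

  import org.apache.geode.cache.Cache;
  import org.apache.geode.cache.CacheFactory;
  import org.apache.geode.cache.Region;
  import org.apache.geode.cache.RegionShortcut;
  import org.junit.After;
  import org.junit.Test;

  public class LocalRegionPutGetIntegrationTest {

    private Cache cache;

    @After
    public void tearDown() {
      if (cache != null) {
        cache.close();
      }
    }

    @Test
    public void putThenGetGoesThroughTheRealCache() {
      // a real cache and a real region -- nothing is mocked
      cache = new CacheFactory().set("mcast-port", "0").create();
      Region<String, String> region = cache
          .<String, String>createRegionFactory(RegionShortcut.LOCAL)
          .create("example");

      region.put("key", "value");

      assertEquals("value", region.get("key"));
    }
  }

Notice that this test exercises far more code than the Unit Test above, but
it cannot reasonably cover every branch or corner case of that code.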

If you read between the lines you'll see a strong correlation between
developing with high-level or Integration Tests only (i.e. low-to-no Unit
Test coverage) and code that carries high technical debt and becomes very
difficult to maintain or add to.

The best discussion about why you should have both types of tests for the
same code is in the book Growing Object-Oriented Software, Guided by Tests
[1] by Steve Freeman and Nat Pryce.

If you have code that you cannot write a Unit Test for then you have
classic untestable legacy code that you need to refactor so that it can be
properly unit tested and maintained. This means we have technical debt that
we must pay down and we should all be contributing to this effort. The best
way to fit this in and create a reasonable balance of fixing tech debt vs
new work is to adopt the discipline of ALWAYS writing a Unit Test for any
code you touch that doesn't already have coverage from a Unit Test
(regardless of higher level test coverage).

Changing a class to be testable usually involves the following (see the
sketch after this list):
* changing or adding a constructor to pass in dependencies (so you can pass
mocks in)
* changing the class to depend on interfaces instead of impls
* eliminating non-constant static variables and reducing (or eliminating)
usage of statics in general
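
As a sketch of what that refactoring ends up looking like (RegionCleaner is
a made-up example class; Cache is the real org.apache.geode.cache.Cache
interface):

  import org.apache.geode.cache.Cache;

  /**
   * Hypothetical example. Before the refactoring this class called
   * CacheFactory.getAnyInstance().getRegion(name) directly, which made it
   * impossible to hand in a mock. Now the dependency arrives through the
   * constructor as an interface, so a Unit Test can pass in a Mockito mock
   * while production code (or an Integration Test) passes in the real Cache.
   */
  public class RegionCleaner {

    private final Cache cache;

    public RegionCleaner(Cache cache) {
      this.cache = cache;
    }

    public void cleanUp(String regionName) {
      cache.getRegion(regionName).clear();
    }
  }

With the dependency injected like this, a Unit Test can simply do
mock(Cache.class) and verify the interaction without ever starting a real
cache.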

See also:
* http://wiki.c2.com/?InternalAndExternalQuality
* https://meekrosoft.wordpress.com/2010/10/31/internal-and-external-software-quality/

[1] Growing Object-Oriented Software, Guided by Tests by Steve Freeman and
Nat Pryce
<http://www.amazon.com/gp/product/0321503627?ie=UTF8&tag=meekrosoft-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0321503627>

On Fri, Dec 29, 2017 at 2:20 PM, Xiaojian Zhou <gz...@pivotal.io> wrote:

> How about the code change is already covered by existing tests?
>
> Not to reduce test coverage seems a more reasonable standard.
>
> On Fri, Dec 29, 2017 at 2:07 PM, Udo Kohlmeyer <u...@apache.org> wrote:
>
> > +1
> >
> >
> >
> > On 12/29/17 12:05, Kirk Lund wrote:
> >
> >> I think we all need to be very consistent in requiring tests with all
> >> PRs. This goes for committer as well as non-committer contributions.
> >>
> >> A test would both confirm the existence of the bug in the first place
> >> and then confirm the fix. Without such a test, any developer could come
> >> along later, modify the code in question and break it without ever
> >> realizing it. A test would protect the behavior that was fixed or
> >> introduced.
> >>
> >> Also if we are not consistent in requiring tests for all contributions,
> >> then contributors will learn to pick and choose which reviewers to
> >> listen to and which ones to ignore.
> >>
> >> I for one do not want to waste my time reviewing and requesting changes
> >> only to be ignored and have said PR be committed without the (justified)
> >> changes I've requested.
> >>
> >>
> >
>
