By comprehensive, I actually just meant that the integration test should 
evaluate things more end-to-end, i.e. verify that the integration of multiple 
components is working.

I agree that you are frequently testing interfaces, but also lots of 
assumptions that are frequently unspoken (or not enforced), e.g. even simple 
things like whether `null` is a valid value for a particular argument or not, 
and whether the other components calling the method will actually deliver 
valid values.

You can't cover the entire space of combinations, but I called out naming 
because I think the name says a lot about what you think you're testing. 
`DeadLetterQueueIntegrationTest` is named to be quite specifically targeted -- 
it's testing a single config/policy/*mostly* one class 
(`DeadLetterQueueReporter`).

The suggestion re: individual test methods is interesting as well -- this is 
also what I was getting at: this style of test is quite heavyweight, and 
therefore better kept broader. If it were an integration test with fast mocks 
(e.g. even a full Connect worker, with ZK + broker mocked out), I'd be more 
inclined to have tests like `ErrorHandlingIntegrationTest` with a separate 
test method for each case. But given how expensive these tests are, why *not* 
just have a single test method in `ErrorHandlingIntegrationTest` validate all 
the error handling cases end-to-end?

I guess, at a high level, what I'm getting at is that there's a tendency to 
write integration and system tests that validate individual features which are 
frequently effectively a single class plus a couple of surrounding ones, yet 
pull in a ton of other test surface area. I think getting the balance right is 
especially hard with integration tests, but I've noticed some similar tests 
among the system tests as well. If you go that route, you end up with 
basically as many integration tests as you have unit tests, which shouldn't 
really be the case -- the thing an integration test covers should be larger 
than what a unit test covers.

wrt "testing the same thing", I think an end-to-end source connector and sink 
connector test, where we swap out components via a parameterization that 
reasonably covers the space, *is* a good set of integration tests (since it 
validates the source and sink data pipelines, integrated together, under at 
least a few different variants, e.g. different types of converters, 
transforms, etc). Some of those tests will ultimately land with the individual 
connectors as well (e.g. are the connector & upstream converters/transforms 
actually obeying assumptions of interfaces that may not be explicit yet).
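As a rough sketch of what I mean by parameterizing over variants (the `Variant` record and the converters here are toy stand-ins for Connect's real converter/transform plumbing, not its actual API):

```java
import java.util.List;
import java.util.function.Function;

public class PipelineParameterizationSketch {
    // Toy stand-in for a converter: an encode/decode pair. The variants
    // and names below are illustrative, not Connect's real Converter API.
    record Variant(String name,
                   Function<String, String> encode,
                   Function<String, String> decode) {}

    // The single end-to-end check, run once per variant: data written
    // through the "source" side must come back unchanged on the "sink" side.
    static boolean roundTrips(Variant v, String value) {
        return v.decode().apply(v.encode().apply(value)).equals(value);
    }

    public static void main(String[] args) {
        List<Variant> variants = List.of(
            new Variant("identity", s -> s, s -> s),
            new Variant("reversed",
                        s -> new StringBuilder(s).reverse().toString(),
                        s -> new StringBuilder(s).reverse().toString()));
        for (Variant v : variants) {
            System.out.println(v.name() + " round-trips: "
                               + roundTrips(v, "payload"));
        }
    }
}
```

Each variant reuses the same end-to-end assertion, so adding a converter/transform combination grows the covered space without adding a whole new test class.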

Another way to view this is that the javadoc should say *what is under test* 
and maybe even *why* we need this in addition to unit tests; currently this 
test's javadoc just says *what it will do*.
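For example, a javadoc along those lines for the DLQ test might look something like this (the wording is hypothetical, not from the actual patch):

```java
/**
 * Verifies that records failing in the converter or transform stage are
 * routed to the configured dead letter queue topic.
 *
 * <p>What is under test: the error-handling config/policy and
 * {@code DeadLetterQueueReporter}, exercised through a full worker.
 *
 * <p>Why an integration test in addition to unit tests: the unit tests
 * cover the reporter in isolation; this verifies that the surrounding
 * components actually invoke it under the documented failure conditions.
 */
```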

[ Full content available at: https://github.com/apache/kafka/pull/5516 ]
This message was relayed via gitbox.apache.org for [email protected]
