Mark,

Thanks for the extra details. I created a JIRA [1] to address the issue. It looks like, when we refactored how validation occurs, the Mock Framework stopped validating the referenced Controller Services. This is definitely something that we will want to fix before the next release.
In the meantime, though, you can certainly still validate the Controller Service explicitly via TestRunner.assertValid(ControllerService) or TestRunner.assertNotValid(ControllerService). Generally speaking, this is a good practice: it sounds like your unit tests are intended to verify that the Controller Service is invalid, and that's not happening, so in such a case I'd recommend explicitly validating the service. In any case, though, the call to TestRunner.run() should only succeed if the Processor and all referenced services are valid.

Thanks
-Mark

[1] https://issues.apache.org/jira/browse/NIFI-5368

On Jul 3, 2018, at 8:55 AM, Mark Bean <[email protected]> wrote:

This issue is not related to EL. I've tripped over that one, and modified the scope appropriately. Yes, the affected unit tests work correctly in 1.6.0. There is no stack trace associated with this.

The issue is with the scheduling/execution of customValidate(), and specifically the customValidate() of a Controller Service. When I run one of the failing unit tests in debug mode, I set a break point in the customValidate() of both the Processor and its required Controller Service. Only the Processor's customValidate() is executed; the Controller Service's customValidate() is not executed.

Here is what is going on in the unit test:

    TestRunner runner = TestRunners.newTestRunner(new CustomProcessor());
    CustomControllerService service = new CustomControllerServiceImpl();
    runner.addControllerService("cs-1", service, getMap(...));

    Map<String, String> attributes = new HashMap<>();
    attributes.put(...);
    runner.enqueue(Paths.get(...), attributes);
    runner.run();

It's possible there is a problem in the Mock framework. The customValidate() of the Controller Service does not seem to be honored, or the new asynchronous nature has its execution excessively delayed.
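The explicit validation suggested above could look roughly like the following in the unit test. This is a hypothetical sketch, not code from the thread: CustomProcessor, CustomControllerServiceImpl, and the "Example Property" name all stand in for the user's own classes and configuration, and the sketch assumes the nifi-mock and JUnit 4 dependencies.

```java
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class CustomProcessorTest {

    @Test
    public void testControllerServiceValidity() throws Exception {
        final TestRunner runner = TestRunners.newTestRunner(new CustomProcessor());

        // Hypothetical service implementation; substitute your own.
        final CustomControllerServiceImpl service = new CustomControllerServiceImpl();
        runner.addControllerService("cs-1", service);

        // Assert the expected (in)validity of the service explicitly,
        // rather than relying on TestRunner.run() to do it implicitly.
        runner.assertNotValid(service);

        // After configuring the required property (name is illustrative),
        // the service should become valid and can be enabled.
        runner.setProperty(service, "Example Property", "example-value");
        runner.assertValid(service);
        runner.enableControllerService(service);
    }
}
```

Calling assertNotValid/assertValid directly makes the test's intent explicit and keeps it working even if the implicit validation path in the Mock Framework changes.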
On Tue, Jul 3, 2018 at 3:59 AM Pierre Villard <[email protected]> wrote:

Hey Mark,

I agree with the team here and suspect it could be related to EL: the changes I made for the Expression Language scope can cause existing unit tests to fail with NiFi 1.7.0. If a property supports expression language without defining the scope (1.6.0 and below), the scope defaults to VARIABLE_REGISTRY in 1.7.0. However, if flow files are used to evaluate the expression language during a unit test, it will fail, saying that the scope is wrong: the scope should be FLOWFILE_ATTRIBUTES instead of VARIABLE_REGISTRY. This failure is on purpose, to ensure the proper scope is set on each property based on how the property is used in the processor.

Note that, if that's indeed the issue, only the unit tests are impacted by the changes; processor behavior won't change after the upgrade to 1.7.0.

Pierre

2018-07-03 0:17 GMT+02:00 Mike Thomsen <[email protected]>:

Mark,

Matt raises a very good point there. Please share the stack traces, because if they follow a general pattern of complaining "it was scoped for this, but not for that," we can probably get you up and running again very quickly.

For what it's worth, from my own testing, this is entirely a testing framework issue. The Mongo bundle was blowing up with all sorts of false errors during that time, and yet it never seemed to affect the processors once they were actually running.

On Mon, Jul 2, 2018 at 5:26 PM Matt Burgess <[email protected]> wrote:

Do your processors work against 1.6.0? I'm wondering if there's something with the new Expression Language scope stuff; I believe that can cause unit test failures even if it still works (albeit deprecated) on a live NiFi.
Regards,
Matt

On Jul 2, 2018, at 4:52 PM, Mark Payne <[email protected]> wrote:

Hey Mark,

The validation logic was changed so that, instead of performing validation on demand each time the user refreshes stats, navigates to a process group, etc., it is all done asynchronously: we now periodically perform validation in the background. So the idea is that we changed when validation is called, not how it is called. As a result, I'm not sure why you would have seen customValidate() being called previously but not anymore. It is possible that there was previously an implicit call (implicit from the perspective of the unit test, not from the perspective of the Test Runner) to validate a component, and that call got removed in some of this refactoring.

I do see in StandardProcessorTestRunner (the impl of the TestRunner interface) that when run() is called, we are still performing validation as we were previously and asserting that the processor is valid. It's possible, though, that there is another code path where we now fail to call validate(). Can you share any more information about what your unit tests are doing that previously worked but no longer does?

Thanks
-Mark

On Jul 2, 2018, at 3:59 PM, Mark Bean <[email protected]> wrote:

Several of our custom processors are not compatible with version 1.7.0. There is a growing pattern of unit test failures due to customValidate() not being executed now, where it was executed in 1.6.0 and prior. I believe there were changes made in 1.7.0 in terms of how validation (and therefore customValidate()) is performed and/or when it is performed. Could someone provide a brief description of how validation was changed in 1.7.0?

Thanks!
Mark
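For reference, the customValidate() hook discussed throughout this thread is the one inherited from AbstractConfigurableComponent. A minimal sketch of overriding it in a Controller Service follows; the two properties and the cross-property rule are invented purely for illustration, and the sketch assumes the nifi-api dependency.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.components.ValidationContext;
import org.apache.nifi.components.ValidationResult;
import org.apache.nifi.controller.AbstractControllerService;
import org.apache.nifi.processor.util.StandardValidators;

public class CustomControllerServiceImpl extends AbstractControllerService {

    // Hypothetical properties for illustration only.
    static final PropertyDescriptor PROP_A = new PropertyDescriptor.Builder()
            .name("prop-a").displayName("Property A").required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
    static final PropertyDescriptor PROP_B = new PropertyDescriptor.Builder()
            .name("prop-b").displayName("Property B").required(true)
            .addValidator(StandardValidators.NON_EMPty_VALIDATOR).build();

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return Arrays.asList(PROP_A, PROP_B);
    }

    @Override
    protected Collection<ValidationResult> customValidate(final ValidationContext context) {
        final Collection<ValidationResult> results = new ArrayList<>();

        // Invented cross-property rule: the two properties must differ.
        final String a = context.getProperty(PROP_A).getValue();
        final String b = context.getProperty(PROP_B).getValue();
        if (a != null && a.equals(b)) {
            results.add(new ValidationResult.Builder()
                    .subject("Property A")
                    .valid(false)
                    .explanation("Property A and Property B must have different values")
                    .build());
        }
        return results;
    }
}
```

Per the thread, a break point set inside this method is a quick way to confirm whether the framework actually invokes it during TestRunner.run().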
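Pierre's point about the new Expression Language scope translates to declaring the scope explicitly on each property descriptor. A sketch of what that declaration might look like for a property evaluated against flow file attributes (the property name is illustrative):

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.expression.ExpressionLanguageScope;
import org.apache.nifi.processor.util.StandardValidators;

class ExampleDescriptors {
    // Illustrative descriptor: if the property is evaluated against flow file
    // attributes, declare FLOWFILE_ATTRIBUTES explicitly. Pre-1.7.0 descriptors
    // that only set expressionLanguageSupported(true) default to
    // VARIABLE_REGISTRY in 1.7.0, which is what triggers the unit test failures
    // Pierre describes.
    static final PropertyDescriptor EXAMPLE_PROPERTY = new PropertyDescriptor.Builder()
            .name("example-property")
            .displayName("Example Property")
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
            .build();
}
```

As Pierre notes, only the unit tests enforce this strictly; a live NiFi still runs the (deprecated) unscoped form after upgrading.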
