Community -  I would like to highlight this email thread and bring it back
to top of mind for an important discussion.

It is our policy, and it is written plainly in the PR acceptance criteria: we
should NOT accept a large code dump. The exception is when the change has been
well discussed and a VOTE has been taken to accept it.

I'm talking about the Cucumber test framework that was recently accepted into
the dev branch but has not been fully vetted, and now shows up in the draft
1.10 release.

There has NOT been sufficient discussion of this on the list.  If you've had
other discussions outside of this list, then I urge you to bring them back to
the list, summarize them, and explain WHY we should be accepting this.

Following a pattern long established on this list - that all major changes be
fully discussed and vetted, which we can simply call a Significant Improvement
Proposal (SIP) - you need to explain:
a) How will this improve our project?
b) How will you make it maintainable?

For my part, as release manager, I am going to hold the release until this
issue is resolved.   Unless and until we have a clear understanding of
this, and likely a VOTE, we should NOT include it as part of release 1.10.
If there is no discussion, I will revert the commits within 72 hrs.

This is now a blocker.

-- 

Now, let me give you my opinion.

Do we need new testing - better testing?  YES, the answer to that is yes.
See Aleksandar's write-up below, which mentions Cucumber.

Will this be maintainable?  How can it be made maintainable?
On this I am skeptical. The challenge with Cucumber is that it requires new
skills (e.g. Gherkin, a new language for writing test specifications) and an
awareness of how it fits into the overall scheme.  Someone has to describe how
to use it, give examples, and create a video or training document and a clear
README to make sure people are writing useful tests.  The flexibility of the
tool makes it too easy to add cruft.  You can write "tests" that are backed by
nothing, or that you think are doing one thing but aren't; the separation
between the Gherkin specification and the step code that backs it makes this
more likely than with plain unit tests.
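
To make that concern concrete, here is a rough sketch (the feature, steps and
class names are invented for illustration, not taken from the PR) of how a
Gherkin scenario is glued to Java step definitions, and how easily a scenario
can go green while the glue code checks nothing at all:

    # loan_approval.feature (invented example)
    Feature: Loan approval
      Scenario: Approve a pending loan
        Given a loan application in status "Pending"
        When the loan is approved
        Then the loan status is "Approved"

    // Cucumber-JVM glue code: every step matches, the scenario passes,
    // and yet nothing is ever asserted against a running Fineract instance.
    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    public class LoanApprovalSteps {

        @Given("a loan application in status {string}")
        public void loanApplicationInStatus(String status) { /* no setup */ }

        @When("the loan is approved")
        public void theLoanIsApproved() { /* no API call */ }

        @Then("the loan status is {string}")
        public void theLoanStatusIs(String status) { /* no assertion */ }
    }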

If that someone is willing to step forward to shepherd this through and into
the project, then maybe I can see this being added.

But, DISCUSS!! This is not my decision. It is yours.

James

On Mon, Mar 25, 2024 at 12:47 AM Arnold Galovics <arn...@apache.org> wrote:

> Hi Aleks,
>
> Looks awesome, I'll take a look at the PR.
>
> Best,
> Arnold
>
> On Mon, Mar 25, 2024 at 6:37 AM Aleksandar Vidakovic <al...@apache.org>
> wrote:
>
>> Hello everyone,
>>
>> ... as you know, our integration tests are quite essential to ensure that
>> we are not breaking anything when we add new features or fix bugs. Everyone
>> who has contributed to Fineract has probably seen the integration tests, or
>> even written one as part of their contribution. I guess you agree that
>> writing those integration tests - at least the way it's currently done - is
>> a bit challenging. I'd even say that the bigger problem is reading them.
>> Even if you are the author, it's really difficult to understand (again) the
>> purpose and intention of those tests after some time has passed (read: a
>> year).
>>
>> The reasons for those difficulties (in my opinion) are:
>>
>>    - we are writing handcrafted HTTP client code with the help of the REST
>>    Assured library to issue API requests against a running Fineract instance
>>    - we do manual JSON parsing - again: we already do a ton of manual JSON
>>    parsing in Fineract itself, and now we do pretty much the same thing in
>>    the integration tests (a condensed sketch follows this list)
>>    - there is an "air gap" between the test code on the one side and the
>>    actual business logic that gets executed on the other, which makes
>>    reading and reasoning about those tests a lot harder than it should be
>>    (you need to run a Fineract instance in debug mode, connect an IDE to
>>    the remote debug port, set breakpoints, execute the integration
>>    tests...)
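>>
>> To make the first two points concrete, here is a condensed sketch of the
>> style we write today (the endpoint, field names and credentials are
>> illustrative, not copied from a real test):
>>
>>    import io.restassured.RestAssured;
>>    import io.restassured.http.ContentType;
>>    import com.google.gson.JsonParser;
>>
>>    public class CreateClientTheCurrentWay {
>>
>>        static int createClient() {
>>            // handcrafted HTTP call against a running Fineract instance
>>            String json = RestAssured.given()
>>                    .relaxedHTTPSValidation()
>>                    .auth().preemptive().basic("mifos", "password")
>>                    .header("Fineract-Platform-TenantId", "default")
>>                    .contentType(ContentType.JSON)
>>                    .body("{\"officeId\": 1, \"firstname\": \"Jane\","
>>                            + " \"lastname\": \"Doe\", \"active\": false}")
>>                    .post("https://localhost:8443/fineract-provider/api/v1/clients")
>>                    .asString();
>>
>>            // manual JSON parsing: the test has to know the response shape
>>            // by heart, and nothing complains at compile time if it changes
>>            return JsonParser.parseString(json)
>>                    .getAsJsonObject()
>>                    .get("clientId").getAsInt();
>>        }
>>    }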
>>
>> I don't know what your preferences are, but personally I find tests a
>> valuable source of information when I try to understand a code base or a
>> certain use case... next to any available documentation. Now, we know that
>> we are a bit thin on useful documentation at the moment (I hope this
>> changes...), which means that the only other source of dense information is
>> those hard-to-read integration tests.
>>
>> This is why I'd like to propose a major improvement in that space:
>>
>>    - exclusively use the Fineract Java API client to replace the
>>    handcrafted REST Assured code
>>    - to a small extent we do this (use the Fineract client) already, but
>>    it's still not enough to improve the situation
>>    - the use of the Fineract client brings us type safety again, and we
>>    don't need to manually parse the JSON data (read: a ton of boilerplate
>>    code goes away)
>>    - on top of this we should write the test specifications in
>>    Cucumber/Gherkin (as we already do with our basic unit tests); these
>>    test specs are easy to read (even for non-developers) and would allow a
>>    bigger audience to contribute test cases (see the sketch after this
>>    list)
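>>
>> As a rough sketch of what I have in mind (the feature text, step wording
>> and the ClientStore wrapper are invented here; the real glue code would
>> call the generated Fineract Java client):
>>
>>    # client_creation.feature (illustrative only)
>>    Feature: Client creation
>>      Scenario: Create an active client
>>        When a client named "Jane Doe" is created in office 1
>>        Then the created client has status "Active"
>>
>>    // Illustrative Cucumber-JVM step definitions on top of a typed client
>>    import io.cucumber.java.en.Then;
>>    import io.cucumber.java.en.When;
>>    import static org.junit.jupiter.api.Assertions.assertEquals;
>>
>>    public class ClientCreationSteps {
>>
>>        // hypothetical thin wrapper around the generated Fineract Java
>>        // client; in the real tests this is where the typed calls live
>>        interface ClientStore {
>>            CreatedClient create(String name, long officeId);
>>        }
>>        record CreatedClient(long id, String status) {}
>>
>>        private ClientStore clients; // wired up with the real client
>>        private CreatedClient created;
>>
>>        @When("a client named {string} is created in office {int}")
>>        public void createClient(String name, int officeId) {
>>            // typed call, no handcrafted JSON, no manual parsing
>>            created = clients.create(name, officeId);
>>        }
>>
>>        @Then("the created client has status {string}")
>>        public void verifyStatus(String expected) {
>>            assertEquals(expected, created.status());
>>        }
>>    }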
>>
>> Note: the "air gap" issue is not addressed by this proposal, but I think
>> these improvements are already significant enough. I have an idea for that
>> too, but this requires more changes that are backwards incompatible with
>> the current code base.
>>
>> I've created a pull request that shows what this could look like. Please
>> have a look at https://github.com/apache/fineract/pull/3821 and let me
>> know what you think.
>>
>> Cheers,
>>
>> Aleks
>>
>
