Arnold and all - OK, so I'm still seeking clarification and discussion
here, with more voices involved in this decision.
How do we avoid the pitfalls?  Make this maintainable?  Set expectations?

Yes, we've got poorly documented and poorly maintained code already, so
saying "this isn't worse" is not much of a recommendation.

The first rule of getting out of a hole is to stop digging.

Cucumber is behavior-driven, and it is also highly flexible. You have to
be familiar with BDD, with keeping your tests the "right size", and with
the "right approach" to the behavior you want to test.
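To make that concrete, a Gherkin spec at roughly the "right size" reads
something like this (the feature, statuses, and wording here are
illustrative, not taken from any actual suite):

```gherkin
Feature: Loan approval
  # One narrow behavior per scenario, phrased in domain language,
  # with no UI or transport details leaking into the spec.
  Scenario: Approve a pending loan application
    Given a loan application in status "Submitted and pending approval"
    When an authorized user approves the application
    Then the loan status becomes "Approved"
```

The discipline is less in the syntax than in keeping each scenario this
small and backed by a real step definition.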

Back in 2003, when this project was still called Mifos, I brought up
something called JBehave and behavior-driven development, which I had read
about.  Eventually, in 2005 or 2006, we discussed a brand-new open source
testing project called Cucumber.  My hope at the time was that these tests,
once written, could serve both as documentation for the project and as an
ongoing set of tests that would last across new versions and releases. That
didn't happen.
What did happen was that a few tests got written as brittle UI tests that
grew more and more convoluted. Eventually the effort was dropped as it
drifted further and further from the idea of testing the behavior of the
backend.  The SourceForge project files may still be findable; I don't know.


So, I am also asking: how do we avoid digging a deep hole again?

I'm actually positively inclined toward the SIP (significant improvement
proposal), but we need to do better... and I feel like this is an easy one
to get right.

If no one in our dev community is willing to shepherd this through
for consistency, however, and there aren't standards - like our code
standards - that are "enforced" upon commit, then we will end up looking
back at this decision with the same feeling I have about the first time we
tried Cucumber.   Do the examples that exist today make enough of a corpus
that the "standard is obvious", as Arnold seems to argue?   I don't know,
but I doubt it.

Please, more input.  Is there anyone else besides Arnold with experience
with this and willing to step up?


On Mon, Apr 29, 2024 at 12:35 AM Arnold Galovics <arn...@apache.org> wrote:

> Hi James,
>
> > does this replace the integration tests?  Do we do away with those?
> Eventually, yes. It's a process to decommission the integration tests. Now
> there's overlap between the int tests and the new cucumber tests. The
> overlapping scenarios shall be removed from the integration tests for sure.
>
> > And, for training up on Cucumber - are you willing to put an explainer
> together for this project here as to how it should be used specifically
> here on this project?
> Not now for sure although:
> - I think documentation is valuable if it's being maintained. I feel a lot
> of documentation we have today is stale and is causing more harm than help
> - the tests should be written according to the existing style. It's pretty
> simple to understand how it's structured (of course you have to spend like
> 30 mins to understand in case you're starting with it) and I feel like
> other (very complex) places in Fineract should deserve more attention when
> it comes to "explainers"
>
> I'm still unsure why we're trying to overdocument it when more pressing
> issues are present with the codebase.
>
> Best,
> Arnold
>
> On Sat, Apr 27, 2024 at 3:15 AM James Dailey <jamespdai...@gmail.com>
> wrote:
>
>> Arnold - does this replace the integration tests?  Do we do away with
>> those?
>>
>> And, for training up on Cucumber - are you willing to put an explainer
>> together for this project here as to how it should be used specifically
>> here on this project?
>>
>> James Dailey
>>
>>
>>
>> On Fri, Apr 26, 2024 at 10:13 AM Arnold Galovics <arn...@apache.org>
>> wrote:
>>
>>> Hi James,
>>>
>>> I'm still in support of this new testing solution. The old integration
>>> tests are very much unmaintainable, especially the ones being 2-3+ years
>>> old.
>>>
>>> To answer specifically some thoughts:
>>> > The challenge with Cucumber is that it requires new skills  (e.g. new
>>> programming language Gherkin) and an awareness of how it fits into the
>>> overall scheme.  Someone has to describe how to use it, and someone needs
>>> to give examples, create a video or training document and clear readme to
>>> make sure people are writing useful tests.
>>>
>>> If we literally reject new tech contributions (to a tech that's pretty
>>> much standard in the industry) because we think people are unable to watch
>>> a beginners guide on Youtube/etc on Cucumber, I'm afraid we have a bigger
>>> problem. We can't expect all contributions to be 120% documented. People
>>> need guidance, agreed. That's why the devlist is there.
>>> I'm reluctant to believe creating these materials will help anything.
>>>
>>> My 2 cents.
>>>
>>> Best,
>>> Arnold
>>>
>>> On Fri, Apr 26, 2024 at 7:05 PM James Dailey <jdai...@apache.org> wrote:
>>>
>>>> Community -  I would like to highlight this email thread and bring it
>>>> back to top of mind for an important discussion.
>>>>
>>>> It is our policy and written plainly in the PR acceptance criteria:  We
>>>> should NOT accept a large code dump. The exception to this is if it is well
>>>> discussed and a VOTE has been taken to accept it.
>>>>
>>>> I'm talking about the Cucumber test framework that's been accepted into
>>>> the dev branch recently but has not been fully vetted, and now shows up in
>>>> draft 1.10 release.
>>>>
>>>> There has NOT been sufficient discussion of this on list.  If you've
>>>> had other discussions outside of this list, then I urge you to bring that
>>>> back to the list and to summarize and explain WHY we should be accepting
>>>> this.
>>>>
>>>> You need to explain - following a pattern long established on list, for
>>>> all major changes to be fully discussed and vetted, which we can simply
>>>> call a Significant Improvement Proposal (SIP):
>>>> a) How will this improve our project  ?
>>>> b) How will you make it maintainable ?
>>>>
>>>> For my part, as release manager, I am going to hold the release until
>>>> this issue is resolved.   Unless and until we have a clear understanding of
>>>> this, and likely a VOTE, we should NOT include it as part of release 1.10.
>>>> If there is no discussion, I will revert the commits within 72 hrs.
>>>>
>>>> This is now a blocker.
>>>>
>>>> --
>>>>
>>>> Now, let me give you my opinion.
>>>>
>>>> Do we need new testing - better testing?   YES.  The answer to that is
>>>> yes.  See Aleksander's write up below which mentions Cucumber.
>>>>
>>>> Will this be maintainable?  How can it be made maintainable?
>>>> To this I am skeptical. The challenge with Cucumber is that it requires
>>>> new skills  (e.g. new programming language Gherkin) and an awareness of how
>>>> it fits into the overall scheme.  Someone has to describe how to use it,
>>>> and someone needs to give examples, create a video or training document and
>>>> clear readme to make sure people are writing useful tests.  The flexibility
>>>> of the tool makes it too easy to add cruft.   You can write "tests" that
>>>> are backed by nothing or that you think are doing one thing but aren't, the
>>>> separation of the framework makes this more likely than unit tests.
>>>>
>>>> If that someone is willing to step forward to Shepherd this through and
>>>> into the project, then maybe I can see this being added.
>>>>
>>>> But, DISCUSS!! This is not my decision. It is yours.
>>>>
>>>> James
>>>>
>>>> On Mon, Mar 25, 2024 at 12:47 AM Arnold Galovics <arn...@apache.org>
>>>> wrote:
>>>>
>>>>> Hi Aleks,
>>>>>
>>>>> Looks awesome, I'll take a look at the PR.
>>>>>
>>>>> Best,
>>>>> Arnold
>>>>>
>>>>> On Mon, Mar 25, 2024 at 6:37 AM Aleksandar Vidakovic <al...@apache.org>
>>>>> wrote:
>>>>>
>>>>>> Hello everyone,
>>>>>>
>>>>>> ... as you know our integration tests are quite essential to ensure
>>>>>> that we are not breaking anything when we try to add new features or fix
>>>>>> bugs. Everyone who contributed to Fineract probably saw the integration
>>>>>> tests or even wrote one as part of their contribution. I guess you agree
>>>>>> that writing those integration tests - at least the way that it's
>>>>>> currently done - is a bit challenging. I'd even say that a bigger
>>>>>> problem is to read them. Even if you are the author, it's really
>>>>>> difficult to understand (again) the purpose and intention of those
>>>>>> tests after some time has passed (read: a year).
>>>>>>
>>>>>> The reasons for those difficulties (in my opinion) are:
>>>>>>
>>>>>>    - we are writing handcrafted HTTP client code with help of the
>>>>>>    REST Assured library to issue API requests against a running Fineract
>>>>>>    instance
>>>>>>    - we do - again - manual JSON parsing... we do already a ton of
>>>>>>    manual JSON parsing in Fineract itself, and now we do pretty much
>>>>>>    the same thing again in the integration tests
>>>>>>    - there is an "air gap" between the test code on the one side and
>>>>>>    the actual business logic that gets executed which makes reading and
>>>>>>    reasoning about those tests a lot more difficult than they should
>>>>>>    be (you need to run a Fineract instance in debug mode, connect an
>>>>>>    IDE to the remote debug port, set breakpoints, execute the
>>>>>>    integration tests...)
>>>>>>
>>>>>> I don't know what your preferences are, but personally I find tests a
>>>>>> valuable source of information when I try to understand a code base or
>>>>>> certain use case... next to any available documentation. Now, we know
>>>>>> that we are a bit thin on useful documentation at the moment (I hope this
>>>>>> changes...) which would mean that the only other source of dense
>>>>>> information are those hard to read/understand integration tests.
>>>>>>
>>>>>> This is why I'd like to propose a major improvement in that space:
>>>>>>
>>>>>>    - use exclusively the Fineract Java API client to replace
>>>>>>    handcrafted REST Assured code
>>>>>>    - to a small extent we do this (use Fineract client) already, but
>>>>>>    it's still not enough to improve the situation
>>>>>>    - the use of Fineract client brings us type safety again and we
>>>>>>    don't need to manually parse the JSON data (read: reduction of a
>>>>>>    ton of boilerplate code)
>>>>>>    - on top of this we should write the test specifications in
>>>>>>    Cucumber/Gherkin (as we do already with our basic unit tests);
>>>>>>    these test specs are easy to read (even for non-developers) and
>>>>>>    would allow a bigger audience to contribute test cases
>>>>>>
>>>>>> Note: the "air gap" issue is not addressed by this proposal, but I
>>>>>> think these improvements are already significant enough. I have an
>>>>>> idea for that too, but this requires more changes that are backwards
>>>>>> incompatible
>>>>>> with the current code base.
>>>>>>
>>>>>> I've created a pull request that shows what this could look like.
>>>>>> Please have a look at https://github.com/apache/fineract/pull/3821
>>>>>> and let me know what you think.
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> Aleks
>>>>>>
>>>>>