vidakovic commented on PR #3915:
URL: https://github.com/apache/fineract/pull/3915#issuecomment-2213566671

   @galovics ... ignore my last comment... my message was too long for the 
mailing list (arrrrrgh)... so, let's try here:
   
   ... counter argument: while we are at it, we might as well tackle existing tech debt instead of pushing it ahead of us and postponing the handling of it. My impression is that the current emphasis is still on adding new features and less on improving the code (and runtime) quality of Fineract. It's going to be very hard to ever catch up if we continue like that.
   
   Let's take your client search (v2) example:
   
   - fair enough, introduces types
   - gets rid of all those (manually maintained) dummy Java classes to reintroduce type information for OpenApi
   - finally IDE refactoring tools will work
   
   ... but...
   
   - does not follow the architecture "rules" of current Fineract
   - uses "domain" (which everywhere else is used for database table mapping), 
but means "data"
   - these new data structures are created in the service package... why not in 
the existing "domain" (to stay with current reasoning here) or better in the 
"data" package?
   - if we have these new JAR "modules": why not put these new data structures/classes in the fineract-core module? Instead they are buried in fineract-provider
   - not sure where the permissions are enforced... maybe in the web security config? I personally prefer that, because then you have one place to audit/check if necessary... and it gets rid of these explicit security service checks (which in turn call all those "utility" functions on AppUser, where they are really misplaced)
   - not sure if I understand why the "delegate" is needed... ok, just a detail
   - the example that you give is an easy one: read requests... Zeyad also started tackling a read request for a very small service (note), because those are simple and don't go through that complex execution path for write requests; but those write requests are actually the main drivers for losing type information, manual JSON parsing etc.
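
   To illustrate the "one place to audit" idea: a minimal sketch of what centralized permission enforcement in the web security config could look like. The endpoint paths and authority names below are invented for illustration; this is not how Fineract is wired today.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ApiSecurityConfig {

    @Bean
    SecurityFilterChain apiFilterChain(HttpSecurity http) throws Exception {
        // one auditable place for all authorization rules, instead of
        // explicit security service checks scattered across the service layer
        http.authorizeHttpRequests(auth -> auth
                .requestMatchers(HttpMethod.GET, "/v2/clients/**").hasAuthority("READ_CLIENT")
                .requestMatchers(HttpMethod.POST, "/v2/clients/**").hasAuthority("CREATE_CLIENT")
                .anyRequest().authenticated());
        return http.build();
    }
}
```

   With a config like this, auditing "who can do what" means reading one class instead of grepping the service layer for permission checks.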
   
   Having said that, please also note that:
   
   - Spring Web MVC is (obviously) a first class citizen in the current tech stack we are using (testability...); JAX-RS on the other hand not so much, and its usage is (I'd argue) responsible for a big chunk of our technical debt ("integration" tests)
   - all these changes are a proposal and intentionally done in the custom 
module folder so that no one currently working on projects needs to be 
bothered, but at the same time we generate fully working (custom) Docker images 
that can actually be used/tested
   - everything that is created in this context can be (and is by default) 
turned off, even in the custom Docker build
   - the goal is not to migrate 100% of the REST layer, but to create a proof 
of concept to show how things could look like... of course this needs to be 
discussed in the community
   - surprisingly we seem to mimic (in general) the coding style from 1-2 decades ago (including e.g. tying the business logic to the transport protocol/format of the REST API layer)... even for new features
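
   As a rough sketch of the Spring Web MVC direction (all type and endpoint names below are invented, not taken from the actual PR): the request and response shapes are plain Java types, so the compiler, IDE refactorings, the OpenAPI generator, and MockMvc-based tests all see the same contract, with no manual JSON parsing.

```java
import java.util.List;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/v2/clients")
class ClientSearchController {

    // hypothetical typed request/response; no hand-written OpenAPI dummy
    // classes needed, the types themselves carry the schema
    record ClientSearchRequest(String displayName, Long officeId) {}
    record ClientSearchResult(long id, String displayName) {}

    @PostMapping("/search")
    List<ClientSearchResult> search(@RequestBody ClientSearchRequest request) {
        // Jackson binds the typed request body; a real implementation
        // would delegate to a service instead of returning a stub
        return List.of(new ClientSearchResult(1L, request.displayName()));
    }
}
```

   This is a framework wiring fragment, not runnable on its own; the point is only that the endpoint's contract lives in the type system instead of in hand-parsed JSON.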
   
   Read requests, Spring Web MVC or not, are not really that relevant here (although I would still do it, and as said it's a lower barrier to entry for tackling the issue of type safety)... the bigger questions start with the write requests and how we can improve on those.
   
   That's why Zeyad started to map out that (write request) execution flow:
   
   
![image](https://github.com/apache/fineract/assets/380079/1357016c-3650-4b71-8e9d-1b6f671617fe)
   
   Off the top of my head I can remember a couple of things that popped out while investigating all this:
   
   - it's also not clear why we need that many command-related "wrapper" data structures... in all of them, serialized JSON in a string variable is the common denominator for the request parameters... THE main reason for the whole type safety discussion
   - it's not really clear why we need 3 components/services (blue boxes) to 
process write requests; in most cases the blue boxes are only used in one 
place...
   - validation (aka enforcement of permissions) is done in a couple of places/services/components and is mixed with business logic, which makes it very hard to reason about which rules actually apply; the enforcement also happens quite "late", in other words: this should happen in the REST API layer as early as possible (either with the web security config and/or with Spring Security AccessDecisionVoters)
   - for some reason it seems that we do "manual" (?) transaction management somewhere in that execution pipeline (blue boxes in the diagram), I don't remember exactly where that was... I'm really not sure why we have to do this ourselves and why Spring's declarative transaction management (Spring/Data) somehow isn't enough here
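
   To make the "serialized JSON in a string variable" point concrete, here is a framework-free sketch (the command name and fields are invented). The "before" style passes parameters as untyped values, so a mistyped key only fails at runtime; the "after" style uses a record, so every field is checked by the compiler and visible to IDE refactoring tools.

```java
import java.util.Map;

public class TypeSafetyDemo {

    // hypothetical typed command, standing in for a write-request payload
    record CreateClientCommand(String firstname, String lastname, long officeId) {}

    // "before": untyped parameters, the moral equivalent of carrying
    // serialized JSON around in a String and parsing it by hand
    static String displayNameUntyped(Map<String, Object> params) {
        // a typo like params.get("firstnme") would silently yield null here
        return params.get("firstname") + " " + params.get("lastname");
    }

    // "after": the compiler knows every field; typos do not compile
    static String displayNameTyped(CreateClientCommand cmd) {
        return cmd.firstname() + " " + cmd.lastname();
    }

    public static void main(String[] args) {
        CreateClientCommand cmd = new CreateClientCommand("Ada", "Lovelace", 1L);
        System.out.println(displayNameTyped(cmd));
        System.out.println(displayNameUntyped(
                Map.of("firstname", "Ada", "lastname", "Lovelace", "officeId", 1L)));
    }
}
```

   Both calls print the same result; the difference only shows up when the payload shape changes, which is exactly when the untyped variant starts failing at runtime instead of at compile time.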
   
   Whether we'll actually see PoC code for improving write requests (in the custom modules... again, not in the way of any current development efforts) is still open; documenting the current state of affairs properly would already be a great win. If we get some sort of proof of concept out of this then even better... because then we really have something that we can discuss, vs. Wiki pages where ideas usually go to die (I'm exaggerating to make a point).
   
   My 2 cents.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
