adamsaghy commented on code in PR #5357:
URL: https://github.com/apache/fineract/pull/5357#discussion_r2720958965
##########
fineract-savings/src/main/java/org/apache/fineract/portfolio/savings/service/SavingsSchedularInterestPoster.java:
##########
@@ -63,17 +63,25 @@ public class SavingsSchedularInterestPoster {
private Collection<SavingsAccountData> savingAccounts;
private boolean backdatedTxnsAllowedTill;
- @Transactional(isolation = Isolation.READ_UNCOMMITTED, rollbackFor = Exception.class)
+ @Transactional(isolation = Isolation.SERIALIZABLE, rollbackFor = Exception.class)
Review Comment:
While SERIALIZABLE might well be better here than READ_UNCOMMITTED, I don't
think this change addresses the underlying issue properly.
**I would rather recommend a different approach:**
- Is it possible to ensure that no two `SavingsSchedularInterestPoster`
instances receive the very same savings account? If we can guarantee that no
two parallel `postInterest` executions work on the same savings account, I see
no reason why any double posting would occur, and we would not need to enforce
SERIALIZABLE isolation, which usually means strict and heavy locking of the
resource and does not help performance. (A minimal partitioning sketch follows
below.)
**Alternative solution:**
- Is it possible to introduce a constraint which ensures that no two interest
postings can exist for the very same date on the very same savings account? We
could even build a retry strategy on top of this constraint (see the sketch
after this list):
  - Read the savings account -> post interest -> if it fails, refetch the
savings account and check again whether interest still needs to be posted.
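
As a rough sketch only (the store abstraction, method names and the constraint
shown in the comment are assumptions, not the actual Fineract schema or API),
the constraint-plus-retry idea could rely on a database unique key over
(savings account id, posting date) and re-check the account when the insert is
rejected:

```java
import java.time.LocalDate;
import org.springframework.dao.DataIntegrityViolationException;

// Hypothetical sketch: let a unique constraint such as
// UNIQUE (savings_account_id, posting_date) reject duplicate postings, and
// recover by refetching instead of failing the whole batch job.
public class InterestPostingWithRetry {

    /** Minimal stand-in for the real persistence layer (assumption). */
    public interface InterestPostingStore {
        void insertInterestPosting(Long savingsAccountId, LocalDate postingDate);
        boolean interestAlreadyPosted(Long savingsAccountId, LocalDate postingDate);
    }

    private final InterestPostingStore store;

    public InterestPostingWithRetry(InterestPostingStore store) {
        this.store = store;
    }

    public void postInterestSafely(Long savingsAccountId, LocalDate postingDate) {
        try {
            // The unique key guarantees at most one posting per account and date.
            store.insertInterestPosting(savingsAccountId, postingDate);
        } catch (DataIntegrityViolationException e) {
            // Another thread won the race: refetch and verify instead of retrying blindly.
            if (!store.interestAlreadyPosted(savingsAccountId, postingDate)) {
                throw e; // the violation came from something else, surface it
            }
        }
    }
}
```

This keeps correctness in the database where it cannot be bypassed, while the
retry keeps the job resilient if two threads still pick the same account.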
It might go beyond the scope of the original story, but to be frank, this
would be the proper way to do it.
TLDR: one savings account should be processed entirely by one thread; avoid
situations where a single savings account might be processed by many threads.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]