Hi Wilfred,

I understand that you do not want to change the server specs, but which
database are you using, and is it running on a standalone server?

A few options we could consider first:
1. Move the database to a standalone server
2. Check and tweak the database settings and configuration
3. Make small changes to the job configs:
     - Disable all jobs that are unnecessary for your usage
     - Check and adjust the cron timing of each active job so that only one
job runs at a time
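
As a rough illustration of the staggering idea, overnight jobs can be given cron expressions that space them out instead of all firing at midnight. The job names below are examples and the timings are arbitrary; the exact jobs and a safe spacing depend on your installation and data volumes:

```
# Quartz-style cron: sec min hour day-of-month month day-of-week
0 0 0 * * ?     # Job A (e.g. Apply Annual Fee For Savings)  - 00:00
0 30 0 * * ?    # Job B (e.g. Post Interest For Savings)     - 00:30
0 0 1 * * ?     # Job C (e.g. Execute Standing Instruction)  - 01:00
```

The point is simply that no two heavy jobs should be scheduled to overlap, so their peak memory use is never combined.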

Tweaking the jobs to process in batches could also work, but eventually the
same problem will recur for other jobs or other actions once the volume
increases again in the future.
So I would suggest finding a permanent fix for the issue.
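
For what the batch-wise approach would look like in principle: instead of loading all standing instructions into memory at once, the job pages through them in fixed-size chunks, so memory use is bounded by the batch size rather than the total count. This is a minimal, self-contained sketch, not Fineract's actual job API; `runInBatches` and the page-fetcher contract are hypothetical names for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of batch-wise job processing: only one page of
// records is held in memory at a time, so an 8GB machine that OOMs on
// 1200 records loaded at once can still process them 250 at a time.
public class BatchedJob {

    // pageFetcher takes an offset and returns the next page of at most
    // batchSize records; an empty page signals that the work is done.
    static int runInBatches(int batchSize,
                            Function<Integer, List<String>> pageFetcher) {
        int processed = 0;
        int offset = 0;
        while (true) {
            List<String> page = pageFetcher.apply(offset);
            if (page.isEmpty()) {
                break;                    // no more records to process
            }
            for (String record : page) {
                // process one standing instruction (placeholder)
                processed++;
            }
            offset += page.size();        // previous page can now be GC'd
        }
        return processed;
    }

    public static void main(String[] args) {
        // Simulate 1200 standing instructions fetched 250 at a time.
        List<String> all = new ArrayList<>();
        for (int i = 0; i < 1200; i++) {
            all.add("instruction-" + i);
        }
        int done = runInBatches(250, offset ->
                all.subList(offset, Math.min(offset + 250, all.size())));
        System.out.println(done); // prints 1200
    }
}
```

In a real job the page fetch would be a paginated database query (LIMIT/OFFSET or keyset pagination), which is also what keeps the database-side result set small.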



Regards,
Bharath
Lead Implementation Analyst | Mifos Initiative
PMC Member | Apache Fineract
Mobile: +91.7019635592
http://mifos.org | http://facebook.com/mifos | http://www.twitter.com/mifos


On Wed, Sep 10, 2025 at 6:46 PM Kigred Developer <kigred.develo...@gmail.com>
wrote:

> Hello Devs,
>
> You find that when groups start financial operations with a small
> number of transactions (both loans and savings), the minimum machine
> specs work fine for some time. This changes as the number of
> transactions grows, and overnight jobs hit out-of-memory issues.
>
> Is it a good idea to tweak a job so that it approaches the task
> batch-wise? For example, you find that an 8GB machine will process 1000
> standing instructions without a problem but runs into memory exceptions when
> the number of instructions grows to 1200. So is it a good idea to tweak the
> job such that it handles the 1000 to 1200 instructions in batches of, say,
> 250 instructions, then another 250, etc.? With the same machine, the job
> will then handle the task without running into memory issues.
>
> Is this the way it should work? What is the other way (without changing
> server specs)?
>
> Regards.
> Wilfred
>
