Hi Mates,

To improve performance on Fineract 1.x and Fineract CN, we have made the following changes:

1. Instances dedicated only to running the scheduler jobs, not connected to the user interface.

2. Changed the DB Pool to HikariCP.

3. We use dedicated DB Servers (Galera Cluster)

4. We are changing the Quartz store implementation from RAMStore to DBStore to improve job clustering.

For all of the above we use the official MySQL driver.
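For reference, the Quartz change in point 4 essentially amounts to switching the job store in quartz.properties from the in-memory store to the JDBC store and enabling clustering. A sketch only; the table prefix and data source name are illustrative and depend on your schema and configuration:

```properties
# Switch Quartz from RAMJobStore to the JDBC-backed job store
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
# Illustrative names - adjust to your own schema and data source
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.dataSource = fineractDS
# Enable clustering so multiple dedicated job-runner instances coordinate
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.scheduler.instanceId = AUTO
```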

Regards

Victor
On October 21, 2019 at 03:07 PM, Ed Cable <[email protected]> wrote:

Nayan,

Thanks for assisting Joseph with those suggestions, and Joseph, thanks for sharing the initial results of the improvements after indexing. I'm happy you emailed about this: performance testing and improving performance in high-scalability environments is a topic of wide importance, both for many members of the current ecosystem and for a number of prospective new adopters evaluating the stack.

As is evident from your searching and the input you have received thus far on this thread, what's publicly available on performance statistics and fine-tuning is limited. Yet many of our implementers have run Mifos/Fineract in high-load environments and have a lot of wisdom and experience to share.

@Avik Ganguly, based on your experience, are you able to give some additional suggestions on top of what Nayan has already provided?

I will start a separate email thread to kick off a collaborative effort across the community: first, to put in place a set of reproducible tools for ongoing performance testing of Fineract and Fineract CN; second, to carry out these performance and load-testing exercises on a regular basis; and third, to form a group of contributors focused on identifying and fixing issues to improve performance.

We welcome your contributions to those efforts.

Ed



On Mon, Oct 21, 2019 at 7:14 AM Joseph Cabral <[email protected]> wrote:

Hi everyone,

I would like to share the initial results of our testing and get your feedback.

We modified the code for the post interest to savings scheduler job, specifically the findByStatus method in SavingsRepositoryWrapper, where it queries the active savings accounts. We noticed that the current design depended on lazy-load fetching to populate the transactions and charges lists of each savings account. From previous experience with other systems this has been a cause of various slowdowns, so we focused on modifying this part. We decided to query the savings transactions and charges in bulk to reduce the number of database calls. See below for our implementation.

We also removed the CascadeType.ALL and FetchType.LAZY settings for the transactions and charges lists of the SavingsAccount entity, as we are already fetching their contents manually. We will do further testing, as this may have an impact on other modules.


@Transactional(readOnly = true)
    public Page<SavingsAccount> findByStatus(Integer status, Pageable pageable) {
        logger.info("findByStatus - Start querying savings accounts");
        Page<SavingsAccount> accounts = this.repository.findByStatus(status, pageable);
        List<Long> idList = new ArrayList<>();
        Map<Long, SavingsAccount> accountsMap = new HashMap<>();
        if (accounts != null) {
            for (SavingsAccount account : accounts) {
                // Reset the collections; they are repopulated by the bulk queries below.
                account.setCharges(new HashSet<>());
                account.setTransactions(new ArrayList<>());

                idList.add(account.getId());
                accountsMap.put(account.getId(), account);
            }
            // Fetch all transactions for this page of accounts in a single bulk query.
            List<SavingsAccountTransaction> savingsAccountTransactionList = savingsAccountTransactionRepository.findBySavingsAccountIdList(idList);
            if (savingsAccountTransactionList != null) {
                for (SavingsAccountTransaction transaction : savingsAccountTransactionList) {
                    SavingsAccount account = accountsMap.get(transaction.getSavingsAccount().getId());
                    account.getTransactions().add(transaction);
                }
            }

            // Fetch all charges for this page of accounts in a single bulk query.
            Set<SavingsAccountCharge> savingsAccountChargeList = savingsAccountChargeRepository.findBySavingsAccountIdList(idList);
            if (savingsAccountChargeList != null) {
                for (SavingsAccountCharge charge : savingsAccountChargeList) {
                    SavingsAccount account = accountsMap.get(charge.savingsAccount().getId());
                    account.getCharges().add(charge);
                }
            }
        }
        logger.info("findByStatus - Finished querying savings accounts");
//        loadLazyCollections(accounts);
        return accounts;
    }


With these modifications we were able to reduce the post interest to savings scheduler job's run time from 15-20 hours to 6 hours. Next we will look at how to reduce the run time of the saving/updating part.
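For the saving/updating part, one common approach is to persist the accounts in fixed-size chunks rather than one at a time, combined with JDBC batching (e.g. Hibernate's hibernate.jdbc.batch_size). The following is only a generic sketch, not the Fineract implementation; the repository and entity-manager calls mentioned in the comment are assumptions about how it would be wired in. The chunking itself is plain Java:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSaver {

    /** Splits a list into consecutive chunks of at most chunkSize elements (chunkSize must be > 0). */
    public static <T> List<List<T>> partition(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(items.subList(i, Math.min(i + chunkSize, items.size())));
        }
        return chunks;
    }

    // For each chunk one would then call something like repository.saveAll(chunk)
    // followed by entityManager.flush() and entityManager.clear() to keep the
    // persistence context small (these names are illustrative, not actual Fineract APIs).
}
```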

Does anyone have alternative solutions, or any feedback on how we implemented this?

Regards,

Joseph

On Sun, Oct 20, 2019 at 11:02 PM Joseph Cabral <[email protected]> wrote:
Hi Michael,

Of course, we will give feedback if we are able to make improvements to our scheduler job run time.

But if anyone else has experience with load testing or running Fineract in high-load environments, I am open to suggestions.

Thanks!

Joseph

On Sun, Oct 20, 2019 at 7:32 PM Michael Vorburger <[email protected]> wrote:
Joseph,

On Sun, 20 Oct 2019, 00:28 Joseph Cabral <[email protected]> wrote:
Hi Nayan,

Thank you for the tips! I asked for advice here first because we are wary of making code changes; we had assumed that Fineract was already used in many high-load production situations, and that we had simply set it up wrong or that there were settings we could change.

We will first try adding some indexes to the database, though it looks like we will have to make some code changes for this as well.

Will you be contributing any performance related improvements you make back to the community?

Thanks again!

Joseph

On Sun, Oct 20, 2019 at 2:21 AM Nayan Ambali <[email protected]> wrote:
Joseph,

I have previously done Mifos platform load testing for the community. Based on my experience, below are my recommendations.

Without code changes:
1. Use SSD/high-IOPS storage for the database.
2. If you are on AWS, go for Aurora instead of MySQL.
3. Look at the database usage for this batch job and see if there is an opportunity to index some columns for better performance.

With code changes:
1. Process the data in parallel, either with multi-threading on a single node or across multiple nodes.
2. Query optimisation.
3. Tune the data fetch and commit batch sizes.
There are many other opportunities to improve performance as well.
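As an illustration of the first code-change recommendation, parallel processing on a single node, here is a stdlib-only Java sketch that fans account IDs out over a fixed thread pool. The processAccount body is a placeholder for the real interest-posting logic, and in practice each worker would also need its own transaction boundary:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ParallelPoster {

    static final AtomicLong processed = new AtomicLong();

    // Placeholder for the real per-account interest-posting work.
    static void processAccount(long accountId) {
        processed.incrementAndGet();
    }

    /** Submits every account ID to a fixed-size pool and waits for all work to finish. */
    public static void postInterest(List<Long> accountIds, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (long id : accountIds) {
            pool.submit(() -> processAccount(id));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The same partitioning idea extends to multiple nodes by assigning each node a disjoint range of account IDs.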

-
at your service

Nayan Ambali
+91 9591996042
skype: nayangambali


On Sat, Oct 19, 2019 at 2:20 PM Joseph Cabral <[email protected]> wrote:
Hi Everyone,

I would like to ask whether anyone has done load testing of the MifosX/Fineract scheduler jobs, specifically the post interest to savings job.

We created a new Savings Product (ADB Monthly - 10% Interest) with the following settings:
Nominal interest rate (annual): 10%
Balance required for interest calculation: $1,000.00
Interest compounding period: Monthly
Interest posting period: Monthly
Interest calculated using: Average Daily Balance

We populated the m_savings_account table with 1.2M new savings accounts using the savings product above, and then deposited an initial balance of $10,000 into each. We then edited the post interest to savings job to post the interest even though it was not yet the end of the month.

On consecutive tests, we averaged around 15 to 20 hours to complete the job.

We are using a 4-vCPU, 128 GB server for the load testing. We deployed MifosX/Fineract in a Docker container with Tomcat 7. The MySQL database is deployed in a separate Docker container on the same machine.

Any tips or ideas on what we can do to improve the run time of the job? Our target for 1.2M savings accounts is less than 1 hour.

Regards,

Joseph


--
Ed Cable
President/CEO, Mifos Initiative
[email protected] | Skype: edcable | Mobile: +1.484.477.8649

Collectively Creating a World of 3 Billion Maries | http://mifos.org  
