I am reviving this thread from *July 2020* in light of the Fineract 1.x roadmapping discussion <https://lists.apache.org/[email protected]> taking place now in *Oct 2021*. My goal is to recommend that we pick up the non-partisan Performance and Scalability testing effort from here. This subject is increasingly significant to the project and to the roadmapping discussion itself, as quoted below:
"Thirdly, going back to Fineract 1.x, one of the key issues is around the scalability and performance of the platform. A number of participants mentioned that fineract 1.x is starting to be used in increasingly large-scale scenarios." >> Yes and designing an updated test is much required. I have conducted two of them in the last 1 year(one with 1.x+Mifos app and other only with 1.x APIs) on a pre production instance (since this July 2020). Would be great to add those results to the project documentation.(above release 1.5) "Alternatively, outside of the project, Community members may be able to get results and share them here. That may be the easier path to establish some baselines. We have some old data, but nothing from release 1.5." >> I am now beginning to schedule an updated "Performance & Scalability Benchmark test" on Fineract 1.x. It would be great to add findings from all three of these tests. These tests themselves speak for reinforcing stability of Fineract 1.x implementations for mid scale financial services providers. # Community members should also conduct these tests on other infra. IBM Cloud services and AWS were used primarily for the above two tests. # Performance & Scalability tests tell us a lot including exhibiting security vulnerabilities of the project. For future recommendation, a smart objective is to create replicable IT tests and Performance/Scalability Benchmark tests both on Apache Infra so that these can be quarterly reported. # Now the test documentation is released by Muellners Foundation not me, which has released some information in public domain attaching a Creative Commons License with it's OSS Usage and Delivery Policy <https://docs.muellners.info/open-source-policies/open-source-usage-and-delivery-policies> . I cannot help but cite this non for profit as this is their Intellectual Property (released with an open source license) to share here, not mine. It's like quoting a book, basic internet literacy. Thanks On Sun, Aug 2, 2020 at 3:46 PM Giorgio Zoppi <[email protected]> wrote: > Hello, > in the case of CN would be interesting see the "collateral damage" i mean > fail over under stress conditions. > The point of the stress benchmark is using something that is faster than > JVM/CLR to create requests in brief amount of time: i mean a multithread > native client written in C or Go that in case of the CN > automates all flow. We can work on the specification. > BR, > Giorgio. > > -- > Life is a chess game - Anonymous. > -- Ankit Managing Partner Muellners ApS, Denmark Impressum- Muellners® Inc; Copenhagen, Denmark CVR: 41548304; New Delhi, India CIN: U72900DL2019PTC344870; Foundation EU CVR:41008407 This mail is governed by Muellners® IT policy. The information contained in this e-mail and any accompanying documents may contain information that is confidential or otherwise protected from disclosure. If you are not the intended recipient of this message, or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message, including any attachments. Any dissemination, distribution or other use of the contents of this message by anyone other than the intended recipient is strictly prohibited. All messages sent to and from this e-mail address may be monitored as permitted by applicable law and regulations to ensure compliance with our internal policies and to protect our business. 
