Hi Nidhi,

Apologies for the late reply. This is more of an architectural discussion,
and I can't speak on behalf of the Fineract project. That said, I want
to clarify something important: *Apache Fineract can integrate with Kafka*;
it just isn't enabled by default. According to the official docs, Kafka
support must be explicitly configured via properties or environment
variables (for example, setting FINERACT_EXTERNAL_EVENTS_KAFKA_ENABLED to
true and pointing Fineract at your Kafka bootstrap servers) before
Fineract will publish external events to Kafka topics.
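As a rough sketch, that configuration could look like the following. Only the enable flag is taken from the docs cited above; the other variable names here are illustrative placeholders, so please check the external-events section of the Fineract configuration reference for the exact names:

```shell
# Enable the Kafka external-event producer (flag name per the docs above).
export FINERACT_EXTERNAL_EVENTS_KAFKA_ENABLED=true

# Illustrative only -- verify the exact variable names for the broker list
# and topic against the Fineract configuration reference.
export FINERACT_EXTERNAL_EVENTS_KAFKA_BOOTSTRAP_SERVERS=kafka-1:9092,kafka-2:9092
export FINERACT_EXTERNAL_EVENTS_KAFKA_TOPIC_NAME=fineract-external-events
```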

In many production systems — including ones I’ve worked on — rather than
modifying the core application, we leverage this eventing capability and
build a separate consumer service. For example, if I were adding an ETL
service:

   1. *Fineract publishes events to Kafka* (configured via environment
   variables such as FINERACT_EXTERNAL_EVENTS_KAFKA_ENABLED=true, the
   Kafka bootstrap servers, topic name, partitions, etc.).

   2. I'd create a *separate service that subscribes to the Kafka topic(s)*.

   3. That service would then write to a *read-optimized database* for
   analytics or reporting.
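To make steps 2 and 3 concrete, here is a minimal sketch of the kind of transformation such a consumer service would apply to each event. The event shape and field names below are made up for illustration, not Fineract's actual external-event schema, and the Kafka poll loop is reduced to a comment so the transform logic stands on its own:

```python
import json


def to_read_model(raw_event: bytes) -> dict:
    """Flatten a (hypothetical) transaction event into a document
    suitable for a read-optimized store such as MongoDB or Elasticsearch."""
    event = json.loads(raw_event)
    return {
        "event_id": event["id"],
        "client_id": event["payload"]["clientId"],
        "amount": event["payload"]["amount"],
        "type": event["type"],
    }


# In the real service, raw_event would come from a Kafka consumer's poll
# loop, e.g.:  for msg in consumer: db.index(to_read_model(msg.value))
sample = json.dumps({
    "id": 42,
    "type": "TransactionPostedEvent",  # illustrative event type
    "payload": {"clientId": 7, "amount": 250.0},
}).encode()

print(to_read_model(sample))
```

The point of keeping the transform pure (bytes in, document out) is that it can be unit-tested without a broker, and the same function can later be reused for backfills from an event archive.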

This keeps your read‑heavy or analytical workloads *decoupled* from the
core banking services, which must prioritize *transaction integrity*.

A clear architectural flow looks like this:

*Fineract → Kafka → new service to process transactions → read‑optimized DB*
*(e.g., MongoDB, Elasticsearch, or another Apache Lucene‑based search engine)*

By doing this, you leave the core Fineract system untouched, gain
scalability for event consumers, and ensure that analytics/ETL work
doesn’t interfere with transactional performance.

Best regards,

Aman Mittal

On Wed, Feb 11, 2026 at 11:07 PM Nidhi Bhawari <[email protected]>
wrote:

> Hi Aman,
>
> Thank you so much for your feedback and for raising those points. I really
> appreciate the focus on scalability, it’s a perspective I'm still learning
> to navigate in a system as large as Fineract.
>
> I completely agree that for heavy analytical data or long-term auditing,
> an ETL or background-process approach is much more robust. My initial
> thought process for a real-time API was primarily centered on the
> "operational" side specifically for mobile dashboards where a user might
> expect to see their standing update immediately after a transaction.
>
> I just wanted to share this point of view as a potential use case for the
> community to consider. I am not at all fixed on this specific
> implementation; I’m mostly curious about how we, as a community, prefer to
> handle the balance between real-time data needs and system performance.
>
> If the community feels that moving toward an ETL-based or pre-computed
> pattern is the better path for Fineract’s long-term health, I would be more
> than happy to pivot and explore how to implement this using the project's
> preferred architectural patterns.
>
> I’m looking forward to hearing more of your thoughts and learning from the
> community's experience on this!
>
> Best regards,
> Nidhi Bhawari
>
