*Objective:* To scale an existing application where PostgreSQL is used as the
data store.

*How can Apache Ignite help:* We have an application with many modules, and
all of the modules use some shared tables. So we have only one PostgreSQL
master database, and it is already running on large AWS SSD machines. We
already have Redis for caching, but as we know, the limitation of Redis is
that it does not easily support partial updates or querying on secondary
indexes.

*Our use case:* We have two big tables: one is member and the second is
subscription. It is a many-to-many relation, where one member is subscribed
to multiple groups, and we maintain the subscriptions in the subscription
table. The member table has around 40 million rows, and its size is roughly
40M x 1.5KB + more ~= 60GB.
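
For context, this is roughly how those two tables look in our SQLAlchemy
models (the column names here are simplified placeholders, not our real
schema):

# Simplified sketch of the two tables; column names are illustrative only.
from sqlalchemy import Column, BigInteger, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Member(Base):
    __tablename__ = 'member'
    id = Column(BigInteger, primary_key=True)
    name = Column(String)
    email = Column(String)
    # ... plus further attributes, roughly 1.5 KB per row in total

class Subscription(Base):
    __tablename__ = 'subscription'
    id = Column(BigInteger, primary_key=True)
    member_id = Column(BigInteger, ForeignKey('member.id'), index=True)
    group_id = Column(BigInteger, index=True)  # one member -> many groups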

*Challenge*

The challenge is that we cannot archive this data, since every member is
active and there are frequent updates and reads on this table.

*My thought:*

As far as I have read in the documentation, Apache Ignite can help by
providing a caching layer on top of the PostgreSQL tables.
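
To make that idea concrete, here is a rough cache-aside sketch of how I
imagine our Python code talking to Ignite through the pyignite thin client.
The host, cache name, and the Member model are placeholders, and I have not
tried this against a real cluster:

# Cache-aside sketch: pyignite thin client in front of PostgreSQL/SQLAlchemy.
from pyignite import Client

ignite = Client()
ignite.connect('ignite-host', 10800)                  # default thin-client port
member_cache = ignite.get_or_create_cache('member_cache')

def get_member(session, member_id):
    member = member_cache.get(member_id)
    if member is None:
        row = session.query(Member).get(member_id)    # fall back to PostgreSQL
        if row is None:
            return None
        member = {'id': row.id, 'name': row.name, 'email': row.email}
        member_cache.put(member_id, member)           # populate Ignite for next time
    return member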

Now, I have a couple of questions from an implementation point of view:

1. Will Apache Ignite fit our use case? If yes, then:
2. Will Apache Ignite keep all 60 GB of data in RAM, or can we distribute
the RAM load across multiple machines?
3. When updating a PostgreSQL table, we use Python and SQLAlchemy (ORM).
Will there have to be a separate call to Apache Ignite to update the same
record in memory, or is there any way Apache Ignite can sync it immediately
from the database?
4. Is there enough support for Python?
5. Is there REST API support for interacting with Apache Ignite? I would
like to avoid an ODBC connection. (See the sketch after this list.)
6. What if this load doubles over the next year?

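Regarding questions 4 and 5, this is roughly how I imagine talking to Ignite
from Python over its HTTP REST API. The host and cache name are placeholders,
and I assume the ignite-rest-http module would need to be enabled on the
server side; I have not verified this against a running node:

# Sketch of calling Ignite's HTTP REST API from Python (no ODBC needed).
import requests

IGNITE_REST = 'http://ignite-host:8080/ignite'

def rest_put(cache, key, value):
    r = requests.get(IGNITE_REST, params={'cmd': 'put', 'cacheName': cache,
                                          'key': key, 'val': value})
    return r.json()

def rest_get(cache, key):
    r = requests.get(IGNITE_REST, params={'cmd': 'get', 'cacheName': cache,
                                          'key': key})
    return r.json().get('response')

rest_put('member_cache', '42', 'some-member-json')
print(rest_get('member_cache', '42'))
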
A quick answer would be much appreciated, and thanks in advance.
Please help.

-- 
Regards,
*Harsh Mishra* | Solution Architect
Quovantis Technologies
Mobile: +91-9958311308
Skype ID: harshmishra1984
www.quovantis.com