Hi,

I just want to confirm whether the pattern I'm thinking of following suits our 
requirement, and also to clear up some doubts I have.


The requirement we have is to use Ignite's caching service to store values from 
a database, to speed up API requests for that data. The application is a Scala 
Play application running on a single machine.


I've fiddled around with Ignite for the last couple of days and would like some 
input on the pattern I'm thinking of using, plus info on some problems I've 
faced. I think the questions are quite basic in nature.


1. The pattern I think will work is: first, override the onStart method 
provided by Play applications and start a node (I don't see the need for a 
client node here) which establishes the cache configuration and creates an 
initially empty cache. This node keeps running as long as the application is 
not shut down.

2. Each subsequent API interaction with this application will create a new node 
that references the cache created in the step above to check key/value pairs, 
and insert new ones if needed. At the end of the API call, the node is closed 
with ignite.close().

3. Essentially, there is only one long-lived node running on the machine, and 
each API request spawns a new one and closes it at the end of the request.

4. I haven't seen the need to fiddle with the XML configuration. The defaults 
should work fine for this requirement.
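To make step 1 concrete, here is roughly what I have in mind (just a sketch, assuming ignite-core is on the classpath; the cache name "userCache" and the instance name are placeholders):

```scala
import org.apache.ignite.Ignition
import org.apache.ignite.configuration.{CacheConfiguration, IgniteConfiguration}

object IgniteBootstrap {
  /** Called once from Play's onStart; the node lives until shutdown. */
  def start(): Unit = {
    // Cache configuration for an initially empty cache.
    val cacheCfg = new CacheConfiguration[String, String]("userCache")

    val cfg = new IgniteConfiguration()
    cfg.setIgniteInstanceName("app-server-node") // placeholder name
    cfg.setCacheConfiguration(cacheCfg)

    // Start the long-lived server node; API calls populate the cache later.
    Ignition.start(cfg)
  }
}
```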


Does this pattern look OK? Are there any further advanced customizations you 
might suggest?


Two further questions I have -
1. I have also used the REST APIs exposed by each node to access the 
distributed cache. It appears that instead of spawning new nodes on each API 
request and then closing them, I could use just the REST API to fetch/put 
values in the cache. Would that be a correct use of the REST APIs, or have they 
been designed with some other use case in mind?
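For reference, the REST calls I've been testing are plain HTTP GETs of this shape (a sketch; the host/port and cache name are placeholders, and the node needs the REST HTTP module on the classpath):

```scala
object IgniteRest {
  /** Builds an Ignite REST URL for a command such as "get" or "put". */
  def restUrl(base: String, cmd: String, cacheName: String,
              params: Map[String, String]): String = {
    val extra = params.map { case (k, v) => s"&$k=$v" }.mkString
    s"$base/ignite?cmd=$cmd&cacheName=$cacheName$extra"
  }
}

// Example (issued against a running node with REST enabled):
// a put is GET /ignite?cmd=put&cacheName=...&key=...&val=...
val putUrl = IgniteRest.restUrl("http://127.0.0.1:8080", "put", "userCache",
  Map("key" -> "k1", "val" -> "v1"))
```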

2. I have had some problems with multicast discovery: nodes that I start from 
main methods or from shell scripts have no problem discovering each other and 
accessing the distributed cache, but when the same code runs inside the 
application, exceptions are raised stating that the socket is closed. Do I need 
to override the TcpDiscoveryMulticastIpFinder address list programmatically for 
each server restart?
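In case it's relevant, the workaround I've been considering is to drop multicast and use a static IP finder instead, since everything runs on one machine (a sketch; the address and port range are placeholders based on the default discovery ports):

```scala
import java.util.Collections
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

object DiscoveryConfig {
  // Static IP finder avoids multicast entirely.
  val ipFinder = new TcpDiscoveryVmIpFinder()
  ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"))

  val spi = new TcpDiscoverySpi()
  spi.setIpFinder(ipFinder)

  val cfg = new IgniteConfiguration()
  cfg.setDiscoverySpi(spi)
}
```

Would that be the recommended fix here, or is the multicast problem something else entirely?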


Thanks for your time.


Regards,

Avi
