Hello Andy,
In reference to your email, I would like to discuss how I am planning to
implement the Cache Layer.
1. We will create a global cache store based on config params passed in
via cache.properties (an illustrative sketch of this file follows the list).
2. Users will have the option to select any of the following cache stores
as the default:
a) in-memory local cache
b) remote in-memory cache
c) remote in-memory cache with persistence on disk
3. We will update SPARQL_QUERY to read query results from the cache when
available, provided the ResultSet is still within its time-to-live.
4. If a cached entry has expired, we will run executeQuery again and
repopulate the cache with an updated time-to-live (a rough sketch of points
3 and 4 follows the list).
5. We will add tests to validate the different cache store operations.
6. I am still evaluating the changes required to accommodate different
datasets.
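Regarding point 1, this is roughly what I have in mind for cache.properties;
the key names below are illustrative only and not final:

# illustrative only - key names are not final
cache.store=local                # local | remote | remote-persistent
cache.ttl.seconds=300
cache.remote.host=localhost
cache.remote.port=11211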
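Regarding points 3 and 4, here is a rough sketch of the lookup-and-repopulate
flow I have in mind, using Jena's ResultSetFactory.copyResults to keep a
rewindable copy of the results. The class, field, and method names are
placeholders only, not the final API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.jena.query.Dataset;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFactory;
import org.apache.jena.query.ResultSetRewindable;

public class QueryResultCache {
    // One entry per query string: the materialised results plus an expiry time.
    private static class Entry {
        final ResultSetRewindable results;
        final long expiresAtMillis;
        Entry(ResultSetRewindable results, long expiresAtMillis) {
            this.results = results;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public QueryResultCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /**
     * Return cached results if still within the TTL (point 3); otherwise
     * re-run the query and repopulate the cache (point 4).
     * A real implementation would also need per-caller copies or other
     * handling for concurrent readers of the same cached ResultSet.
     */
    public ResultSet execute(String queryString, Dataset dataset) {
        long now = System.currentTimeMillis();
        Entry entry = store.get(queryString);
        if (entry != null && now < entry.expiresAtMillis) {
            entry.results.reset();   // replay the cached results from the start
            return entry.results;
        }
        // Cache miss or expired entry: execute the query and cache a rewindable copy.
        try (QueryExecution qExec = QueryExecutionFactory.create(queryString, dataset)) {
            ResultSetRewindable copy = ResultSetFactory.copyResults(qExec.execSelect());
            store.put(queryString, new Entry(copy, now + ttlMillis));
            copy.reset();
            return copy;
        }
    }
}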
Please let me know if any changes are required in this approach.
Regards
Saikat
Hi Saikat,
The plan looks good.
One suggestion is to start simple.
For example, get 2a working first, maybe with hardwired config; focus on
getting that working end-to-end, then go back and handle 2b, 2c, and 1.
By working on just one case, you'll validate the framework into which 2b
and 2c have to fit.
And it's nice to have something working as soon as possible :-)
Tests - good!
Andy