On 2017-12-29 14:36, ajs6f wrote:
> See comments in-line.
> 
> ajs6f
> 
>> On Dec 29, 2017, at 8:27 AM, George News <[email protected]>
>> wrote:
>> 
>> I'm also thinking of creating a queue to handle writes and
>> reads. That way I can parallelize reads and pause them to
>> launch writes based on a scheduler.
> 
> TDB already does something like this, and you are not likely to
> better it by adding more code on top. You will be better off figuring
> out why your code isn't working and correcting it.
> 
>>> If you expect to scale this application very far, you will
>>> probably want to solve your problem by introducing Fuseki and
>>> having your application make calls to it, instead of trying to
>>> manage concurrency yourself. You are already using SPARQL
>>> according to your example. Then you can use SPARQL keywords to
>>> pick out the named graphs together over which you want to run
>>> your query:
>> 
>> this is not the way. I'm not parsing the SPARQL sentences. The
>> thing is that graphs are created internally on a periodic basis
>> in order to speed up response times. We have noticed that a
>> single graph holding all our data makes the system quite slow
>> and unresponsive, but dividing it into smaller subgraphs makes
>> the system quicker. We then merge graphs depending on user
>> requests (via a REST endpoint, not FROM GRAPH).
> 
> That doesn't in any way prevent you from using Fuseki or make it less
> attractive. If you are merging named graphs from a dataset in
> response to user requests, you are manually doing something that ARQ
> (and therefore Fuseki) can already do. Parsing SPARQL and adding the
> appropriate clauses might be a lot less work than what you are trying
> to do now.
> 
> In both cases, you seem to be rewriting functionality that already
> exists in Jena.
> 
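
If I understand correctly, that would mean letting SPARQL itself pick out
the graphs to merge for each request. A minimal sketch of what I think you
mean, with hypothetical graph names standing in for our periodic subgraphs:

```sparql
# FROM NAMED restricts which named graphs the GRAPH clause may match;
# the graph IRIs below are hypothetical placeholders.
SELECT ?s ?p ?o
FROM NAMED <urn:example:graph/2017-12-28>
FROM NAMED <urn:example:graph/2017-12-29>
WHERE {
  GRAPH ?g { ?s ?p ?o }
}
```

Alternatively, plain FROM clauses would merge the listed graphs into the
default graph for the query, which sounds close to what we do by hand now.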

From what I have been reading, I could set up Fuseki and turn my service
into a proxy in front of it. The issue is that when I inherited the code,
the existing implementation was not using Fuseki and my knowledge was
limited. If I don't manage to solve this issue, I will consider migrating
everything.
