Matteo,

I think the problem is that you are using a general dataset for service <sparql>, but it has several ways to reach the same SDB database client instance <#sdb_busa> via the different graphs in <#sdb-part>.

It should work if you create a separate instance of rdf:type sdb:DatasetStore for each named graph (I haven't tried this). Each one is a separate JDBC connection.
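Roughly like this, using the standard SDB assembler vocabulary (untested sketch — the <#dataset*>, <#store*>, <#conn*> names and the database details are placeholders for whatever is in your config):

```turtle
# One sdb:DatasetStore (hence one JDBC connection) per named graph.
<#dataset1> rdf:type sdb:DatasetStore ;
    sdb:store <#store1> .

<#dataset2> rdf:type sdb:DatasetStore ;
    sdb:store <#store2> .

# Two stores over the same database, each with its own connection.
<#store1> rdf:type sdb:Store ;
    sdb:layout     "layout2" ;
    sdb:connection <#conn1> .

<#store2> rdf:type sdb:Store ;
    sdb:layout     "layout2" ;
    sdb:connection <#conn2> .

<#conn1> rdf:type sdb:SDBConnection ;
    sdb:sdbType "MySQL" ;
    sdb:sdbHost "localhost" ;
    sdb:sdbName "mydb" .

<#conn2> rdf:type sdb:SDBConnection ;
    sdb:sdbType "MySQL" ;
    sdb:sdbHost "localhost" ;
    sdb:sdbName "mydb" .
```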

If they are all the same instance, one query ends up starting a new transaction for each of the graphs it touches, but with JDBC you only have one transaction at a time per connection.

Do you really need the dataset structured like that?

Maybe TDB and dynamic datasets would work better for you.
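A TDB-backed dataset is a single config entry (sketch — the directory path is a placeholder), and with dynamic datasets the graphs are selected per-query with FROM / FROM NAMED rather than wired into the configuration:

```turtle
# A TDB dataset; queries can assemble their dataset dynamically
# via FROM / FROM NAMED instead of fixed named-graph entries.
<#dataset> rdf:type tdb:DatasetTDB ;
    tdb:location "/path/to/TDB" .
```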

        Andy


On 12/01/12 17:01, Matteo Busanelli (GMail) wrote:
This is my configuration file (attached).


On 12/01/2012 17:58, Andy Seaborne wrote:
On 12/01/12 09:39, Matteo Busanelli (GMail) wrote:
Hi everyone,
I'm trying to use Joseki with SDB (on MySQL 5.1) to serve queries from a
multithreaded application.
As long as the queries don't overlap, everything works well.
The problem comes when two different threads make concurrent queries that
overlap, and I get this error:

======================= log Stacktrace =====================
...

As you can see, the second REQUEST arrives before the RESPONSE to the
first one. The first ends correctly (RESPONSE /sparql 200) whereas the
second one receives a RESPONSE /sparql 500 (so I get an HTTP 500 error:
Internal Server Error:... from Joseki).

I already tried configuring the "joseki:lockingPolicy" parameter, but
whatever value I specify (joseki:lockingPolicyMutex,
joseki:lockingPolicyMRSW, joseki:lockingPolicyNone) the result doesn't
change. I also tried increasing the "joseki:poolSize" parameter, but
that didn't help either.

What can I do to make Joseki handle concurrent queries through SDB
correctly?

I'm working with:
- Joseki 3.4.4
- SDB-1.3.4
- Jena-2.6.4
- MySQL 5.1

If it may be useful I can attach the joseki config file used.

It would be useful to see it.

Andy


Thanks in advance,
Matteo.


