On 10/07/13 16:59, Reto Bachmann-Gmür wrote:
Could it be that Jena methods return before Jena has actually finished writing, and that the Jena built-in locks take this into account?
No - they don't do that. Iterators (obviously) resume on the next .hasNext etc., so iterator - operation - iterator is a potential problem area.
Could there be separate accesses to two graphs in the same dataset?
Can you use transactions? Then you don't need locking.
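A minimal sketch of the transaction style, assuming a TDB-backed dataset created with TDBFactory (the path and URIs below are just placeholders; with Clerezza you would have to route this through the provider):

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.tdb.TDBFactory;

    public class TdbTxnSketch {
        public static void main(String[] args) {
            // Placeholder location - any TDB directory works.
            Dataset dataset = TDBFactory.createDataset("target/tdb-demo");

            // Writer: changes become visible to new readers only after commit().
            dataset.begin(ReadWrite.WRITE);
            try {
                Model m = dataset.getDefaultModel();
                m.add(m.createResource("http://example/s"),
                      m.createProperty("http://example/p"), "o");
                dataset.commit();
            } finally {
                dataset.end();
            }

            // Reader: everything inside begin(READ)/end() sees a consistent view,
            // so iterators cannot trip the MRSW ConcurrentModificationException.
            dataset.begin(ReadWrite.READ);
            try {
                System.out.println("size = " + dataset.getDefaultModel().size());
            } finally {
                dataset.end();
            }
        }
    }

Multiple readers and one writer can then run concurrently without any explicit locking.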
Andy
Another exception I'm getting is:

java.util.ConcurrentModificationException: Reader = 1, Writer = 1
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.policyError(DatasetControlMRSW.java:157)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.policyError(DatasetControlMRSW.java:152)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.checkConcurrency(DatasetControlMRSW.java:79)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.startRead(DatasetControlMRSW.java:46)
    at com.hp.hpl.jena.tdb.nodetable.NodeTupleTableConcrete.startRead(NodeTupleTableConcrete.java:68)
    at com.hp.hpl.jena.tdb.nodetable.NodeTupleTableConcrete.findAsNodeIds(NodeTupleTableConcrete.java:139)
    at com.hp.hpl.jena.tdb.store.TripleTable.find(TripleTable.java:76)
    at com.hp.hpl.jena.tdb.store.DatasetGraphTDB.findInDftGraph(DatasetGraphTDB.java:100)
    at com.hp.hpl.jena.sparql.core.DatasetGraphBaseFind.find(DatasetGraphBaseFind.java:46)
    at com.hp.hpl.jena.tdb.store.GraphTDBBase.graphBaseFindDft(GraphTDBBase.java:114)
    at com.hp.hpl.jena.tdb.store.GraphTriplesTDB.graphBaseFind(GraphTriplesTDB.java:71)
    at com.hp.hpl.jena.graph.impl.GraphBase.find(GraphBase.java:268)
    at com.hp.hpl.jena.graph.impl.GraphBase.graphBaseFind(GraphBase.java:290)
    at com.hp.hpl.jena.graph.impl.GraphBase.find(GraphBase.java:287)
    at org.apache.clerezza.rdf.jena.storage.JenaGraphAdaptor.performFilter(JenaGraphAdaptor.java:94)
    at org.apache.clerezza.rdf.core.impl.AbstractTripleCollection.filter(AbstractTripleCollection.java:71)
    at org.apache.clerezza.rdf.core.impl.AbstractTripleCollection.contains(AbstractTripleCollection.java:65)
    at org.apache.clerezza.rdf.core.impl.util.PrivilegedTripleCollectionWrapper$4.run(PrivilegedTripleCollectionWrapper.java:88)
    at org.apache.clerezza.rdf.core.impl.util.PrivilegedTripleCollectionWrapper$4.run(PrivilegedTripleCollectionWrapper.java:84)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.clerezza.rdf.core.impl.util.PrivilegedTripleCollectionWrapper.contains(PrivilegedTripleCollectionWrapper.java:84)
    at org.apache.clerezza.rdf.core.access.LockableMGraphWrapper.contains(LockableMGraphWrapper.java:118)
    at org.apache.clerezza.rdf.jena.tdb.storage.MultiThreadedTest.perform(MultiThreadedTest.java:131)

Cheers,
Reto

On Wed, Jul 10, 2013 at 5:51 PM, Reto Bachmann-Gmür <[email protected]> wrote:

Hi Andy

Running into concurrency issues again.
I'm getting the following exception:

java.util.ConcurrentModificationException: Iterator: started at 99459, now 99460
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.policyError(DatasetControlMRSW.java:157)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.access$000(DatasetControlMRSW.java:32)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW$IteratorCheckNotConcurrent.checkCourrentModification(DatasetControlMRSW.java:110)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW$IteratorCheckNotConcurrent.next(DatasetControlMRSW.java:128)
    at org.apache.jena.atlas.iterator.Iter.count(Iter.java:478)
    at com.hp.hpl.jena.tdb.store.GraphTDBBase.graphBaseSize(GraphTDBBase.java:159)
    at com.hp.hpl.jena.graph.impl.GraphBase.size(GraphBase.java:344)
    at org.apache.clerezza.rdf.jena.storage.JenaGraphAdaptor.size(JenaGraphAdaptor.java:70)
    at org.apache.clerezza.rdf.core.impl.util.PrivilegedTripleCollectionWrapper$2.run(PrivilegedTripleCollectionWrapper.java:66)
    at org.apache.clerezza.rdf.core.impl.util.PrivilegedTripleCollectionWrapper$2.run(PrivilegedTripleCollectionWrapper.java:62)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.clerezza.rdf.core.impl.util.PrivilegedTripleCollectionWrapper.size(PrivilegedTripleCollectionWrapper.java:62)
    at org.apache.clerezza.rdf.core.access.LockableMGraphWrapper.size(LockableMGraphWrapper.java:97)
    at org.apache.clerezza.rdf.jena.tdb.storage.MultiThreadedTest.perform(MultiThreadedTest.java:129)

I've checked the Clerezza code and it seems that it ensures that no write happens while size() is executed. Also, only a single graph is used, so it cannot be the issue of the Clerezza lock not being broad enough (afaik this is an issue, but not the cause of this exception). Do you have an idea what could cause the problem?

Cheers,
Reto

On Thu, Mar 14, 2013 at 2:43 PM, Andy Seaborne <[email protected]> wrote:

On 14/03/13 09:39, Minto van der Sluis wrote:

Rupert,

Thanks for the additional explanation.

Regards,
Minto

On 14-3-2013 10:31, Rupert Westenthaler wrote:

Hi Minto

I am traveling this week and do not have time to work on this until the weekend, but I will have a look into it. Let me try to explain my concern again and make it clearer:

The Jena TDB named graphs are held in a single quad store table (SPOC - Subject Predicate Object Context). On the Clerezza side you have TripleCollections (SPO) with a name (C). What that means is that all Clerezza TripleCollections provided by the same SingleTdbDatasetTcProvider share the same SPOC table, meaning that a change to any of those TripleCollections will cause a modification in the Jena TDB backend. This means that iterators over any of the TripleCollections need to take a read lock on the SPOC table (and not only on the SPO section represented by the TripleCollection). While Clerezza allows you to build a LockableMGraphWrapper over an MGraph, this is not sufficient for the SingleTdbDatasetTcProvider, as it will only protect the SPO section and not the SPOC table used by the backend. So changes in other graphs - or the creation of a new graph - are still possible and will cause ConcurrentModificationExceptions as reported.

To solve this issue one needs to ensure that a single ReadWrite lock is used for all TripleCollections provided by the SingleTdbDatasetTcProvider, as this will allow users to lock the whole SPOC table of the backend when they perform operations on the Clerezza TripleCollections.

A TDB dataset provides a single Lock you can reuse/wrap, so all the graph locks are related when needed.
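For instance, a rough sketch of sharing that one lock (the dataset is created directly with TDBFactory here purely for illustration; in Clerezza you would reach the same lock through the provider's graphs):

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.StmtIterator;
    import com.hp.hpl.jena.shared.Lock;
    import com.hp.hpl.jena.tdb.TDBFactory;

    public class SharedDatasetLockSketch {
        public static void main(String[] args) {
            Dataset dataset = TDBFactory.createDataset("target/tdb-demo");
            Lock lock = dataset.getLock();   // one MRSW lock for the whole SPOC table

            // Any reader that iterates must hold the read lock until the iterator
            // is fully consumed, regardless of which named graph it touches.
            lock.enterCriticalSection(Lock.READ);
            try {
                Model m = dataset.getDefaultModel();
                for (StmtIterator it = m.listStatements(); it.hasNext(); ) {
                    it.next();               // consume completely inside the lock
                }
            } finally {
                lock.leaveCriticalSection();
            }

            // Writers - including creating a new named graph - take the same lock
            // in write mode, so they cannot interleave with the iteration above.
            lock.enterCriticalSection(Lock.WRITE);
            try {
                Model named = dataset.getNamedModel("http://example/newGraph");
                named.add(named.createResource("http://example/s"),
                          named.createProperty("http://example/p"), "o");
            } finally {
                lock.leaveCriticalSection();
            }
        }
    }

The key point is that it is one lock object shared by every graph in the dataset, not one lock per graph.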
The GraphTDB.getLock() is the dataset lock.

Transactions would be better. Better concurrency (concurrent writer and multiple readers).

Andy

best
Rupert

On Thu, Mar 14, 2013 at 9:50 AM, Minto van der Sluis <[email protected]> wrote:

Hi,

Half of what the two of you write is not very clear to me, probably due to being a novice when it comes to Clerezza internals. Maybe I will start with giving CLEREZZA-726 another try and then check if I still get these exceptions.

Regards,
Minto

On 13-3-2013 18:35, Reto Bachmann-Gmür wrote:

On Wed, Mar 13, 2013 at 6:04 PM, Rupert Westenthaler <[email protected]> wrote:

Hi,

I think that this is caused by the fact that if you create a LockableMGraph over MGraphs provided by the SingleTdbDatasetTcProvider, you end up in a situation where you have multiple ReadWrite locks on the same quad store (the Jena TDB dataset). This means that acquiring a write lock on one MGraph will not prohibit changes in other graphs - or the creation of new graphs. Because of that you will end up with ConcurrentModificationExceptions when using iterators over triples (such as going over SPARQL results).

True. But where is the graph locked in the first place? It should acquire a lock before iterating through the graph; does this happen?

cheers,
reto

The solution would be to
* create a single ReadWrite lock for the SingleTdbDatasetTcProvider
* replace all synchronized(dataset){..} blocks with read/write locks
* have all methods returning MGraphs return LockableMGraph instances that use the ReadWrite lock of the SingleTdbDatasetTcProvider
* users would then need to use the LockableMGraph instances provided by the provider and NOT wrap those in another LockableMGraph instance (e.g. the LockableMGraphWrapper).

best
Rupert

On Wed, Mar 13, 2013 at 5:31 PM, Minto van der Sluis <[email protected]> wrote:

Hi Folks,

I ran into an issue in both the existing SingleTdbDatasetTcProvider and my customized version (see CLEREZZA-736).

How to reproduce:
1) Have some process constantly inject new named graphs (I had a process injecting 1000 named graphs).
2) Perform a query while 1) is still running. I used the following query:
   SELECT ?graphName WHERE { GRAPH ?graphName {} } LIMIT 10 OFFSET 0
3) Repeat step 2) a number of times (since the error does not always occur).

This results in a ConcurrentModificationException (see stacktrace below). I am not sure whether this is a Clerezza or Jena issue. Anyone have an idea what is causing this? Or, more importantly, how to fix it? Should I create a Jira issue for this?

Regards,

--
ir. ing.
Minto van der Sluis
Software innovator / renovator
Xup BV

Stacktrace:

java.util.ConcurrentModificationException: Iterator: started at 7103, now 7105
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.policyError(DatasetControlMRSW.java:157)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW.access$000(DatasetControlMRSW.java:32)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW$IteratorCheckNotConcurrent.checkCourrentModification(DatasetControlMRSW.java:110)
    at com.hp.hpl.jena.tdb.sys.DatasetControlMRSW$IteratorCheckNotConcurrent.hasNext(DatasetControlMRSW.java:118)
    at org.openjena.atlas.iterator.Iter$4.hasNext(Iter.java:295)
    at com.hp.hpl.jena.tdb.store.GraphTDBBase$ProjectQuadsToTriples.hasNext(GraphTDBBase.java:173)
    at com.hp.hpl.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:76)
    at org.apache.clerezza.rdf.jena.storage.JenaGraphAdaptor$1.hasNext(JenaGraphAdaptor.java:106)
    at org.apache.clerezza.rdf.core.impl.AbstractTripleCollection$1.hasNext(AbstractTripleCollection.java:78)
    at org.apache.clerezza.rdf.core.access.LockingIterator.hasNext(LockingIterator.java:47)
    at org.apache.clerezza.rdf.jena.facade.JenaGraph$1.hasNext(JenaGraph.java:95)
    at com.hp.hpl.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:76)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterTriplePattern$TripleMapper.hasNextBinding(QueryIterTriplePattern.java:151)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:79)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterBlockTriples.hasNextBinding(QueryIterBlockTriples.java:64)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.main.iterator.QueryIterGraph$QueryIterGraphInner.hasNextBinding(QueryIterGraph.java:123)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:79)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:59)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterSlice.hasNextBinding(QueryIterSlice.java:76)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:40)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:40)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
    at com.hp.hpl.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:72)
    at org.apache.clerezza.rdf.jena.sparql.ResultSetWrapper.<init>(ResultSetWrapper.java:39)
    at org.apache.clerezza.rdf.jena.sparql.JenaSparqlEngine.execute(JenaSparqlEngine.java:68)
    at org.apache.clerezza.rdf.core.access.TcManager.executeSparqlQuery(TcManager.java:272)
    ...

--
| Rupert Westenthaler    [email protected]
| Bodenlehenstraße 11    ++43-699-11108907
| A-5500 Bischofshofen

--
ir. ing.
Minto van der Sluis
Software innovator / renovator
Xup BV
Mobiel: +31 (0) 626 014541

--
| Rupert Westenthaler    [email protected]
| Bodenlehenstraße 11    ++43-699-11108907
| A-5500 Bischofshofen
