On 27/06/14 18:47, Mark Feblowitz wrote:
Andy -

The configuration looks OK -

Now, to a question (and a guilty admission of a dangerous practice):

Is there a recommended safe way to copy/move/replicate a store?

My best guess is that I copied the store to my SSD drive - possibly
before quitting Fuseki. (Yes, I know: Don’t Do That!) I assume that a
snapshot of a moving store is a big risk.

Sorry - all bets are off :-)

If you can instantly (atomically) copy a database, it should work, because that's like recovering from a crash - though it's still not a good idea.

But you can't take a copy of a changing database - the copy may see parts of some files at one time point (state of the DB) and other parts of other files at a different point in time (different, later or earlier, state of the DB). That's not a consistent image of the database, it's not like a crash recovery.

There are various words to describe the state of the copy, many not suitable for a public email list.
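For reference, a minimal sketch of the safe pattern: stop Fuseki (or quiesce all writers) first, then copy. The paths here are illustrative, the `mkdir` merely stands in for a real store directory, and `tdbdump`/`tdbloader` are the standard TDB command-line tools (the guard lets the sketch run even where they aren't installed):

```shell
DB=/tmp/demo-btn              # stand-in for the real TDB store directory
BACKUP=/tmp/demo-btn-backup
mkdir -p "$DB"                # in real life this directory already exists

# Option 1: cold file copy. With the server stopped, the on-disk
# files are a consistent image and a plain copy is safe.
cp -a "$DB" "$BACKUP"

# Option 2: logical dump and reload (server also stopped).
# tdbdump writes every quad as N-Quads; tdbloader rebuilds a fresh,
# compact store from that dump.
if command -v tdbdump >/dev/null 2>&1; then
    tdbdump --loc="$DB" > /tmp/btn.nq
    tdbloader --loc=/tmp/fresh-btn /tmp/btn.nq
fi
```

Either way, the key point is that no writer may touch the store while the copy is in progress.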

What you are seeing with "RecordRangeIterator: records not strictly increasing:" is entirely possible with such a copy.

One thing that I did notice is that the size of the original store
and the size of the copy are significantly different - the copy is
much larger. Is there something I’m not understanding, wrt the store?
I didn’t see any symbolic links - are there hard links?



The files in the indexes are sparse - when they are allocated they are 8M, but the OS hasn't actually allocated 8M of real disk. If you look at a fresh, empty DB, "du -sh" reports 148K for me, but "ls -lh" shows 8M for each index file (24 of them). I gather Macs handle this differently and "du -sh" reports about 192M (the sum of the file sizes).

As space is used, real disk is allocated.

What may be happening is that the copy across filesystems is making the files non-sparse. What sizes are reported?
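The effect is easy to demonstrate. The sketch below (Linux/GNU coreutils assumed; file names are made up) creates an 8M file containing only a hole, much as TDB does for a fresh index file, then shows how a copy that doesn't preserve holes balloons the allocated size:

```shell
cd /tmp

# Create an 8M sparse file: seek past 8M and write nothing.
dd if=/dev/zero of=idx.dat bs=1 count=0 seek=8M 2>/dev/null

ls -lh idx.dat    # apparent size: 8.0M
du -k  idx.dat    # allocated size: ~0K

# A copy that does not preserve holes writes all 8M of zeroes for real:
cp --sparse=never idx.dat copy.dat
du -k copy.dat    # now ~8192K actually on disk
```

A cross-filesystem copy tool that reads the file byte-by-byte behaves like the `--sparse=never` case, which would explain the copy being much larger than the original.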

        Andy



I’m running Fuseki 1.0.1 with TDB:

# TDB
tdb:DatasetTDB  rdfs:subClassOf  ja:RDFDataset .
tdb:GraphTDB    rdfs:subClassOf  ja:Model .

## ---------------------------------------------------------------
## Service with only SPARQL query on an inference model.
## Inference model; base data in TDB.

<#service1>  rdf:type fuseki:Service ;
     fuseki:name              "km4sp" ;             # http://host/inf
     fuseki:serviceQuery      "sparql" ;          # SPARQL query service
     fuseki:serviceConstruct  "sparql" ;          # SPARQL query service
     fuseki:serviceUpdate     "update" ;
     fuseki:serviceUpload     "upload" ;
     fuseki:dataset           <#dataset> ;
     .

<#dataset> rdf:type       ja:RDFDataset ;
     ja:defaultGraph       <#model_inf> ;
      .

<#model_inf> a ja:InfModel ;
      ja:baseModel <#tdbGraph> ;
      ja:reasoner [
          ja:reasonerURL <http://jena.hpl.hp.com/2003/RDFSExptRuleReasoner>
      ] .

<#tdbDataset> rdf:type tdb:DatasetTDB ;
    tdb:location "/sp/km/demo/btn" ;
     .

<#tdbGraph> rdf:type tdb:GraphTDB ;
     tdb:dataset <#tdbDataset> .


Now, to a question (and a guilty admission of a dangerous practice):

Is there a recommended safe way to copy/move/replicate a store?

My best guess is that I copied the store to my SSD drive - possibly before 
quitting Fuseki. (Yes, I know: Don’t Do That!) I assume that a snapshot of a 
moving store is a big risk.

One thing that I did notice is that the size of the original store and the size 
of the copy are significantly different - the copy is much larger. Is there 
something I’m not understanding, wrt the store? I didn’t see any symbolic links 
- are there hard links?


Thanks,

Mark


On Jun 27, 2014, at 4:31 AM, Andy Seaborne <[email protected]> wrote:

Mark,

What's the Fuseki configuration (and version, just to check)?

It does look like an update problem, but one that should not happen unless two 
things have updated the DB simultaneously at some point in the past. The queries 
just happen to detect the damage, which is probably permanent.

        Andy


On 26/06/14 23:51, Mark Feblowitz wrote:
I saw something odd today (see trace below). I have a Fuseki/TDB
server that receives some bursts of updates and queries.

I know that this error appears to have occurred during results
iteration, but this is the first time we’ve seen it. One guess was an
update locking problem. Does that make sense? Is there a better answer
(one that makes Fuseki look better)? :)

Among the things we changed before this: 1) we moved the store to an
SSD drive, and 2) I removed my 0.1-second delay between updates.

The server did seem to recover from this, but I’m told it might have
taken a while.

Is this an ephemeral thing? Something I need to look into? An
internal thing?

Thanks,

Mark


SELECT ?O ?ObsType ?Timestamp ?S ?E ?Payload
WHERE
{ ?E <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> :E
{ SELECT ?E ?O ?OType ?Timestamp
WHERE
{ ?O :contextEntity ?E .
?O :oType ?ObsType .
?O :generatedAtTimeMilli ?Timestamp
}
ORDER BY DESC(?Timestamp)
LIMIT 1
}
{ SELECT ?E ?S
WHERE
{ ?SH :contextEntity ?E .
?SH :generatedAtTimeMilli ?Timestamp .
?SH :bestRank ?SHBestRank .
?SH :s ?S
}
ORDER BY DESC(?Timestamp) ASC(?SHBestRank)
LIMIT 1
}
OPTIONAL
{ ?O btn:payload ?Payload }
}

14:46:19 INFO [4918] exec/select
14:46:20 WARN Open iterator: QueryIterSingleton/7998397
14:46:20 WARN Open iterator: QueryIterBlockTriples/7998398
14:46:20 WARN Open iterator: QueryIterTriplePattern/7998399
14:46:20 WARN Open iterator: QueryIterTriplePattern/7998400
14:46:20 WARN Open iterator: QueryIterTriplePattern/7998401
14:46:20 WARN Open iterator: QueryIterTriplePattern/7998402
14:46:20 WARN Open iterator: QueryIterTriplePattern$TripleMapper/7998405
14:46:20 WARN Open iterator: QueryIterTriplePattern$TripleMapper/7998453
14:46:20 WARN Open iterator: QueryIterTriplePattern$TripleMapper/7998454
14:46:20 WARN Open iterator: QueryIterTriplePattern$TripleMapper/7998455
14:46:20 WARN [4918] RC = 500 : RecordRangeIterator: records not strictly 
increasing: 0000000005a1b314000000000000761b00000000000066f5 // 
000000000000004b00000000000000bd0000000003835bbe
com.hp.hpl.jena.tdb.base.StorageException: RecordRangeIterator: records not 
strictly increasing: 0000000005a1b314000000000000761b00000000000066f5 // 
000000000000004b00000000000000bd0000000003835bbe
at 
com.hp.hpl.jena.tdb.base.recordbuffer.RecordRangeIterator.hasNext(RecordRangeIterator.java:124)
at org.apache.jena.atlas.iterator.Iter$4.hasNext(Iter.java:317)
at 
com.hp.hpl.jena.tdb.sys.DatasetControlMRSW$IteratorCheckNotConcurrent.hasNext(DatasetControlMRSW.java:119)
at org.apache.jena.atlas.iterator.Iter$4.hasNext(Iter.java:317)
at org.apache.jena.atlas.iterator.Iter$4.hasNext(Iter.java:317)
at org.apache.jena.atlas.iterator.Iter$4.hasNext(Iter.java:317)
at org.apache.jena.atlas.iterator.Iter.hasNext(Iter.java:915)
at 
com.hp.hpl.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:90)
at com.hp.hpl.jena.util.iterator.NiceIterator$1.hasNext(NiceIterator.java:103)
at com.hp.hpl.jena.util.iterator.NiceIterator$1.hasNext(NiceIterator.java:103)
at 
com.hp.hpl.jena.reasoner.rulesys.impl.TopLevelTripleMatchFrame.nextMatch(TopLevelTripleMatchFrame.java:55)
at 
com.hp.hpl.jena.reasoner.rulesys.impl.LPInterpreter.run(LPInterpreter.java:330)
at 
com.hp.hpl.jena.reasoner.rulesys.impl.LPInterpreter.next(LPInterpreter.java:192)
at 
com.hp.hpl.jena.reasoner.rulesys.impl.LPTopGoalIterator.moveForward(LPTopGoalIterator.java:100)
at 
com.hp.hpl.jena.reasoner.rulesys.impl.LPTopGoalIterator.hasNext(LPTopGoalIterator.java:222)
at 
com.hp.hpl.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:90)
at 
com.hp.hpl.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:90)
at com.hp.hpl.jena.util.iterator.FilterIterator.hasNext(FilterIterator.java:54)
at 
com.hp.hpl.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:90)
at com.hp.hpl.jena.util.iterator.FilterIterator.hasNext(FilterIterator.java:54)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterTriplePattern$TripleMapper.hasNextBinding(QueryIterTriplePattern.java:151)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:76)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterBlockTriples.hasNextBinding(QueryIterBlockTriples.java:64)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterTopN$1.initializeIterator(QueryIterTopN.java:97)
at 
org.apache.jena.atlas.iterator.IteratorDelayedInitialization.init(IteratorDelayedInitialization.java:40)
at 
org.apache.jena.atlas.iterator.IteratorDelayedInitialization.hasNext(IteratorDelayedInitialization.java:50)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterPlainWrapper.hasNextBinding(QueryIterPlainWrapper.java:54)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:59)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:76)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.makeNextStage(QueryIterRepeatApply.java:103)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:67)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:59)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:40)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:40)
at 
com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:112)
at 
com.hp.hpl.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:75)
at com.hp.hpl.jena.sparql.resultset.ResultSetApply.apply(ResultSetApply.java:41)
at com.hp.hpl.jena.sparql.resultset.XMLOutput.format(XMLOutput.java:52)
at 
com.hp.hpl.jena.query.ResultSetFormatter.outputAsXML(ResultSetFormatter.java:482)
at 
org.apache.jena.fuseki.servlets.ResponseResultSet$1.output(ResponseResultSet.java:191)
at 
org.apache.jena.fuseki.servlets.ResponseResultSet.output(ResponseResultSet.java:283)
at 
org.apache.jena.fuseki.servlets.ResponseResultSet.sparqlXMLOutput(ResponseResultSet.java:195)
at 
org.apache.jena.fuseki.servlets.ResponseResultSet.doResponseResultSet$(ResponseResultSet.java:141)
at 
org.apache.jena.fuseki.servlets.ResponseResultSet.doResponseResultSet(ResponseResultSet.java:88)
at 
org.apache.jena.fuseki.servlets.SPARQL_Query.sendResults(SPARQL_Query.java:348)
at org.apache.jena.fuseki.servlets.SPARQL_Query.execute(SPARQL_Query.java:244)
at 
org.apache.jena.fuseki.servlets.SPARQL_Query.executeWithParameter(SPARQL_Query.java:195)
at org.apache.jena.fuseki.servlets.SPARQL_Query.perform(SPARQL_Query.java:80)
at 
org.apache.jena.fuseki.servlets.SPARQL_ServletBase.executeLifecycle(SPARQL_ServletBase.java:171)
at 
org.apache.jena.fuseki.servlets.SPARQL_ServletBase.executeAction(SPARQL_ServletBase.java:152)
at 
org.apache.jena.fuseki.servlets.SPARQL_ServletBase.execCommonWorker(SPARQL_ServletBase.java:140)
at 
org.apache.jena.fuseki.servlets.SPARQL_ServletBase.doCommon(SPARQL_ServletBase.java:69)
at org.apache.jena.fuseki.servlets.SPARQL_Query.doGet(SPARQL_Query.java:61)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.nio.BlockingChannelConnector$BlockingChannelEndPoint.run(BlockingChannelConnector.java:298)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
14:46:20 INFO [4918] Content-Type application/sparql-r?O?O




