Hi Andy,

Unfortunately (in terms of reproducing the error, not in terms of my project), 
starting with a fresh database seems to have solved the problem. It is quite 
possible that the previous database was corrupted by an aborted transaction, 
and that this only happened once - I assumed that dropping the graph would also 
get rid of any corrupted data, but if I understand you correctly, that may not 
have been the case, and the error appearing again and again may just have been 
a consequence of the initial corruption.


If you still want me to run a test, I'd be happy to, but I guess this may not 
be necessary in that case.


Best,

Andreas

________________________________
From: Andy Seaborne <[email protected]>
Sent: Wednesday, 20 March 2019 00:15:48
To: [email protected]
Subject: Re: Re: Re: Re: Error 500: No conversion to a Node: <RDF_Term >

Hi Andreas,

Do you have a reproducible test case, even if it happens only occasionally?

Having looked at the code, I can't see a risk point, and certainly nothing
like the earlier problems with TDB1.

Have you had any aborted transactions? These happen in the Fuseki UI only
if you browse away from a file upload or there is a data error.

I can build a special version for you with a layer of node write caching
removed, if it's easier to run a test case in your environment rather than
try to extract one.

     Andy

On 11/03/2019 21:59, Andy Seaborne wrote:
> Hi Andreas,
>
> On 11/03/2019 14:37, Walker, Andreas wrote:
>> Hi Andy,
>>
>>
>> The database was created from the web interface, and I've only used
>> the web interface to add data to it, so no other version has ever
>> touched it.
>
> OK - so you have only run v3.10.0.
>
>> If I understand you correctly, the problem is with the database as a
>> whole, so dropping and reloading the graph might not solve the
>> problem. I have now switched to a fresh database and am currently
>> reloading the data, so I can see whether the problem persists beyond
>> the original database.
>
> If it does then we have a reproducible test case.  That said, I can't
> think of a way that a single load, or a single load and a sequence of
> updates not run in parallel, could break the node table.
>
> The email of Osma's is about compaction - have you compacted the database?
> (Fuseki must not be running at the time - this is supposed to be caught
> using OS file locks, but I'm told VMs can get this wrong, though I don't
> know which ones and when.)
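>
> For reference, compaction can be run offline through the TDB2 API as
> well as from the command line; a minimal sketch, assuming a placeholder
> database location of "/data/mydb" (Fuseki stopped first):
>
>     // Hedged sketch: compact a TDB2 database in place.
>     import org.apache.jena.sparql.core.DatasetGraph;
>     import org.apache.jena.tdb2.DatabaseMgr;
>
>     public class CompactDb {
>         public static void main(String[] args) {
>             // Connect to the on-disk database directory.
>             DatasetGraph dsg = DatabaseMgr.connectDatasetGraph("/data/mydb");
>             // Compaction writes a new "Data-NNNN" generation and
>             // switches the database over to it.
>             DatabaseMgr.compact(dsg);
>             dsg.close();
>         }
>     }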
>
>> The only backup I have done is by making snapshots of the entire
>> virtual server this is running on, so I don't think that is related in
>> any way.
>
> Probably not related, but is it an instantaneous backup of the
> filesystem? If not, then it isn't a reliable backup (in the same way
> that copying all the files isn't a safe backup procedure).
>
> The problem is that if the copy is done while a write transaction is
> running, some files may be copied before the commit point and some
> after, which risks chaos.
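>
> A consistent alternative is to dump the dataset inside a single read
> transaction, so the output reflects exactly one commit point; a minimal
> sketch, with placeholder paths:
>
>     // Hedged sketch: dump a TDB2 database as N-Quads in one read
>     // transaction, giving a backup from a single consistent view.
>     import java.io.FileOutputStream;
>     import java.io.OutputStream;
>     import org.apache.jena.riot.Lang;
>     import org.apache.jena.riot.RDFDataMgr;
>     import org.apache.jena.sparql.core.DatasetGraph;
>     import org.apache.jena.system.Txn;
>     import org.apache.jena.tdb2.DatabaseMgr;
>
>     public class ConsistentBackup {
>         public static void main(String[] args) throws Exception {
>             DatasetGraph dsg = DatabaseMgr.connectDatasetGraph("/data/mydb");
>             try (OutputStream out = new FileOutputStream("/backups/mydb.nq")) {
>                 Txn.executeRead(dsg, () ->
>                     RDFDataMgr.write(out, dsg, Lang.NQUADS));
>             }
>             dsg.close();
>         }
>     }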
>
>      Andy
>
>>
>>
>> Thanks again for your help,
>>
>> Andreas
>>
>> ________________________________
>> From: Andy Seaborne <[email protected]>
>> Sent: Friday, 8 March 2019 10:50:28
>> To: [email protected]
>> Subject: Re: Re: Re: Error 500: No conversion to a Node: <RDF_Term >
>>
>> Hi Andreas,
>>
>> Is this a database that has only ever been used with 3.10.0 or was the
>> data loaded with a previous version at some time in the past?
>>
>> The problem occurs silently during loading. There is no sign of the
>> problem at the time, and the system works just fine while the RDF term,
>> or terms, are still in the node table cache.
>>
>> Then the system is restarted.
>>
>> Then the RDF term is needed for a query and the errors are reported.
>>
>> But the problem originated back when the data was loaded or updated,
>> maybe several restarts ago.
>>
>> Of course, it may be a different issue, but the error message is
>> consistent with the known bug.
>>
>> Have you been backing up the server on a regular basis? A backup is
>> N-Quads, so it pulls every RDF term from disk (unless the term is
>> already cached).
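>>
>> For reference, such a backup can also be triggered over the Fuseki
>> admin protocol; a minimal Java 11 sketch, assuming a placeholder
>> dataset name "ds" on localhost:3030:
>>
>>     // Hedged sketch: ask the running Fuseki server to write a
>>     // backup into its server-side backups directory.
>>     import java.net.URI;
>>     import java.net.http.HttpClient;
>>     import java.net.http.HttpRequest;
>>     import java.net.http.HttpResponse;
>>
>>     public class TriggerBackup {
>>         public static void main(String[] args) throws Exception {
>>             HttpRequest req = HttpRequest.newBuilder()
>>                 .uri(URI.create("http://localhost:3030/$/backup/ds"))
>>                 .POST(HttpRequest.BodyPublishers.noBody())
>>                 .build();
>>             HttpResponse<String> resp = HttpClient.newHttpClient()
>>                 .send(req, HttpResponse.BodyHandlers.ofString());
>>             System.out.println(resp.statusCode() + " " + resp.body());
>>         }
>>     }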
>>
>>       Andy
>>
>> On 07/03/2019 20:47, Walker, Andreas wrote:
>>> Hi Andy,
>>>
>>>
>>> I am running version 3.10.0. The problem with reloading the database
>>> is that the error recurs regularly (multiple times a day), so if
>>> there are any strategies to avoid it, I'd appreciate any advice.
>>>
>>>
>>> Best,
>>>
>>> Andreas
>>>
>>>
>>> ________________________________
>>> From: Andy Seaborne <[email protected]>
>>> Sent: Thursday, 7 March 2019 21:12
>>> To: [email protected]
>>> Subject: Re: Re: Error 500: No conversion to a Node: <RDF_Term >
>>>
>>> Hi Andreas - which version are you running?
>>>
>>> It does not look like the corruption problem, which is now fixed.
>>>
>>> The best thing to do is reload the database again. Whatever terms were
>>> messed up are permanently damaged, I'm afraid.
>>>
>>>        Andy
>>>
>>> On 07/03/2019 10:49, Walker, Andreas wrote:
>>>> Dear all,
>>>>
>>>>
>>>> As a quick follow-up which might be helpful in identifying the
>>>> error: I can currently run a SPARQL query (just listing any triples)
>>>> with LIMIT 80, but no higher, before I run into the error, so it
>>>> seems there might indeed be a particular part of the database
>>>> that is corrupted.
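>>>>
>>>> For reference, the probe was along these lines; a minimal sketch,
>>>> with a placeholder endpoint URL:
>>>>
>>>>     // Hedged sketch: raise LIMIT until the Error 500 appears.
>>>>     import org.apache.jena.query.QueryExecution;
>>>>     import org.apache.jena.query.QueryExecutionFactory;
>>>>     import org.apache.jena.query.ResultSetFormatter;
>>>>
>>>>     public class ProbeQuery {
>>>>         public static void main(String[] args) {
>>>>             String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 80";
>>>>             try (QueryExecution qexec = QueryExecutionFactory
>>>>                     .sparqlService("http://localhost:3030/ds/sparql",
>>>>                                    query)) {
>>>>                 ResultSetFormatter.out(qexec.execSelect());
>>>>             }
>>>>         }
>>>>     }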
>>>>
>>>>
>>>> Best,
>>>>
>>>> Andreas
>>>>
>>>> ________________________________
>>>> From: Walker, Andreas <[email protected]>
>>>> Sent: Wednesday, 6 March 2019 10:42:32
>>>> To: [email protected]
>>>> Subject: Error 500: No conversion to a Node: <RDF_Term >
>>>>
>>>> Dear all,
>>>>
>>>>
>>>> From time to time, my Fuseki server starts throwing the following
>>>> error message on any SPARQL query I pose to one of my graphs:
>>>>
>>>>
>>>> "Error 500: No conversion to a Node: <RDF_Term >"
>>>>
>>>>
>>>> Unfortunately, I couldn't find any explanation of this error
>>>> message, beyond a discussion of a corrupted TDB2 database.
>>>>
>>>>
>>>> (https://users.jena.apache.narkive.com/LF4XE801/corrupted-tdb2-database)
>>>>
>>>>
>>>>
>>>> Once this happens, the only thing I have been able to do so far is
>>>> to drop the entire afflicted graph and rebuild it, but of course
>>>> that isn't a viable solution in the long term.
>>>>
>>>>
>>>> The only way I interact with Fuseki is by starting and stopping it,
>>>> querying it via the SPARQL endpoint (and sometimes through the web
>>>> interface, e.g. when troubleshooting my application), and uploading
>>>> new triples (as Turtle files) via the web interface. So far, I
>>>> haven't been able to find a pattern in when the error appears.
>>>>
>>>>
>>>> Any insights into why this error appears, and what to do in order to
>>>> avoid it? I'd appreciate any help.
>>>>
>>>>
>>>> Best,
>>>>
>>>> Andreas
>>>>
>>>
>>
