SamuelBoerlin commented on issue #3488: URL: https://github.com/apache/jena/issues/3488#issuecomment-3384658453
> Do you happen to know what the CONSTRUCT query is? Have other CONSTRUCT queries succeeded? I think this is a different problem but made likely to happen around the time of compaction finishing. It may (I'm still investigating) leave the system in a broken state, but something triggered it in the first place.

Unfortunately not. But queries in general work fine and don't seem to be affected by the compaction getting stuck. The DB has been in this state for days, several times now, and we haven't observed any problems other than more and more disk space getting used up and memory usage not going down (it usually drops quite a lot after a successful compaction).

> It's inside the JVM to get a `java.lang.InternalError` - it is accessing a memory mapped file. Would it be possible to run without `--add-modules=jdk.incubator.vector` and with a 2G heap `-Xmx2G`?

Ah sorry, I had copied the wrong line. We are already running with `-Xmx2G`:

```
/opt/java/openjdk/bin/java -Xmx2G -Dlog4j2.formatMsgNoLookups=true --enable-native-access=ALL-UNNAMED --add-modules=jdk.incubator.vector -Dlog4j.configurationFile=/jena-fuseki/log4j2.properties -cp /jena-fuseki/fuseki-server.jar org.apache.jena.fuseki.main.cmds.FusekiServerCmd
```

The `--add-modules=jdk.incubator.vector` flag was only added when we upgraded from 5.2.0 to 5.5.0. The problem was already happening before that (though less frequently) on 5.2.0, without `--add-modules=jdk.incubator.vector`, so I don't think it is the root cause. I'll give it a try nonetheless, but it'll take a while; the adjusted command line is sketched below.

Some additional points that may be of interest:

- I tried reproducing the problem by running the compaction script in a loop (sketched below) for quite a while on a less busy server with the same data, and was unable to reproduce it this way. So it seems the problem might only be triggered when other activity happens concurrently with compaction.
- Starting a backup via `GET http://db:3030/dsp-repo` with `Accept: application/trig` (see the request sketch below) about half an hour after starting the compaction, before it had finished, led to the compaction getting stuck and the .trig file being truncated at some random point, ending up smaller than expected, yet `curl --fail ...` detected no error. This led us to change the schedules so that backups and compaction no longer overlap, which seems to have fixed the occasional corrupted/truncated backups but didn't fix the compaction getting stuck.
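For reference, the test run without the incubator module would use the same command line as above with only that one flag removed (everything else unchanged from our setup):

```
/opt/java/openjdk/bin/java -Xmx2G -Dlog4j2.formatMsgNoLookups=true --enable-native-access=ALL-UNNAMED -Dlog4j.configurationFile=/jena-fuseki/log4j2.properties -cp /jena-fuseki/fuseki-server.jar org.apache.jena.fuseki.main.cmds.FusekiServerCmd
```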
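The compaction loop was along these lines (a minimal sketch, assuming the standard Fuseki admin endpoint `POST /$/compact/{dataset}` and our dataset name `dsp-repo`; the fixed `sleep` is a crude stand-in for properly waiting on the asynchronous compaction task, not our exact script):

```
#!/usr/bin/env bash
# Minimal sketch, assuming Fuseki's admin endpoint POST /$/compact/{dataset}.
# 'dsp-repo' and the sleep interval are placeholders from our setup.
while true; do
  # Kick off compaction; ?deleteOld=true removes the old storage
  # generation (Data-NNNN directory) once compaction succeeds.
  curl --fail -XPOST 'http://db:3030/$/compact/dsp-repo?deleteOld=true'
  # Compaction runs as an asynchronous task; waiting a fixed time
  # stands in for polling the task status before the next round.
  sleep 600
done
```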
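And the backup request, spelled out (a sketch; the output path is illustrative). Note that `--fail` only maps HTTP status codes >= 400 to a non-zero exit code, and the 200 status is sent before the body streams, so a server-side error swallowed mid-stream can leave a truncated `.trig` file behind while curl still reports success:

```
# Sketch of the backup request; backup.trig is an illustrative output path.
# --fail catches HTTP errors (status >= 400) but does not verify that the
# streamed TriG body is complete.
curl --fail \
     -H 'Accept: application/trig' \
     -o backup.trig \
     'http://db:3030/dsp-repo'
```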
