It's also worth noting that since Fuseki is a Java-based application, the JVM does its own memory management: it asks the OS for some amount of memory up front and then divides that between the Java objects inside the process. The heap size is therefore often larger than the memory the application actually needs. The Fuseki scripts set a default heap size based on prior experience (i.e. a good general-purpose default), but that may not suit all environments and may need customising. Also, when backed by TDB databases, Fuseki uses memory-mapped files, which are accounted for separately from the JVM heap but will still show up in OS-level accounting for the JVM process.
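For example, the stock fuseki-server script honours a JVM_ARGS environment variable (the default has been -Xmx1200M in recent releases), so a smaller heap can be requested without editing the script. A minimal sketch; the cap of 512m and the paths here are illustrative, and the right value depends on your data and query load:

    JVM_ARGS="-Xmx512m" ./fuseki-server --loc=/path/to/tdb /ds

Note that if the heap is capped too low for the workload, Fuseki fails with an OutOfMemoryError rather than being killed by the OS, which is usually easier to diagnose in a container.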
TL;DR: The amount of memory you see consumed at the OS level does not necessarily correlate directly with the amount of memory used for your datasets. To determine that you would need to attach a JVM profiler to the running Fuseki application. You may want to look at the Fuseki script in detail and adjust the JVM memory settings for your use case.

Rob

On 12/03/2020, 13:26, "Andy Seaborne" <[email protected]> wrote:

    On 12/03/2020 11:26, Luís Moreira de Sousa wrote:
    > Dear all,
    >
    > I loaded six RDF datasets into Fuseki, with sizes between 20 KB and
    > 20 MB. To host these six datasets (in persistent mode) Fuseki is
    > using over 1 GB of RAM and could soon get killed by the system
    > (running on a container platform).

    How much RAM are you giving Fuseki?
    (Why "could soon get killed"?)

    > This demand on RAM for such small datasets appears excessive. What
    > strategies are there to limit the RAM used by Fuseki?

    The persistent store can be controlled with
    https://jena.apache.org/documentation/tdb/store-parameters.html

    Specifically:
      tdb.node2nodeid_cache_size
      tdb.nodeid2node_cache_size
    in the file tdb.cfg

    Do not change the database layout parameters after the database is built!

    Or put them all in one dataset as named graphs.

    Or load them as plain files (if they are read-only).

        Andy

    > Thank you.
    >
    > --
    > Luís
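To make the cache settings above concrete: tdb.cfg is a plain Java-properties-style file placed in the database directory. A minimal sketch, using the documented parameter names with deliberately small, purely illustrative values (the right sizes depend on how many distinct RDF terms each dataset holds):

    # tdb.cfg -- placed in the TDB database directory
    # Shrink the node table caches below their defaults to reduce heap use.
    # (These values are illustrative, not recommendations.)
    tdb.node2nodeid_cache_size=10000
    tdb.nodeid2node_cache_size=50000

Each of the six databases has its own directory, so the file needs to be created per database. These are cache parameters, not layout parameters, so (per the warning above) they are safe to change after the database has been built.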

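On Andy's suggestion of combining everything into one dataset as named graphs, a hedged sketch using tdbloader's --graph option (the graph IRIs and paths here are made up for illustration):

    # Load each former dataset into its own named graph of a single TDB
    # database, so only one set of node caches and memory-mapped files is held.
    tdbloader --loc=/data/combined --graph=http://example.org/graphs/ds1 ds1.ttl
    tdbloader --loc=/data/combined --graph=http://example.org/graphs/ds2 ds2.ttl

Queries can then select a graph with GRAPH <http://example.org/graphs/ds1> { ... }, and Fuseki serves one dataset instead of six.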