I confirmed with jconsole that the heap was seen as 1200m for the Fuseki
server, which also blows up the same way tdbdump/tdbquery did for me.

The corollary question is: what is the maximum dataset size these tools
support for exploratory work on a Linux virtual machine with 2-4 GB of RAM?

bruce



On Thu, Oct 27, 2011 at 4:35 PM, Andy Seaborne <[email protected]> wrote:

> On 27/10/11 21:15, Bruce Craig wrote:
>
>> We are new to Jena and thought we would first try to replicate benchmarks
>> for the tools that we have seen mentioned.
>> One which aligned well with the kinds of work we are pursuing was the
>> Social
>> Network Intelligence Benchmark (
>> http://www.w3.org/wiki/Social_Network_Intelligence_BenchMark)
>> We downloaded the generator and created what seems like a huge dataset.
>> Nonetheless we succeeded, we think, in creating a TDB data store with
>> tdbloader.  However, any attempt to tdbdump, to query with arq or
>> tdbquery, or to connect and query with Fuseki greets us with heap memory
>> failures.
>>
>
> Out of heap?  What heap size are you using?  TDB does not flex with heap
> size (in the last release).
>
> On a 32 bit JVM, use -Xmx1200M
> On a 64 bit JVM, don't push the heap size up too high (caching of indexes
> isn't in the heap; nodes are though).
>
> Seeing the stacktrace would be useful.
>
>        Andy
>
>
>> We don't really need DBpedia scale as yet, so we were looking for
>> experience and guidance on
>> a) Setup of basic tool set  - any hidden caveats?
>> b) Suggestions for more modest datasets to use in environments on VM linux
>> systems with 2-4GB RAM.
>>
>> Regards
>>
>> Bruce
>>
>>
> (caution - that wiki page documents XSD dateTimes as being
> "2011-02-22 09:43:13"^^xsd:dateTime
>
> they are not.  Neither TDB nor ARQ will recognize that as a dateTime.  No
> space; use a T.
>
> e.g.
> "2011-02-22T09:43:13"^^xsd:dateTime
>
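The heap advice above can be sketched as shell invocations. This is a minimal sketch assuming the standard Jena and Fuseki launcher scripts honor the JVM_ARGS environment variable; the dataset location /data/tdb and service name /ds are illustrative, not from the thread.

```shell
# Sketch: run the TDB command-line tools and Fuseki with an explicit
# 1200 MB heap on a 32-bit JVM. Assumes the Jena bin scripts read
# JVM_ARGS; the store location /data/tdb is illustrative.
export JVM_ARGS="-Xmx1200M"

# Dump the TDB store (tdbdump streams N-Quads to stdout)
tdbdump --loc /data/tdb > dump.nq

# Run a simple query against the same store
tdbquery --loc /data/tdb 'SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }'

# Start Fuseki over the same store, served at /ds
fuseki-server --loc=/data/tdb /ds
```

On a 64-bit JVM, per Andy's note, the heap should stay modest: TDB's index caching is off-heap (memory-mapped), so an oversized -Xmx only starves the file cache.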
