100M characters isn't a common measure of RDF, because the number of triples it represents will vary radically with the kind of RDF loaded (how many literal objects there are, how large they are, and so on).

 How many lines of N-Triples are you loading? 

What does "passed to Bulkloader function with a serialization" mean? Please 
show us actual code. Is there a reason you aren't using the CLI utilities 
provided?
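
For comparison, a minimal TDB2 load via the Java API might look roughly like the sketch below. This is an illustration, not your code: the dataset directory and file name are placeholders, and it assumes Apache Jena 3.x with the jena-tdb2 artifact on the classpath.

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.system.Txn;
import org.apache.jena.tdb2.TDB2Factory;

public class Tdb2Load {
    public static void main(String[] args) {
        // Connect to (or create) a TDB2 dataset at the given directory.
        Dataset dataset = TDB2Factory.connectDataset("target/tdb2-db");

        // Load the N-Triples file inside a single write transaction;
        // RDFDataMgr chooses the parser from the file extension.
        Txn.executeWrite(dataset, () -> RDFDataMgr.read(dataset, "data.nt"));

        dataset.close();
    }
}
```

The CLI equivalent is `tdb2.tdbloader --loc target/tdb2-db data.nt`, which is usually the better choice for large initial loads.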

What are your requirements? What is the larger process or application of which 
this is a part?

ajs6f

> On Mar 19, 2018, at 7:14 AM, Davide <[email protected]> wrote:
> 
> Loading more than 100 million characters of triples in N-Triples format into
> a non-empty dataset, passed to the Bulkloader function as a serialization,
> takes more than 10 minutes.
> So I would like to ask: if I use the bulkloader with TDB2, can I obtain
> improvements?
> The hardware is an ordinary notebook with 8 GB of RAM, no SSD, and an
> Intel Core i7-4510U at 2.00 GHz.
> 
> 2018-03-19 11:54 GMT+01:00 Dick Murray <[email protected]>:
> 
>> Slow needs to be qualified. Slow because you need to load 1MT in 10s? What
>> hardware? What environment? Are you loading a line-based serialization? Are
>> you loading from scratch or appending?
>> 
>> D
>> 
>> On Mon, 19 Mar 2018, 10:51 Davide, <[email protected]> wrote:
>> 
>>> Hi,
>>> What is the best way to perform bulk loading with TDB2 and the Java API?
>>> I used the bulkloader with TDB1, but storing data with it is too slow.
>>> 
>> 