Loading more than 100 million characters of triples in N-Triples format into a
non-empty dataset via the bulk loader takes more than 10 minutes.
So I would like to ask: if I use the bulk loader with TDB2, can I expect an
improvement?
The hardware is an ordinary notebook with 8 GB of RAM, no SSD, and an
Intel Core i7-4510U at 2.00 GHz.
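For reference, a minimal sketch of loading an N-Triples file into a TDB2
database through the Java API, wrapped in a single write transaction (assuming
Apache Jena 3.x on the classpath; the database directory and file name below
are illustrative, not from the thread):

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.system.Txn;
import org.apache.jena.tdb2.TDB2Factory;

public class Tdb2Load {
    public static void main(String[] args) {
        // Open (or create) a TDB2 database at the given directory.
        // "target/tdb2-db" is an illustrative path.
        Dataset dataset = TDB2Factory.connectDataset("target/tdb2-db");

        // One write transaction around the whole load avoids
        // per-statement commit overhead.
        Txn.executeWrite(dataset, () ->
            RDFDataMgr.read(dataset, "data.nt"));

        dataset.close();
    }
}
```

For large files, the command-line loader (`tdb2.tdbloader`) is usually faster
than a transactional API load, since it is built specifically for bulk
ingestion.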

2018-03-19 11:54 GMT+01:00 Dick Murray <[email protected]>:

> Slow needs to be qualified. Slow because you need to load 1MT in 10s? What
> hardware? What environment? Are you loading a line based serialization? Are
> you loading from scratch or appending?
>
> D
>
> On Mon, 19 Mar 2018, 10:51 Davide, <[email protected]> wrote:
>
> > Hi,
> > What is the best way to perform the bulk loading with TDB2 and Java API?
> > Because I used the bulkloader with TDB1, but when I store data, it's too
> > slow.
> >
>
