Hello,

I've loaded a complete version of PDB in N3, approximately 14
billion triples. The load completed successfully, but the server
crashed at the end of a checkpoint and now won't restart. I'm getting
the following error:

GPF: disk.c:2046 looking for a dp allocation bit that is out of range
Segmentation fault

Also, the first message I received, though it no longer appears, was:

GPF: disk.c:1867 cannot write buffer to 0 page.

What do these errors mean? Are they caused by the size of the
virtuoso.db file, which is 1.1 TB?
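
For reference, I drove the load and the final checkpoint roughly as
in the sketch below, using the standard bulk loader over ODBC (the
DSN, credentials, directory, and graph URI are placeholders, not the
actual values from my setup):

    import pyodbc  # connects to Virtuoso over a configured ODBC DSN

    # Placeholder DSN and credentials; adjust to the actual server.
    conn = pyodbc.connect("DSN=VirtuosoLocal;UID=dba;PWD=dba",
                          autocommit=True)
    cur = conn.cursor()

    # Register the N3 files with the bulk loader, run the load,
    # then persist the result to disk with a checkpoint.
    cur.execute("ld_dir('/data/pdb', '*.n3', 'http://example.org/pdb')")
    cur.execute("rdf_loader_run()")
    cur.execute("checkpoint")

The crash happens at the checkpoint step at the end of this sequence.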

Thanks for the help,

Marc-Alexandre Nolin
