Hello Marc-Alexandre,

Something has gone really weird here.
It is probably not a case of running out of disk space, because Virtuoso
usually detects such conditions properly.
On the other hand, a single 1.1 TB file without striping is something I've
never tried, so I can't say "it works for me".
Striping across multiple disks is always a Good Thing when a database is
larger than half of RAM.
I've inspected the code but found no evident reason why this specific size
should matter, so I'll ask others.

Best Regards,

Ivan Mikhailov
OpenLink Software
http://virtuoso.openlinksw.com

On Fri, 2010-03-26 at 14:25 -0400, Marc-Alexandre Nolin wrote:
> Hello,
> 
> I've loaded a complete version of PDB in N3, approximately 14 billion
> triples in size. The load completed successfully, but it crashed at the
> end of a checkpoint and won't restart. I'm getting the following error:
> 
> GPF: disk.c:2046 looking for a dp allocation bit that is out of range
> Segmentation fault
> 
> Also, the first error message I received (it is no longer appearing) was:
> 
> GPF: disk.c:1867 cannot write buffer to 0 page.
> 
> What does this mean? Is it because of the size of the virtuoso.db
> which is 1.1 TB?
> 
> Thanks for the help,
> 
> Marc-Alexandre Nolin
> 
> _______________________________________________
> Virtuoso-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/virtuoso-users
