Quoting Peter Asemann's message of Thu, 18 Jul 2002:

>Hmm... I've written a rather paranoid little script to test my filesystem,
>and it works. The script takes a file with 4096 random bytes as input.
>I don't think the filesystem has built-in compression or anything (it's
>a normal ext3 filesystem), so the file produced by the script
>
>-rw-r--r--    1 root     bin      4311744512 Jul 18 16:01 bigfile
>
>must really be there. And it's more than 4 GB big. So my filesystem must be
>capable of holding files bigger than 4 GB... I suppose. So why won't the htdig
>DB grow bigger than 2 GB? Maybe it's just the Berkeley DB

Sorry. Guess I was wrong on all counts. As for the general file
size limits, I think that is really a library/kernel issue, not
a filesystem issue. On rereading a couple of things, it looks like
ext3 (even ext2) supports >2GB files with no problem, assuming you
have appropriate kernel and library support.

As for ht://Dig, it looks like there is a Berkeley DB issue. If
you didn't already see it, there was another post today discussing
the issue.

Jim



_______________________________________________
htdig-general mailing list <[EMAIL PROTECTED]>
To unsubscribe, send a message to <[EMAIL PROTECTED]> with a 
subject of unsubscribe
FAQ: http://htdig.sourceforge.net/FAQ.html