Hello, I have an application which uses ~80 BTREE files via the Perl DB_File interface. All of them are queried sequentially on a Linux server with 3.5 GB of RAM. The files range from 800 MB to 1.5 GB each.
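Each file is opened roughly like this (a minimal sketch; the filename and variable names are placeholders, not the real code):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # open one of the BTREE files read-only through the tie interface
    my %db;
    tie %db, 'DB_File', '/data/db01.btree', O_RDONLY, 0644, $DB_BTREE
        or die "Cannot open /data/db01.btree: $!";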
I wrote a first version of the application which answers an average query in 0.005 sec. (The first 40 files are scanned, and as soon as a match is found the scan stops; the remaining 40 files are then processed with the same strategy. The average time for these two scans is 0.005 sec.) This version does not set a cache size (i.e. it uses the default, I presume). After 12 hours of running, the process occupies 21 MB of memory.
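The lookup logic is roughly like this (a sketch; @first_40 and @last_40 are placeholder arrays holding references to the tied hashes):

    # scan the first group of files, stop at the first hit
    my $value;
    for my $db (@first_40) {
        if (exists $db->{$key}) {
            $value = $db->{$key};
            last;                  # stop as soon as a match is found
        }
    }
    # if nothing was found, apply the same strategy to the remaining files
    if (!defined $value) {
        for my $db (@last_40) {
            if (exists $db->{$key}) {
                $value = $db->{$key};
                last;
            }
        }
    }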
Since 3.5 GB of memory is available, I decided to exploit a memory cache: each DB now has a local cache of 32 MB. After 12 hours of running, the process occupies 1.2 GB of memory, but the performance is 2-3 times worse than the process running without the cache, which is quite strange. I know that the data has very low locality.
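The cache is set per file through the $DB_BTREE info object before the tie, roughly like this (sketch; only the cachesize line differs from the default setup above):

    use Fcntl;
    use DB_File;

    # request a 32 MB Berkeley DB cache for this file before tie-ing it
    $DB_BTREE->{'cachesize'} = 32 * 1024 * 1024;

    my %db;
    tie %db, 'DB_File', '/data/db01.btree', O_RDONLY, 0644, $DB_BTREE
        or die "Cannot open /data/db01.btree: $!";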
Does anyone have experience with similar situations?