Hi, 

        I was trying to use either DBM_File or IPC::Shareable to handle about 
100 MB of data locally, without a separate database machine. The data 
structure I tried to store was a complex hash with hashes and arrays 
nested inside (MLDBM for the nested hashes and arrays, IPC::Shareable 
for the nested hashes). The underlying serialization modules tied were 
Storable and Data::Dumper. One error always occurs: the store call 
returns a negative error number, usually (-1, 220). Another weird thing I 
observed is that the file size reported by 'ls' doesn't agree with the 
actual disk usage. My last attempt was to load approximately 100 MB of data 
into an MLDBM file. It failed when the load was almost finished, 
because a store failed. 'ls' reported the file size as about 1.169 GB, 
while the partition is only about 250 MB. 'du' showed the correct 
usage, about 100 MB. Can anyone here tell me what the exact problem 
is, and whether it is possible to handle at least 50 MB with these 
methods?
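
        For reference, the tie looks roughly like this (a simplified sketch; 
the file name and sample record are placeholders, not my real data):

    use strict;
    use warnings;
    use Fcntl;
    use MLDBM qw(DB_File Storable);   # DB_File backend, Storable serializer

    my %db;
    tie %db, 'MLDBM', 'data.dbm', O_CREAT | O_RDWR, 0640
        or die "Cannot tie data.dbm: $!";

    # Nested structures have to be assigned as a whole; modifying them
    # in place through the tie does not write the change back to disk.
    $db{record1} = { name => 'foo', values => [ 1, 2, 3 ] };

    untie %db;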

        Another related question: when tying the MLDBM file, what fcntl open 
mode should be used if I need to make minor updates while several 
processes are using the same file, i.e., can it be shared among 
processes for read/write? If MLDBM cannot do this, can Berkeley DB 
(DB_File) do the job?
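
        What I have in mind is something like the untested sketch below, 
where each writer takes an exclusive flock on a separate lock file (an 
arbitrary name here), ties, updates, and unties before releasing the lock. 
I'm not sure whether this is the right approach or whether a different 
open mode is needed:

    use strict;
    use warnings;
    use Fcntl qw(:DEFAULT :flock);
    use MLDBM qw(DB_File Storable);

    open my $lock, '>', 'data.dbm.lock' or die "Cannot open lock file: $!";
    flock $lock, LOCK_EX or die "Cannot lock: $!";

    my %db;
    tie %db, 'MLDBM', 'data.dbm', O_CREAT | O_RDWR, 0640
        or die "Cannot tie data.dbm: $!";

    $db{record1} = { updated => time };   # the minor update

    untie %db;                            # flush before releasing the lock
    flock $lock, LOCK_UN;
    close $lock;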

        Comments appreciated!

-Martin
