My recollection of using MPE is of performance measured in furlongs per fortnight, and of having to do that allocation strictly in order to get contiguous space, to try to counter the dismal performance of the HP-IB disks, which were boat anchors on some of the HP 3000s.

Fortunately we don't have to indulge in that nonsense any more.

Rob Sciuk wrote:
On Fri, 1 Dec 2006, John Stanton wrote:


I cannot see a reason for what you propose, but you could do it by brute force and ignorance: populate the DB with 1 million rows, then delete them all to put all of that space on the free-page list. Your insertions will then use the freed pages, not fresh ones.
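
As a rough sketch of what that brute-force approach might look like from Python's sqlite3 module (the table name, row count and payload size below are purely illustrative, not anything the OP specified):

    import sqlite3

    conn = sqlite3.connect("preallocated.db")
    conn.execute("CREATE TABLE IF NOT EXISTS filler "
                 "(id INTEGER PRIMARY KEY, payload BLOB)")

    # Grow the file: insert a large batch of dummy rows.
    dummy = b"x" * 1024   # roughly 1 KB per row; size to taste
    conn.executemany("INSERT INTO filler (payload) VALUES (?)",
                     ((dummy,) for _ in range(1000000)))
    conn.commit()

    # Delete everything; with auto_vacuum off (the default) the pages
    # go onto the free list instead of being given back to the OS.
    conn.execute("DELETE FROM filler")
    conn.commit()
    conn.close()

After that the file sits at its high-water mark, so subsequent inserts draw from the free-page list rather than extending the file -- provided nothing runs VACUUM in the meantime.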


Actually, this harks back to the dedicated, transaction-oriented file systems (HP 3000/MPE) which were extent-based. File "extents" could be pre-allocated to ensure that a minimum number of transactions could be captured before the disk ran out of space -- this also had the advantage of contiguous allocation, which ensured data proximity. The administrator could control the size and number of extents, as well as the number pre-allocated (if any), and indeed the maximum file size.

I believe that some modern file-system development work is looking back towards the '60s and '70s, and that modern high-performance extent-based file systems are in development -- at least in the open-systems (Linux?) area -- but I have no data to back this up, and no clue as to whether they are ready for prime time (other than a fuzzily remembered magazine article).

As for the OP, this type of operation is OS-dependent, and should not be relegated to SQLite, IMHO.
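
To make the OS-dependent point concrete: on Linux, for instance, an application can ask the file system to reserve blocks for a file up front via posix_fallocate. A minimal sketch of that kind of call (Python on a POSIX system; the filename and size are made up):

    import os

    RESERVE_BYTES = 256 * 1024 * 1024   # reserve 256 MB, purely for illustration

    fd = os.open("some_data_file", os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # Ask the file system to allocate the blocks now, so later
        # writes land in space that was reserved (and often contiguous).
        os.posix_fallocate(fd, 0, RESERVE_BYTES)
    finally:
        os.close(fd)

This is only meant to show the shape of an OS-level pre-allocation facility; it is not a recipe for pre-sizing an SQLite database file, since SQLite manages its own file layout.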

-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------