In message
<[EMAIL PROTECTED]>,
David A Barrett <[EMAIL PROTECTED]> writes
We have a monster file that hit the 2GB mark way back when, so we divided
it up into a distributed file. It continues to grow, and the keys are
largely sequential, with a partitioning algorithm that puts each block of
one million records into a part.
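For a purely sequential numeric key space with one million records per
part, as described above, the partitioning algorithm might look something
like the following. This is a hypothetical sketch, not the actual code;
the function name, 1-based key numbering, and 1-based part numbering are
all assumptions:

```python
def part_for_key(key, records_per_part=1_000_000):
    """Map a sequential numeric record key to its part-file number.

    Hypothetical sketch: assumes keys and part numbers both start
    at 1, with one million records per part as described above.
    """
    return (key - 1) // records_per_part + 1

# Keys 1..1,000,000 land in part 1, the next million in part 2, etc.
```

As long as every read and, more importantly, every write uses the same
mapping, each part file remains internally consistent on its own.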
Indexes have always been partfile-specific, since the partfiles are
individual files that can be accessed by themselves as standalone files,
as long as the items accessed, or more importantly written, match the
algorithm. Since that is the case, rebuilding an index against the
partfile would be much
Currently the file grows at a pace of about one m