Rbase 8.0.21.31001

We have a very large table, three million plus records and growing, with indexes on 9 columns. The database is about 8 GB. We do update processing that can easily touch 100K rows across multiple columns. Through trial and error I learned that I must drop the indexes on the updated columns or processing takes forever; after the processing I re-create the indexes. I have also found that I need to pack after each DROP / ALTER TABLE ADD, or the database grows to about 17 GB, where 8.0 seems to self-destruct (I get disk errors and can't save the table). Am I missing something? Is there a way to update large tables without dropping the indexes? Has anyone else experienced 8.0 'blowing up' at a little over 17 GB? My indexes are all separate single-column indexes. What will happen to performance if I combine some of the indexes? Will it save substantial space?
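For reference, the drop / update / re-create / pack cycle I'm describing looks roughly like this (table, column, and index names are made up for illustration, and the exact R:BASE syntax may differ from this generic SQL sketch):

```sql
-- Hypothetical table "orders" with a single-column index on "status".

-- 1. Drop the index on the column about to be updated
--    (otherwise the bulk update takes forever).
DROP INDEX idx_status FROM orders

-- 2. Run the bulk update -- this can easily touch 100K+ rows.
UPDATE orders SET status = 'CLOSED' WHERE status = 'OPEN'

-- 3. Re-create the index after the processing is done.
CREATE INDEX idx_status ON orders (status)

-- 4. Pack to reclaim space, or the file balloons toward the
--    ~17 GB point where 8.0 starts throwing disk errors.
PACK
```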