I don't have precise details, but I have noticed that MapInfo's database
management does hit "walls" when you try to do inserts in large tables. In fact,
I've sometimes found that breaking a very large table into parts and updating
these separately is faster than letting it grind through the whole thing in one
go. This would be an interesting experiment, but I bet that the total time for
inserts and edits in a table increases exponentially with the number of table
records. If that's so, then for a table of sufficient size there will be a point
where updating it will take millennia. That is, if you don't get the *** OUT OF
CHEESE ERROR *** REDO FROM START *** general protection fault first! (For those
who don't know Hex, Unseen University's calculating engine housed in the High
Energy Magic building, you are missing a treat!)
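For what it's worth, a split like that can be done from the MapBasic window with
a pair of selections. This is just a sketch; the table names, file paths, and
the 150000 row threshold are placeholders, not anything specific to your data:

' Split a big table roughly in half by row number. RowID is
' MapBasic's built-in row-number column; names are placeholders.
Select * From BigTable Where RowID <= 150000 Into FirstHalf
Commit Table FirstHalf As "C:\data\firsthalf.tab"
Select * From BigTable Where RowID > 150000 Into SecondHalf
Commit Table SecondHalf As "C:\data\secondhalf.tab"

Then run your updates against each half and, if need be, append them back
together afterwards.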

You might try entering into the MapBasic window:
Set Table <tablename> Fastedit On Undo Off

to speed things up. Or remove the index keys. (I think it's the indexing that's
the bottleneck.)
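Putting those two ideas together, something along these lines might be worth a
try. Again a sketch only, with placeholder table and column names ("Parcels",
"ParcelID"); check the exact statement syntax against the MapBasic reference:

' Speed up bulk edits: skip the undo log for this table
Set Table Parcels FastEdit On Undo Off
' Drop the index before the bulk insert...
Drop Index Parcels (ParcelID)
' ...run the inserts here, then rebuild the index once at the end
Create Index On Parcels (ParcelID)
Commit Table Parcels

Rebuilding the index once after the load avoids paying the index-maintenance
cost on every single inserted row, which is where I suspect the time goes.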

- Bill Thoen

Kevin Blair wrote:

> Does anyone have any details of whether MapInfo has any
> technical/performance issues with working with large tables. We have a
> project at the moment where we are seeing a steep performance dropoff when
> inserting records into a table. The table with the worst performance at the
> mo has about 300k rows in it (all mappable objects).
>
> Kevin
> --
>             Hex (now running slower since they installed Casements 'Year of
> the Sloth') - Terry Pratchett

----------------------------------------------------------------------
To unsubscribe from this list, send e-mail to [EMAIL PROTECTED] and put
"unsubscribe MAPINFO-L" in the message body, or contact [EMAIL PROTECTED]
