I have received a number of useful responses, but most hinted that I need to provide a
little more information, so here goes.

The piece of offending code contains six SELECT queries (executed against 9000 objects)
of the form

SELECT x FROM table WHERE a = b INTO _temptable NOSELECT

NONE of the queries are joins between tables (this might be a clue: perhaps I can
simplify my dataset at the outset).

The last two queries are of the form

SELECT x FROM table WHERE a=b or x=y

These last two queries are designed to select two objects from table X. The next
statement is an OBJECTS COMBINE DATA... This process is not necessarily run for every
record, only if certain criteria are matched in the preceding queries on the _temp
tables.
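
In case it helps, the combine step looks roughly like this (table, column and variable
names are invented for illustration; OBJECTS COMBINE acts on the current selection of an
editable table, so the pair of objects is selected first):

Select * From Links Where LinkID = nLink1 Or LinkID = nLink2
' fLen1 and fLen2 were fetched from the two records earlier
Objects Combine Data length_m = fLen1 + fLen2

The Data clause just sets the column values on the combined record; here the two link
lengths are summed.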

The general progression of the code is a number of SELECT ... INTO ... NOSELECT
queries that take data from the original data table and cut it down into smaller and
smaller temp tables, e.g.

SELECT * FROM table WHERE a=b INTO _temp1
SELECT * FROM _temp1 WHERE x=y INTO _temp2
SELECT * FROM _temp2 WHERE x=y INTO _temp3

This is an oversimplification, but you get the idea. The bits I have removed generally
extract records from those tables (using FETCH FIRST, FETCH NEXT) and compare them to
variables. The nature of the selects is generally such that the original table contains
thousands of records while _temp1, _temp2, etc. generally contain only a handful of
records, fewer than 10.
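
For completeness, the extract-and-compare bits are loops along these lines (variable
and column names are invented for illustration):

Fetch First From _temp1
Do While Not EOT(_temp1)
    If _temp1.NodeID = nCurrentNode Then
        ' matching record found; act on it
    End If
    Fetch Next From _temp1
Loop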

I would guess that my _temp tables are not indexed, which may slow things down, but
these are very small tables.
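
If indexing were a base-table question it would be a one-liner per column (column name
invented), though whether MapInfo allows it on a query temp table at all, and whether it
pays off on a table of under 10 rows, are both doubtful:

Create Index On _temp1 (NodeID)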

I have set FastEdit ON and Undo OFF for all of my base tables.
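
In statement form, repeated for each base table, that is (table name hypothetical):

Set Table Roads FastEdit On
Set Table Roads Undo Off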





>>> "Scott Fagg" <[EMAIL PROTECTED]> 27/4/00 9:42:01 am >>>
I have a piece of MapBasic code that processes a road network, removing redundant nodes
and links and leading to a substantially smaller map in terms of number of objects. This
we then feed into a transport model (Emme/2).

My problem is that the large size of the network (9000 nodes, 9000 links) leads to 
very long run times for my process, ranging from 5 to 12 hours.

The code is not especially complex but makes use of a lot of queries. Each iteration
(one per node) involves probably at least 5 queries. I discovered the hard way that
after MapInfo has processed 10000 queries, it hangs, having run out of available temp
file names. I've resolved this by periodically closing all temp files from within my
code.
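
The cleanup itself is straightforward: every so often, close the query tables by name,
e.g. (counter and table names invented):

iQueries = iQueries + 1
If iQueries Mod 5000 = 0 Then
    Close Table _temp1
    Close Table _temp2
    Close Table _temp3
End If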

Are there any guidelines for optimising MapBasic code to extract more performance from
it?

Turning indexing on and off had very little effect.

----------------------------------------------------------------------
To unsubscribe from this list, send e-mail to [EMAIL PROTECTED] and put
"unsubscribe MAPINFO-L" in the message body, or contact [EMAIL PROTECTED]
