We have been using "explain" with each of our SQL queries to try to
determine an appropriate setting for the CACHE_SIZE of our database. I
typically see two entries in the PAGECOUNT column of the explain
results.
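
For reference, we simply prefix each query with EXPLAIN and read the
PAGECOUNT and STRATEGY columns from the result (the table and column
names below are just placeholders, not our real schema):

```sql
-- Placeholder query; we do this for each of our real queries
-- and inspect PAGECOUNT / STRATEGY in the EXPLAIN output.
EXPLAIN SELECT col1 FROM some_table WHERE col2 = 123
```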

For one entry, the STRATEGY column reads "RESULT IS COPIED
,COSTVALUE IS", and I'm assuming its PAGECOUNT is the number of pages
used to hold the result set. It seems to scale upward with the number
of rows in the table, as we would expect.

The STRATEGY column for the other entry varies from query to query,
but I'm assuming its PAGECOUNT is the number of pages required to scan
the index and/or table for rows that match the query. This number
seems less affected by the number of rows in the table, but we have
found that we can reduce it through our use of indexes. Our best
results have come from creating an index that contains both the
columns in the WHERE clause and the columns in the SELECT clause. The
query can then search the index rather than scan the table, and it
also appears that when all of the columns in the SELECT are present in
the index, the query can pick their values up from the index rather
than having to look them up in the table.
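
As a sketch of what I mean (table and column names invented), an index
like this:

```sql
-- "Covering" index: the WHERE column first, then the SELECT columns,
-- so the query below never needs to touch the base table.
CREATE INDEX cover_idx ON mytable (key_col, val1, val2)
```

combined with a query like
SELECT val1, val2 FROM mytable WHERE key_col = ?
appears, per EXPLAIN, to be answered from the index alone.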

This method seems to result in fast, low-memory queries when all of the
columns in the index are small, but when there's even one large column
in the index, the index takes up a lot of space and the results are no
longer very different from just scanning the table.

For example, I've been trying to tune a query that selects 3 ints and
an 800-character varchar. With 40,000 rows in the table, the query uses
11148 pages without the varchar included in the index and 10527 pages
WITH the varchar included in the index. Not a huge improvement.

OK, that's the end of the background - here are some questions:

1. Are any of our assumptions about the output of "explain" incorrect?

2. Are there other changes (or other uses of indexes) we can try to
   make this query use fewer pages of memory? Can we create a
   clustered index in SAP DB, for example?

3. Are there attributes we can set for the database such that it will
   try to cache less information in memory and just use the disk? We
   hope to support a worst-case scenario in which the table contains
   520,000 entries, and although "explain" says we'd need about 27,000
   pages of cache, we would rather not tie up 210 MB.

Thanks,
Jeff

_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
