J. Jeff Roberts wrote:

> I have a table with 52,000 rows.
> 
> This takes less than a second:
> 
> select msgnum, tstamp, message FROM mylog ORDER BY tstamp DESC, msgnum
> DESC
> 
> This takes 13 seconds:
> 
> select msgnum, tstamp, message FROM mylog WHERE msgnum < 100 ORDER BY
> tstamp DESC, msgnum DESC
> 
> The only difference is that the second query contains the WHERE.
> 
> We have also found that "WHERE msgnum < 40000" is almost as fast as
> getting the entire table.
> 
> msgnum is the primary key and I have an index on tstamp DESC 
> and msgnum
> DESC.  The message field is not included in the index because it is
> VARCHAR(2048).
> 
> "Explain" tells me that an index scan is being done in both cases.
> 
> Why is selecting the entire table (or most of the table) so much
> faster?  This seems counter-intuitive.
> 
> -Jeff
> 
> P.S. I will add a fetch loop to the testcase because I'm curious about
> the overall time taken.

I assume you are using SQL Studio for your tests.
In that case you don't fetch the whole table, but only as many
rows as fit into one communication packet.
With the condition msgnum < 100, only a few rows match, so the
server may have to read most or even all of the table before
the packet is full.
On the other hand, nearly every row matches msgnum < 40000, so the
packet fills up after reading only the first rows.
But I think you'll already have discovered this with your fetch loop test.
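The effect can be sketched with a small simulation. This is only an
illustration, not SAP DB's actual packet mechanics: it uses SQLite in
memory, a made-up packet size of 50 rows, and shuffled timestamps so
that tstamp order is not correlated with msgnum. It counts how many
rows must be examined in ORDER BY order before the first "packet" of
matching rows is full:

```python
import random
import sqlite3

N_ROWS = 52_000
PACKET_ROWS = 50  # pretend one communication packet holds 50 result rows

# Build a toy log table (names and sizes are illustrative). Timestamps
# are shuffled so scanning in tstamp order visits msgnums in random order.
rng = random.Random(42)
tstamps = list(range(N_ROWS))
rng.shuffle(tstamps)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mylog (msgnum INTEGER PRIMARY KEY, tstamp INTEGER, message TEXT)"
)
conn.executemany(
    "INSERT INTO mylog VALUES (?, ?, ?)",
    [(n + 1, tstamps[n], "msg") for n in range(N_ROWS)],
)

def rows_scanned_for_first_packet(predicate):
    """Walk the table in ORDER BY tstamp DESC, msgnum DESC order
    (SQLite sorts here; SAP DB would walk the index) and count how
    many rows are examined before PACKET_ROWS matching rows are found."""
    scanned = matched = 0
    cur = conn.execute("SELECT msgnum FROM mylog ORDER BY tstamp DESC, msgnum DESC")
    for (msgnum,) in cur:
        scanned += 1
        if predicate(msgnum):
            matched += 1
            if matched == PACKET_ROWS:
                break
    return scanned

# Almost every row satisfies msgnum < 40000, so the first packet fills
# after examining only a handful of rows.
fast = rows_scanned_for_first_packet(lambda n: n < 40000)
# Only 99 rows satisfy msgnum < 100, so filling the packet means
# reading deep into the table.
slow = rows_scanned_for_first_packet(lambda n: n < 100)
print(fast, slow)
```

The unfiltered query and the msgnum < 40000 query both return their
first packet after a short scan, while msgnum < 100 forces a long one,
which is why the elapsed times differ so sharply even though EXPLAIN
shows an index scan in both cases.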

Best regards,
Holger
SAP Labs Berlin
_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
