On 29 Apr 2004 at 13:54, Josh Berkus wrote:

> Gary,
> It's also quite possible that MSSQL simply has a more efficient index-scanning 
> implementation than we do.    They've certainly had incentive; their storage 
> system sucks big time for random lookups and they need those fast indexes.  
> (just try to build a 1GB adjacency list tree on SQL Server.   I dare ya).
> Certainly the fact that MSSQL is essentially a single-user database makes 
> things easier for them.    They don't have to maintain multiple copies of the 
> index tuples in memory.    I think that may be our main performance loss.

Possibly, but MSSQL certainly uses data straight from indexes and cuts out the 
subsequent (possibly random-seek) fetch of the data row. This is also why the 
"Index Tuning Wizard" often recommends multi-column compound indexes in 
some cases. I've tried these recommendations on occasion, and they certainly 
speed up the selects significantly. If anything, the scan on the new compound 
index must be slower than on the original single-column index, and yet it still 
gets the data faster.

This indicates to me that it is not the scan (or I/O) performance that makes 
the difference, but not having to go and fetch the data row.
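The effect described above can be sketched in a few lines. As an illustration only (SQLite here stands in for MSSQL's covering indexes; the table and index names are hypothetical), replacing a single-column index with a compound index that covers every column the query touches lets the engine answer from the index alone, skipping the table-row fetch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
             " customer INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [(i % 10, i * 1.5) for i in range(1000)])

# Single-column index: the index finds matching rows, but the engine
# must still visit the table to read `total`.
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
plan_single = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer = 3"
).fetchall()

# Replace it with a compound index covering both columns referenced by
# the query, so no table fetch is needed at all.
conn.execute("DROP INDEX idx_customer")
conn.execute("CREATE INDEX idx_customer_total ON orders (customer, total)")
plan_covering = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer = 3"
).fetchall()

# The second plan's detail string reports "COVERING INDEX", i.e. the
# data row is never fetched.
print(plan_single[0][3])
print(plan_covering[0][3])
```

The wider compound index costs more to scan per entry, yet the query is faster, which matches the observation above: the win comes from skipping the row fetch, not from the scan itself.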


---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
