In a message dated 6/7/2005 6:37:39 AM Pacific Daylight Time, 
[EMAIL PROTECTED] writes:


> For your specific question about nulls, it depends on how many nulls you 
> expect to find.  Using the above examples, if you only expect a handful of 
> null values, your select statement "should" work the same in either case 
> and it will be much faster with an index.

I think we have to keep in mind what happens in the whole process.
While the SELECT itself will run faster (provided you have not specified 
NO.NULLS when creating the index), we have to ask what you're going to do next 
with that list.

If the number of nulls exceeds perhaps, off the cuff, ten percent of your 
database, then you will not see any end-to-end speed improvement in your 
process.  This is because, in processing that list, you will be jumping all 
over your disk in semi-random order.  (Let's assume the entire file is not 
resident in memory at once.)  Because of this "thrashing", each READ will take 
significantly longer than if you simply read the file, in hashed order, from 
beginning to end, and process only those records with nulls in your target 
field.
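To make the trade-off concrete, here is a minimal back-of-envelope cost model. All the timings and the helper names are illustrative assumptions of mine, not measurements from any UniData/UniVerse system; the point is only that random record fetches cost far more per record than a sequential pass, so the break-even selectivity is surprisingly low.

```python
# Back-of-envelope model: indexed (random) record fetches vs a full
# sequential scan of the file. The per-read costs below are assumed,
# illustrative numbers, not measurements from any particular system.

RANDOM_READ_MS = 8.0   # assumed cost of one seek + read of a record
SEQ_READ_MS = 0.05     # assumed per-record cost when streaming the file

def indexed_cost_ms(total_records, matching_records):
    """The index names the matches; each is then fetched with a random read."""
    return matching_records * RANDOM_READ_MS

def scan_cost_ms(total_records, matching_records):
    """Read every record in hashed order, testing the field as we go."""
    return total_records * SEQ_READ_MS

def break_even_fraction():
    """Fraction of matching records at which the two strategies tie."""
    return SEQ_READ_MS / RANDOM_READ_MS

if __name__ == "__main__":
    n = 1_000_000
    for pct in (0.1, 1, 10):
        m = int(n * pct / 100)
        print(f"{pct:>5}% nulls: indexed {indexed_cost_ms(n, m):>9.0f} ms, "
              f"scan {scan_cost_ms(n, m):>9.0f} ms")
    print(f"break-even selectivity = {break_even_fraction():.2%}")
```

Under these assumed costs the indexed plan already loses well below ten percent selectivity; the exact crossover depends entirely on your disk and caching, which is why measuring end-to-end timing, as suggested below, is the only real answer.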

So in the end, your timing may show that an indexed SELECT on a very common 
value, followed by a loop of processing, actually takes longer than a 
non-indexed SELECT followed by that same loop.

And you thought this stuff was easy :)

Will Johnson
Fast Forward Technologies
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/
