Well, if nothing can ever equal null, then why isn't MySQL's query parser smart enough to reduce my queries to something more sensible? If I'm writing this:

  SELECT *
  FROM sometable
  WHERE somecolumn = NULL OR somecolumn = 'abc';

Why isn't it able to reduce the query to something more like this:

  SELECT *
  FROM sometable
  WHERE somecolumn = 'abc';

Since it should already know that somecolumn = NULL will always evaluate to FALSE (or is it NULL? Either way, it's never TRUE)? If I run the first query above against 40 million records, it takes about 15 seconds, but the second query takes about 0.050 seconds. The test against NULL seems to make the query skip the index, presumably because NULL values aren't indexed.
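
For what it's worth, if matching NULL rows were actually the intent, rewriting the test as IS NULL should let the optimizer use the index on somecolumn. A rough sketch against the same sometable, with EXPLAIN to confirm whether the index is chosen:

  SELECT *
  FROM sometable
  WHERE somecolumn IS NULL OR somecolumn = 'abc';

  -- Check the access plan; MySQL can use ref_or_null for this shape of predicate.
  EXPLAIN SELECT *
  FROM sometable
  WHERE somecolumn IS NULL OR somecolumn = 'abc';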

Am I expecting too much of the parser?

Of the optimizer...

I could as easily write:

  where myintegercolumn = 'test'

which would also evaluate to false (I haven't tried it; depending on how it
is evaluated, this could result in a datatype error instead ;-) )

Would the DBMS code have to check for all of these silly constructs? Talk
about bloat.
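
As an aside, if I remember MySQL's implicit conversion rules correctly, this
particular construct would not raise a datatype error at all: the string is
coerced to a number, so 'test' becomes 0 and the predicate matches rows where
the integer column is 0, with only a truncation warning. A minimal sketch,
assuming an integer column myintegercolumn on the sometable from the original
question:

  -- MySQL coerces 'test' to 0 for the numeric comparison, so this matches
  -- rows where myintegercolumn = 0 and emits a truncation warning rather
  -- than failing with a datatype error.
  SELECT *
  FROM sometable
  WHERE myintegercolumn = 'test';

  SHOW WARNINGS;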

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com
