On Nov 2, 2005, at 7:36 AM, [EMAIL PROTECTED] wrote:
> Am I alone in thinking that a division operator that does
> different things depending on the declared datatype of a
> column is an abomination?
I don't think you're alone. I do think you're wrong.
By declaring column a as INTEGER and column b as REAL, I am
*explicitly* asking SQLite to treat the values I put in a as
INTEGERs and those in b as REALs. If I declare column a as NUMERIC, or
without a type, I would expect your proposed behavior. And of course
if I ask for a value as a particular type - whether through the
SQLite API or using a cast expression - I expect it to be treated as
that type.
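To make that concrete, here is a small sketch (the table and values
are just illustrative) of the behavior I'm describing, assuming the
current integer-division semantics for INTEGER operands:

  CREATE TABLE t (a INTEGER, b REAL);
  INSERT INTO t VALUES (7, 7.0);

  SELECT a / 2 FROM t;                -- 3   (a was declared INTEGER)
  SELECT b / 2 FROM t;                -- 3.5 (b was declared REAL)
  SELECT CAST(a AS REAL) / 2 FROM t;  -- 3.5 (I asked for a REAL result)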
I like to think of this as a "have my cake and eat it too" approach:
The typing can be as static or as dynamic as I need it to be for a
particular application, and I have a full set of tools for making
that determination. The database isn't preventing me from doing what
I need in *either* direction.
To give you an analogy, Objective-C is a dynamic object-oriented
extension to C that's modeled on Smalltalk. All message dispatch is
dynamic and happens at runtime; however, users can declare variables
that point to objects using either an "id" type that means "any kind
of object" or by explicitly specifying a class. If you do specify a
class, the compiler will do what type checking it can and warn you if
it notices something out of the ordinary (or give you an error for a
type/class it's never encountered before). This lets you skip
compile-time type checking where you don't want it, and get extensive
compile-time checking where you do.
This is the kind of behavior I think would be very valuable in
SQLite. If I specify a particular affinity, SQLite should adhere to
it, because I probably had a reason for specifying it. But if I
*didn't* specify an affinity, I probably had a reason for *that
too*. Both are useful options, and SQLite should let me choose
rather than force the decision one way (always enforce affinity,
i.e. the SQL standard way) or the other (never enforce affinity,
i.e. the proposed SQLite way).
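For instance (a made-up table, just to show the dynamic side), a
column declared without a type happily stores whatever I give it:

  CREATE TABLE notes (v);            -- no declared type, so no affinity
  INSERT INTO notes VALUES (1);
  INSERT INTO notes VALUES (1.5);
  INSERT INTO notes VALUES ('one');

  SELECT v, typeof(v) FROM notes;    -- 1|integer, 1.5|real, one|text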
> Does anybody have a real-world example where any of this
> will actually make a difference, or is this really just an
> academic argument?
It could lead to cascading inaccuracy in situations where developers
do expect, plan for, or rely upon traditional integer truncation
behavior.
One area where this may be the case is graphics: many developers are
used to working in whole-pixel units with integer truncation on
division, and having more exact arithmetic performed silently inside
SELECT statements may actually give them erroneous results,
particularly if they declared the columns as INTEGER and expect those
columns to keep that affinity.
An example of this might be in locating images that can be trivially
scaled (e.g. by sampling every second pixel) to fit a particular
display area. The desired-area calculation could occur in the
SELECT, and the proposed SQLite behavior could weed out images that
would actually fit (e.g. with x = 101 and u = 50, (x/2 <= u) is true
in C but false under the proposed behavior).
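Sketched out with made-up numbers:

  -- Hypothetical query: find images that still fit a 50-pixel-wide
  -- slot after scaling down by sampling every second pixel.
  CREATE TABLE images (name TEXT, width INTEGER);
  INSERT INTO images VALUES ('photo', 101);

  SELECT name FROM images WHERE width / 2 <= 50;
  -- With integer truncation:     101 / 2 = 50,   so 'photo' is returned.
  -- With the proposed behavior:  101 / 2 = 50.5, so 'photo' is weeded out.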
I think behaving differently from the SQL standard in this situation
-- if an affinity is specified -- would be seen as a bug in SQLite.
-- Chris