> On Sat, 23 Apr 2016 08:56:14 -0400
> "Keith Medcalf" <kmedcalf at dessus.com> wrote:
> 
> > Those things that those other DBMSes do are holdovers to maintain
> > backwards compatibility with the good old days when dinosaurs ruled
> > the earth

> As amusing as your rant is, it's not accurate.  Treating columns as
> types is a "holdover" from mathematics and logic.  It has nothing to do
> with maintaining backwards compatibility, or the cost of RAM.  Many new
> DBMS engines have been written in recent years (notably columnar
> stores) and, of those that support SQL, none abandoned strict column
> types.

> Granted, duck-typing is/was a design choice.  The good news is it makes
> simple things simple.  But it's also the source of quite a bit of
> confusion and perplexity, not to mention errors and complexity:
> *inserting* is easy, but selection is made more complicated.  The
> application has to be prepared to cope with inconsistencies (and
> errors) in the data that are prevented by type-checking.  In SQLite,
> the "dba" has to exercise care, in a very redundant way, to effect
> type-enforcement of a kind he gets "for free" with a standard DBMS.
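(For context, the "redundant" care being described usually takes the form of a per-column CHECK constraint on typeof(). A minimal sketch, using Python's stdlib sqlite3 module; the table and column names are made up for illustration:)

```python
import sqlite3

# SQLite will happily store text in an INTEGER column, so strictness
# has to be spelled out column by column with a CHECK on typeof().
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE strict_t (
        n INTEGER CHECK (typeof(n) = 'integer')
    )
""")

con.execute("INSERT INTO strict_t VALUES (42)")   # accepted

try:
    # 'forty-two' is not convertible to an integer, so it stays text
    # and the CHECK constraint rejects the row.
    con.execute("INSERT INTO strict_t VALUES ('forty-two')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

stored = con.execute("SELECT n FROM strict_t").fetchone()[0]
```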

The specification of "length/precision", as well as the use of "varchar", 
"varbinary" and the like, are holdovers from the days when files had to have 
fixed record lengths so BDAM would work.  It is only since the bitty-boxen 
(1970s) that variable-length records have worked worth a darn, and that the 
bitty-boxen started to treat "files" as a bag-o-bytes rather than a 
collection-of-records.

When a file is a bag-o-bytes rather than a collection-of-records (fixed-length 
records at that), there is no longer any need to keep the records the same 
length, hence no need to specify the "reserved length" of things.  Contrary to 
common mis-belief, varchar(100) does not mean "a variable-length character 
field with a length of up to 100 characters"; it means a character field with a 
length of 100 into which a smaller amount of data may be stored, padded out to 
100 characters with special discarded-upon-retrieval padding characters.  That 
is, the difference between CHAR(100) and VARCHAR(100) is that in the former 
case data shorter than 100 characters is padded with spaces, and in the latter 
case it is padded with nulls.  The column is still, however, 100 characters 
wide.  Note that there are a few DBMS which were developed, after the 
dinosaurs died, on bag-o-byte machines rather than collection-o-record 
machines.  For those specific DBMS, varchar may mean that the field length is 
variable and that the data is prefixed with its length (Pascal style), and 
that a "file" is therefore interpreted as a bag-o-bytes.
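(SQLite itself behaves like one of those bag-o-bytes-era systems: the declared 
length is parsed but neither enforced, truncated, nor padded.  A quick sketch 
with Python's stdlib sqlite3 module; table and column names are invented for 
illustration:)

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The (5) is accepted syntactically but carries no storage meaning;
# the column just gets TEXT affinity.
con.execute("CREATE TABLE t (s VARCHAR(5))")
con.execute("INSERT INTO t VALUES ('much longer than five')")

stored, = con.execute("SELECT s FROM t").fetchone()
# stored comes back untruncated and unpadded.
```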

Note that the advent of VSAM style I/O made variable length blocks a bit more 
efficient, but not much.  It was not until the bag-o-bytes file came along that 
variable length database records became possible.

Non-duck-typing is simply a holdover from the days when everything had a type.  
There was no such thing as a "variant" or an "object", and pointers had to know 
what it was they were pointing at.

In order for Codd and Date to conceive of a world wherein duck-typing existed, 
a duck-typed language would have to exist first.  Since such was inconceivable 
before its invention, it is no wonder that it never occurred to anyone that a 
database column could be ducky too.
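(A ducky column in action, for the curious -- one column, five storage classes, 
inspected with typeof().  A minimal sketch using Python's stdlib sqlite3 
module; the table name is made up:)

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A column with no declared type at all: every row keeps whatever
# storage class its value arrived with.
con.execute("CREATE TABLE ducky (v)")
con.executemany(
    "INSERT INTO ducky VALUES (?)",
    [(1,), (2.5,), ("two",), (b"\x00",), (None,)],
)

types = [row[0] for row in
         con.execute("SELECT typeof(v) FROM ducky ORDER BY rowid")]
# One column, five different answers from typeof().
```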
