But with today's very fast CPUs, and RAM buffers caching the disks'
directory entries, variable-length records would probably come out
ahead: they need much less I/O for the same data.
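
To make Simon's arithmetic (quoted below) concrete, here is a minimal
C sketch of a fixed-length lookup; the 128-byte row width and the file
name are made up for illustration:

#include <stdio.h>

#define ROW_SIZE 128  /* hypothetical fixed row width, in bytes */

/* Fetch row `rowid` from a file of fixed-length records:
 * one multiply, one seek, one read, no per-row parsing. */
static int read_row(FILE *f, long rowid, unsigned char buf[ROW_SIZE])
{
    if (fseek(f, rowid * ROW_SIZE, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, ROW_SIZE, f) == ROW_SIZE ? 0 : -1;
}

int main(void)
{
    unsigned char row[ROW_SIZE];
    FILE *f = fopen("table.dat", "rb");  /* hypothetical data file */
    int ok = f != NULL && read_row(f, 7463, row) == 0;
    if (f) fclose(f);
    return ok ? 0 : 1;  /* on success, row[] held the bytes of row 7463 */
}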

On 23/04/2016 21:22, Scott Robison wrote:
> On Apr 23, 2016 6:21 PM, "Simon Slavin" <slavins at bigfraud.org> wrote:
>>
>> On 24 Apr 2016, at 12:58am, Scott Robison <scott at casaderobison.com> wrote:
>>
>>> For any SQL datastore, the way the data is stored is completely an
>>> implementation detail. The SQL engine would be free to serialize all
>>> values to text and store them in 4K pages if it wanted to, then
>>> deserialize them on the way back. I certainly don't know of any that do
>>> that, but the impetus for the creation of VARCHAR fields (I imagine) was
>>> specifically to avoid storing padding for data that did not require it.
>> Speed plays a part as well as storage space.  Back in the days of
>> mainframes and minicomputers, it was far more efficient to store
>> fixed-length records than variable-length records.  To look up row 7463
>> in a file you would just multiply 7463 by the number of bytes in a row,
>> then start reading from that byte.  Think about how much more processing
>> and access SQLite has to do just to read a row from a database file.
>>
>> So if you could make your table up of integers, floats, and text with
>> character limits on them you could get fixed-length rows, which might
>> reduce your access time by 60% or more.  Such a decrease in access time
>> could mean the difference between being able to update a database live
>> or being able to update only during an overnight run.
>
> This was particularly true in the case of media like 9-track mag tape
> (effectively 1-dimensional access) vs a modern hard drive (effectively
> 3-dimensional access).
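
To show the contrast the quoted message is driving at, here is a
companion sketch under the same made-up layout: if each row instead
carries a 2-byte length prefix, locating row N means skipping over
every earlier row.

#include <stdio.h>

/* Fetch row `rowid` from a file of variable-length records, each
 * stored as a 2-byte big-endian length prefix followed by that many
 * bytes of data.  Without an index, every earlier row's prefix must
 * be read, and its data skipped, to find where row `rowid` starts. */
static long read_var_row(FILE *f, long rowid, unsigned char *buf, long bufsz)
{
    long i;
    for (i = 0; i <= rowid; i++) {
        int hi = fgetc(f), lo = fgetc(f);
        if (hi == EOF || lo == EOF)
            return -1;                        /* fewer rows than rowid */
        long len = ((long)hi << 8) | lo;
        if (i < rowid) {
            if (fseek(f, len, SEEK_CUR) != 0) /* skip this row's data */
                return -1;
        } else {
            if (len > bufsz)
                return -1;
            return fread(buf, 1, len, f) == (size_t)len ? len : -1;
        }
    }
    return -1;
}

The fixed-length version does constant work per lookup; this one does
work proportional to the row number, which is why variable-length
stores pay for an index structure (SQLite uses B-tree pages) instead
of the linear walk.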
