Joost van der Sluis wrote:

[snip]

/me is just jumping in, so I might be completely clueless

If you open the table, the postgresql-client will allocate some memory
to store the data. That'll be 10+8=18 bytes, plus some overhead that
depends on its own string-data format.
At the same time TBufDataset will allocate memory for FPacketRecords
(default=10) records. Thus: 10*(8192+8192)=163840 bytes.
And TDataset will allocate memory for BufferCount (default=10) records:
10*(8192+8192)=163840 bytes.

So, that'll be: 327698 bytes for the two records.
If you fetch more records, only the postgresql-client and TBufDataset
will allocate more memory, but you can do that math yourself.

Now, what if the fields were varchar(100)? (Ridiculously high, but even
then...) 18+10*(100+100)+10*(100+100)=4018 bytes.

So, by defining a limit on the varchar, you've saved 323680 bytes of
memory.
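The arithmetic above can be sketched as follows (a minimal illustration only; `dataset_memory` and its parameter names are made up for this sketch, and the 8192-byte per-field buffer for an unlimited varchar is the assumption the mail starts from):

```python
def dataset_memory(field_sizes, packet_records=10, buffer_count=10, client_bytes=18):
    """Estimate memory for one opened table, per the mail's model:
    client-side data + TBufDataset packet buffers + TDataset row buffers."""
    record_size = sum(field_sizes)          # bytes buffered per record
    return (client_bytes
            + packet_records * record_size  # TBufDataset: FPacketRecords records
            + buffer_count * record_size)   # TDataset: BufferCount records

unlimited = dataset_memory([8192, 8192])  # two unlimited varchar fields
limited = dataset_memory([100, 100])      # the same fields as varchar(100)
print(unlimited, limited, unlimited - limited)  # → 327698 4018 323680
```

The difference scales with the number of buffered records, so the gap only grows as you fetch more rows.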

I think that this little example shows the problem pretty well...

btw: only postgresql supports unlimited varchars.

Isn't that like an Oracle CLOB or an Access memo?

Didn't the BDE have a limit on varchar sizes, treating every varchar > 256 as a blob?

Marc

_________________________________________________________________
    To unsubscribe: mail [EMAIL PROTECTED] with
               "unsubscribe" as the Subject
  archives at http://www.lazarus.freepascal.org/mailarchives
