On Tue, 4 Jul 2006, Alexandre Leclerc wrote:

> 2006/7/4, Marc Weustink <[EMAIL PROTECTED]>:
> > Joost van der Sluis wrote:
> >
> > [snip]
> >
> > /me just jumping in, so I might be completely clueless
> >
> > > If you open the table, the postgresql-client will allocate some memory
> > > to store the data. That'll be 10+8=18 bytes plus something dependent on
> > > its own string-data-format.
> > > At the same time TBufDataset will allocate memory for FPacketRecords
> > > (default=10) records. Thus: 10*(8192+8192)=163840 bytes.
> > > And TDataset will allocate memory for Buffercount (default=10)*(8192
> > > +8192)=163840 bytes.
> > >
> > > So, that'll be: 327698 bytes for the two records.
> > > If you fetch more records, only the postgresql-client and TBufDataset
> > > will allocate more memory, but you can do that math yourself.
> > >
> > > Now, what if the fields were varchar(100)? (Ridiculously high, but even
> > > then...) 18+10*(100+100)+10*(100+100)=4018 bytes.
> > >
> > > So, by defining a limit on the varchar, you've gained 323680 bytes of
> > > memory.
> > >
> > > I think that this little example shows the problem pretty well...
> > >
> > > btw: only postgresql supports unlimited varchar's.
> >
> > isn't it like an Oracle CLOB or Access memo ?
> >
> > Didn't the BDE have a limit on varchar sizes, treating every varchar
> >  > 256 as a blob?
> 
> Is there a design reason to act like that? Having a null-terminated
> string could be a good solution to allocate only the required memory
> (I guess a 'string' type would not be used at that level?)

Well, it's for Delphi compatibility...

And I think it's because you can't build a generic dataset model that
works perfectly with all RDBMSes. Postgres has unlimited varchar
support, so people start using it. But most other databases don't have
it, so TDataset isn't built for it.
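For reference, the buffer arithmetic quoted above can be redone as a quick
sketch. The record sizes and default counts (FPacketRecords = 10,
BufferCount = 10) are taken from the mail; the variable names are
illustrative and this is not the actual FCL code:

```python
# Rough sketch of the buffer arithmetic from the quoted mail.
RECORD_SIZE = 8192 + 8192        # two unlimited-varchar fields, worst case
FPACKET_RECORDS = 10             # TBufDataset default, per the mail
BUFFER_COUNT = 10                # TDataset default, per the mail

pg_client = 10 + 8                            # actual string data, 18 bytes
bufdataset = FPACKET_RECORDS * RECORD_SIZE    # 163840 bytes
dataset = BUFFER_COUNT * RECORD_SIZE          # 163840 bytes

total_unbounded = pg_client + bufdataset + dataset
print(total_unbounded)           # 327698 bytes for two tiny values

# With varchar(100) the fixed per-record size shrinks to 100+100 bytes:
total_limited = 18 + FPACKET_RECORDS * 200 + BUFFER_COUNT * 200
print(total_limited)             # 4018 bytes
print(total_unbounded - total_limited)  # 323680 bytes saved
```

The point of the sketch: the waste comes from the fixed-size record
buffers, which must be sized for the declared maximum, not the data
actually stored.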

> Because some varchar are used to store full email-sources,
> code-sources, etc. So this is very handy. You can store a full
> encyclopedia in it.

Use a blob. That's what they are for.

> This brings a second question. What is going to do TBufDataset with
> huge varchar records? Load it at once in memory or cut the string or
> gradually ask the next part of the string to the db?

What's huge? Varchars larger than dsMaxStringSize are
cut off. The behaviour that you want is provided by BLOBs.
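A minimal sketch of that cut-off, assuming dsMaxStringSize is 8192 (the
real constant and the exact behaviour live in the FCL sources; the names
here are stand-ins):

```python
# Illustrative only: how a fixed-size string field silently truncates data.
DS_MAX_STRING_SIZE = 8192  # assumed value, stand-in for dsMaxStringSize

def store_varchar(value: str) -> str:
    # A fixed record buffer can only hold this many characters;
    # anything beyond that is simply dropped.
    return value[:DS_MAX_STRING_SIZE]

huge = "x" * 20000
stored = store_varchar(huge)
print(len(stored))  # 8192 -- the tail is lost; a BLOB keeps it all
```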

> Thank you for expertise in this regard. (It is appreciated for
> software design with Laz+Psql.)

Yes, most people don't know what's underneath it all. And that's not
really a surprise, since it's complex material.

And once again about the Oracle CLOB fields: that's actually what you
want. But here you can see that, however popular the 'free' databases
are, the good commercial ones are really better. (I'm talking about
Oracle in particular, but I think DB2 can also do the trick. MS SQL not,
of course.) Postgres does a good job, but you can't compare it with
Oracle...


Joost.


_________________________________________________________________
     To unsubscribe: mail [EMAIL PROTECTED] with
                "unsubscribe" as the Subject
   archives at http://www.lazarus.freepascal.org/mailarchives