On Thu, Dec 16, 2004 at 06:58:17PM -0500, Rudy Lippan wrote:
> > > >
> > > > I can, for it all depends on how the benchmarking is done, and I'd 
> > > > have to say, who cares if Pg is 40% faster? 
> > 
> > I do! The DBI is designed to be very fast. Speed should never be a
> 
> You are right. I just don't trust benchmarks, and I somehow doubt that the
> overall performance (in real-world situations) of DBD::Pg is that bad.
> However, for a simple application, just loading DBI/DBD::Pg would probably
> give Pg a 40% run-time advantage. Not to mention that $dbh->do("Select 1");
> would probably be at least 50% slower, because DBD::Pg does a scan of the
> statement and placeholder parsing &c. before sending it to PostgreSQL.

Ugh. I've just looked at the code for _prepare, and it does look inefficient:
multiple scans of the string and multiple memory allocations.
There's certainly some room for improvement there.

> > Assuming that the "40% faster" relates mainly to fetching then
> > here's a quick review of the relevant code...
> 
> I was assuming that that number had to do with prepare -- which I know to be
> much slower :)
> 
> A quick benchmark (1, 100, and 100K rows; 5 cols; varchar 128; random-length
> data) of the fetch code shows the runtime to be almost the same. I'll play
> with it some more, but if you want, I can send the code up to dbi-dev.

There's no need to send the fetch code if it performs about the same as Pg.
But I suggest making the NULL change and the utf8 changes I mentioned.

Tim.
