"David Duff" <[EMAIL PROTECTED]> writes:

> i just did some performance tests using DBD::Pg - here are quick
> results.

Awesome David, thanks for this. After reviewing my DB design I
realized that I had some fkey constraints that I didn't need to have
and this was probably slowing things down. I'm rebuilding my DB (w/o
the constraints) and I'll recheck and compare my results to yours.

> i timed inserts of 100k records using the following three techniques:
> 
> 1. row-at-a-time insert using a prepared insert statement.
> 2. "copy <table> from stdin", followed by repeated calls to putline.
> 3. "copy <table> from <tempfile>"
> 
> results:
> 
> 1. 1053 records per second
> 2. 3225 records per second
> 3. 3448 records per second

I was seeing only about 10 records/sec, so I'll gladly take any of
these results.
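For anyone who wants to try the first two techniques themselves, here's a
rough sketch in Perl/DBI. The database name, table, and columns are made
up for illustration; newer DBD::Pg releases spell the putline/endcopy
calls as pg_putcopydata/pg_putcopyend, which is what I've used here:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Assumes a reachable database "test" containing:
#   CREATE TABLE t (id integer, name text);
my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

# 1. row-at-a-time insert using a prepared statement
my $sth = $dbh->prepare('INSERT INTO t (id, name) VALUES (?, ?)');
$sth->execute($_, "row$_") for 1 .. 1000;
$dbh->commit;

# 2. COPY t FROM STDIN, feeding one tab-separated line per row
$dbh->do('COPY t (id, name) FROM STDIN');
$dbh->pg_putcopydata("$_\trow$_\n") for 1001 .. 2000;
$dbh->pg_putcopyend;
$dbh->commit;

$dbh->disconnect;
```

Technique 3 ("COPY t FROM '/path/to/tempfile'") is plain SQL via
$dbh->do(), but remember the file has to be readable by the backend,
not just by your client.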

> your mileage will undoubtedly vary due to, among other things:

I'll also add that one of the new features of DBI-1.30 is the
execute_array() method, which lets you pass execute() many rows at a
time. The drivers that currently support it natively (DBD::ODBC,
DBD::Oracle, ???) have shown pretty decent speedups, maybe x3 to x5 as
well. I'm not sure the Postgres C API has the necessary library calls
to actually implement this natively, but we should talk to Bruce/Tom
about it.
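In case execute_array() is new to anyone, here's roughly what the call
looks like from the DBI side. Even without native driver support, DBI
emulates it by looping internally, so the code below should work against
DBD::Pg today (table and column names are again made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO t (id, name) VALUES (?, ?)');

# One array reference per placeholder: a whole column of values each.
my @ids   = (1 .. 100);
my @names = map { "row$_" } 1 .. 100;

my $tuples = $sth->execute_array(
    { ArrayTupleStatus => \my @status },   # per-row success/failure
    \@ids, \@names,
);
print "inserted $tuples rows\n" if defined $tuples;

$dbh->disconnect;
```

The speedup only shows up when the driver can ship the whole batch to
the server in one round trip; with the emulated fallback you're really
just saving Perl-level method-call overhead.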

jas.
