I'm having similar problems with DBD::ODBC, and I wanted to clarify my
understanding of a couple of things you mention here.

> > On Thu, Nov 21, 2002 at 09:25:13AM -0700, Jason E. Stewart wrote:
> >> I'd be grateful if someone could give me a reality check. I have
> >> 250k rows I want to insert into Postgres using a simple Perl script
> >> and it's taking *forever*. According to my simple timings, it seems
> >> to be only capable of handling about 5,000 rows/hr!!! This seems
> >> ridiculous. This is running on a pretty speedy dual processor P4,
> >> and it doesn't seem to have any trouble at all with big selects.
> 
> Important questions:
> - Is AutoCommit on or off?
Is it generally better to keep AutoCommit *off* for large-scale inserts?
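
If turning AutoCommit off is the way to go, here is the batched-commit
pattern I have in mind -- just a sketch, where the DSN, table name, and
get_next_row() are placeholders for my real code:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection details -- adjust for a real database.
my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                       { AutoCommit => 0, RaiseError => 1 });

my $sth = $dbh->prepare('INSERT INTO mytable (a, b) VALUES (?, ?)');

my $count = 0;
while (my $row = get_next_row()) {        # get_next_row() is hypothetical
    $sth->execute(@$row);
    $dbh->commit unless ++$count % 1000;  # commit in 1000-row batches
}
$dbh->commit;     # flush the final partial batch
$dbh->disconnect;
```

My assumption is that committing every N rows avoids both per-row
transaction overhead and one huge transaction; is 1000 a reasonable batch
size, or does it not matter much?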

> - Can you use the 'copy' command instead of 'insert'? Copy is good for
> large-scale batch inserts.
Is this a DBD::Pg-specific command, or is it a newer addition to SQL? I
ask because I didn't see it in the DBI documentation.
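
In case it helps, here is my guess at how COPY would be driven from a
Perl script, assuming a DBD::Pg new enough to support
pg_putcopydata/pg_putcopyend (the table and @rows are placeholders):

```perl
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '', { RaiseError => 1 });

# COPY ... FROM STDIN expects one tab-separated row per line.
$dbh->do('COPY mytable (a, b) FROM STDIN');
for my $row (@rows) {                     # @rows is a placeholder
    $dbh->pg_putcopydata(join("\t", @$row) . "\n");
}
$dbh->pg_putcopyend();
```

Am I right that this bypasses per-row statement parsing entirely, which
is where the speedup comes from?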

> - What types are those columns?
>
> It's possible it's postgresql itself. I'd recommend checking with the
> pgsql-general or pgsql-performance lists. Make sure you vacuum your
> table.
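
Just to make sure I have the vacuum step right -- I assume that is simply
something like this (table name made up):

```perl
# VACUUM cannot run inside a transaction block, so AutoCommit
# must be on for this statement.
$dbh->do('VACUUM ANALYZE mytable');
```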
> 
> >> Is this some DBD::Pg problem? Is this some Postgres problem (is it
> >> recalculating an index on every insert???)?
> 
> Most databases calculate index effects with every insertion. If you
> have many indices, you should see some improvement by dropping them,
> doing your insertions, and recreating them.
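
That makes sense. To check my understanding, the drop/load/recreate
cycle would look roughly like this (index and table names are made up):

```perl
# Drop the index so inserts don't pay index-maintenance costs.
$dbh->do('DROP INDEX mytable_a_idx');

# ... bulk inserts go here ...

# Rebuild the index once, after the data is loaded.
$dbh->do('CREATE INDEX mytable_a_idx ON mytable (a)');
```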
> 
> Hope that helps.
> 
> -johnnnnnnnnnnnnn