Try turning off AutoCommit: MySQL doesn't support transactions, so that
might be what's causing the speed boost.  With AutoCommit on, Postgres has
to commit every single insert as its own transaction, which is expensive;
batching them into one transaction should close most of the gap.  Just
change the connect line from:
$pg_con=DBI->connect("DBI:Pg:....
to
$pg_con=DBI->connect("DBI:Pg(AutoCommit=>0):....

and add 

$pg_con->commit

before you disconnect.  I may have the syntax wrong, so double-check the
docs for the DBI and DBD::Pg modules (perldoc DBI and perldoc DBD::Pg).
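
Putting it all together, it would look roughly like this (untested; I've
reused your table and test data, and the dbname/user/password are
placeholders):

#!/usr/bin/perl -w
use strict;
use DBI;

# Placeholder connection details -- adjust dbname/user/password.
my $pg_con = DBI->connect("DBI:Pg:dbname=test", "user", "password",
                          { AutoCommit => 0, RaiseError => 1 });

my $cursor = $pg_con->prepare(
    "insert into central (number,name,address) values (?,?,?)");

for my $c (1 .. 10000) {
    # Every insert lands in the same transaction instead of
    # being committed individually.
    $cursor->execute($c, "John Doe the number $c",
                     "$c, Jalan SS$c/$c, Petaling Jaya");
}

$pg_con->commit;        # nothing is written until this
$pg_con->disconnect;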

At 01:25 AM 10/20/99, Lincoln Yeoh wrote:
>Hi everyone,
>
>Should inserts be so slow?
>
>I've written a perl script to insert 10 million records for testing
>purposes and it looks like it's going to take a LONG time with postgres.
>MySQL is about 150 times faster! I don't have any indexes on either. I am
>using the DBI and relevant DBD for both.
>
>For Postgres 6.5.2 it's slow with either of the following table structures.
>create table central ( counter serial, number varchar (12), name text,
>address text );
>create table central ( counter serial, number varchar (12), name
>varchar(80), address varchar(80));
>
>For MySQL I used:
>create table central (counter int not null auto_increment primary key,
>number varchar(12), name varchar(80), address varchar(80));
>
>The relevant perl portion is (same for both):
>               $SQL=<<"EOT";
>insert into central (number,name,address) values (?,?,?)
>EOT
>               $cursor=$dbh->prepare($SQL);
>
>       while ($c<10000000) {
>               $number=$c;
>               $name="John Doe the number ".$c;
>               $address="$c, Jalan SS$c/$c, Petaling Jaya";
>               $rv=$cursor->execute($number,$name,$address) or die("Error executing insert!",$DBI::errstr);
>               if ($rv==0) {
>                       die("Error inserting a record with database!",$DBI::errstr);
>               };
>               $c++;
>               $d++;
>               if ($d>1000) {
>                       print "$c\n";
>                       $d=1;
>               }
>       }