In article <[EMAIL PROTECTED]>,
"Michael L. Hostbaek" <[EMAIL PROTECTED]> writes:

> Hello, 
> I've got a table in an oracle database with approx. 100000 records, that
> I'd like to put into a table in a postgresql database. (This should be
> done a couple of times per week)

> I have written a short perl script, on a server that has remote access
> to both the oracle database as well as the postgresql database. I am
> running postgresql 7.4.1 on FreeBSD.

> My perl script looks something like this:

> [...]
> my $sth2 = $cnx2->prepare('SELECT * FROM oracle_table');
> my $res2 = $sth2->execute();

> while (my ($field2, $field5, $field6) = $sth2->fetchrow_array) {
>     if (defined($field2)) {
>         my $sth = $cnx->prepare(
>             'INSERT INTO the_pg_table (field1, field2) VALUES (?, ?)');
>         my $result = $sth->execute($field2, $field5);
>         $sth->finish;
>     }
> }
> [...]

> It runs fine - and I get no errors - but it takes almost 25 minutes to
> complete. I tried running the script while just grabbing the rows from
> the oracle database and writing them to a text file - and then it only
> takes a couple of minutes. So it must be the INSERT command that chokes -
> is there a better way to do it?

First of all, you should prepare the INSERT statement only once,
outside of the loop.  Then you could use fetchrow_arrayref instead of
fetchrow_array; this should eliminate a copy operation per row.
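
An untested sketch along those lines, reusing your handles; the
handle names $sel/$ins are mine, and the indexes just mirror the
positional assignment in your snippet ($field2 = column 0,
$field5 = column 1):

my $sel = $cnx2->prepare('SELECT * FROM oracle_table');
my $ins = $cnx->prepare(
    'INSERT INTO the_pg_table (field1, field2) VALUES (?, ?)');
$sel->execute();

# fetchrow_arrayref returns a reference to the driver's row buffer,
# so no per-row copy of the values is made.
while (my $row = $sel->fetchrow_arrayref) {
    next unless defined $row->[0];        # your defined($field2) test
    $ins->execute($row->[0], $row->[1]);  # $field2, $field5
}
$ins->finish;
$sel->finish;

Turning AutoCommit off on $cnx and committing once at the end (or
every few thousand rows) should help too; with AutoCommit on, every
INSERT is its own transaction, and that overhead adds up over
100000 rows.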

But the biggest win would be not to use INSERT at all.  Instruct
Oracle to dump the rows into a CSV file, and then do just

$cnx->do("COPY the_pg_table FROM 'csv.file'");
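
Note that COPY FROM 'file' is read by the backend, so the file needs
an absolute path on the PostgreSQL server and a superuser connection;
also, 7.4's COPY reads tab-delimited text rather than quoted CSV
(COPY's CSV mode only appeared in 8.0).  If placing a file on the
server is awkward, you can stream the rows through the same Perl
script with COPY ... FROM STDIN instead.  A sketch, reusing the $sel
handle from above and assuming a DBD::Pg recent enough to provide
pg_putcopydata/pg_putcopyend:

# Stream rows straight from Oracle into PostgreSQL's COPY protocol,
# skipping per-row INSERT overhead entirely.
$sel->execute();
$cnx->do('COPY the_pg_table (field1, field2) FROM STDIN');

while (my $row = $sel->fetchrow_arrayref) {
    next unless defined $row->[0];
    # COPY text format: tab-separated columns, newline-terminated,
    # \N for NULL.  Real data may need tab/backslash escaping.
    my @vals = map { defined($_) ? $_ : '\N' } @{$row}[0, 1];
    $cnx->pg_putcopydata(join("\t", @vals) . "\n");
}
$cnx->pg_putcopyend();

Either way, a single COPY should beat 100000 individual INSERTs by a
wide margin.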

