Using DBI, you must be careful how you fetch the data. 

There are methods for fetching the entire result set as an array of arrays or
an array of hashes, but row-by-row fetching into an array reference
(fetchrow_arrayref) is fastest. That is most likely the reason for the
disparity you saw.
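
A rough sketch of the difference (untested; the table name and connection
details below are made up for illustration):

  use DBI;

  my $dbh = DBI->connect("dbi:Pg:dbname=test", "user", "password",
                         { RaiseError => 1 });

  my $sth = $dbh->prepare("SELECT id, name FROM users");

  # Convenient but memory-hungry: pull the whole result set at once
  # as an array of hash references.
  $sth->execute;
  my $all_rows = $sth->fetchall_arrayref({});

  # Fastest: fetch row by row into a single reused array reference.
  $sth->execute;
  while (my $row = $sth->fetchrow_arrayref) {
      print "$row->[0]\t$row->[1]\n";
  }

  $sth->finish;
  $dbh->disconnect;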

-alex


 On Tue, 13 Nov 2001, Michael Wray wrote:

> Hmm...well I don't have the DBD code I used but basically it was something like:
> $dbh->{AutoCommit} = 0;
> $dbh->do($query);
> $dbh->commit;
> $dbh->disconnect;
> 
> (I'd have to verify my syntax against Programming the Perl DBI and the
> DBD::Pg manpage.)
> 
> (Pretty straightforward..and is what I am doing with Pg as well..or at least
> the equivalent..)
> Pg is about 100 times faster than DBD::Pg, though it isn't a fair comparison,
> as the DBD method had more going on..(like ONE replication unit trying to
> synchronize 8 servers).  The $query is a dump of SQL statements...drop,
> create, insert, or delete....(I'm doing a dump from an MSSQL DB via an ASP
> query...and reissuing the SQL statements to PG...the statements come from a
> table built by triggers...so that the statements will work properly in PG.)
> 
> However..when doing some SELECTs via DBD versus Pg, I found Pg to be much
> quicker...I don't know why libpq is faster on large SELECTs, but it is...and
> I will need that speed when we move the rest of our DB system over to
> Postgres next year.
> 
> At any rate..I may review DBD::Pg when I have time..right now I have to
> finish a different product by the end of next week...which uses similar
> methods...and this is why I am converting a bunch of PHP scripts to
> Perl...(and why I am sticking with libpq right now...since libpq is what PHP
> was already using)
> 
> My original method used DBD::Pg and connected to a PLProxy running on the SQL
> Server via ODBC....(which may have been the slowdown..I don't know)  It
> queried the same table for itself...read the results into an array...copied
> the array to a scalar variable ($query) and did a do().  That bombed every
> time, so I switched to stepping through the array..it never bombed, but it
> was god-awful slow...I know that the connection to MSSQL took longer than the
> ASP dump we are doing...although if the ASP query doesn't move soon, it will
> be taking longer than it should....
> 
> Most of the time there are only a couple of records to do...the problem is
> when I have to do a fresh copy to one of my backup servers...that takes a
> while...whereas with the ASP dump it isn't taking long yet..until we run into
> a timeout issue on the select of the data.
> 
> That, and I was doing a push/pull Perl service: (connect to MSSQL...record
> updates to a local PG database, connect to replication partners..push
> updates...look for updates on partners...grab updates to push to MSSQL...)
> This was a pain..now each server runs a pull from MSSQL and all changes go
> straight to MSSQL..and are replicated from there...it is much faster but has
> many other drawbacks.
> 
> Pg accomplishes the commands but sometimes reports errors when there aren't
> any that I can trace, while sometimes the same error is legitimate...I think
> it reports errors on DELETEs that didn't occur because the record didn't
> exist...which to me is not an error..or at least not the fatal error Pg
> reports it to be....(I can verify the integrity of the data....both by record
> count and an MD5 checksum)
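> 
> (For what it's worth, here is roughly the kind of check I mean.  This is only
> a sketch: the connection string and the statement are made up, and the
> resultStatus/cmdTuples calls would need verifying against the Pg manpage.)
> 
>   use Pg;
> 
>   my $conn = Pg::connectdb("dbname=test");
>   die $conn->errorMessage unless $conn->status == PGRES_CONNECTION_OK;
> 
>   my $result = $conn->exec("DELETE FROM users WHERE id = 42");
>   if ($result->resultStatus != PGRES_COMMAND_OK) {
>       warn "real error: ", $conn->errorMessage;   # the statement failed
>   } elsif ($result->cmdTuples == 0) {
>       # no matching record, so nothing was deleted...not a fatal error
>   }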
> 
> (Note: Admittedly I have not tried the new method using DBD::Pg; this was a
> port of a bash script that used psql to do the updates, and I went to the Pg
> module because it most resembled my script in execution.)  I now check for
> things like: are my servers running, is there another replication process
> running, is the old process a zombie, and if my services aren't running, can
> I restart them; if not..send out an alarm.  All of my original work was
> scrapped by another programmer while I was in China...he wrote the ugly bash
> script, which worked but had lots of problems when there were errors,
> etc..(like corrupt data...)
> 
> Why MSSQL? It hooks directly into our accounting system....specific info for
> the user application is then exported via the above method so we can
> authenticate users, and keep their data up to date...
> 
> (PostGres on Linux handles more connections per second than MSSQL on Winblows)
> 
> 
> *********** REPLY SEPARATOR  ***********
> 
> On 11/12/01 at 8:21 PM Alex Pilosov wrote:
> 
> >I think you are doing something wrong. DBD::Pg is fast-to-unnoticeable.
> >
> >I bet that with PHP you do your operations within one transaction (do an
> >explicit BEGIN or whatever way in PHP you start a transaction), whereas in
> >perl, you don't do $dbh->{AutoCommit}=0;
> >
> >The best way to kill postgres' performance is to put each statement in its
> >own transaction (default, AutoCommit=1). 
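> >
> >Something like this (a rough, untested sketch; the connection details and
> >the statements are only placeholders):
> >
> >  use DBI;
> >
> >  my $dbh = DBI->connect("dbi:Pg:dbname=test", "user", "password",
> >                         { AutoCommit => 0, RaiseError => 1 });
> >
> >  my @statements = (
> >      "DELETE FROM users WHERE id = 1",
> >      "INSERT INTO users VALUES (1, 'someone')",
> >  );
> >
> >  # every statement runs inside one transaction, instead of one
> >  # transaction per statement
> >  $dbh->do($_) for @statements;
> >
> >  $dbh->commit;
> >  $dbh->disconnect;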
> >
> >If you post your source code, I might be able to help you out.
> >
> >-alex
> >
> >On Mon, 12 Nov 2001, Michael Wray wrote:
> >
> >> Thanks for the suggestions, but here's my 2 cents on DBI/DBD::Pg:
> >> 
> >> I would LOVE to use DBD::Pg...I have written a lot of code using
> >> DBD::Pg...however...unless there is a monumental performance increase
> >> between the DBD::Pg that was out with DBI-1.16 and the current version..I
> >> don't think I can consider it...That, and right now I think I am
> >> looking at less coding with libpq, as most of the routines will port
> >> straight across from PHP. (Note...I said amateur, not NOVICE..)
> >> 
> >> If I were ONLY doing Report type queries and no UPDATES..then DBI
> >> would be fine...
> >> 
> >> I have already tried this project using DBD::Pg...performance from
> >> DBD::Pg sucked....I am already using the libpq routines in PHP
> >> (written by another programmer) with amazing results...from other
> >> reports I have seen...moving away from Apache and going pure Perl with
> >> the Pg module will give me blazing performance.  (Yes..Postgres)...I
> >> have no need to port my code to another database...this is a
> >> proprietary product and will be maintained and released by this
> >> company for quite a while....I need speed, not portability...The way
> >> they are being used in PHP does not match my needs, though, and I need
> >> info on error response codes from libpq.
> >> 
> >> DBD adds an extra layer to the programming that I am trying to cut
> >> out....that extra layer cuts way into performance...
> >> 
> >> 
> >> I can manually replicate my database faster by hand than by any of the
> >> available methods in DBD::Pg...and in fact have done so...(And that
> >> was a LONG night...)
> >> 
> >> 
> >> My end goal:
> >> 
> >> Client->Perl->libpq (Pg)->Postgres,
> >> which sits now as:
> >> Client->Apache->Php->libpq->PostGres.
> >> will move to
> >> Client->Apache->mod_perl->libpq->Postgres...then I will drop Apache...I
> >> don't need a full-fledged web server for what I am doing...just a TCP
> >> server listening on the same port to handle DB queries in the form of a
> >> URL request...
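> >> 
> >> (Very rough sketch of the kind of listener I mean; the port, database
> >> name, and request format are placeholders, and there is no real URL
> >> decoding or error handling here:)
> >> 
> >>   use IO::Socket::INET;
> >>   use Pg;
> >> 
> >>   my $conn   = Pg::connectdb("dbname=test");
> >>   my $server = IO::Socket::INET->new(LocalPort => 8080, Listen => 5,
> >>                                      Reuse => 1) or die $!;
> >> 
> >>   while (my $client = $server->accept) {
> >>       my $request = <$client>;              # e.g. "GET /?q=select+1"
> >>       my ($sql)   = $request =~ /q=([^ &]+)/;
> >>       $sql =~ tr/+/ /;
> >> 
> >>       my $result = $conn->exec($sql);
> >>       for my $row (0 .. $result->ntuples - 1) {
> >>           print $client join("\t", map { $result->getvalue($row, $_) }
> >>                                    0 .. $result->nfields - 1), "\n";
> >>       }
> >>       close $client;
> >>   }
> >> 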
> >> *********** REPLY SEPARATOR  ***********
> >> 
> >> On 11/12/01 at 11:51 AM Rudy Lippan wrote:
> >> 
> >> >On Mon, 12 Nov 2001, Hardy Merrill wrote:
> >> >
> >> >> By "Pg", do you mean PostgreSQL database?  The only interface that
> >> >> I'm aware of for Perl to PostgreSQL is through DBI and DBD::Pg.  In
> >> >> terms of documentation for PostgreSQL, I'd start with the Postgresql
> >> >> home page at
> >> >
> >> >There is(was?) a "Pg" interface to perl: 
> >> >http://search.cpan.org/doc/MERGL/pgsql_perl5-1.9.0/Pg.pm
> >> >
> >> >And then there was an attempt to write a pure-Perl interface for
> >> >Postgres.  I don't think it ever got very far, and I don't remember the
> >> >name of the package.
> >> >
> >> >DBI/DBD::Pg is the way to go.  Performance is very good, and DBI makes it
> >> >easier for you to port your code to other databases.
> >> >
> >> >
> >> >Rudy
> >> 
> >> 
> >>
> 
> 
> 
