Juha-Mikko,

Thanks for that -- I had arrived at the same conclusion after reading
through the DBD code and forming the hypothesis that it does not do the
pre-fetching (and that it must therefore be PostgreSQL that does --
have I got this right?).  However, I looked through the PostgreSQL docs
and could find no mention of pre-fetching, so I'm still slightly
confused about where it actually happens.

I hadn't got as sophisticated as embedding intelligence on the server
side, but that sounds like a cool idea to look into -- I ended up doing
the limit/offset thing because my end game is not a nice application but
getting this flamin' data analyzed.
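
For the record, the chunked fetch I ended up with looks roughly like
this (a sketch only -- the connect string, table, and column names are
stand-ins, not my real ones):

    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', '', '',
                           { RaiseError => 1 });
    my $chunk  = 10_000;
    my $offset = 0;
    while (1) {
        # ORDER BY keeps the chunk boundaries stable across queries.
        my $rows = $dbh->selectall_arrayref(
            "SELECT id, payload FROM events
             ORDER BY id LIMIT $chunk OFFSET $offset");
        last unless @$rows;
        # ... analyze each row here ...
        $offset += $chunk;
    }
    $dbh->disconnect;

Each query only pulls $chunk rows into memory, at the cost of the
server scanning ever deeper into the ordering as $offset grows.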

Thanks again for your help.

Regards,

Mike

On Sat, 2002-10-26 at 17:48, Juha-Mikko Ahonen wrote:

    
    On Saturday 26 October 2002 00:29, Mike Nielsen wrote:
    > The "execute" seems to be returning all the rows into the address
    > space of my program.  Since the result set is ~4 million rows, this
    > has diabolical consequences on memory and, in fact, the program gets
    > whacked by the kernel once it has exhausted memory (and caused my
    > machine to thrash wildly for quite some time...).  I've run this
    > under the perl debugger, and it dies before it gets to the
    > fetchrow_arrayref contained in the ParseQS module -- it never returns
    > from the "execute".
    >
    > So, where's the bug?  If it is in my code, I'd be grateful for tips
    > on the correct way to fetch big data.
    
    It is not a bug, but a feature. If rows were returned one at a time, 
    every fetch would cost a network round trip, and the DB connection 
    would be extremely slow, especially over TCP/IP.
    
    You should do most of the work on the DB side, not in Perl. If you 
    need to select certain rows using a complex algorithm that is not 
    readily expressed in SQL, write a stored procedure in PostgreSQL; you 
    can write stored procedures in Perl (PL/Perl).
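    
    A minimal sketch of what that can look like from DBI (the function, 
    table, and column names here are made up, and plperl must first be 
    installed in the database, e.g. with createlang):
    
        # Register a PL/Perl function that scores a row on the server.
        $dbh->do(q{
            CREATE FUNCTION score(integer, integer) RETURNS integer AS '
                my ($hits, $misses) = @_;
                return $hits - 2 * $misses;
            ' LANGUAGE 'plperl';
        });
    
        # The filtering now happens in the backend, so only the rows 
        # you actually want cross the connection:
        my $sth = $dbh->prepare(
            'SELECT id FROM events WHERE score(hits, misses) > 0');
        $sth->execute;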
    
    I believe RowCacheSize has no effect in PostgreSQL. The docs underline 
    the fact that all data is copied to the frontend program's memory 
    space.
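    
    That is, the standard DBI hint is accepted but, as far as I know, 
    ignored by the driver:
    
        $dbh->{RowCacheSize} = 10_000;  # a hint only; no effect here
        my $sth = $dbh->prepare('SELECT id FROM events');
        $sth->execute;  # the whole result set is still buffered in 
                        # client memory regardless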
    
    If you need to get all those millions of rows from the table, consider 
    the SQL COPY command (if it suits the purpose), or use LIMIT .. OFFSET 
    to fetch the data in chunks. You'll have to use ORDER BY with LIMIT so 
    the chunks don't overlap.
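    
    For example (the table name and path are stand-ins; COPY ... TO 
    writes a file on the *server* side and needs appropriate permissions 
    -- alternatively, pipe COPY ... TO STDOUT through psql on the 
    client):
    
        # Dump the whole table server-side as tab-separated text.
        $dbh->do(q{COPY events TO '/tmp/events.tsv'});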
    

Michael Nielsen

ph: 0411-097-023 email: [EMAIL PROTECTED]
