I'm using PostgreSQL 7.0.1 with a TCL frontend.

I have a schema that splits large files into tuple-sized chunks and stores them in a table.  This was done before TOAST, in order to store large files.
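
For reference, here's roughly what the table looks like, reconstructed from the queries below (the column types not implied by the function signature are my guesses):

    create table cont_doc_data (
        crt_by      int4,       -- creator (matches the function's int4 arg)
        crt_date    timestamp,  -- creation time
        ctype       varchar,    -- content type
        seq         int4,       -- chunk ordering within the file
        data        text        -- one tuple-sized chunk of the file
    );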

I have a backend TCL function that re-assembles the file like this:

-- Fetch the specified document data, reassembling the bits back together
-- in the right order.
-- Calling sequence: cont_doc_fetch(crt_by,crt_date,ctype)
create function cont_doc_fetch(int4,timestamp,varchar) returns text as '
    set odata {}
    spi_exec -array d "select data from cont_doc_data where crt_by = \'$1\' and crt_date = \'$2\' and ctype = \'[quote $3]\' order by seq" {
        append odata $d(data)
    }
    return $odata
    ' LANGUAGE 'pltcl';
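
It gets called with the three key columns, something like this (the values here are just made-up examples):

    select cont_doc_fetch(101, '2000-06-15 12:34:56', 'text/plain');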

This worked great until I put in a really big file (about 5M).  Then, when I tried to fetch the file, it was really slow (about 60 seconds).  I tried reassembling the file in the frontend instead, and my time dropped to about 6 seconds using this TCL fragment (mpg::qlist is an interface to pg_exec that returns a list of tuples):

    set data {}
    set tuple_list [mpg::qlist "select data from $ca(prefix)_doc_data where crt_by = $crt_by and crt_date = '$crt_date' and ctype = '$ctype' order by seq"]
    foreach rec $tuple_list {
        append data [lindex $rec 0]
    }
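
For anyone who doesn't have mpg::qlist, roughly the same thing can be written against stock libpgtcl (the connection setup is sketched, and I've hardcoded the table name the backend function uses; $crt_by, $crt_date, and $ctype are assumed to be set as above):

    set conn [pg_connect mydb]
    set res [pg_exec $conn "select data from cont_doc_data \
            where crt_by = $crt_by and crt_date = '$crt_date' \
            and ctype = '$ctype' order by seq"]
    set data {}
    for {set i 0} {$i < [pg_result $res -numTuples]} {incr i} {
        append data [lindex [pg_result $res -getTuple $i] 0]
    }
    pg_result $res -clear
    pg_disconnect $conn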

The only difference I can identify is whether the reassembly TCL code runs as a procedural language in the backend or in the frontend.  Does anyone have any idea why the difference is so dramatic?

Jan:
Is this the difference between old TCL and new TCL (with the dual-ported objects)?  Or is there something else about the way the backend handles large chunks of data that would account for the difference?
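
If it's the interpreter's string handling, that should be easy to check outside the database.  A quick sketch (the chunk size and count are arbitrary, sized to total about 5M); running it under a tclsh built against each TCL version should show whether append alone explains the gap:

    # Build an 8K chunk by doubling (string repeat doesn't exist in old TCL).
    set chunk x
    while {[string length $chunk] < 8192} { append chunk $chunk }

    # Time appending ~5M in 8K chunks; if old TCL's append is the
    # culprit, the difference should show up dramatically here.
    puts [time {
        set data {}
        for {set i 0} {$i < 640} {incr i} { append data $chunk }
    }]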

Kyle Bateman
President, Action Target Inc.
