David Fetter wrote:

*** 716,724 ****
<listitem>
      <para>
!       In the current implementation, if you are fetching or returning
!       very large data sets, you should be aware that these will all go
!       into memory.
      </para>
     </listitem>
    </itemizedlist>
--- 766,776 ----
<listitem>
      <para>
!       If you are fetching or returning very large data sets using
!       <literal>spi_exec_query</literal>, you should be aware that
!       these will all go into memory.  You can avoid this by using
!       <literal>spi_query</literal>/<literal>spi_fetchrow</literal> as
!       illustrated earlier.
      </para>
     </listitem>
    </itemizedlist>

You have rolled two problems into one: spi_query/spi_fetchrow does not address the issue of returning large data sets.

Suggest instead:

<para>
      If you are fetching very large data sets using
      <literal>spi_exec_query</literal>, you should be aware that
      these will all go into memory.  You can avoid this by using
      <literal>spi_query</literal> and <literal>spi_fetchrow</literal>
      as illustrated earlier.
</para>
<para>
      A similar problem occurs if a set-returning function passes a
      large set of rows back to postgres via <literal>return</literal>.
      You can avoid this problem too by instead using
      <literal>return_next</literal> for each row returned, as shown
      previously.
</para>
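
For illustration (not proposed doc text), here is a minimal sketch of the pattern both paragraphs point at; the table name big_table is made up:

CREATE OR REPLACE FUNCTION stream_big_table() RETURNS SETOF big_table AS $$
    # Cursor-style fetch: rows arrive one at a time instead of the
    # whole result set being materialized as spi_exec_query would do.
    my $sth = spi_query("SELECT * FROM big_table");
    while (defined (my $row = spi_fetchrow($sth))) {
        # Hand each row back individually rather than accumulating
        # them all and passing the complete set to return.
        return_next($row);
    }
    return;
$$ LANGUAGE plperl;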

cheers

andrew
