I have some libpq-using application code, in which fetching the data
follows this logic (after a statement has been prepared):
  pg_result = PQexecPrepared(pg_conn, pg_statement_name, input_param_cnt,
                             param_values, param_lengths, param_formats,
                             result_format);
  rows_in_result = PQntuples(pg_result);
  /* The application provides storage so that I can pass a certain number
   * of rows (rows_to_pass_up) to the caller, and I repeat the following
   * loop until the rows_to_pass_up batches cover all of rows_in_result
   * (pg_row_num_base keeps track of where I am in the process). */
  for (int row_idx = 0; row_idx < rows_to_pass_up; ++row_idx) {
    const int pg_row_number = row_idx + pg_row_num_base;
    for (int pg_column_number = 0; pg_column_number < result_column_cnt;
         ++pg_column_number) {
      char *value = PQgetvalue(pg_result, pg_row_number, pg_column_number);
      int length = PQgetlength(pg_result, pg_row_number, pg_column_number);
      /* ... copy value/length into the caller's storage ... */
    }
  }


My question is: am I doing the right thing from the "data size being
passed from BE to FE" perspective?

The code in `bin/psql` relies on the value of the FETCH_COUNT variable
to build an appropriate query:

    fetch forward FETCH_COUNT from _psql_cursor


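For comparison, the same batching idea can be done by hand at the libpq
level with an explicit cursor. This is only a minimal sketch of that
approach; the cursor name, table, batch size of 1000, and the assumption
that you are inside a transaction are all illustrative, not taken from
the code above:

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: fetch a large result in fixed-size batches through an explicit
 * cursor, mimicking what psql does with FETCH_COUNT. The cursor name
 * "my_cursor" and the query are hypothetical. */
static void fetch_in_batches(PGconn *conn)
{
    PGresult *res;

    res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE my_cursor CURSOR FOR SELECT * FROM big_table");
    PQclear(res);

    for (;;) {
        res = PQexec(conn, "FETCH FORWARD 1000 FROM my_cursor");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int ntuples = PQntuples(res);
        if (ntuples == 0) {          /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int row = 0; row < ntuples; ++row)
            printf("%s\n", PQgetvalue(res, row, 0));
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE my_cursor");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
}
```

With this scheme only one batch of rows is ever held in FE memory at a
time; the rest stays with the BE until fetched.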
No equivalent of FETCH_COUNT is available at the libpq level, so I
assume that the interface I am using is smart enough not to send
gigabytes of data to the FE.

Is that right? Is the logic I am using safe and good?

Where does the result set (GBs of data) reside after I call
PQexecPrepared?  On BE, I hope?
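
One libpq-level alternative worth noting here is single-row mode
(PQsetSingleRowMode, available in newer libpq versions), which asks
libpq to hand rows over one at a time instead of buffering the whole
result set in FE memory. A minimal sketch, assuming the statement has
already been prepared and passing no parameters for brevity:

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: retrieve a prepared statement's result one row at a time, so
 * libpq does not accumulate the entire result set on the FE side.
 * stmt_name is assumed to be an already-prepared statement; parameters
 * are omitted here for brevity. */
static void fetch_single_rows(PGconn *conn, const char *stmt_name)
{
    /* Send the query without waiting for the complete result. */
    if (!PQsendQueryPrepared(conn, stmt_name, 0, NULL, NULL, NULL, 0)) {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return;
    }
    /* Must be called before the first PQgetResult(). */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enter single-row mode\n");

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        switch (PQresultStatus(res)) {
        case PGRES_SINGLE_TUPLE:   /* one row of data */
            printf("%s\n", PQgetvalue(res, 0, 0));
            break;
        case PGRES_TUPLES_OK:      /* zero-row result marking the end */
            break;
        default:
            fprintf(stderr, "error: %s", PQerrorMessage(conn));
            break;
        }
        PQclear(res);
    }
}
```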


-- Alex -- alex-goncha...@comcast.net --

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)