psql's "FETCH_COUNT" feature is useful for incrementally displaying the
results of a long-running query. However, psql fails to flush its output
stream as new rows of the partial result set are fetched, so partial
query results may not be visible to the user until the stdio buffer
happens to fill or the query produces its complete result set.


$ cat ~/test.sql
-- a contrived function to get a query that slowly produces
-- more rows
create function slow_func() returns boolean as $$
begin
    perform pg_sleep(2);
    return true;
end;
$$ language plpgsql;

neilc=# \i ~/test.sql 
neilc=# create table t1 (a int, b int);
neilc=# insert into t1 values (5, 10), (10, 15), (20, 25), (30, 35);
neilc=# \set FETCH_COUNT 1
neilc=# select * from t1 where slow_func() is true;

With CVS HEAD, no output is visible until the complete result set has
been produced, at which point all 4 rows are printed. This is
undesirable: since the client has gone to the trouble of FETCH'ing the
rows one-at-a-time, it should display the partial result set before
issuing another FETCH.

Attached is a patch that fixes this, by calling fflush() on the psql
output stream after each call to printQuery() in ExecQueryUsingCursor().


Index: src/bin/psql/common.c
RCS file: /home/neilc/postgres/cvs_root/pgsql/src/bin/psql/common.c,v
retrieving revision 1.134
diff -p -c -r1.134 common.c
*** src/bin/psql/common.c	16 Apr 2007 20:15:38 -0000	1.134
--- src/bin/psql/common.c	20 Jun 2007 22:34:04 -0000
*************** ExecQueryUsingCursor(const char *query, 
*** 1076,1081 ****
--- 1076,1087 ----
  		printQuery(results, &my_popt, pset.queryFout, pset.logfile);
+ 		/*
+ 		 * Make sure to flush the output stream, so intermediate
+ 		 * results are visible to the client immediately.
+ 		 */
+ 		fflush(pset.queryFout);
  		/* after the first result set, disallow header decoration */
  		my_popt.topt.start_table = false;
  		my_popt.topt.prior_records += ntuples;