On 2023-12-03 Su 12:11, Joe Conway wrote:
On 12/3/23 11:03, Joe Conway wrote:
From your earlier post, regarding constructing the aggregate -- this is not extensive testing, but here is one data point:
8<--------------------------
test=# copy foo to '/tmp/buf' (format json, force_array);
COPY 10000000
Time: 36353.153 ms (00:36.353)
test=# copy (select json_agg(foo) from foo) to '/tmp/buf';
COPY 1
Time: 46835.238 ms (00:46.835)
8<--------------------------
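
For anyone wanting to reproduce this: the schema of foo isn't shown in the thread, so the setup below is only a hypothetical sketch -- any table of roughly this size should show the same pattern:

8<--------------------------
-- hypothetical test table; the real definition of foo is not shown
-- in this thread, this is just enough rows to make timings meaningful
create table foo (id int, t text);
insert into foo
   select i, md5(i::text)
   from generate_series(1, 10000000) i;
8<--------------------------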

Also, if the table is large enough, the aggregate method is not even feasible, whereas the COPY TO method works:
8<--------------------------
test=# select count(*) from foo;
  count
----------
 20000000
(1 row)

test=# copy (select json_agg(foo) from foo) to '/tmp/buf';
ERROR:  out of memory
DETAIL:  Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes.

test=# copy foo to '/tmp/buf' (format json, force_array);
COPY 20000000
8<--------------------------


None of this is surprising. As I mentioned, limitations with json_agg() are why I support the idea of this patch.
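
For the record, the reason the aggregate blows up: json_agg() has to materialize the entire array in a single StringInfo buffer, and one palloc'd allocation is capped at MaxAllocSize (just under 1 GB), hence the failure at 1073741822 bytes. COPY TO (FORMAT JSON) writes one row at a time, so it never hits that ceiling. On servers without the patch, one streaming workaround (a sketch only, with the usual COPY text-format escaping caveat noted in the comments) is newline-delimited JSON via row_to_json():

8<--------------------------
-- one JSON object per line instead of a single array; nothing is
-- assembled in memory, so this works regardless of table size
copy (select row_to_json(foo) from foo) to '/tmp/buf';
-- caveat: COPY's text format applies its own backslash escaping, so
-- backslashes inside the JSON output come out doubled; post-process
-- the file if the data contains characters that JSON escapes
8<--------------------------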


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com


