OK cool, thanks. Can we remove the minimum size limit when the per-table degree setting is applied?
This would help for tables with 2-1000 pages combined with a high-CPU-cost aggregate.

Cheers,

James Sewell, PostgreSQL Team Lead / Solutions Architect
Level 2, 50 Queen St, Melbourne VIC 3000
P (+61) 3 8370 8000  W www.lisasoft.com  F (+61) 3 8370 8099

On Sun, Mar 20, 2016 at 11:23 PM, David Rowley <david.row...@2ndquadrant.com> wrote:

> On 18 March 2016 at 10:13, James Sewell <james.sew...@lisasoft.com> wrote:
> > This does bring up an interesting point I don't quite understand though.
> > If I run parallel agg on a table with 4 rows with 2 workers, will it run
> > on two workers (2 rows each), or will the first one grab all 4 rows?
>
> It works on a per-page basis: workers each grab the next page to be
> scanned from a page counter that sits in shared memory. The worker
> increments the page number, releases the lock on the counter, and
> scans that page.
>
> See heap_parallelscan_nextpage()
>
> So the answer to your question is probably no. At least not unless the
> page only contained 2 rows.
>
> --
> David Rowley                   http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services