* Jim Nasby (jim.na...@bluetreble.com) wrote:
> On 1/5/15, 9:21 AM, Stephen Frost wrote:
> >* Robert Haas (robertmh...@gmail.com) wrote:
> >>I think it's right to view this in the same way we view work_mem.  We
> >>plan on the assumption that an amount of memory equal to work_mem will
> >>be available at execution time, without actually reserving it.
> >
> >Agreed- this seems like a good approach for how to address this.  We
> >should still be able to end up with plans which use less than the max
> >possible parallel workers though, as I pointed out somewhere up-thread.
> >This is also similar to work_mem- we certainly have plans which don't
> >expect to use all of work_mem and others that expect to use all of it
> >(per node, of course).
>
> I agree, but we should try and warn the user if they set
> parallel_seqscan_degree close to max_worker_processes, or at least give
> some indication of what's going on.  This is something you could end up
> beating your head on wondering why it's not working.
>
> Perhaps we could have EXPLAIN throw a warning if a plan is likely to get
> less than parallel_seqscan_degree number of workers.
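
A minimal sketch of the plan-time warning Jim describes, assuming the
patch's parallel_seqscan_degree GUC; the check location and the message
wording are illustrative, not from any posted patch:

    /*
     * Hypothetical GUC cross-check: warn when the per-query degree
     * meets or exceeds the cluster-wide worker limit, since such a
     * plan could never get all the workers it assumes.
     */
    if (parallel_seqscan_degree >= max_worker_processes)
        ereport(WARNING,
                (errmsg("parallel_seqscan_degree (%d) is not less than "
                        "max_worker_processes (%d); plans may launch "
                        "fewer workers than assumed",
                        parallel_seqscan_degree,
                        max_worker_processes)));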
Yeah, if we come up with a plan for X workers and end up not being able
to spawn that many then I could see that being worth a warning or notice
or something (roughly along the lines of the sketch below).  Not sure
what EXPLAIN has to do with it, though.

Thanks,

Stephen
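
A minimal executor-side sketch of such a notice, assuming hypothetical
counters for the workers the plan assumed versus the workers actually
started; the variable names are illustrative, not from the patch:

    /*
     * Hypothetical check after worker launch: report any shortfall
     * between what the plan assumed and what we actually got.
     */
    if (nworkers_launched < nworkers_planned)
        ereport(NOTICE,
                (errmsg("only %d of %d planned parallel workers "
                        "could be started",
                        nworkers_launched, nworkers_planned)));

Raising the notice at execution time, rather than in EXPLAIN, matches
the point above: the shortfall is only knowable once the workers are
actually requested.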