There is a problem trying to make Postgres do these things in parallel.

The backend code isn't thread-safe, so a multi-threaded implementation would require quite a bit of work. Using multiple processes has its own problems: the whole way locking works equates one process with one transaction (the proc table has one entry per process). Processes would conflict on locks and deadlock themselves, among many other problems. It's all a good idea, but the work is probably far more than you expect.

Async I/O might be easier if you used pthreads, which are mostly portable, though not to all platforms. (Yes, they do work on Windows.)

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Jeffrey Baker
Sent: 2008-10-20 22:25
To: Julius Stroffek
Cc: pgsql-hackers@postgresql.org; Dano Vojtek
Subject: Re: [HACKERS] Multi CPU Queries - Feedback and/or suggestions wanted!

On Mon, Oct 20, 2008 at 12:05 PM, Julius Stroffek <[EMAIL PROTECTED]> wrote:

> Topics that seem to be of interest, most of which were already
> discussed at the developers meeting in Ottawa, are
> 1.) parallel sorts
> 2.) parallel query execution
> 3.) asynchronous I/O
> 4.) parallel COPY
> 5.) parallel pg_dump
> 6.) using threads for parallel processing
> [...]
> 2.) Different subtrees (or nodes) of the plan could be executed in
> parallel on different CPUs, and the results of these subtrees could be
> requested either synchronously or asynchronously.

I don't see why multiple CPUs can't work on the same node of a plan. For instance, consider a node involving a scan with an expensive condition, like UTF-8 string length. If you have four CPUs you can bring to bear, each CPU could take every fourth page, computing the expensive condition for each tuple in that page. The results of the scan can be retired asynchronously to the next node above.

-jwb
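The striped-scan idea above (each of N workers visits every Nth page and applies the expensive condition) can be sketched outside the backend. This is a minimal illustration in plain Python, not PostgreSQL code: the page list, the `len(...) > 4` stand-in for an expensive per-tuple condition, and the worker count are all invented for the example.

```python
from multiprocessing import Pool

# Hypothetical "heap": a list of pages, each page a list of tuples (strings here).
PAGES = [
    ["alpha", "ab"],        # page 0
    ["beta", "x"],          # page 1
    ["gamma!", "yz"],       # page 2
    ["de", "epsilon"],      # page 3
    ["longstring", "q"],    # page 4
]

def scan_stripe(args):
    """One worker's share of the scan: every nworkers-th page, starting
    at the worker's own offset, applying the 'expensive' condition
    (here just character length > 4, standing in for a real predicate)."""
    worker_id, nworkers = args
    hits = []
    for pageno in range(worker_id, len(PAGES), nworkers):
        for tup in PAGES[pageno]:
            if len(tup) > 4:
                hits.append(tup)
    return hits

if __name__ == "__main__":
    nworkers = 4
    with Pool(nworkers) as pool:
        stripes = pool.map(scan_stripe, [(i, nworkers) for i in range(nworkers)])
    # The stripes partition the pages, so concatenating them yields the
    # same set of matches as a single sequential scan would.
    matches = [t for stripe in stripes for t in stripe]
    print(matches)
```

Each worker touches a disjoint subset of pages, so no coordination is needed during the scan itself; only the merge of results (or handing each stripe asynchronously to the node above) requires any synchronization.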