"Simon Riggs" <[EMAIL PROTECTED]> wrote
>
> We would only look at dynamically changing work_mem in those few
> restricted cases where we track that against the work_mem limit. If we
> hit that limit, we would make a request to the central pool: "Can I be
> allotted another 2MB please?" (etc). The central allotment mechanism
> would then say Yes or No. If allotted the memory, the backend would then
> palloc up to that limit. The backend may return later for additional
> allotments, but for now it has been allowed to dynamically increase its
> memory usage. This allotment would be noted in the memory context
> header, so that when the memory context is freed, the allotment can be
> "returned" to the central pool by a deallotment call. This is now easier
> than before since each sort within a query has its own memory context.
>
Interesting. I understand that shared_work_mem is per-process,
allocated when used, and a request may or may not be granted (as you have
pointed out, this may put the planner in a hard situation if we are
sensitive to work_mem). But I still have something unclear.

Let's say we have a sort operation that needs 1024 of memory. The DBA
then has the following two options:

(1) SET work_mem = 1024; SET shared_work_mem = 0; do sort;
(2) SET work_mem = 512; SET shared_work_mem = 512; do sort;

So what's the difference between these two strategies?

(1) Running time: do they use the same amount of memory? Why is option 2
better than option 1?
(2) Idle time: after the sort is done, will option 1 return all 1024 to
the OS while option 2 still keeps 512?

Regards,
Qingqing

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org