Alvaro Herrera wrote:
> Log Message:
> -----------
> Reduce the size of memory allocations by lazy vacuum when processing a small
> table, by allocating just enough for a hardcoded number of dead tuples per
> page. The current estimate is 200 dead tuples per page.
200 sounds like a badly chosen value. With an 8K block size, it's a bit less
than MaxHeapTuplesPerPage, which means that in the worst case you don't
allocate enough space to hold all dead tuples, and you end up doing two index
cleanups, no matter how high you set maintenance_work_mem. Remember that
having MaxHeapTuplesPerPage dead tuples on a page just became much more
likely with HOT, and with larger block sizes 200 tuples isn't very much
anyway. At the other end of the spectrum, with a smaller block size 200 is
more than MaxHeapTuplesPerPage, so we're still allocating more than
necessary.

Note that as the patch stands, the capping is not limited to small tables.
Doing extra index passes on a relatively big table with lots of indexes
might cause a lot of real extra I/O.

How about just using MaxHeapTuplesPerPage? With the default 8K block size,
it's not that much more than 200, but it makes the above gripes go away
completely. That seems like the safest option at this point.

> Per reports from Jeff Amiel, Erik Jones and Marko Kreen, and subsequent
> discussion.

Ok, I just read that discussion in the archives. A lot of good ideas were
suggested, like reducing the space required for the TID list, or dividing
maintenance_work_mem between workers. None of that is going to happen for
8.3, so it seems likely that we're going to revisit this in 8.4. Let's keep
it simple and safe for now.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com