On Wed, Jun 09, 2004 at 01:41:27PM -0400, [EMAIL PROTECTED] wrote:
> > On Wed, Jun 09, 2004 at 10:49:20PM +0800, Christopher Kings-Lynne wrote:
> > Also he said that the problem was solved with enough lazy VACUUM
> > scheduling.  I don't understand why he doesn't want to use that
> > solution.
>
> Sigh, because vacuums take away from performance.  Imagine a table that
> has to be updated on the order of a few thousand times a minute.  Think
> about the drop in performance during the vacuum.
>
> On a one-row table, vacuum is not so bad, but try some benchmarks on a
> table with a goodly number of rows.

Hmm, this can be a problem if VACUUM pollutes the shared buffer pool.  So
what about a new buffer replacement policy that takes this into account
and is not fooled by VACUUM?  This is already implemented in 7.5.

Also, how about a background writer process that writes dirty buffers, so
that backends don't have to wait for I/O to complete when a dirty buffer
has to be written?  This is also in current CVS.

Have you tried the current CVS code and measured how it performs?  Jan
Wieck reported a large performance improvement some time ago while he was
developing this.  The code has changed since then, and I have not seen any
new measurements.

-- 
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Oh, oh, the Galacian girls will do it for pearls,
And those of Arrakis for water!  But if you seek ladies
Who burn like flames, try a daughter of Caladan!"  (Gurney Halleck)
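For anyone unfamiliar with the idea, here is a toy sketch of why a scan-aware replacement policy keeps VACUUM from evicting hot pages.  This is not the actual 7.5 code; the class, sizes, and page numbers are all made up for illustration.  The point is just that a bulk scan recycles a small private ring of buffers instead of cycling the whole main pool through plain LRU:

```python
# Toy simulation (NOT PostgreSQL source): a buffer pool where a bulk
# sequential reader such as VACUUM is confined to a small private ring,
# so it cannot evict the pages regular backends are actively using.
from collections import OrderedDict

class BufferPool:
    def __init__(self, main_size=8, ring_size=2):
        self.main = OrderedDict()   # LRU-ordered main pool (normal access)
        self.main_size = main_size
        self.ring = []              # small fixed-size ring for bulk scans
        self.ring_size = ring_size
        self.ring_pos = 0           # next ring slot to recycle

    def read(self, page, vacuum=False):
        if vacuum:
            # VACUUM recycles its own ring of buffers, overwriting the
            # oldest slot instead of evicting from the main pool.
            if len(self.ring) < self.ring_size:
                self.ring.append(page)
            else:
                self.ring[self.ring_pos] = page
                self.ring_pos = (self.ring_pos + 1) % self.ring_size
            return
        # Normal backend access: plain LRU on the main pool.
        if page in self.main:
            self.main.move_to_end(page)
        else:
            if len(self.main) >= self.main_size:
                self.main.popitem(last=False)   # evict least recently used
            self.main[page] = True

pool = BufferPool()
for p in range(8):              # backends touch hot pages 0..7
    pool.read(p)
for p in range(100, 200):       # VACUUM then scans 100 cold pages
    pool.read(p, vacuum=True)
# The hot pages are all still cached; the scan only churned the ring.
assert set(pool.main) == set(range(8))
assert len(pool.ring) == 2
```

With a naive LRU pool, the 100-page scan would have flushed all eight hot pages; confining the scan to a two-buffer ring leaves the main pool untouched, which is the behavior being claimed for the new replacement policy above.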
