Seeing as I've missed the last N messages... I'll just reply to this
one, rather than each of them in turn...

Tom Lane <[EMAIL PROTECTED]> wrote on 16.10.2004, 18:54:17:
> I wrote:
> > Josh Berkus  writes:
> >> First off, two test runs with OProfile are available at:
> >>
> >>
> > Hmm.  The stuff above 1% in the first of these is
> > Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask 
> > of 0x00 (No unit mask) count 100000
> > samples  %        app name                 symbol name
> > ...
> > 920369    2.1332  postgres                 AtEOXact_Buffers
> > ...
> > In the second test AtEOXact_Buffers is much lower (down around 0.57
> > percent) but the other suspects are similar.  Since the only difference
> > in parameters is shared_buffers (36000 vs 9000), it does look like we
> > are approaching the point where AtEOXact_Buffers is a problem, but so
> > far it's only a 2% drag.

Yes... as soon as you first mentioned AtEOXact_Buffers, I realised I'd
seen it near the top of the oprofile results on previous tests.

Although you don't say this, I presume you're acting on the thought that
a 2% drag would soon become a much larger contention point with more
users and/or smaller transactions - since these effects are highly
non-linear.

> It occurs to me that given the 8.0 resource manager mechanism, we could
> in fact dispense with AtEOXact_Buffers, or perhaps better turn it into a
> no-op unless #ifdef USE_ASSERT_CHECKING.  We'd just get rid of the
> special case for transaction termination in resowner.c and let the
> resource owner be responsible for releasing locked buffers always.  The
> OSDL results suggest that this won't matter much at the level of 10000
> or so shared buffers, but for 100000 or more buffers the linear scan in
> AtEOXact_Buffers is going to become a problem.

If the resource owner is always responsible for releasing locked
buffers, who releases the locks if the backend crashes? Do we need some
additional code in bgwriter (or?) to clean up buffer locks?

> We could also get rid of the linear search in UnlockBuffers().  The only
> thing it's for anymore is to release a BM_PIN_COUNT_WAITER flag, and
> since a backend could not be doing more than one of those at a time,
> we don't really need an array of flags for that, only a single variable.
> This does not show in the OSDL results, which I presume means that their
> test case is not exercising transaction aborts; but I think we need to
> zap both routines to make the world safe for large shared_buffers
> values.  (See also

Yes, that's important. 

> Any objection to doing this for 8.0?

As you say, these issues definitely kick in at 100000
shared_buffers - and there are a good few people out there with 800MB of
shared_buffers already.

Could I also suggest that we adopt your earlier proposal of raising
the bgwriter parameters as a permanent measure - i.e. changing the
defaults in postgresql.conf? That way, StrategyDirtyBufferList won't
immediately show itself as a problem when using the default parameter
set. It would be a shame to remove one obstacle only to leave another
one following so close behind. [...and that also argues against an
earlier thought to introduce more fine grained values for the
bgwriter's parameters, ISTM]
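
i.e. something along these lines in postgresql.conf (the values are
purely illustrative, not tested recommendations):

```
# raise the background writer defaults so StrategyDirtyBufferList's
# per-round work keeps up with dirty-buffer generation
bgwriter_delay = 200        # ms between rounds (unchanged)
bgwriter_percent = 10       # illustrative: scan more of the dirty list per round
bgwriter_maxpages = 1000    # illustrative: allow more writes per round
```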

Also, what will the vacuum delay feature do to the O(N) effect of
FlushRelationBuffers when called by VACUUM? Will the locks be held for
longer?

I think we should do some tests while running a VACUUM in the background
also, which isn't part of the DBT-2 set-up, but perhaps we might argue
*it should be for the PostgreSQL version*?

Dare we hope for a scalability increase in 8.0 after all.... 

Best Regards,

Simon Riggs
