Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2011-02-16 Thread Bruce Momjian
Tom Lane wrote:
 I wrote:
  In particular, now that there's a distinction between smgr flush
  and relcache flush, maybe we could associate targblock reset with
  smgr flush (only) and arrange to not flush the smgr level during
  ANALYZE --- basically, smgr flush would only be needed when truncating
  or reassigning the relfilenode.  I think this might work out nicely but
  haven't chased the details.
 
 I looked into that a bit more and decided that it'd be a ticklish
 change: the coupling between relcache and smgr cache is pretty tight,
 and there just isn't any provision for having an smgr cache entry live
 longer than its owning relcache entry.  Even if we could fix it to
 work reliably, this approach does nothing for the case where a backend
 actually exits after filling just part of a new page, as noted by
 Takahiro-san.
 
 The next most promising fix is to have RelationGetBufferForTuple tell
 the FSM about the new page immediately on creation.  I made a draft
 patch for that (attached).  It fixes Michael's scenario nicely ---
 all pages get filled completely --- and a simple test with pgbench
 didn't reveal any obvious change in performance.  However there is
 clear *potential* for performance loss, due to both the extra FSM
 access and the potential for increased contention because of multiple
 backends piling into the same new page.  So it would be good to do
 some real performance testing on insert-heavy scenarios before we
 consider applying this.  Any volunteers?

I have added this TODO:

Allow concurrent inserts to use recently created pages rather than
creating new ones

* http://archives.postgresql.org/pgsql-hackers/2010-05/msg00853.php 

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2011-02-04 Thread Bruce Momjian
Tom Lane wrote:
 I wrote:
  In particular, now that there's a distinction between smgr flush
  and relcache flush, maybe we could associate targblock reset with
  smgr flush (only) and arrange to not flush the smgr level during
  ANALYZE --- basically, smgr flush would only be needed when truncating
  or reassigning the relfilenode.  I think this might work out nicely but
  haven't chased the details.
 
 I looked into that a bit more and decided that it'd be a ticklish
 change: the coupling between relcache and smgr cache is pretty tight,
 and there just isn't any provision for having an smgr cache entry live
 longer than its owning relcache entry.  Even if we could fix it to
 work reliably, this approach does nothing for the case where a backend
 actually exits after filling just part of a new page, as noted by
 Takahiro-san.
 
 The next most promising fix is to have RelationGetBufferForTuple tell
 the FSM about the new page immediately on creation.  I made a draft
 patch for that (attached).  It fixes Michael's scenario nicely ---
 all pages get filled completely --- and a simple test with pgbench
 didn't reveal any obvious change in performance.  However there is
 clear *potential* for performance loss, due to both the extra FSM
 access and the potential for increased contention because of multiple
 backends piling into the same new page.  So it would be good to do
 some real performance testing on insert-heavy scenarios before we
 consider applying this.  Any volunteers?
 
 Note: patch is against HEAD but should work in 8.4, if you reverse out
 the use of the rd_targblock access macros.

Is this something we want to address or should I just add it to the
TODO?

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-31 Thread Tom Lane
I wrote:
 In particular, now that there's a distinction between smgr flush
 and relcache flush, maybe we could associate targblock reset with
 smgr flush (only) and arrange to not flush the smgr level during
 ANALYZE --- basically, smgr flush would only be needed when truncating
 or reassigning the relfilenode.  I think this might work out nicely but
 haven't chased the details.

I looked into that a bit more and decided that it'd be a ticklish
change: the coupling between relcache and smgr cache is pretty tight,
and there just isn't any provision for having an smgr cache entry live
longer than its owning relcache entry.  Even if we could fix it to
work reliably, this approach does nothing for the case where a backend
actually exits after filling just part of a new page, as noted by
Takahiro-san.

The next most promising fix is to have RelationGetBufferForTuple tell
the FSM about the new page immediately on creation.  I made a draft
patch for that (attached).  It fixes Michael's scenario nicely ---
all pages get filled completely --- and a simple test with pgbench
didn't reveal any obvious change in performance.  However there is
clear *potential* for performance loss, due to both the extra FSM
access and the potential for increased contention because of multiple
backends piling into the same new page.  So it would be good to do
some real performance testing on insert-heavy scenarios before we
consider applying this.  Any volunteers?

Note: patch is against HEAD but should work in 8.4, if you reverse out
the use of the rd_targblock access macros.

regards, tom lane

Index: src/backend/access/heap/hio.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/access/heap/hio.c,v
retrieving revision 1.78
diff -c -r1.78 hio.c
*** src/backend/access/heap/hio.c	9 Feb 2010 21:43:29 -	1.78
--- src/backend/access/heap/hio.c	31 May 2010 20:44:29 -
***************
*** 354,384 
  	 * is empty (this should never happen, but if it does we don't want to
  	 * risk wiping out valid data).
  	 */
  	page = BufferGetPage(buffer);
  
  	if (!PageIsNew(page))
  		elog(ERROR, "page %u of relation \"%s\" should be empty but is not",
! 			 BufferGetBlockNumber(buffer),
! 			 RelationGetRelationName(relation));
  
  	PageInit(page, BufferGetPageSize(buffer), 0);
  
! 	if (len > PageGetHeapFreeSpace(page))
  	{
  		/* We should not get here given the test at the top */
  		elog(PANIC, "tuple is too big: size %lu", (unsigned long) len);
  	}
  
  	/*
  	 * Remember the new page as our target for future insertions.
- 	 *
- 	 * XXX should we enter the new page into the free space map immediately,
- 	 * or just keep it for this backend's exclusive use in the short run
- 	 * (until VACUUM sees it)?	Seems to depend on whether you expect the
- 	 * current backend to make more insertions or not, which is probably a
- 	 * good bet most of the time.  So for now, don't add it to FSM yet.
  	 */
! 	RelationSetTargetBlock(relation, BufferGetBlockNumber(buffer));
  
  	return buffer;
  }
--- 354,388 
  	 * is empty (this should never happen, but if it does we don't want to
  	 * risk wiping out valid data).
  	 */
+ 	targetBlock = BufferGetBlockNumber(buffer);
  	page = BufferGetPage(buffer);
  
  	if (!PageIsNew(page))
  		elog(ERROR, "page %u of relation \"%s\" should be empty but is not",
! 			 targetBlock, RelationGetRelationName(relation));
  
  	PageInit(page, BufferGetPageSize(buffer), 0);
  
! 	pageFreeSpace = PageGetHeapFreeSpace(page);
! 	if (len > pageFreeSpace)
  	{
  		/* We should not get here given the test at the top */
  		elog(PANIC, "tuple is too big: size %lu", (unsigned long) len);
  	}
  
  	/*
+ 	 * If using FSM, mark the page in FSM as having whatever amount of
+ 	 * free space will be left after our insertion.  This is needed so that
+ 	 * the free space won't be forgotten about if this backend doesn't use
+ 	 * it up before exiting or flushing the rel's relcache entry.
+ 	 */
+ 	if (use_fsm)
+ 		RecordPageWithFreeSpace(relation, targetBlock, pageFreeSpace - len);
+ 
+ 	/*
  	 * Remember the new page as our target for future insertions.
  	 */
! 	RelationSetTargetBlock(relation, targetBlock);
  
  	return buffer;
  }



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-30 Thread Tom Lane
Alvaro Herrera alvhe...@alvh.no-ip.org writes:
 Excerpts from Michael Renner's message of Sat May 15 20:24:36 -0400 2010:
 I've written a simple tool to generate traffic on a database [1], which
 did about 30 TX/inserts per second to a table. Upon inspecting the data
 in the table, I noticed the expected grouping of tuples which came from
 a single backend to matching pages [2]. The strange part was that the
 pages weren't completely filled but the backends seemed to jump
 arbitrarily from one page to the next [3]. For the table in question
 this resulted in about 10% wasted space.

 I think this may be related to the smgr_targblock stuff; if the relcache
 entry gets invalidated at the wrong time for whatever reason, the
 current page could be abandoned in favor of extending the rel.  This
 has changed since 8.4, and a quick perusal suggests that it should be
 less likely on 9.0 than on 8.4, but maybe there's something weird going on.

I found time to try this example finally.  The behavior that I see in
HEAD is even worse than Michael describes: there is room for 136 rows
per block in the bid table, but most blocks have only a few rows.  The
distribution after letting the exerciser run for 500 bids or so is
typically like this:

  #rows  block#
136  0
  6  1
  5  2
  4  3
  3  4
  5  5
  3  6
  1  7
  4  8
  4  9
136  10
  6  11
  7  12
  9  13
  9  14
  7  15
  9  16
  7  17
  8  18
  5  19
136  20
  2  21
  4  22
  4  23
  3  24
  5  25
  3  26
  4  27
  3  28
  2  29
  1  30
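
A distribution like this can be read directly off the tuple CTIDs; a query
along the following lines (against the exerciser's bid table) reproduces the
per-block counts:

    -- block number extracted from ctid; table name taken from the exerciser schema
    SELECT (ctid::text::point)[0]::int AS block, count(*) AS n_rows
    FROM bid
    GROUP BY block
    ORDER BY block;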

Examining the insertion timestamps and bidder numbers (client process
IDs), and correlating this with logged autovacuum activity, makes it
pretty clear what is going on.  See the logic in
RelationGetBufferForTuple, and note that at no time do we have any FSM
data for the bid table:

1. Initially, all backends will decide to insert into block 0.  They do
so until the block is full.

2. At that point, each active backend individually decides it needs to
extend the relation.  They each create a new block and start inserting
into that one, each carefully not telling anyone else about the block
so as to avoid block-level insertion contention.  In the above diagram,
blocks 1-9 are each created by a different backend and the rows inserted
into it come (mostly?) from just one backend.  Block 10's first few rows
also come from the one backend that created it, but it doesn't manage to
fill the block entirely before ...

3. After awhile, autovacuum notices all the insert activity and kicks
off an autoanalyze on the bid table.  When committed, this forces a
relcache flush for each other backend's relcache entry for bid.
In particular, the smgr targblock gets reset.

4. Now, all the backends again decide to try to insert into the last
available block.  So everybody jams into the partly-filled block 10,
until it gets filled.

5. Lather, rinse, repeat.  Since there are exactly 10 active clients
(by default) in this test program, the repeat distance is exactly 10
blocks.

The obvious thing to do about this would be to not reset targblock
on receipt of a relcache flush event, but we can *not* do that in the
general case.  The reason that that gets reset is so that it's not
left pointing to a no-longer-existent block after a VACUUM truncation.
Maybe we could develop a way to distinguish truncation events from
others, but right now the sinval signaling mechanism can't do that.
This looks like there might be sufficient grounds to do something,
though.

Attached exhibits: contents of relevant columns of the bid table
and postmaster log entries for autovacuum actions during the run.
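
For a rough cross-check that doesn't require log parsing, pg_stat_user_tables
keeps the last autovacuum/autoanalyze times per table (assuming the stats
collector is enabled), for example:

    SELECT relname, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'bid';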

regards, tom lane

   ctid   | bidder | time  
----------+--------+-------------------------------
 (0,1)|  1 | 2010-05-30 22:02:34.315279-04
 (0,2)|  2 | 2010-05-30 22:02:34.664073-04
 (0,3)| 10 | 2010-05-30 22:02:34.731018-04
 (0,4)|  4 | 2010-05-30 22:02:34.787941-04
 (0,5)|  6 | 2010-05-30 22:02:35.873605-04
 (0,6)|  2 | 2010-05-30 22:02:36.173464-04
 (0,7)|  4 | 2010-05-30 22:02:36.563819-04
 (0,8)|  4 | 2010-05-30 22:02:37.039633-04
 (0,9)|  3 | 2010-05-30 22:02:37.41705-04
 (0,10)   |  9 | 2010-05-30 22:02:37.66857-04
 (0,11)   |  8 | 2010-05-30 22:02:37.842781-04
 (0,12)   |  6 | 2010-05-30 22:02:39.554071-04
 (0,13)   |  9 | 2010-05-30 22:02:39.659859-04
 (0,14)   |  7 | 2010-05-30 22:02:40.470786-04
 (0,15)   |  6 | 2010-05-30 22:02:40.555843-04
 (0,16)   |  6 | 2010-05-30 22:02:42.587344-04
 (0,17)   |  5 | 2010-05-30 22:02:42.613972-04
 (0,18)   |  1 | 2010-05-30 22:02:42.624847-04
 (0,19)   |  3 | 2010-05-30 22:02:43.330164-04
 (0,20)   |  9 | 2010-05-30 22:02:43.480749-04
 (0,21)   |  3 | 2010-05-30 22:02:44.285052-04
 (0,22)   |  2 | 2010-05-30 22:02:44.810929-04
 

Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-30 Thread Robert Haas
On Sun, May 30, 2010 at 10:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 pretty clear what is going on.  See the logic in
 RelationGetBufferForTuple, and note that at no time do we have any FSM
 data for the bid table:

Is this because, in the absence of updates or deletes, we never vacuum it?

 4. Now, all the backends again decide to try to insert into the last
 available block.  So everybody jams into the partly-filled block 10,
 until it gets filled.

Would it be (a) feasible and (b) useful to inject some entropy into this step?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-30 Thread Takahiro Itagaki

Tom Lane t...@sss.pgh.pa.us wrote:

 3. After awhile, autovacuum notices all the insert activity and kicks
 off an autoanalyze on the bid table.  When committed, this forces a
 relcache flush for each other backend's relcache entry for bid.
 In particular, the smgr targblock gets reset.
 
 4. Now, all the backends again decide to try to insert into the last
 available block.  So everybody jams into the partly-filled block 10,
 until it gets filled.

The autovacuum process runs only analyze at step 3, not vacuum, because
the workload is insert-only, so partially filled pages are never tracked
by the free space map. We could re-run autovacuum if the autoanalyze
report showed that the table is low-density, but the additional vacuum
might just be overhead in other cases.
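
That gap is easy to see if the contrib pg_freespacemap module happens to be
installed: on an insert-only table that has never been vacuumed, the FSM
reports no usable space at all.

    -- requires the contrib module pg_freespacemap
    SELECT * FROM pg_freespace('bid');                        -- per-block free space known to the FSM
    SELECT count(*) FROM pg_freespace('bid') WHERE avail > 0; -- pages the FSM could hand out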

 The obvious thing to do about this would be to not reset targblock
 on receipt of a relcache flush event

Even if we don't reset targblock, can we solve the issue when clients
connect and disconnect for each insert? New backends only check the end
of the table and extend it, just as in this case. If we are worried
about the worst case, we might need to track newly added pages in the
free space map. Of course, we could ignore that case, since frequent
connections and disconnections should always be avoided anyway.

Regards,
---
Takahiro Itagaki
NTT Open Source Software Center





Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-30 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sun, May 30, 2010 at 10:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 pretty clear what is going on.  See the logic in
 RelationGetBufferForTuple, and note that at no time do we have any FSM
 data for the bid table:

 Is this because, in the absence of updates or deletes, we never vacuum it?

Right.

 4. Now, all the backends again decide to try to insert into the last
 available block.  So everybody jams into the partly-filled block 10,
 until it gets filled.

 Would it be (a) feasible and (b) useful to inject some entropy into this step?

Maybe, but at least in this case, the insert rate is not fast enough
that contention for the block is worth worrying about.  IMO this isn't
the part of the cycle that needs fixing.

I guess another path to a fix might be to allow the backends to record
new pages in the FSM immediately at creation.  That might result in more
insert contention, but it'd avoid losing track of the free space
permanently, which is what is happening here (unless something happens
to cause a vacuum).  One reason the current code doesn't do that is that
the old in-memory FSM couldn't efficiently support retail insertion of
single-page data, but the new FSM code hasn't got a problem with that.

regards, tom lane



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-30 Thread Greg Stark
On Mon, May 31, 2010 at 3:42 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 note that at no time do we have any FSM
 data for the bid table:


 3. After awhile, autovacuum notices all the insert activity and kicks
 off an autoanalyze on the bid table.  When committed, this forces a
 relcache flush for each other backend's relcache entry for bid.
 In particular, the smgr targblock gets reset.

This is an analyze-only scan? Why does analyze need to issue a
relcache flush? Maybe we only need to issue one for an actual vacuum
which would also populate the fsm?


-- 
greg



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-30 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 This is an analyze-only scan? Why does analyze need to issue a
 relcache flush?

Directly: to cause other backends to pick up the updated pg_class row
(with new relpages/reltuples data).

Indirectly: to cause cached plans for the rel to be invalidated,
so that they can get replanned with updated pg_statistic entries.

So we can't just not have a relcache flush here.  However, we
might be able to decouple targblock reset from the rest of it.
In particular, now that there's a distinction between smgr flush
and relcache flush, maybe we could associate targblock reset with
smgr flush (only) and arrange to not flush the smgr level during
ANALYZE --- basically, smgr flush would only be needed when truncating
or reassigning the relfilenode.  I think this might work out nicely but
haven't chased the details.
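
The pg_class update in question is easy to see directly; an (auto)analyze
refreshes the per-table estimates that every other backend has cached, for
example:

    ANALYZE bid;
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'bid';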

regards, tom lane



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-17 Thread Alvaro Herrera
Excerpts from Michael Renner's message of Sat May 15 20:24:36 -0400 2010:
 On 16.05.2010 02:16, Tom Lane wrote:
  Michael Renner michael.ren...@amd.co.at writes:
  I've written a simple tool to generate traffic on a database [1], which
  did about 30 TX/inserts per second to a table. Upon inspecting the data
  in the table, I noticed the expected grouping of tuples which came from
  a single backend to matching pages [2]. The strange part was that the
  pages weren't completely filled but the backends seemed to jump
  arbitrarily from one page to the next [3]. For the table in question
  this resulted in about 10% wasted space.
 
  Which table would that be?  The trigger-driven updates to auction,
  in particular, would certainly guarantee some amount of wasted space.
 
 Yeah, the auction table receives heavy updates and gets vacuumed regularly.
 
 The behavior I showed was for the bid table, which only gets inserts 
 (and triggers the updates for the auction table).

I think this may be related to the smgr_targblock stuff; if the relcache
entry gets invalidated at the wrong time for whatever reason, the
current page could be abandoned in favor of extending the rel.  This
has changed since 8.4, and a quick perusal suggests that it should be
less likely on 9.0 than on 8.4, but maybe there's something weird going on.


[HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-15 Thread Michael Renner
While preparing a replication test setup with 9.0beta1 I noticed strange 
page allocation patterns which Andrew Gierth found interesting enough to 
report here.


I've written a simple tool to generate traffic on a database [1], which 
did about 30 TX/inserts per second to a table. Upon inspecting the data 
in the table, I noticed the expected grouping of tuples which came from 
a single backend to matching pages [2]. The strange part was that the 
pages weren't completely filled but the backends seemed to jump 
arbitrarily from one page to the next [3]. For the table in question 
this resulted in about 10% wasted space.


After issuing a VACUUM on the table the free space map got updated (or 
initialized?) and the backends used the remaining space in the pages, 
though the spurious page allocation continued.



best regards,
Michael

[1] https://workbench.amd.co.at/hg/pgworkshop/file/dc5ab49c99bb/pgexerciser

[2] E.g.:

(0,1) TX1
(0,2) TX5
(0,3) TX7
..
(1,1) TX2
(1,2) TX6
(1,3) TX9

etc.

[3] http://nopaste.narf.at/show/55/
Optimal usage seems to be 136 tuples per page for the table in question.
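
The ~10% figure can be recomputed from the table itself; a rough sketch,
assuming the default 8 kB block size, a non-empty table, and the
136-rows-per-page optimum:

    SELECT 100.0 * (1.0 - count(*) / (136.0 * (pg_relation_size('bid') / 8192)))
           AS pct_wasted
    FROM bid;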



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-15 Thread Tom Lane
Michael Renner michael.ren...@amd.co.at writes:
 I've written a simple tool to generate traffic on a database [1], which 
 did about 30 TX/inserts per second to a table. Upon inspecting the data 
 in the table, I noticed the expected grouping of tuples which came from 
 a single backend to matching pages [2]. The strange part was that the 
 pages weren't completely filled but the backends seemed to jump 
 arbitrarily from one page to the next [3]. For the table in question 
 this resulted in about 10% wasted space.

Which table would that be?  The trigger-driven updates to auction,
in particular, would certainly guarantee some amount of wasted space.

regards, tom lane



Re: [HACKERS] Unexpected page allocation behavior on insert-only tables

2010-05-15 Thread Michael Renner

On 16.05.2010 02:16, Tom Lane wrote:

 Michael Renner michael.ren...@amd.co.at writes:
 
  I've written a simple tool to generate traffic on a database [1], which
  did about 30 TX/inserts per second to a table. Upon inspecting the data
  in the table, I noticed the expected grouping of tuples which came from
  a single backend to matching pages [2]. The strange part was that the
  pages weren't completely filled but the backends seemed to jump
  arbitrarily from one page to the next [3]. For the table in question
  this resulted in about 10% wasted space.
 
 Which table would that be?  The trigger-driven updates to auction,
 in particular, would certainly guarantee some amount of wasted space.


Yeah, the auction table receives heavy updates and gets vacuumed regularly.

The behavior I showed was for the bid table, which only gets inserts 
(and triggers the updates for the auction table).


best regards,
Michael
