Claudio Freire <klaussfre...@gmail.com> writes:
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.

> I did manage to reproduce the original test and got a consistent improvement.

It occurred to me that I could force the issue by hacking bufpage.c to
execute compactify_tuples multiple times each time it was called, as in
the first patch attached below.  This has nothing directly to do with real
performance of course, but it's making use of the PG system to provide
realistic test data for microbenchmarking compactify_tuples.  I was a bit
surprised to find that I had to set the repeat count to 1000 to make
compactify_tuples really dominate the runtime (while using the originally
posted test case ... maybe there's a better one?).  But once it did,
perf gave me this for the CPU hotspots:

+   27.97%    27.88%        229040  postmaster  libc-2.12.so  [.] memmove
+   14.61%    14.57%        119704  postmaster  postgres      [.] compactify_tuples
+   12.40%    12.37%        101566  postmaster  libc-2.12.so  [.] _wordcopy_bwd_aligned
+   11.68%    11.65%         95685  postmaster  libc-2.12.so  [.] _wordcopy_fwd_aligned
+    7.67%     7.64%         62801  postmaster  postgres      [.] itemoffcompare
+    7.00%     6.98%         57303  postmaster  postgres      [.] compactify_tuples_loop
+    4.53%     4.52%         37111  postmaster  postgres      [.] pg_qsort
+    1.71%     1.70%         13992  postmaster  libc-2.12.so  [.] memcpy

which says that micro-optimizing the sort step is a complete, utter waste
of time, and what we need to be worried about is the data copying part.

The memcpy part of the above is presumably from the scaffolding memcpy
calls in compactify_tuples_loop, which is interesting because those move
at least as much data as the memmoves do, yet cost under 2% of the
runtime versus memmove's 28%.  So at least with RHEL6's version of
glibc, memmove is apparently a lot slower than memcpy.
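
To sanity-check that outside of Postgres, a minimal standalone
comparison (block size matches the default page size; the loop count is
an arbitrary choice) could look like the sketch below.  Caveat: this
only exercises non-overlapping copies, whereas the in-page moves done by
compactify_tuples can overlap, which is where glibc's _wordcopy paths
come into play; so treat it as no more than a rough indication.

/*
 * Quick memcpy-vs-memmove smoke test.  Build with "gcc -O2 memtest.c"
 * and compare the two timings; absolute numbers are meaningless.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BLCKSZ 8192				/* default Postgres page size */
#define NLOOPS 1000000

int
main(void)
{
	static char src[BLCKSZ];
	static char dst[BLCKSZ];
	clock_t		t0,
				t1,
				t2;
	int			i;

	t0 = clock();
	for (i = 0; i < NLOOPS; i++)
	{
		src[i % BLCKSZ] = (char) i;	/* defeat dead-store elimination */
		memcpy(dst, src, BLCKSZ);
	}
	t1 = clock();
	for (i = 0; i < NLOOPS; i++)
	{
		src[i % BLCKSZ] = (char) i;
		memmove(dst, src, BLCKSZ);
	}
	t2 = clock();

	printf("memcpy:  %.3f s\n", (double) (t1 - t0) / CLOCKS_PER_SEC);
	printf("memmove: %.3f s\n", (double) (t2 - t1) / CLOCKS_PER_SEC);
	return dst[BLCKSZ - 1] != 0;	/* keep dst live */
}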

This gave me the idea to memcpy the page into some workspace and then use
memcpy, not memmove, to put the tuples back into the caller's copy of the
page.  That gave me about a 50% improvement in observed TPS, and a perf
profile like this:

+   38.50%    38.40%        299520  postmaster  postgres      [.] compactify_tuples
+   31.11%    31.02%        241975  postmaster  libc-2.12.so  [.] memcpy
+    8.74%     8.72%         68022  postmaster  postgres      [.] itemoffcompare
+    6.51%     6.49%         50625  postmaster  postgres      [.] compactify_tuples_loop
+    4.21%     4.19%         32719  postmaster  postgres      [.] pg_qsort
+    1.70%     1.69%         13213  postmaster  postgres      [.] memcpy@plt

There still doesn't seem to be any point in replacing the qsort,
but it does seem like something like the second attached patch
might be worth doing.

So I'm now wondering why my results seem to be so much different
from those of other people who have tried this, both as to whether
compactify_tuples is worth working on at all and as to what needs
to be done to it if so.  Thoughts?

                        regards, tom lane

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..bf6d308 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*************** compactify_tuples(itemIdSort itemidbase,
*** 465,470 ****
--- 465,489 ----
  	phdr->pd_upper = upper;
  }
  
+ static void
+ compactify_tuples_loop(itemIdSort itemidbase, int nitems, Page page)
+ {
+ 	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+ 	union {
+ 		char page[BLCKSZ];
+ 		double align;
+ 	} pagecopy;
+ 	int i;
+ 
+ 	for (i = 1; i < 1000; i++)
+ 	{
+ 		memcpy(copy, itemidbase, sizeof(itemIdSortData) * nitems);
+ 		memcpy(pagecopy.page, page, BLCKSZ);
+ 		compactify_tuples(copy, nitems, pagecopy.page);
+ 	}
+ 	compactify_tuples(itemidbase, nitems, page);
+ }
+ 
  /*
   * PageRepairFragmentation
   *
*************** PageRepairFragmentation(Page page)
*** 560,566 ****
  					 errmsg("corrupted item lengths: total %u, available space %u",
  							(unsigned int) totallen, pd_special - pd_lower)));
  
! 		compactify_tuples(itemidbase, nstorage, page);
  	}
  
  	/* Set hint bit for PageAddItem */
--- 579,585 ----
  					 errmsg("corrupted item lengths: total %u, available space %u",
  							(unsigned int) totallen, pd_special - pd_lower)));
  
! 		compactify_tuples_loop(itemidbase, nstorage, page);
  	}
  
  	/* Set hint bit for PageAddItem */
*************** PageIndexMultiDelete(Page page, OffsetNu
*** 940,946 ****
  	phdr->pd_lower = SizeOfPageHeaderData + nused * sizeof(ItemIdData);
  
  	/* and compactify the tuple data */
! 	compactify_tuples(itemidbase, nused, page);
  }
  
  
--- 959,965 ----
  	phdr->pd_lower = SizeOfPageHeaderData + nused * sizeof(ItemIdData);
  
  	/* and compactify the tuple data */
! 	compactify_tuples_loop(itemidbase, nused, page);
  }
  
  
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..e485398 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*************** static void
*** 441,446 ****
--- 441,450 ----
  compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
  {
  	PageHeader	phdr = (PageHeader) page;
+ 	union {
+ 		char page[BLCKSZ];
+ 		double align;
+ 	} pagecopy;
  	Offset		upper;
  	int			i;
  
*************** compactify_tuples(itemIdSort itemidbase,
*** 448,464 ****
  	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
  		  itemoffcompare);
  
  	upper = phdr->pd_special;
  	for (i = 0; i < nitems; i++)
  	{
  		itemIdSort	itemidptr = &itemidbase[i];
  		ItemId		lp;
  
- 		lp = PageGetItemId(page, itemidptr->offsetindex + 1);
  		upper -= itemidptr->alignedlen;
! 		memmove((char *) page + upper,
! 				(char *) page + itemidptr->itemoff,
! 				itemidptr->alignedlen);
  		lp->lp_off = upper;
  	}
  
--- 452,470 ----
  	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
  		  itemoffcompare);
  
+ 	memcpy(pagecopy.page, page, BLCKSZ);
+ 
  	upper = phdr->pd_special;
  	for (i = 0; i < nitems; i++)
  	{
  		itemIdSort	itemidptr = &itemidbase[i];
  		ItemId		lp;
  
  		upper -= itemidptr->alignedlen;
! 		memcpy((char *) page + upper,
! 			   pagecopy.page + itemidptr->itemoff,
! 			   itemidptr->alignedlen);
! 		lp = PageGetItemId(page, itemidptr->offsetindex + 1);
  		lp->lp_off = upper;
  	}
  