Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Peter Geoghegan
On Wed, Nov 8, 2017 at 12:59 PM, Andres Freund  wrote:
> I complained about multiple related things; I'm not exactly sure which
> of them you're referring to here:
> - The fact that HeapTupleHeaderData's are commonly iterated over in
>   reverse order is bad for performance. For shared buffers resident
>   workloads involving seqscans that yields 15-25% slowdowns for me. It's
>   trivial to fix that by just changing iteration order, but that
>   obviously changes results. But we could reorder the page during heap
>   pruning.

FWIW, the classic page layout (the one that appears in Gray's
Transaction Processing Systems, at any rate) has the ItemId array at
the end of the page and the tuples at the start (immediately after a
generic page header) -- it's the other way around.

I think that that has its pros and cons.

> - The layout of items in index pages is suboptimal. We regularly do
>   binary searches over the linearly ordered items, which is cache
>   inefficient. So instead we should sort items as [1/2, 1/4, 3/4, ...]
>   elements, which will access items in a close-ish to linear manner.

I still think that we can repurpose each ItemId's lp_len as an
abbreviated key in internal index pages [1], and always get IndexTuple
size through the index tuple header. I actually got as far as writing a
very rough prototype of that. That's obviously a significant project,
but it seems doable.

[1] 
https://www.postgresql.org/message-id/CAH2-Wz=mv4dmoapficrsyntv2kinxeotbwuy5r7fxxoc-oe...@mail.gmail.com
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Andres Freund
On 2017-11-08 12:02:40 -0500, Tom Lane wrote:
> BTW, it strikes me that in considering the rebuild-the-page approach,
> we should not have blinders on and just measure the speed of
> PageRepairFragmentation.  Rather, we should take a look at what happens
> subsequently given a physically-ordered set of tuples.  I can recall
> Andres or someone moaning awhile ago about lack of locality of access in
> index page searches --- maybe applying that approach while vacuuming
> indexes will help?

I complained about multiple related things; I'm not exactly sure which
of them you're referring to here:
- The fact that HeapTupleHeaderData's are commonly iterated over in
  reverse order is bad for performance. For shared buffers resident
  workloads involving seqscans that yields 15-25% slowdowns for me. It's
  trivial to fix that by just changing iteration order, but that
  obviously changes results. But we could reorder the page during heap
  pruning.

  But that's fairly independent of indexes, so I'm not sure whether
  that's what you're referring to.

- The layout of items in index pages is suboptimal. We regularly do
  binary searches over the linearly ordered items, which is cache
  inefficient. So instead we should sort items as [1/2, 1/4, 3/4, ...]
  elements, which will access items in a close-ish to linear manner.

  But that's fairly independent of pruning, so I'm not sure whether
  that's what you're referring to, either.

Greetings,

Andres Freund




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Юрий Соколов
2017-11-08 20:02 GMT+03:00 Tom Lane :
>
> Claudio Freire  writes:
> > What's perhaps not clear is whether there are better ideas. Like
> > rebuilding the page as Tom proposes, which doesn't seem like a bad
> > idea. Bucket sort already is O(bytes), just as memcpy, only it has a
> > lower constant factor (it's bytes/256 in the original patch), which
> > might make copying the whole page an extra time lose against bucket
> > sort in a few cases.
>
> > Deciding that last point does need more benchmarking. That doesn't
> > mean the other improvements can't be pursued in the meanwhile, right?
>
> Well, I doubt we're going to end up committing more than one of these
> ideas.  The question is which way is best.  If people are willing to
> put in the work to test all of them, let's do it.
>
> BTW, it strikes me that in considering the rebuild-the-page approach,
> we should not have blinders on and just measure the speed of
> PageRepairFragmentation.  Rather, we should take a look at what happens
> subsequently given a physically-ordered set of tuples.  I can recall
> Andres or someone moaning awhile ago about lack of locality of access in
> index page searches --- maybe applying that approach while vacuuming
> indexes will help?
>
> regards, tom lane

I'd like to add qsort_template.h as Claudio suggested, i.e. in a way close to
simplehash.h. With such a template header, there will be no need for
gen_qsort_tuple.pl.

With regards,
Sokolov Yura


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Tom Lane
Claudio Freire  writes:
> What's perhaps not clear is whether there are better ideas. Like
> rebuilding the page as Tom proposes, which doesn't seem like a bad
> idea. Bucket sort already is O(bytes), just as memcpy, only it has a
> lower constant factor (it's bytes/256 in the original patch), which
> might make copying the whole page an extra time lose against bucket
> sort in a few cases.

> Deciding that last point does need more benchmarking. That doesn't
> mean the other improvements can't be pursued in the meanwhile, right?

Well, I doubt we're going to end up committing more than one of these
ideas.  The question is which way is best.  If people are willing to
put in the work to test all of them, let's do it.

BTW, it strikes me that in considering the rebuild-the-page approach,
we should not have blinders on and just measure the speed of
PageRepairFragmentation.  Rather, we should take a look at what happens
subsequently given a physically-ordered set of tuples.  I can recall
Andres or someone moaning awhile ago about lack of locality of access in
index page searches --- maybe applying that approach while vacuuming
indexes will help?

regards, tom lane




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Claudio Freire
On Wed, Nov 8, 2017 at 12:33 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Tue, Nov 7, 2017 at 4:39 PM, Tom Lane  wrote:
>>> What I'm getting from the standard pgbench measurements, on both machines,
>>> is that this patch might be a couple percent slower than HEAD, but that is
>>> barely above the noise floor so I'm not too sure about it.
>
>> Hmm.  It seems like slowing down single client performance by a couple
>> of percent is something that we really don't want to do.
>
> I do not think there is any change here that can be proven to always be a
> win.  Certainly the original patch, which proposes to replace an O(n log n)
> sort algorithm with an O(n^2) one, should not be thought to be that.
> The question to focus on is what's the average case, and I'm not sure how
> to decide what the average case is.  But more than two test scenarios
> would be a good start.
>
> regards, tom lane

Doing no change to the overall algorithm and replacing qsort with an
inlineable type-specific one should be a net win in all cases.

Doing bucket sort with a qsort of large buckets (or small tuple
arrays) should also be a net win in all cases.

The case for shell sort is perhaps less clear-cut, but let's not forget the
original patch only uses it on very small arrays, and very infrequently
at that.

What's perhaps not clear is whether there are better ideas. Like
rebuilding the page as Tom proposes, which doesn't seem like a bad
idea. Bucket sort already is O(bytes), just as memcpy, only it has a
lower constant factor (it's bytes/256 in the original patch), which
might make copying the whole page an extra time lose against bucket
sort in a few cases.
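As a hedged sketch of the bucket-sort shape under discussion -- assuming
32-byte-wide buckets as described downthread, and using stand-in types
(DemoItem etc.) rather than anything from the actual patch:

```c
#include <string.h>

#define PAGE_SIZE    8192
#define BUCKET_WIDTH 32                       /* assumed bucket width */
#define NBUCKETS     (PAGE_SIZE / BUCKET_WIDTH)

typedef struct { unsigned short off; unsigned short len; } DemoItem;

/* Bucket sort into decreasing 'off' order: count items per 32-byte
 * bucket, turn counts into start positions (high buckets first),
 * scatter into scratch, then run one insertion-sort pass to fix
 * ordering within buckets -- cheap, since a bucket rarely holds more
 * than a couple of items. */
static void bucket_sort_items(DemoItem *items, int n, DemoItem *scratch)
{
    int counts[NBUCKETS] = {0};
    int starts[NBUCKETS];
    int i, b, pos = 0;

    for (i = 0; i < n; i++)
        counts[items[i].off / BUCKET_WIDTH]++;
    for (b = NBUCKETS - 1; b >= 0; b--)       /* high offsets come first */
    {
        starts[b] = pos;
        pos += counts[b];
    }
    for (i = 0; i < n; i++)
        scratch[starts[items[i].off / BUCKET_WIDTH]++] = items[i];

    /* nearly sorted already, so this pass costs O(n + inversions) */
    for (i = 1; i < n; i++)
    {
        DemoItem tmp = scratch[i];
        int j = i - 1;

        while (j >= 0 && scratch[j].off < tmp.off)
        {
            scratch[j + 1] = scratch[j];
            j--;
        }
        scratch[j + 1] = tmp;
    }
    memcpy(items, scratch, (size_t) n * sizeof(DemoItem));
}
```

The O(bytes) term Claudio mentions is the two passes over the counts/starts
arrays; everything else is linear in the number of items.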

Deciding that last point does need more benchmarking. That doesn't
mean the other improvements can't be pursued in the meanwhile, right?




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Peter Geoghegan
On Wed, Nov 8, 2017 at 8:19 AM, Robert Haas  wrote:
> I don't remember any more just how much faster qsort_tuple() and
> qsort_ssup() are than plain qsort(), but it was significant enough to
> convince me to commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0...

IIRC, qsort_ssup() was about 20% faster at the time, while
qsort_tuple() was 5% - 10% faster.

-- 
Peter Geoghegan




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Tom Lane
Robert Haas  writes:
> Just to throw a random idea out here, we currently have
> gen_qsort_tuple.pl producing qsort_tuple() and qsort_ssup().  Maybe it
> could be modified to also produce a specialized qsort_itemids().  That
> might be noticeably faster than our general-purpose qsort() for the
> reasons mentioned in the comments in gen_qsort_tuple.pl, viz:

+1 for somebody trying that (I'm not volunteering, though).

regards, tom lane




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Robert Haas
On Wed, Nov 8, 2017 at 10:33 AM, Tom Lane  wrote:
> I do not think there is any change here that can be proven to always be a
> win.  Certainly the original patch, which proposes to replace an O(n log n)
> sort algorithm with an O(n^2) one, should not be thought to be that.
> The question to focus on is what's the average case, and I'm not sure how
> to decide what the average case is.  But more than two test scenarios
> would be a good start.

I appreciate the difficulties here; I'm just urging caution.  Let's
not change things just to clear this patch off our plate.

Just to throw a random idea out here, we currently have
gen_qsort_tuple.pl producing qsort_tuple() and qsort_ssup().  Maybe it
could be modified to also produce a specialized qsort_itemids().  That
might be noticeably faster than our general-purpose qsort() for the
reasons mentioned in the comments in gen_qsort_tuple.pl, viz:

# The major effects are (1) inlining simple tuple comparators is much faster
# than jumping through a function pointer and (2) swap and vecswap operations
# specialized to the particular data type of interest (in this case, SortTuple)
# are faster than the generic routines.

I don't remember any more just how much faster qsort_tuple() and
qsort_ssup() are than plain qsort(), but it was significant enough to
convince me to commit 337b6f5ecf05b21b5e997986884d097d60e4e3d0...
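Effect (1) from the quoted comment can be sketched in isolation. This is
not the gen_qsort_tuple.pl output -- just a hand-rolled illustration using
a stand-in struct mirroring itemIdSortData, with insertion sort standing
in for the full quicksort a real qsort_itemids() would be:

```c
#include <stdint.h>

typedef struct
{
    uint16_t offsetindex;   /* linp array index */
    int16_t  itemoff;       /* page offset of item data */
    uint16_t alignedlen;    /* MAXALIGN(item data len) */
} DemoItemId;

/* Specialized sort with the comparison inlined: the compiler sees
 * "a[j].itemoff < tmp.itemoff" directly, instead of jumping through a
 * function pointer for every comparison as generic qsort() must. */
static void sort_itemids_desc(DemoItemId *a, int n)
{
    for (int i = 1; i < n; i++)
    {
        DemoItemId tmp = a[i];
        int j = i - 1;

        /* decreasing itemoff order, as compactify_tuples wants */
        while (j >= 0 && a[j].itemoff < tmp.itemoff)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = tmp;
    }
}
```

Effect (2) -- specialized swaps -- falls out for free here, since the
compiler copies the 6-byte struct directly rather than byte-swapping
through generic routines.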

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Tom Lane
Robert Haas  writes:
> On Tue, Nov 7, 2017 at 4:39 PM, Tom Lane  wrote:
>> What I'm getting from the standard pgbench measurements, on both machines,
>> is that this patch might be a couple percent slower than HEAD, but that is
>> barely above the noise floor so I'm not too sure about it.

> Hmm.  It seems like slowing down single client performance by a couple
> of percent is something that we really don't want to do.

I do not think there is any change here that can be proven to always be a
win.  Certainly the original patch, which proposes to replace an O(n log n)
sort algorithm with an O(n^2) one, should not be thought to be that.
The question to focus on is what's the average case, and I'm not sure how
to decide what the average case is.  But more than two test scenarios
would be a good start.

regards, tom lane




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-08 Thread Robert Haas
On Tue, Nov 7, 2017 at 4:39 PM, Tom Lane  wrote:
> What I'm getting from the standard pgbench measurements, on both machines,
> is that this patch might be a couple percent slower than HEAD, but that is
> barely above the noise floor so I'm not too sure about it.

Hmm.  It seems like slowing down single client performance by a couple
of percent is something that we really don't want to do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Peter Geoghegan
On Tue, Nov 7, 2017 at 2:40 PM, Юрий Соколов  wrote:
>> The same is true of unique indexes vs. non-unique.
>
> Off-topic: recently I had a look at setting LP_DEAD in indexes.
> I didn't find a huge difference between unique and non-unique indexes.
> There is a codepath that works only for unique indexes, but it is called
> less frequently than the common codepath that also sets LP_DEAD.

I meant to say that this is only important with UPDATEs + contention.
The extra LP_DEAD setting within _bt_check_unique() makes quite a
noticeable difference, at least in terms of index bloat (though less
so in terms of raw TPS).

-- 
Peter Geoghegan




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Peter Geoghegan
On Tue, Nov 7, 2017 at 2:36 PM, Tom Lane  wrote:
> Peter Geoghegan  writes:
>> My point is only that it's worth considering that this factor affects
>> how representative your sympathetic case is. It's not clear how many
>> PageIndexMultiDelete() calls are from opportunistic calls to
>> _bt_vacuum_one_page(), how important that subset of calls is, and so
>> on. Maybe it doesn't matter at all.
>
> According to the perf measurements I took earlier, essentially all the
> compactify_tuple calls in this test case are from PageRepairFragmentation
> (from heap_page_prune), not PageIndexMultiDelete.

For a workload with high contention (e.g., lots of updates that follow
a Zipfian distribution) lots of important cleanup has to occur within
_bt_vacuum_one_page(), and with an exclusive buffer lock held. It may
be that making PageIndexMultiDelete() faster pays off
disproportionately well there, but I'd only expect to see that at
higher client count workloads with lots of contention -- workloads
that we still do quite badly on (note that we have never done well
here, even prior to commit 2ed5b87f9 -- Yura showed this at one
point).

It's possible that this work influenced Yura in some way.

When Postgres Pro did some benchmarking of this at my request, we saw
that the bloat got really bad past a certain client count. IIRC there
was a clear point at around 32 or 64 clients where TPS nosedived,
presumably because cleanup could not keep up. This was a 128 core box,
or something like that, so you'll probably have difficulty recreating
it with what's at hand.

-- 
Peter Geoghegan




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Юрий Соколов
2017-11-08 1:11 GMT+03:00 Peter Geoghegan :
>
> The same is true of unique indexes vs. non-unique.

Off-topic: recently I had a look at setting LP_DEAD in indexes.
I didn't find a huge difference between unique and non-unique indexes.
There is a codepath that works only for unique indexes, but it is called
less frequently than the common codepath that also sets LP_DEAD.


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Tom Lane
Peter Geoghegan  writes:
> My point is only that it's worth considering that this factor affects
> how representative your sympathetic case is. It's not clear how many
> PageIndexMultiDelete() calls are from opportunistic calls to
> _bt_vacuum_one_page(), how important that subset of calls is, and so
> on. Maybe it doesn't matter at all.

According to the perf measurements I took earlier, essentially all the
compactify_tuple calls in this test case are from PageRepairFragmentation
(from heap_page_prune), not PageIndexMultiDelete.

I'd be the first to agree that I doubt that test case is really
representative.  I'd been whacking around Yura's original case to
try to get PageRepairFragmentation's runtime up to some measurable
fraction of the total, and while I eventually succeeded, I'm not
sure that too many real workloads will look like that.  However,
if we can make it smaller as well as faster, that seems like a win
even if it's not a measurable fraction of most workloads.

regards, tom lane




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Peter Geoghegan

On Tue, Nov 7, 2017 at 1:39 PM, Tom Lane  wrote:
> So I think we should seriously consider the attached, but it'd be a
> good idea to benchmark it on a wider variety of platforms and test
> cases.

> create unlogged table test3 (
>  id integer PRIMARY KEY with (fillfactor=85),
>  val text
>  ) WITH (fillfactor=85);

Passing observation:  Unlogged table B-Tree indexes have a much
greater tendency for LP_DEAD setting/kill_prior_tuple() working out
following commit 2ed5b87f9 [1], because unlogged tables were
unaffected by that commit. (I've been meaning to follow up with my
analysis of that regression, actually.)

The same is true of unique indexes vs. non-unique. There are workloads
where the opportunistic LP_DEAD setting performed by
_bt_check_unique() is really important (it calls ItemIdMarkDead()).
Think high contention workloads, like when Postgres is used to
implement a queue table.

My point is only that it's worth considering that this factor affects
how representative your sympathetic case is. It's not clear how many
PageIndexMultiDelete() calls are from opportunistic calls to
_bt_vacuum_one_page(), how important that subset of calls is, and so
on. Maybe it doesn't matter at all.

[1] 
https://postgr.es/m/cah2-wzmyry7mnjf0gw5wtk3cszh3gqfhhoxvsyuno5pk8cu...@mail.gmail.com
-- 
Peter Geoghegan




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Tom Lane
I've been getting less and less excited about this patch, because I still
couldn't measure any above-the-noise performance improvement without
artificial exaggerations, and some cases seemed actually slower.

However, this morning I had an epiphany: why are we sorting at all?

There is no requirement that these functions preserve the physical
ordering of the tuples' data areas, only that the line-pointer ordering be
preserved.  Indeed, reorganizing the data areas into an ordering matching
the line pointers is probably a good thing, because it should improve
locality of access in future scans of the page.  This is trivial to
implement if we copy the data into a workspace area and back again, as
I was already proposing to do to avoid memmove.  Moreover, at that point
there's little value in a separate compactify function at all: we can
integrate the data-copying logic into the line pointer scan loops in
PageRepairFragmentation and PageIndexMultiDelete, and get rid of the
costs of constructing the intermediate itemIdSortData arrays.
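In outline, the sort-free compaction described above looks roughly like
this. This is a simplified sketch with stand-in types (DemoLinePtr, a bare
byte array for the page), not the attached patch:

```c
#include <string.h>
#include <stdint.h>

#define DEMO_PAGE_SIZE 8192

typedef struct { uint16_t lp_off; uint16_t lp_len; } DemoLinePtr;

/* Sort-free compaction: instead of sorting items by offset, walk the
 * line pointers in order, copy each tuple's data into a workspace
 * packed downward from the end of the page, then copy the packed
 * region back.  Physical order afterwards matches line-pointer order,
 * which should also help locality in later scans of the page. */
static uint16_t repack_page(char *page, DemoLinePtr *lps, int nlps)
{
    char     workspace[DEMO_PAGE_SIZE];
    uint16_t upper = DEMO_PAGE_SIZE;    /* stand-in for pd_special */

    for (int i = 0; i < nlps; i++)
    {
        upper -= lps[i].lp_len;
        memcpy(workspace + upper, page + lps[i].lp_off, lps[i].lp_len);
        lps[i].lp_off = upper;          /* tuple's new location */
    }
    /* one contiguous memcpy back; no overlap, so no memmove needed */
    memcpy(page + upper, workspace + upper, DEMO_PAGE_SIZE - upper);
    return upper;                       /* new pd_upper */
}
```

Note both loops are straight-line per tuple -- there is no O(n log n)
term at all, which is the whole point of the epiphany.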

That led me to the attached patch, which is the first version of any
of this work that produces an above-the-noise performance win for me.
I'm seeing 10-20% gains on this modified version of Yura's original
example:

psql -f test3setup.sql
pgbench -M prepared -c 3 -s 1000 -T 300 -P 3 -n -f test3.sql

(sql scripts also attached below; I'm using 1GB shared_buffers and
fsync off, other parameters stock.)

However, there are a couple of objections that could be raised to
this patch:

1. It's trading off per-byte work, in the form of an extra memcpy,
to save sorting work that has per-tuple costs.  Therefore, the relatively
narrow tuples used in Yura's example offer a best-case scenario;
with wider tuples the performance might be worse.

2. On a platform with memmove not so much worse than memcpy as I'm
seeing on my RHEL6 server, trading memmove for memcpy might not be
such a win.

To address point 1, I tried some measurements on the standard pgbench
scenario, which uses significantly wider tuples.  In hopes of addressing
point 2, I also ran the measurements on a laptop running Fedora 25
(gcc 6.4.1, glibc 2.24); I haven't actually checked memmove vs memcpy
on that machine, but at least it's a reasonably late-model glibc.

What I'm getting from the standard pgbench measurements, on both machines,
is that this patch might be a couple percent slower than HEAD, but that is
barely above the noise floor so I'm not too sure about it.

So I think we should seriously consider the attached, but it'd be a
good idea to benchmark it on a wider variety of platforms and test
cases.

regards, tom lane

drop table if exists test3;

create unlogged table test3 (
 id integer PRIMARY KEY with (fillfactor=85),
 val text
 ) WITH (fillfactor=85);

insert into test3 select i, '!'||i from generate_series(1, 1000) as i;

vacuum analyze; checkpoint;

create or replace function dotest3(n int, scale float8) returns void
language plpgsql as $$
begin
for i in 1..n loop
  declare
id1 int := random() * scale;
id2 int := random() * scale;
  begin
perform * from test3 where id = id1;
update test3 set val = '!'|| id2 where id = id1;
  end;
end loop;
end $$;
select dotest3(100, :scale);
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index b6aa2af..73b73de 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*** PageRestoreTempPage(Page tempPage, Page 
*** 415,471 ****
  }
  
  /*
-  * sorting support for PageRepairFragmentation and PageIndexMultiDelete
-  */
- typedef struct itemIdSortData
- {
- 	uint16		offsetindex;	/* linp array index */
- 	int16		itemoff;		/* page offset of item data */
- 	uint16		alignedlen;		/* MAXALIGN(item data len) */
- } itemIdSortData;
- typedef itemIdSortData *itemIdSort;
- 
- static int
- itemoffcompare(const void *itemidp1, const void *itemidp2)
- {
- 	/* Sort in decreasing itemoff order */
- 	return ((itemIdSort) itemidp2)->itemoff -
- 		((itemIdSort) itemidp1)->itemoff;
- }
- 
- /*
-  * After removing or marking some line pointers unused, move the tuples to
-  * remove the gaps caused by the removed items.
-  */
- static void
- compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
- {
- 	PageHeader	phdr = (PageHeader) page;
- 	Offset		upper;
- 	int			i;
- 
- 	/* sort itemIdSortData array into decreasing itemoff order */
- 	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
- 		  itemoffcompare);
- 
- 	upper = phdr->pd_special;
- 	for (i = 0; i < nitems; i++)
- 	{
- 		itemIdSort	itemidptr = &itemidbase[i];
- 		ItemId		lp;
- 
- 		lp = PageGetItemId(page, itemidptr->offsetindex + 1);
- 		upper -= itemidptr->alignedlen;
- 		memmove((char *) page + upper,
- (char *) page + itemidptr->itemoff,
- itemidptr->alignedlen);
- 		lp->lp_off = upper;
- 	}
- 
- 	phdr->pd_upper = upper;
- }
- 
- /*
   * PageRepairFragmentation
   *
   * 

Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Andres Freund
On 2017-11-07 12:12:02 -0300, Claudio Freire wrote:
> If you need it. I'm not particularly fond of writing code before it's needed.

+1

> Otherwise, if it's a rarely-encountered corner case, I'd recommend
> simply calling the stdlib's qsort.

FWIW, we always map qsort onto our own implementation:

#define qsort(a,b,c,d) pg_qsort(a,b,c,d)

Greetings,

Andres Freund




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Claudio Freire
On Tue, Nov 7, 2017 at 11:42 AM, Юрий Соколов  wrote:
>
>
> 2017-11-07 17:15 GMT+03:00 Claudio Freire :
>> Aside from requiring all that include magic, if you place specialized
>> sort functions in a reusable header, using it is as simple as
>> including the type-specific header (or declaring the type macros and
>> including the template), and using them as regular functions. There's
>> no runtime overhead involved, especially if you declare the comparison
>> function as a macro or a static inline function. The sort itself can
>> be declared static inline as well, and the compiler will decide
>> whether it's worth inlining.
>
> Ok, if no one will complain against another one qsort implementation,
> I will add template header for qsort. Since qsort needs insertion sort,
> it will be in a same file.
> Do you approve of this?
>
> With regards,
> Sokolov Yura

If you need it. I'm not particularly fond of writing code before it's needed.

If you can measure the impact for that particular case where qsort
might be needed, and it's a real-world case, then by all means.

Otherwise, if it's a rarely-encountered corner case, I'd recommend
simply calling the stdlib's qsort.




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Юрий Соколов
2017-11-07 17:15 GMT+03:00 Claudio Freire :
>
> On Mon, Nov 6, 2017 at 9:08 PM, Юрий Соколов 
wrote:
> > 2017-11-07 1:14 GMT+03:00 Claudio Freire :
> >>
> >> I haven't seen this trick used in postgres, nor do I know whether it
> >> would be well received, so this is more like throwing an idea to see
> >> if it sticks...
> >>
> >> But a way to do this without macros is to have an includable
> >> "template" algorithm that simply doesn't define the comparison
> >> function/type, it rather assumes it:
> >>
> >> qsort_template.h
> >>
> >> #define QSORT_NAME qsort_ ## QSORT_SUFFIX
> >>
> >> static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
> >> {
> >> ... if (ELEM_LESS(arr[a], arr[b]))
> >> ...
> >> }
> >>
> >> #undef QSORT_NAME
> >>
> >> Then, in "offset_qsort.h":
> >>
> >> #define QSORT_SUFFIX offset
> >> #define ELEM_TYPE offset
> >> #define ELEM_LESS(a,b) ((a) < (b))
> >>
> >> #include "qsort_template.h"
> >>
> >> #undef QSORT_SUFFIX
> >> #undef ELEM_TYPE
> >> #undef ELEM_LESS
> >>
> >> Now, I realize this may have its cons, but it does simplify
> >> maintainance of type-specific or parameterized variants of
> >> performance-critical functions.
> >>
> >> > I can do specialized qsort for this case. But it will be larger
bunch of
> >> > code, than
> >> > shell sort.
> >> >
> >> >> And I'd recommend doing that when there is a need, and I don't think
> >> >> this patch really needs it, since bucket sort handles most cases
> >> >> anyway.
> >> >
> >> > And it still needs insertion sort for buckets.
> >> > I can agree to get rid of shell sort. But insertion sort is
necessary.
> >>
> >> I didn't suggest getting rid of insertion sort. But the trick above is
> >> equally applicable to insertion sort.
> >
> > This trick is used in simplehash.h . I agree, it could be useful for
qsort.
> > This will not make qsort inlineable, but will reduce overhead much.
> >
> > This trick is too heavy-weight for insertion sort alone, though. Without
> shellsort, insertion sort could be expressed as a 14-line macro (8 lines
> > without curly braces). But if insertion sort will be defined together
with
> > qsort (because qsort still needs it), then it is justifiable.
>
> What do you mean by heavy-weight?


I mean, I've already made a reusable sort implementation with macros
that is called like a function (with a type parameter). If we are talking
only about insertion sort, then such a macro looks much prettier than
an included file.

But qsort is better implemented with included template-header.

BTW, there is example of defining many functions with call to template
macro instead of including template header:
https://github.com/attractivechaos/klib/blob/master/khash.h
But it looks ugly.

>
> Aside from requiring all that include magic, if you place specialized
> sort functions in a reusable header, using it is as simple as
> including the type-specific header (or declaring the type macros and
> including the template), and using them as regular functions. There's
> no runtime overhead involved, especially if you declare the comparison
> function as a macro or a static inline function. The sort itself can
> be declared static inline as well, and the compiler will decide
> whether it's worth inlining.

Ok, if no one will complain against another one qsort implementation,
I will add template header for qsort. Since qsort needs insertion sort,
it will be in a same file.
Do you approve of this?

With regards,
Sokolov Yura


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-07 Thread Claudio Freire
On Mon, Nov 6, 2017 at 9:08 PM, Юрий Соколов  wrote:
> 2017-11-07 1:14 GMT+03:00 Claudio Freire :
>>
>> I haven't seen this trick used in postgres, nor do I know whether it
>> would be well received, so this is more like throwing an idea to see
>> if it sticks...
>>
>> But a way to do this without macros is to have an includable
>> "template" algorithm that simply doesn't define the comparison
>> function/type, it rather assumes it:
>>
>> qsort_template.h
>>
>> #define QSORT_NAME qsort_ ## QSORT_SUFFIX
>>
>> static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
>> {
>> ... if (ELEM_LESS(arr[a], arr[b]))
>> ...
>> }
>>
>> #undef QSORT_NAME
>>
>> Then, in "offset_qsort.h":
>>
>> #define QSORT_SUFFIX offset
>> #define ELEM_TYPE offset
>> #define ELEM_LESS(a,b) ((a) < (b))
>>
>> #include "qsort_template.h"
>>
>> #undef QSORT_SUFFIX
>> #undef ELEM_TYPE
>> #undef ELEM_LESS
>>
>> Now, I realize this may have its cons, but it does simplify
>> maintainance of type-specific or parameterized variants of
>> performance-critical functions.
>>
>> > I can do specialized qsort for this case. But it will be larger bunch of
>> > code, than
>> > shell sort.
>> >
>> >> And I'd recommend doing that when there is a need, and I don't think
>> >> this patch really needs it, since bucket sort handles most cases
>> >> anyway.
>> >
>> > And it still needs insertion sort for buckets.
>> > I can agree to get rid of shell sort. But insertion sort is necessary.
>>
>> I didn't suggest getting rid of insertion sort. But the trick above is
>> equally applicable to insertion sort.
>
> This trick is used in simplehash.h . I agree, it could be useful for qsort.
> This will not make qsort inlineable, but will reduce overhead much.
>
> This trick is too heavy-weight for insertion sort alone, though. Without
> shellsort, insertion sort could be expressed as a 14-line macro (8 lines
> without curly braces). But if insertion sort will be defined together with
> qsort (because qsort still needs it), then it is justifiable.

What do you mean by heavy-weight?

Aside from requiring all that include magic, if you place specialized
sort functions in a reusable header, using it is as simple as
including the type-specific header (or declaring the type macros and
including the template), and using them as regular functions. There's
no runtime overhead involved, especially if you declare the comparison
function as a macro or a static inline function. The sort itself can
be declared static inline as well, and the compiler will decide
whether it's worth inlining.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-06 Thread Юрий Соколов
2017-11-07 1:14 GMT+03:00 Claudio Freire :
>
> On Mon, Nov 6, 2017 at 6:58 PM, Юрий Соколов 
wrote:
> >
> > 2017-11-06 17:55 GMT+03:00 Claudio Freire :
> >>
> >> On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов 
> >> wrote:
> >> >> Maybe leave a fallback to qsort if some corner case produces big
> >> >> buckets?
> >> >
> >> > For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is
at
> >> > most 1 heap-tuple per bucket, and for index pages it is at most 2
index
> >> > tuples per bucket. For 32kb pages it is 4 heap-tuples and 8
index-tuples
> >> > per bucket.
> >> > It will be unnecessary overhead to call non-inlineable qsort in this
> >> > cases
> >> >
> >> > So, I think, shell sort could be removed, but insertion sort have to
> >> > remain.
> >> >
> >> > I'd prefer shell sort to remain also. It could be useful in other
places
> >> > also,
> >> > because it is easily inlinable, and provides comparable to qsort
> >> > performance
> >> > up to several hundreds of elements.
> >>
> >> I'd rather have an inlineable qsort.
> >
> > But qsort is recursive. It is quite hard to make it inlineable. And
still it
> > will be
> > much heavier than insertion sort (btw, all qsort implementations uses
> > insertion
> > sort for small arrays). And it will be heavier than shell sort for small
> > arrays.
>
> I haven't seen this trick used in postgres, nor do I know whether it
> would be well received, so this is more like throwing an idea to see
> if it sticks...
>
> But a way to do this without macros is to have an includable
> "template" algorithm that simply doesn't define the comparison
> function/type, it rather assumes it:
>
> qsort_template.h
>
> #define QSORT_NAME qsort_ ## QSORT_SUFFIX
>
> static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
> {
> ... if (ELEM_LESS(arr[a], arr[b]))
> ...
> }
>
> #undef QSORT_NAME
>
> Then, in "offset_qsort.h":
>
> #define QSORT_SUFFIX offset
> #define ELEM_TYPE offset
> #define ELEM_LESS(a,b) ((a) < (b))
>
> #include "qsort_template.h"
>
> #undef QSORT_SUFFIX
> #undef ELEM_TYPE
> #undef ELEM_LESS
>
> Now, I realize this may have its cons, but it does simplify
> maintainance of type-specific or parameterized variants of
> performance-critical functions.
>
> > I can do specialized qsort for this case. But it will be larger bunch of
> > code, than
> > shell sort.
> >
> >> And I'd recommend doing that when there is a need, and I don't think
> >> this patch really needs it, since bucket sort handles most cases
> >> anyway.
> >
> > And it still needs insertion sort for buckets.
> > I can agree to get rid of shell sort. But insertion sort is necessary.
>
> I didn't suggest getting rid of insertion sort. But the trick above is
> equally applicable to insertion sort.

This trick is used in simplehash.h. I agree, it could be useful for qsort.
It will not make qsort inlineable, but it will reduce the overhead a lot.

This trick is too heavy-weight for insertion sort alone, though. Without
shellsort, insertion sort could be expressed as a 14-line macro (8 lines
without curly braces). But if insertion sort is defined together with
qsort (because qsort still needs it), then it is justifiable.


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-06 Thread Claudio Freire
On Mon, Nov 6, 2017 at 6:58 PM, Юрий Соколов  wrote:
>
> 2017-11-06 17:55 GMT+03:00 Claudio Freire :
>>
>> On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов 
>> wrote:
>> >> Maybe leave a fallback to qsort if some corner case produces big
>> >> buckets?
>> >
>> > For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
>> > most 1 heap-tuple per bucket, and for index pages it is at most 2 index
>> > tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
>> > per bucket.
>> > It will be unnecessary overhead to call non-inlineable qsort in this
>> > cases
>> >
>> > So, I think, shell sort could be removed, but insertion sort have to
>> > remain.
>> >
>> > I'd prefer shell sort to remain also. It could be useful in other places
>> > also,
>> > because it is easily inlinable, and provides comparable to qsort
>> > performance
>> > up to several hundreds of elements.
>>
>> I'd rather have an inlineable qsort.
>
> But qsort is recursive. It is quite hard to make it inlineable. And still it
> will be
> much heavier than insertion sort (btw, all qsort implementations uses
> insertion
> sort for small arrays). And it will be heavier than shell sort for small
> arrays.

I haven't seen this trick used in postgres, nor do I know whether it
would be well received, so this is more like throwing an idea to see
if it sticks...

But a way to do this without macros is to have an includable
"template" algorithm that simply doesn't define the comparison
function/type, it rather assumes it:

qsort_template.h

#define QSORT_NAME qsort_ ## QSORT_SUFFIX

static void QSORT_NAME(ELEM_TYPE arr, size_t num_elems)
{
... if (ELEM_LESS(arr[a], arr[b]))
...
}

#undef QSORT_NAME

Then, in "offset_qsort.h":

#define QSORT_SUFFIX offset
#define ELEM_TYPE offset
#define ELEM_LESS(a,b) ((a) < (b))

#include "qsort_template.h"

#undef QSORT_SUFFIX
#undef ELEM_TYPE
#undef ELEM_LESS

Now, I realize this may have its cons, but it does simplify
maintainance of type-specific or parameterized variants of
performance-critical functions.

> I can do specialized qsort for this case. But it will be larger bunch of
> code, than
> shell sort.
>
>> And I'd recommend doing that when there is a need, and I don't think
>> this patch really needs it, since bucket sort handles most cases
>> anyway.
>
> And it still needs insertion sort for buckets.
> I can agree to get rid of shell sort. But insertion sort is necessary.

I didn't suggest getting rid of insertion sort. But the trick above is
equally applicable to insertion sort.




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-06 Thread Юрий Соколов
2017-11-06 17:55 GMT+03:00 Claudio Freire :
>
> On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов 
wrote:
> >> Maybe leave a fallback to qsort if some corner case produces big
buckets?
> >
> > For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
> > most 1 heap-tuple per bucket, and for index pages it is at most 2 index
> > tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
> > per bucket.
> > It will be unnecessary overhead to call non-inlineable qsort in this
cases
> >
> > So, I think, shell sort could be removed, but insertion sort have to
remain.
> >
> > I'd prefer shell sort to remain also. It could be useful in other places
> > also,
> > because it is easily inlinable, and provides comparable to qsort
performance
> > up to several hundreds of elements.
>
> I'd rather have an inlineable qsort.

But qsort is recursive. It is quite hard to make it inlineable. And still it
will be much heavier than insertion sort (btw, all qsort implementations use
insertion sort for small arrays). And it will be heavier than shell sort for
small arrays.

I can do a specialized qsort for this case. But it will be a larger bunch of
code than shell sort.

> And I'd recommend doing that when there is a need, and I don't think
> this patch really needs it, since bucket sort handles most cases
> anyway.

And it still needs insertion sort for buckets.
I can agree to get rid of shell sort. But insertion sort is necessary.


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-06 Thread Claudio Freire
On Mon, Nov 6, 2017 at 11:50 AM, Юрий Соколов  wrote:
>> Maybe leave a fallback to qsort if some corner case produces big buckets?
>
> For 8kb pages, each bucket is per 32 bytes. So, for heap pages it is at
> most 1 heap-tuple per bucket, and for index pages it is at most 2 index
> tuples per bucket. For 32kb pages it is 4 heap-tuples and 8 index-tuples
> per bucket.
> It will be unnecessary overhead to call non-inlineable qsort in this cases
>
> So, I think, shell sort could be removed, but insertion sort have to remain.
>
> I'd prefer shell sort to remain also. It could be useful in other places
> also,
> because it is easily inlinable, and provides comparable to qsort performance
> up to several hundreds of elements.

I'd rather have an inlineable qsort.

And I'd recommend doing that when there is a need, and I don't think
this patch really needs it, since bucket sort handles most cases
anyway.




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-06 Thread Юрий Соколов
2017-11-05 20:44 GMT+03:00 Claudio Freire :
>
> On Sat, Nov 4, 2017 at 8:07 PM, Юрий Соколов 
wrote:
> > 2017-11-03 5:46 GMT+03:00 Tom Lane :
> >>
> >> Sokolov Yura  writes:
> >> > [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
> >>
> >> I went to check the shellsort algorithm against Wikipedia's entry,
> >> and found that this appears to be an incorrect implementation of
> >> shellsort: where pg_shell_sort_pass has
> >>
> >> for (_i = off; _i < _n; _i += off) \
> >>
> >> it seems to me that we need to have
> >>
> >> for (_i = off; _i < _n; _i += 1) \
> >>
> >> or maybe just _i++.
> >
> >
> > Shame on me :-(
> > I've wrote shell sort several times, so I forgot to recheck myself once
> > again.
> > And looks like best gap sequence from wikipedia is really best
> > ( {301, 132, 57, 23, 10 , 4} in my notation),
> >
> >
> > 2017-11-03 17:37 GMT+03:00 Claudio Freire :
> >> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane  wrote:
> >>> BTW, the originally given test case shows no measurable improvement
> >>> on my box.
> >>
> >> I did manage to reproduce the original test and got a consistent
> >> improvement.
> >
> > I've rechecked my self using my benchmark.
> > Without memmove, compactify_tuples comsumes:
> > - with qsort 11.66% cpu (pg_qsort + med3 + swapfunc + itemoffcompare +
> > compactify_tuples = 5.97 + 0.51 + 2.87 + 1.88 + 0.44)
> > - with just insertion sort 6.65% cpu (sort is inlined, itemoffcompare
also
> > inlined, so whole is compactify_tuples)
> > - with just shell sort 5,98% cpu (sort is inlined again)
> > - with bucket sort 1,76% cpu (sort_itemIds + compactify_tuples = 1.30 +
> > 0.46)
>
> Is that just insertion sort without bucket sort?

Yes. Just to show that inlined insertion sort is better than non-inlined
qsort in this particular use-case.

> Because I think shell sort has little impact in your original patch
> because it's rarely exercised. With bucket sort, most buckets are very
> small, too small for shell sort to do any useful work.

Yes. In the patch, buckets are sorted with insertion sort. Shell sort is used
only on the full array, and only if its size is less than 48.
Bucket sort has the constant overhead of traversing all buckets, even if they
are empty. That is why I think shell sort is better for small arrays. Though
I didn't measure that carefully, and probably insertion sort alone would be
enough for small arrays.

> Maybe leave a fallback to qsort if some corner case produces big buckets?

For 8kb pages, each bucket covers 32 bytes. So, for heap pages it is at
most 1 heap tuple per bucket, and for index pages it is at most 2 index
tuples per bucket. For 32kb pages it is 4 heap tuples and 8 index tuples
per bucket.
It would be unnecessary overhead to call the non-inlineable qsort in these
cases.

So, I think, shell sort could be removed, but insertion sort has to remain.

I'd prefer shell sort to remain as well. It could be useful in other places
too, because it is easily inlinable, and provides performance comparable to
qsort for up to several hundred elements.

With regards,
Sokolov Yura aka funny_falcon.


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-05 Thread Claudio Freire
On Sat, Nov 4, 2017 at 8:07 PM, Юрий Соколов  wrote:
> 2017-11-03 5:46 GMT+03:00 Tom Lane :
>>
>> Sokolov Yura  writes:
>> > [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
>>
>> I went to check the shellsort algorithm against Wikipedia's entry,
>> and found that this appears to be an incorrect implementation of
>> shellsort: where pg_shell_sort_pass has
>>
>> for (_i = off; _i < _n; _i += off) \
>>
>> it seems to me that we need to have
>>
>> for (_i = off; _i < _n; _i += 1) \
>>
>> or maybe just _i++.
>
>
> Shame on me :-(
> I've wrote shell sort several times, so I forgot to recheck myself once
> again.
> And looks like best gap sequence from wikipedia is really best
> ( {301, 132, 57, 23, 10 , 4} in my notation),
>
>
> 2017-11-03 17:37 GMT+03:00 Claudio Freire :
>> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane  wrote:
>>> BTW, the originally given test case shows no measurable improvement
>>> on my box.
>>
>> I did manage to reproduce the original test and got a consistent
>> improvement.
>
> I've rechecked my self using my benchmark.
> Without memmove, compactify_tuples comsumes:
> - with qsort 11.66% cpu (pg_qsort + med3 + swapfunc + itemoffcompare +
> compactify_tuples = 5.97 + 0.51 + 2.87 + 1.88 + 0.44)
> - with just insertion sort 6.65% cpu (sort is inlined, itemoffcompare also
> inlined, so whole is compactify_tuples)
> - with just shell sort 5,98% cpu (sort is inlined again)
> - with bucket sort 1,76% cpu (sort_itemIds + compactify_tuples = 1.30 +
> 0.46)

Is that just insertion sort without bucket sort?

Because I think shell sort has little impact in your original patch
because it's rarely exercised. With bucket sort, most buckets are very
small, too small for shell sort to do any useful work.

That's why I'm inclined to agree with Tom that we could safely
remove it without much impact.

Maybe leave a fallback to qsort if some corner case produces big buckets?




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-04 Thread Peter Geoghegan

Юрий Соколов  wrote:

tps is also reflects changes:
~17ktps with qsort
~19ktps with bucket sort

Also vacuum of benchmark's table is also improved:
~3s with qsort,
~2.4s with bucket sort


One thing that you have to be careful with when it comes to our qsort
with partially presorted inputs is what I like to call "banana skin
effects":

https://postgr.es/m/cah2-wzku2xk2dpz7n8-a1mvuuttuvhqkfna+eutwnwctgyc...@mail.gmail.com

This may have nothing at all to do with your results; I'm just pointing
it out as a possibility.

--
Peter Geoghegan




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-04 Thread Юрий Соколов
2017-11-03 5:46 GMT+03:00 Tom Lane :
>
> Sokolov Yura  writes:
> > [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
>
> I went to check the shellsort algorithm against Wikipedia's entry,
> and found that this appears to be an incorrect implementation of
> shellsort: where pg_shell_sort_pass has
>
> for (_i = off; _i < _n; _i += off) \
>
> it seems to me that we need to have
>
> for (_i = off; _i < _n; _i += 1) \
>
> or maybe just _i++.


Shame on me :-(
I've written shell sort several times, so I forgot to recheck myself once
again.
And it looks like the best gap sequence from Wikipedia really is best
({301, 132, 57, 23, 10, 4} in my notation).


2017-11-03 17:37 GMT+03:00 Claudio Freire :
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane  wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.
>
> I did manage to reproduce the original test and got a consistent
improvement.

I've rechecked myself using my benchmark.
Without memmove, compactify_tuples consumes:
- with qsort 11.66% cpu (pg_qsort + med3 + swapfunc + itemoffcompare +
compactify_tuples = 5.97 + 0.51 + 2.87 + 1.88 + 0.44)
- with just insertion sort 6.65% cpu (sort is inlined, itemoffcompare also
inlined, so the whole cost shows as compactify_tuples)
- with just shell sort 5.98% cpu (sort is inlined again)
- with bucket sort 1.76% cpu (sort_itemIds + compactify_tuples = 1.30 +
0.46)

(memmove consumes 1.29% cpu)

tps also reflects the changes:
~17ktps with qsort
~19ktps with bucket sort

Vacuum of the benchmark's table is also improved:
~3s with qsort,
~2.4s with bucket sort

Of course, this benchmark is quite synthetic: the table is unlogged, the
tuple is small, and synchronous commit is off. Still, such a table is useful
in some situations (think of not-too-important but useful counters, like
"photo watch count").
And the patch affects not only this synthetic benchmark. It affects restore
performance, as Heikki mentioned, and the CPU consumption of vacuum (though
vacuum is more IO-bound).

> I think we should remove pg_shell_sort and just use pg_insertion_sort.

Using shell sort is just a bit safer. It is doubtful that a worst-case
pattern for insertion sort will appear, but what if? Shellsort is a bit
better on the whole array (5.98% vs 6.65%), though on a small array the
difference will be much smaller.

With regards,
Sokolov Yura aka funny_falcon


Re: [HACKERS] Small improvement to compactify_tuples

2017-11-03 Thread Claudio Freire
On Fri, Nov 3, 2017 at 4:30 PM, Tom Lane  wrote:
> Claudio Freire  writes:
>> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane  wrote:
>>> BTW, the originally given test case shows no measurable improvement
>>> on my box.
>
>> I did manage to reproduce the original test and got a consistent improvement.
>
> This gave me the idea to memcpy the page into some workspace and then use
> memcpy, not memmove, to put the tuples back into the caller's copy of the
> page.  That gave me about a 50% improvement in observed TPS, and a perf
> profile like this:
>
> +   38.50%38.40%299520  postmaster   postgres 
>   [.] compactify_tuples
> +   31.11%31.02%241975  postmaster   libc-2.12.so 
>   [.] memcpy
> +8.74% 8.72% 68022  postmaster   postgres 
>   [.] itemoffcompare
> +6.51% 6.49% 50625  postmaster   postgres 
>   [.] compactify_tuples_loop
> +4.21% 4.19% 32719  postmaster   postgres 
>   [.] pg_qsort
> +1.70% 1.69% 13213  postmaster   postgres 
>   [.] memcpy@plt
>
> There still doesn't seem to be any point in replacing the qsort,
> but it does seem like something like the second attached patch
> might be worth doing.
>
> So I'm now wondering why my results seem to be so much different
> from those of other people who have tried this, both as to whether
> compactify_tuples is worth working on at all and as to what needs
> to be done to it if so.  Thoughts?
>
> regards, tom lane
>

I'm going to venture a guess that the version of gcc and libc, and
build options used both in the libc (ie: the distro) and postgres may
play a part here.

I'm running with glibc 2.22, for instance, and building with gcc 4.8.5.

I will try and benchmark memcpy vs memmove and see what the
performance difference is there with my versions, too. This may
heavily depend on compiler optimizations that may vary between
versions, since memcpy/memmove tend to be inlined a lot.




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-03 Thread Tom Lane
I wrote:
> Have not looked at the 0002 patch yet.

I looked at that one, and it seems to be a potential win with no
downside, so pushed.  (I tweaked it slightly to avoid an unnecessary
conflict with the test patch I posted earlier.)

regards, tom lane




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-03 Thread Tom Lane
Claudio Freire  writes:
> On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane  wrote:
>> BTW, the originally given test case shows no measurable improvement
>> on my box.

> I did manage to reproduce the original test and got a consistent improvement.

It occurred to me that I could force the issue by hacking bufpage.c to
execute compactify_tuples multiple times each time it was called, as in
the first patch attached below.  This has nothing directly to do with real
performance of course, but it's making use of the PG system to provide
realistic test data for microbenchmarking compactify_tuples.  I was a bit
surprised to find that I had to set the repeat count to 1000 to make
compactify_tuples really dominate the runtime (while using the originally
posted test case ... maybe there's a better one?).  But once I did get it
to dominate the runtime, perf gave me this for the CPU hotspots:

+   27.97%27.88%229040  postmaster   libc-2.12.so   
  [.] memmove
+   14.61%14.57%119704  postmaster   postgres   
  [.] compactify_tuples
+   12.40%12.37%101566  postmaster   libc-2.12.so   
  [.] _wordcopy_bwd_aligned
+   11.68%11.65% 95685  postmaster   libc-2.12.so   
  [.] _wordcopy_fwd_aligned
+7.67% 7.64% 62801  postmaster   postgres   
  [.] itemoffcompare
+7.00% 6.98% 57303  postmaster   postgres   
  [.] compactify_tuples_loop
+4.53% 4.52% 37111  postmaster   postgres   
  [.] pg_qsort
+1.71% 1.70% 13992  postmaster   libc-2.12.so   
  [.] memcpy

which says that micro-optimizing the sort step is a complete, utter waste
of time, and what we need to be worried about is the data copying part.

The memcpy part of the above is presumably from the scaffolding memcpy's
in compactify_tuples_loop, which is interesting because that's moving as
much data as the memmove's are.  So at least with RHEL6's version of
glibc, memmove is apparently a lot slower than memcpy.

This gave me the idea to memcpy the page into some workspace and then use
memcpy, not memmove, to put the tuples back into the caller's copy of the
page.  That gave me about a 50% improvement in observed TPS, and a perf
profile like this:

+   38.50%38.40%299520  postmaster   postgres   
[.] compactify_tuples
+   31.11%31.02%241975  postmaster   libc-2.12.so   
[.] memcpy
+8.74% 8.72% 68022  postmaster   postgres   
[.] itemoffcompare
+6.51% 6.49% 50625  postmaster   postgres   
[.] compactify_tuples_loop
+4.21% 4.19% 32719  postmaster   postgres   
[.] pg_qsort
+1.70% 1.69% 13213  postmaster   postgres   
[.] memcpy@plt

There still doesn't seem to be any point in replacing the qsort,
but it does seem like something like the second attached patch
might be worth doing.

So I'm now wondering why my results seem to be so much different
from those of other people who have tried this, both as to whether
compactify_tuples is worth working on at all and as to what needs
to be done to it if so.  Thoughts?

regards, tom lane

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..bf6d308 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
*** compactify_tuples(itemIdSort itemidbase,
*** 465,470 
--- 465,489 
  	phdr->pd_upper = upper;
  }
  
+ static void
+ compactify_tuples_loop(itemIdSort itemidbase, int nitems, Page page)
+ {
+ 	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+ 	union {
+ 		char page[BLCKSZ];
+ 		double align;
+ 	} pagecopy;
+ 	int i;
+ 
+ 	for (i = 1; i < 1000; i++)
+ 	{
+ 		memcpy(copy, itemidbase, sizeof(itemIdSortData) * nitems);
+ 		memcpy(pagecopy.page, page, BLCKSZ);
+ 		compactify_tuples(copy, nitems, pagecopy.page);
+ 	}
+ 	compactify_tuples(itemidbase, nitems, page);
+ }
+ 
  /*
   * PageRepairFragmentation
   *
*** PageRepairFragmentation(Page page)
*** 560,566 
  	 errmsg("corrupted item lengths: total %u, available space %u",
  			(unsigned int) totallen, pd_special - pd_lower)));
  
! 		compactify_tuples(itemidbase, nstorage, page);
  	}
  
  	/* Set hint bit for PageAddItem */
--- 579,585 
  	 errmsg("corrupted item lengths: total %u, available space %u",
  			(unsigned int) totallen, pd_special - pd_lower)));
  
! 		compactify_tuples_loop(itemidbase, nstorage, page);
  	}
  
  	/* Set hint bit for PageAddItem */
*** PageIndexMultiDelete(Page page, OffsetNu
*** 940,946 
  	phdr->pd_lower = SizeOfPageHeaderData + nused * 

Re: [HACKERS] Small improvement to compactify_tuples

2017-11-03 Thread Claudio Freire
On Thu, Nov 2, 2017 at 11:46 PM, Tom Lane  wrote:
> Sokolov Yura  writes:
>> [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]
>
> I started to review this patch.  I spent a fair amount of time on
> beautifying the code, because I found it rather ugly and drastically
> undercommented.  Once I had it to the point where it seemed readable,
> I went to check the shellsort algorithm against Wikipedia's entry,
> and found that this appears to be an incorrect implementation of
> shellsort: where pg_shell_sort_pass has
>
> for (_i = off; _i < _n; _i += off) \
>
> it seems to me that we need to have
>
> for (_i = off; _i < _n; _i += 1) \
>
> or maybe just _i++.  As-is, this isn't h-sorting the whole file,
> but just the subset of entries that have multiple-of-h indexes
> (ie, the first of the h distinct subfiles that should get sorted).
> The bug is masked by the final pass of plain insertion sort, but
> we are not getting the benefit we should get from the earlier passes.
>
> However, I'm a bit dubious that it's worth fixing that; instead
> my inclination would be to rip out the shellsort implementation
> entirely.  The code is only using it for the nitems <= 48 case
> (which makes the first three offset steps certainly no-ops) and
> I am really unconvinced that it's worth expending the code space
> for a shellsort rather than plain insertion sort in that case,
> especially when we have good reason to think that the input data
> is nearly sorted.

I actually noticed that and benchmarked some variants. None of them
made any noticeable difference in performance, so I decided not
to complain about it.

I guess the same case can be made for removing the shell sort.
So I'm inclined to agree.

> BTW, the originally given test case shows no measurable improvement
> on my box.

I did manage to reproduce the original test and got a consistent improvement.

> I was eventually able to convince myself by profiling
> that the patch makes us spend less time in compactify_tuples, but
> this test case isn't a very convincing one.
>
> So, quite aside from the bug, I'm not excited about committing the
> attached as-is.  I think we should remove pg_shell_sort and just
> use pg_insertion_sort.  If somebody can show a test case that
> provides a measurable speed improvement from the extra code,
> I could be persuaded to reconsider.

My tests modifying the shell sort didn't produce any measurable
difference, but I didn't test removing it altogether.




Re: [HACKERS] Small improvement to compactify_tuples

2017-11-02 Thread Tom Lane
Sokolov Yura  writes:
> [ 0001-Improve-compactify_tuples.patch, v5 or thereabouts ]

I started to review this patch.  I spent a fair amount of time on
beautifying the code, because I found it rather ugly and drastically
undercommented.  Once I had it to the point where it seemed readable,
I went to check the shellsort algorithm against Wikipedia's entry,
and found that this appears to be an incorrect implementation of
shellsort: where pg_shell_sort_pass has

for (_i = off; _i < _n; _i += off) \

it seems to me that we need to have

for (_i = off; _i < _n; _i += 1) \

or maybe just _i++.  As-is, this isn't h-sorting the whole file,
but just the subset of entries that have multiple-of-h indexes
(ie, the first of the h distinct subfiles that should get sorted).
The bug is masked by the final pass of plain insertion sort, but
we are not getting the benefit we should get from the earlier passes.

However, I'm a bit dubious that it's worth fixing that; instead
my inclination would be to rip out the shellsort implementation
entirely.  The code is only using it for the nitems <= 48 case
(which makes the first three offset steps certainly no-ops) and
I am really unconvinced that it's worth expending the code space
for a shellsort rather than plain insertion sort in that case,
especially when we have good reason to think that the input data
is nearly sorted.

BTW, the originally given test case shows no measurable improvement
on my box.  I was eventually able to convince myself by profiling
that the patch makes us spend less time in compactify_tuples, but
this test case isn't a very convincing one.

So, quite aside from the bug, I'm not excited about committing the
attached as-is.  I think we should remove pg_shell_sort and just
use pg_insertion_sort.  If somebody can show a test case that
provides a measurable speed improvement from the extra code,
I could be persuaded to reconsider.

I also wonder if the nitems <= 48 cutoff needs to be reconsidered
in light of this.  But since I can hardly measure any benefit from
the patch at all, I'm not in the best position to test different
values for that cutoff.

Have not looked at the 0002 patch yet.

regards, tom lane

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb..1af1b85 100644
*** a/src/backend/storage/page/bufpage.c
--- b/src/backend/storage/page/bufpage.c
***
*** 18,23 
--- 18,24 
  #include "access/itup.h"
  #include "access/xlog.h"
  #include "storage/checksum.h"
+ #include "utils/inline_sort.h"
  #include "utils/memdebug.h"
  #include "utils/memutils.h"
  
*** typedef struct itemIdSortData
*** 425,439 
  } itemIdSortData;
  typedef itemIdSortData *itemIdSort;
  
! static int
  itemoffcompare(const void *itemidp1, const void *itemidp2)
  {
- 	/* Sort in decreasing itemoff order */
  	return ((itemIdSort) itemidp2)->itemoff -
  		((itemIdSort) itemidp1)->itemoff;
  }
  
  /*
   * After removing or marking some line pointers unused, move the tuples to
   * remove the gaps caused by the removed items.
   */
--- 426,542 
  } itemIdSortData;
  typedef itemIdSortData *itemIdSort;
  
! /* Comparator for sorting in decreasing itemoff order */
! static inline int
  itemoffcompare(const void *itemidp1, const void *itemidp2)
  {
  	return ((itemIdSort) itemidp2)->itemoff -
  		((itemIdSort) itemidp1)->itemoff;
  }
  
  /*
+  * Sort an array of itemIdSort's on itemoff, descending.
+  *
+  * This uses Shell sort.  Given that array is small and itemoffcompare
+  * can be inlined, it is much faster than general-purpose qsort.
+  */
+ static void
+ sort_itemIds_small(itemIdSort itemidbase, int nitems)
+ {
+ 	pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+ }
+ 
+ /*
+  * Sort an array of itemIdSort's on itemoff, descending.
+  *
+  * This uses bucket sort:
+  * - single pass of stable prefix sort on high 8 bits of itemoffs
+  * - then insertion sort on buckets larger than 1 element
+  */
+ static void
+ sort_itemIds(itemIdSort itemidbase, int nitems)
+ {
+ 	/* number of buckets to use: */
+ #define NSPLIT 256
+ 	/* divisor to scale input values into 0..NSPLIT-1: */
+ #define PREFDIV (BLCKSZ / NSPLIT)
+ 	/* per-bucket counts; we need two extra elements, see below */
+ 	uint16		count[NSPLIT + 2];
+ 	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
	int			i,
				max,
				total,
				pos,
				highbits;
+ 
+ 	Assert(nitems <= lengthof(copy));
+ 
+ 	/*
+ 	 * Count how many items in each bucket.  We assume all itemoff values are
+ 	 * less than BLCKSZ, therefore dividing by PREFDIV gives a value less than
+ 	 * NSPLIT.
+ 	 */
+ 	memset(count, 0, sizeof(count));
+ 	for (i = 0; i < nitems; i++)
+ 	{
+ 		highbits = itemidbase[i].itemoff / PREFDIV;
+ 		count[highbits]++;
+ 	}
+ 
+ 	/*
+ 	 * Now convert counts to bucket position info, placing the buckets in
+ 	 * 

Re: [HACKERS] Small improvement to compactify_tuples

2017-09-25 Thread Claudio Freire
On Sat, Sep 23, 2017 at 5:56 AM, Sokolov Yura
 wrote:
> Hello, Claudio.
>
> Thank you for the review and for confirming the improvement.
>
>
> On 2017-09-23 01:12, Claudio Freire wrote:
>>
>>
>> Patch 1 applies cleanly, builds, and make check runs fine.
>>
>> The code looks similar in style to surrounding code too, so I'm not
>> going to complain about the abundance of underscores in the macros :-p
>>
>> I can reproduce the results in the OP's benchmark, with slightly
>> different numbers, but an overall improvement of ~6%, which matches
>> the OP's relative improvement.
>>
>> Algorithmically, everything looks sound.
>>
>>
>> A few minor comments about patch 1:
>>
>> +if (max == 1)
>> +goto end;
>>
>> That goto is unnecessary, you could just as simply say
>>
>> if (max > 1)
>> {
>>...
>> }
>
>
> Done.
> (I don't like indentation, though :-( )
>
>>
>>
>> +#define pg_shell_sort_pass(elem_t, cmp, off) \
>> +do { \
>> +int _i, _j; \
>> +elem_t _temp; \
>> +for (_i = off; _i < _n; _i += off) \
>> +{ \
>>
>> _n right there isn't declared in the macro, and it isn't an argument
>> either. It should be an argument, having stuff inherited from the
>> enclosing context like that is confusing.
>>
>> Same with _arr, btw.
>
>
> pg_shell_sort_pass is not intended to be used outside pg_shell_sort
> and pg_insertion_sort, so I think stealing from their context is OK.
> Nonetheless, done.

Looks good.

Marking this patch as ready for committer
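The review point quoted above — that `_n` and `_arr` should be macro arguments rather than captured from the enclosing scope — can be illustrated with a self-contained sketch. This is a hypothetical reworking, not the patch's actual `pg_shell_sort`; the names `shell_sort_pass`, `shell_sort`, and `sort_ints` are invented for the example, and only two gaps (7, then 1) are used:

```c
/* One gap-insertion-sort pass.  The array and its length are explicit
 * macro arguments, so nothing leaks in from the caller's scope. */
#define shell_sort_pass(elem_t, arr, n, cmp, off) \
	do { \
		int _i, _j; \
		elem_t _temp; \
		for (_i = (off); _i < (n); _i++) \
		{ \
			_temp = (arr)[_i]; \
			for (_j = _i; \
				 _j >= (off) && cmp(&_temp, &(arr)[_j - (off)]) < 0; \
				 _j -= (off)) \
				(arr)[_j] = (arr)[_j - (off)]; \
			(arr)[_j] = _temp; \
		} \
	} while (0)

/* Two-gap Shell sort: a coarse pass with gap 7, then plain insertion sort. */
#define shell_sort(elem_t, arr, n, cmp) \
	do { \
		if ((n) > 7) \
			shell_sort_pass(elem_t, arr, n, cmp, 7); \
		shell_sort_pass(elem_t, arr, n, cmp, 1); \
	} while (0)

static int
int_cmp(const int *a, const int *b)
{
	return *a - *b;
}

/* Sort a small int array ascending using the macros above. */
static void
sort_ints(int *arr, int n)
{
	shell_sort(int, arr, n, int_cmp);
}
```

Because the comparator is expanded at each use site, the compiler can inline it — which is the whole point of the macro-based approach over qsort's function-pointer comparator.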


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Small improvement to compactify_tuples

2017-09-23 Thread Sokolov Yura

Hello, Claudio.

Thank you for the review and for confirming the improvement.

On 2017-09-23 01:12, Claudio Freire wrote:

On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
 wrote:

On 2017-07-21 13:49, Sokolov Yura wrote:


On 2017-05-17 17:46, Sokolov Yura wrote:


Alvaro Herrera wrote on 2017-05-15 20:13:

As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea.  If you don't edit
the patches further, then you're all set because we already have the
previously archived patches.  Next commitfest starts in a few months
yet, and if you feel the need to submit corrected versions in the
meantime, please do submit in separate files.  (Some would even argue
that each should be its own thread, but I don't think that's necessary.)



Thank you for explanation.

I'm adding a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
  (i.e. 0 or 1 element). In that case there is no need to sort buckets at all.



I'm putting rebased version of second patch.



Again rebased version of both patches.
Now second patch applies cleanly independent of first patch.


Patch 1 applies cleanly, builds, and make check runs fine.

The code looks similar in style to surrounding code too, so I'm not
going to complain about the abundance of underscores in the macros :-p

I can reproduce the results in the OP's benchmark, with slightly
different numbers, but an overall improvement of ~6%, which matches
the OP's relative improvement.

Algorithmically, everything looks sound.


A few minor comments about patch 1:

+if (max == 1)
+goto end;

That goto is unnecessary, you could just as simply say

if (max > 1)
{
   ...
}


Done.
(I don't like indentation, though :-( )




+#define pg_shell_sort_pass(elem_t, cmp, off) \
+do { \
+int _i, _j; \
+elem_t _temp; \
+for (_i = off; _i < _n; _i += off) \
+{ \

_n right there isn't declared in the macro, and it isn't an argument
either. It should be an argument, having stuff inherited from the
enclosing context like that is confusing.

Same with _arr, btw.


pg_shell_sort_pass is not intended to be used outside pg_shell_sort
and pg_insertion_sort, so I think stealing from their context is OK.
Nonetheless, done.




Patch 2 LGTM.


--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From e486014c0a7545f7bb2264d0d7ac96f0544eacdb Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 14:23:39 +0300
Subject: [PATCH] Improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general-purpose qsort incurs unnecessary function-call overhead
(swapfunc, med, itemoffcompare).

This patch implements a bucket sort:
- one pass of stable prefix sort on the high 8 bits of the offset
- then insertion sort for buckets larger than 1 element

For smaller arrays, Shell sort is used instead.

Insertion and Shell sort are implemented as macros.

This approach saves about 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
synchronous_commit=off), and speeds up WAL replay (as was
found by Heikki Linnakangas).

A similar approach was implemented by Heikki Linnakangas some time ago,
with several differences.
---
 src/backend/storage/page/bufpage.c | 87 --
 src/include/c.h| 63 +++
 2 files changed, 147 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb59c..0f9f7d5dfc 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -434,6 +434,86 @@ itemoffcompare(const void *itemidp1, const void *itemidp2)
 }
 
 /*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses Shell sort. Given that the array is small and itemoffcompare is
+ * inlined, it is much faster than a call to qsort.
+ */
+static void
+sort_itemIds_small(itemIdSort itemidbase, int nitems)
+{
+	pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+}
+
+/*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses bucket sort:
+ * - single pass of stable prefix sort on high 8 bits
+ * - and insertion sort on buckets larger than 1 element
+ */
+static void
+sort_itemIds(itemIdSort itemidbase, int nitems)
+{
+	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+#define NSPLIT 256
+#define PREFDIV (BLCKSZ / NSPLIT)
+	/* two extra elements to emulate offset on previous step */
+	uint16		count[NSPLIT + 2] = {0};
+	int			i,
+				max,
+				total,
+				pos,
+				highbits;
+
+	Assert(nitems <= MaxIndexTuplesPerPage);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		count[highbits]++;
+	}
+	/* sort in decreasing order */
+	max = total = count[NSPLIT - 1];
+	for (i = 
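For readers following the bucket-sort arithmetic above (the suffix-sum pass and the `count[k+1]`/`count[k+2]` bookkeeping), here is a simplified self-contained model of the same idea. It sorts bare 16-bit offsets descending rather than itemIdSortData entries, and `bucket_sort_desc`/`insertion_sort_desc` are hypothetical names for this sketch only:

```c
#include <string.h>

#define BLCKSZ 8192
#define NSPLIT 256					/* number of buckets */
#define PREFDIV (BLCKSZ / NSPLIT)	/* maps an offset to its bucket */

/* Plain descending insertion sort, used to finish each bucket. */
static void
insertion_sort_desc(unsigned short *a, int n)
{
	int			i, j;
	unsigned short t;

	for (i = 1; i < n; i++)
	{
		t = a[i];
		for (j = i; j > 0 && a[j - 1] < t; j--)
			a[j] = a[j - 1];
		a[j] = t;
	}
}

/* Descending bucket sort of offsets: one stable counting pass on the
 * high 8 bits, then insertion sort inside each bucket. */
static void
bucket_sort_desc(unsigned short *off, int n)
{
	unsigned short copy[1024];
	int			count[NSPLIT + 2] = {0};	/* two extra slots, as in the patch */
	int			i, highbits;

	/* histogram of high bits */
	for (i = 0; i < n; i++)
		count[off[i] / PREFDIV]++;

	/* suffix sums: count[k] = number of items with high byte >= k,
	 * so larger offsets come first (descending order) */
	for (i = NSPLIT - 1; i != 0; i--)
		count[i - 1] += count[i];

	/* stable scatter: count[k+1] walks from the start to the end of
	 * bucket k while items are copied into place */
	for (i = 0; i < n; i++)
	{
		highbits = off[i] / PREFDIV;
		copy[count[highbits + 1]++] = off[i];
	}

	/* now bucket k spans [count[k+2], count[k+1]); finish each one */
	for (i = NSPLIT; i > 0; i--)
		insertion_sort_desc(copy + count[i + 1], count[i] - count[i + 1]);

	memcpy(off, copy, sizeof(unsigned short) * n);
}
```

Because the scatter pass is stable, items within one bucket stay in input order, which is what keeps the per-bucket insertion sorts cheap when the input is already almost sorted.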

Re: [HACKERS] Small improvement to compactify_tuples

2017-09-22 Thread Claudio Freire
On Tue, Sep 12, 2017 at 12:49 PM, Sokolov Yura
 wrote:
> On 2017-07-21 13:49, Sokolov Yura wrote:
>>
>> On 2017-05-17 17:46, Sokolov Yura wrote:
>>>
>>> Alvaro Herrera wrote on 2017-05-15 20:13:

 As I understand, these patches are logically separate, so putting them
 together in a single file isn't such a great idea.  If you don't edit
 the patches further, then you're all set because we already have the
 previously archived patches.  Next commitfest starts in a few months
 yet, and if you feel the need to submit corrected versions in the
 meantime, please do submit in separate files.  (Some would even argue
 that each should be its own thread, but I don't think that's necessary.)
>>>
>>>
>>> Thank you for explanation.
>>>
>>> I'm adding a new version of the first patch with a minor improvement:
>>> - I added detection of the case when all buckets are trivial
>>>   (i.e. 0 or 1 element). In that case there is no need to sort buckets at all.
>>
>>
>> I'm putting rebased version of second patch.
>
>
> Again rebased version of both patches.
> Now second patch applies cleanly independent of first patch.

Patch 1 applies cleanly, builds, and make check runs fine.

The code looks similar in style to surrounding code too, so I'm not
going to complain about the abundance of underscores in the macros :-p

I can reproduce the results in the OP's benchmark, with slightly
different numbers, but an overall improvement of ~6%, which matches
the OP's relative improvement.

Algorithmically, everything looks sound.


A few minor comments about patch 1:

+if (max == 1)
+goto end;

That goto is unnecessary, you could just as simply say

if (max > 1)
{
   ...
}


+#define pg_shell_sort_pass(elem_t, cmp, off) \
+do { \
+int _i, _j; \
+elem_t _temp; \
+for (_i = off; _i < _n; _i += off) \
+{ \

_n right there isn't declared in the macro, and it isn't an argument
either. It should be an argument, having stuff inherited from the
enclosing context like that is confusing.

Same with _arr, btw.


Patch 2 LGTM.




Re: [HACKERS] Small improvement to compactify_tuples

2017-09-12 Thread Sokolov Yura

On 2017-07-21 13:49, Sokolov Yura wrote:

On 2017-05-17 17:46, Sokolov Yura wrote:

Alvaro Herrera wrote on 2017-05-15 20:13:

As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea.  If you don't edit
the patches further, then you're all set because we already have the
previously archived patches.  Next commitfest starts in a few months
yet, and if you feel the need to submit corrected versions in the
meantime, please do submit in separate files.  (Some would even argue
that each should be its own thread, but I don't think that's necessary.)


Thank you for explanation.

I'm adding a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
  (i.e. 0 or 1 element). In that case there is no need to sort buckets at all.


I'm putting rebased version of second patch.


Again rebased version of both patches.
Now second patch applies cleanly independent of first patch.

--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From a60ac9f79cc97bd6b8cf6932b56844d136893ff9 Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 14:23:39 +0300
Subject: [PATCH 1/2] Improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general-purpose qsort incurs unnecessary function-call overhead
(swapfunc, med, itemoffcompare).

This patch implements a bucket sort:
- one pass of stable prefix sort on the high 8 bits of the offset
- then insertion sort for buckets larger than 1 element

For smaller arrays, Shell sort is used instead.

Insertion and Shell sort are implemented as macros.

This approach saves about 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
synchronous_commit=off), and speeds up WAL replay (as was
found by Heikki Linnakangas).

A similar approach was implemented by Heikki Linnakangas some time ago,
with several differences.
---
 src/backend/storage/page/bufpage.c | 87 --
 src/include/c.h| 59 ++
 2 files changed, 143 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 41642eb59c..ba99d062b6 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -434,6 +434,86 @@ itemoffcompare(const void *itemidp1, const void *itemidp2)
 }
 
 /*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses Shell sort. Given that the array is small and itemoffcompare is
+ * inlined, it is much faster than a call to qsort.
+ */
+static void
+sort_itemIds_small(itemIdSort itemidbase, int nitems)
+{
+	pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+}
+
+/*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses bucket sort:
+ * - single pass of stable prefix sort on high 8 bits
+ * - and insertion sort on buckets larger than 1 element
+ */
+static void
+sort_itemIds(itemIdSort itemidbase, int nitems)
+{
+	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+#define NSPLIT 256
+#define PREFDIV (BLCKSZ / NSPLIT)
+	/* two extra elements to emulate offset on previous step */
+	uint16		count[NSPLIT + 2] = {0};
+	int			i,
+				max,
+				total,
+				pos,
+				highbits;
+
+	Assert(nitems <= MaxIndexTuplesPerPage);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		count[highbits]++;
+	}
+	/* sort in decreasing order */
+	max = total = count[NSPLIT - 1];
+	for (i = NSPLIT - 2; i >= 0; i--)
+	{
+		max |= count[i];
+		total += count[i];
+		count[i] = total;
+	}
+
+	/*
+	 * count[k+1] is start of bucket k, count[k] is end of bucket k, and
+	 * count[k] - count[k+1] is length of bucket k.
+	 */
+	Assert(count[0] == nitems);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		pos = count[highbits + 1];
+		count[highbits + 1]++;
+		copy[pos] = itemidbase[i];
+	}
+	Assert(count[1] == nitems);
+
+	if (max == 1)
+		goto end;
+
+	/*
+	 * count[k+2] is start of bucket k, count[k+1] is end of bucket k, and
+	 * count[k+1]-count[k+2] is length of bucket k.
+	 */
+	for (i = NSPLIT; i > 0; i--)
+	{
+		pg_insertion_sort(itemIdSortData,
+		  copy + count[i + 1],
+		  count[i] - count[i + 1],
+		  itemoffcompare);
+	}
+end:
+	memcpy(itemidbase, copy, sizeof(itemIdSortData) * nitems);
+}
+
+/*
  * After removing or marking some line pointers unused, move the tuples to
  * remove the gaps caused by the removed items.
  */
@@ -444,9 +524,10 @@ compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
 	Offset		upper;
 	int			i;
 
-	/* sort itemIdSortData array into decreasing itemoff order */
-	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
-		  itemoffcompare);
+	if (nitems > 48)
+		sort_itemIds(itemidbase, nitems);
+	else
+		sort_itemIds_small(itemidbase, nitems);
 

Re: [HACKERS] Small improvement to compactify_tuples

2017-07-21 Thread Sokolov Yura

On 2017-05-17 17:46, Sokolov Yura wrote:

Alvaro Herrera wrote on 2017-05-15 20:13:

As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea.  If you don't edit
the patches further, then you're all set because we already have the
previously archived patches.  Next commitfest starts in a few months
yet, and if you feel the need to submit corrected versions in the
meantime, please do submit in separate files.  (Some would even argue
that each should be its own thread, but I don't think that's necessary.)


Thank you for explanation.

I'm adding a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
  (i.e. 0 or 1 element). In that case there is no need to sort buckets at all.


I'm putting rebased version of second patch.

--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From 3ae5aa0401fc5028cc916d55c4b18e53bf6aa9fb Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 16:04:14 +0300
Subject: [PATCH 2/2] Simplify PageRepairFragmentation

On the assumption that a page usually doesn't become empty, merge the
second loop's body (collecting items with storage) into the first
(counting the kinds of items).
---
 src/backend/storage/page/bufpage.c | 47 +++---
 1 file changed, 19 insertions(+), 28 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index f28f18cff3..77cef51d02 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -497,6 +497,7 @@ sort_itemIds(itemIdSort itemidbase, int nitems)
 
 	if (max == 1)
 		goto end;
+
 	/*
 	 * count[k+2] is start of bucket k, count[k+1] is end of bucket k, and
 	 * count[k+1]-count[k+2] is length of bucket k.
@@ -563,10 +564,11 @@ PageRepairFragmentation(Page page)
 	Offset		pd_special = ((PageHeader) page)->pd_special;
 	ItemId		lp;
	int			nline,
-				nstorage,
				nunused;
 	int			i;
 	Size		totallen;
+	itemIdSortData itemidbase[MaxHeapTuplesPerPage];
+	itemIdSort	itemidptr = itemidbase;
 
 	/*
 	 * It's worth the trouble to be more paranoid here than in most places,
@@ -586,14 +588,26 @@ PageRepairFragmentation(Page page)
 		pd_lower, pd_upper, pd_special)));
 
 	nline = PageGetMaxOffsetNumber(page);
-	nunused = nstorage = 0;
+	nunused = totallen = 0;
 	for (i = FirstOffsetNumber; i <= nline; i++)
 	{
 		lp = PageGetItemId(page, i);
 		if (ItemIdIsUsed(lp))
 		{
 			if (ItemIdHasStorage(lp))
-				nstorage++;
+			{
+				itemidptr->offsetindex = i - 1;
+				itemidptr->itemoff = ItemIdGetOffset(lp);
+				if (unlikely(itemidptr->itemoff < (int) pd_upper ||
+							 itemidptr->itemoff >= (int) pd_special))
+					ereport(ERROR,
+							(errcode(ERRCODE_DATA_CORRUPTED),
+							 errmsg("corrupted item pointer: %u",
+									itemidptr->itemoff)));
+				itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
+				totallen += itemidptr->alignedlen;
+				itemidptr++;
+			}
 		}
 		else
 		{
@@ -603,7 +617,7 @@ PageRepairFragmentation(Page page)
 		}
 	}
 
-	if (nstorage == 0)
+	if (itemidptr == itemidbase)
 	{
 		/* Page is completely empty, so just reset it quickly */
 		((PageHeader) page)->pd_upper = pd_special;
@@ -611,36 +625,13 @@ PageRepairFragmentation(Page page)
 	else
 	{
 		/* Need to compact the page the hard way */
-		itemIdSortData itemidbase[MaxHeapTuplesPerPage];
-		itemIdSort	itemidptr = itemidbase;
-
-		totallen = 0;
-		for (i = 0; i < nline; i++)
-		{
-			lp = PageGetItemId(page, i + 1);
-			if (ItemIdHasStorage(lp))
-			{
-itemidptr->offsetindex = i;
-itemidptr->itemoff = ItemIdGetOffset(lp);
-if (itemidptr->itemoff < (int) pd_upper ||
-	itemidptr->itemoff >= (int) pd_special)
-	ereport(ERROR,
-			(errcode(ERRCODE_DATA_CORRUPTED),
-			 errmsg("corrupted item pointer: %u",
-	itemidptr->itemoff)));
-itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
-totallen += itemidptr->alignedlen;
-itemidptr++;
-			}
-		}
-
 		if (totallen > (Size) (pd_special - pd_lower))
 			ereport(ERROR,
 	(errcode(ERRCODE_DATA_CORRUPTED),
 	 errmsg("corrupted item lengths: total %u, available space %u",
 			(unsigned int) totallen, pd_special - pd_lower)));
 
-		compactify_tuples(itemidbase, nstorage, page);
+		compactify_tuples(itemidbase, (int) (itemidptr - itemidbase), page);
 	}
 
 	/* Set hint bit for PageAddItem */
-- 
2.11.0


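The second patch's restructuring above — doing the classification and the collection of line pointers in one pass instead of two — can be modeled with a small standalone sketch. The struct and function names (`FakeItemId`, `SortEntry`, `collect_storage_items`) are invented for illustration and simplify away the real ItemId bit layout:

```c
/* Hypothetical simplified model of the patch-2 idea: a single pass that
 * both classifies line pointers and collects the ones with storage,
 * instead of counting first and collecting in a second loop. */
typedef struct
{
	int			used;			/* 1 if the slot is in use */
	int			has_storage;	/* 1 if a tuple body is attached */
	int			off;			/* tuple offset, 0 if none */
	int			len;			/* tuple length, 0 if none */
} FakeItemId;

typedef struct
{
	int			index;
	int			off;
	int			len;
} SortEntry;

/* Returns the number of entries collected into 'out'; counts unused
 * slots into *nunused and sums tuple lengths into *totallen. */
static int
collect_storage_items(const FakeItemId *lp, int nline,
					  SortEntry *out, int *nunused, long *totallen)
{
	SortEntry  *p = out;
	int			i;

	*nunused = 0;
	*totallen = 0;
	for (i = 0; i < nline; i++)
	{
		if (lp[i].used)
		{
			if (lp[i].has_storage)
			{
				p->index = i;
				p->off = lp[i].off;
				p->len = lp[i].len;
				*totallen += p->len;
				p++;
			}
		}
		else
			(*nunused)++;
	}
	/* pointer difference replaces the separate nstorage counter */
	return (int) (p - out);
}
```

As in the patch, the "page is empty" test becomes a comparison of the output pointer against the array base, so the separate `nstorage` counter disappears.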


Re: [HACKERS] Small improvement to compactify_tuples

2017-05-17 Thread Sokolov Yura

Alvaro Herrera wrote on 2017-05-15 20:13:

As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea.  If you don't edit
the patches further, then you're all set because we already have the
previously archived patches.  Next commitfest starts in a few months
yet, and if you feel the need to submit corrected versions in the
meantime, please do submit in separate files.  (Some would even argue
that each should be its own thread, but I don't think that's necessary.)


Thank you for explanation.

I'm adding a new version of the first patch with a minor improvement:
- I added detection of the case when all buckets are trivial
  (i.e. 0 or 1 element). In that case there is no need to sort buckets at all.

--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From 4fafc870690b7de5aeab7e583780546e8170f6d0 Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 14:23:39 +0300
Subject: [PATCH 1/2] Improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general-purpose qsort incurs unnecessary function-call overhead
(swapfunc, med, itemoffcompare).

This patch implements a bucket sort:
- one pass of stable prefix sort on the high 8 bits of the offset
- then insertion sort for buckets larger than 1 element

For smaller arrays, Shell sort is used instead.

Insertion and Shell sort are implemented as macros.

This approach saves about 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
synchronous_commit=off), and speeds up WAL replay (as was
found by Heikki Linnakangas).

A similar approach was implemented by Heikki Linnakangas some time ago,
with several differences.
---
 src/backend/storage/page/bufpage.c | 86 --
 src/include/c.h| 59 ++
 2 files changed, 142 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a45b..c1bb11c354 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -434,6 +434,85 @@ itemoffcompare(const void *itemidp1, const void *itemidp2)
 }
 
 /*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses Shell sort. Given that the array is small and itemoffcompare is
+ * inlined, it is much faster than a call to qsort.
+ */
+static void
+sort_itemIds_small(itemIdSort itemidbase, int nitems)
+{
+	pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+}
+
+/*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses bucket sort:
+ * - single pass of stable prefix sort on high 8 bits
+ * - and insertion sort on buckets larger than 1 element
+ */
+static void
+sort_itemIds(itemIdSort itemidbase, int nitems)
+{
+	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+#define NSPLIT 256
+#define PREFDIV (BLCKSZ / NSPLIT)
+	/* two extra elements to emulate offset on previous step */
+	uint16		count[NSPLIT + 2] = {0};
+	int			i,
+				max,
+				total,
+				pos,
+				highbits;
+
+	Assert(nitems <= MaxIndexTuplesPerPage);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		count[highbits]++;
+	}
+	/* sort in decreasing order */
+	max = total = count[NSPLIT - 1];
+	for (i = NSPLIT - 2; i >= 0; i--)
+	{
+		max |= count[i];
+		total += count[i];
+		count[i] = total;
+	}
+
+	/*
+	 * count[k+1] is start of bucket k, count[k] is end of bucket k, and
+	 * count[k] - count[k+1] is length of bucket k.
+	 */
+	Assert(count[0] == nitems);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		pos = count[highbits + 1];
+		count[highbits + 1]++;
+		copy[pos] = itemidbase[i];
+	}
+	Assert(count[1] == nitems);
+
+	if (max == 1)
+		goto end;
+	/*
+	 * count[k+2] is start of bucket k, count[k+1] is end of bucket k, and
+	 * count[k+1]-count[k+2] is length of bucket k.
+	 */
+	for (i = NSPLIT; i > 0; i--)
+	{
+		pg_insertion_sort(itemIdSortData,
+		  copy + count[i + 1],
+		  count[i] - count[i + 1],
+		  itemoffcompare);
+	}
+end:
+	memcpy(itemidbase, copy, sizeof(itemIdSortData) * nitems);
+}
+
+/*
  * After removing or marking some line pointers unused, move the tuples to
  * remove the gaps caused by the removed items.
  */
@@ -444,9 +523,10 @@ compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
 	Offset		upper;
 	int			i;
 
-	/* sort itemIdSortData array into decreasing itemoff order */
-	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
-		  itemoffcompare);
+	if (nitems > 48)
+		sort_itemIds(itemidbase, nitems);
+	else
+		sort_itemIds_small(itemidbase, nitems);
 
 	upper = phdr->pd_special;
 	for (i = 0; i < nitems; i++)
diff --git a/src/include/c.h b/src/include/c.h
index fba07c651f..837940d5cf 100644
--- a/src/include/c.h
+++ b/src/include/c.h
@@ -962,6 +962,65 @@ typedef NameData *Name;
 

Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Alvaro Herrera
Sokolov Yura wrote:
> Sokolov Yura wrote on 2017-05-15 18:23:
> > Alvaro Herrera wrote on 2017-05-15 18:04:
> > > Please add these two patches to the upcoming commitfest,
> > > https://commitfest.postgresql.org/
> > 
> > Thank you for suggestion.
> > 
> > I've created https://commitfest.postgresql.org/14/1138/
> > As I understand, I should attach both patches to a single email
> > so they show correctly in the commitfest topic. So I do it with this email.

> Looks like it should be a single file.

As I understand, these patches are logically separate, so putting them
together in a single file isn't such a great idea.  If you don't edit
the patches further, then you're all set because we already have the
previously archived patches.  Next commitfest starts in a few months
yet, and if you feel the need to submit corrected versions in the
meantime, please do submit in separate files.  (Some would even argue
that each should be its own thread, but I don't think that's necessary.)

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Sokolov Yura

Sokolov Yura wrote on 2017-05-15 18:23:

Alvaro Herrera wrote on 2017-05-15 18:04:

Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/


Thank you for suggestion.

I've created https://commitfest.postgresql.org/14/1138/
As I understand, I should attach both patches to a single email
so they show correctly in the commitfest topic. So I do it with this email.

Please, correct me, if I do something wrong.

With regards.


Looks like it should be a single file.

--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From 231023298aad3024e31b487c8f5d2a6e68ad1da9 Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 14:23:39 +0300
Subject: [PATCH 1/2] Improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general-purpose qsort incurs unnecessary function-call overhead
(swapfunc, med, itemoffcompare).

This patch implements a bucket sort:
- one pass of stable prefix sort on the high 8 bits of the offset
- then insertion sort for buckets larger than 1 element

For smaller arrays, Shell sort is used instead.

Insertion and Shell sort are implemented as macros.

This approach saves about 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
synchronous_commit=off), and speeds up WAL replay (as was
found by Heikki Linnakangas).

A similar approach was implemented by Heikki Linnakangas some time ago,
with several differences.
---
 src/backend/storage/page/bufpage.c | 78 --
 src/include/c.h| 59 
 2 files changed, 134 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a45b..c5630e808b 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -434,6 +434,77 @@ itemoffcompare(const void *itemidp1, const void *itemidp2)
 }
 
 /*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses Shell sort. Given that the array is small and itemoffcompare is
+ * inlined, it is much faster than a call to qsort.
+ */
+static void
+sort_itemIds_small(itemIdSort itemidbase, int nitems)
+{
+	pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+}
+
+/*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses bucket sort:
+ * - single pass of stable prefix sort on high 8 bits
+ * - and insertion sort on buckets larger than 1 element
+ */
+static void
+sort_itemIds(itemIdSort itemidbase, int nitems)
+{
+	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+#define NSPLIT 256
+#define PREFDIV (BLCKSZ / NSPLIT)
+	/* two extra elements to emulate offset on previous step */
+	uint16		count[NSPLIT + 2] = {0};
+	int			i,
+				pos,
+				highbits;
+
+	Assert(nitems <= MaxIndexTuplesPerPage);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		count[highbits]++;
+	}
+	/* sort in decreasing order */
+	for (i = NSPLIT - 1; i != 0; i--)
+		count[i - 1] += count[i];
+
+	/*
+	 * count[k+1] is start of bucket k, count[k] is end of bucket k, and
+	 * count[k] - count[k+1] is length of bucket k.
+	 */
+	Assert(count[0] == nitems);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		pos = count[highbits + 1];
+		count[highbits + 1]++;
+		copy[pos] = itemidbase[i];
+	}
+	Assert(count[1] == nitems);
+
+	/*
+	 * count[k+2] is start of bucket k, count[k+1] is end of bucket k, and
+	 * count[k+1]-count[k+2] is length of bucket k.
+	 */
+	for (i = NSPLIT; i > 0; i--)
+	{
+		if (likely(count[i] - count[i + 1] <= 1))
+			continue;
+		pg_insertion_sort(itemIdSortData,
+		  copy + count[i + 1],
+		  count[i] - count[i + 1],
+		  itemoffcompare);
+	}
+	memcpy(itemidbase, copy, sizeof(itemIdSortData) * nitems);
+}
+
+/*
  * After removing or marking some line pointers unused, move the tuples to
  * remove the gaps caused by the removed items.
  */
@@ -444,9 +515,10 @@ compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
 	Offset		upper;
 	int			i;
 
-	/* sort itemIdSortData array into decreasing itemoff order */
-	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
-		  itemoffcompare);
+	if (nitems > 48)
+		sort_itemIds(itemidbase, nitems);
+	else
+		sort_itemIds_small(itemidbase, nitems);
 
 	upper = phdr->pd_special;
 	for (i = 0; i < nitems; i++)
diff --git a/src/include/c.h b/src/include/c.h
index fba07c651f..837940d5cf 100644
--- a/src/include/c.h
+++ b/src/include/c.h
@@ -962,6 +962,65 @@ typedef NameData *Name;
 #define unlikely(x) ((x) != 0)
 #endif
 
+/*
+ * pg_shell_sort - sort for small arrays with inlinable comparator.
+ * Since it is implemented as a macros it could be optimized together with
+ * comparison function.
+ * Gaps are "gap(i) = smallest prime number below e^i". They are close to
+ * Incerpi & Sedgewick gaps, but looks to 
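The truncated comment above describes the gap sequence as "smallest prime number below e^i" — read as the largest prime not exceeding e^i, that gives 2, 7, 19, 53, 139, ... Here is a hedged sketch of deriving such gaps; the helper names are hypothetical, not from the patch:

```c
/* Trial-division primality test; fine for the small gap values here. */
static int
is_prime(int n)
{
	int			d;

	if (n < 2)
		return 0;
	for (d = 2; d * d <= n; d++)
		if (n % d == 0)
			return 0;
	return 1;
}

/* Largest prime <= n, or 0 if there is none.  Calling this with
 * floor(e^i) for i = 1, 2, 3, ... yields the gap sequence 2, 7, 19,
 * 53, 139, ... described in the comment above. */
static int
largest_prime_at_most(int n)
{
	while (n >= 2 && !is_prime(n))
		n--;
	return n >= 2 ? n : 0;
}
```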

Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Sokolov Yura

Alvaro Herrera wrote on 2017-05-15 18:04:

Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/


Thank you for suggestion.

I've created https://commitfest.postgresql.org/14/1138/
As I understand, I should attach both patches to a single email
so they show correctly in the commitfest topic. So I do it with this email.

Please, correct me, if I do something wrong.

With regards.
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From 2cde4cb6b0c4c5868d99e13789b0ac33364d7315 Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 16:04:14 +0300
Subject: [PATCH 2/2] bufpage.c: simplify PageRepairFragmentation

On the assumption that a page usually doesn't become empty, merge the
second loop's body (collecting items with storage) into the first
(counting the kinds of items).
---
 src/backend/storage/page/bufpage.c | 46 +++---
 1 file changed, 18 insertions(+), 28 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index c5630e808b..61738f241f 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -555,10 +555,11 @@ PageRepairFragmentation(Page page)
 	Offset		pd_special = ((PageHeader) page)->pd_special;
 	ItemId		lp;
	int			nline,
-				nstorage,
				nunused;
 	int			i;
 	Size		totallen;
+	itemIdSortData itemidbase[MaxHeapTuplesPerPage];
+	itemIdSort	itemidptr = itemidbase;
 
 	/*
 	 * It's worth the trouble to be more paranoid here than in most places,
@@ -578,14 +579,26 @@ PageRepairFragmentation(Page page)
 		pd_lower, pd_upper, pd_special)));
 
 	nline = PageGetMaxOffsetNumber(page);
-	nunused = nstorage = 0;
+	nunused = totallen = 0;
 	for (i = FirstOffsetNumber; i <= nline; i++)
 	{
 		lp = PageGetItemId(page, i);
 		if (ItemIdIsUsed(lp))
 		{
 			if (ItemIdHasStorage(lp))
-nstorage++;
+			{
+itemidptr->offsetindex = i - 1;
+itemidptr->itemoff = ItemIdGetOffset(lp);
+if (unlikely(itemidptr->itemoff < (int) pd_upper ||
+			 itemidptr->itemoff >= (int) pd_special))
+	ereport(ERROR,
+			(errcode(ERRCODE_DATA_CORRUPTED),
+			 errmsg("corrupted item pointer: %u",
+	itemidptr->itemoff)));
+itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
+totallen += itemidptr->alignedlen;
+itemidptr++;
+			}
 		}
 		else
 		{
@@ -595,7 +608,7 @@ PageRepairFragmentation(Page page)
 		}
 	}
 
-	if (nstorage == 0)
+	if (itemidptr == itemidbase)
 	{
 		/* Page is completely empty, so just reset it quickly */
 		((PageHeader) page)->pd_upper = pd_special;
@@ -603,36 +616,13 @@ PageRepairFragmentation(Page page)
 	else
 	{
 		/* Need to compact the page the hard way */
-		itemIdSortData itemidbase[MaxHeapTuplesPerPage];
-		itemIdSort	itemidptr = itemidbase;
-
-		totallen = 0;
-		for (i = 0; i < nline; i++)
-		{
-			lp = PageGetItemId(page, i + 1);
-			if (ItemIdHasStorage(lp))
-			{
-itemidptr->offsetindex = i;
-itemidptr->itemoff = ItemIdGetOffset(lp);
-if (itemidptr->itemoff < (int) pd_upper ||
-	itemidptr->itemoff >= (int) pd_special)
-	ereport(ERROR,
-			(errcode(ERRCODE_DATA_CORRUPTED),
-			 errmsg("corrupted item pointer: %u",
-	itemidptr->itemoff)));
-itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
-totallen += itemidptr->alignedlen;
-itemidptr++;
-			}
-		}
-
 		if (totallen > (Size) (pd_special - pd_lower))
 			ereport(ERROR,
 	(errcode(ERRCODE_DATA_CORRUPTED),
 			   errmsg("corrupted item lengths: total %u, available space %u",
 	  (unsigned int) totallen, pd_special - pd_lower)));
 
-		compactify_tuples(itemidbase, nstorage, page);
+		compactify_tuples(itemidbase, (int) (itemidptr - itemidbase), page);
 	}
 
 	/* Set hint bit for PageAddItem */
-- 
2.11.0

From 8f75fcaba0590d5150dd956a60feee5c28f4caab Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 14:23:39 +0300
Subject: [PATCH] storage/page/bufpage.c: improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general qsort incurs unnecessary overhead from function calls (swapfunc,
med, itemoffcompare).

This patch implements a bucket sort:
- one pass of stable prefix sort on the high 8 bits of the offset
- insertion sort for buckets larger than 1 element

For smaller arrays, shell sort is used instead.

Insertion and shell sort are implemented as macros.

This approach saves 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
 synchronous_commit=off) and speeds up WAL replay (as found by
Heikki Linnakangas).

The same approach was implemented by Heikki Linnakangas some time ago,
with several differences.
---
 src/backend/storage/page/bufpage.c | 78 --
 src/include/c.h| 59 
 2 files changed, 134 insertions(+), 3 deletions(-)

Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Alvaro Herrera
Please add these two patches to the upcoming commitfest,
https://commitfest.postgresql.org/

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Sokolov Yura

Sokolov Yura wrote 2017-05-15 15:08:

Heikki Linnakangas wrote 2017-05-15 12:06:

On 05/14/2017 09:47 PM, Sokolov Yura wrote:

Good day, everyone.

I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of CPU spent in
compactify_tuples (called by PageRepairFragmentation). Most of the time
was spent in the qsort of itemidbase items.


Ah, I played with this too a couple of years ago, see
https://www.postgresql.org/message-id/546B89DE.7030906%40vmware.com,
but got distracted by other things and never got around to committing that.


The itemidbase array is bounded by the number of tuples in a page, and the
itemIdSortData structure is simple, so a specialized version could be a
better choice.

The attached patch adds a combination of one pass of prefix sort with
insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and can be reused.


Cool! Could you compare that against the bucket sort I posted in the
above thread, please?

At a quick glance, your "prefix sort" seems to be the same
algorithm as the bucket sort that I implemented. You chose 256
buckets, where I picked 32. And you're adding a shell sort
implementation for small arrays, while I used a straight insertion
sort. Not sure what these differences mean in practice.

- Heikki


Thank you for your attention.

My first version of the big-page sort was almost exactly the same as yours.
I had a bug in my insertion sort, so I had to refactor it.
(The bug is fixed.)

I found that the items in itemidbase are almost sorted, so it is important
to try to keep their order in the prefix sort. So I've changed --count[i]
to count[i+1]++.

And it looks like it is better to have more buckets:
- with 256 buckets, the size of a single bucket is almost always less than
2, so the array is almost always sorted after the prefix sort pass.

But it looks like it is better to sort each bucket separately, as you
did, and as it was in my early version.
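The stable prefix pass discussed here - counting into count[highbits] and
then writing through count[highbits + 1]++ so that items sharing a bucket
keep their input order - can be sketched as a standalone function. This is
only a sketch, not the patch itself: DemoItem, MAXITEMS, and the 8192-byte
page size stand in for itemIdSortData, MaxIndexTuplesPerPage, and BLCKSZ,
and the follow-up insertion sort over buckets larger than one element is
omitted.

```c
#include <assert.h>
#include <string.h>

#define NBUCKETS  256
#define BUCKETDIV (8192 / NBUCKETS)   /* page size / bucket count */
#define MAXITEMS  512

typedef struct
{
	int itemoff;    /* offset of the tuple within the page */
	int payload;    /* stand-in for the rest of itemIdSortData */
} DemoItem;

/*
 * One stable counting pass over the high 8 bits of itemoff, descending.
 * After the suffix sums, count[k] is the end of bucket k and count[k+1]
 * is its start, so writing through count[highbits + 1]++ preserves the
 * input order of items that share a bucket.
 */
static void
prefix_presort(DemoItem *items, int nitems)
{
	DemoItem	copy[MAXITEMS];
	int			count[NBUCKETS + 2] = {0};
	int			i,
				highbits;

	assert(nitems <= MAXITEMS);
	for (i = 0; i < nitems; i++)
		count[items[i].itemoff / BUCKETDIV]++;
	/* suffix sums: the bucket with the highest offsets comes first */
	for (i = NBUCKETS - 1; i != 0; i--)
		count[i - 1] += count[i];
	assert(count[0] == nitems);
	for (i = 0; i < nitems; i++)
	{
		highbits = items[i].itemoff / BUCKETDIV;
		copy[count[highbits + 1]++] = items[i];
	}
	memcpy(items, copy, sizeof(DemoItem) * nitems);
}
```

Because the input is almost sorted and buckets average fewer than two
items, this single pass leaves the array sorted or nearly so.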

Also I used your names for functions and some comments.

I attached new version of the patch.

I left the memcpy intact because it doesn't look like it takes noticeable
CPU time.


As a follow-up, I propose to simplify PageRepairFragmentation in the
attached patch.

--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Sokolov Yura

Heikki Linnakangas wrote 2017-05-15 12:06:

On 05/14/2017 09:47 PM, Sokolov Yura wrote:

Good day, everyone.

I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of CPU spent in
compactify_tuples (called by PageRepairFragmentation). Most of the time
was spent in the qsort of itemidbase items.


Ah, I played with this too a couple of years ago, see
https://www.postgresql.org/message-id/546B89DE.7030906%40vmware.com,
but got distracted by other things and never got around to committing that.


The itemidbase array is bounded by the number of tuples in a page, and the
itemIdSortData structure is simple, so a specialized version could be a
better choice.

The attached patch adds a combination of one pass of prefix sort with
insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and can be reused.


Cool! Could you compare that against the bucket sort I posted in the
above thread, please?

At a quick glance, your "prefix sort" seems to be the same
algorithm as the bucket sort that I implemented. You chose 256
buckets, where I picked 32. And you're adding a shell sort
implementation for small arrays, while I used a straight insertion
sort. Not sure what these differences mean in practice.

- Heikki


Thank you for your attention.

My first version of the big-page sort was almost exactly the same as yours.
I had a bug in my insertion sort, so I had to refactor it.
(The bug is fixed.)

I found that the items in itemidbase are almost sorted, so it is important
to try to keep their order in the prefix sort. So I've changed --count[i]
to count[i+1]++.

And it looks like it is better to have more buckets:
- with 256 buckets, the size of a single bucket is almost always less than
2, so the array is almost always sorted after the prefix sort pass.

But it looks like it is better to sort each bucket separately, as you
did, and as it was in my early version.

Also I used your names for functions and some comments.

I attached new version of the patch.

I left the memcpy intact because it doesn't look like it takes noticeable
CPU time.

--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian PostgreSQL Company

From 8f75fcaba0590d5150dd956a60feee5c28f4caab Mon Sep 17 00:00:00 2001
From: Sokolov Yura 
Date: Mon, 15 May 2017 14:23:39 +0300
Subject: [PATCH] storage/page/bufpage.c: improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general qsort incurs unnecessary overhead from function calls (swapfunc,
med, itemoffcompare).

This patch implements a bucket sort:
- one pass of stable prefix sort on the high 8 bits of the offset
- insertion sort for buckets larger than 1 element

For smaller arrays, shell sort is used instead.

Insertion and shell sort are implemented as macros.

This approach saves 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
 synchronous_commit=off) and speeds up WAL replay (as found by
Heikki Linnakangas).

The same approach was implemented by Heikki Linnakangas some time ago,
with several differences.
---
 src/backend/storage/page/bufpage.c | 78 --
 src/include/c.h| 59 
 2 files changed, 134 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a45b..c5630e808b 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -434,6 +434,77 @@ itemoffcompare(const void *itemidp1, const void *itemidp2)
 }
 
 /*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses Shell sort. Given array is small and itemoffcompare is inlined,
+ * it is much faster than call to qsort.
+ */
+static void
+sort_itemIds_small(itemIdSort itemidbase, int nitems)
+{
+	pg_shell_sort(itemIdSortData, itemidbase, nitems, itemoffcompare);
+}
+
+/*
+ * Sort an array of itemIdSort's on itemoff, descending.
+ *
+ * It uses bucket sort:
+ * - single pass of stable prefix sort on high 8 bits
+ * - and insertion sort on buckets larger than 1 element
+ */
+static void
+sort_itemIds(itemIdSort itemidbase, int nitems)
+{
+	itemIdSortData copy[Max(MaxIndexTuplesPerPage, MaxHeapTuplesPerPage)];
+#define NSPLIT 256
+#define PREFDIV (BLCKSZ / NSPLIT)
+	/* two extra elements to emulate offset on previous step */
+	uint16		count[NSPLIT + 2] = {0};
+	int			i,
+pos,
+highbits;
+
+	Assert(nitems <= MaxIndexTuplesPerPage);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		count[highbits]++;
+	}
+	/* sort in decreasing order */
+	for (i = NSPLIT - 1; i != 0; i--)
+		count[i - 1] += count[i];
+
+	/*
+	 * count[k+1] is start of bucket k, count[k] is end of bucket k, and
+	 * count[k] - count[k+1] is length of bucket k.
+	 */
+	Assert(count[0] == nitems);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = 

Re: [HACKERS] Small improvement to compactify_tuples

2017-05-15 Thread Heikki Linnakangas

On 05/14/2017 09:47 PM, Sokolov Yura wrote:

Good day, everyone.

I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of CPU spent in
compactify_tuples (called by PageRepairFragmentation). Most of the time
was spent in the qsort of itemidbase items.


Ah, I played with this too a couple of years ago, see 
https://www.postgresql.org/message-id/546B89DE.7030906%40vmware.com, but 
got distracted by other things and never got around to committing that.



The itemidbase array is bounded by the number of tuples in a page, and the
itemIdSortData structure is simple, so a specialized version could be a
better choice.

The attached patch adds a combination of one pass of prefix sort with
insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and can be reused.


Cool! Could you compare that against the bucket sort I posted in the 
above thread, please?


At a quick glance, your "prefix sort" seems to be the same algorithm 
as the bucket sort that I implemented. You chose 256 buckets, where I 
picked 32. And you're adding a shell sort implementation for small 
arrays, while I used a straight insertion sort. Not sure what these 
differences mean in practice.
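For small arrays, the two approaches differ mainly in the inner sort.
Below is a minimal sketch of a descending shell sort as a plain function
(the patch implements it as the pg_shell_sort macro so the comparator can
be inlined); the short gap sequence here is illustrative only, not the
patch's primes-near-e^i sequence.

```c
#include <assert.h>

/* Shell sort an int array into decreasing order. */
static void
shell_sort_desc(int *a, int n)
{
	static const int gaps[] = {7, 3, 1};
	int			g,
				i,
				j;

	for (g = 0; g < (int) (sizeof(gaps) / sizeof(gaps[0])); g++)
	{
		int			gap = gaps[g];

		/* gapped insertion sort: shift smaller elements rightward */
		for (i = gap; i < n; i++)
		{
			int			tmp = a[i];

			for (j = i; j >= gap && a[j - gap] < tmp; j -= gap)
				a[j] = a[j - gap];
			a[j] = tmp;
		}
	}
}
```

With a degenerate gap sequence of {1} this reduces to a straight insertion
sort, which is why the two variants behave almost identically on the tiny,
nearly sorted arrays seen here.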


- Heikki





[HACKERS] Small improvement to compactify_tuples

2017-05-14 Thread Sokolov Yura

Good day, everyone.

I've been playing a bit with unlogged tables - just random updates on a
simple key-value table. I've noticed the amount of CPU spent in
compactify_tuples (called by PageRepairFragmentation). Most of the time
was spent in the qsort of itemidbase items.

The itemidbase array is bounded by the number of tuples in a page, and the
itemIdSortData structure is simple, so a specialized version could be a
better choice.

The attached patch adds a combination of one pass of prefix sort with
insertion sort for larger arrays, and shell sort for smaller arrays.
Insertion sort and shell sort are implemented as macros and can be reused.
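For context, once the item array is sorted by itemoff descending, the
compaction step itself is a single pass that packs tuples against the end
of the page. A simplified, self-contained sketch of that move step (the
ItemRef struct and compactify signature are illustrative; the real
compactify_tuples also updates the page's ItemId array and uses MAXALIGNed
lengths):

```c
#include <assert.h>
#include <string.h>

typedef struct
{
	int itemoff;    /* current offset of the tuple within the page */
	int len;        /* tuple length */
} ItemRef;

/*
 * Pack tuples against the end of the page, assuming refs[] is already
 * sorted by itemoff descending.  Processing the highest tuple first
 * guarantees each destination is at or above its source, so one
 * memmove per tuple suffices.  Returns the new pd_upper-like boundary.
 */
static int
compactify(char *page, int pagesz, ItemRef *refs, int n)
{
	int			upper = pagesz;
	int			i;

	for (i = 0; i < n; i++)
	{
		upper -= refs[i].len;
		assert(upper >= refs[i].itemoff);	/* holds due to sort order */
		memmove(page + upper, page + refs[i].itemoff, refs[i].len);
		refs[i].itemoff = upper;			/* record the new location */
	}
	return upper;
}
```

This is why the sort order matters at all: any other order would require a
scratch copy of the tuples, whereas descending order lets the move happen
in place.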


I've tested the following table:

create unlogged table test3 (
id integer PRIMARY KEY with (fillfactor=85),
val text
) WITH (fillfactor=85);
insert into test3 select i, '!'||i from generate_series(1, 1000) 
as i;


With pgbench script:

\set id1 RANDOM(1, :scale)
\set id2 RANDOM(1, :scale)

select * from test3 where id = :id1;
update test3 set val = '!'|| :id2 where id = :id1;

And command:

pgbench -M prepared -c 3 -s 1000 -T 1000 -P 3 -n -f test3.sql 
testdb


Using 1GB shared_buffers and synchronous_commit=off.

On my notebook the improvement is:

before patch:

progress: 63.0 s, 15880.1 tps, lat 0.189 ms stddev 0.127
progress: 66.0 s, 15975.8 tps, lat 0.188 ms stddev 0.122
progress: 69.0 s, 15904.1 tps, lat 0.189 ms stddev 0.152
progress: 72.0 s, 15000.9 tps, lat 0.200 ms stddev 0.213
progress: 75.0 s, 15101.7 tps, lat 0.199 ms stddev 0.192
progress: 78.0 s, 15854.2 tps, lat 0.189 ms stddev 0.158
progress: 81.0 s, 15803.3 tps, lat 0.190 ms stddev 0.158
progress: 84.0 s, 15242.9 tps, lat 0.197 ms stddev 0.203
progress: 87.0 s, 15184.1 tps, lat 0.198 ms stddev 0.215

after patch:

progress: 63.0 s, 17108.5 tps, lat 0.175 ms stddev 0.140
progress: 66.0 s, 17271.9 tps, lat 0.174 ms stddev 0.155
progress: 69.0 s, 17243.5 tps, lat 0.174 ms stddev 0.143
progress: 72.0 s, 16675.3 tps, lat 0.180 ms stddev 0.206
progress: 75.0 s, 17187.4 tps, lat 0.175 ms stddev 0.157
progress: 78.0 s, 17293.0 tps, lat 0.173 ms stddev 0.159
progress: 81.0 s, 16289.8 tps, lat 0.184 ms stddev 0.180
progress: 84.0 s, 16131.2 tps, lat 0.186 ms stddev 0.170
progress: 87.0 s, 16741.1 tps, lat 0.179 ms stddev 0.165

I understand that it is quite a degenerate test case,
but this improvement probably still makes sense.

With regards,
--
Sokolov Yura aka funny.falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company

From f8ed235dbb60a79b4f59dc8a4af014b2ca698772 Mon Sep 17 00:00:00 2001
From: Sokolov Yura aka funny_falcon 
Date: Sun, 14 May 2017 20:57:00 +0300
Subject: [PATCH] storage/page/bufpage.c: improve compactify_tuples

Items passed to compactify_tuples are almost sorted, so a call to the
general qsort incurs unnecessary overhead from function calls (swapfunc,
med, itemoffcompare).

This patch adds one pass of prefix sort + insertion sort for large arrays
of items, and shell sort for smaller arrays.

Insertion and shell sort are implemented as macros.

This approach saves 3% of CPU in a degenerate case
(highly intensive HOT random updates on an unlogged table with
 synchronous_commit=off).
---
 src/backend/storage/page/bufpage.c | 42 +--
 src/include/c.h| 59 ++
 2 files changed, 98 insertions(+), 3 deletions(-)

diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..b7c6392 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -433,6 +433,36 @@ itemoffcompare(const void *itemidp1, const void *itemidp2)
 		((itemIdSort) itemidp1)->itemoff;
 }
 
+#define NSPLIT 256
+#define PREFDIV (BLCKSZ / NSPLIT)
+/* one pass of prefix sort for high 8 bits of itemoff.*/
+static void
+prefix_presort(itemIdSort itemidbase, int nitems)
+{
+	itemIdSortData copy[MaxIndexTuplesPerPage];
+	int	count[NSPLIT+1] = { 0 };
+	int i, pos, highbits;
+
+	Assert(nitems <= MaxIndexTuplesPerPage);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		count[highbits]++;
+	}
+	/* sort in decreasing order */
+	for (i = NSPLIT-1; i != 0; i--)
+		count[i-1] += count[i];
+	Assert(count[0] == nitems);
+	for (i = 0; i < nitems; i++)
+	{
+		highbits = itemidbase[i].itemoff / PREFDIV;
+		pos = count[highbits+1]++;
+		copy[pos] = itemidbase[i];
+	}
+	Assert(count[1] == nitems);
+	memcpy(itemidbase, copy, sizeof(itemIdSortData) * nitems);
+}
+
 /*
  * After removing or marking some line pointers unused, move the tuples to
  * remove the gaps caused by the removed items.
@@ -444,9 +474,15 @@ compactify_tuples(itemIdSort itemidbase, int nitems, Page page)
 	Offset		upper;
 	int			i;
 
-	/* sort itemIdSortData array into decreasing itemoff order */
-	qsort((char *) itemidbase, nitems, sizeof(itemIdSortData),
-