Re: Boundary value check in lazy_tid_reaped()

2021-03-16 Thread Hannu Krosing
We could also go parallel in another direction: I have been mulling over
writing a "vectorized" bsearch using AVX2, where you look up 64 (or more
likely 32, so the TIDs also fit in a 256-bit vector) TIDs at a time.

The trickiest part is that the search can complete at a different
iteration for each value.

Of course, it is possible that this has very bad RAM access behaviour and
is no win at all, even if it otherwise works.
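
For illustration, here is a scalar model of that lane-synchronous search
(plain C with made-up names, standing in for actual AVX2 intrinsics):
every lookup advances through the same number of iterations, and a lane
that finishes early is simply masked off, which is how the "different
iteration per value" problem would be handled with a SIMD write mask.

#include <stdbool.h>
#include <stdint.h>

#define NLANES 32

/*
 * Look up NLANES keys in a sorted array at once.  All lanes step in
 * lockstep; a lane that has found its key or exhausted its range is
 * skipped, playing the role of a vector mask.
 */
void
batch_bsearch(const int64_t *arr, int n, const int64_t *keys, bool *found)
{
	int			lo[NLANES],
				hi[NLANES];

	for (int lane = 0; lane < NLANES; lane++)
	{
		lo[lane] = 0;
		hi[lane] = n;			/* half-open search range [lo, hi) */
		found[lane] = false;
	}

	/* floor(log2(n)) + 1 lockstep iterations cover every lane */
	for (int step = 0; (1 << step) <= n; step++)
	{
		for (int lane = 0; lane < NLANES; lane++)
		{
			int			mid;

			if (lo[lane] >= hi[lane])
				continue;		/* lane already done: "masked off" */

			mid = (lo[lane] + hi[lane]) / 2;
			if (arr[mid] == keys[lane])
			{
				found[lane] = true;
				lo[lane] = hi[lane];	/* retire the lane */
			}
			else if (arr[mid] < keys[lane])
				lo[lane] = mid + 1;
			else
				hi[lane] = mid;
		}
	}
}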

--
Hannu Krosing

On Tue, Mar 16, 2021 at 10:08 PM Peter Geoghegan  wrote:

> On Sun, Mar 14, 2021 at 4:22 PM Thomas Munro 
> wrote:
> > BTW I got around to trying this idea out for a specialised
> > bsearch_itemptr() using a wide comparator, over here:
>
> Cool!
>
> I have another thing that should be considered when we revisit this
> area in the future: maybe we should structure the binary search to
> look up multiple TIDs at once -- probably all of the TIDs from a given
> leaf page, taken together.
>
> There is now an nbtree function used for tuple deletion (all tuple
> deletion, not just bottom-up deletion) that works like this:
> _bt_delitems_delete_check(). I suspect that it would make sense to
> generalize it to do the same thing for regular VACUUM. Perhaps this
> idea would have to be combined with other techniques to show a real
> benefit. It would probably be necessary to sort the TIDs first (just
> like index_delete_sort() does for the _bt_delitems_delete_check() code
> today), but that's probably no big deal.
>
> It is typical to have 200-400 TIDs on an nbtree leaf page without
> using deduplication. And with deduplication enabled you can have as
> many as 1300 TIDs on a single 8KiB nbtree leaf page. It's easy to
> imagine something like GCC's __builtin_prefetch() (or maybe just more
> predictable access patterns in our "batch binary search") making
> everything much faster through batching. This will also naturally make
> btvacuumpage() much less "branchy", since of course it will no longer
> need to process the page one TID at a time -- that helps too.
>
> --
> Peter Geoghegan
>
>
>


Re: Boundary value check in lazy_tid_reaped()

2021-03-16 Thread Peter Geoghegan
On Sun, Mar 14, 2021 at 4:22 PM Thomas Munro  wrote:
> BTW I got around to trying this idea out for a specialised
> bsearch_itemptr() using a wide comparator, over here:

Cool!

I have another thing that should be considered when we revisit this
area in the future: maybe we should structure the binary search to
look up multiple TIDs at once -- probably all of the TIDs from a given
leaf page, taken together.

There is now an nbtree function used for tuple deletion (all tuple
deletion, not just bottom-up deletion) that works like this:
_bt_delitems_delete_check(). I suspect that it would make sense to
generalize it to do the same thing for regular VACUUM. Perhaps this
idea would have to be combined with other techniques to show a real
benefit. It would probably be necessary to sort the TIDs first (just
like index_delete_sort() does for the _bt_delitems_delete_check() code
today), but that's probably no big deal.

It is typical to have 200-400 TIDs on an nbtree leaf page without
using deduplication. And with deduplication enabled you can have as
many as 1300 TIDs on a single 8KiB nbtree leaf page. It's easy to
imagine something like GCC's __builtin_prefetch() (or maybe just more
predictable access patterns in our "batch binary search") making
everything much faster through batching. This will also naturally make
btvacuumpage() much less "branchy", since of course it will no longer
need to process the page one TID at a time -- that helps too.
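
To make the prefetch idea concrete, here is a hedged sketch (made-up
names, not the committed code) of a binary search over encoded TIDs that
prefetches both possible next midpoints before the current comparison
resolves.  One prefetch is always wasted, but the useful one overlaps
the cache miss with the comparison; in a batch search the effect grows
because other TIDs' comparisons also overlap the latency.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool
bsearch_tid_prefetch(const int64_t *arr, size_t n, int64_t key)
{
	size_t		lo = 0,
				hi = n;			/* half-open range [lo, hi) */

	while (lo < hi)
	{
		size_t		mid = lo + (hi - lo) / 2;

		/* the next midpoint if we go left, and if we go right */
		__builtin_prefetch(&arr[lo + (mid - lo) / 2], 0, 1);
		__builtin_prefetch(&arr[mid + 1 + (hi - mid - 1) / 2], 0, 1);

		if (arr[mid] == key)
			return true;
		else if (arr[mid] < key)
			lo = mid + 1;
		else
			hi = mid;
	}
	return false;
}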

-- 
Peter Geoghegan




Re: Boundary value check in lazy_tid_reaped()

2021-03-14 Thread Thomas Munro
On Wed, Sep 9, 2020 at 7:33 AM Peter Geoghegan  wrote:
> On Tue, Sep 8, 2020 at 1:23 AM Masahiko Sawada
>  wrote:
> > > > I wonder if you would also see a speed-up with a bsearch() replacement
> > > > that is inlineable, so it can inline the comparator (instead of
> > > > calling it through a function pointer).  I wonder if something more
> > > > like (lblk << 32 | loff) - (rblk << 32 | roff) would go faster than
> > > > the branchy comparator.
> > >
> > > Erm, of course that expression won't work... should be << 16, but
> > > even then it would only work with a bsearch that uses int64_t
> > > comparators, so I take that part back.
> >
> > Yeah, it seems worth benchmarking the speed-up with inlining.
> > I'll do some performance tests with/without inlining on top of
> > checking boundary values.
>
> It sounds like Thomas was talking about something like
> itemptr_encode() + itemptr_decode(). In case you didn't know, we
> actually do something like this for the TID tuplesort used for CREATE
> INDEX CONCURRENTLY.

BTW I got around to trying this idea out for a specialised
bsearch_itemptr() using a wide comparator, over here:

https://www.postgresql.org/message-id/CA%2BhUKGKztHEWm676csTFjYzortziWmOcf8HDss2Zr0muZ2xfEg%40mail.gmail.com




Re: Boundary value check in lazy_tid_reaped()

2021-03-10 Thread Masahiko Sawada
On Wed, Mar 10, 2021 at 11:53 PM Peter Eisentraut
 wrote:
>
> On 10.03.21 02:29, Masahiko Sawada wrote:
> >> There is no noticeable effect of inlining lazy_tid_reaped(). So it
> >> would be better to not do that.
> >
> > Attached is the patch that doesn't inline lazy_tid_reaped().
>
> committed

Thank you!

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/




Re: Boundary value check in lazy_tid_reaped()

2021-03-10 Thread Peter Eisentraut

On 10.03.21 02:29, Masahiko Sawada wrote:

There is no noticeable effect of inlining lazy_tid_reaped(). So it
would be better to not do that.


Attached is the patch that doesn't inline lazy_tid_reaped().


committed




Re: Boundary value check in lazy_tid_reaped()

2021-03-09 Thread Masahiko Sawada
On Tue, Mar 9, 2021 at 9:57 AM Masahiko Sawada  wrote:
>
> On Mon, Mar 8, 2021 at 7:16 PM Peter Eisentraut
>  wrote:
> >
> > On 21.01.21 14:11, Masahiko Sawada wrote:
> > > Agreed. bsearch with bound check showed a reasonable improvement in my
> > > evaluation in terms of performance. Regarding memory efficiency, we
> > > can experiment with other methods later.
> > >
> > > I've attached the patch that adds a bound check for encoded
> > > ItemPointers before bsearch() in lazy_tid_reaped() and inlines the
> > > function.
> >
> > Do you have any data showing the effect of inlining lazy_tid_reaped()?
> > I mean, it probably won't hurt, but it wasn't part of the original patch
> > that you tested, so I wonder whether it has any noticeable effect.
>
> I've done some benchmarks while changing the distribution of where
> dead tuples exist within the table. The table size is 4GB and 20% of
> total tuples are dirty. Here are the results of index vacuum execution
> time (in seconds):
>
> 1. Updated the table evenly (every block has at least one dead tuple).
> master  : 8.15
> inlining  : 4.84
> not-inlining  : 5.01
>
> 2. Updated the middle of the table.
> master  : 8.71
> inlining  : 3.51
> not-inlining  : 3.58
>
> 3. Updated both the beginning and the tail of the table.
> master  : 8.44
> inlining  : 3.46
> not-inlining  : 3.50
>
> There is no noticeable effect of inlining lazy_tid_reaped(). So it
> would be better to not do that.

Attached is the patch that doesn't inline lazy_tid_reaped().

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/


bound_check_lazy_vacuum_noinline.patch
Description: Binary data


Re: Boundary value check in lazy_tid_reaped()

2021-03-08 Thread Masahiko Sawada
On Mon, Mar 8, 2021 at 7:16 PM Peter Eisentraut
 wrote:
>
> On 21.01.21 14:11, Masahiko Sawada wrote:
> > Agreed. bsearch with bound check showed a reasonable improvement in my
> > evaluation in terms of performance. Regarding memory efficiency, we
> > can experiment with other methods later.
> >
> > I've attached the patch that adds a bound check for encoded
> > ItemPointers before bsearch() in lazy_tid_reaped() and inlines the
> > function.
>
> Do you have any data showing the effect of inlining lazy_tid_reaped()?
> I mean, it probably won't hurt, but it wasn't part of the original patch
> that you tested, so I wonder whether it has any noticeable effect.

I've done some benchmarks while changing the distribution of where
dead tuples exist within the table. The table size is 4GB and 20% of
total tuples are dirty. Here are the results of index vacuum execution
time (in seconds):

1. Updated the table evenly (every block has at least one dead tuple).
master  : 8.15
inlining  : 4.84
not-inlining  : 5.01

2. Updated the middle of the table.
master  : 8.71
inlining  : 3.51
not-inlining  : 3.58

3. Updated both the beginning and the tail of the table.
master  : 8.44
inlining  : 3.46
not-inlining  : 3.50

There is no noticeable effect of inlining lazy_tid_reaped(). So it
would be better to not do that.

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/




Re: Boundary value check in lazy_tid_reaped()

2021-03-08 Thread Peter Eisentraut

On 21.01.21 14:11, Masahiko Sawada wrote:

Agreed. bsearch with bound check showed a reasonable improvement in my
evaluation in terms of performance. Regarding memory efficiency, we
can experiment with other methods later.

I've attached the patch that adds a bound check for encoded
ItemPointers before bsearch() in lazy_tid_reaped() and inlines the
function.


Do you have any data showing the effect of inlining lazy_tid_reaped()? 
I mean, it probably won't hurt, but it wasn't part of the original patch 
that you tested, so I wonder whether it has any noticeable effect.





Re: Boundary value check in lazy_tid_reaped()

2021-01-21 Thread Masahiko Sawada
On Wed, Jan 20, 2021 at 3:50 PM Peter Eisentraut
 wrote:
>
> On 2020-10-30 02:43, Masahiko Sawada wrote:
> > Using the integer set is very memory efficient (5MB vs. 114MB in the
> > base case) and there is no 1GB limitation. Looking at the execution
> > time, I had expected that using the integer set would be slower at
> > recording TIDs than using the simple array, but in this experiment there
> > is no big difference among the methods. Perhaps the result will vary if
> > vacuum needs to record many more dead tuple TIDs. From the results, I
> > can see a good improvement in the integer set case, and it is probably
> > a good start, but if we want to use it also for lazy vacuum, we will
> > need to improve it so that it can be used on DSA for the parallel
> > vacuum.
> >
> > I've attached the patch I used for the experiment that adds an xx_vacuum
> > GUC parameter that controls the method of recording TIDs. Please note
> > that it doesn't support parallel vacuum.
>
> How do you want to proceed here?  The approach of writing a wrapper for
> bsearch with bound check sounded like a good start.  All the other ideas
> discussed here seem larger projects and would probably be out of scope
> of this commit fest.

Agreed. bsearch with bound check showed a reasonable improvement in my
evaluation in terms of performance. Regarding memory efficiency, we
can experiment with other methods later.

I've attached the patch that adds a bound check for encoded
ItemPointers before bsearch() in lazy_tid_reaped() and inlines the
function.

Regards,

--
Masahiko Sawada
EDB:  https://www.enterprisedb.com/


bound_check_lazy_vacuum.patch
Description: Binary data


Re: Boundary value check in lazy_tid_reaped()

2021-01-19 Thread Peter Eisentraut

On 2020-10-30 02:43, Masahiko Sawada wrote:

Using the integer set is very memory efficient (5MB vs. 114MB in the
base case) and there is no 1GB limitation. Looking at the execution
time, I had expected that using the integer set would be slower at
recording TIDs than using the simple array, but in this experiment there
is no big difference among the methods. Perhaps the result will vary if
vacuum needs to record many more dead tuple TIDs. From the results, I
can see a good improvement in the integer set case, and it is probably
a good start, but if we want to use it also for lazy vacuum, we will
need to improve it so that it can be used on DSA for the parallel
vacuum.

I've attached the patch I used for the experiment that adds an xx_vacuum
GUC parameter that controls the method of recording TIDs. Please note
that it doesn't support parallel vacuum.


How do you want to proceed here?  The approach of writing a wrapper for 
bsearch with bound check sounded like a good start.  All the other ideas 
discussed here seem larger projects and would probably be out of scope 
of this commit fest.





Re: Boundary value check in lazy_tid_reaped()

2020-10-29 Thread Masahiko Sawada
On Tue, 1 Sep 2020 at 05:56, Peter Geoghegan  wrote:
>
> On Mon, Aug 31, 2020 at 12:22 PM Thomas Munro  wrote:
> > On Sun, Aug 30, 2020 at 11:08 PM Masahiko Sawada
> >  wrote:
> > > So my proposal is to add a boundary value check in lazy_tid_reaped()
> > > before executing bsearch(3). This will help when index vacuum happens
> > > multiple times or when garbage tuples are concentrated in a narrow
> > > range.
> >
> > Makes sense if it's often out of range.
>
> I also think this is a good idea. But I also wonder if it goes far
> enough. Only one or two dead TIDs in inconvenient heap pages can
> completely defeat the optimization.
>
> A related problem with the TID list is that it is very space
> inefficient. It would be nice to fix that problem at some point. We
> could make multiple index scans by VACUUM operations much rarer if we
> tried. But how can we do all of this together?
>
> I wonder if Roaring bitmaps would work well for this:
>
> https://arxiv.org/abs/1709.07821
>
> This data structure looks like it might work well in lazy_tid_reaped()
> (for the TID array), because it offers effective bitmap compression
> without compromising on binary search speed. Of course, you'd have to
> encode TIDs as bitmaps to make use of the data structure, much like
> tidbitmap.c. I imagine that the Roaring bitmap indirection would be
> very CPU cache efficient in cases that are similar to the cases
> Sawada-san is worried about, but a bit more complicated.
>
> VACUUM would be able to make the summarizing information for the TID
> bitmap resident in CPU cache. Only this well-cached summarizing
> information (the top-level bitmap range indirection) will need to be
> accessed by most individual calls to a Roaring-bitmap-aware
> lazy_tid_reaped() that return false (i.e. calls that say "don't kill
> this TID, it's alive"). These performance characteristics seem likely
> when a certain amount of clustering of dead tuples takes place in the
> heap. I bet having completely random positions for dead TIDs almost
> never happens -- *some* clustering is practically certain to take
> place, even without natural locality in the data (which is itself very
> common). Think about how opportunistic pruning accumulates many
> LP_DEAD items in heap pages.
>
> There is a reference Roaring bitmap implementation in C, so it's
> probably not that hard to experimentally determine how well it would
> work:
>
> https://github.com/RoaringBitmap/CRoaring
>
> Lots of open source projects already use it, so there are no problems
> with patents. Seems worth investigating. (I also wonder if we could
> use it for clog.)

Very interesting.

Before getting into the CRoaring bitmap, I've done some experiments with
three different methods of recording dead tuple TIDs: array,
array-minmax, and integer set.

'array' is the current method lazy vacuum uses. It just adds dead
tuple TIDs to the simple array of ItemPointerData.
'array-minmax' is the same as the 'array' method except that it checks
the min and max boundaries when deleting index dead tuples (i.e., in
lazy_tid_reaped()).
'intset' uses the integer set (src/backend/lib/integerset.c) to record
dead tuple TIDs. It's an in-memory data structure for holding 64-bit
integers.
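
For reference, here is roughly how the 'intset' method records and probes
dead TIDs, assuming the integerset.c API (intset_create /
intset_add_member / intset_is_member) and the int64 TID encoding
discussed upthread; integerset requires members to be added in ascending
order, which matches how lazy vacuum collects dead TIDs in heap order.
The function names around it are made up:

#include "postgres.h"
#include "lib/integerset.h"
#include "storage/itemptr.h"

static IntegerSet *dead_tuples = NULL;

/* encode a TID into a uint64, as (block << 16) | offset */
static inline uint64
tid_to_uint64(ItemPointer tid)
{
	return ((uint64) ItemPointerGetBlockNumber(tid) << 16) |
		ItemPointerGetOffsetNumber(tid);
}

/* record a dead tuple while scanning the heap (ascending TID order) */
static void
record_dead_tuple(ItemPointer tid)
{
	if (dead_tuples == NULL)
		dead_tuples = intset_create();
	intset_add_member(dead_tuples, tid_to_uint64(tid));
}

/* the intset flavour of lazy_tid_reaped(): is this index TID dead? */
static bool
tid_reaped_intset(ItemPointer tid)
{
	return intset_is_member(dead_tuples, tid_to_uint64(tid));
}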

In the experiments, I created a 4GB table and made 20% of the total
tuples dirty in all test cases while changing the distribution of
where the dead tuples exist within the table. The table has only one
index, so I didn't use parallel vacuum. I set maintenance_work_mem
high enough that the index scan runs only once. Here are the results,
showing the "total execution time in seconds (heap scan time / index
vacuum time / table vacuum time / memory usage in MB)". The numbers
are rounded down.

1. Updated the table evenly (every block has at least one dead tuple).
array : 22 (8/12/2/114)
array-minmax : 20 (8/11/2/114)
intset : 17 (7/8/1/19)

2. Updated the middle of the table.
array : 25 (6/19/0/114)
array-minmax : 17 (6/11/0/114)
intset : 17 (6/11/0/7)

3. Updated the tail of the table.
array : 25 (6/19/0/114)
array-minmax : 18 (6/11/0/114)
intset : 18 (6/11/0/5)

4. Updated both the beginning and the tail of the table.
array : 25 (6/19/0/114)
array-minmax : 23 (6/17/0/114)
intset : 21 (6/14/0/6)

The memory usage is calculated as (# of dead tuples *
sizeof(ItemPointerData)) in both the 'array' and 'array-minmax' cases,
although the actual amount of palloc'd memory is different, and by
intset_memory_usage() in the 'intset' case.

Using the integer set is very memory efficient (5MB vs. 114MB in the
base case) and there is no 1GB limitation. Looking at the execution
time, I had expected that using the integer set would be slower at
recording TIDs than using the simple array, but in this experiment there
is no big difference among the methods. Perhaps the result will vary if
vacuum needs to record many more dead tuple TIDs. From the results, I
can see a good improvement in the integer set case, and it is probably
a good start, but if we want to use it also for lazy vacuum, we will
need to improve it so that it can be used on DSA for the parallel
vacuum.

Re: Boundary value check in lazy_tid_reaped()

2020-09-10 Thread Masahiko Sawada
On Tue, 1 Sep 2020 at 08:01, Peter Geoghegan  wrote:
>
> On Mon, Aug 31, 2020 at 1:56 PM Peter Geoghegan  wrote:
> > I wonder if Roaring bitmaps would work well for this:
> >
> > https://arxiv.org/abs/1709.07821
>
> Alternatively, perhaps we could use a negative cache of heap blocks
> that have no tuples to kill at all. Maybe just a simple array whose
> elements are BlockNumber pairs. Each pair represents a range of blocks
> known to contain heap pages that *don't* have any TIDs for VACUUM to
> kill. The array could be size limited to 8KB, allowing it to be
> accessed in L1 cache throughout vacuuming. When the representation
> cannot fit in 8KB, maybe we disable the negative cache for the rest of
> the VACUUM operation.
>
> This is a bit like Masahiko's min_blkno + max_blkno idea, except:
> 1) it's a negative cache, and 2) there are perhaps as many as 1024
> min/max pairs -- not just one.

Interesting idea. Maybe we need one more bsearch() for the min/max
pairs array when a TID should be killed?

>
> It's pretty clear that more than 90% of all calls to lazy_tid_reaped()
> return false unless vacuum is running behind (false means "don't kill
> this TID, it's alive"). But it could be much more than 90%. This may
> be because autovacuum is configured to run more aggressively than the
> default settings allow. But it may also happen when LP_DEAD killing in
> indexes works well enough to do most of the cleanup needed in
> practice. I bet pgbench finds that ~99% of all calls to
> lazy_tid_reaped() that take place during index vacuuming find that the
> TID doesn't need to be killed. So negative caching could really help.

I agree that lazy_tid_reaped() returns false for most checks when
autovacuum is not running behind. But I'm a bit concerned that this
instead penalizes the case where vacuum needs to kill many TIDs and
uses the min/max filter, because it then needs to call bsearch()
twice. I think that this is the case where autovacuum is running
behind and the user wants to complete the vacuum as soon as possible.
Since I expect that filtering with the min/max pairs array can be done
in a very short time, it might not be a problem, but I think it's
worth benchmarking the overhead in the worst case. Or I guess we can
use a bloom filter for this purpose instead.
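
For concreteness, here is a minimal sketch of the bloom filter variant,
assuming PostgreSQL's lib/bloomfilter.h API (bloom_create /
bloom_add_element / bloom_lacks_element); the sizing numbers are
placeholders, and the helper names are made up:

#include "postgres.h"
#include "lib/bloomfilter.h"
#include "storage/itemptr.h"

static bloom_filter *dead_filter = NULL;

/* build the filter while collecting dead TIDs during the heap scan */
static void
filter_add_dead_tuple(ItemPointer tid, int64 est_dead_tuples)
{
	if (dead_filter == NULL)
		dead_filter = bloom_create(est_dead_tuples, 4096 /* KB */, 0);
	bloom_add_element(dead_filter, (unsigned char *) tid,
					  sizeof(ItemPointerData));
}

/*
 * Cheap pre-check in front of the bsearch() in lazy_tid_reaped():
 * bloom_lacks_element() returning true means the TID is definitely not
 * in the dead-TID array, so the binary search can be skipped.
 */
static bool
tid_possibly_dead(ItemPointer tid)
{
	return !bloom_lacks_element(dead_filter, (unsigned char *) tid,
								sizeof(ItemPointerData));
}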

Also, I'm concerned that 1024 min/max pairs might not be enough in
practice, especially for very large tables.

Regards,

--
Masahiko Sawada
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Boundary value check in lazy_tid_reaped()

2020-09-08 Thread Peter Geoghegan
On Tue, Sep 8, 2020 at 1:23 AM Masahiko Sawada
 wrote:
> > > I wonder if you would also see a speed-up with a bsearch() replacement
> > > that is inlineable, so it can inline the comparator (instead of
> > > calling it through a function pointer).  I wonder if something more
> > > like (lblk << 32 | loff) - (rblk << 32 | roff) would go faster than
> > > the branchy comparator.
> >
> > Erm, of course that expression won't work... should be << 16, but
> > even then it would only work with a bsearch that uses int64_t
> > comparators, so I take that part back.
>
> Yeah, it seems worth benchmarking the speed-up with inlining.
> I'll do some performance tests with/without inlining on top of
> checking boundary values.

It sounds like Thomas was talking about something like
itemptr_encode() + itemptr_decode(). In case you didn't know, we
actually do something like this for the TID tuplesort used for CREATE
INDEX CONCURRENTLY.


-- 
Peter Geoghegan




Re: Boundary value check in lazy_tid_reaped()

2020-09-08 Thread Masahiko Sawada
On Tue, 1 Sep 2020 at 04:44, Thomas Munro  wrote:
>
> On Tue, Sep 1, 2020 at 7:21 AM Thomas Munro  wrote:
> > On Sun, Aug 30, 2020 at 11:08 PM Masahiko Sawada
> >  wrote:
> > > So my proposal is to add a boundary value check in lazy_tid_reaped()
> > > before executing bsearch(3). This will help when index vacuum happens
> > > multiple times or when garbage tuples are concentrated in a narrow
> > > range.
> >
> > Makes sense if it's often out of range.
>
> ... though I'm not sure why you need to add extra members to do it?

Indeed. We can use the first and last elements of the itemptrs array.
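
To make that concrete, here is a minimal sketch of the shape of the
check (illustrative struct and function names, not the attached patch
itself):

#include "postgres.h"
#include "storage/itemptr.h"

/* simplified stand-in for lazy vacuum's dead-TID array */
typedef struct DeadTuples
{
	int			num_tuples;
	ItemPointerData itemptrs[FLEXIBLE_ARRAY_MEMBER];	/* sorted */
} DeadTuples;

static int
cmp_itemptr(const void *left, const void *right)
{
	return ItemPointerCompare((ItemPointer) left, (ItemPointer) right);
}

static bool
tid_reaped(ItemPointer itemptr, DeadTuples *dead_tuples)
{
	if (dead_tuples->num_tuples == 0)
		return false;

	/* boundary check: anything outside [first, last] cannot be a member */
	if (ItemPointerCompare(itemptr, &dead_tuples->itemptrs[0]) < 0 ||
		ItemPointerCompare(itemptr,
				   &dead_tuples->itemptrs[dead_tuples->num_tuples - 1]) > 0)
		return false;

	return bsearch(itemptr, dead_tuples->itemptrs, dead_tuples->num_tuples,
				   sizeof(ItemPointerData), cmp_itemptr) != NULL;
}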

>
> > > I thought that we could have a generic function wrapping bsearch(3) that
> > > does the boundary value checks and then calls bsearch(3), so that we can
> > > use it in other similar places as well. But the attached patch doesn't
> > > do that, as I'd like to hear opinions on the proposal first.
> >
> > I wonder if you would also see a speed-up with a bsearch() replacement
> > that is inlineable, so it can inline the comparator (instead of
> > calling it through a function pointer).  I wonder if something more
> > like (lblk << 32 | loff) - (rblk << 32 | roff) would go faster than
> > the branchy comparator.
>
> Erm, of course that expression won't work... should be << 16, but
> even then it would only work with a bsearch that uses int64_t
> comparators, so I take that part back.

Yeah, it seems worth benchmarking the speed-up with inlining.
I'll do some performance tests with/without inlining on top of
checking boundary values.

Regards,

--
Masahiko Sawada
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Boundary value check in lazy_tid_reaped()

2020-08-31 Thread Peter Geoghegan
On Mon, Aug 31, 2020 at 1:56 PM Peter Geoghegan  wrote:
> I wonder if Roaring bitmaps would work well for this:
>
> https://arxiv.org/abs/1709.07821

Alternatively, perhaps we could use a negative cache of heap blocks
that have no tuples to kill at all. Maybe just a simple array whose
elements are BlockNumber pairs. Each pair represents a range of blocks
known to contain heap pages that *don't* have any TIDs for VACUUM to
kill. The array could be size limited to 8KB, allowing it to be
accessed in L1 cache throughout vacuuming. When the representation
cannot fit in 8KB, maybe we disable the negative cache for the rest of
the VACUUM operation.

This is a bit like Masahiko's min_blkno + max_blkno idea, except:
1) it's a negative cache, and 2) there are perhaps as many as 1024
min/max pairs -- not just one.
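
A rough sketch of what that could look like (all names hypothetical):
the ranges are kept sorted, so a probe is one small binary search over
at most 1024 pairs, all of it L1-resident.

#include "postgres.h"
#include "storage/block.h"

typedef struct BlockRange
{
	BlockNumber first;			/* inclusive */
	BlockNumber last;			/* inclusive */
} BlockRange;

#define NEGCACHE_ENTRIES (8192 / sizeof(BlockRange))	/* 1024 pairs */

typedef struct NegCache
{
	int			nranges;		/* -1 once the cache has been disabled */
	BlockRange	ranges[NEGCACHE_ENTRIES];
} NegCache;

/* true if blkno is known to contain nothing for VACUUM to kill */
static bool
negcache_covers(const NegCache *cache, BlockNumber blkno)
{
	int			lo,
				hi;

	if (cache->nranges < 0)
		return false;			/* cache overflowed; fall back to bsearch */

	lo = 0;
	hi = cache->nranges;
	while (lo < hi)
	{
		int			mid = (lo + hi) / 2;

		if (blkno < cache->ranges[mid].first)
			hi = mid;
		else if (blkno > cache->ranges[mid].last)
			lo = mid + 1;
		else
			return true;		/* blkno falls inside a clean range */
	}
	return false;
}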

It's pretty clear that more than 90% of all calls to lazy_tid_reaped()
return false unless vacuum is running behind (false means "don't kill
this TID, it's alive"). But it could be much more than 90%. This may
be because autovacuum is configured to run more aggressively than the
default settings allow. But it may also happen when LP_DEAD killing in
indexes works well enough to do most of the cleanup needed in
practice. I bet pgbench finds that ~99% of all calls to
lazy_tid_reaped() that take place during index vacuuming find that the
TID doesn't need to be killed. So negative caching could really help.

-- 
Peter Geoghegan




Re: Boundary value check in lazy_tid_reaped()

2020-08-31 Thread Peter Geoghegan
On Mon, Aug 31, 2020 at 12:22 PM Thomas Munro  wrote:
> On Sun, Aug 30, 2020 at 11:08 PM Masahiko Sawada
>  wrote:
> > So my proposal is to add a boundary value check in lazy_tid_reaped()
> > before executing bsearch(3). This will help when index vacuum happens
> > multiple times or when garbage tuples are concentrated in a narrow
> > range.
>
> Makes sense if it's often out of range.

I also think this is a good idea. But I also wonder if it goes far
enough. Only one or two dead TIDs in inconvenient heap pages can
completely defeat the optimization.

A related problem with the TID list is that it is very space
inefficient. It would be nice to fix that problem at some point. We
could make multiple index scans by VACUUM operations much rarer if we
tried. But how can we do all of this together?

I wonder if Roaring bitmaps would work well for this:

https://arxiv.org/abs/1709.07821

This data structure looks like it might work well in lazy_tid_reaped()
(for the TID array), because it offers effective bitmap compression
without compromising on binary search speed. Of course, you'd have to
encode TIDs as bitmaps to make use of the data structure, much like
tidbitmap.c. I imagine that the Roaring bitmap indirection would be
very CPU cache efficient in cases that are similar to the cases
Sawada-san is worried about, but a bit more complicated.

VACUUM would be able to make the summarizing information for the TID
bitmap resident in CPU cache. Only this well-cached summarizing
information (the top-level bitmap range indirection) will need to be
accessed by most individual calls to a Roaring-bitmap-aware
lazy_tid_reaped() that return false (i.e. calls that say "don't kill
this TID, it's alive"). These performance characteristics seem likely
when a certain amount of clustering of dead tuples takes place in the
heap. I bet having completely random positions for dead TIDs almost
never happens -- *some* clustering is practically certain to take
place, even without natural locality in the data (which is itself very
common). Think about how opportunistic pruning accumulates many
LP_DEAD items in heap pages.

There is a reference Roaring bitmap implementation in C, so it's
probably not that hard to experimentally determine how well it would
work:

https://github.com/RoaringBitmap/CRoaring

Lots of open source projects already use it, so there are no problems
with patents. Seems worth investigating. (I also wonder if we could
use it for clog.)
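
As a taste of the API, here is a toy experiment against the reference
implementation (assuming its standard roaring.h entry points).  Note
that roaring_bitmap_t holds 32-bit keys, so a real dead-TID store would
still need a scheme for the 48-bit block+offset space -- for example,
one container per heap block, much as tidbitmap.c organizes things:

#include <roaring/roaring.h>
#include <stdio.h>

int
main(void)
{
	roaring_bitmap_t *dead = roaring_bitmap_create();

	/* offsets of dead line pointers on some heap page */
	roaring_bitmap_add(dead, 3);
	roaring_bitmap_add(dead, 4);
	roaring_bitmap_add(dead, 200);

	printf("offset 4 dead? %d\n", roaring_bitmap_contains(dead, 4));
	printf("offset 5 dead? %d\n", roaring_bitmap_contains(dead, 5));
	printf("serialized bytes: %zu\n", roaring_bitmap_size_in_bytes(dead));

	roaring_bitmap_free(dead);
	return 0;
}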

-- 
Peter Geoghegan




Re: Boundary value check in lazy_tid_reaped()

2020-08-31 Thread Thomas Munro
On Tue, Sep 1, 2020 at 7:21 AM Thomas Munro  wrote:
> On Sun, Aug 30, 2020 at 11:08 PM Masahiko Sawada
>  wrote:
> > So my proposal is to add a boundary value check in lazy_tid_reaped()
> > before executing bsearch(3). This will help when index vacuum happens
> > multiple times or when garbage tuples are concentrated in a narrow
> > range.
>
> Makes sense if it's often out of range.

... though I'm not sure why you need to add extra members to do it?

> > I thought that we could have a generic function wrapping bsearch(3) that
> > does the boundary value checks and then calls bsearch(3), so that we can
> > use it in other similar places as well. But the attached patch doesn't do
> > that, as I'd like to hear opinions on the proposal first.
>
> I wonder if you would also see a speed-up with a bsearch() replacement
> that is inlineable, so it can inline the comparator (instead of
> calling it through a function pointer).  I wonder if something more
> like (lblk << 32 | loff) - (rblk << 32 | roff) would go faster than
> the branchy comparator.

Erm, of course that expression won't work... should be << 16, but
even then it would only work with a bsearch that uses int64_t
comparators, so I take that part back.




Re: Boundary value check in lazy_tid_reaped()

2020-08-31 Thread Thomas Munro
On Sun, Aug 30, 2020 at 11:08 PM Masahiko Sawada
 wrote:
> So my proposal is to add a boundary value check in lazy_tid_reaped()
> before executing bsearch(3). This will help when index vacuum happens
> multiple times or when garbage tuples are concentrated in a narrow
> range.

Makes sense if it's often out of range.

> I thought that we could have a generic function wrapping bsearch(3) that
> does the boundary value checks and then calls bsearch(3), so that we can
> use it in other similar places as well. But the attached patch doesn't do
> that, as I'd like to hear opinions on the proposal first.

I wonder if you would also see a speed-up with a bsearch() replacement
that is inlineable, so it can inline the comparator (instead of
calling it through a function pointer).  I wonder if something more
like (lblk << 32 | loff) - (rblk << 32 | roff) would go faster than
the branchy comparator.
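
To spell out the corrected idea, here is a hedged sketch (hypothetical
names): encode each TID into an int64 as (block << 16) | offset and
binary-search with a comparison the compiler can inline, instead of
calling a comparator through a function pointer as bsearch(3) must.

#include "postgres.h"
#include "storage/itemptr.h"

static inline int64
tid_encode(ItemPointer tid)
{
	return ((int64) ItemPointerGetBlockNumber(tid) << 16) |
		ItemPointerGetOffsetNumber(tid);
}

/* bsearch(3) replacement whose comparison the compiler can inline */
static inline bool
bsearch_tid_encoded(const int64 *arr, int n, ItemPointer tid)
{
	int64		key = tid_encode(tid);
	int			lo = 0,
				hi = n;			/* half-open range [lo, hi) */

	while (lo < hi)
	{
		int			mid = lo + (hi - lo) / 2;

		if (arr[mid] == key)
			return true;
		else if (arr[mid] < key)
			lo = mid + 1;
		else
			hi = mid;
	}
	return false;
}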