On 7/18/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:


Here's what I think we should do to the HOT patch:



I am all for simplifying the code. That would not only make it less
buggy but also improve its maintainability. But we would also need to
repeat the tests, and run new ones, to make sure that the simplification
does not come at a significant loss of the performance gain.


1. Get rid of row-level fragmentation and handling dealing with
LP_DELETEd line pointers. Instead, take a vacuum lock opportunistically,
and defrag pages using the normal PageRepairFragmentation function. I'm
not sure where exactly we would do the pruning and where we would call
PageRepairFragmentation. We could do it in heap_release_fetch, but we
need some logic to decide when it's helpful and when it's a waste of time.



I think it's worth trying this simplification, though I am not sure
this is the most complicated part of the code. Nevertheless, the less
complexity, the better.

One thing that strikes me is that we are assuming HOT mainly benefits
large tables. That is mostly true, but I wonder how this
simplification would affect HOT's benefit on small tables.

I assume you are suggesting *not* to call PageRepairFragmentation
in heap_update, given that it holds a reference to the old tuple.


2. Get rid of separate handling and WAL record types for pruning aborted
tuples. ISTM that's no longer needed with the simplified pruning method.



Ok. If we can do that, it's well and good.


3. Currently, the patch has a separate code path for pruning pages in
VACUUM FULL, which removes any DEAD tuples in the middle of chains, and
fixes the ctid/xmin of the previous/next tuple to keep the chain valid.
Instead, we should just prune all the preceding tuples in the chain,
since they're in fact dead as well. Our simplistic check with OldestXmin
just isn't enough to notice that. I think we can share most of the code
between normal pruning and VACUUM FULL, which is good because the VACUUM
FULL codepath is used very seldom, so if there's any bugs in there they
might go unnoticed for a long time.



Yes, we should certainly do that. We have discussed this before on the
list and concluded that any tuples preceding a DEAD tuple in an update
chain are definitely DEAD as well.


4. Write only one WAL record per pruned page, instead of one per update
chain.



Ok. Currently the page pruning just prunes all the individual chains
on the page, so the code reuses the WAL logging for per-chain pruning
(which we need anyway for stand-alone chain pruning).
Are you suggesting we should move away from chain pruning during
index fetches?


I've done some experimenting on those items, producing several badly
broken versions of the patch partly implementing those ideas. It looks
like the patch size will go down from ~240 kB to ~210 kB, and more
importantly, there will be fewer new concepts and states a tuple can be
in, and fewer WAL record types.



That's very good. I would appreciate it if you continue the refactoring;
we can coordinate so that our work doesn't conflict.


I know we've been running DBT-2 tests with the patch, and that it's
effective in reducing the need to vacuum the big tables which gives
better throughput in long runs. But I also know that a lot of people are
interested in the potential to avoid CPU overhead of index inserts. We
need to run CPU bound benchmarks to measure that effect as well.



Sure. Maybe we should also measure the effects on small and large
tables, different mixes of HOT and COLD updates, etc.


Thanks,
Pavan


--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com
