On 9/6/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> > When I suggested that we get rid of the LP_DELETE flag for heap tuples,
> > the tuple-level fragmentation and all that, and just take the vacuum
> > lock and call PageRepairFragmentation, I was thinking that we'd do it in
> > heap_update and only when we run out of space on the page. But as Greg
> > said, it doesn't work because you're already holding a reference to at
> > least one tuple on the page, the one you're updating, by the time you
> get to heap_update. That's why I put the pruning code in heap_fetch
> > instead. Yes, though the amortized cost is the same, it does push the
> > pruning work to the foreground query path.
>
> The amortized cost is only "the same" if every heap_fetch is associated
> with a heap update.  I feel pretty urgently unhappy about this choice.
> Have you tested the impact of the patch on read-mostly workloads?
For read-mostly workloads, only the first SELECT after an UPDATE
would trigger pruning/defragmentation. heap_page_prune_defrag()
would be a no-op for subsequent SELECTs, since PageIsPrunable()
would return false until the next UPDATE.
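
To make that gating concrete, here is a standalone toy sketch in C.
It is not the patch code: the "prunable" flag and the helpers below
merely stand in for PageIsPrunable(), heap_page_prune_defrag() and
PageRepairFragmentation(), and the locking is elided.

#include <stdbool.h>
#include <stdio.h>

/* Toy model of a heap page, reduced to the two fields that matter
 * for the argument above. */
typedef struct Page
{
    bool prunable;      /* hint set by UPDATE, models PageIsPrunable() */
    int  dead_items;    /* dead tuples awaiting defragmentation */
} Page;

/* Stand-in for heap_page_prune_defrag(): does work only when the
 * hint says there may be something to prune, so back-to-back
 * SELECTs fall through the fast path. */
static bool
prune_defrag(Page *page)
{
    if (!page->prunable)
        return false;           /* fast path: nothing to do */
    /* (the real code would need the vacuum lock here) */
    page->dead_items = 0;       /* models PageRepairFragmentation() */
    page->prunable = false;     /* clear hint until the next UPDATE */
    return true;
}

/* An UPDATE leaves a dead tuple behind and sets the hint. */
static void
update(Page *page)
{
    page->dead_items++;
    page->prunable = true;
}

int
main(void)
{
    Page page = {false, 0};

    update(&page);
    printf("SELECT 1 pruned: %d\n", prune_defrag(&page));  /* 1: does the work */
    printf("SELECT 2 pruned: %d\n", prune_defrag(&page));  /* 0: no-op */
    update(&page);
    printf("SELECT 3 pruned: %d\n", prune_defrag(&page));  /* 1: prunable again */
    return 0;
}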

I think Heikki's recent test confirms this.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com
