On Wed, Sep 14, 2016 at 5:32 PM, Robert Haas <robertmh...@gmail.com> wrote:

> On Wed, Sep 14, 2016 at 5:45 AM, Pavan Deolasee
> <pavan.deola...@gmail.com> wrote:
> > Another interesting bit about these small tables is that the largest used
> > offset for these tables never went beyond 291, which is the value of
> > MaxHeapTuplesPerPage. I don't know if there is something that prevents
> > inserting more than MaxHeapTuplesPerPage offsets per heap page, and I
> > don't know at this point if this gives us an upper limit for bits per
> > page (maybe it does).
> From PageAddItemExtended:
>
>     /* Reject placing items beyond heap boundary, if heap */
>     if ((flags & PAI_IS_HEAP) != 0 && offsetNumber > MaxHeapTuplesPerPage)
>     {
>         elog(WARNING, "can't put more than MaxHeapTuplesPerPage items in a heap page");
>         return InvalidOffsetNumber;
>     }
> Also see the comment where MaxHeapTuplesPerPage is defined:
>
>  * Note: with HOT, there could theoretically be more line pointers (not actual
>  * tuples) than this on a heap page.  However we constrain the number of line
>  * pointers to this anyway, to avoid excessive line-pointer bloat and not
>  * require increases in the size of work arrays.
Ah, thanks. So MaxHeapTuplesPerPage sets the upper bound for the per-page
bitmap size. That's about 36 bytes for an 8K page. IOW, if on average there
are 6 or more dead tuples per page, the bitmap will outperform the current
representation, assuming max allocation for the bitmap. If we can use
additional estimates to restrict the size to a somewhat more conservative
value and then keep an overflow area, then the break-even probably happens
even earlier than that. I hope this gives us a good starting point, but let
me know if you think it's still a wrong approach to pursue.


 Pavan Deolasee                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services