On Thu, Sep 15, 2016 at 7:13 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Thu, Sep 15, 2016 at 1:41 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > I think it is possible without breaking pg_upgrade, if we match all
> > items of a page at once (and save them as a local copy), rather than
> > matching item-by-item as we do now. We already do something similar for
> > btree; see the explanation of BTScanPosItem and BTScanPosData in
> > nbtree.h.
> If ever we want to sort hash buckets by TID, it would be best to do
> that in v10 since we're presumably going to be recommending a REINDEX
We are? I thought we were trying to preserve on-disk compatibility precisely so
that we didn't have to rebuild the indexes.
Is the concern that the lack of WAL logging has generated some subtle,
unrecognized on-disk corruption?
If I were using hash indexes on a production system and I experienced a
crash, I would surely reindex immediately after the crash, not wait until
the next pg_upgrade.
> But is that a good thing to do? That's a little harder to
How could we go about deciding that? Do you think anything short of coding
it up and seeing how it performs would suffice? I agree that if we want to do
it, v10 is the time. But we still have about 6 months on that.