On Tue, Sep 6, 2016 at 3:11 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> We could get around (1) by something like Robert's idea of segmented
> allocation, but TBH I've seen nothing on this thread to make me think
> it's necessary or would even result in any performance improvement
> at all. The bigger we make that array, the worse index-cleaning
> is going to perform, and complicating the data structure will add
> another hit on top of that.
I wouldn't be so sure. I've seen cases where two binary searches were faster than a single binary search, especially on humongous arrays like this tid array, because touching fewer memory pages per lookup pays off considerably. I'd try it before giving up on the idea.

The test results (which I'll post in a second) do lend credit to your expectation that making the array bigger/more complex impacts index scan performance. It's still faster than scanning several times, though.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers