On Thu, Jul 7, 2016 at 4:56 PM, Bill Moran <wmo...@potentialtech.com> wrote:
> Take the following table as an example:
>  size VARCHAR(255)
> );
> CREATE INDEX grue_size ON grue(size);
> Now insert approximately eleventy jillion rows, but ensure
> that there are only about 20 distinct values for size.
> Always does a seq scan on Postgres 9.5.2 on Ubuntu 14.
> (Yes, I know we're a patch behind; the upgrade is on the
> schedule.)
> I would expect it to be possible, and significantly more
> efficient, to do an index scan for that query.

Well, there are two possible ways to implement uniqueness: hashing and
grouping.  You haven't included the EXPLAIN ANALYZE output so it's
hard to be sure which sort of plan you are getting, but my guess is
that you are getting a HashAggregate.  So basically what it's doing is:

1. Read a row.
2. Check the value in the hash table to see if we've seen it already.
3. If not, add it.
4. Go to step 1.
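In other words, it's a single pass with hash-based deduplication.  A
minimal sketch of that loop (the function name and sample values here
are illustrative, not PostgreSQL internals):

```python
def hash_distinct(rows):
    """Mimic a HashAggregate: one sequential pass, dedup via a hash table."""
    seen = set()   # the hash table of values we've already encountered
    out = []
    for value in rows:         # step 1: read a row
        if value not in seen:  # step 2: probe the hash table
            seen.add(value)    # step 3: add the value if unseen
            out.append(value)
    return out                 # step 4 is the loop itself

# e.g. hash_distinct(["s", "m", "s", "l", "m", "s"]) yields ["s", "m", "l"]
```

The cost is one hash probe per row, which is why the plan's total cost
tracks the size of the table rather than the number of distinct values.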

If you've got to visit the whole table anyway, doing it in sequential
order is smart, so this plan sounds reasonably good.

The alternative worth considering is presumably something like:

-> Index Only Scan on grue_size

Scanning an entire index in order is pretty expensive, but it seems
possible that this could be faster than the Seq Scan, especially on a
table with other wide columns, because then the index might be a lot
smaller than the table.  Even if the index traversal generates some
random I/O, if it's sufficiently smaller than the table you will still
come out ahead.  I'm not positive that the planner will actually
consider this plan, but it seems like it should; you might be able to
persuade it to do so by setting enable_hashagg=false, which would let
you check whether it's actually faster.
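Concretely, the experiment would look something like this, assuming the
query in question is a plain SELECT DISTINCT on size (that query isn't
shown in the original report, so it's a guess):

```sql
-- First, see how much smaller the index is than the table.
SELECT pg_size_pretty(pg_relation_size('grue'))      AS table_size,
       pg_size_pretty(pg_relation_size('grue_size')) AS index_size;

-- Disable hash aggregation for this session only and compare plans.
SET enable_hashagg = off;
EXPLAIN ANALYZE SELECT DISTINCT size FROM grue;

-- Restore the default afterwards.
RESET enable_hashagg;
```

SET only affects the current session, so this is safe to try on a live
system.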

We're probably missing a few tricks on queries of this type. If the
index-traversal machinery had a mechanism to skip quickly to the next
distinct value, that could be used here: walk up the btree until you
find a page that contains keyspace not equal to the current key, then
walk back down until you find the first leaf page that contains such a
value.  That would potentially let you step over large chunks of the
index without actually examining all the leaf pages, which for a query
like this seems like it could be a big win.
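Until the executor can do that natively, the same skip-to-next-value
behavior can be emulated in SQL with a recursive CTE, where each
iteration performs one cheap index probe for the next distinct value
(this is the "loose indexscan" trick from the PostgreSQL wiki; it
assumes the grue_size index exists and ignores NULLs):

```sql
WITH RECURSIVE distinct_sizes AS (
    -- Anchor: the smallest value, via one index probe.
    (SELECT size FROM grue ORDER BY size LIMIT 1)
  UNION ALL
    -- Each step: probe the index for the next value above the current one.
    SELECT (SELECT size FROM grue
            WHERE size > d.size
            ORDER BY size LIMIT 1)
    FROM distinct_sizes d
    WHERE d.size IS NOT NULL
)
SELECT size FROM distinct_sizes WHERE size IS NOT NULL;
```

With ~20 distinct values this does on the order of 20 index descents
instead of scanning eleventy jillion rows, which is exactly the win the
hypothetical btree mechanism would deliver automatically.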

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)