I propose we introduce the concept of "indirect indexes".  I have a toy
implementation and before I go further with it, I'd like this assembly's
input on the general direction.

Indirect indexes are similar to regular indexes, except that instead of
carrying a heap TID as payload, they carry the value of the table's
primary key.  Because this is laid out on top of existing index support
code, values indexed by the PK can only be six bytes long (the length of
ItemPointerData); in other words, 281,474,976,710,656 rows are
supported, which should be sufficient for most use cases.[1]
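To visualize the six-byte constraint, here is a small sketch (plain Python; the helper names are mine, not from the patch) of packing a PK value into a payload the size of ItemPointerData:

```python
# Illustrative only: an indirect index would store the PK value in the
# six bytes normally occupied by ItemPointerData, so the PK must fit
# in 48 bits (2^48 = 281,474,976,710,656 distinct values).

def pack_pk(pk: int) -> bytes:
    """Encode a PK value into a six-byte payload."""
    if not 0 <= pk < 1 << 48:
        raise ValueError("PK value does not fit in six bytes")
    return pk.to_bytes(6, "big")

def unpack_pk(payload: bytes) -> int:
    """Decode the six-byte payload back into a PK value."""
    return int.from_bytes(payload, "big")
```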

A new flag is added to the index AM routine, indicating whether it can
support indirect indexes.  Initially, only the b-tree AM would support
indirect indexes, but I plan to develop support for GIN indirect soon
afterwards, which seems most valuable.

To create an indirect index, the command CREATE INDIRECT INDEX is used.
Currently this always uses the defined primary key index[2].

Implementation-wise, to find a value using an indirect index that index
is scanned first; this produces a PK value, which can be used to scan
the primary key index, the result of which is returned.
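To make the two-step lookup concrete, here is an illustrative sketch, with plain Python dicts standing in for the two B-trees (all names are mine, not from the patch):

```python
# indirect_index maps indexed value -> set of PK values;
# pk_index maps PK value -> heap tuple.

def indirect_index_scan(indirect_index, pk_index, key):
    """Find heap tuples via an indirect index: first scan the indirect
    index to obtain PK values, then look each one up in the PK index."""
    for pk in indirect_index.get(key, ()):
        yield pk_index[pk]

# Usage:
pk_index = {1: ("alice", "NY"), 2: ("bob", "NY")}
indirect = {"NY": {1, 2}}
rows = sorted(indirect_index_scan(indirect, pk_index, "NY"))
```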

There are two big advantages to indirect indexes, both of which are
related to UPDATE's "write amplification":

1. UPDATE is faster.  Indirect indexes on columns that are not modified
   by the update do not need to be updated.
2. HOT can be used more frequently.  Columns indexed only by indirect
   indexes do not need to be considered for whether an update needs to
   be non-HOT, so this further limits "write amplification".

The biggest downside is that in order to find a heap tuple using the
index we need to descend two indexes (the indirect index and the PK
index) instead of one, so it's slower.  For many use cases the tradeoff
is justified.

I measured the benefits with the current prototype implementation.  In
two separate schemas, I created a pgbench_accounts table with 12
"filler" columns, and indexed them all; one schema used regular indexes,
the other used indirect indexes.  I filled them both to the equivalent
of scale 50, which results in a table of some 2171 MB (the 12 indexes
are 282 MB each, and the PK index is 107 MB).  I then ran a pgbench with
a custom script that updates a random one of those columns and leaves
the others alone, on both schemas (not simultaneously).  I ran 100k
updates for each case, 5 times:

  method   │   TPS: min / avg (stddev) / max   │       Duration: min / avg / max         │ avg_wal
 direct    │  601.2 / 1029.9 ( 371.9) / 1520.9 │ 00:01:05.76 / 00:01:48.58 / 00:02:46.39 │ 4841 MB
 indirect  │ 2165.1 / 3081.6 ( 574.8) / 3616.4 │ 00:00:27.66 / 00:00:33.56 / 00:00:46.2  │ 1194 MB
(2 rows)

This is a pretty small test (not long enough for autovacuum to trigger
decently) but I think this should be compelling enough to present the
idea.

Please discuss.

Implementation notes:

Executor-wise, we could have a distinct IndirectIndexScan node, or we
could just hide the second index scan inside a regular IndexScan.  I
think from a cleanliness POV there is no reason to have a separate node;
efficiency-wise I think a separate node leads to fewer branches in the
code.  (In my toy patch I actually have the second indexscan hidden
inside a separate "ibtree" AM; not what I really propose for commit.)
Additionally, the executor will have to keep track of the values in the
PK index so that they can be passed down on insertion to each indirect
index.

Planner-wise, I don't think we need to introduce a distinct indirect
index Path.  We can just let the cost estimator attach the true cost of
the two scans to a regular index scan path, and the correct executor
node is produced later if that index is chosen.
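The costing idea could be sketched like this (an assumed formula for illustration only, not the actual costsize.c arithmetic):

```python
def indirect_scan_cost(indirect_descent_cost, expected_matches,
                       pk_descent_cost):
    """Attach the true cost of both scans to the regular index path:
    one scan of the indirect index, plus one PK-index descent per
    matching tuple it returns."""
    return indirect_descent_cost + expected_matches * pk_descent_cost
```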

In relcache we'll need an additional bitmapset of columns indexed by
indirect indexes.  This is used so that heap_update can return an output
bitmapset of such columns that were not modified (this is computed by
HeapSatisfiesHOTandKeyUpdate).  The executor uses this to know when to
skip updating each indirect index.
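In Python-set terms (sets standing in for bitmapsets; the names are hypothetical), the executor's decision would look roughly like:

```python
# 'modified_cols' would come from heap_update's output bitmapset
# (computed by HeapSatisfiesHOTandKeyUpdate); each index carries the
# set of column numbers it covers and an 'indirect' flag.

def indexes_to_update(indexes, modified_cols):
    """Return the indexes that must be updated: every direct index,
    plus indirect indexes whose columns intersect the modified set."""
    return [ix for ix in indexes
            if not ix["indirect"] or ix["cols"] & modified_cols]
```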

Vacuuming presents an additional challenge: in order to remove index
items from an indirect index, it's critical to scan the PK index first
and collect the PK values that are being removed.  Then scan the
indirect index and remove any items that match the PK items removed.
This is a bit problematic because of the additional memory needed to 
store the array of PK values.  I haven't implemented this yet.
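A rough sketch of that two-phase removal, with plain dicts and sets standing in for the indexes (not actual vacuum code):

```python
def vacuum_indirect(pk_index, indirect_index, dead_tids):
    """Two-phase removal of dead entries from an indirect index."""
    # Phase 1: scan the PK index, collecting the PK values whose heap
    # TIDs are being removed (this set is the extra memory cost).
    dead_pks = {pk for pk, tid in pk_index.items() if tid in dead_tids}
    # Phase 2: scan the indirect index and drop matching PK payloads.
    for key in list(indirect_index):
        indirect_index[key] -= dead_pks
        if not indirect_index[key]:
            del indirect_index[key]
    return dead_pks
```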

Items I haven't thought about yet:
* UNIQUE INDIRECT?  I think these should work, but require some more
  thought.
* Deferred unique indexes?  See unique_key_recheck.

[1]  Eventually we can expand this to allow for "normal" datatypes, say
bigint, but that's likely to require a much bigger patch in order to
change IndexTuple to support it.  I would defer that project to a later
time.

[2] It is possible to extend the grammar to allow other UNIQUE indexes
to be used, if they are on top of NOT NULL columns.  This would allow
existing production databases to be extended with a new column.  A
proposed syntax is
  CREATE INDIRECT INDEX idx ON tab (a, b, c)
  REFERENCES some_unique_index
  [optional WHERE clause] ;
which Bison accepts.  I propose not to implement this yet.  However this
is an important item because it allows existing databases to simply add
a UNIQUE NOT NULL column to their existing big tables to take advantage
of the feature, without requiring a lengthy dump/reload of tables that
currently only have larger keys.

Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
