I'm currently working on correctly flushing the
catalog/relation/smgr caches on a read-only PITR
slave during recovery. These are the things that
currently cause me headaches.

1) It seems that the btree code sends out relcache
   invalidation msgs during normal operation
   (no DDL statements are executed). This defeats any
   simple flush-all-caches-if-DDL-was-executed scheme.

2) When a full page image is written to the WAL, the
   information about which tuple was updated is lost.
   So synthesizing cache invalidation msgs from the
   WAL records would require reverse-engineering a full
   page image, which seems hard and error-prone.

3) Most cache invalidations seem to be generated by
   heap_insert (via PrepareForTupleInvalidation). Those
   seem to be reconstructable from the WAL quite easily.
   Those sent out via CacheInvalidateRelcache*, however,
   seem to leave no trace in the WAL.
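To make (2) and (3) concrete, here is a minimal model (not actual PostgreSQL code; the struct and function names are invented for illustration) of the decision the slave would face when looking at a heap-insert WAL record. Only inserts into system catalogs matter for the catalog caches, and a full page image hides the tuple identity, so synthesis fails exactly in case (2):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified view of a heap-insert WAL record. */
typedef struct HeapInsertRecord
{
    unsigned rel_oid;      /* relation the tuple was inserted into */
    bool     is_full_page; /* true if only a full page image was logged */
    unsigned offset;       /* tuple offset; unknown for full page images */
} HeapInsertRecord;

/* OIDs below FirstNormalObjectId (16384) belong to system catalogs. */
static bool
is_system_catalog(unsigned rel_oid)
{
    return rel_oid < 16384;
}

/*
 * Can the slave rebuild a targeted catcache invalidation message
 * from this record alone?
 */
static bool
can_synthesize_inval(const HeapInsertRecord *rec)
{
    if (!is_system_catalog(rec->rel_oid))
        return false;   /* user tables don't feed the catalog caches */
    if (rec->is_full_page)
        return false;   /* case (2): the tuple identity is gone */
    return true;        /* case (3), heap_insert path: reconstructable */
}
```

The point of the sketch is only that the heap_insert-driven invalidations carry enough information in the normal WAL record, while a full page image forces the slave to give up or reverse-engineer the page.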

What I'm wondering is how much performance is lost if
I just let the slave flush all its caches whenever it
replays the commit record of a transaction that executed
DDL. To me it looks like that would only seriously harm
performance if a lot of temporary tables are created on
the master. Since there seem to be quite a few people who
are unhappy with the current temptable implementation,
optimizing for that case might prove worthless if 8.4 or
8.5 changes the way that temptables are handled.
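The brute-force rule is simple enough to state as code. This is a toy model, not PostgreSQL source: the commit record's did_ddl flag and the flush counter are invented stand-ins for "the master marks DDL transactions" and "the slave flushes every cache":

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical commit record: the master sets did_ddl if the
 * transaction executed any DDL. */
typedef struct XlogCommitRecord
{
    unsigned xid;
    bool     did_ddl;
} XlogCommitRecord;

static int cache_flush_count = 0;

/* Stand-in for dropping the catalog, relation, and smgr caches. */
static void
flush_all_caches(void)
{
    cache_flush_count++;
}

/*
 * Brute-force replay rule: ignore individual invalidation messages
 * entirely; flush everything when a DDL transaction commits.
 */
static void
replay_commit(const XlogCommitRecord *rec)
{
    if (rec->did_ddl)
        flush_all_caches();
}
```

Under this rule the cost is one full cache rebuild per replayed DDL commit, which is why a workload that creates many temp tables on the master is the worst case.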

If this brute-force approach turns out to perform really
badly, does anyone see an elegant way around (2) and (3)?
(2) seems solvable by writing both logical and physical records
to the WAL - similar to what that xlog compression idea
needs (I, however, lost track of what came out of that
discussion). But (3) seems to be messy...
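A sketch of what "logical and physical" could mean for the record layout, under the assumption (mine, not established here) that the logical part is simply kept alongside the page image instead of being replaced by it:

```c
#include <assert.h>

#define BLCKSZ 8192  /* default PostgreSQL block size */

/* Hypothetical combined WAL record layout. The physical part gives
 * torn-page protection; the logical part survives the full page
 * image, so the slave can still synthesize invalidation messages. */
typedef struct CombinedWalRecord
{
    char     page_image[BLCKSZ]; /* physical: full copy of the page */
    unsigned rel_oid;            /* logical: relation affected */
    unsigned offset;             /* logical: item number of the tuple */
} CombinedWalRecord;
```

The obvious cost is a few extra bytes per full-page-image record, which is also roughly what the xlog compression discussion would need to track.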

greetings, Florian Pflug
