> > Thanks for giving this a lot of thought. When you asked the question
> > the first time you hadn't discussed how that might work, but now we
> > have something to discuss.

My ultimate goal is to unify this effort with the application period
effort. Step 1 in that was to understand what each was doing and why
they were doing it. If you check out the other thread, you'll see a
highly similar message that I sent over there.

> There are 3 implementation routes that I see, so let me explain so
> that others can join the discussion.
>
> 1. Putting all data in one table. This makes DROP SYSTEM VERSIONING
> effectively impossible. It requires access to the table to be
> rewritten to add in historical quals for non-historical access, and
> it requires some push-ups around indexes. (The current patch adds the
> historical quals by kludging the parser, which is the wrong place,
> since it doesn't work for joins etc. However, given that issue, the
> rest seems to follow on naturally.)
>
> 2. Putting data in a side table. This makes DROP SYSTEM VERSIONING
> fairly trivial, but it complicates many DDL commands (please make a
> list?) and requires the optimizer to know about this and cater to it,
> possibly complicating plans. Neither issue is insurmountable, but it
> becomes more intrusive.
>
> The current patch could go in either of the first 2 directions with
> further work.
>
> 3. Let the Table Access Method handle it. I call this out separately
> since it avoids making changes to the rest of Postgres, which might be
> a good thing, with the right TAM implementation.

I'd like to hear more about this idea number 3. I could see value in
allowing the history table to be a foreign table, perhaps writing to
csv/parquet/whatever files, and that sort of setup could be persuasive
to a regulator who wants extra-double-secret proof that auditing
cannot be tampered with. But with that we'd have to give up the
relkind idea, which itself was going to be a cheap way to prevent
updates outside of the system triggers.
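To make that concrete, here's roughly the shape I'm picturing.
Everything below is invented for illustration (the server, table, and
column names are all made up), and file_fdw is read-only, so a real
setup would need a writable FDW for the triggers to append to, but it
gives the flavor:

    -- Purely hypothetical: the history relation lives behind an FDW
    -- instead of being an ordinary heap table.
    CREATE EXTENSION file_fdw;
    CREATE SERVER audit_files FOREIGN DATA WRAPPER file_fdw;

    CREATE FOREIGN TABLE accounts_history (
        id         integer,
        balance    numeric,
        row_start  timestamptz,
        row_end    timestamptz
    ) SERVER audit_files
      OPTIONS (filename '/audit/accounts_history.csv', format 'csv');

The system triggers would then append each expired row version there
on UPDATE/DELETE, which is exactly where the writable FDW would have
to come in.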
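While I'm sketching things, here is roughly what route 1's implicit
rewriting means in SQL terms, with invented column names. The
self-join is where adding the qual in the parser falls over:

    -- Route 1: current and historical rows share one table.
    CREATE TABLE accounts (
        id         integer,
        balance    numeric,
        row_start  timestamptz NOT NULL,
        row_end    timestamptz NOT NULL DEFAULT 'infinity'
    );

    -- A plain query...
    SELECT id, balance FROM accounts WHERE id = 42;

    -- ...has to behave as if the user had written:
    SELECT id, balance FROM accounts
    WHERE id = 42 AND row_end = 'infinity';

    -- ...and every table reference needs its own qual, so a self-join
    -- must get the rewrite on both a and b:
    SELECT a.id, b.balance
    FROM accounts a JOIN accounts b ON a.id = b.id
    WHERE a.row_end = 'infinity' AND b.row_end = 'infinity';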
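And route 2's "the optimizer has to know" point looks something like
this: a query as of a past time can't be answered from either table
alone, so it has to expand to a union (again just a sketch, reusing
the accounts layout above):

    -- Route 2: expired versions move to a separate history table.
    CREATE TABLE accounts_history (LIKE accounts);

    -- What the standard writes as
    --   SELECT ... FROM accounts FOR SYSTEM_TIME AS OF '2020-01-01'
    -- would have to be planned as roughly:
    SELECT id, balance FROM accounts
    WHERE row_start <= '2020-01-01' AND row_end > '2020-01-01'
    UNION ALL
    SELECT id, balance FROM accounts_history
    WHERE row_start <= '2020-01-01' AND row_end > '2020-01-01';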