So, a detail that I hope a PostgreSQL person can answer definitively:
will wrapping the delivery into a transaction prevent rows that are
eventually deleted from even hitting the database?
Sadly, no, but wrapping things in a transaction will still help
throughput and performance. You're better off investigating a
copy-on-write methodology: a message is added to the database once,
and user accounts point to its message id. When the message is
updated, it is copied to a new message (and message id) and the
change cascades downward. This would save space and I/O.
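Roughly, something like this (the table and column names are just
made up for illustration, not DBMail's actual schema):

CREATE TABLE message (
    message_id  bigserial PRIMARY KEY,
    body        text NOT NULL
);

CREATE TABLE user_message (
    user_id     bigint NOT NULL,
    message_id  bigint NOT NULL REFERENCES message (message_id),
    PRIMARY KEY (user_id, message_id)
);

-- Copy-on-write "update" of message 1 for user 7 (assumes user 7
-- already points at message 1): clone the shared row under a new
-- message_id, then repoint only that user at the copy.  Everyone
-- else keeps sharing the original row.
BEGIN;
INSERT INTO message (body)
    SELECT body FROM message WHERE message_id = 1;
UPDATE user_message
   SET message_id = currval('message_message_id_seq')
 WHERE user_id = 7 AND message_id = 1;
COMMIT;

-sc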
The message blocks themselves aren't (well, they shouldn't be...)
copied, just the message (and physmessage?) entries. It's been a
while since I was deep in that code, though, so I don't remember
whether we ever finished getting rid of the actual messageblk copy.
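(From memory, a copy looks something along these lines; the column
names here are simplified/hypothetical, not the exact schema:)

-- Copy a message into another mailbox: duplicate only the messages
-- row, still pointing at the same physmessage, so the (large)
-- messageblks rows are shared rather than copied.
INSERT INTO messages (mailbox_idnr, physmessage_id, seen_flag)
SELECT 2, physmessage_id, seen_flag    -- 2 = hypothetical target mailbox
  FROM messages
 WHERE message_idnr = 1;               -- 1 = hypothetical source message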
Having thought about this for a min or three, I think the best solution
would be the following:
BEGIN;
CREATE TEMP TABLE newmsg (
    ... -- Whatever schema is necessary
) ON COMMIT DROP;
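-- Fill newmsg with the incoming message here (INSERTs or COPY); at
-- this point the rows exist only in the temp table.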
INSERT INTO other_tbl SELECT foo1, foo2, ${user_id} FROM newmsg;
-- repeat insert as necessary
COMMIT;
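The nice bit is that newmsg is ON COMMIT DROP: whatever gets staged
in it but never copied out simply disappears at COMMIT, without ever
touching the permanent tables.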
-sc
--
Sean Chittenden