We've settled on a method for gathering raw statistics from our widely
scattered data centers: we create one sequence per event, per minute.
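If it helps to picture it, the per-minute setup on the database side
amounts to something like this sketch (the host, database, event list,
and naming scheme here are just illustrative shorthand, not our real
ones):

    #!/bin/sh
    # Create this minute's counter sequence for each tracked event.
    # All names below are made up for illustration.
    MIN=$(date -u +%Y%m%d%H%M)
    for EVT in login signup pageview; do
        ssh stats@dbhost "psql -q -d stats -c \"CREATE SEQUENCE evt_${EVT}_${MIN}\""
    done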
Each process (some lapp, some shell, some Python, some Perl, etc.) calls
a shell script, which runs psql over ssh to execute nextval('event')
against that event's sequence.
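The wrapper itself is basically a one-liner; roughly (same invented
names as above, and assuming the per-minute sequence is named by
timestamp, which is just my shorthand):

    #!/bin/sh
    # bump_event.sh <event>: count one occurrence by bumping this
    # minute's sequence over ssh (host/db names invented).
    EVT="$1"
    MIN=$(date -u +%Y%m%d%H%M)
    ssh stats@dbhost "psql -qAt -d stats -c \"SELECT nextval('evt_${EVT}_${MIN}')\""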
sequence. Periodically (every 2-10 minutes, depending on other factors)
Another process picks up the value and inserts it into a permanent home.
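The pickup step boils down to roughly this per event/minute pair
(sketch only: the permanent table event_counts is my invention, and it
glosses over minutes where nextval() was never called):

    #!/bin/sh
    # harvest.sh <event> <yyyymmddhhmm>: copy a finished minute's count
    # into permanent storage, then drop the throwaway sequence.
    EVT="$1"; MIN="$2"
    ssh stats@dbhost "psql -q -d stats -c \
      \"INSERT INTO event_counts (event, minute, hits) \
          SELECT '${EVT}', '${MIN}', last_value FROM evt_${EVT}_${MIN}; \
        DROP SEQUENCE evt_${EVT}_${MIN}\""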
We're only talking 7-10k calls per minute, but moving to this from a
query that did an UPDATE has saved a *huge* amount of overhead.
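For comparison, the old way was the obvious one, roughly this per hit
(names illustrative again):

    #!/bin/sh
    # what each event bump looked like before: increment a counter row
    ssh stats@dbhost "psql -q -d stats -c \
      \"UPDATE event_counts_current SET hits = hits + 1 WHERE event = '${1}'\""

Each of those UPDATEs leaves a dead row version behind for vacuum and
holds the row lock until its transaction ends; nextval() does neither,
which is presumably where most of the savings come from at this call
rate.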
If I needed to, a periodic dump and restore would only take a minute;
this data is highly transient. Having to do that more often than
biweekly or so would get annoying, though.
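(By dump and restore I don't mean anything fancier than, say,

    # illustrative database name
    pg_dump -Fc stats > stats.dump
    pg_restore -c -d stats stats.dump

run by hand or from cron.)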
Aside from security concerns, did we miss something? Should I be
worried we're going through ~60,000 sequences per day?
TIA,
dave