Thomas,
> I forgot to mention that I'm running 7.4.6. The README includes the
> caveat that pgmemcache is designed for use with 8.0.
Well, you could always hire Sean to backport it.
As far as dropping/recreating triggers, there seem to be two strategies:
1. Perform the drop-import-create operation in a transaction, thereby
guaranteeing the accuracy of the counts but presumably locking the
table during the operation, which could take many minutes (up to an
hour or two) in extreme cases.
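Strategy 1 can be sketched as a single transaction. The table, trigger,
and summary-table names here are hypothetical, not from the thread:

```sql
BEGIN;

-- DROP TRIGGER takes an exclusive lock on the table, so readers and
-- writers block until COMMIT -- hence the "many minutes" concern above.
DROP TRIGGER maintain_order_count ON orders;

COPY orders FROM '/tmp/orders.dat';

-- Re-sync the summary row while the table is still locked,
-- so the maintained count is exact at commit time.
UPDATE row_counts
   SET n = (SELECT count(*) FROM orders)
 WHERE table_name = 'orders';

CREATE TRIGGER maintain_order_count
  AFTER INSERT OR DELETE ON orders
  FOR EACH ROW EXECUTE PROCEDURE update_row_count();

COMMIT;
```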
I forgot to mention that I'm running 7.4.6. The README includes the
caveat that pgmemcache is designed for use with 8.0. My instinct is to
be hesitant to use something like that in a production environment
without some confidence that people have done so with good and reliable
success or without
Thomas F. O'Connell wrote:
The problem comes in importing new data into the tables for which the
counts are maintained. The current import process does some
preprocessing and then does a COPY from the filesystem to one of the
tables on which counts are maintained. This means that for each row
being inserted by COPY, the count trigger fires.
Thomas,
> Would it be absurd to drop the triggers during import and recreate them
> afterward and update the counts in a summary update based on
> information from the import process?
That's what I'd do.
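Since the import process already knows how many rows it loaded, the
summary update can apply that delta instead of rescanning with
count(*). A minimal sketch, with hypothetical table names and a
hypothetical row count of 50000:

```sql
-- After the COPY, bump the maintained count by the number of rows
-- the import process reports it loaded, rather than recounting.
UPDATE row_counts
   SET n = n + 50000
 WHERE table_name = 'orders';
```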
Also, might I suggest storing the counts in memcached (see the pgmemcache
project on pgFoundry)?
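As I read the pgmemcache README, it exposes SQL-callable functions such
as memcache_set, memcache_get, and memcache_incr/memcache_decr; a
rough sketch of keeping the count in memcached (key name and table are
hypothetical) might look like:

```sql
-- Attach to a memcached server and seed the counter once.
SELECT memcache_server_add('localhost:11211');
SELECT memcache_set('orders_count', (SELECT count(*) FROM orders)::text);

-- Inside the insert trigger:
SELECT memcache_incr('orders_count');
-- Inside the delete trigger:
SELECT memcache_decr('orders_count');

-- The application then reads the count without touching the table:
SELECT memcache_get('orders_count');
```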
I'm involved in an implementation that does trigger-based counting as a
substitute for count(*) in real time in an application. My
trigger-based counts seem to be working fine and dramatically improve
the performance of the display of the counts in the application layer.
The problem comes in importing new data into the tables for which the
counts are maintained.
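For reference, a trigger-based count like the one described can be
sketched as follows. All names are hypothetical; the function body uses
single-quoted syntax since dollar quoting only arrived in 8.0:

```sql
-- Summary table holding one maintained count per tracked table.
CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);

-- Adjust the maintained count by +/- 1 per row changed.
CREATE FUNCTION update_row_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_RELNAME;
    ELSE
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_RELNAME;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER maintain_order_count
  AFTER INSERT OR DELETE ON orders
  FOR EACH ROW EXECUTE PROCEDURE update_row_count();
```

The application then replaces `SELECT count(*) FROM orders` with a
single-row lookup on row_counts.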