On Jan 9, 2008, at 1:09 PM, Scott Marlowe wrote:

On Jan 9, 2008 12:20 PM, Steve Midgley <[EMAIL PROTECTED]> wrote:
This is kludgy, but you would have some kind of random number test at
the start of the trigger - if it evaluates true on roughly one of every ten
calls to the trigger (say), you'd cut your delete statement executions by
about 10x and still trim each user's set of rows fairly often. On average
you'd have ~55 rows per user, never fewer than 50, and a few outliers with
60 or 70 rows before they get trimmed back down to 50. It seems more
reliable than a cron job, and it solves your problem of an ever-growing
table. You could easily adjust the random number test later if you change
your mind about the balance between table size and the number of delete
statements down the road.
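
A rough sketch of that kind of trigger, assuming a hypothetical
user_events(id, user_id, created_at) table (the table, column, and function
names here are illustrative only, not from the original post):

CREATE OR REPLACE FUNCTION trim_user_events() RETURNS trigger AS $$
BEGIN
    -- Run the expensive DELETE on roughly 1 in 10 inserts.
    IF random() < 0.1 THEN
        -- Keep only the newest 50 rows for the user who just inserted.
        DELETE FROM user_events
        WHERE user_id = NEW.user_id
          AND id NOT IN (
              SELECT id
              FROM user_events
              WHERE user_id = NEW.user_id
              ORDER BY created_at DESC
              LIMIT 50
          );
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_user_events_trg
    AFTER INSERT ON user_events
    FOR EACH ROW EXECUTE PROCEDURE trim_user_events();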

And, if you always throw a LIMIT 50 on the end of the queries that
retrieve data, you could let it grow quite a bit more than 60 or 70...
say 200.  Then you could have it so that the random chopper function
only gets kicked off every 100th or so time.
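
Continuing the same made-up example, the read side and the relaxed trigger
threshold might look like this (again just a sketch; user_events and the
1-in-100 figure are assumptions for illustration):

-- Reads never see more than 50 rows per user, even if up to ~200 exist
-- between trims.
SELECT *
FROM user_events
WHERE user_id = $1          -- the user's id, bound as a query parameter
ORDER BY created_at DESC
LIMIT 50;

-- In the trigger sketch above, loosen the test so the DELETE runs on
-- roughly 1 in 100 inserts instead of 1 in 10:
--     IF random() < 0.01 THEN ...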

I like that idea.

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



