On Wed, 27 Jul 2005, Matthew Schumacher wrote:
> Then they do this to insert the token:
> INSERT INTO bayes_token (
> ) VALUES (
> ) ON DUPLICATE KEY UPDATE
> spam_count = GREATEST(spam_count + ?, 0),
> ham_count = GREATEST(ham_count + ?, 0),
> atime = GREATEST(atime, ?)
> Or update the token:
> UPDATE bayes_vars SET
> newest_token_age = GREATEST(newest_token_age, ?),
> oldest_token_age = LEAST(oldest_token_age, ?)
> WHERE id = ?
> I think the reason why the procedure was written for postgres was
> because of the greatest and least statements performing poorly.
How can they perform poorly when they are dead simple? Here are two
functions that cover the above uses of greatest and least:
CREATE FUNCTION greatest_int (integer, integer)
RETURNS integer
IMMUTABLE STRICT
AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'
LANGUAGE SQL;

CREATE FUNCTION least_int (integer, integer)
RETURNS integer
IMMUTABLE STRICT
AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'
LANGUAGE SQL;
and these should be inlined by pg and very fast to execute.
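As a quick sanity check once they're created (untested here, but the
results follow directly from the CASE expressions):

SELECT greatest_int(2, 5);   -- 5
SELECT greatest_int(-3, 0);  -- 0
SELECT least_int(2, 5);      -- 2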
I wrote a function that should do what the insert above does. The update
I've not looked at (I don't know what $token_count_update is), but the
update looks simple enough to just implement the same way in pg as in
mysql.

For the insert-or-update case you can probably use this function:
CREATE FUNCTION insert_or_update_token (xid INTEGER,
                                        xtoken BYTEA,  -- match the type of bayes_token.token
                                        xspam_count INTEGER,
                                        xham_count INTEGER,
                                        xatime INTEGER)
RETURNS VOID AS $$
BEGIN
  UPDATE bayes_token
     SET spam_count = greatest_int (spam_count + xspam_count, 0),
         ham_count  = greatest_int (ham_count + xham_count, 0),
         atime      = greatest_int (atime, xatime)
   WHERE id = xid
     AND token = xtoken;

  IF NOT FOUND THEN
    INSERT INTO bayes_token
      VALUES (xid, xtoken, xspam_count, xham_count, xatime);
  END IF;
EXCEPTION WHEN unique_violation THEN
  -- do nothing; a concurrent insert got there first
END;
$$ LANGUAGE plpgsql;
It's not really tested so I can't tell if it's faster than what you have.
What it does do is mimic the way you insert values in mysql. It only works
on pg 8.0 and later, however, since the exception handling was added in 8.0.
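With the function in place, the client side collapses to a single call
with DBI-style ? placeholders (the same style as the quoted mysql code);
the placeholder order below assumes the parameter order of the function
above:

-- replaces the mysql INSERT ... ON DUPLICATE KEY UPDATE
SELECT insert_or_update_token(?, ?, ?, ?, ?);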