Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Matthew Schumacher
Okay,

Here is the status of the SA updates and a question:

Michael got SA changed to pass an array of tokens to the proc so right
there we gained a ton of performance due to connections and transactions
being grouped into one per email instead of one per token.

Now I am working on making the proc even faster.  Since we have all of
the tokens coming in as an array, it should be possible to get this down
to just a couple of queries.

I have the proc using IN and NOT IN statements to update everything at
once from a temp table, but it progressively gets slower because the
temp table is growing between vacuums.  At this point it's slightly
slower than the old update-or-else-insert on every token.

What I really want to do is have the token array available as a record
so that I can query against it, but not have it take up the resources of
a real table.  If I could copy from an array into a record then I can
even get rid of the loop.  Anyone have any thoughts on how to do this?


CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,
 intokenary BYTEA[],
 inspam_count INTEGER,
 inham_count INTEGER,
 inatime INTEGER)
RETURNS VOID AS '
DECLARE
  _token BYTEA;
BEGIN

  for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
_token := intokenary[i];
INSERT INTO bayes_token_tmp VALUES (_token);
  END LOOP;

  UPDATE
bayes_token
  SET
spam_count = greatest_int(spam_count + inspam_count, 0),
ham_count = greatest_int(ham_count + inham_count , 0),
atime = greatest_int(atime, 1000)
  WHERE
id = inuserid
  AND
(token) IN (SELECT intoken FROM bayes_token_tmp);

  UPDATE
bayes_vars
  SET
token_count = token_count + (SELECT count(intoken) FROM
bayes_token_tmp WHERE intoken NOT IN (SELECT token FROM bayes_token)),
newest_token_age = greatest_int(newest_token_age, inatime),
oldest_token_age = least_int(oldest_token_age, inatime)
  WHERE
id = inuserid;

  INSERT INTO
bayes_token
SELECT
  inuserid,
  intoken,
  inspam_count,
  inham_count,
  inatime
FROM
  bayes_token_tmp
WHERE
  (inspam_count <> 0 OR inham_count <> 0)
AND
  (intoken) NOT IN (SELECT token FROM bayes_token);

  delete from bayes_token_tmp;

  RETURN;
END;
' LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION greatest_int (integer, integer)
 RETURNS INTEGER
 IMMUTABLE STRICT
 AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'
 LANGUAGE SQL;

CREATE OR REPLACE FUNCTION least_int (integer, integer)
 RETURNS INTEGER
 IMMUTABLE STRICT
 AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'
 LANGUAGE SQL;



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread PFC



What I really want to do is have the token array available as a record
so that I can query against it, but not have it take up the resources of
a real table.  If I could copy from an array into a record then I can
even get rid of the loop.  Anyone have any thoughts on how to do this?


	You could make a set-returning function (about 3 lines) which RETURNs
NEXT every element in the array; then you can use this SRF just like a
table and SELECT from it, join it with your other tables, etc.
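
	For illustration, a minimal sketch of such an SRF (the name
array_to_rows is made up; it takes a bytea[] argument like the
intokenary from the function above):

CREATE OR REPLACE FUNCTION array_to_rows(intokenary BYTEA[])
RETURNS SETOF bytea AS '
BEGIN
  FOR i IN array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
    -- emit each array element as one row of the result set
    RETURN NEXT intokenary[i];
  END LOOP;
  RETURN;
END;
' LANGUAGE 'plpgsql';

	You would then write SELECT * FROM array_to_rows(intokenary)
anywhere the temp table appears today.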




Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread John A Meinel
Matthew Schumacher wrote:
 Okay,

 Here is the status of the SA updates and a question:

 Michael got SA changed to pass an array of tokens to the proc so right
 there we gained a ton of performance due to connections and transactions
 being grouped into one per email instead of one per token.

 Now I am working on making the proc even faster.  Since we have all of
 the tokens coming in as an array, it should be possible to get this down
 to just a couple of queries.

 I have the proc using IN and NOT IN statements to update everything at
 once from a temp table, but it progressively gets slower because the
 temp table is growing between vacuums.  At this point it's slightly
 slower than the old update-or-else-insert on every token.

I recommend that you drop and re-create the temp table. There is no
reason to have it around, considering you delete and re-add everything.
That means you never have to vacuum it, since it always only contains
the latest rows.


 What I really want to do is have the token array available as a record
 so that I can query against it, but not have it take up the resources of
 a real table.  If I could copy from an array into a record then I can
 even get rid of the loop.  Anyone have any thoughts on how to do this?


My one question here, is the inspam_count and inham_count *always* the
same for all tokens? I would have thought each token has its own count.
Anyway, there are a few lines I would change:


 CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,
  intokenary BYTEA[],
  inspam_count INTEGER,
  inham_count INTEGER,
  inatime INTEGER)
 RETURNS VOID AS '
 DECLARE
   _token BYTEA;
 BEGIN


-- create the table at the start of the procedure
CREATE TEMP TABLE bayes_token_tmp (intoken bytea);
-- You might also add primary key if you are going to be adding
-- *lots* of entries, but it sounds like you are going to have
-- less than 1 page, so it doesn't matter

   for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
   LOOP
 _token := intokenary[i];
 INSERT INTO bayes_token_tmp VALUES (_token);
   END LOOP;

   UPDATE
 bayes_token
   SET
 spam_count = greatest_int(spam_count + inspam_count, 0),
 ham_count = greatest_int(ham_count + inham_count , 0),
 atime = greatest_int(atime, 1000)
   WHERE
 id = inuserid
   AND

--(token) IN (SELECT intoken FROM bayes_token_tmp);
  EXISTS (SELECT token FROM bayes_token_tmp
   WHERE intoken=token LIMIT 1);

-- I would also avoid your intoken (NOT) IN (SELECT token FROM
-- bayes_token) There are a few possibilities, but to me
-- as your bayes_token table becomes big, this will start
-- to be the slow point

-- Rather than doing 2 NOT IN queries, it *might* be faster to do
   DELETE FROM bayes_token_tmp
WHERE NOT EXISTS (SELECT token FROM bayes_token
   WHERE token=intoken);



   UPDATE
 bayes_vars
   SET

-- token_count = token_count + (SELECT count(intoken) FROM
-- bayes_token_tmp WHERE intoken NOT IN (SELECT token FROM bayes_token)),
   token_count = token_count + (SELECT count(intoken)
  FROM bayes_token_tmp)

-- You don't need the where NOT IN, since we already removed those rows

 newest_token_age = greatest_int(newest_token_age, inatime),
 oldest_token_age = least_int(oldest_token_age, inatime)
   WHERE
 id = inuserid;

   INSERT INTO
 bayes_token
 SELECT
   inuserid,
   intoken,
   inspam_count,
   inham_count,
   inatime
 FROM
   bayes_token_tmp
 WHERE
   (inspam_count <> 0 OR inham_count <> 0)

-- AND
--   (intoken) NOT IN (SELECT token FROM bayes_token);

-- You don't need either of those lines, again because we already
-- filtered

--   delete from bayes_token_tmp;
--   And rather than deleting all of the entries just
 DROP TABLE bayes_token_tmp;


   RETURN;
 END;
 ' LANGUAGE 'plpgsql';

 CREATE OR REPLACE FUNCTION greatest_int (integer, integer)
  RETURNS INTEGER
  IMMUTABLE STRICT
  AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'
  LANGUAGE SQL;

 CREATE OR REPLACE FUNCTION least_int (integer, integer)
  RETURNS INTEGER
  IMMUTABLE STRICT
  AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'
  LANGUAGE SQL;


So to clarify, here is my finished function:

CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,
 intokenary BYTEA[],
 inspam_count INTEGER,
 inham_count INTEGER,
 inatime INTEGER)
RETURNS VOID AS '
DECLARE
  _token BYTEA;
BEGIN

  CREATE TEMP TABLE bayes_token_tmp (intoken bytea);
  for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
_token := 

Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Tom Lane
Matthew Schumacher [EMAIL PROTECTED] writes:
   for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
   LOOP
 _token := intokenary[i];
 INSERT INTO bayes_token_tmp VALUES (_token);
   END LOOP;

   UPDATE
 bayes_token
   SET
 spam_count = greatest_int(spam_count + inspam_count, 0),
 ham_count = greatest_int(ham_count + inham_count , 0),
 atime = greatest_int(atime, 1000)
   WHERE
 id = inuserid
   AND
 (token) IN (SELECT intoken FROM bayes_token_tmp);

I don't really see why you think that this path is going to lead to
better performance than where you were before.  Manipulation of the
temp table is never going to be free, and IN (sub-select) is always
inherently not fast, and NOT IN (sub-select) is always inherently
awful.  Throwing a pile of simple queries at the problem is not
necessarily the wrong way ... especially when you are doing it in
plpgsql, because you've already eliminated the overhead of network
round trips and repeated planning of the queries.

regards, tom lane



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread John A Meinel
Tom Lane wrote:
 Matthew Schumacher [EMAIL PROTECTED] writes:

  for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
_token := intokenary[i];
INSERT INTO bayes_token_tmp VALUES (_token);
  END LOOP;


  UPDATE
bayes_token
  SET
spam_count = greatest_int(spam_count + inspam_count, 0),
ham_count = greatest_int(ham_count + inham_count , 0),
atime = greatest_int(atime, 1000)
  WHERE
id = inuserid
  AND
(token) IN (SELECT intoken FROM bayes_token_tmp);


 I don't really see why you think that this path is going to lead to
 better performance than where you were before.  Manipulation of the
 temp table is never going to be free, and IN (sub-select) is always
 inherently not fast, and NOT IN (sub-select) is always inherently
 awful.  Throwing a pile of simple queries at the problem is not
 necessarily the wrong way ... especially when you are doing it in
 plpgsql, because you've already eliminated the overhead of network
 round trips and repeated planning of the queries.

So for an IN (sub-select), does it actually pull all of the rows from
the other table, or is the planner smart enough to stop once it finds
something?

Is IN (sub-select) about the same as EXISTS (sub-select WHERE x=y)?

What about NOT IN (sub-select) versus NOT EXISTS (sub-select WHERE x=y)?

I would guess that the EXISTS/NOT EXISTS would be faster, though it
probably would necessitate using a nested loop (at least that seems to
be the way the query is written).

I did some tests on a database with 800k rows, versus a temp table with
2k rows. I did one sequential test (1-2000, with 66 rows missing), and
one sparse test (1-200, 10-100200, 20-200200, ... with 658 rows
missing).

I found that NOT IN did indeed have to load the whole table. IN was
smart enough to do a nested loop.
EXISTS and NOT EXISTS did a sequential scan on my temp table, with a
SubPlan filter (which looks a whole lot like a Nested Loop).

What I found was that IN performed about the same as EXISTS (since they
are both effectively doing a nested loop), but that NOT IN took 4,000ms
while NOT EXISTS was the same speed as EXISTS at around 166ms.

Anyway, so it does seem like NOT IN is not a good choice, but IN seems
to be equivalent to EXISTS, and NOT EXISTS is also very fast.

Is this generally true, or did I just get lucky on my data?

John
=:-




   regards, tom lane



mifar07=# EXPLAIN ANALYZE SELECT id FROM ids WHERE id IN (SELECT id FROM 
object_t);
   QUERY PLAN
-
 Nested Loop IN Join  (cost=0.00..9851.68 rows=2140 width=4) (actual 
time=0.085..183.889 rows=1351 loops=1)
   ->  Seq Scan on ids  (cost=0.00..31.40 rows=2140 width=4) (actual 
time=0.014..24.032 rows=2009 loops=1)
   ->  Index Scan using object_t_pkey on object_t  (cost=0.00..4.58 rows=1 
width=4) (actual time=0.071..0.071 rows=1 loops=2009)
 Index Cond: ("outer".id = object_t.id)
 Total runtime: 184.823 ms
(5 rows)

Time: 186.931 ms
mifar07=# EXPLAIN ANALYZE SELECT id FROM ids WHERE EXISTS (SELECT id FROM 
object_t o WHERE o.id = ids.id);
 QUERY PLAN
-
 Seq Scan on ids  (cost=0.00..9824.93 rows=1070 width=4) (actual 
time=0.086..165.053 rows=1351 loops=1)
   Filter: (subplan)
   SubPlan
 ->  Index Scan using object_t_pkey on object_t o  (cost=0.00..4.58 rows=1 
width=4) (actual time=0.025..0.025 rows=1 loops=2009)
   Index Cond: (id = $0)
 Total runtime: 165.995 ms
(6 rows)

Time: 167.795 ms
mifar07=# EXPLAIN ANALYZE SELECT id FROM ids WHERE id NOT IN (SELECT id FROM 
object_t);
QUERY PLAN
---
 Seq Scan on ids  (cost=36410.51..36447.26 rows=1070 width=4) (actual 
time=4168.247..4172.080 rows=658 loops=1)
   Filter: (NOT (hashed subplan))
   SubPlan
 ->  Seq Scan on object_t  (cost=0.00..34381.81 rows=811481 width=4) 
(actual time=0.044..2464.296 rows=811481 loops=1)
 Total runtime: 4210.784 ms
(5 rows)

Time: 4212.276 ms
mifar07=# EXPLAIN ANALYZE SELECT id FROM ids WHERE NOT EXISTS (SELECT id FROM 
object_t o WHERE o.id = ids.id);
 QUERY PLAN
-
 Seq Scan on ids  (cost=0.00..9824.93 rows=1070 width=4) (actual 

Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Matthew Schumacher
John A Meinel wrote:
 Matthew Schumacher wrote:
 
  I recommend that you drop and re-create the temp table. There is no
 reason to have it around, considering you delete and re-add everything.
 That means you never have to vacuum it, since it always only contains
 the latest rows.

Whenever I have a create temp and drop statement I get these errors:

select put_tokens(1, '{000}', 1, 1, 1000);
ERROR:  relation with OID 582248 does not exist
CONTEXT:  SQL statement "INSERT INTO bayes_token_tmp VALUES ( $1 )"
PL/pgSQL function put_tokens line 12 at SQL statement


 
 
 
 My one question here, is the inspam_count and inham_count *always* the
 same for all tokens? I would have thought each token has its own count.
 Anyway, there are a few lines I would change:

No, we get the userid, inspam, inham, and atime, and they are the same
for each token.  If we have a different user we call the proc again.

 -- create the table at the start of the procedure
 CREATE TEMP TABLE bayes_token_tmp (intoken bytea);
 -- You might also add primary key if you are going to be adding
 -- *lots* of entries, but it sounds like you are going to have
 -- less than 1 page, so it doesn't matter

This causes errors, see above

 --(token) IN (SELECT intoken FROM bayes_token_tmp);
   EXISTS (SELECT token FROM bayes_token_tmp
  WHERE intoken=token LIMIT 1);
 
 -- I would also avoid your intoken (NOT) IN (SELECT token FROM
 -- bayes_token) There are a few possibilities, but to me
 -- as your bayes_token table becomes big, this will start
 -- to be the slow point
 
 -- Rather than doing 2 NOT IN queries, it *might* be faster to do
DELETE FROM bayes_token_tmp
   WHERE NOT EXISTS (SELECT token FROM bayes_token
  WHERE token=intoken);
 


I'll look into this.


thanks,

schu



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Tom Lane
John A Meinel [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 I don't really see why you think that this path is going to lead to
 better performance than where you were before.

 So for an IN (sub-select), does it actually pull all of the rows from
 the other table, or is the planner smart enough to stop once it finds
 something?

It stops when it finds something --- but it's still a join operation
in essence.  I don't see that putting the values one by one into a table
and then joining is going to be a win compared to just processing the
values one at a time against the main table.

 Is IN (sub-select) about the same as EXISTS (sub-select WHERE x=y)?
 What about NOT IN (sub-select) versus NOT EXISTS (sub-select WHERE x=y)

The EXISTS variants are actually worse, because we've not spent as much
time teaching the planner how to optimize them.  There's effectively
only one decent plan for an EXISTS, which is that the subselect's x is
indexed and we do an indexscan probe using the outer y for each outer
row.  IN and NOT IN can do that, or several alternative plans that might
be better depending on data statistics.

However, that's cold comfort for Matthew's application -- the only way
he'd get any benefit from all those planner smarts is if he ANALYZEs
the temp table after loading it and then EXECUTEs the main query (so
that it gets re-planned every time).  Plus, at least some of those
alternative plans would require an index on the temp table, which is
unlikely to be worth the cost of setting up.  And finally, this
formulation requires separate IN and NOT IN tests that are necessarily
going to do a lot of redundant work.

There's enough overhead here that I find it highly doubtful that it'll
be a win compared to the original approach of retail queries against the
main table.

regards, tom lane



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Matthew Schumacher
Tom Lane wrote:

 I don't really see why you think that this path is going to lead to
 better performance than where you were before.  Manipulation of the
 temp table is never going to be free, and IN (sub-select) is always
 inherently not fast, and NOT IN (sub-select) is always inherently
 awful.  Throwing a pile of simple queries at the problem is not
 necessarily the wrong way ... especially when you are doing it in
 plpgsql, because you've already eliminated the overhead of network
 round trips and repeated planning of the queries.
 
   regards, tom lane

The reason why I think this may be faster is because I would avoid
running an update on data that needs to be inserted, which saves
searching through the table for a matching token.

Perhaps I should do the insert first, then drop those tokens from the
temp table, then do my updates in a loop.
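
Roughly, I imagine that would look something like this (just a sketch,
reusing the bayes_token_tmp temp table from the earlier version, with
the order flipped so the set operations stay simple: update the tokens
that already exist, clear those out of the temp table, and whatever is
left is a plain insert):

  UPDATE bayes_token
    SET spam_count = greatest_int(spam_count + inspam_count, 0),
        ham_count  = greatest_int(ham_count + inham_count, 0),
        atime      = greatest_int(atime, inatime)
    WHERE id = inuserid
    AND token IN (SELECT intoken FROM bayes_token_tmp);

  -- drop the tokens we just updated; only never-seen tokens remain
  DELETE FROM bayes_token_tmp
    WHERE EXISTS (SELECT 1 FROM bayes_token
                  WHERE id = inuserid AND token = intoken);

  -- everything left in the temp table needs an insert, not an update
  INSERT INTO bayes_token
    SELECT inuserid, intoken, inspam_count, inham_count, inatime
    FROM bayes_token_tmp;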

I'll have to do some benchmarking...

schu



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Matthew Schumacher
Matthew Schumacher wrote:
 Tom Lane wrote:
 
 
I don't really see why you think that this path is going to lead to
better performance than where you were before.  Manipulation of the
temp table is never going to be free, and IN (sub-select) is always
inherently not fast, and NOT IN (sub-select) is always inherently
awful.  Throwing a pile of simple queries at the problem is not
necessarily the wrong way ... especially when you are doing it in
plpgsql, because you've already eliminated the overhead of network
round trips and repeated planning of the queries.

  regards, tom lane
 
 
 The reason why I think this may be faster is because I would avoid
 running an update on data that needs to be inserted, which saves
 searching through the table for a matching token.
 
 Perhaps I should do the insert first, then drop those tokens from the
 temp table, then do my updates in a loop.
 
 I'll have to do some benchmarking...
 
 schu

Tom, I think you're right: whenever I do a NOT IN it does a full table
scan against bayes_token, and since that table is going to get very big,
doing the simple query in a loop that uses an index seems a bit faster.

John, thanks for your help, it was worth a try, but it looks like the
looping is just faster.

Here is what I have so far in case anyone else has ideas before I
abandon it:

CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,
 intokenary BYTEA[],
 inspam_count INTEGER,
 inham_count INTEGER,
 inatime INTEGER)
RETURNS VOID AS '
DECLARE
  _token BYTEA;
BEGIN

  UPDATE
bayes_token
  SET
spam_count = greatest_int(spam_count + inspam_count, 0),
ham_count = greatest_int(ham_count + inham_count , 0),
atime = greatest_int(atime, inatime)
  WHERE
id = inuserid
  AND
(token) IN (SELECT bayes_token_tmp FROM bayes_token_tmp(intokenary));

  UPDATE
bayes_vars
  SET
token_count = token_count + (
  SELECT
count(bayes_token_tmp)
  FROM
bayes_token_tmp(intokenary)
  WHERE
bayes_token_tmp NOT IN (SELECT token FROM bayes_token)),
newest_token_age = greatest_int(newest_token_age, inatime),
oldest_token_age = least_int(oldest_token_age, inatime)
  WHERE
id = inuserid;

  INSERT INTO
bayes_token
SELECT
  inuserid,
  bayes_token_tmp,
  inspam_count,
  inham_count,
  inatime
FROM
  bayes_token_tmp(intokenary)
WHERE
  (inspam_count <> 0 OR inham_count <> 0)
AND
  (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);

  RETURN;
END;
' LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION bayes_token_tmp(intokenary BYTEA[]) RETURNS
SETOF bytea AS
'
BEGIN
  for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
return next intokenary[i];
  END LOOP;
  RETURN;
end
'
language 'plpgsql';

CREATE OR REPLACE FUNCTION greatest_int (integer, integer)
 RETURNS INTEGER
 IMMUTABLE STRICT
 AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'
 LANGUAGE SQL;

CREATE OR REPLACE FUNCTION least_int (integer, integer)
 RETURNS INTEGER
 IMMUTABLE STRICT
 AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'
 LANGUAGE SQL;



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread John A Meinel
Matthew Schumacher wrote:
 Matthew Schumacher wrote:

Tom Lane wrote:



I don't really see why you think that this path is going to lead to
better performance than where you were before.  Manipulation of the
temp table is never going to be free, and IN (sub-select) is always
inherently not fast, and NOT IN (sub-select) is always inherently
awful.  Throwing a pile of simple queries at the problem is not
necessarily the wrong way ... especially when you are doing it in
plpgsql, because you've already eliminated the overhead of network
round trips and repeated planning of the queries.

 regards, tom lane


The reason why I think this may be faster is because I would avoid
running an update on data that needs to be inserted, which saves
searching through the table for a matching token.

Perhaps I should do the insert first, then drop those tokens from the
temp table, then do my updates in a loop.

I'll have to do some benchmarking...

schu


 Tom, I think you're right: whenever I do a NOT IN it does a full table
 scan against bayes_token, and since that table is going to get very big,
 doing the simple query in a loop that uses an index seems a bit faster.

 John, thanks for your help, it was worth a try, but it looks like the
 looping is just faster.

 Here is what I have so far in case anyone else has ideas before I
 abandon it:

Surely this isn't what you have. You have *no* loop here, and you have
stuff like:
  AND
(bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);

I'm guessing this isn't your last version of the function.

As far as putting the CREATE TEMP TABLE inside the function, I think the
problem is that the first time it runs, it compiles the function, and
when it gets to the UPDATE/INSERT with the temporary table name, at
compile time it hard-codes that table id.

I tried getting around it by using EXECUTE which worked, but it made
the function horribly slow. So I don't recommend it.
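
For reference, the EXECUTE version looks roughly like this (a sketch
only, and again not recommended; the function name is made up, and
quote_literal() here leans on the old implicit bytea-to-text cast, so
treat it as illustrative):

CREATE OR REPLACE FUNCTION put_tokens_execute(intokenary BYTEA[])
RETURNS VOID AS '
DECLARE
  _token BYTEA;
BEGIN
  -- EXECUTE is planned at run time, so each call sees the OID of the
  -- freshly created temp table instead of a stale hard-coded one
  EXECUTE ''CREATE TEMP TABLE bayes_token_tmp (intoken bytea)'';
  for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
  LOOP
    _token := intokenary[i];
    -- one dynamic statement per token is what makes this approach slow
    EXECUTE ''INSERT INTO bayes_token_tmp VALUES (''
            || quote_literal(_token) || '')'';
  END LOOP;
  -- every later query touching bayes_token_tmp would need EXECUTE too
  EXECUTE ''DROP TABLE bayes_token_tmp'';
  RETURN;
END;
' LANGUAGE 'plpgsql';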

Anyway, if you want us to evaluate it, you really need to send us the
real final function.

John
=:-



 CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,
  intokenary BYTEA[],
  inspam_count INTEGER,
  inham_count INTEGER,
  inatime INTEGER)
 RETURNS VOID AS '
 DECLARE
   _token BYTEA;
 BEGIN

   UPDATE
 bayes_token
   SET
 spam_count = greatest_int(spam_count + inspam_count, 0),
 ham_count = greatest_int(ham_count + inham_count , 0),
 atime = greatest_int(atime, inatime)
   WHERE
 id = inuserid
   AND
 (token) IN (SELECT bayes_token_tmp FROM bayes_token_tmp(intokenary));

   UPDATE
 bayes_vars
   SET
 token_count = token_count + (
   SELECT
 count(bayes_token_tmp)
   FROM
 bayes_token_tmp(intokenary)
   WHERE
 bayes_token_tmp NOT IN (SELECT token FROM bayes_token)),
 newest_token_age = greatest_int(newest_token_age, inatime),
 oldest_token_age = least_int(oldest_token_age, inatime)
   WHERE
 id = inuserid;

   INSERT INTO
 bayes_token
 SELECT
   inuserid,
   bayes_token_tmp,
   inspam_count,
   inham_count,
   inatime
 FROM
   bayes_token_tmp(intokenary)
 WHERE
   (inspam_count <> 0 OR inham_count <> 0)
 AND
   (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);

   RETURN;
 END;
 ' LANGUAGE 'plpgsql';

 CREATE OR REPLACE FUNCTION bayes_token_tmp(intokenary BYTEA[]) RETURNS
 SETOF bytea AS
 '
 BEGIN
   for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
   LOOP
 return next intokenary[i];
   END LOOP;
   RETURN;
 end
 '
 language 'plpgsql';

 CREATE OR REPLACE FUNCTION greatest_int (integer, integer)
  RETURNS INTEGER
  IMMUTABLE STRICT
  AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'
  LANGUAGE SQL;

 CREATE OR REPLACE FUNCTION least_int (integer, integer)
  RETURNS INTEGER
  IMMUTABLE STRICT
  AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'
  LANGUAGE SQL;







Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread Matthew Schumacher
John A Meinel wrote:

 Surely this isn't what you have. You have *no* loop here, and you have
 stuff like:
   AND
 (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);
 
 I'm guessing this isn't your last version of the function.
 
 As far as putting the CREATE TEMP TABLE inside the function, I think the
 problem is that the first time it runs, it compiles the function, and
 when it gets to the UPDATE/INSERT with the temporary table name, at
 compile time it hard-codes that table id.
 
 I tried getting around it by using EXECUTE which worked, but it made
 the function horribly slow. So I don't recommend it.
 
 Anyway, if you want us to evaluate it, you really need to send us the
 real final function.
 
 John
 =:-

It is the final function.  It doesn't need a loop because of the
bayes_token_tmp function I added.  The array is passed to it and it
returns a record set so I can work off of it like it's a table.  So the
function works the same way it did before, but instead of using SELECT
intoken from TEMPTABLE, you use SELECT bayes_token_tmp from
bayes_token_tmp(intokenary).

I think this is more efficient than the create table overhead,
especially because the incoming record set won't be too big.

Thanks,

schu




Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-04 Thread John A Meinel
Matthew Schumacher wrote:
 John A Meinel wrote:


Surely this isn't what you have. You have *no* loop here, and you have
stuff like:
  AND
(bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);

I'm guessing this isn't your last version of the function.

As far as putting the CREATE TEMP TABLE inside the function, I think the
problem is that the first time it runs, it compiles the function, and
when it gets to the UPDATE/INSERT with the temporary table name, at
compile time it hard-codes that table id.

I tried getting around it by using EXECUTE which worked, but it made
the function horribly slow. So I don't recommend it.

Anyway, if you want us to evaluate it, you really need to send us the
real final function.

John
=:-


 It is the final function.  It doesn't need a loop because of the
 bayes_token_tmp function I added.  The array is passed to it and it
 returns a record set so I can work off of it like it's a table.  So the
 function works the same way it did before, but instead of using SELECT
 intoken from TEMPTABLE, you use SELECT bayes_token_tmp from
 bayes_token_tmp(intokenary).

 I think this is more efficient than the create table overhead,
 especially because the incoming record set won't be too big.

 Thanks,

 schu



Well, I would at least recommend that you change the WHERE
bayes_token_tmp NOT IN (SELECT token FROM bayes_token)
to a
WHERE NOT EXISTS (SELECT token FROM bayes_token WHERE
token=bayes_token_tmp)

You might try experimenting with the differences, but on my system the
NOT IN has to do a full sequential scan on bayes_token and load all
entries into a list, while NOT EXISTS can do effectively a nested loop.

The nested loop requires that there is an index on bayes_token(token),
but I'm pretty sure there is anyway.

Again, in my testing, it was a difference of 4200ms versus 180ms. (800k
rows in my big table, 2k in the temp one)

John
=:-





Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-08-01 Thread Michael Parker
Jim C. Nasby wrote:

I'm not sure who's responsible for DBI::Pg (Josh?), but would it make
sense to add better support for bytea to DBI::Pg? ISTM there should be a
better way of doing this than adding gobs of \'s.
  

It has support for binding a bytea parameter, but in this case we're
trying to build up an array and pass that into a stored procedure.  The
$dbh->quote() method for DBD::Pg lacks the ability to quote bytea
types.  There is actually a TODO note in the code about adding support
for quoting Pg-specific types.  Presumably the difficulties we are
having with this would be solved by that, once it has been implemented.
In the meantime, I believe it's just a matter of getting the right
escapes happen so that the procedure is inserting values that we can
later get via a select and using bind_param() with the PG_BYTEA type.

Michael




Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-31 Thread John Arbash Meinel
John Arbash Meinel wrote:

Matthew Schumacher wrote:

  

All it's doing is trying the update before the insert to get around the
problem of not knowing which is needed.  With only 2-3 of the queries
implemented I'm already back to running about the same speed as the
original SA proc that is going to ship with SA 3.1.0.

All of the queries are using indexes so at this point I'm pretty
convinced that the biggest problem is the sheer number of queries
required to run this proc 200 times for each email (once for each token).

I don't see anything that could be done to make this much faster on the
postgres end, it's looking like the solution is going to involve cutting
down the number of queries some how.

One thing that is still very puzzling to me is why this runs so much
slower when I put the data.sql in a transaction.  Obviously transactions
are acting different when you call a proc a zillion times vs an insert
query.
 



Well, I played with adding a COMMIT;BEGIN; statement to your exact test
every 1000 lines. And this is what I got:
  

Just for reference, I also tested this on my old server, which is a dual
Celeron 450 with 256M ram. FC4 and Postgres 8.0.3
Unmodified:
real    54m15.557s
user    0m24.328s
sys     0m14.200s

With Transactions every 1000 selects, and vacuum every 5000:
real    8m36.528s
user    0m16.585s
sys     0m12.569s

With Transactions every 1000 selects, and vacuum every 1:
real    7m50.748s
user    0m16.183s
sys     0m12.489s

On this machine vacuum is more expensive, since it doesn't have as much ram.

Anyway, on this machine, I see approx 7x improvement. Which I think is
probably going to satisfy your spamassassin needs.
John
=:-

PS Looking forward to having a spamassassin that can utilize my
favorite db. Right now, I'm not using a db backend because it wasn't
worth setting up mysql.

Unmodified:
real    17m53.587s
user    0m6.204s
sys     0m3.556s

With BEGIN/COMMIT:
real    1m53.466s
user    0m5.203s
sys     0m3.211s

So I see the potential for improvement almost 10 fold by switching to
transactions. I played with the perl script (and re-implemented it in
python), and for the same data as the perl script, using COPY instead of
INSERT INTO means 5s instead of 33s.

I also played around with adding VACUUM ANALYZE every 10 COMMITS, which
brings the speed to:

real    1m41.258s
user    0m5.394s
sys     0m3.212s

And doing VACUUM ANALYZE every 5 COMMITS makes it:
real    1m46.403s
user    0m5.597s
sys     0m3.244s

I'm assuming the slowdown is because of the extra time spent vacuuming.
Overall performance might still be improving, since you wouldn't
actually be inserting all 100k rows at once.
  

...

This is all run on Ubuntu, with postgres 7.4.7, and a completely
unchanged postgresql.conf. (But the machine is a dual P4 2.4GHz, with
3GB of RAM).

John
=:-
  






Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-31 Thread Matthew Schumacher
Ok, here is the current plan.

Change the spamassassin API to pass a hash of tokens into the storage
module, pass the tokens to the proc as an array, start a transaction,
load the tokens into a temp table using copy, select the tokens distinct
into the token table for new tokens, update the token table for known
tokens, then commit.

This solves the following problems:

1.  Each email is a transaction instead of each token.
2.  The update statement is only called when we really need an update
which avoids all of those searches.
3.  The looping work is done inside the proc instead of perl calling a
method a zillion times per email.

I'm not sure how vacuuming will be done yet, if we vacuum once per email
that may be too often, so I may do that every 5 mins in cron.
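
In rough SQL terms (names assumed from the earlier schema posts), the
per-message flow would be something like:

BEGIN;
CREATE TEMP TABLE bayes_token_tmp (intoken bytea);
COPY bayes_token_tmp FROM STDIN;
-- insert the distinct new tokens, then update the known ones,
-- along the lines of the put_tokens versions posted earlier
COMMIT;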

schu



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-31 Thread Jim C. Nasby
On Sun, Jul 31, 2005 at 08:51:06AM -0800, Matthew Schumacher wrote:
 Ok, here is the current plan.
 
 Change the spamassassin API to pass a hash of tokens into the storage
 module, pass the tokens to the proc as an array, start a transaction,
 load the tokens into a temp table using copy, select the tokens distinct
 into the token table for new tokens, update the token table for known
 tokens, then commit.

You might consider:
UPDATE tokens
FROM temp_table (this updates existing records)

INSERT INTO tokens
SELECT ...
FROM temp_table
WHERE NOT IN (SELECT ... FROM tokens)

This way you don't do an update to newly inserted tokens, which helps
keep vacuuming needs in check.
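
Spelled out, that would be something like this (a sketch; column names
are guessed from the earlier schema, and the temp table is assumed to
carry per-token counts):

UPDATE tokens
   SET spam_count = tokens.spam_count + t.spam_count,
       ham_count  = tokens.ham_count  + t.ham_count,
       atime      = t.atime
  FROM temp_table t
 WHERE tokens.token = t.token;   -- updates existing records only

INSERT INTO tokens
SELECT t.id, t.token, t.spam_count, t.ham_count, t.atime
  FROM temp_table t
 WHERE t.token NOT IN (SELECT token FROM tokens);  -- brand-new tokens

Since the UPDATE runs before the INSERT, the freshly inserted rows never
get touched a second time.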

 This solves the following problems:
 
 1.  Each email is a transaction instead of each token.
 2.  The update statement is only called when we really need an update
 which avoids all of those searches.
 3.  The looping work is done inside the proc instead of perl calling a
 method a zillion times per email.
 
 I'm not sure how vacuuming will be done yet, if we vacuum once per email
 that may be too often, so I may do that every 5 mins in cron.

I would suggest leaving an option to have SA vacuum every n emails,
since some people may not want to mess with cron, etc. I suspect that
pg_autovacuum would be able to keep up with things pretty well, though.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-31 Thread Andreas Pflug

Jim C. Nasby wrote:

On Sun, Jul 31, 2005 at 08:51:06AM -0800, Matthew Schumacher wrote:


Ok, here is the current plan.

Change the spamassassin API to pass a hash of tokens into the storage
module, pass the tokens to the proc as an array, start a transaction,
load the tokens into a temp table using copy, select the tokens distinct
into the token table for new tokens, update the token table for known
tokens, then commit.



You might consider:
UPDATE tokens
FROM temp_table (this updates existing records)

INSERT INTO tokens
SELECT ...
FROM temp_table
WHERE NOT IN (SELECT ... FROM tokens)

This way you don't do an update to newly inserted tokens, which helps
keep vacuuming needs in check.


The subselect might be quite a big set, so avoiding a full table scan 
and materialization by


DELETE FROM temp_table
  WHERE key IN (SELECT key FROM tokens JOIN temp_table USING (key));
INSERT INTO TOKENS SELECT * FROM temp_table;

or

INSERT INTO TOKENS
SELECT temp_table.* FROM temp_table LEFT JOIN tokens USING (key)
WHERE tokens.key IS NULL

might be an additional win, assuming that only a small fraction of 
tokens is inserted and updated.


Regards,
Andreas



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-31 Thread Tom Lane
Michael Parker [EMAIL PROTECTED] writes:
 sub bytea_esc {
   my ($str) = @_;
   my $buf = "";
   foreach my $char (split(//,$str)) {
     if (ord($char) == 0) { $buf .= "\\\\000"; }
     elsif (ord($char) == 39) { $buf .= "\\\\047"; }
     elsif (ord($char) == 92) { $buf .= "\\\\134"; }
     else { $buf .= $char; }
   }
   return $buf;
 }

Oh, I see the problem: you forgot to convert '"' to a backslash sequence.

It would probably also be wise to convert anything >= 128 to a backslash
sequence, so as to avoid any possible problems with multibyte character
encodings.  You wouldn't see this issue in a SQL_ASCII database, but I
suspect it would rise up to bite you with other encoding settings.

regards, tom lane



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-30 Thread Tom Lane
Matthew Schumacher [EMAIL PROTECTED] writes:
 One thing that is still very puzzling to me is why this runs so much
 slower when I put the data.sql in a transaction.  Obviously transactions
 are acting different when you call a proc a zillion times vs an insert
 query.

I looked into this a bit.  It seems that the problem when you wrap the
entire insertion series into one transaction is associated with the fact
that the test does so many successive updates of the single row in
bayes_vars.  (VACUUM VERBOSE at the end of the test shows it cleaning up
49383 dead versions of the one row.)  This is bad enough when it's in
separate transactions, but when it's in one transaction, none of those
dead row versions can be marked fully dead yet --- so for every update
of the row, the unique-key check has to visit every dead version to make
sure it's dead in the context of the current transaction.  This makes
the process O(N^2) in the number of updates per transaction.  Which is
bad enough if you just want to do one transaction per message, but it's
intolerable if you try to wrap the whole bulk-load scenario into one
transaction.

I'm not sure that we can do anything to make this a lot smarter, but
in any case, the real problem is to not do quite so many updates of
bayes_vars.

How constrained are you as to the format of the SQL generated by
SpamAssassin?  In particular, could you convert the commands generated
for a single message into a single statement?  I experimented with
passing all the tokens for a given message as a single bytea array,
as in the attached, and got almost a factor of 4 runtime reduction
on your test case.

BTW, it's possible that this is all just a startup-transient problem:
once the database has been reasonably well populated, one would expect
new tokens to be added infrequently, and so the number of updates to
bayes_vars ought to drop off.

regards, tom lane


Revised insertion procedure:


CREATE or replace FUNCTION put_tokens (_id INTEGER,
  _tokens BYTEA[],
  _spam_count INTEGER,
  _ham_count INTEGER,
  _atime INTEGER)
RETURNS VOID AS
$$
declare _token bytea;
new_tokens integer := 0;
BEGIN
  for i in array_lower(_tokens,1) .. array_upper(_tokens,1)
  LOOP
_token := _tokens[i];
UPDATE bayes_token
  SET spam_count = spam_count + _spam_count,
  ham_count  = ham_count + _ham_count,
  atime  = _atime
  WHERE id = _id
  AND token = _token;

IF not found THEN
  INSERT INTO bayes_token VALUES (_id, _token, _spam_count,
  _ham_count, _atime);
  new_tokens := new_tokens + 1;
END IF;
  END LOOP;
  if new_tokens > 0 THEN
  UPDATE bayes_vars SET token_count = token_count + new_tokens
  WHERE id = _id;
  IF NOT FOUND THEN
RAISE EXCEPTION 'unable to update token_count in bayes_vars';
  END IF;
  END IF;
  RETURN;
END;
$$
LANGUAGE plpgsql;


Typical input:


select 
put_tokens(1,'{\\125\\42\\80\\167\\166,\\38\\153\\220\\93\\190,\\68\\7\\112\\52\\224,\\51\\14\\78\\155\\49,\\73\\245\\15\\221\\43,\\96\\179\\108\\197\\121,\\123\\97\\220\\173\\247,\\55\\132\\243\\51\\65,\\238\\36\\129\\75\\181,\\145\\253\\196\\106\\90,\\119\\0\\51\\127\\236,\\229\\35\\181\\222\\3,\\163\\1\\191\\220\\79,\\232\\97\\152\\207\\26,\\111\\146\\81\\182\\250,\\47\\141\\12\\76\\45,\\252\\97\\168\\243\\222,\\24\\157\\202\\45\\24,\\230\\207\\30\\46\\115,\\106\\45\\182\\94\\136,\\45\\66\\245\\41\\103,\\108\\126\\171\\154\\210,\\64\\90\\1\\184\\145,\\242\\78\\150\\104\\213,\\214\\134\\7\\179\\150,\\249\\12\\247\\164\\74,\\234\\35\\93\\118\\102,\\5\\152\\152\\219\\188,\\99\\186\\172\\56\\241,\\99\\220\\62\\240\\148,\\106\\12\\199\\33\\177,\\34\\74\\190\\192\\186,\\219\\127\\145\\132\\203,\\240\\113\\128\\160\\46,\\83\\5\\239\\206\\221,\\245\\253\\219\\83\\250,\\1\\53\\126\\56\\129,\\206\\130\\97\\246\\47,\\217\\57\\185\\37\\202,\\235\\10\\74\\224\\150,\\80\\151\\70\\52\\96,\\126\\49\\156\\162\\93,\\243\\120\\218\\226\\49,\\251\\132\\118\\47\\221,\\241\\160\\120\\146\\198,\\183\\32\\161\\223\\178,\\80\\205\\77\\57\\2,\\121\\231\\13\\71\\218,\\71\\143\\184\\88\\185,\\163\\96\\119\\211\\142,\\20\\143\\90\\91\\211,\\179\\228\\212\\15\\22,\\243\\35\\149\\9\\55,\\140\\149\\99\\233\\241,\\164\\246\\101\\147\\107,\\202\\70\\218\\40\\114,\\39\\36\\186\\46\\84,\\58\\116\\44\\237\\2,\\80\\204\\185\\47\\105,\\64\\227\\29\\108\\222,\\173\\115\\56\\91\\52,\\102\\39\\157\\252\\64,\\133\\9\\89\\207\\62,\\27\\2\\230\\227\\201,\\163\\45\\123\\160\\129,\\170\\131\\168\\107\\198,\\236\\253\\0\\43\\228,\\44\\255\\93\\197\\136,\\64\\122\\42\\230\\126,\\207\\222\\104\\27\\239,\\26\\240\\78\\73\\45,\\225\\107\\181\\246\\160,\\231\\72\\243\\36\\159,\\248\\60\\14\\67\\145,\\21\\161\\247\\43\\198,\\81\\243\\191\\168\\18,\\237\\227\\23\\40\\140,\\60\\90\\96\\168\\201,\\211\

Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-30 Thread Matthew Schumacher
Tom Lane wrote:

 I looked into this a bit.  It seems that the problem when you wrap the
 entire insertion series into one transaction is associated with the fact
 that the test does so many successive updates of the single row in
 bayes_vars.  (VACUUM VERBOSE at the end of the test shows it cleaning up
 49383 dead versions of the one row.)  This is bad enough when it's in
 separate transactions, but when it's in one transaction, none of those
 dead row versions can be marked fully dead yet --- so for every update
 of the row, the unique-key check has to visit every dead version to make
 sure it's dead in the context of the current transaction.  This makes
 the process O(N^2) in the number of updates per transaction.  Which is
 bad enough if you just want to do one transaction per message, but it's
 intolerable if you try to wrap the whole bulk-load scenario into one
 transaction.
 
 I'm not sure that we can do anything to make this a lot smarter, but
 in any case, the real problem is to not do quite so many updates of
 bayes_vars.
 
 How constrained are you as to the format of the SQL generated by
 SpamAssassin?  In particular, could you convert the commands generated
 for a single message into a single statement?  I experimented with
 passing all the tokens for a given message as a single bytea array,
 as in the attached, and got almost a factor of 4 runtime reduction
 on your test case.
 
 BTW, it's possible that this is all just a startup-transient problem:
 once the database has been reasonably well populated, one would expect
 new tokens to be added infrequently, and so the number of updates to
 bayes_vars ought to drop off.
 
   regards, tom lane
 

The SpamAssassin bayes code calls the _put_token method in the storage
module in a loop.  This means that the storage module isn't called once
per message, but once per token.

I'll look into modifying it so that the bayes code passes a hash of
tokens to the storage module, where they can loop, or in the case of the
pgsql module pass an array of tokens to a procedure where we loop and
use temp tables to make this much more efficient.

I don't have much time this weekend to toss at this, but will be looking
at it on Monday.

Thanks,

schu



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-30 Thread John Arbash Meinel
Matthew Schumacher wrote:

All it's doing is trying the update before the insert to get around the
problem of not knowing which is needed.  With only 2-3 of the queries
implemented I'm already back to running about the same speed as the
original SA proc that is going to ship with SA 3.1.0.

All of the queries are using indexes so at this point I'm pretty
convinced that the biggest problem is the sheer number of queries
required to run this proc 200 times for each email (once for each token).

I don't see anything that could be done to make this much faster on the
postgres end, it's looking like the solution is going to involve cutting
down the number of queries some how.

One thing that is still very puzzling to me is why this runs so much
slower when I put the data.sql in a transaction.  Obviously transactions
are acting different when you call a proc a zillion times vs an insert
query.
  

Well, I played with adding a COMMIT;BEGIN; statement to your exact test
every 1000 lines. And this is what I got:

Unmodified:
real    17m53.587s
user    0m6.204s
sys     0m3.556s

With BEGIN/COMMIT:
real    1m53.466s
user    0m5.203s
sys     0m3.211s

So I see the potential for improvement almost 10 fold by switching to
transactions. I played with the perl script (and re-implemented it in
python), and for the same data as the perl script, using COPY instead of
INSERT INTO means 5s instead of 33s.

I also played around with adding VACUUM ANALYZE every 10 COMMITS, which
brings the speed to:

real    1m41.258s
user    0m5.394s
sys     0m3.212s

And doing VACUUM ANALYZE every 5 COMMITS makes it:
real    1m46.403s
user    0m5.597s
sys     0m3.244s

I'm assuming the slowdown is because of the extra time spent vacuuming.
Overall performance might still be improving, since you wouldn't
actually be inserting all 100k rows at once.
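
The change to data.sql itself is purely mechanical, along these lines
(statement text elided):

BEGIN;
select put_tokens(...);   -- 1000 statements per transaction
...
COMMIT;
VACUUM ANALYZE;           -- only every 5 or 10 commits in the runs above
BEGIN;
...
COMMIT;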


Just to complete the reference, the perl version runs as:
10:44:02 -- START
10:44:35 -- AFTER TEMP LOAD : loaded 120596 records
10:44:39 -- AFTER bayes_token INSERT : inserted 49359 new records into
bayes_token
10:44:41 -- AFTER bayes_vars UPDATE : updated 1 records
10:46:42 -- AFTER bayes_token UPDATE : updated 47537 records
DONE

My python version runs as:
00:22:54 -- START
00:23:00 -- AFTER TEMP LOAD : loaded 120596 records
00:23:03 -- AFTER bayes_token INSERT : inserted 49359 new records into
bayes_token
00:23:06 -- AFTER bayes_vars UPDATE : updated 1 records
00:25:04 -- AFTER bayes_token UPDATE : updated 47537 records
DONE

The python is effectively just a port of the perl code (with many lines
virtually unchanged), and really the only performance difference is that
the initial data load is much faster with a COPY.

This is all run on Ubuntu, with postgres 7.4.7, and a completely
unchanged postgresql.conf. (But the machine is a dual P4 2.4GHz, with
3GB of RAM).

John
=:-

Anyway, if anyone else has any ideas I'm all ears, but at this point
it's looking like raw query speed is needed for this app, and while I
don't care for mysql as a database, it does have the speed going for it.

schu


  






Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-30 Thread John Arbash Meinel
Matthew Schumacher wrote:

Tom Lane wrote:

  

I looked into this a bit.  It seems that the problem when you wrap the
entire insertion series into one transaction is associated with the fact
that the test does so many successive updates of the single row in
bayes_vars.  (VACUUM VERBOSE at the end of the test shows it cleaning up
49383 dead versions of the one row.)  This is bad enough when it's in
separate transactions, but when it's in one transaction, none of those
dead row versions can be marked fully dead yet --- so for every update
of the row, the unique-key check has to visit every dead version to make
sure it's dead in the context of the current transaction.  This makes
the process O(N^2) in the number of updates per transaction.  Which is
bad enough if you just want to do one transaction per message, but it's
intolerable if you try to wrap the whole bulk-load scenario into one
transaction.

I'm not sure that we can do anything to make this a lot smarter, but
in any case, the real problem is to not do quite so many updates of
bayes_vars.

How constrained are you as to the format of the SQL generated by
SpamAssassin?  In particular, could you convert the commands generated
for a single message into a single statement?  I experimented with
passing all the tokens for a given message as a single bytea array,
as in the attached, and got almost a factor of 4 runtime reduction
on your test case.

BTW, it's possible that this is all just a startup-transient problem:
once the database has been reasonably well populated, one would expect
new tokens to be added infrequently, and so the number of updates to
bayes_vars ought to drop off.

  regards, tom lane




The SpamAssassin bayes code calls the _put_token method in the storage
module in a loop.  This means that the storage module isn't called once
per message, but once per token.
  

Well, putting everything into a transaction per email might make your
pain go away.
If you saw the email I just sent, I modified your data.sql file to add a
COMMIT;BEGIN every 1000 selects, and I saw a performance jump from 18
minutes down to less than 2 minutes. Heck, on my machine, the advanced
perl version takes more than 2 minutes to run. It is actually slower
than the data.sql with commit statements.

I'll look into modifying it so that the bayes code passes a hash of
tokens to the storage module, where they can loop, or in the case of the
pgsql module pass an array of tokens to a procedure where we loop and
use temp tables to make this much more efficient.
  

Well, you could do that. Or you could just have the bayes code issue
BEGIN; when it starts processing an email, and a COMMIT; when it
finishes. From my testing, you will see an enormous speed improvement.
(And you might consider including a fairly frequent VACUUM ANALYZE)

I don't have much time this weekend to toss at this, but will be looking
at it on Monday.
  

Good luck,
John
=:-

Thanks,

schu


  






Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Gavin Sherry
On Thu, 28 Jul 2005, Matthew Schumacher wrote:

 Gavin Sherry wrote:

 
  I had a look at your data -- thanks.
 
  I have a question though: put_token() is invoked 120596 times in your
  benchmark... for 616 messages. That's nearly 200 queries (not even
  counting the 1-8 (??) inside the function itself) per message. Something
  doesn't seem right there
 
  Gavin

 I am pretty sure that's right because it is doing word statistics on
 email messages.

 I need to spend some time studying the code, I just haven't found time yet.

 Would it be safe to say that there aren't any glaring performance
 penalties other than the sheer volume of queries?

Well, everything relating to one message should be issued in a transaction
block. Secondly, the initial select may be unnecessary -- I haven't looked
at the logic that closely.

There is, potentially, some parser overhead. In C, you could get around
this with PQprepare() et al.

It would also be interesting to look at the cost of a C function.

Gavin



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Dennis Bjorklund
On Wed, 27 Jul 2005, Matthew Schumacher wrote:

 Then they do this to insert the token:
 
 INSERT INTO bayes_token (
   id,
   token,
   spam_count,
   ham_count,
   atime
 ) VALUES (
   ?,
   ?,
   ?,
   ?,
   ?
 ) ON DUPLICATE KEY
   UPDATE
 spam_count = GREATEST(spam_count + ?, 0),
 ham_count = GREATEST(ham_count + ?, 0),
 atime = GREATEST(atime, ?)
 
 Or update the token:
 
 UPDATE bayes_vars SET
   $token_count_update
   newest_token_age = GREATEST(newest_token_age, ?),
   oldest_token_age = LEAST(oldest_token_age, ?)
   WHERE id = ?
 
 
 I think the reason why the procedure was written for postgres was
 because of the greatest and least statements performing poorly.

How can they perform poorly when they are dead simple? Here are 2
functions that work for the above cases of GREATEST and LEAST:

CREATE FUNCTION greatest_int (integer, integer)
 RETURNS INTEGER
 IMMUTABLE STRICT
 AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'
 LANGUAGE SQL;

CREATE FUNCTION least_int (integer, integer)
 RETURNS INTEGER
 IMMUTABLE STRICT
 AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'
 LANGUAGE SQL;

and these should be inlined by pg and very fast to execute.

I wrote a function that should do what the insert above does. The update
I've not looked at (I don't know what $token_count_update is), but it
looks simple enough to implement the same way in pg as in mysql.

For the insert or replace case you can probably use this function:

CREATE FUNCTION insert_or_update_token (xid INTEGER,
xtoken BYTEA,
xspam_count INTEGER,
xham_count INTEGER,
xatime INTEGER)
RETURNS VOID AS
$$
BEGIN
   LOOP
 UPDATE bayes_token
SET spam_count = greatest_int (spam_count + xspam_count, 0),
ham_count  = greatest_int (ham_count + xham_count, 0),
atime  = greatest_int (atime, xatime)
  WHERE id = xid
AND token = xtoken;

  IF found THEN
 RETURN;
  END IF;

  BEGIN
INSERT INTO bayes_token VALUES (xid,
xtoken,
xspam_count,
xham_count,
xatime);
RETURN;
  EXCEPTION WHEN unique_violation THEN
-- do nothing
  END;
   END LOOP;
END;
$$
LANGUAGE plpgsql;

It's not really tested so I can't tell if it's faster than what you have.
What it does do is mimic the way you insert values in mysql. It only works
on pg 8.0 and later however, since the exception handling was added in 8.0.

-- 
/Dennis Björklund




Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Josh Berkus
Dennis,

       EXCEPTION WHEN unique_violation THEN

I seem to remember that catching an exception in a PL/pgSQL procedure was a 
large performance cost.   It'd be better to do UPDATE ... IF NOT FOUND.
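
I.e., in the style of the earlier functions, something like this sketch:

UPDATE bayes_token
   SET spam_count = greatest_int(spam_count + xspam_count, 0),
       ham_count  = greatest_int(ham_count + xham_count, 0),
       atime      = greatest_int(atime, xatime)
 WHERE id = xid AND token = xtoken;

IF NOT FOUND THEN
  INSERT INTO bayes_token VALUES (xid, xtoken, xspam_count,
                                  xham_count, xatime);
END IF;

The EXCEPTION block in Dennis's version only guards against the race
where another backend inserts the same token between the UPDATE and the
INSERT.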

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Matthew Schumacher
Andrew McMillan wrote:
 On Thu, 2005-07-28 at 16:13 -0800, Matthew Schumacher wrote:
 
Ok, I finally got some test data together so that others can test
without installing SA.

The schema and test dataset is over at
http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz

I have a pretty fast machine with a tuned postgres and it takes it about
2 minutes 30 seconds to load the test data.  Since the test data is the
bayes information on 616 spam messages than comes out to be about 250ms
per message.  While that is doable, it does add quite a bit of overhead
to the email system.
 
 
 On my laptop this takes:
 
 real    1m33.758s
 user    0m4.285s
 sys     0m1.181s
 
 One interesting effect is the data in bayes_vars has a huge number of
 updates and needs vacuum _frequently_.  After the run a vacuum full
 compacts it down from 461 pages to 1 page.
 
 Regards,
   Andrew.
 

I wonder why your laptop is so much faster.  My 2 min 30 sec test was
done on a dual xeon with a LSI megaraid with 128MB cache and writeback
caching turned on.

Here are my memory settings:

shared_buffers = 16384
work_mem = 32768
maintenance_work_mem = 65536

I tried higher values before I came back to these but it didn't help my
performance any.  I should also mention that this is a production
database server that was servicing other queries when I ran this test.

How often should this table be vacuumed, every 5 minutes?

Also, this test goes a bit faster with sync turned off; if mysql isn't
using sync, that would be why it's so much faster.  Anyone know what the
default for mysql is?

Thanks,
schu


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread John Arbash Meinel
Josh Berkus wrote:

Dennis,

  

  EXCEPTION WHEN unique_violation THEN



I seem to remember that catching an exception in a PL/pgSQL procedure was a 
large performance cost.   It'd be better to do UPDATE ... IF NOT FOUND.

  

Actually, he was doing an implicit UPDATE IF NOT FOUND in that he was doing:

UPDATE

IF found THEN return;

INSERT
EXCEPTION
...

So really, the exception should never be triggered.
John
=:-





Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread PFC




Also, this test goes a bit faster with sync turned off; if mysql isn't
using sync, that would be why it's so much faster.  Anyone know what the
default for mysql is?


	For InnoDB I think it's like Postgres (only slower); for MyISAM it's no
fsync, no transactions, no crash tolerance of any kind, and it's not a
default value (in the sense that you could tweak it); it's just the way
it's coded.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Andrew McMillan
On Fri, 2005-07-29 at 09:37 -0800, Matthew Schumacher wrote:
  
  On my laptop this takes:
  
  real    1m33.758s
  user    0m4.285s
  sys     0m1.181s
  
  One interesting effect is the data in bayes_vars has a huge number of
  updates and needs vacuum _frequently_.  After the run a vacuum full
  compacts it down from 461 pages to 1 page.
  
 
 I wonder why your laptop is so much faster.  My 2 min 30 sec test was
 done on a dual xeon with a LSI megaraid with 128MB cache and writeback
 caching turned on.

I only do development stuff on my laptop, and all of my databases are
reconstructable from copies, etc...  so I turn off fsync in this case.


 How often should this table be vacuumed, every 5 minutes?

I would be tempted to vacuum after each e-mail, in this case.


 Also, this test goes a bit faster with sync turned off; if mysql isn't
 using sync, that would be why it's so much faster.  Anyone know what the
 default for mysql is?

It depends on your table type for MySQL.

For the data in question (i.e. bayes scoring) it would seem that not
much would be lost if you did have to restore your data from a day old
backup, so perhaps fsync=false is OK for this particular application.
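
If you do try that, it is a one-line postgresql.conf change (a sketch for
an 8.0-era config; keep in mind that a crash with fsync off can corrupt
the cluster, not just lose recent transactions):

# postgresql.conf
fsync = false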

Regards,
Andrew McMillan.

-
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201  MOB: +64(272)DEBIAN  OFFICE: +64(4)499-2267
  What we wish, that we readily believe.
 -- Demosthenes
-





Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Tom Lane
Andrew McMillan [EMAIL PROTECTED] writes:
 On Fri, 2005-07-29 at 09:37 -0800, Matthew Schumacher wrote:
 How often should this table be vacuumed, every 5 minutes?

 I would be tempted to vacuum after each e-mail, in this case.

Perhaps the bulk of the transient states should be done in a temp table,
and only write into a real table when you're done?  Dropping a temp
table is way quicker than vacuuming it ...
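
A sketch of that idea, assuming the per-message work runs inside a single
transaction (table and column names are the ones used earlier in the
thread):

BEGIN;
CREATE TEMP TABLE bayes_token_tmp (intoken bytea) ON COMMIT DROP;
-- ... load the message's tokens, then run the set-based
-- UPDATE/INSERT statements against bayes_token_tmp ...
COMMIT;  -- the temp table vanishes here, so there is nothing to vacuum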

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-29 Thread Matthew Schumacher

Ok, here is where I'm at: I reduced the proc down to this:

CREATE FUNCTION update_token (_id INTEGER,
  _token BYTEA,
  _spam_count INTEGER,
  _ham_count INTEGER,
  _atime INTEGER)
RETURNS VOID AS
$$
BEGIN
  LOOP
UPDATE bayes_token
  SET spam_count = spam_count + _spam_count,
  ham_count  = ham_count + _ham_count,
  atime  = _atime
  WHERE id = _id
  AND token = _token;

IF found THEN
  RETURN;
END IF;

INSERT INTO bayes_token VALUES (_id, _token, _spam_count,
_ham_count, _atime);
IF FOUND THEN
  UPDATE bayes_vars SET token_count = token_count + 1 WHERE id = _id;
  IF NOT FOUND THEN
    -- RAISE EXCEPTION aborts the function, so no RETURN is needed here
    RAISE EXCEPTION 'unable to update token_count in bayes_vars';
  END IF;

  RETURN;
END IF;

RETURN;
  END LOOP;
END;
$$
LANGUAGE plpgsql;

All it's doing is trying the update before the insert to get around the
problem of not knowing which is needed.  With only 2-3 of the queries
implemented I'm already back to running about the same speed as the
original SA proc that is going to ship with SA 3.1.0.

All of the queries are using indexes so at this point I'm pretty
convinced that the biggest problem is the sheer number of queries
required to run this proc 200 times for each email (once for each token).

I don't see anything that could be done to make this much faster on the
postgres end; it's looking like the solution is going to involve cutting
down the number of queries somehow.
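
One hypothetical way to cut the query count: pass all of a message's
tokens as a single array argument, expand the array with a small
set-returning function, and update against it in one statement.  A sketch
(the function name is made up):

CREATE OR REPLACE FUNCTION array_to_rows(bytea[]) RETURNS SETOF bytea AS '
BEGIN
  FOR i IN array_lower($1, 1) .. array_upper($1, 1) LOOP
    RETURN NEXT $1[i];
  END LOOP;
  RETURN;
END;
' LANGUAGE plpgsql;

-- then, inside the proc, something like:
--   UPDATE bayes_token SET ... WHERE id = _id
--     AND token IN (SELECT * FROM array_to_rows(_tokens));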

One thing that is still very puzzling to me is why this runs so much
slower when I put the data.sql in a transaction.  Obviously transactions
act differently when you call a proc a zillion times vs. an insert
query.

Anyway, if anyone else has any ideas I'm all ears, but at this point
it's looking like raw query speed is needed for this app, and while I
don't care for mysql as a database, it does have the speed going for it.

schu

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0 Bayes module.

2005-07-28 Thread Tom Lane
Matthew Schumacher [EMAIL PROTECTED] writes:
 After playing with various indexes and what not I simply am unable to
 make this procedure perform any better.  Perhaps someone on the list can
 spot the bottleneck and reveal why this procedure isn't performing that
 well or ways to make it better.

There's not anything obviously wrong with that procedure --- all of the
updates are on primary keys, so one would expect reasonably efficient
query plans to get chosen.  Perhaps it'd be worth the trouble to build
the server with profiling enabled and get a gprof trace to see where the
time is going.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-28 Thread Karim Nassar
On Wed, 2005-07-27 at 14:35 -0800, Matthew Schumacher wrote:

 I put the rest of the schema up at
 http://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone
 needs to see it too.

Do you have sample data too?

-- 
Karim Nassar
Collaborative Computing Lab of NAU
Office: (928) 523 5868 -=- Mobile: (928) 699 9221
http://ccl.cens.nau.edu/~kan4


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0 Bayes module.

2005-07-28 Thread Merlin Moncure
 I'm not sure how much this has been discussed on the list, but wasn't
 able to find anything relevant in the archives.
 
 The new Spamassassin is due out pretty soon.  They are currently
 testing 3.1.0pre4.  One of the things I hope to get out of this
 release is bayes word stats moved to a real RDBMS.  They have
 separated the mysql BayesStore module from the PgSQL one so now
 postgres can use its own queries.
 
 I loaded all of this stuff up on a test server and am finding that the
 bayes put performance is really not good enough for any real amount of
 mail load.
 
 The performance problem seems to be when the bayes module is
 inserting/updating.  This is now handled by the token_put procedure.

1. you need high-performance client-side timing (sub 1 millisecond).  On
win32 use QueryPerformanceCounter.

2. one by one, convert queries inside your routine into dynamic
versions.  That is, use execute 'query string'

3. Identify the problem.  Something somewhere is not using the index.
Because of the way the planner works you have to do this sometimes.
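
On point 2, a sketch of what that conversion looks like inside plpgsql
(the dynamic form is planned on every execution instead of using the
cached plan, which can expose exactly this kind of index problem):

-- static, planned once per backend:
UPDATE bayes_vars SET token_count = token_count + 1 WHERE id = _id;

-- dynamic equivalent:
EXECUTE 'UPDATE bayes_vars SET token_count = token_count + 1 WHERE id = '
    || _id;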

Merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-28 Thread Matthew Schumacher
Karim Nassar wrote:
 On Wed, 2005-07-27 at 14:35 -0800, Matthew Schumacher wrote:
 
 
I put the rest of the schema up at
http://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone
needs to see it too.
 
 
 Do you have sample data too?
 

Ok, I finally got some test data together so that others can test
without installing SA.

The schema and test dataset is over at
http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz

I have a pretty fast machine with a tuned postgres and it takes about
2 minutes 30 seconds to load the test data.  Since the test data is the
bayes information on 616 spam messages, that comes out to about 250ms
per message.  While that is doable, it does add quite a bit of overhead
to the email system.

Perhaps this is as fast as I can expect it to go, if that's the case I
may have to look at mysql, but I really don't want to do that.

I will be working on some other benchmarks, and reading though exactly
how bayes works, but at least there is some data to play with.

schu

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-28 Thread Gavin Sherry
On Thu, 28 Jul 2005, Matthew Schumacher wrote:

 Karim Nassar wrote:
  On Wed, 2005-07-27 at 14:35 -0800, Matthew Schumacher wrote:
 
 
 I put the rest of the schema up at
 http://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone
 needs to see it too.
 
 
  Do you have sample data too?
 

 Ok, I finally got some test data together so that others can test
 without installing SA.

 The schema and test dataset is over at
 http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz

 I have a pretty fast machine with a tuned postgres and it takes about
 2 minutes 30 seconds to load the test data.  Since the test data is the
 bayes information on 616 spam messages, that comes out to about 250ms
 per message.  While that is doable, it does add quite a bit of overhead
 to the email system.

I had a look at your data -- thanks.

I have a question though: put_token() is invoked 120596 times in your
benchmark... for 616 messages. That's nearly 200 queries (not even
counting the 1-8 (??) inside the function itself) per message. Something
doesn't seem right there.

Gavin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-28 Thread Matthew Schumacher
Gavin Sherry wrote:

 
 I had a look at your data -- thanks.
 
 I have a question though: put_token() is invoked 120596 times in your
 benchmark... for 616 messages. That's nearly 200 queries (not even
 counting the 1-8 (??) inside the function itself) per message. Something
 doesn't seem right there
 
 Gavin

I am pretty sure that's right because it is doing word statistics on
email messages.

I need to spend some time studying the code, I just haven't found time yet.

Would it be safe to say that there aren't any glaring performance
penalties other than the sheer volume of queries?

Thanks,

schu

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-28 Thread Andrew McMillan
On Thu, 2005-07-28 at 16:13 -0800, Matthew Schumacher wrote:
 
 Ok, I finally got some test data together so that others can test
 without installing SA.
 
 The schema and test dataset is over at
 http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz
 
 I have a pretty fast machine with a tuned postgres and it takes about
 2 minutes 30 seconds to load the test data.  Since the test data is the
 bayes information on 616 spam messages, that comes out to about 250ms
 per message.  While that is doable, it does add quite a bit of overhead
 to the email system.

On my laptop this takes:

real    1m33.758s
user    0m4.285s
sys     0m1.181s

One interesting effect is the data in bayes_vars has a huge number of
updates and needs vacuum _frequently_.  After the run a vacuum full
compacts it down from 461 pages to 1 page.

Regards,
Andrew.

-
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201  MOB: +64(272)DEBIAN  OFFICE: +64(4)499-2267
  I don't do it for the money.
-- Donald Trump, Art of the Deal

-





Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0 Bayes module.

2005-07-27 Thread Josh Berkus
Matt,

 After playing with various indexes and what not I simply am unable to
 make this procedure perform any better.  Perhaps someone on the list can
 spot the bottleneck and reveal why this procedure isn't performing that
 well or ways to make it better.

Well, my first thought is that this is a pretty complicated procedure for 
something you want to perform well.  Is all this logic really necessary?
How does it get done for MySQL?

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-27 Thread Matthew Schumacher
Josh Berkus wrote:
 Matt,
 
 
After playing with various indexes and what not I simply am unable to
make this procedure perform any better.  Perhaps someone on the list can
spot the bottleneck and reveal why this procedure isn't performing that
well or ways to make it better.
 
 
 Well, my first thought is that this is a pretty complicated procedure for
 something you want to perform well.  Is all this logic really necessary?
 How does it get done for MySQL?
 

I'm not sure if it's all needed; in mysql they have this simple schema:

===
CREATE TABLE bayes_expire (
  id int(11) NOT NULL default '0',
  runtime int(11) NOT NULL default '0',
  KEY bayes_expire_idx1 (id)
) TYPE=MyISAM;

CREATE TABLE bayes_global_vars (
  variable varchar(30) NOT NULL default '',
  value varchar(200) NOT NULL default '',
  PRIMARY KEY  (variable)
) TYPE=MyISAM;

INSERT INTO bayes_global_vars VALUES ('VERSION','3');

CREATE TABLE bayes_seen (
  id int(11) NOT NULL default '0',
  msgid varchar(200) binary NOT NULL default '',
  flag char(1) NOT NULL default '',
  PRIMARY KEY  (id,msgid)
) TYPE=MyISAM;

CREATE TABLE bayes_token (
  id int(11) NOT NULL default '0',
  token char(5) NOT NULL default '',
  spam_count int(11) NOT NULL default '0',
  ham_count int(11) NOT NULL default '0',
  atime int(11) NOT NULL default '0',
  PRIMARY KEY  (id, token),
  INDEX bayes_token_idx1 (token),
  INDEX bayes_token_idx2 (id, atime)
) TYPE=MyISAM;

CREATE TABLE bayes_vars (
  id int(11) NOT NULL AUTO_INCREMENT,
  username varchar(200) NOT NULL default '',
  spam_count int(11) NOT NULL default '0',
  ham_count int(11) NOT NULL default '0',
  token_count int(11) NOT NULL default '0',
  last_expire int(11) NOT NULL default '0',
  last_atime_delta int(11) NOT NULL default '0',
  last_expire_reduce int(11) NOT NULL default '0',
  oldest_token_age int(11) NOT NULL default '2147483647',
  newest_token_age int(11) NOT NULL default '0',
  PRIMARY KEY  (id),
  UNIQUE bayes_vars_idx1 (username)
) TYPE=MyISAM;
===

Then they do this to insert the token:

INSERT INTO bayes_token (
  id,
  token,
  spam_count,
  ham_count,
  atime
) VALUES (
  ?,
  ?,
  ?,
  ?,
  ?
) ON DUPLICATE KEY
  UPDATE
spam_count = GREATEST(spam_count + ?, 0),
ham_count = GREATEST(ham_count + ?, 0),
atime = GREATEST(atime, ?)

Or update the token:

UPDATE bayes_vars SET
  $token_count_update
  newest_token_age = GREATEST(newest_token_age, ?),
  oldest_token_age = LEAST(oldest_token_age, ?)
  WHERE id = ?


I think the reason why the procedure was written for postgres was
because of the greatest and least statements performing poorly.

Honestly, I'm not real up on writing procs, I was hoping the problem
would be obvious to someone.

schu

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0 Bayes module.

2005-07-27 Thread Josh Berkus
Matt,

 UPDATE bayes_vars SET
   $token_count_update
   newest_token_age = GREATEST(newest_token_age, ?),
   oldest_token_age = LEAST(oldest_token_age, ?)
   WHERE id = ?


 I think the reason why the procedure was written for postgres was
 because of the greatest and least statements performing poorly.

Well, it might be because we don't have a built-in GREATEST or LEAST prior to 
8.1.   However, it's pretty darned easy to construct one.

 Honestly, I'm not real up on writing procs, I was hoping the problem
 would be obvious to someone.

Well, there's the general performance tuning stuff of course (postgresql.conf), 
which if you've not done any of it will pretty dramatically affect your 
throughput rates.  And vacuum, analyze, indexes, etc.

You should also look at ways to make the SP simpler.  For example, you have a 
cycle that looks like:

SELECT
IF NOT FOUND
INSERT
ELSE
UPDATE

Which could be made shorter as:

UPDATE
IF NOT FOUND
INSERT

... saving you one index scan.

Also, I don't quite follow it, but the procedure seems to be doing at least 
two steps that the MySQL version isn't doing at all.  If the PG version is 
doing more things, of course it's going to take longer.

Finally, when you have a proc you're happy with, I suggest having an expert 
re-write it in C, which should double the procedure performance.
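
For a sense of what that involves, a C version of one of the helper
functions is only a few lines (a sketch using the version-1 calling
convention; compile it into a shared library and adjust the path in
CREATE FUNCTION):

#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(greatest_int_c);

Datum
greatest_int_c(PG_FUNCTION_ARGS)
{
    /* fetch the two integer arguments and return the larger one */
    int32   a = PG_GETARG_INT32(0);
    int32   b = PG_GETARG_INT32(1);

    PG_RETURN_INT32((a > b) ? a : b);
}

-- and on the SQL side:
CREATE FUNCTION greatest_int_c(integer, integer) RETURNS integer
    AS '/path/to/greatest_int', 'greatest_int_c'
    LANGUAGE C IMMUTABLE STRICT;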

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0

2005-07-27 Thread Matthew Schumacher
Josh Berkus wrote:
 Matt,
 

 Well, it might be because we don't have a built-in GREATEST or LEAST prior to 
 8.1.   However, it's pretty darned easy to construct one.

I was more talking about min() and max(), but yeah, I think you knew where
I was going with it...

 
 Well, there's the general performance tuning stuff of course
 (postgresql.conf), which if you've not done any of it will pretty
 dramatically affect your throughput rates.  And vacuum, analyze,
 indexes, etc.

I have gone though all that.

 You should also look at ways to make the SP simpler.  For example, you have a 
 cycle that looks like:
 
 SELECT
   IF NOT FOUND
   INSERT
   ELSE
   UPDATE
 
 Which could be made shorter as:
 
 UPDATE
   IF NOT FOUND
   INSERT
 
 ... saving you one index scan.
 
 Also, I don't quite follow it, but the procedure seems to be doing at least 
 two steps that the MySQL version isn't doing at all.  If the PG version is 
 doing more things, of course it's going to take longer.
 
 Finally, when you have a proc you're happy with, I suggest having an expert 
 re-write it in C, which should double the procedure performance.
 

Sounds like I need to completely understand what the proc is doing and
work on a rewrite.  I'll look into writing it in C; I need to do some
reading about how that works and exactly what it buys you.

Thanks for the helpful comments.

schu

---(end of broadcast)---
TIP 6: explain analyze is your friend