First of all, a little background.

We have a table which is used as a trigger table for entering and
processing data for a network monitoring system.

Essentially, we insert a set of rows into the table, and each row fires
a trigger function which calls a very large stored procedure that
aggregates the data, etc.  At that point, the row is deleted from the
temp table.
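
As a rough sketch of the shape of the trigger function (fn_aggregate_test
below is just a hypothetical stand-in for the real, much larger
aggregation procedure):

CREATE OR REPLACE FUNCTION fn_testtrigger() RETURNS trigger AS $$
BEGIN
  -- Stand-in for the real aggregation procedure (hypothetical name).
  PERFORM fn_aggregate_test(NEW.testhash, NEW.testtime,
                            NEW.statusid, NEW.replyval, NEW.groupid);
  -- Remove the processed row from the trigger table.
  DELETE FROM tbltmptests WHERE tmptestsysid = NEW.tmptestsysid;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;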

Currently, records are transferred from the data collector as a series
of multi-row inserts.
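
Each batch looks something like this (made-up values), with the serial
key omitted so the sequence assigns it:

INSERT INTO tbltmptests (testhash, testtime, statusid, replytxt, replyval, groupid)
VALUES ('0123456789abcdef0123456789abcdef', now(), 1, 'OK', 0.42, 7),
       ('fedcba9876543210fedcba9876543210', now(), 2, 'timeout', NULL, 7);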

Before going through the exercise of recoding, and given the fact that
each of these inserts fires off a trigger, will I see any noticeable
performance gain?

The table definition follows:

CREATE TABLE tbltmptests
(
  tmptestsysid bigserial NOT NULL,
  testhash character varying(32),
  testtime timestamp with time zone,
  statusid integer,
  replytxt text,
  replyval real,
  groupid integer,
  CONSTRAINT tbltmptests_pkey PRIMARY KEY (tmptestsysid)
)
WITH (
  OIDS=FALSE
);

ALTER TABLE tbltmptests OWNER TO postgres;

-- Trigger: tbltmptests_tr on tbltmptests

-- DROP TRIGGER tbltmptests_tr ON tbltmptests;

CREATE TRIGGER tbltmptests_tr
  AFTER INSERT
  ON tbltmptests
  FOR EACH ROW
  EXECUTE PROCEDURE fn_testtrigger();

Another question - is there anything special we need to do to handle the
primary key field?
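
To make that concrete (values made up): is it enough to rely on the
bigserial default, either by leaving tmptestsysid out of the column list
as above, or by writing DEFAULT explicitly?

INSERT INTO tbltmptests (tmptestsysid, testhash, testtime, statusid, replytxt, replyval, groupid)
VALUES (DEFAULT, '00112233445566778899aabbccddeeff', now(), 1, 'OK', 0.17, 3);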

Now, on a related note and looking forward to the streaming replication
in v9, will this work with it, since we have multiple tables being
updated by a trigger function?