> I don't think your idea would work for a concurrent user setup where
> people have different transactions started at different times with
> different amounts of changes inside each transaction.
>
> That's why it would have to be tracked on a "per connection" basis for
> all the tables.

I tried it out with concurrent connections and it seemed to hold up just 
fine. I think MVCC took care of everything. Each transaction got a different 
count depending on whether it could see the inserted rows or not. Once the 
inserting transaction committed, all transactions could see the new count.

Can you provide a case where it wouldn't?

I imagine this causes some major performance issues (every writer to c1 has 
to update the same tuple_count row), not to mention the dead tuples would 
pile up fast, but the counts themselves seem to come out right.
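
The concurrent behavior I saw looks roughly like this (two sessions at the
default read committed level; sketched from memory, exact values depend on
timing):

	-- session A
	BEGIN;
	INSERT INTO c1 VALUES (1);        -- trigger bumps tuple_count
	SELECT tuples FROM tuple_count;   -- A sees its own incremented count

	-- session B
	SELECT tuples FROM tuple_count;   -- B still sees the old count

	-- session A
	COMMIT;                           -- B's next SELECT now sees the new count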

My SQL is below.

Regards,
        Jeff

jdavis=> create table tuple_count(tuples int);
CREATE
jdavis=> create table c1(a int);
CREATE
jdavis=> create function f1() returns opaque as '
jdavis'> BEGIN
jdavis'>      UPDATE tuple_count set tuples=tuples+1;
jdavis'>      RETURN NEW;
jdavis'> END;
jdavis'> ' language 'plpgsql';
CREATE
jdavis=> create function f2() returns opaque as '
jdavis'> BEGIN
jdavis'>      UPDATE tuple_count set tuples=tuples-1;
jdavis'>      RETURN NEW;
jdavis'> END;
jdavis'> ' language 'plpgsql';
CREATE
jdavis=> create trigger t1 after insert on c1 for each row execute procedure 
f1();
CREATE
jdavis=> create trigger t2 after delete on c1 for each row execute procedure 
f2();
CREATE
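
One caveat I should mention: since the triggers only UPDATE an existing row,
tuple_count has to be seeded with a starting row before any of this does
anything (an UPDATE against an empty table matches zero rows). Something
like:

jdavis=> insert into tuple_count values (0);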
