Hello everyone, I'm creating two database tables that will be used to cache the results of a search. Basically, when a search is initiated, an entry will be created in the "Search" table representing that search, and a single entry will be created in a child table, "SearchResults", for every result returned. A foreign key relationship will link the two.
CREATE TABLE Search (
    id         SERIAL PRIMARY KEY,
    accountid  INTEGER REFERENCES account(id) ON DELETE CASCADE NOT NULL,
    sessionnum CHAR(32) UNIQUE NOT NULL,
    created    TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
    sqlcode    TEXT
);

CREATE TABLE SearchResults_Customer (
    id         SERIAL PRIMARY KEY,
    searchid   INTEGER REFERENCES search(id) ON DELETE CASCADE NOT NULL,
    customerid INTEGER REFERENCES customer(id) ON DELETE CASCADE
    -- All the result fields go here
);

Now, when any record in the SearchResults table is deleted (via an ON DELETE CASCADE, or some other trigger), I'd like the entire search set to be deleted, since the search is now invalid. In other words, if a single record in the SearchResults table is deleted, I want to instead delete the associated record in the Search table; that will cascade back into the SearchResults table, toasting my entire result set.

The problem I'm looking at is: could this cause a recursion problem, where the cascading deletion tries to make the whole thing cascade again? How can I set this up so that I can kill an entire tree of data if any one of its members dies?

Thanks in advance.

-- 
/* Michael A. Nachbaur <[EMAIL PROTECTED]>
 * http://nachbaur.com/pgpkey.asc
 */

"I don't know," said the voice on the PA, "apathetic bloody planet, I've no sympathy at all."
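P.S. In case it helps to see what I'm picturing, here is a rough, untested sketch of the kind of trigger I have in mind. The function and trigger names are just placeholders I made up, and this assumes plpgsql is installed in the database:

CREATE OR REPLACE FUNCTION purge_search_on_result_delete() RETURNS trigger AS '
BEGIN
    -- Delete the parent Search row; the ON DELETE CASCADE on searchid
    -- should then wipe out the rest of the result set. Whether that
    -- cascade re-fires this trigger is exactly the recursion question above.
    DELETE FROM search WHERE id = OLD.searchid;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER searchresults_purge_trig
    AFTER DELETE ON SearchResults_Customer
    FOR EACH ROW
    EXECUTE PROCEDURE purge_search_on_result_delete();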