On Tue, 29 Mar 2005, Stephan Szabo wrote:

> On Tue, 29 Mar 2005, Simon Riggs wrote:
> > On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:
> > > > Each value has 1/13th of the table, which is too many rows per value to
> > > > make an IndexScan an efficient way of deleting rows from the table.
> > >
> > > But the original question was about a delete on a different table that
> > > was taking a long time. I tried to delete 150 rows from a table with 750
> > > rows, which is FK-referenced from this large table. If I understand
> > > correctly, Tom suggested that the length of time was due to a sequential
> > > scan being done on the large table for each value being deleted from the
> > > small one.
> >
> > > For this FK check, there need be only one referring id to invalidate
> > > the delete. ISTM that for any delete with an FK reference, the index
> > > could always be used to search for a single value in the referring
> > > table (excepting very small tables). Why, then, must a sequential scan
> > > be performed in this case, and/or in general?
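
For concreteness, the single-row probe being described might look like
this (the station/measurement schema here is a hypothetical stand-in for
the actual tables):

    CREATE TABLE station (
        station_id integer PRIMARY KEY
    );
    CREATE TABLE measurement (
        station_id integer,   -- FK to station, declared further below
        value      numeric
    );
    CREATE INDEX measurement_station_idx ON measurement (station_id);

    -- One matching row is enough to invalidate the DELETE, so a probe
    -- like this could stop at the first hit found via the index:
    SELECT 1 FROM measurement WHERE station_id = 42 LIMIT 1;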
> >
> > My understanding was that you were doing a DELETE on the smaller table,
> > and that this was doing a DELETE on the measurement table because you
> > had the FK defined as ON DELETE CASCADE. You are right - only a single
> > row is sufficient to RESTRICT the DELETE. But for an ON UPDATE/DELETE
> > action of CASCADE, you will want to touch all referring rows, so a
> > SeqScan is a perfectly valid consequence of such actions.
> > I think now that you are using the default action rather than
> > specifically requesting CASCADE?
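
To make Simon's distinction concrete with the same hypothetical schema,
these are the two possible declarations (only one of which would actually
be in effect):

    -- CASCADE: the DELETE on station must touch *every* referring row,
    -- so a SeqScan over 1/13th of a large table is a reasonable plan:
    ALTER TABLE measurement
        ADD CONSTRAINT measurement_station_fk
        FOREIGN KEY (station_id) REFERENCES station (station_id)
        ON DELETE CASCADE;

    -- The default (NO ACTION): one surviving referring row is enough to
    -- raise the FK violation, so a single indexed probe would suffice:
    ALTER TABLE measurement
        ADD CONSTRAINT measurement_station_fk
        FOREIGN KEY (station_id) REFERENCES station (station_id);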
> >
> > Stephan, Tom:
> > The SQL generated for RI checking by the RI triggers currently applies a
> > limit at execution time, not at prepare time; i.e., there is no LIMIT
> > clause in the SQL itself.
> >
> > We know whether the check will be LIMIT 1 or LIMIT 0 at prepare time, so
> > why not add a LIMIT clause to the SQL, so that it changes the plan, not
> > just the number of rows returned when the check query executes?
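
For reference, the no-action check the triggers issue is roughly of this
shape (hypothetical names again), with any limit applied only when the
prepared plan is executed:

    SELECT 1 FROM ONLY measurement x
        WHERE x.station_id = $1 FOR UPDATE OF x;

Simon's suggestion amounts to preparing instead:

    SELECT 1 FROM ONLY measurement x
        WHERE x.station_id = $1 LIMIT 1 FOR UPDATE OF x;

so that the planner's startup-cost comparison can favor an index probe
over a full scan.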
> Because, IIRC, FOR UPDATE and LIMIT historically didn't play nicely
> together: if the first row was locked, the FOR UPDATE would wait on it,
> but if that row was then deleted by the other transaction, you could get
> 0 rows back in the trigger even though other matching rows existed.
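
Sketched against that same hypothetical check, the hazard is:

    -- Session A (the RI trigger's check, prepared with LIMIT 1):
    SELECT 1 FROM ONLY measurement x
        WHERE x.station_id = $1 LIMIT 1 FOR UPDATE OF x;
    -- ... blocks on a row lock held by session B.

    -- Session B deletes that very row and commits.

    -- Session A wakes up: its one chosen row is now dead, and since the
    -- LIMIT has already been satisfied by that row, the query can return
    -- 0 rows, as if no referring rows existed at all.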

If there were some way to pass a "limit" into SPI_prepare that was treated
similarly to a LIMIT clause for planning purposes, but didn't actually
change the output plan to only return that number of rows, we could use
that here safely.
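
In effect, the planner would then cost the statement as though it read:

    -- costed as if written with the limit:
    SELECT 1 FROM ONLY measurement x
        WHERE x.station_id = $1 LIMIT 1 FOR UPDATE OF x;

    -- but executed with the semantics of the unlimited form:
    SELECT 1 FROM ONLY measurement x
        WHERE x.station_id = $1 FOR UPDATE OF x;

giving the index-friendly plan without reintroducing the FOR UPDATE/LIMIT
interaction described above (no such SPI_prepare parameter exists today,
of course; that is just the intended effect).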
