On 2022-04-16 08:25:56 -0500, Perry Smith wrote:
> Currently I have one table that mimics a file system. Each entry has
> a parent_id and a base name where parent_id is an id in the table that
> must exist in the table or be null with cascade on delete.
>
> I’ve started a delete of a root entry with about 300,000 descendants.
> The table currently has about 22M entries and I’m adding about 1600
> entries per minute still. Eventually there will not be massive
> amounts of entries being added and the table will be mostly static.
>
> I started the delete before from a terminal that got detached. So I
> killed that process and started it up again from a terminal less
> likely to get detached.
>
> My question is basically how can I make life easier for Postgres?
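For concreteness, I'm assuming a schema roughly like this (table,
column, and index names are my invention):

-- Hypothetical DDL matching the description above: each entry points
-- at its parent, and deleting a parent cascades to its descendants.
CREATE TABLE fs_entry (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    parent_id bigint REFERENCES fs_entry (id) ON DELETE CASCADE,
    basename  text NOT NULL
);

-- PostgreSQL does not automatically index the referencing side of a
-- foreign key. Without an index on parent_id, every cascaded delete
-- has to scan the table to find the children of each row it removes,
-- which makes large subtree deletes very slow.
CREATE INDEX ON fs_entry (parent_id);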
Deleting 300k rows doesn't sound that bad. Neither does recursively
finding those 300k rows, although if you have a very biased distribution
(many nodes with only a few children, but some with hundreds of
thousands or even millions of children), PostgreSQL may not find a good
plan.
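If you want to check how skewed the tree actually is, a recursive
query can measure the subtree you are about to delete (again assuming
the names above; the id is a placeholder):

-- Size of the subtree rooted at one entry.
WITH RECURSIVE subtree AS (
    SELECT id FROM fs_entry WHERE id = 12345
    UNION ALL
    SELECT c.id
    FROM fs_entry c
    JOIN subtree p ON c.parent_id = p.id
)
SELECT count(*) FROM subtree;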
So, as almost always when performance is an issue:
* What exactly are you doing?
* What is the execution plan? (One way to capture it safely is
  sketched after this list.)
* How long does it take?
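For the plan, you can run the delete under EXPLAIN ANALYZE inside a
transaction and roll it back, so nothing is actually removed (a
sketch, using the assumed names from above):

BEGIN;
-- ANALYZE executes the statement for real, so the ROLLBACK matters.
EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM fs_entry WHERE id = 12345;   -- placeholder root id
ROLLBACK;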
hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | [email protected]         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
