Hi Andres,
thanks a lot for your reply. Unfortunately I've found out it hasn't
really started the DROP TABLE yet; it's blocked on an autovacuum running
on that table, and even if I kill the autovacuum process it starts again
and again.

Is there any way to run the DROP TABLE and bypass/disable autovacuum
entirely? Please note that "autovacuum = off" is already set in the
config file.
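For reference, a minimal sketch of how the autovacuum worker can be located and terminated from another session (run as superuser; "big_table" is a placeholder for the actual table name). Note that an autovacuum launched "to prevent wraparound" runs even with autovacuum = off and is not auto-cancelled by a conflicting lock request:

```sql
-- Find the autovacuum worker operating on the table.
SELECT pid, query, backend_start
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%big_table%';

-- Terminate it so the DROP TABLE can acquire its lock.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%big_table%';
```

If the worker immediately restarts, checking whether its query text contains "(to prevent wraparound)" tells you whether it is the anti-wraparound kind.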

Thanks a lot,

On 01/12/2016 12:05 PM, Andres Freund wrote:
> Hi Michal,
> This isn't really a question for -hackers, the list for postgres
> development. -general or -performance would have been more appropriate.
> On 2016-01-12 11:57:05 +0100, Michal Novotny wrote:
>> I've discovered an issue with dropping a large table (~5T). I was
>> thinking DROP TABLE was a fast operation, but I found out my
>> assumption was wrong.
> What exactly did you do, and how long did it take? Is there any chance
> you were actually waiting for the lock on that large table, instead of
> waiting for the actual execution?
>> Is there any way to tune it so a large table drops in a matter of
>> seconds or minutes? Any configuration variable in postgresql.conf or
>> any tuning options available?
> The time for dropping a table primarily is spent on three things:
> 1) Acquiring the exclusive lock. How long this takes entirely depends
>    on the concurrent activity. If there's a long-running session using
>    that table, it'll take until that session is finished.
> 2) The cached portion of that table needs to be evicted from cache.
>    How long that takes depends on the size of shared_buffers - but
>    usually this is a relatively short amount of time, and only matters
>    if you drop many, many relations.
> 3) The time the filesystem takes to actually remove the files - in
>    your case roughly 5000 1GB segments. This will take a while, but
>    shouldn't take minutes.
> Greetings,
> Andres Freund
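To see whether the DROP TABLE is stuck on point 1 (waiting for the lock) rather than actually executing, a query along these lines shows which session holds the conflicting lock (a sketch using a self-join on pg_locks; "big_table" is again a placeholder):

```sql
-- List sessions holding a granted lock on the table while another
-- session waits for one; the blocking query is shown alongside.
SELECT blocked.pid        AS blocked_pid,
       blocking.pid       AS blocking_pid,
       act.query          AS blocking_query
FROM pg_locks blocked
JOIN pg_locks blocking
  ON  blocking.locktype = blocked.locktype
  AND blocking.relation = blocked.relation
  AND blocking.granted
  AND NOT blocked.granted
JOIN pg_stat_activity act ON act.pid = blocking.pid
WHERE blocked.relation = 'big_table'::regclass;
```

If this returns the autovacuum worker as the blocking pid, the DROP TABLE never started deleting files and the time spent so far was all lock waiting.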

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)