A table of 800 GB means about 800 files of 1 GB each. When I run TRUNCATE or
DROP TABLE, XFS, which is a journaling filesystem, writes a lot of data to
its journal, and that is the problem. The problem is not Postgres; it is the
way XFS works with big files, or more precisely, the way it handles
unlinking lots of files at once.
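
As a rough sketch of why there are so many unlinks: PostgreSQL stores a large relation as a base file plus numbered 1 GB segment files. The snippet below just enumerates those names; the base path `base/16384/24576` is a made-up example of the kind of path `pg_relation_filepath()` returns, not output from a real cluster:

```python
def segment_files(base_path, table_bytes, segment_bytes=1 << 30):
    """List the on-disk segment files PostgreSQL would use for a relation.

    PostgreSQL splits a relation into segments (1 GB by default):
    <relfilenode>, <relfilenode>.1, <relfilenode>.2, ...
    """
    n_segments = -(-table_bytes // segment_bytes)  # ceiling division
    return [base_path] + [f"{base_path}.{i}" for i in range(1, n_segments)]

# An 800 GB table maps to 800 segment files, each of which must be
# unlinked on DROP TABLE -- and each unlink goes through the XFS journal.
files = segment_files("base/16384/24576", 800 * (1 << 30))
print(len(files))  # 800
```

So a single DROP TABLE turns into hundreds of metadata operations for the filesystem journal, which is where the time goes.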

Regards,
Joao

On Thu, Sep 19, 2019, 18:50 Andreas Kretschmer <andr...@a-kretschmer.de>
wrote:

>
>
> Am 19.09.19 um 17:59 schrieb Joao Junior:
> >
> >
> > I have a table that is not being used anymore, and I want to drop it.
> > The table is huge, around 800 GB, and it has some indexes on it.
> >
> > When I execute the DROP TABLE command it runs very slowly; I realised
> > that the problem is the filesystem.
> > It seems that XFS doesn't handle big files well; there has been some
> > discussion about this on a few lists.
>
> PG doesn't create one big file for this table, but about 800 files with
> 1GB size each.
>
> >
> > I have to find a way to delete the table in chunks.
>
> Why? If you want to delete all rows, just use TRUNCATE.
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>