Re: [PERFORM] How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime?

2015-05-22 Thread Jim Nasby

On 5/11/15 4:55 AM, Muthusamy, Sivaraman wrote:

Hi Group,

Facing a problem where pg_catalog.pg_largeobject has been growing fast
recently, in the last two weeks. The actual data itself, in user tables, is
about 60GB, but the pg_catalog.pg_largeobject table is 200GB plus. Please
let me know how to clean/truncate this table without losing any user
data in other tables.


Autovacuum should be taking care of it for you, though you could also 
try a manual vacuum (VACUUM pg_largeobject;).
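
For example, something along these lines (a rough sketch; vacuuming a
system catalog like pg_largeobject requires superuser) will show how big
the table is, reclaim dead rows for reuse, and show whether autovacuum
has been getting to it:

  -- Total size of pg_largeobject including its index
  SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));

  -- Make dead rows reusable (does not shrink the file on disk)
  VACUUM VERBOSE pg_largeobject;

  -- When was it last vacuumed, and how many dead tuples remain?
  SELECT relname, last_vacuum, last_autovacuum, n_dead_tup
    FROM pg_stat_all_tables
   WHERE relname = 'pg_largeobject';

Note that plain VACUUM only frees space for reuse within the table; to
actually shrink the file on disk you'd need VACUUM FULL, which takes an
exclusive lock for the duration.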



With regards to this pg_largeobject, I have the following questions:

-What is this pg_largeobject?


It stores large objects; see 
http://www.postgresql.org/docs/9.4/static/lo-interfaces.html
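
Roughly speaking, every large object created through the lo_* interface
ends up as rows in that catalog. A small illustration (a sketch;
lo_from_bytea is available in 9.4):

  -- Create a large object from a bytea value; returns its OID (loid)
  SELECT lo_from_bytea(0, 'some payload'::bytea);

  -- One row per large object lives here ...
  SELECT count(*) FROM pg_largeobject_metadata;

  -- ... and the actual data pages live here
  SELECT count(*) FROM pg_largeobject;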



-What does it contain? I tried the PostgreSQL documentation and lists, but
could not get much from them.

-Why does it grow?

-Was there any configuration change that may have triggered this
growth? For the last year or so there was no problem, but it started
growing all of a sudden in the last two weeks. The only changes we made
in the last two weeks were scheduling a nightly base backup and
enabling the auto-vacuum feature.


Changes to autovacuum settings could certainly cause this. 
Long-running transactions would prevent cleanup, as would any prepared 
transactions (which should really be disabled unless you explicitly need 
them).
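
You can check for both quickly with something like this (a sketch; the
one-hour cutoff is arbitrary):

  -- Transactions that have been open a long time and can hold back vacuum
  SELECT pid, state, xact_start, query
    FROM pg_stat_activity
   WHERE xact_start < now() - interval '1 hour'
   ORDER BY xact_start;

  -- Forgotten prepared transactions also pin the cleanup horizon
  SELECT gid, prepared, owner, database FROM pg_prepared_xacts;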



-pg_largeobject contains many rows with duplicate loids. There are only
about 0.6 million distinct LOIDs, but the total number of rows,
including duplicates, is about 59 million. What are all these?


Each row can only be ~2KB wide, so any LO that's larger than that will 
be split into multiple rows.
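
That's why the row count is so much larger than the number of LOIDs. You
can see the split directly, e.g. (a sketch; reading pg_largeobject
requires superuser):

  -- Number of ~2KB pages per large object; a 1MB object shows up as ~512 rows
  SELECT loid, count(*) AS pages
    FROM pg_largeobject
   GROUP BY loid
   ORDER BY pages DESC
   LIMIT 10;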

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime?

2015-05-12 Thread Muthusamy, Sivaraman
Hi Group,

Facing a problem where pg_catalog.pg_largeobject has been growing fast 
recently, in the last two weeks. The actual data itself, in user tables, is about 
60GB, but the pg_catalog.pg_largeobject table is 200GB plus. Please let me know 
how to clean/truncate this table without losing any user data in other tables.

With regards to this pg_largeobject, I have the following questions:


-  What is this pg_largeobject?

-  What does it contain? I tried the PostgreSQL documentation and lists, but 
could not get much from them.

-  Why does it grow?

-  Was there any configuration change that may have triggered this growth? 
For the last year or so there was no problem, but it started growing all of a 
sudden in the last two weeks. The only changes we made in the last two weeks were 
scheduling a nightly base backup and enabling the auto-vacuum feature.

-  pg_largeobject contains many rows with duplicate loids. There are only 
about 0.6 million distinct LOIDs, but the total number of rows including 
duplicates is about 59 million. What are all these?

Kindly help with this information and with getting this issue resolved; your 
quick help is appreciated.

Thanks and Regards
M.Shiva