Excerpts from Sergio Gabriel Rodriguez's message of Thu Sep 20 09:04:46 -0300 2012:

> Our production database, postgres 8.4, is about 200 GB in size; most of
> the data is large objects (174 GB). Until a few months ago we used
> pg_dump for backups, and the whole process took around 3-4 hours. Some
> time ago the process became interminable, taking one or two days to
> complete. We noticed that the process slowed down considerably when the
> backup of the large objects began, so we had to switch to physical
> backups.

Hmm, something similar was discussed on pgsql-hackers and resulted in
this commit. Perhaps it helps explain the problem.

Author: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Branch: master [eeb6f37d8] 2012-06-21 15:30:26 +0300

    Add a small cache of locks owned by a resource owner in ResourceOwner.

    This speeds up reassigning locks to the parent owner, when the
    transaction holds a lot of locks, but only a few of them belong to
    the current resource owner. This particularly helps pg_dump when
    dumping a large number of objects.

    The cache can hold up to 15 locks in each resource owner. After
    that, the cache is marked as overflowed, and we fall back to the old
    method of scanning the whole local lock table. The tradeoff here is
    that the cache has to be scanned whenever a lock is released, so if
    the cache is too large, lock release becomes more expensive. 15
    seems enough to cover pg_dump, and doesn't have much impact on lock
    release.

    Jeff Janes, reviewed by Amit Kapila and Heikki Linnakangas.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
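
To make the mechanism concrete, here is a minimal, self-contained C
sketch of the cache-with-overflow idea the commit describes. This is
not PostgreSQL's actual ResourceOwner code; the struct and function
names (ResourceOwnerSketch, remember_lock, etc.) are invented for
illustration, but the behavior mirrors the commit message: a tiny
per-owner array of locks, an overflow flag once the 16th lock arrives,
and a fast path for reassigning locks to the parent when the cache has
not overflowed.

#include <stdbool.h>
#include <stdio.h>

#define MAX_RESOWNER_LOCKS 15      /* same limit the commit chose */

typedef int LockHandle;            /* stand-in for a local lock entry */

typedef struct ResourceOwnerSketch
{
    LockHandle locks[MAX_RESOWNER_LOCKS]; /* small per-owner cache */
    int        nlocks;                    /* entries currently in use */
    bool       overflowed;                /* too many locks to track */
} ResourceOwnerSketch;

/* Remember a lock in the owner's cache, or mark the cache overflowed. */
static void
remember_lock(ResourceOwnerSketch *owner, LockHandle lock)
{
    if (owner->overflowed)
        return;                    /* already past the cache's capacity */
    if (owner->nlocks < MAX_RESOWNER_LOCKS)
        owner->locks[owner->nlocks++] = lock;
    else
        owner->overflowed = true;  /* fall back to full-table scans */
}

/* Forget a released lock; a linear scan of at most 15 entries. */
static void
forget_lock(ResourceOwnerSketch *owner, LockHandle lock)
{
    for (int i = 0; i < owner->nlocks; i++)
    {
        if (owner->locks[i] == lock)
        {
            owner->locks[i] = owner->locks[--owner->nlocks];
            return;
        }
    }
    /* Not found: fine if overflowed, since we stopped tracking then. */
}

/*
 * Reassign this owner's locks to its parent.  The win: when the cache
 * has not overflowed, we touch only the few cached locks instead of
 * scanning every lock held by the whole transaction.
 */
static void
reassign_locks(ResourceOwnerSketch *owner, ResourceOwnerSketch *parent)
{
    if (owner->overflowed)
    {
        /* Slow path: scan the entire local lock table (omitted here). */
        return;
    }
    for (int i = 0; i < owner->nlocks; i++)
        remember_lock(parent, owner->locks[i]);
    owner->nlocks = 0;
}

int
main(void)
{
    ResourceOwnerSketch child = {0}, parent = {0};

    for (LockHandle h = 1; h <= 10; h++)
        remember_lock(&child, h);
    forget_lock(&child, 3);        /* release one lock early */
    reassign_locks(&child, &parent);
    printf("parent now tracks %d locks (overflowed: %d)\n",
           parent.nlocks, (int) parent.overflowed);
    return 0;
}

The design choice the commit message points at shows up in
forget_lock: every lock release scans the cache, so making the cache
much larger would slow down release; 15 entries is the compromise that
still covers pg_dump's per-object lock pattern.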
> Nuestra base de datos de produccion, postgres 8.4 tiene un tamaño > aproximado de 200 GB, la mayoría de los datos son large objects (174 GB), > hasta hace algunos meses utilizábamos pg_dump para realizar los backups, > tomaba alrededor de 3 - 4 horas realizar todo el proceso. Hace un tiempo el > proceso se volvió interminable, tomaba uno o dos días realizarlo, notamos > que el proceso decía considerablemente al comenzar el backup de los large > object, por lo que tuvimos que optar por backups físicos. Hmm, se discutió algo parecido en pgsql-hackers y resultó en este commit. Quizás te ayude a explicar el problema. Author: Heikki Linnakangas <heikki.linnakan...@iki.fi> Branch: master [eeb6f37d8] 2012-06-21 15:30:26 +0300 Add a small cache of locks owned by a resource owner in ResourceOwner. This speeds up reassigning locks to the parent owner, when the transaction holds a lot of locks, but only a few of them belong to the current resource owner. This is particularly helps pg_dump when dumping a large number of objects. The cache can hold up to 15 locks in each resource owner. After that, the cache is marked as overflowed, and we fall back to the old method of scanning the whole local lock table. The tradeoff here is that the cache has to be scanned whenever a lock is released, so if the cache is too large, lock release becomes more expensive. 15 seems enough to cover pg_dump, and doesn't have much impact on lock release. Jeff Janes, reviewed by Amit Kapila and Heikki Linnakangas. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services - Enviado a la lista de correo pgsql-es-ayuda (pgsql-es-ayuda@postgresql.org) Para cambiar tu suscripci�n: http://www.postgresql.org/mailpref/pgsql-es-ayuda