[ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread Sergey Klochkov

Hello All,

While trying to back up a database of relatively modest size (160 GB), I 
ran into the following issue:


When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

File /path/to/mydb.dmp does not appear (yes, I've checked permissions 
and so on). pg_dump just begins to consume memory until it eats up all 
available RAM (96 GB total on the server, 64 GB available) and is killed by 
the OOM killer.


According to pg_stat_activity, pg_dump runs the following query

SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner) AS rolname, lomacl FROM pg_largeobject_metadata


until it is killed.

strace shows that pg_dump is constantly reading a large amount of data 
from a UNIX socket. I suspect that it is the result of the above query.


There are 300000000 large objects in the database. Please don't ask me why.
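
For reference, the scale can be confirmed with a count over the same catalog pg_dump is querying (a quick sketch, assuming psql access; "mydb" is the database from the pg_dump command above):

    $ psql -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata;"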

I tried googling this, and found mentions of pg_dump being killed by the 
OOM killer, but I failed to find anything related to the huge number of 
large objects.


Is there any method of working around this issue?

Thanks in advance.

OS: CentOS 6
PostgreSQL version: 9.2.1
96 Gb RAM

PostgreSQL configuration:

listen_addresses = '*'                  # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 500                   # (change requires restart)
shared_buffers = 16GB                   # min 128kB
temp_buffers = 64MB                     # min 800kB
work_mem = 512MB                        # min 64kB
maintenance_work_mem = 3MB              # min 1MB
checkpoint_segments = 70                # in logfile segments, min 1, 16MB each
effective_cache_size = 5MB
logging_collector = on                  # Enable capturing of stderr and csvlog
log_directory = 'pg_log'                # directory where log files are written,
log_filename = 'postgresql-%a.log'      # log file name pattern,
log_truncate_on_rotation = on           # If on, an existing log file of the
log_rotation_age = 1d                   # Automatic rotation of logfiles will
log_rotation_size = 0                   # Automatic rotation of logfiles will
log_min_duration_statement = 5000
log_line_prefix = '%t'                  # special values:
autovacuum = on                         # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions and
autovacuum_max_workers = 5              # max number of autovacuum subprocesses
autovacuum_naptime = 5s                 # time between autovacuum runs
autovacuum_vacuum_threshold = 25        # min number of row updates before
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
autovacuum_vacuum_cost_delay = 7ms      # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 1500     # default vacuum cost limit for
datestyle = 'iso, dmy'
lc_monetary = 'ru_RU.UTF-8'             # locale for monetary formatting
lc_numeric = 'ru_RU.UTF-8'              # locale for number formatting
lc_time = 'ru_RU.UTF-8'                 # locale for time formatting
default_text_search_config = 'pg_catalog.russian'

--
Sergey Klochkov
kloch...@iqbuzz.ru




Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread Jov
Try updating to the latest release. I see there is a bug fix for pg_dump running out
of memory in 9.2.2; from the release notes
http://www.postgresql.org/docs/devel/static/release-9-2-2.html:

   - Work around unportable behavior of malloc(0) and realloc(NULL, 0) (Tom Lane)

   On platforms where these calls return NULL, some code mistakenly thought
   that meant out-of-memory. This is known to have broken pg_dump for
   databases containing no user-defined aggregates. There might be other cases
   as well.
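
A minimal sketch of what checking and applying a minor-version update could look like on CentOS 6 (the package and service names assume the PGDG 9.2 repository and are not from this thread):

    $ pg_dump --version                      # version of the pg_dump binary
    $ psql -c "SELECT version();"            # server version
    $ yum update postgresql92 postgresql92-server postgresql92-contrib
    $ service postgresql-9.2 restart

A minor release (9.2.1 -> 9.2.x) only replaces the binaries; no dump/restore of the data directory is needed.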


Jov
blog: http://amutu.com/blog


2013/10/1 Sergey Klochkov kloch...@iqbuzz.ru

 Hello All,

 While trying to backup a database of relatively modest size (160 Gb) I ran
 into the following issue:

 When I run
 $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

 File /path/to/mydb.dmp does not appear (yes, I've checked permissions and
 so on). pg_dump just begins to consume memory until it eats up all
 available RAM (96 Gb total on server, 64 Gb available) and is killed by
 the oom killer.

 According to pg_stat_activity, pg_dump runs the following query

 SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
 AS rolname, lomacl FROM pg_largeobject_metadata

 until it is killed.

 strace shows that pg_dump is constantly reading a large amount of data
 from a UNIX socket. I suspect that it is the result of the above query.

 There are 300000000 large objects in the database. Please don't ask me
 why.

 I tried googling on this, and found mentions of pg_dump being killed by
 oom killer, but I failed to find anything related to the huge large objects
 number.

 Is there any method of working around this issue?

 Thanks in advance.

 OS: CentOS 6
 PostgreSQL version: 9.2.1
 96 Gb RAM

 PostgreSQL configuration:

 listen_addresses = '*'                  # what IP address(es) to listen on;
 port = 5432                             # (change requires restart)
 max_connections = 500                   # (change requires restart)
 shared_buffers = 16GB                   # min 128kB
 temp_buffers = 64MB                     # min 800kB
 work_mem = 512MB                        # min 64kB
 maintenance_work_mem = 3MB              # min 1MB
 checkpoint_segments = 70                # in logfile segments, min 1, 16MB each
 effective_cache_size = 5MB
 logging_collector = on                  # Enable capturing of stderr and csvlog
 log_directory = 'pg_log'                # directory where log files are written,
 log_filename = 'postgresql-%a.log'      # log file name pattern,
 log_truncate_on_rotation = on           # If on, an existing log file of the
 log_rotation_age = 1d                   # Automatic rotation of logfiles will
 log_rotation_size = 0                   # Automatic rotation of logfiles will
 log_min_duration_statement = 5000
 log_line_prefix = '%t'                  # special values:
 autovacuum = on                         # Enable autovacuum subprocess?  'on'
 log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions and
 autovacuum_max_workers = 5              # max number of autovacuum subprocesses
 autovacuum_naptime = 5s                 # time between autovacuum runs
 autovacuum_vacuum_threshold = 25        # min number of row updates before
 autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
 autovacuum_vacuum_cost_delay = 7ms      # default vacuum cost delay for
 autovacuum_vacuum_cost_limit = 1500     # default vacuum cost limit for
 datestyle = 'iso, dmy'
 lc_monetary = 'ru_RU.UTF-8'             # locale for monetary formatting
 lc_numeric = 'ru_RU.UTF-8'              # locale for number formatting
 lc_time = 'ru_RU.UTF-8'                 # locale for time formatting
 default_text_search_config = 'pg_catalog.russian'

 --
 Sergey Klochkov
 kloch...@iqbuzz.ru





Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread Giuseppe Broccolo

Maybe you can improve your database's performance by changing some parameters:


PostgreSQL configuration:

listen_addresses = '*'  # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 500   # (change requires restart)

Set it to 100, the highest value supported by PostgreSQL

shared_buffers = 16GB  # min 128kB

This value should not be higher than 8GB

temp_buffers = 64MB # min 800kB
work_mem = 512MB# min 64kB
maintenance_work_mem = 3MB  # min 1MB

Given RAM 96GB, you could set it up to 4800MB
checkpoint_segments = 70                # in logfile segments, min 1, 16MB each

effective_cache_size = 5MB

Given RAM 96GB, you could set it up to 80GB
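
Collected into a postgresql.conf excerpt, the changes suggested above would look roughly like this (values are the ones suggested in this reply; shared_buffers and max_connections need a server restart to take effect):

    max_connections = 100
    shared_buffers = 8GB
    maintenance_work_mem = 4800MB
    effective_cache_size = 80GB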




Hope it can help.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it





Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread Sergey Klochkov
No, it did not make any difference. And after looking through pg_dump.c 
and pg_dump_sort.c, I cannot tell how it possibly could. See the 
stacktrace that I've sent to the list.


Thanks.

On 01.10.2013 15:01, Giuseppe Broccolo wrote:

Maybe you can improve your database's performance by changing some parameters:


PostgreSQL configuration:

listen_addresses = '*'  # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 500   # (change requires restart)

Set it to 100, the highest value supported by PostgreSQL

shared_buffers = 16GB  # min 128kB

This value should not be higher than 8GB

temp_buffers = 64MB # min 800kB
work_mem = 512MB# min 64kB
maintenance_work_mem = 3MB  # min 1MB

Given RAM 96GB, you could set it up to 4800MB

checkpoint_segments = 70                # in logfile segments, min 1, 16MB each
effective_cache_size = 5MB

Given RAM 96GB, you could set it up to 80GB




Hope it can help.

Giuseppe.



--
Sergey Klochkov
kloch...@iqbuzz.ru




Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread bricklen
On Tue, Oct 1, 2013 at 4:01 AM, Giuseppe Broccolo 
giuseppe.brocc...@2ndquadrant.it wrote:

 Maybe you can improve your database's performance by changing some parameters:

 max_connections = 500   # (change requires restart)

 Set it to 100, the highest value supported by PostgreSQL


Surely you mean that max_connections = 100 is the *default*?


Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread Magnus Hagander
On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov kloch...@iqbuzz.ru wrote:
 Hello All,

 While trying to backup a database of relatively modest size (160 Gb) I ran
 into the following issue:

 When I run
 $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

 File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
 on). pg_dump just begins to consume memory until it eats up all available
 RAM (96 Gb total on server, 64 Gb available) and is killed by the oom
 killer.

 According to pg_stat_activity, pg_dump runs the following query

 SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
 AS rolname, lomacl FROM pg_largeobject_metadata

 until it is killed.

 strace shows that pg_dump is constantly reading a large amount of data from
 a UNIX socket. I suspect that it is the result of the above query.

 There are 300000000 large objects in the database. Please don't ask me why.

 I tried googling on this, and found mentions of pg_dump being killed by oom
 killer, but I failed to find anything related to the huge large objects
 number.

 Is there any method of working around this issue?

I think this problem comes from the fact that pg_dump treats each
large object as its own item. See getBlobs(), which allocates a
BlobInfo struct for each LO (and a DumpableObject if there are any,
but that's just one).

I assume the query (from that file):
SELECT oid, lomacl FROM pg_largeobject_metadata

returns 300000000 rows, which are then looped over?
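
A rough back-of-the-envelope estimate shows why this exhausts memory. Assuming on the order of 100 bytes of pg_dump-side bookkeeping per large object (the BlobInfo/DumpableObject entries plus catalog strings -- an assumed figure, not a measured one):

    300,000,000 objects x ~100 bytes/object ~= 30 GB

so pg_dump needs tens of gigabytes of client-side memory before it has written a single byte of the dump.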

I ran into a similar issue a few years ago with a client using a
32-bit version of pg_dump, and worked around it by moving to 64-bit.
Unfortunately, I did not have time to look at the underlying issue.
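
One hedged sketch of a workaround in that spirit (illustrative only; paths are made up, and the loop would need to be batched far more cleverly for 300 million objects): in 9.2, pg_dump skips large objects whenever a schema or table switch is used unless --blobs is given, so the bulk of the database can be dumped without them and the blobs exported separately on the server side:

    # schema + table data, no large objects
    pg_dump -Fc -n public -f /backup/mydb_nolobs.dmp mydb

    # export large objects server-side (lo_export() requires superuser and
    # writes to the server's filesystem); shown unbatched only for clarity
    psql -At -d mydb -c "SELECT oid FROM pg_largeobject_metadata ORDER BY oid" \
      | while read loid; do
          psql -d mydb -c "SELECT lo_export($loid, '/backup/lobs/$loid')" >/dev/null
        done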


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [ADMIN] Random server overload

2013-10-01 Thread Igor Neyman
 -Original Message-
 From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
 ow...@postgresql.org] On Behalf Of Viktor
 Sent: Tuesday, October 01, 2013 9:19 AM
 To: pgsql-admin@postgresql.org
 Subject: [ADMIN] Random server overload
 
 Hello,
 
 We are experiencing database random overloads caused by IDLE processes.
 Their count jumps from normal ~70 connections to 250-300 with high I/O (30-
 40% wa, when normal ~ 1 % wa).
 
 The overload isn't long and lasts about 5 -10 minutes just a couple of times
 during the month.
 
 Please suggest how to debug this issue and find the cause of the overloads.
 Or mby we should tune our config file ?
 
 errorlog example:
 2013-09-30 10:37:45 EEST FATAL:  sorry, too many clients already
 2013-09-30 10:37:45 EEST FATAL:  remaining connection slots are reserved for
 non-replication superuser connections ...
 
 config file:
 
 max_connections = 250
 shared_buffers = 16GB
 temp_buffers = 16MB
 max_prepared_transactions = 0
 work_mem = 448MB
 maintenance_work_mem = 4GB
 max_stack_depth = 6MB
 wal_buffers = 18MB
 checkpoint_segments = 30
 checkpoint_timeout = 5min
 checkpoint_warning = 30s
 random_page_cost = 4.0
 cpu_tuple_cost = 0.01
 cpu_index_tuple_cost = 0.005
 effective_cache_size = 50GB
 default_statistics_target = 100
 autovacuum = on
 
 othr values are defaults.
 
 System:
 RAM 74 GB
 PostgreSQL 9.1.9, Debian 6
 Database size on disc: 84 GB data + 23 GB indexes. Different LVMs on RAID
 10.
 
 
 --
 Best regards

Did you try using any kind of connection pooler, e.g. PgBouncer? 
Should help.
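
A minimal sketch of what that could look like with PgBouncer in transaction pooling mode (file locations, names and pool sizes are illustrative, not tuned for this workload):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 500      ; clients connect to 6432 instead of 5432
    default_pool_size = 20     ; actual server connections stay small

The application then connects to port 6432, and the bursts of idle connections are absorbed by the pooler rather than by the server.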

Regards,
Igor Neyman




[ADMIN] Random server overload

2013-10-01 Thread Viktor

Hello,

We are experiencing random database overloads caused by IDLE processes. 
Their count jumps from the normal ~70 connections to 250-300 with high I/O 
(30-40% iowait, versus ~1% normally).


The overloads don't last long, about 5-10 minutes, and happen just a couple 
of times during the month.


Please suggest how to debug this issue and find the cause of the 
overloads. Or maybe we should tune our config file?


errorlog example:
2013-09-30 10:37:45 EEST FATAL:  sorry, too many clients already
2013-09-30 10:37:45 EEST FATAL:  remaining connection slots are reserved 
for non-replication superuser connections

...

config file:

max_connections = 250
shared_buffers = 16GB
temp_buffers = 16MB
max_prepared_transactions = 0
work_mem = 448MB
maintenance_work_mem = 4GB
max_stack_depth = 6MB
wal_buffers = 18MB
checkpoint_segments = 30
checkpoint_timeout = 5min
checkpoint_warning = 30s
random_page_cost = 4.0
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
effective_cache_size = 50GB
default_statistics_target = 100
autovacuum = on

Other values are defaults.

System:
RAM 74 GB
PostgreSQL 9.1.9, Debian 6
Database size on disc: 84 GB data + 23 GB indexes. Different LVMs on 
RAID 10.



--
Best regards





Re: [ADMIN] Random server overload

2013-10-01 Thread Albe Laurenz
Viktor wrote:
 We are experiencing database random overloads caused by IDLE processes.
 Their count jumps from normal ~70 connections to 250-300 with high I/O
 (30-40% wa, when normal ~ 1 % wa).
 
 The overload isn't long and lasts about 5 -10 minutes just a couple of
 times during the month.
 
 Please suggest how to debug this issue and find the cause of the
 overloads. Or mby we should tune our config file ?
 
 errorlog example:
 2013-09-30 10:37:45 EEST FATAL:  sorry, too many clients already
 2013-09-30 10:37:45 EEST FATAL:  remaining connection slots are reserved
 for non-replication superuser connections

Looks like something tries to open lots of connections at these times.

Your best bet is to set log_connections = on and examine the log file
when the problem happens.  That way you can figure out where the connections
come from.
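
For example (a sketch; the particular log_line_prefix escapes are just a reasonable choice, not from this thread):

    # postgresql.conf -- applies to new connections after a reload
    log_connections = on
    log_line_prefix = '%t %u@%d %r '    # timestamp, user@database, remote host:port

    $ psql -c "SELECT pg_reload_conf();"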

Yours,
Laurenz Albe



[ADMIN] pgAdmin forgets password on could not connect to server error

2013-10-01 Thread Andrew Swartzendruber
Would it be possible, in a future release, to prevent pgAdmin from forgetting
the stored password when it fails to connect, or to add an option that prevents
passwords from being forgotten?

I use pgAdmin 1.16 or 1.18 to connect to our database and use the store
password option.  If postgres is not running or the network is not working
for any reason, pgAdmin displays an error that it cannot connect to the
database and immediately forgets the stored password.  This is extremely
inconvenient when using a long pass-phrase, as it requires me to look up the
password and re-enter it when I can connect to the server again.



-Andrew-


[ADMIN] pg_archivecleanup not cleaning up ?

2013-10-01 Thread Jorge Torralba
Running Postgres 9.2.4 on centos 6

We have a backup script that runs twice weekly. At the end of the script it
executes pg_archivecleanup to remove old WAL files that are no longer needed.

Most of the time this runs as expected. But for some reason, it does not
clean up ALL of the old WAL files.

I noticed today that I still had 3500 WAL files. I ran the command
manually, and the screen scrolled by with the usual removing file messages.

Afterwards, I did an ls | wc -l only to see that it had removed just a few. In
the end, I had to run the same pg_archivecleanup about 5 times in order to
remove all of the old WAL files.

Is there a known issue with this command?

Here is the command I ran.

pg_archivecleanup -d /opt/dbbackups/db1_wal_keep
000100220049.0020.backup
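
One hedged way to narrow this down (assuming the 9.2 pg_archivecleanup; the .backup file name is the one from the command above): run it with -n first, which only prints the files it considers removable without deleting anything, and compare that list with what is actually in the directory:

    # dry run: list removal candidates, delete nothing
    pg_archivecleanup -n /opt/dbbackups/db1_wal_keep \
        000100220049.0020.backup | wc -l

    # real cleanup with debug output
    pg_archivecleanup -d /opt/dbbackups/db1_wal_keep \
        000100220049.0020.backup

If the dry-run count is smaller than the number of old files on disk, the leftovers are probably files pg_archivecleanup does not treat as WAL segments (different name patterns) rather than a bug in the tool.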



-- 
Thanks,

Jorge Torralba


Note: This communication may contain privileged or other confidential
information. If you are not the intended recipient, please do not print,
copy, retransmit, disseminate or otherwise use the information. Please
indicate to the sender that you have received this email in error and
delete the copy you received. Thank You.


Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

2013-10-01 Thread Alejandro Brust
Did you perform any vacuumdb / reindexdb before the pg_dump?
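
For reference, those would typically be run as something like (a sketch; the database name is taken from the pg_dump command quoted below):

    $ vacuumdb --analyze --verbose mydb
    $ reindexdb mydb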


El 01/10/2013 09:49, Magnus Hagander escribió:
 On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov kloch...@iqbuzz.ru wrote:
 Hello All,

 While trying to backup a database of relatively modest size (160 Gb) I ran
 into the following issue:

 When I run
 $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

 File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
 on). pg_dump just begins to consume memory until it eats up all available
 RAM (96 Gb total on server, 64 Gb available) and is killed by the oom
 killer.

 According to pg_stat_activity, pg_dump runs the following query

 SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
 AS rolname, lomacl FROM pg_largeobject_metadata

 until it is killed.

 strace shows that pg_dump is constantly reading a large amount of data from
 a UNIX socket. I suspect that it is the result of the above query.

 There are 300000000 large objects in the database. Please don't ask me why.

 I tried googling on this, and found mentions of pg_dump being killed by oom
 killer, but I failed to find anything related to the huge large objects
 number.

 Is there any method of working around this issue?
 I think this problem comes from the fact that pg_dump treats each
 large object as it's own item. See getBlobs() which allocates a
 BlobInfo struct for each LO (and a DumpableObject if there are any,
 but that's just one).

 I assume the query (from that file):
 SELECT oid, lomacl FROM pg_largeobject_metadata

 returns 300000000 rows, which are then looped over?

 I ran into a similar issue a few years ago with a client using a
 32-bit version of pg_dump, and got it worked around by moving to
 64-bit. Did unfortunately not have time to look at the underlying
 issue.







[ADMIN] DB link from postgres to Oracle; how to query Dbname.tablename?

2013-10-01 Thread Bhanu Murthy
Hi all, greetings!
 
Using Oracle Heterogeneous Services (Oracle HS), I have configured/created a DB 
link from an Oracle 11gR3 database into a Postgres 9.3 database (with postgres DB 
user credentials).
 SQL> create public database link pg_link connect to postgres identified by blahblah using 'postgresql';
Since Postgres does not support public synonyms across databases in a cluster, 
how do I connect to a specific database and query a specific table in this 
Postgres cluster using the HS DB link?
 
Let's say, if I have 2 Postgres databases named pgdb01 and pgdb02 in the 
Postgres cluster, using this DB link that I have created in Oracle, how can I 
query a specific table called table01 from pgdb01 database? 
 
Even though the table user_account exists in pgdb01 database, I cannot select 
from it using the DB link.
 
SQL> select count(*) from "user_account"@pg_link;
select count(*) from "user_account"@pg_link
                     *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
ERROR:  relation "user_account" does not exist at character 21;
No query has been executed with that handle {HY000,NativeErr = 1}
ORA-02063: preceding 3 lines from PG_LINK
 
I tried the dbname.tablename syntax, but it didn't work!  BTW, all my tables belong 
to the public schema.
 
Can anyone with DB link expertise answer my question?
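
A sketch of the kind of setup this would need, assuming the gateway is DG4ODBC with the psqlODBC driver (the driver path and DSN names are illustrative): the Postgres database a link reaches is fixed by the ODBC DSN configured behind the gateway SID, not by a dbname.tablename prefix, so reaching pgdb01 and pgdb02 means two DSNs and two database links:

    ; odbc.ini on the Oracle/gateway host
    [pgdb01_dsn]
    Driver     = /usr/lib64/psqlodbcw.so
    Servername = pg-host
    Port       = 5432
    Database   = pgdb01

    [pgdb02_dsn]
    Driver     = /usr/lib64/psqlodbcw.so
    Servername = pg-host
    Port       = 5432
    Database   = pgdb02

Each DSN then gets its own HS init file and tnsnames/listener entry, plus a separate DB link (e.g. pg_link01, pg_link02) pointing at it. Lowercase Postgres identifiers also have to be double-quoted on the Oracle side, as in the "user_account" example above, because Oracle folds unquoted identifiers to upper case.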

Thanks, 
Bhanu M. Gandikota
Mobile: (415) 420-7740
 


 From: Alejandro Brust alejand...@pasteleros.org.ar
To: pgsql-admin@postgresql.org 
Sent: Tuesday, October 1, 2013 12:30 PM
Subject: Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backing up a 
database with 300000000 large objects
  

Did you perform any vacuumdb / reindexdb before the pg_dump?


El 01/10/2013 09:49, Magnus Hagander escribió:
 On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov kloch...@iqbuzz.ru wrote:
 Hello All,

 While trying to backup a database of relatively modest size (160 Gb) I ran
 into the following issue:

 When I run
 $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb

 File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
 on). pg_dump just begins to consume memory until it eats up all available
 RAM (96 Gb total on server, 64 Gb available) and is killed by the oom
 killer.

 According to pg_stat_activity, pg_dump runs the following query

 SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
 AS rolname, lomacl FROM pg_largeobject_metadata

 until it is killed.

 strace shows that pg_dump is constantly reading a large amount of data from
 a UNIX socket. I suspect that it is the result of the above query.

 There are 300000000 large objects in the database. Please don't ask me why.

 I tried googling on this, and found mentions of pg_dump being killed by oom
 killer, but I failed to find anything related to the huge large objects
 number.

 Is there any method of working around this issue?
 I think this problem comes from the fact that pg_dump treats each
 large object as it's own item. See getBlobs() which allocates a
 BlobInfo struct for each LO (and a DumpableObject if there are any,
 but that's just one).

 I assume the query (from that file):
 SELECT oid, lomacl FROM pg_largeobject_metadata

 returns 300000000 rows, which are then looped over?

 I ran into a similar issue a few years ago with a client using a
 32-bit version of pg_dump, and got it worked around by moving to
 64-bit. Did unfortunately not have time to look at the underlying
 issue.




