On Tue, Apr 25, 2006 at 04:47:53PM -0500, Jason Minion wrote:
Usually a dump is significantly smaller than a live database, due to the
space taken up by indexes and by dead tuples left behind by MVCC. If the
dump is dramatically smaller, you may also want to take a look at your
vacuuming procedure.
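As an illustration of the point above, you can compare the on-disk size before and after vacuuming. This sketch uses the pg_database_size() and pg_size_pretty() built-ins available from PostgreSQL 8.1 onward; on older releases the contrib/dbsize module provides similar functions.

```sql
-- On-disk size includes indexes and dead MVCC tuples; a plain dump does not.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Reclaim space held by dead tuples, then re-check the size:
VACUUM FULL ANALYZE;
SELECT pg_size_pretty(pg_database_size(current_database()));
```

A large drop after VACUUM FULL suggests routine vacuuming was not keeping up with dead-tuple accumulation.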
Title: dbsize pg_dump
Good afternoon,
Probably an easy question, but why do the file sizes differ so much between these two tools?
For example:
A backup using pg_dump of our largest DB creates a file 384MB in size.
Using the following SQL code utilizing dbsize I get the following:
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of mcelroy, tim
Sent: Tuesday, April 25, 2006 4:06 PM
To: 'pgsql-admin@postgresql.org'
Subject: Re: [ADMIN] dbsize pg_dump
Please disregard this question. I'm using pg_dump -F c, which compresses
the data.
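For comparison, a minimal sketch of the two dump formats (assuming a running server and a hypothetical database name "mydb"; neither name nor sizes are from the thread):

```shell
# Plain-text dump vs. compressed custom-format dump of a hypothetical
# database "mydb". The custom format (-F c) is compressed by default
# and is typically several times smaller than the plain SQL dump.
pg_dump -F p mydb > mydb.sql
pg_dump -F c mydb > mydb.dump
ls -lh mydb.sql mydb.dump
```

So a custom-format dump can be far smaller than the live database even before accounting for indexes and dead tuples.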
Hello *,
I don't know if dbsize is part of the admin business here. I couldn't find any
contact info in the README or the source. I have a 7.3 on Solaris 8. When I
try to execute SELECT database_size('newsdb'); I get the following error
message:
ERROR: MemoryContextAlloc: invalid request
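If database_size() from contrib/dbsize fails on 7.3, one rough workaround (a sketch of my own, not from the thread) is to sum relpages from the system catalogs. relpages is measured in blocks (8 kB with the default build) and is only refreshed by VACUUM or ANALYZE, so run one of those first for a current figure.

```sql
-- Approximate total on-disk size of all relations in the current
-- database. relpages counts 8 kB blocks and is updated only by
-- VACUUM/ANALYZE, so this is an estimate, not an exact byte count.
SELECT sum(relpages) * 8192 AS approx_bytes FROM pg_class;
```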