Got a situation where a 130 mil rows (137GB) table needs to be brought down in
size to 10 mil records (most recent)
with the least amount of downtime.
Doing a full vacuum would be faster on:
- 120 mil rows deleted and 10 mil active (delete most of them then full vacuum)
- 10 mil deleted and 120 mil active (delete in smaller batches, vacuuming after each)
Personally, provided you have the room, I would build a new table off to the
side and migrate what you need to keep into it. When done, and satisfied that
you have all of the candidate rows, rename the original table to "x_tablename"
and rename the newly created table into place.
On Tue, Mar 29, 2011 at 9:26 PM, Nic Chidu wrote:
> Got a situation where a 130 mil rows (137GB) table needs to be brought down
> in size to 10 mil records (most recent)
> with the least amount of downtime.
>
> Doing a full vacuum would be faster on:
> - 120 mil rows deleted and 10 mil active (delete most of them then full vacuum)
Hi All,
I am trying to restore a dump (custom archive format) using pg_restore like
this:
type C:\testdump | pg_restore -Fc -C -U postgres -w -d postgres
where testdump contains the database dump from the following command:
pg_dump -U postgres -Fc test > C:\testdump
The restore command fails with:
pg_restore: [custom archiver] could not find block ID 1770 in archive -- possibly corrupt archive
pg_restore: *** aborted because of error
On Tue, Mar 29, 2011 at 9:26 PM, Nic Chidu wrote:
> Got a situation where a 130 mil rows (137GB) table needs to be brought down
> in size to 10 mil records (most recent)
> with the least amount of downtime.
>
>
What version of PG are you on?
If PG > 8.2 and you have disk space twice the size of the table
On 03/29/2011 09:04 AM, Plugge, Joe R. wrote:
Personally, provided you have the room, I would build a new table off to the side and
then migrate what you need to keep to the new table, when done, and satisfied that you
have all of the candidate rows, rename the original table to "x_tablename"
--- On Tue, 3/29/11, Nic Chidu wrote:
> From: Nic Chidu
> Subject: [ADMIN] full vacuum of a very large table
> To: pgsql-admin@postgresql.org
> Date: Tuesday, March 29, 2011, 11:56 AM
> Got a situation where a 130 mil rows
> (137GB) table needs to be brought down in size to 10
> mil records (most recent)
Yeah, this is the best solution. It doesn't need many resources or make the DB
do much extra work, so you will save a lot here.
Just do simple inserts from the base table into the new table with your condition,
verify,
rename the base table to something else and the new table to the base table name,
then analyze the new table.
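A small example of the "verify" step mentioned above, assuming the same placeholder names and predicate as the sketch earlier in the thread:

-- Row counts should match before swapping the tables.
SELECT count(*) FROM tablename_new;
SELECT count(*) FROM tablename WHERE created_at >= '2011-01-01';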
On Tue, Mar
On 03/29/2011 08:56 AM, Nic Chidu wrote:
Got a situation where a 130 mil rows (137GB) table needs to be brought down in
size to 10 mil records (most recent)
with the least amount of downtime.
Doing a full vacuum would be faster on:
- 120 mil rows deleted and 10 mil active (delete most of them then full vacuum)
Ravi Thati wrote:
> type C:\testdump | pg_restore -Fc [...]
> pg_restore: [custom archiver] could not find block ID 1770 in
> archive -- possibly corrupt archive
> pg_restore: *** aborted because of error
Does the type command treat the stream as characters? (You could
try putting a ^Z en
I am running postgres on a red hat linux server.
postgresql-python-8.1.23-1.el5_6.1
postgresql-test-8.1.23-1.el5_6.1
postgresql-libs-8.1.23-1.el5_6.1
postgresql-docs-8.1.23-1.el5_6.1
postgresql-contrib-8.1.23-1.el5_6.1
postgresql-8.1.23-1.el5_6.1
postgresql-pl-8.1.23-1.el5_6.1
postgresql-odbc-08.01
Have you tried it without the redirect? It could be that the "type" command
mangles the file (MS software is good at this):
pg_restore -Fc -C -U postgres -w -d postgres C:\testdump
or even with standard redirect
pg_restore -Fc -C -U postgres -w -d postgres < C:\testdump
I've had some issues with pg_restore
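If the direct invocation still fails, one way to check whether the dump file itself is readable is pg_restore's list mode (-l), which prints the archive's table of contents without restoring anything:

pg_restore -l C:\testdump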
Thank you, Kevin & Martin.
I have tried both ways:
1. pg_restore -Fc -C -U postgres -w -d postgres C:\testdump
2. pg_restore -Fc -C -U postgres -w -d postgres < C:\testdump
Both work very well.
Let me put my procedure here to show where I started and where I landed.
Backup:
my pg_dump result