Re: [HACKERS] VACUUM FULL Error

2016-12-29 Thread Hayes, Patrick


Hi there
Can this be forwarded to someone who can assist with this query?
Thank you, Patrick Hayes
From: Hayes, Patrick
Sent: 29 December 2016 12:26
To: 'pgsql-hackers@postgresql.org' 
Subject: FW: VACUUM FULL Error

Hi there
Any suggestions on how to work around this issue I am having with the VACUUM 
command on PostgreSQL version 8.1?
The VACUUM FULL command seems to get stuck while vacuuming 
"pg_catalog.pg_largeobject" (the last message in the VERBOSE output).
I am now attempting the command below, but I am not hopeful that it will complete successfully.
> VACUUM VERBOSE pg_catalog.pg_largeobject;
So far it has produced only the initial message:
INFO:  vacuuming "pg_catalog.pg_largeobject"
How long should I wait for this to complete – if it ever does? It has currently 
been running for over 30 minutes.
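For a rough sense of how much work the vacuum has ahead of it, the size of the relation can be checked from another session. A sketch (the dbsize functions `pg_relation_size` and `pg_size_pretty` should be available in 8.1, to the best of my knowledge):

```sql
-- Check how large pg_largeobject actually is before deciding how long to wait.
SELECT pg_size_pretty(pg_relation_size('pg_catalog.pg_largeobject'));
```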

Refer to the forwarded message below for additional information.

My fallback is that an archive of the existing DB (almost 2 TB) has been 
made and verified (via a VEEAM clone process). It contains all of the historical 
records that need to be retained in a read-only DB. The only option I seem to 
have is to drop the DB and start with a blank canvas. That is not an option I want to 
take, as I am not a PostgreSQL expert.

Help!!!

From: Hayes, Patrick
Sent: 29 December 2016 10:53
To: 'pgadmin-supp...@postgresql.org' 
Subject: VACUUM FULL Error

Hi pgAdmin support,

I recently truncated tables in a PostgreSQL DB and am now unable to free up disk 
space on the server with the VACUUM FULL command.
The DB is 1.91 TB in size.
The server is running the Windows 2008 R2 Standard operating system.
PostgreSQL is version 8.1.

Error message reads as follows:
ERROR:  out of memory
DETAIL:  Failed on request of size 134217728.

Adding memory to the server does not seem to make any difference. Restarting the 
server makes no difference. The VACUUM command always fails on the same request 
size, 134217728 bytes.
Is there any way to selectively VACUUM tables – a divide-and-conquer approach?
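For reference, VACUUM does accept an optional table name, so a per-table pass is possible. The failing allocation (134217728 bytes = 128 MB) also matches a `maintenance_work_mem` setting of 128 MB, so lowering that value for the session may let the vacuum proceed; this is a guess, not a confirmed diagnosis. A sketch (the table name is hypothetical):

```sql
-- Cap VACUUM's memory use for this session only.
-- 8.1 does not accept unit suffixes like '64MB'; the value is in kB.
SET maintenance_work_mem = 65536;  -- 64 MB

-- Vacuum one table at a time instead of the whole database.
VACUUM VERBOSE my_schema.my_big_table;  -- hypothetical table name
VACUUM VERBOSE pg_catalog.pg_largeobject;
```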
Thank you in advance for any support you can provide

Patrick Hayes



[HACKERS] FW: VACUUM FULL Error

2016-12-29 Thread Hayes, Patrick
(This message is a verbatim copy of the one quoted in full above.)



Re: [HACKERS] An Idea for planner hints

2006-08-26 Thread Hayes
On Aug 17, 2006, at 1:41 PM, Peter Eisentraut wrote:

> But we need to work this from the other end anyway.  We need to
> determine first, what sort of statistics the planner could make use of.
> Then we can figure out the difficulties in collecting them.
> 

There are still some things that the architect or DBA will know that 
the system could never deduce.

Any suggestions for what these statistics are?   Cross-column 
statistics have been mentioned previously.  Another that's come up 
before is how "clustered" a table is around its keys (think web log, 
where all the session records are going to be in the same page (or 
small set of pages)).  FK selectivity has been mentioned in this 
thread.
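As a partial precedent for the clustering idea, the planner already records a per-column measure of physical ordering in `pg_stats.correlation` (values near +1 or -1 mean the column's values are laid out in order on disk; values near 0 mean a random layout). A quick way to inspect it, with a hypothetical table name:

```sql
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'weblog';  -- hypothetical table name
```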

Anything else to throw into the ring?

-arturo

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq