Hello,
Yesterday, in the process of bloat control, our maintenance script decided
to do a vacuum full on a table, because there was more than 50% free space
/ dead records due to a massive update.
The table normally takes about 20 minutes to do a vacuum full (in the last
couple of months we alread
I have a very large table on my PostgreSQL 9.1 database (openstreetmap data:
64GB data + 100GB indexes in one particular table) stored on a dedicated 220GB
SSD-disk (tablespace "SSD"). I would like to run "VACUUM (FULL, FREEZE,
ANALYZE)" on that table to reclaim some tablespace. But as you can s
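Before reaching for VACUUM FULL on a table this size, it helps to confirm how the space is actually split between heap and indexes; a minimal sketch, using a placeholder table name rather than the poster's actual table:

```sql
-- Heap vs. index/TOAST breakdown for one table ("planet_osm_ways" is a placeholder).
SELECT pg_size_pretty(pg_relation_size('planet_osm_ways'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('planet_osm_ways')
                      - pg_relation_size('planet_osm_ways'))     AS indexes_and_toast;
```

Note that since 9.0, VACUUM FULL rewrites the table, so the tablespace needs enough headroom for a complete second copy of the heap while it runs.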
hi,
I'm very thankful for a new solution to the VACUUM FULL problem, as
reported in the wiki:
http://wiki.postgresql.org/wiki/VACUUM_FULL#ALTER_TABLE_.._SET_DATA_TYPE
but I have the feeling that this command is not faster than a CLUSTER.
Prior to 9.0 the syntax does not contain "SET DAT
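For reference, the wiki trick referred to above is a no-op column type change that forces a full table rewrite; a sketch with placeholder table and column names:

```sql
-- Rewrites the whole table, compacting it much like CLUSTER but without
-- sorting the rows; holds an exclusive lock for the duration.
ALTER TABLE mytable ALTER COLUMN somecol TYPE integer USING somecol::integer;
```

Later server versions may optimize away a rewrite when the type is unchanged, so consult the wiki page above for the exact incantation on a given release.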
Martin von Oertzen wrote:
> why can't I move the last table rows to the beginning, step by step?
I believe that if you search the hackers list from somewhere around
six to twelve months ago, someone posted a client program which did
that. I think it wasn't pursued for lack of interest -- it's
hi,
I'm happy with autovacuum and CLUSTER.
But now I ask myself: why can't I move the last table rows
to the beginning, step by step?
VACUUM FULL needs an exclusive lock.
That is okay for me, but if a query arrives that is waiting
for my VACUUM FULL, I would like to save my vacuum work
and pause.
Kieren Scott wrote:
I'm trying to understand what is going on internally when doing a
VACUUM FULL on a table in 8.3.
The info you've gotten from Kevin is all correct, but you may find some
of the additional trivia in this area collected at
http://wiki.postgresql.org/wiki/VACUUM_FULL interesti
Kieren Scott wrote:
> Could you explain what causes index bloat when running vacuum
> full?
To collapse the space, it copies tuples to locations closer to the
front of the table. The index needs to contain references to the
old and new tuple copies until the VACUUM FULL commits or rolls
back.
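Because of that double-referencing, the old-style (pre-9.0) VACUUM FULL tends to leave indexes more bloated than before it ran; the usual follow-up, sketched with a placeholder table name:

```sql
VACUUM FULL mytable;    -- compacts the heap by moving tuples toward the front
REINDEX TABLE mytable;  -- rebuilds the table's indexes to remove the bloat left behind
```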
> Kieren Scott wrote:
>
> > I'm trying to understand what is going on internally when doing a
> > VACUUM FULL on a table in 8.3.
>
Kieren Scott wrote:
> I'm trying to understand what is going on internally when doing a
> VACUUM FULL on a table in 8.3.
>
> I have a table that is 1GB in size, 500M is used, and 500M is free
> space. When I do a vacuum full on this table, will it either: -
>
> 1) Compact all of the used tupl
Hi,
I'm trying to understand what is going on internally when doing a VACUUM FULL
on a table in 8.3.
I have a table that is 1GB in size, 500M is used, and 500M is free space. When
I do a vacuum full
on this table, will it either: -
1) Compact all of the used tuples into free space within the
Alexandre Leclerc wrote:
> Thank you Scott for this note.
>
> I wanted to give some feedback on the situation:
>
> - The vacuum completed after about 6-7 hours (by 16:15). We
> immediately made a backup before proceeding further. Then we vacuumed
> the postgres and template1 databases.
>
> -
On 2010-04-19 08:59, Alexandre Leclerc wrote:
On 2010-04-16 19:19, Scott Marlowe wrote:
On Fri, Apr 16, 2010 at 2:57 PM, Alexandre Leclerc
wrote:
Thank you guys. I wanted to rush and vacuum the other tables and
try, but I
decided to make a copy. This is actually running. (Enough mistakes
On 2010-04-16 19:19, Scott Marlowe wrote:
On Fri, Apr 16, 2010 at 2:57 PM, Alexandre Leclerc wrote:
Thank you guys. I wanted to rush and vacuum the other tables and try, but I
decided to make a copy. This is actually running. (Enough mistakes in one
day to not take the time to do it.)
A
On Fri, Apr 16, 2010 at 2:57 PM, Alexandre Leclerc wrote:
> Thank you guys. I wanted to rush and vacuum the other tables and try, but I
> decided to make a copy. This is actually running. (Enough mistakes in one
> day to not take the time to do it.)
>
> After that we try to launch the DB and hopef
On 2010-04-16 16:14, Tom Lane wrote:
Alexandre Leclerc writes:
The vacuum raised a warning that "max_fsm_pages" of 142000 is not enough, and stopped.
That's just a warning that gets put out at the end of the run. Go on
with vacuuming your other databases. Right now is no time to be
worrying abo
Alexandre Leclerc writes:
> The vacuum raised a warning that "max_fsm_pages" of 142000 is not enough, and stopped.
That's just a warning that gets put out at the end of the run. Go on
with vacuuming your other databases. Right now is no time to be
worrying about FSM too small --- you need to get back to a runn
Alexandre Leclerc wrote:
> The vacuum raised a warning that "max_fsm_pages" of 142000 is not
> enough, and stopped.
That's probably just a warning that it wasn't able to track all the
dead space -- I would expect that. You're going to want to clean up
the bloat anyway. I would try a pg_dumpall at this point
You could temporarily increase the fsm size in the postgres configuration
so as to be able to properly map all the free space. I think you're going
to do a dump/restore in due course in order to return the database to
something like its normal size, at which point (if you're RAM constrained)
you
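For reference, the FSM settings being discussed live in postgresql.conf on 8.x (the free-space map became self-managing in 8.4). The values below are purely illustrative, and changing max_fsm_pages requires a server restart:

```
max_fsm_pages = 500000       # slots for tracking pages with reusable free space
max_fsm_relations = 1000     # slots for tracking relations with free space
```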
Alexandre Leclerc writes:
> I'm always getting:
> WARNING: db "template1" must be vacuumed within 999593 transactions
> HINT: To avoid... execute a full-database VACUUM in "template1"
> ... (repeated many times until 999568)
Yeah, I think you will get that bleat once per table processed, until
yo
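A quick way to see how urgent the wraparound situation is, per database (this query works on the 8.1 installation under discussion):

```sql
-- Databases closest to transaction-ID wraparound first; the server
-- emits these warnings, and eventually stops, as the age nears the limit.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```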
On 2010-04-16 15:44, Tom Lane wrote:
"Kevin Grittner" writes:
"Joshua D. Drake" wrote:
if you actually managed to start two services against the
same data directory, I hope you have a backup, you can restore
from.
This is 8.1 under Windows, and he connected to a di
Alexandre Leclerc wrote:
> our customer is supposed to have a full file backup from the
> evening.
That's very good news, but given that they've not been going "by the
book" in all respects, it pays to be cautious here. Did they make
the copy while the database service was shut down? If not,
Kevin Grittner wrote:
Also, the "full-database vacuum" terminology seems too likely to be
interpreted as VACUUM FULL for best results. Perhaps it's worth
changing that to just "database vacuum" or "vacuum of the entire
database"
http://archives.postgresql.org/pgsql-committers/2008-12/msg00096.
"Kevin Grittner" writes:
> "Joshua D. Drake" wrote:
>> if you actually managed to start two services against the
>> same data directory, I hope you have a backup, you can restore
>> from.
> This is 8.1 under Windows, and he connected to a different database
> with each backend. He got errors w
On 2010-04-16 15:20, Scott Marlowe wrote:
On Fri, Apr 16, 2010 at 12:47 PM, Alexandre Leclerc wrote:
On 2010-04-16 14:18, Kevin Grittner wrote:
Alexandre Leclerc wrote:
At some point I got:
ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
only to
On Fri, Apr 16, 2010 at 12:47 PM, Alexandre Leclerc wrote:
> On 2010-04-16 14:18, Kevin Grittner wrote:
>>
>> Alexandre Leclerc wrote:
>>
>>
>>>
>>> At some point I got:
>>> ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
>>> only to AC/FB9224A8
>>> CONTEXT: writing block 0
"Joshua D. Drake" wrote:
> if you actually managed to start two services against the
> same data directory, I hope you have a backup, you can restore
> from.
This is 8.1 under Windows, and he connected to a different database
with each backend. He got errors writing the WAL files, and it
appa
On Fri, 2010-04-16 at 14:47 -0400, Alexandre Leclerc wrote:
> I did. :( Shame on me. I just realised while reading the postgres docs
> that it is not made for that, but only for a single instance at a time.
> I hope I did not break anything.
How in the world did you pull that off? PostgreSQL ch
On 2010-04-16 14:18, Kevin Grittner wrote:
Alexandre Leclerc wrote:
At some point I got:
ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
only to AC/FB9224A8
CONTEXT: writing block 0 of relation 1664/0/1214
WARNING: could not writing block 0 of 1664/0/1214
DETAIL: Mult
Alexandre Leclerc wrote:
> I also want to mention that maybe I'm not doing it properly.
>
> I started "postgres.exe" and it is inside that "session",
> "backend>" prompt, that I did run the VACUUM command. Is it that
> way
Yes, that's the single-user mode. Just don't run more than one with
t
Alexandre Leclerc wrote:
> At some point I got:
> ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
> only to AC/FB9224A8
> CONTEXT: writing block 0 of relation 1664/0/1214
> WARNING: could not writing block 0 of 1664/0/1214
> DETAIL: Multiple failures --- write error may be per
Hi again,
I also want to mention that maybe I'm not doing it properly.
I started "postgres.exe" and it is inside that "session", at the "backend>"
prompt, that I ran the VACUUM command. Is it that way or should I
use psql to connect to anything "postgres.exe" would have "done" (like
listening to
Hi all,
I might have a problem of a greater order, but I can't see how to get an
answer. (Indeed the message didn't say anything about VACUUM FULL... I
misinterpreted the message.)
The message says to VACUUM the database postgres.
When I execute:
postgres -D "D:\my\path" postgres
VACUUM;
Alexandre Leclerc writes:
> *Background:*
> - PostgreSQL 8.1 on Windows Server
> - The customer has disabled regular VACUUM jobs for backup to be taken,
> a year or two ago.
> - They didn't tell us (as far as I can remember).
> - Wednesday morning at 10:55:50: database is shut down to avoid
> wr
Alexandre Leclerc wrote:
> - PostgreSQL 8.1 on Windows Server
That's not a supported environment.
http://www.postgresql.org/about/news.865
They should really be looking at upgrading.
> - The customer has disabled regular VACUUM jobs for backup to be
> taken, a year or two ago.
Ouch. O
Alexandre Leclerc wrote:
- 2. Could we stop VACUUM FULL, simply restart the postmaster, and
start a normal VACUUM even if it's slow?
This is what you want to do. VACUUM FULL is the slow way--much, much
slower--and it is not needed to clean up from wraparound issues. Here's
two more opinion
Hi all,
I'm sorry for the urgency of the question. (We have a customer whose DB
has been "down" for 36 hours and business operations are compromised. Thank
you for your help.)
*Background:*
- PostgreSQL 8.1 on Windows Server
- The customer has disabled regular VACUUM jobs for backup to be taken,
I've been doing more testing on several different machines, but still
haven't found a solution to my problem where VACUUM FULL is running out of
memory. Besides the original case on RHEL4, I've been able to reproduce it
on both Windows and OSX, with 3GB and 5GB RAM, respectively. Interestingly,
i
Scott Marlowe wrote:
> My usage is dictated by using slony, which means I can't do the
> things I'd like, like just copying the data out of the table,
> truncating it, and copying it back in / renaming the newly created
> table as the old one etc. So for me, and other slony users who
> can't do
On Mon, Dec 14, 2009 at 1:55 PM, Kevin Grittner
wrote:
> David Schnur wrote:
>
>> For a bit of extra context, the database is embedded within a
>> piece of software, which is the source of the uncertainty. If it
>> was something I could manage myself, I would just run the VACUUM
>> FULL by hand
David Schnur wrote:
> For a bit of extra context, the database is embedded within a
> piece of software, which is the source of the uncertainty. If it
> was something I could manage myself, I would just run the VACUUM
> FULL by hand at those times when I knew it was necessary. As it
> stands,
>
> If you actually expect it to be re-used by the database sometime
> later, I would just stick with normal VACUUM (with adequate fsm
It may or may not be used again. Although disk is cheap, it is a
substantial amount of space, and I'd prefer it wasn't locked-up forever.
For a bit of extra con
David Schnur wrote:
> Regular VACUUM is fine most of the time; it frees up space for
> re-use, the space gets re-used, and the disk size stays constant.
> But at certain non-predictable points in time, the database may
> expire several times more rows than usual, and in that case I want
> to re
On Mon, Dec 14, 2009 at 5:53 PM, David Schnur wrote:
> Is my understanding of VACUUM vs. VACUUM FULL correct, actually? It
> appears to work that way, but the docs are a little vague on whether plain
> VACUUM ever frees actual disk space, or just reclaims it for the DB.
Basically. VACUUM does c
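The distinction the docs leave vague can be observed directly: plain VACUUM makes dead space reusable inside the file and only returns disk space to the OS when the trailing pages happen to be empty, while VACUUM FULL compacts the whole file. A sketch, with "t" as a placeholder table name:

```sql
SELECT relpages FROM pg_class WHERE relname = 't';  -- pages on disk before
VACUUM t;        -- frees space for re-use; relpages usually barely changes
VACUUM FULL t;   -- compacts the table; relpages drops to the live-data size
SELECT relpages FROM pg_class WHERE relname = 't';  -- pages on disk after
```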
On Mon, Dec 14, 2009 at 12:04 PM, Kevin Grittner <
kevin.gritt...@wicourts.gov> wrote:
> I hope you've been following that with a REINDEX every time;
> otherwise you're causing index bloat.
Yes, it REINDEXes afterwards.
Are these inserts happening in the same table(s) each time? If so,
> what
David Schnur wrote:
> I have a weekly task set up to VACUUM FULL a fairly large (~300M
> row ~50GB) database.
I hope you've been following that with a REINDEX every time;
otherwise you're causing index bloat.
> The intent is to free up disk space from especially-large inserts
> that aren't c
Hello,
I have a weekly task set up to VACUUM FULL a fairly large (~300M row ~50GB)
database. The intent is to free up disk space from especially-large inserts
that aren't covered by the regular reclamation from a daily VACUUM.
Recently, I've been getting the following error:
(OperationalError)
On Tue, Oct 20, 2009 at 9:03 PM, Alvaro Herrera
wrote:
> Greg Stark wrote:
>> On Mon, Oct 19, 2009 at 8:56 PM, Tom Lane wrote:
>> > Greg Stark writes:
>> >> The run-time of CLUSTER doesn't vary very much based on whether the
>> >> data is already in index order or not. The number of passes on
Greg Stark wrote:
> On Mon, Oct 19, 2009 at 8:56 PM, Tom Lane wrote:
> > Greg Stark writes:
> >> The run-time of CLUSTER doesn't vary very much based on whether the
> >> data is already in index order or not. The number of passes only grows
> >> like log(n) of the size of your data and if you
On Mon, Oct 19, 2009 at 8:56 PM, Tom Lane wrote:
> Greg Stark writes:
>> The run-time of CLUSTER doesn't vary very much based on whether the
>> data is already in index order or not. The number of passes only grows
>> like log(n) of the size of your data and if you set
>> maintenance_work_mem lar
Greg Stark writes:
> The run-time of CLUSTER doesn't vary very much based on whether the
> data is already in index order or not. The number of passes only grows
> like log(n) of the size of your data and if you set
> maintenance_work_mem large enough (somewhere around 100MB-1GB) the
> constants a
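The setting mentioned above is per-session, so the usual pattern looks like this (table and index names are placeholders):

```sql
-- Give the CLUSTER sort plenty of memory for this session only,
-- then rewrite the table in index order (takes an exclusive lock).
SET maintenance_work_mem = '512MB';
CLUSTER mytable USING mytable_pkey;  -- 8.3+ syntax
ANALYZE mytable;                     -- refresh planner statistics afterwards
```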
On Mon, Oct 19, 2009 at 7:37 PM, Scott Marlowe wrote:
>> It's trying to repack your table in place. On the whole, I'd suggest
>> you cancel the vacuum and try a CLUSTER or some such instead.
>
> Unless that table's already in (mostly) index order, even cluster is
> gonna be pretty painful.
> I'd
On Mon, Oct 19, 2009 at 4:38 PM, Tom Lane wrote:
> Krowa Krowax writes:
>> INFO: vacuuming "pg_toast.pg_toast_17929229"
>> INFO: "pg_toast_17929229": found 6203 removable, 257509 nonremovable
>> row versions in 20509016 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> Nonremovabl
Krowa Krowax writes:
> INFO: vacuuming "pg_toast.pg_toast_17929229"
> INFO: "pg_toast_17929229": found 6203 removable, 257509 nonremovable
> row versions in 20509016 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> Nonremovable row versions range from 45 to 2030 bytes long.
> There
Krowa Krowax wrote:
> Why does vacuum full stop responding? :(
If you give it enough time (which is hard to predict and possibly more
time than you want to allow), it will probably finish, eventually. It
is, however, almost never needed or desirable to use VACUUM FULL.
I recommend you just use
Hello,
I have a problem with VACUUM FULL on one table.
Table statistics:
  Statistic               Value
  Sequential Scans        455
  Sequential Tuples Read  114059927
  Index Scans             135554861
  Index Tuples Fetched    152172886
  Tuples Inserted         2281
Kevin Grittner wrote:
> Kevin Kempter wrote:
> > 2) can I safely kill the vacuum full and do a dump, drop table,
> > restore instead?
>
> Killing a VACUUM FULL can leave the table or its indexes more bloated
> than when you started, but it should have no other negative impact. I
> have genera
"Kevin Grittner" wrote:
> Killing a VACUUM FULL
To be clear, cancel the query on the backend, don't kill the process.
Some versions of PostgreSQL can corrupt data if a backend process is
killed.
-Kevin
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to y
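In SQL, the cancel-not-kill advice above looks like this; the pid is a placeholder and must be looked up first (the pg_stat_activity column is procpid on 8.x, pid on 9.2 and later):

```sql
-- Find the backend running the VACUUM FULL, then cancel just its query:
SELECT procpid, current_query FROM pg_stat_activity;  -- 8.x column names
SELECT pg_cancel_backend(12345);                      -- placeholder pid
```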
Kevin Kempter wrote:
> 1) is there any way to gain some visibility per the progress of the
> vacuum full
None that I know of, short of attaching to the backend process with a
debugger and poking at its guts.
> 2) can I safely kill the vacuum full and do a dump, drop table,
> restore instead
I'm running a vacuum full on a table due to excessive updates/deletes. It's
been running for more than an hour (it's about 3Gig in size)
2 questions:
1) is there any way to gain some visibility per the progress of the vacuum
full
2) can I safely kill the vacuum full and do a dump, drop table,
Kevin Grittner wrote:
> Yeah, I can't remember ever being glad I tried a VACUUM FULL. I think
> I once allowed it a week and a half to try to finish in a situation
> where there wasn't enough disk space for a rewrite (CLUSTER or the
> clever side-effect trick with ALTER TABLE). I even dropped al
Alvaro Herrera wrote:
> rare enough that we can suggest other workarounds when need arises.
Yeah, I can't remember ever being glad I tried a VACUUM FULL. I think
I once allowed it a week and a half to try to finish in a situation
where there wasn't enough disk space for a rewrite (CLUSTER or t
Tino Schwarze wrote:
> I remember several people (including myself) who used VACUUM FULL as a
> last resort when their database was bloated and they run out of disk
> space. But this seems to be moot, anyway, so it's probably better not to
> offer something that might or might not work in such a s
Hi Tom,
On Wed, Apr 01, 2009 at 08:58:54PM -0400, Tom Lane wrote:
> >> Maybe we should remove the code and make
> >> VACUUM FULL do the table-rewrite thing.
>
> > What do you mean with "the table-rewrite thing", exactly?
>
> Like CLUSTER, except not bothering to sort the rows: just seqscan the
Tino Schwarze writes:
>> Maybe we should remove the code and make
>> VACUUM FULL do the table-rewrite thing.
> What do you mean with "the table-rewrite thing", exactly?
Like CLUSTER, except not bothering to sort the rows: just seqscan the
table, enter all live tuples into a new relation file, th
On Wed, Apr 01, 2009 at 08:09:31PM -0400, Tom Lane wrote:
> > Nowadays, with many many new users, and no historical context, they do
> > just take it for "a better kind of vacuum" when in fact it is really
> > like reindex to indexes. I'd vote for rebuild [table]; as the new way
> > to spell vacu
Scott Marlowe writes:
> Nowadays, with many many new users, and no historical context, they do
> just take it for "a better kind of vacuum" when in fact it is really
> like reindex to indexes. I'd vote for rebuild [table]; as the new way
> to spell vacuum full;
Well, no, "rebuild" is a pretty lo
On Wed, Apr 1, 2009 at 4:10 PM, Tino Schwarze wrote:
> Hi Rafael,
>
> On Wed, Apr 01, 2009 at 06:51:00PM -0300, Rafael Domiciano wrote:
>
>> > > I have some doubts about Vacuum Full. There We go:
>> > > 1) The Only thing that Vacuum Full (Only Full, not Analyze) is to clean
>> > > "dead space" on
Hi Rafael,
On Wed, Apr 01, 2009 at 06:51:00PM -0300, Rafael Domiciano wrote:
> > > I have some doubts about Vacuum Full. There We go:
> > > 1) The Only thing that Vacuum Full (Only Full, not Analyze) is to clean
> > > "dead space" on the disc, and reorganize the relation at the physical
> > level
On Tue, Mar 31, 2009 at 1:20 PM, Scott Marlowe wrote:
> On Tue, Mar 31, 2009 at 10:04 AM, Rafael Domiciano
> wrote:
> > Hello People,
> >
> > I have some doubts about Vacuum Full. There We go:
> > 1) The Only thing that Vacuum Full (Only Full, not Analyze) is to clean
> > "dead space" on the disc
On Tue, Mar 31, 2009 at 10:04 AM, Rafael Domiciano
wrote:
> Hello People,
>
> I have some doubts about Vacuum Full. There We go:
> 1) The Only thing that Vacuum Full (Only Full, not Analyze) is to clean
> "dead space" on the disc, and reorganize the relation at the physical level?
> If it's true,
Hello People,
I have some doubts about Vacuum Full. Here we go:
1) Is the only thing that *Vacuum Full* (only Full, not Analyze) does to
clean "dead space" on the disk and reorganize the relation at the physical
level? If that's true, doing this may speed up *SELECTs*, while Postgres will
going
I don't have too many deletes and updates in the middle of the day to need
autovacuum, but at night, before the vacuum analyze, there is a process
that does deletes and updates; then vacuum analyze runs.
On Thu, Feb 19, 2009 at 5:11 PM, Scott Marlowe wrote:
> On Thu, Feb 19, 2009 at 11:01 AM, Rafae
On Thu, Feb 19, 2009 at 11:01 AM, Rafael Domiciano
wrote:
> I used to run vacuum full in one of my bases, but now i'm not running
> anymore vacuum full, just vacuum analyze in the past 1 month, but the number
> of necessary pages is increasing every day, now it's in 311264... there is
> any proble
I used to run vacuum full on one of my databases, but I haven't run
vacuum full anymore, just vacuum analyze, for the past month; the number
of necessary pages is increasing every day, and it's now at 311264... Is
there any problem with it increasing?
When I ran REINDEX a few days ago, this num
On Thu, Feb 19, 2009 at 9:35 AM, Jessica Richard wrote:
> I am running "vacuum full" via a shell script for a list of large databases
> now... and I may run out of my scheduled system down time
>
> If I don't finish all databases and kill the script in the middle... am I
> going to cause any t
I am running "vacuum full" via a shell script for a list of large databases
now... and I may run out of my scheduled system down time
If I don't finish all databases and kill the script in the middle... am I going
to cause any table corruptions since "vacuum full" is rebuilding the tables an
"Mikko Partio" <[EMAIL PROTECTED]> writes:
> Interesting. What is the planned substitute for VACUUM FULL? CLUSTER?
What's been discussed is something that works roughly like CLUSTER
but doesn't bother to sort the data any particular way (ie, just do
a seqscan not an indexscan in the data-copying phase
On Sun, Mar 30, 2008 at 6:40 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> It would be good to fix it I suppose, but what with VACUUM FULL being on
> the edge of deprecation anyway, it's hard to muster enthusiasm for doing
> a (probably) large additional amount of work on it, I think most
> hackers w
"Mikko Partio" <[EMAIL PROTECTED]> writes:
> On Sun, Mar 30, 2008 at 12:05 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> "Mikko Partio" <[EMAIL PROTECTED]> writes:
>>> 2008-03-29 22:25:15 EET [26841]: [3-1] PANIC: cannot abort transaction
>>> 3778747509, it was already committed
>>
>> Yeah, this is
On Sun, Mar 30, 2008 at 12:05 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Mikko Partio" <[EMAIL PROTECTED]> writes:
> > 2008-03-29 22:25:15 EET [26841]: [3-1] PANIC: cannot abort transaction
> > 3778747509, it was already committed
>
> Yeah, this is a well-known problem with VACUUM FULL ...
>
>
O
"Mikko Partio" <[EMAIL PROTECTED]> writes:
> 2008-03-29 22:25:15 EET [26841]: [3-1] PANIC: cannot abort transaction
> 3778747509, it was already committed
Yeah, this is a well-known problem with VACUUM FULL ...
regards, tom lane
Hello list
an interrupted vacuum full has just caused a PG instance to restart and
recover. Background:
select version();
version
--
PostgreSQL
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 2 May 2007 17:41
To: Gabriele Bartolini
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] VACUU
Peter Koczan wrote:
> I've noticed in my own experiments and experiences with VACUUM FULL that
> it tries to reindex all the indexes to compress them. While a good idea,
> this unfortunately takes a *long* time.
Huh, this is not an accurate description of what happens. VACUUM FULL
tries to k
On Wed, 2007-05-02 at 05:05, Gabriele Bartolini wrote:
> Hi guys,
>
>I am having problems with freeing disk space after a massive delete
> operation on a table that had approximately 80 million records. I
> ran the following command, by setting the vacuum memory to
> approximately a GigaByte:
I've noticed in my own experiments and experiences with VACUUM FULL that
it tries to reindex all the indexes to compress them. While a good idea,
this unfortunately takes a *long* time.
You should check two things. First, the server CPU usage should be high
(~100% if on a single core). Second,
Hi guys,
I am having problems with freeing disk space after a massive delete
operation on a table that had approximately 80 million records. I ran the
following command, by setting the vacuum memory to approximately a GigaByte:
SET vacuum_mem TO 1024000
VACUUM FULL ANALYSE VERBOSE oltp.requ
Thanks Tom...
The Linux box was completely idle (as you already guessed). There were
multiple locks on the table(s) in question. And to answer your question, we
are heavily using domain types. I suspect (to refresh your memory) that the
problem reported earlier was from Kevin Grittner. Do
[ I just noticed that this thread is happening on pgsql-admin, which is
completely inappropriate for discussing bugs in a beta version.
Please redirect followups to pgsql-hackers. ]
I wrote:
> ... The hypothesis I'm thinking about is that VACUUM is trying to do
> LockBufferForCleanup() and for
I hate to clutter the list with another post on this, but I just did
exactly what Tom asked me not to do, which is to confuse what
type of vacuum was run. The vacuum involved in the constraint
index problem was NOT a vacuum full, but a normal vacuum of
the database. Sorry for mis-stating the issu
My goal is to avoid vacuum full in production. My understanding is
that it is never necessary if vacuums are done aggressively enough,
but I felt that while we were in beta test mode, it was worthwhile for
me to have it run periodically, with the verbose option, to provide
information about where
On Mon, Oct 03, 2005 at 04:37:17PM -0500, Kevin Grittner wrote:
> We will use gdb and strace the next time we see this.
>
> I've tried to be specific about which vacuum is running in all cases. If
> the posts have been confusing on that issue, I apologize. I'll try to be
> clear on this in futu
We will use gdb and strace the next time we see this.
I've tried to be specific about which vacuum is running in all cases. If
the posts have been confusing on that issue, I apologize. I'll try to be
clear on this in future posts.
To summarize past events, the case involving the constraint in
On Mon, Oct 03, 2005 at 04:12:25PM -0400, Alvaro Herrera wrote:
> On Mon, Oct 03, 2005 at 02:41:14PM -0500, Jim C. Nasby wrote:
> > I haven't looked in detail at autovacuum in 8.1 yet, but I know that in
> > previous versions it was a bad idea to depend on it to vacuum a small
> > table that has a
On Mon, 2005-10-03 at 16:48 -0400, Tom Lane wrote:
> Finally, I'm wondering whether this bug is new in 8.1 or is
> pre-existing. Has this same application been running successfully
> in 8.0?
My money would be on a pre-existing bug. I recall two cases of
unexplained VACUUM hang in the last year.
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> However, I'm looking at the autovacuum code to see why it's sitting
> holding locks on the small table and not vacuuming it. I see on the
> pg_locks output that process 3158 (autovacuum) has got locks on the
> table and index, but it apparently isn't va
On Mon, Oct 03, 2005 at 02:41:14PM -0500, Jim C. Nasby wrote:
> I haven't looked in detail at autovacuum in 8.1 yet, but I know that in
> previous versions it was a bad idea to depend on it to vacuum a small
> table that has a lot of update activity frequently enough. The issue is
> that since it's
I haven't looked in detail at autovacuum in 8.1 yet, but I know that in
previous versions it was a bad idea to depend on it to vacuum a small
table that has a lot of update activity frequently enough. The issue is
that since it's single-threaded if it starts a vacuum on a large table
it could be ho
In my previous post I failed to mention that after we stopped the
applications and found that autovacuum remained idle for several
minutes, we restarted postgres before cleaning up the bloat on
the problem table.
Also, the log shows that autovacuum stopped kicking in after
4:43 p.m. on Friday. I
The previous report was from the same databases -- I'm a consultant
tasked with evaluating the various open source options and making
one of them work with our existing framework. Jeff developed a new
app (which is in beta testing) which is our first use of PostgreSQL
with real production load for