>Cc: pgsql-performance@postgresql.org
>Sent: Friday, August 17, 2012 7:33 PM
>Subject: Re: [PERFORM] Index Bloat Problem
>
On Thu, Aug 16, 2012 at 12:57 PM, Strahinja Kustudić
wrote:
>
> @Jeff I'm not sure if I understand what you mean? I know that we never reuse
> key ranges. Could you be more clear, or give an example please.
If an index leaf page is completely empty because every entry on it
was deleted, it will
Thanks for the help everyone and sorry for not replying sooner, I was on
a business trip.
@Hubert pg_reorg looks really interesting and from the first read it looks
to be a very good solution for maintenance, but for now I would rather try
to slow down, or remove this bloat, so I have to do as les
On Fri, Aug 10, 2012 at 3:15 PM, Strahinja Kustudić
wrote:
>
> For example, yesterday when I checked the database size on the production
> server it was 30GB, and the restored dump of that database was only 17GB.
> The most interesting thing is that the data wasn't bloated that much, but
> the ind
On 11/08/12 10:15, Strahinja Kustudić wrote:
We have PostgreSQL 9.1 running on CentOS 5 on two SSDs, one for indices and
one for data. The database is extremely active with reads and writes. We
have autovacuum enabled, but we didn't tweak its aggressiveness. The
problem is that after some time t
On Sat, Aug 11, 2012 at 12:15:11AM +0200, Strahinja Kustudić wrote:
> Is there a way to make the autovacuum daemon more aggressive, since I'm not
> exactly sure how to do that in this case? Would that even help? Is there
> another way to remove this index bloat?
http://www.depesz.com/index.php/201
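As a sketch of what "more aggressive" can look like on 9.1, the autovacuum thresholds can be lowered per table via storage parameters (the table name and values below are illustrative, not recommendations):

```sql
-- Illustrative: make autovacuum fire on this heavily-written table after
-- ~5% of rows change instead of the 20% default, and let it work faster.
ALTER TABLE events SET (
    autovacuum_vacuum_scale_factor  = 0.05,
    autovacuum_analyze_scale_factor = 0.02,
    autovacuum_vacuum_cost_delay    = 10   -- ms; lower = less throttling
);
```

Per-table settings avoid making autovacuum aggressive across the whole cluster when only a few hot tables bloat.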
We have PostgreSQL 9.1 running on CentOS 5 on two SSDs, one for indices and
one for data. The database is extremely active with reads and writes. We
have autovacuum enabled, but we didn't tweak its aggressiveness. The
problem is that after some time the database grows even more than 100% on
the fi
Quoting Bill Chandler <[EMAIL PROTECTED]>:
> Running PostgreSQL 7.4.2, Solaris.
> Client is reporting that the size of an index is
> greater than the number of rows in the table (1.9
> million vs. 1.5 million). Index was automatically
> created from a 'bigserial unique' column.
> We have been ru
"David Roussel" <[EMAIL PROTECTED]> writes:
> Note there is no reference to iso_pjm_data_update_events_event_id_key
> which is the index that went wacky on us. Does that seem weird to you?
What that says is that that index doesn't belong to that table. You
sure it wasn't a chance coincidence of
You would be interested in
http://archives.postgresql.org/pgsql-hackers/2005-04/msg00565.php
On Thu, Apr 21, 2005 at 03:33:05PM -0400, Dave Chapeskie wrote:
> On Thu, Apr 21, 2005 at 11:28:43AM -0700, Josh Berkus wrote:
> > Michael,
> >
> > > Every five minutes, DBCC INDEXDEFRAG will report t
On Fri, 22 Apr 2005 10:06:33 -0400, "Tom Lane" <[EMAIL PROTECTED]> said:
> David Roussel <[EMAIL PROTECTED]> writes:
> > | dave_data_update_events               r  1593600.0  40209
> > | dave_data_update_events_event_id_key  i  1912320.0  29271
>
> Hmm ... what PG version is this, and what
David Roussel <[EMAIL PROTECTED]> writes:
> | dave_data_update_events               r  1593600.0  40209
> | dave_data_update_events_event_id_key  i  1912320.0  29271
Hmm ... what PG version is this, and what does VACUUM VERBOSE on
that table show?
regards, tom lane
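For reference, the check Tom is asking for is just a verbose vacuum on the suspect table (table name taken from the thread; the exact output format varies by version):

```sql
-- The per-index lines in the output report how many index row versions
-- and index pages were removed, which is what reveals index bloat.
VACUUM VERBOSE dave_data_update_events;
```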
On 22 Apr 2005, at 06:57, Tom Lane wrote:
Bill Chandler <[EMAIL PROTECTED]> writes:
Client is reporting that the size of an index is
greater than the number of rows in the table (1.9
million vs. 1.5 million).
This thread seems to have wandered away without asking the critical
question "what did you
Bill Chandler <[EMAIL PROTECTED]> writes:
> Client is reporting that the size of an index is
> greater than the number of rows in the table (1.9
> million vs. 1.5 million).
This thread seems to have wandered away without asking the critical
question "what did you mean by that?"
It's not possible
Bill Chandler wrote:
Mischa,
Thanks. Yes, I understand that not having a large
enough max_fsm_pages is a problem and I think that it
is most likely the case for the client. What I wasn't
sure of was if the index bloat we're seeing is the
result of the "bleeding" you're talking about or
something
Bill,
> If I deleted 75% of the rows but had a max_fsm_pages
> setting that still exceeded the pages required (as
> indicated in VACUUM output), would that solve my
> indexing problem or would I still need to REINDEX
> after such a purge?
Depends on the performance you're expecting. The FSM re
Mischa,
Thanks. Yes, I understand that not having a large
enough max_fsm_pages is a problem and I think that it
is most likely the case for the client. What I wasn't
sure of was if the index bloat we're seeing is the
result of the "bleeding" you're talking about or
something else.
If I deleted
Quoting Bill Chandler <[EMAIL PROTECTED]>:
> ... The normal activity is to delete 3-5% of the rows per day,
> followed by a VACUUM ANALYZE.
...
> However, on occasion, deleting 75% of rows is a
> legitimate action for the client to take.
> > In case nobody else has asked: is your max_fsm_page
Dave,
> See http://archives.postgresql.org/pgsql-general/2005-03/msg01465.php
> for my thoughts on a non-blocking alternative to REINDEX. I got no
> replies to that message. :-(
Well, sometimes you have to be pushy. Say, "Hey, comments please?"
The hackers list is about 75 posts a day, it's e
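As a side note for readers on later releases: since PostgreSQL 8.2 a non-blocking rebuild can be approximated by building a replacement index concurrently and swapping it in (index and table names here are illustrative; unique constraints need extra handling):

```sql
-- Build a duplicate index without taking an exclusive lock (8.2+).
CREATE INDEX CONCURRENTLY events_event_id_new ON events (event_id);
-- Swap it in; DROP INDEX still takes a brief exclusive lock.
DROP INDEX events_event_id_key;
ALTER INDEX events_event_id_new RENAME TO events_event_id_key;
```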
On Thu, Apr 21, 2005 at 11:28:43AM -0700, Josh Berkus wrote:
> Michael,
>
> > Every five minutes, DBCC INDEXDEFRAG will report to the user an
> > estimated percentage completed. DBCC INDEXDEFRAG can be terminated at
> > any point in the process, and *any completed work is retained.*"
>
> Keen
Same thing happens in Oracle:
ALTER INDEX ... REBUILD
to force a rebuild. It will mark the free blocks as 'free' below the
PCTFREE value for the tablespace.
Basically, if you build an index with entries, and each entry is
1/4 of a block, the database will write 2500 blocks to the disk. If
you
--- [EMAIL PROTECTED] wrote:
> I gather you mean, out-of-the-ordinary for most
> apps, but not for this client?
Actually, no. The normal activity is to delete 3-5%
of the rows per day, followed by a VACUUM ANALYZE.
Then over the course of the day (in multiple
transactions) about the same amount
Michael,
> Every five minutes, DBCC INDEXDEFRAG will report to the user an
> estimated percentage completed. DBCC INDEXDEFRAG can be terminated at
> any point in the process, and *any completed work is retained.*"
Keen. Sounds like something for our TODO list.
--
Josh Berkus
Aglio Database
Is this a common issue among all RDBMSs or is it
something that is PostgreSQL specific?
Speaking from experience, this sort of thing affects MSSQL as well, although
the maintenance routines are different.
Yes, this is true with MSSQL too, however SQL Server implements a defrag
index
josh@agliodbs.com (Josh Berkus) writes:
> Bill,
>
>> What about if an out-of-the-ordinary number of rows
>> were deleted (say 75% of rows in the table, as opposed
>> to normal 5%) followed by a 'VACUUM ANALYZE'? Could
>> things get out of whack because of that situation?
>
> Yes. You'd want to ru
Alex,
> REINDEX DATABASE blah
>
> supposed to rebuild all indices in the database, or must you specify
> each table individualy? (I'm asking because I just tried it and it
> only did system tables)
"DATABASE
Recreate all system indexes of a specified database. Indexes on user tables
are not pr
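Given that doc quote, on releases of that era user indexes have to be rebuilt explicitly; a minimal sketch, with table and index names taken from elsewhere in the thread for illustration:

```sql
-- REINDEX DATABASE only touches system indexes on old releases,
-- so rebuild user indexes per table or per index:
REINDEX TABLE dave_data_update_events;
REINDEX INDEX dave_data_update_events_event_id_key;
```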
Is:
REINDEX DATABASE blah
supposed to rebuild all indices in the database, or must you specify
each table individually? (I'm asking because I just tried it and it
only did system tables)
Alex Turner
netEconomist
On 4/21/05, Josh Berkus wrote:
> Bill,
>
> > What about if an out-of-the-ordinary
Bill,
> Honestly, this seems like an inordinate amount of
> babysitting for a production application. I'm not
> sure if the client will be willing to accept it.
Well, then, tell them not to delete 75% of the rows in a table at once. I
imagine that operation brought processing to a halt, too.
--- Josh Berkus wrote:
> Bill,
>
> > What about if an out-of-the-ordinary number of rows
> > were deleted (say 75% of rows in the table, as opposed
> > to normal 5%) followed by a 'VACUUM ANALYZE'? Could
> > things get out of whack because of that situation?
>
> Yes. You'd want to run R
Bill,
> What about if an out-of-the-ordinary number of rows
> were deleted (say 75% of rows in the table, as opposed
> to normal 5%) followed by a 'VACUUM ANALYZE'? Could
> things get out of whack because of that situation?
Yes. You'd want to run REINDEX after an event like that. As you shoul
All,
Running PostgreSQL 7.4.2, Solaris.
Client is reporting that the size of an index is
greater than the number of rows in the table (1.9
million vs. 1.5 million). Index was automatically
created from a 'bigserial unique' column.
Database contains several tables with exactly the same
columns (
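On modern PostgreSQL, the size-vs-rows comparison the client is describing can be checked directly (relation names are illustrative, reusing ones from the thread; pg_relation_size did not exist yet in 7.4):

```sql
-- Compare the on-disk size of a table and its indexes, plus the planner's
-- row estimate, to spot indexes outsized relative to their table.
SELECT c.relname,
       c.relkind,                          -- 'r' = table, 'i' = index
       c.reltuples::bigint AS est_rows,
       pg_size_pretty(pg_relation_size(c.oid)) AS on_disk
FROM pg_class c
WHERE c.relname LIKE 'dave_data_update_events%'
ORDER BY c.relkind, c.relname;
```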