On Apr 1, 2010, at 1:42 PM, Tom Lane wrote:
> Scott Carey writes:
>> Still off topic:
>
>> Will CLUSTER/VF respect FILLFACTOR in 9.0?
>
>> As far as I can tell in 8.4, it does not.
>
> Works for me, in both branches.
>
I stand corrected. I must have done something wrong in my test. On a
Scott Carey writes:
> Still off topic:
> Will CLUSTER/VF respect FILLFACTOR in 9.0?
> As far as I can tell in 8.4, it does not.
Works for me, in both branches.
regards, tom lane
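For anyone who wants to re-run the check, a minimal sketch (table and index
names here are hypothetical):

  ALTER TABLE orders SET (fillfactor = 70);
  CREATE INDEX orders_created_idx ON orders (created_at);
  CLUSTER orders USING orders_created_idx;
  -- with fillfactor = 70 the rewritten heap should come out roughly
  -- 100/70 the size of a fully packed one:
  SELECT pg_size_pretty(pg_relation_size('orders'));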
On Mar 31, 2010, at 1:47 PM, Robert Haas wrote:
> On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey wrote:
>> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
>>>
>>> Dont "VACUUM FULL", its not helping you, and is being removed in newer
>>> versions.
>>>
>>
>> Off topic: How is that going to work? CLUSTER doesn't work on tables without an index.
Scott Carey wrote:
>
> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
> >
> > Don't "VACUUM FULL"; it's not helping you, and is being removed in newer
> > versions.
> >
>
> Off topic: How is that going to work? CLUSTER doesn't work on tables
> without an index. I would love to be able to CLUSTER on some column set that doesn't
On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey wrote:
> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
>>
>> Don't "VACUUM FULL"; it's not helping you, and is being removed in newer
>> versions.
>>
>
> Off topic: How is that going to work? CLUSTER doesn't work on tables
> without an index. I would love to be able to CLUSTER on some column set that doesn't
On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
>
> Don't "VACUUM FULL"; it's not helping you, and is being removed in newer
> versions.
>
Off topic: How is that going to work? CLUSTER doesn't work on tables without
an index. I would love to be able to CLUSTER on some column set that doesn't
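For context: CLUSTER needs an index to order by, and in 9.0 the old in-place
VACUUM FULL was replaced by a CLUSTER-style full rewrite that needs no index,
which is how the removal "works". A sketch with hypothetical names:

  -- pre-9.0: CLUSTER requires an index
  CREATE INDEX queue_time_idx ON queue (event_time);
  CLUSTER queue USING queue_time_idx;
  -- 9.0+: VACUUM FULL rewrites the whole table without needing an index
  VACUUM FULL queue;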
Please don't cc two of the lists here. It makes things difficult for
users who subscribe to only one of the lists: when they reply, their post
to the other list will be held for moderation. And that's a pain for
the moderators too. In this case, either the pgsql-admin or
pgsql-performance list
On 3/30/2010 6:17 AM, Gnanakumar wrote:
We're using pgpool-II version 2.0.1 for PostgreSQL connection management.
pgpool configurations are:
num_init_children = 450
child_life_time = 300
connection_life_time = 120
child_max_connections = 30
As you recommended, I ran "ps -ax|grep postgres" at al
Saturday, March 27, 2010 7:06 PM
To: Gnanakumar; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database size growing over time and leads to
performance impact
On 03/27/2010 08:00 AM, Gnanakumar wrote:
> Hi,
>
> We're using PostgreSQL 8.2. Recently, in our production database, there was a severe performance impact.
Pierre C wrote:
If you realize you got a bloat problem, for instance due to a
misconfigured vacuum, use CLUSTER, which re-generates table AND index
data, and besides, having your table clustered on an index of your
choice can boost performance quite a lot in some circumstances.
8.2 is so old
1. VACUUM FULL ANALYZE once a week during low-usage time and
VACUUM FULL compacts tables, but tends to bloat indexes. Running it weekly
is NOT RECOMMENDED.
A correctly configured autovacuum (or manual vacuum in some circumstances)
should keep your DB healthy, and you shouldn't need VACUUM FULL at all.
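As a rough illustration of what "correctly configured autovacuum" means,
these are the knobs involved (values are illustrative, not recommendations):

  # postgresql.conf
  autovacuum = on
  autovacuum_naptime = 1min
  autovacuum_vacuum_threshold = 500
  autovacuum_vacuum_scale_factor = 0.1    # vacuum after ~10% of a table changes
  autovacuum_analyze_scale_factor = 0.05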
On 03/27/2010 08:00 AM, Gnanakumar wrote:
Hi,
We're using PostgreSQL 8.2. Recently, in our production database, there
was a severe performance impact. Even though we're regularly doing both:
1. VACUUM FULL ANALYZE once a week during low-usage time and
2. ANALYZE every day at low-usage time
Hi,
We're using PostgreSQL 8.2. Recently, in our production database, there was
a severe performance impact. Even though we're regularly doing both:
1. VACUUM FULL ANALYZE once a week during low-usage time and
2. ANALYZE every day at low-usage time
Also, we noticed that the
On Mon, Nov 2, 2009 at 7:50 AM, Peter Meszaros wrote:
> Increasing max_fsm_pages can also be helpful, but I've read that
> 'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.
> I've never seen such a message; this command either runs and finishes or
> goes into an endless loop as it
I would recommend (if at all possible) partitioning the table and dropping
the old partitions when they are no longer needed. This guarantees the space
is freed without VACUUM overhead. Deletes will kill you at some point,
and you don't want too much of the VACUUM I/O overhead impacting your
performance.
On Mon, Nov
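To illustrate the drop-versus-delete point above (table names are hypothetical):

  -- deleting old rows leaves dead tuples behind for VACUUM to reclaim:
  DELETE FROM events WHERE event_time < now() - interval '20 days';
  -- dropping an old partition gives the disk space back immediately:
  DROP TABLE events_2009_10_09;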
Thank you all for the fast responses!
I changed the delete schedule from daily to hourly and I will let you
know the result. This seems to be the most promising step.
The next one is tuning 'max_fsm_pages'.
Increasing max_fsm_pages can also be helpful, but I've read that
'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.
On Fri, Oct 30, 2009 at 1:18 PM, Jeremy Harris wrote:
> So, on the becoming more intelligent front: PostgreSQL already does
> some operations as background maintenance (autovacuum). Extending
> this to de-bloat indices does not seem conceptually impossible
It could be done but it's not easy bec
On 10/30/2009 08:01 PM, Greg Stark wrote:
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
Any relational database worth its salt has partitioning for a reason.
1. Maintenance. You will need to delete data at some
point.(cleanup)...Partitions are the only way to do it effectively.
This is t
Databases are designed to handle very large tables, but effectiveness
is always in question. A full table scan on a partitioned table is
always preferable to an FTS on a super large table. The nature of the
query will of course dictate performance, but you run into definite
limitations with very large tables.
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
> Any relational database worth its salt has partitioning for a reason.
>
> 1. Maintenance. You will need to delete data at some
> point (cleanup). Partitions are the only way to do it effectively.
This is true and it's unavoidably a manual process
Any relational database worth its salt has partitioning for a reason.
1. Maintenance. You will need to delete data at some
point (cleanup). Partitions are the only way to do it effectively.
2. Performance. Partitioning offers a way to query smaller slices of
data automatically (i.e. the query optimizer
On 10/30/2009 12:43 PM, Merlin Moncure wrote:
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford
wrote:
Use a parent table and 20 child tables. Create a new child every day and
drop the 20-day-old table. Table drops are far faster and lower-impact than
a delete from a 120-million-row table. Index-
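A minimal sketch of that scheme using 8.3-era inheritance partitioning
(table and column names are hypothetical):

  CREATE TABLE events (id bigint, event_time timestamptz, payload text);
  -- one child per day, with a CHECK constraint so constraint_exclusion
  -- can skip it when a query doesn't touch that day:
  CREATE TABLE events_2009_10_29 (
      CHECK (event_time >= '2009-10-29' AND event_time < '2009-10-30')
  ) INHERITS (events);
  CREATE INDEX events_2009_10_29_time_idx ON events_2009_10_29 (event_time);
  -- daily job: create tomorrow's child, then drop the 20-day-old one:
  DROP TABLE events_2009_10_09;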
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford
wrote:
> Peter Meszaros wrote:
>>
>> Hi All,
>>
>> I use postgresql 8.3.7 as a huge queue. There is a very simple table
>> with six columns and two indices, and about 6 million records are
>> written into it every day, continuously committed every 10 seconds from 8 clients.
On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records.
Peter Meszaros wrote:
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million records.
On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote:
> 2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from 8 clients.
Hi Peter,
Sounds like you're experiencing index bloat, and vacuums do nothing to
help that. You can do one of two things to remedy this:
1) The fastest and simplest (but most disruptive) way is to use REINDEX.
But this will exclusively lock the table while rebuilding the indexes:
REINDEX TABLE p
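Option 2 is cut off above. For reference, a sketch of option 1 plus a
commonly used lower-lock index rebuild (index and table names are
hypothetical, and the second half is an assumption about what option 2
described, not a quote):

  -- option 1: simple, but takes an exclusive lock for the duration:
  REINDEX TABLE queue;
  -- lower-impact rebuild of a single index (CREATE INDEX CONCURRENTLY, 8.2+):
  CREATE INDEX CONCURRENTLY queue_time_idx_new ON queue (event_time);
  DROP INDEX queue_time_idx;
  ALTER INDEX queue_time_idx_new RENAME TO queue_time_idx;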
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records.
On Thu, 29 Oct 2009, Josh Rovero wrote:
Do you ever "vacuum full" to reclaim empty record space?
Unless you expect the size of the database to permanently decrease by a
significant amount, that is a waste of time, and may cause bloat in
indexes. In this case, since the space will be used again
On Thu, 2009-10-29 at 15:44 +0100, Peter Meszaros wrote:
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
> with six columns and two indices, and about 6 million records are
> written into it every day, continuously committed every 10 seconds from
> 8 clients. The table stores approximately 120 million records.
I would recommend increasing max_fsm_pages and shared_buffers.
These changes did speed up vacuum full on my database.
With shared_buffers, remember to increase the max shared memory in your OS.
Ludwik
2009/10/29 Peter Meszaros
> Hi All,
>
> I use postgresql 8.3.7 as a huge queue. There is a very simple table
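Concretely, that advice translates to something like this (values are
illustrative; max_fsm_pages exists only up to 8.3 and was removed in 8.4):

  # postgresql.conf
  max_fsm_pages = 2000000      # must cover all pages with reusable free space
  shared_buffers = 512MB
  # raise the kernel's shared memory ceiling to match, e.g. on Linux:
  # sysctl -w kernel.shmmax=1073741824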
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million records, because a
cron job deletes the oldest records daily.
2008/8/1 Matthew Wakeling <[EMAIL PROTECTED]>:
> On Thu, 31 Jul 2008, Andrzej Zawadzki wrote:
>>
>> Maybe I'm wrong, but if this "bulk insert and delete" process is cyclical,
>> then you don't need VACUUM FULL.
>> Released tuples will fill up again with fresh data the next day - after a
>> regular vacuum.
On Thu, 31 Jul 2008, Andrzej Zawadzki wrote:
Maybe I'm wrong, but if this "bulk insert and delete" process is cyclical, then
you don't need VACUUM FULL.
Released tuples will fill up again with fresh data the next day - after a
regular vacuum.
Yes, a regular manual vacuum will prevent the table from growing
Mark Roberts wrote:
On Wed, 2008-07-30 at 13:51 -0400, Tom Lane wrote:
Huh? Vacuum doesn't block writes.
regards, tom lane
Of course, you are correct. I was thinking of VACUUM FULL, which is
recommended for use when you're deleting the majority of rows in a table.
On Wed, 2008-07-30 at 23:58 +0200, Miernik wrote:
> I have a similar, but different situation, where I TRUNCATE a table with
> 60k rows every hour, and refill it with new rows. Would it be better
> (concerning bloat) to just DROP the table every hour, and recreate it,
> than to TRUNCATE it? Or d
Valentin Bogdanov <[EMAIL PROTECTED]> wrote:
> I am guessing that you are using DELETE to remove the 75,000
> unimportant. Change your batch job to CREATE a new table consisting
> only of the 5,000 important. You can use "CREATE TABLE table_name AS
> select_statement" command. Then drop the old ta
On Wed, 2008-07-30 at 13:51 -0400, Tom Lane wrote:
>
>
> Huh? Vacuum doesn't block writes.
>
> regards, tom lane
>
Of course, you are correct. I was thinking of VACUUM FULL, which is
recommended for use when you're deleting the majority of rows in a
table.
http://ww
Mark Roberts <[EMAIL PROTECTED]> writes:
> On Wed, 2008-07-30 at 17:16 +0100, Matthew Wakeling wrote:
>> Anyway, surely it's much safer to just run VACUUM manually?
> Generally, you would think so. The problem comes from Vacuum blocking
> the application process' writes.
Huh? Vacuum doesn't block writes.
Dave North wrote:
[...]
I'd suggest re-tuning as follows:
1) Increase shared_buffers to 10,000, test. Things should be
a bit faster.
2) Increase checkpoint_segments to 30, test. What you want
to watch for here is whether there are periods where the server
seems to freeze for a couple of
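In postgresql.conf terms (8.1-era settings, where shared_buffers is counted
in 8 kB pages, so 10,000 is roughly 80 MB):

  shared_buffers = 10000       # ~80 MB
  checkpoint_segments = 30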
> -Original Message-
> From: Greg Smith [mailto:[EMAIL PROTECTED]
> Sent: July 30, 2008 12:48 PM
> To: Dave North
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Database size Vs performance degradation
>
> On Wed, 30 Jul 2008, Dave North wrote
Dave North wrote:
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Well, that's pretty much the definition of bloat. Are you sure you're
vacuuming enough?
DN: Well, the auto-vac is kicking off pretty darn frequently...around
once every 2 minutes. However, you just
On Wed, 2008-07-30 at 17:16 +0100, Matthew Wakeling wrote:
>
> I believe this SQL snippet could cause data loss, because there is a
> period during which writes can be made to the old table that will not
> be
> copied to the new table.
It could indeed cause data loss.
> On a side note, I wou
On Wed, 30 Jul 2008, Dave North wrote:
One observation I've made on the DB system is the disk I/O seems
dreadfully slow...we're at around 75% I/O wait sometimes and the read
rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for
un-cached reads).
This is typically what happens when
On Wed, 30 Jul 2008, Craig James wrote:
You don't have to change the application. One of the great advantages of
Postgres is that even table creation, dropping and renaming are
transactional. So you can do the select / drop / rename as a transaction by
an external app, and your main applicati
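A sketch of that transactional swap; the explicit lock closes the lost-write
window raised earlier in the thread (names are hypothetical):

  BEGIN;
  LOCK TABLE session_data IN ACCESS EXCLUSIVE MODE;  -- block writers during the swap
  CREATE TABLE session_data_new AS
      SELECT * FROM session_data WHERE keep;
  DROP TABLE session_data;
  ALTER TABLE session_data_new RENAME TO session_data;
  COMMIT;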
On Wed, 2008-07-30 at 10:02 -0500, Dave North wrote:
> Thank you for the suggestion..much appreciated. Alas, I don't think
> this will be possible without a change to the application but it's a
> good idea nonetheless.
Affirmative, Dave. I read you.
If I were in your situation (not having access
Dave North wrote:
Thank you for the suggestion..much appreciated. Alas, I don't think
this will be possible without a change to the application but it's a
good idea nonetheless.
I assume you mean the "create table as select ..." suggestion (don't forget to
include a little quoted material so
"Dave North" <[EMAIL PROTECTED]> writes:
> From: Richard Huxton [mailto:[EMAIL PROTECTED]
>> Well, that's pretty much the definition of bloat. Are you sure you're
>> vacuuming enough?
> DN: Well, the auto-vac is kicking off pretty darn frequently...around
> once every 2 minutes. However, you jus
-Original Message-
> From: Valentin Bogdanov [mailto:[EMAIL PROTECTED]
> Sent: July 30, 2008 10:58 AM
> To: pgsql-performance@postgresql.org; Dave North
> Subject: Re: [PERFORM] Database size Vs performance degradation
>
> I am guessing that you are using DELETE to remove the 75,000
h <[EMAIL PROTECTED]>
> Subject: [PERFORM] Database size Vs performance degradation
> To: pgsql-performance@postgresql.org
> Date: Wednesday, 30 July, 2008, 1:09 PM
> Morning folks,
> Long time listener, first time poster. Having an interesting
> problem related to performance which I'll try and describe below
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matthew
Wakeling
Sent: July 30, 2008 8:37 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database size Vs performance degradation
On Wed, 30 Jul 2008, Dave North wrote:
> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: July 30, 2008 8:28 AM
To: Dave North
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database size Vs performance degradation
Dave North wrote:
> Morning folks,
> Long time listener, first time poster.
On Wed, 30 Jul 2008, Dave North wrote:
Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
Checking the stats, the DB size is around 7.5GB;
Doesn't fit in RAM.
...after the load, the DB size was around 2.7GB
Does fit in RAM.
One observation I've made on the DB system is the disk I/O seems dreadfully slow...
Dave North wrote:
Morning folks,
Long time listener, first time poster.
Hi Dave
Postgres 8.1.8
shared_buffers = 2000
max_fsm_pages = 40
Redhat Enterprise 4
Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
Also running on the server is a tomcat web server a
Morning folks,
Long time listener, first time poster. Having an interesting
problem related to performance which I'll try and describe below and
hopefully get some enlightenment. First the environment:
Postgres 8.1.8
shared_buffers = 2000
max_fsm_pages = 40
Redhat Enterprise 4
07 7:15 PM
Subject: Re: [PERFORM] Database size
choksi writes:
I had a database which used to hold some 50 million records, and the disk
space used was 103 GB. I deleted around 34 million records, but the
disk size is still the same. Can someone please shed some light on this.
When records are deleted, they are only marked in the database.
choksi writes:
I had a database which used to hold some 50 million records, and the disk
space used was 103 GB. I deleted around 34 million records, but the
disk size is still the same. Can someone please shed some light on this.
When records are deleted, they are only marked in the database.
When you run vacuum
On Fri, 08.06.2007 at 1:22:14 -0700, choksi wrote:
> Hi all,
>
> I had a database which used to hold some 50 million records, and the disk
> space used was 103 GB. I deleted around 34 million records, but the
> disk size is still the same. Can someone please shed some light on this.
DELETE only
Hi all,
I had a database which used to hold some 50 million records, and the disk
space used was 103 GB. I deleted around 34 million records, but the
disk size is still the same. Can someone please shed some light on this.
Thanks in advance for all the help.
Dhawal Choksi