Re: [PERFORM] database size growing continuously

2009-11-02 Thread Anj Adu
I would recommend (if at all possible) partitioning the table and dropping the old partitions when they are no longer needed. This guarantees the space is freed without VACUUM overhead. Deletes will kill you at some point, and you don't want too much VACUUM I/O overhead impacting your performance. On Mon,
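
The idea, as a minimal sketch (table and partition names are assumed here, not taken from the thread): a DELETE leaves dead tuples that VACUUM must later reclaim, while dropping a whole partition is a near-instant catalog operation that returns the disk space immediately.

    -- Slow path: row deletes leave dead tuples behind for VACUUM
    DELETE FROM events WHERE created < now() - interval '20 days';

    -- Fast path: if each day's rows live in their own child table,
    -- dropping the oldest child frees its space at once, no VACUUM
    DROP TABLE events_2009_10_09;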

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Merlin Moncure
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford scrawf...@pinpointresearch.com wrote: Peter Meszaros wrote: Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Jeremy Harris
On 10/30/2009 12:43 PM, Merlin Moncure wrote: On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford scrawf...@pinpointresearch.com wrote: Use a parent table and 20 child tables. Create a new child every day and drop the 20-day-old table. Table drops are far faster and lower-impact than delete-from
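
A hedged sketch of that scheme using the inheritance-based partitioning available in 8.3 (all table and column names are assumptions):

    -- Parent holds no rows; children carry one day of data each
    CREATE TABLE events (
        id      bigint,
        created timestamptz NOT NULL
        -- ... plus the other four columns from the original post
    );

    -- The CHECK constraint marks which slice of data a child holds
    CREATE TABLE events_2009_10_30 (
        CHECK (created >= '2009-10-30' AND created < '2009-10-31')
    ) INHERITS (events);
    CREATE INDEX events_2009_10_30_created
        ON events_2009_10_30 (created);

    -- Nightly cron: create tomorrow's child, drop the 20-day-old one
    DROP TABLE events_2009_10_10;

On 8.3 the clients (or an insert trigger on the parent) must route new rows into the current day's child; declarative partitioning did not arrive until much later.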

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Anj Adu
Any relational database worth its salt has partitioning for a reason. 1. Maintenance: you will need to delete data at some point (cleanup), and partitions are the only way to do it effectively. 2. Performance: partitioning offers a way to query smaller slices of data automatically (i.e. the query
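
Point 2 is what PostgreSQL calls constraint exclusion: when enabled, the planner skips any child whose CHECK constraint rules it out. A sketch against the hypothetical events hierarchy above:

    SET constraint_exclusion = on;  -- a plain boolean in 8.3

    EXPLAIN SELECT count(*)
      FROM events
     WHERE created >= '2009-10-30' AND created < '2009-10-31';
    -- plan scans only events_2009_10_30, not all ~20 children

Note that 8.3-era exclusion needs literal comparisons; a predicate written with now() is not excluded at plan time.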

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Greg Stark
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu fotogra...@gmail.com wrote: Any relational database worth its salt has partitioning for a reason. 1. Maintenance: you will need to delete data at some point (cleanup), and partitions are the only way to do it effectively. This is true and it's

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Anj Adu
Databases are designed to handle very large tables, but effectiveness is always in question. A full table scan on a single partition is always preferable to an FTS on a super-large table. The nature of the query will of course dictate performance, but you run into definite limitations with very

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Jeremy Harris
On 10/30/2009 08:01 PM, Greg Stark wrote: On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu fotogra...@gmail.com wrote: Any relational database worth its salt has partitioning for a reason. 1. Maintenance: you will need to delete data at some point (cleanup), and partitions are the only way to do it

Re: [PERFORM] database size growing continuously

2009-10-30 Thread Greg Stark
On Fri, Oct 30, 2009 at 1:18 PM, Jeremy Harris j...@wizmail.org wrote: So, on the becoming-more-intelligent front: PostgreSQL already does some operations as background maintenance (autovacuum). Extending this to de-bloat indices does not seem conceptually impossible. It could be done but

[PERFORM] database size growing continuously

2009-10-29 Thread Peter Meszaros
Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The table stores approximately 120 million records, because a cron job
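
The preview is cut off, but the reported pattern can be reconstructed as a hedged sketch (every name below is invented; the original column list is unknown):

    -- A six-column queue table with two indexes
    CREATE TABLE msg_queue (
        id      bigint,
        created timestamptz NOT NULL,
        payload text
        -- ... three further columns in the original
    );
    CREATE INDEX msg_queue_created_idx ON msg_queue (created);
    CREATE INDEX msg_queue_id_idx      ON msg_queue (id);

    -- The cron job presumably trims the oldest rows, e.g.
    DELETE FROM msg_queue WHERE created < now() - interval '20 days';

    -- The symptom: on-disk size keeps growing although the row
    -- count holds steady around 120 million
    SELECT pg_size_pretty(pg_total_relation_size('msg_queue'));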

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Josh Rovero
On Thu, 2009-10-29 at 15:44 +0100, Peter Meszaros wrote: Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Matthew Wakeling
On Thu, 29 Oct 2009, Josh Rovero wrote: Do you ever vacuum full to reclaim empty record space? Unless you expect the size of the database to permanently decrease by a significant amount, that is a waste of time, and may cause bloat in indexes. In this case, since the space will be used again
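
To make the distinction concrete (table name carried over from the hypothetical sketch above): plain VACUUM marks dead space reusable, so a steady-state queue stops growing; VACUUM FULL compacts the heap under an exclusive lock and, on 8.3, tends to bloat the indexes it has to update.

    VACUUM ANALYZE msg_queue;  -- routine: dead space gets recycled

    -- VACUUM FULL msg_queue;  -- usually a waste here: long
                               -- exclusive lock plus index bloat

    -- Comparing heap size to heap-plus-index size hints at where
    -- any bloat actually lives
    SELECT pg_size_pretty(pg_relation_size('msg_queue'))       AS heap,
           pg_size_pretty(pg_total_relation_size('msg_queue')) AS total;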

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Ludwik Dylag
2009/10/29 Peter Meszaros p...@prolan.hu Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The table stores

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Chris Ernst
Hi Peter, Sounds like you're experiencing index bloat, and vacuums do nothing to help that. You can do one of two things to remedy this: 1) The fastest and simplest (but most disruptive) way is to use REINDEX, but this will exclusively lock the table while rebuilding the indexes: REINDEX TABLE
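
The preview cuts off before option 2. Option 1 as quoted, plus a common low-lock alternative available since 8.2 (the swap below is an assumption, not a quote from the truncated message; names come from the hypothetical sketch above):

    -- Option 1: simple, but takes an exclusive lock for the duration
    REINDEX TABLE msg_queue;

    -- Assumed option 2: rebuild the index concurrently, then swap
    CREATE INDEX CONCURRENTLY msg_queue_created_idx_new
        ON msg_queue (created);
    DROP INDEX msg_queue_created_idx;
    ALTER INDEX msg_queue_created_idx_new
        RENAME TO msg_queue_created_idx;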

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Joshua D. Drake
On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote: 2009/10/29 Peter Meszaros p...@prolan.hu Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Steve Crawford
Peter Meszaros wrote: Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8 clients. The table stores approximately 120 million

Re: [PERFORM] database size growing continuously

2009-10-29 Thread Scott Marlowe
On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros p...@prolan.hu wrote: Hi All, I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously committed every 10 seconds from 8