Are you calculating aggregates, and if so, how are you doing it? (I ask because of experience with a similar application, where I found that my aggregating PL/pgSQL triggers were bogging the system down, so I changed them to scheduled jobs instead.)
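A minimal sketch of what I mean (table and column names here are hypothetical, not from your schema): rather than firing a trigger on every insert to maintain a summary row, a scheduled job rebuilds the summary table in one pass, and the reporting queries read only the summary:

```sql
-- Hypothetical schema:
--   CREATE TABLE clicks (ad_id integer, ts timestamp);
--   CREATE TABLE click_summary (ad_id integer, day date, n bigint,
--                               PRIMARY KEY (ad_id, day));

-- Run from cron every few minutes instead of per-row trigger work
BEGIN;
DELETE FROM click_summary;
INSERT INTO click_summary (ad_id, day, n)
SELECT ad_id, date_trunc('day', ts)::date, count(*)
FROM clicks
GROUP BY ad_id, date_trunc('day', ts)::date;
COMMIT;
```

The point is that one sequential scan every few minutes is usually far cheaper than thousands of trigger invocations contending with your reporting queries.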
Alex Turner
NetEconomist

On 8/16/05, Ulrich Wisser <[EMAIL PROTECTED]> wrote:
> Hello,
>
> one of our services is click counting for online advertising. We do
> this by importing Apache log files every five minutes. This results in a
> lot of insert and delete statements. At the same time our customers
> shall be able to do online reporting.
>
> We have a box with
> Linux Fedora Core 3, Postgres 7.4.2
> Intel(R) Pentium(R) 4 CPU 2.40GHz
> 2 scsi 76GB disks (15.000RPM, 2ms)
>
> I did put pg_xlog on another file system on other discs.
>
> Still, when several users are online the reporting gets very slow.
> Queries can take more than 2 min.
>
> I need some ideas how to improve performance by some orders of
> magnitude. I already thought of a box with the whole database on a ram
> disc. So really any idea is welcome.
>
> Ulrich
>
> --
> Ulrich Wisser / System Developer
>
> RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden
> Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769
> http://www.relevanttraffic.com
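One more thought on the five-minute import itself: if the log lines are going in as individual INSERT statements, batching each import into a single COPY is usually a large win on 7.4 (table, columns, and file path below are hypothetical; the file is assumed to be in COPY's default tab-delimited format):

```sql
-- Bulk-load one batch of parsed log lines in a single statement;
-- this is far cheaper than thousands of single-row INSERTs and
-- holds locks for a much shorter time.
COPY clicks (ad_id, ts) FROM '/tmp/clicks_batch.dat';
```

With heavy insert/delete churn like this, also make sure VACUUM runs often enough (and the free space map is large enough) that the tables the reports scan don't bloat.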