Hi again,

First I want to say ***THANK YOU*** to everyone who kindly shared their thoughts on my hardware problems. I really appreciate it. I have started looking for a new server and I am quite sure we'll get a serious hardware "update". As some people suggested, I would now like to take a closer look at possible algorithmic improvements.

My application basically imports Apache log files into a Postgres database. Every row in the log file gets imported into one of three (raw data) tables, whose columns match the log file exactly. The import runs approximately every five minutes, and we import about two million rows a month.
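
To give a rough idea, one of the raw tables looks more or less like this (simplified; the actual column names and types differ, and "access_log_raw" and "vhost" are just illustrative names):

    -- one raw table, simplified; columns mirror the combined log format
    CREATE TABLE access_log_raw (
        vhost       text,        -- virtual host, used to look up the master row
        logtime     timestamp,   -- full timestamp from the log line
        ip          inet,
        request     text,
        status      integer,
        bytes       bigint,
        referer     text,
        user_agent  text
    );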

Between 30 and 50 users run reports at the same time.

Because reporting became so slow, I created a reporting table. In that table the data is aggregated by dropping the time (the date is preserved), ip, referer, and user-agent. And although it breaks normalization, some data from a master table is copied in, so no joins are needed anymore.
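
Roughly, the reporting table looks like this (again simplified, names are only for illustration):

    -- aggregated reporting table: time of day, ip, referer and user-agent
    -- are dropped, only the date is kept; site_name is copied from the
    -- master table so reports need no join
    CREATE TABLE report (
        logdate     date,
        vhost       text,
        site_name   text,     -- denormalized copy from the master table
        request     text,
        hits        bigint,
        bytes       bigint
    );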

After every import, the current day's data is deleted from the reporting table and recalculated from the raw data tables.
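
The recalculation is essentially a delete plus an aggregate insert inside one transaction, something like this (assuming a master table keyed on vhost; names are again only illustrative):

    BEGIN;

    -- throw away today's aggregates ...
    DELETE FROM report WHERE logdate = current_date;

    -- ... and rebuild them from the raw data
    INSERT INTO report (logdate, vhost, site_name, request, hits, bytes)
    SELECT r.logtime::date, r.vhost, m.site_name, r.request,
           count(*), sum(r.bytes)
    FROM   access_log_raw r
    JOIN   master m ON m.vhost = r.vhost
    WHERE  r.logtime >= current_date
    GROUP  BY r.logtime::date, r.vhost, m.site_name, r.request;

    COMMIT;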


Is this description understandable? If so:

What do you think of this approach? Are there better ways to do it? Is there any literature you would recommend reading?

TIA

Ulrich

