matt wrote:
I'm wondering if the good people out there could perhaps give me some
pointers on suitable hardware to solve an upcoming performance issue. I've never really dealt with these kinds of loads before, so any
experience you guys have would be invaluable. Apologies in advance for
the amount of info below...

My app is likely to come under some serious load in the next 6 months,
but the increase will be broadly predictable, so there is time to throw
hardware at the problem.

Currently I have a ~1GB DB, with the largest (and most commonly accessed
and updated) two tables having 150,000 and 50,000 rows.

A typical user interaction with the system involves about 15
single-table selects, 5 selects with joins or subqueries, 3 inserts, and
3 updates.  The current hardware probably (based on benchmarking and
profiling) tops out at about 300 inserts/updates *or* 2500 selects per
second.

There are multiple indexes on each table that updates & inserts happen
on.  These indexes are necessary to provide adequate select performance.

Are you sure? Have you tested the overall application to see whether you gain more on insert performance than you lose on select performance?
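One cheap way to test that, as a rough sketch: PostgreSQL's DDL is transactional, so you can drop a candidate index, re-time the hot select, and roll back without actually changing anything.  The table and index names below are invented; substitute your own, and run this against a test copy, not production.

```shell
# Measure what one index costs/buys.  "orders" and "orders_status_idx"
# are hypothetical names for illustration only.
psql mydb <<'EOF'
BEGIN;
-- Plan and timing for the hot select with the index in place:
EXPLAIN ANALYZE SELECT count(*) FROM orders WHERE status = 'open';
-- Drop it inside the transaction and compare:
DROP INDEX orders_status_idx;
EXPLAIN ANALYZE SELECT count(*) FROM orders WHERE status = 'open';
-- Roll back, so the index survives:
ROLLBACK;
EOF
```

Do the same with your representative insert/update batch and see which way the totals move.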

(Hey, you asked for musings ...)

Current hardware/software:
Quad 700MHz PIII Xeon/1MB cache
RAID 10 over 4 18GB/10,000rpm drives
128MB battery backed controller cache with write-back enabled
Redhat 7.3, kernel 2.4.20
Postgres 7.2.3 (stock redhat issue)

It's possible that compiling Postgres manually with proper optimizations could yield some improvement, as could building a custom kernel for that Redhat box.
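Something along these lines (a sketch only — paths and flags are illustrative, and check what your gcc actually supports before using -march):

```shell
# Build from source instead of the stock RPM, letting gcc optimize for
# the actual CPU.  Consider grabbing the latest 7.3.x while you're at it.
tar xzf postgresql-7.2.3.tar.gz
cd postgresql-7.2.3
CFLAGS="-O2 -march=pentium3" ./configure --prefix=/usr/local/pgsql
make && make install
```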

Also, you don't mention which filesystem you're using.
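If it's ext3, mount options alone can make a difference.  As one hedged example (device and mount point invented, adjust to your layout):

```
# /etc/fstab entry for the database partition -- illustrative only.
# noatime saves an inode write on every read.
/dev/sda5   /var/lib/pgsql   ext3   noatime   1 2
```

The ext3 data= journaling modes are also worth researching for a write-heavy load, but weigh the crash-safety tradeoffs before touching them.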

I need to increase the overall performance by a factor of 10, while at
the same time the DB size increases by a factor of 50.  e.g. 3000
inserts/updates or 25,000 selects per second, over a 25GB database with
most used tables of 5,000,000 and 1,000,000 rows.

Notably, the data is very time-sensitive, so the active dataset at any
hour is almost certainly going to be more on the order of 5GB than 25GB
(plus I'll want all the indexes in RAM of course).
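On the keep-it-in-RAM point: the stock 7.2 defaults are tiny, so whatever box you buy, the config has to tell Postgres the memory exists.  A sketch of postgresql.conf values for a machine with a few GB of RAM (numbers are illustrative, not tuned for your workload):

```
# postgresql.conf -- illustrative values, assuming ~4GB RAM
shared_buffers = 32768          # 256MB at 8KB pages; raise kernel SHMMAX to match
sort_mem = 8192                 # KB per sort; multiplied by concurrent sorts
effective_cache_size = 262144   # ~2GB in 8KB pages; tells the planner about OS cache
```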

Also, and importantly, the load comes but one hour per week, so buying a
Starfire isn't a real option, as it'd just sit idle the rest of the
time.  I'm particularly interested in keeping the cost down, as I'm a
shareholder in the company!

I can't say for sure without looking at your application overall, but many applications I've seen could be optimized further.  It's usually a matter of a few seconds here and there, and those take hours to find and tweak.

But if you're in the situation where you have more time than money,
you may find that an overall audit of your app is worthwhile.  Consider
taking parts that are in perl (for example) and recoding them into C
(that is, unless you've already identified that all the bottlenecks are
at the PostgreSQL server).

I doubt if the suggestions I've made are going to get you 10x, but they
may get you 2x, and then you only need the hardware to do 5x.

Bill Moran
Potential Technologies

---------------------------(end of broadcast)--------------------------- TIP 7: don't forget to increase your free space map settings