All,

I realize the excessive-context-switching-on-Xeon issue has been discussed at length in the past, but I wanted to follow up and verify my conclusion from those discussions:

On a 2-way or 4-way Xeon box, there is no way to avoid excessive context switching (30,000-60,000 switches per second) when using PostgreSQL 7.4.5 under significant load to query a data set small enough to fit entirely in main memory.

I am experiencing said symptom on two different dual-Xeon boxes, both Dells with ServerWorks chipsets, running the latest RH9 and RHEL3 kernels, respectively. The databases are 90% read, 10% write, and are small enough to fit entirely into main memory, between pg shared buffers and kernel buffers.
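
For context, the rate I'm quoting is the system-wide counter that vmstat reports in its "cs" column. A rough sketch of how one might sample it directly from /proc/stat is below; this is illustration only, not our exact tooling:

    # Sample the system-wide context-switch rate from the cumulative
    # "ctxt" counter in /proc/stat (Linux); this is the same figure
    # vmstat derives its "cs" column from.
    import time

    def read_ctxt():
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("ctxt"):
                    return int(line.split()[1])
        return 0

    def sample_cs_rate(interval=1.0):
        before = read_ctxt()
        time.sleep(interval)
        after = read_ctxt()
        return int((after - before) / interval)

    if __name__ == "__main__":
        while True:
            print("context switches/sec: %d" % sample_cs_rate())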

We recently invested in a solid-state storage device (http://www.superssd.com/products/ramsan-320/) to help write performance, and our entire pg data directory now lives on it. Regrettably (and, in retrospect, unsurprisingly), we found that opening up the I/O bottleneck does little for write performance when the server is under load, because excessive context switching becomes the bottleneck instead.

Is the only solution, then, to move to a different SMP architecture such as Itanium 2 or Opteron? If so, should we expect an additional benefit, context switching aside, from running PostgreSQL on a 64-bit architecture rather than 32-bit? Alternatively, given the high cost of Itanium 2 and Opteron systems, are there good 32-bit SMP architectures to consider other than Xeon?

More generally, how have others scaled "up" their PostgreSQL environments? We will eventually have to build some "outward" scalability into the logic of our application (e.g. directing read-only transactions to a pool of Slony-I subscribers, along the lines of the sketch below), but in the short term we still have an urgent need to scale upward. Thoughts? General wisdom?
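
For what it's worth, the read/write split I have in mind looks roughly like this; the host names and the use of psycopg2 are placeholders for illustration, not our actual setup:

    # Illustrative sketch: send writes to the Slony-I origin and spread
    # read-only transactions across a pool of subscribers.
    import random
    import psycopg2

    WRITE_DSN = "dbname=app host=master.example.com"   # origin (hypothetical host)
    READ_DSNS = [                                       # subscribers (hypothetical hosts)
        "dbname=app host=replica1.example.com",
        "dbname=app host=replica2.example.com",
    ]

    def get_connection(readonly=False):
        # Anything that writes, or that must see the freshest data,
        # goes to the origin; pure read-only work is spread across
        # the subscriber pool.
        dsn = random.choice(READ_DSNS) if readonly else WRITE_DSN
        return psycopg2.connect(dsn)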

Best Regards,

Bill Montgomery
