"Karl Denninger" <[EMAIL PROTECTED]> writes:

> Not sure where to start here. It appears that I'm CPU limited and the
> problem may be that this is a web-served application that must connect to
> the Postgres backend for each transaction, perform its queries, and then
> close the connection down - in other words the load may be coming not from
> Postgres but rather from places I can't fix at the application layer (e.g.
> fork() overhead, etc). The DBMS and Apache server are on the same machine,
> so there's no actual network overhead involved.
>
> If that's the case the only solution is to throw more hardware at it. I
> can do that, but before I go tossing more CPU at the problem I'd like to
> know I'm not just wasting money.
I know you found the proximate cause of your current problems, but it sounds
like there's something else you should consider looking at here: there are
techniques for avoiding a separate database connection for each request.

If you're using Apache you can reduce the CPU usage a lot by writing your
application as an Apache module instead of a CGI (or whatever type of program
it is now). Your code then lives as long as a single Apache instance - which
you can configure to last hours or days instead of a single request - and it
can keep its database connection open for that whole time.

If that's impossible, there are still techniques that can help. You can set
up PgPool or PgBouncer or some other connection pooling tool to handle the
connections. This is a pretty low-impact change which shouldn't require any
application changes aside from the database connection string. Effectively
this is just a connection pool that lives in a separate process.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
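As a rough sketch of the PgBouncer route described above - the file paths,
ports, pool sizes, and the database name "mydb" here are illustrative
assumptions, not anything from this thread:

```ini
; pgbouncer.ini - hypothetical minimal setup; adjust names/ports to your site
[databases]
; clients connecting to "mydb" via PgBouncer are multiplexed onto the
; real Postgres server listening on the usual port 5432
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; session pooling is the safest default for unmodified applications
pool_mode = session
max_client_conn = 200
default_pool_size = 20
```

The only application change is pointing the connection string at port 6432
instead of 5432; each web request then reuses an already-open backend
connection instead of paying the fork/connect/authenticate cost every time.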