We just moved a large production instance of ours from Oracle to Postgres 8.0.3 on Linux. When running on Oracle, the machine hummed along using about 5% of the CPU, easily handling the fairly constant load; after moving the data to Postgres, the machine was pretty much maxed out on CPU and could no longer keep up with the transaction volume. On a hunch I switched the JDBC driver to the V2 protocol, and the load on the machine dropped back down to what it was under Oracle; everything was fine again.
Now obviously I have found a workaround for the performance problem, but I really don't want to rely on the V2 protocol forever, and I don't want to have to recommend to our customers that they run with it either. So I would like to resolve the underlying problem and move back to a default configuration with the V3 protocol and the benefits thereof.
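For reference, the workaround above can be applied through the pgjdbc driver's protocolVersion setting, either as a URL parameter or as a driver property. A minimal sketch (host, database, and credentials here are hypothetical placeholders):

```java
import java.util.Properties;

public class V2Workaround {
    // Hypothetical host/port/database names; the key piece is the
    // protocolVersion=2 URL parameter, which asks the pgjdbc driver to
    // speak the older V2 frontend/backend protocol instead of V3.
    static String v2Url(String host, int port, String db) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + db
                + "?protocolVersion=2";
    }

    // The same setting can instead be passed as a driver property
    // alongside the usual credentials, e.g. for use with
    // DriverManager.getConnection(url, props).
    static Properties v2Props(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("protocolVersion", "2");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(v2Url("localhost", 5432, "mydb"));
    }
}
```

Dropping the protocolVersion setting (or setting it to 3) restores the driver's default V3 behavior.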
The problem is that I don't really know where to begin debugging an issue like this. In our development and testing environments we have not seen performance problems with the V3 protocol in the JDBC driver, but those environments don't come close to the transaction volume of this production instance.
What I see when running the V3 protocol under 'top' is that the postgres processes routinely use 15% or more of the CPU each; when running the V2 protocol they use more like 0.3% each.
Does anyone have any suggestions on an approach to debug a problem like this?
- [PERFORM] Performance problem using V3 protocol in jdbc driver Barry Lind