Case 1: a large J2EE-based integration scenario with a lot of services in different .ears/.sars: about 20 datasources with a connection pool size of 20-50 each. No idea about the CPUs; it ran in some Solaris Zone. We had some performance problems with the asynchronous part of the application, so we increased the connection pool size used by the MDBs and were able to improve performance a lot.
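To make the numbers concrete, here is a minimal sketch of a pool configured in that 20-50 range. The post doesn't name the pool implementation, so Apache Commons DBCP 1.x and the driver, URL and credentials below are assumptions made purely for illustration:

import org.apache.commons.dbcp.BasicDataSource;

public class MdbDataSourceConfig {

    // Hypothetical pool setup; values mirror the 20-50 range mentioned above.
    public static BasicDataSource createPooledDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");   // placeholder driver
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");     // placeholder URL
        ds.setUsername("integration");                       // placeholder credentials
        ds.setPassword("secret");
        ds.setInitialSize(20); // lower bound of the range
        ds.setMaxActive(50);   // raising this on the MDB datasource was the tuning knob
        ds.setMaxIdle(20);
        return ds;
    }
}

In a container-managed setup the same limits would normally be set on the application server's datasource definition rather than in code.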
Case 2: a JEE web application, a single .war on Tomcat: 15 connections per pool, dual-core machine.

For the UI parts of the applications I worked on, I have to admit that the connection pool size was never the bottleneck: most of the time it was rather Hibernate, the general architecture (too many remote connections in loops) or the poor design of the HTML/JavaScript frontend. I've never noticed a relationship between the number of CPUs and the number of connections.

By the way: if you want to save resources on connections, use read-only transactions. I seem to remember that they allow the same connection to be reused, or am I wrong about this? (A small JDBC sketch of this follows after the quoted message.)

Regards,
Kai

--- Original Message ---
From: Clinton Begin
Date: 20.01.2009 14:43

> Hi all,
>
> I've been studying a few large enterprise applications and have noticed an
> interesting trend... many of these apps have HUNDREDS of connections (like
> 600) available or even open in their connection pools...
>
> Survey Questions:
>
> 1. How many connections do you have available in your pool?
> 2. And if you know, how many CPU cores are available on your database
> server (or cluster)?
> 3. If you have 2x or 3x more connections than you do CPUs, do you have a
> reason that you could share?
>
> Cheers,
> Clinton
>
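Regarding the read-only remark above, here is a minimal JDBC-level sketch, assuming a pooled DataSource and a hypothetical orders table. setReadOnly(true) is only a hint to the driver; whether it actually lets the pool or driver reuse connections depends on the driver and transaction manager, so this shows how the hint is set, not a guarantee of savings:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class ReadOnlyQueryExample {

    private final DataSource dataSource; // assumed to be the pooled datasource

    public ReadOnlyQueryExample(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOrders() throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            con.setReadOnly(true); // JDBC read-only hint for the driver
            Statement st = con.createStatement();
            try {
                ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders"); // hypothetical table
                rs.next();
                return rs.getInt(1);
            } finally {
                st.close();
            }
        } finally {
            con.close(); // returns the connection to the pool
        }
    }
}

If you use Spring's declarative transactions, the equivalent hint is @Transactional(readOnly = true); either way it is worth measuring whether it really saves connections rather than assuming it does.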