On Saturday 02 September 2006 00:21, Jack Faley ( The Tao of Jack ) wrote:
> I have several ( 100 - 200 ) small exits coming from an app that update
> db2 tables. This works fine but the DBAs don't like that many open
> connection overheads. This isn't cgi/apache. These are all separate
> terminating processes. From what I gather there is no way to pass the dbh
> between processes?

No.

> Is some sort of middle daemon ( I guess non perl ) that 
> keeps one persistent connection to db2 and then have perl feed the daemon
> the only option?

That would be called ... "DB2" ;-)

> I would think others have run into this wall before? 

Option 1: stop doing this in a bunch of small exits; put them all in a single 
exit.  Probably not a real option.

Option 2: can you put this in stored procedures instead?
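If option 2 fits, each exit would make one short-lived connection and a single 
CALL instead of a series of statements.  A minimal sketch, assuming a 
hypothetical procedure MYSCHEMA.UPDATE_EXIT (the procedure name, its 
parameters, and the credentials are placeholders, not anything from your app):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder values standing in for whatever one exit actually does.
my ( $exit_id, $payload ) = ( 42, 'some data' );

# One connect, one CALL, one disconnect per exit -- the per-row logic
# lives in the stored procedure on the DB2 side instead of in Perl.
my $dbh = DBI->connect( 'dbi:DB2:sample', 'user', 'pass',
    { RaiseError => 1, AutoCommit => 1 } );

my $sth = $dbh->prepare('CALL MYSCHEMA.UPDATE_EXIT(?, ?)');
$sth->execute( $exit_id, $payload );

$dbh->disconnect;
```

That doesn't reduce the connection count by itself, but it shortens how long 
each connection is held, which may be what actually bothers the DBAs.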

Option 3: DBD::Proxy may help here.  Or at least the concept - set up a 
POE-based pseudo-server which receives messages from apps, and funnels them 
via a single connection to your server, then passes the results back.  That 
sounds like not only a lot of work to write, but also a lot of work for the 
computer.  Oh, and all those connections just moved from going directly to 
the server to directly to the proxy/concentrator - I'm not really seeing a 
savings there.
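For completeness, here is roughly what the DBD::Proxy route looks like on the 
client side.  The host, port, and DSN are assumptions for illustration; the 
matching server is the stock dbiproxy daemon shipped with DBI 
(`dbiproxy --localport 3334`), which would hold the connection(s) to DB2:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Each exit talks to a local dbiproxy daemon rather than to DB2 directly.
# The daemon multiplexes requests over its own connection(s) to the server.
my $dbh = DBI->connect(
    'dbi:Proxy:hostname=localhost;port=3334;dsn=dbi:DB2:sample',
    'user', 'pass',
    { RaiseError => 1 },
);

# Hypothetical single-row update standing in for what one exit does.
$dbh->do( 'UPDATE some_table SET col = ? WHERE id = ?', undef, 'val', 1 );

$dbh->disconnect;
```

As noted above, though, you've mostly just moved the pile of connections from 
the DB2 server to the proxy.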

To be honest, I suspect that any option (other than a complete re-architecture 
of how you approach the business problem you're dealing with in these exits) 
will actually be a larger load on the system than what you're currently 
working with.

If option 1 works (which I doubt from what little info was in your original 
question), I think it's probably the only solution that would satisfy your 
DBAs.  But then again, I'm not seeing their problem, nor really what is 
causing it, so I'm just taking a wild guess ;-)
