Re: db2 - persistent connection

2006-09-06 Thread Peter J. Holzer
On 2006-09-05 08:08:54 -0600, Darin McBride wrote:
 On Monday 04 September 2006 22:28, Jack Faley ( The Tao of Jack ) wrote:
  Thank you for your reply. If there was any way to mitigate the small exits
  I would do it but I'm modifying an app that was never meant to be
  extensible. So, you are correct, I'm moving the load to a daemon in the
  middle with a persistent connection rather than directly talking to DB2 :-)
  . I haven't benchmarked it. I'm no DBA but I have a feeling this will
  actually be slower overall.
 
 I don't see it being a DBA question - it's a machine resource question.  If 
 you have n apps connecting to DB2, you need n sockets.  But if you have n 
 apps connecting to a proxy which itself connects to DB2, you have n+1 
 sockets.
 

If I understood the OP correctly, his DBA isn't concerned about
simultaneously open DB connections, but about many short-lived
connections. If you have a program (for example a CGI script), which
makes only a single query and then exits, for 1000 invocations of the
program you have 

1000 x opening a DB connection, including authentication, creating initial
   transaction, etc.
1000 x select
1000 x closing the DB connection.

If there is a middleware server which maintains a pool of DB
connections, that changes to:

   n x opening a connection, including authentication, creating initial
   transaction, etc.
1000 x opening a connection to the middleware server
1000 x select
1000 x closing the connection to the middleware server
   n x closing the connection.

where n is the number of connections that need to be opened, which tends
toward 0 as the connection pool reaches a steady state.

Whether this actually reduces the load depends on the relative cost of
DB connection overhead, the query and the middleware server overhead.

For example, on one of our Oracle servers, a connect takes 27ms, a
select sysdate from dual 1.5ms, and a disconnect 0.8ms. I don't know
how much of these 27.8ms for connect/disconnect is actually time spent
by the server, but it is an upper bound on what we can save. So the
question is: is saving *at most* 27.8ms per invocation of the program
worth the complication? If your program is invoked a million times per
day, probably yes (even if it's only 14ms and the middleware server
adds 5ms of overhead, you still save 9000 seconds per day). If it is
invoked only 1000 times a day, probably not.
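As a back-of-the-envelope check, that arithmetic can be written out (the figures are the example numbers from the paragraph above, not measurements):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Example figures from the discussion above, not measurements.
my $invocations   = 1_000_000;  # program runs per day
my $connect_ms    = 14;         # connect/disconnect cost avoided per run
my $middleware_ms = 5;          # overhead the middleware adds per run

my $saved_s = $invocations * ($connect_ms - $middleware_ms) / 1000;
printf "saved per day: %d seconds\n", $saved_s;   # saved per day: 9000 seconds
```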

 Note also that in this scenario, all exits must use the same 
 authentication - if you're trying to get some stuff done as sysadm, and other 
 stuff done as someone unprivileged, you'll need a different proxy for each 
 user.

The authentication to the proxy or middleware server can be a lot
simpler. For example, you can use a unix socket and set the permissions
so that only authorized users can open it.
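A minimal sketch of that unix-socket scheme in Perl (the socket path and the one-line "protocol" are invented for illustration; a real proxy would forward each request over its pooled DB2 handle instead of just echoing it):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::UNIX;

# Hypothetical socket path for the middleware.
my $path = "/tmp/db-proxy-$$.sock";
unlink $path;

my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 5,
) or die "cannot create socket: $!";

# Filesystem permissions stand in for authentication: only the owner
# (or the group, with 0660) can open the socket at all.
chmod 0600, $path or die "chmod failed: $!";

my $pid = fork;
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {                          # toy client in a child process
    my $c = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path)
        or die "connect failed: $!";
    print {$c} "select current date\n";   # would be forwarded to DB2
    exit 0;
}

my $client = $server->accept;
my $query  = <$client>;                   # read one request line
print "proxy received: $query";
close $client;
unlink $path;
```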

hp

-- 
   _  | Peter J. Holzer| If I wanted to be academically correct,
|_|_) | Sysadmin WSR   | I'd be programming in Java.
| |   | [EMAIL PROTECTED]  | I don't, and I'm not.
__/   | http://www.hjp.at/ |   -- Jesse Erlbaum on dbi-users




Re: db2 - persistent connection

2006-09-05 Thread Darin McBride
On Monday 04 September 2006 22:28, Jack Faley ( The Tao of Jack ) wrote:
 Thank you for your reply. If there was any way to mitigate the small exits
 I would do it but I'm modifying an app that was never meant to be
 extensible. So, you are correct, I'm moving the load to a daemon in the
 middle with a persistent connection rather than directly talking to DB2 :-)
 . I haven't benchmarked it. I'm no DBA but I have a feeling this will
 actually be slower overall.

I don't see it being a DBA question - it's a machine resource question.  If 
you have n apps connecting to DB2, you need n sockets.  But if you have n 
apps connecting to a proxy which itself connects to DB2, you have n+1 
sockets.  If that proxy is on another machine, then you may save some 
resource.  But if the proxy is on the same machine as the server, there's no 
way this will be as fast as direct connection, though it may not be 
noticeably slower, either, depending on the hardware.  e.g., a single CPU on 
a fully loaded system will be much slower, but an 8-way system that is only 
90% loaded may not be noticed.

 I am holding out hope that DBD::Proxy connect_cached might do the trick,
 but the docs seem to indicate it's not fully completed. No harm in trying
 though. Even if it doesn't, I'll test the DBAs to see whether they can
 tell that all those connections are still there behind a pooling
 middleware that actually doesn't do anything.

Each client will need to make a direct, individual, non-cacheable connection 
to the proxy.  The proxy can make a cached connection to the server (as long 
as the previous connection isn't timed out, this will be pretty much a 
no-op).  Note also that in this scenario, all exits must use the same 
authentication - if you're trying to get some stuff done as sysadm, and other 
stuff done as someone unprivileged, you'll need a different proxy for each 
user.
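For what it's worth, the proxy-side pooling that connect_cached provides can be sketched as below. DBD::SQLite stands in for DBD::DB2 so the snippet runs without a database server; against DB2 the DSN would be something like dbi:DB2:SAMPLE with real credentials:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# DBD::SQLite stands in for DBD::DB2 here; the DSN is illustrative only.
sub pooled_dbh {
    # connect_cached returns the same live handle for identical arguments,
    # so repeated "connects" inside a long-lived proxy are nearly free.
    return DBI->connect_cached('dbi:SQLite:dbname=:memory:', '', '',
                               { RaiseError => 1, AutoCommit => 1 });
}

my $dbh1 = pooled_dbh();
my $dbh2 = pooled_dbh();
print $dbh1 == $dbh2 ? "handle reused\n" : "new handle\n";   # handle reused
```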

 How would Stored Procedures help? Just for the performance after connected?

Stored procedures start out pre-connected on the server.  Note that DB2 
doesn't allow Perl-based stored procedures, so you'd also have to rewrite them.  But I'm 
not even sure that the logic you're doing will fit as a stored procedure - I 
brought it up more as a poke to make you think about whether this stuff is 
better as a stored procedure or not.


Re: db2 - persistent connection

2006-09-04 Thread Darin McBride
On Saturday 02 September 2006 00:21, Jack Faley ( The Tao of Jack ) wrote:
 I have several ( 100 - 200 ) small exits coming from an app that update
 db2 tables. This works fine but the DBAs don't like that many open
 connection overheads. This isn't cgi/apache. These are all separate
 terminating processes. From what I gather there is no way to pass the dbh
 between processes?

No.

 Is some sort of middle daemon ( I guess non perl ) that 
 keeps one persistent connection to db2 and then have perl feed the daemon
 the only option?

That would be called ... DB2 ;-)

 I would think others have run into this wall before? 

Option 1: stop doing this in a bunch of small exits - put them in a single 
exit.  Probably not a real option.

Option 2: can you put this in stored procedures instead?

Option 3: DBD::Proxy may help here.  Or at least the concept - set up a 
POE-based pseudo-server which receives messages from apps, and funnels them 
via a single connection to your server, then passes the results back.  That 
sounds like not only a lot of work to write, but also a lot of work for the 
computer.  Oh, and all those connections just moved from going directly to 
the server to directly to the proxy/concentrator - I'm not really seeing a 
savings there.

To be honest, I suspect that any option (other than a complete re-architecture 
of how you approach the business problem you're dealing with in these exits) 
will actually be a larger load on the system than what you're currently 
working with.

If option 1 works (which I doubt from what little info was in your original 
question), I think it's probably the only solution that would satisfy your 
DBAs.  But then again, I'm not seeing their problem, nor really what is 
causing it, so I'm just taking a wild guess ;-)


Re: db2 - persistent connection

2006-09-04 Thread Jack Faley ( The Tao of Jack )

On 9/4/06, Darin McBride [EMAIL PROTECTED] wrote:
 On Saturday 02 September 2006 00:21, Jack Faley ( The Tao of Jack ) wrote:
  I have several ( 100 - 200 ) small exits coming from an app that update
  db2 tables. This works fine but the DBAs don't like that many open
  connection overheads. This isn't cgi/apache. These are all separate
  terminating processes. From what I gather there is no way to pass the
  dbh between processes?
 
 No.
 
  Is some sort of middle daemon ( I guess non perl ) that keeps one
  persistent connection to db2 and then have perl feed the daemon the
  only option?
 
 That would be called ... DB2 ;-)
 
  I would think others have run into this wall before?
 
 Option 1: stop doing this in a bunch of small exits - put them in a
 single exit.  Probably not a real option.
 
 Option 2: can you put this in stored procedures instead?
 
 Option 3: DBD::Proxy may help here.  Or at least the concept - set up a
 POE-based pseudo-server which receives messages from apps, and funnels
 them via a single connection to your server, then passes the results
 back.  That sounds like not only a lot of work to write, but also a lot
 of work for the computer.  Oh, and all those connections just moved from
 going directly to the server to directly to the proxy/concentrator - I'm
 not really seeing a savings there.
 
 To be honest, I suspect that any option (other than a complete
 re-architecture of how you approach the business problem you're dealing
 with in these exits) will actually be a larger load on the system than
 what you're currently working with.
 
 If option 1 works (which I doubt from what little info was in your
 original question), I think it's probably the only solution that would
 satisfy your DBAs.  But then again, I'm not seeing their problem, nor
 really what is causing it, so I'm just taking a wild guess ;-)





Thank you for your reply. If there was any way to mitigate the small exits I
would do it but I'm modifying an app that was never meant to be extensible.
So, you are correct, I'm moving the load to a daemon in the middle with a
persistent connection rather than directly talking to DB2 :-) . I haven't
benchmarked it. I'm no DBA but I have a feeling this will actually be slower
overall.

I am holding out hope that DBD::Proxy connect_cached might do the trick, but
the docs seem to indicate it's not fully completed. No harm in trying though.
Even if it doesn't, I'll test the DBAs to see whether they can tell that all
those connections are still there behind a pooling middleware that actually
doesn't do anything.

How would Stored Procedures help? Just for the performance after connected?

Thanks!

--

Why is this technology an anathema to me?


db2 - persistent connection

2006-09-02 Thread Jack Faley ( The Tao of Jack )

I have several ( 100 - 200 ) small exits coming from an app that update db2
tables. This works fine but the DBAs don't like that many open connection
overheads. This isn't cgi/apache. These are all separate terminating
processes. From what I gather there is no way to pass the dbh between
processes? Is some sort of middle daemon ( I guess non perl ) that keeps one
persistent connection to db2 and then have perl feed the daemon the only
option? I would think others have run into this wall before?


Thanks!




--

Why is this technology an anathema to me?