On Monday 05 December 2005 18:57, Perrin Harkins wrote:
> On Mon, 2005-12-05 at 17:51 +0100, Torsten Foertsch wrote:
> > With Apache::DBI::Cache on the other hand handles are cached only
> > when they are free.
>
> Now I understand -- you are using the cache as a way to mark unused
> handles. This is kind of confusing. It would be easier to understand
> if you always kept them in the cache and just have a "in_use"
> attribute that you set for each one or something similar. In fact you
> already seem to have one with your "disconnected" attribute.
I cannot cache the handle on connect, since then it would never be
DESTROYed. The "disconnected" attribute is used to prevent a double
disconnect (to keep the statistics correct).

> You actually could do all of this with a wrapper around Apache::DBI.
> It could keep track of in-use handles and create new ones when needed
> by adjusting a dummy attribute.

Yes, but then again I won't catch the DESTROY event. That leads back to
the request cleanup handler, which I would like to avoid.

> > There are 2 occasions when a handle can go out of use. Firstly, when
> > C<disconnect> is called or when the handle is simply forgotten. The
> > second event can be caught with a C<DESTROY> method.
>
> DESTROY is unreliable. Scoping in Perl is extremely complicated and
> modules like Apache::Session that rely on DESTROY for anything are a
> source of constant problems on this list. People accidentally create
> closures, accidentally return the object into a larger scope that
> keeps it around longer, put it in global variables, etc. I would
> avoid this.

I would not say that. DESTROY is reliable in that it does exactly what
it should: it is called when the last reference to an object goes away.
And as you said, DESTROY is used only as a last resort to put a handle
back into the cache. Normally, disconnect would be called.

The module was developed to be less invasive than Apache::DBI. If an
application runs without Apache::DBI or Apache::DBI::Cache and there
are closures that prevent handles from being forgotten, then that
behaviour should remain the same with Apache::DBI::Cache. On the server
where it was first used there were a lot of singleton DBI connections
stored in global variables. In some cases reusing them for anything
else led to errors. (I don't know why.) If you need to store handles in
global variables you can try C<undef_at_request_cleanup> to put them
back into the cache at request cleanup. Here the PerlCleanupHandler is
back ;-). If it works, fine; if not, go on using the global handle.

> > Now you can have as much identical connections to a DB server as you
> > need. For example you can connect 2 times with AutoCommit=>1 then
> > start a transaction on one handle and use the second for lookups.
>
> This sounds like a bad idea to me, since the second one won't be able
> to see things added by the first one. There may be some other useful
> case for this though.

That was just meant as an example, but in fact it can be useful. I have
seen applications where a new set of tables was created for each month.
A table that did not exist simply meant 0 for each of its columns. If a
select had to check something for a particular range of months, some of
the tables might not exist. Within a transaction that would cause the
whole transaction to abort. (A rough sketch of this two-handle case
follows below.)

> The only serious issue I see with this module is the way you handle
> rollbacks. This will only do a rollback if you call disconnect. What
> happens if your code hits an unexpected error and dies without calling
> disconnect? No rollback, and potentially a transaction left open with
> questionable data and possibly locks. (You can't rely on the object
> going out of scope for safety.) Apache::DBI prevents this with its
> cleanup handler, although that is somewhat flawed as well if you
> connect with AutoCommit on and then turn it off.

See above, it was not my goal to make an application better than it is.
If it was developed with global handles, well ... so be it.

Oh, I forgot to say that the module was not developed with Registry
scripts in mind.
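To make that two-handle scenario a bit more concrete, here is roughly
what I mean in plain DBI; the DSN, credentials and table names are made
up for illustration, and the point is only that the lookup handle runs
outside the transaction held by the first one:

    use strict;
    use warnings;
    use DBI;

    # Two independent connections to the same server (illustrative DSN,
    # credentials and table names).
    my %attr   = (AutoCommit => 1, RaiseError => 1, PrintError => 0);
    my $dbh_tx = DBI->connect('dbi:mysql:database=test', 'user', 'pw', \%attr);
    my $dbh_ro = DBI->connect('dbi:mysql:database=test', 'user', 'pw', \%attr);

    # Start a transaction on the first handle only.
    $dbh_tx->begin_work;
    $dbh_tx->do('UPDATE accounts SET balance = balance - 10 WHERE id = ?',
                undef, 42);

    # Lookups on the second handle run outside that transaction, so an
    # error here (e.g. a missing per-month table) cannot abort it.
    my ($sum) = eval {
        $dbh_ro->selectrow_array('SELECT SUM(amount) FROM log_2005_11');
    };
    $sum = 0 unless defined $sum;   # a missing table counts as 0, as above

    $dbh_tx->commit;

With Apache::DBI::Cache both connects use identical parameters and you
still get two separate handles, as described in the quoted paragraph.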
I originally had a bunch of handcrafted mod_perl applications that
created and disconnected handles arbitrarily. Some used singletons,
others connected/disconnected for each request. That led to two
problems: a) the total number of connections to some MySQL databases
was quite large (several thousand), and b) the frequent connect calls
led to problems on a DNS server (as I was told).

> Hmm... It also does direct hash accesses on the $dbh object for
> storing private data. That's a little scary. The $dbh->{AutoCommit}
> stuff in DBI is special because it uses XS typeglob magic. Doing your
> own hash accesses is not really safe.

You mean $dbh->{$PRIVATE} is wrong? Maybe because
$dbh->{$PRIVATE}||=... would not work? That has been avoided in the
code. What else is wrong with that? And how can it be circumvented?

Thanks, Perrin, for reviewing my code,

Torsten
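P.S. Looking at the DBI docs again: attribute names beginning with
C<private_> are reserved for application data, so perhaps a single
C<private_...> key holding one hash is the way to go here. A rough
sketch of what I mean (the key name, driver and connection details are
only illustrative):

    use strict;
    use warnings;
    use DBI;

    # Any driver will do; an in-memory SQLite database just keeps the
    # example self-contained.
    my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                           {RaiseError => 1, PrintError => 0});

    # Handle attributes form a tied hash, not a plain one, and DBI may
    # reject unrecognised attribute names.  Application data belongs
    # under a "private_"-prefixed key (hypothetical name here), ideally
    # a single key holding one hash with all of the module's values.
    $dbh->{private_apache_dbi_cache} = { disconnected => 0 };

    # Later accesses go through the same tied interface.
    my $state = $dbh->{private_apache_dbi_cache};
    print "disconnected: $state->{disconnected}\n";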