to begin...
I've reconnected this thread to DBI Users - other people there may benefit from this information, and a different set of other people may have useful advice to offer.
i'm considering an app that would require transactional processing across multiple web pages... if each page created its own connection, then one couldn't really do transactional processing where various db functions occur on each page, with all of the interactions being accepted only if every action is successful. keep in mind, once a page closes, so does its connection, which means that any uncommitted db interactions are also gone. ie, you can't use commit/rollback across multiple pages...
Transactions across multiple (web) pages are a common problem. There are various solutions, I believe, but very few if any of them involve sharing a single database connection across the multiple pages.
the ability to have a persistent connection, or a pool of connection ids, would solve this. mysql/php4 had persistent connections.
mysql 4.1.3/php5 does not...
can we potentially rethink our design, sure..
I think you may have to do so.
does that limit what web apps can do regarding complex transactions, mos def...
The very nature of the web is that all http requests are (nominally) independent of each other. There are endless ways of providing continuity across requests. I believe, but stand to be shown incorrect, that the general mechanism is to treat each page's operations as some sort of transaction step that is recorded in a table of some sort, with the next step adding to that data, and so on until all the data has been collected and the final transaction can be placed into the main operational tables.
That does not seem to limit the transactional complexity of web applications.
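The staging-table mechanism described above can be sketched roughly as follows. This is only an illustration of the idea, not a tested implementation: the `pending_steps` table, its columns, and the connection details are all hypothetical, and a real application would need housekeeping such as expiring abandoned sessions.

```perl
#!/usr/bin/perl
# Sketch: each page INSERTs its data into a scratch table keyed by a
# session id; the final page applies the collected rows to the real
# tables inside one short-lived transaction.  Table/column names and
# DSN/credentials are placeholders.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:database=shop', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 1 });

# Any intermediate page: record this page's data; no transaction needed.
sub record_step {
    my ($session_id, $step, $payload) = @_;
    $dbh->do('INSERT INTO pending_steps (session_id, step, payload)
              VALUES (?, ?, ?)',
             undef, $session_id, $step, $payload);
}

# Final page: apply all collected steps atomically, then discard them.
sub finalize {
    my ($session_id) = @_;
    $dbh->begin_work;
    eval {
        my $rows = $dbh->selectall_arrayref(
            'SELECT step, payload FROM pending_steps
              WHERE session_id = ? ORDER BY step',
            undef, $session_id);
        # ... INSERT/UPDATE the real operational tables from @$rows ...
        $dbh->do('DELETE FROM pending_steps WHERE session_id = ?',
                 undef, $session_id);
        $dbh->commit;
    };
    if ($@) { $dbh->rollback; die $@ }
}
```

The point is that each HTTP request opens, uses, and closes its own connection; the only state carried between pages is the session id and the staged rows, so no database transaction ever spans two requests.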
A bigger problem with your proposed/preferred solution, at least for intensively used sites, is that each session holds its own database connection even though most of those sessions are idle most of the time - they occupy database resources, possibly lock each other out, and in general it is not very efficient.
the goal would be to create an initial id, with each subsequent page of a given session using that id. each session would create/use its own id... this solves your concern about multiple people accessing the db...
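Minting the per-session id described above is straightforward in plain Perl. The scheme below is purely illustrative (a real application would more likely use an existing session module such as CGI::Session rather than rolling its own):

```perl
#!/usr/bin/perl
# Sketch: generate a session id from pid, time, and randomness, hashed
# to a fixed-width hex string.  Good enough for a demo; not a
# cryptographically strong token.
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

sub new_session_id {
    return md5_hex($$ . time() . rand());
}

my $id = new_session_id();
print "session id: $id\n";
```

Each page would receive this id (via a cookie or hidden form field) and use it as the key into whatever per-session state lives in the database.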
the middleware logic would have to take a lot into account to be robust for general usage. i'm not worried about that... right now, my primary issue is how to actually pass a dbi handle from a parent to a child using perl!
And, as I said earlier, in general it is impossible. I can't come out and say "in DBD::MySQL it is impossible" because I don't know that for sure. But I do think it is unlikely to be possible - extremely unlikely. There has been frequent discussion about whether it is possible to share a database handle across forked processes. I believe that the consensus is that the answer is "No", especially for programs written with a scintilla of portability. It does depend somewhat on the nature of the driver, but the two-process architecture for the typical DBMS makes such sharing unreliable if it works at all.
i suspect that it is possible to accomplish... right now, there appears to be something weird with regard to my simply doing an 'exec child.pl $dbh'...
IMNSHO, it is weird that you would expect that an address created in the parent process should have the same significance in the child process - let alone the issues of resources associated with that address in the parent.
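What *can* cross an exec boundary is the information needed to make a new connection, not the connection itself. A sketch of that alternative, under the assumption that the child reconnects for itself (file names, DSN, and the environment-variable names are all illustrative; passing credentials via the environment is shown only for brevity):

```perl
#!/usr/bin/perl
# parent.pl (sketch): pass the DSN and a session key as arguments, and
# credentials via the environment; child.pl makes its own connection.
use strict;
use warnings;

my $dsn        = 'dbi:mysql:database=test';
my $session_id = 'abc123';               # hypothetical session key
$ENV{APP_DB_USER} = 'user';
$ENV{APP_DB_PASS} = 'pass';
exec('perl', 'child.pl', $dsn, $session_id)
    or die "exec failed: $!";

# --- child.pl (sketch) ---
# use strict; use warnings; use DBI;
# my ($dsn, $session_id) = @ARGV;
# my $dbh = DBI->connect($dsn, $ENV{APP_DB_USER}, $ENV{APP_DB_PASS},
#                        { RaiseError => 1 });
# ... work keyed on $session_id ...
```

The stringified `$dbh` that `exec child.pl $dbh` passes is just an address in the parent's (now replaced) memory; the DSN, by contrast, is meaningful in any process.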
I wish you luck, but I think you're trying to do the unattainable. Please address further questions to [EMAIL PROTECTED] and not to me.
-----Original Message----- From: Jonathan Leffler [mailto:[EMAIL PROTECTED] Sent: Sunday, August 01, 2004 8:27 PM To: [EMAIL PROTECTED] Subject: Re: passing mysql dbi handle from parent to child
bruce wrote:
i'm in need of a solution for a type of connection pooling for mysql...
OK - does MySQL provide connection pooling?
Why do you need connection pooling?
if i can create the dbi connection handle, i should be able to pass it to a child process
On what basis do you think that?
(i hope) as long as the original parent process that created the handle is still alive.
The problems here are legion. There are sharing issues - what happens if the parent tries to send a message at the same time that the child does? Which of them gets to read the answer? But that jumps the gun; how does the child's MySQL library know that a particular file descriptor that it inherited is a MySQL connection that it can use? There most likely is no mechanism to do so.
the concern i have is that the parent process id might be somehow tied to the dbi connection handle/link...
That might be part of the problem - DBD::Informix will explicitly include such a check precisely because inherited connections - even across a fork, let alone an exec as you plan to use - do not work reliably.
basically, i want to allow multiple pages within a web app to be able to access the db using the same connection id....
Que? Pages don't have connections - per se. Presumably, you're thinking about the web server opening the database connection, and then forking off children to handle CGI script functionality? What happens when two people try to access the page so close together in time that the first session is incomplete before the second starts? Answer: all hell breaks loose - assuming you can get it to work at all.
No; sorry, I think you're barking up the wrong tree here. You need to rethink your requirements. Why do you think that it is too slow for each CGI script to connect? Have you measured it?
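The "have you measured it?" question can be answered with a few lines of Perl. The sketch below times a batch of connect/disconnect cycles; DSN and credentials are placeholders, and it obviously needs a reachable MySQL server to run.

```perl
#!/usr/bin/perl
# Sketch: measure the actual cost of a per-request connect before
# assuming it is too slow to do on every CGI invocation.
use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

my $n  = 100;
my $t0 = [gettimeofday];
for (1 .. $n) {
    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'pass',
                           { RaiseError => 1 });
    $dbh->disconnect;
}
printf "%.2f ms per connect\n", 1000 * tv_interval($t0) / $n;
```

If the measured cost turns out to matter, persistent-interpreter environments such as mod_perl with Apache::DBI cache connections per server child, which addresses the overhead without trying to share one handle across processes.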
-----Original Message----- From: Jonathan Leffler [mailto:[EMAIL PROTECTED] Sent: Sunday, August 01, 2004 6:54 PM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: passing mysql dbi handle from parent to child
bruce wrote: relatively new to perl... i'm trying to pass the mysql dbi connection handle from a parent app to a child app via the exec/system shell command. i'm having some issues with implementation, and was wondering if anybody might have some suggestions as to how this can be accomplished...
For many (if not most) databases, there is no way to do that. Database systems prefer each client to connect separately - the exact opposite of what you're trying to do.
If you were going to do it under Informix, you'd need to consider the XA transaction management system, and have the initial process open a global transaction - becoming a transaction manager in the process - and then arrange for the child processes to take on separate branches of the global transaction. You might, just about, make that work, but it is a whole lot of work for minimal if not zero benefit - especially since having the child process connect directly would be (relatively) trivial.
So, why did you want to do it?
i've tried passing the handle directly/serialized/etc... with no luck... the test parent app is still running, so the handle that i'm passing should be valid, but i seem to be missing/lacking something.
[...snip...]
--
Jonathan Leffler ([EMAIL PROTECTED], [EMAIL PROTECTED]) #include <disclaimer.h>
Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/
