Linux, MySQL 4.1.14, DBI 1.38, DBD::mysql 2.9002

Lo,

This isn't so much a problem as a "what's the best way to do it" 
question.

I have a daemon (a Perl webserver) that connects to a MySQL db when 
it receives a request. To remove the connection overhead, I thought 
I'd use connect_cached (with autocommit off). This works, up to a 
point. When a select is issued it finds the records. However, when a 
completely different process (the mysql CLI) adds records, these are 
not found by the daemon. It seems to have a snapshot of the data 
that doesn't refresh.
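
For reference, the daemon's connection code looks roughly like this 
(a sketch; the DSN, credentials and table name are made up):

    use strict;
    use warnings;
    use DBI;

    sub handle_request {
        # connect_cached returns the same handle for identical
        # arguments, so the connection (and its open transaction)
        # persists across requests to the daemon.
        my $dbh = DBI->connect_cached(
            'DBI:mysql:database=test;host=localhost',
            'user', 'password',
            { AutoCommit => 0, RaiseError => 1 },
        );
        # Rows committed by other sessions after this transaction
        # started are the ones that never show up here.
        return $dbh->selectall_arrayref('SELECT id, name FROM widgets');
    }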

Now, I can fix this by:
1) Issue a commit or rollback after the select (as sketched after 
this list). Does this have any MySQL server overhead? It's an 
ordinary select, so I'm unsure what a commit does when no db 
changes have been made.
2) Don't use connect_cached. But then I get the connection 
overhead.
3) Turn autocommit on.
If I do this I then need to turn it off when I wrap multiple record 
creates in a transaction, and turn it back on again afterwards. 
This is easy to do, but the DBD::mysql docs hint that toggling it 
could fail, so this method may be unstable (see the begin_work 
sketch after this list).
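
For option 1, the fix would be a single extra call after each read 
(a sketch, assuming a $dbh obtained via connect_cached as above):

    # Ending the transaction releases the snapshot; the next select
    # starts a fresh transaction and sees newly committed rows.
    my $rows = $dbh->selectall_arrayref('SELECT id, name FROM widgets');
    $dbh->rollback;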
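
For option 3, DBI's begin_work looks like an alternative to toggling 
the AutoCommit attribute by hand: with the handle created with 
AutoCommit => 1, begin_work turns it off until the next commit or 
rollback, which turns it back on automatically. A sketch (table and 
column names are made up):

    $dbh->begin_work;   # AutoCommit off until commit/rollback
    $dbh->do('INSERT INTO orders (customer) VALUES (?)',
             undef, 'acme');
    $dbh->do('INSERT INTO order_lines (order_id, item) VALUES (?, ?)',
             undef, 1, 'widget');
    $dbh->commit;       # AutoCommit automatically restored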

So, what's the preferred method, or is there another one?

John
