Keith,
I'm not really sure you want to do that with Oracle... maybe there's a
situation where it makes sense, but I can't think of one just yet.  A db
connection is a _significant_ resource hog (this goes 100x for Oracle), and
handing these out like candy to your users seems overboard.  I'm not really
sure of the specifics of the problem you're having, but are you using
Apache::Session's DBI with Oracle?  That just doesn't work right now :-(
I suggest you install MySQL to deal with your http sessions.  "Right tool
for the job" seems appropriate here.  Oracle is slow, but it has a lot of
functionality (PL/SQL); MySQL is FAST, but is missing a lot of functionality,
sub-selects being one that drives me NUTS.  Anyhoo, for sessions you really
just need a primitive database, and MySQL fits the bill.  (Note: I'm not
saying MySQL is primitive, it's actually quite functional, so no flames on
this one :-)

I'm sort of working on something to solve this problem... it's one that I
perceive as a big issue: we need a fast place to store sessions, one that can
serve an entire "web server farm" and handle locking issues correctly.  To
that end I've been working on an in-memory storage and locking daemon written
in C, but it's only about 40% done on the daemon end, and I still have to
write the Perl interface, so it probably won't be out for a couple of months.
(It's actually kind of cool: it uses a SIGIO interface to handle massive
numbers of connections, but only in one thread, and as a result the number of
"real mutexes" needed is almost zero.  No benchmarks yet, but based on
experience, if I don't screw up along the way, probably at bare minimum 2000
sessions dished out per second on a decent computer (700MHz or so).  If
persistent connections could be worked out that number could be higher.  The
other problem is getting the right data structure to handle this stuff...
working on that right now.)
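
In case it helps picture the single-thread SIGIO trick, here's a rough,
Linux-only sketch of the general technique (F_SETSIG plus a real-time signal,
drained with sigwaitinfo) -- this is NOT my daemon, just an illustration; the
port number and the echo handling are placeholders, and most error checks are
omitted:

/* sigio_sketch.c -- single-threaded SIGIO-style event loop (Linux only).
 * Build with: gcc -Wall -o sigio_sketch sigio_sketch.c */
#define _GNU_SOURCE             /* for F_SETSIG and O_ASYNC */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define RTSIG (SIGRTMIN + 1)    /* real-time signal used for I/O events */

/* Arm a descriptor so readiness is reported through our real-time signal. */
static void arm_fd(int fd)
{
    fcntl(fd, F_SETOWN, getpid());                  /* deliver to this process */
    fcntl(fd, F_SETSIG, RTSIG);                     /* siginfo will carry si_fd */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);
}

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(8765);             /* placeholder port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);
    listen(lsock, 128);

    /* Block the signals and dequeue events synchronously with sigwaitinfo():
     * one thread, no async handlers, so no mutexes needed. */
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, RTSIG);
    sigaddset(&set, SIGIO);                         /* fallback on queue overflow */
    sigprocmask(SIG_BLOCK, &set, NULL);
    arm_fd(lsock);

    for (;;) {
        siginfo_t info;
        int sig = sigwaitinfo(&set, &info);
        if (sig < 0)
            continue;
        if (sig == SIGIO) {
            /* The RT signal queue overflowed; a real daemon would fall back
             * to poll()ing every descriptor here. */
            continue;
        }
        int fd = info.si_fd;                        /* which descriptor is ready */
        if (fd == lsock) {
            int conn;
            while ((conn = accept(lsock, NULL, NULL)) >= 0)
                arm_fd(conn);                       /* new client joins the loop */
        } else {
            char buf[4096];
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0)
                close(fd);                          /* client gone */
            else
                (void)write(fd, buf, (size_t)n);    /* stand-in for real session work */
        }
    }
}

The nice part is that because sigwaitinfo() pulls events inside the one loop,
nothing runs asynchronously against your data structures, which is where the
"almost zero mutexes" comes from.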

Thanks,
Shane.
BTW: This is part of a larger project to abstract fast I/O into a library...
the goal is to have a set of source code you compile once on any architecture
and that will automatically pick the fastest I/O mechanism for your arch.
Then when a programmer wants to use a fast I/O mechanism, they just pass in a
struct which references a series of "action point" functions, like "what to
do when we get a request", etc.  Right now I'm only doing SIGIO, but later
I'll try to pull in other things that will work on any arch (non-blocking
with select or poll, etc.).  Then they link their program against that fast
I/O library.  That's the goal anyhow... I don't know if I'll ever complete
the whole thing, but I'm going to do SIGIO, then sessions, and then start
working on more inclusive stuff.
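
To make the "action point" idea a little more concrete, here's a hypothetical
sketch of what the interface could end up looking like.  None of these names
(fastio_actions, fastio_run, on_request, the port number, etc.) are real yet,
and the select() backend below is only a stand-in so the sketch is
self-contained -- the finished library would drive the same callbacks from
SIGIO or whatever is fastest on the platform:

/* fastio_sketch.c -- hypothetical "struct of action points" interface.
 * Build with: gcc -Wall -o fastio_sketch fastio_sketch.c */
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Callbacks the application supplies; the library calls them from whatever
 * event mechanism (SIGIO, poll, select, ...) it picked for this platform. */
struct fastio_actions {
    void (*on_connect)(int fd, void *ctx);                          /* client connected    */
    void (*on_request)(int fd, char *buf, size_t len, void *ctx);   /* data arrived        */
    void (*on_close)(int fd, void *ctx);                            /* client disconnected */
    void *ctx;                                                      /* opaque app data     */
};

/* Portable fallback backend: one select() loop dispatching to the callbacks. */
static int fastio_run(unsigned short port, const struct fastio_actions *a)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family      = AF_INET;
    sa.sin_port        = htons(port);
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(lsock, (struct sockaddr *)&sa, sizeof sa) < 0 || listen(lsock, 64) < 0)
        return -1;

    fd_set all;
    FD_ZERO(&all);
    FD_SET(lsock, &all);
    int maxfd = lsock;

    for (;;) {
        fd_set ready = all;
        if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
            return -1;
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready))
                continue;
            if (fd == lsock) {                                      /* new connection */
                int conn = accept(lsock, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &all);
                    if (conn > maxfd)
                        maxfd = conn;
                    if (a->on_connect)
                        a->on_connect(conn, a->ctx);
                }
            } else {                                                /* request or hangup */
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n <= 0) {
                    if (a->on_close)
                        a->on_close(fd, a->ctx);
                    FD_CLR(fd, &all);
                    close(fd);
                } else if (a->on_request) {
                    a->on_request(fd, buf, (size_t)n, a->ctx);
                }
            }
        }
    }
}

/* Example "action point": echo whatever a client sends. */
static void echo_request(int fd, char *buf, size_t len, void *ctx)
{
    (void)ctx;
    (void)write(fd, buf, len);
}

int main(void)
{
    struct fastio_actions a;
    memset(&a, 0, sizeof a);
    a.on_request = echo_request;
    return fastio_run(8765, &a);
}

The point is that the application only ever fills in the struct; which I/O
mechanism actually drives the callbacks is the library's problem, so the same
program gets whatever is fastest on its arch for free.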

> Hello,
> 
> I have been using mod_perl for a while now, works great... the db
> connections to my Oracle database are pooled for quick re-use by my
> mod_perl apps, which is all fine until I develop web applications that
> need to prevent "dirty read" situations between different user http
> sessions (since there is no dedicated db connection used across http
> requests)... (and YES, I know there are "schemes" that deal with this,
> but they are needlessly cumbersome when the Oracle database should be
> doing the work instead...)
> 
> QUESTION: Is there a way to open a db connection and attach it to a
> session cookie so that only one user session will be able to use that
> connection? (so I could "select for update" in a user http request,
> thereby blocking another user http request from updating the row while
> the first user session is reading the row)
> 
> Please post and or email me at [EMAIL PROTECTED]
> Thanks,
> Keith