Please make an RFE for this, so that we don't lose this discussion.

Johan

On 6/22/06, Matej Knopp [EMAIL PROTECTED]
wrote:
Sure, this is a solution, but it complicates things a bit. All items
stored in thread locals (session, Hibernate session, current transaction, etc.) have to be propagated to the
Hi,
we have a problem with session locking. Sometimes, the underlying
database just hangs and the response is not processed properly. The
problem is that HttpSession remains locked.
Wouldn't it be possible to replace the current locking mechanism
( synchronized(lock) { doEverything() } )
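The coarse-grained locking Jan is referring to could be sketched like this (illustrative Java only, not the actual Wicket source; the class and method names are made up):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the locking the thread describes: one lock per
// session, held for the entire request ("doEverything()" under the lock).
public class SessionLockSketch {
    private final ReentrantLock sessionLock = new ReentrantLock();

    public String processRequest(String request) {
        sessionLock.lock();          // every request for this session queues here
        try {
            return handle(request);  // the whole request runs under the lock
        } finally {
            sessionLock.unlock();    // if handle() hangs, this never runs
        }
    }

    private String handle(String request) {
        return "handled:" + request;
    }

    public static void main(String[] args) {
        SessionLockSketch s = new SessionLockSketch();
        System.out.println(s.processRequest("page1"));
    }
}
```

The problem described in the thread is visible in the structure: if `handle()` blocks forever on the database, the `finally` is never reached and every later request for the session parks on `sessionLock.lock()`.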
not unless all page targets return the same mutex instance :)

the point is that only a single page is processed at a time because of pagemap, etc. although i dont really know if the pagemap is affected all that much in 2.0.

but even then, lets say i double click a link really fast, i get two
It seems to me that this synchronization also
affects other requests (not only the Page). We have a page with images taken
from the database. The images appear one by one instead of simultaneously...
Jan
"Igor Vaynberg" [EMAIL PROTECTED]
wrote in message news:[EMAIL PROTECTED]...
in 1.2 we refactored things so that resources do not necessarily sync on session. if you are using the Image component - then i think yes, it will sync on session because the Image component is part of the page... johan would know best since he did most of the refactoring

-Igor

On 6/21/06, jan_bar [EMAIL
Igor Vaynberg wrote:
not unless all page targets return the same mutex instance :)
The mutex would be stored in session of course :)
the point is that only a single page is processed at a time because of
pagemap, etc. although i dont really know if the pagemap is affected all
that much in
but you wouldnt want to unlock the mutex, thats kinda the whole point! what if both pages access session object? or application object? now you have to start worrying about concurrency on those. the advantage of letting only one page request through at a time is that users dont have to think about
Well, it would be your responsibility not to touch application or
session if the mutex is unlocked. This is not something an average joe
would do in every link handler. You would do that only right before a
long db operation and lock it right after it.
I'm not looking for a perfect solution. I'm
in that case we sort of already have a mechanism for that, request target has a getlock() which you can override and return something other than session, just need a way to encode a url so it is resolved to your own target

-Igor

On 6/21/06, Matej Knopp [EMAIL PROTECTED] wrote:
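Igor's getlock() idea might look roughly like the following sketch. The interface and names here are illustrative, not the real Wicket 1.2 API:

```java
// Illustrative only: a request target that supplies its own lock instead
// of the session-wide one, so it does not serialize with page requests.
// RequestTarget, getLock() and ResourceTarget are hypothetical names.
interface RequestTarget {
    Object getLock();   // the monitor the request processor synchronizes on
    String respond();
}

class ResourceTarget implements RequestTarget {
    private final Object ownLock = new Object();  // per-target, not the session

    @Override public Object getLock() { return ownLock; }
    @Override public String respond() { return "resource-bytes"; }
}

public class TargetLockSketch {
    static String process(RequestTarget target) {
        // lock on whatever the target chooses; a page target would return
        // the session mutex here, a resource target its own lock
        synchronized (target.getLock()) {
            return target.respond();
        }
    }

    public static void main(String[] args) {
        System.out.println(process(new ResourceTarget()));
    }
}
```

This addresses Jan's images-loading-serially complaint, but, as Matej points out next, it does not help with releasing the session lock for only part of a request.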
Well, it would be
Sorry, i'm lost. How is overriding getlock() supposed to help me? I need
to lock the session, but not for the whole request. Just for a part
before the db call and the part after.
-Matej
Igor Vaynberg wrote:
in that case we sort of already have a mechanism for that, request
target has a
If there was something like
RequestCycle.lockSession()
RequestCycle.unlockSession()
where in my onClick handler I could call
RequestCycle.get().unlockSession()
--do-my-stuff-and-don't-touch-the-session!
RequestCycle.get().lockSession()
I know this adds responsibility to the developer, but this
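Matej's proposed lock/unlock dance could be sketched like this, with a plain ReentrantLock standing in for the session lock (the RequestCycle methods are hypothetical; this was never part of Wicket):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the proposal: release the session lock around a long database
// call, then re-acquire it before touching the session again.
public class UnlockAroundDbSketch {
    private final ReentrantLock sessionLock = new ReentrantLock();

    public String onClick() {
        sessionLock.lock();                     // normally done by the framework
        try {
            String before = "prepare";          // safe: session is locked
            sessionLock.unlock();               // "RequestCycle.unlockSession()"
            String db;
            try {
                db = longDatabaseCall();        // must not touch the session here
            } finally {
                sessionLock.lock();             // "RequestCycle.lockSession()"
            }
            return before + "+" + db + "+finish";
        } finally {
            sessionLock.unlock();
        }
    }

    private String longDatabaseCall() { return "rows"; }

    public static void main(String[] args) {
        System.out.println(new UnlockAroundDbSketch().onClick());
    }
}
```

The hazard the rest of the thread objects to is also visible here: between the unlock and the re-lock, another request for the same session can run, so the developer must guarantee the session is not touched in that window.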
the page targets are too flexible to support something like this because they do not lock on session specifically - the implementation just happens to do that now. and this would also require us to completely change how we do locking/refactor requesttargets possibly, etc.
we should run this past
I'm also curious what other devs would say. New thread could solve this
but it's not the solution i'm looking for.
I think the locking should be more flexible than just
synchronized(myBFLock) {
processTheWholeRequest();
}
even if it needs additional refactor.
(Maybe RequestTarget#lock
I think providing unlocking functionality is not the way to go. You
basically just provide a hack opportunity if you do that, and that's
not what a framework should do. I'm not really getting the problem at
this point, but if we have a possible deadlock in our request
handling, that's obviously a
I don't know if it can be considered a deadlock, but basically, if (for
any reason) a request doesn't complete (db problem, etc), no further
requests from the same session can be processed.
Now maybe if you have perfect business objects that's not an issue, but
in our current project we depend
We could consider building in a time out for the lock itself. That
timeout should be configurable. I agree that being able to hang up the
session is definitely not what we should want.
Eelco
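Eelco's timeout idea could be sketched with java.util.concurrent (illustrative only, not Wicket code; the class name is made up):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a configurable timeout on the session lock: instead of
// blocking forever, give up after a bounded wait so one hung request
// cannot freeze the whole session.
public class TimedSessionLock {
    private final ReentrantLock lock = new ReentrantLock();
    private final long timeoutMillis;

    public TimedSessionLock(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public String process(String request) {
        boolean acquired;
        try {
            acquired = lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "interrupted";
        }
        if (!acquired) {
            // a previous request still holds the lock past the timeout;
            // the framework could report an error instead of hanging
            return "timed-out";
        }
        try {
            return "handled:" + request;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        TimedSessionLock s = new TimedSessionLock(100);
        System.out.println(s.process("page1"));
    }
}
```

Note the timeout only bounds the *waiting* request; the hung request still holds the lock, so what to do with the session afterwards remains an open question.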
On 6/21/06, Matej Knopp [EMAIL PROTECTED] wrote:
I don't know if it can be considered a deadlock, but
I agree with Eelco. We had a lot of problems in our project with pages
getting expired due to too-lax locking. I'm not rushing to open up
that can of worms again by relaxing the session locks.
On the other hand, we have experienced hanging sessions as well. I
haven't looked into it, as it doesn't
Am I the only one with this issue?
Hi,
I see your points. In my opinion, *any* database query (and not only
database queries) can be a long operation. During that time your code sits in an
idle loop waiting for the database (located on another PC), while the CPU could
be doing other useful work for the same
well, in order to not let the internal state be corrupted, requests have to be processed in the sequence in which they originated from the user. a good and easy way to do this is to lock on the session, making sure that requests from that user are basically serialized. we really havent found a better way to do this.
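The per-session serialization Igor describes can be sketched like this (illustrative; per the thread, Wicket itself locks on the session rather than keeping a separate lock map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: requests carrying the same session id are processed one at a
// time, in arrival order, by locking a per-session mutex. Requests for
// different sessions still run concurrently.
public class PerSessionSerializer {
    private final Map<String, Object> locks = new ConcurrentHashMap<>();
    private final StringBuilder log = new StringBuilder();

    public void handle(String sessionId, String request) {
        Object mutex = locks.computeIfAbsent(sessionId, id -> new Object());
        synchronized (mutex) {
            // only one request per session runs here at a time, so session
            // state needs no further synchronization inside handlers
            log.append(request).append(';');
        }
    }

    public String log() { return log.toString(); }

    public static void main(String[] args) {
        PerSessionSerializer s = new PerSessionSerializer();
        s.handle("sess-1", "a");
        s.handle("sess-1", "b");
        System.out.println(s.log());
    }
}
```

The trade-off debated throughout the thread follows directly from this structure: handlers stay simple because they never race each other within a session, but one blocked handler stalls every later request for that session.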