Thanks much!  I'm going over the changes, and will let you know if there is
anything I'd like to add/change.

On Jan 24, 2008 10:32 AM, Rick McGuire <[EMAIL PROTECTED]> wrote:

> With the additional patch for the start times and connect times and the
> TimeMonitor, I feel this is in good enough shape to be committed, so I
> checked it in.  I also wrote some additional tests for this.
>
> I did find a small anomaly with my tests, however.  I was looking for a
> case that would trigger the CONNECTION_CLOSED event, and discovered
> there are no places in the code where that event is raised.  It appears,
> in general, that close events are handled by the server closing the
> connection and these come through as CONNECTION_CLOSED_BY_SERVER
> events.  Ok, perhaps the CONNECTION_CLOSED event is not needed.


Yeah, I suspect CONNECTION_CLOSED_BY_SERVER would be enough.

BTW, I noticed that CONNECTION_CLOSED_BY_SERVER fires in
HttpIoHandler.exceptionCaught() as well.  I think it's redundant and will
cause double counting.  The exceptionCaught() method calls IoSession.close()
at the end, and HttpIoHandler.sessionClosed() will be invoked as a result,
which also fires the same event.  So I do not think we should fire the event
in exceptionCaught().
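
In other words, something along these lines (a rough sketch from memory, not
the exact code; the event-firing helper is just an illustrative name):

    public void exceptionCaught(IoSession session, Throwable cause) {
        // ... fail the outstanding request ...
        // No CONNECTION_CLOSED_BY_SERVER here: the close() below triggers
        // sessionClosed(), which already fires that event once.
        session.close();
    }

    public void sessionClosed(IoSession session) {
        // The single place where the close event gets raised.
        fireEvent(MonitoringEvent.CONNECTION_CLOSED_BY_SERVER, session);
    }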


>
>
> However, when writing some tests for the redirection cases, it appears
> that when we receive a redirection response back from the server, we're
> not getting a CONNECTION_CLOSED_BY_SERVER event.  If the connection is
> not cached, we appear to end up with a dangling live connection.
> Should something be done to explicitly close this connection in that
> situation?


Hmm, that's a good point.  The same behavior applies to other non-redirect
cases as well.  If a session cache is provided, we cache the session, but if
not, we do nothing with it.  Perhaps it would be safer to close the session
explicitly in all 3 cases in HttpIoHandler.messageReceived().

This hasn't been a problem so far, mostly because when the session cache is
not enabled we explicitly add "Connection: close" to the request headers, so
servers will close the connections anyway.  But I think it might be safer to
close the session on our side as well...  What do you think?
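
Roughly, what I have in mind for HttpIoHandler.messageReceived() is the
sketch below.  The sessionCache field and HttpResponseMessage type are just
illustrative names here, not necessarily the actual code:

    public void messageReceived(IoSession session, Object message) {
        HttpResponseMessage response = (HttpResponseMessage) message;

        // ... handle the normal / redirect / auth-challenge cases ...

        if (sessionCache != null) {
            // Keep-alive path: hand the session back to the cache for reuse.
            sessionCache.cacheSession(session);
        } else {
            // No cache: don't rely on the server honoring "Connection:
            // close"; close the session ourselves so we never leave a
            // dangling connection.
            session.close();
        }
    }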



>
> Rick
>
> Sangjin Lee wrote:
> >
> >
> > On Jan 23, 2008 6:50 AM, Rick McGuire <[EMAIL PROTECTED]
> > <mailto:[EMAIL PROTECTED]>> wrote:
> >
> >
> >     Only if it can be done without having to maintain the same sort of
> >     request-to-start time map that you don't wish to do with the
> >     listener.  The process of adding data collection shouldn't cause
> >     memory bloat there either, particularly if monitoring is not being
> >     used (the more likely case).  It seems more reasonable that this
> >     type of processing should be pushed into the monitoring
> >     implementation rather than have the async client try to keep track
> >     of everything.  This way, the overhead is only introduced while
> >     metrics are being gathered.  A very simple and relatively low cost
> >     means might be to add a couple of time stamp fields to the request
> >     object, but only for the most significant events.  Perhaps request
> >     start and connection start, but nothing else.  Another possible
> >     approach would be to have a mechanism that would allow the monitor
> >     to attach an annotation object to the request that could be used to
> >     implement a lifecycle memory if needed.  The cost of doing this is
> >     relatively minor when this type of information is not needed, but
> >     it's flexible enough to be tailored to any type of data collection.
> >
> >
> > I agree, the only things that make sense to be present are request
> > start time and connect start time...  I'm going to upload a revised
> > patch that shows this along with a proof-of-concept monitor that uses
> > this to compute the average response time.
> >
> > Thanks,
> > Sangjin
> >
>
>
