On Tuesday, 18 November 2003 20:10, Kris Schneider wrote:

Oh, I see I left that approach out; I dropped it at an early stage of my
design considerations. Of course you can strip the session information from
the response with a filter that way, but what happens when cookies are
disabled and the client is not Google but some John Doe user? Let's simulate
it. Request #1 comes in, something in my code creates a session, and the
filter intercepts the response and removes the session info. So far, so good.
Now the user clicks a link. Since the request carries no session info, the
server treats it as the start of a new session and creates another one. On
the way back, the filter kills that one too, of course. And so on. Before
long there may be 30+ stale sessions hanging around, waiting for their
timeout, just because the filter suppressed the very information the server
expected to see again.

Granted, the filter could call session.invalidate() at some point before the
response is committed, but I would still feel I was doing something wrong,
because I would be coding against the machine and against the specification,
and that's not how it's meant to be. It may even work, but I would still feel
uneasy about it.
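
Just so we're talking about the same thing, here is roughly what such a
wrapper filter would look like, as far as I can see (untested, class name
invented on the spot, Servlet 2.3 API assumed):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Suppresses URL rewriting by handing a wrapped response down the chain
    // whose encode methods return the URL untouched. Note that it does
    // nothing about the sessions the container keeps creating.
    public class NoRewriteFilter implements Filter {

        public void init(FilterConfig config) throws ServletException {}

        public void doFilter(ServletRequest req, ServletResponse res,
                             FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            chain.doFilter(req, new HttpServletResponseWrapper(response) {
                // Return the URL as-is instead of appending ;jsessionid=...
                public String encodeURL(String url) { return url; }
                public String encodeRedirectURL(String url) { return url; }
                // Deprecated spellings, for older code that still calls them
                public String encodeUrl(String url) { return url; }
                public String encodeRedirectUrl(String url) { return url; }
            });
        }

        public void destroy() {}
    }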
To me, the preferable approach is to avoid sessions in the first place if you
don't want them (just as in real life) rather than killing them afterwards.
If that's not possible, I prefer to handle the general case in a general
fashion and the special conditions (say, the GoogleBot showing up again) in
special ways, too. As for a customized Google filter: the simple cloaking
approach I described earlier is easier to implement and closer to my personal
taste. YMMV.
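
For completeness, "avoiding them" on the Java side boils down to never asking
for a session unless one already exists, e.g. something like this little
helper (name made up); JSPs would additionally need <%@ page session="false" %>,
since they create a session by default:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public final class SessionLookup {
        // Read a session attribute without forcing a session into existence:
        // getSession(false) returns null instead of creating one.
        public static Object peek(HttpServletRequest request, String key) {
            HttpSession session = request.getSession(false);
            return (session != null) ? session.getAttribute(key) : null;
        }
    }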

Thanks for an interesting idea that I may have dropped too soon back then;
but even on reconsidering it, I think I would still stick with a different
approach. It's my private site after all, and I just don't want to feel
bad about it in some way :-)

-- Chris.

NB. One thing I still wonder about is whether it's really the jsessionid
that Google dislikes, or the *.do ending in general. As far as I can tell,
Google also indexes plain .jsp pages, and those run in a session by default
anyway. I don't know. If it really is just the *.do ending, it's easy to map
Struts to *.html (assigning a .htm ending to truly static pages), or to use
mod_rewrite or the rewrite filter available in Resin 3.0. I haven't checked
any of that yet, though.
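
Purely to illustrate the mapping part (untested; the servlet-name is the
usual one from the Struts blank webapp), the web.xml change would be nothing
more than:

    <!-- Map the Struts controller to *.html instead of *.do;
         truly static pages would then get a .htm extension. -->
    <servlet-mapping>
        <servlet-name>action</servlet-name>
        <url-pattern>*.html</url-pattern>
    </servlet-mapping>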
 
> NAFAIK, but that's by no means definitive. I *do* know that you can
> configure a TC context (or default context) to do the opposite. In
> other words, turn off cookies and only use rewriting.
>
> Hm, what if you create a filter to wrap the response with an
> HttpServletResponseWrapper that no-ops encodeUrl and encodeURL?
>
> Quoting Brice Ruth <[EMAIL PROTECTED]>:
> > Is there any way to disable URL rewriting (with jsessionid) in
> > Tomcat or via struts-config.xml or anything? I'm about at my wits
> > end with this jsessionid thing - now our search engine which
> > indexes by crawling the site (and doesn't support cookies) can't
> > index properly because of the jsessionid property ... and frankly,
> > I'm not really caring at this point if the 3-5% of visitors to our
> > site can't use sessions (there's practically no functionality that
> > depends on it anyway - mainly a performance improvement, where it
> > is being used). I'd like to leave sessions via cookies enabled, but
> > disable URL rewriting for sessions.
> >
> > --
> > Brice D. Ruth
> > Sr. IT Analyst
> > Fiskars Brands, Inc.

