-----Original Message-----
From: Craig R. McClanahan [mailto:craigmcc@apache.org]
Sent: Tuesday, November 05, 2002 11:50 AM
To: Struts Users Mailing List; [EMAIL PROTECTED]
Subject: Re: robot search engines and struts




On Tue, 5 Nov 2002, Dragan Ljubojevic wrote:

> Date: Tue, 5 Nov 2002 14:11:47 +0100
> From: Dragan Ljubojevic <[EMAIL PROTECTED]>
> Reply-To: Struts Users Mailing List <[EMAIL PROTECTED]>,
>      [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: robot search engines and struts
>
> If I put all JSP pages in a protected directory and all
> URLs end with .do, how can a web crawler like Google index my application?
> What is a good solution to this problem?
>

The algorithms used by search engines do not match well with the design of
MVC-based application architectures.  The principal reason is that any
given URL submitted by a user (typically a ".do" URL) can trigger the
output of *any* page of your app, depending on which page your action
decides to forward to -- the fact that some particular text was returned
once (when the crawler grabbed it) is not a reproducible event.
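To illustrate (a hypothetical mapping, made up for this example -- the path, class, and form names are not from any real app), a single ".do" URL can forward to completely different JSPs depending on what the Action decides at runtime:

```xml
<!-- Hypothetical struts-config.xml fragment: one URL, many possible pages -->
<action path="/login"
        type="com.example.LoginAction"
        name="loginForm"
        scope="request"
        input="/WEB-INF/pages/login.jsp">
  <!-- LoginAction chooses the forward at runtime, so a crawler that
       fetched /login.do once cannot reproduce the page it indexed -->
  <forward name="success" path="/WEB-INF/pages/welcome.jsp"/>
  <forward name="failure" path="/WEB-INF/pages/login.jsp"/>
</action>
```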

Further, any search engine crawler is automatically going to skip
protected URLs, no matter what app architecture you use.

Bottom line -- search engines are for web *sites*, not web *apps*.  I'd
recommend you use a robots.txt file on your server to tell crawlers to
skip everything in your app (except possibly the welcome page, if you want
that to be indexed).
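For example, assuming your app is deployed under a "/myapp" context path (adjust to match your deployment), a robots.txt at the server's document root could look like this:

```
# Tell all crawlers to skip the whole application
User-agent: *
Disallow: /myapp/
```

If you want the welcome page indexed, the simplest approach is to serve it from a path that is not covered by the Disallow rule, since the original robots.txt convention only defines Disallow prefixes.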

>
> Dragan Ljubojevic
>

Craig McClanahan


--
To unsubscribe, e-mail:   <mailto:struts-user-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:struts-user-help@jakarta.apache.org>
