Yeah, I could see how that would be the case.  I expect it would be possible
to set it up to NOFOLLOW all the expensive links, but I doubt trac does that
out of the box, and it's probably not worth the effort.
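
(For reference, the robots.txt rule discussed below is the blanket approach; a minimal sketch of what it looks like, using the path mentioned in this thread:

```
User-agent: *
Disallow: /browser/
```

The per-link alternative would be emitting rel="nofollow" on the expensive anchors instead, which Trac doesn't appear to do by default.)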

I need to find a better solution for web-based search of our code.  Maybe
the magical LXR stuff in the works at Mac OS Forge (mentioned in the other
thread) will be the solution.

Thanks for your quick response.

-eric

On Tue, Dec 1, 2009 at 1:41 AM, Mark Rowe <[email protected]> wrote:

>
> On 2009-11-30, at 22:36, Eric Seidel wrote:
>
> It's bothered me for a while that I can't just type "trac webkit
> Document.cpp" into Google and have it give me a trac link to our
> Document.cpp page.
> http://trac.webkit.org/browser/trunk/WebCore/dom/Document.cpp
>
> I checked http://trac.macosforge.org/robots.txt tonight and lo and behold
> we disallow "browser/" (which is where all these links live).  Curious if
> this is intentional, and if we should change this setting?
>
>
> Web crawler indexing of Trac is seriously painful for the servers involved.
>  The entire SVN history of the repository is accessible.  File content.
>  Changes.  Annotations.  Everything.  That's not cheap to compute and serve
> up.
>
> - Mark
>
>
_______________________________________________
webkit-dev mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev