On 2009-11-30, at 22:49, Eric Seidel wrote:

> Yeah, I could see how that would be the case. I expect it would be possible
> to set it up to NOFOLLOW all the expensive links, but I doubt trac does that
> out of the box, and it's probably not worth the effort.
rel=nofollow doesn't do what you think it does. It prevents a link from
implying influence. It doesn't prevent the link from being followed and the
destination content from being indexed.

> I need to find a better solution for web-based search of our code. Maybe the
> magical LXR stuff in the works at Mac OS Forge (mentioned in the other
> thread) will be the solution.

"git grep" is hard to beat.

- Mark

> On Tue, Dec 1, 2009 at 1:41 AM, Mark Rowe <[email protected]> wrote:
>
> On 2009-11-30, at 22:36, Eric Seidel wrote:
>
>> It's bothered me for a while that I can't just type "trac webkit
>> Document.cpp" into Google and have it give me a trac link to our
>> Document.cpp page.
>> http://trac.webkit.org/browser/trunk/WebCore/dom/Document.cpp
>>
>> I checked http://trac.macosforge.org/robots.txt tonight and lo and behold
>> we disallow "browser/" (which is where all these links live). Curious if
>> this is intentional, and if we should change this setting?
>
> Web crawler indexing of Trac is seriously painful for the servers involved.
> The entire SVN history of the repository is accessible. File content.
> Changes. Annotations. Everything. That's not cheap to compute and serve up.
>
> - Mark
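For readers less familiar with the distinction Mark draws above, here is an
illustrative bit of markup. It is a sketch, not taken from Trac's actual
templates:

    <!-- rel=nofollow: the link confers no endorsement, but a crawler
         may still follow it and index the destination page -->
    <a rel="nofollow" href="/browser/trunk/WebCore/dom/Document.cpp">Document.cpp</a>

    <!-- to keep the destination itself out of the index, the page
         would need to opt out, e.g. via a robots meta tag -->
    <meta name="robots" content="noindex">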
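The robots.txt rule Eric mentions finding would look roughly like the
following. This is a sketch of a typical entry; the exact contents of
http://trac.macosforge.org/robots.txt may differ:

    User-agent: *
    Disallow: /browser/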
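And for anyone who hasn't used it, "git grep" searches a checkout (or any
revision) directly, which is why it is hard to beat here. A typical
invocation, assuming a WebKit checkout with the layout above and
'documentElement' as a stand-in search pattern:

    # search the working tree, with line numbers
    git grep -n 'documentElement' -- WebCore/dom/

    # search a specific revision instead of the working tree
    git grep -n 'documentElement' origin/master -- WebCore/dom/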

