Robots.txt can exclude most of the Trac site and point crawlers at sitemap.xml (via a Sitemap: directive). That way most of the junk is blocked and only the important URLs get crawled. All major search engines support sitemap.xml, and those that don't will simply be blocked by robots.txt.
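Something along these lines, as a sketch only (the exact paths to disallow and the sitemap location are guesses and would need tuning for the actual Trac instance):

    # Hypothetical robots.txt for the Trac instance
    User-agent: *
    Disallow: /timeline
    Disallow: /log/
    Disallow: /report
    Sitemap: http://trac.webkit.org/sitemap.xml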
A script could generate sitemap.xml from a local svn checkout of trunk. It would produce one URL for each source file (changefreq=daily) and one URL for each revision (changefreq=yearly). That would cover most of the search requirements.
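A rough sketch of such a generator, assuming the usual Trac URL layout (/browser/trunk/<path> for files, /changeset/<rev> for revisions); the base URLs and checkout path below are placeholders, not the real configuration:

    import os
    import subprocess
    import xml.etree.ElementTree as ET

    # Placeholder locations; adjust for the real Trac instance and checkout.
    BROWSER_BASE = "http://trac.webkit.org/browser/trunk/"
    CHANGESET_BASE = "http://trac.webkit.org/changeset/"
    CHECKOUT = "/path/to/trunk"

    def add_url(urlset, loc, changefreq):
        # Append one <url> entry with a location and change frequency.
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "changefreq").text = changefreq

    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")

    # One entry per source file in the checkout (skip .svn metadata).
    for root, dirs, files in os.walk(CHECKOUT):
        dirs[:] = [d for d in dirs if d != ".svn"]
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), CHECKOUT)
            add_url(urlset, BROWSER_BASE + rel.replace(os.sep, "/"), "daily")

    # One entry per revision, up to the checkout's current revision.
    head = subprocess.check_output(["svnversion", "-n", CHECKOUT]).decode()
    head = int(head.strip().split(":")[-1].rstrip("MSP"))
    for rev in range(1, head + 1):
        add_url(urlset, CHANGESET_BASE + str(rev), "yearly")

    ET.ElementTree(urlset).write("sitemap.xml",
                                 encoding="utf-8", xml_declaration=True)

Note that the sitemap protocol caps a single file at 50,000 URLs, so a repository with more files plus revisions would need a sitemap index splitting the entries across several files.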

