On that note, I'm looking for information on RDF crawlers. At least I think I am. :)

I'm one of the Apache XML project (http://xml.apache.org/) developers. In Cocoon 2.0, XML content is translated via XSLT and XSL-FO to HTML, WML, XML, PDF, and a variety of other formats by detecting the client/browser, looking up the appropriate translation file in the sitemap, and spewing the correct document. Everyone sees what they want to see, and the style, logic, and content are all separated.

We'd like to detect RDF crawlers (and other search agents) and provide them with the appropriate content. That way, people still see styled content, but search engines won't get tripped up by stylistic nonsense. The great thing is that once the translation is written, you maintain only the XML source, so the RDF (or other search) version is always as up to date as the publicly browsed HTML (or whatever).

I've looked at a variety of RDF and similar XML-based markups for search engines (Metalog, DAML, etc.), but I haven't found any implementations, or any information on how such implementations would identify themselves as distinct from HTML crawlers. Anyone have ideas?

-Steve

-----Original Message-----
From: Walter Underwood [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 10, 2000 7:55 PM
To: [EMAIL PROTECTED]
Subject: Re: On "Spider Wanted" requests

--On Wednesday, May 10, 2000 8:00 PM +0300 Toivio Tuomas <[EMAIL PROTECTED]> wrote:

> Nick, would it be much bother if you or somebody else sent a weekly
> message (say every Monday) informing new people where to find spiders?
> I remember searchtools.com mentioned; maybe some other good sources
> too. Would save on the total amount of "extra" mail... :)

Looking back at the traffic so far this year, only a couple of the requests would have been satisfied by a periodic FAQ. And weekly is way too often. So I think it would increase the traffic, not decrease it. Probably not worth the trouble.

wunder
--
Walter R. Underwood
Senior Staff Engineer, Infoseek Software
http://software.infoseek.com/
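P.S. The dispatch step described above (inspect the client, pick the matching translation) can be sketched as a simple User-Agent lookup. This is only an illustration: the agent substrings and stylesheet names below are made up, and Cocoon itself expresses this mapping declaratively in its XML sitemap rather than in procedural code.

```python
# Minimal sketch of User-Agent-based stylesheet selection, in the spirit
# of Cocoon's sitemap dispatch. Agent substrings and stylesheet filenames
# are hypothetical placeholders, not real Cocoon configuration.

AGENT_STYLESHEETS = [
    ("Mozilla", "to-html.xsl"),   # ordinary browsers get styled HTML
    ("Nokia",   "to-wml.xsl"),    # WAP handsets get WML
    ("rdf-bot", "to-rdf.xsl"),    # a hypothetical RDF-crawler identifier
]

DEFAULT_STYLESHEET = "to-html.xsl"  # fall back to HTML for unknown agents

def pick_stylesheet(user_agent: str) -> str:
    """Return the stylesheet for the first matching agent substring."""
    for substring, stylesheet in AGENT_STYLESHEETS:
        if substring.lower() in user_agent.lower():
            return stylesheet
    return DEFAULT_STYLESHEET
```

The open question from the message still stands: without a known identifier that RDF crawlers send (the "rdf-bot" entry here is invented), there is nothing reliable to match on.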
I'm one of the Apache XML project (http://xml.apache.org/) developers. In Cocoon 2.0, XML content is translated via XSL:T and XSL:FO to HTML, WML, XML, PDF, and a variety of other formats by detecting the client/browser, looking up the appropriate translation file in the sitemap, and spewing the correct document. Everyone sees what they want to see, and the style, logic, and content are all seperated. We'd like to detect RDF crawlers (and other searches) and providing them with the appropriate content. That way, people can still see styled content, but search engines won't get tripped up with stylistic nonsense. The great thing is once the translation is written, you only maintain the XML source, so the RDF (or other search) version is always as up to date as the publicly browsed HTML (or whatever). I've looked at a variety of RDF and similar XML-based markups for search engines (Metalog, DAML,etc.), but I haven't found any implementations or information on what such implementations would identify themselves as, distinct from HTML crawlers. Anyone have ideas? -Steve -----Original Message----- From: Walter Underwood [mailto:[EMAIL PROTECTED]] Sent: Wednesday, May 10, 2000 7:55 PM To: [EMAIL PROTECTED] Subject: Re: On "Spider Wanted" requests -On Wednesday, May 10, 2000 8:00 PM +0300 Toivio Tuomas <[EMAIL PROTECTED]> wrote: > > Nick, would it be much bother if you or somebody else sent a weekly message > (say every Monday) informing new people where to find spiders? I remember > searchtools.com mentioned; maybe some other good sources too. Would save on > total amount of "extra" mail... :) Looking back at the traffic so far this year, only a couple of the requests would have been satisfied by a periodic FAQ. And weekly is way too often. So, I think it would increase the traffic, not decrease it. Probably not worth the trouble. wunder -- Walter R. Underwood Senior Staff Engineer, Infoseek Software http://software.infoseek.com/