Essentially, TB-L is trying to give machines the ability to recognize the kinds of knowledge that people use intuitively when they search for information or make connections online. So when you say you'd rather have search engines look for content based on what's relevant to the search terms, that's precisely what he's trying to do: add metadata to websites and data so that search engines and other machine-driven tools have all the information necessary to make an informed judgment as to whether a particular search result is actually what the user is looking for.

I think part of the problem with the initiative is that they haven't done a stellar job of providing simple, real-world examples of what the Semantic Web would look like. For example, Creative Commons is probably one of the best-known projects trying to apply the theories of the Semantic Web, but TB-L didn't even mention it; I had to throw it in myself just to give his very theoretical talk some grounding in the real world. (Well, the virtual world, but grounding nonetheless.)
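To make the Creative Commons example concrete: when you pick a CC license, the site offers a snippet of RDF to embed in your page so that software, not just people, can read the licensing terms. A simplified sketch of what that looks like (the work URI and title here are made up for illustration):

```xml
<rdf:RDF xmlns="http://web.resource.org/cc/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!-- describes the licensing of one work in machine-readable form -->
  <Work rdf:about="http://example.org/my-essay">
    <dc:title>My Essay</dc:title>
    <license rdf:resource="http://creativecommons.org/licenses/by/2.0/"/>
  </Work>
</rdf:RDF>
```

A search engine that understood this vocabulary could, say, filter its results down to only those works it may legally reuse.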

I don't think Berners-Lee expects people to do all of this metadata coding themselves, either; creating websites and relational databases is difficult enough as it is. He's working with a variety of groups to create open standards for describing this metadata, so eventually, web software and content management systems will do much of the work.

For now, I'm fascinated by all this, but I'm reserving judgment. I think Sir Tim needs a few more years and a few more concrete, everyday examples of the Semantic Web in action if the idea is going to expand beyond a rather technical exercise... -ac

Larry Phillips wrote:
I'm not sure if I understand the semantic web; but if I do, I don't think I want it.

Technically, the semantic web requires metadata to be added to the URL. In addition to complicating the URL, it presupposes knowing how others will view or use the data. Currently, meta tags embedded in the web page meet the need of identifying and typing content.
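For reference, the meta tags described above look something like this in a page's head (the content values here are, of course, invented for illustration):

```html
<head>
  <title>Bridging the Digital Divide</title>
  <!-- free-text metadata: a search engine can index these strings,
       but nothing formally defines what the terms mean -->
  <meta name="description" content="Resources on community technology access">
  <meta name="keywords" content="digital divide, education, access">
</head>
```

The Semantic Web argument is essentially that these strings have no defined meaning a machine can reason about, which is the gap RDF vocabularies are meant to fill.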

Philosophically, rather than having content labeled with a standard identifier, I would prefer that search engines look for content that is relevant to the search terms. Even assuming accurate labeling, the best we could hope for is a situation similar to searches returning paid results. In other words, we would be dependent on publishers to apply the standard identifiers in an accurate and comprehensive manner. Expecting publishers to look beyond their own purposes is unreasonable and fanciful.

What will a semantic web give us that we don't have now?

Andy Carvin wrote:

Tim Berners-Lee: Weaving A Semantic Web
http://www.edwebproject.org/andy/blog/


From the very beginning of the Web, Berners-Lee had hoped to incorporate descriptive information into the Web’s fundamental design, but for various reasons it didn’t make the cut. “One thing I wanted to put in the original design was the ‘typing’ of links,” he said. For example, let’s say you link your website to another site. At the moment, the hyperlink connecting them contains very little information: just an address to get to the other website’s content. But Berners-Lee’s idea was to include “metadata” with each hyperlink to describe *the relationship* between the two sites. For example: do the people linking their two websites know each other personally, professionally, or not at all? If they’re colleagues, how are they working together, and in what fields? Where are they working?
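The typed relationships Berners-Lee describes are what RDF vocabularies such as FOAF (“Friend of a Friend”) were created to express. A rough sketch, with made-up example.org and example.net addresses, of how one site owner might declare that she knows the owner of a site she links to:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person rdf:about="http://example.org/alice#me">
    <foaf:name>Alice</foaf:name>
    <!-- the link is "typed": not just an address,
         but a stated relationship between two people -->
    <foaf:knows rdf:resource="http://example.net/bob#me"/>
    <foaf:workplaceHomepage rdf:resource="http://example.org/"/>
  </foaf:Person>
</rdf:RDF>
```

Given data like this, software could answer questions such as “which of the sites linked from this page belong to the author’s colleagues?”, which a bare hyperlink cannot support.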



--
--------------------------------------
Andy Carvin
Program Director
EDC Center for Media & Community
acarvin @ edc . org
http://www.digitaldividenetwork.org
http://www.edwebproject.org/andy/blog/
--------------------------------------

_______________________________________________
DIGITALDIVIDE mailing list
[EMAIL PROTECTED]
http://mailman.edc.org/mailman/listinfo/digitaldivide
To unsubscribe, send a message to [EMAIL PROTECTED] with the word UNSUBSCRIBE in the 
body of the message.
