How could we scrap HTML? It defines web pages, and that's what the web
is - just a big bunch of linked-up web pages. Without HTML there would
be no web standards. I've been coding HTML for a decade and find it
pretty straightforward.

On 2/9/06, Stephen Stagg <[EMAIL PROTECTED]> wrote:
> HTML is primarily a text document markup language, a tiny subset of
> the total information types available on the internet, with extra
> bits added on.  Why does all information have to be presented in this
> format?

"HTML is the lingua franca for publishing hypertext on the World Wide
Web", and the Web is the biggest and most visible part of the internet.
HTML isn't the only way information is presented on the internet,
though: there are RSS feeds, and web services that output XML with
standardised and custom schemas (xCBL, ebXML, etc.).

> Create a new Document DTD, a webpage DTD with things like
> Title and meta-tag included and then people who don't adhere to these
> new standards will find that their sites, by default, don't get
> listed in search engines.

You're reinventing the wheel: (X)HTML is already a webpage DTD with a
title tag and meta tags.
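For instance, a minimal XHTML head already covers both (the content
values here are made up for illustration):

```xml
<head>
  <title>Page title goes here</title>
  <meta name="description" content="A short summary of the page" />
  <meta name="keywords" content="html, xhtml, metadata" />
</head>
```

Search engines already read the title and description/keywords meta
elements, so a new DTD wouldn't give them anything they don't have.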

I think the closest thing to what you want is an XML file with an
embedded XSLT that converts it to XHTML.
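As a rough sketch of what I mean (the filenames and element names are
invented for illustration), the XML file points at a stylesheet via an
xml-stylesheet processing instruction:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="page.xsl"?>
<catalogue>
  <item>
    <name>Widget</name>
    <price>9.99</price>
  </item>
</catalogue>
```

and page.xsl turns it into XHTML in the browser:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the document root and emit an XHTML page -->
  <xsl:template match="/catalogue">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Catalogue</title></head>
      <body>
        <!-- One paragraph per item element -->
        <xsl:for-each select="item">
          <p>
            <xsl:value-of select="name"/>:
            <xsl:value-of select="price"/>
          </p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

That way you publish pure data, and the presentation lives in one
stylesheet - which is about as close to "replacing HTML" as you can get
while still rendering in today's browsers.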

Ben Wong