If it won't work, that's cool. But let's not pretend we are choosing an alternative solution. We are choosing Broken. That's fine.


A couple points:
1.) Weblog software doesn't typically make use of the server's logs, so this is not creating work for the sysadmin.
2.) There's no reason the responses couldn't be cached. Hearing from one client is enough (the Pace doesn't specify a request body).


Other points in the email seem plenty valid.

*exits superhighway

Robert Sayre



Mark Baker wrote:

Big +1 to all points.

On Fri, Nov 05, 2004 at 12:18:13AM -0800, Roy T. Fielding wrote:

On Nov 4, 2004, at 12:23 PM, Paul Hoffman / IMC wrote:

At 11:40 AM -0800 11/4/04, Walter Underwood wrote:

I object for a different reason.

It seems to me that it is trying to enable a social pressure on feeds
that don't meet the standard. I think this is new ground for a protocol
standard. Usually, implementation validation is a separate phase, and not
part of normal operation.


I know we are all really tired of busted HTML and XML, but this doesn't
seem like it solves the problem. Getting rid of bad feeds requires
publicising them, not telling the publisher (who might not care).

I see a conflict between your first sentence ("It seems to...") and your last ("Getting rid of...").

I don't -- Walter's comment is spot-on. If we want folks to fix bad
software, then include a UI requirement that the recipient include
or render an indication that the feed is broken. For example, a headline
that says "Received feed contained errors due to SOFTWARE THAT SUCKS
or temporary communication errors." Guess how long it will take for
the real maintainer of that feed, not the poor dudes trying to support
the HTTP server, to fix the problem?


Another method of doing the same thing is to simply test the bad
software and post a "hall of shame" web page that explains the errors.

Sending feedback to the HTTP server is a bad idea.  First, feed errors
are more likely to be caused by bad client software that mishandles
its own reads.  If Microsoft's Web Folders sent an error every
time it received a WebDAV message that it considers to be an "error"
(i.e., every message it received from a standards-compliant server),
WebDAV would never have made it to proposed standard.

Should we encourage Atom 0.3 clients to send an error message whenever
they receive a Atom 1.0 feed?  How about when 1.0 clients receive a 1.1
feed?  What on earth makes people think that client software is capable
of differentiating between broken feeds and features that are simply
unknown to that client?

Second, the people who read error logs are not the people who care
why the feed is broken.  People who manage web servers have a tendency
to delete software that results in large error logs and then forbid
reinstallation of any *similar* software in the future.  Their job
is to maintain business critical functions, not play hand-maiden to
a bunch of bloggers.  Automated error reporting is a great way to
discourage companies from deploying Atom.

Third, it is a waste of bits.  If a feed is broken, it might be useful
for one client to report that fact to the owner of the feed (not the
web server).  It is never appropriate for all clients to report the
error, since clients frequently outnumber servers by 10,000:1 and
sometimes millions to one.  When I receive 10,000 lines in an error log,
I don't evaluate the range of potential options for other software -- I
simply turn the reporting option off and/or delete the software that
caused it.

Finally, if feeds are supplied in a reasonably efficient architecture
that can take advantage of hierarchy and proxies, then the actual
source of feed errors may have no relation to the feed source.
If a company like AOL installs a broken proxy that corrupts the data
stream in a minor way, do we think it is a good idea for the
50 million clients behind those proxies to suddenly start sending
feed error requests to every origin's report URI?  And, no, this
can't be solved by requiring that the reporting URI be a hop-by-hop
header field, since those proxies aren't HTTP/1.1 compliant anyway.

In short, there are many reasons that HTTP does not already contain
such an error reporting feature, and none of them are due to not
considering it as an option.  If the road to hell is paved with good
intentions, then automated error reporting is surely a superhighway.

If we want to ensure that feeds work, then insist that the client
software display error indicators around broken feed data, thereby
providing both sufficient information to the feed owner and a
warning to the client's owner that they might not be reading the
true source.  That is sufficient to cause people who care to report
the problem, assuming the author doesn't notice it immediately,
and does not cause the Internet harm in the process.

Given that, deciding which method to use is irrelevant.  Furthermore,
use of "X-..." as a prefix for a header field name in a specification
is contrary to the meaning of those names in MIME, and therefore
forbidden in a standards-track proposal.  In any case, HTTP doesn't
use that lame method of indicating private header fields because of
the many times that damn fools have released public software that uses
X-prefixes, even though the only reason X-prefixes are supposed
to be safe from collisions is because they are not allowed to be
standardized!  Go figure.


Cheers,

Roy T. Fielding                            <http://roy.gbiv.com/>
Chief Scientist, Day Software              <http://www.day.com/>






Reply via email to