On Feb 6, 2005, at 6:42 PM, Bob Wyman wrote:
> Roy T. Fielding wrote:
>> Aggregators do not consume feed resources -- they consume an
>> iterative set of overlapping feed representations.
> This is only true of pull-based aggregators that poll for feeds.
> None of the aggregators that I use are polling-based. I use the PubSub
> Sidebars and the Gush aggregator built by 2entwine.com. These aggregators
> consume streams of entries that are pushed to them using the "Atom over
> HTTP" protocol.

No, they consume feed representations of length=1, each of which contains an entry representation. They are neither streams nor entries, and if we stop confusing the messages received with the rather abstract notion of what the author considers to be their entry, it becomes much easier to understand what the id tells the recipient.
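To make that concrete, here is a minimal sketch (not from the original message; the entry ids, timestamps, and titles are hypothetical) of an aggregator reconciling overlapping feed representations by atom:id rather than treating each received message as a distinct entry:

```python
# Two successive feed representations received by a polling aggregator.
# They overlap: entry "bbb" appears in both, updated the second time.
# All ids, timestamps, and titles are made up for illustration.
poll_1 = [
    ("urn:uuid:aaa", "2005-02-06T18:00:00Z", "First post"),
    ("urn:uuid:bbb", "2005-02-06T18:30:00Z", "Second post"),
]
poll_2 = [
    ("urn:uuid:bbb", "2005-02-06T19:00:00Z", "Second post (revised)"),
    ("urn:uuid:ccc", "2005-02-06T19:15:00Z", "Third post"),
]

# The atom:id, not the message that carried it, names the entry, so the
# aggregator keys its store by id and keeps the most recent update.
entries = {}
for entry_id, updated, title in poll_1 + poll_2:
    if entry_id not in entries or updated > entries[entry_id][0]:
        entries[entry_id] = (updated, title)

for entry_id, (updated, title) in sorted(entries.items()):
    print(entry_id, updated, title)
```

The same reconciliation applies to a pushed representation of length=1: it is just one more overlapping representation folded into the store.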

This is not specific to the transfer protocol.  It is an aspect
of message passing architectures.  Most network-based systems
build tremendously complex synchronization and update mechanisms
in an attempt to make message passing match what an ideal
world would consider reality.  Unfortunately for them, the theory
of relativity is just as applicable to software as it is to us.

HTTP (or at least the use of HTTP based on REST) changes the
perspective by acknowledging that messages are not the same
as what is identified.  It seems a little odd, at first,
but it makes a huge difference because clients stop assuming
that they have a complete understanding, servers supply more
explicit information about what they do send, and the overall
system becomes less brittle during failures. However, HTTP
only tries to match what is already true of message passing --
it does not make the rules.

Regardless of the protocol used to receive an Atom feed, the
only things that will actually be received are representations.
Computers can't transfer a temporal mapping that has yet to
be defined.

....Roy


