* Roy T. Fielding <[EMAIL PROTECTED]> [2005-02-07 04:13-0800]
> On Feb 6, 2005, at 6:42 PM, Bob Wyman wrote:
> > Roy T. Fielding wrote:
> > > Aggregators do not consume feed resources -- they consume an
> > > iterative set of overlapping feed representations.
> > This is only true of pull based aggregators that poll for feeds.
> > None of the aggregators that I use are polling based. I use the
> > PubSub Sidebars and the Gush aggregator built by 2entwine.com.
> > These aggregators consume streams of entries that are pushed to
> > them using the "Atom over HTTP" protocol.
>
> No, they consume feed representations of length=1, which contains
> an entry representation. They are neither streams nor entries, and
> if we stop confusing the messages received with the rather abstract
> notion of what the author considers to be their entry, then it is
> much easier to understand what the id tells the recipient.

+1; we're exchanging representations (or maybe 'descriptions'), not
the things themselves. This becomes a lot clearer when we think about
feed markup that describes people, places, products etc. inside the
entry/item descriptions.
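As a concrete illustration of the "feed representation of length=1" point: the message a push-style aggregator receives is still a complete feed document that happens to contain exactly one entry, not a bare entry and not a stream. The following sketch (using the later Atom 1.0 namespace and invented ids for illustration) parses such a message and pulls out the entry's id.

```python
# Sketch only: the feed content, ids, and timestamps below are invented.
# The point: the root of the received message is a <feed>, even when it
# carries a single <entry>.
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"  # Atom 1.0 namespace

message = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <id>urn:example:feed:1</id>
  <updated>2005-02-07T12:00:00Z</updated>
  <entry>
    <title>An example entry</title>
    <id>urn:example:entry:42</id>
    <updated>2005-02-07T12:00:00Z</updated>
  </entry>
</feed>
"""

root = ET.fromstring(message)
# The message is a feed representation, not a bare entry.
assert root.tag == f"{{{ATOM_NS}}}feed"

entries = root.findall(f"{{{ATOM_NS}}}entry")
print(len(entries))  # -> 1

# The id names the author's abstract entry; the XML in hand is merely
# one representation of it at one point in time.
entry_id = entries[0].findtext(f"{{{ATOM_NS}}}id")
print(entry_id)  # -> urn:example:entry:42
```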
> This is not specific to the transfer protocol. It is an aspect
> of message passing architectures. Most network-based systems
> build tremendously complex synchronization and update mechanisms
> in an attempt to make message passing match what an ideal
> world would consider reality. Unfortunately for them, the theory
> of relativity is just as applicable to software as it is to us.
>
> HTTP (or at least the use of HTTP based on REST) changes the
> perspective by acknowledging that messages are not the same
> as what is identified. It seems a little odd, at first,
> but it makes a huge difference because clients stop assuming
> that they have a complete understanding, servers supply more
> explicit information about what they do send, and the overall
> system becomes less brittle during failures. However, HTTP
> only tries to match what is already true of message passing --
> it does not make the rules.
>
> Regardless of the protocol used to receive an atom feed, the
> only things that will actually be received are representations.
> Computers can't transfer a temporal mapping that has yet to
> be defined.

This thread is getting to the core of the different styles of thinking
about RSS and Atom feeds. It is pretty widespread for people to think
about RSS/Atom feeds in non-RESTy terms, i.e. that the chunks of markup
are 'the things themselves, being transported around the net'. From the
RDF side of things, it is more natural to take the other view and see
feeds as documents that describe things (such as themselves,
items/entries, and the -- often real-world -- things those entries are
about...). I am glad to see this discussion happening,

Dan
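The practical upshot of "recipients only ever see representations" can be sketched with a toy polling aggregator: across successive, overlapping feed representations, the entry id is what lets the recipient recognise two received chunks of markup as representations of the same abstract entry. All ids and timestamps below are invented for illustration.

```python
# Two successive feed representations, reduced to (entry id, updated)
# pairs. They overlap: urn:e:2 appears in both, updated the second time.
first_poll = [
    ("urn:e:1", "2005-02-06T10:00:00Z"),
    ("urn:e:2", "2005-02-06T11:00:00Z"),
]
second_poll = [
    ("urn:e:2", "2005-02-06T12:00:00Z"),  # newer representation of urn:e:2
    ("urn:e:3", "2005-02-06T12:30:00Z"),  # entry not seen before
]

# Reconcile by id, keeping the newest representation of each abstract
# entry. ISO 8601 UTC timestamps compare correctly as strings.
seen = {}
for entry_id, updated in first_poll + second_poll:
    if entry_id not in seen or updated > seen[entry_id]:
        seen[entry_id] = updated

print(sorted(seen))     # -> ['urn:e:1', 'urn:e:2', 'urn:e:3']
print(seen["urn:e:2"])  # -> 2005-02-06T12:00:00Z
```

Without stable ids, the aggregator could only compare raw markup, and every overlapping representation would look like a new entry.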
