Mark Nottingham wrote:
Out of curiosity, do you know of *any* client applications that actually support feed paging?

I think most of the use cases for paging have to do with things like GData, OpenSearch, etc -- i.e., query results. That sort of thing isn't targeted at desktop aggregators AFAICT; it seems to be more for machine->machine communication, or for browsing a result set.

Actually, when I said "feed paging" I meant it as a general term, i.e. including feed history.

As for machine->machine communication, if these feeds aren't meant for desktop aggregators, then does it really matter that they function differently? You can describe one algorithm for use in machine->machine communication and another for use by desktop aggregators downloading "regular" feeds. Both can use the same link relations because they should never come into contact with each other. Having said that, I still don't see how a machine->machine algorithm for retrieving a paged feed can differ from your current feed history algorithm and still be useful.

Let's say I'm a search engine returning paged results. A search is performed that returns 200 results, so I return 20 pages, 10 results per page. The first time around, a client supporting the feed history algorithm would retrieve all 20 pages, no problem. So far I see no difference between how a desktop aggregator would behave and how machine->machine communication would function.
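That first full retrieval can be sketched like this -- a client following rel="next" links from page to page until there are none left. The URLs and the fetch callback here are purely hypothetical stand-ins for an HTTP GET plus feed parsing, not any real API:

```python
# Sketch of a client walking a paged feed by following rel="next"
# links. fetch() stands in for an HTTP GET that returns a parsed
# page: its entry IDs plus the URL of the next page (if any).

def collect_all_pages(fetch, start_url):
    """Retrieve every page of a paged feed, first to last."""
    entries = []
    url = start_url
    while url is not None:
        page = fetch(url)
        entries.extend(page["entries"])
        url = page.get("next")  # absent on the final page
    return entries

# 20 hypothetical pages of 10 results each (200 results total).
PAGES = {
    f"/search?page={n}": {
        "entries": [f"result-{n * 10 + i}" for i in range(10)],
        "next": f"/search?page={n + 1}" if n < 19 else None,
    }
    for n in range(20)
}

all_results = collect_all_pages(PAGES.__getitem__, "/search?page=0")
print(len(all_results))  # 200
```

Whether the client is a desktop aggregator or another machine, this first pass is identical: follow the links until they run out.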

The second time the client connects (assuming there is a second time), it sends an ETag and/or Last-Modified date so the search engine knows which results it already has. Say there are 3 new results since the previous retrieval. Either the search engine is smart enough to return just those 3 results, or it's going to ignore the ETag and return everything: 21 pages, 10 results per page, with the new items potentially anywhere.
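The "smart" case is ordinary HTTP conditional retrieval: the client replays the ETag it saved, and a server that honours it can answer 304 Not Modified (or serve only what changed) instead of re-sending all 21 pages. A minimal sketch of the server-side decision, with illustrative names rather than any real framework:

```python
# Sketch of a conditional GET: the client presents the ETag it saved
# from its previous fetch (as an If-None-Match header would), and the
# server compares it against the feed's current ETag.

def conditional_get(server_etag, server_body, client_etag):
    """Return (status, body) as an ETag-aware server would."""
    if client_etag == server_etag:
        return 304, None         # nothing changed since last fetch
    return 200, server_body      # feed changed; send current content

# First fetch: the client has no saved ETag yet.
status, body = conditional_get('"v41"', ["entry-1", "entry-2"], None)
print(status)  # 200

# Later fetch with the saved ETag and an unchanged feed.
status, body = conditional_get('"v41"', ["entry-1", "entry-2"], '"v41"')
print(status)  # 304
```

The dumb case -- ignoring the ETag -- is exactly what forces the client back into re-downloading every page.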

As a desktop aggregator, I guarantee you I'm not going to want to download 20+ pages every hour just to find the 3 new items that *might* be there. Fortunately, the feed history algorithm would stop me after the first page, and I'm thankful for that. Would machine->machine communication be any different? Would they really want to download every single one of those 203 results just to find the 3 new items?
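The "stop after the first page" behaviour I'm describing is the feed history rule of cutting the traversal off at the first thing you've already seen, so 3 new items cost roughly one page fetch instead of 21. A sketch, again with hypothetical URLs and a fetch stand-in:

```python
# Sketch of the "stop when you encounter something you've already
# retrieved" rule: walk the pages newest-first, but return as soon as
# an already-known entry appears, since everything older is stored.

def fetch_new_entries(fetch, start_url, seen_ids):
    """Collect entries until one we already have appears."""
    new_entries = []
    url = start_url
    while url is not None:
        page = fetch(url)
        for entry_id in page["entries"]:
            if entry_id in seen_ids:
                return new_entries  # rest of the feed is old news
            new_entries.append(entry_id)
        url = page.get("next")
    return new_entries

# Hypothetical feed: 3 new items on page 0, then entries seen before.
PAGES = {
    "/feed?page=0": {"entries": ["new-3", "new-2", "new-1", "old-200"],
                     "next": "/feed?page=1"},
    "/feed?page=1": {"entries": ["old-199", "old-198"], "next": None},
}
seen = {f"old-{n}" for n in range(1, 201)}
print(fetch_new_entries(PAGES.__getitem__, "/feed?page=0", seen))
# ['new-3', 'new-2', 'new-1']
```

A client that genuinely wants every page can simply pass an empty `seen_ids` set -- which is why I don't think the full-retrieval case needs its own link relation.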

If that's what they really want to do, they can always ignore the "stop when you encounter a link you've already retrieved" part of the feed history algorithm. We don't need a special link relation for that. If anything, I would think Thomas' suggestion of some kind of flag should be enough. I just can't see anyone wanting to do something like that, though, whether desktop aggregator or machine->machine communication.

Regards
James
