Re: Tools that make use of previous/next/first/last links?
Mark,

While I'm sure the other James may have his own particular set of issues,
the one pain point for me with the history spec is the use of the
"previous" link to point back in time. This runs counter to the use of the
previous link in OpenSearch, APP, and GData. It would be excellent if,
rather than using previous, feed history treated the set of feeds as a
list of pages ordered in reverse chronological order, where "next" pointed
to the next oldest set of entries, e.g.:

current=/feed.xml
first=/2006/04/feed.xml
next=/2006/03/feed.xml
next=/2006/02/feed.xml
next=/2006/01/feed.xml
...

- James

Mark Nottingham wrote:
>
> Did you find that algorithm wrong, too hard to understand/implement, or
> did you just do a different take on it? Does the approach that you took
> end up having the same result?
>
> Any suggestions on how to better document it appreciated.
>
> Cheers,
>
>
> On 2006/04/26, at 8:35 PM, James Holderness wrote:
>
>> We added support for next/prev/previous links in version 0.3.0 of
>> Snarfer [1]. We don't use the reconstruction algorithm suggested in
>> the Feed History draft, but your example feed seems to work ok for an
>> initial retrieval. There may be problems with subsequent updates,
>> though, depending on how you handle items falling out the bottom of
>> the main feed.
>>
>> Regards
>> James
>>
>> [1] http://www.snarfware.com/
>>
>> John Panzer wrote:
>>> We just deployed support for @rel="previous" et al. for
>>> AOL Journals. If anyone has a client that makes use of these
>>> links, please let me know, I'd love to see if there are any
>>> interoperability problems.
>
> --
> Mark Nottingham     http://www.mnot.net/
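The page ordering James proposes above can be sketched in a few lines of Python. This is purely illustrative: the in-memory `feeds` table stands in for real HTTP fetches and feed parsing, and the names are made up, but it shows the intended traversal -- start at the current document and follow "next" links toward progressively older pages.

```python
# Hypothetical archive set from James's example: pages ordered newest to
# oldest, each "next" link pointing at the next *older* page (None at end).
feeds = {
    "/feed.xml":         {"entries": ["e5", "e4"], "next": "/2006/03/feed.xml"},
    "/2006/03/feed.xml": {"entries": ["e3"], "next": "/2006/02/feed.xml"},
    "/2006/02/feed.xml": {"entries": ["e2"], "next": "/2006/01/feed.xml"},
    "/2006/01/feed.xml": {"entries": ["e1"], "next": None},
}

def walk_archives(start):
    """Yield (url, entries) pages newest-first by following rel="next"."""
    url, seen = start, set()
    while url is not None and url not in seen:  # guard against link cycles
        seen.add(url)
        page = feeds[url]
        yield url, page["entries"]
        url = page["next"]
```

Under this scheme "next" keeps the same direction it has in OpenSearch/APP-style paging, which is the whole point of the suggestion.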
Re: Tools that make use of previous/next/first/last links?
Mark Nottingham wrote:
> Also, if a client doesn't visit for a long time, it will see
> http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10
> and assume it already has all of the entries in it, because it's
> fetched that URI before.

Yeah. That's what I was worried about too. The couple of test feeds that
I've subscribed to haven't had any new entries yet, so I can't be sure,
but with URLs like that I don't see how it can possibly work.

> Did you find that algorithm wrong, too hard to understand/implement, or
> did you just do a different take on it? Does the approach that you took
> end up having the same result?

The problem I had with the algorithm was that it required two passes: a
first pass to gather all the links, starting with the current feed
document and moving back in time through the archives, and a second pass
to actually process the documents, starting with the oldest and moving
forwards in time. Either this required retrieving everything twice, or
caching every document retrieved. Neither of those options sounded
particularly appealing to me.

My implementation does everything in one pass. I start by processing the
current feed document. If it contains a history link which I haven't seen
before, I'll retrieve and process that document next. Repeat until there
are no more links or I encounter a link that I've seen before. There are
subtle differences in the results that you would get from my algorithm,
and technically what you're suggesting is more accurate, but I don't
think the differences are significant enough to care about.

Other than that, I skip steps 1 and 2, and I default to using the "next"
link relation (with a fallback to "previous" and "prev"). I may consider
adding support for fh:complete at some point, but for now I'm sticking
with Microsoft's cf:treatAs.

Regards
James
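The one-pass approach James describes can be sketched as follows. This is a hedged reconstruction from his description, not Snarfer's actual code: `fetch()` and the `"prev"` field are stand-ins for real feed retrieval and link extraction, and `seen` is the set of URIs persisted from earlier runs.

```python
# Toy archive chain; "prev" stands in for whatever history link relation
# the feed uses (next/previous/prev, per the fallback order described above).
docs = {
    "/feed.xml":   {"entries": ["e5", "e4"], "prev": "/arch/2.xml"},
    "/arch/2.xml": {"entries": ["e3", "e2"], "prev": "/arch/1.xml"},
    "/arch/1.xml": {"entries": ["e1"], "prev": None},
}

def fetch(url):
    # Stand-in for retrieving and parsing a feed document over HTTP.
    return docs[url]

def one_pass(start, seen):
    """Process the current feed document, then follow history links until
    there are none left or we hit a document processed on an earlier run."""
    new_entries, url = [], start
    while url is not None:
        doc = fetch(url)
        new_entries.extend(doc["entries"])  # process immediately: one pass
        seen.add(url)
        nxt = doc["prev"]
        if nxt is None or nxt in seen:  # stop at a previously seen link
            break
        url = nxt
    return new_entries
```

On a first run with an empty `seen` set this walks the whole chain; on a later run it stops as soon as it reaches an archive it already processed, which is what avoids the double retrieval (or full caching) that the draft's two-pass reconstruction would require.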
Re: Tools that make use of previous/next/first/last links?
I ran it through my demo implementation for Feed History:

  http://www.mnot.net/rss/history/feed_history.py

and it worked fine (after I fixed a bug -- thanks!). To use it, just
download the .py and run it on the command line like this:

  ./feed_history.py [filename] [url]

where filename is the name of a local file it can store state in (if you
run it again in the future, it won't fetch what it's already seen) and
url is the feed.

One thing I did notice -- you're using URLs like this for your archives:

  http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10

Are they really permanent? If they're relative to the current state of
the feed (i.e., the above URI means "give me the ten latest entries"),
you can get into some inconsistent states; e.g., if somebody adds/deletes
an entry between when the client fetches the different archives.

Also, if a client doesn't visit for a long time, it will see
http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10
and assume it already has all of the entries in it, because it's fetched
that URI before.

On 2006/04/26, at 6:36 PM, John Panzer wrote:
> We just deployed support for @rel="previous" et al. for AOL Journals.
> If anyone has a client that makes use of these links, please let me
> know, I'd love to see if there are any interoperability problems.
>
> Example Atom feed:
> http://journals.aol.com/panzerjohn/abstractioneer/atom.xml
>
> Thanks,
>
> --
> John Panzer
> System Architect
> http://abstractioneer.org

--
Mark Nottingham     http://www.mnot.net/
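The inconsistency Mark describes is easy to demonstrate. The sketch below uses entirely made-up numbers, but it shows why a `?page=N&count=M` URI that is computed relative to the head of the feed is unsafe to treat as permanent: the same URI names different entries once new entries are published, so a client's "I've already fetched that URI" check silently skips content.

```python
def page_contents(all_entries, page, count):
    """Entries for a hypothetical '?page=N&count=M' URI when paging is
    computed relative to the current head of the feed (newest first)."""
    start = (page - 1) * count
    return all_entries[start:start + count]

# Monday: the feed holds entries 100 down to 81, newest first.
entries_monday = list(range(100, 80, -1))
page2_monday = page_contents(entries_monday, 2, 10)

# Friday: five new entries (105..101) have been published since.
entries_friday = list(range(105, 80, -1))
page2_friday = page_contents(entries_friday, 2, 10)

# Same URI (?page=2&count=10), different contents: a client that skips the
# URI because it fetched it on Monday never sees entries 95..91.
```

Date-based archive URIs like the `/2006/04/feed.xml` style proposed earlier in this thread avoid the problem, because the mapping from URI to entries never shifts.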
Re: Tools that make use of previous/next/first/last links?
Did you find that algorithm wrong, too hard to understand/implement, or
did you just do a different take on it? Does the approach that you took
end up having the same result?

Any suggestions on how to better document it appreciated.

Cheers,


On 2006/04/26, at 8:35 PM, James Holderness wrote:
> We added support for next/prev/previous links in version 0.3.0 of
> Snarfer [1]. We don't use the reconstruction algorithm suggested in
> the Feed History draft, but your example feed seems to work ok for an
> initial retrieval. There may be problems with subsequent updates,
> though, depending on how you handle items falling out the bottom of
> the main feed.
>
> Regards
> James
>
> [1] http://www.snarfware.com/
>
> John Panzer wrote:
>> We just deployed support for @rel="previous" et al. for AOL Journals.
>> If anyone has a client that makes use of these links, please let me
>> know, I'd love to see if there are any interoperability problems.

--
Mark Nottingham     http://www.mnot.net/
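For reference, the draft's reconstruction algorithm, as James characterizes it elsewhere in this thread, is a two-pass process: first walk the history links from the current document back in time collecting the chain, then process the documents oldest-first. The sketch below is a hedged illustration under assumed names (`archive`, `get_doc`, and the `"prev"` field are stand-ins), not the draft's normative text.

```python
# Toy archive chain, newest document first.
archive = {
    "/feed.xml":   {"entries": ["e5", "e4"], "prev": "/arch/2.xml"},
    "/arch/2.xml": {"entries": ["e3", "e2"], "prev": "/arch/1.xml"},
    "/arch/1.xml": {"entries": ["e1"], "prev": None},
}

def get_doc(url):
    # Stand-in for an HTTP fetch of a feed document.
    return archive[url]

def two_pass(start):
    # Pass 1: follow history links back in time, collecting every document.
    # This is the step that forces a client to either re-fetch everything
    # or cache each document for the second pass.
    chain, url = [], start
    while url is not None:
        doc = get_doc(url)
        chain.append(doc)
        url = doc["prev"]
    # Pass 2: replay entries oldest-first to reconstruct the feed's state,
    # so that entries from newer documents are applied last.
    entries = []
    for doc in reversed(chain):
        entries.extend(doc["entries"])
    return entries
```

The cost James objected to is visible here: nothing can be processed until pass 1 has touched every archive document.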