Antone Roundy wrote:
> Getting back to how to use static documents for a chain of instances,
> that could easily be done as follows. The following assumes that the
> current feed document and the archive documents will each contain 15
> entries.
>
> The first 15 instances of the feed document do not contain a "prev"
> link (assuming one entry is added each time).
>
> When the 16th entry is added, a static document is created containing
> the first 15 entries, and a "prev" link pointing to it is added to the
> current feed document. This link remains unchanged until the 31st entry
> is added.
>
> When the 31st entry is added, another static document is created
> containing the 16th through 30th entries. It has a prev link pointing
> to the first static document. The current feed document's prev link is
> updated to point to the second static document, and it continues to
> point to the second static document until the 46th entry is added.
>
> When the 46th entry is added, a third static document is created
> containing the 31st through 45th entries, etc.

However, there should then be a "this" link in the "live" feed, otherwise
I (as a reader/aggregator) will have to retrieve the "prev" feed every 15
entries:

Say I retrieved the feed when it was 15 entries long. When the 16th entry
is added and the first static document created, a "prev" link is added to
the "live" feed, pointing to a document I never retrieved, so I guess I
might have missed entries and retrieve it. I end up retrieving the 15
entries I already know of all over again.
When the 31st entry is added, the feed's "prev" link is changed to
reference the new 16th-to-30th archive feed. This is a URI I never
dereferenced, so I guess I might have missed some entries; I then
dereference the URI and retrieve the archive feed. If I had retrieved the
feed when it was 30 entries long, I end up retrieving the 16th to 30th
entries I already know of all over again.

One could argue that I don't need to retrieve the archive feed, as the
live feed already contains 14 entries (2nd to 15th, or 17th to 30th) I
already retrieved, using atom:updated and atom:id to recognize them.
Well, nothing precludes an entry from being "pushed to the front" even if
its atom:updated hasn't changed, so the entry following such a "pushed to
front" entry could be one I never saw and might have missed (see the
fragment after this paragraph).
And anyway, this doesn't change the underlying problem, which would still
arise if I retrieved the "live" feed when, say, it was 15 entries long and
then again 15 entries later: I never saw the "prev"-linked archive feed,
nor any of the 15 entries now in the "live" feed (so I can't conclude
anything based on atom:id + atom:updated). I then retrieve the
"prev"-linked archive feed and end up retrieving 15 entries I already know
of, because it happens that I actually didn't miss any entry between my
two "live" feed retrievals...
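
For instance, a "pushed to front" entry would make the "live" feed look
something like the fragment below (ids and dates are invented): the top
entry looks familiar, but that says nothing about whether I ever saw the
entries behind it.

<feed xmlns="...">
  <link rel="prev" href="http://example.com/2005/05/" />
  <entry>
    <!-- an old entry pushed back to the front: same atom:id,
         atom:updated unchanged, so it looks like one I already saw -->
    <id>urn:example:entry:7</id>
    <updated>2005-05-10T12:00:00Z</updated>
    ...
  </entry>
  <entry>
    <!-- the entry following it may well be one I never saw -->
    <id>urn:example:entry:31</id>
    <updated>2005-06-02T08:00:00Z</updated>
    ...
  </entry>
  ...
</feed>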

So we need a means to either identify the *next* "prev" link (a "this" or
"permalink" link in the "live" feed (no need for one in "archive" feeds,
as already said on the list), which means it must be predictable), or
something that tells us we didn't miss entries, such as the atom:updated
of the "prev"-linked archive feed (is atom:updated enough?).

We'll end up with the "live" feed being either:
<feed xmlns="..." xmlns:fs="...">
  <link rel="archive" href="http://example.com/2005/05/"; />
  <!-- I didn't use a link construct since the document doesn't exist yet -->
  <fs:predicted-archive-uri>
        http://example.com/2005/06/
  </fs:predicted-archive-uri>
  ...
</feed>

or

<feed xmlns="..." xmlns:fs="...">
  <!-- I used an "extension attribute", even though it's not clearly
       defined by the Atom Syndication Format -->
  <link rel="prev" href="http://example.com/2005/05/";
        fs:updated="2005-05-31T23:59:59Z" />
  ...
</feed>
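
The archive feed at that URI would then carry the same timestamp as its
own atom:updated, roughly as in the fragment below (the earlier archive
URI is invented). The idea would be that if my last successful retrieval
of the "live" feed is more recent than that timestamp, I can skip fetching
the archive altogether; whether atom:updated alone is a safe enough signal
is exactly the open question above.

<feed xmlns="...">
  <!-- matches the fs:updated value advertised in the "live" feed -->
  <updated>2005-05-31T23:59:59Z</updated>
  <!-- a "prev" link to the preceding archive, if there is one -->
  <link rel="prev" href="http://example.com/2005/04/" />
  ...
</feed>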

One advantage of the latter is that you don't rely on URIs as identifiers
for the archive feed documents, so they can be moved, split or merged
without readers and aggregators being implicitly told to retrieve the
whole archives all over again (if you change URIs, they'll think they
missed entries...).
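
For example, if the May archive were later moved to a new location (the
host and path below are invented), only the href would change; the
unchanged fs:updated would presumably tell readers there is nothing new
behind it:

  <link rel="prev" href="http://archives.example.com/atom/2005-05"
        fs:updated="2005-05-31T23:59:59Z" />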

-- 
Thomas Broyer

