Re: Atom Entry Documents

2006-12-11 Thread Mark Nottingham


What would the relationship of that document be to RFC4287?

Cheers,


On 2006/12/11, at 7:32 PM, James M Snell wrote:


The I-D would be an individual draft, not a WG draft.



--
Mark Nottingham http://www.mnot.net/



Re: feed id's and paged/archive feeds

2006-11-27 Thread Mark Nottingham


Well, since you ask a leading question...

I have a demo implementation of a client at:
 http://www.mnot.net/rss/history/
and my blog does the server side:
 http://www.mnot.net/blog/index.atom

James Holderness has mentioned an implementation, and the Apache  
Abdera people seem to be planning something, based on their  
repository. I know of other folks who are planning to integrate into  
products and services, but I can't disclose anything more (I'd  
encourage them to, of course). Anybody else?


The issue we're discussing here, though, isn't about ambiguities in  
*this* spec, but rather in Atom itself; i.e., what does a feed ID  
really identify?


Cheers,



On 2006/11/27, at 11:06 AM, Ernest Prabhakar wrote:


Hi Mark,

Given all the ambiguities, are there any implementations available  
to test against in practice?  Or even implementors planning to make  
the attempt?


- Ernie P.

On Nov 26, 2006, at 1:25 PM, Mark Nottingham wrote:



Sorry, this got lost in my inbox...

I think they do, although the draft is silent on it. This is one  
of those areas where it would have been really nice if the WG had  
agreed to take on FH as part of the core, rather than extension;  
there are lots of little ambiguities like this as a result.


Cheers,


On 2006/11/03, at 1:37 PM, James M Snell wrote:


Mark,

I cannot recall if I've asked you this in the past but... if I  
have a

set of paged/archive feed documents all of which make up a single
logical feed, do the atom:id's for each feed document have to be the
same?

 If not, how do I determine the atom:id of the logical feed?

- James



--
Mark Nottingham http://www.mnot.net/






--
Mark Nottingham http://www.mnot.net/



Re: feed id's and paged/archive feeds

2006-11-27 Thread Mark Nottingham


Also, the MediaRSS module references it as a best practice.

When I started working on it, there was interest from server-side  
folks as well (e.g., Six Apart); AFAIK they're just waiting for it to  
be finalised (it's taken a while).


Cheers,



On 2006/11/27, at 11:18 AM, Mark Nottingham wrote:



Well, since you ask a leading question...

I have a demo implementation of a client at:
 http://www.mnot.net/rss/history/
and my blog does the server side:
 http://www.mnot.net/blog/index.atom

James Holderness has mentioned an implementation, and the Apache  
Abdera people seem to be planning something, based on their  
repository. I know of other folks who are planning to integrate  
into products and services, but I can't disclose anything more (I'd  
encourage them to, of course). Anybody else?


The issue we're discussing here, though, isn't about ambiguities in  
*this* spec, but rather in Atom itself; i.e., what does a feed ID  
really identify?


Cheers,



On 2006/11/27, at 11:06 AM, Ernest Prabhakar wrote:


Hi Mark,

Given all the ambiguities, are there any implementations available  
to test against in practice?  Or even implementors planning to  
make the attempt?


- Ernie P.

On Nov 26, 2006, at 1:25 PM, Mark Nottingham wrote:



Sorry, this got lost in my inbox...

I think they do, although the draft is silent on it. This is one  
of those areas where it would have been really nice if the WG had  
agreed to take on FH as part of the core, rather than extension;  
there are lots of little ambiguities like this as a result.


Cheers,


On 2006/11/03, at 1:37 PM, James M Snell wrote:


Mark,

I cannot recall if I've asked you this in the past but... if I  
have a

set of paged/archive feed documents all of which make up a single
logical feed, do the atom:id's for each feed document have to be
the same?

 If not, how do I determine the atom:id of the logical feed?

- James



--
Mark Nottingham http://www.mnot.net/






--
Mark Nottingham http://www.mnot.net/




--
Mark Nottingham http://www.mnot.net/



Re: feed id's and paged/archive feeds

2006-11-26 Thread Mark Nottingham


Sorry, this got lost in my inbox...

I think they do, although the draft is silent on it. This is one of  
those areas where it would have been really nice if the WG had agreed  
to take on FH as part of the core, rather than extension; there are  
lots of little ambiguities like this as a result.


Cheers,


On 2006/11/03, at 1:37 PM, James M Snell wrote:


Mark,

I cannot recall if I've asked you this in the past but... if I have a
set of paged/archive feed documents all of which make up a single
logical feed, do the atom:id's for each feed document have to be the
same?

 If not, how do I determine the atom:id of the logical feed?

- James



--
Mark Nottingham http://www.mnot.net/



Fwd: I-D ACTION:draft-nottingham-atompub-feed-history-08.txt

2006-11-26 Thread Mark Nottingham


Based on feedback on-list and off, this draft:

1) Explicitly stated that the semantics of feeds with more than one
type aren't defined by this specification
2) Added language about duplicate entries in archived feeds,  
effectively moving the algorithm in the appendix into prose in that  
section
3) Lower-cased the SHOULD requirement that as many relations as  
possible should be present

4) Strengthened statements about paged feeds being lossy
5) Moved RSS2 examples to appendix to make them more obviously non-normative


Diff at:
 http://www.mnot.net/drafts/draft-nottingham-atompub-feed-history-08-from-7.diff.html




Begin forwarded message:


From: [EMAIL PROTECTED]
Date: 26 November 2006 12:50:01 PM
To: i-d-announce@ietf.org
Subject: I-D ACTION:draft-nottingham-atompub-feed-history-08.txt
Reply-To: [EMAIL PROTECTED]

A New Internet-Draft is available from the on-line Internet-Drafts
directories.


Title   : Feed Paging and Archiving
Author(s)   : M. Nottingham
Filename: draft-nottingham-atompub-feed-history-08.txt
Pages   : 17
Date: 2006-11-26

This specification defines three types of syndicated Web feeds that
   enable publication of entries across one or more feed documents.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-08.txt


To remove yourself from the I-D Announcement list, send a message to
[EMAIL PROTECTED] with the word unsubscribe in the body of
the message.
You can also visit https://www1.ietf.org/mailman/listinfo/I-D-announce
to change your subscription settings.

Internet-Drafts are also available by anonymous FTP. Login with the
username anonymous and a password of your e-mail address. After
logging in, type cd internet-drafts and then
get draft-nottingham-atompub-feed-history-08.txt.

A list of Internet-Drafts directories can be found in
http://www.ietf.org/shadow.html
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt

Internet-Drafts can also be obtained by e-mail.

Send a message to:
[EMAIL PROTECTED]
In the body type:
FILE /internet-drafts/draft-nottingham-atompub-feed-history-08.txt.

NOTE:   The mail server at ietf.org can return the document in
MIME-encoded form by using the mpack utility.  To use this
feature, insert the command ENCODING mime before the FILE
command.  To decode the response(s), you will need munpack or
a MIME-compliant mail reader.  Different MIME-compliant mail readers
exhibit different behavior, especially when dealing with
multipart MIME messages (i.e. documents which have been split
up into multiple messages), so check your local documentation on
how to manipulate these messages.

Below is the data which will enable a MIME compliant mail reader
implementation to automatically retrieve the ASCII version of the
Internet-Draft.
Content-Type: text/plain
Content-ID: [EMAIL PROTECTED]

___
I-D-Announce mailing list
I-D-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/i-d-announce



--
Mark Nottingham http://www.mnot.net/



Re: Forward Compatibility

2006-11-18 Thread Mark Nottingham



On 2006/11/18, at 8:16 AM, Tse Shing Chi ((Franklin/Whale)) wrote:

Currently, there is no version element or attribute to reflect the  
version of Atom used in an Atom feed. Does that mean there will not  
be any new version of Atom?


Atom has a namespace; that can be used to introduce new versions of  
the format. That said, a new version of Atom itself should only be  
necessary if a fundamental bug or limitation is found in the  
specification of the feed itself, or in required element-level  
metadata; new metadata can be added using extensions, rather than by  
versioning Atom.


However, XHTML 2.0 will have a new namespace  
http://www.w3.org/2002/06/xhtml2/, and the chance of having more  
future versions of XHTML cannot be eliminated. Has Atom prepared for this?


This was intentional; if we allowed future, non-backwards-compatible  
versions of XHTML to appear in the same places that XHTML1 content is  
allowed, processors wouldn't know what to do with it unless they  
understood XHTML2. Tying the allowed content to a specific version of  
XHTML promotes interoperability.


Cheers,


--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-11-17 Thread Mark Nottingham


How about adding:

To complete feeds:
---8---
This specification does not address duplicate entries or entry  
ordering in complete feeds.

---8---

To paged feeds:
---8---
Note that this specification does not address duplicate entries or  
entry ordering in paged feeds.

---8---

To archived feeds:
---8---
If duplicate entries are present in an archived feed, the most  
recently updated entry SHOULD replace all earlier entries. If  
duplicate entries have the same update time, and they are obtained from  
different feed documents, the entry sourced from the most recently  
updated feed document SHOULD replace all other duplicates of that entry.


In Atom archived feeds, two entries are duplicates if they have the  
same atom:id element. The update time of an entry is determined by  
its atom:updated element, and likewise the update time of a feed  
document is determined by its feed-level atom:updated element.


In RSS2 archived feeds, two entries are duplicates if they have the  
same guid element. The update time of an entry is not available,  
unless an appropriate extension is present. The update time of a feed  
document is determined by the channel-level pubDate element.


This specification does not address entry ordering in archived feeds.
---8---

This will require some tweaks to the algorithm in Appendix B as well.
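That precedence rule can be sketched in Python; this is my own non-normative illustration (the entry dicts, field names, and the `resolve_duplicates` helper are hypothetical, not from the draft):

```python
def resolve_duplicates(entries):
    """Keep one entry per atom:id, per the proposed archived-feed rule:
    the most recently updated entry wins; ties are broken by the update
    time of the feed document it came from.

    Each entry is a dict with "id", "updated", and "feed_updated"
    (ISO-8601 strings, which compare correctly as plain text).
    """
    winners = {}
    for entry in entries:
        prev = winners.get(entry["id"])
        # Replace the current winner only if this entry is strictly
        # newer by (entry update time, source document update time).
        if prev is None or (
            (entry["updated"], entry["feed_updated"])
            > (prev["updated"], prev["feed_updated"])
        ):
            winners[entry["id"]] = entry
    return list(winners.values())
```

Note the deliberate "strictly newer" comparison: an equal-time duplicate leaves the already-seen entry in place, matching the SHOULD language above.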



On 2006/11/13, at 1:59 PM, James M Snell wrote:

There is the Feed Rank extension which can be used to specify the  
order
for entries but that's still a work in progress and has been a very  
low

priority for me.

IMHO, you should base this entirely on atom:id and atom:updated.   
If you
find another entry with a duplicate atom:id and a newer  
atom:updated, it
takes precedence.  If you find an entry with a duplicate atom:id  
and an

older or equal atom:updated, the one you currently have takes
precedence.  If you want more granularity, look for app:edited  
elements.


- James

Mark Nottingham wrote:


I haven't had any feedback on the possible change below. Does anyone
want to see things move in this direction?

Cheers,


On 2006/10/11, at 10:06 PM, Mark Nottingham wrote:

1. I think your document might need to address what's supposed  
to happen
if duplicate items are discovered when trolling through the  
paged info.
Newer replaces older? (That makes the most sense to me) Although  
I guess

the argument might be, "what's an 'item'?".


I started going down that road in early drafts, but backed away from
it when it started looking like a rat hole. :)

To allow FH to normatively specify what to do with duplicates, you
have to figure out the ordering of entries, so you can determine  
their
relative precedence. Since Atom doesn't have any explicit  
ordering, FH

would need to either assume ordering semantics implicitly, or do
something explicit in an extension.

In the paged case, this seems like a tall order, because it's  
totally

context-dependent; e.g., if you have OpenSearch or GData results, or
orders for launching the missiles that are paged, and you happen to
get a duplicate, the right thing to do may be very different.

In the archived case, it's a little easier, because we're already
inferring that the pages closest to current do have precedence, so we
just need to figure out what to do about duplicates in the same feed
document.

I could see making the implied page-by-page precedence for Archived
feeds in section 4 explicit. It would also be easy to add text  
saying

that relative precedence in the same feed document can be determined
by any extension that defines ordering, defaulting to the update  
time
of the entries (or document order, topmost first? I think this is  
what
most people do, but it seems contrary to the spirit of the Atom  
spec).
I'm not crazy about actually defining an ordering extension (is  
one in

progress? James?) in FH.


--
Mark Nottingham http://www.mnot.net/



--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-11-13 Thread Mark Nottingham


I haven't had any feedback on the possible change below. Does anyone  
want to see things move in this direction?


Cheers,


On 2006/10/11, at 10:06 PM, Mark Nottingham wrote:

1. I think your document might need to address what's supposed to  
happen
if duplicate items are discovered when trolling through the paged  
info.
Newer replaces older? (That makes the most sense to me) Although I  
guess

the argument might be, "what's an 'item'?".


I started going down that road in early drafts, but backed away  
from it when it started looking like a rat hole. :)


To allow FH to normatively specify what to do with duplicates, you  
have to figure out the ordering of entries, so you can determine  
their relative precedence. Since Atom doesn't have any explicit  
ordering, FH would need to either assume ordering semantics  
implicitly, or do something explicit in an extension.


In the paged case, this seems like a tall order, because it's  
totally context-dependent; e.g., if you have OpenSearch or GData  
results, or orders for launching the missiles that are paged, and  
you happen to get a duplicate, the right thing to do may be very  
different.


In the archived case, it's a little easier, because we're already  
inferring that the pages closest to current do have precedence, so  
we just need to figure out what to do about duplicates in the same  
feed document.


I could see making the implied page-by-page precedence for Archived  
feeds in section 4 explicit. It would also be easy to add text  
saying that relative precedence in the same feed document can be  
determined by any extension that defines ordering, defaulting to  
the update time of the entries (or document order, topmost first? I  
think this is what most people do, but it seems contrary to the  
spirit of the Atom spec). I'm not crazy about actually defining an  
ordering extension (is one in progress? James?) in FH.



--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-10-23 Thread Mark Nottingham


OK. I'm adding this text just after the list of feed types in the  
introduction;


---8---
The semantics of a feed that combines these types is undefined by  
this specification.

---8---

WRT what future specs can or can't do, that's pretty much up to them.

Cheers,


On 2006/10/12, at 2:42 AM, Andreas Sewe wrote:


Mark Nottingham wrote:

Andreas Sewe wrote:
But it would be desirable, IMHO, to be able to link to archived,  
older versions of a complete feed from within the current  
complete feed document.


Say, a feed document contains this month's Top Ten. Wouldn't it  
be nice if the feed document could link to September's Top Ten,  
in case anybody is interested in all the recent Top Ten lists?  
And linking to archived feed documents is precisely what prev- 
archive and next-archive are for. So why can't I use them in  
conjunction with complete feeds?
I think you're trying to reconstruct a different dimension;  
archived feeds, when reconstructed, contain only entries that are  
currently considered members of the feed; what you want to do is  
to have snapshots of the feed's members over time, separate from  
what the current members are.


Right. The point is that, for non-complete feeds, members are only  
ever added (never removed) as time goes by. Hence you can  
reconstruct a previous revision of an Archived Feed simply by  
following the prev-archive links far enough into the past; so  
there is, for non-complete Archived Feeds, no real need for another  
link relation to cover this dimension:



This might work as a "prev-version" link relation...


Now there are already "previous" and "prev-archive"; do we really  
need "prev-version" to cover the case of Complete Feeds? At least  
for non-complete Archived Feeds it is not really needed, which  
gives me the gut feeling that "prev-version" might not be needed  
satisfactory definition for prev-archive that would work for both  
cases, complete and non-complete. But that does not mean there is  
none...)


(For the moment, the "related" link relation with an explanatory  
@title will do.)


I am aware, now, that this is not what the current draft says,  
since the three types of feed (complete, paged, and archived  
feed) can't be combined in *any* way, even though the draft's  
introduction claims that "[t]hese types are complementary". But  
at least some of the additional expressiveness offered by the  
combination of complete with either paged or archive feeds would  
be nice to have -- even though it adds some minor complexities.
My gut feeling is that while there might be some use cases for  
this sort of thing, it's going beyond the 80/20 point, and adding  
a lot of complexity/abstraction. When I said that they were  
complementary, I meant that together, they cover most feeds in  
common use today, not that they can be used together.


I see. Maybe it would be a good idea to spell this out explicitly  
in the spec, even if the "Completeness is defined as having all of  
the entries in one physical document" condition implies it. That  
way you can be sure that everyone, me included, gets the semantics  
right. (FWIW, nothing prevents stable logical feeds from being both  
Archived and Paged Feeds, right? Only unstable feeds can just be  
Paged, not Archived.)


I do want to address the combination issue, however. I'm inclined  
to just state that the semantics of feeds that have more than one  
type is undefined by this spec. Does that work for you?


It does. It would be worthwhile, however, to state that a future  
revision of your spec might choose to define behavior for these  
cases. (None of the features I have proposed would contradict the  
current semantics of Complete, Paged, and Archived Feeds.)


Regards,

Andreas Sewe



--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-10-23 Thread Mark Nottingham


Yep. Every feed is, in some sense, complete. And, every feed document  
is potentially part of more than one logical feed.


Many of the use cases take their cues from the surrounding context /  
use case; below you're basically saying:


* If you're subscribing to this URI, you've got the whole thing (and  
BTW, it shouldn't change, because it's an archive).
* If you're putting together an archived feed, and this is one step  
in that, the previous archive is over there.




On 2006/10/23, at 4:42 PM, James M Snell wrote:


Mixing things can definitely get a bit weird.  For instance, if I
structure my archive feeds to show all entries within a given month
rather than by a certain number of entries, each individual feed is  
both

complete and part of a larger set.  Complete in the sense that each
represents a complete set within a limited window of time.

<feed>
  <id>tag:example.org,2006:archives/200609.xml</id>
  <fh:complete />
  <fh:archive />
  <link rel="self" href="200609.xml" />
  <link rel="prev-archive" href="200608.xml" />
  ...
</feed>

- James

Mark Nottingham wrote:


OK. I'm adding this text just after the list of feed types in the
introduction;

---8---
The semantics of a feed that combines these types is undefined by  
this

specification.
---8---

WRT what future specs can or can't do, that's pretty much up to them.

Cheers,

[snip]



--
Mark Nottingham http://www.mnot.net/



Re: AD Evaluation of draft-ietf-atompub-protocol-11

2006-10-17 Thread Mark Nottingham


I also noticed the split that Lisa mentions when reviewing the draft.

I agree that they're not always separate, but it should be pointed  
out that they can be separate. I didn't see any mechanism to discover  
what the URI of the normal feed is, beyond a link/@rel="alternate" in  
the collection feed; did I miss something?


If that's the way to do it, it would be good to call it out (it might  
be preferable to have a separate link relation, as the semantic isn't  
just "alternate", but "public", etc.). It might also be good to have  
something that allows distinguishing between the two (without forcing  
it) in the service document.


Cheers,


On 2006/10/17, at 4:40 PM, James M Snell wrote:


My assumption:  The separation between subscription feeds and
collection feeds is not always clear.  There are at least two  
deployed

implementations I am aware of that use the same feeds for both and I'm
currently working on a third.  In Google's new Blogger Beta, for
instance, the subscription feed is also the collection feed.

I believe that any assumption that the subscription and collections
feeds will always be different is incorrect and dangerous.



--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-10-11 Thread Mark Nottingham


I've had a private request to add elements to optionally indicate:
  a) how many pages there are in total, and
  b) what the current page number is
in the case of paged feeds.

What do people think? I note that OpenSearch has something along  
these lines (sort of) already:
  http://www.opensearch.org/Specifications/OpenSearch/1.1#The_.22totalResults.22_element
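For illustration only (my sketch, not text from the draft; the example.org URLs are hypothetical), the OpenSearch 1.1 response elements would sit in a paged feed roughly like this:

```xml
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
  <os:totalResults>250</os:totalResults>
  <os:startIndex>21</os:startIndex>
  <os:itemsPerPage>20</os:itemsPerPage>
  <link rel="self" href="http://example.org/feed?page=2" />
  <link rel="next" href="http://example.org/feed?page=3" />
  ...
</feed>
```

A client can derive "page 2 of 13" from totalResults and itemsPerPage, which is the kind of information the private request was after.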


Cheers,


On 2006/10/04, at 11:13 AM, Mark Nottingham wrote:



I've only had positive comments about -07 so far, so I've  
recommended it for publication as a Proposed Standard to the IESG.


As part of that process, I'm issuing an informal, pseudo-WG Last  
Call on the document to capture any remaining feedback. In particular,


* What do people think about putting this document on the Standards  
Track?


* Do you have an implementation available, in progress, planned, etc.?

http://ietfreport.isoc.org/idref/draft-nottingham-atompub-feed-history/


Please provide feedback by October 18th.

Cheers,

--
Mark Nottingham http://www.mnot.net/




--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-10-11 Thread Mark Nottingham
 would be  
nice to have -- even though it adds some minor complexities.


My gut feeling is that while there might be some use cases for this  
sort of thing, it's going beyond the 80/20 point, and adding a lot of  
complexity/abstraction. When I said that they were "complementary", I  
meant that together, they cover most feeds in common use today, not  
that they can be used together.


I do want to address the combination issue, however. I'm inclined to  
just state that the semantics of feeds that have more than one type  
is undefined by this spec. Does that work for you?


Cheers,

--
Mark Nottingham http://www.mnot.net/



Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-10-04 Thread Mark Nottingham


I've only had positive comments about -07 so far, so I've recommended  
it for publication as a Proposed Standard to the IESG.


As part of that process, I'm issuing an informal, pseudo-WG Last Call  
on the document to capture any remaining feedback. In particular,


* What do people think about putting this document on the Standards  
Track?


* Do you have an implementation available, in progress, planned, etc.?

http://ietfreport.isoc.org/idref/draft-nottingham-atompub-feed-history/

Please provide feedback by October 18th.

Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: Pseudo-Last Call on draft-nottingham-atompub-feed-history-07

2006-10-04 Thread Mark Nottingham


My blog http://www.mnot.net/blog/ has conformant RSS and Atom  
archived feeds.

   http://www.mnot.net/blog/index.rdf
  http://www.mnot.net/blog/index.atom
I also have a demonstration implementation of a client that will  
reconstruct archived feeds:

  http://www.mnot.net/rss/history/feed_history.py

Others? I know paging is used informally a lot in other situations/specs.
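The reconstruction such a client performs can be sketched as follows. This is my own illustrative code, not the implementation at the URL above; the `fetch` callback and `walk_archives` name are hypothetical, and only the rel="prev-archive" link relation comes from the draft:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def walk_archives(fetch, start_url, limit=100):
    """Collect entries of a logical feed by following rel="prev-archive"
    links backwards from the subscription document.

    `fetch(url)` must return the feed document body as bytes; the
    `seen` set and `limit` guard against link loops.
    """
    entries, url, seen = [], start_url, set()
    while url and url not in seen and len(seen) < limit:
        seen.add(url)
        root = ET.fromstring(fetch(url))
        entries.extend(root.findall(ATOM + "entry"))
        # Step to the next-older archive document, if any.
        prev = root.find(ATOM + "link[@rel='prev-archive']")
        url = prev.get("href") if prev is not None else None
    return entries
```

In a real client the collected entries would then be de-duplicated by atom:id, newest first, before being presented.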



On 2006/10/04, at 12:45 PM, James M Snell wrote:

Are you aware of Atom feeds that are currently implementing this  
version

of the draft?  I'd like to do some interop testing.

- James

Mark Nottingham wrote:


I've only had positive comments about -07 so far, so I've  
recommended it

for publication as a Proposed Standard to the IESG.

As part of that process, I'm issuing an informal, pseudo-WG Last  
Call on

the document to capture any remaining feedback. In particular,

* What do people think about putting this document on the  
Standards Track?


* Do you have an implementation available, in progress, planned,  
etc.?


http://ietfreport.isoc.org/idref/draft-nottingham-atompub-feed-history/


Please provide feedback by October 18th.

Cheers,

--
Mark Nottingham http://www.mnot.net/





--
Mark Nottingham http://www.mnot.net/



Fwd: I-D ACTION:draft-nottingham-atompub-feed-history-07.txt

2006-09-18 Thread Mark Nottingham


"Feed History" is now "Feed Paging and Archiving", to reflect what  
it's become.


This draft is mostly a cleanup of -06, incorporating all of the  
feedback I've seen to date (thanks to all). If I missed anyone's  
comments (or an acknowledgment), please point it out.


The only substantial change is to archive document stability, which  
was demoted from MUST to SHOULD, as discussed.


A diff can be found at:
  http://ietfreport.isoc.org/cgi-bin/htmlwdiff?f1=../all/ids/draft-nottingham-atompub-feed-history-07.txt&f2=../all/ids/draft-nottingham-atompub-feed-history-06.txt


Barring any last-minute show-stoppers, I intend to request this draft  
to be put on the Standards Track.


As always, comments, corrections and suggestions appreciated (but I  
hope we're done!)


Cheers,


Begin forwarded message:


From: [EMAIL PROTECTED]
Date: 15 September 2006 12:50:02 PM
To: i-d-announce@ietf.org
Subject: I-D ACTION:draft-nottingham-atompub-feed-history-07.txt
Reply-To: [EMAIL PROTECTED]

A New Internet-Draft is available from the on-line Internet-Drafts
directories.


Title   : Feed Paging and Archiving
Author(s)   : M. Nottingham
Filename: draft-nottingham-atompub-feed-history-07.txt
Pages   : 16
Date: 2006-9-15

This specification defines three types of syndicated feeds that
   enable publication of entries across one or more feed documents.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-07.txt


To remove yourself from the I-D Announcement list, send a message to
[EMAIL PROTECTED] with the word unsubscribe in the body of
the message.
You can also visit https://www1.ietf.org/mailman/listinfo/I-D-announce
to change your subscription settings.

Internet-Drafts are also available by anonymous FTP. Login with the
username anonymous and a password of your e-mail address. After
logging in, type cd internet-drafts and then
get draft-nottingham-atompub-feed-history-07.txt.

A list of Internet-Drafts directories can be found in
http://www.ietf.org/shadow.html
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt

Internet-Drafts can also be obtained by e-mail.

Send a message to:
[EMAIL PROTECTED]
In the body type:
FILE /internet-drafts/draft-nottingham-atompub-feed-history-07.txt.

NOTE:   The mail server at ietf.org can return the document in
MIME-encoded form by using the mpack utility.  To use this
feature, insert the command ENCODING mime before the FILE
command.  To decode the response(s), you will need munpack or
a MIME-compliant mail reader.  Different MIME-compliant mail readers
exhibit different behavior, especially when dealing with
multipart MIME messages (i.e. documents which have been split
up into multiple messages), so check your local documentation on
how to manipulate these messages.

Below is the data which will enable a MIME compliant mail reader
implementation to automatically retrieve the ASCII version of the
Internet-Draft.
Content-Type: text/plain
Content-ID: [EMAIL PROTECTED]

___
I-D-Announce mailing list
I-D-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/i-d-announce



--
Mark Nottingham http://www.mnot.net/



Re: I-D ACTION:draft-nottingham-atompub-feed-history-06.txt

2006-08-16 Thread Mark Nottingham


On 2006/06/29, at 7:45 PM, James M Snell wrote:


A couple of comments...


Section 6

   Archive documents are feed documents that contain less recent  
entries

   in the feed.  The set of entries contained in an archive document
   published at a particular URI MUST NOT change over time.

I definitely understand the motivation for the MUST NOT here, but  
I'm

not sure if it's enforceable.  For instance, on my blog, I may go back
and delete an old entry that appears within an archive feed.  Such a
change should be reflected in the archive feed.  I would recommend
changing this to a SHOULD NOT with a comment that removing entries
from an archive document has a negative impact on the cacheability of
the document and the effectiveness of the archive.


How about:

[[[
Archive documents are feed documents that contain less recent entries  
in the feed. The set of entries contained in an archive document  
published at a particular URI SHOULD NOT change over time. Likewise,  
the URI for a particular archive document SHOULD NOT change over time.


These stability requirements allow clients to make certain  
assumptions about archive documents; they may safely assume that if  
they have retrieved the archive document at a particular URI once, it  
will not meaningfully change in the future.

]]]
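As an aside (my illustration, not proposed spec text): one practical payoff of that stability is HTTP cacheability. A server could mark archive documents as long-lived, e.g.:

```
HTTP/1.1 200 OK
Content-Type: application/atom+xml
Cache-Control: max-age=31536000
```

With a SHOULD NOT rather than MUST NOT, a client that does see a changed archive document is still behaving correctly if it re-fetches and replaces its cached copy.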



Section 6.1

The archive document examples do not have the fh:archive / element


Fixed; thanks (and to Stefan as well).

--
Mark Nottingham http://www.mnot.net/



Fwd: I-D ACTION:draft-nottingham-atompub-feed-history-06.txt

2006-06-28 Thread Mark Nottingham


As discussed earlier.

Begin forwarded message:


From: [EMAIL PROTECTED]
Date: 28 June 2006 3:50:01 PM
To: i-d-announce@ietf.org
Subject: I-D ACTION:draft-nottingham-atompub-feed-history-06.txt
Reply-To: [EMAIL PROTECTED]

A New Internet-Draft is available from the on-line Internet-Drafts  
directories.



Title   : Extensions for Multi-Document Syndicated Feeds
Author(s)   : M. Nottingham
Filename: draft-nottingham-atompub-feed-history-06.txt
Pages   : 17
Date: 2006-6-28

This specification defines three types of syndicated feeds that
   enable publication of entries across one or more feed documents.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-06.txt


To remove yourself from the I-D Announcement list, send a message to
[EMAIL PROTECTED] with the word unsubscribe in the body  
of the message.

You can also visit https://www1.ietf.org/mailman/listinfo/I-D-announce
to change your subscription settings.


Internet-Drafts are also available by anonymous FTP. Login with the  
username

anonymous and a password of your e-mail address. After logging in,
type cd internet-drafts and then
get draft-nottingham-atompub-feed-history-06.txt.

A list of Internet-Drafts directories can be found in
http://www.ietf.org/shadow.html
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt


Internet-Drafts can also be obtained by e-mail.

Send a message to:
[EMAIL PROTECTED]
In the body type:
FILE /internet-drafts/draft-nottingham-atompub-feed-history-06.txt.

NOTE:   The mail server at ietf.org can return the document in
MIME-encoded form by using the mpack utility.  To use this
feature, insert the command ENCODING mime before the FILE
command.  To decode the response(s), you will need munpack or
a MIME-compliant mail reader.  Different MIME-compliant mail readers
exhibit different behavior, especially when dealing with
multipart MIME messages (i.e. documents which have been split
up into multiple messages), so check your local documentation on
how to manipulate these messages.


Below is the data which will enable a MIME compliant mail reader
implementation to automatically retrieve the ASCII version of the
Internet-Draft.
Content-Type: text/plain
Content-ID: [EMAIL PROTECTED]

___
I-D-Announce mailing list
I-D-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/i-d-announce



--
Mark Nottingham http://www.mnot.net/



RFC3229 w/ feeds [was: Paging, Feed History, etc.]

2006-06-08 Thread Mark Nottingham



On 2006/06/07, at 11:40 PM, Thomas Broyer wrote:


My main concern is that RFC3229 w/ feeds is being deployed more and
more widely and is still not even an I-D (or I missed something).


I have that concern as well.

I am also concerned that RFC3229 is an extension of HTTP, but some  
implementers are acting as if it changes the semantics of already-defined parts of HTTP. For example, a delta must be a subset of the  
current representation that is returned to a GET; if you GET the  
feed, it has to return all of the entries that you could retrieve by  
using delta.


I have a feeling that many people are treating it as a dynamic query  
mechanism that's capable of retrieving any entry that's ever been in  
the feed, while still only returning the last n entries to a plain  
GET. If so, they're breaking HTTP, breaking delta, and should use  
something else.


Is this the case, or am I (happily) mistaken?

--
Mark Nottingham http://www.mnot.net/



Re: Paging, Feed History, etc.

2006-06-07 Thread Mark Nottingham



On 2006/06/07, at 4:26 AM, James Holderness wrote:

I'm going to reserve judgement until I see what exactly it is  
you're proposing, but I'm not particularly keen to change our  
existing implementation.


Understood. I've been reluctant to change the spec for just that  
reason, but the split has become pretty apparent.


Out of curiosity, do you know of *any* client applications that  
actually support feed paging? Or have expressed an intention to  
support feed paging? By client application I mean something a user  
can download and install on their system and use as a feed reader.  
Last time I checked I couldn't find any on Windows. Is there better  
support on Linux and Macs?


I think most of the use cases for paging have to do with things like  
GData, OpenSearch, etc -- i.e., query results. That sort of thing  
isn't targeted at desktop aggregators AFAICT; it seems to be more  
for machine-machine communication, or for browsing a result set.


--
Mark Nottingham http://www.mnot.net/



Re: Paging, Feed History, etc.

2006-06-07 Thread Mark Nottingham



On 2006/06/07, at 9:03 AM, James Holderness wrote:


As for machine-machine communication, if these feeds aren't meant  
for desktop aggregators then does it really matter that they  
function differently? You can describe one algorithm for use in  
machine-machine communication and another for use by desktop  
aggregators downloading regular feeds. Both can use the same link  
relations because they should never come into contact with each  
other. Having said that I still don't see how a machine-machine  
algorithm for retrieving a paged feed can be different from your  
current feed history algorithm and still be useful.


I don't see a clean split between machine-to-machine vs. desktop  
aggregator cases; for example, an incremental-style feed can be  
useful both on the desktop (to make sure I see all of your blog  
entries) as well as with processes (to make sure that my program  
doesn't miss a critical event if it has some downtime or loss of  
connectivity).


Similarly, some of the cases I've heard for paging-style feeds are  
with desktop clients (e.g., get me the next results, please) and  
some are with processes (e.g., processing search results automatically).


The difference has more to do with a) what guarantees the server  
wants to provide, and b) what resources they're willing to devote  
towards meeting those guarantees.


Let's say I was a search engine returning paged results. A search is  
performed that returns 200 results. I return 20 pages, 10 results  
per page. First time around a client supporting the feed history  
algorithm would retrieve all 20 pages no problem. So far I see no  
difference between how a desktop aggregator would behave and how  
machine-machine communication would function.


The second time the client connects (assuming there is a second  
time) it sends through an etag and/or last-modified date so the  
search engine knows which results it already has. Say there are 3  
new results since the previous retrieval. Either the search engine  
is smart enough to just return those 3 results or it's going to  
ignore the etag and return everything - 21 pages, 10 results per  
page, new items could be anywhere.


As a desktop aggregator I guarantee you I'm not going to want to  
download 20+ pages every hour just to find the 3 new items that  
*might* be there. Fortunately the feed history algorithm would stop  
me after the first page, and I'm thankful for that. Would a machine- 
machine communication be any different? Would they really want to  
download every single one of those 203 results just to find the 3  
new items?


These are pretty much the assumptions that I was making previously.  
The degree of precision that FH currently provides isn't desirable  
for search results. Feed History also requires that the server  
maintain state about a particular feed, which is unworkable for  
search results; e.g., to implement feed history for search results, a  
server would have to mint a whole new set of feed documents for every  
query, and keep them around. That's not workable for most search  
engines (Yahoo, Google, Amazon, whatever), so they need another  
option -- one that needs to be clearly distinct from FH.


This brings me to my other motivation -- I found that most people who  
use previous and next don't understand the assumptions that FH  
makes about archive stability, and point them at URIs like
http://example.org/feed.atom?page=3. That will break the FH algorithm  
badly, reducing the value of the mechanism as a whole, because people  
will stop trusting it. The link relation for implementing the  
incremental approach needs to have the stability semantics baked in  
and explicit.


--
Mark Nottingham http://www.mnot.net/



Re: Paging, Feed History, etc.

2006-06-07 Thread Mark Nottingham



On 2006/06/07, at 11:16 AM, James Holderness wrote:


Mark Nottingham wrote:
These are pretty much the assumptions that I was making  
previously.  The degree of precision that FH currently provides  
isn't desirable  for search results. Feed History also requires  
that the server  maintain state about a particular feed, which is  
unworkable for  search results; e.g., to implement feed history  
for search results, a  server would have to mint a whole new set  
of feed documents for every  query, and keep them around.


Not necessarily. They only need to be able to sort results on a  
most-recently-discovered date. When a client connects to them with  
an etag representing date X, they return only those results that have  
been discovered since date X. I can believe that there are search  
engines for which even that isn't feasible, but then FH as we know  
is not possible, and those feeds are essentially useless from a  
subscription point of view.


They can still use the paging links so you can connect to a search  
engine for a once-off paged set of results. They just need to  
return 304 on any subsequent requests (i.e. anything with an etag)  
in case someone makes the mistake of subscribing to one of those  
feeds. If you have something else in mind for that kind of server  
I'm curious to know what it is. In other words can you envision a  
server that wants to do paging, doesn't have enough state  
information to be able to do FH, but still would like to allow  
subscriptions? How would it work?


I'm not sure how ETags and 304s come into it -- it sounds like you're  
proposing using either the entry-level updated date or the
entry-level id as input to a server-side function to select a set of  
entries from the feed.  Can you paint out your proposal in protocol  
exchanges, please?



--
Mark Nottingham http://www.mnot.net/



Re: Paging, Feed History, etc.

2006-06-07 Thread Mark Nottingham


Are you talking about using ETag HTTP response headers, If-None-Match  
request headers, and 304 Not Modified response status codes? That's a  
gross misapplication of those mechanisms if so, and this will break  
intermediaries along the path.


Even if it's cast as a query parameter in the URI (for example), it  
requires query support on the server side, a concept of discovered  
time (as you point out), and places constraints on the ordering of  
the feed.


Are you proposing this instead of the mechanism currently described  
in FH? Alongside it?



On 2006/06/07, at 3:35 PM, James Holderness wrote:



Mark Nottingham wrote:
I'm not sure how ETags and 304s come into it -- it sounds like
you're proposing using either the entry-level updated date or the
entry-level id as input to a server-side function to select a set
of entries from the feed.  Can you paint out your proposal in
protocol exchanges, please?


Entry-level updated date is close, but not quite what would be  
needed. For this to work, it requires that search engines store a  
date that represents when a result is discovered. In other words  
the date that an entry is added to the search engine database.


Client connects to server with a specific query.
Server returns the first page of results along with an etag  
representing the current internal datetime as accurately as possible.
Client connects to server with next link from first page (the  
link would obviously have to include the query as well as the page  
number).

Server returns the second page of results.
etc.

wait several days

Client connects to server with a specific query along with etag  
returned from the first query.
Server returns only those results that match the query *and* have a  
discovered date >= the etag date value.
Server also returns etag representing the current internal datetime  
as before.
Client connects to server with next link from first page (this  
link would include the query and the page number, but also the  
etag value).
Server returns the second page of results that match the query  
*and* have a discovered date >= the etag value.

etc.

That make sense?
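This exchange could be sketched as follows (a toy in-memory server; the names, and the idea of using the server clock as the "etag", are illustrative, not a real protocol):

```python
# Sketch of the hypothetical exchange above: each result carries a
# "discovered" timestamp, and the etag the server hands back is just
# its clock value, echoed on later queries to filter out old results.

results = [
    {"id": "r1", "discovered": 1},
    {"id": "r2", "discovered": 1},
]
clock = 1

def search(query, since=None, page=0, per_page=10):
    """Return (page_of_results, etag). `since` is the etag from a prior query."""
    matched = [r for r in results
               if since is None or r["discovered"] > since]
    start = page * per_page
    return matched[start:start + per_page], clock

# First retrieval: no etag, the full result set pages through normally.
page1, etag = search("islands")
assert [r["id"] for r in page1] == ["r1", "r2"]

# Days later, a new result has been discovered...
clock = 2
results.append({"id": "r3", "discovered": 2})

# ...and the stored etag lets the server return only the new item.
new_items, _ = search("islands", since=etag)
assert [r["id"] for r in new_items] == ["r3"]
```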

If the server doesn't have the concept of a discovered date, then  
there's not much we can do. We can return a paged set of results,  
but they can't be updated in any meaningful way so the feed should  
not be subscribed to.


Client connects to server with a specific query.
Server returns first page of results along with an etag containing  
a hash of the star-spangled banner.

Client connects to server with next link from first page.
Server returns the second page of results.
etc.

wait several days

Client connects to server with a specific query along with etag  
returned from initial query.
Server notices that there is an etag, doesn't care what it is set  
to, and just returns 304 regardless.


In other words, you can retrieve the feed once, but never again.  
That's as good as it gets.


Regards
James




--
Mark Nottingham http://www.mnot.net/



Paging, Feed History, etc.

2006-06-06 Thread Mark Nottingham


I've been talking to a number of people face-to-face about Feed  
History and the use cases for it, and I've come to a position where I  
believe there are two major use cases out there for putting together  
multiple feeds to form one big, virtual feed.


1) So-called incremental feeds, where changes are happening at the  
front end of the feed, while less recent entries are pretty much  
static. If a change is required in an old entry, either a new revised  
entry with the same ID, or something like a tombstone is required.  
Publishers can easily make stable archives of old entries available,  
and clients wish to take advantage of caching, etc. to avoid re- 
transferring old entries again and again. A high degree of fidelity  
may be required; e.g., it should be possible to accurately  
reconstruct the entire state of the feed with no missed entries.  
E.g., a blog feed, but this could also be seen as an event feed,  
where the entries are changes to the state of the underlying resources.


2) So-called paging feeds, where the entries are often the results  
to a query, being paged through in groups so as to not overwhelm the  
server and/or communications link. Entries may be arbitrarily added,  
deleted and reordered. Clients expect to access what the portions  
they need in relatively quick succession, and do not require absolute  
fidelity. E.g., OpenSearch query results. The entries in the feed  
directly correspond to the underlying resources, 1-to-1.


These are very different things. Incremental feeds require that the  
server keep state around per-feed, which isn't viable for something  
like query results, but fine for a blog. Paging feeds can lose  
entries (e.g., if http://example.org/index.page?page=1 refers to  
http://example.org/index.atom?page=2, page 2's contents can change  
between the two fetches), which is OK for some applications, and not  
for others.


As such, I'm pretty much convinced that they need to be dealt with  
separately.


Originally, I proposed that Feed History use prev-archive link  
relations, but many people pushed back on that in favour of the more  
generic previous and next.


Given the above, I'd like to see if anyone would still object to  
having separate relation sets for incremental feeds (prev-archive  
and friends) and paging feeds (previous, next and friends).


If people think that's a good idea, I can prepare a new draft that  
attempts to address both. The intent would be to be compatible with  
current usage by OpenSearch, GData, etc., while giving people the  
option to use something more reliable when necessary.


Thoughts?


--
Mark Nottingham http://www.mnot.net/



Re: Tools that make use of previous/next/first/last links?

2006-05-03 Thread Mark Nottingham


If you use URIs like
  http://example.com/feed?start=5&num=10
changing the directionality of next and previous will not make  
what you're doing compatible with feed history.


Such URIs have a much more fundamental problem -- they don't refer to  
a stable set of entries, and therefore only act as a snapshot of the  
*current* feed, chopped up into chunks. If the feed changes between  
accesses, the client will be in an inconsistent state. The client  
also has to walk through all of the pages every time it fetches the  
feed; it can't cache them -- which is a primary requirement for feed  
history.


What are the requirements that drove you to this type of paging  
solution?



On 2006/05/02, at 9:14 PM, James M Snell wrote:



Mark Nottingham wrote:

[snip]

As it stands now, a single feed
cannot implement APP, OpenSearch AND Feed History.


Please describe the scenario where you'd want that to happen --  
show the

feed.




The feed(s) are part of our open activities implementation and are
available via our APP interop endpoint [1].  Our APP collection feeds
are also the feeds people subscribe to and search with (e.g. any of  
our

feeds accept querystring parameters to filter the feed results).
Requesters can set the page size as a querystring, if the result  
set is

larger than the page size, the feed is automatically paged using
first/last/next/previous.  The fact that our entries are sorted in
reverse chronological order makes us compliant with APP, but makes it
impossible for clients to use the Feed History algorithm  (current  
has a

next but no previous).

- James

[1] http://www.imc.org/atom-protocol/mail-archive/msg04795.html





--
Mark Nottingham http://www.mnot.net/



Re: addition to next rev of FH? [was Tools that make use of previous/next/first/last links?]

2006-05-03 Thread Mark Nottingham


I had this already:
   Archive document refers to a feed document that is archived; i.e.,
   the set of entries inside it does not change over time.  Entries
   within an archive MAY themselves change, however.

but if this is catching people by surprise, it obviously isn't  
prominent enough. I'll hammer it home with some examples, SHOULDs, etc.


Thanks!


On 2006/05/03, at 3:42 AM, Bill de hÓra wrote:


(ot for the last thread)

Hi Mark,

I've just specced out an app that uses FH and this idea of an  
archived feed hadn't quite come across to me as safe - I had some  
what ifs about server resets that affected the feed.


However, the URL:

http://example.com/feed?start=5num=10

nails that concern for me, and thus your point about chunky URLs  
with dynamically generated feeds rings true. Would you consider  
calling this out directly in a future rev? I think it might  
be helpful for robust server designs if some guidance were given.


cheers
Bill

Mark Nottingham wrote:

If you use URIs like
  http://example.com/feed?start=5&num=10
changing the directionality of next and previous will not make  
what you're doing compatible with feed history.
Such URIs have a much more fundamental problem -- they don't refer  
to a stable set of entries, and therefore only act as a snapshot  
of the *current* feed, chopped up into chunks. If the feed changes  
between accesses, the client will be in an inconsistent state. The  
client also has to walk through all of the pages every time it  
fetches the feed; it can't cache them -- which is a primary  
requirement for feed history.
What are the requirements that drove you to this type of paging  
solution?

On 2006/05/02, at 9:14 PM, James M Snell wrote:


Mark Nottingham wrote:

[snip]

As it stands now, a single feed
cannot implement APP, OpenSearch AND Feed History.


Please describe the scenario where you'd want that to happen --  
show the

feed.




The feed(s) are part of our open activities implementation and are
available via our APP interop endpoint [1].  Our APP collection  
feeds
are also the feeds people subscribe to and search with (e.g. any  
of our

feeds accept querystring parameters to filter the feed results).
Requesters can set the page size as a querystring, if the result  
set is

larger than the page size, the feed is automatically paged using
first/last/next/previous.  The fact that our entries are sorted in
reverse chronological order makes us compliant with APP, but  
makes it
impossible for clients to use the Feed History algorithm   
(current has a

next but no previous).

- James

[1] http://www.imc.org/atom-protocol/mail-archive/msg04795.html



--
Mark Nottingham http://www.mnot.net/






--
Mark Nottingham http://www.mnot.net/




Re: Tools that make use of previous/next/first/last links?

2006-05-03 Thread Mark Nottingham



Any extension that uses previous and next has to account for the  
stability of the underlying resources; if you use *any* paging  
application and the set of entries in previous and next changes  
over time, you're going to potentially end up with a reconstructed  
feed in inconsistent state. Or do you have use cases where it's OK to  
have sloppy feed reconstruction?




On 2006/05/03, at 8:43 AM, James M Snell wrote:



+1, I'd also highly recommend introducing an archive link relation
that can be used to cleanly separate paged feed documents used for  
state

reconstruction from paged feed documents used for other purposes (e.g.
searching)

  ===
  Attribute Value: archive

  Description: A URI that, when dereferenced, returns a feed document
  containing a fixed set of entries that does not change over time.

  Expected display characteristics: The archive relation may be used
  for background processing without displaying its value, or, a user
  agent may support displaying a hyperlink, button or other GUI  
element

  for accessing or subscribing to the linked feed document.

  Security Considerations: Automated agents should take care when this
  relation crosses administrative domains (e.g. the URI has a different

  authority than the current document)
  ===

Example;

  <link rel="archive" href="http://example.org/archive?when=2006/04" />


- James

David Powell wrote:


Wednesday, May 3, 2006, 6:48:55 AM, Mark Nottingham wrote:


If you use URIs like
   http://example.com/feed?start=5&num=10
changing the directionality of next and previous will not make
what you're doing compatible with feed history.


Such URIs have a much more fundamental problem -- they don't  
refer to

a stable set of entries, and therefore only act as a snapshot of the
*current* feed, chopped up into chunks. If the feed changes between
accesses, the client will be in an inconsistent state. The client
also has to walk through all of the pages every time it fetches the
feed; it can't cache them -- which is a primary requirement for feed
history.


I think it would be worth recommending the use of stable URIs in the
draft.








--
Mark Nottingham http://www.mnot.net/



Re: Tools that make use of previous/next/first/last links?

2006-05-02 Thread Mark Nottingham



On 2006/05/01, at 12:55 AM, James M Snell wrote:


Eric Scheid wrote:
I thought OpenSearch results are not sorted by chronological age  
at all, but
instead by relevance? Using next with OpenSearch makes sense in  
that

context. Using previous for stepping back thru time in a data store
arranged chronologically makes sense.


What Eric said.


As it stands now, a single feed
cannot implement APP, OpenSearch AND Feed History.


Please describe the scenario where you'd want that to happen -- show  
the feed.



--
Mark Nottingham http://www.mnot.net/



Re: Tools that make use of previous/next/first/last links?

2006-05-02 Thread Mark Nottingham


Peter,

Can you expand upon being more precise about exactly what is needed?


On 2006/05/01, at 3:16 AM, Peter Robinson wrote:


Mark Nottingham [EMAIL PROTECTED] wrote:

One thing I did notice -- you're using URLs like this for your  
archives:

   http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10

Are they really permanent? If they're relative to the current state
of the feed (i.e., the above URI means give me the ten latest
entries), you can get into some inconsistent states; e.g., if
somebody adds/deletes an entry between when the client fetches the
different archives. Also, if a client doesn't visit for a long time,
it will see
   http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10
and assume it already has all of the entries in it, because it's
fetched that URI before.


This is the biggest issue I have with the history spec as written.  I
have urls like that, which aren't 'archive documents' as defined.   
That
means I can't implement the history spec even though I have  
conventional

chronologically ordered feeds with link rel="prev"/"next" elements where
old entries are available.

I believe that by being more precise about exactly what is needed  
by the

client to implement feed history usefully you can significantly relax
the requirements.  I believe the algorithm can be written so that
clients can use history with a feed like mine.

Regards,

Peter





--
Mark Nottingham http://www.mnot.net/



Re: Tools that make use of previous/next/first/last links?

2006-04-30 Thread Mark Nottingham


Did you find that algorithm wrong, too hard to understand/implement,  
or did you just do a different take on it? Does the approach that you  
took end up having the same result?


Any suggestions on how to better document it appreciated.

Cheers,


On 2006/04/26, at 8:35 PM, James Holderness wrote:



We added support for next/prev/previous links in version 0.3.0 of  
Snarfer [1]. We don't use the reconstruction algorithm suggested in  
the Feed History draft, but your example feed seems to work ok for  
an initial retrieval. There may be problems with subsequent  
updates, though, depending on how you handle items falling out the  
bottom of the main feed.


Regards
James

[1] http://www.snarfware.com/

John Panzer wrote:

We just deployed support for link@rel="previous" et al. for
AOL Journals.  If anyone has a client that makes use of these
links, please let me know, I'd love to see if there are any
interoperability problems.






--
Mark Nottingham http://www.mnot.net/



Re: Tools that make use of previous/next/first/last links?

2006-04-30 Thread Mark Nottingham


I ran it through my demo implementation for Feed History:
  http://www.mnot.net/rss/history/feed_history.py
and it worked fine (after I fixed a bug -- thanks!).

To use that, just download the .py and run it on the command line  
like this:

  ./feed_history.py [filename] [url]
where filename is the name of a local file it can store state in (if  
you run it again in the future, it won't fetch what it's already  
seen) and url is the feed.


One thing I did notice -- you're using URLs like this for your archives:
  http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10


Are they really permanent? If they're relative to the current state  
of the feed (i.e., the above URI means give me the ten latest  
entries), you can get into some inconsistent states; e.g., if  
somebody adds/deletes an entry between when the client fetches the  
different archives. Also, if a client doesn't visit for a long time,  
it will see
  http://journals.aol.com/panzerjohn/abstractioneer/atom.xml?page=2&count=10
and assume it already has all of the entries in it, because it's  
fetched that URI before.



On 2006/04/26, at 6:36 PM, John Panzer wrote:

We just deployed support for link@rel="previous" et al. for AOL  
Journals.  If anyone has a client that makes use of these links,  
please let me know, I'd love to see if there are any  
interoperability problems.


Example Atom feed: http://journals.aol.com/panzerjohn/ 
abstractioneer/atom.xml


Thanks,
--
John Panzer
System Architect
http://abstractioneer.org



--
Mark Nottingham http://www.mnot.net/



Feed History -05

2006-03-01 Thread Mark Nottingham


I've submitted Feed History -05, hopefully either the last or very  
near to.
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-05.txt


Changes:

* Moved fh:prev and fh:subscription to the previous and current  
Atom link relations, respectively.


* More carefully specified the feed state reconstruction process;  
please review.


* Moved fh:incremental boolean to fh:complete empty element (has  
incremental=false semantics).
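For reference, a rough sketch of the reconstruction walk over "previous" links, under an assumed toy data model (dicts standing in for fetched feed documents; this is not the normative process in the draft):

```python
# Minimal sketch of feed state reconstruction: start at the
# subscription document and follow "previous" links, newest first,
# stopping early at any document already seen (archives are stable).

documents = {
    "http://example.org/feed":  {"entries": ["e5", "e4"], "previous": "http://example.org/arch2"},
    "http://example.org/arch2": {"entries": ["e3", "e2"], "previous": "http://example.org/arch1"},
    "http://example.org/arch1": {"entries": ["e1"], "previous": None},
}

def reconstruct(subscription_uri, already_seen=()):
    """Accumulate entries by walking "previous" links from the subscription doc."""
    entries, uri = [], subscription_uri
    while uri is not None and uri not in already_seen:
        doc = documents[uri]          # stands in for an HTTP fetch
        entries.extend(doc["entries"])
        uri = doc["previous"]
    return entries

state = reconstruct("http://example.org/feed")
assert state == ["e5", "e4", "e3", "e2", "e1"]
```

Passing previously fetched archive URIs as `already_seen` is what lets a client skip re-transferring old entries.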


Please review and give feedback ASAP; I think this has incorporated  
all feedback and stated plans to date.


Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: I-D ACTION:draft-manoj-cachecontrol-00.txt : Call for Comments

2006-02-09 Thread Mark Nottingham


Hi Manoj,

It might be more appropriate to send this to the HTTP list;
  http://lists.w3.org/Archives/Public/ietf-http-wg/

Also, out of curiosity, is this a Network Appliance-sponsored draft,  
or your own effort? I.e., is this something that might make its way  
into NetCache?


Cheers,


On 2006/02/08, at 2:26 AM, Manoj wrote:



Hello friends,

All your comments are welcome on my draft below.

Thanks and Regards
Manoj


-- 
---
A New Internet-Draft is available from the on-line Internet-Drafts  
directories.



Title: An Extension to Cache-Control, HTTP/1.1 for group caching
Author(s): G. Manoj
Filename: draft-manoj-cachecontrol-00.txt
Pages: 7
Date: 2006-1-17

   The Cache-Control general-header field of HTTP/1.1 [1] is used to
   specify directives that must be obeyed by all caching mechanisms
   along the request/response chain. This document details an  
extension

   to the cache-control header to enable caching of resources for
   dynamic sets of users who are grouped under certain attributes.  
Also

   this document specifies user-defined header extensions of HTTP/1.1
   [1], which allow these clients to be served from the group caches.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-manoj-cachecontrol-00.txt






--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations -- Last Call

2005-11-10 Thread Mark Nottingham


I've had a response; they're happy (Joe G can confirm this), and say  
they'll update their next draft to accommodate the regs.


All systems go; requesting registration shortly.


On 03/11/2005, at 6:54 AM, Mark Nottingham wrote:



On 24/10/2005, at 2:12 PM, Peter Robinson wrote:
That is true, but have you communicated with the OpenSearch people  
about

this?  I do strongly believe that *here* is the place for work like
this, rather than behind closed doors at Amazon.  But if I was  
them I'd
feel pretty miffed that this WG appears to have basically stolen  
their
idea in a desperate 'land grab', and turned it on its head so that  
it is

(arguably) the complete opposite of their intended definition.


The only place to give them feedback is on a Web form. I've  
submitted feedback and will wait to see if there's any response.



--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations -- Last Call

2005-11-02 Thread Mark Nottingham
On 24/10/2005, at 2:12 PM, Peter Robinson wrote:

That is true, but have you communicated with the OpenSearch people about
this?  I do strongly believe that *here* is the place for work like
this, rather than behind closed doors at Amazon.  But if I was them I'd
feel pretty miffed that this WG appears to have basically stolen their
idea in a desperate 'land grab', and turned it on its head so that it is
(arguably) the complete opposite of their intended definition.

The only place to give them feedback is on a Web form. I've submitted
feedback and will wait to see if there's any response.

Cheers,

--
Mark Nottingham http://www.mnot.net/

Re: New Link Relations -- Ready to go?

2005-10-22 Thread Mark Nottingham


First and Last are (or at least can be) static; i.e. one can read
the relations, as currently written, as saying that they point to the
specific set of entries (archive) that are first and last,
respectively, at the time that the feed is minted. Subscribing to one
of those would be... bad.


If we had more specific relations, this would certainly be a lot  
easier. Keeping everything so loose and semantic-free seems to me  
like premature optimisation and a barrier to interoperability.


HTTP, for example, seems to work just fine, despite having concrete  
semantics that are grounded in specific use cases for almost all of  
its headers (indeed, the least-used ones are those that are more  
descriptive).




On 22/10/2005, at 2:10 AM, James Holderness wrote:


Tim Bray wrote:

On consideration, I am -1 to rel="subscribe".  The reason is
this: one of the big potential value-adds Atom brings is a
standards-compliant way to do one-click auto-subscribe, via
<link rel="self"/>.  You are proposing to introduce a
<link rel="subscribe"/> which is there to support autosubscribe.
But, it turns out, only in the special case where the feed is
static and you wouldn't actually subscribe to it.  I think the
risk of confusing implementors and weakening the value
proposition around <link rel="self"/> greatly exceeds the
benefit of supporting this special case.




At the time subscribe was proposed it wasn't clear that there  
would be a first and last. However, since that is now the case,  
would it not short-circuit a whole lot of argument if we just threw  
out subscribe altogether?


Determining whether an Atom document is an archive can be achieved
by looking for the presence of a "prev" link and/or a "first" link
that is not equal to "self". As for finding the subscription URI
itself - that should just be the "first" link, shouldn't it?
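James's heuristic can be sketched in a few lines. This is only an illustration of the idea under discussion (the relation names were still being debated in this thread), and the sample document and all URIs in it are invented:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def looks_like_archive(feed_xml):
    """Heuristic from the discussion above: treat a feed document as an
    archive if it carries a 'prev' link, or a 'first' link whose href
    differs from its 'self' link."""
    root = ET.fromstring(feed_xml)
    links = {link.get("rel", "alternate"): link.get("href")
             for link in root.findall(ATOM + "link")}
    if "prev" in links:
        return True
    first, self_ = links.get("first"), links.get("self")
    return first is not None and first != self_

# A hypothetical archive document (all URIs invented).
archive = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="self"  href="http://example.org/2005/10/index.atom"/>
  <link rel="first" href="http://example.org/index.atom"/>
  <link rel="prev"  href="http://example.org/2005/09/index.atom"/>
</feed>"""
```

Under this heuristic, `archive` is flagged as an archive (it has a "prev" link, and its "first" differs from "self"), while a subscription feed whose "first" equals "self" would not be.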


I don't want to get dragged back into a long argument on this so if  
you think this is a stupid idea don't expect any defence from me.  
I'm just throwing it out there with the hope that it might be  
workable.


Regards
James





--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations -- Ready to go?

2005-10-22 Thread Mark Nottingham


Great! I'll summarise where they are and do a last call.


On 22/10/2005, at 9:52 AM, Tim Bray wrote:



On Oct 22, 2005, at 8:40 AM, Mark Nottingham wrote:


You seem to be saying that because link/@rel=self was designed  
for a specific purpose, and even though its definition is quite  
descriptive (its definition *only* says it should be used to link  
to the current document; -11 says nothing about subscription) it  
should be the only way defined to do subscription.




Agreed that the description could be better.  What I'm actually  
saying is since we already have a way to do subscription, we don't  
need to invent another.  Also that the problem of pointing from  
the static/archived version of a feed to the dynamic/subscribable  
one is a related but different problem, and the one that you ought  
to be solving.




OTOH, I'm happy to make this relation more declarative. How about:

 -  Attribute Value: current
 -  Description: A URI that, when dereferenced, returns a feed  
document containing the most recent entries in the feed.

 -  Expected display characteristics: Undefined.
 -  Security considerations: Automated agents should take care  
when this relation crosses administrative domains (e.g., the URI  
has a different authority than the current document).




Thank you.  +1 -Tim






--
Mark Nottingham http://www.mnot.net/



New Link Relations -- Last Call

2005-10-22 Thread Mark Nottingham


I've replaced "subscribe" with "current"; otherwise, these are the
same as in the last round. I think they're ready to go -- any more
comments?



 -  Attribute Value: previous
 -  Description: A URI that refers to the immediately preceding  
document in a series of documents.

 -  Expected display characteristics: Undefined.
 -  Security considerations: Automated agents should take care when  
this relation crosses administrative domains (e.g., the URI has a  
different authority than the current document). Such agents should  
also take care to detect circular references.


 -  Attribute Value: next
 -  Description: A URI that refers to the immediately following  
document in a series of documents.

 -  Expected display characteristics: Undefined.
 -  Security considerations: Automated agents should take care when  
this relation crosses administrative domains (e.g., the URI has a  
different authority than the current document). Such agents should  
also take care to detect circular references.


 -  Attribute Value: first
 -  Description: A URI that refers to the furthest preceding  
document in a series of documents.

 -  Expected display characteristics: Undefined.
 -  Security considerations: Automated agents should take care when  
this relation crosses administrative domains (e.g., the URI has a  
different authority than the current document). Such agents should  
also take care to detect circular references.


 -  Attribute Value: last
 -  Description: A URI that refers to the furthest following  
document in a series of documents.

 -  Expected display characteristics: Undefined.
 -  Security considerations: Automated agents should take care when  
this relation crosses administrative domains (e.g., the URI has a  
different authority than the current document). Such agents should  
also take care to detect circular references.


 -  Attribute Value: current
 -  Description: A URI that, when dereferenced, returns a feed  
document containing the most recent entries in the feed.

 -  Expected display characteristics: Undefined.
 -  Security considerations: Automated agents should take care when  
this relation crosses administrative domains (e.g., the URI has a  
different authority than the current document).
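For concreteness, the five relations above might appear in an archive document as in the fragment below; the small helper shows how a client could collect them. This is a sketch against the proposal as written in this message, not normative text, and every URI is invented for illustration:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Hypothetical archive document carrying all five proposed relations;
# the URIs are invented for illustration.
doc = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="current"  href="http://example.org/index.atom"/>
  <link rel="first"    href="http://example.org/archive/1.atom"/>
  <link rel="previous" href="http://example.org/archive/41.atom"/>
  <link rel="next"     href="http://example.org/archive/43.atom"/>
  <link rel="last"     href="http://example.org/archive/50.atom"/>
</feed>"""

def link_map(feed_xml):
    """Collect rel -> href for the navigation links in a feed document."""
    feed = ET.fromstring(feed_xml)
    return {l.get("rel"): l.get("href") for l in feed.findall(ATOM + "link")}
```

A client wanting the subscribable feed would dereference `link_map(doc)["current"]`; the other four only navigate the series.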



--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations -- Ready to go?

2005-10-21 Thread Mark Nottingham


That's what I was trying to do here:

 -  Description: A URI that refers to a feed document containing a  
set of the most recent entries in the feed. This URI is intended to  
be subscribed to to keep abreast of recent changes in the feed.  
When different from the URI of the document where it occurs, it  
indicates that its value should be used for this purpose in place  
of the current document's URI.


Any suggestions?


On 21/10/2005, at 8:19 AM, Tim Bray wrote:


On Oct 21, 2005, at 7:38 AM, James Holderness wrote:


The idea being that if you were to come across an archived Atom  
document (however that might happen), the presence of this link  
would, (a) let you know that it was an archive document and thus  
shouldn't be subscribed to, and (b) provide you with a URL with  
which you could subscribe to the actual feed if you so chose.




Makes sense, but not self-evident.  I would think that the  
usefulness of this thing would be improved by a few words of  
explanation for those who come upon it without knowing the history.  
-Tim



--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations -- Ready to go?

2005-10-21 Thread Mark Nottingham
  
example); such a feed has the advantage over paging that it allows
direct access to a specific point in time inside the feed pages.
Each archived set of entries could, for example, cover one or two
weeks, so a user could navigate through the feed state or feed
history not only by going from page to page but also by accessing
archived chunks via an index or table of contents.


--
Thomas Broyer








--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations -- Ready to go?

2005-10-21 Thread Mark Nottingham


How about:

 -  Description: A URI that refers to a feed document containing a  
set of the most recent entries in the feed. This URI is intended to  
be subscribed to to keep abreast of recent changes in the feed; when  
different from the URI of the document where it occurs, it indicates  
that its value should be used for this purpose in place of the  
current document's URI. For example, an archived feed document might  
contain a subscribe relation that points to the subscription feed's  
location, so that clients subscribe to the appropriate link. Note  
that the self relation was designed for a similar purpose, but is  
not suitable for that use in other feeds, whereas this relation can  
be used in those situations.




On 21/10/2005, at 4:16 PM, Tim Bray wrote:


On Oct 21, 2005, at 3:13 PM, Mark Nottingham wrote:


 -  Description: A URI that refers to a feed document containing  
a set of the most recent entries in the feed. This URI is  
intended to be subscribed to to keep abreast of recent changes in  
the feed. When different from the URI of the document where it  
occurs, it indicates that its value should be used for this  
purpose in place of the current document's URI.




Any suggestions?



Yes.  Acknowledge the specific case of an archival feed, an example  
is worth a thousand words.


And discuss why this exists when Atom already has <link rel="self"/>,
specifically designed to support auto-subscribe. -Tim







--
Mark Nottingham http://www.mnot.net/



General/Specific [was: Feed History / Protocol overlap]

2005-10-19 Thread Mark Nottingham


next
next-chunk
next-page
next-archive
next-entries
are all workable for me.

I think the real question is still (unfortunately) how specific it  
should be.


I think there are merits to both sides; the relative cost of a  
specific term isn't much, and the harm of a general term is largely  
theoretical at this point. So, I'm at a point where I'm more  
interested in moving forward than in a particular solution.


Perhaps people could +1/-1 the following options:

* Reconstructing a feed should use:
   a) a specific relation, e.g., prev-archive
   b) a generic relation, e.g., previous

(these are just examples; once we figure out how specific it should  
be, we can figure out a term more easily)



I'm +1 on both.


On 18/10/2005, at 11:44 PM, Thomas Broyer wrote:





Antone Roundy  wrote:



On Oct 18, 2005, at 5:13 PM, Robert Sayre wrote:



rel: next
definition: A URI that points to the next feed in a series of feeds.
For example, in a reverse-chronological series of feeds, the 'next'
URI would point deeper into the past.




Ohh, nice readability.  Perhaps a few refinements:

A URI that points to the next in a series of Feed documents, each
representing a segment of the same feed.  For example, in a reverse-
chronologically ordered series of Feed documents, the 'next' URI
would point to the document next further in the past.




+1, *this* is paging.

We could add another example, e.g. sorted by relevance (within a  
search

result) or priority…

If you want to link between different states of "Top 100" feeds
(October, September, August, etc.), then use something like
@rel="archives" or @rel="history", or define a @rel="previous-archive"
if you really want to navigate directly to the other feed without
having to go through a table-of-contents feed.

If some people here prefer "next-chunk" or "next-page" to just
"next", why not; my mind is open…

--
Thomas Broyer







--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



--
Mark Nottingham http://www.mnot.net/




Re: Feed History / Protocol overlap

2005-10-18 Thread Mark Nottingham


Can you substantiate that with links to the appropriate portions of  
the current protocol draft?



On 18/10/2005, at 9:40 AM, Robert Sayre wrote:



I think the navigation elements of
draft-nottingham-atompub-feed-history-04.txt overlap with Atom
protocol navigation and deployed APP beta implementations. In fact, I
pointed this out way back in April 2005. I don't think anything has
changed.

In http://www.mnot.net/blog/2005/04/12/feed_state Mark Nottingham  
wrote:


Way back when I put the first Atom drafts together, I included a  
placeholder for
a section that I hoped would allow reconstruction of feed state.  
Presently, this often
isn't necessary, because you have to be away for a seriously long  
time

(e.g., on vacation) before you actually miss anything.


...

In other words, if you happen to look away for too long you miss  
information,
essentially making the channel leaky. To that end, I put together  
a proposal and a
demonstration feed (in fact this very blog's feed, dear reader),  
in the hopes of

convincing people that this is a real issue. Silence ensued, and the
ATOMPUB WG declined my proposal.



to which Robert Sayre replied:

Hmm. Your proposal concerned a couple link relations, right? Those  
would be
easy to add to the format at anytime, and... Blogger and 6A have  
both asked for
similar functionality on  the protocol side. Seems like more of a  
server layout

and protocol problem, anyway.



to which Mark Nottingham replied:


Robert —
The only problem I have with that is that AFAIK so far, I don't  
need to know about the
protocol document to consume an Atom document; it's only when you  
want to

manipulate a feed that you have to work on that side of the house.

That said, it's good to hear that others want this too.



to which Robert Sayre replied:

Well, wouldn't it be nice if you didn't need to know about the  
protocol document to
perform any of the protocol's read operations? That's my thinking.  
View-source on a

couple of link relations should be enough to pick it up.



Robert Sayre






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems


BEAWorld 2005: coming to a city near you.  Everything you need for SOA and 
enterprise infrastructure success.


Register now at http://www.bea.com/4beaworld


London 11-12 Oct| Paris13-14 Oct| Prague18-19 Oct |Tokyo 25-26 Oct| Beijing 7-8 
Dec



Re: Feed History / Protocol overlap

2005-10-18 Thread Mark Nottingham


OK, well, I'm not terribly fussed by who registers them, but they  
need to be carefully defined, and it wasn't at all clear that the  
OpenSearch document did that.


Considering that there's a need for them sooner rather than later,  
would you have a problem with registering the link relations (as  
discussed in a separate thread) separately, before APP is done? If  
so, why?




On 18/10/2005, at 11:22 AM, Robert Sayre wrote:


Oh that's odd. They've gone and deleted them. I can tell you that my
general impression of the current atom-protocol list was that we would
align with Amazon OpenSearch. In any case, I think it would be unwise
for the IETF to duplicate APP navigation.



--
Mark Nottingham http://www.mnot.net/



Re: Feed History / Protocol overlap

2005-10-18 Thread Mark Nottingham



On 18/10/2005, at 11:38 AM, Robert Sayre wrote:


OK, well, I'm not terribly fussed by who registers them, but they
need to be carefully defined, and it wasn't at all clear that the
OpenSearch document did that.



I think maybe we have a difference of opinion on what's needed here.


Could you elaborate?

Thanks,

--
Mark Nottingham http://www.mnot.net/



Re: Feed History / Protocol overlap

2005-10-18 Thread Mark Nottingham


I'm confused; the current proposal (below) doesn't have that text in  
it; for example, the definition of previous is:


A stable URI that, when dereferenced, returns a feed document  
containing a set of entries that sequentially precede those in the  
current document. This can be thought of as specific to those  
entries; in other words, it represents a fixed section of the feed,  
rather than a sliding window over it. Note that the exact nature of  
the ordering between the entries and documents containing them is  
not defined by this relation; i.e., this relation is only relative.




On 18/10/2005, at 12:14 PM, Robert Sayre wrote:


On 10/18/05, Mark Nottingham [EMAIL PROTECTED] wrote:



On 18/10/2005, at 11:38 AM, Robert Sayre wrote:



OK, well, I'm not terribly fussed by who registers them, but they
need to be carefully defined, and it wasn't at all clear that the
OpenSearch document did that.




I think maybe we have a difference of opinion on what's needed here.



I vastly prefer your first definition of next/prev. The "should not
change over time" stuff is not testable. For example, a template
change in Movable Type followed by a "Rebuild All" makes every single
archive change. As a client implementor, I can't see how the
following text helps me:

The set of entries in this document should not change over time;
i.e., this link points to a stable snapshot of entries, or an archive
of feed entries.

I think what you're describing is a sensible server implementation
strategy, because once the URI is visited, it should return 304 in the
future.

Robert Sayre





--
Mark Nottingham http://www.mnot.net/



Re: Feed History / Protocol overlap

2005-10-18 Thread Mark Nottingham



On 18/10/2005, at 12:38 PM, Robert Sayre wrote:


A stable URI that, when dereferenced, returns a feed document
containing a set of entries that sequentially precede those in the
current document.


I already have code that uses next for this. Why do we want to  
change it?


Why would your code have to change?


This can be thought of as specific to those
entries; in other words, it represents a fixed section of the feed,
rather than a sliding window over it. Note that the exact nature of
the ordering between the entries and documents containing them is
not defined by this relation; i.e., this relation is only relative.


1.) I don't understand why one feed is making assertions about the
stability of another, when your draft provides explicit signals for
this.


If we go down this road, my draft will be, at the most, a users'
guide to the link relations. It probably won't be necessary at all,
except perhaps for <fh:incremental>false</fh:incremental>.




2.) I still don't see how this helps me write a client.


What are you looking for? People said they wanted to use atom:link,  
so I'm trying to accommodate that. People said they wanted the  
relations to be generic, so I'm trying to accommodate that.



3.) I don't think the notion of fixed section is helpful.
fh:archive is good, that means don't subscribe... I get that.


It characterises the nature of the feed that's being linked to.


--
Mark Nottingham http://www.mnot.net/



Re: Feed History / Protocol overlap

2005-10-18 Thread Mark Nottingham


Please disambiguate original.

On 18/10/2005, at 12:49 PM, James M Snell wrote:

+1 on all of Robert's comments.  While I'm ok with the current
version, I was much happier with the original.

Robert Sayre wrote:



On 10/18/05, Mark Nottingham [EMAIL PROTECTED] wrote:



I'm confused; the current proposal (below) doesn't have that text in
it; for example, the definition of previous is:




OK, then I am confused.




A stable URI that, when dereferenced, returns a feed document
containing a set of entries that sequentially precede those in the
current document.




I already have code that uses next for this. Why do we want to  
change it?





This can be thought of as specific to those
entries; in other words, it represents a fixed section of the feed,
rather than a sliding window over it. Note that the exact nature of
the ordering between the entries and documents containing them is
not defined by this relation; i.e., this relation is only relative.




1.) I don't understand why one feed is making assertions about the
stability of another, when your draft provides explicit signals for
this.

2.) I still don't see how this helps me write a client.

3.) I don't think the notion of fixed section is helpful.
fh:archive is good, that means don't subscribe... I get that.

Robert Sayre











--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-17 Thread Mark Nottingham



On 17/10/2005, at 1:20 AM, Eric Scheid wrote:

I'd prefer that our use of 'prev' and 'next' be consistent with  
other uses
elsewhere, where 'next' traverses from the current position to the  
one that

*follows*, whether in time or logical order. Consider the use of
'first/next/prev/last' with chapters or sections rendered in HTML.


I'm starting to think that the way to fix this is to make it more  
specific, so that it doesn't get conflated with other uses; e.g.,  
prev-archive.



--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-17 Thread Mark Nottingham


Exactly.

I don't want this draft to become the all-singing, all-dancing feed
model review; although there's lots of interesting stuff there, it's
way too ambitious for my tastes (and I think I detect the smell of a
tarpit faintly wafting...). The feed history case gets us to a nice
80+% point; the rest can come in separate vehicles.


Any response to 'prev-archive'?

Cheers,


On 17/10/2005, at 11:49 AM, Thomas Broyer wrote:



James Holderness wrote:

5. Is the issue of whether a feed is incremental or not (the  
fh:incremental

element) relevant to this proposal?



non-incremental feeds wouldn't be paged, by definition, would they?



This has been debated. There have been those who have expressed an  
interest in having next and prev links traverse an archive of old  
non-incremental feeds. Say you have a feed with the top 10 books  
for this month. The next link (or prev link, depending on your  
preference) would point to the archive document with the top 10  
books from last month.


I think that Mark's concerns were that readers/aggregators  
generally keep a local history of the feeds they're subscribed to.  
fh:incremental=no would explicitly tell them not to do so.


--
Thomas Broyer







--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems





Re: Feed History -04

2005-10-17 Thread Mark Nottingham


I already get the same results with just one link relation --
'prev-archive' -- instead of three.


The algorithm for combining results is an important issue, but an  
orthogonal one.



On 17/10/2005, at 12:37 PM, James M Snell wrote:

Mark, I honestly believe that feed history can be achieved using a  
very simple model:


a. incremental=true... which means that entries (posted at any  
time) may exist in other feed documents

b. start/next/prev... points to other feeds where entries may be found

if I point my newsreader to a feed document that has
incremental=true, I would look for a "start" link.  I would process
the start feed then begin walking my way through the "next" links to
build the history.  The start feed MAY have the most recent entries
or MAY have the oldest entries; it doesn't matter.  My Atom
processor would just Do The Right Thing with whatever entries it
finds in the feeds as it walks through the linked list of feed
documents.  How does the Atom processor know when it has the
complete history?  Either a) the user tells it to stop or b) it
reaches a feed without a "next" link.


There shouldn't be any requirement that the entries in a feed (or  
even the feed documents themselves) have to be in a specific order  
in order to reconstruct the history.  The minimum requirement is  
only that we're able to find the feed documents we need.  The Atom  
processor can figure the rest out from there.
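The walk James describes, including the circular-reference guard that this thread's security considerations call for, might look like the sketch below. `fetch` is an assumed helper mapping a URI to a parsed `(entries, links)` pair; this illustrates the algorithm, not a complete Atom processor:

```python
def reconstruct(start_uri, fetch, max_docs=1000):
    """Follow 'next' links from a starting feed document, collecting
    entries by atom:id until no 'next' link remains.  The visited set
    (and the document cap) guards against circular references."""
    entries, seen, uri = {}, set(), start_uri
    while uri and uri not in seen and len(seen) < max_docs:
        seen.add(uri)
        doc_entries, links = fetch(uri)  # -> ([(id, entry), ...], {rel: href})
        for entry_id, entry in doc_entries:
            entries.setdefault(entry_id, entry)  # first occurrence wins
        uri = links.get("next")
    return entries
```

Entry order within and across documents doesn't matter here, matching James's point: the processor just merges whatever it finds as it walks the linked list.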


- James

Mark Nottingham wrote:




Exactly.

I don't want this draft to become the all-singing, all-dancing
feed model review; although there's lots of interesting stuff
there, it's way too ambitious for my tastes (and I think I detect
the smell of a tarpit faintly wafting...). The feed history case
gets us to a nice 80+% point; the rest can come in separate
vehicles.


Any response to 'prev-archive'?

Cheers,


On 17/10/2005, at 11:49 AM, Thomas Broyer wrote:




James Holderness wrote:


5. Is the issue of whether a feed is incremental or not (the   
fh:incremental

element) relevant to this proposal?




non-incremental feeds wouldn't be paged, by definition, would  
they?





This has been debated. There have been those who have expressed  
an  interest in having next and prev links traverse an archive  
of old  non-incremental feeds. Say you have a feed with the top  
10 books  for this month. The next link (or prev link, depending  
on your  preference) would point to the archive document with  
the top 10  books from last month.



I think that Mark's concerns were that readers/aggregators   
generally keep a local history of the feeds they're subscribed  
to.  fh:incremental=no would explicitly tell them not to do so.


--
Thomas Broyer








--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems











--
Mark Nottingham http://www.mnot.net/



Are Generic Link Relations Always a Good Idea? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


Robert,

It's a matter of personal preference as to whether one likes 'prev'  
or 'next'; if there had been wide implementation and a good  
specification of what MarkP did, I could see a strong argument for  
using it.


As it is, no one has even noticed it had similarity to this proposal  
until a few days ago, and it looks like there are a number of people  
who have strong feelings each way.


OTOH, how specific the relation is *is* a technical issue; could you  
expand on what you see as the 'tower of babel' problem?


My concern is that if there is more than one use of a link relation  
like 'next' or 'prev', those uses could conflict. For example, if I  
use 'prev' for Feed History, will that cause a problem with feeds  
using Amazon OpenSearch if they want to use it in a slightly  
different way? To put it in Thomas' terms, what if there are  
different concepts of paging using the same terms -- which there seem  
to be already?


This shows up perfectly with the whole "next or previous?"
discussion. If we don't assign specific, functional semantics to the
links, people will interpret -- and use -- them differently.


This is why I'm leaning towards prev-archive.


On 17/10/2005, at 1:15 PM, Robert Sayre wrote:


On 10/17/05, Mark Nottingham [EMAIL PROTECTED] wrote:



I already get the same results with just one link relation -- 'prev-
archive' -- instead of three.



Why should one prefer your proposal to what's in this feed:
http://diveintomark.org/xml/2004/03/index.atom

'prev-archive' is more specific, and I think that's a bad thing. It
seems to introduce tower of babel problems.

Robert Sayre





--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


Good point.

On 17/10/2005, at 2:54 PM, James M Snell wrote:

+1. An additional security concern would be the potential for  
circular references



--
Mark Nottingham http://www.mnot.net/



Re: Are Generic Link Relations Always a Good Idea? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


They seem similar. But, what if you want to have more than one paging  
semantic applied to a single feed, and those uses of paging don't  
align? I.e., there's contention for prev/next?


If no one shares my concern, I'll drop it... as long as I get to say  
I told you so if/when this problem pops up :)




On 17/10/2005, at 3:21 PM, Thomas Broyer wrote:


I don't think there are different concepts of paging.

Paging is navigation through subsets (chunks) of a complete set of  
entries.


If the complete set represents all the entries ever published
through an ever-changing feed document (what a feed currently is:
you subscribe with a URI, and the document you get when dereferencing
that URI changes like a sliding window over a set of entries), then
paging allows for feed state reconstruction.
In other terms, feed state reconstruction is a facet of paging, an  
application to non-incremental feeds.



I think it's worth awaiting consensus on previous/next or
forwards/backwards, first/last or head/tail, etc. and having a paging
spec (or just an IANA registration, it doesn't really matter to me),
and orthogonally defining an fh:incremental extension (fh:incremental
will just change newsreaders' behaviour, not the paging concept).

It seems James is having the same feeling…



--
Mark Nottingham http://www.mnot.net/




Re: Are Generic Link Relations Always a Good Idea? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


Robert,

As I said before, if the WG can reach consensus, I'm happy with any
old term. I hadn't seen Mark's proposal till a few days ago, and a
mention in an xml.com article does not, in my opinion, a
spec-in-stone make. My only pushback on "next" is that to me, it
seems counterintuitive -- same as your pushback on "prev". *shrug*


The SixApart people have publicly pointed to FH, so I don't think
they're particularly fussed about any one approach (not to put words
in their mouth). I wasn't able to find a TP feed that uses
rel="next"; do you have a link to one?


If the WG registers a set of generic link relations (I still have
that concern, but again, if there's consensus in the WG, I'm happy to
abide by it), it effectively reduces FH to a users' guide for one use
of those extensions, and would probably say something like "walk down
any next or prev link you see in the subscription feed". The only
normative bit would probably be fh:incremental.


Cheers,


On 17/10/2005, at 3:21 PM, Robert Sayre wrote:


I think the spec is perfectly clear. Is there something about it you
don't understand? I do think your addition of an indicator that the
feed is an archive is a good idea. I have to disagree with your
characterization of deployment. Most AtomAPI implementations work this
way--see for example typepad.com.



--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham



On 17/10/2005, at 4:07 PM, Thomas Broyer wrote:


- Attribute Value: first
- Description: A stable URI that, when dereferenced, returns a  
feed document containing those entries furthest preceding those in  
the current document at the time it was minted. Note that the  
exact nature of the ordering between the entries and documents  
containing them is not defined by this relation; i.e., this  
relation is only relative.

- Expected display characteristics:
- Security considerations:

- Attribute Value: last
- Description: A stable URI that, when dereferenced, returns a  
feed document containing those entries furthest following those in  
the current document at the time it was minted. Note that the  
exact nature of the ordering between the entries and documents  
containing them is not defined by this relation; i.e., this  
relation is only relative.

- Expected display characteristics: Undefined.
- Security considerations:

+0.5 (adding the circular references issue raised by James),  
because some people will use first to link to the live feed  
(the one you subscribe to) and next to link to the first archive  
document and so on, and some will use last and prev for the  
exact same roles…

The given definition is not precise enough.


A stable URI was intended to capture that, but I see that wasn't  
good enough. How about:


- Attribute Value: first
- Description: A stable URI that, when dereferenced, returns a feed  
document containing the set of entries furthest preceding those in  
the current document at the time it was minted. The set of entries in  
this document should not change over time; i.e., this link points to  
a stable snapshot of entries, or an archive of feed entries. Note  
that the exact nature of the ordering between the entries and  
documents containing them is not defined by this relation; i.e., this  
relation is only relative.

- Expected display characteristics: ...
- Security considerations: ...

Another thought would be first-archive, last-archive, prev-archive and next-archive (just expanding a previous thought).



And wrt prev, why not previous? both could also be registered  
as aliases…


I'd prefer one or the other; don't care much which it is, but two  
seems wasteful. HTTP-WG didn't alias Referer even tho it's spelled  
incorrectly, for example. That worked out OK.




- Attribute Value: subscribe
- Description: A stable URI that, when dereferenced, returns a  
feed document containing the most recent entries in the feed. This  
URI is intended to be subscribed to, to keep abreast of changes in  
the feed. When different from the URI of the feed document it  
exists in, it indicates a URI that should be used for this purpose  
in place of the current document's URI.

- Expected display characteristics: Undefined.
- Security considerations: Users should always be informed of the  
actual URI they are subscribing to, and subscription should only  
take place when it is explicitly requested.


Depends whether @rel=self was really meant for subscribing and  
the spec wording is not precise enough about it; this could then be  
fixed with an errata rather than create a new link relation…


I think there's value in the current reading of self; it's  
sometimes useful for a document to know what URI it's available at.  
Also, when it occurs in another feed, self is a very non-obvious  
name for what's happening.


Otherwise, +0.5, because it seems to overlap @rel=first (or  
last?) – or I missed something…


I think we're kind of short on use cases for first and last, but  
people seem to want them. 'subscribe' is more explicit; as they're  
written, 'first' and 'last' should definitely NOT be subscribed to  
(because the set of entries in them won't change).


Cheers,

--
Mark Nottingham http://www.mnot.net/




Re: New Link Relations? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


So what happens when you need the rel=self (as currently defined)  
of an archive feed?



On 17/10/2005, at 4:28 PM, Eric Scheid wrote:



On 18/10/05 9:07 AM, Thomas Broyer [EMAIL PROTECTED] wrote:



Depends whether @rel=self was really meant for subscribing and the
spec wording is not precise enough about it; this could then be fixed
with an errata rather than create a new link relation…



IIRC, it came into existence to solve the feed subscription problem.
However, I don't recall that the issue of feed archives featured  
much in

that discussion, and that thus the now understood problem of 'self' vs
'subscribe' wasn't envisaged.

Fortunately, the link relation 'self' was defined in such a woolly  
way we
could get away with re-purposing it. A few articles here or there,  
a bit of
blog chatter, and the arrival of the fabled Developers Guide and  
we'd be

set.

I'd think this would be favourable to having to come up with a  
different

pair of relations, like

'self'   = what you subscribe to,
   may not look anything like the chunk in front of  
you


'this-chunk' = link to what you are looking at,
   not to be confused with 'self'

(Maybe the Developers Guide will have a chapter called Up Is Down  
- The New Reality, which would explain that a link to 'self' doesn't  
point to itself, we use 'next' to go backwards, and 'alternate' for  
feed discovery may not point to actual alternates of the content in  
front of you ;-)


Otherwise, +0.5, because it seems to overlap @rel=first (or  
last?) –

or I missed something…



There's nothing wrong with having an overlap like this, because  
they don't
always overlap. Consider the 'subscribe' link to nature.com/nm/  
which I
described earlier - two different URIs, but the same eventual  
document.


e.







--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems


BEAWorld 2005: coming to a city near you.  Everything you need for SOA and 
enterprise infrastructure success.


Register now at http://www.bea.com/4beaworld


London 11-12 Oct| Paris13-14 Oct| Prague18-19 Oct |Tokyo 25-26 Oct| Beijing 7-8 
Dec



Re: New Link Relations? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


Requiring a separate element to always be present is a non-starter;  
what is the point of a reusable link relation if you have to use it  
with another element to contextualise it? I'm really stretching to  
see any benefit from this approach.


prev-archive (or maybe prev-entries?) is starting to look better, as  
is fh:prev/.



On 17/10/2005, at 9:17 PM, James M Snell wrote:


In other words,

this does not imply a feed history thing...
 <feed>
   ...
   <link rel="next" href="..." />
 </feed>

this does...
 <feed>
   ...
   <fh:incremental>true</fh:incremental>
   <link rel="next" href="..." />
 </feed>




--
Mark Nottingham http://www.mnot.net/



Re: New Link Relations? [was: Feed History -04]

2005-10-17 Thread Mark Nottingham


+1


On 17/10/2005, at 7:57 PM, Eric Scheid wrote:



On 18/10/05 9:53 AM, Mark Nottingham [EMAIL PROTECTED]  
wrote:




So what happens when you need the rel=self (as currently defined)
of an archive feed?



The current definition being ...

 The value self signifies that the IRI in the value of the href
 attribute identifies a resource equivalent to the containing  
element.


thus a link with @rel='self' in the feed element would link to that
archive feed document. Similarly, a link with @rel='self' in the  
entry

element would link to a resource document of that particular entry.

Thus (in context of feed)

'self'  = identifies a resource equivalent to this feed
'subscribe' = identifies the resource to subscribe to

The same holds true for archive feeds and the current sliding  
window chunk,

which makes life easier.

e.






--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-15 Thread Mark Nottingham



On 14/10/2005, at 10:24 PM, James M Snell wrote:
My answer would be: if last is used, it's a closed set; if  
last  is not used, it's an open set.


Can you walk me through a use case where this would be desirable?   
E.g. what would the subscription URI be, would any of the entries  
be  updated, and how if so? In what scenario would having a closed  
set  feed be useful?


An archive for a blog that is no longer being updated? An archive  
of entries pertaining to an event with a fixed endpoint? A  
discussion forum that has been closed.


How are implementations supposed to use this information? Stop  
polling the feed? Consider its items immutable? I'm concerned if  
something so innocent-looking as last has these sorts of implications.



The first may not be relevant in the Feed history case but  
does  come into play when thinking about paged search results,  
sequences  of linked, non-incremental feeds, etc.


How? Can you give us a bit more flesh for the use case? Again,  
I'm  not saying it's bad, but I don't see how it's useful in a  
feed (as  opposed to a Web page).


Suppose that I perform a search on some feed searching service and  
get back an entry from a feed in the middle of a set.  I see the  
next and previous links and realize that the entry I found is part  
of a larger set.  In order to get the full context, I want to  
navigate to the beginning of the set and work my way down through  
the links to the end.


You seem to have a human in mind for your use case, when these  
relations are just there to allow machines to reconstruct the feed.  
In other words, the ordering used for presentation to people is  
logically separate from the ordering of feed documents (although the  
reconstructed, native feed ordering may be used). As such, your use  
case could be met without specifying 'first'; just follow the normal  
walk-back-from-the-subscription-feed method to reconstruct state.
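The walk-back-from-the-subscription-feed method mentioned above could be sketched roughly as follows. This is a hypothetical illustration only; the dict-based document shape, the fetch function, and the bare "prev" relation name are assumptions drawn from this discussion, not from any spec text:

```python
# Hypothetical sketch: reconstruct full feed state by walking
# backwards from the subscription document via rel="prev" links.
# Documents are modeled as dicts; fetch() maps a URI to a document.

def reconstruct(subscription, fetch):
    """Collect all entries, newest document first."""
    entries = {}
    doc = subscription
    while doc is not None:
        for entry in doc["entries"]:
            # An entry already seen in a newer document wins.
            entries.setdefault(entry["id"], entry)
        prev = doc["links"].get("prev")
        doc = fetch(prev) if prev is not None else None
    return entries
```

Note that no 'first' relation is needed for this: the subscription document is the only entry point, and ordering between documents comes solely from the prev chain.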



2) What's the relationship between these feed documents and the   
feed  document that people subscribe to?


I think the subscription feed needs to be pinned to one end of   
the  set (which is what FH does now). Otherwise, it becomes   
difficult to  figure out whether you have the complete set or  
not  by polling.


I think this will be dependent on the context in which the link   
rels are used.  The subscription link rel you've suggested is  
a  good solution to this problem.  Within any of the feeds in the  
set,  the subscription link rel would point to the feed that  
should be  subscribed to -- regardless of whether the  
subscription feed  appears at the start or end of the set.


What would the algorithm be for assuring that you have the  
complete  state of the feed, without necessitating traversal of  
the entire feed  every time?


Not sure. I suppose that it would be the same as it is with your  
existing fh:prev element.


It would have to be modified to specify the rules for walking forward  
from the first document, and then subsequent updates would need to be  
caught by working backwards from the subscription document using prev.
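The catch-updates-by-working-backwards step could look something like this. A sketch under assumed names (the document shape and fetch function are inventions for illustration); the property it relies on is the one discussed in this thread, namely that archive documents are stable snapshots:

```python
# Sketch of an incremental poll: the subscription document is always
# re-fetched, but the walk down rel="prev" stops at the first archive
# we have already stored, since archives are stable snapshots.

def poll(state, fetch, subscription_uri):
    doc = fetch(subscription_uri)
    for entry in doc["entries"]:
        state["entries"][entry["id"]] = entry
    prev = doc["links"].get("prev")
    while prev is not None and prev not in state["archives"]:
        archive = fetch(prev)
        state["archives"].add(prev)
        for entry in archive["entries"]:
            state["entries"].setdefault(entry["id"], entry)
        prev = archive["links"].get("prev")
    return state
```

This is why pinning the subscription feed to one end of the set matters: each poll touches only the subscription document plus any archives that appeared since the last poll, rather than traversing the whole chain every time.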


My current take is that last is actively harmful if it implies a  
closed set, and misleading if it doesn't, and next and first are  
mostly harmless, but don't really have supporting use cases and add  
complexity to both the spec and implementations. I'd really like to  
keep it simple. Are there any other use cases?



--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-15 Thread Mark Nottingham


OK, but that still leaves us with the question below -- who's doing  
the paging, and why is it useful to have multiple ways around the thing?



On 15/10/2005, at 7:25 PM, Eric Scheid wrote:



On 16/10/05 6:54 AM, Mark Nottingham [EMAIL PROTECTED] wrote:



Can you walk me through a use case where this would be desirable?
E.g. what would the subscription URI be, would any of the entries
be  updated, and how if so? In what scenario would having a closed
set  feed be useful?



An archive for a blog that is no longer being updated? An archive
of entries pertaining to an event with a fixed endpoint? A
discussion forum that has been closed.



How are implementations supposed to use this information? Stop
polling the feed? Consider its items immutable? I'm concerned if
something so innocent-looking as last has these sorts of  
implications.




perhaps a better example would then be a feed of search results,  
which at
any time of query is a finite and closed set, and also designed to  
be paged

through.

e.






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: Feed History -04

2005-10-14 Thread Mark Nottingham


The approach I took in -04 was to say that the pseudo-ordering  
introduced by the mechanism was ONLY meaningful for the purposes of  
reconstituting the feed; it's still up to the feed itself to  
determine what the ordering of entries means (or doesn't). That  
avoids a lot of problems, although it does require some careful wording.


Also -- I'd think that the last link is already covered by self,  
no? If not, there's some pretty serious confusion about what 'self'  
means.



On 13/10/2005, at 8:01 PM, Antone Roundy wrote:



On Oct 13, 2005, at 7:58 PM, Eric Scheid wrote:


On 14/10/05 9:18 AM, James M Snell [EMAIL PROTECTED] wrote:




Excellent.  If this works out, there is an opportunity to merge the
paging behavior of Feed History, OpenSearch and APP collections  
into a

single set of paging link relations (next/previous/first/last).




'first' or 'start'?

Do we need to define what 'first' means though?  I recall a  
dissenting
opinion on the wiki that the 'first' entry could be at either end  
of the

list, which could surprise some.



Yeah, that's a good question.  Maybe calling them top and  
bottom would work better.  Considering that the convention is to  
put the newest entry at the top of a feed document, top might be  
more intuitively understandable as being the new end.  You might  
also rename next and previous (or is it previous and next?)  
to down and up.  There's SOME chance of that getting confused  
with hierarchical levels, but I could live with that.







--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems





Re: Feed History -04

2005-10-14 Thread Mark Nottingham


On 14/10/2005, at 9:22 AM, Lindsley Brett-ABL001 wrote:
I have a suggestion that may work. The issue of defining what is  
prev and next with respect to a time ordered sequence seems to  
be a problem. How about defining the link relationships in terms of  
time - such as newer and older or something like that. That  
way, the collection returned should be either newer (more recent  
updated time) or older (later updated time) with respect to the  
current collection doc.


A feed isn't necessarily a time-ordered sequence. Even a feed  
reconstructed using fh:prev (or a similar mechanism) could have its  
constituent parts generated on the fly, e.g., in response to a search  
query.


The ordering of the entries may not matter, but the ordering of the  
documents does. Starting with the active feed document, you need to  
know whether you should be following a series of prev links or  
next links in order to traverse the archives back through time.  
While your feed history spec used prev for that purpose, previous  
implementations of atom:link appear to have used next.


I agree that it's important to honour the document order; that's what  
FH tries to do. I'm a little surprised to hear you say that people  
thought that this was previously 'next'; I'd never heard that (but  
will be happy to put a note in).


I was going to suggest that initially but I don't think it's  
strictly true. The spec says that self identifies a resource  
*equivalent* to the containing element. Considering that an  
archived document and the active feed document will quite likely  
have no entries in common I think it's a bit of a stretch to claim  
them equivalent. Derived would be a better relationship IMHO.


Hmm. Yeah, I see what you're saying. Actually, I think this is an  
opportunity -- if we define a new link relation to the subscription  
document, and specify that it can only occur in archive documents, it  
obviates the need for a separate fh:archive flag, which in turn means  
that you don't have to declare two namespaces to use fh in RSS  
archive documents -- which was one of the things making me reluctant  
to switch over to atom:link.


How about:

<atom:link rel="subscription" href="..." />

?


--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-14 Thread Mark Nottingham


That's what I thought too, but the words in the spec don't bear it  
out; a resource equivalent to the containing element is a little  
hard to interpret (there is no equivalence function for Web  
resources, by definition), but it's a lot closer to something you  
dereference to get the same thing as what's in the containing  
element than to something you dereference to get a potentially  
completely different thing.


Arguably, there is sometimes a use case for the current definition of  
self, so it's probably best to just define a new link relation.



On 14/10/2005, at 10:28 AM, Thomas Broyer wrote:


Mark Nottingham wrote:


How about:

<atom:link rel="subscription" href="..." />

?

I always thought this was the role of @rel=self to give the URI  
you should subscribe to, though re-reading the -11 it deals with a  
resource equivalent to the containing element.


1. Isn't a resource equivalent to the containing element the same  
as an alternate version of the resource described by the  
containing element?
2. If the answer to 1 is no, then what does a resource equivalent  
… mean? Is it really different from the URI you should subscribe  
to (at least if @type=application/atom+xml)?


--
Thomas Broyer



--
Mark Nottingham http://www.mnot.net/




Re: Feed History -04

2005-10-14 Thread Mark Nottingham


At first I really liked this proposal, but I think that the kind of  
confusion you're concerned about is unavoidable; the terms you refer  
to suffer from the same bottom-up vs. top-down ambiguity.


I think that defining the terms well and in relation to the  
subscription feed will help; after all, the terms don't surface in  
UIs, so it should be transparent.



On 14/10/2005, at 10:37 AM, Antone Roundy wrote:

Which brings me back to top, bottom, up and down.  In the  
OpenSearch case, it's clear which end the top results are going  
to be found.  In the syndication feed case, the convention is to  
put the most recent entries at the top.  If you think of a feed  
as a stack, new entries are stacked on top.  The fact that these  
terms are less generic and flexible than previous and next is  
both an advantage and a disadvantage.  I think the question is  
whether it's an advantage in a significant majority of cases or  
not.  What orderings would those terms not work well for?



--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems





Re: Feed History -04

2005-10-14 Thread Mark Nottingham


Right. A few questions that pop up:

1) Is it a closed or open set? If it's open (and I think 99% of feeds  
are), what does last mean?


My answer is that it's probably an open set, so last doesn't mean  
much that's useful (unless it's conflated with the subscription feed;  
see below).


2) What's the relationship between these feed documents and the feed  
document that people subscribe to?


I think the subscription feed needs to be pinned to one end of the  
set (which is what FH does now). Otherwise, it becomes difficult to  
figure out whether you have the complete set or not by polling.



On 14/10/2005, at 3:16 PM, James M Snell wrote:



The way I look at this is in terms of a single linked list of  
feeds.  The ordering of the entries within those feeds is  
irrelevant.  The individual linked feeds MAY be incremental (e.g.  
blog entries, etc.) or may be complete (e.g. lists, etc.).  Simply  
because feeds are linked, no assumption should be made as to  
whether or not the entries in those feeds share any form of ordered  
relationship.


<link rel="first" /> is the first feed in the linked list
<link rel="next" /> is the next feed in the linked list
<link rel="previous" /> is the previous feed in the linked list
<link rel="last" /> is the last feed in the linked list.

Terms like top, bottom, up, down, etc are meaningless in  
this model as they imply an ordering of the contents.


For feed history, it would work something like:

<feed>
  ...
  <link rel="self" href="...feed1" />
  <link rel="next" href="...next" />
  <link rel="last" href="...feed3" />
  ...
</feed>

<feed>
  ...
  <link rel="self" href="...feed2" />
  <link rel="previous" href="...feed1" />
  <link rel="next" href="...feed3" />
  <link rel="first" href="...feed1" />
  <link rel="last" href="...feed3" />
  ...
</feed>

<feed>
  ...
  <link rel="self" href="...feed3" />
  <link rel="previous" href="...feed2" />
  <link rel="first" href="...feed1" />
  ...
</feed>

- James

Mark Nottingham wrote:




At first I really liked this proposal, but I think that the kind  
of  confusion you're concerned about is unavoidable; the terms you  
refer  to suffer bottom-up vs. top-down.


I think that defining the terms well and in relation to the   
subscription feed will help; after all, the terms don't surface  
in  UIs, so it should be transparent.



On 14/10/2005, at 10:37 AM, Antone Roundy wrote:


Which brings me back to top, bottom, up and down.  In  
the  OpenSearch case, it's clear which end the top results are  
going  to be found.  In the syndication feed case, the convention  
is to  put the most recent entries at the top.  If you think of  
a feed  as a stack, new entries are stacked on top.  The fact  
that these  terms are less generic and flexible than previous  
and next is  both an advantage and a disadvantage.  I think the  
question is  whether it's an advantage in a significant majority  
of cases or  not.  What orderings would those terms not work well  
for?






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems












--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: Feed History -04

2005-10-14 Thread Mark Nottingham


This leads to:

Subscription feed:
  - can contain link/@rel=prev, OR
  - can contain fh:incremental = false

Archive feed:
  - can contain link/@rel=prev and/or link/@rel=next
  - can contain link/@rel=subscribe  (effectively gives you last)
  - link/@rel=subscribe has a semantic of if you want to  
subscribe to this feed, use the linked document, not this one.


The reconstruction algorithm is pretty much the same as in -04.
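Under this proposal, a client landing on any archive document can locate the feed to subscribe to directly. A minimal sketch, assuming a dict-based document shape (an invention for illustration, not the spec's wording):

```python
# Minimal sketch: in an archive document, rel="subscribe" points at
# the live subscription feed, so no separate "last" relation is needed.

def subscription_uri(doc):
    links = {link["rel"]: link["href"] for link in doc["links"]}
    return links.get("subscribe")
```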

The only dangling point is first. I'm not especially against it,  
but what's the use case?




On 14/10/2005, at 4:53 PM, Mark Nottingham wrote:



Right. A few questions that pop up:

1) Is it a closed or open set? If it's open (and I think 99% of  
feeds are), what does last mean?


My answer is that it's probably an open set, so last doesn't mean  
much that's useful (unless it's conflated with the subscription  
feed; see below).


2) What's the relationship between these feed documents and the  
feed document that people subscribe to?


I think the subscription feed needs to be pinned to one end of the  
set (which is what FH does now). Otherwise, it becomes difficult to  
figure out whether you have the complete set or not by polling.



On 14/10/2005, at 3:16 PM, James M Snell wrote:




The way I look at this is in terms of a single linked list of  
feeds.  The ordering of the entries within those feeds is  
irrelevant.  The individual linked feeds MAY be incremental (e.g.  
blog entries, etc.) or may be complete (e.g. lists, etc.).  Simply  
because feeds are linked, no assumption should be made as to  
whether or not the entries in those feeds share any form of  
ordered relationship.


<link rel="first" /> is the first feed in the linked list
<link rel="next" /> is the next feed in the linked list
<link rel="previous" /> is the previous feed in the linked list
<link rel="last" /> is the last feed in the linked list.

Terms like top, bottom, up, down, etc are meaningless in  
this model as they imply an ordering of the contents.


For feed history, it would work something like:

<feed>
  ...
  <link rel="self" href="...feed1" />
  <link rel="next" href="...next" />
  <link rel="last" href="...feed3" />
  ...
</feed>

<feed>
  ...
  <link rel="self" href="...feed2" />
  <link rel="previous" href="...feed1" />
  <link rel="next" href="...feed3" />
  <link rel="first" href="...feed1" />
  <link rel="last" href="...feed3" />
  ...
</feed>

<feed>
  ...
  <link rel="self" href="...feed3" />
  <link rel="previous" href="...feed2" />
  <link rel="first" href="...feed1" />
  ...
</feed>

- James

Mark Nottingham wrote:





At first I really liked this proposal, but I think that the kind  
of  confusion you're concerned about is unavoidable; the terms  
you refer  to suffer bottom-up vs. top-down.


I think that defining the terms well and in relation to the   
subscription feed will help; after all, the terms don't surface  
in  UIs, so it should be transparent.



On 14/10/2005, at 10:37 AM, Antone Roundy wrote:



Which brings me back to top, bottom, up and down.  In  
the  OpenSearch case, it's clear which end the top results are  
going  to be found.  In the syndication feed case, the  
convention is to  put the most recent entries at the top.  If  
you think of a feed  as a stack, new entries are stacked on  
top.  The fact that these  terms are less generic and flexible  
than previous and next is  both an advantage and a  
disadvantage.  I think the question is  whether it's an  
advantage in a significant majority of cases or  not.  What  
orderings would those terms not work well for?







--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems

 














--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: Feed History -04

2005-10-14 Thread Mark Nottingham



On 14/10/2005, at 8:01 PM, James Holderness wrote:

I never did understand this. Why is fh:incremental needed here?  
From a feed history point of view you either have a history (a prev  
link is present) or you don't. That's all an Atom processor needs  
in order to reconstruct the feed.


I get that a feed producer may want to provide a non-incremental  
feed (top 10, todo lists, playlists, etc), but I don't see what  
that has to do with feed history. Wouldn't that be better suited in  
a separate extension along with whatever other meta-information  
might be appropriate for non-incremental lists?


It's more relevant than it seems at first glance. Currently, most (if  
not practically all) feed aggregators will keep a history by default,  
without information otherwise. Introducing a standard extension to  
enable that necessitates that there be a way to say don't do that.



The only dangling point is first. I'm not especially against  
it,  but what's the use case?


I'm not especially for it, but it's theoretically possible that  
someone subscribing to a feed for the first time may want to  
download the full archives. Depending on their processing model,  
this may be more convenient starting with the oldest archive and  
working forwards in time


Yeah, that's pretty much where I'm at; supplying 'first' effectively  
gives *two* ways to reconstruct the state of a feed, meaning that both  
would need to be supported (and optimised) by implementations, which  
isn't so great unless there's a compelling need for it.


--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-14 Thread Mark Nottingham



On 14/10/2005, at 8:32 PM, James M Snell wrote:


1) Is it a closed or open set? If it's open (and I think 99% of  
feeds  are), what does last mean?


My answer would be: if last is used, it's a closed set; if last  
is not used, it's an open set.


Can you walk me through a use case where this would be desirable?  
E.g. what would the subscription URI be, would any of the entries be  
updated, and how if so? In what scenario would having a closed set  
feed be useful?


Separately, you say:
The first may not be relevant in the Feed history case but does  
come into play when thinking about paged search results, sequences  
of linked, non-incremental feeds, etc.


How? Can you give us a bit more flesh for the use case? Again, I'm  
not saying it's bad, but I don't see how it's useful in a feed (as  
opposed to a Web page).



2) What's the relationship between these feed documents and the  
feed  document that people subscribe to?


I think the subscription feed needs to be pinned to one end of  
the  set (which is what FH does now). Otherwise, it becomes  
difficult to  figure out whether you have the complete set or not  
by polling.


I think this will be dependent on the context in which the link  
rels are used.  The subscription link rel you've suggested is a  
good solution to this problem.  Within any of the feeds in the set,  
the subscription link rel would point to the feed that should be  
subscribed to -- regardless of whether the subscription feed  
appears at the start or end of the set.


What would the algorithm be for assuring that you have the complete  
state of the feed, without necessitating traversal of the entire feed  
every time?


Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-13 Thread Mark Nottingham


I've been considering moving feed history over to atom:link, but  
wanted to check with people who are currently using / referring to  
it, as well as with the RSS communities. Please give me a little time.



On 09/10/2005, at 9:06 PM, James M Snell wrote:




I've been considering asking the Opensearch folks if they would be  
willing to separate their next/previous/first/last link relations  
out to a separate spec that could be made a working group draft.   
The paging functionality they offer provides a solution to paging  
in the protocol and are generally useful across a broad variety of  
feed application cases.  Regardless, it would be very good to see  
these registered.


- James

James Holderness wrote:





In case anyone is interested, the OpenSearch Response draft can be  
found here:


http://opensearch.a9.com/spec/opensearchresponse/1.1/

The rel values they support include next, previous (not prev),  
start and end. They have a note next to each saying This value is  
pending IETF registration. Does that mean they've actually  
started some kind of registration process or they're just hoping  
to do so at some point in the future?


Another issue worth noting is that their example RSS feed is also  
using atom:link to provide this functionality.


Robert Sayre wrote:



No, but Amazon OpenSearch has been threatening to register it,  
FWIW. :)

















--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-09 Thread Mark Nottingham


Managing Feed State was a placeholder I put in the original Atom  
spec draft http://www.mnot.net/drafts/draft-nottingham-atom-format-00.html#rfc.section.4  
for just this kind of discussion. The WG couldn't come to a consensus  
on a mechanism (my proposal was  
http://www.intertwingly.net/wiki/pie/PaceFeedState), so we removed  
that section (earlier this year, IIRC).


On 09/10/2005, at 1:39 AM, James Holderness wrote:




Following up on the idea of using atom:link instead of fh:prev, I  
recently came across an old article by Mark Pilgrim on XML.com  
(http://www.xml.com/pub/a/2004/06/16/dive.html) in which he  
discusses the atom:link element and the various rel attributes it  
supports. He specifically brings up the issue of feed history and  
using next and prev to link archives together.


It sounded to me as if he was talking about an existing feature in  
the spec - it wasn't like he was proposing it as a new idea. So was  
this something that used to be part of an old version of the spec  
that was later removed? Or was this an early proposal that was  
never accepted into the spec?


Also of interest: there's a link from that article to his archives  
on diveintomark.org which actually include next and prev links in  
the feed. I'm almost inclined to add support for that just so I can  
access those old posts. There used to be some excellent articles on  
his site.








--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-10-09 Thread Mark Nottingham


Looks like the whole use URIs for non-registered values approach  
has already gone by the wayside. Oh, well.


Next time, it should just be URIs, period -- no shortcuts.


On 09/10/2005, at 3:41 PM, James Holderness wrote:


They have a note next to each saying "This value is pending IETF  
registration". Does that mean they've actually started some kind of  
registration process, or are they just hoping to do so at some point  
in the future?



--
Mark Nottingham http://www.mnot.net/



Re: Straw Poll: age:expires vs. dcterms:valid (was Re: Unofficial last call on draft-snell-atompub-feed-expires-04.txt)

2005-10-09 Thread Mark Nottingham


Yeah, that kind of tears it for me; we could profile it, but I'm less  
than convinced that the potential reuse is worth it (esp. when it's  
so trivial to map age:expires into dcterms:valid).


+1 to age:expires.


On 09/10/2005, at 10:21 AM, Phil Ringnalda wrote:




Mark Nottingham wrote:


I'm torn; on the one hand, dcterms is already defined, and already  
used in other feed formats; on the other hand, the syntax is  
less-than-simple.





Indeed. A perfectly, utterly valid dcterms:valid value:

start=George W. Bush; scheme=US Presidents; name=Bush II

I'm -1 on using Dublin Core for anything other than providing  
human-readable labels for human-readable data.


Phil Ringnalda










--
Mark Nottingham http://www.mnot.net/



Re: Unofficial last call on draft-snell-atompub-feed-expires-04.txt

2005-10-06 Thread Mark Nottingham


FWIW, the Media RSS extension cites  
http://web.resource.org/rss/1.0/modules/dcterms/#valid as a best practice.



On 29/09/2005, at 4:45 PM, James Holderness wrote:



Just a follow-up on the representation of Date Ranges in dublin  
core. I was under the mistaken impression that you needed to use a  
DCMI Period encoding to represent a date range, but apparently ISO  
8601 time intervals are perfectly valid. In order to clarify the  
situation, the DC Date Working Group has recently recommended the  
following replacement for the comment associated with the date  
element:


Typically, Date will be associated with the creation or  
availability of the resource. A date value may be a single date or  
a date range. Date values may express temporal information at any  
level of granularity (including time). Recommended best practice  
for encoding the date value is to supply an unambiguous  
representation of the single date or date range using a widely-  
recognized syntax (e.g., YYYY-MM-DD for a single date;  
YYYY-MM-DD/YYYY-MM-DD for a date range; YYYY-MM-DDTHH:MM to specify  
a single date and time down to the minute).
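[Editor's note: the three recommended forms above can all be handled with the standard library alone. A minimal sketch, assuming exactly the three syntaxes quoted (single date, ISO 8601 interval, date-and-time to the minute); `parse_dc_date` is an illustrative name, not from any spec:]

```python
from datetime import datetime

def parse_dc_date(value):
    """Return a (start, end) pair of datetimes for a date or interval."""
    if "/" in value:  # e.g. "2005-01-01/2005-12-31" -- a date range
        start_s, end_s = value.split("/", 1)
        return (parse_dc_date(start_s)[0], parse_dc_date(end_s)[0])
    if "T" in value:  # e.g. "2005-10-06T14:30" -- date and time
        d = datetime.strptime(value, "%Y-%m-%dT%H:%M")
    else:             # e.g. "2005-10-06" -- a single date
        d = datetime.strptime(value, "%Y-%m-%d")
    return (d, d)

start, end = parse_dc_date("2005-01-01/2005-12-31")
```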


Full details of the recommendation can be found here:
http://dublincore.org/usage/meetings/2005/09/madrid/files/2005-07-29.date-comment.txt


Personally I think that makes the idea of using dublin core for  
this extension a whole lot more palatable.







--
Mark Nottingham http://www.mnot.net/



Re: Next and Previous

2005-10-04 Thread Mark Nottingham


Hi Alan,

Probably the closest thing to what you want is this proposal:
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-04.txt


It has previous, but not next.

Cheers,


On 03/10/2005, at 1:27 PM, Alan Gutierrez wrote:




What is the proper way to indicate the next chunk of articles in
the feed? Is it [EMAIL PROTECTED] and if so what value for related?

Where would someone put the offset?

If this is not a good place for general Atom questions, please
tell me which forum is more appropriate.

Thank you.

--
Alan Gutierrez - [EMAIL PROTECTED] - http://engrm.com/blogometer/







--
Mark Nottingham http://www.mnot.net/




Re: Next and Previous

2005-10-04 Thread Mark Nottingham



On 04/10/2005, at 7:07 PM, James Holderness wrote:

But isn't it at least worth mentioning something about this under  
Security Considerations?


Good point; I'll try to come up with some text.

Also, a minor point I noticed while reading the draft: the  
namespace prefix you use in most of the document is fh while the  
one in all the examples is history. Technically still valid, but  
I figure you'd probably want them all to be the same.


I did that on purpose :)

Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: ACE - Atom Common Extensions Namespace

2005-10-02 Thread Mark Nottingham









--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-09-29 Thread Mark Nottingham


Hi Henry,

Thanks for the feedback. As I've explained before, I have a pretty  
strong preference for the current design, to make it usable in other  
formats; i.e., the scope of this is not just Atom (which is why I'm  
probably going to do it as an Individual submission).


One path forward would be to have a special case of fh:prev, just for  
Atom, where it was spelled atom:link. I'm not crazy about this, as  
exceptions are generally bad, and because it would require  
implementors to special-case their code; i.e., they'd have to look  
for one element if it's an RSS feed, and a different element if it's  
an Atom feed. I know this is already done widely, but I see no reason  
to artificially require the practice here. However, if a number of  
implementors stand up and say that they wouldn't mind such a special  
case, and no one is against it, I'd make the change.


WRT namespaces, I agree that namespace clutter isn't great, but it's  
hard to avoid while still getting the benefits of namespaces. Perhaps  
once there are a lot of extensions that are used in day-to-day  
practice, someone will package them up into a bigger, wrap-up  
namespace that contains everything.


Cheers,



On 29/09/2005, at 10:21 AM, Henry Story wrote:

I think this is good, but I would prefer the atom:link to be used  
instead of
the fh:prev structure, as that would better fit the atom way of  
doing things.
I also think it may be very helpful if we could agree on an  
extension name space
that all accepted extensions would use, in order to reduce name  
space clutter.


Henry


On 7 Sep 2005, at 01:18, Mark Nottingham wrote:



Feed History -04 is out, at:
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-04.txt


Changes:
  - fh:stateful -> fh:incremental, with appropriate changes in text
  - more explicit cardinality information
  - implications of fh:prev being an Absolute URI spelled out
  - more explicit white space handling
  - Acknowledgements section

More information, including implementation details, at:
  http://www.mnot.net/blog/2005/09/05/feed_history

--
Mark Nottingham http://www.mnot.net/









--
Mark Nottingham http://www.mnot.net/



Re: FYI: Updated Index draft

2005-09-22 Thread Mark Nottingham



On 14/09/2005, at 1:06 PM, David Powell wrote:


I'm probably on my own, but I expected Atom's statement that "This
specification assigns no significance to the order of atom:entry
elements within the feed" was non-negotiable and couldn't be changed
by extensions. This seems more like potential Atom 1.1 material to me
- it doesn't seem to layer on top of the Atom framework so much as
slightly rewrite part of it.


Strictly read, this doesn't preclude other specifications /  
extensions from adding semantics to the ordering of entries -- it  
only says that *this* spec doesn't assign any meaning to it. That was  
the intent as I recall it.



Eg - An Atom library or server that doesn't know about this extension
is free to not preserve the entry order, and yet to retain the
fi:ordered / element, even though this will have corrupted the data.


That is indeed a problem. Probably the easiest way to fix this would  
be in errata, by adding a statement like "Some feeds may implicitly  
or explicitly (through extensions) have meaning assigned to the  
ordering of entries, so intermediaries SHOULD NOT reorder them."



I think that as implemented, this extension wouldn't be safe to deploy
without must-understand extensions, which Atom 1.0 doesn't support.


That would be another way to go, but people didn't want mU.

Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: FYI: Updated Index draft

2005-09-22 Thread Mark Nottingham



On 14/09/2005, at 1:06 PM, David Powell wrote:


How will this interact with the sliding-window/feed-history
interpretation of feeds? The natural order assigned by this extension
seems incompatible with the implied date order that would be implied
by two feed documents, polled over some period of time.

What should be the order of a merged feed history such as this:

Poll 1:
feed(e1, e2, e3)

Poll 2:
feed(e3, e1, e5)

- where, perhaps, 3 and 1 have been updated. How do you combine
entries sorted by their natural order, with the time-ordered feed
history?


There'd need to be an algorithm described for combining the feed  
documents; e.g., see the _combine() method in  
http://www.mnot.net/rss/history/feed_history.py. In practice,  
most/all(?) popular aggregators do this now (feed history + natural  
order); the only change is that the algorithm would be documented and  
well-understood (which IMO would be a vast improvement, *if* we can  
agree on one... or more).
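[Editor's note: the merge behaviour described here can be sketched in a few lines. This is an illustration of the general idea only, not the actual _combine() from feed_history.py: merge successive polls by atom:id, let later polls replace earlier versions of an entry, and keep first-seen order.]

```python
def combine(polls):
    """polls: a list of polls, each a list of (entry_id, entry) pairs."""
    order = []   # first-seen order of entry ids
    latest = {}  # entry_id -> most recently polled version
    for poll in polls:
        for entry_id, entry in poll:
            if entry_id not in latest:
                order.append(entry_id)
            latest[entry_id] = entry
    return [(eid, latest[eid]) for eid in order]

# The example from the question: e3 and e1 are updated in poll 2,
# e5 is new; the merged feed keeps one copy of each entry.
merged = combine([
    [("e1", "v1"), ("e2", "v1"), ("e3", "v1")],
    [("e3", "v2"), ("e1", "v2"), ("e5", "v1")],
])
```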


With the rank approach, you'd probably need to say that the ranks  
were valid within the scope of a single feed document, and then  
describe the relations between ranks in different feed documents. Not  
sure that's as interesting.


--
Mark Nottingham http://www.mnot.net/



Re: Feed History -04

2005-09-10 Thread Mark Nottingham


Hi James,

That was discussed before in the thread whose Subject was Feed  
History -02, starting in mid-July.


My position is that the semantics of fh:prev and those of "get me a  
non-current view of this feed document" are potentially -- and even  
often -- orthogonal. fh:prev is a link that allows you to reconstruct  
a feed whose entries are *all* current. Because of this, it isn't  
good to re-purpose the element for what you want; something separate  
would be better.
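[Editor's note: "reconstruct a feed whose entries are *all* current" amounts to walking fh:prev links back from the subscription document. A minimal sketch of such a client, not the draft's normative algorithm; fetch() is a hypothetical callable returning a parsed document with .entries and .prev (the fh:prev URI, or None):]

```python
def reconstruct(uri, fetch, limit=100):
    entries, seen = [], set()
    while uri is not None and uri not in seen and limit > 0:
        seen.add(uri)                 # guard against fh:prev cycles
        doc = fetch(uri)
        entries.extend(doc.entries)
        uri, limit = doc.prev, limit - 1
    return entries

# Tiny in-memory stand-in for HTTP fetching, for illustration only.
class Doc:
    def __init__(self, entries, prev):
        self.entries, self.prev = entries, prev

docs = {"feed": Doc(["e4", "e5"], "arch1"),
        "arch1": Doc(["e1", "e2", "e3"], None)}
full = reconstruct("feed", docs.__getitem__)
```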


Also, Henry Story had some interesting thoughts about how to best  
model top 20 lists, which had some impact on the issue you raise.


Cheers,


On 10/09/2005, at 12:31 PM, James M Snell wrote:


Mark, this is looking very good.  One point, however.

For non-incremental feeds, it would be helpful to have the option  
of being able to reference archived versions of the feed (e.g. prior  
top-ten lists). This could be accomplished by allowing fh:prev  
elements in a feed with fh:incremental set to false.


<feed>
  ...
  <fh:incremental>false</fh:incremental>
  <fh:prev>http://www.example.com/oldfeed</fh:prev>
</feed>

means that http://www.example.com/oldfeed is the previous (I hate  
to use the word) version of the feed.


thoughts?

- James

Mark Nottingham wrote:




Feed History -04 is out, at:
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-04.txt


Changes:
  - fh:stateful -> fh:incremental, with appropriate changes in text
  - more explicit cardinality information
  - implications of fh:prev being an Absolute URI spelled out
  - more explicit white space handling
  - Acknowledgements section

More information, including implementation details, at:
  http://www.mnot.net/blog/2005/09/05/feed_history

--
Mark Nottingham http://www.mnot.net/










--
Mark Nottingham http://www.mnot.net/



Feed History -04

2005-09-06 Thread Mark Nottingham


Feed History -04 is out, at:
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-04.txt


Changes:
  - fh:stateful -> fh:incremental, with appropriate changes in text
  - more explicit cardinality information
  - implications of fh:prev being an Absolute URI spelled out
  - more explicit white space handling
  - Acknowledgements section

More information, including implementation details, at:
  http://www.mnot.net/blog/2005/09/05/feed_history

--
Mark Nottingham http://www.mnot.net/



Re: The benefits of Lists are Entries rather than Lists are Feeds

2005-08-31 Thread Mark Nottingham


What would you like the working group to do?


On 31/08/2005, at 8:36 AM, Bob Wyman wrote:



Folks, I hate to be insistent, however, I think that in the mail  
below I
offered some pretty compelling reasons why lists should be entries  
rather
than turning feeds into lists. Could someone please comment on  
this? Is

there some point that I'm completely missing? What is wrong with my
suggestion that lists-are-entries is much more useful than the  
alternative?


-Original Message-
From: [EMAIL PROTECTED] [mailto:owner-atom- 
[EMAIL PROTECTED]

On Behalf Of Bob Wyman
Sent: Tuesday, August 30, 2005 5:10 PM
To: 'Mark Nottingham'
Cc: atom-syntax@imc.org
Subject: RE: Top 10 and other lists should be entries, not feeds.

Mark Nottingham wrote:


Are you saying that when/if Netflix switches over to Atom, they
shouldn't use it for the Queue?


No. I'm saying that if Netflix switches over to Atom, what they
should do is insert the Queue information, as a list, into a single  
entry

within the feed.
This will not only preserve the nature of Atom feeds as feeds  
but
also allow NetFlix a number of new and potentially interesting  
opportunities
for providing data to customers. Most important among these will be  
the

ability to include multiple lists in the feed (i.e. in addition to the
Queue, they could also include their Top 10 list as well as a set of
recommendations based on user experience. They might even include  
a list

of 10 most recent transactions on your account) Each list would be a
distinct entry. To make life easier on aggregators, each entry  
type should
probably use the same atom:id across versions. This allows the  
aggregators

to discard earlier, now out of date entries.
NetFlix would also be able to intermix information such as the
Queue List with non-list entries. For instance, they might have a  
Message
from NetFlix that they want to include in the feed or, they might  
include a
series of movie reviews that were carefully selected for the  
specific user.

Basically, by using entries for lists instead of converting the
entire feed into a list, NetFlix is able to offer a much richer and  
much

more satisfying experience to their users.
The ability of Atom to carry both lists and non-lists as entries
means that Atom is able to offer a much more flexible and powerful  
mechanism
to NetFlix than can be had from the less-capable RSS V2.0 solution.  
I think
that if I were NetFlix, I would want to have the opportunity to  
experiment

with and find ways to exploit this powerful capability. The richer the
opportunity for communications between NetFlix and their customers,  
the

greater the opportunity they have to generate revenues.
The alternative to using entries rather than feeds would be  
creating
multiple feeds per user. That strikes me as a solution which is  
ugly on its
face and unquestionably increases the complexity of the system for  
both
NetFlix and its customers. The list-in-entry solution is much  
more elegant

and much more powerful.

bob wyman









--
Mark Nottingham http://www.mnot.net/



Re: Top 10 and other lists should be entries, not feeds.

2005-08-30 Thread Mark Nottingham


Sorry, Bob, I disagree. I tried to introduce a rigid concept of what a  
feed is much earlier, and people pushed back; as a result, Atom  
doesn't have a firm definition of the nature of a feed, so we can't  
now go and say what it can't be.


Besides which, I think the use cases for Atom-as-list are powerful,  
and just beginning to be seen. E.g., Netflix allows you to subscribe  
to your Queue:

  http://www.netflix.com/RSSFeeds

Are you saying that when/if Netflix switches over to Atom, they  
shouldn't use it for the Queue?


It sounds like you've got use cases for Atom that other use cases  
(e.g., lists) make difficult to work with. Banning those other use  
cases makes things easier for you, but I don't think it's good for  
Atom overall.


Cheers,


On 29/08/2005, at 10:49 PM, Bob Wyman wrote:


I’m sorry, but I can’t go on without complaining.  Microsoft has  
proposed extensions which turn RSS V2.0 feeds into lists and we’ve  
got folk who are proposing much the same for Atom (i.e. stateful,  
incremental or partitioned feeds)… I think they are wrong. Feeds  
aren’t lists and Lists aren’t feeds. It seems to me that if you  
want a “Top 10” list, then you should simply create an entry that  
provides your Top 10. Then, insert that entry in your feed so that  
the rest of us can read it. If you update the list, then just  
replace the entry in your feed. If you create a new list (Top 34?)  
then insert that in the feed along with the “Top10” list.


What is the problem? Why don’t folk see that lists are the stuff of  
entries – not feeds? Remember, “It’s about the entries, Stupid…”


I think the reason we’ve got this pull to turn feeds into Lists is  
simply because we don’t have a commonly accepted “list” schema. So,  
the idea is to repurpose what we’ve got. Folk are too scared or  
tired to try to get a new thing defined and through the process, so  
they figure that they will just overload the definition of  
something that already exists. I think that’s wrong. If we want  
“Lists” then we should define lists and not muck about with Atom.  
If everyone is too tired to do the job properly and define a real  
list as a well defined schema for something that can be the payload  
of a content element, then why not just use OPML as the list format?




What is a search engine or a matching engine supposed to return as  
a result if it find a match for a user query in an entry that comes  
from a list-feed? Should it return the entire feed or should it  
return just the entry/item that contained the stuff in the users’  
query? What should an aggregating intermediary like PubSub do when  
it finds a match in an element of a list-feed? Is there some way to  
return an entire feed without building a feed of feeds? Given that  
no existing aggregator supports feeds as entries, how can an  
intermediary aggregator/filter return something the client will  
understand?


You might say that the search/matching engine should only present  
the matching entry in its results. But, if you do that what happens  
is that you lose the important semantic data that comes from  
knowing the position the matched entry had in the original list- 
feed. There is no way to preserve that order-dependence information  
without private extensions at present.


I’m sorry but I simply can’t see that it makes sense to encourage  
folk to break important rules of Atom by redefining feeds to be  
lists. If we want “lists” we should define what they look like and  
put them in entries. Keep your hands off the feeds. Feeds aren’t  
lists – they are feeds.




bob wyman













--
Mark Nottingham http://www.mnot.net/




Re: Feed History: stateful - incremental?

2005-08-27 Thread Mark Nottingham



On 25/08/2005, at 10:10 AM, Stefan Eissing wrote:
Seeing it as a data structure, fh introduces a singly-linked list  
of documents which the whole feed is composed of. I think such a  
document needs its own term.


A single document could be named a feed fragment. The first  
document a head fragment? Not very snappy. Let's see. If the whole  
feed is atom, then the fragments are particles? feedytrons? Are  
we perhaps talking about a split feed (fh:split=true/false? the  
german term for a fragment is splitter, btw.)


That's one thing to be named. I was trying to come up with a term for  
the whole, conceptual feed (that indicates its components have this  
nature); that's why I came up with incremental.


I.e., in usage: "That's a foo feed; you can walk back its previous  
elements and build the whole feed." And, "That's not a foo feed; the  
document you fetch is the whole feed."


Incremental works pretty well there (although it has a lot of  
syllables); sliding (as suggested by James) also fits, but it is a  
bit evocative of time, which I'd like to avoid (despite the use of  
'history' in the document title :-/).


(BTW, incremental isn't my term; it was suggested privately by an  
implementor)


--
Mark Nottingham http://www.mnot.net/



Re: Feed History: stateful - incremental?

2005-08-25 Thread Mark Nottingham


On 25/08/2005, at 3:00 AM, Stefan Eissing wrote:


Am 25.08.2005 um 00:07 schrieb Mark Nottingham:

Just bouncing an idea around; it seems that there's a fair amount  
of confusion / fuzziness caused by the term 'stateful'. Would  
people prefer the term 'incremental'?
I.e., instead of a stateful feed, it would be an incremental  
feed; fh:stateful would become fh:incremental.


I would prefer to name such a feed a chunked feed. So, that would  
make it fh:chunked=(true|false).


Hmm. I tend to shy away from 'chunked', because that already has  
meaning in HTTP, and while the format isn't dependent upon HTTP, it  
might get confusing (witness bindings and properties in the Web  
services world).


That leaves the history analogy a bit behind, I'm afraid. So a  
chunked feed would be a history if fh:order=publish-time? Maybe  
not worth it, just a thought.


I totally see an ordering extension being useful, but I think it's  
orthogonal to fh.



I see one use of feed histories in making normal feed documents  
very small and still being able to offer a rather long list of  
entries. Clients checking for updates would just get a tiny  
document (2 entries maybe) iff they do not use HTTP caching or ETag  
validation. Could this be some transfer-volume saver?


Hopefully. I think one of the reasons people publish such big feeds  
right now is that they want to make sure people will see entries if  
they haven't checked in a while.


Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: Don't Aggregrate Me

2005-08-25 Thread Mark Nottingham


It works in both Safari and Firefox; it's just that that particular  
data: URI is a 1x1 blank gif ;)



On 25/08/2005, at 9:37 AM, Henry Story wrote:




On 25 Aug 2005, at 17:06, A. Pagaltzis wrote:



* Henry Story [EMAIL PROTECTED] [2005-08-25 16:55]:



Do we put base64 encoded stuff in html? No: that is why there
are things like
<img src="..." />




<img src="data:image/gif;base64,R0lGODlhAQABAIAAAP///yH5BAEKAAEALAABAAEAAAICTAEAOw==" />





!!! That really does exist?!
Yes:
http://www.ietf.org/rfc/rfc2397.txt

But apparently only for very short data fragments (a few k at  
most). And it does not give me anything very interesting when I look  
at it in either Safari or Firefox.

Thanks for pointing this out. :-)
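[Editor's note: the RFC 2397 data: URI quoted in this thread can be decoded with the standard library alone; this sketch confirms Mark's observation above that the payload is a 1x1 GIF.]

```python
import base64

uri = ("data:image/gif;base64,"
       "R0lGODlhAQABAIAAAP///yH5BAEKAAEALAABAAEAAAICTAEAOw==")
# A data: URI is "data:<mediatype>[;base64],<payload>".
header, b64 = uri.split(",", 1)
data = base64.b64decode(b64)
# data begins with the "GIF89a" magic, followed by the logical screen
# size as two little-endian 16-bit integers: width 1, height 1.
```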



:-)

Regards,
--
Aristotle Pagaltzis // http://plasmasturm.org/









--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Feed History: stateful - incremental?

2005-08-24 Thread Mark Nottingham


Just bouncing an idea around; it seems that there's a fair amount of  
confusion / fuzziness caused by the term 'stateful'. Would people  
prefer the term 'incremental'?


I.e., instead of a stateful feed, it would be an incremental  
feed; fh:stateful would become fh:incremental.


Worth it?

--
Mark Nottingham http://www.mnot.net/



Re: If you want Fat Pings just use Atom!

2005-08-22 Thread Mark Nottingham


Yep; an existence proof is server push, which is very similar (but  
not XML-based):

  http://wp.netscape.com/assist/net_sites/pushpull.html


On 21/08/2005, at 9:36 PM, Sam Ruby wrote:



A. Pagaltzis wrote:


* Bob Wyman [EMAIL PROTECTED] [2005-08-22 01:05]:



What do you think? Is there any conceptual problem with
streaming basic Atom over TCP/IP, HTTP continuous sessions
(probably using chunked content) etc.?



I wonder how you would make sure that the document is
well-formed. Since the stream never actually ends and there is no
way for a client to signal an intent to close the connection, the
feed at the top would never actually be accompanied by a
/feed at the bottom.

If you accept that the stream can never be a complete well-formed
document, is there any reason not to simply send a stream of
concatenated Atom Entry Documents?

That would seem like the absolute simplest solution.



I think the keyword in the above is complete.

SAX is a popular API for dealing with streaming XML (and there are a
number of pull parsing APIs too).  It makes individual elements
available to your application as they are read.  If at any point, the
SAX parser determines that your feed is not well formed, it throws an
error at that point.

With a HTTP client library and SAX, the absolute simplest  
solution is

what Bob is describing: a single document that never completes.

Note that if your application were to discard all the data it receives
before it encounters the first entry, the stream from there on out
would be identical.

- Sam Ruby
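[Editor's note: the "stream that never completes" idea Sam describes can be sketched with the standard-library pull parser: bytes are fed in as they arrive and each entry is emitted as soon as its end tag is seen, with no complete document ever required. The synthetic root wrapper is an assumption of this sketch, used so concatenated entry documents parse as one never-ending document.]

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def stream_entries(chunks):
    parser = ET.XMLPullParser(events=("end",))
    # Synthetic, never-closed root: the stream stays well-formed-so-far.
    parser.feed("<stream>")
    for chunk in chunks:
        parser.feed(chunk)
        for _, elem in parser.read_events():
            if elem.tag == ATOM + "entry":
                yield elem

# Two "entry documents" arriving as arbitrary network chunks,
# including a chunk boundary in the middle of an element:
chunks = [
    '<entry xmlns="http://www.w3.org/2005/Atom"><id>tag:ex,2005:e1</id>',
    '</entry><entry xmlns="http://www.w3.org/2005/Atom">',
    '<id>tag:ex,2005:e2</id></entry>',
]
ids = [e.find(ATOM + "id").text for e in stream_entries(chunks)]
```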






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: If you want Fat Pings just use Atom!

2005-08-22 Thread Mark Nottingham


Just as a data point, this should become less of a problem as  
event-loop based HTTP implementations become more popular; with them,  
the number of connections you can hold open is only practically  
limited by available memory (to keep fairly small amounts of  
connection-specific state). This technique can allow tens to hundreds  
of thousands of concurrent connections, leading to multi-hour HTTP  
connections (if both sides want them).



On 21/08/2005, at 8:08 PM, Bob Wyman wrote:


The
problem is that HTTP connections, given the current infrastructure and
standard components, are very hard to keep open permanently or  
for a very
long period of time. One is often considered lucky if you can keep  
an HTTP

connection open for 5 minutes without having to re-initialize...



--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: Feed History -03

2005-08-16 Thread Mark Nottingham


I very much disagree; relative references should be allowable in  
simple extensions, and in fact the rationale that Tim gives is the  
reasoning I assumed regarding Atom extensions; if I had known that  
the division between simple and complex extensions would be used to  
justify a constraint on the use of context in simple extensions, I  
would have objected to it.


If you're using something like RDF to model feeds, you already have a  
number of context-related issues to work through, this isn't an extra  
burden.


I should explicitly allow relative URIs in fh:prev, though.
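[Editor's note: for a client, "allowing relative URIs in fh:prev" means resolving the reference against the feed document's base URI. A minimal sketch with the standard library, assuming the base is simply the URI the document was retrieved from (an xml:base attribute, if present, would take precedence); the URIs are illustrative:]

```python
from urllib.parse import urljoin

feed_uri = "http://example.com/feed/index.atom"  # where the feed was fetched
prev_ref = "./archives/archive1.atom"            # a relative fh:prev value
resolved = urljoin(feed_uri, prev_ref)
# resolved is now an absolute URI the client can dereference.
```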

Cheers,


On 16/08/2005, at 11:35 AM, Henry Story wrote:

I think that in section 5. you should specify that the URI  
reference MUST NOT be relative or
MUST BE absolute (if that is the proper W3C Architecture term). I  
agree with the point made by

David Powell in the thread entitled More about extensions [1].

Given that we have this problem I was wondering whether it would  
not be better to use the link element, as I think it permits relative  
references. Relative references really are *extremely useful*. I  
tried to work without them in my BlogEd editor because the Sesame  
database folk mistakenly thought it was not part of RDF, and it  
caused me no end of trouble: all those problems vanished as soon as  
they allowed relative references.


So if relative references are allowed in links perhaps the  
following would be better:


<link type="http://purl.org/syndication/history/1.0/next" href="./archives/archive1.atom" />



Henry Story

[1] http://www.imc.org/atom-syntax/mail-archive/msg16643.html


On 15 Aug 2005, at 22:31, Mark Nottingham wrote:



Draft -03 of feed history is now available, at:
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-03.txt


Significant changes in this revision include:
  - add fh:archive element, to indicate that an entry is an archive
  - allow subscription feed to omit fh:stateful if fh:prev is present
  - clarified that fh doesn't add ordering semantics, just allows  
you to reconstruct state

  - cleaned up text, fixed examples, general standards hygiene

There's going to be at least one more draft, as I neglected to  
acknowledge people who have made suggestions and otherwise helped  
so far. Sorry!


--
Mark Nottingham http://www.mnot.net/









--
Mark Nottingham http://www.mnot.net/



Re: Feed History -03

2005-08-16 Thread Mark Nottingham



On 16/08/2005, at 9:17 AM, Stefan Eissing wrote:
Ch. 5 similar: "MUST occur unless". If the document is an  
archive there are only 2 possibilities: either fh:prev is there or  
not. If not, it will always terminate the archive list, won't  
it? You seem to have a (server-side) model in mind which drives  
the document structure. From a client perspective, there are only  
the documents, and it derives its own model from that.


Not sure what you mean here; are you saying that fh:archive is  
superfluous?


Currently:
The document first defines archive documents and *afterwards*  
requires that fh:archive MUST be present in archive documents.


My proposal:
Introduce fh:archive with the semantics that the server guarantees  
that the set of entries in this document will not change over time  
if fh:archive is present. A document with fh:archive in it (and its  
implied semantics) is then called an archive document.


To tackle it from another view: The spec should say servers MUST  
NOT break the promises of fh:archive instead of saying archive  
documents MUST announce that they do not change. There is possible  
harm in breaking the first, but only suboptimal performance in  
neglecting the latter case.


I made it loose purposefully; I think there are several types of  
archives out there, and it's likely that further specs are going to  
come along that talk about the guarantees surrounding persistence,  
entry deletion, etc. Again, I want to avoid, as much as possible,  
defining what a feed is in this document, as there are many potential  
models for feeds.


For example, an archive in my blog feed can change for spelling  
mistakes and updates, but an archive of telephone records used for  
SOX compliance can't. Mandating a particular definition of what an  
archive is would necessitate ruling some types of archives out, and  
that wasn't my main use case for this; rather, it was to make sure  
that archive feeds (as defined for the purposes of this spec)  
wouldn't be accidentally subscribed to.


Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: Feed History -03

2005-08-16 Thread Mark Nottingham


On 16/08/2005, at 3:05 PM, Robert Sayre wrote:


I suggested writing the next tag like this:

<link type="http://purl.org/syndication/history/1.0/next" href="./archives/archive1.atom" />


That's what I would do, too. Not my spec, though. Mainly so I could
put a title in that said Entries from August or whatever.


For that matter, if Henry's interpretation were correct, the element  
could be


  <fh:history nonsense="1">./archives/archive1.atom</fh:history>

And Atom processors would magically know that XML Base applies to the  
URI therein. It's the magic that I object to; inferring the  
applicability of context based on the presence or absence of other  
markup isn't good practice, and will lead to practical problems.  
E.g., what if I want to have an optional attribute on an empty  
element? Is it simple or complex?


This interpretation of extensions seems very fragile to me.

--
Mark Nottingham http://www.mnot.net/



Feed History -03

2005-08-15 Thread Mark Nottingham


Draft -03 of feed history is now available, at:
  http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed-history-03.txt


Significant changes in this revision include:
  - add fh:archive element, to indicate that an entry is an archive
  - allow subscription feed to omit fh:stateful if fh:prev is present
  - clarified that fh doesn't add ordering semantics, just allows  
you to reconstruct state

  - cleaned up text, fixed examples, general standards hygiene

There's going to be at least one more draft, as I neglected to  
acknowledge people who have made suggestions and otherwise helped so  
far. Sorry!


--
Mark Nottingham http://www.mnot.net/



Re: Feed History -02

2005-08-10 Thread Mark Nottingham


So, you're really looking for entry-level, time-based invalidation, no?

I guess the simplest way to do this would be to dereference the link  
and see if you get a 404/410; if you do, you know it's no longer good.


That's not terribly efficient, but OTOH managing metadata in multiple  
places is tricky, and predicting the future doubly so :) Most people  
get expiration times really wrong. And clock sync becomes an issue as  
well.


I'd think that if you have reasonable control over the polling of the  
feed, and a solid enough state model (which might include an explicit  
deletion mechanism), you could have a similar effect by just removing  
the items from the feed when they expire, with the expectation that  
when they disappear from the feed, they disappear from the client.  
Would that work for your use case?
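The dereference-and-check approach described above can be sketched as follows (Python standard library; the helper names are mine, not from any spec):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

# Status codes that signal the linked entry is no longer good.
GONE = frozenset({404, 410})

def link_is_gone(status: int) -> bool:
    """True if an HTTP status code means the entry has expired/vanished."""
    return status in GONE

def entry_expired(url: str) -> bool:
    """Dereference an entry's link; treat 404/410 as 'no longer good'."""
    try:
        urlopen(Request(url, method="HEAD"))
    except HTTPError as err:
        return link_is_gone(err.code)
    return False
```

As the message notes, this costs a round trip per entry, but it avoids publishing expiry metadata that clock skew and mispredicted lifetimes can get wrong.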



On 09/08/2005, at 9:07 PM, James M Snell wrote:



First off, let me stress that I am NOT talking about caching
scenarios here... (my use of the terms "application layer" and
"transport layer" was an unfortunate mistake on my part that only
served to confuse my point)


Let's get away from the multiprotocol question for a bit (it never  
leads anywhere constructive anyway)... Let's consider an aggregator  
scenario. Take an entry from a feed that is supposed to expire  
after 10 days.  The feed document is served up to the aggregator  
with the proper HTTP headers for expiration.  The entry is  
extracted from the original feed and dumped into an aggregated  
feed.  Suppose each of the entries in the aggregated feed are  
supposed to have their own distinct expirations.  How should the  
aggregator communicate the appropriate expirations to the  
subscriber?  Specifying expirations on the HTTP level does not  
allow me to specify expirations for individual entries within a  
feed.  Use case: an online retailer wishes to produce a "special
offers" feed.  Each offer in the feed is a distinct entity with
its own terms and its own expiration: e.g. some offers are valid for
a week, other offers are valid for two weeks, etc.  The expiration  
of the offer (a business level construct) is independent of whether  
or not the feed is being cached or not (a protocol level  
construct); publishing a new version of the feed (e.g. by adding a  
new offer to the feed) should have no impact on the expiration of  
prior offers published to the feed.
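An entry-level extension of the kind James describes might look like this (the element name, namespace URI, and dates are purely hypothetical illustrations, not taken from any published draft):

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:ex="http://example.org/ns/expires">
  <title>10% off all orders this week</title>
  <updated>2005-08-09T12:00:00Z</updated>
  <!-- business-level expiry of the offer; NOT a cache-control directive -->
  <ex:expires>2005-08-16T00:00:00Z</ex:expires>
  ...
</entry>
```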


Again, I am NOT attempting to reinvent an abstract or
transport-neutral caching mechanism in the same sense that the atom:updated
element is not attempting to reinvent Last-Modified or that the via  
link relation is not attempting to reinvent the Via header, etc.   
They serve completely different purposes. The expires and max-age  
extensions I am proposing should NOT be used for cache control of  
the Atom documents in which they appear.


I think we can declare victory here by simply a) using whatever
caching mechanism is available, and b) designating a "won't change"
flag.
Speaking *strictly* about cache control of Atom documents, +1.  No  
document level mechanisms for cache control are necessary.


- James


Mark Nottingham wrote:


HTTP isn't a transport protocol, it's a transfer protocol; i.e.,  
the  caching information (and other entity metadata) are *part of*  
the  entity, not something that's conceptually separate.


The problem with having an abstract or transport-neutral  
concept  of caching is that it leaves you with an awkward choice;  
you can  either a) exactly replicate the HTTP caching model, which  
is  difficult to do in other protocols, b) dumb down HTTP  
caching to a  subset that's neutral, or c) introduce a  
contradictory caching  model and suffer the clashes between HTTP  
caching and it.


This is the same road that Web services sometimes tries to go  
down, and it's a painful one; coming up with the grand,
protocol-neutral abstraction that enables all of the protocol-specific
features is  hard, and IMO not necessary. Ask yourself: are there  
any situations  where you *have* to be able to seamlessly switch  
between protocols,  or is it just a convenience?


I think we can declare victory here by simply a) using whatever
caching mechanism is available, and b) designating a "won't change"
flag.







On 09/08/2005, at 11:53 AM, James M Snell wrote:



Henry Story wrote:


Now I am wondering if the http mechanism is perhaps all that is   
needed
for what I want with the unchanging archives. If it is then   
perhaps this
could be explained in the Feed History RFC. Or are there other
reasons to add an expires tag to the document itself?




On the application level, a feed or entry may expire or age
independently of whatever caching mechanisms may be applied at
the  transport level.  For example, imagine a source that  
publishes  special offers in the form of Atom entries that expire  
at a given  point in time.  Now suppose that those entries are  
being  distributed via XMPP and HTTP

Re: Feed History -02

2005-08-09 Thread Mark Nottingham



On 09/08/2005, at 4:07 AM, Henry Story wrote:


But I would really like some way to specify that the next feed  
document is an archive (ie. won't change). This would make it easy  
for clients to know when to stop following the links, i.e., when they
have caught up with the changes since they last looked at the feed.


Perhaps something like this:

<history:prev archive="yes">http://liftoff.msfc.nasa.gov/2003/04/feed.rss</history:prev>


I'd think that would be more appropriate as an extension to the  
archive itself, wouldn't it? That way, the metadata (the fact that  
it's an archive) is part of the data (the archive feed).


E.g.,

<atom:feed>
  ...
  <archive:yes_im_an_archive/>
</atom:feed>

By (current) definition, anything that history:prev points to is an  
archive.


Cheers,


--
Mark Nottingham http://www.mnot.net/



Re: Feed History -02

2005-08-09 Thread Mark Nottingham


HTTP isn't a transport protocol, it's a transfer protocol; i.e., the  
caching information (and other entity metadata) are *part of* the  
entity, not something that's conceptually separate.


The problem with having an abstract or transport-neutral concept  
of caching is that it leaves you with an awkward choice; you can  
either a) exactly replicate the HTTP caching model, which is  
difficult to do in other protocols, b) dumb down HTTP caching to a  
subset that's neutral, or c) introduce a contradictory caching  
model and suffer the clashes between HTTP caching and it.


This is the same road that Web services sometimes tries to go down,  
and it's a painful one; coming up with the grand, protocol-neutral  
abstraction that enables all of the protocol-specific features is  
hard, and IMO not necessary. Ask yourself: are there any situations  
where you *have* to be able to seamlessly switch between protocols,  
or is it just a convenience?


I think we can declare victory here by simply a) using whatever  
caching mechanism is available, and b) designating a "won't change"
flag.







On 09/08/2005, at 11:53 AM, James M Snell wrote:


Henry Story wrote:
Now I am wondering if the http mechanism is perhaps all that is  
needed
for what I want with the unchanging archives. If it is then  
perhaps this
could be explained in the Feed History RFC. Or are there other
reasons to add an expires tag to the document itself?


On the application level, a feed or entry may expire or age  
independently of whatever caching mechanisms may be applied at the
transport level.  For example, imagine a source that publishes  
special offers in the form of Atom entries that expire at a given  
point in time.  Now suppose that those entries are being  
distributed via XMPP and HTTP.  It is helpful to have a transport  
independent expiration/max-age mechanism whose semantics operate on  
the application layer rather than the transport layer.


- James






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: spec bug: can we fix for draft-11?

2005-08-03 Thread Mark Nottingham


On 02/08/2005, at 9:15 PM, Tim Bray wrote:

So if the WG really thinks this is a sensible clarification I won't  
scream too much.


It's probably necessary any way, because RFC3470/BCP70 Section 4.16  
encourages specs to give guidelines about white space;


   Implementers might safely assume that they can ignore the white  
space
   in the example above, but white space used for pretty-printing  
can be

   a source of confusion in other situations.  Consider a minor change
   to the value element:

   <value>
     10.1.2.3
   </value>

   where white space is found on both sides of the IP address.  XML
   processors treat the white space surrounding 10.1.2.3 as an
   integral part of the value element.  A failure to recognize this
   behavior can lead to confusion and errors in both design and
   implementation.

   All white space is considered significant in XML instances.  As a
   consequence, it is recommended that protocol designers provide
   specific guidelines to address white space handling within  
protocols

   that use XML.
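The point is easy to demonstrate with any off-the-shelf XML parser; this sketch (Python's xml.etree, with my own example string) shows the surrounding white space surviving as part of the element's text:

```python
import xml.etree.ElementTree as ET

# Pretty-printed element: white space sits on both sides of the value.
elem = ET.fromstring("<value>\n  10.1.2.3\n</value>")

print(repr(elem.text))          # '\n  10.1.2.3\n' -- white space is kept
print(elem.text == "10.1.2.3")  # False: a naive comparison fails
print(elem.text.strip())        # 10.1.2.3 -- callers must strip explicitly
```

This is exactly the "integral part of the value element" behavior BCP 70 warns about: the parser is correct, and the burden falls on the protocol design to say how white space is handled.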




--
Mark Nottingham http://www.mnot.net/



Re: Feed History -02

2005-08-03 Thread Mark Nottingham
://bblfish.net/blog/top2/Number1 in the first feed.

Looking at it this way, there really seems to be no incompatibility  
between
a top 20 feed and the <history:next ...> link. My talk about
archives not

changing should be more precisely about archives not changing in any
significant way. And this advice could be moved to an implementors  
section
and be encoded in HTTP by simply giving archive pages an infinitely  
long

expiry date.


Someone could subscribe to that second feed and poll for updates,  
and all
they'll ever see are updates to the 20 items there, not the 20  
items from

the next week/whatever.

The idea of feeds linked to feeds has lots of utility -- feeds of  
comments

for one, and even a feed of feeds available on the site.



I completely agree. And remember for any two things there is at  
least one way they
are related. And there are many different ways feeds can be related  
to each other.
A feed may be an archival continuation of one - which is what the  
<history:next ...>
link in my opinion addresses, but there are many other ways one can  
relate feeds.




For example: this HTML page http://www.nature.com/rss/ has an  
equivalent
feed document http://npg.nature.com/pdf/newsfeeds.rdf, where  
each item
links to the individual feeds for each publication. That feed  
doesn't update
often, mostly because NPG doesn't add many new feeds to their site  
all that
often. The URIs the entries of that feed link to are redirected to  
the
permanent URIs of the current issue (each issue has its own feed
which is
the table of contents for that issue, with a distinct and separate  
URI for

each issue).



Yes a feed can itself be an entry in another feed.



It's conceivable they could also provide a feed for each publication
pointing to the table of contents feeds of each issue. That is, a  
feed with

an entry for each issue.



yes.


Of the above, the mechanism of a single URI which redirects to the  
current
issue is a situation which would still need a flag indicating that  
the

appropriate thing to do is to not persist older entries.



I am starting to wonder whether this is really needed now that I  
have looked

at the top20 example I gave above.




The other structure of feeds linking to feeds would require the  
aggregator
be able to do something useful with such links, but this can be  
generalised
and thus be useful for many purposes. As it is, right now with NNW  
I can do
something useful with such a feed: drag & drop the item headline
link to my
subscriptions pane to subscribe to that feed and view the entries  
therein.




I myself have no problem with feeds being entries, feeds pointing  
to other
feeds, or anything like that. A feed is a resource. It can change.  
A feed

is simply a set of state changes to resources. It is that general.




Both require coding effort.

e.









--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Re: Feed History -02

2005-07-23 Thread Mark Nottingham



On 19/07/2005, at 2:04 AM, Henry Story wrote:

Clearly the archive feed will work best if archive documents, once  
completed (containing a
given number of entries) never change. Readers of the archive will  
have a simple way to know when
to stop reading: there should never be a need to re-read an archive  
page - they just never change.


The archive provides a history of the feed's evolution. Earlier  
changes to the resources
described by the feed will be found in older archive documents and  
newer changes in the later
ones. One should expect some entries to be referenced in multiple  
archive feed documents. These

will be entries that have been changed over time.

Archives *should not* change. I think any librarian will agree with  
that.


I very much agree that this is the ideal that should be striven for.

However, there are some practical problems with doing it in this  
proposal.


First of all, I'm very keen to make it possible to implement history  
with currently-deployed feed-generating software; e.g., Movable
Type. MT isn't capable of generating a feed where changed entries are
repeated at the top out of the box, AFAIK.


Even if it (and other software) were, it would be very annoying to  
people whose feed software doesn't understand this extension; their  
show me the latest entries in the blog feed would become show me  
the latest changed entries in the blog, and every time an entry was  
modified or spell-checked, it would show up at the top.


So, it's a matter of enabling graceful deployment. Most of the reason  
I have the fh:stateful flag in there is to allow people to explicitly  
say I don't want you to do history on this feed because so many  
aggregators are already doing history in their own way.


The underlying problem, I think, is that different feeds have  
different semantics. Some will want every change to be included,  
others won't; for example, a blog probably doesn't need every single  
spelling correction propagated. There are some fundamental questions  
about the nature of a feed that need to be answered (and, more  
importantly, agreed upon) before we get there; for example, we now  
say that the ordering isn't significant by default; while that's  
nice, most software is going to infer something from it, so we need  
an extension to say 'sort by this', *and* have that extension widely  
deployed.


I tried to approach these problems when I wrote the original proposal  
for this in Pace form; I got strong pushback on defining a single  
model for a feed's state. Given that, as well as the deployment  
issues, I intentionally de-coupled the state reconstruction (this  
proposal) from the state model (e.g., ordering, deletion, exact  
semantics of an archive feed, etc.), so that they could be separately  
defined.


Cheers,

--
Mark Nottingham http://www.mnot.net/


