Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Eric Scheid

On 30/6/05 11:54 AM, "Antone Roundy" <[EMAIL PROTECTED]> wrote:

> I don't quite get what the "hub feed" would look like.  Could you show
> us some XML?

I think something like this:


<feed>
  ...
  <title>archives hub for x</title>
  <link href="http://example.com/archive/feed/2005/05/"
        type="application/atom+xml"
        rel="prev" />
  <link href="http://example.com/archive/feed/2005/04/"
        type="application/atom+xml"
        rel="prev" />
  <link href="http://example.com/archive/feed/2005/03/"
        type="application/atom+xml"
        rel="prev" />
  <link href="http://example.com/archive/feed/2005/02/"
        type="application/atom+xml"
        rel="prev" />
</feed>



... although ... now that I've typed that out ... the semantics of "prev"
are borked.

So maybe something more like this would make better sense


<feed>
  ...
  <entry>
    <title>Archive for 2005/05</title>
    <link href="http://example.com/archive/feed/2005/05/"
          type="application/atom+xml"/>
    <summary>27 posts</summary>
  </entry>
</feed>



That is: "here is a resource representation providing a list of entries
which are representative of some other resources, not necessarily in
text/html format"

e.



Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Eric Scheid

On 30/6/05 11:54 AM, "Antone Roundy" <[EMAIL PROTECTED]> wrote:

>> I'd much rather have a single archive feed containing all
>> entries, and use RFC3229+feed to return partial versions of it;
> That might be good for those who can support it, but many people won't
> be able to.  On the other hand, if that single feed grows to where it's
> hundreds of MB, it could cause real problems if someone requests the
> whole thing or a large portion of it.

also, how can RFC3229+feed provide subsets where one end isn't tied to the
most recent entry?

e.




Re: Dealing with namespace prefixes when syndicating signed entries

2005-06-29 Thread Bill de hÓra

Antone Roundy wrote:

> [...]
> Perhaps a reasonable way to deal with the namespace prefix conflict
> would be for the signature to be applied after a transform that yielded
> this (putting full namespace names in where the prefixes were):
> [...]

ex-c14n is where to deal with this - it's a necessary preprocessing step
to dsigging XML and walks the line between the Infoset and bytewise
needs. IIRC it can result in moving the namespace declarations around.
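
For illustration only (structure abbreviated, values elided; the
algorithm URIs are the ones defined by the xmldsig and exc-c14n specs),
a signature that selects exclusive c14n carries it in SignedInfo
roughly like this:

<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <!-- SignedInfo itself canonicalized with exclusive c14n -->
    <CanonicalizationMethod
        Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
    <SignatureMethod
        Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
    <Reference URI="">
      <Transforms>
        <Transform
            Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
        <!-- the signed content is also canonicalized exclusively, so it
             doesn't absorb namespace context from whatever encloses it -->
        <Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue>...</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>...</SignatureValue>
</Signature>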

cheers
Bill




Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Antone Roundy


On Wednesday, June 29, 2005, at 06:50  PM, A. Pagaltzis wrote:

> My first thought upon reading the draft was what I assume is
> what Stefan Eissing said: I would rather have a single,
> entry-less “archive hub” feed which contains “prev” links to
> *all* previous instances

For an active feed, that document could easily grow till it was larger
than many feed instances.  I prefer the chain of instances method.



> , leading to a setup like
>
> http://example.com/latest
> └─> http://example.com/archive/feed/
>     ├─> http://example.com/archive/feed/2005/05/
>     ├─> http://example.com/archive/feed/2005/04/
>     ├─> http://example.com/archive/feed/2005/03/
>     ├─> http://example.com/archive/feed/2005/02/
>     └─> http://example.com/archive/feed/2005/01/

I don't quite get what the "hub feed" would look like.  Could you show
us some XML?



> I don’t see anything in the draft that would preclude this use,
> and as far as I can tell, aggregators which support the draft
> should have no trouble handling this scenario correctly.

The draft doesn't explicitly say that a feed can only contain one
"prev" link, but I find it hard to read "a" to mean "one or more" in
'and optionally a Link element with the relation "prev"'.



> Again, I don’t see anything in the draft that would preclude
> this use, and as far as I can tell, aggregators which support
> the draft should have no trouble handling this scenario
> correctly.

...unless they expected only to find one "prev" link per document.


> Note how the archive directory feed being static makes this
> painlessly possible, while it would be a pain to achieve
> something similar using the paginated approach with local
> “prev” links (you’d need to go back and change the previously
> newest old version every time a new one was added).

I don't see why this would be any more difficult.  The paginated
approach could easily use static documents that never need to be
updated, as I described earlier.  I'll re-explain at the end of this
email.



> It would in fact require a “prev” link to what is actually the “next”
> page.
>
> Funnily enough, I don’t see anything in the draft that would
> preclude this counterintuitive use of the “prev” link to point
> to the “next” version

Could you explain what you mean by that?


> I’d much rather have a single archive feed containing all
> entries, and use RFC3229+feed to return partial versions of it;

That might be good for those who can support it, but many people won't
be able to.  On the other hand, if that single feed grows to where it's
hundreds of MB, it could cause real problems if someone requests the
whole thing or a large portion of it.



Getting back to how to use static documents for a chain of instances, 
that could easily be done as follows. The following assumes that the 
current feed document and the archive documents will each contain 15 
entries.


The first 15 instances of the feed document do not contain a "prev" 
link (assuming one entry is added each time).


When the 16th entry is added, a static document is created containing 
the first 15 entries, and a "prev" link pointing to it is added to the 
current feed document. This link remains unchanged until the 31st entry 
is added.


When the 31st entry is added, another static document is created 
containing the 16th through 30th entries. It has a prev link pointing 
to the first static document. The current feed document's prev link is 
updated to point to the second static document, and it continues to 
point to the second static document until the 46th entry is added.


When the 46th entry is added, a third static document is created 
containing the 31st through 45th entries, etc.
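
A sketch of how the links line up once the 31st entry has been added
(file names are only placeholders):

<!-- the current feed document: the 15 most recent entries, plus -->
<feed>
  ...
  <link rel="prev" type="application/atom+xml"
        href="http://example.com/archive-2.atom"/>
</feed>

<!-- http://example.com/archive-2.atom: entries 16-30, written once -->
<feed>
  ...
  <link rel="prev" type="application/atom+xml"
        href="http://example.com/archive-1.atom"/>
</feed>

<!-- http://example.com/archive-1.atom: entries 1-15, no prev link -->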


If you want to reduce the number of requests required to get the entire
history (which I don't imagine would happen often enough to necessarily
be worth bothering with), you could put more entries into each static
document.  If you didn't correspondingly increase the number of entries
in the current feed document, you'd have to update the most recent
static document a number of times rather than only outputting it once
as described above, but even then it's only ever the most recent static
document that needs updating.




Re: More on Atom XML signatures and encryption

2005-06-29 Thread Paul Hoffman


At 12:47 PM -0700 6/29/05, James M Snell wrote:
> 1. After going through a bunch of potential XML encryption use
> cases, it really doesn't seem to make any sense at all to use XML
> Encryption below the document element level.  The I-D will not cover
> anything about encryption of Atom documents as there are really no
> special considerations that are specific to Atom.


Good.

> 2. The I-D will allow a KeyInfo element to be included as a child of
> the atom:feed, atom:entry and atom:source elements.  These will be
> used to identify the signing key. (e.g. the KeyInfo in the Signature
> can reference another KeyInfo contained elsewhere in the Feed).


This is OK from a security standpoint, but why have it? Why not 
always have the signature contain all the validating information?


> 3. When signing complete Atom documents (atom:feed and top level
> atom:entry), Inclusive Canonicalization with no pre-c14n
> normalization is required.


There seem to be many more interoperability issues with Inclusive
Canonicalization than with Exclusive. What is your reasoning here?


> 4. The signature should cover the signing key. (e.g. if an x509 cert
> stored externally from the feed is used, the Signature should
> reference and cover that x509 cert).  Failing to do so opens up a
> security risk.


Please explain the "security risk". I probably disagree with this 
requirement, but want to hear your risk analysis.


> 5. When signing individual atom:entry elements within a feed,
> Exclusive Canonicalization MUST be used.  If a separate KeyInfo is
> used to identify the signing key, it MUST be contained as a child of
> either the entry or source element.  A source element SHOULD be
> included in the entry.


Why is this different than #3?

> 6. If an entry contains any "enclosure" links, the digital signature
> SHOULD cover the referenced resources.  Enclosure links that are not
> covered are considered untrusted and pose a potential security risk.


Fully disagree. We are signing the bits in the document, not the
outside. There is no "security risk"; those items are simply unsigned.


> 7. If an entry contains a content element that uses @src, the
> digital signature MUST cover the referenced resource.


Fully disagree.

> 8. Aggregators and Intermediaries MUST NOT alter/augment the content
> of digitally signed entry elements.


Also disagree, but for a different reason. Aggregators and 
intermediaries should be free to diddle bits if they strip the 
signatures that they have broken.


> 9. In addition to serving as a message authenticator, the Signature
> may be used by implementations to assert that potentially
> untrustworthy content within a feed can be trusted (e.g. binary
> enclosures, scripts, etc)


How will you assert that?


> 10. The I-D will not introduce any new elements or attributes


Thank you!

> 11. Certain types of [X]HTML content will be forbidden in unsigned
> feeds, e.g. [...]

Re: FWD: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread A. Pagaltzis

Hi Mark,

* Mark Nottingham <[EMAIL PROTECTED]> [2005-06-28 22:40]:
>This document specifies mechanisms that allow feed
>publishers to  give hints about the nature of the feed's
>statefulness, and a means of retrieving ^missed^ entries
>from a stateful feed.

I agree with Antone Roundy that the “this” link is unnecessary
for the reasons already stated.

My first thought upon reading the draft was what I assume is
what Stefan Eissing said: I would rather have a single,
entry-less “archive hub” feed which contains “prev” links to
*all* previous instances, leading to a setup like

http://example.com/latest
└─> http://example.com/archive/feed/
    ├─> http://example.com/archive/feed/2005/05/
    ├─> http://example.com/archive/feed/2005/04/
    ├─> http://example.com/archive/feed/2005/03/
    ├─> http://example.com/archive/feed/2005/02/
    └─> http://example.com/archive/feed/2005/01/

so that it’s only necessary for the aggregator to fetch one
document to find out about all previous versions. It seems
cleaner and more robust to keep a global history list, rather
than encoding it implicitly as a chain of documents.

I don’t see anything in the draft that would preclude this use,
and as far as I can tell, aggregators which support the draft
should have no trouble handling this scenario correctly. Is it
acceptable, or did you intend to outlaw it? If yes, what is the
reasoning?

In fact, I’d probably go one step further and add a “prev”
link to each version, which points back to the archive list
feed.

http://example.com/latest
└─> http://example.com/archive/feed/ <──────────────┐
    ├─> http://example.com/archive/feed/2005/05/ ───┤
    ├─> http://example.com/archive/feed/2005/04/ ───┤
    ├─> http://example.com/archive/feed/2005/03/ ───┤
    ├─> http://example.com/archive/feed/2005/02/ ───┤
    └─> http://example.com/archive/feed/2005/01/ ───┘

In that way, any of the files can be copied around the place,
but they never lose their association with the originals.
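
For example (markup only illustrative), each monthly archive instance
would then carry, in addition to its entries, something like

    <link rel="prev" type="application/atom+xml"
          href="http://example.com/archive/feed/"/>

pointing back at the archive hub.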

Again, I don’t see anything in the draft that would preclude
this use, and as far as I can tell, aggregators which support
the draft should have no trouble handling this scenario
correctly. Is it acceptable, or did you intend to outlaw it? If
yes, what is the reasoning?

Note how the archive directory feed being static makes this
painlessly possible, while it would be a pain to achieve
something similar using the paginated approach with local
“prev” links (you’d need to go back and change the previously
newest old version every time a new one was added). It would
in fact require a “prev” link to what is actually the “next”
page.

Funnily enough, I don’t see anything in the draft that would
preclude this counterintuitive use of the “prev” link to point
to the “next” version, and as far as I can tell, aggregators
which support the draft [etc]?

Overall, I must say this feels kludgy to me, any way I turn it.
I’d much rather have a single archive feed containing all
entries, and use RFC3229+feed to return partial versions of it;
as far as I can tell, this is a use case which your draft does
*NOT* allow. Is that so?

Regards,
-- 
Aristotle Pagaltzis // 



Re: Dealing with namespace prefixes when syndicating signed entries

2005-06-29 Thread James M Snell


After digging through things a bit I *believe* the Exclusive 
Canonicalization spec handles this.  It was designed to allow a chunk of 
signed XML to be portable from one containing element to another.  
http://www.w3.org/TR/2002/REC-xml-exc-c14n-20020718/ 


Antone Roundy wrote:



> [...]
> Perhaps a reasonable way to deal with the namespace prefix conflict
> would be for the signature to be applied after a transform that
> yielded this (putting full namespace names in where the prefixes were):
> [...]
> Unprefixed attributes would naturally remain unprefixed, but elements
> in the default namespace would need to have their namespace names
> prepended.







Dealing with namespace prefixes when syndicating signed entries

2005-06-29 Thread Antone Roundy


Mulling more...

Let's say an aggregator is putting these two entries into the same 
aggregate feed:



<entry xmlns="[atom's namespace]" xmlns:x="[extension namespace A]">
  ...
  <x:foo>...</x:foo>
  [signature]
</entry>

<entry xmlns="[atom's namespace]" xmlns:x="[extension namespace B]">
  ...
  <x:foo>...</x:foo>
  [signature]
</entry>



Perhaps a reasonable way to deal with the namespace prefix conflict 
would be for the signature to be applied after a transform that yielded 
this (putting full namespace names in where the prefixes were):


<[atom's namespace]:entry>
  [signature]
  <[extension namespace A]:foo>...</[extension namespace A]:foo>
  ...
</[atom's namespace]:entry>


Unprefixed attributes would naturally remain unprefixed, but elements 
in the default namespace would need to have their namespace names 
prepended.




Re: Annotating signed entries (was Re: More on Atom XML signatures and encryption)

2005-06-29 Thread James M Snell


Another possible alternative approach would be to have signed entries 
include a special container for metadata additions that is expressly not 
covered by the Signature via a Transform. (the name "annotations" for 
the tag is just a strawman for discussion purposes)


e.g.

<entry>
  ...
  <annotations>
    [metadata additions by aggregators/intermediaries, excluded from
     the entry's Signature by a Transform]
  </annotations>
  <Signature>...</Signature>
</entry>

Annotating signed entries (was Re: More on Atom XML signatures and encryption)

2005-06-29 Thread Antone Roundy


On Wednesday, June 29, 2005, at 01:47  PM, James M Snell wrote:
> 8. Aggregators and Intermediaries MUST NOT alter/augment the content
> of digitally signed entry elements.



Just mulling over things...

Obviously, we don't have any way to annotate signed entries without 
breaking the signature.  I hesitate to introduce new complexity, so I 
don't know whether I LIKE the idea I'm about to write about, but here 
it is.  If you want to annotate a signed entry, or even annotate an 
unsigned one but keep your annotations separate, you might do something 
like this:



<feed>
  [feed metadata]
  <annotation entry="foo">
    [the entry's signature goes here]
    [this annotation could be signed here]
    ...
    ...
  </annotation>
  ...
  <entry>
    <id>foo</id>
    [entry's signature here if signed]
    ...
  </entry>
</feed>



Notes:
1) Including the entry's signature in the annotation is optional, but
recommended if the entry is signed and the annotation is signed.

2) Multiple annotations could point to the same entry
3) It could be requested that aggregators forward annotations along 
with their entries...but of course, that's optional, and they could 
certainly be dropped at the request of the end user if they only want 
to see the originals.
4) It might be recommended or required that annotation elements
appear before the entries they annotate (whether above all entries or
interspersed with them) to make life easier for processors that
finalize their processing of entries as soon as they hit </entry>
rather than doing it after they've parsed the whole document.
5) Aggregators COULD attach annotations from various sources when 
outputting entries, even if those annotations never appeared together 
within a feed before.

6) I don't see any way to choose between conflicting annotations.



Re: More on Atom XML signatures and encryption

2005-06-29 Thread James M Snell


Ok, I've been going back through all of the discussion on this thread 
and use scenarios, etc and should have an I-D ready for this by this 
weekend.  Here's the summary (in no particular order)


1. After going through a bunch of potential XML encryption use cases, it 
really doesn't seem to make any sense at all to use XML Encryption below 
the document element level.  The I-D will not cover anything about 
encryption of Atom documents as there are really no special 
considerations that are specific to Atom.


2. The I-D will allow a KeyInfo element to be included as a child of the
atom:feed, atom:entry and atom:source elements.  These will be used to 
identify the signing key. (e.g. the KeyInfo in the Signature can 
reference another KeyInfo contained elsewhere in the Feed).


3. When signing complete Atom documents (atom:feed and top level 
atom:entry), Inclusive Canonicalization with no pre-c14n normalization 
is required.


4. The signature should cover the signing key. (e.g. if an x509 cert
stored externally from the feed is used, the Signature should reference 
and cover that x509 cert).  Failing to do so opens up a security risk.


5. When signing individual atom:entry elements within a feed, Exclusive
Canonicalization MUST be used.  If a separate KeyInfo is used to
identify the signing key, it MUST be contained as a child of either the
entry or source element.  A source element SHOULD be included in the
entry.  (A rough sketch follows at the end of this message.)

6. If an entry contains any "enclosure" links, the digital signature 
SHOULD cover the referenced resources.  Enclosure links that are not 
covered are considered untrusted and pose a potential security risk


7. If an entry contains a content element that uses @src, the digital 
signature MUST cover the referenced resource.


8. Aggregators and Intermediaries MUST NOT alter/augment the content of 
digitally signed entry elements.


9. In addition to serving as a message authenticator, the Signature may 
be used by implementations to assert that potentially untrustworthy 
content within a feed can be trusted (e.g. binary enclosures, scripts, etc)


10. The I-D will not introduce any new elements or attributes

11. Certain types of [X]HTML content will be forbidden in unsigned
feeds, e.g. [...]
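
A rough sketch of points 2 and 5 for a single signed entry (everything
outside the xmldsig vocabulary, plus ordering and values, is only
illustrative):

<entry xmlns="[atom's namespace]"
       xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <id>tag:example.org,2005:entry-1</id>
  <source>
    <!-- point 5: a source element SHOULD be included; a separate
         KeyInfo identifying the signing key may be a child of the
         entry or of the source -->
    <ds:KeyInfo Id="signing-key">
      <ds:KeyName>...</ds:KeyName>
    </ds:KeyInfo>
  </source>
  ...
  <ds:Signature>
    <ds:SignedInfo>
      <!-- point 5: Exclusive Canonicalization for an entry signed
           within a feed -->
      <ds:CanonicalizationMethod
          Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      ...
    </ds:SignedInfo>
    <ds:SignatureValue>...</ds:SignatureValue>
    <!-- point 2: this KeyInfo can reference the KeyInfo above -->
    <ds:KeyInfo>
      <ds:RetrievalMethod URI="#signing-key"/>
    </ds:KeyInfo>
  </ds:Signature>
</entry>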

Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread James M Snell


Hey Mark,

A few comments after my first read through on this:

1. This appears to be addressed at solving the same problem as Bob 
Wyman's RFC3229+feed proposal 
[http://bobwyman.pubsub.com/main/2004/09/using_rfc3229_w.html].  Do you 
have any empirical data similar to what Bob provides @
http://bobwyman.pubsub.com/main/2004/10/massive_bandwid.html that would 
indicate that your approach is a better solution to this problem?  These 
are actually not mutually exclusive solutions, they're just different 
and could be used for different scenarios -- e.g. Bob's tends to make a 
lot of sense for blog dashboard feeds like what we use within IBM to 
show all post and commenting activity within our internal blogs server 
while your mechanism would work rather well for things like Top Ten 
lists, etc.  I would just like to see a bit of a compare/contrast on the 
two approaches.


2. Is the feed state mechanism a way of paging through the current 
contents of a collection or a snapshot-in-time view of a feed?  That is...


   is it

   A) The collection has a bunch of entries.  Each feed representation
      has 15 entries, and the prev link acts like a paging mechanism
      similar to what we currently see used in search results.  Deleting
      the first ten entries out of the collection would cause all of
      the entries to "shift backwards" in the feeds.

   B) Each prev link is representative of how the feed looked at a
      given point in time, e.g. the feed as it would have appeared at
      a given hour of a given day.

   If it's A, then Bob's RFC3229+feed solution seems much more 
efficient. (see #1)


   If it's B, then I'm wondering why you don't just use an ETag based 
approach, e.g.


  1
  {ETag}

   This would allow clients to only ever have to deal with a single URI 
for a feed and use conditional-gets with ETag to differentiate which 
snapshot of the feed they want to get and would likely make it easier to 
remediate potential recursive reference attacks, (e.g. feed A references 
feed B which references feed C which is a blind redirect to Feed A).


3. Microsoft's RSS Lists spec uses a treatAs element to attach behavioral
semantics to a feed.  This proposal uses a Stateful element to attach
behavioral semantics.  It would be nice if we could come up with a
relatively simple and standardizable way of attaching behavioral
semantics.  For example, a standardized treatAs element:


   <treatAs>stateful</treatAs>

   The value of the treatAs element would be a list of tokens with
defined semantics.  Each token SHOULD be registered with IANA.  Unknown
tokens would be ignored.  Incompatible tokens would be ignored, with
"first-in-the-list takes precedence" semantics. For example:


   <treatAs>stateful list</treatAs>

   Indicates that the feed should be treated as a list whose past 
states can be queried using the kind of mechanism you've defined.


- James


Mark Nottingham wrote:



Hi Danny,

Thanks for the comments.

On 29/06/2005, at 1:57 AM, Danny Ayers wrote:



Trivial: might 1/0 be confusing compared to something clearly binary:
true/false or yes/no, and difficult to extend: true/false/unknown



ack


What is the Stateful nature of a feed *without* a Stateful element,
the default if you will? (Could be per-case or 'indeterminate', but I
think some comment on this would be helpful).



Yes, that's the intent; if you don't have a flag, there isn't any  
information about what it is (or isn't), and you act as you would today.



 If you're talking about feed reconstruction, might it not make sense
to have something like an "all" (which could appear as a minimal list
of dated URIs of entries) to avoid countless GET/interpret cycles?



I put that forth in the original Pace  (see history), and it got very negative  
reviews, because it requires a lot of work to maintain, and can be a  
bandwidth hog. I'm of two minds about it.


--
Mark Nottingham http://www.mnot.net/






Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Mark Nottingham


I think you're right... I was concerned about servers doing  
redirection, so that a client might miss the fact that it's already  
seen an archive, but as long as it uses the same identifiers to  
locate documents, it should be fine.


On 29/06/2005, at 6:21 AM, Antone Roundy wrote:




If it's for identification rather than retrieval, maybe it could be  
an Identity Construct...except Identity Constructs got nuked in  
format-06...not necessarily dereferencable.  Another option would  
be to identify whether you need to continue by checking whether  
you've seen the "prev" link before.  Would not that be as reliable  
as checking the "this" link?


On Wednesday, June 29, 2005, at 12:10  AM, Mark Nottingham wrote:



You need to be able to figure out which documents you've seen  
before and which ones you haven't, so you don't recurse down the  
entire stack. Although you can come up with some heuristics to  
determine when you've seen a document before, most (if not all) of  
them can be fooled by particular sequences of entries. Remembering  
which ones you've seen (using their 'this' URI) allows you to  
easily figure this out.



On 28/06/2005, at 8:48 PM, Antone Roundy wrote:




Thinking a little more about this, I'm not sure what the "this"  
link would be used for.  The "prev" link seems to be doing all  
the work, and especially assuming a "batches of 15" sort of  
model, the "this" link seems likely to end up pointing to a  
document that's going to disappear soon 14 times out of 15.






--
Mark Nottingham http://www.mnot.net/











--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



--
Mark Nottingham http://www.mnot.net/



Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Mark Nottingham


Hi Danny,

Thanks for the comments.

On 29/06/2005, at 1:57 AM, Danny Ayers wrote:


> Trivial: might 1/0 be confusing compared to something clearly binary:
> true/false or yes/no, and difficult to extend: true/false/unknown


ack


> What is the Stateful nature of a feed *without* a Stateful element,
> the default if you will? (Could be per-case or 'indeterminate', but I
> think some comment on this would be helpful).


Yes, that's the intent; if you don't have a flag, there isn't any  
information about what it is (or isn't), and you act as you would today.



> If you're talking about feed reconstruction, might it not make sense
> to have something like an "all" (which could appear as a minimal list
> of dated URIs of entries) to avoid countless GET/interpret cycles?


I put that forth in the original Pace  (see history), and it got very negative  
reviews, because it requires a lot of work to maintain, and can be a  
bandwidth hog. I'm of two minds about it.


--
Mark Nottingham http://www.mnot.net/



Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Antone Roundy


On Wednesday, June 29, 2005, at 07:27  AM, Dave Pawson wrote:

>> I guess the answer is:
>> http://example.com/latest is your feed, e.g. containing the latest 10
>> entries
>>
>> http://example.com/archive-1 through n are your "archive" feeds.
>
> Which would mean that the instance at /latest keeps changing?
> I need to keep swapping old ones out, new ones in, i.e. rebuilding
> each time?
>
> I guess that's another reason it feels like a kludge.

Replace http://example.com/latest with http://example.com/atom.xml.  Of 
course the latest document keeps changing and has to be rebuilt and 
replaced each time.  It's the feed document just like what we see 
today.  At least that's how I read what was written 
above--"http://example.com/latest" was intended as the URI to which
you'd subscribe.




Re: FWD: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Dave Pawson

On Wed, 2005-06-29 at 15:03 +0200, Thomas Broyer wrote:
> Dave Pawson wrote:
> > Any one site could now have n instances, each being a feed, the only
> > variant (apart from entries) being the links to previous feeds.
> > If I'm to say *this* is my feed, I guess I point to the most recent...
> > which will change over time?
> >
> > With the example of 15 entries per,
> >
> > feed1 1..15
> > feed4 45..60
> >
> > my 'feed' for my site rolls over from feed1...n as time progresses?
> 
> I guess the answer is:
> http://example.com/latest is your feed, e.g. containing the latest 10 entries
> http://example.com/archive-1 through n are your "archive" feeds.

Which would mean that the instance at /latest keeps changing?
I need to keep swapping old ones out, new ones in, i.e. rebuilding
each time?

  I guess that's another reason it feels like a kludge.


> 
> You can see "latest" as an Atom alternate for your home page (or latest
> news page) and "archive-1" through "archive-n" as Atom alternates for your
> "archive" pages.

I can see the logic of your suggestion. 
  Doesn't seem clean though?


 Other issues.

regards DaveP





Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Antone Roundy


If it's for identification rather than retrieval, maybe it could be an 
Identity Construct...except Identity Constructs got nuked in 
format-06...not necessarily dereferencable.  Another option would be to 
identify whether you need to continue by checking whether you've seen 
the "prev" link before.  Would not that be as reliable as checking the 
"this" link?


On Wednesday, June 29, 2005, at 12:10  AM, Mark Nottingham wrote:

You need to be able to figure out which documents you've seen before 
and which ones you haven't, so you don't recurse down the entire 
stack. Although you can come up with some heuristics to determine when 
you've seen a document before, most (if not all) of them can be fooled 
by particular sequences of entries. Remembering which ones you've seen 
(using their 'this' URI) allows you to easily figure this out.



On 28/06/2005, at 8:48 PM, Antone Roundy wrote:


Thinking a little more about this, I'm not sure what the "this" link 
would be used for.  The "prev" link seems to be doing all the work, 
and especially assuming a "batches of 15" sort of model, the "this" 
link seems likely to end up pointing to a document that's going to 
disappear soon 14 times out of 15.




--
Mark Nottingham http://www.mnot.net/





Re: FWD: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Thomas Broyer


Dave Pawson wrote:
> Any one site could now have n instances, each being a feed, the only
> variant (apart from entries) being the links to previous feeds.
> If I'm to say *this* is my feed, I guess I point to the most recent...
> which will change over time?
>
> With the example of 15 entries per,
>
> feed1 1..15
> feed4 45..60
>
> my 'feed' for my site rolls over from feed1...n as time progresses?

I guess the answer is:
http://example.com/latest is your feed, e.g. containing the latest 10 entries
http://example.com/archive-1 through n are your "archive" feeds.

"latest" is likely to contain entries which are also "archived" in
"archive-n", but I don't see it as a problem (and it doesn't violate Atom
Feed Document rules wrt uniqueness of entries), at most I will retrieve 14
entries (if using 15 entries per "archive feed document") which I already
got from the "live" feed.

You can see "latest" as an Atom alternate for your home page (or latest
news page) and "archive-1" through "archive-n" as Atom alternates for your
"archive" pages.

What I'm wondering is, if I had "archive feeds" on a per-day basis
(instead of "N entries per archive feed", although what follows also
applies) and say I published 3 entries yesterday, 5 entries the day before
and no entry yet today:
 - http://example.net/archive/2005/06/27: 5 entries
 - http://example.net/archive/2005/06/28: 3 entries
 - http://example.net/archive/2005/06/29: doesn't exist yet
Say I have a "live feed" showing the latest 5 entries: it will contain
all 3 of yesterday's entries and the 2 latest entries from 2005/06/27.
Could I "prev"-link to "archive/2005/06/28" or should I try to figure out
the "archive feed" containing the "previous to earliest" entry (here,
"archive/2005/06/27", but it could have been "archive/2005/06/25" if I had
only 2 entries last Monday and none at all on Sunday)?
If I can just link to the "latest archive feed document", shouldn't we
then just have an "archive" link relation in the "live feeds" (similar
to the "List-Archive" header in mail messages) and use "prev" only
between "archive feed documents" (similar to the "Next Page" link in the
HTML list archive)?
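
i.e. (relation names only to illustrate the idea) something like:

<!-- live feed -->
<feed>
  ...
  <link rel="archive" type="application/atom+xml"
        href="http://example.net/archive/2005/06/28"/>
</feed>

<!-- http://example.net/archive/2005/06/28 -->
<feed>
  ...
  <link rel="prev" type="application/atom+xml"
        href="http://example.net/archive/2005/06/27"/>
</feed>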

-- 
Thomas Broyer




Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Stefan Eissing


What little I can add to this, not being an atom boy:

Most versioning systems cannot reliably predict the URI of the next
version. So, embedding the URI of the document inside the document
itself either requires document changes on submitting a new version or
a trial-and-error approach. (WebDAV DeltaV offers a trial-and-error
mechanism.)


One could work around this by inventing a "version history" document  
which keeps the URIs of all feed versions (and the prev/next relations,  
so a reader could find out that there is a newer one also). Since the  
version history URI never changes for a feed, embedding the history URI  
inside the feed poses no challenge.
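
Purely as a sketch, with no particular vocabulary implied: if the
version history were itself a feed, it might simply list the versions
as entries, e.g.

<feed>
  <title>version history of the example feed</title>
  <entry>
    <title>version of 2005-06-28</title>
    <link rel="alternate" type="application/atom+xml"
          href="http://example.com/feed/versions/2005-06-28"/>
  </entry>
  <entry>
    <title>version of 2005-06-27</title>
    <link rel="alternate" type="application/atom+xml"
          href="http://example.com/feed/versions/2005-06-27"/>
  </entry>
</feed>

with the most recent version first.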


I cannot judge whether coming up with an extra document for this
feature is overkill or not. But it would allow for easier integration  
into existing versioning mechanisms.


Best Regards, Stefan - wondering if the version history would be a feed  
of its own...


Am 28.06.2005 um 22:25 schrieb Mark Nottingham:



A New Internet-Draft is available from the on-line Internet-Drafts  
directories.



Title   : Feed History: Enabling Stateful Syndication
Author(s)   : M. Nottingham
Filename: draft-nottingham-atompub-feed-history-00.txt
Pages   : 6
Date: 2005-6-27

   This document specifies mechanisms that allow feed publishers to  
give

   hints about the nature of the feed's statefulness, and a means of
   retrieving ^missed^ entries from a stateful feed.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-nottingham-atompub-feed- 
history-00.txt


To remove yourself from the I-D Announcement list, send a message to
i-d-announce-request at ietf.org with the word unsubscribe in the body  
of the message.

You can also visit https://www1.ietf.org/mailman/listinfo/I-D-announce
to change your subscription settings.


Internet-Drafts are also available by anonymous FTP. Login with the  
username

"anonymous" and a password of your e-mail address. After logging in,
type "cd internet-drafts" and then
"get draft-nottingham-atompub-feed-history-00.txt".

A list of Internet-Drafts directories can be found in
http://www.ietf.org/shadow.html
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt


Internet-Drafts can also be obtained by e-mail.

Send a message to:
mailserv at ietf.org.
In the body type:
"FILE  
/internet-drafts/draft-nottingham-atompub-feed-history-00.txt".


NOTE:   The mail server at ietf.org can return the document in
MIME-encoded form by using the "mpack" utility.  To use this
feature, insert the command "ENCODING mime" before the "FILE"
command.  To decode the response(s), you will need "munpack" or
a MIME-compliant mail reader.  Different MIME-compliant mail  
readers

exhibit different behavior, especially when dealing with
"multipart" MIME messages (i.e. documents which have been split
up into multiple messages), so check your local documentation  
on

how to manipulate these messages.


___
I-D-Announce mailing list
I-D-Announce at ietf.org
https://www1.ietf.org/mailman/listinfo/i-d-announce


--
Mark Nottingham http://www.mnot.net/






Re: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Danny Ayers

On 6/28/05, Mark Nottingham <[EMAIL PROTECTED]> wrote:

>  Title   : Feed History: Enabling Stateful Syndication

Interesting, sounds like it could be useful. On a first read, what came to mind:

Trivial: might 1/0 be confusing compared to something clearly binary: 
true/false or yes/no, and difficult to extend: true/false/unknown

What is the Stateful nature of a feed *without* a Stateful element,
the default if you will? (Could be per-case or 'indeterminate', but I
think some comment on this would be helpful).

 If you're talking about feed reconstruction, might it not make sense
to have something like an "all" (which could appear as a minimal list
of dated URIs of entries) to avoid countless GET/interpret cycles?

Cheers,
Danny.

-- 

http://dannyayers.com



Re: FWD: I-D ACTION:draft-nottingham-atompub-feed-history-00.txt

2005-06-29 Thread Dave Pawson

On Tue, 2005-06-28 at 13:25 -0700, Mark Nottingham wrote:

> This document specifies mechanisms that allow feed publishers to  
> give
> hints about the nature of the feed's statefulness, and a means of
> retrieving ^missed^ entries from a stateful feed.

This tries to address my concern earlier, about an author generating
entries, with only one feed. 
  I like the basic idea of enabling differentiation between 'stable' and
other feeds.

Now I'm confused about the feed metadata and stability of feed uri's.

Any one site could now have n instances, each being a feed, the only
variant (apart from entries) being the links to previous feeds. 
If I'm to say *this* is my feed, I guess I point to the most recent...
which will change over time?

With the example of 15 entries per,

feed1 1..15
feed4 45..60

my 'feed' for my site rolls over from feed1...n as time progresses?

This smells like a kludge to me, Mark. I'd *like* to have a single
URL which is my feed. Using the logic presented, if I'm going to
chunk my feed then I can't have a stable feed URL?

Equally I'm not happy about a moderate blog keeping all entries
in one feed. It won't take long for the instance to become significant
in size.


-- 
Regards, 

Dave Pawson
XSLT + Docbook FAQ
http://www.dpawson.co.uk