Re: More about Extensions

2005-08-09 Thread Henry Story


Sorry to note the obvious, but does this not sound so much like a
good reason we should have engineered atom to *be* RDF? Is this not
exactly one of the many problems that RDF sets out to solve?

Henry Story

On 10 Aug 2005, at 02:34, Tim Bray wrote:


On Aug 9, 2005, at 5:11 PM, David Powell wrote:



No, we just need to warn publishers (and extension authors) that the
base URI of Simple Extension elements is not significant, and that
they must not expect it to be preserved.



Either the software understands the foreign markup, in which case  
it might recognize a relative URI, in which case it should apply  
the correct base URI, or it doesn't, in which case everything in  
the foreign markup is just a semantics-free string.


The problem could hypothetically arise when someone extracts  
properties from the foreign markup, stuffs them in a tuple store,  
and then when the software that knows what to do with it comes along
and retrieves it and recognizes the relative URI and can't do much  
because the base URI is lost.


So... IF you know how to handle some particular extension, AND IF  
you expect to handle it when the extension data has been ripped out  
of the feed and stored somewhere without any context, THEN you  
shouldn't use a relative reference.  Alternatively, IF you want to  
empower extensions to process the data they understand, AND IF you
want to rip that data out of the feed and store it somewhere, THEN  
it would be smart to provide software an interface to retrieve  
context, such as feed-level metadata and the base URI.
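[Editorial aside: Tim's point about lost context can be sketched with Python's standard URI resolution. The URIs below are hypothetical; the point is that a relative reference is only meaningful paired with its base URI, which is exactly the context that is lost once extension data is stored on its own.]

```python
from urllib.parse import urljoin

# A relative reference found inside foreign markup only has meaning
# together with its in-scope base URI (hypothetical values).
base_uri = "http://example.org/feeds/main.xml"
relative_ref = "images/logo.png"

# With the base URI preserved, resolution works:
resolved = urljoin(base_uri, relative_ref)
assert resolved == "http://example.org/feeds/images/logo.png"

# Ripped out of the feed and stored without context, the same string
# is just "images/logo.png" -- there is nothing left to resolve it against.
```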


Sounds like implementor's-guide material to me.

And, to whoever said relative references are "fragile": Wrong.   
When you have to move bales of web content around from one place to  
another, and just supposing hypothetically that you have internal  
links, relative references are robust, absolute URIs are fragile.  -Tim






Re: Expires extension draft (was Re: Feed History -02)

2005-08-09 Thread Henry Story


There is an interesting problem of how this interacts with the  
history tag.


If you set an expiry date on a feed


  ...
  
  


then what are you setting it on? Well not the document, clearly, as  
you have
pointed out since HTTP headers deal with that. So it must be on the  
feed. And

of course a feed is identified by its id.

Now we have to make sure we avoid contradictions where different feed
documents describing the same feed state that the feed has different
expiry dates.

Eg:

the main feed--

  August 25 2006
  tag:example.com,2000/feed1
  
  
  http://example.com/archive1

---

The above is a partial description of the feed tag:example.com,2000/feed1

The history:previous link points to another document that gives
us more information about that feed.

---the archive feed

  August 15 2006
  tag:example.com,2000/feed1
  
  
  http://example.com/archive2

---

Now of course we have a feed id with two expiry dates. Which one is  
correct?

In graph terms we end up with something like this:

tag:example.com,2000/feed1
   |expires-->August 25 2006
   |expires-->August 15 2006
   |entry>...
   |entry>...
   |entry>...
   |entry>...

One has the feeling that the expires relation should be functional,
i.e. have only one value.
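[Editorial aside: the contradiction Henry describes can be sketched in a few lines. If expires is treated as a functional property, merging statements from the main feed document and the archive document must detect the clash. Feed id and dates are the hypothetical ones from the example above.]

```python
graph = {}  # feed id -> expiry date (a functional property: one value)

def assert_expires(feed_id, date):
    """Record an expires statement, refusing a second distinct value."""
    if feed_id in graph and graph[feed_id] != date:
        raise ValueError(f"conflicting expiry dates for {feed_id}")
    graph[feed_id] = date

assert_expires("tag:example.com,2000/feed1", "August 25 2006")  # main feed
try:
    assert_expires("tag:example.com,2000/feed1", "August 15 2006")  # archive
except ValueError as e:
    print(e)  # the functional constraint is violated
```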

This makes me think again that what I was looking for (that the document
in history:previous not change, so that one can work out when to stop
fetching documents) can in fact be entirely taken care of by the HTTP
expiry dates and cache control mechanism. Of course if this is so, I
think it should be noted clearly in the history spec. ((btw. Is it easy
to set expiry dates for documents served by Apache?))


Henry Story


On 10 Aug 2005, at 04:46, James M Snell wrote:


This is fairly quick and off-the-cuff, but here's an initial draft  
to get the ball rolling..


 http://www.snellspace.com/public/draft-snell-atompub-feed-expires.txt

- James

Henry Story wrote:



To answer my own question

[[
Interesting... but why have a limit of one year? For archives, I   
would like a limit of

forever.
]]

 I found the following in the HTTP spec

[[
   To mark a response as "never expires," an origin server sends an
   Expires date approximately one year from the time the response is
   sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one
   year in the future.
]]

(though that still does not explain why.)

Now I am wondering if the http mechanism is perhaps all that is  
needed
for what I want with the unchanging archives. If it is then  
perhaps this
could be explained in the Feed History RFC. Or are there other   
reasons to

add and "expires" tag to the document itself?

Henry Story

On 9 Aug 2005, at 19:09, James M Snell wrote:







rules as atom:author elements.

Here it is: 




The expires and max-age elements look fine. I hesitate at  
bringing  in a caching discussion.  I'm much more comfortable  
leaving the  definition of caching rules to the protocol level  
(HTTP) rather  than the format extension level.  Namely, I don't  
want to have to  go into defining rules for how HTTP headers that  
affect caching  interact with the expires and max-age elements...  
IMHO, there is  simply no value in that.
The expires and max-age extension elements affect the feed /  
entry  on the application level not the document level.  HTTP  
caching  works on the document level.





Adding max-age also means defining IntegerConstruct and disallowing
white space around it. Formerly, it was OK as a text construct, but
the white space issues change that.





This is easy enough.
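[Editorial aside: the whitespace concern raised above can be sketched. A plain int() parse would accept surrounding whitespace, which is exactly what an IntegerConstruct that disallows white space has to reject; a strict parse like this one is a minimal illustration, not the draft's definition.]

```python
import re

def parse_max_age(text):
    """Strictly parse a max-age value: digits only, no surrounding whitespace."""
    if not re.fullmatch(r"[0-9]+", text):
        raise ValueError(f"not a valid integer construct: {text!r}")
    return int(text)

assert parse_max_age("600") == 600

# int(" 600 ") would happily return 600; the strict parse rejects it.
for bad in (" 600", "600\n", "six hundred"):
    try:
        parse_max_age(bad)
        raise AssertionError("should have been rejected")
    except ValueError:
        pass
```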



Also, we should decide whether cache information is part of the   
signature.

I can see arguments either way.





-1.  let's let caching be handled by the transport layer.











Re: Expires extension draft (was Re: Feed History -02)

2005-08-09 Thread James M Snell


Eric Scheid wrote:


On 10/8/05 12:46 PM, "James M Snell" <[EMAIL PROTECTED]> wrote:

 


This is fairly quick and off-the-cuff, but here's an initial draft to
get the ball rolling..

http://www.snellspace.com/public/draft-snell-atompub-feed-expires.txt

   



Looks good, I think it does need a little bit of prose explaining that this
has nothing to do with caching, and should not be used in scheduling when to
revisit/refresh/expire local copies of the resource.

 

Absolutely.  I put this together rather quickly fully knowing that I 
would need to revisit the text to fully explain the use of the 
extension.  I'll be taking a few whacks at that over the coming days as 
time allows.



Similarly, if I understand correctly, when you write
  "The 'max-age' extension element is used to indicate
   the maximum age of a feed or entry."
you are referring to the max-age until the informational content of the
respective feed or entry expires. And similarly with age:expires. Yes?

 

Yes. I will be clarifying this in the draft.  The extensions are really 
only reflective of the informational content in individual entries.  As 
we do with the atom:author element, expires and max-age can be specified 
at the feed or source level, but apply to the content of individual 
entries. e.g.



  ...
  
  


is equivalent to


  ...
  ...



Aside: a perfect example of what sense of 'expires' is in the I-D itself...

   Network Working Group
   Internet-Draft
   Expires: January 2, 2006

 


Excellent point :-)

- James




Re: Feed History -02

2005-08-09 Thread James M Snell


First off, let me stress that I am NOT talking about caching scenarios 
here...  (my use of the terms "application layer" and "transport layer" 
were an unfortunate mistake on my part that only served to confuse my point)


Let's get away from the multiprotocol question for a bit (it never leads 
anywhere constructive anyway)... Let's consider an aggregator scenario. 
Take an entry from a feed that is supposed to expire after 10 days.  The 
feed document is served up to the aggregator with the proper HTTP 
headers for expiration.  The entry is extracted from the original feed 
and dumped into an aggregated feed.  Suppose each of the entries in the 
aggregated feed are supposed to have their own distinct expirations.  
How should the aggregator communicate the appropriate expirations to the 
subscriber?  Specifying expirations on the HTTP level does not allow me 
to specify expirations for individual entries within a feed.  Use case: 
an online retailer wishes to produce a "special offers" feed.  Each 
offer in the feed is a distinct entity with its own terms and its own 
expiration:  e.g. some offers are valid for a week, other offers are 
valid for two weeks, etc.  The expiration of the offer (a business level 
construct) is independent of whether the feed is being cached or
not (a protocol level construct); publishing a new version of the feed 
(e.g. by adding a new offer to the feed) should have no impact on the 
expiration of prior offers published to the feed.
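[Editorial aside: James's "special offers" use case can be sketched. Each entry carries its own expiry, independent of any HTTP caching of the feed document as a whole. Entry ids and offsets are made up for illustration.]

```python
from datetime import datetime, timedelta, timezone

now = datetime(2005, 8, 10, tzinfo=timezone.utc)
entries = [
    {"id": "tag:example.com,2005:offer-1", "expires": now + timedelta(days=7)},
    {"id": "tag:example.com,2005:offer-2", "expires": now + timedelta(days=14)},
    {"id": "tag:example.com,2005:offer-3", "expires": now - timedelta(days=1)},
]

# An aggregator can drop expired offers entry by entry -- something a
# single HTTP Expires header on the whole document cannot express.
live = [e["id"] for e in entries if e["expires"] > now]
assert live == ["tag:example.com,2005:offer-1", "tag:example.com,2005:offer-2"]
```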


Again, I am NOT attempting to reinvent an abstract or transport-neutral 
caching mechanism in the same sense that the atom:updated element is not 
attempting to reinvent Last-Modified or that the via link relation is 
not attempting to reinvent the Via header, etc.  They serve completely 
different purposes. The expires and max-age extensions I am proposing 
should NOT be used for cache control of the Atom documents in which they 
appear.


> I think we can declare victory here by simply a) using whatever caching
> mechanism is available, and b) designating a "won't change" flag.

Speaking *strictly* about cache control of Atom documents, +1.  No 
document level mechanisms for cache control are necessary.


- James


Mark Nottingham wrote:

HTTP isn't a transport protocol, it's a transfer protocol; i.e., the  
caching information (and other entity metadata) are *part of* the  
entity, not something that's conceptually separate.


The problem with having an "abstract" or "transport-neutral" concept  
of caching is that it leaves you with an awkward choice; you can  
either a) exactly replicate the HTTP caching model, which is  
difficult to do in other protocols, b) "dumb down" HTTP caching to a  
subset that's "neutral", or c) introduce a contradictory caching  
model and suffer the clashes between HTTP caching and it.


This is the same road that Web services sometimes tries to go down,  
and it's a painful one; coming up with the grand, protocol-neutral  
abstraction that enables all of the protocol-specific features is  
hard, and IMO not necessary. Ask yourself: are there any situations  
where you *have* to be able to seamlessly switch between protocols,  
or is it just a "convenience?"


I think we can declare victory here by simply a) using whatever  
caching mechanism is available, and b) designating a "won't change"  
flag.







On 09/08/2005, at 11:53 AM, James M Snell wrote:


Henry Story wrote:


Now I am wondering if the http mechanism is perhaps all that is  needed
for what I want with the unchanging archives. If it is then  perhaps 
this
could be explained in the Feed History RFC. Or are there other   
reasons to

add and "expires" tag to the document itself?



On the application level, a feed or entry may expire or age  
independently of whatever caching mechanisms may be applied at the  
transport level.  For example, imagine a source that publishes  
special offers in the form of Atom entries that expire at a given  
point in time.  Now suppose that those entries are being  distributed 
via XMPP and HTTP.  It is helpful to have a transport  independent 
expiration/max-age mechanism whose semantics operate on  the 
application layer rather than the transport layer.


- James






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems






Re: Expires extension draft (was Re: Feed History -02)

2005-08-09 Thread Eric Scheid

On 10/8/05 12:46 PM, "James M Snell" <[EMAIL PROTECTED]> wrote:

> This is fairly quick and off-the-cuff, but here's an initial draft to
> get the ball rolling..
> 
> http://www.snellspace.com/public/draft-snell-atompub-feed-expires.txt
> 

Looks good, I think it does need a little bit of prose explaining that this
has nothing to do with caching, and should not be used in scheduling when to
revisit/refresh/expire local copies of the resource.

Similarly, if I understand correctly, when you write
   "The 'max-age' extension element is used to indicate
the maximum age of a feed or entry."
you are referring to the max-age until the informational content of the
respective feed or entry expires. And similarly with age:expires. Yes?

Aside: a perfect example of what sense of 'expires' is in the I-D itself...

Network Working Group
Internet-Draft
Expires: January 2, 2006

:-)

e.



Re: Feed History -02

2005-08-09 Thread Mark Nottingham


HTTP isn't a transport protocol, it's a transfer protocol; i.e., the  
caching information (and other entity metadata) are *part of* the  
entity, not something that's conceptually separate.


The problem with having an "abstract" or "transport-neutral" concept  
of caching is that it leaves you with an awkward choice; you can  
either a) exactly replicate the HTTP caching model, which is  
difficult to do in other protocols, b) "dumb down" HTTP caching to a  
subset that's "neutral", or c) introduce a contradictory caching  
model and suffer the clashes between HTTP caching and it.


This is the same road that Web services sometimes tries to go down,  
and it's a painful one; coming up with the grand, protocol-neutral  
abstraction that enables all of the protocol-specific features is  
hard, and IMO not necessary. Ask yourself: are there any situations  
where you *have* to be able to seamlessly switch between protocols,  
or is it just a "convenience?"


I think we can declare victory here by simply a) using whatever  
caching mechanism is available, and b) designating a "won't change"  
flag.







On 09/08/2005, at 11:53 AM, James M Snell wrote:


Henry Story wrote:
Now I am wondering if the http mechanism is perhaps all that is  
needed
for what I want with the unchanging archives. If it is then  
perhaps this
could be explained in the Feed History RFC. Or are there other   
reasons to

add and "expires" tag to the document itself?


On the application level, a feed or entry may expire or age  
independently of whatever caching mechanisms may be applied at the  
transport level.  For example, imagine a source that publishes  
special offers in the form of Atom entries that expire at a given  
point in time.  Now suppose that those entries are being  
distributed via XMPP and HTTP.  It is helpful to have a transport  
independent expiration/max-age mechanism whose semantics operate on  
the application layer rather than the transport layer.


- James






--
Mark Nottingham   Principal Technologist
Office of the CTO   BEA Systems



Expires extension draft (was Re: Feed History -02)

2005-08-09 Thread James M Snell


This is fairly quick and off-the-cuff, but here's an initial draft to 
get the ball rolling..


 http://www.snellspace.com/public/draft-snell-atompub-feed-expires.txt

- James

Henry Story wrote:


To answer my own question

[[
Interesting... but why have a limit of one year? For archives, I  
would like a limit of

forever.
]]

 I found the following in the HTTP spec

[[
   To mark a response as "never expires," an origin server sends an
   Expires date approximately one year from the time the response is
   sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one
   year in the future.
]]

(though that still does not explain why.)

Now I am wondering if the http mechanism is perhaps all that is needed
for what I want with the unchanging archives. If it is then perhaps this
could be explained in the Feed History RFC. Or are there other  
reasons to

add and "expires" tag to the document itself?

Henry Story

On 9 Aug 2005, at 19:09, James M Snell wrote:





rules as atom:author elements.

Here it is: 



The expires and max-age elements look fine. I hesitate at bringing  
in a caching discussion.  I'm much more comfortable leaving the  
definition of caching rules to the protocol level (HTTP) rather  than 
the format extension level.  Namely, I don't want to have to  go into 
defining rules for how HTTP headers that affect caching  interact 
with the expires and max-age elements... IMHO, there is  simply no 
value in that.
The expires and max-age extension elements affect the feed / entry  
on the application level not the document level.  HTTP caching  works 
on the document level.




Adding max-age also means defining IntegerConstruct and disallowing
white space around it. Formerly, it was OK as a text construct, but
the white space issues change that.




This is easy enough.


Also, we should decide whether cache information is part of the  
signature.

I can see arguments either way.




-1.  let's let caching be handled by the transport layer.








Re: More about Extensions

2005-08-09 Thread James M Snell


Tim Bray wrote:



Sounds like implementor's-guide material to me.


+1

- James



Re: More about Extensions

2005-08-09 Thread Robert Sayre

On 8/9/05, Tim Bray <[EMAIL PROTECTED]> wrote:
> And, to whoever said relative references are "fragile": Wrong.  When
> you have to move bales of web content around from one place to
> another, and just supposing hypothetically that you have internal
> links, relative references are robust, absolute URIs are fragile.

And when you have bales of web content that have to be displayed on a
super-wide variety of display technologies, many of which have limited
support for changing and mixing base URIs (like HTML), relative
references are fragile. This is the kind of situation you run into
with syndication formats, and it should go in the implementation
guide.

Robert Sayre



Re: More about Extensions

2005-08-09 Thread Robert Sayre

On 8/9/05, David Powell <[EMAIL PROTECTED]> wrote:
> 
> Publishers should expect that relative refs used in atom:link will
> work, but publishers should expect that relative refs used in Simple
> Extensions will break.

Disagree. We have no idea what people will do with this, or where they
will be deployed. You're suggesting adding implementation advice,
since the content of a simple extension element is not defined as a
URI reference. By your logic, we have to explicitly clarify that
atom:updated is not subject to xml:base processing. Sorry, I strongly
disagree.

Robert Sayre



Re: More about Extensions

2005-08-09 Thread Tim Bray


On Aug 9, 2005, at 5:11 PM, David Powell wrote:


No, we just need to warn publishers (and extension authors) that the
base URI of Simple Extension elements is not significant, and that
they must not expect it to be preserved.


Either the software understands the foreign markup, in which case it  
might recognize a relative URI, in which case it should apply the  
correct base URI, or it doesn't, in which case everything in the  
foreign markup is just a semantics-free string.


The problem could hypothetically arise when someone extracts  
properties from the foreign markup, stuffs them in a tuple store, and  
then when the software that knows what to do with it comes along and  
retrieves it and recognizes the relative URI and can't do much  
because the base URI is lost.


So... IF you know how to handle some particular extension, AND IF you  
expect to handle it when the extension data has been ripped out of  
the feed and stored somewhere without any context, THEN you shouldn't  
use a relative reference.  Alternatively, IF you want to empower  
extensions to process the data they understand, AND IF you want to  
rip that data out of the feed and store it somewhere, THEN it would  
be smart to provide software an interface to retrieve context, such  
as feed-level metadata and the base URI.


Sounds like implementor's-guide material to me.

And, to whoever said relative references are "fragile": Wrong.  When  
you have to move bales of web content around from one place to  
another, and just supposing hypothetically that you have internal  
links, relative references are robust, absolute URIs are fragile.  -Tim




Re: More about Extensions

2005-08-09 Thread David Powell


Tuesday, August 9, 2005, 11:22:14 PM, Robert Sayre wrote:

> What are we going to do, outlaw strings that happen to look like
> relative references?

No, we just need to warn publishers (and extension authors) that the
base URI of Simple Extension elements is not significant, and that
they must not expect it to be preserved.

We do the same regarding xml:lang already by saying that the element
is not Language Sensitive, which means that the language context is
not significant and that publishers must not expect it to be
preserved.


from Section 2:

> The language context is only significant for elements and attributes
> declared to be "Language-Sensitive" by this specification.

I'd suggest adding something similar to Section 6.4.1, eg:

"The base URI is not significant for Simple Extension elements."


> Relative references are fragile, and people understand why they
> break.

Publishers should expect that relative refs used in atom:link will
work, but publishers should expect that relative refs used in Simple
Extensions will break.
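[Editorial aside: the asymmetry Dave describes can be sketched. An Atom processor resolves the relative href of atom:link against the in-scope base URI, while the text content of a Simple Extension element is handled as an opaque string. The feed below is a minimal hypothetical example; a full processor would walk the whole xml:base ancestor chain, whereas this sketch only reads the document-level base.]

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

DOC = """<feed xmlns="http://www.w3.org/2005/Atom"
      xml:base="http://example.org/feeds/">
  <link href="archive1.xml"/>
  <ext:rating xmlns:ext="http://example.org/ext">img/stars.png</ext:rating>
</feed>"""

root = ET.fromstring(DOC)
base = root.get("{http://www.w3.org/XML/1998/namespace}base")

# atom:link: the href is a URI reference, so resolve it against the base.
link = root.find("{http://www.w3.org/2005/Atom}link")
resolved_href = urljoin(base, link.get("href"))
assert resolved_href == "http://example.org/feeds/archive1.xml"

# Simple Extension: a generic processor keeps only the literal string;
# the base URI is not significant and is not preserved with it.
rating = root.find("{http://example.org/ext}rating")
assert rating.text == "img/stars.png"
```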

-- 
Dave



Re: More about Extensions

2005-08-09 Thread Robert Sayre

On 8/9/05, David Powell <[EMAIL PROTECTED]> wrote:
> If I'm wrong, and the rationale behind Simple Extensions isn't
> important...

Sorry, I don't buy this. You're wrong, but the rationale is important. :)

What are we going to do, outlaw strings that happen to look like
relative references? If you want a generic processor to handle your
extension, you've got atom:link, which will work fine. Maybe you want
the relative reference to point at something relative, no matter where
it ends up. I can't think of why anyone would want to do that, but
maybe they will. Relative references are fragile, and people
understand why they break. None of the other pros for this capability
are affected.

Robert Sayre



Re: Feed History -02

2005-08-09 Thread Mark Nottingham



On 09/08/2005, at 4:07 AM, Henry Story wrote:


But I would really like some way to specify that the next feed  
document is an archive (ie. won't change). This would make it easy  
for clients to know when to stop following the links, ie, when they  
have caught up with the changes since they last looked at the feed.


Perhaps something like this:

http://liftoff.msfc.nasa.gov/2003/04/feed.rss


I'd think that would be more appropriate as an extension to the  
archive itself, wouldn't it? That way, the metadata (the fact that  
it's an archive) is part of the data (the archive feed).


E.g.,


  ...
  


By (current) definition, anything that history:prev points to is an  
archive.


Cheers,


--
Mark Nottingham http://www.mnot.net/



More about Extensions

2005-08-09 Thread David Powell


I still believe that relative URIs shouldn't exist in Simple Extension
constructs [1]. I think that the lack of rationale for there being 2-3
classes of extension construct is proving to be harmful.


Prior to the introduction of Section 6, Atom pretty much said you can
include any foreign markup anywhere. I thought that this conflicted
with the claim made by the charter that:

> Atom consists of:
> * A conceptual model of a resource
> * A concrete syntax for this model

I thought that the model should be separable from the syntax, so that
people can use databases and RDF stores as their back-ends rather than
just XML files. And I thought that it was important that extensions
should be part of that model, rather than only be representable in the
syntax, else extensions would be poor-cousins of the core elements.

Restricting Atom extensions to only being simple string name/value
parameters would ensure that they were represented in the model, but
it would have been too limiting.

So the two classes of Extension construct, Simple and Structured, are
a compromise between constraints and flexibility.

The pros and cons of each class are:

Simple Extension constructs:


  + simple string name/value properties of the feed/entry/person. Easy
to implement generically end-to-end in servers/clients so that
extensions can be deployed generically without requiring "boil the
ocean" acceptance.

  + property semantics as described by section 6.4.1.

  + publishing clients could provide an extension editor, where
metadata fields could be added to the clients form, given a
namespace URI and element name.

  + extensions don't need to be defined specifically for Atom. RDF
Vocabularies, RSS extensions, DC, and PRISM already define
properties that are compatible with Atom Simple Extensions.

  + simple, useful mapping to RDF

  - can't represent language sensitive text. This decision was made
because very few RSS extensions contain language sensitive text,
(they tend to contain dates, numbers, tokens, URIs etc - when
language-sensitive text is required Structured Extensions should
be used). Also, the barrier for implementations such as custom
property tables, CRMs, and WebDAV implementations would be high. 

  - can't represent relative URI references, because they are defined
to be strings only, and generic implementations can't know what is
or isn't a URI reference.


Structured Extension constructs:


  + Can support (almost) arbitrary XML.

  - no pre-defined semantics.

  + no pre-defined semantics.

  - clumsy generic mapping to RDF (by preserving the XML blob), though
with extension specific knowledge a better mapping could be used.
  
  + Publishing servers can generically support them by preserving the
blob of XML.

  - Publishing clients can't easily generically support them, as the UI
to edit a chunk of arbitrary XML wouldn't be very user-friendly.
  
  - require at least a mandatory attribute or child in order to exist.


Namespaced attributes & atom:link children
--

  - Not part of the Atom model - only representable by the syntax.

  - Not really practical to support generically; require
"boil the ocean" adoption.
  
  - Really not something I'm keen on as evidenced by this biased
assessment...  Are they really allowed for things other than
future versions of Atom?

  + ...OK, they let you add annotations to elements in a way that
would be difficult to address without an RDF style graph-based
format.
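[Editorial aside: the "simple, useful mapping to RDF" claimed above for Simple Extension constructs can be sketched: each one becomes a (subject, predicate, literal) triple, with the predicate formed from the namespace URI plus the local name. The feed id and the Dublin Core extension are hypothetical illustrations.]

```python
feed_id = "tag:example.com,2000/feed1"
extension = {"ns": "http://purl.org/dc/elements/1.1/",
             "name": "creator", "text": "Henry Story"}

# Subject = feed/entry id, predicate = namespace URI + local name,
# object = the element's text content as a plain literal.
triple = (feed_id, extension["ns"] + extension["name"], extension["text"])
assert triple == ("tag:example.com,2000/feed1",
                  "http://purl.org/dc/elements/1.1/creator",
                  "Henry Story")
```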


Does that sound about right?

So, can we agree that relative URI references aren't allowed in Simple
Extension constructs and add a clarification? Otherwise their
implementation won't satisfy the rationale for their design.

If I'm wrong, and the rationale behind Simple Extensions isn't
important, then can someone explain why there are two classes of
extension?

[1] http://www.imc.org/atom-syntax/mail-archive/msg16598.html

-- 
Dave



[Fwd: I-D ACTION:draft-saintandre-atompub-notify-03.txt]

2005-08-09 Thread Peter Saint-Andre

FYI...

**

 Original Message 
Subject: I-D ACTION:draft-saintandre-atompub-notify-03.txt
Date: Tue, 09 Aug 2005 15:50:01 -0400
From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: i-d-announce@ietf.org

A New Internet-Draft is available from the on-line Internet-Drafts 
directories.



Title   : Transporting Atom Notifications over the
  Extensible Messaging and Presence Protocol (XMPP)
Author(s)   : P. Saint-Andre, et al.
Filename: draft-saintandre-atompub-notify-03.txt
Pages   : 14
Date: 2005-8-9

This memo describes a method for notifying interested parties about
   changes in syndicated information encapsulated in the Atom feed
   format, where such notifications are delivered via an extension to
   the Extensible Messaging and Presence Protocol (XMPP) for publish-
   subscribe functionality.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-saintandre-atompub-notify-03.txt

To remove yourself from the I-D Announcement list, send a message to
[EMAIL PROTECTED] with the word unsubscribe in the body of 
the message.

You can also visit https://www1.ietf.org/mailman/listinfo/I-D-announce
to change your subscription settings.


Internet-Drafts are also available by anonymous FTP. Login with the username
"anonymous" and a password of your e-mail address. After logging in,
type "cd internet-drafts" and then
"get draft-saintandre-atompub-notify-03.txt".

A list of Internet-Drafts directories can be found in
http://www.ietf.org/shadow.html
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt


Internet-Drafts can also be obtained by e-mail.

Send a message to:
[EMAIL PROTECTED]
In the body type:
"FILE /internet-drafts/draft-saintandre-atompub-notify-03.txt".

NOTE:   The mail server at ietf.org can return the document in
MIME-encoded form by using the "mpack" utility.  To use this
feature, insert the command "ENCODING mime" before the "FILE"
command.  To decode the response(s), you will need "munpack" or
a MIME-compliant mail reader.  Different MIME-compliant mail readers
exhibit different behavior, especially when dealing with
"multipart" MIME messages (i.e. documents which have been split
up into multiple messages), so check your local documentation on
how to manipulate these messages.


Below is the data which will enable a MIME compliant mail reader
implementation to automatically retrieve the ASCII version of the
Internet-Draft.

**

--
Peter Saint-Andre
Jabber Software Foundation
http://www.jabber.org/people/stpeter.shtml
___
I-D-Announce mailing list
I-D-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/i-d-announce





Re: Feed History -02

2005-08-09 Thread James M Snell


Henry Story wrote:



Now I am wondering if the http mechanism is perhaps all that is needed
for what I want with the unchanging archives. If it is then perhaps this
could be explained in the Feed History RFC. Or are there other  
reasons to

add and "expires" tag to the document itself?


On the application level, a feed or entry may expire or age independently 
of whatever caching mechanisms may be applied at the transport level.  
For example, imagine a source that publishes special offers in the form 
of Atom entries that expire at a given point in time.  Now suppose that 
those entries are being distributed via XMPP and HTTP.  It is helpful to 
have a transport independent expiration/max-age mechanism whose 
semantics operate on the application layer rather than the transport layer.


- James



Re: Feed History -02

2005-08-09 Thread Henry Story


To answer my own question

[[
Interesting... but why have a limit of one year? For archives, I  
would like a limit of

forever.
]]

 I found the following in the HTTP spec

[[
   To mark a response as "never expires," an origin server sends an
   Expires date approximately one year from the time the response is
   sent. HTTP/1.1 servers SHOULD NOT send Expires dates more than one
   year in the future.
]]

(though that still does not explain why.)

Now I am wondering if the http mechanism is perhaps all that is needed
for what I want with the unchanging archives. If it is then perhaps this
could be explained in the Feed History RFC. Or are there other  
reasons to

add and "expires" tag to the document itself?

Henry Story

On 9 Aug 2005, at 19:09, James M Snell wrote:




rules as atom:author elements.

Here it is: 



The expires and max-age elements look fine. I hesitate at bringing  
in a caching discussion.  I'm much more comfortable leaving the  
definition of caching rules to the protocol level (HTTP) rather  
than the format extension level.  Namely, I don't want to have to  
go into defining rules for how HTTP headers that affect caching  
interact with the expires and max-age elements... IMHO, there is  
simply no value in that.
The expires and max-age extension elements affect the feed / entry  
on the application level not the document level.  HTTP caching  
works on the document level.




Adding max-age also means defining IntegerConstruct and disallowing
white space around it. Formerly, it was OK as a text construct, but
the white space issues change that.




This is easy enough.


Also, we should decide whether cache information is part of the  
signature.

I can see arguments either way.




-1.  let's let caching be handled by the transport layer.





FYI: License and Comments I-D's Submitted

2005-08-09 Thread James M Snell


I've submitted the Comments thread and License extensions as Internet 
Drafts.  They should post in the coming few days.  In the meantime, the 
latest versions can be found at the locations below.


http://www.snellspace.com/public/draft-snell-atompub-feed-license-00.txt
http://www.snellspace.com/public/draft-snell-atompub-feed-thread-00.txt

As a reminder, the license extension provides a link relation for 
associating a copyright license with a feed or entry.  The most common 
use case will be to associate creative commons licenses with feeds.


- James



Re: Feed History -02

2005-08-09 Thread James M Snell


Walter Underwood wrote:


--On August 9, 2005 9:28:52 AM -0700 James M Snell <[EMAIL PROTECTED]> wrote:

 


I made some proposals for cache control info (expires and max-age).
That might work for this.

 


I missed these proposals.  I've been giving some thought to an expires and
max-age extension myself and was getting ready to write up a draft. Expires is a
simple date construct specifying the exact moment (inclusive) that the entry/feed expires.
Max-age is a non-negative integer specifying the number of milliseconds (inclusive) from the
moment specified by atom:updated when the entry/feed expires.  The two cannot appear
together within a single entry/feed and follow the same basic
   


rules as atom:author elements.

Here it is: 

 

The expires and max-age elements look fine. I hesitate at bringing in a 
caching discussion.  I'm much more comfortable leaving the definition of 
caching rules to the protocol level (HTTP) rather than the format 
extension level.  Namely, I don't want to have to go into defining rules 
for how HTTP headers that affect caching interact with the expires and 
max-age elements... IMHO, there is simply no value in that. 

The expires and max-age extension elements affect the feed / entry on 
the application level not the document level.  HTTP caching works on the 
document level.



Adding max-age also means defining IntegerConstruct and disallowing
white space around it. Formerly, it was OK as a text construct, but
the white space issues change that.

 


This is easy enough.


Also, we should decide whether cache information is part of the signature.
I can see arguments either way.

 


-1.  let's let caching be handled by the transport layer.

- James



Re: Feed History -02

2005-08-09 Thread Henry Story



On 9 Aug 2005, at 18:32, Walter Underwood wrote:
--On August 9, 2005 9:28:52 AM -0700 James M Snell  
<[EMAIL PROTECTED]> wrote:

I made some proposals for cache control info (expires and max-age).
That might work for this.


I missed these proposals.  I've been giving some thought to an
expires and max-age extension myself and was getting ready
to write up a draft. Expires is a simple date construct specifying
the exact moment (inclusive) that the entry/feed expires.  Max-age
is a non-negative integer specifying the number of milliseconds
(inclusive) from the moment specified by atom:updated when the
entry/feed expires.  The two cannot appear together within a
single entry/feed and follow the same basic



rules as atom:author elements.

Here it is: 


Interesting... but why have a limit of one year? For archives, I
would like a limit of forever.

But otherwise I suppose this would do. Instead of putting the
information in the link of the linking feed, you would put it in the
archive feed. Which sounds good. I suppose we end up with some
duplication of information here with the http headers again.


Adding max-age also means defining IntegerConstruct and disallowing
white space around it. Formerly, it was OK as a text construct, but
the white space issues change that.

Also, we should decide whether cache information is part of the  
signature.

I can see arguments either way.

wunder
--
Walter Underwood
Principal Architect, Verity





Re: Feed History -02

2005-08-09 Thread Walter Underwood

--On August 9, 2005 9:28:52 AM -0700 James M Snell <[EMAIL PROTECTED]> wrote:

>> I made some proposals for cache control info (expires and max-age).
>> That might work for this.
>>
> I missed these proposals.  I've been giving some thought to an expires
> and max-age extension myself and was getting ready to write up a draft.
> Expires is a simple date construct specifying the exact moment (inclusive)
> that the entry/feed expires.  Max-age is a non-negative integer specifying
> the number of milliseconds (inclusive) from the moment specified by
> atom:updated when the entry/feed expires.  The two cannot appear together
> within a single entry/feed and follow the same basic
rules as atom:author elements.

Here it is: 

Adding max-age also means defining IntegerConstruct and disallowing
white space around it. Formerly, it was OK as a text construct, but
the white space issues change that.
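What such a stricter Integer Construct rule might look like to a consumer can be sketched as follows; the exact grammar (digits only, no leading zeros) is an assumption, since the thread only notes that surrounding white space must be disallowed:

```python
import re

# Digits only, no surrounding white space -- the stricter rule an
# IntegerConstruct would need (exact grammar assumed, not specified).
_INTEGER = re.compile(r"\A(0|[1-9][0-9]*)\Z")

def parse_integer_construct(text):
    """Parse element content as an Integer Construct, rejecting any
    surrounding white space rather than silently stripping it."""
    if text is None or not _INTEGER.match(text):
        raise ValueError(f"invalid integer construct: {text!r}")
    return int(text)
```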

Also, we should decide whether cache information is part of the signature.
I can see arguments either way.

wunder
--
Walter Underwood
Principal Architect, Verity



Re: Feed History -02

2005-08-09 Thread James M Snell


Walter Underwood wrote:


--On August 9, 2005 1:07:29 PM +0200 Henry Story <[EMAIL PROTECTED]> wrote:
 


But I would really like some way to specify that the next feed document is an
archive (ie. won't change). This would make it easy for clients to know when
to stop following the links, ie, when they have caught up with the changes
since they last looked at the feed.
   



I made some proposals for cache control info (expires and max-age).
That might work for this.

 

I missed these proposals.  I've been giving some thought to an expires
and max-age extension myself and was getting ready to write up a
draft. Expires is a simple date construct specifying the exact moment
(inclusive) that the entry/feed expires.  Max-age is a non-negative
integer specifying the number of milliseconds (inclusive) from the moment
specified by atom:updated when the entry/feed expires.  The two cannot
appear together within a single entry/feed and follow the same basic
rules as atom:author elements.
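The "cannot appear together" rule could be enforced by a consumer along these lines; the element and namespace names are placeholders, as no draft of the extension existed yet:

```python
from xml.etree import ElementTree as ET

AGE_NS = "http://example.org/age-extension"   # hypothetical namespace

def check_expiry_exclusion(elem):
    """Reject a feed or entry carrying both expires and max-age,
    mirroring the mutual-exclusion rule proposed in this thread."""
    has_expires = elem.find(f"{{{AGE_NS}}}expires") is not None
    has_max_age = elem.find(f"{{{AGE_NS}}}max-age") is not None
    if has_expires and has_max_age:
        raise ValueError("expires and max-age cannot appear together")
```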


- James



Re: Feed History -02

2005-08-09 Thread Walter Underwood

--On August 9, 2005 1:07:29 PM +0200 Henry Story <[EMAIL PROTECTED]> wrote:
>
> But I would really like some way to specify that the next feed  document is an
> archive (ie. won't change). This would make it easy  for clients to know when
> to stop following the links, ie, when they have caught up with the changes
> since they last looked at the feed.

I made some proposals for cache control info (expires and max-age).
That might work for this.

wunder
--
Walter Underwood
Principal Architect, Verity



Re: Feed History -02

2005-08-09 Thread Henry Story



On 4 Aug 2005, at 06:27, Mark Nottingham wrote:
So, if I read you correctly, it sounds like you have a method  
whereby a 'top20' feed wouldn't need history:prev to give the kind  
of history that you're thinking of, right?


If that's the case, I'm tempted to just tweak the draft so that  
history:stateful is optional if history:prev is present. I was  
considering dropping stateful altogether, but I think something is  
necessary to explicitly say "don't try to keep a history of my  
feed." My latest use case for this is the RSS feed that Netflix  
provides to let you keep an eye on your queue (sort of like top20,  
but more specialised).


Sound good?


Sounds good to me.

But I would really like some way to specify that the next feed
document is an archive (ie. won't change). This would make it easy
for clients to know when to stop following the links, ie, when
they have caught up with the changes since they last looked at the feed.

Perhaps something like this:

http://liftoff.msfc.nasa.gov/2003/04/feed.rss
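The client behaviour being asked for might look like this; `fetch_feed`, the `prev` link, and the `archive` flag are all stand-ins for whatever names the Feed History draft would end up defining:

```python
def catch_up(feed_url, seen_archives, fetch_feed):
    """Walk prev links collecting entries, stopping at the first
    archive document we have already stored -- archives are declared
    immutable, so nothing beyond that point can have changed."""
    entries = []
    url = feed_url
    while url is not None:
        # fetch_feed -> {"entries": [...], "prev": url or None, "archive": bool}
        doc = fetch_feed(url)
        entries.extend(doc["entries"])
        if doc.get("archive"):
            seen_archives.add(url)       # remember immutable documents
        prev = doc.get("prev")
        if prev in seen_archives:        # already have it; it cannot change
            break
        url = prev
    return entries
```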


Henry Story



Re: nested feeds (was: Feed History -02)

2005-08-09 Thread Henry Story


Sorry for taking so long to reply. I have been off on a 700km cycle trip
http://blogs.sun.com/roller/page/bblfish/20050807

I don't really want to spend too much time on the top-X discussion, as
I am a lot more interested in the feed history itself, but here are some
thoughts anyway...


On 29 Jul 2005, at 17:01, Eric Scheid wrote:

On 29/7/05 11:39 PM, "Henry Story" <[EMAIL PROTECTED]> wrote:
Below I think I have worked out how one can in fact have a top20
feed, and I show how this can be combined without trouble with the
link...

On 29 Jul 2005, at 13:12, Eric Scheid wrote:

On 29/7/05 7:57 PM, "Henry Story" <[EMAIL PROTECTED]> wrote:

1- The top 20 list: here one wants to move to the previous top 20 list and
think of them as one thing. The link to the next feed is not meant to be
additive. Each feed is to be seen as a whole. I have a little trouble still
thinking of these as feeds, but ...


What happens if the publisher realises they have a typo and need to emit an
update to an entry? Would the set of 20 entries (with one entry updated) be
seen as a complete replacement set?

Well if it is a typo and this is considered to be an insignificant change
then they can change the typo in the feed document and not need to change
any updated time stamps.



Misspelling the name of the artist for the top 20 songs list is not
insignificant. Even worse fubars are possible too -- such as attributing the
wrong artist/author to the #1 song/book (and even worse: leaving off a
co-author).


Yes, I see this now. This is a problem for my suggestion. The
atom:updated field cannot be used to indicate the date at which an
entry has a certain position in a chart, for the reason you mention.
We could then no longer update that entry for spelling mistakes or
other more serious issues. One would have to add an "about" date or
something, and then things get a little more complicated than I care
to think about right now.

The way I see it, maybe a better way would be to have a sliding window feed
where each entry points to another Atom Feed Document with its own URI, and
it is that second Feed Document which contains the individual items (the top
20 list).

This is certainly closer to my intuitions too.  A top 20 something is *not* a
feed. Feed entries are not ordered, and are not meant to be thought of as a
closed collection. At least this is my initial intuition. BUT



Not all Atom Feed Documents are feeds, some are static collections of
entries. We keep tripping over this :-(

I can think of a solution like the following: Let us imagine a top 20 feed
where the resources being described by the entries are the positions in the
top list. So we have entries with ids such as

http://TopOfThePops.co.uk/top20/Number1
http://TopOfThePops.co.uk/top20/Number2
http://TopOfThePops.co.uk/top20/Number3 ...



will contain a new entry such as

  <entry>
    <title>Top of the pops entry number 1</title>
    <link href="http://TopOfThePops.co.uk/top20/Number1/"/>
    <id>http://TopOfThePops.co.uk/top20/Number1</id>
    <updated>2005-07-05T18:30:00Z</updated>
    <summary>Top of the pops winner for the week starting 5 July 2005</summary>
  </entry>


The problem here is that this doesn't describe the referent, it only
describes the reference. I want to see top 20 feeds where each entry links
to the referent in question. For example, the Amazon Top 10 Selling Books
feed would link to the book specific page at Amazon, not to some page saying
"the #3 selling book is at the other end of this link".


Oh, I don't really want to defend this position too much but there would
be a way around this criticism by simply having the link point to the album
like this:

  
  <entry>
    <title>Top of the pops entry number 1</title>
    <link href="http://www.amazon.fr/exec/obidos/ASIN/B4ULZV"/>
    <id>http://TopOfThePops.co.uk/top20/Number1</id>
    <updated>2005-07-05T18:30:00Z</updated>
    <summary>Top of the pops winner for the week starting 5 July 2005</summary>
  </entry>

So here the id would be the same for each position from week to week, but
the link it points to would change.

We would still need to solve the issue of the date at which it had that
position, though...

And so yes, a feed where the entry is a feed seems easier to work  
with in this case.

The feed would be something like this I suppose:


<feed>
   <title>top 20 French songs</title>
   ...

   <entry>
      <title>week of August 1 2005</title>
      <id>...?...</id>
      <updated>2005-08-01T18:30:00Z</updated>
      <link href="http://TopOfThePops.fr/top20/2005_08_01/"
            type="application/atom+xml"/>
   </entry>

   <entry>
      <title>week of August 1 2005</title>
      <id>...?...</id>
      <link href="http://TopOfThePops.fr/top20/2005_08_01/"
            type="application/atom+xml"/>
      <updated>2005-08-02T18:30:00Z</updated>
   </entry>

   <entry>
      <title>week of August 8 2005</title>
      <id>...?...</id>
      <link href="http://TopOfThePops.fr/top20/2005_08_08/"
            type="application/atom+xml"/>
      <updated>2005-08-08T18:30:00Z</updated>
   </entry>
</feed>


But while you are at it then why not just create a special top-X
ontology and express your information in RDF?


   top 20 French songs
   ...

   week of August 1 2005
   ...?...
   2005-08-

Project descriptions with Atom

2005-08-09 Thread Danny Ayers

DOAP is Description of a Project, done in RDF.  CodeZoo are using Atom
as a format to receive project updates, using DOAP RDF/XML as a
payload.

Details -

-- Forwarded message --
From: Edd Dumbill <[EMAIL PROTECTED]>
Date: Aug 3, 2005 10:46 PM
Subject: [doap-interest] CodeZoo launches DOAP support
To: [EMAIL PROTECTED]


http://usefulinc.com/doap/news/contents/2005/08-03-codezoo/read

   O'Reilly's [1]CodeZoo has just launched a major deployment of DOAP.

   CodeZoo is a software registry for code libraries, covering Java,
Ruby
   and Python. Each entry in the registry has a DOAP export available.

   Additionally, developers can provide CodeZoo with an Atom feed
   containing embedded DOAP. This means the repository can be kept up to
   date with releases.

   More information:
 * [2]DOAP in CodeZoo
 * [3]DOAP over Atom
 * [4]Keep CodeZoo up to date with DOAP

References

   1. http://www.codezoo.com/
   2. http://ruby.codezoo.com/about/doap.csp
   3. http://www.codezoo.com/about/doap_over_atom.csp
   4. http://www.codezoo.com/cs/user/create/doap_feedback


___
doap-interest mailing list
[EMAIL PROTECTED]
http://lists.gnomehack.com/mailman/listinfo/doap-interest


-- 

http://dannyayers.com