> > I'd wonder if calling the latter anything with the word "Feed" in it
> > isn't more trouble than it's worth?
>
> I'm flexible on terminology, as long as it makes sense (Walter had some
> proposals).

Good.  I'm for the idea of reusing the code for handling entries, be they
individual, feeds or larger collections.  I just get nervous when
terminology for something as popular and misunderstood as "feeds" gets
mixed up.

> >> When you get a Feed Resource, it contains a [EMAIL PROTECTED]'this']/@href
> >> that points to the Atom Document Resource for this particular Feed
> >> Document;
> >
> > Being required or optional?
>
> Optional; the proposal makes this a SHOULD.

If the major players at the outset support the idea of publishing a URL in a
feed (in the RSS sense) that would directly retrieve that set of entries
then I'd certainly support the idea.  It would be a tremendous improvement
over the current situation in RSS.
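To make the proposal above concrete, here is a minimal sketch of a feed document carrying a link back to the resource that serves this exact set of entries, and a client extracting it. The element names, the `rel="this"` value, the namespace, and the URLs are all illustrative, not taken from any finalized Atom spec:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed fragment: a <link rel="this"> pointing back to the
# resource for this particular set of entries (names are illustrative).
feed_xml = """
<feed xmlns="http://example.org/hypothetical-atom">
  <link rel="this" href="http://example.org/feed/2003-08" />
  <entry><title>First</title></entry>
</feed>
"""

ns = {"a": "http://example.org/hypothetical-atom"}
root = ET.fromstring(feed_xml)
link = root.find("a:link[@rel='this']", ns)
print(link.get("href"))  # the URL a client could dereference directly
```

A static-file publisher would have to know, at render time, the URL its output will be served from in order to embed such a link, which is part of the concern about tools like MT raised below.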

I'm not sure how many tools that currently produce feeds as static documents
will be able to do this (let alone desire doing so).  Some tools, like MT,
are database-driven but depend on rendering to static files for delivery.
This is a good thing for saving server CPU cycles upon consumption but less
than ideal for finer-grained activities.

> GETting multiple resources' representations didn't stop the IMG tag
> from taking off in HTML,

Ummm, that had much more to do with HTML view controls rather than anything
else.  The examples are not the same and you doubtless know it.  The use of
an HTML control fobs off responsibility from the application.  The idea of
multiple calls for entry data is quite different and requires a much more
active role on the part of the application itself.

> I don't dispute that this calls for some (manageable) work on the
> client side. I think that is vastly preferable to requiring support for
> query on the server side; there will be many more feeds than
> aggregators, if Atom is successful.

So which is it?  Client or server side work?  It seems to flip back and
forth here.  Both sides will (and should) require more effort than they're
putting in now.

> BTW, are you proposing that every request to the server for the feed's
> state is a query? Otherwise, I don't see how you can avoid multiple
> requests when reconstructing feed state (one to get the latest entries,
> at least one to get the entries you missed).

Of course not every request.  Although a GREAT many feeds are dynamically
driven, so it really wouldn't be much different for them.  As for multiple
requests, please, what sense of false economy are you talking about here?
ALL of this is going to require multiple requests, and getting that behavior
integrated into aggregator programs will be a GOOD THING.  How many
requests, and what is retrieved, is a matter for discussion.
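The multiple-request reconstruction being debated can be sketched as a client walking "previous" links until it reaches entries it has already seen. Everything here is illustrative: the page URLs, the `prev` link name, and the in-memory dict standing in for HTTP GETs are assumptions, not part of any proposal:

```python
# In-memory stand-in for a server; in practice each key would be a URL
# fetched with a separate HTTP GET.
pages = {
    "/feed": {"entries": ["e5", "e4"], "prev": "/feed?before=e4"},
    "/feed?before=e4": {"entries": ["e3", "e2"], "prev": "/feed?before=e2"},
    "/feed?before=e2": {"entries": ["e1"], "prev": None},
}

def reconstruct(start, seen):
    """Follow prev links, collecting entries, until hitting a known one."""
    collected, url = [], start
    while url is not None:
        page = pages[url]  # one request per page of entries
        for entry in page["entries"]:
            if entry in seen:
                return collected
            collected.append(entry)
        url = page["prev"]
    return collected

print(reconstruct("/feed", seen={"e2", "e1"}))  # → ['e5', 'e4', 'e3']
```

Note the cost: a client that missed several batches pays one round trip per batch, which is the "multiple requests" trade-off under discussion.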

> If I publish a feed, I want my consumers to see the entire information
> channel, not portions of it. That's a huge benefit.

I'd suggest "we" represent more of the edge-case group than most.

> > Hmmm, I start getting worried when I see parameters in URLs and how
> > their semantics might be 'assumed' by folks.  Just as dates or integers
> > in namespace URI aren't versions.  Not to mention i18n issues of calling
> > them 'entries' in a URL.  If you want to publish metadata then do so
> > without requiring special knowledge in a URL.
>
> Perhaps you've misunderstood the proposal. It does not break URI
> opacity; the client is not required or encouraged to interpret the
> semantics of the URI; it's only interpreted by the server (the same
> party that issues it). This has the benefit that from the server side,
> it can be used as a query function if necessary, but the client doesn't
> need to understand the query's semantics, and, more importantly, no
> standardisation of query semantics is necessary.

Wording to this effect would be a crucial part of any spec that emerged.
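The opacity point above can be illustrated with a small sketch: the client stores and dereferences the URI verbatim, while only the minting server assigns it any query meaning. The `before` parameter and URLs are invented for illustration:

```python
from urllib.parse import parse_qs, urlsplit

def client_next_fetch(link_from_feed):
    # Client side: the URI is an opaque token; store it and dereference
    # it as-is, with no parsing of its internal structure.
    return link_from_feed

def server_dispatch(uri):
    # Server side: free to encode a query in the URIs it mints, since it
    # is the only party that ever interprets them.
    params = parse_qs(urlsplit(uri).query)
    return params.get("before", [None])[0]

uri = client_next_fetch("http://example.org/feed?before=e4")
print(server_dispatch(uri))  # → e4
```

This is why no standardisation of query semantics is needed: two servers could mint entirely different URI shapes and clients would behave identically.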

> > I'd wonder how the
> > requesting client would know how many it includes?
>
> Why does it need to?

Oh, how about cell phones?  Or other resource-deprived situations?

> > Imagine, rather than just http for prev how about torrent or ed2k
> > requests?
> > Or redirection to an archiving service or 'nearer' data store?
>
> You clearly would like to do many ambitious things with Atom. That's
> great, but we need to be realistic about what we can do in this effort
> while still differentiating Atom from RSS.

Your idea of realistic obviously differs.  I'm not suggesting anything
outrageously complicated.  I'm simply suggesting that if we're going to get
into the idea of retrieving back content for a feed-like instance then it
might benefit from more extensibility.

> Both of these approaches can satisfy the requirement to reconstruct a
> feed's state.

Again, what you're talking about is not a feed in the current sense.  It's
perhaps better described as "a site's range of entries previously published
in what may have been multiple feed-like instances".  I stress this to make
the point of avoiding things like RSS for newsfeeds versus RSS for site
indexes.  The former is what most folks consider a "feed"; the latter is
something like what you're suggesting, but in fragmented sections.

> The design I've proposed is simple, robust, leverages the
> Web and delivers needed functionality quickly; looking at similar
> efforts in the past (SQL, XML Query, RDF Query, conneg, etc.), we can
> reasonably expect that getting query right is going to take a long time
> and a fair amount of iteration, and it's going to be much more complex.

Gaining speed now at the cost of extensibility is what troubles me.

-Bill Kearney
