* Bob Wyman <[EMAIL PROTECTED]> [2005-04-07 19:45]:
> I.e. if I didn't like something you published, I would simply
> publish something in my blog that had the same atom:id as
> something you had published. PubSub and other synthetic feed
> producers would then flush your post from the system and
> replace it with my post...
> 
> [...]
> 
> If, when reading a feed, I could be informed that this feed was
> a duplicate of another "preferred" feed, I could switch to the
> preferred feed and stop reading the duplicate. I could use this
> knowledge to map subscriptions to the duplicate feeds to their
> equivalents.

Together, to me, these suggest that atom:entry/atom:id should be
considered unique only with respect to the originating feed, or,
per the proposal, with respect to the originating feed and all of
the feeds it points to. Correct?

Assuming my understanding is correct, the proposal is
insufficient. It works for aggregating/republishing services that
consume feeds from their original producers, such as PubSub. But
it breaks down for aggregate feeds published by third parties.
Imagine someone using Bloglines who subscribes to PlanetFoo and
PlanetBar, both of which republish content from feed Baz. How
does Bloglines assure that the duplicate Baz entries it receives
from PlanetFoo and PlanetBar, neither of which is Baz's original
producer, but both of which (absent malicious intent) reproduce
content from the same source, do indeed originate from that
source?
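To make the failure mode concrete, here is a minimal sketch (all
names and URIs are illustrative, not from any real feed) of a
consumer that deduplicates entries from two third-party aggregate
feeds by atom:id alone, as the proposal would have it:

```python
def dedup_by_id(entries):
    """Naive dedup: treat entries sharing an atom:id as one,
    keeping whichever copy arrived first."""
    seen = {}
    for entry in entries:
        seen.setdefault(entry["id"], entry)
    return list(seen.values())

# The same atom:id seen via two planets; neither is Baz's
# original feed, so the consumer cannot verify provenance.
from_planetfoo = {"id": "tag:baz.example,2005:/post/1",
                  "via": "http://planetfoo.example/atom",
                  "title": "Original Baz post"}
from_planetbar = {"id": "tag:baz.example,2005:/post/1",
                  "via": "http://planetbar.example/atom",
                  "title": "Possibly spoofed copy"}

merged = dedup_by_id([from_planetfoo, from_planetbar])
# One entry survives, but nothing in the data tells the consumer
# whether both copies really came from the same source feed; a
# spoofed atom:id could just as easily displace the genuine entry.
assert len(merged) == 1
```

The point is that the id collapse happens before any question of
provenance can be asked, which is exactly the gap in the
third-party case.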

If we look at more convoluted examples, this quickly turns into
web-of-trust territory...

Regards,
-- 
Aristotle
