On 4/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> We already have tools to maintain a local cache of network-accessible data:
> * USENET news propagation and caching (going back approx 25 years)
> * ftp archive mirror maintenance tools (going back approx 15 years)
> * HTML web spidering, wget, etc. (maybe 10 years??)
> * "Intelligent Agents" (latest craze 5 years ago, already fading)
>
I understood your original post to be about the value of an RSS feed
vs. a page of links you click to download content manually. I wasn't
suggesting that RSS is better or worse than any other technique that
may already have been in place for publishing/syndication, but RSS
plus a decent aggregation program is better (for me) than a page of
links I need to track manually. I wouldn't call it mankind's most
amazing achievement, but it saves me a little time/effort :)
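To make it concrete, the aggregation side boils down to something
like this (a rough sketch in Python using the feedparser library; the
feed URL is made up, and the in-memory "seen" set stands in for
whatever state a real aggregator would persist between runs):

import feedparser

FEED_URL = "http://example.org/blog/index.rss"  # made-up feed URL

seen = set()  # a real aggregator would persist this between runs

def check_feed():
    # Fetch and parse the feed, then report entries we haven't seen.
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        key = entry.get("id") or entry.get("link")
        if key and key not in seen:
            seen.add(key)
            print(entry.get("title", "(untitled)"), "->",
                  entry.get("link"))

check_feed()  # run from cron, or loop with a polite delay

Point that at each feed you care about and the new items come to you,
instead of you revisiting a page of links.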
> Once more we are back to the beginning... RSS does nothing that wasn't
> already done (by several existing methods) and in many cases offers
> less functionality than existing systems. None of these systems solve
> the basic problem, which is being able to selectively collect "useful"
> information while filtering out junk. RSS doesn't solve that either.
>
It can help a little. There are zillions of blogs and podcasts out
there; I don't have to read all of them (mostly "junk") because I can
subscribe to just the ones I like ("useful").
Finding them in the first place is obviously the hard bit, but I
don't think that's the problem RSS is intended to solve.
Steve