On Wed, Mar 29, 2006 at 09:33:47PM +1000, Steve Lindsay wrote:
> On 3/29/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >
> > I wrote an RSS to HTML translator because I couldn't see the value
> > in RSS (no doubt someone will explain it to me). Then I just click on
> > the links in the HTML and download it like any regular file
> > (oh wow, downloading files, I've only been doing that since I first
> > got hold of a modem so now I have yet another layer of indirection to
> > achieve exactly the same result).
> >
>
> The value is pretty straightforward: I don't have to know there's a
> new science show available. I've subscribed to the feed, so it
> magically appears on my computer when it's ready. No checking the web
> site, no link clicking, no effort at all really.

We already have tools to maintain a local cache of network-accessible data:
* USENET news propagation and caching (going back approx 25 years)
* ftp archive mirror maintenance tools (going back approx 15 years)
* HTML web spidering, wget, etc (maybe 10 years??)
* "Intelligent Agents" (latest craze 5 years ago, already fading)
I'm sure there are other examples that I can't think of right now.
Once more we are back to the beginning: RSS does nothing that wasn't
already being done by several existing methods, and in many cases it
offers less functionality than those systems. None of them solves the
basic problem, which is being able to selectively collect "useful"
information while filtering out junk. RSS doesn't solve that either.
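For what it's worth, the RSS-to-HTML translation mentioned at the top of
the thread really is a small job. A minimal sketch in Python (not the
original poster's code; it assumes a standard RSS 2.0 layout with
<item>, <title>, and <link> elements, and ignores namespaces and Atom):

```python
import xml.etree.ElementTree as ET

def rss_to_html(rss_text):
    """Turn an RSS 2.0 document into a plain HTML page of links.

    Each <item> becomes one list entry: the item's <title> wrapped
    in an <a> pointing at the item's <link>.
    """
    root = ET.fromstring(rss_text)
    parts = ["<html><body><ul>"]
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        parts.append('<li><a href="%s">%s</a></li>' % (link, title))
    parts.append("</ul></body></html>")
    return "\n".join(parts)
```

Point the output at a browser and you're back to clicking links and
downloading files by hand, which is exactly the "layer of indirection"
complaint above.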
- Tel
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html