> On Tue, 14 Jul 1998, Alastair Reid wrote:
> > and it's not clear that
> > fetching URLs outside the IO monad is sensible. Not at all clear.
S. Alexander Jacobson replied:
> I am thinking about using Haskell for XML scripting in which one imports
> XML DTD's into Haskell as a set of data statements and then reads XML
> datastructures off of web pages out on the net. This should be legal
> outside the IO monad as long as the page being accessed does not expire
> before the computation is complete. I think failure of retrieval is an
> issue of operational semantics rather than denotational semantics.
> I figure that the reliability of the network should be treated like
> the reliability of the underlying computer (which can run out of memory).
I'm not so worried about reliability (though that too is an issue)
as about the fact that web pages keep changing.
I'd normally expect the following equalities to hold, without knowing
anything about what f does:

  all (== f x) (replicate 1000 (f x))
=
  all (== f x) (map f (replicate 1000 x))
=
  True
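For any pure f this expectation is easy to check by running it; a minimal sketch (the helper allEqual and the example f are mine, not from the original post):

```haskell
-- Check that every element of a list is equal to its neighbour,
-- i.e. that repeated applications of a pure function agree.
allEqual :: Eq a => [a] -> Bool
allEqual xs = and (zipWith (==) xs (tail xs))

f :: Int -> Int
f x = x * x   -- any pure function will do here

main :: IO ()
main = do
  print (allEqual (replicate 1000 (f 7)))       -- True
  print (allEqual (map f (replicate 1000 7)))   -- True
```

Referential transparency is precisely the guarantee that this check can never fail, whatever pure f we pick.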
Now let's choose f = getURL and x = "http://haskell.org".
The above equality obviously doesn't hold, because haskell.org's
home page is such a hotbed of activity.
Even if we ignore people changing their home pages, we have page counters,
CGI scripts, etc., all producing different results each time you fetch them.
I am very dubious about making getURL pure.
Alastair