The links to 4.04 on the download page of the GHC web page are not
working; they point to research.microsoft.com, not to wherever the
dists are. Ok, well, I guessed the correct URL and got it, but the
links should be fixed!
Oops, I thought I'd fixed all the links. Sorry about that, it
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

{-# NOINLINE test #-}
test :: IORef [a]
test = unsafePerformIO $ newIORef []

main :: IO ()
main = do
  writeIORef test [42]     -- write the cell at type [Integer]
  bang <- readIORef test   -- read the very same cell at type [Char]
  print (bang :: [Char])   -- unsound: 42 is reinterpreted as a Char
This is a very well-known problem in the ML community.
In the original monadic I/O paper (POPL'93), Phil and I mentioned
I don't know any way to make unsafePerformIO type-safe without imposing
some drastic or complicated restriction. Something in the back of
my mind tells me that John Launchbury has another example of
type unsafety induced by unsafePerformIO but I can't remember what;
so I'm cc'ing him.
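(For completeness, a sketch of the usual discipline, my addition rather
than anything from Simon's message: give any unsafePerformIO-created
global a monomorphic type, so the writer and the reader of the cell
cannot be instantiated at different element types. Imports as in the
example above.)

{-# NOINLINE testMono #-}
testMono :: IORef [Int]   -- monomorphic: every use must agree on [Int]
testMono = unsafePerformIO $ newIORef []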
Sven Panne wrote:
[snip]
I guess that even the computer on *your* desktop would be fast
enough for the current parser by the time a completely tuned
rewrite of Happy was finished. Moore's Law comes to the
rescue here... :-)
Well, we're going in circles here. So far we've established
George Russell writes:
Parser combinators don't actually seem to analyse the grammar at
compile time at all, and instead just try all possibilities. This
looks like stone-age technology to me. The first version of MLj
was written with parser combinators. As a result the parsing
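To make the point concrete, here is a minimal sketch (mine, not MLj's
actual code) of the list-of-successes combinator style at issue: choice
is implemented by trying every alternative at run time, with no grammar
analysis up front.

newtype Parser a = Parser { runParser :: String -> [(a, String)] }

-- Consume a single character, if there is one.
item :: Parser Char
item = Parser $ \s -> case s of
  []     -> []
  (c:cs) -> [(c, cs)]

-- Choice concatenates the results of both alternatives: every
-- ambiguity is explored by backtracking at parse time, rather than
-- being resolved by a precomputed parse table.
orElse :: Parser a -> Parser a -> Parser a
orElse p q = Parser $ \s -> runParser p s ++ runParser q s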
George Russell wrote:
Disagree. I think it's nice and fast. I challenge you to write a faster
Haskell parser using a combinator library.
Parser combinators are fine if the grammar is very simple or you don't
care about CPU time. But using them in a serious compiler for Haskell
would be
Simon Marlow wrote:
Nonsense. I contend that you really don't want an error-correcting parser.
- parsing is quick
- error-correction is by definition unreliable
- error-correction is hard to implement well
I agree; I find that I often fix only the first error even
George Russell wrote:
[measurements deleted]
Measurements from the seventies (buried somewhere deep in my
"real world" folders :-) showed that lexing takes more than an
order of magnitude more time than parsing, and that both times are
negligible in an optimizing compiler. Consequently, blaming Happy is
Simon Marlow wrote:
Ok. It's going to be hard to get a fair comparison here, but I've just done
a rough measurement on GHC:
I obviously can't run MLj on Simon Marlow's computer, so I have rerun
both tests on this (obviously much slower) Sparc box, with GHC 4.04 and
SML/NJ 110.7, using the
Simon Marlow wrote:
[snip]
Eeek! I've just rewritten it! And I don't plan to do that again for a long
time :-)
It is really appalling that
(a) there is no error-correction.
Nonsense. I contend that you really don't want an error-correcting parser.
- parsing is quick
George Russell wrote:
I hope I will not tread on too many corns if I say that a complete
rewrite of GHC's parser (at least) is long overdue.
Well, it *has* already been rewritten not so long ago, and if Simon M
doesn't get into a masochistic mood, it won't happen again soon... :-}
It is
Please, do *not* put error correction into ghc! I think error correction
is one of the Classic Bad Ideas for a compiler. It's much better to
focus on providing understandable error messages: when the user knows
what the compiler thinks is wrong, it's usually not so hard to fix the
error.
GHCers,
We at GHC HQ are considering getting a bug tracking system of some
description. There are three free ones I know about:
- GNATS. Primarily email based, a bit awkward to use
- Debian bug tracker. Email based again. Lots of
projects are using it, therefore
Hi Fermin,
| Should redundant dependencies trigger an error or a warning? I'd
| say that if I'm writing some haskell code, I wouldn't mind if a
| redundancy is flagged as an error; most likely, it'd take a short
| time to fix. However, if someone is generating haskell automatically
| (maybe
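As a concrete illustration (a hypothetical example of mine, not
Fermin's code) of the kind of redundant dependency in question: here
the Eq a constraint is implied by Ord a, so a human would fix it in
seconds, while a code generator might emit such contexts wholesale.

-- Hypothetical: Eq a is redundant, since Ord a already has Eq a
-- as a superclass.
sortedPair :: (Eq a, Ord a) => a -> a -> (a, a)
sortedPair x y = if x <= y then (x, y) else (y, x)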
On 14-Sep-1999, Simon Peyton-Jones [EMAIL PROTECTED] wrote:
Suppose I want to read a file and write filtered contents back to it
(I don't mind making a backup). [...explains why hGetContents doesn't work...]
An entirely reasonable question. The semantics of lazy file
reading in the Haskell 98
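A minimal sketch of the pitfall and the usual workaround (my
illustration using only Prelude functions, not code from the thread):
reading is lazy, so the whole contents must be forced before the same
path is reopened for writing, or the write will truncate the file out
from under the reader.

filterFileInPlace :: FilePath -> (String -> String) -> IO ()
filterFileInPlace path f = do
  s <- readFile path          -- lazy: nothing is actually read yet
  length s `seq` return ()    -- force the whole file; the read handle
                              -- reaches EOF and is closed
  writeFile path (f s)        -- now safe to truncate and rewrite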
Hi everyone. I am a sometime O'Camler just learning Haskell. Type
classes are fun and I like the expressiveness you get without grafting a
whole "object system" onto your nice functional language. But sometimes
they baffle me, as in the following.
This function fails to typecheck:
--
On 14-Sep-1999, Simon Peyton-Jones [EMAIL PROTECTED] wrote:
the semantics of hGetContents was just as if
the entire contents of the file were read instantaneously
Tue, 14 Sep 1999 20:41:40 +1000, Fergus Henderson [EMAIL PROTECTED] writes:
Well, consider the case where the file
Mark P Jones wrote:
| | Neat. And it solves a problem I was kludging around with explicit,
| | existentially quantified dictionaries.
|
| Great! Can I look forward to hearing more about that some time?
OK, it's to do with arrows:
class Arrow a where
  arr   :: (b -> c) -> a b c
  (>>>) :: a b c -> a c d -> a b d
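(For illustration, my sketch rather than part of Mark's message: the
simplest instance of the class above is ordinary functions, with arr
the identity and (>>>) left-to-right composition.)

instance Arrow (->) where
  arr f   = f
  f >>> g = g . f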
On Tue, 14 Sep 1999, Mark P Jones wrote:
given further enhancements. So perhaps I should have said: "Some
folks out there want to write programs in a stable language.
For them, there's Haskell 98." For the rest, there are choices to be
made. One person may decide that programming in "ghc"
In a previous message, I wrote:
| Some folks out there want to use Haskell to write real programs. For
| them, there's Haskell 98.
To which Alex replied:
| To be clear, I am not an academic researcher. I develop real world
| web sites. I would really like to use Haskell for this process,
[Most common concepts and definitions of the functional
language Haskell]
The new official URL of the above supersedes the previous
unofficial, experimental pointer, which is no longer
useful. I think I have found some sort of stable working mode,
so now