Is there any chance that someone could send to me in Germany the head of the
canting Puritan who outlawed punning in Haskell 98? I'm trying to migrate
some code which makes heavy use of punning, and I'm about to yet again
(for literally the thirtieth time) fix yet another (subtly different)
If all you want to do is learn to program Haskell, then I think the existing
explanations
of monads suffer too much from history. A few years ago we had (>>) and (>>=), but no
"do"
notation. Afterwards "do" was invented, making everyone's life easier. I suggest that
introductions to Haskell stop
Antti-Juhani Kaijanaho wrote:
[snip]
I had many problems writing programs in the do notation until I understood
the underlying (>>=). Why? For example, in imperative languages I
can rewrite
a <- b
c <- f a
as
c <- f b
In Haskell I can't. Why? b is of type
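A sketch of why the substitution fails, using the Maybe monad to keep things pure (the names and values are illustrative):

```haskell
-- do { a <- b; c <- f a; ... } desugars to b >>= \a -> f a >>= \c -> ...
-- The action b has type m t, but f wants a plain t, so "f b" is
-- ill-typed; only the bound result a may be passed on.
example :: Maybe Int
example = do
  a <- Just 5        -- a :: Int, while Just 5 :: Maybe Int
  c <- Just (a + 1)  -- f a, where f x = Just (x + 1)
  return c           -- Just 6

-- The "substituted" version does not type check:
-- bad = do { c <- Just (Just 5 + 1); return c }
```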
Don't add more functions like concatSep to the standard library or prelude. Instead
document
what is there better. I found it far easier to find functions in the Standard ML Basis
library than in the Haskell standard. Here are some suggestions for what could be
done:
(1) document the IO
Andy Gill wrote:
[snip]
I've been playing with possible formats of such documentation.
Have a look at http://www.cse.ogi.edu/~andy/gooddoc.htm
for what I'm currently thinking of.
This is very much better than what we have already, but I'll make
the following quibbles anyway:
(1) it should be
For comparison, see
http://www.cs.bell-labs.com/~jhr/sml/basis/pages/list-pair.html
I think this style of documentation is fairly useful, and it doesn't take long to
see if the function you want is there. My only quibble with this format is that it
separates the type of a function from its
I suppose for consistency you need to allow any number of holes in a tuple, so
(,True,) is an expression of type a -> b -> (a,Bool,b).
One possible problem with this is that it means you can no longer think on a syntactic
level of tuples with more than two elements being equivalent to the obvious
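For what it's worth, GHC later adopted exactly this idea as the TupleSections extension (not part of Haskell 98); a sketch:

```haskell
{-# LANGUAGE TupleSections #-}

-- (,True,) abbreviates \a b -> (a, True, b): one argument per hole.
pairAround :: a -> b -> (a, Bool, b)
pairAround = (,True,)

demo :: (Char, Bool, Int)
demo = pairAround 'x' 3   -- ('x', True, 3)
```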
Marcin 'Qrczak' Kowalczyk wrote:
[snip]
Maybe it's simply not possible to compile Haskell more efficiently?
Maybe not yet. But someday we'll have enough processing power to do
really sophisticated whole-program analysis, including decent region
analysis. A lot of the techniques have already
Kevin Atkinson wrote:
[snip]
1) Support for true ad-hoc overloading.
[snip]
2) Support for TRUE OO style programming.
[snip]
4) Being able to write
do a <- getLine
b <- getLine
proc a b
as
proc getLine getLine
[snip]
AAARRRGGH no. I don't like overloading. For one thing it
Jerome K. Jerome wrote
George said:
"You know we are on a wrong track altogether. We must not think of the
things we could do with, but only of the things that we can't do
without."
("Three Men in a Boat", chapter III)
By all means write new libraries. But this doesn't have to have
"S.D.Mechveliani" wrote:
[snip]
And why is the dense matrix representation better for Haskell?
Rather, I would expect it is the sparse one.
I really don't think this kind of comparison is going to be very meaningful.
I've written some sparse matrix code in C myself. Since memory is often
as
Fergus Henderson wrote:
[snip]
So why limit expressiveness by providing only the former?
Why indeed? You are right. I hadn't realised that
a -> IO (Maybe a)
would still suffer from non-determinism.
(Because if you have
x = error "foo" + _|_
it may cause a return of Nothing or else no
Simon Peyton-Jones wrote:
One solution is to add
macros (presumably in a more hygienic form than cpp), but I'm reluctant
to advocate that, because macros undoubtedly do overlap with functions.
You don't need macros. (For speed purposes inline functions are obviously
better.) All you need are
"Manuel M. T. Chakravarty" wrote:
George Russell [EMAIL PROTECTED] wrote,
This sounds interesting. So you want the branch that is not
taken to be syntactically correct, but it need not type
check. How about other semantic constraints (visibility of
names etc)? If you w
Matthias Kilian wrote:
It doesn't. There's no special treatment of constant conditionals, except
that clever (or rather a not totally braindamaged) compiler may be
expected to optimize the unreachable branch away. At least this is what
several Java books say.
Yes, you seem to be right.
Simon PJ is too valuable to lose. I
(a) second the creation of comp.lang.haskell;
(b) suggest that [EMAIL PROTECTED] should have a policy
(enforced mechanically if necessary) of 1 contribution of length
at most 5 lines (or 350 characters) per user per thread.
Keith Wansbrough wrote:
[snip]
I disagree. One major reason is the spam problem: a post to a
newsgroup essentially guarantees putting your name on a spam mailing
list, and receiving large quantities of Make Money Fast postings.
I normally spam-proof my e-mail address on newsgroups for this
It would be nice to be able to say
module Shape(
Shape,
Square :: Int -> Shape,
RotateDegrees :: Int -> Shape -> Shape,
...
) where . . .
Ideally one would want to be able to have instance declarations as well.
This would mean that someone using the Shape module
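Haskell 98 has no such syntax, but the usual approximation is to export the type abstractly together with "smart constructors" whose signatures carry the same information (the module body below is an illustrative sketch):

```haskell
module Shape
  ( Shape          -- exported abstractly: constructors stay hidden
  , square         -- :: Int -> Shape
  , rotateDegrees  -- :: Int -> Shape -> Shape
  ) where

data Shape = Square Int | Rotated Int Shape

square :: Int -> Shape
square = Square

rotateDegrees :: Int -> Shape -> Shape
rotateDegrees = Rotated
```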
I think the whole idea of making strings lists of characters is barmy, and
one of the few things which are a big disadvantage of Haskell over ML.
The consequence is that one must take a huge performance hit on any portable
code that deals with text a lot (as most code does), for the very dubious
Andreas Marth wrote:
I just tried the ghc-4.045 for Win32-systems and got the error "Variable not in
scope: 'fromInt' ".
Someone correct me if I'm wrong, but I think the all-purpose function
you are supposed to use to turn an integer into some other sort of
number is called "fromIntegral".
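A minimal illustration of the portable conversion:

```haskell
-- fromIntegral :: (Integral a, Num b) => a -> b is Haskell 98;
-- fromInt was a non-standard extension.
n :: Int
n = 42

asDouble :: Double
asDouble = fromIntegral n   -- 42.0

asInteger :: Integer
asInteger = fromIntegral n  -- 42
```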
I think the GHC developers have got their priorities about right. Yes, GHC is
slow, hard to build, and big. That's because it's a research project.
It's more important now to concentrate on demonstrating that Haskell is a good
language for all sorts of real programming problems. It won't be
Daan Leijen wrote:
[snip]
Representing the 'args' as a value in haskell is probably not
the right thing to do. For one thing, it is not possible to substitute the
value 'args' with its definition, since it is unknown until load time.
I think that the 'dynamic environment' info is much
Personally I have only one gripe with the Random class in Standard Haskell;
this is that it provides too much functionality.
In general I think you can only have two of the following 3:
(1) good random numbers
(2) speed
(3) small state
For example the Mersenne Twister is very very good and
I might be slightly more inclined to look at Clean if it was free, not just
to people in educational environments.
Matt Harden wrote:
I don't think that's really true. If I understand it correctly, the
state can be any type; it doesn't have to fit in, say, an Int or other
small type. I think the Mersenne Twister could be implemented as an
instance of Random.RandomGen. The only thing is I don't really
Sven Panne wrote:
Just a thought: Some compilers for other languages (e.g. Java and
Eiffel) warn the user about deprecated features, but continue
compilation. Given the current state of flux in the Haskell libraries,
this would be a nice thing in Haskell, too. So my suggestion is a new
Chris Angus wrote:
Put simply
What do people think about being able to access functions from other modules
without
importing the module.
i.e. Rather than
--- Start ---
import Foo
-- call f
callFoof x = f x
--- End ---
We can do
(sorry, can't remember the original author)
| The correct definitions would be:
|
| take -2 -- drops the last 2 elements from the list
| (takes everything except the last 2 elements)
| drop -2 -- grabs the last 2 elements from the list
| (drops everything except
Simon Peyton-Jones wrote:
b) genRange (snd (next g)) = genRange g
genRange (fst (split g)) = genRange g
genRange (snd (split g)) = genRange g
I can't see any reason why genRange shouldn't just depend on the type. So why
not just specify that
genRange _|_
is defined?
I think you
Jerzy Karczmarczuk wrote:
class RandomGen g where
next :: g -> (Int, g)
split :: g -> (g, g)
genRange :: g -> (Int, Int)
genRange _ = (minBound, maxBound)
Do you always use integer random numbers?
No. But this is the primitive class we're discussing here. The library
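To illustrate the point that the state can be any type, here is the quoted class with a deliberately crude instance (the LCG constants are the classic C-library ones; this is a sketch, not a good generator):

```haskell
-- The class as quoted above, including the proposed genRange default:
class RandomGen g where
  next     :: g -> (Int, g)
  split    :: g -> (g, g)
  genRange :: g -> (Int, Int)
  genRange _ = (minBound, maxBound)

-- The state is just a value; here a single Int stepped by a linear
-- congruential formula.  A Mersenne Twister instance would carry its
-- 624-word state in this position instead.
newtype LCG = LCG Int

instance RandomGen LCG where
  next (LCG s) = (s', LCG s')
    where s' = 1103515245 * s + 12345
  split (LCG s) = (LCG (s + 1), LCG (s * 2 + 1))  -- crude split; sketch only
```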
(LONG and about floating point, so I suspect many Haskellers are not
going to be interested in this message . . .)
Julian Assange wrote:
The precision and/or rounding used by hugs/ghc seems strange, to wit:
Prelude> sin(pi)
-8.74228e-08
Prelude> pi
3.14159
sin(3.14159265358979323846)
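The value printed is the single-precision face of an unavoidable effect; at Double precision the same thing happens on a smaller scale:

```haskell
-- The constant pi is only the nearest representable value to the real
-- number, so sin pi evaluates sin of that approximation: tiny, not 0.
-- The -8.74228e-08 above is the Float analogue of the same error.
err :: Double
err = sin pi   -- about 1.2e-16

closeButNotZero :: Bool
closeButNotZero = err /= 0 && abs err < 1e-15
```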
Jerzy Karczmarczuk wrote:
but one should always try to perform all
the needed reductions before the floating precision gets out of
control. There are nice recurrences which simplify the computations
with trigonometric functions. But between 0 and Pi/4 it should be
well done.
No one should
George Russell wrote:
Julian Assange wrote:
Once you are within a few ULP, the underlying graininess of the
representation is going to get you, so that smooth, monotonic line
segment you have below will look like an appalling zigzag at
best. This is my point. Near the limits
"Manuel M. T. Chakravarty" wrote:
IMHO it would be much more important to think about a
mechanism for automatically extracting all the interface
information (including the interface comments) from a
Haskell module. Something like an automatically generated
Modula-2 definition module that
I have no problem with software having an explicit license, I just don't see
that it normally needs to be quoted at the top of EVERY module. (There
are probably exceptional jurisdictions where it does, but not many.)
The GHC method, where the license file is in the distribution and easy
to find
Frank Atanassow wrote:
What do you all think?
Well I suppose that includes me, but I'm a bit confused. I've looked at some of
the .lhs files containing the source of GHC, but the so-called literate nature
of the code doesn't seem to me to make it any better. Specifically,
it doesn't do
"D. Tweed" wrote:
Documentation is a vague term: certainly it'd be undesirable for a
specification of the libraries to be just a literate copy of the code
itself. But if you're thinking in terms of an open source project where
people fix bugs in the libraries then libraries that contain some
"D. Tweed" wrote:
* Comments that actually contain meta-program information, eg pragmas
The Haskell standard system for putting information for the compiler in
things which look like comments is obnoxious, but fortunately not _very_
common. I don't think it justifies adding yet another comment
Karlsson Kent - keka wrote:
Well, that doesn't even look much like XML: it's not well-formed XML.
Personally I'd rather people spent time trying to make their comments clear,
rather than worrying about correctness of XML tags . . .
In any case, in the original example
haskell:module
Volker Wysk wrote:
Hello.
The mentioned requirements point to using SGML for literate programming.
This would lead to a systematic approach.
Can you summarise please the main ways in which (as an example) GHC development
would be helped if SGML was used?
Fergus Henderson wrote:
[snip]
For FLOOR_{F-I}(NaN), the result is defined as a NaN:
[snip]
But in both cases this doesn't really make much sense to me,
since here the result type of the operation is an integer rather
than floating point type. I guess the earlier part of 6.1 does
shed a
One further point I want to make. It should not be the purpose of the
Glasgow Haskell Implementors to solve all the world's programming problems;
they should focus on providing a good set of Haskell tools. As I think this
discussion has illustrated, there are a number of high-tech experimental
"Ch. A. Herrmann" wrote:
I believe that if as much research were spent on Haskell compilation as
on C compilation, Haskell would outperform C.
I wish I could say I believed that. The big thing you aren't going to be
able to optimise away is laziness, which means you are going to have
We've had numerous suggestions to add things to Haskell. However in my opinion
many more computer languages (and programs) are ruined by too many features, than
by too few. So here is my own list of things to remove. I realise there is
no chance whatever of it making it into the Haskell
"D. Tweed" wrote:
The disadvantage of this kind of scheme for haskell
is that there's no way to get a user setable global variable without
everything going monadic (or you use an unsafe operation) so it'd have to
be passed as an explicit argument to every function needing it.
But I bet 99%
Malcolm Wallace wrote:
class ShowType a where
  showType :: a -> String
Or you can do what hbc has done for donkey's years and
include 'showType' in Show.
Incidentally, nhc98 also has had 'showType' in class Show since the
year dot - only Hugs and ghc lack it. (Does
Ralf Muschall wrote:
Where does the habit to use "flip (.)" in many FP people come
from?
I think it may come partly from category
theorists
Jon Fairbairn wrote:
Then, the question is why we write
result = function operand1 operand2
instead of
operand1 operand2 function = result
I actually think the latter is cooler. :)
I think there may be cultural influences about word order and/
or writing direction creeping
Why are these illegal? I appreciate that they can't give useful information
to the compiler, which knows the type already from the class, but in my
opinion they are still useful to the maintainer, because they serve as
a reminder of the type.
Mark P Jones wrote:
I guess that H/Direct would be the best way to take advantage of these
right now.
I agree actually. Integer only needs to be an implementation of
multiprecision arithmetic; we shouldn't tie it to GMP. There are
other multiprecision arithmetic packages out there, for
Marc van Dongen wrote:
Do you have any data about comparisons with this or
other packages?
I've just looked around Dave Rusin's page:
http://www.math.niu.edu/~rusin/known-math/index/11YXX.html
but it doesn't seem to contain any up-to-date comparisons; in
particular not of GMP 3. There are
George Russell wrote:
(GMP is faster if
you use the mpn_ functions, but then you have to do all your own
allocation and only get non-negative integers.)
Sorry, I meant GMP is faster if you use mpn_ than if you use the other
GMP functions, not that the mpn_ functions are faster than LIP.
I know it used to be; see
http://www.haskell.org/mailinglist.html
for example, but the archive I get from that page seems not
to have been updated in two months.
Marcin 'Qrczak' Kowalczyk wrote:
Just a small generic comment:
IMVHO we should concentrate on making the thing useful for programmers.
Not on exact modelling of mathematical concepts.
I agree completely. There are two problems with freezing large modules into
languages:
(1) they make life
I think the way to proceed with basAlgPropos is to implement it
outside the language as a library. (Since it redefines
the basic arithmetic symbols and so on it will be necessary to
tell the user to import Prelude() or qualified and perhaps provide
an alternative version of the Prelude.) The
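The mechanics are ordinary Haskell 98: suppress or qualify the implicit Prelude import, then rebind the symbols. A sketch with an illustrative Additive class standing in for the proposal's classes:

```haskell
-- Redefining (+) in a library: hide the Prelude's version and keep a
-- qualified import for when the original is still wanted.
import Prelude hiding ((+))
import qualified Prelude

infixl 6 +

class Additive a where
  (+) :: a -> a -> a

instance Additive Int where
  x + y = x Prelude.+ y
```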
Marcin 'Qrczak' Kowalczyk wrote:
Show instance for functions should not be needed. It is only for lazy
programmers who want to make a quick dirty instance, for debugging
perhaps.
And why not? There is no problem with Showing functions with finite domains.
For example, try:
module ShowFun
For MLj the answers so far as I remember are:
[EMAIL PROTECTED] wrote:
To those of you who are working on implementations:
How do you implement
1) tail recursion
You can only do so much. The Java VM has a goto instruction but you
can't jump from one virtual method to another. Things
Is there a standard way of splitting a file name up,
EG (on Unix) "a/b/c" goes to ["a","b","c"]? Knowing the local
file separator would do. I apologise if this is such a standard
function that everyone else but me on the list knows the answer . . .
Peter Hancock wrote:
"George" == George Russell [EMAIL PROTECTED] writes:
Is there a standard way of splitting a file name up,
EG (on Unix) "a/b/c" goes to ["a","b","c"]?
IIRC, filename syntax on vms is
logicalna
"Ronald J. Legere" wrote:
SUMMARY: How about a supplement to the standard that contains the
'standard' extensions that everyone uses.
One problem I have with this is that "unsafe" operations, being unsafe,
are difficult to fit in with the rest of the language. For example
a common use of
Fergus Henderson wrote:
(If nothing at all can be guaranteed, then no-one should be using those
features, and they should be removed from the Hugs/ghc extension libraries.
But it should be possible to make some guarantees.)
What on earth is a guarantee? GHC is a research project. I don't
Jon Fairbairn wrote:
Am I alone in thinking that the prelude is desperately in
need of restructuring?
No. Personally I think it should be got rid of entirely, or rather
trimmed down to the absolute bare minimum required for the syntax.
By the way I think Sven's proposals are thoroughly
Lennart Augustsson wrote:
By definition, if you follow the standard you can't be wrong. :)
But the standard can be wrong. Perhaps this is a typo in the report?
I think I looked at this a while back. The standard is kaput. It gets even
worse if you try to make sense of the definitions of succ
Jerzy Karczmarczuk wrote:
[snip]
I remind you that there is still an uncorrected bug in the domain of
rationals (at least in Hugs, but I believe that also elsewhere, since
this is a plain Haskell bug in the Prelude).
succ (3%2)
gives 2%1.
Yes, this is also loony. succ should either give
Keith Wansbrough wrote:
GHC is no different from any other compiler for any other language in
this respect. Floating-point values are *not* the mathematical `real
numbers', and should not be treated as such. This is second-year CS
course material.
No, but they ARE, assuming IEEE arithmetic,
"Wilhelm B. Kloke" wrote:
IMHO you are not right in this. See W. Kahan on the ridiculous Java
restriction in FP reproducibility.
We are getting into deep waters here but I think I _am_ right in this case.
Kahan's point (I presume you are referring to his excellent (online)
paper "How JAVA's
Sven Panne wrote:
(As an aside: I *hate* standards which are not freely
available, I've never seen a real IEEE-754 document. A $4000 annual
subscription price for a single person is ridiculous, and I would probably
have slight problems persuading my company to buy the $40.000 enterprise
Axel Simon wrote:
One for industrial-strength and
complete libraries that will remain stable as long as Haskell lives and
one for the rest.
What you need for that is SUPPORT, for example, to ensure that things
still work when Haskell changes. This is difficult to guarantee in
an academic
Julian Assange wrote:
Microsoft VCC once (still?) suffers from this problem. Whether
it is because it accesses random, unassigned memory locations
or because the optimiser has time thresholds, is unknown.
Optimisers for Intel can produce different results on floating point
because floating
Sengan wrote:
I don't buy this: for a long time the embedded hard realtime people
refused to use CPUs with cache because they would be
"non-deterministic".
They finally gave up, realizing that CPU's with caches are much faster.
If garbage collection is relatively cheap and makes it 10x
George Russell wrote:
(Is there anything better than Baker's train algorithm?)
Sorry, I meant "treadmill" not "train". The train algorithm is an almost-bounded
garbage collection algorithm. (However it fails to be
properly bounded if you have large numbers of in-pointers to a node.)
The attached file is accepted by GHC 4.08 and Hugs 98. However if you remove the
declaration of "foo" (and for Hugs, the now unnecessary "where" in the declaration of
class A), both compilers complain. It appears that in the absence of any information,
GHC and Hugs assume that the subject of a
Zhanyong Wan wrote:
I guess the rationale behind the current design is that everything by
default should be private. However, I doubt whether it is valid: In
Haskell the let/where clause allows us to keep auxiliary functions from
polluting the top-level name space. As a result, I seldom
Why does the Haskell language not allow "type" declarations to appear in
the declaration parts of where and let clauses? I've just been writing a huge
function which requires lots and lots of repetitive internal type annotations
(to disambiguate some complicated overloading) but I can't
There are numerous ways of optimising sieving for primes, none of which have much
to do with this list. For example, two suggestions:
(1) for each k modulo 2*3*5*7, if k is divisible by 2/3/5 or 7, ignore, otherwise
sieve separately for this k on higher primes. (Or you might use products of
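A sketch of the wheel part of suggestion (1), assuming the intent is the usual mod-210 wheel:

```haskell
-- Residues mod 210 = 2*3*5*7 that are coprime to 210; only these can
-- contain primes greater than 7, so a sieve skips the other 162 of
-- every 210 numbers.
wheel :: [Int]
wheel = [k | k <- [1 .. 210], all (\p -> k `mod` p /= 0) [2, 3, 5, 7]]

-- Candidates to sieve: the small primes, then every wheel position.
candidates :: [Int]
candidates = [2, 3, 5, 7]
          ++ [m | n <- [0 ..], k <- wheel, let m = n * 210 + k, m > 7]
```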
I think the following program
import List
main = putStr . show . fst . (partition id) . cycle $ [True,False]
should display [True,True,True,...]. But instead, for both GHC and Hugs,
you get a stack overflow. Is this a bug, or could someone explain it to me?
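The report's definition of partition forces the accumulated pair before producing any output; making the pattern irrefutable (the repair later adopted) restores laziness. A sketch of both:

```haskell
-- As in the uncorrected Haskell 98 report: select pattern-matches
-- strictly on the pair, so foldr never returns on an infinite list.
partitionStrict :: (a -> Bool) -> [a] -> ([a], [a])
partitionStrict p = foldr select ([], [])
  where select x (ts, fs) | p x       = (x:ts, fs)
                          | otherwise = (ts, x:fs)

-- With an irrefutable pattern the pair is built lazily, and the
-- expected [True,True,True,...] appears.
partitionLazy :: (a -> Bool) -> [a] -> ([a], [a])
partitionLazy p = foldr select ([], [])
  where select x ~(ts, fs) | p x       = (x:ts, fs)
                           | otherwise = (ts, x:fs)
```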
Paul Hudak wrote:
Unfortunately, the "Gentle Introduction To Haskell" that
haskell.org links to is not a very useful introduction.
John and I should probably rename this document, since it really isn't a
very gentle intro at all. We should probably also downplay its
prominence on the
Simon Peyton-Jones wrote:
It's a bug in the defn of 'partition' in the Haskell 98 report.
I have (still) failed to publish this as an errata, let alone revise
the report itself, so the buggy defn stands at present, I'm afraid.
I really plan to get to the revision in early '01.
One thing I
Paul Hudak wrote:
[snip]
So I suppose the main thing that John and I should think about is
changing the title. Something like "An Introduction to Haskell for
People Who Have Previously Programmed in Scheme or Some Other Functional
Language" might be good! :-)
"A Gentle Introduction to
Sebastien Carlier wrote:
import Monad
...
do y <- liftM unzip m1
Thanks.
I'm constantly amazed by the number of tricks one has
to know before he can write concise code using the
do-notation (among other things, I used to write
"x - return $ m" instead of "let x = m").
[snip]
I am finding functional dependencies confusing. (I suspect I am not alone.)
Should the following code work?
class HasConverter a b | a -> b where
  convert :: a -> b
instance (HasConverter a b, Show b) => Show a where
  show value = show (convert value)
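As written, the instance head `Show a` overlaps with every other Show instance, so compilers reject it (or need several extensions and still behave badly). A common workaround, sketched here with an illustrative instance, is to route the conversion through a newtype so the head stays specific:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

class HasConverter a b | a -> b where
  convert :: a -> b

-- The newtype keeps the Show instance from claiming every type.
newtype Converted a = Converted a

instance (HasConverter a b, Show b) => Show (Converted a) where
  show (Converted value) = show (convert value)

-- An illustrative converter instance:
instance HasConverter Bool Int where
  convert False = 0
  convert True  = 1
```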
Andreas Rossberg wrote:
George Russell wrote:
I'm sorry if this is a FAQ, but I'm curious to know why Haskell (or at least, GHC
and Hugs)
doesn't seem able to use contexts on variables at the front of data declarations.
There has been some discussion on contexts on data declarations
Andreas Rossberg wrote:
[snip]
Such monomorphisation is not possible for Haskell in general, because it
allows polymorphic recursion. As a consequence, the number of
dictionaries constructed for a given program also is potentially
infinite.
[snip]
Yes OK, that's another case, like
I hope nobody actually uses this random number generator. It appears
to be linear congruential with period less than 14. The author
of these benchmarks seems to have got this algorithm from "Numerical
Recipes", whose code (at least, in the early editions)
has come in for some very heavy
The MLj compiler from ML to the Java Virtual Machine (which is being
actively worked on by the geniuses at Microsoft as we speak) expands
out all polymorphism. I helped develop this compiler and I believe
this approach to be a good one. There are particular reasons why it's
good for this
I don't want to seem incredibly Luddite, but there are some things the World Wide
Web is not good at, and one of them is permanence. Try for example finding out
about Glasgow Haskell from http://www.dcs.gla.ac.uk, which was I think the
standard URL a few years ago. In 2050 we may not even have
Alastair David Reid wrote:
I find it therefore of concern that many crucial Haskell documents,
including the standard and, for example, the various Glasgow Haskell
manuals, are only available online.
My printed copy of the Haskell 98 report is numbered:
YaleU/DCS/RR-1106
[snip]
Well, I think we all know the answer to this one, namely
{-# NOINLINE [name] #-}
This is, after all, what GHC does, and what several of the
files in ghc/fptools do.
Only slight problem is that according to the Haskell 98
report, including Simon Peyton Jones' revised draft, you
should use:
Alastair David Reid wrote:
Rijk-Jan van Haaften [EMAIL PROTECTED]:
What does the language definition say about [tabs]?
Sigbjorn:
Nothing at all, I believe, but the convention is [...]
The Haskell 1.4 report says what is meant to happen (section 1.5)
(which was to follow the
I personally think the inclusion of Float and Double in Enum is an unmitigated
disaster.
Enum consists of three separate parts:
(1) succ and pred. These appear for Float to correspond to adding or subtracting
1.0. (I am finding this out by testing with ghci; it's not specified where
Ketil Malde wrote:
George Russell [EMAIL PROTECTED] writes:
In addition, I suggest that, since it is widely agreed that the instances of
Enum for Float and Double
And (Ratio a)?
Yes, you've got a point there. They'd none of 'em be missed.
Of course mathematicians are well aware
. Then perhaps we might be able to get the whole thing done by
Christmas.
George Russell
The University where I did my Maths degree also had an extremely good Computer Science
department, but sadly this connection was not exploited nearly as much as it should
have been. I have two anecdotes which might be relevant:
(1) I remember being told off by one teaching assistant for writing
Simon PJ wrote:
For the Revised Haskell 98 report, Russell O'Connor suggests:

| Also, I understand you are reluctant to make library changes,
| but sinh and cosh can easily be defined in terms of exp
|
| sinh x = (exp(x) - exp(-x))/2
| cosh x = (exp(x) +
Dylan Thurston wrote:
[snip]
No. As has been pointed out, this is a bad idea numerically because
it will give the wrong answer for sinh x for very small values of
x. As a matter of fact, you will also get the wrong answer for very large
values of x, where exp(x) can overflow even though
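A sketch of the large-x failure (assuming IEEE double precision):

```haskell
-- The naive definition overflows where the true result is still
-- representable: exp 710 is Infinity in Double, but sinh 710
-- (about 1.1e308) fits comfortably below the Double maximum.
naiveSinh :: Double -> Double
naiveSinh x = (exp x - exp (-x)) / 2

overflowDemo :: (Bool, Bool)
overflowDemo = (isInfinite (naiveSinh 710), isInfinite (sinh 710))
```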
I wrote
I'm afraid that I have very little faith in the numerical analysis
expertise of the typical Haskell implementor, so I think it is dangerous
to give them an incorrect default implementation. I am reminded of
the notorious ASCII C (very)-pseudo-random number generator . . .
Dylan wrote
S.D.Mechveliani wrote
Does the Report specify that gcd 0 0 is not defined?
Yes. The Report definition says
gcd :: (Integral a) => a -> a -> a
gcd 0 0 = error "Prelude.gcd: gcd 0 0 is undefined"
gcd x y = gcd'
I've reconsidered my earlier position and think now that the Prelude is wrong to make
gcd 0 0 an error, and should return 0. It probably doesn't make much difference to
anyone, but it's like 1 not being a prime; it may be slightly harder to explain, but it
makes the maths come out nicer and is
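With gcd 0 0 = 0, gcd x y is characterised as the unique d >= 0 whose divisors are exactly the common divisors of x and y, which is the "nicer maths". A total definition is a one-line change:

```haskell
-- Euclid's algorithm with no error case: gcdTotal 0 0 = 0 falls out
-- naturally from the base case go a 0 = a.
gcdTotal :: Integral a => a -> a -> a
gcdTotal x y = go (abs x) (abs y)
  where go a 0 = a
        go a b = go b (a `rem` b)
```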
One thing I would very much like to see done in a functional language is fault-tree
analysis.
A fault tree has as nodes various undesirable events, with as top node some disaster
(for example,
nuclear reactor meltdown) and as leaves various faults which can occur, with their
probabilities
I too am against broadening the scope of n+k patterns, for reasons that others have
already
given. In particular, I am absolutely against allowing n+k patterns to be used for
Float/Double.
If n+k patterns are to be meaningful at all, you want matching y against x+1, you want
a unique
x such