Let me try to give my answers to some of the points that have come up since
yesterday.

Hans Aberg says:

          If the language is now to be standardized, why not make it an
        ISO/ANSI standard?

I don't think this is the time. Look at Pascal. After the revised definition
was published, many years passed before it became an ISO standard, during
which the language did not change one jot. The final standardization made
only one or two small revisions; the language itself was already largely in
its final form. Let's jump through these hoops in five years' time, when
Haskell has been fixed for so long that language researchers aren't
interested in it any more... (:-)

          Is it not possible to make the versions upwards compatible, so
        that Haskell 1.4 code can somehow be run on Haskell 1.5? Does
        "being stable" need to mean unchangeable?

Well, that's really been the aim all along, but things haven't turned out
that way. Rather, the committee has had debates about whether `many'
programs will break, or whether there are `easy fixes'. In practice I think
true upwards compatibility is hard to achieve, and maybe not even desirable
if it leads to a more baroque design. It's also important to remember that
even extensions that don't change the meaning of correct programs may
transform programs that fail for a simple reason into programs that fail
for a complex one; we've seen examples of that. "Being stable" should also
include producing predictable and understandable error messages!
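
To illustrate that last point, here is a small sketch in today's Haskell
(the `Sized' class and everything in it are invented purely for
illustration). Overloading a previously list-only function changes the
meaning of no correct program, but it does change how an incorrect one
fails: a direct "expected a list" complaint becomes a class-level one.

    -- With the ordinary list-only function, a misuse such as
    --     length (Just 3)
    -- is rejected directly: expected [a], got Maybe ...
    -- Now suppose a later version overloads it via a class:

    class Sized f where
      size :: f a -> Int

    instance Sized [] where
      size = length

    -- Every correct use of the old function still works:
    ok :: Int
    ok = size [1, 2, 3 :: Int]

    -- But the same misuse,
    --     size (Just 3)
    -- is now rejected with a class-level message instead:
    --     No instance for (Sized Maybe) ...

    main :: IO ()
    main = print ok

The program that fails still fails; only the explanation the programmer
has to decipher has become more sophisticated.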

          It seems to me that this increased complexity is the result of
        people starting to find the language useful. An idea to handle
        this might be to opt for a more condensed kernel, based on
        logically clean principles.

Yes, some people favour this approach. I'm strongly against it, because
I've encountered all too many students with the impression that functional
languages are OK for toy programs, but that for real work you need
C/C++/Java/whatever. They can easily get that impression, paradoxically,
because of the success we functional programmers have had in introducing
functional languages early in the curriculum! Students believe functional
languages are good for toy programs because they learned them when they
could only write toy programs. If they gradually discover that, in fact,
the very language they have learned is also used for very serious
applications, then there's a chance of countering that impression. If
instead they discover that they were quite right, that the language they
have learned is considered a toy even by functional language researchers,
then I don't think we have much chance. So personally I am dead against
designing a `Noddy Haskell' which is clean enough for teaching; let
Standard Haskell be clean instead!

          Standardizing a language tends to make it obsolete, due to lack of
        creativity. Perhaps it is time to start discussing the successor of
        Haskell then.

Please not yet! Let us finish Haskell first!

Sergey Mechveliani says:

        To my mind, the 1.4 version is still very "intermediate".
        The language badly needs multiple class parameters and other
        important extensions to fit CA better.

Agreed, the restriction to single-parameter classes has become severe.
Dropping it (that is, allowing multi-parameter classes) could even be seen
as a simplification, and it will be considered for Standard Haskell --- see
the web pages.
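
To make that concrete, here is a minimal sketch of what lifting the
restriction allows. The `Collection' class below is invented purely for
illustration, and the LANGUAGE pragmas are how a present-day GHC enables
the extension; neither is part of the Haskell 1.4 report.

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    -- A class relating *two* types: a collection type c and its
    -- element type e. A single-parameter class cannot express this
    -- relationship between container and element.
    class Collection c e where
      empty  :: c e
      insert :: e -> c e -> c e
      member :: e -> c e -> Bool

    -- Lists collect any element type that supports equality.
    instance Eq e => Collection [] e where
      empty  = []
      insert = (:)
      member = elem

    main :: IO ()
    main = print (member (2 :: Int) (insert 2 empty :: [Int]))

The same class could then be instantiated for, say, ordered trees of
orderable elements, something the single-parameter system forces one to
encode much more awkwardly.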

John Whitley says:

        But surely as the above benefits accrue, further areas for refinement
        of the language will be revealed, and wholly new research areas will
        emerge and mature.  The research community centered around Haskell has
        done much to create a language in which powerful abstraction tools can
        be brought to bear on real world problems.  This research environment
        should continue, and requires a focused research forum for the same
        reasons Haskell was created in the first place.

        Perhaps what is needed are two tracks of language development,
        "Standard Haskell" and "Research Haskell". The research community
        continues to develop, distribute, and test new language concepts with
        less fear of disrupting existing users.  After sufficient time the
        lessons from Research Haskell can be folded back into Standard
        Haskell.

I think this is essentially what is being proposed, with two exceptions:

* Since `Standard Haskell' and `Research Haskell' are expected to diverge,
  the proposal is to give them more clearly distinct names.

* When a new stable design is reached, rather than change the definition of
  Haskell, it should be given a new name. This could be `Haskell 2001', it
  could be `Curry', or, who knows, it might be `Alonzo'! That will be a
  question for whoever designs it.

Of course, nobody is proposing to freeze research on functional languages!
Just to fix the meaning of the word `Haskell'. The big advantage that I see is
that it puts the decision of when to upgrade from Standard Haskell to a more
modern design in the hands of users, rather than in the hands of the Haskell
committee.

        As an example, I would wish to see the power of Haskell brought to
        bear on systems programming.  Moreover, I want powerful abstraction
        tools made available to the operating system designer and implementor.
        OS entities are usually untyped (i.e. just bits), or have an ad-hoc
        typing notion at best.  Thus the issues that interest me most strongly
        are persistence, support for programming in the large, and dynamic
        typing.

These sound like good research issues to me! But I suspect the best way to
address them is to let individual research groups work away as they have in
the past, and then pull together the most successful results into a new design
some years from now. I'm sure the good experiences we've had of working
together and designing a common language will mean that the same kind of
approach continues. But when the next committee forms, it just won't be called
the Haskell committee.

Frank Christoph says:

        I'm not a historian of programming languages, but I suspect that ideas
        such as Hindley-Milner type systems, SML's module system, Simula's
        objects and Scheme's continuations were all regarded once as
        questionable features because they were perceived as complex _then_,
        but now I think that (to varying degrees) they have seeped into the
        literature, the curriculum and the general consciousness of
        programmers enough that they are not regarded as unduly complex or
        difficult to use.

        I think Haskell has certainly started this process for monads,
        probably also for type classes and maybe even for laziness itself,
        even though that's nothing new, simply because of its popularity.

Don't worry, none of these is at risk!

        So I would urge the committee, when it is considering the question of
        standardization, to keep in mind not only the current perception of
        Haskell, but also to anticipate how Haskell might be regarded in the
        near future...

Yes, it's important to distinguish between `complex' and `unfamiliar'. But
interestingly, at least the first two of your examples (Hindley-Milner typing
and the SML module system) have also been simplified since their inception ---
another reason why they are not considered unduly complex today. That's the
kind of simplification which I hope we can achieve.


John Hughes


