Re: relocate_TSO

1998-10-16 Thread Ralf Hinze

| Ralf Hinze [EMAIL PROTECTED] writes:
| 
|  jod 78 a.out
|  a.out: fatal error: relocate_TSO
|  jod 79 
| 
| Gotta be a native code generator bug.  Try compiling with -fvia-C.
| 
| Cheers,
|   Simon

Does not work, I'm afraid ...

jod 157 ghc -fvia-C EDigits.lhs
ghc: module version changed to 1; reason: no old .hi file
jod 158 a.out +RTS -Sstderr
a.out +RTS -Sstderr 
  Alloc  Collect   Live   Resid   GC    GC    TOT   TOT   Page Flts
  bytes   bytes    bytes   ency  user  elap  user  elap
 262144  262144     4368   1.7%  0.00  0.00  0.03  0.02    0    5
 262144  266240     7064   2.7%  0.00  0.00  0.04  0.03    0    0
a.out: fatal error: relocate_TSO


Fiddling around with `-k' and `-K' does not help either.

Ralf



Re[2]: Forwarded from haskell list: Catch 22?

1998-10-16 Thread john_r_velman

Sigbjorn writes:
john_r_velman [EMAIL PROTECTED] writes: 

 I've been using HUGS for a few weeks, learning Haskell, and fairly
 excited about it.  When I saw the GHC 4 announcement, I decided to try 
 to install GHC.  But..
  [snip]
 
Hi,
 
have a look at
 
  http://www.dcs.gla.ac.uk/~sof/ghc-win32.html
 
it contains info on how to set up a ghc-3.03 binary distribution under 
cygwin32 b19.

It seems like it almost worked.  

I had a fair amount of trouble with the well known cygwin32 CRLF
problem.  I have my cyg-win/gnu-win environment set up so that
everything works if there are no CRs next to the LFs in shell
scripts, and apparently also in perl scripts.  Apparently the tar file
ghc-3_03-cygwin32_tar.gz was prepared in an environment that resulted
in the included shell scripts and perl scripts having CRLFs (or else
something has gone wrong with my installation recently! -- not exactly
an impossible event).  

I wrote a shell script to replace the most obvious scripts with
versions with the CRs stripped out.  Once that was done, 'configure'
and 'make install' went smoothly, apparently.  (I did what little
makefile editing I needed with Vim, which is very honest about showing
the CRs if there are any, and not putting them in unless explicitly
asked to.) Next step (after fixing my .bashrc to add
/usr/local/ghc3_03/bin to my PATH) was to try the "hello world"
example.

Instead of using cat as in the example, I used Vim (truth in problem
explanation).  I've put together two versions: one with CRs and one
without.

Time now for an image of a bash session:

-- bash-2.01$ pwd
-- /home/haskelltest/ghc
-- bash-2.01$ ls
-- main.hs  maincr.hs
-- bash-2.01$ cat main.hs
-- module Main(main) where
-- 
-- main = putStrLn "Hello World!"
-- bash-2.01$ cat maincr.hs
-- module Main(main) where
-- 
-- main = putStrLn "Hello World!"
-- bash-2.01$ type ghc
-- ghc is hashed (/usr/local/ghc3_03/bin/ghc)
-- bash-2.01$ type ghc-3.03
-- ghc-3.03 is hashed (/usr/local/ghc3_03/bin/ghc-3.03)
-- bash-2.01$ ghc -o main main.hs
-- 
-- headFS: empty FS:
-- bash-2.01$ ls
-- main.hs  maincr.hs
-- bash-2.01$
-- 

One suspects that there is still a CRLF lurking somewhere important
but somehow invisible.  On the other hand, I may have just missed
something obvious.

By the way, main.hs works with Hugs.  Also, with ghc I get the same
results whether I try to compile main.hs or maincr.hs.  Or, if I use
ghc-3.03 instead of the linked version.

Thanks,

John Velman
[EMAIL PROTECTED]
(Message composed with NTemacs, mailed with ccmail)




Re: bootstrapping ghc on AIX

1998-10-16 Thread Jan Kort

Hi,

  In ghc/Makefile it says:

17  # Order is important! driver/ has to come before includes/ which
18  # again has to come before the rest.
19  #
20  # If we're booting from .hc files, swap the order
21  # we descend into compiler/ and lib/
22  #
23  ifeq "$(GhcWithHscBuiltViaC)" "NO"
24  SUBDIRS = utils driver includes rts docs compiler lib
25  else
26  SUBDIRS = utils driver includes rts docs lib compiler
27  endif

Shouldn't this be 'ifeq "$(GhcWithHscBuiltViaC)" "YES"' ?

  Jan

-- 

Jan Kort



bootstrapping ghc on AIX

1998-10-16 Thread Jan Kort

Hi,

  I tried to bootstrap ghc4.00 on AIX with the "--enable-hc-boot"
flag, but I got an error:

RTS -K2m -H10m -RTS-g rename/ParseIface.y
make[2]: RTS: Command not found
make[2]: *** [rename/ParseIface.hs] Error 127
make[1]: *** [boot] Error 1
make: *** [boot] Error 1

It looks like the variable HAPPY in ghc/compiler/Makefile is not
defined, but since I'm bootstrapping I don't see why it would have
to be defined ?

Here are the commands I typed to produce the bug report,
which I have gzipped and attached (unfortunately netscape
attachments don't get through, so I'll send it as a separate
mail after this one):

script myscript
uname -srv
gcc --version
make --version
tar -xf ghc-4.00-src.tar
cd fptools
./configure --enable-hc-boot
make boot
exit

I also tried to compile ghc2.10, but this gives the same error.

I can get "make boot" through by commenting out 6 lines related
to Happy and some dependencies in "ghc/compiler/Makefile". But the "make all"
does not get very far, the first thing it complains about is "etext"
being undefined in "ghc/includes/ClosureMacros.h", I changed to "_etext"
to "etext" in "ghc/includes/config.h". Then I added a line to
"ghc/rts/MBlock.c":
#define ASK_FOR_MEM_AT 0x5000
Then I copied the aix aix_TARGET_OS ifdef from "ghc/rts/PrimOps.hc" to
"ghc/includes/PrimOps.h".
The I got the following error:

make[4]: Leaving directory `/pluto/home/jan/fptools/ghc/rts/gmp/mpz'
../../ghc/driver/ghc  -I../includes -I. -Igum -optc-Wall  -optc-W
-optc-Wstrict-prototypes  -optc-Wmissing-proto
Assembler:
/tmp/ccHbwaUa.s: line 61: 1252-044 The specified source character 0xc does not have
meaning in the command context used.
/tmp/ccHbwaUa.s: line 61: 1252-142 Syntax error.
make[2]: *** [StgRun.o] Error 1
make[1]: *** [all] Error 1
make: *** [all] Error 1

I removed the assembler files from "ghc/rts/Makefile" (probably not a good
idea). Then I got the error:


==fptools== make all --no-print-directory -r;
 in /pluto/home/jan/ghc4/fptools/ghc/lib/std

make[3]: *** No rule to make target `Array.o', needed by `libHS.a'.  Stop.
make[2]: *** [all] Error 1
make[1]: *** [all] Error 1
make: *** [all] Error 1

Which I don't understand; am I missing some .hc files?

Regards,
  Jan

-- 

Jan Kort



RE: bootstrapping ghc on AIX

1998-10-16 Thread Sigbjorn Finne (Intl Vendor)


Jan Kort [mailto:[EMAIL PROTECTED]] writes: 
 
 Hi,
 
   In ghc/Makefile it says:
 
 17  # Order is important! driver/ has to come before 
 includes/ which
 18  # again has to come before the rest.
 19  #
 20  # If we're booting from .hc files, swap the order
 21  # we descend into compiler/ and lib/
 22  #
 23  ifeq "$(GhcWithHscBuiltViaC)" "NO"
 24  SUBDIRS = utils driver includes rts docs compiler lib
 25  else
 26  SUBDIRS = utils driver includes rts docs lib compiler
 27  endif
 
 Shouldn't this be 'ifeq "$(GhcWithHscBuiltViaC)" "YES"' ?
 
   Jan
 

"NO" :-) 

You'll need to descend into lib/ before compiler/ when booting,
since you'll need to have the .o's for the Prelude libs around
when you eventually get to do the big link in compiler/.

--Sigbjorn "building 4.00 .hc files now"



Re: category theory

1998-10-16 Thread David Glen JEFFERY

On 15-Oct-1998, Hans Aberg [EMAIL PROTECTED] wrote:
 At 17:25 +1000 98/10/15, David Glen JEFFERY wrote:
 Does something like this exist?  FWIW, I'm using Hugs 1.4
 
   I gather that "FWIW" is yet another SSMA; what does it mean?

For What It's Worth.  Okay... I'll bite. What's SSMA?

Anyhow, for those who are interested in my original question about a
non-aborting read, a colleague of mine provided this:


-- Author: Bernie Pope ([EMAIL PROTECTED])

module MyRead (myread) where

myread :: Read a => String -> Maybe a
myread s = case [x | (x,t) <- reads s, ("","") <- lex t] of
 [x] -> Just x
 []  -> Nothing
 _   -> Nothing

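A quick usage sketch of the above, for the record (my examples, not
Bernie's):

    myread "42"  :: Maybe Int    -- Just 42
    myread "4 2" :: Maybe Int    -- Nothing (trailing garbage)
    myread "cat" :: Maybe Int    -- Nothing (no parse)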

Also, Noel Winstanley mailed me directly and suggested that I use
unsafePerformIO and catch the exception: 

safe_read s = unsafePerformIO $ (map Just . readIO $ s) `catch`
(\_ -> return Nothing)

That's also not a bad idea, but I think that "myread" above is simpler. 
Perhaps something like this should be added to the standard library.(?)


dgj
-- 
David Jeffery ([EMAIL PROTECTED]) |  Marge: Did you just call everyone "chicken"?
PhD student,|  Homer: No.  I swear on this Bible!
Department of Computer Science  |  Marge: That's not a Bible; that's a book of
University of Melbourne | carpet samples!
Australia   |  Homer: Ooooh... Fuzzy.





RE: Haskell in Scientific Computing?

1998-10-16 Thread Simon Peyton-Jones

 Could Haskell ever be used for serious scientific computing?
 What would have to be done to promote it from a modelling
 tool to a more serious number crunching engine? Maybe not
 necessarily competing with Cray, but not terribly lagging
 far behind the other languages?

Short answer: I think so, but it's a lot of work.

Going head to head with FORTRAN is very challenging. FORTRAN
compilers deal with a first-order, strict, language with explicit
storage management -- and they have a two-decade lead.

First, you absolutely have to store arrays of unboxed values;
having arrays of pointers to thunks is a real killer.  The Clean
people have done a lot of good work on this, and with a bit of 
multi-parameter type class magic I think one can use unboxed arrays
without it polluting your program too much.
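As a concrete illustration, here is a minimal sketch in terms of GHC's
later Data.Array.Unboxed interface (my gloss, not anything available
when this was written):

    import Data.Array.Unboxed (UArray, listArray, elems)

    -- A flat array of unboxed Doubles: no per-element pointers or thunks.
    type Vector = UArray Int Double

    fromList :: [Double] -> Vector
    fromList xs = listArray (0, length xs - 1) xs

    dot :: Vector -> Vector -> Double
    dot v w = sum (zipWith (*) (elems v) (elems w))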

I just do not know how important update in place is.  With programs
that manipulate a few very large arrays the detailed storage management
of those arrays becomes rather important.  Sisal did an incredibly
good job of this.  Laziness makes that harder.  Clean allows the
programmer to get update in place via uniqueness types.

Declarative languages *ought* to give a big handle on optimisation.
FORTRAN compilers spend a lot of time deriving a functional program
from the imperative one they started with, but they have to make
conservative approximations.  So in principle we might do better.
I know of four encouraging examples:
Sisal
NESL
SAC
FISh

SAC is the least well known, but they have now successfully implemented
'with-loop folding'. This amounts to eliminating intermediate arrays
in the same sort of way as we eliminate intermediate lists.  It's cool.
And they beat FORTRAN.  (See their IFL'98 paper.)
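A rough Haskell analogue of that intermediate-structure elimination, as
a sketch of mine: written naively, the first version below allocates two
intermediate lists; the with-loop-folding/deforestation idea is to turn
it into the single accumulating loop of the second.

    -- naive: builds the zipWith list and then the map list
    dotScaled :: Double -> [Double] -> [Double] -> Double
    dotScaled k xs ys = sum (map (k *) (zipWith (*) xs ys))

    -- after fusion: one pass, no intermediate structures
    dotScaled' :: Double -> [Double] -> [Double] -> Double
    dotScaled' k = go 0
      where
        go acc (x:xs) (y:ys) = go (acc + k * x * y) xs ys
        go acc _      _      = acc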

All of these languages are restricted in some way.  All
also have extensions that are aimed at array computations.
Haskell, or Clean, or ML are less specialised, and therefore harder 
to compile as well.  


Another approach is to compete not head-to-head on speed, but on
cunning.   Get a good library of numeric procedures (e.g. Mathlab),
interface them to Haskell, and use Haskell as the glue code to make
it really fast to write complex numerical algorithms.  99% of the
time will still be spent in the library, so the speed of the Haskell
implementation is not very important.  This looks like a jolly productive
line to me.



So in principle, yes.  But it takes a big investment and there's a long
hill to climb before people start to take notice of you.  But (to change
the metaphor) I think there's some fertile territory there.

Incidentally, if anyone wants to work with us to make GHC do a better job of
scientific computation (and it's currently nowhere near good), I'd be
glad to work with them.
glad to work with them.

Simon





RE: Haskell 98

1998-10-16 Thread Simon Peyton-Jones

 Classes appear in *contexts*, not in types. So there's no 
 confusion. This is
 another `bug fix' which simplifies the language, and I think 
 we should do it.

Consider the function

t :: T a => T a -> T a

I think that it's far from clear what each of the T's means!
Worse, in Haskell 2 we'll also have

t :: T T => T a -> T a

In (T T) one is a class and the other is a type constructor.


Simon





Re: category theory

1998-10-16 Thread Jerzy Karczmarczuk

Alan Wood:

...

 On another point ... I assume *someone* out there must have re-written the ML
 code from Rydeheard and Burstall's 'Computational Category Theroy' in Haskell -
 even if only partially. If you have, I'd welcome a copy of the code.
 
 Alan
 
 --
 Dr A.M. Wood email: [EMAIL PROTECTED]


Perhaps I missed some postings, but a reasonable starting point is
also the article by Erik Meijer and Luc Duponcheel on the categorical
prelude to Gofer:

http://www.cs.ruu.nl/people/luc/GlasgowFP94.ps


Jerzy Karczmarczuk
Caen, France.





Haskell in Scientific Computing

1998-10-16 Thread John O'Donnell


There's another way to look at the role of Haskell in scientific computing.  
All the discussion so far is assuming that (1) you write your program in 
Haskell, (2) you run it through a compiler, (3) you compare the speed with 
Fortran, (4) you sigh and give up...

In this picture, Haskell is being used to express the algorithm at just one 
level of abstraction.  Now, Fortran is a very limited language, and it's only
capable of expressing one level of abstraction.  But Haskell does *not*
have that limitation!

So there is another way to use functional languages: they can help you to 
express your algorithm cleanly and simply, and they can also help you in 
deriving a more efficient low-level version via program transformation.  If 
you like, it's possible to transform the program all the way down to C or 
Fortran, and at that point you don't need a Haskell compiler or graph 
reduction at all - you just use the Fortran or C compiler.
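A tiny example of what such a transformation looks like (a sketch of
mine, not from John's message): start from the obvious specification,
then fuse the two traversals into one pass -- exactly the kind of
routine step one would hope to automate.

    -- specification level: clear, but traverses the list twice
    meanSpec :: [Double] -> Double
    meanSpec xs = sum xs / fromIntegral (length xs)

    -- derived level: one pass with explicit accumulators
    meanFast :: [Double] -> Double
    meanFast = go 0 (0 :: Int)
      where
        go s n []     = s / fromIntegral n
        go s n (x:xs) = go (s + x) (n + 1) xs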

Naturally, this transformational approach doesn't help if you start out 
thinking of the algorithm in a low level, first order style with lots of 
imperative iterations over arrays.  In such cases one might as well just write 
the program in Fortran to start with.

However, there are some algorithms that are very difficult to write in Fortran 
or C, but which can be expressed much more clearly and simply in a functional 
style.  (A good example of this is the hierarchical radiosity algorithm.)  You 
can write a high level executable specification, using Haskell and ghc; if the 
performance is inadequate, then you can transform the program to a lower 
level. The crucial point here is that the transformations are not aimed at 
getting rid of inefficiencies caused by graph reduction or the like; the 
transformations are aimed at making the algorithm fit better onto the 
underlying architecture.

One apparent problem with the transformational approach is that it's a lot of 
work.  Isn't it easier just to write your program once, in a language that 
supports only one level of abstraction, and compile it?  I think this is a 
valid criticism (after all, *I* am the one raising it!) but there are two 
counterarguments:

  -- some of the transformation steps required are straightforward and
 boring, but there may be the possibility of automating these.  After
 all, even the ghc compiler is a transformational system `under
 the hood'.  Full transformational automation has not been achieved
 yet, but work is underway.

  -- For some algorithms, insight is needed to transform down to a
 low level efficient implementation: "Eureka steps".  It is
 unrealistic to expect a compiler to find Eureka steps, but the
 transformational approach may make it possible to automate the
 boring steps while allowing the programmer to insert clever
 insights.  You can't do this with compilers, and you can't do
 it with languages that can express algorithms only at one level
 of abstraction.

John O'Donnell






Re: Haskell 98

1998-10-16 Thread Ralf Hinze

| Comments to me directly ([EMAIL PROTECTED]), or the Haskell mailing
| list.

Here we are ... (comments are marked with `]')



Typing of do expressions

[...]

  2. Nuke MonadZero altogether. Instead, augment the Monad class with

 class Monad m where
   ...as before..
   mfail :: m a

 Now all do expressions are in class Monad. You can interpret mfail as a
 zero if you like. Three reasons for liking this:
o It simplifies the typing of do expressions.

o Pretty much every monad will have to be in MonadZero in order to
  do something sensible with do expressions that include patterns.
  If that's the case, why not simplify and combine the classes?

o No known Haskell monad obeys the laws for a zero:

m >> zero = m

  (e.g. Take m to be bottom.) So calling it a zero is a bit of a
  misnomer.


] I prefer the second option simply because all `do' expressions are then
] in the basic Monad class. One could even add a default definition for
] `mfail'
]
]   class Monad m where
]   ...as before..
]mfail :: m a
]mfail =  error "computation failed"
]
] Then no changes to existing instances of Monad are required. 
]
] BTW I guess the law should read as
]
]   m >> zero = zero.
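]
] For concreteness, the pattern-failure translation in question is
] roughly this (a sketch, not report text):
]
]   do { Just x <- m; k x }
]     ==>  m >>= \v -> case v of
]                        Just x -> k x
]                        _      -> mfail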



The simple-context restriction

The question here is whether

f :: Eq (h a) => ...
g :: Eq [a]   => ...

should be legal types. In Haskell 1.4 both are illegal; the constraints in a
context must take the form (Class tyvar). There are three possibilities:

  1. Status quo. Accept that we don't have principal types, and that
 occasionally hurts.
  2. Allow constraints of the form (Class (tyvar types)). This would make
 f's type signature legal, but still exclude g's. It still permits eager
 context reduction (as Haskell currently has). It has principal types.
 But Haskell 2 is going to go further.
  3. Allow arbitrary types in contexts. This would make both f and g's type
 signature legal. This is where Haskell 2 is going; it is sound and
 implementable. But it forces lazy context reduction, which is a big
 change, perhaps all the more so because it is largely hidden.
 Furthermore, it is a change that hbc and nhc might find hard to
 track.


] Don't stick to the status quo!!! Types like `Eq (h a)' just occur too
] often (as a rule of thumb they always show up if you mix classes and
] constructor classes). So if (3) is a problem because of lazy context
] reduction, adopt (2), but not (1).
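]
] A typical way such contexts arise (my sketch; h is a constructor
] variable, so under (1) the signature cannot be written at all):
]
]   member :: Eq (h a) => h a -> [h a] -> Bool
]   member x = any (== x)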



Lexical and syntactic matters

VarIds can begin with "_".

The ICFP meeting didn't like this change. Partly because '_' on its own is
not a normal identifier, in a pattern at least: it's a wild card.

The change was apparently originally motivated by noting that

___

(three consecutive underscores) would lex as _ _ _, clearly a Bad Thing. We
agreed to fix it thus:

   * make '_' lexically a valid identifier character anywhere
   * but disallow identifiers starting with '_'

Thus '___' would lex as '___' but then be rejected. '_' on its own remains a
wild-card pattern, and illegal in expressions.


] `_' is a special case whatever option one chooses. So I can see no real
] advantage in disallowing identifiers starting with '_'.


Maximal munch rule for '---'

Modified. Yes, use the maximal munch rule, but any lexeme consisting of two
or more hyphens begins a comment.

So '---' is not a valid operator symbol, but '--' is. A line of hyphens of
any length introduces a comment.


] I do not understand the example: if every lexeme consisting of two
] or more hyphens begins a comment, `--' begins a comment!


Allow a type and a class to have the same name

Rejected. It's an un-forced change, and it allows even more obscure programs
than now. Data constructors and type constructors can share the same name,
but data constructors appear only in expressions, and type constructors only
in types, so there's no confusion. But classes appear in types too.

] No, no, no! Why on earth should Haskell 98 dictate the choice of names?
] Nobody is forced to use the same names for types and classes, if she or he
] does not like it. But there may be a situation where this is pretty useful.
] The next step is probably to ban the indiscriminate use of recursion
] because one can use recursion to write very obscure programs.
] Sorry for my passionate words but I do not like the tendency here :-(.


Allow import decls anywhere in a file

Rejected. Most people wanted them to stay at the top.

] The same comment applies here ;-).


Remove concept of "special identifier"

The idea was to make hiding and qualified into keywords, and 

RE: Haskell in Scientific Computing?

1998-10-16 Thread Hans Aberg

At 02:30 -0700 98/10/16, Simon Peyton-Jones wrote:
Declarative languages *ought* to give a big handle on optimisation.
FORTRAN compilers spend a lot of time deriving a functional program
from the imperative one they started with, but they have to make
conservative approximations.  So in principle we might do better.

  Exactly what does it mean that FORTRAN compilers derive a functional
program?

  Hans Aberg
  * Email: Hans Aberg mailto:[EMAIL PROTECTED]
  * Home Page: http://www.matematik.su.se/~haberg/
  * AMS member listing: http://www.ams.org/cml/






Re: Haskell in Scientific Computing?

1998-10-16 Thread David Barton

Simon Peyton-Jones writes:

 Another approach is to compete not head-to-head on speed, but on
 cunning.  Get a good library of numeric procedures (e.g. Mathlab),
 interface them to Haskell, and use Haskell as the glue code to make
 it really fast to write complex numerical algorithms.  99% of the
 time will still be spent in the library, so the speed of the Haskell
 implementation is not very important.  This looks like a jolly
 productive line to me.

I don't know if it is better to go with a commercial product here
(like Mathlab) or one of the semi-public domain (Reduce) or wholly
public domain tools.  It would be a shame if Haskell were
publicly available but the thing that made it useful for scientific
computing was not.

Dave Barton *
[EMAIL PROTECTED] )0(
http://www.intermetrics.com/~dlb





RE: Haskell in Scientific Computing?

1998-10-16 Thread Thorsten Zoerner

A few comments:

 Could Haskell ever be used for serious scientific computing?
 What would have to be done to promote it from a modelling
 tool to a more serious number crunching engine? Maybe not
 necessarily competing with Cray, but not terribly lagging
 far behind the other languages?

Short answer: I think so, but it's a lot of work.

Going head to head with FORTRAN is very challenging. FORTRAN
compilers deal with a first-order, strict, language with explicit
storage management -- and they have a two-decade lead.

First, you absolutely have to store arrays of unboxed values;
having arrays of pointers to thunks is a real killer.  The Clean
people have done a lot of good work on this, and with a bit of
multi-parameter type class magic I think one can use unboxed arrays
without it polluting your program too much.
For the programmer, unboxed arrays in Clean 1.3 appear just as an attribute
on the type name.
Example: a vector is an unboxed array of reals:
:: Vector :== {# Real}
and that works the same way for matrices
:: Matrix :== {# Vector}
where the '#' indicates unboxing.
Then you can start defining your operations with these types. Let's add two
vectors:
add :: Vector Vector -> Vector
add x y = { xx + yy \\ xx <-: x & yy <-: y}
This is called array comprehension. The '<-:' extracts elements from an
array, like '<-' extracts elements from lists in Clean.
To work with infix operators you can instantiate (+) for vectors, matrices,
tensors just by writing this recursive instantiation:
instance + {# a} | +, ArrayElem a
where
(+) a b = { aa + bb \\ aa <-: a & bb <-: b}
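For comparison, a rough Haskell rendition of the vector addition (my
sketch, using GHC's unboxed-array interface rather than anything in
standard Haskell):

    import Data.Array.Unboxed (UArray, bounds, elems, listArray)

    type Vector = UArray Int Double   -- flat, unboxed

    add :: Vector -> Vector -> Vector
    add x y = listArray (bounds x) (zipWith (+) (elems x) (elems y))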

I just do not know how important update in place is.  With programs
that manipulate a few very large arrays the detailed storage management
of those arrays becomes rather important.  Sisal did an incredibly
good job of this.  Laziness makes that harder.  Clean allows the
programmer to get update in place via uniqueness types.
Concerning the importance of updates, I would distinguish between two large
groups of algorithms. They are - with their favourable storage scheme and
the most important operation:

full, small systems    -> direct methods    -> updates
sparse, large systems  -> iterative methods -> matrix-vector multiplication,
                                               dot products

Two examples: You don't want to use a direct method for a matrix in a
sparse storage scheme, because random access in such matrices is slow and
typically the matrix fills in (and isn't sparse anymore). You don't want to
run an iterative method with a full matrix either, as the matrix vector
product (in each iteration step) becomes really expensive.

Another rule of thumb: In direct methods the main 'information' is located
in the matrix elements, which are accessed and updated. In iterative
methods the information is contained in vectors, which are added,
multiplied with a scalar or a matrix - the result being another vector.
There is often only one matrix involved in iterative methods, being
multiplied with in each step.

[...]
Another approach is to compete not head-to-head on speed, but on
cunning.   Get a good library of numeric procedures (e.g. Mathlab),
interface them to Haskell, and use Haskell as the glue code to make
it really fast to write complex numerical algorithms.  99% of the
time will still be spent in the library, so the speed of the Haskell
implementation is not very important.  This looks like a jolly productive
line to me.
Most parts of Matlab (which stands for Matrix Laboratory and really is
good) are direct calls to LAPACK, a huge library of basic numerical tasks,
written in FORTRAN and C (and maybe more) and specialized for many
platforms.
So using Matlab routines is, in a sense, an indirect step. You don't have the
nice graphic features of Matlab in LAPACK of course.

Cheers, thorsten.

o---o
  T h o r s t e n   Z o e r n e r
  Computing Science Institute   Catholic University of Nijmegen
  snail : Postbus 9010  6500 GL Nijmegen, The Netherlands
  office: +31 (24) 36-52069 fax:   +31 (24) 36-52525
  email : [EMAIL PROTECTED] visit: Toernooiveld 1, room A-6032
  url   : http://www.cs.kun.nl/~zoerner/
o---o






Re: Haskell 98 -- Overloading

1998-10-16 Thread S. Alexander Jacobson

I guess I missed the controversy over at ICFP, but I would like to know
why overloading of lists is being eliminated.

Arguments for Overloading:
1. Generality/Re-use is good
The big point of Hughes' "Why Functional Programming Matters" is that
functional programming enables much more high level notions of re-use and
that is a good thing.  If you eliminate overloading of the ++ operator
then it is much less straightforward to replace code which uses Lists 
as a collection type with code that uses Trees as its collection type.
Now that you are adding MPTC, the argument in favor of retaining ++ as an
overloaded operator is even stronger.  It would be a pity to have to go
through that code, find every instance of ++ and replace it with mplus.  
It makes more sense to encourage programmers to use reusable constructs
from day 1.  The most reusable constructs are the overloaded operators.

2. Provide a path for beginners to learn the generalization
The overloading constructs provide a path for a newbie (like me) to learn
how to generalize concepts like e.g. Lists to arbitrary monads (still
working on this one).  Eliminating overloading makes it that much harder
for someone who has learned Haskell basics to move from understanding ++
as the concatenation operator to understanding that xs ++ ys really
means "the content of xs" orelse "the content of ys".  This generality
allows me to go from understanding ++ as list concatenation to
understanding ++ as collection type combination to understanding ++ as
describing a certain type of computation (the backtracking monad).
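To make the generalization concrete, a small sketch (module name as in
modern GHC; the 1998 library organisation differed):

    import Control.Monad (MonadPlus, mplus)

    ex1 :: [Int]
    ex1 = [1,2] `mplus` [3]        -- list concatenation: [1,2,3]

    ex2 :: Maybe Int
    ex2 = Nothing `mplus` Just 1   -- "or else" for Maybe: Just 1

    ex3 :: MonadPlus m => m a -> m a -> m a
    ex3 = mplus                    -- any backtracking-style monad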

--
From the discussion of Integer vs. Int, it seems like the best objection
to overloading is that error messages are hard for beginners to
understand.  This doesn't strike me as an argument against overloading but
rather an argument in favor of a better error reporting mechanism.  
I am not sure how it would work, but I think that one could construct an
error reporting mechanism with special purpose errors for special purpose
uses that are common.  E.g. a type error involving use of lists and ++
might generate a different error message from the same type error
involving a less common type.

-Alex-

___
S. Alexander Jacobson   i2x Media  
1-212-697-0184 voice1-212-697-1427 fax







RE: Haskell in Scientific Computing?

1998-10-16 Thread R.S. Nikhil

What Simon is probably referring to is the fact that Fortran compilers
attempt to convert the internal representation of the program into
"SSA-form"
(Static Single Assignment form).

You might want to take a look at the following article that makes this
point well:

"SSA is Functional Programming"
Andrew Appel
ACM SIGPLAN Notices 33:4, April 1998, pp 17-20
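Appel's point in miniature, as a sketch: a loop in SSA form, where each
iteration rebinds its variables instead of mutating them, is just a
tail-recursive function.

    -- the Fortran loop   s = 0; do i = 1, n: s = s + a(i)
    -- in SSA / functional form:
    sumArr :: [Double] -> Double
    sumArr = go 0
      where
        go s []     = s               -- loop exit: return the accumulator
        go s (x:xs) = go (s + x) xs   -- next iteration: fresh binding of s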

Nikhil

Rishiyur S. Nikhil,  Compaq Cambridge Research Laboratory
One Kendall Sq., Bldg. 700, Cambridge MA 02139
[EMAIL PROTECTED]; www.crl.digital.research.com
phone +1 (617) 692 7639; fax +1 (617) 692 7650

 --
 From: Hans Aberg[SMTP:[EMAIL PROTECTED]]
 Sent: Friday, October 16, 1998 9:30 AM
 To:   [EMAIL PROTECTED]
 Cc:   Simon Peyton-Jones
 Subject:  RE: Haskell in Scientific Computing?
 
 At 02:30 -0700 98/10/16, Simon Peyton-Jones wrote:
 Declarative languages *ought* to give a big handle on optimisation.
 FORTRAN compilers spend a lot of time deriving a functional program
 from the imperative one they started with, but they have to make
 conservative approximations.  So in principle we might do better.
 
   Exactly what does it mean that FORTRAN compilers derive a functional
 program?
 
   Hans Aberg
   * Email: Hans Aberg mailto:[EMAIL PROTECTED]
   * Home Page: http://www.matematik.su.se/~haberg/
   * AMS member listing: http://www.ams.org/cml/
 





RE: Haskell in Scientific Computing?

1998-10-16 Thread Hans Aberg

At 02:30 -0700 98/10/16, Simon Peyton-Jones wrote:
Another approach is to compete not head-to-head on speed, but on
cunning.   Get a good library of numeric procedures (e.g. Mathlab), ...

  Note that it is "MatLab", short for "Matrix Laboratory".

...interface them to Haskell, and use Haskell as the glue code to make
it really fast to write complex numerical algorithms.  99% of the
time will still be spent in the library, so the speed of the Haskell
implementation is not very important.  This looks like a jolly productive
line to me.

  In addition, this method might give clues on how Haskell might be extended.

  Hans Aberg
  * Email: Hans Aberg mailto:[EMAIL PROTECTED]
  * Home Page: http://www.matematik.su.se/~haberg/
  * AMS member listing: http://www.ams.org/cml/






Re: Haskell in Scientific Computing?

1998-10-16 Thread Rod Price

There is a copylefted almost-clone of Matlab called Octave, which
uses the GNU tools, available at http://www.che.wisc.edu/octave/.
It also includes hooks to many well-known scientific libraries, such
as LAPACK, FFTPACK, etc.

-Rod Price

David Barton wrote:

 Simon Peyton-Jones writes:

  Another approach is to compete not head-to-head on speed, but on
  cunning.  Get a good library of numeric procedures (e.g. Mathlab),
  interface them to Haskell, and use Haskell as the glue code to make
  it really fast to write complex numerical algorithms.  99% of the
  time will still be spent in the library, so the speed of the Haskell
  implementation is not very important.  This looks like a jolly
  productive line to me.

 I don't know if it is better to go with a commercial product here
 (like Mathlab) or one of the semi-public domain (Reduce) or wholly
 public domain tools.  It would be a shame if Haskell were
 publicly available but the thing that made it useful for scientific
 computing was not.

 Dave Barton *
 [EMAIL PROTECTED] )0(
 http://www.intermetrics.com/~dlb







Re: Haskell in Scientific Computing

1998-10-16 Thread edward barry jr

Hey! Good for you, John!! We seem to hear an awful lot about
what Haskell does not (or should) do, and never much about
what it does or can be made to do.

Ed

John O'Donnell wrote:

 There's another way to look at the role of Haskell in scientific computing.
 All the discussion so far is assuming that (1) you write your program in
 Haskell, (2) you run it through a compiler, (3) you compare the speed with
 Fortran, (4) you sigh and give up...

 In this picture, Haskell is being used to express the algorithm at just one
 level of abstraction.  Now, Fortran is a very limited language, and it's only
 capable of expressing one level of abstraction.  But Haskell does *not*
 have that limitation!

 So there is another way to use functional languages: they can help you to
 express your algorithm cleanly and simply, and they can also help you in
 deriving a more efficient low-level version via program transformation.  If
 you like, it's possible to transform the program all the way down to C or
 Fortran, and at that point you don't need a Haskell compiler or graph
 reduction at all - you just use the Fortran or C compiler.

 Naturally, this transformational approach doesn't help if you start out
 thinking of the algorithm in a low level, first order style with lots of
 imperative iterations over arrays.  In such cases one might as well just write
 the program in Fortran to start with.

 However, there are some algorithms that are very difficult to write in Fortran
 or C, but which can be expressed much more clearly and simply in a functional
 style.  (A good example of this is the hierarchical radiosity algorithm.)  You
 can write a high level executable specification, using Haskell and ghc; if the
 performance is inadequate, then you can transform the program to a lower
 level. The crucial point here is that the transformations are not aimed at
 getting rid of inefficiencies caused by graph reduction or the like; the
 transformations are aimed at making the algorithm fit better onto the
 underlying architecture.

 One apparent problem with the transformational approach is that it's a lot of
 work.  Isn't it easier just to write your program once, in a language that
 supports only one level of abstraction, and compile it?  I think this is a
 valid criticism (after all, *I* am the one raising it!) but there are two
 counterarguments:

   -- some of the transformation steps required are straightforward and
  boring, but there may be the possibility of automating these.  After
  all, even the ghc compiler is a transformational system `under
  the hood'.  Full transformational automation has not been achieved
  yet, but work is underway.

   -- For some algorithms, insight is needed to transform down to a
  low level efficient implementation: "Eureka steps".  It is
  unrealistic to expect a compiler to find Eureka steps, but the
  transformational approach may make it possible to automate the
  boring steps while allowing the programmer to insert clever
  insights.  You can't do this with compilers, and you can't do
  it with languages that can express algorithms only at one level
  of abstraction.

 John O'Donnell







Re: Haskell in Scientific Computing?

1998-10-16 Thread Matthew Donadio

Dave Tweed wrote:
 But there's a lot of problems, probably more in the hazy region between
 science & engineering, where `numerically intensive' algorithms are
 developed which don't look anything like existing classical techniques.
 Here the issue is to generate CORRECT results REASONABLY QUICKLY, i.e., the
 time has to be within a factor of 3-4 of a C implementation, but this
 slowdown is acceptable if you are more confident your infant algorithm is
 correctly implemented in your infant code.

Let me give an example here that demonstrates a need for speed and one
reason why some people still use FORTRAN.

One of my previous jobs was developing new receiver techniques for
digital communication systems.  Given a description of the communication
channel we were working with (point-to-point microwave, cellular, etc)
we could generate our theoretical error rate curves based on system
parameters.  Our job was to see how close we could get to the curve; ie.
there isn't a concept of correct results, just degrees of success.  We
would develop a solution and then run a simulation of this solution
against a set of system parameters (generally signal to noise ratio). 
For a 99% confidence in our results, we would run our simulation until
we got 100 bit-errors.  For theoretical error rates down to 1e-5, this would
require about 1000 input points (I think I did my math right, it's
been a while since I did this...).  Needless to say, these would take a
while.  Even a 1.5 times slowdown would mean that we could run fewer
simulations per day.  For numerically intensive applications like this
where running time is measured in hours, speed really does matter.  A
friend of mine uses FORTRAN just for this reason.

Looking back, I would have really liked to have used Haskell.  Infinite
lists map very nicely onto DSP and digital comm methods; I was able to
boil down a fairly ugly C function to 3 lines of Haskell using infinite
lists.
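Something in the spirit of those 3 lines, as a sketch I am supplying
here (not Matt's actual code): a 2-tap smoother over a possibly
infinite sample stream.

    smooth :: [Double] -> [Double]
    smooth xs = zipWith (\a b -> (a + b) / 2) xs (tail xs)

    -- take 5 (smooth [1..])  ==>  [1.5,2.5,3.5,4.5,5.5]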

-- 
Matt Donadio ([EMAIL PROTECTED]) | 43 Leopard Rd, Suite 102
Sr. Software Engineer | Paoli, PA 19301-1552
Image & Signal Processing, Inc.   | Phone: +1 610 407 4391
http://www.isptechinc.com | FAX:   +1 610 407 4405





RE: Haskell in Scientific Computing?

1998-10-16 Thread Jeremy D. Frens


[...]
 Have quadtrees of David Wise's ([WEISE] and [WEISE1])
 proved to be of any importance to scientific computing
 in Haskell? Among other things, the quadtree algorithms
 are supposed to improve array updating schemes. Judging
 from the publishing dates (1992, 1995 with a paper version
 of the latter scheduled for 1998) this idea is still
 very much alive. Has anyone benefited from it yet?
[...]

Wise has been my advisor for nearly five years now.  We worked in Haskell
for a while, using it to write the LU decomposition he worked out in your
second reference.  We even put out a tech report of this code (TR #433 at
http://www.cs.indiana.edu/ftp/techreports/).  That code was actually
written in Gofer, and I'm unsure how much would have to change to update it
for the most recent version of Haskell.
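For readers who have not seen the representation, here is a toy sketch
of the quadtree-matrix idea (mine, nothing like the tuned code in the
tech report): a matrix is a scalar leaf or four quadrants, and block
multiply recurses on the quadrants.

    data QT a = Leaf a | Node (QT a) (QT a) (QT a) (QT a)
                -- Node nw ne sw se

    qzip :: (a -> a -> a) -> QT a -> QT a -> QT a
    qzip f (Leaf x)       (Leaf y)       = Leaf (f x y)
    qzip f (Node a b c d) (Node e g h i) =
        Node (qzip f a e) (qzip f b g) (qzip f c h) (qzip f d i)
    qzip _ _ _ = error "mismatched quadtree shapes"

    mmul :: Num a => QT a -> QT a -> QT a
    mmul (Leaf x)       (Leaf y)       = Leaf (x * y)
    mmul (Node a b c d) (Node e f g h) =
        Node (mmul a e `plus` mmul b g) (mmul a f `plus` mmul b h)
             (mmul c e `plus` mmul d g) (mmul c f `plus` mmul d h)
      where plus = qzip (+)
    mmul _ _ = error "mismatched quadtree shapes"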

But we wanted to compete with traditional code that was written in FORTRAN,
C, and even assembly (e.g., machine specific BLAS).  That code is heavily
optimized and (most importantly for scientific computation) makes very good
use of memory.  The biggest memory concern being the memory hierarchy.  We
needed more control over our matrix structure to compete with the legacy
code.  This control included things like unboxed values, a memory
sequential array, and side-effects.

Many Haskell systems would give us some of the control we wanted, except
for the side-effects.  Other functional languages (where we could get even
more control) were often not tuned to make floating point operations as
fast as possible for the particular architectures that we were interested
in.

So we ended up in C.  We started with the most basic, interesting matrix
algorithm: matrix-matrix multiply.  Using Morton ordering, we get the
matrix into a sequential array such that each quadrant at each level of the
quadtree is in contiguous memory.  Our results are not as impressive as
BLAS, but they are promising.  See our paper in PPoPP'97 or the updated
Tech Report #449 (at the same URL above).  More recent work may allow us to
speed up our time.

Currently, I'm working on QR Factorization with quadtree matrices written
in C.  Again, I'm looking for performance tuned to the machine I compile
the code on.

If we were compiler writers (I only pretend to play one in my free time),
we could probably put many of the basic things we needed into a Haskell
compiler (like the matrix representation, assembly code for the
multiplication of small matrices, etc.).  The major algorithm could be
written in Haskell.  There are certain patterns that are emerging from all
of this code, that could be used in a compiler to write high-level code
that results in machine-fast assembly code.

jdf

-
Jeremy D. Frens  "I'm not saying what I'm saying.  I'm not saying
Assistant Professor   what I'm thinking.  For that matter I'm not
Computer Science  even thinking what I'm thinking."
Northwestern College (Iowa)-Captain Sheridan, _Babylon 5_






RE: Haskell in Scientific Computing?

1998-10-16 Thread Jan Skibinski



On Fri, 16 Oct 1998, Dave Tweed wrote:

 On Fri, 16 Oct 1998, Simon Peyton-Jones wrote:
 
  Another approach is to compete not head-to-head on speed, but on
  cunning.   Get a good library of numeric procedures (e.g. Mathlab),
  interface them to Haskell, and use Haskell as the glue code to make
  it really fast to write complex numerical algorithms.  99% of the
  time will still be spent in the library, so the speed of the Haskell
  implementation is not very important.  This looks like a jolly productive
  line to me.

It is, but it is not so simple either. I remember some
fine points we had to consider before committing ourselves
to a certain approach when interfacing Eiffel to the NAG library
(Numerical Algorithms Group). I think it is worth citing
some of those points here to underline the complexity of
the gluing process for anything larger than a simple
one-two-three program. After all, there is a world of
difference between a program and a library - with many
potential users: cursing or loving you later. :-)


1. Choice of general interfacing mechanism, which corresponds to
   making a similar choice in Haskell: Green Card, Haskell-direct?
   We used Cecil.

2. Error handling. NAG used some global error structure,
   much as X Window does. Back then we did not worry
   much about multithreading, but still there were some
   inconsistencies between object-oriented and classical
   approaches.

3. Accuracy issues. Machine dependencies and such.
   Double? Double-Double? Interval arithmetic?

4. Breaking functions with very many arguments into
   smaller and more digestible pieces, with some default
   setups (see the sketch after this list). In an object-oriented
   approach this is quite simple, because you can use local object
   variables for this. Not so for functional programming, especially
   Haskell - unless you do not care much about user-friendliness.

5. Side effects. NAG, like many other C-based programs,
   mixes the concept of a function with that of a procedure.
   In Eiffel one can also do this, but it is considered inelegant
   and unsafe, so one strives for consistency, breaking
   a mixed beast into two steps: a procedure, followed
   by a function.  

6. The most painful stuff was related to pointers to
   functions-to-functions-to-functions. But here Haskell would
   shine, of course!
 
7. Preconditions, postconditions, invariants. Here Eiffel really
   shines, and this part was the most rewarding for all of us
   and for users as well, I hope.
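A sketch for point 4 in Haskell terms (names hypothetical, and the body
is a stub rather than a real NAG binding): a record of parameters with
a default value that callers override selectively.

    data QuadOpts = QuadOpts { absTol, relTol :: Double, maxEvals :: Int }

    defaults :: QuadOpts
    defaults = QuadOpts { absTol = 1.0e-9, relTol = 1.0e-6, maxEvals = 10000 }

    -- stub: a real version would call the foreign routine
    quad :: QuadOpts -> (Double -> Double) -> (Double, Double) -> Double
    quad _opts _f (_a, _b) = error "stub"

    -- usage: quad defaults{relTol = 1.0e-8} sin (0, pi)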
 
Most of those things became almost automatic to handle
after we gained some experience and set up some rules.
Yet there was a lot of initial sweating!

 
 I guess that this is one of those points where Jan mentioned there's a
 line between scientific & engineering computing. From (my admittedly very
 limited) experience, there's a lot of scientific computing which boils
 down to the variations on the same basic problem formulation, e.g., some
 species of system of differential equations, finding eigenvalues, etc.
 This area would be well served by being able to link to Mathlab, LINPACK,
 etc, but the amount & complexity of the gluing to be done would be
 relatively small so that working with C or Fortran wouldn't really bite
 you. So I suspect that, if you're talking about `max gain for least pain'
 in Real World applications, it's low pain but also low gain.

I guess you are right here - subject to my previous
qualification of what 'least pain' means. :-)

 
 But there's a lot of problems, probably more in the hazy region between
 science & engineering, where `numerically intensive' algorithms are
 developed which don't look anything like existing classical techniques.
 Here the issue is to generate CORRECT results REASONABLY QUICKLY, i.e., the
 time has to be within a factor of 3-4 of a C implementation, but this
 slowdown is acceptable if you are more confident your infant algorithm is
 correctly implemented in your infant code. 

I agree whole-heartedly here!

 As an example, I've got several
 variations on the standard idea of a Markov Random Field being used in my
 current work which require heavy modifications within the various solution
 schemes for MRF problems so that I can't interface to an existing MRF
 solver (although I can use ideas from looking at their source code). I've
 spent many hours searching for errors occasioned by the low-level nature
 of C;  I'd have loved it if I could have saved myself time OVERALL by
 using a functional language. 

 
 So in my opinion, although it's much more work, you'd get much more `Real
 World' usage by concentrating on rapid development scientific/engineering
 computing.

I wonder 

Re: Haskell in Scientific Computing?

1998-10-16 Thread Alex Ferguson


Various people write:
 cc:  [EMAIL PROTECTED], [EMAIL PROTECTED]

And so on.  Please, these are _not_ the correct list addresses to
use for this list -- all list mail ought to go to [EMAIL PROTECTED],
and not any of these variants.

[Glasgow people, is it possible to tweak the list config so that
email sent to these addresses is not resent, or at least to adjust
reconstructed mail headers to make it less likely that they are
generated by "reply-to" algorithms?]

Slainte,
Alex.





Re: Haskell in Scientific Computing?

1998-10-16 Thread Jan Skibinski



On Fri, 16 Oct 1998, Alex Ferguson wrote:

 
 Various people write:
  cc:  [EMAIL PROTECTED], [EMAIL PROTECTED]

I guess, I am guilty too. Sorry.

But I have a related question. Suppose I want to browse
the archive (I am afraid I lost some answers because
of overloading of our mailserver caused by an attack from
some malignant aficionado of internet advertisement).

But what I see in Glasgow archive looks like an incomplete
thread. Where is the other half of the messages? At
Yale? Where should I look for them?

Jan
 





Re: Haskell in Scientific Computing

1998-10-16 Thread Jan Skibinski



On Fri, 16 Oct 1998, John O'Donnell wrote:

 So there is another way to use functional languages: they can help you to 
 express your algorithm cleanly and simply, and they can also help you in 
 deriving a more efficient low-level version via program transformation.  If 
 you like, it's possible to transform the program all the way down to C or 
 Fortran, and at that point you don't need a Haskell compiler or graph 
 reduction at all - you just use the Fortran or C compiler.

Very clever! But how practical is it, John, at the current state
of the art? Assume nothing about my understanding of transformation
techniques. What would be the time-frame for getting such magic
usable, provided that you have found some rich uncle
(definitely not me :)) to sponsor such a project?
 
Jan







Re: Haskell in Scientific Computing

1998-10-16 Thread Rex L Page


An illustration of the Eureka phenomenon is described in
  Barasch and Page, "Parallel computation, functional programming, and
  Fortran 8x", Hypercube Multiprocessors 1986 (Michael T. Heath, ed.)
  SIAM, 1986, 57-69 
In this case, it amounted to a reversal in the order of nested loops
that wasn't spotted in the original code.

The following paper discusses the transformational approach and gives
an example of generating a clean and simple presentation of the
computation that John mentions in his note (a 35,000-line Fortran code,
rethought from scratch and expressed as 9,000 lines of literate
Miranda), with plans (never carried out) to convert back to
vector Fortran.
  Moe and Page, Experience with a large scientific application in a
  functional language, Proc 1993 Conference on Functional Programming
  Languages and Computer Architecture, Copenhagen, Denmark (June 1993)

On the negative side, my experience was that it's a real struggle to
get the people in charge of scientific computing (physicists,
chemical engineers, etc.) to believe there might be some value in
changing programming languages. The Sisal people will confirm this,
I think.

I concluded that effort invested in making functional languages
more accessible (good tools, GUI support, education and training,
sneaking in the side door like Pizza, ...) would contribute more
towards increasing the use of functional languages than attacking
the high performance area.

Rex Page
__

On Fri, 16 Oct 1998, John O'Donnell wrote:

 
 There's another way to look at the role of Haskell in scientific computing.  
 All the discussion so far is assuming that (1) you write your program in 
 Haskell, (2) you run it through a compiler, (3) you compare the speed with 
 Fortran, (4) you sigh and give up...
 
 In this picture, Haskell is being used to express the algorithm at just one 
 level of abstraction.  Now, Fortran is a very limited language, and it's only
 capable of expressing one level of abstraction.  But Haskell does *not*
 have that limitation!
 
 So there is another way to use functional languages: they can help you to 
 express your algorithm cleanly and simply, and they can also help you in 
 deriving a more efficient low-level version via program transformation.  If 
 you like, it's possible to transform the program all the way down to C or 
 Fortran, and at that point you don't need a Haskell compiler or graph 
 reduction at all - you just use the Fortran or C compiler.
 
 Naturally, this transformational approach doesn't help if you start out 
 thinking of the algorithm in a low level, first order style with lots of 
 imperative iterations over arrays.  In such cases one might as well just write 
 the program in Fortran to start with.
 
 However, there are some algorithms that are very difficult to write in Fortran 
 or C, but which can be expressed much more clearly and simply in a functional 
 style.  (A good example of this is the hierarchical radiosity algorithm.)  You 
 can write a high level executable specification, using Haskell and ghc; if the 
 performance is inadequate, then you can transform the program to a lower 
 level. The crucial point here is that the transformations are not aimed at 
 getting rid of inefficiencies caused by graph reduction or the like; the 
 transformations are aimed at making the algorithm fit better onto the 
 underlying architecture.
 
 One apparent problem with the transformational approach is that it's a lot of 
 work.  Isn't it easier just to write your program once, in a language that 
 supports only one level of abstraction, and compile it?  I think this is a 
 valid criticism (after all, *I* am the one raising it!) but there are two 
 counterarguments:
 
   -- some of the transformation steps required are straightforward and
  boring, but there may be the possibility of automating these.  After
  all, even the ghc compiler is a transformational system `under
  the hood'.  Full transformational automation has not been achieved
  yet, but work is underway.
 
   -- For some algorithms, insight is needed to transform down to a
  low level efficient implementation: "Eureka steps".  It is
  unrealistic to expect a compiler to find Eureka steps, but the
  transformational approach may make it possible to automate the
  boring steps while allowing the programmer to insert clever
  insights.  You can't do this with compilers, and you can't do
  it with languages that can express algorithms only at one level
  of abstraction.
 
 John O'Donnell
 
 





RE: Haskell in Scientific Computing?

1998-10-16 Thread R.S. Nikhil

 From: Alex Ferguson[SMTP:[EMAIL PROTECTED]]
 Various people write:
  cc:  [EMAIL PROTECTED], [EMAIL PROTECTED]
 
 And so on.  Please, these are _not_ the correct list addresses to
 use for this list -- all list mail ought to go to [EMAIL PROTECTED],
 and not any of these variants.
 
 
And if this were not confusing enough, on the Haskell
web page at http://haskell.org, it says that the Haskell
mailing list address is "[EMAIL PROTECTED]"

Nikhil