RE: ghc 4.00 seg fault on very simple program

1998-11-25 Thread Sigbjorn Finne (Intl Vendor)


Franklin Chen [EMAIL PROTECTED] writes: 
 
 I am using ghc 4.00 on i386 Linux:
 
 Linux dahmer.pscico.cs.cmu.edu 2.0.35 #1-GENERIC Thu Sep 3 
 09:20:31 EDT 1998 i686 unknown
 
 I compiled the following program by
 ghc Sieve.hs -o Sieve-h
 
 and ran it on various inputs.  It worked for small n, but seg faulted
 on larger n:
 
 314 ~/haskell chen@dahmer$ ./Sieve-h 84 > /dev/null 
 zsh: 8924 segmentation fault  ./Sieve-h 84 > /dev/null
 
 
Thanks for the report - unable to reproduce this behaviour
here though (linux (2.0.30 + libc5) and cygwin.) Which
ghc-4.00 is this, bin-dist, rpm, compiled from source?

--Sigbjorn



RE: isAlphanum or isAlphaNum, that's the question...

1998-11-25 Thread Sigbjorn Finne (Intl Vendor)


Sven Panne [EMAIL PROTECTED] writes: 
 
 The current GHC-4.00 from the CVS repository highlights a bug in the
 Haskell Library Report: There is confusion between isAlphanum and
 isAlphaNum in the report and in the compiler. Hmmm, IMHO that's a
 little bit *too* obedient to the report...   ;-)
 
 Although isAlphaNum looks nicer, previous GHCs, Hugs, NHC, and HBC
 all use isAlphanum, so this should be the right choice.
 

It has been isAlphanum since 1.3, but Haskell 98 changes that.. 
I've reverted it back to 'isAlphanum' for the time being though.
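
A minimal portability sketch for anyone caught between the two names
(assuming only that the Char library exports isAlpha and isDigit; the
local definition is illustrative, not part of any release):

import Char (isAlpha, isDigit)

-- Define the Haskell 98 name locally in terms of predicates that every
-- implementation agrees on; the explicit import list avoids a clash
-- with libraries that already export isAlphaNum.  (ASCII/Latin-1
-- oriented; Haskell 98's isAlphaNum also admits other numeric chars.)
isAlphaNum :: Char -> Bool
isAlphaNum c = isAlpha c || isDigit c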

--Sigbjorn



ghc 4.00 seg fault on very simple program

1998-11-25 Thread Franklin Chen



I am using ghc 4.00 on i386 Linux:

Linux dahmer.pscico.cs.cmu.edu 2.0.35 #1-GENERIC Thu Sep 3 09:20:31 EDT 1998 i686 
unknown

I compiled the following program by
ghc Sieve.hs -o Sieve-h

and ran it on various inputs.  It worked for small n, but seg faulted
on larger n:

314 ~/haskell chen@dahmer$ ./Sieve-h 84 > /dev/null 
zsh: 8924 segmentation fault  ./Sieve-h 84 > /dev/null

module Main (main)
where

import System

primes :: [Int]
primes = sieve [2..]
sieve (p:xs) =  p : sieve [ x | x <- xs, x `mod` p /= 0 ]

main =
  do
args <- getArgs
main2 args

main2 :: [String] -> IO ()
main2 [arg] = mapM_ print prs
  where
num = read arg
prs = take num primes
main2 _ = error "Usage: sieve num\n"

-- 
Franklin Chen mailto:[EMAIL PROTECTED]
Graduate Student  http://www.cs.cmu.edu/~chen/
Computer Science Department   Wean Hall 8218
Carnegie Mellon University



RE: No instance for `Eq (c r w)' when deriving classes for `Trans'

1998-11-25 Thread Sigbjorn Finne (Intl Vendor)


For the benefit of g-h-b listeners, this one has been
fixed in the current sources (this piece of Good News
was only relayed to Byron by SPJ, without Cc'ing the list.)

--Sigbjorn

 Byron Cook [mailto:[EMAIL PROTECTED]] writes: 
 
 Here is some code that works well in the Hugs 98 BETA, but not in
 GHC-4.00:
 
 data Trans c i r w = Trans [c r w] i [c r w] [c r w]
deriving (Eq,Show)
 
 
 GHC complains:
 Trans.hs:1:
 No instance for `Eq (c r w)'
 When deriving classes for `Trans'
 
 
 Hugs merrily works on the same code:
 Trans> Trans [Reg R (Val (5::Word32))] () [] [] == Trans [Reg 
 R (Val (5::Word32))] () [] []
 True
 Trans> 
 
 
 
 - Byron
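
For anyone stuck on a compiler whose `deriving' rejects this, one
possible stop-gap is to write the instance out by hand with the
required context.  This is only a sketch: the `Eq (c r w)' context is
beyond plain Haskell 98 and needs an extension (-fglasgow-exts on GHCs
of this era, FlexibleContexts on later ones).

{-# LANGUAGE FlexibleContexts #-}   -- or -fglasgow-exts on older GHCs

data Trans c i r w = Trans [c r w] i [c r w] [c r w]

-- The instance the deriving clause is expected to produce, written out
-- by hand; it needs Eq on the element type (c r w) and on i.
instance (Eq (c r w), Eq i) => Eq (Trans c i r w) where
  Trans as1 x1 bs1 cs1 == Trans as2 x2 bs2 cs2 =
    as1 == as2 && x1 == x2 && bs1 == bs2 && cs1 == cs2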
 



library module Directory

1998-11-25 Thread Peter Thiemann

Directory.doesFileExist returns True for directories on FreeBSD-2.2.5
built from source distribution.

The library report says:

The doesFileExist and doesDirectoryExist operations return True if the
corresponding file or directory exists. The 
operations can be used to test whether a file-path refers to a file or
to a directory. The operations will return False if the file does not
exist. 

... which is admittedly a bit vague, so one or the other should be fixed.
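
Until that is resolved, a defensive sketch using only doesFileExist and
doesDirectoryExist from the Directory library (the helper name and type
below are made up for illustration) is to check the directory case
first, so the answer does not depend on which reading of the report a
given implementation follows:

import Directory (doesDirectoryExist, doesFileExist)

data PathKind = IsFile | IsDirectory | Missing deriving (Eq, Show)

-- Classify a path by asking about the directory case first; this gives
-- the same answer whether or not doesFileExist also returns True for
-- directories on a particular platform.
pathKind :: FilePath -> IO PathKind
pathKind path = do
  isDir <- doesDirectoryExist path
  if isDir
     then return IsDirectory
     else do isFile <- doesFileExist path
             return (if isFile then IsFile else Missing)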

-peter



Re: Reduction count as efficiency measure?

1998-11-25 Thread Carl R. Witty

Keith Wansbrough [EMAIL PROTECTED] writes:

 So while Hugs gives you a reduction count (or even a millisecond
 duration), this is essentially meaningless: in a real application
 you would compile the code with an optimising compiler.  The effect
 this can have on your execution time can easily be more than merely
 a constant factor: it can change the order of your algorithm.

Is this true in practice?  That is, are there programs which have
different asymptotic running times when compiled under ghc or hbc than
when running under Hugs?

It would actually surprise me if there were; I'm having a hard time
imagining a realistic optimization that would do this.  (Which could
easily be a failure of my imagination.)

Carl Witty
[EMAIL PROTECTED]





Efficiencies of containers

1998-11-25 Thread Jan Skibinski



 The real answer, as others have pointed out, is to use a profiler,
 which performs timings on the actual code output by the compiler you
 have chosen to use.  In the end, the only benchmark that makes any sense
 is running your real application under real conditions.

Thank you and all the others that responded to my
question about "Reduction count as efficiency measure".
I understand and accept the points that have been made
about the uselessness of Hugs statistics such as
the reduction count or the timing.

Two of the answers were of particular importance to
me.
Graeme Moss suggested using Auburn when trying to decide
on what data structure to use for a particular algorithm.
I appreciate this because it seems like a practical
solution -- no matter how convoluted it might look
at first glance. I have not tried Auburn yet, but
I read Graeme's documentation and it seems that Auburn
could be of some value to me.

Chris Okasaki privately confirmed that my best guess
regarding the choice of standard lists for scalar
products, instead of arrays or random access lists, was
not so foolish after all.  

But there were a few threads in this discussion that
really worry me.

1. A mysterious profiler, first raised as a question
   by myself, and then advised back to me as a solution
   to what I want to do. Am I missing a tool that everyone
   else uses, or has such a tool not been developed
   for Hugs yet? 

2. If the reduction count in Hugs is a useless statistic,
   then what was it designed for?  

3. It has been suggested to me by several people that I
   should run my benchmarks in my final target environment,
   not in Hugs. Frankly, I really do not understand it.
   Suddenly, portability is not important here?
   Suppose that I intend to develop a library of modules
   that should be reasonably efficient on all Haskell
   platforms. Is it too foolish to try to decide on what
   data structures would be the most efficient on average
   for a particular class of algorithms?
   I am not trying to squeeze out every single flop, I am
   just trying to avoid bad choices.

4. Putting aside non-portable ByteArrays, one such
   choice is Array vs. List vs. RandomAccessList.
   For algorithms based on scalar products my guess
   was that List would be the best choice. Chris
   confirmed it. Someone else suggested otherwise
   pointing out that Arrays are supposedly optimized
   specifically for numerical applications. Sorry,
   but I do not see why they should fare better
   for algorithms that avoid indexing. As I understand
   it, Haskell Arrays are based on association lists
   and their lookups cannot possibly be faster than
   well-optimized 'head' and 'tail' operations on
   standard lists. (A small sketch contrasting the two
   styles follows after this list.)
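
For what it is worth, here is a small sketch of the two coding styles
being compared. Nothing here settles which is faster -- that still has
to be measured -- and the names dotList/dotArray are just illustrative:

import Array (Array, listArray, indices, (!))

-- List version: a single left-to-right traversal, no indexing at all.
dotList :: [Double] -> [Double] -> Double
dotList xs ys = sum (zipWith (*) xs ys)

-- Array version: explicit indexing over the element indices.
dotArray :: Array Int Double -> Array Int Double -> Double
dotArray xs ys = sum [ xs ! i * ys ! i | i <- indices xs ]

main :: IO ()
main = do
  let vs = [1 .. 1000]    :: [Double]
      ws = [2, 4 .. 2000] :: [Double]
  print (dotList vs ws)
  print (dotArray (listArray (1, 1000) vs) (listArray (1, 1000) ws))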
   
Jan
  





Re: Reduction count as efficiency measure?

1998-11-25 Thread Fergus Henderson

On 24-Nov-1998, Jan Skibinski [EMAIL PROTECTED] wrote:
 
 On Tue, 24 Nov 1998, Lennart Augustsson wrote:
 
  I'm not sure what you're trying to accomplish.  If you want to
  decide from a theoretical point of view which is the better
  algorithm then you should do something more abstract.
  If you're trying to decide which algorithm is better in a
  certain situation then you should time it in that context.
  Comparing reduction counts in Hugs will not help you much
  if you ultimately want to run your code compiled with e.g. hbc.
  Something which is better with one compiler can be worse
  on another.
 
   I am simply trying to choose the best tool for a certain
   class of algorithms. 
   Given a choice of Array, List and one of the varieties
   of Random Access List; and assuming that I am not doing
   anything really stupid implementation-wise, such as using
   indexing for Lists; the practical question is:
   which of those data structures would give me the best
   response time?

I suspect that array performance in Haskell is quite likely to be
strongly dependent on compiler optimizations.  Using Hugs for
performance comparisons may (or may not) give you reasonable results
for comparing List and Random Access List, but for comparisons with
Array I'd advise you use something that is close to what you will use
for the final implementation.
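
A rough sketch of the kind of direct measurement meant here, using only
the standard CPUTime library (the workload and the timeIt helper are
placeholders; substitute the data structure under test and compile with
the implementation you actually plan to use):

import CPUTime (getCPUTime)

-- Force a result to weak head normal form and report CPU time in
-- milliseconds (getCPUTime is specified in picoseconds).
timeIt :: String -> a -> IO ()
timeIt label x = do
  start <- getCPUTime
  x `seq` return ()
  stop  <- getCPUTime
  putStrLn (label ++ ": " ++ show ((stop - start) `div` (10 ^ 9)) ++ " ms")

main :: IO ()
main = timeIt "sum [1..100000]" (sum [1 .. 100000] :: Integer)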

-- 
Fergus Henderson [EMAIL PROTECTED]  |  "Binaries may die
WWW: http://www.cs.mu.oz.au/~fjh  |   but source code lives forever"
PGP: finger [EMAIL PROTECTED]| -- leaked Microsoft memo.





Re: Reduction count as efficiency measure?

1998-11-25 Thread Ralf Hinze

| Is this true in practice?  That is, are there programs which have
| different asymptotic running times when compiled under ghc or hbc than
| when running under Hugs?
| 
| It would actually surprise me if there were; I'm having a hard time
| imagining a realistic optimization that would do this.  (Which could
| easily be a failure of my imagination.)

What about common subexpression elimination? AFAIK neither Hugs nor GHC
nor HBC implement CSE, but if they did the asymptotic running time
could sometimes radically change. Here is a nice example several
first-year students came up with when I asked them to program `maximum'.

 maximum [a]   =  a
 maximum (a : as)  =  if a < maximum as then maximum as else a

Thus implemented `maximum' has an exponential running time. Eliminating
the double call `maximum as' yields a linear implementation.

 maximum [a]   =  a
 maximum (a : as)  =  let m = maximum as in if a < m then m else a

Does this example convince you?

Cheers, Ralf





Re: Reduction count as efficiency measure?

1998-11-25 Thread Carl R. Witty

Ralf Hinze [EMAIL PROTECTED] writes:

 | Is this true in practice?  That is, are there programs which have
 | different asymptotic running times when compiled under ghc or hbc than
 | when running under Hugs?
 | 
 | It would actually surprise me if there were; I'm having a hard time
 | imagining a realistic optimization that would do this.  (Which could
 | easily be a failure of my imagination.)
 
 What about common subexpression elimination? AFAIK neither Hugs nor GHC
 nor HBC implement CSE, but if they did the asymptotic running time
 could sometimes radically change. 

OK, so my imagination is severely lacking. :-)

I'm still curious about my first question, though, about the specific
optimizations included in ghc and hbc.  If in fact they don't do CSE,
are there optimizations which they do perform which would change the
asymptotic running time?

Carl Witty
[EMAIL PROTECTED]