[Haskell-cafe] how do you debug programs?

2006-09-06 Thread Tamas K Papp
Hi,

I would like to learn a reasonable way (ie how others do it) to debug
programs in Haskell.  Is it possible to see what's going on when a
function is evaluated?  Eg in

f a b = let c = a + b
            d = a * b
        in c + d

evaluating 

f 1 2

would output something like

f called with values 1 2
c is now (+) a b = (+) 1 2 = 3
d is now (*) a b = (*) 1 2 = 2
...

Or maybe I am thinking the wrong way, and functional programming has
its own debugging style...

Thanks,

Tamas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how do you debug programs?

2006-09-06 Thread Pepe Iborra

Hi Tamas

There are several ways to debug a Haskell program.

The most advanced ones are based on offline analysis of traces; I
think Hat [1] is the most up-to-date tool for this. There is a Windows
port of Hat at [5].

Another approach is simply to use Debug.Trace. A more powerful
alternative in the same spirit is Hood [2]. Even though it hasn't been
updated in some time, Hood works perfectly with the current GHC
distribution. What's more, Hugs already has it integrated [3]. You can
simply import Observe and use observations directly in your program.
For instance:

import Observe

f' = observe "f" f
f a b = ...

And then in hugs the expression:

f' 1 2


would output what you want.
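
For the Debug.Trace route mentioned above, a minimal sketch against the
same f looks like this (the trace messages and the main driver are
illustrative, not from the original mail); note that trace prints its
string only when its second argument is demanded, so the output follows
lazy evaluation order rather than the source order:

    import Debug.Trace (trace)

    f :: Int -> Int -> Int
    f a b = trace ("f called with " ++ show a ++ " " ++ show b) (c + d)
      where
        -- each trace fires when (and only when) the value is demanded
        c = trace ("c = " ++ show (a + b)) (a + b)
        d = trace ("d = " ++ show (a * b)) (a * b)

    main :: IO ()
    main = print (f 1 2)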

Finally, the GHCi debugger project [4] aims to bring dynamic
breakpoints and observation of intermediate values to GHCi in the near
future. Right now the tool is only available from the site as a
modified version of GHC, so unfortunately you will have to compile it
yourself if you want to try it.

Cheers
pepe


1. www.haskell.org/hat
2. www.haskell.org/hood
3. http://cvs.haskell.org/Hugs/pages/users_guide/observe.html
4. http://haskell.org/haskellwiki/GHC/GHCiDebugger
5. http://www-users.cs.york.ac.uk/~ndm/projects/windows.php

On 06/09/06, Tamas K Papp [EMAIL PROTECTED] wrote:

Hi,

I would like to learn a reasonable way (ie how others do it) to debug
programs in Haskell.  Is it possible to see what's going on when a
function is evaluated?  Eg in

f a b = let c = a + b
            d = a * b
        in c + d

evaluating

f 1 2

would output something like

f called with values 1 2
c is now (+) a b = (+) 1 2 = 3
d is now (*) a b = (*) 1 2 = 2
...

Or maybe I am thinking the wrong way, and functional programming has
its own debugging style...

Thanks,

Tamas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how do you debug programs?

2006-09-06 Thread Donald Bruce Stewart
mnislaih:
 Hi Tamas
 
 There are several ways to debug a Haskell program.
 
 The most advanced ones are based in offline analysis of traces, I
 think Hat [1] is the most up-to-date tool for this. There is a Windows
 port of Hat at [5].
 
 Another approach is to simply use Debug.Trace. A more powerful
 alternative for this approach is Hood [2]. Even if it hasn't been
 updated in some time, Hood works perfectly with the current ghc
 distribution. Even more, Hugs has it already integrated [3]. You can
 simply import Observe and use observations directly in your program.
 For instance:
 
 import Observe
 
 f' = observe "f" f
 f a b = ...
 
 And then in hugs the expression:
 f' 1 2
 
 would output what you want.
 
 Finally, the GHCi debugger project [4] aims to bring dynamic
 breakpoints and intermediate values observation to GHCi in a near
 future. Right now the tool is only available from the site as a
 modified version of GHC, so unfortunately you will have to compile it
 yourself if you want to try it.

Pepe, would you like to put up a page on the haskell.org wiki about
debugging in Haskell? You could use the above mail as a start :)

-- Don



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
Tamas K Papp [EMAIL PROTECTED] writes:

 I would like to learn a reasonable way (ie how others do it) to debug
 programs in Haskell.  Is it possible to see what's going on when a
 function is evaluated?  Eg in
 
 f a b = let c = a+b
   d = a*b
   in c+d
 
 evaluating 
 
 f 1 2
 
 would output something like
 
 f called with values 1 2
 c is now (+) a b = (+) 1 2 = 3
 d is now (*) a b = (*) 1 2 = 2

But why should c and d exist at runtime? They're only used
once each, so the compiler is free to replace f with 

\a b -> (a+b) + a*b

and indeed, it's quite likely that f should disappear
too. So if your debugger outputs what you want, it's not
debugging the programme that you would otherwise have run.

 Or maybe I am thinking the wrong way, and functional programming has
 its own debugging style...

I've said this before, but I think it's worth repeating: in
most cases, if you need to debug your programme it's because
it's too complicated for you to understand, so the correct
thing to do is to simplify it and try again. That's not to
say that I don't think you should test programmes... in fact
I'm all for testing little bits. You can define f above and
try it on a few arguments in ghci or hugs, and you can use
QuickCheck and/or HUnit to automate this. But there's no
reason for you to care what happens while f is evaluating.

Remember there's very little (often no) overhead for
defining subsidiary functions, so break complicated stuff
into parts you can understand.
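
For example, a property for the f above can be checked from GHCi or a
tiny driver like this (a sketch only; the property name and the
reference expression are mine, not from the thread):

    import Test.QuickCheck (quickCheck)

    f :: Int -> Int -> Int
    f a b = let c = a + b
                d = a * b
            in c + d

    -- the property just restates what f should compute; QuickCheck
    -- feeds it random pairs of arguments
    prop_f :: Int -> Int -> Bool
    prop_f a b = f a b == (a + b) + a * b

    main :: IO ()
    main = quickCheck prop_f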



-- 
Jón Fairbairn [EMAIL PROTECTED]
http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html  (updated 2006-07-14)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Neil Mitchell

Hi


But why should c and d exist at runtime? They're only used
once each, so the compiler is free to replace f with

\a b -> (a+b) + a*b


Yes, and if the compiler is this clever, it should also be free to
replace them back at debug time.



I've said this before, but I think it's worth repeating: in
most cases, if you need to debug your programme it's because
it's too complicated for you to understand, so the correct
thing to do is to simplify it and try again.


That makes it impossible to ever improve - initially all programs are
beyond you, and only by pushing the boundary do you get anywhere. As
for me, I always write programs I can't understand, and that no one
else can. Perhaps I understood it when I originally wrote it, perhaps
not, but that was maybe over a year ago.

It's been my experience that debugging is a serious weakness of
Haskell - where even the poor man's printf debugging changes the
semantics! And everyone comes up with arguments why there is no need
to debug a functional language - that sounds more like excuses about
why we can't build a good lazy debugger :)

[Sorry for the slight rant, but I've used Visual Studio C++ so I know
what a good debugger looks like, and how indispensable they are]

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Lennart Augustsson
I've also used Visual Studio, and I wouldn't mind having something  
like that for Haskell.  But I have to agree with Jon, I think the  
best way of debugging is to understand your code.  I think people who  
come from imperative programming come with a mind set that you  
understand your code by stepping through it in the debugger.  But I  
find this paradigm much less useful for functional code.


-- Lennart

On Sep 6, 2006, at 06:22 , Neil Mitchell wrote:


Hi


But why should c and d exist at runtime? They're only used
once each, so the compiler is free to replace f with

\a b -> (a+b) + a*b


Yes, and if the compiler is this clever, it should also be free to
replace them back at debug time.



I've said this before, but I think it's worth repeating: in
most cases, if you need to debug your programme it's because
it's too complicated for you to understand, so the correct
thing to do is to simplify it and try again.


That makes it impossible to ever improve - initially all programs are
beyond you, and only by pushing the boundary do you get anywhere. As
for me, I always write programs I can't understand, and that no one
else can. Perhaps I understood it when I originally wrote it, perhaps
not, but that was maybe over a year ago.

It's been my experience that debugging is a serious weakness of
Haskell - where even the poor mans printf debugging changes the
semantics! And everyone comes up with arguments why there is no need
to debug a functional language - that sounds more like excuses about
why we can't build a good lazy debugger :)

[Sorry for the slight rant, but I've used Visual Studio C++ so I know
what a good debugger looks like, and how indispensable they are]

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how do you debug programs?

2006-09-06 Thread Pepe Iborra

Thanks for the suggestion Don,

I started the wiki page at http://haskell.org/haskellwiki/Debugging

On 06/09/06, Donald Bruce Stewart [EMAIL PROTECTED] wrote:

mnislaih:
 Hi Tamas

 There are several ways to debug a Haskell program.

 The most advanced ones are based in offline analysis of traces, I
 think Hat [1] is the most up-to-date tool for this. There is a Windows
 port of Hat at [5].

 Another approach is to simply use Debug.Trace. A more powerful
 alternative for this approach is Hood [2]. Even if it hasn't been
 updated in some time, Hood works perfectly with the current ghc
 distribution. Even more, Hugs has it already integrated [3]. You can
 simply import Observe and use observations directly in your program.
 For instance:

 import Observe

 f' = observe "f" f
 f a b = ...

 And then in hugs the expression:
 f' 1 2

 would output what you want.

 Finally, the GHCi debugger project [4] aims to bring dynamic
 breakpoints and intermediate values observation to GHCi in a near
 future. Right now the tool is only available from the site as a
 modified version of GHC, so unfortunately you will have to compile it
 yourself if you want to try it.

Pepe, would you like to put up a page on the haskell.org wiki about
debugging in Haskell? You could use the above mail as a start :)

-- Don





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Tamas K Papp
On Wed, Sep 06, 2006 at 06:33:32AM -0400, Lennart Augustsson wrote:
 I've also used Visual Studio, and I wouldn't mind having something  
 like that for Haskell.  But I have to agree with Jon, I think the  
 best way of debugging is to understand your code.  I think people who  
 come from imperative programming come with a mind set that you  
 understand your code by stepping through it in the debugger.  But I  
 find this paradigm much less useful for functional code.

At this point, I need debugging not because I don't understand my
code, but because I don't understand Haskell ;-) Most of the mistakes
I make are related to indentation and precedence (I need to remember that
function application binds tightly).  The compiler and the type system
catch some mistakes, but a few remain.

Thanks for the suggestions.

Tamas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread Tamas K Papp
Hi,

I have a question about coding and compilers.  Suppose that a function
is invoked with the same parameters inside another function declaration, eg

-- this example does nothing particularly meaningful
g a b c = let something1 = f a b
              something2 = externalsomething (f a b) 42
              something3 = externalsomething2 (137 * (f a b))
          in ...

Does it help (performancewise) to have

g a b c = let resultoff = f a b
              something2 = externalsomething resultoff 42
              something3 = externalsomething2 (137 * resultoff)
          in ...

or does the compiler perform this optimization?  More generally, if a
function is invoked with the same parameters again (and it doesn't
involve anything like monads), does it make sense
(performance-wise) to store the result somewhere?

Thank you,

Tamas

PS: I realize that I am asking a lot of newbie questions, and I would
like to thank everybody for their patience and elucidating replies.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: installing modules

2006-09-06 Thread Stephane Bortzmeyer
On Wed, Sep 06, 2006 at 02:19:14PM +0200,
 Tamas K Papp [EMAIL PROTECTED] wrote 
 a message of 23 lines which said:

 Cleanly means that it ends up in the right directory.  How do
 people usually do that?  I am using ghc6 on debian testing.

Follow the usual cabal routine (everything is in the ghc6 package):

cd chart
runghc Setup.hs configure
runghc Setup.hs build
runghc Setup.hs install

 Am I supposed to use the thing called cabal? 

Yes and the stuff I quoted above is taken verbatim from the 'chart'
page.

 Is it perhaps possible to create a debian package from sources that
 were packaged with this tool?

It is certainly possible but, AFAIK, nobody has done it yet.

The Debian-Haskell mailing list is probably the right place to ask:

http://urchin.earth.li/mailman/listinfo/debian-haskell
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread Alex Queiroz

Hallo,

On 9/6/06, Tamas K Papp [EMAIL PROTECTED] wrote:


PS: I realize that I am asking a lot of newbie questions, and I would
like to thank everybody for their patience and elucidating replies.


   I am a newbie myself (second week of learning Haskell), but I'll
give it a shot: Since functions have no side effects, the compiler
executes the function only once.

--
-alex
http://www.ventonegro.org/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Stephane Bortzmeyer
[Warnings: newbie and not having tested or read the generated assembly
code.]

On Wed, Sep 06, 2006 at 09:32:07AM -0300,
 Alex Queiroz [EMAIL PROTECTED] wrote 
 a message of 18 lines which said:

I am a newbie myself (second week of learning Haskell), but I'll
 give it a shot: Since functions have no side effects, the compiler
 executes the function only once.

Sure, pure programming languages *allow* the compiler to perform
such optimizations, but that does not mean they are *actually* done
(promises are cheap but writing compiler code is expensive).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Lennart Augustsson
Furthermore, doing that optimization (common subexpression  
elimination) can lead to space leaks.  So you should not count on the  
compiler doing it.  Besides, I often find it more readable and less  
error prone to name a common subexpression; only one place to change  
when you need to change the call.


-- Lennart

On Sep 6, 2006, at 08:38 , Stephane Bortzmeyer wrote:


[Warnings: newbie and not having tested or read the generated assembly
code.]

On Wed, Sep 06, 2006 at 09:32:07AM -0300,
 Alex Queiroz [EMAIL PROTECTED] wrote
 a message of 18 lines which said:


   I am a newbie myself (second week of learning Haskell), but I'll
give it a shot: Since functions have no side effects, the compiler
executes the function only once.


Sure, pure programming languages *allow* the compiler to perform
such optimizations but it does not mean it is *actually* done
(promises are cheap but writing compiler code is expensive).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Alex Queiroz

Hallo,

On 9/6/06, Lennart Augustsson [EMAIL PROTECTED] wrote:

Furthermore, doing that optimization (common subexpression
elimination) can lead to space leaks.  So you should not count on the
compiler doing it.  Besides, I often find it more readable and less
error prone to name a common subexpression; only one place to change
when you need to change the call.



So there are no performance benefits from the no-side-effects
property after all. Pity.

--
-alex
http://www.ventonegro.org/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Stephane Bortzmeyer
On Wed, Sep 06, 2006 at 02:12:28PM +0100,
 Sebastian Sylvan [EMAIL PROTECTED] wrote 
 a message of 36 lines which said:

 I think most compilers actually do CSE 

And automatic memoization? Common subexpressions can be
difficult to recognize but, at run-time, it is much easier to
recognize a function call that has already been made. Does any common
compiler / interpreter automatically memoize? In theory, it is
also a huge advantage of pure functional programming languages.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread David Roundy
On Wed, Sep 06, 2006 at 03:26:29PM +0200, Stephane Bortzmeyer wrote:
 On Wed, Sep 06, 2006 at 02:12:28PM +0100,
  Sebastian Sylvan [EMAIL PROTECTED] wrote 
  a message of 36 lines which said:
 
  I think most compilers actually do CSE 
 
 And automatic memoization? Because common subexpressions can be
 difficult to recognize but, at run-time, it is much easier to
 recognize a function call that has already been done. Any common
 compiler / interpreter which automatically memoizes? In theory, it is
 also a huge advantage of pure functional programming languages.

Have you even considered the space costs of this? For almost any
non-trivial bit of code I've written, automatic memoization would
result in a completely useless binary, as it almost guarantees
horrific space leaks.  In order to do this, you'd need to have some
rather sophisticated analysis to figure out the memory costs of both
the output of a function, and its input, both of which would need to
be stored in order to memoize the thing.

Before something like this were to be implemented, I'd rather see
automatic strictifying for avoidance of stack overflows (i.e. when the
stack starts filling up, start evaluating things strictly), which is
most likely another insanely difficult bit of compiler code, which
would also involve figuring out which code paths would save memory,
and which would increase the memory use.
-- 
David Roundy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Sebastian Sylvan

On 9/6/06, Sebastian Sylvan [EMAIL PROTECTED] wrote:

On 9/6/06, Stephane Bortzmeyer [EMAIL PROTECTED] wrote:
 On Wed, Sep 06, 2006 at 02:12:28PM +0100,
  Sebastian Sylvan [EMAIL PROTECTED] wrote
  a message of 36 lines which said:

  I think most compilers actually do CSE

 And automatic memoization? Because common subexpressions can be
 difficult to recognize but, at run-time, it is much easier to
 recognize a function call that has already been done. Any common
 compiler / interpreter which automatically memoizes? In theory, it is
 also a huge advantage of pure functional programming languages.

Not that I'm aware of. I don't think it's very easy to do well. The
compiler would have to keep a buffer of input/output pairs for each
function (even ones which never get called with the same input twice),
which eats up memory - it's probably very difficult to statically say
anything about the frequency of certain inputs to functions, so you'd
have to store caches for every function in the program.

If you have a function with a certain range which is used often, it's
very easy to do memoization yourself in Haskell (untested!):

import Data.Array

fac n | n <= 100  = facArr ! n
      | otherwise = fac' n
  where fac' x = product [1..x]
        facArr = listArray (1,100) (map fac' [1..100])



Sorry, facArr needs to be top-level, otherwise it probably won't
retain its values. So something like:
module Fac (fac) where -- we don't export fac', only fac

import Data.Array

fac n | n <= 100  = facArr ! n
      | otherwise = fac' n

fac' x = product [1..x]
facArr = listArray (1,100) (map fac' [1..100])
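
For completeness, a possible driver for the module above (this Main
module is illustrative, not part of the original mail, and assumes the
code above is saved as Fac.hs):

    module Main where

    import Fac (fac)

    -- arguments up to 100 are served from the lazily filled array;
    -- larger ones fall back to the direct product
    main :: IO ()
    main = mapM_ (print . fac) [10, 100, 200]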

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread Donald Bruce Stewart
tpapp:
 Hi,
 
 I have a question about coding and compilers.  Suppose that a function
 is invoked with the same parameters inside another function declaration, eg
 
 -- this example does nothing particularly meaningless
 g a b c = let something1 = f a b
 something2 = externalsomething (f a b) 42
 something3 = externalsomething2 (137 * (f a b)) in
 ...
 
 Does it help (performancewise) to have
 
 g a b c = let resultoff = f a b
 something2 = externalsomething resultoff 42
 something3 = externalsomething2 (137 * resultoff) in
 ...
 
 or does the compiler perform this optimization?  More generally, if a
 function is invoked with the same parameters again (and it doesn't
 involve anything like monads), does does it makes sense
 (performancewise) to store the result somewhere?
 

on the wiki,
http://www.haskell.org/haskellwiki/Performance/GHC#Common_subexpressions

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Stephane Bortzmeyer
On Wed, Sep 06, 2006 at 09:44:05AM -0400,
 David Roundy [EMAIL PROTECTED] wrote 
 a message of 33 lines which said:

 Have you even considered the space costs of this? For almost any
 non-trivial bit of code I've written, automatic memoization would
 result in a completely useless binary, as it almost guarantees
 horrific space leaks.

And what about a compile-time option, where the programmer explains which
functions should be memoized?

ghc -fmemoize=foo,bar darcs.hs

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Ketil Malde
Tamas K Papp [EMAIL PROTECTED] writes:

 Most of the mistakes I make are related to indentation, 

I use Emacs, which has a reasonably decent mode for this.  Hit TAB
repeatedly to show the possible indentations.

 precedence (need to remember that function application binds
 tightly).

It's not always intuitive, but at least there is a fixed rule that's
easy to remember.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Stefan Monnier
 Purely functional does give you some performance benefits, though.

Nice theory.  People don't use functional languages for their performance,
mind you.  All the optimizations that are supposedly made possible by having
a pure functional language tend to be either not quite doable (because of
non-termination making the language a bit less pure) or simply too hard: the
difficulty being to decide when/where the optimization is indeed going to
improve performance rather than worsen it.

It's much too difficult for a compiler to figure out which functions might
benefit from memoization (and with which specific form of memoization).
Especially compared with how easy it is for the programmer to do the
memoization explicitly.


Stefan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
Neil Mitchell [EMAIL PROTECTED] writes:

 Hi
 
  But why should c and d exist at runtime? They're only used
  once each, so the compiler is free to replace f with
 
  \a b -> (a+b) + a*b
 
 Yes, and if the compiler is this clever, it should also be free to
 replace them back at debug time.

And where does that get us? You snipped the salient bit
where I said that you'd be debugging a different programme.

If the debugger serialises stuff and refers to variables, it
reinforces an imperative view of the world and supports the
erroneous view of variables as things that exist at
runtime. AFAIK, no implementation nowadays compiles to pure
combinators, but such would still be a valid implementation
of Haskell.  You need to develop a way of thinking about
programmes that produces insights that apply to Haskell, not
to a particular implementation.

 
  I've said this before, but I think it's worth repeating: in
  most cases, if you need to debug your programme it's because
  it's too complicated for you to understand, so the correct
  thing to do is to simplify it and try again.
 
 That makes it impossible to ever improve - initially all programs are
 beyond you, and only by pushing the boundary do you get anywhere.

I offer myself as a counter example. I'm not the world's
best Haskell programmer (that might be Lennart? ;-), but I
understand it pretty well, I think. At no point have I ever
used a debugger on a Haskell programme. So clearly not
impossible (I didn't come into existence a fully fledged
functional programmer, which is the only other possibility
admitted by your argument).

 As for me, I always write programs I can't understand, and
 that no one else can. Perhaps I understood it when I
 originally wrote it, perhaps not, but that was maybe over
 a year ago.

No comment.

 It's been my experience that debugging is a serious weakness of
 Haskell - where even the poor mans printf debugging changes the
 semantics! And everyone comes up with arguments why there is no need
 to debug a functional language - that sounds more like excuses about
 why we can't build a good lazy debugger :)

It may sound like that to you, but the arguments why
debugging is a bad thing are quite strong.  There are almost
certainly bugs in my Haskell code. Would a debugger help me
find them? Not in the least, because none of the testcases
I've used provokes them. Would it help when one is provoked?
I don't think so, because either the logic is right and
there's a slip up in the coding, which is usually easy to
find from the new testcase, or the logic is tortuous and
needs replacement.

 [Sorry for the slight rant, but I've used Visual Studio C++ so I know
 what a good debugger looks like, and how indispensable they are]

So I'm ranting back at you.  I've used debuggers on C and
other old languages (particularly when writing in assembler)
and indeed they are useful when the language provides so
many opportunities for mistakes. When writing Haskell I've
never felt the need.

Here's a story:

A long time ago, when I was an undergraduate, I encountered
a visitor to the Computing Service who often asked for help
with the idiosyncrasies of the local system. I was happy to
help, though somewhat bemused by all the muttering at his
terminal while he was programming, which involved some sort
of debugging, for sure.  After some weeks he finished his
programme and whatever kept him in Cambridge and
disappeared.  Being curious, I dug (strictly against
regulations) a listing of his programme out of the scrap
bin. It was in PL/I, and several millimetres thick. It took
me quite a while (hours, though, not weeks) to work out what
the programme did, but eventually I understood it well
enough that I could write a version in Algol68 (my favourite
language at the time).  It came to one sheet of listing
paper.  This wasn't because A68 is more expressive than PL/I
(it is, but not by that much), it was because I understood
the problem before I started to write the code.

Now, you could argue that the first programmer spent most of
his time working out what the problem was (it might even be
true, but given that it boiled down to 1 page of A68, I'm
not sure), but my point is that if you proceed by debugging
rather than rewriting, you are likely to end up with this
sort of mess. Personally, I don't mind too much if that kind
of programmer finds Haskell too hard. Elitist? Certainly!
Immorally so? No.

-- 
Jón Fairbairn [EMAIL PROTECTED]
http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html  (updated 2006-07-14)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread John Hughes



On 9/6/06, Tamas K Papp [EMAIL PROTECTED] wrote:
 


or does the compiler perform this optimization?  More generally, if a
function is invoked with the same parameters again (and it doesn't
involve anything like monads), does does it makes sense
(performancewise) to store the result somewhere?
   



I was wondering something like this too, and then I found this email:
http://www.haskell.org/pipermail/glasgow-haskell-bugs/2004-December/004530.html

So I guess it is as Stephane said: theoretically possible but not actually done?

--eric the perpetual newbie

 

The trouble is that this isn't always an optimisation. Try these two 
programs:


powerset [] = [[]]
powerset (x:xs) = powerset xs++map (x:) (powerset xs)

and

powerset [] = [[]]
powerset (x:xs) = pxs++map (x:) pxs
 where pxs = powerset xs

Try computing length (powerset [1..n]) with each definition. For small 
n, the second is faster. As n gets larger, the second gets slower and 
slower, but the first keeps chugging along. The problem is that the 
second has exponentially higher peak memory requirements than the first. 
Round about n=25, on my machine, all other programs stop responding 
while the second one runs. You don't really want a compiler to make that 
kind of pessimisation to your program, which is why it's a deliberate 
decision to leave most CSE to the programmer. You can still write the 
second version, and suffer the consequences, but at least you know it's 
your own fault!


John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread Neil Mitchell

Hi


All the optimizations that are supposedly made possible by having
a pure functional language tend to be either not quite doable (because of
non-termination making the language a bit less pure) or simply too hard: the
difficulty being to decide when/where the optimization is indeed going to
improve performance rather than worsen it.


There are an absolute ton of optimisations that aren't possible in a
non-pure language. In a strict language, for example, inlining is not
always valid, while in a lazy language it is. I suggest you take
a look at GRIN.


It's much too difficult for a compiler to figure out which functions might
benefit from memoization (and with which specific form of memoization).
Especially compared with how easy it is for the programmer to do the
memoization explicitly.


The reason is more likely that the number of functions which benefit
from memoization is pretty low, and the number where it
harms performance is pretty high.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Neil Mitchell

Hi


 Yes, and if the compiler is this clever, it should also be free to
 replace them back at debug time.

And where does that get us? You snipped the salient bit
where I said that you'd be debugging a different programme.


In Visual C there are two compilation modes. In debug mode, if you
create a variable i, it WILL be there in the compiled version. It
won't be merged with another variable. It won't be constant folded. It
won't have its value globally computed and stored. It won't be placed
only in a register. In release mode all these things can (and do)
happen. I'm not saying Haskell compilers aren't free to optimize, but
maybe there is a need for one with this debug-mode style.


At no point have I ever
used a debugger on a Haskell programme.


Because you couldn't find a working debugger? ;) If one had been
there, just a click away, would you never have been tempted?



It may sound like that to you, but the arguments why
debugging is a bad thing are quite strong.  There are almost
certainly bugs in my Haskell code. Would a debugger help me
find them? Not in the least, because none of the testcases
I've used provokes them. Would it help when one is provoked?
I don't think so, because either the logic is right and
there's a slip up in the coding, which is usually easy to
find from the new testcase, or the logic is tortuous and
needs replacement.


Let's take for example a bug I spent time tracking down in Haskell this
weekend. The bug can be summarized as "Program error: pattern match
failure: head []". And indeed, that's all you get. A quick grep reveals
there are about 60 calls to head in the program. In this instance I
have the choice between 1) crying, 2) a debugger, 3) a lot of hard
work. [Of course, knowing this situation arises, I had already
prepared my get-out plan, so it wasn't a massive problem]

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
Tamas K Papp [EMAIL PROTECTED] writes:

 On Wed, Sep 06, 2006 at 06:33:32AM -0400, Lennart Augustsson wrote:
  I've also used Visual Studio, and I wouldn't mind having something  
  like that for Haskell.  But I have to agree with Jon, I think the  
  best way of debugging is to understand your code.  I think people who  
  come from imperative programming come with a mind set that you  
  understand your code by stepping through it in the debugger.  But I  
  find this paradigm much less useful for functional code.
 
 At this point, I need debugging not because I don't understand my
 code, but because I don't understand Haskell ;-) Most of the mistakes
 I make are related to indentation, precedence (need to remember that
 function application binds tightly).  The compiler and the type system
 catches some mistakes, but a few remain.

I don't think you need a debugger for that. In addition to
what Lennart and Ketil have said, I think you need to take
on board the usefulness of breaking functions up into small
pieces that can be run at the GHCi or Hugs command line. And
if you aren't sure about precedence, you can type something
that demonstrates it, like this sort of thing:

   *Main> let f = (+1)
   *Main> f 1 * 2
   4

(That particular example only works in GHCi, but you could use
let f = (+1) in f 1 * 2
in Hugs)

On top of that, in GHCi you can do this:

   *Main> :info +
   class (Eq a, Show a) => Num a where
     (+) :: a -> a -> a
     ...
        -- Imported from GHC.Num
   infixl 6 +
   *Main> :info *
   class (Eq a, Show a) => Num a where
     ...
     (*) :: a -> a -> a
     ...
        -- Imported from GHC.Num
   infixl 7 *
   *Main>

Note in particular the two infixl lines, which say that both
operators bind to the left and that “*” binds more tightly
than “+”.

And there's always Hoogle (and the documentation it links
to) if you want to find something of a particular
type. Kudos to Neil for producing this, by the way (though I
do hope his remarks about the readability of his code were
more self-deprecating than accurate).

-- 
Jón Fairbairn [EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Andrae Muys
On 06/09/2006, at 8:22 PM, Neil Mitchell wrote:

 It's been my experience that debugging is a serious weakness of
 Haskell - where even the poor man's printf debugging changes the
 semantics! And everyone comes up with arguments why there is no need
 to debug a functional language - that sounds more like excuses about
 why we can't build a good lazy debugger :)

 [Sorry for the slight rant, but I've used Visual Studio C++ so I know
 what a good debugger looks like, and how indispensable they are]

I simply can't let this pass without comment.  It's irrelevant whether
you're using a functional or imperative language: debuggers are invariably
a waste of time.  The only reason to use a debugger is because you need to
inspect the contents of a process's address space; so either you're using
it as a disassembler, or you're using it to examine the consequences of
heap/stack corruption.  Consequently, if you're using Java, C#, Scheme,
Haskell, Erlang, Smalltalk, or any one of a myriad of languages that don't
permit direct memory access, there's no reason for you to be using a
debugger.

Jon understates it by implying this is a Functional/Haskell-specific
quality - it's not.  Debuggers stopped being useful the day we finally
delegated pointer handling to the compiler/VM author and got on with
writing code that actually solves real problems.

It's just that historically functional programmers have tended to already
be experienced programmers who realise this.  Why would they waste their
time building a tool that no-one needs?

It's a truism to say that if your code doesn't work it's because you don't
understand it; clearly if you did understand it, you wouldn't have included
the bug that's causing you difficulty.  Therefore either

1) The code is poorly structured and you need to restructure it to better
represent your understanding of the problem

or

2) Your understanding of the problem is flawed, so you need to sit back and
reexamine your thinking on this problem in light of the counter-example you
have found (the bug).

Spending your time tracing through individual lines of code is
counter-productive in both cases.

Andrae Muys

P.S. It is worth noting that I am here talking about the sort of debugger
raised in the original post.  I am not talking about using a separate tool
to extract a stacktrace from a core file in a C/C++ program or equivalent -
I'm talking about runtime debugging with variable watches, breakpoints, and
line-by-line stepping.

-- 
Andrae Muys
[EMAIL PROTECTED]
Principal Kowari Consultant
Netymon Pty Ltd

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
Neil Mitchell [EMAIL PROTECTED] writes:

 Hi
 
   Yes, and if the compiler is this clever, it should also be free to
   replace them back at debug time.
 
  And where does that get us? You snipped the salient bit
  where I said that you'd be debugging a different programme.
 
 In Visual C there are two compilation modes. In debug mode, if you
 create a variable i it WILL be there in the compiled version. It
 won't be merged with another variable. It won't be constant folded. It
 won't have its value globablly computed and stored. It won't be placed
 only in the register. In release mode all these things can (and do)
 happen. I'm not saying Haskell compilers aren't free to optimize, but
 maybe there is a need for one that is this debug mode style.

I think this chiefly indicates that you think of variables
as things, which in Haskell they are not.

  At no point have I ever
  used a debugger on a Haskell programme.
 
 Because you couldn't find a working debugger? ;) If one had been
 there, just a click away, would you never have been tempted?

Not in the least. The highest level language I've ever
wanted to use a debugger on was C, and that was for
debugging a pointer-reversing garbage collector, which ties
in with what Andrae says.

 Let take for example a bug I spent tracking down in Haskell this
 weekend. The bug can be summarized as Program error: pattern match
 failure: head []. And indeed, thats all you get. A quick grep reveals
 there are about 60 calls to head in the program. In this instance I
 have the choice between 1) crying, 2) a debugger, 3) a lot of hard
 work. [Of course, knowing this situation arises, I had already
 prepared my getout plan, so it wasn't a massive problem]

H'm.  I've never been completely convinced that head should
be in the language at all.  If you use it, you really have
to think, every time you type in the letters h-e-a-d, “Am I
/really/ /really/ sure the argument will never be []?”
Otherwise you should be using a case expression or (more
often, I think) a subsidiary function with a pattern match.

Most of the occurrences of “head” in my code seem to be
applied to infinite lists, or commented out, or rebindings
of head to something else, sloppy bits of nonce programming
or references in comments to the top of my own noddle.
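
For example, the subsidiary-function alternative to head can be as
small as this (the name firstOr is illustrative, not from the thread):

    -- make the empty case explicit instead of hoping it never happens
    firstOr :: a -> [a] -> a
    firstOr def []    = def
    firstOr _   (x:_) = x

    -- e.g. firstOr 0 []      == 0
    --      firstOr 0 [3,4,5] == 3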

-- 
Jón Fairbairn [EMAIL PROTECTED]
http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html  (updated 2006-07-14)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Neil Mitchell

Hi

I think that when it comes to debuggers and Haskell it's fairly safe to say:

1) There aren't any which are production quality, regularly work out
of the box and are available without much effort. There may be ones
which are debuggers (Hat/GHCi breakpoints), but we probably haven't
got to the stage where they are polished (being exceptionally vague
about what that means).

2) We disagree if they would be useful. I say yes. Others say no. I
guess this is an academic argument until we fix 1, and I guess no one
who says no to 2 is going to bother fixing 1.


H'm.  I've never been completely convinced that head should
be in the language at all.  If you use it, you really have
to think, every time you type in the letters h-e-a-d, Am I
/really/ /really/ sure the argument will never be []?
Otherwise you should be using a case expression or (more
often, I think) a subsidiary function with a pattern match.


I disagree (again...) - head is useful. And as you go on to say, if
you apply it to an infinite list, who cares that you used head. head is
only unsafe on an empty list. So the problem then becomes: can you
detect unsafe calls to head and let the programmer know?

Answer: not yet, but again, it's being worked on:

http://www.cl.cam.ac.uk/~nx200/research/escH-hw.ps

http://www-users.cs.york.ac.uk/~ndm/projects/catch.php

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Stefan Monnier
 I simply can't let this pass without comment.  It's irrelevant if you're
 using a functional or imperative language, debuggers are invariably
 a waste of time.  The only reason to use a debugger is because you need
 to inspect the contents of a processes address-space;

That's a very narrow definition of debugger.


Stefan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Pepe Iborra
On 06/09/2006, at 17:10, Andrae Muys wrote:

 I simply can't let this pass without comment.  It's irrelevant if you're
 using a functional or imperative language, debuggers are invariably a
 waste of time.  The only reason to use a debugger is because you need to
 inspect the contents of a process's address space; so either you're using
 it as a disassembler, or you're using it to examine the consequences of
 heap/stack corruption.  Consequently, if you're using Java, C#, Scheme,
 Haskell, Erlang, Smalltalk, or any one of a myriad of languages that don't
 permit direct memory access, there's no reason for you to be using a
 debugger.

You seem to base everything on the assumption that a debugger is a program
that lets you, and I quote your words below, "trace through individual
lines of code".

A debugger in the sense that this post regards it is any kind of program
that helps you to understand a piece of code. A debugger is the program
that tries to answer the following questions:

"What information can we provide to the programmers about how a program is
running?"
"What information will help the programmer most?"

If it happens that traditionally debuggers are based on inspecting memory,
this is an unavoidable situation considering the history of programming
languages. But certainly there are many other possibilities that can help a
programmer to manage the complexity of a running program, and it seems as
if you disregard them all completely in your argument!

 It's just that historically functional programmers have tended to already
 be experienced programmers who realise this.  Why would they waste their
 time building a tool that no-one needs?

This whole block is offensive to the rest of the world. Fortunately it has
nothing to do with reality:

 - the recent GHC survey uncovered "some kind of debugger" as the most
   demanded tool
 - other functional languages have seen magnificent efforts in the
   debugging camp, such as the awesome OCaml debugger or the now sadly
   defunct ML Time-Travel debugger from A. Tolmach
 - the Lispish languages, which are arguably on the functional side too,
   have always enjoyed impressive online debugging tools.

 Spending your time tracing through individual lines of code is
 counter-productive in both cases.

 Andrae Muys
 [EMAIL PROTECTED]
 Principal Kowari Consultant
 Netymon Pty Ltd

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Malcolm Wallace
Andrae Muys [EMAIL PROTECTED] wrote:

 It's a truism to say if your code doesn't work it's because you don't 
 understand it; ...

Indeed, but tracing the execution of the code, on the test example where
it fails, will often give insight into one's misunderstanding.  And often,
the person trying to fix some code is not even the person who wrote it.

Help with understanding a program is the job of tracing tools like Hat,
Hood, or Buddha.  (These are often called debuggers.)  And I believe
this is what the original poster was asking for - a tool to reveal what
his code _really_ does as opposed to what he _thinks_ it ought to do.
Is that so terrible?

Many people seem to think that banishing some misunderstanding of a
piece of code is as simple as staring at it until enlightenment hits.  I
can vouch for the fact that, when the code is complex (and necessarily
so, because the problem logic is complex), then tools are needed for
full comprehension of the logic.

For example, when presented with a bug report for the nhc98 compiler
(which I maintain but did not create), using Hat has been invaluable to
me in narrowing down which function is to blame in mere minutes.  Prior
to Hat, it sometimes took several days of reading sources by hand,
manually tracking possible stacks of function calls over large data
structures, before even finding the site of the error.

Large software systems are complex.  Tool support for understanding
their behaviour is not needed because we humans are stupid, but because
there is just too much information context to keep in one head-ful at a
time.

 P.S. It is worth noting that I am here talking about the sort of  
 debugger raised in the original post. 
   - I'm talking about runtime debugging with  
 variable watches, breakpoints, and line-by-line stepping.

I don't believe that is necessarily what the original poster was asking
for.  And reduction-by-reduction stepping can be extremely useful for
beginners to understand how lazy evaluation works.

Regards,
Malcolm
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: does the compiler optimize repeated calls?

2006-09-06 Thread David House

On 06/09/06, David Roundy [EMAIL PROTECTED] wrote:

Before something like this were to be implemented, I'd rather see
automatic strictifying for avoidence of stack overflows (i.e. when the
stack starts filling up, start evaluating things strictly), which is
most likely another insanely dificult bit of compiler code, which
would also involve figuring out which code paths would save memory,
and which would increase the memory use.


Keep in mind that strict and lazy semantics aren't interchangeable;
that is, automatic strictifying could change program behaviour unless
the compiler knew which bits need to be lazy. If it knew that, we
might as well adopt the 'strict where possible, lazy when necessary'
semantics.

--
-David House, [EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jason Dagit

On 9/6/06, Neil Mitchell [EMAIL PROTECTED] wrote:


Let take for example a bug I spent tracking down in Haskell this
weekend. The bug can be summarized as Program error: pattern match
failure: head []. And indeed, thats all you get. A quick grep reveals
there are about 60 calls to head in the program. In this instance I
have the choice between 1) crying, 2) a debugger, 3) a lot of hard
work. [Of course, knowing this situation arises, I had already
prepared my getout plan, so it wasn't a massive problem]


I don't know what your getout plan was but when I'm in this situation
I do the following (hopefully listing this trick will help the OP):

head' :: String -> [a] -> a
head' s [] = error $ "head': empty list at " ++ s
head' _ xs = head xs

Then I do the tedious thing of replacing all the heads in my program
with head' and giving each a unique string to print out, such as
head' "line 5".

You could even take this a step further and let the type system help
you find the places to replace head by doing something like:

import Prelude hiding (head)
import qualified Prelude (head)

head :: String -> [a] -> a
head s [] = error $ "head: empty list at " ++ s
head _ xs = Prelude.head xs

Now you should get type errors anywhere you're using head.

Or maybe even more extreme you could use template haskell or the c
preprocessor to fill in the line number + column.

HTH,
Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Johan Tibell

Note: I meant to send this to the whole list a couple of messages ago
but accidentally I only sent it to Lennart, sorry Lennart!

I know that Linus Torvalds doesn't find debuggers all that useful
either and he hacks C [1].

1. http://linuxmafia.com/faq/Kernel/linus-im-a-bastard-speech.html

On 9/6/06, Lennart Augustsson [EMAIL PROTECTED] wrote:

I've also used Visual Studio, and I wouldn't mind having something
like that for Haskell.  But I have to agree with Jon, I think the
best way of debugging is to understand your code.  I think people who
come from imperative programming come with a mind set that you
understand your code by stepping through it in the debugger.  But I
find this paradigm much less useful for functional code.

-- Lennart

On Sep 6, 2006, at 06:22 , Neil Mitchell wrote:

 Hi

 But why should c and d exist at runtime? They're only used
 once each, so the compiler is free to replace f with

 \a b -> (a+b) + a*b

 Yes, and if the compiler is this clever, it should also be free to
 replace them back at debug time.


 I've said this before, but I think it's worth repeating: in
 most cases, if you need to debug your programme it's because
 it's too complicated for you to understand, so the correct
 thing to do is to simplify it and try again.

 That makes it impossible to ever improve - initially all programs are
 beyond you, and only by pushing the boundary do you get anywhere. As
 for me, I always write programs I can't understand, and that no one
 else can. Perhaps I understood it when I originally wrote it, perhaps
 not, but that was maybe over a year ago.

 It's been my experience that debugging is a serious weakness of
 Haskell - where even the poor mans printf debugging changes the
 semantics! And everyone comes up with arguments why there is no need
 to debug a functional language - that sounds more like excuses about
 why we can't build a good lazy debugger :)

 [Sorry for the slight rant, but I've used Visual Studio C++ so I know
 what a good debugger looks like, and how indispensable they are]

 Thanks

 Neil
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread Brian Hulley

John Hughes wrote:

The trouble is that this isn't always an optimisation. Try these two
programs:

powerset [] = [[]]
powerset (x:xs) = powerset xs++map (x:) (powerset xs)

and

powerset [] = [[]]
powerset (x:xs) = pxs++map (x:) pxs
 where pxs = powerset xs

Try computing length (powerset [1..n]) with each definition. For small
n, the second is faster. As n gets larger, the second gets slower and
slower, but the first keeps chugging along. The problem is that the
second has exponentially higher peak memory requirements than the
first. Round about n=25, on my machine, all other programs stop responding
while the second one runs. You don't really want a compiler to make
that kind of pessimisation to your program, which is why it's a
deliberate decision to leave most CSE to the programmer. You can
still write the second version, and suffer the consequences, but at least 
you know

it's your own fault!


Thanks for the above example. I found it quite difficult to understand why
the second is worse than the first for large n, but I think the reason is
that you're using the second definition in conjunction with (length). It is
the *combination* of the CSE'd (powerset) with (length) that is less
efficient: (length) just reads its input as a stream, so there is no need
for the whole of (powerset xs) to exist in memory, and thus the non-CSE'd
version gives a faster (length . powerset).


Ideally it would be great if the compiler could make use of the context in 
which a function is being applied to produce optimized code across function 
boundaries. In the above example of (length . powerset), (length) has no 
interest in the contents of the powerset itself so could the compiler not 
fuse (length . powerset) into the following function:


   lengthPowerset [] = 1
   lengthPowerset (x:xs) = 2 * lengthPowerset xs

The compiler would need to analyse the definition of (++) and (map) to 
discover that


   length (x ++ y) === length x + length y

   length (map f y) === length y

and with that knowledge I imagine the steps could be something like:

   lengthPowerset [] = length (powerset []) = length ([[]]) = 1

   lengthPowerset (x:xs)
   = length (powerset xs ++ map (x:) (powerset xs))
   = length (powerset xs) + length (map (x:) (powerset xs))
   = length (powerset xs) + length (powerset xs)
   = lengthPowerset xs + lengthPowerset xs
   = 2 * lengthPowerset xs

After getting the function (lengthPowerset) as above, I'd also expect the 
compiler to apply a transformation into a tail recursive function:


   lengthPowerset y = lengthPowerset' y 1
   where
   lengthPowerset' [] i = i
   lengthPowerset' (_:xs) i = lengthPowerset' xs $! 2*i

resulting in a tightly coded machine code loop to rival, or greatly 
exceed(!), the best efforts of C.


In the meantime I tend to code in Haskell just expecting these kind of 
optimizations to be done (unless I'm writing a really time-critical piece of 
code that can't wait), knowing of course that they might not be done just at 
the moment but at least some time in the (hopefully not too distant) 
future... ;-)


Regards, Brian.
--
Logic empowers us and Love gives us purpose.
Yet still phantoms restless for eras long past,
congealed in the present in unthought forms,
strive mightily unseen to destroy us.

http://www.metamilk.com 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread David Roundy
On Wed, Sep 06, 2006 at 09:56:17AM -0700, Jason Dagit wrote:
 Or maybe even more extreme you could use template haskell or the c
 preprocessor to fill in the line number + column.

Which is precisely what darcs does for fromJust (which we use a lot):
we define a C preprocessor macro fromJust.  It's ugly, but beats any
other choice I'm aware of.  I wish that built-in functions that call
error could automatically work this way...
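
For concreteness, a minimal sketch of the trick (the helper name and the
error wording are invented here; this is not the actual darcs macro) might
look like this, compiled with -cpp:

{-# OPTIONS_GHC -cpp #-}
module FromJustLoc where

import Data.Maybe (fromMaybe)

-- Hypothetical helper: a fromJust that takes an explicit location string.
fromJustLoc :: String -> Maybe a -> a
fromJustLoc loc = fromMaybe (error ("fromJustLoc: Nothing at " ++ loc))

-- Object-like macro: every later use of fromJust picks up the call site
-- file and line via the standard cpp macros __FILE__ and __LINE__.
#define fromJust (fromJustLoc (__FILE__ ++ ":" ++ show (__LINE__ :: Int)))

example :: Maybe Int -> Int
example m = fromJust m + 1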
-- 
David Roundy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Chris Kuklewicz

David Roundy wrote:

On Wed, Sep 06, 2006 at 09:56:17AM -0700, Jason Dagit wrote:

Or maybe even more extreme you could use template haskell or the c
preprocessor to fill in the line number + column.


Which is precisely what darcs does for fromJust (which we use a lot):
we define a C preprocessor macro fromJust.  It's ugly, but beats any
other choice I'm aware of.  I wish that built-in functions that call
error could automatically work this way...


This old mailing list message may help: 
http://www.mail-archive.com/haskell-cafe@haskell.org/msg13034.html


Suggested by a question from sethk on #haskell irc channel. 
Solves an FAQ where people have often resorted to cpp or m4:
a `trace' that prints line numbers.
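
For the record, a located trace in that spirit can be cooked up with cpp as
well (a sketch invented here, not the code from the linked message):

{-# OPTIONS_GHC -cpp #-}
module TraceLoc where

import qualified Debug.Trace

-- Object-like macro: LTRACE "msg" expr prints File.hs:<line>: msg on
-- stderr when expr is demanded, and then returns expr unchanged.
#define LTRACE (\msg -> Debug.Trace.trace (__FILE__ ++ ":" ++ show (__LINE__ :: Int) ++ ": " ++ msg))

example :: Int -> Int
example x = LTRACE "about to square" (x * x)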


--
Chris
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] getContents and lazy evaluation

2006-09-06 Thread Esa Ilari Vuokko

Hi

On 9/6/06, David Roundy [EMAIL PROTECTED] wrote:

Fortunately, the undefined behavior in this case is unrelated to the
lazy IO.  On windows, the removal of the file will fail, while on
posix systems there won't be any failure at all.  The same behavior
would show up if you opened the file for non-lazy reading, and tried
to read part of the file, then delete it, then read the rest.


This is not strictly speaking true.  If all the handles opened to the file
in question are in FILE_SHARE_DELETE-sharing mode, it can be
marked for deletion when the last handle to it is closed.  It can also be
moved and renamed.

But it is true that removal might fail because of an open handle, and it is true
that it will fail as implemented currently for ghc (and probably for other
compilers as well.)


The undefinedness in this example, isn't in the haskell language,
but in the filesystem semantics, and that's not something we want the
language specifying (since it's something over which it has no


Happily this isn't a lazy IO issue, it's just a file IO issue for all
files opened as
specified by haskell98.  Sharing mode would be really nice to have in
Windows, as would security attributes.  But as you say, these are hard
things to specify because not everyone has those features.  So, at least
it works nicely in posixy-systems, eh?

Best regards,
--Esa Ilari Vuokko
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread John Hughes

John Hughes wrote:

The trouble is that this isn't always an optimisation. Try these two
programs:

powerset [] = [[]]
powerset (x:xs) = powerset xs++map (x:) (powerset xs)

and

powerset [] = [[]]
powerset (x:xs) = pxs++map (x:) pxs
 where pxs = powerset xs

Try computing length (powerset [1..n]) with each definition. For small
n, the second is faster. As n gets larger, the second gets slower and
slower, but the first keeps chugging along. The problem is that the
second has exponentially higher peak memory requirements than the
first. Round about n=25, on my machine, all other programs stop 
responding

while the second one runs. You don't really want a compiler to make
that kind of pessimisation to your program, which is why it's a
deliberate decision to leave most CSE to the programmer. You can
still write the second version, and suffer the consequences, but at least
you know it's your own fault!


Thanks for the above example. I found it quite difficult to understand why 
the second is worse than the first for large n, but I think the reason is 
that you're using the second def in conjunction with (length). Therefore 
it is the *combination* of the cse'd (powerset) with (length) that is less 
efficient, because (length) just reads its input as a stream, so there is 
no need for the whole of (powerset xs) to exist in memory; thus the 
non-cse'd version gives a faster (length . powerset).


Yes... not just length, of course, but any function that consumes its input 
lazily, or perhaps I should say in one pass. For example, if you print 
out the result of powerset, then the print function makes only one pass over 
it, and the first version will run in linear space in n, while the second 
takes exponential. But then you'll be doing so much I/O that you won't be 
able to run the code for such large n in reasonable time--that's the reason 
I chose length in my example, it's a list consumer that isn't I/O-bound.


Just to be explicit, the reason the second is worse is that the pointer to 
pxs from the expression map (x:) pxs prevents the garbage collector from 
recovering the space pxs occupies, while the pxs++ is being computed and 
consumed. So you end up computing all of pxs while the pxs++ is running, AND 
STORING THE RESULT, and then making a second pass over it with map (x:) pxs, 
during which pxs can be garbage collected as it is processed. In the first 
version, we compute powerset xs twice, but each time, every cell is 
constructed, then immediately processed and discarded, so every garbage 
collection reclaims almost all the allocated memory.
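
A small driver makes this easy to observe (an assumed test harness, not code
from the thread):

module Main (main) where

import System.Environment (getArgs)

powerset :: [a] -> [[a]]
powerset []     = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)
-- Swap in the shared version to see the peak-memory blow-up:
-- powerset (x:xs) = pxs ++ map (x:) pxs
--   where pxs = powerset xs

main :: IO ()
main = do
    [n] <- getArgs
    print (length (powerset [1 .. read n :: Int]))

Compile each variant with ghc -O (adding -rtsopts if your GHC requires it)
and run ./Main 25 +RTS -s: the first definition should show a small maximum
residency, the second an exponentially larger one.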


Ideally it would be great if the compiler could make use of the context in 
which a function is being applied to produce optimized code across 
function boundaries. In the above example of (length . powerset), (length) 
has no interest in the contents of the powerset itself so could the 
compiler not fuse (length . powerset) into the following function:


   lengthPowerset [] = 1
   lengthPowerset (x:xs) = 2 * lengthPowerset xs

The compiler would need to analyse the definition of (++) and (map) to 
discover that


   length (x ++ y) === length x + length y

   length (map f y) === length y

and with that knowledge I imagine the steps could be something like:

   lengthPowerset [] = length (powerset []) = length ([[]]) = 1

   lengthPowerset (x:xs)
   = length (powerset xs ++ map (x:) (powerset xs))
   = length (powerset xs) + length (map (x:) (powerset xs))
   = length (powerset xs) + length (powerset xs)
   = lengthPowerset xs + lengthPowerset xs
   = 2 * lengthPowerset xs

After getting the function (lengthPowerset) as above, I'd also expect the 
compiler to apply a transformation into a tail recursive function:


   lengthPowerset y = lengthPowerset' y 1
   where
   lengthPowerset' [] i = i
   lengthPowerset' (_:xs) i = lengthPowerset' xs $! 2*i

resulting in a tightly coded machine code loop to rival, or greatly 
exceed(!), the best efforts of C.


In the meantime I tend to code in Haskell just expecting these kind of 
optimizations to be done (unless I'm writing a really time-critical piece 
of code that can't wait), knowing of course that they might not be done 
just at the moment but at least some time in the (hopefully not too 
distant) future... ;-)


Regards, Brian.


You know, I suspect you could get a lot of this to happen by programming 
GHC's optimiser using rewrite rules. But I'm going to leave it as an 
exercise for the reader (he he :-). For the compiler to do all this without 
guidance would, I suspect, require much more theorem proving than it will 
be reasonable for compilers to do for a long, long time.
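
For what it's worth, the rules alluded to might be written down roughly as
below (a sketch only; whether they ever fire depends on how the Prelude's own
inlining and fusion rules for length, map and (++) interact with them):

module LengthRules where

-- GHC rewrite rules expressing the two identities used above.
{-# RULES
"length/++"   forall xs ys.  length (xs ++ ys) = length xs + length ys
"length/map"  forall f xs.   length (map f xs) = length xs
  #-}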


John 




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Reading integers

2006-09-06 Thread Bertram Felgenhauer
Hi,

prompted by a discussion about http://www.spoj.pl/problems/MUL/ on
#haskell I implemented some faster routines for reading integers,
and managed to produce a program that passes this problem.

Currently I have a single module that provides reading operations
for Integers and Ints. I'm not quite sure what to do with it.

Ideally, read should use faster code for parsing numbers. This is hard
to do though, because read for numbers is specified in terms of lex,
which handles arbitrary tokens. One nasty side effect of this, besides
bad performance, is that for example

   read ('':repeat 'a') :: Int

diverges, even though there's obviously no possible parse. Incorporating
faster integer reading code into read without breaking backward
compatibility is not trivial.
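
For illustration, a hand-rolled reader in this spirit (an assumed sketch, not
the code in the repository below) consumes a decimal prefix directly and hands
back the leftover input, so it never looks past the first non-digit:

import Data.Char (isDigit, digitToInt)
import Data.List (foldl')

-- Read an optional sign and a decimal prefix; return the value and the
-- unconsumed rest of the input.
readIntPrefix :: String -> Maybe (Integer, String)
readIntPrefix s0 =
    case span isDigit s of
      ([], _)    -> Nothing
      (ds, rest) -> Just (sign * foldl' step 0 ds, rest)
  where
    (sign, s) = case s0 of
                  ('-':t) -> (-1, t)
                  t       -> (1, t)
    step acc d = acc * 10 + toInteger (digitToInt d)

For example, readIntPrefix "123abc" is Just (123, "abc"), and nothing beyond
the digit prefix is ever forced.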

The next obvious place to put the functions is the Numeric module.
This sounds like a good idea until one realizes that readSigned (which
is the obvious thing to use for signed integers) again uses lex, with
all the complications above.

A third option is to put it all in a separate module. This would
probably be hard to find, so few people would use it.

I'm pondering breaking compatibility with Haskell 98 to some extent
and implementing  read instances for Int and Integer that don't diverge
on weird inputs, but handle all valid numbers that Haskell 98 supports.

This still requires some work to support octal and hexadecimal numbers,
whitespace and parentheses.

What do you think?


The code (quite unpolished) is available with

  darcs get http://int-e.home.tlink.de/haskell/readinteger

(browsing directly doesn't work)

It also includes a simple benchmark for comparing reads, Numeric.readDec
and my readInteger functions. Results for my computer are included as
well.

regards,

Bertram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Neil Mitchell

Hi


I don't know what your getout plan was but when I'm in this situation
I do the following (hopefully listing this trick will help the OP):


I have headNote, fromJustNote, fromJustDefault, lookupJust,
lookupJustNote, tailNote - a whole set of functions which take an
extra note parameter. In the case of lookupJust you still get a
crash, but one that reports both the thing you were searching for and
the list of things you were searching in - lookupJust requires a Show
instance, so it can usually tell you roughly where you are.
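
Sketched out (these are assumed definitions; Neil's own versions may well
differ), the convention looks like:

headNote :: String -> [a] -> a
headNote _    (x:_) = x
headNote note []    = error ("headNote: empty list (" ++ note ++ ")")

fromJustNote :: String -> Maybe a -> a
fromJustNote _    (Just x) = x
fromJustNote note Nothing  = error ("fromJustNote: Nothing (" ++ note ++ ")")

lookupJustNote :: (Show k, Eq k) => String -> k -> [(k, v)] -> v
lookupJustNote note k xs =
    case lookup k xs of
      Just v  -> v
      Nothing -> error ("lookupJustNote: key " ++ show k ++
                        " not found in " ++ show (map fst xs) ++
                        " (" ++ note ++ ")")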

I also always write it out as Module.Name.function - since line 5 has
a habit of being moved to a different line a lot more often than names
change.

And because I am used to this, maybe 95% of all unsafe functions are
already in Note form, so I just change the new ones I added since last
time I broke down and cried.

However, it's useful, but hacky. It's not something I should have to do manually.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reading integers

2006-09-06 Thread Neil Mitchell

Hi Bertram,


Currently I have a single module that provides reading operations
for Integers and Ints. I'm not quite sure what to do with it.


Get it into base! Where it is, or what it's called is less relevant -
perhaps entirely decoupled from the Read class, and I wouldn't have
thought reading octal integers etc. was useful - just simple bog
standard integers. I'd really like just readInt/readInteger in
Numeric, or Prelude, or possibly even a new Read style module.


I'm pondering breaking compatibility with Haskell 98 to some extent
and implementing  read instances for Int and Integer that don't diverge
on weird inputs, but handle all valid numbers that Haskell 98 supports.


That's nice, but a stand-alone readInt that is not tied to reads, and
the whole leftover-parse thing can be dropped - probably for better
performance. Of course, making an existing read faster is also good.


It also includes a simple benchmark for comparing reads, Numeric.readDec
and my readInteger functions. Results for my computer are included as
well.


Any chance of a very quick summary of the results? i.e. on average x%
faster, but y% faster for integers over size z? Saves everyone darcs'ing
it. As far as I am concerned, the code is a black box, but the
statistics are something everyone will want.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reading integers

2006-09-06 Thread Bertram Felgenhauer
Neil Mitchell wrote:
 Currently I have a single module that provides reading operations
 for Integers and Ints. I'm not quite sure what to do with it.

 Get it into base! Where it is, or what its called is less relevant -

Ok, one vote for base.

 I'm pondering breaking compatibility with Haskell 98 to some extent
 and implementing  read instances for Int and Integer that don't diverge
 on weird inputs, but handle all valid numbers that Haskell 98 supports.
 
 That's nice, but a stand-alone readInt that is not tied to reads, and
 the whole leftover-parse thing can be dropped - probably for better
 performance.

Performance won't be much better I think. A first attempt at this was
actually slower (I have not yet investigated why), and the functions are
more useful with leftover parsing, in my opinion (in the MUL problem it
saved a call to words - which bought a little bit of extra performance).

 Of course, making an existing read faster is also good.

I think it's very desirable because that's what people use most often.

 Any chance of a very quick summary of the results? i.e. on average x%
 faster, but y% faster for integers over size z? Saves everyone darcs'ing
 it. As far as I am concerned, the code is a black box, but the
 statistics are something everyone will want.

For small integers (absolute value less than 1000), Numeric.readDec
is 5 to 10 times faster than reads and my readInteger is 5 to 8
times faster than readDec.

For medium integers (about 100 digits), readDec and reads are about
the same speed while readInteger is about 10 times faster.

For larger integers the speedup becomes larger, too. At 100k digits,
it reaches 100 (timed using ghci - but hopefully the number is correct
anyway.)

All measurements were done with ghc 6.5, compiling with -O.

regards,

Bertram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Aaron Denney
On 2006-09-06, Andrae Muys [EMAIL PROTECTED] wrote:
 Jon understates it by implying this is a Functional/Haskell specific  
 quality - it's not.  Debuggers stopped being useful the day we  
 finally delegated pointer handling to the compiler/vm author and got  
 on with writing code that actually solves real problems.

Of course, these sometimes have bugs which we need to track down, in
which case having a debugger to show just how the vm is messing up 
can be very helpful.

-- 
Aaron Denney
--

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Claus Reinke

scientists, who ought to know,
assure us that it must be so,
oh, let us never, never, doubt,
what nobody is sure about.

(or something like that..;-)

as everyone else in this thread, I have my own experiences and
firm opinions on the state of debugging support in Haskell (in
particular on the apparent ease with which operational semantics
or stepwise reduction are often dismissed), but just a few points:

- if you're sure about anything, that probably just means you've
   stopped thinking about that thing - not a good position for
   imposing your assured opinion on others, imho.

- a debugger is (mainly) a tool for finding and removing bugs:

   - if you're crawling through the machine's circuitboards in
   search of a real bug, that might be a screwdriver, a
   flashlight, and a circuitdiagram/floorplan

   - if you don't like to use computers to augment your
   mental processes, that might be pencil and paper

   - if you do like to use computers to augment your
   mental processes, that might be some piece of software

- what kind of software might be helpful in debugging
   depends as much on what you are debugging as on
   your individual approach to debugging

- assuming that you're not debugging the hardware, compiler,
   runtime system, or foreign code, functional languages free 
   you from many sources of bugs. but not of all sources.


- simplifying the code until it becomes easily comprehensible
   is a good habit anyway, and it does help to expose bugs
   that creep in while you're juggling too many balls at once
   (is the code obviously correct or not obviously wrong?).

   for those so inclined, tools can help here, too: they can
   expand our limits of comprehension, they can assist in
   the simplification, they can highlight big-balls-of-mud in
   your code, etc.

- often, finding bugs is linked to comprehending the program's
   operational behaviour, so that you can be sure that it is
   going to do all that you need it to do, and nothing else.
   that in itself does not imply, however, that you need to 
   include your language's implementation into the set of 
   things to debug.


- it is perfectly possible to study the operational behaviour
   of functional programs without resorting to implementation
   details - that falls under operational semantics, and last
   time I checked, it had become at least as well respected
   as denotational semantics, not least because of its successes
   in reasoning about programs in concurrent languages.


- a useful equivalent to observing instruction-by-instruction
   state changes when debugging imperative programs is to
   observe reduction-by-reduction program transformations
   when debugging functional programs. 

   based on operational semantics, tool support for this 
   approach is not just possible, but was used with good 
   success in the past (interested parties might like to browse, 
   eg, the 1994 user's guide for PI-RED, or the 1996 papers 
   on teaching experience, in FPLE, and system overview, in JFP:

   http://www.informatik.uni-kiel.de/~wk/papers_func.html
   ), just not for Haskell. 

   note that this constitutes a semantics-based inspection at
   the language level, completely independent of the actual
   implementation below. hiding implementation details while
   presenting a high-level operational semantics is a non-trivial
   exercise that includes such topics as variables, bindings
   and substitution, as well as retranslating low-level machine
   states to corresponding intermediate high-level programs.

so, while there are reasonable and less reasonable ways of
using debuggers, the question is not whether or not there
should be debuggers, but what kind of tools we have or
need to help debugging and, more generally, comprehending
Haskell programs. and if the state of the art indicates that
such tools are either kludges (affecting program semantics,
exposing low-level machine details, etc.), do not cover the
whole Haskell-in-use, or simply don't help those who'd
like to use them, then that state is not good. it is a lot better
than it used to be, but far from optimal.

thinking helps, but claiming that tools can't help doesn't.

claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] head c Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
Neil Mitchell [EMAIL PROTECTED] writes:
 I wrote:
  H'm.  I've never been completely convinced that head should
  be in the language at all.  If you use it, you really have
  to think, every time you type in the letters h-e-a-d, Am I
  /really/ /really/ sure the argument will never be []?
  Otherwise you should be using a case expression or (more
  often, I think) a subsidiary function with a pattern match.
 
 I disagree (again...) - head is useful.

Given that a programming language remain computationally
complete, utility is never a sufficient reason for including
something in it.  Guns are occasionally useful, but if you
walk around with a loaded automatic down the front of your
pants, you are *far* more likely to blow your own balls off
than encounter circumstances where the gun would be useful.

A large part of good programming language design is about
preventing you from blowing your balls off (or anyone
else's).

 And as you go on to say, if you apply it to the infinite
 list, who cares if you used head. Head is only unsafe on
 an empty list. So the problem then becomes can you detect
 unsafe calls to head and let the programmer know.

No, that's the wrong approach because it relies on detecting
something that (a) the programmer probably knows already and
(b) is undecidable in general. It would be far better for
the programmer to express this knowledge in the
programme. In Haskell it's not possible for the two
datatypes

   data NonFiniteList t = (:) {head::t, tail::  NonFiniteList t}

and 

   data List t = (:) {head::t, tail::  List t}
   | []

to coexist, and though it would even then have been possible
to use a type system where all the operations on List were
also applicable to NonFiniteList, the means available then
would have lost type inference. So the consensus at the time
was that doing clever stuff with types (making one a
transparent supertype of the other) was too problematic, and
keeping them separate would have meant providing all the
list operations twice. Once typeclasses came along, this
argument no longer held, but redoing lists and nonfinite
lists as special cases of a sequence class would have been
hard work.

I think it's hard work that should have been done, though.
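
To make the shape of that concrete (all names here are invented for
illustration), a class-based split might look like:

data Stream a = Cons a (Stream a)        -- necessarily non-empty (non-finite)
data List   a = Nil | a :< List a        -- ordinary finite-or-empty lists

class Sequence f where
    smap :: (a -> b) -> f a -> f b

-- head and tail only exist where they cannot fail:
class Sequence f => NonEmptySeq f where
    shead :: f a -> a
    stail :: f a -> f a

instance Sequence Stream where
    smap g (Cons x xs) = Cons (g x) (smap g xs)

instance NonEmptySeq Stream where
    shead (Cons x _)  = x
    stail (Cons _ xs) = xs

instance Sequence List where
    smap _ Nil       = Nil
    smap g (x :< xs) = g x :< smap g xs

With a split like this, head is simply not offered at the type where it
could go wrong.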

As I said in the post to which you replied, if you use head,
you really have to have a convincing argument that it isn't
going to be applied to []. If you don't have one, use
something other than head (it's always possible). If your
programme crashes with a head [], the only reasonable
thing to do is to go through all 60 uses and check them. If
you don't have time for that right now, doing an automatic
replace is a reasonable interim measure, so long as you
don't replace them back once you've found the particular
bug.

1
i
import Control.Exception
.
s/head/(\l -> assert (not $ null l) (head l))/g

...

Ugly, and 
*** Exception: /tmp/foo.hs:4:7-12: Assertion failed
is really ugly, but not as ugly as 
*** Exception: Prelude.head: empty list
:-)
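
In Haskell source, each rewritten call site then amounts to something like
this (illustrative only):

import Control.Exception (assert)

grabFirst :: [Int] -> Int
grabFirst l = (\xs -> assert (not (null xs)) (head xs)) l
-- On [] this aborts with the file and line of the assertion rather
-- than the anonymous "Prelude.head: empty list".  (GHC can be told
-- to compile the asserts away with -fignore-asserts.)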

And the ugliness might encourage a rethink of particular
uses of head next time you have to look at that bit of code,
and that's a good thing even if all you end up doing is
manually substituting a let for the lambda and adding a
comment along the lines of "I really can't see how this list could
ever be empty, but I can't prove it".


-- 
Jón Fairbairn [EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
Claus Reinke [EMAIL PROTECTED] writes:


 thinking helps, but claiming that tools can't help doesn't.

Lets be absolutely clear about this: I've never claimed that
tools can't help. In this thread I've been using the term
debugger in the narrow sense implied by the OP's question --
something that steps through the execution of the code. Such
a debugger is inappropriate for Haskell programmers, and
doubly so for beginners.

-- 
Jón Fairbairn [EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: how do you debug programs?

2006-09-06 Thread Jón Fairbairn
David Roundy [EMAIL PROTECTED] writes:

 On Wed, Sep 06, 2006 at 09:56:17AM -0700, Jason Dagit wrote:
  Or maybe even more extreme you could use template haskell or the c
  preprocessor to fill in the line number + column.
 
 Which is precisely what darcs does for fromJust (which we use a lot):
 we define a C preprocessor macro fromJust. 

Curiously, the only bug in darcs that has bitten me so far
was a use of fromJust.  Perhaps that indicates a weakness in
the style, rather than the tools?

-- 
Jón Fairbairn [EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monad laws

2006-09-06 Thread Deokhwan Kim
What is the practical meaning of monad laws?

(M, return, >>=) is not qualified as a category-theoretical monad, if
the following laws are not satisfied:

  1. (return x) >>= f == f x
  2. m >>= return == m
  3. (m >>= f) >>= g == m >>= (\x -> f x >>= g)

But what practical problems can violating them cause? In other words,
I wonder whether declaring an instance of the Monad class but not checking
it against the monad laws may cause any problems, apart from not being
qualified as a theoretical monad?
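
For concreteness, for a particular monad the laws can at least be spot-checked
as ordinary properties, e.g. for the list monad with some fixed f and g (a
sketch; the functions are arbitrary examples):

import Test.QuickCheck (quickCheck)

f, g :: Int -> [Int]
f x = [x, x + 1]
g x = [x * 2]

prop_leftIdentity :: Int -> Bool
prop_leftIdentity x = (return x >>= f) == f x

prop_rightIdentity :: [Int] -> Bool
prop_rightIdentity m = (m >>= return) == m

prop_associativity :: [Int] -> Bool
prop_associativity m = ((m >>= f) >>= g) == (m >>= \x -> f x >>= g)

main :: IO ()
main = do
    quickCheck prop_leftIdentity
    quickCheck prop_rightIdentity
    quickCheck prop_associativity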

Cheers,

-- 
Deokhwan Kim
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe