[Haskell-cafe] Re: A question about monad laws

2008-02-18 Thread Ben Franksen
Wilhelm B. Kloke wrote:
 Ben Franksen [EMAIL PROTECTED] schrieb:
 Wilhelm B. Kloke wrote:
 [EMAIL PROTECTED] [EMAIL PROTECTED] schrieb:

 I would consider it a good idea if ghc would provide language support for
 this sort of integers.

 No need, you can do that for yourself:

 {-# LANGUAGE GeneralizedNewtypeDeriving #-}
 newtype DInt = DInt Double deriving (Eq, Ord, Enum, Num)

 instance Show DInt where show (DInt x) = show (truncate x :: Integer)
 
 Probably you missed the point I wanted to make.

Obviously ;)

 This only works properly
 if you can catch the Inexact trap which indicates the overflow. The
 problem with normal Ints is that the hardware does not support overflow
 detection. You get silently wrong results if you use an Int type which maps
 to C int, but not if it maps to C float or double with the Inexact trap
 enabled. Therefore you need to add runtime checks to be sure that you notice
 a possible overflow condition.

I agree completely.

Cheers
Ben



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Roman Leshchinskiy

Richard A. O'Keefe wrote:

On 14 Feb 2008, at 6:01 pm, Roman Leshchinskiy wrote:
I don't understand this. Why use a type which can overflow in the 
first place? Why not use Integer?


[...]

Presumably the reason for having Int in the language at all is speed.
As people have pointed out several times on this list to my knowledge,
Integer performance is not as good as Int performance, not hardly,
and it is silly to pay that price if I don't actually need it.


Do I understand correctly that you advocate using overflowing ints (even 
if they signal overflow) even if Integers are fast enough for a 
particular program? I strongly disagree with this. It's premature 
optimisation of the worst kind - trading correctness for unneeded 
performance.



SafeInt is what you should use when you *THINK* your results should all fit
in a machine int but aren't perfectly sure.  (And this is nearly all the 
time.)


Again, I strongly disagree. You should use Integer unless your program 
is too slow and profiling shows that Integer is the culprit. If and only 
if that is the case should you think about alternatives. That said, I 
doubt that your SafeInt would be significantly faster than Integer.



You just have to check for exceptional conditions.

Why should it be *MY* job to check for exceptional conditions?


It shouldn't unless you use a type whose contract specifies that it's 
your job to check for them. Which is the case for Int, Float and Double.


Wrong.  You're confusing two things here.  One is Float and Double,
where we get in serious trouble WITHOUT ANY EXCEPTIONAL CONDITIONS IN 
SIGHT.  The other is Int overflow.


I'm not sure what I'm confusing here, my comment referred specifically 
to exceptional conditions which floats provide plenty of. As to getting 
in trouble, I don't need floats for that, I manage to do it perfectly 
well with any data type including (). Seriously, though, I think we 
agree that using floating point numbers correctly isn't trivial, people 
who do that should know what they are doing and should best use existing 
libraries. I just don't see how floats are special in this regard.



The checking I am talking about is done by the hardware at machine speeds
and provides *certainty* that overflow did not occur.


So you advocate using different hardware?


If you think that, you do not understand floating point.
x+(y+z) == (x+y)+z fails even though there is nothing exceptional about
any of the operands or any of the results.


For all practical purposes, the semantics of (==) is not well defined 
for floating point numbers.


With respect to IEEE arithmetic, wrong.


Yes, IEEE does define an operation which is (wrongly, IMO) called 
equality. It's not a particularly useful operation (and it is not 
equality), but it does have a defined semantics. However, the semantics 
of (==) on floats isn't really defined in Haskell or C, for that matter, 
even if you know that the hardware is strictly IEEE conformant.


In general, floating point numbers do not really have a useful notion of 
equality. They are approximations, after all, and independently derived 
approximations can only be usefully tested for approximate equality. And 
yes, x+(y+z) is approximately equal to (x+y)+z for floats. How 
approximate depends on the particular values, of course.


That's one of the first things I used to teach my students about 
floats: *never* compare them for equality.


That's one of the warning signs I watch out for.  "Never compare floats for
equality" is a sure sign of someone who thinks they know about floats
but doesn't.


Hmm. Personally, I've never seen an algorithm where comparing for exact 
equality was algorithmically necessary. Sometimes (rarely) it is 
acceptable. But necessary? Do you know of one? On the other hand, there 
are a lot of cases where comparing for exact equality is algorithmically 
wrong.


As an aside, you might want to try googling for "Never compare floats 
for equality". I'm positive some of those people *do* know about floats.



Creating denormals and underflow are equivalent.


No they are not.  Underflow in this sense occurs when the result is too
small to be even a denormal.


I'm fairly sure that IEEE underflow occurs when the result cannot be 
represented by a *normal* number but I don't have a copy of the 
standard. Anyway, it's not important for this discussion, I guess.



Underflow is indeed a standard IEEE exception.  Like other standard IEEE
exceptions, it is *disabled by default*.  In this case, the hardware
delivered the exception *to the operating system*, but the operating
system did not deliver it to the *user code*.  It is the *combination*
of hardware and operating system that conforms to the IEEE standard (or 
not).

So we are talking about a situation where the only legal IEEE outcomes are
"return 0.0" or "raise the Underflow exception", and where raising an
exception *in the user code* wasn't allowed and didn't happen.


Now I'm curious. I would have guessed 

Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread ajb

G'day all.

Richard A. O'Keefe wrote:


That's one of the warning signs I watch out for.  "Never compare floats for
equality" is a sure sign of someone who thinks they know about
floats but doesn't.


Quoting Roman Leshchinskiy [EMAIL PROTECTED]:


Hmm. Personally, I've never seen an algorithm where comparing for exact
equality was algorithmically necessary.


One trick I've occasionally used is to avoid the need for a discriminated
union of floating point and integer types by just using a floating point
number.

If you ignore bitwise operations and division/remainder, any integer
operation that doesn't cause overflow on 32-bit integers will work just
the same if you use IEEE-754 64-bit floating point numbers instead.
That includes equality.  Moreover, you even get a few more significant
bits of precision.
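
(A hedged illustration, not from the original post, with invented names:
a Haskell sketch of the trick. It assumes IEEE-754 Doubles, in which
every integer of magnitude up to 2^53 is exactly representable, so
(+), (*) and (==) on such values agree with Integer arithmetic.)

newtype IntInDouble = IntInDouble Double deriving (Eq, Ord, Show)

addI, mulI :: IntInDouble -> IntInDouble -> IntInDouble
addI (IntInDouble x) (IntInDouble y) = IntInDouble (x + y)
mulI (IntInDouble x) (IntInDouble y) = IntInDouble (x * y)

-- 65536 * 65537 overflows a 32-bit Int but is computed (and compared)
-- exactly here:
check :: Bool
check = mulI (IntInDouble 65536) (IntInDouble 65537)
          == IntInDouble 4295032832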

In these days of fast 64 and 128 bit words, it might not be entirely
moral, but it works.

Cheers,
Andrew Bromage


[Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread jerzy . karczmarczuk
Jed Brown comments the answer of Roman Leshchinskiy, who answers my
question concerning the replacement of floating-point numbers: 


 First, when I see the advice "use something else", I always ask "what",
 and I get an answer very, very rarely... Well? What do you propose?


For Haskell, Rational seems like a good choice. The fact that the 
standard requires defaulting to Double is quite unfortunate and 
inconsistent, IMO; the default should be Rational. Float and Double 
shouldn't even be in scope without an explicit import. There really 
is no good reason to use them unless you are writing a binding to 
existing libraries or really need the performance.


Here Jed Brown says: 

Until you need to evaluate a transcendental function. 

...
It would be killing, wouldn't it?... 


I would say more. It is known that for typical formulae taken at random
from physics, engineering, etc., if you start with rationals, the
GCD between numerator and denominator will *typically* be small, so the
reduction of fractions is not very effective. Your rationals explode very fast!
If after some reasonable number of operations you get rationals whose
num/den have millions of digits, the program becomes *completely useless*;
this is not just a question of performance. 


Richard O'Keefe said that people speculate about floating-point numbers
without knowing them. Well, nobody is perfect...
I am a bit disturbed by people who speculate without ever *needing*
fl.p.'s, and who thus lack the sensibility. For them this is a kind of
game. Well, it isn't. 

R.L. says: 

For all practical purposes, the semantics of (==) is not well defined 
for floating point numbers. That's one of the first things I used to 
teach my students about floats: *never* compare them for equality. 
So in my view, your example doesn't fail, it's undefined. 
That Haskell provides (==) for floats is unfortunate. 


I disagree, on a practical basis. Floating-point numbers are very well
defined; we know how the mantissa is represented. If the numbers are
normalized, as they should be, plenty of low-level iterative algorithms
may use equality - after some operation - to check that machine-
precision convergence has been obtained. On the contrary, the verification
that the absolute value of the difference between two terms is less than
some threshold may be arbitrary or dubious. 
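
(A hedged sketch of the pattern described above, not Jerzy's own code:
a Newton iteration for square roots that detects machine-precision
convergence with exact (==). The name and the iteration bound are
invented; the bound guards against the last bit flip-flopping, a caveat
raised later in this thread.)

newtonSqrt :: Double -> Double    -- assumes a > 0
newtonSqrt a = go (100 :: Int) (a / 2 + 0.5)
  where
    go :: Int -> Double -> Double
    go 0 x = x                 -- safety bound in case of a 1-ulp cycle
    go n x
      | x' == x   = x          -- converged to machine precision
      | otherwise = go (n - 1) x'
      where x' = (x + a / x) / 2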

Jerzy Karczmarczuk 





[Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Wilhelm B. Kloke
[EMAIL PROTECTED] [EMAIL PROTECTED] schrieb:
 G'day all.

 Richard A. O'Keefe wrote:

 Hmm. Personally, I've never seen an algorithm where comparing for exact
 equality was algorithmically necessary.

 One trick I've occasionally used is to avoid the need for a discriminated
 union of floating point and integer types by just using a floating point
 number.

IMHO it is a perfectly good idea to use the FP processor for integer
computations. You can use the Inexact Trap as Overflow Exception,
a service you don't get from i386 (and most other) hardware for int
operations. Of course your integers are limited to 24bit+sign in
single precision and 53bit+sign in double. In i387 extended precision
you get 64bit+sign.

I would consider it a good idea if ghc would provide language support for
this sort of integers.
-- 
Dipl.-Math. Wilhelm Bernhard Kloke
Institut fuer Arbeitsphysiologie an der Universitaet Dortmund
Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257
PGP: http://vestein.arb-phys.uni-dortmund.de/~wb/mypublic.key



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Roman Leshchinskiy

[EMAIL PROTECTED] wrote:

Jed Brown comments the answer of Roman Leshchinskiy, who answers my
question concerning the replacement of floating-point numbers:
 First, when I see the advice "use something else", I always ask "what",
 and I get an answer very, very rarely... Well? What do you propose?


For Haskell, Rational seems like a good choice. The fact that the 
standard requires defaulting to Double is quite unfortunate and 
inconsistent, IMO; the default should be Rational. Float and Double 
shouldn't even be in scope without an explicit import. There really 
is no good reason to use them unless you are writing a binding to 
existing libraries or really need the performance.


Here Jed Brown says:
Until you need to evaluate a transcendental function. 

...
It would be killing, wouldn't it?...


Yes, it would. I was talking about average programs, though, which (I 
suspect) don't do numerics and really only need fractions. If you do 
numerics, by all means use a data type that supports numerics well. But 
even here, and especially in a functional language, IEEE floating point 
usually isn't the best choice unless you really need the performance.


You seem to be after a type that can be used to represent non-integer 
numbers in next to all problem domains. I don't think such a type 
exists. So, as usual, one has to choose a data structure suited to the 
problem at hand. IMO, standard floating point is not a good choice for 
most problem domains so Float and Double shouldn't be used by default. 
Whether Rational is a good default is certainly debatable.


For all practical purposes, the semantics of (==) is not well defined 
for floating point numbers. That's one of the first things I used to 
teach my students about floats: *never* compare them for equality. So 
in my view, your example doesn't fail, it's undefined. That Haskell 
provides (==) for floats is unfortunate. 


I disagree, on a practical basis. Floating-point numbers are very well
defined; we know how the mantissa is represented. If the numbers are
normalized, as they should be, plenty of low-level iterative algorithms
may use equality - after some operation - to check that machine-
precision convergence has been obtained.


If you are absolutely sure that for every possible precision and for 
every sequence of operations that compilers will generate from your code 
your algorithm will actually converge to a particular binary 
representation and not flip-flop on the last bit of the mantissa, for 
instance, and if you do not care about the actual precision of your 
algorithm (i.e., you want as much as possible of it) then yes, you might 
get away with using exact equality. Of course, you'll have to protect 
that part of your code by a sufficient number of warnings since you are 
using a highly unsafe operation in a very carefully controlled context. 
I'm not sure the trouble is really worth it. Anyway, in my view, such an 
unsafe operation shouldn't be in scope by default and definitely 
shouldn't be called (==). It's really quite like unsafePerformIO.


On the contrary, the verification that the absolute value of the difference 
between two terms is less than some threshold may be arbitrary or dubious.


Only if you use an inappropriate threshold. Choosing thresholds and 
precision is an important part of numeric programming and should be done 
with great care.


Roman



[Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Aaron Denney
On 2008-02-14, Roman Leshchinskiy [EMAIL PROTECTED] wrote:
 Richard A. O'Keefe wrote:
 Presumably the reason for having Int in the language at all is speed.
 As people have pointed out several times on this list to my knowledge,
 Integer performance is not as good as Int performance, not hardly,
 and it is silly to pay that price if I don't actually need it.

 Do I understand correctly that you advocate using overflowing ints (even 
 if they signal overflow) even if Integers are fast enough for a 
 particular program? I strongly disagree with this. It's premature 
 optimisation of the worst kind - trading correctness for unneeded 
 performance.

"Fast enough" is not absolute.  It's not trading correctness, it's
trading /completion/.  And if you expect everything to fit in
[-2^31..2^31-1] or [0..2^32-1], finding out it doesn't might be valuable
information about your problem domain.  For exploratory coding, speed
and knowing when something breaks can be more important than knowing
that all possible corner cases are covered, even ones you don't expect to
hit.

 SafeInt is what you should use when you *THINK* your results should all fit
 in a machine int but aren't perfectly sure.  (And this is nearly all the 
 time.)

 Again, I strongly disagree. You should use Integer unless your program 
 is too slow and profiling shows that Integer is the culprit. If and only 
 if that is the case should you think about alternatives. That said, I 
 doubt that your SafeInt would be significantly faster than Integer.

Why?  GMP is pretty good, but it's not going to be anywhere near
hardware speeds.

 The checking I am talking about is done by the hardware at machine speeds
 and provides *certainty* that overflow did not occur.

 So you advocate using different hardware?

At a minimum, any usable hardware sets flags on overflow.
Testing on those is pretty cheap.  Much cheaper than calling a GMP
routine to compare with 2^32, for instance.
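
(A hedged aside: those hardware flags are not portably visible from
Haskell, but the same cheap check can be written in software with two
comparisons and no GMP call. safeAdd is an invented name, not a library
function.)

safeAdd :: Int -> Int -> Maybe Int
safeAdd x y
  | y > 0 && x > maxBound - y = Nothing      -- would overflow upwards
  | y < 0 && x < minBound - y = Nothing      -- would overflow downwards
  | otherwise                 = Just (x + y)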

-- 
Aaron Denney
--



[Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Ben Franksen
Wilhelm B. Kloke wrote:
 [EMAIL PROTECTED] [EMAIL PROTECTED] schrieb:
 G'day all.

 Richard A. O'Keefe wrote:

 Hmm. Personally, I've never seen an algorithm where comparing for exact
 equality was algorithmically necessary.

 One trick I've occasionally used is to avoid the need for a discriminated
 union of floating point and integer types by just using a floating point
 number.
 
 IMHO it is a perfectly good idea to use the FP processor for integer
 computations. You can use the Inexact Trap as Overflow Exception,
 a service you don't get from i386 (and most other) hardware for int
  operations. Of course your integers are limited to 24bit+sign in
  single precision and 53bit+sign in double. In i387 extended precision
  you get 64bit+sign.
 
  I would consider it a good idea if ghc would provide language support for
  this sort of integers.

No need, you can do that for yourself:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype DInt = DInt Double deriving (Eq, Ord, Enum, Num)

instance Show DInt where show (DInt x) = show (truncate x :: Integer)


You can even make it H98 by defining the instances manually...
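
(A hedged sketch of what the manual instance might look like, with Num
dropped from the deriving list above. The 2^53 range check is a crude
software stand-in for the Inexact trap: a product can round before the
check sees it, which the real trap would catch.)

checked :: Double -> DInt
checked x
  | abs x <= 2 ^ (53 :: Int) = DInt x
  | otherwise                = error "DInt: integer overflow"

instance Num DInt where
  DInt x + DInt y = checked (x + y)
  DInt x - DInt y = checked (x - y)
  DInt x * DInt y = checked (x * y)
  negate (DInt x) = DInt (negate x)
  abs    (DInt x) = DInt (abs x)
  signum (DInt x) = DInt (signum x)
  fromInteger     = checked . fromInteger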

Cheers
Ben



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Thorkil Naur
Hello,

On a tangent, probably:

On Thursday 14 February 2008 10:24, Roman Leshchinskiy wrote:
 ...
 Hmm. Personally, I've never seen an algorithm where comparing for exact
 equality was algorithmically necessary. Sometimes (rarely) it is
 acceptable. But necessary? Do you know of one? 

Finding the machine epsilon, perhaps, that is, the smallest (floating point, 
surely) number for which 1.0+machine_eps==1.0 would be a candidate?
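
(A hedged sketch of one common variant of that computation; for IEEE
Double it should yield 2^-52, i.e. 2.220446049250313e-16, though
extended-precision registers on i387 can perturb the test.)

machineEps :: Double
machineEps = go 1.0
  where
    go e | 1.0 + e / 2 == 1.0 = e     -- e still perturbs 1.0; e/2 doesn't
         | otherwise          = go (e / 2)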

 ...

Best regards
Thorkil


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Roman Leshchinskiy

Aaron Denney wrote:

On 2008-02-14, Roman Leshchinskiy [EMAIL PROTECTED] wrote:

Richard A. O'Keefe wrote:



SafeInt is what you should use when you *THINK* your results should all fit
in a machine int but aren't perfectly sure.  (And this is nearly all the 
time.)
Again, I strongly disagree. You should use Integer unless your program 
is too slow and profiling shows that Integer is the culprit. If and only 
if that is the case should you think about alternatives. That said, I 
doubt that your SafeInt would be significantly faster than Integer.


Why?  GMP is pretty good, but it's not going to be anywhere near
hardware speeds.


This is how Integer is defined in the libraries:

data Integer
   = S# Int#                  -- small integers
   | J# Int# ByteArray#       -- large integers

As long as the Int# doesn't overflow, you don't call any GMP routines.

Roman



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread ajb

G'day all.

Quoting Thorkil Naur [EMAIL PROTECTED]:

Finding the machine epsilon, perhaps, that is, the smallest (floating
point, surely) number for which 1.0+machine_eps==1.0 would be a candidate?


The machine epsilon is the smallest relative separation between two
adjacent normalised floating point numbers.  (The largest is the
machine epsilon multiplied by the radix, more or less.)

So as I understand it, if you're thinking relative error, this test:

(fabs(x1-x2) < machine_eps * fabs(x1))

should be true if and only if x1 == x2, assuming that x1 and x2 are
nonzero and normalised.

I've always had the impression that using the machine epsilon for
pseudo-equality testing is fairly useless, especially if you can work
out a meaningful problem-specific tolerance.
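
(A hedged sketch of such a test, with invented names; tol is whatever
relative tolerance the error analysis of the particular algorithm
justifies, not the machine epsilon.)

approxEq :: Double -> Double -> Double -> Bool
approxEq tol x1 x2 = abs (x1 - x2) <= tol * max (abs x1) (abs x2)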

What seems to be more useful is using the machine epsilon to compute an
estimate of how much relative error your algorithm accumulates.  I've
seen this in a lot of Serious Numeric Code(tm), and I've done it myself
(probably inexpertly) a few times.

I haven't tried this, but I imagine that a computed relative error
estimate could be useful for assisting your approximate-equality
tests under some circumstances.  Richard might know of some
circumstances where this sort of thing would be useful.

Cheers,
Andrew Bromage


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Richard A. O'Keefe


On 14 Feb 2008, at 10:24 pm, Roman Leshchinskiy wrote:
Do I understand correctly that you advocate using overflowing ints  
(even if they signal overflow) even if Integers are fast enough for  
a particular program?


No you most certainly do NOT.  There is no way to soundly, and I would
have thought no way to plausibly, infer that from anything I wrote.




Again, I strongly disagree. You should use Integer unless your  
program is too slow and profiling shows that Integer is the  
culprit. If and only if that is the case should you think about  
alternatives. That said, I doubt that your SafeInt would be  
significantly faster than Integer.


SafeInt should be as fast as Int, or very nearly.
The representation of SafeInt is identical to the representation of Int,
so the space overheads are definitely lower.
The checking I am talking about is done by the hardware at machine speeds
and provides *certainty* that overflow did not occur.


So you advocate using different hardware?


Again, this is the opposite of what I wrote.
On my desk there are a Pentium machine and an UltraSPARC and a G4.
They *all* support cheap integer overflow checks.
I am saying that we should use the hardware we have already paid for!

It's not a particularly useful operation (and it is not equality),  
but it does have a defined semantics. However, the semantics of  
(==) on floats isn't really defined in Haskell or C, for that  
matter, even if you know that the hardware is strictly IEEE  
conformant.


The semantics of == on floats in C99 is, under certain circumstances,
and on the machines I use with the compilers I use, defined to be those
of the IEEE (or, as the C99 standard puts it, IEC) operation.


In general, floating point numbers do not really have a useful  
notion of equality. They are approximations.


The *numbers* are not approximations, the *operations* are approximate.
In particular, floating point +, -, *, <, ==, abs, sign, and other related
operations (but not /) are *exact* on integral values, if the results fit.

AWK programs would be in terrible trouble if this were not so.


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Richard A. O'Keefe

On 15 Feb 2008, at 2:03 am, Wilhelm B. Kloke wrote:
IMHO it is a perfectly good idea to use the FP processor for integer

computations. You can use the Inexact Trap as Overflow Exception,
a service you don't get from i386 (and most other) hardware for int
operations.


A neat idea.  However, the i386 has the INTO instruction, the SPARC family
has the TRAPV instruction, and other processors have analogues.  Some
machines have two sets of add/subtract instructions, one trapping, one not.





[Haskell-cafe] Re: A question about monad laws

2008-02-14 Thread Wilhelm B. Kloke
Ben Franksen [EMAIL PROTECTED] schrieb:
 Wilhelm B. Kloke wrote:
 [EMAIL PROTECTED] [EMAIL PROTECTED] schrieb:

 I would consider it a good idea if ghc would provide language support for
 this sort of integers.

 No need, you can do that for yourself:

 {-# LANGUAGE GeneralizedNewtypeDeriving #-}
 newtype DInt = DInt Double deriving (Eq, Ord, Enum, Num)

 instance Show DInt where show (DInt x) = show (truncate x :: Integer)

Probably you missed the point I wanted to make. This only works properly
if you can catch the Inexact trap which indicates the overflow. The problem
with normal Ints is that the hardware does not support overflow detection.
You get silently wrong results if you use an Int type which maps to
C int, but not if it maps to C float or double with the Inexact trap enabled.
Therefore you need to add runtime checks to be sure that you notice
a possible overflow condition.
-- 
Dipl.-Math. Wilhelm Bernhard Kloke
Institut fuer Arbeitsphysiologie an der Universitaet Dortmund
Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257
PGP: http://vestein.arb-phys.uni-dortmund.de/~wb/mypublic.key



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-13 Thread Richard A. O'Keefe


On 12 Feb 2008, at 5:14 pm, [EMAIL PROTECTED] wrote:

Would you say that *no* typical floating-point software is reliable?


With lots of hedging and clutching of protective amulets around the
word "reliable", of course not.  What I *am* saying is that
(a) it's exceptionally HARD to make reliable because although the
    operations are well defined and arguably reasonable they do NOT obey
    the laws that school and university mathematics teach us to expect
    them to obey
(b) "reliable" in each case needs to be defined with some care; it will
    almost never mean "gives answers accurate to machine precision for
    any legal input" and probably won't mean "gives sensible answers for
    any legal input" either.  With luck, it will mean "gives answers
    accurate to a specified tolerance for an input that differs from the
    input you actually provided by no more than a specified tolerance",
    for inputs that are neither too big nor too small, a stated range.
    (Note that the problem that gets almost solved may only be almost
    your problem.)
(c) practical advice is to use reputable packaged software whenever you
    possibly can rather than writing your own, and always check that the
    answers make physical sense before putting any trust in them; if (or
    should I say when) things go weird, seek the help of an expert.
(d) if you trust the output of a certain popular spreadsheet program, I
    have a bridge you might be interested in buying...

This is leaving aside all sorts of machine strangeness, like the student
whose neural net program started running hundreds of times slower than
he expected.  I advised him to replace
s = 0;
for (i = 0; i < n; i++) s += x[i]*x[i];
by
s = 0;
for (i = 0; i < n; i++)
    if (fabs(x[i]) > 1e-19)
        s += x[i]*x[i];
and the problem went away.  Dear reader: do you know why I expected this
problem, what it was, and why this is NOT a general solution?



[Haskell-cafe] Re: A question about monad laws

2008-02-13 Thread jerzy . karczmarczuk

Trialog:
Roman Leshchinskiy writes:

Richard A. O'Keefe wrote:



[EMAIL PROTECTED] wrote:

Would you say that *no* typical floating-point software is reliable?


With lots of hedging and clutching of protective amulets around the
word "reliable", of course not.  What I *am* saying is that
(a) it's exceptionally HARD to make reliable because although the
operations are well defined and arguably reasonable they do NOT obey the
laws that school and university mathematics teach us to expect them to obey


Ints do not obey those laws, either. It is not exceptionally hard to write 
reliable software using ints. You just have to check for exceptional 
conditions. That's also the case for floating point. 

That said, I suspect that 90% of programs that use float and double would 
be much better off using something else. The only reason to use floating 
point is performance.


I have a bit different perspective...
First, when I see the advice "use something else", I always ask "what",
and I get an answer very, very rarely... Well? What do you propose? 


Then, the problem is not always pathological, in the sense of exceptional
conditions. There are touchy points related to the stability of the
algorithms for the solution of differential equations. There are doubtful
random number generators in Monte-Carlo business. There are ill-conditioned
matrices and screwed-up iterative definitions. Algorithms work, work, and
ultimately explode or produce rubbish. The laws which get broken are
almost respected for a long time, and then we have the Bald Man (Sorites)
paradox... 


RAO'K very wisely says that people should avoid reinventing wheels, and
they should use established packages, written by people who know. 


The problem *here* is that we would like to have something fabulous in
Haskell - for example... And there aren't too many experts, who would
convert to the Functional Religion just for fun.
What is *much worse*, some potential users who could encourage building
such packages in the numerical domain, typically don't believe that FP
gives anything interesting. At least, this is the opinion of physicists
I spoke to recently.
Never mind. We shall dance over their cadavers, unless they dance over
ours. In both cases we shall be happy. 

Jerzy Karczmarczuk 





Re: [Haskell-cafe] Re: A question about monad laws

2008-02-13 Thread Roman Leshchinskiy

Richard A. O'Keefe wrote:


On 14 Feb 2008, at 2:28 pm, Roman Leshchinskiy wrote:


Richard A. O'Keefe wrote:

On 12 Feb 2008, at 5:14 pm, [EMAIL PROTECTED] wrote:

Would you say that *no* typical floating-point software is reliable?

With lots of hedging and clutching of protective amulets around the
word "reliable", of course not.  What I *am* saying is that
(a) it's exceptionally HARD to make reliable because although the
operations are well defined and arguably reasonable they do NOT obey the
laws that school and university mathematics teach us to expect them to obey


Ints do not obey those laws, either.


They obey a heck of a lot more of them.
Any combination of Ints using (+), (-), (*), and negate
is going to be congruent to the mathematically correct answer modulo 2**n
for some n.  In particular, (+) is associative for Ints.
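
(A hedged aside, not part of the exchange: both halves of this claim are
easy to test with QuickCheck; the property names are invented.)

import Test.QuickCheck

-- Int addition is associative even when it wraps around:
prop_intAssoc :: Int -> Int -> Int -> Bool
prop_intAssoc x y z = x + (y + z) == (x + y) + z    -- always passes

-- Double addition is not; quickCheck finds counterexamples:
prop_dblAssoc :: Double -> Double -> Double -> Bool
prop_dblAssoc x y z = x + (y + z) == (x + y) + z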


Yes, but neither school nor, for the most part, university mathematics 
teach us to expect modulo arithmetic. Good programmers learn about it at 
some point in their career, though, and write their programs 
accordingly. If they intend to use floating point, they should learn 
about it, too.


I do agree that most programmers don't know how to use floats properly 
and aren't even aware that they can be used improperly. But that's an 
educational problem, not a problem with floating point.



This would be my top priority request for Haskell':
require that the default Int type check for overflow on all
operations where overflow is possible,
provide Int32, Int64 for people who actually *want* wraparound.


I don't understand this. Why use a type which can overflow in the first 
place? Why not use Integer?



You just have to check for exceptional conditions.


Why should it be *MY* job to check for exceptional conditions?


It shouldn't unless you use a type whose contract specifies that it's 
your job to check for them. Which is the case for Int, Float and Double. 
It's not the case for Integer and Rational.



If you think that, you do not understand floating point.
x+(y+z) == (x+y)+z fails even though there is nothing exceptional about
any of the operands or any of the results.


For all practical purposes, the semantics of (==) is not well defined 
for floating point numbers. That's one of the first things I used to 
teach my students about floats: *never* compare them for equality. So in 
my view, your example doesn't fail, it's undefined. That Haskell 
provides (==) for floats is unfortunate.



I have known a *commercial* program blithely invert a singular matrix
because of this kind of thing, on hardware where every kind of arithmetic
exception was reported.  There were no exceptional conditions, the
answer was just 100% wrong.


If they used (==) for floats, then they simply didn't know what they 
were doing. The fact that a program is commercial doesn't mean it's any 
good.


I guess it trapped on creating denormals. But again, presumably the 
reason the student used doubles here was because he wanted his program 
to be fast. Had he read just a little bit about floating point, he 
would have known that it is *not* fast under certain conditions.


Well, no.  Close, but no cigar.
(a) It wasn't denormals, it was underflow.


Creating denormals and underflow are equivalent. Denormals are created 
as a result of underflow. A denormalised number is smaller than any 
representable normal number. When the result of an operation is too 
small to be represented by a normal number, IEEE arithmetic will either 
trap or return a denormal, depending on whether underflow is masked or not.



(b) The fact underflow was handled by trapping to the operating system,
which then completed the operation by writing a 0.0 to the appropriate
register, is *NOT* a universal property of floating point, and is *NOT*
a universal property of IEEE floating point.  It's a fact about that
particular architecture, and I happened to have the manual and he didn't.


IIRC, underflow is a standard IEEE exception.


(c) x*x = 0 when x is small enough *is* fast on a lot of machines.


Only if underflow is masked (which it probably is by default). Although 
I vaguely recall that denormals were/are slower on some architectures.


As it were, he seems to have applied what he thought was an 
optimisation (using floating point) without knowing anything about it. 
A professional programmer would get (almost) no sympathy in such a 
situation.


You must be joking.  Almost everybody working with neural nets uses 
floating point.


[...]

If you are aware of any neural net software for general purpose hardware
done by programmers you consider competent that *doesn't* use floating
point, I would be interested to hear about it.


I'm not. But programmers I consider competent for this particular task 
know how to use floating point. Your student didn't but that's ok for a 
student. He had someone he could ask so hopefully, he'll know next time.


To be clear, I do not mean to 

Re: [Haskell-cafe] Re: A question about monad laws

2008-02-13 Thread Roman Leshchinskiy

[EMAIL PROTECTED] wrote:

Trialog:
Roman Leshchinskiy writes:

Richard A. O'Keefe wrote:



[EMAIL PROTECTED] wrote:

Would you say that *no* typical floating-point software is reliable?


With lots of hedging and clutching of protective amulets around the
word "reliable", of course not.  What I *am* saying is that
(a) it's exceptionally HARD to make reliable because although the
operations are well defined and arguably reasonable they do NOT obey the
laws that school and university mathematics teach us to expect them to obey


Ints do not obey those laws, either. It is not exceptionally hard to 
write reliable software using ints. You just have to check for 
exceptional conditions. That's also the case for floating point.
That said, I suspect that 90% of programs that use float and double 
would be much better off using something else. The only reason to use 
floating point is performance.


I have a bit different perspective...
First, when I see the advice "use something else", I always ask "what",
and I get an answer very, very rarely... Well? What do you propose?


For Haskell, Rational seems like a good choice. The fact that the 
standard requires defaulting to Double is quite unfortunate and 
inconsistent, IMO; the default should be Rational. Float and Double 
shouldn't even be in scope without an explicit import. There really is 
no good reason to use them unless you are writing a binding to existing 
libraries or really need the performance.



Then, the problem is not always pathological, in the sense of exceptional
conditions. There are touchy points related to the stability of the
algorithms for the solution of differential equations. There are doubtful
random number generators in Monte-Carlo business. There are ill-conditioned
matrices and screwed-up iterative definitions. Algorithms work, work, and
ultimately explode or produce rubbish. The laws which get broken are
almost respected for a long time, and then we have the Bald Man (Sorites)
paradox...
RAO'K very wisely says that people should avoid reinventing wheels, and
they should use established packages, written by people who know.


Yes, I completely agree with that (even though my original email 
probably didn't sound as if I did). My point was that (a) most people 
don't need floating point and (b) those who do need it should learn how 
to use it.



The problem *here* is that we would like to have something fabulous in
Haskell - for example...


I think we mostly have it already.

Roman



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-13 Thread Richard A. O'Keefe

On 14 Feb 2008, at 6:01 pm, Roman Leshchinskiy wrote:
I don't understand this. Why use a type which can overflow in the  
first place? Why not use Integer?


Why is this hard to understand?
Dijkstra's classic A Discipline of Programming distinguishes
several kinds of machine.  I'm quoting from memory here.

A Sufficiently Large Machine is one which can run your program
to completion giving correct answers all the way.

An Insufficiently Large Machine is one which can't do that and
silently goes crazy instead.

A Hopefully Sufficiently Large Machine is one which *either*
does what a Sufficiently Large Machine would have *or* reports
that it couldn't.

The good thing about an SLM is that it always gives you right answers
(assuming your program is correct).  The bad thing is that you can't
afford it.

The good thing about an ILM is that you can afford it.  The bad thing
is that you can't trust it.

The great thing about a HSLM is that you can both trust and afford it.

Presumably the reason for having Int in the language at all is speed.
As people have pointed out several times on this list to my knowledge,
Integer performance is not as good as Int performance, not hardly,
and it is silly to pay that price if I don't actually need it.

The thing about using SafeInt is that I should get the *same* space and
speed from SafeInt as I do from Int, or at the very least the same space
and far better speed than Integer, while at the same time EITHER the
results are the results I would have got using Integer *OR* the system
promises to tell me about it, so that I *know* there is a problem.

SafeInt is what you should use when you *THINK* your results should all
fit in a machine int but aren't perfectly sure.  (And this is nearly all
the time.)


Int is what you should use when you don't give a damn what the results
are as long as you get them fast.  (But in that case, why not use C or
assembler?)
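
(A hedged aside: no SafeInt exists in the standard libraries. A software
sketch of the contract described above, checked via Integer rather than
the hardware flags O'Keefe has in mind, and therefore slower, might look
like this:)

newtype SafeInt = SafeInt Int deriving (Eq, Ord, Show)

-- Do the arithmetic in Integer, then check the result still fits an Int:
liftChecked :: (Integer -> Integer -> Integer) -> SafeInt -> SafeInt -> SafeInt
liftChecked op (SafeInt x) (SafeInt y)
  | r == toInteger (fromInteger r :: Int) = SafeInt (fromInteger r)
  | otherwise = error "SafeInt: arithmetic overflow"
  where r = toInteger x `op` toInteger y

instance Num SafeInt where
  (+) = liftChecked (+)
  (-) = liftChecked (-)
  (*) = liftChecked (*)
  negate (SafeInt x)
    | x == minBound = error "SafeInt: arithmetic overflow"
    | otherwise     = SafeInt (negate x)
  abs x = if x < SafeInt 0 then negate x else x
  signum (SafeInt x) = SafeInt (signum x)
  fromInteger n
    | n == toInteger (fromInteger n :: Int) = SafeInt (fromInteger n)
    | otherwise = error "SafeInt: literal out of range"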





You just have to check for exceptional conditions.

Why should it be *MY* job to check for exceptional conditions?


It shouldn't unless you use a type whose contract specifies that  
it's your job to check for them. Which is the case for Int, Float  
and Double.


Wrong.  You're confusing two things here.  One is Float and Double,
where we get in serious trouble WITHOUT ANY EXCEPTIONAL CONDITIONS IN
SIGHT.  The other is Int overflow.  There may also be an equivocation on
'checking'.  When was the last time you proved that a large program
would not incur an integer overflow?  When was the last time you proved
that a library package would not incur integer overflow provided it was
called in accord with its contract?  When was the last time you even
*found* a sufficiently informative contract in someone else's Haskell
code?

The checking I am talking about is done by the hardware at machine
speeds and provides *certainty* that overflow did not occur.


It's not the case for Integer and Rational.


If you think that, you do not understand floating point.
x+(y+z) == (x+y)+z fails even though there is nothing exceptional
about any of the operands or any of the results.


For all practical purposes, the semantics of (==) is not well  
defined for floating point numbers.


With respect to IEEE arithmetic, wrong.

That's one of the first things I used to teach my students about  
floats: *never* compare them for equality.


That's one of the warning signs I watch out for.  "Never compare
floats for equality" is a sure sign of someone who thinks they know
about floats but doesn't.


So in my view, your example doesn't fail, it's undefined. That  
Haskell provides (==) for floats is unfortunate.


The operation is well defined and required by the IEEE standard.




If they used (==) for floats, then they simply didn't know what  
they were doing. The fact that a program is commercial doesn't mean  
it's any good.


Er, we weren't talking about (==) for floats; I don't know where you
got that.  I never said it was any good; quite the opposite.  My point
is that bad software escaped into the commercial market because floating
point doesn't follow the laws people expect it to.


I guess it trapped on creating denormals. But again, presumably  
the reason the student used doubles here was because he wanted  
his program to be fast. Had he read just a little bit about  
floating point, he would have known that it is *not* fast under  
certain conditions.

Well, no.  Close, but no cigar.
(a) It wasn't denormals, it was underflow.


Creating denormals and underflow are equivalent.


No they are not.  Underflow in this sense occurs when the result is too
small to be even a denormal.  (The IEEE exceptions Underflow and Inexact
are not the same.  Creating denormals is likely to trigger Inexact but
should not trigger Underflow.  I am talking only about a condition that
triggered Underflow.)

Denormals are created as a result of underflow. 

Re: [Haskell-cafe] Re: A question about monad laws

2008-02-13 Thread Richard A. O'Keefe


On 14 Feb 2008, at 2:28 pm, Roman Leshchinskiy wrote:


Richard A. O'Keefe wrote:

On 12 Feb 2008, at 5:14 pm, [EMAIL PROTECTED] wrote:

Would you say that *no* typical floating-point software is reliable?

With lots of hedging and clutching of protective amulets around the
word "reliable", of course not.  What I *am* saying is that
(a) it's exceptionally HARD to make reliable because although the
operations are well defined and arguably reasonable they do NOT obey the
laws that school and university mathematics teach us to expect them to obey


Ints do not obey those laws, either.


They obey a heck of a lot more of them.
Any combination of Ints using (+), (-), (*), and negate
is going to be congruent to the mathematically correct answer modulo 2**n
for some n.  In particular, (+) is associative for Ints.


It is not exceptionally hard to write reliable software using ints.


I did my BSc and MSc computing on a B6700, where the hardware *always*
notified you in case of an integer overflow.  In that case, it was
perfectly easy to write reliable software.  You just pretended that
the type 'INTEGER' in your program meant 'mathematical integer', and if
that got you into trouble, the machine was certain to tell you about it.

Using languages that do not check for integer overflow, even on hardware
(like, as it happens, both the different machines on my desk) that makes
it cheap to do so, I *have* had trouble with multiplying two positive
integers and getting a negative result, and also with a program that
went into an infinite loop because it happened to multiply two positive
numbers and get another positive number that was smaller than the one it
started with.  There's also the problem that dividing two negative
integers can give you a negative result.  And one problem I definitely
ran into was a Pascal 'for' loop with positive bounds that ran forever.

When I contrast the amount of manual checking I have to do when writing C
(or, for that matter, Haskell) with the amount of manual checking I have
to do when using Smalltalk or SETL or Lisp, and when I remember how life
was *better* for me in this respect back in the 70s, well, it doesn't
make me happy.

This would be my top priority request for Haskell':
require that the default Int type check for overflow on all
operations where overflow is possible,
provide Int32, Int64 for people who actually *want* wraparound.

I've been told that there was a day when there was serious trouble in the
US financial markets because the volume of trade exceeded the 32-bit
signed integer limit, and many programs started giving nonsense results.
But the Smalltalk programs just kept powering on...


You just have to check for exceptional conditions.


Why should it be *MY* job to check for exceptional conditions?
That's the *MACHINE*'s job.  When you bought your computer, you paid
for hardware that will do this job for you!


That's also the case for floating point.


If you think that, you do not understand floating point.
x+(y+z) == (x+y)+z fails even though there is nothing exceptional about
any of the operands or any of the results.

I have known a *commercial* program blithely invert a singular matrix
because of this kind of thing, on hardware where every kind of
arithmetic exception was reported.  There were no exceptional
conditions, the answer was just 100% wrong.

I guess it trapped on creating denormals. But again, presumably the  
reason the student used doubles here was because he wanted his  
program to be fast. Had he read just a little bit about floating  
point, he would have known that it is *not* fast under certain  
conditions.


Well, no.  Close, but no cigar.
(a) It wasn't denormals, it was underflow.
(b) The fact underflow was handled by trapping to the operating system,
    which then completed the operation by writing a 0.0 to the
    appropriate register, is *NOT* a universal property of floating
    point, and is *NOT* a universal property of IEEE floating point.
    It's a fact about that particular architecture, and I happened to
    have the manual and he didn't.
(c) x*x = 0 when x is small enough *is* fast on a lot of machines.

As it were, he seems to have applied what he thought was an
optimisation (using floating point) without knowing anything about
it. A professional programmer would get (almost) no sympathy in
such a situation.


You must be joking.  Almost everybody working with neural nets uses
floating point.  (One exception I came across was some people using
special vector processor hardware that didn't *have* floating point.
These days, you could probably use a programmable GPU to good effect.)

For neural net calculations, you have to do lots of dot products.
When this example happened, machines were commonly 32 bit, not 64 bit.
So doing the calculations in integers would
 (a) have limited him to 16 bits for the coefficients,
     instead of double's 

[Haskell-cafe] Re: A question about monad laws

2008-02-12 Thread apfelmus

Yitzchak Gale wrote:

Ben Franksen wrote:

...and the Unimo paper[1] explains how to easily write a 'correct' ListT.


Are you sure? Maybe I am missing something, but I don't
see any claim that the Unimo ListT satisfies the laws any
more than the old mtl ListT.



ListT-Done-Right could also be defined via the Unimo
framework, and then it would satisfy the monad laws.


The list monad transformer implemented with Unimo (figure 13) is 
different from  ListT m a = m [a]  (figure 11 for reference).


Note that I say the list monad transformer.

I don't understand what's so special about "ListT m does not fulfill the 
monad laws"; it just shows that naïvely using  m [a]  to implement the 
list monad transformer is incorrect for general  m . In other words, 
there is a big bug in  Control.Monad.List  and that's all there is to it.
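
(A hedged sketch of the naive transformer under discussion, essentially
what Control.Monad.List provided; the Functor and Applicative instances
are included only to keep the sketch compilable on current GHCs.)

import Control.Monad (liftM, ap)

newtype ListT m a = ListT { runListT :: m [a] }

instance Monad m => Functor (ListT m) where
  fmap = liftM

instance Monad m => Applicative (ListT m) where
  pure a = ListT (return [a])
  (<*>)  = ap

instance Monad m => Monad (ListT m) where
  return = pure
  m >>= k = ListT $ do
    as  <- runListT m               -- outer effects, once
    bss <- mapM (runListT . k) as   -- k's effects, grouped per element
    return (concat bss)

-- For a noncommutative inner monad m, the grouping of effects differs
-- between (m >>= f) >>= g and m >>= (\x -> f x >>= g): that is the
-- associativity failure described above.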



Regards,
apfelmus



Re: [Haskell-cafe] Re: A question about monad laws

2008-02-12 Thread Yitzchak Gale
I wrote:
 Floating point intentionally trades accuracy for speed,

Jerzy Karczmarczuk wrote:
 1. This is not a matter of a possible trade-off or not. In
   scientific/engineering computation there is really no choice, since
   you have to compute logarithms, trigonometric functions, etc., and
   some inaccuracy is unavoidable. Of course, one may use intervals, and
   other extremely costly stuff, but if the stability of the algorithms
   is well controlled, and in the normal case it is (especially if the
   basic arithmetic has some extra control bits to do the rounding),
   the issue is far from being fatal.

Agreed. That is what I meant by trade-off. And I am not trying
to say at all that it is wrong to use it. Life is full of trade-offs.

 It used to be true - and may still be - that the engineers
 who implement floating point in the hardware of our
 CPUs would never fly on commercial airliners. Would you?

 2. The story about engineers not flying commercial planes is largely
   anecdotal, and you know that. Repeating it here doesn't change much.

I heard it from someone who personally worked with one such team
of engineers about ten years ago.

 Would you entrust your country's nuclear arsenal to an
 automated system that depends on floating point arithmetic?

 3. Nuclear arsenal is never really entrusted to an automated system,
   because of reasons much beyond the fl.point inaccuracies.

Yes, of course, no one is really that stupid. Are they?

   On the other hand, in all those software one has to deal with
   probabilities, and with imprecise experimental data, so even if for God
   knows which purpose everything used exact algebraic numbers, or
   controlled transcendental extensions, the input imprecision would kill
   all the sense of infinitely precise computations thereupon.
 4. The non-reliability of engineering software has many more important
   reasons, sometimes incredibly stupid, such as the confusion between
   metric and English units in the Mars Climate Orbiter crash...
   The Ariane 5 crash was the result not of the floating-point computation
   but of the conversion to signed 16-bit numers (from a 64bit double).

Yes, high reliability is very hard. There are many factors that
make it hard; floating point is undeniably one of them.
Again - that doesn't mean that floating point should not be used.
It is a powerful and important tool, as you say.

I was once part of a software project in which people's lives
might depend on there not being any bugs in my code.
It was an experience that changed my life forever,
and that is probably one of the factors that contributes
to my interest in Haskell.

Regards,
Yitz


[Haskell-cafe] Re: A question about monad laws

2008-02-12 Thread jerzy . karczmarczuk
Yitzchak Gale writes: 


Jerzy Karczmarczuk wrote:

Would you say that *no* typical floating-point software is reliable?


It depends on how you define reliable. 


Floating point intentionally trades accuracy for speed,
... 


It used to be true - and may still be - that the engineers
who implement floating point in the hardware of our
CPUs would never fly on commercial airliners. Would you? 


Would you entrust your country's nuclear arsenal to an
automated system that depends on floating point arithmetic?


1. This is not a matter of a possible trade-off or not. In scientific/
 engineering computation there is really no choice, since you have to
 compute logarithms, trigonometric functions, etc., and some inaccuracy is
 unavoidable. Of course, one may use intervals, and other extremely
 costly stuff, but if the stability of the algorithms is well controlled,
 and in the normal case it is (especially if the basic arithmetic has some
 extra control bits to do the rounding), the issue is far from being
 fatal. 


2. The story about engineers not flying commercial planes is largely
 anecdotal, and you know that. Repeating it here doesn't change much. 


3. Nuclear arsenal is never really entrusted to an automated system,
 because of reasons much beyond the fl.point inaccuracies.
 On the other hand, in all those software one has to deal with
 probabilities, and with imprecise experimental data, so even if for God
 knows which purpose everything used exact algebraic numbers, or
 controlled transcendental extensions, the input imprecision would kill
 all the sense of infinitely precise computations thereupon. 


4. The non-reliability of engineering software has many more important
 reasons, sometimes incredibly stupid, such as the confusion between
 metric and English units in the Mars Climate Orbiter crash...
 The Ariane 5 crash was the result not of the floating-point computation
 but of the conversion to signed 16-bit numbers (from a 64bit double). 


5. Of course, in the original posting case, the underlying math/logic is
 discrete, and has no similar inaccuracies, so the two worlds should
 not be confounded... Here, if some laws get broken, it is the result of
 bad conventions, which usually can be easily avoided. 

Jerzy Karczmarczuk 




Re: [Haskell-cafe] Re: A question about monad laws

2008-02-12 Thread Yitzchak Gale
Ben Franksen wrote:
 ...and the Unimo paper[1] explains how to easily write a 'correct' ListT.
 BTW, Unimo is an extreme case of the monad laws holding only w.r.t.
 the 'right' equality, i.e. in the laws m == m' is to be understood as
   observe_monad m == observe_monad m'
 (and even this '==' is of course /not/ the Eq class method but a semantic
 equality.)
 [1] http://web.cecs.pdx.edu/~cklin/papers/unimo-143.pdf

Are you sure? Maybe I am missing something, but I don't
see any claim that the Unimo ListT satisfies the laws any
more than the old mtl ListT. It looks to me like Unimo is
just an attempt to provide an easier way to create, use,
and understand monads, not a change in their semantics.

ListT-Done-Right could also be defined via the Unimo
framework, and then it would satisfy the monad laws.

Thanks,
Yitz


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-12 Thread Yitzchak Gale
Jerzy Karczmarczuk wrote:
 Would you say that *no* typical floating-point software is reliable?

It depends on how you define reliable.

Floating point intentionally trades accuracy for speed,
leaving it to the user to worry about round-off errors.
It is usually not too hard to get the probability of
failure somewhat low in practice, if you don't require
a proof.

It used to be true - and may still be - that the engineers
who implement floating point in the hardware of our
CPUs would never fly on commercial airliners. Would you?

Would you entrust your country's nuclear arsenal to an
automated system that depends on floating point arithmetic?

Regards,
Yitz


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Andrew Butterfield

Andrew Butterfield wrote:

let m denote the list monad (hypothetically). Let's instantiate:

return :: x -> [x]
return x = [x,x]

(>>=) :: [x] -> (x -> [y]) -> [y]
xs >>= f   =  concat ((map f) xs)

Let g n = [show n]

Here  (return 1 >>= g) [1,2,3] = [1,1,1,1,1,1]
but  g [1,2,3] = [1,2,3],
thus violating the first monad law:  return a >>= f  =  f a


I messed this up - I was trying for the simplest example I could get!
Apologies.


Start over:

Program 

module BadMonad where
import Monad

newtype MyList t = MyList [t]

instance Show t => Show (MyList t) where
 show (MyList xs) = show xs

unmylist (MyList xs) = xs

myconcat xs = MyList (concat (map unmylist xs))

instance Monad MyList  where
 return x  =  MyList [x,x]
 (MyList xs) >>= f  =  myconcat ((map f) xs)

i2s :: Int -> MyList Char
i2s x = MyList (show x)

m = i2s 9

Hugs transcript 

BadMonad> m
"9" :: MyList Char
BadMonad> m >>= return
"99" :: MyList Char

We violate the second law (Right Identity, m = m >>= return )

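For what it's worth, a violation like this can also be caught mechanically
with a property test. A sketch (assuming QuickCheck, plus Eq and Arbitrary
instances for MyList that are not part of the program above):

import Test.QuickCheck

instance Eq t => Eq (MyList t) where
  MyList xs == MyList ys  =  xs == ys

instance Arbitrary t => Arbitrary (MyList t) where
  arbitrary = fmap MyList arbitrary

-- Right identity says m >>= return == m; for MyList it fails on any
-- non-empty list, because return duplicates every element.
prop_rightIdentity :: MyList Int -> Bool
prop_rightIdentity m = (m >>= return) == m

Running quickCheck prop_rightIdentity falsifies the property almost
immediately.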

--

Andrew Butterfield Tel: +353-1-896-2517 Fax: +353-1-677-2204
Foundations and Methods Research Group Director.
Course Director, B.A. (Mod.) in CS and ICT degrees, Year 4.
Department of Computer Science, Room F.13, O'Reilly Institute,
Trinity College, University of Dublin, Ireland.
   http://www.cs.tcd.ie/Andrew.Butterfield/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Felipe Lessa
On Feb 11, 2008 1:35 PM, Andrew Butterfield
[EMAIL PROTECTED] wrote:
 Hugs> 1.0 + (2.5e-15 + 2.5e-15)
 1.00000000000001 :: Double
 Hugs> (1.0 + 2.5e-15) + 2.5e-15
 1.0 :: Double

Prelude> (1e30 + (-1e30)) + 1
1.0
Prelude> 1e30 + ((-1e30) + 1)
0.0

I love this example from David Goldberg
(http://docs.sun.com/source/806-3568/ncg_goldberg.html).

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread jerzy . karczmarczuk
Richard A. O'Keefe comments: 

  [floating point addition is not associative] 


And this is an excellent example of why violating expected laws is BAD.
The failure of floating point addition to be associative means that  there
are umpteen ways of computing polynomials, for example, and doing it  
different ways will give you different answers.  This is *not* a good 
way to write reliable software. 


[Then we see the scalar product whose value *may* depend on the evaluation order] 


I wonder...
Would you say that *no* typical floating-point software is reliable? 

Jerzy Karczmarczuk 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Wolfgang Jeltsch
Am Montag, 11. Februar 2008 16:35 schrieb Andrew Butterfield:
 This is precisely Jerzy's point - you can have as many mathematical laws
 as you like, but there is no guarantee that a programming language's
 implementation will satisfy them.

But people writing instances of type classes should take care to satisfy
the laws, since other libraries will most likely rely on them.

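A concrete way to see that dependence (an illustrative sketch, not from the
original message): any refactoring that flattens a nested do-block silently
appeals to the associativity law, so the two definitions below are
interchangeable only when the law holds.

nested, flat :: Monad m => m a -> (a -> m b) -> (b -> m c) -> m c
nested m f g = do y <- do x <- m
                          f x
                  g y
flat m f g = do x <- m
                y <- f x
                g y

-- nested desugars to (m >>= f) >>= g, flat to m >>= \x -> f x >>= g;
-- the associativity law is exactly the claim that these agree.
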
Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Andrew Butterfield

apfelmus wrote:

Deokjae Lee wrote:

Tutorials about monads mention the monad axioms or monad laws. The
tutorial "All About Monads" says that "It is up to the programmer to
ensure that any Monad instance he creates satisfies the monad laws."

The following is one of the laws.

(x >>= f) >>= g == x >>= (\v -> f v >>= g)

However, this seems to me a kind of mathematical identity. If it is a
mathematical identity, a programmer need not care about this law to
implement a monad. Can anyone give me an example implementation of a
monad that violates this law?


I will be mean by asking the following counter question:

  x + (y + z) = (x + y) + z

is a mathematical identity. If it is a mathematical identity, a 
programmer need not care about this law to implement addition (+). Can 
anyone give me an example implementation of addition that violates 
this law?

Hugs> 1.0 + (2.5e-15 + 2.5e-15)
1.00000000000001 :: Double
Hugs> (1.0 + 2.5e-15) + 2.5e-15
1.0 :: Double

Hugs, on a Pentium 4 machine running Windows XP SP2 (all of which is
largely irrelevant!)


This is precisely Jerzy's point - you can have as many mathematical laws
as you like, but there is no guarantee that a programming language's
implementation will satisfy them.

The above example is due to rounding errors and arises because the 
Double type in Haskell (or C, C++, whatever)
is a finite (rational) approximation to real numbers which are infinite 
(platonic) entities.


Associativity of addition holds for platonic reals, but not for their
widely used IEEE-standard approximate implementation on standard hardware.

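The canonical decimal example of that approximation at work (an aside, not
from the original message; shown in GHCi, which prints enough digits to
distinguish the values):

Prelude> 0.1 + 0.2 == (0.3 :: Double)
False
Prelude> 0.1 + 0.2 :: Double
0.30000000000000004
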
For monads, the situation is slightly different.
Haskell describes the signature of the monadic operators

return :: x -> m x
(>>=) :: m a -> (a -> m b) -> m b

but cannot restrict how you actually instantiate these.
It admonishes you by stating that you should obey the relevant laws, but 
cannot enforce this.


(Of course, technically, if you implement a dodgy monad it's not really a
monad at all, but something different with operators of the same name and
types - just as, technically, values of type Double are not real numbers,
or true rationals either!)

let m denote the list monad (hypothetically). Let's instantiate:

return :: x -> [x]
return x = [x,x]

(>>=) :: [x] -> (x -> [y]) -> [y]
xs >>= f   =  concat ((map f) xs)

Let g n = [show n]

Here  (return 1 >>= g ) [1,2,3] = [1,1,1,1,1,1]
but  g [1,2,3] = [1,2,3],
thus violating the first monad law:   return a >>= f  =  f a




Andrew Butterfield Tel: +353-1-896-2517 Fax: +353-1-677-2204
Foundations and Methods Research Group Director.
Course Director, B.A. (Mod.) in CS and ICT degrees, Year 4.
Department of Computer Science, Room F.13, O'Reilly Institute,
Trinity College, University of Dublin, Ireland.
   http://www.cs.tcd.ie/Andrew.Butterfield/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Richard A. O'Keefe

On 12 Feb 2008, at 4:35 am, Andrew Butterfield wrote:
[floating point addition is not associative]

And this is an excellent example of why violating expected laws is BAD.
The failure of floating point addition to be associative means that there
are umpteen ways of computing polynomials, for example, and doing it
different ways will give you different answers.  This is *not* a good way
to write reliable software.  I did enough Numerical Analysis papers in my
pre-PhD years to get quite scared sometimes.  Oh, here's a good one:

dot1 [] [] = 0
dot1 (x:xs) (y:ys) = x*y + dot1 xs ys

Obvious naive code for dot product.  Switch over to tail recursion

dot2 xs ys = aux xs ys 0
  where aux [] []         s = s
        aux (x:xs) (y:ys) s = aux xs ys (s + x*y)

The problem is that (a) in floating point arithmetic these two functions
give DIFFERENT answers, and (b) NEITHER of them is wrong (arguably,
neither of them is right either).  For integers, of course, they must
agree (if I haven't made any silly mistakes).

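To see them disagree concretely (an illustration with my own numbers, not
from the original message), pick a sum where a large term cancels:

xs, ys :: [Double]
xs = [1, 1e30, -1e30]
ys = [1, 1,    1]

-- dot1 adds right to left:  1 + (1e30 + (-1e30)) = 1 + 0.0 = 1.0
-- dot2 adds left to right:  ((0 + 1) + 1e30) + (-1e30)     = 0.0
-- (in the second sum the 1 is absorbed when added to 1e30)
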
This kind of thing makes it incredibly hard to think about numerical
calculations.

Basically, laws are stated so that implementors will make stuff that
clients don't have to think about until their brains melt.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Ben Franksen
Dan Piponi wrote:
 On Feb 11, 2008 9:46 AM, Miguel Mitrofanov [EMAIL PROTECTED] wrote:
 It's well known that the ListT m monad violates this law in general
 (though it satisfies it for some particular monads m). For example,
 
 I went through this example in quite a bit of detail a while ago and
 wrote it up here:
 http://sigfpe.blogspot.com/2006/11/why-isnt-listt-monad.html . I tried
 to show not just why the monad laws fails to hold for ListT [], but
 also show how it almost holds.

...and the Unimo paper[1] explains how to easily write a 'correct' ListT.
BTW, Unimo is an extreme case of the monad laws holding only w.r.t.
the 'right' equality, i.e. in the laws m == m' is to be understood as

  observe_monad m == observe_monad m'

(and even this '==' is of course /not/ the Eq class method but a semantic
equality.)

[1] http://web.cecs.pdx.edu/~cklin/papers/unimo-143.pdf

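To see laws-up-to-observation in miniature (an editorial sketch, much
cruder than the Unimo construction itself): represent binds as bare syntax,
so the associativity law fails for the raw trees and only holds after
running the observation function.

{-# LANGUAGE GADTs #-}

data Prog a where
  Return :: a -> Prog a
  Bind   :: Prog b -> (b -> Prog a) -> Prog a

instance Monad Prog where
  return = Return
  (>>=)  = Bind

-- (m `Bind` f) `Bind` g and m `Bind` (\v -> f v `Bind` g) are distinct
-- trees, so the laws fail for structural equality; they hold w.r.t.
-- this evaluator, which plays the role of observe_monad:
observe :: Prog a -> a
observe (Return a) = a
observe (Bind m f) = observe (f (observe m))
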
Cheers
Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Jonathan Cast

On 11 Feb 2008, at 7:52 AM, Arnar Birgisson wrote:


Hi all,

On Feb 11, 2008 3:14 PM, apfelmus [EMAIL PROTECTED] wrote:

I will be mean by asking the following counter question:

   x + (y + z) = (x + y) + z

is a mathematical identity. If it is a mathematical identity, a
programmer need not care about this law to implement addition (+). Can
anyone give me an example implementation of addition that violates  
this law?


Depends on what you mean by addition. In general, algebraists call
any associative and commutative operation on a set "addition", and
nothing else. From that POV, there is by definition no addition that
violates this law.


I agree.  The Num Double instance should be expelled from the Prelude  
immediately.


jcc

(What?  Haskell has a Float type?)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Arnar Birgisson
Hi all,

On Feb 11, 2008 3:14 PM, apfelmus [EMAIL PROTECTED] wrote:
 I will be mean by asking the following counter question:

x + (y + z) = (x + y) + z

 is a mathematical identity. If it is a mathematical identity, a
 programmer need not care about this law to implement addition (+). Can
 anyone give me an example implementation of addition that violates this law?

Depends on what you mean by addition. In general, algebraists call
any associative and commutative operation on a set "addition", and
nothing else. From that POV, there is by definition no addition that
violates this law.

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread apfelmus

Deokjae Lee wrote:

Tutorials about monads mention the monad axioms or monad laws. The
tutorial "All About Monads" says that "It is up to the programmer to
ensure that any Monad instance he creates satisfies the monad laws."

The following is one of the laws.

(x >>= f) >>= g == x >>= (\v -> f v >>= g)

However, this seems to me a kind of mathematical identity. If it is a
mathematical identity, a programmer need not care about this law to
implement a monad. Can anyone give me an example implementation of a
monad that violates this law?


I will be mean by asking the following counter question:

  x + (y + z) = (x + y) + z

is a mathematical identity. If it is a mathematical identity, a 
programmer need not care about this law to implement addition (+). Can 
anyone give me an example implementation of addition that violates this law?



The only difference here is that the associative law for addition is 
obvious to you, whereas the associative law for monads is not 
obvious to you (yet). As Neil mentioned, maybe


  http://www.haskell.org/haskellwiki/Monad_Laws

can help to convince yourself that the associative law for monads should be
obvious, too.


In short, the reason for its obviousness is the interpretation of >>= in
terms of sequencing actions with side effects. The law is probably best
demonstrated with its special case


  x >> (y >> z) = (x >> y) >> z

In other words, it signifies that it's only the sequence of x,y,z and 
not the nesting that matters.

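For instance (an aside, using IO as the monad):

Prelude> putStr "a" >> (putStr "b" >> putStr "c")
abc
Prelude> (putStr "a" >> putStr "b") >> putStr "c"
abc

Both bracketings perform the same three actions in the same order.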


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe