Re: [Haskell-cafe] Question about arrows

2007-08-04 Thread Bryan Burgers
On 8/3/07, Lewis-Sandy, Darrell [EMAIL PROTECTED] wrote:

 Is there a class property of the Control.Arrow class that represents the
 evaluation of an arrow:

 eval :: (Arrow a) => a b c -> b -> c


You could always use the ArrowEval class. Granted, it doesn't exist,
so you'd have to write it, but it should serve your purposes.

class ArrowEval a where
  eval :: a b c -> b -> c

instance ArrowEval (-) where
  eval = ...

instance ArrowEval YourPartial where
  eval = ...


(That's off the cuff, so I probably didn't get the syntax correct.)
Does that fit your problem? But like Mr. Morris said, eval isn't valid
for every arrow, so you can only define instances of ArrowEval where
it is valid.
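A minimal runnable sketch of the idea (the class name ArrowEval is hypothetical, not part of Control.Arrow; the Kleisli-over-Maybe instance stands in for "YourPartial", with fromJust making the partiality explicit):

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Control.Arrow (Kleisli (..))
import Data.Maybe (fromJust)

-- Hypothetical class: arrows that can be run directly on an argument.
class ArrowEval a where
  eval :: a b c -> b -> c

-- Plain functions evaluate by ordinary application.
instance ArrowEval (->) where
  eval = id

-- Partial functions evaluate only when the computation succeeds.
instance ArrowEval (Kleisli Maybe) where
  eval f = fromJust . runKleisli f

main :: IO ()
main = do
  print (eval (+ (1 :: Int)) 41)                        -- function arrow
  print (eval (Kleisli (\x -> Just (x * 2 :: Int))) 21) -- partial arrow
```

Note that eval for an arbitrary arrow would have to be partial or fail in some arrow-specific way, which is exactly why it cannot live in the Arrow class itself.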

Bryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread apfelmus
Sebastian Sylvan wrote:
 Claus Reinke wrote:
 mytransaction = do {
  x0 <- readTVar xvar0
  x1 <- readTVar xvar1
  :
  xn <- readTVar xvarn
  return $ foo x0 x1 .. xn
 }

 ah, a concrete example. but isn't that the typical use case for ap?

 mytransaction = foo `liftM` r xvar0 `ap` r xvar1 ..
 where r = readTVar
 
 I really find it difficult to articulate why this isn't acceptable,
 because it seems so obvious to me! It's short yes, but I really don't
 think it's very clear...
 I have a hard time believing that anyone finds that natural.

I think it's entirely natural :)

Applicative functors (Control.Applicative) are the pattern behind this.
The notation may seem a little weird at first, but in the end, `ap` is a
kind of explicit function application, similar to ($). With the
notation from Control.Applicative, the line

  return foo `ap` r xvar0 `ap` r xvar1 `ap` ...

reads

  pure foo <*> r xvar0 <*> r xvar1 <*> ...

or

  foo <$> r xvar0 <*> r xvar1 <*> ...

In other words, instead of using juxtaposition to apply a function to an
argument, we use <*>. The type of `ap` is

  ap :: m (a -> b) -> m a -> m b

so that it can be thought of as a generalized function application where
the function is under a monad.

The difference to ($) is that <*> is left-associative and allows for
currying, i.e. <*> is like ($) used in the following way

  ((foo $ x0) $ x1) $ x2
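A small self-contained illustration of the same pattern, with Maybe standing in for STM (foo and the values are made up for the example):

```haskell
-- <*> applies a function inside the functor to an argument inside the
-- functor, so multi-argument application chains to the left, like ($).
foo :: Int -> Int -> Int -> Int
foo a b c = a + b * c

main :: IO ()
main = do
  print (pure foo <*> Just 1 <*> Just 2 <*> Just 3) -- generalized application
  print (foo <$> Just 1 <*> Just 2 <*> Just 3)      -- the same, with <$>
```

Both lines compute foo 1 2 3 = 7 under the functor.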


Note that you can even incorporate the TVar by defining your own
generalized function application:

  apT :: STM (a -> b) -> TVar a -> STM b
  apT f x = f `ap` readTVar x

Then,  mytransaction  reads

  mytransaction = return foo `apT` xvar0 `apT` xvar1 `apT` ...


Regards,
apfelmus



Re: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Bulat Ziganshin
Hello apfelmus,

Saturday, August 4, 2007, 12:18:33 PM, you wrote:

 Then,  mytransaction  reads

   mytransaction = return foo `apT` xvar0 `apT` xvar1 `apT` ...

how about a+b*(c+d)?
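For what it's worth, a + b*(c + d) comes out like this in applicative style; the sketch below uses IORef in place of TVar so it runs with base alone, but the shape is the same for STM:

```haskell
import Data.IORef

main :: IO ()
main = do
  [a, b, c, d] <- mapM newIORef [1, 2, 3, 4 :: Int]
  let r = readIORef
  -- The pure expression a + b * (c + d) keeps its nesting; only the
  -- argument positions are threaded through the functor.
  x <- (\a' b' c' d' -> a' + b' * (c' + d')) <$> r a <*> r b <*> r c <*> r d
  print x  -- 1 + 2 * (3 + 4) = 15
```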

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re: Re[4]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Claus Reinke

can you give the translation you mean? i don't have anything against
combinators, they just need to be easy to use, not forcing me to
think about where i should put one, just as i don't with lazy code and
imperative C code. and they shouldn't clutter the code, either. just try
to write a complex expression using C and these combinators


perhaps we're misunderstanding each other? if i define a monadic
assignment operator lifted over monadic lhs/rhs, i can already have
side-effecting lhs/rhs, including post-increments and content lookup.
that's what that example demonstrated, translating everything you
asked for. one can do the same with other operations, such as 
lifting numeric operations over monadic operands (though the 
current class hierarchy may require some ugly dummy class 
instances for that; also, non-overloaded Bool always requires 
some workaround). what is it that you think is missing?
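One way to read "lifting numeric operations over monadic operands" is a blanket Num instance for applicative values. This is a sketch of the idea, not an endorsement (in real code such an instance is dangerously general, which is part of the point being debated):

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Control.Applicative (liftA2)
import Data.IORef

-- Lift the numeric operations pointwise over any applicative functor;
-- numeric literals are lifted by fromInteger via pure.
instance (Applicative f, Num a) => Num (f a) where
  (+) = liftA2 (+)
  (-) = liftA2 (-)
  (*) = liftA2 (*)
  abs = fmap abs
  signum = fmap signum
  fromInteger = pure . fromInteger

main :: IO ()
main = do
  a <- newIORef (2 :: Int)
  b <- newIORef 3
  -- The side-effecting operands now read like a pure expression;
  -- 10 below really means (pure 10 :: IO Int).
  x <- readIORef a + readIORef b * 10
  print x  -- 2 + 3 * 10 = 32
```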


claus



Re[6]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Bulat Ziganshin
Hello Claus,

Saturday, August 4, 2007, 3:06:11 PM, you wrote:

 can you give the translation you mean? i don't have anything against
 combinators, they just need to be easy to use, not forcing me to
 think about where i should put one, just as i don't with lazy code and
 imperative C code. and they shouldn't clutter the code, either. just try
 to write a complex expression using C and these combinators

 perhaps we're misunderstanding each other? if i define a monadic
 assignment operator lifted over monadic lhs/rhs, i can already have
 side-effecting lhs/rhs, including post-increments and content lookup.
 that's what that example demonstrated, translating everything you
 asked for. one can do the same with other operations, such as 
 lifting numeric operations over monadic operands (though the 
 current class hierarchy may require some ugly dummy class 
 instances for that; also, non-overloaded Bool always requires 
 some workaround). what is it that you think is missing?

i know that it may be translated to anything, including pure
assembler. what i'm missing in current Haskell is USEFUL SYNTAX for
these expressions. adding tons of liftM and ap can't make me happy


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



[Haskell-cafe] How odd...

2007-08-04 Thread Andrew Coppin

> 0^2
0

> (0 :+ 0)^2
0 :+ 0

> 0**2
0

> (0 :+ 0)**2
NaN :+ NaN



(Is this a bug?)



[Haskell-cafe] Developing Programs and Proofs Spontaneously using GADT

2007-08-04 Thread Shin-Cheng Mu

I am curious about the possibility of developing Haskell programs
spontaneously with proofs about their properties and have the
type checker verify the proofs for us, in a way one would do in
a dependently typed language.

In the exercise below, I tried to redo part of the merge-sort
example in Altenkirch, McBride, and McKinna's introduction to
Epigram [1]: deal the input list into a binary tree, and fold
the tree by the function merging two sorted lists into one.
The property I am going to show is merely that the length of
the input list is preserved.

Given that dependent types and GADTs are such popular topics,
I believe the same must have been done before, and there may be
better ways to do it. If so, please give me some comments or
references. Any comments are welcomed.

 {-# OPTIONS_GHC -fglasgow-exts #-}

To begin with, we define the usual type-level representation
of natural numbers and lists indexed by their lengths.

 data Z = Z   deriving Show
 data S a = S a   deriving Show

 data List a n where
   Nil :: List a Z
   Cons :: a -> List a n -> List a (S n)

1. Append

To warm up, let us see the familiar append example.
Unfortunately, unlike Omega, Haskell does not provide type
functions. I am not sure which is the best way to
emulate type functions. One possibility is to introduce
the following GADT:

 data Plus m n k where           -- m + n = k
   PlusZ :: Plus Z n n
   PlusS :: Plus m n k -> Plus (S m) n (S k)

such that Plus m n k represents a proof that m + n = k.

Not having type functions, one of the possible ways to do
append is to have the function, given two lists of lengths
m and n, return a list of length k and a proof that m + n = k.
Thus, the type of append would be:

  append :: List a m -> List a n
  -> exists k. (List a k, Plus m n k)

In Haskell, the existential quantifier is mimicked by forall.
We define:

 data DepSum a p = forall i . DepSum (a i) (p i)

The term dependent sum is borrowed from the Omega tutorial
of Sheard, Hook, and Linger [2] (why is it called a sum,
not a product?).

The function append can thus be defined as:

 append :: List a m -> List a n -> DepSum (List a) (Plus m n)
 append Nil ys = DepSum ys PlusZ
 append (Cons x xs) ys =
    case (append xs ys) of
      DepSum zs p -> DepSum (Cons x zs) (PlusS p)

Another possibility is to provide append with a proof that m + n = k.
The type and definition of append would be:

 append :: Plus m n k -> List a m -> List a n -> List a k
 append PlusZ Nil ys = ys
 append (PlusS pf) (Cons x xs) ys = Cons x (append pf xs ys)

I thought the second append would be more difficult to
use: to append two lists, I have to provide a proof about
their lengths! It turns out that this append actually composes
easier with other parts of the program. We will come to this
later.
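The second append, assembled into a self-contained module (compiles with GHC's GADTs extension; toList and main are added here purely for demonstration):

```haskell
{-# LANGUAGE GADTs #-}

data Z = Z         deriving Show
data S a = S a     deriving Show

data List a n where
  Nil  :: List a Z
  Cons :: a -> List a n -> List a (S n)

data Plus m n k where           -- a proof that m + n = k
  PlusZ :: Plus Z n n
  PlusS :: Plus m n k -> Plus (S m) n (S k)

-- The caller supplies the proof; the list indices follow it.
append :: Plus m n k -> List a m -> List a n -> List a k
append PlusZ      Nil         ys = ys
append (PlusS pf) (Cons x xs) ys = Cons x (append pf xs ys)

-- Forget the index, for printing.
toList :: List a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (append (PlusS (PlusS PlusZ))       -- 2 + 1 = 3
                             (Cons 1 (Cons 2 Nil))
                             (Cons (3 :: Int) Nil)))
-- prints [1,2,3]
```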

2. Some Lemmas

Here are some lemmas represented as functions on terms. The
function incAssocL, for example, converts a proof of m + (1+n) = k
to a proof of (1+m) + n = k.

 incAssocL :: Plus m (S n) k -> Plus (S m) n k
 incAssocL PlusZ = PlusS PlusZ
 incAssocL (PlusS p) = PlusS (incAssocL p)

 incAssocR :: Plus (S m) n k -> Plus m (S n) k
 incAssocR (PlusS p) = plusMono p

 plusMono :: Plus m n k -> Plus m (S n) (S k)
 plusMono PlusZ = PlusZ
 plusMono (PlusS p) = PlusS (plusMono p)

For example, the following function revcat performs list
reversal by an accumulating parameter. The invariant we
maintain is m + n = k. To prove that the invariant holds,
we have to use incAssocL.

 revcat :: List a m -> List a n -> DepSum (List a) (Plus m n)
 revcat Nil ys = DepSum ys PlusZ
 revcat (Cons x xs) ys =
     case revcat xs (Cons x ys) of
       DepSum zs p -> DepSum zs (incAssocL p)

3. Merge

Apart from the proof manipulations, the function merge
is not very different from what one would expect:

 merge :: Ord a => List a m -> List a n -> DepSum (List a) (Plus m n)
 merge Nil ys = DepSum ys PlusZ
 merge (Cons x xs) Nil = append (Cons x xs) Nil
 merge (Cons x xs) (Cons y ys)
   | x <= y = case merge xs (Cons y ys) of
                DepSum zs p -> DepSum (Cons x zs) (PlusS p)
   | otherwise = case merge (Cons x xs) ys of
                   DepSum zs p -> DepSum (Cons y zs) (plusMono p)

The lemma plusMono is used to convert a proof of
m + n = k to a proof of m + (1+n) = 1+k.

4. Sized Trees

We also index binary trees by their sizes:

 data Tree a n where
   Nul :: Tree a Z
   Tip :: a -> Tree a (S Z)
   Bin :: Tree a n1 -> Tree a n ->
          (Plus p n n1, Plus n1 n k) -> Tree a k

The two trees given to the constructor Bin have sizes n1 and n
respectively. The resulting tree, of size k, comes with a proof
that n1 + n = k. Furthermore, we want to maintain an invariant
that n1 either equals n, or is bigger than n by one. This is
represented by the proof Plus p n n1. In the definition of
insertT later, p is either PlusZ or PlusS PlusZ.

5. Lists to Trees

The function insertT inserts an element into a tree:

 insertT :: a - Tree a n - Tree a (S n)
 insertT x Nul = Tip x
 insertT x (Tip y) = 

Re: Re[8]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Brandon S. Allbery KF8NH


On Aug 4, 2007, at 11:19 , Bulat Ziganshin wrote:


and use it. want to assign a=b/(c+d)? nothing can be easier! just
define one more macro!


And?  Everything above machine code is just macros at various  
levels of abstraction, including all our favorite higher-level  
abstractions.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH




Re: [Haskell-cafe] How odd...

2007-08-04 Thread Andrew Coppin

Paul Johnson wrote:

Andrew Coppin wrote:

 0**2
0

 (0 :+ 0)**2
NaN :+ NaN

(Is this a bug?)

According to the Standard Prelude,
#   x ** y   =  exp (log x * y)


I had a feeling this would be the cause.


 log 0
-Infinity


Oh. So... since when does Haskell know about infinity?

BTW, I recently had some code like this:

 foo x
   | x < 0  = ...
   | x == 0 = ...
   | x > 0  = ...

I was most perplexed when I got a non-exhaustive patterns exception...
It turns out there was a NaN in there. I forgot about that.



 exp (log 0 * 2)
0.0


Well that's interesting. I did wonder why it *doesn't* break in the real 
case...



On to the complex number case.  From the standard for Complex:

# log z  =  log (magnitude z) :+ phase z

# phase (0 :+ 0) = 0
This is a special case for the phase of zero.

# (x:+y) * (x':+y') =  (x*x'-y*y') :+ (x*y'+y*x')

> log (0 :+ 0)
(-Infinity) :+ 0.0

> log (0 :+ 0) * 2
(-Infinity) :+ NaN

Which is the source of the problem.  The imaginary part involves
multiplying (-Infinity) by the imaginary part of 2 (i.e. 0), which is NaN.
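The derivation can be replayed step by step in a few lines (Data.Complex ships with base; the Double lines show why the real-valued case survives while the complex one does not):

```haskell
import Data.Complex

main :: IO ()
main = do
  print (log 0 :: Double)                     -- -Infinity
  print (exp (log 0 * 2) :: Double)           -- exp (-Infinity) = 0.0
  print (log (0 :+ 0) :: Complex Double)      -- log (magnitude z) :+ phase z
  print (log (0 :+ 0) * 2 :: Complex Double)  -- the imaginary part is
                                              -- (-Infinity)*0 + 0*2 = NaN
```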


Um... why would infinity * 0 be NaN? That doesn't make sense...


So no, it's not a bug; it's according to the standard.


So I'm the only person who was expecting zero squared to be zero? (IMHO 
the standard should try to implement mathematical operations in a 
mathematically sensible way...)


While working through this I also came across the following case which 
technically is a bug:


 0 ** 0
1.0

 exp (log 0 * 0)
NaN

I suspect that GHCi is using a built-in exponentiation operator that 
doesn't quite conform to the standard in this case.


Now that really *is* odd...



Re: [Haskell-cafe] How odd...

2007-08-04 Thread Lennart Augustsson
Haskell doesn't know much about infinity, but Haskell implementations are
allowed to use IEEE floating point which has infinity.
And to get things right, there needs to be a few changes to the library to
do the right thing for certain numbers, this is not news.  In fact I filed a
bug report a while back about it.

  -- Lennart

On 8/4/07, Andrew Coppin [EMAIL PROTECTED] wrote:

 Paul Johnson wrote:
  Andrew Coppin wrote:
   0**2
  0
 
   (0 :+ 0)**2
  NaN :+ NaN
 
  (Is this a bug?)
  According to the Standard Prelude,
  #   x ** y   =  exp (log x * y)

 I had a feeling this would be the cause.

   log 0
  -Infinity

 Oh. So... since when does Haskell know about infinity?

 BTW, I recently had some code like this:

   foo x
 | x  0 = ...
 | x == 0 = ...
 | x  0 = ...

 I was most perplexed when I got a non-exhaustive patterns exception...
 It turns out there was a NaN in there. I forget about that.

   exp (log 0 * 2)
  0.0

 Well that's interesting. I did wonder why it *doesn't* break in the real
 case...

  On to the complex number case.  From the standard for Complex:
 
  # log z  =  log (magnitude z) :+ phase z
 
  # phase (0 :+ 0) = 0
  This is a special case for the phase of zero.
 
  # (x:+y) * (x':+y') =  (x*x'-y*y') :+ (x*y'+y*x')
 
   log (0 :+ 0)
  (-Infinity) :+ 0.0
 
   log (0 :+ 0) * 2
  (-Infinity) :+ NaN
 
  Which is the source of the problem.  The imaginary part involves
  multiplying (-Infinity) by the
  imaginary part of 2 (i.e. 0), which is NaN.

 Um... why would infinity * 0 be NaN? That doesn't make sense...

  So no, its not a bug, its according to the standard.

 So I'm the only person who was expecting zero squared to be zero? (IMHO
 the standard should try to implement mathematical operations in a
 mathematically sensible way...)

  While working through this I also came across the following case which
  technically is a bug:
 
   0 ** 0
  1.0
 
   exp (log 0 * 0)
  NaN
 
  I suspect that GHCi is using a built-in exponentiation operator that
  doesn't quite conform to the standard in this case.

 Now that really *is* odd...



Re[10]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Bulat Ziganshin
Hello Brandon,

Saturday, August 4, 2007, 7:27:16 PM, you wrote:

 On Aug 4, 2007, at 11:19 , Bulat Ziganshin wrote:

 and use it. want to assign a=b/(c+d)? nothing can be easier! just
 define one more macro!

 And?  Everything above machine code is just macros at various  
 levels of abstraction, including all our favorite higher-level  
 abstractions.

and what do you prefer? assembler or a high-level language?


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re: Re[6]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Claus Reinke

I think that defining lifted versions of every function is dangerous,
especially in a widely-used library.  Monadic code will start to look
pure, and before long someone will be using let expressions and where
blocks to share monadic computations rather than using do blocks to
share the *results* of monadic computations.


yes. we actually had that fun with Conal Elliott's functional reactive
programming libraries, where all expressions were lifted to a reader
monad for time (expressions not mentioning time were constant, those
mentioning time were dependent on the overall progress of time).

the limitations of overloading (Bool, mostly) and the variations of
sharing possible in expressions overloaded this way led to quite a
bit of research as well as implementation and language extensions.

it is a borderline case: it results in an embedded domain-specific
language that looks a lot like haskell, but isn't haskell. as long as
one keeps the difference in mind, it is useful, though.

having such overloaded operations in an edsl for low-level imperative
programming in haskell might be worthwhile, and since some people
have been asking for it, i wanted to point out that it is possible.

for general use, i agree that explicit control (using idioms perhaps)
is safer, although there are functional languages that are based on
the everything is monadic-io slogan (scheme, lisp, ML, ..).

the monad subexpressions under discussion are somewhere in 
between those extremes, with some syntactic differences, some 
control, and their own complications.


claus



Re: [Haskell-cafe] How odd...

2007-08-04 Thread Andrew Coppin

Lennart Augustsson wrote:
Haskell doesn't know much about infinity, but Haskell implementations 
are allowed to use IEEE floating point which has infinity.
And to get things right, there needs to be a few changes to the 
library to do the right thing for certain numbers, this is not news.  
In fact I filed a bug report a while back about it.


It's news to me that the IEEE floating point standard defines infinity. 
(NaN I knew about...)




Re: Re[10]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Brandon S. Allbery KF8NH


On Aug 4, 2007, at 11:48 , Bulat Ziganshin wrote:


Hello Brandon,

Saturday, August 4, 2007, 7:27:16 PM, you wrote:


On Aug 4, 2007, at 11:19 , Bulat Ziganshin wrote:



and use it. want to assign a=b/(c+d)? nothing can be easier! just
define one more macro!



And?  Everything above machine code is just macros at various
levels of abstraction, including all our favorite higher-level
abstractions.


and what do you prefer? assembler or a high-level language?


That would be why I'm using a language which lets me compose things  
in complex ways.  And just once, abstracting it away into a library,  
which you seem to be missing.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH




[Haskell-cafe] Re: Developing Programs and Proofs Spontaneously using GADT

2007-08-04 Thread apfelmus
Shin-Cheng Mu wrote:
 I am curious about the possibility of developing Haskell programs
 spontaneously with proofs about their properties and have the
 type checker verify the proofs for us, in a way one would do in
 a dependently typed language.
 
 In the exercise below, I tried to redo part of the merge-sort
 example in Altenkirch, McBride, and McKinna's introduction to
 Epigram [1]: deal the input list into a binary tree, and fold
 the tree by the function merging two sorted lists into one.
 The property I am going to show is merely that the length of
 the input list is preserved.

Cool! :)

 Given that dependent types and GADTs are such popular topics,
 I believe the same must have been done before, and there may be
 better ways to do it. If so, please give me some comments or
 references. Any comments are welcomed.
 
 {-# OPTIONS_GHC -fglasgow-exts #-}
 
 To begin with, we define the usual type-level representation
 of natural numbers and lists indexed by their lengths.
 
 data Z = Z   deriving Show
 data S a = S a   deriving Show
 
 data List a n where
   Nil :: List a Z
   Cons :: a - List a n - List a (S n)
 
 1. Append
 
 To warm up, let us see the familiar append example.
 Unfortunately, unlike Omega, Haskell does not provide type
 functions. I am not sure which is the best way to
 emulate type functions. One possibility is to introduce
 the following GADT:
 
 data Plus m n k where--- m + n = k
   PlusZ :: Plus Z n n
   PlusS :: Plus m n k - Plus (S m) n (S k)
 
 such that Plus m n k represents a proof that m + n = k.

Wouldn't type families (~ associated type synonyms) do exactly that once
they become available?

  type family   Plus :: * -> * -> *
  type instance Plus Z n = n
  type instance Plus (S m) n = S (Plus m n)

  append :: (Plus m n ~ k) => List a m -> List a n -> List a k
  append Nil ys = ys
  append (Cons x xs) ys = Cons x (append xs ys)

But I'd guess that there are some constraints on the type family
instance declarations to keep things decidable.
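Type families did land in GHC, and the sketch works essentially as written. A runnable version, with the naturals and lists as defined in the parent post (toList and main added for demonstration):

```haskell
{-# LANGUAGE GADTs, TypeFamilies #-}

data Z = Z
data S a = S a

data List a n where
  Nil  :: List a Z
  Cons :: a -> List a n -> List a (S n)

-- Plus is now a genuine type-level function.
type family Plus m n
type instance Plus Z n     = n
type instance Plus (S m) n = S (Plus m n)

-- No proof argument needed: the type checker reduces Plus itself.
append :: List a m -> List a n -> List a (Plus m n)
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

toList :: List a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (append (Cons 1 (Cons 2 Nil)) (Cons (3 :: Int) Nil)))
-- prints [1,2,3]
```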

Viewed with the dictionary translation for type classes in mind, this is
probably exactly the alternative type of append you propose:

  append :: Plus m n k - List a m - List a n - List a k

 However, this does not type check. Assume that t has size
 n1, and u has size n. The DepSum returned by merge consists
 of a list of size i, and a proof p of type Plus m n i, for
 some i. The proof p1, on the other hand, is of type P m n k
 for some k. Haskell does not know that Plus m n is actually
 a function and cannot conclude that i=k.
 
 To explicitly state the equality, we assume that there is
 a function plusFn which, given a proof of m + n = i and
 a proof of m + n = k, yields a function converting an i
 in any context to a k. That is:
 
   plusFn :: Plus m n i -> Plus m n k
   -> (forall f . f i -> f k)

 How do I define plusFn? I would like to employ the techniques
 related to equality types [3,4,5], but currently I have not
 yet figured out how. I've merely produced a version of
 plusFn specialised to List a:
 
 plusFn :: Plus m n h -> Plus m n k -> List a h -> List a k
 plusFn PlusZ PlusZ xs = xs
 plusFn (PlusS p1) (PlusS p2) (Cons x xs) =
     Cons x (plusFn p1 p2 xs)
 
 Needless to say this is not satisfactory.

I remember that the

  newtype Equal a b = Proof (forall f . f a -> f b)

type equality has been used to define/implement GADTs

  Ralf Hinze. Fun with phantom types.
  http://www.informatik.uni-bonn.de/~ralf/publications/With.pdf

so a general  plusFn  ought to be possible. I think that the following
induction should work (untested!):

  equalZ :: Equal Z Z
  equalS :: Equal m n -> Equal (S m) (S n)

  plusFn :: Plus m n i -> Plus m n k -> Equal i k
  plusFn PlusZ PlusZ = equalZ
  plusFn (PlusS x) (PlusS y) = equalS (plusFn x y)

with the trivial equality proofs for natural numbers

  equalZ = Proof id

  newtype Succ f a  = InSucc { outSucc :: f (S a) }
  equalS (Proof eq) = Proof (outSucc . eq . InSucc)

The newtype is just for making the type checker recognize that  f (S a)
is indeed of the form  g a  for some type constructor  g .

Regards,
apfelmus



Re: [Haskell-cafe] How odd...

2007-08-04 Thread Paul Johnson

Andrew Coppin wrote:
It's news to me that the IEEE floating point standard defines 
infinity. (NaN I knew about...)

See http://en.wikipedia.org/wiki/IEEE_754

Paul.


Re: [Haskell-cafe] How odd...

2007-08-04 Thread Paul Johnson

Andrew Coppin wrote:

Paul Johnson wrote:

 log 0
-Infinity

Oh. So... since when does Haskell know about infinity?

I should have mentioned that the underlying platform in my case is an
Intel P4.  Haskell does not specify a floating point implementation; the
assumption is that it uses whatever the platform provides because
anything else would be horribly inefficient.  The P4 implements IEEE
floating point, which as Andrew pointed out includes Infinity, -Infinity
and NaN as special cases of floating point values.  It also seems to
distinguish between 0.0 and (-0.0), although they are still equal.  For
instance:


> (-0.0)
-0.0

> 1.0 / 0.0
Infinity

> 1.0 / (-0.0)
-Infinity

(Aside: should Infinity etc. be included in the Haskell standard?  Or
possibly in a Data.Numeric.IEEE extension?  They look like constructors,
and it would be nice to be able to pattern match on them.)

So if you tried these experiments on a non-IEEE platform then you would
get different results.  You might get exceptions or just weird big
numbers.  ISTR someone posted results from a Sun Sparc along these lines
some time ago.

 foo x
   | x < 0  = ...
   | x == 0 = ...
   | x > 0  = ...
I was most perplexed when I got a non-exhaustive patterns 
exception... It turns out there was a NaN in there. I forget about that.

Nasty.  I'll have to bear that one in mind myself.

You can detect infinities by equality, but not NaN.


> let inf = (1.0 / 0.0)
> let nan = inf * 0.0
> inf == inf
True

> nan == nan
False
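The same checks, plus the RealFloat predicates that keep working where (==) does not:

```haskell
main :: IO ()
main = do
  let inf = 1 / 0 :: Double
      nan = inf * 0
  print (inf == inf)      -- True: infinities compare by equality
  print (nan == nan)      -- False: NaN is unequal to everything, even itself
  print (isInfinite inf)  -- True
  print (isNaN nan)       -- True: the reliable way to detect a NaN
```

This is why a guard like `x == nan` can never fire; test with isNaN instead.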


 exp (log 0 * 2)
0.0
Well that's interesting. I did wonder why it *doesn't* break in the 
real case...

I haven't perused the IEEE standard, but I expect it defines something
like this:

Infinity * 0 = NaN
Infinity * _ = Infinity
exp Infinity = Infinity
exp (-Infinity) = 0

Um... why would infinity * 0 be NaN? That doesn't make sense...

Infinity times anything is Infinity.  Zero times anything is zero.  So
what should Infinity * zero be?  There isn't one right answer.  In this
case the morally correct answer is zero, but in other contexts it
might be Infinity or even some finite number other than zero.

Consider 0.0 / 0.0, which also evaluates to NaN.  What should its value
be?  If you take the limit of (x / x) as x -> 0 then the right answer is
1.  On the other hand if you take the limit of (0 / x) as x -> 0 then
the right answer is 0.  Worse yet, if you take the limit of (2x / x) as
x -> 0 then the right answer is 2, and (x / x^2) gives infinity.  You
can pick any finite or infinite value you like.  So the only possible
answer is NaN.



So no, its not a bug, its according to the standard.


So I'm the only person who was expecting zero squared to be zero? 
(IMHO the standard should try to implement mathematical operations in 
a mathematically sensible way...)

It does *try*.  I'm not sure if IEEE arithmetic was actually defined
back in '98.  It certainly wasn't widely implemented.  There might well
be a case for revisiting the standard to allow for IEEE values, but you
can't mandate them because not all platforms support IEEE arithmetic.
While working through this I also came across the following case 
which technically is a bug:


 0 ** 0
1.0

 exp (log 0 * 0)
NaN

I suspect that GHCi is using a built-in exponentiation operator that 
doesn't quite conform to the standard in this case.


Now that really *is* odd...

When I said built in I meant built in to the hardware.  This is
probably another special case defined in the IEEE standard, which is not
the same as the Haskell 98 definition.

The reason why the IEEE standard was defined in the first place was that
floating point software portability was being seriously limited by these
corner cases.  You would get some numerical code working on a Sun, and
then find it broke on a PC because one of them defined (0.0 / 0.0) as 1
and the other defined it as 0.   Worse yet, you might find it worked on
Intel chips but not IBM or AMD.  Programmers also had to code explicit
checks for division by zero and implement their own versions of Infinity
and NaN in cases where they might appear, which cluttered up the code
and slowed down execution.

One way out of this morass would be to define Haskell floating point
arithmetic as IEEE standard, and document non-conformance for certain
platforms.  In the long term that is probably the way forwards (do any
currently sold chips *not* implement IEEE?).  It would also let us
include Infinity and NaN as constructors.  However it would lead to
significant problems when compiling code that used these values on
non-IEEE platforms.  What do you do then?

Paul.



Re: [Haskell-cafe] How odd...

2007-08-04 Thread Paul Johnson

Also see
http://hal.archives-ouvertes.fr/hal-00128124
before you start on any serious numerical software.

Paul.


Re: [Haskell-cafe] How odd...

2007-08-04 Thread Lennart Augustsson
The Haskell type class RealFloat has a reasonable number of operations to
test for NaN, denormalized numbers, etc.
You can also ask if the implementation uses IEEE.
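A few of those RealFloat queries in action (expected results assume a standard GHC on an IEEE platform):

```haskell
main :: IO ()
main = do
  print (isIEEE (0 :: Double))               -- True on IEEE platforms
  print (isNaN (0 / 0 :: Double))            -- True
  print (isDenormalized (5e-324 :: Double))  -- True: smallest subnormal
  print (isNegativeZero (-0.0 :: Double))    -- True: -0.0 == 0.0, but
                                             -- this predicate tells them apart
```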

  -- Lennart

On 8/4/07, Paul Johnson [EMAIL PROTECTED] wrote:

 Andrew Coppin wrote:
  Paul Johnson wrote:
   log 0
  -Infinity
  Oh. So... since when does Haskell know about infinity?
 I should have mentioned that the underlying platform in my case is an
 Intel P4.  Haskell does not specify a floating point implementation; the
 assumption is that it uses whatever the platform provides because
 anything else would be horribly inefficient.  The P4 implements IEEE
 floating point, which as Andrew pointed out includes Infinity, -Infinity
 and NaN as special cases of floating point values.  It also seems to
 distinguish between 0.0 and (-0.0), although they are still equal.  For
 instance:

  (-0.0)
 -0.0

  1.0 / 0.0
 Infinity

  1.0 / (-0.0)
 -Infinity

 (Aside: should Infinity etc. be included in the Haskell standard?  Or
 possibly in a Data.Numeric.IEEE extension?  They look like constructors,
 and it would be nice to be able to pattern match on them.)

 So if you tried these experiments on a non-IEEE platform then you would
 get different results.  You might get exceptions or just weird big
 numbers.  ISTR someone posted results from a Sun Sparc along these lines
 some time ago.
   foo x
 | x  0 = ...
 | x == 0 = ...
 | x  0 = ...
  I was most perplexed when I got a non-exhaustive patterns
  exception... It turns out there was a NaN in there. I forget about that.
 Nasty.  I'll have to bear that one in mind myself.

 You can detect infinities by equality, but not NaN.

  let inf = (1.0 / 0.0)
  let nan = inf * 0.0
  inf == inf
 True
 nan == nan
 False

   exp (log 0 * 2)
  0.0
  Well that's interesting. I did wonder why it *doesn't* break in the
  real case...
 I haven't perused the IEEE standard, but I expect it defines something
 like this:

 Infinity * 0 = NaN
 Infinity * _ = Infinity
 exp Infinity = Infinity
 exp (-Infinity) = 0
  Um... why would infinity * 0 be NaN? That doesn't make sense...
 Infinity times anything is Infinity.  Zero times anything is zero.  So
 what should Infinity * zero be?  There isn't one right answer.  In this
 case the morally correct answer is zero, but in other contexts it
 might be Infinity or even some finite number other than zero.

 Consider 0.0 / 0.0, which also evaluates to NaN.  What should its value
 be?  If you take the limit of (x / x) as x -> 0 then the right answer is
 1.  On the other hand if you take the limit of (0 / x) as x -> 0 then
 the right answer is 0, while the limit of (x / x^2) gives infinity.
 Worse yet, if you take the limit of (2x / x) as x -> 0 then the right
 answer is 2.  You can pick any finite or infinite value you like.  So
 the only possible answer is NaN.
 
  So no, its not a bug, its according to the standard.
 
  So I'm the only person who was expecting zero squared to be zero?
  (IMHO the standard should try to implement mathematical operations in
  a mathematically sensible way...)
 It does *try*.  I'm not sure if IEEE arithmetic was actually defined
 back in 98.  It certainly wasn't widely implemented.  There might well
 be a case for revisiting the standard to allow for IEEE values, but you
 can't mandate them because not all platforms support IEEE arithmetic.
  While working through this I also came across the following case
  which technically is a bug:
 
   > 0 ** 0
  1.0
 
   > exp (log 0 * 0)
  NaN
 
  I suspect that GHCi is using a built-in exponentiation operator that
  doesn't quite conform to the standard in this case.
 
  Now that really *is* odd...
 When I said built in I meant built in to the hardware.  This is
 probably another special case defined in the IEEE standard, which is not
 the same as the Haskell 98 definition.

 The reason why the IEEE standard was defined in the first place was that
 floating point software portability was being seriously limited by these
 corner cases.  You would get some numerical code working on a Sun, and
 then find it broke on a PC because one of them defined (0.0 / 0.0) as 1
 and the other defined it as 0.   Worse yet, you might find it worked on
 Intel chips but not IBM or AMD.  Programmers also had to code explicit
 checks for division by zero and implement their own versions of Infinity
 and NaN in cases where they might appear, which cluttered up the code
 and slowed down execution.

 One way out of this morass would be to define Haskell floating point
 arithmetic as IEEE standard, and document non-conformance for certain
 platforms.  In the long term that is probably the way forwards (do any
 currently sold chips *not* implement IEEE?).  It would also let us
 include Infinity and NaN as constructors.  However it would lead to
 significant problems when compiling code that used these values on
 non-IEEE platforms.  What do you do then?

 Paul.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 

Re: [Haskell-cafe] How odd...

2007-08-04 Thread Andrew Coppin





Um... why would infinity * 0 be NaN? That doesn't make sense...
Infinity times anything is Infinity.  Zero times anything is zero.  So 
what should Infinity * zero be?  There isn't one right answer.  In 
this case the morally correct answer is zero, but in other contexts 
it might be Infinity or even some finite number other than zero.


I don't follow.

Infinity times any positive quantity gives positive infinity.
Infinity times any negative quantity gives negative infinity.
Infinity times zero gives zero.

What's the problem?


Consider 0.0 / 0.0, which also evaluates to NaN.


Division by zero is *definitely* undefined. (For nonzero v, the equation 0 * k = v 
has no solutions; for v = 0, every k is a solution.)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How odd...

2007-08-04 Thread Neil Mitchell
Hi

If you just use Catch (http://www-users.cs.york.ac.uk/~ndm/catch/):

   foo x
     | x <  0 = ...
     | x == 0 = ...
     | x >  0 = ...

This gives an error. Something identical to this code is in
Data.FiniteMap, and indeed, when using floats and NaNs (or just silly
Ord instances) you can cause Data.FiniteMap to raise a pattern-match
error. See section 6.3 of the draft paper on the Catch website for details.
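For illustration, here is a hedged sketch (the function name and branches are hypothetical, not from Data.FiniteMap) of how such guards can be made total: adding an `otherwise` case catches NaN, for which all three comparisons return False.

```haskell
-- Hypothetical example: total guards over a Double.
-- For NaN, (<), (==) and (>) all return False, so only the
-- otherwise branch can fire.
classify :: Double -> String
classify x
  | x <  0    = "negative"
  | x == 0    = "zero"
  | x >  0    = "positive"
  | otherwise = "NaN"

main :: IO ()
main = mapM_ (putStrLn . classify) [-1, 0, 1, 0/0]
```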

 myNot x
  | x == True  = False
  | x == False = True

This is determined to be exhaustive.

 as it is (in general) undecidable whether all patterns are covered.

In general, yes. But it's possible to do a lot better than GHC's
current warnings (I'm not saying it's worth changing, though)

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[12]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Bulat Ziganshin
Hello Brandon,

Saturday, August 4, 2007, 8:44:46 PM, you wrote:

 That would be why I'm using a language which lets me compose things
 in complex ways.  And just once, abstracting it away into a library,  
 which you seem to be missing.

and you hate 'do' syntax sugar?

-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How odd...

2007-08-04 Thread Albert Y. C. Lai

Andrew Coppin wrote:

> 0^2
0

> (0 :+ 0)^2
0 :+ 0

> 0**2
0

> (0 :+ 0)**2
NaN :+ NaN


There is nothing wrong AFAIK. (How much do I know? BSc in math, went 
through classes on real analysis and complex analysis.)


There is no reason to expect complex ** to agree with real **.

Real x**y is first defined for natural y (agreeing with x^y), then 
extended to integer y (agreeing with x^^y), then to rational y 
(taking the nth root when y = m/n), then to real y by continuity 
wherever possible. You can still expect real 0**2 = 0^2 = 0.


Complex x**y involves picking a phase angle of x. Phase angle is an 
ill-defined notion for 0. Complex 0**y is better left undefined.
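The difference is easy to observe, since complex ** goes through exp and log. A small sketch (Data.Complex from base; the NaN result assumes IEEE floating point, as discussed upthread):

```haskell
import Data.Complex

-- (^) is repeated multiplication and stays total at 0;
-- (**) is exp (log x * y), and log (0 :+ 0) produces an infinity
-- whose product with the exponent turns the result into NaN :+ NaN.
main :: IO ()
main = do
  print (((0 :+ 0) :: Complex Double) ^ (2 :: Int))  -- 0.0 :+ 0.0
  print (((0 :+ 0) :: Complex Double) ** 2)          -- NaN :+ NaN on IEEE platforms
```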


You said


So I'm the only person who was expecting zero squared to be zero?


Are you trying to be sensational, or are you going for hyperbole? If you 
want zero squared, you're welcome to use 0^2. Complex 0**2 does not have 
to be related to zero squared.


You said

(IMHO the standard should try to implement mathematical
 operations in a mathematically sensible way...)

But AFAIK it is already a mathematically sensible way - it is what I 
learned from my math classes.


Perhaps you mean highschoolly sensible. I understand that highschool 
math is a bit simple-minded, e.g., you can greatly confuse someone by


1 = ((-1)**2)**0.5 = ((-1)**0.5)**2 = i**2 = -1

So I'm the only one expecting x square-root squared to be x?

Thank God complex ** is supposed to be different from real **.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How odd...

2007-08-04 Thread Dan Piponi
On 8/4/07, Albert Y. C. Lai [EMAIL PROTECTED] wrote:
 There is no reason to expect complex ** to agree with real **.

There's every reason. It is standard mathematical practice to embed
the integers in the rationals in the reals in the complex numbers and
it is nice to have as many functions as possible respect that
embedding. In particular, it's nice to get a kind of commutativity
when we start casting between different numeric types. There's an old
paper on this subject by Reynolds
(http://lambda-the-ultimate.org/node/2078) though I'm more familiar
with the treatment in Pierce's Basic Category Theory for Computer
Scientists.

If you look through a variety of complex analysis books you'll find a
number of different approaches to handling 0^x for complex x. Many
agree that this is defined for Re(x) > 0. Some say it is defined for
integer x > 0. Some say it's not defined at all, and then in the
following paragraphs talk about x^2 where x takes values in the entire
complex plane.

So it seems to me that it is entirely reasonable to hope that (0:+0)**2=0:+0.

 Complex x**y involves picking a phase angle of x. Phase angle is an
 ill notion for 0. Complex 0**y is better left undefined.

There is no right answer here, it's a matter of taste. Among
mathematicians there doesn't seem to be a uniform consensus. But I
think that in computing a commutativity principle is nice to have, so
I think we should have x**fromInteger y = x^y, where defined, so that,
in some sense, (^) is embedded in (**).
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Developing Programs and Proofs Spontaneously using GADT

2007-08-04 Thread Derek Elkins
On Sat, 2007-08-04 at 23:25 +0800, Shin-Cheng Mu wrote:
 
[...]
 In Haskell, the existential quantifier is mimicked by forall.
 We define:
 
   data DepSum a p = forall i . DepSum (a i) (p i)
 
 The term dependent sum is borrowed from the Omega tutorial
 of Sheard, Hook, and Linger [2] (why is it called a sum,
 not a product?).

There is already a dependent product.  One compelling reason for this
naming comes from category theory and how they relate to each other (the
dependent sum is a left adjoint, the dependent product is a right
adjoint [not directly to each other].)  They are related in the same way
products and sums are related, or universal quantification and
existential quantification are.  So, looking at the product case
(A,B)=AxB (and ()=1) and the sum case Either A B = A+B we can see how
they generalize to be the above.

Let's say we have a pair of the same type.  We can clearly represent it
as AxA, but another alternative is (1+1) -> A.  We can write the isomorphism
in Haskell:
pToF :: (a,a) -> (Either () () -> a)
pToF (x,_) (Left  ()) = x
pToF (_,y) (Right ()) = y

fToP :: (Either () () -> a) -> (a,a)
fToP f = (f (Left ()), f (Right ()))

Similarly for sums, A+A can be written (1+1)xA.  Again, the Haskell:
sToP :: Either a a -> (Either () (), a)
sToP (Left  x) = (Left  (), x)
sToP (Right x) = (Right (), x)

pToS :: (Either () (), a) -> Either a a
pToS (Left  (), x) = Left x
pToS (Right (), x) = Right x

Incidentally, this is how sums are usually encoded, i.e. a pair of a tag
and a value.  (Also incidentally, these are almost exactly the
isomorphisms defining adjunctions related to the ones that define
product and sum, i.e. the above definitions immediately fall out of the
category theory.)
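As a quick sanity check, the definitions above do round-trip; here they are reproduced so the sketch runs standalone:

```haskell
-- The thread's isomorphisms between pairs/functions and sums/products,
-- checked by composing each with its inverse.
pToF :: (a, a) -> (Either () () -> a)
pToF (x, _) (Left  ()) = x
pToF (_, y) (Right ()) = y

fToP :: (Either () () -> a) -> (a, a)
fToP f = (f (Left ()), f (Right ()))

sToP :: Either a a -> (Either () (), a)
sToP (Left  x) = (Left  (), x)
sToP (Right x) = (Right (), x)

pToS :: (Either () (), a) -> Either a a
pToS (Left  (), x) = Left  x
pToS (Right (), x) = Right x

main :: IO ()
main = do
  print (fToP (pToF (1 :: Int, 2)))                    -- (1,2)
  print (pToS (sToP (Right 'x' :: Either Char Char)))  -- Right 'x'
```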

So how to represent AxB or A+B with this encoding?  You can't.  However,
in a dependently typed language we can write (in some dependently typed
pseudo-Haskell) [for concreteness, let's set A=Int, B=String]:
f :: Either () () -> Type
f (Left  ()) = Int
f (Right ()) = String

Now we use the same definitions as above only with the types,
pToF :: (Int,String) -> ((s :: Either () ()) -> f s)
fToP :: ((s :: Either () ()) -> f s) -> (Int,String)
sToP :: Either Int String -> (exists s :: Either () (). f s)
pToS :: (exists s :: Either () (). f s) -> Either Int String

This leads to the dependent product being a function space when there is
no dependency, and similarly the dependent sum being a (normal) product.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How odd...

2007-08-04 Thread Stephen Forrest
On 8/4/07, Dan Piponi [EMAIL PROTECTED] wrote:

 On 8/4/07, Albert Y. C. Lai [EMAIL PROTECTED] wrote:
  There is no reason to expect complex ** to agree with real **.

 There's every reason. It is standard mathematical practice to embed
 the integers in the rationals in the reals in the complex numbers and
 it is nice to have as many functions as possible respect that
 embedding.


An example I have seen before that illustrates some of the difficulties with
preserving such behaviour is (-1)^(1/3).

Of course, taking the nth root is multi-valued, so if you're to return a
single value, you must choose a convention.  Many implementations I have
seen choose the solution with lowest argument (i.e. the first solution
encountered by a counterclockwise sweep through the plane starting at (1,0).)

With this interpretation, (-1)^(1/3) = 0.5 + sqrt(3)/2 * i.  If you go with
the real solution (-1) you might need to do so carefully in order to
preserve other useful properties of ^, like continuity.
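GHC's Data.Complex follows the principal-value convention, so this is directly observable; a hedged sketch (exact digits depend on the platform's floating point):

```haskell
import Data.Complex

-- Principal value of (-1)^(1/3): exp ((1/3) * log (-1)) = exp (i*pi/3),
-- i.e. approximately 0.5 + (sqrt 3 / 2) i -- not the real root -1.
main :: IO ()
main = print (((-1) :+ 0) ** (1/3) :: Complex Double)
```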

Steve
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Developing Programs and Proofs Spontaneously using GADT

2007-08-04 Thread Dan Licata
Another way of understanding the terminology is this:

A dependent product type (usually written \Pi x:A.B) can be thought of
as a big product of the types (B[a1/x], ..., B[aN/x]) for the
inhabitants a1...aN of the type A.  To introduce a dependent product, you
have to provide each component of the tuple.  To eliminate a dependent
product, you can project out one component.

A dependent sum type (usually written \Sigma x:A.B) can be thought of
as a big sum of the types B[a1/x] + ... + B[aN/x], where again a1 .. aN
are all the inhabitants of A.  To introduce a dependent sum, you pick a
particular branch of the sum (by choosing an a_i) and inject a value of
that type (i.e., B[a_i/x]).  To eliminate a dependent sum, you can see
what branch a value is in (i.e., recover the a_i) and you can see what
the associated data is.

This terminology takes a little getting used to because of how we
usually present these intro and elim forms syntactically.  Consider
dependent products.  For finite products (e.g., (Int, String)) we
introduce a product just by writing down an inhabitant of each component
type.  However, the dependent product needs one component for each
inhabitant of the type A, so we can't in general list them all
explicitly.  Thus, the usual intro form for a dependent product is to
give an inhabitant of B for a generic inhabitant x of A (i.e., a
variable x:A).  So the intro form is a function.  On the other hand, to
eliminate a finite product we usually say "I'd like the 2nd component,
please".  But for a dependent product, you have to say "I'd like the
a_i^th component", so the elim is application.

Sums are similar.  For a finite sum, we usually say "this data is in the
second branch", but for a dependent sum, we have to say "this data is in
the a_i^th branch".  So the intro form is a pair of a tag a_i and a
value of type B[a_i/x].  On the other hand, to eliminate a finite sum,
we usually do a case with one branch for each of the tags, where in each
branch we get the tagged value.  To eliminate a dependent sum, we cover
all those cases at once by getting (1) a generic tag x:A and (2) a value
of the type associated with that tag.  But this is exactly a

 split (x,y) = e1 in e2

pattern-matching style elimination form for a pair.  

So, dependent products are represented by functions, and dependent sums
are represented by pairs.  It's confusing at first, but it's established
terminology.  I tend to use "dependent function" and "dependent pair"
myself, since sometimes people use "dependent product" to mean \Sigma
rather than \Pi (understandable given the intro and elim for \Sigma!),
so I sometimes find it ambiguous.

Does that make sense?
-Dan


On Aug04, Derek Elkins wrote:
 On Sat, 2007-08-04 at 23:25 +0800, Shin-Cheng Mu wrote:
  
 [...]
  In Haskell, the existential quantifier is mimicked by forall.
  We define:
  
data DepSum a p = forall i . DepSum (a i) (p i)
  
  The term dependent sum is borrowed from the Omega tutorial
  of Sheard, Hook, and Linger [2] (why is it called a sum,
  not a product?).
 
 There is already a dependent product.  One compelling reason for this
 naming comes from category theory and how they relate to each other (the
 dependent sum is a left adjoint, the dependent product is a right
 adjoint [not directly to each other].)  They are related in the same way
 products and sums are related, or universal quantification and
 existential quantification are.  So, looking at the product case
 (A,B)=AxB (and ()=1) and the sum case Either A B = A+B we can see how
 they generalize to be the above.
 
 Let's say we have a pair of the same type.  We can clearly represent it
 as AxA, but another alternative is (1+1) -> A.  We can write the isomorphism
 in Haskell:
 pToF :: (a,a) -> (Either () () -> a)
 pToF (x,_) (Left  ()) = x
 pToF (_,y) (Right ()) = y
 
 fToP :: (Either () () -> a) -> (a,a)
 fToP f = (f (Left ()), f (Right ()))
 
 Similarly for sums, A+A can be written (1+1)xA.  Again, the Haskell:
 sToP :: Either a a -> (Either () (), a)
 sToP (Left  x) = (Left  (), x)
 sToP (Right x) = (Right (), x)
 
 pToS :: (Either () (), a) -> Either a a
 pToS (Left  (), x) = Left x
 pToS (Right (), x) = Right x
 
 Incidentally, this is how sums are usually encoded, i.e. a pair of a tag
 and a value.  (Also incidentally, these are almost exactly the
 isomorphisms defining adjunctions related to the ones that define
 product and sum, i.e. the above definitions immediately fall out of the
 category theory.)
 
 So how to represent AxB or A+B with this encoding?  You can't.  However,
 in a dependently typed language we can write (in some dependently typed
 pseudo-Haskell) [for concreteness, let's set A=Int, B=String]:
 f :: Either () () -> Type
 f (Left  ()) = Int
 f (Right ()) = String
 
 Now we use the same definitions as above only with the types,
 pToF :: (Int,String) -> ((s :: Either () ()) -> f s)
 fToP :: ((s :: Either () ()) -> f s) -> (Int,String)
 sToP :: Either Int String -> (exists s :: Either () (). f s)
 pToS :: (s :: Either 

Re: [Haskell-cafe] How odd...

2007-08-04 Thread Dan Piponi
On 8/4/07, Stephen Forrest [EMAIL PROTECTED] wrote:
 Of course, taking the nth root is multi-valued, so if you're to return a
 single value, you must choose a convention.  Many implementations I have
 seen choose the solution with lowest argument (i.e. the first solution
 encountered by a counterclockwise sweep through the plane starting at (1,0).)

This is one approach but not the one used by some languages. Another
way of looking at it is this: a function like log is multivalued as
you pass around the origin. So what you can do is 'cut' the complex
plane so that you're not allowed to cross the cuts. Now you have a
region on which a single-valued function can be unambiguously defined.
It's fairly standard practice, when documenting functions of a complex
variable, to specify precisely which 'branch cuts' are being used.
Here's a quote from the Mathematica documentation describing their Log
function: Log[z] has a branch cut discontinuity in the complex z
plane running from  -infinity to 0.

 With this interpretation, (-1)^(1/3) = 0.5 + sqrt(3)/2 * i.  If you go with
 the real solution (-1) you might need to do so carefully in order to
 preserve other useful properties of ^, like continuity.

You can guarantee this by making sure you make the right 'cuts'.
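GHC's Data.Complex follows the same convention as the Mathematica quote above; a sketch, assuming the platform's atan2 honours IEEE signed zero:

```haskell
import Data.Complex

-- Complex log returns the principal branch: the imaginary part
-- (the phase) lies in (-pi, pi], with the cut along the negative
-- real axis.  Approaching the cut from below (a negative-zero
-- imaginary part) flips the phase's sign.
main :: IO ()
main = do
  print (log ((-1) :+ 0.0    :: Complex Double))  -- imaginary part is +pi
  print (log ((-1) :+ (-0.0) :: Complex Double))  -- imaginary part is -pi, given signed-zero support
```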
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] FreeBSD/amd64 support?

2007-08-04 Thread Tim Newsham

I noticed that there is no official FreeBSD/amd64 port yet, although
there's a binary with a working ghc but non-working ghci.  Has
there been any progress on the amd64 port?  Any estimate on when it
will be available?  Any known workarounds (is it possible to install
and run an x86 ghc in FreeBSD/amd64)?
Is hardware access holding anyone up?

Tim Newsham
http://www.thenewsh.com/~newsham/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] c2hs and structs?

2007-08-04 Thread Magnus Therning
I can't seem to find any information on how to deal with C functions
that return a (pointer to a) struct.  C2hs tells me there's no automatic
support for marshalling structs (I'm using version 0.14.5).

If I'm to do it by hand, is there a preferred way?  (E.g. make the type
an instance of the Storable class.)
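For what it's worth, a hand-written Storable instance is the usual route; here is a hedged sketch for a hypothetical C struct (the struct, field names, and layout are illustrative only -- c2hs itself does not generate this):

```haskell
import Foreign
import Foreign.C.Types

-- Hypothetical C struct:  struct point { int x; int y; };
-- Mirror it in Haskell and supply the Storable instance by hand,
-- so pointers returned from C can be peeked.
data Point = Point { px :: CInt, py :: CInt } deriving (Show, Eq)

instance Storable Point where
  sizeOf _    = 2 * sizeOf (undefined :: CInt)
  alignment _ = alignment (undefined :: CInt)
  peek p      = Point <$> peekByteOff p 0
                      <*> peekByteOff p (sizeOf (undefined :: CInt))
  poke p (Point x y) = do
    pokeByteOff p 0 x
    pokeByteOff p (sizeOf (undefined :: CInt)) y

-- Round-trip through raw memory, as if the pointer came from C:
main :: IO ()
main = alloca $ \ptr -> do
  poke ptr (Point 3 4)
  peek ptr >>= print
```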

/M

-- 
Magnus Therning (OpenPGP: 0xAB4DFBA4)
[EMAIL PROTECTED] Jabber: [EMAIL PROTECTED]
http://therning.org/magnus

Unreadable code,
Why would anyone use it?
Learn a better way.
 -- Geoff Kuenning's contribution to the 2004 Perl Haiku Contest,
Haikus about Perl - 'Dishonorable Mention' winner


pgpvEivZj1ap4.pgp
Description: PGP signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[12]: [Haskell-cafe] Re: monad subexpressions

2007-08-04 Thread Brandon S. Allbery KF8NH


On Aug 4, 2007, at 14:51 , Bulat Ziganshin wrote:


Hello Brandon,

Saturday, August 4, 2007, 8:44:46 PM, you wrote:


That would be why I'm using a language which lets me compose things
in complex ways.  And just once, abstracting it away into a library,
which you seem to be missing.


and you hate 'do' syntax sugar?


Not particularly; I use both do and = (even intermixing them),  
although I'm uncertain that *teaching* Haskell should start with the  
do notation.  Your point being?


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell FCGI server.

2007-08-04 Thread Donald Bruce Stewart
george.moschovitis:
 
is it possible to create a FCGI server that listens to a
specific port using the Haskell FCGI library?
The front end web server would then communicate with this
back end FCGI server through this port.
A small example would be really appreciated.
thanks,

Yep, certainly possible. lambdaweb does this, as should others. AFAIK
it's just like using fastcgi in any other language.

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fastcgi-3000.0.0

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe