Re: [Haskell-cafe] guards in applicative style

2012-09-12 Thread Lorenzo Bolla
I'm no expert at all, but I would say no.
guard's type is:
guard :: MonadPlus m => Bool -> m ()

and a MonadPlus is a Monad plus (ehm...) mzero and mplus
(http://en.wikibooks.org/wiki/Haskell/MonadPlus).
An Applicative, on the other hand, is less powerful than a Monad
(http://www.haskell.org/haskellwiki/Applicative_functor), so
guard as-is cannot be defined for it.

But, in your specific example with lists, you can always use filter:
filter (uncurry somePredicate) ((,) <$> list1 <*> list2)
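
A minimal, self-contained sketch of that idea (list1, list2 and somePredicate
below are placeholder definitions, not from any library):

import Control.Applicative ((<$>), (<*>))

somePredicate :: Int -> Int -> Bool
somePredicate x y = x < y          -- purely illustrative predicate

list1, list2 :: [Int]
list1 = [1 .. 3]
list2 = [1 .. 3]

-- Cartesian product in applicative style, filtered afterwards
result :: [(Int, Int)]
result = filter (uncurry somePredicate) ((,) <$> list1 <*> list2)

main :: IO ()
main = print result   -- [(1,2),(1,3),(2,3)]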

hth,
L.


On Wed, Sep 12, 2012 at 3:40 PM, felipe zapata tifonza...@gmail.com wrote:

 Hi Haskellers,

 Suppose I have two lists and I want to calculate
 the Cartesian product of the two of them,
 constrained to a predicate.
 In list-comprehension notation it is just

 result = [ (x, y) | x <- list1, y <- list2, somePredicate x y ]

 or in monadic notation

 result = do
   x <- list1
   y <- list2
   guard (somePredicate x y)
   return $ (x, y)

 Then I was wondering if we can do something similar using an applicative style

 result = (,) <$> list1 <*> list2 (somePredicate ???)

 The question is then:
 is there a way to define a guard in applicative style?

 Thanks in advance,

 Felipe Zapata.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Lorenzo Bolla
I definitely agree!
http://www.reddit.com/r/haskell/comments/x4knd/what_is_the_reason_for_haskells_cabal_package/
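
For what it's worth, the difference boils down to how the build-depends
stanza is written (package names and version numbers below are made up):

-- Current PVP advice: tight upper bound; the solver fails as soon as
-- text-0.12.* is released, even if it would compile fine.
build-depends: base >= 4 && < 5, text >= 0.11 && < 0.12

-- Proposed default: lower bounds only; add an upper bound reactively,
-- once a release is known to break the build.
build-depends: base >= 4, text >= 0.11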

L.


On Wed, Aug 15, 2012 at 12:38:33PM -0700, Bryan O'Sullivan wrote:
 Hi, folks -
 
 I'm sure we are all familiar with the phrase "cabal dependency hell" at this
 point, as the number of projects on Hackage that are intended to hack around
 the problem slowly grows.
 
 I am currently undergoing a fresh visit to that unhappy realm, as I try to
 rebuild some of my packages to see if they work with the GHC 7.6 release
 candidate.
 
 A substantial number of the difficulties I am encountering are related to
 packages specifying upper bounds on their dependencies. This is a recurrent
 problem, and its source lies in the recommendations of the PVP itself
 (problematic phrase highlighted in bold):
 
 
 When publishing a Cabal package, you should ensure that your dependencies
 in the build-depends field are accurate. This means specifying not only
 lower bounds, but also upper bounds on every dependency.
 
 
 I understand that the intention behind requiring tight upper bounds was good,
 but in practice this has worked out terribly, leading to depsolver failures
 that prevent a package from being installed, when everything goes smoothly
 with the upper bounds relaxed. The default response has been a flurry of
 small updates to packages in which the upper bounds are loosened, thus
 guaranteeing that the problem will recur in a year or less. This is neither
 sensible, fun, nor sustainable.
 
 In practice, when an author bumps a version of a depended-upon package, the
 changes are almost always either benign, or will lead to a compilation
 failure in the depending-upon package. A benign change will obviously have no
 visible effect, while a compilation failure is actually better than a
 depsolver failure, because it is more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime failures
 caused by semantic changes. In these instances, a downstream package should
 reactively add an upper bound once a problem is discovered.
 
 I propose that the sense of the recommendation around upper bounds in the PVP
 be reversed: upper bounds should be specified only when there is a known
 problem with a new version of a depended-upon package.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


-- 
Lorenzo Bolla
http://lbolla.info

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] StableNames and monadic functions

2012-06-26 Thread Lorenzo Bolla
The point I was making is that StableName might be what you want. You are
using it to check if two functions are the same by comparing their
stablehash. But from StableName documentation:

The reverse is not necessarily true: if two stable names are not equal,
 then the objects they name may still be equal.


The `eq` you implemented means this, I reckon: if `eq` returns True then
the two functions are equal; if `eq` returns False then you can't tell!
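
A tiny illustration of that one-way implication, using the same `eq` helper
that appears further down this thread (the second result is not guaranteed
and may depend on the optimization level):

import System.Mem.StableName

eq :: a -> b -> IO Bool
eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

main :: IO ()
main = do
  let f = (+ 1) :: Int -> Int
  eq f f >>= print   -- True: literally the same closure
  eq ((+ 1) :: Int -> Int) ((+ 1) :: Int -> Int) >>= print
  -- The second result may well be False, even though the two functions are
  -- semantically equal: unequal stable names prove nothing.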

Does it make sense?
L.


On Tue, Jun 26, 2012 at 1:54 PM, Ismael Figueroa Palet ifiguer...@gmail.com
 wrote:

 Thanks Lorenzo, I'm cc'ing the list with your response also:

 As you point out, when you do some kind of let-binding, using either the
 where clause or an explicit let as in:

 main :: IO ()
 main = do
    let f1 = (successor :: Int -> State Int Int)
    let f2 = (successor :: Int -> Maybe Int)
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)

 The behavior is as expected. I guess the binding triggers some internal
 optimization or gives more information to the type checker; but I'm still
 not clear why it is required to be done this way -- having to let-bind
 every function is kind of awkward.

 I know the details of StableNames are probably implementation-dependent,
 but I'm still wondering about how to detect / restrict this situation.

 Thanks


 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 From StableName docs:

 The reverse is not necessarily true: if two stable names are not equal,
 then the objects they name may still be equal.


 This version works as expected:

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 --  main :: IO ()
 --  main = do
 --     b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
 --     b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
 --     print (show b1 ++ " " ++ show b2)

 main :: IO ()
 main = do
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)
    where f1 = (successor :: Int -> Maybe Int)
          f2 = (successor :: Int -> State Int Int)



 hth,
 L.




 On Tue, Jun 26, 2012 at 1:15 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 I'm using StableNames to have a notion of function equality, and I'm
 running into problems when using monadic functions.

 Consider the code below, file Test.hs

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 main :: IO ()
 main = do
    b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
    b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
    print (show b1 ++ " " ++ show b2)

 Running the code in ghci, the result is "False False". There is some
 old post saying that this is due to the dictionary-passing style for
 typeclasses, and that compiling with optimizations improves the situation.

 Compiling with ghc --make -O Tests.hs and running the program, the
 result is "True True", which is what I expect.
 However, if I change main to be like the following:

 main :: IO ()
 main = do
    b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
    b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
    print (show b1 ++ " " ++ show b2)

 i.e. just changing the sequential order, and then compiling again with
 the same command, I get "True False", which is very confusing to me.
 Similar situations happen when using the state monad transformer, and
 manually built variations of it.

 It sounds like the problem is with hidden closures created somewhere that do
 not point to the same memory locations, so StableNames yields False for
 those cases, but it is not clear to me under what circumstances this
 situation happens. Is there another way to get some approximation of function
 equality? Or a way to configure the behavior of StableNames in the presence
 of class constraints?

 I'm using the latest Haskell Platform on OS X Lion, btw.

 Thanks

 --
 Ismael


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





 --
 Ismael


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] StableNames and monadic functions

2012-06-26 Thread Lorenzo Bolla
I think about StableName like the & operator in C, which returns the
memory address of a variable. It's not the same for many reasons, but by
analogy: if &x == &y then x == y, but &x != &y does not imply x != y.

So, values that are semantically equal may be stored in different memory
locations and have different StableNames.

The fact that changing the order of the lines also changes the result of
the computation is, after all, allowed by the type signature of
makeStableName, which lives in the IO monad. On the other hand,
hashStableName is a pure function.

L.



On Tue, Jun 26, 2012 at 3:26 PM, Ismael Figueroa Palet ifiguer...@gmail.com
 wrote:



 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 The point I was making is that StableName might be what you want. You are
 using it to check if two functions are the same by comparing their
 stablehash. But from StableName documentation:

  The reverse is not necessarily true: if two stable names are not equal,
 then the objects they name may still be equal.


 The `eq` you implemented means this, I reckon: if `eq` returns True then
 the 2 functions are equal, if `eq` returns False then you can't tell!

 Does it make sense?
 L.


 Yes, it does make sense, and I'm wondering why the hashes are equal in one
 case but not in the other (i.e. using let/where vs not using it), because
 I'd like it to behave the same in both situations.

 Thanks again




 On Tue, Jun 26, 2012 at 1:54 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 Thanks Lorenzo, I'm cc'ing the list with your response also:

 As you point out, when you do some kind of let-binding, using the
 where clause, or explicit let as in:

 main :: IO ()
 main = do
    let f1 = (successor :: Int -> State Int Int)
    let f2 = (successor :: Int -> Maybe Int)
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)

 The behavior is as expected. I guess the binding triggers some internal
 optimization or gives more information to the type checker; but I'm still
 not clear why it is required to be done this way -- having to let-bind
 every function is kind of awkward.

 I know the details of StableNames are probably implementation-dependent,
 but I'm still wondering about how to detect / restrict this situation.

 Thanks


 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 From StableName docs:

 The reverse is not necessarily true: if two stable names are not
 equal, then the objects they name may still be equal.


 This version works as expected:

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 --  main :: IO ()
 --  main = do
 --     b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
 --     b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
 --     print (show b1 ++ " " ++ show b2)

 main :: IO ()
 main = do
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)
    where f1 = (successor :: Int -> Maybe Int)
          f2 = (successor :: Int -> State Int Int)



 hth,
 L.




 On Tue, Jun 26, 2012 at 1:15 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 I'm using StableNames to have a notion of function equality, and I'm
 running into problems when using monadic functions.

 Consider the code below, file Test.hs

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 main :: IO ()
 main = do
    b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
    b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
    print (show b1 ++ " " ++ show b2)

 Running the code into ghci the result is False False. There is some
 old post saying that this is due to the dictionary-passing style for
 typeclasses, and compiling with optimizations improves the situation.

 Compiling with ghc --make -O Tests.hs and running the program, the
 result is True True, which is what I expect.
 However, if I change main to be like the following:

 main :: IO ()
 main = do
    b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
    b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
    print (show b1 ++ " " ++ show b2)

 i.e. just changing the sequential order, and then compiling again with
 the same command, I get True False, which is very confusing for me.
 Similar situations happens when using the state monad transformer, and
 manually built variations of it.

 It sounds the problem is with hidden closures

Re: [Haskell-cafe] StableNames and monadic functions

2012-06-26 Thread Lorenzo Bolla
In other words, there is a difference between identity and equivalence. What
you have implemented with StableName is identity (sometimes called
"reference equality"), as opposed to equivalence (aka "value
equality").

In Python, for example:

>>> x = {1:2}
>>> y = {1:2}
>>> x == y
True
>>> x is y
False
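
A rough Haskell analogue of the same distinction, using StableName directly
(the second result is only "typically" False: unequal stable names never
prove anything):

import System.Mem.StableName

main :: IO ()
main = do
  let x = [1, 2, 3] :: [Int]
      y = [1, 2, 3] :: [Int]
  print (x == y)        -- True: value equality, like Python's ==
  sx <- makeStableName x
  sy <- makeStableName y
  print (sx == sy)      -- typically False: identity, like Python's is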

L.


On Tue, Jun 26, 2012 at 3:42 PM, Lorenzo Bolla lbo...@gmail.com wrote:

 I think about StableName like the & operator in C, which returns the
 memory address of a variable. It's not the same for many reasons, but by
 analogy: if &x == &y then x == y, but &x != &y does not imply x != y.

 So, values that are semantically equal may be stored in different memory
 locations and have different StableNames.

 The fact that changing the order of the lines also changes the result of
 the computation is, after all, allowed by the type signature of
 makeStableName, which lives in the IO monad. On the other hand,
 hashStableName is a pure function.

 L.



 On Tue, Jun 26, 2012 at 3:26 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:



 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 The point I was making is that StableName might be what you want. You
 are using it to check if two functions are the same by comparing their
 stablehash. But from StableName documentation:

  The reverse is not necessarily true: if two stable names are not equal,
 then the objects they name may still be equal.


 The `eq` you implemented means this, I reckon: if `eq` returns True then
 the 2 functions are equal, if `eq` returns False then you can't tell!

 Does it make sense?
 L.


 Yes  it does make sense, and I'm wondering why the hash are equal in one
 case but are not equal on the other case (i.e. using let/where vs not using
 it) because I'd like it to behave the same in both situations

 Thanks again




 On Tue, Jun 26, 2012 at 1:54 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 Thanks Lorenzo, I'm cc'ing the list with your response also:

 As you point out, when you do some kind of let-binding, using the
 where clause, or explicit let as in:

 main :: IO ()
 main = do
    let f1 = (successor :: Int -> State Int Int)
    let f2 = (successor :: Int -> Maybe Int)
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)

 The behavior is as expected. I guess the binding triggers some internal
 optimization or gives more information to the type checker; but I'm still
 not clear why it is required to be done this way -- having to let-bind
 every function is kind of awkward.

 I know the details of StableNames are probably
 implementation-dependent, but I'm still wondering about how to detect /
 restrict this situation.

 Thanks


 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 From StableName docs:

 The reverse is not necessarily true: if two stable names are not
 equal, then the objects they name may still be equal.


 This version works as expected:

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 --  main :: IO ()
 --  main = do
 --     b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
 --     b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
 --     print (show b1 ++ " " ++ show b2)

 main :: IO ()
 main = do
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)
    where f1 = (successor :: Int -> Maybe Int)
          f2 = (successor :: Int -> State Int Int)



 hth,
 L.




 On Tue, Jun 26, 2012 at 1:15 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 I'm using StableNames to have a notion of function equality, and I'm
 running into problems when using monadic functions.

 Consider the code below, file Test.hs

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 main :: IO ()
 main = do
    b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
    b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
    print (show b1 ++ " " ++ show b2)

 Running the code into ghci the result is False False. There is some
 old post saying that this is due to the dictionary-passing style for
 typeclasses, and compiling with optimizations improves the situation.

 Compiling with ghc --make -O Tests.hs and running the program, the
 result is True True, which is what I expect.
 However, if I change main to be like the following:

 main :: IO ()
 main = do
    b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
    b1 <- eq (successor :: Int

Re: [Haskell-cafe] StableNames and monadic functions

2012-06-26 Thread Lorenzo Bolla
This is very tricky and it really depends on what you mean...
Formally, two functions are the same if they have the same domain and f(x)
== g(x) for each x in the domain. But this is not always
easy/feasible/efficient to implement! (See also
http://en.wikipedia.org/wiki/Rice%27s_theorem and
http://stackoverflow.com/questions/4844043/are-two-functions-equal.)

Depending on your problem, you might get away with just defining a
"signature" for your functions and comparing those: for example, the
signature could be the concatenation of the function name, argument types,
etc. But I'm speculating here...
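
As a sketch of the "same result for each x in the domain" idea, here is a
hypothetical helper that only applies when you can enumerate (or sample) the
domain and the codomain has an Eq instance; it is an approximation, not a
general solution:

-- Extensional equality over an explicitly given, finite list of inputs.
eqOn :: Eq b => [a] -> (a -> b) -> (a -> b) -> Bool
eqOn domain f g = all (\x -> f x == g x) domain

main :: IO ()
main = do
  print (eqOn [0 .. 100 :: Int] (+ 1) (subtract (-1)))  -- True on this sample
  print (eqOn [0 .. 100 :: Int] (+ 1) (* 2))            -- False: they differ at 0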

L.



On Tue, Jun 26, 2012 at 4:50 PM, Ismael Figueroa Palet ifiguer...@gmail.com
 wrote:

 thanks again for your comments, any idea on how to implement Equivalence
 for functions?

 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 In other words there is a difference between Identity and Equivalence.
 What you have implemented with StableName is an Identity (sometimes
 called reference equality), as opposed to an Equivalence (aka value
 equality).

 In Python, for example:

 >>> x = {1:2}
 >>> y = {1:2}
 >>> x == y
 True
 >>> x is y
 False

 L.


 On Tue, Jun 26, 2012 at 3:42 PM, Lorenzo Bolla lbo...@gmail.com wrote:

 I think about StableName like the & operator in C, which returns the
 memory address of a variable. It's not the same for many reasons, but
 by analogy: if &x == &y then x == y, but &x != &y does not imply x != y.

 So, values that are semantically equal may be stored in different
 memory locations and have different StableNames.

 The fact that changing the order of the lines also changes the result of
 the computation is, after all, allowed by the type signature of
 makeStableName, which lives in the IO monad. On the other hand,
 hashStableName is a pure function.

 L.



 On Tue, Jun 26, 2012 at 3:26 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:



 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 The point I was making is that StableName might be what you want. You
 are using it to check if two functions are the same by comparing their
 stablehash. But from StableName documentation:

  The reverse is not necessarily true: if two stable names are not
 equal, then the objects they name may still be equal.


 The `eq` you implemented means this, I reckon: if `eq` returns True
 then the 2 functions are equal, if `eq` returns False then you can't tell!

 Does it make sense?
 L.


 Yes  it does make sense, and I'm wondering why the hash are equal in
 one case but are not equal on the other case (i.e. using let/where vs not
 using it) because I'd like it to behave the same in both situations

 Thanks again




 On Tue, Jun 26, 2012 at 1:54 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 Thanks Lorenzo, I'm cc'ing the list with your response also:

 As you point out, when you do some kind of let-binding, using the
 where clause, or explicit let as in:

 main :: IO ()
 main = do
    let f1 = (successor :: Int -> State Int Int)
    let f2 = (successor :: Int -> Maybe Int)
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)

 The behavior is as expected. I guess the binding triggers some
 internal optimization or gives more information to the type checker; but
 I'm still not clear why it is required to be done this way -- having to
 let-bind every function is kind of awkward.

 I know the details of StableNames are probably
 implementation-dependent, but I'm still wondering about how to detect /
 restrict this situation.

 Thanks


 2012/6/26 Lorenzo Bolla lbo...@gmail.com

 From StableName docs:

 The reverse is not necessarily true: if two stable names are not
 equal, then the objects they name may still be equal.


 This version works as expected:

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb)

 successor :: (Num a, Monad m) => a -> m a
 successor n = return (n+1)

 --  main :: IO ()
 --  main = do
 --     b2 <- eq (successor :: Int -> State Int Int) (successor :: Int -> State Int Int)
 --     b1 <- eq (successor :: Int -> Maybe Int) (successor :: Int -> Maybe Int)
 --     print (show b1 ++ " " ++ show b2)

 main :: IO ()
 main = do
    b2 <- eq f2 f2
    b1 <- eq f1 f1
    print (show b1 ++ " " ++ show b2)
    where f1 = (successor :: Int -> Maybe Int)
          f2 = (successor :: Int -> State Int Int)



 hth,
 L.




 On Tue, Jun 26, 2012 at 1:15 PM, Ismael Figueroa Palet 
 ifiguer...@gmail.com wrote:

 I'm using StableNames to have a notion of function equality, and
 I'm running into problems when using monadic functions.

 Consider the code below, file Test.hs

 import System.Mem.StableName
 import Control.Monad.State

 eq :: a -> b -> IO Bool
 eq a b = do
  pa <- makeStableName a
  pb <- makeStableName b
  return (hashStableName pa == hashStableName pb

Re: [Haskell-cafe] not enough fusion?

2012-06-25 Thread Lorenzo Bolla
I wonder why this performs really badly, though (I would expect it to be
the same as s2):

s3 :: Int -> Int
s3 n = sum [gcd x y | x <- [ 0 .. n-1 ], y <- [ 0 .. n-1 ]]

From the links posted by Dmitry, it might be that the generated code is
made of 2 recursive calls: in fact, what I observe is a stack space
overflow error at runtime...
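
If the overflow really comes from sum building a long chain of thunks over
the huge intermediate list, a strict left fold is a plausible workaround
(untested sketch; whether the list still fuses is a separate question):

import Data.List (foldl')

s3' :: Int -> Int
s3' n = foldl' (+) 0 [gcd x y | x <- [0 .. n-1], y <- [0 .. n-1]]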

L.




On Mon, Jun 25, 2012 at 10:09 AM, Dmitry Olshansky olshansk...@gmail.comwrote:

 s1 ~ sum $ map (sum . flip map [0..n] . gcd) [0..n]
 s2 ~ sum $ concatMap (flip map [0..n] . gcd) [0..n]

 There are some posts from Joachim Breitner investigated fusion for
 concatMap:

 http://www.haskell.org/pipermail/haskell-cafe/2011-December/thread.html#97227



 2012/6/25 Johannes Waldmann waldm...@imn.htwk-leipzig.de

 Dear all,

 while doing some benchmarking (*)
 I noticed that function  s1  is considerably faster than  s2
 (but I wanted  s2  because it looks more natural)
 (for n = 1,  s1 takes 20 s, s2 takes 13 s; compiled by ghc-7.4.2 -O2)

 s1 :: Int -> Int
 s1 n = sum $ do
     x <- [ 0 .. n-1 ]
     return $ sum $ do
         y <- [ 0 .. n-1 ]
         return $ gcd x y

 s2 :: Int -> Int
 s2 n = sum $ do
   x <- [ 0 .. n-1 ]
   y <- [ 0 .. n-1 ]
   return $ gcd x y

 I was expecting that in both programs,
 all lists will be fused away (are they?)
 so the code generator essentially can produce straightforward
 assembly code (no allocations, no closures, etc.)


 For reference, I also wrote the equivalent imperative program
 (two nested loops, one accumulator for the sum)
 (with the straightforward recursive gcd)
 and runtimes are (for same input as above)

 C/gcc: 7.3 s , Java: 7.7 s, C#/Mono: 8.7 s


 So, they sort of agree with each other, but disagree with ghc.
 Where does the factor 2 come from? Lists? Laziness?
 Does  ghc  turn the tail recursion (in gcd) into a loop? (gcc does).
 (I am looking at  -ddump-asm  but can't quite see through it.)


 (*) benchmarking to show that today's compilers are clever enough
 such that the choice of paradigm/language does not really matter
 for this kind of low-level programming.






 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] not enough fusion?

2012-06-25 Thread Lorenzo Bolla
You are right: probably I didn't (use -O2), because I cannot reproduce it now.
Sorry for the noise.
(Anyway, I am still surprised that list-comprehension gives a different
result from do-notation in the list monad.)
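
(For reference, the two forms should mean exactly the same thing; any
performance gap would have to come from how the optimizer treats them, not
from their semantics. A quick sketch of the equivalent spellings:)

-- List comprehension ...
s3 :: Int -> Int
s3 n = sum [ gcd x y | x <- [0 .. n-1], y <- [0 .. n-1] ]

-- ... do-notation in the list monad ...
s3' :: Int -> Int
s3' n = sum $ do
  x <- [0 .. n-1]
  y <- [0 .. n-1]
  return (gcd x y)

-- ... and the desugared binds.
s3'' :: Int -> Int
s3'' n = sum ([0 .. n-1] >>= \x -> [0 .. n-1] >>= \y -> return (gcd x y))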

L.



On Mon, Jun 25, 2012 at 11:55 AM, Dmitry Olshansky olshansk...@gmail.comwrote:

 In my test it works ~20% faster than s2 and ~20% slower than s1.
 Did you use -O2 flag?


 2012/6/25 Lorenzo Bolla lbo...@gmail.com

 I wonder why this performs really badly, though (I would expect it to be
 the same as s2):

 s3 :: Int -> Int
 s3 n = sum [gcd x y | x <- [ 0 .. n-1 ], y <- [ 0 .. n-1 ]]

 From the links posted by Dmitry, it might be that the generated code is
 made of 2 recursive calls: in fact, what I observe is a stack space
 overflow error at runtime...

 L.




 On Mon, Jun 25, 2012 at 10:09 AM, Dmitry Olshansky olshansk...@gmail.com
  wrote:

 s1 ~ sum $ map (sum . flip map [0..n] . gcd) [0..n]
 s2 ~ sum $ concatMap (flip map [0..n] . gcd) [0..n]

 There are some posts from Joachim Breitner investigated fusion for
 concatMap:

 http://www.haskell.org/pipermail/haskell-cafe/2011-December/thread.html#97227



 2012/6/25 Johannes Waldmann waldm...@imn.htwk-leipzig.de

 Dear all,

 while doing some benchmarking (*)
 I noticed that function  s1  is considerably faster than  s2
 (but I wanted  s2  because it looks more natural)
 (for n = 1,  s1 takes 20 s, s2 takes 13 s; compiled by ghc-7.4.2
 -O2)

 s1 :: Int -> Int
 s1 n = sum $ do
     x <- [ 0 .. n-1 ]
     return $ sum $ do
         y <- [ 0 .. n-1 ]
         return $ gcd x y

 s2 :: Int -> Int
 s2 n = sum $ do
   x <- [ 0 .. n-1 ]
   y <- [ 0 .. n-1 ]
   return $ gcd x y

 I was expecting that in both programs,
 all lists will be fused away (are they?)
 so the code generator essentially can produce straightforward
 assembly code (no allocations, no closures, etc.)


 For reference, I also wrote the equivalent imperative program
 (two nested loops, one accumulator for the sum)
 (with the straightforward recursive gcd)
 and runtimes are (for same input as above)

 C/gcc: 7.3 s , Java: 7.7 s, C#/Mono: 8.7 s


 So, they sort of agree with each other, but disagree with ghc.
 Where does the factor 2 come from? Lists? Laziness?
 Does  ghc  turn the tail recursion (in gcd) into a loop? (gcc does).
 (I am looking at  -ddump-asm  but can't quite see through it.)


 (*) benchmarking to show that today's compilers are clever enough
 such that the choice of paradigm/language does not really matter
 for this kind of low-level programming.






 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Confused by ghci output

2012-05-31 Thread Lorenzo Bolla
It looks like you are overflowing `Int` with 3^40.
In your QuickCheck test, the function signature uses Int:

prop_sanemodexp :: Int -> Int -> Int -> Property

Note:
Prelude> 3^40
12157665459056928801
Prelude> 3^40 :: Int
689956897

Prelude> 3^40 `mod` 3
0
Prelude> (3^40 `mod` 3) :: Int
1
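
One possible fix (a sketch, not tested against the pasted file) is to state
the property over Integer, so the reference computation cannot wrap around.
modexpRef below is only a stand-in for your modexp2, which is not reproduced
here:

import Test.QuickCheck

-- Stand-in modular exponentiation (square-and-multiply), for illustration only.
modexpRef :: Integer -> Integer -> Integer -> Integer
modexpRef b e m = go (b `mod` m) e 1
  where
    go _ 0 acc = acc
    go bb k acc
      | odd k     = go (bb * bb `mod` m) (k `div` 2) (acc * bb `mod` m)
      | otherwise = go (bb * bb `mod` m) (k `div` 2) acc

prop_sanemodexp :: Integer -> Integer -> Integer -> Property
prop_sanemodexp b e m =
  e >= 0 && m > 1 ==> b ^ e `mod` m == modexpRef b e m

main :: IO ()
main = quickCheck prop_sanemodexp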


L.




On Thu, May 31, 2012 at 5:35 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 *X> 3^40 `mod` 3 == modexp2 3 40 3
 False
 *X> modexp2 3 40 3
 0
 *X> 3^40 `mod` 3
 0

 I'm confused. Last I checked, 0 == 0.

 Using GHC 7.4.1, and the file x.hs (which has been loaded in ghci) can be
 found here: http://hpaste.org/69342

 I noticed this after prop_sanemodexp was failing.

 Any help would be appreciated,
   - Clark
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Please critique my code (a simple lexer)

2012-05-23 Thread Lorenzo Bolla
 On Tue, May 22, 2012 at 4:13 PM, John Simon zildjoh...@gmail.com wrote:

 data Lexer = Lexer String

 makeLexer :: String -> Lexer
 makeLexer fn = Lexer fn


`makeLexer` is redundant. You can simply use `Lexer`.
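
That is, anywhere you would write the first line below, the second does the
same job (the file name is made up):

lexerA = makeLexer "tokens.txt"
lexerB = Lexer "tokens.txt"

Or, if you want to keep the name, makeLexer = Lexer is enough (no argument
needed).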

L.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] desactivate my Show instance implementations temporarily

2012-04-22 Thread Lorenzo Bolla
On Sun, Apr 22, 2012 at 11:11:51AM +0200, TP wrote:
 Hello,

 I have a module where I have made several types instances of the Show
 typeclass.

 For debugging purposes, I would like to use the default implementation of
 Show (the one obtained with deriving, which shows all the constructors).
 Is there some option to do that, or do I have to comment out all the Show
 instances in my code and add Show in deriving (...) for each of my types?
 If this is the only possibility, is there some script around here to do
 that automatically?

 Thanks in advance,

 TP

You could use the C preprocessor's #if/#else/#endif directives (enabled with
the CPP language extension). See here:
http://stackoverflow.com/questions/6556778/using-if-else-endif-in-haskell
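
A minimal sketch of that approach (DEBUG is a made-up macro name; enable it
with e.g. ghc -DDEBUG, or cpp-options: -DDEBUG in the .cabal file):

{-# LANGUAGE CPP #-}

data T = A Int | B String
#ifdef DEBUG
  deriving Show                 -- derived instance, shows the constructors
#else

instance Show T where           -- hand-written instance for normal builds
  show (A n) = "got number " ++ show n
  show (B s) = s
#endif

main :: IO ()
main = print (A 42)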

L.

-- 
Lorenzo Bolla
http://lbolla.info


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe