Re: [Haskell-cafe] Thunks

2010-10-15 Thread Bernie Pope
On 15 October 2010 07:53, Mihai Maruseac mihai.marus...@gmail.com wrote:
 Hi,

 Is there a way to determine the order in which thunks are created and
 expanded/evaluated in Haskell (GHC)? I'm looking mainly for some
 existing interface, but if there is only something in the GHC source,
 that will suffice.

You can use side effects to observe the order of evaluation, by
wrapping observed expressions (thunks) with some IO computation inside
unsafePerformIO. This is roughly what HOOD does, and it can be used to
provide some clues about evaluation order, and maybe even GHood can
help you visualise it. I've no idea if they work at the moment, but
Hood and GHood are available on Hackage.

You have to be careful of observer effects whereby the observation
wrappers change the evaluation order of the observed code.
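
For illustration, a rough sketch of such a wrapper (only the idea, not
HOOD's actual API; the name `observe' is made up here):

    import System.IO (hPutStrLn, stderr)
    import System.IO.Unsafe (unsafePerformIO)

    -- Prints its label when the wrapped thunk is first demanded,
    -- revealing the demand order.
    observe :: String -> a -> a
    observe label x = unsafePerformIO $ do
      hPutStrLn stderr ("demanded: " ++ label)
      return x
    {-# NOINLINE observe #-}

    example :: Int
    example = let a = observe "a" (1 + 1)
                  b = observe "b" (2 + 2)
              in a + b   -- the order of the two messages shows which
                         -- argument of (+) is demanded first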

Cheers,
Bernie.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: MonadCatchIO, finally and the error monad

2010-10-15 Thread oleg

Michael Snoyman wrote:
 I have a recommendation of how to fix this: the MonadCatchIO typeclass
 should be extended to include finally, onException and everything
 else. We can provide default definitions which will work for most
 monads, and short-circuiting monads like ErrorT (and I imagine ContT
 as well) will need to override them.

It seems that `finally' can be fixed without all these proposed
additions. The method catch is the only necessary method of the
class. 
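
In plain IO the construction is roughly the following (a sketch, not the
actual CaughtMonadIO code; the same shape lifts to any monad with a
catch-like method, and async-exception masking is ignored for brevity):

    import qualified Control.Exception as E

    gfinally :: IO a -> IO b -> IO a
    gfinally action sequel = do
      r <- action `E.catch` \e -> do
             _ <- sequel                          -- run sequel on failure,
             E.throwIO (e :: E.SomeException)     -- then rethrow
      _ <- sequel                                 -- run sequel on success
      return r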

The subject of catching errors in non-IO monads has a long history,
some of which is documented at

http://okmij.org/ftp/Haskell/index.html#catch-MonadIO

The page points to an old CaughtMonadIO file. I have just updated
it for the new Exceptions, and added a final test:

 test3c go = runErrorT $ go `gfinally` (liftIO $ putStrLn "sequel called")

 test31 = test3c (return "return"         :: ErrorT String IO String)

*CaughtMonadIO> test31
sequel called
Right "return"

 test32 = test3c (error "error"           :: ErrorT String IO String)

*CaughtMonadIO> test32
sequel called
*** Exception: error

 test33 = test3c (throwError "throwError" :: ErrorT String IO String)

*CaughtMonadIO> test33
sequel called
*** Exception: ErrorException "throwError"

As we can see, sequel is always called. Here is the updated file:

http://okmij.org/ftp/Haskell/CaughtMonadIO.lhs

Incidentally, one should be very careful of using `finally' with the
continuation monad. The Cont monad lets us ``enter the room once, and
exit many times''. So, finally may be called more than once. We need
the ugly dynamic-wind -- or try to use less powerful monads if they
suffice.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: MonadCatchIO, finally and the error monad

2010-10-15 Thread Michael Snoyman
On Fri, Oct 15, 2010 at 9:35 AM,  o...@okmij.org wrote:

 Michael Snoyman wrote:
 I have a recommendation of how to fix this: the MonadCatchIO typeclass
 should be extended to include finally, onException and everything
 else. We can provide default definitions which will work for most
 monads, and short-circuiting monads like ErrorT (and I imagine ContT
 as well) will need to override them.

 It seems that `finally' can be fixed without all these proposed
 additions. The method catch is the only necessary method of the
 class.

 The subject of catching errors in non-IO monads has a long history,
 some of which is documented at

        http://okmij.org/ftp/Haskell/index.html#catch-MonadIO

 The page points out to an old CaughtMonadIO file. I have just updated
 it for new Exceptions, and added the final test:

 test3c go = runErrorT $ go `gfinally` (liftIO $ putStrLn "sequel called")

 test31 = test3c (return "return"         :: ErrorT String IO String)

 *CaughtMonadIO> test31
 sequel called
 Right "return"

 test32 = test3c (error "error"           :: ErrorT String IO String)

 *CaughtMonadIO> test32
 sequel called
 *** Exception: error

 test33 = test3c (throwError "throwError" :: ErrorT String IO String)

 *CaughtMonadIO> test33
 sequel called
 *** Exception: ErrorException "throwError"

 As we can see, sequel is always called. Here is the updated file:

        http://okmij.org/ftp/Haskell/CaughtMonadIO.lhs

 Incidentally, one should be very careful of using `finally' with the
 continuation monad. The Cont monad lets us ``enter the room once, and
 exit many times''. So, finally may be called more than once. We need
 the ugly dynamic-wind -- or try to use less powerful monads if they
 suffice.


Perhaps I'm misunderstanding your code, but it seems like it's not
really respecting the ErrorT monad at all. Instead, it's converting
the error type to a runtime exception, which oftentimes is not at all
what we want. A pertinent example: in Yesod, I use a modified ErrorT
to allow short-circuiting handler functions to perform special
responses such as redirects. Using your code, I believe that such code
on my part would result in a 500 server error every time, quite the
opposite of what I wanted.

I would prefer if the test read as:

 test33 = fmap (== Left "throwError") $
          test3c (throwError "throwError" :: ErrorT String IO String)

Which never in fact returns True. Or, more to the point, the test is
never even called, since the runtime exception prevents it.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] allocation for pure FFI functions

2010-10-15 Thread Duncan Coutts
On Thu, 2010-10-14 at 17:45 +, Johannes Waldmann wrote:
 Hi. I wonder how to do the following properly.
 
 I have one (large) C  type, let's call it T,
 and I want to sell it as an abstract type in Haskell.
 
 I want to use C functions as if they were of type T -> T
 (pure function, returns a modified copy of the input)
 and the question is, how to do the memory allocation for that,
 in particular, how to avoid  IO  showing up 
 in the (visible) types on the Haskell side:
 
 I don't want IO because I don't want to declare some artificial
 order of execution - instead I want lazy evaluation.
 E.g., I might have some Haskell record with a T component
 which may or may not be evaluated (accessed) at all.

It is exactly for this purpose that the Haskell FFI library includes
unsafePerformIO. This is basically *the* legitimate use case for it, so
you don't need to feel bad about it.

The FFI spec says:

Sometimes an external entity is a pure function, except that it
passes arguments and/or results via pointers. To permit the
packaging of such entities as pure functions, Foreign provides
the following primitive: 

unsafePerformIO :: IO a -> a

http://www.cse.unsw.edu.au/~chak/haskell/ffi/ffi/ffise5.html#x8-240005.1
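
As a concrete sketch of that pattern (the C function add_vec and its
signature are made up for illustration): marshal the arguments, call, peek
the result, and hide the whole thing behind unsafePerformIO.

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble, CInt)
    import Foreign.Marshal.Array (withArray, allocaArray, peekArray)
    import Foreign.Ptr (Ptr)
    import System.IO.Unsafe (unsafePerformIO)

    -- hypothetical C function:
    --   void add_vec(const double *xs, const double *ys, double *out, int n);
    foreign import ccall unsafe "add_vec"
      c_add_vec :: Ptr CDouble -> Ptr CDouble -> Ptr CDouble -> CInt -> IO ()

    addVec :: [Double] -> [Double] -> [Double]
    addVec xs ys = unsafePerformIO $
      withArray (map realToFrac xs) $ \pxs ->
        withArray (map realToFrac ys) $ \pys ->
          allocaArray n $ \pout -> do
            c_add_vec pxs pys pout (fromIntegral n)
            fmap (map realToFrac) (peekArray n pout)
      where n = min (length xs) (length ys)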


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: MonadCatchIO, finally and the error monad

2010-10-15 Thread oleg

Michael Snoyman wrote:
 I would prefer if the test read as:

  test33 = fmap (== Left "throwError") $ test3c (throwError "throwError" ::
  ErrorT String IO String)
 
 Which never in fact returns True. Or, more to the point, the test is   
 never even called, since the runtime exception prevents it.

If you prefer the raised exception to be reflected back into the monad
it came from, that can be arranged. I have updated 

http://okmij.org/ftp/Haskell/CaughtMonadIO.lhs

to use your test:

 test331 = fmap (== Left (show "throwError")) $
           test3c (throwError "throwError" :: ErrorT String IO String)

*CaughtMonadIO> test331
sequel called
True

The `show' is an artifact of `reconciling' the Error and Exception
classes. The class Error doesn't seem very informative; one may wonder
if it is needed given that we already have Exception. If in your real
code the argument of throwError is actually an Exception, the show
hack can be eliminated where it appears in the
instance CaughtMonadIO (ErrorT e m). I can do the adjustment if you
post more details about the desired functionality.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: MonadCatchIO, finally and the error monad

2010-10-15 Thread Michael Snoyman
On Fri, Oct 15, 2010 at 10:22 AM,  o...@okmij.org wrote:

 Michael Snoyman wrote:
 I would prefer if the test read as:

   test33 = fmap (== Left "throwError") $ test3c (throwError "throwError" ::
   ErrorT String IO String)

 Which never in fact returns True. Or, more to the point, the test is
 never even called, since the runtime exception prevents it.

 If you prefer the raised exception to be reflected back into the monad
 it came from, that can be arranged. I have updated

        http://okmij.org/ftp/Haskell/CaughtMonadIO.lhs

 to use your test:

 test331 = fmap (== Left (show "throwError")) $
           test3c (throwError "throwError" :: ErrorT String IO String)

 *CaughtMonadIO> test331
 sequel called
 True

 The `show' is the artifact of `reconciling' Error and Exception
 classes. The class Error doesn't seem very informative; one may wonder
 if it is needed given that we already have Exception. If in your real
 code, the argument of throwError is actually an Exception, the show
 hack can be eliminated, where it is mentioned in the
 instance CaughtMonadIO (ErrorT e m). I can do the adjustment if you
 post more details about the desired functionality.

By the way, I completely agree that the Error typeclass is not very
useful, and I wish it would just disappear. I wrote the neither
package [1] - the modified error monad I keep mentioning - in large part
to avoid the Error typeclass. The other reason was the orphan Monad
Either instance.

To the point at hand: I'm aware that you can promote the Error monad's
error type into a runtime exception, catch it, and pull it back down.
My point is that you shouldn't have to: there are perfectly valid
definitions of finally for the error monad that don't require any of
this trickery. Additionally, sometimes the error type cannot
(reasonably) be promoted to an exception, e.g. ErrorT (a -> b) m. We'd
have to start mucking around with dummy Show instances/blind wrappers.
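
For what it's worth, a sketch of such a definition (finallyE is a name
made up here; it runs the sequel on both normal return and throwError,
without touching runtime exceptions):

    import Control.Monad.Error (Error, ErrorT, runErrorT, throwError)
    import Control.Monad.Trans (lift)

    finallyE :: (Monad m, Error e) => ErrorT e m a -> ErrorT e m b -> ErrorT e m a
    finallyE action sequel = do
      r <- lift (runErrorT action)   -- Left e if the action short-circuited
      _ <- sequel                    -- runs in either case
      either throwError return r     -- re-raise the error or return the result
    -- (exceptions thrown in the underlying monad would still need separate handling)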

I just finished writing up a post describing the MonadInvertIO
approach[2], which I think is a more appropriate solution to the
problem. It doesn't involve any runtime exception trickery, and works
for a number of other use cases, including memory allocation. I would
appreciate any critiques of my ideas.

Michael

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/neither
[2] 
http://docs.yesodweb.com/blog/invertible-monads-exceptions-allocations/#monadinvertio
(links straight to the appropriate section, skips the long buildup)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] downloading GHC

2010-10-15 Thread Ketil Malde

I needed GHC on a new machine, and went to download a binary tarball.

First, I go to http://haskell.org/ghc/download_ghc_6_12_3.html,
which kindly suggests to get the Haskell Platform instead.  

Then, at http://hackage.haskell.org/platform/linux.html, I'm told that I
first need GHC, and pointed back to the GHC download page.

I'll manage, of course - but I think this is a bit more confusing than
it needs to be.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Thunks

2010-10-15 Thread Ketil Malde
Bernie Pope florbit...@gmail.com writes:

 You can use side effects to observe the order of evaluation, by
 wrapping observed expressions (thunks) with some IO computation inside
 unsafePerformIO.

Not what the OP asks for, but I've used a variant of this as a rather
hackish way to provide progress reporting.  I take a list that is lazily
generated, and wrap the elements with an IO action that outputs the
count every 'step' elements.  When the list is evaluated, the IO actions
are executed. Code below.

-k

-- | Output (to stderr) progress while evaluating a lazy list.
--   Useful for generating output while (conceptually, at least) in pure code.
countIO :: String -> String -> Int -> [a] -> IO [a]
countIO msg post step xs = sequence' $ map unsafeInterleaveIO ((blank >> outmsg (0::Int) >> c):cs)
   where (c:cs)   = ct 0 xs
         output   = hPutStr stderr
         blank    = output ('\r':take 70 (repeat ' '))
         outmsg x = output ('\r':msg++show x) >> hFlush stderr
         ct s ys  = let (a,b) = splitAt (step-1) ys
                        next  = s+step
                    in case b of
                         [b1] -> map return a ++
                                 [outmsg (s+step) >> hPutStr stderr post >> return b1]
                         []   -> map return (init a) ++
                                 [outmsg (s+length a) >> hPutStr stderr post >> return (last a)]
                         _    -> map return a ++
                                 [outmsg s >> return (head b)] ++ ct next (tail b)

-- | A lazier version of 'Control.Monad.sequence' in Control.Monad, needed by 'countIO' above.
sequence' :: [IO a] -> IO [a]
sequence' ms = foldr k (return []) ms
    where k m m' = do { x <- m; xs <- unsafeInterleaveIO m'; return (x:xs) }

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: allocation for "pure" FFI functions

2010-10-15 Thread Johannes Waldmann
Duncan Coutts duncan.coutts at googlemail.com writes:

 It is exactly for this purpose that the Haskell FFI library includes
 unsafePerformIO. This is basically *the* legitimate use case for it, so
 you don't need to feel bad about it.

OK, thanks. Then this means my C type is a ForeignPtr,
and each time I use it (even read-only) it looks like
unsafePerformIO $ withForeignPtr $ \ p -> ...

Meanwhile I think I found a nice example and explanation here:
http://en.wikibooks.org/wiki/Haskell/FFI#Self-Deallocating_Pointers
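
Roughly, the pattern looks like this (t_transform and the size constant are
made up for illustration; mallocForeignPtrBytes gives GC-managed storage, so
no explicit free is needed):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.ForeignPtr (ForeignPtr, mallocForeignPtrBytes, withForeignPtr)
    import Foreign.Ptr (Ptr)
    import System.IO.Unsafe (unsafePerformIO)

    newtype T = T (ForeignPtr ())

    -- hypothetical C function:  void t_transform(const t_struct *src, t_struct *dst);
    foreign import ccall unsafe "t_transform"
      c_t_transform :: Ptr () -> Ptr () -> IO ()

    tSize :: Int
    tSize = 1024    -- stands in for sizeof(t_struct); made up for the sketch

    -- looks pure from the outside: the input is never mutated, and the result
    -- gets its own freshly allocated, GC-managed buffer
    transform :: T -> T
    transform (T src) = unsafePerformIO $
      withForeignPtr src $ \pSrc -> do
        dst <- mallocForeignPtrBytes tSize
        withForeignPtr dst $ \pDst -> c_t_transform pSrc pDst
        return (T dst)
    {-# NOINLINE transform #-}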

J.W.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Increasing number of parameters

2010-10-15 Thread Jacek Generowicz

Thanks Brandon!

I really like the addParam utility,


value val prompt = Question prompt (show val) (readCheck val)

addParam :: (Show a) => (funTy -> String -> qty) -> (a -> funTy) -> String -> (a -> qty)
addParam qmakr fun string v = qmakr (fun v) (string ++ " " ++ show v)

prefix1 = addParam value
prefix2 = addParam prefix1
prefix3 = addParam prefix2


but my crusty and sleep-deprived brain is not really grokking the  
internal plumbing.


So I'm trying to get to grips with a simpler variation on the same  
theme, and I'm still failing. I'm trying to write something along the  
lines of


addArg :: nArgFn -> a -> nPlus1ArgFn
addArg fn a = (a+) `op` fn
  where op = something which applies its right parameter to however
             many arguments it needs and feeds the result to the left parameter


in order to allow me to say

sum2 = (+)
sum3 = addArg sum2
sum4 = addArg sum3

etc.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Who is afraid of arrows, was Re: ANNOUNCE: Haskell XML Toolbox Version 9.0.0

2010-10-15 Thread Heinrich Apfelmus

Uwe Schmidt wrote:

In HXT, the concept of a filter is the most important one. This
concept is a natural generalisation of a function (and that's what
arrows are). A user has to grasp this idea of a filter. And he/she
can do this even without knowing anything about arrows or monads.
People knowing a little bit of Unix pipes and filter will become
easily familiar with the simple parts of this DSL.

[...]

The intention with HXT was not to build a general-purpose language
where you can do any kind of complex thing. The intention was to
build a (rather) simple and powerful language for processing XML,
nothing more. You may of course argue whether we've found the right
set of combinators, but that's another story. As Sebastiaan wrote in
his reply, when processing XML, the cases for higher order
computations are not very frequent. The few combinators available for
this are, from a Real World Haskell point of view, sufficient.

To sum it up, I think, from an implementer's point of view for this
eDSL, we agree that both ways, arrows and monads, are possible and
rather similar. From a user's point of view, I prefer a simple and
specialised DSL; you would prefer a more general one.


The question is indeed whether HXT offers the right set of combinators.
Gregory and I are inclined to assert that monad combinators are most
suitable. Sebastiaan and you prefer the arrow combinators.

But I think that *neither* of these two options satisfies the worthwhile
"simple and specialised DSL" criterion. You already entertain the notion
that this is the case for the monad combinators, so I'll focus on the
arrow case.


The problem with the arrow combinators is that HXT does not use them in
their full generality. Taking chapter 3 of Manuel Ohlendorf's thesis as
a representative example, it is telling that:

* The combinators  first, second, (***) and (&&&)  are completely unused,
even though they are the core arrow combinators that can plumb multiple
arguments.
* Multiple arguments are handled with ($<), which is not a general arrow
combinator, but specific to Kleisli arrows, i.e. those coming from a monad.

That's why I don't like the slogan "HXT = XML transformations with
arrows": it suggests that the defining property of arrows - not being
able to do currying while still being able to plumb multiple arguments -
is used in an essential way, but this is actually not the case. Like
monads, I think that arrows are not the right abstraction for HXT. (This
reasoning is why I even thought that HXT was poorly designed and that's
why, personally, I avoided using HXT and opted for HaXml instead.)


Personally, I would be much happier with the slogan "HXT = XML
transformations with filters". Browsing through Manuel's thesis, I
discover that your combinators are quite slick (  , choiceA , when,
guards ), it's just that they are a very specialized subset of the
general arrow combinators. I think that dropping the arrows and
rebranding your nice set of combinators as filter combinators would
greatly improve the library. In particular, mastering arrows, like
Manuel does in chapter 2 of his thesis, would become superfluous; an
advantage that is similar to the advantage of not using monads, as you note.


PS:
Interestingly, this whole discussion is caused by just a small technical
restriction of type classes: XMLArrow has to be a newtype because  a -> [b]
cannot be made an instance of  Arrow . You can make it either an
arrow or a monad, but not both; even though it actually is both.
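
(For the record, the newtype in question looks roughly as follows; this is
only a sketch of the standard list-function arrow, close to what HXT's LA
newtype does:)

    import Prelude hiding (id, (.))
    import Control.Category (Category(..))
    import Control.Arrow (Arrow(..))

    newtype LA a b = LA { runLA :: a -> [b] }

    instance Category LA where
      id          = LA (\x -> [x])
      LA g . LA f = LA (concatMap g . f)

    instance Arrow LA where
      arr f        = LA (\x -> [f x])
      first (LA f) = LA (\(x, c) -> [ (y, c) | y <- f x ])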

PPS:
By the way, the reason why I was preferring monad combinators is that
they are a natural extension of lists. For instance, we have

   deep :: (XmlTree -> [XmlTree]) -> XmlTree -> [XmlTree]
   deep f xml = [y | x <- children xml, y <- f x `orElse` deep f x]
       where
       [] `orElse` ys = ys
       xs `orElse` _  = xs

which can also be written as

   deep f xml = do
       x <- children xml
       f x `orElse` deep f x


Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Increasing number of parameters

2010-10-15 Thread Kevin Jardine
Jacek,

I haven't been following this thread in any detail, so I apologise if
I misunderstand your goal, but the ctm function in the polyToMonoid
library (which maps its parameters to any specified monoid) appears to
work in just this way.

It keeps consuming parameters until you hand it to the trm function to
deliver the final result. More documentation here:

http://hackage.haskell.org/packages/archive/polyToMonoid/0.1/doc/html/Data-PolyToMonoid.html

Kevin

On Oct 15, 11:38 am, Jacek Generowicz jacek.generow...@cern.ch
wrote:
 Thanks Brandon!

 I really like the addParam utility,

  value val prompt = Question prompt (show val) (readCheck val)

  addParam :: (Show a) => (funTy -> String -> qty) -> (a -> funTy) -> String -> (a -> qty)
  addParam qmakr fun string v = qmakr (fun v) (string ++ " " ++ show v)

  prefix1 = addParam value
  prefix2 = addParam prefix1
  prefix3 = addParam prefix2

 but my crusty and sleep-deprived brain is not really grokking the  
 internal plumbing.

 So I'm trying to get to grips with a simpler variation on the same  
 theme, and I'm still failing. I'm trying to write something along the  
 lines of

 addArg :: nArgFn -> a -> nPlus1ArgFn
 addArg fn a = (a+) `op` fn
   where op = something which applies its right parameter to however
              many arguments it needs and feeds the result to the left parameter

 in order to allow me to say

 sum2 = (+)
 sum3 = addArg sum2
 sum4 = addArg sum3

 etc.

 ___
 Haskell-Cafe mailing list
 haskell-c...@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Increasing number of parameters

2010-10-15 Thread Jacek Generowicz


On 2010 Oct 15, at 11:53, Kevin Jardine wrote:


Jacek,

I haven't been following this thread in any detail, so I apologise if
I misunderstand your goal,


My goal (in this thread, at least) is to become a better Haskell  
programmer, rather than to actually write any specific program. Yes,  
there are specific goals cited as examples, but the overall purpose is  
the journey, rather than the destination: I want to learn to walk and  
to run, rather than to get anywhere, just yet.



but the ctm function in the polyToMonoid
library (which maps its parameters to any specified monoid) appears to
work in just this way.


Yes, I noticed your earlier announcement. Yes, I recognized that it's  
pertinent to my last message. Yes, I've stored it in my (rapidly  
growing) list of things that Haskell Cafe has thrown at me that I  
should look into more deeply :-)


But my current short-term goal is to understand the plumbing in a  
function that Brandon supplied, and to acquire the ability to write  
this kind of function myself in my sleep, with my hands tied behind my  
back, while the walls are falling all around me. At the moment I'm not  
managing to write it at all :-(



It keeps consuming parameters until you hand it to the trm function to
deliver the final result. More documentation here:


Sounds a bit like the scheme I use for curried functions in Python.  
Though in Python I also have the option of calling the function with  
zero arguments to indicate termination, rather than terminating more  
explicitly by giving it to a terminate function.


(Curried functions in Python? Can you tell that there's a Haskell  
programmer dying to get out ? :-)


I've thrown in an example at the end, in case anybody is interested.


http://hackage.haskell.org/packages/archive/polyToMonoid/0.1/doc/html/Data-PolyToMonoid.html


It's already in my bookmarks, but thanks for taking the time to bring  
it to my attention.



===

from functools import partial

def curry(fn):
    """Function decorator. Curries its argument. The curried version
    collects all positional and keyword arguments it is given, until
    it is called with an empty argument list, at which point it
    applies the function to all the collected arguments."""

    def curried_function(*args, **kwds):
        if not (args or kwds):
            return fn()
        else:
            it = partial(fn, *args, **kwds)
            try:
                it.__name__ = fn.__name__
            except AttributeError:
                pass
            return curry(it)

    try:
        curried_function.__name__ = fn.__name__ + ' (curried)'
    except AttributeError:
        pass

    curried_function.fn = fn
    return curried_function


@curry
def collect(*args, **kwds):
    return "I've collected: %s %s" % (args, kwds)

print collect                # <function collect (curried) at 0x712db0>
print collect(1)             # <function collect (curried) at 0x712d30>
print collect(1)(2,3,c=4)    # <function collect (curried) at 0x712bf0>
print collect(1)(2,3,c=4)()  # I've collected: (1, 2, 3) {'c': 4}

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Who is afraid of arrows, was Re: ANNOUNCE: Haskell XML Toolbox Version 9.0.0

2010-10-15 Thread Malcolm Wallace


On 15 Oct 2010, at 10:44, Heinrich Apfelmus wrote:


Personally, I would be much happier with the slogan HXT = XML
transformations with filters. Browsing through Manuel's thesis, I
discover that your combinators are quite slick (  , choiceA , when,
guards ), it's just that they are a very specialized subset of the
general arrow combinators. I think that dropping the arrows and
rebranding your nice set of combinators as filter combinators would
greatly improve the library.


But then, HXT's filter combinators would return to being rather like  
HaXml's filter combinators, where the concept was first introduced.   
Personally, I'm very happy that customers avoid HXT (due to the  
complexity of the arrow interface), because that means more customers  
for HaXml...  :-)


Regards,
Malcolm

P.S. Coming soon in the next release of HaXml: full support for xmlns  
namespaces, and an XSDToHaskell translator.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Client-extensible heterogeneous types (Duck-typed variadic functions?)

2010-10-15 Thread Jacek Generowicz
Using Brandon's code as a starting point (as it's far neater than  
mine), let's try asking some questions about fractions (I've included  
the whole program at the end).


questions = [ addition 1 2, addition (1%2) (1%3) ]

This works, but the fractions are shown as "1 % 2", and to make it
presentable to non-Haskellers, we have to change that to "1/2".


In order to do this, I tried to replace show with my own version which  
I call view (in type class View). At this point I get


../arithmetic/hackBrandon.hs:63:23:
Ambiguous type variable `t' in the constraints:
  `Num t'
arising from the literal `1'
 at ../arithmetic/hackBrandon.hs:63:23
  `View t'
arising from a use of `addition'
 at ../arithmetic/hackBrandon.hs:63:14-25
  `Read t'
arising from a use of `addition'
 at ../arithmetic/hackBrandon.hs:63:14-25
Probable fix: add a type signature that fixes these type  
variable(s)



My problem is that I don't see where I could add a type signature, but  
still keep


   addition :: a -> a -> Question

polymorphic.

 === Here's the code demonstrating the problem =


{-# LANGUAGE NoMonomorphismRestriction #-}
import System.IO (hFlush, stdout)
import Data.Ratio

data Result = Correct | Improve String | Huh String | Incorrect String
  deriving Show

data Question = Question { ask    :: String
                         , answer :: String
                         , check  :: String -> Result }

bool2result True  = Correct
bool2result False = Incorrect ""

readCheckBy :: (Read a) => (a -> Bool) -> String -> Result
readCheckBy pred str =
  case reads str of [(val,"")] -> bool2result (pred val)
                    _          -> Huh ""

readCheck :: (Read a, Eq a) => a -> String -> Result
readCheck v s = readCheckBy (==v) s

-- customized show

class View a where
view :: a - String

instance View Int where
view = show

instance (Integral n) => View (Ratio n) where
view = show

-- helpers

value val prompt = Question prompt (view val) (readCheck val)

infix2 op symbol a b = value (op a b) (unwords [view a, symbol, view b])

addParam :: (View a) => (funTy -> String -> qty) -> (a -> funTy) -> String -> (a -> qty)
addParam qmakr fun string v = qmakr (fun v) (string ++ " " ++ view v)

prefix1 = addParam value
prefix2 = addParam prefix1
prefix3 = addParam prefix2

-- question 'types'

addition   = infix2 (+) "+"

questions = [ addition 1 2
, addition (1%2) (1%3)
]

test :: Question -> IO ()
test q = do
  putStr $ ask q ++ " = "
  hFlush stdout
  reply <- getLine
  putStrLn $ show $ check q reply

main = mapM_ test questions

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Client-extensible heterogeneous types (Duck-typed variadic functions?)

2010-10-15 Thread Jacek Generowicz


On 2010 Oct 15, at 13:32, Jacek Generowicz wrote:


   questions = [ addition 1 2, addition (1%2) (1%3) ]



My problem is that I don't see where I could add a type signature,  
but still keep


  addition :: a -> a -> Question

polymorphic.


Well, OK, I could write

addition 1 (2 :: Int)

inside the question list, but that's rather ugly, and it would be  
immensely annoying to have to do this for every specific question.


 Is there anywhere else it could go ?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] deploying applications that use plugins

2010-10-15 Thread Stefan Kersten
hi all,

i am working on an application that evaluates haskell code entered by the user
using the plugins package [1]; now i want to deploy the application as a
self-contained .app bundle on OSX, i.e. the end user should not have to install
anything in addition.

what do i need to bundle in order for plugins to work on a vanilla machine? i
suppose i need ghc in PATH and have to ship modified versions of the
package.conf files that point to the corresponding libraries within the
application bundle? is there a parser for package.conf files? any pointers would
be greatly appreciated!

thanks,
sk

[1] http://hackage.haskell.org/package/plugins
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] All binary strings of a given length

2010-10-15 Thread rgowka1
Hi -

How can I generate all binary strings of a given length? The type
signature would be something like

genbin :: Int -> [String]

For example genbin 2 would give ["00","11","01","10"] and genbin 3
would give ["000","001","010","011","100","101","110","111"] etc.

thanks..
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] All binary strings of a given length

2010-10-15 Thread Eugene Kirpichov
genbin = flip replicateM "01"

2010/10/15 rgowka1 rgow...@gmail.com:
 Hi -

 How can I generate all binary string of a given length? The type
 signature would something like -

 genbin :: Int -> [String]

 For example genbin 2 would give ["00","11","01","10"] and genbin 3
 would give ["000","001","010","011","100","101","110","111"] etc.

 thanks..
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Senior Software Engineer,
Grid Dynamics http://www.griddynamics.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] All binary strings of a given length

2010-10-15 Thread Eugene Kirpichov
Here's why it works:

genbin 3 = replicateM 3 "01" = (unfolding replicateM) do { x1 <- "01";
x2 <- "01"; x3 <- "01"; return [x1,x2,x3] } = your desired result
(enumerating all combinations of x1,x2,x3 with each being '0' or '1').

2010/10/15 Eugene Kirpichov ekirpic...@gmail.com:
 genbin = flip replicateM "01"

 2010/10/15 rgowka1 rgow...@gmail.com:
 Hi -

 How can I generate all binary string of a given length? The type
 signature would something like -

 genbin :: Int -> [String]

 For example genbin 2 would give ["00","11","01","10"] and genbin 3
 would give ["000","001","010","011","100","101","110","111"] etc.

 thanks..
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 Senior Software Engineer,
 Grid Dynamics http://www.griddynamics.com/




-- 
Eugene Kirpichov
Senior Software Engineer,
Grid Dynamics http://www.griddynamics.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] allocation for pure FFI functions

2010-10-15 Thread Nicolas Pouillard
On Fri, 15 Oct 2010 09:07:22 +0100, Duncan Coutts 
duncan.cou...@googlemail.com wrote:
 On Thu, 2010-10-14 at 17:45 +, Johannes Waldmann wrote:
  Hi. I wonder how to do the following properly.
  
  I have one (large) C  type, let's call it T,
  and I want to sell it as an abstract type in Haskell.
  
  I want to use C functions as if they were of type T -> T
  (pure function, returns a modified copy of the input)
  and the question is, how to do the memory allocation for that,
  in particular, how to avoid  IO  showing up 
  in the (visible) types on the Haskell side:
  
  I don't want IO because I don't want to declare some artificial
  order of execution - instead I want lazy evaluation.
  E.g., I might have some Haskell record with a T component
  which may or may not be evaluated (accessed) at all.
 
 It is exactly for this purpose that the Haskell FFI library includes
 unsafePerformIO. This is basically *the* legitimate use case for it, so
 you don't need to feel bad about it.

I still feel bad about it. It's so easy to turn unsafePerformIO into
unsafeCoerce that it can well happen by mistake. I would like to have
an unsafePerformIO that is only unsafe w.r.t. performing effects, not
breaking the type system. Here is a suggestion; it may not be new, but
I have never seen it applied to unsafePerformIO:

unsafePerformIO :: Typeable a => IO a -> a
unsafePerformIO = ... same code ...

Provided that Typeable instances are all generated by the compiler, this
has the desired effect of preventing generalization of mutable data.
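
(For reference, the construction being ruled out is the classic one below,
shown here only as a sketch; the point is that the signature of `ref'
generalizes over a, which a Typeable constraint would forbid:)

    import Data.IORef (IORef, newIORef, writeIORef, readIORef)
    import System.IO.Unsafe (unsafePerformIO)

    ref :: IORef [a]                     -- bogus polymorphic reference
    ref = unsafePerformIO (newIORef [])
    {-# NOINLINE ref #-}

    coerceDemo :: IO ()
    coerceDemo = do
      writeIORef ref [42 :: Int]         -- stored at type [Int]
      s <- readIORef ref
      putStrLn (s :: String)             -- read back at type String: typechecks,
                                         -- but crashes or prints garbage at runtime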

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] All binary strings of a given length

2010-10-15 Thread Michael Snoyman
Not the most efficient, but easy to understand:

genbin 0 = []
genbin 1 = ["0", "1"]
genbin i =
    map ('0' :) x ++ map ('1' :) x
  where
    x = genbin $ i - 1


On Fri, Oct 15, 2010 at 2:21 PM, rgowka1 rgow...@gmail.com wrote:
 Hi -

 How can I generate all binary string of a given length? The type
 signature would something like -

 genbin :: Int -> [String]

 For example genbin 2 would give ["00","11","01","10"] and genbin 3
 would give ["000","001","010","011","100","101","110","111"] etc.

 thanks..
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: All binary strings of a given length

2010-10-15 Thread rgowka1
Amazing, will never find this in any other language. But ghci crashes
for bigger input, like genbin 20. How to scale this function?

On 10/15/10, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Here's why it works:

  genbin 3 = replicateM 3 "01" = (unfolding replicateM) do { x1 <- "01";
  x2 <- "01"; x3 <- "01"; return [x1,x2,x3] } = your desired result
  (enumerating all combinations of x1,x2,x3 with each being '0' or '1').

 2010/10/15 Eugene Kirpichov ekirpic...@gmail.com:
  genbin = flip replicateM "01"

 2010/10/15 rgowka1 rgow...@gmail.com:
 Hi -

 How can I generate all binary string of a given length? The type
 signature would something like -

 genbin :: Int - [String]

 For example genbin 2 would give [00,11,01,10] and genbin 3
 would give [000,001,010,011,100,101,110,111] etc..

 thanks..
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 Senior Software Engineer,
 Grid Dynamics http://www.griddynamics.com/




 --
 Eugene Kirpichov
 Senior Software Engineer,
 Grid Dynamics http://www.griddynamics.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: All binary strings of a given length

2010-10-15 Thread Aleksandar Dimitrov

On Fri, 15 Oct 2010 14:34:42 +0200, rgowka1 rgow...@gmail.com wrote:


Amazing, will never find this in any other languagw. But ghci crashes
for bigger input. Like genbin 20. How to scale this function?


Well, scaling this isn't really possible, because of its complexity. It  
generates all permutations of a given string with two states for each  
position. In regular languages, this is the language {1,0}^n, n being the  
length of the string. This means that there are 2^n different strings in  
the language. For 20, that's already 1048576 different Strings! Strings  
are furthermore not really the best way to encode your output. Numbers  
(i.e. bytes) would be much better. You could generate them, and only  
translate into strings when needed.


HTH,
Aleks
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: All binary strings of a given length

2010-10-15 Thread Eugene Kirpichov
Actually my ghci doesn't crash for genbin 25 (haven't tried further),
though it eats quite a bit of memory.
How are you going to use these bit strings? Do you need all of them at once?
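
If they only need to be consumed one at a time, laziness already does most
of the work; e.g. a small sketch (restating the replicateM definition from
above so it is runnable on its own):

    import Control.Monad (replicateM)

    genbin :: Int -> [String]
    genbin = flip replicateM "01"

    -- a streaming consumer keeps memory flat; only retaining the whole
    -- list at once is expensive
    main :: IO ()
    main = mapM_ putStrLn (genbin 20)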

2010/10/15 Aleksandar Dimitrov aleks.dimit...@googlemail.com:
 On Fri, 15 Oct 2010 14:34:42 +0200, rgowka1 rgow...@gmail.com wrote:

 Amazing, will never find this in any other languagw. But ghci crashes
 for bigger input. Like genbin 20. How to scale this function?

 Well, scaling this isn't really possible, because of its complexity. It
 generates all permutations of a given string with two states for each
 position. In regular languages, this is the language {1,0}^n, n being the
 length of the string. This means that there are 2^n different strings in the
 language. For 20, that's already 1048576 different Strings! Strings are
 furthermore not really the best way to encode your output. Numbers (i.e.
 bytes) would be much better. You could generate them, and only translate
 into strings when needed.

 HTH,
 Aleks
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Senior Software Engineer,
Grid Dynamics http://www.griddynamics.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] dph question

2010-10-15 Thread Warren Harris
I trying to learn a bit about data parallel haskell, and started from the
wiki page here: http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell.
Two questions:

The examples express the dot product as:

dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]

Unless I'm missing something, shouldn't this actually be:

dotp_double xs ys = sumP [:x * y | x <- xs, y <- ys:]

Second, when I run Main with the prescribed 1 element array, everything
seems to work quite nicely. The task takes about 2 seconds on my 4 processor
x86_64, and threadscope shows all processors nicely utilized. However, when
bumping this to 10 elements, rather than taking 10x longer as I
expected, the process never terminates. During one run I even lost control
of my machine and needed to do a hard reset. Are there known limits to the
array sizes that can be handled with dph, or can someone suggest what might
be going wrong here? Thanks,

Warren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Fwd: [Haskell-cafe] Fuzzy time deltas

2010-10-15 Thread Alberto G. Corona
Michael,

The package Workflow has persistent timeouts (can wait for years and
restart on system failure if embedded in the workflow monad, although
it can run in the IO monad, with no recovery). They are composable
with any action in the STM monad with orElse:


    flag <- getTimeoutFlag $ 5*24*60*60    -- wait exactly 5 days, even
                                            -- if the program restarts
    ap   <- step . atomically $ readSomewhere >> return False
                      `orElse` waitUntilSTM flag >> return True

    case ap of
        False -> print "something received"
        True  -> print "timeout"



step lifts it from the IO monad to the workflow monad, and gives it
persistence and recovery.

without step, it runs in the IO monad (No recovery on system failure):

flag <- transientTimeout $ 5*24*60*60   -- wait 5 days; the timeout
                                        -- restarts in case of failure
ap   <- atomically $ readSomewhere >> return False
           `orElse` waitUntilSTM flag >> return True
case ap of
  False -> print "something received"
  True  -> print "timeout"

--
transientTimeout t = do
  flag <- atomically $ newTVar False
  forkIO $ threadDelay (t * 100) >> atomically (writeTVar flag True)
           >> myThreadId >>= killThread
  return flag


2010/10/14 Michael Snoyman mich...@snoyman.com:
 Hey all,

 Is there a library that supports fuzzy time deltas? For example, given
 two UTCTimes (or something like that) it could produce:

 43 seconds
 13 minutes
 17 hours
 4 days
 8 months

 I want to use it for the news feature on Haskellers. It's not that
 hard to write, just wondering if it's already been done.

 Michael
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: All binary strings of a given length

2010-10-15 Thread Daniel Gorín
I expect this one to run in constant space:

import Data.Bits

genbin :: Int -> [String]
genbin n = map (showFixed n) [0..2^n-1::Int]
    where showFixed n i = map (bool '1' '0' . testBit i) [n-1,n-2..0]
          bool t f b = if b then t else f

Daniel

On Oct 15, 2010, at 9:43 AM, Eugene Kirpichov wrote:

 Actually my ghci doesn't crash for genbin 25 (haven't tried further),
 though it eats quite a bit of memory.
 How are you going to use these bit strings? Do you need all of them at once?
 
 2010/10/15 Aleksandar Dimitrov aleks.dimit...@googlemail.com:
 On Fri, 15 Oct 2010 14:34:42 +0200, rgowka1 rgow...@gmail.com wrote:
 
 Amazing, will never find this in any other languagw. But ghci crashes
 for bigger input. Like genbin 20. How to scale this function?
 
 Well, scaling this isn't really possible, because of its complexity. It
 generates all permutations of a given string with two states for each
 position. In regular languages, this is the language {1,0}^n, n being the
 length of the string. This means that there are 2^n different strings in the
 language. For 20, that's already 1048576 different Strings! Strings are
 furthermore not really the best way to encode your output. Numbers (i.e.
 bytes) would be much better. You could generate them, and only translate
 into strings when needed.
 
 HTH,
 Aleks
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
 
 -- 
 Eugene Kirpichov
 Senior Software Engineer,
 Grid Dynamics http://www.griddynamics.com/
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: All binary strings of a given length

2010-10-15 Thread rgowka1
Thanks Daniel.

But genbin 32 gives an empty list; it works up to 31.

On Fri, Oct 15, 2010 at 9:05 AM, Daniel Gorín dgo...@dc.uba.ar wrote:
 I expect this one to run in constant space:

 import Data.Bits

  genbin :: Int -> [String]
  genbin n = map (showFixed n) [0..2^n-1::Int]
      where showFixed n i = map (bool '1' '0' . testBit i) [n-1,n-2..0]
            bool t f b = if b then t else f

 Daniel

 On Oct 15, 2010, at 9:43 AM, Eugene Kirpichov wrote:

 Actually my ghci doesn't crash for genbin 25 (haven't tried further),
 though it eats quite a bit of memory.
 How are you going to use these bit strings? Do you need all of them at once?

 2010/10/15 Aleksandar Dimitrov aleks.dimit...@googlemail.com:
 On Fri, 15 Oct 2010 14:34:42 +0200, rgowka1 rgow...@gmail.com wrote:

 Amazing, will never find this in any other languagw. But ghci crashes
 for bigger input. Like genbin 20. How to scale this function?

 Well, scaling this isn't really possible, because of its complexity. It
 generates all permutations of a given string with two states for each
 position. In regular languages, this is the language {1,0}^n, n being the
 length of the string. This means that there are 2^n different strings in the
 language. For 20, that's already 1048576 different Strings! Strings are
 furthermore not really the best way to encode your output. Numbers (i.e.
 bytes) would be much better. You could generate them, and only translate
 into strings when needed.

 HTH,
 Aleks
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 Senior Software Engineer,
 Grid Dynamics http://www.griddynamics.com/
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: All binary strings of a given length

2010-10-15 Thread Steve Schafer
On Fri, 15 Oct 2010 09:16:58 -0400, rgowka1 rgow...@gmail.com wrote:

But genbin 32 gives an empty list.. works till 31.

That's because Daniel uses values of type Int as intermediate storage
during the computation, and Int values are only 32 bits long. By
replacing Int with Integer (which does not have that limitation), you
can make it work for larger numbers/longer strings:

 genbin n = map (showFixed n) [0..2^n-1::Integer]

-Steve Schafer
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: dph question

2010-10-15 Thread steffen
 I trying to learn a bit about data parallel haskell, and started from the
 wiki page here:http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell.
 Two questions:

 The examples express the dot product as:

 dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]

 Unless I'm missing something, shouldn't this actually be:

 dotp_double xs ys = sumP [:x * y | x <- xs, y <- ys:]

No, array comprehension desugaring works the same way as list
comprehension desugaring, so this is correct:

dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]

After desugaring this will be translated into (simplified):

dotp_double xs ys = sumP (zipWithP (*) xs ys)

which will multiply the arrays element wise and sum the result.

The other definition

dotp_double xs ys = sumP [:x * y | x <- xs, y <- ys:]

will be translated into (something equivalent):

dotp_double xs ys = sumP (concatMapP (\x -> mapP (\y -> x * y) ys) xs)

which definitely is not the dot product.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] dph question

2010-10-15 Thread Daniel Fischer
On Friday 15 October 2010 14:59:18, Warren Harris wrote:
 I trying to learn a bit about data parallel haskell, and started from
 the wiki page here:
 http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell. Two
 questions:

 The examples express the dot product as:

  dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]

  Unless I'm missing something, shouldn't this actually be:

  dotp_double xs ys = sumP [:x * y | x <- xs, y <- ys:]


No, it's supposed to be a parallel list comprehension, the dot product is

sum $ zipWith (*) xs ys

and the

{ blah x y | x <- xs | y <- ys }

syntax (where {, } stand in for [, ] in parallel list comprehensions and 
for [:, :] in parallel array comprehensions) means

{ blah x y | (x,y) <- zip xs ys }
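
The same desugaring can be tried with ordinary lists via the
ParallelListComp extension; a small sketch:

    {-# LANGUAGE ParallelListComp #-}

    dotp :: Num a => [a] -> [a] -> a
    dotp xs ys = sum [ x * y | x <- xs | y <- ys ]
    -- dotp [1,2,3] [4,5,6] == 32, the same as sum (zipWith (*) xs ys)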

 Second, when I run Main with the prescribed 1 element array,
 everything seems to work quite nicely. The task takes about 2 seconds on
 my 4 processor x86_64, and threadscope shows all processors nicely
 utilized. However, when bumping this to 10 elements, rather than
 taking 10x longer as I expected, the process never terminates. During
 one run I even lost control of my machine and needed to do a hard reset.
 Are there known limits to the array sizes that can be handled with dph,
 or can someone suggest what might be going wrong here? Thanks,

 Warren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] dph question

2010-10-15 Thread Warren Harris
Got it - thanks. Any idea about the run-away process problem? Thanks,

Warren

On Fri, Oct 15, 2010 at 9:32 AM, Daniel Fischer daniel.is.fisc...@web.dewrote:

 On Friday 15 October 2010 14:59:18, Warren Harris wrote:
  I trying to learn a bit about data parallel haskell, and started from
  the wiki page here:
  http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell. Two
  questions:
 
  The examples express the dot product as:
 
   dotp_double xs ys = sumP [:x * y | x <- xs | y <- ys:]

   Unless I'm missing something, shouldn't this actually be:

   dotp_double xs ys = sumP [:x * y | x <- xs, y <- ys:]
 

 No, it's supposed to be a parallel list comprehension, the dot product is

 sum $ zipWith (*) xs ys

 and the

  { blah x y | x <- xs | y <- ys }

 syntax (where {, } stand in for [, ] in parallel list comprehensions and
 for [:, :] in parallel array comprehensions) means

  { blah x y | (x,y) <- zip xs ys }

  Second, when I run Main with the prescribed 1 element array,
  everything seems to work quite nicely. The task takes about 2 seconds on
  my 4 processor x86_64, and threadscope shows all processors nicely
  utilized. However, when bumping this to 10 elements, rather than
  taking 10x longer as I expected, the process never terminates. During
  one run I even lost control of my machine and needed to do a hard reset.
  Are there known limits to the array sizes that can be handled with dph,
  or can someone suggest what might be going wrong here? Thanks,
 
  Warren


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Increasing number of parameters

2010-10-15 Thread Jacek Generowicz


On 2010 Oct 15, at 11:38, Jacek Generowicz wrote:


[...]
So I'm trying to get to grips with a simpler variation on the same  
theme, and I'm still failing. I'm trying to write something along  
the lines of


addArg :: nArgFn -> a -> nPlus1ArgFn
addArg fn a = (a+) `op` fn
    where op = something which applies its right parameter to however
               many arguments it needs and feeds the result to the left parameter


in order to allow me to say

sum2 = (+)
sum3 = addArg sum2
sum4 = addArg sum3

etc.



-- OK, I've understood.

-- You use an accumulator to keep track of what has been done with the
-- arguments that have been seen so far, and addArg takes one more
-- argument, each time, and mixes it in with what is already there.

-- I smell a monad.

addArgSum :: (Num a) => (a -> t) -> a -> a -> t
addArgSum fn acc arg = fn (acc + arg)

sum1' = id
sum2' = addArgSum sum1'
sum3' = addArgSum sum2'

-- And here's a more general version.

addArg combine fn acc arg = fn (combine arg acc)

sum1 = id
sum2 = addArg (+) sum1
sum3 = addArg (+) sum2
sum4 = addArg (+) sum3

-- But I don't really get why the following leads to complaints about
-- infinite types.

-- sumN n = iterate (addArg (+)) id

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: darcs 2.5 release candidate 1

2010-10-15 Thread Reinier Lamers
The darcs team would like to announce the immediate availability of darcs 2.5
release candidate 1 (also known as darcs 2.4.99.1 due to Cabal restrictions). 
If no blocking issues are found in the coming week, we will finally release 
darcs 2.5.

Important changes since darcs 2.4.4 are:

   * trackdown can now do binary search with the --bisect option
   * darcs always stores patch metadata encoded with UTF-8
   * diff now supports the --index option
   * amend-record now supports the --ask-deps option
   * apply now supports the --match option
   * amend-record has a new --keep-date option
   * inventory-changing commands (like record and pull) now operate in
 constant time with respect to the number of patches in the repository
   * the push, pull, send and fetch commands no longer set the default
 repository by default
   * the --edit-description option is now on by default for the send command

Changes since the last beta release are:
   * Fix a bug that let users add files outside the working copy directory
   * Fix excessive network usage when pulling from an old-fashioned repository
   * Make sure darcs builds on Windows with Haskell Platform 2010.2.0.0

If you have installed the Haskell Platform or cabal-install, you can install
this beta release by doing:

  $ cabal update
  $ cabal install darcs-beta

Alternatively, you can download the tarball from 
http://darcs.net/releases/darcs-2.4.99.1.tar.gz and build it by hand as 
explained in the README file.

Kind Regards,
the darcs release manager,
Reinier Lamers  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] downloading GHC

2010-10-15 Thread Don Stewart
ketil:
 
 I needed GHC on a new machine, and went to download a binary tarball.
 
 First, I go to http://haskell.org/ghc/download_ghc_6_12_3.html,
 which kindly suggests to get the Haskell Platform instead.  
 
 Then, at http://hackage.haskell.org/platform/linux.html, I'm told that I
 first need GHC, and pointed back to the GHC download page.
 
 I'll manage, of course - but I think this is a bit more confusing than
 it need be.

Linux users don't have easy binary installers, usually. What can we do
about this bootstrapping problem?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bulletproof resource management

2010-10-15 Thread Florian Weimer
* Henning Thielemann:

 Some open/close pairs have corresponding 'with' functions, that are
 implemented using Exception.bracket. You can also use them within
 GHCi. I think using both manual resource deallocation and finalizers
 makes everything more complicated and more unreliable.

It seems that Exception.bracket does not work in all cases; see the
recent "MonadCatchIO, finally and the error monad" thread.

Anyway, the ability of closures (and threads) means that something
like Exception.bracket does not prevent access to closed handles, so
I still need an additional safety net.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Who is afraid of arrows, was Re: ANNOUNCE: Haskell XML Toolbox Version 9.0.0

2010-10-15 Thread Nikitiskiy Dmitriy
15.10.2010 15:03, Malcolm Wallace wrote:

 On 15 Oct 2010, at 10:44, Heinrich Apfelmus wrote:

 Personally, I would be much happier with the slogan HXT = XML
 transformations with filters. Browsing through Manuel's thesis, I
 discover that your combinators are quite slick (  , choiceA , when,
 guards ), it's just that they are a very specialized subset of the
 general arrow combinators. I think that dropping the arrows and
 rebranding your nice set of combinators as filter combinators would
 greatly improve the library.

 But then, HXT's filter combinators would return to being rather like
 HaXml's filter combinators, where the concept was first introduced.
 Personally, I'm very happy that customers avoid HXT (due to the
 complexity of the arrow interface), because that means more customers
 for HaXml... :-)

 Regards,
 Malcolm

 P.S. Coming soon in the next release of HaXml: full support for xmlns
 namespaces, and an XSDToHaskell translator.

Sorry, for offtopic.

But how would the equivalent of this filter look in HaXml:

data MyAttr = MyAttr String String

getAttrs = deep (isElem >>> hasName "SomeTag") >>>
    proc x -> do
        aname <- getAttrValue "Name" -< x
        atype <- getAttrValue "Type" -< x
        returnA -< MyAttr aname atype


I personally have switched to HaXml because it has lower memory
consumption, but for extracting attributes from nodes I haven't found
a standard method.

Regards,
Dmitriy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IORef memory leak

2010-10-15 Thread Evan Laforge
 The latter. atomicModifyIORef is harder though still, since it is a
 primop with the same properties as modifyIORef :/

 So would it make sense to create a strict modifyIORef' function?


 Very much so. In fact, I'd argue the vast majority of uses are for the
 WHNF-strict version.

I just fixed a leak with atomicModifyIORef that was exactly this
problem.  If it had at least been documented I wouldn't have had to do
that.  So I'm going to submit a library proposal to either 1)
strictify atomicModifyIORef, 2) add atomicModifyIORef', or at the
least 3) add documentation that says this function leaks.  Same
story for modifyIORef of course.

The only workaround I could find is to immediately read the value back
out and 'seq' on it, but it's ugly.

So two questions:

writeIORef doesn't have this problem.  If I am just writing a simple
value, is writeIORef atomic?  In other words, can I replace
'atomicModifyIORef r (const (x, ()))' with 'writeIORef r x'?

Any reason to not do solution 1 above?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bulletproof resource management

2010-10-15 Thread Antoine Latter
On Fri, Oct 15, 2010 at 11:09 AM, Florian Weimer f...@deneb.enyo.de wrote:
 * Henning Thielemann:

 Some open/close pairs have corresponding 'with' functions, that are
 implemented using Exception.bracket. You can also use them within
 GHCi. I think using both manual resource deallocation and finalizers
 makes everything more complicated and more unreliable.

 It seems that Exception.bracket does not work in all cases, see the
 recent "MonadCatchIO, finally and the error monad" thread.

 Anyway, the ability of closures (and threads) means that something
 like Exception.bracket does not prevent access to closed handles, so
 I still need an additional safety net.

That thread is for the function bracket provided by the package
MonadCatchIO. Control.Exception.bracket should work fine as far as I
know.
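
For reference, a minimal sketch of plain Control.Exception.bracket
(withFileSize is just an illustrative name):

import Control.Exception (bracket)
import System.IO

-- Open the handle, use it, and close it; hClose runs even if
-- hFileSize throws an exception.
withFileSize :: FilePath -> IO Integer
withFileSize path = bracket (openFile path ReadMode) hClose hFileSize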

Antoine
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] downloading GHC

2010-10-15 Thread Ketil Malde
Don Stewart d...@galois.com writes:

 First, I go to http://haskell.org/ghc/download_ghc_6_12_3.html,
 which kindly suggests to get the Haskell Platform instead.  

 Then, at http://hackage.haskell.org/platform/linux.html, I'm told that I
 first need GHC, and pointed back to the GHC download page.

 Linux users don't have easy binary installers, usually. What can we do
 about this bootstrapping problem?

Uh, AFAIC, it's only a documentation bug - the GHC page seems to say
that GHC comes with HP, the HP page tells you to go get GHC first.  I'd just
change it to something like:

GHC:  Click here to download... then go see Haskell Platform for the
standard set of libraries.

HP: Go to GHC HQ and get GHC, then make sure to come back here for the
libraries.

(Assuming I understand how this is set up - I usually just get stuff
from my distribution, and didn't get around to installing the HP)

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] downloading GHC

2010-10-15 Thread Ketil Malde
Don Stewart d...@galois.com writes:

 Linux users don't have easy binary installers, usually.

Speaking about which - this is made a lot more difficult than it need be
due to the way libc doesn't work with statically linked executables.
Basically, it seems to manually load hardwired dynamic libraries for
various functionality that you typically *don't* want in your compiler
(nsswitch and whatnot).

Perhaps there's a stripped down libc that ghc could link with that
avoids this?  Or some other solution to this.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] downloading GHC

2010-10-15 Thread Paulo Tanimoto
On Fri, Oct 15, 2010 at 1:47 PM, Ketil Malde ke...@malde.org wrote:

 Uh, AFAIC, it's only a documentation bug - the GHC page seems to say
 that GHC comes with HP, the HP page tells to go get GHC first.  I'd just
 change it to something like:

 GHC:  Click here to download... then go see Haskell Platform for the
 standard set of libraries.

 HP: Go to GHC HQ and get GHC, then make sure to come back here for the
 libraries.

 (Assuming I understand how this is set up - I usually just get stuff
 from my distribution, and didn't come around to installing the HP)


The warning on the GHC 6.12.3 download page [1] says:


Stop!

For most users, we recommend installing the Haskell Platform instead
of GHC. The current Haskell Platform release includes a recent GHC
release as well as some other tools (such as cabal), and a larger set
of libraries that are known to work together. This standalone GHC
6.12.3 release is aimed primarily at package maintainers and early
adopters.


That seems to be aimed at Windows (and Mac?) users, who would indeed
get a binary installer that contains GHC and the libraries from
Haskell Platform, right?  So in a sense "for most users" is right.
But I agree with you, we could improve the message.

Paulo

[1] http://www.haskell.org/ghc/download_ghc_6_12_3.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] An interesting paper from Google

2010-10-15 Thread Andrew Coppin

 http://k1024.org/~iusty/papers/icfp10-haskell-reagent.pdf

I'm sure some of you have seen this already. For those who lack the time 
or inclination to read through the (six) pages of this report, here's 
the summary...


We [i.e., the report authors] took a production Python system and 
rewrote bits of it in Haskell, some of which is now in production use. 
We conclude the following:


- Python lets you just do whatever the hell you want, but Haskell 
demands that you actually have a *plan* before you start churning out 
code and running it. The result is generally cleaner and more consistent 
when you get there.


- Haskell's much-criticised immutable data is actually an *advantage* 
for backtracking search problems.


- Haskell wins for thread-safety.

- ADTs are nicer than exceptions.

- The parts of Haskell stolen by Python aren't as nice in Python as they 
are in Haskell. [Well, duh.]


- We like what GHC provides for profiling.

- We are disappointed by what GHC provides for debugging.

- String is too slow. None of the alternatives seem to be widely 
supported. If your library consumes Strings and returns Strings, the 
fact that ByteString exists doesn't help you.


- Recent changes to GHC broke our code. (Specifically, extensible 
exceptions.) We were quite surprised that such a stable and mature 
system as GHC would do this to us.


- Haskell has quite a high barrier to entry. [Again, duh.]

The paper also contains an interesting section that basically says "we 
tried porting the Python implementation of XYZ into Haskell, but there 
wasn't really any advantage because it's all I/O". In my humble opinion, 
"it's all I/O" is a common beginner's mistake. Reading between the 
lines, it sounds like they wrote the whole thing in the IO monad, and 
then decided it looked just like the existing Python code so there 
wasn't much point in continuing. I guess it's one of the bad habits that 
imperative programmers get into. With a little more experience, you 
eventually figure out that you can limit the stuff actually in the IO 
monad to a surprisingly small section, and keep almost everything else in 
pure code, no matter how much the problem looks like it's "all I/O". But 
anyway, I'm only guessing from what I can actually see with my own eyes 
in the report itself.


I'm surprised about the profiler. They seem really, really impressed 
with it. Which is interesting to me, since I can never seem to get 
anything sensible out of it. It always seems to claim that my program is 
spending 80% of its runtime executing zipWith or something equally 
absurd. I'm unsurprised by their disappointment with debugging. I'm 
still quite surprised that there's no tool anywhere which will trivially 
print out the reduction sequence for executing an expression. You'd 
think this would be laughably easy, and yet nobody has done it yet.


Their comments about String are sadly true.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IORef memory leak

2010-10-15 Thread Gregory Collins
Evan Laforge qdun...@gmail.com writes:

 The only workaround I could find is to immediately read the value back
 out and 'seq' on it, but it's ugly.

Yep! C'est la vie unfortunately.

The way atomicModifyIORef works is that the new value isn't actually
evaluated at all; GHC just swaps the old value with a thunk which will
do the modification when the value is demanded.

It's done like that so that the atomic modification can be done with a
compare-and-swap CPU instruction; a fully-fledged lock would have to be
taken otherwise, because your function could do an unbounded amount of
work. While that's happening, other mutator threads could be writing
into your memory cell, having read the same old value you did, and then
*splat*, the souffle is ruined.

Once you're taking a lock, you've got yourself an MVar. This is why
IORefs are generally (always?) faster than MVars under contention; the
lighter-weight lock mechanism means mutator threads don't block; if the
CAS fails, atomicModifyIORef just tries again in a busy loop. (I think!)

Of course, the mutator threads themselves then tend to bump into each
other or do redundant work when it's time to evaluate the thunks (GHC
tries to avoid this using thunk blackholing). Contention issues here
have gotten radically better in recent versions of GHC I think.

Forgive me if I've gotten anything wrong here, I think Simon Marlow
might be the only person who *really* understands how all this stuff
works. :)
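
To make the lazy swap concrete, here is a tiny sketch (mine, not from
GHC's sources) of how the unevaluated chain builds up:

import Control.Monad (replicateM_)
import Data.IORef

main :: IO ()
main = do
    ref <- newIORef (0 :: Int)
    -- Each call just swaps in a new thunk; nothing is evaluated yet, so
    -- the IORef ends up holding a chain of 100000 pending (+1)s.
    replicateM_ 100000 $ atomicModifyIORef ref (\n -> (n + 1, ()))
    -- Only this read finally demands the value and collapses the chain.
    readIORef ref >>= print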


 So two questions:

 writeIORef doesn't have this problem.  If I am just writing a simple
 value, is writeIORef atomic?  In other words, can I replace
 'atomicModifyIORef r (const (x, ())' with 'writeIORef r x'?

 Any reason to not do solution 1 above?

Well if you're not inspecting or using the old value then it's safe to
just blow it away, yes.

Cheers,

G
-- 
Gregory Collins g...@gregorycollins.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] In what language...?

2010-10-15 Thread Andrew Coppin

 Yesterday I read a rather interesting paper:

http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf

It's fascinating stuff, and I *think* I understand the gist of what it's 
saying. However, the paper is utterly festooned with formulas that look 
so absurdly over-the-top that they might almost be a spoof of a 
mathematical formula rather than the real thing. A tiny fraction of the 
notation is explained in the paper, but the rest is simply taken to be 
obvious. The paper also uses several ordinary English words in a way 
that suggests that they are supposed to have a more specific technical 
meaning - but I have no idea what.


Does anybody have any idea which particular dialect of pure math this 
paper is speaking? (And where I can go read about it...)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] In what language...?

2010-10-15 Thread Gregory Collins
Andrew Coppin andrewcop...@btinternet.com writes:

 Does anybody have any idea which particular dialect of pure math this
 paper is speaking? (And where I can go read about it...)

It's pretty garden-variety programming language/type theory. I can
recommend Benjamin Pierce's Types and Programming Languages textbook
for an introduction to the material:
http://www.cis.upenn.edu/~bcpierce/tapl/

G
-- 
Gregory Collins g...@gregorycollins.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] In what language...?

2010-10-15 Thread Thomas DuBuisson
I think you would enjoy reading (and working) through TAPL[1] and/or
Software Foundations[2] if this interests you.

Cheers,
Thomas

[1] 
http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091
[2] http://www.cis.upenn.edu/~bcpierce/sf/

On Fri, Oct 15, 2010 at 1:36 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
  Yesterday I read a rather interesting paper:

 http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf

 It's fascinating stuff, and I *think* I understand the gist of what it's
 saying. However, the paper is utterly festooned with formulas that look so
 absurdly over-the-top that they might almost be a spoof of a mathematical
 formula rather than the real thing. A tiny fraction of the notation is
 explained in the paper, but the rest is simply taken to be obvious. The
 paper also uses several ordinary English words in a way that suggests that
 they are supposed to have a more specific technical meaning - but I have no
 idea what.

 Does anybody have any idea which particular dialect of pure math this paper
 is speaking? (And where I can go read about it...)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] In what language...?

2010-10-15 Thread Alexander Solla


On Oct 15, 2010, at 1:36 PM, Andrew Coppin wrote:

Does anybody have any idea which particular dialect of pure math  
this paper is speaking? (And where I can go read about it...)


It's some kind of typed logic with lambda abstraction and some notion  
of witnessing, using Gentzen (I think!) style derivation notation.   
Those "A |- B" things mean "A derives B".  The "|-" is also called a  
"Tee".  If your mail client can see underlining, formulas like


A, B
----
  A

mean:

A, B |- A

That's where the Tee gets its name.  It's a T under A, B.  The former  
notation is better for some uses, since it meshes with the method of  
truth trees.
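
As a concrete example of the style (a generic one, not taken from that
particular paper), here is the usual function application typing rule
in LaTeX, premises above the bar, conclusion below:

\[
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2 \qquad \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1\,e_2 : \tau_2}
\]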

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yi on Windows

2010-10-15 Thread Jeff Wheeler
On Thu, Oct 14, 2010 at 1:41 PM, Peter Marks pe...@indigomail.net wrote:

 If you start Yi with no config file, press any key, press h, choose a key
 binding, then save the file, you get a file c:\Users\peter\.yi\yi.hs

Oh, indeed. I didn't realize that. The behavior is defined in
Yi.Config.Default.nilKeymap and is very stupid about how it calculates
the config file location. We should be able to get this path from Dyre
instead.

 Hmm, although what I said is correct, it is not causing this problem.
 Windows 7 does use a different directory for local user data, but it
 implements some magic to make references to the old location access the new
 location. If you do a dir of C:\Users\peter, Local Settings doesn't
 exist, but if you dir C:\Users\peter\Local Settings\Cache\yi, you actually
 get the contents of C:\Users\peter\AppData\Local\Cache\yi, which does
 contain the file errors.txt! It is a sort of invisible symlink.

Woah . . .

 The actual problem I am having is that dyre tries to delete errors.txt
 straight after reading it with readFile. As readFile is lazy, the runtime is
 keeping the file open so, on Windows at least, it can't be deleted. I'm not
 really sure why it wants to delete the file though. I guess it is so that
 any warning messages are only shown the first time you launch after a
 compile, then deleted. I don't see why the errors file can't just be left so
 that you see errors whenever you launch. I'll try changing this later this
 evening (UK) and let you know if it works. The alternative would be to force
 the file to be read strictly then closed.

I think it's only important that they be deleted after a successful compile.
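
A sketch of the strict-read alternative mentioned above (readFileStrict
is just an illustrative name; only base is assumed):

import Control.Exception (evaluate)
import System.IO

-- Force the whole file before the handle is closed, so a subsequent
-- removeFile won't fail on Windows because the file is still open.
readFileStrict :: FilePath -> IO String
readFileStrict path = withFile path ReadMode $ \h -> do
    contents <- hGetContents h
    _ <- evaluate (length contents)
    return contents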

-- 
Jeff Wheeler

Undergraduate, Electrical Engineering
University of Illinois at Urbana-Champaign
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Strict Core?

2010-10-15 Thread Gregory Crosswhite

 Hey everyone,

Out of curiosity, are there any plans for GHC to eventually use the 
Strict Core language described in 
http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf?


Cheers,
Greg



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper from Google

2010-10-15 Thread Iustin Pop
On Fri, Oct 15, 2010 at 09:28:09PM +0100, Andrew Coppin wrote:
  http://k1024.org/~iusty/papers/icfp10-haskell-reagent.pdf
 
 I'm sure some of you have seen this already. For those who lack the
 time or inclination to read through the (six) pages of this report,
 here's the summary...

Nice summary, I hope you found the paper interesting!

 We [i.e., the report authors] took a production Python system and
 rewrote bits of it in Haskell, some of which is now in production
 use. We conclude the following:
 
 - Python lets you just do whatever the hell you want, but Haskell
 demands that you actually have a *plan* before you start churning
 out code and running it. The result is generally cleaner and more
 consistent when you get there.
 
 - Haskell's much-criticised immutable data is actually an
 *advantage* for backtracking search problems.
 
 - Haskell wins for thread-safety.
 
 - ADTs are nicer than exceptions.
 
 - The parts of Haskell stolen by Python aren't as nice in Python as
 they are in Haskell. [Well, duh.]

I'd say unfortunately, not just duh…

 - We like what GHC provides for profiling.
 
 - We are dissappointed by what GHC provides for debugging.
 
 - String is too slow. None of the alternatives seem to be widely
 supported. If your library consumes Strings and returns Strings, the
 fact that ByteString exists doesn't help you.
 
 - Recent changes to GHC broke our code. (Specifically, extensible
 exceptions.) We were quite surprised that such a stable and
 mature system as GHC would do this to us.
 
 - Haskell has quite a high barrier to entry. [Again, duh.]
 
 The paper also contains an interesting section that basically says
 we tried porting the Python implementation of XYZ into Haskell, but
 there wasn't really any advantage because it's all I/O. In my
 humble opinion, it's all I/O is a common beginner's mistake.
 Reading between the lines, it sounds like they wrote the whole thing
 in the IO monad, and then decided it looked just like the existing
 Python code so there wasn't much point in continuing.

Not quite (not all was in the I/O monad). It doesn't make sense to
rewrite 40K lines from language A into language B just for fun. But the
advantages were not as strong as they were for the balancing algorithms,
so not strong enough to justify any potential conversion. They were
strong, just not strong enough.

 I guess it's
 one of the bad habits that imperative programmers get into. With a
 little more experience, you eventually figure out that you can limit
 the stuff actually in the IO monad to a surprisingly small section,
 and so almost everything else in pure code, no matter how much the
 problem looks like it's all I/O. But anyway, I'm only guessing
 from what I can actually see with my own eyes in the report itself.

That's not how I would describe it (if I had to write it in a single
paragraph).

Basically, if you take a random, numerical/algorithmic problem, and you
write it in FP/Haskell, it's easy to show to most non-FP programmers why
Haskell wins on many accounts. But if you take a heavy I/O problem
(networking code, etc.), while Haskell is as good as Python, it is less
easy to show the strengths of the language. Yes, all the nice bits are
still there, but when you marshall data between network and your
internal structures the type system is less useful than when you just
write algorithms that process the internal data. Similar with the other
nice parts.

Now, if I were to start from scratch… :)

 I'm surprised about the profiler. They seem really, really impressed
 with it. Which is interesting to me, since I can never seen to get
 anything sensible out of it. It always seems to claim that my
 program is spending 80% of its runtime executing zipWith or
 something equally absurd.

I'm surprised that you're surprised :) The profiler is indeed awesome,
and in general I can manage to get an order of magnitude speedup on my
initial algorithms, if not more.

Even if it just tells me that zipWith is the slow part, that's enough.
I'd even say it's a very good hint where to start.

 I'm unsurprised by their
 disappointment with debugging. I'm still quite surprised that
 there's no tool anywhere which will trivially print out the
 reduction sequence for executing an expression. You'd think this
 would be laughably easy, and yet nobody has done it yet.

Indeed.

Thanks again for the summary :)

regards,
iustin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strict Core?

2010-10-15 Thread Andrew Coppin

 On 15/10/2010 10:27 PM, Gregory Crosswhite wrote:

 Hey everyone,

Out of curiosity, are there any plans for GHC to eventually use the 
Strict Core language described in 
http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf?


Is that because I just mentioned the paper?

Regardless, I'd be quite interested in the answer too. The paper looks 
quite promising...


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper from Google

2010-10-15 Thread Andrew Coppin

 On 15/10/2010 10:43 PM, Iustin Pop wrote:

On Fri, Oct 15, 2010 at 09:28:09PM +0100, Andrew Coppin wrote:

  http://k1024.org/~iusty/papers/icfp10-haskell-reagent.pdf

I'm sure some of you have seen this already. For those who lack the
time or inclination to read through the (six) pages of this report,
here's the summary...

Nice summary, I hope you found the paper interesting!


I often find it interesting seeing newcomers' opinions of Haskell. 
Usually that's in the form of a blog that's just a braindump of what a 
person has learned in half an hour of tinkering, following a tutorial. 
(And usually it either says "oh wow, this is amazing! I never knew 
writing programs could be so much FUN!" or it says "oh my God, I can't 
believe Haskell SUCKS so much! It doesn't even have an IDE yet. How 
primitive is that? And what the heck is a monomorphism anyway?!") It's a 
first for me to see a coherently written, measured account from people 
who actually do software for real, as it were.



- The parts of Haskell stolen by Python aren't as nice in Python as
they are in Haskell. [Well, duh.]

I'd say unfortunately, not just duh…


Well, yeah, I can see that angle. I'm just a hopeless Haskell fanatic, 
sorry. ;-)



The paper also contains an interesting section that basically says
we tried porting the Python implementation of XYZ into Haskell, but
there wasn't really any advantage because it's all I/O. In my
humble opinion, it's all I/O is a common beginner's mistake.
Reading between the lines, it sounds like they wrote the whole thing
in the IO monad, and then decided it looked just like the existing
Python code so there wasn't much point in continuing.

Not quite (not all was in the I/O monad). It doesn't make sense to
rewrite 40K of lines from language A into language B just for fun.


...you're talking to somebody who just spent an entire day implementing 
an 800-line JavaScript monstrosity that uses the DOM to programmatically 
generate SVG (hell, I didn't even know that was physically possible!) in 
order to do a real-time demonstration of Huffman coding. Just to see if 
I could do it. Just for the hell of it.


Really, the notion of actually getting *paid* to write software is quite 
alien to me. I'd imagine you prioritise things quite differently.



But
the advantages were not as strong as for the balancing algorithms to
justify any potential conversion. They were strong, just not strong
enough.



Basically, if you take a random, numerical/algorithmic problem, and you
write it in FP/Haskell, it's easy to show to most non-FP programmers why
Haskell wins on many accounts. But if you take a heavy I/O problem
(networking code, etc.), while Haskell is as good as Python, it is less
easy to show the strengths of the language. Yes, all the nice bits are
still there, but when you marshall data between network and your
internal structures the type system is less useful than when you just
write algorithms that process the internal data. Similar with the other
nice parts.


I forget whether it was Galois or Well-Typed who claimed that "every 
program we write is either a compiler or an interpreter". It depends on 
exactly how much low-level bit-fiddling your program domain actually 
requires of course, but most problems aren't nearly as I/O-centric as 
they look. (Then again, you're the one who's seen the code, not me.)



Now, if I were to start from scratch… :)


Hasn't every programmer said *that* before? ;-)


I'm surprised about the profiler. They seem really, really impressed
with it. Which is interesting to me, since I can never seen to get
anything sensible out of it. It always seems to claim that my
program is spending 80% of its runtime executing zipWith or
something equally absurd.

I'm surprised that you're surprised :) The profiler is indeed awesome,
and in general I can manage to get one factor of magnitude speedup on my
initial algorithms, if not more.

Even if it just tells me that zipWith is the slow part, that's enough.
I'd even say it's a very good hint where to start.


zipWith is a generic library function which always takes exactly the 
same amount of time. Unless you're using it so extensively that it's 
allocating huge amounts of memory or something, it would seem infinitely 
more likely that whatever function zipWith is *applying* should be the 
actual culprit, not zipWith itself.


Of course, I'm talking about profiling in time. GHC also enables you to 
profile in space as well. I'm not actually sure to which one you're 
referring. I haven't had much success with either. It's just too hard to 
figure out what the sea of numbers actually represents. (Since it's quite 
new, I'm assuming it's not the new ThreadScope functionality - which I 
haven't tried yet, but looks extremely cool...)



Thanks again for the summary :)

regards,
iustin


No problem. There's no charge for this service. ;-)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org

Re: [Haskell-cafe] Strict Core?

2010-10-15 Thread Gregory Crosswhite
 Yes, I had seen this paper before and wondered the same thing at the 
time, but it was only just now when you brought the paper up that I 
realized I could ask people about it here.  :-)


On 10/15/2010 03:01 PM, Andrew Coppin wrote:

 On 15/10/2010 10:27 PM, Gregory Crosswhite wrote:

 Hey everyone,

Out of curiosity, are there any plans for GHC to eventually use the 
Strict Core language described in 
http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf?


Is that because I just mentioned the paper?

Regardless, I'd be quite interested in the answer too. The paper looks 
quite promosing...


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] An interesting paper on VM-friendly GC

2010-10-15 Thread Andrew Coppin

 Somebody showed me this the other day, and I thought it was interesting:

http://www.cs.umass.edu/~emery/pubs/f034-hertz.pdf

Basically, we designed a garbage collector which tries to avoid 
touching memory pages that have been swapped out to disk just because we 
need to do a GC sweep. Which is a pretty obvious thing to do, when you 
think about it, and has several obvious performance implications. Maybe 
we should think about how GHC handles this?


On the other hand, their implementation uses a modified Linux kernel, 
and no sane person is going to recompile their OS kernel with a custom 
patch just to run Haskell applications, so we can't do quite as well as 
they did. But still, an interesting read...


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper from Google

2010-10-15 Thread Iustin Pop
On Fri, Oct 15, 2010 at 11:08:14PM +0100, Andrew Coppin wrote:
  On 15/10/2010 10:43 PM, Iustin Pop wrote:
 On Fri, Oct 15, 2010 at 09:28:09PM +0100, Andrew Coppin wrote:
 I'm surprised about the profiler. They seem really, really impressed
 with it. Which is interesting to me, since I can never seen to get
 anything sensible out of it. It always seems to claim that my
 program is spending 80% of its runtime executing zipWith or
 something equally absurd.
 I'm surprised that you're surprised :) The profiler is indeed awesome,
 and in general I can manage to get one factor of magnitude speedup on my
 initial algorithms, if not more.
 
 Even if it just tells me that zipWith is the slow part, that's enough.
 I'd even say it's a very good hint where to start.
 
 zipWith is a generic library function which always takes exactly the
 same amount of time. Unless you're using it so extensively that it's
 allocating huge amounts of memory or something, it would seem
 infinitely more likely that whatever function zipWith is *applying*
 should be the actual culprit, not zipWith itself.

I know about zipWith. And if the profile tells me I spend too much time
in zipWith, it means a few things:

- zipWith might have to force evaluation of the results, hence the
  incorrect attribution of costs
- if even after that zipWith is the culprit, it might be the way the
  lists are consumed (are they lazy-built?), and that might mean you
  just have to work around that via a different algorithm

Again, the fact that it tells you time is being spent in a library
function is not bad, not at all.

 Of course, I'm talking about profiling in time. GHC also enables you
 to profile in space as well. I'm not actually sure to which one
 you're referring.

In general, time profiling. Although the space profiling is useful too,
it gives you hints on what the (lazy) program does, as opposed to what
you think it does. The retainer graphs are cool, e.g. you might see that
some code hangs on to data more than you thought, and you can save some
heap and GC time due to that.

 I haven't had much success with either. It's just
 too hard to figure out what the sea of numbers actually represent.
 (Since it's quite new, I'm assuming it's not the new ThreadScope
 functionallity - which I haven't tried yet, but looks extremely
 cool...)

I haven't used ThreadScope yet, but it's on my todo list.

iustin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strict Core?

2010-10-15 Thread Andrew Coppin

 On 15/10/2010 11:10 PM, Gregory Crosswhite wrote:
 Yes, I had seen this paper before and wondered the same thing at the 
time, but it was only just now when you brought the paper up that I 
realized I could ask people about it here.  :-)


I wonder if anybody has a list somewhere of really cool stuff that we'd 
love to add to GHC if only we weren't constantly snowed under with 
PowerPC linker glitches and obscure type system errors... ;-)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper from Google

2010-10-15 Thread Andrew Coppin

 On 15/10/2010 11:18 PM, Iustin Pop wrote:

I know about zipWith. And if the profile tells me I spend too much time
in zipWith, it means a few things:

- zipWith might have to force evaluation of the results, hence the
   incorrect attribution of costs
- if even after that zipWith is the culprit, it might be the way the
   lists are consumed (are they lazy-built?), and that might mean you
   just have to workaround that via a different algorithm

Again, the fact that it tells you time is being spent in a library
function is not bad, not at all.


I remember writing a long, complex program that was really slow. So I 
looked at the profile, and discovered that it was spending 98% of the 
runtime in... Prelude.floor?! o_O I thought maybe the costs were just 
being mis-attributed. But on replacing Prelude.floor with some dark, 
mysterious GHC primitives, my program's runtime went from minutes to 
milliseconds.
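
(A rough illustration of the kind of substitution I mean - not
necessarily the exact code I ended up with - assuming GHC.Float
exports double2Int:)

import GHC.Float (double2Int)

-- double2Int truncates towards zero, so adjust downwards for negative
-- non-integral inputs to recover floor semantics.
fastFloor :: Double -> Int
fastFloor x = let n = double2Int x
              in if fromIntegral n > x then n - 1 else n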


So, yeah, profiling can sometimes help you to discover where the time is 
_really_ being spent, not where you _think_ it's being spent. (I would 
have expected the long complex numerical algorithm to be eating cycles, 
not a trivial conversion from floating-point to integers.) But 
personally, I usually find it almost intractably difficult to figure out 
precisely what's going on by looking at a time profile. (Maybe that says 
more about me than about GHC's time profiler...)



Of course, I'm talking about profiling in time. GHC also enables you
to profile in space as well. I'm not actually sure to which one
you're referring.

In general, time profiling. Although the space profiling is useful too,
it gives you hints on what the (lazy) program does, as opposed to what
you think it does. The retainer graphs are cool, e.g. you might see that
some code hangs on to data more than you fought, and you can save some
heap and GC time due to that.


Again, I have great difficulty figuring out exactly what the numbers 
mean. But it's definitely nice that GHC can spit out all these 
statistics for you just by recompiling your program. (Shame it slows 
down to a crawl, but I guess that's rather unavoidable...) Knowing how 
much GC time you're using is especially invaluable.



I haven't used ThreadScope yet, but it's on my todo list.


Yeah, that does look like an extremely cool piece of equipment. I hope 
they start adding things like space profiling and so forth to the same 
infrastructure. (As I understand it, they're intending to do so, it's 
just a question of developer time...)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Jacek Generowicz

-- Given a definition of view which is essentially a synonym for show:

class View a where
    view :: a -> String

instance View Int where
    view = show

-- why does "show 2" compile, while "view 2" gives an
-- 'Ambiguous type variable' error

fine  = view (2::Int)
noProblem = show 2
ambiguousTypeVariable = view 2

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Christopher Done
On 16 October 2010 00:47, Jacek Generowicz jacek.generow...@cern.ch wrote:
 -- why does show 2 compile, while view 2 gives an
 -- 'Ambiguous type variable' error

 fine                  = view (2::Int)
 noProblem             = show 2
 ambiguousTypeVariable = view 2

Don't integral literals default to Integer, of which there is a Show
instance but no View instance?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Ivan Lazar Miljenovic
On 16 October 2010 09:47, Jacek Generowicz jacek.generow...@cern.ch wrote:
 -- Given a definition of view which is essentially a synonym for show:

 class View a where
    view :: a -> String

 instance View Int where
    view = show

 -- why does show 2 compile, while view 2 gives an
 -- 'Ambiguous type variable' error

 fine                  = view (2::Int)
 noProblem             = show 2
 ambiguousTypeVariable = view 2

"2" is a generic number.  If you don't specify a type, it usually
defaults to Integer.  All Num instances that come in the Prelude have
Show instances, so no matter which gets picked "show 2" works.
However, when you say "view 2", ghc/ghci doesn't know that you want 2
to be an Int (as that's the only type you have a View instance for).

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper on VM-friendly GC

2010-10-15 Thread Gregory Crosswhite

 On 10/15/2010 03:15 PM, Andrew Coppin wrote:
On the other hand, their implementation uses a modified Linux kernel, 
and no sane person is going to recompile their OS kernel with a custom 
patch just to run Haskell applications, so we can't do quite as well 
as they did. But still, and interesting read...


Ah, but you are missing an important fact about the article:  it is not 
about improving garbage collection for Haskell, it is about improving 
collection for *Java*, which is a language in heavy use on servers.  If 
this performance gain really is such a big win, then I bet that it would 
highly motivate people to make this extension part of the standard 
Linux kernel, at which point we could use it in the Haskell garbage 
collector.


Cheers,
Greg
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Jacek Generowicz

On 2010 Oct 16, at 00:51, Ivan Lazar Miljenovic wrote:

On 16 October 2010 09:47, Jacek Generowicz  
jacek.generow...@cern.ch wrote:
-- Given a definition of view which is essentially a synonym for  
show:


class View a where
   view :: a -> String

instance View Int where
   view = show

-- why does show 2 compile, while view 2 gives an
-- 'Ambiguous type variable' error

fine  = view (2::Int)
noProblem = show 2
ambiguousTypeVariable = view 2


2 is a generic number.  If you don't specify a type, it usually
defaults to Integer.  All Num instances that come in the Prelude have
Show instances, so no matter which gets picked show 2 works.
However, when you say view 2 ghc/ghci doesn't know that you want 2
to be an Int (as that's the only type you have an instance for View
for).


Which implies that defining all instances of Num to be instances of  
View should do the trick, and that doesn't seem to work. See below.


On 2010 Oct 16, at 00:51, Christopher Done wrote:


Don't integral literals default to Integer, of which there is a Show
instance but no View instance?




Hmm, it doesn't seem to be that simple.

The phenomenology seems to be:

As far as entering view 2 into ghci is concerned, you need 'instance  
View Integer' or 'instance View Double'.


To get x = view 2 to compile in ghc, having all of Int, Integer,  
Float and Double as instances of View is still not enough.


I did all this in an environment where I had not imported any other  
Num instances, and :i Num in ghci showed only the 4 aforementioned  
types as instances.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IORef memory leak

2010-10-15 Thread Thomas Schilling
Correct, here's a video of Simon explaining the thunk blackholing
issue and its solution in GHC 7:

http://vimeo.com/15573590

On 15 October 2010 21:31, Gregory Collins g...@gregorycollins.net wrote:
 Evan Laforge qdun...@gmail.com writes:

 The only workaround I could find is to immediately read the value back
 out and 'seq' on it, but it's ugly.

 Yep! C'est la vie unfortunately.

 The way atomicModifyIORef works is that the new value isn't actually
 evaluated at all; GHC just swaps the old value with a thunk which will
 do the modification when the value is demanded.

 It's done like that so that the atomic modification can be done with a
 compare-and-swap CPU instruction; a fully-fledged lock would have to be
 taken otherwise, because your function could do an unbounded amount of
 work. While that's happening, other mutator threads could be writing
 into your memory cell, having read the same old value you did, and then
 *splat*, the souffle is ruined.

 Once you're taking a lock, you've got yourself an MVar. This is why
 IORefs are generally (always?) faster than MVars under contention; the
 lighter-weight lock mechanism means mutator threads don't block, if the
 CAS fails atomicModifyIORef just tries again in a busy loop. (I think!)

 Of course, the mutator threads themselves then tend to bump into each
 other or do redundant work when it's time to evaluate the thunks (GHC
 tries to avoid this using thunk blackholing). Contention issues here
 have gotten radically better in recent versions of GHC I think.

 Forgive me if I've gotten anything wrong here, I think Simon Marlow
 might be the only person who *really* understands how all this stuff
 works. :)


 So two questions:

 writeIORef doesn't have this problem.  If I am just writing a simple
 value, is writeIORef atomic?  In other words, can I replace
 'atomicModifyIORef r (const (x, ())' with 'writeIORef r x'?

 Any reason to not do solution 1 above?

 Well if you're not inspecting or using the old value then it's safe to
 just blow it away, yes.

 Cheers,

 G
 --
 Gregory Collins g...@gregorycollins.net
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Jacek Generowicz


On 2010 Oct 16, at 01:14, Jacek Generowicz wrote:


On 2010 Oct 16, at 00:51, Ivan Lazar Miljenovic wrote:


2 is a generic number.  If you don't specify a type, it usually
defaults to Integer.  All Num instances that come in the Prelude have
Show instances, so no matter which gets picked show 2 works.
However, when you say view 2 ghc/ghci doesn't know that you want 2
to be an Int (as that's the only type you have an instance for View
for).



On 2010 Oct 16, at 00:51, Christopher Done wrote:


Don't integral literals default to Integer, of which there is a Show
instance but no View instance?


And in both of the explanations above, it should then complain about  
the lack of an instance of View, rather than about ambiguous type  
variables, n'est-ce pas?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Daniel Fischer
On Saturday 16 October 2010 01:14:51, Jacek Generowicz wrote:
 On 2010 Oct 16, at 00:51, Ivan Lazar Miljenovic wrote:
  On 16 October 2010 09:47, Jacek Generowicz
 
  jacek.generow...@cern.ch wrote:
  -- Given a definition of view which is essentially a synonym for
  show:
 
  class View a where
  view :: a -> String
 
  instance View Int where
 view = show
 
  -- why does show 2 compile, while view 2 gives an
  -- 'Ambiguous type variable' error
 
  fine  = view (2::Int)
  noProblem = show 2
  ambiguousTypeVariable = view 2
 
  2 is a generic number.  If you don't specify a type, it usually
  defaults to Integer.  All Num instances that come in the Prelude have
  Show instances, so no matter which gets picked show 2 works.
  However, when you say view 2 ghc/ghci doesn't know that you want 2
  to be an Int (as that's the only type you have an instance for View
  for).

 Which implies that defining all instances of Num to be instances of
 View should do the trick, and that doesn't seem to work. See below.

http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4

Defaulting only takes place when all involved classes are defined in the 
Prelude or the standard libraries.

Your View class isn't, hence there's no defaulting.

It works in ghci because ghci uses extended default rules (otherwise it 
would have to give too many `ambiguous type variable' messages).


 On 2010 Oct 16, at 00:51, Christopher Done wrote:
  Don't integral literals default to Integer, of which there is a Show
  instance but no View instance?

 Hmm, it doesn't seem to be that simple.

 The phenomenology seems to be:

 As far as entering view 2 into ghci is concerned, you need 'instance
 View Integer' or 'instance View Double'.

 To get x = view 2 to compile in ghc, having all of Int, Integer,
 Float and Double as instances of View is still not enough.

 I did all this in an environment where I had not imported any other
 Num instances, and :i Num in ghci showed only the 4 aforementioned
 types as instances.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strict Core?

2010-10-15 Thread Luke Palmer
On Fri, Oct 15, 2010 at 4:21 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
  On 15/10/2010 11:10 PM, Gregory Crosswhite wrote:

  Yes, I had seen this paper before and wondered the same thing at the
 time, but it was only just now when you brought the paper up that I realized
 I could ask people about it here.  :-)

 I wonder if anybody has a list somewhere of really cool stuff that we'd
 love to add to GHC if only we weren't constantly snowed under with PowerPC
 linker glitches and obscure type system errors... ;-)

I think a more appropriate description of the list would be: really
cool stuff that we'd love to add to GHC if only we weren't constantly
busy adding other cool stuff.

GHC Team++.

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Jacek Generowicz

jacek.generow...@cern.ch wrote:


-- Given a definition of view which is essentially a synonym for
show:

class View a where
   view :: a -> String

instance View Int where
  view = show

-- why does show 2 compile, while view 2 gives an
-- 'Ambiguous type variable' error

fine  = view (2::Int)
noProblem = show 2
ambiguousTypeVariable = view 2




On 2010 Oct 16, at 01:25, Daniel Fischer wrote:


http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4

Defaulting only takes place when all involved classes are defined in  
the

Prelude or the standard libraries.

Your View class isn't, hence there's no defaulting.


Bingo. Thank you.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Daniel Fischer
On Saturday 16 October 2010 01:18:55, Jacek Generowicz wrote:
 On 2010 Oct 16, at 01:14, Jacek Generowicz wrote:
  On 2010 Oct 16, at 00:51, Ivan Lazar Miljenovic wrote:
  2 is a generic number.  If you don't specify a type, it usually
  defaults to Integer.  All Num instances that come in the Prelude have
  Show instances, so no matter which gets picked show 2 works.
  However, when you say view 2 ghc/ghci doesn't know that you want 2
  to be an Int (as that's the only type you have an instance for View
  for).
 
  On 2010 Oct 16, at 00:51, Christopher Done wrote:
  Don't integral literals default to Integer, of which there is a Show
  instance but no View instance?

 An in both of the explanations above, it should then complain about
 the lack of an instance of View, rather than about ambiguous type
 variables, n'est-ce pas?

Non.

If you write

x = view 2

the type checker looks at the expression 2 and it sees it has the type

Num a => a

(because integer literals are actually shorthand for (fromInteger literal) 
and fromInteger :: Num a => Integer -> a).

Then (or before that) it looks at the context in which the 2 appears.
That context is that the function view is applied to it.

view :: View v => v -> String

So, in order for the expression 2 to be well typed, the type checker finds 
the two constraints

2 :: Num a => a
2 :: View v => v

These constraints have to be unified (which is very easy here), resulting 
in

2 :: (Num a, View a) => a

But there's no way to find out what type a is/has to be.
Hence a is an ambiguous type variable.

Now, under certain circumstances such ambiguous type variables are resolved 
by defaulting. If you replace view with show, you get a Show constraint 
instead of the View constraint and then defaulting may (and must) happen.
But since View is defined outside the standard libraries, the language 
report says defaulting mustn't take place, so it doesn't (it may work if 
you specify
{-# LANGUAGE ExtendedDefaultRules #-}
at the top of your module).
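
For instance (a small sketch; I've added an Integer instance so the
default list has something to hit):

{-# LANGUAGE ExtendedDefaultRules #-}

class View a where
    view :: a -> String

instance View Integer where
    view = show

-- With extended defaulting the literal can default to Integer, so this
-- compiles outside ghci as well.
noLongerAmbiguous :: String
noLongerAmbiguous = view 2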
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] downloading GHC

2010-10-15 Thread Brandon S Allbery KF8NH

On 10/15/10 11:26 , Don Stewart wrote:
 Linux users don't have easy binary installers, usually. What can we do
 about this bootstrapping problem?

I thought the answer to that was supposed to be "bug your distribution to
package the Platform".

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ambiguous type variable

2010-10-15 Thread Jacek Generowicz


On 2010 Oct 16, at 01:39, Daniel Fischer wrote:


On Saturday 16 October 2010 01:18:55, Jacek Generowicz wrote:

On 2010 Oct 16, at 01:14, Jacek Generowicz wrote:

On 2010 Oct 16, at 00:51, Ivan Lazar Miljenovic wrote:

2 is a generic number.  If you don't specify a type, it usually
defaults to Integer.  All Num instances that come in the Prelude  
have

Show instances, so no matter which gets picked show 2 works.
However, when you say view 2 ghc/ghci doesn't know that you  
want 2

to be an Int (as that's the only type you have an instance for View
for).


On 2010 Oct 16, at 00:51, Christopher Done wrote:
Don't integral literals default to Integer, of which there is a  
Show

instance but no View instance?


An in both of the explanations above, it should then complain about
the lack of an instance of View, rather than about ambiguous type
variables, n'est-ce pas?


Non.

If you write

x = view 2

the type checker looks at the expression 2 and it sees it has the type

Num a => a

(because integer literals are actually shorthand for (fromInteger  
literal)

and fromInteger :: Num a => Integer -> a).

Then (or before that) it looks at the context in which the 2 appears.
That context is that the function view is applied to it.

view :: View v => v -> String

So, in order for the expression 2 to be well typed, the type checker  
finds

the two constraints

2 :: Num a => a
2 :: View v => v

These constraints have to be unified (which is very easy here),  
resulting

in

2 :: (Num a, View a) => a

But there's no way to find out what type a is/has to be.
Hence a is an ambiguous type variable.


Exactly. Which is why I made the point that the two explanations  
offered by Christopher and Ivan (both of which suggested that the  
problem was related to a missing View instance) imply that the error  
report should mention missing View instances: As the error message  
does not mention them, I took this as further evidence that those  
suggestions were not correct.


Now, under certain circumstances such ambiguous type variables are  
resolved
by defaulting. If you replace view with show, you get a Show  
constraint
instead of the View constraint and then defaulting may (and must)  
happen.

But since View is defined outside the standard libraries, the language
report says defaulting mustn't take place, so it doesn't


Yup. That much was clear from your last message.


(it may work if
you specify
{-# LANGUAGE ExtendedDefaultRules #-}
at the top of your module).


Interesting. Thanks.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper from Google

2010-10-15 Thread Brandon S Allbery KF8NH

On 10/15/10 16:28 , Andrew Coppin wrote:
 I'm surprised about the profiler. They seem really, really impressed with
 it. Which is interesting to me, since I can never seem to get anything
 sensible out of it. It always seems to claim that my program is spending 80%
 of its runtime executing zipWith or something equally absurd. I'm

That just means you haven't internalized managing laziness yet, so you're
seeing thunks get processed by zipWith instead of where they ought to be.
(Not that I'll claim to be any better; I just know why it happens.)

 surprised that there's no tool anywhere which will trivially print out the
 reduction sequence for executing an expression. You'd think this would be
 laughably easy, and yet nobody has done it yet.

Hat hasn't been maintained for years, sigh.  A number of times I could have
used it... and I'm not confident enough of my ability to grok the code.

 Their comments about String are sadly true.

HP's still struggling with that one (I think some people need to realize
that Text and ByteString have different use cases and correspondingly
different data models, and trying to force both into the list API will only
cause grief, but I digress).  I have hope that this situation will improve
in the future.

(Also, I process enough short strings that I'm uncertain of the wisdom of
just using Text or ByteString for everything; this is admittedly a function
of lack of experience.)

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] In what language...?

2010-10-15 Thread Brandon S Allbery KF8NH

On 10/15/10 16:36 , Andrew Coppin wrote:
 Does anybody have any idea which particular dialect of pure math this paper
 is speaking? (And where I can go read about it...)

Type theory.  It makes my head spin, too, since essentially my only exposure
to it so far is Haskell itself.

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] A rant against the blurb on the Haskell front page

2010-10-15 Thread Ben Franksen
This is a critique of the current 'Haskell Blurb', the first paragraph on
www.haskell.org.

This blurb should, IMO, give a concise description of what Haskell, the
programming language, is, what makes it different from other languages, and
why I should be interested in it.

What it does, instead, is to make me scratch my head and ask myself: what
marketing idiot has written this inconclusive mumbo-jumbo of buzzwords?

Let me explain.

"Haskell is an advanced purely functional programming language."

Good start, if only the "advanced" were replaced with something more
characteristic, like "lazy", or "statically typed". Which, BTW, both do not
appear in the whole blurb, even though they are *the* characteristics of
Haskell, laziness being even something that sets it apart from most other
languages. I hear the marketeers crying "but the average visitor has no
idea what laziness means". So what? Give them a link to the wiki with an
explanation. So, a better introductory sentence would be

- "Haskell is a lazily evaluated, purely functional programming language
with a very flexible and powerful static type system."

Next sentence:

"An open source product of more than twenty years of cutting edge research,
it allows rapid development of robust, concise, correct software."

This really gets me every time I read it. How can anyone write such
nonsense? Haskell is not an open source product! It is no product at all.
That most (maybe all) implementations are open source is certainly an
interesting fact, but IMO not something that should appear at the top of
the page right under the header The Haskell Programming Language. The
second and third sentences deliberately conflate language and
implementation(s). This is a well-known fallacy and I am ashamed that it
appears on the front page of my favourite programming language. The blurb
talks about "robust, concise, correct software", but itself misses most of
these goals: it is imprecise, incorrect, and not robust (because
implementations vary), and therefore not a good advertisement, though quite
possibly rapidly developed.

The blurb promises "rapid development of robust, concise, correct software",
lest one think this were something akin to Perl, which certainly allows
rapid development yet is typically neither robust nor correct, especially if
done rapidly. So, how does Haskell differ from that? Well, I'd say this is
where laziness and a static yet flexible type system come into play. But
no, I forgot, we don't want to explain anything or even be logical, dear
reader, we want to pound slogans into your head!

That cutting edge research is done for Haskell as well as for its
implementations is of course good to know, but just stating it is not
nearly enough: such a statement must be corroborated with evidence,
otherwise it is just idle marketing. (Not that there wouldn't be evidence
in abundance; it's just that none is given.)

On we go:

With strong support for integration with other languages, built-in
concurrency and parallelism, debuggers, profilers, rich libraries and an
active community, Haskell makes it easier to produce flexible, maintainable
high-quality software.

Let us take that apart:

(1) Fact: Haskell has a good and very easy-to-use FFI. To the C language. I
have never heard of integration with any other language being directly
supported. (A minimal FFI sketch appears at the end of this message.)

(2) Fact: Built-in concurrency and parallelism are not exactly part of the
language, although purity makes it possible to support them in a more
precise and less error-prone way, which is exploited by ghc's concurrency
and parallelism extensions.

(3) Fact: Traditional debuggers are practically useless in Haskell, due to
lazy evaluation. But, oh, we forgot to mention the small fact that Haskell
is lazy. Too bad. Profiling is supported by ghc only (AFAIK), but is
supposed to be very useful (never seriously used it, so I can't judge.)

(4) Fact: there are libraries. Some of them are good, others are not so
good. Most leave a lot to be desired, but I would be hard pressed to come
up with something better myself. What really distinguishes Haskell
libraries from what is found in other languages is the level of
abstraction. I know no other language where stuff like Monad, (Applicative)
Functor, Monoid, Category etc. is *at the heart* of all the libraries. But,
oh, I forgot, we don't want to scare people off, so better not talk about
this in public.

(5) Fact: Haskell has an active community. No question.

(6) Fact: Haskell makes certain things easier (than other languages). Other
things that are easy in other languages are hard in Haskell. Or at least
very non-obvious. Whether Haskell programs are more flexible, when compared
with dynamically typed OO languages like Python, I seriously doubt.
"Maintainable high-quality software" is the real selling point, IMO; or
rather *reliably correct* software. However, I cannot see how this
directly follows from the previous points, which is what the sentence is
saying. What has an FFI to do with high 
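
The FFI sketch referenced in point (1) above: a minimal editorial example
(not part of the original post), assuming GHC and the system math library.
Binding a C function takes a single declaration.

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    -- Import sin from the C math library; on the Haskell side it is just
    -- a function of type CDouble -> CDouble.
    foreign import ccall unsafe "math.h sin"
      c_sin :: CDouble -> CDouble

    main :: IO ()
    main = print (c_sin 0.5)  -- calls straight into libm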

Re: [Haskell-cafe] A rant against the blurb on the Haskell front page

2010-10-15 Thread Don Stewart
Great! It's a Friday. Why not step in.

Just some context, since the current blurb was born from a critique at
CUFP 2007, prior to which the Haskell blurb was:

Haskell is a general purpose, purely functional programming
language. Haskell compilers are freely available for almost any
computer.

while python.org said:

Python is a programming language that lets you work more quickly
and integrate your systems more effectively. You can learn to use
Python and see almost immediate gains in productivity and lower
maintenance costs.

We tried to meet them mid-way, while emphasizing why you'd even consider 
Haskell.

ben.franksen:
 This is a critique of the current 'Haskell Blurb', the first paragraph on
 www.haskell.org.
 
 This blurb should, IMO, give a concise description of what Haskell, the
 programming language, is, what makes it different from other languages, and
 why I should be interested in it.

Agreed!

 What it does, instead, is to make me scratch my head and ask myself: what
 marketing idiot has written this inconclusive mumbo-jumbo of buzz-words?
 
 Let me explain.
 
 Haskell is an advanced purely functional programming language.
 
 Good start, if only the "advanced" were replaced with something more
 characteristic, like "lazy", or "statically typed". Which, BTW, both do not

"Lazy" and "statically typed" don't mean much to other people. To many,
they are just buzz words.

 appear in the whole blurb, even though they are *the* characteristics of
 Haskell, laziness being even something that sets it apart from most other
 languages. I hear the marketeers crying "but the average visitor has no
 idea what laziness means". So what? Give them a link to the wiki with an
 explanation. So, a better introductory sentence would be
 
 - Haskell is a lazily evaluated, purely functional programming language
 with a very flexible and powerful static type system.

What are the benefits of laziness?

 Next sentence:
 
 An open source product of more than twenty years of cutting edge research,
 it allows rapid development of robust, concise, correct software.

It is open source, and was born open source. It is the product of
research.

 This really gets me every time I read it. How can anyone write such
 nonsense? Haskell is not an open source product! It is no product at all.
 That most (maybe all) implementations are open source is certainly an
 interesting fact, but IMO not something that should appear at the top of
 the page right under the header The Haskell Programming Language. The
 second and third sentences deliberately conflate language and
 implementation(s). This is a well-known fallacy and I am ashamed that it

As Python, Ruby, C and every other language do.

 appears on the front page of my favourite programming language. The blurb
 talks about "robust, concise, correct software", but itself misses most of
 these goals: it is imprecise, incorrect, and not robust (because
 implementations vary), and therefore not a good advertisement, though quite
 possibly rapidly developed.

 The blurb promises "rapid development of robust, concise, correct software",
 lest one think this were something akin to Perl, which certainly allows
 rapid development yet is typically neither robust nor correct, especially if
 done rapidly. So, how does Haskell differ from that? Well, I'd say this is
 where laziness and a static yet flexible type system come into play. But
 no, I forgot, we don't want to explain anything or even be logical, dear
 reader, we want to pound slogans into your head!

I don't think these are useful contributions.

 That cutting edge research is done for Haskell as well as for its
 implementations is of course good to know, but just stating it is not
 nearly enough: such a statement must be corroborated with evidence,
 otherwise it is just idle marketing. (Not that there wouldn't be evidence
 in abundance; it's just that none is given.)

You literally want evidence that research played a part in Haskell, in
its opening statement? Why??

 On we go:
 
 With strong support for integration with other languages, built-in
 concurrency and parallelism, debuggers, profilers, rich libraries and an
 active community, Haskell makes it easier to produce flexible, maintainable
 high-quality software.
 
 Let us take that apart:
 
 (1) Fact: Haskell has a good and very easy-to-use FFI. To the C language. I
 have never heard of integration with any other language being directly
 supported.

It is OK to contest these, but consider the FFI of our competition:
Python, Ruby, Erlang. Woeful FFIs. You are not at risk using Haskell, as
you can always call out to your favorite $language library.

 (2) Fact: Built-in concurrency 

Re: [Haskell-cafe] A rant against the blurb on the Haskell front page

2010-10-15 Thread Donn Cave
Quoth Ben Franksen ben.frank...@online.de,

 Enough. I think I have made my point.

Yes, though possibly you overstated it a little.  While it's easy to share
your distaste for the blurb, if you take a generous attitude towards it,
most of it is true enough.

The implementation specific features are at least widely available to
anyone who wants to use the language on the most popular computing
platforms, so it's expedient, if a little cheesy, to say that Haskell
supports those features.

We agree about "strong support for integration with other languages",
but I wouldn't like to say "strong support for integration with C",
either.  The FFI is mostly independent of C, per se - outside of the
hsc macros, it just addresses a sort of platform standard for exposed
library functionality, which happens to be commonly implemented in C.
Someone might be able to think of a better way to put that.

The point I liked best is the one you started with:

 This blurb should, IMO, give a concise description of what Haskell, the
 programming language, is, what makes it different from other languages, and
 why I should be interested in it.

... and, we understand, you don't find that in this blurb.  "Lazy" and
"statically typed" may not be universally understood, but they aren't
buzz words.  Whether that's the right way to shed some light on what
Haskell is like, it sure says a lot more on a technical level than
"advanced purely functional programming language".  And while that
phrase is linked to a longer exposition of "Functional programming",
the latter is set in language-independent terms and is at best ambiguous
about whether it's talking about Haskell or not.

I'm trying to picture someone who might find Haskell useful, but would
be spooked by a description of the language in unfamiliar technical
terms.  Forget Python, this is a little different proposition.  A couple
days ago I was talking to a friend about Haskell; it turned out he hadn't
heard of it.  I suppose he may have found this blurb.  I hope he
found the blurb that appears at the top of the Introduction page:

 Haskell is a computer programming language. In particular, it is a
  polymorphically statically typed, lazy, purely functional language,
  quite different from most other programming languages. The language
  is named for Haskell Brooks Curry, whose work in mathematical logic
  serves as a foundation for functional languages. Haskell is based
  on the lambda calculus, hence the lambda we use as a logo.

This most succinctly expresses the points I tried to convey to him
about Haskell, and I don't think it would be out of place on the
main page.

Donn