Re: [Haskell-cafe] lazy boxed array and builder?

2012-07-12 Thread Yves Parès
I remember this discussion, lazy vectors would also enable an
implementation of bytestring and (maybe) text only with unboxed vectors,
unifying it all:
type ByteString = Vector Word8
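
For what it's worth, a minimal sketch of the idea (not an existing library; the chunked representation just mirrors how lazy ByteString/Text are structured today):

import qualified Data.Vector as V

-- A lazy vector as a lazy list of strict chunks; a builder would fill
-- the current chunk and start a new one when it overflows.
data LazyVector a = Empty | Chunk !(V.Vector a) (LazyVector a)

toList :: LazyVector a -> [a]
toList Empty        = []
toList (Chunk c cs) = V.toList c ++ toList cs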

2012/7/12 Evan Laforge qdun...@gmail.com

 The recent discussion of whether storablevector should be deprecated in
 favor of vector reminded me: vector supports storable arrays, but it
 doesn't support lazy arrays.  While storablevector has lazy arrays and
 a builder, it doesn't support boxed types (it would become misnamed
 if it did!).

 So it seems the niche of boxed lazy arrays is unfilled.  And if vector
 grew a lazy variant it could subsume storablevector and we could
 consolidate array libraries a little more.  It seems a lot of Writer
 monad type stuff (logging etc.) could be made more efficient with a
 boxed lazy array builder, at the cost of rougher grained laziness.

 Does such a thing already exist?  I'd check hackage, but it's down...

 http://www.downforeveryoneorjustme.com/ http://hackage.haskell.org/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Cabal problem re. haskelldb-hdbc-mysql

2012-07-05 Thread Yves Parès
Hi,
the package http://hackage.haskell.org/package/haskelldb-hdbc-mysql/ seems
to rule out the use of HDBC 2.3.0.
I'm using cabal-install 0.14, and with a fresh install (no packages already
installed), cabal-install tries to install HDBC-2.1.1 instead of, say,
HDBC-2.2.7.0.

The problem is that HDBC-2.1.1 is old (2009) and does not compile.

When I download haskelldb-hdbc-mysql manually, change the dependency and
then install, it compiles fine.

Is the dependency ill-declared, or is that a cabal problem?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Martin Odersky on What's wrong with Monads

2012-06-27 Thread Yves Parès
 I'm not happy with any of these options.

Why are you unhappy with the ImplicitParams option?

It's pretty much like resorting to a newtype, as it's been suggested before.

2012/6/27 Tillmann Rendel ren...@informatik.uni-marburg.de

 Hi Rico,

 Rico Moorman wrote:

  data Tree = Leaf Integer | Branch Tree Tree

  amount :: Tree -> Integer
  amount (Leaf x) = x
  amount (Branch t1 t2) = amount t1 + amount t2

 [...] additional requirement: If the command-line flag --multiply is set,
 the function amount computes the product instead of the sum.

 How would you implement this requirement in Haskell without changing the
 line amount (Leaf x) = x?


 The (for me at least) most obvious way to do this would be to make the
 operation applied to determine the amount (+ or *) an explicit
 parameter in the function's definition.


  data Tree a = Leaf a
              | Branch (Tree a) (Tree a)

  amount :: (a -> a -> a) -> Tree a -> a
  amount fun (Leaf x) = x
  amount fun (Branch t1 t2) = amount fun t1 `fun` amount fun t2


 I agree: This is the most obvious way, and also a very good way. I would
 probably do it like this.

  Which drawbacks do you see besides increased verbosity?


 Well, you did change the equation amount (Leaf x) = x to amount fun
 (Leaf x) = x. In a larger example, this means that you need to change many
 lines of many functions, just to get the value of fun from the point
 where it is known to the point where you need it.

  [...] I am wondering which ways of doing this in Haskell you mean.


 I thought of the following three options, but see also Nathan Howell's
 email for another alternative (that is related to my option (1) below):


 (1) Implicit parameters:

 {-# LANGUAGE ImplicitParams #-}
 data Tree = Leaf Integer | Branch Tree Tree

 amount :: (?fun :: Integer -> Integer -> Integer) => Tree -> Integer

 amount (Leaf x) = x
 amount (Branch t1 t2) = ?fun (amount t1) (amount t2)


 (2) Lexical Scoping:

 data Tree = Leaf Integer | Branch Tree Tree

 amount :: (Integer -> Integer -> Integer) -> Tree -> Integer
 amount fun = amount where {

 amount (Leaf x) = x
 ; amount (Branch t1 t2) = fun (amount t1) (amount t2) }


 (3) UnsafePerformIO:

 import System.IO.Unsafe (unsafePerformIO)

 data Tree = Leaf Integer | Branch Tree Tree


 amount :: Tree -> Integer
 amount (Leaf x) = x
 amount (Branch t1 t2) = fun (amount t1) (amount t2)
  where fun = unsafePerformIO ...


 I'm not happy with any of these options. Personally, I would probably go
 ahead and transform the whole program just to get the value of fun to where
 it is needed. Nevertheless, having actually done this before, I understand
 why Martin Odersky doesn't like doing it :)


  Tillmann

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Martin Odersky on What's wrong with Monads

2012-06-24 Thread Yves Parès
 To move between functional and monadic code you have to completely
rewrite the code procedurally

I don't like the way people segregate pure from monadic. Monadic code
*is* pure: it's written with completely pure Haskell and completely pure
combinators. As Alexander said, it's really important to say that 'do' is
just syntax over abstract combinators, and that it's *only for IO/ST* that
those combinators have something special.
This is why I do disagree with tutorials that begin by showing the 'do'
notation, saying "do IO with that".
If the beginner reads that and then puts his learning aside (because he was
interested to get some input about Haskell but has no time to continue for
the moment), he walks away with the false impression that you have do
blocks to do imperative stuff, and for the rest you just use normal
functions.
People are there because they've heard about the almighty *purely
functional* Haskell, I say let's give 'em what they came for:
getLine >>= \x -> putStrLn ("Hello " ++ x ++ "!")
It's not complicated to understand the concept of an operator retrieving
the result of one operation and injecting it in the next.
Twist the syntax as you want if you think lambdas are not readable for
beginners, but still, if we begin by that, develop a little with a few more
examples and then say something like:
"But it would be very cumbersome to have to write everything like that. But
don't worry, with great purity does not come great fundamentalism!
Haskell people have been intelligent and brought you a special syntax to
write it more legibly, *but it's actually exactly the same thing you're doing!*",
then the intuition that monads are just like pure code comes by itself.
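
For instance, a small sketch of that equivalence (the 'do' version is just sugar for the bind version):

greetBind :: IO ()
greetBind = getLine >>= \x -> putStrLn ("Hello " ++ x ++ "!")

-- The same program with 'do' notation; GHC desugars it into
-- (essentially) the definition above.
greetDo :: IO ()
greetDo = do
  x <- getLine
  putStrLn ("Hello " ++ x ++ "!")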

This might be what leads people like Martin Odersky to false conclusions.

Amongst Haskellers I understand that we use this shortcut (pure vs. monadic),
but as I said, towards non-Haskellers it's IMHO a bad idea to start
right away by making the distinction between the two.


Now, concerning Odersky's sentence specifically: To move between
functional and monadic code
you have to completely rewrite the code procedurally:
I don't see how that differs from any other language: monads are just
patterns (*), if you start with one method and then switch to another then
you'll have to refactor.

I'd say monads are actually better in that respect, since they can abstract
a lot due to their inherent EDSL nature (provide just one type and its set
of operations, and then you can change everything under the hood).
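
As a small, hypothetical illustration of that EDSL point (the MonadLog class below is made up for the example): code against a handful of operations, and the monad behind them can be swapped without touching the call sites.

{-# LANGUAGE FlexibleInstances #-}
import Control.Monad.State

class Monad m => MonadLog m where
  logMsg :: String -> m ()

-- One interpretation: real output.
instance MonadLog IO where
  logMsg = putStrLn

-- Another: pure accumulation, e.g. for tests.
instance MonadLog (State [String]) where
  logMsg msg = modify (++ [msg])

-- Call sites only mention the abstract operations.
greet :: MonadLog m => String -> m ()
greet name = logMsg ("Hello " ++ name)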

(*) And because of their pattern nature, they have the same drawbacks
as patterns in OO: people may (and will) try to fit triangles into round
holes, i.e. use a monad where another abstraction (or no abstraction at
all) would be better.

2012/6/24 Chris Dornan ch...@chrisdornan.com

  What's wrong with Monads is that if you go into a Monad you have to
  change your whole syntax from scratch. Every single line of your program
  changes if you get it in or out of a Monad. They're not polymorphic, so
  it's really the old days of Pascal. A monomorphic type system that says
  'well that's all I do' ... there's no way to abstract over things.
  [0, 53:45]

  [0] - http://css.dzone.com/articles/you-can-write-large-programs

 I think the context of the question is important here. Odersky is asked why
 provide all this elegant machinery
 for doing functional things while  avoiding the difficult/critical parts --
 the parts that deal with the effects.

 Odersky seems to be making three claims.

  * To move between functional and monadic code you have to completely
 rewrite the code procedurally -- it's true and (IMHO) regrettable.

  * Monadic code is monomorphic: this appears to be seriously mistaken.
 Monadic functions can be as polymorphic
as any non-monadic functions. (I have never wished the lambda-bound
 variables introduced by 'do' statements
were somehow polymorphic.)

  * There is no way to abstract over monadic code: this also appears to be
 mistaken as there are plenty
   of ways of abstracting while writing monadic code (using the very same
 techniques you would use
   for non-monadic code).

 Monads allow procedural code to be expressed procedurally and functional
 code to be expressed functionally and
 the type system ensures there are no mix ups. Expressing procedural code
 functionally is as unnatural and error
 prone as expressing functional code procedurally in my experience -- that
 Haskell avoids compelling the programmer to
 do either within its strongly-typed functional framework is (IMHO) its
 great
 invention(*) and enduring strength.

 Maybe someday someone will devise a way of writing 'effects' code in a
 strongly-typed functional framework
 that doesn't force the programmer to commit each function to being either
 procedural or functional -- or
 better yet, do away with the need to write any effects code. Perhaps it has
 been done already. (I don't doubt
 folks have claimed to have done it.) But until there is a proven 

Re: [Haskell-cafe] What extension do I need to write type Job = Map k a?

2012-06-13 Thread Yves Parès
Mmmmh... no, to do that you need ImpredicativeTypes (which is I believe
about to be deprecated).
You have to declare Job a data, not a type, and use
ExistentialQuantification.
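
A minimal sketch of that data-based approach (the names are just for illustration):

{-# LANGUAGE ExistentialQuantification #-}
import Data.Map (Map)

-- The key and value types are hidden inside the wrapper, so every
-- Job has the same type and they can all live in one list.
data Job = forall k a. Job (Map k a)

type JobList = [Job]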

2012/6/13 Ismael Figueroa Palet ifiguer...@gmail.com

 Do you want to hide the specific types of the job? Presumably to then
 define a type JobList = [Job] ?
 You can do that with the ExistentialQuantification extension.

 type Job = forall k a. Map k a
 type JobList = [Job]

 ??
 Note you can't unpack the types k a once you have hidden them. But the
 typechecker can use it to ensure some static property.
 Also you could use unsafeCoerce to do some casts, but *only* if you are
 *sure* that things will go OK.


 2012/6/13 Magicloud Magiclouds magicloud.magiclo...@gmail.com

 Hi,
  I've forgotten this.
  This is OK:
 type Job k a = Map k a
  And this is OK:
 {-# LANGUAGE RankNTypes #-} -- or LiberalTypeSynonyms?
 type Job = forall a. forall k. Map k a

  Then how to write it like this?
 type Job = Map k a
 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Ismael


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I don't understand how ST works

2012-06-09 Thread Yves Parès
Oh my god, that was it?
I looked at your code for half an hour, and I never thought of
that... That is really misleading.
So vector forces you to use strict ST? (That's right:
http://hackage.haskell.org/packages/archive/primitive/0.4.1/doc/html/Control-Monad-Primitive.html#t:PrimMonad
shows that only strict ST has a PrimMonad instance.)

It's another plea against the "same names/interface in different modules"
pattern that vector, ST, State and ByteString suffer from.
It makes the error messages misleading, and the documentation hard to read
(because you never know which type is meant; you have to check which module
the type comes from). And even then it's just a matter of convention: for
instance Control.Monad.State exports the lazy version, but Control.Monad.ST
exports the strict one (so that the default version is closer to IO's
behaviour).

I really prefer the approach taken by repa 3: one data family, generic
functions for every flavour and some specific functions for each flavour.
It's not perfect (not standard, for instance), but I think such an approach
should be privileged in the future: it makes things much clearer, and
enables you to choose between working generically (on 'Stuff a' types) or
specifically (either only on 'Stuff Strict' or only on 'Stuff Lazy').
I would be interested to know if someone has other ideas in that respect in
mind.


2012/6/9 Nicu Ionita nicu.ion...@acons.at

 Ok, the error was: I was using Control.Monad.ST.Lazy. Importing
 Control.Monad.ST compiles immediately without problem. (Is this because
 I'm using unboxed mutable vectors?)

 Now, that's a little bit odd.

 It's clear that the strict and lazy forms of ST are different types. But
 unfortunately they are named the same! So actually any error message from
 the compiler drives you crazy, because it's referring to another type.

 Probably the reason to name the types with the same name is for easy
 interchangeability. But as we see, the types are not (always)
 interchangeable.

 Anyway, now it compiles.

 Thanks,
 Nicu

 On 08.06.2012 23:15, Nicu Ionita wrote:

  Hi,

 I created a gist with a minimal (still 111 lines) module:

 https://gist.github.com/2898128

 I still get the errors:

 WhatsWrong.hs:53:5:
Couldn't match type `s' with `PrimState (ST s)'
  `s' is a rigid type variable bound by
  a type expected by the context: ST s [Move] at
 WhatsWrong.hs:48:21
In a stmt of a 'do' block: listMoves ml
In the second argument of `($)', namely
  `do { v - U.new maxMovesPerPos;
let ml = ...;
listMoves ml }'
In the expression:
  runST
  $ do { v - U.new maxMovesPerPos;
 let ml = ...;
 listMoves ml }

 WhatsWrong.hs:65:44:
Couldn't match type `s' with `PrimState (ST s)'
  `s' is a rigid type variable bound by
  the type signature for nextPhaseOnlyCapts :: GenPhase s
  at WhatsWrong.hs:64:1
Expected type: U.MVector (PrimState (ST s)) Move
  Actual type: U.MVector s Move
In the return type of a call of `mlVec'
In the third argument of `genCapts', namely `(mlVec ml)'

 Thanks,
 Nicu

 On 08.06.2012 02:47, Silvio Frischknecht wrote:


  Now comes my question: in the impure values there is always that
 s. I was thinking that the whole structure should have s as a
 parameter:

 Yes

  data MList s = MList { mlVec :: MVector s Move, mlNextPh :: MList ->
  ST s (Maybe (MList s)) }

 you probably meant:

 data MList s = MList { ... , mlNextPh :: MList s -> ... }

 Now I'm not sure about your exact problem since the following compiles
 for me.

 import Data.Vector
 import Data.Vector.Mutable
 import Control.Monad.ST

 type Move = ()
 data MList s = MList {
     mvVec    :: MVector s Move,
     mlNextPh :: MList s -> ST s (Maybe (MList s)) }

 splitMove :: MList s -> ST s (Maybe (Move, MList s))
 splitMove ml = do
     m <- unsafeRead (mvVec ml) 0
     undefined

 Something you always have to watch out for when dealing with ST is not
 to return something that depends on s in the last statement (the one
 you use runST on). In other words, if you want to return a vector you
 have to freeze it, so it's not mutable anymore.
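
A tiny, self-contained illustration of that point (using Data.Vector, not the original poster's code):

import Control.Monad.ST (runST)
import qualified Data.Vector as V
import qualified Data.Vector.Mutable as M

-- Freezing turns the mutable vector into an immutable one, so the
-- result no longer mentions the state parameter 's' and may leave runST.
makeVector :: V.Vector Int
makeVector = runST $ do
  v <- M.replicate 3 (0 :: Int)
  M.write v 0 42
  V.freeze v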

 If you still can't figure it out paste some complete example that
 doesn't work.

 silvio

Re: [Haskell-cafe] Using promoted lists

2012-06-08 Thread Yves Parès
Thanks for your answers, Anthony and Erik.

I'll try with fundeps. I know about HList, but back when I
looked at it I found it quite complex.

Anthony, the link you gave me [1] tends to show that the Bool type actually
is promoted.

type family Member x (l :: [*]) :: Bool
type instance Member x (x ': xs) = True
works.
So if I understand well, unlike what I thought when I saw the compilation
fail, the two x type variables are actually unified, but then the second
instance
type instance Member x (y ': xs) = Member x xs
encompasses the first, so GHC refuses to handle it (as it would at the
value level with regular case expressions).
So yes, [1] is exactly what I was trying to do.

Out of curiosity, I tried with Int, and it works too, I can express:
type family Something a :: Int
But then, the following doesn't compile
type instance Something String = 42
(The wild guess '42 does not work either.) So I guess we don't have type-level
integers for now. How are promoted Ints usable then?


[1] http://hackage.haskell.org/trac/ghc/wiki/NewAxioms


2012/6/8 AntC anthony_clay...@clear.net.nz

 Yves Parès yves.pares at gmail.com writes:

 
  The doc page
 http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/kind-polymorphism-and-promotion.html#promotion
 shows that lists are now usable as types. So I'm trying to make a type-level
 function to test if a type list contains a type. Unless I'm wrong, that calls
 for the use of a type family.

 {-# LANGUAGE DataKinds, TypeOperators, KindSignatures, TypeFamilies #-}
 data HBool = HTrue | HFalse
   -- Mandatory as Bool type is not currently promoted to a kind

 type family Member x (l :: [*]) :: HBool

 type instance Member x (x ': xs) = HTrue
 type instance Member x (y ': xs) = Member x xs
 type instance Member x (y ': '[]) = HFalse

 But the compiler complains about my instance conflicting.

 Hi Yves, always when you're asking a question like this, give the error
 message in full -- usually it will explain what's wrong.

 In this case I can guess: you have overlapping instances (the first
 overlaps
 the second, the third overlaps the second), which are not allowed for type
 functions (currently -- unless the equations are confluent).

 There's some early work on introducing overlaps to type functions (in a
 controlled way). http://hackage.haskell.org/trac/ghc/wiki/NewAxioms

 And as it happens, several threads are going on in the lists re options and
 implications.

  Is what I'm trying to do feasible?

 Promoted lists are really just the same as HLists, but using the standard
 Haskell syntax.

 A membership test is feasible with FunDeps (because they do allow
 overlaps),
 but I guess you know the HList stuff, judging from your HBool. It's
 feasible
 using type equality constraints to achieve the same as HList (so ~ is
 equivalent to HList's TypeCast), also with overlaps.
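
 For reference, a rough sketch of that FunDep-based membership test in the
 HList style (pairs play the role of type-level cons and () of nil; exact
 acceptance of the overlap can depend on the GHC version):

 {-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies,
              FlexibleInstances, UndecidableInstances, OverlappingInstances #-}

 data HTrue
 data HFalse

 class Member x l b | x l -> b
 instance                  Member x (x, xs) HTrue
 instance Member x xs b => Member x (y, xs) b
 instance                  Member x ()      HFalse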


 Second question: how can type level tuples (also mentioned in the doc
 page)
 be exploited? Aren't they redundant with type-level lists?

 Type-level tuples are fixed length, and provide a flat structure (any
 element
 is equally accessible), whereas lists are expandable, with a nested
 structure
 that means you have to scan down the structure to get to the element you
 want.

 AntC


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Using promoted lists

2012-06-07 Thread Yves Parès
The doc page
http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/kind-polymorphism-and-promotion.html#promotion
shows that lists are now usable as types.

So I'm trying to make a type-level function to test if a type list contains
a type. Unless I'm wrong, that calls for the use of a type family.

{-# LANGUAGE DataKinds, TypeOperators, KindSignatures, TypeFamilies #-}

data HBool = HTrue | HFalse  -- Mandatory as Bool type is not currently
promoted to a kind

type family Member x (l :: [*]) :: HBool

type instance Member x (x ': xs) = HTrue
type instance Member x (y ': xs) = Member x xs
type instance Member x (y ': '[]) = HFalse


But the compiler complains about my instance conflicting. Is what I'm
trying to do feasible?

Second question: how can type level tuples (also mentioned in the doc page)
be exploited? Aren't they redundant with type-level lists?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Tests by properties: origin?

2012-06-01 Thread Yves Parès
Out of curiosity, does someone know if QuickCheck was the first test
framework working through test by properties associated with random
generation or if it drew the idea from something else?

Because the idea has been taken up by a lot of frameworks in several languages
(see http://en.wikipedia.org/wiki/Quickcheck), but I can't find what
QuickCheck's inspiration was.

(This question popped up when arguing with a Java user about what Haskell
brings to the field of development; he claims it would be astonishing
if QuickCheck were the first one using this testing paradigm.)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Tests by properties: origin?

2012-06-01 Thread Yves Parès
Yes ^^ but I can't find this paper: Koen Claessen's website doesn't mention
it, and the link on the page
http://www.haskell.org/haskellwiki/Introduction_to_QuickCheck is dead.

2012/6/1 Janis Voigtländer j...@informatik.uni-bonn.de

 On 01.06.2012 12:00, Yves wrote:

 Out of curiosity, does someone know if QuickCheck was the first test
 framework working through test by properties associated with random
 generation or if it drew the idea from something else?

 Because the idea has been taken up by a lot of frameworks in several
 languages (see http://en.wikipedia.org/wiki/Quickcheck), but I can't find
 what QuickCheck's inspiration was.


 How about reading the original paper introducing QuickCheck? If the
 authors drew inspiration from elsewhere, the paper is for sure where
 they would tell you, first hand. :-)

 Best,
 Janis.

 --
 Jun.-Prof. Dr. Janis Voigtländer
 http://www.iai.uni-bonn.de/~jv/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Tests by properties: origin?

2012-06-01 Thread Yves Parès
Yes, it's that one, the first Quickcheck paper, thanks.

The link on the wikipedia page is also dead.

2012/6/1 Ivan Perez ivanperezdoming...@gmail.com

 Is this the paper you are looking for:
 http://www.eecs.northwestern.edu/~robby/courses/395-495-2009-fall/quick.pdf
 ?

 On 1 June 2012 11:20, Yves Parès yves.pa...@gmail.com wrote:
  Yes ^^ but I can't find this paper, Koen Claessen website doesn't
 mention it
  and the link on the page
  http://www.haskell.org/haskellwiki/Introduction_to_QuickCheck is dead.
 
 
  2012/6/1 Janis Voigtländer j...@informatik.uni-bonn.de
 
   On 01.06.2012 12:00, Yves wrote:
 
  Out of curiosity, does someone know if QuickCheck was the first test
  framework working through test by properties associated with random
  generation or if it drew the idea from something else?
 
  Because the idea has be retaken by a lot of frameworks in several
  languages
  (see http://en.wikipedia.org/wiki/Quickcheck), but I can't find what
 was
  QuickCheck inspiration.
 
 
  How about reading the original paper introducing QuickCheck? If the
  authors drew inspiration from elsewhere, the paper is for sure where
  they would tell you, first hand. :-)
 
  Best,
  Janis.
 
  --
  Jun.-Prof. Dr. Janis Voigtländer
  http://www.iai.uni-bonn.de/~jv/
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Requesting Feedback: I Love Haskell, but can't find a place to use it

2012-06-01 Thread Yves Parès
 Then I wrote about a dozen lines of Haskell to do the job--and running
time turned out to be O(n^2).

Do you still have the code?

2012/6/1 Doug McIlroy d...@cs.dartmouth.edu

   I love Haskell. It is my absolute favorite language.
   But I have a very hard time finding places where I can actually use it!
 
  have you considered your head as such a place that should be easy to
 find.

 An excellent reason.  Haskell shines unusually brightly on
 applications that have an algebraic structure. Laziness
 relieves a plethora of sequencing concerns.  I particularly
 treasure one experience:

 I asked a guru about the complexity of converting regular
 expressions to finite-state automata without epsilon transitions
 (state transitions that don't produce output).  The best he
 knew was O(n^3), which is the cost of removing epsilon transitions
 from arbitrary automata.  Then I wrote about a dozen lines of Haskell
 to do the job--and running time turned out to be O(n^2). Once
 I'd written the code, it became clear how to do it in other
 languages, but I never would have found the theorem without
 the help of Haskell.  (This without even bringing in the heavy
 artillery of higher-order functions and monads.)

 Doug McIlroy

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building pattern and trying monads

2012-05-28 Thread Yves Parès
 observe $ flip runStateT 10 $ (put 0 >> mzero) <|> modify (+3)
   ((),13)

If the only thing you need is backtracking, using LogicT might be a little
overkill, using Maybe in the bottom of you monad stack suits just fine:

case flip runStateT 10 $ (put 0 >> mzero) <|> modify (+3) of
    Just x  -> ...
    Nothing -> ...
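
A self-contained sketch of that variant (assuming mtl; StateT over Maybe backtracks the state because (<|>) re-runs the right branch with the original state):

import Control.Applicative ((<|>))
import Control.Monad.State

exampleMaybe :: Maybe ((), Int)
exampleMaybe = flip runStateT 10 $
  (put 0 >> mzero) <|> modify (+3)
-- ==> Just ((),13)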

2012/5/27 Roman Cheplyaka r...@ro-che.info

 * L Corbijn aspergesoe...@gmail.com [2012-05-27 14:21:39+0200]
  The solution I've in mind depends on the stack being pure. When the
  monad stack is pure a rule can be applied, returning a maybe value (or
  having a MaybeT wrapper) and when returning Nothing (failed rule)
  reverting the stack to it's point before applying the rule.
 
  As I'm not quite sure about the design (nor good at software design)
  I've some questions about this approach.
  1. Is there a better approach then using a state monad for building
  the 'products'?

 If you need to interleave building your products and analyzing them,
 State seems a reasonable choice here.

  2. My solution with saving/reverting monad-stacks seems quite a
  hassle/hack, so is it a good approach or is there something better?

 You can use a backtracking monad here ([] or Logic).

 The key thing is to put the backtracking monad in the bottom of your
 stack (everything above it will be restored on mzero). On the other
 hand, if you want some global effects that should not be restored, you
 can put corresponding monads below LogicT in the stack.

 Example:

  observe $ flip runStateT 10 $ (put 0 >> mzero) <|> modify (+3)
    ((),13)

 Note that put 0 had no effect here, because it was followed by mzero.
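
 A runnable version of that example (assuming the logict and mtl packages;
 StateT sits above Logic, so the state written before mzero is thrown away
 on backtracking):

 import Control.Applicative ((<|>))
 import Control.Monad.Logic (Logic, observe)
 import Control.Monad.State

 example :: ((), Int)
 example = observe $ flip runStateT 10 $
   (put 0 >> mzero) <|> modify (+3)
 -- ==> ((),13)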

 --
 Roman I. Cheplyaka :: http://ro-che.info/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building pattern and trying monads

2012-05-28 Thread Yves Parès
Actually, I think the backtracking property here stems more from the
MonadPlus StateT instance than from the properties of Maybe.
(mplus a b runs a and b by explicitly passing the same state to both.)

2012/5/28 Roman Cheplyaka r...@ro-che.info

 * Yves Parès yves.pa...@gmail.com [2012-05-28 11:28:22+0200]
   observe $ flip runStateT 10 $ (put 0 >> mzero) <|> modify (+3)
  ((),13)
 
  If the only thing you need is backtracking, using LogicT might be a
 little
  overkill, using Maybe in the bottom of you monad stack suits just fine:
 
  case flip runStateT 10 $ (put 0 >> mzero) <|> modify (+3) of
  Just x  -> ...
  Nothing -> ...

 Indeed, I didn't realise that Maybe may be (no pun intended) sufficient
 here!

 --
 Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Record syntax, reopening a can of worms.

2012-05-27 Thread Yves Parès
 case myData of
   myA@A{} -> fooForA's myA
   myB@B{} -> fooForB's myB

I think this would typecheck if you used GADTs.
Actually what you'd want is to use the record syntax with GADTs (they are
there to add the extra type safety you want), however both are not compatible.

data ALike
data BLike

data MyData x where
    A :: Int -> Int -> MyData ALike
    B :: Int -> MyData BLike

a (A x _) = x
b (A _ x) = x
c (B x) = x
-- GHC may require you to write the type signatures for those three
functions.
On 27 May 2012 10:52, timothyho...@seznam.cz wrote:

 Your Maybe example is very contrived.  The place where I ran into this was
 much less contrived I think.

 I have an editor for a visual programming language. That looks like this:

 https://github.com/timthelion/gridhaskell-haskarrow/wiki

 I'm using a modified version of the Document-View model for application
 design.  The commands in the language are defined in the Cell data type:


 https://github.com/timthelion/gridhaskell-haskarrow/blob/master/Cell.lhs


  Some Cell types, like Jump, have a long crooked line coming out of them
  called a Path.  This line is held in the path :: Maybe Path field.


 When I'm drawing the Cell tree, I have a function that extracts these
 Paths.
 https://github.com/timthelion/gridhaskell-haskarrow/blob/master/ArrowDrawing.lhs#L75


 It used to be, that more types of Cell than just the Jump Cell had a path.


  As I removed these Paths from the Cell datatype, my line drawing function
  started crashing whenever it encountered those Cell types.  By having
  chosen to use the shorthand record selector syntax rather than the longhand
  positional syntax, I caused a runtime error in my code.


 I could of course have written the same function like this:

    where path_points =
            case (case cell of
                    -- Types of cells which have paths:
                    Cell.Jump _ path -> Just path
                    _                -> Nothing) of
              Just path -> maybe [] (\p -> [Path.point p]) (Cell.path cell)
              Nothing   -> []


  Record selection is a more convenient syntax.  It *could* be useful for
  making more maintainable code (the reason why I chose it).  The method I
  have chosen *would* be more maintainable in the case where I want to add
  another field to Jump.  In that case I would not have to re-write the
  path_points function.  Sadly, due to GHC's unnecessary and complete
  inability to do type safety checking, what should have been a convenient
  tool has become an unnecessary source of runtime errors!


 How would I use your syntax in functions that want to pattern match
 against both A and B?  I tried this without success:

 data ALike
 data BLike

  data MyData t = A { a :: Int, b :: Int }
                | B { c :: Int }

 mkA x y = A x y :: MyData ALike
 mkB x = B x :: MyData BLike

 altRecordSyntaxes.lhs:15:18:
 Couldn't match type `BLike' with `ALike'
 Expected type: MyData ALike
   Actual type: MyData t
 In the first argument of `g', namely `myA'
 In the expression: g myA

  foo :: MyData t -> Int
  foo myA@A{} = g myA
    where
      g :: MyData ALike -> Int
      g myA' = a myA'
  foo myB@B{} = g myB
    where
      g :: MyData BLike -> Int
      g myB' = c myB'

  main :: IO ()
  main = do
    print $ foo $ mkA 1 3


 This also doesn't work:

  foo :: MyData t -> Int
  foo myData =
    case myData of
      myA@A{} -> fooForA's myA
      myB@B{} -> fooForB's myB
    where
      fooForA's :: MyData ALike -> Int
      fooForA's myA' = a myA'
      fooForB's :: MyData BLike -> Int
      fooForB's myB' = a myB'

  main :: IO ()
  main = do
    print $ foo $ mkA 1 3

  My solution looks very clean (except for that nasty-looking data
  declaration) and has the same advantages.  Really, the current record
  syntax is just flawed :D

  data MyData = A A' | B B'

  data A' = A' { a :: Int, b :: Int }
  data B' = B' { c :: Int }

 foo :: MyData - Int
 foo (A myA) = a myA
 foo (B myB) = c myB

 main :: IO ()
 main = print $ foo (A (A' 1 2))

 Timothy

  -- Original message --
  From: John Meacham j...@repetae.net
  Date: 27. 5. 2012
  Subject: Re: [Haskell-cafe] Record syntax, reopening a can of worms.

 Is it any more ridiculous than

  f x@Nothing {} = fromJust x
  main = print (f Nothing)

 crashing at run time? That is what you are expressing with your first
 one. This issue is completely unrelated to the named field syntax,
 they behave exactly like data types with non-named fields.

 However, you can achieve something like what you want with phantom types.

  data ALike
  data BLike

  data MyData t = A { a :: Int, b :: Int }
                | B { c :: Int }

  mkA x y = A x y :: MyData ALike
  mkB x = B x :: MyData BLike

 then you can write functions of
 'MyData ALike' to indicate it will only have 'A' as a constructor
 'MyData BLike' to indicate it will only have 'B'
 and 'forall t . MyData t' for functions that can take a general MyData
 that can have either.

 John


 ___
 Haskell-Cafe mailing list
 

Re: [Haskell-cafe] Record syntax, reopening a can of worms.

2012-05-27 Thread Yves Parès
I enclosed a source file that shows the use of a GADT in that case.
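
The attachment is not reproduced in the archive; here is a minimal sketch of what such a file might contain (a guess, not the actual attachment):

{-# LANGUAGE GADTs, EmptyDataDecls #-}

data ALike
data BLike

data MyData t where
  A :: Int -> Int -> MyData ALike
  B :: Int -> MyData BLike

a :: MyData ALike -> Int
a (A x _) = x

c :: MyData BLike -> Int
c (B x) = x

-- Matching on a GADT constructor refines 't', so each branch may use
-- the accessor that only exists for that shape.
foo :: MyData t -> Int
foo myA@(A _ _) = a myA
foo myB@(B _)   = c myB

main :: IO ()
main = print (foo (A 1 3))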

2012/5/27 timothyho...@seznam.cz

 Somehow I don't understand you.
 Could you please fill out your example into a working bit of code?

 Thank you,

 Timothy



Re: [Haskell-cafe] Formalisation for types of monads

2012-05-24 Thread Yves Parès
 Though I agree it's probably best not to mention the phrase Church
encoding to beginning students.

Be reassured, that was not my intention ^^.
I just pointed that out to support the fact that foldr was *the*
fundamental folding operator for lists.


2012/5/24 Brent Yorgey byor...@seas.upenn.edu

 On Wed, May 23, 2012 at 09:24:06AM +0200, Ertugrul Söylemez wrote:
  Yves Parès yves.pa...@gmail.com wrote:
 
Note about []:  Don't even mention foldl.  The folding
combinator for lists is foldr, period.
  
   Yes, I do agree. I came to this when I realized foldr gave the church
   encoding of a list.
 
  Not only that.  The foldr combinator has an identity fold and implements
  actual structural recursion.

 That's pretty much what a Church encoding is.  Though I agree it's
 probably best not to mention the phrase Church encoding to beginning
 students.

 -Brent

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalisation for types of monads

2012-05-23 Thread Yves Parès
 Note about []:  Don't even mention foldl.  The folding combinator
 for lists is foldr, period.

Yes, I do agree. I came to this when I realized foldr gave the Church
encoding of a list.

(Well, actually, due to parameter ordering,
churchList list z0 f = foldr f z0 list
does.)
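
A small sketch of that correspondence (a Church-encoded list is essentially its own foldr, modulo argument order):

{-# LANGUAGE RankNTypes #-}

newtype ChurchList a = ChurchList (forall r. (a -> r -> r) -> r -> r)

toChurch :: [a] -> ChurchList a
toChurch xs = ChurchList (\f z -> foldr f z xs)

fromChurch :: ChurchList a -> [a]
fromChurch (ChurchList k) = k (:) []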
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-23 Thread Yves Parès
I understand your concerns about modifying the current ideology, it's fair
enough.
Actually I'm myself more in favor of adding strict counterparts and exporting
them conjointly, to support both the mathematical roots and the
performance of the operations that are done most of the time. (Which kind of
numbers do developers work with more often? Regular Ints and Doubles, or
lazy Peano numbers?)
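
A tiny demonstration of the kind of difference at stake (compiled without optimisations; with -O, GHC may strictify the foldl version anyway):

import Data.List (foldl')

main :: IO ()
main = do
  -- foldl' keeps the accumulator evaluated at every step.
  print (foldl' (+) 0 [1 .. 1000000 :: Int])
  -- the lazy foldl first builds a million-deep chain of (+) thunks.
  print (foldl  (+) 0 [1 .. 1000000 :: Int])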


2012/5/23 wren ng thornton w...@freegeek.org

 On 5/21/12 10:51 AM, Yves Parès wrote:

 I do think we have the opposite problem, however, in much Haskell code --

 people are using the clean, obviously correct, but inefficient code even
 in
 standard library functions that really should be optimized like crazy!

 And even before optimizing like crazy, I think the functions that are
 more often good should be emphasized: Prelude should export foldl'
 together with/instead of foldl, sum should have its sum' counterpart (or
 even be rewritten to use foldl'), and so on...
 It baffles me that functions that are said to be more efficient in the
 majority of cases are not the default.


 Part of this is due to legacy and the Haskell Report. For example, the
 definition of sum is specified by the Report. And unfortunately there exist
 (rare) cases where the semantics of sum and sum' differ: namely when the
 type supports lazy addition (e.g., Peano numbers) and therefore can return
 a partial answer to an infinite summation.

 That said, the GHC philosophy in most of these cases has been to:

 (a) use their own definitions with a CPP flag USE_REPORT_PRELUDE in case
 you *really* want the Report's definition[1], and

 (b) to secretly replace foldl by foldl' in cases where it can determine
 the function argument to be strict (and therefore that the change doesn't
 alter the semantics).

 That doesn't fix everything, and it's no justification for not having the
 newer Reports give more sane definitions, but it's better than nothing.

 Part of the issue re fixing the Report is that there's a tension between
 correctness and expediency, as well as between different notions of
 correctness. By and large, Haskell has managed to stay out of theoretical
 arguments about choosing what correct means when it is still
 controversial in mathematics. (Whence many of the complaints about the
 non-mathematical nature of the numeric type classes.) But which of the
 foldr vs foldl vs foldl' definitions of summation is the correct
 definition? Mathematically, infinite summations should be allowed, and
 summations of infinite quantities should be allowed. However,
 pragmatically, infinite summations are of a different nature than finite
 summations; e.g., when they converge it'd be nice to evaluate them in
 finite time. So on the one hand, a naive view of correctness suggests
 that we ought to allow these summations as just more of the same; but on
 the other hand, an alternative notion of correctness suggests that while
 infinite sums and sums of infinites should be handled, they should be
 handled by different means than traditional finite sums of finite values.

 The foldl' definition of summation only allows finite summations of
 finite[2] values; the foldl definition only allows finite summations, but
 they can be summations of infinite values; the foldr definition allows both
 infinite values and infinitely many summands, though it's not especially
 useful for handling infinite summations[3]. We can probably exclude foldr
 with all fairness, and tell folks to handle infinite summations in another
 way. But between foldl and foldl' there's (room for) a lot more debate
 about which should be blessed by the Prelude. Do we want to support lazy
 numbers out of the box, or do we want to support performance for strict
 numbers out of the box? While there's been a growing strictification of
 Haskell for performance reasons, do recall that laziness was one of the
 initial reasons for defining Haskell in the first place. Hence the old
 Report's definition. The role of Haskell as a research engine has moved on
 since then, but the decision to codify that is still non-trivial.


 [1] http://hackage.haskell.org/packages/archive/base/4.5.0.0/doc/html/src/Data-List.html#sum

 [2] For the purpose of discussion, I'm considering the Float and Double
 values for infinities to be finite, because they are identical to the
 finite values with respect to strictness and size of representation.

 [3] It could take infinitely long to return, even for types which allow
 lazily returning partial values. For example,

data Peano = Z | S Peano
instance Num Peano where ...

zeros = Z : zeros

-- Has to traverse the whole list, just in case there's a (S _) in
-- there somewhere, before it knows whether to return Z or (S _).
main = print (foldr (+) Z zeros)

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list

[Haskell-cafe] Formalisation for types of monads

2012-05-21 Thread Yves Parès
When explaining monads to beginners (having an imperative background), I
found myself saying that there are *roughly* three groups of monads
(because beginners are always worried about their cost, i.e. their incidental
complexity):
- Function-oriented monads (e.g. State, Reader, Cont)
- Reducible data-oriented monads (e.g. Maybe, Identity, Writer)
- Irreducible data-oriented monads (e.g. List, free monads) (which, as I
understood, have a (>>=) operation that has quadratic complexity if not
used properly; a sketch follows below)

Are there other groups I'm missing, and are there terms to formalize them?
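
To make the quadratic-(>>=) remark above concrete, here is a minimal free monad sketch (not from the original post): every (>>=) rebuilds the accumulated tree, so a left-nested chain like ((m >>= f) >>= g) >>= h walks the early layers once per bind.

data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  return = pure
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)  -- re-traverses the whole tree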
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-21 Thread Yves Parès
 I do think we have the opposite problem, however, in much Haskell code --
people are using the clean, obviously correct, but inefficient code even in
standard library functions that really should be optimized like crazy!

And even before optimizing like crazy, I think the functions that are
more often good should be emphasized: Prelude should export foldl'
together with/instead of foldl, sum should have its sum' counterpart (or
even be rewritten to use foldl'), and so on...
It baffles me that functions that are said to be more efficient in the
majority of cases are not the default.

 I have worked at places in industry where teams automatically use C++ for
everything.

Did they look at you as if you were an alien (and even say you were
not a serious developer) when you raised the possibility of evaluating
the feasibility of using/building a more expressive language for a specific
task?

2012/5/21 Ryan Newton rrnew...@gmail.com

 The unconditional desire for maximum possible object code
 performance is usually very stupid, not to mention impossible to reach
 with any high level language and any multi-tasking operating system.


 Definitely.  I don't know if we have a catchy term for kneejerk
 optimization or if it falls under the broader umbrella of premature
 optimization [including misdirected or unneeded optimization].

 I do think we have the opposite problem, however, in much Haskell code --
 people are using the clean, obviously correct, but inefficient code even in
 standard library functions that really should be optimized like crazy!


  Haskell's average penalty compared to C is
 no reason to write the entire application in C.


 Yes, this seems to be a separate disease.  Not just using low-level langs,
 per se, but using them for *everything*.  I have worked at places in
 industry where teams automatically use C++ for everything.  For example,
 they use it for building all complete GUI applications, which surprises me
 a little bit.  I would have thought in recent years that almost everyone
 was using *something* else (Java,Python, whatever) to do much of the
 performance-non-critical portions of their application logic.

   -Ryan


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-21 Thread Yves Parès
 Not necessarily.  For example the 'nub' function from Data.List could be
 much faster.  Unfortunately this would also change its type.  O(n²)
 complexity is really the best you can get with the Eq constraint.

Why not, in that kind of case, provide a second function (named
differently) together with the original function, and document their
differences (i.e. with respect to performance)?
It seems like a pretty quick and honest trade-off to me.
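
For instance, something along these lines (a sketch of the usual Ord-based counterpart, O(n log n) instead of O(n²)):

import qualified Data.Set as Set

ordNub :: Ord a => [a] -> [a]
ordNub = go Set.empty
  where
    go _    []     = []
    go seen (x:xs)
      | x `Set.member` seen = go seen xs
      | otherwise           = x : go (Set.insert x seen) xs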

2012/5/21 Ertugrul Söylemez e...@ertes.de

 Ryan Newton rrnew...@gmail.com wrote:

  I do think we have the opposite problem, however, in much Haskell code
  -- people are using the clean, obviously correct, but inefficient code
  even in standard library functions that really should be optimized
  like crazy!

 Not necessarily.  For example the 'nub' function from Data.List could be
 much faster.  Unfortunately this would also change its type.  O(n²)
 complexity is really the best you can get with the Eq constraint.  You
 have to change to Ord for better performance.

 In other words:  Some optimizations change the semantics, and semantics
 is taken very seriously in Haskell, for which I'm grateful.


 Greets,
 Ertugrul

 --
 Key-ID: E5DD8D11 Ertugrul Soeylemez e...@ertes.de
 FPrint: BD28 3E3F BE63 BADD 4157  9134 D56A 37FA E5DD 8D11
 Keysrv: hkp://subkeys.pgp.net/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-21 Thread Yves Parès
 If you are writing a program or system that has significant performance
requirements, you might just be better off doing the whole thing in C/C++
and living with the annoyance of doing GUIs

I fail to see how the GUI part would suffer from lack of performance if the
rest of the system is fine. I hate to be bold, but to me this case
sounds a little bit like MVC done wrong, if breaking the GUI apart from
the rest of the software is really that impossible.
Anyway, only benchmarks could tell whether, in such a case, coding the GUI in
Haskell and integrating it with the rest burns the performance to the
ground.

 In addition, it's pretty much impossible for me to use Haskell to write
portions (or all of) a game that would include a console version. This is
largely for build system and platform support issues. Doing everything in
C++ is nearly the only option here.

I worked in a video game company too (I referred to it when I answered
Ryan about companies automatically using C++), and I agree: the first
unbreakable obstacle to implementing some parts of the application
in Haskell (or in anything else than C/C++) that comes to mind is the fact
that in the end it must run not only on personal computers.
The main issue is that those systems are way too closed by their
manufacturers. A second issue may be RAM (way scarcer than on PCs: e.g. 512 MB
in total for the PS3), but --again-- benchmarks in that respect would be more
enlightening than my own opinion.

2012/5/21 Sam Martin sam.mar...@geomerics.com


  Yes, this seems to be a separate disease.  Not just using low-level
 langs, per se,
  but using them for *everything*.  I have worked at places in industry
 where teams
  automatically use C++ for everything.  For example, they use it for
 building all
  complete GUI applications, which surprises me a little bit.  I would
 have thought
  in recent years that almost everyone was using *something* else
 (Java,Python,
  whatever) to do much of the performance-non-critical portions of their
 application logic.

 I think disease might be overstating this somewhat :) In defence of
 using C++ for everything: Interfacing different languages is not exactly
 trivial, and in some cases, impossible.

 If you are writing a program or system that has significant performance
 requirements, you might just be better off doing the whole thing in C/C++
 and living with the annoyance of doing GUIs, or whatever, in C++. The
 overhead of pulling in another language may simply outweigh the convenience.

 In addition, it's pretty much impossible for me to use Haskell to write
 portions (or all of) a game that would include a console version. This is
 largely for build system and platform support issues. Doing everything in
 C++ is nearly the only option here.

 This situation could be improved though, by making it far easier to embed
 Haskell within C/C++. It's not difficult by design, but there are large
 engineering obstacles in the way, and it's hard to see where the effort to
 remove these could come from. But then it would be more plausible to argue
 that people are missing out by not using a mix of C++ and Haskell.
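
 For the record, a minimal sketch of the embedding direction that already
 exists today: exposing a Haskell function to C via the FFI (the C side
 still has to initialise the RTS with hs_init before calling it).

 {-# LANGUAGE ForeignFunctionInterface #-}
 module Embedding where

 import Foreign.C.Types (CInt (..))

 haskellAdd :: CInt -> CInt -> CInt
 haskellAdd = (+)

 -- GHC generates Embedding_stub.h with a C prototype for haskellAdd.
 foreign export ccall haskellAdd :: CInt -> CInt -> CInt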

 Cheers,
 Sam


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-16 Thread Yves Parès
 On the one hand, characterizing those who desire the best performance
possible as simple-minded is, at best, a gross over-generalization. Like
you, I work in a field where optimization is king (e.g., in machine
translation, program runtimes are measured in days).

You misread the logical implication, Ertugrul said:

 simple-minded people often have a desire to get the best performance
possible

Which means:
A is simple-minded ==> A will (most probably) want to get the best perf.

You interpreted it as:
A wants to get the best perf. ==> A is simple-minded

I agree with you, the second implication is wrong, but that's not what was
meant.

(Moral: if people used more logical operators and fewer words, we would
understand each other better.)

2012/5/16 wren ng thornton w...@freegeek.org

 On 5/10/12 8:44 PM, Ryan Newton wrote:

 through the trouble of writing my algorithms in C/C++, but simple-minded
 people often have a desire to get the best performance possible, in
 which case you really want to use C, C++, Fortran or whatever high level
 assembler language you like.


 I think this is a bit of an unfair accusation (simple-minded).


 Well, yes and no.

 On the one hand, characterizing those who desire the best performance
 possible as simple-minded is, at best, a gross over-generalization. Like
 you, I work in a field where optimization is king (e.g., in machine
 translation, program runtimes are measured in days).

 But on the other hand, there are quite a lot of people who focus
 excessively on optimization when the actual differences for the code
 they're writing are either negligible (e.g., because of I/O bottlenecks) or
 uninteresting (e.g., a 2x slowdown is a nuisance rather than a crisis). For
 those who focus on optimization when the benefits are negligible or
 uninteresting, I think it's fair to characterize that focus as
 simple-minded. This focus seems especially common when people start
 talking about which language is faster--- as opposed to, say, which
 library is faster, or which implementation of a given algorithm is faster.
 In some cases the question of language speed is legitimate, but IME it's
 far more often used to prop up prejudices about which language is
 better--- i.e., which group of human beings (defined by their choice of
 programming language) is better.


 I agree that, as a community, we need to come to a realistic understanding
 of the performance characteristics of Haskell compared to C etc. I don't
 like prejudice and propaganda for Haskell any more than I like prejudice
 and propaganda for C etc. In some cases it's true that Haskell out-performs
 C (or C++, or Java); but in many cases it does not, and we need to
 acknowledge that. The real problem, I feel, is that it's not always clear
 which category any given program falls into. In some cases the idiomatic
 Haskell is the fast one, in some cases it's the slow one; but in all cases,
 what is meant by idiomatic Haskell is a moving target with poorly
 specified meaning.

 The advent of ByteStrings was a major coup in figuring out exactly where
 and why Haskell is slow and how to fix it. Iteratees are another one,
 though the details are still being worked out. However, things like
 strictness annotations on (non-accumulator) function arguments, the choice
 whether to use ST or not, the choice whether to CPS-transform or not, etc,
 are still very much a black art. Even with a thorough understanding of the
 STG-machine and the GHC compilation pipeline, it's not always clear whether
 they will help or hurt. ST in particular is especially finicky, to the
 point where I wonder if it's ever a win (outside of in-place updates on
 arrays).

 So while I think most language performance comparisons are dubious to
 begin with, I don't think the comparison of Haskell to C++ (or C, or
 Java,...) can even be construed as something with a meaningful answer.
 Haskell is just too different, and its in-the-large cost model is too
 poorly understood. I'm not even aware of any helpful generalizations over
 comparing two specific programs. Even when restricting to things like
 parsing or compute-intensive numeric code, the comparison comes back with
 mixed answers.

 --
 Live well,
 ~wren


 __**_
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/**mailman/listinfo/haskell-cafehttp://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Can't prevent memoizing in simple code

2012-05-16 Thread Yves Parès
The buffer http://hpaste.org/68595 presents a simple piece of code I tried to
profile.
I spotted what I strongly believe to be an abusive memoization. The problem
is that I don't see how to (simply) get rid of it.
Compiled with -O2, it consumes 130MB of memory; however, lines A and B
executed separately each consume only 1MB.

The infinite list (l 1), whatever I do, keeps being shared between lines A
and B.
I tried to wrap it in a function, as you can see, and I also tried to make it
explicitly polymorphic (bypassing the monomorphism restriction); nothing
solves it, GHC is just too good at memoizing.

NB: When compiled without optimisations, the sharing does not happen (side
note: but then the lack of strictness analysis -- which is what I was testing
in the first place -- makes line A (the call to suminit2) consume a lot of
memory, but this is normal).
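
For readers who cannot load the paste, here is a minimal, hypothetical
reconstruction of the pattern in question (the function, the names and the
bound below are stand-ins I made up, not the code from hpaste):

{-# LANGUAGE BangPatterns #-}
module Main where

-- Stand-in for the paste's summing function: strictly sums the first n
-- elements of a list.
sumFirst :: Int -> [Int] -> Int
sumFirst n = go 0 . take n
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = do
    print $ sumFirst bound (l 1)  -- line A
    print $ sumFirst bound (l 1)  -- line B: with -O2, full laziness can float
                                  -- (l 1) out and share it with line A, so the
                                  -- traversed prefix of the list is retained
  where
    l n   = enumFrom n
    bound = 100000000             -- arbitrary large bound for the sketch
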
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can't prevent memoizing in simple code

2012-05-16 Thread Yves Parès
Thanks ^^

My other solution was a dirty trick:

Changing the second (l 1) to (l (div 2 2)), which will only be good
until GHC learns how to evaluate it statically ((2-1) wasn't working, for
instance).

I also noticed (while profiling to confirm that this was the source of the
memory leak) that manually adding cost centres forces re-evaluation:

main = do
  print $ suminit2 ({-# SCC list1 #-} l 1) 100 0
  print $ fst $ suminit ({-# SCC list2 #-} l 1) 100 0
  where
l n = enumFrom n

CAF:main5  Main 97   00.00.026.6
50.0
  main  Main118   00.00.026.6
50.0
   list2Main119   00.00.026.6  *50.0
*
main.l  Main120   1   26.6   50.026.6
50.0
[...]
CAF:main8  Main 95   00.00.030.4
50.0
  main  Main110   00.00.030.4
50.0
   list1Main111   00.00.030.4
*50.0*
main.l  Main112   1   30.4   50.030.4
50.0

We see here that allocations are shared between list1 and list2 (I expected
list1 to get 100% and list2 0%, due to sharing).
Strange...

2012/5/16 Anthony Cowley acow...@gmail.com

 On May 16, 2012, at 12:08 PM, Yves Parès wrote:

 The buffer http://hpaste.org/68595 presents a simple code I tried to
 profile.
 I spotted what I strongly think to be an abusive memoization. The problem
 is that I don't see how to (simply) get rid of it.
 Compiled with -O2, it consumes 130MB of memory, however lines A and B
 executed separately consume each only 1MB.

 The infinite list (l 1), whatever I do, keeps being shared between lines A
 and B.
 I tried to wrap it in a function, as you can see, I also tried to make it
 explicitely polymorphic (bypassing monomorphic restriction), nothing solves
 it, GHC is just to good at memoizing.


 Adding a {-# NOINLINE l #-} annotation helps here. Syntactically, it must
 be located somewhere a type signature for l would also be valid.

 Anthony
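
To spell out where the suggested pragma goes (a sketch only, reusing the toy
definitions from the reconstruction above; whether it removes the sharing in
the real program is exactly what this thread is about):

main :: IO ()
main = do
    print $ sumFirst bound (l 1)
    print $ sumFirst bound (l 1)
  where
    -- The pragma sits where a type signature for l would also be valid:
    {-# NOINLINE l #-}
    l :: Int -> [Int]
    l n = enumFrom n
    bound = 100000000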


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-16 Thread Yves Parès
 The profiler is certainly useful (and much better with GHC 7.4)

What are the improvements in that matter? (I just noticed that some GHC
flags wrt profiling have been renamed)

2012/5/16 Ben Gamari bgamari.f...@gmail.com

 Kevin Charter kchar...@gmail.com writes:

 snip
  For example, imagine you're new to the language, and as an exercise
 decide
  to write a program that counts the characters on standard input and
 writes
  the count to standard output. A naive program in, say, Python will
 probably
  use constant space and be fairly fast. A naive program in Haskell stands
 a
  good chance of having a space leak, building a long chain of thunks that
  isn't forced until it needs to write the final answer.  On small inputs,
  you won't notice. The nasty surprise comes when your co-worker says
 cool,
  let's run it on this 100 MB log file! and your program dies a horrible
  death. If your friend is a sceptic, she'll arch an eyebrow and secretly
  think your program -- and Haskell -- are a bit lame.
 
 I, for one, can say that my initial impressions of Haskell were quite
 scarred by exactly this issue. Being in experimental physics, I often
 find myself writing data analysis code doing relatively simple
 manipulations on large(ish) data sets. My first attempt at tackling a
 problem in Haskell took nearly three days to get running with reasonable
 performance. I can only thank the wonderful folks in #haskell profusely
 for all of their help getting through that period. That being said, it
 was quite frustrating at the time and I often wondered how I could
 tackle a reasonably large project if I couldn't solve even a simple
 problem with halfway decent performance. If it weren't for #haskell, I
 probably would have given up on Haskell right then and there, much to my
 deficit.

 While things have gotten easier, even now, nearly a year after writing
 my first line of Haskell, I still occassionally find a performance
 issue^H^H^H^H surprise. I'm honestly not really sure what technical
 measures could be taken to ease this learning curve. The amazing
 community helps quite a bit but I feel that this may not be a
 sustainable approach if Haskell gains greater acceptance. The profiler
 is certainly useful (and much better with GHC 7.4), but there are still
 cases where the performance hit incurred by the profiling
 instrumentation precludes this route of investigation without time
 consuming guessing at how to pare down my test case. It's certainly not
 an easy problem.

 Cheers,

 - Ben


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-15 Thread Yves Parès
Yet this resource seems a little outdated:
http://www.haskell.org/haskellwiki/Performance/Strings does not mention Text,
and http://www.haskell.org/haskellwiki/Performance/Arrays does not mention Vector.
And what's the status of the Streams IO approach detailed in
http://www.haskell.org/haskellwiki/Performance/IO ? Has it evolved into
enumerators/conduits, or was that an unrelated approach?

2012/5/12 Chris Wong chrisyco+haskell-c...@gmail.com

 On Sat, May 12, 2012 at 12:41 AM, Gregg Lebovitz glebov...@gmail.com
 wrote:
  I would find it useful to pull all this information together into a
 single
  document that discusses all the performance issues in one place and
 shares
  the real life experience is dealing with each issue. I see this as a best
  practice paper rather than a research document.
 
  Does such a document exist? If not, I am willing try and start one.

 http://www.haskell.org/haskellwiki/Performance

 ;)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-08 Thread Yves Parès
One thing that baffles me is the comparison Haskell vs. Java:
http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=ghc&lang2=java

I would have expected consistently shorter code and better performance on average.

2012/5/8 Silvio Frischknecht silvio.fris...@gmail.com


  Can anyone provide a code in Haskell that performs better in terms
  of execution speed than a well-written C/C++ program?

 http://shootout.alioth.debian.org/ compares a lot of programming
 languages on a lot of problems. Haskell is never as fast as C++, but
 for some problems it comes close.

 Also I challenge anyone to improve one of the haskell programs there.
 It'd be cool if we could make haskell get a higher rank. I recently
 managed to improve the Fasta algorithm, but not by much. Also I think
 the benchmarks don't use llvm flag. It says somewhere that they don't
 measure llvm because they don't have time. But I think they are
 refering to clang. So maybe someone should tell them to turn on the
 llvm flag since it makes a lot of haskell programs faster.

 Silvio

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-06 Thread Yves Parès
I do not agree: the fib function is tail-recursive, any good C compiler is
able to optimize away the calls and reduce it to a mere loop.
At least that's what I learnt about tail recursion in C with GCC.

2012/5/6 Artur apeka1...@gmail.com

 On 06.05.2012 10:44, Ivan Lazar Miljenovic wrote:

 On 6 May 2012 16:40, Janek S.fremenz...@poczta.onet.pl  wrote:

 Hi,

 a couple of times I've encountered a statement that Haskell programs can
 have performance
 comparable to programs in C/C++. I've even read that thanks to
 functional nature of Haskell,
 compiler can reason and make guarantess about the code and use that
 knowledge to automatically
 parallelize the program without any explicit parallelizing commands in
 the code. I haven't seen
 any sort of evidence that would support such claims. Can anyone provide
 a code in Haskell that
 performs better in terms of execution speed than a well-written C/C++
 program? Both Haskell and C
 programs should implement the same algorithm (merge sort in Haskell
 outperforming bubble sort in
 C doesn't count), though I guess that using Haskell-specific idioms and
 optimizations is of
 course allowed.

  How about
  http://donsbot.wordpress.com/2007/11/29/use-those-extra-cores-and-beat-c-today-parallel-haskell-redux/
  ?

  Hi,

  isn't that particular Haskell code outperforming C (22 seconds vs. 33) just
 because the author uses recursion in C? I surely love Haskell, and the way
 its code is easily parallelized, but that example seems unfair.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-06 Thread Yves Parès
Sorry sorry sorry ^^
I read too fast and realized I was wrong before you sent this.

Okay, so then it would be nice if someone with higher skills told us
where Haskell's recursive calls stand when compared to C's.
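
For readers following the exchange, a small illustrative sketch of the
distinction (not the code from the linked benchmark):

-- Doubly recursive: each call spawns two further calls, so it is not
-- tail-recursive and no compiler can turn it into a mere loop as written.
fibNaive :: Int -> Integer
fibNaive n
  | n < 2     = fromIntegral n
  | otherwise = fibNaive (n - 1) + fibNaive (n - 2)

-- Accumulator version: the single recursive call is in tail position,
-- so it does compile down to a tight loop.
fibIter :: Int -> Integer
fibIter n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)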

2012/5/6 Roman Cheplyaka r...@ro-che.info

 It is not tail-recursive.

 * Yves Parès yves.pa...@gmail.com [2012-05-06 10:58:45+0200]
  I do not agree: the fib function is tail-recursive, any good C compiler
 is
  able to optimize away the calls and reduce it to a mere loop.
  At least that's what I learnt about tail recursion in C with GCC.
 
  2012/5/6 Artur apeka1...@gmail.com
 
   On 06.05.2012 10:44, Ivan Lazar Miljenovic wrote:
  
   On 6 May 2012 16:40, Janek S.fremenz...@poczta.onet.pl  wrote:
  
   Hi,
  
   a couple of times I've encountered a statement that Haskell programs
 can
   have performance
   comparable to programs in C/C++. I've even read that thanks to
   functional nature of Haskell,
   compiler can reason and make guarantess about the code and use that
   knowledge to automatically
   parallelize the program without any explicit parallelizing commands
 in
   the code. I haven't seen
   any sort of evidence that would support such claims. Can anyone
 provide
   a code in Haskell that
   performs better in terms of execution speed than a well-written C/C++
   program? Both Haskell and C
   programs should implement the same algorithm (merge sort in Haskell
   outperforming bubble sort in
   C doesn't count), though I guess that using Haskell-specific idioms
 and
   optimizations is of
   course allowed.
  
   How about
    http://donsbot.wordpress.com/2007/11/29/use-those-extra-cores-and-beat-c-today-parallel-haskell-redux/
 
   ?
  
Hi,
  
isn't it that particular Haskell code is outperforming C (22 seconds
 vs.
   33), just because the author uses recursion in C? I surely love
 Haskell,
   and the way it's code is easy parallelized, but that example seams not
 fair.
  
  
  

  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe


 --
 Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and arrays

2012-05-05 Thread Yves Parès
Well actually, unless you _really_ need parameterizable indexes (i.e. not
only Ints), I don't see any reason to prefer array over vector now.

2012/5/4 Johan Tibell johan.tib...@gmail.com

 Hi Morten,

 If speed is really important I would go with the vector package. It
 has a more modern API and better performance.

 -- Johan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with forall type in type declaration

2012-05-04 Thread Yves Parès
run :: Monad IO a -> IO a

Actually this type is wrong. Monad has to appear as a class constraint, for
instance:

run :: Monad m => m a -> IO a

Are you trying to make:

run :: IO a -> IO a
??
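
If the goal is to store such a runner in a record while keeping its result
type open, one common way (a sketch, assuming that is indeed what the
original poster wants) is a higher-rank field:

{-# LANGUAGE RankNTypes #-}

-- The field keeps its polymorphism: f can be applied at any result type a.
data Test = Test { f :: forall a. IO a -> IO a }

-- Example value: store the identity runner.
test :: Test
test = Test { f = id }

useIt :: Test -> IO ()
useIt (Test run') = do
    () <- run' (return ())            -- used at a = ()
    n  <- run' (return (42 :: Int))   -- used at a = Int
    print n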

2012/5/4 Magicloud Magiclouds magicloud.magiclo...@gmail.com

 Hi,
  Assuming this:
  run :: Monad IO a -> IO a
 data Test = Test { f }

  Here I'd like to set f to run, like Test run. Then what is the type of
 f?
  The confusing (me) part is that, the argument pass to f is not fixed
 on return type, like f1 :: Monad IO (), f2 :: Monad IO Int. So
  data Test a = Test { f :: Monad IO a -> IO a} does not work.
 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: HandsomeSoup-0.3: CSS selectors for HXT

2012-04-27 Thread Yves Parès
Why do you make your own overlay to download files via HTTP?
HXT has backends (hxt-http or hxt-curl) to do that.

On 27 April 2012 at 04:16, aditya bhargava bluemangrou...@gmail.com wrote:

 *Homepage:* http://egonschiele.github.com/HandsomeSoup
 *On Hackage:* http://hackage.haskell.org/package/HandsomeSoup

 *Blurb:*

 HandsomeSoup is the library I wish I had when I started parsing HTML in
 Haskell.
 It is built on top of HXT and adds a few functions that make is easier to
 work with HTML.
 Most importantly, it adds CSS selectors to HXT. The goal of HandsomeSoup
 is to be a complete CSS2 parser for HXT (it is very close to this right
 now).



 --
 adit.io

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building ghc on arch linux ARM?

2012-04-11 Thread Yves Parès
So is it possible now to have a desktop PC compile your program using
Template Haskell into LLVM bytecode and then run it on ARM?
If not, is it definitely impossible (as I said, I don't know much about
LLVM) or is it just not done yet?

On 10 April 2012 at 19:03, Joey Hess j...@kitenet.net wrote:

 Joachim Breitner wrote:
  Most of these architectures do not have a native code generator (so they
  are compiled via C) and are unregisterized, i.e. GHC knows nothing about
  their registers. Both cause a performance penalty; I don’t know numbers.
  I assume this is what Joey refers to. But maybe also that ARM machines
  tend to be slower :-)

 Both of course. The rare times I need to build a fairly big haskell
 program like git-annex on arm, it can easily take an hour or so with -O0.

 BTW, the other problem with Haskell on arm is that AFAIK there is no
 ghci, and so also no Template Haskell, and so if you're writing Real
 World utilities that you want to be maximally portable, this means you
 have to avoid using an increasing number of libraries. This rules Yesod
 right out; I've avoided using lenses as I'd have to write much manual
 boilerplate, etc.

 --
 see shy jo

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building ghc on arch linux ARM?

2012-04-10 Thread Yves Parès
 Other than speed, it's fine.

Do we know what the speed issues are due to?

Plus, I believed some people had used GHC on smartphones?

On 9 April 2012 at 01:45, Joey Hess j...@kitenet.net wrote:

 Thomas DuBuisson wrote:
  On Sun, Apr 8, 2012 at 4:03 PM, Francesco Mazzoli f...@mazzo.li wrote:
   No, it is not possible to build GHC without GHC. Building GHC on ARM is
   going to be extremely tricky (I'm not sure anyone has ever done it).
 
  I used to use an unregistered build of GHC built by someone in the
  Debian community - it worked well enough.

 It ships with Debian, along with the full Haskell Platform built for ARM
 and lots of other libraries. Other than speed, it's fine.

 --
 see shy jo

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building ghc on arch linux ARM?

2012-04-10 Thread Yves Parès
For instance, yes.
I think I have seen, at times on this mailing list or on blog posts
(http://ghcarm.wordpress.com/), people who have used GHC on an ARM platform.
I distinctly remember having seen on the mailing list that cross-compiling
wasn't working, but that we can now compile with GHC on ARM, which means GHC
can be compiled for ARM.

On 10 April 2012 at 12:27, Joachim Breitner m...@joachim-breitner.de wrote:

 Hi,

  On Tuesday, 10.04.2012, at 11:00 +0200, Yves Parès wrote:
  Plus, I believed some had used GHC for smartphones?

 do you refer to

 http://www.joachim-breitner.de/blog/archives/300-Xmonad-on-my-mobile-phone.html
 or something more serious?

 Greetings,
 Joachim

 --
 Joachim nomeata Breitner
  m...@joachim-breitner.de  |  nome...@debian.org  |  GPG: 0x4743206C
  xmpp: nome...@joachim-breitner.de | http://www.joachim-breitner.de/


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building ghc on arch linux ARM?

2012-04-10 Thread Yves Parès
 All these are not cross-compiled, but natively
 compiled on the respective architecture, and I don't think it is easily
 possible to cross-compile GHC itself even today.

So how did they get compiled the first time? How do you get a GHC working
on *or* for an ARM platform if you don't use Debian?
And why was Joey Hess talking about performance issues?
(I'll be eventually interested, as Graham Klyne suggested earlier, in
compiling for Raspberry Pi, if the hardware suits).


On 10 April 2012 at 12:49, Joachim Breitner m...@joachim-breitner.de wrote:

 Hi,

  On Tuesday, 10.04.2012, at 12:36 +0200, Yves Parès wrote:
  For instance, yes.
  I think I had seen some times on this mailing list or on blog posts
  (http://ghcarm.wordpress.com/) people having used GHC on ARM platform.
  I distinctly remember having seen on the mailing list that
  cross-compiling wasn't working but that we now can compile with GHC on
  ARM, which means GHC can be compiled for ARM.

 I’m not sure what the news are here: Debian has provided ghc6 on arm at
 least since Debian etch in 2006 (GHC 6.6), and the first Debian release
 with ghc6 (Debian sarge) ships it on alpha hppa i386 ia64 m68k powerpc
 s390 and sparc (GHC 6.2). All these are not cross-compiled, but natively
  compiled on the respective architecture, and I don't think it is easily
 possible to cross-compile GHC itself even today. (All data from
 http://archive.debian.net/)

 Greetings,
 Joachim



 --
 Joachim nomeata Breitner
  m...@joachim-breitner.de  |  nome...@debian.org  |  GPG: 0x4743206C
  xmpp: nome...@joachim-breitner.de | http://www.joachim-breitner.de/


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building ghc on arch linux ARM?

2012-04-10 Thread Yves Parès
Okay, thanks for the explanation.
But that does not completely answer the original question: now, using
branch 7 of GHC, what can you do to get a Haskell program compiled on/for an
ARM platform without using Debian? Do you have to use LLVM? So you have to
compile your program on a regular x86/x64 PC with the LLVM backend and then
use that bytecode on your ARM platform, is that it? Is LLVM bytecode that
portable? I don't know much about LLVM, so sorry if those questions feel a
bit dumb ;)
ghcarm talks about the Pandaboard and LLVM in a post:
http://ghcarm.wordpress.com/2011/07/03/llvm-on-arm-testing

And thanks for the information about git-annex, I'm checking it out; it
looks interesting ;)
Actually, bottom line, I would be interested in running a web app (preferably
using Yesod) on a Raspberry Pi (or similar, but more expensive), but this
use case is cool too.

On 10 April 2012 at 13:13, Joachim Breitner m...@joachim-breitner.de wrote:

 Hi,

  On Tuesday, 10.04.2012, at 13:04 +0200, Yves Parès wrote:
   All these are not cross-compiled, but natively
    compiled on the respective architecture, and I don't think it is
  easily
   possible to cross-compile GHC itself even today.
 
  So how did they get compiled the first time? How do you get a GHC
  working on or for an ARM platform if you don't use Debian?
  And why was Joey Hess talking about performance issues?
  (I'll be eventually interested, as Graham Klyne suggested earlier, in
  compiling for Raspberry Pi, if the hardware suits).

 well, GHC was more portable in version 6.8 and before (this is not
 cross-compiling, at least not really:
 http://hackage.haskell.org/trac/ghc/wiki/Building/Porting

 When I ported GHC to s390x half a year ago, I think I started with
 porting 6.8 and then kept building the next released version with the
 previous. It would be great, though, if porting current versions
 directly would become possible again.

 Most of these architectures do not have a native code generator (so they
 are compiled via C) and are unregisterized, i.e. GHC knows nothing about
 their registers. Both cause a performance penalty; I don’t know numbers.
 I assume this is what Joey refers to. But maybe also that ARM machines
 tend to be slower :-)

 I’m happily running git-annex on a NSLU2 (266MHz/23MB RAM ARM NAS
 device) and have done so before it was registerized, so it is definitely
 a useful target for Haskell.

 Greetings,
 Joachim

 --
 Joachim Breitner
  e-Mail: m...@joachim-breitner.de
  Homepage: http://www.joachim-breitner.de
  Jabber-ID: nome...@joachim-breitner.de

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Generalizing (++) for monoids instead of using ()

2012-04-02 Thread Yves Parès
Plus one might argue that using <> to mean different is a bad choice, as it
graphically means strictly inferior or strictly superior, which implies
comparability, whereas equality and comparison are two different things
(e.g. Eq and Ord are two distinct classes in Haskell).
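
For concreteness, a tiny illustrative sketch of the operator under discussion
((<>) is simply infix mappend, exported by Data.Monoid since base 4.5 / GHC 7.4):

import Data.Monoid ((<>))
import qualified Data.Text as T

-- (<>) works on any Monoid, not only on lists.
examples :: ([Int], String, T.Text)
examples =
  ( [1, 2] <> [3, 4]              -- same result as (++) on lists
  , "foo" <> "bar"                -- String is [Char], so still (++)
  , T.pack "foo" <> T.pack "bar"  -- and it also covers Text, ByteString, etc.
  )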

On 1 April 2012 at 23:06, Daniel Peebles pumpkin...@gmail.com wrote:

 There are many reasons, but some of the more cited ones are that (<>) will
  break less code than (++) would, since (++) is ubiquitous and (<>) is most
  used in some pretty printers. Yes, mappend's type can be refined to that of
  the current list (++), but the increased polymorphism still has the
  potential to break existing code by making it harder to resolve instances.

  As for (<>) meaning not equal to, do you also have a problem with Monad's
  (>>) meaning a right bitwise shift, or the mutationey form of it, (>>=)? :)
  I don't think anyone in Haskell has ever used (<>) to mean (/=), so the
  fact that there exist a couple of languages out there that do use it that
 way shouldn't affect our decision.

 Dan

 On Sun, Apr 1, 2012 at 4:58 PM, aditya bhargava 
 bluemangrou...@gmail.comwrote:

 After asking this question:

 http://stackoverflow.com/questions/9963050/standard-way-of-joining-two-data-texts-without-mappend

  I found out that the new infix operator for `mappend` is (<>). I'm
  wondering why ghc 7.4 didn't generalize (++) to work on monoids instead. To
  me, (++) is much more clear. (<>) means not equal to for me. Can anyone
  shed light on this decision?


 Adit

 --
 adit.io

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Towards a single, unified API for incremental data processing

2012-04-02 Thread Yves Parès
That would be a great idea... too bad it's an April hoax ;)


On 1 April 2012 at 21:50, John Millikin jmilli...@gmail.com wrote:

 There are currently several APIs for processing strict monoidal values
 as if they were pieces of a larger, lazy value. Some of the most
 popular are based on Oleg's left-fold enumerators, including the
 iteratee, enumerator, iterIO. Other choices include comonads,
 conduits, and pipes.

 Despite having various internal implementations and semantics, these
 libraries generally export a similar-looking API. This is a terrible
 duplication of effort, and it causes dependant packages to be strongly
 tied to the underlying implementation.

 I propose that a new package, tzinorot, be written to provide a
 single API based on Data.List. It should be pretty easy to use,
 requiring only a few common extensions to the type system.

 For example, the enumerator package's 'mapM' function could be
 generalized for use in tzinorot through a few simple modifications to
 the type signature:

 --
 -- enumerator mapM
  mapM :: Monad m => (ao -> m ai) -> Enumeratee ao ai m b

  -- tzinorot mapM
  mapM :: (Monad m, Tzinorot t, ListLike l1 a1, ListLike l2 a2) => (l1
  a1 -> m (l2 a2)) -> t Void s (TzinorotItems (l1 a1)) (TzinorotItems
  (l2 a2)) m r
 --

 To make it easier to install and use the tzinorot package, it will
 depend on all of its supported implementations (iteratee, enumerator,
 conduits, pipes, etc), and use Michael Snoyman's cabala tool to
 manage dependency versions. See the cabala announcement for details on
 use:

 http://www.yesodweb.com/blog/2012/04/replacing-cabal

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] for = flip map

2012-03-28 Thread Yves Parès
Think of traverse as a mapA, as it's just like Data.Traversable.mapM,
but with an Applicative class constraint instead of a Monad one.
I've always wondered why it isn't called that way; the Applicative
equivalent of sequence is called sequenceA, for instance.
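
A sketch of that reading; as an aside, fmap can in fact be recovered from
traverse by instantiating the Applicative to Identity (defined locally here
to keep the sketch self-contained):

import Control.Applicative (Applicative(..))
import Data.Traversable (Traversable, traverse)

-- 'for' as flip fmap, as proposed in this thread (the name forF is mine):
forF :: Functor f => f a -> (a -> b) -> f b
forF = flip fmap

-- A minimal Identity applicative (also available from the transformers package).
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity x = Identity (f x)

-- fmap over any Traversable, recovered from traverse:
fmapT :: Traversable t => (a -> b) -> t a -> t b
fmapT f = runIdentity . traverse (Identity . f)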

On 28 March 2012 at 22:19, Christopher Done chrisd...@googlemail.com wrote:

 On 28 March 2012 22:05, Matthew Steele mdste...@alum.mit.edu wrote:
  Doesn't for already exist, in Data.Traversable?   Except that for =
  flip traverse.

 Traverse doesn't fit the type of fmap, it demands an extra type
 constructor:

  traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)

  fmap :: Functor f => (a -> b) -> f a -> f b

  Note the (a -> f b) instead of (a -> b).

  E.g.

  fmap :: (a -> b) -> [a] -> [b]

  can't be expressed with traverse, you can only get this far:

  traverse :: (a -> [b]) -> [a] -> [[b]]

 Unless I'm missing something.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can't install snap (problem with syb)

2012-03-28 Thread Yves Parès
It may come from the version of GHC you're using (Cabal version is fixed
for one specific GHC version).
Which one is it?

On 28 March 2012 at 23:05, Michael Iles michael.i...@ca.ibm.com wrote:

 I uninstalled ghc, removed my ~/.cabal directory, reinstalled ghc, then
 tried to install snap. I get:

 $ cabal install snap
 Resolving dependencies...
 Downloading syb-0.3.6...
 command line: cannot satisfy -package Cabal-1.14.0:
  Cabal-1.14.0-0338b6f52e6e6a56054371a110e1d79e is unusable due to missing
 or recursive dependencies:
  directory-1.1.0.2-8957520ced1fb19160c3a6126dcccfaa
 process-1.1.0.1-91185c964ab744c1f3cbca1863d2ba45
  (use -v for more information)
 cabal: Error: some packages failed to install:
 aeson-0.6.0.0 depends on syb-0.3.6 which failed to install.
 snap-0.8.0.2 depends on syb-0.3.6 which failed to install.
 syb-0.3.6 failed during the configure step. The exception was:
 ExitFailure 1



 Any ideas?

 Mike.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Arguments against an hypothetical Data.ByteString.Generic?

2012-03-27 Thread Yves Parès
Hello,

As vector provides a class generalizing all its flavours
(Data.Vector.Generic.Vector), it occurs to me that the same could be done
for ByteString. Then, packages based on it would have the choice between
hardcoded and generic, and they wouldn't have to duplicate a module to handle
both strict and lazy versions, as (with the exception of the functions for
communicating with C code) those already provide the same API.
I would be willing to write it; it's a concern I've had in mind for a long
time, but as I'm pretty sure the idea isn't new, I would very much like to
know if and what arguments (related to performance maybe? I don't know...)
were raised against it.
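
To make the idea concrete, here is a rough sketch of what such a class could
look like (a purely hypothetical API invented for illustration, loosely in
the spirit of Data.Vector.Generic):

import Data.Word (Word8)
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L

-- Hypothetical class unifying strict and lazy ByteStrings; no such module
-- exists, this only illustrates the proposal.
class GenByteString bs where
  packBytes   :: [Word8] -> bs
  unpackBytes :: bs -> [Word8]
  append      :: bs -> bs -> bs
  mapBytes    :: (Word8 -> Word8) -> bs -> bs

instance GenByteString S.ByteString where
  packBytes   = S.pack
  unpackBytes = S.unpack
  append      = S.append
  mapBytes    = S.map

instance GenByteString L.ByteString where
  packBytes   = L.pack
  unpackBytes = L.unpack
  append      = L.append
  mapBytes    = L.map
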
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Arguments against an hypothetical Data.ByteString.Generic?

2012-03-27 Thread Yves Parès
Hi Bas,

Yes, thank you for reminding me of that, I remember now having seen the project.
Strict ByteStrings being an alias for Vector Word8 is a good idea (as
bytestrings are already implemented exactly like
Data.Vector.Storable.Vector). But in that case we could use the API of
vector for bytestrings (the bytestring API would be provided only for
backwards compatibility, right?).

Does vector-bytestring plan to be the new implementation of bytestring
in the end, or is it a side package?

 In an ideal world we would have a Lazy type family which for each type
 of vector would return its lazy version

What about a type like:

data Vector v a = Empty | Chunk {-# UNPACK #-} !(v a) (Vector v a)
??

GHC accepts it.
And then every function for lazy vectors can then be written like:

import qualified Data.Vector.Generic as G (Vector(..))

cons :: (G.Vector v) => a -> Vector v a -> Vector v a
cons x v = ...

and also (even better):

instance (G.Vector v) => G.Vector (Vector v) where
cons x v = ...

Just make aliases to follow the API, for instance in Data.Vector.Lazy:

import qualified Data.Vector.Lazy.Internal as L
import qualified Data.Vector as V

type Vector a = L.Vector V.Vector a

On 27 March 2012 at 20:38, Bas van Dijk v.dijk@gmail.com wrote:

 On 27 March 2012 11:00, Yves Parès yves.pa...@gmail.com wrote:
  Hello,
 
  As vector provides a class generalizing all flavours
  (Data.Vector.Generic.Vector), it occurs to me that the same could be done
  for ByteString. Then, packages based on it would have the choice between
  hardcoded and generic, they wouldn't have to duplicate a module to handle
  both strict and lazy versions, as (with the exception of functions for
  communication with C code) they already provide the same API.
  I would be willing to make it, it's a concern I've had in mind for a long
  time, but as I'm pretty sure the idea isn't new, I would very much like
 to
  know if and what arguments (related to performance maybe ? I don't
 know...)
  were raised against that.

 It's not entirely what you need but are you aware of my
 vector-bytestring library?

 http://hackage.haskell.org/package/vector-bytestring

 It doesn't (yet) abstract over strict and lazy ByteStrings. But that
 would be a nice addition!

 In an ideal world we would have a Lazy type family which for each type
 of vector would return its lazy version (where the vector is unpacked
 in the cons cell). Then we would need your generic API for working
 with both lazy and strict vectors.

 Regards,

 Bas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Arguments against an hypothetical Data.ByteString.Generic?

2012-03-27 Thread Yves Parès
Oh, okay ^^ sorry, my bad. I should have compiled with -Wall.

So I started a repo at https://github.com/YwenP/bytestring-generic


On 27 March 2012 at 22:01, Bas van Dijk v.dijk@gmail.com wrote:

 On 27 March 2012 21:46, Yves Parès yves.pa...@gmail.com wrote:
  Yes, thank you to remind me of that, I remember now having seen the
 project.
  Strict ByteStrings being an alias to Vector Word8 is a good idea (are
  bytestrings are already implemented exactly like
  Data.Vector.Storable.Vector). But in that case we could use the API of
  vector for bytestrings (the bytestring API would be provided only for
  backwards compatibility, right?).

 Yes, I hope that one day the bytestring package and the ByteString
 type will be deprecated in favor of vector and
 Data.Vector.Storable.Vector Word8 respectively. vector-bytestring is
 indeed intended as a package which should make the transition easier.

  Does vector-bytestring plans to be the new implementation for
 bytestrings in
  the end or is it a side-package?

 I hope that once we get on par with bytestring's performance we can
 replace it (we're almost there!, Yell if you want to see some
 benchmark results).

  In an ideal world we would have a Lazy type family which for each type
  of vector would return its lazy version
 
  What about a type like:
 
   data Vector v a = Empty | Chunk {-# UNPACK #-} !(v a) (Vector v a)
  ??

 If you build with -Wall you'll see the following unfortunate warning:

 Warning: Ignoring unusable UNPACK pragma on the
 first argument of `Chunk'

 Johan Tibell recently discussed some of his ideas on how to solve this:


 http://www.haskell.org/pipermail/glasgow-haskell-users/2012-March/022079.html

 But for now we need to make a specialized type for every different
 vector type and use an associated type family to abstract over these
 different types.

 Regards,

 Bas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using HaXml

2012-03-26 Thread Yves Parès
Thanks!
Apparently some people also have trouble finding tutorials/help with HaXml [1].

 I'm curious to hear how HaXml compares to HXT / HXML / TagSoup.
I'm curious too. I've read that HXML is old and no longer maintained.
I believe that, besides the rewriting of the API to use arrows, HaXml adds
checking capabilities like the use of RelaxNG.

[1]
http://stackoverflow.com/questions/6082350/how-to-access-some-xml-data-with-haskell-using-haxml

On 25 March 2012 at 11:06, aditya bhargava bluemangrou...@gmail.com wrote:

 I don't know much about HaXml, but HXT is based on it and comes with a
 tutorial:

 http://www.haskell.org/haskellwiki/HXT

 I also show some basic functionality of HXT in this blog post:


 http://adit.io/posts/2012-03-10-building_a_concurrent_web_scraper_with_haskell.html

 I'm curious to hear how HaXml compares to HXT / HXML / TagSoup.


 Adit



 On Sun, Mar 25, 2012 at 1:34 AM, Yves Parès yves.pa...@gmail.com wrote:

 Hello café,

  From what I have read, HaXml seems to be the recommended library for
  parsing XML files (I'm trying to browse and alter spreadsheets (ODS) and
  possibly release a package when I'm done); are there tutorials somewhere on
  how to use it?
  I used the 'xml' package in the past, but HaXml is more substantial.

 Thanks.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 adit.io

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Using HaXml

2012-03-25 Thread Yves Parès
Hello café,

From what I have read, HaXml seems to be the recommended library for parsing
XML files (I'm trying to browse and alter spreadsheets (ODS) and possibly
release a package when I'm done); are there tutorials somewhere on how to
use it?
I used the 'xml' package in the past, but HaXml is more substantial.

Thanks.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2012-03-23 Thread Yves Parès
Plus yampa hasn't been maintained for more than 3 years, and it lacks
documentation, which makes it a bad choice for beginners.
I don't even know what the future of that project is...

On 23 March 2012 at 09:22, Heinrich Apfelmus apfel...@quantentunnel.de wrote:

 erik flister wrote:

 giving
 a real-time audio synthesizer in the style of functional reactive
 programming.


 you know about yampasynth right?


 Yes. In fact, their glue code was extremely helpful for understanding
 OpenAL. As for the FRP, I prefer a style without arrows, though, see my
 reactive-banana library.



 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Regarding Haskell FFI for C/C++ (Passing of String)

2012-03-22 Thread Yves Parès
This joins the question I asked two days ago here. (See
http://haskell.1045720.n5.nabble.com/Quickest-way-to-pass-Text-to-C-code-td5582223.html
)
Hope that helps.

On 22 March 2012 at 15:10, rajendra prasad rajendradpra...@gmail.com wrote:

 Hi,

  I have just started learning the Haskell FFI. I am trying to send a string
  from Haskell to a C function. For this, I am required to convert the
  Haskell string to a byte string. I have two methods to achieve this task.
  Both are listed below:

  1) import Foreign.C.String
  let arg1 = map castCharToCChar "Hello" :: [CChar]

  2) import qualified Data.ByteString.Char8 as B
  f = B.pack "Hello"

 I just wanted to know the optimal way to achieve this task. Please suggest
 the optimal way of doing this. If there is any other way, please share it.

 Also, please suggest me any good tutorial to start with Haskell FFI for
 C/C++.

 Thank you very much.


 Regards,
 Rajendra

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Regarding Haskell FFI for C/C++ (Passing of String)

2012-03-22 Thread Yves Parès
(Sorry for the double post)

Forget about ByteString.Char8: it doesn't handle Unicode, as it truncates
characters.

Going from String to ByteString is easy thanks to the utf8-string package (
http://hackage.haskell.org/packages/archive/utf8-string/0.3.7/doc/html/Codec-Binary-UTF8-String.html)
and the pack function from Data.ByteString(.Lazy).

And if you want to convert your String directly to a CString (a Ptr CChar),
you'd better use Foreign.C.String.withCString.
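
A small sketch of the two routes just mentioned (utf8-string's encode plus
ByteString's pack on one hand, withCString on the other):

import qualified Data.ByteString as B
import Codec.Binary.UTF8.String (encode)  -- from the utf8-string package
import Foreign.C.String (CString, withCString)

-- String -> strict ByteString containing the UTF-8 encoding:
toUtf8ByteString :: String -> B.ByteString
toUtf8ByteString = B.pack . encode

-- String -> temporary CString for the duration of a C call:
withGreeting :: (CString -> IO a) -> IO a
withGreeting = withCString "Hello"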

On 22 March 2012 at 15:17, Yves Parès yves.pa...@gmail.com wrote:

 This joins the question I asked two days ago here. (See
 http://haskell.1045720.n5.nabble.com/Quickest-way-to-pass-Text-to-C-code-td5582223.html
 )
 Hope that helps.

 On 22 March 2012 at 15:10, rajendra prasad rajendradpra...@gmail.com wrote:

 Hi,

  I have just started learning the Haskell FFI. I am trying to send a string
  from Haskell to a C function. For this, I am required to convert the
  Haskell string to a byte string. I have two methods to achieve this task.
  Both are listed below:

  1) import Foreign.C.String
  let arg1 = map castCharToCChar "Hello" :: [CChar]

  2) import qualified Data.ByteString.Char8 as B
  f = B.pack "Hello"

 I just wanted to know the optimal way to achieve this task. Please
 suggest the optimal way of doing this. If there is any other way, please
 share it.

 Also, please suggest me any good tutorial to start with Haskell FFI for
 C/C++.

 Thank you very much.


 Regards,
 Rajendra

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Quickest way to pass Text to C code

2012-03-21 Thread Yves Parès
Hello,

I have to interact with a C++ library that accepts as string types (putting
c++ strings aside) pointers of wchar_t (CWString in Haskell) or unsigned
32-bit int (Ptr Word32 for UTF-32 codepoints).

I have read what text, bytestring and base provide, but Text can only be
directly converted to (Ptr Word16), and if I use encodeUTF32 to get a
ByteString, then I only get useAsCString, no direct conversion to CWString
or Ptr WordXX is possible.
Not to mention the extra memory allocations due to intermediate conversions.

base provides Foreign.C.String.useAsCWString, but it requires that either I
use plain Strings in the first place or (same thing as before) I convert
from Text to String before passing to C.

Is there something I'm missing or isn't this kind of conversion that easy?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Quickest way to pass Text to C code

2012-03-21 Thread Yves Parès
 You can also manually allocate and poke data into raw memory using
Foreign.Marshal.Alloc and Foreign.Storable, if you're feeling particularly
masochistic ;)
That's kind of what I did in the past (aggregating Word8s into a single
Word32), before I discovered Text for fast string handling.

I know about storable Vectors (and already use them, but not for text), but
I would lose, Haskell-side, the functionalities of Text (I'm handling
textual data in the first place, not raw bytes).
Text already provides all the string handling/file reading functions.
Or do you have a convenient way to convert Text into Vector?

On 21 March 2012 at 12:35, James Cook mo...@deepbondi.net wrote:

 On Mar 21, 2012, at 4:35 AM, Yves Parès wrote:

  Hello,
 
  I have to interact with a C++ library that accepts as string types
 (putting c++ strings aside) pointers of wchar_t (CWString in Haskell) or
 unsigned 32-bit int (Ptr Word32 for UTF-32 codepoints).

 The vector package has storable vectors, which are essentially raw C
 arrays.  It provides the function:

    Data.Vector.Storable.unsafeWith :: Storable a => Vector a -> (Ptr a
  -> IO b) -> IO b

 This is probably the simplest way to do what you're describing.  You can
 also manually allocate and poke data into raw memory using
  Foreign.Marshal.Alloc and Foreign.Storable, if you're feeling particularly
 masochistic ;)

 -- James
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Quickest way to pass Text to C code

2012-03-21 Thread Yves Parès
Okay, eventually it boils down to this:

import Data.Text
import Data.Text.Encoding (encodeUtf32LE)
import Data.ByteString.Unsafe (unsafeUseAsCString)

textAsPtrW32 :: Text -> (Ptr Word32 -> IO a) -> IO a
textAsPtrW32 t = unsafeUseAsCString (encodeUtf32LE $ t `snoc` '\0') . (.
castPtr)

As the function passed in copies the data, or at least does not store the
pointer, I can use unsafeUseAsCString, but then I have to append the null
terminator manually.
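
For completeness, the same conversion written out as a full module and wired
to a hypothetical foreign import (the C symbol my_cpp_func is invented for
the sketch):

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Data.Text (Text, pack, snoc)
import Data.Text.Encoding (encodeUtf32LE)
import Data.ByteString.Unsafe (unsafeUseAsCString)
import Data.Word (Word32)
import Foreign.Ptr (Ptr, castPtr)

-- Null-terminate, encode to UTF-32LE, and hand the buffer to a C callback.
textAsPtrW32 :: Text -> (Ptr Word32 -> IO a) -> IO a
textAsPtrW32 t act =
  unsafeUseAsCString (encodeUtf32LE (t `snoc` '\0')) (act . castPtr)

-- Hypothetical C entry point expecting a null-terminated UTF-32 string.
foreign import ccall "my_cpp_func"
  c_myFunc :: Ptr Word32 -> IO ()

main :: IO ()
main = textAsPtrW32 (pack "héllo") c_myFunc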


On 21 March 2012 at 13:09, Antoine Latter aslat...@gmail.com wrote:

 On Wed, Mar 21, 2012 at 3:35 AM, Yves Parès yves.pa...@gmail.com wrote:
  Hello,
 
  I have to interact with a C++ library that accepts as string types
 (putting
  c++ strings aside) pointers of wchar_t (CWString in Haskell) or unsigned
  32-bit int (Ptr Word32 for UTF-32 codepoints).
 
  I have read what text, bytestring and base provide, but Text can only be
  directly converted to (Ptr Word16), and if I use encodeUTF32 to get a
  ByteString, then I only get useAsCString, no direct conversion to
 CWString
  or Ptr WordXX is possible.

 A CString is a (Ptr CChar). You can then use castPtr to get whichever
 pointer type you need, if you believe the underlying buffer has the
 representation you want (in this case, UTF-32).

 It still won't be null-terminated, however.

 Antoine

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there a better way to subtyping?

2012-03-14 Thread Yves Parès
I might have a simpler way: make your base type polymorphic and add
capabilities to it thanks to that type parameter:

data Base a = Base Foo Bar a

data Capa1 a = Capa1 Stuff Baz a  -- We leave the 'a' so that you can
continue to stack.
data Capa2 = Capa2 Thing Stuff  -- We want it to be final, so no additional
parameter

Then to make derived types, just use (Base (Capa1 a)) or (Base Capa2).
Anything that accepts a (Base a) will accept a (Base Something).

You can also make some aliases if you want to keep types short:
type Deriv1 a = Base (Capa1 a)
type Deriv2 = Base Capa2
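
A concrete toy version of this scheme (all the field types here are
placeholders invented for the sketch):

-- The common part carries a type parameter for whatever extension you stack on.
data Base a = Base { baseName :: String, baseId :: Int, extension :: a }

data Capa1 a = Capa1 { capa1Stuff :: Double, capa1Rest :: a }  -- stackable
data Capa2   = Capa2 { capa2Thing :: Bool }                    -- final

type Deriv1 a = Base (Capa1 a)
type Deriv2   = Base Capa2

-- Works on any Base, whatever the extension is:
describe :: Base a -> String
describe b = baseName b ++ " #" ++ show (baseId b)

d1 :: Deriv1 ()
d1 = Base "first" 1 (Capa1 3.14 ())

d2 :: Deriv2
d2 = Base "second" 2 (Capa2 True)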

On 14 March 2012 at 01:26, Ryan Ingram ryani.s...@gmail.com wrote:

 data Common = ...
 data A = ...
 data B = ...
 data C = ...
 data Super =
 SubA { commonFields :: Common, getA :: A }
 | SubB { commonFields :: Common, getB :: B }
 | SubC { commonFields :: Common, getC :: C }

  foldWithSubtype :: (A -> r) -> (B -> r) -> (C -> r) -> Super -> r
  foldWithSubtype k _ _ (SubA {getA = a}) = k a
  foldWithSubtype _ k _ (SubB {getB = b}) = k b
  foldWithSubtype _ _ k (SubC {getC = c}) = k c

  foldSuper :: (A -> Common -> r) -> (B -> Common -> r) -> (C -> Common ->
  r) -> Super -> r
  foldSuper ka kb kc sup = foldWithSubtype ka kb kc sup $ commonFields sup



 On Mon, Mar 12, 2012 at 8:32 AM, Jeff Shaw shawj...@msu.edu wrote:

 More specifically, if I have a record type from which I construct
 multiple sub-record types, and I want to store these in a collection which
 I want to map over while preserving the ability to get at the sub-fields,
 is there a better way to do it than to have an enumeration for the
 sub-types and then use Dynamic? I also have a nastier version that doesn't
 require the enumeration, which throws an exception when fromDynamic can't
 return a value with one of the expected types.

 {-# LANGUAGE Rank2Types, DeriveDataTypeable #-}
 module Super where

 import Data.Dynamic
 import Data.Typeable
 import Data.Maybe

 data Super a = Super { commonFields :: (), subFields :: a }
deriving Typeable

 data SubTypes = SubA | SubB | SubC

 data A = A { aFields :: () }
deriving Typeable

 data B = B { bFields :: () }
deriving Typeable

 data C = C { cFields :: () }
deriving Typeable

  doSomethingWithSubType :: (Super A -> ()) -> (Super B -> ()) -> (Super C
  -> ()) -> (SubTypes, Dynamic) -> Maybe ()
  doSomethingWithSubType a _ _ (SubA, dynamic) = fromDynamic dynamic >>=
  return . a
  doSomethingWithSubType _ b _ (SubB, dynamic) = fromDynamic dynamic >>=
  return . b
  doSomethingWithSubType _ _ c (SubC, dynamic) = fromDynamic dynamic >>=
  return . c

  doSomethingWithSubType2 :: (Super A -> ()) -> (Super B -> ()) -> (Super C
  -> ()) -> Dynamic -> ()
  doSomethingWithSubType2 a b c dynamic =
    let dynamicAsA = fromDynamic dynamic :: Maybe (Super A)
        dynamicAsB = fromDynamic dynamic :: Maybe (Super B)
        dynamicAsC = fromDynamic dynamic :: Maybe (Super C) in
    head $ catMaybes [ dynamicAsA >>= return . a
                     , dynamicAsB >>= return . b
                     , dynamicAsC >>= return . c]





 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Helper classes for Generics

2012-03-12 Thread Yves Parès
I'd have a question concerning GHC.Generics: how does it relate to SYB's
Data.Generics?
Is it intended to replace it or to complement it?
In other words: does the Data.Generics.Data class do things that the
GHC.Generics.Generic class can't do?


On 12 March 2012 at 04:27, Reiner Pope reiner.p...@gmail.com wrote:

 Hi all,

 I've been playing with GHC's new generics features (see
 http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-programming.html).
 All the documentation I've seen suggests creating a helper class -- for
 instance, the GSerialize class in the above link -- on which one defines
 generic instances.

 It seems to me that this isn't necessary. For example, here's the the
 example from the GHC docs, but without a helper class:

  -- set the phantom type of Rep to (), to avoid ambiguity
   from0 :: Generic a => a -> Rep a ()
   from0 = from
  
   data Bit = O | I
  
   class Serialize a where
     put :: a -> [Bit]
  
     default put :: (Generic a, Serialize (Rep a ())) => a -> [Bit]
     put = put . from0
  
   instance Serialize (U1 x) where
     put U1 = []
  
   instance (Serialize (a x), Serialize (b x)) => Serialize ((a :*: b) x)
  where
     put (x :*: y) = put x ++ put y
  
   instance (Serialize (a x), Serialize (b x)) => Serialize ((a :+: b) x)
  where
     put (L1 x) = O : put x
     put (R1 x) = I : put x
  
   instance (Serialize (a x)) => Serialize (M1 i c a x) where
     put (M1 x) = put x
  
   instance (Serialize a) => Serialize (K1 i a x) where
     put (K1 x) = put x

 Is there a reason to prefer using helper classes? Or perhaps we should
 update the wiki page (http://www.haskell.org/haskellwiki/Generics) to
 avoid using helper classes?

 Regards,
 Reiner

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Helper classes for Generics

2012-03-12 Thread Yves Parès
Thanks for the clarification.
But couldn't the Data class have been used for generic deriving of classes? I
imagine it would have been harder, but I fail to see whether it would have been
possible...
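
To illustrate the kind of task Pedro describes SYB as being good at (quoted
below), a tiny sketch using syb's everywhere/mkT:

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics (Data, Typeable, everywhere, mkT)

data Expr = Lit Int | Add Expr Expr | Neg Expr
  deriving (Show, Data, Typeable)

-- Apply one local rewrite everywhere in an AST, with a generic default
-- for every constructor we do not care about:
simplify :: Expr -> Expr
simplify = everywhere (mkT step)
  where
    step (Neg (Neg e)) = e   -- the only interesting case
    step e             = e   -- leave everything else untouched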

On 12 March 2012 at 16:58, José Pedro Magalhães j...@cs.uu.nl wrote:

 Hi Yves,

 GHC.Generics [1] and SYB [2] are two rather different approaches to
 generic programming. There are things that can be done in one but not in
 the other, and there are things that are easier on one rather than the
 other. For instance, SYB tends to be very useful for large AST
 transformations, with functions that have a general behaviour but a couple
 of particular cases for a few constructors. GHC.Generics, on the other
 hand, can encode functions such as generic fmap and traverse. It lends
 itself better to optimisation since it doesn't use runtime casts, and as
 such tends to be faster than SYB. It isn't planned to replace SYB.


 Cheers,
 Pedro

 [1] http://www.haskell.org/haskellwiki/Generics
 [2] http://www.cs.uu.nl/wiki/bin/view/GenericProgramming/SYB


 On Mon, Mar 12, 2012 at 16:35, Yves Parès yves.pa...@gmail.com wrote:

 I'd have a question concerning GHC.Generics: how does it relate to SYB's
 Data.Generics?
 Is it intended to replace it or complete it?
 In other words: does class Data.Generics.Data class do things that class
 GHC.Generics.Generic can't do?


 On 12 March 2012 at 04:27, Reiner Pope reiner.p...@gmail.com wrote:

  Hi all,

 I've been playing with GHC's new generics features (see
 http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-programming.html).
 All the documentation I've seen suggests creating a helper class -- for
 instance, the GSerialize class in the above link -- on which one defines
 generic instances.

 It seems to me that this isn't necessary. For example, here's the the
 example from the GHC docs, but without a helper class:

  -- set the phantom type of Rep to (), to avoid ambiguity
  from0 :: Generic a = a - Rep a ()
  from0 = from
 
  data Bit = O | I
 
  class Serialize a where
put :: a - [Bit]
 
default put :: (Generic a, Serialize (Rep a ())) = a - [Bit]
put = put . from0
 
  instance Serialize (U1 x) where
put U1 = []
 
  instance (Serialize (a x), Serialize (b x)) = Serialize ((a :*: b) x)
 where
put (x :*: y) = put x ++ put y
 
  instance (Serialize (a x), Serialize (b x)) = Serialize ((a :+: b) x)
 where
put (L1 x) = O : put x
put (R1 x) = I : put x
 
  instance (Serialize (a x)) = Serialize (M1 i c a x) where
put (M1 x) = put x
 
  instance (Serialize a) => Serialize (K1 i a x) where
put (K1 x) = put x

 Is there a reason to prefer using helper classes? Or perhaps we should
 update the wiki page (http://www.haskell.org/haskellwiki/Generics) to
 avoid using helper classes?

 Regards,
 Reiner

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Understanding GC time

2012-03-10 Thread Yves Parès
I'm afraid without uniqueness or linear typing it would be hard to avoid.

2012/3/10 Thiago Negri evoh...@gmail.com

 I see. Thanks for the answers.

 Any data structure or source annotation that would prevent that?

 For example, if I try the same program to run on a
 [1..] list, I'll get an out of memory error for the
 single-threaded version. Any way to prevent it without declaring two
 different versions of list?


 2012/3/10 Anthony Cowley acow...@gmail.com:
  From that profiling data, I think you're just seeing a decrease in
 sharing. With one thread, you create the list structure in memory: the
 first fold could consume it in-place, but the second fold is still waiting
 for its turn.  The list is built on the heap so the two folds can both
 refer to the same list.
 
  With two threads, GHC is being clever and inlining the definition you
 give for list, which is then optimized into two parallel loops. No list on
 the heap means there's not much for the GC to do.
 
  Sharing of index lists like this is a common source of problems. In
 particular, nested loops can make it even trickier to prevent sharing as
 there may not be an opportunity for parallel evaluation.
 
  Anthony
 
  On Mar 10, 2012, at 10:21 AM, Thiago Negri evoh...@gmail.com wrote:
 
  Hi all.
 
  I wrote a very simple program to try out parallel Haskell and check what
  it would look like to make use of more than one core in this language.
 
  When I tried the program with RTS option -N1, total time shows it took
  2.48 seconds to complete and around 65% of that time was taken by GC.
 
  Then I tried the same program with RTS options -N2 and total time
  decreased to 1.15 seconds as I expected a gain here. But what I didn't
  expect is the GC time to drop to 0%.
 
  I guess I'm having trouble to understand the output of the RTS option
 -s.
  Can you enlighten me?
 
 
  The source for the testing program:
 
  module Main where
 
  import Data.List (foldl1')
  import Control.Parallel (par, pseq)
  import Control.Arrow ((&&&))
 
  f `parApp` (a, b) = a `par` (b `pseq` (f a b))
  seqApp = uncurry
 
  main = print result
   where result = (+) `parApp` minMax list
  minMax = minlist &&& maxlist
 minlist = foldl1' min
 maxlist = foldl1' max
 list = [1..1999]
 
 
  The results on a Windows 7 64bits with an Intel Core 2 Duo, compiled
  with GHC from Haskell Platform:
 
  c:\tmp\hspar +RTS -s -N1
  par +RTS -s -N1
  2000
  803,186,152 bytes allocated in the heap
  859,916,960 bytes copied during GC
  233,465,740 bytes maximum residency (10 sample(s))
   30,065,860 bytes maximum slop
  483 MB total memory in use (0 MB lost due to fragmentation)
 
   Generation 0:  1523 collections, 0 parallel,  0.80s,  0.75s elapsed
   Generation 1:10 collections, 0 parallel,  0.83s,  0.99s elapsed
 
   Parallel GC work balance: nan (0 / 0, ideal 1)
 
 MUT time (elapsed)   GC time  (elapsed)
   Task  0 (worker) :0.00s(  0.90s)   0.00s(  0.06s)
   Task  1 (worker) :0.00s(  0.90s)   0.00s(  0.00s)
   Task  2 (bound)  :0.86s(  0.90s)   1.62s(  1.69s)
 
   SPARKS: 1 (0 converted, 0 pruned)
 
   INIT  time0.00s  (  0.00s elapsed)
   MUT   time0.86s  (  0.90s elapsed)
   GCtime1.62s  (  1.74s elapsed)
   EXIT  time0.00s  (  0.00s elapsed)
   Total time2.48s  (  2.65s elapsed)
 
   %GC time  65.4%  (65.9% elapsed)
 
   Alloc rate936,110,032 bytes per MUT second
 
   Productivity  34.6% of total user, 32.4% of total elapsed
 
  gc_alloc_block_sync: 0
  whitehole_spin: 0
  gen[0].sync_large_objects: 0
  gen[1].sync_large_objects: 0
 
 
  c:\tmp\hspar +RTS -s -N2
  par +RTS -s -N2
  2000
1,606,279,644 bytes allocated in the heap
   74,924 bytes copied during GC
   28,340 bytes maximum residency (1 sample(s))
   29,004 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
 
   Generation 0:  1566 collections,  1565 parallel,  0.00s,  0.01s elapsed
   Generation 1: 1 collections, 1 parallel,  0.00s,  0.00s elapsed
 
   Parallel GC work balance: 1.78 (15495 / 8703, ideal 2)
 
 MUT time (elapsed)   GC time  (elapsed)
   Task  0 (worker) :0.00s(  0.59s)   0.00s(  0.00s)
   Task  1 (worker) :0.58s(  0.59s)   0.00s(  0.01s)
   Task  2 (bound)  :0.58s(  0.59s)   0.00s(  0.00s)
   Task  3 (worker) :0.00s(  0.59s)   0.00s(  0.00s)
 
   SPARKS: 1 (1 converted, 0 pruned)
 
   INIT  time0.00s  (  0.00s elapsed)
   MUT   time1.15s  (  0.59s elapsed)
   GCtime0.00s  (  0.01s elapsed)
   EXIT  time0.00s  (  0.00s elapsed)
   Total time1.15s  (  0.61s elapsed)
 
   %GC time   0.0%  (2.4% elapsed)
 
   Alloc rate1,391,432,695 bytes per MUT second
 
   Productivity 100.0% of total user, 190.3% of 

Re: [Haskell-cafe] Understanding GC time

2012-03-10 Thread Yves Parès
(Sorry for the double post)
A good practice would be to always factorize your list traversals [*]
(using a single fold to compute max & min at the same time), and to be sure
never to keep a plain reference to the list, but to embed it into a
function that returns it (so that it is rebuilt each time you need to
access it instead of being retained in memory).
But I can't think of a way to check that property (list never accessed
twice) statically in Haskell.

[*] But wouldn't GHC, in that specific case, do something about that with
-O2 activated?
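
For the record, a minimal sketch of such a factorized, strict single pass
(minMax here is just an illustration, not code from the thread):

{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- one strict left fold computing both bounds in a single traversal,
-- so the list can be consumed as it is produced and never retained
minMax :: Ord a => [a] -> (a, a)
minMax []     = error "minMax: empty list"
minMax (x:xs) = foldl' step (x, x) xs
  where
    step (!lo, !hi) y = (min lo y, max hi y)

main :: IO ()
main = print (uncurry (+) (minMax [1 .. 19999999 :: Int]))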

2012/3/10 Yves Parès yves.pa...@gmail.com

 I'm afraid without uniqueness or linear typing it would be hard to avoid.


 2012/3/10 Thiago Negri evoh...@gmail.com

 I see. Thanks for the answers.

 Any data structure or source annotation that would prevent that?

 For example, if I try the same program to run on a
 [1..] list, I'll get an out of memory error for the
 single-threaded version. Any way to prevent it without declaring two
 different versions of list?


 2012/3/10 Anthony Cowley acow...@gmail.com:
  From that profiling data, I think you're just seeing a decrease in
 sharing. With one thread, you create the list structure in memory: the
 first fold could consume it in-place, but the second fold is still waiting
 for its turn.  The list is built on the heap so the two folds can both
 refer to the same list.
 
  With two threads, GHC is being clever and inlining the definition you
 give for list, which is then optimized into two parallel loops. No list on
 the heap means there's not much for the GC to do.
 
  Sharing of index lists like this is a common source of problems. In
 particular, nested loops can make it even trickier to prevent sharing as
 there may not be an opportunity for parallel evaluation.
 
  Anthony
 
  On Mar 10, 2012, at 10:21 AM, Thiago Negri evoh...@gmail.com wrote:
 
  Hi all.
 
  I wrote a very simple program to try out parallel Haskell and check what
  it would look like to make use of more than one core in this language.
 
  When I tried the program with RTS option -N1, total time shows it took
  2.48 seconds to complete and around 65% of that time was taken by GC.
 
  Then I tried the same program with RTS options -N2 and total time
  decreased to 1.15 seconds as I expected a gain here. But what I didn't
  expect is the GC time to drop to 0%.
 
  I guess I'm having trouble to understand the output of the RTS option
 -s.
  Can you enlighten me?
 
 
  The source for the testing program:
 
  module Main where
 
  import Data.List (foldl1')
  import Control.Parallel (par, pseq)
  import Control.Arrow ((&&&))
 
  f `parApp` (a, b) = a `par` (b `pseq` (f a b))
  seqApp = uncurry
 
  main = print result
   where result = (+) `parApp` minMax list
  minMax = minlist &&& maxlist
 minlist = foldl1' min
 maxlist = foldl1' max
 list = [1..1999]
 
 
  The results on a Windows 7 64bits with an Intel Core 2 Duo, compiled
  with GHC from Haskell Platform:
 
  c:\tmp\hspar +RTS -s -N1
  par +RTS -s -N1
  2000
  803,186,152 bytes allocated in the heap
  859,916,960 bytes copied during GC
  233,465,740 bytes maximum residency (10 sample(s))
   30,065,860 bytes maximum slop
  483 MB total memory in use (0 MB lost due to fragmentation)
 
   Generation 0:  1523 collections, 0 parallel,  0.80s,  0.75s
 elapsed
   Generation 1:10 collections, 0 parallel,  0.83s,  0.99s
 elapsed
 
   Parallel GC work balance: nan (0 / 0, ideal 1)
 
 MUT time (elapsed)   GC time  (elapsed)
   Task  0 (worker) :0.00s(  0.90s)   0.00s(  0.06s)
   Task  1 (worker) :0.00s(  0.90s)   0.00s(  0.00s)
   Task  2 (bound)  :0.86s(  0.90s)   1.62s(  1.69s)
 
   SPARKS: 1 (0 converted, 0 pruned)
 
   INIT  time0.00s  (  0.00s elapsed)
   MUT   time0.86s  (  0.90s elapsed)
   GCtime1.62s  (  1.74s elapsed)
   EXIT  time0.00s  (  0.00s elapsed)
   Total time2.48s  (  2.65s elapsed)
 
   %GC time  65.4%  (65.9% elapsed)
 
   Alloc rate936,110,032 bytes per MUT second
 
   Productivity  34.6% of total user, 32.4% of total elapsed
 
  gc_alloc_block_sync: 0
  whitehole_spin: 0
  gen[0].sync_large_objects: 0
  gen[1].sync_large_objects: 0
 
 
  c:\tmp\hspar +RTS -s -N2
  par +RTS -s -N2
  2000
1,606,279,644 bytes allocated in the heap
   74,924 bytes copied during GC
   28,340 bytes maximum residency (1 sample(s))
   29,004 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
 
   Generation 0:  1566 collections,  1565 parallel,  0.00s,  0.01s
 elapsed
   Generation 1: 1 collections, 1 parallel,  0.00s,  0.00s
 elapsed
 
   Parallel GC work balance: 1.78 (15495 / 8703, ideal 2)
 
 MUT time (elapsed)   GC time  (elapsed)
   Task  0 (worker) :0.00s(  0.59s)   0.00s(  0.00s)
   Task  1 (worker) :0.58s

Re: [Haskell-cafe] Type classes for converting to Text and String

2012-03-08 Thread Yves Parès
If you just need to go back and forth from String to Text, why do you need
to be generic? pack and unpack from Data.Text do the job.

Plus, along the lines of what Christopher said, you can use the
OverloadedStrings extension. You can then use string literal syntax in a place
that expects a Text:

{-# LANGUAGE OverloadedStrings #-}
import Data.Text

t :: Text
t = "Hello"

Any instance of the IsString class can be used in this way, not only Text.
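
For completeness, a small self-contained sketch of both styles (the names used
here are made up):

{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text as T

greet :: Text -> Text
greet name = T.concat ["Hello, ", name, "!"]   -- literals used directly as Text

main :: IO ()
main = do
  putStrLn (T.unpack (greet "world"))   -- unpack back to String for printing
  print (T.pack "abc" == "abc")         -- pack goes the other way; prints True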

2012/3/8 Simon Hengel s...@typeful.net

 Hi!

 When writing library code that should work with both String and Text I
 find my self repeatedly introducing classes like:

 class ToString a where
   toString :: a -> String

 class ToText a where
   toText :: a -> Text

 (I use this with newtype wrapped value types backed by Text or
 ByteString.)

 So I wonder whether it would be a good idea to have a package that
 provides those classes.

 Or maybe just ToText, and provide default implementations of toString
 and toText, like:

class ToText a where

   toText :: a -> Text
   toText = Text.pack . toString

   toString :: a -> String
  toString = Text.unpack . toText
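
As an illustration, a minimal sketch of how such a class with mutually
recursive defaults could be instantiated for a newtype backed by Text; the
UserName type is hypothetical and the class is simply restated from the message
above. Only one method needs a definition:

import Data.Text (Text)
import qualified Data.Text as Text

class ToText a where
  toText :: a -> Text
  toText = Text.pack . toString

  toString :: a -> String
  toString = Text.unpack . toText

newtype UserName = UserName Text

instance ToText UserName where
  toText (UserName t) = t          -- toString falls back to the default

main :: IO ()
main = putStrLn (toString (UserName (Text.pack "alice")))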

 How do you guys deal with that?  Any thoughts?

 Cheers,
 Simon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHCi fails to load C++ object files (missing symbol)

2012-03-08 Thread Yves Parès
Thanks Mark!
It also works here, even without the -fpic flag. Was it necessary for
you?

2012/3/8 Mark Wright markwri...@internode.on.net

 Hi Yves,

 It works (on Gentoo) when I compile it as a shared library.

 % g++ -o libstuff.so -fpic -shared Stuff.cpp
 % ghci Main.hs -L$PWD -lstuff
 GHCi, version 7.4.1: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Loading object (dynamic) /h/argus/2/eng/haskell/cxx_stuff/libstuff.so ...
 done
 final link ... done
 Loading package filepath-1.3.0.0 ... linking ... done.
 Loading package bytestring-0.9.2.1 ... linking ... done.
 Loading package unix-2.5.1.0 ... linking ... done.
 Loading package old-locale-1.0.0.4 ... linking ... done.
 Loading package old-time-1.1.0.0 ... linking ... done.
 Loading package directory-1.1.0.2 ... linking ... done.
 Loading package process-1.1.0.1 ... linking ... done.
 Loading package goa-3.1 ... linking ... done.
 [1 of 1] Compiling Main ( Main.hs, interpreted )
 Ok, modules loaded: Main.
 *Main GOA

 Hope this helps.

 Regards, Mark

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GHCi fails to load C++ object files (missing symbol)

2012-03-07 Thread Yves Parès
Hi, I'm trying to have GHCi load a haskell file that depends on a C++
object file, which causes GHCi to fail because of an unknown symbol (and I
*did* link with libstdc++), whereas the link into an executable with ghc
works and runs perfectly.

I've reduced my code to the smallest one that produces the problem:
The C++ part defines a mere class and C externals that enable communication
with Haskell:

// Header Stuff.hpp
class Base {
public:
Base();
~Base();
};

extern "C" {
Base* mkthing();
void delthing(Base*);
}

---

// Source Stuff.cpp
#include <iostream>
#include "Stuff.hpp"
using namespace std;

Base::Base()
{
cout << "Base created" << endl;
}

Base::~Base()
{
cout << "Base deleted" << endl;
}

extern "C" {
Base* mkthing()
{
return new Base();
}

void delthing(Base* b)
{
delete b;
}
}

Haskell code (just for reference, but I'm not sure it's relevant), Main.hs:

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign
import Foreign.C

foreign import ccall unsafe "mkthing"
  mkthing :: IO (Ptr ())

foreign import ccall unsafe "delthing"
  delthing :: Ptr () -> IO ()

main = do
  p <- mkthing
  delthing p


I then compile it with:
g++ -c Stuff.cpp
ghci Main.hs Stuff.o -lstdc++

And then the error is:
Loading object (static) Stuff.o ... done
Loading object (dynamic)
/usr/lib/i386-linux-gnu/gcc/i686-linux-gnu/4.5.2/libstdc++.so ... done
final link ... ghc: Stuff.o: unknown symbol `__dso_handle'
linking extra libraries/objects failed

Whereas 'ghc Main.hs Stuff.o -lstdc++' works just fine.
Does GHCi lack a link directive that regular GHC has?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHCi fails to load C++ object files (missing symbol)

2012-03-07 Thread Yves Parès
Sorry for the double post, but I forgot to mention I'm using GHC 7.4 and
gcc/libstdc++ 4.5.2.
(Running Ubuntu 11.04 Natty Narwhal).

2012/3/7 Yves Parès yves.pa...@gmail.com

 Hi, I'm trying to have GHCi load a haskell file that depends on a C++
 object file, which causes GHCi to fail because of an unknown symbol (and
 I *did* link with libstdc++), whereas the link into an executable with
 ghc works and runs perfectly.

 I've reduced my code to the smallest one that produces the problem:
 The C++ part defines a mere class and C externals that enable
 communication with Haskell:

 // Header Stuff.hpp
 class Base {
 public:
 Base();
 ~Base();
 };

 extern "C" {
 Base* mkthing();
 void delthing(Base*);
 }

 ---

 // Source Stuff.cpp
 #include <iostream>
 #include "Stuff.hpp"
 using namespace std;

 Base::Base()
 {
 cout << "Base created" << endl;
 }

 Base::~Base()
 {
 cout << "Base deleted" << endl;
 }

 extern "C" {
 Base* mkthing()
 {
 return new Base();
 }

 void delthing(Base* b)
 {
 delete b;
 }
 }

 Haskell code (just for reference, but I'm not sure it's relevant), Main.hs:

 {-# LANGUAGE ForeignFunctionInterface #-}
 import Foreign
 import Foreign.C

 foreign import ccall unsafe "mkthing"
   mkthing :: IO (Ptr ())

 foreign import ccall unsafe "delthing"
   delthing :: Ptr () -> IO ()

 main = do
   p <- mkthing
   delthing p


 I then compile it with:
 g++ -c Stuff.cpp
 ghci Main.hs Stuff.o -lstdc++

 And then the error is:
 Loading object (static) Stuff.o ... done
 Loading object (dynamic)
 /usr/lib/i386-linux-gnu/gcc/i686-linux-gnu/4.5.2/libstdc++.so ... done
 final link ... ghc: Stuff.o: unknown symbol `__dso_handle'
 linking extra libraries/objects failed

 Whereas 'ghc Main.hs Stuff.o -lstdc++' works just fine.
 Does GHCi lack a link directive that regular GHC has?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell showcase in 5 minutes

2012-02-29 Thread Yves Parès
Yes, but in 5 minutes I take it they won't have time to ask questions
before your presentation is over.

I haven't thought about parallel computing but it's one of the many assets
of the language.
The problem IMHO with STM is that it relies on too many Haskell elements to
be grasped in 5 minutes. Monads, Monoids, you name it. You can still
present it, of course, but bear in mind people will only get a vague idea of it.
There are so many cool things to show... ;). (But that's true for any
expressive language)

Oh, BTW I'm coming to the Battle Language tonight.


2012/2/28 Arnaud Bailly arnaud.oq...@gmail.com

 Thanks for your support. I would really like to do this but 1) the talk is
 tomorrow evening and 2) I do not have time in the interval to learn yesod
 and/or gloss enough to be confident that I will not botch anything in a 5
 minutes time frame.

 I did recently a 2-hours long talk with same purpose (introducing Haskell
 to an audience of mixed-level Scala programmers), using some code to
 produce sound and music, up to a web server for generating wav files from
 scores, and I had to make giant steps in the last 15 minutes to get to
 the web stuff. There was a lot of questions right from the start on various
 strange aspects of the language : type inference, laziness, generalized
 tail recursion, monadic I/O, point-free definitions and I barely managed to
 keep some time to show how easy it is to write a web server with simple
 HTML combinators (I discovered miku in the process).

 I timed myself on the menu problem and I am a little bit under 5 minutes,
 given I want to explain quite a few things in the process: what you can do
 with lists, what you can do with pairs, how to simply generate all the
 combinations of elements of a list, how to map a function on list, how to
 use list-comprehensions to integrate everything into a concise form and how
 to avoid combinatorial blow-up through laziness.

 I also would love to have the time to show some cool concurrency stuff
 following your suggestion. I will try to pack this tomorrow.

 Thanks a lot again for your advices,

 Arnaud

 2012/2/28 Ertugrul Söylemez e...@ertes.de

  Arnaud Bailly arnaud.oq...@gmail.com wrote:

  Thanks Yves for your advice. And I agree with you that too much
  laziness may be mind-blowing for most of the audience, yet this is one
  of the characteristics of Haskell, whether or not we like it and
  whatever troubles it can induce.
 
  I really think the knapsack is simple, not too far away from real
  world and might be demonstrated with live code in 5 minutes. I will
  have a look anyway at more spectacular stuff like gloss or yesod but
  I fear this is out of scope.

 Gloss is definitely not out of scope.  It is to simple 2D graphics what
 Yesod is to web applications.  I write two-minutes visualizations using
 it all the time.  Of course if you want to show something great, you
 shouldn't fear learning it first.

 Also showing the language features, despite their greatness, makes
 people go like:  Ok, that's great, but I can do it in my language using
 insert control construct here.  If you really don't want to go for
 something amazing like Diagrams, Gloss or Yesod, I really suggest at
 least bringing the run-time system into the game.  Show concurrency, STM
 and parallel evaluation.  Show how you can write a full-featured finger
 server in five minutes that is fast, secure and amazingly readable.
 Something like that.

 Math problems amaze Haskellers, not programmers in general.  Show how
 Haskell solves practical problems, for which there is no simple solution
 in more common languages.  Don't show why Haskell is also good.  Show
 why Haskell is /a lot better/.


 Greets,
 Ertugrul

 --
 nightmare = unsafePerformIO (getWrongWife = sex)
 http://ertes.de/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell showcase in 5 minutes

2012-02-28 Thread Yves Parès
Where exactly does that event take place?
Is it open to public?

And I strongly disadvise fibonacci, quicksort and other mind-blowing
reality-escapist stuff. Show something real world and practical.

2012/2/27 Arnaud Bailly arnaud.oq...@gmail.com

 Hello Cafe,

 I will be (re)presenting Haskell in a Batlle Language event Wednesday
 evening: A fun and interactive contest where various programming language
 champions try to attract as much followers as possible in 5 minutes.

 Having successfully experimented the power of live coding in a recent
 Haskell introduction for the Paris Scala User Group, I would like to do the
 same but given the time frame I need a simpler example than the music
 synthesizer program.

 So I would like to tap in the collective wisdom looking for some concise,
 eye-opening, mind-shaking and if possible fun example of what one can
 achieve in Haskell. Things that sprung to my mind are rather dull: prime
 factors, fibonacci numbers.

 Thanks in advance,
 Arnaud

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell showcase in 5 minutes

2012-02-28 Thread Yves Parès
Nevermind, I think I found:
http://jduchess.org/duchess-france/blog/battle-language-a-la-marmite/

You could try the JSON parser exercise. (
https://github.com/revence27/JSON-hs) Or anything else with Parsec, it's a
pretty good power-showing library.
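
For instance, a tiny hedged Parsec sketch (not the JSON exercise itself),
parsing a bracketed list of integers:

import Text.Parsec
import Text.Parsec.String (Parser)

intList :: Parser [Int]
intList = between (char '[') (char ']')
            (fmap (map read) (many1 digit `sepBy` (char ',' >> spaces)))

main :: IO ()
main = print (parse intList "" "[1,2, 3]")   -- Right [1,2,3]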

2012/2/28 Yves Parès yves.pa...@gmail.com

 Where exactly does that event take place?
 Is it open to public?

 And I strongly disadvise fibonacci, quicksort and other mind-blowing
 reality-escapist stuff. Show something real world and practical.

 2012/2/27 Arnaud Bailly arnaud.oq...@gmail.com

  Hello Cafe,

 I will be (re)presenting Haskell in a Batlle Language event Wednesday
 evening: A fun and interactive contest where various programming language
 champions try to attract as much followers as possible in 5 minutes.

 Having successfully experimented the power of live coding in a recent
 Haskell introduction for the Paris Scala User Group, I would like to do the
 same but given the time frame I need a simpler example than the music
 synthesizer program.

 So I would like to tap in the collective wisdom looking for some concise,
 eye-opening, mind-shaking and if possible fun example of what one can
 achieve in Haskell. Things that sprung to my mind are rather dull: prime
 factors, fibonacci numbers.

 Thanks in advance,
 Arnaud

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Vim plugin for ghc-mod

2012-02-28 Thread Yves Parès
Hi,
I downloaded those vim extensions, and I just wonder how I ever managed
before syntastic ;)

Is there a vim plugin useful at runtime (putting breakpoints, seeing if an
expression has been evaluated or if it's still a thunk)?

I believe it exists for emacs.

2012/2/16 Nicolas Wu nicolas...@gmail.com

 On 16 February 2012 08:51, Kazu Yamamoto k...@iij.ad.jp wrote:
  eagletmt implemented a Vim plugin for ghc-mod:
 
 https://github.com/eagletmt/ghcmod-vim
 
  Happy Haskell programming on Vim!

 Note that there's also support for ghc-mod using [syntastic][1] for
 vim, which is well supported for Haskell and other languages too.

 Nick

 [1]: https://github.com/scrooloose/syntastic

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] FFI: Overhead of foreign unsafe imports

2012-02-26 Thread Yves Parès
Hello,
When I was using C code from Python, the overhead Python puts on calling C
code was significant.
To simplify, say I have two procedures f and g on the C side and I call them
in a row from Python. I'm better off factorizing: adding on the C side a
wrapper procedure h that calls them both, and then calling h from Python,
because then I will have half as much overhead:

Instead of SwitchToC -> call f -> SwitchToPython -> SwitchToC -> call g ->
SwitchToPython,
the factorization leads to SwitchToC -> call f -> call g -> SwitchToPython,
which gives the same result yet is different performance-wise because each
switching has a cost.

This is painful, because if another time I have to call f and j (another
function), then I have to make another wrapper.

In Haskell world, now, given that my functions f and g would have been
imported using *unsafe*:

foreign import ccall unsafe "f" f :: Thing -> Foo -> IO ()
foreign import ccall unsafe "g" g :: Stuff -> Bar -> IO ()
foreign import ccall unsafe "h" h :: Thing -> Foo -> Stuff -> Bar -> IO ()

Are
doStuff = f x y >> g z w
and
doStuff = h x y z w
equivalent, or is there an overhead (e.g. due to IO monad, or due to the
way the FFI does the calls) when compiled (if need be with optimizations)
with GHC?
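
One way to answer this empirically is to measure it; below is a hedged,
self-contained sketch using criterion, with C's sin standing in for f, g and h
(none of these names come from the question above):

{-# LANGUAGE ForeignFunctionInterface #-}
import Criterion.Main
import Foreign.C.Types (CDouble)

foreign import ccall unsafe "math.h sin"
  c_sin :: CDouble -> CDouble

main :: IO ()
main = defaultMain
  [ bench "one unsafe call"  $ whnf c_sin 1
  , bench "two unsafe calls" $ whnf (c_sin . c_sin) 1
  ]
-- the difference between the two benchmarks gives the marginal cost of one
-- extra unsafe call (FFI switching overhead plus the sin computation itself)
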
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Clarifying a mis-understanding about regions (and iteratees)

2012-02-23 Thread Yves Parès
Hi, so there are different regions libraries?
Is there a shootout comparing them, possibly also with ResourceT from
conduit (which has also been implemented in a stand-alone package
http://hackage.haskell.org/package/resource-simple-0.1 by Robin Banks), since
I take it it tries to address the same problem?

2012/2/23 o...@okmij.org


 I have just come across the reddit discussion about regions and
 iteratees.

 http://www.reddit.com/r/haskell/comments/orh4f/combining_regions_and_iteratees/

 There is an unfortunate misunderstanding about regions which I'd like to
 correct. I'm posting in Cafe in hope that some of the participants of
 that discussion read Haskell-Cafe (I'm not a redditor).

 The user Faucelme wrote:
  Would it be possible to implement a region-based iteratee which opened
 some
  file A and wrote to it, then opened some file B and wrote to it,
 while
  ensuring that file A is closed before file B is opened?

 To which the user tibbe replied
  You can typically explicitly close the files as well.

 and the user dobryak commented

  Regions that Oleg refers to started out with region-based memory
 allocation,
  which is effectively a stack allocation strategy, in which resource
  deallocation is in LIFO order. So I think your assumption is correct.

 Regretfully, these comments are incorrect. First of all, memory
 regions were invented to get around the stack allocation, LIFO
 strategy. If the stack allocation sufficed, why do we need heap?  We
 have heap specifically because the memory allocation patterns are
 complex: a function may allocate memory that outlives it.  Regions
 let the programmer create arbitrarily many nested regions; everything
 in a parent region is available to a child. Crucially, a child may
 request any of its ancestors to allocate memory in their
 regions. Therefore, although regions are nested, memory does not have
 to be allocated and freed in LIFO order.

 The Lightweight monadic regions implement all these patterns for files
 or other resources (plus the dynamic bequest).
http://okmij.org/ftp/Computation/resource-aware-prog/region-io.pdf

 The running example of the Haskell Symposium 08 paper was the
 following (see sec 1.1)

 1. open two files for reading, one of them a configuration file;
 2. read the name of an output file (such as the log file) from the
   configuration file;
 3. open the output file and zip the contents of both input files into
   the output file;
 4. close the configuration file;
 5. copy the rest, if any, of the other input file to the output file.

 As you can see, the pattern of opening and closing is non-LIFO: the
 output file has to be opened after the configuration file and is
 closed also after the configuration file. Therefore, the user Faucelme
 can find the solution to his question in the code accompanying the
 Monadic Regions paper.

 Section 5 of the paper describes even more complex example:

 1. open a configuration file;
 2. read the names of two log files from the configuration file;
 3. open the two log files and read a dated entry from each;
 4. close the configuration file and the newer log file;
 5. continue processing the older log file;
 6. close the older log file.

 where the pattern of opening and closing is not statically known:
 it is decided on values read from the files.

 So, Faucelme's question can be answered in affirmative using the
 existing RegionIO library (which, as has been shown, well integrates
 with Iteratees). There is already a region library with the desired
 functionality.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Stable pointers: use of cast to/from Ptr

2012-02-12 Thread Yves Parès
Hello,

According to the documentation (
http://hackage.haskell.org/packages/archive/base/4.5.0.0/doc/html/Foreign-StablePtr.html),
StablePtrs aim at being opaque on the C side.
But they provide functions to be cast to/from regular void*'s.
Does that mean if for instance you have a StablePtr CInt you can cast it to Ptr
() and alter it on C-side?

void alter(void* data)
{
int* x = (int*)data;
*x = 42;
}

--

-- using 'unsafe' doesn't change anything.
foreign import ccall safe "alter"
    alter :: Ptr () -> IO ()

main = do
    sptr <- newStablePtr (0 :: CInt)
    deRefStablePtr sptr >>= print
    alter (castStablePtrToPtr sptr)  -- SEGFAULTS!
    deRefStablePtr sptr >>= print
    freeStablePtr sptr


But I tried it, and it doesn't work: I got a segfault when 'alter' is
called.

Is it normal? Does this mean I can only use my pointer as opaque? (Which I
know to be working, as I already got a C function call back into Haskell
and pass it the StablePtr via a 'foreign export')
But in that case, what is the use of castStablePtrToPtr/castPtrToStablePtr,
as you can already pass StablePtrs to and from C code?

Thanks!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stable pointers: use of cast to/from Ptr

2012-02-12 Thread Yves Parès
Yes, but from the C side, a StablePtr is *already* a void* (see HsFFI.h,
which typedefs HsStablePtr to void*).
So it's sufficient for use as an opaque pointer, no need to cast it.

So what is the use of casting it to a Ptr () if this doesn't let you
access the memory space it addresses?


2012/2/12 Antoine Latter aslat...@gmail.com

 On Sun, Feb 12, 2012 at 8:18 AM, Yves Parès yves.pa...@gmail.com wrote:
  Hello,
 
  According to the documentation
  (
 http://hackage.haskell.org/packages/archive/base/4.5.0.0/doc/html/Foreign-StablePtr.html
 ),
  StablePtrs aims at being opaque on C-side.
  But they provide functions to be casted to/from regular void*'s.
  Does that mean if for instance you have a StablePtr CInt you can cast it
 to
  Ptr () and alter it on C-side?
 
  void alter(void* data)
  {
  int* x = (int*)data;
  *x = 42;
  }
 
  --
 
  -- using 'unsafe' doesn't change anything.
  foreign import ccall safe "alter"
      alter :: Ptr () -> IO ()
 
  main = do
      sptr <- newStablePtr (0 :: CInt)
      deRefStablePtr sptr >>= print
      alter (castStablePtrToPtr sptr)  -- SEGFAULTS!
      deRefStablePtr sptr >>= print
      freeStablePtr sptr
 
 
  But I tried it, and it doesn't work: I got a segfault when 'alter' is
  called.
 

 I think that 'castStablePtrToPtr' exists because many C APIs use
 'void*' to mean 'opaque lump of data', and these exist to conform to
 that sort of API.

 Antoine

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stable pointers: use of cast to/from Ptr

2012-02-12 Thread Yves Parès
Thanks for your explanation Albert, it makes things clearer.

So StablePtrs are just useful so that C code can:
1) call back into Haskell (through a foreign exported function like
doSomethingWithTheObjectIGaveYou
:: StablePtr MyObjectType -> Stuff -> IO ())
2) store them to return them later to Haskell when prompted (through a
foreign imported function like getObject :: Stuff -> IO (StablePtr
MyObjectType))
That's it?

But then,
In use case 1), how can a Haskell function modify the data addressed?

If StablePtrs cannot have their pointed value modified (either C or
Haskell-side), that mostly limits their interest, doesn't it?


2012/2/12 Albert Y. C. Lai tre...@vex.net

 On 12-02-12 09:18 AM, Yves Parès wrote:

 According to the documentation
 (http://hackage.haskell.org/packages/archive/base/4.5.0.0/doc/html/Foreign-StablePtr.html),
 StablePtrs aims at being opaque on C-side.


 The doc warns again and again that StablePtr, as well as whatever
 Ptr you get from castStablePtrToPtr, are opaque (meaningless) to the C
 side. This is sanctioned by Haskell 2010, and GHC certainly exploits it to
 the fullest. The following example shows what kind of pointer values the
 C side receives for real (I deliberately do not free anything to show you
 more possible values):

 #include <stdio.h>
 void expose(void *p, void *q)
 {
  printf("%p %p\n", p, q);
 }

 import Foreign.StablePtr
 import Foreign.Ptr
 main = do
  printout (0 :: Int)
  printout (let x = not x in x)
  printout ([] :: [Integer])
 printout :: a -> IO ()
 printout thunk = do
  p <- newStablePtr thunk
  expose p (castStablePtrToPtr p)
  -- I deliberately do not free
 foreign import ccall expose :: StablePtr a -> Ptr b -> IO ()

 Typically the output is like
 0xf 0xf
 0x10 0x10
 0x11 0x11
 Looks more like keys of a lookup table than pointers.

 I do not know what good is castStablePtrToPtr for, given that StablePtr is
 already translated to C side void*, so that no intermediate Ptr step is
 necessary. Perhaps there is a story from a historical perspective.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stable pointers: use of cast to/from Ptr

2012-02-12 Thread Yves Parès
You mean I have to use a type like StablePtr (IORef Stuff)?
Because there I can only peek (deRefStablePtr) the pointer, not poke it.

I take it I should return to C a StablePtr to the new value if I don't want
to use IORefs...

Or else I have to use regular Ptrs with Foreign.Marshal.Alloc.malloc
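
For what it's worth, a hedged sketch of the IORef option: hand C a
StablePtr (IORef CInt) and export a Haskell setter that C can call back to
mutate the value in place (hs_set/hsSet are made-up names):

{-# LANGUAGE ForeignFunctionInterface #-}
import Data.IORef
import Foreign.C.Types (CInt)
import Foreign.StablePtr

hsSet :: StablePtr (IORef CInt) -> CInt -> IO ()
hsSet sp x = do
  ref <- deRefStablePtr sp
  writeIORef ref x

foreign export ccall "hs_set" hsSet :: StablePtr (IORef CInt) -> CInt -> IO ()

main :: IO ()
main = do
  ref <- newIORef 0
  sp  <- newStablePtr ref
  -- a C function that received 'sp' could now call hs_set(sp, 42);
  hsSet sp 42                -- here we just simulate that callback
  readIORef ref >>= print    -- 42
  freeStablePtr sp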

2012/2/12 Antoine Latter aslat...@gmail.com

 On Sun, Feb 12, 2012 at 3:09 PM, Yves Parès yves.pa...@gmail.com wrote:
  But then,
  In use case 1), how can a Haskell function modify the data addressed?
 


 http://hackage.haskell.org/packages/archive/base/latest/doc/html/Foreign-StablePtr.html#v:deRefStablePtr

 Antoine

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mersenne-random and standard random API

2012-02-10 Thread Yves Parès
I just thought about something: basically all these APIs provide an IO
[a] function (where a is a randomly generable type).
Is there a problem with the approach of relying on lazy evaluation to
pass the infinite list generated in IO to pure code (either explicitly or
through State) and to consume its head each time we need a random value?
Because there is no issue such as resource holding, like in the case of
file reading and enumerators, which would make lazy IO a not so good
approach...
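
A minimal sketch of that idea (purely illustrative): build the lazy infinite
list once, then let pure code consume it through State:

import Control.Monad.State
import System.Random (newStdGen, randomRs)

-- pure consumer: pop the next number off the lazy infinite list
roll :: State [Int] Int
roll = do
  xs <- get
  put (tail xs)
  return (head xs)

twoRolls :: State [Int] (Int, Int)
twoRolls = do
  a <- roll
  b <- roll
  return (a, b)

main :: IO ()
main = do
  g <- newStdGen
  let dice = randomRs (1, 6) g   -- infinite lazy list of die rolls
  print (evalState twoRolls dice)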

2012/2/9 Aleksey Khudyakov alexey.sklad...@gmail.com

 On 09.02.2012 17:28, Jerzy Karczmarczuk wrote:

 Aleksey Khudyakov:

 On 09.02.2012 15:32, Jerzy Karczmarczuk wrote:

 1. Mersenne Twister, AND congruential generators AND the Marsaglia
 stuff, all use some kind of seed, all are stateful. There are no
 miracles. Just look the agressive monadization, the form of defaultSeed,
 etc. within MWC.hs, before saying that this generator doesn't rely on
 some global state.

  I think you are missing the point here. Surely all PRNG carry some
 state around. But both StdGen and mwc-random (and likely many others)
 allow to have many generators at once (for use in different threads)

 This is irrelevant. I believe that there is a misunderstanding in
 terminology. When I say global state it means not a local variable in
 some function. You seem to say one object per programme. This is
 confirmed by:

  mersenne-random is just wrapper around vastly impure library (as
 documentation says) and allows only one genrator per program.
 This is why I said it uses *global* state


  In any case, the seed changes after each generation, and
 must be stored somewhere.

  No. It doesn't and cannot

  data StdGen
  = StdGen Int32 Int32

 If generator state is stored in IORef it's not possible to implement
 `next :: g → (Int,g)'. To do something useful with it one have to
 go to IO monad but we can't. So state have to be copied.

  We can't WHAT?
 Look, all data that change or are created MUST be stored somewhere,
 don't say dubious things. Your next function is a threading generator,
 which makes another StdGen, OK, but this is not a copy. This is a
 creation of a new seed. When I spoke about IORefs, I thought about the
 global generator, which USES the l'Ecuyer stuff, newStdGen and its
 friends.

 The threading becomes implicit. Try, say
 r=newStdGen
 r = return . next

 and you will see, it works, and you don't keep explicitly your seed.
  From the efficiency point of view all this is comparable. With IOrefs
 you do not pollute the memory which has to be garbage-collected, but
 its administration is heavier than the standard heap operations. StdGen
 with l'Ecuyer two-number seed, or some 600 of the Mersenne, I don't see
 a conceptual difference. The Marsaglia generator has a global seed quite
 voluminous as well.

  I didn't quite understand you

 In order to implement RandomGen one have to implement `next' function

  next :: g → (Int,g)

 Lets assume that g stores internal generator state (In case of NWC256 it's
 258 Word32s). We cannot modify it in place. Someone may hold this g and
 changing it behind the scenes will violate referential transparence. We
 have to create new array and this is expensive.

 There still way out as Duncan Coutts pointed out. We may generate
 stream of random numbers in lazy ST monad and use them as random generator.



  No idea what do you mean. In the Random library you will find the
 generators using IORefs, AND the class Random, with the member random
 (or randomR, etc.) and you may write
 getStdRandom random
 getStdRandom random
 ...
 as you wish, getting different results. What's wrong with that?


 Nothing. I was talking about problems with `next' function. One always can
 use IORefs to create global generator but that's irrelevant


  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mersenne-random and standard random API

2012-02-10 Thread Yves Parès
 It's more like Seed → [a]. IO is not really needed here.
You're absolutely right. I stand corrected.

 1. One may want to generate values of different types and/or distributions.
Yes, I saw this limitation before. But you can just generate one infinite
list (and then one seed) per distribution, right?
Or could you generate something homomorphic to [(Random a) => a]? (Each
element would then basically be a closure containing the current Seed)

 2. No way to make snapshot of generator state.
You mean the current seed?
You don't need to access that if you only consume PRNs from the infinite
list(s), and that during the whole program.

 3. Laziness overhead. It may be significant if you try to squeeze last
bit of performance.
I'm afraid that if you're frightened by the laziness overhead then you
won't use Haskell.
I'm kidding, but it's just that I don't picture the overhead being so huge in
that case. What would be so big that the runtime needs to keep in memory,
except for the closure that generates the infinite list?

2012/2/10 Aleksey Khudyakov alexey.sklad...@gmail.com

 On 10.02.2012 18:38, Yves Parès wrote:

 I just thought about something: basically all these APIs provides a IO
 [a] (where a is a randomly generable type) function.
 Is there a problem with the approach that is to rely on lazy evaluation
 to pass to pure code (either explicitely or through State) the infinite
 list generated in IO and consume its head each time we need a random
 value?
 Because there is no issue such as resource holding, like in the case of
 file reading and enumerators, which would make lazy IO a not so good
 approach...


 It's more like Seed → [a]. IO is not really needed here. I can see
 following problems. None are real showstoppers (except maybe 2)


 1. One may want to generate values of different types and/or
   distributions. This is easily solved by state monad. But I can't
   see advantage over wrapped ST.

 2. No way to make snapshot of generator state.

 3. Laziness overhead. It may be significant if you try to sqeeze last
   bit of performance.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Concurrency strategy for 2 threads and rare events

2012-02-08 Thread Yves Parès
Why do you think it's a lot? MVars are a tiny and convenient communication
primitive, and I don't see why they wouldn't suit your need.
Sure, a throwTo would do the trick... but there is a difference between doing
the trick and doing the job, you see?

Using STM and TVars *would* be kind of overkill.
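
A minimal sketch of that MVar-polling shape (the work and delays here are just
placeholders):

import Control.Concurrent

-- the worker checks the MVar between work chunks instead of blocking on it
loop :: MVar () -> Int -> IO ()
loop stop n = do
  cmd <- tryTakeMVar stop            -- non-blocking check
  case cmd of
    Just () -> putStrLn ("stopped after " ++ show n ++ " chunks")
    Nothing -> do
      threadDelay 10000              -- stands in for a chunk of real work
      loop stop (n + 1)

main :: IO ()
main = do
  stop <- newEmptyMVar
  _ <- forkIO (loop stop 0)
  _ <- getLine                       -- the other thread waits for user input
  putMVar stop ()                    -- signal the worker
  threadDelay 100000                 -- give it a moment to notice and print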


2012/2/8 JP Moresmau jpmores...@gmail.com

 Hello, I'm wondering what's the best strategy to use in the following
 scenario:
 - 2 threads
  - One perform some work that will take time, possibly go on forever
  - Another waits for user input (like commands from the keyboard)
 that affects thread 1 (causing it to stop, in the simplest case)

 I've read a bit on MVar and channels, but they seem to be a lot for
 cases where the reading thread block for input. In my case, I expect
 to have something that thread 2 updates when an event occur, and
 thread 1 checks it regularly. So thread 1 should not block, but should
 check is there something and there is, act on it, otherwise continue
 doing what it was currently doing. I suppose I could just tryTakeMVar
 on a MVar, but is there something more adapted to my needs?

 Thanks!

 --
 JP Moresmau
 http://jpmoresmau.blogspot.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Mersenne-random and standard random API

2012-02-08 Thread Yves Parès
Hi,
In the past I've been told that mersenne-random was much better than the
standard random package.

However, System.Random.Mersenne doesn't follow the general API described in
System.Random: MTGen is not an instance of RandomGen.
But a sample in the System.Random.Mersenne.getStdRandom documentation (
http://hackage.haskell.org/packages/archive/mersenne-random/1.0.0.1/doc/html/System-Random-Mersenne.html)
indicates this:

  rollDice :: IO Int
  rollDice = getMTRandom (randomR (1,6))

It looks like wrong documentation (checking older versions shows that
getMTRandom has never existed or been exposed by this module), but it suggests
MTRandom should be an instance of RandomGen (if I assume correctly that randomR
is the one from System.Random).

So is it possible to use the fast and efficient mersenne generator with the
convenient and general random API?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: exists-0.1

2012-02-07 Thread Yves Parès
Is there documentation on constraints being types, how they can be
declared/handled, and what the benefits are?

2012/2/7 Mikhail Vorozhtsov mikhail.vorozht...@gmail.com

 On 02/06/2012 03:32 AM, Gábor Lehel wrote:

 There's a common pattern in Haskell of writing:

 data E where E :: C a => a -> E
 also written
 data E = forall a. C a => E a

 I recently uploaded a package to Hackage which uses the new
 ConstraintKinds extension to factor this pattern out into an Exists
 type parameterized on the constraint, and also for an Existential type
 class which can encompass these kind of types:

 http://hackage.haskell.org/package/exists

 My motivation was mostly to play with my new toys, if it turns out to
 be useful for anything that's a happy and unexpected bonus.

 Some interesting things I stumbled upon while writing it:

  [snip]

  - One of the advantages FunctionalDependencies has over TypeFamilies
 is that type signatures using them tend to be more readable and
 concise than ones which have to write out explicit equality
 constraints. For example, foo :: MonadState s m => s -> m () is nicer
 than foo :: (MonadState m, State m ~ s) => s -> m (). But with
 equality superclass constraints (as of GHC 7.2), it's possible to
 translate from TF-form to FD-form (but not the reverse, as far as I
 know): class (MonadStateTF m, s ~ State m) => MonadStateFDish s m;
 instance (MonadStateTF m, s ~ State m) => MonadStateFDish s m.

 Even better, you can write

 type ExistentialWith c e = (Existential e, c ~ ConstraintOf e)

 instead of

 class (Existential e, c ~ ConstraintOf e) => ExistentialWith c e
 instance (Existential e, c ~ ConstraintOf e) => ExistentialWith c e

 and drop UndecidableInstances.
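
For readers new to ConstraintKinds, here is a minimal standalone sketch of the
pattern the package factors out (Constraint is imported from GHC.Exts here;
newer GHCs also export it from Data.Kind):

{-# LANGUAGE ConstraintKinds, GADTs, KindSignatures #-}
import GHC.Exts (Constraint)

data Exists (c :: * -> Constraint) where
  Exists :: c a => a -> Exists c

showIt :: Exists Show -> String
showIt (Exists x) = show x

heterogeneous :: [Exists Show]
heterogeneous = [Exists (1 :: Int), Exists "hello", Exists True]

main :: IO ()
main = mapM_ (putStrLn . showIt) heterogeneous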



  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Rewrite this imperative in FP way

2012-02-06 Thread Yves Parès
Okay.
But that's misleading, as normally x == y is True *iff* compare x y is EQ,
but here this is not verified, as you don't redefine compare.

A function like unionBy doesn't exist, so IMHO to limit ambiguity, it
would be a good idea to use MyTuple *only* at the specific place where you
need its Eq instance instead of that of tuple (and then remove the Ord
MyTuple instance), and use regular tuples elsewhere. The following should
do the job:
map getTuple . (`union` c) . map MyTuple

2012/2/6 Haisheng Wu fre...@gmail.com

 The reason I redefined Eq is:
   When doing `union` over two list of MyTuple is just base on its first
 element.
   Basically it means: [(1,2), (2,2)] `union` [(1,0), (2,0), (0,0)]
 produce [(1,2), (2,2), (0,0)]
   rather than [(1,2),(2,2),(1,0),(2,0),(0,0)] by default.

 -Haisheng



 On Sun, Feb 5, 2012 at 11:37 PM, Yves Parès yves.pa...@gmail.com wrote:

 Concerning your first solution, I don't understand why you redefine Eq
 but not Ord instance. Ord will still work by comparing the tuples and not
 the first elements of said tuples.
 Plus the good news is you don't have to do this: just use regular tuples
 and use sortBy or groupBy functions from Data.List with the 'on'
 function from Data.Function.
 For instance your Eq instance could have been written
 x == y = (==) `on` (fst . getTuple)

 With regular tuples you can write sortBy (compare `on` fst).


 Plus can you rewrite your original imperative algorithm with the right
 variable names? You're using a 'c' array that hasn't been defined.


 2012/2/5 Haisheng Wu fre...@gmail.com

 a = [1,1,1,1]
 b = [0,1,2,3]
 d = [0,0,0,0]

 for i in b:
   for j in c:
 if (i+j)<3:
   d[i+j] += a[i]

 My just work implementation in Haskell
 http://hpaste.org/57452

 Another people implementation in Haskell with Monad and it turns out
 complex and very imperatively.
 http://hpaste.org/57358

 Do you have any cool solution in FP way?

 Thanks.
 -Simon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Rewrite this imperative in FP way

2012-02-05 Thread Yves Parès
Concerning your first solution, I don't understand why you redefine Eq but
not Ord instance. Ord will still work by comparing the tuples and not the
first elements of said tuples.
Plus the good news is you don't have to do this: just use regular tuples
and use sortBy or groupBy functions from Data.List with the 'on'
function from Data.Function.
For instance your Eq instance could have been written
x == y = (==) `on` (fst . getTuple)

With regular tuples you can write sortBy (compare `on` fst).
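
And for the d[i+j] += a[i] loop itself, one straightforward FP-ish sketch using
accumArray (assuming the undefined 'c' was meant to be the same index list as
'b'):

import Data.Array (accumArray, elems)

a, b, c :: [Int]
a = [1, 1, 1, 1]
b = [0, 1, 2, 3]
c = [0, 1, 2, 3]   -- assumption: same role as b in the original snippet

d :: [Int]
d = elems (accumArray (+) 0 (0, 3)
             [ (i + j, a !! i) | i <- b, j <- c, i + j < 3 ])

main :: IO ()
main = print d   -- [1,2,3,0]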


Plus can you rewrite your original imperative algorithm with the right
variable names? You're using a 'c' array that hasn't been defined.


2012/2/5 Haisheng Wu fre...@gmail.com

 a = [1,1,1,1]
 b = [0,1,2,3]
 d = [0,0,0,0]

 for i in b:
   for j in c:
 if (i+j)<3:
   d[i+j] += a[i]

 My just work implementation in Haskell
 http://hpaste.org/57452

 Another people implementation in Haskell with Monad and it turns out
 complex and very imperatively.
 http://hpaste.org/57358

 Do you have any cool solution in FP way?

 Thanks.
 -Simon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Rewrite this imperative in FP way

2012-02-05 Thread Yves Parès
 For instance your Eq instance could have been written
 x == y = (==) `on` (fst . getTuple)

Sorry, wrong arity:
(==) = (==) `on` (fst . getTuple)

Okay for the imperative code.

2012/2/5 Yves Parès yves.pa...@gmail.com

 Concerning your first solution, I don't understand why you redefine Eq but
 not Ord instance. Ord will still work by comparing the tuples and not the
 first elements of said tuples.
 Plus the good news is you don't have to do this: just use regular tuples
 and use sortBy or groupBy functions from Data.List with the 'on'
 function from Data.Function.
 For instance your Eq instance could have been written
 x == y = (==) `on` (fst . getTuple)

 With regular tuples you can write sortBy (compare `on` fst).


 Plus can you rewrite your original imperative algorithm with the right
 variable names? You're using a 'c' array that hasn't been defined.


 2012/2/5 Haisheng Wu fre...@gmail.com

 a = [1,1,1,1]
 b = [0,1,2,3]
 d = [0,0,0,0]

 for i in b:
   for j in c:
 if (i+j)<3:
   d[i+j] += a[i]

 My just work implementation in Haskell
 http://hpaste.org/57452

 Another people implementation in Haskell with Monad and it turns out
 complex and very imperatively.
 http://hpaste.org/57358

 Do you have any cool solution in FP way?

 Thanks.
 -Simon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: exists-0.1

2012-02-05 Thread Yves Parès
That is a great initiative.
I didn't know about those Kind extensions that enable you to pass a
typeclass as a type parameter...

However, have you considered putting the Data.Exists.Default module in a
separate package? That would reduce the dependencies for those who just
need Exists and Existential.

2012/2/5 Gábor Lehel illiss...@gmail.com

 There's a common pattern in Haskell of writing:

 data E where E :: C a => a -> E
 also written
 data E = forall a. C a => E a

 I recently uploaded a package to Hackage which uses the new
 ConstraintKinds extension to factor this pattern out into an Exists
 type parameterized on the constraint, and also for an Existential type
 class which can encompass these kind of types:

 http://hackage.haskell.org/package/exists

 My motivation was mostly to play with my new toys, if it turns out to
 be useful for anything that's a happy and unexpected bonus.

 Some interesting things I stumbled upon while writing it:

 - Did you know you can write useful existentials for Functor,
 Foldable, and Traversable? I sure didn't beforehand.

 - You can even write them for various Comonad classes, though in their
 case I don't think it's good for anything because you have no way to
 run them.

 - Surprisingly to me, the only * kinded class in the standardish
 libraries I found which is useful with existentials is Show, the * ->
 * kinded ones are more numerous.

 - I don't know if anyone's ever set out what the precise requirements
 are for a type class method to be useful with existentials. For
 example, any method which requires two arguments of the same type (the
 type in the class head) is clearly useless, because if you have two
 existentials there's no way to tell whether or not their contents were
 of the same type. I think this holds any time you have more than one
 value of the type among the method's parameters in any kind of way
 (even if it's e.g. a single parameter that's a list). If the
 type-from-the-class-head (is there a word for this?) is used in the
 method's parameters in a position where it's not the outermost type
 constructor of a type (i.e. it's a type argument), that's also no
 good, because there's no way to extract the type from the existential,
 you can only extract the value. On the other hand, in the method's
 return type it's fine if there are multiple values of the
 type-from-the-class-head (or if it's used as a type argument?),
 because (as long as the method also has an argument of the type) the
 type to put into the resulting existentials can be deduced to be the
 same as the one that was in the argument. But if the
 type-from-the-class-head is used *only* in the return type, then it's
 difficult to construct an existential out of the return value because
 the instance to use will be ambiguous.

 - There are a lot of ways you can write existentials, and the library
 only captures a small part of them. Multiparameter constraint? No go.
 More than one constraint? No go (though you can use
 Control.Constraint.Combine). More than one type/value stored? No go.
 Anything which doesn't exactly match the patterns data E where E :: C
 a => a -> E or data E a where E :: C f => f a -> E a? No go. I don't
 think there's any way to capture all of the possibilities in a finite
 amount of code.

 - ConstraintKinds lets you write class aliases as type synonyms, type
 Stringy a = (Show a, Eq a). The old way to do this is class (Show a,
 Eq a) => Stringy a; instance (Show a, Eq a) => Stringy a and requires
 UndecidableInstances. But if the alias has multiple parameters, the
 old way is still superior, because it can be partially applied where
 type synonyms can't. This is analogous to the situation with type
 synonyms versus newtype/data declarations, but interestingly, unlike
 data and newtypes, the class+instance method doesn't require you to do
 any manual wrapping and unwrapping, only the declaration itself is
 different.

 - One of the advantages FunctionalDependencies has over TypeFamilies
 is that type signatures using them tend to be more readable and
 concise than ones which have to write out explicit equality
 constraints. For example, foo :: MonadState s m => s -> m () is nicer
 than foo :: (MonadState m, State m ~ s) => s -> m (). But with
 equality superclass constraints (as of GHC 7.2), it's possible to
 translate from TF-form to FD-form (but not the reverse, as far as I
 know): class (MonadStateTF m, s ~ State m) => MonadStateFDish s m;
 instance (MonadStateTF m, s ~ State m) => MonadStateFDish s m.
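 Spelled out as a compilable sketch (the class and method names here are
 made up, not mtl's):

 {-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleContexts,
              FlexibleInstances, UndecidableInstances #-}

 -- A TypeFamilies-style state class:
 class MonadStateTF m where
   type State m
   getS :: m (State m)
   putS :: State m -> m ()

 -- The FD-ish form, recovered via an equality superclass (GHC >= 7.2):
 class    (MonadStateTF m, s ~ State m) => MonadStateFDish s m
 instance (MonadStateTF m, s ~ State m) => MonadStateFDish s m

 -- Signatures can now mention the state type directly:
 foo :: (MonadStateFDish s m, Num s) => s -> m ()
 foo x = putS (x + 1)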

 - PolyKinds only seems to be useful as long as there's no value-level
 representation of the polykinded type involved (it's only used as a
 phantom). As soon as you have to write 'a' for kind * and 'f a' for
 kind * -> *, you have to do the duplication manually. Is this right?
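 (For the phantom case I mean something like this, as a sketch:

 {-# LANGUAGE PolyKinds #-}

 data Proxy a = Proxy   -- 'a' gets a polymorphic kind: it is only a phantom

 intProxy :: Proxy Int      -- Proxy used at kind *
 intProxy = Proxy

 maybeProxy :: Proxy Maybe  -- Proxy used at kind * -> *
 maybeProxy = Proxy

 whereas anything that actually stores an 'a' or an 'f x' seems to need the
 * and * -> * versions written out separately.)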

 - Writing this library really made me want to have a type-level Ord
 instance for constraints, more precisely a type-level is-implied-by
 operator. The typechecker clearly knows that Eq 

Re: [Haskell-cafe] space leak when repeatedly calling Control.Monad.State.Strict.modify

2012-01-30 Thread Yves Parès
Have you tried to compile your code with optimisations? I guess GHC's
strictness analysis would work out that strict evaluation is better here.
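
To make that concrete, here is a minimal sketch (assuming mtl's
Control.Monad.State.Strict; the function names are just for illustration) of
a loop that leaks with the stock modify but stays flat once the new state is
forced:

import Control.Monad.State.Strict (State, execState, get, put, modify)

leaky :: Int -> State Int ()            -- builds a chain of (+1) thunks
leaky 0 = return ()
leaky n = modify (+1) >> leaky (n - 1)

strict :: Int -> State Int ()           -- forces the new state before storing it
strict 0 = return ()
strict n = do
  s <- get
  put $! s + 1
  strict (n - 1)

main :: IO ()
main = do
  print (execState (strict 1000000) 0)
  print (execState (leaky  1000000) 0)  -- may blow the stack without -O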


2012/1/30 Joey Hess j...@kitenet.net

 Claude Heiland-Allen wrote:
  Control.Monad.State.Strict is strict in the actions, but the state
  itself is still lazy, so you end up building a huge thunk in the
  state containing all the updates that ever took place to the initial
  state.
 
  Using this should fix it:
 
  modify' :: MonadState s m => (s -> s) -> m ()
  modify' f = do
    x <- get
    put $! f x  -- force the new state when storing it

 Thanks!

 So, why does Control.Monad.State.Strict.modify not do that?

 And, I still don't quite understand why this only happened
 when the updated value is obtained using IO.

 --
 see shy jo

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] TCP Server

2012-01-28 Thread Yves Parès
 http://hackage.haskell.org/package/network-server

Straightforward to use, but it unfortunately uses the unix package, so I take
it it is not portable.
However, its first version did not use it, so maybe the part concerned could
be rewritten.

 I think there is still no consensus on which iteratee library is the one
to use. There are at least iteratee, enumerator, iterIO, conduit, and
 pipes. The reusability of your libary depends on the choice of
iteratee-style library you select.

Yes, and IMO this is a growing problem. Since iteratees were designed, a
lot of different libraries providing this kind of service have appeared.
Of course they all have advantages and drawbacks, but some libraries
that could be compatible are not, because they rely on a different
iteratee-ish package. For instance pipes (as its documentation states) is
really like iteratee... but with more concrete names. Still it's sufficient
to break compatibility.

Or else, we have to make sure that each one (iteratee, enumerator, conduit,
pipes...) has its own set of associated packages and that each provides
equivalent functionality, but then we get a combinatorial explosion.

^^ It's just that I don't want people to start trolling by applying to Haskell
the adage I've heard quite a few times about Java, stating that "There are
50 ways to achieve something... none of which is good."

2012/1/28 Jean-Marie Gaillourdet j...@gaillourdet.net

 Hello,

 On 27.01.2012, at 00:47, Alexander V Vershilov wrote:
  Recently I asked about tcp server libraries [1] and there was only one
  answer haskell-scallable-server [2], but in that package there was some
  dependencies and server logic that are not good for my task.

 A simple search for server on Hackage turned up the following packages
 for somewhat generic server infrastructure:

 http://hackage.haskell.org/package/iterio-server
 http://hackage.haskell.org/package/generic-server
 http://hackage.haskell.org/package/c10k
 http://hackage.haskell.org/package/network-server

 In issue 19 of The Monad Reader is an article discussing the design of the
 following web server:
 http://hackage.haskell.org/package/mighttpd2

 This links might be relevant to your original question.


  So I decided to make a library with skeletons for different types of
  tcp servers and a basic library to make server [3].
 
  Now there is only warp based (simplified) tcp server.
  Main logic is:
   * handle new connection and start an iteratee green thread
   * enumerator (library side) reads the socket and sends data into the
 enumeratee (application side)
   * enumeratee consumes input and produces a server command (now simply
 output) and pushes it into the iteratee (server side)
   * iteratee (server side) reads the command and performs the action (now
 simply an output command)
  It gives the application the possibility to store state in the enumerator.
 
  My questions to community:
   1). can you help to make the code better ;) maybe there are some stupid
 mistakes
   2). are there any ideas on what can be added to the server to make it
 reusable

 I think there is still no consensus on which iteratee library is the one
 to use. There are at least iteratee, enumerator, iterIO, conduit, and
 pipes. The reusability of your libary depends on the choice of
 iteratee-style library you select.

 Jean


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] TCP Server

2012-01-28 Thread Yves Parès
Yes, I was forecasting a little...

Concerning conduit, yes it's not another implementation of Oleg's
iteratees, yet its API looks a lot like 'enumerators'.
Plus it aims at solving the same problem; only the implementation differs
(roughly, state variables instead of pure closure-based automata).

2012/1/28 Erik de Castro Lopo mle...@mega-nerd.com

 Yves Parès wrote:

  Yes, and IMO this is a growing problem. Since iteratees were designed, a
  lot of different libraries providing this kind of service have appeared.

 That's mainly because the solution space was new and there was lots of
 unexplored terrain.

  Or else, we have to make sure that each one (iteratee, enumerator,
 conduit,
  pipes...) has its own set of associated packages and that each provide
  equivalent functionalities, but then = combinatorial explosion.

 There really isn't a combinatorial explosion, but rather a small number
 of families of packages.

  ^^ It's just I don't want people to start trolling by applying to Haskell
  the adage I've heard quite a few times about Java, stating that There
 are
  50 ways to achieve something... none of which is good.

 Java has been around 20 years. The iteratee/enumerator/iterio/conduit/pipes
 idea has really only been around for a couple of years and I would be
 surprised if one of them (or a new one combining the best features of
 the others) doesn't come out the clear winner in the next year or two.

 Erik
 --
 --
 Erik de Castro Lopo
 http://www.mega-nerd.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Terminology: different levels of strictness

2012-01-27 Thread Yves Parès
If I consider the functions head, length, elem and sum, each of them is
strict, as:
head/length/elem x/sum _|_ are always _|_.

However:
head (x:_|_) is never _|_.
length [_|_, _|_, _|_ ...] is also never _|_.
elem x [4,5,6,8,2,90,_|_,_|_ ...] is *only sometimes* _|_ (depending on the
value of x).
In fact, only
sum [4,5,6,8,2,90,_|_,_|_ ...] is always _|_.

Which shows they don't have the same level of strictness.

So can you say things like all these functions are strict, but some are *more*
than others, or sum is *deeply strict*...?
What terms can you use to compare those functions?
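
In code form (a runnable restatement of the examples above; only the last
print actually diverges):

main :: IO ()
main = do
  print (head (1 : undefined))           -- fine: only the first cons cell is needed
  print (length [undefined, undefined])  -- fine: only the spine is forced
  print (3 `elem` [3, undefined])        -- fine: stops at the first match
  print (sum [1, 2, undefined :: Int])   -- _|_: every element is forced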
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-26 Thread Yves Parès
One day, I _really_ should learn all GHCI commands...

Thanks, Felipe ^^

2012/1/25 Felipe Almeida Lessa felipe.le...@gmail.com

 On Wed, Jan 25, 2012 at 7:38 PM, Yves Parès yves.pa...@gmail.com wrote:
  But I haven't found a way to tell GHCI to fully evaluate 'x' but _not_
 print
  its value.

 Use the :force, Yves!

  let {a = htrace "a" 12; b = htrace "b" 29; c = htrace "c" 10; d = htrace
 "d" 90; x = htrace "," (htrace "+" (a+b), htrace "*" (c*d)) }
  :force x
 ,
 +
  a
  b
 *
  c
  d
 x = (41,900)

 Cheers! =)

 --
 Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Just found GHCI can be used as a debugger

2012-01-25 Thread Yves Parès
Hello,

By randomly typing ':' followed by <Tab> under GHCI, I discovered it could be
used as a debugger (:breakpoint, :step, etc.)
Three questions in that regard:
1) Is there some documentation about it?
2) I haven't investigated much, is that fairly thorough and convenient?
Do some people over here use those functions on a regular basis?
3) Is it possible to save the state (notably to save the position of the
breakpoints)?

Thanks!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Yves Parès
Hi, nice little package!

I just made a fork and added a new function makeHTrace to be able to have
separate 'level' variables.
I also added the htrace type signature (or else haddock won't generate
documentation for this module):
https://github.com/YwenP/htrace

I was also investigating a way to fix an annoyance. You see, in GHCI:

 let {a = htrace "a" 12; b = htrace "b" 29; c = htrace "c" 10; d = htrace
"d" 90; x = htrace "," (htrace "+" (a+b), htrace "*" (c*d)) }
 x

prints:

,
(+
  a
  b
41,*
  c
  d
900)

Instead, we'd like to have (if I'm right):

,
  +
a
b
  *
c
d
(41,900)

But I haven't found a way to tell GHCI to fully evaluate 'x' but _not_
print its value.
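
One workaround I can think of (assuming the deepseq package is installed and
the type of x has an NFData instance) is to force the value myself and make
GHCi print something boring instead:

import Control.DeepSeq (deepseq)
x `deepseq` ()   -- fully evaluates x, but only () gets printed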

2012/1/25 Eugene Kirpichov ekirpic...@gmail.com

 Thanks!

 I released it:

 http://hackage.haskell.org/package/htrace
 http://github.com/jkff/htrace


 On Wed, Jan 25, 2012 at 4:18 AM, Felipe Almeida Lessa 
 felipe.le...@gmail.com wrote:

 Really nice!  Looks like it could be a useful mini-package on Hackage.

 --
 Felipe.




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data newtype differences. Today: strictness

2012-01-22 Thread Yves Parès
Hello,

I had long thought that data and newtype were equivalent, but then I
spotted some differences when it comes to strictness.
Those can be summed up as:

data Test = Test Int

newtype TestN = TestN Int


pm (Test _) = 12  -- Strict (pm undefined = undefined)

pm2 t = t `seq` 12  -- Strict

pmN (TestN _) = 12  -- Non strict (pmN undefined = 12)

pmN2 t = t `seq` 12  -- Strict
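
A self-contained version of the experiment, with the results as comments:

data Test = Test Int
newtype TestN = TestN Int

pm :: Test -> Int
pm (Test _) = 12

pmN :: TestN -> Int
pmN (TestN _) = 12

main :: IO ()
main = do
  print (pmN undefined)        -- 12: the newtype pattern is irrefutable
  print (pm (Test undefined))  -- 12: only the constructor is forced, not the Int
  print (pm undefined)         -- diverges: the Test pattern forces its argument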


When I think about it, pm and pmN are logical, as newtype layers are
removed by the compiler and thus do not exist at runtime.
Some would maybe expect pmN2 to behave just like pmN (as I don't 'seq'
the inner Int but the outer TestN), but if you follow the same logic (the TestN
layer doesn't exist at runtime), then you are asking to evaluate the inner Int.
This can however be misleading when you use an opaque type, can't it? As
you don't know if it's declared using data or newtype...

These make me think that pattern matching against a newtype is always lazy
(irrefutable). Am I right?

Is there some literature making explicit, in a less empirical way than I did,
differences like this between data and newtype? I've never come across such
documentation through all my learning of Haskell, yet I think it's an
important point.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data newtype differences. Today: strictness

2012-01-22 Thread Yves Parès
Thanks, that's clearer to me now.

It confirmed my thoughts:
 Matching the pattern con pat against a value, where con is a constructor
defined by newtype, depends on the value:
 - If the value is of the form con v, then pat is matched against v.
 - If the value is ⊥, then pat is matched against ⊥.

Put shorter:
newtype TestN = TestN Int

TestN x = undefined

Then x = undefined.

Which means you cannot make a boxed type out of an unboxed one using
newtype. That makes sense.

2012/1/22 Roman Cheplyaka r...@ro-che.info

 * Yves Parès yves.pa...@gmail.com [2012-01-22 11:32:30+0100]
  These make me think that pattern matching against a newtype is always
 lazy
  (irrefutable). Am I right?

 Yes.

  Is there some litterature expliciting in a less empiric way than I did
 the
  differences like this between data and newtype? I've never come against
  such documentation through all my learning of Haskell, yet I think it's
 an
  important point.

 See the Haskell report, section 3.17.2 Informal Semantics of Pattern
 Matching [1].

 In particular, this paragraph:

The irrefutable patterns are as follows: a variable, a wildcard, N
apat where N is a constructor defined by newtype and apat is
irrefutable (see Section 4.2.3), var@apat where apat is irrefutable,
or of the form ~apat (whether or not apat is irrefutable). All other
patterns are refutable.

 [1]:
 http://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-63.17.2

 --
 Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data newtype differences. Today: strictness

2012-01-22 Thread Yves Parès
Big summary of everything:

If TestN is a newtype constructor, then
'TestN undefined' and 'undefined' are exactly the same thing.


2012/1/22 Yitzchak Gale g...@sefer.org

 Yves Parès wrote:
  Is there some litterature expliciting in a less empiric way than I did
 the
  differences like this between data and newtype? I've never come against
  such documentation through all my learning of Haskell, yet I think it's
 an
  important point.

 Roman Cheplyaka wrote:
  See the Haskell report, section 3.17.2 Informal Semantics of Pattern
  Matching [1].

 And section 4.2.3 of the report [2] addresses exactly your points very
 explicitly:

 A type created by newtype differs from an algebraic datatype in that...
 The following examples clarify the differences between data
 (algebraic datatypes), type (type synonyms), and newtype
 (renaming types)...

 Regards,
 Yitz

 [1]
 http://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-63.17.2
 [2]
 http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-740004.2.3

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [C][enums][newbie] What is natural Haskell representation of such enum?

2012-01-22 Thread Yves Parès
I'd be curious to see how you intend to use such an enum...
It is very C-like; I'm not sure it will be very handy, but I need some
context.
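
That said, if it is only about OR-ing flags and testing membership, one
possible first sketch (whether it is handy really depends on that context) is
a plain sum type plus a set for the combined masks:

import qualified Data.Set as Set

data ObjectType = Unit | Player | GameObject | DynamicObject | Corpse
  deriving (Eq, Ord, Show, Enum, Bounded)

type TypeMask = Set.Set ObjectType

creatureOrGameObject, worldObject :: TypeMask
creatureOrGameObject = Set.fromList [Unit, GameObject]
worldObject          = Set.fromList [Unit, Player, GameObject, DynamicObject, Corpse]

matches :: ObjectType -> TypeMask -> Bool
matches = Set.member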

2012/1/22 Данило Глинський abcz2.upr...@gmail.com

 What is natural Haskell representation of such enum?

 enum TypeMask
 {
UNIT,
GAMEOBJECT,

CREATURE_OR_GAMEOBJECT = UNIT | GAMEOBJECT
 };

 A more sophisticated question is: what data structures must be used when
 converting this one naturally to Haskell?

 // 1-byte flagged enum
 enum TypeMask
 {
// ...
UNIT= 0x0004,
GAMEOBJECT  = 0x0008,
// ...

    CREATURE_OR_GAMEOBJECT = UNIT | GAMEOBJECT,
    WORLDOBJECT = UNIT | PLAYER | GAMEOBJECT | DYNAMICOBJECT | CORPSE,
// ... even more enum combos ...
 };

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Yves Parès
 (StrictT op) >>= f = StrictT (op >>= \x -> x `seq` runStrictT (f x))

Are you sure? Here you evaluate the result, and not the computation itself.
Wouldn't it be:

(StrictT op) >>= f = op `seq` StrictT (op >>= \x -> runStrictT (f x))

??
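
For reference, here is the whole transformer as I understand your version
(only a sketch, so that we are sure we are talking about the same thing; it
forces each bound result to WHNF):

import Control.Monad (ap, liftM)

newtype StrictT m a = StrictT { runStrictT :: m a }

instance Monad m => Functor (StrictT m) where
  fmap = liftM

instance Monad m => Applicative (StrictT m) where
  pure  = StrictT . return
  (<*>) = ap

instance Monad m => Monad (StrictT m) where
  StrictT op >>= f = StrictT (op >>= \x -> x `seq` runStrictT (f x))
  -- (on pre-AMP GHCs you would also define: return = StrictT . return)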

2012/1/21 David Barbour dmbarb...@gmail.com

 On Sat, Jan 21, 2012 at 10:08 AM, Roman Cheplyaka r...@ro-che.info wrote:

 * David Barbour dmbarb...@gmail.com [2012-01-21 10:01:00-0800]
  As noted, IO is not strict in the value x, only in the operation that
  generates x. However, should you desire strictness in a generic way, it
  would be trivial to model a transformer monad to provide it.

 Again, that wouldn't be a monad transformer, strictly speaking, because
 monads it produces violate the left identity law.


 It meets the left identity law in the same sense as the Eval monad from
 Control.Strategies.

 http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/src/Control-Parallel-Strategies.html#Eval

 That is, so long as values at each step can be evaluated to WHNF, it
 remains true that `return x >>= f` = `f x`.

 I did mess up the def of >>=. I think it should be:
   (StrictT op) >>= f = StrictT (op >>= \x -> x `seq` runStrictT (f x))

 But I'm not interested enough to actually pull out an interpreter and
 test...

 Regards,

 Dave


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unboxed Rationals?

2012-01-12 Thread Yves Parès
uvector is deprecated; its functionality has been ported into vector.

2012/1/11 Artyom Kazak artyom.ka...@gmail.com

 Also, uvector already supports unboxed Ratios:
 http://hackage.haskell.org/package/uvector

 In fact, I am surprised that Data.Vector doesn't have a Ratio
 instance, but has a Complex instance. Any ideas, why?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Solved but strange error in type inference

2012-01-06 Thread Yves Parès
2012/1/6 AUGER Cédric sedri...@gmail.com

 when you write forall a. exists b. a -> b -> a, then you allow the
 caller to have access to b to produce a (didn't you write a->b->a?)


Yes, sorry, I always assumed independence between the type variables. Like
in:
f :: forall a. a -> (forall b. b -> a)
being the same as:
f :: forall a b. a -> b -> a
I should have specified: *if* a doesn't depend on b in the latter.
Of course the latter allows that, whereas the first does not (since it's
what prevents STRefs from escaping runST, by forbidding the return type of
runST to depend on the phantom type 's' of the ST action).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Solved but strange error in type inference

2012-01-05 Thread Yves Parès
Thanks Cédric for your explanations.
To sum up, that's the way I understood all this, in a bit less formal way:
(Could help some, I guess)

Universal quantification (forall) means that the *caller* will instantiate
(fix) the type variables, and that the *callee* cannot know those types.

Existential quantification (through data E = forall a. E a) means that
the *callee* will fix the type variables, and that the *caller* cannot know
those types.

Now concerning the way existential (resp. universal) in a context becomes
universal (resp. existential) in the outer context:

f :: forall a. a -> a
means that when you're *inside* f, 'a' will have been fixed to some type *that
f cannot know*, only the outer context can.
f says "I can work with any type a, so give me whatever type you want."

g :: (forall a. a -> a) -> Something
g f = 
means the exact opposite: 'a' is universally quantified in the signature of
'f', so it is existentially quantified in that of 'g'. So it's equivalent
to:
g :: exists a. (a -> a) -> Something
g says "I can only work with *some* specific types 'a', but as you don't
know what they will be, you can but give me something that will work
with *any* type 'a'."

And so on:
h :: ((forall a. a -> a) -> Something) -> SomethingElse
h g = ...
h can also be written as follows, am I right?:
h :: forall a. ((a -> a) -> Something) -> SomethingElse

And to be sure:
foo :: forall a. a -> (forall b. b -> a)
is equivalent to:
foo :: forall a b. a -> b -> a
Right?

And:
foo :: forall a. a -> ((forall b. b) -> a)
to:
foo :: forall a. exists b. a -> b -> a
??
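
A small compilable sketch of the caller/callee split for the simple cases
(RankNTypes and ExistentialQuantification), just to fix the ideas:

{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Rank-2: g's caller must pass a function usable at *any* type;
-- g itself then gets to use it at several types.
g :: (forall a. a -> a) -> (Int, Bool)
g f = (f 1, f True)

-- Existential: whoever built the E picked the hidden type, so the
-- consumer must supply a function that works for *any* a.
data E = forall a. E a

withE :: (forall a. a -> r) -> E -> r
withE k (E x) = k x

main :: IO ()
main = print (g id, withE (const "opaque") (E (42 :: Int)))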

2012/1/4 AUGER Cédric sedri...@gmail.com

 On Wed, 4 Jan 2012 20:13:34 +0100,
 Yves Parès limestr...@gmail.com wrote:

   f :: forall a. (forall b. b -> b) -> a -> a
   f id x = id x
 
   is very similar to the first case, x is still universally
   quantified (a) and there is an existential quantification in id (b).
 
  I don't get it. What is 'id' existentially quantified in?
  f calls id so f will decide what will be its type. In this case,
  because it's applied to 'x', id's type will get instantiated to 'a ->
  a' (where a is rigid, because bound by f's signature).

 Sorry I badly expressed it.

 I wanted to say that the b type variable in id is existentially
 quantified in the type of f.

 forall a. (forall b. b -> b) -> a -> a
   ^
   existentially quantified in the overall type
   but locally universally quantified in the type forall b. b -> b

 It is the same things for data types.

 Consider the data type of lists:

 data List a = Nil
             | Cons a (List a)

 its elimination principle is:

 list_rec :: forall a b. b -> (a -> List a -> b -> b) -> List a -> b
 list_rec acc f l = case l of
   Nil      -> acc
   Cons a l -> f a l (list_rec acc f l)
 (it is the type version of the induction principle:
  ∀ P. P Nil ⇒ (∀ x l, P l ⇒ P (Cons x l)) ⇒ ∀ l, P l
  which by Curry-De Bruijn-Howard gives
   b -> (b -> List a -> b -> b) -> List a -> b
  you can also find an optimized version where l is removed from
  the recursion as its information can be stored in P l/b;
  this function is exactly foldr
 )

 Now if we take a look at the elimination principle,

 forall a b. b -> (a -> List a -> b -> b) -> List a -> b

 contains only universally quantified variables.

 Cons as a function is also universally quantified:
 Cons :: forall a. a -> List a -> List a

 Nil as a function is also universally quantified:
 Nil :: forall a. List a

 So that elimination and introduction are all universally quantified.
 (Nothing very surprising here!)
 =

 Now for the existential part:

 data Existential = forall b. ExistIntro b

 What is its elimination principle (ie. what primitive function allows
 us to use it for all general purpouses)?

 Existential types are less easy to manipulate, since in fact they
 require functions which takes universally quantified functions as
 arguments.

 existential_elim :: forall b. (forall a. a -> b) -> Existential -> b
 existential_elim f e = case e of
   ExistIntro x -> f x
 (∀ P. (∀ a. a ⇒ P (ExistIntro a)) ⇒ ∀ e. P e)

 Here, you can see that a is existentially quantified (isn't it
 normal for a type containing existentials)!

 Note also that its introduction rule:
 ExistIntro :: forall a. a -> Existential
 is also universally quantified (but with a type variable which doesn't
 appear in the introduced type).

 Which is not to mess with:

 data Universal = UnivIntro (forall a. a)

 elimination principle:
 univ_elim :: forall b. ((forall a. a) -> b) -> Universal -> b
 univ_elim f u = case u of
   UnivIntro poly -> f poly
 (∀ P. (∀ (poly:∀a.a). P (UnivIntro poly)) ⇒ ∀ u. P u)

 Here you can see that you don't need special syntax, and again, the
 elimination principle is universally quantified in both a and b.
 Its introduction has some existential quantification (which doesn't
 appear

Re: [Haskell-cafe] Solved but strange error in type inference

2012-01-04 Thread Yves Parès
 Ocaml community (where most people strongly rely on type inference
 and do not put types very often)

Well it's the same for Haskell.

 foo :: bar -> foobar -> barfoo
 foo b = let *fooAux :: foobar -> barfoo*  -- You can comment it out
             fooAux fb = if f b fb
                         then fooAux (g b fb)
                         else h b fb
         in fooAux

Yes, I also like to write it that way, except you can *remove* fooAux's type
signature and let GHC's type inference do its job, like in OCaml.
Plus, foo's signature is merely documentation for you in that case.

 I always prefer to minimize the number of specialized functions, since
 I hate jumping from parts of code to other parts of code to understand
 what a function really does.

Too bad, that's one of the assets of functional programming. The more finely
you split your functions, the easier they are to reuse and test
afterwards. That's good practice in C-like languages too, IMHO.
You can just write subsome right under legSome; it doesn't break
legibility, and if you think it's best, tell your module to export only
legSome.


...Or, remove/comment the inner type signatures ;)


2012/1/4 AUGER Cédric sedri...@gmail.com

 On Tue, 3 Jan 2012 20:46:15 +0100,
 Yves Parès limestr...@gmail.com wrote:

   Actually, my question is why the different type can't be unified
   with the
  inferred type?
 
  Because without ScopedTypeVariable, both types got expanded to :
 
  legSome :: *forall nt* t s. LegGram nt t s -> nt -> Either String
  ([t], s)
 
  subsome :: *forall nt* t s. [RRule nt t s] -> Either String ([t], s)
 
  So you see subsome declare new variables, which *do not *match the
  *rigid *ones declared by legSome signature, hence the incompatibility.
 

 How ugly! I thought that Haskell didn't allow masking type variables.

 Does that mean that I can do:

 foo :: (a -> a) -> (b -> b) -> a -> a
 foo bar dummy =
   let strange :: a -> a
       strange = dummy
   in
   foo

 (ok, that's dead code, but how misleading it would be!)

 Is there some way to enforce a typing error here?
 (I am pretty sure there is some flag to have explicit 'forall's,
 but I am not sure that it forces you to use 'forall' explicitly.)

 I come from the Coq community (where forall must be explicit) and the OCaml
 community (where most people strongly rely on type inference and do not
 put types very often). I didn't try this example in OCaml.

  As we said before, you have three ways to work it out:
  1) Use scoped type variables with explicit forall nt on legSome and
  *no forall* on subsome, so that nt in subsome matches nt declared in
  legSome. 2) Don't give a type signature for subsome and let GHC find
  out which is its correct type.
  3) Extract subsome so that it becomes a top-level function with a
  polymorphic type (Recommended).

 In fact I first wanted to write the method directly without using
 auxiliary functions (I changed my mind since, as I redesigned my code
 and it wasn't possible [or at least natural for me] to put the code
 directly inlined).

 In fact I often do nested functions when there are free variables in
 the function. I particularly hate to have code like that:

 foo :: bar -> foobar -> barfoo
 foo b fb = if f b fb
              then foo b (g b fb)
              else h b fb

 as b is in fact invariant in the calls of 'foo'; so I always rewrite
 this as:

 foo :: bar -> foobar -> barfoo
 foo b = let fooAux :: foobar -> barfoo
             fooAux fb = if f b fb
                         then fooAux (g b fb)
                         else h b fb
         in fooAux

 that is more verbose, but I can clearly see what the recursive parameters are,
 and I have the (probably wrong) feeling that it is better since
 function calls have fewer arguments.

 Here subsome was depending on the free variable sg, so I had to
 generalize it to do so, even if I get no benefit from having the
 ability to use it elsewhere (as it was the only place I wanted to use
 it).

 I always prefer to minimize the number of specialized functions, since
 I hate jumping from parts of code to other parts of code to understand
 what a function really does.

 
 
  Now, concerning the error I asked you to deliberately provoke, that's
  the quickest way I found to know what is the output of the type
  inferer, and maybe the only simple one.
  So this error is:
   Couldn't match expected type `Int'
 with actual type `[([Symbols nt t], [s] -> t0)]
   -> Either [Char] ([t], t0)'
 In the expression: subsome :: Int
  GHC tells you the type it infers for subsome: [([Symbols nt t], [s]
  -> t0)] -> Either [Char] ([t], t0)
  The nt you see is the one from legSome, those messages show scoped
  variables. (You'd get something like 'nt1' or 'nt0' if it was another
  variable, meant to be instantiated to a different type).
  This accords with your previous error message, which happens to give
  you more details about where the rigid type variable nt comes from:
`nt' is a rigid type

Re: [Haskell-cafe] Solved but strange error in type inference

2012-01-04 Thread Yves Parès
 f :: a -> a
 f x = x :: a  -- invalid

Perfect example indeed.

 The confusing point to me is that the 'a' from 'f' type signature on
 its own is not universally quantified as I expected,
 and it is dependent on the type of 'f'.

 This dependence is not obvious for a beginner like me.

It's indeed counterintuitive, as you'd expect type variables to be scoped,
but just note that:
You have only *one* simple rule to remember: *All* type variables
appearing in a type expression automatically get *universally* quantified
at the *beginning* of this expression.
Practically: *When you see a type variable v, GHC automatically puts
a 'forall v' at the beginning of the whole expression.* No exceptions,
no ambiguity at all, as long as ScopedTypeVariables stays deactivated.
Following this one rule, it's clear that the example above cannot but
become:

f :: forall a. a -> a
f x = x :: forall a. a

Which is obviously wrong: when you *have entered* f, x has been instantiated
to a specific type 'a', and then you want x to be of *any* type? That
doesn't make sense.

It's only logical.

2012/1/4 Yucheng Zhang yczhan...@gmail.com

 On Wed, Jan 4, 2012 at 3:46 AM, Yves Parès limestr...@gmail.com wrote:
  Because without ScopedTypeVariable, both types got expanded to :
 
  legSome :: forall nt t s. LegGram nt t s -> nt -> Either String ([t], s)
 
  subsome :: forall nt t s. [RRule nt t s] -> Either String ([t], s)
 
  So you see subsome declare new variables, which do not match the rigid
 ones
  declared by legSome signature, hence the incompatibility.
 

 It's not obvious to me why they are incompatible. I looked into the Haskell
 2010 Language
 Report, and found my question answered in Section 4.4.1, along with a
 minimal
 reproducing example:

 f :: a -> a
 f x = x :: a  -- invalid

 The confusing point to me is that the 'a' from 'f' type signature on
 its own is not
 universally quantified as I expected, and it is dependent on the type of
 'f'.

 This dependence is not obvious for a beginner like me.

 ScopedTypeVariables will help to express the dependence exactly. And moving
 'subsome' to top-level will prevent bringing in the dependent type.

  Now, concerning the error I asked you to deliberately provoke, that's the
  quickest way I found to know what is the output of the type inferer, and
  maybe the only simple one.

 I find this one of the most interesting tricks I've seen.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Solved but strange error in type inference

2012-01-04 Thread Yves Parès
 I expected the type of 'x' to be universally quantified, and thus can be
 unified with 'forall a. a' with no problem

As I get it, 'x' is not universally quantified; f is. [1]
x would be universally quantified if the type of f was:

f :: (forall a. a) -> (forall a. a)

[1] Yet here I'm not sure this sentence is correct. Some heads-up from a
type expert would be good.


Would you try:

f :: a -> a
f x = undefined :: a

And tell me if it works? IMO it doesn't.

2012/1/4 Yucheng Zhang yczhan...@gmail.com

 On Wed, Jan 4, 2012 at 7:58 PM, Yves Parès limestr...@gmail.com wrote:
  f :: forall a. a -> a
  f x = x :: forall a. a
 
  Which is obviously wrong: when you have entered f, x has been
 instatiated to
  a specific type 'a', and then you want it to x to be of any type? That
  doesn't make sense.

 I did not expect the type variables to be scoped.

 I expected the type of 'x' to be universally quantified, and thus can be
 unified
 with 'forall a. a' with no problem.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

