Re: [Haskell-cafe] Proposal: New syntax for Haskell

2013-09-10 Thread Edward Z. Yang
This is completely irrelevant, but the .chs extension is
already taken by the c2hs tool.

Cheers,
Edward

Excerpts from Niklas Hambüchen's message of Tue Sep 10 00:30:41 -0700 2013:
 Impressed by the productivity of my Ruby-writing friends, I have
 recently come across Cucumber: http://cukes.info
 
 
 It is a great tool for specifying tests and programs in natural
 language, and especially easy to learn for beginners.
 
 I propose that we add a Cucumber syntax for Haskell, with the extension
 .chs, next to .hs and .lhs.
 
 
 Code written in cucumber syntax is concise and easy to read: You can
 find some example code in https://gist.github.com/nh2/6505995. Quoting
 from that:
 
   Feature: The Data.List module
 
 In order to be able to use lists
 As a programmer
 I want a module that defines list functions
 
 Scenario: Defining the function foldl
   Given I want to define foldl
   Which has the type (in brackets) a to b to a (end of brackets),
  to a, to list of b, to a
   And my arguments are called f, acc, and l
   When l is empty
   Then the result better be acc
   Otherwise l is x cons xs
   Then the result should be foldl f (in brackets) f acc x
 (end of brackets) xs
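 
  For comparison, the plain Haskell that the scenario above spells out is
  just the usual definition (a sketch; it shadows the Prelude foldl if
  compiled as-is):
 
    foldl :: (a -> b -> a) -> a -> [b] -> a
    foldl f acc l = case l of
                      []     -> acc
                      x : xs -> foldl f (f acc x) xs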
 
 
 PS: People even already started a testing framework for Haskell in it:
 https://github.com/sol/cucumber-haskell#cucumber-for-haskell
 


Re: [Haskell-cafe] starting GHC development -- two questions

2013-08-08 Thread Edward Z. Yang
Hello Ömer,

First off, welcome to the wonderful world of GHC development!  I
recommend that you subscribe to the ghc-devs mailing list and
direct GHC specific questions there:

http://www.haskell.org/mailman/listinfo/ghc-devs

 While doing this, I think one feature would greatly help me finding my
 way through GHC source, which is huge: I want to see definition of
 some symbols. Normally what I would do for this is to load source into
 GHCi and run :info command. But in the case of GHC, even if it's
 possible to load GHC into GHCi, I don't think it will be faster than
 running ack --haskell someSymbol and searching through results
 manually.
 
 The first idea that came to my mind was to generate a tags file and then
 navigate from within vim (my editor of choice). The tags file can be added
 to the Makefile as a goal and then regenerated after each
 build. Did anyone else try this before?

GHC has a 'make tags' command but I've never gotten it to work.  I have
always just run 'hasktags .' in the compiler/ directory, which works
pretty well for me.  (If you're in the RTS, run ctags, etc instead)

 My second question is do we have any low-hanging fruits in trac, to
 help new people start contributing to GHC? I know several open source
 projects using that approach and it's really helpful for beginners.
 
 I just skimmed over trac and most issues look way too advanced for a starter.

We've been discussing putting together an easy bugs list.  As a proxy,
you can search on the 'Difficulty' keyword:
http://ghc.haskell.org/trac/ghc/query?status=infoneeded&status=merge&status=new&status=patch&difficulty=Easy+(less+than+1+hour)&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority

For example, this bug seems like a good beginner bug to get your feet
wet with the RTS: http://ghc.haskell.org/trac/ghc/ticket/750

This one will give you some experience wrangling the test suite:
http://ghc.haskell.org/trac/ghc/ticket/8079

Moving up to the moderate category, here is a nontrivial bug involving
profiling and the optimizer: http://ghc.haskell.org/trac/ghc/ticket/609

As with all open source projects, there is always lots of
infrastructural work to be done, so if that's your sort of thing, there
are plenty of bugs in that category.

Cheers,
Edward



[Haskell-cafe] ANN: Monad.Reader Issue 22

2013-08-07 Thread Edward Z. Yang
I am pleased to announce that Issue 22 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2013/08/issue22.pdf

Issue 22 consists of the following three articles:

  * Generalized Algebraic Data Types in Haskell by Anton Dergunov
  * Error Reporting Parsers: a Monad Transformer Approach by Matt Fenwick and 
Jay Vyas
  * Two Monoids for Approximating NP-Complete Problems by Mike Izbicki

Feel free to browse the source files. You can check out the entire repository 
using Git:

git clone https://github.com/ezyang/tmr-issue22.git

If you’d like to write something for Issue 23, please get in touch!

Cheers,
Edward



Re: [Haskell-cafe] what is wrong w my IORef Word32 ?

2013-07-18 Thread Edward Z. Yang
shiftL has the wrong type:  Bits a => a -> Int -> a
so it is expecting the value in the IORef to be an Int.
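
A sketch of one way to make the original compile: keep the shift distance
as the Int that shiftL wants, and shift the Word32 stored in the IORef
(whether Int is the right type for the caller is a separate question):

    import Data.Bits (shiftL)
    import Data.IORef
    import Data.Word (Word32)

    data Word32s = Word32s { x :: IORef Word32 }

    mkbitfield :: Word32 -> IO Word32s
    mkbitfield i = do the_bf <- newIORef i
                      return (Word32s the_bf)

    sLbitfield :: Int -> Word32s -> IO ()
    sLbitfield i (Word32s bf) = modifyIORef bf (`shiftL` i)

    main :: IO ()
    main = do oper_bf <- mkbitfield 1
              sLbitfield 2 oper_bf
              readIORef (x oper_bf) >>= print   -- prints 4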

Edward

Excerpts from Joerg Fritsch's message of Thu Jul 18 10:08:22 -0700 2013:
 All, what is wrong w the below code?
 
 I get an type error related to the operation shiftL
 
 import Data.Bits
 import Data.Word
 import Data.IORef
 
 data Word32s = Word32s { x :: IORef Word32 }
 
 bitfield :: Word32
 bitfield = 0
 
 mkbitfield :: Word32 -> IO Word32s
 mkbitfield i = do the_bf <- newIORef i
   return (Word32s the_bf)
 
 sLbitfield :: Integer -> Word32s -> IO ()
 sLbitfield i (Word32s bf) = do modifyIORef bf (shiftL i)
 
 main::IO()
 main = do
  oper_bf <- mkbitfield bitfield
  sLbitfield 2 oper_bf
 
 
 
 bf_003.hs:15:48:
 Couldn't match type `Int' with `Word32'
  Expected type: Word32 -> Word32
    Actual type: Int -> Word32
 In the return type of a call of `shiftL'
 In the second argument of `modifyIORef', namely `(shiftL i)'
 In a stmt of a 'do' block: modifyIORef bf (shiftL i)
 
 bf_003.hs:15:55:
 Couldn't match expected type `Word32' with actual type `Integer'
 In the first argument of `shiftL', namely `i'
 In the second argument of `modifyIORef', namely `(shiftL i)'
 In a stmt of a 'do' block: modifyIORef bf (shiftL i)
 
 
 
 Thanks,
 --Joerg
 



Re: [Haskell-cafe] Non-recursive let [Was: GHC bug? Let with guards loops]

2013-07-10 Thread Edward Z. Yang
In my opinion, when you are rebinding a variable with the same name,
there is usually another way to structure your code which eliminates
the variable.

If you would like to write:

let x = foo input in
let x = bar x in
let x = baz x in

instead, write

baz . bar . foo $ input

If you would like to write

let (x,s) = foo 1 [] in
let (y,s) = bar x s in
let (z,s) = baz x y s in

instead, use a state monad.
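
For instance, a small sketch of the second rewrite (foo/bar/baz here are
made-up stand-ins; each reads and updates the threaded state instead of
taking and returning it by hand):

    import Control.Monad.State

    foo :: Int -> State Int Int
    foo n = do modify (+ n); return n

    bar :: Int -> State Int Int
    bar x = do s <- get; return (x + s)

    baz :: Int -> Int -> State Int Int
    baz x y = do put (x * y); return (x + y)

    example :: (Int, Int)            -- (result, final state)
    example = runState (do x <- foo 1; y <- bar x; baz x y) 0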

Clearly this will not work in all cases, but it goes pretty far,
in my experience.

Edward

Excerpts from Andreas Abel's message of Wed Jul 10 00:47:48 -0700 2013:
 Hi Oleg,
 
 just now I wrote a message to haskell-pr...@haskell.org to propose a 
 non-recursive let.  Unfortunately, the default let is recursive, so we 
 only have names like let' for it.  I also mentioned the ugly workaround 
 (<- return $) that I was shocked to see the first time, but use myself 
 sometimes now.
 
 Cheers,
 Andreas
 
 On 10.07.2013 09:34, o...@okmij.org wrote:
  Andreas wrote:
  The greater evil is that Haskell does not have a non-recursive let.
  This is source of many non-termination bugs, including this one here.
  let should be non-recursive by default, and for recursion we could have
  the good old let rec.
 
  Hear, hear! In OCaml, I can (and often do) write
 
   let (x,s) = foo 1 [] in
   let (y,s) = bar x s in
   let (z,s) = baz x y s in ...
 
  In Haskell I'll have to uniquely number the s's:
 
   let (x,s1)  = foo 1 [] in
   let (y,s2)  = bar x s1 in
   let (z,s3)  = baz x y s2 in ...
 
  and re-number them if I insert a new statement. BASIC comes to mind. I
  tried to lobby Simon Peyton-Jones for the non-recursive let a couple
  of years ago. He said, write a proposal. It's still being
  written... Perhaps you might want to write it now.
 
  In the meanwhile, there is a very ugly workaround:
 
   test = runIdentity $ do
      (x,s) <- return $ foo 1 []
      (y,s) <- return $ bar x s
      (z,s) <- return $ baz x y s
      return (z,s)
 
  After all, bind is non-recursive let.
 
 
 
 



Re: [Haskell-cafe] Lambda Calculus question on equivalence

2013-05-02 Thread Edward Z. Yang
The notion of equivalence you are talking about (normally L is referred
to as a context) is 'extensional equality'; that is, functions f
and g are equal if forall x, f x = g x.  It's pretty easy to give
a pair of functions which are not alpha equivalent but are observationally
equivalent:

if collatz_conjecture then true else bottom
true / bottom (Depending on whether or not you think the collatz conjecture 
is true...)

Cheers,
Edward

Excerpts from Ian Price's message of Thu May 02 12:47:07 -0700 2013:
 Hi,
 
 I know this isn't perhaps the best forum for this, but maybe you can
 give me some pointers.
 
 Earlier today I was thinking about De Bruijn Indices, and they have the
 property that two lambda terms that are alpha-equivalent, are expressed
 in the same way, and I got to wondering if it was possible to find a
 useful notion of function equality, such that it would be equivalent to
 structural equality (aside from just defining it this way), though
 obviously we cannot do this in general.
 
 So the question I came up with was:
 
 Can two normalised (i.e. no subterm can be beta or eta reduced) lambda
 terms be observationally equivalent, but not alpha equivalent?
 
 By observationally equivalent, I mean A and B are observationally
 equivalent if for all lambda terms L: (L A) is equivalent to (L B) and
 (A L) is equivalent to (B L). The definition is admittedly circular, but
 I hope it conveys enough to understand what I'm after.
 
 My intuition is no, but I am not sure how to prove it, and it seems to
 me this sort of question has likely been answered before.
 
 Cheers



Re: [Haskell-cafe] Lambda Calculus question on equivalence

2013-05-02 Thread Edward Z. Yang
Excerpts from Timon Gehr's message of Thu May 02 14:16:45 -0700 2013:
 Those are not lambda terms.
 Furthermore, if those terms are rewritten to operate on church numerals, 
 they have the same unique normal form, namely λλλ 3 2 (3 2 1).

The trick is to define the second one as x * 2 (and assume the fixpoint
operates on the first argument). Now they are not equal.

Edward



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-22 Thread Edward Z. Yang
So, if I understand correctly, you're using the online/offline
criterion to resolve non-directed cycles in pipelines?  (I couldn't
tell how the Shivers paper was related.)

Cheers,
Edward

Excerpts from Ben Lippmeier's message of Sun Apr 21 19:29:29 -0700 2013:
 
 On 22/04/2013, at 12:23 , Edward Z. Yang ezy...@mit.edu wrote:
 
  I've got a solution for this problem and it will form the basis of
  Repa 4, which I'm hoping to finish a paper about for  the upcoming
  Haskell Symposium.
  
  Sounds great! You should forward me a preprint when you have something
  in presentable shape. I suppose before then, I should look at 
  repa-head/repa-stream
  to figure out what the details are?
 
 The basic approach is already described in:
 
 Automatic Transformation of Series Expressions into Loops
 Richard Waters, TOPLAS 1991
 
 The Anatomy of a Loop
 Olin Shivers, ICFP 2005
 
 
 The contribution of the HS paper is planning to be:
  1) How to extend the approach to the combinators we need for DPH
  2) How to package it nicely into a Haskell library.
 
 I'm still working on the above...
 
 Ben.



Re: [Haskell-cafe] conditional branching vs pattern matching: pwn3d by GHC

2013-04-22 Thread Edward Z. Yang
Note that, unfortunately, GHC's exhaustiveness checker is *not* good
enough to figure out that your predicates are covering. :o)  Perhaps
there is an improvement to be had here.

Edward

Excerpts from Albert Y. C. Lai's message of Mon Apr 22 00:51:46 -0700 2013:
 When I was writing
 http://www.vex.net/~trebla/haskell/crossroad.xhtml
 I wanted to write: branching on predicates and then using selectors is 
 less efficient than pattern matching, since selectors repeat the tests 
 already done by predicates.
 
 It is only ethical to verify this claim before writing it. So here it 
 goes, eval uses pattern matching, fval uses predicates and selectors:
 
 module E where
 
 data E = Val{fromVal::Integer} | Neg{fromNeg::E}
| Add{fromAdd0, fromAdd1 :: E}
 isVal Val{} = True
 isVal _ = False
 isNeg Neg{} = True
 isNeg _ = False
 isAdd Add{} = True
 isAdd _ = False
 
 eval (Val n) = n
 eval (Neg e0) = - eval e0
 eval (Add e0 e1) = eval e0 + eval e1
 
 fval e | isVal e = fromVal e
 | isNeg e = - fval (fromNeg e)
 | isAdd e = fval (fromAdd0 e) + fval (fromAdd1 e)
 
 Simple and clear. What could possibly go wrong!
 
 $ ghc -O -c -ddump-simpl -dsuppress-all -dsuppress-uniques E.hs
 
 ...
 
 Rec {
 fval
 fval =
   \ e ->
  case e of _ {
    Val ds -> ds;
    Neg ds -> negateInteger (fval ds);
    Add ipv ipv1 -> plusInteger (fval ipv) (fval ipv1)
  }
 end Rec }
 
 Rec {
 eval
 eval =
   \ ds ->
  case ds of _ {
    Val n -> n;
    Neg e0 -> negateInteger (eval e0);
    Add e0 e1 -> plusInteger (eval e0) (eval e1)
  }
 end Rec }
 
 Which of the following best describes my feeling?
 [ ] wait, what?
 [ ] lol
 [ ] speechless
 [ ] oh man
 [ ] I am so pwn3d
 [ ] I can't believe it
 [ ] what can GHC not do?!
 [ ] but what am I going to say in my article?!
 [ ] why is GHC making my life hard?!
 [X] all of the above
 



[Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Edward Z. Yang
Hello all, (cc'd stream fusion paper authors)

I noticed that the current implementation of stream fusion does
not support multiple-return stream combinators, e.g.
break :: (a -> Bool) -> [a] -> ([a], [a]).  I thought a little
bit about how might one go about implement this, but the problem
seems nontrivial. (One possibility is to extend the definition
of Step to support multiple return, but the details are a mess!)
Nor, as far as I can tell, does the paper give any treatment of
the subject.  Has anyone thought about this subject in some detail?
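
(For reference, the single-return stepper from the paper looks roughly like
this -- sketched from memory, not verbatim:

    {-# LANGUAGE ExistentialQuantification #-}

    data Step a s = Done
                  | Yield a s
                  | Skip s

    data Stream a = forall s. Stream (s -> Step a s) s

A combinator like break would need the stepper to be able to say "stop
producing the first result and start the second", which Step as written
cannot express.)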

Thanks,
Edward



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Edward Z. Yang
 I've got a solution for this problem and it will form the basis of
 Repa 4, which I'm hoping to finish a paper about for  the upcoming
 Haskell Symposium.

Sounds great! You should forward me a preprint when you have something
in presentable shape. I suppose before then, I should look at 
repa-head/repa-stream
to figure out what the details are?

Edward




Re: [Haskell-cafe] Resource Limits for Haskell

2013-04-01 Thread Edward Z. Yang
I now have a paper draft describing the system in more detail.  It also
comes with a brief explanation of how GHC's profiling works, which should
also be helpful for people who haven't read the original profiling
paper.

http://ezyang.com/papers/ezyang13-rlimits.pdf

Edward

Excerpts from Edward Z. Yang's message of Fri Mar 15 14:17:39 -0700 2013:
 Hey folks,
 
 Have you ever wanted to implement this function in Haskell?
 
 -- | Forks a thread, but kills it if it has more than 'limit'
 -- bytes resident on the heap.
 forkIOWithSpaceLimit :: IO () -> {- limit -} Int -> IO ThreadId
 
 Well, now you can! I have a proposal and set of patches here:
 
 http://hackage.haskell.org/trac/ghc/wiki/Commentary/ResourceLimits
 http://hackage.haskell.org/trac/ghc/ticket/7763
 
 There is a lot of subtlety in this space, largely derived from the
 complexity of interpreting GHC's current profiling information.  Your
 questions, comments and suggestions are greatly appreciated!
 
 Cheers,
 Edward



[Haskell-cafe] Monad.Reader #22 call for copy

2013-03-29 Thread Edward Z. Yang
Call for Copy: The Monad.Reader - Issue 22


Another ICFP submission deadline has come and gone: why not celebrate by
submitting something to The Monad.Reader?  Whether you're an established
academic or have only just started learning Haskell, if you have
something to say, please consider writing an article for The
Monad.Reader!  The submission deadline for Issue 22 will be:

**Saturday, June 1**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.



Re: [Haskell-cafe] Future of MonadCatchIO

2013-03-26 Thread Edward Z. Yang
While block and unblock have been removed from base, they are still 
implementable
in modern GHC.  So another possible future is to deprecate MonadCatchIO
(which should have been done a while ago, honestly!), but manually redefine
the functions so that old code keeps working.
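
For instance, a sketch of one such redefinition in terms of what base still
provides (unsafeUnmask lives in GHC.IO; nested-mask corner cases may differ
slightly from the originals):

    import Control.Exception (mask_)
    import GHC.IO (unsafeUnmask)

    block :: IO a -> IO a
    block = mask_

    unblock :: IO a -> IO a
    unblock = unsafeUnmask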

Edward

Excerpts from Arie Peterson's message of Sun Mar 03 07:40:06 -0800 2013:
 Hi all,
 
 
 The function 'block' and 'unblock' (from Control.Exception) have been 
 deprecated for some time, and are apparently now being removed (in favour of 
 'mask').
 
 Generalisations of these functions are (part of) the interface of 
 MonadCatchIO-transformers (the 'MonadCatchIO' class has methods 'block' and 
 'unblock'). So, the interface would have to change to keep up with base.
 
 I'm inclined to deprecate MonadCatchIO-transformers itself, in favour of 
 monad-control.
 
 I suspect that most clients do not use 'block' or 'unblock' directly, but use 
 only derived functions, like 'bracket'. (I have partly confirmed this, by 
 inspecting some reverse dependencies on hackage.) This allows an easy 
 transition to monad-control: in many cases, only imports will need to be 
 changed. In the minority of cases where 'block' and 'unblock' are used and/or 
 instances of MonadCatchIO are defined, code will need to be updated.
 
 There is a difference in functionality between MonadCatchIO and 
 monad-control. 
 In the former, 'bracket' will not perform the final action if the main action 
 is an ErrorT that throws an error (in contrast with exceptions in the 
 underlying IO monad). In monad-control, 'bracket' will perform the final 
 action 
 in this case. (See this discussion for background:
 http://www.haskell.org/pipermail/haskell-cafe/2010-October/084890.html.)
 
 Probably, in most use cases the behaviour of monad-control is preferred. This 
 seems to be the case also for snap, which uses MonadCatchIO-transformers, but 
 defines its own variant of 'bracket' to get the right behaviour.
 
 
 Would anyone have a problem with a deprecation of MonadCatchIO-transformers, 
 and a failure to update it to work with a base without 'block' and 'unblock'?
 
 
 Regards,
 
 Arie
 



Re: [Haskell-cafe] MVar which can not be null ?

2013-03-18 Thread Edward Z. Yang
If you are doing IO operations, then the operation is hardly atomic, is it?

Just take from the MVar, compute, and when you're done, put a value
back on the MVar.  So long as you can guarantee all users of the MVar
take before putting, you will have the desired semantics.
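
A sketch of that take/compute/put pattern (hypothetical helper name; note
it is deliberately *not* async-exception safe, which is the subtlety
raised next):

    import Control.Concurrent.MVar

    withVar :: MVar a -> (a -> IO (a, b)) -> IO b
    withVar var f = do
      x <- takeMVar var        -- other users block here until we put back
      (x', result) <- f x      -- may perform arbitrary IO, so not atomic
      putMVar var x'
      return result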

Something worth considering: what are the desired semantics if an
asynchronous exception is thrown on the thread servicing the MVar?
If the answer is to just quit, what if it has already performed
externally visible IO actions?  If the answer is to ignore it, what
if the thread gets wedged?

Edward

Excerpts from s9gf4ult's message of Mon Mar 18 01:07:42 -0700 2013:
 18.03.2013 13:26, Alexander V Vershilov wrote:
 
 I can not use atomicModifyIORef because it works with pure computation
 
 atomicModifyIORef :: IORef a -> (a -> (a, b)) -> IO b
 
 nor STM, becuase IO is not acceptable inside STM transaction.
 
 I just need some thread-safe blocking variable like MVar
 
 modifyMVar :: MVar a -> (a -> IO (a, b)) -> IO b



[Haskell-cafe] ANN: Monad.Reader Issue 21

2013-03-16 Thread Edward Z. Yang
I am pleased to announce that Issue 21 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2013/03/issue21.pdf

Issue 21 consists of the following two articles:

* A Functional Approach to Neural Networks by Amy de Buitléir, Michael 
Russell, Mark Daly
* Haskell ab initio: the Hartree-Fock Method in Haskell by Felipe Zapata, 
Angel J. Alvarez

Feel free to browse the source files. You can check out the entire repository 
using Git:

git clone https://github.com/ezyang/tmr-issue21.git

If you’d like to write something for Issue 22, please get in touch!

Cheers,
Edward



[Haskell-cafe] Resource Limits for Haskell

2013-03-15 Thread Edward Z. Yang
Hey folks,

Have you ever wanted to implement this function in Haskell?

-- | Forks a thread, but kills it if it has more than 'limit'
-- bytes resident on the heap.
forkIOWithSpaceLimit :: IO () -> {- limit -} Int -> IO ThreadId

Well, now you can! I have a proposal and set of patches here:

http://hackage.haskell.org/trac/ghc/wiki/Commentary/ResourceLimits
http://hackage.haskell.org/trac/ghc/ticket/7763

There is a lot of subtlety in this space, largely derived from the
complexity of interpreting GHC's current profiling information.  Your
questions, comments and suggestions are greatly appreciated!

Cheers,
Edward



Re: [Haskell-cafe] Resource Limits for Haskell

2013-03-15 Thread Edward Z. Yang
The particular problem you're referring to is fixed if you compile all
your libraries with -falways-yield; see 
http://hackage.haskell.org/trac/ghc/ticket/367

I believe that it is possible to give a guarantee that the kill
signal will hit the thread in a timely fashion.  The obvious gap in
our coverage at the moment is that there may be some primops that infinite
loop, and there are probably other bugs, but I do not believe they are
insurmountable.

Edward

Excerpts from Gwern Branwen's message of Fri Mar 15 14:39:50 -0700 2013:
 On Fri, Mar 15, 2013 at 5:17 PM, Edward Z. Yang ezy...@mit.edu wrote:
  There is a lot of subtlety in this space, largely derived from the
  complexity of interpreting GHC's current profiling information.  Your
  questions, comments and suggestions are greatly appreciated!
 
 How secure is this? One of the reasons for forking a process and then
 killing it after a timeout in lambdabot/mueval is because a thread can
 apparently block the GC from running with a tight enough loop and the
 normal in-GHC method of killing threads doesn't work. Can one
 simultaneously in a thread allocate ever more memory and suppress kill
 signals?
 



Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2013-03-12 Thread Edward Z. Yang
I also support this suggestion.  Although, do we have the build infrastructure
for this?!

Edward

Excerpts from Michael Orlitzky's message of Mon Mar 11 19:52:12 -0700 2013:
 On 03/11/2013 11:48 AM, Brent Yorgey wrote:
  
  So I'd like to do it again this time around, and am looking for
  particular projects I can suggest to them.  Do you have an open-source
  project with a few well-specified tasks that a relative beginner (see
  below) could reasonably make a contribution towards in the space of
  about four weeks? I'm aware that most tasks don't fit that profile,
  but even complex projects usually have a few simple-ish tasks that
  haven't yet been done just because no one has gotten around to it
  yet.
 
 It's not exciting, but adding doctest suites with examples to existing
 packages would be a great help.
 
   * Good return on investment.
 
   * Not too hard.
 
   * The project is complete when you stop typing.
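 
  A minimal sketch of what such a doctest looks like (the function here is
  made up):
 
    -- | Reverse a list.
    --
    -- >>> rev [1,2,3]
    -- [3,2,1]
    rev :: [a] -> [a]
    rev = foldl (flip (:)) []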
 



Re: [Haskell-cafe] To seq or not to seq, that is the question

2013-03-09 Thread Edward Z. Yang
Excerpts from Tom Ellis's message of Sat Mar 09 00:34:41 -0800 2013:
 I've never looked at evaluate before but I've just found its haddock and
 given it some thought.
 
 
 http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Exception-Base.html#v:evaluate
 
 Since it is asserted that
 
 evaluate x = (return $! x) >>= return
 
 is it right to say (on an informal level at least) that evaluating an IO
 action to WHNF means evaluating it to the outermost >>= or return?

Sure.

Prelude> let x = undefined :: IO a
Prelude> x `seq` ()
*** Exception: Prelude.undefined
Prelude> (x >>= undefined) `seq` ()
()

  For non-IO monads, since everything is imprecise anyway, it doesn't
  matter.
 
 Could you explain what you mean by imprecise?

Imprecise as in imprecise exceptions, 
http://research.microsoft.com/en-us/um/people/simonpj/papers/imprecise-exn.htm

Edward



[Haskell-cafe] To seq or not to seq, that is the question

2013-03-08 Thread Edward Z. Yang
Are these equivalent? If not, under what circumstances are they not
equivalent? When should you use each?

evaluate a >> return b
a `seq` return b
return (a `seq` b)
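
As one concrete illustration of a difference between the first and third
forms (a sketch):

    import Control.Exception (evaluate)

    main :: IO ()
    main = do
      let a = error "boom" :: Int
      _ <- return (a `seq` ())   -- thunk built, never demanded
      putStrLn "still alive"     -- reached: nothing has forced a yet
      _ <- evaluate a            -- forces a as part of running the action
      putStrLn "never reached"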

Furthermore, consider:

- Does the answer change when a = b? In such a case, is 'return $! b' 
permissible?
- What about when b = () (e.g. unit)?
- What about when 'return b' is some arbitrary monadic value?
- Does the underlying monad (e.g. if it is IO) make a difference?
- What if you use pseq instead of seq?

In http://hackage.haskell.org/trac/ghc/ticket/5129 we found a bug in
'evaluate' deriving precisely from this confusion.  Unfortunately, the
insights from this conversation were never distilled into a widely
publicized set of guidelines... largely because we never really figured
out what was going on! The purpose of this thread is to figure out what is
really going on here, and develop a concrete set of guidelines which we
can disseminate widely.  Here is one strawman answer (which is too
complicated to use in practice):

- Use 'evaluate' when you mean to say, Evaluate this thunk to HNF
  before doing any other IO actions, please.  Use it as much as
  possible in IO.

- Use 'return (a `seq` b)' for strictness concerns that have no
  relation to the monad.  It avoids unnecessary strictness when the
  value ends up never being used and is good hygiene if the space
  leak only occurs when 'b' is evaluated but not 'a'.

- Use 'return $! a' when you mean to say, Eventually evaluate this
  thunk to HNF, but if you have other thunks which you need to
  evaluate to HNF, it's OK to do those first.  In particular,

(return $! a) >> (return $! b) === a `seq` (return $! b)
                               === a `seq` b `seq` return b
                               === b `seq` a `seq` return b  [1]

  This situation is similar for 'a `seq` return ()' and 'a `seq` m'.
  Avoid using this form in IO; empirically, you're far more likely
  to run into stupid interactions with the optimizer, and when later
  monadic values maybe bottoms, the optimizer will be justified in
  its choice.  Prefer using this form when you don't care about
  ordering, or if you don't mind thunks not getting evaluated when
  bottoms show up. For non-IO monads, since everything is imprecise
  anyway, it doesn't matter.

- Use 'pseq' only when 'par' is involved.

Edward



[Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
https://status.heroku.com/incidents/489

Unsigned Hackage packages are a ticking time bomb.

Cheers,
Edward



Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
 As long as we upload packages via plain HTTP, signing won't help though.

I don't think that's true?  If the package is tampered with, then the
signature will be invalid; if the signature is also forged, then the
private key is compromised and we can blacklist it.  We care only
about integrity, not secrecy.

Edward



Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
Excerpts from Joachim Breitner's message of Wed Jan 30 12:59:48 -0800 2013:
 another reason why Cabal is no package manager¹.

Based on the linked post, it seems that you are arguing that cabal-install is
not a package manager, and thus it is not necessary for it to duplicate
the work that real package managers e.g. Debian or Ubuntu put into
vetting, signing and releasing software.  (Though I am not sure, so please
correct me if I am wrong.)

This argument seems specious.  Whether or not cabal-install is or not
intended to be a package manager, users expect it to act like one (as
users expect rubygems to be a package manager), and, at the end of the
day, that is what matters.

Edward



Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
Excerpts from Ramana Kumar's message of Wed Jan 30 14:46:26 -0800 2013:
  This argument seems specious.  Whether or not cabal-install is or not
  intended to be a package manager, users expect it to act like one (as
  users expect rubygems to be a package manager), and, at the end of the
  day, that is what matters.
 
 
 But playing along with their delusion might make it harder to change their
 minds.

Looking at the library ecosystems of the most popular programming languages,
I think this ship has already sailed.

Edward



Re: [Haskell-cafe] Ticking time bomb

2013-01-30 Thread Edward Z. Yang
Excerpts from Joachim Breitner's message of Wed Jan 30 14:57:28 -0800 2013:
 I’m not against cryptographically signed packages on hackage. In fact, I
 would whole-heartedly appreciate it, as it would make my work as a
 package maintainer easier.
 
 I was taking the opportunity to point out an advantage of established
 package management systems, to shamelessly advertise my work there, as
 not everyone sees distro-packaged libraries as a useful thing.

Yes. In fact, I am a sysadmin for a large shared hosting environment, and
the fact that programming language libraries tend not to be distro-packaged
is an endless headache for us.  We would like it if everything were just
packaged properly!

On the other hand, working in these circumstances has made me realize
that there is a huge tension between the goals of package library
authors and distribution managers (a package library author desires
ease of installation of their packages, keeping everyone as up-to-date as
possible, and tends to be selfish when it comes to the rest of the
ecosystem, whereas the distribution manager values stability, security,
and global consistency of the ecosystem.)  So there is a lot of work to
be done here.  Nevertheless, I believe we are in violent agreement that
cryptographically signed Hackage packages should happen as soon as
possible!

Edward



Re: [Haskell-cafe] Example programs with ample use of deepseq?

2013-01-07 Thread Edward Z. Yang
There are two senses in which deepseq can be overkill:

1. The structure was already strict, and deepseq just forces another
no-op traversal of the entire structure.  This hypothetically affects
seq too, although seq is quite cheap so it's not a problem.

2. deepseq evaluates too much, when it was actually sufficient only to
force parts of the structure, e.g. the spine of a list.  This is less
common for the common use-cases of deepseq; e.g. if I want to force pending
exceptions I am usually interested in all exceptions in a (finite) data
structure; a space leak may be due to an errant closure---if I don't
know which it is, deepseq will force all of them, ditto with work in
parallel programs.  Certainly there will be cases where you will want to snip
evaluation at some point, but that is somewhat difficult to encode
as a typeclass, since the criterion varies from structure to structure.
(Though, perhaps, this structure would be useful:

data Indirection a = Indirection a
instance NFData (Indirection a) where
    rnf _ = ()
)
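
For instance (building on the wrapper above; T and its payload are made up):

    import Control.DeepSeq

    data T = T Int (Indirection [Int])

    instance NFData T where
        rnf (T n xs) = rnf n `seq` rnf xs   -- recursion stops at the wrapper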

Cheers,
Edward

Excerpts from Joachim Breitner's message of Mon Jan 07 04:06:35 -0800 2013:
 Dear Haskellers,
 
 I’m wondering if the use of deepseq to avoid unwanted laziness might be 
 a too large hammer in some use cases. Therefore, I’m looking for real
 world programs with ample use of deepseq, and ideally easy ways to test
 performance (so preferably no GUI applications).
 
 I’ll try to find out, by runtime observations, which of the calls to 
 deepseq could be replaced by id, seq, or „shallow seqs“ that, for
 example, calls seq on the elements of a tuple.
 
 Thanks,
 Joachim
 



[Haskell-cafe] Second Call for Copy: Monad.Reader #21

2012-12-13 Thread Edward Z. Yang
Second Call for Copy: The Monad.Reader - Issue 21
-

Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader!  The submission deadline
for Issue 21 will be:

**Tuesday, January 1**

Less than half a month away, but that's what Christmas break is for,
right? :-)

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.



Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
The monoid instance is necessary to ensure adherence to the monad laws.

Cheers,
Edward

Excerpts from Petr P's message of Sat Dec 08 10:59:25 -0800 2012:
 The class is defined as
 
  class (Monoid w, Monad m) => MonadWriter w m | m -> w where
...
 
 What is the reason for the Monoid constrait? It seems superfluous to me. I
 recompiled the whole package without it, with no problems.
 
 
 Of course, the Monoid constraint is necessary for most _instances_, like in
 
  instance (Monoid w, Monad m) => MonadWriter w (Lazy.WriterT w m) where
  ...
 
 but this is a different thing - it depends on how the particular instance
 is implemented.
 
 I encountered the problem when I needed to define an instance where the
 monoidal structure is fixed (Last) and I didn't want to expose it to the
 user. I wanted to spare the user of of having to write Last/getLast
 everywhere. (I have an instance of MonadWriter independent of WriterT, its
 'tell' saves values to a MVar. Functions 'listen' and 'pass' create a new
 temporary MVar. I can post the detail, if anybody is interested.)
 
 Would anything break by removing the constraint? I think the type class
 would get a bit more general this way.
 
   Thanks for help,
   Petr Pudlak



Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
Excerpts from Roman Cheplyaka's message of Sat Dec 08 14:00:52 -0800 2012:
 * Edward Z. Yang ezy...@mit.edu [2012-12-08 11:19:01-0800]
  The monoid instance is necessary to ensure adherence to the monad laws.
 
 This doesn't make any sense to me. Are you sure you're talking about the
 MonadWriter class and not about the Writer monad?

Well, I assume the rules for Writer generalize for MonadWriter, no?

Here's an example.  Haskell monads have the associativity law:

(f >>= g) >>= h === f >>= (g >>= h)

From this, we can see that

(m1 >> m2) >> m3 === m1 >> (m2 >> m3)

Now, consider tell. We'd expect it to obey a law like this:

tell w1 >> tell w2 === tell (w1 <> w2)

Combine this with the monad associativity law:

(tell w1 >> tell w2) >> tell w3 === tell w1 >> (tell w2 >> tell w3)

And it's easy to see that '<>' must be associative in order for this law
to be upheld.  Additionally, the existence of identities in monads means
that there must be a corresponding identity for the monoid.

So anything that is writer-like and also satisfies the monad laws...
is going to be a monoid.

Now, it's possible what GP is actually asking about is more a question of
encapsulation.  Well, one answer is, Well, just give the user specialized
functions which do the appropriate wrapping/unwrapping; another answer is,
if you let the user run a writer action and extract the resulting written
value, then he can always reverse engineer the monoid instance out of it.

Edward



Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
 First of all, I don't see why two tells should be equivalent to one
 tell. Imagine a MonadWriter that additionally records the number of
 times 'tell' has been called. (You might argue that your last equation
 should be a MonadWriter class law, but that's a different story — we're
 talking about the Monad laws here.)

Yes, I think I would argue that my equation should be a MonadWriter class
law, and if you don't grant me that, I don't have a leg to stand on.

 Second, even *if* the above holds (two tells are equivalent to one
 tell), then there is *some* function f such that
 
 tell w1 >> tell w2 == tell (f w1 w2)
 
 It isn't necessary that f coincides with mappend, or even that the type
 w is declared as a Monoid at all. The only thing we can tell from the
 Monad laws is that that function f should be associative.

Well, the function is associative: that's half of the way there to
a monoid; all you need is the identity!  But we have those too:
whatever the value of the execWriter (return ()) is...

Cheers,
Edward



Re: [Haskell-cafe] mtl: Why there is Monoid w constraint in the definition of class MonadWriter?

2012-12-08 Thread Edward Z. Yang
Excerpts from Holger Siegel's message of Sat Dec 08 15:27:38 -0800 2012:
 For deriving a monoid instance of w from monad  (Writer w), you will need
 function execWriter:: Writer w a - w, but in case of a general instance of
 (MonadWriter w m) you would have to use function listen :: m a - m (a, w)
 that will only provide you a value of type (m w), but not of type w. 
 Therefore,
 I'm not yet convinced that every instance of (MonadWriter w m) gives rise
 to a monoid instance of w.

Definitely not. I need a way of running the monad, some way or another,
otherwise, it's like having the IO monad without a 'main' function :) But
you don't need very much to get there...

Edward



Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-23 Thread Edward Z. Yang
Hello Janek,

What happens if you do the benchmark without unsafePerformIO involved?
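
For example, something along these lines (a sketch; the binding and sizes
are made up, loosely following the C shown further down the thread):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Criterion.Main
    import Foreign.C.Types (CInt)
    import Foreign.Marshal.Array (withArray)
    import Foreign.Ptr (Ptr)

    foreign import ccall unsafe "c_copy"
      c_copy :: Ptr Double -> CInt -> IO (Ptr Double)

    main :: IO ()
    main = defaultMain
      [ bench "c_copy" $ whnfIO $
          withArray (replicate 1024 0) $ \p -> c_copy p 1024
          -- the malloc'd result is simply dropped in this sketch
      ]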

Edward

Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500 2012:
 I am using the Criterion library to benchmark C code called via FFI bindings and 
 I've run into a 
 problem that looks like a bug. 
 
 The first benchmark that uses FFI runs correctly, but subsequent benchmarks 
 run much longer. I 
 created demo code (about 50 lines, available at github: 
 https://gist.github.com/4135698 ) in 
 which C function copies a vector of doubles. I benchmark that function a 
 couple of times. First 
 run results in avarage time of about 17us, subsequent runs take about 45us. 
 In my real code 
 additional time was about 15us and it seemed to be a constant factor, not 
 relative to correct 
 run time. The surprising thing is that if my C function only allocates memory 
 and does no 
 copying:
 
 double* c_copy( double* inArr, int arrLen ) {
   double* outArr = malloc( arrLen * sizeof( double ) );
 
   return outArr;
 }
 
 then all is well - all runs take similar amount of time. I also noticed that 
 sometimes in my demo 
 code all runs take about 45us, but this does not seem to happen in my real 
 code - first run is 
 always shorter.
 
 Does anyone have an idea what is going on?
 
 Janek
 



Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-23 Thread Edward Z. Yang
Running the sample code on GHC 7.4.2, I don't see the one
fast, rest slow behavior.  What version of GHC are you running?

Edward

Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 2012:
  What happens if you do the benchmark without unsafePerformIO involved?
 I removed unsafePerformIO, changed copy to have type Vector Double -> IO 
 (Vector Double) and 
 modified benchmarks like this:
 
 bench "C binding" $ whnfIO (copy signal)
 
 I see no difference - one benchmark runs fast, remaining ones run slow.
 
 Janek
 
 
  Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500 2012:
   I am using the Criterion library to benchmark C code called via FFI bindings
   and I've run into a problem that looks like a bug.
  
   The first benchmark that uses FFI runs correctly, but subsequent
   benchmarks run much longer. I created demo code (about 50 lines,
   available at github: https://gist.github.com/4135698 ) in which C
   function copies a vector of doubles. I benchmark that function a couple
   of times. First run results in avarage time of about 17us, subsequent
   runs take about 45us. In my real code additional time was about 15us and
   it seemed to be a constant factor, not relative to correct run time.
   The surprising thing is that if my C function only allocates memory and
   does no copying:
  
   double* c_copy( double* inArr, int arrLen ) {
 double* outArr = malloc( arrLen * sizeof( double ) );
  
 return outArr;
   }
  
   then all is well - all runs take similar amount of time. I also noticed
   that sometimes in my demo code all runs take about 45us, but this does
   not seem to happen in my real code - first run is always shorter.
  
   Does anyone have an idea what is going on?
  
   Janek



[Haskell-cafe] Monad.Reader #21 call for copy

2012-10-20 Thread Edward Z. Yang
Call for Copy: The Monad.Reader - Issue 21


Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader!  The submission deadline
for Issue 21 will be:

**Tuesday, January 1**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.



Re: [Haskell-cafe] Haskell with all the safeties off

2012-09-07 Thread Edward Z. Yang
Haskell already does this, to some extent, in the design of imprecise
exceptions.  But note that bottom *does* have well defined behavior, so
these optimizations are not very desirable.

Edward

Excerpts from David Feuer's message of Thu Sep 06 19:35:43 -0400 2012:
 I have no plans to do such a thing anytime soon, but is there a way to tell
 GHC to allow nasal demons to fly if the program forces bottom? This mode of
 operation would seem to be a useful optimization when compiling a program
 produced by Coq or similar, enabling various transformations that can turn
 bottom into non-bottom, eliminating runtime checks in incomplete patterns,
 etc.



Re: [Haskell-cafe] Haskell with all the safeties off

2012-09-07 Thread Edward Z. Yang
Excerpts from David Feuer's message of Fri Sep 07 12:06:00 -0400 2012:
 They're not *usually* desirable, but when the code has been proven not to
 fall into bottom, there doesn't seem to be much point in ensuring that
 things will work right if it does. This sort of thing only really makes
 sense when using Haskell as a compiler target.

OK, so it sounds like what you're more looking for is a way of giving
extra information to GHC's strictness analyzer, so that it is more
willing to unbox/skip making thunks even when the analyzer itself isn't
able to figure it out.  But it seems to me that in any such case, there
might be a way to add seq's which have equivalent effect.

Edward



Re: [Haskell-cafe] Combining State and List Monads

2012-08-25 Thread Edward Z. Yang
Ah, egg in my face, I appear to have misremembered how ListT is implemented ^_^
http://www.haskell.org/haskellwiki/ListT_done_right may be relevant.

Edward

Excerpts from Edward Z. Yang's message of Sat Aug 25 01:51:40 -0400 2012:
 Hello Henry,
 
 In such cases, it is often worth thinking about how you would implement
 such a scheme manually, without using pre-existing monads.  You will
 quickly see that the two candidate types:
 
  s -> ([a], s)
  [s -> (a, s)]
 
 both will not work (exercise: what semantics do they give?)  In fact,
 you must use continuation passing style, and you must resume the
 computation with the latest state value you would extracted from the
 last run.  See the LogicT monad for how to implement list-like monads in
 continuation passing style.
 
 Cheers,
 Edward
 
 Excerpts from Henry Laxen's message of Sat Aug 25 00:35:37 -0400 2012:
  Dear Cafe,
  
  It seems to me there should be some simple way of doing this, but thus
  far it eludes me.  I am trying to combine the State and List monads to
  do the following:
  
  countCalls = do
    a <- [1..2]
    b <- [1..2]
modify (+1)
return (a,b)
  
  
  where with some combination of ListT, StateT, List, State, or who
  knows what would result in:
  
  ([(1,1),(1,2),(2,1),(2,2)],4)
  
  assuming we initialize the state to 0
  
  Is there any way to make this happen?
  Thanks in advance.
  
  Henry Laxen
  



[Haskell-cafe] ANN: Monad.Reader Issue 20

2012-08-25 Thread Edward Z. Yang
*It’s not dead, it’s resting!*

I am pleased to announce that Issue 20 of the Monad Reader is now available.

http://themonadreader.files.wordpress.com/2012/08/issue20.pdf

Issue 20 consists of the following three articles:

- Enumeration of Tuples with Hyperplanes by Tillmann Vogt
- Understanding Basic Haskell Error Messages by Jan Stolarek
- The MapReduce type of a Monad by Julian Porter

Feel free to browse the source files. You can check out the entire repository 
using Git:

git clone https://github.com/ezyang/tmr-issue20.git

If you’d like to write something for Issue 21, please get in touch!

Cheers,
Edward



Re: [Haskell-cafe] Combining State and List Monads

2012-08-24 Thread Edward Z. Yang
Hello Henry,

In such cases, it is often worth thinking about how you would implement
such a scheme manually, without using pre-existing monads.  You will
quickly see that the two candidate types:

s -> ([a], s)
[s -> (a, s)]

both will not work (exercise: what semantics do they give?)  In fact,
you must use continuation passing style, and you must resume the
computation with the latest state value you would extracted from the
last run.  See the LogicT monad for how to implement list-like monads in
continuation passing style.
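
(For the specific example quoted below, one way to get the desired answer
without a transformer at all is to build the list purely and then traverse
it in State -- a sketch, not a general substitute for the LogicT approach:)

    import Control.Monad (forM)
    import Control.Monad.State (State, modify, runState)

    countCalls :: State Int [(Int, Int)]
    countCalls =
      forM [ (a, b) | a <- [1..2], b <- [1..2] ] $ \p -> do
        modify (+ 1)
        return p

    -- runState countCalls 0  ==  ([(1,1),(1,2),(2,1),(2,2)],4)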

Cheers,
Edward

Excerpts from Henry Laxen's message of Sat Aug 25 00:35:37 -0400 2012:
 Dear Cafe,
 
 It seems to me there should be some simple way of doing this, but thus
 far it eludes me.  I am trying to combine the State and List monads to
 do the following:
 
 countCalls = do
   a <- [1..2]
   b <- [1..2]
   modify (+1)
   return (a,b)
 
 
 where with some combination of ListT, StateT, List, State, or who
 knows what would result in:
 
 ([(1,1),(1,2),(2,1),(2,2)],4)
 
 assuming we initialize the state to 0
 
 Is there any way to make this happen?
 Thanks in advance.
 
 Henry Laxen
 



Re: [Haskell-cafe] mapping a concept to a type

2012-05-18 Thread Edward Z. Yang
 I find both heavy and redundant. The first forces me to specify if I want
 an argument or not (with the constructors MR and NR)

I'm sorry, I don't understand what you mean here.

 Do you know of a construction/abstraction that allows having or not an
 argument (a variable number of arguments, here zero or one)?

Either way you do it (Maybe, or building the variant yourself), you're
going to need a constructor. This is a feature, not a bug.
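
To make that concrete (a sketch with placeholder names; MR/NR are the
constructors mentioned above, and Arg stands in for the real argument type):

    type Arg = String

    data Rule  = MR Arg | NR          -- hand-rolled: the constructor marks the case
    data Rule' = Rule' (Maybe Arg)    -- reusing Maybe: Just/Nothing marks it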

Edward



Re: [Haskell-cafe] [Haskell-beginners] Is it only one data structure per ST monad?

2012-04-23 Thread Edward Z. Yang
If you mean, per 'ST s a', no: you can generate as many
STRefs as you want.
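
For instance (a quick sketch):

    import Control.Monad.ST
    import Data.STRef

    twoRefs :: (Int, Int)
    twoRefs = runST $ do
      r1 <- newSTRef 1
      r2 <- newSTRef 2
      modifySTRef r1 (+ 10)
      a <- readSTRef r1
      b <- readSTRef r2
      return (a, b)           -- (11, 2)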

Edward

Excerpts from KC's message of Mon Apr 23 14:32:57 -0400 2012:
 Is it only one data structure per ST monad?
 



[Haskell-cafe] Monad.Reader #20 - DEADLINE EXTENSION

2012-03-07 Thread Edward Z. Yang
Call for Copy: The Monad.Reader - Issue 20 - DEADLINE EXTENSION
---

Whether you're an established academic or have only just started learning
Haskell, if you have something to say, please consider writing an article for
The Monad.Reader!  The updated submission deadline for Issue 20 is now:

**Wednesday, March 21**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.



[Haskell-cafe] Second Monad.Reader #20 call for copy

2012-02-21 Thread Edward Z. Yang
Second Call for Copy: The Monad.Reader - Issue 20
-

Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader!  The submission deadline
for Issue 20 will be:

**Monday, March 5**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki-
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.



Re: [Haskell-cafe] Monad-control rant

2012-01-29 Thread Edward Z. Yang
Excerpts from Mikhail Vorozhtsov's message of Sun Jan 29 05:34:17 -0500 2012:
 You are trying to make bottoms the new null pointers. Sometimes it just 
 isn't worth the effort (or depends on the interpreter you use). I want 
 to have the option to say: sorry, in this particular case (monad) I 
 don't distinguish `error` from non-termination, so `catch ⊥ h = ⊥`.

This is a longstanding complaint that Robert Harper has with lazy languages
(the paucity of types complaint.)

http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/

There's not much I can say here, except that:

- There really is no difference: GHC can sometimes detect nontermination
  and will throw an exception (for example, the deadlocked exception), and

- The user will sometimes act as a termination checker, and ^C a program
  that is taking too long.

 I think it is one of the simplest layouts one can some up with. I'll try 
 to explain the motivation behind each inclusion.
 
 ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ)

I'm sorry, I cannot understand the discussion below because you haven't
defined precisely what ABORTS means.  (See also below; I think it's
time to write something up.)

 Why are they not equal? After all we can always write `recover weird $ 
 \e → abort e`, right? But zeros from `RECOVERABLE_ZEROES \ ABORTS` may 
 have additional effects. For example, recoverable interruptions could 
 permanently disable blocking operations (you can close a socket but you 
 can't read/write from it). Why the inclusion is not the other way 
 around? Well, I find the possibility of `abort e1` and `abort e2` having 
 different semantics (vs `recover` or `finally`) terrifying. If you can 
 throw unrecoverable exceptions, you should have a different function for 
 that.
 
 RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ)
 
 If a zero is recoverable, we can always finalize it (by 
 catch-and-rethrow).

 FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)
 
 This one is pretty obvious. One example of non-finalizable zeros is 
 bottoms in a non-MonadUnbottom monad (e.g. my X monad). Another would be 
 `System.Posix.Process.exitImmediately`.

Ugh, don't talk to me about the exit() syscall ;-)

  If we can unify the semantics in a sensible way, I have no objection
  (the choice of exceptions or pure values is merely an implementation
  detail.)  But it's not obvious that this is the case, especially when
  you want to vary the semantics in interesting ways.
 That's why I'm trying to make things like MonadUnbottom optional.

Well, I haven't actually checked if this works or not!

   - If the semantics are different, OK, now you need to write two catch
 functions, but you are handling each type of exception separately
 already, right?
 You have to handle IO exceptions only if you leak them from your 
 implementation. For transformer stacks it is always so, for some 
 interpreters it is not. The `ErrorT e IO` problem is related to another 
 can of worms: operation lifting through transformers.

OK.

  IO has effects, so if I have mplus (effect >> mzero) a, this equals
  effect >> a, not a.  Same applies for MaybeT IO.  I have to be very
  careful to preserve the monoid property.  STM, on the other hand,
  by definition has the ability to rollback. This is what makes it so nice!
 Should STM/`MaybeT IO` have MonadException instances? How `catch` and 
 `finally` will interact with `retry`/`MaybeT (return Nothing)`?

I don't see why not, as long as they obey the semantics.  But someone
should do the legwork here.

  I also think that unrecoverable/recoverable exceptions is a legitimate 
  idea.  I
  think it could get its own typeclass, let's call it
  MonadUnrecoverableException.  I don't think any MonadException is 
  automatically
  a MonadUnrecoverableException, by appealing to the laws of MonadException.
  I'm confused. What methods/laws would MonadUnrecoverableException contain?
 
  They'd be very simple! Unrecoverable exceptions always cause program 
  execution
  to get stuck. There are no contexts (like catch) which affect them.
 So you are suggesting something like
 
 class MonadUnrecoverableException μ where
throwUnrecoverable ∷ Exception e ⇒ e → μ α
 
 But I'm not interested in throwing such exceptions! It may not even be 
 possible (allowed) to do that from within the monad itself (e.g. 
 external interruptions in my X monad). All I care about is that 
 unrecoverable zeros (not necessarily tied with Exception) exist, which 
 means that I cannot implement `finally` on top of `catch`.

Yes, but in that case, your semantics would have to change to add a case
for finally; you'd need to unwind the stack, etc etc.  You're talking about
finalizable, but unrecoverable exceptions.

  Yes, I think for some this is the crux of the issue. Indeed, it is why
  monad-control is so appealing, it dangles in front of us the hope that
  we do, in fact, only need one API.
 
  But, if some functions in fact do need to be different between the 

Re: [Haskell-cafe] Terminology: different levels of strictness

2012-01-27 Thread Edward Z. Yang
There are some terms for these cases, but they are a bit ad hoc.
length is what we might call spine-strict; head is head-strict.

Projection analysis takes the approach that we can more precisely
characterize the strictness only by considering both what is passed
to the function as input, as well as how much is demanded by the
context (for example, you might return a tuple, one of the values
of which causes lots of evaluation, and the other of which doesn't.)
Unfortunately, this is not enough precision to consider elem.
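
For concreteness, a couple of one-liners (a sketch; undefined stands in for _|_):

    spineOnly :: Int
    spineOnly = length [undefined, undefined]  -- 2: only the spine is forced

    headOnly :: Int
    headOnly = head (3 : undefined)            -- 3: only the first cell is forced

    -- sum [3, undefined] diverges: sum demands the spine *and* the elements.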

Edward

Excerpts from Yves Parès's message of Fri Jan 27 10:13:54 -0500 2012:
 If I consider the functions head, length, elem and sum, each of them is
 strict, as:
 head/length/elem x/sum _|_ are always _|_.
 
 However:
 head (x:_|_) is never _|_.
 length [_|_, _|_, _|_ ...] is also never _|_.
 elem x [4,5,6,8,2,90,_|_,_|_ ...] is *only sometimes* _|_ (depending on x's
 value).
 In fact, only
 sum [4,5,6,8,2,90,_|_,_|_ ...] is always _|_.
 
 Which shows they don't have the same level of strictness.
 
 So can you say things like all these functions are strict, but some are *more*
 than others, or sum is *deeply strict*...?
 What terms can you use to compare those functions?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad-control rant

2012-01-24 Thread Edward Z. Yang
Excerpts from Mikhail Vorozhtsov's message of Tue Jan 24 07:26:35 -0500 2012:
  Sure, but note that evaluate for IO is implemented with seq# under the hood,
  so as long as you actually get ordering in your monad it's fairly 
  straightforward
  to implement evaluate.  (Remember that the ability to /catch/ an error
  thrown by evaluate is separate from the ability to /evaluate/ a thunk which
  might throw an error.)
 Yes, of course. The purpose of MonadUnbottom is to guarantee that 
 `Control.Exception.throw e ∷ μ α = abort (toException e)`. The choice of 
 a class method is somewhat arbitrary here (one could go with 'α → μ 
 (Either SomeException α)` or with no methods at all).

I want to highlight the strangeness of exception-like monads that don't have
a MonadUnbottom instance (for concreteness, let's assume that there are no
methods associated with it).  What would you expect this code to do?

catch (throw (UserError "Foo")) (putStrLn "caught") >> putStrLn "ignored result"

If we don't have ordering, the monad is permitted to entirely ignore the thrown
exception. (In fact, you can see this with the lazy state monad, so long as you
don't force the state value.) Just like in lazy IO, exceptions can move around
to places you don't expect them.
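
A minimal sketch of that lazy-State behaviour, using the lazy State monad from
Control.Monad.State.Lazy (the names here are made up):

    import Control.Monad.State.Lazy (State, evalState)

    ignoredError :: String
    ignoredError = evalState ((error "boom" :: State () ()) >> return "ok") ()
    -- "ok": neither the bound result nor the state is ever demanded, so the
    -- error is never forced -- the "thrown exception" is simply ignored.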

  I think your simulation is incomplete. Let's make this concrete: suppose I'm
  running one of these programs, and I type ^C (because I want to stop the
  program and do something else.)  In normal 'run' operation, I would expect
  the program to run some cleanup operations and then exit.  But there's
  no way for the simulation to do that! We've lost something here.
 I'm not sure I would want to go ^C on a power plant controlling 
 software, but OK. We could accommodate external interruptions by:
 
 1. Adding `Finally ∷ Controller α → (Maybe α → Controller β) → Command 
 (α, β)` and a `MonadFinally Controller` instance (and modifying 
 interpreters to maintain finalizer stacks):
 
 instance MonadFinally Controller where
finally' m = singleton . Finally m
 
 2. Writing more simulators with different interruption strategies (e.g. 
 using StdGen, or `interrupt ∷ PowerPlantState → Bool`, etc).

I think this scheme would work, because your interpreter slices up the actions
finely enough so that the interpreter always gets back control before some
action happens.

  Stepping back for a moment, I think the conversation here would be helped 
  if we
  dropped loaded terms like general and precise and talked about concrete
  properties:
 
   - A typeclass has more/less laws (equivalently, the typeclass 
  constrains
 what else an object can do, outside of an instance),
   - A typeclass requires an instance to support more/less operations,
   - A typeclass can be implemented for more/less objects
 
  One important point is that general is not necessarily good.  For 
  example,
  imagine I have a monad typeclass that has the referential transparency law
  (why are you using a monad?! Well, never mind that for now.)  Obviously, 
  the IO
  monad cannot be validly be an instance of this typeclass. But despite only
  admitting instances for a subset of monads, being less general, I think 
  most
  people who've bought into Haskell agree, referentially transparent code
  is good code!  This is the essential tension of generality and specificity:
  if it's too general, anything goes, but if it's too specific, it lacks 
  elegance.
 
  So, there is a definitive and tangible difference between all bottoms are 
  recoverable
  and some bottoms are recoverable.  The former corresponds to an extra law
  along the lines of I can always catch exceptions.  This reduces the
  number of objects the typeclass can be implemented for (or, if you may,
  it reduces the number of admissible implementations for the typeclass), but
  I would like to defend this as good, much like referential transparency
  is a good restriction.
 OK, what MonadUnrecoverableException exactly do you have in mind?

I don't know, I've never needed one! :^)

 I was thinking about something like (no asynchronous exceptions for now):
 
 -- ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)

Do you have a motivation behind this division?  Are there non-finalizable
but recoverable zeros? Why can't I use aborts to throw non-recoverable
or non-finalizable zeros? Maybe there should be a hierarchy of recoverability,
since I might have a top-level controller which can kill and spawn processes?
Maybe we actually want a lattice structure?

Someone has put a term for this problem before: it is an embarrassment of riches.
There is so much latitude of choice here that it's hard to know what the right
thing to do is.

 -- RECOVERABLE_ZEROS = zeros recoverable /by `recover`/.
 -- e.g. `mzero` may not be in RECOVERABLE_ZEROS, even though it is
 -- recoverable by `mplus`.
 class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
-- let eval m = recover (Right $ m) (return . Left)
-- LAWS:

Re: [Haskell-cafe] Monad-control rant

2012-01-21 Thread Edward Z. Yang
Excerpts from Mikhail Vorozhtsov's message of Sat Jan 21 09:25:07 -0500 2012:
  But I also believe that you can't use this as justification to stick your
  head in the sand, and pretend bottoms don't exist (regardless of whether or
  not we're talking about asynchronous exceptions.)  The reason is that
  understanding how code behaves in the presence of bottoms tells you
  some very important information about its strictness/laziness, and this
  information is very important for managing the time and space usage of your 
  code.
 I totally agree with you. My point is that things like `evaluate` and 
 `try undefined = return (Left (ErrorCall "Prelude.undefined"))` are 
 magic and should not be taken for granted. Bottoms are everywhere, but 
 the ability to distinguish them from normal values is special. We could 
 have a separate abstraction for this ability:
 
 class MonadAbort SomeException μ ⇒ MonadUnbottom μ where
-- evaluate a = abort (toException e), if WHNF(a) = throw e
--  return WHNF(a), otherwise
-- join (evaluate m) = m, ensures that `undefined ∷ μ α = abort ...`
evaluate ∷ α → μ α
 
 or something like that.

Sure, but note that evaluate for IO is implemented with seq# under the hood,
so as long as you actually get ordering in your monad it's fairly 
straightforward
to implement evaluate.  (Remember that the ability to /catch/ an error
thrown by evaluate is separate from the ability to /evaluate/ a thunk which
might throw an error.)
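
A rough sketch only (the real Control.Exception.evaluate uses seq# precisely so
that the forcing is sequenced with the surrounding IO, which plain seq does not
fully guarantee):

    evaluateM :: Monad m => a -> m a
    evaluateM a = a `seq` return a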

  The identity monad for which error "FOO" is a left zero is a legitimate 
  monad:
  it's the strict identity monad (also known as the 'Eval' monad.)  Treatment
  of bottom is a part of your abstraction!
 
  (I previously claimed that we could always use undefined :: m a as a left 
  zero,
  I now stand corrected: this statement only holds for 'strict' monads, a 
  moniker which
  describes IO, STM and ST, and any monads based on them. Indeed, I'll stick 
  my neck
  out and claim any monad which can witness bottoms must be strict.)
 Bottoms may be zeros in strict monads, but they are not necessarily 
 recoverable. `runX (recover (True $ undefined) (const $ return False))` 
 may be equivalent to `undefined`, not to `runX (return False)`. See my 
 example of such IO based X below. I want to have options.

I think this touches on a key disagreement, which is that I think that in 
IO-like
monads you need to be able to recover from bottoms. See below.

 Let's consider the following X (using the `operational` library):
 
 data Command α where
-- Note that we do not employ exceptions here. If command stream
-- transport fails, the interpreter is supposed to start the
-- appropriate emergency procedures (or switch to a backup channel).
-- Sending more commands wouldn't help anyway.
AdjustControlRods ∷ Height → Command Bool
AdjustCoolantFlow ∷ ...
AdjustSecondaryCoolantFlow ∷ ...
RingTheAlarm ∷ ...
-- We could even add an unrecoverable (/in the Controller monad/)
-- error and the corresponding finally command.
...
ReadCoolantFlow ∷ ...
...
 
 type Controller = Program Command
 
 -- Run a simulation
 simulate ∷ PowerPlantState → Controller α → (α, PowerPlantState)
 -- Run for real
 run ∷ Controller α → IO α
 
 type X = ErrorT SomeException Controller
 
 So the effects here are decoupled from control operations. Would you 
 still say that finalizers are useless here because exception handling is 
 implemented by pure means?

I think your simulation is incomplete. Let's make this concrete: suppose I'm
running one of these programs, and I type ^C (because I want to stop the
program and do something else.)  In normal 'run' operation, I would expect
the program to run some cleanup operations and then exit.  But there's
no way for the simulation to do that! We've lost something here.

  You are free to create another interface that supports unrecoverable
  exceptions, and to supply appropriate semantics for this more complicated
  interface. However, I don't think it's appropriate to claim this interface
  is appropriate for IO style exceptions, which are (and users expect) to 
  always
  be recoverable.
  Why exactly not? I think that everything useful written with this
  assumption in mind can be rewritten to use `finally`, just like I did
  with `withMVar` (the version in `base` actually uses `onException`).
 
  I think the argument here is similar to your argument: saying that all
  IO exceptions are recoverable is more precise.  An interface that specifies
  recoverable exceptions is more precise than an interface that specifies
  recoverable *and* unrecoverable exceptions.
 There is a crucial difference here. In the MonadZero case, being more 
 precise means having a smaller language (only `mzero`, no `mplus`) and 
 less laws. Which means that the code you write is more general 
 (applicable to a greater number of monads). But in the 
 recoverable/unrecoverable case being more precise means having 

Re: [Haskell-cafe] partial type annotations

2012-01-19 Thread Edward Z. Yang
Oleg has described a grody hack which achieves this effect.

http://okmij.org/ftp/Haskell/types.html#partial-sigs

I agree more first class support for this would be nice.

Edward

Excerpts from Nicholas Tung's message of Thu Jan 19 15:37:28 -0500 2012:
 Dear all,
 
 I wanted to voice support for a partial type annotations. Here's my
 usage scenario: I have a monad for an imperative EDSL, which has an
 associated expression data type,
 
 class (Monad m, Expression (ExprTyp m)) => MyDSLMonad m where
     data ExprTyp m :: * -> *
 
 and you write imperative EDSL code like so,
 
 my_code_block = do
     x <- instruction1
     y <- instruction2 (x + x)
 ...
 
 I want the user to be able to annotate x is an Integer. However, to
 do that now, one has to now add a type signature for my_code_block like so,
 so that the $m$ variable is in scope,
 
 my_code_block :: forall m. MyDSLMonad m => m ()
 my_code_block = do
     x :: ExprTyp m Integer <- instruction1
 ...
 
 If such a feature were available, one could write a nice type synonym
 Expr and use it like so,
 
 type Expr a = ExprTyp _ a
 
 my_code_block = do
     x :: Expr Integer <- instruction1
 
 Suggestions for workarounds are appreciated. I created an
 `asExprTypeOf`, similar to Prelude's `asTypeOf`, but I don't like the
 syntax as much.
 
 Some previous discussion
 * http://www.haskell.org/pipermail/haskell/2002-April/009409.html
 * (a reply) http://www.haskell.org/pipermail/haskell/2002-April/009413.html
 * http://hackage.haskell.org/trac/haskell-prime/wiki/PartialTypeAnnotations
 
 cheers,
 Nicholas — https://ntung.com — 4432-nstung

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread Edward Z. Yang
Hello Gregory,

The original (1998!) conversation can be found here:

http://www.mail-archive.com/haskell@haskell.org/msg03002.html

I think Simon Peyton-Jones' example really sums up the whole issue:

But [MonadZero] really sticks in my craw.  How can we explain this:

f :: Monad m => m (a,b) -> m a
f m1 = do { x <- m1; return (fst x) }

g :: MonadZero m => m (a,b) -> m a
g m1 = do { (a,b) <- m1; return a }

h :: Monad m => m (a,b) -> m a
h m1 = do { ~(a,b) <- m1; return a }

Why must g be in MonadZero?  Because the pattern (a,b) is refutable (by
bottom).

In my opinion, the /flexibility/ that was added by mfail was the real
mistake; we should have just had incomplete <- matches be handled the same
way ordinary incomplete pattern matches were accommodated, and figured out
how to nicely allow for multiple patterns in do-notation.  In other words,
MonadZero has no place in dealing with pattern match failure!
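
For reference, a hand-expanded sketch of what the adopted design desugars a
failable pattern into (the error string is illustrative; in today's GHC the
fallback lives in MonadFail rather than in Monad itself, but the shape is the
same):

    -- equivalent to:  do { Just x <- m; return x }
    firstJust :: MonadFail m => m (Maybe a) -> m a
    firstJust m = m >>= \v -> case v of
      Just x -> return x
      _      -> fail "Pattern match failure in do expression"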

But this ship has long sailed.

Cheers,
Edward

Excerpts from Gregory Crosswhite's message of Thu Jan 19 21:47:42 -0500 2012:
 Today I learned (tldr; TIL) that the fail in the Monad class was added
 as a hack to deal with the consequences of the decision to remove
 unfailable patterns from the language.  I will attempt to describe the
 story as I have picked it up from reading around, but please feel free
 to correct me on the details.  :-)
 
 An unfailable pattern (which is a generalization of an irrefutable
 pattern) is a pattern which can never fail (excluding the possibility of
 _|_), such as
 
 let (x,y) = pair
 
 Before fail was a method of the Monad class, using refutable patterns
 in a monad required the type to be an instance of MonadZero (that is,
 MonadPlus without the plus), so that for example
 
 do Just x - m
 
 required that the monad be an instance of MonadZero.  If you avoided
 such patterns, your Monad did not have to have this instance, so that
 for example
 
 do (x,y) - pair
 
 would not require MonadZero because the pattern is unfailable.
 
 To me this seems like a lovely way of handling the whole matter, and
 much improved over the incredibly ugly wart of having a fail method in
 the Monad class.  In fact, I think I remember people on this list and in
 other forums occasionally bringing something like this approach up as a
 way of getting rid of the fail wart.
 
 So my question is, why did we go to all of the trouble to transition
 away from the MonadZero approach to the current system to begin with? 
 What was so bad about unfailable patterns that it was decided to
 remove them and in doing so replace MonadZero with a mandatory fail
 method in Monad?  I mean, this *is* Haskell, so my safest assumption is
 that smart people were involved in making this decision and therefore
 the reasons much have been really good (or at least, seemed good given
 the information at the time).  :-)
 
 Cheers,
 Greg
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread Edward Z. Yang
Oh, I'm sorry! On a closer reading of your message, you're asking not
only asking why 'fail' was added to Monad, but why unfailable patterns
were removed.

Well, from the message linked:

In Haskell 1.4 g would not be in MonadZero because (a,b) is unfailable
(it can't fail to match).  But the Haskell 1.4 story is unattractive because
a) we have to introduce the (new) concept of unfailable
b) if you add an extra constructor to a single-constructor type
   then pattern matches on the original constructor suddenly become
   failable

(b) is a real killer: suppose that you want to add a new constructor and
fix all of the places where you assumed there was only one constructor.
The compiler needs to emit warnings in this case, and not silently transform
these into failable patterns handled by MonadZero...

Edward

Excerpts from Gregory Crosswhite's message of Thu Jan 19 21:47:42 -0500 2012:
 Today I learned (tldr; TIL) that the fail in the Monad class was added
 as a hack to deal with the consequences of the decision to remove
 unfailable patterns from the language.  I will attempt to describe the
 story as I have picked it up from reading around, but please feel free
 to correct me on the details.  :-)
 
 An unfailable pattern (which is a generalization of an irrefutable
 pattern) is a pattern which can never fail (excluding the possibility of
 _|_), such as
 
 let (x,y) = pair
 
 Before fail was a method of the Monad class, using refutable patterns
 in a monad required the type to be an instance of MonadZero (that is,
 MonadPlus without the plus), so that for example
 
 do Just x - m
 
 required that the monad be an instance of MonadZero.  If you avoided
 such patterns, your Monad did not have to have this instance, so that
 for example
 
 do (x,y) - pair
 
 would not require MonadZero because the pattern is unfailable.
 
 To me this seems like a lovely way of handling the whole matter, and
 much improved over the incredibly ugly wart of having a fail method in
 the Monad class.  In fact, I think I remember people on this list and in
 other forums occasionally bringing something like this approach up as a
 way of getting rid of the fail wart.
 
 So my question is, why did we go to all of the trouble to transition
 away from the MonadZero approach to the current system to begin with? 
 What was so bad about unfailable patterns that it was decided to
 remove them and in doing so replace MonadZero with a mandatory fail
 method in Monad?  I mean, this *is* Haskell, so my safest assumption is
 that smart people were involved in making this decision and therefore
 the reasons much have been really good (or at least, seemed good given
 the information at the time).  :-)
 
 Cheers,
 Greg
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread Edward Z. Yang
Aw, that is really suboptimal.  Have you filed a bug?

Edward

Excerpts from Michael Snoyman's message of Thu Jan 19 23:29:59 -0500 2012:
 On Fri, Jan 20, 2012 at 5:23 AM, Edward Z. Yang ezy...@mit.edu wrote:
  Oh, I'm sorry! On a closer reading of your message, you're asking not
  only asking why 'fail' was added to Monad, but why unfailable patterns
  were removed.
 
  Well, from the message linked:
 
     In Haskell 1.4 g would not be in MonadZero because (a,b) is unfailable
     (it can't fail to match).  But the Haskell 1.4 story is unattractive 
   because
             a) we have to introduce the (new) concept of unfailable
             b) if you add an extra constructor to a single-constructor type
                then pattern matches on the original constructor suddenly 
  become
                failable
 
  (b) is a real killer: suppose that you want to add a new constructor and
  fix all of the places where you assumed there was only one constructor.
  The compiler needs to emit warnings in this case, and not silently transform
  these into failable patterns handled by MonadZero...
 
 But wait a second... this is exactly the situation we have today!
 Suppose I write some code:
 
 data MyType = Foo
 
 test myType = do
     Foo <- myType
     return ()
 
 As expected, no warnings. But if I change this unfailable code above
 to the following failable version:
 
 data MyType = Foo | Bar
 
 test myType = do
     Foo <- myType
     return ()
 
 I *still* get no warnings! We didn't make sure the compiler spits out
 warnings. Instead, we guaranteed that it *never* will. This has
 actually been something that bothers me a lot. Whereas everywhere else
 in my pattern matching code, the compiler can make sure I didn't make
 some stupid mistake, in do-notation I can suddenly get a runtime
 error.
 
 My opinion is we should either reinstate the MonadZero constraint, or
 simply can failable pattern matches.
 
 Michael

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread Edward Z. Yang
It's not obvious that this should be turned on by -Wall, since
you would also trigger errors on uses like:

[ x | Just x <- xs ]

T_T

But I do think it ought to be an option.

Cheers,
Edward

Excerpts from Michael Snoyman's message of Thu Jan 19 23:52:10 -0500 2012:
 On Fri, Jan 20, 2012 at 6:41 AM, Edward Z. Yang ezy...@mit.edu wrote:
  Aw, that is really suboptimal.  Have you filed a bug?
 
 I think it's a feature, not a bug. When dealing with monads that
 provide nice[1] implementations of `fail`, you can (ab)use this to
 avoid writing a bunch of case expressions. I remember reading it in
 one of the first tutorials on Haskell I looked at (four years ago now?
 you can see how much this bothered me if I still remember that).
 
 I admit that there are some use cases where the current behavior is
 convenient, but I think we're paying too steep a price. If we got rid
 of this feature entirely, we could (a) get rid of fail and (b) have
 the compiler warn us about a bunch of errors at compile time.
 
 But maybe I should file a feature request: provide an extra warning
 flag (turned on by -Wall) that will warn when you match on a failable
 pattern. Essentially, I would want:
 
 SomeConstr args <- someAction
 
 to be interpreted as:
 
 temp <- someAction
 case temp of
     SomeConstr args ->
 
 Michael
 
 [1] For some people's definition of nice, not mine.
 
 
  Edward
 
  Excerpts from Michael Snoyman's message of Thu Jan 19 23:29:59 -0500 2012:
  On Fri, Jan 20, 2012 at 5:23 AM, Edward Z. Yang ezy...@mit.edu wrote:
   Oh, I'm sorry! On a closer reading of your message, you're asking not
   only asking why 'fail' was added to Monad, but why unfailable patterns
   were removed.
  
   Well, from the message linked:
  
      In Haskell 1.4 g would not be in MonadZero because (a,b) is unfailable
      (it can't fail to match).  But the Haskell 1.4 story is unattractive 
    because
              a) we have to introduce the (new) concept of unfailable
              b) if you add an extra constructor to a single-constructor 
   type
                 then pattern matches on the original constructor suddenly 
   become
                 failable
  
   (b) is a real killer: suppose that you want to add a new constructor and
   fix all of the places where you assumed there was only one constructor.
   The compiler needs to emit warnings in this case, and not silently 
   transform
   these into failable patterns handled by MonadZero...
 
  But wait a second... this is exactly the situation we have today!
  Suppose I write some code:
 
      data MyType = Foo
 
      test myType = do
          Foo <- myType
          return ()
 
  As expected, no warnings. But if I change this unfailable code above
  to the following failable version:
 
      data MyType = Foo | Bar
 
      test myType = do
          Foo <- myType
          return ()
 
  I *still* get no warnings! We didn't make sure the compiler spits out
  warnings. Instead, we guaranteed that it *never* will. This has
  actually been something that bothers me a lot. Whereas everywhere else
  in my pattern matching code, the compiler can make sure I didn't make
  some stupid mistake, in do-notation I can suddenly get a runtime
  error.
 
  My opinion is we should either reinstate the MonadZero constraint, or
  simply can failable pattern matches.
 
  Michael

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Lifting IO (IO a) -> IO a to m (m a) -> m a with monad-control

2012-01-18 Thread Edward Z. Yang
Hello folks,

I was curious whether or not it is possible to lift an arbitrary
IO (IO a) -> IO a function to MonadBaseControl IO m => m (m a) -> m a.
That is, implement a function:

liftJoin :: MonadBaseControl mb m
         => (mb (mb (StM m a)) -> mb (StM m a)) -> m (m a) -> m a

The difficulty seems to be that we can't extract the resumable state from the 
inner
base monad action.

If this is not possible, is there a suitable strengthening of MonadBaseControl
that achieves this effect?

Cheers,
Edward

[1] 
http://hackage.haskell.org/packages/archive/monad-control/latest/doc/html/Control-Monad-Trans-Control.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad-control rant

2012-01-18 Thread Edward Z. Yang
Excerpts from Mikhail Vorozhtsov's message of Wed Jan 18 08:47:37 -0500 2012:
  Well, that's the kind of language we live in.  The denotation of our 
  language
  always permits for bottom values, and it's not a terribly far jump from 
  there
  to undefined and error foo.  I don't consider the use of these facilities
  to be a trap door.
 Non-termination is a bug (when termination is expected) and I wish that 
 `undefined` and `error` would be interpreted as bugs too (when a value 
 is expected). Putting asynchronous exceptions aside, in some situations 
 it might be useful to recover from bugs, but they should not be treated 
 like /errors/, something that is expected to happen. At least I like to 
 think this way when `error`s meet monads. For example, what is the 
 meaning of `error` in this piece:
 
 nasty ∷ Monad μ ⇒ μ ()
 nasty = error "FOO" >> return ()
 
 runIdentity nasty ~ () -- error is not necessarily a left zero!
 runIdentity $ runMaybeT nasty ~ error
 
 It's just slipping through abstraction and doing what it wants.

I can't argue with error should be used sparingly, and usually in cases
where there is an indication of developer error, rather than user error.
It's good, sound advice.

But I also believe that you can't use this as justification to stick your
head in the sand, and pretend bottoms don't exist (regardless of whether or
not we're talking about asynchronous exceptions.)  The reason is that
understanding how code behaves in the presence of bottoms tells you
some very important information about its strictness/laziness, and this
information is very important for managing the time and space usage of your 
code.

The identity monad for which error "FOO" is a left zero is a legitimate monad:
it's the strict identity monad (also known as the 'Eval' monad.)  Treatment
of bottom is a part of your abstraction!
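
A sketch of such a strict identity ('Eval'-style) monad; the seq in (>>=) is
what turns a bottom like error "FOO" into a left zero:

    newtype Eval a = Eval { runEval :: a }

    instance Functor Eval where
      fmap f (Eval a) = Eval (f a)

    instance Applicative Eval where
      pure = Eval
      Eval f <*> Eval a = Eval (f a)

    instance Monad Eval where
      Eval a >>= k = a `seq` k a

    -- runEval (error "FOO" >> return ()) diverges here, whereas
    -- runIdentity (error "FOO" >> return ()) happily yields ().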

(I previously claimed that we could always use undefined :: m a as a left zero,
I now stand corrected: this statement only holds for 'strict' monads, a moniker 
which
describes IO, STM and ST, and any monads based on them. Indeed, I'll stick my 
neck
out and claim any monad which can witness bottoms must be strict.)

 What is the usefulness here? Is being precise not enough?
 
 contract ∷ MonadZero μ ⇒ (α → Bool) → (β → Bool) → (α → μ β) → α → μ β
 contract pre post body x = do
unless (pre x) mzero
y ← body x
unless (post y) mzero
return y
 
 Why would I drag `mplus` here? `contract` is useful regardless of 
 whether you have a choice operation or not.

Point conceded. (Though, I probably would have used 'error' for 'contract',
since violation of pre/post-conditions is almost certainly due to developer
error.)

   - We only have three magical base monads: IO, ST and STM.  In
   ST we do not have any appreciable control over traditional IO 
  exceptions,
   so the discussion there is strictly limited to pure mechanisms of 
  failure.
 Why is this distinction necessary? Why are you trying to tie exception 
 handling idioms to the particular implementation in RTS?

The distinction I'm trying to make is between code that is pure (and cannot
witness bottoms), and code that is impure, and *can* witness bottoms.
It is true that I need language/RTS support to do the latter, but I'm
in no way tying myself to a particular implementation of an RTS: the semantics
are independent (and indeed are implemented in all of the other Haskell 
implementations.)

   - Finalizing mutable state is a very limited use-case; unlike C++
   we can't deallocate the state, unlike IO there are no external scarce
   resources, so the only thing you really might see is rolling back the
   state to some previous version, in which case you really ought not to
   be using ST for that purpose.
 Maybe. But you can. And who said that we should consider only IO, ST and 
 STM? Maybe it is a mysterious stateful monad X keeping tabs on 
 god-knows-what. Also, even though we do not deallocate memory directly, 
 having a reference to some gigantic data structure by mistake could hurt 
 too.

Claim: such a mysterious monad would have to be backed by IO/ST. (In the
case of a pure State monad, once we exit the monad all of that gets garbage
collected.)

  You are free to create another interface that supports unrecoverable
  exceptions, and to supply appropriate semantics for this more complicated
  interface. However, I don't think it's appropriate to claim this interface
  is appropriate for IO style exceptions, which are (and users expect) to 
  always
  be recoverable.
 Why exactly not? I think that everything useful written with this 
 assumption in mind can be rewritten to use `finally`, just like I did 
 with `withMVar` (the version in `base` actually uses `onException`).

I think the argument here is similar to your argument: saying that all
IO exceptions are recoverable is more precise.  An interface that specifies
recoverable exceptions is more precise than an interface that 

[Haskell-cafe] REMINDER: Hac Boston from January 20-22 at MIT

2012-01-18 Thread Edward Z. Yang
I'd like to remind everyone that Hac Boston, a Haskell hackathon is being held
January 20-22, 2012 at MIT (rooms 4-159 and 4-261) in Cambridge, MA.

The hackathon will officially kick off at 2:30 Friday afternoon, and go
until 5pm on Sunday with the occasional break for sleep.

Everyone is welcome -- you do not have to be a Haskell guru to attend!
Helping hack on someone else's project could be a great way to increase
your Haskell skills.

If you plan on coming, please officially register
(http://haskell.org/haskellwiki/Hac_Boston/Register),
even if you already put your name on the wiki.  Registration, travel, some
information about lodging and many other details can now be found on the Hac
Boston wiki http://haskell.org/haskellwiki/Hac_Boston.  Edward Kmett tells
me we still have space.

We're also looking for a few people interested in giving short (15-20 min.)
talks, probably on Saturday afternoon.  Anything of interest to the
Haskell community is fair game---a project you've been working on, a
paper, a quick tutorial.  If you'd like to give a talk, add it on the
wiki: http://haskell.org/haskellwiki/Hac_Boston/Talks.

We look forward to seeing you at MIT!

Cheers,
Edward


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad-control rant

2012-01-16 Thread Edward Z. Yang
Hello Mikhail,

Thanks for continuing to be willing to participate in a lively discussion. :-)

Excerpts from Mikhail Vorozhtsov's message of Mon Jan 16 08:17:57 -0500 2012:
 On 01/16/2012 02:15 PM, Edward Z. Yang wrote:
  Anders and I thought a little more about your example, and we first wanted 
  to
  clarify which instance you thought was impossible to write. [snip]
 I was talking about the latter instance.

Great, that's what I thought.

 And I don't want to lift IO 
 control to AIO, I want an API that works with both IO and AIO.

Yup!  But we're going to have to define what we mean by an API that works with
both IO and AIO.

 The real problem with `MonadBaseControl IO AIO` is that the interpreter 
 cuts actions into smaller pieces (at blocking operations) and then 
 reschedules them in some order. [snip]

I don't think what you've said here is inconsistent with my claim that AIO needs
to limit IO control flow, but I must admit we've been working with an
approximation of AIO because the full code hasn't been published anywhere.

 I don't see why functions like `throwIO`, `catch`, `finally`, `bracket`, 
 etc should be tied to IO or monads that allow lifting of IO actions. The 
 functions make perfect sense in `ErrorT SomeException Identity` and in 
 many other monads that have nothing to do with IO, why restrict 
 ourselves?

I agree. (Later in the message I'll propose a new MonadCatchIO instance that
drops the MonadIO superclass.)

The first section was intended to make sure I understood what you were
talking about.  Based on your response, I *think* I interpreted your problem
correctly.

  I don't think it makes too much sense to have things pick off a menu of
  Abort/Recover/Finally from a semantics perspective:
 
  It's easy to imagine monads that have an instance of one of the classes but
  not of the others
 
  I'd like to see some examples.  I hypothesize that most of such monads are
  incoherent, semantically speaking.  For example, what does it mean to have a
  monad that can recover exceptions, but for which you can't throw exceptions?
 Imagine a monad that disallows lifting of arbitrary IO actions, but can 
 receive asynchronous events (which would probably be /implemented/ on 
 top of asynchronous exceptions, but that's not important here) that 
 behave like runtime-inserted left zeros.
 
 COMPUTATIONALLY_HEAVY_CODE `recover` \level →
GIVE_AN_APPROXIMATION_INSTEAD(level)

The vehicle of implementation here is kind of important.  If they are 
implemented
as asynchronous exceptions, I can in fact still throw in this universe: I just
attempt to execute the equivalent of 'undefined :: m a'.  Since asynchronous 
exceptions
can always be thrown from pure code, I can /always/ do this, no matter how you
lock down the types.  Indeed, I think implementing this functionality on 
asynchronous
exceptions is a good idea, because it lets you handle nonterminating pure code 
nicely,
and allows you to bail out even when you're not doing monadic execution.

But, for the sake of argument, so let's suppose that they're not done as
asynchronous exceptions; essentially, you define some 'safe points' which have
the possibility to raise exceptions.  In this case, I claim there will never be
a *technical* difficulty against implementing manually thrown exceptions; the
concern here is you don't want the user to do that.  With some sets of
operations, this isn't a very strong injunction; if there is a deterministic
set of operations that results in an error, the user can make a gadget which is
semantically equivalent to a thrown exception.  I don't think I can argue 
anything
stronger here, so I concede the rest of the point.

So, to summarize, such an interface (has recovery but not masking or throwing)
always has a trivial throw instance unless you are not implementing it on top
of asynchronous exceptions.

Your example reminds me of what happens in pure code. In this context, we have
the ability to throw errors and map over errors (although I'm not sure how 
people
feel about that, semantically), but not to catch them or mask them.  But I don't
think we need another typeclass for that.

  There only a few options:
 
   - You have special primitives which throw exceptions, distinct from
 Haskell's IO exceptions.  In that case, you've implemented your own
 homebrew exception system, and all you get is a 'Catch MyException'
 which is too specific for a client who is expecting to be able
 to catch SomeExceptions.
 
   - You execute arbitrary IO and allow those exceptions to be caught.
 But then I can implement Throw: I just embed an IO action that
 is throwing an exception.
 
   - You only execute a limited subset of IO, but when they throw 
  exceptions
 they throw ordinary IO exceptions.  In this case, the client doesn't
 have access to any scarce resources except the ones you provided,
 so there's no reason for him to even need

Re: [Haskell-cafe] Monad-control rant

2012-01-15 Thread Edward Z. Yang
Hello Mikhail,

Sorry, long email. tl;dr I think it makes more sense for throw/catch/mask to
be bundled together, and it is the case that splitting these up doesn't address
the original issue monad-control was designed to solve.

~ * ~

Anders and I thought a little more about your example, and we first wanted to
clarify which instance you thought was impossible to write.

For example, we think it should be possible to write:

instance MonadBaseControl AIO AIO

Notice that the base monad is AIO: this lets you lift arbitrary AIO
operations to a transformed AIO monad (e.g. ReaderT r AIO), but no more.
If this is the instance you claimed was impossible, we'd like to try 
implementing
it.  Can you publish the full example code somewhere?

However, we don't think it will be possible to write:

instance MonadBaseControl IO AIO

Because this lets you leak arbitrary IO control flow into AIO (e.g. forkIO, with
both threads having the ability to run the current AIO context), and as you 
stated,
you only want to allow a limited subset of control flow in.  (I think this was
the intent of the original message.)

Maybe client code doesn't want to be attached to AIO base monads, though;
that's too restrictive for them. So they'd like to generalize a bit.  So let's
move on to the issue of your typeclass decomposition.

~ * ~

I don't think it makes too much sense to have things pick off a menu of
Abort/Recover/Finally from a semantics perspective:

 It's easy to imagine monads that have an instance of one of the classes but
 not of the others

I'd like to see some examples.  I hypothesize that most of such monads are
incoherent, semantically speaking.  For example, what does it mean to have a
monad that can recover exceptions, but for which you can't throw exceptions?
There only a few options:

- You have special primitives which throw exceptions, distinct from
  Haskell's IO exceptions.  In that case, you've implemented your own
  homebrew exception system, and all you get is a 'Catch MyException'
  which is too specific for a client who is expecting to be able
  to catch SomeExceptions.

- You execute arbitrary IO and allow those exceptions to be caught.
  But then I can implement Throw: I just embed an IO action that
  is throwing an exception.

- You only execute a limited subset of IO, but when they throw exceptions
  they throw ordinary IO exceptions.  In this case, the client doesn't
  have access to any scarce resources except the ones you provided,
  so there's no reason for him to even need this functionality, unless
  he's specifically coding against your monad.

What does it mean to not have a Finally instance, but a Recover and Throw
instance?  Well, I can manually reimplement finally in this case (with or
without support for asynchronous exceptions, depending on whether or not Mask
is available): this is how the paper does it (finally is not a primitive.)
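
For instance, a sketch of finally rebuilt from catch/throw alone, ignoring
asynchronous exceptions (the real finally/bracket also need mask; finally_ is
just an illustrative name):

    import Control.Exception (SomeException, catch, throwIO)

    finally_ :: IO a -> IO b -> IO a
    finally_ action finalizer = do
      r <- action `catch` \e -> do
             _ <- finalizer
             throwIO (e :: SomeException)
      _ <- finalizer
      return r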

What does it mean to have a monad that can throw exceptions, but not catch them?
This is any of the usual monads that can fail, of which we have many.  And of 
course,
you can't allow this in the presence of scarce resources since there is no way 
to
properly deallocate them when exceptions are thrown.  So it seems this is just 
ordinary
failure which cannot be used in the presence of arbitrary IO.

What does it mean to have all of the above, but not to have a mask instance?
One approach is to pretend asynchronous exceptions do not exist.  As you do in 
your
example, we can simply mask.  I think this is a bit to give up, but I'll 
concede it.
However, I don't think it's acceptable not to provide mask functionality, not 
mask
your interpreter, and allow arbitrary IO.  It's now impossible to properly 
implement
many patterns without having subtle race conditions.

So it seems we should collapse these into one class, which conveniently maps 
straight
to the semantics defined in Asynchronous Exceptions in Haskell.

class MonadAsyncExc m where
    mask :: ((forall a. m a -> m a) -> m b) -> m b
    throw :: SomeException -> m ()
    catch :: m a -> (SomeException -> m a) -> m a

But you get to have your cake and eat it too: if you define a monad which is 
guaranteed
to be run with asynchronous exceptions masked, you can define the 'mask' 
function
to be a no-op and not violate any laws! Hooray!
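
Concretely, a little sketch of that no-op trick (the class is restated under
primed names, and the invented Masked newtype is assumed to be run by something
that wraps the whole computation in Control.Exception.mask):

    {-# LANGUAGE GeneralizedNewtypeDeriving, RankNTypes #-}
    import qualified Control.Exception as E

    class Monad m => MonadAsyncExc' m where
      mask'  :: ((forall a. m a -> m a) -> m b) -> m b
      throw' :: E.SomeException -> m ()
      catch' :: m a -> (E.SomeException -> m a) -> m a

    newtype Masked a = Masked { runMasked :: IO a }
      deriving (Functor, Applicative, Monad)

    instance MonadAsyncExc' Masked where
      mask' f             = f id                    -- no-op: already masked
      throw' e            = Masked (E.throwIO e)
      catch' (Masked m) h = Masked (m `E.catch` (runMasked . h))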

But this is in fact what MonadCatchIO was, except that MonadCatchIO was
formulated when we still had block/unblock and it required MonadIO.  So a
useful endeavour would be to punt the MonadIO superclass constraint and fix the
definitions, and we have something that is usable to your case.

  ~ * ~

To contextualize this whole discussion, recall the insiduous problem that
*motivated* the creation of monad-control.  Suppose that we've done all of the
hard work and lifted all of the Control.Exception functions to our 

Re: [Haskell-cafe] Monad-control rant

2012-01-10 Thread Edward Z. Yang
Excerpts from Mikhail Vorozhtsov's message of Tue Jan 10 09:54:38 -0500 2012:
 On 01/10/2012 12:17 AM, Edward Z. Yang wrote:
  Hello Mikhail,
 Hi.
 
  (Apologies for reviving a two month old thread). Have you put some thought 
  into
  whether or not these extra classes generalize in a way that is not /quite/ 
  as
  general as MonadBaseControl (so as to give you the power you need) but still
  allow you to implement the functionality you are looking for? I'm not sure 
  but
  it seems something along the lines of unwind-protect ala Scheme might be
  sufficient.
 I'm not sure I'm following you. The problem with MonadBaseControl is 
 that it is /not/ general enough.

Sorry, I misspoke.  The sense you are using it in is: the more general a type class
is, the more instances you can write for it. I think the design goal I'm going
for here is, a single signature which covers MonadAbort/Recover/Finally in a
way that unifies them.  Which is not more general, except in the sense that it
contains more type classes (certainly not general in the mathematical sense.)

 It assumes that you can eject/inject 
 all the stacked effects as a value of some data type. Which works fine 
 for the standard transformers because they are /implemented/ this way. 
 But not for monads that are implemented in operational style, as 
 interpreters, because the interpreter state cannot be internalized. This 
 particular implementation bias causes additional issues when the lifted 
 operation is not fully suited for ejecting/injecting. For example the 
 `Control.Exception.finally` (or unwind-protect), where we can neither 
 inject (at least properly) the effects into nor eject them from the 
 finalizer. That's why I think that the whole "lift operations from the 
 bottom" approach is wrong (the original goal was to lift 
 `Control.Exception`). The right way would be to capture the control 
 semantics of IO as a set of type classes[1] and then implement the 
 general versions of the operations you want to lift. That's what I tried 
 to do with the monad-abort-fd package.

I think this is generally a useful goal, since it helps define the semantics
of IO more sharply.  However, the exceptions mechanism is actually fairly
well specified, as far as semantics go; see "A Semantics for Imprecise
Exceptions" and "Asynchronous Exceptions in Haskell".  So I'm not sure if
monad-abort-fd achieves the goal of expressing these interfaces, in
typeclass form, as well as allowing users to interoperate cleanly with
existing language support for these facilities.

 [1] Which turn out to be quite general: MonadAbort/Recover/Finally are 
 just a twist of MonadZero/MonadPlus

Now that's interesting! Is this an equivalence, e.g. MonadZero/MonadPlus
imply MonadAbort/Recover/Finally and vice-versa, or do you need to make
some slight modifications?  It seems that you somehow need support for
multiple zeros of the monad, as well as a way of looking at them.

 MonadMask is expectedly more specific, but permits a nice no-op
 implementation.

(See my earlier comments about asynchronous exceptions.)

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad-control rant

2012-01-09 Thread Edward Z. Yang
Hello Mikhail,

(Apologies for reviving a two month old thread). Have you put some thought into
whether or not these extra classes generalize in a way that is not /quite/ as
general as MonadBaseControl (so as to give you the power you need) but still
allow you to implement the functionality you are looking for? I'm not sure but
it seems something along the lines of unwind-protect ala Scheme might be
sufficient.

Edward

Excerpts from Mikhail Vorozhtsov's message of Mon Nov 14 01:25:34 -0500 2011:
 On 11/14/2011 06:55 AM, Bas van Dijk wrote:
  Hi Mikhail,
 
  your type class:
 
  class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
 recover ∷ μ α → (e → μ α) → μ α
 
  looks a lot like the MonadCatchIO type class from MonadCatchIO-transformers:
 
  class MonadIO m => MonadCatchIO m where
     catch   :: E.Exception e => m a -> (e -> m a) -> m a
 
  I haven't looked at your code in detail but are you sure your
  continuation based AIO monad doesn't suffer from the same unexpected
  behavior as the ContT monad transformer with regard to catching and
  handling exceptions?
 Yes, I'm sure. The reason why it works is because finally/bracket/etc 
 are not implemented on top of 'recover' (i.e. they don't assume that 
 throwing an exception is the only reason control can escape). The 
 following class takes care of it:
 
 class (Applicative μ, Monad μ) ⇒ MonadFinally μ where
finally' ∷ μ α → (Maybe α → μ β) → μ (α, β)
finally ∷ μ α → μ β → μ α
finally m = fmap fst . finally' m . const
 
 Finalizers have type 'Maybe α → μ β' so we can
 
 (a) Thread transformer side effects properly:
 
 instance MonadFinally μ ⇒ MonadFinally (L.StateT s μ) where
finally' m f = L.StateT $ \s → do
  ~(~(mr, _), ~(fr, s'')) ← finally' (L.runStateT m s) $ \mbr → do
let ~(a, s') = case mbr of
   Just ~(x, t) → (Just x, t)
   Nothing → (Nothing, s)
L.runStateT (f a) s'
  return ((mr, fr), s'')
 
 (b) Detect that control escaped computation before producing a result 
 (finalizer will be called with 'Nothing' in that case).
 
 instance (MonadFinally μ, Error e) ⇒ MonadFinally (ErrorT e μ) where
finally' m f = ErrorT $ do
  ~(mr, fr) ← finally' (runErrorT m) $ \mbr →
runErrorT $ f $ case mbr of
  Just (Right a) → Just a
  _ → Nothing
   return $ (,) <$> mr <*> fr
 
 That of course does not mean that I can use 'finally' and friends with 
 ContT, but I can use them with monads which are carefully /implemented/ 
 on top of ContT but do not expose its full power to the users.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monad.Reader #20 call for copy

2012-01-04 Thread Edward Z. Yang
I'm pleased to announce that I, Edward Z. Yang, will be taking
over Brent Yorgey's role as lead editor of the Monad Reader!

Call for Copy: The Monad.Reader - Issue 20


Whether you're an established academic or have only just started
learning Haskell, if you have something to say, please consider
writing an article for The Monad.Reader!  The submission deadline
for Issue 20 will be:

**Monday, March 5**

The Monad.Reader


The Monad.Reader is an electronic magazine about all things Haskell. It
is less formal than a journal, but somehow more enduring than a wiki
page. There have been a wide variety of articles: exciting code
fragments, intriguing puzzles, book reviews, tutorials, and even
half-baked research ideas.

Submission Details
~~

Get in touch with me if you intend to submit something -- the sooner
you let me know what you're up to, the better.

Please submit articles for the next issue to me by e-mail (ezy...@mit.edu).

Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you
used. The sources will be released together with the magazine under a
BSD license.

If you would like to submit an article, but have trouble with LaTeX
please let me know and we'll work something out.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Edward Z. Yang
Here's an alternative perspective to consider: consider some
data structure, such as a queue.  There are two ways you can
implement this, one the imperative way, with mutators, and the
other the purely functional way, with no destructive updates.

The question then, I ask, is how easy does a programming language
make it to write the data structure in the latter fashion?  How
easy is it for you to cheat?
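
As a sketch of the purely functional alternative, here is the classic two-list
queue, with no destructive updates anywhere:

    data Queue a = Queue [a] [a]

    emptyQ :: Queue a
    emptyQ = Queue [] []

    push :: a -> Queue a -> Queue a
    push x (Queue front back) = Queue front (x : back)

    pop :: Queue a -> Maybe (a, Queue a)
    pop (Queue [] [])       = Nothing
    pop (Queue [] back)     = pop (Queue (reverse back) [])
    pop (Queue (x:xs) back) = Just (x, Queue xs back)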

Edward

Excerpts from Steve Horne's message of Wed Dec 28 12:39:52 -0500 2011:
 This is just my view on whether Haskell is pure, being offered up for 
 criticism. I haven't seen this view explicitly articulated anywhere 
 before, but it does seem to be implicit in a lot of explanations - in 
 particular the description of Monads in SBCs Tackling the Awkward 
 Squad. I'm entirely focused on the IO monad here, but aware that it's 
 just one concrete case of an abstraction.
 
 Warning - it may look like trolling at various points. Please keep going 
 to the end before making a judgement.
 
 To make the context explicit, there are two apparently conflicting 
 viewpoints on Haskell...
 
  1. The whole point of the IO monad is to support programming with
 side-effecting actions - ie impurity.
  2. The IO monad is just a monad - a generic type (IO actions), a couple
 of operators (primarily return and bind) and some rules - within a
 pure functional language. You can't create impurity by taking a
 subset of a pure language.
 
 My view is that both of these are correct, each from a particular point 
 of view. Furthermore, by essentially the same arguments, C is also both 
 an impure language and a pure one.
 
 See what I mean about the trolling thing? I'm actually quite serious 
 about this, though - and by the end I think Haskell advocates will 
 generally approve.
 
 First assertion... Haskell is a pure functional language, but only from 
 the compile-time point of view. The compiler manipulates and composes IO 
 actions (among other things). The final resulting IO actions are finally 
 swallowed by unsafePerformIO or returned from main. However, Haskell is 
 an impure side-effecting language from the run-time point of view - when 
 the composed actions are executed. Impurity doesn't magically spring 
 from the ether - it results from the translation by the compiler of IO 
 actions to executable code and the execution of that code.
 
 In this sense, IO actions are directly equivalent to the AST nodes in a 
 C compiler. A C compiler can be written in a purely functional way - in 
 principle it's just a pure function that accepts a string (source code) 
 and returns another string (executable code). I'm fudging issues like 
 separate compilation and #include, but all of these can be resolved in 
 principle in a pure functional way. Everything a C compiler does at 
 compile time is therefore, in principle, purely functional.
 
 In fact, in the implementation of Haskell compilers, IO actions almost 
 certainly *are* ASTs. Obviously there's some interesting aspects to that 
 such as all the partially evaluated and unevaluated functions. But even 
 a partially evaluated function has a representation within a compiler 
 that can be considered an AST node, and even AST nodes within a C 
 compiler may represent partially evaluated functions.
 
 Even the return and bind operators are there within the C compiler in a 
 sense, similar to the do notation in Haskell. Values are converted into 
 actions. Actions are sequenced. Though the more primitive form isn't 
 directly available to the programmer, it could easily be explicitly 
 present within the compiler.
 
 What about variables? What about referential transparency?
 
 Well, to a compiler writer (and equally for this argument) an identifier 
 is not the same thing as the variable it references.
 
 One way to model the situation is that for every function in a C 
 program, all explicit parameters are implicitly within the IO monad. 
 There is one implicit parameter too - a kind of IORef to the whole 
 system memory. Identifiers have values which identify where the variable 
 is within the big implicit IORef. So all the manipulation of identifiers 
 and their reference-like values is purely functional. Actual handling of 
 variables stored within the big implicit IORef is deferred until run-time.
 
 So once you accept that there's an implicit big IORef parameter to every 
 function, by the usual definition of referential transparency, C is as 
 transparent as Haskell. The compile-time result of each function is 
 completely determined by its (implicit and explicit) parameters - it's 
 just that that result is typically a way to look up the run-time result 
 within the big IORef later.
 
 What's different about Haskell relative to C therefore...
 
  1. The style of the AST is different. It still amounts to the same
 thing in this argument, but the fact that most AST nodes are simply
 partially-evaluated functions has significant practical
 

Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-24 Thread Edward Z. Yang
 1. a function f is strict if  f ⊥ = ⊥
 2. ⊥ represents any computation which does not terminate, i.e. an 
 exception or an infinite loop
 3. strict describes the denotational semantics
 
 People, could you please make up your mind already? It has been more 
 than 13 years.

I have to admit, I'm a bit confused what the complaint is.

Edward
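
A tiny illustration of definition (1), with examples of my own rather than
anything from the thread:

    ex1 :: Int
    ex1 = const 1 undefined             -- const is non-strict in its second
                                        -- argument, so this evaluates to 1

    ex2 :: Int
    ex2 = (\x -> x `seq` 1) undefined   -- seq makes the argument strict, so
                                        -- evaluating this throws

The first function is non-strict in that argument, the second is strict.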

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: hxournal-0.5.0.0 - A pen notetaking program written in haskell

2011-12-15 Thread Edward Z. Yang
When I attempt to build on Ubuntu, I get:

ezyang@javelin:~$ cabal install hxournal
Resolving dependencies...
Configuring hxournal-0.5.0.0...
Preprocessing library hxournal-0.5.0.0...
In file included from /usr/include/gtk-2.0/gdk/gdkscreen.h:32:0,
 from /usr/include/gtk-2.0/gdk/gdkapplaunchcontext.h:31,
 from /usr/include/gtk-2.0/gdk/gdk.h:32,
 from /usr/include/gtk-2.0/gtk/gtk.h:32,
 from Device.hsc:3:
/usr/include/gtk-2.0/gdk/gdktypes.h:55:23: fatal error: gdkconfig.h: No such 
file or directory
compilation terminated.
compiling dist/build/Application/HXournal/Device_hsc_make.c failed (exit code 1)
command was: /usr/bin/gcc -c dist/build/Application/HXournal/Device_hsc_make.c 
-o dist/build/Application/HXournal/Device_hsc_make.o -fno-stack-protector 
-fno-stack-protector 
-Wl,--hash-style=both,--no-copy-dt-needed-entries,--as-needed 
-D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Di386_BUILD_ARCH 
-Di386_HOST_ARCH -Icsrc -I/usr/include/glib-2.0 
-I/usr/lib/i386-linux-gnu/glib-2.0/include -I/usr/include/atk-1.0 
-I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 
-I/usr/include/gio-unix-2.0/ -I/usr/include/pixman-1 -I/usr/include/freetype2 
-I/usr/include/libpng12 -I/usr/include/libdrm -I/usr/include/gtk-2.0 
-I/usr/lib/gtk-2.0/include -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 
-I/usr/lib/i386-linux-gnu/glib-2.0/include -I/usr/include/cairo 
-I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng12 
-I/usr/include/glib-2.0 -I/usr/lib/i386-linux-gnu/glib-2.0/include 
-I/usr/include/glib-2.0 -I/usr/lib/i386-linux-gnu/glib-2.0/include 
-I/usr/include/cairo -I/usr/include/glib-2.0 
-I/usr/lib/i386-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 
-I/usr/include/freetype2 -I/usr/include/libpng12 
-I/usr/lib/ghc-7.0.3/process-1.0.1.5/include 
-I/usr/lib/ghc-7.0.3/directory-1.1.0.0/include 
-I/usr/lib/ghc-7.0.3/old-time-1.0.0.6/include 
-I/usr/lib/ghc-7.0.3/unix-2.4.2.0/include 
-I/usr/lib/ghc-7.0.3/time-1.2.0.3/include 
-I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include 
-I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include 
-I/usr/lib/ghc-7.0.3/include/
cabal: Error: some packages failed to install:
hxournal-0.5.0.0 failed during the building phase. The exception was:
ExitFailure 1

This may be of interest:

ezyang@javelin:~$ locate gdkconfig.h
/home/ezyang/Dev/gtk+/gdk/gdkconfig.h.win32
/usr/include/gtk-3.0/gdk/gdkconfig.h
/usr/lib/i386-linux-gnu/gtk-2.0/include/gdkconfig.h

Edward

Excerpts from Edward Z. Yang's message of Mon Dec 12 20:10:19 -0500 2011:
 Very fancy! I am a big fan of Xournal, so I will have to take this for a spin.
 
 Edward
 
 Excerpts from Ian-Woo Kim's message of Mon Dec 12 06:56:09 -0500 2011:
  Hi, everyone,
  
  I am very pleased to announce a pen notetaking program: hxournal,
  which is written entirely in haskell using gtk2hs.
  
   I uploaded the package on hackage. This program is accompanied by
   two library packages, xournal-parser and xournal-render, for parsing
   and rendering the xournal file format.
  
  http://hackage.haskell.org/package/hxournal
  http://hackage.haskell.org/package/xournal-parser
  http://hackage.haskell.org/package/xournal-render
  
  Installing hxournal should be very simple:
   cabal update
   cabal install hxournal
  
  hxournal can be currently regarded as a clone of xournal, which is a
  notetaking program developed in C. (See
  http://xournal.sourceforge.net)
  
   Like xournal, hxournal can take wacom tablet X11 input in subpixel units,
   which results in a very smooth notetaking experience.
  
  Currently, basic pen operations and eraser operations, file open/save
  operations, rectangular selection, cut/copy/paste operations have been
  implemented. So the application is semi-usable. The file format is
  the same as xournal but gunzipped. So to view/edit xoj files generated
  from xournal, just gunzip the xoj files and read them in hxournal.
  Gunzipped xoj files generated from hxournal are readable in xournal
  program.
  
  One NEW special feature of hxournal compared with xournal:
  This program can make a split view (horizontal and vertical and
  arbitrary combination of them) of the same document similarly to emacs
  buffers and windows. Please try Horizontal/Vertical Split in View menu
  of the program. This will be convenient when notetaking a long
  document.
  
  The git repository is located at https://www.github.com/wavewave/hxournal
  The program web page and development web/wiki pages will be announced
  soon (it will be linked from package webpage on hackage anyway) and
  the detailed manual will be presented there.
  
  Thank you for your interest.
  Enjoy haskell notetaking!
  
  Ian-Woo Kim
  

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: hxournal-0.5.0.0 - A pen notetaking program written in haskell

2011-12-12 Thread Edward Z. Yang
Very fancy! I am a big fan of Xournal, so I will have to take this for a spin.

Edward

Excerpts from Ian-Woo Kim's message of Mon Dec 12 06:56:09 -0500 2011:
 Hi, everyone,
 
 I am very pleased to announce a pen notetaking program: hxournal,
 which is written entirely in haskell using gtk2hs.
 
  I uploaded the package on hackage. This program is accompanied by
  two library packages, xournal-parser and xournal-render, for parsing
  and rendering the xournal file format.
 
 http://hackage.haskell.org/package/hxournal
 http://hackage.haskell.org/package/xournal-parser
 http://hackage.haskell.org/package/xournal-render
 
 Installing hxournal should be very simple:
  cabal update
  cabal install hxournal
 
 hxournal can be currently regarded as a clone of xournal, which is a
 notetaking program developed in C. (See
 http://xournal.sourceforge.net)
 
  Like xournal, hxournal can take wacom tablet X11 input in subpixel units,
  which results in a very smooth notetaking experience.
 
 Currently, basic pen operations and eraser operations, file open/save
 operations, rectangular selection, cut/copy/paste operations have been
 implemented. So the application is semi-usable. The file format is
 the same as xournal but gunzipped. So to view/edit xoj files generated
 from xournal, just gunzip the xoj files and read them in hxournal.
 Gunzipped xoj files generated from hxournal are readable in xournal
 program.
 
 One NEW special feature of hxournal compared with xournal:
 This program can make a split view (horizontal and vertical and
 arbitrary combination of them) of the same document similarly to emacs
 buffers and windows. Please try Horizontal/Vertical Split in View menu
 of the program. This will be convenient when notetaking a long
 document.
 
 The git repository is located at https://www.github.com/wavewave/hxournal
 The program web page and development web/wiki pages will be announced
 soon (it will be linked from package webpage on hackage anyway) and
 the detailed manual will be presented there.
 
 Thank you for your interest.
 Enjoy haskell notetaking!
 
 Ian-Woo Kim
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Recommended class instances for container type

2011-12-08 Thread Edward Z. Yang
I'd hazard that if you went to 'containers' and looked at what instances were
implemented, that would give you a good idea. :^)  (For example,
if you look at Data.Map, it has NFData, Typeable2 and Data instances.)

Edward
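
To make that concrete, a minimal sketch of a container wrapper deriving the
usual instances. It assumes GHC's DeriveDataTypeable, DeriveFunctor and
DeriveGeneric extensions plus the deepseq package; the Bag type is purely
illustrative.

    {-# LANGUAGE DeriveDataTypeable, DeriveFunctor, DeriveGeneric #-}
    import Control.DeepSeq (NFData)
    import Data.Data (Data, Typeable)
    import GHC.Generics (Generic)

    -- A trivial container: a bag of elements backed by a list.
    newtype Bag a = Bag [a]
      deriving (Eq, Ord, Show, Read, Functor, Typeable, Data, Generic)

    -- With a recent deepseq, an empty instance suffices: the default
    -- method for rnf works through the Generic representation.
    instance NFData a => NFData (Bag a)

Starting with Eq, Ord and Show and adding the rest as users ask for them is a
reasonable policy.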

Excerpts from Christoph Breitkopf's message of Thu Dec 08 11:12:06 -0500 2011:
 Hello,
 
 I'm in the process of implementing a container data type, and wonder what
 class instances are generally considered necessary. E.g. is it ok to start
 out with a Show that's adequate for debugging, or is it a 'must' to include
 instances of everything possible (Eq, Ord if possible, Read, Show, Functor,
 ...).
 
 And what about the more experimental things? Say, DeepSeq, Typeable, Data?
 I'd like to keep this simple at start, and I've admittedly not followed
 recent developments in Haskell-land (recent meaning the last 10 years or
 so. I _do_ know about hierachical modules ;-) ).
 
 OTOH, if not having such instances makes it impossible to do things the
 modern way, I'd probably take the time to implement (and maybe understand)
 them.
 
 Thanks,
 
 Chris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Partial statical linking

2011-12-01 Thread Edward Z. Yang
libgmp and libffi are external libraries not associated with
Haskell, so I don't think -static (which is for Haskell libraries)
applies to them.  You'll have the same problem with any other
sort of library of this type, like libdl and friends ;-)

Edward

Excerpts from Jason Dusek's message of Sat Nov 26 01:59:18 -0500 2011:
 Some time ago, I wrote to this list about making shared
 libraries with GHC, in such a way that the RTS was linked and
 ready to go. Recently, I've been looking a similar but, in a
 sense, opposite problem: linking Haskell executables with some
 of their non-Haskell dependencies, for distribution.
 
 I tried passing a few different sets of options to the linker
 through GHC, with -optl:
 
   -optl'-Wl,-r'
   -optl'-Wl,-r,-dy'
   -optl'-Wl,-static,-lffi,-lgmp,-dy'
 
 None of these had the desired effect. In the end, running GHC
 with -v and carefully editing the linker line produced the
 desired change (I have linked to and provided the diff below).
 
 The effect -optl seems to be to introduce options in the linker
 line just before -lHSrtsmain, which would seem to prevent one
 from linking libffi and libgmp differently. Is editing and
 storing away the linker script the best option at present for
 partially static linking?
 
 --
 Jason Dusek
 ()  ascii ribbon campaign - against html e-mail
 /\  www.asciiribbon.org   - against proprietary attachments
 
 
 
 
 https://github.com/solidsnack/arx/commit/90ec5efdb0e991344aa9a4ad29456d466e022c3e
 #@@ -122,10 +122,8 @@
 #   -lHSarray-0.3.0.2 \
 #   -lHSbase-4.3.1.0 \
 #   -lHSinteger-gmp-0.2.0.3 \
 #-  -lgmp \
 #   -lHSghc-prim-0.2.0.0 \
 #   -lHSrts \
 #-  -lffi \
 #   -lm \
 #   -lrt \
 #   -ldl \
 #@@ -136,4 +134,7 @@
 #   -lgcc_s --no-as-needed \
 #   /usr/lib/gcc/x86_64-linux-gnu/4.6.1/crtend.o \
 #   /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/crtn.o \
 #+  -static \
 #+  -lgmp \
 #+  -lffi \
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] SMP parallelism increasing GC time dramatically

2011-10-05 Thread Edward Z. Yang
Ketil,

For your particular problem, unevaluated thunks should be easy
to check: dump a heap profile and look for a decreasing allocation
of thunks.

That being said, IntMap is spine strict, so that will all be evaluated,
and if your threads are accessing disjoint keys there should be no
contention. If there is, yes, threads will be blocking on evaluation,
I don't have a good sense for how slow that tends to be.  (Cache effects
may be swamping you.)

You may be interested in repa, if your maps are dense.

Edward

Excerpts from Ketil Malde's message of Wed Oct 05 17:00:11 -0400 2011:
 
 I don't know if this is relevant to your problems, but I'm currently
 struggling to get some performance out of a parallel - or rather,
 concurrent - program.
 
 Basically, the initial thread parses some data into an IntMap, and then
 multiple threads access this read-only to do the Real Work.
 
 Now, there appears to be a lot of overhead incurred when using multiple
 threads, and I suspect that this is caused by the map storing
 unevaluated thunks, which then are forced by accesses by the worker
 threads.  Ideally, the evaluation should be performed in parallel, but
 perhaps there are issues (of synchronization, say) that makes this less
 performant?
 
 -k

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] DSL for data definition (e.g. compiling Haskell type defs into Google's protocol buffers type defs)

2011-10-04 Thread Edward Z. Yang
Just making sure: have you looked at the Data.Data module yet?

Edward
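
A minimal sketch of what Data.Data already recovers from a record like the
Person type in the message below: field names come from the constructor and
field types from the values. This is my own illustration, not a full
emit_proto (the id field is renamed pid here only to avoid shadowing
Prelude.id).

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Data

    data Person = Person { pid :: Int, name :: String, email :: Maybe String }
      deriving (Show, Data, Typeable)

    -- Pair each field name with a printable representation of its type.
    fieldInfo :: Data a => a -> [(String, String)]
    fieldInfo x = zip (constrFields (toConstr x)) (gmapQ (show . typeOf) x)

    -- ghci> fieldInfo (Person 1 "Karel" Nothing)
    -- [("pid","Int"),("name","[Char]"),("email","Maybe [Char]")]

From there, emitting the proto text is mostly string formatting.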

Excerpts from Karel Gardas's message of Tue Oct 04 12:02:34 -0400 2011:
 
 Hello,
 
 I'm trying to find out if it's possible to use Haskell data type 
 definition capability to define types and compile defined types into 
 other languages, for example into Google's protocol buffers data 
 definition language. So basically speaking I'm thinking about using 
 Haskell sub-set as a data-definition DSL together with some functions 
 which will generate some code based on supplied defined data types. My 
 idea is:
 
 data Person = Person {
  id :: Int
  , name :: String
  , email :: Maybe String
  }
  deriving (Show, Data, Typeable)
 
 emit_proto Person 1
 
 where emit_proto is function which will translate Person data type 
 definition into Google's proto language (the 1 is the index from which to 
 start indexing the type's fields) by traversing the data type definition and 
 translating all its children plus do some header/footer generation etc:
 
 message Person {
required int32 id = 1;
required string name = 2;
optional string email = 3;
 }
 
 I've looked for something like that and found SYB papers which works on 
 top of data instance (i.e. actual data, not data type). I also found 
 JSON lib which again works on top of data and not data type. I've tried 
 to look into Data.Typetable etc, but have not found function which will 
 print data type's field name and field type name (although JSON lib 
 seems to use field name for JSON generation so I'll need to investigate 
 this more). I've tested `typeOf' function and it's quite useful, but its 
 limitation is that it's not working on ADT name:
 
 data Color = RED|GREEN|BLUE
 
  *Main> typeOf Color
  
  <interactive>:1:8: Not in scope: data constructor `Color'
  
  *Main> typeOf RED
  Main.Color
 
 and I would need that in order to translate Color defined above into 
 enum like:
 
 enum Color {
RED = 0;
GREEN = 1;
BLUE = 2;
 }
 
 
 My question is: do you think I'm looking into good direction (i.e. 
 Data/Typeable) or do you think I'll need to use something different for 
 data definition DSL (Template Haskell?, or impossible in Haskell so 
 write my own language with full parser? etc?)
 
 Thanks for any idea or opinion on this!
 Karel
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is it possible to represent such polymorphism?

2011-10-02 Thread Edward Z. Yang
What are you actually trying to do?  This seems like a rather
unusual function.

Edward

Excerpts from sdiyazg's message of Sun Oct 02 15:17:07 -0400 2011:
 Finally I got what I meant:
 
 
 class ExpandTuple t where
 type Result t
 expand :: t -> Result t
 
 instance (Integral a) => ExpandTuple (a,a) where
 type Result (a,a) = (a,a,a)
 expand (x,y) = (x,y,1)
 
 instance (Integral a) => ExpandTuple (a,a,a) where
 type Result (a,a,a) = (a,a,a)
 expand = id
 
 But it's so verbose (even more so than similar C++ template code I  
 guess), introduces an additional name (the typeclass) into the current  
 scope, and requires 2 extensions: TypeFamilies and  
 FlexibleInstances. Is there a cleaner way to do this?
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Edward Z. Yang
Hello Wishnu,

That is slightly odd. What CPU and operating system are you running on?
Include Kernel versions if Linux.

Cheers,
Edward

Excerpts from Wishnu Prasetya's message of Sun Aug 14 14:11:36 -0400 2011:
 Hi guys,
 
 I'm new in parallel programming with Haskell. I made a simple test 
 program using that par combinator etc, and was a bit unhappy that it 
 turns out to be  slower than its sequential version. But firstly, I dont 
 fully understand how to read the runtime report produced by GHC with -s 
 option:
 
SPARKS: 5 (5 converted, 0 pruned)
 
INIT  time0.02s  (  0.01s elapsed)
MUT   time3.46s  (  0.89s elapsed)
GCtime5.49s  (  1.46s elapsed)
EXIT  time0.00s  (  0.00s elapsed)
Total time8.97s  (  2.36s elapsed)
 
 As I understand it from the documentation, the left time-column is the 
 CPU time, whereas the right one is elapses wall time. But how come that 
 the wall time is less than the CPU time? Isn't wall time = user's 
 perspective of time; so that is CPU time + IO + etc?
 
 Any help?
 
 --Wish.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Edward Z. Yang
Ah, good catch. :-)

Edward

Excerpts from Iustin Pop's message of Sun Aug 14 14:25:02 -0400 2011:
 On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:
  Hi guys,
  
  I'm new in parallel programming with Haskell. I made a simple test
  program using that par combinator etc, and was a bit unhappy that it
  turns out to be  slower than its sequential version. But firstly, I
  dont fully understand how to read the runtime report produced by GHC
  with -s option:
  
SPARKS: 5 (5 converted, 0 pruned)
  
INIT  time0.02s  (  0.01s elapsed)
MUT   time3.46s  (  0.89s elapsed)
GCtime5.49s  (  1.46s elapsed)
EXIT  time0.00s  (  0.00s elapsed)
Total time8.97s  (  2.36s elapsed)
  
  As I understand it from the documentation, the left time-column is
   the CPU time, whereas the right one is elapsed wall time. But how
  come that the wall time is less than the CPU time? Isn't wall time =
  user's perspective of time; so that is CPU time + IO + etc?
 
 Yes, but if you have multiple CPUs, then CPU time accumulates faster
 than wall-clock time.
 
 Based on the above example, I guess you have or you run the program on 4
 cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
 efficiency).
 
 regards,
 iustin
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Error in the asynchronous exception operational semantics

2011-08-09 Thread Edward Z. Yang
Hello all,

I was recently reading Asynchronous Exceptions as an Effect by Harrison,
Allwein, Gill and Procter, and noticed at the end that they found an error
in the operational semantics described in Asynchronous Exceptions in Haskell
by the Simons and Andrew Moran.  Does anyone know what this error was, and
whether or not it was corrected in the December 12, 2006 version of the paper
floating around on the net?

Thanks,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to ensure code executes in the context of a specific OS thread?

2011-07-04 Thread Edward Z. Yang
Sounds like something that could use a GHC Trac feature request.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] pool: Why doesn't it block?

2011-06-12 Thread Edward Z. Yang
The documentation seems to indicate that the behaviour should be blocking,
so if it's not, might be a bug.

 withResource :: MonadCatchIO m => Pool a -> (a -> m b) -> m b
 Temporarily take a resource from a Pool, perform an action with it, and
 return it to the pool afterwards. 
 
 * If the maximum number of resources has been reached, this function blocks
   until a resource becomes available. 

http://hackage.haskell.org/packages/archive/resource-pool/0.1.1.0/doc/html/Data-Pool.html

Cheers,
Edward

Excerpts from Ertugrul Soeylemez's message of Sun Jun 12 03:47:37 -0400 2011:
 Hello Michael, hello fellow haskellers,
 
 there is something, which has bothered me for quite a while, but now it
 has become a serious problem for me, because I see it as a bug, and
 there is no elegant way to work around it.
 
 I wonder if it's the right semantics for Data.Pool to simply fail with
 an exception, if the pool is exhausted.  It would be much more
 appropriate, if it would just block, until a resource becomes available.
 Otherwise it's just /safe/ for multi-threading, but not really /useful/
 for it.
 
 I noticed this when I launched 512 worker threads, but my pool had only
 16 database connections.  I need the pool to block, until a resource is
 available.
 
 It's also common that my Yesod site just returns an internal server
 error, when the pool is exhausted, so you can only handle as many
 connections successfully as there are database connections.  I would
 expect Yesod to wait for a connection to become available instead of
 simply blowing the request.
 
 Blocking should at least be an option and be somehow reachable from
 Yesod/persistent.
 
 
 Greets,
 Ertugrul
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] pool: Why doesn't it block?

2011-06-12 Thread Edward Z. Yang
Oops, my bad! :-)

Cheers,
Edward

Excerpts from Michael Snoyman's message of Sun Jun 12 05:15:41 -0400 2011:
 Wrong package, that's resource-pool (which I wasn't aware of until
 this moment). Ertugrul is referring to pool:
 http://hackage.haskell.org/packages/archive/pool/0.1.0.2/doc/html/Data-Pool.html
 .
 
 My original intention of splitting pool off from persistent was so
 others could use it. If Bryan's maintaining resource-pool instead, I'd
 have no problem deprecating pool and using resource-pool in its place.
 
 Michael
 
 On Sun, Jun 12, 2011 at 12:06 PM, Edward Z. Yang ezy...@mit.edu wrote:
  The documentation seems to indicate that the behaviour should be blocking,
  so if it's not, might be a bug.
 
   withResource :: MonadCatchIO m => Pool a -> (a -> m b) -> m b
  Temporarily take a resource from a Pool, perform an action with it, and
  return it to the pool afterwards.
  
  * If the maximum number of resources has been reached, this function blocks
    until a resource becomes available.
 
  http://hackage.haskell.org/packages/archive/resource-pool/0.1.1.0/doc/html/Data-Pool.html
 
  Cheers,
  Edward
 
  Excerpts from Ertugrul Soeylemez's message of Sun Jun 12 03:47:37 -0400 
  2011:
  Hello Michael, hello fellow haskellers,
 
  there is something, which has bothered me for quite a while, but now it
  has become a serious problem for me, because I see it as a bug, and
  there is no elegant way to work around it.
 
  I wonder if it's the right semantics for Data.Pool to simply fail with
  an exception, if the pool is exhausted.  It would be much more
  appropriate, if it would just block, until a resource becomes available.
  Otherwise it's just /safe/ for multi-threading, but not really /useful/
  for it.
 
  I noticed this when I launched 512 worker threads, but my pool had only
  16 database connections.  I need the pool to block, until a resource is
  available.
 
  It's also common that my Yesod site just returns an internal server
  error, when the pool is exhausted, so you can only handle as many
  connections successfully as there are database connections.  I would
  expect Yesod to wait for a connection to become available instead of
  simply blowing the request.
 
  Blocking should at least be an option and be somehow reachable from
  Yesod/persistent.
 
 
  Greets,
  Ertugrul
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC7 build problem

2011-06-11 Thread Edward Z. Yang
Yes, the tree was broken for some time between yesterday and today, and you
appear to have gotten unlucky.  It should have been fixed now, so you should
try again.

Cheers,
Edward

Excerpts from Scott Lawrence's message of Sat Jun 11 12:44:18 -0400 2011:
 Trying to compile GHC7 from source (as the ubuntu repository is still on
 GHC6), I came across the following error in the final phase:
 
 libraries/base/GHC/ST.lhs:78:1:
 You cannot SPECIALISE `forever'
   because its definition has no INLINE/INLINABLE pragma
   (or its defining module `Control.Monad' was compiled without -O)
 
 This was after:
 
   git clone [...]
   ./sync-all get
   perl boot
   ./configure
   make
 
 Is there some problem with trying to build GHC7 on a machine with GHC6,
 or did I just get the build procedure wrong (the last line of the error
 looks like a hint that I did).
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC7 build problem

2011-06-11 Thread Edward Z. Yang
It appears the build is still broken. Sorry! Please stand by, or roll
back the last set of commits.

Edward

Excerpts from Scott Lawrence's message of Sat Jun 11 12:44:18 -0400 2011:
 Trying to compile GHC7 from source (as the ubuntu repository is still on
 GHC6), I came across the following error in the final phase:
 
 libraries/base/GHC/ST.lhs:78:1:
 You cannot SPECIALISE `forever'
   because its definition has no INLINE/INLINABLE pragma
   (or its defining module `Control.Monad' was compiled without -O)
 
 This was after:
 
   git clone [...]
   ./sync-all get
   perl boot
   ./configure
   make
 
 Is there some problem with trying to build GHC7 on a machine with GHC6,
 or did I just get the build procedure wrong (the last line of the error
 looks like a hint that I did).
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How on Earth Do You Reason about Space?

2011-06-01 Thread Edward Z. Yang
That sounds like a plausible reason why naive copying explodes space.
Something like string interning would be good here... and since you're
hashing already...

Edward
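
A minimal sketch of the copy-if-new idea Daniel describes below, with
illustrative names; it assumes the strict Map API from containers:

    import qualified Data.ByteString.Char8 as B
    import qualified Data.Map.Strict as M

    -- Copy the word only when it is genuinely new, so the table never keeps
    -- a pointer into the original (large) input chunk alive.
    insertWord :: B.ByteString -> M.Map B.ByteString Int -> M.Map B.ByteString Int
    insertWord w m
      | w `M.member` m = M.adjust (+ 1) w m
      | otherwise      = M.insert (B.copy w) 1 m

Known words are cheap lookups; unknown words pay for one small copy.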

Excerpts from Daniel Fischer's message of Wed Jun 01 06:46:24 -0400 2011:
 On Wednesday 01 June 2011 12:28:28, John Lato wrote:
  There are a few solutions to this.  The first is to make a copy of the
  bytestring so only the required data is retained.  In my experiments
  this wasn't helpful, but it would depend on your corpus.  The second is
  to start with smaller chunks.
 
 The third, check whether the word is already known, and *make a copy if 
 not*. That should only keep the required parts (including the currently 
 processed chunk) in memory.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How on Earth Do You Reason about Space?

2011-05-31 Thread Edward Z. Yang
Hello Aleksandar,

It is possible that the iteratees library is space leaking; I recall some
recent discussion to this effect.  Your example seems simple enough that
you might recompile with a version of iteratees that has -auto-all enabled.
Unfortunately, it's not really a safe bet to assume your libraries are
leak free, and if you've pinpointed it down to a single line, and there
doesn't seem a way to squash the leak, I'd bet it's the library's fault.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Thoughts about currying and optimisations

2011-05-31 Thread Edward Z. Yang
I believe this transformation is called the 'full laziness' optimization.
It can introduce space leaks if the computationally expensive test is
replaced with a reference to a space expensive value.

Edward
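
A sketch of making the sharing in stuffA below explicit, with stand-in
definitions of my own for the expensive test and the two branches:

    expensiveTest :: Int -> Bool
    expensiveTest = even . sum . enumFromTo 1   -- stand-in for a costly check

    stuffA :: Int -> (Int -> Int)
    stuffA x =
      let passed = expensiveTest x   -- one thunk, shared by every later call
      in  \y -> if passed then y + 1 else y - 1

    -- a = stuffA 1000000; a 1 forces the test once, a 2 reuses the result.

Full laziness can perform this transformation for you, with the space-leak
caveat above.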

Excerpts from Yves Parès's message of Tue May 31 15:14:07 -0400 2011:
 Hello Café,
 An idea came to me: unless the compiler notices that stuffA and stuffB are
 equivalent, would it be correct to suppose that A is better than B?
 
 stuffA x = if someComputationallyExpensiveTest x
 then doSomething else doSomethingElse
 
 stuffB x y = if someComputationallyExpensiveTest x
 then doSomething y else doSomethingElse y
 
 I explain: in stuffA, the function only depends on x, so when doing:
 a = stuffA xxx
 runs the expensive test once and for all, and a can directly be bound to
 doSomething or doSomethingElse
 so calling after:
 a foo
 a bar
 won't run the test
 
 Whereas:
 b = stuffB xxx
 is valid due to curryfication, but:
 b foo
 b bar
 will both run the expensive test

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] runST readSTRef type error

2011-05-04 Thread Edward Z. Yang
Hello Ken,

Strictly speaking, you only need Rank-2 types.  This indeed the right
way to fix the problem. 

Cheers,
Edward

Excerpts from Ken Takusagawa II's message of Wed May 04 02:00:49 -0400 2011:
 I run into the following type error:
 
 foo :: ST s (STRef s Int) -> Int
 foo p = (runST (p >>= readSTRef))
 
 with ghc 6.12.1
 st.hs:8:16:
 Couldn't match expected type `s1' against inferred type `s'
   `s1' is a rigid type variable bound by
the polymorphic type `forall s1. ST s1 a' at st.hs:8:9
   `s' is a rigid type variable bound by
   the type signature for `foo' at st.hs:7:10
   Expected type: ST s1 (STRef s Int)
   Inferred type: ST s (STRef s Int)
 In the first argument of `(>>=)', namely `p'
 In the first argument of `runST', namely `(p >>= readSTRef)'
 
 However, if I add
 {-# LANGUAGE RankNTypes #-}
 
 and change the type signature to
 foo :: (forall s. ST s (STRef s Int)) -> Int
 
 it works.  I don't fully understand what's going on here.
 
 Is this the right way to fix the problem?  Are there other options?
 My gut feeling is, for such a simple use case of the ST monad, I
 shouldn't need such a big hammer as RankNTypes.
 
 --ken
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More ideas for controlled mutation

2011-04-30 Thread Edward Z. Yang
Excerpts from Heinrich Apfelmus's message of Mon Apr 25 04:01:03 -0400 2011:
 The thing is that lazy evaluation is referentially transparent while "I 
 don't care about [(1,4),(2,2)] vs [(2,2),(1,4)]" is not.

Perhaps more precisely, laziness's memoization properties rely on the
referential transparency of thunk evaluation.

 In the latter case, you have a proof obligation to the compiler that your API
 does not expose the difference between these two values. But in Haskell, you
 have no way of convincing the compiler that you fulfilled that proof
 obligation! (At least, I don't see any obvious one. Maybe a clever abuse of
 parametricity helps.) It might be an option in Agda, though.
 
 In that light, it is entirely reasonable that you have to use 
 unsafePerformIO .

Yes, of course.  But as works like 'amb' demonstrate, we can build higher-level
APIs that are unsafe under the hood, but when used as abstractions fulfill
referential transparency (or, in the case of 'amb', fulfill referential
transparency as long as some not-as-onerous properties are achieved.)  So if
you implement a reusable mechanism that does unsafe stuff under the hood,
but provides all the right guarantees as long as you don't peek inside, I
think that's a good step.

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC optimizer consuming memory

2011-04-30 Thread Edward Z. Yang
Hello Mike,

I cannot reproduce using GHC HEAD (though it seems to hang on GHC 7.0.3),
so my guess is the bug has been fixed.

cvs-ghc, any guesses which patch this might have been?

Cheers,
Edward

Excerpts from mike frai's message of Sat Apr 30 14:50:17 -0400 2011:
 Hi,
 
 While using Michael Snoyman's persistent package, I discovered that a
 certain bit of code generated from it made the GHC optimizer consume a
 ridiculous amount of memory (ie. only when using the -O flag). I
 know very little about the GHC compiler and Michael Snoyman
 recommended I post my findings here to see what others thought.
 
 I've done what I can to extract the code generated from the persistent
 package and simplify/strip it down to the bare minimum of what's
 needed to cause this issue. You can find it in the attachment
 FromTest.hs. From my testing, removing five fields from Rec and the
 equivalent from the fromValue function - the memory consumption drops
 dramatically and I am able to compile.
 
 For your convenience, I've also attached FromTest.out, which is the
 output I get when compiling FromTest.hs using ghc -O -v3. This is
 just the output up to the point it starts consuming a lot of memory.
 I've never seen anything after this point because my system slows down
 to a crawl (due to swapping) and I end up killing the process.
 
 Please let me know what you guys think or if you need any more
 information about this. Thanks,
 - Mike

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to make ghc 7 with llvm?

2011-04-29 Thread Edward Z. Yang
Others have answered your real question (I think) adequately, but to be
pedantic and answer precisely what you ask:

You can compile GHC with llvm by adding -fllvm to your build.mk file:

GhcHcOpts += -fllvm

Cheers,
Edward

Excerpts from Magicloud Magiclouds's message of Thu Apr 28 21:49:11 -0400 2011:
 Hi,
    As I recall, ghc started to support llvm from version 7.
   But there is a problem: there is no option to make ghc with llvm. So
 Library within ghc source will be in gcc's binary format. Then when I
 install other packages, they may complain that the binary format is
 not llvm, so they install some libraries again.
   Any way I could make ghc 7 with llvm?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] More ideas for controlled mutation

2011-04-24 Thread Edward Z. Yang
Laziness can be viewed as a form of controlled mutation, where
we overwrite a thunk with its actual value, thus only running
the code once and reaping great time benefits.

I've been toying around with some ideas where we do alternative
forms of controlled mutation.  One such idea has to do with memoization.
One way we memoize functions is by building up a data-structure that
covers the entirety of input domain of the function, with the values
in each slot being thunks for the corresponding function call.  Now,
this is all very nice and well, but for some types of inputs (like
machine integers) we end up making a very big structure for a function for
which, in practice, we'll only store and retrieve a few values of the domain.

Hash tables take advantage of this fact by simply chaining together values
in a linked list if they land in the same bucket.  Could we have similarly
bucketized memoization?  What we want here is for a *thunk to possibly
evaluate to different values, but calls to the API be observationally
equivalent.*  That is, if the only way I can inspect a dictionary list
is do a lookup, I don't care if my representation is [(1,4),(2,2)] or
[(2,2),(1,4)].  An obvious way to do this is to use unsafePerformIO to
read out an IORef stating the value currently being looked up, and
have the thunk evaluate to the pair of that key and the result.  There
are some synchronization concerns, of course: ideally we would only
take out a lock on the thunk once we realize that the value doesn't
already exist in the memotable, but I don't think there's a way in GHC Haskell
to observe if a value is a thunk or not (maybe such a mechanism would be
useful?)
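
For contrast, the conventional whole-table version of this, an IORef behind
unsafePerformIO in the spirit of the "ugly memoization" post mentioned
elsewhere in this thread, looks roughly like the sketch below. It is not
thread-safe, which is exactly the synchronization concern:

    import Data.IORef
    import qualified Data.Map as M
    import System.IO.Unsafe (unsafePerformIO)

    memo :: Ord a => (a -> b) -> (a -> b)
    memo f = unsafePerformIO $ do
      ref <- newIORef M.empty
      return $ \x -> unsafePerformIO $ do
        tbl <- readIORef ref
        case M.lookup x tbl of
          Just y  -> return y
          Nothing -> do
            let y = f x
            modifyIORef ref (M.insert x y)
            return y
    {-# NOINLINE memo #-}

The bucketized scheme above tries to get the same effect per thunk instead of
per table.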

This seems related to lazy IO, where thunks are co-opted into performing
input-output effects.  After all, mutation is just a controlled form of IO.  Is
lazy IO evil or not?  One consensus is that for operations that involve
limited resources (file descriptors, etc.) lazy IO is too opaque for its
own good.  It seems we also haven't cracked its performance problems either
(for the relatively un-objectionable generation of an infinite stream of random
numbers, Don Stewart notes: "There are real overheads here. Consider eagerly
filling chunks and extracting elements piecewise.").  But code that looks pure
and can be reasoned about like pure code, but has the efficiency benefits of
mutation is a very attractive thing indeed.  I think it's worth some thought,
though the programming world at large has a hard enough time reasoning about
laziness even when everything is completely pure!

Cheers,
Edward

P.S. An obvious question is whether or not we could use this technique to
implement splay heaps or hash tables with pure interfaces.  My opinion is
no, because these structures demand you be able to *observe* the mutation.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] generics and sql

2011-04-24 Thread Edward Z. Yang
Hmm, this is a bit peculiar.  The problem is you don't get
control over how gmapQ invokes the function toSql: it will
only ever be done with the type signature Data d => d -> u.  There
There is good reason for this too: imagined you tried to run
gmapQ toSql on a data-type that contained a member that was
not convertible to a SqlValue: then it ought to fail with a type
error!

You may be able to work around this with more generics madness:
use Typeable to check if the types of all the fields are kosher,
and then do an appropriate casts before invoking toSql.  But you
won't get particularly good static guarantees doing it this way.

So... what are you really trying to do? :-)

Edward
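
A rough sketch of that Typeable/cast workaround, handling only a few field
types and erroring out otherwise; illustrative, not a complete solution:

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Data
    import Database.HDBC (SqlValue, toSql)

    fieldToSql :: Data d => d -> SqlValue
    fieldToSql d
      | Just s <- cast d :: Maybe String = toSql s
      | Just x <- cast d :: Maybe Double = toSql x
      | Just n <- cast d :: Maybe Int    = toSql n
      | otherwise = error ("unsupported field type: " ++ show (typeOf d))

    toSqlRow :: Data a => a -> [SqlValue]
    toSqlRow = gmapQ fieldToSql
    -- e.g. toSqlRow a == [SqlString "twelve", SqlDouble 12.0] for the C below

Any field type you forget to list shows up as a runtime error rather than a
type error, which is the static-guarantee caveat above.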

Excerpts from nadine.and.henry's message of Sun Apr 24 10:21:03 -0400 2011:
 Dear Group,
 
 Greetings.  I have a feeling that what I am trying to do is easy, I
 just don't know how to say it in Haskell.  Let's start with:
 
  {-# LANGUAGE 
 DeriveDataTypeable,  GeneralizedNewtypeDeriving  #-}
  
  import Database.HDBC
  import Data.Typeable (Typeable)
  import Data.Data 
  data C = C { str :: String, dbl:: Double }
deriving (Eq, Ord, Typeable, Data)
  
  a :: C
   a = C "twelve" 12.0
  
 
 Now I load this up in ghci and I can do the following:
 
  toSql . str $ a  -- result: SqlString "twelve"
  toSql . dbl $ a  -- result: SqlDouble 12.0
 
 but what I would really like to do is something like:
 
 gmapQ toSql $ a
 
 which results in:
  <interactive>:1:7:
 Could not deduce (Convertible d SqlValue)
   arising from a use of `toSql'
 from the context (Data d)
    bound by a type expected by the context: Data d => d -> SqlValue
    at <interactive>:1:1-11
 Possible fix:
   add (Convertible d SqlValue) to the context of
  a type expected by the context: Data d => d -> SqlValue
   or add an instance declaration for (Convertible d SqlValue)
 In the first argument of `gmapQ', namely `toSql'
 In the expression: gmapQ toSql
 
 In other words, I'm looking for a function with a signature:
 
  (Whatever instances I need here) => a -> [SqlValue]
 
 I have tried various incantations of type signatures, but thus far I
 can't get it right.  Can someone point me in the right direction?  Thanks.
 
 Henry Laxen
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] generics and sql

2011-04-24 Thread Edward Z. Yang
Where did 'query' come from?

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More ideas for controlled mutation

2011-04-24 Thread Edward Z. Yang
Yep.  It harkens to my days of forcing impure, non-thread-safe
C libraries into nice, pure, Haskell FFI bindings.  I suppose what
I'd like to do here is work in the unsafe IO much more closely
with GHC's existing facilities, so that we spend as much time
as possible /not/ in unsafePerformIO.  A kind of hybrid approach,
if you will.

P.S. Don Stewart points out that Edward Kmett has a package that can access
GHC's pointer tags (http://hackage.haskell.org/package/tag-bits),
thus allowing us to approximate evaluated/not evaluated.  Maybe
I'll hack up a prototype next time round.

Excerpts from Ketil Malde's message of Sun Apr 24 17:41:23 -0400 2011:
 Edward Z. Yang ezy...@mit.edu writes:
 
  I've been toying around with some ideas where we do alternative
  forms of controlled mutation.  One such idea has to do with memoization.
   [..]
  Hash tables take advantage of this fact by simply chaining together values
  in a linked list if they land in the same bucket.  [...]
  An obvious way to do this is to use unsafePerformIO to
  read out an IORef stating the value currently being looked up, and
  have the thunk evaluate to the pair of that key and the result.  There
  are some synchronization concerns, of course:
 
 Seen this?
 
 http://augustss.blogspot.com/2011/04/ugly-memoization-heres-problem-that-i.html
 
 -k

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Killing threads in foreign calls.

2011-04-17 Thread Edward Z. Yang
This is a fairly nontrivial problem.  First off, let me tell you
what you do not /actually/ want to happen: you don't want the OS
level thread performing the foreign call to actually be killed;
most C code is not written in a way that can gracefully recover from
this, and unless you have explicit indications from the Postgres
library that it uses pthread_setcancelstate (and you, of course,
have the cross-platform issue.)

You need some way of making the Postgres call return early, with
an error code of some sort.  If Postgres has a signal handler
that does this, you can use something along the lines of what I describe
here: http://blog.ezyang.com/2010/11/its-just-a-longjmp-to-the-left/

If the asynchronous API has the ability to cancel a query given
some handler, you instead want to set up a custom kill thread function
that checks if a thread has an active query and then performs
another FFI call to perform that cancellation.

If the library doesn't have a way of doing graceful cancellation,
you're kind of out of luck.  Unfortunately there is no golden
touch for making this work.

Cheers,
Edward
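
A rough sketch of that second option: a custom kill that first asks the C
library to abandon the in-flight call. Here cancelCurrentQuery is a
hypothetical stand-in for whatever cancellation entry point the binding
exposes, e.g. something wrapping PQcancel:

    import Control.Concurrent
    import Control.Monad (void)

    -- Hypothetical handle carrying the library's cancellation action.
    data Conn = Conn { cancelCurrentQuery :: IO () }

    forkCancellableQuery :: Conn -> IO a -> IO (IO ())
    forkCancellableQuery conn query = do
      tid <- forkIO (void query)
      return $ do
        cancelCurrentQuery conn   -- make the blocked foreign call return early
        killThread tid            -- now the Haskell thread can be killed

The kill action never touches the OS thread directly; it only persuades the C
side to come back.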

Excerpts from Jason Dusek's message of Sun Apr 17 15:31:11 -0400 2011:
   I am building an application that uses Postgres for storage.
   If a query runs too long, I would like to kill the querying
   thread, releasing its lock on the connection; if the
   connection is a in a bad state -- for example, busy -- I would
   like to clean up the connection.
 
   Unfortunately, killing calls in to libpq seems not to work. I
   have put a minimal example on hpaste:
 
 http://hpaste.org/45774/minimal_pg_contention_example
 
   If you install libpq with Cabal, you can run it. In the
   example, the main thread spawns a worker thread that queries
    Postgres, running "SELECT pg_sleep(10);"; the main thread
   waits half a second and then tries to kill the worker.
   Unfortunately, the worker always manages to get as far as
   printing complete.
 
   In the code, I call `Database.PQ.exec':
 
 
 http://hackage.haskell.org/packages/archive/libpq/0.4.1/doc/html/src/Database-PQ.html#exec
 
   This in turn calls a `safe' binding, `c_PQexec', to `PQexec'
   in the C library:
 
 
 http://hackage.haskell.org/packages/archive/libpq/0.4.1/doc/html/src/Database-PQ.html#line-
 
   There are async interfaces, too; they do not seem to be any
   more killable then the sync ones.
 
   Maybe the problem is that you can't kill a thread while it's
   in a foreign call? I do not see any documentation to this
   effect; but I may have missed it.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Debugging with gdb?

2011-04-13 Thread Edward Z. Yang
Hello Svante,

I have a few recommendations, places where I'd check:

1. Consult the arguments passed to read() using GDB (your libc has
   debugging symbols) and see if they are obviously wrong.  It seems
   more plausible that they are something that would be right for
   Linux, but not so right for Hurd.

2. Take the source code for ghc-pkg, and start adding debug prints,
   checking to see if the print is triggered or not (it will be useful
   if you can figure out what command you can run to just recompile
   ghc-pkg).  See if you can reduce the program to a minimal one
   that still hangs.

3. Compile GHC with the flag -ddump-stg, and pipe all of this output
   to a file.  With any luck, the block s9qJ will have been present
   during this stage, at which point you can use it to trace back to
   a file.  Note that due to lazy evaluation, s9qJ is probably /not/
   actually the culprit, which is why I recommend doing (2) first; it's
   a lot easier to wade around smaller bits of code.

Another recommendation is to bootstrap 6.8.3. (the last 6.8 release)
first, before attempting 6.10.1.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Current heap size and other runtime statistics -- API for accessing in GHC?

2011-04-10 Thread Edward Z. Yang
Simon Marlow and I had this conversation not too long ago, and the answer
is no. However, this is definitely something that would be useful for
a lot of people (GHC developers included!)

Cheers,
Edward

Excerpts from Ryan Newton's message of Sun Apr 10 17:30:50 -0400 2011:
 Hi cafe,
 
 The rtsopts (-s etc) can provide some nice debugging information regarding
 memory management.  And System.Mem.performGC can initiate garbage
 collection.  But are there APIs for querying the current state of the heap?
  I've googled and come up dry.
 
 In this case I'm running benchmarks and for the sake of fair comparison I
 want to make sure that everything from a previous run is cleaned up before
 the next run.
 
 Thanks,
   -Ryan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Deciding equality of functions.

2011-04-09 Thread Edward Z. Yang
Excerpts from Grigory Sarnitskiy's message of Sat Apr 09 13:26:28 -0400 2011:
 I guess that deciding whether two functions are equal in most cases is
 algorithmically impossible. However maybe there exists quite a large domain
 of decidable cases? If so, how can I employ that in Haskell?

In the case of functions where the domain and range are finite, function
equality is decidable but not usually feasible.  If your function is a
combinatorial circuit, you can apply technology like SAT solvers to
hopefully decide equality in faster than exponential time (this is what
Cryptol does; you may find it interesting.)

Cheers,
Edward
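
For the finite-domain case the decision procedure is just exhaustive
enumeration, which is decidable but rarely feasible. A small illustration of
my own:

    eqFinite :: (Bounded a, Enum a, Eq b) => (a -> b) -> (a -> b) -> Bool
    eqFinite f g = all (\x -> f x == g x) [minBound .. maxBound]

    -- ghci> eqFinite not (\b -> b == False)
    -- True

For anything bigger than a few bits of input you need the SAT-style tricks
mentioned above.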

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] BlockedIndefinitelyOnMVar exception

2011-03-31 Thread Edward Z. Yang
I don't know if there's a way to disable it, but you can wrap all your
spawned threads with an exception handler that catches BlockedIndefinitelyOnMVar
and ignores it. If the thread blocks indefinitely, it's as good as dead,
so there won't be any difference in behavior.

Cheers,
Edward
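
A sketch of that wrapper (my own, using the extensible exceptions API):

    import Control.Concurrent (ThreadId, forkIO)
    import Control.Exception (BlockedIndefinitelyOnMVar (..), handle)

    -- A thread that is blocked forever is as good as dead, so dropping the
    -- exception does not change observable behaviour.
    forkIgnoringBlocked :: IO () -> IO ThreadId
    forkIgnoringBlocked act =
      forkIO (handle (\BlockedIndefinitelyOnMVar -> return ()) act)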

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to best deal with nullPtr from alloca and friends?

2011-03-29 Thread Edward Z. Yang
Excerpts from Jason Dagit's message of Tue Mar 29 00:43:10 -0400 2011:
 I was reading up on the documentation for alloca and friends[1], which says,
  If any of the allocation functions fails, a value of nullPtr
  (http://www.haskell.org/ghc/docs/7.0.2/html/libraries/base-4.3.1.0/Foreign-Ptr.html#v:nullPtr)
  is produced.
 
 It seems like every example of FFI code that I find which uses alloca
 ignores the possibility of a nullPtr[2, 3, 4].
 
 So then I started trying out examples with the help of dmwit and others from
 #haskell.
 
 It seems that actually, alloca and friends throw exceptions:
  dmwit main = allocaArray (2^30) (\ptr -> print ((nullPtr :: Ptr Double) ==
 ptr))
 dmwit lispy: alloca also throws an exception.
 dmwit lispy: Or rather, allocaBytes throws an exception, and alloca is
 implemented in terms of allocaBytes, so I'm *guessing* that alloca throws an
 exception.
 
 I'm on a 64bit version of windows here with more than 4GB of memory to spare
 for the GHC process. Unfortunately, allocaBytes takes an Int so I can't test
 it with a request larger than the amount of physical ram I have.

You could try testing by setting different limits on the memory usage of your
process with ulimit. I'll give it a test, but the prevailing wisdom is that if you
OOM, you're really out of luck.

Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ContT and ST stack

2011-03-11 Thread Edward Z. Yang
Excerpts from Max Bolingbroke's message of Fri Mar 11 05:15:34 -0500 2011:
 AFAIK this decision was reversed because SPJ found a simple way to
 support them. Indeed, they work fine in 7.0.2 and generate warnings.

Correct. About a week-ish ago I submitted a patch to update the docs.

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Rank-2 types in classes

2011-03-02 Thread Edward Z. Yang
The trick is to write the rank-2 type in the function that runs
the monad, and leave the typeclasses skolemized.

Here's an example:

-- | Typeclass for monads that write or read to a network.  Useful
-- if you define operations that need to work for all such monads.
-- You're expected to put extra constraints on h.
class (Network g, Monad (m g n), Applicative (m g n), Functor (m g n)) =>
NetworkMonad m g n where
-- | Unsafely converts an 'IO' operation that takes an 'AIG' as an
-- argument into an operation in some 'NetworkMonad'.
unsafeIOToNetwork :: (GEnv g -> IO a) -> m g n a
...

class OpaqueNetwork g => Network g where
-- * We cannot put NetworkMonad constraint on GNT g and GNQ g because we
--   need to be able to put that constraint as a rank-2 monad.
-- * This has a lot of stuff in it, maybe we'll split it up later.
data GNode g  :: *       -- ^ phantom type
              -> *       -- ^ data type
data GNT g    :: *       -- ^ phantom type
              -> * -> *  -- ^ monad
data GNQ g    :: *       -- ^ phantom type
              -> * -> *  -- ^ monad
data GEnv g   :: *

one    :: GNode g n
zero   :: GNode g n

runNT  :: (forall n. NetworkMonad GNT g n => GNT g n ()) -> g
withNT :: g -> (forall n. NetworkMonad GNT g n => GNT g n ()) -> g

There are numerous other problems with this route (can you see them from
the sample code?) but I found this solution to be mostly acceptable.

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Please review my Xapian foreign function interface

2011-02-21 Thread Edward Z. Yang
Excerpts from Oliver Charles's message of Mon Feb 21 08:53:48 -0500 2011:
 Yes, this is a concern to me as well. The only places I've used
 unsafePerformIO is with Query objects, which I am mostly treating as
 immutable data, and never exposing anyway to modify query
 objects.

That is a good way to start thinking about it.  If there is an efficient
mechanism for copying query objects, you can also implement persistent
update (e.g. copy the structure and then mutate it).

 However, what is better? Should I avoid taking this
 risk/assumption of immutability and use this within the IO monad also? I
 guess my biggest fear is that this entire library is only usable in the
 IO monad, which from what I understand limits my ability to test easily.

Don't take the risk: verify for yourself that there is no risk!  Note that
putting things in IO doesn't get you off the concurrency hook: things in
IO can be run in different threads and you need to synchronize them. Indeed,
as the Xapian faq states:

If you want to use the same object concurrently from different threads,
it's up to you to police access (with a mutex or in some other way) to
ensure only one method is being executed at once.

It is admittedly more annoying to test things in IO. One thing you can do is
if database objects are completely isolated from one another (which seems
to be the case) you can build up a custom monad for manipulating this object
in a single-threaded and/or thread safe manner.  I did something like
this (actually, I needed to enforce more complex invariants about when
what functions could get called), but unfortunately it was for work and the
code hasn't been cleared for publication yet.

Cheers,
Edward
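
A small sketch of the police-access-yourself route, with made-up names: keep
the raw foreign handle behind an MVar and funnel every call through it.

    import Control.Concurrent.MVar

    data RawDb = RawDb   -- stand-in for the foreign pointer to the Xapian object

    newtype Database = Database (MVar RawDb)

    withDatabase :: Database -> (RawDb -> IO a) -> IO a
    withDatabase (Database lock) = withMVar lock

If every binding-level operation goes through withDatabase, at most one
method runs against the underlying object at a time, which is all the FAQ
asks for.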

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Please review my Xapian foreign function interface

2011-02-20 Thread Edward Z. Yang
Thanks Oliver!

I haven't had time to look at your bindings very closely, but I do
have a few initial things to think about:

* You're writing your imports by hand.  Several other projects used
  to do this, and it's a pain in the neck when you have hundreds
  of functions that you need to bind and you don't quite do it all
  properly, and then you segfault because there was an API mismatch.
  Consider using a tool like c2hs which rules out this possibility
  (and reduces the code you need to write!)

* I see a lot of unsafePerformIO and no consideration for:
- Interruptibility
- Thread safety
  People who use Haskell tend to expect their code to be thread-safe and
  interruptible, so we have high standards ;-) But even C++ code
  that looks thread safe may be mutating shared memory under the hood,
  so check carefully.

I use Sup, so I deal with Xapian on a day-to-day basis. Bindings are good
to see.

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conditionals and case expressions in do blocks

2011-01-26 Thread Edward Z. Yang
Excerpts from Edward Amsden's message of Wed Jan 26 15:51:46 -0500 2011:
 do
 ...
 case ... of
 Foo x -> do ...
   ...
   return foobar
 Bar x -> do ...
  ...
  return foobar

There's not enough context here to say what is appropriate.
Is foobar defined in the outer or inner do-block?

 if cond
   then throwError "BBQ!"
   else do
  ...
  ...
  return ...

Try:

when cond $ throwError "BBQ!"
...
...
return ...

Cheers,
Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

