[Haskell-cafe] Function decoration pattern on wiki

2013-01-20 Thread Derek Elkins
I created the following page on the wiki to capture one pattern that
occurs often in Edward Kmett's lens library.

http://www.haskell.org/haskellwiki/Function_decoration_pattern

Feel free to rename it or add comments, examples, clarifications,
additional notes or extensions to the technique.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] ANNOUNCE: set-monad

2012-06-21 Thread Derek Elkins
On Thu, Jun 21, 2012 at 8:30 AM, George Giorgidze giorgi...@gmail.com wrote:
 Hi Derek,

 Thanks for providing the executable example that demonstrates your
 point. It is an interesting one. See my response below. I think it
 takes us into the discussion as to what constitutes reasonable/law
 abiding instances of Eq and Ord and what client code that uses Eq and
 Ord instances can assume.

 To give my point in advance, Eq and Ord instances similar to yours
 (i.e., those that proclaim two values as equal but at the same time
 export or allow for function definitions that can observe that they
 are not equal; that is, to tell them apart) not only break useful
 properties of the Data.Set.Monad wrapper but they also break many
 useful properties of the underlying Data.Set, and many other standard
 libraries and functions.

I will readily admit that the Ord instance arguably does not satisfy
the laws one would want (though it is consistent with the Eq
instance).  It arguably is supposed to produce a total order which it
does not.  However, Eq is only required to be an equivalence relation
(and even that is stretching the definition of required), and the Eq
instance is certainly an equivalence relation.  Furthermore, the only
reason your library requires an Ord instance is due to the underlying
Data.Set requiring it.  You could just as well use a set
implementation that didn't require Ord and the same issue would occur.
 I mention this to remove the bad Ord instance from consideration.

For me, saying two Haskell expressions are equal means observational
equality, i.e. I can -always- substitute one for the other in any
context.  I tolerate equal meaning equal modulo bottom, because
unless you are catching asynchronous exceptions, which Haskell 2010
does not support, the only difference is whether you get an
answer or not, not what that answer is if you get it, which is only so
harmful.  For Data.Set.Monad, fmap f . fmap g = fmap (f . g) holds (at
least modulo bottom) for all f and g, as required by the Functor laws.

I have no problem if you want to say fromList . toList = id -given-
(==) is the identity relation on defined values, but omitting the
qualification is misleading and potentially dangerous.  The Haskell
Report neither requires nor enforces that those relations hold, which
is particularly underscored by the fact, as you yourself demonstrated,
that even -standard- types fail to satisfy even the laws that you
perhaps can interpret the Haskell Report as requiring.  There have
been violations of type safety due to assuming instances satisfied
laws that they didn't.


 On 20 June 2012 04:03, Derek Elkins derek.a.elk...@gmail.com wrote:

 This is impressive because it's false.  The whole point of my original
 response was to justify Dan's intuition but explain why it was misled
 in this case.


 No, In my opinion, it is not false. The fact that you need to wrap the
 expression between fmap f and fmap g suggests that the problem is with
 mapping the functions f and g and not with toList and fromList as you
 suggest. See below for clarifications.

 Let us concentrate on the ex4 and ex6 expressions in your code. These
 two most clearly demonstrate the issue.

 import Data.Set.Monad

 data X = X Int Int deriving (Show)

 instance Eq X where
    X a _ == X b _ = a == b

 instance Ord X where
    compare (X a _) (X b _) = compare a b

 f (X _ b) = X b b

 g (X _ b) = X 1 b

 xs = Prelude.map (\x -> X x x) [1..5]

 ex4 = toList $ fmap f . fmap g $ fromList xs

 ex6 = toList $ fmap f . fromList . toList . fmap g $ fromList xs

 print ex4 gives us [X 1 1,X 2 2,X 3 3,X 4 4,X 5 5]

 and

 print ex6 gives us [X 5 5]

 At first look, it looks like (fromList . toList) is not the
 identity. But if tested and checked separately it is. Maybe something
 weird is going on with (fmap f) and (fmap g) and/or their composition.
 Before we dive into that let us try one more example:

 ex7 = toList $ fmap f . (empty `union`) . fmap g $ fromList xs

 print ex7, just like ex6, gives us [X 5 5]. Should we assume that (empty
 `union`) is not identity either?

Correct.  It is not.

This hints that probably something
 is wrong with (fmap f), (fmap g), or their composition.

 Let us check.

 If one symbolically evaluates ex4 and ex6 (I did it with pen and paper
 and I am too lazy to type it here), one can notice that:

 ex4 boils down to evaluating Data.Set.map (f . g) (Data.Set.fromList xs)

 while

 ex6 boils down to evaluating Data.Set.map f (Data.Set.map g
 (Data.Set.fromList xs))

 (BTW, is not it great that Data.Set.Monad managed to fuse f and g for ex4)

 So for your Eq and Ord instances and your f and g functions, the following
 does not hold for the underlying Data.Set library:

 map f . map g = map (f . g)

 So putting identity functions like (fromList . toList) or (empty
 `union`) prevents the fusion and allows one to observe that (map f .
 map g) is not the same as (map (f . g)) for the underlying Data.Set.
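
For reference, here is the same observation as a standalone sketch (an editorial
arrangement of the definitions already given in this thread), run directly
against the underlying Data.Set:

import qualified Data.Set as S

data X = X Int Int deriving Show

instance Eq X where
  X a _ == X b _ = a == b

instance Ord X where
  compare (X a _) (X b _) = compare a b

f, g :: X -> X
f (X _ b) = X b b
g (X _ b) = X 1 b

xs :: [X]
xs = map (\x -> X x x) [1 .. 5]

main :: IO ()
main = do
  print (S.toList (S.map (f . g) (S.fromList xs)))      -- five elements
  print (S.toList (S.map f (S.map g (S.fromList xs))))  -- a single element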

Re: [Haskell-cafe] [Haskell] ANNOUNCE: set-monad

2012-06-19 Thread Derek Elkins
Un-top-posted.  See below.

 On 19 June 2012 02:21, Derek Elkins derek.a.elk...@gmail.com wrote:

 On Jun 18, 2012 4:54 PM, George Giorgidze giorgi...@gmail.com wrote:

 Hi Derek,

 On 16 June 2012 21:53, Derek Elkins derek.a.elk...@gmail.com wrote:
  The law that ends up failing is toList .
  fromList /= id, i.e. fmap g . toList . fromList . fmap h /= fmap g .
  fmap h

 This is not related to functor laws. The property that you desire is
 about toList and fromList.

 Sorry, I typoed.  I meant to write fromList . toList though that should've
 been clear from context.  This is a law that I'm pretty sure does hold for
 Data.Set, potentially modulo bottom.  It is a quite desirable law but, as
 you correctly state, not required.  If you add this (non)conversion, you
 will get the behavior to which Dan alludes.

 The real upshot is that Prim . run is not id. This is not immediately
 obvious, but this is actually the key to why this technique works. A
 Data.Set.Monad Set is not a set, as I mentioned in my previous email.

 To drive the point home, you can easily implement fromSet and toSet.  In
 fact, they're just Prim and run.  Thus, you will fail to have fromSet .
 toSet = id, though you will have toSet . fromSet = id, i.e. run . Prim =
 id.  This shows that Data.Set.Set embeds into but is not isomorphic to
 Data.Set.Monad.Set.



On Tue, Jun 19, 2012 at 4:02 AM, George Giorgidze giorgi...@gmail.com wrote:
 Hi Derek,

 Thanks for clarifying your point.

 You are right that (fromList . toList) = id is a desirable property and it
 holds for Data.Set.

 But your suggestion that this property does not hold for
 Data.Set.Monad is not correct.

 Please check out the repo, I have just pushed a quickcheck definition
 for this property. With a little bit of effort, one can also prove
 this by hand.

This is impressive because it's false.  The whole point of my original
response was to justify Dan's intuition but explain why it was misled
in this case.


 Let me also clarify that Data.Set.Monad exports Set as an abstract
 data type (i.e., the user cannot inspect its internal structure). Also
 the run function is only used internally and is not exposed to the
 users.

If fromList . toList = id is true for Data.Set.Set, then fromList .
toList for Data.Set.Monad.Set reduces to Prim . run.  I only spoke of
the internal functions to get rid of the noise, but Data.Set.fromList
. Data.Set.Monad.toList = run, and Data.Set.Monad.fromList .
Data.Set.toList = Prim, so it doesn't matter that these are internal
functions.

As I said to Dan I will say to you: between Dan and myself the
counter-example has already been provided; all you need to do is
execute it.  Here's the code: if fromList . toList = id, then ex4
should produce the same result as ex5 (and ex6).

import Data.Set.Monad

data X = X Int Int deriving (Show)

instance Eq X where
X a _ == X b _ = a == b

instance Ord X where
compare (X a _) (X b _) = compare a b

f (X _ b) = X b b

g (X _ b) = X 1 b

xs = Prelude.map (\x -> X x x) [1..10]

-- should be a singleton
ex1 = toList . fromList $ Prelude.map g xs

-- should have 10 elements
ex2 = toList $ fmap (f . g) $ fromList xs

-- should have 1 element
ex3 = toList $ fmap g $ fromList xs

-- should have 10 elements, fmap f . fmap g = fmap (f . g)
ex4 = toList $ fmap f . fmap g $ fromList xs

-- should have 1 element, we don't generate elements out of nowhere
ex5 = toList $ fmap f $ fromList ex3
-- i.e.
ex6 = toList $ fmap f . fromList . toList . fmap g $ fromList xs

main = mapM_ print [ex1, ex2, ex3, ex4, ex5, ex6]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] ANNOUNCE: set-monad

2012-06-16 Thread Derek Elkins
On Sat, Jun 16, 2012 at 3:47 AM, Dan Burton danburton.em...@gmail.com wrote:

 Convenience aside, doesn't the functor instance conceptually violate some 
 sort of law?

 fmap (const 1) someSet

 The entire shape of the set changes.

 fmap (g . h) = fmap g . fmap h

 This law wouldn't hold given the following contrived ord instance

 data Foo = Foo { a, b :: Int }
 instance Ord Foo where
   compare = compare `on` a

 Given functions

 h foo = foo { a = 1 }
 g foo = foo { a = b foo }

 Does this library address this? If so, how? If not, then you'd best note it 
 in the docs.

Your hypothesis is false.  You should at least try out your example.
It's easy to show that fmap (g . h) x = (fmap g . fmap h) x for all x (at least
ignoring potential issues with strictness).  The thing to note is that
fmap h x is not a set, it is an expression tree (which is only
observable via run.)  The law that ends up failing is toList .
fromList /= id, i.e. fmap g . toList . fromList . fmap h /= fmap g .
fmap h

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Derek Elkins
Well, you can get "A Novel Representation of Lists and Its Application
to the Function 'Reverse'" by John Hughes online, published in 1986,
which is referenced by Wadler's 1987 "The Concatenate Vanishes" and
references Richard Bird's 1984 paper "Transformational programming and
the paragraph problem", though I'd be quite surprised if that was the
first place the representation appeared in print.

On Sun, Oct 31, 2010 at 6:51 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 There's a long-known technique in functional languages
 where
        [x1,...,xn]     = \tail -> x1:...:xn:tail
        xs ++ ys        = f . g
        xs              = f []

 A correspondent mentioned to me that he couldn't find a reference
 to the idea (which I gather he had independently rediscovered).
 I know I've read about it somewhere.  Can anyone provide a reference?


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Derek Elkins
On Sun, Oct 31, 2010 at 7:27 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:

 On 1/11/2010, at 12:05 PM, Gregory Collins wrote:
 They're called difference lists:

 As a matter of fact the original context was precisely
 difference lists in logic programming.

 http://hackage.haskell.org/packages/archive/dlist/latest/doc/html/Data-DList.html

Difference lists in logic programming are almost the opposite of
functional representations of lists and I find the move to try to
nominally connect them worse than useless.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Derek Elkins
On Sun, Oct 31, 2010 at 9:02 PM, wren ng thornton w...@freegeek.org wrote:
 On 10/31/10 7:10 PM, Derek Elkins wrote:

 Well, you can get A Novel Representation of Lists and Its Application
 to the Function 'Reverse' by John Hughes online published in 1986
 which is referenced by Wadler's 1987 The Concatenate Vanishes and
 references Richard Bird's 1984 paper Transformational programming and
 the paragraph problem though I'd be quite surprised if that was the
 first place the representation appeared in print.

 Barring the worse than useless appellation, the technique has been around
 in logic programming (and classic Lisp, IIRC) for a few decades longer. I've
 always heard it referred to as part of the folklore of logic/functional
 programming though, so I'm not sure of any earlier print references
 off-hand.

I agree that difference lists have been around quite a bit longer, but
they are something different.

 (Though I find it curious that you think the logic version is so
 different...)

I'm curious as to how you find them similar.  Beyond both of them
being ways to get fast appends in a declarative language, they have no
similarities.  To begin, Prolog is a first order language so it
clearly can't represent functional lists.  Meanwhile, difference lists
rely on, at least, single assignment variables which Haskell does not
have as a language feature so Haskell can't represent difference lists
outside of a monad.  The use of logic variables requires a linear
use of a difference list within a branch of non-deterministic
computation, i.e. difference lists are not persistent.  Functional
lists clearly are.  As a simple example, if xs is a functional list, I
can return a pair (xs . ys, xs . zs), if I tried to do that with
difference lists I would be unifying ys and zs.  If I -really- wanted
to do that with difference lists I would have to use a difference list
copy predicate to do it.  Functional lists are an opaque data
structure.  If I want to know what the head of a functional list is, I
have to first turn it into a real list and then take the head.  With
difference lists, I can just look at the head, and this is cheap and
easy.  Both representations have junk, though I'm inclined to say the
functional representation has quite a bit more.  At any rate, the junk
is rather different.  The junk of the functional representation is
any [a] -> [a] function that can't be put into the form (xs ++) for
some list xs.  For example, reverse.  Difference lists are pairs of
lists where the latter is a suffix of the former.  The junk in the
typical representation, i.e. just pairs of lists, are pairs that don't
meet that criterion.  The idea behind difference lists is to represent
the list xs as the pair (xs ++ ys, ys), i.e. (xs ++ ys) - ys = xs, which is
where the name difference list comes from.  One way of motivating
the functional representation is that it is nothing more than the
natural embedding of the list monoid into its monoid of endofunctions;
for every set X, X -> X is a monoid under composition, and for every monoid (M,*,1),
curry (*) is a monoid homomorphism from M to (M -> M).  I'm unsure how
to apply either of these views to the other representation.  In fact,
difference lists are nothing more than the normal imperative way of
handling appends quickly for singly-linked lists, with NULL replaced
by an unbound variable.

To simplify, the difference in persistence between the two
representations is enough to consider them very different as it makes
a dramatic difference in interface.
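
For concreteness, here is a minimal sketch (an editorial illustration, not from
the thread) of the functional representation being described: a list embeds as
the endofunction (xs ++), append becomes composition, and the ordinary list is
recovered by applying to [].

type FList a = [a] -> [a]

rep :: [a] -> FList a          -- the embedding: rep xs = (xs ++)
rep xs = (xs ++)

toOrdinary :: FList a -> [a]   -- recover the ordinary list
toOrdinary f = f []

append :: FList a -> FList a -> FList a
append = (.)

main :: IO ()
main = do
  let xs = rep [1, 2, 3 :: Int]
      ys = rep [4, 5]
      zs = rep [6]
  -- persistence: xs can be shared freely between two different appends
  print (toOrdinary (xs `append` ys), toOrdinary (xs `append` zs))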
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proving stuff about IORefs

2010-10-17 Thread Derek Elkins
On Sun, Oct 17, 2010 at 6:49 AM, Miguel Mitrofanov
miguelim...@yandex.ru wrote:

 On 17 Oct 2010, at 05:21, Ben Franksen wrote:

 I want to prove that

  f r == do
    s1 <- readIORef r
    r' <- newIORef s1
    x <- f r'
    s3 <- readIORef r'
    writeIORef r s3
    return x

 That is not true. Consider the following function:

 g r1 r2 = writeIORef r1 0 >> writeIORef r2 1 >> readIORef r1

 This function behaves differently depending on whether r1 and r2 are the same 
 IORef or not. Therefore, the property you want to prove doesn't hold for the 
 partially-applied function

 f = g r1

I originally was thinking along these lines, and this is an important
case, but there is an even more trivial example.

Let f be return.
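
Spelled out as a runnable sketch (an editorial illustration of "let f be return"):

import Data.IORef

-- With f = return, the left-hand side hands back the original IORef while the
-- right-hand side hands back the freshly allocated copy; the two are
-- observably different.
lhs, rhs :: IORef Int -> IO (IORef Int)
lhs r = return r
rhs r = do
  s1 <- readIORef r
  r' <- newIORef s1
  x  <- return r'          -- f = return
  s3 <- readIORef r'
  writeIORef r s3
  return x

main :: IO ()
main = do
  r <- newIORef 0
  x <- rhs r
  writeIORef x 42          -- writes the copy, not r
  readIORef r >>= print    -- prints 0; with lhs it would print 42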
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proving stuff about IORefs

2010-10-16 Thread Derek Elkins
On Sat, Oct 16, 2010 at 9:21 PM, Ben Franksen ben.frank...@online.de wrote:
 I have a formal proof where I am stuck at a certain point.

 Suppose we have a function

  f :: IORef a -> IO b

 I want to prove that

  f r == do
    s1 <- readIORef r
    r' <- newIORef s1
    x <- f r'
    s3 <- readIORef r'
    writeIORef r s3
    return x

 What happens here is that the temporary IORef r' takes the place of the
 argument r, and after we apply f to it we take its content and store it in
 the original r. This should be the same as using r as argument to f in the
 first place.

 How can I prove this formally?

You haven't provided us with any information about the formal model
you are using and your question is somewhat ambiguously phrased, hence
Thomas' response where, I'm pretty sure, he misunderstood what you
were asking.

At any rate, if you intend to prove this for any arbitrary f, I can't
tell you how to prove it formally because it's not true.

Regardless, this email has far too little information for anyone to
provide you an answer.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Eta-expansion destroys memoization?

2010-10-12 Thread Derek Elkins
On Tue, Oct 12, 2010 at 4:34 AM, Bertram Felgenhauer
bertram.felgenha...@googlemail.com wrote:
 Simon Marlow wrote:
 Interesting.  You're absolutely right, GHC doesn't respect the
 report, on something as basic as sections!  The translation we use
 is

   (e op)  ==  (op) e

 once upon a time, when the translation in the report was originally
 written (before seq was added) this would have been exactly
  identical to \x -> e op x, so the definition in the report was
 probably used for consistency with left sections.

 We could make GHC respect the report, but we'd have to use

    (e op)  ==  let z = e in \x -> z op x

 to retain sharing without relying on full laziness.

 We should keep in mind that this was changed deliberately in ghc 6.6,
 in order to support postfix operators.

    http://www.haskell.org/ghc/docs/6.6/html/users_guide/release-6-6.html

 The motivating example was the factorial operator which can currently
 be written as  (n !)  in ghc-Haskell.

From 
http://www.haskell.org/ghc/docs/6.6/html/users_guide/syntax-extns.html#postfix-operators
Since this extension goes beyond Haskell 98, it should really be
enabled by a flag; but in fact it is enabled all the time. (No Haskell
98 programs change their behaviour, of course.) 

Which is not true, but is probably true enough.

Of course, there is now a flag
http://www.haskell.org/ghc/docs/6.12.2/html/users_guide/syntax-extns.html#postfix-operators
but it seems that the non-standard interpretation of (e !) is still
kept even without it.  Without the flag, it type checks as if you had
written \x -> e ! x but it still behaves as if you had written (!) e.
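
One way (an editorial sketch, using the usual seq trick) to observe the
difference between the two translations of a left section:

ghcStyle, reportStyle :: Int
ghcStyle    = (undefined ())           `seq` 0   -- (() `undefined`) as (`undefined`) (): bottom
reportStyle = (\x -> () `undefined` x) `seq` 0   -- (() `undefined`) as a lambda: 0

main :: IO ()
main = do
  print reportStyle     -- prints 0
  -- print ghcStyle     -- would hit the undefined operator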
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Eta-expansion destroys memoization?

2010-10-07 Thread Derek Elkins
On Thu, Oct 7, 2010 at 8:44 AM, Luke Palmer lrpal...@gmail.com wrote:
 On Thu, Oct 7, 2010 at 6:17 AM, Brent Yorgey byor...@seas.upenn.edu wrote:
 The source code seems to be easy to read, but I don't think I understand 
 that. For me I think if I change the first line from
 fib = ((map fib' [0 ..]) !!)
 to
 fib x = ((map fib' [0 ..]) !!) x
 It should do the same thing since I think the previous version is just an 
 abbreviation  for the second one.

 Semantically, yes.  And it's possible that ghc -O is clever enough to
 notice that.  But at least under ghci's naive evaluation strategy,
 lambdas determine the lifetime of expressions. Any expression within a
 lambda will be re-evaluated each time the lambda is expanded.  Thus:

  fib = (map fib' [0..] !!)        -- fast
  fib = \x -> map fib' [0..] !! x        -- slow
  fib = let memo = map fib' [0..] in \x -> memo !! x -- fast

 The section works because (a %^)  (for some operator %^) is short
 for (%^) a and (%^ a) is short for flip (%^) a.  Sections
 don't expand into lambdas.

 In other words, in the middle expression, there is a new map fib'
 [0..] for each x, whereas in the others, it is shared between
 invocations.

 Does that make sense?

In general, f is not semantically equivalent to \x -> f x in Haskell.
However, that is not what Brent said.  The Report -defines- (m !!) as
\x -> m !! x.  GHC simply does not follow the Report here.  You can
witness this via: (() `undefined`) `seq` 0.  By the Report this should
evaluate to 0, in GHC it evaluates to undefined.

As for the rest... The operational behavior of the above is
implementation dependent, but GHC, and I imagine most implementations,
more or less do the natural thing.  The Report gives no way to control
sharing behavior, but being able to control it is rather important, so
predictable behavior here is desirable.
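
For completeness, a self-contained sketch of the memoized fib under discussion
(fib' here is an editorial naive helper, included only to make the example runnable):

fib :: Int -> Integer
fib = (map fib' [0 ..] !!)          -- the memo list is shared across calls
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 50)               -- fast; the eta-expanded form would rebuild the list each call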
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Fwd: Bug in Parsec.Token

2010-08-02 Thread Derek Elkins
This is a forward of a message from March 4th.


-- Forwarded message --
From: Derek Elkins derek.a.elk...@gmail.com
Date: Thu, Mar 4, 2010 at 9:43 PM
Subject: Re: Bug in Parsec.Token
To: Don Stewart d...@galois.com
Cc: Greg Fitzgerald gari...@gmail.com, Antoine Latter
aslat...@gmail.com, Sittampalam, Ganesh
ganesh.sittampa...@credit-suisse.com, Ian Lynagh ig...@earth.li,
librar...@haskell.org


I'm not subscribed to libraries, so this won't go there.

One of the first benchmarks against Parsec 3.0.0 was John MacFarlane's
here: http://www.haskell.org/pipermail/haskell-cafe/2008-March/040258.html

In it, he found Parsec 3.0.0 about 2x slower for his benchmark.  I
can't recreate his benchmark, but I suspect it is a variant of one he
describes here: http://code.google.com/p/pandoc/wiki/Benchmarks

I decided to do a similar benchmark.  I used Parsec 2.1.0.1, Parsec
3.0.1, and Parsec 3.1.0.  Of particular note, building all three
required -only- changing which library pandoc depended on.  No change
to the source was necessary.  All tests in pandoc's test suite passed
for all versions.

Doing that benchmark with a different input file, this file
[http://wpcal.firetree.net/wp-content/plugins/PHP%20Markdown%201.0.1k/PHP%20Markdown%20Readme.text]
concatenated to itself 32 times to produce a 730KB markdown file, I
get the following times for the last three of four runs.

Parsec 2.1.0.1
de...@derek-laptop:~/temp/pandoc-1.3/dist/build/pandoc$ time
./pandoc-2.1.0.1 --strict t.text > /dev/null
real    0m9.863s
user    0m7.792s
sys     0m0.160s

real    0m9.756s
user    0m7.792s
sys     0m0.132s

real    0m10.123s
user    0m7.976s
sys     0m0.168s

Parsec 3.0.1
de...@derek-laptop:~/temp/pandoc-1.3/dist/build/pandoc$ time
./pandoc-3.0.1 --strict t.text > /dev/null
real    0m22.008s
user    0m17.445s
sys     0m0.324s

real    0m21.789s
user    0m17.433s
sys     0m0.160s

real    0m21.754s
user    0m17.677s
sys     0m0.168s

Parsec 3.1.0
de...@derek-laptop:~/temp/pandoc-1.3/dist/build/pandoc$ time
./pandoc-3.1.0 --strict t.text > /dev/null
real    0m10.708s
user    0m8.201s
sys     0m0.168s

real    0m11.078s
user    0m8.401s
sys     0m0.232s

real    0m10.797s
user    0m8.513s
sys     0m0.224s

These results recreate the approximate 2x slowdown that John
originally mentioned between Parsec 2.1.0.1 and Parsec 3.0.  It also
demonstrates that Parsec 3.1.0 is significantly faster than 3.0.1 but
still a little bit slower than Parsec 2.1.0.1.

On Thu, Mar 4, 2010 at 4:39 PM, Don Stewart d...@galois.com wrote:
 derek.a.elkins:
 Who is going to maintain Parsec 4?

 I'm completely against this.  If people absolutely must have exactly
 Parsec 2's implementation we can simply copy it into Parsec 3, and the
 compatibility layer, in that case, will simply -be- Parsec 2.  I've
 considered this as a temporary solution for the performance issues
 just so people could move to Parsec 3 dependencies, but that should
 not be necessary now, and even then I considered it a much less than
 ideal solution.

 If the community wants to freeze on Parsec 2, then I have no problem
 renaming the package, otherwise I think it is both unnecessary and a
 waste of effort.


 The problem is the ongoing lack of confidence in Parsec 3's performance.
 The new release goes some way to addressing this, but I think this has
 gone unaddressed for too long.

 Can someone address the lingering concern with benchmarks against parsec 2?

 -- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Expression dye

2010-07-14 Thread Derek Elkins
http://hackage.haskell.org/package/simple-reflect  This is what is
used in lambdabot.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using the ContT monads for early exits of IO ?

2010-06-10 Thread Derek Elkins
Or... one could just use the exceptions that are already built into
the IO monad...
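
For example, a minimal sketch (an editorial illustration, not from the thread)
of early exit using the exception mechanism already built into IO:

import Control.Exception

data EarlyExit = EarlyExit String deriving Show
instance Exception EarlyExit

step :: Int -> IO Int
step n
  | n > 3     = throwIO (EarlyExit "bailing out")   -- exit early from the middle of the work
  | otherwise = return (n * 2)

main :: IO ()
main = do
  r <- try (mapM step [1 .. 10])
  case r of
    Left (EarlyExit msg) -> putStrLn ("early exit: " ++ msg)
    Right xs             -> print xs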

2010/6/10 Yitzchak Gale g...@sefer.org:
 Lennart Augustsson wrote:
 I would not use the continuation monad just for early exit.  Sounds
 like the error monad to me.

 I.e., the Either/ErrorT monad. But the mtl/transformers packages
 have an orphan instance for Either that requires the
 exit type to be an instance of the Error class. If that
 doesn't work in your case, use the Exit monad:

 http://www.haskell.org/haskellwiki/New_monads/MonadExit

 Or use the Maybe monad written additively, i.e. mplus
 in place of >> (more or less).

 Regards,
 Yitz
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Clean proof -- correction

2010-05-23 Thread Derek Elkins
On Sun, May 23, 2010 at 11:38 AM, Daniel Fischer
daniel.is.fisc...@web.de wrote:
 On Sunday 23 May 2010 18:24:50, R J wrote:
 Correction:  the theorem is
     h . either (f, g) = either (h . f, h . g)

 Still not entirely true,

 const True . either (undefined, undefined) $ undefined = True

 while

 either (const True . undefined, const True . undefined) undefined =
 undefined

 But if we ignore bottom,

If we ignore bottom we say "By parametricity."  The theorem is a free theorem.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mechanics of type-level proxies through branding?

2010-05-22 Thread Derek Elkins
On Sat, May 22, 2010 at 8:36 PM, Dave Neuer dave.ne...@pobox.com wrote:
 Hi.

 I'm a Haskell newbie, and I've been reading Oleg's work about
 lightweight dependent types in Haskell, and I've been trying to figure
 out if I understand how branding works (warning bells already, I
 know).

 At 
 http://okmij.org/ftp/Computation/lightweight-dependent-typing.html#Branding,
 he states A language with higher-rank types or existential type
 quantification lets us introduce type proxies for the values. We can
 associate a value with some type in such a way that type equality
 entails value equality...

 Then, in the code for eliminating array bounds checking at
 http://okmij.org/ftp/Haskell/eliminating-array-bound-check.lhs, the
 following example of branding is given:

 brand:: (Ix i, Integral i) => Array i e
                   -> (forall s. (BArray s i e, BIndex s i, BIndex s i) -> w)
                   -> w -> w
 brand (a::Array i e) k kempty =
     let (l,h) = bounds a
     in if l <= h then k ((BArray a)::BArray () i e, BIndex l, BIndex h)
     else kempty
 ...

 The function brand has a higher-rank type. It is the existential
 quantification of 's' as well as the absence of BArray constructor
 elsewhere guarantee that the same brand entails the same bounds.

First, Haskell lacks (free) existential type quantifiers, so the above
uses universal quantification.  The usage is equivalent to using an
existential but it is an encoding and so it is important to know what
is being encoded or alternatively to stick just to the actual
universal quantification that is there.  Second, this function encodes
more than is necessary.  It is necessary to use a CPS style for the
encoding of existentials, but the above code also CPS encodes a case
analysis.  Here's an equivalent implementation and a proof that it is
equivalent.  Here and elsewhere I'll set i to Int for simplicity.

{-# LANGUAGE RankNTypes, ScopedTypeVariables, ExistentialQuantification #-}
import Data.Array

newtype BA s e = BArray (A e)
newtype BI s   = BIndex Int

type A e = Array Int e

brand :: A e -> (forall s. (BA s e, BI s, BI s) -> w) -> w -> w
brand (a :: A e) k kempty =
  let (l,h) = bounds a
  in if l <= h then k (BArray a :: BA () e, BIndex l, BIndex h)
     else kempty

brand' :: A e -> (forall s. Maybe (BA s e, BI s, BI s) -> w) -> w
brand' (a :: A e) k = k (if l <= h then Just (BArray a :: BA () e, BIndex l, BIndex h)
                                   else Nothing)
  where (l,h) = bounds a

brandFromBrand' :: A e -> (forall s. (BA s e, BI s, BI s) -> w) -> w -> w
brandFromBrand' a k kempty = brand' a (maybe kempty k)

brand'FromBrand :: A e -> (forall s. Maybe (BA s e, BI s, BI s) -> w) -> w
brand'FromBrand a k = brand a (k . Just) (k Nothing)

Once you see the Maybe it's easier to see what the existential type
would be if Haskell had the appropriate existentials.

brand :: A e -> exists s. Maybe (BA s e, BI s, BI s)
or equivalently
brand :: A e -> Maybe (exists s. (BA s e, BI s, BI s))

Using local existential quantification which GHC supports we can
encode the above and prove it equal to the earlier representations.

data B e = forall s. B (BA s e) (BI s) (BI s)

brand'' :: A e -> Maybe (B e)
brand'' a = if l <= h then Just (B (BArray a) (BIndex l) (BIndex h))
                      else Nothing
  where (l,h) = bounds a

brand''FromBrand' :: A e -> Maybe (B e)
brand''FromBrand' a = brand' a (fmap (\(ba,l,h) -> B ba l h))

brand'FromBrand'' :: A e -> (forall s. Maybe (BA s e, BI s, BI s) -> w) -> w
brand'FromBrand'' a k = case brand'' a of
  Nothing -> k Nothing
  Just (B ba l h) -> k (Just (ba, l, h))

 I think I understand that the fact that the type variable 's' is
 shared between BArray, and BIndex's type constructors in the type
 annotation for brand, that the array and indices share the same
 brand.

Correct.

 I also think I understand that since 's' is existentially quantified
 and a phantom type, it's unique and cannot ever be unified w/ any
 other type?

The phantom type aspect is irrelevant.  s is a phantom type because we
don't need to actually represent it in any way.  As far as the rest of
the paragraph it's somewhat tricky though I think you've got the right
idea.  Whether s can unify with something depends on your perspective.
 In general, when introducing a universal quantifier we need to treat
the quantified variable as a fresh constant that doesn't unify with
anything.  This guarantees that we aren't making any assumptions about
it.  When eliminating a universal quantifier, we do allow unification,
or, when instantiation is explicit, we explicitly are saying what type
to unify the type variable with.  The rules for existentials are dual.
 brand is eliminating a universal and brand'' is equivalently
introducing an existential, within the body of brand/brand'' so s can
and does unify.  In brand it is unified with ().  Consumers of these
will be doing the dual operation and so they must not unify s.

 Additionally, it seems that this is all only within the 

Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-15 Thread Derek Elkins
On Sat, May 15, 2010 at 2:28 PM, Max Cantor mxcan...@gmail.com wrote:
 Where is my bind statement doing a case analysis? Isn't it just propagating, 
 in a sense, the case analysis that came from values coming into the monad via 
 return or via throwError?

What you did was reimplement the Either -type- not the Either -monad-.
 To see this lets make a complete interface to Either and provide the
two implementations of that, now, abstract data type.

Every function using Either can be written using the following interface:
class EitherLike e where
  injLeft  :: a -> e a b
  injRight :: b -> e a b
  either   :: (a -> c) -> (b -> c) -> e a b -> c

And here are two implementations:
instance EitherLike Either where
  injLeft  = Left
  injRight = Right
  either   = Prelude.either

type CEEither a b = forall c. (a -> c) -> (b -> c) -> c

instance EitherLike CEEither where
  injLeft a  = \l r -> l a
  injRight b = \l r -> r b
  either f g e = e f g

Now we can write your functions and the standard Either monad
definitions in terms of this abstract interface.

 retErrCPS ::  a -> ErrCPS e m a
 retErrCPS x = ErrCPS $ \_ good -> good x

return x = Right x

retEither x = injRight x

retErrCPS x = ErrCPS $ injRight x

 bindErrCPS ::  ErrCPS e m b -> (b -> ErrCPS e m a) -> ErrCPS e m a
 bindErrCPS m f =  ErrCPS $ \err good -> runErrCPS m err $ \x -> runErrCPS (f x) err good

bindErrCPS m f = ErrCPS $ either injLeft (runErrCPS . f) (runErrCPS m)

Left e  >>= _ = Left e
Right a >>= f = f a

bindEither m f = either injLeft f m

So, modulo wrapping and unwrapping, the code is identical.  In versions
of GHC prior to pointer tagging, a case analysis on Either would be
implemented very much like the Church-encoded eliminator, i.e. for case
e of { Left a -> f a; Right b -> g b }, pre-pointer-tagging GHC would push
(essentially) f and g on a stack and enter e; e would then choose
which of f or g to return to.  So your representation is still doing a
case analysis, it is just representing it a different way.

 Also, why wouldn't callCC work here?  I'm not that familiar with the ContT 
 monad so any more details would be very much appreciated.

It's hard to implement a global abort with callCC.  You can
implement a local one easily by using an outer callCC to provide an
escape continuation, but you have to explicitly pass around this
escape continuation.
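
For reference, a minimal sketch (editorial, with made-up names) of the
ContT r (Either e) approach, where bind comes from ContT and never inspects
the underlying Either; only throwing and running touch it:

import Control.Monad.Trans.Cont (ContT (..))

type Err e r a = ContT r (Either e) a

throwErr :: e -> Err e r a
throwErr e = ContT $ \_ -> Left e      -- ignore the continuation: global abort

runErr :: Err e a a -> Either e a
runErr m = runContT m Right

example :: Err String r Int
example = do
  x <- return 1
  if x > 0
    then throwErr "stop here"          -- everything after this is skipped
    else return (x + 100)

main :: IO ()
main = print (runErr example)          -- Left "stop here"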
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-15 Thread Derek Elkins
On Sat, May 15, 2010 at 9:20 PM, Antoine Latter aslat...@gmail.com wrote:
 On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com 
 wrote:
 You did it wrong.  All you did was Church encode the Either type.
 Your bind is still doing a case-analysis.  All you have to do is use
 ContT r (Either e).  The bind implementation for ContT is completely
 independent of the underlying monad.  It doesn't even require the m in
 ContT r m to be a functor, let alone a monad.  Therefore the ContT
 bind doesn't do any case-analysis because it doesn't know anything
 about the underlying monad.  One way to look at what is happening is
 to compare it to Andrzej Filinski's work in Representing Monads and
 Representing Layered Monads.


 Here's a bit more fleshed out version of what Derek is talking about,
 for those following along at home:

 http://hpaste.org/fastcgi/hpaste.fcgi/view?id=25515#a25515

 Derek - should I be doing something smarter in 'catch'? I couldn't
 think of anything obvious.

No, that's pretty much what you should be doing; also note, for
conceptual purposes, that the const (Left e) is equivalent to (Left e
>>=).  In Representing Monads, to actually perform an effect it gets
reified back into a data structure, in this case Either e a,
manipulated as appropriate, then reflected back into an implicit
effect.  The reify function is just applying to the identity
continuation so your catch can be written more clearly.

reify :: Monad m => ContT r m r -> m r
reify m = runContT m return

catch :: (e -> Error e a) -> Error e a -> Error e a
catch handler m = case reify (unE m) of Left e -> handler e; Right a -> return a
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-14 Thread Derek Elkins
You did it wrong.  All you did was Church encode the Either type.
Your bind is still doing a case-analysis.  All you have to do is use
ContT r (Either e).  The bind implementation for ContT is completely
independent of the underlying monad.  It doesn't even require the m in
ContT r m to be a functor, let alone a monad.  Therefore the ContT
bind doesn't do any case-analysis because it doesn't know anything
about the underlying monad.  One way to look at what is happening is
to compare it to Andrzej Filinski's work in Representing Monads and
Representing Layered Monads.

On Mon, May 10, 2010 at 4:38 AM, Max Cantor mxcan...@gmail.com wrote:
 Based on some discussions in #haskell, it seemed to be a consensus that using 
 a modified continuation monad for Error handling instead of Eithers would be 
 a significant optimization since it would eliminate a lot of conditional 
 branching (every time >>= is called in the Either monad, there is a
 conditional).

 I implemented a ErrCPS monad which does exactly that, but the speed has been 
 disappointing.  It runs almost exactly 3x slower than a drop in replacement 
 using the MonadError instance of Either from mtl.

 mkEMA and midError are basically toy functions but I dont know why Either is 
 so much faster.  I've experimented with putting some seq's in the bindErrCPS 
 and even {-# INLINE (=) #-} in the Monad instance, but to no avail.

 I've copy/pasted the code below, any suggestions on optimization, or if this 
 is simply a bad idea would be much appreciated.  Strangely, compiling with 
 -O2 seems to have no effect on the speed:


 -Max


 {-# LANGUAGE MultiParamTypeClasses #-}
 {-# LANGUAGE FlexibleInstances #-}
 {-# LANGUAGE FlexibleContexts #-}
 {-# LANGUAGE Rank2Types #-}
 module Main  where

 import Control.Applicative
 import Control.Monad.Error -- hiding (foldM)
 import Control.Monad.Trans
 import Control.Monad hiding (foldM)
 import System.Random
 import Control.Monad.Identity (runIdentity, Identity)
 import Control.Monad.Reader.Class
 import Data.Time.LocalTime as Time -- for benchmarking
 import Data.Time.Calendar (Day)
 import Data.Time.LocalTime (getZonedTime)


 midError :: MonadError String m => Double -> Double -> m Double
 midError a b = if (b < 1) then throwError "check val"
                              else let r = (a + b) / 2 in r `seq` (return r)
 mkEMA l = foldM midError 1 l


 newtype ErrCPS e m a = ErrCPS { runErrCPS :: forall r . (e -> m r)  -- error handler
                                           -> (a -> m r)             -- success handler
                                           -> m r }



 {-# INLINE retErrCPS  #-}
 retErrCPS ::  a -> ErrCPS e m a
 retErrCPS x = ErrCPS $ \_ good -> good x

 {-# INLINE bindErrCPS  #-}
 bindErrCPS ::  ErrCPS e m b -> (b -> ErrCPS e m a) -> ErrCPS e m a
 bindErrCPS m f =  ErrCPS $ \err good -> runErrCPS m err $ \x -> runErrCPS (f x) err good

 instance Monad m => Monad (ErrCPS e m)  where
   return = retErrCPS
   (>>=) = bindErrCPS



 main :: IO ()
 main = do
   let n = 50
       runEither e b g = either b g e
       runTest f s = do
         sg <- newStdGen
         let l = take n $ randomRs (2, 5) sg
         mapM_ (\e -> e `seq` return ()) l
         stopwatch $ f (mkEMA l)
                       (putStr . show)
                       (putStr . (s ++) . show)

   forever $ do runTest runEither "either: "
                runTest runErrCPS "errCPS: "





 ErrCPS based code seems to run almost exactly 3x slower than the
 Either based code:
  errCPS:  37453.226  Action ran in: 30 msec
  either:  26803.055  Action ran in: 11 msec
  errCPS:  15840.626  Action ran in: 34 msec
  either:  32556.881  Action ran in: 10 msec
  errCPS:  38933.121  Action ran in: 30 msec
  either:  35370.820  Action ran in: 11 msec
  ...






 instance (Error e, Monad m) => MonadError e (ErrCPS e m) where
   throwError = errCPS
   catchError m f = ErrCPS $ \err good -> runErrCPS m (\e -> runErrCPS (f e) err good) good


 -- * MTL stuff
 instance MonadTrans (ErrCPS e ) where lift m = ErrCPS $ \_ good -> m >>= good
 instance (MonadIO m) => MonadIO (ErrCPS e m ) where liftIO = lift . liftIO


 -- Random utility stuff

 stopwatch :: IO () -> IO ()
 stopwatch act = do
   t1 <- getFastTimeOfDay
   act
   t2 <- getFastTimeOfDay
   putStrLn $ "  Action ran in: " ++ show (t2 - t1) ++ " msec"
 type FastTimeOfDay = Int

 -- | Return the current trading day.  This should respect the
 --   fact that the Trading Day ranges from
 --   SingTime 6am (UTC -02:00) to SST 5:59 am (UTC -1:59).
 getTradingDay :: IO Day
 getTradingDay = error "getTradingDay undefined"

 getFastTimeOfDay :: IO FastTimeOfDay
 getFastTimeOfDay = getZonedTime >>=
                    (return . fastFromTimeOfDay . Time.localTimeOfDay . Time.zonedTimeToLocalTime)

 timeOfDayFromFast :: FastTimeOfDay -> Time.TimeOfDay
 timeOfDayFromFast fast = Time.TimeOfDay
   { Time.todHour = fromIntegral (fast `div` (3600 * 1000))
   , Time.todMin =  fromIntegral (fast `div` (60 * 1000)) `mod` 60
   , Time.todSec = 

Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-14 Thread Derek Elkins
On Fri, May 14, 2010 at 4:53 PM, Antoine Latter aslat...@gmail.com wrote:
 On Fri, May 14, 2010 at 4:25 PM, Derek Elkins derek.a.elk...@gmail.com 
 wrote:
 You did it wrong.  All you did was Church encode the Either type.
 Your bind is still doing a case-analysis.  All you have to do is use
 ContT r (Either e).  The bind implementation for ContT is completely
 independent of the underlying monad.  It doesn't even require the m in
 ContT r m to be a functor, let alone a monad.  Therefore the ContT
 bind doesn't do any case-analysis because it doesn't know anything
 about the underlying monad.  One way to look at what is happening is
 to compare it to Andrzej Filinski's work in Representing Monads and
 Representing Layered Monads.


 Would you then use callCC to get the required short-circuit-on-error behavior?

 A church encoding of Either coded as a monad transformer still
 wouldn't hit the inner monad on bind, even if it is weaving the left
 and right continuations together.

callCC wouldn't work well here.  What would work better is another
control operator commonly called 'control' which does not resume if
the passed in continuation isn't invoked.  However, it's usually even
clearer (or at least more concise) in these situations to work with
the continuation passing style directly.

-- fail directly using CPS
fail :: String -> ContT r (Either String) a
fail s = ContT $ \k -> Left s
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is TChan GHC specific?

2010-05-13 Thread Derek Elkins
On Thu, May 13, 2010 at 10:49 AM, Edward Amsden eca7...@cs.rit.edu wrote:
 On Wed, May 12, 2010 at 3:29 PM, Peter Robinson thaldy...@gmail.com wrote:
 As far as I know, TChan needs the 'retry' combinator which requires GHC's 
 RTS.
 Same is true for TMVar, I think.

 (sorry for the doubling peter, I forgot reply-all)

 OK. I'm new to this and probably didn't know where to look, but I
 didn't know that 'retry' was GHC specific.

All of STM (Software Transactional Memory) is GHC-specific.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Domains and Co-Domains

2010-03-29 Thread Derek Elkins
2010/3/29 Günther Schmidt gue.schm...@web.de:
 Hi,

 I can easily see how one identifies the domain and co-domain of a unary
 function.

 How would the domain of a function be expressed that takes more than one
 argument and arguments of different type?

All functions in Haskell are unary.
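
A small sketch (editorial) of what that means in practice: a "two-argument"
function is a unary function whose codomain is itself a function type, and a
genuinely multi-argument domain can be expressed as a product.

f :: Int -> Bool -> String       -- domain Int, codomain (Bool -> String)
f n b = show n ++ " " ++ show b

g :: Bool -> String              -- partial application exposes the unary structure
g = f 3

fUncurried :: (Int, Bool) -> String
fUncurried = uncurry f

main :: IO ()
main = putStrLn (fUncurried (3, True))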
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] ANNOUNCE: Parsec 3.1.0

2010-03-03 Thread Derek Elkins
Parsec is a monadic combinator library that is well-documented, simple
to use, and produces good error messages.   Parsec is not inherently
lazy/incremental and is not well-suited to handling large quantities
of simply formatted data.  Parsec 3 adds to Parsec the ability to use
Parsec as a monad transformer and generalizes the input Parsec
accepts.  Parsec 3 includes a compatibility layer for Parsec 2 and
should be a drop-in replacement for code using Parsec 2.  Code using
the features of Parsec 3 should use the modules in Text.Parsec.

Due almost entirely to the work of Antoine Latter there is a new
version of Parsec 3 available.  He documented some of his thoughts on
this in this series of blog posts:
http://panicsonic.blogspot.com/2009/12/adventures-in-parsec.html

The main features of this release are:
- the performance should be much better and comparable to Parsec 2
- notFollowedBy's type and behavior have been generalized

Changes:
- the changes to the core of Parsec lead to some changes to when
things get executed when it is used as a monad transformer
In the new version bind, return and mplus no longer run in
the inner monad, so if the inner monad was side-effecting for these
actions the behavior of existing code will change.
- notFollowedBy p now behaves like notFollowedBy (try p) which
changes the behavior slightly when p consumes input, though the
behavior should be more natural now.
- the set of names exported from Text.Parsec.Prim has changed somewhat
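
A small usage sketch (an editorial example, not from the release notes) of
notFollowedBy via the Text.Parsec modules:

import Text.Parsec
import Text.Parsec.String (Parser)

-- Accept the keyword "let" only when it is not a prefix of a longer identifier.
keywordLet :: Parser ()
keywordLet = do
  _ <- string "let"
  notFollowedBy alphaNum

main :: IO ()
main = do
  print (parse keywordLet "" "let x = 1")   -- Right ()
  print (parse keywordLet "" "letter")      -- Left (a parse error)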
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] ANNOUNCE: Parsec 3.1.0

2010-03-03 Thread Derek Elkins
Parsec is a monadic combinator library that is well-documented, simple
to use, and produces good error messages.   Parsec is not inherently
lazy/incremental and is not well-suited to handling large quantities
of simply formatted data.  Parsec 3 adds to Parsec the ability to use
Parsec as a monad transformer and generalizes the input Parsec
accepts.  Parsec 3 includes a compatibility layer for Parsec 2 and
should be a drop-in replacement for code using Parsec 2.  Code using
the features of Parsec 3 should use the modules in Text.Parsec.

Due almost entirely to the work of Antoine Latter there is a new
version of Parsec 3 available.  He documented some of his thoughts on
this in this series of blog posts:
http://panicsonic.blogspot.com/2009/12/adventures-in-parsec.html

The main features of this release are:
- the performance should be much better and comparable to Parsec 2
- notFollowedBy's type and behavior have been generalized

Changes:
- the changes to the core of Parsec lead to some changes to when
things get executed when it is used as a monad transformer
In the new version bind, return and mplus no longer run in
the inner monad, so if the inner monad was side-effecting for these
actions the behavior of existing code will change.
- notFollowedBy p now behaves like notFollowedBy (try p) which
changes the behavior slightly when p consumes input, though the
behavior should be more natural now.
- the set of names exported from Text.Parsec.Prim has changed somewhat
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Real-time garbage collection for Haskell

2010-02-28 Thread Derek Elkins
On Sun, Feb 28, 2010 at 10:03 AM, Heinrich Apfelmus
apfel...@quantentunnel.de wrote:
 Luke Palmer wrote:
 I have seen some proposals around here for SoC projects and other
 things to try to improve the latency of GHC's garbage collector.  I'm
 currently developing a game in Haskell, and even 100ms pauses are
 unacceptable for a real-time game.  I'm calling out to people who have
 seen or made such proposals, because I would be willing to contribute
 funding and/or mentor a project that would contribute to this goal.

 Also any ideas for reducing this latency in other ways would be very
 appreciated.

 Overly long garbage collection might also be a sign of space leaks.

 But there are many other things that can go wrong in a real time system
 and might explain your delays. For example, you might need to avoid
 amortized time data structures like  Data.Sequence . Or for physics
 simulations, you'd need to fix the time step ∆t, as described in

   http://gafferongames.com/game-physics/fix-your-timestep/

 or numerical integration will deteriorate rather quickly.

Incidentally, what's described there is a simplified version of the
frequency locked loops described in Massalin's thesis on Synthesis OS,
and it is used there for about the same purpose in a soft real-time
context.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.4871
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Naive booleans and numbers - type-checking fails

2010-01-24 Thread Derek Elkins
On Sun, Jan 24, 2010 at 3:12 PM, Stephen Tetley
stephen.tet...@gmail.com wrote:
 Doesn't the simply typed lambda calculus introduce if-then-else as a
 primitive precisely so that it can be typed?

 It's not an illuminating answer to your question and I'd welcome
 clarification for my own understanding, but I don't think you can
 solve the problem without appealing to Haskell's built-in
 if-then-else.

Yes, encoding data types as pure typed lambda terms requires rank-2
types.  I'd recommend that Dušan Kolář start giving types to all these
functions.  However, it will, eventually, be necessary to go beyond
Haskell 98 to give the appropriate types.
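
A minimal sketch (editorial) of the point, using Church booleans: the encoding
needs a rank-2 type because a boolean must be usable at every result type at once.

{-# LANGUAGE RankNTypes #-}

type CBool = forall a. a -> a -> a

true, false :: CBool
true  t _ = t
false _ f = f

cIf :: CBool -> a -> a -> a     -- rank-2: the first argument is itself polymorphic
cIf b t e = b t e

main :: IO ()
main = do
  putStrLn (cIf true  "then-branch" "else-branch")
  print    (cIf false (1 :: Int) 2)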
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: could we get a Data instance for Data.Text.Text?

2010-01-23 Thread Derek Elkins
On Sat, Jan 23, 2010 at 4:57 PM, Jeremy Shaw jer...@n-heptane.com wrote:
  On Sat, Jan 23, 2010 at 7:57 AM, Neil Mitchell ndmitch...@gmail.com
 wrote:


 No, that's definitely not correct, or even remotely scalable as we
 increase the number of abstract types in disparate packages.

 Yes.. happstack is facing another aspect of this scalability issue as well.
 We have a class, Serialize, which is used to serialize and deserialize data.
 It builds on the binary library, but adds the ability to version your data
 types and migrate data from older versions to newer versions.
 This has a serious scalability issue though, because it requires that each
 type a user might want to serialize has a Serialize instance.
 So do we:
   1. provide Serialize instances for as many data types from libraries on
 hackage as we can, resulting in depending on a large number of packages that
 people are required to install, even though they will only use a small
 fraction of them.
   2. convince people that Serialize deserves the same status as Data, and
 then convince authors to create Serialize instances for their type? It would
 be nice, but authors will start complaining if they are asked to provide a
 zillion other instances for their types as well. And they will be annoyed if
 their library has to depend on a bunch of other libraries, just so they
 can provide some instances that only a small fraction of their users might
 use. So, this method does not scale as the number of 'interesting' classes
 grows.
   3. let individual users define the Serialize instances as they need them.
 Unfortunately, if two different library authors defined a Serialize instance
 for Text in their libraries, you could not use both libraries in your
 application because of the conflicting Serialize instances. So this method
 does not scale when the number of libraries using the Serialize class grows.
 Not really sure what the work around is. #1 could work if there was some way
 to just selectively install the pieces as you need them. But the only way to
 do this now would be to create a lot of cabal packages which just defined a
 single instance -- happstack-text, happstack-map, happstack-time,
 happstack-etc. One for each package that has types we want to create a
 serialization instance for...
 Any other suggestions?
 - jeremy

The only safe rule is: if you don't control the class, C, or you don't
control the type constructor, T, don't make instance C T.  Application
writers can often relax that rule as the set of dependencies for the
whole application is known and in many cases any reasonable instance
for a class C and constructor T is acceptable.  Under those
conditions, the worst-case scenario is that the application writer may
need to remove an instance declaration when migrating to new versions
of the dependencies.  When you control a class C, you should make as
many (relevant) type constructors instances of it as is reasonably
possible, i.e. without adding any extensive dependencies.  So at the
very least, all standard type constructors.  Similarly for those who
control a type constructor T.  This is for convenience.  These
correspond to solutions #1 and #2 only significantly weakened.
Definitely, making a package depend on tons of other packages just to
add instances is NOT the correct solution.

The library writers depending on a package for a class and another
package for a type are the problem case.  There are three potential
solutions in this case which basically are reduce the problem to one
of the above three cases.  Either introduce a new type and add it to a
class, introduce a new class and add the types to it, or try to push
the resolution of such things onto the application writer.  The first
two options have the benefit that they also protect you from the
upstream libraries introducing instances that won't work for you.
These two options have the drawback that they are usually less
convenient to use.  The last option has the benefit that it usually
corresponds to having a more flexible/generic library, in some cases
you can even go so far as to remove your dependence on the libraries
altogether.

One solution to this problem though it can't be done post-hoc usually,
is to simply not use the class mechanism except as a convenience.
This has the benefit that it usually leads to more flexibility and it
helps to realize the third option above.  Using Monoid as an example,
one can provide functions of the form: f :: m -> (m -> m -> m) -> ...
and then also provide f' = f mempty mappend :: Monoid m => ...  The
parameters can be collected into a record as well.  You could even
systematize this into: class C a where getCDict :: CDict a, and then
write f :: CDict a -> ... and f' = f getCDict :: C a => ...
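
A minimal sketch (names are editorial, purely illustrative) of that
dictionary-as-record pattern, using Monoid:

data MonoidDict m = MonoidDict
  { mdEmpty  :: m
  , mdAppend :: m -> m -> m
  }

-- The worker takes an explicit dictionary...
mconcatWith :: MonoidDict m -> [m] -> m
mconcatWith d = foldr (mdAppend d) (mdEmpty d)

-- ...and the class-based version is just a convenience wrapper.
mconcat' :: Monoid m => [m] -> m
mconcat' = mconcatWith (MonoidDict mempty mappend)

main :: IO ()
main = do
  print (mconcatWith (MonoidDict 0 (+)) [1, 2, 3 :: Int])  -- sum, no instance needed
  putStrLn (mconcat' ["a", "b", "c"])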

Whatever one does, do NOT add instances of type constructors you don't
control to classes you don't control.  This can lead to cases where
two libraries can't be used together at all.
___

Re: [Haskell-cafe] Re: Why no merge and listDiff?

2010-01-22 Thread Derek Elkins
On Wed, Jan 20, 2010 at 9:42 AM, Will Ness will_...@yahoo.com wrote:
 Derek Elkins derek.a.elkins at gmail.com writes:
 On Sun, Jan 17, 2010 at 2:22 PM, Will Ness will_n48 at yahoo.com wrote:
  Hello cafe,
 
  I wonder, if we have List.insert and List.union, why no List.merge (:: Ord a =>
  [a] -> [a] -> [a]) and no List.minus? These seem to be pretty general
  operations.

 Presumably by List.minus you mean the (\\) function in Data.List.

 No, it has to search its second list over and over from the start, to be able
 to deal with unordered lists, so its performance can't be good.

Then use the ordlist one.

 You
 probably also want to look at the package data-ordlist on hackage
 (http://hackage.haskell.org/packages/archive/data-ordlist/0.0.1/doc/html/Data-
 OrdList.html)
 which represents sets and bags as ordered lists and has all of the
 operations you mention.


 I did, thanks again! Although, that package deals with non-decreasing lists,
 i.e. lists with multiples possibly. As such, its operations produce non-
 decreasing lists, i.e. possibly having multiples too.

It is clear that some of the operations are guaranteed to produce sets
given sets.  The documentation could be better in this regard though.

 I meant strictly increasing ordered lists, without multiples, for which the two
 operations, 'merge' and 'minus', would also have to produce like lists, i.e
 strictly increasing, without multiples.

The 'union' and 'minus' functions of ordlist meet this requirement if
you satisfy the preconditions.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why no merge and listDiff?

2010-01-17 Thread Derek Elkins
On Sun, Jan 17, 2010 at 2:22 PM, Will Ness will_...@yahoo.com wrote:
 Hello cafe,

 I wonder, if we have List.insert and List.union, why no List.merge (:: Ord a =>
 [a] -> [a] -> [a]) and no List.minus ? These seem to be pretty general
 operations.

Presumably by List.minus you mean the (\\) function in Data.List.  You
probably also want to look at the package data-ordlist on hackage
(http://hackage.haskell.org/packages/archive/data-ordlist/0.0.1/doc/html/Data-OrdList.html)
which represents sets and bags as ordered lists and has all of the
operations you mention.

 Brief look into haskell-prime-report/list.html reveals nothing.

 Could we please have them?

The trend is to remove things from standard libraries and to push
them more to 3rd party libraries hosted on hackage.

 On the wider perspective, is there a way to declare an /ordered/ list on the
 type level (e.g. [1,2,3] would be one, but not [2,3,1])? Non-decreasing lists?
 Cyclical, or of certain length? What are such types called?

There are a few ways to encode such things.  The most direct route is
to use dependent types as Miguel mentioned, but Haskell has no support
for those.  See languages like Agda or Coq.  Another approach is to
use a type that specifically represents what you want and nothing
else.  For example, a list of a set length is just a tuple.  It is
easy to make a type that represents cyclic lists.  Finally, the most
general method is to use an abstract data type to maintain the
invariant.  It is trivial to handle ordered/non-decreasing lists in
this way.  One note about the dependent types route is that the
ability to assert arbitrary properties brings with it the
responsibility to prove them later on.  So you can make a function
which only accepts ordered lists, but then the users need to pass in a
proof that their lists are ordered when they use such functions and
these proofs can be a significant burden.
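
As a small sketch of the abstract data type route for ordered lists
(assuming the OrderedList constructor is hidden by the module's export
list; the names are illustrative only):

import qualified Data.List as L

-- Invariant: the wrapped list is strictly increasing.
newtype OrderedList a = OrderedList { toList :: [a] }

-- Smart constructor: the only way to build one from outside the module,
-- so the invariant always holds.
fromList :: Ord a => [a] -> OrderedList a
fromList = OrderedList . map head . L.group . L.sort

insert :: Ord a => a -> OrderedList a -> OrderedList a
insert x (OrderedList xs) = OrderedList (go xs)
  where
    go []       = [x]
    go l@(y:ys)
      | x < y     = x : l
      | x == y    = l
      | otherwise = y : go ys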
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lawless instances of Functor

2010-01-04 Thread Derek Elkins
On Tue, Jan 5, 2010 at 7:49 AM, Steffen Schuldenzucker
sschuldenzuc...@uni-bonn.de wrote:
 Hi Paul,

 Paul Brauner wrote:
 Hi,

 I'm trying to get a deep feeling of Functors (and then pointed Functors,
 Applicative Functors, etc.). To this end, I try to find lawless
 instances of Functor that satisfy one law but not the other.

 I've found one instance that satisfies fmap (f.g) = fmap f . fmap g
 but not fmap id = id:
 [...]
 But I can't come up with an example that satifies law 1 and not law 2.
 I'm beginning to think this isn't possible but I didn't read anything
 saying so, neither do I manage to prove it.

 I'm sure someone knows :)

 data Foo a = Foo a

 instance Functor Foo where
    fmap f (Foo x) = Foo . f . f $ x

And what is the type of f here?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lawless instances of Functor

2010-01-04 Thread Derek Elkins
On Tue, Jan 5, 2010 at 7:14 AM, Paul Brauner paul.brau...@loria.fr wrote:
 Hi,

 I'm trying to get a deep feeling of Functors (and then pointed Functors,
 Applicative Functors, etc.). To this end, I try to find lawless
 instances of Functor that satisfy one law but not the other.

 I've found one instance that satisfies fmap (f.g) = fmap f . fmap g
 but not fmap id = id:

 data Foo a = A | B

 instance Functor Foo where
  fmap f A = B
  fmap f B = B

 -- violates law 1
 fmap id A = B

 -- respects law 2
 fmap (f . g) A = (fmap f . fmap g) A = B
 fmap (f . g) B = (fmap f . fmap g) B = B

 But I can't come up with an example that satifies law 1 and not law 2.
 I'm beginning to think this isn't possible but I didn't read anything
 saying so, neither do I manage to prove it.

 I'm sure someone knows :)

Ignoring bottoms the free theorem for fmap can be written:

If h . p = q . g then fmap h . fmap p = fmap q . fmap g
Setting p = id gives
h . id = h = q . g  implies  fmap h . fmap id = fmap q . fmap g
Using fmap id = id and h = q . g we get,
fmap h . fmap id = fmap h . id = fmap h = fmap (q . g) = fmap q . fmap g

So without doing funky stuff involving bottoms and/or seq, I believe
that fmap id = id implies the other functor law (in this case, not in
the case of the general categorical notion of functor.)
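
A sketch of that "funky stuff" caveat (this example is not from the
thread): with seq one can write an instance that satisfies fmap id = id
and yet breaks the composition law when handed an undefined function.

data Foo a = Foo a

instance Functor Foo where
  fmap f (Foo x) = f `seq` Foo (f x)

-- fmap id (Foo x) = Foo x, so the identity law holds, but with
-- g = undefined and f = const ():
--   fmap (f . g) (Foo x)      = Foo ()   (f . g is a lambda, so seq succeeds)
--   (fmap f . fmap g) (Foo x) = _|_      (the inner fmap forces g itself)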
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Re: Data.Ring -- Pre-announce

2010-01-04 Thread Derek Elkins
On Tue, Jan 5, 2010 at 5:13 AM, Maciej Piechotka uzytkown...@gmail.com wrote:
 On Mon, 2010-01-04 at 07:17 -0700, Luke Palmer wrote:
 On Mon, Jan 4, 2010 at 6:51 AM, Maciej Piechotka uzytkown...@gmail.com 
 wrote:
  About comonad - not exactly as every comonad is copointed and the only
  possible way is extract Empty = _|_

 I think this module could be cleaned up by disallowing empty lists.
 You have this nice semantic property that every clist has a focus,
 but when you add empty you have to add "unless it's empty".  focus
 returns a Maybe, isEmpty is necessary.

 I mean, it could be that your use case requires empty clists and would
 be uglier without empty, but think about it.  I find in Haskell that
 simplicity breeds simplicity; i.e. I'm willing to wager that whatever
 algorithm you are using clist for will actually be cleaner if you got
 rid of empty and modify the algorithm accordingly.  We shall see
 though...

 Luke

 However then we lost the monoid (ok. I haven't send patch but please
 accept[1]) along with alternative/monad plus - which is much more
 popular, standard and useful then Copointed/Comonad.

 Additionally it would introduce:
 fromList [] = _|_

This isn't a big deal, it just means fromList is not appropriate
(which it is not; it should be fromNonEmptyList in this case).  We can,
of course, also simply return Maybe (NonEmptyCList a), which works
out.

 Is is somehow similar to 0 \in N - sometimes it is better to include it
 sometimes to not include it.

 Regards

 [1]
 instance Monoid (CList a) where
   mempty = Empty
   mappend = mplus

This is a bigger issue; however, given a type with an associative
binary operation, a semigroup, we can complete it to a monoid using a
Maybe-like type constructor to formally attach a unit.

data AddUnit a = Unit | Value a

class SemiGroup a where op :: a -> a -> a -- associative

instance (SemiGroup a) => Monoid (AddUnit a) where
    mempty = Unit
    Unit `mappend` y = y
    x `mappend` Unit = x
    Value x `mappend` Value y = Value (x `op` y)

We then have,
type CList a = AddUnit (NonEmptyCList a)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lawless instances of Functor

2010-01-04 Thread Derek Elkins
On Tue, Jan 5, 2010 at 8:15 AM, Dan Piponi dpip...@gmail.com wrote:
 On Mon, Jan 4, 2010 at 3:01 PM, Derek Elkins derek.a.elk...@gmail.com wrote:

 Ignoring bottoms the free theorem for fmap can be written:

 If h . p = q . g then fmap h . fmap p = fmap q . fmap g

 When I play with http://haskell.as9x.info/ft.html I get examples that
 look more like:

 If fmap' has the same signature as the usual fmap for a type

 and h . p = q . g

 then fmap h . fmap' p = fmap' q . fmap g

 From which it follows that if fmap' id = id then fmap' is fmap.

It should not be necessary to prove this as fmap has the appropriate
type to be fmap' and therefore fmap' can simply be set to fmap.

 But I don't know how to prove that uniformly for all types, just the
 ones I generated free theorems for.

Yes, I have the same problem.  I generated a few examples using pretty
much that site and generalized, but I haven't proven the general
statement, though I'm pretty confident that it holds.  Basically, I'm
pretty sure the construction of that free theorem doesn't rely on any
of the actual details of the type constructor and probably by using a
higher-order notion of free theorem this could be formalized and then
used to prove the above result.  At this point, though, I haven't put
much effort into proving that the free theorem holds uniformly
(enough.)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lawless instances of Functor

2010-01-04 Thread Derek Elkins
On Tue, Jan 5, 2010 at 8:22 AM, Brent Yorgey byor...@seas.upenn.edu wrote:
 On Mon, Jan 04, 2010 at 11:49:33PM +0100, Steffen Schuldenzucker wrote:

 data Foo a = Foo a

 instance Functor Foo where
     fmap f (Foo x) = Foo . f . f $ x

 Then:

 fmap id (Foo x) == Foo . id . id $ x == Foo x

 fmap (f . g) (Foo x)      == Foo . f . g . f . g $ x
 fmap f . fmap g $ (Foo x) == Foo . f . f . g . g $ x

 Now consider Foo Int and

 fmap ((+1) . (*3)) (Foo x)      == Foo $ (x * 3 + 1) * 3 + 1
     == Foo $ x * 9 + 4
 fmap (+1) . fmap (*3) $ (Foo x) == Foo $ x * 3 * 3 + 1 + 1
     == Foo $ x * 9 + 2

 As others have pointed out, this doesn't typecheck; but what it DOES
 show is that if we had a type class

  class Endofunctor f where
    efmap :: (a -> a) -> f a -> f a

As an aside, for clarity, this class does NOT correspond to the
categorical notion of endofunctor.  I don't think any such
identification was Brent's intent, I just want to avoid potential
confusion.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How Can Haskell Be Saved?

2009-12-13 Thread Derek Elkins
 What does Haskell need to be saved from?
 (Its growing popularity and mushrooming library?)

Arguably John Earle's emails suggest that the answer to this is Yes.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] From function over expression (+, *) derive function over expression (+)

2009-12-04 Thread Derek Elkins
On Fri, Dec 4, 2009 at 11:26 AM, Radek Micek radek.mi...@gmail.com wrote:
 Hello.

 I have two types for expression:

 data Expr = Add Expr Expr | Mul Expr Expr | Const Int

 data AExpr = AAdd AExpr AExpr | AConst Int

 The first one supports addition and multiplication and the second
 only addition.

 I can write a function to simplify the first expression:

 simplify :: Expr -> Expr
 simplify = {- replaces:
 a*1 and 1*a by a,
 a+0 and 0+a by a -}

 And I would like to use the function simplify for the second type
 AExpr. What can I do is to convert AExpr to Expr, simplify it and
 convert it back. But I don't like this solution because
 conversions take some time.

 I would prefer following: I say to the compiler that AAdd is like Add
 and AConst is like Const and the compiler derives function
 asimplify for AExpr.

 Is it possible to do this? In fact I want to have two distinct types
 where one is extension of the second (Expr is extension of AExpr)
 and I want to define a function for the extended type (Expr) and
 use it for the original type (AExpr). I assume that the function won't
 introduce Mul to the expression which had no Mul.

What you'd ideally want is called refinement types, which Haskell does
not have and, as far as I know, no practical language has.  There is a
paper about a way to encode these, but it is fairly heavy-weight.  You
could use phantom type trickery to combine the data types into one type
but still statically check that only additive expressions are passed to
certain functions, though that too is probably more trouble than
it's worth.
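
A rough sketch of the phantom/GADT flavour of this (the index names
Additive and Full are invented here; this is one possible encoding,
not the paper's):

{-# LANGUAGE GADTs, EmptyDataDecls #-}

data Additive
data Full

data Exp t where
  Const :: Int -> Exp t
  Add   :: Exp t -> Exp t -> Exp t
  Mul   :: Exp Full -> Exp Full -> Exp Full

-- One simplifier serves both fragments; an additive-only expression
-- simply never reaches the Mul case.
simplify :: Exp t -> Exp t
simplify (Add a b) = case (simplify a, simplify b) of
  (Const 0, b') -> b'
  (a', Const 0) -> a'
  (a', b')      -> Add a' b'
simplify (Mul a b) = case (simplify a, simplify b) of
  (Const 1, b') -> b'
  (a', Const 1) -> a'
  (a', b')      -> Mul a' b'
simplify e = e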
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Optimizing Parsec 3 -- was: Wiki software?

2009-12-04 Thread Derek Elkins
On Fri, Dec 4, 2009 at 11:01 PM, Evan Laforge qdun...@gmail.com wrote:
 On Fri, Dec 4, 2009 at 1:09 PM, John MacFarlane j...@berkeley.edu wrote:
 On Mon, Nov 23, 2009 at 12:29 PM, Antoine Latter aslat...@gmail.com wrote:

 I finally had some time to test it.  After running it multiple times
 (of course, it would be nice to use criterion here), I'm getting
 numbers in this neighborhood:


 I used criterion to compare pandoc compiled with parsec2 to
 pandoc compiled with your version of parsec3.  (The benchmark
 is converting testsuite.txt from markdown to HTML.) The difference was
 minor:

 Very nice, I was interested in parsec 3 but scared off by the reports
 of slowness, as I'm sure many others were.

 Is there any document out there describing the differences between 2
 and 3?  I gathered 3 allows more flexibility wrt the input, so you can
 more easily use ByteString or Text, but it would be nice to have a doc
 saying what the new features are and why we should be interested in
 upgrading.

Basically, the main (only) significant changes are that Parsec 3
provides a monad transformer rather than just a monad and the input
has been generalized to take arbitrary Streams rather than lists of
tokens.
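
A small sketch of what that buys you (assuming Parsec 3's Text.Parsec
and its ByteString Stream instance; the function names here are made up):

{-# LANGUAGE FlexibleContexts #-}
import Text.Parsec
import Text.Parsec.ByteString ()          -- Stream instance for strict ByteString
import qualified Data.ByteString.Char8 as B

-- Written once against the Stream class...
number :: Stream s m Char => ParsecT s u m Int
number = fmap read (many1 digit)

-- ...and usable over ordinary Strings or ByteStrings alike.
overString :: Either ParseError Int
overString = parse number "<string>" "42abc"

overBytes :: Either ParseError Int
overBytes = parse number "<bytes>" (B.pack "42abc")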

 The old parsec docs were out of date even for parsec 2, and looks like
 they haven't been updated.  The new ones look like they use haddock
 which is great, that was a gripe I had about the old doc.  However,
 the haddock docs are less friendly than the old doc.  So my suggestion
 is to paste the old introduction (with Daan permission, of course) or
 something similar into the Text.Parsec description field, along with
 links to more detailed descriptions and tutorial in the style of v2 on
 haskell.org along with a 2 vs. 3 doc, even if they're sketchy and
 brief.

The Parsec Letter applies to Parsec 3 readily.  The only thing that
needs changing is the module names and possibly one or two function
names which the haddock documentation should readily point out.  The
letter obviously does not cover the new features of Parsec 3; for that
there is only the haddock at this point.

It would be nice to update the Parsec letter but that would ideally
require the document source and necessarily require Daan's permission.
 Unfortunately, no one has been able to get in touch with Daan on this
issue to my knowledge.

 Or if it's ok I could just send some darcs patches :)

You can certainly email me patches and I'll likely apply them if there
are no copyright or licensing issues.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Great Programs to Read?

2009-11-30 Thread Derek Elkins
On Mon, Nov 30, 2009 at 6:22 AM, Michael Lesniak mlesn...@uni-kassel.de wrote:
 Hello,

 In terms of

  to become a great programmer, you need to read great programs[1]

 what are great programs written in Haskell (for your personal
 definition of great), which source code is freely available on hackage
 or somewhere else on the net?

 I'm personally also interested in your definitions of great; for me, a
 great program is defined by one of

 * good and well-written documentation
  (literate Haskell helps a lot)
 * novel ideas to use functional programming
 * elegance
 * showing how functional programming can ease tasks that
  are difficult to achieve in an imperative style

 Maybe we should create a Page on haskell.org (which I would do if I
 had write-access) mirroring the pages [2,3]?

 Kind regards,
 Michael

 [1] http://c2.com/cgi/wiki/Wiki?ReadGreatPrograms
 [2] http://c2.com/cgi/wiki/Wiki?GreatProgramsToRead
 [3] http://c2.com/cgi/wiki/Wiki?ProgramsToRead

The functional pearls are pretty much specifically designed to do all
the things you mention.  See
http://haskell.org/haskellwiki/Research_papers/Functional_pearls
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: implementing recursive let

2009-11-26 Thread Derek Elkins
On Wed, Nov 25, 2009 at 3:48 PM, Ben Franksen ben.frank...@online.de wrote:
 Derek Elkins wrote:
 The following code works fine for me, so it seems you are missing some
 details that may help.
 [...snip code...]

 Thank you! Indeed I did simplify the code when writing the message --
 because I thought that those other bits could not possibly be at
 fault... ;-)

 *trying out many changes to my own code and yours*

 Ok, I finally found it. What actually made the difference was the case for
 variables:

 Your version is

 eval (Var x)   = gets (fromJust . M.lookup x)

 which is suitably lazy, whereas mine was (more or less)

 eval e@(Var name) = do
   env <- ask
   case M.lookup name env of
     Nothing -> do
       -- undefined variable reference
       warning ("reference to undefined variable " ++ show name)
       let val = Data ""
       modify (M.insert name val)
       return val
     Just val -> return val

 Note that whatever I do in the 'Nothing' case is irrelevant, your code with
 the Var case replaced by

 eval e@(Var name) = do
   env <- ask
   case M.lookup name env of
     Just val -> return val

 loops as well.

 My problem is that I still don't understand why this is so! I know of course
 that pattern matching is strict, but I thought this should be ok here,
 since I evaluate the declarations _before_ the body, so when evaluation of
 the body demands the variable, it will be defined.

 What am I missing?

The problem is the liftM2 in the Let branch of eval.  You are
executing the body while making the bindings, so you are trying to
look up x while you are still trying to bind it.  One solution is to
move the execution of the body after the binding as in:

eval (Let decls body) = mdo
  let (names,exprs) = unzip decls
      updateEnv env = foldr (uncurry M.insert) env $ zip names values
  values <- local updateEnv $ mapM eval exprs
  local updateEnv $ eval body
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I miss OO

2009-11-25 Thread Derek Elkins
On Wed, Nov 25, 2009 at 2:51 PM, Michael Mossey m...@alumni.caltech.edu wrote:
 I'm fairly new to Haskell, and starting to write some big projects.
 Previously I used OO exclusively, mostly Python. I really miss the
 namespace capabilities... a class can have a lot of generic method names
 which may be identical for several different classes because there is no
 ambiguity.

 In my musical application, many objects (or in Haskell, data) have a time
 associated with them. In Python I would have an accessor function called
 time in every class.

 So if I have objects/data note1, cursor1, and staff1,

 Python:
  note1.time()
  cursor1.time()
  staff1.time()

 Haskell needs something like
  note_time note1
  cursor_time cursor1
  staff_time staff1

 which is a lot more visually disorganized.

 What's worse, I have a moderate case of RSI (repetitive strain injury) so I
 type slowly and depend on abbreviations a lot. I use the souped-up
 abbreviation capabilities of Emacs. Let's say I have a field/member-variable
 called orientedPcSet that is used across many classes. In Python, I can
 create an abbreviation for that and it is useful many times. In Haskell, I
 might need

 someType_orientedPcSet
 someOtherType_orientedPcSet
 thirdType_orientedPcSet

 which prevents me from using abbreviations effectively (especially the
 dynamic-completion feature). (It would help to make the underscore not part
 of word syntax, but that's not ideal.)

 So I'm thinking of moving to a scheme in Haskell using modules, most types
 being defined in their own modules, and doing qualified imports. Generic
 names like 'time' can be defined in each module w/o clashing. Then I have

 Note.time note1
 Cursor.time cursor1
 Staff.time staff1

 This is very useful because I can define abbreviations for the type name and
 for oft-used accessor function names and these abbrevs are more organized,
 easier to remember, and easier to combine.

 I would be interested in comments... is this a good way to do things? Am I
 trying too hard to impose OO on Haskell and is there a better way?

That is the way to do what you want and not a bad practice in general.
 There's nothing particularly OO about namespacing, for example, ML's
modules and functors are quite a bit more flexible in this regard than
typical OO languages.
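
A skeletal illustration of that scheme (module and field names invented
for the example):

-- Note.hs
module Note where
data Note = Note { time :: Double, pitch :: Int }

-- Cursor.hs
module Cursor where
data Cursor = Cursor { time :: Double }

-- elsewhere:
--   import qualified Note
--   import qualified Cursor
--   delay n c = Note.time n - Cursor.time c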
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] implementing recursive let

2009-11-24 Thread Derek Elkins
The following code works fine for me, so it seems you are missing some
details that may help.

{-# LANGUAGE RecursiveDo, GeneralizedNewtypeDeriving,
TypeSynonymInstances, MultiParamTypeClasses #-}
import Control.Monad
import Control.Monad.State
import Control.Monad.Error
import Control.Monad.Writer
import Control.Monad.Reader
import Control.Monad.Fix
import Data.Maybe
import qualified Data.Map as M

data Expr = Let [(String, Expr)] Expr | Const Int | Var String

data Value = Data String | Function (Value -> Eval Value)

instance Show Value where
show (Data s) = s

type Env = M.Map String Value

example = Let [("x", Const 1)] (Var "x")

eval :: Expr -> Eval Value
eval (Const n) = return (Data (show n))
eval (Var x)   = gets (fromJust . M.lookup x)
eval (Let decls body) = mdo
  let (names,exprs) = unzip decls
      updateEnv env = foldr (uncurry M.insert) env $ zip names values
  (values,result) <- local updateEnv $ liftM2 (,) (mapM eval exprs) (eval body)
  return result

newtype Eval a = Eval {
unEval :: ErrorT String (StateT Env (Writer [String])) a
  } deriving (
Monad,
MonadFix,
    MonadWriter [String], -- for warnings and other messages
MonadState Env,
MonadError String
  )

runEval :: Eval Value -> Either String Value
runEval = fst . runWriter . flip evalStateT M.empty . runErrorT . unEval

evaluate = runEval . eval

instance MonadReader Env Eval where
  ask = get
  local tr act = do
    s <- get
    modify tr
    r <- act
    put s
    return r
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Experiments with defunctionalization, church-encoding and CPS

2009-11-01 Thread Derek Elkins
On Tue, Oct 13, 2009 at 4:44 AM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 I took a toy problem - find the first node satisfying a predicate in a
 binary tree, started with a naive Maybe-based implementation - and
 experimented with 3 ways of changing the program:
  - Church-encode the Maybe
  - Convert the program into CPS
  - Defunctionalize the Church-encoded or CPS-transformed program
 http://hpaste.org/fastcgi/hpaste.fcgi/view?id=10686

 The link points to code, a benchmark and conclusion.

 Conclusion:
  - Haskell implements Maybe well enough that it is not possible to do better
  - Defunctionalization and consequent optimization yields same
 performance as the one with Maybe
  - Non-simplified CPS and Church-encoded representations do bad

You may find this collection of related papers interesting if you have
not already seen them:
http://lambda-the-ultimate.org/node/2423#comment-38384
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] why cannot i get the value of a IORef variable ?

2009-10-22 Thread Derek Elkins
On Thu, Oct 22, 2009 at 1:32 PM, David Menendez d...@zednenem.com wrote:
 On Thu, Oct 22, 2009 at 2:23 AM, Gregory Crosswhite
 gcr...@phys.washington.edu wrote:
 For clarity, one trick that uses unsafePerformIO which you may have seen
 posted on this list earlier today is the following way of creating a
 globally visible IORef:

 import Data.IORef
 import System.IO.Unsafe

 *** counter = unsafePerformIO $ newIORef 0 ***

 Danger! If the monomorphism restriction is disabled, this ends up
 creating a value of type forall a. Num a => IORef a, which can be used
 to break type safety.

 More generally,

 cell :: IORef a
 cell = unsafePerformIO $ newIORef undefined

 unsafeCoerce :: a -> b
 unsafeCoerce x = unsafePerformIO $ do
    writeIORef cell x
    readIORef cell

 This way lies segmentation faults. That unsafe is there for a reason.

This is exactly what happened in the original example.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monadic correctness

2009-10-17 Thread Derek Elkins
On Sat, Oct 17, 2009 at 3:24 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 Edward Z. Yang wrote:

 Excerpts from Andrew Coppin's message of Sat Oct 17 15:21:28 -0400 2009:


 Suppose we have

   newtype Foo x
   instance Monad Foo
   runFoo :: Foo x -> IO x

 What sort of things can I do to check that I actually implemented this
 correctly? I mean, ignoring what makes Foo special for a moment, how can I
 check that it works correctly as a monad.


 A proper monad obeys the monad laws:

 http://www.haskell.org/haskellwiki/Monad_Laws

 You can probably cook up some quickcheck properties to test for these,
 but really you should be able to convince yourself by inspection  that
 your monad follows these laws.


 I'm reasonably confident it works, but not 100% sure...

 newtype Foo x = Foo (M -> IO x)

In this case it is trivial, Foo = ReaderT M IO which is a monad.  If
you want, you can verify for yourself that this is a monad.  It isn't,
however, uncommon for custom monads to be (equivalent to) stacks of
monad transformers.
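
A sketch of why: writing the instance by hand for such a newtype (M
standing in for whatever environment Foo carries; runFoo' is a made-up
accessor) gives exactly the ReaderT-over-IO instance:

data M = M  -- placeholder environment type

newtype Foo x = Foo { runFoo' :: M -> IO x }

instance Monad Foo where
  return x    = Foo (\_ -> return x)
  Foo m >>= k = Foo (\env -> do a <- m env
                                runFoo' (k a) env)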
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] x -> String

2009-10-16 Thread Derek Elkins
See vacuum: http://hackage.haskell.org/package/vacuum
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] is proof by testing possible?

2009-10-12 Thread Derek Elkins
On Mon, Oct 12, 2009 at 8:15 PM, Joe Fredette jfred...@gmail.com wrote:
 Sadly not enough, I understand the basics, but the whole "proof = this
  diagram commutes" thing still seems like
 voodoo to me. There is a section coming up in my Topology ISP that will be
 on CT. So I hope that I will be able to
 gain some purchase on the subject, at least enough to build up a working
 understanding on my own.

 I have a practical understanding of Functors and Natural Transformations, so
 working a bit with these free theorem things
 is kind of fun.

 Actually, another germane-if-random question, why isn't there a natural
 transformation class? Something like:


     class (Functor f, Functor g) => NatTrans g f a where
                 trans :: f a -> g a

 So your flatten function becomes a `trans` a la

    instance NatTrans Tree [] a where
                trans = flatten

 In fact, I'm going to attempt to do this now... Maybe I'll figure out why
 before you reply. :)

Diagrams are just a graphical depiction of systems of equations.
Every pair of paths with the same start and end point are equal.  I
don't care for diagrams that much and that graphical depiction isn't
that important for CT, though it has some mnemonic value.

As for a NatTrans class, your example is broken in several ways.
Natural transformations, though, are trivial in Haskell.

type NatTrans f g = forall a. f a -> g a

flatten :: NatTrans Tree []

I.e. a natural transformation between Haskell Functors are just
polymorphic functions between them.
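
For instance, with an assumed Tree type (any parametric function of
this shape is such a natural transformation):

{-# LANGUAGE Rank2Types #-}

data Tree a = Leaf | Node (Tree a) a (Tree a)

type NatTrans f g = forall a. f a -> g a

flatten :: NatTrans Tree []
flatten Leaf         = []
flatten (Node l x r) = flatten l ++ [x] ++ flatten r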

In general, a polymorphic function is a dinatural transformation and
the dinaturality conditions are the free theorems (or at least,
special cases of the free theorem for the type, which I believe, but
haven't proven, implies the full free theorem.)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Documentation (was: ANN: text 0.5, a major revision of the Unicode text library)

2009-10-11 Thread Derek Elkins
On Sun, Oct 11, 2009 at 8:55 AM, Iain Barnett iainsp...@gmail.com wrote:

 On 11 Oct 2009, at 13:58, John Lato wrote:

 For anyone writing introductions to generic programming, take this as
 a plea from Haskellers everywhere.  If one of the RWH authors can't
 understand how to make use of these techniques, what hope do the rest
 of us have?

 John Lato

 P.S. Some might wryly note that I'm the maintainer of a package which
 is also known for incomprehensible documentation.  To which I would
 reply that our effort is much newer, I consider it a problem, and it's
 being worked on, contrasted to the state of GP where similarly
 impenetrable documentation has been and continues to be the norm.


 You could say that about most documentation (for Haskell and beyond).
 Apparently, programmers like programming better than documenting. The effect
 of this is that less people use their programming, making their efforts
 redundant.

 Silly really, considering programmers are (allegedly:) intelligent.

Apparently, programmers like programming better than reading as
well... in my experience.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Introspection on types.

2009-10-01 Thread Derek Elkins
On Thu, Oct 1, 2009 at 6:22 PM, Gregory Propf gregorypr...@yahoo.com wrote:

 Is there a way to tell, let's say, how many constructors there are for a 
 type?  Or do I need one of the haskell extensions I've read about?

Use Data.Data and derive Data for the types you are interested in or
instance it for pre-existing types that aren't already instances.
Introspection is tedious but trivial.

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Data.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why the stack overflow?

2009-09-19 Thread Derek Elkins
On Sat, Sep 19, 2009 at 6:54 AM, Daniel Fischer
daniel.is.fisc...@web.de wrote:
 Am Samstag 19 September 2009 12:37:41 schrieb staafmeister:
 Hi haskell-cafe,

 Why does rlist 10 [] give a stack overflow in ghci?

 rlist 0 l = return l
 rlist n l = do {x <- randomRIO (1,maxBound::Int); let nl = x:l in nl `seq`
 rlist (n-1) nl}

 I first uses replicateM then foldM and finally an explicit function. But
 they give all stack overflow
 I don't know why 10 is not absurd and it is tail recursive. Or is it
 not, due to the monad structure?

 Prelude System.Random> :set -XBangPatterns
 Prelude System.Random> let rlist2 0 l = return l; rlist2 n l = do { !x <- randomRIO
 (1,maxBound :: Int); let {nl = x:l}; nl `seq` rlist2 (n-1) nl }
 Prelude System.Random> rlist2 10 [] >>= \l -> print (take 3 l) >> print (last l)
 [800589677,541186119,1521221143]
 1279766979
 Prelude System.Random> rlist2 1000 [] >>= \l -> print (take 3 l) >> print (last l)
 [655069099,324945664,2137996923]
 1108985638
 Prelude System.Random> rlist2 1 [] >>= \l -> print (take 3 l) >> print (last l)
 [286279491,63955,2118785404]
 315689721
 Prelude System.Random> rlist2 10 [] >>= \l -> print (take 3 l) >> print (last l)
 [862262999,947331403,790576391]
 1250271938
 Prelude System.Random> rlist2 100 [] >>= \l -> print (take 3 l) >> print (last l)
 [681201080,627349875,484483111]
 1048225698
 Prelude System.Random> rlist2 1000 [] >>= \l -> print (take 3 l) >> print (last l)
 [1247387053,690485134,1924757191]
 1637122415

 The problem is that randomRIO doesn't evaluate its result, so you build a 
 long chain of
 calls to randomR, which isn't evaluated until the count reaches 0, hence the 
 stack
 overflow. Forcing x prevents the long chain from being built.

Incidentally, nl is already in head normal form so seq nl does
nothing.  Leading to:
rlist 0 l = return l
rlist n l = do !x <- randomRIO (1, maxBound :: Int); rlist (n-1) (x:l)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Do I have this right? Remembering Memoization!

2009-09-15 Thread Derek Elkins
 But pedantically even the function:

 quux :: Int -> Int
 quux x = trace "Quux" (bar 12)

 optmain :: IO ()
 optmain = quux 10 `seq` quux 11 `seq` return ()

 might print only once if GHC at the optimization level selected recognizes
 that quux doesn't depend on its argument and rewrote your code with more
 sharing.

Well to be specific, it depends on how you define "function",

quux :: Int -> Int
quux = trace "Quux" bar

will print "Quux" once under the naive semantics.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What does it mean that objects are fixpoints? (OO'Haskell)

2009-09-15 Thread Derek Elkins
On Tue, Sep 15, 2009 at 10:14 AM, Manuel Simoni msim...@gmail.com wrote:
 Hello!

 I'm trying to wrap my head around OO'Haskell's notion of objects as fixpoints.

 Is OO'Haskell's use of mfix simply a use of something like a monadic
 Y-combinator to give the object access to its own identity?

More or less, yes.  To define 'self' or 'this'.
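
A small sketch of that (names invented; the essential bit is that the
record is built with a reference to itself, so one "method" can invoke
another through self):

import Control.Monad.Fix (mfix)

data Greeter = Greeter
  { name       :: String
  , greet      :: IO ()
  , greetTwice :: IO ()
  }

newGreeter :: String -> IO Greeter
newGreeter n = mfix $ \self -> return Greeter
  { name       = n
  , greet      = putStrLn ("hello, I am " ++ name self)
  , greetTwice = greet self >> greet self
  }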
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building a monoid, continuation-passing style

2009-09-15 Thread Derek Elkins
On Mon, Sep 14, 2009 at 10:25 AM, Martijn van Steenbergen
mart...@van.steenbergen.nl wrote:
 Hello cafe,

 Inspired by Sean Leather's xformat package [1] I built a datatype with which
 you can build a monoid with holes, yielding a function type to fill in these
 holes, continuation-passing style. Here are some primitives and their types:

 now   :: m -> ContSt m r r
 later :: (a -> m) -> ContSt m r (a -> r)
 run   :: ContSt m m r -> r
 instance Monoid m => Category (ContSt m)

 Here's an example of how to use it:

 run (now "hello" . now "world")

 "helloworld"

 run (later id . now "world") "hello"

 "helloworld"

 run (later id . later show) "hello" 567

 "hello567"

 The source code is available at [2].

 I have a couple of questions:
 * ContSt is a Category. Is it also an Arrow? Why (not)?
 * Did I miss any other obvious classes this type is an instance of?
 * What is its relation with the Cont and Reader monads?
 * Are there any other useful applications other than printf-like
 functionality?
 * ContSt is a horrible name. What is a better one?

 For those who have a bit more time: I appreciate any comments and
 suggestions on the code. :-)

I believe this technique is based on a technique introduced in Olivier
Danvy's "Functional Unparsing".  While not immediately applicable to
Haskell unless you want to make/use a delimited continuation monad,
you may find the paper "On Typing Delimited Continuations: Three New
Solutions to the Printf Problem" by Kenichi Asai interesting.  It is
available at the following url:
http://pllab.is.ocha.ac.jp/~asai/papers/papers.html
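
For a rough flavour of the technique (a sketch of the idea only, not
the actual ContSt/xformat code):

import Data.Monoid (Monoid(..))

-- A monoid with "holes", continuation-passing style.
newtype Holey m r a = Holey { runHoley :: (m -> r) -> a }

now :: m -> Holey m r r
now m = Holey (\k -> k m)

later :: (a -> m) -> Holey m r (a -> r)
later f = Holey (\k -> k . f)

compose :: Monoid m => Holey m b c -> Holey m a b -> Holey m a c
compose (Holey f) (Holey g) =
  Holey (\k -> f (\m1 -> g (\m2 -> k (m1 `mappend` m2))))

run :: Holey m m a -> a
run h = runHoley h id

-- run (compose (later id) (now " world")) "hello"  ==  "hello world"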
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Record update fusion (or how should I call it?)

2009-09-06 Thread Derek Elkins
The first thing I would do is verify that the compiler is not
already doing this.

On Sun, Sep 6, 2009 at 7:50 AM, Peter Verswyvelenbugf...@gmail.com wrote:
 I've seen a couple of package being announced that provide first class
 labels, and other packages already existed for this (Grapefruit
 Record, HList, Accessor, ...)

 Regarding this, I have a question about the performance of multiple
 composed field updates. Maybe an example.

 Suppose I have a large record - say WindowDescription - which contains
 a lot of fields.

 Suppose I have a couple of default window description values, e.g.
 defaultWindowDesc, dialogBoxDesc, etc

 Using accessors it is easy to take such a default value, and modify
 a couple of fields, like:

 let myWindowDesc = set title "Haskell" . set size (640,480) . set
 background Blue . set fontFamily "Arial" $ defaultWindowDesc

 However, I guess this would make a lot of intermediate
 WindowDescription copies no (whether the fields are strict or not)? So
 ideally for performance, all these updates should be fused, maybe
 running inside an ST monad?

 I'm not sure if any of this is valid, but I would like to understand
 more about this, so any links and hints are welcome :-)

 Peter Verswyvelen
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to calculate de number of digits of an integer? (was: Is logBase right?)

2009-08-22 Thread Derek Elkins
2009/8/22 Eugene Kirpichov ekirpic...@gmail.com:
 Use 'round' instead of 'truncate'.

 Prelude> let numDigits = (+1) . round . logBase 10 . fromIntegral
 Prelude> map (numDigits . (10^)) [0..9]
 [1,2,3,4,5,6,7,8,9,10]


round won't work because 999 is close to 1000.

You simply need to use logBase 10 as a guess and then check the answer, e.g.
numDigits n | n < n'    = e
            | otherwise = e + 1
  where e  = ceiling $ logBase 10 $ fromIntegral n
        n' = 10^e
This will need to special case 0 which it currently doesn't do.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to calculate de number of digits of an integer? (was: Is logBase right?)

2009-08-22 Thread Derek Elkins
On Sat, Aug 22, 2009 at 12:31 PM, Derek Elkinsderek.a.elk...@gmail.com wrote:
 2009/8/22 Eugene Kirpichov ekirpic...@gmail.com:
 Use 'round' instead of 'truncate'.

 Prelude> let numDigits = (+1) . round . logBase 10 . fromIntegral
 Prelude> map (numDigits . (10^)) [0..9]
 [1,2,3,4,5,6,7,8,9,10]


 round won't work because 999 is close to 1000.

 You simply need to use logBase 10 as a guess and then check the answer, e.g.
 numDigits n | n < n'    = e
             | otherwise = e + 1
    where e  = ceiling $ logBase 10 $ fromIntegral n
          n' = 10^e
 This will need to special case 0 which it currently doesn't do.


Note that logBase 10 will start failing for large Integers (or rather
fromIntegral will.)  Writing an integer log using a binary search
would be relatively easy, reasonably efficient, and would work for all
Integers.
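
One such sketch (assuming n >= 1; like the code above it would still
need to special-case 0):

numDigitsI :: Integer -> Int
numDigitsI n = 1 + ilog10 n
  where
    ilog10 m = search 0 hi0
      where
        -- gallop to an exponent that overshoots, then bisect;
        -- invariant of search: 10^lo <= m < 10^hi
        hi0 = until (\e -> 10 ^ e > m) (* 2) 1
        search lo hi
          | lo + 1 == hi = lo
          | 10 ^ mid > m = search lo mid
          | otherwise    = search mid hi
          where mid = (lo + hi) `div` 2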
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Re: Where do I put the seq?

2009-08-21 Thread Derek Elkins
On Fri, Aug 21, 2009 at 5:04 AM, Lennart
Augustssonlenn...@augustsson.net wrote:
 On Fri, Aug 21, 2009 at 10:52 AM, Bayley, Alistair
 alistair.bay...@invesco.com wrote:

  From: haskell-cafe-boun...@haskell.org
  [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Bulat Ziganshin
  To: Peter Verswyvelen
 
   But how does GHC implement the RealWorld internally? I guess
 
  look the base library sources for RealWorld

 http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-IOBase.html#IO

 On Fri, Aug 21, 2009 at 11:04 AM, Peter Verswyvelenbugf...@gmail.com wrote:
 IO also seems to use unboxed (hence strict?) tuples

 newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))

 Not sure if this is just for performance, but if the strictness is required,
 here we have the horrible hack again then (would behave different without
 it?). I guess it works because when applying primitive function likes
 putChar#, these could be considered as fully strict, since putChar# c really
 does force evaluation of c strictly and puts in the screen. This is
 different from the lazy IO situation, where a string is concatenated lazily,
 and put on the screen by the consumer as soon as it's available. Ah I'm
 having troubles to explain myself formally, never mind :)
 Actually RealWorld is not defined in that file, it is defined here, but
 hidden
 file:///C:/app/ghp/doc/libraries/ghc-prim/GHC-Prim.html#t%3ARealWorld
 But I don't understand the comment
 data RealWorld Source
 RealWorld is deeply magical. It is primitive, but it is
 not unlifted (hence ptrArg). We never manipulate values of type RealWorld;
 it's only used in the type system, to parameterise State#.
 Maybe I should reread the papers, but it seems lots of magic is needed to
 get IO right (such as the existential types to make sure different state
 threads are kept separate)

 You need a lot of magic to make the IO monad efficient.
 You don't really want to pass around (and pattern match on) a
 RealWorld token, that would be inefficient.

I've always preferred the continuation based implementation of IO as
used in Hugs and I believe in HBC.  GHC's handling of it has always
seemed hack-y to me.  I don't recall any special treatment of IO by
HBC, though Lennart will definitely be able to verify or deny that.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Derek Elkins
On Tue, Aug 11, 2009 at 3:51 PM, Robin Greengree...@greenrd.org wrote:
 On Wed, 12 Aug 2009 11:37:02 +0200
 Peter Verswyvelen bugf...@gmail.com wrote:

 Yes, sorry.

 But I think I already found the answer to my own question.

 DDC functions that are lazy don't allow side effects:
 http://www.haskell.org/haskellwiki/DDC/EvaluationOrder

 Anyway it would be cool if the DDC EffectSystem would also work on
 lazy functions :)

 As was just pointed out in the unsafeDestructiveAssign thread from which
 this thread was forked, effects are incompatible with non-strict
 evaluation.

No, they aren't.  At least, they aren't in any technical way.  There
have been more than a few languages supporting both laziness and
mutation starting with Algol.

 The compiler is supposed to be able to reorder non-strict
 evaluation to do optimisations, but that can't be done if effects
 could happen.

There's nothing special about non-strict evaluation that makes the
antecedent true.  Replacing "non-strict" with "strict" gives just as
much of a valid statement.  It is purity that allows (some) reordering
of evaluation.

 Also, effects would destroy modular reasoning.

Again, it is purity, not laziness, that allows compositional
reasoning.  Effects destroy compositional reasoning in a strict
language just as much.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] unsafeDestructiveAssign?

2009-08-12 Thread Derek Elkins
On Wed, Aug 12, 2009 at 4:41 AM, John Latojwl...@gmail.com wrote:
 Hi Job,

 I don't think this hypothetical function could exist; you may as well
 call it  notEverSafeOhTheHumanity and be done with it.

 Since Haskell provides no guarantees about when (if ever) any given
 function/data will be evaluated, you would need some mechanism to tell
 the compiler that a data chunk has a certain value at one time and a
 different value at another.  The language provides this in the IO (and
 ST) monads.  So the function would need to live within IO, and you
 don't gain anything.  If you try to take it outside of IO, with e.g.
 unsafePerformIO, then the compiler will no longer treat it like IO and
 the result is free to be evaluated whenever, so you're back where you
 started.

 Also, keep in mind that purity is a language requirement in Haskell
 and such a function really would break everything.  Specifically,
 you would get differing output depending on the exact transformations
 performed by the compiler, which in general would be difficult to
 predict in advance, probably not the same between different compiler
 versions, changed by compiler flags and phases of the moon, etc.  I
 have an example in a darcs repo somewhere...

 Cheers,
 John

 From: Job Vranish jvran...@gmail.com
 Subject: Re: [Haskell-cafe] unsafeDestructiveAssign?

 Ga! Before to many people start flooding me responses of This is really
 dumb idea don't do it! I would like to clarify that for the most part
 IKnowWhatI'mDoing(TM)

 I am well aware of the usual ST/IORefs as the usual solutions to data
 mutability in haskell.
 I very very much understand purity, and why it is a good thing, and why we
 should try to stay away from IO and ST as much as possible.
 I am very much aware that even if I had such a function it will probably
 break everything.
 I am not just trying to make things run faster.

 What I am trying to do is hyper unusual and I really do need an
 unsafeHorribleThings to do it.

 - Job

(To Alberto as well.)

Unsurprisingly, Lennart stated the real issue, but I'll re-emphasize
it.  As much trouble as such a profound violation of purity would
cause, it's not the biggest problem.  If that were all, you could
easily write a C/assembly function that would do what was desired.
The real problem is that there isn't necessarily any data chunk at
all, i.e. there may not be anything to mutate.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A voyage of undiscovery

2009-07-16 Thread Derek Elkins
On Thu, Jul 16, 2009 at 2:52 PM, Andrew
Coppinandrewcop...@btinternet.com wrote:
 Ross Mellgren wrote:

 It's not where -- let also works

 Prelude> let { foo x = x } in (foo 1, foo True)
 (1,True)

 Awesome. So by attempting to implement Haskell's type system, I have
 discovered that I actually don't understand Haskell's type system. Who'd
 have thought it?

 Clearly I must go consult the Report and check precisely what the rules
 are...

The answer to your questions are on the back of this T-shirt.
http://www.cafepress.com/skicalc.6225368
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Colour tutorial (Was: AC-Vector, AC-Colour and AC-EasyRaster-GTK)

2009-07-11 Thread Derek Elkins
On Fri, Jul 10, 2009 at 12:42 AM, rocon...@theorem.ca wrote:
 On Thu, 9 Jul 2009, rocon...@theorem.ca wrote:

 You can use my lib without worrying about the CIE.  You can use my library
 without ever importing or using the word CIE.  However, the CIE stuff is
 there for those who need it.

 Perhaps I (maybe with some help) need to make a tutorial on the haskell
 wiki to try to make it less intimidating.

 Okay, I threw together a quick introduction at
 http://www.haskell.org/haskellwiki/Colour.  Any changes, comments,
 corrections, and addtions are welcome.  It's a wiki!

 The word CIE does not occur at all in the document.

I read this and it irks me that opaque is not a monoid homomorphism
despite being the natural injection of non-transparent colours into
semi-transparent colours with pureColour being the projection back.
[Incidentally, you have a typo in pureColour, ac `over` mempty should
be ac `over` black or opaque black presumably, or even opaque mempty,
which I think was what you were going for, illustrating my point.]
It's like defining mappend on Integers as (+) and on Reals as (*);
actually, I think this is very close to what is actually happening.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Colour tutorial (Was: AC-Vector, AC-Colour and AC-EasyRaster-GTK)

2009-07-11 Thread Derek Elkins
On Sat, Jul 11, 2009 at 12:54 PM, Derek Elkinsderek.a.elk...@gmail.com wrote:
 On Fri, Jul 10, 2009 at 12:42 AM, rocon...@theorem.ca wrote:
 On Thu, 9 Jul 2009, rocon...@theorem.ca wrote:

 You can use by lib without worrying about the CIE.  You can use my library
 without ever importing or using the word CIE.  However, the CIE stuff is
 there for those who need it.

 Perhaps I (maybe with some help) need to make a tutorial on the haskell
 wiki to try to make it less intimidating.

 Okay, I threw together a quick introduction at
 http://www.haskell.org/haskellwiki/Colour.  Any changes, comments,
 corrections, and addtions are welcome.  It's a wiki!

  The word CIE does not occur at all in the document.

 I read this and it irks me that opaque is not a monoid homomorphism
 despite being the natural injection of non-transparent colours into
 semi-transparent colours with pureColour being the projection back.
 [Incidentally, you have a typo in pureColour, ac `over` mempty should
 be ac `over` black or opaque black presumably, or even opaque mempty,
 which I think was what you were going for, illustrating my point.]
 It's like defining mappend on Integers as (+) and on Reals as (*);
 actually, I think this is very close to what is actually happening.

I'm mistaken about the typo in pureColour, but luckily the mistake
just further illustrates my point.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Leaner Haskell.org frontpage

2009-07-09 Thread Derek Elkins
On Thu, Jul 9, 2009 at 12:31 PM, Jason Dagitda...@codersbase.com wrote:


 On Thu, Jul 9, 2009 at 10:00 AM, Thomas ten Cate ttenc...@gmail.com wrote:

 By the way, the most valuable pixels, right at the top of the page,
 are wasted on wiki stuff. Compare
 http://www.haskell.org/
 with, for example,
 http://www.ruby-lang.org/
 http://python.org/

 The thing I like the most from the ruby page is the top box of content where
 it starts describing ruby with a Read more... link adjacent to a code
 snippet.  Because I doubt anyone will agree on *the one* best code snippet
 to show people, I think there should/could be a pool of fun snippets and
 loading the page picks one at random.  I have no idea if the wiki engine
 supports this.  I also like the strip of links at the top with things like,
 Download, Community, and so on.  Something I think the Haskell page does
 much better than the other two, is the listing of events and hackage
 updates.  Both of those sections feel inviting to me.  It makes me curious
 and I want to explore.

 The python page looks at least as cluttered as the haskell page.  Neither
 the haskell page or the python page have the same look and feel of the ruby
 page.  I think the shaded/gradient backgrounds actually add a lot to the
 visual experience.  I also like that the boxes have a different bg color for
 the box title and the box contents.  I also like the use of icons on the
 ruby page.  The Download Ruby link/box with the download icon is very
 inviting.  I just want to download it, even if I'm not going to use ruby!

 Perhaps we could have a contest similar to the logo contest but for homepage
 asthetics redesign.  I think the content on the haskell page is great, but
 the visual style of the presentation could be improved considerably.


 If, like the consensus seems to be, the page should be made more
 friendly to beginners (who are unlikely to want to contribute to the
 wiki right away), then this should be moved elsewhere, or at the very
 least made smaller and less obtrusive.

 Optimizing for newcomers seems wise.
 Jason

This is what I see when visiting the Ruby page:
DoS vulnerability in BigDecimal
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Leaner Haskell.org frontpage

2009-07-09 Thread Derek Elkins
On Thu, Jul 9, 2009 at 5:17 PM, Jason Dagitda...@codersbase.com wrote:


 On Thu, Jul 9, 2009 at 3:11 PM, Derek Elkins derek.a.elk...@gmail.com
 wrote:

 On Thu, Jul 9, 2009 at 12:31 PM, Jason Dagitda...@codersbase.com wrote:
 
 
  On Thu, Jul 9, 2009 at 10:00 AM, Thomas ten Cate ttenc...@gmail.com
  wrote:
 
  By the way, the most valuable pixels, right at the top of the page,
  are wasted on wiki stuff. Compare
  http://www.haskell.org/
  with, for example,
  http://www.ruby-lang.org/
  http://python.org/
 
  The thing I like the most from the ruby page is the top box of content
  where
  it starts describing ruby with a Read more... link adjacent to a code
  snippet.  Because I doubt anyone will agree on *the one* best code
  snippet
  to show people, I think there should/could be a pool of fun snippets and
  loading the page picks one at random.  I have no idea if the wiki engine
  supports this.  I also like the strip of links at the top with things
  like,
  Download, Community, and so on.  Something I think the Haskell page
  does
  much better than the other two, is the listing of events and hackage
  updates.  Both of those sections feel inviting to me.  It makes me
  curious
  and I want to explore.
 
  The python page looks at least as cluttered as the haskell page.
  Neither
  the haskell page or the python page have the same look and feel of the
  ruby
  page.  I think the shaded/gradient backgrounds actually add a lot to the
  visual experience.  I also like that the boxes have a different bg color
  for
  the box title and the box contents.  I also like the use of icons on the
  ruby page.  The Download Ruby link/box with the download icon is very
  inviting.  I just want to download it, even if I'm not going to use
  ruby!
 
  Perhaps we could have a contest similar to the logo contest but for
  homepage
  asthetics redesign.  I think the content on the haskell page is great,
  but
  the visual style of the presentation could be improved considerably.
 
 
  If, like the consensus seems to be, the page should be made more
  friendly to beginners (who are unlikely to want to contribute to the
  wiki right away), then this should be moved elsewhere, or at the very
  least made smaller and less obtrusive.
 
  Optimizing for newcomers seems wise.
  Jason

 This is what I see when visiting the Ruby page:
 DoS vulnerability in BigDecimal

 That's true.  And I never said we want to copy the ruby community :)  In
 fact, I'd prefer to not be associated with them given the community's
 blatant unprofessionalism and sexism (cf. CouchDB presentation at a
 semi-recent ruby conference).  I do think their page has more visual appeal
 though.  So other than pointing out the DoS, did you have feedback?

I admit it; you caught me.

I'm not a newbie and I don't use the front page terribly often, but I
do like most of the links that are on it.  The Ruby page is certainly
prettier, but the layout of the Haskell page is fine in my opinion;
the difference is mainly eye-candy.  On another topic, I know people
have expressed that they have liked the fact that the entire Haskell
site is a wiki; this expressing openness and community involvement.

I personally don't find the Haskell front page too cluttered and I
think most of issue in that vein could be resolved by simply making
sure the most important/newbie-oriented links are above the fold and
appropriately emphasized/categorized as is partially done already.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal fun [Half-integer]

2009-06-28 Thread Derek Elkins
On Sun, Jun 28, 2009 at 4:11 PM, Antoine Latteraslat...@gmail.com wrote:
 On Sun, Jun 28, 2009 at 3:42 PM, Andrew
 Coppinandrewcop...@btinternet.com wrote:
 Andrew Coppin wrote:

 Alrighty then, so how I just do Setup configure, and now Setup sdist, and
 then I can upload the result to Ha-- oh, don't be silly. That would simply
 be too easy. ;-)

 E:\Haskell\AOC-HalfInteger>runhaskell Setup sdist
 Building source dist for AOC-HalfInteger-1.0...
 Preprocessing library AOC-HalfInteger-1.0...
 Setup: tar is required but it could not be found.

 Time to go search the web and find out what the other 50 people who
 stumbled into this did... *sigh*

 Ah. Apparently it's fixed:

 http://hackage.haskell.org/trac/hackage/ticket/40

 Except that it isn't fixed. Yay for me...

 It seems that GHC provides ar but not tar. Looks like I might actually have
 to copy the entire directory tree to a Linux box just so I can run sdist...
 Nice to know this stuff is so easy. :-/


 I don't know anything that's gauranteed to work, as I've never tried
 packaging from a Windows box, but:

  - Is 'htar' a good enough 'tar' replacement for cabal?
  - Does cabal-install also require an external tar? You could try cabal 
 sdist

If one actually reads the discussion in the ticket, it is clear that
the conclusion was to have cabal-install handle it and that
cabal-install uses its own tar implementation.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pike's Newsqueak talk

2009-06-05 Thread Derek Elkins
On Fri, Jun 5, 2009 at 8:14 PM, Tim Newsham news...@lava.net wrote:
 I just watched http://video.google.com/videoplay?docid=810232012617965344

 It's a great talk that is suprisingly relevant to Haskell programming
 (although at first blush it looks a bit unrelated). (It refs a lot of older
 work that actually led me to Haskell in the first place by way of McIlroy's
 haskell power-series paper).  Anyway, I thought it would be of general
 interest.

 A lot of the programs he discusses are a lot more elegant in pure Haskell
 code (ie. prime number sieve, power series), but his language also supports
 an interesting imperative primitive that lets you pick the first available
 value from a set of channels which isn't available in pure Haskell
 expressions.  Has anyone implemented a primitive like this for Haskell?

Traditionally and as demonstrated in McIlroy's power series paper,
channels are modelled as streams.  I'm pretty sure the primitive you
are looking for is (equivalent to) the non-deterministic merge, which
is, again, the traditional way of adding such features (see, e.g.
SICP).  It is of course an impure operation and thus not implementable
in pure Haskell and not a pure expression in Haskell.  It is
implemented in Control.Concurrent as mergeIO and nmergeIO.
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html#7
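For the curious, a minimal sketch of such a non-deterministic merge, written
directly with Control.Concurrent channels rather than the mergeIO/nmergeIO
functions linked above (ndMerge is a made-up name, and the inputs are assumed
finite for this sketch; the interleaving depends entirely on the scheduler):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (replicateM)

    -- Each producer pushes its list into a shared channel from its own
    -- thread; reading the channel observes whichever value arrives first.
    ndMerge :: [a] -> [a] -> IO [a]
    ndMerge xs ys = do
      chan <- newChan
      _ <- forkIO (mapM_ (writeChan chan) xs)
      _ <- forkIO (mapM_ (writeChan chan) ys)
      replicateM (length xs + length ys) (readChan chan)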

And yes, Rob Pike's work is great.  I particularly like the concurrent
windowing system stuff and would like to implement something like it
in (Concurrent) Haskell.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec float

2009-05-30 Thread Derek Elkins
On Sat, May 30, 2009 at 1:12 PM, Jason Dusek jason.du...@gmail.com wrote:
 2009/05/30 Bartosz Wójcik bar...@sudety.it:
 ...reading RWH I could not memorize what those liftM functions
 meant.

  The basic one, `liftM`, means `fmap`, though specialized for
  functors that are monads.

    Prelude Control.Monad> :t liftM
    liftM :: forall a b (m :: * -> *). (Monad m) => (a -> b) -> m a -> m b
    Prelude Control.Monad> :t fmap
    fmap :: forall a b (f :: * -> *). (Functor f) => (a -> b) -> f a -> f b

  I think we have `liftM` either to help the inferencer or due
  to the absence of a `(Functor m)` constraint in the definition
  of the `Monad` typeclass.

It's the latter effectively.  liftM doesn't make anything easier for
the type checker.  liftM simply has a different type than fmap, not a
more specialized one, but even if Monad did have a Functor constraint,
liftM would still never lead to any ambiguity being resolved.
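A tiny illustration of the point (not from the original message): for any
monad the two functions agree, and neither needs help from the type checker.

    import Control.Monad (liftM)

    -- liftM and fmap coincide on functors that are monads.
    ex1, ex2 :: Maybe Int
    ex1 = liftM (+ 1) (Just 41)   -- Just 42
    ex2 = fmap  (+ 1) (Just 41)   -- Just 42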
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [] == []

2009-05-30 Thread Derek Elkins
On Fri, May 29, 2009 at 5:36 AM, Max Rabkin max.rab...@gmail.com wrote:
 On Fri, May 29, 2009 at 12:29 PM, Paul Keir pk...@dcs.gla.ac.uk wrote:
 f''' = ([]::[()]) == ([]::[()])

 (Very pretty.)

 So why doesn't ghc have 'default' instances?

 It does. I believe Num defaults to Integer and then to Double.

 Generally, though, defaults are convenient in exploratory usage but
 confusing in compiled code. In compiled code, you don't want arbitrary
 choices of defaults to affect performance and correctness.

 I've had programs run much slower than expected because the types
 defaulted to Integer rather than Int.

http://www.haskell.org/ghc/docs/latest/html/users_guide/interactive-evaluation.html#extended-default-rules
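For reference, the defaulting being discussed can also be adjusted per module
with a default declaration; a small sketch (not taken from the thread, and
using the Haskell 98/2010 syntax; the implicit setting is default (Integer,
Double)):

    -- Ambiguous numeric types in this module now default to Int, then Double.
    default (Int, Double)

    x :: Bool
    x = 1 == 1   -- the literals' type is ambiguous and gets defaulted to Int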
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec float

2009-05-29 Thread Derek Elkins
On Fri, May 29, 2009 at 4:02 AM, Tillmann Vogt
tillmann.v...@rwth-aachen.de wrote:
 Bartosz Wójcik wrote:

 Hi Everybody (especially Parsec Creator),

 is there any reason why float parses only positive numbers?

 I find following defition:

 float           = lexeme floating   <?> "float"

 floating        = do{ n <- decimal
                        ; fractExponent n
                        }

 If floating was defined like

 floating        = do{ n <- integer ...

 or

 floating        = do{ n <- int ...

 instead  then it would parse negative ones as well.


 Hi Bartek,

 I had the same problem. Daan Leijen gave me a similar answer to the one
 Malcolm Wallace just gave you:

 Usually the minus sign is treated as an operator in the language and
 treated as a separate token

There's a more pointed reason related to the ones given.  If the float
parser parses signed floats, what then do you do when you want to
parse unsigned floats?  It's trivial to go the one way, it's less
trivial to go the other way.

Incidentally, I'd probably write something like
((try $ negate <$ char '-') <|> pure id) <*> float -- untested
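Spelled out as a runnable sketch against parsec 3's Text.Parsec.Token API (the
names lexer and signedFloat are mine, and haskellDef is just a convenient
language definition; this is one way to wrap the unsigned float parser, not
the library's own):

    import Text.Parsec
    import Text.Parsec.String (Parser)
    import qualified Text.Parsec.Token as Tok
    import Text.Parsec.Language (haskellDef)

    lexer :: Tok.TokenParser ()
    lexer = Tok.makeTokenParser haskellDef

    -- Parse an optional leading '-' and apply negate to the unsigned float.
    signedFloat :: Parser Double
    signedFloat = do
      sign <- option id (char '-' >> return negate)
      fmap sign (Tok.float lexer)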
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Kind of confusing

2009-05-12 Thread Derek Elkins
On Tue, 2009-05-12 at 14:09 +0100, Philippa Cowderoy wrote:
 On Mon, 2009-05-11 at 20:43 -0400, Anton van Straaten wrote:
  Serious question: what is the significance of the question mark and 
  double question marks in those signatures, or better yet, where can I 
  read about it?
  
 
 I've forgotten where to find the details (try the GHC manual if you
 haven't already?), but IIRC they're part of how GHC handles boxing.
 

http://hackage.haskell.org/trac/ghc/wiki/IntermediateTypes

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Utrecht Haskell Compiler (UHC) -- first release

2009-04-19 Thread Derek Elkins
On Sun, 2009-04-19 at 20:46 -0400, Dan Doel wrote:
 On Sunday 19 April 2009 7:11:51 pm wren ng thornton wrote:
  Yes, however, because consumers (e.g. @f@) demand that their arguments
  remain polymorphic, anything which reduces the polymorphism of @a@ in
  @x@ will make it ineligible for being passed to consumers. Maybe not
  precise, but it works.
 
  Another take is to use (x :: forall a. () -> F a) and then once you pass
  () in then the return value is for some @a@. It's easy to see that
  this is the same as the version above.
 
 No, I'm relatively sure this doesn't work. Take, for instance, F a = a for 
 simplicity. Then we can say:
 
   i :: Int
   i = 5
 
   ei :: exists a. a
   ei = i
 
 Because ei's type, exists a. a, means this expression has some unknown 
 type. 
 And certainly, the value i does have some type; it's Int.
 
 By contrast, you won't be writing:
 
   ei' :: forall a. a
   ei' = i
 
 and similarly:
 
   ei'' :: forall a. () -> a
   ei'' () = i
 
 is not a correct type, because i is not a value that belongs to every type. 
 However, we can write:
 
   ei''' :: forall r. (forall a. a -> r) -> r
   ei''' k = k i
 
 as well as translations between types like it and the existential:
 
   toE :: (forall r. (forall a. a -> r) -> r) -> (exists a. a)
   toE f = f (\x -> x)
 
   toU :: (exists a. a) -> (forall r. (forall a. a -> r) -> r)
   toU e k = k e
 

You can build a framework around this encoding,
pack :: f a -> (forall a. f a -> r) -> r
pack x f = f x

open :: (forall r. (forall a. f a -> r) -> r) -> (forall a. f a -> r) -> r
open package k = package k

Unfortunately, pack is mostly useless without impredicativity and
lacking type lambdas requires a motley of data types to be made for f.
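As a concrete, compilable instance of the encoding (with made-up names, and
fixing f to lists precisely because, as noted, each choice of f needs its own
data type):

    {-# LANGUAGE RankNTypes #-}

    -- A CPS-encoded "exists a. [a]".
    newtype ExList = ExList (forall r. (forall a. [a] -> r) -> r)

    packList :: [a] -> ExList
    packList xs = ExList (\k -> k xs)

    openList :: ExList -> (forall a. [a] -> r) -> r
    openList (ExList package) k = package k

    lenOfHidden :: Int
    lenOfHidden = openList (packList "hello") length   -- 5; length works at any element type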

 'forall' in GHC means universal quantification. It's doesn't work as both 
 universal and existential quantification. The only way it's involved with 
 existential quantification is when you're defining an existential datatype, 
 where:
 
   data T = forall a. C ...
 
 is used because the type of the constructor:
 
   C :: forall a. ... -> T
 
 is equivalent to the:
 
   C :: (exists a. ...) -> T
 
 you'd get if the syntax were instead:
 
   data T = C (exists a. ...)
 
 Which is somewhat confusing, but forall is standing for universal 
 quantification even here.
 
  Exactly. Whether you pass a polymorphic function to an eliminator (as I
  had), or pass the universal eliminator to an applicator (as you're
  suggesting) isn't really important, it's just type lifting:
 
  (x :: forall a. F a) == (x :: forall r. (forall a. F a -> r) -> r)
 
  (f :: (forall a. F a) -> Y) == (f :: ((forall a. F a -> Y) -> Y) -> Y)
 
 
  The type lifted version is more precise in the sense that it
  distinguishes polymorphic values from existential values (rather than
  overloading the sense of polymorphism), but I don't think it's more
  correct in any deep way.
 
 I don't really understand what you mean by type lifting. But although you 
 might be able to write functions with types similar to what you've listed 
 above (for instance, of course you can write a function:
 
   foo :: (forall a. F a) -> (forall r. (forall a. F a -> r) -> r)
   foo x k = k x
 
 simply because this is essentially a function with type
 
   (forall a. F a) -> (exists a. F a)
 
 and you can do that by instantiating the argument to any type, and then 
 hiding 
 it in an existential), 

You can do this by using undefined, but without using undefined what if
F a = Void ?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wishful thinking: a text editor that expands function applications into function definitions

2009-04-02 Thread Derek Elkins
On Thu, 2009-04-02 at 18:01 -0600, Duane Johnson wrote:
 So I was thinking about a killer feature for a text editor.   
 Wouldn't it be neat if you could expand function calls into their  
 definitions, in-place?
 
 For example, suppose we have minus defined like so, somewhere in  
 another file:
 
  minus (a, b, c) (x, y, z) = (a - x, b - y, c - z)
 
 Later, we make use of the function in our current context:
 
  let p1 = (1, 2, 3)
   p2 = (4, 5, 6)
  in p1 `minus` p2
 
 By telling the editor to expand the minus, we get a temporary  
 replacing of the above with:
 
  (1 - 4, 2 - 5, 3 - 6)
 
 Another example:
 
parse s = map readLine ls
 
 And supposing that readLine is defined somewhere else, moving the  
 cursor to readLine in the line above and expanding becomes:
 
   parse s = map (\line -> words $ dropWhile (== ' ') line)
 
 This is all pretty standard for the kinds of things we do in Haskell  
 to work it out by hand, but is there any reason the parser couldn't do  
 this?  I think it would be even harder to do automatically in any  
 other language.  Maybe it's already been attempted or done?

See HaRe http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-07 Thread Derek Elkins
On Sat, 2009-03-07 at 23:12 +0100, Martijn van Steenbergen wrote:
 Derek Elkins wrote:
  Both are poorish style.
  
  reader <- forkIO $ forever $ do (nr', line) <- readChan chan'; when (nr /= nr') $ 
  hPutStrLn hdl line
 
 This is fine assuming you always want to re-enter the loop. If you want 
 to loop conditionally (which is most often the case), forever isn't 
 going to work, unless you use exceptions.

If you are doing something else, use something else.  This makes it
clear that you -aren't- going to break out (non-exceptionally), i.e. the
control flow is more obvious in this code than in the other versions.
By your logic 'map' would be bad because not everything is a map, of
course, this is precisely why using map, when applicable, is good.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Naturality condition for inits

2009-03-07 Thread Derek Elkins
On Sat, 2009-03-07 at 22:18 +, R J wrote:
 Here's another Bird problem that's stymied me:
 
 The function inits computes the list of initial segments of a list;
 its type is inits :: [a] -> [[a]].  What is the appropriate naturality
 condition for inits?

A natural transformation between two Functors f and g is a
polymorphic function t :: (Functor f, Functor g) => f a -> g a.  The
naturality condition is the free theorem which states*:
for any function f :: A -> B, t . fmap f = fmap f . t
Note that fmap is being used in two different instances here.

For lists, fmap = map and so we have for any polymorphic function [a] ->
[a] using reverse as a representative,
map f . reverse = reverse . map f.

inits is a natural transformation between [] and [] . [] (where . is
type-level composition and not expressible in Haskell).  Functors
compose just by composing their fmaps, so fmap for [] . [] is simply
map . map, therefore the naturality condition for inits is the
following:
for any function f :: A -> B,
inits . map f = map (map f) . inits
which you can easily prove.

* Actually there are some restrictions relating to bottom.
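The condition is also easy to spot-check mechanically; a small QuickCheck
sketch, assuming a reasonably recent QuickCheck with the Fun/applyFun
machinery (which of course only tests particular instantiations, here A = Int
and B = Char):

    import Data.List (inits)
    import Test.QuickCheck

    -- inits . map f == map (map f) . inits, checked on random f and xs.
    prop_initsNatural :: Fun Int Char -> [Int] -> Bool
    prop_initsNatural f xs =
      inits (map (applyFun f) xs) == map (map (applyFun f)) (inits xs)

    main :: IO ()
    main = quickCheck prop_initsNatural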

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-05 Thread Derek Elkins
On Thu, 2009-03-05 at 16:12 -0800, Jonathan Cast wrote:
 On Thu, 2009-03-05 at 15:36 -0800, Daryoush Mehrtash wrote:
  In this chat server implementation
  http://www.haskell.org/haskellwiki/Implement_a_chat_server
  
  forkIO is used with fix as in:
  
  reader <- forkIO $ fix $ \loop -> do
  
  (nr', line) <- readChan chan'
  when (nr /= nr') $ hPutStrLn hdl line
  
  loop
  
  Do you have to use fix?  Or is there a way to write this with a let?
 
 You can certainly use let:
 
   reader <- forkIO $ let loop = do
   (nr', line) <- readChan chan'
   when (nr /= nr') $ hPutStrLn hdl line
   loop
 in loop
 
 But the version with fix is clearer (at least to people who have fix in
 their vocabulary) and arguably better style.

Both are poorish style.

reader <- forkIO $ forever $ do (nr', line) <- readChan chan'; when (nr /= nr') $ 
hPutStrLn hdl line

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Theory about uncurried functions

2009-03-03 Thread Derek Elkins
On Tue, 2009-03-03 at 10:43 +0100, Peter Verswyvelen wrote:
 Lambda calculus is a nice theory in which every function always has
 one input and one output. Functions with multiple arguments can be
 simulated because functions are first class and hence a function can
 return a function. Multiple outputs cannot be done, one must embed
 these outputs in some data type, e.g. using a tuple, or one must use
 continuation passing style.
 
 Now, does a similar theory exist of functions that always have one
 input and one output, but these inputs and outputs are *always*
 tuples? Or maybe this does not make any sense?

There's the kappa calculus and also the related Freyd categories which
are related to arrows.  There's also the theory induced by cartesian
categories or the theory induced by (symmetric) monoidal categories
(which are strengthenings of Freyd categories).

You could probably also formalize such a language yourself.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Uses of `fix' combinator

2009-02-19 Thread Derek Elkins
On Thu, 2009-02-19 at 17:00 +0300, Khudyakov Alexey wrote:
 Hello,
 
 While browsing documentation I've found following function 
 
  -- | @'fix' f@ is the least fixed point of the function @f@,
  -- i.e. the least defined @x@ such that @f x = x@.
  fix :: (a -> a) -> a
  fix f = let x = f x in x
 
 I have two questions. How could this function be used? I'm unable to imagine 
 any. Naive approach lead to nothing (no surprise):
 
 Prelude Data.Function> fix (^^2)
 <interactive>: out of memory (requested 2097152 bytes)
 
 
 Second question: what does the word `least' mean? `a' isn't an Ord instance. 

Least defined, i.e. least in the definability order where undefined <=
anything (hence also being called bottom) and, say, Just undefined <=
Just 3, while 1 and 2 are incomparable.  Fix is the basic mechanism supporting
recursion (theoretically).

The idea is when you have a recursive definition, you can abstract out
the recursive uses and apply fix to the resulting function, e.g.

ones = 1:ones
ones = fix (\ones -> 1:ones)

fact 0 = 1
fact n | n > 1 = n * fact (n-1)
fact = fix (\fact n -> case n of 0 -> 1; _ | n > 1 -> n * fact (n - 1))
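A runnable version of the same idea (fix lives in Data.Function; the factorial
here handles 0 and 1 explicitly, unlike the sketch above, and the names are
mine):

    import Data.Function (fix)

    -- The factorial tied with fix instead of top-level recursion.
    factorial :: Integer -> Integer
    factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

    -- The least fixed point of (1:) is the infinite list of ones.
    someOnes :: [Integer]
    someOnes = take 5 (fix (1:))   -- [1,1,1,1,1]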

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] forall ST monad

2009-02-19 Thread Derek Elkins
On Thu, 2009-02-19 at 05:53 -0800, Kim-Ee Yeoh wrote:
 There's a lot to chew on (thank you!), but I'll just take something 
 I can handle for now.
 
 
 Dan Doel wrote:
  
  An existential:
  
  exists a:T. P(a)
  
  is a pair of some a with type T and a proof that a satisfies P (which has
  type 
  P(a)). In Haskell, T is some kind, and P(a) is some type making use of a.
  That 
  doesn't mean that there is only one a:T for which P is satisfied. But it 
  *does* mean that for any particular proof of exists a:T. P(a), only one
  a:T is 
  involved. So if you can open that proof, then you know that that is the 
  particular a you're dealing with, which leads to the problems in the 
  grandparent.
  
 
 re: Constructivity and the opening of a proof
 
 A form of the theorem that the primes are infinite goes
 Given a finite set of primes, there's a prime bigger than any of them.
 
 The usual proof is constructive since factorization is algorithmic,
 but I don't see why a priori, applications of this theorem on a given input
 should always yield the same prime when more than one factor exists.

They don't.  Any particular constructive proof will yield a particular
new bigger prime, but they don't have to yield the same one.  

Let's choose a simpler example: There exists an Integer.  Any
(constructive) proof of that proposition is just an Integer and thus
certainly a particular Integer.  However, there is nothing special about
any particular Integer.

 Is non-deterministic choice forbidden in constructive math? A cursory
 google seems to suggest that if not, it's at least a bête noire to some.

Not particularly, but if you ultimately want an executable algorithm,
you are sooner or later going to have to spell out how such a
non-deterministic choice is made.

-Proof-search-, though, is usually quite non-deterministic.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Intergalactic Telefunctors

2009-02-15 Thread Derek Elkins
On Sun, 2009-02-15 at 18:53 +0100, Tillmann Rendel wrote:
 Gregg Reynolds wrote:
  BTW, I'm not talking about Haskell's Functor class, I guess I should
  have made that clear.  I'm talking about category theory, as the
  semantic framework for thinking about Haskell.
 
 In that case, I even less see why you are not introducing category 
 theory proper. Certainly, if one wants to use a semantic framework for 
 thinking about something, one should use the real thing, not some 
 metaphors.

The sooner you realize that Gregg is, apparently, only interested in
half-baked philosophizing and wordplay, the better off you'll be.  Of
the things he claims to be interested in, Haskell, category theory,
formal semantics, none have yet made an appearance on his blog.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Can this be done?

2009-02-12 Thread Derek Elkins
On Thu, 2009-02-12 at 19:55 -0500, Chung-chieh Shan wrote:
 wren ng thornton w...@freegeek.org wrote in article 
 4993bbee.9070...@freegeek.org in gmane.comp.lang.haskell.cafe:
  It's ugly, but one option is to just reify your continuations as an ADT, 
  where there are constructors for each function and fields for each 
  variable that needs closing over. Serializing that ADT should be simple 
  (unless some of those functions are higher-order in which case you run 
  into the same problem of how to serialize the function arguments). In 
  GHC's STG machine this representation shouldn't have much overhead, 
  though it does require the developer to do the compiler's job.
 
 FWIW, this idea is called defunctionalization (due to Reynolds),
 and it works for higher-order functions as well (because you can
 defunctionalize those function arguments in the same way).
 
 People in many fields put a lot of effort into turning their programs
 into state machines...
 
This paper by Ezra Cooper and Phil Wadler is an interesting recent
development in the theory of defunctionalization and very relevant to
this particular topic as well:
http://homepages.inf.ed.ac.uk/wadler/topics/links.html#located-lambda
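To make the idea concrete, a toy sketch of Reynolds-style defunctionalization
(all names are made up, not from the paper above): the closures we would have
wanted to serialize become constructors of a first-order, and hence
Read/Show-serializable, data type, plus an explicit apply function.

    data Fun = AddN Int | MulN Int
      deriving (Show, Read)

    -- The "interpreter" that gives the reified functions their meaning.
    apply :: Fun -> Int -> Int
    apply (AddN n) x = n + x
    apply (MulN n) x = n * x

    main :: IO ()
    main = do
      putStrLn (show (AddN 3))           -- "AddN 3": the function round-trips as text
      print (apply (read "MulN 2") 21)   -- 42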

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Fest

2009-02-09 Thread Derek Elkins
On Mon, 2009-02-09 at 16:54 -0800, Lyle Kopnicky wrote:
 Looks like a lot of fun!
 
 http://www.haskellchamber.com/page6.html

I could readily go there.  Maybe I could pick up a beauty at the
pageant.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bind as a sequencing operator (Was: evaluation semantics of bind)

2009-02-07 Thread Derek Elkins
On Thu, 2009-02-05 at 11:47 -0700, m...@justinbogner.com wrote:
 Jake McArthur j...@pikewerks.com writes:
  m...@justinbogner.com wrote:
  | Oops, sent this off list the first time, here it is again.
  |
  | Jake McArthur j...@pikewerks.com writes:
  | m...@justinbogner.com wrote:
  | | Bind is a sequencing operator rather than an application operator.
  |
  | In my opinion, this is a common misconception. I think that bind would
  | be nicer if its arguments were reversed.
  |
  | If this is a misconception, why does thinking of it this way work so
  | well? This idea is reinforced by the do notation syntactic sugar: bind
  | can be represented by going into imperative land and doing one thing
  | before another.
 
  An imperative-looking notation does not make something imperative.
 
  Thinking of bind as sequencing really *doesn't* work very well. What
  does bind have to do with sequencing at all in the list monad, for
  example? What about the reader monad?
 
  - Jake
 
 What doesn't bind have to do with sequencing in the list monad?
 Consider:
 
   [1..2] >>= return . (^2)
 
 This says generate the list [1..2] and then use it to generate a list
 of squares. It's more than just application, it's a description of a
 sequence of actions. The whole point of list comprehensions (which is
 the only reason to have a list monad, as far as I know) is to think
 of it this way rather than as an application of concatMap.
 
 As for Reader, I don't know enough about it to say anything.

Jake is more or less right.

The Monad interface does nothing to enforce any particular evaluation
order.  The interface is, however, too narrow for you to express do
these two things in an indeterminate order so you are forced to choose
a particular linearization.  Commutative monads, such as Reader, could
relax that constraint, not that it would really mean much in many of
those cases.

Now, if we wanted to give a semantics for a call-by-value programming
language, which was exactly the sort of thing Moggi was thinking about
when he was talking about monads, then application would indeed
translate into exactly (flipped) (>>=).  So the expression (f x) in,
say, SML would be translated to (f =<< x) using some appropriate monad
to model the side-effects that ML supports.  Actually, a 'let'-like
notation is often preferred, as it matches lists of statements more
prettily than the equivalent of treating ; as const.
This is the source of the name 'bind' as, e.g. the ML, (let a = M in let
b = N in a+b) translates to (M >>= \a -> N >>= \b -> return (a+b)) and,
of course, do-notation is just a syntactic variant of this 'let'
notation.
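For the list example quoted earlier in the thread, the "sequencing" reading
evaporates once (>>=) is unfolded; a tiny sketch (my names):

    -- The quoted expression, next to its non-sequential reading:
    ex, ex' :: [Int]
    ex  = [1..2] >>= return . (^2)            -- [1,4]
    ex' = concatMap (return . (^2)) [1..2]    -- [1,4]; (>>=) on lists is flip concatMap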

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] morphisms in IO

2009-02-07 Thread Derek Elkins
On Thu, 2009-02-05 at 20:52 -0600, Gregg Reynolds wrote:
 I'm working on a radically different way of looking at IO.  Before I
 post it and make a fool of myself, I'd appreciate a reality check on
 the following points:
 
 a)  Can IO be thought of as a category?  I think the answer is yes.

No.  At least not in any reasonable way.

 b)  If it is a category, what are its morphisms?  I think the answer
 is: it has no morphisms.  The morphisms available are natural
 transformations or functors, and thus not /in/ the category.
 Alternatively: we have no means of directly naming its values, so the
 only way we can operate on its values is to use morphisms from the
 outside (operating on construction expressions qua morphisms.)

N/A

 c)  All categories with no morphisms (bereft categories?) are
 isomorphic (to each other).  I think yes.

No.  Discrete categories which you seem to be talking about are
isomorphic to sets (namely their set of objects).  Not all sets are
isomorphic.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANN: #haskell-in-depth IRC channel

2009-02-03 Thread Derek Elkins
On Wed, 2009-02-04 at 14:32 +0900, Benjamin L.Russell wrote:
 On Wed, 04 Feb 2009 00:15:48 +, Philippa Cowderoy
 fli...@flippac.org wrote:
 
 [...]
 
 If you need to know how to use monads so you can do IO,
 #haskell-in-depth isn't the place. On the other hand, if you want to
 discuss how Haskell's monads compare to the category theory or what the
 category theory can tell us about how individual monads relate to the
 language as a whole, -in-depth is a good place! In particular, we're
 hoping that the kind of category theory discussions that give the
 mistaken impression you actually need to know CT will increasingly live
 in #haskell-in-depth.
 
 We're not after a theory channel though - architectural discussion,
 compiler implementation, possible type system extensions, library
 design, all are good subjects.
 
 Great work!  I look forward to participating sometime in the near
 future.
 
 In that case, for people who need to know how to use monads so that
 they can do IO, why not create a #haskell-beginners channel?  I have
 occasionally read posts of some users who were hesitant to participate
 in #haskell until they learned enough to keep up with the discussions
 there.  If neither #haskell nor #haskell-in-depth is appropriate,
 perhaps they would feel more comfortable in a
 Haskell-beginners-specific channel?

Asking any Haskell-related question at any level is appropriate in
#haskell, now as always.

One of the implicit goals of the new channel is to minimize such
intimidation.  The explicit goal of the new channel is to increase the
newbie friendliness of #haskell.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type and data constructors in CT

2009-01-31 Thread Derek Elkins
On Sat, 2009-01-31 at 11:00 -0600, Gregg Reynolds wrote:
 Hi,
 
 I think I've finally figured out what a monad is, but there's one
 thing I  haven't seen addressed in category theory stuff I've found
 online.  That is the relation between type constructors and data
 constructors.

The typical (albeit incomplete) view is that data constructors are the
(components of the) initial algebra of the functor corresponding to the
signature of a given algebraic data type.  This is discussed in quite a
few places online. 
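One standard way to make that view concrete in code (a textbook sketch, not
taken from the message): ListF is the signature functor for lists, Fix its
least fixed point, and cata the unique algebra morphism out of the initial
algebra, essentially foldr.

    data ListF a r = NilF | ConsF a r

    instance Functor (ListF a) where
      fmap _ NilF        = NilF
      fmap g (ConsF x r) = ConsF x (g r)

    newtype Fix f = In (f (Fix f))

    type List a = Fix (ListF a)

    -- The "data constructors" are the components of the initial algebra.
    nil :: List a
    nil = In NilF

    cons :: a -> List a -> List a
    cons x xs = In (ConsF x xs)

    cata :: Functor f => (f b -> b) -> Fix f -> b
    cata alg (In t) = alg (fmap (cata alg) t)

    -- e.g. summing a list built from the initial-algebra constructors:
    total :: Int
    total = cata alg (cons 1 (cons 2 nil))   -- 3
      where alg NilF        = 0
            alg (ConsF x s) = x + s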

 
 As I understand it, a type constructor Tcon a is basically the object
 component of a functor T that maps one Haskell type to another.
 Haskell types are construed as the objects of category HaskellType.
 I think that's a pretty straightforward interpretation of the CT
 definition of functor.
 
 But a data constructor Dcon a is an /element/ mapping taking elements
 (values) of one type to elements of another type.  So it too can be
 construed as a functor, if each type itself is construed as a
 category.

What are elements of a type?  How are you construing a type as a
category?  One answer is that you are viewing types as sets.  Ignoring
the problems with that identification, a set can be viewed as a discrete
category and functors between discrete categories are just functions.
This is like going around three sides of a square; you don't gain
anything over just saying types are sets, and functions are set
functions.

Note that there is nothing special about data constructors here.  Every
function between types is such a functor.

Most articles that apply CT to Haskell take one of two approaches.  They
either talk about a category of Haskell types and functions with no
explanation of what those actually are, i.e. the understanding that it
behaves like (idealized) Haskell, or they refer to some existing
approach to (idealized) semantics, e.g. sets or domains.  In either
case, the meaning of the objects and arrows is effectively taken for
granted.

An approach along the lines you are suggesting would be useful for a
categorical semantics of Haskell, but it would just be one possible
semantics among many.  For most of the aforementioned articles, the only
value of such a thing would be to be able to formally prove that the
constructions talked about exist (except that they usually don't for
technical reasons.)  Usually in those articles, the readers are assumed
to know Haskell and to not know much about category theory, so trying to
explain types and functions to them categorically is unnecessary and
obfuscatory.  It would make sense if you were going the other way,
explaining Haskell to categorists.

 So this gives us two functors, but they operate on different things,
 and I don't see how to get from one to the other in CT terms.  Or
 rather, they're obviously related, but I don't see how to express that
 relation formally.

One way to do this would be to use indexed categories or more generally
two categories.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monoids and newtypes

2009-01-27 Thread Derek Elkins
On Tue, 2009-01-27 at 08:51 -0800, Anish Muttreja wrote:
 On Thu, 22 Jan 2009 09:46:19 -0800, Derek Elkins derek.a.elk...@gmail.com 
 wrote:
 
 
 The old wiki had an excellent page that has not been replicated either
  verbatim or in spirit in the new wiki.
  http://web.archive.org/web/20060831090007/http://www.haskell.org/hawiki/CommonHaskellIdioms
 
 Thanks, this is really useful.
 
 There is a wikisnapshot on haskell.org 
 http://haskell.org/wikisnapshot/CommonHaskellIdioms.html
 which looks like a replication and has more working links than the 
 web.archive.org page.

The snapshot is quite a bit older than what is available on archive.org.
You should be able to stick the link to any page that doesn't work into
archive.org and get a version that does work (i.e. re-search for the
page.)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: filestore 0.1

2009-01-25 Thread Derek Elkins
On Sun, 2009-01-25 at 09:32 +, Magnus Therning wrote:
 Bulat Ziganshin wrote:
  Hello Gwern,
  
  Sunday, January 25, 2009, 2:56:07 AM, you wrote:
  
  my usual complaint: it will be great to see all announces duplicated
  in main haskell list
 
 I always only announce things on haskell-cafe.  What list is the “main
 haskell list”?

This is the description of the Haskell mailing list:
hask...@haskell.org (read & search via gmane)
Announcements, discussion openers, technical questions. 
hask...@haskell.org is intended to be a low-bandwidth list, to
which it is safe to subscribe without risking being buried in
email. If a thread becomes longer than a handful of messages,
please transfer to haskell-c...@haskell.org.

All announcements should go to hask...@haskell.org if nowhere else.  In
practice, it is probably best to post to both haskell and haskell-cafe
and this is what most people do.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laws and partial values

2009-01-25 Thread Derek Elkins
On Sun, 2009-01-25 at 07:11 -0800, Jonathan Cast wrote:
 On Sun, 2009-01-25 at 10:46 +0100, Thomas Davie wrote:
  On 25 Jan 2009, at 10:08, Daniel Fischer wrote:
  
   Am Sonntag, 25. Januar 2009 00:55 schrieb Conal Elliott:
   It's obvious because () is a defined value, while bottom is not -  
   per
   definitionem.
  
   I wonder if this argument is circular.
  
   I'm not aware of defined and not defined as more than informal  
   terms.
  
   They are informal. I could've written one is a terminating  
   computation while
   the other is not.
  
  Is that a problem when trying to find the least defined element of a  
  set of terminating computations?
 
 Yes.  If you've got a set of terminating computations, and it has
 multiple distinct elements, it generally doesn't *have* a least element.
 The P in CPO stands for Partial.

Yes, partial as in partial order (v. total order or preorder) not as
in partiality.  It's actually the complete part that indicates the
existence of a least element, pretty much by definition.  A cpo is a
dcpo (directed complete partial order) with a least element, though
sometimes cpo is used for dcpo in which case a least element is not
guaranteed.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: filestore 0.1

2009-01-25 Thread Derek Elkins
On Sun, 2009-01-25 at 23:09 +0100, Magnus Therning wrote:
 On Sun, Jan 25, 2009 at 10:47 AM, Derek Elkins derek.a.elk...@gmail.com 
 wrote:
  On Sun, 2009-01-25 at 09:32 +, Magnus Therning wrote:
  Bulat Ziganshin wrote:
   Hello Gwern,
  
   Sunday, January 25, 2009, 2:56:07 AM, you wrote:
  
   my usual complaint: it will be great to see all announces duplicated
   in main haskell list
 
  I always only announce things on haskell-cafe.  What list is the main
  haskell list?
 
  This is the description of the Haskell mailing list:
  hask...@haskell.org (read  search via gmane)
 Announcements, discussion openers, technical questions.
 hask...@haskell.org is intended to be a low-bandwidth list, to
 which it is safe to subscribe without risking being buried in
 email. If a thread becomes longer than a handful of messages,
 please transfer to haskell-c...@haskell.org.
 
  All announcements should go to hask...@haskell.org if nowhere else.  In
  practice, it is probably best to post to both haskell and haskell-cafe
  and this is what most people do.
 
 Hmm, interesting, I've never bothered subscribing to that list.  I
 assumed it was a moderated announcement list that I wouldn't be able
 to post to myself, i.e. it would be a one-way list only.  My
 assumption was most likely based on my experience with Debian
 announcement lists.
 
 I have a hard time seeing a point in having that list:
 
  - there should be no discussions on the list, and
  - people suggest that announcements be X-posted to haskell-cafe.
 
 Hmm, so what is posted to hask...@haskell.org that a subscriber to
 haskell-cafe would be inerested in?

Some announcements, discussion openers, technical questions may not be
posted to haskell-c...@haskell.org.  However, the idea (now at least) is
that if you are subscribed to haskell-cafe@haskell.org, you should (may
as well) be subscribed to hask...@haskell.org.  However, if you don't
want to drink from the firehose that is haskell-cafe@haskell.org you
can still keep track of announcements and such in hask...@haskell.org.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Factory methods in Haskell

2009-01-24 Thread Derek Elkins
On Sat, 2009-01-24 at 15:43 -0600, Jeremy Shaw wrote:
 Hello,
 
 I was reading about the Factory Method Pattern on wikipedia, and
 noticed that the very first example was written in Haskell. Sweet!
 
 http://en.wikipedia.org/wiki/Factory_method_pattern#Haskell
 
 Unfortunately, it looks to me like it is missing the 'factory' part.

It is.

 
 I have attempted to implement something more factory like (see
 attached). I am wondering what other people think. Is the code on
 wikipedia really demoing a factory method? Is the code attached any
 better? Is there an even better what to write this in Haskell?

Answering your questions in order: no, yes, and yes, but the essence is right
(there are other ways as well).  However, since dynamic dispatch is rare
in Haskell, the factory method does not really come up.  It does occur
but not in a way that most people think of as the factory method.

 plain text document attachment (Pizza.hs)
 {-# LANGUAGE ExistentialQuantification #-}
 import Numeric (showFFloat)
 
 -- * A type which can hold different types of pizzas
 
 data Pizza = forall a. (PizzaMethods a) => Pizza a
 
 -- * A type class with functions common to different types of pizza
 
 class (Show a) => PizzaMethods a where
 price' :: a -> Double
 
 -- * Getter functions for the pizza type
 
 price :: Pizza -> Double
 price (Pizza p) = price' p
 
 pizzaType :: Pizza - String
 pizzaType (Pizza p) = show p

Get rid of these functions and get rid of the ' in price' and simply
make Pizza an instance of PizzaMethods.

 
 -- * Some types of pizza
 
 data HamAndMushroom = HamAndMushroom deriving (Read, Show)
 data Deluxe = Deluxe deriving (Read, Show)
 data Hawaiin = Hawaiin deriving (Read, Show)
 
 -- * Prices of various pizzas
 
 instance PizzaMethods HamAndMushroom where
 price' _ = 8.50
 
 instance PizzaMethods Deluxe where
 price' _ = 10.50
 
 instance PizzaMethods Hawaiin where
 price' _ = 11.50
 
 -- * A pizza factory
 
 pizzaFactory :: String -> Pizza
 pizzaFactory pizzaType
 | pizzaType == "HamAndMushroom" = Pizza HamAndMushroom
 | pizzaType == "Deluxe" = Pizza Deluxe
 | pizzaType == "Hawaiin" = Pizza Hawaiin
 | otherwise = error "We don't serve your kind here." 
 
 -- * An order at the pizza factory
 
 main = 
 let pizza = pizzaFactory "HamAndMushroom"
 in putStrLn $ "You can get a " ++ pizzaType pizza ++ " for $" ++ 
 showFFloat (Just 2) (price pizza) "."
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
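Spelling out the refactoring suggested above as a sketch (the concrete pizza
types and the factory would carry over unchanged): drop the standalone
getters, rename price' to price, and make the existential wrapper an instance
of the class itself.

    {-# LANGUAGE ExistentialQuantification #-}

    class Show a => PizzaMethods a where
      price :: a -> Double

    data Pizza = forall a. PizzaMethods a => Pizza a

    instance Show Pizza where
      show (Pizza p) = show p

    instance PizzaMethods Pizza where
      price (Pizza p) = price p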

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mapM_ - Monoid.Monad.map

2009-01-23 Thread Derek Elkins
On Fri, 2009-01-23 at 13:39 -0800, George Pollard wrote:
 On Fri, 2009-01-23 at 21:30 +, Joachim Breitner wrote:
  Hi,
  
  Am Freitag, den 23.01.2009, 21:50 +0100 schrieb Henning Thielemann:
 However our recent Monoid discussion made me think about mapM_, 
   sequence_, and friends. I think they could be useful for many monads if
 
   they would have the type:
  mapM_ :: (Monoid b) => (a -> m b) -> [a] -> m b
  I expect that the Monoid instance of () would yield the same
 efficiency 
   as todays mapM_
  
  will it? This is based on a naive, not well-founded understanding of
  haskell evaluation, but looking at
   instance Monoid () where
 mempty = ()
 _ `mappend` _ = ()
 mconcat _ = ()
  I’d assume that evaluating
   mapM_ (putStrLn) lotsOfLargeStrings
  with your proposed mapM_ will leave a thunk equivalent to
   () `mappend` () `mappend` () `mappend`...
  in memory until the mapM_ has completely finished, where each () is
  actually an unevalutated thunk that still has a reference to one of the
  elements in the lotsOfLargeStrings list.
 
 Perhaps this is why the Monoid instance for () in GHC's source has the
 comment should this be strict? :)

It's easy to calculate the answer.

mempty `mappend` undefined = undefined (left identity monoid law)
The above definition doesn't meet this, similarly for the right identity
monoid law.  That only leaves one definition, () `mappend` () = () which
does indeed satisfy the monoid laws.

So the answer to the question is Yes.  Another example of making
things as lazy as possible going astray.
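For context, a sketch of the generalized combinator being proposed in the
quoted message (mapMMonoid is a made-up name, not a library function, and the
Monad constraint is added to make it compile); note that it is only as strict
in the accumulated results as mappend itself is, which is exactly why the
strictness of the () instance matters above.

    import Data.Monoid (Monoid, mappend, mempty)

    mapMMonoid :: (Monad m, Monoid b) => (a -> m b) -> [a] -> m b
    mapMMonoid f = go
      where
        go []     = return mempty
        go (x:xs) = do
          b  <- f x
          bs <- go xs
          return (b `mappend` bs)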

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monoids and newtypes

2009-01-22 Thread Derek Elkins
On Thu, 2009-01-22 at 16:11 +0100, Ketil Malde wrote:
 One wart that was briefly mentioned during the Great Monoid Naming
 Thread of 2009 is the need to wrap types in newtypes to provide multiple
 instances of the same class with different semantics -- the archetypical
 example being Integer as a monoid over addition as well as
 multiplication. 
 
 I was just wondering if not phantom types might serve here as an
 alternative way to go about that.  Here's a small example illustrating
 it: 
 
 
 {-# LANGUAGE EmptyDataDecls  #-}
 {-# LANGUAGE FlexibleInstances  #-}
 
 module Monoids where
 import Data.Monoid
 
 data Foo a = Foo Integer deriving (Show, Eq)
 
 data Additive
 data Multiplicative
 
 instance Monoid (Foo Additive) where
 mappend (Foo x) (Foo y) = Foo (x+y)
 mempty = Foo 0
 
 instance Monoid (Foo Multiplicative) where
 mappend (Foo x) (Foo y) = Foo (x*y)
 mempty = Foo 1
 
 instance Num (Foo a) where
 fromInteger x = Foo x
 Foo x + Foo y = Foo (x+y)
 Foo x * Foo y = Foo (x*y)
 signum (Foo x) = Foo (signum x)
 
 
 Loading this into ghci, you get:
 *Monoids> mconcat [1,2]
 
 interactive:1:0:
 Ambiguous type variable `t' in the constraints:
   `Monoid t' arising from a use of `mconcat' at interactive:1:0-12
   `Num t' arising from the literal `2' at interactive:1:11
 Probable fix: add a type signature that fixes these type variable(s)
 *Monoids> mconcat [1,2::Foo Additive]
 Foo 3
 *Monoids> mconcat [1,2::Foo Multiplicative]
 Foo 2
 
 (This can of course be prettified a bit by omitting the constructor
 from the Show instance).  
 
 Any thought about this, pro/contra the newtype method?
 

The old wiki had an excellent page that has not been replicated either
verbatim or in spirit in the new wiki.
http://web.archive.org/web/20060831090007/http://www.haskell.org/hawiki/CommonHaskellIdioms

This lists many small tips and tricks that Haskell programmers have
discovered/used throughout the years.

This particular example is an example of using wrapper types to attach a
phantom type as described here:
http://web.archive.org/web/20070614230306/http://haskell.org/hawiki/WrapperTypes

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt

2009-01-22 Thread Derek Elkins
On Thu, 2009-01-22 at 11:32 -0600, Jeremy Shaw wrote:
 Hello,
 
 Just some minor suggestions and comments:
 
 The description might read better as two sentences:
 
A class for monoids with various general-purpose instances. Monoids
are types with an associative binary operation that has an
identity.
 
 One thing that I think is a bit unclear from that description is the
 fact that it does not matter *what* the binary operation does, as long
 as the laws are followed. That is the whole point of the monoid class
 -- you use it when you only care about the laws, not the specific
 operation...
 
 For the laws, it would be nice to label each rule, something like
 
  * mappend mempty x = x -- Left Identity
  * mappend x mempty = x -- Right Identity
  * mappend x (mappend y z) = mappend (mappend x y) z -- Associative


  * mconcat = foldr mappend mempty   -- Not sure what to 
 call this. Perhaps it's an axiom?

This is just a definition, both actually and nominally.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Factoring into type classes

2009-01-19 Thread Derek Elkins
On Mon, 2009-01-19 at 21:18 +0100, Alberto G. Corona wrote:
 This is one of the shortcomings of Haskell, not to mention other
 programming languages. Mathematicians would find it very annoying.
 
 
 Instead of 
 
 
 instance Monoid Integer where
 mappend = (+)
 mempty = 0
 
 instance Monoid Integer where
 mappend = (*)
 mempty = 1
 
 
 which is not legal and the workaround
 
 Num a => Monoid (Sum a)
 Num a => Monoid (Product a)
 
 which is cumbersome.
 A mathematician would say something like:
 instance Monoid Integer with operation + where
 mappend = (+)
 mempty = 0
 and
 
 instance Monoid Integer with operation * where
 
 mappend = (*)
 mempty = 1

Check out the OBJ family of languages, particularly OBJ3 and (I think)
Maude.

 
 
 But talking about shortcomings, personally I prefer to implement first
 a form of assertion that permits the checking of the  class properties
 automatically for each new instance. 
  
 This is far more important in practical terms.
 
 
 
 2009/1/19 Thomas DuBuisson thomas.dubuis...@gmail.com
 2009/1/19 Luke Palmer lrpal...@gmail.com:
 
 
  On Mon, Jan 19, 2009 at 3:58 AM, Patai Gergely
 patai_gerg...@fastmail.fm
  wrote:
 
  However, there are other type classes that are too general
 to assign
  such concrete uses to. For instance, if a data structure
 can have more
  than one meaningful (and useful) Functor or Monoid
 instance,
 
  As a side curiosity, I would love to see an example of any
 data structure
  which has more than one Functor instance.  Especially those
 which have more
  than one useful functor instance.
  Luke
 
 
 The recent, and great, blog post about moniods [1] discusses
 the fact
 that (Num a) could be one of several different monoids and how
 that
 was handled.
 
 [1]
 http://sigfpe.blogspot.com/2009/01/haskell-monoids-and-their-uses.html
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Factoring into type classes

2009-01-19 Thread Derek Elkins
On Mon, 2009-01-19 at 12:10 -0800, Iavor Diatchki wrote:
 Hi,
 
 On Mon, Jan 19, 2009 at 11:06 AM, Jonathan Cast
 jonathancc...@fastmail.fm wrote:
  On Mon, 2009-01-19 at 10:59 -0800, Iavor Diatchki wrote:
  Hello,
  The multitude of newtypes in the Monoid module are a good indication
  that the Monoid class is not a good fit for the class system
 
  I would say rather that the class system is not a good fit for Monoid.
  Proposals for local instances, multiple instances, instance
  import/export control, etc. come up quite frequently on this list; the
  phenomena in question are not restricted to Monoid.
 
 I disagree with you but that is a moot point because we are discussing
 Haskell, which does not have any of these features.  Also, I find that
 in many situations where people want to use them, simpler solutions
 (like some of the ideas I mentioned in my  previous post) suffice.
 That is not to say that we should stop trying to figure out how to
 improve the class system, but language changes require a lot more work
 than improving the design of the libraries.
 
  I usually
  avoid using the newtype trick as I find it inconvenient:  usually
  the newtype does not have the same operations as the underlying type
  and so it cannot be used directly, and if you are going to wrap thing
  just when you use the class methods,
 
  OTOH, I think you mean here `when you use class methods and when you use
  overloaded functions'.
 
 Sure, the point is that you are essentially adding a type annotation,
 which is like using a non-overloaded function.  Compare, for example:
 mappend add x y  and getSum (mappend (Sum x) (Sum y)).  I think
 that the first one is quite a bit more readable but, of course, this
 is somewhat subjective.

data Iso a b = Iso { to :: a -> b, from :: b -> a }

under :: Iso a b -> (b -> b) -> (a -> a)
under iso = to iso ~> from iso

under2 :: Iso a b -> (b -> b -> b) -> (a -> a -> a)
under2 iso = to iso ~> under iso

sumIso = Iso Sum getSum

(+) = under2 sumIso mappend
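For reference, a runnable version of the sketch above.  The (~>) combinator is
my guess at what the message assumes, in the style of semantic editor
combinators (pre-compose with the first argument, post-compose with the
second), and the final binding is called add here rather than (+) to avoid
clashing with the Prelude.

    import Data.Monoid (Sum(..))

    (~>) :: (a' -> a) -> (b -> b') -> (a -> b) -> (a' -> b')
    (f ~> g) h = g . h . f

    data Iso a b = Iso { to :: a -> b, from :: b -> a }

    under :: Iso a b -> (b -> b) -> (a -> a)
    under iso = to iso ~> from iso

    under2 :: Iso a b -> (b -> b -> b) -> (a -> a -> a)
    under2 iso = to iso ~> under iso

    sumIso :: Iso a (Sum a)
    sumIso = Iso Sum getSum

    add :: Num a => a -> a -> a
    add = under2 sumIso mappend   -- add 2 3 == 5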

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell and C++ program

2009-01-19 Thread Derek Elkins
On Mon, 2009-01-19 at 22:12 -0500, S. Doaitse Swierstra wrote:
 On 17 jan 2009, at 22:22, Derek Elkins wrote:
 
  On Thu, 2009-01-15 at 13:40 +0100, Apfelmus, Heinrich wrote:
  Eugene Kirpichov wrote:
  Well, your program is not equivalent to the C++ version, since it
  doesn't bail on incorrect input.
 
  Oops. That's because my assertion
 
show . read = id
 
  is wrong. We only have
 
read . show  = id
   show . read <= id  (in the "less defined than" sense)
 
  No, you only have
  read . show = id which often doesn't hold in practice.
  show . read /<= id
 
 You do not even have that; the read may remove surplus parentheses  
 which will not be reinserted by the show.
 
   Doaitse
 

My notation is show . read is not less than or equal to id.  That covers
that case.  The particular example I was thinking of was actually simply
whitespace.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Improved documentation for Bool (Was: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt)

2009-01-18 Thread Derek Elkins
On Sun, 2009-01-18 at 18:17 +0100, Benja Fallenstein wrote:
 On Sun, Jan 18, 2009 at 5:48 PM,  rocon...@theorem.ca wrote:
  I noticed the Bool datatype isn't well documented.  Since Bool is not a
  common English word, I figured it could use some haddock to help clarify it
  for newcomers.
 
  -- |The Bool datatype is named after George Boole (1815-1864).
  -- The Bool type is the coproduct of the terminal object with itself.
 
 Russell, this does seem like it might be very helpful, but it might be
 useful to include a note about what category you are working in.
 People may sometimes naively assume that one is working in the
 category of Haskell/Hugs/GHC data types and Haskell functions, in
 which there are no terminal -- or initial -- objects 

The naive way of making a Haskell category doesn't even work.  Taking
objects to be Haskell types, all Haskell functions as arrows, arrow
equality being observational equality, and (.) and id to be the
composition and identity, you fail to even have a category.  Proof of
this is left as an (easy) exercise for the reader.
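One standard counterexample along these lines (my sketch, not Derek's intended
proof): with seq in the language, id . f need not equal f observationally, so
even the identity law fails.  The whnfOK helper below is a made-up name; the
behaviour shown is for GHC's usual definition of (.) without aggressive
optimization (which is free to change it, rather making the point).

    import Control.Exception (SomeException, evaluate, try)

    f :: Int -> Int
    f = undefined

    -- Does evaluating an expression to weak head normal form succeed?
    whnfOK :: a -> IO Bool
    whnfOK x = fmap check (try (evaluate x))
      where
        check :: Either SomeException b -> Bool
        check (Left _)  = False
        check (Right _) = True

    main :: IO ()
    main = do
      b1 <- whnfOK f          -- False: f is bottom
      b2 <- whnfOK (id . f)   -- True: (.) builds a lambda, already in WHNF
      print (b1, b2)          -- (False,True): id . f and f differ observationally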

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell and C++ program

2009-01-17 Thread Derek Elkins
On Thu, 2009-01-15 at 13:40 +0100, Apfelmus, Heinrich wrote:
 Eugene Kirpichov wrote:
  Well, your program is not equivalent to the C++ version, since it
  doesn't bail on incorrect input.
 
 Oops. That's because my assertion
 
show . read = id
 
 is wrong. We only have
 
read . show  = id
    show . read <= id  (in the "less defined than" sense)

No, you only have
read . show = id which often doesn't hold in practice.
show . read /<= id

Assuming the first identity holds, you do of course have show . read .
show = show and this probably holds even in most cases where read . show
= id does not hold.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt

2009-01-16 Thread Derek Elkins
On Fri, 2009-01-16 at 15:21 -0800, Jonathan Cast wrote:
 On Fri, 2009-01-16 at 18:14 -0500, Anton van Straaten wrote:
  Niklas Broberg wrote:
   I still think existential quantification is a step too far though. :-P
   
   Seriously, existential quantification is a REALLY simple concept, that
   you would learn week two (or maybe three) in any introductory course
   on logic. In fact, I would argue that far more people probably know
   what existential quantification is than that know what a monoid is.
   :-)
  
  Andrew's core objection here seems reasonable to me.  It was this:
  
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and
should be changed to something that, at a minimum, tells you it's
something to do with the type system.
  
  But I suspect I part company from Andrew in thinking that something like 
  ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
 
 +1

This focus on names is ridiculous.  I agree that good names are
beneficial, but they don't have to encode everything about the referent
into themselves.  Haskell is called Haskell not
StaticallyTypedPurelyFunctionalProgrammingLanguage.  In this
particular case, it's absurd.  In this case the name is only of mnemonic
value, other than that it could be called FraggleRock.  Regardless of
the name you are going to have to look up what it refers to (in the
user's guide), or, having already done that earlier, just know what it
means.

 (Although shouldn't it really be ExistentiallyQuantifiedConstructorTypes
 or something?  If GHC ever actually adds first-class existentials, what
 is Cabal going to call *that* then?)

FreeExistentials.  FirstClassExistentials would also be reasonable.
Though renaming the current LANGUAGE tag to
LocalExistentialQuantification would be better.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt

2009-01-15 Thread Derek Elkins
Actually programming requires -far more- precision than mathematics ever
has.  The standards of formal and precise that mathematicians use
are a joke to computer scientists and programmers.  Communication is
also more important or at least more center stage in mathematics than
programming.  Mathematical proofs are solely about communicating
understanding and are not required to execute on a machine.

On Thu, 2009-01-15 at 18:27 +, Lennart Augustsson wrote:
 That's very true.  But programming is one where mathematical precision
 is needed, even if you want to call it something else.
 
 On Thu, Jan 15, 2009 at 6:04 PM, Paul Moore p.f.mo...@gmail.com wrote:
 
  Mathematical precision isn't appropriate in all disciplines.
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt

2009-01-15 Thread Derek Elkins
On Thu, 2009-01-15 at 14:11 -0600, John Goerzen wrote:
 On Thu, Jan 15, 2009 at 07:46:02PM +, Andrew Coppin wrote:
  John Goerzen wrote:
 
  If we *must* insist on using the most obscure possible name for  
  everything, can we at least write some documentation that doesn't  
  require a PhD to comprehend?? (Anybody who attempts to argue that  
  monoid is not actually an obscure term has clearly lost contact with  
  the real world.)
 
 Several people have suggested this, and I think it would go a long way
 towards solving the problem.  The problem is: this documentation can
 really only be written by those that understand the concepts,
 understand how they are used practically, and have the time and
 inclination to submit patches.  Experience suggests there may be no
 such people out there :-)
 
  As somebody else said, it basically comes down to this: Who the hell is  
  Haskell actually for? If it's seriously intended to be used by  
  programmers, things need to change. And if things aren't going to  
  change, then let's all stop pretending that Haskell actually cares about  
  real programmers.
 
 It might surprise you to see me say this, but I don't see this
 discussion as necessarily a weakness.  I know of no other language
 community out there that has such a strong participation of both
 academics and applied users.  This is a great strength.  And, of
 course, Haskell's roots are firmly in academia.  
 
 I think there there is a ton of interest in Haskell from the, ahem,
 real world programmer types.  In fact, it seems to me that's where
 Haskell's recent growth has been.  There are a lot of things showing
 up on Hackage relating to networking, Unicode encoding, databases, web
 apps, and the like.
 
 The nice thing about Haskell is that you get to put the theory in
 front of a lot of people that would like to use it to solve immediate
 programming problems.  But they will only use it if you can explain it
 in terms they understand.

There are plenty of real world programmer types who are using these
scarily named things, Monoid, Monad, Functor, Existential
Quantification.  Programmers such as you*.  Despite poor documentation,
which everyone agrees could be improved, they've somehow managed to
understand these things anyway.  My impression is that to most of them
Monoids, Functors, and Monads are Just Another Interface and Existential
Quantification is Just Another Language Feature.  There are poorly
documented interfaces in every language**.  Any real world programmer
has some (plenty...) of experience dealing with this issue.  These
programmers do what they need to do to get stuff done.  Again, somehow
they learn how to use these things without waiting for us to provide
an explanation in terms they can understand; too busy trying to get
stuff done.

 
 There are a number of efforts in that direction: various websites,
 articles, books, libraries, etc.  And I think the efforts are
 succeeding.  But that doesn't mean there is no room for improvement.

No one doubts that there is room for improvement.  However, the
direction is better documentation, not different names.  Better names is
fine, but I have not heard any remotely convincing alternative for any
of the above terms.

* Or me for that matter.  I'm not an academic now and certainly wasn't
when I started learning Haskell.  I didn't know what a monoid was, had
never heard of category theory or monads or functors.  I was using
monads and functors and monoids in less than a month after I started
using Haskell.

** Heck, papers and decades worth of mathematical texts at almost every
level is a heck of a lot more documentation than most poorly
documented interfaces have.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


  1   2   3   4   5   6   >