Re: [Haskell-cafe] foldr (.) id

2012-10-29 Thread Sebastian Fischer
 (.)/compose is consistent with (+)/sum, (*)/product, (&&)/and, etc.

"(to) compose" is a verb; "composition" would be consistent with "sum"
and "product". "and" doesn't fit, though.
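
For reference, a minimal sketch of the combinator in the subject line,
which folds a list of functions into their composition:

    compose :: [a -> a] -> a -> a
    compose = foldr (.) id

    -- compose [(+1), (*2)] 3  ==  (+1) ((*2) 3)  ==  7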

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Snap] Argument Substitution in Heist Templates with Splices

2012-09-23 Thread Sebastian Fischer
Thanks! For the record, here is how to achieve what I want by
explicitly using `runChildren` and `stopRecursion`:

testSplice :: Splice IO
testSplice = do
    input <- getParamNode
    kids  <- runChildren
    stopRecursion
    return [input { elementChildren = kids }]

It surprises me that an explicit call to `runChildren` is necessary,
especially after your comment regarding sensitivity to evaluation
order.

Your linked post shows that Heist splices are processed top down,
which reminds me of the `transform` combinator in Uniplate:


http://community.haskell.org/~ndm/downloads/paper-uniform_boilerplate_and_list_processing-30_sep_2007.pdf

The authors discuss bottom-up and top-down transformations in Sections
2.3 and 2.4 and argue for providing bottom-up transformations and only
a specific form of top-down transformations.

I think Heist's splice processing would be more intuitive (less
sensitive to evaluation order?) if applied bottom up rather than top
down. This only seems to require a slight change in the definition of
`runNode` from the post you linked - to process children before
applying the splice:

runNode :: Monad m => X.Node -> Splice m
runNode (X.Element nm at ch) = do
    newAtts <- mapM attSubst at
    newKids <- runNodeList ch              -- added this line
    let n = X.Element nm newAtts newKids   -- changed this line
    s <- liftM (lookupSplice nm) getTS
    maybe (return [n]) (recurseSplice n) s -- changed this line
    -- removed local function `runKids`
runNode n = return [n]

This change would simplify the definition of filter splices which
would not need to call `runChildren` explicitly. It would also make
the definition of substitution splices more uniform, because children
would be already processed when applying the splice - just like
attributes are.

Are Heist splices processed top down intentionally? (Reasons for doing
so are the same reasons people might have for preferring call-by-name
over call-by-value. However, I tend to agree with the discussion in
the Uniplate paper and would prefer call-by-value aka bottom-up
transformation.)

Best,
Sebastian

On Fri, Sep 21, 2012 at 6:03 PM, MightyByte mightyb...@gmail.com wrote:
 This is one of the more subtle corner cases of Heist.  By default, splices
 are recursively processed.  So when testSplice is executed for the test
 tag, the results are fed back into splice processing.  I think this is the
 right thing to do because it makes behavior less sensitive to evaluation
 order.  Obviously this can lead to infinite recursion, so Heist limits the
 splice call stack to a depth of 50.  If this limit is exceeded, then Heist
 simply stops recursing and returns the nodes unprocessed.  I also think this
 is the right thing to do because it is happening as we're serving a page to
 the end user, so there's an argument for failing quietly instead of going up
 in a ball of flames.

 In your case, you are returning the same node that was spliced in, so you
 are hitting the recursion limit and splice processing just stops.  I discuss
 this issue in my blog post about splice subtleties
 (http://softwaresimply.blogspot.com/2011/04/splice-subtleties.html).  Since
 you're writing a filter splice, you need to call stopRecursion.  But if you
 do that, then the child <arg/> tag won't be processed.  So what you need to
 do is use the runChildren function to process the child nodes, then return
 them in whatever your constructed node is.

 I think the easiest solution to your problem is to not write it as a filter
 splice.  Bind your testSplice function to the <mytest> tag and return a
 <test> tag.  This avoids the infinite recursion and will work the way you
 want without needing stopRecursion.
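
To make that suggestion concrete, here is a rough sketch of my own (the
splice name is illustrative, elementTag is assumed to be the Text.XmlHtml
record field, and the "test" literal assumes OverloadedStrings):

    -- bind this splice to the <mytest> tag; it emits a <test> element,
    -- so normal recursive processing substitutes ${arg} in the children
    myTestSplice :: Splice IO
    myTestSplice = do
        node <- getParamNode
        return [node { elementTag = "test" }]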

 On Thu, Sep 20, 2012 at 3:00 PM, Sebastian Fischer m...@sebfisch.de wrote:

 Hello,

 the following program demonstrates that arguments in Heist templates
 are sometimes not substituted in presence of splices:

 {-# LANGUAGE OverloadedStrings #-}

 import           Blaze.ByteString.Builder (toByteString)
 import qualified Data.ByteString.Char8    as BS
 import           Data.Functor             ((<$>))
 import           Data.Maybe               (fromJust)
 import           Text.Templating.Heist

 -- just return input node unchanged
 testSplice :: Splice IO
 testSplice = (:[]) <$> getParamNode

 main = do
     writeFile "test.tpl" "<arg /><test attr='${arg}'><arg /></test>"
     state <- either error id <$> loadTemplates "." defaultHeistState

     (builder,_) <- fromJust <$> renderWithArgs [("arg","42")] state "test"
     BS.putStrLn $ toByteString builder
     -- 42<test attr='42'>42</test>

     let state' = bindSplices [("test",testSplice)] state
     (builder',_) <- fromJust <$> renderWithArgs [("arg","42")] state' "test"
     BS.putStrLn $ toByteString builder'
     -- 42<test attr='42'><arg></arg></test>

 Without using splices, all occurrences of 'arg' in the template are
 substituted. When using a splice, 'arg' is not substituted underneath
 the input node of the splice. It is substituted in an attribute of the
 input node.

[Haskell-cafe] [Snap] Argument Substitution in Heist Templates with Splices

2012-09-20 Thread Sebastian Fischer
Hello,

the following program demonstrates that arguments in Heist templates
are sometimes not substituted in presence of splices:

{-# LANGUAGE OverloadedStrings #-}

import           Blaze.ByteString.Builder (toByteString)
import qualified Data.ByteString.Char8    as BS
import           Data.Functor             ((<$>))
import           Data.Maybe               (fromJust)
import           Text.Templating.Heist

-- just return input node unchanged
testSplice :: Splice IO
testSplice = (:[]) <$> getParamNode

main = do
    writeFile "test.tpl" "<arg /><test attr='${arg}'><arg /></test>"
    state <- either error id <$> loadTemplates "." defaultHeistState

    (builder,_) <- fromJust <$> renderWithArgs [("arg","42")] state "test"
    BS.putStrLn $ toByteString builder
    -- 42<test attr='42'>42</test>

    let state' = bindSplices [("test",testSplice)] state
    (builder',_) <- fromJust <$> renderWithArgs [("arg","42")] state' "test"
    BS.putStrLn $ toByteString builder'
    -- 42<test attr='42'><arg></arg></test>

Without using splices, all occurrences of 'arg' in the template are
substituted. When using a splice, 'arg' is not substituted underneath
the input node of the splice. It is substituted in an attribute of the
input node.

Is this intentional? How can I ensure substitution also underneath the
input node?

Best,
Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad laws in presence of bottoms

2012-02-20 Thread Sebastian Fischer
On Mon, Feb 20, 2012 at 7:42 PM, Roman Cheplyaka r...@ro-che.info wrote:

 Is there any other interpretation in which the Reader monad obeys the
 laws?


If selective strictness (the  seq  combinator) excluded function
types, the difference between  undefined  and  \_ -> undefined  could not
be observed. This reminds me of the different language levels used by the
free theorem generator [1] and the discussions whether  seq  should have a
type-class constraint..
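
A minimal sketch of the observation, assuming seq is allowed at function
types:

    -- distinguish undefined          diverges
    -- distinguish (\_ -> undefined)  returns True
    distinguish :: (Int -> Int) -> Bool
    distinguish f = f `seq` True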

Sebastian

[1]: http://www-ps.iai.uni-bonn.de/cgi-bin/free-theorems-webui.cgi
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-23 Thread Sebastian Fischer
On Sun, Jan 22, 2012 at 5:25 PM, David Barbour dmbarb...@gmail.com wrote:
 The laws for monads only apply to actual values and combinators of the monad 
 algebra

You seem to argue that, even in a lazy language like Haskell,
equational laws should be considered only for values, as if they were
stated for a total language. This kind of reasoning is called "fast
and loose" in the literature, and the conditions under which it is
justified are established by Danielsson and others:

http://www.cse.chalmers.se/~nad/publications/danielsson-et-al-popl2006.html

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-22 Thread Sebastian Fischer
On Sat, Jan 21, 2012 at 8:09 PM, David Barbour dmbarb...@gmail.com wrote:
 In any case, I think the monad identity concept is messed up. The property:
   return x >>= f  =  f x

 Logically only has meaning when `>>=` applies to values in the domain.
 `undefined` is not a value in the domain.

 We can define monads - which meet monad laws - even in strict languages.

In strict languages both `return undefined >>= f` and `f undefined`
are observably equivalent to `undefined`, so the law holds.

In a lazy language both sides might be observably different from
`undefined` but need to be consistently so. The point of equational
laws is that one can replace one side with the other without observing
a difference. Your implementation of `StrictT` violates this
principle.
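
A small illustration for the Maybe monad (my own example): both sides of
the left identity law differ from `undefined`, but they do so consistently:

    lhs, rhs :: Maybe Int
    lhs = return undefined >>= const (Just 42)   -- Just 42
    rhs = const (Just 42) undefined              -- Just 42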

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Avoiding parametric function binding

2012-01-01 Thread Sebastian Fischer
On Sat, Dec 31, 2011 at 4:09 PM, Kevin Quick qu...@sparq.org wrote:


  onVarElem :: forall a . (Show a) => (Maybe a -> String) -> Var -> String
 onVarElem f (V1 x) = f x
 onVarElem f (V2 x) = f x

 main = putStrLn . onVarElem elemStr $ test


 This is probably a better design, but still fails for the same reason:

Couldn't match expected type `Int' with actual type `[Char]'
Expected type: Maybe Int
  Actual type: Maybe String
In the first argument of `f', namely `x'
In the expression: f x


The problem is the scope of the quantification of the type variable 'a'.
You can use higher-rank types (via the Rank2Types or RankNTypes language
extension) to achieve what you want. Change the type of 'onVarElem' to

onVarElem :: (forall a . (Show a) => Maybe a -> String) -> Var -> String

In contrast to the previous type, this type says that the provided function
must be polymorphic over Show instances 'a', while the previous type would
have allowed any specialization of 'a' with a Show instance.

With the previous type, the call

onVarElem (maybe show (show . not))

would have been legal, with the new type it is not, which is necessary for
the implementation to be well typed.
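
Putting it together, a self-contained sketch (the Var type below is my
reconstruction from the error message, not the original code):

    {-# LANGUAGE RankNTypes #-}

    data Var = V1 (Maybe Int) | V2 (Maybe String)

    onVarElem :: (forall a . (Show a) => Maybe a -> String) -> Var -> String
    onVarElem f (V1 x) = f x
    onVarElem f (V2 x) = f x

    elemStr :: Show a => Maybe a -> String
    elemStr = maybe "none" show

    main :: IO ()
    main = putStrLn (onVarElem elemStr (V1 (Just 3)))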

If you don't want to use a language extension that allows you to require
polymorphism on function arguments, then Stephen's solution is a good
workaround.

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] space-efficient, composable list transformers [was: Re: Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]]

2011-12-28 Thread Sebastian Fischer
Hello Heinrich,

On Tue, Dec 27, 2011 at 1:09 PM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 Sebastian Fischer wrote:

 all functions defined in terms of `ListTo` and `interpret`
 are spine strict - they return a result only after consuming all input
 list constructors.

 Indeed, the trouble is that my original formulation cannot return a result
 before it has evaluated all the case expressions. To include laziness, we
 need a way to return results early.

 Sebastian's  ListTransformer  type does precisely that for the special
 case of lists as results,


Hmm, I think it does more than that.. (see below)

but it turns out that there is also a completely generic way of returning
 results early. In particular, we can leverage lazy evaluation for the
 result type.


This is nice! It would be cool if we could get the benefits of ListConsumer
and ListTransformer in a single data type.

I know that you chose these names to avoid confusion, but I would like to
 advertise again the idea of choosing the *same* names for the constructors
 as the combinators they represent [...] This technique for designing data
 structures has the huge advantage that it's immediately clear how to
 interpret it and which laws are supposed to hold.


I also like your names better, although they suggest that there is a single
possible interpretation function. Even at the expense of blinding eyes to
the possibility of other interpretation functions, I agree that it makes
things clearer to use names from a *motivating* interpretation. In
hindsight, my names for the constructors of ListTransformer seem to be
inspired by operations on handles. So, `Cut` should have been named `Close`
instead..


 Especially in the case of lists, I think that this approach clears up a
 lot of confusion about seemingly new concepts like Iteratees and so on.


I share the discomfort with seemingly alien concepts and agree that clarity
of exposition is crucial, both for the meaning of defined combinators and
their implementation. We should aim at combinators that people are already
familiar with, either because they are commonplace (like id, (.), or fmap)
or because they are used by many other libraries (like the Applicative
combinators).

A good way to explain the meaning of the combinators is via the meaning of
the same combinators on a familiar type. Your interpretation function is a
type-class morphism from `ListTo a b` to `[a] -> b`. For Functor we have:

interpret (fmap f a)  =  fmap f (interpret a)

On the left side, we use `fmap` for `ListTo a`, on the right side for `((->)
l)`. Similarly, we have the following properties for the corresponding
Applicative instances:

interpret (pure x)  =  pure x
interpret (a <*> b)  =  interpret a <*> interpret b

Such morphism properties simplify how to think about programs a lot,
because one can think about programs as if they were written in the
*meaning* type without knowing anything about the *implementation* type.
The computed results are the same but they are computed more efficiently.

Your `ListTo` type achieves space efficiency for Applicative composition of
list functions by executing them in lock-step. Because of the additional
laziness provided by the `Fmap` constructor, compositions like

interpret a . interpret b

can also be executed in constant space. However, we cannot use the space
efficient Applicative combinators again to form parallel compositions of
sequential ones because we are already in the meaning type.

We could implement composition for the `ListTo` type as follows

(.) :: ListTo b c -> ListTo a [b] -> ListTo a c
a . b = interpret a <$> b

But if we use a result of this function as argument of <*>, then the
advantage of using `ListTo` is lost. While

interpret ((,) <$> andL <*> andL)

runs in constant space,

interpret ((,) <$> (andL . idL) <*> (andL . idL))

does not.

The ListTransformer type supports composition in lock-step via a category
instance. The meaning of `ListTransformer a b` is `[a] - [b]` with the
additional restriction that all functions `f` in the image of the
interpretation function are incremental:

xs `isPrefixOf` ys  ==>  f xs `isPrefixOf` f ys

Composition as implemented in the ListTransformer type satisfies morphism
properties for the category instance:

transformList id  =  id
transformList (a . b)  =  transformList a . transformList b

As it is implemented on the ListTransformer type directly (without using
the interpretation function), it can be combined with the Applicative
instance for parallel composition without losing space efficiency.

The Applicative instance for `ListTransformer` is different from the
Applicative instance for `ListTo` (or `ListConsumer`). While

interpret ((,) <$> idL <*> idL)

is of type `[a] -> ([a],[a])`

transformList ((,) <$> idL <*> idL)

is of type `[a] -> [(a,a)]`. We could achieve the latter behaviour with the
former instance by using an additional fmap. But

uncurry zip <$> ((,) <$> idL <*> idL

Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-27 Thread Sebastian Fischer
On Tue, Dec 27, 2011 at 5:35 AM, Eugene Kirpichov ekirpic...@gmail.com wrote:

  I wonder if now this datatype of yours is isomorphic to StreamSummary b
 r -> StreamSummary a r.

 Not sure what you mean here. StreamSummary seems to be the same as
 ListConsumer but I don't see how functions from consumers to consumers are
 list transformers, i.e., functions from lists to lists.

 Well. They are isomorphic, if list transformers are represented as
 functions from lists. I'm assuming they could be with the other
 representation too.

 type ListT a b = forall r . ([b] -> r) -> ([a] -> r)


I see! I think the type

type ContListTransformer a b = forall r . ListConsumer b r ->
ListConsumer a r

is isomorphic to `ListConsumer a [b]`. Here are the isomorphisms (I did not
check whether they are indeed isomorphisms):

clt2lc :: ContListTransformer a b -> ListConsumer a [b]
clt2lc clt = clt idC

lc2clt :: ListConsumer a [b] -> ContListTransformer a b
lc2clt _               (Done r)       = Done r
lc2clt (Done [])       (Continue r _) = Done r
lc2clt (Done (b:bs))   (Continue _ f) = lc2clt (Done bs) (f b)
lc2clt (Continue bs f) c              =
  Continue (consumeList c bs) (\a -> lc2clt (f a) c)

However, `ListTransformer a b` is less expressive because of it's
incremental nature. Every list transformer `t` satisfies the following
property for all `xs` and `ys`:

transformList t xs `isPrefixOf` transformList t (xs++ys)

List *consumers* don't need to follow this restriction. For example, the
consumer

Continue [1] (\_ -> Done [])

which represents the function

nonIncr [] = [1]
nonIncr _  = []

is not incremental in the sense above, because

not (nonIncr [] `isPrefixOf` nonIncr ([]++[0]))

I think it is the incremental nature of list transformers that allows them
to be composed in lock-step in the Category instance. `lc2clt` above is
sequential composition for list *consumers* but it applies the second
consumer only after executing the first completely.

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-26 Thread Sebastian Fischer
On Sun, Dec 25, 2011 at 11:25 AM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 Your  StreamSummary  type has a really nice interpretation: it's a
 reification of  case  expressions [on lists].


nice observation!


 For instance, consider the following simple function from lists to integers

 length :: [a] -> Int
 length xs = case xs of
     []     -> 0
     (y:ys) -> 1 + length ys

 We want to reify the case expression as constructor of a data type. [...]

data ListTo a r = CaseOf r (a -> ListTo a r)

interpret :: ListTo a r -> ([a] -> r)
interpret (CaseOf nil cons) xs =
    case xs of
        []     -> nil
        (y:ys) -> interpret (cons y) ys

 [...]

 Likewise, each function from lists can be represented in terms of our new
 data type [...]

length' :: ListTo a Int
length' = CaseOf
    (0)
    (\x -> fmap (1+) length')

length = interpret length'


This version of `length` is tail recursive while the previous version is
not. In general, all functions defined in terms of `ListTo` and `interpret`
are spine strict - they return a result only after consuming all input list
constructors.

This is what Eugene observed when defining the identity function as

idC = CaseOf [] (\x -> (x:) <$> idC)

This version does not work for infinite lists. Similarly, `head` and `take`
cannot be defined as lazily as in the standard libraries.

We can support lazier list consumers by adding a case to the ListTo type
that allows to stop consuming the list. To avoid confusion, I chose new
names for my new types.

data ListConsumer a b
  = Done !b
  | Continue !b (a -> ListConsumer a b)

The interpretation function just ignores the remaining input in the case of
`Done`:

consumeList :: ListConsumer a b -> [a] -> b
consumeList (Done b)       _      = b
consumeList (Continue b _) []     = b
consumeList (Continue _ f) (x:xs) = consumeList (f x) xs
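
The definition of `takeC` below uses <$> on list consumers; a sketch of the
Functor instance that makes this work (my reconstruction, not code from the
original message):

    instance Functor (ListConsumer a) where
      fmap g (Done b)       = Done (g b)
      fmap g (Continue b f) = Continue (g b) (fmap g . f)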

We can define lazier versions of `head` and `take` as follows:

headC :: ListConsumer a a
headC = Continue (error "head of empty list") Done

takeC :: Int -> ListConsumer a [a]
takeC 0 = Done []
takeC n = Continue [] (\x -> (x:) <$> takeC (n-1))

However, we still cannot define a lazy version of the identity function with
list consumers.

The identity function and `takeC` belong to a special case of a list
consumers because they also *produce* lists. We can define a specialized
type for list transformers that consume and produce lists. One advantage of
this specialization will be that we can define a lazy version of the
identity function. The transformer type can have functor and applicative
instances similar to the consumer type to compose transformers in parallel.
Additionally, it can have category and arrow instances to compose
transformers sequentially.

Here is a type for lazy list transformers:

data ListTransformer a b
  = Cut
  | Put b (ListTransformer a b)
  | Get (a -> ListTransformer a b)

A transformer can either cut off the input list and return the empty list,
output a new element before transforming the input, or consume one element
from the input list and transform the remaining elements. The
interpretation function should make this clearer:

transformList :: ListTransformer a b -> [a] -> [b]
transformList Cut       _      = []
transformList (Put b t) xs     = b : transformList t xs
transformList (Get _)   []     = []
transformList (Get f)   (x:xs) = transformList (f x) xs

Note that, if the transformer wants to read another element that is not
there, it simply returns the empty list.

Now we can define a lazy identity function and another version of `take`:

idT :: ListTransformer a a
idT = Get (\x -> Put x idT)

takeT :: Int -> ListTransformer a a
takeT 0 = Cut
takeT n = Get (\x -> Put x (takeT (n-1)))

Here is another translation of a common list function:

filterT :: (a -> Bool) -> ListTransformer a a
filterT p = Get (\x -> if p x then Put x (filterT p) else filterT p)

`filterT` is an example for a function that can consume several input
elements before producing an output element. Other examples for functions
of this kind are chunking functions:

pairsT :: ListTransformer a (a,a)
pairsT = Get (\x -> Get (\y -> Put (x,y) pairsT))

chunks :: Int -> ListTransformer a [a]
chunks n = collect n
 where
  collect 0 = Put [] (chunks n)
  collect m = Get (\x -> collect (m-1) >>> Get (\xs -> Put (x:xs) id))
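
The definition of `chunks` above already relies on sequential composition
((>>>) and the Category id). As a rough sketch of my own (not necessarily
the implementation used in the thread), the lock-step Category instance
could look like this:

    import Prelude hiding (id, (.))
    import Control.Category

    instance Category ListTransformer where
      id = Get (\x -> Put x id)

      Cut     . _       = Cut
      Put b t . u       = Put b (t . u)
      Get f   . Put b u = f b . u
      Get f   . Get g   = Get (\a -> Get f . g a)
      Get _   . Cut     = Cut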

Here are some example calls in GHCi that demonstrate the category and
applicative instances for sequential and parallel composition (see below
for a link to the complete source code):

ghci> transformList pairsT [1..5]
[(1,2),(3,4)]        -- 5 is ignored
ghci> transformList pairsT [1..6]
[(1,2),(3,4),(5,6)]
ghci> transformList (chunks 2) [1..5]
[[1,2],[3,4]]        -- similar to pairsT
ghci> transformList (chunks 3) [1..6]
 

Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-26 Thread Sebastian Fischer
2011/12/26 Eugene Kirpichov ekirpic...@gmail.com

 Whoa. Sebastian, you're my hero — I've been struggling with defining Arrow
 for ListTransformer for a substantial time without success, and here you
 got it, dramatically simpler than I thought it could be done (I was using
 explicit queues).


This stuff is tricky. I noticed that my Applicative instance did not
satisfy all required laws. I think I could fix this by changing the
implementation of pure to

pure x = Put x $ pure x

in analogy to the ZipList instance. At least, QuickCheck does not complain
anymore (I did not write proofs).

The original definition of `pure` was inspired by Chris Smith's post on the
connection of Category/Applicative and Arrow:


http://cdsmith.wordpress.com/2011/08/13/arrow-category-applicative-part-iia/

However, even with the fixed Applicative instance, the Arrow instance does
not satisfy all laws. ListTransformer seems to be a type that has valid
Category and Applicative instances which do not give rise to a valid Arrow
instance as outlined by Chris. One of his additional axioms relating
Category and Applicative does not hold.

I have extended the (corrected) code with QuickCheck tests:

https://gist.github.com/1521467

I wonder if now this datatype of yours is isomorphic to StreamSummary b r
 -> StreamSummary a r.


Not sure what you mean here. StreamSummary seems to be the same as
ListConsumer but I don't see how functions from consumers to consumers are
list transformers, i.e., functions from lists to lists.

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Alternative] summary of my understanding so far

2011-12-18 Thread Sebastian Fischer
On Thu, Dec 15, 2011 at 9:13 AM, Gregory Crosswhite
gcrosswh...@gmail.com wrote:

 To quote Ross Paterson's proposals:

 instance Alternative [] where
    ...
    some [] = []
    some (x:xs) = repeat (repeat x)

    many [] = [[]]
    many (x:xs) = repeat (repeat x)


Isn't this instance conflating the ZipList notion and the nondeterministic
list notion?


 • some v = (:) <$> v <*> many v
 • many v = some v <|> pure []


Is there a motivation for writing the second law like this and not like

many v = pure [] <|> some v

other than "parsers should try longest match first"? Because apart from
that, I find the version with flipped arguments to <|> more natural (base
case first). Incidentally, it would lead to terminating definitions of
'some' and 'many' for lists:

ghci> take 5 . map (take 5) $ some [1,2]
[[1],[1,1],[1,1,1],[1,1,1,1],[1,1,1,1,1]]
ghci> take 5 . map (take 5) $ many [1,2]
[[],[1],[1,1],[1,1,1],[1,1,1,1]]
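
For concreteness, a direct transcription of the flipped definitions
specialised to lists (my own sketch, matching the session above):

    import Control.Applicative ((<|>))

    someL, manyL :: [a] -> [[a]]
    someL v = (:) <$> v <*> manyL v
    manyL v = pure [] <|> someL v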

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] extending and reusing cmdargs option specs ?

2011-09-13 Thread Sebastian Fischer
Hi Simon,

On Tue, Sep 13, 2011 at 12:13 AM, Simon Michael si...@joyful.com wrote:
 Is that because of `&= auto`?

I'm not sure. The feature was added in version 0.2 and is described in
issue 333:

http://code.google.com/p/ndmitchell/issues/detail?id=333

The description does not mention `&= auto`.

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] extending and reusing cmdargs option specs ?

2011-09-12 Thread Sebastian Fischer
Hi Simon,

while it is not possible to reuse the definitions of common fields
themselves, their *descriptions* need to be given only once. Not sure
if you are already sharing descriptions or if it helps you saving a
few more lines. See

https://github.com/sebfisch/haskell-barchart/blob/v0.1.1.1/src/barchart.hs

for an example of different modes that share most but not all of their
options. IIRC, it works because in the list of exec modes later items
inherit from previous items what they do not define themselves.

Sebastian

On Tue, Aug 9, 2011 at 2:59 AM, Simon Michael si...@joyful.com wrote:
 Hi Neil,

 I just spent a day converting hledger from getopt to cmdargs. cmdargs feels
 more high level and featureful and nicer. And yet... I haven't reduced the
 line count that much - nothing like your HLint 3:1 ratio. And, I may have
 made things worse for myself in the reuse/avoiding boilerplate department:
 I'm not sure how to reuse a cmdargs options data structure, extending it
 with a few more options. Using getopt I was able to do this without
 repeating myself (as long as I defined the full set of Opt constructors in
 one place.) I've looked at the more advanced cmdargs api, but don't see a
 way yet - would you have any ideas ?

 I want this because I have multiple executables (hledger, hledger-vty,
 hledger-web etc.) which should share most (but not all) options. Also, I'd
 like to move a generic subset of report options, without the ui-specific
 ones, into hledger-lib for use by all apps.

 Also, the hledger executable has multiple commands, so I'd like to define a
 mode for each, but not have to redeclare all the same options for each mode
 - I didn't see how to do that.

 As always, thanks a lot for cmdargs!
 -Simon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-05 Thread Sebastian Fischer
Hi Max,

thanks for your proposal!

Using the Applicative methods to optimise do desugaring is still
 possible, it's just not that easy to have that weaken the generated
 constraint from Monad to Applicative since only degenerate programs
 like this one won't use a Monad method:


Is this still true, once Monad is a subclass of Applicative which defines
return?

I'd still somewhat prefer if return gets merged with the preceding
statement so that sometimes only a Functor constraint is generated, but I
think I should adjust your desugaring then..

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-05 Thread Sebastian Fischer
Hi again,

I think the following rules capture what Max's program does if applied after
the usual desugaring of do-notation:

a >>= \p -> return b
 -->
(\p -> b) <$> a

a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
 -->
((\p -> f) <$> a) <*> b

a >>= \p -> f <$> b -- 'free p' and 'free f' disjoint
 -->
f <$> (a >>= \p -> b)

a >>= \p -> b <*> c -- 'free p' and 'free c' disjoint
 -->
(a >>= \p -> b) <*> c

a >>= \p -> b >>= \q -> c -- 'free p' and 'free b' disjoint
 -->
liftA2 (,) a b >>= \(p,q) -> c

a >>= \p -> b >> c -- 'free p' and 'free b' disjoint
 -->
(a << b) >>= \p -> c

The second and third rules overlap and should be applied in this order.
'free' gives all free variables of a pattern 'p' or an expression
'a','b','c', or 'f'.

If return, <<, and >> are defined in Applicative, I think the rules also
achieve the minimal necessary class constraint for Thomas's examples that do
not involve aliasing of return.
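
As a worked example of my own, applying rule 5 and then rule 1 turns the
monadic pairing below into an Applicative-only version:

    import Control.Applicative (liftA2)

    pairM :: Monad m => m a -> m b -> m (a, b)
    pairM a b = a >>= \p -> b >>= \q -> return (p, q)

    pairA :: Applicative m => m a -> m b -> m (a, b)
    pairA a b = (\(p, q) -> (p, q)) <$> liftA2 (,) a b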

Sebastian

On Mon, Sep 5, 2011 at 5:37 PM, Sebastian Fischer fisc...@nii.ac.jp wrote:

 Hi Max,

 thanks for your proposal!

 Using the Applicative methods to optimise do desugaring is still
 possible, it's just not that easy to have that weaken the generated
 constraint from Monad to Applicative since only degenerate programs
 like this one won't use a Monad method:


 Is this still true, once Monad is a subclass of Applicative which defines
 return?

  I'd still somewhat prefer if return gets merged with the preceding
  statement so that sometimes only a Functor constraint is generated, but I
  think I should adjust your desugaring then..

 Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-05 Thread Sebastian Fischer
On Mon, Sep 5, 2011 at 10:19 PM, Thomas Schilling
nomin...@googlemail.com wrote:

 a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
  -->
 ((\p -> f) <$> a) <*> b


 Will there also be an optimisation for some sort of simple patterns?  I.e.,
 where we could rewrite this to:

   liftA2 (\pa pb -> f ...) a b

 I think I remember someone saying that the one-at-a-time application of <*>
 inhibits certain optimisations.


liftA2 is defined via one-at-a-time application and cannot be redefined
because it is not a method of Applicative. Do you remember more details?


 I find (a << b) confusing.  The intended semantics seem to be "effect a,
 then effect b, return result of a".


Sorry, I didn't know that << doesn't exist. I meant an operator with the
meaning of <* .

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-04 Thread Sebastian Fischer
On Sun, Sep 4, 2011 at 11:34 AM, Daniel Peebles pumpkin...@gmail.com wrote:
 I was wondering what people thought of a smarter do notation.

I'd support it (for both do notation and monad comprehensions) once
Applicative is a superclass of Monad.

To me it looks like a slight complication for an advantage. Parsers
are not the only examples that benefit. Implicitly parallel
computations would be another because the arguments of <*> can be
evaluated in parallel, those of >>= cannot.

I think it's quite reasonable to try to desugar into the most general
form. Something like

do x - something
   return (bla x)

could (and, I think, should) be desugared by using only Functor.

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-04 Thread Sebastian Fischer
These are important questions. I think there is a trade-off between
supporting many cases and having a simple desugaring. We should find a
sweet-spot where the desugaring is reasonably simple and covers most
idiomatic cases.

So I guess it's possible to detect the pattern:

   do x1 <- foo1; ...; xN <- fooN; [res <-] return (f {x1..xN})

 where {x1..xN} means x1..xN in some order


Your third example shows that it's beneficial to also support duplicated
variables.


 and turn it into:

   do [res <-] (\x1..xN -> f {x1..xN}) <$> foo1 <*> ... <*> fooN


I think this is a reasonably simple rule and it covers most idiomatic cases.
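
A small illustration of the rule (my own example, not from the mail): the
two definitions below are equivalent, but the second only needs an
Applicative constraint:

    sugared :: Monad m => m a -> m b -> m (a, b)
    sugared ma mb = do
        x <- ma
        y <- mb
        return (x, y)

    desugared :: Applicative m => m a -> m b -> m (a, b)
    desugared ma mb = (\x y -> (x, y)) <$> ma <*> mb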


 Open issues would be detection of the correct return-like thing.


I'm not sure how much return aliasing is worth supporting. In general it is
undecidable but we could add special cases for specialized returns (like
'Just' instead of 'return') depending on how difficult it is to implement.


 The current desugaring of do-notation is very simple because it doesn't
 even need to know about the monad laws.


Could you point out which monad law your proposed desugaring requires?


 They are used implicitly by the optimiser (e.g., foo >>= \x -> return x
 is optimised to just foo after inlining), but the desugarer doesn't need
 to know about them.


Does the inliner have RULES for monad laws or why would this work?

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Superclass defaults

2011-08-31 Thread Sebastian Fischer
On Wed, Aug 31, 2011 at 4:21 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 There seems to be a lot of support for Option 3... but what about Option 2 
 (ie pre-empt but give a warning)?

I notice that the idea to only issue a warning if the explicit and
implicit instances are in different modules was already halfway
towards reaching option 2.

I think it is fine to issue warnings also if both instances are in the
same module. For newly written code, an explicit hiding clause removes
context dependence (as Conor points out) so the warning is justified.
For existing code where it generates too much noise the warning could
be switched off selectively until the code gets cleaned up.

Sebastian

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Pointed, but not Applicative

2011-08-30 Thread Sebastian Fischer
On Wed, Aug 31, 2011 at 6:13 AM, Ryan Ingram ryani.s...@gmail.com wrote:
 technically it violates 'fmap id' == 'id' [...]

 If you add this FList law, though, you're OK:

 runFList fl as = runFList fl [] ++ as

I think the idea of functional lists is that the monoids of 'lists'
and 'functions on lists' are isomorphic with isomorphisms toFList and
toList:

toFList [] = id
toFList (xs++ys) = toFList xs . toFList ys

toList id = []
toList (f . g) = toList f ++ toList g

These can be defined as:

toFList = (++)
toList = ($[])

Eliding newtypes, runFList is just the identity function so we need to check

f l = toList f ++ l

to verify your law. This is the same as

f = toFList (toList f)

which (once we know that toList and toFList are isomorphisms) does
indeed hold because:

toFList . toList = id
toList . toFList = id

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pointed, but not Applicative

2011-08-30 Thread Sebastian Fischer
    toFList [] = id
    toFList (xs++ys) = toFList xs . toFList ys

    toList id = []
    toList (f . g) = toList f ++ toList g

These laws do not *define* the isomorphisms because their behavior on
singletons is not fixed. Combining them with laws using a 'point'
function for functional lists

point = (:)

the isomorphisms are characterized uniquely:

toFList [x] = point x
toList (point x) = [x]

This might be an argument in favor of a Pointed class without Functor
constraint as it shows that other pointed structures have reasonable
laws involving 'point' as well.

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Superclass defaults

2011-08-29 Thread Sebastian Fischer
On Mon, Aug 29, 2011 at 6:21 AM, Bas van Dijk v.dijk@gmail.com wrote:

 Won't option 1 ("Reject this as a duplicate instance declaration, which
 indeed it is.") conflict with design goal 1: a class C can be
 re-factored into a class C with a superclass S, without disturbing any
 clients?

If yes, I prefer option 3:

 Allow the explicit to supersede the intrinsic default silently.

The argument against this option is:

 I might notice
 that Foo is a monad and add a Monad Foo instance in my own code,
 expecting the Applicative Foo instance to be generated in concert; to
 my horror, I find my code has subtle bugs because the package
 introduced a different, non-monadic, Applicative Foo instance which
 I'm accidentally using instead.

This seems rare enough that it's feasible to issue a warning if a
default instance is overridden by an explicit instance in a different
module, which would prevent the described perplexity. This wouldn't,
for example, disturb the transformers example mentioned by Bas because
(I think) all instances are defined in the same module.

Sebastian

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Pointed, but not Applicative

2011-08-28 Thread Sebastian Fischer
On Sun, Aug 28, 2011 at 12:41 AM, Sönke Hahn sh...@cs.tu-berlin.de wrote:
 I was wondering which
 type could be an instance of Pointed, but not of Applicative. But I can't
 think of one. Any ideas?

Functional lists:

type FList a = [a] -> [a]

they have a Monoid instance for empty and append, a point function
for singletons, but Applicative or Monad cannot be defined without
converting back and forth to ordinary lists.
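
A minimal sketch of such a functional-list type (using a newtype so the
instances can be written down; current GHC also wants a Semigroup instance):

    newtype FList a = FList { runFList :: [a] -> [a] }

    instance Semigroup (FList a) where
      FList f <> FList g = FList (f . g)

    instance Monoid (FList a) where
      mempty = FList id

    point :: a -> FList a
    point x = FList (x:)

    toList :: FList a -> [a]
    toList f = runFList f []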

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pointed, but not Applicative

2011-08-28 Thread Sebastian Fischer
On Mon, Aug 29, 2011 at 12:24 PM, Maciej Marcin Piechotka
uzytkown...@gmail.com wrote:

 instance Functor FList where
    f `fmap` FList g = ...?

Yes, Functor is also one of the classes that can only be implemented
by converting to ordinary lists (I think).

So FList could only be made an instance of Pointed without (certain)
superclass constraints.

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Prototype of 'docase' notation

2011-08-26 Thread Sebastian Fischer
This is interesting.

I think using a slightly different notation would avoid confusion with
matching on tuples. Why not just write

docase a,b,c of

instead of

docase (a,b,c) of

?

Sebastian

On Thu, Aug 25, 2011 at 9:48 PM, Tomas Petricek
tomas.petri...@gmail.com wrote:
 Hello!
 at the Cambridge Hackathon, I started implementing an extension for GHC that
 adds the 'docase' notation. The notation is syntactic sugar that makes
 it easier to write code that works with monads that have three additional
 operations (parallel composition, choice and aliasing). Such monads include
 various monads for parallel and concurrent programming (Par monad, CHP and
 Orc) as well as other monads such as parsers.

 The proposal is in details discussed in our Haskell Symposium paper:
 http://www.cl.cam.ac.uk/~tp322/papers/docase.html (the page also contains
 link to earlier pre-processor based prototype with a couple of more
 examples). I created a GHC Trac ticket to track the discussion about the
 feature: http://hackage.haskell.org/trac/ghc/ticket/5429. A prototype
 version of the GHC extension is available here:
 http://github.com/tpetricek/Haskell.Extensions/commits/.

 I'd love to hear some feedback about the feature and the extension:

 * Do you think this is a useful extension and do you know about any monads
 that could benefit from it?
 * What is the best syntax for the feature? (The notation intentionally
 resembles 'case' pattern matching, but there are some differences)
 * The feature adds some aspects of recently added monad comprehensions using
 do-style syntax, but both 'docase' and monad comprehensions have features
 that cannot be written using the other. Is that a good design, or should
 there be a closer correspondence?
 * Can anybody review the prototype implementation? (I believe it is a
 reasonably modular change, so it should not affect the rest of the language,
 but is there anything missing/incorrect?)

 Thanks!
 Tomas


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] a minor bug (memory leak) in ListLike package

2011-08-26 Thread Sebastian Fischer
On Wed, Aug 24, 2011 at 3:47 PM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 I was just trying to remember some of the tricks Daniel Peebles (aka
 {co}pumpkin) used to do in #haskell with Data.List.genericLength.
 I've never really used ListLike, but was just trying to guess why the
 default implementation was as it is.

Maybe he used lazy implementations of Num and Ord in combination with
a definition like

length [] = 0
length (_:xs) = 1 + length xs

But as John observed, the accumulating version of length does not
provide such laziness and the accumulator might as well be made
strict.
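
A small sketch (my own) of the lazy-length trick alluded to above, using
lazy Peano naturals; a strict accumulator gives up exactly this behaviour:

    data Nat = Z | S Nat

    lazyLength :: [a] -> Nat
    lazyLength []     = Z
    lazyLength (_:xs) = S (lazyLength xs)

    lessThan :: Int -> Nat -> Bool
    lessThan _ Z = False
    lessThan n (S m)
      | n <= 0    = True
      | otherwise = lessThan (n - 1) m

    -- lessThan 3 (lazyLength [1..])  ==>  True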

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a minor bug (memory leak) in ListLike package

2011-08-23 Thread Sebastian Fischer
On Wed, Aug 24, 2011 at 10:47 AM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 24 August 2011 11:10, bob zhang bobzhang1...@gmail.com wrote:
 Hi, John, there is a space leak problem in ListLike typeclass,
 in the method genericLength
 calclen !accum cl =
 calclen accum cl =

 I _think_ this may cause problems with some data types (e.g.
 http://hackage.haskell.org/packages/archive/numbers/2009.8.9/doc/html/Data-Number-Natural.html
 ) that require the extra laziness (that is, you can do things like ` 3
  < genericLength [1..] ' and have it return True).

Does the current version support this? The use of an accumulator (that
is presumably returned after consuming the complete input) seems to
suggest that your example would diverge anyway (but I did not try).

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: TKYProf

2011-08-16 Thread Sebastian Fischer

 I'm glad to announce the alpha release of TKYProf.


This looks useful, thanks! I'll try it out and let you know if I have
problems.

Installing with GHC 7.2, I needed to relax some upper bounds in cabal files
of dependencies (maintainers CC'ed).

  - email-validate and ranges specify base < 4.4 but also seem to work with
base < 5.
  - yesod-json specifies blaze-textual < 0.2 but also seems to work with
blaze-textual < 0.3

Additionally, I linked /usr/lib/libstdc++.so.6 to /usr/lib/libstdc++.so
before I could successfully install tkyprof. Not sure about the
consequences..

Cheers,
Sebastian

 http://hackage.haskell.org/package/tkyprof
  https://github.com/maoe/tkyprof

 TKYprof is a web-based interactive visualizer for GHC time and allocation
 profiling reports. It helps you to find the bottlenecks in your code
 quickly!

 Here is a blog post:

   http://blog.foldr.in/tkyprof-a-web-based-interactive-visualizer-fo

 It is still alpha and it has some bugs. I'm happy to hear your feedback.

 Thanks,

 --
 Mitsutoshi Aoe
 m...@foldr.in





 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strictness properties of monoidal folds

2011-08-14 Thread Sebastian Fischer
Hello Alexey,

sorry for my slow response.

On Thu, Aug 4, 2011 at 7:10 AM, Alexey Khudyakov
alexey.sklad...@gmail.com wrote:

 On 02.08.2011 08:16, Sebastian Fischer wrote:

 Data.Foldable also provides the monoidal fold function foldMap. It is
 left unspecified whether the elements are accumulated leftwards,
 rightwards or in some other way, which is possible because the combining
 function is required to be associative. Does this additional freedom for
 implementors go so far as to allow for strict accumulation?

  Left and right folds behave identically for finite structures but they
 are different for infinite ones.


I agree that for types like normal lists that allow infinite structure it
makes sense to have different folds like foldr and foldl that are explicit
about the nesting of the combining function.

I don't think that there are laws for foldMap that require it to work for
infinite lists. One could even argue that the monoid laws imply that the
results for folding leftwards and rightwards should be equal, that is
undefined..

For types that only allow finite sequences (like Data.Sequence or
Data.Vector) this is not an issue. But you convinced me that a strict and
lazy foldMap can be distinguished even for list types that have a strict
append function by using a lazy mappend function for accumulation.

Best regards,
Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building ? using kleene closure {not haskell specific}

2011-08-12 Thread Sebastian Fischer
 I can easily understand how + can be built but am having trouble with
 building ? (zero or one).

If there is a regular expression e for the empty word, one can define ? as

a? = e | a

If there is a regular expression o that never matches one can define e as

e = o*

If there are character classes one can define o as

o = []
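
Written out as a tiny data type, the construction above looks like this
(my own sketch):

    data Regex c
      = Never                   -- matches nothing          (o)
      | Sym c
      | Alt  (Regex c) (Regex c)
      | Seq  (Regex c) (Regex c)
      | Star (Regex c)

    emptyWord :: Regex c             -- e  = o*
    emptyWord = Star Never

    question :: Regex c -> Regex c   -- a? = e | a
    question a = Alt emptyWord a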

Apart from that, I have no idea..

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: yap-0.0 - yet another prelude

2011-08-11 Thread Sebastian Fischer
[switched to Cafe]

On Wed, Aug 10, 2011 at 11:46 PM, Henning Thielemann
lemm...@henning-thielemann.de wrote:

 On Wed, 10 Aug 2011, Paterson, Ross wrote:

 Yet another restructuring of the Prelude numeric classes on algebraic
 lines, proposed for a revision of the Haskell Prelude:

 http://hackage.haskell.org/package/yap-0.0

 A nice lightweight design, both in terms of the use of type extensions and
 import dependencies, that should encourage people to use it when they are
 afraid of changing to a more radical approach like numeric-prelude. I would
 have preferred the name AdditiveGroup to AbelianGroup, since with '+' and '-'
 and '0' I associate more than just the laws of an Abelian group. The
 multiplicative group of rational numbers is abelian, too.

I'm curious: what laws do you have in mind for '+', '-', and '0' that
do not hold in the multiplicative group of rational numbers with

(+) = (*); (-) = (/); 0 = 1

?

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] weird type signature in Arrow Notation

2011-08-04 Thread Sebastian Fischer
here is a reduced program that still segfaults:

{-# LANGUAGE Arrows #-}

import Control.Arrow

main :: IO ()
main = print segfault

segfault :: [()]
segfault = anythingYouWant ()

anythingYouWant :: a
anythingYouWant = testB False (const ()) ()

testB :: ArrowChoice arrow
      => bool -> arrow () () -> arrow () anything
testB bool arrow =
  proc () ->
    do if bool then arrow -< ()
               else arrow -< ()

Sebastian

On Fri, Aug 5, 2011 at 6:20 AM, Brent Yorgey byor...@seas.upenn.edu wrote:

 On Tue, Aug 02, 2011 at 05:08:33PM -0400, bob zhang wrote:
  hi, all
  testB :: (ArrowChoice t1, Num a1, Num a) => (a -> a1 -> t2) -> t1 a t3
  -> t1 a1 t3 -> t1 (a, a1) t
  testB f g h = proc (x,y) -> do
      if (f x y) then g -< x + 1 else h -< y + 2
 
  it's very strange that the type of _f_ is (a -> a1 -> t2) which I thought
  should be a -> a1 -> Bool,
 
  btw, is there any way to get the output of preprocessing using -XArrow
  extensions,
 
  Thanks a lot
  best, bob

 Congratulations, you have definitely found a GHC bug!  Note there are
 actually two things wrong with testB's type signature: first, t2 ought
 to be Bool, as you note.  But even worse, notice that the return type
 of the result arrow, t, has nothing to do with any of the other types!
 This means that we can use testB along with the (->) instance for
 Arrow to construct elements of arbitrary types:

  ghci> let anythingYouWant = testB (\x y -> False) (const 3) (const 2)
 (2,2)
  ghci> :t anythingYouWant
  anythingYouWant :: t
  ghci> anythingYouWant :: Integer
  2
  ghci> anythingYouWant :: Int
  2
  ghci> anythingYouWant :: Double
  1.0e-323
  ghci> anythingYouWant :: Char
  '\STX'
  ghci> (anythingYouWant :: (Double -> Double) -> [Double]) sqrt
  [
  [1]17391 segmentation fault  ghci

 whoops!

 I'm using GHC 7.0.3, but Daniel Wagner and I also tried it (with the
 same results) on GHC 7.2.0rc1 and GHC HEAD.

 I wasn't able to find a ticket for this on the GHC bug tracker, I
 guess we should file one!

 I tried to find a way to get the output of preprocessing using -XArrow
 but wasn't able to find one (other than -ddump-ds which gives you the
 unoptimized *GHC core* output, which is quite hard to read).

 -Brent

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] weird type signature in Arrow Notation

2011-08-04 Thread Sebastian Fischer
I created a ticket with a slightly further simplified program:
http://hackage.haskell.org/trac/ghc/ticket/5380

On Fri, Aug 5, 2011 at 10:10 AM, Sebastian Fischer fisc...@nii.ac.jp wrote:

 here is a reduced program that still segfaults:

 {-# LANGUAGE Arrows #-}

 import Control.Arrow

 main :: IO ()
 main = print segfault

 segfault :: [()]
 segfault = anythingYouWant ()

 anythingYouWant :: a
 anythingYouWant = testB False (const ()) ()

 testB :: ArrowChoice arrow
       => bool -> arrow () () -> arrow () anything
 testB bool arrow =
   proc () ->
     do if bool then arrow -< ()
                else arrow -< ()

 Sebastian

 On Fri, Aug 5, 2011 at 6:20 AM, Brent Yorgey byor...@seas.upenn.edu wrote:

 On Tue, Aug 02, 2011 at 05:08:33PM -0400, bob zhang wrote:
  hi, all
  testB :: (ArrowChoice t1, Num a1, Num a) => (a -> a1 -> t2) -> t1 a t3
  -> t1 a1 t3 -> t1 (a, a1) t
  testB f g h = proc (x,y) -> do
      if (f x y) then g -< x + 1 else h -< y + 2
 
  it's very strange that the type of _f_ is (a -> a1 -> t2) which I thought
  should be a -> a1 -> Bool,
 
  btw, is there any way to get the output of preprocessing using -XArrow
  extensions,
 
  Thanks a lot
  best, bob

 Congratulations, you have definitely found a GHC bug!  Note there are
 actually two things wrong with testB's type signature: first, t2 ought
 to be Bool, as you note.  But even worse, notice that the return type
 of the result arrow, t, has nothing to do with any of the other types!
 This means that we can use testB along with the (->) instance for
 Arrow to construct elements of arbitrary types:

  ghci> let anythingYouWant = testB (\x y -> False) (const 3) (const 2)
 (2,2)
  ghci> :t anythingYouWant
  anythingYouWant :: t
  ghci> anythingYouWant :: Integer
  2
  ghci> anythingYouWant :: Int
  2
  ghci> anythingYouWant :: Double
  1.0e-323
  ghci> anythingYouWant :: Char
  '\STX'
  ghci> (anythingYouWant :: (Double -> Double) -> [Double]) sqrt
  [
  [1]17391 segmentation fault  ghci

 whoops!

 I'm using GHC 7.0.3, but Daniel Wagner and I also tried it (with the
 same results) on GHC 7.2.0rc1 and GHC HEAD.

 I wasn't able to find a ticket for this on the GHC bug tracker, I
 guess we should file one!

 I tried to find a way to get the output of preprocessing using -XArrow
 but wasn't able to find one (other than -ddump-ds which gives you the
 unoptimized *GHC core* output, which is quite hard to read).

 -Brent

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] strictness properties of monoidal folds

2011-08-01 Thread Sebastian Fischer
Hello Cafe,

left- and rightwards folds come in strict and lazy variants foldl/foldl' and
foldr/foldr', which makes sense because strict versions sometimes use less
stack space while lazy versions support infinite data. For example,

head (foldr (:) [] [1..])

returns in an instant while

head (foldr' (:) [] [1..])

diverges. On the other hand

foldl' (flip (:)) 0 [1..10^9]

runs in constant space while

foldl (flip (:)) 0 [1..10^9]

consumes all memory available on my machine (at least without optimizations.
With optimizations GHC's strictness analyzer seems to be smart enough to
make the accumulator strict.)

Data.Foldable also provides the monoidal fold function foldMap. It is left
unspecified whether the elements are accumulated leftwards, rightwards or in
some other way, which is possible because the combining function is required
to be associative. Does this additional freedom for implementors go so far
as to allow for strict accumulation? Is it safe to implement foldMap
(without prime) with a strict accumulator or are there examples where lazy
accumulation is useful like in the above foldr example and distinguishable
from strict accumulation without breaking the monoid laws?

Sebastian

P.S. I thought the `Last` monoid would be an example that requires a lazy
accumulator but it is not because the `Just` constructor guards list
elements.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Properties for Foldable

2011-07-29 Thread Sebastian Fischer
  http://www.cs.ox.ac.uk/jeremy.gibbons/publications/iterator.pdf

Interesting. However I don't understand why the instance in Section
5.5 is not already forbidden by the purity law

traverse pure = pure

and a 'no duplication' constraint would be necessary. For example:

traverse Id [1,2,3] = Id [1,1,2,2,3,3] /= Id [1,2,3]

What am I missing?

Can we derive Foldable laws from the Traversable laws?

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Properties for Foldable

2011-07-29 Thread Sebastian Fischer
 What am I missing?

 I suspect you missed the use of const

Doh! I completely overlooked that it's about duplication of *effects*.

Thanks,
Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] For class Monoid; better names than mempty mappend might have been: mid (mident) mbinop

2011-07-24 Thread Sebastian Fischer

 because list is a (the?) free monoid.


 Yes, all free monoids are isomorphic (to lists).

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Subcategories on Hackage

2011-06-04 Thread Sebastian Fischer
http://www.shirky.com/writings/ontology_overrated.html

On Sat, Jun 4, 2011 at 10:02 AM, Tillmann Vogt
tillmann.v...@rwth-aachen.de wrote:
 Hi,

 There are some categories on Hackage that have become so large that it is
 hard to find something, i.e. Data (414 packages) and Graphics (191). That's
 why I suggest to use subcategories separated from the category with a dot.
 To show that this makes sense I made subcategories for graphics libraries at
 the end of this email.
 Whatever happens to hackage2 this would be an immediate improvement.

 How
 --
 I would volunteer for the manual categorization and let the community look
 over it.
 I could upload the changes with a script but the version number has to
 increase even if only the category has changed. I also don't want to be
 responsible for a massive spike in the upload statistics.
 Shouldn't the cabal file be excluded from the versioning policy? => It would
 be allowed to upload a library with the same version number if only the cabal
 file has changed. One should write a notification mail to all owners, asking
 them to reply if they don't agree. Then, after a week, execute the script
 that applies the changes.


 Categories for Graphics:

 Graphics.2d
   bacteria program: braindead utility to compose Xinerama backgrounds
   barchart library and program: Creating Bar Charts in Haskell
   chalkboard library and programs: Combinators for building and processing
 2D images.
   chalkboard-viewer library: OpenGL based viewer for chalkboard rendered
 images.
   Chart library: A library for generating 2D Charts and Plots
   dia-base library: An EDSL for teaching Haskell with diagrams - data types
   dia-functions library: An EDSL for teaching Haskell with diagrams -
 functions
   diagrams library: Embedded domain-specific language for declarative vector
 graphics
   diagrams-cairo library: Cairo backend for diagrams drawing EDSL
   diagrams-core library: Core libraries for diagrams EDSL
   diagrams-lib library: Embedded domain-specific language for declarative
 graphics
   explore program: Experimental Plot data Reconstructor
   funcmp library: Functional MetaPost
   gloss library: Painless 2D vector graphics, animations and simulations.
   gloss-examples programs: Examples using the gloss library
   GoogleChart library: Generate web-based charts using the Google Chart API
   graphics-drawingcombinators library: A functional interface to 2D drawing
 in OpenGL
   haha library and program: A simple library for creating animated ascii art
 on ANSI terminals.
   HDRUtils library: Utilities for reading, manipulating, and writing HDR
 images
   hevolisa program: Genetic Mona Lisa problem in Haskell
   hevolisa-dph program: Genetic Mona Lisa problem in Haskell - using Data
 Parallel Haskell
   Hieroglyph library: Purely functional 2D graphics for visualization.
   HPlot library and program: A minimal monadic PLplot interface for Haskell
   hs-captcha library: Generate images suitable for use as CAPTCHAs in online
 web-form security.
   hs-gchart library: Haskell wrapper for the Google Chart API
   hsparklines library: Sparklines for Haskell
   internetmarke program: Shell command for constructing custom stamps for
 German Post
   plot library: A plotting library, exportable as eps/pdf/svg/png or
 renderable with gtk
   plot-gtk library: GTK plots and interaction with GHCi
   printxosd program: Simple tool to display some text on an on-screen
 display
   scaleimage program: Scale an image to a new geometry
   testpattern program: Display a monitor test pattern
   wumpus-basic library: Basic objects and system code built on Wumpus-Core.
   wumpus-core library: Pure Haskell PostScript and SVG generation.
   wumpus-drawing library: High-level drawing objects built on Wumpus-Basic.
   wumpus-microprint library: Microprints - greek-text pictures.
   wumpus-tree library: Drawing trees
   zsh-battery program: Ascii bars representing battery status

 Graphics.3d
   Attrac program: Visualisation of Strange Attractors in 3-Dimensions
   cal3d library: Haskell binding to the Cal3D animation library.
   cal3d-examples programs: Examples for the Cal3d animation library.
   cal3d-opengl library: OpenGL rendering for the Cal3D animation library
   FieldTrip library: Functional 3D
   gnuplot library and program: 2D and 3D plots using gnuplot
   HGL library: A simple graphics library based on X11 or Win32
   hgl-example program: Various animations generated using HGL
   IcoGrid library: Library for generating grids of hexagons and pentagons
 mapped to a sphere.
   nymphaea program: An interactive GUI for manipulating L-systems
   reactive-fieldtrip library: Connect Reactive and FieldTrip
   reactive-glut library: Connects Reactive and GLUT

 Graphics.Fractal
   fractal program: Draw Newton, Julia and Mandelbrot fractals
   gmndl program: Mandelbrot Set explorer using GTK
   hfractal program: OpenGL fractal renderer
   mandulia program: A zooming visualisation of the Mandelbrot Set as many
 Julia Sets.


[Haskell] Student Internships for Parallel Haskell Programming at NII, Tokyo

2011-05-30 Thread Sebastian Fischer
As part of my research fellowship at the National Institute of
Informatics in Tokyo, I announce the availability of student
internships for up to three months between July and September 2011.

Qualified applicants are enrolled in a Master's or PhD program, have a
firm grasp of the Haskell programming language, and, ideally, know the
different approaches to parallel and concurrent programming available
in the Glorious Glasgow Haskell Compilation System (GHC). Their task
is to support research on combinators for parallel programming by
developing and comparing efficient implementations.

The available fund covers sojourn costs but no travel expenses. It is
meant to foster collaboration between Germany and Japan but qualified
students from other countries may also be funded.

If you want to spend the summer hacking Haskell in Tokyo please do not
hesitate to contact me off list. Feel free to write even if you are
unsure whether you are qualified or have any other questions regarding
the internship.

Best regards,
Sebastian

http://research.nii.ac.jp/members/fischer/


P.S.

Information about the status after the earthquake is available on the
Local Information page of ICFP'11:

http://www.biglab.org/icfp11local/index.html

Additionally, you may want to follow recent news in English by
Mainichi Daily News and NHK World:

http://mdn.mainichi.jp/
http://www3.nhk.or.jp/daily/english/

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Student Internships for Parallel Haskell Programming at NII, Tokyo

2011-05-30 Thread Sebastian Fischer
As part of my research fellowship at the National Institute of
Informatics in Tokyo, I announce the availability of student
internships for up to three months between July and September 2011.

Qualified applicants are enrolled in a Master's or PhD program, have a
firm grasp of the Haskell programming language, and, ideally, know the
different approaches to parallel and concurrent programming available
in the Glorious Glasgow Haskell Compilation System (GHC). Their task
is to support research on combinators for parallel programming by
developing and comparing efficient implementations.

The available fund covers sojourn costs but no travel expenses. It is
meant to foster collaboration between Germany and Japan but qualified
students from other countries may also be funded.

If you want to spend the summer hacking Haskell in Tokyo please do not
hesitate to contact me off list. Feel free to write even if you are
unsure whether you are qualified or have any other questions regarding
the internship.

Best regards,
Sebastian

http://research.nii.ac.jp/members/fischer/


P.S.

Information about the status after the earthquake is available on the
Local Information page of ICFP'11:

http://www.biglab.org/icfp11local/index.html

Additionally, you may want to follow recent news in English by
Mainichi Daily News and NHK World:

http://mdn.mainichi.jp/
http://www3.nhk.or.jp/daily/english/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] For Euler 25; What is the first term in the Fibonacci sequence to contain 1000 digits?; the following seems to work.

2011-05-19 Thread Sebastian Fischer
On Thu, May 19, 2011 at 7:29 PM, KC kc1...@gmail.com wrote:

 For Euler 25; What is the first term in the Fibonacci sequence to
 contain 1000 digits?; the following seems to work.


 -- For number of digits being 5 or more.
 fibNdigits :: Int -> Int
 fibNdigits nDigits = floor (((fromIntegral nDigits) - 1.0) / (logBase 10 phi)) + 2
   where
     sq5 = sqrt 5 :: Double
     phi = (1 + sq5) / 2



(length . show) is fast enough for me. The following gives the answer in a
fraction of a second.

# cabal install fibonacci
# ghci
ghci> import Data.Numbers.Fibonacci
ghci> head . filter ((==1000) . length . show) $ map fib [0..]
10700662663827589367649805844573968850836838966321516650132352033753145206046940406218891475824897926578046948881775919574843364666725699595129960304612627480924821861440694330512347744427502737817530875793916661921492591867595539664228371489431130746995034395470019854326097230672901928705264472437261177158218255484911205250132014786129659313817922355596574520395061375514678375432291196021299340482607061753977068470682028954869026661854351245219003694806413574474709117076197669456910700980243934396174741037369125032313655321647736970231677550515951735184605799549194109677783732296657965816465139034881542563101842241902598460880001101862024549393711365165703944762958471454852342595042858242530608354443542821261100899286379504800689433030977321783486454311320576565986845628861680871869383529735064398629764066723562917905207051164077614812491885830945940566688339109350944456576357666151619317753792891661581327159616877487983821820492520348473874384736771934512787029218636250627816
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Programming Chalenges: The 3n+1 problem

2011-04-15 Thread Sebastian Fischer
On Thu, Apr 14, 2011 at 8:02 PM, Luke Palmer lrpal...@gmail.com wrote:

 For this problem, it is too slow to memoize everything; you have to use a
 bounded memo table.  That's why I use a combinator-based memo approach as
 opposed to the type-directed approach used in eg. MemoTrie.  The memo table
 you need is something like

 switch (<10^6) integral id


I think because of the definition of `Data.Function.fix`

 fix f = let x = f x in x

which uses sharing, the definition

fibonacci = fix (switch (<10^6) integral id . fib)

caches results even of independent global calls. If `fix` were defined
as

fix f = f (fix f)

it would only cache the recursive calls of each individual call.

Do you agree?

Here is a fixpoint combinator that is parameterized by a memo combinator:

fixmemo :: Memo a -> ((a -> b) -> (a -> b)) -> (a -> b)
fixmemo memo f = fix (memo . f)

If I use

fixmemo (switch (<=10^6) integral id) collatz

the computation of the maximum Collatz length between 1 and 10^6 takes
around 16 seconds (rather than 4 seconds without memoization). But when
using

fixmemo (arrayRange (1,10^6)) collatz

memoization actually pays off and run time goes down to around 3 seconds. I
uploaded the program underlying my experiments to github (spoiler alert):

https://gist.github.com/921469

I knew that memocombinators are more flexible than a type-based MemoTrie but
it is nice to see that they also lead to more efficient implementations and
allow defining the other array-based implementation from this thread in a
modular way.

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Programming Chalenges: The 3n+1 problem

2011-04-14 Thread Sebastian Fischer
Hi Dimitri,

 When asking how to implement cache in Haskell I was hopping that there
 exists some solution without using Data.Array, more functional approach,
 if I may say so  ...

Steven's second solution is purely functional. It uses so-called tries
to cache results instead of mutable arrays. Here is an alternative
definition of Steven's `fixmemo` function:

fixmemo :: HasTrie a => ((a -> b) -> (a -> b)) -> (a -> b)
fixmemo f = fix (memo . f)

It uses the standard fixpoint combinator

Data.Function.fix :: (a -> a) -> a

and has a similar (but more restrictive) type. In order to understand
Steven's solution you need to know how to define recursive functions
using a fixpoint combinator. For example, you can define the Fibonacci
function as follows:

fibonacci :: Int -> Integer
fibonacci = fix fib

fib :: (Int -> Integer) -> (Int -> Integer)
fib fib 0 = 0
fib fib 1 = 1
fib fib n = fib (n-1) + fib (n-2)

Note how the first argument `fib` of the function `fib` shadows the
name of the global function `fib`.

The advantage of this complicated definition is that you get a
memoized version of the `fibonacci` function simply by using `fixmemo`
instead of `fix`:

memoFib :: Int -> Integer
memoFib = fixmemo fib

This definition avoids recomputing the same recursive calls over and
over again and is therefore much faster than the original version.

That said, memoization helps to solve your original problem only if
you reuse the cache for different top-level calls of the Collatz
length function. According to the Collatz conjecture no argument
appears again when computing the Collatz length of a number (otherwise
the computation would not terminate).

Ryan's solution achieves reuse of the cache by defining it at the top
level and you can do the same with tries. For the `fib` example it
would look like this:

fibCache :: Int :->: Integer
fibCache = trie (fib cachedFib)

cachedFib :: Int -> Integer
cachedFib = untrie fibCache

Now even independent calls to `cachedFib` reuse previously computed results.

In my experiments, caching did not pay off, even for computing the
maximum Collatz length of a given range. Without caching it took less
than 5 seconds to compute the maximum Collatz length of all numbers
between 1 and 10^6. With caching, it took 30 seconds.

Cheers,
Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] efficient parallel foldMap for lists/sequences

2011-04-01 Thread Sebastian Fischer
Hello Haskellers,

in parallel programs it is a common pattern to accumulate a list of
results in parallel using an associative operator. This can be seen as
a simple form of the map-reduce pattern where each element of the list
is mapped into a monoid before combining the results using `mconcat`.
This pattern can be abstracted by the `foldMap` function of the
`Foldable` class and we can give a simple parallel implementation
using `par` and `pseq`:

~~~
import Data.Monoid
import Control.Parallel

foldMap :: Monoid m => (a -> m) -> [a] -> m
foldMap f []  = mempty
foldMap f [x] = f x
foldMap f xs  =
  let (ys,zs) = splitAt (length xs `div` 2) xs
      ys'     = foldMap f ys
      zs'     = foldMap f zs
   in zs' `par` (ys' `pseq` (ys' `mappend` zs'))
~~~

How can this pattern be implemented in Haskell efficiently?

I plan to investigate the following options and am interested in previous work.


1. Data.Sequence (from containers)

As finger trees are already balanced they look like a good candidate
for parallel consumption. However, the current implementation of the
`Foldable` instance is sequential. I wonder what the overhead would be
and how it could be reduced if it is too large (see the sketch after
this list of options). I think I would first go for an implementation
outside of `Foldable` and later consider the overhead of overloading.

2. Data.Array.Repa (from repa)

provides `map` and (sequential) `fold` functions. I think that the
main advantage of repa is fusion of successive traversals which is not
necessary for the pattern in question. Hence, I'm not sure whether
it's a good candidate for the job. Also I don't know how to implement
the pattern using repa. Can it be done using `slice` or should it be
done by changing repa itself?

3. Data.Monoid.Reducer (from monoids)

`foldMapReduce` looks promising. I wonder whether it could be used for
parallel reduction and how efficient it would be.
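
Here is the sketch mentioned under option 1: a rough divide-and-conquer
reduction of a Data.Sequence.Seq, analogous to the list version above (the
function name foldMapSeq is made up). It splits the sequence in the middle,
which is cheap for finger trees, and reduces both halves in parallel; it does
not yet try to exploit the tree structure itself:

import qualified Data.Sequence as Seq
import Data.Sequence (Seq)
import Data.Monoid
import Control.Parallel

-- illustrative sketch, not an optimised implementation
foldMapSeq :: Monoid m => (a -> m) -> Seq a -> m
foldMapSeq f s = case Seq.length s of
  0 -> mempty
  1 -> f (Seq.index s 0)
  n -> let (ys,zs) = Seq.splitAt (n `div` 2) s
           ys'    = foldMapSeq f ys
           zs'    = foldMapSeq f zs
        in zs' `par` (ys' `pseq` (ys' `mappend` zs'))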


Which approach do you think is most promising? What other options are there?

Best,
Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Strange performance effects with unsafePerformIO

2011-03-25 Thread Sebastian Fischer
2011/3/25 Thomas Schilling nomin...@googlemail.com:
 unsafePerformIO traverses the stack to perform blackholing.  It could
 be that your code uses a deep stack and unsafePerformIO is repeatedly
 traversing it.  Just a guess, though.

Sounds reasonable. Here is a variant of the program without intermediate lists.

import System.IO.Unsafe

main = run (10^5)

run 0 = return ()
run n = (unsafePerformIO . return) (run (n - 1)) >> return ()

I think it does not do much more than producing a large stack and
(like the original program) is much faster if the unsafe-return
combination or the final return (which probably prohibits tail-call
optimization) is removed.

Sebastian

 2011/3/24 Björn Peemöller b...@informatik.uni-kiel.de:
 Hello,

 we have a strange performance behaviour when we use unsafePerformIO, at
 least with GHC 6.12.3 and 7.0.1.

 Please consider the example program following at the end of this post.
 Running the original code the execution time is about 26 seconds, while
 uncommenting one (or both) of the comments shrinks it to about 0.01
 seconds on our machine.

 Is there an explanation for this effect?

 Regards,
 Bjoern

 -- ---

 module Main where

 import System.IO.Unsafe

 traverse []     = return ()
 -- traverse (_:xs) = traverse xs
 traverse (_:xs) = traverse xs >> return ()

 makeList 0 = []
 -- makeList n = () : (makeList (n - 1))
 makeList n = () : (unsafePerformIO . return) (makeList (n - 1))

 main = traverse $ makeList (10^5)


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




 --
 Push the envelope. Watch it bend.

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Where to put a library

2011-03-02 Thread Sebastian Fischer
Hi Richard,

On Thu, Mar 3, 2011 at 1:46 AM, Richard Senington sc06...@leeds.ac.uk wrote:

 The file parsers are designed to process files coming out of the TSPLIB and
 SATLIB
 repositories. [...]

 Since these are all related I was going to try to put them together into a
 single
 library and post them to Hackage, but I am not sure what to put them under.


You could place the parsers under

Text.TSPLIB
Text.SATLIB
Text

(or maybe only Text.TSP ...?) That would reflect other formats like
Text.JSON and so on.

Cheers,
Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] upgrading mtl1 to mtl2

2011-02-17 Thread Sebastian Fischer
On Thu, Feb 17, 2011 at 4:57 PM, Max Bolingbroke batterseapo...@hotmail.com
 wrote:

 I think the problem is that the mtl1 Functor instances looked like:

 instance Monad m => Functor (ReaderT e m) where
  fmap = ...

 But the mtl2/transformers instances look like:

 instance Functor f => Functor (ReaderT e f) where
  fmap = ...


I see! So the problem is not that previously (Monad m) implied (Functor m)
(which has been proposed only recently) but that previously many Functor
instances required (Monad m) but now require (Functor m).
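
To make the difference concrete, here is a hypothetical function that
needed no Functor constraint with mtl-1 but does need one with
mtl-2/transformers (the names are mine, just for illustration):

import Control.Monad.Reader

-- with mtl-1, (Monad m) alone was enough for the fmap below
incAsk :: (Functor m, Monad m) => ReaderT Int m Int
incAsk = fmap (+ 1) ask

With mtl-1 the Functor instance for ReaderT only required Monad of the
inner monad, so the (Functor m) constraint never showed up in signatures.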

Thanks for the clarification,
Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] upgrading mtl1 to mtl2

2011-02-16 Thread Sebastian Fischer
On Thu, Feb 17, 2011 at 11:32 AM, Evan Laforge qdun...@gmail.com wrote:

 Or will there just be massive signature rewriting in the wake of mtl2?


I must admit I still don't understand your exact problem. Could you help me
with an example where using mtl2 requires an additional (Functor m)
constraint that is not required when using mtl1?

Thanks,
Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Injective type families?

2011-02-14 Thread Sebastian Fischer

 On Mon, Feb 14, 2011 at 1:41 PM, John Meacham j...@repetae.net wrote:

 Isn't this what data families (as opposed to type families) do?


On Tue, Feb 15, 2011 at 7:02 AM, Conal Elliott co...@conal.net wrote:

 Yes, it's one things that data families do. Another is introducing *new*
 data types rather than reusing existing ones. - Conal


Roman Leshchinskiy once used a newtype to make a type family injective and
remarked: "As an aside, it is well possible to [use] an injective data type
family or even a GADT [instead]. ... However, this really messes up GHC’s
optimiser and leads to very inefficient code." [1]

Of course, introducing a newtype also requires introducing new types instead
of reusing existing ones.
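
For illustration, here is a small sketch of the two flavours under
discussion (the names are made up and not taken from Roman's post):

{-# LANGUAGE TypeFamilies #-}

-- a type family is not injective: GHC cannot conclude c1 ~ c2
-- from Elem c1 ~ Elem c2
type family Elem c
type instance Elem [a] = a

-- wrapping the result in a newtype restores injectivity at the cost
-- of explicit wrapping and unwrapping
newtype WrappedElem c = WrapElem (Elem c)

-- a data family is injective by construction, but also introduces a
-- new type for every instance
data family DElem c
newtype instance DElem [a] = DElem a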

Sebastian

[1]:
http://unlines.wordpress.com/2010/11/15/generics-for-small-fixed-size-vectors/
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Proving correctness

2011-02-11 Thread Sebastian Fischer

 I've come across this a few times - In Haskell, once can prove the
 correctness of the code - Is this true?


One way to prove the correctness of a program is to calculate it from its
specification. If the specification is also a Haskell program, equational
reasoning can be used to transform a (often inefficient) specification into
an equivalent (but usually faster) implementation. Richard Bird describes
many examples of this approach, one in his functional pearl on a program to
solve Sudoku [1]. Jeremy Gibbons gives an introduction to calculating
functional programs in his lecture notes  of the Summer School on Algebraic
and Coalgebraic Methods in the Mathematics of Program Construction [2].
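
As a tiny illustration of the calculational style (my own toy example, not
taken from the cited papers), an accumulator version of sum can be
calculated from its specification by equational reasoning:

-- illustrative example, names invented
sumAcc :: Num a => a -> [a] -> a
sumAcc n xs = n + sum xs                 -- specification

-- Case []:     sumAcc n []     = n + sum []     = n + 0 = n
-- Case (x:xs): sumAcc n (x:xs) = n + sum (x:xs) = n + (x + sum xs)
--                              = (n + x) + sum xs   (associativity of +)
--                              = sumAcc (n + x) xs
--
-- The calculated equations form an implementation that no longer uses sum:

sumAcc' :: Num a => a -> [a] -> a
sumAcc' n []     = n
sumAcc' n (x:xs) = sumAcc' (n + x) xs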

Sebastian

[1]: http://www.cs.tufts.edu/~nr/comp150fp/archive/richard-bird/sudoku.pdf
[2]:
http://www.comlab.ox.ac.uk/jeremy.gibbons/publications/acmmpc-calcfp.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


my RULES don't fire

2011-02-09 Thread Sebastian Fischer
Hello,

I want to use the RULES pragma and cannot get my rules to fire. Here is a
simplified example of what I'm trying.

I define my own version of foldMap for lists:

fold :: Monoid m => (a -> m) -> [a] -> m
fold f = foldr mappend mempty . map f
-- alternative, trying to avoid interference with foldr/build fusion
-- fold _ [] = mempty
-- fold f (x:xs) = f x `mappend` fold f xs
{-# NOINLINE fold #-}

I try using a NOINLINE pragma to make the firing of my rules (which involve
fold) more robust. But they don't fire with or without NOINLINE. Also the
uncommented version does not make a difference.

I also define a function that creates a singleton list:

single :: a -> [a]
single x = [x]
{-# NOINLINE single #-}

Now I want to replace calls of `fold f . g single` (or eta-expanded versions
of this) by `g f` using the following rules:

{-# RULES
  "monoid fusion pointfree"
    forall f (g :: forall m . Monoid m => (a -> m) -> b -> m) .
      fold f . g single = g f;

  "monoid fusion pointed, general"
    forall f (g :: forall m . Monoid m => (a -> m) -> b -> m) b .
      fold f (g single b) = g f b;

  "monoid fusion pointed, for lists"
    forall f (g :: forall m . Monoid m => (a -> m) -> [a] -> m) xs .
      fold f (g single xs) = g f xs;
  #-}

The variations of type signatures (including no signatures at all) for the
pattern variables that I tried did not change anything for the better.

I wrote the third rule only because the second gives a warning that I don't
quite understand:

Warning: Forall'd type variable b is not bound in RULE lhs
   fold @ m @ a $dMonoid f (g @ [a] $dMonoid (single @ a) b)

I try out the rules using the following function that takes the role of `g`
in the rules:

idGen :: Monoid m => (a -> m) -> [a] -> m
idGen _ [] = mempty
idGen f (x:xs) = f x `mappend` idGen f xs
{-# NOINLINE idGen #-}

Again, I use NOINLINE just in case that would help the rules fire. Here is a
main function where the rules should fire:

main :: IO ()
main =
  do print ((fold id . idGen single) [[()]])
     print (fold id (idGen single [[()]]))

But they don't.

Why don't the rules fire, what can I change such that they do, and what can
I do to get rid of the warning for the second rule (which I think is the one
I should use)?

Best regards,
Sebastian

Here is the output of -ddump-simple-stats (once with -fenable-rewrite-rules
only and once with -O):

# ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.0.1

# ghc -fenable-rewrite-rules -fforce-recomp -ddump-simpl-stats --make rules
[1 of 1] Compiling Main ( rules.hs, rules.o )

 Grand total simplifier statistics 
Total ticks: 0

1 SimplifierDone
1

# ghc -O -fforce-recomp -ddump-simpl-stats --make rules
[1 of 1] Compiling Main ( rules.hs, rules.o )

 FloatOut stats: 
0 Lets floated to top level; 0 Lets floated elsewhere; from 4 Lambda groups



 FloatOut stats: 
10 Lets floated to top level; 1 Lets floated elsewhere; from 5 Lambda groups



 Grand total simplifier statistics 
Total ticks: 144

34 PreInlineUnconditionally
1 eta_Xp5
1 g_amr
1 eta_amx
1 k_amJ
1 z_amK
1 f_amQ
1 g_amR
1 x_amS
1 k_an9
1 z_ana
1 g_anb
1 f_anf
1 xs_ang
1 eta_aoA
2 $dShow_aKW
2 x_aKX
1 ys_aVd
1 c_dmm
1 n_dmn
1 a_snX
1 a_so1
1 lvl_sod
1 lvl_soe
1 lvl_sof
1 lvl_sog
1 lvl_soh
1 lvl_soi
1 lvl_soj
1 a_son
1 a_sop
1 a_sV0
1 a_sV2
17 PostInlineUnconditionally
1 k_amv
1 f_amQ
1 g_amR
1 c_ani
1 n_anj
1 m_anI
1 k_anJ
2 $dShow_aoy
2 x_aoz
1 c_aVa
1 f_aVb
1 x_aVc
1 a_snV
1 a_snZ
1 lvl_sVA
15 UnfoldingDone
1 GHC.Base.build
1 GHC.Base.foldr
2 System.IO.print
1 GHC.TopHandler.runMainIO
2 GHC.Base..
1 GHC.Base.mapFB
1 GHC.Base.$fMonadIO_$c
2 Main.main
2 System.IO.print1
2 GHC.Show.$fShow[]_$cshow
8 RuleFired
1 Class op 
2 Class op show
2 Class op showList
1 fold/build
1 foldr/nil
1 map
8 LetFloatFromLet
8
62 BetaReduction
1 eta_Xp5
1 a_amq
1 g_amr
1 a_amt
1 b_amu
1 k_amv
1 z_amw
1 eta_amx
1 b_amH
1 a_amI
1 k_amJ
1 z_amK
2 b_amN
2 c_amO
2 a_amP
2 f_amQ
2 g_amR
1 x_amS
1 a_an7
1 b_an8
1 k_an9
1 z_ana
1 g_anb
1 a_and
1 a1_ane
1 f_anf
1 xs_ang
1 b_anh
1 c_ani
1 n_anj
1 a_anG
1 b_anH
1 m_anI
1 k_anJ
2 a_aox
2 $dShow_aoy
2 x_aoz
1 eta_aoA
2 a_aKV
2 $dShow_aKW
2 x_aKX
1 elt_aV7
1 lst_aV8
1 a_aV9
1 c_aVa
1 f_aVb
1 x_aVc
1 ys_aVd
1 a_dml
1 c_dmm
1 n_dmn
13 SimplifierDone
13

Re: my RULES don't fire

2011-02-09 Thread Sebastian Fischer

  Why don't the rules fire,

 Because the 'match' is at the wrong type.


This was the correct hint, thanks!


  what can I change such that they do,

 Type signatures.


Initially, I thought that just leaving out the polymorphic signature should
fix the problem. But I think it cannot be fixed by changing type signatures
only because once the type of `g` in the rule is fixed, the rule is no
longer type correct!

To overcome this, I have defined

gen :: (forall m . Monoid m => (a -> m) -> b -> m) -> b -> [a]
gen g = g single
{-# NOINLINE gen #-}

and changed the rules to

{-# RULES
  "monoid fusion pointfree"
    forall f (g :: forall m . Monoid m => (a -> m) -> b -> m) .
      fold f . gen g = g f;

  "monoid fusion pointed"
    forall f (g :: forall m . Monoid m => (a -> m) -> b -> m) b .
      fold f (gen g b) = g f b;
  #-}

and now they fire. Seems a bit roundabout but I don't see how to avoid this
indirection.


  and what to get rid of the warning for the second rule (which I think
  is the one I should use)?

 I'll let that for somebody else.


My new rules don't cause this warning. I'm still interested in what the
warning meant, although my code does not depend on an answer anymore.

Probably because GHC inlines function composition in the first line of

main = do print ((fold id . gen idGen) [[()]])
          print (fold id (gen idGen [[()]]))

the pointed rule fires twice if I remove the point-free one.

Does it make sense to keep the point-free rule just in case that `fold f .
gen g` is passed to a higher-order function and does not get an argument
after inlining?

Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Concurrency best practices?

2011-02-05 Thread Sebastian Fischer
Hi Wren,

maybe Twilight STM is for you:
http://hackage.haskell.org/package/twilight-stm

Sebastian

On Sat, Feb 5, 2011 at 6:46 PM, wren ng thornton w...@freegeek.org wrote:

 So I'm working on a project that uses STM to run a lot of things in
 parallel without the headaches of locks. So far it's working beautifully,
 STM rocks. But there's one snag...

 Sometimes I need those threads to do some IO like printing logging info.
 I'd like to make these IO chunks atomic with respect to one another so that
 the logging output isn't garbage. Since I'm using STM everywhere else, I'd
 love to use it for this too (instead of mixing STM and non-STM concurrency)
 except you can't embed IO in STM.

 I could just use STM vars as locks to force the atomicity, but locks are
 ugly and bug-prone. So what's the current best practice for doing this kind
 of thing?

 --
 Live well,
 ~wren

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] MonadPeelIO instance for monad transformers on top of forall

2011-02-04 Thread Sebastian Fischer
Hi Max,

data M a = M { unM :: forall m. MonadPeelIO m => Reader.ReaderT () m a }

 It seems clear that there should be a MonadPeelIO instance for M,
 but I can't for the life of me figure it out.


 Have you (or the big brains on Haskell-Cafe, who are CCed) come across
 this before? Is there an obvious solution I am missing?


I have not used monad-peel before so please ignore my comment if I am
missing something obvious. The documentation mentions that "Instances of
MonadPeelIO should be constructed via MonadTransPeel, using peelIO =
liftPeel peelIO." I think this would work at least in your simplified
example as ReaderT is an instance of MonadTransPeel. Maybe you can take the
same route with your actual transformer?

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Local definitions in the class instances

2011-02-01 Thread Sebastian Fischer
On Tue, Feb 1, 2011 at 9:23 PM, Ben Millwood hask...@benmachine.co.uk wrote:

 On Tue, Feb 1, 2011 at 9:52 AM, Max Bolingbroke
 batterseapo...@hotmail.com wrote:
 
  Local declarations at module scope can be emulated using pattern
 bindings:
 
  
  (foo, bar) = (foo, bar)
    where
      foo = ..
      bar = ..
      private = ...
  
 
  If instance declarations supported pattern bindings you could get the
  same effect for your instances too. This would be a minimal change
  that avoided introducing any extra syntax.


It's a nice trick! Although it does look strange, it may be reasonable to
allow pattern bindings in instance declarations regardless of the original
proposal. Is it correct that, currently, pattern bindings are allowed
everywhere but in instance declarations? If so, why not in instance
declarations too?


 This is kind of ugly, I think, and there are proposals to make pattern
 bindings monomorphic which would make this sort of thing no longer
 possible in general.


I think the proposals to make pattern bindings monomorphic only concern
pattern bindings without type annotations. Instance methods do have type
annotations in the class declaration so even if pattern bindings without
type signatures would be monomorphic, instance methods bound using pattern
bindings need not be.


 I think I would be in favour of a declaration analogue to let.


I agree that using pattern bindings for the original task would work around
a missing syntax extension. But at least this workaround may be easily
implementable and making it possible seems to fix an inconsistency by making
the use of pattern bindings more orthogonal.

Sebastian
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] parsing exercise

2011-01-23 Thread Sebastian Fischer
On Sun, Jan 23, 2011 at 4:31 PM, Chung-chieh Shan
ccs...@post.harvard.eduwrote:

 Maybe Text.Show.Pretty.parseValue in the pretty-show package can help?


That's what I was looking for, thanks!

On Sun, Jan 23, 2011 at 5:23 PM, Stephen Tetley stephen.tet...@gmail.com
 wrote:

 I don't think you can do this simply as you think you would always
 have to build a parse tree.


Isn't it enough to maintain a stack of open parens, brackets, char- and
string-terminators and escape chars? Below is my attempt at solving the
problem without an expression parser.

In practice, if you follow the skeleton syntax tree style you might
 find not caring about the details of syntax is almost as much work
 as caring about them. I've tried a couple of times to make a skeleton
 parser that does paren nesting and little else, but always given up
 and just used a proper parser as the skeleton parser never seemed
 robust.


Indeed I doubt that the implementation below is robust and it's too tricky
to be easily maintainable. I include it for reference. Let me know if you
spot an obvious mistake..

Sebastian

splitTLC :: String -> [String]
splitTLC = parse ""

type Stack = String

parse :: Stack -> String -> [String]
parse _  ""     = []
parse st (c:cs) = next c st $ parse (updStack c st) cs

next :: Char -> Stack -> [String] -> [String]
next c []    xs = if c==',' then [] : xs else c <: xs
next c (_:_) xs = c <: xs

infixr 0 <:

(<:) :: Char -> [String] -> [String]
c <: []     = [[c]]
c <: (x:xs) = (c:x):xs

updStack :: Char -> Stack -> Stack
updStack char stack =
  case (char,stack) of
    -- char is an escaped character
    (_   , '\\':xs)  -> xs          -- the next character is not special

    -- char is the escape character
    ('\\', xs)       -> '\\':xs     -- push it on the stack

    -- char is the string terminator
    ('"' , '"':xs)   -> xs          -- closes current string literal
    ('"' , '\'':xs)  -> '\'':xs     -- ignored inside character
    ('"' , xs)       -> '"':xs      -- opens a new string

    -- char is the character terminator
    ('\'' , '\'':xs) -> xs          -- closes current character literal
    ('\'' , '"':xs)  -> '"':xs      -- ignored inside string
    ('\'' , xs)      -> '\'':xs     -- opens a new character

    -- parens and brackets
    (_   , '"':xs)   -> '"':xs      -- are ignored inside strings
    (_   , '\'':xs)  -> '\'':xs     -- and characters
    ('(' , xs)       -> '(':xs      -- new opening paren
    (')' , '(':xs)   -> xs          -- closing paren
    ('[' , xs)       -> '[':xs      -- opening bracket
    (']' , '[':xs)   -> xs          -- closing bracket

    -- other characters don't modify the stack (ignoring record syntax)
    (_   , xs)       -> xs
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are constructors strict?

2011-01-22 Thread Sebastian Fischer
Hi Daryoush,

On Fri, Jan 21, 2011 at 7:52 PM, Daryoush Mehrtash dmehrt...@gmail.com wrote:


  loop :: MonadPlus m => m Bool

  loop = loop


 If we apply Just to loop as follows


  test2 :: MonadPlus m => m (Maybe Bool)

  test2 = loop >>= return . Just


 the evaluation of test2 does not terminate because >>= has to evaluate
 loop. But this does not correctly reflect the behaviour in a functional
 logic language like Curry. For example, if you have a function that checks
 whether the outermost constructor of test2 is Just, the test is supposed to
 be successful. In the naive model for non-determinism this is not the case.


 Do I have to have MonadPlus m or would any other Monad class work the same
 way?


I'm afraid I don't quite get your question. Would you mind clarifying it
with an example?

Also, Jan, I don't understand your comment about continuation monads. Maybe
I am a bit numb today... Which property do you mean that continuation monads
have or do not have?

Thanks,
Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] parsing exercise

2011-01-22 Thread Sebastian Fischer
Hello,

I need a function and wonder whether I can copy some existing code so I
don't have to write it myself.

It should split a string into a list of strings:

splitAtTopLevelCommas :: String -> [String]

I need something similar to `splitOn ","` from the Text package with the
property

intercalate "," . splitAtTopLevelCommas = id

But it should split the string only on a few commas, not all. You can think
of the result list as representations of Haskell values, for example

splitAtTopLevelCommas "True,(1,(2,[3,4])),Just ('a',\")\")"

should yield

["True", "(1,(2,[3,4]))", "Just ('a',\")\")"]

I expect writing this function to be quite tedious (ignore commas in parens,
ignore parens in strings, quotation, ...) and would prefer to copy code from
some parsing library. Do you have an idea what I could use? Or how to solve
it from scratch in a few lines?

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are constructors strict?

2011-01-21 Thread Sebastian Fischer
sorry, forgot to cc cafe.

On Fri, Jan 21, 2011 at 7:12 PM, Sebastian Fischer fisc...@nii.ac.jp wrote:

 Hi Daryoush,

  On Fri, Jan 21, 2011 at 6:18 AM, Daryoush Mehrtash dmehrt...@gmail.com wrote:

 I am having hard time understanding the following paragraph in Purely
 functional Lazy non-deterministic  programing paper
 http://www.cs.rutgers.edu/~ccshan/rational/lazy-nondet.pdf

 The problem with the naive monadic encoding of non-determinism is that the
 arguments to a constructor must be deterministic. If these arguments are
 themselves results of non-deterministic computations, these computations
 must be performed completely before we can apply the constructor to build a
 non-deterministic result.



 Why does the argument to constructors must be deterministic?


  Because of the type of the constructor. Say you want to compute a list of
  Ints nondeterministically (like in the paper). Then the first argument of
  the (:) constructors in the result must be an Int. A nondeterministic
  computation that yields an Int has type (m Int) for some nondeterminism
  monad (for example, it has type [Int] in the list monad). That the (:)
  constructor is polymorphic and would allow making a value of type [m Int]
  is a coincidence and does not help to make a value of type (m [Int]).


  Why is it that thunks are not used in this case?


 Thunks do not model nondeterminism, only laziness.

 Did that help you understand?

 Regards,
 Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A question about monad laws

2011-01-18 Thread Sebastian Fischer
Hi Kashyap,

 Could someone please help me get a better understanding of the necessity of 
 monads complying with these laws?

Maybe it helps to write them in do-notation. Once written like this,
it becomes clear(er?) that do-notation would be much less intuitive if
the laws would not hold:

Left Unit: return x >>= \y -> f y  =  f x
    do y <- return x
       f y
  = do f x

Right Unit: a >>= \x -> return x  =  a
    do x <- a
       return x
  = do a

Associativity: (a >>= \x -> f x) >>= \y -> g y
             =  a >>= \x -> (f x >>= \y -> g y)
    do y <- do x <- a
               f x
       g y
  = do x <- a
       y <- f x
       g y

So, at least, the monad laws ensure that do-notation behaves as one
would expect.

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Set monad

2011-01-12 Thread Sebastian Fischer
On Sun, Jan 9, 2011 at 10:11 PM, Lennart Augustsson
lenn...@augustsson.net wrote:

 That looks like it looses the efficiency of the underlying representation.


Yes, I don't think one can retain that cleanly without using restricted
monads to exclude things like

liftM ($42) (mplus (return pred) (return succ))

or just

liftM ($42) (return pred)

Maybe one can hack something to achieve

mplus :: Ord a => Set a -> Set a -> Set a
mplus a b = Set (\k -> S.union (a >>- ret) (b >>- ret) `bind` k)
  where
    ret  = S.singleton
    bind = flip Data.Foldable.foldMap

mplus :: not (Ord a) => Set a -> Set a -> Set a
mplus a b = Set (\k -> S.union (a >>- k) (b >>- k))

using overloading with undecidable instances (?) (and defining a Monoid
instance for the Set monad in terms of the MonadPlus instance)

Reminds me of instance chains.. [1]

Sebastian

[1]: http://portal.acm.org/citation.cfm?id=1863596
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Set monad

2011-01-08 Thread Sebastian Fischer
On Sun, Jan 9, 2011 at 6:53 AM, Lennart Augustsson
lenn...@augustsson.net wrote:

 It so happens that you can make a set data type that is a Monad, but it's
 not exactly the best possible sets.

 module SetMonad where

 newtype Set a = Set { unSet :: [a] }


Here is a version that also does not require restricted monads but works
with an arbitrary underlying Set data type (e.g. from Data.Set). It uses
continuations with a Rank2Type.

import Control.Monad
import qualified Data.Set as S

newtype Set a = Set { (>>-) :: forall b . Ord b => (a -> S.Set b) -> S.Set b }

instance Monad Set where
  return x = Set ($x)
  a >>= f  = Set (\k -> a >>- \x -> f x >>- k)

Only conversion to the underlying Set type requires an Ord constraint.

getSet :: Ord a => Set a -> S.Set a
getSet a = a >>- S.singleton

A `MonadPlus` instance can lift `empty` and `union`.

instance MonadPlus Set where
  mzero = Set (const S.empty)
  mplus a b = Set (\k -> S.union (a >>- k) (b >>- k))

Maybe, Heinrich Apfelmus's operational package [1] can be used to do the
same without continuations.

[1]: http://projects.haskell.org/operational/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Missing Functor instances in GHC 7?

2010-12-10 Thread Sebastian Fischer
On Fri, 2010-12-10 at 08:33 +, Simon Peyton-Jones wrote:
 If there's a consensus that the behaviour is wrong, or at least
 unexpected, would you like to make a reproducible test case and file a
 ticket? 

I took Erik's mail as indicator that the behaviour of GHCi is
inconsistent and unexpected:

http://hackage.haskell.org/trac/ghc/ticket/4832

Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Missing Functor instances in GHC 7?

2010-12-09 Thread Sebastian Fischer
Hello,

according to

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.html

Control.Monad exports 20 Functor instance declarations in base-4.3.0.0.
However:

bash# ghc-pkg list | grep base
base-4.3.0.0
bash# ghci --version
The Glorious Glasgow Haskell Compilation System, version 7.0.1
bash# ghci
Prelude> import Control.Monad
Prelude Control.Monad> :i Functor
class Functor f where
  fmap :: (a -> b) -> f a -> f b
  (GHC.Base.<$) :: a -> f b -> f a
-- Defined in GHC.Base
instance Functor Maybe -- Defined in Data.Maybe
instance Functor [] -- Defined in GHC.Base
instance Functor IO -- Defined in GHC.Base

There are only 3 instances instead of 20. Importing Control.Applicative
gives 3 more instances but for example the instance for ((->) r) is
still missing.

Is my installation broken? Or has anybody similar problems finding
Functor instances in GHC 7?

Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Missing Functor instances in GHC 7?

2010-12-09 Thread Sebastian Fischer
Hi Antoine,

On Thu, 2010-12-09 at 23:20 -0600, Antoine Latter wrote:
 Are there any particular ones you're running into problems with?

Yes, I cannot find the instance for ((->) r).

Even if I import

Control.Monad 
Control.Monad.Reader
Control.Applicative 
Data.Functor 
Data.Function

I still get

ghci> ((+) <$> id <*> id) 21
<interactive>:1:1:
No instance for (Functor ((->) a))

Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Missing Functor instances in GHC 7?

2010-12-09 Thread Sebastian Fischer
On Fri, 2010-12-10 at 14:35 +0900, Sebastian Fischer wrote:
 Yes, I cannot find the Functor instance for ((->) r).

As the Applicative instance for ((->) r) depends on the Functor instance
I only needed to go through the imports of Control.Applicative to find
that the Functor instance of ((->) r) is defined in
Control.Monad.Instances. If I import this module into GHCi, it finds the
instance.
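
For reference, a small file that uses the instance (under the setup
described above, with base-4.3 / GHC 7.0; the function names are made up)
could look like this:

import Control.Monad.Instances ()

addOneTwice :: Int -> Int
addOneTwice = fmap (+1) (+1)   -- fmap on ((->) r) is function composition

main :: IO ()
main = print (addOneTwice 40)  -- prints 42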

Interestingly, if I import only Control.Applicative from within GHCi, it
does not find the instances defined in Control.Monad.Instances although
this module is imported in Control.Applicative. On the other hand, if I
write a file containing the line 'import Control.Applicative' and load
this file in GHCi then the instances from Control.Monad.Instances are
visible.

Apparently, importing a module in GHCi differs from importing it in a
Haskell file and loading this into GHCi.

Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wondering if this could be done.

2010-12-04 Thread Sebastian Fischer
On Mon, 2010-11-22 at 14:48 +0800, Magicloud Magiclouds wrote:
 (+) :: A -> A -> A
 (+) a b =
   A (elem1 a + elem1 b) (elem2 a + elem2 b) -- I got errors here, for
 the (+) is ambiguous. 

That's because (+) is implicitly imported from the Prelude. If you

import Prelude hiding ((+))

the error disappears.

Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Equality constraint synonyms

2010-11-24 Thread Sebastian Fischer
On Thu, 2010-11-25 at 10:41 +0900, Hugo Pacheco wrote:
 Would this be a desired feature for other people?

I'd like to have Haskell Type Constraints Unleashed

http://users.ugent.be/~tschrijv/Research/papers/constraint_families.pdf

which includes equality constraint synonyms.

Sebastian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type Directed Name Resolution

2010-11-11 Thread Sebastian Fischer

On Nov 12, 2010, at 5:43 AM, Richard O'Keefe wrote:


A saucepan whose handle keeps falling off is defective,


I do not see TDNR as unambiguously defective as a loose saucepan handle.


The amount of time spent maintaining a program is much higher
than the amount of time spent creating it initially.  That
means that if you have a tradeoff between ease of writing and
the other virtues of a language, ease of writing *matters* less.


Like you, I think that a tradeoff between readability and writability  
should be made in favour of readability. Unlike you, I am not  
convinced that TDNR trades readability for writability.



Consider the vexed question of repeating all or part of the
record name in the field name.  Yes, this *is* a burden on
the person writing it.  But it is a **help** to the person
reading it.  The same applies to using module prefixes
(possibly abbreviated ones).


Not if the extra information is redundant. Then qualification may even  
impair readability by introducing unnecessary clutter.


I don't think that TDNR threatens readability more than type classes  
already do. Not only is "Buffalo buffalo Buffalo buffalo buffalo
buffalo Buffalo buffalo" a grammatically valid sentence in the English
language, also `fmap fmap fmap fmap fmap fmap fmap fmap` is a type-correct
expression in the Haskell programming language. It can already
be hard today to distinguish occurrences of overloaded functions. TDNR  
does not add much to this, I think.


One difference is that there is a unifying type with a type class  
constraint for all implementations of functions with the same name  
when using type classes but not when using TDNR. Does this make code  
that is using TDNR less readable than code that is using type classes?


As others have pointed out, type classes are insufficient for  
overloading record labels because they do not cover record updates.


How can we add a special kind of overloading for record labels that  
also works for updates? Maybe like this:


rename :: ((name :: String) @ a) => a -> a
rename newName someRecord = someRecord { name = newName }

This probably falls under the category of improved record systems. How  
difficult would it be to implement this? Can it be implemented by  
desugaring without substantial extensions to the type system?


Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type Directed Name Resolution

2010-11-10 Thread Sebastian Fischer

On Nov 10, 2010, at 11:57 PM, Neil Brown wrote:

I wonder if special syntax is actually needed for this.  How much of  
the language would be broken by adopting the general rule: If the  
only definitions of f are at the top-level or imported, find the  
type of 'a' and the type of all the in-scope 'f' s.  If there is  
exactly one match then use it, otherwise it's an error.?


Interesting idea. It is only after your question that I see TDNR as ad-hoc
overloading and not only as simpler record notation (although it's
obvious in retrospect). Such a change without new syntax seems quite  
radical, maybe too radical for a strongly typed language.


On Nov 11, 2010, at 3:05 AM, Albert Y. C. Lai wrote:

Typed-directed name resolution brings Haskell closer to a write-only  
language; that is, an ambiguous phrase made total sense to the  
author when the author wrote it, but an independent reader will need  
extraordinary effort to disambiguate.


In this regard it would be closer to natural language where it is the  
responsibility of writers to express themselves clearly. Who writes  
something that requires extraordinary effort to disambiguate is an  
incompetent writer (or a poet, like the author of the buffalo  
sentence). Why blame languages instead of writers?


On Nov 11, 2010, at 7:01 AM, Ryan Ingram wrote:


regular ad-hoc overloading does not make a ton of
sense in Haskell; function types are complicated enough that too much
ambiguity is introduced and inference becomes very difficult.  But I
see a lot of value in locally saying 'this particular invocation
should be ad-hoc overloaded' for common functions like 'length',
'map', 'lookup', etc.


So the current proposal looks like a sensible compromise.

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: arrow-list. List arrows for Haskell.

2010-11-07 Thread Sebastian Fischer
I'm planning to write up a blog post about using list arrows for XML  
processing.


Ok, I'll stay tuned! Maybe a smaller example for the Haddock docs needs
less time.


Maybe this sounds weird on the Haskell mailing list, but at Silk[1]  
we have a full implementation of (functional reactive) list arrows  
in JavaScript. The arrows build up an AST of operations that we  
statically optimize to a more efficient form. This is needed because  
JavaScript itself is not going to fuse intermediate list for you.


I'm reinventing all of this in Haskell now to gain more insight in  
what we've actually done in JavaScript. :-)


Sounds more interesting than weird to me. In case you blog about that  
too, I'll probably read it. I'm interested in getting an intuition for what
you can do with arrows and what not, an intuition that goes beyond the
quite abstract "you can't choose different computation paths based on
the result of a computation". Can you, for example, define a `perm`
arrow that yields every permutation of its input? Monadic
implementations look like they need `>>=` but there may be another
way... Maybe you now have an example for the docs ;o)


I'm also planning to investigate whether it is possible (useful) to  
perform the list computations (the map in concatMap) in parallel.  
I'm not sure if someone has already tried this for list monads.


I tried something similar a while ago (for monads):

http://www-ps.informatik.uni-kiel.de/~sebf/haskell/speedup-search-with-parallelism.lhs.html 




Sebastian

P.S.

In


[1] https://blog.silkapp.com/2009/12/reinventing-xslt-in-pure-javascript/


there are two occurrences of the `sum` function which should probably  
be `alt` instead.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: arrow-list. List arrows for Haskell.

2010-11-07 Thread Sebastian Fischer

to answer my own question:

On Nov 7, 2010, at 8:33 PM, Sebastian Fischer wrote:

Can you, for example, define a `perm` arrow that yields every
permutation of its input? Monadic implementations look like they
need `>>=` but there may be another way...


A `perm` arrow can be defined in the usual way using the list-arrow  
package:


{-# LANGUAGE TypeOperators #-}

import Control.Arrow
import Control.Arrow.ArrowList

perm :: (ArrowList (~>), ArrowPlus (~>)) => [a] ~> [a]
perm = isA null <+> (uncons >>> second perm >>> insert)

insert :: (ArrowList (~>), ArrowPlus (~>)) => (a,[a]) ~> [a]
insert = cons <+> (second uncons >>> rearrange >>> second insert >>> cons)
  where rearrange = assocL >>> first swap >>> assocR

It may be possible to do this with `ArrowChoice` only, that is,  
without resorting to the operations of `ArrowList`, but they looked  
simpler.


In order to support the above, we need a bunch of auxiliary arrows.  
First, list con- and destructors:


cons :: Arrow (~>) => (a,[a]) ~> [a]
cons = arr (uncurry (:))

uncons :: ArrowList (~>) => [a] ~> (a,[a])
uncons = isA (not . null) >>> arr (\ (x:xs) -> (x,xs))

Second (and more annoyingly), reordering arrows:

swap :: Arrow (~>) => (a,b) ~> (b,a)
swap = arr (\ (x,y) -> (y,x))

assocL :: Arrow (~>) => (a,(b,c)) ~> ((a,b),c)
assocL = arr (\ (x,(y,z)) -> ((x,y),z))

assocR :: Arrow (~>) => ((a,b),c) ~> (a,(b,c))
assocR = arr (\ ((x,y),z) -> (x,(y,z)))

This is my first program with arrows so it might be unnecessarily  
complicated. Is there a more elegant way?


I wonder how badly my use of `arr` influences how the program can be  
optimized.  I hope it's still better than just using


perm = arrL perms
  where perms :: [a] -> [[a]]
        perms = ...

;o)

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What do you call Applicative Functor Morphism?

2010-11-06 Thread Sebastian Fischer

Hello,

I'm curious and go a bit off topic triggered by your statement:

On Nov 6, 2010, at 12:49 PM, rocon...@theorem.ca wrote:


An applicative functor morphism is a polymorphic function,
eta : forall a. A1 a -> A2 a between two applicative functors A1 and
A2 that preserve pure and <*>


I recently wondered: why morphism and not homomorphism?

Wikipedia says:

In abstract algebra, a homomorphism is a structure-preserving map  
between two algebraic structures


and

In mathematics, a morphism is an abstraction derived from
structure-preserving mappings between two mathematical structures.


One difference is "abstract algebra ... algebraic structures" vs
"mathematics ... mathematical structures"; another difference is the
"abstraction derived from" part in the second phrase.


So for the `Monoid` class, I'd say monoid homomorphism but I'm  
unsure whether `Applicative` counts as an algebraic structure or calls  
for using morphism instead.


Is there a deeper reason why people use morphism and not  
homomorphism or is it just because it's shorter?


Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Review request for my baby steps towards a platform independent interactive graphics using VNC

2010-11-06 Thread Sebastian Fischer


On Nov 4, 2010, at 3:48 PM, C K Kashyap wrote:


Also, any reference/suggestion on how I could go about using a state
machine to deal with the RFB protocol.


A simple way to model state machines is to use one function for each  
state. Each function calls the functions corresponding to successor  
states.
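
As a rough sketch of what I mean (invented state and event names, nothing
RFB-specific):

data Event = Ping | Quit

-- one function per state; transitions are calls to the successor state
stateA :: IO Event -> IO ()
stateA nextEvent = do
  e <- nextEvent
  case e of
    Ping -> stateB nextEvent
    Quit -> return ()

stateB :: IO Event -> IO ()
stateB nextEvent = do
  putStrLn "handled Ping, back to state A"
  stateA nextEvent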


(Not sure whether this answers your question, though.)

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: arrow-list. List arrows for Haskell.

2010-11-06 Thread Sebastian Fischer


On Nov 6, 2010, at 10:00 PM, Sebastiaan Visser wrote:

List arrows are a powerful tool when processing XML, building query  
languages and lots of other domains that build on functions that  
might return more than one value as their output.


Interesting. Do you plan to write some examples that show

  * how to use ListArrows,
  * differences to using the list monad, and
  * when using ListArrow is preferable?

I'm interested to see something like this worked out although I have  
some rough ideas like monads are more powerful and arrows may allow  
stronger reasoning and more efficient implementations. Can you  
substantiate these general points for the concrete case of ListArrow  
vs list monad?


I assume your implementation is *not* more efficient than the list  
monad as it builds on ListT.


Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Loop optimisation with identical counters

2010-11-05 Thread Sebastian Fischer


On Nov 6, 2010, at 8:22 AM, David Peixotto wrote:


To summarize, I found that it is possible to get LLVM to do this
transformation through a combination of tail-call elimination, inlining,
induction variable optimization, and global value numbering.


Interesting. This approach requires `f` to be inlined into its call  
site in order to eliminate the redundant argument. This is different  
from the proposal to provide a specialized version of `f` (where the  
arguments are combined) which could be reused at different call sites.


How many call sites with identical arguments are there in the  
generated code that triggered this discussion and in the stream fusion  
code that would benefit from this optimization?


Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Loop optimisation with identical counters

2010-11-05 Thread Sebastian Fischer

Which proposal do you mean?


I referred to Christian's question of whether it is possible to generate  
`ff` and `gg` from `f` and `g`. If `h` is similar to `g`, then `hh`  
could reuse `ff` while with an inlining approach something like `ff`  
would be duplicated.


I'm not sure something like that is feasible without knowing the  
call sites.


I agree. One would need to generate variants for different call sites  
and reuse variants for similar call sites.


Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Is Curry alive?

2010-11-04 Thread Sebastian Fischer


On Nov 4, 2010, at 2:07 PM, wren ng thornton wrote:

Besides, it's simple enough to just use the MonadLogic class and  
switch between concrete types, if you need to test performance.


You may even try to use only `MonadPlus` to get more instances. The  
only instances of `MonadLogic` are (basically) `[]` and `LogicT` but  
there are many more `MonadPlus` instances on Hackage. If you don't  
need the flexibility of `LogicT` I recommend using `MonadPlus`.
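
For example, a search written against `MonadPlus` alone runs unchanged  
in any of these instances (a small sketch; `pyth` and `choose` are  
made-up names):

    import Control.Monad (MonadPlus, mplus, mzero, guard)

    -- Pythagorean triples up to a bound, using only the MonadPlus interface
    pyth :: MonadPlus m => Int -> m (Int, Int, Int)
    pyth n = do
      a <- choose [1..n]
      b <- choose [a..n]
      c <- choose [b..n]
      guard (a*a + b*b == c*c)
      return (a, b, c)
     where choose xs = foldr (mplus . return) mzero xs

With the list instance, `pyth 20 :: [(Int,Int,Int)]` enumerates the  
triples; switching to another instance needs no change to `pyth`.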


The fmlist package provides a list datatype that is similar to LogicT  
and an instance of `MonadPlus`. stream-monad is a surprisingly space  
efficient way to approach infinite search spaces but is also worth  
trying for finite searches. If you want to go parallel try tree-monad  
in combination with parallel-tree-search. And if stream-monad still  
uses too much memory for an infinite search space, try iterative  
deepening search from level-monad.


Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Curry alive?

2010-11-02 Thread Sebastian Fischer

Hi Gregory,

On Nov 2, 2010, at 9:27 AM, Gregory Crosswhite wrote:

I was thinking about using Curry, but it looks to me like the  
language is dead and hasn't seen much activity for a few years.


The community is smaller than the Haskell community but the PAKCS  
system is still actively developed. MCC is a compiler that generates C  
(rather than Prolog) code (which is often more efficient) but does not  
come with the same libraries as PAKCS.


Actually, the more that I think about my problem the more that I'm  
thinking I should just stick with the List monad.


If this is feasible then staying in Haskell might be a good choice.

Which does raise the question: when is it better to use a logic  
programming language instead of the list monad?


It is more cumbersome to model logic variables and unification in a  
pure functional language than having them provided natively. If you  
need unification or constraint solving then a language with built-in  
logic variables has an advantage.


An advantage of combining laziness with non-determinism as in Curry is  
that you can interleave the non-deterministic generation of nested  
data with categorizing it, which is more efficient if the evaluation  
function is lazy. This combination is more cumbersome in a pure  
functional language than in a lazy functional-logic language like  
Curry (see [1] for how to do it). If you always need to generate  
elements completely before you categorize them, you probably do not  
gain from lazy non-determinism.


Cheers,
Sebastian

 [1]: http://www-ps.informatik.uni-kiel.de/~sebf/data/pub/icfp09.pdf
  http://sebfisch.github.com/explicit-sharing/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Red links in the new haskell theme

2010-10-30 Thread Sebastian Fischer

On Oct 30, 2010, at 9:15 PM, Henning Thielemann wrote:

To me it would make more sense if users could configure the colors  
of links in their browsers, like they configure fonts and font sizes.


Most browsers support user style sheets: google.com/search?q=user+style+sheet


Most people who responded when opinions were collected about the  
new theme preferred that.


Possibly, when seeing how it looks, people will change their minds.


During the poll, the new style was shown on a demo page.

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Monads and Functions sequence and sequence_

2010-10-29 Thread Sebastian Fischer

On Oct 30, 2010, at 2:30 PM, Mark Spezzano wrote:


If you use the type with Maybe Int like so:

sequence [Just 1, Nothing, Just 2]

then the result is Nothing.

Whereas sequence [Just 1, Just 2, Just 3] gives

Just [1, 2, 3]


Try

do x <- Just 1
   y <- Nothing
   z <- Just 2
   return [x,y,z]

and

do x <- Just 1
   y <- Just 2
   z <- Just 3
   return [x,y,z]

The results are the same as with your calls of `sequence`. It is `>>=`  
which makes the difference, not `sequence`.
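
For the record, the first block desugars to the chain below, and since  
Nothing >>= k = Nothing in the Maybe monad, the overall result is  
Nothing (the name `example` is only for illustration):

    example :: Maybe [Int]
    example = Just 1  >>= \x ->
              Nothing >>= \y ->
              Just 2  >>= \z ->
              return [x,y,z]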


Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Red links in the new haskell theme

2010-10-28 Thread Sebastian Fischer

Maybe we can keep at least the docs without red links.


Pick the Classic style in the style menu. It will remember your  
choice.


Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Scrap your rolls/unrolls

2010-10-23 Thread Sebastian Fischer

Hi Max,

neat idea! Haskell supports laziness even on the type level ;)

I tried to play with your code but did not get very far. I quickly ran  
into two problems.


On Oct 22, 2010, at 7:37 PM, Max Bolingbroke wrote:


The annoying part of this exercise is that the presence of a Force in
the functor definition (e.g. ListF) means that you can't make them into
actual Functor instances! The fmap definition gives you a function of
type (a -> b) and you need one of type (Force a -> Force b). However,
you can make them into a category-extras:Control.Functor.QFunctor
instance


I think `Control.Functor.Categorical.CFunctor` is a more natural  
replacement for functor here. One can define


instance CFunctor (ListF a) ForceCat Hask

and I was hoping that I could define `fold` based on CFunctor but I  
did not succeed. The usual definition of `fold` is


fold :: Functor f => (f a -> a) -> Fix f -> a
fold f = f . fmap (fold f)

and I tried to replace this with

fold :: CFunctor f ForceCat Hask => ...

but did not find a combination of type signature and definition that  
compiled.


My second problem came up when writing a `Show` instance for the  
`List` type. This works:


instance Show a => Show (List a) where
  show Nil = "Nil"
  show (Cons x xs) = "(Cons " ++ show x ++ " " ++ show xs ++ ")"

But trying to avoid TypeSynonymInstances leads to a non-terminating  
`show` function:


instance (Show a, Show (Force rec)) => Show (ListF a rec) where
  show Nil = "Nil"
  show (Cons x xs) = "(Cons " ++ show x ++ " " ++ show xs ++ ")"

Shouldn't both work?

Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector-space and standard API for vectors

2010-10-23 Thread Sebastian Fischer

On Oct 24, 2010, at 8:52 AM, wren ng thornton wrote:

But then, how should we decide whether the additive or  
multiplicative structure is more neutral?


On Oct 24, 2010, at 7:08 AM, Jacques Carette wrote:

People usually use additive notation for commutative monoids, and  
multiplicative notation for generic monoids.  It's a convention,  
nothing else.


I recently used

class Monoid m where
  one :: m
  (*) :: m -> m -> m

class CommutativeMonoid m where
  zero :: m
  (+)  :: m -> m -> m

class (Monoid s, CommutativeMonoid s) => Semiring s

(The `Semiring` class only serves as a contract for additional laws.)

Considering the convention Jacques mentions and my wish for splitting  
the two monoids underlying a semiring into separate classes, it seemed  
natural to use multiplicative notation for the neutral case.
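
As a small illustration of why the split pays off, Bool carries both  
structures at once (a sketch; it hides the Prelude's (*), (+), and  
Monoid so that the class names above can be reused):

    import Prelude hiding (Monoid, (*), (+))

    -- the classes from above, repeated to keep the sketch self-contained
    class Monoid m where
      one :: m
      (*) :: m -> m -> m

    class CommutativeMonoid m where
      zero :: m
      (+)  :: m -> m -> m

    class (Monoid s, CommutativeMonoid s) => Semiring s

    instance Monoid Bool where
      one = True
      (*) = (&&)

    instance CommutativeMonoid Bool where
      zero = False
      (+)  = (||)

    instance Semiring Bool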


Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Implicit 'forall' in data declarations

2010-10-22 Thread Sebastian Fischer


On Oct 22, 2010, at 5:20 PM, Simon Peyton-Jones wrote:


I vote for this.  In fact I've created a ticket for it.
http://hackage.haskell.org/trac/ghc/ticket/4426


+1 (I decided to prefer simplicity over convenience in this case)

Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Implicit 'forall' in data declarations

2010-10-21 Thread Sebastian Fischer

Hi Simon,

thank you for your explanations. Now I understand why type variables  
are quantified at the outermost position in function types.


My confusion was caused by implicit quantifications in data type  
declarations. These are not (explicitly) mentioned in the  
documentation that you have linked and they only occur when a type  
class context is given.



In a data type decl
   data Foo = Foo (Eq a => a)
the top of the type is done separately for each argument.  After
all, Foo (Eq a => a) isn't a type.  So you get
   data Foo = Foo (forall a. Eq a => a)


This was a surprise as

data Bar = Bar (a -> a)

is illegal and *not* equivalent to

data Bar = Bar (forall a . a -> a)

(at least in GHC 6.12.3)

This led me into thinking that the type class context causes  
quantification, which was apparently wrong.


In fact, I think that according to the documentation of implicit  
quantification it is unclear whether the definitions of Foo and Bar  
(without explicit forall) are legal. I expected both to be illegal and  
am surprised that one is legal and the other is not.


If the top level of user written types includes data constructor  
arguments, then probably both should be legal. On the other hand it  
would probably be surprising if one could write


data Id type = Id typ

without getting an error message.


Suppose you wrote

bar :: (Eq a => a) -> a

Then which of these two do you want?

bar :: forall a. (forall a. Eq a => a) -> a
bar :: forall a. (Eq a => a) -> a

And then what's the rule for lexically scoped tyvars?


I'm afraid I don't understand your question. But I'm fine with the  
current behaviour of implicit quantification in function types.


Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Implicit 'forall' in data declarations

2010-10-21 Thread Sebastian Fischer

On Oct 21, 2010, at 9:58 PM, Simon Peyton-Jones wrote:


 An implicit quantification point is
   a) the type in a type signature f :: type, or
   b) a type of form (context => type), that is not
enclosed by an explicit 'forall'


not quite:

foo :: forall a . a -> (Ord b => b) -> ()

According to your explanation, there is one implicit quantification  
point: the whole type of `foo`. The type `Ord b => b` is not an implicit  
quantification point because it is enclosed by an explicit  
'forall' (for a certain definition of enclosed by). The new  
explanation implies that the type should be


foo :: forall b . forall a . a -> (Ord b => b) -> ()

but it is

foo :: forall a . a -> (forall b . Ord b => b) -> ()

instead.

Apparently, if there is an explicit 'forall' (that does not mention  
`b`) then the implicit 'forall' is inserted at the context. If there  
is no explicit 'forall' then it is inserted at the top level.


My impression is the following:

An implicit quantification point is
  a) the type in a type signature f :: type or
  b) a type of the form (context => type)
if it does not start with an explicit 'forall'

Then the only implicit quantification point in the type of `foo` is  
the type `Ord b => b`, which matches the behaviour of GHC.


Is this what you meant?

Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Implicit 'forall' in data declarations

2010-10-21 Thread Sebastian Fischer

On Oct 21, 2010, at 9:58 PM, Simon Peyton-Jones wrote:

So GHC's behaviour is probably about right, but the description is  
wrong.


On Oct 21, 2010, at 11:14 PM, Sebastian Fischer wrote:


   An implicit quantification point is
 a) the type in a type signature f :: type or
 b) a type of the form (context => type)
   if it does not start with an explicit 'forall'


I have mixed feelings about b). A special treatment of contexts is  
convenient but also confusing because they are treated differently  
depending on where they occur.


It is convenient because one can omit 'forall's in higher-rank types  
if they use contexts. It is confusing because types with context in  
type signatures are treated differently than types with context in  
data declarations (because type signatures are implicit quantification  
points and arguments in data declarations are not.)


How inconvenient would it be to make the above description simpler by  
dropping b) and thus requiring more explicit 'forall's? I don't know  
what I prefer. I like both convenience and simplicity of explanations  
and cannot judge which alternative has more convincing arguments in  
this case.


Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Implicit 'forall' in data declarations

2010-10-15 Thread Sebastian Fischer

Hello,

GHC 6.12.3 allows omitting the explicit quantification of
higher-rank type variables using 'forall' in data types if they
appear in a type class context

{-# LANGUAGE RankNTypes #-}
data Foo = Foo (Eq a => a)

Is this implicit introduction of 'forall' intended? If it is, why
does it not work in function types? The following is not accepted
by my GHC:

bar :: Eq b => (Eq a => a) -> b
bar x = x

The error message is

All of the type variables in the constraint `Eq a'
are already in scope (at least one must be universally quantified  
here)

(Use -XFlexibleContexts to lift this restriction)

Using `FlexibleContexts` the signature of `bar` seems to be
interpreted as

bar :: (Eq b, Eq a) => a -> b

because then the error becomes

Couldn't match expected type `b' against inferred type `a'

So unlike in data-type declarations, a 'forall' in a function type
must be written explicitly even if the quantified variable appears in  
a local type class constraint.


bar :: Eq b => (forall a . Eq a => a) -> b
bar x = x

I have not yet installed GHC 7. Is this inconsistency between data and  
function declarations intended or has it been changed in the new type  
checker?


Sebastian
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Pointed (was: Re: Restricted type classes)

2010-09-07 Thread Sebastian Fischer


On Sep 7, 2010, at 5:23 AM, wren ng thornton wrote:

In particular, one of the primary complaints against the Monad class  
is precisely the fact that it *fails* to mention the Functor class  
as a (transitive) dependency. Why should we believe that making unit  
independent from fmap will fare any better?


A noteworthy difference is that fmap can be defined in terms of return  
and >>= but not in terms of point. IMHO, this is a good reason why  
fmap should be defined for every Monad and may be missing for some  
Pointed.
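
Concretely, for any monad one can take (the name `fmapM` is only for  
this sketch):

    fmapM :: Monad m => (a -> b) -> m a -> m b
    fmapM f m = m >>= return . f

whereas `point` alone gives no way to reach the value inside the  
container, so no analogous definition exists.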


Sebastian


--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] overloaded list literals?

2010-09-06 Thread Sebastian Fischer


On Sep 6, 2010, at 12:23 PM, Johannes Waldmann wrote:


We have overloaded numerical literals (Num.fromInteger)
and we can overload string literals (IsString.fromString),
so how about using list syntax ( [], : )
for anything list-like (e.g., Data.Sequence)?


As lists of some type A represent the free monoid over A, what if

[x,y,z]

would be syntactic sugar for

mconcat (map point (x:y:z:[]))

with

class Pointed p where point :: a -> p a

Then list literals could be used for every pointed monoid.

Note that this only considers list literals. The (:) and []  
constructors would not be overloaded.
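
To spell the proposal out for one non-list pointed monoid (a sketch;  
`Pointed` is the class above and `example` is a made-up name):

    import qualified Data.Set as Set
    import Data.Monoid (mconcat)

    class Pointed p where
      point :: a -> p a

    instance Pointed Set.Set where
      point = Set.singleton

    -- what the literal [1,2,3] would desugar to at type Set.Set Int
    example :: Set.Set Int
    example = mconcat (map point (1:2:3:[]))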


Sebastian


--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graphics.Drawing

2010-09-06 Thread Sebastian Fischer


On Sep 6, 2010, at 1:57 PM, han wrote:

So the question is: Do you agree that Graphics.Rendering.OpenGL  
actually should have been Graphics.OpenGL (or just OpenGL) for  
wieldiness? If you don't, what is your reason? I would like to know.


Often, when this topic comes up, someone claims that ontology is  
overrated [1]. This time it's me.


Sebastian

[1]: http://www.shirky.com/writings/ontology_overrated.html


--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] overloaded list literals?

2010-09-06 Thread Sebastian Fischer


On Sep 6, 2010, at 1:47 PM, Stefan Holdermans wrote:

In general, it is kind of unfortunate that type classes and type  
constructors share a namespace, even though there is no way to ever  
mix them up.


Class and type names mix in im- and export lists. IIRC, this is the  
reason for putting them in a common name space.



--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Restricted type classes

2010-09-05 Thread Sebastian Fischer

Just because we don't have
a use now doesn't mean it might not be useful in the future.


I am suspicious about complicating a design for potential future  
benefits.


However, difference lists provide an example of a type that support  
Pointed more naturally than Applicative: the dlist package [1]  
provides Applicative and Monad instances but only by converting to  
normal lists in between.


Note that even fmap cannot be defined without converting difference  
lists to normal lists in between. The natural interface to difference  
lists would be Pointed (without a Functor superclass) and Monoid.
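
For reference, here is the core of the difference-list representation  
(a simplified sketch of what dlist provides; the names differ from the  
package):

    newtype DList a = DList ([a] -> [a])

    -- the pointed-monoid interface needs no conversion to ordinary lists
    point :: a -> DList a
    point x = DList (x:)

    empty :: DList a
    empty = DList id

    append :: DList a -> DList a -> DList a
    append (DList f) (DList g) = DList (f . g)

    -- but mapping over the elements has to go through ordinary lists
    toList :: DList a -> [a]
    toList (DList f) = f []

    fromList :: [a] -> DList a
    fromList xs = DList (xs ++)

    mapDL :: (a -> b) -> DList a -> DList b
    mapDL f = fromList . map f . toList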


Sebastian

[1]: http://hackage.haskell.org/package/dlist


--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Crypto-API is stabilizing

2010-09-03 Thread Sebastian Fischer

[CC'ing maintainer of MonadRandom]

On Sep 3, 2010, at 1:59 AM, Thomas DuBuisson wrote:


 data Key = Key {
  encrypt   :: B.ByteString -> B.ByteString,
  decrypt   :: B.ByteString -> B.ByteString,
  keyLength :: BitLength,
  serialize :: B.ByteString}

 rsa :: RandomGen g => BitLength -> g -> ((Key,Key), g)


One reason against this is simply that all the other constructs
(block/stream cipher, hashes) are classes, it would be odd for there
to be a single exception.  A better reason is the data structure has
no way to implement generateKeyPair.


Also, the type-class approach is extensible in that new operations  
(for example for signing) can be added via subclasses. Later extending  
the key type above requires nesting.



Why not use

   generateKeypair :: MonadRandom m => BitLength -> m (Maybe (p,p))


Because MonadRandom dictates mtl, and is heavier weight than a single
class.  I was hoping to keep this agnostic (mtl is only required for
testing or benchmarks in crypto-api).


I think mtl is only used for the instances, not for the class itself.  
Maybe the maintainer of MonadRandom is inclined to split the package  
if this would raise the number of users of the class.



If MR is the more agreeable path
then I'll do it, though this means I use the unholy fail function.


You don't want to use monads because the Monad class defines the fail  
function?



Even if that's the case (and more people weighing in would help) I
still want to include Data.Crypto.Random and welcome comments.


An advantage of using a MonadRandom class would be that the CryptoAPI  
would be independent of RandomGen or your new alternative. One could  
define random monads based on either.


Sebastian

--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Crypto-API is stabilizing

2010-09-03 Thread Sebastian Fischer


On Sep 3, 2010, at 10:40 AM, Sebastian Fischer wrote:

An advantage of using a MonadRandom class would be that the  
CryptoAPI would be independent of RandomGen or your new alternative.  
One could define random monads based on either.


I was wrong. The MonadRandom class uses the Random class which uses  
RandomGen.





--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Crypto-API is stabilizing

2010-09-02 Thread Sebastian Fischer


On Aug 27, 2010, at 11:12 AM, Heinrich Apfelmus wrote:

Is it actually necessary to use a type class here? The situation is  
very similar to


  Luke Palmer. Haskell Antipattern: Existential Typeclass.
  http://lukepalmer.wordpress.com/2010/01/24/

I suggest to use good old data types

  data Key = Key {
   encrypt   :: B.ByteString -> B.ByteString,
   decrypt   :: B.ByteString -> B.ByteString,
   keyLength :: BitLength,
   serialize :: B.ByteString}

  rsa :: RandomGen g => BitLength -> g -> ((Key,Key), g)


In general, I like this approach, but what are

encrypt privateKey

or

decrypt publicKey

supposed to do? A type-class solution also does not *prevent*  
programmers from performing such non-sensical calls, but the data-type  
solution *forces* programmers to provide non-sensical encrypt and  
decrypt functions when creating the public and private keys.
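
For instance, building a public key already forces a choice for its  
decrypt field (a hypothetical value; `id`, the error call, and the  
BitLength synonym are only placeholders):

    import qualified Data.ByteString as B

    type BitLength = Int

    data Key = Key {
      encrypt   :: B.ByteString -> B.ByteString,
      decrypt   :: B.ByteString -> B.ByteString,
      keyLength :: BitLength,
      serialize :: B.ByteString}

    publicKey :: Key
    publicKey = Key {
      encrypt   = id,                                 -- placeholder
      decrypt   = error "public keys cannot decrypt", -- forced, non-sensical
      keyLength = 2048,
      serialize = B.empty}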



class (Binary p, Serialize p) => AsymCipher p where
  generateKeypair :: RandomGen g => g -> BitLength -> Maybe ((p,p),g)
  encryptAsym     :: p -> B.ByteString -> B.ByteString
  decryptAsym     :: p -> B.ByteString -> B.ByteString
  asymKeyLength   :: p -> BitLength


Why not use

generateKeypair :: MonadRandom m => BitLength -> m (Maybe (p,p))

where MonadRandom is from [1].

Sebastian

[1]: http://hackage.haskell.org/package/MonadRandom


--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Crypto-API is stabilizing

2010-09-02 Thread Sebastian Fischer

On Sep 3, 2010, at 12:07 AM, Sebastian Fischer wrote:


Why not use

   generateKeypair :: MonadRandom m => BitLength -> m (Maybe (p,p))


Or if the choice to generate keys or not should solely depend on the  
BitLength (and not on the random generator):


generateKeypair :: MonadRandom m => BitLength -> Maybe (m (p,p))



--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Statically tracking validity - suggestions?

2010-09-01 Thread Sebastian Fischer
does anybody know of anything on Hackage for testing whether two  
values are approximately equal?


http://hackage.haskell.org/package/approximate-equality

--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

