Re: [Haskell-cafe] A question about laziness and performance in document serialization.

2013-08-22 Thread Roman Cheplyaka
* Kyle Hanson hanoo...@gmail.com [2013-08-20 18:23:48-0700]
 So I am not entirely clear on how to optimize for performance with lazy
 bytestrings.
 
 Currently I have a (Lazy) Map that contains large BSON values (more than
 1mb when serialized each). I can serialize BSON documents to Lazy
 ByteStrings using Data.Binary.runPut. I then write this bytestring to a
 socket using Network.Socket.ByteString.Lazy.
 
 My question is this: if the Map object doesn't change (no updates), when it
 serializes the same document to the socket twice in a row, does it
 re-evaluate the whole BSON value and convert it to a bytestring each time?

Yes.

 Lets say I wanted to have a cache of bytestings so I have another Map
 object that has the serialized bytestrings that I populate it with every
 time the original BSON Map changes. Should the map be strict or lazy?

This is the wrong question. The right question is: do you want the
values to be strict (evaluated) or lazy (kept unevaluated until required)?

If you want values to be lazy, then you have to use the lazy Map.

If you want values to be strict, then you may either use the strict Map,
or still use the lazy Map but make sure that the values are evaluated
when you place them in the map. Using the strict Map is probably a
better idea, but the lazy Map lets you have finer control over what is
lazy and what is forced (should you need it).
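
For instance, a small sketch of the two options (the helper names are mine):
Data.Map.Strict evaluates the value for you on insert, while with
Data.Map.Lazy you force it yourself at the point of insertion.

    import qualified Data.Map.Lazy   as ML
    import qualified Data.Map.Strict as MS

    -- The strict API evaluates the value (to WHNF) before storing it:
    insertStrict :: Ord k => k -> v -> MS.Map k v -> MS.Map k v
    insertStrict = MS.insert

    -- With the lazy API you keep control and force only where you want to:
    insertForced :: Ord k => k -> v -> ML.Map k v -> ML.Map k v
    insertForced k v m = v `seq` ML.insert k v m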

Note that the lazy bytestring is just a lazy list of strict bytestrings.
Even placing it in the strict map wouldn't force its evaluation.
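
Concretely (a tiny illustration; the helper name is made up):

    import qualified Data.ByteString      as BS
    import qualified Data.ByteString.Lazy as BL

    -- A lazy ByteString is essentially a lazy [BS.ByteString]; forcing the
    -- outer value does not force the chunks.  Flattening it does all the
    -- work at once and yields one contiguous strict buffer:
    flatten :: BL.ByteString -> BS.ByteString
    flatten = BL.toStrict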

 Should the bytestrings it stores be strict or lazy?

For a cache, it makes sense to store strict bytestrings (unless they are
so large that it may be hard to allocate that much contiguous space).

Lazy bytestrings are useful for streaming, when you use a chunk and then
discard it.

Using strict bytestrings doesn't imply that you want to store them
evaluated. Depending on your circumstances, it may be a good idea to
store strict bytestrings lazily, so that they do not take space and time
until they are requested for the first time.
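
As a sketch of such a cache (putDoc stands in for whatever Data.Binary Put
serializer you already use per BSON document; the key type is left open):

    import           Data.Binary.Put      (Put, runPut)
    import qualified Data.ByteString      as BS
    import qualified Data.ByteString.Lazy as BL
    import qualified Data.Map.Strict      as M

    -- Rebuild the cache whenever the original BSON map changes.
    -- Data.Map.Strict forces each serialized bytestring as it is stored;
    -- swap in Data.Map.Lazy if you prefer to pay per entry on first use.
    rebuildCache :: (doc -> Put) -> M.Map k doc -> M.Map k BS.ByteString
    rebuildCache putDoc = M.map (BL.toStrict . runPut . putDoc)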

Simply operating with the words lazy and strict may be very confusing,
since they refer to different things in different contexts. Every time
you read that something is lazy or strict, try to decipher it in terms
of the basic evaluation properties.

HTH,
Roman




Re: [Haskell-cafe] haskore - lilypond - typesetting?

2013-08-22 Thread Al Matthews
Abjad, which is a Python library, is probably worth study here for its
integration and general re-embrace of Lilypond as a compositional tool.

http://www.projectabjad.org/

Note that in Lilypond one can define a Scheme function over, for example, a
set of notes.

Al Matthews
-- http://fatmilktv.com

On Aug 21, 2013 12:41 PM, Stephen Tetley stephen.tet...@gmail.com wrote:


 Here's one I did earlier...

 http://www.flickr.com/photos/44929957@N03/4459628487/lightbox/

 This is a Haskore implementation of Chick Corea's Child Song 6 rendered
to LilyPond - I don't imagine Mr. Corea's publishers will be sending me a
takedown request any time soon.

 There's a lot missing from Haskore that is needed to make good scores -
the renderer of the above took a lot of effort with metrical grouping but
the result is still abysmal.

 I doubt mathematics can help (common practice) music typesetting much -
Western notation has had a thousand years to develop without the constraint
of a regular syntax; so if Lilypond is horrible it is mostly the fault of
what it tries to typeset (it does make some unwarranted mistakes like
over-restricting the characters it can use for variable names and its
parenthesizing is horrible).


 On 21 August 2013 14:05, Johannes Waldmann waldm...@imn.htwk-leipzig.de
wrote:

 I tried using lilypond ( http://www.lilypond.org/ )
 for typesetting of sheet music.

 While the output looks nice, the input language IMHO is quite horrible,
 because the underlying data/execution model is underspecified.
 For some parts, it tries to describe the logical structure of the score;
 but for others, the layout; and in addition it has several non-obvious
 context-dependencies (but see below), preventing modularity.

 Is there a better option? E.g., starting from a clear mathematical model,
 as in Haskore, and using lilypond only as a PDF rendering engine?

 Do I want hly / hts perhaps?  http://rd.slavepianos.org/?t=hly


 As I see it, the main high-level design problem
 is that the source language needs partial-evaluation annotations
 for applications of abstractions: sometimes they should be expanded
 (for MIDI rendering, always) and sometimes not (in typesetting,
 to create repetition marks instead of actually repeating notes).


 PS: I agree that some of lilypond's context dependencies
 (relative pitch, implicit note length) do really save
 large amounts of tedious typing: c4 e g a c1 is much more economical
 than [c 1 qn, e 1 qn, g 1 qn , a 1 qn, c 2 fn]
 which I guess is the Haskore equivalent.






Re: [Haskell-cafe] Yet Another Forkable Class

2013-08-22 Thread Alberto G. Corona
The paper is very interesting:

http://www.cs.indiana.edu/~sabry/papers/exteff.pdf

It seems that the approach is mature and better in every way than monad
transformers, while at the same time the syntax can remain almost identical
to MTL for many uses.

I only hope to see the library on Hackage with all the blessings, and
with all the instances of the MTL classes, in order to make the transition
from monad transformers to ExtEff as transparent as possible.


2013/8/22 o...@okmij.org


 Perhaps effect libraries (there are several to choose from) could be a
 better answer to Fork effects than monad transformers. One lesson from
 the recent research in effects is that we should start thinking what
 effect we want to achieve rather than which monad transformer to
 use. Using ReaderT or StateT or something else is an implementation
 detail. Once we know what effect to achieve we can write a handler, or
 interpreter, to implement the desired operation on the World, obeying
 the desired equations. And we are done.

 For example, with the ExtEff library, with which I'm more familiar, the
 Fork effect would take as an argument a computation that cannot throw
 any requests. That means that the parent has to provide interpreters
 for all child effects. It becomes trivial to implement:

  Another example would be a child that should not be able to throw errors
 as
  opposed to the parent thread.
 It is possible to specify which errors will be allowed for the child
 thread (the ones that the parent will be willing to reflect and
 interpret). The rest of the errors will then be statically prohibited.

  instance (Protocol p) => Forkable (WebSockets p) (ReaderT (Sink p) IO) where
  fork (ReaderT f) = liftIO . forkIO . f =<< getSink

 This is a good illustration of too much implementation detail. Why do we
 need to know of (Sink p) as a Reader layer? Would it be clearer to
 define an Effect of sending to the socket? The computation's type would
 then make it plain that the computation sends to the socket.
 The parent thread, before forking, has to provide a handler for that
 effect (and the handler will probably need a socket).
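
 As a toy sketch of that idea (this is not the ExtEff API -- Send, send and
 runSend are made-up names): the child's only capability is emitting bytes,
 and the parent's handler is the one that owns the actual sink.

   {-# LANGUAGE DeriveFunctor #-}
   import qualified Data.ByteString as BS

   -- A bare-bones free monad; the real library uses a fancier open union,
   -- but the shape of the idea is the same.
   data Free f a = Pure a | Impure (f (Free f a))
     deriving Functor

   instance Functor f => Applicative (Free f) where
     pure = Pure
     Pure g   <*> x = fmap g x
     Impure m <*> x = Impure (fmap (<*> x) m)

   instance Functor f => Monad (Free f) where
     Pure a   >>= k = k a
     Impure m >>= k = Impure (fmap (>>= k) m)

   -- The one request the child may make: send some bytes, then continue.
   data Send k = Send BS.ByteString k
     deriving Functor

   send :: BS.ByteString -> Free Send ()
   send bs = Impure (Send bs (Pure ()))

   -- The parent's handler: it owns the sink (here just an IO action), so
   -- the child's type never mentions sockets or ReaderT at all.
   runSend :: (BS.ByteString -> IO ()) -> Free Send a -> IO a
   runSend _    (Pure a)             = return a
   runSend sink (Impure (Send bs k)) = sink bs >> runSend sink k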

 Defining a new class for each effect is possible but not needed at
 all. With monad transformers, a class per effect is meant to hide the
 ordering of transformer layers in a monad transformer stack. Effect
 libraries abstract over the implementation details out of the
 box. Crutches -- extra classes -- are unnecessary. We can start by
 writing handlers on a case-by-case basis. Generalization, if any,
 will be easier to see. From my experience, generalizing from concrete
 cases is easier than trying to write (too) general code at the
 outset. All too often, in my reading and experience, code that is meant to be
 reusable ends up hardly usable.








-- 
Alberto.


Re: [Haskell-cafe] Yet Another Forkable Class

2013-08-22 Thread John ExFalso
To be honest I'm not so sure about these effects... Simply the fact that
the Member class needs -XOverlappingInstances means that we cannot have
duplicate or polymorphic effects: it will arbitrarily pick the first match
in the former case and fail to compile in the latter.

Furthermore, I don't really understand the way open sums are implemented.
These unions should be disjoint, but the way they're implemented in the
paper they try to be true unions, which cannot be done, as that would need
type equality (-XOverlappingInstances is a hack around this).

A correct disjoint open sum would behave well with duplicate and
polymorphic types in the type list. For example we should be able to
project the open sum equivalent of Either String String into the second
String but we cannot with the implementation in the paper. This means we
need to ~index~ the type list instead of picking the result type and
trying for equality with each entry. Something like this:
http://lpaste.net/92069

Of course this is very inconvenient and simply replaces the monad
transformers' lifts with a static index into the effect list.
In general I think there is no convenient way of stacking effects that is
also type safe. At some point we have to disambiguate, one way or another,
which effect we are trying to use. The implementation in the paper simply
picks a heuristic and chooses the first effect that seems to match and
discards the others.
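
To make the indexing idea concrete, here is a toy sketch (it is not the
lpaste snippet above, just an illustration): the position in the type list
is part of the value, so duplicate entries are unambiguous.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    data Nat = Z | S Nat

    -- Evidence that type 't' sits at position 'n' of the type-level list 'ts'.
    data Index (n :: Nat) (ts :: [*]) (t :: *) where
      IZ :: Index 'Z (t ': ts) t
      IS :: Index n ts t -> Index ('S n) (t' ': ts) t

    data Union (ts :: [*]) where
      Union :: Index n ts t -> t -> Union ts

    injAt :: Index n ts t -> t -> Union ts
    injAt = Union

    -- Projection succeeds only when the stored and requested indices agree.
    prjAt :: Index n ts t -> Union ts -> Maybe t
    prjAt IZ     (Union IZ x)     = Just x
    prjAt (IS i) (Union (IS j) x) = prjAt i (Union j x)
    prjAt _      _                = Nothing

    -- The Either String String example: the index picks the second String.
    type TwoStrings = '[String, String]

    secondString :: Union TwoStrings
    secondString = injAt (IS IZ) "second"

    getSecond :: Union TwoStrings -> Maybe String
    getSecond = prjAt (IS IZ)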





Re: [Haskell-cafe] Yet Another Forkable Class

2013-08-22 Thread suhorng Y
For the open union used in extensible effects, apart from using the
Typeable mechanism, is there a more protected way to implement
the open sum type?

I managed to modify the Member class given in the paper, but
ended up having to use the vague OverlappingInstances. That's not
quite what I hoped for. I'm not even sure whether the instance
`Member t (t :> r)` is more specific than `Member t (t' :> r)`.

--
suhorng

{-# LANGUAGE KindSignatures, TypeOperators, GADTs, FlexibleInstances,
 FlexibleContexts, MultiParamTypeClasses, OverlappingInstances
#-}
-- FlexibleContexts is for Show instance of Union

import Data.Functor
import Control.Applicative -- for several functor instances

-- open union
infixr 2 :>
data (a :: * -> *) :> b

data Union r v where
  Elsewhere :: Functor t' => Union r v -> Union (t' :> r) v
  Here :: Functor t => t v -> Union (t :> r) v

class Member t r where
  inj :: Functor t => t v -> Union r v
  prj :: Functor t => Union r v -> Maybe (t v)

instance Member t (t :> r) where
  inj tv = Here tv
  prj (Here tv) = Just tv
  prj (Elsewhere _) = Nothing

-- Note: overlapped by letting t' = t
instance (Functor t', Member t r) => Member t (t' :> r) where
  inj tv = Elsewhere (inj tv)
  prj (Here _)  = Nothing
  prj (Elsewhere u) = prj u

decomp :: Functor t => Union (t :> r) v -> Either (Union r v) (t v)
decomp (Here tv) = Right tv
decomp (Elsewhere u) = Left u

-- Auxiliary definitions for tests
data Void
newtype Func a = Func a

instance Show (Union Void a) where
  show _ = undefined

instance (Show (t v), Show (Union r v)) => Show (Union (t :> r) v) where
  show (Here tv) = "Here " ++ show tv
  show (Elsewhere u) = "Elsewhere " ++ show u

instance Functor Func where
  fmap f (Func x) = Func (f x)

instance Show a = Show (Func a) where
  show (Func a) = show a

type Stk = Maybe :> Either Char :> Func :> Void
type Stk' = Either Char :> Func :> Void -- used in `deTrue`, `deFalse`

unTrue :: Union Stk Bool
unTrue = inj (Func True)

unFalse :: Union Stk Bool
unFalse = inj (Just False)

-- `Func` is repeated
un5 :: Union (Maybe :> Func :> Either Char :> Func :> Void) Int
un5 = inj (Func 5)

maybe2 :: Maybe (Func Int)
maybe2 = prj un5

maybeTrue :: Maybe (Func Bool)
maybeTrue = prj unTrue

maybeFalse1 :: Maybe (Func Bool)
maybeFalse1 = prj unFalse

maybeFalse2 :: Maybe (Maybe Bool)
maybeFalse2 = prj unFalse

deTrue :: Either (Union Stk' Bool) (Maybe Bool)
deTrue = decomp unTrue

deFalse :: Either (Union Stk' Bool) (Maybe Bool)
deFalse = decomp unFalse




[Haskell-cafe] Lifting strictness to types

2013-08-22 Thread Thiago Negri
I've just read the post "Destroying Performance with Strictness" by Neil
Mitchell [1].

One of the comments from an Anonymous says:

How hard would it be to lift strictness annotations to type-level? E.g.
instead of
f :: Int -> Int
f !x = x + 1
write
f :: !Int -> Int
f x = x + 1
which would have the same effect. At least it would be transparent to the
developer using a particular function.
The problem I see with this approach is with type classes, as it would be
impossible to declare an instance with a strict implementation for a type
class that used lazy types.

Is this a real problem? Is it the only one?
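
For reference, here is what the two versions look like today (a small
sketch; the names are mine): the strictness is real, but nothing in the
type records it.

    {-# LANGUAGE BangPatterns #-}

    lazyInc :: Int -> Int
    lazyInc x = x + 1

    strictInc :: Int -> Int
    strictInc !x = x + 1   -- same as: strictInc x = x `seq` (x + 1)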

[1]
http://neilmitchell.blogspot.ru/2013/08/destroying-performance-with-strictness.html


Re: [Haskell-cafe] Lifting strictness to types

2013-08-22 Thread Tom Ellis
On Thu, Aug 22, 2013 at 12:51:24PM -0300, Thiago Negri wrote:
 How hard would it be to lift strictness annotations to type-level? E.g.
 instead of
  f :: Int -> Int
  f !x = x + 1
  write
  f :: !Int -> Int
  f x = x + 1
 which would have the same effect. At least it would be transparent to the
 developer using a particular function.

See also the recent Reddit thread


http://www.reddit.com/r/haskell/comments/1ksu0v/reasoning_about_space_leaks_with_space_invariants/cbsac5m

where I and others considered the possibility of a strict language with an
explicit thunk datatype.  NB OCaml essentially already has this

http://caml.inria.fr/pub/docs/manual-ocaml/libref/Lazy.html

but I think Haskellers would do it better because we have a lot of
experience with purity, laziness and monad and comonad transformers.

Tom



Re: [Haskell-cafe] Lifting strictness to types

2013-08-22 Thread Thiago Negri
I think Scala has this optional laziness too.
The problem with default strictness is that libraries built with no
laziness in mind turn out to be too strict.
Going from lazy to strict is possible on the client side, but the other way
around is impossible.
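
A tiny illustration of that asymmetry (force comes from the deepseq
package; useStrictly is a made-up name):

    import Control.DeepSeq (NFData, force)

    -- The client of a lazy library can always opt into strictness afterwards:
    useStrictly :: NFData a => (input -> a) -> input -> a
    useStrictly f = force . f

    -- There is no wrapper for the opposite direction: if the library itself
    -- already evaluated everything, the work was done before the client ever
    -- saw the result.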





[Haskell-cafe] instance Alternative ZipList

2013-08-22 Thread Stefan Mehner
I had an idea for instance Alternative ZipList, which doesn't seem to
exist so far. Maybe there just is no need for it. Please tell me what you
think.

After giving the instance definition I will add some intuition on why this
might be useful. Then some words on laws and other conceivable instances.

I posted this on stackoverflow before, to make sure it is new
(http://stackoverflow.com/questions/18210765/instance-alternative-ziplist-in-haskell).
J. Abrahamson proposed to move this to the list, so here it is.

The only thing I found so far is by AndrewC on stackoverflow:

  There are two sensible choices for Zip [1,3,4] <|> Zip [10,20,30,40]:
  Zip [1,3,4] because it's first - consistent with Maybe
  Zip [10,20,30,40] because it's longest - consistent with Zip [] being
discarded

(Here Zip is basically ZipList with the known Applicative instance.)

Proposed instance
=================

I think the answer should be Zip [1,3,4,40]. Let's see an instance:

 instance Alternative Zip where
   empty = Zip []
   Zip xs <|> Zip ys = Zip (go xs ys) where
     go [] ys = ys
     go xs [] = xs
     go (x:xs) (_:ys) = x : go xs ys

The only Zip a we can produce without knowing the type argument a is Zip
[] :: Zip a, so there is little choice for empty. If the empty list is the
neutral element of the monoid, we might be tempted to use list
concatenation as the monoid operation. However, go is not (++), since
every time we use one entry of the first argument list, we drop one of the
second. Thus we have a kind of overlay: The left argument list hides the
beginning of the right one (or all of it).

[ 1, 3, 4,40]   [10,20,30,40]   [ 1, 3, 4]   [ 1, 3, 4]
  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^
  |  |  |  |  |  |  |  |  |  |  |  |  |  |
[ 1, 3, 4] |[10,20,30,40]   []|  |  |[ 1, 3, 4]
[10,20,30,40]   [ 1, 3, 4]  [ 1, 3, 4]   []

(use monospace for ascii-art)
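
For reference, the Functor and Applicative instances this assumes (the same
zipping behaviour as ZipList), so that the Alternative instance above can
actually be loaded and played with, are roughly:

 import Control.Applicative

 newtype Zip a = Zip [a] deriving Show

 instance Functor Zip where
   fmap f (Zip xs) = Zip (map f xs)

 instance Applicative Zip where
   pure = Zip . repeat
   Zip fs <*> Zip xs = Zip (zipWith ($) fs xs)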

For the some/many methods I'd guess

 some (Zip z) = Zip (map repeat z)
 many (Zip z) = Zip (map repeat z ++ repeat [])

where some takes a ziplist and replaces every entry x by repeat x and many
does the same but additionally extends the ziplist with empty lists.

Probably not particularly useful, but that's what the recursive
definition of some and many gives.

What is it good for?


One intuition behind ziplists is processes: A finite or infinite stream of
results. When zipping, we combine streams, which is reflected by the
Applicative instance. When the end of the list is reached, the stream
doesn't produce further elements. This is where the Alternative instance
comes in handy: We can name a replacement, taking over as soon as the
default process terminates.

For example we could write

 fmap Just foo <|> pure Nothing

to wrap every element of the ziplist foo into a Just and continue with
Nothing afterwards. The resulting ziplist is infinite, reverting to a
default value after all (actual) values have been used up. This could of
course be done by hand by appending an infinite list inside the Zip
constructor. Yet the above is more elegant and does not assume knowledge
of constructors, leading to higher code reusability.
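
With those instances, a quick GHCi check of this idiom gives (assuming the
Zip newtype from above):

 ghci> let Zip r = fmap Just (Zip [1,2,3]) <|> pure Nothing
 ghci> take 5 r
 [Just 1,Just 2,Just 3,Nothing,Nothing]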

Another intuition one might have of ziplists is partial functions on
naturals. Using this analogy, <|> behaves like the Monoid instance of Map
and IntMap.

Lawfulness
==========

The definition of <|> given above is associative and the empty list really
is the empty element. We also have

 Zip [] <*> xs = fs <*> Zip [] = Zip []
 (fs <|> gs) <*> xs = fs <*> xs <|> gs <*> xs
 fs <*> (xs <|> ys) = fs <*> xs <|> fs <*> ys

so all the laws you could ask for are satisfied (which is not true for
list concatenation by the way).

This instance is consistent with the one for Maybe: Choice is biased to
the left, yet when the left argument is unable to produce a value, the
right argument takes over. The functions

 zipToMaybe :: Zip a -> Maybe a
 zipToMaybe (Zip []) = Nothing
 zipToMaybe (Zip (x:_)) = Just x

 maybeToZip :: Maybe a -> Zip a
 maybeToZip Nothing = Zip []
 maybeToZip (Just x) = pure x

are morphisms of alternatives (meaning psi x <|> psi y = psi (x <|> y) and
psi x <*> psi y = psi (x <*> y)).

Other options
=============

Before putting this up for discussion I have to say this was conceived in
an armchair: until now I don't really know of any concrete uses for this
instance. Maybe there are none.

Some words on AndrewC's instances (none of which has been put forward as a
serious suggestion).

Picking the longer list has a number of problems. When it comes to
infinite lists (which are introduced by pure), we get undefined values.
Also it's not very lazy: We have to evaluate both arguments until the
shorter list ends just to get the first entry of the result. When both
lists are of equal length we probably pick the left one, which defies the
laws (distributivity in particular). Finally, you could just write

 maximumBy (compare `on` length) [ys,xs]

which is surprisingly readable (and biased to the

Re: [Haskell-cafe] Lifting strictness to types

2013-08-22 Thread Bardur Arantsson
On 2013-08-22 18:19, Thiago Negri wrote:
 I think Scala has this optional laziness too.

Indeed, but it's _not_ apparent in types (which can be an issue).

Due to the somewhat weird constructor semantics of the JVM it also means
you can have immutable values which start out(!) as null and end up
being non-null.

Regards,




[Haskell-cafe] Hoogle vs Hayoo

2013-08-22 Thread jabolopes
Hi,

I noticed Hayoo appears as a link in the toolbox of
http://hackage.haskell.org and also that Hayoo seems to display better
results than Hoogle.  For example, if you search for 'PublicKey' in
Hayoo, you will get several results from Hackage libraries, such as,
'crypto-pubkey' and 'crypto-api'.  However, the same query in Hoogle
displays no results.

Is Hayoo the default Hackage search engine?
Is Hoogle deprecated?
What is the status?

Thank you,
Jose



Re: [Haskell-cafe] Hoogle vs Hayoo

2013-08-22 Thread Mateusz Kowalczyk

You could also try the Hoogle instance hosted by the FP Complete guys; it
indexes more stuff. It's at [1].

I hear that Hayoo actually does a better job getting the relevant
results but I am unsure how much truth there is to it. I always thought
it was just Hoogle with more indexed docs.


[1] - https://www.fpcomplete.com/hoogle


-- 
Mateusz K.



Re: [Haskell-cafe] monoids induced by Applicative/Alternative/Monad/MonadPlus?

2013-08-22 Thread Petr Pudlák
Or, if there are no such definitions, where would be a good place to add 
them?


Petr

On 08/20/2013 06:55 PM, Petr Pudlák wrote:


Dear Haskellers,

are these monoids defined somewhere?

import Control.Applicative
import Data.Monoid

newtype AppMonoid m a = AppMonoid (m a)
instance (Monoid a, Applicative m) => Monoid (AppMonoid m a) where
    mempty = AppMonoid $ pure mempty
    mappend (AppMonoid x) (AppMonoid y) = AppMonoid $ mappend <$> x <*> y
-- With the () monoid for `a` this becomes the monoid of effects.

newtype AltMonoid m a = AltMonoid (m a)
instance Alternative m => Monoid (AltMonoid m a) where
    mempty = AltMonoid empty
    mappend (AltMonoid x) (AltMonoid y) = AltMonoid $ x <|> y

(and similarly for Monad/MonadPlus, until they become subclasses of 
Applicative?)
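
For what it's worth, assuming the definitions above, the induced monoids
behave as expected in GHCi:

 ghci> case mconcat [AltMonoid Nothing, AltMonoid (Just 1), AltMonoid (Just 2)] of AltMonoid r -> r
 Just 1
 ghci> case mconcat (map (AppMonoid . print) [1,2,3]) of AppMonoid io -> io
 1
 2
 3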


Best regards,
Petr


