Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-21 Thread Heinrich Apfelmus

Ben wrote:

> however, this does bring up a general question : why are applicative
> functors (often) faster than monads?  malcolm wallace mentioned this
> is true for polyparse, and heinrich mentioned this is true more
> generally.  is there a yoga by which one can write monadic functors
> which have a specialized, faster applicative implementation?


I'm not knowledgeable enough on the multicore stuff, but I can comment 
on the monad vs applicative issue.


Applicative functors are not per se faster than monads; it's just that 
the former can encode static analysis while the latter can't. As you can 
see from the type of bind


   (>>=) :: m a -> (a -> m b) -> m b

the structure of the computation in the second argument, i.e. its 
various side effects, can depend on a value of type  a  , which is only 
available at run-time.


In contrast, the type of apply

   (<*>) :: m (a -> b) -> m a -> m b

makes it clear that the side effects are the same, no matter what the 
value of  a  will be at run-time. In other words, the structure of the 
computation is known statically.
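
To make the contrast concrete, here is a small example in the IO monad; 
the function names (dynamic, static) are mine, chosen for illustration:

  import Control.Applicative ((<$>), (<*>))

  -- with (>>=), which effect runs second is decided by a run-time value
  dynamic :: IO ()
  dynamic = readLn >>= \n ->
    if n > (0 :: Int) then putStrLn "positive"
                      else putStrLn "not positive"

  -- with (<*>), both effects are fixed before anything runs; only the
  -- pure combining function waits for the results
  static :: IO (Int, Int)
  static = (,) <$> readLn <*> readLn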



For parsers, interesting analyses are

* Does a parser accept the empty input?
* What are the first characters that a parser can accept?

The answers can be obtained easily enough from an applicative functor, 
for instance


acceptsEmpty (pure x)   = True
acceptsEmpty (f <*> g)  = acceptsEmpty f && acceptsEmpty g

But the corresponding analysis for monadic parsers is either harder or 
hopelessly inefficient because we don't know the structure of the parser 
until we run it on some input.
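
To flesh out these equations, here is a minimal model of such a parser; 
the representation (a list-of-successes parser paired with a nullability 
flag) is a simplification of my own, not the implementation of polyparse 
or any other library:

  data P a = P { acceptsEmpty :: Bool
               , runP         :: String -> [(a, String)] }

  instance Functor P where
    fmap f (P e p) = P e (\s -> [ (f x, s') | (x, s') <- p s ])

  instance Applicative P where
    pure x = P True (\s -> [(x, s)])
    P ef pf <*> P ex px =
      P (ef && ex)  -- static analysis: computed from the parts alone,
                    -- without ever running the parser
        (\s -> [ (f x, s'') | (f, s') <- pf s, (x, s'') <- px s' ])

The point is that  acceptsEmpty  never runs the parser; it is read off 
the structure alone.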


See also this answer on StackOverflow:

  http://stackoverflow.com/a/7863380/403805



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread Ben
heinrich and all --

thanks for the illuminating comments, as usual.  i've had a little bit of time 
to play around with this and here's what i've concluded (hopefully i'm not 
mistaken.)

1 - while composability makes STM a great silver bullet, there are other 
composable lower-level paradigms.  aaron has identified a few fundamental 
algorithms that appear to be composable and are used a lot in concurrent 
algorithms / data structures : k-CAS, exponential backoff, helping and 
elimination.

2 - k-CAS (and more generally k-RMW) is composable.  i'm not exactly sure if 
i'd call k-CAS monadic but it is at least applicative (i'm not sure what the 
exact definition of k-CAS is.  a bunch of 1-CASs done atomically seems 
applicative; allowing them to interact seems monadic.  certainly k-RMW is 
monadic.)  while it is possible to implement STM on top of k-CAS and vice 
versa, k-CAS can get you closer to the metal, and if you can special case 1-CAS 
to hardware you will win on a lot of benchmarks.  a lot of concurrent 
algorithms only need 1-CAS.
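
to pin down the distinction i have in mind, here is a sketch in types 
(the names and representations are my own guesses, not from aaron's 
paper or any library) :

  {-# LANGUAGE GADTs, ExistentialQuantification #-}
  import Data.IORef

  -- "applicative" k-CAS: a fixed list of independent compare-and-swaps
  -- (reference, expected value, new value); no operation can depend on
  -- a value read by another
  data CAS = forall a. Eq a => CAS (IORef a) a a

  -- "monadic" k-RMW: each step reads a reference and chooses both the
  -- value to write and the rest of the transaction from what it read
  data RMW a where
    Done :: a -> RMW a
    Step :: Eq b => IORef b -> (b -> (b, RMW a)) -> RMW a

  instance Functor RMW where
    fmap f m = m >>= (Done . f)
  instance Applicative RMW where
    pure      = Done
    mf <*> mx = mf >>= \f -> fmap f mx
  instance Monad RMW where
    Done x   >>= k = k x
    Step r f >>= k = Step r (\b -> case f b of (b', m) -> (b', m >>= k))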

3 - backoff, elimination and helping help scalability a lot, so you want 
support for them.  elimination and helping require communication between 
transactions, whereas STM is all about isolation, so reagents are fundamentally 
different in this regard.

reagents as implemented in the paper are not entirely monadic (by accident, i 
think; the author seems to have intended them to be.)  as far as i can see he uses an 
applicative form of k-CAS, and the reagents on top of it are applicative : his 
computed combinator (monadic bind) does not allow post-composition (it has no 
continuation.)  there is no reason why it could not be built on top of a 
monadic k-RMW and be fully monadic.  however he is able to recognize and 
special-case 1-CAS, which gives great performance of course.

however, this does bring up a general question : why are applicative functors 
(often) faster than monads?  malcolm wallace mentioned this is true for 
polyparse, and heinrich mentioned this is true more generally.  is there a yoga 
by which one can write monadic functors which have a specialized, faster 
applicative implementation?

right now i'm reading up on k-CAS / k-RMW implementations, and i think i'm 
going to start writing that monad before moving on to elimination / helping.  
i'm finding two things difficult :

- it is hard to represent a unified transaction log because of heterogeneous 
types, and 
- allowing a fully monadic interface makes it hard for me to special case 1-CAS 
(or applicative k-CAS.)

there are workarounds for the first issue (separate logs for each reference, or 
a global clock as in Transactional Locking II.)  for the second, right now i'm 
wondering if i'm going to have to write a compiler for a little DSL; i'd like 
to be able to exploit applicative performance gains generally, and special case 
1-CAS.  
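
for the first issue, the workaround i have in mind is the usual 
existential trick; a sketch (my own strawman, nothing more) :

  {-# LANGUAGE ExistentialQuantification #-}
  import Data.IORef

  -- hide each entry's type, exposing only what the log walker needs:
  -- the reference touched, the value seen at first read, the value to
  -- commit, and an Eq dictionary to validate with
  data LogEntry = forall a. Eq a => LogEntry (IORef a) a a

  validate :: [LogEntry] -> IO Bool
  validate = fmap and . mapM check
    where check (LogEntry ref seen _) = fmap (seen ==) (readIORef ref)

  commit :: [LogEntry] -> IO ()
  commit = mapM_ (\(LogEntry ref _ new) -> writeIORef ref new)

  -- (validate-then-commit is of course not atomic as written; making
  -- it so is exactly the hard part)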

best, ben

On Apr 6, 2012, at 5:38 AM, Heinrich Apfelmus wrote:

> Ben wrote:
>> perhaps it is too late to suggest things for GSOC --
>> but stephen tetley on a different thread pointed at aaron turon's
>> work, in which there's a very interesting new concurrency framework he
>> calls reagents, which seems to give the best of all worlds : it is
>> declarative and compositional like STM, but gives performance akin to
>> hand-coded lock-free data structures.  he seems to have straddled the
>> duality of isolation vs message-passing nicely, and can subsume
>> things like actors and the join calculus.
>> http://www.ccs.neu.edu/home/turon/reagents.pdf
>> he has a BSD licensed library in scala at
>> https://github.com/aturon/ChemistrySet
>> if someone doesn't want to pick this up for GSOC i might have a hand
>> at implementing it myself.
>
> That looks great! While I didn't take the time to understand the concurrency
> model in detail, the overall idea is to use arrows that can be run atomically
>
>   runAtomically :: Reagent a b -> (a -> IO b)
>
> This is very similar to STM: combining computations within the monad/arrow is
> atomic while combining computations outside the monad/arrow can interleave
> them.
>
>   runAtomically (f . g)              -- atomic
>   runAtomically f . runAtomically g  -- interleaving
>
> Actually, it turns out that the  Reagent  arrow is also a monad, but the
> author seems to claim that the static arrow style enables certain
> optimizations. I haven't checked his model in detail to see whether this is
> really the case and how exactly it differs from STM, but we know that
> situations like this happen for parser combinators. Maybe it's enough to
> recast reagents as an applicative functor?
>
> To summarize: the way I understand it is that it's apparently possible to
> improve the STM monad by turning it into an arrow. (I refer to STM in a
> very liberal sense here: whether memory is transactional or not is
> unimportant, the only thing that matters is a computation that composes
> atomically.)
>
> Best regards,
> Heinrich Apfelmus
>
> --
> http://apfelmus.nfshost.com
 

Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread KC
Think of the differences (and similarities) of Applicative Functors and
Monads and the extra context that monads carry around.


-- 
--
Regards,
KC


Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread Ben
i'm not sure what your email is pointing at.  if it is unclear, i understand 
the difference between applicative and monadic.  i suppose the easy answer to 
why applicative can be faster than monadic is that you can give a more 
specialized instance declaration.  i was just wondering if there was a way to 
make a monad recognize when it is being used applicatively, but that is 
probably hard in general.
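
here is a sketch of the deep-embedding idea i'm toying with (entirely my 
own strawman, not a worked-out design) : represent the computation as 
data, so a runner can at least recognize the bind-free fragment :

  {-# LANGUAGE GADTs #-}

  data Expr a where
    Pure :: a -> Expr a
    Ap   :: Expr (b -> a) -> Expr b -> Expr a   -- structure known statically
    Bind :: Expr b -> (b -> Expr a) -> Expr a   -- opaque until run

  instance Functor Expr where
    fmap f = Ap (Pure f)

  instance Applicative Expr where
    pure  = Pure
    (<*>) = Ap

  instance Monad Expr where
    (>>=) = Bind

  -- the applicative (statically analysable) fragment is exactly the
  -- Bind-free part
  isStatic :: Expr a -> Bool
  isStatic (Pure _)   = True
  isStatic (Ap f x)   = isStatic f && isStatic x
  isStatic (Bind _ _) = False

a runner over Expr could then dispatch the isStatic case to a 
specialized (e.g. 1-CAS) implementation, which is basically the "little 
DSL" route mentioned elsewhere in the thread.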

b

On Apr 20, 2012, at 2:54 PM, KC wrote:

> Think of the differences (and similarities) of Applicative Functors and
> Monads and the extra context that monads carry around.
>
> --
> --
> Regards,
> KC




Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread KC
Sorry, I thought you or someone was asking why Applicative Functors are
faster in general than Monads.

Functional programming is structured function calling to achieve a result,
where the functions can be evaluated in an unspecified order; I thought
Applicative Functors had the same unspecified evaluation order, whereas
Monads could carry some sequencing of computations, which has the extra
overhead of continuation passing.

Do I have that correct?


On Fri, Apr 20, 2012 at 4:05 PM, Ben midfi...@gmail.com wrote:

> i'm not sure what your email is pointing at.  if it is unclear, i
> understand the difference between applicative and monadic.  i suppose the
> easy answer to why applicative can be faster than monadic is that you can
> give a more specialized instance declaration.  i was just wondering if
> there was a way to make a monad recognize when it is being used
> applicatively, but that is probably hard in general.
>
> b
>
> On Apr 20, 2012, at 2:54 PM, KC wrote:
>
>> Think of the differences (and similarities) of Applicative Functors and
>> Monads and the extra context that monads carry around.
>>
>> --
>> --
>> Regards,
>> KC




-- 
--
Regards,
KC


Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread Ben
the sequencing matters for applicative functors.  from McBride and Paterson [1]:

The idea is that 'pure' embeds pure computations into the pure fragment of an 
effectful world -- the resulting computations may thus be shunted around 
freely, as long as the order of the genuinely effectful computations is 
preserved.

it is interesting to note that sequencing only matters a little for k-CAS : you 
just have to read before you write, but you can do the reads and writes in any 
order (as long as it is ultimately atomic.)
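
as a quick ghci sanity check of the order-preservation point : even IO's 
applicative runs its effects left to right :

  ghci> putStrLn "first" *> putStrLn "second"
  first
  second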

b

[1] McBride C., Paterson R. Applicative programming with effects. Journal of 
Functional Programming 18:1 (2008), pages 1-13.

On Apr 20, 2012, at 4:41 PM, KC wrote:

> Sorry, I thought you or someone was asking why Applicative Functors are
> faster in general than Monads.
>
> Functional programming is structured function calling to achieve a result,
> where the functions can be evaluated in an unspecified order; I thought
> Applicative Functors had the same unspecified evaluation order, whereas
> Monads could carry some sequencing of computations, which has the extra
> overhead of continuation passing.
>
> Do I have that correct?
>
> On Fri, Apr 20, 2012 at 4:05 PM, Ben midfi...@gmail.com wrote:
>> i'm not sure what your email is pointing at.  if it is unclear, i understand
>> the difference between applicative and monadic.  i suppose the easy answer to
>> why applicative can be faster than monadic is that you can give a more
>> specialized instance declaration.  i was just wondering if there was a way to
>> make a monad recognize when it is being used applicatively, but that is
>> probably hard in general.
>>
>> b
>>
>> On Apr 20, 2012, at 2:54 PM, KC wrote:
>>
>>> Think of the differences (and similarities) of Applicative Functors and
>>> Monads and the extra context that monads carry around.
>>>
>>> --
>>> --
>>> Regards,
>>> KC
>
> --
> --
> Regards,
> KC

