On Sat, 2008-06-21 at 21:11 -0400, Brent Yorgey wrote:
> On Fri, Jun 20, 2008 at 09:52:36PM -0500, Derek Elkins wrote:
> > On Fri, 2008-06-20 at 22:31 -0400, Brent Yorgey wrote:
> > > On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
> > > >
> > > > foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
> > > > foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
> > >
> > > Hi George,
> > >
> > > This is very cool! I have never thought of folds in quite this way
> > > before. It makes a lot of things (such as the identities you point
> > > out) obvious and elegant.
> > >
> > > > We can also see the following identities:
> > > >
> > > > foldright f as == foldright (.) (map f as) id
> > > > foldleft f as == foldright (flip (.)) (map f as) id
> > > >
> > > > I like that second one, after trying to read another definition of
> > > > left fold in terms of right fold (in the web book "Real World Haskell").
> > > >
> > > > The type signature, which could be written
> > > > (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to
> > > > another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)).
> > > > Would a "foldable" typeclass make any sense?
> > >
> > > As Brandon points out, you have rediscovered Data.Foldable. =) There's
> > > nothing wrong with that, congratulations on discovering it for
> > > yourself! But again, I like this way of organizing the type
> > > signature: I had never thought of a fold as a sort of 'lift' before.
> > > If f :: a -> b -> b, then foldright 'lifts' f to
> > > foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
> > >
> > > > Okay, it goes without saying that this is useless dabbling, but have
> > > > I entertained anyone? Or have I just wasted your time? I eagerly
> > > > await comments on this, my first posting.
> > >
> > > Not at all! Welcome, and thanks for posting.
> >
> > Look into the theory of monoids, monoid homomorphisms, M-sets and free
> > monoids.
>
> Thanks for the pointers! Here's what I've come up with, after
> re-reading some Barr-Wells lecture notes.
>
> First, given finite sets A (representing an 'alphabet') and S
> (representing 'states'), we can describe a finite state machine by a
> function phi : A x S -> S, which gives 'transition rules' giving a new
> state for each combination of alphabet character and state. If we
> squint and wave our hands and ignore the fact that types aren't
> exactly sets, and most of the types we care about have infinitely many
> values, this is very much like the Haskell type (a,s) -> s, or
> (curried) a -> s -> s, i.e. a -> (s -> s). So we can think of a
> Haskell function phi :: a -> (s -> s) as a sort of 'state machine'.
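
Just to make the 'state machine' reading concrete, here is a tiny toy
example (the names step and runMachine and the 'i'/'d' alphabet are
invented for this sketch, nothing standard). Feeding the machine an
input string just composes the transitions in input order, i.e. it is
George's foldleft, written via the second identity quoted above:

-- A toy machine in the phi :: a -> (s -> s) style: the state is a
-- counter, 'i' increments it, 'd' decrements it, anything else is
-- ignored.
step :: Char -> (Int -> Int)
step 'i' = (+ 1)
step 'd' = subtract 1
step _   = id

-- Running the machine composes the transitions in input order, which
-- is foldleft step, written here via the identity
-- foldleft f as == foldright (flip (.)) (map f as) id.
runMachine :: String -> Int -> Int
runMachine cs = foldr (flip (.)) id (map step cs)

-- ghci> runMachine "iid" 0
-- 1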

> Also, for a monoid M and set S, an action of M on S is given by a
> function f : M x S -> S for which
>
> (1) f(1,s) = s, and
> (2) f(mn,s) = f(m,f(n,s)).
>
> Of course, in Haskell we would write f :: m -> (s -> s),

This change is not completely trivial.

> and we would
> write criteria (1) and (2) as
>
> (1) f mempty = id
> (2) f (m `mappend` n) = f m . f n

So what does this make f? Hint: What is (s -> s)?
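
For anyone who wants the hint spelled out in code, here is a minimal
sketch (toEndo and extend are names invented for it): (s -> s) is
itself a monoid, with id as the unit and (.) as the multiplication,
which Data.Monoid packages as Endo s. Criteria (1) and (2) then say
exactly that f is a monoid homomorphism from m to Endo s. And since
[a] is the free monoid on a, a 'state machine' phi :: a -> (s -> s)
extends to such a homomorphism from [a], and that extension is
George's foldright yet again.

import Data.Monoid (Endo(..), Monoid(..))

-- (s -> s) is a monoid under id and (.); Data.Monoid calls it Endo s:
--   mempty                  = Endo id
--   Endo g `mappend` Endo h = Endo (g . h)

-- Criteria (1) and (2) say exactly that wrapping f this way gives a
-- monoid homomorphism from m to Endo s:
--   toEndo f mempty          == mempty
--   toEndo f (m `mappend` n) == toEndo f m `mappend` toEndo f n
toEndo :: (m -> s -> s) -> m -> Endo s
toEndo f = Endo . f

-- [a] is the free monoid on a, so a 'state machine' phi :: a -> (s -> s)
-- extends to a monoid homomorphism from [a]; the extension is just
-- George's foldright:
--   appEndo (extend phi as) s0 == foldr phi s0 as
extend :: (a -> s -> s) -> [a] -> Endo s
extend phi = mconcat . map (Endo . phi)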
