[Haskell-cafe] Re: a regressive view of support for imperative programming in Haskell
Stefan O'Rear wrote: apfelmus wrote: My assumption is that we have an equivalence

    forall a,b . m (a -> m b)  ~  (a -> m b)

because any side effect executed by the extra m on the outside can well be delayed until we are supplied a value a. Well, at least when all arguments are fully applied, for some notion of "fully applied".

I figured that wouldn't be a problem since our values don't escape, and the functions we define all respect the embedding... More formally:

Projections and injections:

    proj :: Monad m => m (a -> m b) -> (a -> m b)
    proj ma = \x -> ma >>= \fn' -> fn' x

    inj fn = return fn

Define an equivalence relation:

    ma ≡ mb  <=>  proj ma = proj mb

Projection respects equivalence:

    ma ≡ mb -> proj ma = proj mb                (intro ->)
    ma ≡ mb <=> proj ma = proj mb               (equiv def)
    proj ma = proj mb -> proj ma = proj mb      (assumption)

Application respects equivalence: ...

Yeah, that's the right approach, but it has a few typos. The correct version is

    (@) :: Monad m => m (a -> m b) -> m a -> m b
    (@) ma = \x -> x >>= proj ma

Formulating (@) in terms of proj ma is a very clever idea since it follows immediately that

    ma @ x = ma' @ x  iff  proj ma = proj ma'  iff  ma ≡ ma'

In other words, when viewed through @ and proj only, equivalent actions give equivalent results. The main point is that this does not hold for the other curry-friendly type m a -> m b

    proj :: Monad m => (m a -> m b) -> (a -> m b)
    proj f = f . return

    (@) :: Monad m => (m a -> m b) -> m a -> m b
    (@) = id

Here, ma ≡ ma' (i.e. proj ma = proj ma') does not give equivalent results, since those functions may execute their argument multiple times. So, here's the counterexample

    once :: Monad m => m A -> m A
    once = id

    twice :: Monad m => m A -> m A
    twice x = x >> once x

Now, we have

    proj once = return = proj twice

but

    effect :: IO ()  -- a given effect
    effect = putStrLn "Kilroy was here!"

    once @ effect = effect  ≠  twice @ effect = effect >> effect

The same can be done for several arguments, along the lines of

    proj2 :: m (a -> m (b -> m c)) -> (a -> b -> m c)
    proj2 f = proj . (proj f)

    app2 :: m (a -> m (b -> m c)) -> (m a -> m b -> m c)
    app2 f mx my = (f @ mx) @ my
                 = my >>= proj (mx >>= proj f)
                 = my >>= \y -> mx >>= \x -> proj2 f x y

and similar results.

Regards, apfelmus

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
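The once/twice counterexample can be checked concretely in the list monad, where "effects" are pure and comparable. This is a sketch with my own names; since `@` is reserved syntax in Haskell, the operator is spelled `@@` here.

```haskell
-- proj and the application operator as in the post;
-- '@' itself is reserved syntax in Haskell, hence '@@'.
proj :: Monad m => m (a -> m b) -> (a -> m b)
proj ma = \x -> ma >>= \fn' -> fn' x

(@@) :: Monad m => m (a -> m b) -> m a -> m b
ma @@ mx = mx >>= proj ma

once, twice :: Monad m => m a -> m a
once x  = x
twice x = x >> once x

main :: IO ()
main = do
  -- in the list monad, "running an effect twice" is observable as duplication:
  print (once  [1, 2 :: Int])   -- [1,2]
  print (twice [1, 2 :: Int])   -- [1,2,1,2]
  -- (@@) applied to a monadic function:
  print (return (\x -> [x, x]) @@ [1, 2 :: Int])  -- [1,1,2,2]
```

So `proj once = return = proj twice`, yet `once` and `twice` are distinguishable through `@@` — exactly the failure of the `m a -> m b` embedding described above.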
[Haskell-cafe] Re: Diagnosing stack overflow
Justin Bailey wrote:

    -- Determines if the length of the strings in the list is longer than the given
    -- count. If not, the amount the list falls short is returned. Otherwise,
    -- -1 indicates the prefix list is at least that long. If the count is zero and
    -- the list is empty or just null strings, -1 is also returned.
    prefixesAtLeast :: Int -> [S.ByteString] -> Int

While that doesn't help your stack overflow problem, it's not very haskellish to return magic numbers. A Maybe type is more appropriate here.

    prefixesAtLeast !0 !ss
      | null ss       = 0
      | all S.null ss = 0
      | otherwise     = -1
    prefixesAtLeast !n !ss = prefixesAtLeast' n ss
      where
        prefixesAtLeast' !n ss
          | n < 0     = -1
          | null ss   = n
          | otherwise = let (!s : (!rest)) = ss
                        in  prefixesAtLeast' (n - (S.length s)) rest

Extracting the head and tail of ss with a let statement could lead to a huge unevaluated expression like

    rest = tail (tail (tail (...)))

but the null tests are likely to force it. Also note that the case n = 0 is quite rare. In any case, I'd write the function as

    lengthExcess :: Int -> [S.ByteString] -> Maybe Int
    lengthExcess n ss
      | n <= 0    = Nothing
      | otherwise = case ss of
          []     -> Just n
          (s:ss) -> lengthExcess (n - S.length s) ss

Note that the function name is chosen to mnemonically match the result type Maybe Int, i.e. "the excess is Just 5 characters" or "the excess is Nothing".

Regards, apfelmus
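The suggested `lengthExcess` can be sketched in a self-contained form, using plain Strings instead of ByteStrings so it runs without extra packages:

```haskell
-- lengthExcess as proposed in the post, but over [String] rather than
-- [S.ByteString], purely so the sketch needs no extra packages.
lengthExcess :: Int -> [String] -> Maybe Int
lengthExcess n ss
  | n <= 0    = Nothing             -- the strings are already long enough
  | otherwise = case ss of
      []      -> Just n             -- we fell short by n characters
      (s:ss') -> lengthExcess (n - length s) ss'

main :: IO ()
main = do
  print (lengthExcess 5 ["ab", "cd"])  -- Just 1: one character short
  print (lengthExcess 3 ["abcd"])      -- Nothing: already long enough
```

The case expression forces the list spine as it goes, so no chain of `tail (tail (...))` thunks can build up.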
[Haskell-cafe] Re: Diagnosing stack overflow
Justin Bailey wrote: apfelmus wrote: Extracting the head and tail of ss with a let statement could lead to a huge unevaluated expression like rest = tail (tail (tail (...)))

Even though they are probably forced, would breaking the head and tail apart via pattern-matching or a case statement avoid building up that unevaluated expression?

Yes, absolutely, since pattern matching has to force the scrutinee in order to choose the matching case. In contrast, a let statement

    let (x:xs) = expr in ...

simply assumes that expr is of the form (x:xs) but does not force it and check whether that's really the case. Of course, this may turn out as a pattern match failure later on, as soon as x is demanded but expr was initially the empty list. In your case, the test null ss forces ss and checks whether the let-pattern is ok. So, you basically end up doing what a case expression would do. In other words, the situation is more "they are most likely forced" than "they are probably forced", and it's just a matter of convenience to choose one over the other. But there are certain situations where you can check/prove differently that the let pattern never fails and where such a lazy pattern is wanted.

Regards, apfelmus
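The difference between a lazy let-pattern and a case expression can be made observable with two toy functions (my own, for illustration): the let-pattern is never checked unless its variables are demanded, while the case checks immediately.

```haskell
-- A lazy let-pattern only promises the shape; a case checks it right away.
lazyConst :: [a] -> Int
lazyConst xs = let (y : _) = xs in 42    -- y is never demanded, so the
                                         -- pattern is never matched at all

strictConst :: [a] -> Int
strictConst xs = case xs of (y : _) -> 42  -- forces xs; fails on []

main :: IO ()
main = do
  print (lazyConst ([] :: [Int]))   -- 42, despite the refutable pattern
  print (strictConst [1 :: Int])    -- 42
  -- strictConst [] would raise a pattern match failure
```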
[Haskell-cafe] Re: List comprehension desugaring
Neil Mitchell wrote: The Haskell desugaring for list comprehensions is given in: http://haskell.org/onlinereport/exps.html#list-comprehensions All the rules seem to be left-to-right rewrites, apart from the second one, which seems to be right-to-left. Is there some deep reason for this, or is it accidental?

Isn't the second rule left-to-right, too? The translation assumes that Q is non-empty, so the three last rules don't match [e | q], but they do match [e | q, True]. The non-emptiness is probably for ruling out the invalid list comprehension [e | ] which would otherwise appear as an intermediate result in the translation for empty Q.

Regards, apfelmus
[Haskell-cafe] Re: Bi-directional Maps
Andrew Wagner wrote: So, in writing my chess engine, I've been trying to maintain 2 Map objects. One maps squares on the board to Ints, the other maps Ints to actual Pieces. It occurred to me that it would be useful to explicitly have a bi-directional Map, which does the maintenance of keeping the Maps synchronized behind the scenes. Thus, Bimap was born! I've taken the API for Data.Map (which you can find at ), and cannibalized it for Bimap. The new API is at http://hpaste.org/2344 . The idea is that if you have a Bimap k a, and you want to treat the k's as keys and use a function like Data.Map.foo, it will be called Data.Map.left_foo in Bimap. And if they want to do the same thing, but using the a's as keys, then they simply use right_foo. The metaphor is that we can view it as a Map in 2 directions, manipulating it from the left (on the k's), or from the right (on the a's). Is this useful? Is there a better way? Is the API too big, and if so, how can it be pared down?

IMHO, the API is too big and not beautiful enough. How about a function

    flip :: Bimap a b -> Bimap b a

that interchanges the role of keys and values? Or maybe keep every function symmetric in a and b, like in

    update :: ((a,b) -> Maybe (a,b)) -> Either a b -> Bimap a b -> Bimap a b

The changer functions take pairs, and the search key to look for is Either a b. But most of the map functions (including update above) probably won't work anyway; what should

    left_insertWith (\new old -> new) 'a' 1 (fromList [('a',2),('b',1)])

do? It can't yield fromList [('a',1),('b',1)] since 1 would have two keys then.

Regards, apfelmus
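A minimal Bimap can be sketched as two synchronized Data.Map values, with insert deleting any old pairings first so the bijection invariant survives. All names here are mine, not from a released library:

```haskell
import qualified Data.Map as M

-- Two synchronized maps; the invariant is that (k,v) is in the first
-- exactly when (v,k) is in the second, and both are proper functions.
data Bimap k v = Bimap (M.Map k v) (M.Map v k) deriving Show

empty :: Bimap k v
empty = Bimap M.empty M.empty

-- insert k v, first removing any pair that k or v already participates in
insert :: (Ord k, Ord v) => k -> v -> Bimap k v -> Bimap k v
insert k v (Bimap kv vk) = Bimap (M.insert k v kv') (M.insert v k vk')
  where
    kv' = maybe kv (\oldK -> M.delete oldK kv) (M.lookup v vk)
    vk' = maybe vk (\oldV -> M.delete oldV vk) (M.lookup k kv)

lookupLeft :: Ord k => k -> Bimap k v -> Maybe v
lookupLeft k (Bimap kv _) = M.lookup k kv

lookupRight :: Ord v => v -> Bimap k v -> Maybe k
lookupRight v (Bimap _ vk) = M.lookup v vk
```

Note how `insert 'b' 1` into a map already containing `('a',1)` silently evicts `'a'` — which is exactly the design question `left_insertWith` raises: some Data.Map operations have no bijection-preserving analogue.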
[Haskell-cafe] Re: Bi-directional Maps
Hugh Perkins wrote: Exactly. For this to work there needs to be the constraint that there's a one-to-one mapping in each direction. The Bimap should have the uniqueness promise that Set (k, v) gives. Yet you should be able to search on either tuple value. Or... have the possibility of returning a list of values. Arguably there are two possible implementations, one that enforces one-to-one mapping, and one which allows multiple values, in either direction.

Terminology reminder :) - the latter is called (binary) relation http://en.wikipedia.org/wiki/Binary_relation - the former would be a bijection http://en.wikipedia.org/wiki/Bijective_map

Regards, apfelmus
[Haskell-cafe] Re: Newbie question: Where is StackOverflow on the Wiki?
Stefan O'Rear wrote:

      sum (enum 1 10)
    = sum' 0 (enum 1 10)
    = ...
    = sum' 36 (9 : enum (9+1) 10)
    = (sum' $! (36+9)) (enum (9+1) 10)
    = sum' 45 (enum (9+1) 10)
    = sum' 45 []
    = 45

(I need to find some way to automate making these trails :) )

Yes! We'd need such an automatic tool for the wikibook, too.

Regards, apfelmus
[Haskell-cafe] Re: Newbie question: Where is StackOverflow on the Wiki?
Neil Mitchell wrote:

      sum (enum 1 10)
    = sum' 0 (enum 1 10)
    = ...
    = sum' 45 []
    = 45

(I need to find some way to automate making these trails :) ) Yes! We'd need such an automatic tool for the wikibook, too.

The problem is that Haskell is ridiculously complex, and the small-step interpretation is much harder than you'd think. For example, sum may well be defined as foldl' (+) 0, which is a CAF, so it gets reduced once. The 0 won't actually be a 0, but will be fromInteger 0, which will correspond to looking up an item in the dictionary and applying it. Dictionaries especially make the simple interpretation completely wrong. It's easy to do informally, but once you start being more precise, it's very complex.

Yeah, the precise details may vary, even :) But for teaching, an automatic tool that does graph reduction would be great. I don't mind if it's sloppy (directly applying definitions + pattern matching VS everything is a lambda abstraction) and only does simply typed lambda calculus (no type applications, no type classes).

Regards, apfelmus
[Haskell-cafe] Re: help understanding lazy evaluation
Ronald Guida wrote: Can anyone tell me if I've got this right?

Yes, you got it. The let-statement you introduce that embodies the sharing of the argument n = 12 should probably be present in the first parts, too. But this doesn't really matter; the formalities of graph reduction vary with the formalizer :)

Regards, apfelmus
[Haskell-cafe] Re: IO inside CGI
Adrian Neumann wrote: Now I'd like to get a new StdGen, in case no id was supplied to the script.

    parse :: Maybe String -> IO StdGen
    parse (Just x) = return $ read x
    parse Nothing  = getStdGen

Obviously this doesn't work because I'm trying to do IO inside CGI (right?). Is there some incantation I can perform to make this possible?

Abracadabra, the incantation is

    liftIO :: IO a -> CGI a

i.e.

    parse :: Maybe String -> CGI StdGen
    parse (Just x) = return $ read x
    parse Nothing  = liftIO getStdGen

Regards, apfelmus
[Haskell-cafe] Re: nested datatypes
(Note that the term "nested data type" also/already carries the meaning "non-regular data type", an example being

    data PerfectBinaryTree a = One a | Succ (PerfectBinaryTree (a,a))

)

Thomas Girod wrote: recently I was trying to represent complex data by defining several datatypes and nesting them, such as

    data Foo = Foo { foo :: Bar } deriving (Eq,Show)
    data Bar = Bar { bar :: Int } deriving (Eq,Show)

To change only a part of the data, syntactic sugar is quite convenient. But it seems to be quite painful with nested datatypes.

    b = Bar 10
    f = Foo b

    foobar :: Int -> Foo -> Foo
    foobar i f = let nb = (foo f){bar = i} in f{foo = nb}

So, my question is: is there a nifty way to modify data within a nested datatype, similar to the f{foo = bar} style? If not, is anyone using some kind of workaround for this?

There is a nifty way, called functional references. They're a pair of get and set functions

    data Ref s a = Ref { get :: s -> a, set :: a -> s -> s }

The nice thing about them is that we can compose them like functions

    o :: Ref b c -> Ref a b -> Ref a c
    f `o` g = Ref (get f . get g) (\c a -> set g (set f c (get g a)) a)

The example becomes

    data Foo = Foo Bar
    data Bar = Bar Int

    foo :: Ref Foo Bar
    foo = Ref (\(Foo x) -> x) (\x (Foo _) -> Foo x)

    bar :: Ref Bar Int
    bar = Ref (\(Bar x) -> x) (\x (Bar _) -> Bar x)

    foobar :: Ref Foo Int
    foobar = bar `o` foo

See also

    http://luqui.org/blog/archives/2007/08/05/haskell-state-accessors-second-attempt-composability/

and

    Sander Evers, Peter Achten, and Jan Kuper. A Functional Programming
    Technique for Forms in Graphical User Interfaces.
    http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf

Writing getter and setter functions by hand can be tedious, but somebody has already automated this with Template Haskell or other preprocessing tools.

Regards, apfelmus
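The functional-reference idea above runs as written; here is the complete sketch with a small usage example showing that `foobar = bar `o` foo` reads and updates the nested Int in one step:

```haskell
-- Functional references as in the post: a paired getter and setter.
data Ref s a = Ref { get :: s -> a, set :: a -> s -> s }

-- Composition: read through both refs; write by updating the inner
-- structure and putting it back into the outer one.
o :: Ref b c -> Ref a b -> Ref a c
f `o` g = Ref (get f . get g) (\c a -> set g (set f c (get g a)) a)

data Foo = Foo Bar deriving Show
data Bar = Bar Int deriving Show

foo :: Ref Foo Bar
foo = Ref (\(Foo x) -> x) (\x (Foo _) -> Foo x)

bar :: Ref Bar Int
bar = Ref (\(Bar x) -> x) (\x (Bar _) -> Bar x)

foobar :: Ref Foo Int
foobar = bar `o` foo

main :: IO ()
main = do
  let f = Foo (Bar 10)
  print (get foobar f)                  -- 10
  print (get foobar (set foobar 42 f))  -- 42
```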
[Haskell-cafe] Re: quoting in Haskell
Peter Verswyvelen wrote: In Scheme, one can quote code, so that it becomes data. Microsoft's F# and C# 3.0 also have something similar that turns code into expression trees. I can't find something similar for Haskell? Maybe I am looking in the wrong places?

Quoting/inspecting code at runtime is not possible in Haskell since this would break referential transparency, i.e. one could tell that two extensionally equal values like

    3 `add` 4
    1 `add` (2 `mul` 3)

are different by inspecting their internal structure. Of course, you can use constructors for add and mul and then inspect and transform the result

    data Expr = Val Int | Add Expr Expr | Mul Expr Expr

    add, mul :: Expr -> Expr -> Expr
    add = Add
    mul = Mul

    x :: Expr
    x = Val 1 `add` (Val 2 `mul` Val 3)

By making Expr an instance of the class Num, you can use overloaded arithmetic operations

    instance Num Expr where
      (+) = Add
      (*) = Mul
      fromInteger = Val . fromInteger

    x :: Expr
    x = 1 + 2*3

I want to write something like

    selectiveQuote [add] (1 `add` 2 `mul` 3)

which would result in an expression tree like

      add
     /   \
    1     6

So the `mul` is not quoted because it is not part of the context = [add].

I'm not sure why you'd want to do that, but it's not well-defined. What would

    selectiveQuote [add] ((1 `add` 2) `mul` 3)

be? How to expand `mul` here when `add` isn't expanded?

Regards, apfelmus
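The Num-instance reification runs as written; the sketch below completes the instance (the extra methods and the `eval` function are my additions) and shows that the overloaded expression `1 + 2*3` really builds the tree rather than the number 7:

```haskell
-- Reifying arithmetic as a data type, as in the post.
data Expr = Val Int | Add Expr Expr | Mul Expr Expr deriving (Eq, Show)

instance Num Expr where
  (+)         = Add
  (*)         = Mul
  fromInteger = Val . fromInteger
  -- the remaining methods are not needed for this sketch:
  abs    = error "abs: not needed"
  signum = error "signum: not needed"
  negate = error "negate: not needed"

-- an interpreter for the reified tree (my addition)
eval :: Expr -> Int
eval (Val n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

x :: Expr
x = 1 + 2 * 3   -- builds Add (Val 1) (Mul (Val 2) (Val 3))

main :: IO ()
main = do
  print x
  print (eval x)  -- 7
```

This is quoting by construction: the "code" is data from the start, so no referential transparency is lost.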
[Haskell-cafe] Re: Mutable but boxed arrays?
Henning Thielemann wrote: I'll see if I understand it.

    do writeArray arr 0 2
       x <- readArray arr 0
       writeArray arr 0 (x+x)

If 'arr' is an STArray, then 'arr' will contain the unevaluated expression 2+2 as zeroth element, and with type STUArray it will contain the evaluated 4?

Exactly. Put differently, writeArray :: STUArray -> .. is strict in the third argument whereas writeArray :: STArray -> .. is not.

Regards, apfelmus
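The strictness difference is observable: a boxed STArray happily stores a thunk that is never forced, so even writing `undefined` succeeds as long as that slot is never read. A sketch (with STUArray in place of STArray, the `writeArray arr 0 undefined` line would blow up at write time):

```haskell
import Control.Monad.ST
import Data.Array.ST

-- A boxed STArray stores unevaluated thunks: writing 'undefined' is fine
-- as long as we never read that slot back.
boxedOk :: Int
boxedOk = runST $ do
  arr <- newArray (0, 1) 0 :: ST s (STArray s Int Int)
  writeArray arr 0 undefined   -- stored as a thunk, never forced
  writeArray arr 1 42
  readArray arr 1

main :: IO ()
main = print boxedOk  -- 42
```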
[Haskell-cafe] Re: turning an imperative loop to Haskell
Dougal Stanton wrote: To create an infinite list where each f(u) depends on the previous u, with a single seed value, use 'iterate':

    main = mapM_ (uncurry (printf "%d %f\n")) (zip [1..50] (iterate f 3))

How about

    main = sequence_ $ zipWith (printf "%d %f\n") [1..50] (iterate f 3)

Regards, apfelmus
[Haskell-cafe] Re: ((a - b) - c) - (a - m b) - m c
Henning Thielemann wrote: On Sun, 9 Sep 2007, Stuart Cook wrote: When combining monadic and non-monadic code, I've often wished for a magical combinator of type

    (Monad m) => ((a -> b) -> c) -> (a -> m b) -> m c

which would let me inject a monadic function into a pure one, then wrap the ultimate result to ensure that no nastiness escapes.

If the signature were

    (Monad m) => ((a -> b) -> c) -> m (a -> b) -> m c

it would be possible, and the implementation would be 'liftM'/'fmap'. In the Reader monad you can even convert (a -> m b) to m (a -> b).

The existence of a conversion

    convert :: (a -> m b) -> m (a -> b)

between those two types for a given monad m is equivalent to the existence of

    magic :: ((a -> b) -> c) -> (a -> m b) -> m c

since we have

    convert   = magic id
    magic f g = return f `ap` convert g

Regards, apfelmus
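The Reader case can be made concrete: for `m = (->) r`, the conversion `convert :: (a -> m b) -> m (a -> b)` really exists — it is just `flip` — and `magic` follows as described. A sketch with an example of my own (`addEnv`):

```haskell
-- For the reader monad ((->) r), converting (a -> r -> b) to (r -> a -> b)
-- is just flip, so 'magic' exists too.
convert :: (a -> r -> b) -> r -> (a -> b)
convert = flip

magic :: ((a -> b) -> c) -> (a -> r -> b) -> r -> c
magic f g = \r -> f (convert g r)

-- an environment-dependent function, a -> m b with m = (->) Int
addEnv :: Int -> Int -> Int
addEnv x env = x + env

main :: IO ()
main = print (magic (\h -> map h [1, 2, 3]) addEnv 10)  -- [11,12,13]
```

Here `map`, a pure higher-order function, is fed the reader-dependent `addEnv`, and the whole result stays inside the reader: exactly the `((a -> b) -> c) -> (a -> m b) -> m c` shape.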
[Haskell-cafe] Re: Data.Binary Endianness
Sven Panne wrote: So what I was asking for is: getInt32be, putIEEEFloatLe, getIEEEDoubleHost, ... Type classes might be used to get a slightly smaller API, but I am unsure about the performance impact and how much this would really buy us in terms of the ease of use of the API.

It's not that related, but I was just struck by an obvious idea, namely to put the endianness in an extra parameter

    data Endianness = Little | Big | Host

    putInt32 :: Endianness -> Int -> Put

in order to avoid three functions for every data type. I don't know whether this makes performance a bit worse, but if it does, phantom types or similar are an option to encode endianness, too.

    data LE = LE
    data BE = BE
    data Host = Host

    class Put a endian where
      put :: endian -> a -> Put

    instance Put Int32 LE where ...
    instance Put Int32 BE where ...
    instance Put Int32 Host where ...

Oh, and the 8, 16, 32 and 64 are good candidates for phantom types/associated data types, too.

Regards, apfelmus
[Haskell-cafe] Re: Sequencing Operations in a Monad
SevenThunders wrote: Ryan Ingram wrote: As long as the FFI calls don't make destructive updates to existing matrices, you can do what you want. For example, assuming you have:

    -- adds the second matrix to the first, overwrites the first
    matrixAddIO :: MatrixIO -> MatrixIO -> IO ()

    -- creates a new copy of a matrix
    matrixCopyIO :: MatrixIO -> IO MatrixIO

...

Well, as you point out, there is an efficiency issue if we need to copy matrices all of the time in order to ensure 'referential transparency'. Moreover, I manage my matrices on a stack in C, since it makes it easy to handle memory allocation and deallocation. The stack configuration tends to be highly fluid, so there are always side effects going on. Right now my Matrix type wraps the index from the bottom of the Matrix stack into the IO monad.

If you need destructive updates, you indeed need a monad. Otherwise, I'd use ForeignPtrs and import the matrix operations as pure functions (~ unsafePerformIO).

I was just wondering if there was any obvious way to force an IO action to execute only once, since now each reference to the IO action causes it to execute again.

Isn't that simply

    do x <- onlyOnce
       mult x x

with

    onlyOnce :: IO Int
    mult     :: Int -> Int -> IO Int

? If you want

    mult = liftM2 something :: IO Int -> IO Int -> IO Int

you can do

    do x' <- onlyOnce
       let x = return x'
       mult x x

which is

    do x <- return `liftM` onlyOnce
       mult x x

for short.

Regards, apfelmus
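The "bind once, reuse the pure result" point can be demonstrated by counting executions with an IORef. All names here are mine, mirroring the post's `onlyOnce` and `mult`; `unsafePerformIO` is used only so the outcome can be observed as a pure value:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)  -- only to observe the count purely

-- Binding an action once shares its result; the counter records how often
-- the effect actually ran.
sharedRunCount :: (Int, Int)   -- (result of mult x x, number of executions)
sharedRunCount = unsafePerformIO $ do
  counter <- newIORef (0 :: Int)
  let onlyOnce = modifyIORef counter (+ 1) >> return 21
      mult a b = return (a * b) :: IO Int
  x <- onlyOnce        -- the effect runs here, exactly once
  r <- mult x x        -- x is a plain Int now; no re-execution
  n <- readIORef counter
  return (r, n)

main :: IO ()
main = print sharedRunCount  -- (441,1)
```

Had we written `onlyOnce >>= \_ -> onlyOnce >>= ...` (mentioning the action twice), the counter would read 2: it is the bind, not the name, that decides how often the effect runs.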
[Haskell-cafe] Re: Type-Marking finite/infinte lists?
David Menendez wrote: Joachim Breitner wrote: today while mowing the lawn, I thought about how to statically prevent some problems with infinite lists. I was wondering if it is possible to somehow mark a list as one of finite/infinite/unknown and to mark list-processing functions as to whether they can handle infinite lists.

One possibility would be to have some phantom markers in the list type. For example,

    newtype List isEmpty isFinite a = MarkList [a]

    data Finite
    data Infinite
    data Empty
    data Nonempty
    data Unknown

Yes, phantom types are what Joachim wants. For more about phantom types, see also

    http://haskell.org/haskellwiki/Phantom_type

    Ralf Hinze. Fun with Phantom Types.
    http://www.informatik.uni-bonn.de/~ralf/publications/With.pdf

Another possibility for working with infinite lists would be a new data type

    data InfList a = a :> InfList a

(also called Stream, but this name is quite overloaded.)

    cons :: a -> List e f a -> List Nonempty f a

But unfortunately, finiteness is a special property that the type system cannot guarantee. The above type signature for cons doesn't work, since the following would type check

    bad :: a -> List Nonempty Finite a
    bad x = let xs = cons x xs in xs

In other words, Haskell's type system doesn't detect infinite recursion. (But there are type systems like System F or that of Epigram that disallow general recursion.)

As a variation, we can use rank-2 types instead of Unknown; e.g.

    tail   :: List Nonempty f a -> (forall e. List e f a -> w) -> w
    filter :: (a -> Bool) -> List e f a -> (forall e. List e f a -> w) -> w

That's probably best explained in terms of the existential quantor

    tail   :: List Nonempty f a -> (exists e. List e f a)
    filter :: (a -> Bool) -> List e f a -> (exists e. List e f a)

In other words, tail takes a nonempty list and returns one that has an emptiness e, but that's all we know. Existential quantification is not first-class in Haskell, so you can either use a data type

    data Box100 f b c = forall a . Box100 (f a b c)

    tail :: List Nonempty f a -> Box100 List f a

or encode it via the isomorphism

    exists e . expr e  =  forall w . (forall e . expr e -> w) -> w

Regards, apfelmus
[Haskell-cafe] Re: Type-Marking finite/infinte lists?
Roberto Zunino wrote: apfelmus wrote: cons :: a -> List e f a -> List Nonempty f a But unfortunately, finiteness is a special property that the type system cannot guarantee. The above type signature for cons doesn't work since the following would type check: bad :: a -> List Nonempty Finite a ; bad x = let xs = cons x xs in xs. In other words, Haskell's type system doesn't detect infinite recursion.

I thought this was possible with GADTs (is it?):

Interesting, it probably is. See below.

    data Z
    data S n

    data List a len where
      Nil  :: List a Z
      Cons :: a -> List a len -> List a (S len)

Then, bad (adapted from above) does not typecheck.

Note that you're doing more than forcing your lists to be finite; you force them to be of a particular size. For instance, a list of type

    List a (S (S (S Z)))

is guaranteed to have exactly 3 elements. That's why

    bad x = let xs = Cons x xs in xs

doesn't type check: the lhs has one more element than the rhs. As soon as you try to hide the finiteness proof (= count of elements in the list) away via existential quantification

    data ListFinite a where
      IsFinite :: List a len -> ListFinite a

the bad function reappears and type checks

    cons :: a -> ListFinite a -> ListFinite a
    cons x (IsFinite xs) = IsFinite (Cons x xs)

    bad :: a -> ListFinite a
    bad x = let xs = cons x xs in xs

But there is a major difference now: bad is not an infinite list, it's _|_. That's because cons is now strict in the second argument, i.e. it really does check whether the second argument matches the constructor IsFinite, which means that there's a proof that xs is finite. That's good news; it seems that everything of type ListFinite is finite modulo _|_s. I don't have a proof, though. Of course, we can have a _|_ in every position, like

    cons 1 (cons 2 (IsFinite undefined))

which corresponds to 1 : 2 : _|_

Regards, apfelmus
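Zunino's length-indexed list runs as written under the GADTs extension; `toList` below is my addition, purely for observing results:

```haskell
{-# LANGUAGE GADTs #-}

-- Length-indexed lists: the type records the exact number of elements.
data Z
data S n

data List a len where
  Nil  :: List a Z
  Cons :: a -> List a len -> List a (S len)

-- forget the index, for printing/comparison (my addition)
toList :: List a len -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

-- bad x = let xs = Cons x xs in xs
--   is rejected: List a len cannot unify with List a (S len)

main :: IO ()
main = print (toList (Cons 1 (Cons 2 (Cons 3 Nil))))  -- [1,2,3]
```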
[Haskell-cafe] Re: Building production stable software in Haskell
Ketil Malde wrote: Neil Mitchell wrote: DBMs can differentiate themselves on external database support. Surely this is an opportunity to focus development on a single library with broader support? Currently, we have HSQL and HDBC supplying incompatible low-level interfaces, supporting different sets of back ends. No matter what I choose, I risk having to make a costly conversion later on.

Yes, a common low-level interface is highly recommended. This does not only hold for DBMs, but also for XML, GUIs, vector graphics etc. The (imaginary, I'm a DB illiterate) picture is this:

    HDB --------+                            +--- Borland
                |                            |
    LambdaBase -+--- Generic Low-level DB ---+--- Oracle
                |                            |
    hasqel -----+                            +--- MySQL
                |                            |
    ...                                      ...

A common low-level interface factors the m-to-n relation into an m-to-1 and a 1-to-n relation. The story doesn't end here, since there can be additional low-level functionality that only some DB backends can offer but that some high-level interfaces require. But that's just a matter of putting another type class on top of the minimal low-level type class.

Of course, designing a low-level interface that is neither too powerful (not all back ends offer the functionality) nor too general (being almost trivial) and still simple enough is *hard*, especially since you can think about it for weeks without touching a computer.

Regards, apfelmus
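The factoring can be sketched with a type class: each backend implements one small class, and each high-level helper is written once against that class. All names here are hypothetical, purely for illustration:

```haskell
-- A toy "generic low-level DB" interface: every backend provides one
-- instance, every high-level function is written against the class.
class LowLevelDB conn where
  runQuery :: conn -> String -> [String]

-- one (fake) backend: an in-memory association list
newtype MemoryDB = MemoryDB [(String, String)]

instance LowLevelDB MemoryDB where
  runQuery (MemoryDB rows) q = [v | (k, v) <- rows, k == q]

-- a high-level helper, written once, usable with any backend
firstResult :: LowLevelDB conn => conn -> String -> Maybe String
firstResult c q = case runQuery c q of
  []      -> Nothing
  (r : _) -> Just r

main :: IO ()
main = print (firstResult (MemoryDB [("users", "alice")]) "users")
```

Adding a second backend means one new instance; adding a second high-level library means code against `LowLevelDB` only — the m-to-n relation has become m + n.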
[Haskell-cafe] Re: Type-Marking finite/infinte lists?
Bas van Dijk wrote: Roberto Zunino wrote:

    data Z
    data S n

    data List a len where
      Nil  :: List a Z
      Cons :: a -> List a len -> List a (S len)

The other day I was playing with exactly this GADT. See: http://hpaste.org/2707 My aim was to define a function like 'concat' in terms of 'foldr', but I was unable to do so. Can this be done?

Not with the standard foldr you mimic. I mean, in reality, foldr is (almost) the induction principle for natural numbers! Given a property p that you want to prove for all natural numbers n, the induction principle reads

    induction :: forall p .
         (forall n . p n -> p (S n))   -- induction step
      -> p Z                           -- induction base
      -> (forall n . p n)              -- it holds!

Similarly, the right type of foldr is

    foldr :: forall b a .
         (forall n . a -> b n -> b (S n))
      -> b Z
      -> (forall n . List a n -> b n)

or without the superfluous foralls

    foldr :: (forall n . a -> b n -> b (S n)) -> b Z -> List a n -> b n

The implementation is exactly the same

    foldr _ z Nil         = z
    foldr f z (Cons x xs) = f x (foldr f z xs)

Put differently, you just add the length parameter to b. For concat, we have to set

    b n = List a (Sum n m)

Given only List a (Sum n m), the Haskell type checker can't figure out that it is of the form b n for some type constructor b. The solution is to introduce a newtype to guide it

    newtype PlusList a m n = In { out :: List a (Sum n m) }

so that we now have b = (PlusList a m) and we can write

    concat :: forall a m n . List a n -> List a m -> List a (Sum n m)
    concat xs ys = out (foldr f z xs)
      where
        f :: a -> PlusList a m n -> PlusList a m (S n)
        f x b = In (cons x (out b))

        z :: PlusList a m Z
        z = In ys

I didn't test this code, but it should work ;)

Also I was unable to define the function 'fromList :: [a] -> ListN a n'. This makes sense of course because statically the size of the list is unknown. However, maybe existential quantification can help, but I'm not sure how.

The return type of fromList can't be

    fromList :: [a] -> List a n

since that's syntactic sugar for

    fromList :: forall n . [a] -> List a n

i.e. given a list, fromList returns one that has all possible lengths n. Rather, the type should be

    fromList :: [a] -> (exists n . List a n)

i.e. there exists a length which the returned list has. (Exercise: why is (exists n . [a] -> List a n) wrong?) The data type ListFinite from my other message on this thread does the existential quantification you want. With

    nil :: ListFinite a
    nil = IsFinite Nil

    cons :: a -> ListFinite a -> ListFinite a
    cons x (IsFinite xs) = IsFinite (Cons x xs)

we can write

    fromList :: [a] -> ListFinite a
    fromList []     = nil
    fromList (x:xs) = cons x (fromList xs)

Regards, apfelmus
[Haskell-cafe] Re: [Haskell] simple function: stack overflow in hugs vs none in ghc
are quite different from r0 and z0. Reducing r0 and x1 yields

    => let ts0 = x0 : ts1; x0 = 1
           ts1 = x1 : ts2; x1 = 2
           ts2 = 3:...
           z   = (r, tail ts1)
           r   = Just x1
           ts' = snd z
           z1  = many item ts'
           r1  = fst z1
           us1 = snd z1
           z0  = (r0, us1)
           r0  = x1:r1
           us0 = snd z0
       in (x0:r0, us0)

The general scheme should be clear now: z, r and ts' are temporary variables, and further reduction of r1, r2 and so on leads to a chain

    let x0 = 1; ts0 = x0 : ts1
        x1 = 2; ts1 = x1 : ts2
        x2 = 3; ts2 = x2 : ts3
        ...
        x8 = ..
        z   = (r, tail ts8)
        r   = Just x8
        ts' = snd z
        z8  = many item ts'
        r8  = fst z8
        us8 = snd z8
        z7  = (r7, us8)
        r7  = x8:r8
        us7 = snd z7
        ...
        z0  = (r0, us1)
        r0  = x1:r1
        us0 = snd z0
    in (x0:r0, us0)

So, after forcing the first component of the overall result to normal form, the result looks like

    (1:2:3:..., snd (_, snd (_, snd (_, ...))) )

and it seems that Hugs fails to evaluate the tail recursive chain of snd?? In the end, here's our decisive result: either Hugs or my analysis has a bug :D

Regards, apfelmus
[Haskell-cafe] Re: simple function: stack overflow in hugs vsnonein ghc
Claus Reinke wrote: the given example is too strict, the requirement is to generate the matched portion lazily, and return the tail (unconsumed portion). ah yes, without optimisations, Prelude.span builds up stack.

I don't quite understand why it does so at all.

    span p []          = ([],[])
    span p xs@(x:xs')
      | p x            = let (ys,zs) = span p xs' in (x:ys,zs)
      | otherwise      = ([],xs)

I mean, the third line can return a tuple and the first element of the first list immediately. Where does the stack space come from? True enough, the second component will be a tower

    snd (_, snd (_, snd (_, ...

but snd is tail recursive.

In principle the function should be capable of being written to run in constant space, which the given example does not. Btw, even with the space leak prevention from the Sparud paper, it will be a linear chain of indirections. So, span doesn't run in constant space at all! The problem is that when lazily deconstructing the first component, the second component has to be deconstructed, too (reduced partially), or it will leak space. I mean, fetching the x from (x:ys, id zs) should reduce the id, too. Of course, that should be up to the programmer's choice (partial reduction may be expensive), so is there a way to specify that in code?

Regards, apfelmus
[Haskell-cafe] Re: Shouldnt this be lazy too?
Vimal wrote: I was surprised to find out that the following piece of code:

    length [1..] > 10

isn't lazily evaluated! I wouldn't expect this to be a bug, but in this case, shouldn't the computation end when the length function evaluation goes something like:

    10 + length [11..]

That's the spirit, but you still need the right integer type for that :) I mean, Haskell does not magically detect that the 32(64)-bit integer (10 + length [11..]) :: Int is bigger than 10 :: Int. But by using Peano numbers, the comparison function can detect that, see also

    http://article.gmane.org/gmane.comp.lang.haskell.cafe/26329

Regards, apfelmus
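The Peano-number trick can be sketched directly (names are mine): a lazy unary length is built one constructor at a time, and the comparison only peels off as many successors as it needs, so even the length of an infinite list can be compared against 10.

```haskell
-- Lazy Peano naturals: genLength builds the length one S at a time,
-- and greaterThan stops as soon as one side runs out of successors.
data Nat = Z | S Nat

genLength :: [a] -> Nat
genLength []       = Z
genLength (_ : xs) = S (genLength xs)

fromInt :: Int -> Nat
fromInt 0 = Z
fromInt n = S (fromInt (n - 1))

greaterThan :: Nat -> Nat -> Bool
greaterThan Z     _     = False
greaterThan (S _) Z     = True
greaterThan (S m) (S n) = greaterThan m n

main :: IO ()
main = print (genLength [1 ..] `greaterThan` fromInt 10)
-- True, and it terminates: only 11 constructors of the infinite
-- length are ever demanded
```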
[Haskell-cafe] Re: Math.Statistics
ok wrote: I believe the author may have misunderstood "numerically stable". The obvious (sum xs) / (fromIntegral $ length xs) is fine for the mean.

That's probably my fault, out of ignorance. Do you know a good online resource about numeric stability? (I don't have the Knuth at home. Didn't he say something about the mean formula? Or was it the standard deviation?)

Regards, apfelmus
[Haskell-cafe] Re: isWHNF :: a - IO Bool ?
Tristan Allwood wrote:

> Does anyone know if there is a function that tells you if a haskell value
> has been forced or not? e.g.
>
>   isWHNF :: a -> IO Bool
>
>   let x = map succ [0..] in do
>     putStrLn . show =<< isWHNF x                -- False
>     putStrLn . show . head $ x
>     putStrLn . show =<< isWHNF x                -- True
>     putStrLn . show =<< isWHNF (Just undefined) -- True

Note that this function is not referentially transparent since

  isWHNF 2     = True
  isWHNF (1+1) = False

although 1+1 = 2. In other words, it messes up the language semantics
(extensional equality), which is bad.

Regards,
apfelmus
[Haskell-cafe] Re: isWHNF :: a - IO Bool ?
Tristan Allwood wrote:

> Does anyone know if there is a function that tells you if a haskell value
> has been forced or not? e.g. isWHNF :: a -> IO Bool
>
> apfelmus wrote:
>> Note that this function [isWHNF :: a -> Bool] is not referentially
>> transparent
>
> Indeed. Does it still mess up with the result in IO Bool (as was my
> intent)?

It depends, but I think yes. I mean, given extensional equality,

  isWHNF 2
  isWHNF (1+1)

still have to be the same IO actions, in the sense that there cannot be a
guarantee that the first always returns True during execution without the
second always returning True, too. That is, you may not depend on such a
property for proving that your program is correct, although you may use it
for performance (memory/time) characteristics (I don't know how you would
use isWHNF to do /that/, but it's a hypothetical possibility). In other
words, if your program output is correct with a fake nondeterministic
replacement like

  isWHNF x = do
    b' <- getMemoizedValueFor x
    if b' then return True else do
      b <- randomIO
      when b $ setMemoizedValueFor x True
      return b

then it's safe, otherwise it's not. But similarly to a memoization function
implemented with unsafePerformIO

  memoize :: Ord a => (a -> b) -> (a -> b)

you may well use the not-so-nondeterministic property of isWHNF to achieve
a time/space improvement.

Regards,
apfelmus
[Haskell-cafe] Re: SOE/GLFW compilation problems
Joel Stanley wrote:

> I've recently been able to get the SOE/GLFW code up and running from
> http://haskell.org/soe/software1.htm and things are working well under
> GHCi under Mac OS X. However, when attempting to compile a simple graphics
> script using ghc, I'm getting some unexpected link errors:
>
>   $ ghc -o mygfx mygfx.hs -iSOE/src
>   compilation IS NOT required
>   /usr/bin/ld: Undefined symbols:
>   _SOE_Blue_closure
>   _SOE_Red_closure
>   _SOE_closeWindow_closure
>   _SOE_drawInWindow_closure
>   _SOE_ellipse_closure
>   _SOE_getKey_closure
>   _SOE_openWindow_closure
>   _SOE_polyline_closure
>   _SOE_runGraphics_closure
>   _SOE_withColor_closure
>   collect2: ld returned 1 exit status
>
> I'm quite new to Haskell, so I'm not sure if there was something special I
> had to do in order to get the code for the closures generated by ghc.

You can just use --make and ghc will figure things out automatically

  ghc --make mygfx.hs -o mygfx -iSOE/src

(mygfx.hs must be the Main module for that to work, though.) If you want to
do it by hand/Makefile, you need to compile (ghc -c) the SOE code first and
then link the resulting object files (ghc -o mygfx SOE.o Draw.o ...). See

  http://www.haskell.org/ghc/docs/latest/html/users_guide/separate-compilation.html#using-make

for an example Makefile that shows how it's done.

Regards,
apfelmus
[Haskell-cafe] Re: Data types, opengl display loop and readIORef/writeIORef
bbrown wrote:

> This is more an aesthetics question, I have a simple opengl application
> that has a singleton like object I need to pass around to the display loop
> and possibly to the keyboard action handler. (pseudo code)
>
>   data SimpleMech = SimpleMech
>     { mechPos :: !MVector
>     , mechRot :: !MVector
>     } deriving (Read)
>
>   mainMech = initMech (0, 0, 0) (0, 0, 0)
>   mainMechRef <- newIORef mainMech
>   displayCallback $= displayGameLoop mainMechRef
>
>   rotateMech :: (Double, SimpleMech) -> SimpleMech
>   rotateMech (angle, mech) =
>     -- Calculate a new rotation
>     let (xrot,yrot,zrot) = mechRot mech
>         newyrot = newyrot + angle

                    ^^^ That should be  newyrot = yrot + angle

>     in SimpleMech { mechRot = (xrot, newyrot, zrot)
>                   , mechPos = mechPos mech }
>
>   displayGameLoop mechRef = do
>     mech <- get mechRef
>     mechRef $= rotateMech (0.1, mech)
>
> For some reason in that code above, the mechRef doesn't update the value I
> want to change. It is like I am getting an instance of the initial value
> back. Also, when I use writeIORef, e.g.
>
>   writeIORef mechRef (rotateMech (0.1, mech))
>
> I get stack overflow exceptions.

The bug above is probably responsible for both the stack overflows with
writeIORef and for a $= that doesn't update (it wants to update, but the
program is trapped in an infinite loop).
Btw, you don't need parentheses around function arguments, you can use
record update syntax, and functions with several arguments are usually
written in curried form:

  -- Calculate a new rotation
  rotateMech :: Double -> SimpleMech -> SimpleMech
  rotateMech angle mech = mech { mechRot = (xrot, yrot + angle, zrot) }
    where
    (xrot,yrot,zrot) = mechRot mech

  mainMech = initMech (0, 0, 0) (0, 0, 0)

  main = do
    mainMechRef <- newIORef mainMech
    displayCallback $= displayGameLoop mainMechRef

  displayGameLoop mechRef = do
    mech <- get mechRef
    mechRef $= rotateMech 0.1 mech

The latter can also be written as

  displayGameLoop mechRef = do
    mechRef $~ rotateMech 0.1

Regards,
apfelmus
[Haskell-cafe] EnableGUI hack changes working directory
Hello,

the EnableGUI hack for getting a window in GHCi on Mac OS X (10.3.9 for me)
unexpectedly changes the working directory for the running ghci [...]

  apfelmus$ ghci -i../../Hackage/SOE/src
  [...]
  GHC Interactive, version 6.6.1, for Haskell 98.
  *Main> :! pwd
  /Users/apfelmus/Documents/Programmierung/Haskell/Graphics/Kunst
  *Main> enableGUI >> run lambda
  Loading package OpenGL-2.2.1 ... linking ... done.
  Loading package GLFW-0.1 ... linking ... done.
  *Main> :! pwd
  /opt/local/lib/ghc-6.6.1

The effect is that reloading the Main module now fails

  *Main> :r
  no location info: can't find file: Main.hs

Any solution? Btw, using enableGUI exactly once at the beginning of the
ghci session doesn't work

  *Main> enableGUI
  interactive: ../../Hackage/SOE/src/EnableGUI.o:
    unknown symbol `_CPSEnableForegroundOperation'

unless ghci is started with the -framework Carbon option. But then, the
working directory remains fine, so I guess the change-of-working-directory
bug happens while dynamically loading frameworks/shared libraries. Is there
a way to automate

  ghci ... EnableGUI -framework Carbon

and typing enableGUI at the first prompt? Something like a custom shell
script ghci+gui or even ghci --with-gui?

Regards,
apfelmus
[Haskell-cafe] Re: symbol type?
Michael Vanier wrote:

> Yitzchak Gale wrote:
>> Ah. Perhaps Data.HashTable is what you are looking for then?
>
> Hmm, I was hoping for something that didn't involve side effects.

There's always Data.Map

  newtype Symbol = S { unS :: Integer } deriving (Eq, Ord)
  type SymbolTable = Data.Map.Map Symbol String

Regards,
apfelmus
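Fleshing out the suggestion above, a purely functional symbol table might look like this (a sketch; intern, emptyTable and the field names are mine, just to illustrate the idea — interning the same name twice yields the same Symbol, with no side effects):

```haskell
import qualified Data.Map as Map

newtype Symbol = S { unS :: Integer } deriving (Eq, Ord, Show)

-- Two maps: name -> symbol for interning, symbol -> name for printing.
data SymbolTable = SymbolTable
  { nextId :: Integer
  , byName :: Map.Map String Symbol
  , byId   :: Map.Map Symbol String
  }

emptyTable :: SymbolTable
emptyTable = SymbolTable 0 Map.empty Map.empty

intern :: String -> SymbolTable -> (Symbol, SymbolTable)
intern name t =
  case Map.lookup name (byName t) of
    Just s  -> (s, t)                 -- already known: reuse the symbol
    Nothing ->
      let s = S (nextId t)
      in ( s
         , SymbolTable (nextId t + 1)
                       (Map.insert name s (byName t))
                       (Map.insert s name (byId t)) )
```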
[Haskell-cafe] Can every monad can be implemented with Cont? (was: New slogan for haskell.org)
Don Stewart wrote:

> allbery: Didn't someone already prove all monads can be implemented in
> terms of Cont?
> Cont and StateT, wasn't it? And the schemers have no choice about running
> in StateT :)

You sure? I want to see the proof :) Last time I stumbled upon something
like this, the proof was to embed every monad m in the type

  type Cont m a = forall b . (a -> m b) -> m b

with an instance

  instance Monad (Cont m) where ...

_independent_ of whether m is a monad or not. The problem I see with it is
that we didn't really encode m with it since we're still dependent on
return and (>>=) via

  project :: Monad m => Cont m a -> m a
  project f = f return

and

  inject :: Monad m => m a -> Cont m a
  inject x = (x >>=)

I mean, the starting point for a concrete monad M are some primitive
operations like

  get :: M s
  put :: s -> M ()

and a function

  observe :: M a -> (S -> (a,S))

together with laws for the primitive operations (= operational semantics)

  observe (put s >>= x) = \_ -> observe (x ()) s
  observe (get   >>= x) = \s -> observe (x s ) s

and for return

  observe (return a) = \s -> (a,s)

Now, implementing a monad means to come up with a type M and functions (>>=)
and return that fulfill the monad laws. (In general, the result type of
observe is _not_ M a !) This can be done with the standard trick of
implementing stuff as constructors (see Unimo for details
http://web.cecs.pdx.edu/~cklin/papers/unimo-143.pdf). But - and that's the
problem - I don't see how it can be done with Cont in all cases. It works
for the above state monad (*) but what about primitives like

  mplus  :: m a -> m a -> m a
  callcc :: ((a -> m r) -> m r) -> m a

that have monadic arguments in a contravariant position (possibly even
universally quantified)?
Regards,
apfelmus

*: Here you go:

  type Observe s a = s -> (a,s)
  type State   s a = Cont (Observe s) a

  get     = \x -> (\s -> observe (x s ) s)  -- law for get
  put s   = \x -> (\_ -> observe (x ()) s)  -- law for put
  observe = \f -> f $ \a -> (\s -> (a,s))   -- law for observe (return a)
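The footnote can be made self-contained like this (the names St, ret, bind, get', put' are mine, chosen to avoid clashing with the Prelude); it is the state monad encoded as `forall b. (a -> s -> (b,s)) -> s -> (b,s)`, i.e. Cont over the Observe functor:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype St s a = St { unSt :: forall b. (a -> s -> (b, s)) -> s -> (b, s) }

ret :: a -> St s a
ret a = St (\k -> k a)                      -- law for observe (return a)

bind :: St s a -> (a -> St s b) -> St s b
bind m f = St (\k -> unSt m (\a -> unSt (f a) k))

get' :: St s s
get' = St (\k s -> k s s)                   -- law for get

put' :: s -> St s ()
put' s = St (\k _ -> k () s)                -- law for put

observe :: St s a -> s -> (a, s)
observe m = unSt m (\a s -> (a, s))
```

For example, reading the state, incrementing it and returning the old value behaves as the laws predict.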
[Haskell-cafe] Re: Filesystem questions
Yitzchak Gale wrote:

> I wrote:
>> ...a tool for recursing through directories... How about a built-in
>> function that represents a directory tree as a lazy Data.Tree?
>
> Bryan O'Sullivan wrote:
>> See System.FilePath.Find in
>> http://hackage.haskell.org/cgi-bin/hackage-scripts/package/FileManip-0.2
>
>   -- List all directories in the tree rooted at the given path
>   traverseDirectories :: MonadIO m =>
>     TraversalDirection -> FilePath -> ListT m Directory
>
> You could plug the above into your machinery for recursion predicates and
> all the other nice stuff. Or - getting back to the lazy Data.Tree idea -
> we could define TreeT, analogous to ListT:
>
>   newtype TreeT m a = NodeT (m (a, TreeT (ListT m) a))
>
> and give it a nice Monad instance. Then you can prune and filter trees in
> a natural way, inside the monad.

Eh, isn't that what ListT is already for? I mean,

  type Directory = FilePath

  contents :: MonadIO m => Directory -> m [FilePath]

  liftList :: Monad m => m [a] -> ListT m a
  runList  :: Monad m => ListT m a -> m [a]

allows you to prune the directory tree in whatever way you like it. Here's
an example top-down traversal that lists all non-directories:

  allFiles :: MonadIO m => Directory -> m [FilePath]
  allFiles d = runList $ do
    f <- liftList $ contents d
    if isDirectory f then liftList (allFiles f) else return f

In other words, ListT is like one of those iterators I keep hearing about
from the Python/Java/.../imperative world (except that you can't suspend a
traversal once started).

Regards,
apfelmus
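The traversal above can be run without touching a real file system. Below is a sketch using the classic "ListT m a = m [a]" formulation over a made-up file system (fakeFS, isDirectory and contents are stand-ins for real IO operations, run here in Identity):

```haskell
import Control.Monad (ap, liftM)
import Data.Functor.Identity (Identity (..))

newtype ListT m a = ListT { runList :: m [a] }

liftList :: m [a] -> ListT m a
liftList = ListT

instance Monad m => Functor (ListT m) where fmap = liftM
instance Monad m => Applicative (ListT m) where
  pure x = ListT (return [x])
  (<*>)  = ap
instance Monad m => Monad (ListT m) where
  m >>= f = ListT $ do
    xs  <- runList m
    xss <- mapM (runList . f) xs
    return (concat xss)

type Directory = FilePath

-- A fake file system: each directory paired with its entries.
fakeFS :: [(Directory, [FilePath])]
fakeFS = [("/", ["/a", "/d"]), ("/d", ["/d/b"])]

isDirectory :: FilePath -> Bool
isDirectory p = p `elem` map fst fakeFS

contents :: Directory -> Identity [FilePath]
contents d = Identity (maybe [] id (lookup d fakeFS))

-- Top-down traversal listing all non-directories.
allFiles :: Directory -> Identity [FilePath]
allFiles d = runList $ do
  f <- liftList (contents d)
  if isDirectory f then liftList (allFiles f) else return f
```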
[Haskell-cafe] Re: On the verge of ... giving up!
Vimal wrote:

> I have been trying my best to read about Haskell from the various
> tutorials available on the internet and blogs. [...] So, I requested my
> institute to buy Dr. Graham Hutton's book. I would be getting hold of that
> quite soon, and am willing to start from the beginning.

IMHO, the best way to learn Haskell is to learn it from a textbook. That's
basically all there is to it :) I mean, the same goes for Theoretical
Computer Science, Mathematics, Physics etc. I think that the key properties
of a textbook are

  1) printed on paper
  2) well-written

Of course, if a book doesn't have property 2), use another one. An online
book satisfying 2) can be made to satisfy 1) by printing it out. In other
words, anything goes that fulfills 1) and 2). And since reading two
textbooks (in parallel) is even better than reading only one, I'd
additionally recommend Bird's "Introduction to Functional Programming using
Haskell". For other books, see also

  http://haskell.org/haskellwiki/Books_and_tutorials#Textbooks

Regards,
apfelmus
[Haskell-cafe] Re: On the verge of ... giving up!
Brian Hurt wrote:

> I mean, contemplate this trivial exercise for a moment: write a program
> that reads from stdin a series of numbers (one number per line), and
> writes out the sum of the last n numbers. This is a trivial problem, and I
> have no doubt that someone who knows Haskell better than I will reply to
> this email with a single line of code that does it.

Sorry, I can't resist :)

  main n = print . sum . map read . take n . reverse . lines =<< getContents

> I'm not saying that it's impossible to go directly to Haskell, I'm saying
> that it's just very very hard. [...] I'm going to offer an opinion here
> that's likely to be controversial (in this forum): people new to
> functional programming shouldn't learn Haskell first. They should start
> with either Ocaml or SML first. If it makes it easier to accept this
> argument, you can consider Ocaml and SML as Haskell with training wheels.

I don't agree. At least, it was different for myself. Looking at the line
of code above, I can't help it, but I perceive Haskell as being the
_simplest_ programming language in the whole world. I had no trouble
learning it (step by step from a book), maybe because I've happily thrown
away everything I (thought I) knew (about programming). The reward was
worth it. Why do people want side effects? Purity is soo much simpler.

Regards,
apfelmus
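The pure part of the one-liner can be factored out and checked without stdin (sumLastN is my name for it):

```haskell
-- The pipeline from the one-liner, minus the IO at both ends.
sumLastN :: Int -> String -> Integer
sumLastN n = sum . map read . take n . reverse . lines

-- A complete program would then be:
--   main = print . sumLastN 3 =<< getContents
```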
[Haskell-cafe] Re: On the verge of ... giving up!
[EMAIL PROTECTED] wrote:

> However, arguably the biggest imperative for Haskell 98 was to remove
> features that would confuse undergraduates. [...] People want to write map
> instead of fmap. We could have come up with an alternative name for the
> list-version of map and not showed map to newbies.

Couldn't the "too much overloading for undergrads" issue be solved by
providing a LearningPrelude and a RealHackersPrelude? :) The condition is
that the former exports the same or fewer functions with the same or less
general types than the latter, so that function names are the same and
there's no infantilizing. A stronger condition would be that every valid
LearningPrelude program should be a valid RealHackersPrelude program. This
is probably preferred, but may be tricky due to overloading ambiguities.

Regards,
apfelmus
[Haskell-cafe] Re: On the verge of ... giving up!
Felipe Lessa wrote:

> apfelmus wrote:
>> Of course, the solution is to first drop n elements and then take tails
>> instead of dropping n elements every time.
>>
>>   map (drop n) . tails  =  tails . drop n
>>        O(m*n)                 O(m)
>
> Nice identity. I'll remember this one.

Oops, please don't, because it's wrong :)

  Data.List> let xs = [1..3]
  Data.List> map (drop 2) . tails $ xs
  [[3],[],[],[]]
  Data.List> tails . drop 2 $ xs
  [[3],[]]

The first one produces some extra empty lists at the end. In other words,
the left hand side and the right hand side have different lengths

  length . map (drop n) . tails  =  (+1) . length

but

  length . tails . drop n  =  (\k -> 1 + max 0 (k-n)) . length

But the wrong version looks much nicer :)

> Thanks for your great postings, apfelmus.

λ(^_^)λ

Regards,
apfelmus
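The two length formulas above can be checked directly (lhsLen/rhsLen are my throwaway names):

```haskell
import Data.List (tails)

-- Lengths of the two sides of the (wrong) identity.
lhsLen, rhsLen :: Int -> [a] -> Int
lhsLen n = length . map (drop n) . tails
rhsLen n = length . tails . drop n
```

On xs = [1..3] with n = 2, the left side has length 4 = length xs + 1, while the right side has length 2 = 1 + max 0 (length xs - 2), confirming the two sides differ.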
[Haskell-cafe] Existential types (Was: Type vs TypeClass duality)
TJ wrote:

> data Showable = forall a. Show a => Showable a
> stuff = [Showable 42, Showable "hello", Showable 'w']
>
> Which is exactly the kind of 2nd-rate treatment I dislike. I am saying
> that Haskell's type system forces me to write boilerplate.

Nice :) I mean, the already powerful Hindley-Milner type system is free of
type annotations (= boilerplate). It's existential types and other
higher-rank types that require annotations, because type inference in full
System F (the basis of Haskell's type system, so to speak) is not
decidable. In other words, there is simply no way to have System F without
boilerplate. That being said, there is still the quest for a minimal amount
of boilerplate, and in the right place. That's quite a hard problem, but
people are working on that, see for instance

  http://research.microsoft.com/~simonpj/papers/gadt/index.htm
  http://research.microsoft.com/~simonpj/papers/higher-rank/index.htm
  http://research.microsoft.com/users/daan/download/papers/mlftof.pdf
  http://research.microsoft.com/users/daan/download/papers/existentials.pdf

> [exists a. Show a => a]
>
> I actually don't understand that line. Substituting forall for exists,
> someone in my previous thread said that it means every element in the list
> is polymorphic, which I don't understand at all, since trying to cons
> anything onto the list in GHCi results in type errors.

Let's clear the eerie fog surrounding universal quantification in this
thread.

-+-

The mathematical symbol for
  forall is ∀ (Unicode)
  exists is ∃

-+-

∀a.(a -> a) means: you give me a function (a -> a) that I can apply to
_all_ argument types a I want.

∃a.(a -> a) means: you give me a function (a -> a) and tell me that _there
is_ a type a that I can apply this function to. But you don't tell me the
type a itself. So, this particular example ∃a.(a -> a) is quite useless and
can be replaced with ().

-+-

A more useful example is ∃a. Show a => a, i.e.
  ∃a.(a -> String, a)

So, given a value (f,x) :: ∃a.(a -> String, a), we can do f x :: String,
but that's pretty much all we can do. The type is isomorphic to a simple
String:

  ∃a.(a -> String, a) ~ String

So, instead of storing a list [∃a. Show a => a], you may as well store a
list of strings [String].

-+-

Since ∀ and ∃ are clearly different, why does Haskell have only one of them
and even uses ∀ to declare existential types? The answer is the following
relation:

  ∃a.(a -> a) = ∀b. (∀a.(a -> a) -> b) -> b

So, how to compute a value b from an existential type ∃a.(a -> a)? Well, we
have to use a function ∀a.(a -> a) -> b that works for any input type
(a -> a), since we don't know which one it will be. More generally, we have

  ∃a.(f a) = ∀b. (∀a.(f a) -> b) -> b

for any type f a that involves a, like

  f a = Show a => a
  f a = a -> a
  f a = (a -> String, a)

and so on. Now, the declaration

  data Showable = forall a. Show a => Showable a

means that the constructor Showable gets the type

  Showable :: ∀a. Show a => a -> Showable

and the deconstructor is

  caseS :: Showable -> ∀b. (∀a.(Show a => a) -> b) -> b
  caseS sx f = case sx of { Showable x -> f x }

which is the same as caseS :: Showable -> ∃a. Show a => a. GADT-notation
clearly shows the ∀

  data Showable where
    Showable :: ∀a. Show a => a -> Showable

-+-

The position of the quantifier matters.

- Exercise 1: Explain why [∀a.a], ∀a.[a], [∃a.a] and ∃a.[a] are all
  different.

- Exercise 2: ∀ can be lifted along function arrows, whereas ∃ can't.
  Explain why

    String -> ∀a.a   =   ∀a.(String -> a)
    String -> ∃a.a  =/=  ∃a.(String -> a)

  Since ∀ can always be lifted to the top, we usually don't write it
  explicitly in Haskell.

-+-

Existential types are rather limited when used for OO-like stuff but are
interesting for ensuring invariants via the type system or when
implementing algebraic data types. Here is the mother of all monads in
GADT-notation

  data FreeMonad a where
    return :: a -> FreeMonad a
    (>>=)  :: ∀b. FreeMonad b -> (b -> FreeMonad a) -> FreeMonad a

Note the existential quantification of b.
(The ∀b can be dropped, like the ∀a has been.)

Regards,
apfelmus
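The Showable example from the thread can be made self-contained; the Show instance is my addition here so the heterogeneous list can actually be printed:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Showable = forall a. Show a => Showable a

-- Deconstructing: the only thing we can do with the hidden a is show it.
instance Show Showable where
  show (Showable x) = show x

stuff :: [Showable]
stuff = [Showable (42 :: Int), Showable "hello", Showable 'w']
```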
[Haskell-cafe] Re: XML parser recommendation?
Uwe Schmidt wrote:

> The hashtable approach would of course reduce memory usage,

Note that hashtables are evil :) I'm all for tries instead.

> but this would require a global change of the processing model: A document
> then no longer consists of a single tree, it always consists of a pair of
> a tree and a map.

Ah! I got struck by a trick: it's possible to store every tag name in full
only once but still present a simple tree with full tag names to the user.
You simply share all the tag names. For instance, in

  let x = "mytagname" in Tree (Tag x) [Tree (Tag x) [Text "foobar"]]

the string "mytagname" is stored in memory only once, although there are two
tags with this name. When parsing an XML file, you look up every parsed tag
name in a finite map (i.e. the trie). If it's already in, you don't store
the parsed tag name in the XML tree but the one stored at the leaf in the
trie. Of course, these two strings are equal. But they're not (pointer-)
identical! After parsing, all equal tag names now are pointers to one and
the same leaf in the finite map. You can drop the finite map structure
afterwards, the pointers to the leaves will remain. That would be quite the
memory reduction for large XML trees, which are likely to have many many
identical tag names.

Regards,
apfelmus
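The sharing trick can be sketched as a plain tree transformation (Tree and shareTags are made-up names, not HXT's types): thread a Data.Map through the traversal and, on a lookup hit, reuse the string already stored in the map. The observable tree is unchanged; only the heap sharing differs, so equal tag names end up pointing at one and the same string.

```haskell
import qualified Data.Map as Map

data Tree = Tree String [Tree] deriving (Eq, Show)

-- Replace every tag name by the first stored copy of an equal name.
shareTags :: Tree -> Tree
shareTags = fst . go Map.empty
  where
    go tbl (Tree name kids) =
      let (name', tbl') = case Map.lookup name tbl of
            Just shared -> (shared, tbl)              -- reuse stored copy
            Nothing     -> (name, Map.insert name name tbl)
          (kids', tbl'') = goList tbl' kids
      in (Tree name' kids', tbl'')
    goList tbl []     = ([], tbl)
    goList tbl (k:ks) =
      let (k',  tbl')  = go tbl k
          (ks', tbl'') = goList tbl' ks
      in (k' : ks', tbl'')
```

Pointer identity itself is not observable from pure code, so the test below can only check that the structure is preserved.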
[Haskell-cafe] Re: Existential types (Was: Type vs TypeClass duality)
Dan Weston wrote:

> Thanks for the concise explanation. I do have one minor question though.
>
>> A more useful example is ∃a. Show a => a, i.e. ∃a.(a -> String, a)
>> So, given a value (f,x) :: ∃a.(a -> String, a), we can do f x :: String,
>> but that's pretty much all we can do. The type is isomorphic to a simple
>> String.
>
> Don't you mean *epimorphic* instead of isomorphic (not that it matters)?
> For any existential type a of cardinality less than that of String, it is
> isomorphic, but if a = String, then by Cantor's theorem String -> String
> has a cardinality greater than String and cannot be isomorphic to it.

I do mean isomorphic. The point is that because we can't know what a is,
the only thing we will ever be able to do with it is plug it into the
function given. So, there is no difference to using the function result in
the first place. To show that formally, one has to use _parametricity_,
which is basically the fact that the intuition about ∀ (and ∃) is true. For
instance, the intuition says that every function of type ∀a. a -> a has to
be the identity function (or _|_, but let's ignore that) because it may not
"look into" a. Such intuitive statements can be translated into math-speak
and are then called parametricity.

Regards,
apfelmus
[Haskell-cafe] Re: Existential types (Was: Type vs TypeClass duality)
Peter Hercek wrote:

> I do not see why forall can be lifted to the top of function arrows. I
> probably do not understand the notation at all. They all seem to be
> different to me.
>
>   String -> ∀a.a
>
> a function which takes strings and returns a value of all types together,
> for any input string (so only bottom could be the return value?)
>
>   ∀a.(String -> a)
>
> a function which takes strings and returns a value of the type we want to
> be returned (whichever one it is; in given contexts the return value type
> is the same for all input strings)

It's probably best to interpret ∀a as "you are to choose any type a and I
will comply". Then, it doesn't matter whether you first supply a String and
then choose some a, or whether you first choose some a and then supply a
String. In both cases, the choice is yours and independent of the String.
So, the types String -> ∀a.a and ∀a.(String -> a) are isomorphic. (And
you're right, the only thing this function can do is to return _|_.)

In contrast, ∃a means "I choose a concrete type a at will and you will have
to deal with all of my capricious choices".

Regards,
apfelmus
[Haskell-cafe] Re: Existential types (Was: Type vs TypeClass duality)
Alfonso Acosta wrote:

> This bit was specially helpful:
>> So, how to compute a value b from an existential type ∃a.(a -> a)? ...
> Could you give a specific example of computing existential types?

Well, the word "compute" was probably not a good choice. I meant the
following question: given a type ∃a.(a, a -> String), how can I pattern
match on it? And how to construct values of that type? (Note the different
example, since ∃a.(a -> a) pretty much behaves like the singleton type ().)

That's probably best detailed with a comparison to the finite sum type
Either A B. (∃ is like an infinite sum.) (I'll abbreviate concrete types
like Int with A,B,... since that's less to write.)

For constructing a value of type Either A B, we have two functions

  Left  :: A -> Either A B
  Right :: B -> Either A B

Likewise for ∃, we have functions

  fromA :: (A, A -> String) -> ∃a.(a, a -> String)
  fromB :: (B, ... etc. ...

but this time for all types A,B,... . We don't need infinitely many such
functions, one polymorphic function will do

  from :: ∀b. (b, b -> String) -> ∃a.(a, a -> String)

In fact, that's exactly what the constructor of

  data Showable = forall b. Showable (b, b -> String)

does:

  Showable :: ∀b. (b, b -> String) -> Showable

That's all there is to it. (To connect to TJ's original post, he basically
wants a language where you don't have to write down this function from or
Showable anymore, the compiler should infer it for you.)

Pattern matching is similar. For Either A B, we have case expressions

  foobar :: Either A B -> C
  foobar e = case e of
    Left  a -> foo a
    Right b -> bar b

You probably also know the Prelude function

  either :: ∀c. (A -> c) -> (B -> c) -> Either A B -> c

In fact, the case expression can be seen as a syntactic convenience for the
either function; any such pattern match with branches foo and bar can be
rewritten as

  foobar e = either foo bar e

(Exercise: Convince yourself that it's the same for the function maybe.
Exercise: Same for foldr (ok, ok, the situation is a bit different there).)
Now, ∃ also has a pattern match function. Naively, we would have to supply
an infinite amount of branches

  match :: ∀c. ((A, A -> String) -> c)
           -> ((B, B -> String) -> c)
           -> ...
           -> ∃a.(a, a -> String) -> c

but again, this is just one polymorphic branch

  match :: ∀c. (∀a. (a, a -> String) -> c) -> ∃a.(a, a -> String) -> c

Again, this is what happens when using a case expression for

  data Showable = forall b. Showable (b, b -> String)

  foobar :: Showable -> C
  foobar s = case s of
    Showable fx -> foo fx

The branch foo must have a polymorphic type ∀a. (a, a -> String) -> C.
That's all there is to understand pattern matching. Of course, all this
doesn't explain where existential types are useful. (TJ's post is one
example, but that's one of their least useful uses.) Actually, they show up
rather seldom.

Peter Hercek wrote:

>> More generally, we have ∃a.(f a) = ∀b. (∀a.(f a) -> b) -> b
> Is that by definition? Any pointers to an explanation why they are valid?

The right hand side is exactly the type of the match function (without the
last function argument). The idea is that this type can in fact be used as
an implementation for ∃, just like

  ∀c. (A -> c) -> (B -> c) -> c

can be used as an implementation for Either A B.

Alfonso Acosta wrote:

> The Haskell Wikibook is usually helpful but unfortunately it wasn't that
> clear in the case of existentials (for me at least). I think the
> existentials article misses the clarity shown by apfelmus' explanation and
> furthermore does not cover computing a value from an existential type
> directly. Maybe it would be a good idea to extend it.

Yes please! At the moment, I don't have the time to polish the article or
my e-mails myself. In any case, I hereby license my explanations about
existentials under the terms noted on
http://en.wikibooks.org/wiki/User:Apfelmus. (This also means that I don't
allow putting them on the haskellwiki, which has a more liberal license.)

> Thanks for posting this, I finally understand existentials!
λ(^_^)λ

Regards,
apfelmus
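The match function from the post above can be written out for the Showable type; the rank-2 argument plays the role of the polymorphic case branch:

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

data Showable = forall b. Showable (b, b -> String)

-- One polymorphic branch instead of infinitely many monomorphic ones.
match :: (forall a. (a, a -> String) -> c) -> Showable -> c
match branch (Showable p) = branch p
```

For example, `match (\(x, f) -> f x)` extracts the String that the hidden value can produce.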
Re: [Haskell-cafe] Existential types (Was: Type vs TypeClass duality)
Ryan Ingram wrote:

> On 10/24/07, apfelmus [EMAIL PROTECTED] wrote:
>> So, instead of storing a list [∃a. Show a => a], you may as well store a
>> list of strings [String].
>
> I've seen this before, and to some extent I agree with it, but it breaks
> down for bigger examples due to sharing. In most cases, x has a more
> compact representation than the result of evaluating show x. So if you
> keep a list of [show x, show y, show z] around for a long period of time,
> you're leaking space. Consider instead:
>
>   data StringFn where
>     StringFn :: forall a. a -> (a -> String) -> StringFn
>
>   applyStringFn :: StringFn -> String
>   applyStringFn (StringFn a f) = f a
>
>   [ StringFn x show, StringFn y show, StringFn z show ]
>
> Now, you use more time every time you compute the result, but StringFn
> has a compact representation; you're not leaking the space used for the
> string throughout the execution of the program, but only allocating it
> while it's used.

Indeed. (Although the effect will be marginal in this particular example,
since the unevaluated thunk (show x) is represented as compactly as x. But
the time-space trade-off would matter when strings from the list are used
several times.)

In a sense, that's also the reason why stream fusion à la Duncan + Roman +
Don uses an existential type

  data Stream a where
    Stream :: ∀s. s -> (s -> Step a s) -> Stream a

  data Step a s = Done | Yield a s | Skip s

I mean, every stream can be represented by the list of its results, but the
observation for fusion is that there is a much more compact representation
for the state (like an integer for [1..] or the source list in
map f . map g $ [x,y,...]).

Regards,
apfelmus
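A minimal, self-contained sketch of that Stream type: the state s is existentially hidden, a stream is just a seed plus a stepper, and operations like map rewrite only the stepper while the compact state is kept (fromList, toList and mapS are my names for the usual conversion functions):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Step a s = Done | Yield a s | Skip s

data Stream a = forall s. Stream s (s -> Step a s)

-- The remaining list itself serves as the state.
fromList :: [a] -> Stream a
fromList xs0 = Stream xs0 step
  where
    step []     = Done
    step (x:xs) = Yield x xs

toList :: Stream a -> [a]
toList (Stream s0 next) = go s0
  where
    go s = case next s of
      Done       -> []
      Skip s'    -> go s'
      Yield a s' -> a : go s'

-- mapS keeps the compact state; only the stepper changes.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream s0 next) = Stream s0 next'
  where
    next' s = case next s of
      Done       -> Done
      Skip s'    -> Skip s'
      Yield a s' -> Yield (f a) s'
```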
[Haskell-cafe] Re: Existential types (Was: Type vs TypeClass duality)
Wouter Swierstra wrote:

>> In a sense, that's also the reason why stream fusion à la Duncan + Roman
>> + Don uses an existential type
>>
>>   data Stream a where
>>     Stream :: ∀s. s -> (s -> Step a s) -> Stream a
>>
>>   data Step a s = Done | Yield a s | Skip s
>
> I thought there was a deeper reason for this, but I might be wrong. [..]
>
>   data CoFix f = In (f (CoFix f))
>
> Then the unfold producing a value of type CoFix f has type:
>
>   forall a . (a -> f a) -> a -> CoFix f
>
>   (exists a . (a -> f a, a)) -> CoFix f
>
> Using the co-Church encoding of colists yields the above Stream data type,
> where f corresponds to the Step data type. (The Skip constructor is a bit
> of a fix to cope with filter and friends.)

Yes. I mean, I want to say that the efficiency gain of fusion is based on
the fact that the state (the seed a) can be represented more compactly than
the resulting CoFix f. I.e. while

  ∃a. (a -> f a, a)  =~=  CoFix f

the former type may be a more compact representation than the latter,
demonstrating that an existential type may have performance benefits
compared to an isomorphic alternative. (This is even without sharing here;
Ryan's remark was about sharing issues.)

Regards,
apfelmus
[Haskell-cafe] Re: viewing HS files in Firefox
Thomas Schilling wrote:

> Isaac Dupree wrote:
>> When I try to go to one of the Module.hs files, e.g. on
>> darcs.haskell.org, it now has type HS and Firefox refuses to display it
>> (and only lets me download it). Does anyone know how to make Firefox
>> treat certain file types as others (HS as plain text, in particular)? so
>> that I can browse them with any convenience
>
> I believe those kinds of problems have to do with the MIME type on the
> server side: the server uses text/x-haskell. For Firefox to display the
> document inline it probably has to be text/plain. Not sure what the proper
> fix is, though.

I think so, too. Isn't there a way to reassign MIME types to
browsers/plugins via some hidden preferences in Firefox/Camino? On Mac OS
9, the old Netscape 4.5 allowed me to do that. I believe that Internet
Explorer could do that as well via a standard system-wide preference.

Regards,
apfelmus
[Haskell-cafe] Semantics of uniqueness types for IO (Was: Why can't Haskell be faster?)
Stefan Holdermans wrote: Exposing uniqueness types is, in that sense, just an alternative to monadic encapsulation of side effects. While *World -> (a, *World) seems to work in practice, I wonder what its (denotational) semantics are. I mean, the two programs

  loop, loop' :: *World -> ((),*World)
  loop  w = loop w
  loop' w = let (_,w') = print "x" w in loop' w'

both have denotation _|_ but are clearly different in terms of side effects. (The example is from SPJ's awkward-squad tutorial.) Any pointers? Regards, apfelmus
[Haskell-cafe] Re: Semantics of uniqueness types for IO (Was: Why can't Haskell be faster?)
Brandon S. Allbery KF8NH wrote: apfelmus wrote: during function evaluation. Then, we'd need a purity lemma that states that any function not involving the type *World as in- and output is indeed pure, which may be a bit tricky to prove in the presence of higher-order functions and polymorphism. I mean, the function arrows are tagged for side effects in a strange way, namely by looking like *World -> ... -> (*World, ...). I don't quite see that; the Clean way looks rather suspiciously like my unwrapped I/O in GHC example from a couple weeks ago, so I have trouble seeing where any difficulty involving functions not using *World / RealWorld# creeps in. I will grant that hiding *World / RealWorld# inside IO is cleaner from a practical standpoint, though. Just not from a semantic one.

What do you mean? I mean, in Clean, we may ask the following question: are all functions of type, say,

  forall a . Either (a -> *World -> a) String -> [*World]

or

  Either (forall a . a -> *World -> a) String -> Maybe *World

pure? In Haskell, the answer to any such question is unconditionally yes (unless you're hacking with unsafePerformIO and GHC internals like RealWorld#, of course), even with plenty of appearances of the IO type constructor. But in Clean, functions may perform side effects; that's the only way to explain why the examples loop and loop' aren't the same. Regards, apfelmus
[Haskell-cafe] Re: Semantics of uniqueness types for IO (Was: Why can't Haskell be faster?)
Paul Hudak wrote:

  loop, loop' :: *World -> ((),*World)
  loop  w = loop w
  loop' w = let (_,w') = print "x" w in loop' w'

both have denotation _|_ but are clearly different in terms of side effects. One can certainly use an operational semantics such as bisimulation, but you don't have to abandon denotational semantics. The trick is to make output part of the final answer. For a conventional imperative language one could define, for example, a (lifted, recursive) domain:

  Answer = Terminate + (String × Answer)

and then define a semantic function meaning, say, such that:

  meaning loop  = _|_
  meaning loop' = "x", "x", ...

In other words, loop denotes bottom, whereas loop' denotes the infinite sequence of "x"s. There would typically also be a symbol to denote proper termination, perhaps . A good read on this stuff is Reynolds' book Theories of Programming Languages, where domain constructions such as the above are called resumptions, and can be made to include input as well. In the case of Clean, programs take as input a World and generate a World as output. One of the components of that World would presumably be standard output, and that component's value would be _|_ in the case of loop, and "x", "x", ... in the case of loop'. Another part of the World might be a file system, a printer, a missile firing, and so on. Presumably loop and loop' would not affect those parts of the World.

Ah, so the denotational semantics would be a bit like good old stream-based IO. However, we have to change the semantics of ->; there's no way to put the side effects in *World only. I mean, the problem of both loop and loop' is that they never return any world at all; the side effects occur during function evaluation. Then, we'd need a purity lemma that states that any function not involving the type *World as in- and output is indeed pure, which may be a bit tricky to prove in the presence of higher-order functions and polymorphism.
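The resumption domain Paul describes can be transcribed directly into Haskell as a lazy (co-)datatype; a sketch, where meaningLoop and meaningLoop' are hypothetical names for the denotations of loop and loop':

```haskell
-- Answer = Terminate + (String × Answer), as a lazy (co-)datatype.
data Answer = Terminate | Output String Answer

-- loop never produces any output: its denotation is bottom.
meaningLoop :: Answer
meaningLoop = meaningLoop

-- loop' prints "x" forever: an infinite sequence of outputs.
meaningLoop' :: Answer
meaningLoop' = Output "x" meaningLoop'

-- Observe a finite prefix of the output trace.
outputs :: Answer -> [String]
outputs Terminate    = []
outputs (Output s k) = s : outputs k
```

The two denotations are now distinguishable: every finite prefix of meaningLoop' is defined, while meaningLoop has none.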
I mean, the function arrows are tagged for side effects in a strange way, namely by looking like *World -> ... -> (*World, ...). In contrast, we can see IO a as an abstract (co-)data type subject to some straightforward operational semantics; no need to mess with the pure ->. So, in a sense, the Haskell way is cleaner than the Clean way ;) Regards, apfelmus
[Haskell-cafe] Re: FP design
Thomas Schilling wrote: Levi Stephen wrote: I was wondering how most people work when designing a functional program. Do you create data structures/types first? Do you work from some type signatures? But there's a third thing that you can do, other than start implementing: think about the laws/properties that should hold. That's not always simple; in fact, it rarely is.

Yes, the classic approach: systematically derive programs from their specification. The classic paper on that is

  John Hughes. The Design of a Pretty-printing Library.
  http://citeseer.ist.psu.edu/hughes95design.html

with a follow-up

  Philip Wadler. A prettier printer.
  http://decenturl.com/homepages.inf.ed/wadler-98-prettier-printer

The man who derives all his programs from specification is Richard Bird. You may want to have a look at his recent sudoku solver

  Richard Bird. A program to solve Sudoku.
  Slides: http://icfp06.cs.uchicago.edu/bird-talk.pdf

where he starts with an apparently correct but hopelessly slow specification and transforms it into a blazingly fast one. His introduction to Haskell

  Richard Bird. Introduction to Functional Programming using Haskell (2nd edition).
  http://decenturl.com/amazon/bird-introduction-functional

emphasizes the classic style, too. You may think this is all nice, but my problem is too 'soft' for mathematical laws and properties and such. Well, if you don't search, you won't find. Here's an example for a soft problem domain:

  Simon Peyton Jones, Jean-Marc Eber, Julian Seward.
  Composing contracts: an adventure in financial engineering.
  http://decenturl.com/research.microsoft/spj-financial-contracts

Of course, the laws of nature governing your problem domain may be hard to find, so it may be worthwhile to just implement and let some law intuition guide you. Well-known example: darcs. Regards, apfelmus
[Haskell-cafe] Re: Doubly-linked zipper list w/ insert implementation
Justin Bailey wrote: The other day I decided to implement a ring buffer with a current element (i.e. a doubly-linked zipper list). In order to allow inserts (and, in the future, deletes and updates), I have a special sentinel element called Join in the structure. When inserting, I find the join first, insert and then rebuild the buffer using circular programming techniques. This also allows the buffer to be converted back to a list. The current element can be changed by rotating right or left, which never fails. Rotating n positions takes n steps. I'm posting it here for comments and feedback. How could the structure be smarter? Would storing a unique ID with each element make more sense? Any comments on the space behavior under insert and rotates? I wanted to maximize sharing. Thanks in advance.

Do you really need to realize the cycle by sharing? I mean, sharing doesn't go well with insertion / updates / deletion, since each of these operations breaks it and needs to restore it everywhere. In other words, your insert takes O(n) time. I'd simply drop the sharing and use two double-ended queues (or something like that) instead

  data Ring a = Ring (DeQueue a) a (DeQueue a)

  -- pseudo-code missing lots of cases. I want views!
  left (Ring (l' : ls : l) x (r : rs : r')) = Ring (ls : l : x) r (rs : r' : l')

This way, you can implement update operations in O(1) time instead of O(n). With a fancy random-access queue like Data.Sequence, you can even have rotations like rotL xs n in O(log n) time. (I keep mixing up the meaning of rotL and rotR; does L push the current element to the left, or does it rotate the ring clockwise?) Regards, apfelmus
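A minimal sketch of the queue-based idea with Data.Sequence; the focus-first representation here is my own choice for illustration, not Justin's code:

```haskell
import Data.Sequence (Seq, ViewL(..), viewl, (|>), fromList)
import Data.Foldable (toList)

-- A ring as a sequence with the current element at the front.
type Ring a = Seq a

-- Rotate the focus one step: move the front element to the back.
-- Both viewl and (|>) are O(1), so a single rotation is O(1).
rotL :: Ring a -> Ring a
rotL s = case viewl s of
  EmptyL  -> s
  x :< xs -> xs |> x
```

Insertion at the focus is then just (<|), and n rotations cost O(n) instead of rebuilding a shared cycle.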
[Haskell-cafe] Re: Should I step into a minefield? / Writing a trading studio in Haskell
Manuel M T Chakravarty wrote: Joel Reymont: I need to pick among the usual list of suspects for a commercial product that I'm writing. The suspects are OCaml, Haskell and Lisp, and the product is a trading studio. My idea is to write something like TradeStation [1] or NinjaTrader, only for the Mac. It would be quite nifty to use SPJ's financial combinator approach and, for example, embed Yi (Haskell editor). One of the key features of the product would be the ability to model your trading logic using a trading DSL. I'm thinking that this DSL could well be Haskell, but I'm concerned about stepping into a minefield. [...] I just can't see how laziness can help in processing real-time price data. Laziness, in particular, and Haskell, in general, is going to help you with the EDSL (it's no coincidence that most EDSL work was done in Haskell).

Yes, you definitely want a host language with non-strict semantics for EDSLs. For instance, custom control structures like

  if' p a b = if p then a else b

  when p m = if p then m else return ()

lose lots of their usefulness in a strict language. Same for parser combinators, since you can't even define

  many :: Parser a -> Parser [a]
  many p = empty ||| (liftM2 (:) p (many p))

(Sooner or later, you probably want observable sharing for such recursive values, but that's another story.) In any case, the problem of choosing a host language and achieving good user experience is probably negligible compared to the difficulty of designing a powerful trading DSL in the first place :) Regards, apfelmus
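A small self-contained illustration of why those control structures rely on non-strict semantics (if' is from the post; the test value safe is mine):

```haskell
-- User-defined control structures, viable only under non-strict semantics.
if' :: Bool -> a -> a -> a
if' p a b = if p then a else b

when' :: Monad m => Bool -> m () -> m ()
when' p m = if p then m else return ()

-- In a strict language, the argument undefined would be evaluated
-- before if' is even entered; here the untaken branch is never touched.
safe :: Int
safe = if' True 1 undefined
```

The same property is what lets many's recursive call terminate: the recursion is only unrolled as far as the input demands.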
[Haskell-cafe] Re: About Fibonacci again...
It's not over yet; the rabbits are still going strong on the Fibonacci side. Jerzy Karczmarczuk wrote: the solution of Alfonso Acosta:

  rs_aa = let acum a1 a2 = a2 ++ acum (a1++a2) a1 in 1 : acum [1] [0]

We can apply the difference-list trick to obtain

  f 0 = (0:)
  f 1 = (1:)
  f n = f (n-1) . f (n-2)

i.e.

  rs_aa' = let accum r1 r2 = r2 . accum (r1 . r2) r1 in 1 : accum (1:) (0:) undefined

Finally, Bernie Pope's original:

  mrs = [0] : [1] : zipWith (++) (tail mrs) mrs
  rs_bp = [(mrs !! n) !! (n-1) | n <- [1..]]

produced something which also begins with a list of lists. It seems that it is faster than the solution of A.B., but I have also the impression that it generates a lot of temporary rubbish: the printing of a long sequence periodically stops as if GC sneaked in.

and

  mrs' = (0:) : (1:) : zipWith (.) (tail mrs') mrs'

To speed up Bernie's infinite list flattening, we note that for every generalized Fibonacci sequence f n = f (n-1) + f (n-2) we have the following telescope sum

  f (n+2) = f 1 + f 0 + f 1 + f 2 + ... + f n
          = f 2 + f 1 + f 2 + ... + f n
          = f 3 + f 2 + ... + f n

i.e.

  f (n+2) = f 1 + foldr1 (+) [f k | k <- [0..n]]

This identity allows us to write

  f ∞ = f 1 + foldr1 (+) [f k | k <- [0..]]

and hence

  rs_bp' = 1 : foldr1 (.) mrs' undefined

To close the circle, Alfonso's solution is in fact the deforestation of this one. Regards, apfelmus
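To see the difference-list version in action, here's a self-contained transcription of f; the expected prefixes follow from f 2 = (1:) . (0:), and each f n prepends the n-th rabbit word, whose length is the (n+1)-st Fibonacci number:

```haskell
-- Rabbit (Fibonacci) words as difference lists:
-- f n prepends the n-th word, and composition concatenates in O(1).
f :: Int -> [Int] -> [Int]
f 0 = (0:)
f 1 = (1:)
f n = f (n-1) . f (n-2)
```

For example, f 4 [] unfolds as f 3 (f 2 []) = [1,0,1] ++ [1,0], so no intermediate lists are rebuilt on the left.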
[Haskell-cafe] Re: Doubly-linked zipper list w/ insert implementation
Justin Bailey wrote: The other day I decided to implement a ring buffer with a current element (i.e. a doubly-linked zipper list). [...] p.s. The original motivation for writing this was to model cellular automata. The CA world is circular, so that got me thinking about a structure that made connecting the ends easy to do.

Note that depending on your concrete setting, you may not need a fancy ring structure for cellular automata. And with simple automata like

  c'_i = c_(i-1) `xor` c_i `xor` c_(i+1)

it may even be easier to generate fresh rings for each step in the automaton:

  data Context a = Context [a] a [a]

  -- rotate left
  rotL (Context ls x (r:rs)) = Context (x:ls) r rs

  -- description of a cellular automaton
  type Rule a = Context a -> a

  example :: Rule Bool
  example (Context (cm:_) c (cp:_)) = cm `xor` c `xor` cp

  -- run a cellular automaton on an initial band of cells
  -- which is considered to be cyclic, i.e. a cylinder
  automate :: Rule a -> [a] -> [[a]]
  automate f xs = iterate (take n . map f . mkContexts) xs
    where
    -- length of the cell band
    n = length xs
    mkContexts xs = iterate rotL $ Context (cycle $ reverse xs) (head xs) (tail $ cycle xs)

Here, mkContexts xs initializes a new infinite cyclic ring for xs and rotates it left ad infinitum. Regards, apfelmus
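Putting the pieces together, a runnable version of the cyclic automaton; same algorithm as sketched above, with the xor rule spelled out:

```haskell
import Data.Bits (xor)  -- Bool has a Bits instance, so xor works on cells

-- Infinite left and right neighbour lists around a focused cell.
data Context a = Context [a] a [a]

-- Rotate the focus one cell to the right neighbour.
rotL :: Context a -> Context a
rotL (Context ls x (r:rs)) = Context (x:ls) r rs
rotL c                     = c  -- unreachable: contexts below are infinite

type Rule a = Context a -> a

-- c'_i = c_(i-1) `xor` c_i `xor` c_(i+1)
rule :: Rule Bool
rule (Context (cm:_) c (cp:_)) = cm `xor` c `xor` cp
rule _ = error "impossible: contexts are infinite"

-- Run the automaton on a cyclic band of cells, one fresh ring per step.
automate :: Rule a -> [a] -> [[a]]
automate f xs = iterate step xs
  where
    n = length xs
    step ys      = take n . map f . iterate rotL $ mkContext ys
    mkContext ys = Context (cycle (reverse ys)) (head ys) (tail (cycle ys))
```

On the band [True,False,False], the left neighbour of cell 0 is the last cell, so one step yields [True,True,True], which is then a fixed point of the rule.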
[Haskell-cafe] Queues and Rings (Re: Doubly-linked zipper list w/insert implementation)
(Btw, this ring stuff could be relevant for Xmonad; I don't know whether the workspace/window-ring implementation there is O(1). Not that it matters for 1000 windows, of course :)

Justin Bailey wrote: apfelmus wrote: Do you really need to realize the cycle by sharing? I mean, sharing doesn't go well with insertion / updates / deletion, since each of these operations breaks it and needs to restore it everywhere. In other words, your insert takes O(n) time. I'd simply drop the sharing and use two double-ended queues (or something like that) instead Very good point, and much easier to implement with Data.Sequence to boot. All that circular programming made my brain hurt.

There's also a direct and rather lightweight possibility to implement rings in the spirit of the classic O(1) lazy amortized functional queue implementation. This post will try to explain it. Here's the basic idea for implementing queues in Haskell: we have a front list to fetch items (head, tail) and a rear list to insert items (snoc) into the queue.

  data Queue a = Queue [a] [a]

  empty = Queue [] []
  head (Queue (x:f) r) = x
  tail (Queue (x:f) r) = Queue f r
  snoc (Queue f r) x   = Queue f (x:r)

Of course, this doesn't quite work yet; at some point we have to feed the items from the rear list into the front list. For example, the last possibility to do so is when the front list becomes empty.

  balance (Queue [] r) = Queue (reverse r) []
  balance q            = q

  tail (Queue (x:f) r) = balance $ Queue f r
  snoc (Queue f r) x   = balance $ Queue f (x:r)

(Calling balance maintains the invariant that the front list is never empty except when the whole queue is empty, too.) Now, how much time will a single snoc or tail operation take? In the worst case, tail triggers a reverse and takes O(n) time, whereas snoc always takes constant time. That's a big blow to our goal of O(1) time for both.
But luckily, queues don't come out of thin air; they all have to be constructed from the empty queue by a sequence of applications of snoc and tail. Can the heavy O(n) cost of a worst-case tail be spread over the many good cases of tail and snoc in that sequence? Yes, it can. To that end, we increase the price of each snoc by 1 time coin. So, each item of the rear list gets inserted with one extra coin as credit. With these credits, we can pay the whole length (rear list) cost of a reverse operation when it occurs, making tail O(1) again. This is also called _amortization_, and O(1) the _amortized_ cost of tail.

The above works fine if the queue is used in a single-threaded way, i.e. as an _ephemeral_ data structure. But it doesn't work anymore when a queue is used multiple times in a _persistent_ setting. Assuming that tail q triggers a reverse, the first evaluation of q1 in

  let q1 = tail q
      q2 = tail q
      q3 = tail q
      ...
  in ... q1 .. q2 .. q3

will use up all credits, and q2, q3, ... don't have any to spend and are back to worst-case behavior.

In the persistent setting, lazy evaluation comes to the rescue. The idea is to create the (yet unevaluated) call to reverse earlier, namely when the rear list has more elements than the front list.

  balance (Queue f r)
    | length r >= length f = Queue (f ++ reverse r) []
  balance q = q

(We assume that length has been made O(1) by storing the lengths explicitly.) Now, the O(length r) reverse will not be evaluated before having tailed through the previous front list with length f == length r items. Thus, we can spread the cost of reverse as debits over these elements. When finally executing reverse, its debits have already been paid off and tail is O(1) again. And once executed, lazy evaluation memoizes the result, so that sharing doesn't duplicate the work. (Note that strict languages without side effects are doomed to be slower when persistence matters. Ha!
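The complete lazy queue with explicit lengths might look like this; a sketch following the scheme above, with qhead/qtail/qsnoc named to avoid clashing with the Prelude:

```haskell
-- Front list with its length, rear list with its length.
data Queue a = Queue [a] !Int [a] !Int

qempty :: Queue a
qempty = Queue [] 0 [] 0

qsnoc :: Queue a -> a -> Queue a
qsnoc (Queue f lf r lr) x = qbalance (Queue f lf (x:r) (lr+1))

qhead :: Queue a -> a
qhead (Queue (x:_) _ _ _) = x
qhead _                   = error "qhead: empty queue"

qtail :: Queue a -> Queue a
qtail (Queue (_:f) lf r lr) = qbalance (Queue f (lf-1) r lr)
qtail _                     = error "qtail: empty queue"

-- Set up the (lazy, not yet evaluated) reverse as soon as the rear
-- would outgrow the front: this is what makes persistent use O(1) amortized.
qbalance :: Queue a -> Queue a
qbalance q@(Queue f lf r lr)
  | lr <= lf  = q
  | otherwise = Queue (f ++ reverse r) (lf + lr) [] 0

qtoList :: Queue a -> [a]
qtoList (Queue f _ r _) = f ++ reverse r
```

The `f ++ reverse r` suspension sits unevaluated until lf tails have forced their way through the old front list, by which point its cost has been paid off.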
;) So much for a too short introduction to the classic purely functional queue implementation. For a detailed exposition and much more, see also

  Chris Okasaki. Purely Functional Data Structures. (Thesis)
  http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf

or his book with the same title, which arose from this thesis. Now, rings can be implemented in a similar style.

  data Ring a = Ring [a] a [a]

  rotL (Ring ls x (r:rs)) = balance $ Ring (x:ls) r rs
  rotR (Ring (l:ls) x rs) = balance $ Ring ls l (x:rs)

(For simplicity, we only deal with the case where the left and right list are non-empty.) How to balance? In contrast to queues, doing a full reverse when one list is empty doesn't even work in the ephemeral case, since a rotR following a rotL will undo the reverse with yet another expensive reverse. But we can apply the same idea as for persistent queues and balance as soon as one list becomes like 2 times (or 3 or whatever) as large as the other one

  balance (Ring ls x rs) | length
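The archived message breaks off here. For illustration, here is one way the balanced two-list ring might be completed; this is my reconstruction with the 2x threshold described above, not the original code:

```haskell
-- Focus element, left neighbours (nearest first), right neighbours.
data Ring a = Ring [a] a [a]

ringToList :: Ring a -> [a]
ringToList (Ring ls x rs) = x : rs ++ reverse ls

fromList :: [a] -> Ring a
fromList (x:xs) = balance (Ring [] x xs)
fromList []     = error "fromList: empty ring"

-- Shift elements across when one side grows past twice the other, so that
-- neither rotation direction can repeatedly trigger a full O(n) reverse.
balance :: Ring a -> Ring a
balance r@(Ring ls x rs)
  | ll > 2*lr + 1 = let (ls', back)  = splitAt (n `div` 2) ls
                    in Ring ls' x (rs ++ reverse back)
  | lr > 2*ll + 1 = let (rs', front) = splitAt (n `div` 2) rs
                    in Ring (ls ++ reverse front) x rs'
  | otherwise     = r
  where ll = length ls; lr = length rs; n = ll + lr

-- Move the focus to its right neighbour.
rotL :: Ring a -> Ring a
rotL r@(Ring [] _ []) = r
rotL (Ring ls x (r:rs)) = balance (Ring (x:ls) r rs)
rotL (Ring ls x [])     = let (f:rest) = reverse ls   -- tiny rings only
                          in balance (Ring [x] f rest)
```

Because balance keeps the two sides within a constant factor of each other, the wrap-around clause of rotL is only ever reached on rings of one or two elements, where it is O(1) anyway.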
[Haskell-cafe] Re: Renaming constructors for readability
Dougal Stanton wrote: I wonder, is there an equivalent of the 'type' keyword for constructors? An example:

  -- create a pseudo-C pointer type
  -- which can point to a value or a
  -- null.
  type Pointer a = Maybe a

  -- int a = 3;
  -- int *pa = &a;
  ampersand :: t -> Pointer t
  ampersand a = Just a

  -- int b = *pa;
  star :: Pointer a -> a
  star (Just a) = a
  -- note this function behaves
  -- in an 'authentic' fashion ;-)

To really complete the illusion, it would be nice to replace the names Just and Nothing with PointerTo and Null. Then the constructors would really mean something. Is there a solution?

The thing you want is called views. See http://hackage.haskell.org/trac/ghc/wiki/ViewPatterns#Relatedwork for more. Regards, apfelmus
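As an aside on how this played out: modern GHC's PatternSynonyms extension (related work on the page linked above) gives exactly this constructor renaming; a sketch:

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- Rename Maybe's constructors without introducing a new type.
-- Bidirectional synonyms work both as expressions and in patterns.
pattern PointerTo :: a -> Maybe a
pattern PointerTo a = Just a

pattern Null :: Maybe a
pattern Null = Nothing

star :: Maybe a -> a
star (PointerTo a) = a
star Null          = error "star: null pointer"
```

Unlike a newtype wrapper, values built with PointerTo are still plain Maybe values, so all existing Maybe functions keep working on them.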
[Haskell-cafe] Re: List of all powers
Brent Yorgey wrote: Kurt Hutchinson wrote: As part of a solution I'm working on for Project Euler problem 119, I wanted to create an ordered list of all powers of all positive integers (starting with squares). The merging can be done much more simply and efficiently (this is code I wrote when computing squarefree numbers in a blog post http://byorgey.wordpress.com/2007/09/01/squarefree-numbers-in-haskell/ a few months ago):

  -- merge two nondecreasing lists.
  (#) :: (Ord a) => [a] -> [a] -> [a]
  [] # ys = ys
  xs # [] = xs
  xs@(x:xt) # ys@(y:yt)
    | x < y     = x : (xt # ys)
    | x > y     = y : (xs # yt)
    | otherwise = x : (xt # yt)

  -- merge an infinite list of lists, assuming that each list
  -- is nondecreasing and the lists occur in order of their first
  -- element.
  mergeAll :: (Ord a) => [[a]] -> [a]
  mergeAll ([] : zs) = mergeAll zs
  mergeAll (xxs@(x:xs) : yys@(y:ys) : zs)
    | x < y     = x : mergeAll (xs : yys : zs)
    | otherwise = mergeAll ((xxs # yys) : zs)

Then you can simply define

  powers = 1 : mergeAll powertable

I wrote some code to sum the first n powers and print the result, and compiled (using -O2) first with your method, then with mergeAll. Summing the first 7000 powers took ~8s and ~0.1s respectively, so that's a pretty big speedup. Based on seeing how the times scale, I suspect that your code is O(n^2) or something of that sort, whereas mergeAll is essentially linear, although without scrutinizing your code more closely I'm not exactly sure why that would be the case.

In principle, both Kurt's and even your mergeAll are O(n^2), but that depends on the actual distribution of the numbers in the powertable. In both cases, the optimization over a naive implementation is to increase the number of rows to be considered only if the next output came from the last row. This is ok since of the last row, only the head element was considered so far, and the non-considered elements all have to be bigger than this.
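For reference, Brent's merge applied to the powers problem end to end; powertable is my reconstruction of Kurt's table with one row [n^2, n^3, ..] per base n, so the row heads n^2 are increasing as mergeAll requires:

```haskell
-- merge two nondecreasing lists, emitting duplicates only once
(#) :: Ord a => [a] -> [a] -> [a]
[] # ys = ys
xs # [] = xs
xs@(x:xt) # ys@(y:yt)
  | x < y     = x : (xt # ys)
  | x > y     = y : (xs # yt)
  | otherwise = x : (xt # yt)

-- merge infinitely many nondecreasing lists, ordered by first element
mergeAll :: Ord a => [[a]] -> [a]
mergeAll ([] : zs) = mergeAll zs
mergeAll (xxs@(x:xs) : yys@(y:_) : zs)
  | x < y     = x : mergeAll (xs : yys : zs)
  | otherwise = mergeAll ((xxs # yys) : zs)
mergeAll _ = error "mergeAll: needs an infinite list of lists"

-- one row per base: [n^2, n^3, ..] for n = 2, 3, ..
powertable :: [[Integer]]
powertable = [ [n^k | k <- [2 :: Integer ..]] | n <- [2..] ]

powers :: [Integer]
powers = 1 : mergeAll powertable
```

Only finitely many rows are ever touched to produce a given prefix, since a row enters the merge only when its head n^2 is reached.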
Kurt's code is slower because it takes the minimum over _all_ the considered rows at every step. This is unnecessary: since only one element changed, many comparisons can be cached. In other words, this calls for a heap. Brent's code does not produce a heap, but it's still able to cache lots of comparisons. Kurt may want to transpose the powertable to

  2^2 3^2 4^2 5^2 ..
  2^3 3^3 4^3 5^3 ..
  2^4 3^4 4^4 ..
  ..

instead of the current

  2^2 2^3 2^4 2^5 ..
  3^2 3^3 3^4 3^5 ..
  4^2 4^3 4^4 ..
  ..

since the first elements of the rows of the former grow far steeper than the ones of the latter. This means that only a few rows are to be considered each turn. However, Brent may not want to transpose the powertable, since the steep increase in every single row (as opposed to across rows) is exactly what makes his code fast. During evaluation, his tower of calls to # will compare something like the following numbers:

  2^5 3^4 4^3 5^2

Thanks to the form of the tower, the comparisons of the first elements are cached. It looks like

  mergeAll $ (2^5:(__ # 3^4:__) # 4^3:__) : (5^2:xs) : __

The winner is 5^2, and mergeAll proceeds by expecting the head of xs. But this one will first be compared to 2^5 = minimum [2^5,3^4,4^3]; that's what I mean by cached. Similarly, the winner 2^5 = minimum [2^5,3^4] is cached, too. In other words, the minimum of every initial segment of the numbers considered is cached. The crucial point now is that those initial numbers quickly become very large (2^7 jumps exponentially to 2^8 and so on) and the winners are mostly to be found at the end of the list. With the caching, this is cheap to check. If Brent were to transpose the powertable, winners are more to the front of the list and the caching is useless, most likely rendering his implementation inferior to Kurt's one. Now, back to the heap and O(n*log n). There is no need to use an explicit heap data structure; it's possible to arrange the merges to form an implicit heap, i.e.
using the best form of caching. Here's an explanation of the technique with mergesort http://www.mail-archive.com/[EMAIL PROTECTED]/msg19980.html (Hm, does gmane gradually forget old messages?). The only problem is to make this work on an infinite list. Dave Bayer discovered a great way to do this, here's an explanation http://thread.gmane.org/gmane.comp.lang.haskell.cafe/26426/focus=26493 Regards, apfelmus
[Haskell-cafe] Re: List of all powers
Brent Yorgey wrote: apfelmus, does someone pay you to write so many thorough, insightful and well-explained analyses on haskell-cafe? I'm guessing the answer is 'no', but clearly someone should! =) Depending on length, my prices for posts range between λ9.99 and λ29.99 ;) Regards, apfelmus
[Haskell-cafe] Re: Knot tying vs monads
John D. Ramsdell wrote: This is another Haskell style question. I had some trouble with the pretty printer that comes with GHC, so I translated one written in Standard ML. I have already translated the program into C, so rewriting it in Haskell was quick and easy for me.

Concerning the choice of a pretty printer, the one bundled with GHC is close to

  John Hughes. The Design of a Pretty-printing Library.
  http://citeseer.ist.psu.edu/hughes95design.html

but there's also

  Philip Wadler. A prettier printer.
  http://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf

(probably available as a library on hackage). Btw, both papers are marvelous introductions to the derivation of programs from their specification. Compared to that, I'm missing the specification part for your pretty printer. How's it supposed to lay out?

The Standard ML version uses a reference cell to keep track of the space available on a line. I threaded the value of the reference cell through the computation using a where clause to define two mutually recursive equations. The fixed point implicit in the where clause ties the knot in the circular definitions in a way that allows the output string to be efficiently computed front to back. I showed the code to a colleague, who found the circular definitions opaque. He suggested a better style is to use monads, and to describe the computation in a form that is closer to its original form in Standard ML. What style do you prefer, a knot-tying or a monad-based style? I have enclosed the pretty printer. The printing function is the subject of the controversy.

Neither; I think that the code mixes too many concerns. You need neither knot-tying nor monads for efficient string concatenation; a simple difference list

  type DString = String -> String

will do. (There's a small difference list library Data.DList available on hackage.) If ++ is too inefficient, then simply switch to a different String implementation with a faster ++.
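A sketch of the difference-list idea to make it concrete; fromStr and toStr are illustrative names:

```haskell
-- A string represented by "what it prepends": concatenation becomes
-- function composition, which is O(1) and re-associates for free.
type DString = String -> String

fromStr :: String -> DString
fromStr s = (s ++)

toStr :: DString -> String
toStr d = d ""
```

Left-nested appends like ((a ++ b) ++ c) ++ d, which are quadratic on plain strings, cost nothing here until toStr materializes the result in one right-nested pass.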
Introducing a difference list means to replace the output type (Int, String) -> (Int, String) of printing not by

  Int -> (String -> (Int, String))   -- state monad with state String

but by

  Int -> (Int, String -> String)     -- difference list

Furthermore, I guess that this can probably be replaced by

  Int -> (String -> String)

or

  (Int -> Int, String -> String)

or made entirely abstract

  type X = (Int, String) -> (Int, String)

  blanks :: Int -> X
  blanks n (space, s)
    | n <= 0    = (space, s)
    | otherwise = blanks (n - 1) (space - 1, showChar ' ' s)

  string :: String -> X
  string s (space, t) = (space - length s, s ++ t)

or something like that. I don't know what your printer is supposed to do, so I can't say for sure.

  module Pretty(Pretty, pr, blo, str, brk) where

  data Pretty
    = Str !String
    | Brk !Int            -- Int is the number of breakable spaces
    | Blo ![Pretty] !Int !Int  -- first Int is the indent, second Int is the number
                               -- of chars and spaces for strings and breaks in block

Drop those strictness annotations from !String and ![Pretty], they won't do any good. The !Int are only useful if they will be unboxed, but I wouldn't bother right now.

Indentation blocks:

  blo :: Int -> [Pretty] -> Pretty
  blo indent es = Blo es indent (sum es 0)
    where
    sum [] k     = k
    sum (e:es) k = sum es (size e + k)
    size (Str s)     = length s
    size (Brk n)     = n
    size (Blo _ _ n) = n

size is of independent value; I'd make it a top-level function. Oh, and the accumulator in sum will build up thunks (until GHC's strictness analyzer figures it out). I'd like to point you to http://haskell.org/haskellwiki/Performance/Accumulating_parameter for an explanation of why, but the information there is rather inaccurate. For the moment, I could only find http://monad.nfshost.com/wordpress/?p=19 and the last section of http://blog.interlinked.org/tutorials/haskell_laziness.html but isn't there a short text that describes in detail why foldl' is different from foldl and why foldr is better in many cases?
I thought this faq would have been cached already :) In any case, I'd simply write

  blo indent es = Blo es indent . sum . map size $ es

(sum is a function from the Prelude.) Regards, apfelmus
[Haskell-cafe] Re: List of all powers
Calvin Smith wrote: I really look forward to apfelmus' consistently outstanding explanations on haskell-cafe. If some of the especially good ones were bundled up as a book -- *Intermediate/Advanced Functional Programming with Haskell* -- I would buy it sight unseen (hint, hint). :) I intend the Haskell wikibook http://en.wikibooks.org/wiki/Haskell to be(come) the one Beginner/Intermediate/Advanced Functional Programming book, and the mailing list can also be seen as a marvelous source of materials, like real-world questions, problems, techniques etc. for that. The wikibook hasn't gained much momentum yet, but I guess that's partly due to the fact that writing a good tutorial is time consuming and harder than I imagined; mailing list rants are far easier :) Regards, apfelmus
[Haskell-cafe] Re: Knot tying vs monads
John D. Ramsdell wrote: Compared to that, I'm missing the specification part for your pretty printer. How's it supposed to lay out? The specification is in Paulson's book. The pretty printer is used with S-expressions, and the block layout generates compact, indented output that is good when there is much data to be displayed. ... The close connection between the Haskell and Standard ML versions is part of the reason I was able to quickly generate the code, and be confident in its correctness.

Unfortunately, I don't have Paulson's book (or any other ML book :) at home. I'm too lazy to figure out the specification from the source code; can you provide a link or an explanation? I'm asking because I'd be astonished if one couldn't write an elegant Haskell version that's clearly correct and efficient at the same time. And such things are easiest to produce from scratch.

a simple difference list ... will do. Hmm. Doesn't the output type (Int, String) -> (Int, String) show the implementation is using the essence of a difference list? Remember, the resulting function prepends something to the string it is given in the second element of the pair, and returns that string.

Yes, of course. But the true virtue is to disentangle it from the rest, i.e. to use an abstract string type with fast concatenation.

  Int -> (Int, String -> String)   -- difference list

My first attempt at writing the printing function had a type similar to this one. I found myself composing difference lists of type ShowS. The performance was noticeably slow, especially as compared with the implementation displayed in my message. Perhaps the use of Data.DList would repair this performance problem. I don't mean to suggest that ShowS-style difference lists cannot be used to make the function easier to understand--all I'm saying is my attempt to do so did not work out well.

  DList a = [a] -> [a]

so this would be no different from ShowS.

Drop those strictness annotations from !String and ![Pretty], they won't do any good.
The !Int are only useful if they will be unboxed, but I wouldn't bother right now. I thought that the annotations ensure the first element of the list is evaluated. The Char and Pretty lists are generated with seqrev, so everything gets evaluated before the constructor is applied to the data.

  -- A reverse that evaluates the list elements.
  seqrev :: [a] -> [a]
  seqrev = foldl (\xs x -> x `seq` xs `seq` (x:xs)) []

The trouble is the constructors are not exported directly, but instead through str, brk, and blo, functions which are not strict. It's the best I could do, as near as I can tell.

O_O, everything strict? But why would you ever want that? Well, if it's for performance reasons, I'd have to point out that the pretty printer has an algorithmic flaw: there are two calls to (breakdist es after), one from the Brk case and one from the Blo case. The former one is safe, since breakdist doesn't look further than to the next Brk in es. But the latter one will lead to a quadratic run-time in the worst case, for example on the following input

  replicate n (Blo [Brk _] _ _) = [Blo [Brk _] _ _, Blo [Brk _] _ _, ..]  -- n elements

(Fill in any numbers you like for the placeholders _.) That's a bit degenerate, but you can spice it up with as many Str as you like; just don't add any additional Brk. Of course, a memoization scheme will fix this, but I'd rather develop a new algorithm from scratch.

It seems rather hard to avoid laziness in the current version of Haskell when it's not wanted. I hope one of the proposals for deep strictness makes it into Haskell prime. In my application, there is one data structure such that if every value traceable from an instance of the data structure is evaluated, I am quite sure my program will be free from memory leaks due to dragging. I wish I could tell compilers to make it so.

As Derek said, strict data types are probably the easiest way to go here. Or you can use custom strict constructors, like

  str s = s `deepSeq` Str s

or something.
But again, I don't know why you would want that at all. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Knot tying vs monads
Brent Yorgey wrote: but isn't there a short text that describes in detail why foldl' is different from foldl and why foldr is better in many cases? I thought this faq would have been cached already :) Perhaps you're thinking of http://haskell.org/haskellwiki/Stack_overflow ? Ah, that looks better, although it's a bit messy for my taste. I've scribbled a hopefully gentler explanation at http://en.wikibooks.org/wiki/Haskell/Performance_Introduction . Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
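For readers who want the one-screen version of the foldl/foldl' difference discussed in the wiki pages above, here is a minimal self-contained sketch (the names sumLazy, sumStrict, and anyEven are mine, chosen for illustration):

```haskell
import Data.List (foldl')

-- foldl builds a deep thunk (((0+1)+2)+...) before forcing anything,
-- which can overflow the stack on long lists; foldl' forces the
-- accumulator at every step and runs in constant space.
sumLazy, sumStrict :: [Integer] -> Integer
sumLazy   = foldl  (+) 0   -- accumulates a thunk of depth n
sumStrict = foldl' (+) 0   -- accumulator evaluated each step

-- foldr, in contrast, can stop early when the combining function
-- is lazy in its second argument:
anyEven :: [Integer] -> Bool
anyEven = foldr (\x r -> even x || r) False
```

Because (||) never looks at its second argument once the first is True, anyEven can even return a result on a partially defined list.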
[Haskell-cafe] Re: What is the role of $!?
Paul Johnson wrote: Andrew Coppin wrote: PS. There is a technical distinction between the terms lazy and non-strict, and also the opposite terms eager and strict. I couldn't tell you what that is. As I understand it, the distinction is between the mathematical term non-strict and the implementation method of lazy. Non-strict means that reduction (the mathematical term for evaluation) proceeds from the outside in, so if I have (a+(b*c)) then first you reduce the +, then you reduce the inner (b*c). Strict languages work the other way around, starting with the innermost brackets and working outwards. [...] Almost right, but strict and non-strict aren't tied to an operational semantics. In other words, you can talk about _|_ and strictness without knowing how to evaluate your expressions at all. See also http://en.wikibooks.org/wiki/Haskell/Denotational_semantics . For more on the details of lazy evaluation (which actually does work outside in), there's the incomplete http://en.wikibooks.org/wiki/Haskell/Graph_reduction . Of course, strict and eager as well as non-strict and lazy have pretty much the same effect and can be used synonymously, but they're different things nonetheless. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
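The denotational point can be made concrete in two lines (my own sketch, not from the thread): a function f is strict iff f undefined = undefined, and this says nothing about evaluation order.

```haskell
-- Non-strict: the argument's value is never needed, so
-- constOne undefined = 1, regardless of evaluation strategy.
constOne :: a -> Integer
constOne _ = 1

-- Strict: addOne undefined = undefined, because (+) needs its input.
addOne :: Integer -> Integer
addOne x = x + 1
```

Whether constOne undefined is computed by outside-in graph reduction or by some other non-strict strategy is an implementation question; the equation itself is purely denotational.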
[Haskell-cafe] Re: Knot tying vs monads
John D. Ramsdell wrote: On Nov 17, 2007 3:04 PM, apfelmus [EMAIL PROTECTED] wrote: Unfortunately, I don't have Paulson's book (or any other ML book :) at home. I'm too lazy to figure out the specification from the source code, I guess the code is too opaque, as my colleague claimed. The algorithm generates layouts of condensed indented blocks. Within a block, it inserts a newline when the distance to the next break point plus the current position is greater than the space remaining on the current line. Thus, if S-expression lists are rendered as blocks with indent two, and every element in a list is separated by a break point of length one, with the proper margin you would see: (defthingy name-of-thingy (one thing) (two thing) (a-big-thing-made-bigger) (three thing) (four thing)) As an exercise, the book asks you to implement group indent, where if any break point in a group inserts a newline, they all do. So with that layout, one would get: (defthingy name-of-thingy (one thing) (two thing) (a-big-thing-made-bigger) (three thing) (four thing)) The C version I wrote supports this layout, but I didn't bother with that extension for the Haskell version. Thanks. The interesting case of nested blocks still needs to be specified, but with this description in mind and judging from the code, I guess it behaves as follows: either a block fits entirely on the remaining line (no line breaks inside), or it begins a new line. Now, the quest of programming is to make this description executable by computers while keeping it understandable by humans. 
This is straightforward to do with Wadler's pretty printer combinators (package wl-pprint on http://hackage.haskell.org ) data S = Atom String | List [S] -- S-expressions layout :: Int -> [S] -> Doc layout indent [] = empty layout indent (x:xs) = foldr1 (<>) (render x : map f xs) where f x@(Atom _) = group line <> render x f x@(List _) = group (line <> render x) render (Atom s ) = text s render (List xs) = nest indent $ parens $ layout indent xs The semantics of Doc are (for more, see Wadler's paper): Doc is a document with a set of different layouts, where the only difference between them is that some line primitives are rendered as (\n ++ replicate currentIndentation ' ') and some are rendered as a single space. Now, group x adds a new layout to the set x , namely the layout where all line in x have been flattened to a single space. This way, the definition of f directly reflects the alternative either a block fits entirely on the remaining line or it begins a new line. Your group indent extension is even easier, namely just layout2 :: Int -> [S] -> Doc layout2 indent = sep . map render where render (Atom s ) = text s render (List xs) = nest indent $ parens $ layout2 indent xs with the functions sep = group . foldr (<$>) empty x <$> y = x <> line <> y from the library. On the strictness annotations, my reasons for them are the usual ones, primarily to prevent memory leaks due to dragging, but a performance boost is always welcome. At some point, I plan to profile the code with and without the annotations, and find out where they are needed. That seems excessive. Can you really prove that this will prevent space leaks? I doubt that. Laziness is still useful (example: early bail-out in your breakdist ) if only the data structures are fully evaluated as opposed to every function being strict, so it's an interesting idea. But that doesn't guarantee zero space leaks, since sumFromTo :: Int -> Int -> Int sumFromTo a b = f a b 0 where f a b k = if a == b then k else f (a+1) b (k+a) is one. 
Also, you won't be able to conveniently use lists as suspended loops which would be a pity. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: expanded standard lib
Simon Peyton-Jones wrote: | the php documentation has user contributed notes where people can leave | snippets of useful code as comments, eg | | http://www.php.net/manual/en/introduction.php | | I think this is a very nice feature. | | I would love to have this on haskell, especially because the | documentation often lacks example(s) We've discussed this a couple of times at GHC HQ, at least in relation to GHC's user manual and library documentation. It's a *great* idea, because it allows everyone to improve the documentation. But we're just not sure how to do it: * What technology to use? * Matching up the note-adding technology with the existing infrastructure - GHC's user manual starts as XML and is generated into HTML by DocBook - In contrast, the library documentation is generated by Haddock. * Hardest of all: evolution. Both GHC's user manual and library docs change every release. Even material that doesn't change can get moved (e.g. section reorganisation). We don't want to simply discard all user notes! But it's hard to know how to keep them attached; after all, they may no longer even be relevant. They almost certainly don't belong in the source-code control system. If someone out there knows solutions to these challenges, and would like to help implement them, we'd love to hear from you. Accurate documentation, with rich cross-links (e.g. to source code), and opportunities for the community to elaborate it, is a real challenge for a language the size of Haskell and its libraries. What technology to use, that's the *key* question. If we forget everything we currently can do with a computer and instead imagine what we could do, the answer would probably be: The documentation / source code can be edited directly while viewing it (i.e. Wiki + WYSIWYG). Moreover, it's possible to attach lots of Post-It® notes to sections / paragraphs / sentences with scribbled comments / questions / remarks about content / administrative tasks. 
Those notes can be hidden to get a clean view. A wiki is rather centralized, so a form of decentralization / version control à la darcs is needed, at least for some parts like the source code. Last but not least, there's a tension between quality and editable by everyone, so some form of access control is mandatory and further means to ensure quality are needed; that's the hard part. The above ideal is entirely realizable, just not with existing technology like web browsers / text editors. For instance, it's desirable to be able to edit source / haddock with a text editor like right now. But one would also like to edit it right in the (generalized) web browser. Ideally, one could just pipe the underlying document through a lens data Lens s a = Lens { get :: s -> a, set :: a -> (s -> s) } text :: Lens HaskellDocument ASCII browser :: Lens HaskellDocument Html so that the edits in the view are reflected in the document. (Same for IDEs or GUIs or whatever.) Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: -O2 bug in GHC 6.8.1?
Christian Maeder wrote: good bug! -O or -O2 is irrelevant but it works if compiled with -fvia-C You (or someone else) should add it to http://hackage.haskell.org/trac/ghc I guess that this is related to http://thread.gmane.org/gmane.comp.lang.haskell.cafe/31675 Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Knot tying vs monads
ChrisK wrote: The data dependency is circular. Yes and no. The input and output pairs are dependent on each other, but the integer doesn't depend on the string. Thus, I'm pretty sure that (Int, String) -> (Int, String) can be refactored into Int -> (Int, String -> String) This is related to attribute grammars; I finally found the reference: Designing and Implementing Combinator Languages. S. Doaitse Swierstra, Pablo R. Azero Alcocer, João Saraiva http://people.cs.uu.nl/doaitse/Papers/1999/AFP3.pdf I'd even add after to the result of the functions in order to avoid the O(n^2) degenerate case. In any case, I prefer Wadler's combinators. With line being more rigid than Brk, nest and group basically factor the monolithic Blo, which makes more laws available and hence gives a more elegant implementation. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: ANNOUNCE: fixpoint 0.1
Bertram Felgenhauer wrote: [redirecting from [EMAIL PROTECTED]] apfelmus wrote: [...] I wonder whether a multi-parameter type class without fundeps/associated types would be better. class Fixpoint f t where inject :: f t -> t project :: t -> f t [...] Interestingly, this even gives slightly shorter type signatures cata :: Fixpoint f t => (f s -> s) -> t -> s size :: (Fixpoint f t, Foldable f) => t -> Int size can't be used now though, because there is no way to infer f. Ah, of course, stupid me. Making f an associated type synonym / fundep instead of an associated data type is still worth it, since we can use it for Mu f class Fixpoint f t where type F t a ... instance Fixpoint f (Mu f) where .. type F (Mu f) a = f a Otherwise, we would have to deal with some kind of newtype data F (Mu f) a = MuF (f a) Hm, but does F (Mu f) qualify as a type constructor of kind * -> * for type inference/checking? Or is the situation the same as with normal type synonyms? Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: ANNOUNCE: fixpoint 0.1
Roman Leshchinskiy wrote: apfelmus wrote: Making f an associated type synonym / fundep instead of an associated data type is still worth it, since we can use it for Mu f But alas, this breaks hylomorphisms: hylo :: Fixpoint t => (Pre t s -> s) -> (p -> Pre t p) -> p -> s If Pre is a type function, there is no way to infer t. Ah, right. But unlike size, this is unambiguous since t can (and probably should) be fused away: hylo :: Functor f => (f s -> s) -> (p -> f p) -> p -> s hylo f g = f . fmap (hylo f g) . g Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
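The fused hylo is directly runnable. As a quick illustration (the ListF functor and the factorial example are my own, not from the thread), factorial is a hylomorphism: unfold a number into a list-shaped coalgebra, then fold it with multiplication, with no intermediate fixpoint structure ever built:

```haskell
-- hylo fused away from cata/ana: no intermediate Fixpoint structure
hylo :: Functor f => (f s -> s) -> (p -> f p) -> p -> s
hylo f g = f . fmap (hylo f g) . g

-- The base functor of lists
data ListF a r = Nil | Cons a r

instance Functor (ListF a) where
  fmap _ Nil        = Nil
  fmap f (Cons a r) = Cons a (f r)

-- factorial as a hylomorphism: conceptually unfold [n, n-1 .. 1],
-- then fold with (*)
fact :: Integer -> Integer
fact = hylo alg coalg
  where
    coalg 0 = Nil
    coalg n = Cons n (n - 1)
    alg Nil        = 1
    alg (Cons a r) = a * r
```

Note that no Fixpoint class constraint remains in hylo's type, which is exactly why the ambiguity of t disappears.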
[Haskell-cafe] Re: Knot tying vs monads
John D. Ramsdell wrote: All I know is it was dog slow without any annotations, and spraying them on the suspect data structures cured that problem. Ah ok, that makes sense :) although it's a bit unsatisfactory to be forced to do that blindly. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: ANNOUNCE: fixpoint 0.1
Roman Leshchinskiy wrote: apfelmus wrote: Ah, right. But unlike size, this is unambiguous since t can (and probably should) be fused away: hylo :: Functor f => (f s -> s) -> (p -> f p) -> p -> s hylo f g = f . fmap (hylo f g) . g Excellent point! When I originally developed the code, type functions didn't really work anyway. I'll try again now that they are more mature. Actually, I don't think that hylo :: Fixpoint f t => (f s -> s) -> (p -> f p) -> p -> s hylo f g = cata f . ana g will typecheck, the t is still ambiguous. It's just that it's one of those cases where the type signature is ambiguous but the program isn't. Well, from a denotational point of view anyway; different t will generate different code for hylo. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: An interesting monad: Prompt
Ryan Ingram wrote: I've been trying to implement a few rules-driven board/card games in Haskell and I always run into the ugly problem of how do I get user input? The usual technique is to embed the game in the IO monad: The problem with this approach is that now arbitrary IO computations are expressible as part of a game action, which makes it much harder to implement things like replay, undo, and especially testing! The goal was to be able to write code like this: ] takeTurn :: Player -> Game () ] takeTurn player = do ] piece <- action (ChoosePiece player) ] attack <- action (ChooseAttack player piece) ] bonusTurn <- executeAttack piece attack ] when bonusTurn $ takeTurn player but be able to script the code for testing, allow undo, automatically be able to save replays, etc. While thinking about this problem earlier this week, I came up with the following solution: class Monad m => MonadPrompt p m | m -> p where prompt :: p a -> m a prompt is an action that takes a prompt type and gives you a result. A simple example: ] prompt [1,3,5] :: MonadPrompt [] m => m Int This prompt would ask for someone to pick a value from the list and return it. This would be somewhat useful on its own; you could implement a choose function that picked randomly from a list of options and gave non-deterministic (or even exhaustive) testing, but on its own this wouldn't be much better than the list monad. [...] data Prompt (p :: * -> *) :: (* -> *) where PromptDone :: result -> Prompt p result -- a is the type needed to continue the computation Prompt :: p a -> (a -> Prompt p result) -> Prompt p result Intuitively, a (Prompt p result) either gives you an immediate result (PromptDone), or gives you a prompt which you need to reply to in order to continue the computation. This type is a MonadPrompt: instance Functor (Prompt p) where fmap f (PromptDone r) = PromptDone (f r) fmap f (Prompt p cont) = Prompt p (fmap f . 
cont) instance Monad (Prompt p) where return = PromptDone PromptDone r >>= f = f r Prompt p cont >>= f = Prompt p ((>>= f) . cont) instance MonadPrompt p (Prompt p) where prompt p = Prompt p return Marvelous! Basically, by making the continuation (a -> Prompt p result) explicit, we have the flexibility to acquire the value a differently, like through user input or a replay script. The popular continuations for implementing web applications in Lisp/Scheme do the same thing. A slightly different point of view is that you use a term implementation for your monad, at least for the interesting primitive effects like choosePiece :: Player -> Game Piece chooseAttack :: Player -> Piece -> Game Attack By using constructors for them, you have the flexibility to write different interpreters for Game a, like play :: Game a -> IO a replay :: Game a -> GameScript -> a with the semantics play (choosePiece pl >>= f) = do putStrLn ("Player " ++ show pl ++ ", choose your piece:") play . f . read =<< getLine replay (choosePiece pl >>= f) (Piece pl' piece : xs) | pl == pl' = replay (f piece) xs Just for the record, the most general term implementation is presented here: Chuan-kai Lin. Programming Monads Operationally with Unimo. http://web.cecs.pdx.edu/~cklin/papers/unimo-143.pdf Btw, the web framework WASH/CGI for Haskell uses some kind of prompt monad, too. Peter Thiemann. An Embedded Domain-Specific Language for Type-Safe Server-Side Web-Scripting. http://www.informatik.uni-freiburg.de/~thiemann/WASH/draft.pdf Here, the server replays parts of the CGI monad when the user submits a form, i.e. answers to a prompt. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
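Ryan's Prompt type can be exercised end to end with a pure "replay" interpreter. Here is a self-contained sketch; the AskInt prompt type, the game example, and runScript are inventions of mine for illustration, not from the thread:

```haskell
{-# LANGUAGE GADTs #-}

-- The Prompt monad from the post (Applicative added for modern GHC).
data Prompt p result where
  PromptDone :: result -> Prompt p result
  Prompt     :: p a -> (a -> Prompt p result) -> Prompt p result

instance Functor (Prompt p) where
  fmap f (PromptDone r)  = PromptDone (f r)
  fmap f (Prompt p cont) = Prompt p (fmap f . cont)

instance Applicative (Prompt p) where
  pure      = PromptDone
  mf <*> mx = mf >>= \f -> fmap f mx

instance Monad (Prompt p) where
  PromptDone r  >>= f = f r
  Prompt p cont >>= f = Prompt p ((>>= f) . cont)

prompt :: p a -> Prompt p a
prompt p = Prompt p PromptDone

-- A toy prompt type: ask the user for an Int, with a message.
data AskInt a where
  AskInt :: String -> AskInt Int

-- Replay-style interpreter: answer every prompt from a fixed script.
-- Returns Nothing if the script runs out of answers.
runScript :: [Int] -> Prompt AskInt r -> Maybe r
runScript _      (PromptDone r)        = Just r
runScript (x:xs) (Prompt (AskInt _) k) = runScript xs (k x)
runScript []     _                     = Nothing

game :: Prompt AskInt Int
game = do
  a <- prompt (AskInt "first number?")
  b <- prompt (AskInt "second number?")
  return (a + b)
```

An IO interpreter that actually prints the messages and reads stdin can be written against the same game value, which is exactly the testing/replay flexibility the post is after.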
[Haskell-cafe] Re: expanded standard lib
Ketil Malde wrote: David Menendez [EMAIL PROTECTED] writes: Someone in a previous thread made an analogy between GHC and the linux kernel. I imagine that third-party Haskell distributions, consisting of GHC/Hugs/whatever and some bundled packages, would meet the desire for a batteries included Haskell implementation without tying the most popular libraries to GHC releases. Well - the various Linux distributions certainly could do this - providing a virtual haskell-libs package that just pulls in a bunch of commonly used packages. It'd be nice, of course, if that package was reasonably consistent across distributions, and if there were a corresponding installer for those other operating systems. Meta-packages on hackage would do the trick, no? Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: nhc vs ghc
brad clawsie wrote: can anyone provide a concise list of the major differences between nhc98 and ghc? for example, can i build a cabal package with nhc98? i get that ghc and nhc98 are not interchangeable, otherwise i am not sure The major difference is that nhc98 is pretty much Haskell98 only, so no multi parameter type classes, rank-n-polymorphism or GADTs. It does support existential types, though. In particular, the popular monad transformer library isn't Haskell98, at least concerning the type classes. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: An interesting monad: Prompt
Derek Elkins wrote: Ryan Ingram wrote: apfelmus wrote: A context passing implementation (yielding the ContT monad transformer) will remedy this. Wait, are you saying that if you apply ContT to any monad that has the left recursion on >>= takes quadratic time problem, and represent all primitive operations via lift (never using >>= within lift), that you will get a new monad that doesn't have that problem? If so, that's pretty cool. To be clear, by ContT I mean this monad: newtype ContT m a = ContT { runContT :: forall b. (a -> m b) -> m b } instance Monad m => Monad (ContT m) where return x = ContT $ \c -> c x m >>= f = ContT $ \c -> runContT m $ \a -> runContT (f a) c fail = lift . fail instance MonadTrans ContT where lift m = ContT $ \c -> m >>= c evalContT :: Monad m => ContT m a -> m a evalContT m = runContT m return Yes, that's the case because the only way to use >>= from the old monad is via lift. Since only primitive operations are being lifted into the left of >>=, it's only nested in a right-associative fashion. The remaining thing to prove is that ContT itself doesn't have the left-associativity problem or any other similar problem. It's pretty intuitive, but unfortunately, I currently don't know how to prove or even specify that exactly. The problem is that expressions with >>= contain arbitrary unapplied lambda abstractions and mixed types, but the types shouldn't be much of a problem. Indeed, this was discussed on #haskell a few weeks ago. That information has been put on the wiki at http://www.haskell.org/haskellwiki/Performance/Monads and refers to a blog post http://r6.ca/blog/20071028T162529Z.html that describes it in action. Note that the crux of the Maybe example on the wiki page is not curing a left-associativity problem, Maybe doesn't have one. The key point here is that continuation passing style allows us to inline the liftings (Just x >>=) = \f -> f x (Nothing >>=) = \_ -> Nothing and thus eliminate lots of case analysis. 
(Btw, this is exactly the behavior of exceptions in an imperative language.) (Concerning the blog post, it looks like this inlining brings speed. But Data.Sequence is not to be underestimated; it may well be responsible for the meat of the speedup.) I'm fairly confident, though I'd have to actually work through it, that the Unimo work, http://web.cecs.pdx.edu/~cklin/ could benefit from this. In fact, I think this does much of what Unimo does and could capture many of the same things. Well, Unimo doesn't have the left-associativity problem in the first place, so the only motive for using ContT Prompt instead is to eliminate the Bind constructor and implied case analyses. But there's a slight yet important difference between Unimo p a and Unimo' p a = ContT (Prompt p) a, namely the ability to run the continuation in the outer monad. Let me explain: in the original Unimo, we have a helper function observe_monad :: (a -> v) -> (forall b . p (Unimo p) b -> (b -> Unimo p a) -> v) -> (Unimo p a -> v) for running a monad. For simplicity and to match with Ryan's prompt, we'll drop the fact that p can be parametrized on the outer monad, i.e. we consider observe_monad :: (a -> v) -> (forall b . p b -> (b -> Unimo p a) -> v) -> (Unimo p a -> v) This is just the case expression for the data type data PromptU p a where Return :: a -> PromptU p a BindEffect :: p b -> (b -> Unimo p a) -> PromptU p a observe_monad :: (PromptU p a -> v) -> (Unimo p a -> v) The difference I'm after is that the second argument to BindEffect is free to return an Unimo and not only another PromptU! This is quite handy for writing monads. In contrast, things for Unimo' p a = ContT (Prompt p) a are as follows: data Prompt p a where Return :: a -> Prompt p a BindEffect :: p b -> (b -> Prompt p a) -> Prompt p a observe :: (Prompt p a -> v) -> (Unimo' p a -> v) observe f m = f (runContT m Return) Here, we don't have access to the outer monad Unimo' p a when defining an argument f to observe. 
I don't think we can fix that by allowing the second argument of BindEffect to return an Unimo' p a, since in this case, lift :: p a -> Unimo' p a lift x = ContT $ BindEffect x won't work anymore. Of course, the question now is whether this can be fixed. Put differently, this is the question I keep asking: does it allow Unimo to implement strictly more monads than ContT = Unimo', or is the latter enough? I.e. can every monad be implemented with ContT? Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: New slogan for haskell.org
Henning Thielemann wrote: Now my idea was, that making links to glossary articles leaves the slogan as short as it is, and allows people to find out quickly about the words they still don't know. An explanation why Haskell's features are useful for programmers is still required. +1 But we'd probably need the glossary articles first before linking to them :) Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: is there a more concise way to generate helper functions for a datatype built on records?
Isaac Dupree wrote: apfelmus wrote: dup :: Lens a (a,a) dup = id &&& id Which component of the pair should put dup :: a -> (a,a) -> (a,a) change? The first, the second, or even both? [...] put :: Lens s a -> a -> s -> s put x = flip $ snd . focus x wouldn't put dup :: (a,a) -> a -> a Oops, of course. Arrows IIRC resolve this dilemma by arbitrarily saying that the first argument of (&&&) takes effect first... a solution I'm not entirely happy with. Here, first it would put the first element of the pair, then it would put the second, so the result would be the second element. If it were 2D vectors, x :: Lens Vector Double, angle :: Lens Vector Angle, it makes a difference whether the x-coordinate or the angle is changed first, and again, (&&&) could sequence. I wish there was some way to compose them that guaranteed order-independence, and didn't work otherwise, though. I suppose QuickCheck could be used to catch most parallel/disjoint-assumption-mistakes like that... The situation is much worse; even dropping order-independence doesn't help: the lens laws get x (put x a s) = a put x (get x s) s = s are already violated for dup! Assuming that get dup :: a -> (a,a) is total (i.e. first and second component not undefined), parametric polymorphism dictates that it can only be get dup = \a -> (a,a) Now, we should have get dup (put dup (a,a') s) = (put dup (a,a') s, put dup (a,a') s) = (a,a') but that's impossible when a is different from a'. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
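The two lens laws can be checked concretely with a plain get/put record representation (my own sketch, not the arrow-based Lens of the thread; fstL and the law functions are names I made up):

```haskell
-- A lens as a pair of get and put
data Lens s a = Lens { get :: s -> a, put :: a -> s -> s }

-- A well-behaved lens: focus on the first component of a pair.
fstL :: Lens (a, b) a
fstL = Lens fst (\a (_, b) -> (a, b))

-- Law 1: what you put is what you get back.
lawGetPut :: Eq a => Lens s a -> a -> s -> Bool
lawGetPut l a s = get l (put l a s) == a

-- Law 2: putting back what you got changes nothing.
lawPutGet :: Eq s => Lens s a -> s -> Bool
lawPutGet l s = put l (get l s) s == s
```

For the hypothetical dup :: Lens a (a,a), law 1 would require put dup (a,a') s to be simultaneously equal to a and to a' (since get dup can only be \x -> (x,x)), which is exactly the impossibility argued above.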
[Haskell-cafe] Re: New slogan for haskell.org
David Menendez wrote: Thomas Davie wrote: But the point is that this section of the site is the bit that's meant to be an advertisement -- we're trying to encourage people to read more, Are we? I thought Haskell.org was intended to describe what Haskell *is*. There are plenty of articles and blog posts and wiki pages out there that advocate Haskell. I don't see why the main web page needs to be polluted with marketing. Agreed! I hate marketing! The facts can speak for themselves, if you need somebody to explain them, then something's wrong. More specifically, fact means something that you can easily check yourself. Robust/maintainable/testable code are things you _can't_ easily check yourself without already learning the language. But shorter code is a fact you can easily check, for instance with quicksort as example. In fact, short code is the reason why I picked up Haskell. Back then, I was given the task to calculate some sequence of numbers which I did in one page of C code. So far so good, but when I asked the task assigner about his solution, he responded: Ah, this problem, that's 1 line in Haskell. Well, 2 lines if the terminal is too small. Such power! Hearing just this was more than enough reason for me to learn Haskell and to never look back. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: New slogan for haskell.org
Henning Thielemann wrote: apfelmus wrote: Back then, I was given the task to calculate some sequence of numbers which I did in one page of C code. So far so good, but when I asked the task assigner about his solution, he responded: Ah, this problem, that's 1 line in Haskell. Well, 2 lines if the terminal is too small. Ah, a Haskell code contribution to the Encyclopedia of Integer Sequences? The task was just for fun, but it's sequence A05. import Data.Set xs = let f x m = x:let y = x `div` 2 in f (if member y m then 3*x else y) (insert x m) in f 1 (singleton 0) As said, it's two lines if the terminal is too small :) Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: New slogan for haskell.org
Laurent Deniau wrote: apfelmus wrote: Back then, I was given the task to calculate some sequence of numbers which I did in one page of C code. import Data.Set xs = let f x m = x : let y = x `div` 2 in f (if member y m then 3*x else y) (insert x m) in f 1 (singleton 0) As said, it's two lines if the terminal is too small :) I can't see how it could be one page of C unless the page is 10 lines long ;-) The following code is the direct translation of your Haskell code (except that it prints the result instead of building a list). a+, ld. #include <stdio.h> #include "intset.h" void f(int x, intset s) { printf("%d, ", x); f(intset_elem(s, x/2) ? 3*x : x/2, intset_put(s, x)); } int main(void) { f(1, intset_put(intset_new(), 0)); } Well, I only remember that it took _me_ a page of C code :D Basically due to a hand-coded intset and user interaction (no REPL for C, after all). Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Trees
Adrian Neumann wrote: data Tree a = Leaf a | Node a [Tree a] But now the assignments require more than a simple top-down traversal. For example: given a tree t and two nodes u,v, find the first common ancestor. Well, this problem doesn't make much sense in Haskell. How do you specify the subtrees u and v in the first place? Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Possible Improvements
PR Stanley wrote: data Tree = Leaf Int | Node Tree Int Tree occurs :: Int -> Tree -> Bool occurs m (Leaf n) = m == n occurs m (Node l n r) = m == n || occurs m l || occurs m r It works but I'd like to know if it can be improved in any way. That's entirely fine. The logical or (||) doesn't evaluate its second argument occurs m r if the first argument occurs m l turns out to be already True. In other words, thanks to lazy evaluation, the search stops if m has been found in the left subtree; it won't search the right subtree anymore. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
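To see the short-circuiting concretely, here is the snippet above made self-contained, plus a demo value (my addition) whose right subtree is undefined; occurs still succeeds because (||) never forces it:

```haskell
data Tree = Leaf Int | Node Tree Int Tree

occurs :: Int -> Tree -> Bool
occurs m (Leaf n)     = m == n
occurs m (Node l n r) = m == n || occurs m l || occurs m r

-- The value 2 is found in the left subtree, so the undefined right
-- subtree is never evaluated.
demo :: Bool
demo = occurs 2 (Node (Leaf 2) 1 undefined)
```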
[Haskell-cafe] Re: New slogan for haskell.org
Stefan O'Rear wrote: In my C programming, I've taken to using gdb as a REPL: Ah, that's a nice trick, thanks! I wish I there had been a gdb on MacOS 8.5 back then ;) Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Trees
Thomas Davie wrote: apfelmus wrote: Well, this problem doesn't make much sense in Haskell. How do you specify the subtrees u and v in the first place? One could always store a node's depth at each node -- then you must search for u and v, creating a list of what nodes you found at each depth, and finally, simply compare the lists -- O(n) in the depth of u and v. Huh? I mean, are u and v node labels of type a? Or are they subtrees? In any case, they aren't pointers into the tree. In the case of node labels, a breadth-first search will take O(number of nodes of depth <= min (depth u) (depth v)) steps, which is O(size of the tree) in the worst case. Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: distinguish functions from non-functions in a class/instances
Luke Palmer wrote: Hmm, this still seems ill-defined to me. compose :: (Int -> Int -> Int) -> (Int -> Int) -> Int -> Int -> Int is a valid expression given that definition (with a,b = Int and c = Int -> Int), but now the arity is 4. That's correct, the arity of a function is not well-defined due to polymorphism. The simplest example is probably id :: a -> a -- arity 1 id = ($) :: (a -> b) -> (a -> b) -- arity 2 Therefore, the polymorphic expression wrap id is problematic. It roughly has the type wrap id ~~ [String] -> a But it's clearly ambiguous: do we have wrap id (x:_) = read x or wrap id (f:x:_) = wrap ($) (f:x:_) = read f (read x) or what? (assuming a Read instance for function types) GHCi gives it a type :type wrap id wrap id :: (FunWrap (a -> a) y) => [String] -> y but trying to use it like in let x = wrap id ["1"] :: Int yields lots of type errors. We have to specialize the type of id before supplying it to wrap. For example, wrap (id :: Int -> Int) works just fine. I don't like this behavior of wrap since it violates the nice property of polymorphic expressions that it's unimportant when a type variable is instantiated, like in map ((+1) :: Int -> Int) [1..5] = map (+1) ([1..5] :: [Int]) = (map (+1) [1..5]) :: [Int] Regards, apfelmus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: distinguish functions from non-functions in a class/instances
Dan Weston wrote: Questioning apfelmus definitely gives me pause, but... Don't hesitate! :) Personally, I question everyone and everything, including myself. This is a marvelous protection against unintentionally believing things just because I've heard them several times, like "Monads are hard" or "Haskell code is easier to understand, but has many more uses". As Feynman put it: "What do you care what other people think?"

    id :: a -> a                      -- arity 1
    id = ($) :: (a -> b) -> (a -> b)  -- arity 2

I agree with the arities given above (but without quotes) and see no ill-definedness to arity. But these are two different classes of functions. There are arguments of the first function that cannot be applied to the second (e.g. 5). The fact that the two have different type signatures shows that Haskell can distinguish them (e.g. in the instantiation of a type class).

No, I think not. Having different type signatures is not enough for being distinguishable by type classes. The second type

    ∀a,b. (a -> b) -> a -> b

is an instance of the first one

    ∀a. a -> a

Instance not in the sense of class instance but in the sense of type instance, i.e. that we can obtain the former by substituting type variables in the latter, here a := (a -> b). Formally, we can write this as an inequality

    (∀a. a -> a)  ⊑  ((a -> b) -> a -> b)

with x ⊑ y meaning "x is less specific than y" or "x is more general than y". Now, the property I meant with "I don't like this behavior of wrap since it violates the nice property of polymorphic expressions that it's unimportant when a type variable is instantiated" is that the class instance predicate is monotonic with respect to the type instance relation: from x ⊑ y and Num x, we can always conclude Num y.
In particular, let's examine a type class

    class Nat a => Arity a f | f -> a

describing that the function type f has a certain arity a which is expressed with Peano numbers in the type system:

    data Zero   = Zero
    data Succ a = Succ a
    type One = Succ Zero
    type Two = Succ One

    class Nat n
    instance Nat Zero
    instance Nat n => Nat (Succ n)

Now, if Arity One (∀a . a -> a) is true then due to monotonicity, Arity One ((a -> b) -> a -> b) must be true, too! The only possible solution to this dilemma is to drop the class instance for (∀a. a -> a). It's no problem to have many special instances

    Arity One (Int -> Int)
    Arity One (Char -> Char)

etc. but we can't have the polymorphic one. In other words, not every (potentially polymorphic) function type has a well-defined arity! Oleg's hack basically supplies all those possible particular instances while avoiding the polymorphic one. Concerning the ill-definedness of wrap id

    :type wrap id
    wrap id :: (FunWrap (a -> a) y) => [String] -> y

    but trying to use it like in

        let x = wrap id ["1"] :: Int

    yields lots of type errors. We have to specialize the type of id before supplying it to wrap. For example, wrap (id :: Int -> Int) works just fine.

which I don't like, it seems that I have to live with it. That's because the toy implementation

    class FunWrap f x | f -> x where
        wrap :: f -> [String] -> x

    instance FunWrap (Int -> Int) Int where
        wrap f [x] = f (read x)

    instance FunWrap ((Int -> Int) -> Int -> Int) Int where
        wrap f [g,x] = f id (read x)

already shows the same behavior. When trying something like wrap id ["1"] :: Int, GHCi complains that there's no polymorphic instance

    FunWrap (a -> a) Int

There can't be, since that would disallow the special instances and moreover conflict with the functional dependency. So, wrap id is an example of an essentially ill-typed expression that the type checker does not reject early (namely when asking for its type).
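For reference, the toy FunWrap implementation above compiles and runs as a self-contained sketch; the GHC extension pragmas are my addition, and the second instance ignores its function argument just as in the message.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- One instance per concrete function type: wrap parses the strings
-- and feeds them to the wrapped function.
class FunWrap f x | f -> x where
    wrap :: f -> [String] -> x

instance FunWrap (Int -> Int) Int where
    wrap f [x] = f (read x)

instance FunWrap ((Int -> Int) -> Int -> Int) Int where
    wrap f [_, x] = f id (read x)  -- ignores the first argument, as in the message

main :: IO ()
main = print (wrap ((+ 1) :: Int -> Int) ["41"])  -- prints 42
```

Note that wrap (id :: Int -> Int) picks the first instance, while plain wrap id is rejected, exactly as described.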
Regards, apfelmus
[Haskell-cafe] Re: class default method proposal
Jules Bean wrote: David Menendez wrote: Duncan Coutts wrote: So my suggestion is that we let classes declare default implementations of methods from super-classes. It creates ambiguity if two classes declare defaults for a common superclass. My standard example involves Functor, Monad, and Comonad. Both Monad and Comonad could provide a default implementation for fmap. But let's say I have a type which is both a Monad and a Comonad: which default implementation gets used? I'm disappointed to see this objection isn't listed on the wiki. Doesn't sound like a very big problem. That would just be a compile time error ("More than one default for fmap possible for Foo, please resolve ambiguity").

And how would you resolve that ambiguity?

    module Control.Functor.UsefulStuff (hylo) where

    hylo :: Functor f => (a -> f a) -> (f b -> b) -> a -> b
    hylo f g = g . fmap (hylo f g) . f

    module BANG where
    import Foo (Foo)
    import Foo.Is.Monad
    import Foo.Is.Comonad
    import Control.Functor.UsefulStuff (hylo)

    bar :: Bar -> Foo Bar
    baz :: Foo Baz -> Baz

    bang = hylo bar baz

The problem is that the ambiguity may arise by just importing different modules while not having access to the offending call to fmap. Also note that as much as I'd like explicit import/export of type class instances, the current implicit and global export is no accident, it's crucial for well-definedness. See also the second half of http://article.gmane.org/gmane.comp.lang.haskell.general/15471 In other words, the main problem of all those superclass/explicit import/export proposals is that there are no proofs of the fact that they only allow well-defined programs. The homework isn't done yet, discussing adoption is too early. Regards, apfelmus
[Haskell-cafe] Re: distinguish functions from non-functions in a class/instances
jerzy karczmarczuk wrote: apfelmus: As Feynman put it: "What do you care what other people think?" It was not Feynman, but his wife. Thanks, I should have questioned my claim :) Regards, apfelmus
[Haskell-cafe] Re: New slogan for haskell.org
gwern wrote: Now, the Main Page on haskell.org is not protected, so I could just edit in one of the better descriptions proposed, but as in my Wikipedia editing, I like to have consensus especially for such visible changes.

Hey, why has the front page already been changed then? I like neither this nor the new slogan. Concerning what slogan should be on the front page, I prefer technical terms to buzzwords.

    myReadText = filter (not . buzzword)

In any case: it's not our task to convince others by means of an enterprisey formulation, people are free to choose. If they don't want it, so be it. We provide data points ("I have written a big but robust program, it's called <insert name here>", "We have an FFI and its use is explained here", "look, this quicksort function is so beautiful") but judgment is what everybody has to do for himself. Regards, apfelmus
[Haskell-cafe] Re: Software Tools in Haskell
Tommy M McGuire wrote: (Plus, interact is scary. :-D ) You have a scary feeling for a moment, then it passes. ;) Gwern Branwen wrote: I... I want to provide a one-liner for 'detab', but it looks impressively monstrous and I'm not sure I understand it. On the other hand, I'm not looking for one-liners; I really want clarity as opposed to cleverness.

    tabwidth = 4

    -- tabstops !! (col-1) == there is a tabstop at column col
    -- This is an infinite list, so no need to limit the line width
    tabstops  = map (\col -> col `mod` tabwidth == 1) [1..]

    -- calculate spaces needed to fill to the next tabstop in advance
    tabspaces = snd $ mapAccumR addspace [] tabstops
    addspace cs isstop = let cs' = ' ':cs in (if isstop then [] else cs', cs')

    main = interact $ unlines . map detabLine . lines
        where
        detabLine = concat $ zipWith replace tabspaces
        replace cs '\t' = cs     -- replace with adequate number of spaces
        replace _  char = [char] -- pass through

How about that? Regards, apfelmus
[Haskell-cafe] Re: class default method proposal
Luke Palmer wrote: Isn't a type which is both a Monad and a Comonad just Identity? (I'm actually not sure, I'm just conjecturing)

Good idea, but it's not the case.

    data L a = One a | Cons a (L a) -- non-empty list

    -- standard list monad
    instance Monad L where
        return = One
        -- join concatenates all lists in the list
        join (One y)               = y
        join (Cons (One x) ys)     = Cons x (join ys)
        join (Cons (Cons x xs) ys) = Cons x (join (Cons xs ys))

    class Comonad c where
        counit :: c a -> a
        cobind :: c a -> (c a -> b) -> c b

    instance Comonad L where
        counit (One x)    = x -- that's why we insist on non-emptiness
        counit (Cons x _) = x
        -- f has access to the whole past
        cobind ys@(One x)     f = One (f ys)
        cobind ys@(Cons x xs) f = Cons (f ys) (cobind xs f)

Checking the laws is left as an exercise for the reader :) Regards, apfelmus
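As a quick sanity check, the comonad half of this message runs standalone; toList is my helper, and deriving Eq/Show is added for testing only.

```haskell
data L a = One a | Cons a (L a) deriving (Eq, Show)  -- non-empty list

counit :: L a -> a
counit (One x)    = x
counit (Cons x _) = x

-- f has access to the whole "past" (here: the remaining suffix)
cobind :: L a -> (L a -> b) -> L b
cobind ys@(One _)     f = One (f ys)
cobind ys@(Cons _ xs) f = Cons (f ys) (cobind xs f)

toList :: L a -> [a]
toList (One x)     = [x]
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (cobind (Cons 1 (Cons 2 (One (3 :: Int)))) counit))
-- one counit law says cobind xs counit == xs, so this prints [1,2,3]
```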
[Haskell-cafe] Re: Software Tools in Haskell
Tommy M McGuire wrote: apfelmus wrote:

    tabwidth = 4
    -- tabstops !! (col-1) == there is a tabstop at column col
    -- This is an infinite list, so no need to limit the line width
    tabstops  = map (\col -> col `mod` tabwidth == 1) [1..]
    -- calculate spaces needed to fill to the next tabstop in advance
    tabspaces = snd $ mapAccumR addspace [] tabstops
    addspace cs isstop = let cs' = ' ':cs in (if isstop then [] else cs', cs')

Are you using mapAccumR (mapAccumR? (!)) to share space among the space strings?

Sharing is a good idea! But mapAccumR has nothing to do with it, I just used it to encode the recursion, as a replacement for a fold so to speak.

If so, wouldn't this be better:

    tabstops  = map (\col -> col `mod` tabwidth == 1) [1..tabwidth]
    tabspaces = cycle $ snd $ mapAccumR addspace [] tabstops

Yes. We can make the code even simpler :)

    tabspaces = cycle . init . tails . replicate tabwidth $ ' '

and the tabstops list is gone.

On the other hand, wouldn't this make for less head scratching:

    tabspaces = map (\col -> replicate (spacesFor col) ' ') [1..]
        where spacesFor col = tabwidth - ((col - 1) `mod` tabwidth)

Yes and no. The very idea of introducing the tabspaces list in the first place is to avoid explicit indices altogether, a single zipWith is responsible for aligning columns. So, it's only natural to avoid indices for the definition of tabspaces, too. A side effect of separating tabspaces from the main loop is that we can do all kinds of irregular tabstop spacing or different fill characters and the like solely by changing this list.

    main = interact $ unlines . map detabLine . lines
        where detabLine = concat $ zipWith replace tabspaces

I think you mean concat . zipWith ... (You're doing this from memory, aren't you?)

Yes and yes :)

    replace cs '\t' = cs     -- replace with adequate number of spaces
    replace _  char = [char] -- pass through

    How about that?

It doesn't produce the same output, [...] It's counting tabs before expanding rather than after?
Yes, I noticed it too late, it's so wrong :) Here's a correct version:

    perLine f = interact $ unlines . map f . lines

    main = perLine (detabLine tabspaces)
        where
        detabLine _      []         = []
        detabLine (w:ws) ('\t':cs)  = detabLine (w:ws) (w ++ cs)
        detabLine (w:ws) (c   :cs)  = c : detabLine ws cs

Or even

    main = interact $ detab tabspaces
        where
        detab _      []         = []
        detab _      ('\n':cs)  = '\n' : detab tabspaces cs
        detab (w:ws) ('\t':cs)  = detab (w:ws) (w ++ cs)
        detab (_:ws) (c   :cs)  = c : detab ws cs

This can't be expressed with zip anymore since the alignment of the list of spaces and the text changes when encountering a tab. @dons: I guess that detab would probably be a very interesting (and even useful) study example for generalizing stream fusion, since it's more like concatMap than map. Regards, apfelmus
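Putting the pieces of the thread together, the final detab can be stated as a pure, testable function; this is a sketch assuming tabwidth 4 and the tabspaces definition from the earlier message.

```haskell
import Data.List (tails)

tabwidth :: Int
tabwidth = 4

-- spaces needed to reach the next tabstop, one string per column, cycled
tabspaces :: [String]
tabspaces = cycle . init . tails . replicate tabwidth $ ' '

-- expand tabs to spaces; the alignment list restarts at each newline
detab :: String -> String
detab = go tabspaces
  where
    go _      []        = []
    go _      ('\n':cs) = '\n' : go tabspaces cs
    go (w:ws) ('\t':cs) = go (w:ws) (w ++ cs)  -- splice in the fill spaces
    go (_:ws) (c:cs)    = c : go ws cs

main :: IO ()
main = putStr (detab "a\tb\n\tdone\n")
```

A tab at column 2 fills to column 5, so detab "a\tb" yields "a" followed by three spaces and "b".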
[Haskell-cafe] Re: Examples of useful functions of order >= 3?
Ben Lippmeier wrote: I vaguely remember a paper called something like "Is there any use for a seventh order function?", but I forget why they were used, and I can't find it on Google, or ACM or any of the other likely places. Does anyone have any examples of code (in whatever language) which usefully uses functions of order >= 3? Preferably up to 5?

I don't know, but you can probably use Church encodings to pimp up the order:

    type Bool   = ∀a . a -> a -> a              -- order a + 1
    type List a = ∀b . (a -> b -> b) -> b -> b  -- max (order a, order b) + 1

    not :: Bool -> Bool                  -- order 2
    map :: (a -> a) -> List a -> List a  -- order 3 + order a

To avoid higher-rank polymorphism, just choose some arbitrary fixed types

    not'    :: (A -> A -> A) -> (A -> A -> A)
    filter' :: (A -> A) -> ((A -> B -> B) -> B -> B) -> ((A -> B -> B) -> B -> B)

Those functions probably aren't that useful anymore, but they once were :) Regards, apfelmus
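A small runnable instance of this, using RankNTypes: cnot is the Church not sketched above, and unchurch is my decoding helper for testing.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church booleans: each encoding layer raises the order of the type
type CBool = forall a. a -> a -> a

ctrue, cfalse :: CBool
ctrue  t _ = t
cfalse _ f = f

-- not on Church booleans is already a second-order function
cnot :: CBool -> CBool
cnot b = \t f -> b f t

-- decode back to the ordinary Bool, for inspection
unchurch :: CBool -> Bool
unchurch b = b True False

main :: IO ()
main = print (unchurch (cnot ctrue))  -- prints False
```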
[Haskell-cafe] Re: Software Tools in Haskell
Benja Fallenstein wrote: Henning Thielemann wrote: I remember there was a discussion about how to implement full 'wc' in an elegant but maximally lazy form, that is counting bytes, words and lines in one go. Did someone have a nice idea of how to compose the three counters from implementations of each counter? I'm afraid one cannot simply use the "split and count fragments" trick then.

Well, you could rely on catamorphism fusion

    (foldr f1 x1, foldr f2 x2) = foldr (f1 *** f2) (x1,x2)

but that's not so compositional.

Could you turn the folds into scans and use zip3 and last? I.e., something like this:

This approach is really clever!

    data Triple a b c = Triple !a !b !c deriving Show

    countChars :: String -> [Int]
    countChars = scanl (\n _ -> n+1) 0

    countChar :: Char -> String -> [Int]
    countChar c = scanl (\n c' -> if c == c' then n+1 else n) 0

    countLines = countChar '\n'
    countWords = countChar ' '

    last' [x]    = x
    last' (x:xs) = x `seq` last' xs

    -- zip3' is just zipWith3 Triple, written out:
    zip3' (x:xs) (y:ys) (z:zs) = Triple x y z : zip3' xs ys zs
    zip3' _ _ _ = []

    wc :: String -> Triple Int Int Int
    wc xs = last' $ zip3' (countChars xs) (countWords xs) (countLines xs)

    main = print . wc =<< getContents

(or use Data.Strict.Tuple -- but that only has pairs and no zip...)

Slightly simplified (uses BangPatterns):

    import Data.List

    scanl' :: (b -> a -> b) -> b -> [a] -> [b]
    scanl' f !b []     = [b]
    scanl' f !b (x:xs) = b : scanl' f (f b x) xs

    counts :: (a -> Bool) -> [a] -> [Int]
    counts p = scanl' (\n c -> if p c then n+1 else n) 0

    wc :: String -> (Int,Int,Int)
    wc xs = last $ zip3 (charc xs) (wordc xs) (linec xs)
        where
        charc = counts (const True)
        wordc = counts (== ' ')
        linec = counts (== '\n')

The scanl' basically ensures that forcing the resulting list spine automatically forces the elements. This makes sense to do early and we can use normal list functions after that. Regards, apfelmus
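The simplified version can be checked directly; here it is as a self-contained sketch (BangPatterns assumed, as noted in the message).

```haskell
{-# LANGUAGE BangPatterns #-}

-- scanl' forces each accumulator as the list spine is forced
scanl' :: (b -> a -> b) -> b -> [a] -> [b]
scanl' f !b []     = [b]
scanl' f !b (x:xs) = b : scanl' f (f b x) xs

counts :: (a -> Bool) -> [a] -> [Int]
counts p = scanl' (\n c -> if p c then n + 1 else n) 0

-- one pass over the input: characters, spaces and newlines together
wc :: String -> (Int, Int, Int)
wc xs = last (zip3 (charc xs) (wordc xs) (linec xs))
  where
    charc = counts (const True)
    wordc = counts (== ' ')
    linec = counts (== '\n')

main :: IO ()
main = print (wc "hello world\n")  -- prints (12,1,1)
```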
[Haskell-cafe] Re: New slogan for haskell.org
Henning Thielemann wrote: apfelmus wrote: gwern wrote: Now, the Main Page on haskell.org is not protected, so I could just edit in one of the better descriptions proposed, but as in my Wikipedia editing, I like to have consensus especially for such visible changes. Hey, why has the front-page already been changed then? I like neither this nor the new slogan. Edit war!

Yarr, bring up the guns! Y-rifle, fire!

    http://ellemose.dina.kvl.dk/cgi-bin/sestoft/lamreduce?action=normalizeexpression=%5Clamb.%28%5Cx.%5Cf.f%28x+x+f%29%29+%28%5Cx.%5Cf.f%28x+x+f%29%29+%28%5Cf.%5Cda.f%29evalorder=normal+order

Goodstein gun, fire!

    {-# LANGUAGE BangPatterns #-}
    import Data.Tree

    type Number = Forest Integer

    -- (shortened) hereditary base-2 representation
    zero = []; one = [Node 1 zero]; two = [Node 1 one]
    three = one ++ two; four = [Node 1 two]

    subtractOne p (Node 1 []:xs) = xs
    subtractOne p (Node a []:xs) = Node (a-1) [] : xs
    subtractOne p (Node 1 k :xs) = let k' = subtractOne p k
                                   in  subtractOne p [Node 1 k'] ++ Node (p-1) k' : xs
    subtractOne p (Node a k :xs) = subtractOne p [Node 1 k] ++ Node (a-1) k : xs

    goodstein !p n = if null n then [] else n : goodstein (p+1) (subtractOne (p+1) n)

    goodsteingun n = concat $ "lamb" : map (const "da") (goodstein 2 n)

    goodsteingun three
      "lambdadadadadada"
    goodsteingun four
      "lambdadadadadadadadadadadadadadadadadadadadadada[...]"

Will it ever cease? Regards, apfelmus
[Haskell-cafe] Re: Monads that are Comonads and the role of Adjunction
Dan Weston wrote:

    newtype O f g a = O (f (g a)) -- Functor composition: f `O` g

    instance (Functor f, Functor g) => Functor (O f g) where ...
    instance Adjunction f g => Monad   (O g f) where ...
    instance Adjunction f g => Comonad (O f g) where ...

    class (Functor f, Functor g) => Adjunction f g | f -> g, g -> f where
        leftAdjunct  :: (f a -> b) -> a -> g b
        rightAdjunct :: (a -> g b) -> f a -> b

Functors are associative but not generally commutative. Apparently a Monad is also a Comonad if there exist left (f) and right (g) adjuncts that commute.

Yes, but that's only sufficient, not necessary. Jules and David already pointed out that while every monad comes from an adjunction, this adjunction usually involves categories different from Hask. So, there are probably no adjoint functors f and g in Hask such that

    []  ~=  g `O` f

or

    data L a = One a | Cons a (L a) -- non-empty list
    L   ~=  g `O` f

(proof?) Yet, both are monads and the latter is even a comonad. Moreover, f and g can only commute if they have the same source and target category (Hask in our case). And even when they don't commute, the resulting monad could still be a comonad, too.

My category theory study stopped somewhere between Functor and Adjunction, but is there any deep magic you can describe here in a paragraph or two? I feel like I will never get Monad and Comonad until I understand Adjunction.

Alas, I wish I could, but I have virtually no clue about adjoint functors myself :) I only know the classic example that conjunction and implication

    f a = (a,S)
    g b = S -> b

are adjoint

    ((a,S) -> b)  ~  (a -> (S -> b))

which is just well-known currying. We get the state monad

    (g `O` f) a = S -> (S,a)

and the stream comonad

    (f `O` g) a = (S, S -> a)

out of that. Regards apfelmus
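To make the currying example concrete, here is a sketch with my own names: the two adjuncts are just curry and uncurry, and the unit and multiplication they induce are return and join of the state monad.

```haskell
-- adjuncts for f a = (a, s) and g b = s -> b
leftAdjunct :: ((a, s) -> b) -> a -> (s -> b)
leftAdjunct h a s = h (a, s)   -- this is curry

rightAdjunct :: (a -> (s -> b)) -> (a, s) -> b
rightAdjunct h (a, s) = h a s  -- this is uncurry

-- the induced monad (g `O` f) a = s -> (s, a), i.e. State:
returnState :: a -> (s -> (s, a))
returnState a = \s -> (s, a)

joinState :: (s -> (s, s -> (s, a))) -> (s -> (s, a))
joinState m = \s -> let (s', m') = m s in m' s'

main :: IO ()
main = print (joinState (\s -> (s + 1, \s' -> (s', "done"))) (0 :: Int))
```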
[Haskell-cafe] Re: OOP'er with (hopefully) trivial questions.....
Nicholls, Mark wrote:

    data Shape = Circle Int | Rectangle Int Int | Square Int

Isn't this now closed... i.e. the statement is effectively defining that Shape is this and only ever this... i.e. can I in another module add new types of Shape?

Yes, but in most cases, this is actually a good thing. For instance, you can now define equality of two shapes:

    equal :: Shape -> Shape -> Bool
    equal (Circle x)        (Circle y)        = x == y
    equal (Rectangle x1 x2) (Rectangle y1 y2) = x1 == y1 && x2 == y2
    equal (Square x)        (Square y)        = x == y
    equal _                 _                 = False

In general, the open approach is limited to functions of the form

    Shape -> ... -> Shape / Int / something else

with no Shape occurring in the other ... arguments.

I'm trying to teach myself Haskell... I've spent a few hours going through a few tutorials... and I sort of get the basics... [...] After many years of OOP though my brain is wired up to construct software in that 'pattern'... a problem for me at the moment is I cannot see how to construct programs in an OO style in Haskell... I know this is probably not the way to approach it... but I feel I need to master the syntax before the paradigm.

This approach is probably harder than it could be, you'll have a much easier time learning it from a paper textbook like

    http://www.cs.nott.ac.uk/~gmh/book.html
    http://web.comlab.ox.ac.uk/oucl/publications/books/functional/
    http://haskell.org/soe/

After all, it's like learning programming anew. Regards, apfelmus
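The flip side of a closed type is that any consumer can match all constructors exhaustively; for instance (area is my illustrative addition, not from the message, with a crude integer stand-in for pi):

```haskell
data Shape = Circle Int | Rectangle Int Int | Square Int

-- every constructor is known, so this match is exhaustive
area :: Shape -> Int
area (Circle r)      = 3 * r * r  -- integer stand-in for pi * r^2
area (Rectangle w h) = w * h
area (Square s)      = s * s

main :: IO ()
main = print (map area [Circle 1, Rectangle 3 4, Square 5])  -- prints [3,12,25]
```

Adding a fourth constructor later would make GHC warn about the now-incomplete pattern match here, which is exactly the safety the closed world buys.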
[Haskell-cafe] Re: Foldable Rose Trees
Dominic Steinitz wrote: I've been trying to re-label nodes in a rose tree without re-inventing wheels (although I'm beginning to wish I had). I've got as far as this but haven't yet cracked the general case for Traversable.

Solution 1) Data.Tree is already an instance of Traversable. :)

Solution 2) The key observation is that the instances for rose trees can/should be bootstrapped from the corresponding instances for lists []. With this, we have

    instance Functor Rose' where
        fmap f (Rose' x rs) = Rose' (f x) (map  (fmap f) rs)
        fmap f (Rose' x rs) = Rose' (f x) (fmap (fmap f) rs)

(fmap instead of map to highlight the general structure)

    instance Foldable Rose' where
        foldMap f (Rose' x rs) = f x `mappend` mconcat (map (foldMap f) rs)
        foldMap f (Rose' x rs) = f x `mappend` foldMap (foldMap f) rs

    instance Traversable Rose' where
        traverse f (Rose' x [])         = Rose' <$> f x <*> pure []
        traverse f (Rose' x [x0])       = Rose' <$> f x <*> (pure (\x -> [x]) <*> traverse f x0)
        traverse f (Rose' x [x0,x1])    = Rose' <$> f x <*> (pure (\x y -> x:y:[]) <*> traverse f x0 <*> traverse f x1)
        traverse f (Rose' x [x0,x1,x2]) = Rose' <$> f x <*> (pure (\x y z -> x:y:z:[]) <*> traverse f x0 <*> traverse f x1 <*> traverse f x2)
        traverse f (Rose' x xs)         = Rose' <$> f x <*> traverse (traverse f) xs

    *Main> let (p,_) = runState (unwrapMonad (traverse (\x -> WrapMonad update) (Rose' 3 [Rose' 5 [Rose' 11 [Rose' 19 []], Rose' 13 [], Rose' 17 []], Rose' 7 []]))) 0 in p
    Rose' 0 [Rose' 1 [Rose' 2 [Rose' 3 []],Rose' 4 [],Rose' 5 []],Rose' 6 []]

This can be made shorter:

    Data.Traversable.mapM m = unwrapMonad . traverse (\x -> WrapMonad (m x))

Regards, apfelmus
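For Dominic's original goal, relabeling nodes, the general instance plus Data.Traversable.mapAccumL already suffices; a runnable sketch (Rose and relabel are my names):

```haskell
import Data.Traversable (mapAccumL)

data Rose a = Rose a [Rose a] deriving (Eq, Show)

-- instances bootstrapped from the list instances, as in the message
instance Functor Rose where
    fmap f (Rose x rs) = Rose (f x) (fmap (fmap f) rs)

instance Foldable Rose where
    foldMap f (Rose x rs) = f x <> foldMap (foldMap f) rs

instance Traversable Rose where
    traverse f (Rose x rs) = Rose <$> f x <*> traverse (traverse f) rs

-- number the nodes 0,1,2,... in preorder, threading a counter
relabel :: Rose a -> Rose Int
relabel = snd . mapAccumL (\n _ -> (n + 1, n)) 0

main :: IO ()
main = print (relabel (Rose 'a' [Rose 'b' [], Rose 'c' [Rose 'd' []]]))
```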
[Haskell-cafe] Re: New to Haskell: The End
Joost Behrends wrote: "it has MONADS" Interestingly, this is not even a language feature, it just happens that the concept of monads can be expressed in Haskell. (Ok, ignoring syntactic sugar in the form of do-notation for the moment. And ignoring that constructor classes were introduced because monads were such a cool use case.) Regards, apfelmus
[Haskell-cafe] Re: MonadFix
Joost Behrends wrote: since about three weeks i am learning Haskell now. One of my first exercises is to decompose an Integer into its prime factors.

How about separating the candidate prime numbers from the recursion

    factorize :: Integer -> [Integer]
    factorize = f primes'
        where
        primes' = 2 : [3,5..]
        f (p:ps) n
            | r == 0    = p : f (p:ps) q
            | p*p > n   = [n]
            | otherwise = f ps n
            where (q,r) = n `divMod` p

For a faster factorization, just plug in a better primes'. Regards, apfelmus
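Restated as a self-contained, testable program, the trial-division sketch behaves as advertised:

```haskell
-- trial division over a lazy candidate list; plug in a better primes'
-- for faster factorization
factorize :: Integer -> [Integer]
factorize = f primes'
  where
    primes' = 2 : [3,5 ..]            -- candidates: 2 and all odd numbers
    f (p:ps) n
      | r == 0    = p : f (p:ps) q    -- p divides n: emit it and retry p
      | p*p > n   = [n]               -- no divisor up to sqrt n: n is prime
      | otherwise = f ps n
      where (q, r) = n `divMod` p

main :: IO ()
main = print (factorize 360)  -- prints [2,2,2,3,3,5]
```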
[Haskell-cafe] Re: MonadFix
Joost Behrends wrote: apfelmus writes:

    How about separating the candidate prime numbers from the recursion

        factorize :: Integer -> [Integer]
        factorize = f primes'
            where
            primes' = 2 : [3,5..]
            f (p:ps) n
                | r == 0    = p : f (p:ps) q
                | p*p > n   = [n]
                | otherwise = f ps n
                where (q,r) = n `divMod` p

(besides: p <= intsqrt n must stay, otherwise you have the expensive p*p at every step)

Huh? p <= intsqrt n is evaluated just as often as p*p > n, with changing n. Why would that be less expensive? Btw, the code above tests for r == 0 first, which means that the following p*p > n is tested exactly once for every prime candidate p.

Providing effectively primes' for that is simply impossible talking about really big numbers as i did in my post. There are no fast generators iterating just through the primes firstly

Sure, that's why I called it primes'. It's intended to be an easily computable list of prime candidates and as you write below, we can do better than using all odd numbers for that.

and these lists get much too big also (for 2^120 you cannot even begin to use a list of the primes up to 2^(any appropriate x) ).

Thanks to lazy evaluation, the list will be generated on demand and its elements are garbage collected once used. So, the code above will run in constant space. The list is more like a suspended loop.

What can be done is to iterate through odd numbers meeting as many primes as possible. We could do this:

    iterdivisors x
        | x == 0    = 3
        | x == 1    = 5
        | otherwise = iterdivisors (x-1) + (cycle [2,4] !! x)

This gives 7,11,13,17,19,23,25,29,31,35,37,41,43,47,49,53,55,59,61,63,67 ... i.e. exactly all primes and odds with greater prime factors than 3. We can improve that cycle avoiding the multiples of 5:

    ... | otherwise = iterdivisors (x-1) + (cycle [2,4,2,4,2,4,6,2,6] !! x)

and we can do better by avoiding the multiples of 7 and so on (the length of these lists grows fast - it gets multiplied by every new avoided prime -, but we could provide those lists programmatically).
And we must be sure that cycle doesn't eat up memory for each new pass through the list. And we should use a more efficient representation for the list of summands than a list.

Huh, this looks very expensive. I'd avoid indices like x altogether and use a plain list instead, we don't need random access to the prime candidates, after all.

But the title of my post and a much more interesting topic for learning Haskell is how to avoid memory exhaustion by recursion.

Maybe you stumbled over common examples for a stack overflow like

    foldr (+) 0 [1..10^6]
    foldl (+) 0 [1..10^6]

whereas

    foldl' (+) 0 [1..10^6]

runs without. See also

    http://en.wikibooks.org/wiki/Haskell/Performance_Introduction
    http://www.haskell.org/haskellwiki/Stack_overflow

THIS was my intention and the reason why i erroneously brought MonadFix into the game. The recursion i described as follows

    divisions = do
        y <- get
        if divisor y <= bound y
            then do put (divstep y); divisions
            else return y

makes a DESTRUCTIVE UPDATE of the DivIters (by put)

Huh? The State monad doesn't do destructive updates, to the contrary. (Neither do IORefs or STRefs; only STArrays or the like do.)

and this kind of recursion seems not to remember itself (as i have understood, that is achieved by tail recursion). I just didn't like making DivIters to States. It's kind of lying code.

Because of lazy evaluation, tail recursion is not what it seems to be in Haskell.

However it worked and improved performance by a factor around 10 (or oo - perhaps a normal recursion exhausts 512MB memory for 2^120+1, as it will do for much smaller Integers, if they are prime), not to talk about footprint. Compiled for running standalone, it took 17 minutes, an equivalent python script 2 hours. This factor near 7 is not fully satisfactory.

Using the State monad introduces unnecessary overhead. Also, I assume that you ran the compiler with the -O2 flag to enable optimizations?
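The "avoid multiples of small primes" idea fits Haskell's lazy lists more naturally than an indexed iterdivisors; here is a sketch of the mod-30 wheel (multiples of 2, 3 and 5 removed) as a candidate stream, my formulation rather than the poster's:

```haskell
-- candidate divisors: the small primes first, then every number
-- coprime to 30, generated by cycling the wheel's gap pattern
candidates :: [Integer]
candidates = 2 : 3 : 5 : scanl (+) 7 (cycle [4,2,4,2,4,6,2,6])

main :: IO ()
main = print (take 12 candidates)  -- prints [2,3,5,7,11,13,17,19,23,29,31,37]
```

Because cycle reuses one circular list, repeated passes allocate nothing new, and factorize can consume this stream directly in place of primes'.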
Or is there still a way of getting a tail recursive Haskell function for iterating through the DivIters (outside any monads)?? I wouldn't be struck dead by surprise if yes, but i couldn't find any.

I don't understand why you use a complicated DivIters data structure. Passing two simple parameters does the job just fine. Regards, apfelmus