Re: [Haskell-cafe] Non-Overlapping Patterns
PR Stanley wrote:

> Hi
>
> isZero :: Int -> Bool
> isZero 0 = True
> isZero n | n /= 0 = False
>
> The order in which the above equations appear makes no difference to the application of isZero. Does the Haskell interpreter rewrite patterns into one single definition using some sort of switch or if construct?

Something a bit like that.

> Why does an equation without a guard have to be placed after the more specific cases?

It doesn't. You could write the above the other way around if you wished.

> To put it another way, why doesn't the interpreter identify the more specific cases and put them before the general ones?

Because in general it's convenient to have overlapping cases, where the order matters, and to be able to choose the order.

Jules

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
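Concretely, GHC compiles a multi-equation definition into a single case expression whose alternatives are tried top to bottom. A hand-desugared sketch (the exact Core GHC produces differs, but the matching order is the same):

```haskell
-- isZero written the way the compiler sees it: one case expression,
-- alternatives tried in order.
isZero :: Int -> Bool
isZero x = case x of
  0          -> True
  n | n /= 0 -> False
  _          -> error "isZero: no match"  -- unreachable for Int

-- Order only matters once patterns overlap: here the catch-all would
-- shadow the specific equation if it were written first.
classify :: Int -> String
classify 0 = "zero"
classify _ = "non-zero"
```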
Re: [Haskell-cafe] Using Template Haskell to make type-safe database access
Hi Mads, I think there may be a bit of a problem with the approach you suggest: as the type returned by the query is computed by the SQL server (if I understand you correctly), it's very hard to do anything with the result of the query - the Haskell compiler has no idea what type the result has, so you can't do anything with it. I think it makes much more sense to bookkeep type information on the Haskell side.

But you can ask the SQL server for the type of the result. In the TH function you could:

Thanks for your interesting reply. I'd forgotten that you can do I/O in TH's quotation monad. I agree that you can ask the database server for the type that an SQL expression will return. I don't understand metaprogramming enough to see how computing types with TH affects the rest of your program.

Here's a concrete example. Suppose you have a query q that, when performed, will return a table storing integers. I can see how you can ask the SQL server for the type of the query, parse the response, and compute the Haskell type [Int]. I'm not sure how to sum the integers returned by the query *in Haskell* (I know SQL can sum numbers too, but this is a simple example). What would happen when you apply Haskell's sum function to the result of the query? Does TH do enough compile-time execution to see that the result is well-typed?

Having the SQL server compute types for you does have other drawbacks, I think. For example, suppose your query projects out a field that does not exist. An error like that will only get caught once you ask the server for the type of your SQL expression. If you keep track of the types in Haskell, you can catch these errors earlier; Haskell's type system can pinpoint which part of the query is accessing the wrong field. I feel that if you really care about the type of your queries, you should guarantee type correctness by construction, rather than check it as an afterthought.
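The "I/O in TH's quotation monad" idea can be made concrete. The sketch below is my own illustration, not code from the thread: `describeQuery` is a hypothetical stand-in for a real DESCRIBE round-trip to the server (stubbed here to return "INT"), and only one SQL type is mapped.

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Hypothetical stand-in for asking the SQL server to DESCRIBE a query;
-- a real implementation would open a database connection here.
describeQuery :: String -> IO String
describeQuery _ = return "INT"

-- Map the server's answer to a Haskell type (only INT is sketched).
sqlTypeToHs :: String -> Q Type
sqlTypeToHs "INT" = conT ''Int
sqlTypeToHs other = fail ("unsupported SQL type: " ++ other)

-- Compute the Haskell type of a query's result list at compile time,
-- performing the server round-trip inside the quotation monad via runIO.
queryResultType :: String -> Q Type
queryResultType sql = do
  ty <- runIO (describeQuery sql)
  appT listT (sqlTypeToHs ty)
```

A splice such as `$(queryResultType "SELECT n FROM t")` can then appear in a type signature, so applying `sum` to the (now properly typed) result is checked like any other Haskell code.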
Of course it all requires that the database have identical metadata at compile time and run time - either using the same database or a copy. Though one should note that HaskellDB has the same disadvantage. Actually it does not seem much of a disadvantage at all, as most code accessing SQL databases depends on database metadata anyway.

Perhaps I should explain my own thoughts on the subject a bit better. I got interested in this problem because I think it makes a nice example of dependent types in the real world - you really want to compute the *type* of a table based on the *value* of an SQL DESCRIBE. Nicolas Oury and I have written a draft paper describing some of our ideas: http://www.cs.nott.ac.uk/~wss/Publications/ThePowerOfPi.pdf Any comments are very welcome! Our proposal is not as nice as it could be (we would really like to have quotient types), but I hope it hints at what is possible.

Wouter
[Haskell-cafe] Induction (help!)
Hi I don't know what it is that I'm not getting where mathematical induction is concerned. This is relevant to Haskell so I wonder if any of you gents could explain in unambiguous terms the concept please. The wikipedia article offers perhaps the least obfuscated definition I've found so far but I would still like more clarity. The idea is to move onto inductive proof in Haskell. First, however, I need to understand the general mathematical concept. Top marks for clarity and explanation of technical terms. Thanks Paul
[Haskell-cafe] Re: Induction (help!)
PR Stanley [EMAIL PROTECTED] wrote:
> Hi I don't know what it is that I'm not getting where mathematical induction is concerned. This is relevant to Haskell so I wonder if any of you gents could explain in unambiguous terms the concept please. The wikipedia article offers perhaps the least obfuscated definition I've found so far but I would still like more clarity. The idea is to move onto inductive proof in Haskell. First, however, I need to understand the general mathematical concept. Top marks for clarity and explanation of technical terms. Thanks Paul

Induction - from the small picture, extrapolate the big
Deduction - from the big picture, extrapolate the small

Thus, in traditional logic, if you induce all apples are red, simple observation of a single non-red apple quickly reduces your result to at least one apple is not red on one side, all others may be red, i.e. you can't deduce all apples are red with your samples anymore. As used in mathematical induction, deductively sound:

1) Let apple be defined as being of continuous colour.
2) All apples are of the same colour.
3) One observed apple is red.

Ergo: All apples are red. Q.E.D.

The question that is left is what the heck an apple is, and how it differs from these things you see at a supermarket. It could, from the above proof, be a symbol for red rubberband. The samples are defined by the logic. Proposition 2 should of course be inferable from looking at one single apple, or you're going to look quite silly. It only works in mathematics, where you can have exact, either complete or part-wise, copies of something. If you can think of a real-world example where this works, please speak up. That's it. Aristotelian logic sucks.

-- (c) this sig last receiving data processing entity. Inspect headers for past copyright information. All rights reserved. Unauthorised copying, hiring, renting, public performance and/or broadcasting of this signature prohibited.
[Haskell-cafe] ANNOUNCE: Sessions 2008.5.2
Howdy, I'm pleased to announce the general availability of Session Types for Haskell, version 2008.5.2. It is available from my website[0], Hackage[1][2], and I've just updated the online tutorial[3] to take into account the recent changes and new features.

[0] http://wellquite.org/non-blog/sessions-2008.5.2.tar.gz
[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/sessions-2008.5.2
[2] http://hackage.haskell.org/
[3] http://wellquite.org/sessions/tutorial_1.html

Session types are a means of describing communication between multiple threads, and statically verifying that the communication being performed is safe and conforms to the specification. The library supports multiple, concurrent channels being open and actions upon those channels being interleaved; forking new threads; the communication of process IDs, allowing threads to establish new channels between each other; higher-order channels, allowing an established channel to be sent over a channel to another process; and many other features common to session type systems.

The significant new changes in this version are:

a) an entirely new means to specify session types. This removes the old absolute indexing, is composable, and far more flexible and powerful than the old system.

b) support for higher-order channels and session types, allowing channels to be sent and received. This permits additional communication patterns, such as two threads which are otherwise unaware of each other being able to communicate with one another by sending and receiving a channel via a common third party - delegation.

Tests[4] and an example application[5] are available which should, in combination with the tutorial[6], explain how session types can be used. As ever, any feedback is very gratefully received.
[4] http://wellquite.org/non-blog/sessions-browseable/Control/Concurrent/Session/Tests.hs
[5] http://wellquite.org/non-blog/sessions-browseable/Control/Concurrent/Session/Queens.hs
[6] http://wellquite.org/sessions/tutorial_1.html

Matthew
[Haskell-cafe] help in tree folding
Hi. I'm learning Haskell but I'm stuck on a generic tree folding exercise. I must write a function of this type:

treefoldr :: (Eq a, Show a) => (a -> b -> c) -> c -> (c -> b -> b) -> b -> Tree a -> c

Tree is declared as:

data (Eq a, Show a) => Tree a = Void | Node a [Tree a] deriving (Eq, Show)

As an example,

treefoldr (:) [] (++) [] (Node '+' [Node '*' [Node 'x' [], Node 'y' []], Node 'z' []])

must return "+*xyz". Any help? (Sorry for my bad English.)
Re: [Haskell-cafe] Re: Induction (help!)
Hi I don't know what it is that I'm not getting where mathematical induction is concerned. This is relevant to Haskell so I wonder if any of you gents could explain in unambiguous terms the concept please. The wikipedia article offers perhaps the least obfuscated definition I've found so far but I would still like more clarity. The idea is to move onto inductive proof in Haskell. First, however, I need to understand the general mathematical concept. Top marks for clarity and explanation of technical terms. Thanks Paul Induction - from the small picture, extrapolate the big Deduction - from the big picture, extrapolate the small Thus, in traditional logic, if you induce all apples are red, simple observation of a single non-red apple quickly reduces your result to at least one apple is not red on one side, all others may be red, i.e, you can't deduce all apples are red with your samples anymore. Paul: surely, you wouldn't come up with an incorrect premise like all apples are red in the first place. Sorry, still none the wiser Cheers, Paul
[Haskell-cafe] Re: ANNOUNCE: Sessions 2008.5.6
On Tue, May 06, 2008 at 01:04:28PM +0100, Matthew Sackman wrote:
> I'm pleased to announce the general availability of Session Types for Haskell, version 2008.5.2. It is available from my website[0], Hackage[1][2] and I've just updated the online tutorial[3] to take into account the recent changes and new features.

In an unforgettable act of incompetence, I forgot to check that the package was actually usable, and sadly it was missing a module. Thus 2008.5.6 is now released, which I have verified builds and is usable. Apologies.

http://wellquite.org/non-blog/sessions-2008.5.6.tar.gz
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/sessions-2008.5.6

Matthew
Re: [Haskell-cafe] Induction (help!)
On Tuesday, 6 May 2008 at 11:34, PR Stanley wrote:
> Hi I don't know what it is that I'm not getting where mathematical induction is concerned. This is relevant to Haskell so I wonder if any of you gents could explain in unambiguous terms the concept please. The wikipedia article offers perhaps the least obfuscated definition I've found so far but I would still like more clarity. The idea is to move onto inductive proof in Haskell. First, however, I need to understand the general mathematical concept. Top marks for clarity and explanation of technical terms. Thanks Paul

Mathematical induction is a method to prove that some proposition P holds for all natural numbers. The principle of mathematical induction says that if
1) P(0) holds, and
2) for every natural number n, if P(n) holds, then also P(n+1) holds,
then P holds for all natural numbers.

If you choose another base case, say k instead of 0, so use 1') P(k) holds, then you can deduce that P holds for all natural numbers greater than or equal to k.

You can convince yourself of the validity of the principle in the following ways:

Direct: By 1), P(0) is true. Specialising n to 0 in 2), since we already know that P(0) holds, we deduce that P(1) holds, too. Now specialising n to 1 in 2), since we already know that P(1) is true, we deduce that P(2) is also true. And so on ... after k such specialisations we have found that P(k) is true.

Indirect: Suppose there is an n1 such that P(n1) is false. Let n0 be the smallest number for which P doesn't hold (there is one because there are only finitely many natural numbers less than or equal to n1. Technical term: the set of natural numbers is well-ordered, which means that every non-empty subset contains a smallest element). If n0 = 0, 1) doesn't hold; otherwise there is a natural number n (namely n0-1) for which P(n) holds but not P(n+1), so 2) doesn't hold.

Example: The sum of the first n odd numbers is n^2, or 1 + 3 + ... + (2*n-1) = n^2.
1) Base case:
   a) n = 0: the sum of the first 0 odd numbers is 0, and 0^2 = 0.
   b) n = 1: the sum of the first 1 odd numbers is 1, and 1^2 = 1.
2) Let n be an arbitrary natural number. We prove that if the induction hypothesis - 1 + 3 + ... + (2*n-1) = n^2 - holds, then 1 + 3 + ... + (2*(n+1)-1) = (n+1)^2 holds, too.

1 + 3 + ... + (2*(n+1)-1)
  = (1 + 3 + ... + (2*n-1)) + 2*(n+1)-1
  = n^2 + 2*(n+1) - 1        -- by the induction hypothesis
  = n^2 + 2*n + 1
  = (n+1)^2

q.e.d. Hope this helps, Daniel
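Daniel's example is easy to check mechanically in Haskell. This is of course not a proof, only a confidence check over the first few hundred cases:

```haskell
-- The sum of the first n odd numbers, computed directly.
sumOdds :: Int -> Int
sumOdds n = sum (take n [1, 3 ..])

-- The claim proved by induction above: the sum equals n^2.
claimHolds :: Int -> Bool
claimHolds n = sumOdds n == n * n
```

For instance, `all claimHolds [0 .. 500]` evaluates to True.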
Re: [Haskell-cafe] A Cabal problem
Trevor Elliott wrote: On Mon, 05 May 2008 13:37:12 -0400 Mario Blazevic [EMAIL PROTECTED] wrote:
> Trevor Elliott wrote:
> > Hi Mario, Is the name of the module within the Shell.hs file Main? If not, that could be your problem.
> You may be right, the module's name is Shell, not Main. GHC does not have a problem with that when the --main-is Shell option is specified on the command line. Is Cabal different? Is this documented somewhere?

Cabal doesn't pass the --main-is option, I believe because it is specific to GHC. What you could do is add this flag in the ghc-options field of your executable in the cabal file, like this:

ghc-options: --main-is Shell

That worked like a charm, except the two dashes should be one (my mistake). Thank you.
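For reference, the working stanza comes out something like this (a sketch: the executable name is invented, while Shell.hs and the Shell module are from the thread; note the single dash, since GHC's flag is -main-is):

```cabal
executable shell
  main-is:      Shell.hs
  ghc-options:  -main-is Shell
```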
Re: [Haskell-cafe] Re: Induction (help!)
On 2008 May 6, at 8:37, PR Stanley wrote:
> > Thus, in traditional logic, if you induce all apples are red, simple observation of a single non-red apple quickly reduces your result to at least one apple is not red on one side, all others may be red, i.e, you can't deduce all apples are red with your samples anymore.
> Paul: surely, you wouldn't come up with an incorrect premise like all apples are red in the first place.

You could come up with it as a hypothesis, if you've never seen a green or golden apple. This is all that's needed; induction starts with *if* P. However, the real world is a really lousy way to understand inductive logic: you can come up with hypotheses (base cases), but figuring out *what* the inductive step is is difficult at best --- never mind the impossibility of *proving* such. Here's what we're trying to assert:

IF... you have a red apple
AND YOU CAN PROVE... that another related apple is also red
THEN YOU CAN CONCLUDE... that all such related apples are red

From a mathematical perspective this is impossible; we haven't defined apple, much less related apple. In other words, we can't assert either a hypothesis or an inductive case! So much for the real world. This only provides a way to construct if-thens, btw; it's easy to construct such that are false. In mathematics you can sometimes resolve this by constructing a new set for which the hypothesis does hold: for example, if you start with a proposition `P(n) : n is a natural number' and use the inductive case `P(n-1) : n-1 is a natural number', you run into trouble with n=0. If you add the concept of negative numbers, you come up with a new proposition: `P(n) : n is an integer'. This is more or less how the mathematical notion of integer came about, as naturals came from whole numbers (add 0) and complex numbers came from reals (add sqrt(-1)).

-- brandon s.
allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED] system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED] electrical and computer engineering, carnegie mellon university KF8NH
[Haskell-cafe] ANNOUNCE: category-extras 0.44.2
Dan Doel asked me to roll category-extras into my nascent comonad transformer library, and the result is category-extras 0.44.2! So since Dan's release a couple of weeks ago ( http://www.haskell.org/pipermail/haskell-cafe/2008-April/042240.html ) we have added:

* Comonad Transformers. Context/ContextT, Reader/ReaderT.
* A suite of Bifunctors and combinators in Control.Bifunctor.* -- we attach most of the logic usually associated with a monoidal/comonoidal category to the individual Bifunctor, since Hask is a rich category to begin with.
* A suite of Functors and combinators in Control.Functor.*
* Pointed and copointed functors.
* Control.Recursion has been broken out into Control.Morphism.* and recoded to use a simpler distributive law.
* Type-indexed versions of Applicatives, Monads, and Comonads (including Diatchki's Indexed State and Wadler's Delimited Continuation Parameterized Monad).
* Parameterized Monads a la Ghani and Johann's ICFP '07 paper, and their Applicative and comonadic duals.
* Higher-order hylo-, cata- and ana-morphisms.
* Higher-order Monads a la Ghani and Johann and their comonadic equivalents.
* Kan extensions.
* BiKleisli arrows as seen in Uustalu and Vene's Signals and Comonads and SIGFPE's recent posts.
* The Pointer comonad.
* Grabbed Iavor Diatchki's value-supply and rolled it in as Control.Comonad.Supply to make it clearer that it is a comonad, and to pave the way towards a Supply comonad transformer.
* A richer set of compositions to allow for construction of comonads and monads not only from adjunctions, but also from pre-composition or post-composition of a monad with a pointed functor, and similarly pre-composition and post-composition of a comonad with a copointed functor.
* Generic functor zapping, zipping, unzipping, and cozipping as mentioned recently on http://comonad.com/reader

There is still a lot to do in terms of adding back a lot of the documentation from the original, documenting the extensions and fleshing out all of the definable instances as the concepts have grown exceptionally fine-grained. I definitely welcome feedback and additions. In particular if you were using a feature that was supported by the old library or is unnatural to program in the current one, let me know. My goal is to gather a lot of this esoterica into one place and integrate it into something cohesive.

HEAD: haddock http://comonad.com/haskell/category-extras/dist/doc/html/category-extras/ darcs http://comonad.com/haskell/category-extras/
Release: package http://hackage.haskell.org/cgi-bin/hackage-scripts/package/category-extras-0.44.2

-Edward Kmett
Re: [Haskell-cafe] Re: Induction (help!)
On Tue, May 6, 2008 at 4:53 AM, Achim Schneider [EMAIL PROTECTED] wrote: PR Stanley [EMAIL PROTECTED] wrote: Hi I don't know what it is that I'm not getting where mathematical induction is concerned. This is relevant to Haskell so I wonder if any of you gents could explain in unambiguous terms the concept please. The wikipedia article offers perhaps the least obfuscated definition I've found so far but I would still like more clarity. The idea is to move onto inductive proof in Haskell. First, however, I need to understand the general mathematical concept. Top marks for clarity and explanation of technical terms. Thanks Paul Induction - from the small picture, extrapolate the big Deduction - from the big picture, extrapolate the small Induction has two meanings in mathematics, and I don't believe this is the type of induction the OP was asking about. See Daniel Fischer's response for the type you are asking about, and try not to be confused by the irrelevant discussion about inductive logic. Luke
Re: [Haskell-cafe] Induction (help!)
On Tue, May 6, 2008 at 5:34 AM, PR Stanley [EMAIL PROTECTED] wrote: Hi I don't know what it is that I'm not getting where mathematical induction is concerned. This is relevant to Haskell so I wonder if any of you gents could explain in unambiguous terms the concept please. The wikipedia article offers perhaps the least obfuscated definition I've found so far but I would still like more clarity. The idea is to move onto inductive proof in Haskell. First, however, I need to understand the general mathematical concept. Top marks for clarity and explanation of technical terms. Thanks Paul For a more intuitive view, mathematical induction (at least, induction over the integers) is like knocking over a chain of dominoes. To knock over the whole (possibly infinite!) chain, you need two things: 1. You need to make sure that each domino is close enough to knock over the next. 2. You need to actually knock over the first domino. Relating this to proving a proposition P for all nonnegative integers, step 1 is like proving that P(k) implies P(k+1) -- IF the kth domino gets knocked over (i.e. if P(k) is true), then it will also knock over the next one (P(k) implies P(k+1)). Step 2 is like proving the base case -- P(0) is true. Hopefully this helps you get a more intuitive view of what induction is all about; you can use some of the other responses to fill in more details. -Brent
Re: [Haskell-cafe] Induction (help!)
After you grok induction over the naturals, you can start to think about structural induction, which is usually what we use in programming. They are related, and understanding one will help you understand the other (structural induction actually made more sense to me when I was learning, because I started as a programmer and then became a mathematician, so I thought in terms of data structures). So let's say you have a tree, and we want to count the number of leaves in the tree.

data Tree = Leaf Int | Branch Tree Tree

countLeaves :: Tree -> Int
countLeaves (Leaf _) = 1
countLeaves (Branch l r) = countLeaves l + countLeaves r

We want to prove that countLeaves is correct. So P(x) means countLeaves x == the number of leaves in x. First we prove P(Leaf i), since leaves are the trees that have no subtrees. This is analogous to proving P(0) over the naturals. countLeaves (Leaf i) = 1, by definition of countLeaves. Leaf i has exactly one leaf, obviously. So countLeaves (Leaf i) is correct. Now to prove P(Branch l r), we get to assume that P holds for all of its subtrees, namely we get to assume P(l) and P(r). You can think of this as constructing an algorithm to prove P for any tree, and we have already proved it for l and r in our algorithm. This is analogous to proving that P(n) implies P(n+1) in the case of naturals. So:

Assume P(l) and P(r).
P(l) means countLeaves l == the number of leaves in l.
P(r) means countLeaves r == the number of leaves in r.
countLeaves (Branch l r) = countLeaves l + countLeaves r, by definition. And that is the number of leaves in Branch l r, the sum of the number of leaves in its two subtrees. Therefore P(Branch l r).

Now that we have those two cases, we are done; P holds for any Tree whatsoever. In general you will have to do one such proof for each case of your data structure in order to prove a property for the whole thing. At each case you get to assume the property holds for all substructures. Generally not so many steps are written down.
In fact in this example, most people who do this kind of thing a lot would write straightforward induction, and that would be the whole proof :-) The analogy between structural induction and induction over the naturals is very strong; in fact induction over the naturals is just induction over this data structure:

data Nat = Zero | Succ Nat

Hope this helps. Luke
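Luke's remark that induction over the naturals is just structural induction on Nat can be made concrete: the induction principle is literally a fold over the data structure, and every proof by induction mirrors a recursive definition. A small sketch of my own:

```haskell
data Nat = Zero | Succ Nat

-- The induction/recursion principle for Nat: given the base case z
-- (think "a proof of P(0)") and the step s (think "P(n) implies
-- P(n+1)"), produce the result for any Nat.
foldNat :: c -> (c -> c) -> Nat -> c
foldNat z _ Zero     = z
foldNat z s (Succ n) = s (foldNat z s n)

-- Converting a Nat to an Int is one instance of the principle:
-- base case 0, step (+ 1).
toInt :: Nat -> Int
toInt = foldNat 0 (+ 1)
```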
[Haskell-cafe] license question
Hello everybody, I have one question regarding a licensing issue. I am developing a library to parse and pretty-print the Core Erlang language, and it is heavily based on the modules included in haskell-src. What I want to know is whether I have to reproduce all of the LICENSE file included in that package with my name included in the copyright notice, or what else I have to do. Thanks in advance. -- Henrique Ferreiro García [EMAIL PROTECTED]
[Haskell-cafe] Figuring out if an algebraic type is enumerated through Data.Generics?
Hi all,

probably_unnecessary_background
I'm writing a hardware-oriented DSL deep-embedded in Haskell (similar to Lava, for those familiar with it). One of the goals of the language is to support polymorphic signals (i.e. we would like to allow signals to carry values of any type, including user-defined types). The embedded compiler has different backends: simulation, VHDL, graphical representation in GraphML ... The simulation backend manages to support polymorphic signals by forcing them to be Typeable. In that way, circuit functions, no matter what signals they process, can be transformed to Dynamic and included in the AST for later simulation. The situation is more complicated for the VHDL backend. The Typeable trick works just fine for translating a limited set of predefined types (e.g. tuples) but not for user-defined types, since the embedded compiler doesn't have access to the user-defined type declarations. One possible solution to access the type definition of signal values would be constraining them to instances of Data. Then, we could use dataTypeOf to get access to the type representation. (Another option would be using Template Haskell's reify, but I would like to avoid that for now.) It would certainly be difficult to map any Haskell type to VHDL, so, for now we would be content to map enumerated algebraic types (i.e. algebraic types whose data constructors all have arity zero, e.g. data Colors = Green | Blue | Red).
/probably_unnecessary_background

So, the question is: is there a way to figure out the arity of data constructors using Data.Generics? I'm totally new to generics, but (tell me if I'm wrong) it seems that Constr doesn't hold any information about the data-constructor arguments. Why is it so? Do you think there is a workaround for this problem? (Maybe using some other function from Data.Generics different to dataTypeOf?)
Thanks in advance, Alfonso Acosta
Re: [Haskell-cafe] Figuring out if an algebraic type is enumerated through Data.Generics?
On Tue, May 6, 2008 at 12:34 PM, Alfonso Acosta [EMAIL PROTECTED] wrote:
| So, the question is. Is there a way to figure out the arity of data
| constructors using Data.Generics?
| I'm totally new to generics, but (tell me if I'm wrong) it seems that
| Constr doesn't hold any information about the data-constructor
| arguments. Why is it so?

Hmrmm. Playing around with it, I was able to abuse gunfold and the reader comonad to answer the problem:

fst $ (gunfold (\(i, _) -> (i + 1, undefined)) (\r -> (0, r)) (toConstr "Hello") :: (Int, String))

returns 2, the arity of (:), the outermost constructor in "Hello". A longer version which does not depend on undefined would be to define a functor that discards its contents, like:

module Args where

import Data.Generics

newtype Args a = Args { runArgs :: Int } deriving (Read, Show)

tick :: Args (b -> r) -> Args r
tick (Args i) = Args (i + 1)

tock = const (Args 0)

argsInCons = runArgs (gunfold tick tock (toConstr "Hello") :: Args String)

Basically all I do is rely on the fact that gunfold takes the 'tick' argument and calls it repeatedly for each argument after a 'tock' base case. The use of the reader comonad or functor is to give gunfold a 'functor-like' argument to meet its type signature. -Edward Kmett
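Putting Edward's trick to work on the original question: below is my own assembly (not code from the thread) that walks every constructor of a type and checks that all arities are zero. It assumes the type is algebraic (dataTypeConstrs raises an error otherwise), and imports Data.Data, which Data.Generics re-exports:

```haskell
{-# LANGUAGE ScopedTypeVariables, DeriveDataTypeable #-}
import Data.Data

newtype Args a = Args { runArgs :: Int }

-- Count constructor arguments by abusing gunfold: the step function
-- is applied once per field, the base case starts the counter at zero.
constrArity :: forall a. Data a => a -> Constr -> Int
constrArity _ c = runArgs (gunfold tick tock c :: Args a)
  where
    tick :: Args (b -> r) -> Args r
    tick (Args i) = Args (i + 1)
    tock :: r -> Args r
    tock _ = Args 0

-- An algebraic type is an enumeration iff every constructor is nullary.
isEnumeration :: forall a. Data a => a -> Bool
isEnumeration x =
  all (\c -> constrArity x c == 0) (dataTypeConstrs (dataTypeOf x))

data Colour = Green | Blue | Red deriving (Typeable, Data)
```

With this, `isEnumeration Green` is True, while `isEnumeration "abc"` is False (the (:) constructor has arity 2).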
Re: [Haskell-cafe] Still no joy with parsec
On Tuesday, 6 May 2008 18:52, Ross Boylan wrote: Source: import Text.ParserCombinators.Parsec import qualified Text.ParserCombinators.Parsec.Token as P import Text.ParserCombinators.Parsec.Language(haskell) reserved = P.reserved haskell braces = P.braces haskell -- TeX example envBegin :: Parser String envBegin = do{ reserved "\\begin" 1 ; braces (many1 letter) } Right there appears the '1' that irritated you. Remove it from the source file, and ghc is happy.
Re: [Haskell-cafe] help in tree folding
Luke Palmer wrote: On Tue, May 6, 2008 at 6:20 AM, patrik osgnach [EMAIL PROTECTED] wrote: Hi. I'm learning Haskell but I'm stuck on a generic tree folding exercise. I must write a function of this type: treefoldr :: (Eq a, Show a) => (a -> b -> c) -> c -> (c -> b -> b) -> b -> Tree a -> c Tree is defined as data (Eq a, Show a) => Tree a = Void | Node a [Tree a] deriving (Eq, Show) As an example, treefoldr (:) [] (++) [] (Node '+' [Node '*' [Node 'x' [], Node 'y' []], Node 'z' []]) must return "+*xyz". Any help? (sorry for my bad english) Functions like this are very abstract, but are also quite nice in that there's basically only one way to write them given the types. What have you tried so far? This function needs to be recursive, so what arguments should it give to its recursive calls, and where should it plug the results? It also helps, as you're writing, to keep meticulous track of the types of everything you have and the type you need, and that will tell you what you need to write next. Luke So far I have tried this: treefoldr f x g y Void = x treefoldr f x g y (Node a []) = f a y treefoldr f x g y (Node a (t:ts)) = treefoldr f x g (g (treefoldr f x g y t) y) (Node a ts) but it is clearly incorrect. This function takes as arguments two functions and two zeros (one for the empty tree and one for the empty tree list). Thanks for the answer Patrik
Re: [Haskell-cafe] help in tree folding
Brent Yorgey wrote: On Tue, May 6, 2008 at 8:20 AM, patrik osgnach [EMAIL PROTECTED] wrote: Hi. I'm learning Haskell but I'm stuck on a generic tree folding exercise. I must write a function of this type: treefoldr :: (Eq a, Show a) => (a -> b -> c) -> c -> (c -> b -> b) -> b -> Tree a -> c Tree is defined as data (Eq a, Show a) => Tree a = Void | Node a [Tree a] deriving (Eq, Show) As an example, treefoldr (:) [] (++) [] (Node '+' [Node '*' [Node 'x' [], Node 'y' []], Node 'z' []]) must return "+*xyz". Any help? (sorry for my bad english) Having a (Tree a) parameter, where Tree is defined as an algebraic data type, also immediately suggests that you should do some pattern-matching to break treefoldr down into cases: treefoldr f y g z Void = ? treefoldr f y g z (Node x t) = ? -Brent So far I have tried treefoldr f x g y Void = x treefoldr f x g y (Node a []) = f a y treefoldr f x g y (Node a (t:ts)) = treefoldr f x g (g (treefoldr f x g y t) y) (Node a ts) but it is incorrect. I can't figure out how to build the recursive call. Thanks for the answer Patrik
Re: [Haskell-cafe] Still no joy with parsec
On Tuesday, 6 May 2008 18:52, Ross Boylan wrote: g.hs:11:19: Couldn't match expected type `t1 -> GenParser Char () t' against inferred type `CharParser st ()' In the expression: reserved "\\begin" 1 In a 'do' expression: reserved "\\begin" 1 In the expression: do reserved "\\begin" 1 braces (many1 letter) Failed, modules loaded: none. More generally, how can I go about diagnosing such problems? Since I can't load it, I can't debug it or get :info on the types. It looks as if maybe it's expecting a Monad, but getting a parser. But (GenParser tok st) is a monad. GenParser tok st a is the type of parsers which parse lists of tok, using a state of type st and returning a value of type a. There are two type synonyms I remember, type CharParser st a = GenParser Char st a and type Parser a = GenParser Char () a Now let's look at what ghc does with the code envBegin :: Parser String envBegin = do reserved "\\begin" 1 braces (many1 letter) which you had (btw, had you used layout instead of explicit braces, the 1 in the first column of the line would have led to a parse error and been more obvious). From the last expression, braces (many1 letter), which has type CharParser st [Char], the type checker infers that the whole do-expression has the same type. So the first expression in the do-block must have type CharParser st a, or, not using the type synonym, GenParser Char st a. The type signature says that the user state st is actually (), which is okay, because it's a more specific type. Now that first expression is parsed as (reserved "\\begin") 1, so the subexpression (reserved "\\begin") is applied to an argument and has to return a value of type GenParser Char st a, hence the type checker expects the type t1 -> GenParser Char st t for (reserved "\\begin"). That is the expected type from the error message, with st specialised to () due to the type signature. Next, the type of the expression (reserved "\\begin") is inferred.
'reserved' is defined as P.reserved haskell. P.reserved has type P.TokenParser st -> String -> CharParser st (), haskell has type P.TokenParser st, so reserved has type String -> CharParser st () and hence (reserved "\\begin") has type CharParser st (), and that is the inferred type of the error message. Since one of the two is a function type and the other is not, these types do not match. The error message Couldn't match expected type `thing' against inferred type `umajig' In the expression: foo bar oops tells you that from the use of (foo bar) in that expression, the type checker expects it to have type `thing', but the type inference of the expression (foo bar), without surrounding context, yields type `umajig', which can't be matched (or unified) with `thing'. HTH, Daniel I don't know why that would have changed vs using 6.6. More questions about the error messages. Where are the expected type and the inferred type coming from? I'm guessing the expected type is from the function signature and the position inside a do (or perhaps from the argument following the ; in the do?) and the inferred type is what I would just call the type of reserved "\\begin". And what is the 1 that appears after 'reserved "\\begin"'? An indicator that all occurrences of the text refer to the same spot in the program? Nesting level? Thanks. Ross P.S. There have been some issues with the Debian packaging of ghc6.8, so it's possible I'm bumping into them. I thought/hoped the problems were limited to non-i386 architectures. Also, I'm pretty sure that the parsec code used by ghc6.6, ghc6.8, and hugs is all in different files. So conceivably the parsec source differs. I have ghc6 6.8.2-5 and libghc6-parsec-dev 2.1.0.0-2.
Source: import Text.ParserCombinators.Parsec import qualified Text.ParserCombinators.Parsec.Token as P import Text.ParserCombinators.Parsec.Language(haskell) reserved = P.reserved haskell braces = P.braces haskell -- TeX example envBegin :: Parser String envBegin = do{ reserved "\\begin" 1 ; braces (many1 letter) }
Re: [Haskell-cafe] help in tree folding
On Tuesday, 6 May 2008 22:40, patrik osgnach wrote: Brent Yorgey wrote: On Tue, May 6, 2008 at 8:20 AM, patrik osgnach [EMAIL PROTECTED] wrote: Hi. I'm learning Haskell but I'm stuck on a generic tree folding exercise. I must write a function of this type: treefoldr :: (Eq a, Show a) => (a -> b -> c) -> c -> (c -> b -> b) -> b -> Tree a -> c Tree is defined as data (Eq a, Show a) => Tree a = Void | Node a [Tree a] deriving (Eq, Show) As an example, treefoldr (:) [] (++) [] (Node '+' [Node '*' [Node 'x' [], Node 'y' []], Node 'z' []]) must return "+*xyz". Any help? (sorry for my bad english) Having a (Tree a) parameter, where Tree is defined as an algebraic data type, also immediately suggests that you should do some pattern-matching to break treefoldr down into cases: treefoldr f y g z Void = ? treefoldr f y g z (Node x t) = ? -Brent So far I have tried treefoldr f x g y Void = x Yes, nothing else could be done. treefoldr f x g y (Node a []) = f a y Not bad. But actually there's no need to treat nodes with and without children differently. Let's see: treefoldr f x g y (Node v ts) should have type c, and it should use v. We have f :: a -> b -> c, x :: c, g :: c -> b -> b, y :: b, v :: a. The only thing which produces a value of type c using a value of type a is f, so we must have treefoldr f x g y (Node v ts) = f v someExpressionUsing'ts' where someExpressionUsing'ts' :: b. The only thing we have which produces a value of type b is g, so someExpressionUsing'ts' must ultimately be g something somethingElse. Now take a look at the code and type of foldr, that might give you the idea. Cheers, Daniel treefoldr f x g y (Node a (t:ts)) = treefoldr f x g (g (treefoldr f x g y t) y) (Node a ts) but it is incorrect. I can't figure out how to build the recursive call. Thanks for the answer Patrik
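Following Daniel's hint, one possible complete definition looks like this (a sketch of a solution, not necessarily the intended one; the Eq/Show contexts are dropped since the fold itself doesn't need them):

```haskell
-- A sketch of the exercise solution along the lines of Daniel's hint.
data Tree a = Void | Node a [Tree a] deriving (Eq, Show)

treefoldr :: (a -> b -> c) -> c -> (c -> b -> b) -> b -> Tree a -> c
treefoldr f x g y Void         = x
treefoldr f x g y (Node v ts)  =
  -- fold the list of subtrees with g, starting from the list zero y,
  -- then combine the node label with f
  f v (foldr (\t acc -> g (treefoldr f x g y t) acc) y ts)

-- The example from the thread:
example :: Tree Char
example = Node '+' [Node '*' [Node 'x' [], Node 'y' []], Node 'z' []]

main :: IO ()
main = putStrLn (treefoldr (:) [] (++) [] example)  -- prints +*xyz
```

The inner foldr is exactly the shape Daniel points at: g plays the role of (:) for the list of subtree results, and y plays the role of [].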
Re: [Haskell-cafe] A Cabal problem
On Tue, 2008-05-06 at 09:43 -0400, Mario Blazevic wrote: Trevor Elliott wrote: Cabal doesn't pass the --main-is option, I believe because it is specific to GHC. What you could do is add this flag in the ghc-options field of your executable in the cabal file, like this: ghc-options: --main-is Shell That worked like a charm, except the two dashes should be one (my mistake). Thank you. Note that Hackage will reject packages that use ghc-options: -main-is with the message that it is not portable. The rationale is that, unlike other non-portable extensions, it is easy to change to make it portable: http://hackage.haskell.org/trac/hackage/ticket/179 If you want to argue for supporting this GHC extension and/or implement support (possibly with workaround support for the other Haskell implementations) then please do comment on the above ticket. Duncan
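For concreteness, the field in question would sit in the executable stanza roughly like this (a sketch; the executable and file names are made up, only the ghc-options line comes from the thread):

```cabal
executable shell
  main-is:     Shell.hs
  -- GHC-specific flag (note the single dash); Hackage rejects
  -- packages that use it, per ticket #179
  ghc-options: -main-is Shell
```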
Re: [Haskell-cafe] Induction (help!)
After you grok induction over the naturals, you can start to think about structural induction, which is usually what we use in programming. They are related, and understanding one will help you understand the other (structural induction actually made more sense to me when I was learning, because I started as a programmer and then became a mathematician, so I thought in terms of data structures). Paul: I was hoping that understanding the classic mathematical concept would help me appreciate the structural (computer science) variation better. I don't know what it is about induction that I'm not seeing. It's so frustrating! Deduction, in spite of the complexity in some parts, makes perfect sense. This, however, is a different beast! So let's say you have a tree, and we want to count the number of leaves in the tree. data Tree = Leaf Int | Branch Tree Tree countLeaves :: Tree -> Int countLeaves (Leaf _) = 1 countLeaves (Branch l r) = countLeaves l + countLeaves r We want to prove that countLeaves is correct. So P(x) means countLeaves x == the number of leaves in x. Paul: By 'correct' presumably you mean sound. First we prove P(Leaf i), since leaves are the trees that have no subtrees. This is analogous to proving P(0) over the naturals. Paul: I'd presume 'proof' here means applying the function to one leaf to see if it returns 1. If I'm not mistaken this is establishing the base case. countLeaves (Leaf i) = 1, by definition of countLeaves. Leaf i has exactly one leaf, obviously. So countLeaves (Leaf i) is correct. Now to prove P(Branch l r), we get to assume that P holds for all of its subtrees, namely we get to assume P(l) and P(r). Paul: How did you arrive at this decision? Why can we assume that P holds for all its subtrees? You can think of this as constructing an algorithm to prove P for any tree, and we have already proved it for l and r in our algorithm. Paul: Is this the function definition for countLeaves?
This is analogous to proving P(n) => P(n+1) in the case of naturals. Paul: I thought P(n) was the induction hypothesis and P(n+1) was the proof that the formula/property holds for the subsequent element in the sequence if P(n) is true. I don't see how countLeaves l and countLeaves r are analogous to P(n) and P(n+1). So: Assume P(l) and P(r). P(l) means countLeaves l == the number of leaves in l P(r) means countLeaves r == the number of leaves in r countLeaves (Branch l r) = countLeaves l + countLeaves r, by definition. And that is the number of leaves in Branch l r, the sum of the number of leaves in its two subtrees. Therefore P(Branch l r). Now that we have those two cases, we are done; P holds for any Tree whatsoever. In general you will have to do one such proof for each case of your data structure in order to prove a property for the whole thing. At each case you get to assume the property holds for all substructures. Generally not so many steps are written down. In fact in this example, most people who do this kind of thing a lot would write straightforward induction, and that would be the whole proof :-) The analogy between structural induction and induction over the naturals is very strong; in fact induction over the naturals is just induction over this data structure: data Nat = Zero | Succ Nat Hope this helps. Luke
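The definitions above can also be spot-checked by running countLeaves on a concrete tree (a small sketch; the sample tree is made up):

```haskell
data Tree = Leaf Int | Branch Tree Tree

countLeaves :: Tree -> Int
countLeaves (Leaf _)     = 1
countLeaves (Branch l r) = countLeaves l + countLeaves r

-- A sample tree with three leaves.
sample :: Tree
sample = Branch (Branch (Leaf 1) (Leaf 2)) (Leaf 3)

main :: IO ()
main = print (countLeaves sample)  -- prints 3
```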
[Haskell-cafe] TimeSpec
Hello, I checked Hoogle for the POSIX data type TimeSpec. Nada. Does anybody know where the definition is? Thanks, Vasili
Re: [Haskell-cafe] Induction (help!)
On Tuesday, 6 May 2008 23:34, PR Stanley wrote: After you grok induction over the naturals, you can start to think about structural induction, which is usually what we use in programming. They are related, and understanding one will help you understand the other (structural induction actually made more sense to me when I was learning, because I started as a programmer and then became a mathematician, so I thought in terms of data structures). Paul: I was hoping that understanding the classic mathematical concept would help me appreciate the structural (computer science) variation better. Understanding either will help understanding the other, they're the same idea in different guise. I don't know what it is about induction that I'm not seeing. It's so frustrating! Deduction in spite of the complexity in some parts makes perfect sense. This, however, is a different beast! So let's say you have a tree, and we want to count the number of leaves in the tree. data Tree = Leaf Int | Branch Tree Tree countLeaves :: Tree -> Int countLeaves (Leaf _) = 1 countLeaves (Branch l r) = countLeaves l + countLeaves r We want to prove that countLeaves is correct. So P(x) means countLeaves x == the number of leaves in x. Paul: By 'correct' presumably you mean sound. First we prove P(Leaf i), since leaves are the trees that have no subtrees. This is analogous to proving P(0) over the naturals. Paul: I'd presume 'proof' here means applying the function to one leaf to see if it returns 1. If I'm not mistaken this is establishing the base case. Yes, right. countLeaves (Leaf i) = 1, by definition of countLeaves. Leaf i has exactly one leaf, obviously. So countLeaves (Leaf i) is correct. Now to prove P(Branch l r), we get to assume that P holds for all of its subtrees, namely we get to assume P(l) and P(r). Paul: How did you arrive at this decision? Why can we assume that P holds for all its subtrees? It's the induction hypothesis.
We could paraphrase the induction step as If P holds for all cases of lesser complexity than x, then P also holds for x. In this example, the subtrees have lesser complexity than the entire tree (you could define complexity as depth). The induction step says assuming P(l) and P(r), we can deduce P(Branch l r), or, as a formula, forall l, r. ([P(l) and P(r)] ==> P(Branch l r)). You can think of this as constructing an algorithm to prove P for any tree, and we have already proved it for l and r in our algorithm. Paul: Is this the function definition for countLeaves? This is analogous to proving P(n) => P(n+1) in the case of naturals. Paul: I thought P(n) was the induction hypothesis and P(n+1) was the proof that the formula/property holds for the subsequent element in the sequence if P(n) is true. I don't see how countLeaves l and countLeaves r are analogous to P(n) and P(n+1). (countLeaves l == number of leaves in l) and (countLeaves r == number of leaves in r) together are analogous to P(n). Proving forall l, r. ([P(l) and P(r)] ==> P(Branch l r)) is analogous to proving forall n. [P(n) ==> P(n+1)]. Let me phrase mathematical induction differently. To prove forall natural numbers n. P(n), it is sufficient to prove 1) P(0) and 2) forall n. [P(n) implies P(n+1)]. And structural induction for lists: To prove forall (finite) lists l. P(l), you prove 1L) P([ ]) -- the base case is the empty list 2L) forall x, l. [P(l) ==> P(x:l)] and for trees: to prove forall trees t. P(t), you prove 1T) forall i. P(Leaf i) 2T) forall l, r. ([P(l) and P(r)] ==> P(Branch l r)). In all three cases, the pattern is the same. The key thing is number 2, which says if P holds for one level of complexity, then it also holds for the next level of complexity. In the domino analogy, 2) says that the dominos are placed close enough that a falling domino will tip its neighbour over. Then 1) is tipping the first domino over. So: Assume P(l) and P(r).
P(l) means countLeaves l == the number of leaves in l P(r) means countLeaves r == the number of leaves in r countLeaves (Branch l r) = countLeaves l + countLeaves r, by definition. And that is the number of leaves in Branch l r, the sum of the number of leaves in its two subtrees. Therefore P(Branch l r). Now that we have those two cases, we are done; P holds for any Tree whatsoever. In general you will have to do one such proof for each case of your data structure in order to prove a property for the whole thing. At each case you get to assume the property holds for all substructures. Generally not so many steps are written down. In fact in this example, most people who do this kind of thing a lot would write straightforward induction, and that would be the whole proof :-) The analogy between structural induction and induction over the naturals is very strong;
[Haskell-cafe] Haddock and upload questions?
Hello, 1) Can I set up Haddock to run every time I do a build? 2) http://hackage.haskell.org/packages/upload.html - do I have to set up my .cabal in a special way to run dist? - once I checked that my Cabal package is compliant on this page, how do I get a username and password? Kind regards, Vasili
Re: [Haskell-cafe] Induction (help!)
Another way to look at it is that induction is just a way to abbreviate proofs. Let's say you have a proposition over the natural numbers that may or may not be true; P(x). If you can prove P(0), and given P(x) you can prove P(x+1), then for any given natural number n, you can prove P(n): insert proof of P(0) here insert proof of P(0) => P(1) here P(1). -- from P(0) and P(0) => P(1) proof of P(1) => P(2) P(2). -- from P(1) and P(1) => P(2) ... P(n-1). -- from P(n-2) and P(n-2) => P(n-1). proof of P(n-1) => P(n) P(n). -- from P(n-1) and P(n-1) => P(n) The magic thing about induction is that it gives you a unifying principle that allows you to skip these steps and prove an -infinite- number of hypotheses; even though the natural numbers is an infinite set, each number is still finite and therefore is subject to the same logic above. One point to remember is that structural induction fails to hold on infinite data structures: data Nat = Zero | Succ Nat deriving (Eq, Show) Take P(x) to be (x /= Succ x). P(0): Zero /= Succ Zero. (trivial) P(x) => P(Succ x): So, given P(x), which is (x /= Succ x), we have to prove P(Succ x): (Succ x /= Succ (Succ x)) In the derived Eq typeclass: Succ a /= Succ b = a /= b Taking x for a and Succ x for b: Succ x /= Succ (Succ x) = x /= Succ x which is P(x). Now, consider the following definition: infinity :: Nat infinity = Succ infinity infinity /= Succ infinity == _|_; and if you go by definitional equality, infinity = Succ infinity, so even though P(x) holds on all natural numbers due to structural induction, it doesn't hold on this infinite number. -- ryan
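On finite values of Nat the property does hold and can be checked directly (a small sketch; `three` is just an arbitrary finite sample):

```haskell
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- An arbitrary finite natural number.
three :: Nat
three = Succ (Succ (Succ Zero))

main :: IO ()
main = print (three /= Succ three)  -- prints True: P holds here
```

With infinity = Succ infinity the same comparison never terminates, which is exactly Ryan's point about _|_.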
Re: [Haskell-cafe] Induction (help!)
Ryan Ingram wrote: One point to remember is that structural induction fails to hold on infinite data structures: As I understand it, structural induction works even for infinite data structures if you remember that the base case is always _|_. [1] Let the initial algebra functor F = const Zero | Succ. We have to prove that: P(_|_) holds; if P(X) holds then P(F(X)) holds. Here, we see that for P(x) = (x /= Succ x), since infinity = Succ infinity = _|_, P(_|_) does not hold (since _|_ = Succ _|_ = _|_). As a counterexample, we can prove e.g. that length (L ++ M) = length L + length M See [2] for details, except that they neglect the base case P(_|_) and start instead with the F^1 case of P([]), so their proof is valid only for finite lists. We can include the base case too: length (_|_ ++ M) = length _|_ + length M length _|_ = _|_ + length M _|_ = _|_ and similarly for M = _|_ and both _|_. So this law holds even for infinite lists. [1] Richard Bird, Introduction to Functional Programming using Haskell, p. 67 [2] http://en.wikipedia.org/wiki/Structural_induction Also note that data Nat = Zero | Succ Nat deriving (Eq, Show) Take P(x) to be (x /= Succ x). P(0): Zero /= Succ Zero. (trivial) P(x) => P(Succ x): So, given P(x), which is (x /= Succ x), we have to prove P(Succ x): (Succ x /= Succ (Succ x)) In the derived Eq typeclass: Succ a /= Succ b = a /= b Taking x for a and Succ x for b: Succ x /= Succ (Succ x) = x /= Succ x which is P(x). Now, consider the following definition: infinity :: Nat infinity = Succ infinity infinity /= Succ infinity == _|_; and if you go by definitional equality, infinity = Succ infinity, so even though P(x) holds on all natural numbers due to structural induction, it doesn't hold on this infinite number. -- ryan
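For finite lists the length law is easy to spot-check (the sample lists are made up):

```haskell
-- Spot-check of length (l ++ m) == length l + length m on finite lists.
l, m :: [Int]
l = [1, 2, 3]
m = [4, 5]

main :: IO ()
main = print (length (l ++ m) == length l + length m)  -- prints True
```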
Re: [Haskell-cafe] Figuring out if an algebraic type is enumerated through Data.Generics?
Thanks a lot for your answer, it was exactly what I was looking for. Just for the record, based on your solution I can now easily code a function to check if a Data value belongs to an enumerated algebraic type (as I defined it in my first mail). {-# LANGUAGE DeriveDataTypeable, ScopedTypeVariables #-} import Data.Generics newtype Arity a = Arity Int deriving (Show, Eq) consArity :: Data a => Constr -> Arity a consArity = gunfold (\(Arity n) -> Arity (n+1)) (\_ -> Arity 0) belongs2EnumAlg :: forall a . Data a => a -> Bool belongs2EnumAlg a = case (dataTypeRep . dataTypeOf) a of AlgRep cons -> all (\c -> consArity c == ((Arity 0) :: Arity a)) cons _ -> False -- tests data Colors = Blue | Green | Red deriving (Data, Typeable) test1 = belongs2EnumAlg 'a' -- False test2 = belongs2EnumAlg Red -- True test3 = belongs2EnumAlg "a" -- False On Tue, May 6, 2008 at 7:42 PM, Edward Kmett [EMAIL PROTECTED] wrote: On Tue, May 6, 2008 at 12:34 PM, Alfonso Acosta [EMAIL PROTECTED] wrote: | So, the question is. Is there a way to figure out the arity of data | constructors using Data.Generics? | I'm totally new to generics, but (tell me if I'm wrong) it seems that | Constr doesn't hold any information about the data-constructor | arguments. Why is it so?
Hmrmm. Playing around with it, I was able to abuse gunfold and the reader comonad to answer the problem: fst $ (gunfold (\(i,_) -> (i+1, undefined)) (\r -> (0,r)) (toConstr "Hello") :: (Int, String)) returns 2, the arity of (:), the outermost constructor in "Hello". A longer version which does not depend on undefined would be to define a functor that discards its contents, like: module Args where import Data.Generics newtype Args a = Args { runArgs :: Int } deriving (Read, Show) tick :: Args (b -> r) -> Args r tick (Args i) = Args (i + 1) tock = const (Args 0) argsInCons = runArgs $ (gunfold tick tock (toConstr "Hello") :: Args String) Basically all I do is rely on the fact that gunfold takes the 'tick' argument and calls it repeatedly for each argument after a 'tock' base case. The use of the reader comonad or functor is to give gunfold a 'functor-like' argument to meet its type signature. -Edward Kmett
Re: [Haskell-cafe] license question
Hello Henrique, That license, The Glasgow Haskell Compiler License, available at http://darcs.haskell.org/ghc-hashedrepo/libraries/haskell-src/LICENSE, reads as follows: - license text follows immediately after this line - The Glasgow Haskell Compiler License Copyright 2004, The University Court of the University of Glasgow. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither name of the University nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE UNIVERSITY COURT OF THE UNIVERSITY OF GLASGOW AND THE CONTRIBUTORS AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE UNIVERSITY COURT OF THE UNIVERSITY OF GLASGOW OR THE CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- license text ends immediately before this line - According to the above conditions, the following hold: * In redistributions of source code, you are required to retain the above copyright notice, the above list of conditions, and the above disclaimer. * In redistributions in binary form, you are required to retain the above copyright notice, the above list of conditions, and the above disclaimer in the documentation and/or other materials provided with your distribution. However, according to the above conditions, in neither case are you necessarily required to include your name in the copyright notice. Benjamin L. Russell --- On Wed, 5/7/08, Henrique Ferreiro García [EMAIL PROTECTED] wrote: Hello everybody, I have one question regarding a licensing issue. I am developing a library to parse and pretty-print the Core Erlang language, and it is heavily based on the modules included in haskell-src. What I want to know is whether I have to reproduce all of the LICENSE file included in that package with my name included in the copyright notice, or what else I have to do. Thanks in advance. -- Henrique Ferreiro García [EMAIL PROTECTED]
Re: [Haskell-cafe] Figuring out if an algebraic type is enumerated through Data.Generics?
Alfonso Acosta wrote: It would certainly be difficult to map any Haskell type to VHDL, so for now we would be content to map enumerated algebraic types (i.e. algebraic types all of whose data constructors have arity zero, e.g. data Colors = Green | Blue | Red) Wouldn't it be much simpler to use the standard derivable classes Bounded and Enum, instead of an admittedly very clever trick using Data? Metaprogramming comes in many shapes and sizes, and even the humble deriving (Show, Enum, Bounded, Ord, Eq) gives you quite some leverage. Jules
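Jules' suggestion can be sketched as follows (an illustration, not code from the thread; the Colors type is the example used throughout the discussion):

```haskell
-- Enumerating every constructor of an enumerated type via the
-- standard derivable Bounded and Enum classes.
data Colors = Green | Blue | Red
  deriving (Show, Eq, Ord, Enum, Bounded)

allColors :: [Colors]
allColors = [minBound .. maxBound]

main :: IO ()
main = print allColors  -- prints [Green,Blue,Red]
```

This only works for types that actually are enumerations (deriving Enum requires all constructors to be nullary), whereas the Data-based check can also reject non-enumerations at runtime.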