Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?
Ertugrul Söylemez wrote:
> damodar kulkarni <kdamodar2...@gmail.com> wrote:
>> The Monad class makes us define bind (>>=) and unit (return) for our
>> monads. Why is the Kleisli composition (<=<) or (>=>) not made a part
>> of the Monad class instead of bind (>>=)? Is there any historical
>> reason behind this? The bind (>>=) is not as elegant as (>=>), at
>> least as I find it. Am I missing something?
>
> Try to express
>
>     do x <- getLine
>        y <- getLine
>        print (x, y)
>
> using only Kleisli composition (without cheating). Through cheating
> (doing non-categorical stuff) it's possible to implement (>>=) in terms
> of (>=>), but as said that's basically breaking the abstraction.

What do you mean by cheating / doing non-categorical stuff?

    m >>= f = (const m >=> f) ()
    f >=> g = \x -> f x >>= g

How does the first definition break the abstraction while the second does
not?

Cheers
--
Ben Franksen
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
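For the record, the two definitions under discussion type-check and behave as expected. Here is a small self-contained sketch (the function names `bindViaKleisli` and `kleisliViaBind` are mine) that exercises both directions:

```haskell
import Control.Monad ((>=>))

-- bind recovered from Kleisli composition, as in the post; the "cheat"
-- is the constant function that ignores its () argument
bindViaKleisli :: Monad m => m a -> (a -> m b) -> m b
bindViaKleisli m f = (const m >=> f) ()

-- Kleisli composition recovered from bind, no tricks needed
kleisliViaBind :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
kleisliViaBind f g = \x -> f x >>= g

main :: IO ()
main = do
  print (bindViaKleisli (Just 3) (\x -> Just (x + 1)))            -- Just 4
  print (kleisliViaBind (\x -> Just (x * 2)) (\y -> Just (y + 1)) 5)  -- Just 11
```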
Re: [Haskell-cafe] Efficient mutable arrays in STM
David Barbour wrote:
> Create an extra TVar Int for every `chunk` in the array (e.g. every 256
> elements, tuned to your update patterns). Read-write it (increment it,
> be sure to force evaluation) just before every time you write an element
> or slice of the array.

Incrementing and forcing evaluation should not be necessary, a TVar ()
should be enough. I would be very much surprised if the internal STM
machinery compared the actual _values_ of what is inside a TVar; I guess it
just notes that a read or a write happened. Anyway, this is just a guess; I
wonder if these details are documented somewhere...

Cheers
Ben
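To make the chunking scheme concrete, here is a hedged sketch (all names invented, not taken from the original poster's code) of an STM array with one marker TVar () per fixed-size chunk; a writer touches the marker of the chunk it writes into, so a transaction that reads a whole chunk via its marker conflicts only with writes to that chunk:

```haskell
import Control.Concurrent.STM
import Data.Array

chunkSize :: Int
chunkSize = 256

data ChunkedArray a = ChunkedArray
  { elemVars  :: Array Int (TVar a)   -- one TVar per element
  , chunkVars :: Array Int (TVar ())  -- one marker TVar per chunk
  }

newChunkedArray :: Int -> a -> IO (ChunkedArray a)
newChunkedArray n x = do
  es <- mapM (const (newTVarIO x))  [0 .. n - 1]
  cs <- mapM (const (newTVarIO ())) [0 .. (n - 1) `div` chunkSize]
  return (ChunkedArray (listArray (0, n - 1) es)
                       (listArray (0, (n - 1) `div` chunkSize) cs))

writeElem :: ChunkedArray a -> Int -> a -> STM ()
writeElem a i x = do
  -- writing () is enough: STM tracks that a write happened, not the value
  writeTVar (chunkVars a ! (i `div` chunkSize)) ()
  writeTVar (elemVars a ! i) x

readElem :: ChunkedArray a -> Int -> STM a
readElem a i = readTVar (elemVars a ! i)
```

This needs the stm and array packages, both shipped with GHC.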
[Haskell-cafe] Re: ANNOUNCE: darcs 2.5 beta 4
Benjamin Franksen wrote:
> I have trouble building this latest beta.

I found that my build was using ghc-6.8.3 which, IIRC, is no longer
supported for building darcs. Sorry for the noise.

Cheers
Ben
[Haskell-cafe] Re: ANNOUNCE: darcs 2.5 beta 3
This beta has a bug. This is what just happened to me:

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:27:51
darcs push
Pushing to /srv/csr/repositories/controls/darcs/epics/ioc/MLS-Controls/base-3-14...

Tue Aug 17 15:20:42 CEST 2010  benjamin.frank...@bessy.de
  * generate many link fields using expander.pl (EnergyRampApp)
  This makes it much easier to add new target devices.
Shall I push this patch? (1/3) [ynW...], or ? for more options: n

Tue Aug 17 15:22:29 CEST 2010  benjamin.frank...@bessy.de
  * added kickers to energy ramp (EnergyRampApp)
Shall I push this patch? (2/3) [ynW...], or ? for more options: n

Thu Aug 19 13:27:07 CEST 2010  benjamin.frank...@bessy.de
  * docs now go to the new help.bessy.de server (configure)
Shall I push this patch? (3/3) [ynW...], or ? for more options: y

WARNING: Doing a one-time conversion of pristine format. This may take a
while. The new format is backwards-compatible.
Pristine conversion done...
darcs: stdin: hIsTerminalDevice: illegal operation (handle is closed)
Apply failed!

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:28:02
which darcs
1 /projects/ctl/franksen/ghc-6.10.4/bin/darcs

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:28:15
darcs --version
2.4.98.3 (beta 3)

Reverting back to 2.4.4:

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:28:20
rmpath /projects/ctl/franksen/ghc-6.10.4/bin

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:28:50
rehash

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:28:56
darcs --version
2.4.4 (release)

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:28:59
darcs push
Pushing to /srv/csr/repositories/controls/darcs/epics/ioc/MLS-Controls/base-3-14...

Tue Aug 17 15:20:42 CEST 2010  benjamin.frank...@bessy.de
  * generate many link fields using expander.pl (EnergyRampApp)
  This makes it much easier to add new target devices.
Shall I push this patch? (1/3) [ynWvplxdaqjk], or ? for help: n

Tue Aug 17 15:22:29 CEST 2010  benjamin.frank...@bessy.de
  * added kickers to energy ramp (EnergyRampApp)
Shall I push this patch? (2/3) [ynWsfvplxdaqjk], or ? for help: n

Thu Aug 19 13:27:07 CEST 2010  benjamin.frank...@bessy.de
  * docs now go to the new help.bessy.de server (configure)
Shall I push this patch? (3/3) [ynWsfvplxdaqjk], or ? for help: y

Reading pristine 300 done, 364 queued.
88fa8a58d32f70e7269b6b74f51f8e44a54eb014c
Finished applying...
Posthook ran successfully.
Push successful.

frank...@aragon:~/ctl/MLS-Controls/clean ___ 13:29:13
darcs --version
2.4.4 (release)

Cheers
Ben
[Haskell-cafe] Re: Approaches to dependent types (DT)
pbrowne wrote:
> Dependent Types (DT)
> The purpose of dependent types (DT) is to allow programmers to specify
> dependencies between the parameters of a multiple parameter class.

'Dependent type' means the result type (of a function) can depend on
argument values. This is not (directly) supported in Haskell. What you are
talking about is called 'functional dependencies', not 'dependent types';
sometimes abbreviated as 'fundeps'.

> DTs can be seen as a move towards more general purpose parametric type
> classes.

This is at least misleading: adding a functional dependency does not make
the class more general, but more special, as it reduces the number of
possible instances.

Cheers
Ben
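For readers unfamiliar with fundeps, a minimal made-up example (class and instance names are invented): the dependency `c -> e` declares that the collection type determines the element type. This rules instances out rather than in; a second instance pairing `[a]` with a different element type would now be rejected.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- 'c -> e' : knowing the collection type c fixes the element type e,
-- which both restricts the possible instances and helps type inference
class Collection c e | c -> e where
  empty  :: c
  insert :: e -> c -> c

instance Collection [a] a where
  empty  = []
  insert = (:)

main :: IO ()
main = print (insert (1 :: Int) (insert 2 empty) :: [Int])  -- [1,2]
```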
[Haskell-cafe] Re: Building production stable software in Haskell
Philippa Cowderoy wrote:
> On Mon, 17 Sep 2007, Adrian Hey wrote:
>> Ideally the way to deal with this is via standardised interfaces (using
>> type classes with Haskell), not standardised implementations. Even this
>> level of standardisation is not a trivial, clear-cut design exercise.
>> E.g. we currently have at least two competing class libs, Edison and
>> the collections package. Which one should become standard?
>
> They shouldn't, at least not now. Knock up something lightweight that'll
> do for now for each of the modules that're going to be standard, worry
> about overarching frameworks later. Realistically we need a standardised
> name which we can expect to find an implementation under, with some
> performance guarantees, even if they're the worst possible ones we can
> make.

I am using the collections package on a regular basis, and I am quite
satisfied. (I have no experience with Edison so I can't compare them.)

The main advantage of a framework such as the collections package offers is
that the code becomes a lot more flexible. First, it is easier to experiment
with different implementations. In one application I wrote, you can switch
from Data.Collections.StdMap (whose implementation is the familiar Data.Map)
to e.g. Data.Collections.AvlMap by changing exactly /one/ line of code (of a
total of about 1500 in 8 modules). No need to change any of the import
declarations, no change in type signatures, nothing. Also, many functions,
even whole classes, can be written more polymorphically and are thus easier
to use in situations other than what they were planned for; you can specify
exactly the API that is needed and no more, which strengthens static typing.

The main disadvantage is that it can become quite hard to understand type
errors, which often don't give me any clue what the /cause/ of the problem
is. (This might be unavoidable, due to the level of polymorphism, I don't
know.)

In the end, I think something similar to the collections package should
become 'standard' in the sense of getting distributed with the main Haskell
implementations. This would encourage more people to try and use them, so
we'd gather more experience and would be able to eliminate shortcomings
sooner rather than later.

Cheers
Ben
[Haskell-cafe] Re: Help understanding type error
Derek Elkins wrote:
> On Sat, 2007-09-08 at 12:24 +1000, Stuart Cook wrote:
>> On 9/8/07, Ryan Ingram [EMAIL PROTECTED] wrote:
>>> This does what you want, I think:
>>>
>>>     {-# LANGUAGE ExistentialQuantification #-}
>>>     module Exist where
>>>
>>>     data Showable = forall a. (Show a) => Showable a
>>>
>>>     instance Show Showable where
>>>       showsPrec p (Showable a) = showsPrec p a
>>>       show (Showable a) = show a
>>>       -- You have to use the default implementation of showList
>>>       -- because a list could be heterogeneous
>>>
>>>     data T a = forall b. (Show b) => T b a
>>>
>>>     extShow :: T a -> Showable
>>>     extShow (T b _) = Showable b
>>
>> Wow, I'm impressed! Making the existential wrapper an instance of its
>> own typeclass solves quite a few problems. While the idiom is obvious
>> in hindsight, I don't think I've seen it documented before. (Looking
>> around just now I found some oblique references to the technique, but
>> nothing that really called attention to itself.)
>
> It's documented on the old wiki... and in the papers that introduce
> local existential types, I believe. E.g.
> http://citeseer.ist.psu.edu/aufer95type.html, this is where I first
> read about it.

Cheers
Ben
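A short usage sketch of the wrapper from the post, showing the heterogeneous list it enables (definitions repeated so the snippet is self-contained; the list `stuff` is my own example):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- the existential wrapper from Ryan Ingram's post
data Showable = forall a. Show a => Showable a

-- making the wrapper an instance of its own class is the idiom
instance Show Showable where
  showsPrec p (Showable a) = showsPrec p a

-- the payoff: a heterogeneous list whose elements share only Show
stuff :: [Showable]
stuff = [Showable (42 :: Int), Showable "hello", Showable True]

main :: IO ()
main = mapM_ print stuff
```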
[Haskell-cafe] Re: Extending the idea of a general Num to other types?
Dan Piponi wrote:
> On 9/5/07, Ketil Malde [EMAIL PROTECTED] wrote:
>> On Wed, 2007-09-05 at 08:19 +0100, Simon Peyton-Jones wrote:
>>> Error message from GHCi:
>>>
>>>     test/error.hs:2:8:
>>>         No instance for (Num String)
>>>           arising from use of `+' at test/error.hs:2:8-17
>>>         Possible fix: add an instance declaration for (Num String)
>>>         In the expression: x + (show s)
>>>         In the definition of `f': f x s = x + (show s)
>>>
>>> your suggestion for the error message you'd like to have seen.
>
>     ghc --newbie-errors error.hs
>     ...
>     Error message from GHCi:
>     test/error.hs:2:8:
>         You have tried to apply the operator '+' to 'x' and 'show s'.
>         'show s' is a String. I don't know how to apply '+' to a String.
>         May I suggest either:
>         (1) '+' is a method of type class Num. Tell me how to apply '+'
>             to a String by making String an instance of the class Num
>         (2) You didn't really mean '+'
>         In the expression: x + (show s)
>         In the definition of `f': f x s = x + (show s)

Splendid! And w/o the --newbie-errors, drop the 'Possible fix: ...'. In my
experience it is either unnecessary (because it is obvious which instance
is missing) or (more often) misleading.

As to the first line of the message, 'No instance for (Num String)': I
dislike the proposed 'String is not an instance of Num' for reasons already
mentioned by others (multi-parameter classes). I suggest to make it even
shorter by directly quoting the missing syntax, i.e. 'Missing (instance Num
String)'.

Cheers
Ben
[Haskell-cafe] Re: Block-wise lazy sequences in Haskell
Bryan O'Sullivan wrote:
> Henning Thielemann wrote:
>> I thought it must be possible to define an unboxed array type with
>> Storable elements.
> Yes, this just hasn't been done. There would be a few potentially
> tricky corners, of course; Storable instances are not required to be
> fixed in size,

They are (indirectly), see
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Foreign-Storable.html#v%3AsizeOf

    sizeOf :: a -> Int
    Computes the storage requirements (in bytes) of the argument. The
    value of the argument is not used.

and http://www.cse.unsw.edu.au/~chak/haskell/ffi/ffi/ffise5.html#x8-320005.7

    sizeOf    :: Storable a => a -> Int
    alignment :: Storable a => a -> Int

    The function sizeOf computes the storage requirements (in bytes) of
    the argument, and alignment computes the alignment constraint of the
    argument. An alignment constraint x is fulfilled by any address
    divisible by x. Both functions do not evaluate their argument, but
    compute the result on the basis of the type of the argument alone.
    [...]

Cheers
Ben
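A minimal illustration of this point -- sizeOf and alignment inspect only the *type* of their argument, so passing undefined is legitimate -- using a made-up Point type (the 2*Double layout is an assumption of this sketch, not a general rule):

```haskell
import Foreign.Storable
import Foreign.Marshal.Alloc (alloca)

data Point = Point Double Double deriving (Eq, Show)

instance Storable Point where
  sizeOf    _ = 2 * sizeOf (undefined :: Double)  -- fixed per type
  alignment _ = alignment (undefined :: Double)   -- argument never evaluated
  peek p      = Point <$> peekByteOff p 0 <*> peekByteOff p 8
  poke p (Point x y) = pokeByteOff p 0 x >> pokeByteOff p 8 y

main :: IO ()
main = do
  print (sizeOf (undefined :: Point))  -- 16, without touching the value
  alloca $ \p -> do                    -- round-trip through raw memory
    poke p (Point 1 2)
    print =<< (peek p :: IO Point)
```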
[Haskell-cafe] Re: Learn Prolog...
Hugh Perkins wrote:
> Sooo.. what is the modern equivalent of Prolog?

I once learned about LIFE (Logic, Inheritance, Functions, and Equations)
and was deeply fascinated. However, it died the quick death of most
research languages.

Cheers
Ben
[Haskell-cafe] Re: RE: Definition of the Haskell standard library
Sven Panne wrote:
> On Tuesday 31 July 2007 19:39, Duncan Coutts wrote:
>> [...] The docs for those packages would be available for packages
>> installed via cabal (assuming the user did the optional haddock step)
>> and would link to each other.
> Well, on a normal Linux distro a user should *never* have to call cabal
> (or any of its cousins) directly, the distro's package manager should be
> used instead.

This is very theoretical. I use debian (stable) and have to install non-deb
Haskell libraries all the time. No way distro package maintainers can
provide packages for each and every library out there, and even for
'standard' libs (whatever that may mean) sometimes you need a newer or an
older version of a certain library (relative to what the distro offers).

> On some systems (windows, gnome) there are dedicated help viewers that
> can help with this contents/index issue. haddock supports both (mshelp,
> devhelp). I'm not sure everyone would find that a sufficient solution
> however. An install-haddock tool would be the solution IMHO.

Yes.

Cheers
Ben
[Haskell-cafe] RE: Re: Remember the future
Simon Peyton-Jones wrote:
> | From the ghc manual:
> |
> | ---
> | 7.3.3. The recursive do-notation
> | ...
> |
> | It is unfortunate that the manual does not give the translation
> | rules, or at least the translation for the given example.
>
> Hmm. OK. I've improved the manual with a URL to the main paper
> http://citeseer.ist.psu.edu/erk02recursive.html which is highly
> readable. And I've given the translation for the example as you suggest

After finally reading the paper I agree that repeating the translation in
the manual is not a good idea. However, I suggest the manual should mention
the restrictions imposed on mdo (wrt the normal do):

* no shadowing allowed for generator-bound variables
* let bindings must be monomorphic

Both of them might cause confusion if someone hits them by accident and
starts to wonder what's wrong with her code, in which case it would be
helpful if this information were directly available in the manual. No need
to give a detailed rationale (that's what the paper can be read for), just
say that they are there.

BTW, I agree with the paper that the restrictions are sensible and
typically don't hurt.

Thanks
Ben
[Haskell-cafe] Re: Haskellnet could not find network-any dependency.
Edward Ing wrote:
> Hi, I am trying to install Haskellnet. But the configuration breaks on
> a dependency of network-any in GHC 6.6. I thought network-any was part
> of the Hierarchical Libraries? If not, where do I get it?

The generic place for libraries nowadays is hackage:
http://hackage.haskell.org/packages/archive/pkg-list.html where you find
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/network-2.0

The 'Hierarchical Libraries' do not exist as such; all modern Haskell
libraries use the hierarchical module name extension for their exported
modules. There are, however, a certain number of libraries that are
regularly shipped together with Haskell implementations. All of them,
except those which work only together with a certain compiler/interpreter
version (e.g. 'base'), are available from hackage.

BTW, it would be nice if hackage would list repository locations, too, if
available. The one for the 'network' library is not mentioned on hackage; I
found one here: http://darcs.haskell.org/packages/network/

Cheers
Ben
[Haskell-cafe] RE: Re: Remember the future
Simon Peyton-Jones wrote:
> | It is unfortunate that the [ghc] manual does not give the translation
> | rules, or at least the translation for the given example.
>
> Hmm. OK. I've improved the manual with a URL to the main paper
> http://citeseer.ist.psu.edu/erk02recursive.html which is highly
> readable. And I've given the translation for the example as you suggest

Cool, thanks. BTW, the Haskell' wiki says its adoption status is 'probably
no', which I find unfortunate. IMHO recursive do is a /very/ useful and
practical feature, and the cons listed on
http://hackage.haskell.org/trac/haskell-prime/wiki/RecursiveDo don't weigh
enough against that. Ok, just my (relatively uninformed) 2 cents.

Cheers
Ben
[Haskell-cafe] Re: Using Collections: ElemsView and KeysView
Jean-Philippe Bernardy wrote:
> foldr on ElemsView is defined as such:
>
>     foldr f i (ElemsView c) = foldr (f . snd) i c
>
> so, for example:
>
>     getElementList = toList . ElemsView

Ok, thanks, this helps. I had forgotten to look at the instances.

> When I designed this code (some years ago), I didn't like the fold of
> Map to have the type:
>
>     fold :: (a -> b -> b) -> b -> Map k a -> b
>
> This just doesn't make sense if we see maps as a collection of
> (key, value) pairs. (Indeed, toList :: Map k a -> [(k, a)].) In order
> to be consistent, but to provide an easy way to migrate to the new
> collection classes I was designing, I provided the ElemsView/KeysView
> to switch to the former behaviour on a case by case basis.

I understand the motivation. Could you say something about the functions
withElems and withKeys? Their types contain a mysterious type constructor
'T' that is not documented in the haddock docs. The source file says
'type T a = a -> a', so I guess this is for cases where I want to 'lift' a
function mapping e.g. keys to keys to a function on Maps?

> This also allows for defining optimized versions of foldr, etc. for
> each type that supports Views, but this was tedious, so I never did it.
> GHC's RULES pragma is probably better suited to the purpose anyway. As
> for the lack of documentation, everyone is very welcome to contribute ;)

I'd love to, but my understanding is still somewhat limited...

Cheers (and thanks)
Ben

P.S. The collections framework is great, but the type errors one sometimes
gets are horrible. When I started to check whether I had understood what
you wrote above, I got:

    Couldn't match expected type `c' (a rigid variable)
           against inferred type `Str'
      `c' is bound by the type signature for `macros' at MacroSubst.hs:13:24
      Expected type: (Str, c)
      Inferred type: (Str, Str)
    When using functional dependencies to combine
      Foldable (Data.Map.Map k a) (k, a),
        arising from the instance declaration at Defined in Data.Collections
      Foldable (Data.Map.Map Str Str) (Str, c),
        arising from use of `unions' at Gadgets.hs:95:13-30
    When trying to generalise the type inferred for `macros'
      Signature type:     forall c1. (Collection c1 String, Set c1 String) =>
                          Attributes -> c1
      Type to generalise: forall c1. (Collection c1 String, Set c1 String) =>
                          Attributes -> c1
[Haskell-cafe] Using Collections: ElemsView and KeysView
Hi

I am using collections-0.3 and need to perform some operations on keys
resp. values of a map. In the concrete implementations there are functions
like 'elems' and 'keys', but there is no such thing in Data.Collections.
Instead there are the types 'ElemsView' and 'KeysView' and the functions
'withElems' and 'withKeys', but I have not the slightest idea how they are
supposed to be used; there is very sparse documentation and no examples.
I'd be glad for any hints.

Cheers
Ben
[Haskell-cafe] Re: Remember the future
Andrew Coppin wrote:
> Surely all this means is that the magical mdo keyword makes the
> compiler arbitrarily reorder the expression...?

It is not magical but simple syntactic sugar. And no, the compiler does not
'arbitrarily reorder' anything; you do the same in any imperative language
with pointers/references and mutation. From the ghc manual:

---
7.3.3. The recursive do-notation
...
The do-notation of Haskell does not allow recursive bindings, that is, the
variables bound in a do-expression are visible only in the textually
following code block. Compare this to a let-expression, where bound
variables are visible in the entire binding group. It turns out that
several applications can benefit from recursive bindings in the
do-notation, and this extension provides the necessary syntactic support.
Here is a simple (yet contrived) example:

    import Control.Monad.Fix

    justOnes = mdo xs <- Just (1:xs)
                   return xs

As you can guess justOnes will evaluate to Just [1,1,1,...

The Control.Monad.Fix library introduces the MonadFix class. Its
definition is:

    class Monad m => MonadFix m where
      mfix :: (a -> m a) -> m a
---

It is unfortunate that the manual does not give the translation rules, or
at least the translation for the given example. If I understood things
correctly, the example is translated to

    justOnes = mfix (\xs' -> do { xs <- Just (1:xs'); return xs })

You can imagine what happens operationally by thinking of variables as
pointers. As long as you don't de-reference them, you can use such pointers
in expressions and statements even if the object behind them has not yet
been initialized (= is undefined). The question is how the objects
eventually get initialized. In imperative languages this is done by
mutation. In Haskell you employ lazy evaluation: the art of circular
programming is to use not-yet-defined variables lazily, that is, you must
never demand the object before the mdo block has been executed.

A good example is http://www.cse.ogi.edu/PacSoft/projects/rmb/doubly.html
which explains how to create a doubly linked circular list using mdo.

Cheers
Ben
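A stripped-down variant of that doubly linked list idea, as a hedged sketch (type and names are mine, not from the linked page), runnable with GHC's RecursiveDo: each node holds mutable links, and mdo lets both nodes mention each other before either exists, because the refs are only dereferenced after the block has run.

```haskell
{-# LANGUAGE RecursiveDo #-}
import Data.IORef

data Node = Node { val :: Int, prev, next :: IORef Node }

-- a two-element circular ring; 'b' is used lazily inside newIORef
-- before its own binding has been executed
twoRing :: IO (Node, Node)
twoRing = mdo
  a <- Node 1 <$> newIORef b <*> newIORef b
  b <- Node 2 <$> newIORef a <*> newIORef a
  return (a, b)

main :: IO ()
main = do
  (a, _) <- twoRing
  b  <- readIORef (next a)   -- follow the links only after construction
  a' <- readIORef (next b)
  print (val a, val b, val a')  -- (1,2,1): back where we started
```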
[Haskell-cafe] Re: Syntax for lambda case proposal could be \of
Brian Hulley wrote:
> Stefan O'Rear wrote:
>> On Wed, Aug 15, 2007 at 06:58:40PM +0100, Duncan Coutts wrote:
>>> On Wed, 2007-08-15 at 10:50 -0700, Stefan O'Rear wrote:
>>>> OTOH, your proposal provides (IMO) much more natural syntax for
>>>> multi-pattern anonymous functions, especially if we stipulate that
>>>> unlike a case (but like a lambda) you can have multiple arguments;
>>>> then you could write stuff like:
>>>>
>>>>     sumTo0 = foldr (\of 0 k -> 0; n k -> n + k) 0
>
> With
>
>     sumTo0 = foldr (\0 k -> 0; n k -> n + k) 0
>
>     foo = getSomethingCPS $ \ arg -> moreStuff
>
> is now a syntax error (\ { varid -> } matches no productions). A
> multi-way lambda could be introduced using \\ thus:
>
>     sumTo0 = foldr (\\ 0 k -> 0; n k -> n + k) 0

I like the idea, but unfortunately '\\' is currently a regular operator
symbol. In fact it is used as a (set or map) 'difference' operator
(according to
http://www.haskell.org/ghc/docs/latest/html/libraries/doc-index-92.html):

    \\  1 (Function) Data.IntMap
        2 (Function) Data.IntSet
        3 (Function) Data.List
        4 (Function) Data.Map
        5 (Function) Data.Set

Cheers
Ben
[Haskell-cafe] Re: Error building takusen with Cabal-1.1.6.2
John Dell'Aquila wrote:
> Setup.hs wants a module that Cabal hides. Am I doing something wrong
> (newbie :-) or should I try to fall back to Cabal-1.1.6.1?
>
>     $ ghc --make -o setup Setup.hs
>     Setup.hs:13:7:
>         Could not find module `Distribution.Compat.FilePath':
>           it is hidden (in package Cabal-1.1.6.2)

This is what I did to make takusen build with ghc-6.6.1:

    [EMAIL PROTECTED]: .../haskell/takusen_0 darcs whatsnew
    {
    hunk ./Setup.hs 13
    -import Distribution.Compat.FilePath (splitFileName, joinPaths)^M$
    +import System.FilePath (splitFileName, combine)^M$
    hunk ./Setup.hs 124
    -    libDirs <- canonicalizePath (joinPaths path libDir)^M$
    -    includeDirs <- canonicalizePath (joinPaths path includeDir)^M$
    +    libDirs <- canonicalizePath (combine path libDir)^M$
    +    includeDirs <- canonicalizePath (combine path includeDir)^M$
    }

HTH
Ben
[Haskell-cafe] Re: Explaining monads
Brian Brunswick wrote:
> One thing that I keep seeing people say (not you), is that monads
> /sequence/ side effects. This is wrong, or at least a limited picture.
> /All/ of the above structures are about combining compatible things
> together in a row. /None/ of them force any particular order of
> evaluation - that all comes from the particular instance. So it's only
> a particular feature of IO that it sequences the side effects. Others
> don't - we can have a lazy State monad that just builds up big thunks.

I am a bit astonished. Let's take the simplest example: Maybe. The effect
in question is the premature abortion of a computation (when Nothing is
returned). And of course Maybe sequences these effects, that's what you
use it for: the _first_ action to be encountered that returns Nothing
aborts the computation. Clearly sequencing goes on here. Similar with the
Error monad (i.e. Either Err, for some Err type). I won't talk about the
List monad because I always had difficulty understanding it.

What about State? The effect is reading/writing the state. Again, the
State monad takes care that these effects get sequenced, and again that's
what you expect it to do for you. And so on...

This is -- of course -- not a proof, so maybe there /are/ monads that
don't sequence (their) effects. I'd be most interested to see an example,
if there is one, to bring myself nearer to the -- unattainable -- goal of
full enlightenment wrt monads.

Cheers
Ben
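The abort-on-Nothing sequencing described above can be spelled out with a small invented example:

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv a 0    -- Nothing: the first failing action aborts,
  return (a + b)      -- so this line is never reached

main :: IO ()
main = print calc     -- Nothing
```

The "sequencing" is visible in Maybe's bind itself: `Nothing >>= f = Nothing` discards the continuation, so everything after the first Nothing is skipped.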
[Haskell-cafe] Re: Help with a project design
Andrea Rossato wrote:
> The task this library should do is simple: given an xml object
> (representing a bibliographic reference), render it with rules stored
> in a different xml object (the citation style). While I think I can
> find solutions for this problem - the rendering -, what I find
> difficult is the design of the reference xml objects.
>
> Bibliographic entries have different types, which must be rendered
> differently. These types can be classified into 3 main classes (books,
> articles, parts of a book) that can be rendered with the same methods.
> That seems to fit Haskell perfectly. Now, I basically see 2 approaches:
>
> 1. create some data structures (most part of them is common) to map
>    different types of bibliographic entries, and create the needed
>    classes with the render methods;
>
> 2. keep the xml objects as xml and create an abstract interface to the
>    xml objects to get the data required for rendering and classifying
>    the xml objects. This way I would have to:
>    - create data types to store different types of xml objects (data
>      Book = Book XmlTree, data Article, etc.): these data types
>      represent my reference classes;
>    - create a class of 'render'-able types with the render method and
>      define the instances;
>    - create an existential type to set the type of the xml objects
>      with some kind of setType :: XmlTree -> ExistentialContainer

I may not be overly qualified (and experienced with Haskell) to give you
advice, so take what follows with caution.

I would definitely prefer choice 1 over 2. I think it is very important to
design the data structure independent of any external representation of
that data. XML is a fine way to externally represent data, but this should
not influence your choice of data structure. I'd rather keep the
possibility of alternative representations in the back of my head, and
make the data structure general enough that they could be added w/o
disrupting your main algorithms.

Abstraction can be added later; if you find that you need to maintain
invariants for your bibliographic data that cannot be easily expressed in
the type itself, then you might consider making your data type abstract,
i.e. putting it into a module of its own and exporting only an API.

> I think that the first approach is not abstract enough and requires a
> lot of boilerplate code to translate a specific type of bibliographic
> entry into a Haskell type.

A certain amount of boilerplate may be unavoidable. I never found this to
be a serious obstacle, but again that may be due to my limited experience.
It is a bit tedious to write but OTOH may even serve you as 'finger
exercise'. If it really gets out of hand, 'scrap' it in some way ;) I
recommend the Uniplate approach because it is very easy to understand,
performs well, and requires the least amount of extensions. OK, you have
been warned...

Cheers
Ben
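As an illustration of approach 1, here is a hedged sketch (all type names, fields, and rendering rules invented for the example): plain Haskell data types per reference class, decoupled from any XML representation, plus a Render class for the rendering rules.

```haskell
data Person = Person { given :: String, family :: String }

-- one plain data type per reference class; fields are illustrative only
data Book     = Book     { bTitle :: String, bAuthors :: [Person] }
data Article  = Article  { aTitle :: String, aJournal :: String }
data BookPart = BookPart { pTitle :: String, pIn :: Book }

-- the rendering rules live behind a class, so each class of entry
-- can be rendered differently through the same interface
class Render a where
  render :: a -> String

instance Render Book where
  render b = bTitle b ++ " (book)"

instance Render Article where
  render a = aTitle a ++ ", in: " ++ aJournal a

instance Render BookPart where
  render p = pTitle p ++ ", in: " ++ bTitle (pIn p)
```

Parsing from and printing to XML would then be separate functions between XmlTree and these types, leaving the core data model untouched if the external format changes.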
[Haskell-cafe] Re: Re: Language support for imperative code. Was: Re: monad subexpressions
Brian Hulley wrote:
> Brian Hulley wrote:
>> apfelmus wrote:
>>> Brian Hulley schrieb:
>>>>     main = do
>>>>         buffer <- createBuffer
>>>>         edit1 <- createEdit buffer
>>>>         edit2 <- createEdit buffer
>>>>         splitter <- createSplitter (wrapWidget edit1) (wrapWidget edit2)
>>>>         runMessageLoopWith splitter
>>>> ... Thus the ability to abstract mutable state gives to my mind by
>>>> far the best solution.
>>> I'm not sure whether mutable state is the real goodie here. I think
>>> it's the ability to independently access parts of a compound state.
>>> http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf
>> This is indeed a real key to the problem. Of course this is only one
>> aspect of the problem...
>
> Thinking about this a bit more, and just so this thought is recorded
> for posterity (!) and for the benefit of anyone now or in a few hundred
> years time, trying to solve Fermat's last GUI, the object oriented
> solution allows the buffer object to do anything it wants, so that it
> could negotiate a network connection and implement the interface based
> on a shared network buffer for example, without needing any changes to
> the client code above, so a functional gui would need to have the same
> flexibility to compete with the OO solution.

I'd be careful. Introducing a network connection into the equation makes
the object (its methods) susceptible to a whole new bunch of failure
modes; think indefinite delays, connection loss, network buffer overflow,
etc. etc. It may be a mistake to abstract all that away; in fact I am
convinced that the old Unix habit of sweeping all these failure modes and
potentially long delays under a big carpet named 'file abstraction' was a
bad idea to begin with. The ages-old and still not solved problem of web
browsers hanging indefinitely (w/o allowing any GUI interaction) while
name resolution waits for completion is only the most prominent example.

Cheers
Ben
[Haskell-cafe] Re: Re: Explaining monads
Brandon S. Allbery KF8NH wrote:
> On Aug 13, 2007, at 16:29 , Benjamin Franksen wrote:
>> Let's take the simplest example: Maybe. The effect in question is the
>> premature abortion of a computation (when Nothing is returned). And of
>> course Maybe sequences these effects, that's what you use it for: the
>> _first_ action to be encountered that returns Nothing aborts the
>> computation. Clearly sequencing goes on here.
>
> Clearly it does, but not as a side effect of the *monad*. It's
> ordinary Haskell data dependencies at work here, not some mystical
> behavior of a monad.

I can't remember claiming that monads have any mystic abilities. In fact,
these monads are implemented in such a way that their definition /employs/
data dependencies to enforce a certain sequencing of effects. I think that
is exactly the point, isn't it?

>> What about State? The effect is reading/writing the state. Again, the
>> State monad takes care that these effects get sequenced, and again
>> that's what you expect it to do for you.
>
> No, I expect it to carry a value around for me. If I carry that value
> around myself instead of relying on the monad to do it for me, *the
> calculation still gets sequenced by the data dependencies*.

Of course, you can unfold (i.e. inline) bind and return (or never use them
in the first place). Again, nobody claimed monads do the sequencing by
employing Mystic Force (tm); almost all monads can be implemented in plain
Haskell, and nevertheless they sequence certain effects -- You Could Have
Invented Them Monads Yourself ;-) The monad merely captures the idiom,
abstracts it, and ideally implements it in a library for your convenience
and for the benefit of those trying to understand what your code is
supposed to achieve. She reads 'StateM ...' and immediately sees 'ah, here
he wants to use some data threaded along for reading and writing'.

Cheers
Ben
[Haskell-cafe] Re: Re: Explaining monads
Derek Elkins wrote: On Mon, 2007-08-13 at 22:29 +0200, Benjamin Franksen wrote: Brian Brunswick wrote: One thing that I keep seeing people say (not you), is that monads /sequence/ side effects. This is wrong, or at least a limited picture. /All/ of the above structures are about combining compatible things together in a row. /None/ of them force any particular order of evaluation - that all comes from the particular instance. So it's only a particular feature of IO that it sequences the side effects. Others don't - we can have a lazy State monad that just builds up big thunks. I am a bit astonished. Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here. You are wrong. Proof: Let's take a simpler example: Identity. QED I don't accept this proof. Note the wording: 'Monads sequence (certain, monad-specific) effects'. Identity has no effects, ergo no sequencing has to happen. I didn't claim that /all/ monadic actions are (necessarily) sequenced. This also disproves David Roundy's statement that do x <- return 2; undefined; return (x*x) will hit bottom. Reader also does not sequence its actions. Ok, I admit defeat now ;-) Monads in general /allow/ sequencing of (certain) effects, but it is not necessary for a monad to do so. Writer is a kind of funny example. In which way? Certainly, any monad instance where (>>=) needs its first argument to determine whether to continue, e.g. Maybe, Either, IO, Parser, Cont, List, Tree, will clearly need to force its first argument before continuing, but that's just the nature of the situation. 
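Derek's counterexample is easy to check concretely. A minimal sketch, using base's Data.Functor.Identity (the 2007-era equivalent lived in Control.Monad.Identity); only the Maybe version hits bottom, because only Maybe's bind inspects its first argument:

```haskell
import Data.Functor.Identity (Identity, runIdentity)

-- Identity's (>>=) never scrutinizes its first argument, so the
-- intermediate 'undefined' is simply never forced:
lazyExample :: Int
lazyExample = runIdentity $ do
  x <- return 2
  _ <- undefined        -- bound but never demanded
  return (x * x)        -- evaluates to 4 without error

-- Maybe's (>>=) must pattern-match on Nothing/Just to decide whether
-- to continue, so the same program is bottom when evaluated:
strictExample :: Maybe Int
strictExample = do
  x <- return 2
  _ <- undefined        -- forced by (>>=); throws if demanded
  return (x * x)
```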
Don't forget State, clearly it sequences actions even though it always continues; but maybe 'sequencing' is too strong a word: just as with Reader, a sequence of reads (with no writes in between) may actually happen in any order; State imposes strict order on groups of adjacent reads and all (single) writes, correct? Ok, I think I understand where I was misled: I took the means for the end. There are many monads that impose a certain order on (some) effects; but this is done only as far as necessary to maintain X, whatever X may be -- maybe X is just the monad laws? What about: The imperative way always imposes a sequencing of actions, period. Monads in Haskell (except IO, which is just imperative programming) allow us to impose ordering on effects only partially, ideally only where necessary. Thanks for further enlightening me. Hey, I just said that sequencing is a means, not an end, but maybe even this is not necessarily true. I imagine a 'Sequence Monad' whose only effect would be to order evaluation... would that be possible? Or useful? Cheers Ben
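Such a 'Sequence Monad' can in fact be sketched as a strict identity monad, whose only "effect" is forcing each intermediate result before continuing. The name `Eval` and the instances below are my own illustration (GHC's parallel package later shipped a similar `Eval` monad in Control.Parallel.Strategies):

```haskell
-- A strict identity monad: no data effects at all, only ordering.
newtype Eval a = Eval { runEval :: a }

instance Functor Eval where
  fmap f (Eval x) = Eval (f x)

instance Applicative Eval where
  pure = Eval
  Eval f <*> Eval x = Eval (f x)

instance Monad Eval where
  -- each intermediate result is forced to WHNF via seq before the
  -- continuation runs; that is the entire "effect" of this monad
  Eval x >>= f = x `seq` f x
```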
[Haskell-cafe] Re: Re: Re: Language support for imperative code. Was: Re: monad subexpressions
Isaac Dupree wrote: Benjamin Franksen wrote: I'd be careful. Introducing a network connection into the equation makes the object (its methods) susceptible to a whole new bunch of failure modes; think indefinite delays, connection loss, network buffer overflow, etc. etc. It may be a mistake to abstract all that away; in fact I am convinced that the old Unix habit of sweeping all these failure modes and potentially long delays under a big carpet named 'file abstraction' was a bad idea to begin with. The ages-old and still unsolved problem of web browsers hanging indefinitely (w/o allowing any GUI interaction) while name resolution waits for completion is only the most prominent example. IMO it's just a terribly stupid bug in the best web browsers. Maybe inefficient, poor, or not-at-all-used multithreading? Explicitly creating a (system) thread with all the overhead (in computing resources, as well as code complexity) only because the system interface is broken? Yes, of course, necessary, but not nice. An extra parameter for a continuation would be a lot more light-weight and would also make explicit that we must expect the call to be delayed. I think the main reason why systems don't regularly employ this scheme is that it is so tedious to work with closures in low-level languages like C. file abstraction has its points. We just need a (type-level?) clear-to-program-with distinction between operations that may block indefinitely, and operations that have particular bounds on their difficulty. Although modern OSes, trying to balance too many things, don't usually make any such hard real-time guarantees, in favor of everything turning out more-or-less correct eventually. Back to file abstraction - well, consider the benefits of mounting remote systems as a filesystem. The hierarchy abstraction of the filesystem didn't retain the same performance characteristics... And all kinds of potential problems result too, when the connection breaks down! 
Indeed, as I have experienced multiple times: NFS clients completely hanging for minutes, enforcing a coffee break for the whole office! Not that I would mind a coffee break now and then, but it tends to happen in the middle of an attempt to fix a critical bug in the production system... How do you program with all those error conditions explicitly? It is difficult. You need libraries to do it well - and I'm not at all sure whether such libraries exist yet! I mean, programs are much too complicated already without infesting them with a lot of special cases. What I would like to have is a clear distinction, apparent in the type, between actions that can be expected to terminate fast and with certainty (apart from broken hardware, that is) and others which are inherently unreliable and may involve considerable or even indefinite delays. The latter should accept a continuation argument. However, there is obviously a judgement call involved here. Thus, the system should be flexible enough to allow treating the same resource either as one or the other, depending on the demands of the application. There may be situations where a name lookup can safely be treated as a synchronous operation (e.g. a script run as a cron job); in other situations one might need to regard even local bus access to some I/O card as asynchronous. indefinite delays I can create with `someCommand | haskellProgram` too Yes, user input as well as reading from a pipe should be handled like a network connection: call my continuation whenever input is available. connection loss Is there a correct way to detect this? There are many ways, typically involving some sort of beacons. Anyway, if all else fails the operation times out. I find it rather odd when I lose my IRC connection for a moment and then it comes back a moment later (Wesnoth games are worse, apparently, as they don't reconnect automatically). I often prefer considering them an indefinite delay. 
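The continuation-accepting interface described above can be sketched in a few lines. All names here are hypothetical illustrations of mine, not from any existing library; a real implementation would hand the continuation to an event loop instead of calling it directly:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- Operations that may block indefinitely do not return a result directly;
-- they deliver it to a continuation whenever it becomes available.
newtype Async a = Async { withResult :: (a -> IO ()) -> IO () }

-- Stand-in for an unreliable, possibly slow name lookup: here it just
-- "resolves" a hostname to its length, synchronously, for demonstration.
slowLookup :: String -> Async Int
slowLookup name = Async $ \k -> k (length name)

main :: IO ()
main = do
  r <- newIORef 0
  withResult (slowLookup "example.org") (writeIORef r)
  readIORef r >>= print   -- prints 11
```

The type makes the judgement call explicit: only operations wrapped in `Async` are allowed to delay indefinitely.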
Right: The user (the application) is in the best position to decide how long to wait before an operation times out. network buffer overflow that is: too much input, not processing it fast enough? (or similar). Yeah, though usually the other way around, i.e. too much output and the system can't send it fast enough (maybe because the other side is slow in accepting data, or because the connection is bad, or whatever). Memory size limitations go largely unhandled in programs of all sorts, not just networked ones, though the networked ones may suffer the most. It is usually not a problem with modern desktop or server systems, rather with so-called 'real-time' OSes, where everything tends to be statically allocated. We wish we had true unlimited-memory Turing machines :) ...this is possibly the most difficult issue to deal with formally. Probably requires limiting input data rates artificially. That's what one does (or tries to do, until some arbitrary
[Haskell-cafe] Re: Small question
Andrew Coppin wrote: Like that time yesterday, I compiled a program and got a weird message from GHC about ignored trigraphs or something... What the heck is a trigraph? Everyone's favorite obscure feature of the ANSI C99 preprocessor. Probably you had something like "this is odd???" in your source code, and were using -cpp. http://www.vmunix.com/~gabor/c/draft.html#5.2.1.1 Er... wow. OK, well I have no idea what happened there... (I'm not using -cpp. I don't even know what it is.) I had presumed GHC was upset because it got killed on the previous run... (I was running something else and it locked up the PC.) Since you are after increasing your program's performance, maybe you are using -fvia-C or chose an optimization level high enough that GHC decides for itself to go via C. The message is most probably from the C compiler. You could try passing the C compiler an appropriate flag (see the gcc manual) via a GHC command line option (which I am too lazy to look up). Cheers Ben
[Haskell-cafe] Re: Haskell vs GC'd imperative languages, threading, parallelizeability (is that a word? :-D )
Hugh Perkins wrote: Now, arguably the fact that we are pattern matching on the receiver at least means we don't do anything with the invalid data sent, but this is not rocket science: the standard technique to ensure decent compile-time validation in rpc-type things is to use an interface. The interface defines the method names and parameters that you can send across. Both the receiver and the sender have access to the interface definition, and it is trivial to check it at compile time. (Caveat: haven't looked enough into Erlang to know if there is a good reason for not using an interface?) Remember that Erlang is an untyped language and that you are allowed to send any kind of data. However, there is more to consider here: A certain amount of dynamism wrt the message content (high-level protocol) is necessary for systems for which Erlang was designed, namely large distributed control systems with minimum down-times. For large distributed installations it is a matter of practicality to be able to upgrade one component w/o needing to recompile (and then re-start) all the other components that it communicates with -- for systems with expected down-times of 3 minutes per year it is a matter of being able to meet the specifications. You'll have a hard time finding high-availability or large control systems which use an IDL approach for communication. Cheers Ben
[Haskell-cafe] Re: a regressive view of support for imperative programming in Haskell
Donn Cave wrote: (I have a soft spot for O'Haskell, but alas I must be nearly alone on that.) You are /not/ alone :-) I always found it very sad that O'Haskell and also its successor Timber (with all the good real-time stuff added) died the 'quick death' of most research languages. Cheers Ben
[Haskell-cafe] Re: a regressive view of support for imperative programming in Haskell
David Roundy wrote: Several times since reading the beginning of this discussion I've wished I had the new syntax so I could write something like: do if predicateOnFileContents (<- readFile "foo") then ... instead of either do contents <- readFile "foo" if predicateOnFileContents contents then ... or (as you'd prefer) readFile "foo" >>= \contents -> if predicateOnFileContents contents then ... Isn't this problem, namely being forced to name intermediate results, also solved by some sort of idiom bracket sugar, maybe together with the lambda case proposal? I would prefer both very much to the proposed (<- action) syntax, for the same reasons that e.g. Jules Bean nicely summarized. Cheers Ben
[Haskell-cafe] Re: Re: a regressive view of support for imperative programming in Haskell
David Menendez wrote: On 8/9/07, Benjamin Franksen [EMAIL PROTECTED] wrote: Donn Cave wrote: (I have a soft spot for O'Haskell, but alas I must be nearly alone on that.) You are /not/ alone :-) I always found it very sad that O'Haskell and also its successor Timber (with all the good real-time stuff added) died the 'quick death' of most research languages. There is also RHaskell, which implements an O'Haskell-like system as a Haskell library. http://www.informatik.uni-freiburg.de/~wehr/haskell/ Thanks for the pointer, I didn't know about this. Will take a look. Cheers Ben
[Haskell-cafe] Re: Re: a regressive view of support for imperative programming in Haskell
David Roundy wrote: On Thu, Aug 09, 2007 at 08:45:14PM +0200, Benjamin Franksen wrote: David Roundy wrote: Several times since reading the beginning of this discussion I've wished I had the new syntax so I could write something like: do if predicateOnFileContents (<- readFile "foo") then ... instead of either do contents <- readFile "foo" if predicateOnFileContents contents then ... or (as you'd prefer) readFile "foo" >>= \contents -> if predicateOnFileContents contents then ... Isn't this problem, namely being forced to name intermediate results, also solved by some sort of idiom bracket sugar, maybe together with the lambda case proposal? I would prefer both very much to the proposed (<- action) syntax for the same reasons that e.g. Jules Bean nicely summarized. I'm not familiar with the lambda case proposal, http://hackage.haskell.org/trac/haskell-prime/wiki/LambdaCase or, quoting from a recent post by Stefan O'Rear in this thread: I think the CaseLambda proposal on the Haskell' wiki solves this one nicely. mexpr >>= case of p1 -> branch1 p2 -> branch2 You still have to use >>=, but you don't have to name the scrutinee (and names are expensive cognitively). i.e. your example would become fmap predicateOnFileContents (readFile "foo") >>= case of True -> ... False -> ... (use liftM instead of fmap, if you prefer) and don't know what you mean by idiom bracket sugar, As has already been mentioned in this thread, in http://www.soi.city.ac.uk/~ross/papers/Applicative.html Conor McBride and Ross Paterson invent/explain a new type class that is now part of the base package (Control.Applicative). They also use/propose syntactic sugar for it, i.e. pure f <*> u1 <*> ... <*> un ~~ (| f u1 ... un |) (I just made up the symbols '(|' and '|)'; the concrete syntax would have to be fixed by people more knowledgeable than me.) As to the pros and cons of the (<- action) proposal, I think everything has been said. 
I'd vote for giving IdiomBrackets and/or LambdaCase a chance to be implemented, too, so we can try and evaluate different ways to simplify monadic code. One reason why I like IdiomBrackets is that they are more generally applicable (no pun intended ;-)), i.e. they would work not just for Monads but for anything in Applicative. (Of course, that is also their weakness.) Similarly, LambdaCase has more applications than just simplifying monadic code by avoiding having to name an intermediate result. Cheers Ben
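The proposed brackets are sugar only; their desugared Applicative form already works today. A small sketch in plain Control.Applicative (the `(| ... |)` syntax itself remains hypothetical):

```haskell
import Control.Applicative (pure, (<*>))

-- (| (+) u1 u2 |) would desugar to pure (+) <*> u1 <*> u2:
example :: Maybe Int
example = pure (+) <*> Just 2 <*> Just 3    -- Just 5

-- effects of the idiom combine like any Applicative; a failing
-- argument makes the whole idiom fail:
aborted :: Maybe Int
aborted = pure (+) <*> Just 2 <*> Nothing   -- Nothing
```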
[Haskell-cafe] Re: Handling custom types in Takusen
Salvatore Insalaco wrote: I noticed that in Takusen there're just two instances to implement to make any Haskell type db-serializable: DBBind / SqliteBind for serialization and DBType for deserialization. FWIW, I have two patches lying around (attached) that I wanted to send to the Takusen maintainers anyway. They (the patches) implement (only) instance DBType Data.ByteString for the Oracle and Sqlite backends. They are rudimentarily tested (hey, seems to work!), anyway a review might be in order because I am not sure I understand the internals well enough -- for all I know I might have introduced space leaks or whatnot. Cheers Ben New patches: [added ByteString support to Database/Oracle [EMAIL PROTECTED] { hunk ./Database/Oracle/Enumerator.lhs 41 + import qualified Data.ByteString.Char8 as B hunk ./Database/Oracle/Enumerator.lhs 948 + bufferToByteString :: ColumnBuffer -> IO (Maybe B.ByteString) + bufferToByteString buffer = OCI.bufferToByteString (undefined, colBufBufferFPtr buffer, colBufNullFPtr buffer, colBufSizeFPtr buffer) + hunk ./Database/Oracle/Enumerator.lhs 1010 + instance DBType (Maybe B.ByteString) Query ColumnBuffer where + allocBufferFor _ q n = allocBuffer q (16000, oci_SQLT_CHR) n + fetchCol q buffer = bufferToByteString buffer + hunk ./Database/Oracle/OCIFunctions.lhs 39 + import qualified Data.ByteString.Base as B hunk ./Database/Oracle/OCIFunctions.lhs 676 + + bufferToByteString :: ColumnInfo -> IO (Maybe B.ByteString) + bufferToByteString (_, bufFPtr, nullFPtr, sizeFPtr) = + withForeignPtr nullFPtr $ \nullIndPtr -> do + nullInd <- liftM cShort2Int (peek nullIndPtr) + if (nullInd == -1) -- -1 == null, 0 == value + then return Nothing + else do + -- Given a column buffer, extract a string of variable length + withForeignPtr bufFPtr $ \bufferPtr -> + withForeignPtr sizeFPtr $ \retSizePtr -> do + retsize <- liftM cUShort2Int (peek retSizePtr) + -- create :: Int -> (Ptr Word8 -> IO ()) -> IO ByteString + val <- B.create retsize (\p -> copyBytes (castPtr p) bufferPtr retsize) + return (Just val) } [added ByteString support to Database/Sqlite Ben Franksen [EMAIL PROTECTED]**20070714230837] { hunk ./Database/Sqlite/Enumerator.lhs 38 + import qualified Data.ByteString.Char8 as B hunk ./Database/Sqlite/Enumerator.lhs 366 + bufferToByteString query buffer = + DBAPI.colValByteString (stmtHandle (queryStmt query)) (colPos buffer) + hunk ./Database/Sqlite/Enumerator.lhs 414 + instance DBType (Maybe B.ByteString) Query ColumnBuffer where + allocBufferFor _ q n = allocBuffer q n + fetchCol q buffer = bufferToByteString q buffer + hunk ./Database/Sqlite/SqliteFunctions.lhs 22 + import qualified Data.ByteString.Char8 as B hunk ./Database/Sqlite/SqliteFunctions.lhs 278 + + colValByteString :: StmtHandle -> Int -> IO (Maybe B.ByteString) + colValByteString stmt colnum = do + cstrptr <- sqliteColumnText stmt (fromIntegral (colnum - 1)) + if cstrptr == nullPtr + then return Nothing + else do + str <- B.copyCString cstrptr + return (Just str) } Context: [added Functor and MonadFix instances to DBM Ben Franksen [EMAIL PROTECTED]**20070714112112] [TAG 0.6 [EMAIL PROTECTED] Patch bundle hash: 3bd78e14633d172cbabf4fd716fc0bcf3b32fa8c
[Haskell-cafe] Re: [Haskell] View patterns in GHC: Request for feedback
apfelmus wrote: Jules Bean wrote: Have you tried using pattern guards for views? f s | y :< ys <- viewl s = ... | EmptyL <- viewl s = ... Hm, I'd simply use a plain old case-expression here f s = case viewl s of y :< ys -> ... EmptyL -> ... In other words, case-expressions are as powerful as any view pattern may be in the single-parameter + no-nesting case. A better example is probably zip for sequences (Data.Sequence.Seq): zip :: Seq a -> Seq b -> Seq (a,b) zip xs ys = case viewl xs of x :< xt -> case viewl ys of y :< yt -> (x,y) <| zip xt yt EmptyL -> empty EmptyL -> empty This is how I do it, no pattern guards, no view patterns: zip :: Seq a -> Seq b -> Seq (a,b) zip xs ys = case (viewl xs, viewl ys) of (EmptyL, _ ) -> empty (_, EmptyL ) -> empty (x :< xt, y :< yt) -> (x,y) <| zip xt yt This is IMHO a lot clearer than any of the alternatives you listed, except your 'dream' (which is exactly what 'real' views would give us). Cheers Ben, member of the we-want-real-views-or-nothing-at-all movement ;-)
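For reference, the tupled-case version compiles as-is against Data.Sequence from the containers package; a self-contained sketch (renamed `zipSeq` here only to avoid clashing with Prelude.zip):

```haskell
import Data.Sequence (Seq, ViewL (..), viewl, (<|), empty, fromList)
import Data.Foldable (toList)

-- zip for sequences, matching on both views at once
zipSeq :: Seq a -> Seq b -> Seq (a, b)
zipSeq xs ys = case (viewl xs, viewl ys) of
  (EmptyL, _)        -> empty
  (_, EmptyL)        -> empty
  (x :< xt, y :< yt) -> (x, y) <| zipSeq xt yt
```

For example, `toList (zipSeq (fromList [1,2,3]) (fromList "ab"))` yields `[(1,'a'),(2,'b')]`.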
[Haskell-cafe] upgrading Text.Printf
Hi, I am using Text.Printf (package base, ghc-6.6.1) and am missing %X formatting. Checking out the darcs repo of the base package I see that it has already been added there. Is there any way to upgrade to the latest version while keeping ghc-6.6.1? I keep hearing Bulat saying that this is not possible, but I wanted to check anyway. Is there, perhaps, a separate package that contains (a newer version of) Text.Printf? Cheers Ben
[Haskell-cafe] Re: Re: [Haskell] View patterns in GHC: Request for feedback
Dan Licata wrote: On Jul 25, apfelmus wrote: The point is to be able to define both zip and pairs with one and the same operator :< . There's actually a quite simple way of doing this. You make the view type polymorphic, but not in the way you did: type Queue elt empty :: Queue elt cons :: elt -> Queue elt -> Queue elt data ViewL elt rec = EmptyL | elt :< rec view :: Queue elt -> ViewL elt (Queue elt) view2 :: Queue elt -> ViewL elt (ViewL elt (Queue elt)) This is cool! The more so because 'view2' can quite easily be defined in terms of 'view' view2 q = case view q of EmptyL -> EmptyL x :< q' -> x :< view q' so it suffices to provide the one-level 'view' as a library function. Does this scale to views containing multiple nestable constructors? It would, of course, be nice to get rid of having to write view2 altogether... Cheers Ben
[Haskell-cafe] Re: upgrading Text.Printf
Ian Lynagh wrote: On Wed, Jul 25, 2007 at 03:38:50PM +0200, Benjamin Franksen wrote: I am using Text.Printf (package base, ghc-6.6.1) and am missing %X formatting. Checking out the darcs repo of the base package I see that it has already been added there. Is there any way to upgrade to the latest version while keeping ghc-6.6.1? I keep hearing Bulat saying that this is not possible, but I wanted to check anyway. Is there, perhaps, a separate package that contains (a newer version of) Text.Printf? The easiest way is to grab a copy of the module from http://darcs.haskell.org/packages/base/Text/Printf.hs and either put it in your program directly, or rename it and put it in a package. Yeah, actually that's what I've been doing in the meantime (including it in my program, that is). Just thought it might be worth asking... Cheers Ben
[Haskell-cafe] Re: Help with IO and randomR
Niko Korhonen wrote: Bryan Burgers wrote: tpdfs range = do g <- newStdGen -- get a random generator (g1, g2) <- return $ split g -- make two random generators out of it return $ zipWith combine (randomRs range g1) (randomRs range g2) -- get two streams of random numbers, and combine them elementwise. combine x y = (x + y) `div` 2 So, moving on to the next question, how well do you think this solution would scale if we would need n random numbers to generate one? As a rule of thumb, anything that works with two operands of the same type can be generalized to more than two operands, using lists and fold resp. unfold. The following code is untested and may contain errors, but you should get the idea: -- combine several streams by taking the elementwise arithmetic mean arithMean xs = sum xs `div` length xs -- must be a finite nonempty list! -- create infinitely many generators from one splitMany = map fst . iterate (split . snd) . split tpdfs range = do g <- newStdGen -- get a random generator let gs = take 42 (splitMany g) -- 42 independent generators, say return $ map arithMean $ transpose $ map (randomRs range) gs -- transpose from Data.List Cheers Ben
[Haskell-cafe] RE: Speedy parsing
Re, Joseph (IT) wrote: Ah, I thought I might have to resort to one of the ByteString modules. I've heard of them but was more or less avoiding them due to some complexities with installing extra libraries in my current dev environment. I'll try to work that out with the sysadmins and try it out. Maybe it is easier to just get ghc-6.6.1 - it contains Data.ByteString & Co. in the base library. Also, for cabal-ized libs (which are quickly becoming the rule), you can install with the --user flag and give it a directory where you have write permission. On a different note: I've been wondering how difficult it would be to re-write Parsec so that it doesn't act on a /list/ of tokens but on a Sequence (defined in Data.Collections from the collections package). Because this would mean one could use String as well as ByteString. Cheers Ben
[Haskell-cafe] Re: RE: Speedy parsing
Thomas Schilling wrote: On 22 jul 2007, at 23.46, Benjamin Franksen wrote: On a different note: I've been wondering how difficult it would be to re-write Parsec so that it doesn't act on a /list/ of tokens but on a Sequence (defined in Data.Collections from collections package). Because this would mean one could use String as well as ByteString. That's (Summer of Code) work in progress: http://code.google.com/soc/2007/haskell/appinfo.html?csaid=B97EF4562EF3B244 Thanks, this is excellent news. Ben
[Haskell-cafe] A wish for relaxed layout syntax
Hi, I often run into the following issue: I want to write a list of lengthy items like this

    mylist = [ quite_lengthy_list_item_number_one,
      quite_lengthy_list_item_number_two,
      quite_lengthy_list_item_number_three
    ]

With the current layout rules this is a parse error (at the closing bracket). Normally I avoid this by indenting everything one level more, as in

    mylist = [ quite_lengthy_list_item_number_one,
        quite_lengthy_list_item_number_two,
        quite_lengthy_list_item_number_three
        ]

but I think this is a little ugly. The same issue comes up with parenthesized do-blocks; I would like to write

    when (condition met)
      (do first thing
          second thing
      )

So my wish is for a revised layout rule that allows closing brackets (of all sorts: ']', ')', '}') to be on the same indent level as the start of the definition/expression that contains the corresponding opening bracket. Cheers Ben
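For reference, the leading-comma style common in Haskell projects sidesteps this under the current layout rule, since every continuation line (including the closing bracket) stays indented relative to the definition. A minimal sketch with placeholder items:

```haskell
-- leading commas keep the brackets aligned and legal under current layout
mylist :: [Int]
mylist =
    [ 1   -- quite_lengthy_list_item_number_one
    , 2   -- quite_lengthy_list_item_number_two
    , 3   -- quite_lengthy_list_item_number_three
    ]
```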
[Haskell-cafe] Re: Can we do better than duplicate APIs?
Robert Dockins wrote: Some sort of in-language or extra-language support for mechanically producing the source files for the full API from the optimized core API would be quite welcome. Have you considered using DrIFT? IIRC it is more portable and easier to use than TH. Handling export lists, How so? I thought in Edison the API is a set of type classes. Doesn't that mean export lists can be empty (since instances are exported automatically)? No. Edison allows you to directly import the module and bypass the typeclass APIs if you wish. Ah, I didn't know that. Also, some implementations have special functions that are not part of the general API, and are only available via the module exports. Ok. One could make typeclasses the only way to access the main API, but I rather suspect there would be performance implications. I get the impression that typeclass specialization is less advanced than intermodule inlining (could be wrong though). No idea. Experts? haddock comments, I thought all the documentation would be in the API classes, not in the concrete implementations. It is now, but I've gotten complaints about that (which are at least semi-justified, I feel). Also, the various implementations have different time bounds which must be documented in the individual modules. Yes, I forgot about that. Hmmm. Ideally, I'd like to have the function documentation string and the time bounds on each function in each concrete implementation. I've not done this because it's just too painful to maintain manually. I can relate to that. The more so since establishing such time bounds with confidence is not trivial even if the code looks simple. BTW, code generation (of whatever sort) wouldn't help with that, right? I wonder: would it be worthwhile to split the package into smaller parts that could be upgraded in a somewhat less synchronous way? 
(so that the maintenance effort can be spread over a longer period) I have to admit, I'm not sure what an in-language mechanism for doing something like this would look like. Template Haskell is an option, I suppose, but it's pretty hard to work with and highly non-portable. It also wouldn't produce Haddock-consumable source files. ML-style first-class modules might fit the bill, but I'm not sure anyone is seriously interested in bolting that onto Haskell. As I explained to SPJ, I am less concerned with duplicated work when implementing concrete data structures, as with the fact that there is still no (compiler-checkable) common interface for e.g. string-like thingies, apart from the convention to use similar names for similar features. Fair enough. I guess my point is that typeclasses (as per Edison) are only a partial solution to this problem, even if you can stretch them sufficiently (with e.g. MPTC+fundeps+whatever other extension) to make them cover all your concrete implementations. Yes, and I think these problems would be worth some more research effort. Besides, I dearly hope that we can soon experiment with associated type synonyms... Cheers Ben
[Haskell-cafe] Re: Why the Prelude must die
mgsloan wrote: On 3/24/07, Vivian McPhail [EMAIL PROTECTED] wrote: I agree with Sven, but... What I want to push is a 'mathematically sound' numeric prelude. A proper numerical prelude should have bona fide mathematical objects like groups, rings, and fields underlying common numerical classes. It would be edifying to the student who discovered that the particular data type he is using is an inhabitant of a known class and can thus take advantage of known properties, presupplied as class methods. Reasoning and communication about programs, data types, and functions would be enhanced. One problem with that is that the instances are often times not mathematically sound - Int and Double certainly aren't. Int is algebraically sound as a factor ring Z/nZ with n=2**k, k the number of bits (which could be implementation-defined). Unfortunately the order inherited from Integer is not compatible with the algebra... Cheers Ben
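Both halves of that claim are directly observable with GHC's fixed-width Int (whose overflow behavior is two's-complement wraparound, i.e. exact arithmetic in Z/2^kZ):

```haskell
-- Int addition wraps around: maxBound + 1 lands on minBound,
-- exactly as in the factor ring Z/2^kZ.
wraps :: Bool
wraps = (maxBound :: Int) + 1 == minBound

-- ...but the ordering inherited from Integer is incompatible with
-- that algebra: n + 1 < n holds at the boundary, which no ordered
-- ring permits.
orderBreaks :: Bool
orderBreaks = (maxBound :: Int) + 1 < maxBound
```

Both expressions evaluate to True, which is precisely the "sound ring, unsound order" point above.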
[Haskell-cafe] Re: Can we do better than duplicate APIs? [was: Data.CompactString 0.3]
Jean-Philippe Bernardy wrote: Please look at http://darcs.haskell.org/packages/collections/doc/html/Data-Collections.html for an effort to make most common operations on bulk types fit in a single framework. The last time I looked at this (shortly after you started the project) I wasn't sure if I would want to use it. Now it seems like an oasis in a desert to me. I am pretty much impressed; for instance, you managed to unify all nine existing 'filter' types into a common type class. Cool. The only hair in the (otherwise very tasty) soup is Portability: MPTC, FD, undecidable instances which doesn't sound like it is going to replace the Prelude any time soon ;-) Never mind: I definitely consider using this instead of importing all these different Data.XYZ modules directly (and, heaven forbid, having to import them qualified whenever I need two of them in the same module). Do you foresee any particular obstacle to an integration (= providing the appropriate instances) of e.g. CompactString? I would even try to do this myself, as an exercise of sorts. How difficult is it in practice to work with 'undecidable instances'? Are there special traps one has to be careful to walk around? Also, we expect indexed types to solve, or at least alleviate, some problems you mention in your rant. http://haskell.org/haskellwiki/GHC/Indexed_types I have been hoping for that to resolve (some of) our troubles, but have been confused by all the back and forth among the experts about whether they offer more, or less, or the same as MPTCs+fundeps+whatever (and that they will probably not go into Haskell'). BTW, any reason I didn't find your collections library in the HackageDB (other than stupidity on my part)? (Just interested, I already found the darcs repo.) 
Cheers Ben PS: Since I read and post to the Haskell lists via gmane and a news client: do mail clients usually respect the follow-up header, such as the one I insert when cross-posting, so as to restrict follow-ups to the intended list?
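For readers unfamiliar with the extensions mentioned above, here is a hypothetical, much-simplified sketch (not the actual collections API; class and method names are made up) of how MPTC plus functional dependencies let a single filter serve many container types:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- The fundep 'c -> e' says the container type determines its element
-- type, which is what keeps instances like the one below unambiguous.
class Collection c e | c -> e where
  cfilter :: (e -> Bool) -> c -> c

instance Collection [a] a where
  cfilter = filter
```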
[Haskell-cafe] Can we do better than duplicate APIs? [was: Data.CompactString 0.3]
[sorry for the somewhat longer rant, you may want to skip to the more technical questions at the end of the post] Twan van Laarhoven wrote: I would like to announce version 0.3 of my Data.CompactString library. Data.CompactString is a wrapper around Data.ByteString that represents a Unicode string. This new version supports different encodings, as can be seen from the data type: [...] Homepage: http://twan.home.fmf.nl/compact-string/ Haddock: http://twan.home.fmf.nl/compact-string/doc/html/ Source: darcs get http://twan.home.fmf.nl/repos/compact-string After taking a look at the Haddock docs, I was impressed by the amount of repetition in the APIs. Not only does Data.CompactString duplicate the whole Data.ByteString interface (~100 functions, adding some more for encoding and decoding), the whole interface is again repeated another four times, once for each supported encoding. Now, this is /not/ meant as a criticism of the compact-string package in particular. To the contrary, duplicating a fat interface for almost identical functionality is apparently state-of-the-art in Haskell library design, viz. the celebrated Data.ByteString, whose API is similarly repetitive (see Data.List, Data.ByteString.Lazy, etc.), as well as Map/IntMap, Set/IntSet etc. I greatly appreciate the effort that went into these libraries, and admire the elegance of the implementation as well as the stunning results wrt. efficiency gains etc. However I fear that duplicating interfaces in this way will prove to be problematic in the long run. The problems I (fore)see are for maintenance and usability, both of which are of course two sides of the same coin. For the library implementer, maintenance will become more difficult, as ever more of such 'almost equal' interfaces must be maintained and kept in sync. 
One could use code generation or macro expansion to alleviate this, but IMO the necessity to use extra-language pre-processors points to a weakness in the language; it would be much less complicated and more satisfying to use a language feature that avoids the repetition instead of generating code to facilitate it. On the other side of the coin, usability suffers as one has to look up the (almost) same function in more and more different (but 'almost equal') module interfaces, depending on whether the string in question is Char vs. Byte, strict vs. lazy, packed vs. unpacked, encoded in X or Y or Z..., especially since there is no guarantee that the function is /really/ spelled the same everywhere and also really does what the user expects.(*) I am certain that most, if not all, people involved with these new libraries are well aware of these infelicities. Of course, type classes come to mind as a possible solution. However, something seems to prevent developers from using them to capture e.g. a common String or ListLike interface. Whatever this 'something' is, I think it should be discussed and addressed, before the number of 'almost equal' APIs becomes unmanageable for users and maintainers. Here are some raw ideas: One reason why I think type classes have not (yet) been used to reduce the amount of API repetition is that Haskell doesn't (directly) support abstraction over type constraints nor over the number of type parameters (polykinded types?). Often such 'almost equal' module APIs differ in exactly these aspects, i.e. one has an additional type parameter, while yet another one needs slightly different or additional constraints on certain types. Oleg K. has shown that some of these limitations can be overcome w/o changing or adding features to the language, however these tricks are not easy to learn and apply. 
Another problem is the engineering question of how much to put into the class proper: there is a tension between keeping the class as simple as possible (few methods, many parametric functions) for maximum usability vs. making it large (many methods, fewer parametric functions) for maximum efficiency via specialized implementations. It is often hard to decide this question up front, i.e. before enough instances are available. (This has been stated as a cause for deferring the decision on a common interface to list-like values or strings.) Since the type of a function doesn't reveal whether it is a normal function with a class constraint or a real class method, I imagine a language feature that (somehow) enables me to specialize such a function for a particular instance even if it is not a proper class member. Or maybe we have come to the point where Haskell's lack of a 'real' module system, like e.g. in SML, actually starts to hurt? Can associated types come to the rescue? Cheers Ben -- (*) I know that strictly speaking a class doesn't guarantee any semantic conformance either, but at least there is a common place to document the expected laws that all implementations should obey. With duplicated module APIs there is no such single place.
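To illustrate the associated-types alternative alluded to at the end: a hedged sketch (names invented, requires GHC's TypeFamilies extension) of the same kind of interface with the element type as an associated type instead of a second class parameter:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- 'Elem full' plays the role the fundep played in the MPTC version:
-- the container type determines its element type.
class ListLike full where
  type Elem full
  llfilter :: (Elem full -> Bool) -> full -> full
  llnull   :: full -> Bool

instance ListLike [a] where
  type Elem [a] = a
  llfilter = filter
  llnull   = null
```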
[Haskell-cafe] Re: Lazy IO and closing of file handles
Benjamin Franksen wrote: Bertram Felgenhauer wrote: Having to rely on GC to close the fds quickly enough is another problem; can this be solved on the library side, maybe by performing GCs when running out of FDs? Claus Reinke wrote: in good old Hugs, for instance, we find in function newHandle in src/iomonad.c [...snip...] /* Search for unused handle */ /* If at first we don't */ /* succeed, garbage collect */ /* and try again ... */ /* ... before we give up */ so, instead of documenting limitations and workarounds, this issue should be fixed in GHC as well. This may help in some cases but it cannot be relied upon. Finalizers are always run in a separate thread (must be, see http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html). Thus, even if you force a GC when handles are exhausted, as Hugs seems to do, there is no guarantee that by the time the GC is done the finalizers have freed any handles (assuming that the GC run really detects any handles to be garbage). Sorry for replying to myself, but I just realized that the argument brought forth by Boehm applies only to general-purpose finalizing facilities, and not necessarily to each and every special case. I think one could make up an argument that file handles in Haskell are indeed a special kind of object and that the language runtime /can/ run finalizers for file handles in a more 'synchronous' way (i.e. GC could call them directly as soon as it determines they are garbage). The main point here is that a file descriptor does not contain references to other language objects. The same would apply to all sorts of OS resource handles. However, the whole argument is a priori valid only for raw system handles, such as file descriptors. No idea what issues come up if one considers e.g. buffering, or more generally, any additional data structure that gets associated with the handle. Cheers Ben
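A hypothetical library-side workaround in the spirit of the Hugs code quoted above (the function name is made up, and, as the post argues, this is no guarantee, since GHC runs finalizers asynchronously):

```haskell
import Control.Exception (IOException, try)
import System.IO (Handle, IOMode(..), hClose, openFile)
import System.Mem (performGC)

-- If opening a file fails (possibly because all descriptors are held
-- by unreachable handles), force a GC so finalizers get a chance to
-- run, then try once more.
openFileRetrying :: FilePath -> IOMode -> IO Handle
openFileRetrying path mode = do
  r <- try (openFile path mode) :: IO (Either IOException Handle)
  case r of
    Right h -> return h
    Left _  -> performGC >> openFile path mode
```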
[Haskell-cafe] Re: Lazy IO and closing of file handles
Bertram Felgenhauer wrote: Having to rely on GC to close the fds quickly enough is another problem; can this be solved on the library side, maybe by performing GCs when running out of FDs? Claus Reinke wrote: in good old Hugs, for instance, we find in function newHandle in src/iomonad.c [...snip...] /* Search for unused handle*/ /* If at first we don't */ /* succeed, garbage collect*/ /* and try again ... */ /* ... before we give up */ so, instead of documenting limitations and workarounds, this issue should be fixed in GHC as well. This may help in some cases but it cannot be relied upon. Finalizers are always run in a separate thread (must be, see http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html). Thus, even if you force a GC when handles are exhausted, as hugs seems to do, there is no guarantee that by the time the GC is done the finalizers have freed any handles (assuming that the GC run really detects any handles to be garbage). Cheers Ben
[Haskell-cafe] Re: Recursion in Haskell
P. R. Stanley wrote: Chaps, is there another general pattern for mylen, head or tail?

mylen [] = 0
mylen (x:xs) = 1 + mylen xs
head [] = error "what head?"
head (x:xs) = x
tail [] = error "no tail"
tail (x:xs) = xs

There are of course stylistic variations possible, e.g. you can use case instead of pattern bindings:

mylen list = case list of
  []     -> 0
  (x:xs) -> 1 + mylen xs

As you see, this moves pattern matching from the lhs to the rhs of the equation. Another very common 'pattern' is to factor the recursion into a generic higher order function

fold op z [] = z
fold op z (x:xs) = x `op` (fold op z xs) -- parentheses not strictly necessary here, added for readability

and define mylen in terms of fold

mylen = fold (\_ n -> 1 + n) 0

You also have the possibility to use boolean guards as in

mylen xs | null xs   = 0
         | otherwise = 1 + mylen (tail xs)

(Although here we use the more primitive functions (null) and (tail) which in turn would have to be defined using pattern matching. Pattern matching is the only way to examine data of which nothing is known other than its definition.) Lastly, there are cases where you want to use nested patterns. For instance, to eliminate successive duplicates you could write

elim_dups (x:x':xs) = if x == x' then xs' else x:xs'
  where xs' = elim_dups (x':xs)
elim_dups xs = xs

Here, the first clause matches any list with two or more elements; the pattern binds the first element to the variable (x), the second one to (x'), and the tail to (xs). The second clause matches everything else, i.e. empty and one-element lists, and acts as identity on them. This pattern matching reminds me of a module on formal spec I studied at college. As long as your code doesn't (have to) use a tricky algorithm (typically if the algorithm is more or less determined by the data structure, as in the above examples) then it is really like an executable specification. Cheers Ben
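The definitions from the post, collected into one loadable module (the names shadow Prelude functions only where the post's do):

```haskell
-- Direct recursion:
mylen :: [a] -> Int
mylen []     = 0
mylen (_:xs) = 1 + mylen xs

-- The recursion pattern factored out (this is Prelude foldr):
fold :: (a -> b -> b) -> b -> [a] -> b
fold _  z []     = z
fold op z (x:xs) = x `op` fold op z xs

-- Length via fold: each element contributes 1, regardless of value.
mylen' :: [a] -> Int
mylen' = fold (\_ n -> 1 + n) 0

-- Nested patterns: eliminate successive duplicates.
elim_dups :: Eq a => [a] -> [a]
elim_dups (x:x':xs) = if x == x' then xs' else x : xs'
  where xs' = elim_dups (x':xs)
elim_dups xs = xs
```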
[Haskell-cafe] Re: Summarize of Why do I have to specify (Monad m) here again?
David Tolpin wrote: On Mon, 19 Feb 2007 02:17:34 +0400, Marc Weber [EMAIL PROTECTED] wrote: Would it make sense to specify partial type declarations? I don't need an answer right now. no, it wouldn't. I think it would, and it seems there are others who agree. See e.g. http://www.mail-archive.com/haskell%40haskell.org/msg10677.html There is even a hack that allows you to do it in Haskell98, see http://okmij.org/ftp/Haskell/partial-signatures.lhs (hey, google rulez!) Cheers Ben
Re: [Haskell-cafe] Newbie Q: GHCi: Where 'List' module is imported from?
Jules Bean wrote: Dmitri O.Kondratiev wrote: Set module here is built with list and uses among other things list comparison functions such as (==) and (<=). Q1: Where is the List module imported from? GHC Base package contains the Data.List module, not just List. List was the old name for it. Data.List is the new name. You can also use package haskell98 which gives you the original non-hierarchical standard library modules. Ben
[Haskell-cafe] Suggestion for hackage
Hi, It would be a nice feature if one could look online at the documentation of a package, i.e. w/o downloading and building the package first. For instance, haddock-generated API docs can give you a much better idea of what you can expect from a library package than the mere package description. Cheers Ben
[Haskell-cafe] Re: Implementation of scaled integers
Stefan Heinzmann wrote: is there a library for Haskell that implements scaled integers, i.e. integers with a fixed scale factor so that the scale factor does not need to be stored, but is part of the type? I dimly remember that there has been some work done on this in connection with (and by the creator of) the new time package. Can't remember any specifics, though. Ben
[Haskell-cafe] Re: Parsec and Java
Arnaud Bailly wrote: Joel Reymont wrote: Is there a Java parser implemented using Parsec? There is: http://jparsec.codehaus.org/ Jparsec is an implementation of Haskell Parsec on the Java platform. I think Joel was asking for a parser for the Java language, written in Haskell using the (original, Haskell-) Parsec library. So much for natural language and ambiguities... ;-) Ben
[Haskell-cafe] Re: State of OOP in Haskell
[EMAIL PROTECTED] wrote: Here are two surveys (somewhat outdated) on the use of formal methods in industry: http://citeseer.ifi.unizh.ch/39426.html http://citeseer.ifi.unizh.ch/craigen93international.html Both of these links are dead. Could you post author and title? Thanks Ben
[Haskell-cafe] Re: Re: nested maybes
Udo Stenzel wrote: Benjamin Franksen wrote: Udo Stenzel wrote: Sure, you're right, everything flowing in the same direction is usually nicer, and in central Europe, that order is from the left to the right. What a shame that the Haskell gods chose to give the arguments to (.) and ($) the wrong order! But then application is in the wrong order, too. Do you really want to write (x f) for f applied to x? No, doesn't follow. No? Your words: everything flowing in the same direction. Of the two definitions (f . g) x = g (f x) vs. (f . g) x = f (g x), the first one (your preferred one) turns the natural applicative order around, while the second one preserves it. Note this is an objective argument and has nothing to do with how I feel about it. Unix pipes also read from left to right, even though programs receive their arguments to the right of the program name, and that feels totally natural. I'd say what 'feels natural' is in the eye of the beholder. One can get used to almost any form of convention, notational and otherwise, however inconsistent. For instance, the Haskell convention for (.) feels natural to me, because I have been doing math for a long time and mathematicians use the same convention. OTOH, the math convention also says that the type of a function is written (ArgType -> ResultType), although (ResultType <- ArgType) would have been more logical because consistent with the application order. I am used to it, so it feels natural to me, but does that make it the better choice? Cheers Ben
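For the record, both orders are available in the standard libraries: (.) from the Prelude follows the mathematical convention, while (>>>) from Control.Arrow reads left to right, Unix-pipe style:

```haskell
import Control.Arrow ((>>>))

f, g :: Int -> Int
f = (+ 1)
g = (* 2)

-- Mathematical order: (f . g) x means f (g x).
mathStyle :: Int
mathStyle = (f . g) 3

-- Pipeline order: (f >>> g) x means g (f x).
pipeStyle :: Int
pipeStyle = (f >>> g) 3
```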
[Haskell-cafe] Re: nested maybes
Udo Stenzel wrote: Sure, you're right, everything flowing in the same direction is usually nicer, and in central Europe, that order is from the left to the right. What a shame that the Haskell gods chose to give the arguments to (.) and ($) the wrong order! But then application is in the wrong order, too. Do you really want to write (x f) for f applied to x? Ben
Re: [Haskell-cafe] Re: Channel9 Interview: Software Composability and the Future of Languages
[sorry, this was meant to go to the list] On Wednesday 31 January 2007 00:40, Bulat Ziganshin wrote: Saturday, January 27, 2007, 12:00:11 AM, you wrote: and support operational reasoning, i.e. creating and understanding programs by mentally modeling their execution on a machine. This form of reasoning appeals to 'common sense', it is familiar to almost all (even completely un-educated) people and is therefore more easily accessible to them. greatly simplifies denotational resp. equational reasoning(**), i.e. to understand a program as an abstract formula with certain logical properties; an abstract entity in its own right, independent of the possibility of execution on a machine. This way of thinking is less familiar to most people i think you are completely wrong! FP way is to represent everything as function, imperative way is to represent everything as algorithm. there is no natural thinking way, the only thing that matters is *when* the student learned the appropriate concept. What I meant is that it is more similar to the way we are used to thinking in our daily life. No one thinks about day-to-day practical problems in a formal way -- in most cases this would be a completely inappropriate approach. Formal thinking comes naturally to only a gifted few of us; most find it exceptionally hard to learn. However, I didn't mean to say that formal reasoning is something that cannot be learned. I would even say that it is easier to learn if done right from the start as something completely new instead of appealing to intuition, thus trying to connect it to the more 'natural' ways of thinking -- because the latter have to be 'unlearned' to a certain extent before the new way of thinking can take hold. all the problem of learning FP paradigm for college-finished programmers is just that their brains are filled with imperative paradigm and they can't imagine non-imperative programming. 
it was hard for me, too :) we should change college programs to make FP programming available for the masses Absolutely. Cheers Ben -- Programming = Mathematics + Murphy's Law (Dijkstra in EWD1008)
[Haskell-cafe] Re: Re[2]: Channel9 Interview: Software Composability and the Future of Languages
Bulat Ziganshin wrote: 2. it bites me too. it's why i say that C++ is better imperative language than Haskell. there are also many other similar issues, such as lack of good syntax for 'for', 'while', 'break' and other well-known statements, inability to use 'return' inside of a block and so on You forgot to mention 'goto' ;-) Ben
[Haskell-cafe] Re: State of OOP in Haskell
Yitzchak Gale wrote: Steve Downey wrote: OO, at least when done well, maps well to how people think. Um, better duck. I am afraid you are about to draw some flames on that one. I hope people will try to be gentle. No problem ;-) I'll never get tired of quoting Dijkstra; one of the things that stuck in my mind is when he argues that 'the way people normally think' may simply not be appropriate to automatic computing aka programming, and that for this reason it may be counter-productive to appeal to this usual way of thinking -- however expedient it may seem in the short run. The failure of OO (to deliver on its many promises) IMHO nicely illustrates this. Of course people /like/ to think of 'objects' and their 'behavior' etc., it /is/ a very intuitive approach, because it is the way we are used to thinking. Unfortunately that doesn't necessarily make it effective for precise reasoning about the large and complex digital systems we are constructing. It may, in fact, be more effective to short-cut all these centuries (if not millennia) old thinking habits and cut straight to the chase: see programs as formulas to be reasoned about with formal methods (such as equational reasoning, which is a particularly good fit for Haskell with its equational notation and pure functional semantics). Cheers Ben
[Haskell-cafe] Re: Channel9 Interview: Software Composability and the Future of Languages
Chris Kuklewicz wrote: Note that I have not mentioned laziness. This is because it only helps to solve problems more elegantly -- other languages can model infinite computations / data structures when it is useful to do so. Reminds me of yet another quote from Dijkstra (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/EWD1284.html): After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure. Ben
[Haskell-cafe] Re: Channel9 Interview: Software Composability and the Future of Languages
Chris Kuklewicz wrote: Aside on utterly useful proofs: When you write concurrent programs you want an API with strong and useful guarantees so you can avoid deadlocks and starvation and conflicting data reads/writes. Designing and using such an API is a reasoning exercise identical to creating proofs. Some systems make this possible and even easy (such as using STM). I have to -- partly -- disagree with your last statement. It is possible that I missed something important but as I see it, STM has by construction the disadvantage that absence of starvation is quite hard (if not impossible) to prove. A memory transaction must be re-started from the beginning if it finds (when trying to commit) that another task has modified one of the TVars it depends on. This might mean that a long transaction may in fact /never/ commit if it gets interrupted often enough by short transactions which modify one of the TVars the longer transaction depends upon. IIRC this problem is mentioned in the original STM paper only in passing ("Starvation is surely a possibility" or some such comment). I would be very interested to hear if there has been any investigation of the consequences with regard to proving progress guarantees for programs that use STM. My current take on this is that there might be possibilities to avoid this kind of starvation by appropriate scheduling. One idea is to assign time slices dynamically, e.g. to double it whenever a task must do a rollback instead of a commit (due to externally modified TVars), and to reset the time slice to the default on (successful) commit or exception. Another possibility is to introduce dynamic task priorities and to increase priority on each unsuccessful commit. (Unfortunately I have no proof that such measures would remove the danger of starvation, they merely seem plausible avenues for further research. I also don't know whether the current GHC runtime system already implements such heuristics.) 
Cheers Ben
[Haskell-cafe] Re: Channel9 Interview: Software Composability and the Future of Languages
Steve Schafer wrote: Neil Bartlett wrote: It also highlights some of the misconceptions that still exist and need to be challenged, e.g. the idea that Haskell is too hard or is impractical for real work. Haskell _is_ hard, although I don't think it's _too_ hard, or I wouldn't be here, obviously. Haskell is hard in the sense that in order to take advantage of its ability to better solve your problems, you have to THINK about your problems a lot more. Most people don't want to do that; they want the quick fix served up on a platter. And even the intermediate camp, the ones who are willing to invest some effort to learn a better way, are only willing to go so far. I agree, but I think it should be pointed out that primarily it is not Haskell which is hard, it is Programming which is. Haskell only reflects this better than the mainstream imperative languages. The latter encourage and support operational reasoning, i.e. creating and understanding programs by mentally modeling their execution on a machine. This form of reasoning appeals to 'common sense'; it is familiar to almost all (even completely un-educated) people and is therefore more easily accessible to them. Haskell (more specifically: its purely functional core) makes such operational reasoning comparatively hard(*). Instead it supports and greatly simplifies denotational resp. equational reasoning(**), i.e. to understand a program as an abstract formula with certain logical properties; an abstract entity in its own right, independent of the possibility of execution on a machine. 
This way of thinking is less familiar to most people and thus appears 'difficult' and even 'impractical' -- it requires a mental shift to a more abstract understanding.(***) What the hard-core 'common sense' type can't see and won't accept is that denotational reasoning is strictly superior to operational reasoning (at least with regard to correctness), if only because the latter invariably fails to scale with the exploding number of possible system states and execution paths that have to be taken into account for any non-trivial program. As Dijkstra once said, the main intellectual challenge for computer science resp. programming is: how not to make a mess of it. Those who don't want to think will invariably shoot themselves in the foot, sooner or later. Their programs become a complex, unintelligible mess; the preferred choice of apparently 'easier' or 'more practical' programming languages often (but not always) reflects an inability or unwillingness to appreciate the intellectual effort required to avoid building such a mess. Cheers Ben (*) The downside of which is that it is also quite hard to reason about a program's efficiency in terms of memory and CPU usage. (**) At least the 'fast and loose' version of it, i.e. disregarding bottom and seq. (***) Many Haskell newcomers have described the feeling of being 'mentally rewired' while learning programming in Haskell. IMO this is a very good sign!
[Haskell-cafe] Re: GHC concurrency runtime breaks every 497 (and a bit) days
Neil Davies wrote: In investigating ways of getting less jitter in the GHC concurrency runtime I've found the following issue: The GHC concurrency model does all of its time calculations in ticks - a tick being fixed at 1/50th of a second. It performs all of these calculations in terms of the number of these ticks since the Unix epoch (Jan 1970), which is stored as an Int - unfortunately this 2^31-bit range overflows every 497 (and a bit) days. When it does overflow, any threads on the timer queue will either a) run immediately or b) not be run for another 497 days - depending on which way the sign changes. Also any timer calculation that spans the wraparound will not be dealt with correctly. Although this doesn't happen often (the last one was around Sat Sep 30 18:32:51 UTC 2006 and the next one will not be until Sat Feb 9 21:00:44 UTC 2008) I don't think we can leave this sort of issue in the run-time system. Take a look at the definition of getTicksofDay (in base/include/HsBase.h (non-windows) / base/cbits/Win32Utils.c) and getDelay (in base/GHC/Conc.lhs) to understand the details of the issue. I don't mind having a go at the base system to remove this problem - but before I do I would like to canvass some opinions. I'll do that in a separate thread. I answer on this thread because IMO the overflow is a /much/ more serious problem than the jitter due to unnecessarily coarse rounding. Fortunately both can be solved together, by using 64-bit timestamps and the highest available resolution. In my experience avoiding overflow by using enough bits is a lot more reliable than any trick to 'handle' overflow. With 64-bit timestamps, even if the base resolution is as small as one nanosecond (which is a lot more than you get on stock hardware even if you use raw hardware timers), you still get overflow only every 585 years, which I would deem acceptable for all practical purposes. BTW, the timer resolution should be available as a (platform-dependent) constant. 
Maybe it would even be useful to define a simple external interface (to the RTS, i.e. in C) so that advanced users may plug in e.g. extra hardware timers for ultimate resolution. Cheers Ben
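The figures quoted above are easy to check (assuming a signed 31-bit tick counter at 50 ticks per second, and a full 64-bit range of nanoseconds):

```haskell
-- 2^31 ticks at 50 ticks/s, in days: the 497-day overflow period.
tickOverflowDays :: Double
tickOverflowDays = 2 ^ (31 :: Int) / 50 / 86400

-- 2^64 nanoseconds, in years: the quoted 585-year figure.
nsOverflowYears :: Double
nsOverflowYears = 2 ^ (64 :: Int) / 1e9 / 86400 / 365.25
```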
[Haskell-cafe] Re: Partial Application
Philippe de Rochambeau wrote: my original query concerning partial application was triggered by the following statement from Thompson's The Craft of Functional Programming, p. 185:

multiplyUC :: (Int, Int) -> Int
multiplyUC (x,y) = x * y

multiply :: Int -> Int -> Int
multiply x y = x * y

In the case of multiplication we can write expressions like multiply 2. When I read this, I thought that you could partially apply multiply by typing multiply 2 at the ghci prompt. (This has already been said but to re-iterate:) You can (partially apply multiply) but not by typing multiply 2 at the ghci prompt. The latter is interpreted by ghci as a command to evaluate and then /print/ the resulting value, which means it must convert it to a textual representation, using the Show class, which is normally not instantiated for function values. However, this generated an error:

interactive:1:0:
    No instance for (Show (Int -> Int))
      arising from use of `print' at interactive:1:0-9
    Possible fix: add an instance declaration for (Show (Int -> Int))
    In the expression: print it
    In a 'do' expression: print it

Just import (:m) the module Text.Show.Functions which defines a Show instance for functions.

Prelude> :m Text.Show.Functions
Prelude Text.Show.Functions> let multiply x y = x * y
Prelude Text.Show.Functions> multiply 2
<function>

Or, use let to bind the result to a variable, like

Prelude> let multiply x y = x * y
Prelude> let f = multiply 2
Prelude> f 3
6

Cheers Ben
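The same demonstration, as a compiled module rather than a ghci session (Text.Show.Functions lives in base):

```haskell
import Text.Show.Functions ()  -- orphan Show instance for functions

multiply :: Int -> Int -> Int
multiply x y = x * y

-- Partial application: fixing the first argument yields a function
-- of the remaining one, which can now also be shown (as "<function>").
double :: Int -> Int
double = multiply 2
```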
Re: [Haskell-cafe] Re: Monad Set via GADT
On Friday 12 January 2007 09:04, Simon Peyton-Jones wrote: | On 1/3/07, Roberto Zunino [EMAIL PROTECTED] wrote: | 1) Why the first version did not typecheck? | | 1) Class constraints can't be used on pattern matching. They ARE | restrictive on construction, however. This is arguably a bug in the | Haskell standard. It is fixed in GHC HEAD for datatypes declared | in the GADT way, so as not to break H98 code: | | http://article.gmane.org/gmane.comp.lang.haskell.cvs.all/29458/match=gadt+class+context | | To quote from there: I think this is stupid, but it's what H98 | says. | | Maybe it is time to consider it deprecated to follow the Haskell 98 | standard /to the letter/. GHC follows this strange standard when you write data type decls in H98 form

data Eq a => T a = C a Int | D

Here, pattern-matching on either C or D will cause an (Eq a) constraint to be required. However, GHC does *not* follow this convention when you write the data type decl in GADT-style syntax:

data T a where
  C :: Eq a => a -> Int -> T a
  D :: T a

Here, (a) you can be selective; in this case, C has the context but D does not. And (b) GHC treats the constraints sensibly: - (Eq a) is *required* when C is used to construct a value - (Eq a) is *provided* when C is used in pattern matching In short, in GADT-style syntax, GHC is free to do as it pleases, so it does the right thing. In this case, then, you can avoid the H98 bug by using GADT-style syntax. All of this is documented in the user manual. If it's not clear, please help me improve it. Crystal clear. My remark was meant merely as a general observation. Cheers Ben
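To make the point concrete, a small compilable example (requires GHC's GADTs extension): the Eq constraint on C is required at construction time and provided again when pattern matching.

```haskell
{-# LANGUAGE GADTs #-}

data T a where
  C :: Eq a => a -> Int -> T a
  D :: T a

-- No Eq constraint in the signature: matching on C supplies it.
eqStored :: T a -> a -> Bool
eqStored (C x _) y = x == y
eqStored D       _ = False
```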
[Haskell-cafe] Re: Monad Set via GADT
Jim Apple wrote:
> On 1/3/07, Roberto Zunino [EMAIL PROTECTED] wrote:
> > 1) Why the first version did not typecheck?
>
> 1) Class constraints can't be used on pattern matching. They ARE
> restrictive on construction, however. This is arguably a bug in the
> Haskell standard. It is fixed in GHC HEAD for datatypes declared in the
> GADT way, so as not to break H98 code:
>
> http://article.gmane.org/gmane.comp.lang.haskell.cvs.all/29458/match=gadt+class+context
>
> To quote from there: I think this is stupid, but it's what H98 says.

Maybe it is time to consider it deprecated to follow the Haskell 98 standard /to the letter/. The above is an example where the default (without flags) should (arguably) be the 'fixed' standard. We would need an equivalent of gcc's -pedantic flag, meaning Follow the Haskell 98 standard to the letter, even on issues where the standard is generally considered bad.

I hope this will be handled in a better way with Haskell'. It should be possible to revise the standard (every few years or so, /very/ conservatively, i.e. no extensions, etc.) so that we can eliminate 'bugs' from the language spec.

Cheers Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Re: Large data structures
Alex Queiroz wrote: On 12/12/06, Benjamin Franksen [EMAIL PROTECTED] wrote: PS: Please try to include exactly the relevant context in replies, no more, no less. Your original question (stripped down to the body of the text) would have been relevant, here, but neither 'Hello', nor 'Cheers' are worth quoting. Sorry, but I did not quote hello or cheers. Oops. My turn to say 'sorry'. No offense meant... Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] What is a hacker? [was: Mozart versus Beethoven]
Kirsten Chevalier wrote:
> On 12/12/06, Patrick Mulder [EMAIL PROTECTED] wrote:
> > PS I like the idea of a book Haskell for Hackers
>
> Maybe Haskell for People Who Want to Be Hackers?

I would never buy a book with such a title, even if I didn't have the slightest clue about programming. However Haskell for Hackers is cool.

> (Since, of course, one should never apply the term hacker to oneself.)

Who told you that? Calling oneself 'hacker' is a sign of healthy self-respect; to the contrary, I don't know anyone who would call themselves wannabe-hacker.

> I'm not sure whether it's best to aim at people who might be already
> hackers who want to learn Haskell, or people who are already programmers
> who want to be Haskell hackers, in particular. I suppose that the first
> group of people is probably larger.

Being a hacker is a matter of attitude and self-definition more than knowledge and experience. A hacker, even if young and lacking experience, reads books for hackers (if at all), not 'how do I become a hacker' books. The attitude is 'gimme the knowledge so i can go ahead and start doing real stuff', not 'oh, there is so much to learn, maybe after 10 years of study and hard work people will finally call me a hacker'.

Cheers Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Why so slow?
Lyle Kopnicky wrote: The code below is using way more RAM than it should. It seems to only take so long when I build the 'programs' list - the actual reading/parsing is fast. For a 5MB input file, it's using 50MB of RAM! Any idea how to combat this? 1) I strongly recommend to work through at least the relevant parts of the tutorial 'Write yourself a scheme in 48 hours' (google for the title). It explains how to use efficient parser combinator libraries, how to cleanly separate IO from pure computations, how to use Error monad (instead of IO) to recover from errors, etc. pp. 2) Use String or ByteString.Lazy together with readFile to read the input on demand. Else your program must suck all of the input into memory before processing can even start. (This works whenever processing of the input stream is more or less sequential). HTH Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
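As a minimal illustration of point 2 (the function name and file handling are mine, not from the thread): `readFile` returns its result lazily, so a sequential pass over the file can run without first pulling the whole file into memory, as long as nothing holds on to the entire string:

```haskell
-- Count the lines of a file without reading it all up front:
-- readFile is lazy, and lines/length consume the string sequentially.
countLines :: FilePath -> IO Int
countLines path = do
  s <- readFile path
  return (length (lines s))

main :: IO ()
main = countLines "input.txt" >>= print
```

The same shape works with Data.ByteString.Lazy.readFile for better constant factors.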
[Haskell-cafe] Re: Aim Of Haskell
Joachim Durchholz wrote: These activities are among the major reasons why I'm finally prepared to get my feet wet with Haskell after years of interested watching. I'll probably fire off a set of newbie questions for my project, though it might still take a few days to get them organized well enough to do that (and to find the time for setting up the text). Hi Jo! Welcome to the club. (I think I did my share, now and then, on c.l.f to keep up your interest... ;-) Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Writing Haskell For Dummies Or At Least For People Who Feel Like Dummies When They See The Word 'Monad'
Sebastian Sylvan wrote: Perhaps a single largish application could be the end product of the book. Like a game or something. You'd start off with some examples early on, and then as quickly as possible start working on the low level utility functions for the game, moving on to more and more complex things as the book progresses. You'd inevitably have to deal with things like performance and other real world tasks. It might be difficult to find something which would work well, though. This again reminds me of 'Write yourself a scheme in 48 hours'. It is exactly this approach, albeit on a far less ambitious level (tutorial, not book). You end up with a working scheme implementation; ok, a partial implementation missing most of the more advanced features, anyway, you get something which /really works/. I have spent quite some time adding stuff that was left out for simplicity (e.g. non-integral numbers), rewriting parts I found could be done better, added useful functionality (readline lib for input), improved the parser to be standard conform, added quasiquotation, etc. pp. Had lots of fun and learned a lot (and not only about Haskell). Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Large data structures
Alex Queiroz wrote:
> On 12/11/06, Stefan O'Rear [EMAIL PROTECTED] wrote:
> > No. Haskell's lists are linked lists, enlarge creates a single new
> > link without modifying (and copying) the original.
>
> Thanks. Is there a way to mimic this behaviour with my own code?

It is the default for any data structure you define. Data is by default represented internally as a pointer to the actual value. Otherwise recursive structures (see below for an example) would not be easily possible. And since no part of the data structure is 'mutable', different instances can share (memory-wise) as much of their structure as the implementation is able to find.

BTW, apart from the special syntax (like [a,b,c]) there is nothing special about lists. E.g. with this data type definition

    data List a = Nil | Cons a (List a)

(List a) is completely equivalent to [a].

HTH Ben

PS: Please try to include exactly the relevant context in replies, no more, no less. Your original question (stripped down to the body of the text) would have been relevant, here, but neither 'Hello', nor 'Cheers' are worth quoting.
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
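A hedged sketch of the sharing point (the function names are mine, not from the thread): prepending to such a list allocates exactly one new Cons cell, and the old list becomes the tail of the new one, shared rather than copied:

```haskell
data List a = Nil | Cons a (List a)

-- One new cell is allocated; xs itself is the tail of the result,
-- so both lists share all of xs in memory.
enlarge :: a -> List a -> List a
enlarge x xs = Cons x xs

-- Convert to an ordinary Haskell list, to show the equivalence to [a].
toOrdinary :: List a -> [a]
toOrdinary Nil         = []
toOrdinary (Cons x xs) = x : toOrdinary xs
```

`enlarge` is, of course, just `Cons` itself, which is the whole point: constructors never copy their arguments.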
Re: [Haskell-cafe] Re: Beginner: IORef constructor?
On Wednesday 06 December 2006 07:40, Bernie Pope wrote: On 05/12/2006, at 1:00 PM, Benjamin Franksen wrote: Bernie Pope wrote: If you want a global variable then you can use something like: import System.IO.Unsafe (unsafePerformIO) global = unsafePerformIO (newIORef []) But this is often regarded as bad programming style (depends who you talk to). Besides, isn't this example /really/ unsafe? I thought, at least the IORef has to be monomorphic or else type safety is lost? Perhaps your question is rhetorical, Half-way ;-) I was pretty sure but not 100%. Thanks for the nice example. Cheers, Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Typeclass question
Stefan O'Rear wrote: [...] Unfortunately, it turns out that allowing foralls inside function arguments makes typechecking much harder, in general impossible. Just a tiny correction: AFAIK, it is type /inference/ which becomes undecidable in the presence of higher rank types -- checking works just fine (it could be true that it's harder, though). Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
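For example (my own illustration, not from the thread), a rank-2 type is fine as long as it is *checked* against an explicit signature; it is only inferring such a type from scratch that is undecidable:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The forall inside the argument type must be written explicitly;
-- given the signature, GHC checks this without trouble, but it would
-- never infer it on its own.
applyToBoth :: (forall a. a -> a) -> (Int, Char) -> (Int, Char)
applyToBoth f (x, y) = (f x, f y)
```

Deleting the signature makes GHC reject the definition, which is exactly the inference-vs-checking distinction above.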
[Haskell-cafe] Re: Beginner: IORef constructor?
Bernie Pope wrote: If you want a global variable then you can use something like: import System.IO.Unsafe (unsafePerformIO) global = unsafePerformIO (newIORef []) But this is often regarded as bad programming style (depends who you talk to). Besides, isn't this example /really/ unsafe? I thought, at least the IORef has to be monomorphic or else type safety is lost? Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
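The usual folklore version of this idiom (a sketch of the commonly recommended precautions, not code from this message) pins the IORef to a monomorphic type and forbids inlining, so that exactly one reference is ever created and the polymorphism hole mentioned above is closed:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- Monomorphic signature: an IORef [Int], not an IORef [a].
-- NOINLINE ensures the unsafePerformIO is evaluated only once,
-- so every use of 'global' sees the same reference.
{-# NOINLINE global #-}
global :: IORef [Int]
global = unsafePerformIO (newIORef [])
```

Without the monomorphic signature one could write into the ref at one type and read at another, which is exactly the type-safety loss suspected above.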
[Haskell-cafe] Re: Re: The Future of MissingH
Bulat Ziganshin wrote: Monday, November 27, 2006, 1:46:34 AM, you wrote: I hate to be nitpicking but GPL is not only compatible with but encourages commerce in general and commercial software in particular. It is incompatible with proprietary software. There's a difference. of course, but on practice most of commercial software are closed-source. i personally use this license when i want to show code to the world but don't want that but will be used in commercial software (without paying royalties). This is impossible. GPL expressly allows commercial use of your software. You cannot license under GPL and at the same time disallow making money out of it, this would be incompatible. If, however, you actually meant to say 'proprietary', then I would kindly ask you to say so. Imprecise usage of the term 'commercial' as a synonym (**) for 'proprietary' only serves to promote common misconceptions. It is neither in the spirit nor the words of the GPL to be opposed to commerce (=earning money by selling work or things of value to others), and there is ample proof(*) that it isn't opposed to commerce in practice, too. And no, I don't intend to pursue this (somewhat off-) topic any further ;-) Cheers Ben (*) anyone know how much Novell paid for buying SuSE? Not that it's gotten any better since... (**) some would say 'euphemism' ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] RE: why are implicit types different? (cleanup)
S. Alexander Jacobson wrote:
> Ok, I'm not sure I understand the answer here, but how about a
> workaround. My current code looks like this:
>
>     tt = let ?withPassNet   = wpn
>              ?withPassNet'  = wpn
>              ?withPassNet'' = wpn
>          in passNet passnet [user] regImpl b
>
> The type of wpn is:
>
>     wpn :: Ev PassNet ev a -> Ev State ev a
>
> The individual implicit parameters end up arriving with concrete values
> for a. If I pass wpn to passNet explicitly, it also appears to get
> converted to a monotype on arrival at the inside of passNet. So I guess
> the real question is, how do I pass a polytype* wpn?

Wrap it inside a newtype, maybe?

Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
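A sketch of the newtype workaround suggested above (the names Poly/unPoly and the Maybe/[] instantiation are my own stand-ins, not the thread's Ev types): wrapping the function keeps the forall intact when it is passed as an ordinary argument, so the callee can use it at several types:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Wrap a polymorphic function so it survives being passed around.
newtype Poly f g = Poly { unPoly :: forall a. f a -> g a }

-- The consumer unwraps it and applies it at two different types;
-- an unwrapped argument would have been monomorphised instead.
useTwice :: Poly Maybe [] -> ([Int], [Char])
useTwice (Poly f) = (f (Just 1), f (Just 'a'))
```

The forall lives inside the newtype, so no rank-2 function type ever appears "bare" at a call site, which is what defeats the monomorphisation.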
[Haskell-cafe] Re: The Future of MissingH
Bulat Ziganshin wrote: Friday, November 24, 2006, 7:32:55 PM, you wrote: Josef Svenningsson posted a comment on my blog today that got me to thinking. He suggested that people may be intimidated by the size of MissingH, confused by the undescriptive name, and don't quite know what's in there. And I think he's right. first, is it possible to integrate MissingH inside existing core libs, i.e. Haskell libs supported by Haskell community? i think that it will be impossible if MissingH will hold its GPL status. i think that such fundamental library as MissingH should be BSDified to allow use it both in commercial and non-commercial code I hate to be nitpicking but GPL is not only compatible with but encourages commerce in general and commercial software in particular. It is incompatible with proprietary software. There's a difference. Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Priority Queue?
Ken Takusagawa wrote: Is there a Haskell implementation of an efficient priority queue (probably heap-based) out there that I can use? I do not see it in the GHC libraries. Unfortunately the base package contains only the specialized Data.Sequence and not the general annotated 2-3 finger trees, which could be easily instantiated to an efficient priority queue. I have a package lying around that I wrote 1 or 2 years ago that implements the (much more complicated) search tree version of the 2-3 finger trees. I just managed to compile it again. Can send you a tarball if you are interested. Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
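In the absence of a library, a small pairing-heap sketch (entirely my own, not the finger-tree package mentioned above) already gives a serviceable purely functional priority queue:

```haskell
-- A minimal pairing heap: O(1) insert and meld,
-- amortised O(log n) deleteMin via the two-pass merge.
data Heap a = Empty | Node a [Heap a]

insert :: Ord a => a -> Heap a -> Heap a
insert x = meld (Node x [])

meld :: Ord a => Heap a -> Heap a -> Heap a
meld Empty h = h
meld h Empty = h
meld h1@(Node x hs1) h2@(Node y hs2)
  | x <= y    = Node x (h2 : hs1)   -- smaller root wins, other heap becomes a child
  | otherwise = Node y (h1 : hs2)

-- Standard two-pass pairing of the children after the root is removed.
mergePairs :: Ord a => [Heap a] -> Heap a
mergePairs []         = Empty
mergePairs [h]        = h
mergePairs (h1:h2:hs) = meld (meld h1 h2) (mergePairs hs)

deleteMin :: Ord a => Heap a -> Maybe (a, Heap a)
deleteMin Empty       = Nothing
deleteMin (Node x hs) = Just (x, mergePairs hs)
```

For example, `deleteMin (foldr insert Empty [3,1,2])` yields the minimum 1 together with the remaining heap.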
[Haskell-cafe] Re: list monad and controlling the backtracking
isto wrote:
> Hi all,
>
> Weekly news had a link to article Local and global side effects with
> monad transformers and the following is from there (minor modification
> done):
>
>     import Control.Monad.List
>     import Control.Monad.State
>     import Control.Monad.Writer
>
>     test5 :: StateT Integer (ListT (Writer [Char])) Integer
>     test5 = do
>       a <- lift $ mlist aList
>       b <- lift $ mlist bList
>       lift $ lift $ tell ("trying " ++ show a ++ " " ++ show b ++ "\n")
>       modify (+1)
>       guard $ a+b < 5
>       return $ a+b
>
>     go5 = runWriter $ runListT $ runStateT test5 0
>
> If the aList = [1..5] as well as bList, then there will be 25 tryings.
> If aList and bList are [1..1000], there will be a lot of tryings...
> However, guard is saying that we are interested in only values whose sum
> is less than 5. Is it possible to control easily in the above example
> that when we have tried out pairs (1,1), (1,2), (1,3), (1,4), that now
> we are ready to stop trying from the bList? And then similarly when we
> finally arrive at a pair (4,1), that now we are ready to finish also
> with aList?

If I understood you correctly you seem to want a monad that offers something akin to Prolog's cut. You might want to take a look at http://okmij.org/ftp/Computation/monads.html#LogicT

Cheers Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
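For this specific numeric example there is also a cheap trick that needs no monad transformers at all (my own sketch, not from the thread): since both lists are ascending, takeWhile can cut off each level of the search as soon as the bound is exceeded:

```haskell
-- Enumerate exactly the pairs with a+b < n from two ascending lists,
-- stopping each loop early instead of trying all pairs.
pairsBelow :: Int -> [Int] -> [Int] -> [(Int, Int)]
pairsBelow n as bs =
  [ (a, b)
  | a <- takeWhile (< n) as                -- outer cut: a alone already too big
  , b <- takeWhile (\b' -> a + b' < n) bs  -- inner cut: stop bList for this a
  ]
```

With `pairsBelow 5 [1..1000] [1..1000]` only a handful of candidates are ever examined, rather than a million; it relies on the lists being sorted, which LogicT does not require.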
[Haskell-cafe] Re: Equivalent of if/then/else for IO Bool?
Dougal Stanton wrote:
> Is there some sort of equivalent of the if/then/else construct for use
> in the IO monad? For instance the following can get quite tedious:
>
>     do bool <- doesFileExist filename
>        if bool then sth else sth'
>
> Is there a more compact way of writing that? Something akin to:
>
>     condM (doesFileExist filename) (sth) (sth')
>
> Or maybe there's a more sensible way of doing the above that I've
> missed. I seem to use endless alternate conditions sometimes and there's
> bound to be a better way.

Roll your own control structures! Haskell has higher order functions for a reason. This should do the trick (untested):

    condM condAction thenBranch elseBranch = do
      bool <- condAction
      if bool then thenBranch else elseBranch

(Hack it into ghci or hugs to find out the type, it's a bit more general than what you need.)

Cheers Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] SimonPJ and Tim Harris explain STM - video
[sorry for quoting so much, kinda hard to decide here where to snip]

Cale Gibbard wrote:
> On 23/11/06, Jason Dagit [EMAIL PROTECTED] wrote:
> > A comment on that video said:
> >
> > - BEGIN QUOTE
> > It seems to me that STM creates new problems with composability. You
> > create two classes of code: atomic methods and non atomic methods.
> > Nonatomic methods can easily call atomic ones -- the compiler could
> > even automatically inject the atomic block if the programmer forgot.
> > Atomic methods and blocks cannot be allowed to call nonatomic code.
> > The nonatomic code could do I/O or other irrevocable things that
> > would be duplicated when the block had to retry.
> > END QUOTE
> >
> > I imagine an example like this (some pseudo code for a side effect
> > happy OO language):
> >
> >     class Foo {
> >       protected int counter; // assume this gets initialized to 0
> >
> >       public doSomething() {
> >         atomic {
> >           counter++;
> >           Console.Write("called doSomething execution#" + counter);
> >           // something which could cause the transaction to restart
> >         }
> >       }
> >
> >       public doOtherThing() {
> >         atomic {
> >           doSomething();
> >           // something which could cause the transaction to restart
> >         }
> >       }
> >     }
> >
> > Now imagine doSomething gets restarted, then we see the console
> > output once each time and counter gets incremented. So one solution
> > would be to move the side effects (counter++ and the console write)
> > to happen before the atomic block. This works for doSomething, but
> > now what if we called doOtherThing instead? We're back to having the
> > extra side-effects from the failed attempts at doSomething, right?
> > We just lost composability of doSomething?
> >
> > I'm assuming counter is only meant to be incremented once per
> > successful run of doSomething and we only want to see the output to
> > the log file once per successful run, but it needs to come before the
> > log output inside doSomething so that the log makes sense. I realize
> > STM is not a silver bullet, but it does seem like side-effects do not
> > play nicely with STM. What is the proposed solution to this? Am I
> > just missing something simple?
> > Is the solution to make it so that Console.Write can be rolled back
> > too?
>
> The solution is to simply not allow side effecting computations in
> transactions. They talk a little about it in the video, but perhaps
> that's not clear. The only side effects an atomic STM transaction may
> have are changes to shared memory. Another example in pseudocode:
>
>     atomic $ do
>       x <- launchMissiles
>       if x > 5 then retry
>
> This is obviously catastrophic. If launchMissiles has the side effect
> of launching a salvo of missiles, and then the retry occurs, it's
> unlikely that rolling back the transaction is going to be able to put
> them back on the launchpad. Worse yet, if some variable read in
> launchMissiles changes, the transaction would retry, possibly causing a
> second salvo of missiles to be launched. So you simply disallow this.
> The content of a transaction may only include reads and writes to
> shared memory, along with pure computations. This is especially easy in
> Haskell, because one simply uses a new monad STM, with no way to lift
> IO actions into that monad, but atomically :: STM a -> IO a goes in the
> other direction, turning a transaction into IO. In other languages,
> you'd want to add some static typechecking to ensure that this
> constraint was enforced.

This is of course the technically correct answer. However, I suspect that it may not be completely satisfying to the practitioner. What if you want or even need your output to be atomically tied to a pure software transaction? One answer is in fact to make it so that Console.Write can be rolled back too. To achieve this one can factor the actual output to another task and inside the transaction merely send the message to a transactional channel (TChan):

    atomic $ do
      increment counter
      counterval <- readvar counter
      sendMsg msgChan ("called doSomething execution#" ++ show counterval)
      -- something which could cause the transaction to restart

Another task regularly takes messages from the channel and actually outputs them.
Of course the output will be somewhat delayed, but the order of messages will be preserved between tasks sending to the same channel. And the message will be sent if and only if the transaction commits. Unfortunately I can't see how to generalize this to input as well...

Cheers Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
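In real Haskell (using the stm package) the pattern sketched above comes out roughly like this -- a minimal version of my own, with the name `bump`, the message text, and the trivial logger thread all assumptions; error handling and shutdown are omitted:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (forever)

-- Increment the counter and queue a log message in ONE transaction:
-- the message becomes visible to the logger only if the whole
-- transaction commits, so a retry produces no duplicate output.
bump :: TVar Int -> TChan String -> STM ()
bump counter logChan = do
  n <- readTVar counter
  writeTVar counter (n + 1)
  writeTChan logChan ("called doSomething execution# " ++ show (n + 1))

main :: IO ()
main = do
  counter <- newTVarIO 0
  logChan <- newTChanIO
  -- The separate task that actually performs the IO:
  _ <- forkIO $ forever $ atomically (readTChan logChan) >>= putStrLn
  atomically (bump counter logChan)
  threadDelay 100000  -- crude: give the logger a chance to drain the channel
```

The logger thread is the "other task" from the message above; everything inside `bump` stays within STM.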
[Haskell-cafe] Re: [Haskell] Re: SimonPJ and Tim Harris explain STM - video
Hi Liyang HU you wrote:
> On 23/11/06, Benjamin Franksen [EMAIL PROTECTED] wrote:
> > One answer is in fact to make it so that Console.Write can be rolled
> > back too. To achieve this one can factor the actual output to another
> > task and inside the transaction merely send the message to a
> > transactional channel (TChan):
>
> So, you could simply return the console output as (part of) the result
> of the atomic action. Wrap it in a WriterT monad transformer, even.

But this would break atomicity, wouldn't it? Another call to doSomething from another task could interrupt before I get the chance to do the actual output. With a channel whatever writes will happen in the same order in which the STM actions commit (which coincides with the order in which the counters get incremented).

> > Another task regularly takes messages from the channel
>
> With STM, the outputter task won't see any messages from the channel
> until your main atomic block completes, after which you're living in
> IO-land, so you might as well do the output yourself.

Yeah, right. A separate task might still be preferable, otherwise you have to take care not to forget to actually do the IO after each transaction. I guess it even makes sense to hide the channel stuff behind some nice abstraction, so inside the transaction it looks similar to a plain IO action:

    output port msg

The result is in fact mostly indistinguishable from a direct IO call due to the fact that IO is buffered in the OS anyway.

> Pugs/Perl 6 takes the approach that any IO inside an atomic block
> raises an exception.
>
> > Unfortunately I can't see how to generalize this to input as well...
>
> The dual of how you described the output situation: read a block of
> input before the transaction starts, and consume this during the
> transaction. I guess you're not seeing how this generalises because
> potentially you won't know how much of the input you will need to read
> beforehand... (so read all available input?(!)
> You have the dual situation in the output case, in that you can't be
> sure how much output it may generate / you will need to buffer.)

You say it. I guess the main difference is that I have a pretty good idea how much data is going to be produced by my own code, and if it's a bit more than I calculated then the whole process merely uses up some more memory, which is usually not a big problem. However, with input things are different: in many cases the input length is not under my control and could be arbitrarily large. If I read until my buffer is full and I still haven't got enough data, my transaction will be stuck with no way to demand more input. (however, see below)

    input <- hGetContents file
    atomic $ flip runReaderT input $ do
      input <- ask
      -- do something with input
      return 42

(This is actually a bad example, since hGetContents reads the file lazily with interleaved IO...) In fact reading everything lazily seems to be the only way out, if you don't want to have arbitrary limits for chunks of input. OTOH, maybe limiting the input chunks to some maximum length is a good idea regardless of STM and whatnot. Some evil data source may want to crash my process by making it eat more and more memory... So, after all you are probably right and there is an obvious generalization to input. Cool.

Cheers Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: jhc, whole program optimizing compiler.
[EMAIL PROTECTED] wrote: Do anyone had any experience with JHC? I tried to install it second time and again get an error during library build. It's a pity, we need a speed in our very lazy code. ;) I had the same problem and asked John. He explained why and told me how to proceed: that is most likely because the format of the 'ho' and 'hl' files changed recently. you will need to delete all old versions with 'make clean-ho' and rm *.hl to get rid of them. I think the pre-compiled libraries on the site are up to date if you want to get those. HTH Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Re: Debugging partial functions by the rules
Daniel, you wrote: I suspect I would be classified as a newbie relative to most posters on this list but here's my thoughts anyway... [...] One of my initial responses to haskell was disappointment upon seeing head, fromJust and the like. 'Those look nasty', I sez to meself. From day one I've tried to avoid using them. Very occasionally I do use them in the heat of the moment, but it makes me feel unclean and I end up having to take my keyboard into the bathroom for a good scrubbing with the sandsoap. I completely agree and couldn't have said it in any better way (including the relative newbie part). Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Re: Debugging partial functions by the rules
John Hughes wrote:
> From: Robert Dockins [EMAIL PROTECTED]
>
> > It seems to me that every possible use of a partial function has some
> > (possibly imagined) program invariant that prevents it from failing.
> > Otherwise it is downright wrong. 'head', 'fromJust' and friends don't
> > do anything to put that invariant in the program text.
>
> Well, not really. For example, I often write programs with command line
> arguments, that contain code of the form
>
>     do ...
>        [a,b] <- getArgs
>        ...
>
> Of course the pattern match is partial, but if it fails, then the
> standard error message is good enough. This applies to throw away code,
> of course, and if I decide to keep the code then I sooner or later
> extend it to fix the partiality and give a more sensible error message.
> But it's still an advantage to be ABLE to write the more concise, but
> cruder version initially.
>
> This isn't a trivial point. We know that error handling code is a major
> part of software cost--it can even dominate the cost of the correct
> case code (by a large factor). Erlang's program for the correct case
> strategy, coupled with good fault tolerance mechanisms, is one reason
> for its commercial success--the cost of including error handling code
> *everywhere* is avoided. But this means accepting that code *may* very
> well fail--the failure is just going to be handled somewhere else.
> Haskell (or at least GHC) has good exception handling mechanisms too.
> We should be prepared to use them, and let it fail when things go
> wrong. The savings of doing so are too large to ignore.

But note that Erlang will give you a stack trace for unhandled exceptions which Haskell (currently?) doesn't/can't. Also, I remember an Erlang expert (Ulf Wiger?) stating recently that 'catching and re-throwing exceptions in most cases tends to hide program errors rather than avoid them' (quoted non-verbatim from memory, I can't recall the name of the paper).
Lastly, Erlang's program for the correct case is not meant for things like input of data from sources outside the program's control, but only applies to not-checking for program internal invariants. I would be very surprised if there were Erlang experts encouraging throwaway code such as your example above. Cheers, Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
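The non-throwaway version of the getArgs example might look like this (my own sketch; the pure `parseArgs` split and the usage text are assumptions, not code from the thread) -- the partiality is pushed into a total function whose failure case carries a sensible message:

```haskell
import System.Environment (getArgs, getProgName)
import System.Exit (exitFailure)
import System.IO (hPutStrLn, stderr)

-- Total: every input list is handled, and the error case is explicit.
parseArgs :: [String] -> Either String (String, String)
parseArgs [a, b] = Right (a, b)
parseArgs _      = Left "expected exactly two arguments"

main :: IO ()
main = do
  args <- getArgs
  case parseArgs args of
    Right (a, b) -> putStrLn (a ++ " " ++ b)   -- the "correct case"
    Left err     -> do
      prog <- getProgName
      hPutStrLn stderr (prog ++ ": " ++ err)
      exitFailure
```

Keeping `parseArgs` pure also makes the argument handling testable without running the program.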
[Haskell-cafe] Re: Re: non-total operator precedence order (was:Fractional/negative fixity?)
Henning Thielemann wrote: On Fri, 10 Nov 2006, Benjamin Franksen wrote: Although one could view this as a bug in the offending module it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/. It's the same as with instance declarations, isn't it? I don't want to defend the problems arising with todays anonymous instance declarations, Right. However, with instances I can imagine solutions that avoid naming them, that is, I could imagine to write something like select instance C T1 T2 ... Tn from module M or import M hiding (instance C T1 T2 ... Tn, ) Such a feature could prove extremely useful in practice. Disclaimer: I am almost sure that there is something I have overlooked that makes such a simple solution impossible, otherwise it would have been proposed and implemented long ago... but I think a simple error report is better than trying to solve the problem automatically and thus hide it from the programmer. I agree 100% that error is better than silently change (fix) semantics. However the fact that there is currently no way to manually resolve instance clashes coming from imported (library) modules is really problematic, IMHO. I think the only reason this hasn't yet produced major upheaval is that Haskell community is still quite small so library writers can still have most of the field in their eyeview, so to speak. If Haskell libraries were written and used in multitudes such as seen e.g. on CPAN, then the probability of conflicting instances would be a lot greater, in turn causing many libraries to be incompatible with each other. IMHO, this must be fixed before Haskell reaches even a fraction of that level of popularity. Non-total precedence order will give us more potential incompatibilities that the programmer has no way of resolving satisfactorily, so I'd rather stick with the current system, however limited. (And yes, I /have/ changed my mind on this. 
I'd /love/ to be convinced that this is not really going to be a problem but I am not and I hate it.) Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Re: non-total operator precedence order (was:Fractional/negative fixity?)
Jan-Willem Maessen wrote:
> On Nov 9, 2006, at 7:16 PM, Benjamin Franksen wrote:
> > Carl Witty wrote:
> > > If you have operators op1 and op2, where the compiler sees
> > > conflicting requirements for the precedence of op1 and op2, then
> > > they are treated as non-associative relative to each other: the
> > > expression a op1 b op2 c is illegal, and the programmer must
> > > instead write (a op1 b) op2 c or a op1 (b op2 c)
> >
> > It's a possibility. However, I fear that such conflicting precedences
> > might not come in nice little isolated pairs. For instance, each
> > operator that is in the same precedence class as op1 (i.e. has been
> > declared as having equal precedence) will now be 'incompatible' with
> > any that is in the same class as op2, right?
>
> Well, look at it from the perspective of the reader. Does the reader of
> your code know beyond a shadow of a doubt what the intended precedence
> will be in these cases? If not, there should be parentheses
> there---quite apart from what the parser may or may not permit you to
> do. If the parser can't figure it out, you can bet your readers will
> have trouble as well.

Imagine op1=(+), op2=(*). Would you think that it is acceptable if any wild module can come along and destroy the relative precedence order everyone expects to hold between those two? For this to happen it would be enough if M1 says

    prec (<+>) = prec (+)
    prec (<*>) = prec (*)

while M2 says

    prec (<>) = prec (<*>)

and M3

    prec (<>) = prec (<+>)

All modules M1, M2, and M3, when viewed independently, and even when viewed in pairwise combination, don't do anything bad. It is only the combination of all three that causes the expression 3 + 4 * 3 to become a syntax error!

Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Re[2]: Fractional/negative fixity?
Nicolas Frisby wrote:
> I don't see how it's too complex. Isn't
>
>     infixl ??
>     prec ?? < $
>     (??) = whenOperator
>
> exactly what you want to say?

Yes. I'd add that the system should (of course) take the transitive closure over all explicitly stated precedence relations. That really cuts down the number of cases one needs to specify. For instance, we'll want the Prelude operators to retain their relative precedences, so Prelude will contain (using a slightly changed syntax):

    prec ($ $! seq) < (>>= =<<) < (||) < (&&) < (== /= < <= > >=) < (:)
    prec (:) < (+ -) < (* / quot rem div mod) < (^ ^^ **) < (.)

(Syntax summary:
  * ops with equal precedence are grouped by parentheses which are mandatory even if they contain only a single operator
  * precedence relations may use '<' or '>' between equality groups
  * no commas between operators are necessary if they are separated by whitespace
  * backticks for operators with function names can be left out)

Fixity would have to be declared separately (using 'infixl' or 'infixr'; or whatever). Whenever we would currently declare an operator as having precedence N, we'd now declare it to have precedence equal to one or more operators from the corresponding precedence group, e.g.

    prec (<+> +)

or, if you like

    prec (<+> + -)

However

    prec (<+> + *)

would be an error because (*) and (+) have previously been declared to have different precedence.

Compilers can support a special command line switch (--dump-precedences) (and interpreters an interactive command) to display the precedences of all operators in scope in some module. (The latter would be quite a useful addition even with the current numerical precedence system.)

The more I think about it the more I like the idea.

Cheers, Ben
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Fractional/negative fixity?
Jón Fairbairn wrote:
> Syntax 1, based on Phil Wadler's improvement of my old proposal. The
> precedence relation is a preorder. [...]
>     infix {ops_1; ops_2; ...; ops_n}
> The alternative syntax is exemplified thus:
>     infix L + - (L * / (R ^))
> [...]

I think both ways (I like the second one more) are a lot better than precedence numbers, whether they be fractional or integral. Let us add compiler/interpreter support for querying known precedence/fixity relations and it's (almost) perfect.

Cheers
Ben
[Haskell-cafe] non-total operator precedence order (was:Fractional/negative fixity?)
Henning Thielemann wrote:
> Maybe making fixity declarations like type class instance declarations
> is good.

I thought so too at first but, having thought about it for a while, I now see this will cause /major/ problems.

The precedence relations as declared explicitly by the programmer must form a DAG, with the vertices being the operator classes with equal precedence. There are two ways you can break the DAG: by introducing a 'smaller' or 'larger' relation when another module has already declared the operators to have equal precedence (resp. the other way around); or by introducing a cycle. Both can be caused simply by importing yet another module.

I think it would be unacceptable not to provide some way for the programmer to resolve such conflicts. One way to do so would be to give an explicit order of 'precedence declaration priority' (PDP) for conflicting modules. Any rule introduced by a module with a lower PDP would be dropped (in the order of appearance) if it causes the DAG to break. Simple to use, but the resulting precedence order would be quite hard to predict!

Another way would be to hide precedence relation(s) in the import declaration. However, this is practical only if precedence declarations have a name we can refer to.

The cleanest variant would probably be to tear holes in the graph by removing all relations to a particular operator from the graph, making it an isolated vertex, and then re-introducing new precedence relations for this operator. You might have to do this with several operators, and it would be quite difficult to find the minimal set of operators that together will do the trick (i.e. restore the DAG). Even if you manage to find out, it could well be that by removing an operator from the graph you accidentally remove many other operators, too, for instance if all these other operators have been declared to have the same precedence as the one you removed.
I am not a graph specialist; maybe there exist semi-automatic solutions for such problems in the literature, but I know of none, and it would in any case be quite a 'heavy' extension. All in all, I must agree with Nils and say that I can't see a light-weight and elegant way to make this work. Too bad, really.

Ben
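For what it's worth, the basic consistency check discussed here is ordinary cycle detection on the class graph. A minimal sketch (names and the string representation of classes are mine):

```haskell
import Data.List (nub)

type Class = String

-- each edge (a, b) declares class a below class b; this set has a cycle
edges :: [(Class, Class)]
edges = [("add", "mul"), ("mul", "pow"), ("pow", "add")]

-- every class reachable from 'start' by following edges (fixpoint search)
reachable :: [(Class, Class)] -> Class -> [Class]
reachable es start = go [start]
  where
    go seen =
      let next = nub [ b | (a, b) <- es, a `elem` seen, b `notElem` seen ]
      in if null next then seen else go (seen ++ next)

-- a cycle exists iff some edge's source is reachable from its target
hasCycle :: [(Class, Class)] -> Bool
hasCycle es = or [ a `elem` reachable es b | (a, b) <- es ]

main :: IO ()
main = print (hasCycle edges, hasCycle (init edges))
```

Dropping the last edge restores the DAG, which is exactly the "tear a hole in the graph" repair described above, just done by hand.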
[Haskell-cafe] Re: non-total operator precedence order (was:Fractional/negative fixity?)
Carl Witty wrote:
> On Thu, 2006-11-09 at 22:20 +0100, Benjamin Franksen wrote:
>> Henning Thielemann wrote:
>>> Maybe making fixity declarations like type class instance
>>> declarations is good.
>> I thought so too at first but, having thought about it for a while, I
>> now see this will cause /major/ problems. The precedence relations as
>> declared explicitly by the programmer must form a DAG, with the
>> vertices being the operator classes with equal precedence. There are
>> two ways you can break the DAG: by introducing a 'smaller' or 'larger'
>> relation when another module has already declared them to have equal
>> precedence (resp. the other way around); or by introducing a cycle.
>> Both can be caused simply by importing yet another module. I think it
>> would be unacceptable not to provide some way for the programmer to
>> resolve such conflicts.
> [ ... possibilities for resolving conflicts omitted ... ]
> Another possibility is: If you have operators op1 and op2, where the
> compiler sees conflicting requirements for the precedence of op1 and
> op2, then they are treated as non-associative relative to each other:
> the expression a op1 b op2 c is illegal, and the programmer must
> instead write (a op1 b) op2 c or a op1 (b op2 c)

It's a possibility. However, I fear that such conflicting precedences might not come in nice little isolated pairs. For instance, each operator that is in the same precedence class as op1 (i.e. has been declared as having equal precedence) will now be 'incompatible' with any that is in the same class as op2, right? It gets worse if the conflict creates a cycle in a chain of large operator classes. Thus one single bad declaration can tear a gaping hole into an otherwise perfectly nice and consistent DAG of precedence order relations, possibly invalidating a whole lot of code.
Although one could view this as a bug in the offending module, it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/.

Ben
[Haskell-cafe] Re: Guards with do notation?
Joachim Breitner wrote:
> Hi,
> On Tuesday, 2006-10-24 at 00:44 +0300, Misha Aizatulin wrote:
>> hello all, why is it not possible to use guards in do-expressions like
>>     do (a, b) | a == b <- getPair
>>        return "a and b are equal"
> Probably because it is not well-defined for all Monads what a failure
> is, i.e. what to do in the other case. Or something. Just my guess.

No, fail is indeed a method of class Monad, and it is there exactly for this reason, i.e. because pattern matching may fail (even without guards, think of

    do Just a <- ...

). The restriction is there because guards are not allowed in lambda expressions, for which do-notation is merely syntactic sugar. (Some people have argued for lifting this restriction in Haskell', see http://thread.gmane.org/gmane.comp.lang.haskell.prime/1750/focus=1750)

HTH
Ben
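The point about fail can be seen directly in a small example (mine, not from the thread): a refutable pattern on the left of <- desugars to a case expression whose fallthrough branch calls fail, which for Maybe produces Nothing.

```haskell
-- relies on Maybe's fail (today: MonadFail) being Nothing
firstOfEach :: [String] -> Maybe String
firstOfEach = mapM firstChar
  where
    firstChar s = do
      (c:_) <- Just s   -- refutable pattern: a failed match calls fail
      return c

main :: IO ()
main = do
  print (firstOfEach ["abc", "de"])  -- the pattern matches every string
  print (firstOfEach ["abc", ""])    -- fails on "", so the whole thing fails
```

The first call succeeds with the initial characters; the second returns Nothing because the empty string defeats the (c:_) pattern.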
[Haskell-cafe] Re: Exception: Too many open files
Bas van Dijk wrote:
> On Monday 23 October 2006 21:50, Tomasz Zielonka wrote:
>>     unsafeInterleaveMapIO f (x:xs) = unsafeInterleaveIO $ do
>>       y <- f x
>>       ys <- unsafeInterleaveMapIO f xs
>>       return (y : ys)
>>     unsafeInterleaveMapIO _ [] = return []
> Great it works! I didn't know about unsafeInterleaveIO [1]. Why is it
> called 'unsafe'?

Because it causes pure code to perform side-effects (= IO), albeit in a controlled manner, so it's not as bad as unsafePerformIO. For instance, using getContents you get a string (list of chars) with the property that evaluating subsequent elements of the list causes IO to happen (in this case reading another character from stdin). Thus, unsafeInterleaveIO is safe only if it is not observable (from inside the program) when exactly the IO gets performed.

Ben
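The observability point can be made concrete with an IORef (my own example, not from the thread): the interleaved action runs only when its result is demanded, and the IORef lets the program watch that happen.

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- returns the counter before and after the lazy value is demanded
runDemo :: IO (Int, Int)
runDemo = do
  ref <- newIORef 0
  lazyVal <- unsafeInterleaveIO $ do
    modifyIORef ref (+ 1)         -- the observable side effect
    return "produced"
  before <- readIORef ref         -- still 0: nothing demanded yet
  _ <- return $! length lazyVal   -- demanding the value triggers the IO
  after <- readIORef ref          -- now 1
  return (before, after)

main :: IO ()
main = runDemo >>= print
```

This prints (0,1): the side effect is delayed past the first read of the counter, which is exactly the kind of observation that makes the function 'unsafe' when it matters.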
[Haskell-cafe] Re: Re: Guards with do notation?
Joachim Breitner wrote:
> On Tuesday, 2006-10-24 at 12:48 +0200, Benjamin Franksen wrote:
>>> On Tuesday, 2006-10-24 at 00:44 +0300, Misha Aizatulin wrote:
>>>> hello all, why is it not possible to use guards in do-expressions
>>>> like
>>>>     do (a, b) | a == b <- getPair
>>>>        return "a and b are equal"
>>> Probably because it is not well-defined for all Monads what a failure
>>> is, i.e. what to do in the other case. Or something. Just my guess.
>> No, fail is indeed a method of class Monad, and it is there exactly
>> for this reason, i.e. because pattern matching may fail (even without
>> guards, think of do Just a <- ...). The restriction is there because
>> guards are not allowed in lambda expressions, for which do-notation is
>> merely syntactic sugar. (Some people have argued for lifting this
>> restriction in Haskell', see
>> http://thread.gmane.org/gmane.comp.lang.haskell.prime/1750/focus=1750)
> Then why is the 'guard' function, which can be used in a way to
> implement what Misha wants, only available in MonadPlus, and not in
> Monad? This seems to be inconsistent.

Anyway, the decision to include fail in class Monad (instead of using MonadZero) has been criticized by far more competent people than me, see this thread http://thread.gmane.org/gmane.comp.lang.haskell.cafe/15656/focus=15666

Ben
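For comparison, here is guard doing the kind of filtering Misha wanted, in the list monad (example mine): guard needs MonadPlus precisely because a failed condition must produce the monad's zero, which for lists is [].

```haskell
import Control.Monad (guard)

-- classic use of guard in the list monad: prune a generate-and-test search
pythagorean :: [(Int, Int, Int)]
pythagorean = do
  c <- [1 .. 20]
  b <- [1 .. c]
  a <- [1 .. b]
  guard (a * a + b * b == c * c)   -- a failed guard yields [], killing the branch
  return (a, b, c)

main :: IO ()
main = print (take 2 pythagorean)
```

The first two results are (3,4,5) and (6,8,10); in a monad without a zero there is simply nothing for a failed guard to return, which is Joachim's point.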
[Haskell-cafe] Re: [off-topic / administrative] List Reply-to
Antti-Juhani Kaijanaho wrote:
> Robert Dockins wrote:
>> I think (pure speculation) the haskell.org mail server is set up to
>> omit people from mail it sends if they appear in the To: or Cc: of the
>> original mail.
> Yes, this is a feature of recent Mailmans.
>> Finally, I agree that reply-to munging is a bad idea, but I don't
>> think appealing to a definition of 'reasonable mailer' that doesn't
>> match a large portion of mail clients currently in the wild is a good
>> way to argue the point.
> Gnus might have been the first one to have it, but mutt (very popular
> in hackerdom) was perhaps the one that popularized it.

I am currently using Mozilla Thunderbird, for which it is available as an extension (unfortunately, it also requires a patch for Thunderbird; but Debian sid has already applied it). KMail (at least in KDE 3.5.4) has it, too.

Ben
[Haskell-cafe] Re: [Offtopic] Re: Re: A better syntax for qualified operators?
Cale Gibbard wrote:
>> What a beautiful world this could be... ;-)) (*)
>> Cheers, Ben
>> (*) Donald Fagen (forgot the name of the song)
> I think I.G.Y. (International Geophysical Year) is it:
>
>     On that train all graphite and glitter
>     Undersea by rail
>     Ninety minutes from New York to Paris
>     (More leisure time for artists everywhere)
>     A just machine to make big decisions
>     Programmed by fellows with compassion and vision
>     We'll be clean when their work is done
>     We'll be eternally free yes and eternally young
>     What a beautiful world this will be
>     What a glorious time to be free

Many thanks for digging it out. Hey, it fits some of the discussions on this list even better than I remembered ;-)

Cheers, Ben
[Haskell-cafe] Re: [newbie] How to test this function?
Bruno Martínez [EMAIL PROTECTED] writes:
> On Thu, 21 Sep 2006 15:12:07 -0300, Benjamin Franksen
> [EMAIL PROTECTED] wrote:
> OK. Thanks. I didn't find that one because it's not offered as an
> indentation option in emacs haskell mode. Emacs is evil!

David House wrote:
> I'll ignore the throwaway flamebait there ;)

Jón Fairbairn wrote:
> That's a great exaggeration

Of course. I am sorry, couldn't resist the urge, if you know what I mean. Oh, and I forgot the ubiquitous smiley, so here goes: ;-)

Cheers, Ben
[Haskell-cafe] Re: [newbie] How to test this function?
Bruno Martínez wrote:
> On Thu, 21 Sep 2006 01:52:38 -0300, Donald Bruce Stewart
> [EMAIL PROTECTED] wrote:
>>> First, how do I fix the indentation of the if then else?
>>     getList = find 5
>>       where find 0 = return []
>>             find n = do
>>               ch <- getChar
>>               if ch `elem` ['a'..'e']
>>                 then do tl <- find (n-1)
>>                         return (ch : tl)
>>                 else find n
> OK. Thanks. I didn't find that one because it's not offered as an
> indentation option in emacs haskell mode.

Emacs is evil! It also inserts random tab characters into your code just to save a few space bytes. Tends to completely trash indentation, e.g. when pasting code into mails etc.

Ben
[Haskell-cafe] Re: Re[2]: what is a difference between existential quantification and polymorhic field?
Bulat Ziganshin wrote:
> Hello Ross,
> Thursday, September 21, 2006, 12:55:40 PM, you wrote:
>>>     data Ex = forall a. Num a => Ex a
>>> and
>>>     data Po = Po (forall a. Num a => a)
>> Consider the types of the constructors:
>>     Ex :: forall a. (Num a) => a -> Ex
>>     Po :: (forall a. (Num a) => a) -> Po
> sorry, Ross, can you give me a more detailed explanation? it seems that
> Po argument is existential by itself,

The Po argument is a polymorphic function, not an existential. As I understand it, the constructor Ex _is_ a polymorphic function, whereas the constructor Po (only) _takes_as_argument_ a polymorphic function. In the latter case you store a polymorphic function inside your data, whereas in the former case you construct (monomorphic) data in a polymorphic way, which means that during the process you lose the concrete type of the argument.

Ben
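The contrast can be made concrete (function names are mine; the data declarations are the ones from the thread): the value inside Ex admits only Num operations because its type is hidden, while the value inside Po can be instantiated at any Num type the consumer picks.

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

data Ex = forall a. Num a => Ex a     -- hides which Num type is inside
data Po = Po (forall a. Num a => a)   -- stores a value usable at every Num type

-- with Ex we may only apply Num operations to the hidden value
addOne :: Ex -> Ex
addOne (Ex x) = Ex (x + 1)

-- with Po the same stored value can be used at two different types
atTwoTypes :: Po -> (Int, Double)
atTwoTypes (Po x) = (x, x)

main :: IO ()
main = print (atTwoTypes (Po 3))
```

Note that atTwoTypes is impossible to write for Ex: once the constructor has been applied, the concrete type of the argument is gone for good.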
[Haskell-cafe] Re: foreach
Brandon Moore wrote:
> Couldn't '\' delimit a subexpression, as parentheses do? Would there be
> any ambiguity in accepting code like
>     State \s -> (s, s)
> instead of requiring
>     State $ \s -> (s, s),
> or taking
>     main = do
>       args <- getArgs
>       foreach args \arg -> do
>         foreach [1..3] \n -> do
>           putStrLn ((show n) ++ ") " ++ arg)
> It would be a bit odd to have a kind of grouping that always starts
> explicitly and ends implicitly, but other than that it seems pretty
> handy, harmless, and natural (I know I've tried to write this sort of
> thing often enough)

Sounds like an extremely good idea to me.

Ben
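For reference, the foreach used in the example is definable today as flip mapM_, though without the proposed '\'-grouping the lambdas still need the $ the post wants to drop (sketch mine; the demo collects its lines in an IORef so the effect is easy to inspect):

```haskell
import Data.IORef

-- foreach as used in the post: mapM_ with its arguments flipped
foreach :: Monad m => [a] -> (a -> m b) -> m ()
foreach = flip mapM_

-- collect the generated lines so the effect is easy to inspect
demo :: IO [String]
demo = do
  out <- newIORef []
  foreach ["alpha", "beta"] $ \arg ->
    foreach [1 .. 3 :: Int] $ \n ->
      modifyIORef out (++ [show n ++ ") " ++ arg])
  readIORef out

main :: IO ()
main = demo >>= mapM_ putStrLn
```

This produces the six numbered lines ("1) alpha" through "3) beta"); the only syntactic cost relative to the proposal is the two $ signs before the lambdas.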