[Haskell-cafe] ANN: zip-archive 0.0
I've written a library, zip-archive, for dealing with zip archives. Haddock documentation (with links to source code): http://johnmacfarlane.net/zip-archive/ Darcs repository: http://johnmacfarlane.net/repos/zip-archive/ It comes with an example program that duplicates some of the functionality of 'zip' (configure with '-fexecutable' to build it). I intend to put it on HackageDB, but I thought I'd get some feedback first. Bug reports, patches, and suggestions on the API are all welcome. John ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
John Meacham wrote: I forgot who came up with the original ACIO idea, but I'd give them props in the manual if they wish. I think this is based on Ian Stark's message: http://www.haskell.org/pipermail/haskell-cafe/2004-November/007664.html

Yeah, this sounds like a great idea. There were a whole lot of issues dealing with finalizers and concurrency, and restricting them in some way similar to ACIO might be good... However, you want something a little weaker than ACIO, I think. It must satisfy the ACIO conditions, but _may_ assume its argument (the item being collected) is never referenced again. Hence something like 'free' is okay, which wouldn't be if other references to the object exist. Do you think that is 'formal' enough of a description? It seems clear enough if ACIO is well defined, which I think it is.

Yes, now I cast my mind back, that was something that was troubling me. Clearly the one thing you're most likely to want to do in a finaliser is free some external resource, which certainly wouldn't be ACIO ordinarily. But as you say, giving sane semantics and type safety to finalisers is very tricky indeed. I can't help thinking that semantically finaliser execution should be treated like some kind of external event handling, like an interrupt. Not sure what that should be properly, but I think finalisers should be the same. But from a top level (aThing <- someACIO) point of view, if we're going to say that it doesn't matter if someACIO is executed before main is entered (possibly even at compile time) or on demand, then we clearly don't want to observe any difference between the latter case and the former (if aThing becomes garbage without ever being demanded). Maybe it would be safest to just say anything with a finaliser can't be created at the top level. We can always define an appropriate top level get IO action using runOnce or whatever.
Regards -- Adrian Hey
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
On Tue, Aug 26, 2008 at 12:07 AM, Adrian Hey [EMAIL PROTECTED] wrote:
> But from a top level (aThing <- someACIO) point of view, if we're going to say that it doesn't matter if someACIO is executed before main is entered (possibly even at compile time) or on demand, then we clearly don't want to observe any difference between the latter case and the former (if aThing becomes garbage without ever being demanded). Maybe it would be safest to just say anything with a finaliser can't be created at the top level. We can always define an appropriate top level get IO action using runOnce or whatever.

I've been wondering: is there any benefit to having top-level ACIO'd <- instead of just using runOnce (or perhaps oneshot) as the primitive for everything? For example:

oneshot uniqueRef :: IO (MVar Integer)
uniqueRef = newMVar 0

It was also suggested in that wiki page: http://haskell.org/haskellwiki/Top_level_mutable_state#Proposal_4:_Shared_on-demand_IO_actions_.28oneShots.29

Those proposals eliminate the need for creating an ACIO monad and enforcing its axioms, since one-shot actions are executed in-line with other I/O actions (rather than at some nebulous "before the program is run" time). So, in the context of top-level initializers, does ACIO offer something beyond what oneshot provides on its own? If not, I prefer the latter since it seems like a much simpler solution.

Best, -Judah
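[Editorial aside: the effect Judah's hypothetical `oneshot uniqueRef` declaration is after can be approximated today with the well-known unsafePerformIO/NOINLINE idiom — a sketch of the folklore workaround, not the proposed language extension:]

```haskell
import Control.Concurrent.MVar (MVar, newMVar, modifyMVar)
import System.IO.Unsafe (unsafePerformIO)

-- The usual folklore emulation of a top-level one-shot action.
-- The NOINLINE pragma is essential: without it GHC may duplicate
-- the unsafePerformIO call and create several distinct MVars.
{-# NOINLINE uniqueRef #-}
uniqueRef :: MVar Integer
uniqueRef = unsafePerformIO (newMVar 0)

-- Hand out a fresh Integer on each call.
newUnique :: IO Integer
newUnique = modifyMVar uniqueRef (\n -> return (n + 1, n))
```

The MVar is created at most once, on first demand — exactly the "shared on-demand IO action" behaviour the oneshot proposal would make safe without unsafePerformIO.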
Re: [Haskell-cafe] Re: ANN: First Monad Tutorial of the Season
Hans van Thiel wrote:
> As a general comment on the teaching of Haskell, all books and tutorials which I've seen appear to treat this aspect of Haskell as if it were self-explanatory, even though the better known imperative languages don't have anything like it. Only Real World Haskell explains algebraic data types to some satisfaction (IMHO, of course).

(Hopefully this different take on it helps more than it hurts...)

In addition to keeping the type level and the value level separated, Haskell does a little bit to keep the type/interface level and the implementation level separate. The data keyword introduces both a new type and also a new implementation. This is the only way of introducing new implementations. ADTs are beauty incarnate, but unfortunately not well known outside of functional languages and abstract algebra.

The newtype keyword introduces a new type, but it reuses an old implementation under the covers. Even though they have the same underlying implementation, the newtype and the type of the old implementation are considered entirely different semantically, and so one cannot be used in lieu of the other.

The dubiously named type keyword introduces an alias (shorthand) for some type that already exists. These aliases are, in a sense, never checked; that is, the macro is just expanded. This means that we can't carry any additional semantic information by using aliases, and so if we have:

type Celsius    = Int
type Fahrenheit = Int

...the type checker will do nothing to save us. If we wanted to add semantic tags to the Int type in order to say what units the number represents, then we could do that with a newtype, and the type checker would ensure that we didn't mix units.

Re data vs newtype: where a newtype is possible (a single data constructor, which has exactly one argument) there are still a few differences at the semantic level.
Since a newtype's data constructor doesn't exist at runtime, evaluating a newtype to WHNF will evaluate the argument to WHNF; hence a newtype can be thought of as the data version with an obligatory strictness annotation. In terms of bottom, this means that:

data Foo = Foo Int

...has both _|_ and (Foo _|_). Whereas, both of the following:

data Foo = Foo !Int
newtype Foo = Foo Int

...have only _|_.

It should also be noted that the overhead for newtypes is not *always* removed. In particular, if we have the following definitions:

data Z = Z
newtype S a = S a

We must keep the tags (i.e. boxes) for S around because (S Z) and (S (S Z)) need to be distinguishable. This only really comes up with polymorphic newtypes (since that enables recursion), and it highlights the difference between strict fields and unpacked strict fields. Typically newtypes are unpacked as well as strict (hence no runtime tag overhead), but it's not guaranteed.

Another operational difference between newtypes and an equivalent data declaration has to do with type class dictionaries. IIRC, with GeneralizedNewtypeDeriving the newtype actually uses the exact same dictionaries as the underlying type, thus avoiding unwrapping/rewrapping overhead. I'm somewhat fuzzy on all the details here, but it's another reason to use newtypes when you can.

-- Live well, ~wren
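[Editorial aside: to make the units point concrete, here is a small sketch (type and function names hypothetical) showing the newtype version catching what the type-alias version would let through:]

```haskell
-- With plain 'type' aliases over Int the checker is no help; with
-- newtypes, the unit lives in the type and mix-ups are compile errors.
newtype Celsius    = Celsius Int    deriving (Eq, Show)
newtype Fahrenheit = Fahrenheit Int deriving (Eq, Show)

-- The conversion must unwrap and rewrap explicitly, so passing a
-- Fahrenheit where a Celsius is expected simply does not typecheck.
toFahrenheit :: Celsius -> Fahrenheit
toFahrenheit (Celsius c) = Fahrenheit (c * 9 `div` 5 + 32)
```

At runtime both wrappers are erased (the usual case wren describes), so the safety costs nothing here.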
[Haskell-cafe] Re: [Haskell] Top Level <-
Adrian Hey wrote:
> Maybe it would be safest to just say anything with a finaliser can't be created at the top level.

Do you have an example of something that is correctly ACIO to create, but has a problematic finaliser?

-- Ashley Yakeley
[Haskell-cafe] Re: problem with cabal for syb-with-class
In fairness, I have to add that I did inadvertently install version 0.3 of syb-with-class and got the error I still cannot understand. Installing version 0.4 did work flawlessly! Nevertheless, I would be interested to understand the problem I encountered. andrew

On Mon, 2008-08-25 at 18:08 +0200, Andrew U. Frank wrote:
> I tried to install wxGeneric and need syb-with-class. I got the package from hackage. Configure runs fine, but when I build I get:
>
> Data/Generics/SYB/WithClass/Instances.hs:11:7:
>     Could not find module `Data.Array': it is a member of package array-0.1.0.0, which is hidden
>
> but array-0.1.0.0 is installed, as shown later. What can be wrong? Why is an installed package complained about as hidden? Thanks for help! andrew
>
> [EMAIL PROTECTED]:~/haskellSources/packages/syb-with-class-0.3$ ghc-pkg describe array
> name: array
> version: 0.1.0.0
> license: BSD3
> copyright:
> maintainer: [EMAIL PROTECTED]
> stability:
> homepage:
> package-url:
> description: This package defines the classes @IArray@ of immutable arrays and @MArray@ of arrays mutable within appropriate monads, as well as some instances of these classes.
> category:
> author:
> exposed: True
> exposed-modules: Data.Array Data.Array.Base Data.Array.Diff Data.Array.IArray Data.Array.IO Data.Array.MArray Data.Array.ST Data.Array.Storable Data.Array.Unboxed
> hidden-modules: Data.Array.IO.Internals
> import-dirs: /usr/lib/ghc-6.8.2/lib/array-0.1.0.0
> library-dirs: /usr/lib/ghc-6.8.2/lib/array-0.1.0.0
> hs-libraries: HSarray-0.1.0.0
> depends: base-3.0.1.0
> haddock-interfaces: /usr/share/doc/ghc6-doc/libraries/array/array.haddock
> haddock-html: /usr/share/doc/ghc6-doc/libraries/array
[Haskell-cafe] Re: [Haskell] Top Level <-
Judah Jacobson wrote: I've been wondering: is there any benefit to having top-level ACIO'd - instead of just using runOnce (or perhaps oneshot) as the primitive for everything? I don't think oneshots are very good for open witness declarations (such as the open exceptions I mentioned originally), since there are pure functions that do useful things with them. Not sure about TVars either, which operate in the STM monad. Would you also need a oneshotSTM (or a class)? -- Ashley Yakeley ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] 2 modules in one file
> Is it allowed to write two different modules in a single file? Something like:
>
> module Mod1 (...) where { ... }
> module Mod2 (...) where { import Mod1; ... }
>
> I tried, and got an error, but would like to confirm that there's no way to do that.

No, that's not possible, because Haskell looks a module named A.B.C up in the path A/B/C.[l]hs. So with

module A where ...
module B where ...

in one file, the compiler could only find one of them (naming the file A.hs or B.hs). You have to use one file for each module. I think there is also a tool somewhere that merges many modules into one, but I don't think that's what you're looking for (I haven't tried it myself).

Marc Weber
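[Editorial aside: the one-module-per-file layout Marc describes looks like this (module and function names hypothetical); each module lives in a file whose path matches its name:]

```haskell
-- File: Mod1.hs  (one module per file; the path must match the name)
module Mod1 (greet) where

greet :: String
greet = "hello"

-- Mod2 must go in its own file, Mod2.hs; a second 'module' header
-- in Mod1.hs is a parse error. Its contents would be:
--
--   module Mod2 where
--   import Mod1 (greet)
--
--   shout :: String
--   shout = greet ++ "!"
```

GHC then finds Mod1 automatically when compiling Mod2 with `ghc --make Mod2.hs`.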
Re: [Haskell-cafe] Haskell Speed Myth
> Wow! 3x the performance for a simple change. Frustrating that there isn't a portable/standard way to express this. Also frustrating that the threaded version doesn't improve on the situation (utilization is back at 50%).

GR, retraction, retraction! I was obviously too tired when I posted this. In generalizing the system to take a run-time specified number of CPUs (for forkOnIO) and tokens, the behavior changed from my 3 minute runs. I'll play with it more tonight.

Thomas
Re: [Haskell-cafe] String to Double conversion in Haskell
2008/8/24 Daryoush Mehrtash [EMAIL PROTECTED]:
> I am trying to convert a string to a float. It seems that the Data.ByteString library only supports readInt. After some googling I came across a possible implementation: http://sequence.svcs.cs.pdx.edu/node/373
>
> My questions are: a) is there a standard library implementation of String -> Double and Float? b) Why is it that ByteString only supports readInt? Is there a reason for it?

Hi Daryoush, are you really looking for ByteString -> Float conversion, or just plain String -> Float? The latter is really simple; the function is called 'read' and is available in the Prelude:

$ ghci
GHCi, version 6.8.3: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
Prelude> read "3.14" :: Float
3.14

/Bjorn
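[Editorial aside: a small elaboration on Bjorn's answer. 'read' throws an exception on malformed input; for untrusted data the Prelude's 'reads' gives a total variant — a sketch, with a hypothetical helper name:]

```haskell
-- 'reads' returns a list of (parse, remaining input) pairs; a full,
-- unambiguous parse is the single pair whose remainder is empty.
parseFloat :: String -> Maybe Float
parseFloat s = case reads s of
  [(x, "")] -> Just x
  _         -> Nothing
```

So `parseFloat "3.14"` yields `Just 3.14`, while garbage input yields `Nothing` instead of crashing the program.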
Re: [Haskell-cafe] Re: problem with cabal for syb-with-class
On Tue, 2008-08-26 at 12:06 +0200, Andrew U. Frank wrote:
> In fairness, I have to add that I did inadvertently install version 0.3 of syb-with-class and got the error I still cannot understand. Installing version 0.4 did work flawlessly! Nevertheless, I would be interested to understand the problem I encountered.

The error message unfortunately refers to the mechanism and not the cause. When Cabal builds a package it tells ghc to hide every package and then use only the packages listed in the build-depends field:

ghc --make -hide-all-packages -package base-3.0.1.0 ... etc

So when ghc finds that one of your modules needs to import something that is not in one of the given packages, it says that it's in another package that is 'hidden'. Of course it's only hidden because Cabal told ghc to hide it. So what the error message really means is that you're missing a package from the build-depends field in the .cabal file. The error message will improve when Cabal does its own dependency chasing, but don't hold your breath.

Duncan
Re: [Haskell-cafe] Re: problem with cabal for syb-with-class
On Tue, Aug 26, 2008 at 2:22 PM, Duncan Coutts [EMAIL PROTECTED] wrote:
> So when ghc finds that one of your modules needs to import something that is not in one of the given packages it says that it's in another package that is 'hidden'. Of course it's only hidden because Cabal told ghc to hide them.

Yes, it is one of those unfortunate error messages that says "I know what the problem is *and* how to fix it, but I'm not going to." ;-)

Cheers, Dougal
Re: [Haskell-cafe] Haskell Propeganda
On Sat, Aug 23, 2008 at 6:15 PM, Daniel Fischer [EMAIL PROTECTED] wrote:
> On Saturday, 23 August 2008 at 23:17, Thomas Davie wrote:
>> I'd be interested to see your other examples -- because that error is not happening in Haskell! You can't argue that Haskell doesn't give you no segfaults, because you can embed a C segfault within Haskell.
>
> Use ST(U)Arrays, and use unsafeWrite because you do the index checking yourself. Then be stupid and confuse two bounds so that you actually write beyond the array bounds. I've had that happen _once_. But if you explicitly say you want it unsafe, you're prepared for it :)

Which illustrates the point that it's not type safety that protects us from segfaults so much as bounds checking, and that has a non-trivial runtime cost. At least, most segfaults that *I've* caused (in C or C++) have been from overwriting the bounds of arrays, and that's precisely the problem that Haskell does *not* solve using its type system. There have been attempts to do so, but I've not heard of instances where they have been used in real programs.

David
Re: [Haskell-cafe] Re: problem with cabal for syb-with-class
On Tue, 2008-08-26 at 14:30 +0100, Dougal Stanton wrote: On Tue, Aug 26, 2008 at 2:22 PM, Duncan Coutts [EMAIL PROTECTED] wrote: So when ghc finds that one of your modules needs to import something that is not in one of the given packages it says that it's in another package that is 'hidden'. Of course it's only hidden because Cabal told ghc to hide them. Yes, it is one of those unfortunate error messages that says I know what the problem is *and* how to fix it, but I'm not going to. ;-) Actually it's more like: I know what the problem is but I cannot fix it. He knows how to fix it, but doesn't know there's a problem! :-) GHC knows what the problem is but it's just following orders. Cabal gave the orders but doesn't know there is a problem. That's why it'll be fixed when Cabal does the dep chasing rather than delegating that to ghc --make. Cabal will then be able to either just pull in the (probably) right packages, or report the problem using the language of the problem domain and not the implementation. Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: problem with cabal for syb-with-class
So when ghc finds that one of your modules needs to import something that is not in one of the given packages it says that it's in another package that is 'hidden'. Of course it's only hidden because Cabal told ghc to hide them. Yes, it is one of those unfortunate error messages that says I know what the problem is *and* how to fix it, but I'm not going to. ;-) Actually it's more like: I know what the problem is but I cannot fix it. He knows how to fix it, but doesn't know there's a problem! :-) GHC knows what the problem is but it's just following orders. Cabal gave the orders but doesn't know there is a problem. Since you said don't hold your breath for Cabal's dependencies: Cabal doesn't have to pass on ghc's messages uninterpreted. That's a lot like implementing a map as a list and complaining about empty list instead of element not found. Cabal is the interface here, ghc is the tool. The interface shouldn't just pass instructions to the tool, it should also interpret and present the tool's responses. As suggested in this thread: http://www.haskell.org/pipermail/cabal-devel/2007-December/001497.html http://www.haskell.org/pipermail/cabal-devel/2007-December/001499.html Hmm, the archive failed to decode the code sketch attached to the last message there (which demonstrated that some basic help could be hacked up as a simple pattern-message script wrapping cabal), so I attach the old code again for reference. Claus cabal.hs Description: Binary data ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Re: ANN: First Monad Tutorial of the Season
[snip] Most probably you are confusing type and data constructor. This is a common error and a hurdle I remember falling over more than once. It is due to the fact that in Haskell both are in completely separate name spaces, yet both use capitalized names. Thus people often use the same name for both, especially with newtype, as there may only be one data constructor. In your case you have

newtype State s a = State { runState :: s -> (a, s) }

where the type constructor takes two (type-) arguments (even for a newtype it can take as many as you like), but the data constructor takes only one value as argument, namely a function from s to (a, s). Clear now? A newtype has only one data constructor; a data definition may have more (when it contains a choice (|) operator).

That's clear now.

Third, newtype is unlifted. The books I use for reference, the Craft and SOE, don't seem to mention this. I have to confess, I don't really understand the difference between newtype and data. Again, an explanation would be appreciated.

Did Ryan's explanation help?

As a general comment on the teaching of Haskell, all books and tutorials which I've seen appear to treat this aspect of Haskell as if it were self-explanatory, even though the better known imperative languages don't have anything like it. Only Real World Haskell explains algebraic data types to some satisfaction (IMHO, of course).

This is one of the more difficult aspects of Haskell, IME. I found the Haskell wikibook (http://en.wikibooks.org/wiki/Haskell) very useful, especially the chapter on denotational semantics (http://en.wikibooks.org/wiki/Haskell/Denotational_semantics). The wikibook has a lot of good material, IMO.

I'll certainly read that chapter.

If you have a background in imperative languages, especially low-level ones like C, then it may help to think of the values of a lifted type (data ...) as being represented by a pointer to the data proper (e.g. a struct), whereas values of an unlifted type (newtype ...) are represented exactly as the argument type.

That makes sense to me. Thanks, everybody!

A value of a lifted type always has one additional value in its type, namely bottom. You may think of bottom as being represented by a null pointer. In fact, one could say that, in Java, Objects are always lifted whereas basic types like integer are unlifted. Now, before I get shot down by the purists, I know that this is not exactly true, since bottom is also the value of an infinite loop, so Java in fact has a 'real' bottom in addition to null, etc. See the above cited online book chapter for a more precise (and still very readable) treatment.

Cheers, Ben
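[Editorial aside: the lifted-vs-unlifted distinction can be observed directly — matching on a data constructor forces the value and so can hit bottom, while a newtype constructor match is free. A small sketch, with hypothetical type names:]

```haskell
import Control.Exception (SomeException, evaluate, try)

data    DFoo = DFoo Int
newtype NFoo = NFoo Int

-- Matching on a data constructor must inspect the representation,
-- so this forces the bottom value and raises an exception...
dErr :: ()
dErr = case (undefined :: DFoo) of DFoo _ -> ()

-- ...whereas the newtype constructor does not exist at runtime: the
-- match is a no-op and the result is simply ().
nOk :: ()
nOk = case (undefined :: NFoo) of NFoo _ -> ()
```

Evaluating `dErr` throws, while `nOk` evaluates to `()` untouched — the "obligatory strictness annotation" view of newtype from earlier in the thread, seen from the other side.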
Re: [Haskell-cafe] Re: problem with cabal for syb-with-class
Thank you very much for the explanation! I take it that in this case the error is in the .cabal file, which does not list the required package. andrew

On Tue, 2008-08-26 at 15:09 +0100, Duncan Coutts wrote: [snip]
RE: [Haskell-cafe] OpenGL's VBO with Haskell
First, thanks you two for the reply. Now for the solution: I'm a bit ashamed of myself, because I simply forgot to put a

clientState VertexArray $= Enabled

The rest of the code is valid apart from this 'little' miss.

-----Original message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Bit Connor
Sent: Monday, 25 August 2008 23:44
To: Twinside
Cc: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] OpenGL's VBO with Haskell

Hi, I used VBOs with Haskell and I remember it being pretty straightforward, pretty much the same as in C. This was a while ago and I don't really remember how things work, so I can't really comment on your code. But I'll see if I can find my old Haskell VBO code.

On Mon, Aug 25, 2008 at 8:43 PM, Twinside [EMAIL PROTECTED] wrote:

Hi Haskell list, Today I'm turning to you for the use of VBOs (Vertex Buffer Objects) in Haskell. I seem to be able to create one without any problem using the following code:

vboOfList :: Int -> [Float] -> IO BufferObject
vboOfList size elems =
    let ptrsize = toEnum $ size * 4
        arrayType = ElementArrayBuffer
    in do [array] <- genObjectNames 1
          bindBuffer arrayType $= Just array
          arr <- newListArray (0, size - 1) elems
          withStorableArray arr (\ptr -> bufferData arrayType $= (ptrsize, ptr, StaticDraw))
          bindBuffer ArrayBuffer $= Nothing
          reportErrors
          return array

However, the problem arises when I try to draw primitives using this VBO:

displayVbo buff size = do
    let stride = toEnum sizeOfVertexInfo
        vxDesc = VertexArrayDescriptor 3 Float stride $ offset 0
        colors = VertexArrayDescriptor 4 Float stride $ offset 12
        texCoo = VertexArrayDescriptor 2 Float stride $ offset (12 + 16)
        filt   = VertexArrayDescriptor 4 Float stride $ offset (12 + 16 + 8)
    bindBuffer ArrayBuffer $= Just buff
    arrayPointer VertexArray $= vxDesc
    arrayPointer ColorArray $= colors
    arrayPointer TextureCoordArray $= texCoo
    arrayPointer SecondaryColorArray $= filt
    drawArrays Quads 0 size
    bindBuffer ArrayBuffer $= Nothing

Nothing is displayed on screen.
As you can see, my VBO contains interleaved data:
- 3 floats for the vertex
- 4 for the color
- 2 for the texture coordinates
- 4 for the secondary color

The 'offset' function has type Int -> Ptr Float and is used to forge a pointer from an Int, to mimic the C way of using VBOs. As far as I've checked, the values in my list for VBO generation are valid and are displayed correctly using other techniques. So is there a workaround or other method for my solution, preferably one keeping my data interleaved? Secondly, is there any sample for advanced features like VBOs in Haskell?

Regards, Vincent
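[Editorial aside: the 'offset' helper Vincent describes is typically written with plusPtr — a sketch of one plausible definition, matching the Int -> Ptr Float type given above:]

```haskell
import Foreign.Ptr (Ptr, nullPtr, plusPtr, minusPtr)

-- Forge a "pointer" that is really just a byte offset into the
-- currently bound VBO, mimicking the C idiom of casting an integer
-- offset to a pointer for the vertex-array pointer calls.
offset :: Int -> Ptr Float
offset n = nullPtr `plusPtr` n
```

The resulting Ptr is never dereferenced on the Haskell side; OpenGL interprets it as an offset because a buffer object is bound.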
[Haskell-cafe] Re[2]: [Haskell] Top Level <-
Hello Derek,

Tuesday, August 26, 2008, 8:14:21 PM, you wrote:

>> but from my POV it's important to push this feature into the haskell standard
> Haskell should be moving -toward- a capability-like model, not away from it.

what do you mean by a capability-like model?

-- Best regards, Bulat  mailto:[EMAIL PROTECTED]
[Haskell-cafe] Re: Re[2]: [Haskell] Top Level <-
On Tue, 2008-08-26 at 20:18 +0400, Bulat Ziganshin wrote:
> what do you mean by a capability-like model?

http://erights.org/elib/capability/index.html
Re: [Haskell-cafe] Re: ANN: First Monad Tutorial of the Season
On Tue, Aug 26, 2008 at 1:19 AM, wren ng thornton [EMAIL PROTECTED] wrote:
> It should also be noted that the overhead for newtypes is not *always* removed. In particular, if we have the following definitions:
>
> data Z = Z
> newtype S a = S a
>
> We must keep the tags (i.e. boxes) for S around because (S Z) and (S (S Z)) need to be distinguishable. This only really comes up with polymorphic newtypes (since that enables recursion), and it highlights the difference between strict fields and unpacked strict fields. Typically newtypes are unpacked as well as strict (hence no runtime tag overhead), but it's not guaranteed.

Is this true? (S Z) and (S (S Z)) only need to be distinguished during typechecking. This would be different if it were some sort of existential type:

newtype N = forall a. Num a => N a

but GHC at least disallows existential boxes in newtypes.

-- ryan
Re: [Haskell-cafe] String to Double conversion in Haskell
Bjorn, I am initializing a list from a file. I am reading the lines from the file, splitting them into ByteStrings and then converting them to Float. Should I be using String -> Float or ByteString -> Float?

thanks, Daryoush

On Tue, Aug 26, 2008 at 6:01 AM, Bjorn Bringert [EMAIL PROTECTED] wrote: [snip]
Re: [Haskell-cafe] String to Double conversion in Haskell
dmehrtash:
> Bjorn, I am initializing a list from a file. I am reading the lines from the file, splitting them into ByteStrings and then converting them to Float. Should I be using String -> Float or ByteString -> Float?

I'd try reading the file entirely as a ByteString, then splitting out the Doubles, using ByteString -> Float.

-- Don
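[Editorial aside: a sketch of one way to follow Don's suggestion. Since the bytestring library of the time only provided readInt, this goes through String per line for the numeric conversion — which forfeits some of the ByteString speed advantage, but keeps the file I/O and line splitting on ByteStrings:]

```haskell
import qualified Data.ByteString.Char8 as B

-- Split the file contents into lines and read each line as a Double.
-- The per-line B.unpack is the compromise forced by the absence of a
-- native ByteString -> Double parser in 2008-era bytestring.
parseDoubles :: B.ByteString -> [Double]
parseDoubles = map (read . B.unpack) . B.lines
```

Usage would be `fmap parseDoubles (B.readFile path)` for a hypothetical data file with one number per line.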
[Haskell-cafe] Re: [Haskell] Top Level <-
I have a feeling this is going to be a very long thread, so I'm trying to go to Haskell-cafe again (without mucking it up again).

Derek Elkins wrote:
> Haskell should be moving -toward- a capability-like model, not away from it.

Could you show how to implement Data.Random or Data.Unique using such a model, or how any (preferably all) of the use cases identified can be implemented? Like, what about implementing the socket API starting with nothing but primitives to peek/poke ethernet MAC and DMA controller registers? Why should Haskell be moving -toward- a capability-like model, and why do top level <- declarations take us away from it?

Regards -- Adrian Hey
[Haskell-cafe] Haskell Speed Myth
dons: (Where I note GHC is currently in second place, though we've not submitted any parallel programs yet). We might call that the thread-ring effect :-) Also CC'd Isaac, Mr. Shootout. Isaac, is the quad core shootout open for business? Should we rally the troops? iirc there was some discussion after the last GHC release about cleaning up the programs to make them less low-level given the improved capabilities of the compiler - I don't think that ever happened, and low level seems to be a common complaint against the Haskell programs shown in the benchmarks game. As Simon Peyton-Jones suggested we're certainly open for suggestions: http://groups.google.com/group/fa.haskell/browse_thread/thread/7eb82c689de8688/4f3c47b976394666?lnk=stq=alioth+shootout#4f3c47b976394666 However, we're operating new measurement scripts on both u64q (published) and gp4 (unpublished), and my focus is still on catching up to where we were with measurements from the old scripts, and installing third-party libraries on u64q. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: ANN: First Monad Tutorial of the Season
Ryan Ingram wrote:
> wren ng thornton wrote:
>> It should also be noted that the overhead for newtypes is not *always* removed. In particular, if we have the following definitions:
>>
>> data Z = Z
>> newtype S a = S a
>>
>> We must keep the tags (i.e. boxes) for S around because (S Z) and (S (S Z)) need to be distinguishable. This only really comes up with polymorphic newtypes (since that enables recursion), and it highlights the difference between strict fields and unpacked strict fields. Typically newtypes are unpacked as well as strict (hence no runtime tag overhead), but it's not guaranteed.
>
> Is this true? (S Z) and (S (S Z)) only need to be distinguished during typechecking. This would be different if it were some sort of existential type:
>
> newtype N = forall a. Num a => N a
>
> but GHC at least disallows existential boxes in newtypes.

They only need to be distinguished at type checking time, true; but if you have a function that takes Peano integers (i.e. is polymorphic over Z and (S a) from above) then you need to keep around enough type information to know which specialization of the function to take. The problem is that the polymorphism means that you can't do full type erasure, because there's a type variable you need to keep track of. From my experiments looking at memory usage, the above declarations take the same amount of memory as a pure ADT, which means linear in the value of the Peano integer. It may be that I misinterpreted the results, but I see no other way to deal with polymorphic newtypes, so I'm pretty sure this is what's going on.

-- Live well, ~wren
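[Editorial aside: the Z/S declarations under discussion are typically consumed through a type class, which is exactly the point where (S Z) and (S (S Z)) must stay distinguishable — a sketch of the Peano encoding, with a hypothetical class name:]

```haskell
data Z = Z
newtype S n = S n

-- Recover the value encoded in the type-level structure. Which
-- instance fires is decided by the type of the argument, so the
-- distinction between (S Z) and (S (S Z)) cannot be erased.
class Peano n where
  toInt :: n -> Int

instance Peano Z where
  toInt _ = 0

instance Peano n => Peano (S n) where
  toInt (S n) = 1 + toInt n
```

With dictionary passing the selection happens via the class dictionary rather than a runtime tag, which is the heart of Ryan and wren's disagreement about whether the S boxes can be dropped.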
Re: [Haskell-cafe] Haskell Speed Myth
igouy2: dons: (Where I note GHC is currently in second place, though we've not submitted any parallel programs yet). We might call that the thread-ring effect :-) Also CC'd Isaac, Mr. Shootout. Isaac, is the quad core shootout open for business? Should we rally the troops? iirc there was some discussion after the last GHC release about cleaning up the programs to make them less low-level given the improved capabilities of the compiler - I don't think that ever happened, and low level seems to be a common complaint against the Haskell programs shown in the benchmarks game. As Simon Peyton-Jones suggested we're certainly open for suggestions: http://groups.google.com/group/fa.haskell/browse_thread/thread/7eb82c689de8688/4f3c47b976394666?lnk=stq=alioth+shootout#4f3c47b976394666 However, we're operating new measurement scripts on both u64q (published) and gp4 (unpublished), and my focus is still on catching up to where we were with measurements from the old scripts, and installing third-party libraries on u64q. So still consolidating the system. Do I understand though, that if we submit, say, a quad-core version of binary-trees, for example, using `par` and -N4, it'll go live on the benchmark page? -- Don ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
Making a network stack from peek and poke is easy in a well structured OS. The boot loader (or whatever) hands you the capability (call it something else if you want) to do raw hardware access, and you build from there. If you look at well structured OSs like NetBSD, this is pretty much how they work. No hardware drivers use global variables. -- Lennart On Tue, Aug 26, 2008 at 6:34 PM, Adrian Hey [EMAIL PROTECTED] wrote: I have a feeling this is going to be a very long thread so I'm trying to go to Haskell cafe again (without mucking it up again). Derek Elkins wrote: Haskell should be moving -toward- a capability-like model, not away from it. Could you show how to implement Data.Random or Data.Unique using such a model, or how any (preferably all) of the use cases identified can be implemented? Like what about implementing the socket API starting with nothing but primitives to peek/poke ethernet mac and dma controller registers? Why should Haskell be moving -toward- a capability-like model, and why do top level <- declarations take us away from it? Regards -- Adrian Hey ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell Speed Myth
--- Don Stewart [EMAIL PROTECTED] wrote: -snip- So still consolidating the system. Pretty much. Do I understand though, that if we submit, say, a quad-core version of binary-trees, for example, using `par` and -N4, it'll go live on the benchmark page? That's an open question - should it? How should the benchmarks game approach multicore? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell Speed Myth
igouy2: --- Don Stewart [EMAIL PROTECTED] wrote: -snip- So still consolidating the system. Pretty much. Do I understand though, that if we submit, say, a quad-core version of binary-trees, for example, using `par` and -N4, it'll go live on the benchmark page? That's an open question - should it? How should the benchmarks game approach multicore? Well, there's a famous paper, Algorithm + Strategy = Parallelism I'd imagine we use the benchmark game's algorithms, but let submitters determine the strategy. Then the results would show a) how well you utilize the cores, and b) overall wall clock results. I'm keen to get going on this, if only because I think we can turn out parallelised versions of many of the existing programs, fairly cheaply. -- Don ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
On Tue, 2008-08-26 at 18:34 +0100, Adrian Hey wrote: I have a feeling this is going to be a very long thread so I'm trying to go to Haskell cafe again (without mucking it up again). Derek Elkins wrote: Haskell should be moving -toward- a capability-like model, not away from it. Could you show how to implement Data.Random or Data.Unique using such a model, or how any (preferably all) of the use cases identified can be implemented? Like what about implementing the socket API starting with nothing but primitives to peek/poke ethernet mac and dma controller registers? Data.Random and Data.Unique are trivial. Already the immutable interfaces are fine. You could easily pass around a mutable object holding the state if you didn't want to be curtailed into a State monad. If you have full access to the DMA controller your language is not even memory safe. This is not a common situation for most developers. I have no trouble requiring people who want to hack OSes to use implementation-specific extensions, as they have to do today in any other language. However, this is only a problem for capabilities (as the capability model requires memory safety), not for a language lacking top-level mutable state. Access to the DMA controller and the Ethernet interface can still be passed in; it doesn't need to be a top-level action. There are entire operating systems built around capability models, so it is certainly possible to do these things. Why should Haskell be moving -toward- a capability-like model, and why do top level <- declarations take us away from it? Answering the second question first: mutable global variables are usually -explicitly- disallowed from a capability model. To answer your first question: safety, security, analyzability, encapsulation, locality are all things that Haskell strives for. Personally, I think that every language should be moving in this direction as much as possible, but the Haskell culture, in particular, emphasizes these things. 
It's notable that O'Haskell and Timber themselves moved toward a capability model. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell audio libraries audio formats
On Mon, 25 Aug 2008, John Van Enk wrote: How well would the storablevector package (Data.StorableVector) work for storing audio data? One of the major issues I'm still working over is that I want to maintain something similar to a [[a]] format (since the underlying PortAudio library and hardware could support hundreds of interleaved channels) but I would like to be able to build in some typechecking to the functions to make sure the number of channels matches the number expected in the functions. With data Stereo a = Stereo !a !a you could also use Stereo (Stereo a) for quadrophony and so on. Would this be convenient enough? StorableVector stores everything of fixed length for which a Storable instance is defined. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
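Henning's suggestion can be spelled out in a few lines. Note that `Quad`, `mapStereo`, and `attenuateQuad` below are hypothetical helpers added for illustration; they are not part of storablevector or any other package:

```haskell
-- Fix the channel count in the type, per Henning's suggestion.
data Stereo a = Stereo !a !a deriving (Eq, Show)

-- Quadraphonic sound as a pair of stereo pairs.
type Quad a = Stereo (Stereo a)

-- Apply a function to both channels of a frame.
mapStereo :: (a -> b) -> Stereo a -> Stereo b
mapStereo f (Stereo l r) = Stereo (f l) (f r)

-- Attenuate all four channels of a quad frame by half.
attenuateQuad :: Quad Double -> Quad Double
attenuateQuad = mapStereo (mapStereo (/ 2))
```

A wrong channel count then fails at compile time rather than at runtime, which is exactly the typechecking John asked for, though as the follow-ups note it does not scale gracefully to a 128-channel case.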
Re: [Haskell-cafe] Re: ANN: First Monad Tutorial of the Season
The values Z, S Z, and S (S Z) all have the same runtime representation and there is no linear increase in size when you add an extra S. BUT, if you make something overloaded and there is a dictionary associated with the type (Z, S Z, or S (S Z)) then the dictionary takes up space, and that space is linear in the number of S constructors. -- Lennart On Tue, Aug 26, 2008 at 6:39 PM, wren ng thornton [EMAIL PROTECTED] wrote: Ryan Ingram wrote: wren ng thornton wrote: It should also be noted that the overhead for newtypes is not *always* removed. In particular, if we have the following definitions: data Z = Z newtype S a = S a We must keep the tags (i.e. boxes) for S around because (S Z) and (S (S Z)) need to be distinguishable. This only really comes up with polymorphic newtypes (since that enables recursion), and it highlights the difference between strict fields and unpacked strict fields. Typically newtypes are unpacked as well as strict (hence no runtime tag overhead), but it's not guaranteed. Is this true? (S Z) and (S (S Z)) only need to be distinguished during typechecking. This would be different if it was some sort of existential type: newtype N = forall a. Num a => N a but GHC at least disallows existential boxes in newtypes. They only need to be distinguished at type checking time, true; but if you have a function that takes peano integers (i.e. is polymorphic over Z and (S a) from above) then you need to keep around enough type information to know which specialization of the function to take. The problem is that the polymorphism means that you can't do full type erasure because there's a type variable you need to keep track of. From my experiments looking at memory usage, the above declarations take the same amount of memory as a pure ADT, which means linear in the value of the peano integer. It may be that I misinterpreted the results, but I see no other way to deal with polymorphic newtypes so I'm pretty sure this is what's going on. 
-- Live well, ~wren ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
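Lennart's point about dictionaries can be sketched with a made-up class (`Peano` and `toInt` are illustrative names, not from any library): every value below shares the same runtime representation, but each `S` instance captures the dictionary for the layer beneath it, which is where the linear space he describes comes from.

```haskell
data Z = Z
newtype S n = S n

-- A hypothetical class over peano types; each instance carries a dictionary.
class Peano n where
  toInt :: n -> Int

instance Peano Z where
  toInt Z = 0

-- The (Peano n) dictionary is captured here, one per S layer, so the
-- dictionary chain is linear in the number of S constructors even though
-- the newtype adds no runtime tag to the value itself.
instance Peano n => Peano (S n) where
  toInt (S n) = 1 + toInt n

three :: S (S (S Z))
three = S (S (S Z))
```

Evaluating `toInt three` walks the dictionary chain, one step per `S`, even though `three` itself is represented just like `Z` at runtime.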
Re: [Haskell-cafe] Haskell audio libraries audio formats
I think the problem I'll run into is the 128 channel case. I'm hoping for a general solution... I'm almost positive this will require runtime checks. Your solution is what I was thinking for functions requiring exactly N channels (I'm not sure if there are many functions like that). On Tue, Aug 26, 2008 at 2:11 PM, Henning Thielemann [EMAIL PROTECTED] wrote: On Mon, 25 Aug 2008, John Van Enk wrote: How well would the storablevector package (Data.StorableVector) work for storing audio data? One of the major issues I'm still working over is that I want to maintain something similar to a [[a]] format (since the underlying PortAudio library and hardware could support hundreds of interleaved channels) but I would like to be able to build in some typechecking to the functions to make sure the number of channels matches the number expected in the functions. With data Stereo a = Stereo !a !a you could also use Stereo (Stereo a) for quadrophony and so on. Would this be convenient enough? StorableVector stores everything of fixed length for which a Storable instance is defined. -- /jve ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: [Haskell] Top Level <-
On Tue, Aug 26, 2008 at 3:15 AM, Ashley Yakeley [EMAIL PROTECTED] wrote: Judah Jacobson wrote: I've been wondering: is there any benefit to having top-level ACIO'd <- instead of just using runOnce (or perhaps oneshot) as the primitive for everything? I don't think oneshots are very good for open witness declarations (such as the open exceptions I mentioned originally), since there are pure functions that do useful things with them. I think you're saying that you want to write w <- newIOWitness at the top level, so that w can then be referenced in a pure function. Fair enough. But newIOWitness's implementation requires writeIORef (or an equivalent), which is not ACIO, right? I suppose you could call unsafeIOToACIO, but if that function is used often it seems to defeat the purpose of defining an ACIO monad in the first place. Not sure about TVars either, which operate in the STM monad. Would you also need a oneshotSTM (or a class)? Interesting point; I think you can work around it, but it does make the code a little more complicated. For example:

oneshot uniqueVar :: IO (TVar Integer)
uniqueVar = atomically $ newTVar 0 -- alternately, use newTVarIO

uniqueIntSTM :: IO (STM Integer)
uniqueIntSTM = uniqueVar >>= \v -> return $ do
  n <- readTVar v
  writeTVar v (n+1)
  return n

getUniqueInt :: IO Integer
getUniqueInt = uniqueIntSTM >>= atomically

-Judah ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Parsec and network data
Hi, I've been struggling with this problem for days and I'm dying. Please help. I want to use Parsec to parse NNTP data coming to me from a handle I get from connectTo. One unworkable approach I tried is to get a lazy String from the handle with hGetContents. The problem: suppose the first message from the NNTP server is 200 OK\r\n. Parsec parses it beautifully. Now I need to discard the parsed part so that Parsec will parse whatever the server sends next, so I use Parsec's getInput to get the remaining data. But there isn't any, so it blocks. Deadlock: the client is inappropriately waiting for server data and the server is waiting for my first command. Another approach that doesn't quite work is to create an instance of Parsec's Stream with timeout functionality:

instance Stream Handle IO Char where
  uncons h = do
    r <- hWaitForInput h ms
    if r
      then liftM (\c -> Just (c, h)) (hGetChar h)
      else return Nothing
    where ms = 5000

It's probably obvious to you why it doesn't work, but it wasn't to me at first. The problem: suppose you tell parsec you're looking for (many digit) followed by (string \r\n). 123\r\n won't match; 123\n will. My Stream has no backtracking. Even if you don't need 'try', it won't work for even basic stuff. Here's another way: http://www.mail-archive.com/haskell-cafe@haskell.org/msg22385.html The OP had the same problem I did, so he made a variant of hGetContents with timeout support. The problem: he used something from unsafe*. I came to Haskell for rigor and reliability and it would make me really sad to have to use a function with 'unsafe' in its name that has a lot of wacky caveats about inlining, etc. In that same thread, Bulat says a timeout-enabled Stream could help. But I can't tell what library that is. 'cabal list stream' shows me 3 libraries none of which seems to be the one in question. Is Streams a going concern? Should I be checking that out? 
I'm not doing anything with hGetLine because 1) there's no way to specify a maximum number of characters to read 2) what is meant by a line is not specified 3) there is no way to tell if it read a line or just got to the end of the data. Even using something like hGetLine that worked better would make the parsing more obscure. Thank you very very much for *any* help. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Parsec and network data
Are you doing this all in a single thread? On Tue, Aug 26, 2008 at 4:35 PM, brian [EMAIL PROTECTED] wrote: Hi, I've been struggling with this problem for days and I'm dying. Please help. I want to use Parsec to parse NNTP data coming to me from a handle I get from connectTo. One unworkable approach I tried is to get a lazy String from the handle with hGetContents. The problem: suppose the first message from the NNTP server is 200 OK\r\n. Parsec parses it beautifully. Now I need to discard the parsed part so that Parsec will parse whatever the server sends next, so I use Parsec's getInput to get the remaining data. But there isn't any, so it blocks. Deadlock: the client is inappropriately waiting for server data and the server is waiting for my first command. Another approach that doesn't quite work is to create an instance of Parsec's Stream with timeout functionality:

instance Stream Handle IO Char where
  uncons h = do
    r <- hWaitForInput h ms
    if r
      then liftM (\c -> Just (c, h)) (hGetChar h)
      else return Nothing
    where ms = 5000

It's probably obvious to you why it doesn't work, but it wasn't to me at first. The problem: suppose you tell parsec you're looking for (many digit) followed by (string \r\n). 123\r\n won't match; 123\n will. My Stream has no backtracking. Even if you don't need 'try', it won't work for even basic stuff. Here's another way: http://www.mail-archive.com/haskell-cafe@haskell.org/msg22385.html The OP had the same problem I did, so he made a variant of hGetContents with timeout support. The problem: he used something from unsafe*. I came to Haskell for rigor and reliability and it would make me really sad to have to use a function with 'unsafe' in its name that has a lot of wacky caveats about inlining, etc. In that same thread, Bulat says a timeout-enabled Stream could help. But I can't tell what library that is. 'cabal list stream' shows me 3 libraries none of which seems to be the one in question. Is Streams a going concern? 
Should I be checking that out? I'm not doing anything with hGetLine because 1) there's no way to specify a maximum number of characters to read 2) what is meant by a line is not specified 3) there is no way to tell if it read a line or just got to the end of the data. Even using something like hGetLine that worked better would make the parsing more obscure. Thank you very very much for *any* help. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- /jve ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Parsec and network data
On Tue, Aug 26, 2008 at 3:38 PM, John Van Enk [EMAIL PROTECTED] wrote: Are you doing this all in a single thread? Yes. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Parsec and network data
Perhaps you'll want to continue with the hGetLine setup in one thread (assuming the NNTP data is line delimited), then in another, parse the data, then in a third, respond. Look up how to use MVars. Allowing the threads to block on reads/writes is a lot easier (logically) than figuring out the mess in a single threaded system. When you have a system like Haskell's threading tools, you're much better off splitting the tasks up into blocking calls with MVars to synchronize. (Perhaps MVars aren't quite the correct solution here, but it seems like it would work to me.) On Tue, Aug 26, 2008 at 4:40 PM, brian [EMAIL PROTECTED] wrote: On Tue, Aug 26, 2008 at 3:38 PM, John Van Enk [EMAIL PROTECTED] wrote: Are you doing this all in a single thread? Yes. -- /jve ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
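The split John describes can be sketched without any real network: here the NNTP handle is replaced by a plain list of lines fed into a `Chan`, and the consumer simply blocks on `readChan`, so no timeouts or polling are needed. All names below are made up for the example; a real client would have the producer call `hGetLine` on the socket handle instead.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan

-- Reader thread: push line-delimited input onto the channel.
-- (In a real client this would loop over hGetLine on the network handle.)
feedLines :: Chan String -> [String] -> IO ()
feedLines chan = mapM_ (writeChan chan)

-- Consumer: take n lines, blocking until each one is available.
takeLines :: Chan String -> Int -> IO [String]
takeLines chan n = mapM (const (readChan chan)) [1 .. n]

-- Wire the two together with a forked producer thread.
demo :: IO [String]
demo = do
  chan <- newChan
  _ <- forkIO (feedLines chan ["200 OK", "215 list follows"])
  takeLines chan 2
```

Because `readChan` blocks, the consumer never busy-waits and never deadlocks the way the lazy `hGetContents`-plus-`getInput` approach did; the parser can then run over complete lines pulled from the channel.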
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
Lennart Augustsson wrote: Making a network stack from peek and poke is easy in a well structured OS. The boot loader (or whatever) hands you the capability (call it something else if you want) to do raw hardware access, and you build from there. If you look at well structured OSs like NetBSD, this is pretty much how they work. No hardware drivers use global variables. So? We all know this is possible outside Haskell. But I don't want to rely on mysterious black box OS's to hand me the capability any more than I want to rely on mysterious extant but unimplementable libs like Data.Unique. Most real world computing infrastructure uses no OS at all. How could I use Haskell to implement such systems? Also (to mis-quote Linus Torvalds) could you or anyone else who agrees with you please SHOW ME THE CODE in *Haskell*! If scripture is all that's on offer I'm just not going to take any of you seriously. Frankly I'm tired of the patronising lectures that always accompany these threads. It'd be good if someone who knows global variables are evil could put their code where their mouth is for a change. Fixing up the base libs to eliminate the dozen or so uses of the unsafePerformIO hack might be a good place to start. I'll even let you change the API of these libs if you must, provided you can give a sensible explanation why the revised API is better, safer, more convenient or whatever. Regards -- Adrian Hey ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
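For reference, the "unsafePerformIO hack" Adrian mentions is the familiar top-level mutable cell. This is a sketch of the idiom, not the actual base implementation:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A top-level mutable cell created with unsafePerformIO. The NOINLINE
-- pragma is essential (the GHC docs also recommend -fno-cse on the
-- module): without it the compiler may duplicate the definition and
-- create several cells.
{-# NOINLINE counter #-}
counter :: IORef Integer
counter = unsafePerformIO (newIORef 0)

-- Hand out unique integers, roughly the shape of what Data.Unique
-- does internally (this sketch is not its real code).
newUniqueInt :: IO Integer
newUniqueInt = atomicModifyIORef counter (\n -> (n + 1, n))
```

This is exactly the pattern the proposed top-level `<-` bindings (and the ACIO restriction) aim to replace with something the compiler can actually reason about.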
Re: [Haskell-cafe] Parsec and network data
Hello, Polyparse has some lazy parsers: http://www.cs.york.ac.uk/fp/polyparse/ Perhaps that would do the trick? j. At Tue, 26 Aug 2008 15:35:28 -0500, brian wrote: Hi, I've been struggling with this problem for days and I'm dying. Please help. I want to use Parsec to parse NNTP data coming to me from a handle I get from connectTo. One unworkable approach I tried is to get a lazy String from the handle with hGetContents. The problem: suppose the first message from the NNTP server is 200 OK\r\n. Parsec parses it beautifully. Now I need to discard the parsed part so that Parsec will parse whatever the server sends next, so I use Parsec's getInput to get the remaining data. But there isn't any, so it blocks. Deadlock: the client is inappropriately waiting for server data and the server is waiting for my first command. Another approach that doesn't quite work is to create an instance of Parsec's Stream with timeout functionality:

instance Stream Handle IO Char where
  uncons h = do
    r <- hWaitForInput h ms
    if r
      then liftM (\c -> Just (c, h)) (hGetChar h)
      else return Nothing
    where ms = 5000

It's probably obvious to you why it doesn't work, but it wasn't to me at first. The problem: suppose you tell parsec you're looking for (many digit) followed by (string \r\n). 123\r\n won't match; 123\n will. My Stream has no backtracking. Even if you don't need 'try', it won't work for even basic stuff. Here's another way: http://www.mail-archive.com/haskell-cafe@haskell.org/msg22385.html The OP had the same problem I did, so he made a variant of hGetContents with timeout support. The problem: he used something from unsafe*. I came to Haskell for rigor and reliability and it would make me really sad to have to use a function with 'unsafe' in its name that has a lot of wacky caveats about inlining, etc. In that same thread, Bulat says a timeout-enabled Stream could help. But I can't tell what library that is. 
'cabal list stream' shows me 3 libraries none of which seems to be the one in question. Is Streams a going concern? Should I be checking that out? I'm not doing anything with hGetLine because 1) there's no way to specify a maximum number of characters to read 2) what is meant by a line is not specified 3) there is no way to tell if it read a line or just got to the end of the data. Even using something like hGetLine that worked better would make the parsing more obscure. Thank you very very much for *any* help. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] contributing to standard library
Like everyone else who has used Haskell for a while, I'm accumulating functions which I feel should have already been in the standard libraries. What's the normal path to contributing functions for consideration in future standard libraries? Is there some experimental standard lib that we can contribute to to try out for the big league? Here are some functions: http://www.thenewsh.com/%7Enewsham/x/machine/Missing.hs Tim Newsham http://www.thenewsh.com/~newsham/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] contributing to standard library
On 2008 Aug 26, at 17:49, Tim Newsham wrote: Like everyone else who has used Haskell for a while, I'm accumulating functions which I feel should have already been in the standard libraries. What's the normal path to contributing functions for consideration in future standard libraries? Is there some experimental standard lib that we can contribute to to try out for the big league? Here are some functions: http://www.thenewsh.com/%7Enewsham/x/machine/Missing.hs I think MissingH has served for that in the past. The official route is [EMAIL PROTECTED] -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED] system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED] electrical and computer engineering, carnegie mellon university KF8NH ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] two problems with Data.Binary and Data.ByteString
On Mon, Aug 25, 2008 at 2:28 PM, Don Stewart [EMAIL PROTECTED] wrote: I've pushed a decodeFile that does a whnf on the tail after decoding. Does this mean that there are now NFData instances for bytestrings? That would be handy. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] contributing to standard library
functions which I feel should have already been in the standard libraries. Have you tried searching with Hoogle for the types of your functions? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] two problems with Data.Binary and Data.ByteString
bos: On Mon, Aug 25, 2008 at 2:28 PM, Don Stewart [EMAIL PROTECTED] wrote: I've pushed a decodeFile that does a whnf on the tail after decoding. Does this mean that there are now NFData instances for bytestrings? That would be handy. No, since I can get whnf with `seq`. However, that does sound like a good idea (a patch to the parallel library?) ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: [Haskell] Top Level <-
Judah Jacobson wrote: I think you're saying that you want to write w <- newIOWitness at the top level, so that w can then be referenced in a pure function. Fair enough. But newIOWitness's implementation requires writeIORef (or an equivalent), which is not ACIO, right? newIOWitness is very like newUnique. In both cases, the internal implementation updates an MVar to make them unique. Internally the open-witness package would use unsafeIOtoACIO (just as it already uses unsafeCoerce), but an exposed newIOWitnessACIO would be safe.

oneshot uniqueVar :: IO (TVar Integer)
uniqueVar = atomically $ newTVar 0 -- alternately, use newTVarIO

uniqueIntSTM :: IO (STM Integer)
uniqueIntSTM = uniqueVar >>= \v -> return $ do
  n <- readTVar v
  writeTVar v (n+1)
  return n

getUniqueInt :: IO Integer
getUniqueInt = uniqueIntSTM >>= atomically

This complicates the purpose of STM, which is to make composable STM transactions. I would rather do this:

uniqueVar :: TVar Integer
uniqueVar <- newTVarACIO

uniqueInt :: STM Integer
uniqueInt = do
  n <- readTVar uniqueVar
  writeTVar uniqueVar (n+1)
  return n

AFAICT, one-shots are less powerful and just as complicated as an ACIO monad. -- Ashley Yakeley ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] contributing to standard library
On Tue, Aug 26, 2008 at 2:49 PM, Tim Newsham [EMAIL PROTECTED] wrote: Like everyone else who has used Haskell for a while, I'm accumulating functions which I feel should have already been in the standard libraries. What's the normal path to contributing functions for consideration in future standard libraries? Is there some experimental standard lib that we can contribute to to try out for the big league? Here are some functions: http://www.thenewsh.com/%7Enewsham/x/machine/Missing.hs The official process for proposing a change to the standard libraries is documented at: http://www.haskell.org/haskellwiki/Library_submissions -Judah ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell audio libraries audio formats
On Tue, 26 Aug 2008, John Van Enk wrote: I think the problem I'll run into is the 128 channel case. I'm hoping for a general solution... I'm almost positive this will require runtime checks. Your solution is what I was thinking for functions requiring exactly N channels (I'm not sure if there are many functions like that). If the number of channels is variable it might be better to use a list of StorableVectors instead. I think it is more common to process the channels separately instead of simultaneously. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] two problems with Data.Binary and Data.ByteString
On Tue, Aug 26, 2008 at 3:04 PM, Don Stewart [EMAIL PROTECTED] wrote: No, since I can get whnf with `seq`. However, that does sound like a good idea (a patch to the parallel library? ) I suspect that patching parallel doesn't scale. It doesn't have a maintainer, so it will be slow, and the package will end up dragging in everything under the sun if we centralise instances in there. I think that the instance belongs in bytestring instead. I know that this would make everything depend on parallel, but that doesn't seem as bad a problem. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
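The instance under discussion would be a one-liner, since a strict ByteString is fully evaluated once it is in WHNF. Here is a sketch written against deepseq, today's home of NFData (in 2008 the class lived in the parallel package's Control.Parallel.Strategies), and wrapped in a hypothetical newtype so it doesn't clash with the instance the bytestring package now ships:

```haskell
import qualified Data.ByteString as B
import Control.DeepSeq (NFData (..))

-- Hypothetical wrapper, purely to let us write the instance here
-- without overlapping an existing one.
newtype BS = BS B.ByteString

-- A strict ByteString carries no unevaluated structure beyond WHNF,
-- so reducing it to normal form is just seq.
instance NFData BS where
  rnf (BS bs) = bs `seq` ()
```

This is why Don can say `seq` already gives him what he needs for strict ByteStrings; the interesting case is lazy ByteStrings, where `rnf` would have to walk the chunk list.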
Re: [Haskell-cafe] Parsec and network data
On Tue, Aug 26, 2008 at 3:43 PM, John Van Enk [EMAIL PROTECTED] wrote: Perhaps you'll want to continue with the hGetLine setup in one thread (assuming the NNTP data is line delimited), then in another, parse the data, then in a third, respond. Sorry if my writing was unclear. I think hGetLine is really unsuited for doing anything with data from a network. It's like the Haskell equivalent of gets(3). I think it's only suitable for quick tests or toy programs. The only way I can think to make it a little safer is to wrap it in a timeout, and that'd still be really bad. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: problem with cabal for syb-with-class
On Tue, 2008-08-26 at 15:53 +0100, Claus Reinke wrote: GHC knows what the problem is but it's just following orders. Cabal gave the orders but doesn't know there is a problem. Since you said don't hold your breath for Cabal's dependencies: Cabal doesn't have to pass on ghc's messages uninterpreted. That's a lot like implementing a map as a list and complaining about empty list instead of element not found. I see what you're saying, but in practise it's just not possible. GHC can return a non-zero exit code for a multitude of reasons (most of which will be genuine errors in your source code). It's just not practical to parse the error messages that ghc produces and try and reinterpret them. I fear it'd quite easy to introduce more problems than are solved this way. If one wanted to take this approach you'd need to have some mode where error messages are produced in a machine readable format (which is of course doable if you write a client using the ghc api). A quicker hack would be to change ghcs error message in this circumstance, where the -hide-all-packages flag is given. Given our limited amount of volunteer developer time I think it's much better investing it in proper solutions. Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: zip-archive 0.0
On Mon, 2008-08-25 at 23:22 -0700, John MacFarlane wrote: I've written a library, zip-archive, for dealing with zip archives. Great. I saw your query about this from a month ago. Haddock documentation (with links to source code): http://johnmacfarlane.net/zip-archive/ Darcs repository: http://johnmacfarlane.net/repos/zip-archive/ It comes with an example program that duplicates some of the functionality of 'zip' (configure with '-fexecutable' to build it). I intend to put it on HackageDB, but I thought I'd get some feedback first. Bug reports, patches, and suggestions on the API are all welcome. Generally it looks good, that the operations on the archive are mostly separated from IO of writing out archives or creating entries from disk files etc. Looking at the API there feels to be slightly too much exposed. Eg does the MSDOSDateTime need to be exposed, or the (de)compressData functions. I've been reworking the tar library recently and currently have an api that looks like:

-- * Reading and writing the tar format
read :: ByteString -> Entries
write :: [Entry] -> ByteString

-- * Packing and unpacking files to\/from a tar archive
pack :: FilePath -> FilePath -> IO [Entry]
unpack :: FilePath -> Entries -> IO ()

Entry is like your ZipEntry. Entries is a little special. Tar is really a linear/streamable format, we typically read the file front to back. Of course with zip it's more complex as you have an index (right?) and you can jump around without reading all the data. So Entries represents the unfolding of a tar file as a sequence of entries, but with the possibility of failure (eg format decoding failures):

-- | A tar archive is a sequence of entries.
data Entries = Next Entry Entries
             | Done
             | Fail String

So that's why we have Entries for the result of decoding and just an ordinary list for the input to encoding. Zip is more complex of course because you often want to add files to existing archives, or lookup individual entries without just iterating through each entry. 
My personal inclination is to leave off the Zip prefix in the names and use qualified imports. I'd also leave out trivial compositions like

readZipArchive f = toZipArchive <$> B.readFile f
writeZipArchive f = B.writeFile f . fromZipArchive

but reasonable people disagree.

For both the pack in my tar lib and your addFilesToZipArchive, there's a getDirectoryContentsRecursive function asking to get out. This function seems to come up often. Ideally pack/unpack and addFilesToZipArchive/extractFilesFromZipArchive would just be mapM_ extract or create for an individual entry over the contents of the archive or the result of a recursive traversal.

So yeah, I feel these operations ought to be simpler compositions of other things, in your lib and mine, since this bit is often the part where different use cases need slight variations, e.g. in how they write files, or deal with OS-specific permissions/security stuff. So if these are compositions of simpler stuff, it should be easier to add in extra stuff or replace bits.

Duncan
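For illustration, here is one plausible shape for the getDirectoryContentsRecursive that keeps "asking to get out". The name follows the suggestion above, but the details (traversal order, no symlink or permission handling) are guesses for the sake of a sketch, not any library's real implementation.

```haskell
import Control.Monad (forM)
import System.Directory (getDirectoryContents, doesDirectoryExist)
import System.FilePath ((</>))

-- Recursively list the files (not directories) under a directory.
-- A minimal sketch: no symlink detection, no special permission handling.
getDirectoryContentsRecursive :: FilePath -> IO [FilePath]
getDirectoryContentsRecursive dir = do
  names <- getDirectoryContents dir
  -- getDirectoryContents includes "." and ".."; drop them.
  let proper = filter (`notElem` [".", ".."]) names
  paths <- forM proper $ \name -> do
    let path = dir </> name
    isDir <- doesDirectoryExist path
    if isDir
      then getDirectoryContentsRecursive path  -- recurse into subdirectory
      else return [path]                       -- plain file: keep it
  return (concat paths)
```

With something like this factored out, pack-style operations really can become a mapM_ of a per-entry create over its result.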
Re: [Haskell-cafe] two problems with Data.Binary and Data.ByteString
On Tue, 2008-08-26 at 15:31 -0700, Bryan O'Sullivan wrote: On Tue, Aug 26, 2008 at 3:04 PM, Don Stewart [EMAIL PROTECTED] wrote: No, since I can get whnf with `seq`. However, that does sound like a good idea (a patch to the parallel library?) I suspect that patching parallel doesn't scale. It doesn't have a maintainer, so it will be slow, and the package will end up dragging in everything under the sun if we centralise instances in there. I think that the instance belongs in bytestring instead. I know that this would make everything depend on parallel, but that doesn't seem as bad a problem.

This is a general problem we have with packages and instances. Perhaps in this specific case it wouldn't cause many problems to make bytestring depend on parallel (though it means bytestring cannot be a boot lib and cannot be used to implement basic IO), but in general it can be a problem. I can't see any obvious solutions. We don't want lots of tiny packages that just depend on two other packages and define an instance.

Duncan
[Haskell-cafe] unsafeInterleaveIO, lazyness and sharing
Hello, Haskell is non-strict but not necessarily lazy. So it’s possible that an expression is reduced to WHNF although it is not used yet. Could this “early reduction” also happen to outputs of unsafeInterleaveIO actions (which might trigger the action too early)? While I’d expect those outputs to be evaluated lazily (reduced as late as possible), I cannot find anything in the docs that guarantees this. In addition, I’d like to know whether unsafeInterleaveIO outputs are guaranteed to be evaluated at most once so that the “interleaved action” is executed at most once. Again, I suppose that this is the case while I cannot find a guarantee for it. Best wishes, Wolfgang
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
On Tue, Aug 26, 2008 at 01:14:34AM -0700, Judah Jacobson wrote: On Tue, Aug 26, 2008 at 12:07 AM, Adrian Hey [EMAIL PROTECTED] wrote: But from a top level aThing <- someACIO point of view, if we're going to say that it doesn't matter if someACIO is executed before main is entered (possibly even at compile time) or on demand, then we clearly don't want to observe any difference between the latter case and the former (if aThing becomes garbage without ever being demanded). Maybe it would be safest to just say anything with a finaliser can't be created at the top level. We can always define an appropriate top level get IO action using runOnce or whatever.

I've been wondering: is there any benefit to having top-level ACIO'd <- instead of just using runOnce (or perhaps oneshot) as the primitive for everything? For example:

oneshot uniqueRef :: IO (MVar Integer)
uniqueRef = newMVar 0

It was also suggested in that wiki page: http://haskell.org/haskellwiki/Top_level_mutable_state#Proposal_4:_Shared_on-demand_IO_actions_.28oneShots.29 Those proposals eliminate the need for creating an ACIO monad and enforcing its axioms, since one-shot actions are executed in-line with other I/O actions (rather than at some nebulous "before the program is run" time).

Actually, due to the definition of ACIO, there is no difference between the two (for actions actually in ACIO). It was formulated to have this property. Both implementations (executing them before the program is run, or on first call) and ways of thinking about things are valid and will be indistinguishable for all proper ACIO actions. Note, you can implement oneshot IO actions on top of ACIO top level actions, but not the reverse. I think ACIO is cleaner overall, since we have a nice formal definition of when ACIO actions are valid without having to invoke the more complicated IO monad.
John -- John Meacham - ⑆repetae.net⑆john⑈
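To make the oneshot/runOnce idea concrete for readers, here is a sketch of a runOnce combinator written with today's ordinary IO machinery (an MVar caching the result). The proposals in this thread would instead make something like this available as a top-level binding form, so this is only an illustration of the intended run-at-most-once behaviour, not the proposed semantics themselves.

```haskell
import Control.Concurrent.MVar (newMVar, modifyMVar)
import Data.IORef (newIORef, modifyIORef, readIORef)

-- Wrap an action so that it runs at most once; later calls
-- return the cached result. The MVar also serialises first-callers.
runOnce :: IO a -> IO (IO a)
runOnce act = do
  ref <- newMVar Nothing
  return $ modifyMVar ref $ \st -> case st of
    Just x  -> return (Just x, x)      -- already ran: reuse cached result
    Nothing -> do x <- act             -- first call: run the action once
                  return (Just x, x)

main :: IO ()
main = do
  getRef <- runOnce $ do
    putStrLn "initialising"            -- printed at most once
    newIORef (0 :: Int)
  r1 <- getRef
  r2 <- getRef                         -- same IORef; no second "initialising"
  modifyIORef r1 (+ 1)
  readIORef r2 >>= print               -- prints 1: r1 and r2 are shared
```

The difference the thread is debating is where the shared state itself lives: here it is created inside main and threaded around, whereas the top-level <- proposals would let uniqueRef-style bindings exist at module scope without the unsafePerformIO hack.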
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
On Tue, Aug 26, 2008 at 08:07:24AM +0100, Adrian Hey wrote: But from a top level aThing <- someACIO point of view, if we're going to say that it doesn't matter if someACIO is executed before main is entered (possibly even at compile time) or on demand, then we clearly don't want to observe any difference between the latter case and the former (if aThing becomes garbage without ever being demanded). Maybe it would be safest to just say anything with a finaliser can't be created at the top level. We can always define an appropriate top level get IO action using runOnce or whatever.

If the finalizer is also in the weaker form of ACIO (ACIO under the "no more references exist to its argument" presumption, maybe called a 'linearity condition' or something?), then it shouldn't matter at all. I can't think of any finalizers that don't obey this property that weren't problematic under the old model to begin with.

John -- John Meacham - ⑆repetae.net⑆john⑈
[Haskell-cafe] Re: [Haskell] Re:Fwd: Haskell job opportunity: Platform Architect at
On Wed, Aug 20, 2008 at 03:17:14PM -0700, Jason Dusek wrote: What is your company going to do? What sort of dot-com attitude is that? Your company does whatever the buzzword that the venture capitalist you are currently talking to is enthralled with. :) 'monadic B2B ultra-wideband catamorphic connection oriented infospherespace' John -- John Meacham - ⑆repetae.net⑆john⑈
Re: [Haskell-cafe] Re: [Haskell] Re:Fwd: Haskell job opportunity: Platform Architect at
John Meacham [EMAIL PROTECTED] wrote: Your company does whatever the buzzword that the venture capitalist you are currently talking to is enthralled with. :) Then we'll need some green, renewable monads! -- _jsn
Re: [Haskell-cafe] unsafeInterleaveIO, lazyness and sharing
Wolfgang, Haskell is non-strict but not necessarily lazy. So it's possible that an expression is reduced to WHNF although it is not used yet. Could this early reduction also happen to outputs of unsafeInterleaveIO actions (which might trigger the action too early)? While I'd expect those outputs to be evaluated lazily (reduced as late as possible), I cannot find anything in the docs that guarantees this. unsafeInterleaveIO allows IO computation to be deferred lazily. When passed a value of type IO a, the IO will only be performed when the value of the a is demanded. This is used to implement lazy file reading, see hGetContents. http://haskell.org/ghc/docs/latest/html/libraries/base/System-IO-Unsafe.html#v:unsafeInterleaveIO Is this the kind of guarantee you're looking for? I'd bet against getting any decent durable, portable guarantees for any unsafe* functions in Haskell; but the above behavioral description may be strong enough to suit you. In addition, I'd like to know whether unsafeInterleaveIO outputs are guaranteed to be evaluated at most once so that the interleaved action is executed at most once. Again, I suppose that this is the case while I cannot find a guarantee for it. I'd be surprised if an implementation didn't have that behavior. I'd also be wary of anyone claiming to guarantee it, beyond compiler X version Y. John Dorsey
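A small experiment illustrates both of Wolfgang's questions; note this shows GHC's observed behaviour, not a documented guarantee, which is exactly the caveat above. An IORef counter records when (and how often) the interleaved action actually runs.

```haskell
import Data.IORef (newIORef, modifyIORef, readIORef)
import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  -- Defer the action: it should run only when 'x' is demanded.
  x <- unsafeInterleaveIO $ do
         modifyIORef counter (+ 1)
         return (42 :: Int)
  before <- readIORef counter
  print before        -- 0 on GHC: nothing has demanded x yet
  print (x + x)       -- 84: demanding x twice...
  after <- readIORef counter
  print after         -- 1 on GHC: ...runs the action only once
```

The sharing of x's thunk is what makes the action run at most once here; as the reply notes, that is an implementation property rather than something the unsafe* functions promise portably.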
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
I told you where to look at code. It's C code, mind you, but written in a decent way. No well written device driver ever accesses memory or IO ports directly; doing so would seriously hamper portability. Instead you use an abstraction layer to access the hardware, and the driver gets passed a bus (whatever that might be) access token (akin to a capability). I know you're not going to be convinced, so I won't even try. :) -- Lennart On Tue, Aug 26, 2008 at 9:47 PM, Adrian Hey [EMAIL PROTECTED] wrote: Lennart Augustsson wrote: Making a network stack from peek and poke is easy in a well structured OS. The boot loader (or whatever) hands you the capability (call it something else if you want) to do raw hardware access, and you build from there. If you look at well structured OSs like NetBSD, this is pretty much how they work. No hardware drivers use global variables. So? We all know this is possible outside Haskell. But I don't want to rely on mysterious black box OS's to hand me the capability any more than I want to rely on mysterious extant but unimplementable libs like Data.Unique. Most real world computing infrastructure uses no OS at all. How could I use Haskell to implement such systems? Also (to mis-quote Linus Torvalds) could you or anyone else who agrees with you please SHOW ME THE CODE in *Haskell*! If scripture is all that's on offer I'm just not going to take any of you seriously. Frankly I'm tired of the patronising lectures that always accompany these threads. It'd be good if someone who knows global variables are evil could put their code where their mouth is for a change. Fixing up the base libs to eliminate the dozen or so uses of the unsafePerformIO hack might be a good place to start. I'll even let you change the API of these libs if you must, provided you can give a sensible explanation why the revised API is better, safer, more convenient or whatever.
Regards -- Adrian Hey
Re: [Haskell-cafe] Re: [Haskell] Top Level <-
BTW, I'm not contradicting that the use of global variables can be necessary when interfacing with legacy code, I just don't think it's the right design when doing something new. -- Lennart
[Haskell-cafe] Re: [Haskell] Top Level <-
Lennart Augustsson wrote: No hardware drivers use global variables. No problem, write your hardware drivers in a different monad. Thus IO is the type for code that can use global variables, and H (or whatever) is the type for code that must not. -- Ashley Yakeley
Re: [Haskell-cafe] unsafeInterleaveIO, lazyness and sharing
On Wed, 2008-08-27 at 01:48 +0200, Wolfgang Jeltsch wrote: Hello, Haskell is non-strict but not necessarily lazy. So it’s possible that an expression is reduced to WHNF although it is not used yet. Could this “early reduction” also happen to outputs of unsafeInterleaveIO actions (which might trigger the action too early)? While I’d expect those outputs to be evaluated lazily (reduced as late as possible), I cannot find anything in the docs that guarantees this. In addition, I’d like to know whether unsafeInterleaveIO outputs are guaranteed to be evaluated at most once so that the “interleaved action” is executed at most once. Again, I suppose that this is the case while I cannot find a guarantee for it. I believe ghc does provide the behaviour you want, almost. I say almost because there is an issue with concurrency. In early versions of ghc's smp implementation it was easy to set up an experiment where two threads would pull on the result of hGetContents and observe different results. This was because different threads could occasionally enter an IO thunk simultaneously. This is now fixed but I understand the fix is not an absolute guarantee. This concurrency difference is the distinction between unsafeInterleaveIO and unsafeDupableInterleaveIO. unsafeDupableInterleaveIO itself is not currently documented but the difference is the same as for unsafeDupablePerformIO which is documented as: This version of 'unsafePerformIO' is slightly more efficient, because it omits the check that the IO is only being performed by a single thread. Hence, when you write 'unsafeDupablePerformIO', there is a possibility that the IO action may be performed multiple times (on a multiprocessor), and you should therefore ensure that it gives the same results each time. Though as I said, the check that the IO is only being performed by a single thread is apparently not an absolute guarantee. The details of why exactly I do not fully understand. 
If you want a full explanation ask Simon Marlow. Duncan
Re: [Haskell-cafe] Re: ANN: First Monad Tutorial of the Season
Lennart Augustsson wrote: The values Z, S Z, and S (S Z) all have the same runtime representation and there is no linear increase in size when you add an extra S. BUT, if you make something overloaded and there is a dictionary associated with the type (Z, S Z, or S (S Z)) then the dictionary takes up space, and that space is linear in the number of S constructors.

Ah yes, that makes more sense. Since your instance would look like:

instance Foo a => Foo (S a) where
  foo :: a -> Int

a dictionary for Foo (S (S Z)) would have entries for foo@(S (S Z)) and also the dictionary for Foo (S Z), which has foo@(S Z) and a dictionary for Foo Z, which has... It's still something to watch out for if you're really worrying about performance. I wonder if this is documented on the wiki's section about performance anywhere, the overhead for inductive type class instances I mean.

-- Live well, ~wren
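A minimal, compilable version of the inductive instances being discussed (the class and method names here are illustrative, not from either post). Each Nat (S n) dictionary closes over the dictionary for Nat n, which is where the linear space Lennart describes comes from, even though the values themselves share one runtime representation.

```haskell
-- Type-level Peano numerals.
data Z   = Z
data S n = S n

class Nat n where
  toInt :: n -> Int

-- Base dictionary: no nested dictionary inside.
instance Nat Z where
  toInt _ = 0

-- Inductive instance: the dictionary for Nat (S n) carries the
-- dictionary for Nat n, so dictionary size grows with each S layer.
instance Nat n => Nat (S n) where
  toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt (S (S (S Z))))   -- 3
```

At call sites where GHC can resolve the instance statically, specialisation usually flattens this away; the cost shows up when the dictionaries are passed at runtime, e.g. through polymorphic recursion or existentials.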
RE: [Haskell-cafe] Haskell Propeganda
David Roundy wrote: Which illustrates the point that it's not type safety that protects us from segfaults, so much as bounds checking, and that's got a non-trivial runtime cost. At least, most segfaults that *I've* caused (in C or C++) have been from overwriting the bounds of arrays, and that's precisely the problem that Haskell does *not* solve using its type system. That differs from my experience. Most segfaults that *I've* caused (in C or C++) have been due to dereferencing null pointers. Type safety does help you here, in that Maybe lets you distinguish the types of things that are optionally present from those that must be. Tim
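Tim's point can be made concrete in a few lines: where a C API would hand back a possibly-NULL pointer, Prelude's lookup returns a Maybe, and the type checker makes the "null" case impossible to forget. (The names and table below are made up for illustration.)

```haskell
-- A lookup that may fail: the possibility is visible in the type,
-- unlike a C function returning a pointer that might be NULL.
portOf :: String -> Maybe Int
portOf host = lookup host [("localhost", 8080), ("example.com", 80)]

-- Callers must pattern-match; forgetting the Nothing case is a
-- compile-time warning/error, not a runtime segfault.
describe :: String -> String
describe host = case portOf host of
  Nothing -> host ++ ": no known port"
  Just p  -> host ++ ":" ++ show p

main :: IO ()
main = mapM_ (putStrLn . describe) ["localhost", "nowhere"]
```

Array bounds, as David says, are a separate matter: Maybe moves null-dereference errors into the type system, but indexing errors still need runtime checks (or fancier types).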
[Haskell-cafe] Building SDL-image package on Windows
Hello, I'm trying to build the latest SDL-image package (0.5.2) from Hackage on Windows and encountering problems. These are the steps I've taken so far:

1. Downloaded the SDL 1.2.13 development library for Mingw32 to E:\SDL-1.2.13, and the SDL_image 1.2.6 development library for VC8 to E:\SDL_image-1.2.6.
2. Installed the SDL package from Hackage, modifying the SDL.cabal according to the included WIN32 readme file and then running runghc Setup.lhs configure/build/install.
3. Downloaded the SDL-image package from Hackage, and modified the SDL-image.cabal file to add the line

Include-Dirs: E:\SDL_image-1.2.6\include\SDL, E:\SDL-1.2.13\include\SDL

so Cabal can find the header files.

After doing runghc Setup.lhs configure, runghc Setup.lhs build -v gives me the following output:

Creating dist\build (and its parents)
Creating dist\build\autogen (and its parents)
Preprocessing library SDL-image-0.5.2...
Creating dist\build\Graphics\UI\SDL\Image (and its parents)
E:\ghc\ghc-6.8.2\bin\hsc2hs.exe --cc=E:\ghc\ghc-6.8.2\bin\ghc.exe --ld=E:\ghc\ghc-6.8.2\bin\ghc.exe --cflag=-package --cflag=SDL-0.5.4 --cflag=-package --cflag=base-3.0.1.0 --cflag=-IE:\SDL_image-1.2.6\include\SDL --cflag=-IE:\SDL-1.2.13\include\SDL -o dist\build\Graphics\UI\SDL\Image\Version.hs Graphics\UI\SDL\Image\Version.hsc
E:/ghc/ghc-6.8.2/libHSrts.a(Main.o)(.text+0x7):Main.c: undefined reference to `__stginit_ZCMain'
E:/ghc/ghc-6.8.2/libHSrts.a(Main.o)(.text+0x36):Main.c: undefined reference to `ZCMain_main_closure'
collect2: ld returned 1 exit status
linking dist\build\Graphics\UI\SDL\Image\Version_hsc_make.o failed
command was: E:\ghc\ghc-6.8.2\bin\ghc.exe dist\build\Graphics\UI\SDL\Image\Version_hsc_make.o -o dist\build\Graphics\UI\SDL\Image\Version_hsc_make.exe

The results of a limited Google search suggest that the __stginit_ZCMain linker error has to do with GHC expecting a main function, but I'm not really sure how that works in the context of a library.
Re: [Haskell-cafe] Cleaning up the Debian act (report from the trenches)
On Mon, Aug 25, 2008 at 9:10 AM, Mads Lindstrøm [EMAIL PROTECTED] wrote: Hi Ketil Malde wrote: I've had an interested user, who tried to get one of my programs to run on a Debian machine - running Debian Etch, released a couple of months ago. Here are some of the hurdles stumbled upon in the process: Debian Etch was released on April 8th, 2007. 16 months ago. Hardly a couple of months ago. See http://www.debian.org/News/2007/20070408.en.html . Sure, there have been updates since then, but they are mainly concerned with security and drivers for new hardware. 1. Etch comes with ghc-6.6, and that didn't work with my .cabal file. 2. ghc-6.8.3, presumably the binary snapshots, didn't work, neither in i386 nor in x86_64 incarnation. 3. ghc 6.8.1-i386 appears to work, but some of the dependencies failed to compile (tagsoup, in this case) 4. A precompiled (by me), statically linked binary refuses to run with a message of FATAL: kernel too old. Granted, not all of this is our fault, but it won't help users to start charging the windmills of Debian conservativism. We really need to make this process smoother, and ideally, provide debs for Etch backports. I'm not sure how to go about any of this, beyond debianizing my own packages. But that's why I'm telling you. :-) There are several options: 1) Use the testing or unstable branch of Debian. They have newer packages. Testing (aka. Lenny) has GHC 6.8.2 http://packages.debian.org/lenny/ghc6 . I'd stay away from 6.8.2 if I were you. It has at least one annoying bug that was fixed in 6.8.3. The one I'm thinking of is getSymbolicLinkStatus returning bogus mtimes on some 32bit platforms. 2) Compile GHC yourself. You can even compile and install GHC (and most Haskell software) on a dedicated user account. In this way you avoid messing up your Debian installation if something goes wrong. I find with Debian this is the way to go.
Install your system and use Debian's packages for everything, and then install your own copy of anything for which you care what version you're running. Not everyone will like this option, but I find it's a decent balance between using what Debian provides and getting the satisfaction of using the versions of things I care about. Jason