Re: [Haskell-cafe] a general question on Arrows
On Thu, Feb 14, 2008 at 12:50 AM, Steve Lihn [EMAIL PROTECTED] wrote: He described a few things that cannot be represented as a monad, they are: 1. Stream This is actually a comonad. 2. FRP Depends on which FRP you're talking about. This could be the stream comonad + the event monad. Here's another fun arrow: http://luqui.org/blog/archives/2007/09/06/quantum-entanglement-in-haskell/ Luke ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
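[Editorial aside: a minimal sketch of the Stream comonad mentioned above. The names (extract, duplicate, extend) follow the usual comonad presentation and are not tied to any particular library.]

```haskell
-- Infinite streams form a comonad: extract reads the current value,
-- duplicate exposes all suffixes, extend applies a "local" observation
-- at every position.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

-- counit: the head of the stream
extract :: Stream a -> a
extract (Cons x _) = x

-- comultiplication: the stream of all suffixes
duplicate :: Stream a -> Stream (Stream a)
duplicate s@(Cons _ xs) = Cons s (duplicate xs)

-- cobind: run an observation over every suffix
extend :: (Stream a -> b) -> Stream a -> Stream b
extend f = fmap f . duplicate

takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))
```

For example, `takeS 4 (extend (\(Cons a (Cons b _)) -> a + b) nats)` computes a sliding pairwise sum, giving `[1,3,5,7]` — the "context-dependent" shape of computation that comonads capture and monads do not.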
Re: [Haskell-cafe] Re: A question about monad laws
Richard A. O'Keefe wrote: On 14 Feb 2008, at 6:01 pm, Roman Leshchinskiy wrote: I don't understand this. Why use a type which can overflow in the first place? Why not use Integer? [...] Presumably the reason for having Int in the language at all is speed. As people have pointed out several times on this list to my knowledge, Integer performance is not as good as Int performance, not hardly, and it is silly to pay that price if I don't actually need it. Do I understand correctly that you advocate using overflowing ints (even if they signal overflow) even if Integers are fast enough for a particular program? I strongly disagree with this. It's premature optimisation of the worst kind - trading correctness for unneeded performance. SafeInt is what you should use when you *THINK* your results should all fit in a machine int but aren't perfectly sure. (And this is nearly all the time.) Again, I strongly disagree. You should use Integer unless your program is too slow and profiling shows that Integer is the culprit. If and only if that is the case should you think about alternatives. That said, I doubt that your SafeInt would be significantly faster than Integer. You just have to check for exceptional conditions. Why should it be *MY* job to check for exceptional conditions? It shouldn't unless you use a type whose contract specifies that it's your job to check for them. Which is the case for Int, Float and Double. Wrong. You're confusing two things here. One is Float and Double, where we get in serious trouble WITHOUT ANY EXCEPTIONAL CONDITIONS IN SIGHT. The other is Int overflow. I'm not sure what I'm confusing here, my comment referred specifically to exceptional conditions which floats provide plenty of. As to getting in trouble, I don't need floats for that, I manage to do it perfectly well with any data type including (). 
Seriously, though, I think we agree that using floating point numbers correctly isn't trivial, people who do that should know what they are doing and should best use existing libraries. I just don't see how floats are special in this regard. The checking I am talking about is done by the hardware at machine speeds and provides *certainty* that overflow did not occur. So you advocate using different hardware? If you think that, you do not understand floating point. x+(y+z) == (x+y)+z fails even though there is nothing exceptional about any of the operands or any of the results. For all practical purposes, the semantics of (==) is not well defined for floating point numbers. With respect to IEEE arithmetic, wrong. Yes, IEEE does define an operation which is (wrongly, IMO) called equality. It's not a particularly useful operation (and it is not equality), but it does have a defined semantics. However, the semantics of (==) on floats isn't really defined in Haskell or C, for that matter, even if you know that the hardware is strictly IEEE conformant. In general, floating point numbers do not really have a useful notion of equality. They are approximations, after all, and independently derived approximations can only be usefully tested for approximate equality. And yes, x+(y+z) is approximately equal to (x+y)+z for floats. How approximate depends on the particular values, of course. That's one of the first things I used to teach my students about floats: *never* compare them for equality. That's one of the warning signs I watch out for. Never compare floats for equality is a sure sign of someone who thinks they know about floats but don't. Hmm. Personally, I've never seen an algorithm where comparing for exact equality was algorithmically necessary. Sometimes (rarely) it is acceptable. But necessary? Do you know of one? On the other hand, there are a lot of cases where comparing for exact equality is algorithmically wrong.
As an aside, you might want to try googling for Never compare floats for equality. I'm positive some of those people *do* know about floats. Creating denormals and underflow are equivalent. No they are not. Underflow in this sense occurs when the result is too small to be even a denormal. I'm fairly sure that IEEE underflow occurs when the result cannot be represented by a *normal* number but I don't have a copy of the standard. Anyway, it's not important for this discussion, I guess. Underflow is indeed a standard IEEE exception. Like other standard IEEE exceptions, it is *disabled by default*. In this case, the hardware delivered the exception *to the operating system*, but the operating system did not deliver it to the *user code*. It is the *combination* of hardware and operating system that conforms to the IEEE standard (or not). So we are talking about a situation where the only legal IEEE outcomes are return 0.0 or raise the Underflow exception and where raising an exception *in the user code* wasn't allowed and didn't happen. Now I'm curious. I would have guessed
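[Editorial aside: the float-comparison dispute above can be made concrete with a small sketch. The relative tolerance used here (1e-12) is an assumption for illustration; choosing it well is part of the numeric analysis of the algorithm at hand, which is exactly the point both sides are circling.]

```haskell
-- Tolerance-based comparison of Doubles: equal up to a relative error.
approxEq :: Double -> Double -> Bool
approxEq x y = abs (x - y) <= eps * maximum [1, abs x, abs y]
  where eps = 1e-12

-- The associativity failure from the thread, concretely: the two sums
-- differ in the last bit of the mantissa, yet are approximately equal.
assocGap :: (Double, Double)
assocGap = ((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3))
```

Here `fst assocGap == snd assocGap` is False under exact (==), while `approxEq` accepts them — the behaviour one side calls "undefined" and the other calls "very well defined".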
Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)
On Thursday, 14 February 2008 03:23, you wrote: To directly answer Wolfgang's question: parameterized is the more common. It is more correct only insofar as it is the more common. So we should use “parameterized” for the package name. Best wishes, Wolfgang
Re: [Haskell-cafe] Re: A question about monad laws
G'day all. Richard A. O'Keefe wrote: That's one of the warning signs I watch out for. Never compare floats for equality is a sure sign of someone who thinks they know about floats but don't. Quoting Roman Leshchinskiy [EMAIL PROTECTED]: Hmm. Personally, I've never seen an algorithm where comparing for exact equality was algorithmically necessary. One trick I've occasionally used is to avoid the need for a discriminated union of floating point and integer types by just using a floating point number. If you ignore bitwise operations and division/remainder, any integer operation that doesn't cause overflow on 32-bit integers will work just the same if you use IEEE-754 64-bit floating point numbers instead. That includes equality. Moreover, you even get a few more significant bits of precision. In these days of fast 64 and 128 bit words, it might not be entirely moral, but it works. Cheers, Andrew Bromage
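[Editorial aside: a sketch of the trick described above. Integers of magnitude up to 2^53 are represented exactly in an IEEE-754 Double, so arithmetic and equality on them behave like integer arithmetic; past that point the gaps between representable Doubles exceed 1 and the trick silently breaks.]

```haskell
-- Addition of modest integers survives a round-trip through Double,
-- including exact equality.
exactAdd :: Int -> Int -> Bool
exactAdd a b =
  (fromIntegral a + fromIntegral b :: Double) == fromIntegral (a + b)

-- Beyond 2^53 exactness is lost: 2^53 + 1 rounds back to 2^53.
bignumGap :: Bool
bignumGap = (2 ^ 53 + 1 :: Double) == 2 ^ 53
```

`exactAdd` holds for any operands whose true sum stays below 2^53 in magnitude — comfortably covering all 32-bit values, which is the case Andrew describes.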
Re: [Haskell-cafe] ANN: Leksah 0.1 - Haskell IDE written in Haskell
Leksah is written by me and published under a GPL-2 license. Leksah can be obtained via Hackage: http://hackage.haskell.org/ Darcs development repository: http://code.haskell.org/leksah I just downloaded this from hackage and went through the usual Cabal ritual. The build fails with:

    [16 of 41] Compiling IDE.Utils.File ( src/IDE/Utils/File.hs, dist\build\leksah\leksah-tmp/IDE/Utils/File.o )

    src/IDE/Utils/File.hs:161:33:
        Couldn't match expected type `Either String String'
               against inferred type `String'
        In the expression:
            if takeExtension fp == ".lhs" then unlit fp str else str
        In the definition of `str'':
            str' = if takeExtension fp == ".lhs" then unlit fp str else str
        In the expression:
            if exists
              then do str <- readFile fp
                      let str' = ...
                      let parseRes = ...
              else return Nothing

Would I be better off getting straight from the darcs repo? Alistair
Re: [Haskell-cafe] ANN: Leksah 0.1 - Haskell IDE written in Haskell
On 14/02/2008, Alistair Bayley [EMAIL PROTECTED] wrote: src/IDE/Utils/File.hs:161:33: Couldn't match expected type `Either String String' against inferred type `String' Never mind; I've just realised this is because I have the latest dev version of Cabal installed. Alistair
Re: [Haskell-cafe] ANN: Leksah 0.1 - Haskell IDE written in Haskell
On 14/02/2008, Alistair Bayley [EMAIL PROTECTED] wrote: I just downloaded this from hackage and went through the usual Cabal ritual. The build fails with Would I be better off getting straight from the darcs repo? I compiled from the darcs repo last night, which worked fine. -- Dougal Stanton [EMAIL PROTECTED] // http://www.dougalstanton.net
Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)
I asked Oleg regarding the use of GADTs to emulate dependent types. My conclusion is that I should forget about them. Here is the full answer: -- Forwarded message -- From: [EMAIL PROTECTED] Date: Feb 12, 2008 8:49 AM Subject: Re: GADTs to emulate dependent types? To: [EMAIL PROTECTED] Hello! The main question is whether it is feasible to use GADTs to define fixed-sized vectors or not. The motivation behind it is to avoid needing to code your own trusted core and use the compiler typechecker for that purpose instead. That is a false proposition. In Haskell, any code that uses GADT *MUST* use a run-time test (run-time pattern match). Otherwise, the code is unsound and provides no guarantees. The particular value of a GADT data type is a witness of a proposition expressed in the type of GADT (e.g., type equality). Since it is possible in Haskell to fake this witness (just write undefined), if you don't check the witness at run-time, that is, test that the witness is a real value rather than undefined, you don't get any guarantees. That is why the irrefutable pattern-match does not work with GADT. Incidentally, the first implementation of GADT in GHC did permit irrefutable pattern-match, which did let one write unsound code (that is, code that gave segmentation fault): http://okmij.org/ftp/Haskell/GADT-problem.hs The issue is not present in Coq, for example, whose core calculus is sound and you cannot fake the evidence. Thus, the unsoundness of Haskell (the presence of undefined) mandates the run-time test for any code that uses GADT. So, GADTs are fundamentally less efficient than the typeclasses; the latter can provide assurances that can be checked at compile-time only, with no run-time artifacts.

    data FSVec :: * -> * -> * where
      NullV :: FSVec D0 a
      (:>)  :: Succ s s' => a -> FSVec s a -> FSVec s' a

That is an old problem, and a difficult problem.
In Singapore I was talking to Zhaohui Luo (you can look him up on Google), who said that the problem of asserting two types being equal (that is, Succ D0 being equal to D1) is a very tough one. Conor McBride has written a paper on observational equality -- but this is not Haskell. So, I don't think this approach works. Cheers, Oleg
[Haskell-cafe] Re: A question about monad laws
Jed Brown comments on the answer of Roman Leshchinskiy, who answers my question concerning the replacement of floating-point numbers: First, when I see the advice use something else, I always ask what, and I get an answer very, very rarely... Well? What do you propose? For Haskell, Rational seems like a good choice. The fact that the standard requires defaulting to Double is quite unfortunate and inconsistent, IMO; the default should be Rational. Float and Double shouldn't even be in scope without an explicit import. There really is no good reason to use them unless you are writing a binding to existing libraries or really need the performance. Here Jed Brown says: Until you need to evaluate a transcendental function. ... It would be killing, wouldn't it?... I would say more. It is known that in randomly taken, usual formulae in physics, engineering, etc., if you start with rationals, *typically* the GCD between numerator and denominator will be small, the reductions of fractions are not very effective. Your rationals explode very fast! If after some reasonable number of operations you get rationals whose num/den have millions of digits, the program becomes *completely useless*; this is not just a question of performance. Richard O'Keefe said that people speculate about floating-point numbers without knowing them. Well, nobody is perfect... I am a bit disturbed by people who speculate without ever *needing* fl.p's, and who thus lack the sensibility. For them this is a kind of game. Well, it isn't. R.L. says: For all practical purposes, the semantics of (==) is not well defined for floating point numbers. That's one of the first things I used to teach my students about floats: *never* compare them for equality. So in my view, your example doesn't fail, it's undefined. That Haskell provides (==) for floats is unfortunate. I disagree, on a practical basis. Floating-point numbers are very well defined, we know how the mantissa is represented.
If the numbers are normalized, as they should be, plenty of low-level iterative algorithms may use the equality - after some operation - to check that the machine-precision convergence has been obtained. On the contrary, the verification that the absolute value of the difference between two terms is less than some threshold may be arbitrary or dubious. Jerzy Karczmarczuk
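[Editorial aside: a concrete sketch of the "rationals explode" point above. Iterating a quadratic map exactly in Rational roughly squares the denominator at each step, so its digit count roughly doubles; the map and starting point below are illustrative choices.]

```haskell
import Data.Ratio

-- One step of the logistic map x -> (7/2) * x * (1 - x), computed exactly.
step :: Rational -> Rational
step x = (7 % 2) * x * (1 - x)

-- Number of decimal digits in the denominator of a Rational.
denomDigits :: Rational -> Int
denomDigits = length . show . denominator

-- Denominator digit counts of the first few iterates from 1/3:
-- they grow roughly geometrically.
growth :: [Int]
growth = map denomDigits (take 8 (iterate step (1 % 3)))
```

After only eight steps the denominator already has dozens of digits; after a few dozen steps it has millions, which is the "completely useless" regime Jerzy describes.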
[Haskell-cafe] Stack overflow
Hi all, I have a very simple program which reads a Data.Map from a file using Data.Binary and Data.ByteString.Lazy, which gives a stack overflow with files over a certain size. Any ideas of what might be causing it? You can try it with the small file (11M) at: http://computing.dcu.ie/~gchrupala/map.bin

    import System
    import Data.Map
    import qualified Data.ByteString.Lazy as BS
    import Data.Binary

    main = do
      [path] <- getArgs
      m <- fmap decode (BS.readFile path) :: IO (Map (Int, Maybe String) Int)
      putStrLn . show . findMax $ m

-- Grzegorz
[Haskell-cafe] Re: A question about monad laws
[EMAIL PROTECTED] [EMAIL PROTECTED] wrote: G'day all. Richard A. O'Keefe wrote: Hmm. Personally, I've never seen an algorithm where comparing for exact equality was algorithmically necessary. One trick I've occasionally used is to avoid the need for a discriminated union of floating point and integer types by just using a floating point number. IMHO it is a perfectly good idea to use the FP processor for integer computations. You can use the Inexact trap as an Overflow exception, a service you don't get from i386 (and most other) hardware for int operations. Of course your integers are limited to 24bit+sign in single precision and 53bit+sign in double. In i387 extended precision you get 64bit+sign. I would consider it a good idea if ghc provided language support for this sort of integers. -- Dipl.-Math. Wilhelm Bernhard Kloke Institut fuer Arbeitsphysiologie an der Universitaet Dortmund Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257 PGP: http://vestein.arb-phys.uni-dortmund.de/~wb/mypublic.key
[Haskell-cafe] Re: threads + IORefs = Segmentation fault?
David Roundy wrote: Yes, that should all be fine, because the IORef is only modified from one thread, and read from the other(s). If you were modifying the IORef from more than one thread you would need to use atomicModifyIORef, or MVars. If I did modify the IORef from more than one thread (e.g. if a bug were introduced), would this cause any trouble other than occasional missed updates or reads of wrong data? It shouldn't, no. Cheers, Simon
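[Editorial aside: a small sketch of the multi-writer case discussed above. With atomicModifyIORef, concurrent increments from several threads are never lost, which a plain readIORef/writeIORef pair cannot guarantee.]

```haskell
import Control.Concurrent
import Control.Monad
import Data.IORef

main :: IO ()
main = do
  ref  <- newIORef (0 :: Int)
  done <- newEmptyMVar
  -- four threads, each performing 1000 atomic increments
  forM_ [1 .. 4 :: Int] $ \_ -> forkIO $ do
    replicateM_ 1000 $ atomicModifyIORef ref (\n -> (n + 1, ()))
    putMVar done ()
  replicateM_ 4 (takeMVar done)
  readIORef ref >>= print  -- all 4000 increments survive
```

Compile with -threaded to get real concurrency; the count is 4000 either way, whereas an unsynchronised read-then-write version could drop updates under -threaded.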
Re: [Haskell-cafe] Re: A question about monad laws
[EMAIL PROTECTED] wrote: Jed Brown comments the answer of - -- Roman Leshchinskiy who answers my question concerning the replacement of floating-point numbers: First, when I see the advice use something else, I always ask what, and I get an answer very, very rarely... Well? What do you propose? For Haskell, Rational seems like a good choice. The fact that the standard requires defaulting to Double is quite unfortunate and inconsistent, IMO; the default should be Rational. Float and Double shouldn't even be in scope without an explicit import. There really is no good reason to use them unless you are writing a binding to existing libraries or really need the performance. Here Jed Brown says: Until you need to evaluate a transcendental function. ... It would be killing, wouldn't it?... Yes, it would. I was talking about average programs, though, which (I suspect) don't do numerics and really only need fractions. If you do numerics, by all means use a data type that supports numerics well. But even here, and especially in a functional language, IEEE floating point usually isn't the best choice unless you really need the performance. You seem to be after a type that can be used to represent non-integer numbers in next to all problem domains. I don't think such a type exists. So, as usual, one has to choose a data structure suited to the problem at hand. IMO, standard floating point is not a good choice for most problem domains so Float and Double shouldn't be used by default. Whether Rational is a good default is certainly debatable. For all practical purposes, the semantics of (==) is not well defined for floating point numbers. That's one of the first things I used to teach my students about floats: *never* compare them for equality. So in my view, your example doesn't fail, it's undefined. That Haskell provides (==) for floats is unfortunate. I disagree, on practical basis. Floating-point numbers are very well defined, we know how the mantissa is represented. 
If the numbers are normalized, as they should be, plenty of low-level iterative algorithms may use the equality - after some operation - to check that the machine-precision convergence has been obtained. If you are absolutely sure that for every possible precision and for every sequence of operations that compilers will generate from your code your algorithm will actually converge to a particular binary representation and not flip-flop on the last bit of the mantissa, for instance, and if you do not care about the actual precision of your algorithm (i.e., you want as much as possible of it) then yes, you might get away with using exact equality. Of course, you'll have to protect that part of your code by a sufficient number of warnings since you are using a highly unsafe operation in a very carefully controlled context. I'm not sure the trouble is really worth it. Anyway, in my view, such an unsafe operation shouldn't be in scope by default and definitely shouldn't be called (==). It's really quite like unsafePerformIO. On the contrary, the verification that the absolute value of the difference between two terms is less than some threshold may be arbitrary or dubious. Only if you use an inappropriate threshold. Choosing thresholds and precision is an important part of numeric programming and should be done with great care. Roman
[Haskell-cafe] Re: Haskell maximum stack depth
Stefan O'Rear wrote: On Mon, Feb 04, 2008 at 10:13:12PM +, Adrian Hey wrote: Also remember that this behaviour never wastes more than 50% of the stack, which is a relatively small amount. Only if the stack is relatively small. Would you say the same about heap, or about a stack that only needed 50% of heap space but ended up using all of it? Or money? Using twice as much as you need of anything is bad IMO. Apparently you don't realize that GHC normally uses twice as much heap as is needed, due to the decision to use a two-space copying collector by default for the oldest generation. :) Three times in fact, on average. If the old generation contained L live data at the last collection, it is allowed to grow to 2L before being collected again. If it still has L live data at the next collection, L is copied, giving you a total requirement of 3L for a major collection. Compacting GC reduces this to 2L, because the live data is not copied. The factor of 2 is tunable using +RTS -F. Cheers, Simon
Re: [Haskell-cafe] Haskell wiki is searched by google.
Mads Lindstrøm wrote: Richard Kelsall wrote: Daniil Elovkov wrote: Hello I remember there was a problem with haskell wiki pages not being indexed and searched. Now it seems to work perfectly. Actually typing 'monomorphism restriction' in google I see the appropriate wiki page as the first result. I must have missed the moment when it was fixed. Just sharing my observations. Thanks. Hello Daniil, I think I may have started that discussion here : http://www.haskell.org/pipermail/haskell-cafe/2007-November/035127.html but sadly a search for mdo in the Search box at the top of this page http://haskell.org/haskellwiki/Haskell still gives no results. I suspect something needs adjusting in the configuration of MediaWiki for the haskellwiki. Not MediaWiki, but the underlying database. If HaskellWiki uses MySql, ft_min_word_len needs to be set to something smaller than four. See http://dev.mysql.com/doc/refman/5.0/en/fulltext-fine-tuning.html . After doing this the indexes need to be rebuilt. Testing using this page http://www.haskell.org/haskellwiki/Extending_Phooey by searching for words which I can see are on the page, I get:

    Not found    Found
    ---------    -----------
    mdo          Phooey
    map          wire
    way          play
    let          persistence

So three character words don't get indexed but four character words do. I mentioned in the previous thread that some longer words weren't getting indexed. It looks like this has been fixed. All the long words I just tried worked properly. And, better than Google, the words get indexed as soon as they are changed. I tried searches for words added this afternoon and they returned up-to-date results. Excellent! Whoever it was, thank you for fixing the search. Richard.
Re: [Haskell-cafe] Haskell wiki is searched by google.
Richard Kelsall wrote: I mentioned in the previous thread that some longer words weren't getting indexed. It looks like this has been fixed. All the long words I just tried worked properly. And, better than Google, the words get indexed as soon as they are changed. I tried searches for words added this afternoon and they returned up-to-date results. Excellent! Whoever it was thank you for fixing the search. Haskelves?
[Haskell-cafe] Re: Haskell maximum stack depth
Adrian Hey wrote: I have no objection to people bounding their stack if that's their choice. I can't imagine why anybody who stopped to think about this would actually want this feature, but it's a free world. What I object to is it being bounded by default to something other than the overall program memory limit. (i'm a few days behind haskell-cafe, as usual...) There *are* some good reasons to want a stack limit, but they are only practical concerns. The point is, GHC has no such thing as the overall program memory limit unless by that you mean the total amount of memory + swap in your machine. You can set a limit with +RTS -M, but there isn't one by default. So what happens when you write a program with a space leak is that it gobbles up all the memory, and turns on the hard disk light for a few minutes until the OS gives up and kills it. Or at least, you hope it kills your program and not something else important you were doing at the time. We used to have a default heap limit, for the reason above, but I was persuaded to remove it because it was too much of a pain to keep having to say +RTS -Mwhatever when your program ran out of memory. So we could do the same with the stack limit - the only concern is whether it will be highly inconvenient when we accidentally write programs with infinite loops, which in my experience seem to occur more than space-leaky programs. We could also set the limit a lot higher than it currently is. Or, we could try to figure out how much memory is in the machine and set it based on that. Basically, I don't care that much, unless it means I have to do a lot of work to implement it :) Cheers, Simon
Re: [Haskell-cafe] ANN: Leksah 0.1 - Haskell IDE written in Haskell
On Wed, Feb 13, 2008 at 2:46 AM, Jürgen Nicklisch-Franken [EMAIL PROTECTED] wrote: I'm pleased to announce the first release of Leksah, an IDE for Haskell written in Haskell. Leksah is intended as a practical tool to support the Haskell development process. Is there a development listserv for leksah? And what about a bug tracker? -Brent
Re: [Haskell-cafe] question about STM and IO
Stephan Friedrichs-2 wrote: it's unsafe to perform IO inside of a transaction as it can't be undone when rolling it back. I guess unsafeIOToSTM has been designed in order to allow us to inject debugging output into a transaction, but you really shouldn't use it to perform real IO (like writing files, etc.). Simon Peyton Jones provides a good example of this in http://research.microsoft.com/~simonpj/papers/stm/beautiful.pdf

    atomically (do { x <- readTVar xv
                   ; y <- readTVar yv
                   ; if x > y then launchMissiles
                              else return () })

where launchMissiles :: IO () causes serious international side-effects. ;-) -- Ricardo Guimarães Herrmann There are only two industries that refer to their customers as 'users' -- Edward Tufte
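[Editorial aside: a sketch of the usual safe alternative to IO-in-STM — let the transaction *decide* which IO action to run, and perform it only after the transaction has committed. The function name `decide` is illustrative; the missile example follows the paper quoted above.]

```haskell
import Control.Concurrent.STM

-- The transaction returns an IO action as a pure value; nothing is
-- executed while the transaction may still retry or roll back.
decide :: TVar Int -> TVar Int -> STM (IO ())
decide xv yv = do
  x <- readTVar xv
  y <- readTVar yv
  return $ if x > y then putStrLn "launch!" else return ()

main :: IO ()
main = do
  xv  <- newTVarIO 2
  yv  <- newTVarIO 1
  act <- atomically (decide xv yv)
  act  -- runs outside the transaction, so a retry can never repeat it
```

This keeps the irreversible side effect out of the retryable region entirely, with no need for unsafeIOToSTM.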
[Haskell-cafe] Re: A question about monad laws
On 2008-02-14, Roman Leshchinskiy [EMAIL PROTECTED] wrote: Richard A. O'Keefe wrote: Presumably the reason for having Int in the language at all is speed. As people have pointed out several times on this list to my knowledge, Integer performance is not as good as Int performance, not hardly, and it is silly to pay that price if I don't actually need it. Do I understand correctly that you advocate using overflowing ints (even if they signal overflow) even if Integers are fast enough for a particular program? I strongly disagree with this. It's premature optimisation of the worst kind - trading correctness for unneeded performance. Fast enough is not absolute. It's not trading correctness, it's trading /completion/. And if you expect everything to fit in [-2^31..2^31-1] or [0..2^32-1], finding out it doesn't might be valuable information about your problem domain. For exploratory coding, speed and knowing when something breaks can be more important than knowing that all possible corner cases are covered, even ones you don't expect to hit. SafeInt is what you should use when you *THINK* your results should all fit in a machine int but aren't perfectly sure. (And this is nearly all the time.) Again, I strongly disagree. You should use Integer unless your program is too slow and profiling shows that Integer is the culprit. If and only if that is the case should you think about alternatives. That said, I doubt that your SafeInt would be significantly faster than Integer. Why? GMP is pretty good, but it's not going to be anywhere near hardware speeds. The checking I am talking about is done by the hardware at machine speeds and provides *certainty* that overflow did not occur. So you advocate using different hardware? At a minimum, any usable hardware sets flags on overflow. Testing on those is pretty cheap. Much cheaper than calling a GMP routine to compare with 2^32, for instance.
-- Aaron Denney
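[Editorial aside: a sketch of what a SafeInt-style checked operation could look like in portable Haskell, widening to Integer only to *check* the result. The name `addChecked` is illustrative, not a real library; hardware overflow flags, as argued above, would make this far cheaper.]

```haskell
-- Checked Int addition: relies on GHC's wrap-around Int semantics to
-- detect when the machine-word result disagrees with the exact sum.
addChecked :: Int -> Int -> Either String Int
addChecked x y
  | toInteger r == toInteger x + toInteger y = Right r
  | otherwise                                = Left "Int overflow"
  where r = x + y
```

For in-range operands this is a single extra comparison's worth of work conceptually; `addChecked maxBound 1` reports the overflow instead of silently wrapping to minBound.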
[Haskell-cafe] Re: A question about monad laws
Wilhelm B. Kloke wrote: [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: G'day all. Richard A. O'Keefe wrote: Hmm. Personally, I've never seen an algorithm where comparing for exact equality was algorithmically necessary. One trick I've occasionally used is to avoid the need for a discriminated union of floating point and integer types by just using a floating point number. IMHO it is a perfectly good idea to use the FP processor for integer computations. You can use the Inexact trap as an Overflow exception, a service you don't get from i386 (and most other) hardware for int operations. Of course your integers are limited to 24bit+sign in single precision and 53bit+sign in double. In i387 extended precision you get 64bit+sign. I would consider it a good idea if ghc provided language support for this sort of integers. No need, you can do that for yourself:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    newtype DInt = DInt Double deriving (Eq, Ord, Enum, Num)

    instance Show DInt where
      show (DInt x) = show (truncate x :: Integer)

You can even make it H98 by defining the instances manually... Cheers Ben
Re: [Haskell-cafe] ANN: Leksah 0.1 - Haskell IDE written in Haskell
Currently there is no mailing list and no bug tracker. Until yesterday it was a one-person, I-do-it-in-between project. If others would like to contribute, which would be great, I'll think about writing a short one-page developers' intro. Jürgen On Thursday, 14.02.2008, 10:44 -0500, Brent Yorgey wrote: On Wed, Feb 13, 2008 at 2:46 AM, Jürgen Nicklisch-Franken [EMAIL PROTECTED] wrote: I'm pleased to announce the first release of Leksah, an IDE for Haskell written in Haskell. Leksah is intended as a practical tool to support the Haskell development process. Is there a development listserv for leksah? And what about a bug tracker? -Brent
Re: [Haskell-cafe] getSymbolicLinkStatus completely broken on some 6.8.2 systems
Filed as: http://hackage.haskell.org/trac/ghc/ticket/2096 -- Adam Langley [EMAIL PROTECTED] http://www.imperialviolet.org 650-283-9641
[Haskell-cafe] Best way to find an undefined error?
Hello all, Does anyone have a favourite technique to track down an undefined call? I'm 99% sure that my code is not the offender (I've grepped for occurrences of undefined and turned them all into error calls). Supposing that this is happening in some other package or library that I'm using, what is the best way to find out where it is? I have profiling enabled on all libraries that I use, so I'm ok there I think. Any magic combination of +RTS .. -RTS? Scott
Re: [Haskell-cafe] Best way to find an undefined error?
saynte: Hello all, Does anyone favourite technique to track down an undefined call? I'm 99% sure that my code is not the offender (I've grepped for undefined occurrences and turned them all into error calls). Supposing that this is happening in some other package or library that I'm using, what is the best way to find out where it is? I have profiling enabled on all libraries that I use, so I'm ok there I think. Any magic combination of +RTS .. -RTS? You can use the profiler to get a stack trace, or use the new GHCi debugger to step backwards from the exception to the source. I wrote a bit of a tutorial for this here: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/14#no-exceptions -- Don ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Best way to find an undefined error?
On Thu, Feb 14, 2008 at 2:55 PM, Don Stewart [EMAIL PROTECTED] wrote: You can use the profiler to get a stack trace, or use the new GHCi debugger to step backwards from the exception to the source. I wrote a bit of a tutorial for this here: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/14#no-exceptions Section 6.3 of http://haskell.org/haskellwiki/Debugging is also relevant for using ghci to step backward ... perhaps the section label is misleading, though. Feel free to modify it as needed if you find the ghci stepper lets you find the problem. -- Denis
Re: [Haskell-cafe] ANN: two new packages: carray, fft
jed: Hopefully these are mature enough to be generally useful. I would appreciate any comments regarding the interface. The FFTW API is not particularly nice to build a pure interface on. Some of the transforms could be made slightly faster at the expense of a much nastier interface, but unless someone needs the extra few percent, or is pushing the memory limits on their machine, I'm not inclined to expose the underlying nastiness. * carray: A C-compatible array library. Provides both an immutable and a mutable (in the IO monad) interface. Includes utilities for multi-dimensional arrays, slicing and norms. Memory is 16-byte aligned by default to enable use of SIMD instructions. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/carray * fft: Bindings to the FFTW library. Provides high-performance discrete Fourier transforms in arbitrary dimensions. Includes transforms of complex data, real data, and real-to-real transforms. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fft Jed Excellent work! These fill a key niche. Thanks for contributing! -- Don
[Haskell-cafe] ANN: two new packages: carray, fft
Hopefully these are mature enough to be generally useful. I would appreciate any comments regarding the interface. The FFTW API is not particularly nice to build a pure interface on. Some of the transforms could be made slightly faster at the expense of a much nastier interface, but unless someone needs the extra few percent, or is pushing the memory limits on their machine, I'm not inclined to expose the underlying nastiness. * carray: A C-compatible array library. Provides both an immutable and a mutable (in the IO monad) interface. Includes utilities for multi-dimensional arrays, slicing and norms. Memory is 16-byte aligned by default to enable use of SIMD instructions. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/carray * fft: Bindings to the FFTW library. Provides high-performance discrete Fourier transforms in arbitrary dimensions. Includes transforms of complex data, real data, and real-to-real transforms. http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fft Jed
Re: [Haskell-cafe] Best way to find an undefined error?
On Thu, Feb 14, 2008 at 3:02 PM, Denis Bueno [EMAIL PROTECTED] wrote: On Thu, Feb 14, 2008 at 2:55 PM, Don Stewart [EMAIL PROTECTED] wrote: You can use the profiler to get a stack trace, or use the new GHCi debugger to step backwards from the exception to the source. I wrote a bit of a tutorial for this here: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/14#no-exceptions Section 6.3 of http://haskell.org/haskellwiki/Debugging is also relevant for using ghci to step backward ... perhaps the section label is misleading, though. Feel free to modify as needed if you find the ghci stepper lets you find the problem Well, when using +RTS -xc, I get: GHC.Err.CAF GHC.Err.CAF Prelude.undefined I'm not really sure what to do with this; it's not really the stack trace I was hoping for. The ghci debugger I found was really quite nice, up until it hits some portion of code that it isn't interpreting. By not interpreting I mean things that have already been compiled, which it just calls into (even if they have been compiled with profiling). I have a feeling that my problem is somewhere in something that has already been compiled. Knowing that, should +RTS -xc be giving me more information? Is there a way for it to do so? Regards, Scott
Re: [Haskell-cafe] Re: A question about monad laws
Hello, On a tangent, probably: On Thursday 14 February 2008 10:24, Roman Leshchinskiy wrote: ... Hmm. Personally, I've never seen an algorithm where comparing for exact equality was algorithmically necessary. Sometimes (rarely) it is acceptable, but necessary? Do you know of one? Finding the machine epsilon, perhaps? That is, the smallest (floating-point, surely) number machine_eps for which 1.0 + machine_eps /= 1.0 would be a candidate. ... Best regards Thorkil
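Thorkil's candidate is easy to make concrete. A minimal sketch for Double (assuming IEEE-754 arithmetic; not from the original message) finds the machine epsilon using nothing but (+) and exact (==), which is the point: the algorithm genuinely needs exact floating-point equality.

```haskell
-- Machine epsilon for Double: walk down the halving sequence 1, 1/2, 1/4, ...
-- and take the first e such that 1 + e/2 already rounds back to 1
-- (so 1 + e still differs from 1).  Exact (==) on Double is essential here.
epsilon :: Double
epsilon = head [e | e <- iterate (/ 2) 1, 1 + e / 2 == 1]

main :: IO ()
main = print epsilon  -- 2^-52 on IEEE-754 doubles
```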
Re: [Haskell-cafe] Best way to find an undefined error?
saynte: On Thu, Feb 14, 2008 at 3:02 PM, Denis Bueno [EMAIL PROTECTED] wrote: On Thu, Feb 14, 2008 at 2:55 PM, Don Stewart [EMAIL PROTECTED] wrote: You can use the profiler to get a stack trace, or use the new GHCi debugger to step backwards from the exception to the source. I wrote a bit of a tutorial for this here: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/14#no-exceptions Section 6.3 of http://haskell.org/haskellwiki/Debugging also is relevant for using ghcu to step backward ... perhaps the section label is misleading, though. Feel free to modify as needed if you find the ghci stepper lets you find the problem Well, when using +RTS -xc, I get: GHC.Err.CAFGHC.Err.CAFPrelude.undefined I'm not really sure what to do with this, not really the stacktrace I was hoping for. The ghci debugger I found was really quite nice, up until it his some portion of code that it isn't interpreting. By not interpreting i mean things that have been already been compiled and it's just calling (even if it has been compiled with profiling). I have a feeling that my problem is somewhere in something that has already been compiled. Is it possible to just load all the code interpreted? Or is the problem in a dependent library? If you profile and let the program terminate, there should be a stack trace in the .prof as well. -- Don ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Best way to find an undefined error?
On Thu, Feb 14, 2008 at 4:43 PM, Don Stewart [EMAIL PROTECTED] wrote: saynte: On Thu, Feb 14, 2008 at 3:02 PM, Denis Bueno [EMAIL PROTECTED] wrote: On Thu, Feb 14, 2008 at 2:55 PM, Don Stewart [EMAIL PROTECTED] wrote: You can use the profiler to get a stack trace, or use the new GHCi debugger to step backwards from the exception to the source. I wrote a bit of a tutorial for this here: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/14#no-exceptions Section 6.3 of http://haskell.org/haskellwiki/Debugging also is relevant for using ghcu to step backward ... perhaps the section label is misleading, though. Feel free to modify as needed if you find the ghci stepper lets you find the problem Well, when using +RTS -xc, I get: GHC.Err.CAFGHC.Err.CAFPrelude.undefined I'm not really sure what to do with this, not really the stacktrace I was hoping for. The ghci debugger I found was really quite nice, up until it his some portion of code that it isn't interpreting. By not interpreting i mean things that have been already been compiled and it's just calling (even if it has been compiled with profiling). I have a feeling that my problem is somewhere in something that has already been compiled. Is it possible to just load all the code interpreted? Or is the problem in a dependent library? If you profile and let the program terminate, there should be a stack trace in the .prof as well. -- Don Well, I can load a bunch of it interpreted. I've already done this as far as I think I can. The only pieces laying outside the interpreter are the GHC libraries and Gtk2Hs. The really funny thing, is that (essentially) I believe the error results from a call to fromDynamic out of Data.Dynamic. This sort of leads me to believe that someone's Typeable instance is being funny. Again though, all the Typeable's in the immediate area of my code seem to be fine. (the .prof didn't seem to reveal anything yet... 
I'll look it over a little more closely though) Regards, Scott ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: A question about monad laws
Aaron Denney wrote: On 2008-02-14, Roman Leshchinskiy [EMAIL PROTECTED] wrote: Richard A. O'Keefe wrote: SafeInt is what you should use when you *THINK* your results should all fit in a machine int but aren't perfectly sure. (And this is nearly all the time.) Again, I strongly disagree. You should use Integer unless your program is too slow and profiling shows that Integer is the culprit. If and only if that is the case should you think about alternatives. That said, I doubt that your SafeInt would be significantly faster than Integer. Why? GMP is pretty good, but it's not going to be anywhere near hardware speeds. This is how Integer is defined in the libraries:

data Integer = S# Int#              -- small integers
             | J# Int# ByteArray#   -- large integers

As long as the Int# doesn't overflow, you don't call any GMP routines. Roman
Re: [Haskell-cafe] getSymbolicLinkStatus completely broken on some 6.8.2 systems
Created a ticket with a patch: http://hackage.haskell.org/trac/ghc/ticket/2093 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] getSymbolicLinkStatus completely broken on some 6.8.2 systems
Hello, You rule! Here is where I am at now. It seems that when I build ghc from source, the top-level configure script detects that my system has large file support, and enables it. As you discovered, this gets dumped into: /usr/lib/ghc-6.8.2/include/ghcautoconf.h which gets included when hsc2hs calculates the offsets. However, it seems that that header file does not get included when the unix library builds -- so it builds the library without large file support. If I run ltrace on this program:

import System.Posix.Files

main = do
  getSymbolicLinkStatus "/etc/passwd" >>= print . fileID
  getFileStatus "/etc/passwd" >>= print . fileID

I see that __xstat64 is called, but plain old __lxstat is called for the link status. As a fix (??) I copied the following from ghc's configure.ac into unix's configure.ac and ran autoreconf:

dnl ** Enable large file support. NB. do this before testing the type of
dnl    off_t, because it will affect the result of that test.
AC_SYS_LARGEFILE

This seemed to fix my problem -- ltrace shows that __lxstat64 is called and the results are normal. Thanks a ton! j. At Tue, 12 Feb 2008 23:45:41 -0800, Adam Langley wrote: On Feb 12, 2008 11:04 PM, Adam Langley [EMAIL PROTECTED] wrote: structure filled in. HsUnix.h has a wrapper around lstat for exactly this reason; however, ltrace shows it calling the wrong one. So (finally!) the real issue: hsc2hs has a C preprocessor prelude (utils/hsc2hs/template-hsc.h) which includes some GHC header files. As a result, hsc2hs processes files in 64-bit mode:

% cpp -I../../includes -dM template-hsc.h | grep FILE_OFFSET_BITS
#define _FILE_OFFSET_BITS 64

However, when building HsUnix.c (or any C file with cabal), the environment is different and it's built in 32-bit mode. Thus the lstat wrapper called the 32-bit glibc function, but the structure sizes and offsets in Files.hs (from Files.hsc) are all for 64-bit mode. The fix is not quite obvious. Probably hsc2hs should try harder not to change the environment so much.
I'm not quite sure why the hsc2hs template pulls in the HsFFI header - it doesn't seem to need it. Although you can remove the #include and rebuild the unix package, I can safely say that you'll be reinstalling ghc if you do. What might work is changing hsc2hs and then rebuilding all of GHC. However, that's an overnight build so I don't know yet. AGL -- Adam Langley [EMAIL PROTECTED] http://www.imperialviolet.org 650-283-9641 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: A question about monad laws
G'day all. Quoting Thorkil Naur [EMAIL PROTECTED]: Finding the machine epsilon, perhaps, that is, the smallest (floating point, surely) number for which 1.0+machine_eps==1.0 would be a candidate? The machine epsilon is the smallest relative separation between two adjacent normalised floating point numbers. (The largest is the machine epsilon multiplied by the radix, more or less.) So, as I understand it, if you're thinking relative error, this test: (fabs(x1-x2) < machine_eps * fabs(x1)) should be true if and only if x1 == x2, assuming that x1 and x2 are nonzero and normalised. I've always had the impression that using the machine epsilon for pseudo-equality testing is fairly useless, especially if you can work out a meaningful problem-specific tolerance. What seems to be more useful is using the machine epsilon to compute an estimate of how much relative error your algorithm accumulates. I've seen this in a lot of Serious Numeric Code(tm), and I've done it myself (probably inexpertly) a few times. I haven't tried this, but I imagine that a computed relative error estimate could be useful for assisting your approximate-equality tests under some circumstances. Richard might know of some circumstances where this sort of thing would be useful. Cheers, Andrew Bromage
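Andrew's C-style test carries over to Haskell directly. The sketch below (hypothetical; the tolerance is hard-coded to the IEEE-754 double epsilon) illustrates the claim that, for nonzero normalised operands, the test succeeds exactly when x1 == x2, since adjacent doubles are separated by at least eps relatively.

```haskell
-- Relative-error equality test with tolerance = machine epsilon.
approxEq :: Double -> Double -> Bool
approxEq x1 x2 = abs (x1 - x2) < eps * abs x1
  where eps = 2 ^^ (-52)   -- machine epsilon for Double

main :: IO ()
main = do
  print (approxEq 1.0 1.0)               -- True
  print (approxEq 1.0 (1 + 2 ^^ (-52)))  -- False: adjacent, but distinct
```

In other words, with eps as the tolerance this is just an expensive exact-equality test, which is why a problem-specific tolerance is what you actually want.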
[Haskell-cafe] ANN: ctemplate, SSL server support in HsOpenSSL, network-minihttp, binary parsing, network-dns
Binary parsing: binary-strict now has support for combinator parsing[1] with <|> and friends. This works both for the strict Get and for IncrementalGet. (The latter was more complex than I expected.) binary-strict also gained a very fast byte set module[2] (used for scanning ranges of valid bytes) and a surprisingly useful hexDump[3] function (because I kept having to write it in lots of places). [1] http://hackage.haskell.org/packages/archive/binary-strict/0.3.0/doc/html/Data-Binary-Strict-Class.html [2] http://hackage.haskell.org/packages/archive/binary-strict/0.3.0/doc/html/Data-Binary-Strict-ByteSet.html [3] http://hackage.haskell.org/packages/archive/binary-strict/0.3.0/doc/html/Data-Binary-Strict-Util.html DNS: network-dns contains a DNS client library[4] in pure Haskell. The ADNS library (for which a wrapper already exists) is GPL, not BSD. This code is also halfway to being a DNS server, should anyone so wish. This is very similar to the DNS client library in libevent (because I wrote them both). HTTP: ctemplate is a wrapping of the template library which Google uses for most of their sites. There are several template systems already in Hackage, but I like ctemplate because escaping is so easy and well supported. Generally the power of a template system doesn't interest me (in fact, it can be a bad thing), but stopping XSS attacks is a big deal. HsOpenSSL now has rudimentary support for writing SSL servers[5] (clients coming soon). In terms of blocking IO, this /should/ work the way you would hope (i.e. like a Haskell function w.r.t. forkIO). network-minihttp is a small HTTP server. Currently it serves files from the disk (with caching and range support), but it's most of the way to being an HTTP client also. Don't do anything too serious with it yet, however.
It needs a few limits to stop DoS attackers from, for example, sending an infinite header and using up all the memory :) [4] http://hackage.haskell.org/packages/archive/network-dns/0.1.1/doc/html/Network-DNS-Client.html -- Adam Langley [EMAIL PROTECTED] http://www.imperialviolet.org 650-283-9641 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: Leksah 0.1 - Haskell IDE written in Haskell
byorgey: On Wed, Feb 13, 2008 at 2:46 AM, Juergen Nicklisch-Franken [EMAIL PROTECTED] wrote: I'm pleased to announce the first release of Leksah, an IDE for Haskell written in Haskell. Leksah is intended as a practical tool to support the Haskell development process. Is there a development listserv for Leksah? And what about a bug tracker? Should be pretty cheap to create a bug tracker on Google's bug tracker site, as xmonad does. Then link to it from the wiki page.
Re: [Haskell-cafe] Best way to find an undefined error?
On Thu, Feb 14, 2008 at 5:12 PM, Scott West [EMAIL PROTECTED] wrote: On Thu, Feb 14, 2008 at 4:43 PM, Don Stewart [EMAIL PROTECTED] wrote: saynte: On Thu, Feb 14, 2008 at 3:02 PM, Denis Bueno [EMAIL PROTECTED] wrote: On Thu, Feb 14, 2008 at 2:55 PM, Don Stewart [EMAIL PROTECTED] wrote: You can use the profiler to get a stack trace, or use the new GHCi debugger to step backwards from the exception to the source. I wrote a bit of a tutorial for this here: http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/14#no-exceptions Section 6.3 of http://haskell.org/haskellwiki/Debugging also is relevant for using ghcu to step backward ... perhaps the section label is misleading, though. Feel free to modify as needed if you find the ghci stepper lets you find the problem Well, when using +RTS -xc, I get: GHC.Err.CAFGHC.Err.CAFPrelude.undefined I'm not really sure what to do with this, not really the stacktrace I was hoping for. The ghci debugger I found was really quite nice, up until it his some portion of code that it isn't interpreting. By not interpreting i mean things that have been already been compiled and it's just calling (even if it has been compiled with profiling). I have a feeling that my problem is somewhere in something that has already been compiled. Is it possible to just load all the code interpreted? Or is the problem in a dependent library? If you profile and let the program terminate, there should be a stack trace in the .prof as well. -- Don Well, I can load a bunch of it interpreted. I've already done this as far as I think I can. The only pieces laying outside the interpreter are the GHC libraries and Gtk2Hs. The really funny thing, is that (essentially) I believe the error results from a call to fromDynamic out of Data.Dynamic. This sort of leads me to believe that someone's Typeable instance is being funny. Again though, all the Typeable's in the immediate area of my code seem to be fine. (the .prof didn't seem to reveal anything yet... 
I'll look it over a little more closely though) Regards, Scott I did finally find the error (in one of the pieces of code not written by myself). A few instances of Typeable were trying to pattern match their arguments. *bomb* In either case, it was an interesting experience! Thanks for the help all! Regards, Scott ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] a general question on Arrows
1. Stream This is actually a comonad. Something more to learn every day. Here's another fun arrow: http://luqui.org/blog/archives/2007/09/06/quantum-entanglement-in-haskell/ Luke Luke, I managed to get your quantum entanglement examples working. But honestly, I can't quite figure out your Quantum module yet. My head is exploding after reading the code :-) It is amazing to know it takes several layers of arrows to simulate quantum mechanics. I have a small question on the simulation technique. In both John Hughes's code and yours, you wrap the print inside the runXYZ (...) to print out the state of the simulation. It is like: runArrow ( ... simulation ... then print ... ) -< input Why don't you structure it like: y <- runArrow ( ... simulation ... then return observation ... ) -< input and then reuse y or print y? In the former, the result is printed on the screen. I cannot collect the result and do more analysis. For example, for a quantum state |0> + i |1>, the probability is half-half. If I can repeat the simulation 10000 times and collect the observations (y), then I can prove the correctness of the experiment by observing ~5000 of |0> and ~5000 of |1>. Or even plot the probability distribution of the experiment. Steve
Re: [Haskell-cafe] Re: A question about monad laws
On 14 Feb 2008, at 10:24 pm, Roman Leshchinskiy wrote: Do I understand correctly that you advocate using overflowing ints (even if they signal overflow) even if Integers are fast enough for a particular program? No, you most certainly do NOT. There is no way to soundly, and I would have thought no way to plausibly, infer that from anything I wrote. Again, I strongly disagree. You should use Integer unless your program is too slow and profiling shows that Integer is the culprit. If and only if that is the case should you think about alternatives. That said, I doubt that your SafeInt would be significantly faster than Integer. SafeInt should be as fast as Int, or very nearly. The representation of SafeInt is identical to the representation of Int, so the space overheads are definitely lower. The checking I am talking about is done by the hardware at machine speeds and provides *certainty* that overflow did not occur. So you advocate using different hardware? Again, this is the opposite of what I wrote. On my desk there are a Pentium machine and an UltraSPARC and a G4. They *all* support cheap integer overflow checks. I am saying that we should use the hardware we have already paid for! It's not a particularly useful operation (and it is not equality), but it does have a defined semantics. However, the semantics of (==) on floats isn't really defined in Haskell or C, for that matter, even if you know that the hardware is strictly IEEE conformant. The semantics of == on floats in C99 is, under certain circumstances, and on the machines I use with the compilers I use, defined to be those of the IEEE (or, as the C99 standard puts it, IEC) operation. In general, floating point numbers do not really have a useful notion of equality. They are approximations. The *numbers* are not approximations, the *operations* are approximate. In particular, floating point +, -, *, <, ==, abs, sign, and other related operations (but not /) are *exact* on integral values, if the results fit.
AWK programs would be in terrible trouble if this were not so ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
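Richard's point that the *operations* are exact on integral values, so long as the results fit, is easy to check; a small sketch (not from the message) for Double, whose significand holds 53 bits:

```haskell
main :: IO ()
main = do
  -- +, -, and comparisons are exact on integral Doubles below 2^53 ...
  print ((2^52 + 1 :: Double) - 2^52 == 1)  -- True: exact
  -- ... but past 53 bits the result no longer fits and is rounded:
  print ((2^53 + 1 :: Double) == 2^53)      -- True: the +1 is lost
```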
Re: [Haskell-cafe] Re: A question about monad laws
On 15 Feb 2008, at 2:03 am, Wilhelm B. Kloke wrote: IMHO it is a perfectly good idea to use the FP processor for integer computations. You can use the Inexact Trap as Overflow Exception, a service you don't get from i386 (and most other) hardware for int operations. A neat idea. However, the i386 has the INTO instruction, the SPARC family has the TRAPV instruction, and other processors have analogues. Some machines have two sets of add/subtract instructions, one trapping, one not. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Designing DSL with explicit sharing [was: I love purity, but it's killing me]
Matthew Naylor wrote: it's not immediately clear (to me at least) how efficient your method will be in practice. Any method based on common sub-expression elimination surely must inspect every node in the flattened graph. In the worst case, an acyclic graph containing n nodes could have 2^n nodes when flattened to a tree:

tricky 0 = constant 0
tricky d = add g g where g = tricky (d-1)

It should work quite well in practice, as demonstrated below. Once our DSL is extended with a let form (so the user can declare the intention to share results of computations rather than computations themselves), tricky code becomes trivial. The code remains declarative, safe and pure, and, save for the (cosmetic) lack of the monomorphism restriction, Haskell98. The well-commented code is available at http://okmij.org/ftp/Haskell/DSLSharing.hs First, a few comments on the tricky code. It is true that (absent programmer's proclamations and code structuring forms) a complete method of common sub-expression elimination must inspect every sub-expression in the program. That inspection must include at least one comparison of a node with some other nodes. As I understand it, the original problem had less to do with the number of comparisons and more to do with the cost of a single comparison. In an impure language, we can use constant-time physical equality. It is usually provided natively as pointer comparison, and can be trivially emulated via mutation. The original poster used pure Haskell and the following representation for the program graph:

data Expression = Add Expression Expression
                | Sub Expression Expression
                | Variable String
                | Constant Int
                deriving Eq

Here, the derived equality operation, albeit pure, no longer takes constant time. Comparing nodes near the root of a deep tree really takes a while.
As the previous message on DSL with sharing showed, one can do common sub-expression elimination and the conversion of the program tree to a DAG using a constant-time node equality operation, without sacrificing purity. The tricky function is a metaprogram that builds a large DSL program: (tricky 12) yields a single DSL expression with 8191 nodes. Such humongous expressions with thousands of components are unlikely to be written by human programmers (although can be easily generated). Many compilers will choke if we submit a program with one large expression. (Incidentally, the previously posted code converts the tree (tricky 12) to a 12-node DAG in 0.25 secs on 1GHz Pentium). The reason we can compile much larger projects is because we structure them, into compilation units, functions, let-blocks. The structure gives the common sub-expression eliminator and other optimizer phases much needed hints and so significantly limits the search space. For example, hardly any compiler searches for common sub-expressions across compilation units. More importantly for us here, forms like let-expressions let the programmer declare that the results of certain computations be shared. Although the tricky code also contains a let-expression (masqueraded as 'where'), the sharing there occurs at the meta-level and benefits the generator rather than the generated program. We share code generators rather than the generated code. The importance of placing 'let' in the right phase has been extensively discussed in http://www.cs.rice.edu/~taha/publications/conference/pepm06.pdf also in the context of a DSL (for dynamic programming). In our case, we need to give our DSL programmer a way to state their intention on sharing results of DSL computations. The programmer declares certain expressions common, and thus greatly helps the compiler as well as human readers of the code. As Chung-chieh Shan has pointed out, we need to introduce a let construct in the _embedded_ language. 
Since our DSL already has (concrete) variables with string names, we can extend our language thusly:

let_ "v1" (add (constant 1) (variable "x"))
     (add (variable "v1") (variable "v1"))

We chose a different way: higher-order abstract syntax. We use variables of the metalanguage (that is, Haskell) as let-bound variables. Our language is now defined as:

class Exp repr where
  constant :: Int -> repr Int
  variable :: String -> repr Int
  add      :: repr Int -> repr Int -> repr Int
  sub      :: repr Int -> repr Int -> repr Int
  let_     :: repr a -> (repr a -> repr b) -> repr b  -- like flip ($)

Here are simple programs in our language:

a  = add (constant 10) (variable "i1")
b  = sub (variable "i2") (constant 2)
c  = add a b
d  = add c c
e  = add d d                  -- e now has 16 leaf nodes
e' = let_ d (\x -> add x x)

The programs (e) and (e') evaluate to the same integer given the same environment for i1 and i2. The two programs differ in how sharing is declared. The program (e) uses the identifier (d) twice; even if GHC shares the corresponding expressions rather than copies them (a Haskell system is not obliged to share anything:
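The fragment above can be completed into a small runnable program. The Eval interpreter below is a hypothetical instance (the message's actual code, with its DAG-building sharing detection, lives at the okmij.org link); it merely demonstrates that (e) and (e') denote the same integer, with let_ interpreted as reverse application.

```haskell
-- The DSL from the message, in final tagless style.
class Exp repr where
  constant :: Int -> repr Int
  variable :: String -> repr Int
  add      :: repr Int -> repr Int -> repr Int
  sub      :: repr Int -> repr Int -> repr Int
  let_     :: repr a -> (repr a -> repr b) -> repr b

-- A hypothetical evaluator instance: a function from an environment of
-- named Ints.  let_ is just reverse application here; sharing pays off
-- in interpreters that build DAGs rather than evaluate.
newtype Eval a = Eval { runEval :: [(String, Int)] -> a }

instance Exp Eval where
  constant n = Eval (const n)
  variable v = Eval $ \env ->
    maybe (error ("unbound variable: " ++ v)) id (lookup v env)
  add x y    = Eval $ \env -> runEval x env + runEval y env
  sub x y    = Eval $ \env -> runEval x env - runEval y env
  let_ x f   = f x

-- The example programs from the message (signatures avoid the
-- monomorphism restriction).
a, b, c, d, e, e' :: Exp repr => repr Int
a  = add (constant 10) (variable "i1")
b  = sub (variable "i2") (constant 2)
c  = add a b
d  = add c c
e  = add d d                  -- 16 leaf nodes when flattened
e' = let_ d (\x -> add x x)   -- sharing declared explicitly

main :: IO ()
main = print (runEval e env, runEval e' env)  -- both 56
  where env = [("i1", 1), ("i2", 5)]
```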
Re: [Haskell-cafe] fast integer base-2 log function?
Hi, all, a few days ago I had asked about fast integer log-2 routines, and got a couple of answers. I've now had a chance to play with the routines, and here's what I found. Initially, Thorkil's routine taken from the Haskell report was about 30% or so faster than mine. When I replaced the calls to my routine powi with native calls to ^, my routine became about 10% faster than Thorkil's routine. (I undoubtedly had some reason for using my own version of powi, but I have no idea anymore what that reason was... :-/ ) I initially thought that that speed difference might be due to the fact that my routine had base 2 hard-wired, whereas his routine is for general bases, but that seems not to be the case: when I modified my version to also do general bases, it stayed pretty much the same. I didn't do enough statistics-gathering to really be absolutely positively certain that my routine is indeed 10.000% faster, but there did seem to be a slight edge in speed there. Here's the latest version, in case anyone's interested. I had previously had effectively a bit-length version; since I made it general base-b I changed it to a log function.

ilogb :: Integer -> Integer -> Integer
ilogb b n | n < 0     = ilogb b (- n)
          | n < b     = 0
          | otherwise = (up b n 1) - 1
  where up b n a = if n < (b ^ a) then bin b (quot a 2) a else up b n (2*a)
        bin b lo hi = if (hi - lo) <= 1 then hi
                      else let av = quot (lo + hi) 2
                           in if n < (b ^ av) then bin b lo av else bin b av hi

Stefan's routine is, as expected, much much faster still: I tested the first two routines on numbers with 5 million or so bits and they took ~20 seconds of CPU time, whereas I tested Stefan's routine with numbers with 50 million bits, and it took ~11 seconds of CPU time. The limitation of Stefan's routine is of course that it's limited to base 2 -- it is truly a bit-length routine -- and I guess another potential limitation is that it uses GHC extensions, not pure Haskell (at least I had to compile it with -fglasgow-exts).
But it's the speed king if those limitations aren't a problem! Uwe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
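Uwe's routine can be sanity-checked against a few familiar identities; it is reproduced here so the snippet is self-contained and runnable.

```haskell
-- Binary search for the integer logarithm: up doubles the candidate
-- exponent until it overshoots, then bin bisects down to the answer.
ilogb :: Integer -> Integer -> Integer
ilogb b n | n < 0     = ilogb b (- n)
          | n < b     = 0
          | otherwise = up b n 1 - 1
  where up b n a = if n < b ^ a then bin b (quot a 2) a else up b n (2 * a)
        bin b lo hi = if hi - lo <= 1 then hi
                      else let av = quot (lo + hi) 2
                           in if n < b ^ av then bin b lo av else bin b av hi

main :: IO ()
main = print (ilogb 2 1024, ilogb 10 999, ilogb 10 1000)  -- (10,2,3)
```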
Re: [Haskell-cafe] fast integer base-2 log function?
On Thu, Feb 14, 2008 at 08:23:29PM -0800, Uwe Hollerbach wrote: Stefan's routine is, as expected, much much faster still: I tested the first two routines on numbers with 5 million or so bits and they took ~20 seconds of CPU time, whereas I tested Stefan's routine with numbers with 50 million bits, and it took ~11 seconds of CPU time. The limitation of Stefan's routine is of course that it's limited to base 2 -- it is truly a bit-length routine -- and I guess another potential limitation is that it uses GHC extensions, not pure Haskell (at least I had to compile it with -fglasgow-exts). But it's the speed king if those limitations aren't a problem! At the risk of stating the obvious, it also hard-codes 32 bit words and certain configurable (but nobody bothers) details of the GMP data representation (big endian, 0 nails). Stefan signature.asc Description: Digital signature ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] a general question on Arrows
On Thu, Feb 14, 2008 at 7:34 PM, Steve Lihn [EMAIL PROTECTED] wrote: I have a small question on the simulation technique. In both John Hughes's and your code, you wrap the print inside the runXYZ (...) to print out the state of the simulation. It is like: runArrow ( ... simulation ... then print ... ) -< input. Why don't you structure it like: y <- runArrow ( ... simulation ... then return observation ... ) -< input, and then reuse y or print y? After attempting to reply to this several times, I think I finally know what you're asking. Well, observe and observeWith are exported, so you could do that if you wanted. As far as why I didn't do that in my example code, uh, I dunno. Really the purpose of this was to port the Quantum::Entanglement Perl module, for no other reason than that module made me go "Woah! That's awesome!". So I wanted to transliterate the Perl examples, which collapsed and printed in one go, because those were what made me feel like I was inside some freaky quantum computer, not just running a simulation module. :-) Luke
Re: [Haskell-cafe] ANN: ctemplate, SSL server support in HsOpenSSL, network-minihttp, binary parsing, network-dns
On Feb 14, 2008, at 4:44 PM, Adam Langley wrote: HsOpenSSL now has rudimentary support for writing SSL servers[5] (clients coming soon). In terms of blocking IO, this /should/ work the way you would hope (i.e. like a Haskell function w.r.t. forkIO). Coincidentally, night before last I downloaded the openssl source and started trying to think about an SSL library. The thing is, I have always wished for an SSL library that doesn't do I/O, but just transforms data and leaves the I/O to you. I guess that's going to be a good deal more complicated, and in many cases not worth it at all - so there would need to be an SSLIO layer above it - but clinging to some vague reactive-object ideals I imagine that this `pure' SSL core could fit into some kinds of dispatching frameworks where the usual thing would be awkward. Anyway, I'll write again in a few years if I get around to doing anything about it. Donn Cave, [EMAIL PROTECTED]
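The "SSL core that only transforms data and leaves the I/O to you" design Donn describes has since become known as the sans-I/O style; Python's ssl module, for instance, exposes it via MemoryBIO, where the TLS engine reads and writes byte buffers and the caller does all the actual socket work. A minimal client-side sketch, with no network involved at all:

```python
import ssl

# Two in-memory buffers stand in for the socket: the TLS engine reads
# peer data from `incoming` and writes its own output to `outgoing`.
incoming = ssl.MemoryBIO()
outgoing = ssl.MemoryBIO()

ctx = ssl.create_default_context()
ctx.check_hostname = False          # no real peer in this sketch
ctx.verify_mode = ssl.CERT_NONE

tls = ctx.wrap_bio(incoming, outgoing)

try:
    tls.do_handshake()              # no peer data yet, so this can't finish
except ssl.SSLWantReadError:
    pass                            # the engine asks *us* to perform the I/O

# The engine has produced a ClientHello; shipping these bytes (and feeding
# the reply into `incoming`) is entirely the caller's business.
client_hello = outgoing.read()
```

This is exactly the inversion Donn is after: the TLS state machine never touches a file descriptor, so it slots into any dispatching framework.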
[Haskell-cafe] Re: [Haskell] Mailing List Archive: Search Broken?
Don Stewart wrote: voigt: Hi, it seems that something is wrong with the search on http://www.mail-archive.com/[EMAIL PROTECTED]/ Only messages from Nov and Dec 2007 are found, no matter what search phrases and date ranges are given. Worked yesterday and delivered search results over the whole history of the archive. Same for haskell-cafe. Anyone knows what is going on? I always use gmane, via: http://news.gmane.org/search.php?match=haskell The problem with this is that when searching through gmane, and selecting a single message with a hit, one gets only to see that message without its context thread. At least I could not find out a way to switch from the found message to the thread in which it occurred. The search facility on www.mail-archive.com is much nicer in that respect, but as mentioned, as of yesterday seems to have forgotten about all messages prior to November 2007, and from 2008. Too bad... Ciao, Janis. -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:[EMAIL PROTECTED]
[Haskell-cafe] Re: [Haskell] Mailing List Archive: Search Broken?
On 02/14/2008 11:13 PM, Janis Voigtlaender wrote: Don Stewart wrote: voigt: I always use gmane, via: http://news.gmane.org/search.php?match=haskell The problem with this is that when searching through gmane, and selecting a single message with a hit, one gets only to see that message without its context thread. At least I could not find out a way to switch from the found message to the thread in which it occurred. The search facility on www.mail-archive.com is much nicer in that respect, but as mentioned, as of yesterday seems to have forgotten about all messages prior to November 2007, and from 2008. Too bad... Ciao, Janis. I removed [EMAIL PROTECTED] from the recipients. It is possible to get to the thread-view of a message from just the bare message page. For example, if I turn up http://article.gmane.org/gmane.comp.lang.haskell.cafe/8046 in a google search, which takes me to the article without context, I then click on the linked subject of the message, which redirects me to http://thread.gmane.org/gmane.comp.lang.haskell.cafe/8044/focus=8046, which is the threaded view. It still doesn't seem to allow you to get to other threads before and after the given thread, but at least you can view the entire thread of the message you found. It's definitely not very intuitive. Cheers, Calvin
[Haskell-cafe] Re: A question about monad laws
Ben Franksen [EMAIL PROTECTED] schrieb: Wilhelm B. Kloke wrote: [EMAIL PROTECTED] [EMAIL PROTECTED] schrieb: I would consider it a good idea if ghc would provide language support for this sort of integers. No need, you can do that for yourself:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

newtype DInt = DInt Double deriving (Eq, Ord, Enum, Num)

instance Show DInt where
  show (DInt x) = show (truncate x :: Integer)

Probably you missed the point I wanted to make. This works properly only if you can catch the Inexact trap which indicates the overflow. The problem with normal Ints is that the hardware does not support overflow detection. You get silently wrong results if you use an Int type which maps to a C int, but not if it maps to a C float or double with the Inexact trap enabled. Therefore you need to add runtime checks to be sure that you notice a possible overflow condition. -- Dipl.-Math. Wilhelm Bernhard Kloke Institut fuer Arbeitsphysiologie an der Universitaet Dortmund Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257 PGP: http://vestein.arb-phys.uni-dortmund.de/~wb/mypublic.key
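Wilhelm's point, made concrete: a 64-bit double represents every integer with magnitude up to 2^53 exactly, so an int-carried-in-a-Double "overflows" by producing an inexact result, which IEEE hardware can trap, whereas machine ints wrap silently. A hedged sketch of the same check in Python (which has no inexact trap, so exactness is tested by hand, using Python's exact integers as the oracle):

```python
DINT_MAX = 2 ** 53   # largest magnitude a 64-bit double represents exactly

def dint_add(x, y):
    """Add two integers carried in doubles, failing loudly when the
    double result is rounded -- the condition an IEEE Inexact trap
    would signal in hardware, and what silent Int overflow hides."""
    r = float(x) + float(y)
    if int(r) != x + y:   # exact-integer oracle: was the result rounded?
        raise OverflowError("DInt result is inexact: %r" % r)
    return int(r)
```

For instance, adding 1 to 2**53 in doubles rounds back down to 2**53, so dint_add(2**53, 1) raises instead of quietly returning a wrong answer.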
Re: [Haskell-cafe] Re: [Haskell] Mailing List Archive: Search Broken?
Calvin Smith wrote: On 02/14/2008 11:13 PM, Janis Voigtlaender wrote: Don Stewart wrote: voigt: I always use gmane, via: http://news.gmane.org/search.php?match=haskell The problem with this is that when searching through gmane, and selecting a single message with a hit, one gets only to see that message without its context thread. At least I could not find out a way to switch from the found message to the thread in which it occurred. The search facility on www.mail-archive.com is much nicer in that respect, but as mentioned, as of yesterday seems to have forgotten about all messages prior to November 2007, and from 2008. Too bad... Ciao, Janis. I removed [EMAIL PROTECTED] from the recipients. It is possible to get to the thread-view of a message from just the bare message page. For example, if I turn up http://article.gmane.org/gmane.comp.lang.haskell.cafe/8046 in a google search, which takes me to the article without context, I then click on the linked subject of the message, which redirects me to http://thread.gmane.org/gmane.comp.lang.haskell.cafe/8044/focus=8046, which is the threaded view. It still doesn't seem to allow you to get to other threads before and after the given thread, but at least you can view the entire thread of the message you found. It's definitely not very intuitive. And the funny thing is that I tried just that yesterday, to no avail. Now I found that it does work for haskell.cafe, but not for haskell.general, which I tried yesterday. Strange. (The problem on mail-archive affects both lists haskell-cafe and haskell proper.) Thanks for the solution at least for haskell-cafe! Ciao, Janis. -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:[EMAIL PROTECTED]