Re: [Haskell-cafe] Generating random graph
Hi!

On Mon, Apr 11, 2011 at 7:36 AM, Steffen Schuldenzucker sschuldenzuc...@uni-bonn.de wrote:
> So when using randomRs, the state of the global random number generator is not updated, but it is used again in the next iteration of the toplevel forM [1..graphSize] loop.

I thought it would be interleaved. Thanks.

Mitar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Generating random graph
Hi!

I have made this function to generate a random graph for the Data.Graph.Inductive library:

generateGraph :: Int -> IO (Gr String Double)
generateGraph graphSize = do
  when (graphSize < 1) $ throwIO $ AssertionFailed $ "Graph size out of bounds: " ++ show graphSize
  let ns = map (\n -> (n, show n)) [1..graphSize]
  es <- fmap concat $ forM [1..graphSize] $ \node -> do
    nedges <- randomRIO (0, graphSize)
    others <- fmap (filter (node /=) . nub) $ forM [1..nedges] $ \_ -> randomRIO (1, graphSize)
    gen <- getStdGen
    let weights = randomRs (1, 10) gen
    return $ zip3 (repeat node) others weights
  return $ mkGraph ns es

But I noticed that the graph sometimes has the same weights on different edges. This is very unlikely to happen, so probably I have some error in my use of the random generators. Could somebody tell me where?

Mitar
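Following the reply about the global generator never being updated, a likely fix is to replace getStdGen with newStdGen, which splits and updates the global generator so that each loop iteration gets an independent weight stream. A minimal sketch of just the weight-drawing part (my own illustration; `weightsFor` is a made-up name, not from the original code):

```haskell
import Control.Monad (forM)
import System.Random (newStdGen, randomRs)

-- newStdGen splits the global StdGen and updates it, so every call
-- returns an independent generator; getStdGen would return the same
-- (unsplit) generator on each iteration, giving repeated weights.
weightsFor :: Int -> IO [[Double]]
weightsFor nNodes = forM [1 .. nNodes] $ \_ -> do
  gen <- newStdGen                      -- fresh generator per iteration
  return (take 5 (randomRs (1, 10) gen))
```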
[Haskell-cafe] fmap and LaTeX
Hi!

Is there some nice-looking symbol in mathematics/LaTeX for the <$> (fmap) operator? I am looking here:

http://www.soi.city.ac.uk/~ross/papers/Applicative.html
http://www.haskell.org/ghc/docs/6.12.1/html/libraries/base/Control-Applicative.html

but only the other operators have nice symbols there, and <$> is left as it is. I have also tried \mathbin{\$}, but it adds space around the $ (at least in lhs2tex), so it looks ugly.

Mitar
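One possibility (my own sketch, not from the thread; the macro name is made up): define a command that typesets the operator compactly, avoiding the \mathbin spacing complained about above by grouping the pieces as ordinary symbols:

```latex
% Hypothetical macro: braces demote < and > from relations to ordinary
% symbols, and \! pulls the glyphs together; no \mathbin, so no extra
% binary-operator spacing.
\newcommand{\fmapop}{{<}\!{\$}\!{>}}
% usage in math mode: $f \fmapop x$
```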
Re: [Haskell-cafe] fmap and LaTeX
Hi!

Uuu, nice. Thanks Daniel and Henning.

Mitar
[Haskell-cafe] lhs2tex and comment
Hi!

I have problems with lhs2tex and nested (long) comments. It formats them all on one long line and does not wrap it. Is there a way to define those comments so that they would automatically wrap at the end of the page? (And still take into account the possible indentation of the comments.) Or at least a way to not ignore newlines in comments? I am using poly mode.

Mitar
[Haskell-cafe] [ANNOUNCE] Data-flow based graph algorithms
Hi!

Based on my Etage data-flow framework:

http://hackage.haskell.org/package/Etage

I have made a package to show how to implement graph algorithms on top of it:

http://hackage.haskell.org/package/Etage-Graph

I invite everybody to take a look and see how it is possible to implement known control-flow algorithms in a data-flow manner. For now, only shortest-path search is implemented (from all nodes to all nodes). The nice feature is that such an approach is easily parallelized. It uses Haskell threads quite heavily, so it is also useful for benchmarking and testing them, especially for graphs with many nodes (> 300). There is documentation, but because of this bug it is not visible:

http://hackage.haskell.org/trac/hackage/ticket/656

Mitar
[Haskell-cafe] BlockedIndefinitelyOnMVar exception
Hi!

Is there a way to disable throwing BlockedIndefinitelyOnMVar exceptions? I am writing a small program where I do not care if some threads block, as at the end the user will have to interrupt the program anyway.

Mitar
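One workaround (my own sketch, not from the thread): rather than disabling the exception globally, catch it at the blocking call and ignore it. The RTS throws BlockedIndefinitelyOnMVar when it can prove a thread will never be woken up, and it can be caught like any other exception:

```haskell
import Control.Concurrent (MVar, newEmptyMVar, takeMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), try)

-- Blocking forever on an empty MVar: the RTS detects the deadlock and
-- throws BlockedIndefinitelyOnMVar, which we catch and ignore.
main :: IO ()
main = do
  mv <- newEmptyMVar :: IO (MVar ())
  r <- try (takeMVar mv)
  case r of
    Left BlockedIndefinitelyOnMVar -> putStrLn "blocked forever; ignoring"
    Right ()                       -> putStrLn "got a value"
```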
Re: [Haskell-cafe] BlockedIndefinitelyOnMVar exception
Hi!

On Thu, Mar 31, 2011 at 4:37 PM, Brandon Moore brandon_m_mo...@yahoo.com wrote:
> If you plan to send an exception, you must have the ThreadId saved elsewhere, which should prevent the BlockedIndefinitelyOnMVar exception.

But is this behavior something they wish to remove in future versions as unwanted?

Mitar
Re: [Haskell-cafe] lhs2tex and line numbers
Hi!

On Mon, Feb 14, 2011 at 12:23 PM, Andres Loeh andres.l...@googlemail.com wrote:
> A long time ago, I've written some experimental code that achieves line numbering in lhs2tex. I've committed the files to the github repository, so you can have a look at the .fmt file and the demo document in

Great! Super! This is exactly what I needed. Thanks really a lot! I especially like having the numbers on the right.

Mitar
[Haskell-cafe] lhs2tex and line numbers
Hi!

How could I convince lhs2tex, in poly mode, to add line numbers before each code line in a code block?

Mitar
[Haskell-cafe] [ANNOUNCE] Etage 0.1.7, a general data-flow framework
Hi!

I am glad to announce a new version of Etage, a general data-flow framework. It now also supports GHC 7.0 (not just GHC HEAD), and because of this Hackage successfully generates the documentation, so it is easier to understand the framework.

http://hackage.haskell.org/package/Etage

Feel free to test it and provide any feedback.

Mitar
Re: [Haskell-cafe] Misleading MVar documentation
Hi!

On Sat, Dec 25, 2010 at 11:58 AM, Edward Z. Yang ezy...@mit.edu wrote:
> I think you're right. A further comment is that you don't really need stringent timing conditions (which is the only thing I think of when I hear race) to see another thread grab the mvar underneath you

Yes, MVars are (bounded, one-slot) queues with predictable behavior. Maybe we should change the documentation for swapMVar (and the others) and replace the notion of a race condition with the fact that it can block. This would also be in accordance with the functions I proposed. So the current functions are blocking (and we document that), and we introduce non-blocking versions.

Mitar
Re: [Haskell-cafe] Misleading MVar documentation
Hi!

On Sat, Dec 25, 2010 at 2:32 AM, Edward Z. Yang ezy...@mit.edu wrote:
> In particular, we should explicitly note the race conditions for not just swapMVar but also readMVar, withMVar, modifyMVar_ and modifyMVar,

I am not sure these are really race conditions. The point is that readMVar, withMVar and the others do not return until they can put the value back, and after that the value is the same as it was at the beginning of the call. So if somebody manages to put a value in, those functions wait until that value is removed. A race condition would mean for me that some other thread could corrupt the data. This is not the case here. So I would argue that it would be better to write that those functions can block, not that there are race conditions. Race conditions (for me) imply that invariants can be broken by some other thread; here they hold when/if the function returns. This is why I proposed tryReadMVar and tryModifyMVar here:

http://hackage.haskell.org/trac/ghc/ticket/4535

Mitar
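A sketch of what the proposed non-blocking tryReadMVar could look like (my own illustration, hypothetical; built from tryTakeMVar, and therefore not atomic with respect to other writers):

```haskell
import Control.Concurrent.MVar (MVar, newMVar, putMVar, tryTakeMVar)

-- Hypothetical non-blocking read: return Nothing instead of blocking
-- when the MVar is empty. Note the take/put pair is not atomic: another
-- thread could put a value in between the two calls.
tryReadMVar' :: MVar a -> IO (Maybe a)
tryReadMVar' mv = do
  r <- tryTakeMVar mv
  case r of
    Nothing -> return Nothing
    Just x  -> do putMVar mv x        -- put the value straight back
                  return (Just x)
```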
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Thu, Dec 16, 2010 at 10:30 AM, Simon Marlow marlo...@gmail.com wrote:
> I've thought about whether we could support resumption in the past. It's extremely difficult to implement - imagine if the thread was in the middle of an I/O operation - should the entire I/O operation be restarted from the beginning?

Yes, this is the same problem as when designing an operating system: what should happen to a system call if it is interrupted? And I agree with the elegance of simply returning the EINTR errno. It makes system calls much easier to implement, and for the user it is really simple to wrap them in a retry loop. So it is quite common to see a function like this:

static int xioctl(int fd, int request, void *arg) {
    int r;

    do {
        r = ioctl(fd, request, arg);
    } while (-1 == r && EINTR == errno);

    return r;
}

And this is why, once I recognized the similarities, I decided to use my uninterruptible function. So, as resumption is hard to do, such an approach is a valid alternative.

I agree with you that the best would be to have no need for such functions at all, because you would make sure that exceptions are raised only in the threads where you want them. But this is hard to really guarantee in complex programs with many threads and much exception throwing between them, especially if you are using third-party libraries you do not know much about, which could throw an exception to some thread you haven't expected. This is why it is good to make sure that things will work as expected even if some exception leaks in.

It would be great if the type system could help here: maybe a separate (type) system just for exception-coverage checking, which would build a graph from the code of how exceptions could be thrown (this would require following how ThreadId values are passed around, as capability tokens for throwing an exception) and inform/warn you about which exceptions can be thrown in which code segments. And you could specify which exceptions you predict/allow, and if there were a mismatch you would get a compile-time error.

Mitar
Re: [Haskell-cafe] threadWaitRead and threadWaitWrite on multiple fds
Hi!

On Mon, Dec 13, 2010 at 2:14 AM, Antoine Latter aslat...@gmail.com wrote:
> Can you do it with forkIO? That is, have two light-weight threads, each waiting on a different fd, which perform the same action when one of them wakes up.

Or you could wait for each fd in its own thread (those are really light-weight threads), and once one is triggered you spawn another thread which deals with the event, while the original thread goes back to waiting. Or you can also send data over a Chan to another thread which then processes the event (if you need to serialize processing).

Mitar
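The two-thread idea can be sketched like this (my own illustration; `waitEitherFd` is a made-up name): fork one waiter per fd and report whichever becomes readable first through an MVar.

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar, threadWaitRead)
import System.Posix.Types (Fd)

-- Wait on two file descriptors at once: whichever becomes readable
-- first is reported through the MVar; the other waiter thread is
-- simply abandoned (it dies when its fd becomes readable or at exit).
waitEitherFd :: Fd -> Fd -> IO Fd
waitEitherFd fd1 fd2 = do
  done <- newEmptyMVar
  _ <- forkIO (threadWaitRead fd1 >> putMVar done fd1)
  _ <- forkIO (threadWaitRead fd2 >> putMVar done fd2)
  takeMVar done                     -- blocks until one fd is ready
```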
Re: [Haskell-cafe] [Haskell] ANNOUNCE: genprog-0.1
Hi!

On Thu, Dec 9, 2010 at 1:48 PM, Alberto G. Corona agocor...@gmail.com wrote:
> assign rates of mutation for each statement,

This could be assigned by evolution itself. If a statement has a high probability of mutation, then the resulting programs will not survive. So those probabilities can be assigned by evolution itself and also be something which is passed on through generations (again with the possibility of mutation of those probabilities themselves). What would be interesting is an evolution algorithm which would allow such protections to evolve during its run: some kind of protection which would lower the rate of mutation for some code part.

> Species of programs means that the seed of the genetic algorithm must not be Turing complete I guess. It must be specific for each problem.

I do not see this as a necessity. Just that those specimens which used this power too much would probably not survive. (If they removed protection for some sentences they had built before.)

Mitar
Re: [Haskell-cafe] [Haskell] ANNOUNCE: genprog-0.1
Hi!

On Wed, Dec 8, 2010 at 12:33 PM, Alberto G. Corona agocor...@gmail.com wrote:
> But programs are non lineal.

And DNA is? I doubt it. ;-) I think the approach is valid; it simulates what happens in nature (random insertions, deletions, changes, translocations, copies, etc., without any higher meaning or guidance). The only problem is that people often do not want to wait millions of years for the evolution of programs to achieve their goal. Of course, mutations are only part of the story. How to combine different parents' genes so as to get functional offspring is also an open question for programs.

Mitar
Re: [Haskell-cafe] [Haskell] ANNOUNCE: genprog-0.1
Hi!

On Wed, Dec 8, 2010 at 4:51 PM, Alberto G. Corona agocor...@gmail.com wrote:
> Of course there are smooth zones in the fitness landscape of any code. what is necessary is to direct the process by avoiding absurd replacements (mutations that goes straight to dead zones) and rules for changing from a smooth to another smooth area once the local maximum is not satisfactory. Or to detect them as early as possible (that is, rules again).

That is what I mentioned before. But this is not blind evolution as we know it. Of course, if you want to speed things up you can use the techniques you are mentioning. But those techniques use prior knowledge about this particular set of problems. Evolution in general does not care about this. It has all the time it needs.

> Moreover, the genetic code has evolved to evolve.

I agree with that.

> Neither Haskell nor any conventional language has.

True. We should evolve the language itself as well, not just the programs.

Mitar
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Wed, Dec 1, 2010 at 10:50 AM, Simon Marlow marlo...@gmail.com wrote:
> > Yes, but semantics are different. I want to tolerate some exception because they are saying I should do this and this (for example user interrupt, or timeout) but I do not want others, which somebody else maybe created and I do not want to care about them.
>
> Surely if you don't care about these other exceptions, then the right thing to do is just to propagate them? Why can't they be dealt with in the same way as user interrupt?

The problem is that Haskell does not support getting back to (continuing) the normal execution flow as if there had been no exception. For a user interrupt I want to interrupt execution, clean up, print something to the user, etc. But for other exceptions I want to behave as if nothing happened. This is why those cannot be dealt with in the same way as a user interrupt. One way to behave as if nothing happened is to mask them (and only them); the other is to eat (and ignore) them. Probably the first would be better, if it were possible, so the second is the one I have chosen. The third one, to restore the execution flow, is sadly not possible in Haskell. Would it be possible to add this feature? To have some function which operates on an Exception value and can get you back to where the exception was thrown?

> > uninterruptible :: IO a -> IO a
> > uninterruptible a = *CENSORED*
>
> That is a very scary function indeed. It just discards all exceptions and re-starts the operation. I definitely do not condone its use.

Yes, this one is extreme. For example, this would be more in line with the above arguments:

uninterruptible :: IO a -> IO a
uninterruptible a = mask_ $ a `catches` [
    Handler (\(e :: AsyncException) -> case e of
               UserInterrupt -> throw e
               _             -> uninterruptible a),
    Handler (\(_ :: SomeException) -> uninterruptible a)
  ]

Mitar
[Haskell-cafe] [ANNOUNCE] A general data-flow framework
Hi!

I have just published my library which provides a general data-flow framework:

http://hackage.haskell.org/package/Etage

Sadly, it requires the current GHC HEAD branch because of this:

http://hackage.haskell.org/trac/ghc/ticket/1769

Short description: a general data-flow framework featuring nondeterminism, laziness and neurological pseudo-terminology. It can be used, for example, for data-flow computations or event-propagation networks. It tries hard to aid type checking and to allow proper initialization and cleanup, so that interfaces to input and output devices (data or event producers or consumers) can be made (so that created models/systems/networks can be used directly in real-world applications, for example robots).

I am more than open to any suggestions and comments.

Mitar
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Thu, Nov 18, 2010 at 2:19 PM, Simon Marlow marlo...@gmail.com wrote:
> then it isn't uninterruptible, because the timeout can interrupt it. If you can tolerate a timeout exception, then you can tolerate other kinds of async exception too.

Yes, but the semantics are different. I want to tolerate some exceptions because they are telling me I should do this and this (for example user interrupt, or timeout), but I do not want others, which somebody else maybe created and which I do not want to care about.

> My main question is then, why do you want to use maskUninterruptible rather than just mask? Can you give a concrete example?

I have given a few of them. As I said: timeouts and user exceptions. Timeouts are for interrupts with which I, as a programmer, want to limit otherwise uninterruptible code, and user exceptions are so that users can limit otherwise uninterruptible code.

But in the meantime I have found an elegant way to solve my problems (a very old one, in fact). ;-) I have defined this function:

uninterruptible :: IO a -> IO a
uninterruptible a = mask_ $ a `catch` (\(_ :: SomeException) -> uninterruptible a)

with which I wrap takeMVar and other interruptible operations. As they can only be interrupted or succeed, they can simply be retried until they succeed. In this way my code is not susceptible to the possible deadlocks which could happen if I used maskUninterruptible and, for example, threw two exceptions between two threads while both were in a maskUninterruptible section. In this way I remain only in the masked state, exceptions can still be delivered, but my program flow is not interrupted. Of course, the function above can also easily be modified to allow user interrupts and timeouts, but not other exceptions. And as I want to use this code only in my cleanup code, I do not really care about new exceptions being ignored.

Mitar
Re: [Haskell-cafe] Plot Haskell functions in TikZ
Hi!

On Thu, Nov 25, 2010 at 9:41 PM, Henning Thielemann lemm...@henning-thielemann.de wrote:
> Has anyone tried to write such a '--plot' command for TikZ?

Great idea. ;-) I haven't done this yet, but I have played with TikZ a lot.

Mitar
[Haskell-cafe] Re: Eq instance for Chan
Hi!

On Thu, Nov 25, 2010 at 1:05 PM, Simon Marlow marlo...@gmail.com wrote:
> It's just an oversight. Send us a patch, or make a ticket for it?

Done: http://hackage.haskell.org/trac/ghc/ticket/4526

Mitar
Re: [Haskell-cafe] Re: Eq instance for Chan
Hi!

On Thu, Nov 25, 2010 at 5:37 PM, Bas van Dijk v.dijk@gmail.com wrote:
> Officially your patch has to go through the Library submission process:

A bit of overkill for one line, but OK. ;-)

> Since you already have a ticket the only thing left to do is convert your patch to a darcs patch and send a proposal to the libraries list. I just have. I attached your patch to the ticket.

Thanks.

Mitar
[Haskell-cafe] Eq instance for Chan
Hi!

Why is there no Eq instance for Chan? There is Eq for MVar, so it should be quite possible to define Eq for Chan too. What I would like to do is keep track of how many consumers I have for each Chan, duplicating them with dupChan as necessary. So I was thinking of storing a list of Chans which already have a consumer, and then duplicating a Chan when another (and every additional) consumer is added. But without Eq it is not possible to check whether a Chan is already in the list (and I hope Eq would compare just the Chan and not its content). Is there some other way to achieve this?

Mitar
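A workaround that does not need the missing Eq instance (my own sketch; `KeyedChan` and `newKeyedChan` are made-up names): tag each Chan with a Unique key and compare the keys, so channels can be stored in and looked up from a list without ever comparing contents.

```haskell
import Control.Concurrent.Chan (Chan, newChan)
import Data.Unique (Unique, newUnique)

-- A Chan tagged with a unique identity, so channels can be compared
-- (by key only, never by content).
data KeyedChan a = KeyedChan Unique (Chan a)

instance Eq (KeyedChan a) where
  KeyedChan k1 _ == KeyedChan k2 _ = k1 == k2

newKeyedChan :: IO (KeyedChan a)
newKeyedChan = do
  k <- newUnique
  c <- newChan
  return (KeyedChan k c)
```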
[Haskell-cafe] [ANNOUNCE] A Haskell interface to Lego Mindstorms NXT
Hi!

I have just published my library which provides a Haskell interface to Lego Mindstorms NXT over Bluetooth:

http://hackage.haskell.org/package/NXT

So take your NXTs out of your closets, or out of your children's hands, and try it out! ;-) Feel free to contribute additional features and interfaces for more sensors, and to propose or write other (example) programs. I also more than welcome any comments, as this is my first library in Haskell. So if you have any suggestions about code improvements or other best practices, I would really like to hear them.

Currently the haddock documentation does not compile because of this bug:

http://hackage.haskell.org/trac/hackage/ticket/656

Can anybody help with fixing this bug? It would be much nicer to see all the documentation I made on HackageDB.

Mitar
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Wed, Nov 17, 2010 at 12:00 PM, Simon Marlow marlo...@gmail.com wrote:
> That's hard to do, because the runtime system has no knowledge of exception types, and I'm not sure I like the idea of baking that knowledge into the RTS.

But currently it does have knowledge of interruptible and uninterruptible exceptions? So why not add another kind, which ctrl-c could interrupt?

> The point of maskUninterruptible is for those hopefully rare cases where (a) it's really inconvenient to deal with async exceptions and (b) you have some external guarantee that the critical section won't block.

But is it possible to make an uninterruptible code section which can time out? Because once you enter maskUninterruptible, System.Timeout probably does not work anymore. I see uses for maskUninterruptible (or, preferably, some derivation of it) if:

- the user would still be able to interrupt it
- you could specify some timeout or other condition after which the code section would be interrupted, so you could, for example, deal with some cleanup code (where you do not want interrupts) but still have a way to kill that cleanup code if it hangs

Currently, with this absolute, all-or-nothing approach, maskUninterruptible is really not very useful, because it masks too much. Something which would still allow me (and only me, as a programmer) to interrupt it, or the user (because the user should know what he/she is doing), would be a lot more useful. And this is why I argue for some way of specifying which interrupts you still allow: like "mask everything except ..." (user interrupts, my special exception meant for cleanup-code hanging conditions, ...).

Mitar
Re: [Haskell-cafe] Bracket around every IO computation monad
Hi!

On Tue, Nov 16, 2010 at 1:40 AM, Alexander Solla a...@2piix.com wrote:
> Check out: http://apfelmus.nfshost.com/articles/operational-monad.html It's the paper that inspired the operational module.

I read that yesterday. A nice read. Now I have to think about how to make my own monad based on all this knowledge. (As an exercise.)

Mitar
Re: [Haskell-cafe] Bracket around every IO computation monad
Hi!

On Tue, Nov 16, 2010 at 2:05 AM, Bas van Dijk v.dijk@gmail.com wrote:
> The assumption being that Mitar's Nerves are scarce resources (like files for example). Meaning:

Yes. My nerves are really a scarce resource. ;-) And I haven't yet heard anybody compare them to files. I have heard that they are short and that sometimes people step on some of them. But not that they are like files. ;-) I have to tell this to my fellow neuroscientists. This is a whole new paradigm. ;-)

But yes, Nerves were modeled by looking at file handles. And they were also made so that they fit nicely into the bracket function. They are mostly a wrapper around a scarce resource (like a display, a complex computation (CPU), sensors and similar). This is why there has to be some preparation and cleanup.

In fact, your approach opens up a whole new idea for me. Because currently my whole main program was:

- attach everything together; if there is an error while attaching, clean up and exit
- wait until everything lives/works, or until an error
- clean up everything and wait until everything is really cleaned up
- exit

The whole main program just prepares the generic data-flow computation framework I am developing. Attaching is how all the flows (called Nerves) are interconnected; then you let it live and process. So your approach is interesting because I could do cleanup in one place, and it wouldn't matter whether I am doing it in the attach phase (which this thread is about) or any other phase.

> someOperation :: LiveNeuron n -> IO () (I'm not sure Mitar actually has this...)

No. In fact, Neurons are defined with an operation they do. You just feed them with data and they output data. In the main program you do not perform operations on them. You just grow/prepare them and attach/interconnect them.

> 4) It's important not to leave handles open (or in this case leave nerves attached) when they don't need to be. In the case of files when you leave a file open when you're not using it anymore, other processes may not be able to use the file. (I'm not sure this is a requirement for Nerves...)

It is. Because they also encapsulate complex resources like sensors, cameras, displays, keyboards and similar things. (But they can also be quite low-level.)

> (Again, I'm not sure a similar requirement exists for LiveNeurons)

It does. Once things are cleaned up there should be no further use of them.

Mitar
Re: [Haskell-cafe] Bracket around every IO computation monad
Hi!

On Mon, Nov 8, 2010 at 1:50 AM, Felipe Almeida Lessa felipe.le...@gmail.com wrote:
> (I won't answer your main question, I'll just write some notes on your current code.)

Thanks. This also helped. But is it possible to define a Monad like I described? So that all actions would be wrapped in a bracket, and the brackets would be stacked?

Mitar
Re: [Haskell-cafe] Bracket around every IO computation monad
Hi!

On Mon, Nov 15, 2010 at 9:07 PM, C. McCann c...@uptoisomorphism.net wrote:
> Isn't this mostly a reimplementation of mapM? Given a list of [IO Growable], you map over it to put a bracket around each one, then sequence the result

No. There is a trick: it stacks up attach and deattach. So if the third attach fails, then deattach is called for both the second and the first nerve. In your example this would not happen. The idea is that in case of an error, all computations (which belong to some type class defining the necessary prepare and cleanup functions) are unwound. In case of success, the resulting list is returned.

Mitar
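The stacking described above can be sketched like this (my own illustration, not the actual Etage code; `attachAll` and the pair representation are made up): each resource's bracket encloses the attachment of all later resources, so a failure anywhere detaches everything already attached, in reverse order.

```haskell
import Control.Exception (bracketOnError)

-- Attach resources one by one; if a later attach throws, detach every
-- already-attached resource (innermost first) and re-throw.
attachAll :: [(IO r, r -> IO ())] -> IO [r]
attachAll []                      = return []
attachAll ((attach, detach):rest) =
  bracketOnError attach detach $ \r -> do
    rs <- attachAll rest       -- a failure here triggers detach r, and so on up
    return (r : rs)
```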
Re: [Haskell-cafe] Bracket around every IO computation monad
Hi!

Felipe and Bas, thank you for your suggestions. They really opened two new worlds to me. ;-) I didn't know about those libraries. I was hoping to succeed in making a bracketing monad by hand, to learn something and to make my first monad, but I was not successful. I hope those library approaches will be a good guide for me.

Mitar
Re: [Haskell-cafe] Bracket around every IO computation monad
Hi!

> My approach using the 'operational' package is equivalent to creating your own monad. The beauty of 'operational' is that you don't need to worry about the plumbing of the monad, you just need to specify what to do with your operations.

True. The approach with operational is really beautiful, and it is really great when you want things done. But for me, a Haskell novice who wants to learn more, it hides too much. So it is probably something I would use in my code, but on the other hand I would like the exercise of doing things by hand. So first 100 monads by hand, and then such libraries are useful; you also understand exactly what they are doing, what they are automating, which process you were doing by hand before. So thank you for your approach. I didn't know that this could be automated in such an elegant way.

Mitar
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Wed, Nov 10, 2010 at 4:48 PM, Simon Marlow marlo...@gmail.com wrote:
> You can use maskUninterruptible in GHC 7, but that is not generally recommended,

Maybe there should be some function like maskUninterruptibleExceptUser, which would mask everything except the UserInterrupt exception. Or maybe UserInterrupt plus some additional exception meant for use by programs, like InterruptMaskException. Or we could make two new type classes:

- HiddenException -- for those exceptions which should not print anything if not caught
- UninterruptibleExceptException -- for those exceptions which should not be masked by a maskUninterruptibleExcept function

Mitar
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Wed, Nov 10, 2010 at 8:54 AM, Bas van Dijk v.dijk@gmail.com wrote:
> A ThreadKilled exception is not printed to stderr because it's not really an error and should not be reported as such.

So, how does one make custom exceptions which are not really an error?

> What do you mean by hanging there. Are they blocked on an MVar?

I have something like this:

terminated <- newEmptyMVar
forkIO $ doSomething `catches` [
    Handler (\(_ :: MyTerminateException) -> return ()), -- we just terminate, that is the idea at least
    Handler (...) -- handle other exceptions
  ] `finally` (putMVar terminated ())
takeMVar terminated

The problem is that if I send multiple MyTerminateException exceptions to the thread, one of them is printed (or maybe even more; I do not know, because I have many threads). My explanation is that after the first one is handled and the MVar is written, the thread stays active, just with no computation being evaluated. Because of that, another exception can be thrown at the thread. And then it is not handled anymore, so the library prints the exception and exits the thread. If I change things to:

`finally` (putMVar dissolved () >> throwIO ThreadKilled)

it works as expected.

> That shouldn't be necessary. If the cleanup action is the last action of the thread then the thread will terminate when it has performed the cleanup.

It does not seem so.

> It would help if you could show us your code (or parts of it).

I hope the code above suffices. If not, I will make a real example.

Mitar
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Wed, Nov 10, 2010 at 1:39 PM, Bas van Dijk v.dijk@gmail.com wrote:
> First of all, are you sure you're using a unique MVar for each thread? If not, there could be the danger of deadlock.

I am.

> Finally the 'putMVar terminated ()' computation is performed.

Yes. But this should be fast if the MVar is empty, so putMVar does not block. And the MVar does get filled, because takeMVar does not block at the end. So it is not that putMVar is interrupted (and thus no handler), but something after that.

> Finally, your code has dangerous deadlock potential: Because you don't mask asynchronous exceptions before you fork it could be the case that an asynchronous exception is thrown to the thread before your exception handler is registered. This will cause the thread to abort without running your finalizer: 'putMVar terminated ()'. Your 'takeMVar terminated' will then deadlock because the terminated MVar will stay empty.

I know that (I read one post from you some time ago). It is in a TODO comment before this code. I am waiting for GHC 7.0 for this because I do not like the current block/unblock approach, since blocked parts can still be interrupted; a good example of that is takeMVar. The other is (almost?) any IO. So if the code were:

  block $ forkIO $ do
    putStrLn "Forked"
    (unblock doSomething) `finally` (putMVar terminated ())

there is a chance that putStrLn gets interrupted even with block. So for me it is better to have a big TODO warning before the code than to use block and believe that this is now fixed; when I add some code a year later I will have problems. Of course I could add a comment warning not to add any IO before the finally. But I want to have a reason to switch to GHC 7.0. ;-)

I am using such a big hack for this:

  -- Big hack to prevent interruption: it simply retries the interrupted computation
  uninterruptible :: IO a -> IO a
  uninterruptible a = block $ a `catch` (\(_ :: SomeException) -> uninterruptible a)

So I can call

  uninterruptible $ takeMVar terminated

and be really sure that I have waited for the thread to terminate, and not that the thread in its last breaths sent me some exception which interrupted my waiting for it and thus did not allow it to terminate properly. Of course this is only if you know that you do not want your waiting interrupted in any way. Maybe there could be an exception for UserInterrupt in this uninterruptible way of handling exceptions. ;-)

> I've made this same error and have seen others make it too. For this reason I created the threads[1] package to deal with it once and for all.

I know. I checked it, but was put off by the mention of unsafe. Strange.

> It would help if you could show more of your code.

I am attaching a sample program which shows this. I am using 6.12.3 on both Linux and Mac OS X, and I run this program with runhaskell Test.hs. Without throwIO ThreadKilled it outputs:

  Test.hs: MyTerminateException
  MVar was successfully taken

With throwIO ThreadKilled it is, as expected, just:

  MVar was successfully taken

So the MVar is filled, which means the thread gets the exception after that. But there is nothing after that. ;-) (At least nothing visible.)

Mitar

Test.hs Description: Binary data
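For readers arriving later: the retry hack above is exactly what the mask API in GHC 7.0 made unnecessary. A minimal sketch of the same really-wait operation using uninterruptibleMask_ from Control.Exception (base 4.3 and later); the function name is made up:

```haskell
import Control.Concurrent.MVar (MVar, takeMVar)
import Control.Exception (uninterruptibleMask_)

-- Wait on a thread's "terminated" MVar without the wait itself being
-- interruptible by asynchronous exceptions. Use sparingly: this also
-- blocks Ctrl-C for the duration of the wait.
waitUninterruptible :: MVar () -> IO ()
waitUninterruptible = uninterruptibleMask_ . takeMVar
```

Unlike the catch-and-retry loop, this does not silently swallow exceptions; it merely defers their delivery until the takeMVar returns.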
Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

On Wed, Nov 10, 2010 at 4:16 PM, Simon Marlow marlo...@gmail.com wrote:
> The right way to fix it is like this:

Optimist. ;-)

>  let run = unblock doSomething
>              `catches` [ Handler (\(_ :: MyTerminateException) -> return ()),
>                          Handler (\(e :: SomeException) -> putStrLn $ "Exception: " ++ show e) ]
>              `finally` (putMVar terminated ())
>  nid <- block $ forkIO run

In 6.12.3 this does not work (it does not change anything; I hope I tested it correctly) because finally is defined as:

  a `finally` sequel =
    block (do
      r <- unblock a `onException` sequel
      _ <- sequel
      return r)

You see that unblock there? So it still unblocks, so that a second exception is delivered immediately after catches handles the first exception. But I agree that your explanation of what is happening is the correct one, better than my hanging threads at the end. And with my throwIO approach I just override MyTerminateException with ThreadKilled.

Mitar
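For comparison, the mask-based shape that later replaced this closes exactly the hole described: only the protected computation runs with exceptions restored, and there is no blanket unblock. This is a sketch in the style of the GHC 7 definition, reproduced from memory, so treat it as illustrative:

```haskell
import Control.Exception (mask, onException)

-- finally in the mask style: the sequel and the bookkeeping around it
-- stay masked; only `a` runs with asynchronous exceptions restored.
finallyMasked :: IO a -> IO b -> IO a
finallyMasked a sequel = mask $ \restore -> do
  r <- restore a `onException` sequel
  _ <- sequel
  return r
```

With this definition a second asynchronous exception cannot slip in between the handler finishing and the sequel starting, because that region is masked rather than unblocked.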
[Haskell-cafe] Printing of asynchronous exceptions to stderr
Hi!

I have spent some time debugging this in my program. ;-) Why is ThreadKilled not displayed by the RTS when sent to a thread (and unhandled), but any other exception is?

For example, I am using custom exceptions to signal different kinds of thread killing. Based on those my threads clean up in different ways. But after they clean up they keep running. Not really running, as their computations were interrupted, but simply hanging there. And if a new custom exception then arrives it is not handled anymore (as it comes after my normal exception handlers have already cleaned everything up) and it is printed to stderr. Now I added `finally` (throwIO ThreadKilled) at the end of my cleanup computations to really kill the thread, so that possible later exceptions are not delivered anymore.

Why is it not so that after all computations in a thread finish, the thread is killed/disposed of? Because it can be reused by some other computation? But then my exceptions could be delivered to the wrong (new) thread while they were meant for the old one?

Mitar
[Haskell-cafe] Bracket around every IO computation monad
Hi!

I have a class Neuron which has (among others) two functions: attach and deattach. I would like a way to call a list/stack/bunch of attach functions such that if any of them fails (by exception), deattach is called on the values which were already attached. I have come up with this:

  data Growable where
    Growable :: Neuron n => LiveNeuron n -> Growable

  growNeurons :: [IO Growable] -> IO [Growable]
  growNeurons attaches = growNeurons' attaches []
    where growNeurons' []      ls = return ls
          growNeurons' (a:ats) ls = bracketOnError a (\(Growable l) -> deattach l) (\l -> growNeurons' ats (l:ls))

So I give growNeurons a list of attach actions and it returns a list of attached values ((live)neurons). This works nicely, but the syntax to use it is ugly:

  neurons <- growNeurons [ do { a <- attach nerve1; return $ Growable a },
                           do { a <- attach nerve2; return $ Growable a },
                           do { a <- attach nerve3; return $ Growable a } ]

The types of attach and deattach are (if I simplify):

  attach :: Nerve n -> IO (LiveNeuron n)
  deattach :: LiveNeuron n -> IO ()

Growable is only used so that I can put actions of different types in the list. It seems to me that all this could be wrapped into a monad, so that I could write something like:

  neurons <- growNeurons' $ do
    attach nerve1
    attach nerve2
    attach nerve3

where I would be allowed to call actions of a special type class which also defines a clean-up function (in my case called deattach), which would be called if any exception were thrown (the exception then being rethrown at the end). Otherwise, the result would be a list of all computed values. In my case all this is in the IO monad.

So is it possible for evaluation of the monad's actions to be stacked inside bracketOnError so that in case of an error the clean-up functions are called, and otherwise a list of results is returned?

Mitar
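The rollback logic in growNeurons can also be written recursively with nested bracketOnError, which generalizes to any acquire/release pair. A hedged sketch (acquireAll is a made-up name; for the heterogeneous case each element would produce a Growable as in the post):

```haskell
import Control.Exception (bracketOnError)

-- Acquire resources left to right; if any acquisition (or anything
-- after it) throws, release the already-acquired resources in reverse
-- order and rethrow the exception.
acquireAll :: [(IO a, a -> IO ())] -> IO [a]
acquireAll []              = return []
acquireAll ((acq, rel):xs) =
  bracketOnError acq rel $ \x -> do
    rest <- acquireAll xs   -- any failure here triggers `rel x` too
    return (x : rest)
```

growNeurons above is essentially this pattern with every pair fixed to the attach action and a deattach-based release.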
[Haskell-cafe] Rigid types fun
Hi!

I have much fun with rigid types, type signatures and GADTs. And I would like to invite others in to share my joy. ;-) Please see the attached file and chase a solution to how to make it compile.

I would like to have a function which I would call like:

  createNerve (Axon undefined) (AxonAny undefined)

and it would return a proper Nerve, similar to how asTypeOf works. I would like to do that to remove repeating code like:

  from <- newChan
  for <- newChan
  let nerve = Nerve (Axon from) (AxonAny for)

which I have to write again and again just to make the types work out. Why can I not move that into a function?

I am using GHC 6.12.3. Is this something which will work in 7.0?

Mitar

Test.hs Description: Binary data
[Haskell-cafe] Re: Rigid types fun
Hi!

On Fri, Nov 5, 2010 at 10:49 AM, Bulat Ziganshin bulat.zigans...@gmail.com wrote:
> Friday, November 5, 2010, 12:45:21 PM, you wrote:
>
>  create = do
>    from <- newChan
>    for <- newChan
>    return $ Nerve (Axon from) (AxonAny for)
>
>  main = do
>    nerve <- create
>    ...

OK. It is necessary to check the attached file to understand. ;-) I would like to call it like:

  create (Axon undefined) (AxonAny undefined)

and get in that case Nerve (Axon a) (AxonAny b) as a result. If I called it like:

  create (AxonAny undefined) (AxonAny undefined)

I would get Nerve (AxonAny a) (AxonAny b) as a result. And so on. So I know I can move some hard-coded combination into a function. But I would like to cover all combinations and tell with the arguments which combination I want.

Mitar
[Haskell-cafe] Re: Rigid types fun
Hi!

On Fri, Nov 5, 2010 at 12:50 PM, Alexey Khudyakov alexey.sklad...@gmail.com wrote:
> I'm not sure what exactly you want. But what about applicative functors? They offer nice notation:
>
>   Nerve <$> (Axon <$> newChan) <*> (AxonAny <$> newChan)

Ooo. That is nice.

Mitar
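A minimal self-contained version of that suggestion, with stand-in types (the real Axon/AxonAny/Nerve live in the thread's attachment and surely differ), to show the <$>/<*> spelling that the archive mangled into $ and *:

```haskell
import Control.Applicative ((<$>), (<*>))
import Control.Concurrent.Chan (Chan, newChan)

data Axon a    = Axon (Chan a)
data AxonAny a = AxonAny (Chan a)
data Nerve a b = Nerve (Axon a) (AxonAny b)

-- Build the whole structure in a single applicative expression,
-- instead of naming each channel with <- bindings.
newNerve :: IO (Nerve a b)
newNerve = Nerve <$> (Axon <$> newChan) <*> (AxonAny <$> newChan)
```

Each <$>/<*> stage runs one newChan in IO and feeds the result into the constructors, which is exactly the repeated from/for boilerplate from the original question.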
[Haskell-cafe] Re: Re[2]: Rigid types fun
Hi!

On Fri, Nov 5, 2010 at 12:12 PM, Bulat Ziganshin bulat.zigans...@gmail.com wrote:
> look into HsLua sources. it does something like you are asking (converting return type into sequence of commands) so it may be what you are looking for. it uses typeclasses for this effect. the same technique is used in the haskell printf implementation afaik

While reading the source code I found that in loadstring you do not make sure that free is called even in the case of an exception. Is this not necessary?

Mitar
Re: [Haskell-cafe] Rigid types fun
Hi! On Fri, Nov 5, 2010 at 4:01 PM, Tillmann Rendel ren...@informatik.uni-marburg.de wrote: Note that newNerve does not take Axons, but rather monadic actions which create Axons. Now, you can use something like nerve - newNerve newAxon newAxonAny to create a concrete nerve. Thanks! I decided for this approach. It seems to me the nicest. Simple and allows me to later redefine Nerve and Axon without breaking stuff around. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] problems using macports?
Hi!

On Thu, Sep 9, 2010 at 5:38 PM, S. Doaitse Swierstra doai...@swierstra.net wrote:
> I am in my yearly fight to get a working combination of operating system (Snow Leopard), compiler version (6.12), wxWidgets and wxHaskell on my Mac.

I had the same problem just now, and it seems (as described also elsewhere) that the Haskell Platform GHC is linked against the system's iconv, and by using (or having) MacPorts or Fink with their own version you get a conflict.

One solution is to get a new version of GHC into MacPorts, so there will be no need for the Haskell Platform GHC installation if you want to use MacPorts: https://trac.macports.org/ticket/25558 (Or you could just change PATH so that it uses the MacPorts version.)

The other way is to (for the session in which you compile) hide the MacPorts or Fink installation. I had this problem with Fink and I fixed it simply by undeclaring environment variables:

  export -n LD_LIBRARY_PATH
  export -n LIBRARY_PATH

Of course then you cannot use Haskell libraries which require additional libraries not installed on the system (and this is why you installed them with MacPorts or Fink in the first place). So one way would be to have Haskell Platform versions which link against the MacPorts and Fink iconv (or their library paths in general). The other would probably be to implement/document configuration (extra-lib-dirs?) so that Cabal (or GHC in general) first searches the system's library path (the one against which GHC was compiled in the Haskell Platform) and, if the lib is not there, goes on to the MacPorts or Fink library paths.

Mitar
Re: [Haskell-cafe] Re: Cleaning up threads
Hi!

On Wed, Sep 22, 2010 at 10:21 AM, Simon Marlow marlo...@gmail.com wrote:
> You could use maskUninterruptible, but that's not a good solution either - if an operation during cleanup really does block, you'd like to be able to Control-C your way out.

So maybe this shows a need for a maskUninterruptibleExcept (a user exception, for example).

> So the only way out of this hole is: don't write long cleanup code that needs to mask exceptions. Find another way to do it.

There is sometimes no other way. For example, if cleanup requires extensive IO with a robot on Mars. It takes time to communicate in each direction, and while waiting for the response it is really not a great idea that the robot on Mars is left in an undefined state because the cleanup function was interrupted halfway through doing its thing.

Of course the protocol would probably just be that you send a message like "I am going away for some time, please confirm" and wait for the confirmation. So in most cases it will be OK even if you do not really wait for the confirmation, but sometimes there will be no confirmation and you will have to retry or try something else (like raise an alarm). (The point of such a protocol message is that the robot does not consume power trying to reach you when you are not there.)

But yes, in this case I will simply use maskUninterruptible, and the user should also be blocked/masked from interrupting the cleanup. (He/she still has the kill signal if this is really, really what he/she wants.)

Haskell's great type checking is a great tool and help in mission-critical programs; there is just this hidden side-effect (exceptions) possibility built into the language which has yet to be polished, in my opinion. This could be mitigated with resume-from-exception. But this is only one part of the story I am trying to raise here. The other is that I, as a user of some library function, do not know and cannot know (yet) which exceptions this function can throw at me. I believe this is a major drawback. Code behavior should be transparent, and types do help here, but (the possibility of) exceptions is hidden. To be prepared in code for any exception (even those not yet defined at the time of writing) is sometimes too hard a requirement (you write about async-exception-safety). Especially because for some exceptions you may want one behavior and for others another. Maybe what I am arguing for is that currently (with mask) you cannot say "mask everything except".

To answer Bas: I do not know how this should look. I just know that I miss Java's transparency about what can be thrown and what cannot, and its at-compile-time checking of whether you have covered all the possibilities.

Mitar

P.S.: I am not really doing a remote control for a Mars robot in Haskell, but it is not so far off. Maybe it will even be there someday.
Re: [Haskell-cafe] Re: Cleaning up threads
Hi!

On Tue, Sep 21, 2010 at 11:10 PM, Simon Marlow marlo...@gmail.com wrote:
> So rather than admitting defeat here I'd like to see it become the norm to write async-exception-safe code.

This is also what I think. You have to make your code work with exceptions, because they will come sooner or later. And once you handle them properly, you can easily use them also for your own stuff.

> It's not that hard, especially with the new mask API coming in GHC 7.0.

Not hard? I see it as almost impossible without mask. You cannot have arbitrarily long cleanup functions in, for example, bracket, because somebody can (and will) interrupt them even if you block, because some function somewhere deep below will unblock. And there is no way to resume after an exception in Haskell.

What I would also like to see in Haskell is that it would somehow be possible to see which exceptions your function (through the functions used there) could throw. In this way it would really be possible to write async-exception-safe functions (as we really do not want catch-alls all around). (Of course the problem with this approach is that if somebody changes an underlying function it could gain additional possible exceptions to be thrown.) So it would be even better if this were handled like pattern-matching warnings (so a warning would tell you if you are missing some exception), or better still: the type system would force you to either catch an exception or declare that you throw/pass it on yourself, similar to how Java has it. Because one other major Haskell selling point is that you (are said to) know from the type if and which side effects a function has. This is the story behind the IO monad. And with exceptions you do not have this information anymore. I believe this should be visible through the type system. Something like: http://paczesiowa.blogspot.com/2010/01/pure-extensible-exceptions-and-self.html ?

Mitar
[Haskell-cafe] Idea for hackage feature
Hi!

I just got an idea for a hackage feature: all functions/modules listed there could carry a mark if they, or any function/module they use, use an unsafe* function. Of course this will probably make almost everything marked as unsafe, but this is the idea - to raise awareness of it so that you can prefer one function/implementation over another.

Of course marking/tagging everything as unsafe is not really useful. Because of this I propose that the community then votes/vouches on the correctness/stability of implementations, and this would influence how unsafe a given function really is (or is according to the community, to be precise). It would be even better if every function using unsafe* also had a formal proof, but as we cannot expect to prove everything in the foreseeable future we could maybe opt for such a crowd-intelligence approach. We cannot have a Turing machine, but maybe we can have a crowd. ;-) (Of course a low number of found bugs and good unit-test code coverage could then positively influence the crowd, so authors would be motivated to assure that.)

Comments? Opinions? Because I really hate that I try to keep my code pure and separate IO from everything else, and then somewhere deep in there some unsafe* lurks.

(Ah, yes, a side effect of this tagging would also be that you could see where all those unsafe* calls are for a given function, so you could quickly jump (with a link) to a given line in the code and evaluate the circumstances in which that unsafe* call is made. And then vote/vouch once you discover that it is probably pretty safe.)

Mitar
Re: [Haskell-cafe] Re: Cleaning up threads
Hi! On Tue, Sep 14, 2010 at 9:04 PM, Gregory Collins g...@gregorycollins.net wrote: That's surprising to me -- this is how we kill the Snap webserver (killThread the controlling thread...). Yes. This does work. The only problem is that my main thread then kills child threads, which then start killing main thread again, which then again kills child threads and interrupt cleanup. Probably it can be solved with mask: http://hackage.haskell.org/trac/ghc/ticket/1036 My question is if there is some good code example how to achieve that before mask is available. The code I wrote in my original post does not work as intended. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Cleaning up threads
Hi!

On Tue, Sep 14, 2010 at 11:46 PM, Bas van Dijk v.dijk@gmail.com wrote:
> Note that killing the main thread will also kill all other threads. See:

Yes. But my question is how those other threads get time to clean up.

> You can use my threads library to wait on a child thread and possibly re-raise an exception that was thrown in or to it:

Thanks. Will look into it.

Mitar
Re: [Haskell-cafe] Re: Cleaning up threads
Hi!

On Wed, Sep 15, 2010 at 2:16 AM, Ertugrul Soeylemez e...@ertes.de wrote:
> The point is that killThread throws an exception. An exception is usually an error condition.

This is reasoning based on nomenclature. What if exceptions were named Signal or Interrupt?

> My approach strictly separates an unexpected crash from an intended quit.

For this you can have multiple types of exceptions, some which signify an error condition and some which signify that the user has interrupted a process and that the process should gracefully (exceptionally) quit. I like exceptions because you can split the main logic from the exceptional logic (like the user wanting to prematurely stop the program). But you still want to clean everything up properly. Once you have this exceptional logic in place (and you should always have it, as some exceptional things can always happen), why not use it also for less exceptional things (because you then have cleaner code)?

> Also using the Quit command from my example you can actually wait for the thread to finish cleanup work. You can't do this with an exception.

You can, if you have a proper way to mask them: http://hackage.haskell.org/trac/ghc/ticket/1036

Mitar
[Haskell-cafe] Interruptable event loop
Hi!

I have X11 code which looks something like the following. The problem is that TimerInterrupt sometimes gets thrown in a way that kills the whole main thread. Probably it gets thrown in the middle of some nested function which unblocked exceptions. I found: http://hackage.haskell.org/trac/ghc/ticket/1036#comment:4 but this is not yet in the GHC stable version. Is there some workaround for this? Because I have problems compiling the current Haskell Platform with GHC HEAD.

  currentThread <- myThreadId
  _ <- forkIO $ timer currentThread
  block $ run display

  run display = do
    ...
    allocaXEvent $ \event -> do
      interrupted <- catch (unblock $ nextEvent' display event >> return False)
                           (\(_ :: TimerInterrupt) -> return True)
      ...
    run display

  -- A version of nextEvent that does not block in foreign calls
  nextEvent' :: Display -> XEventPtr -> IO ()
  nextEvent' d p = do
    pend <- pending d
    if pend /= 0
      then nextEvent d p
      else do
        threadWaitRead (fromIntegral . connectionNumber $ d)
        nextEvent' d p

  data TimerInterrupt = TimerInterrupt deriving (Show, Typeable)
  instance Exception TimerInterrupt

  timer :: ThreadId -> IO ()
  timer t = do
    threadDelay redrawInterval
    throwTo t TimerInterrupt
    timer t

Mitar
[Haskell-cafe] Re: Interruptable event loop
Hi! OK, System.Timeout's timeout does not have this problem. ;-) Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
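For completeness, the System.Timeout solution mentioned here in runnable form: timeout confines its timer exception to the wrapped computation, so nothing leaks to the main thread the way the hand-rolled timer above could.

```haskell
import System.Timeout (timeout)
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
  -- a 10 ms budget for a 1 s action: the action is interrupted
  -- and timeout returns Nothing
  r <- timeout 10000 (threadDelay 1000000)
  print (r :: Maybe ())  -- prints Nothing
```

When the action finishes in time, timeout returns Just its result instead.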
[Haskell-cafe] Cleaning up threads
Hi!

I run multiple threads and I would like an exception from any of them (or from main) to propagate to the others, while at the same time they can gracefully clean up after themselves (even if this means not exiting). I have this code to try, but the cleanup functions (stop) get interrupted. How can I improve this code so that this does not happen?

  module Test where

  import Control.Concurrent
  import Control.Exception
  import Control.Monad

  thread :: String -> IO ThreadId
  thread name = do
    mainThread <- myThreadId
    forkIO $ handle (throwTo mainThread :: SomeException -> IO ()) $ -- I want a possible exception in start, stop or run to be propagated to the main thread so that all other threads are cleaned up
      bracket_ start stop run
    where
      start = putStrLn $ name ++ " started"
      stop = forever $ putStrLn $ name ++ " stopped" -- I want all threads to have as much time as they need to clean up after themselves (closing (IO) resources and similar), even if this means not dying
      run = forever $ threadDelay $ 10 * 1000 * 1000

  run :: IO ()
  run = do
    threadDelay $ 1000 * 1000
    fail "exit"

  main :: IO ()
  main = do
    bracket (thread "foo") killThread $ \_ ->
      bracket (thread "bar") killThread $ \_ ->
        bracket (thread "baz") killThread (\_ -> run)

Mitar
[Haskell-cafe] Dependent types
Hi! I believe dependent types is the right term for my problem. I was trying to solve it with GADTs but I was not successful so I am turning to infinite (lazy) wisdom of Haskell Cafe. I am attaching example code where I had to define MaybePacket data type which combines different types of Packets I would like to allow over Line. The problem is that there is a correlation between Line type and MaybePacket type and I would like to tell Haskell about that. But I am not sure how. Because now compiler, for example, warns me of a non-exhaustive pattern even if some MaybePacket value is not possible for given Line. Somehow I would like to have a getFromFirstLine function which would based on type of given Line return Maybe (Packet i) (for Line) or Maybe AnyPacket (for LineAny). So that this would be enforced and type checked. Best regards and thanks for any help Mitar Test2.hs Description: Binary data ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
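Since the attached Test2.hs did not survive in the archive, here is a hedged reconstruction of the kind of GADT indexing that addresses this (the names mirror the post, but the real definitions surely differ): making Line carry the type of packet it transports lets the compiler rule out the impossible cases instead of warning about non-exhaustive patterns.

```haskell
{-# LANGUAGE GADTs #-}

data Packet i = Packet i

data AnyPacket where
  AnyPacket :: Packet i -> AnyPacket

-- Index Line by the packet type it carries
data Line p where
  Line    :: Line (Packet i)
  LineAny :: Line AnyPacket

-- The result type now follows from the Line's type, as asked for,
-- and the pattern match is exhaustive: no impossible cases remain.
getFromFirstLine :: Line p -> Maybe p
getFromFirstLine Line    = Nothing  -- a real version would read a Packet i here
getFromFirstLine LineAny = Nothing  -- ... or an AnyPacket here
```

Matching on the Line constructor refines p to Packet i or AnyPacket respectively, so a combined MaybePacket type is no longer needed.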
Re: [Haskell-cafe] Dependent types
Hi! On Fri, Sep 10, 2010 at 9:22 AM, Stephen Tetley stephen.tet...@gmail.com wrote: This issue pops up quite quite often - Ryan Ingram's answer to it the last time it was on the Cafe points to the relevant Trac issue numbers: But I have not yet made it as GADTs. I would need some help here. How to change MaybePacket to GADTs? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Dependent types
Hi! I made it. ;-) Thanks to all help from copumpkin on IRC channel. I am attaching my solution for similar problems in future. And of course comments. Mitar Test5.hs Description: Binary data ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Control.Parallel missing
Hi!

A friend of mine tried to learn Haskell and asked me for advice because he had problems with Haskell in 5 steps: http://www.haskell.org/haskellwiki/Haskell_in_5_steps

Under the "Write your first parallel Haskell program" section there is an import Control.Parallel, which is simply missing in 6.12 and is also not in the documentation anymore: http://www.haskell.org/ghc/docs/latest/html/libraries/index.html

I know that we have now moved to the Haskell Platform so you can (probably) install it additionally, but for a novice it is not really nice that example programs do not compile out of the box. Maybe we could extend this 4th step with an introduction to Cabal?

Mitar
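For reference, the wiki's example needs the parallel package installed (cabal install parallel) now that Control.Parallel no longer ships with GHC itself. A minimal program in that style, assuming the package is available; the exact sums are arbitrary:

```haskell
import Control.Parallel (par, pseq)

main :: IO ()
main = a `par` (b `pseq` print (a + b))
  where
    -- two independent sums that can be evaluated in parallel:
    -- `par` sparks a, `pseq` forces b first on the current thread
    a = sum [1 .. 1000000 :: Int]
    b = sum [1 .. 2000000 :: Int]
```

Compile with -threaded and run with +RTS -N2 to actually use two cores; without that, the program still runs, just sequentially.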
Re: [Haskell-cafe] Double free-ing (was: Reading from a process)
Hi! On Fri, Dec 18, 2009 at 8:16 AM, Jason Dusek jason.du...@gmail.com wrote: Concatenating two `ByteString`s is O(n)? This is what it is written here: http://haskell.org/ghc/docs/latest/html/libraries/bytestring-0.9.1.5/Data-ByteString.html#v%3Aappend Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
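To make the costs concrete: strict append copies both arguments into a fresh buffer, while a lazy ByteString assembled with fromChunks merely links the existing chunks (the point Ketil makes in the next message). A small sketch using the bytestring package:

```haskell
import qualified Data.ByteString       as S
import qualified Data.ByteString.Char8 as C
import qualified Data.ByteString.Lazy  as L

main :: IO ()
main = do
  let a = C.pack "hello, "
      b = C.pack "world"
  -- strict: allocates and copies length a + length b bytes
  C.putStrLn (S.append a b)
  -- lazy: O(1) per chunk, the chunk contents are never copied;
  -- both spell the same byte sequence
  print (L.fromChunks [a, b] == L.fromStrict (S.append a b))
```

This is why accumulating incoming data as a list of strict chunks and converting with fromChunks at the end is cheaper than repeatedly appending strict ByteStrings.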
Re: [Haskell-cafe] Double free-ing
Hi! On Fri, Dec 18, 2009 at 8:29 AM, Ketil Malde ke...@malde.org wrote: Lazy ByteStrings should be able to append with O(1). (Or use strict BS hGetNonBlocking, and Lazy ByteString fromChunks.) Oh, true. Thanks! But ... lazy ByteString's hGetNonBlocking is probably not lazy? Could it be? It has to read at that moment whatever it can? But appending is really better. Why are lazy ByteStrings not default? Thanks again. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Double free-ing
Hi! On Fri, Dec 18, 2009 at 6:54 PM, Mitar mmi...@gmail.com wrote: On Fri, Dec 18, 2009 at 8:29 AM, Ketil Malde ke...@malde.org wrote: Lazy ByteStrings should be able to append with O(1). (Or use strict BS hGetNonBlocking, and Lazy ByteString fromChunks.) Oh, true. Thanks! But ... lazy ByteString's hGetNonBlocking is probably not lazy? Could it be? It has to read at that moment whatever it can? But appending is really better. Why are lazy ByteStrings not default? Tried it and it really works great! Thanks to all. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Double free-ing (was: Reading from a process)
Hi!

On Thu, Dec 17, 2009 at 5:28 AM, Jason Dusek jason.du...@gmail.com wrote:
> It seems like the message delimiter to me because you keep buffering till you receive it.

Hm, true. I have changed the code to this:

  getLastMyData :: MyState (Maybe MyData)
  getLastMyData = do
    p <- gets process
    case p of
      Nothing -> fail "No process"
      Just (MyProcess processOut processPid processBuffer) -> do
        processRunning <- io $ getProcessExitCode processPid
        case processRunning of
          Just _ -> processExited
          _      -> do
            ret <- io $ tryJust (guard . isEOFError) $ slurpMyData processOut processBuffer
            case ret of
              Left _ -> processExited -- EOF
              Right (datalist, currentBuffer) -> do
                modify (\s -> s { process = Just (MyProcess processOut processPid currentBuffer) })
                if null datalist
                  then return Nothing
                  else return $ Just $ head datalist -- MyData is stored in reverse order so head is the last MyData from the process

  slurpMyData :: Handle -> DataBuffer -> IO ([MyData], DataBuffer)
  slurpMyData = slurpMyData' []
    where slurpMyData' datalist h buffer@(DataBuffer capacity array bytecount) = do
            ready <- hReady h
            if not ready
              then return (datalist, buffer)
              else do
                let array' = advancePtr array bytecount
                count <- hGetBufNonBlocking h array' (capacity - bytecount)
                let bytecount' = bytecount + count
                chars <- peekArray bytecount' array
                let (d, rest) = readData . dataToString $ chars
                    rest' = stringToData rest
                if null d
                  then if length rest' == capacity
                    then error "Invalid data from the process"
                    else return (datalist, buffer)
                  else do
                    assert (length rest' <= capacity) $ pokeArray array rest'
                    let buffer' = DataBuffer capacity array (length rest')
                    slurpMyData' (d ++ datalist) h buffer'

  readData :: String -> ([MyData], String)
  readData = readData' []
    where readData' datalist ""     = (datalist, "")
          readData' datalist string = case reads string of
            [(x, rest)] -> readData' ((head x):datalist) rest -- x is a list and currently it has only one element
            []          -> (datalist, string) -- we probably do not have enough data to read MyData properly
            _           -> error "Ambiguous parse from the process"

And now it works. I just do not like using malloc and free. And talking about free, I now have a problem of double-freeing this buffer. I get it after I send Ctrl-C to the main process (with the underlying process I am communicating with). I am freeing the buffer in processExited:

  processExited :: MyState a
  processExited = do
    terminateDataProcess
    fail "Process exited"

  terminateDataProcess :: MyState ()
  terminateDataProcess = do
    p <- gets process
    case p of
      Just (MyProcess _ processPid (DataBuffer _ array _)) -> do
        modify (\s -> s { process = Nothing })
        io $ free array
        io $ terminateProcess processPid
      _ -> return ()

So I run processExited if there is EOF from the underlying process. But I also run terminateDataProcess in the bracket from which I am calling getLastMyData. The code is like this:

  bracket initDataProcess terminateDataProcess (... read getLastMyData repeatedly and process it ...)

Why am I getting these double-freeing errors? Should I introduce some locks on terminateDataProcess? I am using Linux 2.6.30 amd64 and GHC 6.10.4.

Mitar
Re: [Haskell-cafe] Double free-ing (was: Reading from a process)
Hi!

On Fri, Dec 18, 2009 at 1:53 AM, Jason Dusek jason.du...@gmail.com wrote:
> You shouldn't have to use `malloc` and `free` to accumulate input. Just append to a list or a ByteString, which is to say, add a byte to a ByteString to get a new ByteString. Yes, really.

Oh, I did not know that ByteString has hGetNonBlocking. When I saw that System.IO's hGetBufNonBlocking uses Ptr I started crying and kept crying while coding with it. It was really strange for me to use Foreign.* modules for code which should be clean Haskell.

> Maybe there is something I am missing here; but I can't see any reason to adopt this unnatural (for Haskell) approach.

I checked ByteString's hGetNonBlocking now and I do see why it is still better to use System.IO's hGetBufNonBlocking. I would like to have a buffer of fixed length and just fill it until it is full. With hGetBufNonBlocking this is possible, as I can just give different starting positions in the buffer while keeping already-read data where it is (so I do not have to copy anything around). But with hGetNonBlocking I would have to append two different buffers to get the resulting buffer, which is a completely unnecessary O(n). Why would I read data into some other buffer just to append it to the main buffer later?

Any suggestion on how I can do that in a Haskell way?

Mitar
[Haskell-cafe] Re: openFd under -threaded gets interrupted
Hi! On Wed, Dec 16, 2009 at 3:18 PM, Mitar mmi...@gmail.com wrote:

    fd <- openFd device ReadWrite Nothing OpenFileFlags { append = False, noctty = True, exclusive = False, nonBlock = True, trunc = False }

OK. After some testing I have discovered that the problem is only with /dev/rfcomm0 as a device, that is, with a Bluetooth serial connection. The problem is that the rfcomm Linux kernel code contains:

    if (signal_pending(current)) {
        err = -EINTR;
        break;
    }

So if some signal comes during the open call, it returns EINTR. As it has to open a connection to a Bluetooth device, opening the /dev/rfcomm0 file takes some time, and during that time there is obviously some signal sent by GHC with the -threaded option which is not sent without it. So please tell me what the difference in open is with and without the -threaded option in GHC, as I would like to make a simple C test case. Also, is there any workaround possible in Haskell/GHC? For example, keeping the time while openFd is in progress free of interrupts? I have found a very similar bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=161314 but the code from the patch does not seem to be included in the official kernel source code (but it was also a long time ago, so many things have probably changed). The workaround described there works here too, though: if I open /dev/rfcomm0 with some other program (so that the Bluetooth connection is made) before I run the Haskell program, then it works in both cases, with or without the -threaded option. Of course this is not a really useful workaround in my case, as I would like to make a stand-alone Haskell program. So please help me solve this problem. Without -threaded it works flawlessly. I am using Linux 2.6.30 amd64, GHC 6.10.4. It was the same with 6.8. I have also tested it on 6.12 and it is the same. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] openFd under -threaded gets interrupted
Hi! I am using code like this:

    fd <- openFd device ReadWrite Nothing OpenFileFlags { append = False, noctty = True, exclusive = False, nonBlock = True, trunc = False }

And if I compile my program with the -threaded option I always get this error: interrupted (Interrupted system call). But without -threaded it works flawlessly. I am using Linux 2.6.30 amd64, GHC 6.10.4. It was the same with 6.8. Why is this? Is there any workaround? Does it work in 6.12? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Broken GHC doc links
Hi! On Wed, Dec 16, 2009 at 8:08 PM, Edward Z. Yang ezy...@mit.edu wrote: As the W3C would say, Cool URLs don't Change. Can we at least setup redirects to the new pages? I second that. I have to manually fix URLs from Google results now all the time. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Reading from a process
Hi! I would like to make a one-directional inter-process communication in a way that one process prints out read-compatible Haskell data types and a Haskell program reads them from a pipe. I would like to make a function which returns Nothing if there is no data type in the pipe, or Just the last data type there was in the pipe (discarding all before it). So it should behave in a non-blocking way. In an attempt I made this:

    getLastMyData :: MyState (Maybe MyData)
    getLastMyData = do
        p <- gets process
        case p of
            Nothing -> fail "No process"
            Just (MyProcess processOut processPid processBuffer) -> do
                processRunning <- io $ getProcessExitCode processPid
                case processRunning of
                    Just _ -> processExited
                    _ -> do
                        ret <- io $ tryJust (guard . isEOFError) $ slurpInput processOut processBuffer
                        case ret of
                            Left _ -> processExited -- EOF
                            Right currentBuffer -> do
                                let (datalist, currentBuffer') = readData currentBuffer
                                modify (\s -> s { process = Just (MyProcess processOut processPid currentBuffer') })
                                if null datalist
                                    then return Nothing
                                    else return $ Just $ head datalist -- MyData is stored in reverse order so head is the last MyData from the process

    slurpInput :: Handle -> String -> IO String
    slurpInput h buffer = do
        ready <- hReady h
        if not ready
            then return buffer
            else do
                char <- hGetChar h
                slurpInput h (buffer ++ [char])

    readData :: String -> ([MyData], String)
    readData buffer = readData' [] buffer
        where
            readData' datalist [] = (datalist, [])
            readData' datalist buf = case reads buf of
                [(x, rest)] -> readData' ((head x):datalist) rest -- x is a list and currently it has only one element
                [] -> if length buf > 5 * maxMyDataDescLength
                    then error "Invalid data from process"
                    else (datalist, buf) -- we probably do not have enough data to read MyData properly
                _ -> error "Ambiguous parse from process"

(I have cleaned the code up somewhat; I hope I have not introduced any errors. MyData is encapsulated in a list when printed from the process, which is why there is a head.)
The problem is that I do not like this approach, and it does not look nice. For example, I am reading byte by byte and appending to the end. Then the problem is that if the process is sending garbage faster than Haskell can consume it, Haskell stays in the slurpInput function and never gets to readData, where it would find out that there is garbage coming in. I could use hGetBufNonBlocking, but it would still not solve the garbage problem. So is there some better way to do it? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
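One way to bound the garbage problem without splitting the logic differently is to read in chunks and cap the amount of unparsed data. A sketch using ByteString's non-blocking read; the 4096 chunk size and the cap parameter are arbitrary choices, not from the original code:

```haskell
import qualified Data.ByteString.Char8 as B
import System.IO (Handle)

-- Read whatever is currently available, never blocking, and fail
-- once the unparsed buffer grows past a cap, so garbage arriving
-- faster than we can parse it cannot keep us stuck here forever.
slurpInput :: Handle -> B.ByteString -> Int -> IO B.ByteString
slurpInput h buf cap
    | B.length buf >= cap = ioError (userError "too much unparsed data")
    | otherwise = do
        chunk <- B.hGetNonBlocking h 4096
        if B.null chunk
            then return buf
            else slurpInput h (buf `B.append` chunk) cap
```

Appending chunks is still O(n) per append, but it happens once per chunk instead of once per byte, and the cap guarantees the loop terminates even on a garbage stream.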
Re: [Haskell-cafe] Reading from a process
Hi! On Wed, Dec 16, 2009 at 11:25 PM, Jason Dusek jason.du...@gmail.com wrote: Then what is a solid, terminating criterion for garbage? How do you know this stream of bytes is no good or has gone as far as you can allow it to go? Use that criterion in `slurpInput`. The criterion for garbage is that it is not readable with read, and not merely because there is not enough data available yet. It seems that I will need to do buffer filling and reading at the same time. I was hoping I could split this into two functions. I urge you to reconsider your approach for message delimiting. Why use the `hReady` signal as the clue? Seems like a race condition waiting to happen. Maybe terminate them with a Haskell comment, like `-- EOT`. Since your message always comes wrapped in a list, you could just use the square brackets to tell you when you're done. I am not using the hReady signal as a delimiter. It is just there for some robustness, for example if the processes will communicate over a network and there will be delays. So I am just trying to collect enough bytes together for read to succeed. And I do have an upper limit on the message size defined. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Time constrained computation
Hi! I am not really sure if this is correct term for it but I am open to better (search) terms. pure ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Time constrained computation
Hi! Oops, I missed the save button and pressed send. ;-) So, I am not really sure if this is the correct term for it, but I am open to better (search) terms. I am wondering if it is possible to make a time-constrained computation. For example, if I have a computation of an approximation of a value which can take more or less time (and you get a more or less precise value), or if I have an algorithm searching some search space, it can find a better or worse solution depending on how much time you allow. So I would like to tell Haskell to (lazily, if it really needs to) get me some value, but not to spend more than so much time calculating it. One abstraction of this would be to have an infinite list of values, from which I would like to get the last element obtainable in t milliseconds of computational time. One step further would be to be able to stop the computation not at a predefined time, but with some other part of the program deciding when it is enough. So I would have a system which monitors the computation, and a pure computation I would be able to stop. Is this possible? Is it possible to have a pure computation interrupted and get whatever it has computed until then? How could I make this? Is there anything already done for it? Some library I have not found? Of course, all this should be as performant as possible. So the best interface for me would be to be able to start a pure computation, put an upper bound on its computation time, and also be able to stop it before that upper bound. And all this should be as abstracted as possible. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
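For the fixed-time variant, base's System.Timeout can already express much of this: force successive approximations and keep the last one that finished before the deadline. A sketch; lastWithin is a hypothetical helper, and the Leibniz series is just an example of an infinite list of approximations:

```haskell
import Control.Exception (evaluate)
import Data.IORef (newIORef, readIORef, writeIORef)
import System.Timeout (timeout)

-- Force the elements of a (possibly infinite) list one by one,
-- remembering the last fully evaluated one; give up after the
-- given number of microseconds.
lastWithin :: Int -> [a] -> IO (Maybe a)
lastWithin micros xs = do
    best <- newIORef Nothing
    _ <- timeout micros $
        mapM_ (\x -> evaluate x >>= writeIORef best . Just) xs
    readIORef best

main :: IO ()
main = do
    -- partial sums of the Leibniz series for pi
    let approx = scanl1 (+)
            [ (-1) ^ k * 4 / fromIntegral (2 * k + 1) | k <- [0 :: Integer ..] ]
    r <- lastWithin 10000 (approx :: [Double])
    print r
```

Note that timeout interrupts only between evaluate steps here; interrupting in the middle of forcing a single element and keeping its partial work would need something more exotic than this sketch provides.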
Re: [Haskell-cafe] ANNOUNCE: Sun Microsystems and Haskell.org joint project on OpenSPARC
Hi! On Sat, Jul 26, 2008 at 3:17 AM, Ben Lippmeier [EMAIL PROTECTED] wrote: http://valgrind.org/info/tools.html No support for Mac OS X. :-( Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANNOUNCE: Sun Microsystems and Haskell.org joint project on OpenSPARC
Hi! On Sat, Jul 26, 2008 at 1:35 PM, Mitar [EMAIL PROTECTED] wrote: No support for Mac OS X. :-( Apple provides Shark in Xcode Tools which has something called L2 Cache Miss Profile. I will just have to understand results it produces. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANNOUNCE: Sun Microsystems and Haskell.org joint project on OpenSPARC
Hi! If we spend so long blocked on memory reads that we're only utilising 50% of a core's time then there's lots of room for improvements if we can fill in that wasted time by running another thread. How can you see how much your program waits because of L2 misses? I have been playing lately with dual Quad-Core Intel Xeon Mac Pros with 12 MB L2 cache per CPU and 1.6 GHz bus speed, and it would be interesting to check these things there. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Help with optimization
Hi! Profiling says that my program spends 18.4 % of its time (that is, around three seconds) and 18.3 % of its allocations in this function, which saves the rendered image to a PPM file:

    saveImageList :: String -> Int -> Int -> [ViewportDotColor] -> IO ()
    saveImageList filename width height image = do
        B.writeFile filename file
        where
            file = B.append header bytes
            header = C.pack $ "P6\n" ++ show width ++ " " ++ show height ++ "\n255\n"
            bytes = B.pack $ concatMap (color . dealpha . (\(ViewportDotColor _ c) -> c)) image
                where
                    color (VoxelColor red green blue _) = [floor $ red * 255, floor $ green * 255, floor $ blue * 255]
                    dealpha c = addColor c (VoxelColor 1.0 1.0 1.0 1.0) -- white background

For a file of 921,615 bytes this is too much in my opinion. And I think it performs too many allocations. Probably it should not build intermediate lists? Any suggestions? Best regards Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
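On newer GHCs (this API did not exist at the time of the post), Data.ByteString.Builder avoids building the intermediate [Word8] entirely: each pixel is streamed straight into the output buffer. A sketch, with a simplified (Double, Double, Double) pixel standing in for the post's ViewportDotColor:

```haskell
import qualified Data.ByteString.Builder as BB
import Data.Word (Word8)
import System.IO (IOMode (WriteMode), withFile)

-- Write a binary PPM (P6) without materializing a byte list.
savePPM :: FilePath -> Int -> Int -> [(Double, Double, Double)] -> IO ()
savePPM path w h pixels =
    withFile path WriteMode $ \hdl ->
        BB.hPutBuilder hdl $
            BB.string7 ("P6\n" ++ show w ++ " " ++ show h ++ "\n255\n")
                <> foldMap pixel pixels
  where
    pixel (r, g, b) = BB.word8 (chan r) <> BB.word8 (chan g) <> BB.word8 (chan b)
    chan :: Double -> Word8
    chan x = floor (min 1 (max 0 x) * 255)
```

The Builder fills a reused output buffer incrementally, so allocation stays proportional to the buffer size rather than to three boxed Word8 list cells per pixel.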
Re: [Haskell-cafe] Help with optimization
Hi! 2008/7/20 Adrian Neumann [EMAIL PROTECTED]: Maybe your image isn't strict enough and the computations are forced when the image gets written to disc? But it still takes three seconds more than displaying to the screen. This is why I think this code is too slow. This, and the fact that saving to a file takes almost 25 % more allocations. So probably it is slower because of all these allocations? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Profiling nested case
Hi! I had to change the code somewhat. Now I have a function like:

    worldScene point = case redSphere (0,50,0) 50 point of
        v@(Right _) -> v
        Left d1 -> case greenSphere (25,-250,0) 50 point of
            v@(Right _) -> v
            Left d2 -> Left $ d1 `min` d2

(Of course there could be more objects.) Any suggestion how I could make this less hard-coded? Something which would take a list of objects (object functions) and then return a Right if any object function returns Right, or the minimum of all the Lefts otherwise, but with similar performance? If not on my GHC version (6.8.3), then on something newer (which uses fusion or something). Is there some standard function for this, or should I write my own recursive function over a list of object functions? But I am afraid that this will be hard for the compiler to optimize. (It is important to notice that the order of executing the object functions does not matter, so it could be a good candidate for parallelism.) Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
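A sketch of a list-driven version. The thread's SpacePoint, VoxelColor and sphere functions are assumed and simplified here so the snippet is self-contained; the fold short-circuits on the first Right and otherwise keeps the minimum of the Lefts:

```haskell
type SpacePoint = (Double, Double, Double)
type VoxelColor = (Double, Double, Double) -- stand-in for the real type
type WorldElement = SpacePoint -> Either Double VoxelColor

sphere :: SpacePoint -> Double -> VoxelColor -> WorldElement
sphere (x0, y0, z0) r color (x, y, z)
    | d2 <= r * r = Right color
    | otherwise   = Left (sqrt d2 - r) -- crude distance estimate
  where
    d2 = (x - x0) ^ 2 + (y - y0) ^ 2 + (z - z0) ^ 2

worldScene :: [WorldElement] -> SpacePoint -> Either Double VoxelColor
worldScene objects point = foldr1 combine (map ($ point) objects)
  where
    combine v@(Right _) _    = v                            -- first hit wins
    combine (Left d)    rest = either (Left . min d) Right rest
```

Because combine never looks at its second argument once it has a Right, the remaining object functions are not evaluated, just like the hand-written case chain; whether GHC produces equally tight code for the fold is a separate question.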
Re: [Haskell-cafe] Profiling nested case
Hi! On Sat, Jul 12, 2008 at 3:33 AM, Max Bolingbroke [EMAIL PROTECTED] wrote: If findColor had been a function defined in terms of foldr rather than using explicit recursion, then there's a good chance GHC 6.9 would have fused it with the list to yield your optimized, loop-unrolled version: My first version was with msum. Is this also covered by this fusion? (And it is interesting that my own recursion version is faster than the version with msum. Why?) Incidentally, if in your most recent email castRayScene2 was your only use of castRay, GHC would have inlined the whole definition into that use site and you would have got castRayScene1 back again. It is a little more tricky. I choose in an IO monad which scene to render (selected by a program argument). So at compile time it is not yet known which one will be used. But there is a finite number of possibilities (in my case two) - why not inline both versions and choose one at run time? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Profiling nested case
Hi! On Fri, Jul 18, 2008 at 3:54 PM, Chaddaï Fouché [EMAIL PROTECTED] wrote: So that I can easily change the type everywhere. But it would be much nicer to write:

    data Quaternion a = Q !a !a !a !a deriving (Eq, Show)

Only the performance of the Num instance functions of Quaternion is then quite a bit worse. You can probably use a specialization pragma to get around that. But why is this not automatic? If I use Quaternions of only one type in the whole program, then why does it not make a specialized version for that type? At least with the -O2 switch. Why exactly are polymorphic functions slower? Is this not just a question of type checking (and of writing portable/reusable code)? Later on in the compilation process we do know exactly which type the called function is used at - so we could use the function as if it had been written only for that type. Something like what specialization is doing, as I see it. I thought this was always done. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
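The specialization pragma mentioned above looks like this; a sketch with a hypothetical magnitude function. GHC with -O2 can also specialize on its own when the monomorphic call site is visible in the same module, but the pragma makes it explicit and works across module boundaries:

```haskell
-- Polymorphic quaternion; the pragma asks GHC to emit a dedicated,
-- unboxing-friendly copy of the function for the Double instance.
data Quaternion a = Q !a !a !a !a deriving (Eq, Show)

qMagnitudeSquared :: Num a => Quaternion a -> a
qMagnitudeSquared (Q w x y z) = w * w + x * x + y * y + z * z
{-# SPECIALIZE qMagnitudeSquared :: Quaternion Double -> Double #-}
```

Without such a specialization, the polymorphic version receives a Num dictionary at run time and every (+) and (*) becomes an indirect call, which is the slowdown discussed in this thread.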
Re: [Haskell-cafe] Profiling nested case
Hi! (I will reply properly later; I have a project to finish and GHC is playing around and does not want to cooperate.) This project of mine is getting really interesting. It is like playing table tennis with GHC. Sometimes it gives a nice ball, sometimes I have to run around after it. But I just wanted to make a simple raycasting engine, not debug GHC. It is interesting, but I just do not have time right now for the loop of change code, compile, run on known input, check if elapsed time increased (which I have to do even when I think I am optimizing things, or even when I think I am just refactoring code, moving constants to definitions ...). And this is a slow process, because every iteration runs for a few minutes. The next beautiful example in this series is this function for computing a 4D Julia set fractal:

    julia4DFractal :: BasicReal -> World
    julia4DFractal param (x,y,z) = julia4D (Q (x / scale) (y / scale) (z / scale) param) iterations
        where
            c = Q (-0.08) 0.0 (-0.8) (-0.03)
            alphaBlue = VoxelColor 0 0 (2 / scale) (2 / scale)
            scale = fromIntegral sceneHeight / 1.8
            threshold = 16
            iterations = 100 :: Int
            julia4D _ 0 = (# alphaBlue, 1 #) -- point is (probably) in the set
            julia4D q it
                | qMagnitudeSquared q > threshold = (# noColor, 1 #) -- point is not in the set
                | otherwise = julia4D (qSquared q + c) (it - 1)
                where
                    distance = scale * (qMagnitude qN) / (2 * (qMagnitude qN')) * log (qMagnitude qN)
                    (# qN, qN' #) = disIter q (Q 1 0 0 0) iterations
                        where
                            disIter qn qn' 0 = (# qn, qn' #)
                            disIter qn qn' i
                                | qMagnitudeSquared qn > threshold = (# qn, qn' #)
                                | otherwise = disIter (qSquared qn + c) (2 * qn * qn') (i - 1)

Please observe that distance is never used, and this is also what GHC warns about. But the trick is that with this part of the code in there, the program virtually never finishes (I killed it after 15 minutes). If I remove everything on and after the "where distance" line, it finishes in 30 seconds.
OK, the problem is with (# qN, qN' #); if this is changed to a normal (qN, qN'), then it works. But to notice this ... this is something you have to spend a day on. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Profiling nested case
Hi! My guess is that it was premature optimization that created this bug. It is the root of all evil. ;-) Unboxed tuples are not the best answer for every situation. They are evaluated strictly! Then I have not understood the last paragraph here correctly: http://www.haskell.org/ghc/docs/latest/html/users_guide/primitives.html Oh, no. It is like you say. I also use -funbox-strict-fields, and Q is defined with strict fields. But I also tried without the switch and it is the same (it takes forever). But then qN and qN' do not have unboxed types. So it should be lazy? If you are in that phase where you are doing performance tweaks and you think GHC's strictness analysis might not be picking up on some strict behavior in your program, add the annotation. If it makes it faster, great; if it doesn't change things, take it out! Best to underconstrain your program. I completely agree. I am also a firm believer in clean and pure code, where I would leave all optimization to the compiler and just write the semantics into the program. But this project has shown me that there is still a long way of compiler development to go before that will be possible (and usable): some simple refactoring of code which does not really change semantics can have a big influence on performance, because the compiler treats it differently (polymorphic types instead of hardcoded types, passing a function as a parameter instead of hardcoding it). For example, I have now defined my types as:

    type BasicReal = Double

    data Quaternion = Q !BasicReal !BasicReal !BasicReal !BasicReal
        deriving (Eq, Show)

So that I can easily change the type everywhere. But it would be much nicer to write:

    data Quaternion a = Q !a !a !a !a
        deriving (Eq, Show)

Only the performance of the Num instance functions of Quaternion is then quite a bit worse. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Profiling nested case
Hi! This is not all. Compare the performance of these two semantically identical functions:

    castRayScene1 :: Ray -> ViewportDotColor
    castRayScene1 (Ray vd o d) = ViewportDotColor vd (castRay' noColor 0)
        where
            castRay' color@(VoxelColor _ _ _ alpha) depth
                | depth > depthOfField = color
                | alpha < 0.001 = castRay' pointColor (depth + distance')
                | alpha > 0.999 = color
                | otherwise = castRay' newColor (depth + distance')
                where
                    (# pointColor, distance #) = worldScene (o + (d * depth))
                    distance' = max 1 distance
                    newColor = addColor color pointColor

and:

    castRay :: World -> Ray -> ViewportDotColor
    castRay world (Ray vd o d) = ViewportDotColor vd (castRay' noColor 0)
        where
            castRay' color@(VoxelColor _ _ _ alpha) depth
                | depth > depthOfField = color
                | alpha < 0.001 = castRay' pointColor (depth + distance')
                | alpha > 0.999 = color
                | otherwise = castRay' newColor (depth + distance')
                where
                    (# pointColor, distance #) = world (o + (d * depth))
                    distance' = max 1 distance
                    newColor = addColor color pointColor

    castRayScene2 :: Ray -> ViewportDotColor
    castRayScene2 = castRay worldScene

The program which uses castRayScene1 is 1.35 times faster than the program which uses castRayScene2 (37 seconds versus 50 seconds). (Compiled with GHC 6.8.3 and the -O2 switch. The program spends almost all of its time in this function.) It is somehow awkward that passing a function as an argument slows the program down so much. Is not Haskell a functional language, and is not such (functional) code reuse one of its main points? Of course, I could use some preprocessor/template engine to find/replace the castRay-like function into castRayScene1 before compiling. But this somehow kills the idea of a compiler? A smart compiler, which should do things for you? The same as my previous example, where a hard-coded list was not optimized (as if it could change during execution). It looks like my program is being interpreted, not compiled.
Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Profiling nested case
Hi! I am making a simple raycasting engine and have a function which takes a point in space and returns the color of an object (if there is any) at this point in space. And because the whole thing is really slow (or was really slow) on simple examples, I decided to profile it. It takes around 60 seconds for a 640x480 px image with 400 depth of field. This is at worst 122,880,000 calculations (if the scene is rather empty) of a coordinate of a point in space and then a check for a color. And 60 seconds looks like really a lot to me for that. So I went profiling and found out that the strange part of the code is the main color-checking function, which has a list of objects (at this time the list is hardcoded). It looks like this:

    world :: SpacePoint -> VoxelColor
    world point = case msum . sequence elements $ point of
        Just v -> v
        Nothing -> noColor
        where
            elements = [redSphere (0,50,0) 50, greenSphere (25,-50,0) 50, blueSphere (-150,0,150) 50]

So three spheres in a world, and I check if the point is in any of them. Like this:

    sphere :: SpacePoint -> BasicReal -> VoxelColor -> WorldElement -- center of sphere, its radius, its color
    sphere (x0,y0,z0) r color (x,y,z)
        | x' * x' + y' * y' + z' * z' <= r * r = Just color
        | otherwise = Nothing
        where
            x' = x - x0
            y' = y - y0
            z' = z - z0

    redSphere :: SpacePoint -> BasicReal -> WorldElement
    redSphere c r = sphere c r redColor

So profiling told me that the world function takes 38.4 % of all running time. So I decided to play with it. Maybe a more direct approach would be better:

    world :: SpacePoint -> VoxelColor
    world point = findColor [redSphere (0,50,0) 50, greenSphere (25,-50,0) 50, blueSphere (-150,0,150) 50]
        where
            findColor [] = noColor
            findColor (f:fs) = case f point of
                Just v -> v
                Nothing -> findColor fs

Great, it improved. To 40 s. But still it was too much.
I tried this:

    world :: SpacePoint -> VoxelColor
    world point = case redSphere (0,50,0) 50 point of
        Just v -> v
        Nothing -> case greenSphere (25,-50,0) 50 point of
            Just v -> v
            Nothing -> case blueSphere (-150,0,150) 50 point of
                Just v -> v
                Nothing -> noColor

And it took 15 s. And the profiling was also like I would anticipate: calculating point coordinates and checking spheres takes almost all the time. So any suggestions how I could build a list of objects to check at runtime and still have this third version's performance? Why this big difference? (I am using GHC 6.8.3 with the -O2 compile switch.) (The * operator is casting a ray, that is, multiplying a ray direction vector with a scalar factor.) Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
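For the Maybe-based version in this thread, the short-circuiting traversal can also be written with Alternative's <|>; whether it matches the hand-inlined chain's performance depends on GHC inlining the fold, so this is only a sketch:

```haskell
import Control.Applicative ((<|>))
import Data.Maybe (fromMaybe)

-- Try each object function in turn; the first Just wins, just like
-- the nested case chain, because foldr with <|> is lazy in the rest.
firstHit :: c -> [p -> Maybe c] -> p -> c
firstHit noColor elements point =
    fromMaybe noColor (foldr (\f acc -> f point <|> acc) Nothing elements)
```

This is semantically the same as the findColor loop above, just expressed with standard combinators instead of explicit recursion.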
[Haskell-cafe] Internet Communications Engine and Haskell
Hi! Has any work on implementing the Internet Communications Engine in Haskell been done already? Any other suggestions for how I could use ICE in Haskell? Through FFI calls to its C++ version? http://www.zeroc.com/ice.html Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] 0/0 > 1 == False
Hi! On Jan 11, 2008 7:30 AM, Cristian Baboi [EMAIL PROTECTED] wrote: NaN is not 'undefined' Why not? What is the semantic difference? I believe Haskell should use undefined instead of NaN for all operations which are mathematically undefined (like 0/0). NaN should be used in languages which do not support such nice Haskell features. Because if Haskell used undefined, such an error would propagate itself to higher levels of computation; with NaN it does not.

    if bigComputation > 1 then ... else ...

Would evaluating the else branch be semantically correct if bigComputation returns NaN? No, it would not. With undefined this is correctly (not)evaluated. Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] 0/0 > 1 == False
Hi! Why is 0/0 (which is NaN) > 1 == False, and at the same time 0/0 < 1 == False? This means that 0/0 == 1? No, because 0/0 == 1 == False as well. I understand that the proper mathematical behavior would be that, as 0/0 is mathematically undefined, 0/0 cannot even be compared to 1. There is probably an implementation reason behind it, but do we really want such hidden behavior? Would it not be better to throw some kind of an error? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
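This behavior follows from IEEE 754, which Haskell's Double inherits: every ordered comparison involving a NaN is False, including equality with itself. A small demonstration:

```haskell
main :: IO ()
main = do
    let nan = 0 / 0 :: Double
    print (nan > 1)    -- False
    print (nan < 1)    -- False
    print (nan == 1)   -- False
    print (nan == nan) -- False: NaN is not even equal to itself
```

`nan /= nan` evaluating to True is in fact the classic portable test for NaN, which is what the library function isNaN does for you.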
[Haskell-cafe] Missing join and split
Hi! I am really missing a (general) split function built into standard Haskell. I do not understand why there is something so specific as words and lines, but not a simple split? The same goes for join. Yes, I can of course define them, but ... in interactive mode it would be quite handy to have them there. Or am I wrong and are those hidden somewhere? So what are common ways to get around this? What are elegant definitions? Inline definitions? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
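For reference, a minimal sketch of the two functions over arbitrary Eq lists; split here keeps empty fields (unlike words), and the names are just the conventional ones, not from any standard library:

```haskell
import Data.List (intercalate)

-- Split on a delimiter element, keeping empty fields:
-- split ',' "a,,b" gives ["a", "", "b"]
split :: Eq a => a -> [a] -> [[a]]
split d = foldr step [[]]
  where
    step x acc@(cur : rest)
        | x == d    = [] : acc
        | otherwise = (x : cur) : rest
    step _ [] = [[]] -- unreachable: the accumulator is never empty

-- join with a separator is just intercalate from Data.List
joinWith :: [a] -> [[a]] -> [a]
joinWith = intercalate
```

With these definitions, joinWith [d] (split d xs) == xs holds for any delimiter d, which is the round-trip property one would expect from the pair.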
Re: [Haskell-cafe] Missing join and split
Hi! On Dec 28, 2007 5:51 PM, Lihn, Steve [EMAIL PROTECTED] wrote: Since regex is involved, it is specific to (Byte)String, not a generic list. Oh, this gives me an interesting idea: making regular expressions more generic. Would it not be interesting and useful (but not really efficient) to have patterns something like:

    foo :: Eq a => [a] -> ...
    foo (_{4}'b') = ...

which would match a list of four elements ending with the element 'b'. Or:

    foo (_+';'_+';'_) = ...

which would match a list with two embedded ';' elements. (The last _ matches the remainder of the list.) OK, maybe patterns are not the proper place to implement this, as it would add a possibility to make really messy Haskell programs. But extending regular expressions to work on any list of elements whose type implements Eq would be really powerful. And then we could use split in many contexts other than just text processing. Of course, the problem in both cases is implementing something like regular expressions efficiently, especially on lists, but this is why there are smart people around. :-) Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Missing join and split
Hi! On Dec 29, 2007 12:13 AM, Evan Laforge [EMAIL PROTECTED] wrote: Maybe you could use view patterns? foo (regex (.*);(.*);(.*)) - [c1, c2, c3] = ... Oh. Beautiful. :-) Parser combinators basically provide generalized regexes, and they all take lists of arbitrary tokens rather than just Chars. I've written a simple combinator library before that dispenses with all the monadic goodness in favor of a group combinator and returning [Either [tok] [tok]], which sort of gives parsers a simpler regexy flavor (Left is out of group chunk and Right is in group chunk). foo (match (group any `sepBy` char ';') - [c1, c2, c3]) = ... Ah. Is this accessible somewhere? Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Serial Communications in Haskell
Hi! You can check how I did this in my Lego Mindstorms NXT interface, pre-beta version: http://www.forzanka.si/files/NXT.tgz That's really cool! I hope you can upload this to hackage soon. I do not think it is ready yet. It is working but it is missing extensive testing (making some bigger programs in practice, to see how it behaves and to see if the API is sane) and of course documentation (how to use it, examples, tutorials ...). I will continue with this project in a few months. (The main reason is that I do not have any experience with Hackage and Cabal nor I have time now for this. But if anybody wants to help ...) I am using it in my AI robot research project where I am using Lego Mindstorms NXT unit and communicating with it over Bluetooth. And the AI is made in Haskell. :-) Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] let and fixed point operator
Hi! I did once try to learn Prolog. And failed. Miserably. You should backtrack at this point and try again differently. :-) Mitar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Dynamic thread management?
Hi! I am thinking about a model where you would have only n threads on an n-core (or n-processor) machine. They would be your worker threads, and you would spawn them only once (at the beginning of the program) and then just delegate work between them. On 8/13/07, Jan-Willem Maessen [EMAIL PROTECTED] wrote: The problem here is that while Cilk spawns are incredibly cheap, they're still more than a simple procedure call (2-10x as expensive if my fading memory serves me rightly). Let's imagine we have a nice, parallelizable computation that we've expressed using recursive subdivision (the Cilk folks like to use matrix multiplication as an example here). Near the leaves of that computation we still spend the majority of our time paying the overhead of spawning. So we end up actually imposing a depth bound, and writing two versions of our computation---the one that spawns, which we use for coarse-grained computations, and the one that doesn't, which we use when computation gets fine-grained. It makes a really big difference in practice. But this could be done at runtime too. If the lazy, not-yet-evaluated chunk is big, then divide it into a few parts and run each part in its own thread. But if the chunk is small (you are at the end of the evaluation and you have already evaluated the necessary subexpressions), you do it in the thread which encounters this situation (which is probably near the end of the program or the end of the monadic IO action). And the threshold at which you choose to delegate or not could be determined at runtime too. In combination with some transactional memory, or some other trick behind this delegation, this could probably be made to work. We could also hint to the runtime that a function will probably take a long time to compute (even if it is lazy) by making a type for such functions which would signal this. Of course this could also make things worse if used improperly. But sometimes you know that you will be running a map of a time-consuming function.
Yes, you have parMap, but the problem I see with it (and please correct me if I am wrong) is that it spawns a new thread for every application of the function to an element. But what if the functions are small? Then this is quite an overhead. And you have to know this in advance if you want to use something other than the default parMap, which is not always possible (if we are passing a function as an argument to the function which calls map). For example:

    calculate f xs = foldl (+) 0 $ map f xs -- or foldr, I am not sure

And I would like to see something like this: it gets to the point where we need to evaluate this function call, for some big f and some big list xs, so the thread which gets to it starts evaluating the first value; when it starts with another (it is a recursive call, so it is a similar evaluation), it sees that the other thread is idle and that the function would probably take a long time (it speculates), so it delegates it there and continues with the third element via a dummy recursive call to the function, in this case foldl (dummy because it did not really evaluate everything at the previous level). Now, if the second thread finishes first, it goes to the next element (recursive call), but sees that it is already (still) being evaluated, so it goes on to the fourth. Otherwise, if the first thread finishes first, it just goes to the next element. This would be some kind of speculative evaluation. If CPUs know how to do that, why could we not do it at a higher level? It would also be an interesting lazy evaluation of (in this example) the foldl function. The system would make a recursive call but would just go another call deeper if it finds that it is impossible to evaluate it (because it is still being evaluated somewhere else). And at every level it finishes, it would check previous levels to see if it can collapse them (and maybe finish the whole thing prematurely).
It would be as if I unwind the loop (in this case, the recursion), evaluate everything (on many threads), and then (try to) calculate the value. If it finishes prematurely... sad, but for big lists and big functions it would be a saver. And the size of this window (of unwinding) would be a heuristic, speculative function of the number of available threads (cores, processors) and of information about how long previous evaluations of the same function have taken.

I really messed this explanation up. Or maybe it is completely wrong and this is why it looks like a mess to me. :-)

Mitar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Interval Arithmetics
Hi!

First, a disclaimer: everything I know about interval arithmetic comes from this video: http://video.google.com/videoplay?docid=-2285617608766742834

I would like to know if there is any implementation of interval arithmetic in Haskell? I would like to play a little with that. I checked the web, and the straightforward approach I found, http://cs.guc.edu.eg/faculty/sabdennadher/Publikationen/paper-wflp99.ps.gz, has, from my point of view, an invalid implementation. For example, the lower bound of a sum should not just be calculated as the sum of the lower bounds of the summands. It should be the greatest representable number which is smaller than or equal to the exact value of the sum. By just boldly taking the sum, we ignore any rounding errors we could introduce, and this is somehow against the idea of interval arithmetic. And as is said at the end of the talk, a system for interval arithmetic should do a lot of work to make those intervals as small as possible while still being correct and accounting for all the errors we have accumulated.

I think a strictly typed and lazy language like Haskell would be a good place to implement this. But I would like to know whether this would be possible to do from the language itself, without making changes to the compiler and/or runtime? Because, for example, a good implementation should reformulate equations at runtime according to the exact values it wants to compute them on. Has it been done already?

Mitar
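The "outward rounding" requirement described above can be illustrated with a toy interval type. This is only a sketch under stated assumptions: a real implementation would switch the FPU rounding mode (round toward -inf for lower bounds, toward +inf for upper bounds); the `nextDown`/`nextUp` helpers here are crude relative-epsilon stand-ins I made up for illustration, not a correct ulp computation.

```haskell
-- A toy interval: [lo, hi] is guaranteed to contain the exact result.
data Interval = Interval { lower :: Double, upper :: Double } deriving Show

-- Crude outward nudges: widen by roughly one unit in the last place.
-- Stand-ins for proper directed rounding, not an exact construction.
nextDown, nextUp :: Double -> Double
nextDown x = x - abs x * eps - tiny
nextUp   x = x + abs x * eps + tiny

eps, tiny :: Double
eps  = 2.220446049250313e-16  -- machine epsilon for Double
tiny = 5e-324                 -- smallest positive denormal, handles x == 0

-- Interval addition with outward rounding: the lower bound is rounded
-- down and the upper bound up, so the true sum cannot escape.
add :: Interval -> Interval -> Interval
add (Interval a b) (Interval c d) = Interval (nextDown (a + c)) (nextUp (b + d))

main :: IO ()
main = do
  let Interval l h = add (Interval 0.1 0.2) (Interval 0.3 0.4)
  -- the exact sum interval [0.4, 0.6] must lie inside [l, h]
  print (l <= 0.4 && 0.6 <= h)
```

Contrast with the naive version criticized in the post, which would compute the bounds as plain `a + c` and `b + d` and silently drop the rounding error of each addition.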
Re: [Haskell-cafe] Parallel executing of actions
Hi!

On 4/18/07, Juan Carlos Arevalo Baeza [EMAIL PROTECTED] wrote:

This evaluates all the elements of the list using parMap (the expensive part, right?), and then sequentially applies the action on the current thread.

True. But currently the main function I would like to parallelize is something like this:

drawPixel x y = do
    openGLDrawPixel x y color
  where
    color = calcColor x y

But it is probably really better if I first calculate everything and then just draw it. It is easier to parallelize (it will have only pure functions) and I will also avoid those OpenGL errors.

Thanks everybody

Mitar
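The "calculate everything first, then draw sequentially" split can be sketched as follows. This is an assumption-laden illustration: `calcColor` and `drawPixelIO` are placeholder names standing in for the expensive pure function and the OpenGL call from the post, and the 2x2 coordinate grid is just for demonstration.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Stand-in for an expensive pure color computation.
calcColor :: Int -> Int -> Int
calcColor x y = (x * 31 + y * 17) `mod` 256

-- Stand-in for the sequential drawing action (openGLDrawPixel in the post).
drawPixelIO :: Int -> Int -> Int -> IO ()
drawPixelIO x y c = putStrLn (show (x, y) ++ " -> " ++ show c)

main :: IO ()
main = do
  let coords = [(x, y) | x <- [0 .. 1], y <- [0 .. 1]]
  -- Phase 1: compute every color in parallel worker threads (pure work).
  vars <- mapM (\(x, y) -> do
            v <- newEmptyMVar
            _ <- forkIO (let c = calcColor x y in c `seq` putMVar v (x, y, c))
            return v) coords
  -- Phase 2: collect in deterministic order and draw sequentially
  -- on the main thread, keeping all GL-style calls in one thread.
  results <- mapM takeMVar vars
  mapM_ (\(x, y, c) -> drawPixelIO x y c) results
```

Keeping phase 2 on a single thread is what avoids the crashes discussed later in the thread: OpenGL contexts are generally bound to one thread, so only the pure computation is parallelized.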
Re: [Haskell-cafe] Parallel executing of actions
Hi!

On 4/17/07, Sebastian Sylvan [EMAIL PROTECTED] wrote:

I would suggest chunking up your work (assuming that calculating your colour is indeed a significant amount of work) in tiles or something, then fork off a thread for each of them, sticking the final colours in a Chan. Then you have another thread just pick tiles off the Chan and copy the results to the frame buffer.

Is there some completely different and maybe better way of rendering the image? I noticed that I actually have no real use for OpenGL (I just draw pixels on a 2D plane). So maybe there is some other portable way of rendering a 2D image which would be easier to parallelize? Perhaps precomputing the image in a completely functional way and then drawing the whole image at once to the screen buffer (right now I call the OpenGL draw-pixel function for every pixel I want to draw, which is probably not the best way).

Mitar
Re: [Haskell-cafe] Parallel executing of actions
Hi!

On 4/16/07, Bertram Felgenhauer [EMAIL PROTECTED] wrote:

Since all the threads block on a single MVar, how do they run in parallel? The idea is that before the threads block on the MVar, they run their action x to completion.

The rendering crashes. Will I have to precompute the values in threads somehow and then draw them sequentially? Any suggestion how to do that?

Mitar
[Haskell-cafe] Parallel executing of actions
Hi!

Is there a parallel version of mapM_? Like there is for map (parMap)? I would like to execute actions in parallel, as the computation of the data necessary for the actions is quite computationally heavy. But it does not matter in which order those actions are executed. (I am rendering pixels with OpenGL, and it does not matter in which order I draw them, but it does matter that for one pixel it takes some time to calculate its color.)

The example would be:

main :: IO ()
main = do
  -- the order of printed characters is not important
  -- instead of putStrLn there will be a computationally big function
  -- so it would be great if those computations would be done in parallel
  -- and the results printed out as they come
  mapM_ putStrLn ["a", "b", "c", "d"]

Is this possible? Without unsafe functions? And without changing the semantics of the program.

Mitar
Re: [Haskell-cafe] Parallel executing of actions
Hi!

On 4/15/07, Spencer Janssen [EMAIL PROTECTED] wrote:

This version will fork a new thread for each action:

\begin{code}
import Control.Concurrent
import Control.Monad

parSequence_ xs = do
    m <- newEmptyMVar
    mapM_ (\x -> forkIO (x >> putMVar m ())) xs
    replicateM_ (length xs) (takeMVar m)

parMapM_ f xs = parSequence_ $ map f xs
\end{code}

The OpenGL bindings successfully crash. The functional calculations in f should be done in parallel, but those few OpenGL actions should still be done sequentially. I am attaching the code in question. It is a simple voxel raycasting engine. (Any suggestions on other memory/performance improvements are more than welcome.)

Mitar