measuring part of program
Dear GHC users, I am puzzled by the following effect in ghc-4.04 and 4.06. A certain large program contains

  f x = let XX...
            p = ... YY...
        in  r p

where p is computed comparatively cheaply from x, while r is computed expensively via x and p. Now, without profiling, I want to know what part of the time cost goes into evaluating p. To find out, I replace `r p' with `error $ show p'. And it takes 1000 times longer than computing p in a separate program that omits the `YY...' part. The problem is that, this way, it is too hard to measure the parts of a computation; one has to rewrite the relevant part of the program as a completely new program. What do you think of this? -- Sergey Mechveliani [EMAIL PROTECTED]
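One way to time just the evaluation of a subexpression like p, without profiling and without splitting it into a separate program, is to force it between two readings of the CPU clock. This is only a sketch: `expensiveP` is a hypothetical stand-in for the p of the message, and `evaluate` forces only to weak head normal form, so a deeper structure may need further forcing.

```haskell
import System.CPUTime (getCPUTime)
import Control.Exception (evaluate)

-- Hypothetical stand-in for the expensive subcomputation `p'.
expensiveP :: Integer
expensiveP = sum [1 .. 1000000]

main :: IO ()
main = do
  t0 <- getCPUTime                -- CPU time in picoseconds
  _  <- evaluate expensiveP       -- force p to WHNF between the readings
  t1 <- getCPUTime
  putStrLn $ "evaluating p took "
          ++ show (fromIntegral (t1 - t0) / 1e12 :: Double)
          ++ " seconds"
```

Because `getCPUTime` measures the whole process, this still counts any work that evaluating p demands from the rest of the program, which is arguably exactly the cost being asked about.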
Re: putMVar on full MVar
Claus Reinke wrote:

> Right. I am relieved to read that your application source is not full of calls to putMVar, but rather uses safe abstractions built on top of MVars (it might be interesting to isolate them out of your current application and migrate them to a Concurrent Haskell library?).

Not a bad idea. Public implementations of channels are already available (I believe there is already code provided with GHC). I think Einar Karlsen's composable events are an even neater idea, and I should like to publicise and extend them some more later on, maybe by writing a paper about them.

I don't understand the "tryButDoNotSuspend" operation. What does it mean that it "would lead to an immediate suspension of the current thread"? It would incidentally be fairly trivial to implement tryForAWhileButDoNotBlockForever using Einar Karlsen's events. (Except, of course, that delays are always going to be approximate; GHC might not be able to run the thread bang on time because it could be in the middle of some C function.)

I think the rest of Claus's discussion is a very good illustration of what I said before: that you can think of other primitives that could arguably be provided for MVars, and perhaps we should think the whole issue through carefully. Another thing which MVars can't do very well at the moment is multiple access. Perhaps there should be a function

  takeMVarList :: [MVar a] -> IO a

which waits for the first MVar to come along. (Again, you could easily implement such a thing using MVars defined in terms of Einar's events.)

Well, I have no quarrel with orthogonal extensions to MVars PROVIDED THAT THEY DO NOT GET SLOWER. So what do the wizards of GHC say can be done for free?
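The proposed takeMVarList can be sketched in plain Concurrent Haskell by racing one thread per MVar against a shared result slot. This is only a sketch, not the suggested events-based implementation: the threads that lose the race still consume their MVars, which a real implementation would have to avoid.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- Wait until any of the MVars is full and return its value.
-- Sketch only: losing threads still take from their MVars,
-- so values may be consumed that the caller never sees.
takeMVarList :: [MVar a] -> IO a
takeMVarList ms = do
  result <- newEmptyMVar
  mapM_ (\m -> forkIO (takeMVar m >>= putMVar result)) ms
  takeMVar result

main :: IO ()
main = do
  a <- newEmptyMVar                     -- stays empty forever
  b <- newMVar "b was already full"
  v <- takeMVarList [a, b]
  putStrLn v                            -- prints "b was already full"
```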
MVar behavior...
I've been following the MVar debate with some interest. As I've mentioned in other mail, M-structures were an Id thing, and so we here at MIT are in some sense responsible for their peculiar non-uniformity. It should be noted that M-structures were defined by analogy to I-structures, which are write-once, read-many. Thus, putMVar signals an error on a full location primarily because the I-structure write did as well.

This non-uniformity stems from the fact that we can easily cook up an MVar implementation which stores queued reads in the same storage which it uses for the data (which is what I-structures do). It's an implementation hack. I have no idea whether this is true in GHC (I don't have the source handy right now), but I suspect it isn't quite.

Really, it does seem to me that MVar behavior _ought_ to be symmetric. In this case, there are two options:

* M-structure operations block. This shouldn't be a problem in practice, as M-structures mostly get used in a producer-consumer fashion, where this is what we want, or they get incrementally updated, in which case we should be able to prove that suspension on write is impossible (though the necessary compiler logic may not be worth the bother).

* M-structure operations don't block. This breaks one of the important early M-structure ideas: that the presence state of the M-structure was invisible and enforced by the underlying synchronization. On the other hand, when we're using M-structures for e.g. caching, we end up wasting a lot of effort coding up presence state explicitly in another MVar.

Alas, my instinct is that Claus Reinke's "tryButDoNotSuspend" probably isn't the right way to go here. Effectively what it does is "change the mode" of the IO monad, altering the behavior of M-structures. It seems to me that blocking and non-blocking operations really are fundamentally different and have fundamentally different underlying implementations, and there should simply be a separate set of operations for each.
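The producer-consumer pattern mentioned in the first option, where blocking on both take and put is exactly what is wanted, looks roughly like this (under the symmetric, blocking putMVar behaviour argued for here, which is what modern GHC provides):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
  slot <- newEmptyMVar
  done <- newEmptyMVar
  -- Producer: putMVar blocks while the slot is full,
  -- so the producer cannot overrun the consumer.
  _ <- forkIO $ mapM_ (putMVar slot) [1 .. 5 :: Int]
  -- Consumer: takeMVar blocks while the slot is empty.
  _ <- forkIO $ do
         xs <- mapM (const (takeMVar slot)) [1 .. 5 :: Int]
         putMVar done (sum xs)
  total <- takeMVar done
  print total
```

Neither side ever has to inspect or maintain presence state explicitly; the synchronization is entirely in the MVar, which is the early M-structure idea the second option would give up.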
I don't see an obvious way to code up the suspensive operations using the non-suspensive ones, since you want to associate the suspension state with the MVar (and do so atomically, in case someone else is trying to change the state). My conclusions:

* The suspensive behaviors are necessary. At least one, and preferably both, of takeMVar/putMVar must be suspensive.

* The non-suspensive behaviors are nice, but should be captured using separate operations.

* Cooking them up using multiple MVars will work, but is arguably only necessary because the implementation isn't flexible enough.

-Jan-Willem Maessen
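A "separate set of operations for each" is in fact what GHC's MVar library ended up with: alongside the blocking takeMVar/putMVar there are non-suspensive variants, tryTakeMVar and tryPutMVar, which return immediately and report success in the result type. A small illustration:

```haskell
import Control.Concurrent.MVar

main :: IO ()
main = do
  m  <- newEmptyMVar
  r1 <- tryTakeMVar m          -- empty: returns Nothing, never blocks
  print (r1 :: Maybe Int)
  ok1 <- tryPutMVar m 42       -- empty: succeeds
  print ok1
  ok2 <- tryPutMVar m 7        -- full: reports False, neither blocks nor errors
  print ok2
  r2 <- tryTakeMVar m          -- full: returns the stored value
  print r2
```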
Re: putMVar on full MVar
a. Ease of implementation should never be the first argument ;-) In the case of basic functionality like the synchronisation provided by MVars, ease of implementation and efficiency often go hand in hand. For such a basic structure, efficiency matters a lot.

The same points have been made for languages that omit array bounds checks, or allow side effects everywhere, or have limited Ints as defaults instead of variable-length Integers. In line with other parts of ghc (e.g., unsafePerformIO), I would like to get the safe operations by default, with access to the unsafe, more efficient operations if I am willing to take on the proof obligations. In other words, if I can show that my program logic guarantees safe execution (or if I can change my program to enable such a proof), then I want to be able to use the more efficient putMVar. But the default should not assume that such proofs exist, or introduce race conditions I can do without.

Claus

PS. There is some ambivalence in Haskell about out-of-range use of partial functions: some cause static type errors, some abort at runtime, some throw exceptions, some return Nothing in Maybe.
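The ambivalence in the PS can be seen in the standard libraries themselves: the same "out of range" situation is reported in different ways depending on which function you reach for. For example, listToMaybe makes the failure visible in the result type, while head simply aborts at runtime on an empty list:

```haskell
import Data.Maybe (listToMaybe)

main :: IO ()
main = do
  -- Total: out-of-range input yields Nothing, the caller must inspect it.
  print (listToMaybe ([] :: [Int]))
  -- Partial: fine here, but `head []' would abort the program at runtime.
  print (head [1, 2, 3 :: Int])
```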
Re: putMVar on full MVar
> Where I come from, that's called a race condition :-) We shouldn't change the semantics because it's possible to write broken programs using the existing semantics.

Where I come from, race conditions are not welcome in programs, only on race courses :-) The point is that the semantics causes programs to be broken *by default*, and that they have to be fixed (at least by handling the exceptions, better by proving some statements about the dynamic behaviour of a concurrent, dynamically evolving, asynchronous, higher-order, ... Concurrent Haskell program). That it is quite possible to write programs that are not broken in spite of the existing semantics doesn't mean that the semantics shouldn't be changed.

> It turns out that in a large number of cases, the existing MVar semantics is sufficient (and more efficient than using one of the abstractions).

Interesting. But also more reason to think about the defaults (if they are not used from within safe abstractions). Btw, does a distributed implementation of Concurrent Haskell exist already?

Cheers, Claus