RE: Strictness annotations on type parameters
It's not at all easy in general. Suppose we have

    f :: [!a] -> [a]
    f xs = xs

Does f run down the list, behind the scenes, evaluating each element? That could be expensive. Would it make any difference if we wrote f :: [!a] -> [a], or f :: [a] -> [!a]? Or does the type mean that f *requires* a list with *already-evaluated* elements?

It's hard to avoid the feeling that one ought to be able to say something useful about strictness in the types, but it's swampy territory, and hard to get right.

Meanwhile, you can easily define your own writeIORef:

    strictWriteIORef r x = x `seq` writeIORef r x

No need to sprinkle seq's everywhere!

Simon

| -----Original Message-----
| From: [EMAIL PROTECTED] [mailto:glasgow-haskell-users-[EMAIL PROTECTED]] On Behalf Of Mario Blazevic
| Sent: 06 December 2005 21:06
| To: glasgow-haskell-users@haskell.org
| Subject: Strictness annotations on type parameters
|
| I spent several days last week trying to track down the cause of a 100%
| slowdown after some trivial changes I made. The profiler didn't show
| any slowdown, presumably because it was dependent on optimizations, so
| I had to revert to a tweak-run-measure cycle.
|
| It turned out the slowdown was caused by some unevaluated thunks
| that were kept around in long-lived IORefs. This is not the first time
| I was bitten by too much laziness, either. What made things worse this
| time is that there is no way to declare the following:
|
|     data Label = LabelRef {labelId   :: !Unique,
|                            reference :: (IORef !LabelState), -- illegal
|                            origin    :: Maybe !Label}        -- illegal
|
| No container data type can be annotated as strict. That means I have
| to pepper my code with explicit evaluations to HNF before every
| writeIORef (reference label):
|
|     newState `seq` writeIORef (reference label) newState
|
| What is the reason for this restriction on where strictness
| annotations can appear? Is it purely an implementation problem, or is
| there a reason emanating from Haskell's design? If the former, how
| hard would it be to fix?
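A minimal, self-contained sketch of the strictWriteIORef workaround in action (the counter and loop are illustrative, not from the thread):

    import Data.IORef

    -- Evaluate the new value to WHNF before storing it, so the IORef
    -- never accumulates a chain of unevaluated thunks.
    strictWriteIORef :: IORef a -> a -> IO ()
    strictWriteIORef r x = x `seq` writeIORef r x

    main :: IO ()
    main = do
      r <- newIORef (0 :: Int)
      -- With plain writeIORef this loop would store the growing thunk
      -- (...((0+1)+1)...); strictWriteIORef forces each sum as it goes.
      let loop 0 = return ()
          loop n = do v <- readIORef r
                      strictWriteIORef r (v + 1)
                      loop (n - 1 :: Int)
      loop 1000000
      readIORef r >>= print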
Re: Strictness annotations on type parameters
> It's hard to avoid the feeling that one ought to be able to say
> something useful about strictness in the types but it's swampy
> territory, and hard to get right.

Fortunately, it's easy to dry up the `swampy territory'. The type `Contract' below models strictness requirements. The function `assert' implements them (`assert c' is a so-called projection). Feel free to experiment with different designs ...

    {-# OPTIONS -fglasgow-exts #-}

    infixr :->

    data Contract :: * -> * where
      Id    :: Contract a
      (:->) :: Contract a -> Contract b -> Contract (a -> b)
      (:=>) :: Contract a -> Contract b -> Contract (a -> b)
      List  :: Contract a -> Contract [a]
      SList :: Contract a -> Contract [a]

    assert :: Contract a -> (a -> a)
    assert Id          = id
    assert (c1 :-> c2) = \ f -> assert c2 . f . assert c1
    assert (c1 :=> c2) = \ f x -> x `seq` (assert c2 . f . assert c1) x
    assert (List c)    = map (assert c)
    assert (SList c)   = \ xs -> eval xs `seq` map (assert c) xs

    eval []       = []
    eval (a : as) = a `seq` eval as

Some test data:

    evens []             = []
    evens [a]            = [a]
    evens (a1 : a2 : as) = evens as

    assert (Id :-> Id) (const 1) undefined
    assert (Id :=> Id) (const 1) undefined
    assert (Id :=> Id) (sum . evens) [1, undefined]
    assert (SList Id :=> Id) (sum . evens) [1, undefined]

Cheers, Ralf

PS: Just in case you wonder, the code has been adapted from a recent paper on contracts (see my homepage).
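If you want to run Ralf's tests as a program rather than in GHCi, a small driver like the following works (the driver is an addition, not Ralf's; the commented-out calls are the ones that deliberately hit undefined):

    main :: IO ()
    main = do
      print (assert (Id :-> Id) (const (1 :: Int)) undefined)         -- 1
      print (assert (Id :=> Id) (sum . evens) [1, undefined :: Int])  -- 0
      -- These two force undefined and abort with an exception:
      -- print (assert (Id :=> Id) (const (1 :: Int)) undefined)
      -- print (assert (SList Id :=> Id) (sum . evens) [1, undefined :: Int])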
RE: New bug tracker: Trac
On 06 December 2005 19:52, Gour wrote:

> Simon Marlow ([EMAIL PROTECTED]) wrote:
>> The wiki is currently closed because I don't want to have to deal
>> with spam. Use HaWiki for now.
>
> I got responses from people on #trac to disable entries from
> non-registered users as a spam-prevention mechanism.

Unfortunately there isn't a way for users to register themselves, we have to create them manually using htpasswd. I'm sure someone must have implemented an online user-registration system for Trac though?

Cheers,
Simon
RE: Happiness! Re: New bug tracker: Trac
On 06 December 2005 17:45, Malcolm Wallace wrote:

> Just one minor nit: the front page URL tells me a lot about Trac, but
> nothing at all about ghc! Eventually, I worked out that I needed to
> click on View tickets, but it took a while to realise this.

The top page of the wiki is now a bit more informative. This Trac thing is great!

Cheers,
Simon
RE: ghc --make and make
On 04 December 2005 00:51, John Meacham wrote:

> I was thinking it would be nice if ghc --make could touch the output
> file with the timestamp of the most recently modified source file. As
> it is, if you edit a file in between when ghc --make starts and it
> finishes (a substantial amount of time in some cases) then 'make' will
> not realize the dependencies have changed.

Hmm, I'm not sure about this. I see the problem: you have a Makefile rule that looks like this:

    myprog : Main.hs Foo.hs Bar.hs ...
            $(GHC) -o $@ --make Main.hs

and while building myprog, you edit Foo.hs after it has been compiled, with the result that myprog has a later modification date than Foo.hs so won't be recompiled.

However, touching myprog to match the date of Foo.hs when it was compiled doesn't seem right - the date on the executable would be earlier than the object files, which might not cause problems with the Makefile rule above, but would likely lead to problems with different Makefile setups. Perhaps the date on an object file should be set to the same as the source file? And the date on the .hi file?

The easiest thing to do is not to try to use make dependencies with --make, just run ghc --make every time. It doesn't link if linking isn't required any more.

Cheers,
Simon
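A hedged Makefile rendering of that advice (target and file names are illustrative; the recipe line must begin with a tab): mark the target phony and let ghc --make do all the dependency checking itself.

    # Always invoke ghc --make; it performs its own dependency analysis
    # and skips compiling and linking when nothing has changed.
    .PHONY: myprog
    myprog:
            ghc --make -o myprog Main.hs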
Re: New bug tracker: Trac
Simon Marlow ([EMAIL PROTECTED]) wrote:

Hi Simon!

> Unfortunately there isn't a way for users to register themselves, we
> have to create them manually using htpasswd. I'm sure someone must
> have implemented an online user-registration system for Trac though?

This is what I got on the #trac channel from one of the devs: they created a 'guest/guest' login/password, and apparently that's enough for discouraging spam attempts. It refers to:

http://initd.org/tracker/pysqlite/wiki

Hope it helps.

Sincerely,
Gour

--
Registered Linux User | #278493
GPG Public Key | 8C44EDCD
Optimizations for mutable structures?
Talk of uniqueness and so forth on the various Haskell mailing lists causes me to wonder: with so much imperative code being written in Haskell these days, to what extent is / should GHC perform standard imperative optimizations? A few things come to mind:

- Fetch elimination for imperative reads:

      writeIORef r e >> acts >> readIORef r
        === writeIORef r e >> acts >> return e

      readIORef r >>= \e -> acts >> readIORef r
        === readIORef r >>= \e -> acts >> return e

  And that sort of thing, generalized for other imperative monadic types... My feeling is this doesn't come up much in code as written on the page, but might well be relevant after inlining.

- Some way to turn the following idiom into memcpy (for any array type):

      do a <- newArray
         writeArray a 0 e0
         writeArray a 1 e1
         writeArray a 2 e2
         ...etc...

What say others? Is there a need yet? (I don't honestly know the answer!)

-Jan-Willem Maessen
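For concreteness, here is a self-contained version of the array-initialization idiom (bounds and element values are added for runnability; the post's newArray elides them):

    import Data.Array.IO

    -- The kind of write sequence a compiler could, in principle,
    -- recognise and turn into a single block initialisation / memcpy.
    mkArray :: IO (IOUArray Int Int)
    mkArray = do
      a <- newArray (0, 2) 0
      writeArray a 0 10
      writeArray a 1 20
      writeArray a 2 30
      return a

    main :: IO ()
    main = mkArray >>= getElems >>= print   -- [10,20,30]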
GCC in the Windows distribution of GHC
The Windows distribution of GHC includes GCC. Some header files (such as iostream, algorithm, and so forth), however, are missing (at least I was not able to find them). I could, of course, download the correct version of GCC (this must be version 3.2.3 for GHC 6.4), and put the header files where GCC can find them. I only wonder if this is the right thing to do. Were the header files omitted just to make the distribution smaller, or for some other reason?

Cheers,
Cyril
Re: Optimizations for mutable structures?
Jan-Willem Maessen [EMAIL PROTECTED] writes:

> - Fetch elimination for imperative reads:
>
>       writeIORef r e >> acts >> readIORef r
>         === writeIORef r e >> acts >> return e

This transformation is valid only on single-threaded systems. If there is any possibility of an IORef being shared across threads, you are out of luck.

Regards,
Malcolm
RE: New bug tracker: Trac
On 07 December 2005 13:17, Gour wrote:

> Simon Marlow ([EMAIL PROTECTED]) wrote:
>
> Hi Simon!
>
>> Unfortunately there isn't a way for users to register themselves, we
>> have to create them manually using htpasswd. I'm sure someone must
>> have implemented an online user-registration system for Trac though?
>
> This is what I got on the #trac channel from one of the devs: they
> created a 'guest/guest' login/password, and apparently that's enough
> for discouraging spam attempts. It refers to:
>
> http://initd.org/tracker/pysqlite/wiki

Ah yes, good idea, I'll do that.

Cheers,
Simon
RE: GCC in the Windows distribution of GHC
On 07 December 2005 13:39, [EMAIL PROTECTED] wrote:

> The Windows distribution of GHC includes GCC. Some header files (such
> as iostream, algorithm, and so forth), however, are missing (at least
> I was not able to find them). I could, of course, download the correct
> version of GCC (this must be version 3.2.3 for GHC 6.4), and put the
> header files where GCC can find them. I only wonder if this is the
> right thing to do. Were the header files omitted just to make the
> distribution smaller, or for some other reason?

The GCC bundled with GHC is intended mainly for compiling Haskell code via-C, and we didn't intend it to be invoked independently. Hence, it just contains the bits we need.

If you want to compile C files using GHC (a reasonable thing to do), then the right thing to do is install a full GCC somewhere (eg. the mingw gcc), and tell GHC to use it, like this:

    $ ghc -pgmc c:/mingw/bin/gcc -c foo.c

Cheers,
Simon
RE: Optimizations for mutable structures?
On 07 December 2005 13:38, Malcolm Wallace wrote:

> Jan-Willem Maessen [EMAIL PROTECTED] writes:
>
>> - Fetch elimination for imperative reads:
>>
>>       writeIORef r e >> acts >> readIORef r
>>         === writeIORef r e >> acts >> return e
>
> This transformation is valid only on single-threaded systems. If there
> is any possibility of an IORef being shared across threads, you are
> out of luck.

(assuming 'acts' doesn't modify 'r').

No, Jan's transformation is correct even in a multithreaded setting. It might eliminate some possible outcomes from a non-deterministic program, but that's ok. There's no requirement that all interleavings according to the semantics have to be implemented.

This is a hard property to state precisely, indeed we gave up trying to in the concurrency/FFI paper: http://www.haskell.org/~simonmar/papers/conc-ffi.pdf, see Section 6.1.

Cheers,
Simon
Re[2]: Strictness annotations on type parameters
Hello Ralf,

Wednesday, December 07, 2005, 12:43:33 PM, you wrote:

>> It's hard to avoid the feeling that one ought to be able to say
>> something useful about strictness in the types but it's swampy
>> territory, and hard to get right.

RH> Fortunately, it's easy to dry up the `swampy territory'. The type
RH> `Contract' below models strictness requirements. The function
RH> `assert' implements them (`assert c' is a so-called projection).
RH> Feel free to experiment with different designs ...

I don't understand anything in what you wrote :) but a week or two ago there was a brief discussion on the libraries mailing list about the need for strict and lazy variants of libraries, for example for monads and data structures (Maps, Trees and so on). Can you say something about automatic generation of a strict library from a lazy one, or about generating them both from some template?

--
Best regards,
Bulat                          mailto:[EMAIL PROTECTED]
Re: Optimizations for mutable structures?
|> - Fetch elimination for imperative reads:
|>       writeIORef r e >> acts >> readIORef r
|>         === writeIORef r e >> acts >> return e
|>
|> This transformation is valid only on single-threaded systems.
|> If there is any possibility of an IORef being shared across threads,
|> you are out of luck.
|
| (assuming 'acts' doesn't modify 'r').

this remark is the problem.

| No, Jan's transformation is correct even in a multithreaded setting. It
| might eliminate some possible outcomes from a non-deterministic program,
| but that's ok. There's no requirement that all interleavings according
| to the semantics have to be implemented.

.. not implementing traces promised as possible by the semantics is not a good idea, imho, as programmers have some control over the set of traces themselves.

in this case, acts could implement the synchronisation with another thread working on r, i.e., even if acts does not modify r itself, it might reliably cause another thread to modify r before acts can end. if such synchronisations depend on traces you have eliminated, the code would just block (without apparent reason), whereas in this case, r will have a value after acts that shouldn't have been possible, due to the explicit synchronisation code (again, happy debugging) ..

of course, you could try to infer non-sequential interference with r resulting from acts, but that is what Malcolm pointed out - you have to take the full semantics into account when doing such transformations (btw, java originally made a mess of this - hope that has been fixed by now).

cheers,
claus
Re: Optimizations for mutable structures?
On Wed, Dec 07, 2005 at 02:15:24PM -0000, Simon Marlow wrote:

> On 07 December 2005 13:38, Malcolm Wallace wrote:
>> Jan-Willem Maessen [EMAIL PROTECTED] writes:
>>> - Fetch elimination for imperative reads:
>>>       writeIORef r e >> acts >> readIORef r
>>>         === writeIORef r e >> acts >> return e
>>
>> This transformation is valid only on single-threaded systems. If
>> there is any possibility of an IORef being shared across threads, you
>> are out of luck.
>
> (assuming 'acts' doesn't modify 'r').
>
> No, Jan's transformation is correct even in a multithreaded setting.
> It might eliminate some possible outcomes from a non-deterministic
> program, but that's ok. There's no requirement that all interleavings
> according to the semantics have to be implemented. This is a hard
> property to state precisely, indeed we gave up trying to in the
> concurrency/FFI paper:
> http://www.haskell.org/~simonmar/papers/conc-ffi.pdf, see Section 6.1.

I don't think it's true for this program:

    import Data.IORef
    import Control.Concurrent

    main = do m1 <- newEmptyMVar
              m2 <- newEmptyMVar
              let e = 6
                  not_e = 7
                  acts = putMVar m1 () >> takeMVar m2
              r <- newIORef 5
              forkIO $ do takeMVar m1
                          writeIORef r not_e
                          putMVar m2 ()
              writeIORef r e
              acts
              readIORef r >>= print

Thanks
Ian
RE: Optimizations for mutable structures?
On 07 December 2005 15:21, Claus Reinke wrote:

>> (assuming 'acts' doesn't modify 'r').
>
> this remark is the problem.
>
>> No, Jan's transformation is correct even in a multithreaded setting.
>> It might eliminate some possible outcomes from a non-deterministic
>> program, but that's ok. There's no requirement that all interleavings
>> according to the semantics have to be implemented.
>
> .. not implementing traces promised as possible by the semantics is
> not a good idea, imho, as programmers have some control over the set
> of traces themselves.

It's unreasonable to expect an implementation to provide *all* transitions specified by the semantics. How could you tell, for one thing? For example, what if the compiler doesn't allow a context switch between two adjacent operations:

    writeIORef r e
    x <- readIORef r

This is a good example, in fact, because GHC will quite possibly compile this code into a single basic block that doesn't allow a context switch between the two statements. However, the semantics certainly allows a context switch between these two operations. Is GHC wrong? No way! So how do you specify which transitions an implementation must provide? Perhaps there's a good formulation, but I don't know what it is.

> in this case, acts could implement the synchronisation with another
> thread working on r, i.e., even if acts does not modify r itself, it
> might reliably cause another thread to modify r before acts can end.
> if such synchronisations depend on traces you have eliminated, the
> code would just block (without apparent reason), whereas in this case,
> r will have a value after acts that shouldn't have been possible, due
> to the explicit synchronisation code (again, happy debugging) ..

I should have said that if 'acts' blocks, then the transformation is invalid. When I say "acts doesn't modify r", I mean to include all ways of modifying r, including synchronising with another thread, or calling an unknown function.

> of course, you could try to infer non-sequential interference with r
> resulting from acts, but that is what Malcolm pointed out - you have
> to take the full semantics into account when doing such
> transformations (btw, java originally made a mess of this - hope that
> has been fixed by now).

I don't think so. Malcolm asserted that the transformation was invalid in a multi-threaded implementation; I disagree - it's just as valid in a multi-threaded implementation as a single-threaded one. You don't have to preserve non-deterministic interactions with other threads, because they're non-deterministic!

Cheers,
Simon
Re: calling system gives exception, waitForProcess
I have a fix for it now, thanks to dons. I just wrap system in Control.Exception.catch and it doesn't crash. dons also said that this has been fixed in the head:

    dons: waitForProcess started raising exceptions if the process had
    already terminated (I think) in 6.4.1 and the head. It's been fixed
    in the head.

Thanks
Neil
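A minimal sketch of the workaround Neil describes, assuming the era's System.Cmd.system (the wrapper name and fallback exit code are illustrative; with a modern base the handler's exception type must be pinned, as below):

    import System.Cmd (system)
    import System.Exit (ExitCode (..))
    import qualified Control.Exception as E

    -- Run a shell command, turning an exception raised by
    -- waitForProcess (or anywhere else inside system) into a
    -- failure exit code instead of a crash.
    safeSystem :: String -> IO ExitCode
    safeSystem cmd =
      system cmd `E.catch` \e ->
        const (return (ExitFailure 1)) (e :: E.SomeException)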
RE: Optimizations for mutable structures?
On 07 December 2005 15:14, Ian Lynagh wrote:

> On Wed, Dec 07, 2005 at 02:15:24PM -0000, Simon Marlow wrote:
>> No, Jan's transformation is correct even in a multithreaded setting.
>> It might eliminate some possible outcomes from a non-deterministic
>> program, but that's ok. [...]
>
> I don't think it's true for this program:
>
>     [Ian's IORef/MVar example, quoted in full above]

Sorry for not being clear enough, see my other message. I agree the transformation is not valid in this case. putMVar and takeMVar count as modifying the IORef.

However... remove all the putMVars and takeMVars. Do you think it's valid now? Even though after the transformation the program might deliver a different answer? (I claim yes).

Now, take the original program, but change the creation of m2 to newMVar (), i.e. m2 starts off full. Is the transformation valid now? Well maybe, because in some interleavings acts does not block, and we can prove that at compilation time. Interesting - I think you could justify doing the transformation in this case too, but I doubt any compiler would go that far.

Cheers,
Simon
Re: Optimizations for mutable structures?
Simon Marlow [EMAIL PROTECTED] writes:

> I should have said that if 'acts' blocks, then the transformation is
> invalid.

Well, that is exactly what I was assuming when I said that the transformation is invalid. In the general case, for some arbitrary actions between the write and the read (excluding another write of course), there is no guarantee that the IORef remains unmodified. If you want to restrict the intermediate actions to be non-blocking, such that another thread cannot run, then that is an extra (and significant) proof obligation.

And AFAICS your non-blocking argument only applies to co-operative scheduling. If pre-emption is permitted (e.g. interrupt-handling), then all bets are off, because an arbitrary thread could write to the IORef at any moment.

> I don't think so. Malcolm asserted that the transformation was invalid
> in a multi-threaded implementation; I disagree - it's just as valid in
> a multi-threaded implementation as a single-threaded one.

I think what I said was correct in general. You need quite a lot of side-conditions to assert the opposite.

Regards,
Malcolm
Re: Optimizations for mutable structures?
The following paper seems relevant to this thread. Although it's written in the context of C and C++, it's relevant to any language that combines pre-emptive threads and imperative features.

http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf

Tony.
--
f.a.n.finch [EMAIL PROTECTED] http://dotat.at/
BISCAY: WEST 5 OR 6 BECOMING VARIABLE 3 OR 4. SHOWERS AT FIRST. MODERATE OR GOOD.
RE: Optimizations for mutable structures?
On 07 December 2005 16:38, Malcolm Wallace wrote:

> Simon Marlow [EMAIL PROTECTED] writes:
>
>> I should have said that if 'acts' blocks, then the transformation is
>> invalid.
>
> Well, that is exactly what I was assuming when I said that the
> transformation is invalid. In the general case, for some arbitrary
> actions between the write and the read (excluding another write of
> course), there is no guarantee that the IORef remains unmodified.

This is an analysis that's performed all the time in C compilers, it's quite straightforward to do a good approximation. One simple algorithm is: a store can be forwarded to a matching read as long as there are no intervening writes that may alias, or function calls. C does this, and C has threads, so what's the difference?

> If you want to restrict the intermediate actions to be non-blocking,
> such that another thread cannot run, then that is an extra (and
> significant) proof obligation. And AFAICS your non-blocking argument
> only applies to co-operative scheduling. If pre-emption is permitted
> (e.g. interrupt-handling), then all bets are off, because an arbitrary
> thread could write to the IORef at any moment.

No, that is exactly what I disagree with. Faced with non-deterministic semantics, the compiler does *not* have to preserve all possible outcomes. In other words, it does not have to assume that an IORef can be modified between any two arbitrary instructions. If we had to assume this, then indeed all bets would be off, and C compilers would be very much more restricted in what optimisations they could perform.

I don't know how I can explain this a different way - it seems pretty obvious to me, but I'm probably not explaining it well :-/ You cut the example from my message - did you agree with the conclusion? Or the conclusion I made from Ian's example?

Cheers,
Simon
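A toy rendering of the forwarding rule Simon states (the Op type and the all-or-nothing treatment of calls are simplifications for illustration, not GHC's or any C compiler's actual analysis):

    type Loc = String

    -- An abstract instruction stream between a store to a location
    -- and a later load of the same location.
    data Op = Store Loc | Load Loc | Call   -- Call: unknown function

    -- Forward a store to a matching read only if nothing in between
    -- could have written the location: no possibly-aliasing store,
    -- and no call.
    canForward :: Loc -> [Op] -> Bool
    canForward loc = all safe
      where
        safe (Store l) = not (mayAlias l loc)
        safe (Load _)  = True
        safe Call      = False

    -- Stand-in for real alias analysis: only identical names alias.
    mayAlias :: Loc -> Loc -> Bool
    mayAlias = (==)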
Re: Optimizations for mutable structures?
On Dec 7, 2005, at 12:05 PM, Simon Marlow wrote:

> This is an analysis that's performed all the time in C compilers, it's
> quite straightforward to do a good approximation. One simple algorithm
> is: a store can be forwarded to a matching read as long as there are
> no intervening writes that may alias, or function calls. C does this,
> and C has threads, so what's the difference?

I would personally be very uncomfortable justifying a semantic transformation based on common practice in C compilers. What exactly are the semantics of C programs, and why do we believe that C compilers are correct? I'd much rather see some argument in terms of an appropriate definition of observational equivalence.

[snip]

Rob Dockins
Re: Optimizations for mutable structures?
On Wed, 7 Dec 2005, Robert Dockins wrote:

> What exactly are the semantics of C programs, and why do we believe
> that C compilers are correct?

With regards to threading, the semantics are undefined and the compilers are subtly broken :-)

Tony.
--
f.a.n.finch [EMAIL PROTECTED] http://dotat.at/
BISCAY: WEST 5 OR 6 BECOMING VARIABLE 3 OR 4. SHOWERS AT FIRST. MODERATE OR GOOD.
Re: Strictness annotations on type parameters
> It's not at all easy in general. Suppose we have
>
>     f :: [!a] -> [a]
>     f xs = xs
>
> Does f run down the list, behind the scenes, evaluating each element?

No, because what [!a] means is List !a = ([] | !a : List !a). It does not mean ([] | !(!a : List !a)). Since the cons thunks are not forced to HNF, the argument type [!a] should mean a lazy list with strict elements. So you would pay the extra expense in a function like

    length :: [!a] -> Int
    length []       = 0
    length (_ : xs) = succ (length xs)

> That could be expensive. Would it make any difference if we wrote
> f :: [!a] -> [a], or f :: [a] -> [!a]? Or does the type mean that f
> *requires* a list with *already-evaluated* elements?

I don't see it as expensive: the function would do exactly what it's supposed to do, given its declaration. If the declared and inferred types don't match on strictness, the compiler should emit a warning that the declared function type is too strict.

As for f :: [a] -> [!a], I'm not sure if this kind of declaration should be treated as a guarantee or an obligation. That is, I'm not sure if the compiler should report an error if the function definition does not actually return a list of elements in HNF, or if it should insert code to force each element to HNF.

But I'm mostly concerned about data declarations. See, what bugs me is that it's allowed to declare

    data Maybe' a = Nothing' | Just' !a
    data List' a  = Nil | Cons' !a (List' a)

and then use those to declare

    type MaybeList1 a = [Maybe' a]
    type MaybeList2 a = List' (Maybe' a)

But now the containers are mutually incompatible, and incompatible with [Maybe a]. If strictness annotations were allowed not only at the top level, then [Maybe a], [Maybe !a], [!(Maybe a)], and [!(Maybe !a)] would all be compatible types. And the libraries wouldn't need to provide different versions of functions for lazy and strict containers of the same kind.
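To make the incompatibility concrete, here is the kind of boilerplate conversion the current situation forces on library users (the function names are illustrative; the declarations repeat Mario's for self-containedness):

    data Maybe' a = Nothing' | Just' !a
    data List' a  = Nil | Cons' !a (List' a)

    -- Converting between lazy and strict containers must be done
    -- element by element; none of the list library can be reused.
    toList' :: [a] -> List' a
    toList' = foldr Cons' Nil

    fromList' :: List' a -> [a]
    fromList' Nil          = []
    fromList' (Cons' x xs) = x : fromList' xs

    toMaybe' :: Maybe a -> Maybe' a
    toMaybe' Nothing  = Nothing'
    toMaybe' (Just x) = Just' x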
Re: Optimizations for mutable structures?
[previously sent by mistake to Simon only - new para at end]

Simon Marlow [EMAIL PROTECTED] writes:

> Now, take the original program, but change the creation of m2 to
> newMVar (), i.e. m2 starts off full. Is the transformation valid now?
> Well maybe, because in some interleavings acts does not block, and we
> can prove that at compilation time.

I don't think it is valid for a compiler to say that one possible execution path permits me to remove some code, therefore I will remove that code on all possible execution paths.

The example I had in mind was a GUI, where the action

    writeIORef r e >> acts >> readIORef r

is intended to capture a situation where first we record some global configuration data for the application, then permit some arbitrary GUI actions to occur, and then we retrieve the configuration data again. My expectation is that the config data /will/ have been changed by some other GUI thread. Surely it cannot be OK for the compiler to silently deliver the original unchanged data here - it goes against the programmer's intention.

Surely, if a Haskell programmer is going to write code that explicitly reads from a reference after writing to it, that sequence must 9/10 be intentional: otherwise wouldn't she have just used a let-binding?

Regards,
Malcolm
Re: Optimizations for mutable structures?
there seem to be two issues here - can we agree on that at least?

1) scheduling non-sequential programs on sequential processors

i wasn't arguing that the scheduler should realise all possible interleavings at once. the issue i was referring to is known as fairness in the concurrent-systems literature. as with referential transparency, various non-equivalent definitions are in use, but one informal description might be something like: if, for a given experiment, some event is possible according to the semantics, it should happen eventually if the experiment is repeated often enough. see, eg,

http://research.microsoft.com/users/lamport/pubs/pubs.html#lamport-fairness

otherwise - if the event cannot happen no matter how often the experiment is repeated, one could hardly call it possible?

scheduling is usually more difficult to implement with fairness guarantees than without, but reasoning about programs is more difficult without such guarantees. i wouldn't expect every concurrent haskell implementation to succeed in guaranteeing fairness, but i would expect the semantics to do so, and would consider any implementation failing in this to be incomplete (perhaps necessarily so, for well-documented pragmatic reasons).

2) compiler optimisation validity in sequential and non-sequential environments

the original motivation of this thread - are sequential transformations still valid in a non-sequential setting? in general, the answer is no, to the surprise/annoyance of many, many compiler implementors who want their valuable optimisations to work even when their sequential language acquires threads.

this is in fact so annoying that some languages, notably Java, have tried to specify the language semantics in such a way that some level of sequential optimisations would still be valid in spite of the language having threads - the search keyword here is "memory model". as i mentioned, they messed up the first time round, and have been through a lengthy re-design phase that involved *changes to the semantics*. i haven't followed that process - the original Java language spec of concurrency and its memory model was sufficient to drive me away from the language for good (fingers crossed..). see, eg:

http://www.cs.umd.edu/~pugh/java/memoryModel/
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html

[concurrent haskell doesn't seem to have this kind of memory hierarchy, but when you're suggesting to transform single threads based on local knowledge of shared variables, you implicitly assume a hierarchy, and a weak memory model rather than a strong one - a memory model is one way of specifying the conditions under which reordering and other transformations remain valid in a non-sequential setting]

i'd really, really prefer concurrent haskell not to go down a route in which demands of simpler implementation lead to subtle problems in reasoning about programs.

cheers,
claus
Re: GCC in the Windows distribution of GHC
Simon Marlow wrote:

> If you want to compile C files using GHC (a reasonable thing to do),
> then the right thing to do is install a full GCC somewhere (eg. the
> mingw gcc), and tell GHC to use it, like this:
>
>     $ ghc -pgmc c:/mingw/bin/gcc -c foo.c

How does this flag affect linking and the location of default libraries? This flag should be in the Windows FAQ; google suggests using the bundled GCC for external libraries, from a Darcs FAQ.

Using two or more sets of compilers and libraries on Windows is annoying. It's reasonable to bundle GCC, but if the user already has the full MinGW and MSYS environment and is using ghc(i) from this shell, I prefer to use the native GCC instead of the bundled one. Is there any global way to do this? Setting the order of directories in the PATH or a special environment variable would be nice.

Cheers:
Babo
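On the linking half of the question: GHC also accepts a -pgml flag naming the program to use for linking, so both phases can be pointed at the same external GCC (paths illustrative):

    $ ghc -pgmc c:/mingw/bin/gcc -pgml c:/mingw/bin/gcc --make Main.hs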
Re: Optimizations for mutable structures?
Tony Finch [EMAIL PROTECTED] writes:

> On Wed, 7 Dec 2005, Robert Dockins wrote:
>> What exactly are the semantics of C programs, and why do we believe
>> that C compilers are correct?
>
> With regards to threading, the semantics are undefined and the
> compilers are subtly broken :-)

Just have a look at Hans Boehm's page for more details on that ...

http://www.hpl.hp.com/personal/Hans_Boehm/c++mm/

-Matthias

--
Matthias Neubauer | Universität Freiburg, Institut für Informatik | tel +49 761 203 8060
Georges-Köhler-Allee 79, 79110 Freiburg i. Br., Germany | fax +49 761 203 8052
Re: Optimizations for mutable structures?
On Wed, Dec 07, 2005 at 08:30:36AM -0500, Jan-Willem Maessen wrote:

> What say others? Is there a need yet? (I don't honestly know the
> answer!)

Although I don't think imperative optimizations at this high a level will benefit much for how much they cost, after the code has been processed and is in cmm form (or about to be) I think there is a lot of room for improvement. Even a few basic optimizations there can help. Ideally we would rely on gcc, but it seems to fall down on a lot of code that ghc generates.

John
Re: Optimizations for mutable structures?
On Wed, Dec 07, 2005 at 05:05:58PM -0000, Simon Marlow wrote:

> No, that is exactly what I disagree with. Faced with non-deterministic
> semantics, the compiler does *not* have to preserve all possible
> outcomes. In other words, it does not have to assume that an IORef can
> be modified between any two arbitrary instructions. If we had to
> assume this, then indeed all bets would be off, and C compilers would
> be very much more restricted in what optimisations they could perform.

Yup. This is exactly why C has the 'volatile' keyword. Variables that are declared volatile will always be read from and written to memory, and never cached in a register, because they could be modified by external interrupts or threads. If every variable were considered volatile, everything would be a whole lot slower. I have only had need to use it three times, and all were in an operating system kernel.

In any case, ghc IORefs are not 'volatile' in the C sense, and that is a good thing.

John
Re: Optimizations for mutable structures?
On Wed, Dec 07, 2005 at 05:36:09PM +0000, Malcolm Wallace wrote:

> My expectation is that the config data /will/ have been changed by
> some other GUI thread. Surely it cannot be OK for the compiler to
> silently deliver the original unchanged data here - it goes against
> the programmer's intention.

Then you should be using MVars. I don't think ghc guarantees (except by accident of implementation) anything about using IORefs from multiple threads, and you shouldn't count on code which does so working in the future. At least it shouldn't, since thread synchronization is a very expensive thing compared to direct memory access and registers.

This is something of an argument for providing MVars, or some thread-safe version of IORefs, with all haskell implementations - whether they have concurrency or not - so people can write portable thread-safe libraries that don't actually use concurrency themselves.

John
Re: Optimizations for mutable structures?
> Yup. This is exactly why C has the 'volatile' keyword. Variables that
> are declared volatile will always be read from and written to memory,
> and never cached in a register, because they could be modified by
> external interrupts or threads. If every variable were considered
> volatile, everything would be a whole lot slower. I have only had need
> to use it three times, and all were in an operating system kernel.
>
> In any case, ghc IORefs are not 'volatile' in the C sense, and that is
> a good thing.

but concurrent haskell (ch) is not c! in ch, you can adjust the scopes of your variables so that scopes do not include other threads, so no need for any slowdown - local variables are available for local optimisations.

but when the scope of a variable does include other threads, i for one do not want to go back to c/java-style efficiency annotations as a means for code obfuscation: if a variable is shared between threads, those threads should all see the same variable contents, without me having to insert special synchronisation calls (or worse, wonder which operations might involve synchronisation and which won't); if some variable's contents do not need to be seen by other threads, don't make that variable available to them.

scope extrusion/passing variables to other threads does not seem to be problematic, either: if thread A updates a variable passed to it by thread B, then those updates should be visible outside A.

but perhaps i am missing something? do you have any example in mind where you'd actually need those extra annotations for efficiency?

cheers,
claus

ps: does your later email imply that you suggest IORef/MVar as the non-volatile/volatile separation? i'd rather see non-thread-local IORefs phased out of ch..
Re: Optimizations for mutable structures?
On Thu, Dec 08, 2005 at 12:17:36AM -0000, Claus Reinke wrote:

> ps: does your later email imply that you suggest IORef/MVar as the
> non-volatile/volatile separation? i'd rather see non-thread-local
> IORefs phased out of ch..

Yeah, exactly. Thread-local storage is also quite expensive, though, so making all IORefs thread-local would be a waste 90% of the time (as would making them thread-safe). I propose that IORefs make no guarantees when it comes to concurrency; it is the user's burden to make sure they use them in a single-threaded manner. MVars should make guarantees when it comes to concurrency, and you should use those whenever they might be accessed by different threads.

The reason all implementations should provide MVars (even if they are just wrappers around IORefs) is so people can write portable thread-safe libraries. Mostly they can use IORefs, but they should use MVars for things like the unsafePerformIO global variable trick.

A thread-local version of MVars might also be interesting...

John
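For readers who haven't seen it, the "unsafePerformIO global variable trick" John mentions usually looks something like this (a hedged sketch; the NOINLINE pragma matters, since the MVar must be created exactly once):

    import Control.Concurrent.MVar
    import System.IO.Unsafe (unsafePerformIO)

    -- A module-level mutable cell, created once. Using an MVar rather
    -- than an IORef keeps access safe if several threads share it.
    {-# NOINLINE globalCounter #-}
    globalCounter :: MVar Int
    globalCounter = unsafePerformIO (newMVar 0)

    bumpCounter :: IO Int
    bumpCounter = modifyMVar globalCounter (\n -> return (n + 1, n + 1))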
Re: Optimizations for mutable structures?
Oh, dear, a brief mail with a high-level view of optimization seems to have touched off some confusion about concurrency. I wasn't thinking about concurrency at all when I wrote my original message, but there seem to be some major misconceptions in the ensuing discussion, which I'd like to clear up.

On Dec 7, 2005, at 9:15 AM, Simon Marlow wrote:

> On 07 December 2005 13:38, Malcolm Wallace wrote:
>> Jan-Willem Maessen [EMAIL PROTECTED] writes:
>>> - Fetch elimination for imperative reads:
>>>       writeIORef r e >> acts >> readIORef r
>>>         === writeIORef r e >> acts >> return e
>>
>> This transformation is valid only on single-threaded systems. If
>> there is any possibility of an IORef being shared across threads, you
>> are out of luck.
>
> (assuming 'acts' doesn't modify 'r').

This was my intended meaning - I dashed off the above lines, and trusted they'd be understood. Apparently I should have been clearer.

> No, Jan's transformation is correct even in a multithreaded setting.
> It might eliminate some possible outcomes from a non-deterministic
> program, but that's ok.

I agree with Simon here. "Eliminate some possible outcomes" is indeed possible to define in a meaningful and technically rigorous way. Others (I'll single out Malcolm and Claus here) have railed against this view, arguing that every program behavior should be preserved. Claus even raises the (difficult technical) issue of fairness.

Sadly, I'm afraid any realistic implementation *will* eliminate possible outcomes. In its cooperative multithreading, GHC constantly makes decisions about where thread switching may occur - and I suspect that inserting every possible thread switch would result in dramatic performance drops. But worse than that, even if we insert a check between every single IORef operation, the processor may well decide to execute successive IORef operations in parallel, or even choose to reorder two IORef operations which refer to different locations. It may even choose to eliminate the read in the above example before the write has become visible to any other thread! In effect, we get behavior indistinguishable from the suggested optimizations.

Now, such behavior isn't always acceptable, so there are ways to get back to sanity. However, those ways (memory barriers and/or atomic memory operations) are extremely expensive. I'm betting one or both of you regularly use an x86 machine - for which there is not even a rigorous specification of where these operations must be inserted! Nonetheless, we get by. We do so by defining idioms based on synchronization - MVars and TMVars are entirely appropriate places to be enforcing memory ordering.

Much of the early pain of the Java Memory Model revision (Claus referred to the mess which was made of the original spec, now fixed) was to avoid the need to insert barriers in most code. A consensus was reached on an acceptable programming style: use monitor synchronization and avoid data races, or failing that use volatile variables in particular well-defined ways. If you break those rules, all bets are off; there is a lengthy spec defining exactly what that means (mostly to rule out behavior where the program creates a valid password out of thin air), but this is everything you, the programmer, need to understand. Similar consensus opinions have formed in other communities, usually without rigorous specifications to back them up.

Voices in this thread have suggested that the right idiom for Haskell is to synchronize using TMVars and MVars. I agree (actually I hope that TMVars are enough, though I'd love to have a TMArray), and I think we can even guarantee reasonable things about IORefs that get passed from thread to thread in this way. Want fairness? Put the stuff you want to observe in an MVar or a TMVar.

Where does this leave us? Roughly where Simon noted - it's easy to do shallow things, like fetch elimination in the absence of intervening calls to unknown functions, and hard to do deep optimizations. I suspect that shallow things are all that is called for.

We love to criticize C for all the things it has done badly. But let's not assume that everything C compilers do is stupid or a bad idea. [Oddly, I find myself telling C programmers the same thing about Fortran all the time.]

-Jan-Willem Maessen

> There's no requirement that all interleavings according to the
> semantics have to be implemented. This is a hard property to state
> precisely, indeed we gave up trying to in the concurrency/FFI paper:
> http://www.haskell.org/~simonmar/papers/conc-ffi.pdf, see Section 6.1.
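As a closing illustration of the idiom the thread converges on (a hedged sketch, not code from any message): hand the signal through an MVar, and the IORef read after the synchronization point is well-defined without any assumptions about scheduling.

    import Data.IORef
    import Control.Concurrent

    -- The worker updates the IORef and then signals through an MVar.
    -- The main thread reads the IORef only after taking the MVar, so
    -- the write is guaranteed to be visible.
    main :: IO ()
    main = do
      r    <- newIORef (0 :: Int)
      done <- newEmptyMVar
      forkIO $ do writeIORef r 42
                  putMVar done ()
      takeMVar done
      readIORef r >>= print   -- always prints 42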