Re: Records in Haskell
J. Garrett Morris jgmorris at cs.pdx.edu writes: On Wed, Feb 29, 2012 at 11:05 PM, AntC anthony_clayden at clear.net.nz wrote: I repeat: nobody is using a type-level string. You (or someone) is making it up. It isn't clear where that idea came from. On Mon, Jan 2, 2012 at 4:38 AM, Simon Peyton-Jones simonpj at microsoft.com wrote: It seems to me that there's only one essential missing language feature, which is appropriately-kinded type-level strings (and, ideally, the ability to reflect these strings back down to the value level). * Provide type-level string literals, so that “foo” :: String Huh. Thank you Garrett, I feel suitably chided. So the 'culprit' is 'your man himself'. You may want to call your type-level-things-that-identify-fields strings, labels, fieldLabels, or rumbledethumps, but surely that's not the point of interest here? /g Ah, but there _is_ a point of interest: under DORF I _must_ call my type-level-things-etc **types** (or perhaps proxy **types**), because they are only and exactly **types**. And because they are exactly **types** they come under usual namespace control. SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) AntC ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
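For reference, the feature SPJ describes in the quote above (type-level string literals that can be reflected back down to the value level) is essentially what later shipped in GHC.TypeLits. A minimal sketch:

```haskell
{-# LANGUAGE DataKinds #-}
import GHC.TypeLits (symbolVal)
import Data.Proxy (Proxy (..))

-- "foo" here is a *type* of kind Symbol, not a value of type String;
-- symbolVal reflects it back down to an ordinary String.
main :: IO ()
main = putStrLn (symbolVal (Proxy :: Proxy "foo"))
```

Note that, as AntC complains, such a Symbol is global: there is no way to hide or qualify the name "foo" via the module system, unlike an ordinary proxy type.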
Re: Records in Haskell
On Wed, Feb 29, 2012 at 11:58 PM, AntC anthony_clay...@clear.net.nz wrote: SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) Nah, they have kinds, and they don't take parameters, so they're probably types. Whether you prefer that foo in module A mean the same thing as foo in module B is entirely up to you; while it might seem intuitive to do so, it's also true that if I write data List t = Cons t (List t) | Nil in two different modules, I declare two entirely distinct list types, even if the natural semantics of the two types might be hard to distinguish. /g -- Would you be so kind as to remove the apricots from the mashed potatoes?
Re: Unpack primitive types by default in data
On 29/02/2012 16:17, Johan Tibell wrote: On Wed, Feb 29, 2012 at 2:08 AM, Simon Marlow marlo...@gmail.com mailto:marlo...@gmail.com wrote: (I think you meant record, not field in the last sentence, right?) I did mean record, but I wasn't being very clear. Let me try again. It's not obvious to me why having a mixture of strict and nonstrict (maybe you meant UNPACKed and not UNPACKed?) fields would make things worse. Could you give a concrete example? Sure. Lets say we have a value x of type Int, that we copy from constructor to constructor: f (C_i x) = C_j x -- for lots of different i:s and j:s (In practice C_i and C_j most likely have different types.) In a program with constructors, C_1 .. C_n, we can do one of three things: 1. Unpack no fields. 2. Unpack some fields. 3. Unpack all fields. Now, if we have a program that's currently in state (1) and we move to state (2) by manually adding some unpack pragmas, performance might get worse, as we introduce re-boxing where there was none before. However, if we kept unpacking fields until we got into state (3), performance might be even better than in state (1), because we are again back into a state where * there's no reboxing (lazy functions aside), but * we have better cache locality. I suspect many large Haskell programs (GHC included) are in state (1) or (2). I think you're right, but in general there's no way to get to state (3) because C_j is often a constructor in a library, or a polymorphic constructor ((:) being a common case, I expect). Furthermore C_j is often not a constructor - just passing x to a non-strict function is enough. The larger and more complicated the code, the more likely it is that cases like this occur, and the harder it is to find them all and squash them (and even if you did so, maintaining the codebase in that state is very difficult). 
If we introduce -funbox-primitive-fields and turn it on by default, the hope would be that many programs go from (1) to (3), but that only works if the programs have consistently made primitive fields strict (or kept them all lazy, in which case -funbox-primitive-fields does nothing.) If the programmer has been inconsistent in his/her use of strictness annotations, we might end up in (2) instead. Right, but I'm suggesting we'll end up in (2) for other reasons beyond our control. How often this happens in practice, I don't know. Cheers, Simon Did this make any more sense? Cheers, Johan
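A small, self-contained illustration of the unpacking and re-boxing being discussed (the constructor names here are made up, not from the thread):

```haskell
-- State (1): the Int field is an ordinary pointer to a boxed Int.
data P1 = P1 Int

-- State (3): with a strictness annotation plus the UNPACK pragma, the
-- raw Int# is stored directly in the constructor, saving an indirection
-- and improving cache locality.
data P3 = P3 {-# UNPACK #-} !Int

-- This is Johan's "f (C_i x) = C_j x" pattern: copying the field into a
-- lazy constructor (Just) forces GHC to re-box the raw Int#, which is
-- exactly the cost Simon warns is hard to eliminate in large programs.
rebox :: P3 -> Maybe Int
rebox (P3 x) = Just x

main :: IO ()
main = print (rebox (P3 42))
```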
Re: Records in Haskell
J. Garrett Morris jgmorris at cs.pdx.edu writes: On Wed, Feb 29, 2012 at 11:58 PM, AntC anthony_clayden at clear.net.nz wrote: SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) Nah, they have kinds, and they don't take parameters, so they're probably types. Whether you prefer that foo in module A mean the same thing as foo in module B is entirely up to you; ... /g It's about representation hiding: - I don't want the importer to even know I have field foo, - but they can use my field bar Or perhaps: - I don't want the importer to update field foo - but they can read foo, and they can update bar (This is especially to support using records to emulate OO, where we want abstraction/'separation of concerns'.) If the importer (either maliciously or by accident) creates their own record with a foo field, I specifically _don't_ want them to try sharing my hidden foo. AntC
Re: ghci 7.4.1 no longer loading .o files?
On 21/02/2012 04:33, Evan Laforge wrote: On Mon, Feb 20, 2012 at 1:14 AM, Eugene Crosser cros...@average.org wrote: On 02/20/2012 10:46 AM, Evan Laforge wrote: Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.: I don't *know* but could this have anything to do with this? http://hackage.haskell.org/trac/ghc/ticket/5878 Indeed it was, I initially thought it wasn't because I wasn't using flags for either, but then I remembered ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that. I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint? Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file. I just committed a fix for this: http://hackage.haskell.org/trac/ghc/ticket/3217#comment:28 What do people think about getting this into 7.4.2? Strictly speaking it's more than a bug fix, because it adds a new GHCi command (:seti) and some extra functions to the GHC API, although I believe it has no effect on existing usage of GHCi or the GHC API. The docs explicitly mention -XNoMonomorphismRestriction. The way to work around the problem you had is to use :seti -XNoMonomorphismRestriction in your ~/.ghci, instead of :set. One disadvantage of this is that your .ghci won't work with older versions of GHC. (does anyone have some .ghci magic for doing conditional compilation?) Furthermore, I'm shortly going to push a patch that will add an indication of why modules are being recompiled. 
Here's the log message: commit 27d7d930ff8741f980245da1b895ceaa5294e257 (HEAD, refs/heads/master) Author: Simon Marlow marlo...@gmail.com Date: Thu Mar 1 13:55:41 2012 +0000 In --make, give an indication of why a module is being recompiled e.g. [3 of 5] Compiling C ( C.hs, C.o ) [4 of 5] Compiling D ( D.hs, D.o ) [C changed] [5 of 5] Compiling E ( E.hs, E.o ) [D changed] The main motivation for this is so that we can give the user a clue when something is being recompiled because the flags changed: [1 of 1] Compiling Test2 ( Test2.hs, Test2.o ) [flags changed] Cheers, Simon
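The :seti workaround Simon describes might look like this in a ~/.ghci (illustrative only):

```
-- ~/.ghci
-- :seti applies flags only to expressions typed at the prompt, not to
-- modules loaded with :load, so compiled .o files still match their
-- recorded flags and keep loading instead of being re-interpreted.
:seti -XNoMonomorphismRestriction
```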
Re: Records in Haskell
On Thu, Mar 01, 2012 at 07:58:42AM +0000, AntC wrote: SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) I haven't followed everything, so please forgive me if this is a stupid question, but if you implement this variant of SORF: http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas then do you get the behaviour of SORF when using field names starting with a lower-case letter, and DORF when they start with an upper-case letter? Thanks Ian
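A hedged sketch of the wiki variant Ian links to, with all names illustrative: the label parameter f is kind-polymorphic, so a field can be named either by a type-level string (SORF-style, globally visible) or by an ordinary type (DORF-style, subject to normal export/import control). The functional dependency here stands in for the wiki's type-function formulation:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             PolyKinds, DataKinds, FlexibleInstances #-}
import Data.Proxy (Proxy (..))

-- f may be a Symbol ("bar") or an ordinary type (Bar): its kind k is free.
class Has r (f :: k) t | r f -> t where
  get :: Proxy f -> r -> t

data Rec = Rec Int   -- a record with a single Int field

data Bar             -- DORF-style label: an ordinary, hideable type

instance Has Rec Bar Int where
  get _ (Rec b) = b

-- SORF-style label: a type-level string, global by construction.
instance Has Rec "bar" Int where
  get _ (Rec b) = b

main :: IO ()
main = do
  print (get (Proxy :: Proxy Bar) (Rec 7))
  print (get (Proxy :: Proxy "bar") (Rec 7))
```

Whether a module can hide the label then depends only on which kind of label was chosen, which is the crux of the upper-case/lower-case question.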
Re: Records in Haskell
On Thu, Mar 1, 2012 at 8:38 AM, Ian Lynagh ig...@earth.li wrote: On Thu, Mar 01, 2012 at 07:58:42AM +0000, AntC wrote: SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) I haven't followed everything, so please forgive me if this is a stupid question, but if you implement this variant of SORF: http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas then do you get the behaviour of SORF when using field names starting with a lower-case letter, and DORF when they start with an upper-case letter? Thanks Ian It is close to a hack (e.g. taking over a special meaning for String) that has been implemented in the Scratchpad II (now known as AXIOM) system for over 3 decades. I found it odd then; maybe for Haskell it has a completely different taste. If you have a copy of the AXIOM book http://www.amazon.com/AXIOM-Scientific-Computation-Richard-Jenks/dp/0387978550 have a look at the end of page 71. -- Gaby
Re: Error while installing new packages with GHC 7.4.1
Ok, interesting info. But how to solve the problem now? Should I contact the author of Hoogle and ask him how to solve this? On 03/01/2012 02:02 AM, Albert Y. C. Lai wrote: On 12-02-29 06:04 AM, Antoras wrote: I don't know where the dependency to array-0.3.0.3 comes from. Is it possible to get more info from cabal than -v? hoogle-4.2.8 has Cabal >= 1.8 && < 1.13, this brings in Cabal-1.12.0. Cabal-1.12.0 has array >= 0.1 && < 0.4, this brings in array-0.3.0.3. It is a mess to have 2nd instances of libraries that already come with GHC, unless you are an expert in knowing and avoiding the treacherous consequences. See my http://www.vex.net/~trebla/haskell/sicp.xhtml It is possible to fish the output of cabal install --dry-run -v3 hoogle for why array-0.3.0.3 is brought in. It really is fishing, since the output is copious and of low information density. Chinese idiom: needle in ocean (haystack is too easy). Example: selecting hoogle-4.2.8 (hackage) and discarding Cabal-1.1.6, 1.2.1, 1.2.2.0, 1.2.3.0, 1.2.4.0, 1.4.0.0, 1.4.0.1, 1.4.0.2, 1.6.0.1, 1.6.0.2, 1.6.0.3, 1.14.0, blaze-builder-0.1, case-insensitive-0.1, We see that selecting hoogle-4.2.8 causes ruling out Cabal 1.14.0 Similarly, the line for selecting Cabal-1.12.0 mentions ruling out array-0.4.0.0
Re: Error while installing new packages with GHC 7.4.1
Hi Antoras, The darcs version of Hoogle has had a more permissive dependency for a few weeks. Had I realised the dependency caused problems I'd have released a new version immediately! As it stands, I'll release a new version in about 4 hours. If you can't wait that long, try darcs get http://code.haskell.org/hoogle Thanks, Neil
Re: Records in Haskell
Ian Lynagh igloo at earth.li writes: On Thu, Mar 01, 2012 at 07:58:42AM +0000, AntC wrote: SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) I haven't followed everything, so please forgive me if this is a stupid question, but if you implement this variant of SORF: http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas then do you get the behaviour of SORF when using field names starting with a lower-case letter, and DORF when they start with an upper-case letter? Thanks Ian And you get "In my opinion, this is ugly, since the selector can be either a type name or a label and the semantics are nonsame. Rather, we need scoped instances." [SPJ] So if we open the gate for ugly, do we also open it for hacks and for unscalable? Then we also have a solution for updating higher-ranked typed fields. I guess this is all for decision by the implementors. If we need to go into scoped instances, I'd be really scared -- that seems like a huge, far-reaching change, with all sorts of opportunity for mysterious compile fails and inexplicable behaviour changes from imports. I have some creative ideas for introducing overlapping instances; shall I run them up the flagpole as well? AntC
Re: Records in Haskell
On Thu, Mar 01, 2012 at 08:52:29PM +0000, AntC wrote: Ian Lynagh igloo at earth.li writes: On Thu, Mar 01, 2012 at 07:58:42AM +0000, AntC wrote: SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because they're new-fangled, I'm opposed because I can't control the namespace.) I haven't followed everything, so please forgive me if this is a stupid question, but if you implement this variant of SORF: http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas then do you get the behaviour of SORF when using field names starting with a lower-case letter, and DORF when they start with an upper-case letter? And you get "In my opinion, this is ugly, since the selector can be either a type name or a label and the semantics are nonsame. Rather, we need scoped instances." [SPJ] That comment was from strake888, not SPJ? Personally, in the context of Haskell (where the case of the first character often determines the behaviour, e.g. a pattern of Foo vs foo), I don't think it's too bad. So if we open the gate for ugly, do we also open it for hacks and for unscalable? Then we also have a solution for updating higher-ranked typed fields. I guess this is all for decision by the implementors. If we need to go into scoped instances, I'd be really scared -- that seems like a huge, far-reaching change, with all sorts of opportunity for mysterious compile fails and inexplicable behaviour changes from imports. I have some creative ideas for introducing overlapping instances; shall I run them up the flagpole as well? I'm getting lost again. But I think you are agreeing that (leaving aside the issue of whether the design is reasonable) the above variant would indeed allow the user to choose the behaviour of either SORF or DORF. Thanks Ian
Re: [Haskell-cafe] Records in Haskell
Thanks Evan, I've had a quick read through. Thanks for reading and commenting! It's a bit difficult to compare to the other proposals. I can't see discussion of extracting higher-ranked functions and applying them in polymorphic contexts. (This is SPJ's `rev` example.) Putting h-r fields into records is the standard way of emulating object-oriented style. SPJ's view is that requirement is very important in practice. (No proposal has a good answer to updating h-r's, which you do discuss.) Yeah, I've never wanted that kind of thing. I've written in object-oriented languages, so it's not just that I don't feel its lack because I'm not used to the feature. And if I did want it, I would probably not mind falling back to the traditional record syntax, though I can see how people might find that unsatisfying. But my suggestion is meant to solve only the problem of composed record updates and redundant things in 'Thing.thing_field thing'. Not supporting higher-ranked function record fields *only* means that you can't use this particular convenience to compose updates to a higher-ranked field. If you happen to have that particular intersection of requirements then you'll have to fall back to typing more things for that particular update. My motivation is to solve an awkward thing about writing in haskell as it is, not add a new programming style. Re the cons 1. Still can't have two records with the same field name in the same module since it relies on modules for namespacing. Did you see the DORF precursor page? http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/NoMonoRecordFields I tried to figure out if that would help, but I suspect not. (Looking at the desugar for `deriving (Lens)`, you need the H98 field selector functions.) Then for me, cons 1. is a show-stopper. (I know you think the opposite.) Yeah, I don't think the DORF precursor stuff is related, because it's all based on typeclasses. 
I think there are two places where people get annoyed about name clashes. One is where they really want to have two records with the same field name defined in one module. The other is where they are using unqualified imports to shorten names and get a clash from records in different modules. Only the former is a problem; the latter should work just fine with my proposal because ghc lets you import clashing names as long as you don't call them unqualified, and SDNR qualifies them for you. So about the former... I've never had this problem, though the point about circular imports forcing lots of things into the same module is well taken, I have experienced that. In that case: nested modules. It's an orthogonal feature that can be implemented and enabled separately, and can be useful in other ways too. If we are to retain modules as *the* way to organize namespaces and visibility then we should think about fancying-up modules when a namespacing problem comes up. Otherwise you're talking about putting more than one function into one symbol, and that's typeclasses, and now you have to think of something clever to counteract typeclasses' desire to be global (e.g. type proxies). Maybe that's forcing typeclasses too far beyond their power/weight compromise design? I also don't see whether you can 'hide' or make abstract the representation of a record type, but still allow read-access to (some of) its fields. If you want a read-only field, then don't export the lens for 'a', export a normal function for it. However, it would mean you'd use it as a normal function, and couldn't pass it to 'get' because it's not a lens, and couldn't be composed together with lenses. I'd think it would be possible to put 'get' and 'set' into different typeclasses and give ReadLenses only the ReadLens dictionary. But effectively we'd need subtyping, so a Lens could be cast automatically to a ReadLens. 
I'm sure it's possible to encode with clever rank2 and existentials and whatnot, but at that point I'm inclined to say it's too complicated and not worth it. Use plain functions. Since 'get' turns a lens into a plain function, you can still compose with '#roField . get (#rwField1 . #rwField2)'. We could easily support 'get (#roField1 . #roField2)' by doing the ReadLens thing and putting (->) into ReadLens, it's just combining rw fields and ro fields into the same composition that would require type gymnastics. Suppose a malicious client declares a record with field #a. Can you stop them reading and/or updating your field #a whilst still letting them see field #b of your record type? I don't think it's worth designing to support malicious clients, but if you don't want to allow access to a function or lens or any value, then don't export it. #a can't resolve to M.a if M doesn't export 'a'. With SDNR, is it possible to define a polymorphic field selector function? I suspect not, looking at the desugar for `deriving (Lens)`, but perhaps
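The Lens/ReadLens split discussed above can be sketched concretely; all names here are hypothetical, and a simple get/set pair stands in for whatever representation `deriving (Lens)` would actually generate:

```haskell
-- A Lens is a getter paired with a setter; a ReadLens is the getter alone.
data Lens r a = Lens { getL :: r -> a, setL :: a -> r -> r }

newtype ReadLens r a = ReadLens { getR :: r -> a }

-- Any Lens "casts" to a ReadLens by forgetting the setter.
toRead :: Lens r a -> ReadLens r a
toRead l = ReadLens (getL l)

-- Lenses compose, which is what makes nested updates one-liners.
compose :: Lens b c -> Lens a b -> Lens a c
compose inner outer =
  Lens (getL inner . getL outer)
       (\c a -> setL outer (setL inner c (getL outer a)) a)

data Point = Point { px :: Int } deriving Show
data Shape = Shape { sOrigin :: Point } deriving Show

pxL :: Lens Point Int
pxL = Lens px (\x p -> p { px = x })

originL :: Lens Shape Point
originL = Lens sOrigin (\o s -> s { sOrigin = o })

main :: IO ()
main = do
  let s = Shape (Point 1)
  print (getL (compose pxL originL) s)   -- read through the composition
  print (setL (compose pxL originL) 9 s) -- nested update in one step
```

Exporting only `toRead originL` (and not `originL` itself) gives the read-only access described, without any rank-2 machinery.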
Re: Error while installing new packages with GHC 7.4.1
Hi Antoras, I've just released Hoogle 4.2.9, which allows Cabal < 1.15, so hopefully it will install correctly for you. Thanks, Neil
Re: Records in Haskell
Ian Lynagh igloo at earth.li writes: On Thu, Mar 01, 2012 at 08:52:29PM +, AntC wrote: And you get In my opinion, this is ugly, ... That comment was from strake888, not SPJ? Thanks Ian, you're right. Specifically, it's 'igloo's tweak to the proposal and 'strake888's comment. (I had erroneously thought the whole of that page was SPJ's, and I hadn't much re-read it since SPJ posted it.) Personally, in the context of Haskell (where the case of the first character often determines the behaviour, e.g. a pattern of Foo vs foo), I don't think it's too bad. Hmm. Upper case in an expression always means data constructor (or qualified name). (You've possibly not been watching the outrage at changing the meaning of `.` ;-) Also this would be ambiguous: object.SubObject.Field.subField -- are the `SubObject` and `Field` (private) field selectors, -- or a qualified name for subField? -- or perhaps SubObject.Field is a qualified private field selector? Putting parentheses would cut out some of those interpretations, but not all of them?? In terms of scope control, I think (I'm guessing rather) you do get similar behaviour to DORF, with the added inconvenience of: * an extra arg to Has (how does the constraint sugar cope?) r{ field :: Int } = ... r{ Field :: Int } = ... -- ? does that look odd -- suppose I have two private namespaces r{ Field :: Int ::: Field1 } = ... -- ?? r{ (Field ::: Field2) :: Int } = ... -- ??? * something(?) extra in record decls: data PublicRecord = Pub { field :: Int } data PrivateRecord = Priv { Field :: Int }-- ? data PrivateRecord = Priv { Field :: Int ::: Field2 } -- ?? * a need for equality constraints between Kind and Type (that's the ft ~ FieldT bit) The class decl and instances are 'punning' on tyvar `ft` being both a type and a Kind. Is that even contemplated with Kinds? * a need for something(?) different on record update syntax: pubRec{ field = 27 } privRec{ Field = 92 } -- does upper case there look odd to you? 
privRec{ Field = 87 ::: Field2 } (ugly is a mild reaction, the more I think about it.) But I think you are agreeing that (leaving aside the issue of whether the design is reasonable) the above variant would indeed allow the user to choose the behaviour of either SORF or DORF. No, not the user to choose, but the implementor. We can't possibly try to support both approaches. AntC
Re: Records in Haskell
AntC anthony_clayden at clear.net.nz writes: Ian Lynagh igloo at earth.li writes: But I think you are agreeing that (leaving aside the issue of whether the design is reasonable) the above variant would indeed allow the user to choose the behaviour of either SORF or DORF. No, not the user to choose, but the implementor. We can't possibly try to support both approaches. Sorry, I mis-interpreted your last paragraph. I think you meant: ... allow the user to choose [public or restricted namespacing] behaviour under either the SORF or DORF proposal. Yes-ish (leaving aside that issue). Under SORF you have an extra behaviour: - use String Kinds and your label is public-everywhere and completely uncontrollable. - (So someone who imports your label can't stop it getting re-exported.) - This is unlike any other user-defined name in Haskell. I'm not sure whether to call that extra behaviour a 'feature' (I tend more to 'wart'), but it's certainly another bit of conceptual overload. I prefer DORF's sticking to conventional/well-understood H98 namespacing controls. AntC
Re: Records in Haskell
On Thu, Mar 01, 2012 at 10:46:27PM +0000, AntC wrote: Also this would be ambiguous: object.SubObject.Field.subField Well, we'd have to either define what it means, or use something other than '.'. In terms of scope control, I think (I'm guessing rather) you do get similar behaviour to DORF, with the added inconvenience of: * an extra arg to Has (how does the constraint sugar cope?) You can infer ft from the f. r{ field :: Int } = ... r{ Field :: Int } = ... -- ? does that look odd Well, it's new syntax. -- suppose I have two private namespaces r{ Field :: Int ::: Field1 } = ... -- ?? r{ (Field ::: Field2) :: Int } = ... -- ??? You've lost me again. But I think you are agreeing that (leaving aside the issue of whether the design is reasonable) the above variant would indeed allow the user to choose the behaviour of either SORF or DORF. No, not the user to choose, but the implementor. We can't possibly try to support both approaches. I don't follow. You agreed above that you do get similar behaviour to DORF, and if you just use lowercase field names then the behaviour is the same as SORF. Therefore both are supported. Thanks Ian
Re: Records in Haskell
On Thu, Mar 01, 2012 at 11:32:27PM +0000, AntC wrote: AntC anthony_clayden at clear.net.nz writes: Ian Lynagh igloo at earth.li writes: But I think you are agreeing that (leaving aside the issue of whether the design is reasonable) the above variant would indeed allow the user to choose the behaviour of either SORF or DORF. No, not the user to choose, but the implementor. We can't possibly try to support both approaches. Sorry, I mis-interpreted your last paragraph. I think you meant: ... allow the user to choose [public or restricted namespacing] behaviour under either the SORF or DORF proposal. Yes, exactly. Yes-ish (leaving aside that issue). Under SORF you have an extra behaviour: - use String Kinds and your label is public-everywhere and completely uncontrollable. - (So someone who imports your label can't stop it getting re-exported.) - This is unlike any other user-defined name in Haskell. I'm not sure whether to call that extra behaviour a 'feature' (I tend more to 'wart'), but it's certainly another bit of conceptual overload. Right, but other people would prefer the SORF behaviour to the DORF behaviour. But note that if this was implemented, then the only difference between the 3 is in the desugaring. So if you desugar r.f only then you get SORF, r.F only then you get DORF (well, with different syntax, probably), and if you desugar both then you get the choice. Thanks Ian
Re: Records in Haskell
Ian Lynagh igloo at earth.li writes: On Thu, Mar 01, 2012 at 11:32:27PM +, AntC wrote: AntC anthony_clayden at clear.net.nz writes: Ian Lynagh igloo at earth.li writes: But I think you are agreeing that (leaving aside the issue of whether the design is reasonable) the above variant would indeed allow the user to choose the behaviour of either SORF or DORF. No, not the user to choose, but the implementor. We can't possibly try to support both approaches. Sorry, I mis-interpreted your last paragraph. I think you meant: ... allow the user to choose [public or restricted namespacing] behaviour under either the SORF or DORF proposal. Yes, exactly. Yes-ish (leaving aside that issue). Under SORF you have an extra behaviour:
- use String Kinds and your label is public-everywhere and completely uncontrollable.
- (So someone who imports your label can't stop it getting re-exported.)
- This is unlike any other user-defined name in Haskell.
I'm not sure whether to call that extra behaviour a 'feature' (I tend more to 'wart'), but it's certainly another bit of conceptual overload. Right, but other people would prefer the SORF behaviour to the DORF behaviour. Would they? How could we know? Most of the posts here have been from people who don't get anywhere near to understanding the issues. There's been a vociferous poster who wants to have lots of fields with the same name and have them each mean something different. (No, I don't understand either.) Under DORF they could get the public-everywhere behaviour by exporting and importing unqualified (just like H98!). But note that if this was implemented, then the only difference between the 3 is in the desugaring. So if you desugar r.f only then you get SORF, r.F only then you get DORF (well, with different syntax, probably), and if you desugar both then you get the choice. Thanks Ian Sorry Ian, but I've got conceptual overload. I feel I understand DORF behaviour not just because I designed it, but also because I can (and have!) 
prototyped it under GHC v7.2, including public-everywhere and controlled import/export -- see my additional attachment to the implementor's page. With Kinds and Stringy Kinds and type-to-Kind equality constraints I feel I want to better understand how that affects the design space. I don't think that's possible yet, even in v7.4(?) Right from the beginning of SPJ's SORF proposal, I've had a feeling that ghc central, having introduced the new whizzy Kinds, now wants to find a use for them. Surely there would be other applications for Kinds that would be clearer use cases than patching-up Haskell's kludgy record design? We're focussing too narrowly on this representation-hiding issue. There are other important differences between SORF and DORF (which I've tried to explain on my comparison page on the wiki). Nothing you've said so far is being persuasive. (BTW on the comparison wiki, I've put some speculations around using Kinds behind the scenes as an implementation for DORF -- implementor's choice. Because it's behind the scenes we could use a more compact/specific variety of Kind than String. But it's still in danger of suffering the uncontrollable public-everywhere issue. Could you suggest an improvement?) AntC
Re: Records in Haskell
On Fri, Mar 2, 2012 at 1:06 AM, Ian Lynagh ig...@earth.li wrote: On Thu, Mar 01, 2012 at 11:32:27PM +, AntC wrote: Yes-ish (leaving aside that issue). Under SORF you have an extra behaviour:
- use String Kinds and your label is public-everywhere and completely uncontrollable.
- (So someone who imports your label can't stop it getting re-exported.)
- This is unlike any other user-defined name in Haskell.
I'm not sure whether to call that extra behaviour a 'feature' (I tend more to 'wart'), but it's certainly another bit of conceptual overload. Right, but other people would prefer the SORF behaviour to the DORF behaviour. Who and why? What's the use case? I was trying to tease this out at another point in the thread. What use case is there for which Haskell's normal and familiar classes-and-instances mode of polymorphism isn't appropriate, and for which we want to introduce this new and alien global-implicit-name-based mode of polymorphism? Another point which could sway in SORF's favour might be easier implementation, but DORF actually requires less type system magic than SORF, and also already has a working prototype implementation, so I don't think that works, either. Let's look at this from the other direction. The advantage of DORF over SORF is that it handles record fields in a hygienic way, and that it works with the module system, rather than around it. What advantage does SORF have over DORF? My main complaint against DORF is that having to write fieldLabel declarations for every field you want to use is onerous. If that could be solved, I don't think there are any others. (But even if it can't be, I still prefer DORF.)
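[For readers weighing that cost: a fieldLabel declaration is small. Roughly -- this is a sketch of the idea, not the prototype's exact desugaring -- each one introduces a proxy type and one overloaded selector, after which any number of records can share the field:]

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}

class Has r f t | r f -> t where
  get :: r -> f -> t

-- What a `fieldLabel customer_id` declaration might generate:
-- a proxy type for the label, plus an overloaded selector.
data Proxy_customer_id = Proxy_customer_id

customer_id :: Has r Proxy_customer_id t => r -> t
customer_id r = get r Proxy_customer_id

-- Two record types sharing the one label:
data Customer = Customer { custId :: Int }
data Order    = Order    { ordCust :: Int }

instance Has Customer Proxy_customer_id Int where
  get r _ = custId r
instance Has Order Proxy_customer_id Int where
  get r _ = ordCust r

main :: IO ()
main = print (customer_id (Customer 7) + customer_id (Order 35))
```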
Re: Records in Haskell
Ian Lynagh igloo at earth.li writes: * an extra arg to Has (how does the constraint sugar cope?) You can infer ft from the f. Let me explain better what I mean by two private namespaces, then we'll try to understand how your proposal goes ...

module T where
  data FieldT = Field
  data RecT = RecT{ Field :: Int }
  ...
module U where
  data FieldU = Field
  data RecU = RecU{ Field :: Bool }
  ...
module V where
  import T -- also consider either/both
  import U -- imports hiding (Field)
  data RecV = RecV{ Field :: String } -- am I sharing this Field?
                                      -- who with?
  ...
  ... r.Field ...    -- is this valid?, if not what is?
  ... r{ Field = e } -- likewise

(Oh yeah, imports and hiding: how do I do that for these non-String-type-Kinds? And is this allowed?:

  data NotaField = Constr Int Bool
  data AmIaRec = AmI{ Constr :: String }
  ...
  ... r.Constr ...

It's all getting very tangled trying to squeeze constructors into other roles.) AntC
Re: Records in Haskell
On Fri, Mar 02, 2012 at 01:44:45AM +0100, Gábor Lehel wrote: On Fri, Mar 2, 2012 at 1:06 AM, Ian Lynagh ig...@earth.li wrote: Right, but other people would prefer the SORF behaviour to the DORF behaviour. Who and why? What's the use case? My main complaint against DORF is that having to write fieldLabel declarations for every field you want to use is onerous. I believe this is the main concern people have, but I can't speak for them. Thanks Ian
Re: Records in Haskell
On Fri, Mar 02, 2012 at 01:04:13AM +, AntC wrote: Let me explain better what I mean by two private namespaces, then we'll try to understand how your proposal goes ...

module T where
  data FieldT = Field
  data RecT = RecT{ Field :: Int }
  ...
module U where
  data FieldU = Field
  data RecU = RecU{ Field :: Bool }
  ...
module V where
  import T -- also consider either/both
  import U -- imports hiding (Field)
  data RecV = RecV{ Field :: String } -- am I sharing this Field?
                                      -- who with?

Ah, I see. No, you couldn't do that, just as you couldn't do

  v = Field

You would need to say

  data RecV = RecV{ T.Field :: String }

... r.Field ... -- is this valid?, if not what is?

  r!T.Field

(I took the liberty of using a random different symbol for field access, for clarity). ... r{ Field = e } -- likewise

  r{ T.Field = e }

(Oh yeah, imports and hiding: how do I do that for these non-String-type-Kinds? And is this allowed?:

  data NotaField = Constr Int Bool
  data AmIaRec = AmI{ Constr :: String }

No. Thanks Ian
Re: ghci 7.4.1 no longer loading .o files?
On Tue, Feb 28, 2012 at 1:53 AM, Simon Marlow marlo...@gmail.com wrote: I don't see how we could avoid including -D, since it might really affect the source of the module that GHC eventually sees. We've never taken -D into account before, and that was incorrect. I can't explain the behaviour you say you saw with older GHCs, unless your CPP flags only affected the imports of the module. In fact, that's what I do. I put system specific stuff or expensive stuff into a module and then do

#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature
#else
import qualified StubbedOutFeature as ExpensiveFeature
#endif

I think this is a pretty common strategy. I know it's common for os-specific stuff, e.g. filepath does this. Although obviously for OS stuff we're not interested in saving recompilation :) Well, one solution would be to take the hash of the source file after preprocessing. That would be accurate and would automatically take into account -D and -I in a robust way. It could also cause too much recompilation, if for example a preprocessor injected some funny comments or strings containing the date/time or detailed version numbers of components (like the gcc version). By "take the hash of the source file" do you mean the hash of the textual contents, or the usual hash of the interface etc? I assumed it was the latter, i.e. that the normal hash was taken after preprocessing. But suppose it's the former, I still think it's better than unconditional recompilation (which is what always including -D in the hash does, right?). Unconditionally including -D in the hash either makes it *always* compile too much--and likely drastically too much, if you have one module out of 300 that switches out depending on a compile time flag, you'll still recompile all 300 when you change the flag. And there's nothing you can really do about it if you're using --make. 
If you try to get around that by using a build system that knows which files it has to recompile, then you get in a situation where the files have been compiled with different flags, and now ghci can't cope since it can't switch flags while loading. If your preprocessor does something like put the date in... well, firstly I think that's much less common than switching out module imports, since for the latter as far as I know CPP is the only way to do it, while for dates or version numbers you'd be better off with a config file anyway. And it's still correct, right? You changed your gcc version or date or whatever, if you want a module to have the build date then of course you have to rebuild the module every time---you got exactly what you asked for. Even if for some reason you have a preprocessor that nondeterministically alters comments, taking the interface hash after preprocessing would handle that. And come to think of it, these are CPP flags not some arbitrary pgmF... can CPP even do something like insert the current date without also changing its -D flags?
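[The "hash after preprocessing" idea can be illustrated with a toy fingerprint. The hash function and module text below are made up for the illustration; GHC's real recompilation check hashes interfaces, not raw text:]

```haskell
import Data.Char (ord)
import Data.List (foldl')

-- Toy fingerprint standing in for a real source hash.
fingerprint :: String -> Int
fingerprint = foldl' (\h c -> h * 33 + ord c) 5381

-- Simulate CPP: a -D flag that swaps out an import line,
-- as in the EXPENSIVE_FEATURE example above.
preprocess :: Bool -> String
preprocess expensive
  | expensive = "import qualified ExpensiveFeature\n" ++ body
  | otherwise = "import qualified StubbedOutFeature as ExpensiveFeature\n" ++ body
  where body = "main = ExpensiveFeature.run\n"

main :: IO ()
main = do
  -- Flipping the flag changes the preprocessed text, so this
  -- module genuinely needs recompiling:
  print (preprocess True == preprocess False)
  -- Same flag, same text, same hash: the other 299 modules whose
  -- preprocessed text is unchanged would not be recompiled.
  print (fingerprint (preprocess True) == fingerprint (preprocess True))
```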
Re: ghci 7.4.1 no longer loading .o files?
I just committed a fix for this: http://hackage.haskell.org/trac/ghc/ticket/3217#comment:28 What do people think about getting this into 7.4.2? Strictly speaking it's more than a bug fix, because it adds a new GHCi command (:seti) and some extra functions to the GHC API, although I believe it has no effect on existing usage of GHCi or the GHC API. Well, I'm all for it :) You could stretch it into calling it a bug fix for a regression (it's maybe not technically a regression, but it pushed me back to 7.0.3... well, that and the -D thing...). [1 of 1] Compiling Test2 ( Test2.hs, Test2.o ) [flags changed] Very cool, I love it!
Re: Records in Haskell
On 03/01/2012 01:46 AM, AntC wrote: Isaac Dupree ml at isaac.cedarswampstudios.org writes: In the meantime, I had an idea (that could work with SORF or DORF) : data Foo = Foo { name :: String } deriving (SharedFields) The effect is: without that deriving, the declaration behaves just like H98. (For super flexibility, allow to specify which fields are shared, like deriving(SharedFields(name, etc, etc)) perhaps.) Is it too verbose? Or too terrible that it isn't a real class (well, there's Has...)? -Isaac Thanks Isaac, hmm: that proposal would work against what DORF is trying to do. You're right about the `deriving` syntax currently being used for classes. The fact of re-purposing the surface syntax is really no different to introducing different syntax. [...] What you're not getting is that DORF quite intentionally helps you hide the field names if you don't want your client to break your abstraction. So under your proposal, a malicious client could guess at the fieldnames in your abstraction, then create their own record with those fieldnames as SharedFields, and then be able to update your precious hidden record type. Show me how a malicious client could do that. Under DORF plus my mini-proposal,

module Abstraction (AbstractData) where
  data AbstractData = Something { field1 :: Int, field2 :: Int }
  {- or it could use shared field names (shared privately) :
  fieldLabel field1 --however it goes
  fieldLabel field2 --however it goes
  data AbstractData = Something { field1 :: Int, field2 :: Int }
    deriving (SharedFields)
  -}

module Client where
  import Abstraction
  --break abstraction how? let's try... 
module Client1 where
  import Abstraction
  data Breaker = Something { field1 :: Int } deriving (SharedFields)
  -- compile fails because there are no field-labels in scope

module Client2 where
  import Abstraction
  fieldLabel field1 --however it goes
  data Breaker = Something { field1 :: Int } deriving (SharedFields)
  -- succeeds, still cannot access AbstractData with Client2.field1

module Client3 where
  import Abstraction
  -- (using standalone deriving, if we permit it for SharedFields at all)
  deriving instance SharedFields AbstractData
  -- compile fails because not all constructors of AbstractData are in scope

All my mini-proposal does is modify SORF or DORF to make un-annotated records behave exactly like H98. AntC (in an unrelated reply to Ian) : I prefer DORF's sticking to conventional/well-understood H98 namespacing controls. [warning: meta-discussion below; I'm unsure if I'm increasing signal/noise ratio] Since this giant thread is a mess of everyone misinterpreting everyone else, I'm not sure yet that DORF's namespacing is well-understood by anyone but you. For example, one of us just badly misinterpreted the other (above; not sure who yet). Would IRC be better? worse? How can the possibly-existent crowd of quiet libraries@ readers who understand SORF/DORF/etc. correctly show (in a falsifiable way) that they understand? any ideas? Do people misinterpret DORF this much because you posted at least 4000 words[1] without creating and making prominent a concise, complete description of its behaviour? (is that right?) I propose that any new record system have a description of less than 250 words that's of a style that might go in the GHC manual and that causes few if any misinterpretations. Is that too ambitious? Okay, it is. So. Differently, I propose that any new record system have a description of less than 500 words that completely specifies its behaviour and that at least half of libraries@ interprets correctly. 
(It's fine if the description refers to docs for other already-implemented type-system features, e.g. MPTCs and kind stuff.[2] ) Should we be trying for such a goal? (For reference: just SORF's The Base Design section is 223 words, and just DORF's Application Programmer's view only up to Option One is 451 words. (according to LibreOffice.) Neither one is a complete description, but then, my proposed 500 word description wouldn't mention design tradeoffs. A GHC User's Guide subsection I picked arbitrarily[3] is 402 words.) [1] I counted the main DORF page plus the one you pointed me to, each of which is about 2000: http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields + http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView [2] My sense is that (customer_id r) uses familiar type instance resolution [...] is only a precise enough statement if the user declared the exact, unedited type of customer_id; and that having constraints like r{ customer_id :: Int } would need explanation in terms of familiar type inference such as classes. e.g... in a way that would explain r{ SomeModule.customer_id :: Int } (is that allowed?). I could try to write such a description and you could tell me where I go wrong... [3] Record field disambiguation
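[One concrete reading of the mini-proposal, hand-expanded here as a guess at what the compiler would generate rather than anything either proposal specifies: without the deriving, a record's fields are plain H98 selectors; with it, the record additionally gets a Has instance for the shared label:]

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

class Has r f t | r f -> t where
  get :: r -> f -> t

data Proxy_name = Proxy_name

-- H98-style record: monomorphic selector only, nothing shared.
data Plain = Plain { plainName :: String }

-- A record declared with `deriving (SharedFields)` would, on this
-- reading, additionally get an instance like this generated:
data Shared = Shared { sharedName :: String }
instance Has Shared Proxy_name String where
  get r _ = sharedName r

-- The overloaded selector works on Shared but not on Plain,
-- so un-annotated records stay abstract.
name :: Has r Proxy_name t => r -> t
name r = get r Proxy_name

main :: IO ()
main = putStrLn (name (Shared "abc"))
```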
Re: Error while installing new packages with GHC 7.4.1
Hi Antoras, My suspicion is you've ended up with corrupted packages in your package database - nothing to do with Hoogle. I suspect trying to install parsec-3.1.2 directly would give the same error message. Can you try ghc-pkg list, and at the bottom it will probably say something like: The following packages are broken, either because they have a problem listed above, or because they depend on a broken package. warp-1.1.0 I often find ghc-pkg unregister warp --force on all the packages cleans them up enough, but someone else may have a better suggestion. Thanks, Neil On Fri, Mar 2, 2012 at 12:02 AM, Antoras m...@antoras.de wrote: Hi Neil, thanks for your effort. But it still does not work. The old errors disappeared, but new ones occur. Maybe I have not yet the most current versions: $ ghc --version The Glorious Glasgow Haskell Compilation System, version 7.4.1 $ cabal --version cabal-install version 0.10.2 using version 1.10.1.0 of the Cabal library This seems to be the most current version of Cabal. The command 'cabal info cabal' brings: Versions installed: 1.14.0 but not 1.15 An extract of the error messages: [...] Configuring parsec-3.1.2... Preprocessing library parsec-3.1.2... Building parsec-3.1.2... command line: cannot satisfy -package-id text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d: text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d is unusable due to missing or recursive dependencies: deepseq-1.3.0.0-a73ec930018135e0dc0a1a3d29c74c88 (use -v for more information) command line: cannot satisfy -package Cabal-1.14.0: Cabal-1.14.0-5875475606fe70ef919bbc055077d744 is unusable due to missing or recursive dependencies: array-0.4.0.0-59d1cc0e7979167b002f021942d60f46 containers-0.4.2.1-cfc6420ecc2194c9ed977b06bdfd9e69 directory-1.1.0.2-07820857642f1427d8b3bb49f93f97b0 process-1.1.0.1-18dadd8ad5fc640f55a7afdc7aace500 (use -v for more information) [...] 
On Thu 01 Mar 2012 11:06:43 PM CET, Neil Mitchell wrote: Hi Antoras, I've just released Hoogle 4.2.9, which allows Cabal 1.15, so hopefully will install correctly for you. Thanks, Neil On Thu, Mar 1, 2012 at 5:02 PM, Neil Mitchell ndmitch...@gmail.com wrote: Hi Antoras, The darcs version of Hoogle has had a more permissive dependency for a few weeks. Had I realised the dependency caused problems I'd have released a new version immediately! As it stands, I'll release a new version in about 4 hours. If you can't wait that long, try darcs get http://code.haskell.org/hoogle Thanks, Neil On Thursday, March 1, 2012, Antoras wrote: Ok, interesting info. But how to solve the problem now? Should I contact the author of Hoogle and ask him about how solving this? On 03/01/2012 02:02 AM, Albert Y. C. Lai wrote: On 12-02-29 06:04 AM, Antoras wrote: I don't know where the dependency to array-0.3.0.3 comes from. Is it possible to get more info from cabal than -v? hoogle-4.2.8 has Cabal >= 1.8 && < 1.13, this brings in Cabal-1.12.0. Cabal-1.12.0 has array >= 0.1 && < 0.4, this brings in array-0.3.0.3. It is a mess to have 2nd instances of libraries that already come with GHC, unless you are an expert in knowing and avoiding the treacherous consequences. See my http://www.vex.net/~trebla/haskell/sicp.xhtml It is possible to fish the output of cabal install --dry-run -v3 hoogle for why array-0.3.0.3 is brought in. It really is fishing, since the output is copious and of low information density. Chinese idiom: needle in ocean (haystack is too easy). 
Example: selecting hoogle-4.2.8 (hackage) and discarding Cabal-1.1.6, 1.2.1, 1.2.2.0, 1.2.3.0, 1.2.4.0, 1.4.0.0, 1.4.0.1, 1.4.0.2, 1.6.0.1, 1.6.0.2, 1.6.0.3, 1.14.0, blaze-builder-0.1, case-insensitive-0.1, We see that selecting hoogle-4.2.8 causes ruling out Cabal 1.14.0 Similarly, the line for selecting Cabal-1.12.0 mentions ruling out array-0.4.0.0
Re: Records in Haskell
Isaac Dupree ml at isaac.cedarswampstudios.org writes: AntC (in an unrelated reply to Ian) : I prefer DORF's sticking to conventional/well-understood H98 namespacing controls. ... I'm not sure yet that DORF's namespacing is well-understood by anyone but you. No of course I'm not saying DORF's namespacing is well-understood. I mean: 1. H98 namespacing controls are conventional and well understood. 2. DORF uses H98 controls and only them. Re 2: Partly you'll just have to take my word for it, but mainly the implementors will have to prove it to themselves if DORF is ever going to see the light of day, so I'd be daft to claim something I didn't have good evidence for. Also there's strong corroboration: there's a prototype implementation attached to the wiki. You can download it and compile it (one module importing the other), and run it and try to break the abstractions, and demonstrate sharing the fields that are sharable. You can inspect the code to see if I've desugared my syntax correctly, or introduced some trick. (If you see anything 'suspicious', please ask.) In fact, doing all that would be a far better use of your time (and mine) than all that verbiage and word counting. AntC