Re: Records in Haskell
On 2/26/12 12:38 AM, Anthony Clayden wrote:
> Wren/all, please remember SPJ's request on the Records wiki to stick to
> the namespace issue. We're trying to make something better than H98's
> name clash. We are not trying to build some ideal polymorphic record
> system.

I believe my concern is a namespace issue. There are certain circumstances under which we do not want names to clash, and there are certain circumstances under which we do want them to clash; just as sometimes we want things to be polymorphic and sometimes not.

I haven't been following all the different proposals out there, but the ones I did see before tuning out all took the stance that for each given field either (1) this field name is unique and always clashes, or (2) this field name is shared and never clashes. This is problematic for a number of reasons. The particular reason I raised is that there are times when we would like a field name to be shared, but only shared among a specified group of records, clashing with all other records (which may themselves form groups that share the name as well).

That's not a complaint against DORF per se. I haven't read the DORF proposal, so perhaps it already handles this issue. Rather, it's a general concern that I haven't seen discussed very much while skimming this thread.

-- 
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
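To make wren's scenario concrete: the following is not from the thread, but a sketch of how today's Haskell can already express "shared within a chosen group, clashing everywhere else" with an ordinary type class whose instances enumerate the group. All type and field names below are hypothetical.

```haskell
-- Hypothetical records: Person and Company are in the group that
-- shares a 'name' field; Planet is deliberately left outside it.
data Person  = Person  { personName  :: String }
data Company = Company { companyName :: String }
data Planet  = Planet  { planetLabel :: String }

-- The class itself is the "group": only its instances share the name.
class HasName r where
  name :: r -> String

instance HasName Person  where name = personName
instance HasName Company where name = companyName
-- No instance for Planet, so applying 'name' to a Planet is a type
-- error: the field is shared only within the specified group.
```

A second, unrelated group of records could define its own `name` in another module, with the usual qualified-import rules deciding which one is in scope, which is roughly the grouped-sharing behaviour wren describes.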
Re: Records in Haskell
> I'm not sure it's a good proposal, but it seems like the only way to
> handle this issue is to (1) introduce a new kind for
> semantically-oriented field names, and (2) make the Has class use that
> kind rather than a type-level string.

The second half of my message showed exactly how to handle the problem, using nothing more than existing Haskell features (and SORF for the record fields). The point is that the extra complexity of DORF is completely unnecessary.

Barney.
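As a rough illustration of the two ingredients mentioned in the quote, here is a sketch using a GHC extension set along those lines. The `Label` type, `Has` class, and `Rect` record are invented for the example and are not the actual SORF definitions:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies #-}
import Data.Proxy (Proxy (..))

-- (1) A dedicated kind of semantically-oriented field names,
--     obtained by promoting this data type with DataKinds.
data Label = Width | Height

-- (2) A Has class indexed by that kind rather than by a
--     type-level string.
class Has (l :: Label) r a | l r -> a where
  get :: Proxy l -> r -> a

data Rect = Rect Float Float

instance Has 'Width  Rect Float where get _ (Rect w _) = w
instance Has 'Height Rect Float where get _ (Rect _ h) = h
```

Because the labels live in a declared, importable kind rather than a single global string namespace, which records may share a label is controlled by which `Label` constructors a module exports.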
Re: Records in Haskell
> Please remember SPJ's request on the Records wiki to stick to the
> namespace issue. We're trying to make something better than H98's name
> clash. We are not trying to build some ideal polymorphic record system.

I must admit that this attitude really gets my hackles up. You are effectively saying that, so long as the one narrow problem you have come across is solved, it doesn't matter how bad the design is in other ways. This is the attitude that gave us the H98 records system with all its problems, and the opposite of the attitude which gave us type classes and all the valuable work that has flowed from them.

Haskell is supposed to be a theoretically sound, cleanly designed language, and if we lose sight of this we might as well use C++. Whatever new records system gets chosen for Haskell, we are almost certain to be stuck with it for a long time, so it is important to get it right.

Barney.
Re: Records in Haskell
Barney Hilken:
> Haskell is supposed to be a theoretically sound, cleanly designed
> language, and if we lose sight of this we might as well use C++.

Well, since I have nothing to say about new records, I don't say anything, but I have the impression that when we get to this level of discussion, it is the beginning of the end.

Veeery, very funny... Imagine an ecclesiastic General Council, and the Pope saying: "Brothers Bishops! Our new dogmas must be absolutely flawless, pure and sound, otherwise we might as well become Muslims." Inch'Allah, whatever.

Jerzy Karczmarczuk
Caen, France
Re: Records in Haskell
The DORF proposal is bringing to light some universal issues with records, so I am glad they are being hashed out. However, at this point it is premature optimization: we still don't have a proposal that solves the narrow issue of record name-spacing within Haskell. At this point SORF/DORF need a hero to figure out how to make them work with all of Haskell's current type capabilities.

The DORF proposal makes some steps forward, but also backwards: it only solves the narrow name-spacing issue within a module. If a record is imported into another module, it will still clash.

I stated this months ago, and I think it is even truer now: the sugar approach to records does not appear to actually be simplifying things, therefore we should consider adding a new first-class construct. I don't know much about the subject of first-class records, but so far I have come across a few styles of existing implementations in FP: structural typing, records as modules, and row types. I recently linked to Ur's extensible record implementation (which uses row types) from the wiki: http://adam.chlipala.net/papers/UrPLDI10/UrPLDI10.pdf

We are trying to stay focused on the narrow issue of solving name-spacing. I think we can stay narrow if we do implement first-class records but hold off for now on presenting any special capabilities to the programmer. At this point we are months into the records process without a clear way forward. I think we should be willing to take any workable implementation and just avoid exposing the implementation details for now.

If anyone can lend a hand at figuring out SORF updates, or determining whether type inference of records in the Ur paper can be made to work in Haskell, that would be very helpful!

Greg Weber

On Sun, Feb 26, 2012 at 7:01 AM, Jerzy Karczmarczuk jerzy.karczmarc...@unicaen.fr wrote:
> [quoted message trimmed; see above]
Using 'git bisect' on the GHC tree
Hi all,

I am trying to track down a build failure on PowerPC that was introduced some time this year. Usually, the easiest way to do this is by using 'git bisect'. Unfortunately, this doesn't work with the GHC tree, since it's easy to end up at a GHC revision which is incompatible with one of the many GHC sub-modules (e.g. Cabal).

Given a GHC git commit hash, is there a way to get the various libraries into a state where I can build the GHC version specified by the hash?

Regards,
Erik
-- 
Erik de Castro Lopo
http://www.mega-nerd.com/
Re: Using 'git bisect' on the GHC tree
On Mon, Feb 27, 2012 at 10:37:25AM +1100, Erik de Castro Lopo wrote:
> Given a GHC git commit hash, is there a way to get the various
> libraries into a state where I can build the GHC version specified by
> the hash?

No, but if you have a list of nightly builds, e.g.

    http://darcs.haskell.org/ghcBuilder/builders/pgj2/

then you can get the fingerprint for that night's build, e.g.

    http://darcs.haskell.org/ghcBuilder/builders/pgj2/607/4.html

and use that to reconstruct the tree using utils/fingerprint/fingerprint.py

Thanks
Ian
Re: Using 'git bisect' on the GHC tree
Erik de Castro Lopo wrote:
> Given a GHC git commit hash, is there a way to get the various
> libraries into a state where I can build the GHC version specified by
> the hash?

As suggested by this:

    http://hackage.haskell.org/trac/ghc/wiki/Building/GettingTheSources#Trackingthefullrepositorystate

and some help from Igloo on #ghc, I grabbed a build log from

    http://darcs.haskell.org/ghcBuilder/builders/tn23/534/4.html

and generated a fingerprint file from that. I then did

    ./utils/fingerprint/fingerprint.py restore -f tn23build534.fp

and tried to build it, but was still thwarted by the following:

    Configuring Cabal-1.13.3...
    ghc-cabal: At least the following dependencies are missing:
        base =4 3 =2 5, filepath =1 1.3
    make[1]: *** [libraries/Cabal/Cabal/dist-boot/package-data.mk] Error 1
    make: *** [all] Error 2

At this point I think I basically have to give up on using git bisect, which I have found so useful on other projects.

Erik
-- 
Erik de Castro Lopo
http://www.mega-nerd.com/
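For reference, the workflow described across these two messages comes down to a few steps. This is a sketch assembled from the thread, not an official recipe; the builder name, build number, and fingerprint file name are the ones Erik used:

```shell
# 1. Pick a nightly build log close to the commit of interest, e.g.
#    http://darcs.haskell.org/ghcBuilder/builders/tn23/534/4.html
#    and save its fingerprint section to a file (here tn23build534.fp).

# 2. In the GHC checkout, reset the sub-repositories to the commits
#    recorded in that fingerprint:
./utils/fingerprint/fingerprint.py restore -f tn23build534.fp

# 3. Build from the restored tree as usual; as Erik found, this can
#    still fail if the fingerprint predates a boot-library change.
```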
Re: Records in Haskell
Hi Greg,

(Apologies for the second mail, I didn't include the list.)

I think the DORF approach is quite principled in its namespacing. The labels are just normal functions which can be exported and imported between modules. I believe that is its main strength, so I think to say "it only solves the narrow name-spacing issue within a module" is not quite accurate.

Sure, if you have two unrelated modules, say Data.Foo and Data.Bar, each with records with fields x and y, they will clash. But this is a very common situation: e.g. how many functions called map are defined in various modules? If the modules are related, however, we can re-use the same label without problem (in the same way we can define a type class Functor for all the various map functions). I don't think it is so important that we have globally common labels; if anything, I would think that would be an engineering goal to avoid. (Imagine how many labels called x with different types might spring up.)

    -- First you create the labels:
    module A (width, height) where

    width  :: r { width :: Float }  => r -> Float
    height :: r { height :: Float } => r -> Float

    -- We can use them in one module:
    module B (Rectangle (..)) where
    import A (width, height)

    -- Potentially we don't need to give these fields types, since
    -- they're already defined by the labels:
    data Rectangle = Rectangle { width, height }

    module C (Box (..)) where
    import A (width, height)

    length :: r { length :: Float } => r -> Float

    data Box = Box { width, height, length }  -- use the same fields again

I've been following the discussion with interest.

Cheers,
Oliver

On Mon, Feb 27, 2012 at 5:47 AM, Greg Weber g...@gregweber.info wrote:
> [quoted message trimmed; see above]
Re: ghci 7.4.1 no longer loading .o files?
Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction, because that's convenient for ghci, but not compiling with it.

I guess in the case where an extension changes the meaning of existing code, it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let code compile that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint? Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.

After some fiddling, I think that -osuf should probably be omitted from the fingerprint. I use ghc -c -o x/y/Z.hs.o. Since I set the output directly, I don't use -osuf. But since ghci needs to be able to find the .o files, I need to pass it -osuf. The result is that I need to pass ghc -osuf when compiling to get ghci to load the .o files, even though it doesn't make any difference to ghc -c, which is a somewhat confusing requirement.

In fact, since -osuf as well as the -outputdir flags affect the location of the output files, I'd think they wouldn't need to be in the fingerprint either. They affect the location of the files, not their contents. If you found the files, it means you already figured out what you needed to figure out; it shouldn't matter *how* you found the files. And doesn't the same go for -i? Isn't it valid to start ghci from a different directory, where things should work as long as it's able to find the files to load?

Further updates: this has continued to cause problems for me, and now I'm wondering if the CPP flags such as -D shouldn't be omitted from the fingerprint too. Here's the rationale: I use CPP in a few places to enable or disable some expensive features. My build system knows which files depend on which defines, and hence which files to rebuild. However, ghci now has no way of loading all the .o files, since the ones that don't depend on the -D flag were probably not compiled with it, and those that do were. This also plays havoc with the 'hint' library, which is a wrapper around the GHC API. I can't get it to load any .o files, and it's hard to debug because it doesn't tell you why it's not loading them.

In addition, ghc --make used to figure out which files depended on the changed CPP flags and recompile only those. Now it unconditionally recompiles everything. I always assumed that was because GHC ran CPP on the files before the recompilation checker. If that's the case, do the CPP flags need to be included in the fingerprint at all? It seems like they're already taken into account by the time the fingerprints are calculated.

I reviewed http://hackage.haskell.org/trac/ghc/ticket/437 and I noticed there was some question about which flags should be included. Including the language flags, and -main-is since that was the original motivation (but only for the module it applies to, of course), makes sense, but I feel like the rest should be omitted.
Re: Records in Haskell
On Sun, Feb 26, 2012 at 2:00 AM, wren ng thornton w...@freegeek.org wrote:
> I haven't been following all the different proposals out there, but the
> ones I did see before tuning out all took the stance that for each
> given field either (1) this field name is unique and always clashes, or
> (2) this field name is shared and never clashes. This is problematic
> for a number of reasons. The particular reason I raised is that there
> are times when we would like for a field name to be shared, but only
> shared among a specified group of records and clashing with all other
> records (which may themselves form groups that share the name as well).

I had a proposal that, I think, wouldn't have that clash/no-clash distinction, because it doesn't have the notion of overloading a single symbol à la typeclasses. So I think it would sidestep that whole problem.

Anyway, I copied it up at http://hackage.haskell.org/trac/ghc/wiki/Records/SyntaxDirectedNameResolution if only so I can feel like I said my thing and can stop mentioning it :)