[ ghc-Bugs-679963 ] hp2ps -c appears broken in ghc-5.04.2
Bugs item #679963, was opened at 2003-02-04 02:14
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=108032&aid=679963&group_id=8032

Category: Profiling
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
Summary: hp2ps -c appears broken in ghc-5.04.2

Initial Comment:
It generates

    -4.80 -4.80 -4.80 setrgbcolor

in the PostScript file. ghc-5.04.1 didn't do that, so I'm using the old version.

Sengan

----------------------------------------------------------------------

Comment By: Simon Marlow (simonmar)
Date: 2003-02-04 11:22

Message:
Logged In: YES user_id=48280

There were some changes to hp2ps to avoid overflowing some 32-bit counters between 5.04.1 and 5.04.2, but that's all. Could you give an example that generates incorrect output? Or attach the .ps files generated by 5.04.1 and 5.04.2?

----------------------------------------------------------------------

Comment By: Nobody/Anonymous (nobody)
Date: 2003-02-04 02:19

Message:
Logged In: NO (senganb `at` ia `dot` nsc `dot` com)

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=108032&aid=679963&group_id=8032

___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
[ ghc-Bugs-670756 ] package cc and ld opts inconsistent
Bugs item #670756, was opened at 2003-01-19 18:59
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=108032&aid=670756&group_id=8032

Category: Driver
Group: 5.04.2
Status: Open
Resolution: None
Priority: 5
Submitted By: Axel Simon (as49)
Assigned to: Nobody/Anonymous (nobody)
Summary: package cc and ld opts inconsistent

Initial Comment:
I try to create a package which passes --subsystem windows to the linker. As far as I understand it should be ok to have

    extra_ld_opts = ["--subsystem windows"]

or, as ghc itself does it,

    extra_ld_opts = ["--subsystem", "windows"]

in the package description. However these flags are not given to the linker. I tried

    extra_cc_opts = ["-Wl,--subsystem", "-Wl,windows"]

but these flags are not passed to the C compiler either. I finally succeeded with

    extra_ld_opts = ["-Wl,--subsystem", "-Wl,windows"]

which I suspect fails when compiling with the native code generator. I am using the MSI of ghc 5.04.2.

Axel.

----------------------------------------------------------------------

Comment By: Simon Marlow (simonmar)
Date: 2003-02-04 12:16

Message:
Logged In: YES user_id=48280

extra_cc_opts gets passed to gcc when doing C compilations (either Haskell compilations via C, or compilation of plain C files). extra_ld_opts gets passed to gcc when doing linking. You can use -v to see what options are being passed to which commands. What exactly is wrong?

----------------------------------------------------------------------

Comment By: Axel Simon (as49)
Date: 2003-02-04 11:52

Message:
Logged In: YES user_id=489164

I thought the linker is the linker and not the C compiler. With this in mind I would have expected that

    extra_ld_opts = ["--subsystem", "windows"]

winds up as -Wl,--subsystem -Wl,windows on the gcc command line. Similarly, I expected

    extra_cc_opts = ["-Wl,--subsystem", "-Wl,windows"]

to end up on the command line of gcc. Where do these flags go to? Furthermore, how am I to specify linker options if compilation is not via C? I find the behaviour just very confusing.

Axel.

----------------------------------------------------------------------

Comment By: Simon Marlow (simonmar)
Date: 2003-02-04 11:26

Message:
Logged In: YES user_id=48280

The linker is in fact gcc, so when you add options to extra_ld_opts they are passed on the gcc command line. If you want to actually pass options to ld, then you need to use the -Wl, prefix as you mentioned. I don't understand your comments about extra_cc_opts - as far as I can tell these are always passed to C compilations. Could you give a specific example?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=108032&aid=670756&group_id=8032
RE: Core, implicit bindings are emitted in the wrong order
I've made this fix in the HEAD, thank you.

Simon

| -----Original Message-----
| From: Tobias Gedell [mailto:[EMAIL PROTECTED]]
| Sent: 25 January 2003 11:54
| To: glasgow-haskell-bugs
| Subject: Core, implicit bindings are emitted in the wrong order
|
| The implicit bindings are emitted in the wrong order when generating Core.
|
| It seems like the problem occurs because the implicit bindings do not
| only consist of wrappers and class functions but also conversion
| functions. These conversion functions sometimes use wrappers that have
| not yet been defined.
|
| An example of this is found at line 240 in Base.hcr (generated with the
| -O0 flag):
|
|   GHCziBase.zdgfromBool ::
|       GHCziBase.Bool ->
|       GHCziBase.ZCzpZC GHCziBase.Unit GHCziBase.Unit =
|     \ (g::GHCziBase.Bool) ->
|       %case g %of (g1::GHCziBase.Bool)
|       { GHCziBase.False ->
|           GHCziBase.Inl @ GHCziBase.Unit @ GHCziBase.Unit GHCziBase.Unit;
|         GHCziBase.True ->
|           GHCziBase.Inr @ GHCziBase.Unit @ GHCziBase.Unit GHCziBase.Unit };
|
| Here we try to use the wrappers GHCziBase.Inl and GHCziBase.Inr but they
| are defined first at line 346.
|
| Maybe these conversion functions only use GHCziBase.ZCzpZC and then a
| possible fix would be to make sure that GHCziBase.Inl and GHCziBase.Inr
| are defined before all other implicit bindings.
|
| Another possible fix that I have implemented is to make sure that all
| wrappers are defined before all other implicit bindings.
|
| I have replaced lines 53-70 in ghc/compiler/coreSyn/MkExternalCore.lhs with:
|
|   mkExternalCore :: ModGuts -> C.Module
|   mkExternalCore (ModGuts {mg_module=this_mod, mg_types = type_env,
|                            mg_binds = binds})
|     = C.Module mname tdefs vdefs
|     where
|       mname = make_mid this_mod
|       tdefs = foldr collect_tdefs [] tycons
|       -- Don't forget to include the implicit bindings!
|       vdefs = map make_vdef (implicit_wbinds ++ implicit_binds ++ binds)
|       tycons = map classTyCon (typeEnvClasses type_env)
|                ++ typeEnvTyCons type_env
|
|       tything = typeEnvElts type_env
|       implicit_wbinds = map get_defn $ concatMap implicit_wids tything
|       implicit_binds  = map get_defn $ concatMap implicit_ids tything
|
|       -- Get only the wrappers
|       implicit_wids :: TyThing -> [Id]
|       -- Cf. HscTypes.mkImplicitBinds, but we do not include constructor workers
|       implicit_wids (ATyCon tc) =
|           map dataConWrapId (tyConDataCons_maybe tc `orElse` [])
|       implicit_wids other = []
|
|       -- Get all other bindings
|       implicit_ids :: TyThing -> [Id]
|       implicit_ids (ATyCon tc) = tyConSelIds tc ++ tyConGenIds tc
|       implicit_ids (AClass cl) = classSelIds cl
|       implicit_ids other = []
|
| Sincerely,
| Tobias
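Tobias's second fix amounts to a stable partition of the implicit bindings: emit every wrapper before everything else, so nothing can refer to a wrapper before its definition. A toy sketch of just that ordering step (the Binding type here is hypothetical; the real code works over GHC's Id and TyThing types):

```haskell
-- Toy model of the fix: wrappers are emitted before all other implicit
-- bindings, mirroring (implicit_wbinds ++ implicit_binds) above.
data Binding = Wrapper String | Other String deriving (Eq, Show)

isWrapper :: Binding -> Bool
isWrapper (Wrapper _) = True
isWrapper _           = False

-- A stable partition: relative order is kept within each group,
-- but wrappers come first.
orderBindings :: [Binding] -> [Binding]
orderBindings bs = filter isWrapper bs ++ filter (not . isWrapper) bs

main :: IO ()
main = print (orderBindings [Other "zdgfromBool", Wrapper "Inl", Wrapper "Inr"])
```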
Re: Yet another External Core bug
On Fri, Jan 24, 2003 at 01:31:46PM -0000, Simon Peyton-Jones wrote:
> In short, it's really a bug, but not a particularly easy one to fix.
> Rumination required. But it probably only happens on a few programs,
> right?

It happens on at least two of the nofib benchmarks, so it seems to be pretty significant. I didn't understand your explanation of the problem, so unfortunately I can't suggest any solution.

Did the other Core bug I reported at the same time
http://www.haskell.org/pipermail/glasgow-haskell-bugs/2003-January/002877.html
ever get fixed?

Thanks,
Kirsten

--
Kirsten Chevalier * [EMAIL PROTECTED] * Often in error, never in doubt
badness with -fmax-simplifier-iterations
A few things. First of all, if you're stupid and say:

    ghc ... -fmax-simplifier-iterations=5

then ghc crashes with:

    ghc-5.05: panic! (the `impossible' happened, GHC version 5.05):
            Prelude.read: no parse

    Please report it as a compiler bug to [EMAIL PROTECTED],
    or http://sourceforge.net/projects/ghc/.

(The same thing happens in 5.04.2.) I think in DriverFlags.hs, the command for max-simpl... should be changed from:

    Prefix (writeIORef v_MaxSimplifierIterations . read)

to:

    PrefixPred (all isDigit) (writeIORef v_MaxSimplifierIterations . read)

Now, more importantly, it seems to obey this command unless you give it, for instance, 0 as an argument:

    ghc ... -v5 -fmax-simplifier-iterations0 | grep 'Simplifier'

yields:

    Simplifier phase 0, iteration 1 out of 0
    Simplifier phase 0, iteration 1 out of 0
    Simplifier phase 0, iteration 2 out of 0
    Simplifier phase 0, iteration 2 out of 0

which seems pretty bad to me :)

- Hal

--
Hal Daume III                                  | [EMAIL PROTECTED]
 "Computer science is no more about computers  | www.isi.edu/~hdaume
  than astronomy is about telescopes." -Dijkstra
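The difference between the two flag parsers can be shown in miniature. The OptKind type below is a hypothetical two-constructor reduction of GHC's DriverFlags machinery, purely to illustrate why validating the suffix with `all isDigit` before calling `read` avoids the `Prelude.read: no parse` panic on an argument like "=5":

```haskell
import Data.Char (isDigit)
import Data.IORef

-- Hypothetical miniature of GHC's flag table: a Prefix handler applies
-- the action to whatever follows the flag name, while PrefixPred first
-- checks the suffix with a predicate and rejects the flag otherwise.
data OptKind = Prefix (String -> IO ())
             | PrefixPred (String -> Bool) (String -> IO ())

-- Returns True if the flag was accepted and its action was run.
applyOpt :: OptKind -> String -> IO Bool
applyOpt (Prefix act) rest = act rest >> return True
applyOpt (PrefixPred p act) rest
  | p rest    = act rest >> return True
  | otherwise = return False   -- rejected cleanly, no crash inside 'read'

main :: IO ()
main = do
  v <- newIORef (4 :: Int)
  let safe = PrefixPred (all isDigit) (writeIORef v . read)
  ok1 <- applyOpt safe "5"    -- accepted: all digits
  ok2 <- applyOpt safe "=5"   -- rejected: '=' is not a digit
  n   <- readIORef v
  print (ok1, ok2, n)         -- prints (True,False,5)
```

With the plain `Prefix` variant, the "=5" case would reach `read "=5"` and die exactly as in the reported panic.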
RE: Modifying GHC to accept external Styx code
Aha! So what you really want is a code generator!

I suggest you take a look at C--: www.cminusminus.org (check out the papers especially). It's designed for exactly what Stix is designed for, only much, much better. There's a prototype implementation from Fermin Reig, and Norman Ramsey is working hard on another. (I'm ccing him.)

Hmm. Maybe someone can write a C-- front end for GHC's code generator!

Simon

| -----Original Message-----
| From: Mark Alexander Wotton [mailto:[EMAIL PROTECTED]]
| Sent: 03 February 2003 22:19
| To: Simon Peyton-Jones
| Cc: [EMAIL PROTECTED]
| Subject: RE: Modifying GHC to accept external Styx code
|
| On Mon, 3 Feb 2003, Simon Peyton-Jones wrote:
|
| > > I want to use GHC as a backend for my experimental compiler.
| > > I'm trying to modify it to accept styx code from my compiler, and I'm
| > > having some trouble working out where I need to make changes. In
| > > truth, GHC is more haskell code than I've ever seen before in one
| > > place, and I'm a bit lost: can anyone suggest which modules I should
| > > become familiar with and which I should leave alone?
| >
| > You don't say what 'styx code' is. My advice: either translate styx to
| > Haskell, or to External Core. (Probably the latter.) GHC can read both
| > of these in without modification.
| >
| > External Core is a typed lambda calculus.
|
| Yes, I know External Core: I'm using it as my input language. (GHC is my
| frontend as well.) The optimisation algorithms I want to implement need
| a lower-level language than Core, which is why I want to use Styx, the
| native code generation intermediate language. Abstract C is also an
| option, but if I've got to hack GHC to get either of them going, I'd
| rather do it for Styx.
|
| Mark
|
| --
| "no-one takes up music to play their own instrument"
|     -- Randy Hiatt, Chalkhills list

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
RE: ghc feature request: core notes
I have to say that I'm not very keen. There is an annotation facility in Core, but it's easy for the notes to be discarded. So if they are conveying important info, it might easily get lost... and if not, what's it doing there in the first place? What do you expect to happen to these annotations?

So just now I'm a bit sceptical. Core is a data type that I'm trying hard to keep simple.

Simon

| -----Original Message-----
| From: Hal Daume III [mailto:[EMAIL PROTECTED]]
| Sent: 03 February 2003 23:24
| To: GHC Users Mailing List
| Subject: ghc feature request: core notes
|
| I'm not sure how generally useful this would be, but I would find it
| useful to be able to attach notes to core expressions from the Haskell
| code. The idea being something along the lines of a code annotation like
| a pragma. For instance, in my Haskell program I would have something
| like:
|
|   f x y =
|     case {-# CORE "my first note" #-} g x of
|       ...
|
| then, the core would come out with, instead of:
|
|   case g x of ...
|
| we would have:
|
|   case {note "my first note"} g x of ...
|
| The reason I would find this useful is somewhat obscure, but the basic
| idea is that I need to be able to both preprocess code and then change
| core based on how it was preprocessed. I'd like to send annotations like
| these out of the preprocessor so they can then be picked up later by my
| core transformer.
|
| If this sounds like a good enough idea to go in, but no one has time to
| implement it, I could do it myself (probably), but I thought I'd ask the
| experts first (or if there's anything like this in there currently)...
|
| - Hal
Re: GHC and C++ (was RE: Creating COM objects and passing out pointers to them via a COM interface)
I'm replying to two threads at the same time and cross posting my reply, because they are very relevant to each other. I'm sorry if anyone here ends up seeing this more than once as a consequence.

[s]

--

> Sarah
> Did you get this problem sorted out?

Not directly. I ended up building an 'outer' COM component in C++ that mapped an MSXML-like interface onto a simpler, internal COM component that wraps up the Haskell code. This approach seems to work well - I might have been able to build the necessary wrapper directly in Haskell using HDirect, but I was becoming aware that I was probably starting to take up an unreasonable amount of Sigbjorn's time with increasingly detailed questions and I didn't have time to figure out the necessary intricacies from scratch.

HDirect seems very good, but I don't think it's currently in quite as stable a state as the main GHC release. The problems I had in building a version that works with the current GHC are an example of this. I'm not complaining - it worked well in the end, but I do think it might be a 'good thing' to consider rolling HDirect into the main GHC release at some point.

The other thing that bit me was exception handling, or rather the default behaviour when an uncaught exception 'falls out' and reaches the top level of the runtime system. Currently, the functionality appears to be to abort the current process. Whilst this is probably desirable for a typical application executable (be that console or GUI based), it is a problem if it happens inside a COM component. My thinking is quite straightforward on this. Preferred behaviour should really be for a call to a COM component to fail returning an error code, rather than abort the calling process. 'Server-style' applications are usually expected to keep running when minor faults occur, rather than terminate.

HDirect/GHC's current behaviour makes this difficult to achieve - I managed to successfully catch exceptions, but the price was needing to use some code based on DeepSeq to force deep strict evaluation for all state updates. This (seemingly) was the only way I could stop exceptions from happening in the context of the automatically generated COM wrapper - passing it a fully evaluated data structure was apparently the only solution available.

However, this does break the advantages of laziness as applied to the current state of the object - it would be pretty neat, for example, to be able to define a view of a database as the result of an expression involving relational algebra combinators, with the query evaluated one record at a time as the data is pulled back by the client application. The need for DeepSeq forces strict evaluation, so this benefit is lost.

This is actually more of an issue than it might immediately seem - many queries don't actually need to retrieve all of a record set before useful things can be done. A particular example is updating a GUI, where a list box reflects the results of a query. Lazy evaluation will allow the screen to start updating more or less immediately even on a query that returns thousands of records - strict evaluation might incur a delay of a few seconds (for a very complex query or a large database) before updates can start happening.

In an ideal world, what I'd dearly love to see would be an HDirect option that allows a choice between existing functionality and having calls return a failure code, or better still allowing a user-supplied handler function to determine the correct course of action.

So, to sum up my wish list:

1. A facility to replace the default uncaught exception behaviour for COM objects and similar shared library code.
2. HDirect rolled into the main GHC tree.
3. A facility within HDirect to create COM objects based on existing (Haskell) data structures, returning an interface pointer as a return parameter on a call to an existing COM object. (This was the question Simon asked if I'd had an answer to - I've attached a copy below for reference.)

Sarah

PS: I've not yet looked closely at the .NET integration in GHC, but I can imagine myself having a very similar wish list in that case too.

Hi again,

A slightly harder problem this time. I want to implement something that looks something like the Microsoft XML parser. This is a very COM heavy contraption that provides an interface that looks to client applications like a tree of COM objects that reflects the internal structure of an XML document. The idea is, you start with the document object, make a method call requesting the root node, and have a COM object that wraps that node returned to you. You can then request the children of that node, and their children and siblings, all which involves being passed back interface pointers to COM objects. Effectively, there is one COM object per syntactic item in the XML document.

I already have my document object working nicely. This is where the actual database resides. I want to be able to spawn views of this database as COM
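Sarah's DeepSeq workaround - forcing full evaluation on the Haskell side so that any exception surfaces where it can be turned into an error code, rather than escaping through the foreign wrapper - can be sketched with today's deepseq package. Control.DeepSeq here stands in for the DeepSeq module she mentions, and the COM plumbing is omitted; this is only an illustration of the evaluation-forcing idea, not her actual code:

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (SomeException, evaluate, try)

-- Fully evaluate a result before handing it across a foreign boundary,
-- converting any exception hiding in a thunk into an error value that a
-- wrapper could map to a COM failure HRESULT.
safeResult :: NFData a => a -> IO (Either String a)
safeResult x = do
  r <- try (evaluate (force x))
  case r of
    Left e  -> return (Left (show (e :: SomeException)))
    Right v -> return (Right v)

main :: IO ()
main = do
  ok  <- safeResult [1, 2, 3 :: Int]          -- evaluates cleanly
  bad <- safeResult [1 :: Int, error "boom"]  -- exception caught here
  print ok
  putStrLn (either (const "caught") (const "no exception") bad)
```

As the message points out, the cost of this pattern is exactly the loss of laziness: `force` walks the whole structure up front, so a lazily produced record set can no longer be streamed to the client one record at a time.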
RE: ghc feature request: core notes
Simon,

I wasn't suggesting extending the Core datatype. Part of the Core Exp datatype is these notes. From ExternalCore.hs:

    data Exp
      ...
      | Note String Exp
      ...

My suggestion is simply to allow pragmas to fill in these Note values. I've actually implemented it in my copy of GHC and it's a very very small addition. It requires essentially modifying slightly the parser and lexer to accept '{-# CORE' as a pragma and then to pass this value down through all the passes (typechecking, renaming, etc.). I think it's about 20 additional lines of code in all. It doesn't seem that these get thrown away anywhere in the AbsSyn -> Core piping process.

I'm not sure the extent to which this de-simplifies Core. Currently these Notes are only used to store SCC information for profiling and INLINE information.

- Hal

On Tue, 4 Feb 2003, Simon Peyton-Jones wrote:

> I have to say that I'm not very keen. There is an annotation facility
> in Core, but it's easy for the notes to be discarded. So if they are
> conveying important info, it might easily get lost... and if not,
> what's it doing there in the first place? What do you expect to happen
> to these annotations?
>
> So just now I'm a bit sceptical. Core is a data type that I'm trying
> hard to keep simple.
>
> Simon
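For readers unfamiliar with the Note constructor under discussion, a stripped-down model of the External Core Exp type shows how a downstream pass could pick the annotations back up. The three constructors here are a toy subset (the real type has lambdas, cases, literals, and more); only Note matches the ExternalCore.hs excerpt above:

```haskell
-- Toy fragment of the External Core expression type, keeping just
-- enough structure to carry and recover Note annotations.
data Exp = Var String
         | App Exp Exp
         | Note String Exp   -- the annotation a {-# CORE "..." #-} pragma would fill in
  deriving Show

-- A transformer-side helper: collect every note attached anywhere
-- in an expression tree.
notes :: Exp -> [String]
notes (Var _)    = []
notes (App f x)  = notes f ++ notes x
notes (Note s e) = s : notes e

main :: IO ()
main = print (notes (App (Note "my first note" (Var "g")) (Var "x")))
```

A core transformer like the one Hal describes would pattern-match on `Note` in exactly this way to find the expressions the preprocessor marked.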
RE: Modifying GHC to accept external Styx code
On Tue, 4 Feb 2003, Simon Peyton-Jones wrote:

> Aha! So what you really want is a code generator!
>
> I suggest you take a look at C--. www.cminusminus.org (Check out the
> papers especially.) It's designed for exactly what Stix is designed
> for, only much, much better. There's a prototype implementation from
> Fermin Reig, and Norman Ramsey is working hard on another. (I'm ccing
> him.)
>
> Hmm. Maybe someone can write a C-- front end for GHC's code generator!

I've looked at C--. It's certainly very interesting, but one of the benefits of using ghc for the frontend and the backend is that I don't need to think about the primitive operations much; I can just pass them through. A C-- front end for GHC would be almost perfect, if one existed.

Mark
RE: [OT] Teaching Haskell in High School
Don't forget Helium (recently announced):
http://www.cs.uu.nl/~afie/helium/index.html

Also Manuel Chakravarty teaches Haskell to hordes.

Simon

| -----Original Message-----
| From: Hal Daume III [mailto:[EMAIL PROTECTED]]
| Sent: 04 February 2003 00:02
| To: Haskell Mailing List
| Subject: [OT] Teaching Haskell in High School
|
| Hi all,
|
| Before getting in to this, let me preface my question(s) with a note
| that I have checked through the Haskell in Education web page and have
| found various links off there of interest (and I've googled, etc. In
| short: I've done my homework).
|
| That said, I've been in rather close correspondence with my
| math/computer science teacher from high school. When I first took CS
| there, they taught Pascal (a year earlier they had been teaching
| Scheme). They switched over to VB (alas) recently and have been
| teaching that for a few years now.
|
| The teacher really wants to get away from VB, but is having a somewhat
| difficult time deciding what to go to. The two most promising options
| are Haskell and Java.
|
| Aside from hype, etc., the primary advantage to Java is that the
| Advanced Placement (AP) tests are in Java. For those of you unfamiliar
| with these, high school students can take AP tests and then (typically)
| skip out of first semester college courses. They're essentially
| proficiency exams.
|
| The way the computer science curriculum is set up at my old school is
| essentially as either (a) an elective or (b) a replacement for senior
| year math. The students in the course are usually about 2/3 juniors (16
| year olds) taking it as an elective and 1/3 seniors who want to get out
| of senior year math :). Either way, they've both taken differential
| calculus, algebra, etc. Note, however, that high school math in the
| states is very rudimentary when it comes to things like induction and
| proofs and things of this sort.
|
| Due to the fact that CS is essentially an alternative math course, I
| think it would be interesting to teach Haskell. It would enable the
| instruction of things the students wouldn't have come across in their
| ordinary math studies, etc.
|
| However, I'm also well aware that Haskell is very difficult to learn
| (and, I'd imagine, to teach). Given that this would in large part be a
| first language for them and that they won't have a college-level math
| background, do you think it would be too much to attempt to teach
| Haskell at this level, and that they should stick with Java?
|
| I'm really interested in any comments/experience/etc. that people have
| that might assist the teacher (and, to some extent, me) in making this
| decision.
|
| Thanks in advance!
|
| - Hal

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell
Re: [OT] Teaching Haskell in High School
John Peterson wrote:

> The downside of Haskell is that none of the regular implementations
> (ghc, hugs) are really right for this level of student. Type inference
> is an especially nasty problem. There are also a number of gotchas
> lurking in the language that cause problems.

For exactly these reasons we have implemented Helium; not for replacing Haskell (we're very happy with Haskell), but for *learning* Haskell. There is no overloading, so types and type errors are easier to understand. The Helium compiler produces warnings for situations that are probably incorrect.

I've found myself looking at a program with a comparable problem as the one below for fifteen minutes until I saw what was happening (please look at the program in a non-proportional font like Courier for optimal confusion):

    filterr :: (a -> Bool) -> [a] -> [a]
    filterr p [] = []
    fi1terr p (x:xs) = if p x then x : filterr p xs else filterr p xs

Hugs, which we used for teaching, doesn't say a word. GHC does give helpful warnings if you pass the -Wall flag. Helium says:

    (2,9): Variable p is not used
    (3,1): Missing type signature: fi1terr :: (a -> Bool) -> [a] -> [a]
    (3,1): Suspicious adjacent functions fi1terr and filterr

I'm curious what the other gotchas are that John refers to because it might give us inspiration for more warnings/language design decisions.

Arjan

http://www.cs.uu.nl/~afie/helium/
Re: [OT] Teaching Haskell in High School (fwd)
On Mon, 3 Feb 2003, Rex Page wrote:

> This matches my experience, too. When I've taught Haskell to first year
> college students, there have always been some hard core hackers who've
> been at it in C or VB or Perl or something like that for years, and
> they rarely take kindly to Haskell. The ones without any programming
> background do better.
>
> I think Haskell would be great for a high school math class. They could
> learn some logic and induction along with it, and get a few proofs back
> into the high school math curriculum.
>
> Rex Page

Actually, this doesn't match my experience. I teach first years too, and I always have some hard core hackers. In my experience, they have the advantage that they already understand notions such as a formal syntax and an algorithm, as opposed to expecting that the computer will somehow know what they mean. They also have something concrete to compare Haskell against ... which often leads them to become real Haskell enthusiasts!

But then again, my course emphasises real programming and real-world problem solving, at the expense of logic and induction.

John Hughes
Dispatch on what? (Was: seeking ideas for short lecture on type classes)
This is a somewhat older thread, but I ask you to enlighten me.

Norman Ramsey wrote:

> A fact that I know but don't understand the implication of is that
> Haskell dispatches on the static type of a value, whereas OO languages
> dispatch on the dynamic type of a value. But I suspect I'll leave that
> out :-)

Dean Herington:

> Perhaps I misunderstand, but I would suggest that fact is, if not
> incorrect, at least oversimplified. I would say Haskell dispatches on
> the dynamic type of a value, in the sense that a single polymorphic
> function varies its behavior based on the specific type(s) of its
> argument(s). What may distinguish Haskell from typical OO languages
> (I'm not an expert on them) is that in Haskell such polymorphic
> functions could (always or at least nearly so) be specialized
> statically for their uses at different types.

Fergus Henderson wrote:

> I agree. The above characterization is highly misleading. It would be
> more accurate and informative to say that both Haskell and OO languages
> dispatch on the dynamic type of a value.

Now my brain ceased to understand... Are you sure that OO dispatch schemas are based on the *argument* type?

I would say that - unless I am dead wrong - OO languages such as Smalltalk do not dispatch on dynamic types of a value. The receiver is known, so its virtual function table (belonging to the receiver's class) is known as well; the dispatching is based on the *message identifiers*, independently of subsidiary arguments. Only after that - perhaps - some reversions, message propagation depending on the arg(s) value(s), etc. may take place, but all this is irrelevant...

Forgive me if I write stupidities.

Jerzy Karczmarczuk
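For reference, the Haskell side of this debate looks like the following: the instance used for a class method is selected from the type at the call site, which the compiler knows statically (the Describe class is purely illustrative, not from the thread):

```haskell
-- A class method dispatches on the type of its argument, and that type
-- is resolved at compile time: each call site below is rewritten to use
-- a specific instance's dictionary, with no run-time inspection of the
-- value itself.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = "Bool: " ++ show b

instance Describe Int where
  describe n = "Int: " ++ show n

main :: IO ()
main = do
  putStrLn (describe True)       -- Bool instance chosen statically
  putStrLn (describe (3 :: Int)) -- Int instance chosen statically
```

The nuance Dean and Fergus raise is that a polymorphic caller may receive the dictionary at run time, so behaviour can still vary per call; but the selection is driven by types, not by tagging individual values as in Smalltalk-style message sends.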
Re: [OT] Teaching Haskell in High School
For exactly these reasons we have implemented Helium; not for replacing Haskell (we're very happy with Haskell), but for *learning* Haskell. There is no overloading, so types and type errors are easier to understand. The Helium compiler produces warnings for situations that are probably incorrect. I'm curious what the other gotcha's are that John refers to because it might give us inspiration for more warnings/language design decisions. I've been negligent in not playing with Helium yet - I'm sure that this sort of system is going to be a significant improvement for this sort of audience. I'm also hoping that Helium can be adapted into other problem domains which use typed functional languages. As far as language gotchas, the issue is much more in the way the programming environment responds to a problem than in the language design. If a system can generate a sensible response / suggestion when a student makes an error then I don't think there's any particular problem. Another issue is in how the material is presented - I think this makes a big difference in what errors a student is likely to encounter. When you teach in a slow, constructive manner I think you're less likely to encounter problems than in my case, where I was teaching from a mathematics perspective and asking students to transcribe their mathematical ideas directly into Haskell and hope that they work. Notationally, there were three problems that hit students hard and generated error messages that had nothing to do with their mistake. The numeric syntax requires digits on both sides of a decimal. Mistakes like a = .2 were extremely common. Unary minus was also a big problem - I solved this by requiring all negative number to be wrapped in parens. The varid / conid stuff also was a problem - it was hard to explain why a = 1 is OK and A = 1 isn't. This definitely was a problem with going straight into programming from math. There were a lot of problems with type errors. 
The numeric type classes pop up unexpectedly in error messages, rendering them cryptic. Monomorphism also caused some problems, but as I recall it was the old Hugs that didn't handle monomorphism quite right anyway. Debugging was a serious problem - we basically gave up on that. Since the programs were fairly short I could usually spot bugs fairly easily, but I couldn't get students to do this on their own. Sorry if this isn't very precise - it's been a while. John ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
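The three notational gotchas John describes can be illustrated in a few lines of Haskell (a hypothetical student fragment - names invented - corrected to compile):

```haskell
-- Numeric literals need digits on both sides of the decimal point:
-- a = .2   is a parse error, so write the leading zero.
a :: Double
a = 0.2

-- Unary minus is safest wrapped in parentheses:
f :: Int -> Int
f x = x - 1

b :: Int
b = f (-3)   -- without the parens, "f - 3" would parse as subtraction

-- Bindings must start with a lower-case letter (a varid); "A = 1" is
-- rejected because capitalised names (conids) are reserved for
-- constructors.
c :: Int
c = 1
```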
Re: Dispatch on what? (Was: seeking ideas for short lecture on type classes)
On Tuesday, February 4, 2003, at 03:46 PM, Jerzy Karczmarczuk wrote: I would say that - unless I am dead wrong, the OO languages such as Smalltalk do not dispatch on dynamic types of a value. The receiver is known, so its vir. f. table (belonging to the receiver's class) is known as well, the dispatching is based on the *message identifiers* independently of subsidiary arguments. Yes, normally only the receiver and message identifier matter. In languages like Java, where the type of the other arguments is considered, this is handled statically. For languages like Smalltalk, Objective-C, etc, all that you know at compile time is that the receiver is an object. You don't know what kind of object. Thus you cannot use a static dispatch style or vtables as you would in a language like C++. Even when the type is bounded, vtables are not enough because of categories and method packages that add methods to classes at run time. Dispatch is done on the target (receiver) and selector (message identifier). So if you consider an expression like:

  target_expr doSomething: arg_expr

then you should break that up into:

  v = target_expr
  f = lookup_method(v, doSomething:)
  f(v, doSomething:, arg_expr)

You dispatch on the dynamic type of the target (its class) and on the dynamic state of its method tables. Note that you cannot assume that the receiver has a method that matches the selector! Dynamic method lookup is a fairly complicated and expensive process, and it's a barrier to many optimizations. This can be mitigated by program analysis and specialization or similar techniques; dynamic dispatch and the surrounding problems are getting fairly well researched. Languages with dynamic dispatch:
* Brad J. Cox, Object oriented programming: an evolutionary approach, Addison-Wesley Longman Publishing Co., Inc., 1986
* Apple Computer, Inc., The Objective-C Programming Language, 2002. http://developer.apple.com/techpubs/macosx/Cocoa/ObjectiveC/index.html
* Pieter J.
Schoenmaker, Supporting the Evolution of Software, Ph.D. thesis, Eindhoven University of Technology, July 1, 1999.

Some research:
* Zoran Budimlic, Ken Kennedy, and Jeff Piper, The cost of being object-oriented: A preliminary study, Scientific Computing 7(2), 1999
* David Detlefs and Ole Agesen, Inlining of Virtual Methods, Proc. of 13th ECOOP
* A. Diwan, Understanding and improving the performance of modern programming languages, Ph.D. thesis, University of Massachusetts at Amherst, 1997
* Peeter Laud, Analysis for Object Inlining in Java; http://citeseer.nj.nec.com/laud01analysis.html
* Ole Agesen, Jens Palsberg and Michael I. Schwartzbach, Type Inference of SELF: Analysis of Objects with Dynamic and Multiple Inheritance, Lecture Notes in Computer Science, http://citeseer.nj.nec.com/agesen93type.html

Only after that - perhaps - some reversions, message propagation depending on the arg(s) value(s), etc. may take place, but all this is irrelevant... Well, self and the selector are usually arguments to the method, but otherwise I agree. Regards, John Hornkvist
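The lookup_method pseudo-code above can be modelled as a toy in Haskell (all names invented; this is a sketch of selector-based dispatch, not how any real Smalltalk or Objective-C runtime is implemented):

```haskell
import qualified Data.Map as Map

-- A toy model: each object carries its own method table keyed by
-- selector, so lookup happens at call time against the receiver's
-- dynamic state, not against a statically known class.
data Object = Object
  { objState   :: Int
  , objMethods :: Map.Map String (Object -> Int -> Int)
  }

-- send: dynamic lookup. The receiver may not understand the selector,
-- which is why the result is a Maybe (the analogue of a
-- doesNotUnderstand failure).
send :: Object -> String -> Int -> Maybe Int
send obj sel arg = do
  f <- Map.lookup sel (objMethods obj)
  return (f obj arg)

counter :: Object
counter = Object 10 (Map.fromList [("add:", \o n -> objState o + n)])
```

For example, send counter "add:" 5 yields Just 15, while an unknown selector yields Nothing.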
RE: [OT] Teaching Haskell in High School (fwd)
On Tue, 4 Feb 2003, David Bergman wrote: Rex wrote: This matches my experience, too. When I've taught Haskell to first year college students, there have always been some hard core hackers who've been at it in C or VB or Perl or something like that for years, and they rarely take kindly to Haskell. The ones without any programming background do better. I think Haskell would be great for a high school math class. They could learn some logic and induction along with it, and get a few proofs back into the high school math curriculum. Rex Page I have always had that same experience with any (more or less) declarative language. BUT, as soon as the hackers (well, maybe not VB programmers, they are kind of doomed...) have passed the initial frustration, and acquired the new way of thinking, they actually perform much better than the real beginners. /David Yes, I've seen the same thing with some of the people who come in with experience. Some are lost, but a few of them really embrace the expressiveness of a language like Haskell. Rex -- Forwarded message -- Date: Tue, 04 Feb 2003 03:03:03 +0100 From: Wolfgang Jeltsch [EMAIL PROTECTED] To: The Haskell Mailing List [EMAIL PROTECTED] Subject: Re: [OT] Teaching Haskell in High School On Tuesday, 2003-02-04, 01:01, CET, Hal Daume wrote: [...] However, I'm also well aware that Haskell is very difficult to learn (and, I'd imagine, to teach). Hi, I wouldn't claim that Haskell is very difficult to learn. I think, people often have problems with learning Haskell because they know imperative programming and try to apply their imperative thinking to programming in Haskell. Some months ago, a first year student told me that she liked Haskell very much and that she didn't find it very difficult. I asked her if she had had experiences with other programming languages before learning Haskell. She answered: No. [...] 
Wolfgang
Looking for large Haskell programs
Hi, I'm looking for large Haskell programs with more than 15000 lines of code. Do any of you know where I can find such programs? The programs found in the nofib suite are not large enough. //Tobias
Re: Looking for large Haskell programs
GHC is such a program, as are the other Haskell compilers. Perhaps too complicated for your purposes, though. I can give you a few ~5000-line programs if you want. I don't quite have anything as large as 15000 lines, though.

 -- 
 Hal Daume III                                    | [EMAIL PROTECTED]
 "Computer science is no more about computers     | www.isi.edu/~hdaume
  than astronomy is about telescopes." - Dijkstra

On Tue, 4 Feb 2003, Tobias Gedell wrote: Hi, I'm looking for large haskell programs with more than 15000 lines of code. Does any of you know where I can find such programs? The programs found in the nofib suite are not large enough. //Tobias
Re: Looking for large Haskell programs
ginsu (my gale chat client, implemented in haskell) is about ~8000 lines of really convoluted haskell which abuses every dirty trick in the book at some point or another. (designed pragmatically, no elegance here.) http://repetae.net/john/computer/ginsu/ John On Tue, Feb 04, 2003 at 08:29:15PM +0100, Tobias Gedell wrote: I'm looking for large haskell programs with more than 15000 lines of code. Does any of you know where I can find such programs? The programs found in the nofib suite are not large enough. -- --- John Meacham - California Institute of Technology, Alum. - [EMAIL PROTECTED] ---
Re: Looking for large Haskell programs
GHC is such a program, as are the other Haskell compilers. Perhaps too complicated for your purposes, though. GHC has too many mutually recursive modules to be useful, otherwise it would be great! But I will look more into the other compilers - are they written in Haskell? Thanks for the suggestion! I can give you a few ~5000-line programs if you want. I don't quite have anything as large as 15000 lines, though. I have quite a few 5-10k programs, but thanks anyway! I really need programs with more than 15000 lines of code. //Tobias
Re: GHC and C++ (was RE: Creating COM objects and passing out pointers to them via a COM interface)
I'm replying to two threads at the same time and cross posting my reply, because they are very relevant to each other. I'm sorry if anyone here ends up seeing this more than once as a consequence. [s] -- Sarah Did you get this problem sorted out? Not directly. I ended up building an 'outer' COM component in C++ that mapped an MSXML-like interface onto a simpler, internal COM component that wraps up the Haskell code. This approach seems to work well - I might have been able to build the necessary wrapper directly in Haskell using HDirect, but I was becoming aware that I was probably starting to take up an unreasonable amount of Sigbjorn's time with increasingly detailed questions and I didn't have time to figure out the necessary intricacies from scratch. HDirect seems very good, but I don't think it's currently in quite as stable a state as the main GHC release. The problems I had in building a version that works with the current GHC are an example of this. I'm not complaining - it worked well in the end, but I do think it might be a 'good thing' to consider rolling HDirect into the main GHC release at some point. The other thing that bit me was exception handling, or rather the default behaviour when an uncaught exception 'falls out' and reaches the top level of the runtime system. Currently, the functionality appears to be to abort the current process. Whilst this is probably desirable for a typical application executable (be that console or GUI based), it is a problem if it happens inside a COM component. My thinking is quite straightforward on this. Preferred behaviour should really be for a call to a COM component to fail returning an error code, rather than abort the calling process. 'Server-style' applications are usually expected to keep running when minor faults occur, rather than terminate. 
HDirect/GHC's current behaviour makes this difficult to achieve - I managed to successfully catch exceptions, but the price was needing to use some code based on DeepSeq to force deep strict evaluation for all state updates. This (seemingly) was the only way I could stop exceptions from happening in the context of the automatically generated COM wrapper - passing it a fully evaluated data structure was apparently the only solution available. However, this does break the advantages of laziness as applied to the current state of the object - it would be pretty neat, for example, to be able to define a view of a database as the result of an expression involving relational algebra combinators, with the query evaluated one record at a time as the data is pulled back by the client application. The need for DeepSeq forces strict evaluation, so this benefit is lost. This is actually more of an issue than it might immediately seem - many queries don't actually need to retrieve all of a record set before useful things can be done. A particular example is updating a GUI, where a list box reflects the results of a query. Lazy evaluation will allow the screen to start updating more or less immediately even on a query that returns thousands of records - strict evaluation might incur a delay of a few seconds (for a very complex query or a large database) before updates can start happening. In an ideal world, what I'd dearly love to see would be an HDirect option that allows a choice between existing functionality and having calls return a failure code, or better still allowing a user-supplied handler function to determine the correct course of action. So, to sum up my wish list: 1. A facility to replace the default uncaught exception behaviour for COM objects and similar shared library code 2. HDirect rolled into the main GHC tree 3. 
A facility within HDirect to create COM objects based on existing (Haskell) data structures, returning an interface pointer as a return parameter on a call to an existing COM object. (This was the question Simon asked if I'd had an answer to - I've attached a copy below for reference). Sarah PS: I've not yet looked closely at the .NET integration in GHC yet, but I can imagine myself having a very similar wish list in that case too. Hi again, A slightly harder problem this time. I want to implement something that looks something like the Microsoft XML parser. This is a very COM heavy contraption that provides an interface that looks to client applications like a tree of COM objects that reflects the internal structure of an XML document. The idea is, you start with the document object, make a method call requesting the root node, and have a COM object that wraps that node returned to you. You can then request the children of that node, and their children and siblings, all which involves being passed back interface pointers to COM objects. Effectively, there is one COM object per syntactic item in the XML document. I already have my document object working nicely. This is where the actual database resides. I want to be able to spawn views of this database as COM
RE: Concurrent-Haskell in GHC
| I've started looking into the docs on GHC's
| implementation of Concurrent-Haskell, and I got
| confused about the architecture. Different sources
| seem to indicate that it either:
| - Uses one OS thread, and blocking IO calls will
| therefore block all the Haskell-threads, or
| - Uses one OS thread, occasionally (50 times a second?)
| calling select() to simulate blocking IO calls without
| blocking other Haskell-threads, or
| - Uses a pool of OS threads so that when a blocking IO
| call is made, another thread takes over execution.

Basically the last of these. (At least in the most recent GHC.) Simon Marlow implemented a web server using Concurrent Haskell, and has a paper about it too. http://www.haskell.org/~simonmar/bib.html But we should really point out that you get the second method by default; the pool of OS threads method is currently experimental and not enabled unless you rebuild GHC's RTS with the right flags. The default single-threaded IO multiplexing works pretty well, although it doesn't scale to multiple processors of course. Cheers, Simon ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
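The model being described - many lightweight Haskell threads multiplexed by the runtime, where blocking suspends only the waiting Haskell thread - can be seen in a minimal Concurrent Haskell sketch (names invented):

```haskell
import Control.Concurrent

-- Two Haskell threads, multiplexed by the RTS (by default onto a
-- single OS thread). Blocking on the MVar suspends only the Haskell
-- thread that waits; the rest of the program keeps running.
demo :: IO String
demo = do
  done <- newEmptyMVar
  _ <- forkIO $ do
         threadDelay 1000        -- microseconds; yields, doesn't spin
         putMVar done "worker finished"
  takeMVar done                  -- blocks this Haskell thread only
```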
arrays and lambda terms
Greetings. My question is not directly related to Haskell. Is it possible to encode an array using lambda terms only, and recover the term specified by an index term in O(1) time (in other words, in one reduction step, without cheating using several steps behind the scenes)? Or is it possible in any rewriting system? I might not be making any sense, but I am sure there are people out there who can read my mind, and possibly put my question into better terminology. Thanks
Stacking up state transformers
Thanks to Andrew, I'm now backtracking my state correctly. Now what I want to do is have two elements of state, an element that gets backtracked, and an element that doesn't. My monad now looks like this: type NondetState bs ns a = StateT bs (NondetT (StateT ns Maybe)) a where bs is my backtracked state, and ns my non-backtracked state. I can still access my backtracked state using Control.Monad.State.{get,put}, but I can't access my non-backtracked state. How do I burrow through the stack of monad transformers to get and put the 'ns' state? Supplementary question: What documentation should I be reading? cheers, Simon Registered Office: Roke Manor Research Ltd, Siemens House, Oldbury, Bracknell, Berkshire. RG12 8FZ The information contained in this e-mail and any attachments is confidential to Roke Manor Research Ltd and must not be passed to any third party without permission. This communication is for information only and shall not create or change any contractual relationship.
Re: arrays and lambda terms
Is it possible to encode an array using lambda terms only, and recover the term specified by an index term in O(1) time (in other words in one step reduction, without cheating using several steps behind the scenes)? depends a bit on your definitions, doesn't it? if you take lambda with unary abstractions only, it is going to be rather difficult; if you take lambda with n-ary abstractions, it is going to be rather easy, assuming you mean constant-steps, not single-step (typically, access to function parameters is implemented via a single instruction, directly indexing into some parameter structure, which is little different from array access - you can play with the idea in Haskell, so it's not even off topic;-). hth, claus
Re: arrays and lambda terms
Cagdas Ozgenc wrote: Is it possible to encode an array using lambda terms only, and recover the term specified by an index term in O(1) time? I'd like to go out on a limb and argue that there is generally no such thing as an O(1) array access operation. On conventional hardware, the fastest access to the i-th element of an array 'ar' is expressed by the following formula, which underlies C arrays: ar[i] = *(ar + i) The formula involves the addition of two numbers -- which is, generally, an O(log(n)) operation. Granted, if we operate in the restricted domain of mod 2^32 integers, addition can be considered O(1). If we stay in this domain however, the size of all our arrays is limited to M (which is 2^32, often less). If we implement our arrays as linked lists, the cost of accessing one element will not exceed M*2*cost(dereference), which is constant, which is O(1). Thus, pedantically speaking, within the restricted integral domain, any terminating array access method is O(1). The choice of hardware may greatly affect the apparent complexity. For example, associative memory access is difficult on conventional architectures and fast for holographic memory. Or is it possible in any rewriting system? If all your arrays are limited in size to 2^32, and if your re-writing system offers projection operators pi_0, pi_1, ... pi_4294967295 (i.e., the right hardware), well, then it is possible.
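The point about projection operators can be sketched in Haskell with a Church-style fixed-width tuple (names invented; a toy model, not a practical array):

```haskell
-- A Church-style 4-tuple: the "array" is a function waiting for a
-- selector, and each projection pi_i is a single application - a
-- constant number of beta steps for a fixed width. This illustrates
-- the n-ary-abstraction / projection-operator point, not O(1) access
-- for arbitrary n.
type Arr4 a r = (a -> a -> a -> a -> r) -> r

mkArr4 :: a -> a -> a -> a -> Arr4 a r
mkArr4 x0 x1 x2 x3 k = k x0 x1 x2 x3

pi0, pi1, pi2, pi3 :: Arr4 a a -> a
pi0 arr = arr (\x0 _ _ _ -> x0)
pi1 arr = arr (\_ x1 _ _ -> x1)
pi2 arr = arr (\_ _ x2 _ -> x2)
pi3 arr = arr (\_ _ _ x3 -> x3)
```

For instance, pi2 (mkArr4 1 2 3 4) selects the third element in one application of the selector.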
Re: Stacking up state transformers
G'day. On Tue, Feb 04, 2003 at 05:24:29PM -, Guest, Simon wrote: I can still access my backtracked state using Control.Monad.State.{get,put}, but I can't access my non-backtracked state. Iavor mentioned using lift, plus some other ideas. That's what I'd do:

  liftNondet = lift
  liftOuterState = lift . lift

(Naturally I'd call these something more meaningful.) As a matter of style, I generally advocate the philosophy that your basic operations should be semantically meaningful, rather than operationally meaningful. So, for example, rather than:

  type FooM a = StateT Bar (StateT Baz IO) a

  munch :: FooM ()
  munch = do
    baz <- lift get
    doStuffWith baz

I prefer:

  type FooM a = StateT Bar (StateT Baz IO) a

  getBaz :: FooM Baz
  getBaz = lift get

  munch :: FooM ()
  munch = do
    baz <- getBaz
    doStuffWith baz

Not only is it more readable, it's also more robust in the face of change (e.g. when you decide to change to ReaderT instead). In your case, it's a state monad you're trying to get at, and a state monad only supports a few meaningful operations (get, put, modify, gets), not all of which are generally useful for a given kind of state. I think it makes more sense to specify a public interface for your monad and use only that. Cheers, Andrew Bromage
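The getBaz pattern can be packaged into a runnable sketch (Bar, Baz, and the accessor names are placeholders for illustration; one lift per transformer layer skipped):

```haskell
import Control.Monad.State

-- Two layers of StateT over IO, with semantically named accessors
-- hiding the lift plumbing.
type Bar = Int
type Baz = String
type FooM a = StateT Bar (StateT Baz IO) a

getBar :: FooM Bar
getBar = get                -- outermost state: no lift needed

getBaz :: FooM Baz
getBaz = lift get           -- one layer down

putBaz :: Baz -> FooM ()
putBaz = lift . put

-- Unwrap the stack from the outside in.
runFooM :: FooM a -> Bar -> Baz -> IO (a, Baz)
runFooM m bar baz = runStateT (evalStateT m bar) baz
```

For example, runFooM (do { _ <- getBar; putBaz "new"; getBaz }) 0 "old" returns ("new", "new"): the result of the action plus the final inner state.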
Re: arrays and lambda terms
The formula involves the addition of two numbers -- which is, generally, an O(log(n)) operation. Granted, if we operate in the restricted domain of mod 2^32 integers, addition can be considered O(1). If we stay in this domain however, the size of all our arrays is limited to M (which is 2^32, often less). If we implement our arrays as linked lists, the cost of accessing one element will not exceed M*2*cost(dereference), which is constant, which is O(1). Thus pedantically speaking, within the restricted integral domain, any terminating array access method is O(1). Yes, but then it will have unstable complexity, whereas usually when array access is considered, the worst/best/average case complexities are equal (i.e. one can write a program for a hard real-time system). With a linked list implementation you'll never have that. The choice of hardware may greatly affect the apparent complexity. For example, associative memory access is difficult on conventional architectures and fast for holographic memory. Very true. Or is it possible in any rewriting system? If all your arrays are limited in size to 2^32, and if your re-writing system offers projection operators pi_0, pi_1, ... pi_4294967295 (i.e., the right hardware), well, then it is possible. The projection operators should all take an equal number of reduction steps. So, does that mean it is not possible to do this in lambda calculus if the hardware only supports alpha and beta reduction, for example (and without cheating behind the scenes for projection functions)?