Perhaps slightly off-topic: I have been looking at the core-spec document, and have a question about the operational semantics part.

Given the Core expression:

> case (let r = 1 : r in r) of (x:xs) -> x

an interpreter following the semantics would loop on it, because the S_LetRecReturn rule, the one that throws away let-expressions, never applies:

> case (let r = 1 : r in r) of (x:xs) -> x
  => S_Case + S_LetRec + S_Var
> case (let r = 1 : r in 1:r) of (x:xs) -> x
  => S_Case + S_LetRec + S_Var
> case (let r = 1 : r in 1:1:r) of (x:xs) -> x

etc.

If we add a step reduction rule

[S_CaseLet]  case (let us in e) of ps  -->  let us in (case e of ps)

we instead get:

> case (let r = 1 : r in r) of (x:xs) -> x
  => S_CaseLet
> let r = 1 : r in (case r of (x:xs) -> x)
  => S_LetRec + S_Var
> let r = 1 : r in (case 1:r of (x:xs) -> x)
  => S_LetRec + S_MatchData
> let r = 1 : r in 1
  => S_LetRecReturn
> 1

Would it make sense to add such a step reduction rule? Or am I incorrect in assuming that an interpreter following the current rules would loop?
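To make the comparison concrete, here is a tiny Haskell sketch of the step relation as I am reading it, with the proposed S_CaseLet rule included. It is only a toy model of my interpretation, not the core-spec itself: the Expr type, the single-alternative case, and the way the letrec bindings are threaded through step are simplifications invented for this example.

-- A toy model, NOT the core-spec: one data constructor (:), one literal
-- type, a single-alternative case, and a single (possibly recursive) let.
data Expr
  = Var String
  | Lit Int
  | Cons Expr Expr                      -- a saturated (:) application
  | Let String Expr Expr                -- let x = rhs in body (recursive)
  | Case Expr (String, String, Expr)    -- case e of (x:xs) -> rhs
  deriving Show

isValue :: Expr -> Bool
isValue (Lit _)    = True
isValue (Cons _ _) = True
isValue _          = False

occursIn :: String -> Expr -> Bool
occursIn x e = case e of
  Var y               -> x == y
  Lit _               -> False
  Cons h t            -> occursIn x h || occursIn x t
  Let y r b           -> occursIn x r || (x /= y && occursIn x b)
  Case s (y, ys, rhs) -> occursIn x s
                         || (x /= y && x /= ys && occursIn x rhs)

-- Naive, non-capture-avoiding substitution; enough for the closed example.
subst :: [(String, Expr)] -> Expr -> Expr
subst s e = case e of
  Var x                -> maybe e id (lookup x s)
  Lit n                -> Lit n
  Cons h t             -> Cons (subst s h) (subst s t)
  Let x r b            -> Let x (subst s r) (subst s b)
  Case e' (x, xs, rhs) -> Case (subst s e') (x, xs, subst s rhs)

-- One reduction step; the environment holds the letrec bindings in scope.
step :: [(String, Expr)] -> Expr -> Maybe Expr
-- S_Var: unfold a letrec-bound variable one step
step env (Var x) = lookup x env
-- S_LetRecReturn: the body is a value not mentioning the binder; drop the let
step _ (Let x _ v)
  | isValue v, not (x `occursIn` v) = Just v
-- S_LetRec: otherwise step inside the body, with the binding in scope
step env (Let x rhs body) = Let x rhs <$> step ((x, rhs) : env) body
-- the proposed S_CaseLet: float the let out of the scrutinee
step _ (Case (Let x rhs body) alt) = Just (Let x rhs (Case body alt))
-- S_MatchData: the scrutinee is a constructor application; take the branch
step _ (Case (Cons h t) (x, xs, rhs)) = Just (subst [(x, h), (xs, t)] rhs)
-- S_Case: otherwise step the scrutinee
step env (Case scrut alt) = (\s -> Case s alt) <$> step env scrut
step _ _ = Nothing

-- Iterate until no rule applies.
eval :: Expr -> Expr
eval e = maybe e eval (step [] e)

-- case (let r = 1 : r in r) of (x:xs) -> x
example :: Expr
example = Case (Let "r" (Cons (Lit 1) (Var "r")) (Var "r"))
               ("x", "xs", Var "x")

-- ghci> eval example
-- Lit 1

With the S_CaseLet equation, eval reduces the example to Lit 1, matching the second trace above. Remove that equation and this toy model simply gets stuck on the example instead of reducing it to 1 (whereas my reading of the actual rules is that they keep unfolding r, as in the first trace).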
-- Christiaan

On Oct 22, 2014, at 4:28 PM, Richard Eisenberg <e...@cis.upenn.edu> wrote:

> Hi Sophie,
>
> I agree with Simon in that I'm skeptical that arrows should *require* a change in Core, but I'm more willing to believe that a change in Core could permit better optimizations over arrow-intensive code. Though I would say we should spend some time looking for ways to achieve this without changing Core.
>
> All that said, I'm happy to help you understand Core better, and can explain some of that core-spec document you've been referred to. It's terse, and intended to be a somewhat minimal explanation.
>
> Let me know if I can be of help.
> Richard
>
> On Oct 22, 2014, at 6:18 AM, Sophie Taylor <sop...@traumapony.org> wrote:
>
>> Yeah, definitely. Part of the reason arrow notation is so frustrating at the moment is that it forces everything into lambda calculus; that is, it requires every category to be Cartesian closed. When your arrow category isn't Cartesian closed, that raises two issues.
>>
>> 1) When it's not Cartesian, you have to lie and say it supports products instead of tensors (that is, that you are able to get back the arguments of a product unchanged, i.e. simple tuples), but this isn't the relevant part for Core.
>>
>> 2) When it's not closed, you have to lie and say it supports higher-order functions (i.e., lambda abstractions applied to lambda abstractions) and implement arr.
>>
>> Now, you can lie at the syntax level and typecheck the notation as kappa calculus (i.e. first-order functions only, unless the category is explicitly closed) but then treat it as lambda calculus at the Core level; this would work because lambda calculus subsumes kappa calculus, and it would allow the optimiser/RULES etc. to work unchanged. However, you would lose a lot of the internal consistency checking that Core gives you, and could miss out on kappa-calculus-specific optimisations (although, come to think of it, call arity analysis might solve a lot of this issue).
>>
>> On 22 October 2014 19:59, Simon Peyton Jones <simo...@microsoft.com> wrote:
>>
>> Interesting. There is a pretty high bar for changes to Core itself. Currently arrow notation desugars into Core with no changes. If you want to change Core, then arrow “notation” is actually much more than syntactic sugar. Go for it – but it would be a much more foundational change than previously, and hence would require more motivation.
>>
>> S
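(To make the desugaring point concrete: a proc block is rewritten into ordinary applications of the Control.Arrow combinators before Core ever sees it, which is why Core itself needs no new forms today. The sketch below is a hand-written approximation of that rewrite, not actual GHC output; the addBoth name and the example are made up. It also shows where arr comes in, which is exactly the piece a category that is not Cartesian closed cannot implement honestly.)

{-# LANGUAGE Arrows #-}
import Control.Arrow

-- Arrow-notation source: everything stays inside the abstract arrow a.
addBoth :: Arrow a => a Int Int -> a Int Int -> a Int Int
addBoth f g = proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z

-- Roughly what the desugarer produces: plain combinator applications,
-- which are ordinary Core terms.  Note the reliance on arr: (&&&)
-- duplicates its input via arr (\x -> (x, x)), and arr (uncurry (+))
-- lifts an arbitrary Haskell function into the arrow, which is precisely
-- the step that a non-Cartesian / non-closed category has to fake.
addBoth' :: Arrow a => a Int Int -> a Int Int -> a Int Int
addBoth' f g = (f &&& g) >>> arr (uncurry (+))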
>> From: Sophie Taylor [mailto:sop...@traumapony.org]
>> Sent: 22 October 2014 10:53
>> To: Simon Peyton Jones
>> Cc: ghc-devs@haskell.org
>> Subject: Re: Current description of Core?
>>
>> Ah, thanks HEAPS. I've been banging my head against a wall for the last few days trying to see exactly what is going on :) I'm trying to find a way to minimise/eliminate the changes required to Core for the arrow notation rewrite, specifically for introducing kappa abstraction and application. These are semantically different from lambda abstraction/application, but close enough that I can probably get away with either adding a simple flag to the Abstraction/Application constructors or doing it higher up in HsExpr land; the latter method leaves a sour taste in my mouth, though.
>>
>> On 22 October 2014 19:35, Simon Peyton Jones <simo...@microsoft.com> wrote:
>>
>> Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)?
>>
>> We never implemented that particular version (too complicated!).
>>
>> This is the full current story (thanks to Richard for keeping it up to date), in the GHC source tree:
>>
>> https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf
>>
>> Simon
>>
>> From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Sophie Taylor
>> Sent: 22 October 2014 10:26
>> To: ghc-devs@haskell.org
>> Subject: Current description of Core?
>>
>> Hi,
>>
>> Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)?

_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs