Re[4]: [Haskell-cafe] Fractional/negative fixity?
Hello Nicolas,

Wednesday, November 8, 2006, 1:25:23 AM, you wrote:

> prec ?? > $ (over-specification). You want ?? to bind more tightly
> than $ does; that's exactly what this approach would let you specify.

And how would the compiler then guess the relative priority of this
operator compared to '$!'? :)

--
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]

___ Haskell-prime mailing list Haskell-prime@haskell.org http://www.haskell.org/mailman/listinfo/haskell-prime
Re: Fractional/negative fixity?
Lennart Augustsson wrote:

> On Nov 7, 2006, at 11:47, [EMAIL PROTECTED] wrote:
>
>> Henning Thielemann wrote:
>>
>>> On Tue, 7 Nov 2006, Simon Marlow wrote:
>>>
>>>> I'd support fractional and negative fixity. It's a simple change to
>>>> make, but we also have to adopt [...]
>>>
>>> I think that computable real fixity levels are useful, too. A further
>>> step to complex numbers is not advised because those cannot be
>>> ordered.
>>
>> But the ordering of the computable reals is not computable. So it
>> could cause the compiler to loop during parsing. :)

Actually, that's one of the use cases ;)

Regards,
apfelmus
RE: Re: [Haskell-cafe] Fractional/negative fixity?
Nicolas Frisby wrote:

> Let's remember that if something is broke, it's only _right_ to _fix_
> it. I patiently waited for someone else to make that pun.
>
> Understanding the language won't be much harder, but understanding
> fixity declarations will become a task. Consider:
>
>   infixl -1.7521 -- what and why?
>
> As the operator space becomes more dense, negative and fractional
> fixities are going to become more obfuscated. Negative and fractional
> fixities will serve a number of purposes well, but they will also be
> abused and lead to confusion. This smells like a wart growing on a
> wart to me.

All these are valid points. However, given that we can't completely
redesign, implement and test a new fixity system in time for Haskell',
it makes sense to make a simple change that unambiguously improves the
current system, and is no more difficult to implement (in fact, I bet it
adds zero lines of code to the compiler).

Cheers,
	Simon

> Nick
>
> On 11/7/06, David House [EMAIL PROTECTED] wrote:
>> On 07/11/06, Jon Fairbairn [EMAIL PROTECTED] wrote:
>>> I must say, though, that I don't like the reasoning that we can put
>>> in fractional fixities because it's a small change. The way to hell
>>> is through a series of small steps. If using integers to express
>>> fixities is a bit of a hack, switching to rational numbers is a hack
>>> on top of a hack.
>>
>> Well, it's a _conceptually_ simple idea, one that doesn't make
>> understanding the language much harder. Also, it provides an infinite
>> space for fixities. I think the problem 'binds tighter than X but not
>> as tight as Y', where X and Y are only one fixity integer apart, is
>> somewhat common, and this would fix it. It would allow for
>> extensibility into the future, where the operator space will only
>> become more dense, and maintaining a complete order with only 10
>> integers to play with will become more and more difficult. Allowing
>> an infinite number of operators to come between any two operators
>> sounds like a solid design decision to me.
>> --
>> -David House, [EMAIL PROTECTED]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Fractional/negative fixity?
On Wed, 8 Nov 2006, Bulat Ziganshin wrote:

> Hello Nicolas,
>
> Wednesday, November 8, 2006, 1:25:23 AM, you wrote:
>
>> prec ?? > $ (over-specification). You want ?? to bind more tightly
>> than $ does; that's exactly what this approach would let you specify.
>
> And how would the compiler then guess the relative priority of this
> operator compared to '$!'? :)

(What might the smiley mean?) It could not guess it, and this is good!
However, if the Prelude defines ($) and ($!) to have the same
precedence, then the compiler could derive automatically that
prec ?? > $!.
Re: [Haskell-cafe] Fractional/negative fixity?
On Tue, 7 Nov 2006, David House wrote:

> On 07/11/06, Jon Fairbairn [EMAIL PROTECTED] wrote:
>> I must say, though, that I don't like the reasoning that we can put
>> in fractional fixities because it's a small change. The way to hell
>> is through a series of small steps. If using integers to express
>> fixities is a bit of a hack, switching to rational numbers is a hack
>> on top of a hack.
>
> Well, it's a _conceptually_ simple idea, one that doesn't make
> understanding the language much harder. Also, it provides an infinite
> space for fixities. I think the problem 'binds tighter than X but not
> as tight as Y', where X and Y are only one fixity integer apart, is
> somewhat common, and this would fix it.

In school we learnt that dot operations (multiplication, division) bind
more tightly than dash operations (addition, subtraction). I imagine we
would have learnt instead that "dot operations have precedence 7, dash
operations have precedence 6". :-)
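As it happens, the Prelude encodes exactly the school convention
numerically: (*) is infixl 7 and (+) is infixl 6. A small runnable
sketch of what those numbers buy you (the operator names .*. and .+.
are invented here so as not to clash with the Prelude):

```haskell
-- "Dot" operations at level 7, "dash" operations at level 6,
-- mirroring the Prelude's fixities for (*) and (+).
infixl 7 .*.
infixl 6 .+.

(.*.), (.+.) :: Int -> Int -> Int
(.*.) = (*)
(.+.) = (+)

main :: IO ()
main = print (2 .+. 3 .*. 4)
-- The declared fixities make this parse as 2 .+. (3 .*. 4), i.e. 14.
```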
Re: [Haskell-cafe] Fractional/negative fixity?
Bulat Ziganshin schrieb:

> Hello Nicolas,
>
> Wednesday, November 8, 2006, 1:25:23 AM, you wrote:
>
>> prec ?? > $ (over-specification). You want ?? to bind more tightly
>> than $ does; that's exactly what this approach would let you specify.
>
> And how would the compiler then guess the relative priority of this
> operator compared to '$!'? :)

For an expression like

  a ?? b $! c

it would have to emit an error message, since it isn't clear whether the
programmer meant

  (a ?? b) $! c

or

  a ?? (b $! c)

In fact that wouldn't be clear to a human reader either, so it's
actually a Good Thing that the programmer must explicitly disambiguate
the expression! Dan Weston's proposal (let local fixity declarations
augment and/or override those imported from the module that defines an
operator) would eliminate the pain in code that makes heavy use of both
operators.

Fractional priorities, on the other hand, would silently resolve this
kind of ambiguity. What makes this troublesome is that programmers and
maintainers won't always assume the same resolution as the one the
compiler chose; such things are one of the more powerful ingredients of
the occasional debugging nightmare. This lesson has already been learned
several times now (I know of PL/I and C++, both of which have had their
share of problems due to overly ambitious defaulting mechanisms). No
need to repeat that.

Regards,
Jo
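Current Haskell already behaves this way in one narrow case: two
operators at the same numeric precedence but with opposite
associativities cannot be mixed without parentheses. A sketch of that
behaviour (both operators here are invented stand-ins, not the thread's
?? and $!):

```haskell
infixr 0 ??
infixl 0 .?.

-- Right-associative application, like ($).
(??) :: (a -> b) -> a -> b
f ?? x = f x

-- Left-associative reverse application.
(.?.) :: a -> (a -> b) -> b
x .?. f = f x

main :: IO ()
main = putStrLn ((negate ?? 3) .?. show)
-- Without the parentheses, `negate ?? 3 .?. show` is a compile error:
-- GHC refuses to mix ?? (infixr 0) with .?. (infixl 0).
```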
Re: Fractional/negative fixity?
Simon Marlow [EMAIL PROTECTED] writes:

> Nicolas Frisby wrote:
>> Let's remember that if something is broke, it's only _right_ to _fix_
>> it. I patiently waited for someone else to make that pun.
>>
>> Understanding the language won't be much harder, but understanding
>> fixity declarations will become a task. Consider:
>>
>>   infixl -1.7521 -- what and why?
>>
>> As the operator space becomes more dense, negative and fractional
>> fixities are going to become more obfuscated. Negative and fractional
>> fixities will serve a number of purposes well, but they will also be
>> abused and lead to confusion. This smells like a wart growing on a
>> wart to me.
>
> All these are valid points. However, given that we can't completely
> redesign, implement and test a new fixity system in time for Haskell',

...the correct thing to do is to leave it alone, rather than make a
change that addresses only one of the problems.

> it makes sense to make a simple change that unambiguously improves the
> current system,

I dispute that. It does make it possible to introduce a new operator
between two others, but on its own, that strikes me as being as likely
to be a new problem as an improvement, because of the difficulty of
remembering numeric precedences. It's bad enough with the present
number, let alone a countable infinity of them.

The biggest flaw in the present system (and something I wanted to
address in my original proposal way back when) is that there is no way
to state that there is /no/ precedence relationship between two
operators. It would be far better to have the compiler give an error
message saying that an expression needs some parentheses than have it
choose the wrong parse.

The next smaller flaw is that numeric precedences are a poor match for
the way we think. I can easily remember that (*) binds more tightly than
(+), or that (+) beats (:) (though the latter is slightly less obviously
correct), but I don't remember the numbers, so when I want to define
something new that has a similar precedence to (*) (some new kind of
multiplication), I have to look it up, which is tedious.

Wanting to insert an operator between two others comes lower in
importance even than that, because in many cases giving it the same
precedence as something, plus an appropriate associativity, gets you
most of the way there. It bites because you can't say that you want an
error if it is used next to something else without parentheses.

Let me throw out a couple of possibilities, differing only in syntax
(one of my fears is that if we get fractional fixities the other
problems will be forgotten, so a real improvement will never be
discussed). I don't expect either of them to go into Haskell', but
putting them forward might encourage further discussion and discourage
the introduction of something temporary that will stay with us forever.

Syntax 1, based on Phil Wadler's improvement of my old proposal. The
precedence relation is a preorder.

  infix {ops_1; ops_2; ...; ops_n}

(where each ops_i is a collection of operators, optionally annotated
with L or R) would mean that each operator in ops_i binds more tightly
than all the operators in ops_j for j > i (and we require
ops_i `intersect` ops_j = empty_set for i /= j). The layout rule applies
for {;...}. An op can be a varsym or a backquoted varid. It says nothing
about the relationship between those operators and operators not
mentioned, except by virtue of transitivity. So

  infix R ^
        L * /
        L + -

would replicate the current relationships between those arithmetic
operators. An additional declaration

  infix +
        R :

says that (+) binds more tightly than (:), and by transitivity, so do
(^), (*) and (/).

The associativity label obviously has to be the same for all occurrences
of an operator in scope, so omitting it just says that it's specified
elsewhere.

  infix *
        R @+ @-
        +

says that (@+) and (@-) fall between (*) and (+), and that
(a @+ b @- c) parses as (a @+ (b @- c)), but

  infix *
        R @@

says that (a * b @@ c @@ d) parses as ((a*b) @@ (c@@d)) but leaves
(a + b @@ c) undefined (a compile-time error) unless another declaration
specifies it elsewhere. And

  infix R @@ @@@

says nothing about the relationship between @@ or @@@ and other
operators, but indicates that they associate to the right, individually
and together.

The alternative syntax is exemplified thus:

  infix L + - (L * / (R ^))

The precedence increases the more deeply you go into the parentheses.
Arguably this is more suggestive, and it avoids the possibility of
reading precedences as increasing down the page (danger of an endianism
argument cropping up there!), but it may be harder to read.

With both syntaxes there's no reason to reserve L and R, since the
context disambiguates. For exports (imports) you pass the graph of the
relation with the unexported (unimported) operators omitted.

> and is no more difficult to implement (in fact, I bet it adds zero
> lines of code to the compiler).

If ease of implementation had been a
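The "no declared relation means a parse error" idea in the proposal
above can be prototyped directly: record the declared tighter-than
edges, take their transitive closure, and refuse to rank unrelated
operators. A minimal sketch, with an invented edge-list representation
(a real implementation would use an efficient graph structure):

```haskell
-- Declared "binds more tightly than" edges, as in the declarations
--   infix R ^  L * /  L + -    and    infix +  R :
edges :: [(String, String)]
edges = [ ("^","*"), ("^","/"), ("*","+"), ("*","-")
        , ("/","+"), ("/","-"), ("+",":"), ("-",":") ]

-- Operators one declared step below x.
succs :: String -> [String]
succs x = [y | (x', y) <- edges, x' == x]

-- All operators reachable from a set of start points (transitive closure).
reach :: [String] -> [String]
reach = go []
  where
    go seen []                     = seen
    go seen (x:xs) | x `elem` seen = go seen xs
                   | otherwise     = go (x:seen) (succs x ++ xs)

-- a binds more tightly than b iff b is reachable from a.
-- Unrelated pairs get neither ordering: the parser must demand parentheses.
tighter :: String -> String -> Bool
tighter a b = b `elem` reach (succs a)

main :: IO ()
main = do
  print (tighter "^" ":")   -- ^ > * > + > : by transitivity
  print (tighter "^" "++")  -- no relation declared: parentheses required
```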
Re: Fractional/negative fixity?
On Nov 7, 2006, at 5:49 PM, Robert Dockins wrote:

> [On operator precedence]
>
> Ha! Well, as long as we're being pedantic, surely we wouldn't need any
> set larger than the rationals (which do have a decidable ordering)?
>
> Also, since I'm commenting anyway, I rather like the idea of
> specifying operator precedences via a partial order. However, I also
> feel that there needs to be some work done to make sure there aren't
> gremlins hiding in the details. Has anyone worked out the theory on
> this? How does associating to the right vs left play into the picture?
> How does it fit into the parsing technology?

Actually, we *do* use a DAG of operator precedences in the Fortress
programming language (my day job). Our goal is to require parentheses
when it is not blatantly obvious what is going on (e.g. because we're
using operators from two libraries written in isolation). We can only
use operators together in an expression without parentheses if there is
an edge between them in the graph. Graph nodes are sets of operators
with the same precedence (e.g. addition and subtraction). Among other
things this means that if ++ has higher precedence than ==, and == has
higher precedence than <, we can't necessarily mix ++ and < in the same
expression without parentheses.

That said, this would be a pretty big change for Haskell, and would
break existing code unless you somehow wired in the transitive closure
of all the existing operators. As another message in this discussion
(from Simon M?) mentioned, you might want to be able to specify the
relationship between operators imported from different modules, because
you *do* know that a well-known relationship exists.

-Jan-Willem Maessen

> --
> Rob Dockins
>
> Talk softly and drive a Sherman tank.
> Laugh hard, it's a long way to the bank.
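Note that the Fortress rule described above is stricter than transitive
closure: two operators may appear together unparenthesised only if a
direct edge connects their precedence nodes. A minimal Haskell sketch of
that check (the node grouping and edges are invented for illustration,
not Fortress's actual tables):

```haskell
-- Each node groups operators of equal precedence.
nodes :: [[String]]
nodes = [["++"], ["=="], ["<"]]

-- Declared edges between node indices: ++ above ==, == above <.
edges :: [(Int, Int)]
edges = [(0, 1), (1, 2)]

nodeOf :: String -> Maybe Int
nodeOf op = lookup op [(o, i) | (i, ops) <- zip [0 ..] nodes, o <- ops]

-- Two operators may be mixed without parentheses only if they share a
-- node or a direct edge connects their nodes; deliberately no
-- transitivity, as in the Fortress rule described above.
mixable :: String -> String -> Bool
mixable a b = case (nodeOf a, nodeOf b) of
  (Just x, Just y) -> x == y || (x, y) `elem` edges || (y, x) `elem` edges
  _                -> False

main :: IO ()
main = do
  print (mixable "++" "==")  -- direct edge: allowed
  print (mixable "++" "<")   -- related only transitively: parentheses
```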
>            -- TMBG