This is a reply to the notes by
  David Barton       <[EMAIL PROTECTED]>,
  Jerzy Karczmarczuk <[EMAIL PROTECTED]>

on (algebraic) *rules* in Haskell.


David Barton  writes

> What began with a fairly limited, and practical, suggestion on Simon's
> part to assist the compiler with optimizations and transformations
> that are valid in some cases and not in others has blossomed into a
> search for a full logical language, with inference, proof checking,
> and all the rest.
> 
> Look, if you want a logical language, go for it.  Frankly, I am in the
> throes of Language Puppy Love(tm) with Maude right this second (those
> who are interested, check out http://maude.csl.sri.com).  Neat stuff.
> But that doesn't mean I want to twist Haskell to fit that frame.  Nor
> does it mean that I want to abandon Haskell and do all my graphics
> interface programming and scripting via term rewriting logic.  Have no
> fear, Haskell, you are still my first love!


Adding *rules* to the language would NOT cause anyone to script
graphics via term rewriting logic; I suppose you know this.
If you do not put {rules..} in your program, you will never notice
they exist.

Second: Haskell is rooted in the lambda calculus and is named after
Haskell B. Curry. Very probably it was he who introduced the
rewrite-rule technique - the rules for the S and K combinators, among
others. Let somebody correct me if I am mistaken.
If anything, it is the "graphics interface" affairs, "file handlers",
and such that could be considered as twisting Haskell into what it
should not be.

Third: there exists the  Gofer  dialect of Haskell, whose name means
"good for equational reasoning" - for *reasoning*, not for "graphics
interfaces".
Does not all this show what Haskell is about?

Bringing in a small part of the "logical language" possibility helps
to optimize all programs - graphics-interface programs too.
Bringing in more of it might allow stronger optimizations.
The first step is to admit {rules..} into the language, with minimal
implementation support. Then everything depends on the practice of
compiler design.
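
For a concrete instance: the map-fusion rule mentioned in this thread
is an ordinary Haskell equation that anyone can test today (a small
sketch; the names  lhs, rhs  are invented just for this example):

  -- the two sides of the rule  (map f).(map g) = map (f.g),
  -- instantiated with  f = (+1),  g = (*2)
  lhs, rhs :: [Int] -> [Int]
  lhs = (map (+1)) . (map (*2))    -- two traversals of the list
  rhs = map ((+1) . (*2))          -- one traversal

  -- lhs [1..5] == rhs [1..5]  evaluates to  True

A compiler honouring the rule could compile the left form as the
right one and save a traversal.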


Jerzy Karczmarczuk <[EMAIL PROTECTED]>  writes

> Sergey Mechveliani:

>> i asked the list, why the examples deal only with transformations like
>> (map f).(map g) -> map (f.g),
>> why not deal with algebraic simplification, say,
>>        (x^2+x+1)*(x-1) -> x^3-1.

> I am afraid that Sergey is dreaming of transforming Haskell into a
> universal Computer Algebra system. We have known for more than 30 years that
> the general problem of algebraic simplification is a mess. 

Every theory is a mess in its full generality: the lambda calculus is
a mess too (peer into Barendregt's book), and so on.
And yet all of them have simple applications.

> OK, some polynomial transformations can be done, but I suspect that 
> enhancing a parser with such rule recognition system containing 
> obviously the non-linear pattern matching (if not a full unifier...)
> seems a deadly idea.

As Simon mentioned, many things, like   (map f).(map g) --> map (f.g),
can be done more easily. They are NOT about polynomials. Neither are
the generic rules for Group - and these are processed very simply.
There are many other examples.
And do you think it is a problem to program in Haskell a transformer
that would handle, say,
                      ('a':).(map f).(map g) --> ('a':).(map (f.g))  ?
It is not hard to write in Haskell the unifier for such expressions.
Generally, one only needs to start with an interpreter, written in
Haskell, for the *normal form* by a rewrite-rule list.
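
For example, here is a minimal sketch of such an interpreter (the
names  Term, Rule, match, normalForm  are invented for this example;
it does first-order matching only - no full unification - which is
already enough to apply oriented rules):

  import Control.Monad (foldM)
  import Data.Maybe    (mapMaybe)

  -- first-order terms: variables and applied function symbols
  data Term = Var String | Fun String [Term]   deriving (Eq, Show)

  type Subst = [(String, Term)]
  type Rule  = (Term, Term)                    -- (left side, right side)

  -- match a pattern against a term; a repeated (non-linear) variable
  -- must match equal subterms
  match :: Term -> Term -> Subst -> Maybe Subst
  match (Var v) t s =
    case lookup v s of
      Nothing           -> Just ((v, t) : s)
      Just t' | t' == t -> Just s
      _                 -> Nothing
  match (Fun f ps) (Fun g ts) s
    | f == g && length ps == length ts =
        foldM (\s' (p, t) -> match p t s') s (zip ps ts)
  match _ _ _ = Nothing

  -- instantiate the right side of a rule with the found substitution
  apply :: Subst -> Term -> Term
  apply s (Var v)    = maybe (Var v) id (lookup v s)
  apply s (Fun f ts) = Fun f (map (apply s) ts)

  -- one rewrite step at the root, trying the rules in order
  step :: [Rule] -> Term -> Maybe Term
  step rules t =
    case mapMaybe (\(l, r) -> fmap (\s -> apply s r) (match l t [])) rules of
      (t' : _) -> Just t'
      []       -> Nothing

  -- normal form: normalize the subterms, then rewrite at the root and
  -- repeat; terminates only if the rule system itself terminates
  normalForm :: [Rule] -> Term -> Term
  normalForm rules t = maybe t' (normalForm rules) (step rules t')
    where
    t' = case t of Var v    -> Var v
                   Fun f ts -> Fun f (map (normalForm rules) ts)

The strategy (innermost here), the rule ordering, and so on are
ordinary design decisions, not research problems.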


> His proposals, e.g.

>>   {rules  (x*y)*z = x*(y*z)  ...}
>>
>>   {rules  ( x==x  &  (x==y ==> y==x)  &  (x==y & y==z ==> x==z)  &
>>             (x==y ==> x*z == y*z)
>>           )
>>   }
>>   {rules  belongs x & belongs y & belongs z ==>
>>           (x*y)*z   = x*(y*z) &
>>           unity*x   = x       &  x*unity   = x  &
>>           x*(inv x) = unity   &  (inv x)*x = unity
>>   }

> are needlessly verbose. I would say that:
>
> (==) is an equivalence relation (symm., trans., etc)
> (*) is associative (eventually commutative)

Saying "(*) is associative", you need to teach the compiler how to 
use this, to build-in something in it. 
While setting  {rules  (x*y)*z = x*(y*z) ...}  
simply makes the Normal form to apply this rule, and this has part of 
the effect of compiler exploiting associativity. 
Similar is with *equations* for (==).
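
With the sketch above this needs no built-in knowledge at all - the
rule is just data (assocRule is again an invented name):

  -- (x*y)*z -> x*(y*z)  as a  Rule  for the interpreter sketched above
  assocRule :: Rule
  assocRule = ( Fun "*" [Fun "*" [Var "x", Var "y"], Var "z"]
              , Fun "*" [Var "x", Fun "*" [Var "y", Var "z"]] )

  -- normalForm [assocRule]  rewrites  ((a*b)*c)*d  to  a*(b*(c*d))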

> We know that for real (inexact) numbers it is a touchy affair, and 
> rearranging the expression may yield different numeric answers, but 
> let's skip that problem. 
> [..]
> there is no point in polluting program with 
> rules unless a *concrete* implementation of them is provided. The
> algebraic ideas of Sergey do not seem to follow this principle.


As we observe, the implementors do think of providing such an
implementation.
As to inexact numbers, one could invent another class, with other
rules; personally, I have no thoughts on this.

Besides, the above rules for MulGroup are NOT proposed for the
standard; they are an *example of a user-defined algebraic class*.
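
Such a class is plain Haskell already; only the attached rules are
the hypothetical extension (MulGroup, unity, inv, (*&) are my
illustrative names, with (*&) written so as not to clash with the
Prelude (*)):

  class MulGroup a where
    unity :: a
    inv   :: a -> a
    (*&)  :: a -> a -> a

  -- the extension under discussion would let the class carry rules:
  --   {rules  (x *& y) *& z = x *& (y *& z)             &
  --           unity *& x    = x      &  x *& unity = x  &
  --           x *& inv x    = unity  &  inv x *& x = unity
  --   }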

Actually, we speak of some language, say,  Haskell-e,
to be - or not to be - designed and implemented, which applies *rules*
if the program asks for this.
If one does not like rules, one does not write {rules..}. This offends
nobody.
`-e' stands for equational reasoning - a thing close to rewrite
rules.
There exists the  Gofer  language ("good for equational reasoning"),
but, as I recall, it neither allows {rules..} nor provides
implementation support for them.


> By the way, the proliferation of such rules as:
> unity*x=x;  x*unity=x
> is slightly redundant as well; there are some provable subsumptions 
> in algebra, for example that if there is a left and a right unity, it
> must be the same entity. But how can the compiler use it?

The user could set  leftUnity  and  rightUnity  - with their rules.
And in many cases (for Group) the completion algorithm, or maybe the
resolution procedure, derives  leftUnity==rightUnity  automatically.
But as the user knows this ab initio, one simply adds the extra rule
- so as not to bother the compiler with proofs. Normally some rules
are added that could have been derived; this spoils nothing.
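
Indeed, in the notation used above, the subsumption Jerzy mentions is
a two-step equation:

      leftUnity == leftUnity*rightUnity == rightUnity

- the first step by the rightUnity rule  x*rightUnity = x,  the
second by the leftUnity rule  leftUnity*x = x.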


>> Again, this is *not* a bug in compiler to optimize the program 
>> "around bottom" and throw the "bottom" out. This is done according to
>> user program, that may arrange rules this or that way.
>> And this is not necessarily an optimization. It is often hard to 
>> control whether the rules make the program worse.

> Hm. If you can't control your rules, and if the compiler cannot
> do that either...

Just as the algorithm that brings a matrix to triangular form helps
to control the solution of a linear system, so rule completion and
resolution in logic help to transform non-polynomial equations into a
reliable form, fit for equation solving. But completion sometimes
does not terminate. The user is free to call for it or not.
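
A tiny illustration with the  normalForm  sketch from above:
commutativity is the classic troublemaker, since oriented either way
it loops (commRule is an invented name):

  -- x*y -> y*x  can never be used as an ordinary oriented rule
  commRule :: Rule
  commRule = ( Fun "*" [Var "x", Var "y"]
             , Fun "*" [Var "y", Var "x"] )

  -- normalForm [commRule] (Fun "*" [Var "a", Var "b"])  rewrites
  --   a*b -> b*a -> a*b -> ...   and never returns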
My impression of this topic is based mostly on the paper

  Buchberger B., Loos R.  Algebraic Simplification.
  In: "Computer Algebra. Symbolic and Algebraic Computation",
  ed. B. Buchberger et al., Springer-Verlag, Wien - New York,
  1982, 1983.

But the first approach is simple: if you fear the consequences of a
certain rule, omit it.


------------------
Sergey Mechveliani
[EMAIL PROTECTED]