J.R. Malaquias  <[EMAIL PROTECTED]> 
writes on  Dynamic scopes in Haskell

> One of the algorithms I have to implement is the addition of 
> symbolic expressions. It should take two symbolic expressions as 
> arguments and produce a symbolic expression as the result. 
> But how the result is produced depends on a series of flags that 
> control how the expressions are to be manipulated. This set of flags 
> should then be passed as a third argument to the addition function.
> This is the correct way of doing it. But, being a Mathematics 
> application, my system should preserve the traditional Math notation 
> (that is, infix operators with suitable associativities defined). So
> my symbolic expression type should be an instance of the Num class
> so that the (+) operator can be overloaded for it. But, as the 
> function now has three arguments, it can no longer be a binary
> operator.
> [..]
> Dynamic scoping is the solution:
> -------------------------------
> If I could pass the environment to my addition function 
> indirectly, then it would remain a 2-argument function and could be
> implemented as a binary operator. Defining the environment as a 
> dynamically scoped variable does just that for me. In fact, that is 
> how the computer algebra system was implemented in its first
> version, using the Scheme programming language.
>
> [..]


I guess I am dealing with the same problem.
In 1993, a CA program called CAC was written in a certain _typeless_
functional language.
For example, multiplication in this system works like this:
                 $ P mul x x --> x^2     - for  x <- P = Z[x]
                 $ Q mul x x --> -1      - for  x <- Q = Z[x]/(x^2+1)

In the first line, the domain of the operands is  Polynomial Integer;
in the second, Q means the polynomials modulo x^2+1.
We see that `mul' actually has three arguments.
To model the binary (*) in the interactive shell, the user can define
a list of preprocessors. This makes it possible to set
the default third argument. For example,
  > loadInterface Q;
  > ...
  > x *. x;
  @ -1
  >
After `loadInterface DD', any further operation of the kind  `Op.' 
in the dialogue expands to  `$ DD Op'.  
In the above example,   x *. x   expanded to  $ Q mul x x.
Then `$' extracts the individual `mul' definition from the 
_domain description_ Q and applies it.
`$' is called the domain interpreter.
Probably, this can be considered part of implementing 
`vocabularies' for the data classes in a typeless language. 
Am I mistaken?
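A minimal Haskell sketch of this `domain description + interpreter'
idea (the names and representations are my own illustration, not the
actual CAC code): the description is a record of operations, and field
selection plays the role of `$' extracting `mul' from Q.

```haskell
-- A domain description as a record of operations.
data Domain a = Domain { add :: a -> a -> a
                       , mul :: a -> a -> a }

-- Q = Z[x]/(x^2+1): pairs (a, b) stand for  a + b*x.
qDomain :: Domain (Integer, Integer)
qDomain = Domain
  { add = \(a, b) (c, d) -> (a + c, b + d)
  , mul = \(a, b) (c, d) -> (a*c - b*d, a*d + b*c) }  -- using x^2 = -1

main :: IO ()
main = print (mul qDomain (0, 1) (0, 1))   -- x * x in Q gives (-1,0), i.e. -1
```

Here  `mul qDomain'  corresponds to  `$ Q mul', applied to two further
arguments.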

What I call the `domain description Q',  Prof. J.R. Malaquias probably
calls a `series of flags'
"that control how the expressions are to be manipulated. This set of 
flags should then be passed as a third argument to the addition 
function."

Then I started CA in Haskell.
I doubt whether dependent types or dynamic scopes can help to model
such a _dynamic_   x *. x  -->  $ Q mul x x    expansion.
Though I do not know, so far, what dependent types are.

Besides, the parasitic `.' remains. And the true _old_ programs, 
apart from the dialogue, are written in the  $ &R mul  style - not so 
nice. The  _old_ program explicitly extracts the `add', `mul' ... 
definitions from  &R, binds them to variables  &add, &mul ..., 
and then computes by applying, when needed, these values of 
&add, &mul ...
Haskell almost forces a domain to be presented as several data class 
_instances_. Otherwise, too many of the language's benefits would be 
lost. 
But an instance is _static_, and CA needs dynamic domains too.
Thus, for the above example, one may need to perform  $ &R mul
in different domains  &R = Z[x]/(x^n-1)  for many different  n, 
and the number of needed domains may be known only at run time.
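This point can be made concrete with a hedged sketch (my own names,
not DoCon-2's): an instance for Z[x]/(x^n - 1) would have to fix n
statically, while a value-level domain can take n at run time.

```haskell
-- A run-time-parameterised domain  Z[x]/(x^n - 1):
-- elements are coefficient lists of length n; no type-level n involved.
data Domain a = Domain { mul :: a -> a -> a }

newtype Res = Res [Integer] deriving (Eq, Show)

resDomain :: Int -> Domain Res
resDomain n = Domain
  { mul = \(Res u) (Res v) ->
      -- convolution with indices wrapped modulo n, since x^n = 1
      Res [ sum [ u !! i * v !! j | i <- [0 .. n-1], j <- [0 .. n-1]
                                  , (i + j) `mod` n == k ]
          | k <- [0 .. n-1] ] }

main :: IO ()
main = do
  let d = resDomain 3            -- n chosen at run time
      x = Res [0, 1, 0]          -- the polynomial x
  print (mul d (mul d x x) x)    -- x^3 = 1 in Z[x]/(x^3 - 1)
```

Nothing stops a program from building  `resDomain n'  for as many
different n as the computation happens to require.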

So, the new system DoCon-2 preserves the domain description 
expressions, but has left behind the  $ &R mul  style.
Further,  +,*,... become class operations, and DoCon-2 represents 
a domain as several class instances.
I fear the old trick with  $ &R mul  is not good in this situation.
It is better to leave the domain interpretation $ to the Haskell data
class mechanism (vocabularies?).
But to obtain the _dynamic domain_ effect, DoCon-2 cynically puts the 
domain description data into the domain _element_ data.
For example, a polynomial contains in itself the data term for its
coefficient domain.
This does not lead to much space overhead, due to the sharing effect
and to other reasons.
The use of certain cast-by-sample maps greatly reduces the size of
the element data denotations.
All this leads to the so-called domain-by-sample approach:
          <http://haskell.org/docon>, manual.txt, section 'prp.ske'.
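A minimal sketch of this element-carries-its-domain idea (the type and
field names are my illustration, not DoCon-2's actual types): each
element holds a description of its coefficient domain, so an operation
can recover the domain from either operand - "by sample".

```haskell
-- A coefficient domain description, e.g.  Z/(m):
newtype CoefDom = CoefDom { modulus :: Integer } deriving (Eq, Show)

-- Each polynomial carries its coefficient domain description.
data Pol = Pol { coefDom :: CoefDom, coefs :: [Integer] }
           deriving (Eq, Show)

addPol :: Pol -> Pol -> Pol
addPol (Pol d u) (Pol _ v) =
  -- the domain d is taken from the first operand (the "sample");
  -- equal-length coefficient lists over the same domain are assumed
  Pol d (zipWith (\a b -> (a + b) `mod` modulus d) u v)

main :: IO ()
main = print (addPol (Pol (CoefDom 5) [3, 4]) (Pol (CoefDom 5) [4, 4]))
```

Since all elements of one polynomial can point to the one shared
CoefDom value, the per-element cost stays small.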

The approach has its drawbacks, described in the Manual too.
But so far, I cannot think of anything better.

To my mind, the most important improvement that can be made to
Haskell in the near term, in order to make it better suit CA, is to
relax the restrictions on instance overlaps. This would lead, in
particular, to easier composed conversions between algebraic domains.
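For example (a sketch of what relaxed overlap could buy, written with
the overlap pragmas of later GHC versions; the `Convert' class is my
illustration, not a DoCon-2 name), a generic embedding instance could
coexist with more specific conversion instances:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- A generic conversion class between domains.
class Convert a b where
  cvt :: a -> b

-- Generic instance: embed an Integer into any numeric domain ...
instance {-# OVERLAPPABLE #-} Num b => Convert Integer b where
  cvt = fromInteger

-- ... overlapping with a more specific instance, which GHC prefers
-- where both apply.
instance Convert Integer Integer where
  cvt = id

main :: IO ()
main = do
  print (cvt (3 :: Integer) :: Double)    -- generic embedding: 3.0
  print (cvt (3 :: Integer) :: Integer)   -- specific instance: 3
```

With stricter overlap rules, one has to write each such conversion
pair by hand instead of composing them through a generic instance.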
 

------------------
Sergey Mechveliani
[EMAIL PROTECTED]



