I am sorry for the long letter.
It discusses an official proposal for the Haskell standard
library
        http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/
I wondered where such a discussion should take place and decided that
at least the first response may appear in this mailing list.

It replies to the remarks of 19 Apr 2000 by
Marcin Qrczak Kowalczyk <[EMAIL PROTECTED]>:


> Why I don't like this proposal:
>
> - It's too complicated.

> - Relies on controversial type system features, like undecidable
>   instances and overlapping instances.

> - Relies on type system features that are not implemented and it's
>  not clear if they can be correctly designed or implemented at all,
>  like "domain conversions".

> - Has many instances that should not exist because the relevant type
>  does not have the class property; they return Nothing or fail,
>  instead of failing to compile.

> - Properties like commutativity cannot be specified in Haskell.
>  The compiler won't be able to automatically perform any optimizations
>  based on commutativity.

> - belongs is strange. IMHO it should always return True for valid
>  arguments, and invalid arguments should be impossible to construct
>  if the validity can be checked at all.

> - Tries to turn a compiled language into an interpreted language.
>  FuncExpr, too much parsing (with arbitrary rules hardwired into
>  the language), too much runtime checks.

> - It's not true that it's "not necessary to dig into mathematics".
>  I studied mathematics and did not have that much of algebra.

> - I prefer minBound to looking at element under Just under Just under
>  tuple of osetBounds.

> - Uses ugly character and string arguments that tune the behavior,
>  e.g. in syzygyGens, divRem, canFr. I like Haskell98's divMod+quotRem
>  better.

> - Uses unneeded sample arguments, e.g. in toEnum, zero, primes, read.



Let us now use the above points as the structuring sections of this
response.

> - It's too complicated.

The Basic Algebra Proposal  (basAlgPropos)  is arranged so (and says
so explicitly) that the complex parts can be rejected, as the jury
decides, leaving a simple proposal.
Its minimal part is the category (class) hierarchy (picture 'in.h'),
together with the minimal description for each class.

I do not think its minimal part is complex.
Look, for example, at the category picture of the Axiom system for
computer algebra:
  Jenks, R.D., Sutor, R.S.,  et al.
  Axiom, the Scientific Computation System.
  Springer-Verlag, New York-Heidelberg-Berlin (1992).

By the way, the basic language of Axiom resembles Haskell: it is
strongly typed, has categories (something like classes), and so on
(though I am not sure of this; I have not studied it closely).
The main difference may be that it is strict and allows more
non-functional features.

And why should the Haskell basic algebra classes be compared at all
to those of a certain very scientifically designed large CA system?
Because Haskell decided to make the standard algebraic operations
overloaded.
A very nice solution. But starting from this point, it has to follow
the logic of the algebraic hierarchy assumed in mathematics.
Otherwise, many good properties would be lost.

Nothing essentially smaller than the minimal part of  basAlgPropos
would make sense. The average programmer is expected to exploit
about half of this picture.
The ordinary hacker does *not* have to study modern algebra. The
only difference for them is that some operations and classes are
renamed. How much sense is there in using  (+)  of  Num  and
quotRem  of  Integral ?  Exactly as much as the hacker will find in
using    (+)     of Additive
and      divRem  of EuclideanRing.
But for the mathematically minded users, and for the scientific
purposes of good future design, it gives an advantage, because the
names of the categories are related to certain properties.


> - Relies on controversial type system features, like undecidable
>   instances and overlapping instances.

These features are not mandatory in the proposal. The jury may decide
to rewrite the proposal removing the usage of these features.
Only I do not see why one has to make things worse.
Overlapping instances are consistent and very helpful. I believe
this was shown in the discussion in this mail list 3-5 months ago.
See the archive. Though, I remember that you do not agree with this.

As to the `undecidable' instances, only the word sounds frightening;
I do not think they are likely to cause any real harm.
I use them in a large application and have never been bitten by them.


> - Relies on type system features that are not implemented and it's
>  not clear if they can be correctly designed or implemented at all,
>  like "domain conversions".

It does not rely on them. The proposal can easily be rewritten
omitting this feature. Let the jury decide.
Again, I do not see why this should be done, why make things worse,
because the domain conversion feature is generally extremely useful.
Part of this conversion already works usefully under the existing
implementations.


> - Has many instances that should not exist because the relevant type
>  does not have the class property; they return Nothing or fail,
>  instead of failing to compile.

For example, it has  instance Multiplicative Integer
for the class
      class Set a => Multiplicative a  where
                                       (*)      :: a -> a -> a
                                       divide_m :: a -> a -> Maybe a
                                       ...
Are you saying that this instance should not exist for  Integer
because  divide_m  may return  Nothing ?
Your requirement goes against common mathematical practice.
     divide_m 4 2 = Just 2   :: Maybe Integer
     divide_m 5 2 = Nothing 
look quite natural. The ordinary  divide,  which aborts the program
on failure, is also provided.
Do you suggest that                            divide n m :: Integer
should produce a compilation error because it may fail?
Then you should expect, for example,             quot n m :: Integer
to cause a compilation error in Haskell-98,
because                                          quot 1 0   fails.
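
Returning to the instance itself, here is a minimal sketch of what
instance Multiplicative Integer  with a partial  divide_m  could look
like. It is only an illustration, not the proposal's code: the  Set
superclass and the omitted methods are left out.

      import Prelude hiding ((*))
      import qualified Prelude

      class Multiplicative a  where
        (*)      :: a -> a -> a
        divide_m :: a -> a -> Maybe a

      instance Multiplicative Integer  where
        (*) = (Prelude.*)
        divide_m _ 0 = Nothing                 -- no exact quotient by 0
        divide_m n m = case quotRem n m of
                         (q, 0) -> Just q      -- divide_m 4 2 = Just 2
                         _      -> Nothing     -- divide_m 5 2 = Nothing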

Here is an example showing the usefulness of partial operations.
Many algorithms in algebra are formulated like this:
find the root  r  of degree 4 of  x  in the domain  D.  If this  r
exists in  D,  then apply the algorithm  f  to  r, x,  otherwise
apply  g  to  x.  For the domain of Integer, this may look like

    case  root 4 n :: Maybe Integer  of  Just r  -> f r n
                                         Nothing -> g n
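
One possible implementation of such a partial  root  for Integer (my
own sketch, not the proposal's code):

    -- exact k-th root of a non-negative Integer, if one exists
    root :: Integer -> Integer -> Maybe Integer
    root k n
      | k <= 0 || n < 0 = Nothing
      | otherwise       = go 0 (n + 1)         -- binary search on [0, n]
      where
      go lo hi | lo + 1 >= hi = if lo ^ k == n  then Just lo  else Nothing
               | m ^ k <= n   = go m hi
               | otherwise    = go lo m
               where  m = (lo + hi) `div` 2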


> - Properties like commutativity cannot be specified in Haskell.
>  The compiler won't be able to automatically perform any optimizations
>  based on commutativity.

The point is that such optimizations *can* be performed in many
cases, though not in all: there may be hard ones.
When the implementors gather the strength to deal with this, they
will start on it. The language and the library should not cut off
this possibility in principle.
Even without any special support, these properties are partially
exploited. For example, the proposal says that  

  Additive  is a superclass for  AddSemigroup,  
  and in  AddSemigroup  (+)  is associative and commutative.
Also it declares
  instance (Additive a,Additive b) => Additive (a,b) where
                                             (x,y)+(u,v) = (x+u,y+v)
                                             ...
Hence, if the reader of the program sees the type 
                                        T = AddSemigroup a => (a,a),
then one expects that  (+)  is associative and commutative on  T.
This follows from the instance for  (a,b)  and from the requirement
on  AddSemigroup.
Such a conclusion is impossible for     T = Num a => (a,a)  
in Haskell-98.
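
For concreteness, a minimal self-contained version of that pair
instance (the class is pared down to a single method; this is an
illustration, not the proposal's definition):

    import Prelude hiding ((+))
    import qualified Prelude

    class Additive a  where
      (+) :: a -> a -> a

    instance Additive Integer  where
      (+) = (Prelude.+)

    -- componentwise addition; if (+) is associative and commutative
    -- on  a  and on  b,  then it is so on  (a,b)  as well
    instance (Additive a, Additive b) => Additive (a,b)  where
      (x,y) + (u,v) = (x + u, y + v)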


> - belongs is strange. IMHO it should always return True for valid
>  arguments, and invalid arguments should be impossible to construct
>  if the validity can be checked at all.

Well ... just ignore `belongs'.
Probably, the proposal says that `belongs' can be ignored.
`belongs' is there for mental discipline. Seeing the word `belongs',
the user has to recall which values of the type are considered as
belonging to the Domain - which ones are in the canonical form.
Now, what can we do about making invalid arguments impossible to
construct?
For example, Haskell-98 provides  %  to create a value of the
abstract type  Ratio a,  so the user cannot create any wrongly
represented rational.
The disadvantage is that  n % m  will compute a  gcd  for the
cancellation to the canonical form, even if the user knows from
the start that (n,m) is already canceled.

For the common design style with abstract data types I suggest the
following approach.

  Provide the constructing function for the abstract type with an
  additional argument
                       cn :: Bool
  so that the user may call the constructor with  cn = True  when one
  knows that the argument data are already in the canonical form.
  Motivation:
    * this saves the cost in some cases;
    * setting  cn = True  for non-canonical arguments is less likely
      to happen than forgetting to bring the arguments to the
      canonical form (?)

  For the types where the additional argument spoils the notation,
  provide another constructing function.

For example,  Ratio  in Haskell-98,  or  Fraction  in the proposal
could be built as
                  n %  d    -- find gcd, cancel ...
                  n %% d    -- skip gcd ... only build the construct
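
A hedged sketch of such a pair of constructors (the names are
illustrative; sign normalization and a zero denominator are not
handled):

    module Fraction ( Fraction, (%), (%%) )  where

    data Fraction a = F a a           -- data constructor not exported

    -- canonicalizing constructor: cancel by gcd
    (%) :: Integral a => a -> a -> Fraction a
    n % d = F (n `quot` g) (d `quot` g)  where  g = gcd n d

    -- trusting constructor: the caller promises (n,d) is already in
    -- the canonical form, so the gcd computation is skipped
    (%%) :: a -> a -> Fraction a
    n %% d = F n d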

Another example:               data Pol a = Pol Char [(a,Integer)]
(contrived) is for polynomials.
Then, for the polynomial     f  =           2*x^3 +x^2 +4*x +5
the value                    f' = Pol 'x' [(2,3),(1,2),(4,1),(5,0)]
is a representation of  f.
There are infinitely many values of the same type as  f'  that may
denote the same polynomial. One may cons (0,4) onto its monomial
list, re-order the list in  n!  ways ...  The user can create a
wrongly represented polynomial by forgetting to sort the list.

Maybe this can be improved by making  Pol  an abstract type and
introducing the constructing function
                       pol :: Bool -> Char -> [(a,Integer)] -> Pol a
                              --cn    varName  mons
which also brings the given monomial list to the canonical form - if
cn = False.
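
A small sketch of such a constructing function (my simplification:
monomials of equal degree are not merged):

    import Data.List (sortBy)

    data Pol a = Pol Char [(a,Integer)]  -- hidden in the abstract type

    -- cn = True:  the caller promises the list is already canonical;
    -- cn = False: drop zero coefficients, sort by decreasing degree
    pol :: (Eq a, Num a) => Bool -> Char -> [(a,Integer)] -> Pol a
    pol True  v mons = Pol v mons
    pol False v mons = Pol v (sortBy (\(_,i) (_,j) -> compare j i)
                                     [m | m@(c,_) <- mons, c /= 0])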


> - Tries to turn a compiled language into an interpreted language.
>  FuncExpr, too much parsing (with arbitrary rules hardwired into
>  the language), too much runtime checks.

FuncExpr is not the whole initial language.
And this is an extra, optional feature.
I put everything I like into  basAlgPropos,  just in case.
Let the jury choose the realistic part of it.
Why is the interpreted  FuncExpr  good?
Because, for example, Haskell might need to treat the really  Real
(ReallyReal) numbers (Float does not model them).
They can be represented adequately as infinite sequences of
some items. Representing an infinite sequence as merely a "lazy"
list, or maybe as a black-box function  f :: Integer -> a,  is not
so good, because deciding (==) will be incomplete in principle: the
program can compare only finite parts of the sequences.
If  f  is given by its interpreted program expression, the program
can analyze the full information about the sequence and may have, in
principle, more chances to decide the equality of the sequences
(though this is not always possible for an algorithm).
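
To illustrate the difference (only an illustration of the idea;
FuncExpr in the proposal is far richer than this), compare a
black-box sequence with an inspectable expression:

    -- a black-box sequence: can only be sampled point by point
    type SeqFun = Integer -> Integer

    -- an explicit expression that a program can analyze as data
    data SeqExpr = Var                        -- the index  n
                 | Lit Integer
                 | Add SeqExpr SeqExpr
                 | Mul SeqExpr SeqExpr
                   deriving (Eq, Show)

    evalSeq :: SeqExpr -> SeqFun
    evalSeq Var       n = n
    evalSeq (Lit k)   _ = k
    evalSeq (Add f g) n = evalSeq f n + evalSeq g n
    evalSeq (Mul f g) n = evalSeq f n * evalSeq g n

    -- two SeqFun values can only be compared at finitely many points,
    -- while two SeqExpr values can, at least sometimes, be proved
    -- equal symbolically (e.g. by normalizing the expressions)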

Generally: the Haskell language drops the explicit function data,
keeping only the black boxes.
This is against the spirit of the lambda calculus of Haskell B. Curry.
The user can write a lambda-like interpreter in Haskell.
But why do it, if a Haskell implementation probably contains some
interpreter inside it already?  Maybe there should be some standard
interpreter in the library. I doubt it.


> - It's not true that it's "not necessary to dig into mathematics".
>  I studied mathematics and did not have that much of algebra.

"not necessary" - to use the library.
To discuss it sensibly, and especially, to make a decision - it 
might need more understanding. What can one do about this?
Consult to your mathematician colleagues. As to our mail list,
Marc Van Dongen, J.Karczmarczuk, maybe some others, deal with these 
matters.
To *use* the  basAlgPropos  library would not need to study algebra 
any more than using, say,  Num, Integral  of  Haskell-98.

As I said above, Haskell had chosen all this long ago, by stepping 
in the path of "overloaded" standard algebra.


> - I prefer minBound to looking at element under Just under Just under
>  tuple of osetBounds.

This is not a problem. We can add to the library a polymorphic
function  minBound  which extracts what is needed from this tuple.
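
For illustration only: I do not reproduce here the exact type of
osetBounds from the proposal; the class below is an invented stand-in
with the shape the quoted complaint describes.

    class OrderedSet a  where
      -- (lower, upper) bounds, each possibly absent
      osetBounds :: a -> (Maybe (Maybe a), Maybe (Maybe a))

    minBound' :: OrderedSet a => a -> a
    minBound' x = case osetBounds x of
                    (Just (Just lo), _) -> lo
                    _                   -> error "minBound': no lower bound"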


> - Uses ugly character and string arguments that tune the behavior,
>  e.g. in syzygyGens, divRem, canFr. I like Haskell98's divMod+quotRem
>  better.

For example,  canFr  brings the fraction to the canonical form.
In what way do you explain to  canFr  that these particular
arguments  n, m  allow one to skip the cancellation by GCD, or to
skip some other part of the computation?
In programming, these are called *keys*, or *options*, for the
function call.
Avoiding them would require introducing several function names for
the same mathematical operation, which is indeed an ugly thing to do.
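
A toy illustration of the key style: one function name, and a key
that chooses whether to spend the gcd computation. The mode letters
here are invented for this example, not taken from the proposal's
canFr.

    canFr' :: Char -> (Integer,Integer) -> (Integer,Integer)
    canFr' 'g' (n,d) = (n `quot` g, d `quot` g)  where  g = gcd n d
    canFr' 'i' (n,d) = (n, d)          -- caller says: already canonical
    canFr' c   _     = error ("canFr': unknown key " ++ [c])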

Also,  Haskell98's divMod+quotRem
do not care in what generic situations an operation of the kind
"divide with remainder" may occur. This is why they do not provide
the keys.
Generally,  basAlgPropos  is designed for a more generic situation
than the Haskell-98 algebra.
For example, it is quite evident that  Ratio  cancellation and
division with remainder *should not rely on Ord*: they should not
mention positiveness or negativeness of the arguments, because they
can be applied to polynomials and other good and non-trivial things,
where the very existence of Ord is in question.

If you are saying that the keys slow down the performance, I would
answer that  %  in Haskell-98 costs more, due to the  gcd  involved,
than matching against the key  "" | "g" | "i"  in  canFr  of the
proposal. Also, this key could be revised to be a  Char.
In other cases, the key matching costs much less than the rest of
the function.


> - Uses unneeded sample arguments, e.g. in toEnum, zero, primes, read.

This is the Haskell language's fundamental problem of the sample
argument; it is not me who introduced it.
For example, a finite set has to export its cardinality:

  class .. => FiniteSet a where card :: Integer

Sadly, you cannot express such a program in Haskell, because the
type of  card  does not mention the class variable  a.
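With a sample argument the class becomes expressible (a sketch, not
the proposal's actual class):

  class FiniteSet a  where
    card :: a -> Integer    -- the argument is only a sample fixing the type
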
Further, if I designed the Haskell-98 library, I would suggest, for
example,      fromInteger :: a -> Integer -> a
rather than   fromInteger :: Integer -> a

One more (contrived) example: an additive group should provide zero.

  class ...=> AdditiveGroup a where zero :: a

In many places in a program, like      f x = g x zero
the compilers will report an ambiguity.
The programmer improves it:            f x = g x (zero `asTypeOf` x)
(zero :: a  would not work),
and the compiler is satisfied.
But in the proposal it is simpler:     f x = g x (zero x),
because of the type
                       ... zero :: a -> a
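
A minimal sketch of this sample-argument style (the class is
simplified to a single method, and  g  is taken as a parameter here
just to make the example self-contained):

  class AdditiveGroup a  where
    zero :: a -> a            -- the argument is only a sample

  instance AdditiveGroup Integer  where
    zero _ = 0

  -- no asTypeOf and no ambiguity:
  f :: AdditiveGroup a => (a -> a -> b) -> a -> b
  f g x = g x (zero x)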

------------------
Sergey Mechveliani
[EMAIL PROTECTED]







