Re: Revamping the numeric classes

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

Mon, 12 Feb 2001 00:02:00 +0100 (MET), Bjorn Lisper [EMAIL PROTECTED] writes:

 Functions themselves are never lifted. They just appear to be lifted
 when applied to arguments of certain types, and then the function
 application is always statically resolved into an expression where
 the function has its original type.

This does not solve the problems. Instead of composed liftings I will
split the code into separate bindings. Suppose I write:

let
g x y = x + y
f x y = g x y
in f [1, 2] [10, 20] :: [[Int]]

What does it mean? I could mean this:

let
g :: Int -> Int -> Int
g x y = x + y
f :: Int -> [Int] -> [Int]
f x y = g x y
in f [1, 2] [10, 20] :: [[Int]]

which results in [[11, 21], [12, 22]], or this:

let
g :: Int -> Int -> Int
g x y = x + y
f :: [Int] -> Int -> [Int]
f x y = g x y
in f [1, 2] [10, 20] :: [[Int]]

which results in [[11, 12], [21, 22]]. Either way, somebody loses (whoever
thought that his version would be chosen by the compiler).
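Spelled out with explicit maps, the two readings are (a sketch; g is as
above, and the names fA, fB, resA, resB are mine):

g :: Int -> Int -> Int
g x y = x + y

-- Reading 1: g lifted over y inside f, the call to f lifted over x.
fA :: Int -> [Int] -> [Int]
fA x y = map (g x) y

resA :: [[Int]]
resA = map (\x -> fA x [10, 20]) [1, 2]   -- [[11,21],[12,22]]

-- Reading 2: g lifted over x inside f, the call to f lifted over y.
fB :: [Int] -> Int -> [Int]
fB x y = map (\x' -> g x' y) x

resB :: [[Int]]
resB = map (fB [1, 2]) [10, 20]           -- [[11,12],[21,22]]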

If you think it helps that bodies of let-bound variables are
typechecked prior to their usage, let's transform the let into lambdas
(nothing here is used polymorphically, so it is possible):

(\g -> (\f -> f [1, 2] [10, 20] :: [[Int]]) (\x y -> g x y))
(\x y -> x + y)

Now it is not clear in what order this should be typechecked, and
different orders give different results.

It can be beta/eta-reduced to
[1, 2] + [10, 20] :: [[Int]]
and I really don't know what meaning you would give to it.
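For the record, the two incompatible meanings correspond to these
ordinary Haskell terms (a sketch; v1 and v2 are my names):

v1, v2 :: [[Int]]
v1 = map (\a -> map (a +) [10, 20]) [1, 2]   -- [[11,21],[12,22]]
v2 = map (\b -> map (+ b) [1, 2]) [10, 20]   -- [[11,12],[21,22]]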

Anyway, examples are pointless. Functional programming, as opposed to
imperative programming, leads to more common use of lists and functions
as values (instead of repeated stateful calls you return all elements
in a list to process them later), and also of Maybes, tuples etc.
Functions explicitly take as arguments the things they depend on and
explicitly return the things they modify, functions are curried, and
there are combinators like (.) or curry which manipulate functions
without applying them...

All this means that there are many more places where somebody can
make a type error by mismatching the level of lifting of a function
or the level of promotion of a value.

When I make a type error, the last thing I want from a compiler
is to guess what I could have meant and silently compile incorrect code
based on this assumption. I could have made the error in a different
place!

 Does
 f x y = (x, x + y)
 have type
 Num a => a - a -> (a, a)
 and thus cannot be used at the type
 Int -> [Int] -> (Int, [Int])
 even though, if its body were inlined into an expression requiring that
 type, it could (by lifting x+y to map (x+) y)?
 
 Your function has its polymorphism constrained to the Num class. So we could
 allow elemental overloading (on the uncurried form of f) as long as [Int] is
 not an instance of Num.

Suppose [Int] is not Num. I want to treat f as if it meant
f :: Int -> [Int] -> (Int, [Int])
f x y = (x, map (x+) y)
because you told me that map is optional: it will be inserted
automatically when necessary.

 BTW, the type signature for your "lifted f" (if it existed) should be
 (Int,[Int]) -> [(Int, Int)]. See second example below.

I want to lift (+) used inside f. Haskell does not require me
to write all type signatures, so I didn't write one in this case,
because I know that Haskell's type system recovers principal types
automatically (except for ambiguities related to classes).

 The rewrite of the overloaded application is guided only by type
 information.

Often there is no type information at all at a given place. It appears
only at some later point, when we are considering a toplevel definition
with a type signature. Usually types are inferred from the definitions
and usages of each identifier, and a mismatch means that there is a
type error. There is no attempt to guess how to fix it automatically,
because the error might be detected at a different place than where the
change really should be made.

 So with these rules alone fancyPrint 17 would not be rewritten into
 fancyPrint (repeat 17). But the rules can of course be extended to
 cover this case.

I thought it would be rewritten to fancyPrint [17], as this is the obvious
way to convert a scalar to a list...

See, it cannot be implicit, because different people mean different
things.

 Then he applies it to a single String. Guess what? It is not
 promoted to a single-element list, but each character is printed
 separately. Oops!
 
 Which is the original meaning of fancyPrint applied to a string. Why "oops"?

Because I forgot that String is a list, and treated it as a scalar
rather than as a list. Today I can afford to forget this: in the few
cases where it matters (some instances) the compiler will remind me
with an error. This is not the case under your proposal.

 fancyPrint will always be a function over lists, no matter whether
 its use is overloaded on arguments of other types or not.

Similarly, (+) will always be a function over scalars, unless you
make Num instances for lists.


Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread William Lee Irwin III

In a later posting Marcin Kowalczyk says:
 If (+) can be implicitly lifted to functions, then why not signum?
 Note that I would lift neither signum nor (+). I don't feel the need.
  ...

On Mon, Feb 12, 2001 at 09:33:03AM +, Jerzy Karczmarczuk wrote:
 I not only feel the need, but I feel that it is important that the
 additive structure in the codomain is inherited by functions. In a more
 specific context: the fact that linear functionals over a vector space
 themselves form a vector space is simply *fundamental* for quantum
 mechanics, for crystallography, etc. You don't need to be a Royal
 Abstractor to see this.

I see this in a somewhat different light, though I'm in general agreement.

What I'd like to do is to be able to effectively model module structures
in the type system, and furthermore be able to simultaneously impose
distinct module structures on a particular type. For instance, complex
n-vectors are simultaneously C-modules and R-modules, and an arbitrary
commutative ring R is at once a Z-module and an R-module. Linear
functionals, which seem like common beasts (try a partially applied
inner product), live in the mathematical structure Hom_R(M,R), which is
once again an R-module and, perhaps, by inheriting structure on R, an
R'-module for various R'. So how does this affect Prelude design? Examining
a small bit of code could be helpful:

-- The group must be Abelian. I suppose anyone could think of this.
class (AdditiveGroup g, Ring r) => LeftModule g r where
    (<>) :: r -> g -> g

instance AdditiveGroup g => LeftModule g Integer where
    n <> x | n == 0 = zero
           | n < 0  = -((-n) <> x)
           | n > 0  = x + (n-1) <> x

... and we naturally acquire the sort of structure we're looking for.
But this only shows a possible outcome, and doesn't motivate the
implementation. What _will_ motivate the implementation is the sort
of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a
Z-module, so we have mixed-mode arithmetic by a different
means from the usual implicit coercion.

(2) This sort of business handles vectors quite handily.

(3) The following tidbit of code immediately handles curried innerprods:

instance (AdditiveGroup group, Ring ring) => LeftModule (group -> ring) ring
    where
    r <> g = \g' -> r <> g g'
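so that, hypothetically, a partially applied inner product scales
pointwise like any other module element (innerprod and the vector types
are assumed here, not defined above):

-- innerprod        :: SomeVector foo -> SomeVector foo -> foo
-- innerprod v      :: SomeVector foo -> foo   -- a linear functional
-- r <> innerprod v :: SomeVector foo -> foo   -- scaled pointwise, via
--                                             -- the instance above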

(4) Why would we want to curry innerprods? I envision:

type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

surface :: SurfaceAPoles bar
innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that
we think we might need them at all?

If we're doing things like determining the light reflected off of the
various surfaces we will want to scale and add together the various
beasties. Deferring the innerprod operation so we can do this is inelegant
and perhaps inflexible compared to:

lightSources :: [(SomeVector foo -> Intensity foo, Position)]
lightSources = getLightSources boundingSomething
reflection = sum $ map (\(f,p) -> getSourceWeight p * f) lightSources
reflection `fmap` normalsOf faces where faces = facesOf surface

and now, in lightSources, perhaps ambient light can be represented
very conveniently, or at least the function type serves to abstract out
the manner in which the orientation of a surface determines the amount
of light reflected off it.

(My apologies for whatever inaccuracies are happening with the optics
here, it's quite far removed from my direct experience.)

Furthermore, within things like small interpreters, it is perhaps
convenient to represent the semantic values of various expressions by
function types. If one should care to define arithmetic on vectors and
vector functions in the interpreted language, support in the source
language allows a more direct approach. This would arise within solid
modelling and graphics once again, as little languages are often used
to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector
and matrix algebra? I suspect graphics isn't the only place where
linear algebra could arise. All sorts of differential equation models
of physical phenomena, Markov models of state transition systems, even
economic models at some point require linear algebra in their
computational methods.  It's something I at least regard as a fairly
fundamental and important aspect of computation. And to me, that means
that the full power of the language should be applied toward beautifying,
simplifying, and otherwise enabling linear algebraic computations.


Cheers,
Bill
P.S.:   Please forgive the harangue-like nature of the post, it's the best
I could do at 3AM.




Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

 I not only feel the need, but I feel that it is important that the
 additive structure in the codomain is inherited by functions.

It could support only the basic arithmetic. It would not automatically
lift an expression which uses (<) and if. It would be inconsistent to
provide a shortcut for a specific case, where generally things must be
lifted explicitly anyway. Note that it does make sense to lift (<) and if;
only the type system does not permit it implicitly, because a type is fixed
to Bool.

Lifting is so easy to do manually that I would definitely not constrain
the whole Prelude class system only to have convenient lifting of basic
arithmetic. When it happens that an instance of an otherwise sane class
makes sense for functions, then OK, but nothing more.
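For instance, two generic combinators cover most of the lifting one
meets in practice (a minimal sketch; the names are mine):

lift1 :: (b -> c) -> (a -> b) -> (a -> c)
lift1 = (.)

lift2 :: (b -> c -> d) -> (a -> b) -> (a -> c) -> (a -> d)
lift2 op f g = \x -> op (f x) (g x)

-- e.g. lift2 (+) sin cos is the function \x -> sin x + cos x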

-- 
Marcin 'Qrczak' Kowalczyk





Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

 I want to be *able* to define mathematical operations upon objects
 which by their intrinsic nature permit so!

You can't do it in Haskell as it stands now, no matter what the Prelude
would be.

For example, I would say that with the definition
abs x = if x >= 0 then x else -x
it's obvious how to obtain abs :: ([Int] -> Int) -> ([Int] -> Int): apply the
definition pointwise.

But it will never work in Haskell, unless we change the type rules for if
and the type of the result of (>=).

You are asking to let
abs x = max x (-x)
work on functions. OK, in this particular case it can be made to work by
defining appropriate instances, but only because this is a special case
where all intermediate types are appropriately polymorphic.

This technique cannot work in general, as the previous example shows. So
IMHO it's better not to pretend that functions can be implicitly
lifted. Better to provide as convenient a way as possible of manually
lifting arbitrary functions, so that it doesn't matter whether they have
a fixed Integer in the result or not.
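For example, the abs above, which cannot be lifted implicitly because of
if and (>=), takes one line to lift by hand (a sketch at one fixed type):

absL :: ([Int] -> Int) -> ([Int] -> Int)
absL f = \v -> if f v >= 0 then f v else -(f v)  -- the definition, pointwise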

You are asking for an impossible thing.

 I defined a hundred times some special functions to add lists or
 records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
 (.*); I have in principle nothing against it, but these operators are
 used elsewhere, in other languages, CAML and Matlab; I use (*)).

Please show a concrete proposal how Prelude classes could be improved.

-- 
Marcin 'Qrczak' Kowalczyk





Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Jerzy Karczmarczuk

Marcin Kowalczyk continues:

 On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
 
  I want to be *able* to define mathematical operations upon objects
  which by their intrinsic nature permit so!
 
 You can't do it in Haskell as it stands now, no matter what the Prelude
 would be.
 
 For example, I would say that with the definition
 abs x = if x >= 0 then x else -x
 it's obvious how to obtain abs :: ([Int] -> Int) -> ([Int] -> Int): apply the
 definition pointwise.
 
 But it will never work in Haskell, unless we change the type rules for if
 and the type of the result of (>=).
 
 You are asking to let
 abs x = max x (-x)
 work on functions. OK, in this particular case it can be made to work 
 

Why don't you try from time to time to attempt to understand what
other people want? And wait, say 2 hours, before responding? 

I DON'T WANT max TO WORK ON FUNCTIONS. I never did. I will soon need
something like it (because I am writing a graphical package where max
serves to intersect implicit graphical objects), but for very specific
functions which represent textures, NOT in general.

I repeat for the last time that I want to have those operations which
are *implied* by the mathematical properties. And anyway, if you replace
x >= 0 by x >= zero with an appropriate zero, this should work as well.
I want only that the Prelude avoid spurious dependencies.

This is the way I program in Clean, where there is no Num, and (+), (*),
zero, abs, etc. constitute classes by themselves. So, when you say:

 You are asking for an impossible thing.

My impression is that what is impossible is your way of interpreting/
understanding the statements (and/or desiderata) of other people. 

  I defined a hundred times some special functions to add lists or
  records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
  (.*); I have in principle nothing against it, but these operators are
  used elsewhere, in other languages, CAML and Matlab; I use (*)).
 
 Please show a concrete proposal how Prelude classes could be improved.

(Why do you precede your query with this citation? What do you have to say
here about the syntax proposed by Jón Fairbairn, or whatever??)

I am a Haskell USER. I have no ambition to save the world. The "proposal"
was presented in 1995 in Nijmegen (FP in education). Actually, it
wasn't: I concentrated on lazy power series etc., and the math-oriented
prelude was mentioned casually. Jeroen Fokker presented similar
ideas, implemented differently. 
If you have nothing else to do (but only in this case!) you may find 
the modified prelude, called math.hs, for Hugs (which needs a modified 
prelude.hs exporting primitives) at 

http://users.info.unicaen.fr/~karczma/humat/

This is NOT a "public proposal" and I *don't want* your public comments
on it. If you want to be nice, show me some of *your* Haskell programs.

Jerzy Karczmarczuk
Caen, France




Deja vu: Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Laszlo Nemeth


[incomprehensible (not necessarily wrong!) stuff about polynomials,
 rings, modules over Z and complaints about the current prelude nuked]

--- Marcin 'Qrczak' Kowalczyk writes ---

 Please show a concrete proposal how Prelude classes could be improved.

--- Jerzy Karczmarczuk replies ---

 I am a Haskell USER. I have no ambition to save the world. The "proposal"
 was presented in 1995 in Nijmegen (FP in education). Actually, it
 wasn't: I concentrated on lazy power series etc., and the math-oriented
 prelude was mentioned casually. Jeroen Fokker presented similar
 ideas, implemented differently. 

I'm afraid all this discussion reminds me of the one we had a year or two
ago. At that time the mathematically inclined side was led by Sergei,
who to his credit developed the Basic Algebra Proposal, which I don't
understand, but which many people seemed happy about at the time. And
then of course nothing happened, because no Haskell implementor has
bitten the bullet and implemented the proposal. This is
understandable, as supporting Sergei's proposal seems to be a lot of
work, most of which would be incompatible with current
implementations. And no one wants to maintain *two* Haskell compilers
within one.

Even if this discussion continues and another brave soul develops
another algebra proposal, I am prepared to bet with both of you one
year's supply of Ben and Jerry's (not Jerzy :)!) ice cream that nothing
will happen on the implementors' side. It is simply too
much work for an *untested* (in practice, for teaching etc.)
alternative prelude.

So instead of wasting time, why don't you guys ask the implementors to
provide a flag '-IDontWantYourStinkingPrelude' which would give you a
bare-metal compiler with no predefined types, functions, classes, no
derived instances, no fancy stuff, and build and test your proposals
with it?
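Something close to that flag exists already, for what it's worth: GHC
understands -fno-implicit-prelude, and even in plain Haskell 98 a module
can empty its namespace by hand. A minimal sketch (module, class and
operator names are mine):

module BareAlgebra where

import Prelude ()   -- bring no Prelude names into scope

class AdditiveGroup a where
    zero :: a
    (+.) :: a -> a -> a
    neg  :: a -> a

-- (Numeric literals would still desugar to Prelude.fromInteger in
-- Haskell 98, so a truly bare compiler needs the flag as well.)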

I guess the RULES pragma (in GHC) could be abused to allow access to
the primitive operations (on Ints), but you are still likely to lose
much of the elegance, conciseness and perhaps even some efficiency of
Haskell (e.g. list comprehensions). Still, this should allow us to gain
experience in what sort of support is essential for providing
alternative prelude(s). Once we have learnt how to decouple the prelude
from the compiler, and gained experience with alternative preludes,
implementors would have no excuse not to provide the possibility
(unless it turns out to be completely impossible or impractical, in
which case we will have learnt something genuinely useful).

So, Marcin (as you are one of the GHC implementors), how much work
would it be to disable the disputed Prelude stuff within the compiler,
and what would be lost?

Laszlo

[Disclaimer: Just my 10 wons. This message is not in disagreement or
 agreement with any of the previous messages]




Re: Revamping the numeric classes

2001-02-11 Thread Bjorn Lisper

Marcin Kowalczyk:
Me:
 No, the transformation is a single step procedure where a term
 is transformed into a typeable term (if possible) with a minimal
 amount of lifting. You don't compose transformations.

So functions implicitly lifted can't be used in the same ways as
functions originally defined as lifted (namely, they can't be lifted
again)... This is bad.

Functions themselves are never lifted. They just appear to be lifted when
applied to arguments of certain types, and then the function application is
always statically resolved into an expression where the function has its
original type.

 Due to the principle of minimal lifting (which implies already
 well-typed terms should not be transformed) a call to a polymorphic
 function should not be transformed unless there is a dependence
 between the type variable(s) of the function and of the argument(s)
 in the application.

Does
f x y = (x, x + y)
have type
Num a => a -> a -> (a, a)
and thus cannot be used at the type
Int -> [Int] -> (Int, [Int])
even though, if its body were inlined into an expression requiring that
type, it could (by lifting x+y to map (x+) y)?

Your function has its polymorphism constrained to the Num class. So we could
allow elemental overloading (on the uncurried form of f) as long as [Int] is
not an instance of Num.

Yes, if [Int] is made an instance of Num then the meaning of calls to f on
lists will change from the elemental meaning to the meaning defined through
the instance declarations for [Int]. This can surely be a problem in some
cases. But this is not a property of the elemental overloading mechanism per
se, but rather of having two different overloading mechanisms in the
language, each powerful enough to specify conflicting overloadings.

BTW, the type signature for your "lifted f" (if it existed) should be
(Int,[Int]) -> [(Int, Int)]. See second example below.

You can't lift an arbitrary function of type Int -> Int -> (Int, Int)
into Int -> [Int] -> (Int, [Int]) without knowing its definition.
Try it with g x y = (y, x).

Consider uncurried g: g (x,y) = (y,x) and assume it has an explicit type
declaration of (Int,Int) -> (Int,Int) (so it's not polymorphic).
If x :: Int and l :: [Int], then g (x,l) is rewritten to
zipWith (curry g) (repeat x) l :: [(Int,Int)].
The rewrite of the overloaded application is guided only by type
information. (Again, note that only the application of g is rewritten;
neither g itself nor its type changes.)

Suppose there is a function
fancyPrint :: Printable a => [a] -> IO ()
which applies some fancy printing rules to a list of printable values.

A programmer knows that this function can be used on lists as well
as on single elements, because those elements will be promoted to
single-element lists as necessary. So far so good.

The rules I have sketched so far only promote a value in connection with
elemental overloading. A good example is g (x,l) above. Here, g is
elementwise applied to l and in the process x becomes promoted into
(repeat x), similar to the original scaling example a*x where a is a scalar
and x a matrix. So with these rules alone fancyPrint 17 would not be
rewritten into fancyPrint (repeat 17). But the rules can of course be
extended to cover this case.

Then he applies it to a single String. Guess what? It is not
promoted to a single-element list, but each character is printed
separately. Oops!

Which is the original meaning of fancyPrint applied to a string. Why "oops"?
A programmer must be aware of the meaning of the functions he writes. fancyPrint
will always be a function over lists, no matter whether its use is
overloaded on arguments of other types or not.

 It's not enough, because the least lifted type is not the most general
 answer. Answers for different amounts of liftedness are incompatible
 with that answer - they are not its instances as in the HM typing.
 
 It does not matter that they are not instances.

It does. It's not enough to check that there exists a set of places
to insert map or zipWith which transforms what is written into what I
need. Because there can be a different, incompatible set of places,
which is considered "better" by the compiler, and my set is not
obtainable from what the compiler has done. See the first example
above.

(I think your example was broken, but in principle you're right.) Of course,
the use of the overloading I have described (and of any kind of overloading)
is justified only if the overloading matches the intuition of the programmer.
If it misleads you then it is harmful. Regarding elemental overloading, my
experience is that data-parallel programmers quickly develop a strong
intuition for it. What I have seen through examples is that elemental
overloading using the rules I have sketched and the "minimal lifting"
principle always seems to produce the intuitively correct meaning, also in a
language like Haskell. If the produced result is the "right" one, then the
fact that other possible transformations produce terms with incompatible
types is 

Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread Marcin 'Qrczak' Kowalczyk

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel [EMAIL PROTECTED] writes:

 Can you demonstrate a revised hierarchy without Eq? What would
 happen to Ord and the numeric classes with default class method
 definitions that use (==) either explicitly or in pattern matching
 against numeric literals?

OK, then you can't write these default method definitions.

I'm against removing Eq from the numeric hierarchy, against making Num
instances for functions, but I would probably remove Show. I haven't
seen a sensible proposal of a replacement of the whole hierarchy.

 In an instance declaration, if a method requires operations of
 another class which is not a superclass of the class being instanced,
 it is sufficient to place the requirement in the context,

Better: it is sufficient if the right instance is defined somewhere.

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread William Lee Irwin III

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel [EMAIL PROTECTED] writes:
 Can you demonstrate a revised hierarchy without Eq? What would
 happen to Ord and the numeric classes with default class method
 definitions that use (==) either explicitly or in pattern matching
 against numeric literals?

I anticipate that some restructuring of the numeric classes must be
done in order to accomplish this. I am, of course, attempting to
contrive such a beast for my own personal use.

On Sun, Feb 11, 2001 at 07:59:38AM +, Marcin 'Qrczak' Kowalczyk wrote:
 OK, then you can't write these default method definitions.
 I'm against removing Eq from the numeric hierarchy, against making Num
 instances for functions, but I would probably remove Show. I haven't
 seen a sensible proposal of a replacement of the whole hierarchy.

Well, there are a couple of problems with someone like myself trying
to make such a proposal. First, I'm a bit too marginalized and/or
committed to a radical alternative. Second, I don't have the right
associations or perhaps other resources.

Removing Eq sounds like a good idea to me, in all honesty, though I
think numeric instances for functions (at least by default) aren't
great ideas. More details follow:

Regarding Eq, there are other types besides functions on which it might
not be a good idea to define equality, either because it is not
efficiently implementable or because it is otherwise inappropriate.
Matrix types aren't good candidates for defining equality, for one.
Another type you might not want to define equality on is formal power
series represented by infinite lists, since equality tests will never
terminate. A third counterexample comes, of course, from graphics,
where one might want to conveniently scale and translate solids.
Testing meshes and surface representations for equality is once
again not a great idea. Perhaps these counterexamples are a little
contrived, but perhaps other people can come up with better ones.
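The power-series counterexample is easy to make concrete (a sketch; the
representation is assumed):

newtype PS a = PS [a]   -- formal power series: an infinite coefficient list

instance Eq a => Eq (PS a) where
    PS as == PS bs = as == bs   -- structural equality...

-- ...so PS (repeat 0) == PS (repeat 0) never terminates: comparing two
-- infinite lists elementwise can refute equality but never confirm it.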

As far as the function instances of numeric types go, they have some
nasty properties that probably make them a bad idea.
In particular, I discovered that numeric literals' fromInteger
property creates the possibility that something which is supposed
to be a scalar or some other numeric result might accidentally be
applied. For instance, given an expression with an intermediate
numeric result like:

f u v . g x y $ h z

which is expected to produce a number, one could accidentally apply
a numeric literal, or something bound to one, to some arguments, creating
a bug. So this is at least partial agreement, though I think the
instances should be available in controlled circumstances. Local module
importations and/or scoped instances might help here, or perhaps
separating out the code that relies upon them into a module where the
instance is in scope, as this probably needs control that tight.
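A self-contained sketch of the hazard (the instances are illustrative
dummies, not a proposal):

instance Show (a -> b) where show _ = "<function>"       -- dummies, only to
instance Eq (a -> b) where _ == _ = error "(==) on fns"  -- satisfy Num's
                                                         -- superclasses
instance Num b => Num (a -> b) where
    f + g       = \x -> f x + g x
    f * g       = \x -> f x * g x
    f - g       = \x -> f x - g x
    negate f    = negate . f
    abs f       = abs . f
    signum f    = signum . f
    fromInteger = const . fromInteger

oops :: Int
oops = 2 3   -- 2 elaborates to const 2 :: Int -> Int, so this typechecks
             -- and evaluates to 2: a well-typed bug, not a type error.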

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel [EMAIL PROTECTED] writes:
 In an instance declaration, if a method requires operations of
 another class which is not a superclass of the class being instanced,
 it is sufficient to place the requirement in the context,

On Sun, Feb 11, 2001 at 07:59:38AM +, Marcin 'Qrczak' Kowalczyk wrote:
 Better: it is sufficient if the right instance is defined somewhere.

Again, I'd be careful with this idea. It's poor design to unnecessarily
restrict the generality of code. Of course, it's also poor design not to
try to enforce necessary conditions in the type system, which is why
library design is nontrivial. And, of course, keeping it simple enough
for use by the general populace (or whatever semblance thereof exists
within the Haskell community) might well conflict with the desires of
persons like myself, who could easily fall prey to the accusation that
they're trying to turn Haskell into a computer algebra system; that adds
yet another constraint to the library design, making it even tougher.


Cheers,
Bill




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread Brian Boutel

Marcin 'Qrczak' Kowalczyk wrote:


 I'm against removing Eq from the numeric hierarchy, against making Num
 instances for functions, but I would probably remove Show. I haven't
 seen a sensible proposal of a replacement of the whole hierarchy.
 

Then we probably are in agreement. 

--brian




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread William Lee Irwin III

On 11-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
 There may be some misunderstanding here. If you are talking about types
 for which equality is always undefined, then I agree with you, but that
 is not what I was talking about. I was thinking about types where
 equality is defined for some pairs of argument values and undefined for
 others - I think the original example was some kind of arbitrary-
 precision reals.

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
 The original example was treating functions as a numeric type.  In the
 case of functions, computing equality is almost always infeasible.
 But you can easily define addition etc. pointwise:
   
   f + g = (\ x -> f x + g x)

I have a fairly complete implementation of this, with dummy instances of
Eq and Show, for those who want to see the consequences. I found,
interestingly enough, that any type constructor f with the following
three properties can have an instance of Num defined upon f a:

(1) it has a unary constructor to lift scalars 
(2) it has a Functor instance
(3) it has an analogue of zip which can be defined upon it

or, more precisely:

\begin{code}
instance (Eq (f a), Show (f a), Num a, Functor f,
          Zippable f, HasUnaryCon f) => Num (f a)
    where
    f + g = fmap (uncurry (+)) $ fzip f g
    f * g = fmap (uncurry (*)) $ fzip f g
    f - g = fmap (uncurry (-)) $ fzip f g
    negate = fmap negate
    abs = fmap abs
    signum = fmap signum
    fromInteger = unaryCon . fromInteger

class Zippable f where
    fzip :: f a -> f b -> f (a,b)

class HasUnaryCon f where
    unaryCon :: a -> f a

instance Functor ((->) a) where
    fmap = (.)

instance Zippable ((->) a) where
    fzip f g = \x -> (f x, g x)

instance HasUnaryCon ((->) a) where
    unaryCon = const
\end{code}

and this generalizes nicely to other data types:

\begin{code}
instance Zippable Maybe where
    fzip (Just x) (Just y) = Just (x,y)
    fzip _ Nothing = Nothing
    fzip Nothing _ = Nothing

instance HasUnaryCon Maybe where
    unaryCon = Just

instance Zippable [ ] where
    fzip = zip

instance HasUnaryCon [ ] where
    unaryCon = cycle . (:[])
\end{code}
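For concreteness, a few consequences of these instances (sketched as
evaluations; the Eq/Show dummies are assumed):

[1,2,3] + [10,20,30]    ~>  [11,22,33]    -- fzip = zip
Just 3 * Just 4         ~>  Just 12
((+1) + (*2)) 5         ~>  16            -- pointwise on functions
(7 :: Int -> Int) 42    ~>  7             -- fromInteger = const 7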

On 11-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
 Returning to the basic issue, I understood the desire to remove Eq as a
 superclass of Num to be so that people were not required to implement
 equality if they did not need it, not that there were significant
 numbers of useful numeric types for which equality was not meaningful. 

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
 The argument is the latter, with functions as the canonical example.

Well, usually equality as a mathematical concept is meaningful, but
either not effectively or not efficiently computable. Given an enumerable
and bounded domain, equality may be defined (perhaps inefficiently)
on functions by

\begin{code}
instance (Enum a, Bounded a, Eq b) => Eq (a -> b) where
    f == g = all (uncurry (==))
        $ map (\x -> (f x, g x)) [minBound..maxBound]
\end{code}

and as I've said in another post, equality instances on data structures
expected to be infinite or very large, where the semantics of equality
make them difficult to compute, or perhaps even cases where they are just
not useful, are also not good to force.
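At a small domain such as Bool, on the other hand, the instance above is
perfectly usable (a sketch of evaluations):

not == not          ~> True    -- at type Bool -> Bool
not == id           ~> False
(not . not) == id   ~> True    -- extensional equality, checked pointwise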


Cheers,
Bill




Re: Revamping the numeric classes

2001-02-11 Thread Tom Pledger

Marcin 'Qrczak' Kowalczyk writes:
 | Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger [EMAIL PROTECTED] writes:
 | 
 |  (x + y) + z
 |  
 |  we know from the explicit type signature (in your question that I was
 |  responding to) that x,y::Int and z::Double.  Type inference does not
 |  need to treat x or y up, because it can take the first (+) to be Int
 |  addition.  However, it must treat the result (x + y) up to the most
 |  specific supertype which can be added to a Double.
 | 
 | Approach it differently. z is Double, (x+y) is added to it, so
 | (x+y) must have type Double.

That's a restriction I'd like to avoid.  Instead: ...so the most
specific common supertype of Double and (x+y)'s type must support
addition.

 | This means that x and y must have type Double.  This is OK, because
 | they are Ints now, which can be converted to Double.
 | 
 | Why is your approach better than mine?

It used a definition of (+) which was a closer fit for the types of x
and y.

 :
 |  h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool
 | 
 | This type is ambiguous: the type variable b is needed in the
 | context but not present in the type itself, so it can never be
 | determined from the usage of h.

Yes, I rashly glossed over the importance of having well-defined most
specific common supertype (MSCS) and least specific common subtype
(LSCS) operators in a subtype lattice.  Here's a more respectable
version:

h :: Eq (MSCS a Int) => (Int -> a) -> Bool

 |  That can be inferred by following the structure of the term.
 |  Function terms do seem prone to an accumulation of deferred
 |  subtype constraints.
 | 
 | When function application generates a constraint, the language gets
 | ambiguous as hell. Applications are found everywhere through the
 | program! Very often the type of the argument or result of an
 | internal application does not appear in the type of the whole
 | function being defined, which makes it ambiguous.
 | 
 | Not to mention that there would be *LOTS* of these constraints.
 | Application is used everywhere. It's important to have its typing
 | rule simple and cheap. Generating a constraint for every
 | application is not an option.

These constraints tend to get discharged whenever the result of an
application is not another function.  The hellish ambiguities can be
substantially tamed by insisting on a properly constructed subtype
lattice.

Anyway, since neither of us is about to have a change of mind, and
nobody else is showing an interest in this branch of the discussion,
it appears that the most constructive thing for me to do is return to
try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.

Regards,
Tom




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-10 Thread Brian Boutel

Marcin 'Qrczak' Kowalczyk wrote:
 
 Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel [EMAIL PROTECTED] writes:
 
  Can you demonstrate a revised hierarchy without Eq? What would happen to
  Ord, and the numeric classes that require Eq because they need signum?
 
 signum doesn't require Eq. You can use signum without having Eq, and
 you can sometimes define signum without having Eq (e.g. on functions).
 Sometimes you do require (==) to define signum, but it has nothing to
 do with superclasses.
 

Let me restate my question more carefully:

Can you demonstrate a revised hierarchy without Eq? What would happen to
Ord and the numeric classes with default class method definitions that
use (==) either explicitly or in pattern matching against numeric
literals? Both Integral and RealFrac do this to compare or test the
value of signum.

In an instance declaration, if a method requires operations of another
class which is not a superclass of the class being instanced, it is
sufficient to place the requirement in the context, but for default
class method definitions, all class methods used must belong to the
class being defined or its superclasses.


--brian




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-10 Thread Brian Boutel

Fergus Henderson wrote:
 
 On 09-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
  Patrik Jansson wrote:
  
   The fact that equality can be trivially defined as bottom does not imply
   that it should be a superclass of Num, it only explains that there is an
   ugly way of working around the problem.
 ...
 
  There is nothing trivial or ugly about a definition that reflects
  reality and bottoms only where equality is undefined.
 
 I disagree.  Haskell is a statically typed language, and having errors
 which could easily be detected at compile time instead be deferred to
 run time is ugly in a statically typed language.

There may be some misunderstanding here. If you are talking about types
for which equality is always undefined, then I agree with you, but that
is not what I was talking about. I was thinking about types where
equality is defined for some pairs of argument values and undefined for
others - I think the original example was some kind of arbitrary-
precision reals. My remark about "a definition that reflects reality and
bottoms only where equality is undefined" was referring to this
situation.

Returning to the basic issue, I understood the desire to remove Eq as a
superclass of Num to be so that people were not required to implement
equality if they did not need it, not that there were significant
numbers of useful numeric types for which equality was not meaningful. 

Whichever of these was meant, I feel strongly that accommodating this and
other similar changes by weakening the constraints on what Num in
Haskell implies, is going too far. It devalues the Class structure in
Haskell to the point where its purpose, to control ad hoc polymorphism
in a way that ensures that operators are overloaded only on closely
related types, is lost, and one might as well abandon Classes and allow
arbitrary overloading.

--brian









Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-10 Thread Fergus Henderson

On 11-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
 Fergus Henderson wrote:
  
  On 09-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
   Patrik Jansson wrote:
   
The fact that equality can be trivially defined as bottom does not imply
that it should be a superclass of Num, it only explains that there is an
ugly way of working around the problem.
  ...
  
   There is nothing trivial or ugly about a definition that reflects
   reality and bottoms only where equality is undefined.
  
  I disagree.  Haskell is a statically typed language, and having errors
  which could easily be detected at compile time instead be deferred to
  run time is ugly in a statically typed language.
 
 There may be some misunderstanding here. If you are talking about types
 for which equality is always undefined, then I agree with you, but that
 is not what I was talking about. I was thinking about types where
 equality is defined for some pairs of argument values and undefined for
 others - I think the original example was some kind of arbitrary-
 precision reals.

The original example was treating functions as a numeric type.  In the
case of functions, computing equality is almost always infeasible.
But you can easily define addition etc. pointwise:

f + g = (\ x -> f x + g x)

 Returning to the basic issue, I understood the desire to remove Eq as a
 superclass of Num to be so that people were not required to implement
 equality if they did not need it, not that there were significant
 numbers of useful numeric types for which equality was not meaningful. 

The argument is the latter, with functions as the canonical example.

-- 
Fergus Henderson [EMAIL PROTECTED]  |  "I have always known that the pursuit
|  of excellence is a lethal habit"
WWW: http://www.cs.mu.oz.au/~fjh  | -- the last words of T. S. Garp.




Re: Revamping the numeric classes

2001-02-09 Thread Bjorn Lisper

 I see. So you can transform an arbitrary function of type a->b->c
 to a function of type [a]->b->[c], by applying
 \f x y -> map (\z -> f z y) x
 and similarly a->b->c to a->[b]->[c]. But then there are two ways of
 transforming a->b->c to [a]->[b]->[[c]]

 There should be no transformation to type [a]->[b]->[[c]] in this case.

Wait wait wait. You said that a->b->c is convertible to [a]->b->[c]
for *any* a,b,c. Now I have x = [a], y = b, z = [c] and use the
transformation x->y->z to x->[y]->[z] and obtain [a]->[b]->[[c]].

Both steps are legal, so their composition must be legal, or I don't
like this Haskell-like language anymore.

(You never seem to have liked it much ... :-)

No, the transformation is a single step procedure where a term is
transformed into a typeable term (if possible) with a minimal amount of
lifting. You don't compose transformations.

Let us write [a]^n for the type of n-deep lists of lists of a. If
f :: a -> b -> c, x :: [a]^m, and y :: [b]^n, then
f x y is transformed into a term with type [c]^max(m,n). This is the minimal
lifting necessary to obtain the correct elementwise application of f.
If m < n then promotion will take place on the first argument, if m > n on
the second, and if m = n then there will be an elementwise application
without promotion.
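Concretely (a sketch): if f :: Int -> Int -> Int, x :: [Int] (so m = 1)
and y :: Int (n = 0), then f x y is transformed into

zipWith f x (repeat y)   -- y promoted, f applied elementwise
-- e.g. with f = (+), x = [1,2,3], y = 10 this yields [11,12,13]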

Unless you say that a->b->c is convertible to a->[b]->[c] *except*
when a is a list. Then it's bad again. There should be no negative
conditions in the type system! Moreover, in a polymorphic function
you don't know yet whether a will be a list or not.

Due to the principle of minimal lifting (which implies that already
well-typed terms should not be transformed), a call to a polymorphic
function should not be transformed unless there is a dependence between
the type variable(s) of the function and those of the argument(s) in the
application. Such dependencies could possibly occur in recursive calls
in function definitions. Consider, for instance, the (somewhat
meaningless) definition

f x y = head (f [x,x] y)

During type inference, f will first be assigned the type a -> b -> c.  In
the recursive call, f will be called on arguments with types [a] and b. This
causes a lifting to occur, where f is applied elementwise to [x,x] with
promotion of y. The transformed definition becomes

f x y = head (zipWith f [x,x] (repeat y))

On the other hand, if f somewhere else is applied to some other arguments,
with types not containing a, then no transformation of that call will occur.

There are no full and partial applications because of currying.
It's impossible to say when you should consider a function as a
multiparameter function. There are only single-argument functions.
So you would have to say that some rules apply only *unless* the result
has a function type, which again does not work.

Touché! To keep the discussion simple I have kept multiparameter functions
curried, but you nailed me. Yes, there will be ambiguities if you allow
overloading on other than the first argument in a curried definition (since
there really is only one argument). So for a function f :: a -> b -> c we
should only allow elementwise overloadings corresponding to functions of
types [a]^n -> [b -> c]^n. Elementwise overloading of multiparameter
functions must appear only on their uncurried forms, so only if
f :: (a,b) -> c can we allow transformations of calls corresponding to
the type signature ([a]^m,[b]^n) -> [c]^max(m,n).

(This would give problems with elementwise overloading of arithmetic
operators in Haskell, since these are curried. But, as I said earlier, I'm
not proposing to actually extend Haskell with this overloading, I'm only
discussing the concept as such in the Haskell context.)

With your rules a programmer writes code which is meant to implicitly
convert a value to a single-element list, because something tries
to iterate over it like on a list. Unfortunately the element happens
to be a string, and he gets iteration over its characters. And if it
works the other way, another programmer meant iteration over characters
and got iteration over a single string. You can't tell which was meant.

I don't think there will be any ambiguities here. The overloading is
resolved statically, at compile-time, for each call to the function. Calls
to polymorphic functions are not transformed (except for cases like I showed
above).

 Of course the type/term transformation system must have the property that
 if different transformations can yield the "best" type (wrt liftedness),
 then the transformed expressions should be semantically equivalent.

It's not enough, because the least lifted type is not the most general
answer. Answers for different amounts of liftedness are incompatible
with that answer - they are not its instances as in the HM typing.

It does not matter that they are not instances. Each call is transformed
statically, separately. The liftedness ordering is used only to direct the
resolution of the overloading, so we pick the minimal lifting (the others
are not interesting). The overloaded function 

Re: Revamping the numeric classes

2001-02-09 Thread Marcin 'Qrczak' Kowalczyk

Fri, 9 Feb 2001 15:21:45 +0100 (MET), Bjorn Lisper [EMAIL PROTECTED] writes:

 No, the transformation is a single step procedure where a term
 is transformed into a typeable term (if possible) with a minimal
 amount of lifting. You don't compose transformations.

So functions implicitly lifted can't be used in the same ways as
functions originally defined as lifted (namely, they can't be lifted
again)... This is bad.

 Due to the principle of minimal lifting (which implies already
 well-typed terms should not be transformed) a call to a polymorphic
 function should not be transformed unless there is a dependence
 between the type variable(s) of the function and of the argument(s)
 in the application.

Does
f x y = (x, x + y)
have type
Num a => a -> a -> (a, a)
and thus cannot be used at the type
Int -> [Int] -> (Int, [Int])
even though, if its body were inlined into an expression requiring that
type, it could (by lifting x+y to map (x+) y)?

You can't lift an arbitrary function of type Int -> Int -> (Int, Int)
into Int -> [Int] -> (Int, [Int]) without knowing its definition.
Try it with g x y = (y, x).

This is bad: I cannot always take a subexpression and move it into
a separate function.

 With your rules a programmer writes code which is meant to implicitly
 convert a value to a single-element list, because something tries
 to iterate over it like on a list. Unfortunately the element happens
 to be a string, and he gets iteration over its characters. And if it
 works the other way, another programmer meant iteration over characters
 and got iteration over a single string. You can't tell which was meant.
 
 I don't think there will be any ambiguities here. The overloading
 is resolved statically, at compile-time, for each call to the
 function. Calls to polymorphic functions are not transformed
 (except for cases like I showed above).

Suppose there is a function
fancyPrint :: Printable a => [a] -> IO ()
which applies some fancy printing rules to a list of printable values.

A programmer knows that this function can be used on lists as well
as on single elements, because those elements will be promoted to
single-element lists as necessary. So far so good.

Then he applies it to a single String. Guess what? It is not
promoted to a single-element list, but each character is printed
separately. Oops!

The rule that fancyPrint works for single printable objects is valid
as long as this single element is *not* a list.

Your rules create many opportunities for functions which work on
a certain well-described domain *except* some specific types on which
they break.

 It's not enough, because the least lifted type is not the most general
 answer. Answers for different amounts of liftedness are incompatible
 with that answer - they are not its instances as in the HM typing.
 
 It does not matter that they are not instances.

It does. It's not enough to check that there exists a set of places
to insert map or zipWith which transforms what is written into what I
need. Because there can be a different, incompatible set of places,
which is considered "better" by the compiler, and my set is not
obtainable from what the compiler has done. See the first example
above.

The HM type system does have the property that I can think about more
specific types than ones inferred by the compiler, and as long as the
program can be typed under stricter assumptions, it works as expected.
The compiler may infer more general types than I thought about, but in
such case my type is an instance of the compiler's type and the result
is the same.

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-09 Thread Ketil Malde

Brian Boutel [EMAIL PROTECTED] writes:

 The fact that equality can be trivially defined as bottom does not imply
 that it should be a superclass of Num, it only explains that there is an
 ugly way of working around the problem.

 There is nothing trivial or ugly about a definition that reflects
 reality and bottoms only where equality is undefined.

I think there is.  If I design a class and derive it from Num with
(==) defined as bottom, I am allowed to apply to it functions requiring
a Num argument, but I have no guarantee they will work.

The implementor of such a function can change its internals (to use
(==)), and suddenly my previously working program is non-terminating. 
If I defined (==) to give a run-time error, it'd be a bit better, but
I'd much prefer the compiler to tell me about this in advance.

 Of course, if you do not need to apply equality to your "numeric" type
 then having to define it is a waste of time, but consider this:

It's not about "needing to apply", but about finding a reasonable
definition. 

 - Having a class hierarchy at all (or making any design decision)
 implies compromise.

I think the argument is that we should move Eq and Show *out* of the
Num hierarchy.  Less hierarchy - less compromise.

 - The current hierarchy (and its predecessors) represent a reasonable
 compromise that meets most needs.

Obviously a lot of people seem to think we could find compromises that
are more reasonable.

 - Users have a choice: either work within the class hierarchy and
 accept the pain of having to define things you don't need in order
 to get the things that come for free,

Isn't it a good idea to reduce the amount of pain?

 or omit the instance declarations and work outside the hierarchy. In
 that case you will not be able to use the overloaded operator
 symbols of the class, but that is just a matter of concrete syntax,
 and ultimately unimportant.

I don't think syntax is unimportant.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants




In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-09 Thread Jerzy Karczmarczuk

Marcin 'Qrczak' Kowalczyk wrote:


 JK Now, signum and abs seem to be quite distinct beasts. Signum seems
 JK to require Ord (and a generic zero...).
 
 Signum doesn't require Ord.
 signum z = z / abs z
 for complex numbers.

Thank you, I know. And I ignore it. Calling "signum" the result of
a vector normalization (on the Gauss plane in this case) is something
I don't really appreciate, and I wonder why this definition infiltrated
the prelude. Just because it conforms to the "normal" definition of
signum for reals?

Again, a violation of the orthogonality principle. Needing division
just to define signum. And of course a completely different approach
to define the signum of integers. Or of polynomials...


Jerzy Karczmarczuk




Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-09 Thread William Lee Irwin III

Fri, 09 Feb 2001 10:52:39 +, Jerzy Karczmarczuk writes:
 Again, a violation of the orthogonality principle. Needing division
 just to define signum. And of course a completely different approach
 to define the signum of integers. Or of polynomials...

On Fri, Feb 09, 2001 at 07:19:21PM +, Marcin 'Qrczak' Kowalczyk wrote:
 So what? That's why it's a class method and not a plain function with
 a single definition.
 
 Multiplication of matrices is implemented differently than
 multiplication of integers. Why don't you call it a violation of the
 orthogonality principle (whatever it is)?

Matrix rings actually manage to expose very well the inappropriateness
of the definitions of signum and abs, and of their relationship to Num:

class  (Eq a, Show a) => Num a  where
    (+), (-), (*)   :: a -> a -> a
    negate          :: a -> a
    abs, signum     :: a -> a
    fromInteger     :: Integer -> a
    fromInt         :: Int -> a -- partain: Glasgow extension

Pure arithmetic ((+), (-), (*), negate) works just fine.

But there are no good injections to use for fromInteger or fromInt,
the type of abs is wrong if it's going to be a norm, and it's not
clear that signum makes much sense.

So we have two totally inappropriate operations (fromInteger and
fromInt), one operation which has the wrong type (abs), and an operation
which doesn't have a well-defined meaning (signum) on matrices. If
we want people doing graphics or linear-algebraic computations to
be able to go about their business with their code looking like
ordinary arithmetic, this is, perhaps, a real concern.

I believe that these applications are widespread enough for us to be
concerned about how the library design affects their aesthetics.
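A concrete sketch of the mismatch (the type and its instance are mine;
Eq and Show are derived only to satisfy Num's superclasses):

newtype M2 = M2 (Double, Double, Double, Double)   -- a 2x2 matrix, row-major
    deriving (Eq, Show)

instance Num M2 where
    M2 (a,b,c,d) + M2 (e,f,g,h) = M2 (a+e, b+f, c+g, d+h)
    M2 (a,b,c,d) * M2 (e,f,g,h) = M2 (a*e+b*g, a*f+b*h, c*e+d*g, c*f+d*h)
    negate (M2 (a,b,c,d)) = M2 (-a, -b, -c, -d)
    fromInteger n = M2 (fromInteger n, 0, 0, fromInteger n)
                    -- n*I is *a* choice of injection, not a canonical one
    abs _    = error "abs: a matrix norm would have type M2 -> Double"
    signum _ = error "signum: no well-defined meaning on matrices"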


Cheers,
Bill
-- 
craving Weak coffee is only fit for lemmas.
--




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-09 Thread Brian Boutel

Ketil Malde wrote:
 
 Brian Boutel [EMAIL PROTECTED] writes:
 
  - Having a class hierarchy at all (or making any design decision)
  implies compromise.
 
 I think the argument is that we should move Eq and Show *out* of the
 Num hierarchy.  Less hierarchy - less compromise.


Can you demonstrate a revised hierarchy without Eq? What would happen to
Ord, and the numeric classes that require Eq because they need signum? 


 
  - The current hierarchy (and its predecessors) represent a reasonable
  compromise that meets most needs.
 
 Obviously a lot of people seem to think we could find compromises that
 are more reasonable.

I would put this differently. "A particular group of people want to
change the language to make it more convenient for their special
interests."

 
  - Users have a choice: either work within the class hierarchy and
  accept the pain of having to define things you don't need in order
  to get the things that come for free,
 
 Isn't it a good idea to reduce the amount of pain?

Not always.

 
  or omit the instance declarations and work outside the hierarchy. In
  that case you will not be able to use the overloaded operator
  symbols of the class, but that is just a matter of concrete syntax,
  and ultimately unimportant.
 
 I don't think syntax is unimportant.


I wrote that *concrete* syntax is ultimately unimportant, not *syntax*.
There is a big difference. In particular, *lexical syntax*, the choice
of marks on paper used to represent a language element, is not
important, although it does give rise to arguments, as do all matters
of taste and style.

There are not enough usable operator symbols to go round, so they get
overloaded. Mathematicians have overloaded common symbols like (+) and
(*) for concepts that may have some affinity with addition and
multiplication in arithmetic, but which are actually quite different.
That's fine, because, in context, expert human readers can distinguish
what is meant. From a software engineering point of view, though, such
free overloading is dangerous, because readers may assume, incorrectly,
that an operator has properties that are typically associated with
operators using that symbol. This may not matter in a private world
where the program writer is the only person who will see and use the
code, and no mission-critical decisions depend on the results, but it
should not be the fate of Haskell to be confined to such use.

Haskell could have allowed free ad hoc overloading, but one of the first
major decisions made by the Haskell Committee in 1988 was not to do so.
Instead, it adopted John Hughes' proposal to introduce type classes to
control overloading. A symbol could only be overloaded if the whole of a
group of related symbols (the Class) was overloaded with it, and the
class hierarchy provided an even stronger constraint by restricting
overloading of the class operators to cases where other classes,
intended to be closely related, were also overloaded. This tended to
ensure that the new type at which the classes were overloaded had strong
resemblances to the standard types. Simplifying the hierarchy weakens
these constraints and so should be approached with extreme caution. Of
course, the details of the classes and the hierarchy have changed over
the years - there is, always has been and always will be pressure to
make changes to meet particular needs - but the essence is still there,
and the essence is of a general-purpose language, not a domain-specific
language for some branches of mathematics.

A consequence of this is that certain uses of overloaded symbols are
inconvenient, because they are too far from the mainstream intended
meaning. If you have such a use, and you want to write in Haskell, you
have to choose other lexical symbols to represent your operators. You
make your choice.

--brian




Re: Revamping the numeric classes

2001-02-09 Thread Fergus Henderson

On 08-Feb-2001, Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] wrote:
 
 I don't like the idea of treating the case "no explicit definitions
 were given because all have default definitions which are OK"
 differently than "some explicit definitions were given".

I don't really like it that much either, but...

 When there is a superclass, it must have an instance defined, so if
 we permit such thing at all, I would let it implicitly define all
 superclass instances not defined explicitly, or something like that.
 At least when all methods have default definitions. Yes, I know that
 they can be mutually recursive and thus all will be bottoms...

... that is the problem I was trying to solve.

 So maybe there should be a way to specify that default definitions
 are cyclic and some of them must be defined?

I agree 100%.

 It is usually written in comments anyway, because it is not immediately
 visible in the definitions.

Yes.  Much better to make it part of the language, so that the compiler
can check it.

 (now any method definition
 can be omitted even if it has no default!),

Yeah, that one really sucks.

-- 
Fergus Henderson [EMAIL PROTECTED]  |  "I have always known that the pursuit
|  of excellence is a lethal habit"
WWW: http://www.cs.mu.oz.au/~fjh  | -- the last words of T. S. Garp.

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-09 Thread Fergus Henderson

On 09-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
 Patrik Jansson wrote:
 
  The fact that equality can be trivially defined as bottom does not imply
  that it should be a superclass of Num, it only explains that there is an
  ugly way of working around the problem.
...
 
 There is nothing trivial or ugly about a definition that reflects
 reality and bottoms only where equality is undefined.

I disagree.  Haskell is a statically typed language, and having errors
which could easily be detected at compile time instead of being deferred to
run time is ugly in a statically typed language.

-- 
Fergus Henderson [EMAIL PROTECTED]  |  "I have always known that the pursuit
|  of excellence is a lethal habit"
WWW: http://www.cs.mu.oz.au/~fjh  | -- the last words of T. S. Garp.




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-09 Thread Marcin 'Qrczak' Kowalczyk

Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel [EMAIL PROTECTED] writes:

 Can you demonstrate a revised hierarchy without Eq? What would happen to
 Ord, and the numeric classes that require Eq because they need signum? 

signum doesn't require Eq. You can use signum without having Eq, and
you can sometimes define signum without having Eq (e.g. on functions).
Sometimes you do require (==) to define signum, but it has nothing to
do with superclasses.

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread Bjorn Lisper

 Also, what is the inferred type of, for example
 f x y = x + length y
 ? It can be
 Int -> [a] -> Int
 [Int] -> [a] -> [Int]
 and neither is more general than the other. And this is a simple
 function.
 
 Int -> [a] -> Int, since this is the type it will get in the original type
 system.

So I can't apply f to lists, but I could if I inline its body. This
means that I cannot arbitrarily refactor a piece of code by moving
parts of it into separate definitions: subexpressions are given
some extra meanings only if they are physically placed in certain
contexts. This is bad.

This is a misunderstanding. The transformation of f l y, where l :: [Int]
for instance, should depend only on the type of f and not on its definition.
It is the call to f, not f itself, that becomes transformed. No inlining
takes place.

Ah, so what uses of f are correct depends on its definition, not its type!
Sorry, this is way too radical.

Types exist to formalize the possible ways a value can be used. HM allows
the most general types of variables in a let-block (or: of a
module) to be determined before their uses, so separate compilation is
possible. In your system typechecking of a function's definition is done
each time it is used!

No. See above.

Björn Lisper




Re: Revamping the numeric classes

2001-02-08 Thread Bjorn Lisper

 It is quite similar in spirit to the concept of principal type in
 Hindley-Milner type systems. An expression can have many types but
 only one "best" (most general) type in that system.

Now, I'm not any kind of expert on this, but isn't the most general
HM type one that encompasses the others, and *not* one out of a set of
ambiguous (and mutually exclusive) types?

In a sense. You define a partial order on types by a > a' (a more general
than a') if there is a substitution s of type variables such that
a' = sa. The interesting property of HM type systems is that for each term t
and all type judgements t:a that can be derived, there is a type judgement
t:a' such that a' > a. a' is called the most general type of t.

What I suggested was to define a different relation between types, measuring
"relative liftedness". We can call it ">>". Now, if it is the case that for
all judgements t -> t':a in the type system I sketch, there is a judgement
t -> t'':a' where a' >> a, then we can select the transformation to be
t -> t''. t'' will then have a "most general type" among the possible
transformed terms, but wrt >> rather than >.

Ambiguity between types depends on the ordering between types that you
consider!

Björn Lisper




Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

On Thu, 8 Feb 2001, Bjorn Lisper wrote:

  Int -> [a] -> Int, since this is the type it will get in the original type
  system.

 This is a misunderstanding. The transformation of f l y, where l :: [Int]
 for instance, should depend only on the type of f and not on its definition.
 It is the call to f, not f itself, that becomes transformed. No inlining
 takes place.

I see. So you can transform an arbitrary function of type a->b->c
to a function of type [a]->b->[c], by applying
\f x y -> map (\z -> f z y) x
and similarly a->b->c to a->[b]->[c]. But then there are two ways of
transforming a->b->c to [a]->[b]->[[c]] and the order of applying the
former transformations does matter. Worse: a third way is to apply zipWith
and then promote the result to a single-element list. Or maybe map the
result to a list of single-element lists... Sorry, IMHO it's ambiguous as
hell except in very simple cases.
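
To see the order-dependence concretely, a sketch (the lift names below
are mine, not from any proposal):

    liftFst :: (a -> b -> c) -> [a] -> b -> [c]
    liftFst f xs y = map (\x -> f x y) xs

    liftSnd :: (a -> b -> c) -> a -> [b] -> [c]
    liftSnd f x ys = map (f x) ys

    -- Lift the first argument, then the second:
    lift12 :: (a -> b -> c) -> [a] -> [b] -> [[c]]
    lift12 f xs ys = map (\x -> liftSnd f x ys) xs

    -- Lift the second argument, then the first:
    lift21 :: (a -> b -> c) -> [a] -> [b] -> [[c]]
    lift21 f xs ys = map (\y -> liftFst f xs y) ys

Both candidates have the same type, yet lift12 (+) [1,2] [30,40] gives
[[31,41],[32,42]] while lift21 (+) [1,2] [30,40] gives [[31,32],[41,42]].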

-- 
Marcin 'Qrczak' Kowalczyk





Re: Revamping the numeric classes

2001-02-08 Thread Bjorn Lisper

I see. So you can transform an arbitrary function of type a->b->c
to a function of type [a]->b->[c], by applying
\f x y -> map (\z -> f z y) x
and similarly a->b->c to a->[b]->[c]. But then there are two ways of
transforming a->b->c to [a]->[b]->[[c]] and the order of applying the
former transformations does matter. Worse: a third way is to apply zipWith
and then promote the result to a single-element list. Or maybe map the
result to a list of single-element lists...

There should be no transformation to type [a]->[b]->[[c]] in this case.
If f is applied to arguments of type [a] and [b] then this should be
interpreted as the elementwise application of f to the two argument lists,
and the result type should then be [c]. Note that [a]->[b]->[[c]] is
"more lifted" than [a]->[b]->[c].

Elementwise application to one argument should transform to map, of several
arguments to zipWith with appropriate arity.

It is easier to see how it should work if we skip lists, so we don't have to
deal with maps and zipWiths and other list functions.  Let us consider
elementwise application of f over indexed entities. For simplicity we
consider functions as our indexed entities, but it could as well be arrays.
With f as above, f x y should then be transformed to:

(1) x :: d -> a, y :: b yields \i -> f (x i) y
(2) x :: a, y :: d -> b yields \i -> f x (y i)
(3) x :: d -> a, y :: d -> b yields \i -> f (x i) (y i)

Here (3) is "full" elementwise application, and (1) and (2) are "partial"
elementwise applications where the unlifted argument can be seen as
promoted.  If you have list instead of functions, then the transformation
should insert list primitives with the corresponding effect.
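
The three cases, written out as plain Haskell over functions (a sketch;
the lift names are mine):

    lift1 :: (a -> b -> c) -> (d -> a) -> b -> (d -> c)
    lift1 f x y = \i -> f (x i) y

    lift2 :: (a -> b -> c) -> a -> (d -> b) -> (d -> c)
    lift2 f x y = \i -> f x (y i)

    lift3 :: (a -> b -> c) -> (d -> a) -> (d -> b) -> (d -> c)
    lift3 f x y = \i -> f (x i) (y i)

On lists, lift1 and lift2 correspond to map with one argument held fixed
(promotion), and lift3 corresponds to zipWith.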

Sorry, IMHO it's ambiguous as hell except in very simple cases.

Of course the type/term transformation system must have the property that
if different transformations can yield the "best" type (wrt liftedness),
then the transformed expressions should be semantically equivalent. I believe
a type/term transformation system with this property can be designed, but
the details remain to be worked out.

Björn Lisper

(Is this discussion still of interest to the Haskell list members? Or should
we take it offline?)




Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

On Thu, 8 Feb 2001, Tom Pledger wrote:

 nice answer: give the numeric literal 10 the range type 10..10, which
 is defined implicitly and is a subtype of both -128..127 (Int8) and
 0..255 (Word8).

What are the inferred types for
f = map (\x -> x+10)
g l = l ++ f l
? I hope I can use them as [Int] -> [Int].

 x + y + z -- as above
 
 --> (x + y) + z   -- left-associativity of (+)
 
 --> realToFrac (x + y) + z   -- injection (or treating up) done
   -- conservatively, i.e. only where needed

What does it mean "where needed"? Type inference does not proceed
inside-out. What about this?
h f = f (1::Int) == (2::Int)
Can I apply f to a function of type Int->Double? If no, then it's a
pity, because I could inline it (the comparison would be done on Doubles).
If yes, then what is the inferred type for h? Note that Int->Double is not
a subtype of Int->Int, so if h :: (Int->Int)->Bool, then I can't imagine
how h can be applied to something :: Int->Double.

-- 
Marcin 'Qrczak' Kowalczyk





Re: Revamping the numeric classes

2001-02-08 Thread Jerzy Karczmarczuk

First, a general remark which has nothing to do with Num.

PLEASE WATCH YOUR DESTINATION ADDRESSES
People send regularly their postings to haskell-cafe with
several private receiver addresses, which is a bit annoying
when you click "reply all"...


Brian Boutel after Dylan Thurston:

  Why doesn't your argument show that all types should by instances of
  Eq and Show?  Why are numeric types special?
 
 Why do you think it does? I certainly don't think so.
 
 The point about Eq was that an objection was raised to Num being a
 subclass of Eq because, for some numeric types, equality is undecidable.
 I suggested that Haskell equality could be undecidable, so (==) on those
 types could reflect the real situation. One would expect that it could
 do so in a natural way, producing a value of True or False when
 possible, and diverging otherwise. Thus no convincing argument has been
 given for removing Eq as a superclass of Num.
 
 In general, if you fine-grain the class hierarchy too much, the picture
 gets very complicated. If you need to define separate subclasses of Num
 for those types which have both Eq and Show, those that only have Eq,
 those that only have Show and those that have neither, not to mention
 those that have Ord as well as Eq and those that don't, and then for all
 the other distinctions that will be suggested, my guess is that Haskell
 will become the preserve of a few mathematicians and everyone else will
 give up in disgust. Then the likely result is that no-one will be
 interested in maintaining and developing Haskell and it will die.

Strange, but from the objectives mentioned in the last part of this 
posting (even if a little demagogic [insert smiley here if you wish])
I draw opposite conclusions.

The fact that the number of cases is quite large suggests that Eq, Show
and arithmetic should be treated as *orthogonal* issues, and treated
independently. 

If somebody needs Show for his favourite data type, he is free to
arrange
this himself. I repeat what I have already said: I work with functional
objects as mathematical entities. I want to add parametric surfaces, to
rotate trajectories. Also, to handle gracefully and legibly for those
simpletons who call themselves 'theoretical physicists', the arithmetic
of un-truncated lazy streams representing power series, or infinitely
dimensional differential algebra elements. Perhaps those are not 
convincing arguments for Brian Boutel. They are certainly so for me.

Num, with this forced marriage of (+) and (*), violates the principle
of orthogonality. Eq and Show constraints make it worse.
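
(The pointwise arithmetic on functional objects can in fact be written
in today's Haskell, but only by paying the Eq/Show tax with vacuous
instances - a sketch of the workaround, not a recommendation:

    instance Eq (a -> b) where
      _ == _ = error "(==) is not defined on functions"

    instance Show (a -> b) where
      show _ = "<<function>>"

    instance Num b => Num (a -> b) where
      f + g         = \x -> f x + g x
      f - g         = \x -> f x - g x
      f * g         = \x -> f x * g x
      negate f      = negate . f
      abs f         = abs . f
      signum f      = signum . f
      fromInteger n = const (fromInteger n)

After which (sin + cos) 0 evaluates to 1.0, but (==) and show on
functions are bottoms waiting to happen.)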

===

And, last, but very high on my check-list:

The implicit coercion of numeric constants: 3.14 -=-  (fromDouble
3.14)
etc. is sick. (Or was; I still didn't install the last version of GHC,
and with Hugs it is bad). The decision is taken by the compiler
internally,
and it doesn't care at all about the fact that in my prelude 
I have eliminated the Num class and redefined fromDouble, fromInt, etc. 

+

Dylan Thurston terminates his previous posting about Num with:

 Footnotes:
 [1]  Except for the lack of abs and signum, which should be in some
 other class.  I have to think about their semantics before I can say
 where they belong.

Now, signum and abs seem to be quite distinct beasts. Signum seems to
require Ord (and a generic zero...).

Abs from the mathematical point of view constitutes a *norm*. Now,
frankly, I haven't the slightest idea how to cast this concept into
Haskell class hierarchy in a sufficiently general way...

I'll tell you anyway that if you try to "sanitize" the numeric
classes, if you separate additive structures and the multiplication,
if you finally define abstract Vectors over some field of scalars,
and if you demand the existence of a generic normalization for your
vectors, then *most probably* you will need multiparametric classes
with dependencies. 


Jerzy Karczmarczuk
Caen, France




Re: Revamping the numeric classes

2001-02-08 Thread Dylan Thurston

On Thu, Feb 08, 2001 at 11:24:49AM +0000, Jerzy Karczmarczuk wrote:
 First, a general remark which has nothing to do with Num.
 
 PLEASE WATCH YOUR DESTINATION ADDRESSES
 People send regularly their postings to haskell-cafe with
 several private receiver addresses, which is a bit annoying
 when you click "reply all"...

Yes, apologies.  The way the lists do the headers makes it very easy to
reply to individuals, and hard to reply to the list.

 And, last, but very high on my check-list:
 
 The implicit coercion of numeric constants: 3.14 -=-  (fromDouble
 3.14)
 etc. is sick. (Or was; I still didn't install the last version of GHC,
 and with Hugs it is bad). The decision is taken by the compiler
 internally,
 and it doesn't care at all about the fact that in my prelude 
 I have eliminated the Num class and redefined fromDouble, fromInt, etc. 

Can't you just put "default ()" at the top of each module?

I suppose you still have the problem that a numeric literal "5" means
"Prelude.fromInteger 5".  Can't you define your types to be instances
of Prelude.Num, with no operations defined except Prelude.fromInteger?

 Dylan Thurston terminates his previous posting about Num with:
 
  Footnotes:
  [1]  Except for the lack of abs and signum, which should be in some
  other class.  I have to think about their semantics before I can say
  where they belong.
 
 Now, signum and abs seem to be quite distinct beasts. Signum seems to
 require Ord (and a generic zero...).
 
 Abs from the mathematical point of view constitutes a *norm*. Now,
 frankly, I haven't the slightest idea how to cast this concept into
 Haskell class hierarchy in a sufficiently general way...

This was one thing I liked with the Haskell hierarchy: the observation
that "signum" of real numbers is very much like "argument" of complex
numbers.  abs and signum in Haskell satisfy an implicit law:
   abs x * signum x = x  [1]
So signum can be defined anywhere you can define abs (except that it's
not a continuous function, so is not terribly well-defined).  A
default definition for signum x might read
   signum x = let a = abs x in if (a == 0) then 0 else x / abs x
(Possibly signum is the wrong name.  What is the standard name for
this operation for, e.g., matrices?)  [Er, on second thoughts, it's
not as well-defined as I thought.  Abs x needs to be in a field for
the definition above to work.]

 I'll tell you anyway that if you try to "sanitize" the numeric
 classes, if you separate additive structures and the multiplication,
 if you finally define abstract Vectors over some field of scalars,
 and if you demand the existence of a generic normalization for your
 vectors, then *most probably* you will need multiparametric classes
 with dependencies. 

Multiparametric classes, certainly (for Vectors, at least).
Fortunately, they will be in Haskell 2 with high probability.  I'm not
convinced about dependencies yet.

 Jerzy Karczmarczuk
 Caen, France

Best,
Dylan Thurston

Footnotes: 
[1]  I'm not sure what I mean by "=" there, since I do not believe
these should be forced to be instances of Eq.  For clearer cases,
consider the various Monad laws, e.g.,
   join . join = join . map join
(Hope I got that right.)  What does "=" mean there?  Some sort of
denotational equality, I suppose.






Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

Thu, 8 Feb 2001 10:51:58 -0500, Peter Douglass [EMAIL PROTECTED] writes:

 The first part of my question (not contained in your reply) is
 whether it is feasible to disable a developer's access to the
 "unsafe" numerical operations.

import Prelude hiding (quot, rem, (/) {- etc. -})
import YourPrelude -- which defines substitutes

You can "disable" it now. You cannot disable them entirely - anyone can
define present functions in terms of your functions if he really wants.

 Whether or not an individual developer chooses to do so is another
 matter.

Why only quot? There are many other ways to write bottom:
head []
(\(x:xs) -> (x,xs)) []
let x = x in x
log (-1)
asin 2
error "foo"

 If you "know" the value is non-zero before run-time, then that is
 statically determined.

I know but the compiler does not know, and I have no way to convince it.

 It is possible that the developer writes a function which returns a
 nonZeroNumeric value which actually has a value of zero.  However,
 the value of requiring division to have a nonZeroNumeric denominator
 is to catch at compile time the "error" of failing to scrutinize
 (correctly or incorrectly) for zero.

IMHO it would be more painful than useful.

 For most commercial software, the quality of run-time error messages
 is far less important than their absence.

It would not avoid them if the interface does not give a place to
report the error:
average xs = sum xs / case checkZero (length xs) of
    Just notZero -> notZero
    Nothing      -> error "This should never happen"
is not any more safe than
average xs = sum xs / length xs

and I can report bad input without trouble now:
average xs = case length xs of
    0 -> Nothing
    l -> Just (sum xs / l)

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

Thu, 8 Feb 2001 21:41:56 +1100, Fergus Henderson [EMAIL PROTECTED] writes:

 Should this define an instance for `foo T'?
 (I think not.)
 
 How about if the instance declaration is changed to
 
 instance bar T where
   f = 41
   -- no definition for f2
   b = 42
 
 ?
 (In that case, I think it should.)

I don't like the idea of treating the case "no explicit definitions
were given because all have default definitions which are OK"
differently than "some explicit definitions were given".

When there is a superclass, it must have an instance defined, so if
we permit such thing at all, I would let it implicitly define all
superclass instances not defined explicitly, or something like that.
At least when all methods have default definitions. Yes, I know that
they can be mutually recursive and thus all will be bottoms...

So maybe there should be a way to specify that default definitions
are cyclic and some of them must be defined? It is usually written
in comments anyway, because it is not immediately visible in the
definitions. If not formally in the language (now any method definition
can be omitted even if it has no default!), then perhaps the compiler
could detect most cases when methods are defined in terms of one
another and give a warning.

Generally the compiler could warn if the programmer has written bottom
in an unusual way. For example
f x = g some_expression
g x = f some_expression
is almost certainly a programmer error.
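
A sketch of the cyclic-defaults trap (MyEq is a made-up class mirroring
Eq's (==)/(/=) pair); a declaration of the proposed kind - "these
defaults are cyclic, at least one must be defined" - would reject the
empty instance below instead of compiling it to bottoms:

    class MyEq a where
      eq, neq :: a -> a -> Bool
      eq  x y = not (neq x y)
      neq x y = not (eq x y)

    data T = T

    instance MyEq T   -- legal today, but eq T T loops forever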

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

Thu, 08 Feb 2001 11:24:49 +0000, Jerzy Karczmarczuk [EMAIL PROTECTED] writes:

 The implicit coercion of numeric constants: 3.14 -=-  (fromDouble
 3.14) etc. is sick.

What do you propose instead?

(BTW, it's fromRational, to keep arbitrarily large precision.)

 Now, signum and abs seem to be quite distincts beasts. Signum seem
 to require Ord (and a generic zero...).

Signum doesn't require Ord.
signum z = z / abs z
for complex numbers.

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread William Lee Irwin III

On Thu, Feb 08, 2001 at 08:30:31PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
 Signum doesn't require Ord.
 signum z = z / abs z
 for complex numbers.

I'd be careful here.

\begin{code}
signum 0 = 0
signum z = z / abs z
\end{code}

This is, perhaps, neither precise nor general enough.

The signum/abs pair seem to represent direction and magnitude.
According to the line of reasoning in some of the earlier posts in this
flamewar, the following constraints:

(1) z = signum z * abs z where * is appropriately defined
(2) abs $ signum z = 1

should be enforced, if possible, by the type system. This suggests
that for any type having a vector space structure over Fractional
(or whatever the hierarchy you're brewing up uses for rings with
a division partial function on them) that the result type of signum
lives in a more restricted universe, perhaps even one with a different
structure (operations defined on it, set of elements) than the argument
type, and it seems more than possible to parametrize it on the argument
type. The abs is in fact a norm, and the signum projects V^n -> V^n / V.
Attempts to define these things on Gaussian integers, p-adic numbers,
polynomial rings, and rational points on elliptic curves will quickly
reveal limitations of the stock class hierarchy.

Now, whether it's actually desirable to scare newcomers to the language
into math phobia, wetting their pants, and running screaming with
subtleties like this suggests perhaps that one or more "alternative
Preludes" may be desirable to have. There is a standard Prelude, why not
a nonstandard one or two? We have the source. The needs of the geek do
not outweigh the needs of the many. Hence, we can cook up a few Preludes
or so on our own, and certainly if we can tinker enough to spam the list
with counterexamples and suggestions of what we'd like the Prelude to
have, we can compile up a Prelude for ourselves with our "suggested
changes" included and perhaps one day knock together something which can
actually be used and has been tested, no?

The Standard Prelude serves its purpose well and accommodates the
largest cross-section of users. Perhaps a Geek Prelude could
accommodate the few of us who do need these sorts of shenanigans.


Cheers,
Bill
-- 
<j0][nD33R:#math> Excel/Spreadsheet Q: What is the formula for finding
out the time passed between two dates and or two times in the same day?
<MatroiDN:#math> excel/spreadsheet? Hmm, this is math? Is there a GTM on
excel or maybe an article in annals about spreadsheets or maybe
there's a link from wolfram to doing your own computer work, eh?
<danprime:#math> jeeem, haven't you seen "Introduction to Algebraic Excel"?
<danprime:#math> or "Spreadsheet Space Embeddings in 2-Manifolds"
<brouwer:#math> i got my phd in spreadsheet theory
<brouwer:#math> i did my thesis on the spreadsheet conjecture




Re: Revamping the numeric classes

2001-02-08 Thread Tom Pledger

Marcin 'Qrczak' Kowalczyk writes:
 | On Thu, 8 Feb 2001, Tom Pledger wrote:
 | 
 |  nice answer: give the numeric literal 10 the range type 10..10, which
 |  is defined implicitly and is a subtype of both -128..127 (Int8) and
 |  0..255 (Word8).
 | 
 | What are the inferred types for
 | f = map (\x -> x+10)
 | g l = l ++ f l
 | ? I hope I can use them as [Int] -> [Int].

f, g :: (Subtype a b, Subtype 10..10 b, Num b) => [a] -> [b]
Yes, because of the substitution {Int/a, Int/b}.

 |  x + y + z -- as above
 |  
 |  --> (x + y) + z   -- left-associativity of (+)
 |  
 |  --> realToFrac (x + y) + z   -- injection (or treating up) done
 |    -- conservatively, i.e. only where needed
 | 
 | What does it mean "where needed"? Type inference does not proceed
 | inside-out.

In the expression

(x + y) + z

we know from the explicit type signature (in your question that I was
responding to) that x,y::Int and z::Double.  Type inference does not
need to treat x or y up, because it can take the first (+) to be Int
addition.  However, it must treat the result (x + y) up to the most
specific supertype which can be added to a Double.

 | What about this?
 | h f = f (1::Int) == (2::Int)
 | Can I apply f

h?

 | to a function of type Int->Double?

Yes.

 | If no, then it's a pity, because I could inline it (the comparison
 | would be done on Doubles).  If yes, then what is the inferred type
 | for h? Note that Int->Double is not a subtype of Int->Int, so if h
 | :: (Int->Int)->Bool, then I can't imagine how h can be applied to
 | something :: Int->Double.

There's no explicit type signature for the result of applying f to
(1::Int), so...

h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool

That can be inferred by following the structure of the term.  Function
terms do seem prone to an accumulation of deferred subtype
constraints.

Regards,
Tom




Re: Revamping the numeric classes

2001-02-08 Thread Brian Boutel

William Lee Irwin III wrote:
 
 
 The Standard Prelude serves its purpose well and accommodates the
 largest cross-section of users. Perhaps a Geek Prelude could
 accommodate the few of us who do need these sorts of schenanigans.
 


Amen.

--brian




Re: Revamping the numeric classes

2001-02-07 Thread Ch. A. Herrmann

moved to haskell-cafe

Ketil> E.g. way back, I wrote a simple differential equation solver.
Ketil> Now, the same function *could* have been applied to vector
Ketil> functions, except that I'd have to decide on how to implement
Ketil> all the "Num" stuff that really didn't fit well.  Ideally, a
Ketil> nice class design would infer, or at least allow me to
Ketil> specify, the mathematical constraints inherent in an
Ketil> algorithm, and let my implementation work with any data
Ketil> satisfying those constraints.

the problem is that the --majority, I suppose?-- of mathematicians
tend to overload operators. They use "*" for matrix-matrix
multiplication as well as for matrix-vector multiplication etc.

Therefore, a quick solution that implements groups, monoids, Abelian
groups, rings, Euclidean rings, fields, etc. will not be sufficient.

I don't think that it is acceptable for a language like Haskell
to permit the user to overload predefined operators, like "*".

A cheap solution could be to define a type MathObject and operators like 
   (:*:) :: MathObject -> MathObject -> MathObject
Then, the user can implement:

a :*: b = case (a,b) of
     (Matrix x, Matrix y) -> foo
     (Matrix x, Vector y) -> bar
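
As written this is only a sketch - in Haskell an operator beginning
with a colon must be a data constructor, so :*: cannot be defined by an
equation like the one above. A runnable rendering might look like this
(the constructors and the cases covered are illustrative, my names):

    import Data.List (transpose)

    data MathObject = Scalar Double | Vector [Double] | Matrix [[Double]]
      deriving Show

    (.*.) :: MathObject -> MathObject -> MathObject
    a .*. b = case (a, b) of
      (Matrix xs, Matrix ys) ->
        Matrix [[sum (zipWith (*) row col) | col <- transpose ys] | row <- xs]
      (Matrix xs, Vector v)  ->
        Vector [sum (zipWith (*) row v) | row <- xs]
      (Scalar k,  Vector v)  ->
        Vector (map (k *) v)
      _ -> error "(.*.): combination not covered in this sketch"
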
-- 
 Christoph Herrmann
 E-mail:  [EMAIL PROTECTED]
 WWW: http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html




Re: Revamping the numeric classes

2001-02-07 Thread Marcin 'Qrczak' Kowalczyk

On Wed, 7 Feb 2001, Bjorn Lisper wrote:

 I'd like to point out the connection between the use of +, - on vector
 spaces and * for scaling with features in some data parallel languages.  In
 these languages, writing a + b where a and b are arrays of numerics is
 interpreted as elementwise addition of a and b. This feature generalises to
 other operations than +, -, other types than numerical ones, and other data
 structures than arrays.

This is what I dislike. It's implicit fmap / zipWith / etc. But it only
works as long as there is only one meaningful way to insert these fmaps.
When I apply length to a list of lists, is it the length of the whole list
or a list of lengths of its elements? So there must be explicit ways of
specifying the amount of fmaps, and one cannot assume that they will
always be placed automatically. It might be convenient for very specific
types of computation but is not a working general idea.

 Furthermore, some of these languages support "promotion": "lifting" a
 "scalar"-typed expression, appearing in a context where an array (say)
 is expected, into an array with suitable dimensions containing copies
 of the scalar.

Again, Haskell does not have subtyping. It is not compatible with type
inference - it can only work in poor languages which require an operation
to be fully applied where it is used, and either don't have static types
or require them to be specified explicitly. In Haskell trying to implement
such overloading would be too clumsy and would not work as expected in all
cases, so it is better not to go this way.

-- 
Marcin 'Qrczak' Kowalczyk





Re: Revamping the numeric classes

2001-02-07 Thread Bjorn Lisper

Marcin 'Qrczak' Kowalczyk:
Me:
 I'd like to point out the connection between the use of +, - on vector
 spaces and * for scaling with features in some data parallel languages.  In
 these languages, writing a + b where a and b are arrays of numerics is
interpreted as elementwise addition of a and b. This feature generalises to
 other operations than +, -, other types than numerical ones, and other data
 structures than arrays.

This is what I dislike. It's implicit fmap / zipWith / etc. But it only
works as long as there is only one meaningful way to insert these fmaps.
When I apply length to a list of lists, is it the length of the whole list
or a list of lengths of its elements? So there must be explicit ways of
specifying the amount of fmaps, and one cannot assume that they will
always be placed automatically. It might be convenient for very specific
types of computation but is not a working general idea.

A natural principle to adopt is that an already typeable expression should
not be transformed. This will for instance resolve the ambiguity in the list
of list example: if l :: [[a]] then length l is already well-typed and
should not be transformed into map length l.

 Furthermore, some of these languages support "promotion": "lifting" a
 "scalar"-typed expression, appearing in a context where an array (say)
 is expected, into an array with suitable dimensions containing copies
 of the scalar.

Again, Haskell does not have subtyping. It is not compatible with type
inference - it can only work in poor languages which require an operation
to be fully applied where it is used, and either don't have static types
or require them to be specified explicitly.

I am not so sure about this. Could you exemplify?

Note that you can do some of this overloading already within Haskell's class
system. For instance, one can make lists of Nums into Nums by declaring

instance (Num a) => Num [a] where
    x + y = zipWith (+) x y
    x * y = zipWith (*) x y
    ...
    fromInteger x = repeat (fromInteger x)
    ...

Now, if x and y are lists of Nums, then 2*x + y becomes

zipWith (+) (zipWith (*) (repeat (fromInteger 2)) x) y

In Haskell trying to implement
such overloading would be too clumsy and would not work as expected in all
cases, so it is better not to go this way.

I should point out that I didn't suggest adding this overloading in Haskell,
I was merely pointing out the connection between vector space syntax/scaling
and features in data parallel languages.

Björn Lisper




Re: Revamping the numeric classes

2001-02-07 Thread Marcin 'Qrczak' Kowalczyk

Wed, 7 Feb 2001 13:04:12 +0100 (MET), Bjorn Lisper [EMAIL PROTECTED] writes:

 A natural principle to adopt is that an already typeable expression should
 not be transformed. This will for instance resolve the ambiguity in the list
 of list example: if l :: [[a]] then length l is already well-typed and
 should not be transformed into map length l.

So there are ways to interpret an expression which are not chosen
only because some other way is a better match? This is very dangerous
in principle.

Two interpretations of a code are "correct", but one is "more correct"
than the other.

Say there is code which relies on the implicit fmap, and it's
slightly changed by replacing the function with a more general
function, which has the same result on this instance. Then suddenly
without a warning the code has a different meaning, because it is
now applied in a different way (different placement of implicit fmaps).

Also, what is the inferred type of, for example
f x y = x + length y
? It can be
Int -> [a] -> Int
[Int] -> [a] -> [Int]
and neither is more general than the other. And this is a simple
function.

 Again, Haskell does not have subtyping. It is not compatible with type
 inference - it can only work in poor languages which require an operation
 to be fully applied where it is used, and either don't have static types
 or require them to be specified explicitly.
 
 I am not so sure about this. Could you exemplify?

Sorry, I don't have a concrete example in mind. How to infer types when
implicit conversions are possible anywhere? The above function f can
be applied even to two numbers (because the second would be promoted
to a list of length = 1), so what is its inferred most general type?

Assuming that Ints can be implicitly converted to Doubles, is the function
f :: Int -> Int -> Double -> Double
f x y z = x + y + z
ambiguous? Because there are two interpretations:
f x y z = realToFrac x + realToFrac y + z
f x y z = realToFrac (x + y) + z

Making this and similar cases ambiguous means inserting lots of explicit
type signatures to disambiguate subexpressions.

Again, arbitrarily choosing one of the alternatives based on some
set of weighting rules is dangerous, because a programmer might mean
the other alternative - there is no simple way to ensure that the
compiler interprets it in the same way as I wanted. It's not enough
to check that all types match modulo conversions - I must carefully
check that no "better" interpretation is possible.

 Note that you can do some of this overloading already within
 Haskell's class system.

But it's quite rigorous: all uses of an identifier must be at a type
which is an instance of a single generic type. There is enough type
information to disambiguate the meaning in most cases - *without*
rejecting an interpretation because another was better (except
defaulting of numeric types - yes, it's ugly).

Another advantage of the Haskell's class system is that no code
relies on absence of something. Adding an instance does not make
previously working code ambiguous or otherwise incorrect! Except when
the instance conflicts with another one - this is the only kind of
"negative constraint".

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-07 Thread Bjorn Lisper

Marcin Kowalczyk:
Me:
 A natural principle to adopt is that an already typeable expression should
 not be transformed. This will for instance resolve the ambiguity in the list
 of list example: if l :: [[a]] then length l is already well-typed and
 should not be transformed into map length l.

So there are ways to interpret an expression which are not chosen
only because some other way is a better match? This is very dangerous
in principle.

Two interpretations of a code are "correct", but one is "more correct"
than the other.

It is quite similar in spirit to the concept of principal type in
Hindley-Milner type systems. An expression can have many types but only one
"best" (most general) type in that system.

Also, what is the inferred type of, for example
f x y = x + length y
? It can be
Int -> [a] -> Int
[Int] -> [a] -> [Int]
and neither is more general than the other. And this is a simple
function.

Int -> [a] -> Int, since this is the type it will get in the original type
system.

The types you mention are incomparable w.r.t. the usual "more
general"-ordering on types, but one could consider also other orderings. For
the types you give, the second is more "lifted" than the first in that it
contains [Int] in places where the first type has Int. One can define a
"liftedness" order on types in this vein.

(OK, so one would need to go through the formalities and prove that there
are "principal types" w.r.t. this relation between types, and that this new
principal type concept is not in conflict with the old one. I cannot say for
sure that it works.)

I should be more specific about what a type system could look like that
implements this kind of overloading. It could be a coercive type system,
with judgements of the form

t -> t':a

where t, t' are terms, a is a type, and t:a is a correct judgement in the
original type system. So the type system not only gives a type but also a
transformation that resolves the overloading into a well-typed term.

 Again, Haskell does not have subtyping. It is not compatible with type
 inference - it can only work in poor languages which require an operation
 to be fully applied where it is used, and either don't have static types
 or require them to be specified explicitly.
 
 I am not so sure about this. Could you exemplify?

Sorry, I don't have a concrete example in mind. How to infer types when
implicit conversions are possible anywhere? The above function f can
be applied even to two numbers (because the second would be promoted
to a list of length = 1), so what is its inferred most general type?

Int -> [a] -> Int. If f is applied to some arguments with other types for
which the overloading is defined, say f l1 l2 where l1 :: [Int] and
l2 :: [a], then the term f l1 l2 would be transformed into a well-typed term
but the type of f itself would not change.

Again, arbitrarily choosing one of the alternatives based on some
set of weighting rules is dangerous, because a programmer might mean
the other alternative - there is no simple way to ensure that the
compiler interprets it in the same way as I wanted. It's not enough
to check that all types match modulo conversions - I must carefully
check that no "better" interpretation is possible.

I surely agree that this kind of overloading should be used only when it is
in accordance with the intuition of the programmer. This could, for
instance, imply restrictions to certain types or operators.

Björn Lisper




Re: Revamping the numeric classes

2001-02-07 Thread Dylan Thurston

On Tue, Feb 06, 2001 at 10:29:36PM +0100, Andreas Gruenbacher wrote:
 ...
 Also not all instances of Num can be shown. I have a monad that is an
 instance of Num, for example. I cannot possibly show the monad.

I've been thinking about this a little.  It's quite an interesting problem
in general to write classes that can be defined for monads.  This
can be done for any class in which each member returns the type variable:

class C a where
  foo :: ... -> a
  (etc.)

is good, but anything else seems to cause problems.  So '+', '-',
'max', etc., are good, but '<' and 'show' cause problems.  'quotRem'
and 'divMod' are interesting cases: they return a pair (a,a), which is
OK for some monads but not for others.
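
A sketch of why the "returns the class variable" shape matters: such a
method lifts mechanically into any monad, while '<' or show cannot be
lifted this way because Bool and String escape the monad (liftNum2 is a
made-up name):

    liftNum2 :: (Monad m, Num a) => (a -> a -> a) -> m a -> m a -> m a
    liftNum2 op mx my = do { x <- mx; y <- my; return (x `op` y) }

    -- e.g. liftNum2 (+) (Just 1) (Just 2) == Just 3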

I wonder if there is a way to set things up so that all classes could
be written for monadic types.

Best,
Dylan Thurston




Re: Revamping the numeric classes

2001-02-07 Thread Marcin 'Qrczak' Kowalczyk

Thu, 8 Feb 2001 00:32:18 +0100 (MET), Bjorn Lisper [EMAIL PROTECTED] writes:

 Two interpretations of a code are "correct", but one is "more correct"
 than the other.
 
 It is quite similar in spirit to the concept of principal type in
 Hindley-Milner type systems. An expression can have many types but
 only one "best" (most general) type in that system.

But the others are its instances! The point of HM is that I can forget
that something is more general and treat a definition
f xs = [] : xs
as of type [[Int]] -> [[Int]]. Once I determine a possible meaning
of the code, I know it's correct, no matter if it's the most general
meaning or not.

(Well, this is not exactly true when classes come. Two uses of f don't
unify types of their arguments to the same type, where they would do
that if f had type [[Int]] -> [[Int]]. Fortunately it's very rarely
a problem I would say. Overloading should not be abused because it
easily leads to ambiguous types.)

 Also, what is the inferred type of, for example
 f x y = x + length y
 ? It can be
 Int -> [a] -> Int
 [Int] -> [a] -> [Int]
 and neither is more general than the other. And this is a simple
 function.
 
 Int -> [a] -> Int, since this is the type it will get in the original type
 system.

So I can't apply f to lists, but I could if I inline its body. This
means that I cannot arbitrarily refactor a piece of code by moving
parts of it into separate definitions: subexpressions are given
some extra meanings only if they are physically placed in certain
contexts. This is bad.

 The types you mention are incomparable w.r.t. the usual "more
 general"-ordering on types, but one could consider also other
 orderings. For the types you give, the second is more "lifted" than
 the first in that it contains [Int] in places where the first type
 has Int. One can define a "liftedness" order on types in this vein.

Argh, Haskell's type system is complex enough. This is going to be
a horror for people trying to understand it. I'm not saying that we
should not think about extending the type system at all, but this
is IMHO too ugly.

 (OK, so one would need to go through the formalities and prove that
 there are "principal types" w.r.t. this relation between types,
 and that this new principal type concept is not in conflict with
 the old one. I cannot say for sure that it works.)

Here other types are not instances of the principal type! So it's
not principal: it's just an arbitrary ordering.

 Sorry, I don't have a concrete example in mind. How to infer types when
 implicit conversions are possible anywhere? The above function f can
 be applied even to two numbers (because the second would be promoted
 to a list of length = 1), so what is its inferred most general type?
 
 Int -> [a] -> Int. If f is applied to some arguments with other types for
 which the overloading is defined, say f l1 l2 where l1 :: [Int] and
 l2 :: [a], then the term f l1 l2 would be transformed into a well-typed term
 but the type of f itself would not change.

Ah, so what uses of f are correct depends on its definition, not its type!
Sorry, this is way too radical.

Types exist to formalize the possible ways a value can be used. HM allows
the most general types of variables in a let-block (or: of a
module) to be determined before their uses, so separate compilation is
possible. In your system typechecking of a function's definition is done
each time it is used!

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-07 Thread Ketil Malde

Bjorn Lisper [EMAIL PROTECTED] writes:

 Two interpretations of a code are "correct", but one is "more correct"
 than the other.

 It is quite similar in spirit to the concept of principal type in
 Hindley-Milner type systems. An expression can have many types but
 only one "best" (most general) type in that system.

Now, I'm not any kind of expert on this, but isn't the most general
HM type one that encompasses the others, and *not* one out of a set of
ambiguous (and mutually exclusive) types?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants







Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-07 Thread Jerzy Karczmarczuk

"Ch. A. Herrmann" answers my questions:

 Jerzy> What do you mean "predefined" operators? Predefined where?
 
 In hugs, ":t (*)" tells you:
(*) :: Num a => a -> a -> a
 which is an intended property of Haskell, I suppose.

Aha. But I would never call this a DEFINITION of this operator.
This is just the type, isn't it?
A misunderstanding, I presume.

 Jerzy> Forbid what?
 A definition like (a trivial example, instead of matrix/vector)
class NewClass a where
  (*) :: a -> [a] -> a
 leads to an error 

OK, OK. Actually my only point was to suggest that the type for (*)
as above should be constrained only by an *appropriate class*, not
by this horrible Num which contains additive operators as well. So
this is not the answer I expected, concerning the "overloading of
a predefined operator".


BTW.

In Clean (*) constitutes a class by itself; that is the simplicity
I appreciate, although I am far from saying that they have an ideal
type system for a working mathemaniac.

 ... Also, the programming language should
 not prescribe that the "standard" mathematics is the right mathematics
 and the only the user is allowed to deal with. If the user likes to
 multiply two strings, like "ten" * "six" (= "sixty"), and he/she has a
 semantics for that, why not?

Aaa, here we might, although need not, disagree. I would like to see some
rational constraints, preventing the user from inventing a completely
insane semantics for this multiplication, mainly to discourage writing
of programs impossible to understand.



Jerzy Karczmarczuk
Caen, France




Re: Revamping the numeric classes

2001-02-07 Thread Marcin 'Qrczak' Kowalczyk

07 Feb 2001 11:47:11 +0100, Ketil Malde [EMAIL PROTECTED] writes:

 If it is useful to have a fine granularity of classes, you can
 imagine doing:
 
 class Multiplicative a b c where
 (*) :: a -> b -> c

Then a*b*c is ambiguous no matter what the types of a, b, c and the
result are. Sorry, this does not work. Too general is too bad; it's
impossible to have everything at once.
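
For the record, a sketch of how the ambiguity surfaces, and of how the
"multiparametric classes with dependencies" mentioned elsewhere in the
thread would pin it down (Mult and prod3 are made-up names; the pragmas
are GHC extensions):

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

    -- Without the dependency "a b -> c" the intermediate type ab in
    -- prod3 is ambiguous, exactly as described above.
    class Mult a b c | a b -> c where
      mult :: a -> b -> c

    prod3 :: (Mult a b ab, Mult ab c r) => a -> b -> c -> r
    prod3 x y z = mult (mult x y) z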

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-07 Thread Dylan Thurston

Other people have been making great points for me.  (I particularly
liked the example of Dollars as a type with addition but not
multiplication.)  One point that has not been made: given a class
setup like

class Additive a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  negate :: a -> a
  zero :: a

class Multiplicative a where
  (*) :: a -> a -> a
  one :: a

class (Additive a, Multiplicative a) => Num a where
  fromInteger :: Integer -> a

then naive users can continue to use (Num a) in contexts, and the same
programs will continue to work.[1]
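
For instance, the Dollars example mentioned earlier in the thread falls
out naturally (a sketch; it assumes the classes above replace the
Prelude's, with Integer made an instance of Additive):

    newtype Dollars = Dollars Integer deriving (Eq, Show)

    instance Additive Dollars where
      Dollars x + Dollars y = Dollars (x + y)
      Dollars x - Dollars y = Dollars (x - y)
      negate (Dollars x)    = Dollars (negate x)
      zero                  = Dollars 0

    -- No Multiplicative instance: Dollars * Dollars is meaningless,
    -- and nothing forces us to define it.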

(A question in the above context is whether the literal '0' should be
interpreted as 'fromInteger (0::Integer)' or as 'zero'.  Opinions?)

On Wed, Feb 07, 2001 at 06:27:02PM +1300, Brian Boutel wrote:
 * Haskell equality is a defined operation, not a primitive, and may not
 be decidable. It does not always define equivalence classes, because
 a==a may be Bottom, so what's the problem? It would be a problem,
 though, to have to explain to a beginner why they can't print the result
 of a computation.

Why doesn't your argument show that all types should by instances of
Eq and Show?  Why are numeric types special?

Best,
Dylan Thurston

Footnotes: 
[1]  Except for the lack of abs and signum, which should be in some
other class.  I have to think about their semantics before I can say
where they belong.






Re: Revamping the numeric classes

2001-02-07 Thread andrew

On Wed, Feb 07, 2001 at 11:47:11AM +0100, Ketil Malde wrote:
 "Ch. A. Herrmann" [EMAIL PROTECTED] writes:
[...]
  the problem is that the --majority, I suppose?-- of mathematicians
  tend to overload operators. They use "*" for matrix-matrix
  multiplication as well as for matrix-vector multiplication etc.
 Yes, obviously.  On the other hand, I think you could get far by
 defining (+) as an operator in a Group, (*) in a Ring, and so forth.

As a complete newbie can I add a few points?  They may be misguided,
but they may also help identify what appears obvious only through
use...

- understanding the hierarchy of classes (ie constantly referring to
Fig 5 in the report) takes a fair amount of effort.  It would have
been much clearer for me to have classes that simply listed the
required superclasses (as suggested in an earlier post).

- even for me, no great mathematician, I found the forced inclusion of
certain classes irritating (in my case - effectively implementing
arithmetic on tuples - Enum made little sense and ordering is hacked
in order to be total; why do I need to define either to overload "+"?
see the sketch after this list)

- what's the deal with fmap and map?
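
A sketch of the tuple point above (illustrative, not andrew's actual
code): to overload (+) alone today, the whole of Num must be
instantiated, inventing (*), abs and signum along the way:

    instance (Num a, Num b) => Num (a, b) where
      (a, b) + (c, d) = (a + c, b + d)
      (a, b) - (c, d) = (a - c, b - d)
      (a, b) * (c, d) = (a * c, b * d)       -- forced, for lack of better
      negate (a, b)   = (negate a, negate b)
      abs (a, b)      = (abs a, abs b)       -- forced
      signum (a, b)   = (signum a, signum b) -- forced
      fromInteger n   = (fromInteger n, fromInteger n)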

 Another problem is that the mathematical constructs include properties
 not easily encoded in Haskell, like commutativity, associativity, etc.
 
  I don't think that it is acceptable for a language like Haskell
  to permit the user to overload predefined operators, like "*".

Do you mean that the numeric classes should be dropped or are you
talking about some other overloading procedure?

Isn't one popular use of Haskell to define/extend it to support small
domain-specific languages?  In those cases, overloading operators via
the class mechanism is very useful - you can give the user concise,
but still understandable, syntax for the problem domain.

I can see that overloading operators is not good in general purpose
libraries, unless carefully controlled, but that doesn't mean it is
always bad, or should always be strictly controlled.  Maybe the
programmer could decide what is appropriate, faced with a particular
problem, rather than a language designer, from more general
considerations?  Balance, as ever, is the key :-)

[...]
 From experience, I guess there are probably issues that haven't
 crossed my mind.   :-)

This is certainly true in my case - I presumed there was some deep
reason for the complex hierarchy that exists at the moment.  It was a
surprise to see it questioned here.

Sorry if I've used the wrong terminology anywhere.  Hope the above
makes some sense.

Andrew

-- 
http://www.andrewcooke.free-online.co.uk/index.html






RE: Revamping the numeric classes

2001-02-07 Thread Peter Douglass

 I have some questions about how Haskell's numeric classes might be
revamped.

 Is it possible in Haskell to circumscribe the availability of certain
"unsafe" numeric operations such as div, /, and mod?  If this is not
possible already, could a compiler flag "-noUnsafeDivide" perhaps be
added to impose such a restriction?

 What I have in mind is to make division by zero untypable.  The idea
is to require div, /, and mod to take NonZeroNumeric values in their
second argument.  NonZeroNumeric values could be created by functions
of type:
  Number a => a -> Maybe NonZeroNumeric
or something similar.
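
A rough sketch of how that could look (the type and function names
here are illustrative assumptions, not an existing library):

newtype NonZero a = NonZero a

nonZero :: (Eq a, Num a) => a -> Maybe (NonZero a)
nonZero x
  | x == 0    = Nothing
  | otherwise = Just (NonZero x)

-- Division can no longer fail at runtime: the zero case is ruled out
-- by the type of the second argument.
safeDiv :: Integral a => a -> NonZero a -> a
safeDiv x (NonZero y) = x `div` y

A caller handles the Nothing case once, where the divisor is
constructed, instead of risking a runtime error at every division.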

  Has this been tried before and, if so, why did it fail?  I'm curious
what problems there might be with such an approach.

--PeterD  




Re: Revamping the numeric classes

2001-02-07 Thread Tom Pledger

Dylan Thurston writes:
 :
 | (A question in the above context is whether the literal '0' should
 | be interpreted as 'fromInteger (0::Integer)' or as 'zero'.
 | Opinions?)

Opinions?  Be careful what you wish for.  ;-)

In a similar discussion last year, I was making wistful noises about
subtyping, and one of Marcin's questions

http://www.mail-archive.com/haskell-cafe@haskell.org/msg00125.html

was whether the numeric literal 10 should have type Int8 (2's
complement octet) or Word8 (unsigned octet).  At the time I couldn't
give a wholly satisfactory answer.  Since then I've read the oft-cited
paper "On Understanding Types, Data Abstraction, and Polymorphism"
(Cardelli & Wegner, ACM Computing Surveys, Dec 1985), which suggests a
nice answer: give the numeric literal 10 the range type 10..10, which
is defined implicitly and is a subtype of both -128..127 (Int8) and
0..255 (Word8).

The differences in arithmetic on certain important range types could
be represented by multiple primitive functions (or perhaps foreign
functions, through the FFI):

primAdd   :: Integer -> Integer -> Integer  -- arbitrary precision
primAdd8s :: Int8    -> Int8    -> Int8     -- overflow at -129, 128
primAdd8u :: Word8   -> Word8   -> Word8    -- overflow at -1, 256
-- etc.

instance Additive Integer where
zero = 0
(+)  = primAdd

...with similar instances for the integer subrange types which may
overflow.  These other instances would belong outside the standard
Prelude, so that the ambiguity questions don't trouble people (such as
beginners) who don't care about the space and time advantages of fixed
precision integers.

Subtyping offers an alternative approach to handling arithmetic
overflows:
  - Use only arbitrary precision arithmetic.
  - When a calculated result *really* needs to be packed into a fixed
precision format, project it (or treat it down, etc., whatever's
your preferred name), so that overflows are represented as
Nothing (as sketched below).
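
For instance, a projection to Int8 might look like this (toInt8 is an
illustrative name, not part of any proposal):

import Data.Int (Int8)

-- Pack an arbitrary precision Integer into Int8, representing
-- overflow as Nothing instead of wrapping around.
toInt8 :: Integer -> Maybe Int8
toInt8 n
  | -128 <= n && n <= 127 = Just (fromInteger n)
  | otherwise             = Nothing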

For references to other uses of class Subtype, see:

http://www.mail-archive.com/haskell@haskell.org/msg07303.html

For a reference to some unification-driven rewrites, see:

http://www.mail-archive.com/haskell@haskell.org/msg07327.html

Marcin 'Qrczak' Kowalczyk writes:
 :
 | Assuming that Ints can be implicitly converted to Doubles, is the
 | function
 | f :: Int -> Int -> Double -> Double
 | f x y z = x + y + z
 | ambiguous? Because there are two interpretations:
 | f x y z = realToFrac x + realToFrac y + z
 | f x y z = realToFrac (x + y) + z
 | 
 | Making this and similar cases ambiguous means inserting lots of explicit
 | type signatures to disambiguate subexpressions.
 | 
 | Again, arbitrarily choosing one of the alternatives based on some
 | set of weighting rules is dangerous,

I don't think the following disambiguation is too arbitrary:

x + y + z   -- as above

--> (x + y) + z   -- left-associativity of (+)

--> realToFrac (x + y) + z   -- injection (or treating up) done
                             -- conservatively, i.e. only where needed

Regards,
Tom




Revamping the numeric classes

2001-02-06 Thread Dylan Thurston

On Tue, Feb 06, 2001 at 03:25:11PM +0100, Koen Claessen wrote:
 What do people think about this? If people prefer these
 stylistic changes, I think we should not hesitate making
 them for Haskell/2 by completely redesigning the module
 structure and using more consistent naming conventions.

These sound great to me.  If Haskell/2 is indeed open to such changes,
would it also be possible to revamp the numeric modules?  As a
mathematician, I get annoyed by such things as

* (+) and (-) being lumped in with (*) (doesn't anyone use vector spaces?)

* the function 'atan2' being mixed in with a bunch of operations very
  specific to the floating point format in the 'RealFloat' class.
  Same problem (though less serious) with 'quot', etc., and
  'toInteger' in the Integral class.

* Superfluous superclasses: why are Show and Eq superclasses of Num?
  Not all numeric types have decidable equality.  Think arbitrary
  precision reals.
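
To make the last point concrete (a sketch added for illustration, not
part of the original message): computable reals can be represented as
approximation functions, for which arithmetic is definable but
equality is undecidable, so an Eq superclass is a real obstacle:

-- A computable real as a function giving a rational approximation
-- within 2^(-n) of the true value.
newtype CReal = CReal (Int -> Rational)

-- Addition asks both arguments for one extra bit of precision, so
-- the sum of the errors stays within 2^(-n).
addCR :: CReal -> CReal -> CReal
addCR (CReal f) (CReal g) = CReal (\n -> f (n + 1) + g (n + 1))

-- But no terminating (==) :: CReal -> CReal -> Bool exists: equality
-- of computable reals is undecidable.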

(I saw Mechvelliani's Basic Algebra Proposal; it strikes me as being
too complicated for the task.)

Best,
Dylan Thurston




Re: Revamping the numeric classes

2001-02-06 Thread Andreas Gruenbacher

On Tue, 6 Feb 2001, Dylan Thurston wrote:

 On Tue, Feb 06, 2001 at 03:25:11PM +0100, Koen Claessen wrote:
  What do people think about this? If people prefer these
  stylistic changes, I think we should not hesitate making
  them for Haskell/2 by completely redesigning the module
  structure and using more consistent naming conventions.

 These sound great to me.  If Haskell/2 is indeed open to such changes,
 would also be possible to revamp the numeric modules?  As a
 mathematician, I get annoyed by such things as

 * (+) and (-) being lumped in with (*) (doesn't anyone use vector spaces?)

That also causes me some headaches.

For various number-like algebras (*) is not even defined.  Not defining it
in an instance of Num leads to annoying runtime errors that would
otherwise be caught at compile time.  For other algebras (*) is
self-multiplication (t -> t -> t).  For yet others, it might be scaling
(s -> t -> t).

It may be the case that using (*) for scaling too is a generally bad
idea...

Another problem: The right place for fromInteger is probably not in class
Num. One cannot use the remaining Num operations on everything one can
construct from an Integer.
Otherwise it would be possible to write `0' for different kinds of zero
elements (and perhaps `1' for different one elements), even for things on
a nominal scale.

 * the function 'atan2' being mixed in with a bunch of operations very
   specific to the floating point format in the 'RealFloat' class.
   Same problem (though less serious) with 'quot', etc., and
   'toInteger' in the Integral class.

 * Superfluous superclasses: why are Show and Eq superclasses of Num?
   Not all numeric types have decidable equality.  Think arbitrary
   precision reals.

Also not all instances of Num can be shown. I have a monad that is an
instance of Num, for example. I cannot possibly show the monad.

 (I saw Mechvelliani's Basic Algebra Proposal; it strikes me as being
 too complicated for the task.)

 Best,
   Dylan Thurston


Regards,
Andreas Gruenbacher.


 Andreas Gruenbacher  [EMAIL PROTECTED]
 Research Assistant   Phone  +43(1)58801-12723
 Institute for Geoinformation Fax+43(1)58801-12799
 Technical University of Vienna   Cell phone   +43(664)4064789





Re: Revamping the numeric classes

2001-02-06 Thread Dylan Thurston

On Tue, Feb 06, 2001 at 10:29:36PM +0100, Andreas Gruenbacher wrote:
 On Tue, 6 Feb 2001, Dylan Thurston wrote:
  * (+) and (-) being lumped in with (*) (doesn't anyone use vector spaces?)
 
 That also causes me some headaches.
 
 ... For yet others, it might be scaling (s -> t -> t).
 
 It may be the case that using (*) for scaling too is a generally bad
 idea...

It may not be type sound to have the same operation, but there should
be some standard operation for scaling.  (Probably you need
multi-parameter type classes for this.)
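
A sketch of what that might look like (multi-parameter type classes
are an extension beyond Haskell 98, and the names here are
illustrative only):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Scale s t where
    scale :: s -> t -> t

-- Scaling a 2-vector of Doubles by a Double scalar.
instance Scale Double (Double, Double) where
    scale s (x, y) = (s * x, s * y)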

 Another problem: The right place for fromInteger is probably not in class
 Num. One cannot use the remaining Num operations on everything one can
 construct from an Integer.
 Otherwise it would be possible to write `0' for different kinds of zero
 elements (and perhaps `1' for different one elements), even for things on
 a nominal scale.

When I thought about it, I concluded that '0' belongs with '+' and '-',
'1' belongs with '*', and 'fromInteger' belongs with their join (which
is mathematically called a Ring, but could keep the name Num).
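
In class form, that split might look like this (a minimal sketch with
assumed class names, not an agreed design; Prelude names are hidden to
avoid clashes):

import Prelude hiding ((+), (-), (*), negate, fromInteger)
import qualified Prelude

infixl 6 +, -
infixl 7 *

class Additive a where
    zero     :: a
    (+), (-) :: a -> a -> a
    negate   :: a -> a

class Multiplicative a where
    one :: a
    (*) :: a -> a -> a

-- The join: fromInteger makes sense exactly where both meet.
class (Additive a, Multiplicative a) => Ring a where
    fromInteger :: Integer -> a

instance Additive Integer where
    zero   = 0
    (+)    = (Prelude.+)
    (-)    = (Prelude.-)
    negate = Prelude.negate

instance Multiplicative Integer where
    one = 1
    (*) = (Prelude.*)

instance Ring Integer where
    fromInteger = id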

Best,
Dylan Thurston
