The idea of changing the precedence and meaning of ^ from
XOR to exponentiation is a bit provocative. I've been
programming for so long that I often forget that ^
means exponentiation to most people.

A quick search reveals that the current Python symbol for
exponentiation (**) comes from Fortran, which used that
symbol because ^ did not exist in its character set. The use
of ^ for bitwise XOR was introduced by the C language. Even
the search results pointed out that this is confusing for
newcomers.

If I did what you propose, everyone who has programmed in C
would mix the two up from time to time. But those people are
better equipped to deal with the problem than people who
might enter programming through symbolic algebra. Replacing
the XOR ^ with a POW ^ could make a lot of sense, even
though I did not really consider it at first.


Another, slightly wilder idea I've had: what if ordinary
integer literals written by users were bignums, and literals
such as 1.2 were exact fractions by default? This could slow
down some programs considerably, but the way I see it, a
dynamic language is usually an excellent starting point, and
from there it might be better to provide tools that optimize
the dynamically typed programs into something else.
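
For what it's worth, Python already demonstrates both halves
of this idea: its int is a bignum, and the standard
fractions module shows what an exact-by-default 1.2 would
behave like:

```python
from fractions import Fraction

# Python's int is already an arbitrary-precision bignum:
print(2 ** 100)             # 1267650600228229401496703205376

# An exact 1.2 would behave like Fraction("1.2"), unlike the
# binary float literal 1.2, which is only an approximation.
print(Fraction("1.2"))      # 6/5
print(Fraction("1.2") * 5)  # 6, exactly
print(0.1 + 0.2)            # 0.30000000000000004
```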


I followed Richard's advice and tried to list and identify
the points of friction.

To get started, I made myself a model of what typical
operator overloading looks like in Python at the moment:
https://gist.github.com/cheery/228b6651fb6a460b91f26195fe58e397

To resolve a+b, Python first calls a.__add__(b) on the
left-hand object. If that returns NotImplemented, it tries
b.__radd__(a). If that also returns NotImplemented, the
operation fails with a TypeError.
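
As a small sketch of that protocol (ignoring Python's extra
rule that b.__radd__ is tried first when type(b) is a proper
subclass of type(a) overriding it); Meters and Feet here are
made-up illustration types:

```python
class Meters:
    def __init__(self, n): self.n = n
    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.n + other.n)
        return NotImplemented  # decline; let the right side try

class Feet:
    def __init__(self, n): self.n = n
    def __radd__(self, other):
        if isinstance(other, Meters):
            # 1 foot = 0.3048 meters
            return Meters(other.n + self.n * 0.3048)
        return NotImplemented

print((Meters(1) + Meters(2)).n)  # 3: Meters.__add__ handled it
# Meters.__add__ declined Feet, so Feet.__radd__ ran instead:
print((Meters(1) + Feet(10)).n)
```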

Operator overloading in Python looks better than I
remembered. That is probably because many guides and memos
fail to state the important part of how Python actually
resolves the call, and they do not even include the
isinstance check needed to identify the other side of the
expression.

In my language I have implemented '+' as a multimethod and
added another multimethod for coercion. I was happy with
that for a while, but then I realised it may make some
things worse: the multimethod resolution ends up being more
complex rather than simpler, and not that many problems
appear to come from the dispatch mechanism after all.
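
For comparison, here is a minimal model of what I mean by a
'+' multimethod: a table keyed by the pair of argument
types, where neither operand "owns" the rule. The names
(plus, Mat) are made up for illustration:

```python
# Dispatch table for '+', keyed by the (left, right) type pair.
_add_table = {}

def add_method(lhs_type, rhs_type):
    """Register an implementation of '+' for one type pair."""
    def register(fn):
        _add_table[(lhs_type, rhs_type)] = fn
        return fn
    return register

def plus(a, b):
    fn = _add_table.get((type(a), type(b)))
    if fn is None:
        raise TypeError(
            f"no '+' method for {type(a).__name__}, {type(b).__name__}")
    return fn(a, b)

class Mat:
    def __init__(self, v): self.v = v

@add_method(Mat, Mat)
def _(a, b):
    return Mat(a.v + b.v)

@add_method(int, Mat)  # the rule lives in the table, not in a class
def _(a, b):
    return Mat(a + b.v)

print(plus(Mat(1), Mat(2)).v)  # 3
print(plus(10, Mat(2)).v)      # 12
```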

Another thing that seemed important, but is perhaps not such
a crucial flaw, is that Python's approach lets the left side
dominate through __add__. If the operation is found in
__add__, then __radd__ is never searched, and this can cause
conflicts between different libraries that extend the
arithmetic.
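
A small illustration of that bias, with two hypothetical
library types:

```python
class Greedy:
    # A library type whose __add__ accepts anything instead
    # of returning NotImplemented for types it doesn't know.
    def __add__(self, other):
        return ("greedy", other)

class Poly:
    # Another library's type, hoping to be reached via __radd__.
    def __radd__(self, other):
        return ("poly", other)

result = Greedy() + Poly()
# Greedy.__add__ already returned a value, so Poly.__radd__
# is never consulted -- the left side wins.
print(result[0])  # greedy
```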

In my opinion the real problem is, and will likely always
be, getting multiple libraries to interoperate. I've
observed that statically typed languages seem to handle this
better than dynamically typed ones, but in the end the ways
in which you successfully cope with the challenges turn out
to be the same.

If you have M different kinds of values, then in the worst
case you need M*M different implementations for each
arithmetic operation. Within a single project you are likely
to cope well, but in the presence of multiple systems it
becomes harder.

I think these ideas boil down to this: when you create new
behavior for arithmetic, you are creating a new numerical
system that extends the base types you have. This new
"number system" describes how the new values behave with the
existing values.

Languages like Haskell seem to acknowledge that people
create new numerical systems when they overload operators.
Integer literals are implicitly wrapped with (fromInteger n),
and you are supposed to define that conversion function from
literals when you define the + operation. Different
numerical systems defined this way do not interact with each
other in Haskell.
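
Translated into Python terms (Mod7 is a made-up example),
that pattern corresponds to each system supplying its own
literal-conversion function and lifting plain numbers into
itself before adding, while staying closed to everything
else:

```python
class Mod7:
    """A tiny 'number system': integers modulo 7."""
    def __init__(self, n):
        self.n = n % 7

    @classmethod
    def from_int(cls, n):
        # Analogue of Haskell's fromInteger: lift a plain
        # integer literal into this system.
        return cls(n)

    def __add__(self, other):
        if isinstance(other, int):
            other = Mod7.from_int(other)
        if isinstance(other, Mod7):
            return Mod7(self.n + other.n)
        return NotImplemented  # other systems stay separate

    __radd__ = __add__

print((Mod7(5) + 4).n)  # 2
print((3 + Mod7(6)).n)  # 2
```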

I've been studying subtyping in order to optimize
dynamically typed programs directly from their source code
and get them translated to run in C- or GLSL-like
environments alongside the dynamic portions of the programs.
I think this interacts with how you overload the arithmetic.


Another potential starting point I've been thinking about
relates to how numerical libraries compute efficiently.

An obvious thing a programming language implementation could
provide is tight memory layouts, or even in-memory
relational tables that you can fill with numerical data. The
tight memory layouts appear to matter the most for efficient
computing.

Numerical data itself seems to always come in bundles of
numbers: you have quaternions or matrix data in single- or
double-precision floating point.

I've been thinking that when you have quaternions or
matrices, it looks like neither one actually needs to take
precedence, and the number bundle ends up staying an array.
Perhaps those bundles could be treated as sums of
term*constant pairs, where the terms part is a parametric
type?


Lisp is often mentioned as being a family of various
languages such as Common Lisp, Scheme, and Racket. If you
provided specific examples it would help, but I will try to
find them myself as well.

On Monday, January 15, 2018 at 21:04:31 UTC+2, Aaron Meurer wrote:
>
> You might look at multiple dispatch as an improved method of operator 
> overloading, such as what is implemented in Julia. There are also 
> potentially more advanced things which can be useful, such as pattern 
> matching (Mathematica, Haskell). 
>
> You have to consider the tradeoffs, however, with expressibility. The 
> most general possible system is a full macro system, which lets you 
> effectively define whatever syntax you want. But the downside to that 
> is that you no longer have a fully consistent language. Someone 
> reading the code for a library must first learn the syntax. To 
> contrast something like Python, which does not have a macro system and 
> has (relatively) limited operator overloading, someone who already 
> knows the language from other uses can read the SymPy code and have a 
> good idea of what things mean (it also helps that SymPy and Python use 
> a fairly strict adherence to standard mathematical notation). 
>
> To give an example of this, we sometimes wish that Python allowed 
> changing the operator precedence. For instance, we can't use ^ for 
> exponentiation, because even though Python allows overriding that 
> operator, it is fixed at a precedence lower than +, so a + b ^ 2 
> without parentheses will give (a + b)^2 (^ is XOR). The downside is 
> that new users to Python must learn to use ** instead of ^ for 
> exponents. The upside is that anyone already familiar with Python can 
> see any SymPy expression and know how it will be interpreted, because 
> operator precedence is uniform across all Python code, even if it 
> overloads the operators to mean different things. 
>
> So I think there's a sweet spot. We have run into limitations of 
> Python's operator overloading mechanism. I personally think that 
> multiple dispatch and algebraic pattern matching are both interesting, 
> but having not used a language that implements either extensively, I 
> can't say for sure if either are that sweet spot. 
>
> Aaron Meurer 
>
>
> On Sun, Jan 14, 2018 at 7:46 PM, Henri Tuhola <[email protected]> wrote: 
> > Hello, 
> > 
> > I would directly want to reach people who have been developing sympy, so 
> I posted on your mailing list. 
> > 
> > Development of sympy on top of python has not been frictionless, and I 
> think you know that. Unfortunately only few people know what's even wrong 
> there. 
> > 
> > I am working on a new programming language and found out that the choice 
> of how to allow extension of arithmetic is a really tough problem to figure 
> out. Simply implementing arbitrary operator overloading and calling it a 
> day doesn't seem to be so good choice. 
> > 
> > I wonder that if you had a choice of redesigning arithmetic in Python to 
> better support sympy, what would you attempt to solve? If you know what you 
> would do, I would gladly read that. 
> > 
> > Thank you ahead of the time. 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups "sympy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to [email protected]. 
> > To post to this group, send email to [email protected]. 
> > Visit this group at https://groups.google.com/group/sympy. 
> > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sympy/2ab5c94f-97f2-46d9-aa92-39839a68566a%40googlegroups.com.
>  
>
> > For more options, visit https://groups.google.com/d/optout. 
>

