On Nov 8, 2004, at 3:08 AM, Leopold Toetsch wrote:

Jeff Clites <[EMAIL PROTECTED]> wrote:

No. The binary operations in Python are opcodes, as well as in Parrot.
And both provide the syntax to override the opcode with a method call,
that's it.

I guess we'll just have to disagree here. I don't see any evidence of this.

UTSL please. The code is even inlined:

,--[ Python/ceval.c ]--------------------------------
|       case BINARY_ADD:
|               w = POP();
|               v = TOP();
|               if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
|                       /* INLINE: int + int */
|                       register long a, b, i;
|                       a = PyInt_AS_LONG(v);
|                       b = PyInt_AS_LONG(w);
|                       i = a + b;
|                       if ((i^a) < 0 && (i^b) < 0)
|                               goto slow_add;
|                       x = PyInt_FromLong(i);
`----------------------------------------------------

But I said, "from an API/behavior perspective". How the regular Python interpreter is implemented isn't the point--it's how the language acts that's important. And I can't think of any user code in which "a + b" and "a.__add__(b)" act differently, and I think that's intentional--an explicit language design decision. The implementation of BINARY_ADD above is most likely an optimization--the code for BINARY_MULTIPLY (and exponentiation, division, etc.) looks like this:


                case BINARY_MULTIPLY:
                        w = POP();
                        v = TOP();
                        x = PyNumber_Multiply(v, w);
                        Py_DECREF(v);
                        Py_DECREF(w);
                        SET_TOP(x);
                        if (x != NULL) continue;
                        break;
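The behavioral point--that "a + b" and "a.__add__(b)" act the same from user code--is easy to check directly. A minimal sketch (the class name here is made up for illustration):

```python
# Sketch: "a + b" and "a.__add__(b)" behave identically from user code.
class Meters:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        return Meters(self.n + other.n)

a, b = Meters(2), Meters(3)
# The operator and the explicit method call give the same result.
assert (a + b).n == a.__add__(b).n == 5
```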

And again, what about Ruby? If you believe in matching the current philosophy of the language, Ruby won't use ops for operators (but rather method calls), so the ops won't do the right thing for objects that have vtable/MMD entries but no corresponding methods.

Not actually MMD in Python--behavior only depends on the left operand,
it seems.

It's hard to say what Python actually does. It's a mess of nested if's.

Just look at the behavior--that's what's important:

Behavior depends only on left operand:

==> class Foo:
...     def __add__(a,b): return 7
...
==> x = Foo()
==> x + x
7
==> x + 3
7
==> x + "b"
7
==> x + (1,2)
7

All the following are error cases. The error message varies depending on the left operand only:

==> 3 + "b"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for +: 'int' and 'str'
==> 3 + x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for +: 'int' and 'instance'
==> 3 + (1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for +: 'int' and 'tuple'

==> "b" + 3
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'int' objects
==> "b" + x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'instance' objects
==> "b" + (1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'tuple' objects

==> (1,2) + 3
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: can only concatenate tuple (not "int") to tuple
==> (1,2) + "b"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: can only concatenate tuple (not "str") to tuple
==> (1,2) + x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: can only concatenate tuple (not "instance") to tuple
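A toy model of the dispatch the transcripts above suggest--consult only the left operand's __add__, and raise if that fails--might look like this (an illustration of the observed behavior, not CPython's actual code path):

```python
def binary_add(v, w):
    # Dispatch only on the left operand's type, mirroring the
    # behavior shown in the interpreter sessions above.
    add = getattr(type(v), "__add__", None)
    if add is not None:
        result = add(v, w)
        if result is not NotImplemented:
            return result
    # The error message is phrased by the left operand's failure.
    raise TypeError("unsupported operand type(s) for +: %r and %r"
                    % (type(v).__name__, type(w).__name__))

class Foo:
    def __add__(self, other):
        return 7

assert binary_add(Foo(), "b") == 7   # left operand wins, whatever w is
assert binary_add(2, 3) == 5
```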

   null dest
   dest = l + r

should produce a *new* dest PMC.

Yes, it's a separate issue, but it's pointing out a general design
problem with these ops--their baseline behavior isn't useful.

It *is* useful. If the destination exists, you can use it. The destination PMC then acts as a reference, changing the value in place. But in the case of Python it's not of much use

Right, changing the value in-place would do the wrong thing, for Python. (It depends on whether the arguments to the op are references, or the actual values. If they're references, then it can work correctly, but then we don't want to be MMD dispatching on the (reference) types, but rather on the types of what they're pointing to.)


except for the inplace (augmented) operations.

Yes, but that ends up being just for the two-argument forms (and even those don't work for Python--"a += 3" doesn't really update in place in Python, but rebinds the name to a new instance). In-place operators tend to take only one argument "on the right side", so the p_p_p forms aren't useful for this.
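The Python behavior described here is easy to observe: for an immutable int, "a += 3" rebinds the name to a new object, while a mutable type like list updates in place via __iadd__. A small demonstration:

```python
# Immutable int: += produces a new object and rebinds the name.
a = 5
old_id = id(a)
a += 3
assert a == 8 and id(a) != old_id  # different object after +=

# Mutable list: += calls __iadd__, which mutates in place,
# so an alias to the same object observes the change.
xs = [1, 2]
alias = xs
xs += [3]
assert alias is xs and alias == [1, 2, 3]
```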


..., but for
PMCs this could compile like "a = b.plus(c)".

but you don't need add_p_p_p, just method invocation.

Why should we do method invocation with all its overhead, if for the normal case a plain function call will do?

Ah, that's the key. Method invocation shouldn't have onerous overhead. If it does, then we've got problems, and nobody will want to write in bytecode--we'll have PMCs and custom ops coming out our ears (more than now...), if that's the only way to get decent performance. Our PMC ops will only ever cover a tiny sliver of the API that people will need from their objects--most objects aren't number-like or string-like, so if ops are the only way to get good performance, then that's a problem.


Method call overhead doesn't have to be high. Objective-C has dynamic method invocation, but the time for a method call is only about 1.5 times that of a C function call, and not the dominating performance factor in real applications.
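As a rough illustration of the claim that dynamic dispatch need not dominate, one can time a plain function call against a method call; the numbers vary by machine, so no specific ratio is asserted here:

```python
import timeit

def f(x):
    return x + 1

class C:
    def m(self, x):
        return x + 1

c = C()
n = 100_000
t_func = timeit.timeit(lambda: f(1), number=n)
t_meth = timeit.timeit(lambda: c.m(1), number=n)
# The method call adds attribute lookup and binding on top of the call,
# but both remain the same order of magnitude in practice.
print(f"function: {t_func:.4f}s, method: {t_meth:.4f}s")
```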

Certainly Parrot needs work in this area, to improve speed.

I suppose it may be justified to special-case mathematical operations, if HLLs will be implementing them as PMCs always (which might be the case), and if method calls end up slow. But the real way to get high-performance math is to use the I/N types.

For complex numbers and such, I'd want to be able to define classes for
them in bytecode. For that to work, ops would eventually have to
resolve to method calls anyway.

This is all working now already. You can do that. Again: if a method is there, it's used (well, almost: MMD not yet, but vtables are fine):

Good to know that you can do MMD and vtable implementations in bytecode--that's encouraging at least. But in my opinion vtable entries are just acting as the compiled/PMC analog of methods--things would be cleaner and simpler if they were one and the same. I suppose it's just a philosophical difference, that I find it inelegant to have multiple overlapping ways to provide the same functionality, when a single general-purpose approach is available and would work.
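For the complex-number case mentioned above, the operator-as-method approach looks like this in Python (a sketch; a real implementation would handle more mixed operand types and the reflected forms):

```python
class Complex:
    # Minimal user-defined numeric type: "+" resolves to a method call.
    def __init__(self, re, im):
        self.re, self.im = re, im

    def __add__(self, other):
        # Promote a plain number to Complex so mixed addition works.
        if isinstance(other, (int, float)):
            other = Complex(other, 0)
        return Complex(self.re + other.re, self.im + other.im)

    def __repr__(self):
        return f"Complex({self.re}, {self.im})"

c = Complex(1, 2) + Complex(3, 4)
assert (c.re, c.im) == (4, 6)
d = Complex(1, 2) + 3
assert (d.re, d.im) == (4, 2)
```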


Jeff


