On 14/03/2011 3:21 AM, Marijn Haverbeke wrote:

> The 'wrapping' is done implicitly and transparently, and the compiler
> knows about it, and can thus optimize trivial things like 'x == 0',
> if int is an instance of class 'Comparable', into direct calls to the
> int compare function, rather than creating two objects wrapping ints
> and looking up the compare operation in their vtables at runtime. This
> will allow things to be generalized without having to pay for the
> generality when we don't need it.

In general we're not supporting (or not *starting* with the notion of supporting) any transparent conversions at all. Not even subtyping. Particularly not a conversion that involves allocating a wrapper object and a vtbl. We don't even auto-box 10 to @int!

And it will, of course, allocate in cases where the compiler can't or won't inline. Like, as others have pointed out, across crate boundaries. Within a crate we might well see LLVM doing interprocedural inlining of obj vtable calls as well, but that's neither here nor there. The artifact ("an obj") exists for the programmer to see what's going on and decide when to construct a wrapper. Cost model is visible and obvious. "No reliance on smart compilers." Maybe I should have made that our slogan?
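To make the cost-model point concrete, here's a minimal sketch in modern Rust syntax (the trait name `Describe` and both functions are hypothetical; 2011 Rust's `obj` system looked quite different). The generic path is monomorphized into a direct call, while the dynamic path requires the programmer to construct the boxed wrapper explicitly in the source, so the allocation and vtable are visible rather than inserted by a smart compiler:

```rust
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("int {self}")
    }
}

// Static dispatch: monomorphized per concrete T, compiled to a
// direct (and typically inlined) call. No allocation.
fn describe_static<T: Describe>(x: &T) -> String {
    x.describe()
}

// Dynamic dispatch: the caller writes the Box and the coercion to
// `dyn Describe` out by hand, so the wrapper's cost is in plain view.
fn describe_dynamic(x: Box<dyn Describe>) -> String {
    x.describe()
}

fn main() {
    assert_eq!(describe_static(&7), "int 7");
    let wrapped: Box<dyn Describe> = Box::new(7); // explicit wrapper construction
    assert_eq!(describe_dynamic(wrapped), "int 7");
}
```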

> As for terminology, using a new name (typeclasses) for something
> that's new (they are simply a different thing from Java/C++/etc
> classes) is necessary to prevent confusion. In the nineties, I'm sure
> some people derided objects as elitist. Right now, it's not hard to
> find people who consider functional programming ivory-tower nonsense.
> Progress, by being different from the old thing, is always going to
> take some getting used to.

Yeah, I'm sorry. It was a crude thing to say.

I didn't mean it in terms of "FP tech is intrinsically elitist", just that using its encodings for a problem will inevitably collide with, or sacrifice, the more widely understood non-FP encoding of the same problem (unless you want your language to carry both, which doubles the cognitive load). So if I have to choose, I'll go with the more mainstream terminology and encoding.

Case in point: Sebastian contacted me off-list to note that there remains one particular case that typeclasses encode better than objs: that of N-ary operations over your type. As he put it: the vtable adheres to the type, not the value.
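Sebastian's point can be sketched in modern Rust syntax (the `Parse` trait here is a hypothetical illustration, not anything from the 2011 language). An operation with no receiver dispatches on the *type*, chosen at the call site; a per-value vtable can't express it, because there is no value yet to carry the vtable:

```rust
// A typeclass-style operation over the type itself: no `self` argument.
trait Parse: Sized {
    fn parse(s: &str) -> Option<Self>;
}

impl Parse for i32 {
    fn parse(s: &str) -> Option<i32> {
        s.trim().parse().ok()
    }
}

impl Parse for bool {
    fn parse(s: &str) -> Option<bool> {
        match s.trim() {
            "true" => Some(true),
            "false" => Some(false),
            _ => None,
        }
    }
}

fn main() {
    // Dispatch is resolved from the type annotation alone.
    assert_eq!(<i32 as Parse>::parse(" 42 "), Some(42));
    assert_eq!(<bool as Parse>::parse("true"), Some(true));
    // A `Box<dyn Parse>` would be rejected by the compiler: with no
    // receiver, there's no value whose vtable could select `parse`.
}
```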

I find this similar to the argument in favour of existentials (which, while you weren't here for it, we used to support, along with first class modules). For existentials we eventually decided that since an inverted-control-flow encoding of them exists using universals, they didn't justify their cognitive load. I have a similar feeling here: we have a way of making vtables, it's a way that looks and feels like what more people will recognize, so let's see how much mileage we can get out of that. I *suspect* (though cannot prove) that most of the typeclass use-cases encode as objects + universals with a little rearrangement. And that's worth pursuing if true.
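The inverted-control-flow encoding of existentials via universals mentioned above can be sketched as follows, again in modern Rust syntax with hypothetical names (`Shape`, `ShapeConsumer`, `with_some_shape`). Instead of returning a package of "some hidden T implementing Shape", the producer accepts a consumer that is universally quantified over T, i.e. the usual `∃x.P(x) ≈ ∀r.(∀x.P(x)→r)→r` trick:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { 3.14159 * self.r * self.r }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// The rank-2 universal: a consumer generic over *any* T: Shape,
// expressed as a trait with a generic method.
trait ShapeConsumer<R> {
    fn consume<T: Shape>(&self, shape: T) -> R;
}

// The "existential" producer: it picks a concrete shape internally
// but never exposes its type; callers only get at it through k.
fn with_some_shape<R>(big: bool, k: &impl ShapeConsumer<R>) -> R {
    if big {
        k.consume(Circle { r: 2.0 })
    } else {
        k.consume(Square { s: 3.0 })
    }
}

struct AreaOf;
impl ShapeConsumer<f64> for AreaOf {
    fn consume<T: Shape>(&self, shape: T) -> f64 {
        shape.area()
    }
}

fn main() {
    assert_eq!(with_some_shape(false, &AreaOf), 9.0);
}
```

The control flow is inverted relative to the existential version: the consumer goes *in* rather than the package coming *out*, which is exactly the cognitive-load trade being weighed here.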

Keep these graphs in mind:

http://langpop.com/#normalized

The first 13-or-so entries on that list (and the vast majority of the implied population) are languages that encode such problems in objects + universals. It's not that it's a strictly better encoding, merely a strong argument in favour of going with the flow when picking our own encoding.

-Graydon
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
