On Thu, Dec 5, 2013 at 10:43 AM, Gábor Lehel <[email protected]> wrote:
>
>> Free theorems aren't.
>
> This is referring to the idea that if type variables are completely opaque
> to a generic function parameterized over them, then you can formulate some
> equational theorems for the function "for free", which you know will always
> be true, because they can't not be. For example, for the Haskell function
> fmap :: forall a b. (a -> b) -> (f a -> f b):
>
>     fmap f . fmap g = fmap (f . g)
>     fmap id = id
>
> Due to unrestricted (non-mutation) side effects Rust flunks this, but it
> would be nice if, for functions which can be assumed to be pure, it didn't.
> For that we have to be vigilant about never allowing things like
> dynamic_cast or various built-in intrinsics on completely generic types, at
> least outside of unsafe code. This seems more hopeful, but Rust still flunks
> it right now because of, for one instance, size_of::<T>(). I'm not sure
> whether there are other instances. It would be nice if this could be
> addressed.
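To make the size_of point concrete, here's a small sketch in today's Rust
syntax (the syntax at the time of this thread differed; `not_parametric` is
an illustrative name, not anything from the thread). A generic function can
observe a property of T through the intrinsic, so the free-theorem reasoning
no longer holds for it:

```rust
// A generic function that is *not* parametric: it branches on a property
// of T via size_of, which a truly opaque type parameter would forbid.
use std::mem::size_of;

fn not_parametric<T>() -> &'static str {
    // Under parametricity, no code path could depend on which T was chosen.
    if size_of::<T>() == 0 {
        "zero-sized"
    } else {
        "has a size"
    }
}

fn main() {
    assert_eq!(not_parametric::<()>(), "zero-sized");
    assert_eq!(not_parametric::<u64>(), "has a size");
}
```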
There are worse cases than `size_of` like `type_id`, `get_tydesc` and
`visit_tydesc`. The `visit_tydesc` intrinsic is by far the worst since it
allows walking the entire type, even cross-crate private fields. This is
used by `std::repr` for "%?" in format strings.

>> The container types they do have have weird ad hoc overloadings. e.g. Map
>> is treated as an iterable container of pairs, but this means you can't
>> write code that is parametric in the Traversable container type that can
>> do anything sensible. It is one of those solutions that seems like it
>> might be a nice idea unless you've had experience programming with more
>> principled classes like Foldable/Traversable.
>
> Not quite sure what sort of overloadings this is referring to, but if/when
> we add an Iterable trait, this means that perhaps the impl for maps should
> iterate only over values, rather than key-value pairs, so that it makes
> more sense to use the same generic code for both e.g. maps and arrays. (Of
> course a (key, value) iterator should still be provided separately.)
>
> When we gain higher-kinded types, we should also be able to add the actual
> Foldable and Traversable traits themselves.

I think it makes sense to treat a map as a sequence of tuples. There is a
corresponding array type, and you can always extract only the keys or only
the values with `map`. I expect there isn't going to be an `Iterable` type
for a while though. There's not a single obvious way to implement it.

>> You wind up with code that looks like myMap.map(...).toMap all over the
>> place due to CanBuildFrom inference woes.
>
> No idea what this is about.

I think this is similar to `collect` in Rust, where a container type needs
to be provided and usually can't be inferred. I'm not sure how it could be
done differently.

>> Tail-call optimization is only performed for self-tail calls, where you
>> do not do polymorphic recursion.
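On the tail-call restriction just quoted: the trampoline workaround that
comes up later in this message can be sketched in Rust. This is a hedged
illustration in current syntax (`Step`, `run`, `even`/`odd` are my names,
not from the thread): each step returns either a final value or the next
call as a boxed closure, so a driver loop replaces deep mutual recursion
with constant stack usage, at the cost of a heap allocation per step.

```rust
// Minimal trampoline sketch: non-self "tail calls" are reified as heap
// closures and driven by a loop instead of growing the call stack.
enum Step<T> {
    Done(T),
    More(Box<dyn FnOnce() -> Step<T>>),
}

fn run<T>(mut step: Step<T>) -> T {
    loop {
        match step {
            Step::Done(v) => return v,
            Step::More(f) => step = f(),
        }
    }
}

// Mutual recursion that would overflow the stack if written as plain calls.
fn even(n: u64) -> Step<bool> {
    if n == 0 { Step::Done(true) } else { Step::More(Box::new(move || odd(n - 1))) }
}

fn odd(n: u64) -> Step<bool> {
    if n == 0 { Step::Done(false) } else { Step::More(Box::new(move || even(n - 1))) }
}

fn main() {
    // A million bounces without deepening the stack.
    assert!(run(even(1_000_000)));
    assert!(run(odd(3)));
}
```

This is essentially what the scalaz trampoline mentioned below does, minus
the JIT interaction; in Rust the cost is the per-step boxing rather than a
deoptimization.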
> Rust is a little bit better here (LLVM does "sibling call" optimization,
> which is somewhat more general than just self calls), but not by much.

It's worse too, because you have to understand the obscure rules used by
LLVM: the optimization only works for internal functions unless you use
link-time optimization, and it doesn't happen at all for non-optimized
builds. An example of those obscure rules is that a function returning a
large value (returned via an out pointer) is never a candidate for tail
call elimination on most architectures. You also have to understand which
cases Rust will use an out pointer for, and that's likely to keep changing.

>> Monads are toys due to the aforementioned restriction. (>>=) is called
>> flatMap. Any chain of monadic binds is going to be a series of non-self
>> tail calls. A function calls flatMap which calls a function, which calls
>> flatMap... This means that non-trivial operations in even the identity
>> monad, like using a Haskell-style traverse for a monad over an arbitrary
>> container, blow the stack after a few thousand entries.
>
> Rust is not any better on this front. While we still had segmented stacks
> we could've handled this, if not in constant space then at least on the
> stack, but we no longer do.
>
> I'm wondering whether the fact that we're going to have moves and drops
> statically tracked by the compiler might make it more realistic to think
> about TCE again. I think the biggest remaining obstacle would be the
> calling convention.

I don't think it's a good idea to add any frontend support for tail-call
elimination. It's not something that's ever going to be acceptable with
optimizations disabled, because it drastically changes debugging.

>> We can fix this, and have in scalaz, by adapting apfelmus' operational
>> monad to get a trampoline that moves us off the stack to the heap, hiding
>> the problem, but at a 50x slowdown, as the JIT no longer knows how to
>> help.
>> We can also fix it by passing imperative state around, and maybe getting
>> scala to pass the state for me using implicits and hoping I don't
>> accidentally use a lazy val. Guess which one is the only viable solution
>> I know at scale? The code winds up less than 1/2 the size and 3x faster
>> than the identity monad version. If scala was the only language I had to
>> think in, I'd think functional programming was a bad idea that didn't
>> scale, too.
>>
>> for yield sugar is a very simple expansion, but that means it has all
>> sorts of rules about what you can't define locally inside of it, e.g. you
>> can't stop and def a function, lazy val, etc. without nesting another for
>> yield block.
>
> Not sure what this is about, but if we ever add a `yield` construct we
> should presumably try to take this into consideration.

If we have `yield` sugar, it's going to be more restrictive than a regular
function in order to maintain memory safety. Move semantics are very
relevant here.

_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
