On 29/01/14 10:45, Kevin Ballard wrote:
On Jan 28, 2014, at 3:16 PM, Pierre Talbot <ptal...@hyc.io> wrote:
On 01/28/2014 11:24 PM, Kevin Ballard wrote:
It sounds like you're proposing that arbitrary functions may be eligible for
CTFE if they happen to meet all the requirements, without any special
annotations. This seems like a bad idea to me. I understand why it's
attractive, but it means that seemingly harmless changes to a function's
implementation (but not its signature) can cause compiler errors in other
modules, or even other crates if the AST for the function happens to be made
extern.
A more conservative approach would be to require the #[ctfe] annotation, which
then imposes all the given restrictions on the function. The downside is that such
a function is then restricted to only calling other CTFE functions, so we'd have
to go into the standard libraries and add this annotation wherever we think
it's both useful and possible.
This approach mirrors the one used by C++11/C++14.
-Kevin
I understand your point of view, but adding #[ctfe] doesn't solve the problem
either: the library designer could remove this annotation, couldn't they? I wasn't
precise about it, but I gave #[ctfe] a different semantic than the one you
understood. Let me rephrase it:
* #[ctfe] hints to the compiler that performing CTFE outside of the specified
contexts is safe. It means that for any input this function will terminate
[in a reasonable amount of time and memory].
We should keep in mind the drawbacks of the constexpr semantics:
1. It forces the library designer to think about CTFE; the user might be in a
better position, since he knows well which parameters he'll pass to the
function.
2. Annotating functions means more maintenance, more changes and more errors.
Moreover, C++11 constexpr only allows a subset of the language, which is
practical for the compiler implementor but not for the library designer. In D,
they instead specify when a function is *not* eligible.
Yes, I was using #[ctfe] to mean something slightly different than you were. In my case,
it meant "mark this function as eligible for CTFE, and impose all the CTFE
restrictions". And it does fix the problem I mentioned, because #[ctfe] would be
considered part of the function signature, not the function implementation. Everyone is
already used to the idea that modifying the function signature may cause compiler errors
at the call site. But the only example I can think of right now where changing a
function's _implementation_ causes call-site compiler errors is C++ templates.
FWIW, `transmute` causes such errors in Rust. E.g. `fn errors<T>(x: T) {
unsafe { std::cast::transmute::<T, int>(x); } }` will fail to compile
when instantiated with `u16`, but not with `uint`, and this isn't encoded
in the type signature.
Huon
Not only that, but with your approach, changing the implementation of one
function could accidentally cause a whole host of other functions to become
ineligible for CTFE. And the farther apart the actual source of the problem
and the resulting error are, the harder such errors are to diagnose and fix.
That said, I was not aware that D already takes this approach, of allowing CTFE
by default. I'm curious how it works for them, and how they handle these
problems.
-Kevin
_______________________________________________
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev