On 26/04/2013 3:07 AM, Niko Matsakis wrote:
> I think checked integer overflow could be a good idea, presuming the
> cost is reasonable, but I am somewhat skeptical of having it be enabled
> or disabled on a module-by-module basis. This would imply that if I move
> a function from one module to another, its behavior changes? That seems
> surprising. Having the overflow checking be based on the type seems more
> robust. But maybe this would not be a big issue in practice; after all,
> it's already the case that moving a function requires various
> corrections (name resolution, etc.) to account for the new scope. But
> somehow this feels "different" from those, because here compilation
> would succeed while the dynamic behavior silently changes.
Yeah. This is tricky state to enforce. Reminds me a bit of locales, tbh.
If you think that's awful, allow me also to direct your attention to the
_global_ state that controls the interpretation of floating point
arithmetic. How on earth does a programmer influence that reliably, when
composing programs from subprograms with differing assumptions?
Save-and-reload at module-crossings? Attach the modes to types? Figure
out a subtype lattice for dispatching operations? Hahaha. Thankfully the
754 authors left all of this undefined, for us to struggle with.
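(If it helps to picture the "save-and-reload at module-crossings" option: here is a rough Rust sketch of a scoped guard for that kind of global mode. The thread-local cell is only a stand-in for the hardware control register, since there is no portable standard API for it, so treat this as a shape, not a proposal.)

    use std::cell::Cell;

    // Stand-in for the rounding field of the FP control register. In
    // reality this is global hardware state, which is exactly the problem.
    #[derive(Copy, Clone, PartialEq, Debug)]
    enum Rounding { Nearest, TowardZero }

    thread_local! {
        static MODE: Cell<Rounding> = Cell::new(Rounding::Nearest);
    }

    // RAII guard: save the caller's mode on entry, restore it on drop, so a
    // subprogram with different assumptions can't leak its mode outward.
    struct ModeGuard { saved: Rounding }

    impl ModeGuard {
        fn set(m: Rounding) -> ModeGuard {
            ModeGuard { saved: MODE.with(|c| c.replace(m)) }
        }
    }

    impl Drop for ModeGuard {
        fn drop(&mut self) {
            MODE.with(|c| c.set(self.saved));
        }
    }

    fn main() {
        {
            let _g = ModeGuard::set(Rounding::TowardZero);
            // ... call into a subprogram that assumes truncation ...
            assert_eq!(MODE.with(|c| c.get()), Rounding::TowardZero);
        }
        // The caller's mode is back once the guard is dropped.
        assert_eq!(MODE.with(|c| c.get()), Rounding::Nearest);
    }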
There _is_ some interesting work in this space. I recommend browsing
through these notes:
http://grouper.ieee.org/groups/754/meeting-materials/2001-10-18-langdesign.pdf
and perhaps the Borneo design. It might (oh, interesting!) also
influence how we think about rounding modes on integer division. Weirdly.
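(On the integer-division aside: one way to see that a "rounding mode" choice is already lurking in integer division is to compare truncating `/` with Euclidean division, which today's Rust exposes as `div_euclid`/`rem_euclid`. This is just an illustration of the distinction, not something from this thread.)

    fn main() {
        let a: i32 = -7;
        let b: i32 = 2;

        // `/` truncates toward zero: -7 / 2 == -3, with remainder -1.
        assert_eq!(a / b, -3);
        assert_eq!(a % b, -1);

        // Euclidean division keeps the remainder non-negative instead:
        // quotient -4, remainder 1.
        assert_eq!(a.div_euclid(b), -4);
        assert_eq!(a.rem_euclid(b), 1);

        println!("trunc:  q = {}, r = {}", a / b, a % b);
        println!("euclid: q = {}, r = {}", a.div_euclid(b), a.rem_euclid(b));
    }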
> In the past I had thought about saying that the unsized types (`uint`,
> `int`) are overflow-checked but the sized types (`u32`, `i32`, etc.) act
> as they do in C. But Roc's email made me reconsider whether that is
> reasonable. Perhaps it's just too pat, and we would actually want a
> distinct family of "overflow-checked" types (perhaps a library, as
> Graydon suggests, though it seems like something that you might want to
> opt out of, rather than opt in to).
I think it has to be opt-in, yeah. Sadly. I mean, I wish we were living
in the world of hardware garbage collection and tagged memory words too,
and Hensel codes had won out over floating point, and all our languages
had well-defined total functional subsets with industrial-strength
provers attached to them that we were legally obliged to use.
But some such dreams remain off the table when competing for market
acceptance with C++ :)
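(For concreteness, a rough sketch of what an opt-in, library-style overflow-checked family could look like: a hypothetical `Checked` newtype over `u32` whose arithmetic fails loudly instead of wrapping. This is only an illustration of the "opt-in library" shape, not a design anyone has proposed here.)

    use std::ops::Add;

    // Hypothetical opt-in wrapper: same representation as u32, but every
    // arithmetic operation checks for overflow and fails instead of wrapping.
    #[derive(Copy, Clone, Debug, PartialEq, Eq)]
    struct Checked(u32);

    impl Add for Checked {
        type Output = Checked;
        fn add(self, rhs: Checked) -> Checked {
            match self.0.checked_add(rhs.0) {
                Some(v) => Checked(v),
                None => panic!("u32 overflow: {} + {}", self.0, rhs.0),
            }
        }
    }

    fn main() {
        let a = Checked(4_000_000_000);
        let b = Checked(100);
        println!("{:?}", a + b);      // fits in u32, prints Checked(4000000100)

        let c = Checked(u32::MAX);
        let _ = c + Checked(1);       // fails loudly rather than wrapping to 0
    }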
> Definitely a Milestone 2 consideration ;)
Maybe. At least a consideration. I think any variant would have to be
additive (backwards compatible), simply because in languages like this
it's _really_ not what people assume as the default.
We've already got into trouble for having `/` trap divide-by-zero via a
branch/fault rather than a signal; it hits inner loops and costs us
performance. Really.
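(To make that concrete: the check behind `/` is an ordinary compare-and-branch in front of every divide, which is what inner loops pay for; `checked_div` surfaces the same check as an `Option` instead of a failure. A tiny illustration, not a benchmark.)

    fn main() {
        // A divisor the compiler can't constant-fold away (it is 0 when the
        // program is run with no arguments), so the check really happens at
        // run time.
        let divisor = std::env::args().skip(1).count() as u32;

        let xs = [10u32, 20, 30];
        for x in xs.iter() {
            // `x / divisor` would fail on a zero divisor; checked_div makes
            // the same zero test explicit and yields None instead.
            match x.checked_div(divisor) {
                Some(q) => println!("{} / {} = {}", x, divisor, q),
                None => println!("{} / {}: division by zero", x, divisor),
            }
        }
    }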
(We don't currently have a way to route signals into task failure; we
will need this, and hopefully the new UV-based scheduler will help here.
I believe it will.)
-Graydon