On 04/16/2011 07:44 AM, Rafael Ávila de Espíndola wrote:
> The problem is that they have some hidden costs. Assuming no overflow
> makes it easier for the compiler to compute loop trip counts, for example.
>
> In the particular case of Rust, there is an extra cost too, given what
> "fail" is meant to do. If an "a+b" that overflows should have the same
> effect as the fail statement, we would have to insert code to start a
> stack unwind. It also means that even very basic, math-only functions
> can throw.

It's not quite that bad. The system described in the CERT paper delayed the overflow checks until an externally visible effect occurred:

"AIR Integers do not require Ada-style precise traps, which require that an exception is raised every time there is an integer overflow. In the AIR integer model, it is acceptable to delay catching an incorrectly represented value until an observation point is reached just before it either affects the output or causes a critical undefined behavior [Plum 09]. This model improves the ability of compilers to optimize, without sacrificing safety and security."

So, off the top of my head (I may be totally wrong), the two examples you gave (one in another message) could have these solutions:

(1) Loop trip counts can be computed assuming no overflow, and the compiler could insert an overflow check prior to the start of the loop. Failing that check sets a trap flag, and the first externally visible effect within the loop checks the flag and throws if it's set.
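A sketch of what (1) might look like, with the compiler-inserted pieces written out by hand (again modern syntax; the start/count/step shape and the widening-to-i64 pre-check are my invention):

    // Pre-loop check, conceptually inserted by the compiler: compute
    // the last value of the induction variable in a wider type and set
    // a trap flag instead of trapping immediately.
    fn run_loop(start: i16, count: u16, step: i16) {
        let last = start as i64 + (count as i64 - 1) * step as i64;
        let trapped = count > 0
            && (last > i16::MAX as i64 || last < i16::MIN as i64);

        let mut i = start;
        for _ in 0..count {
            // First externally visible effect in the loop: check the
            // trap flag before any output escapes.
            if trapped {
                panic!("integer overflow in loop induction variable");
            }
            println!("i = {}", i);
            i = i.wrapping_add(step); // free to wrap; already checked
        }
    }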

(2) Autovectorization could occur as usual. Overflow for any of the values in the vector is checked at the site of the first externally visible effect in the loop.
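And a sketch of (2), with scalar Rust standing in for the vector lanes (a real compiler would OR the per-lane overflow bits together in a vector register; overflowing_add is again a modern stand-in):

    // The loop body is just wrapping adds plus an OR-reduction of the
    // per-element overflow bits, so it vectorizes cleanly.
    fn add_arrays(a: &[i32], b: &[i32]) -> Vec<i32> {
        let mut trapped = false;
        let out: Vec<i32> = a.iter().zip(b).map(|(&x, &y)| {
            let (v, overflowed) = x.overflowing_add(y);
            trapped |= overflowed;
            v
        }).collect();
        // One check at the observation point (returning the results),
        // not one branch per element.
        if trapped {
            panic!("integer overflow in vectorized loop");
        }
        out
    }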

These scenarios result in a loss of precision (overflow errors can be delayed), but since task failure is non-recoverable, precision doesn't seem that important to me anyway.

Patrick
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
