On Tue, 15 Dec 2009 03:02:01 -0500, Don <[email protected]> wrote:

Phil Deets wrote:
On Mon, 14 Dec 2009 04:57:26 -0500, Don <[email protected]> wrote:

In the very rare cases where the result of an integer division was actually intended to be stored in a float, an explicit cast would be required. So you'd write:
double y = cast(int)(1/x);
 To me,
 double y = cast(double)(1/x);
 makes more sense. Why cast to int?

That'd compile, too. But it's pretty confusing to the reader, because that code can only set y to -1.0, +1.0, or 0.0, or else trigger a divide-by-zero error. So I'd recommend a cast to int.

I agree with Phil; in no situation that I can think of does:

T i, j;

T k = i/j;
U k = cast(T)(i/j);

make sense.  I'd expect to see cast(U) there.

You can think of it this way: i/j returns an undisclosed type that implicitly converts to T, but not to U, even if T implicitly converts to U.

Wow, this is bizarre.

I like the idea, but the recommendation to cast to int makes no sense to me. Instead, I'd actually recommend this:

auto y = cast(double)(1/x);

On the idea as a whole, I think it's very sound. Note that the only case where it gets ugly (i.e. requires casts) is when both operands of the division are symbols, since it's trivial to turn an integer literal into a floating-point one.

-Steve
