On Tuesday, October 24, 2017 16:59:31 Arun Chandrasekaran via Digitalmars-d-learn wrote:
> On Tuesday, 24 October 2017 at 16:18:03 UTC, H. S. Teoh wrote:
> > On Tue, Oct 24, 2017 at 10:02:11AM +0000, Arun Chandrasekaran
> > via Digitalmars-d-learn wrote:
> >> On Monday, 23 October 2017 at 18:08:43 UTC, Ali Çehreli wrote:
> >> > On 10/23/2017 07:22 AM, Arun Chandrasekaran wrote:
> >> > > [...]
> >> >
> >> > The rule is that every expression has a type and 22/7 is int.
> >>
> >> Thanks Ali. Is this for backward compatibility with C?
> >> Because, if there is a division, a natural/mathematical (not
> >> programmatic) expectation is to see a double in the result.
> >
> > [...]
> >
> > I have never seen a programming language in which dividing two
> > integers yields a float or double. Either numbers default to a
> > floating point type, in which case you begin with floats in the
> > first place, or division is integer division, yielding an
> > integer result.
> >
> > T
>
> I'm not denying that all the programming languages do it this
> way (even if it is a cause of related bugs).
> I'm just questioning the reasoning behind why D does it this way
> and if it is for compatibility or if there is any other reasoning
> behind the decision.
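To make the behavior under discussion concrete, here is a small,
self-contained D sketch (the printed values are what a recent DMD
produces; exact floating point formatting may vary by compiler):

    import std.stdio;

    void main()
    {
        auto a = 22 / 7;    // both operands are int, so this is integer division
        writeln(typeof(a).stringof, ": ", a);  // prints: int: 3

        auto b = 22.0 / 7;  // one double operand makes the whole expression double
        writeln(typeof(b).stringof, ": ", b);  // prints: double: 3.14286

        auto c = cast(double) 22 / 7;  // or convert an operand explicitly
        writeln(c);                    // prints: 3.14286

        // Going the other way requires an explicit cast as well:
        // int i = b;  // error: cannot implicitly convert double to int
    }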
Part of it is compatibility. In general, valid C code should either be valid D code with the same semantics, or it shouldn't compile. We haven't done a perfect job with that, but we're close. And dividing two integers resulting in a floating point value doesn't fit with that at all.

But regardless of that, there's the question of whether it's even desirable, and in a language that's geared towards performance, it really isn't. Also, many of us don't want floating point values creeping into our code anywhere without us being explicit about it. Floating point math is not precise in the way that integer math is, and IMHO it's best avoided when it's not needed. Personally, I avoid using floating point types as much as possible and only use them when I definitely need them. If they acted like actual math, that would be one thing, but they don't, because they live in a computer, and D's built-in numerical types are closely modeled after what's in the hardware.

Obviously, floating point types can be quite useful, and we want them to work properly, but having values automatically convert to floating point types without being asked would be a serious problem. And in general, D is far stricter about implicit conversions than C/C++, not less so. So, it would be really out of character for it to do floating point division with two integers.

- Jonathan M Davis